\subsection{\texttt{gravity\_cube.py}}\label{code:gravity_cube}
\begin{verbatim}
#gravity_cube.py: A bouncing cube simulation using ESyS-Particle
# Author: D. Weatherley
# Date: 15 May 2007
# Organisation: ESSCC, University of Queensland
# (C) All rights reserved, 2007.
#
#
#import division from the __future__ module for compatibility between Python 2 and Python 3
from __future__ import division
#import the appropriate ESyS-Particle modules:
from esys.lsm import *
from esys.lsm.util import Vec3, BoundingBox
from esys.lsm.geometry import CubicBlock,ConnectionFinder
from POVsnaps import POVsnaps
#instantiate a simulation object
#and initialise the neighbour search algorithm:
sim = LsmMpi(numWorkerProcesses=1, mpiDimList=[1,1,1])
sim.initNeighbourSearch(
particleType="NRotSphere",
gridSpacing=2.5,
verletDist=0.5
)
#set the number of timesteps and timestep increment:
sim.setNumTimeSteps(10000)
sim.setTimeStepSize(0.001)
#specify the spatial domain for the simulation:
domain = BoundingBox(Vec3(-20,-20,-20), Vec3(20,20,20))
sim.setSpatialDomain(domain)
#add a cube of particles to the domain:
cube = CubicBlock(dimCount=[6,6,6], radius=0.5)
cube.rotate(axis=Vec3(0,0,3.141592654/6.0),axisPt=Vec3(0,0,0))
sim.createParticles(cube)
#create bonds between particles separated by less than the specified
#maxDist:
sim.createConnections(
ConnectionFinder(
maxDist = 0.005,
bondTag = 1,
pList = cube
)
)
#specify bonded elastic interactions between bonded particles:
bondGrp = sim.createInteractionGroup(
NRotBondPrms(
name = "sphereBonds",
normalK = 10000.0,
breakDistance = 50.0,
tag = 1,
scaling = True
)
)
#initialise gravity in the domain:
sim.createInteractionGroup(
GravityPrms(name="earth-gravity", acceleration=Vec3(0,-9.81,0))
)
#add a horizontal wall to act as a floor to bounce particles off:
sim.createWall(
name="floor",
posn=Vec3(0,-10,0),
normal=Vec3(0,1,0)
)
#specify the type of interactions between wall and particles:
sim.createInteractionGroup(
NRotElasticWallPrms(
name = "elasticWall",
wallName = "floor",
normalK = 10000.0
)
)
#add local viscosity to simulate air resistance:
sim.createInteractionGroup(
LinDampingPrms(
name="linDamping",
viscosity=0.1,
maxIterations=100
)
)
#add a POVsnaps Runnable:
povcam = POVsnaps(sim=sim, interval=100)
povcam.configure(lookAt=Vec3(0,0,0), camPosn=Vec3(14,0,14))
sim.addPostTimeStepRunnable(povcam)
#execute the simulation
sim.run()
\end{verbatim}
| {
"alphanum_fraction": 0.7267780801,
"avg_line_length": 25.4752475248,
"ext": "tex",
"hexsha": "3d6e2bcc95111eebe4f468abd27eda4bad28d7e3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "danielfrascarelli/esys-particle",
"max_forks_repo_path": "Doc/Tutorial/examples/gravity_cube.py.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "danielfrascarelli/esys-particle",
"max_issues_repo_path": "Doc/Tutorial/examples/gravity_cube.py.tex",
"max_line_length": 75,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e56638000fd9c4af77e21c75aa35a4f8922fd9f0",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "danielfrascarelli/esys-particle",
"max_stars_repo_path": "Doc/Tutorial/examples/gravity_cube.py.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 741,
"size": 2573
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{parskip}
\title{Writing 2}
\author{Matt Strapp}
\date{2021-02-26}
\begin{document}
\maketitle
\section*{Theta*}
This paper is about a modification of the A$^{\ast}$ algorithm dubbed Theta$^{\ast}$ \cite{paper}.
The paper proposes a new algorithm by building on a previous one and adapting it to other conditions.
Theta$^{\ast}$ modifies A$^{\ast}$ by allowing any vertex with line of sight to become the parent of another vertex, whereas in A$^{\ast}$ a node's parent can only be one of its immediate neighbours on the grid.
The differences are best demonstrated when there are obstacles in a given grid system, which the paper illustrates with obstacles in a video game. The paper also mentions that the algorithm can be used in robotics for the same reason: pathfinding.
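To make the difference concrete, the following is a minimal Python sketch of Theta$^{\ast}$'s vertex-update step (my own illustration, not the paper's pseudocode), assuming a \texttt{line\_of\_sight} predicate supplied by the grid and omitting the open-list bookkeeping:
\begin{verbatim}
import math

def dist(a, b):
    # straight-line (Euclidean) distance between two grid points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_vertex(s, s2, g, parent, line_of_sight):
    # Theta* relaxation of the edge (s, s2); g and parent are dicts
    if line_of_sight(parent[s], s2):
        # Path 2: connect s2 straight to s's parent, skipping s
        if g[parent[s]] + dist(parent[s], s2) < g.get(s2, math.inf):
            g[s2] = g[parent[s]] + dist(parent[s], s2)
            parent[s2] = parent[s]
    else:
        # Path 1: the ordinary A* relaxation through s
        if g[s] + dist(s, s2) < g.get(s2, math.inf):
            g[s2] = g[s] + dist(s, s2)
            parent[s2] = s
\end{verbatim}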
In the paper, the authors validate the algorithm with experiments involving small grids, large grids, and maps from the CRPG Baldur's Gate II.
The starting and stopping points were random for all of the experiments used.
The basic Theta$^{\ast}$ ran slightly faster than its angle-propagating counterpart in the experiments given.
While the full details of the experiments were not directly given, each algorithm was run either 500 times on the random grids or 118 times on the Baldur's Gate II maps.
A table of averages was given for both the path lengths and the runtimes of everything except A$^{\ast}$ on visibility graphs, which was omitted because of its prohibitively large runtime.
The experiments showed that the basic Theta$^{\ast}$ and a modified version of Theta$^{\ast}$ that adds angle propagation to the path selection ran faster and had better results than A$^{\ast}$ and a similar algorithm that also uses edge propagation, called Field D$^{\ast}$.
Neither the raw data nor the actual C\# code used for testing was given in the paper or in any of its references.
Pseudocode was given which allows others to write their own code and test their data against the data given in the paper.
The conclusion the paper draws is that basic Theta$^{\ast}$ and Angle-Propagating Theta$^{\ast}$ strike a compromise between speed and accuracy, yielding an algorithm aimed mainly at dealing with obstacles.
This is supported by all of the experiments that were run.
The paper was well-written, assuming the reader has at least a moderate understanding of artificial intelligence algorithms and the various mathematics involved with them.
Anyone reading this paper should also understand A$^{\ast}$, because Theta$^{\ast}$ is best described as both a derivative and a variant of A$^{\ast}$.
Anyone who has not read about A$^{\ast}$ will likely be unable to discern what many of the variables stand for, as their meanings are only given in the context of Theta$^{\ast}$ and not of A$^{\ast}$ or other algorithms as a whole.
The pseudocode given was documented as a figure and was also explained step-by-step by describing what it does and how Theta$^{\ast}$ works.
The most interesting part of the paper was probably its possible uses in the fields of robotics and game design where navigating around obstacles is important.
Obstacles are ever-present in both real life and CRPGs, so finding a way to get around them quickly is essential.
As someone who plays CRPGs, I find it interesting to see how the AI works to find the best path around obstacles that exist in the gamespace.
\medskip
\bibliographystyle{unsrt}
\bibliography{writing2}
\end{document}
| {
"alphanum_fraction": 0.7646570796,
"avg_line_length": 86.0952380952,
"ext": "tex",
"hexsha": "1672c403564fec1f5542809ee09a751babf532e0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7a73162607544204032aa66cce755daf21edebda",
"max_forks_repo_licenses": [
"0BSD"
],
"max_forks_repo_name": "RosstheRoss/TestingFun",
"max_forks_repo_path": "csci4511w/writing2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7a73162607544204032aa66cce755daf21edebda",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"0BSD"
],
"max_issues_repo_name": "RosstheRoss/TestingFun",
"max_issues_repo_path": "csci4511w/writing2.tex",
"max_line_length": 282,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7a73162607544204032aa66cce755daf21edebda",
"max_stars_repo_licenses": [
"0BSD"
],
"max_stars_repo_name": "RosstheRoss/TestingFun",
"max_stars_repo_path": "csci4511w/writing2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 821,
"size": 3616
} |
\documentclass[11pt, oneside]{article}
\usepackage{../../shared/preamble}
\addbibresource{../../shared/references.bib}
\usepackage{../real-numbers/real-numbers}
%\usepackage{manifolds}
\title{Manifolds}
\author{Arthur Ryman, {\tt [email protected]}}
\date{\today}
% Document
\begin{document}
\maketitle
\begin{abstract}
This article contains Z Notation type declarations for manifolds and some related objects.
It has been type checked by \fuzz.
\end{abstract}
\section{Introduction}
Manifolds can be defined in several ways.
The way I prefer to think about them is that, first of all, they are based on topological spaces.
A manifold is therefore a topological space with some additional structure.
This additional structure allows one to regard a manifold as, locally, being like an open subset of $\R^n$
for some natural number $n$ referred to as the dimension of the manifold.
In the following, let $M$ be a topological space of dimension $n$.
\section{Charts}
A chart $\phi$ on $M$ is a continuous injection of some open subset $U \subseteq M$ into $\R^n$.
A chart gives every point $p \in U$ in its domain of definition a tuple of $n$ real number coordinates.
\begin{equation}
\phi: U \inj \R^n
\end{equation}
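As a concrete illustration (an example of mine, not part of the Z declarations), polar coordinates give a chart on the slit plane: taking $M = \R^2$ and $U = \R^2 \setminus \{(x, 0) \mid x \geq 0\}$, the map
\begin{equation}
\phi(p) = (r, \theta), \qquad r = \|p\|, \quad \theta \in (0, 2\pi)
\end{equation}
is a continuous injection of $U$ into $\R^2$.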
\subsection{Transition Functions}
Let $U, V, W$ be open subsets of $M$ with $W = U \cap V$.
Let $\phi: U \inj \R^n$ and $\psi: V \inj \R^n$ be charts.
Every point $p \in W$ is therefore given two, typically distinct, tuples of coordinates.
The mapping from one coordinate tuple to the other is called the transition function defined by the pair of charts.
Let $t_{\phi,\psi}$ denote the transition function that maps the $\phi$ coordinates to the $\psi$ coordinates.
\begin{equation}
\forall x \in \phi(W) @ t_{\phi,\psi}(x) = \psi(\phi^{-1}(x))
\end{equation}
\subsection{Compatible Charts}
Let $\mathcal{F}$ be some family of partial injections from $\R^n$ to $\R^n$ defined on open subsets, e.g.\ the continuous, differentiable, or smooth ones.
\begin{equation}
\mathcal{F} \subseteq \R^n \pinj \R^n
\end{equation}
A pair of charts are said to be compatible with respect to $\mathcal{F}$ when their transition functions belong to $\mathcal{F}$.
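As a small worked example (mine, for illustration), take $M = \R$ with the two global charts $\phi(p) = p$ and $\psi(p) = p^3$. Both are continuous injections of $\R$ into $\R$, and on $W = \R$ the transition functions are
\begin{equation}
t_{\phi,\psi}(x) = \psi(\phi^{-1}(x)) = x^3, \qquad t_{\psi,\phi}(y) = \phi(\psi^{-1}(y)) = y^{1/3}.
\end{equation}
The first is smooth but the second is not differentiable at $0$, so these charts are compatible when $\mathcal{F}$ consists of the continuous partial injections but not when it consists of the differentiable ones.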
\section{Atlases}
A set of pairwise compatible charts that cover $M$ is called an atlas for $M$.
An atlas gives $M$ a manifold structure.
If the charts are only required to be continuous then $M$ is called a topological manifold.
If the charts are required to be differentiable then the atlas is called a differential or differentiable structure and $M$ is called
a differentiable manifold.
Infinitely differentiable charts are called smooth charts.
We are only concerned with smooth charts and manifolds.
In general, we normally consider an atlas to be a maximal set of charts.
A given set of mutually compatible charts belongs to a unique maximal atlas.
The given set is said to generate the maximal atlas.
\section{Smooth Mappings}
Mappings from one smooth manifold to another are called smooth when they are smooth once expressed in coordinate charts.
A smooth mapping that has a smooth inverse is called a diffeomorphism.
\section{Tangent Vectors}
A tangent vector $X$ at the point $p \in M$ is a mapping from the set of smooth functions at $p$ to $\R$ that satisfies the following
for all $c \in \R$ and $f,g \in C^{\infty}(M,p)$
\begin{align}
X(cf) &= cX(f) \\
X(f + g) &= X(f) + X(g) \\
X(fg) &= g(p)X(f) + f(p)X(g)
\end{align}
A smooth curve $\gamma: \R \fun M$ defines a tangent vector $X$ at $p=\gamma(0)$ by
\begin{equation}
X(f) = \left.\frac{df(\gamma(t))}{dt}\right|_{t=0}
\end{equation}
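As a quick check, this $X$ satisfies the Leibniz rule above: by the product rule,
\begin{equation}
X(fg) = \left.\frac{d\,f(\gamma(t))\,g(\gamma(t))}{dt}\right|_{t=0}
= g(p)\left.\frac{df(\gamma(t))}{dt}\right|_{t=0} + f(p)\left.\frac{dg(\gamma(t))}{dt}\right|_{t=0}
= g(p)X(f) + f(p)X(g)
\end{equation}
where $p = \gamma(0)$; linearity follows similarly.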
\section{Tangent Bundles}
The set of all tangent vectors at $p$ is denoted $M_p$ or $T_p(M)$.
It is an $n$-dimensional vector space and is called the tangent space at $p$.
The union of all the tangent spaces is called the tangent bundle and is denoted $T(M)$
\begin{equation}
T(M) = \{~ (p,X) | p \in M, X \in M_p ~\}
\end{equation}
The tangent bundle $T(M)$ is a smooth vector bundle over $M$ under the natural projection
$\pi: T(M) \fun M, \pi(p,X) = p$.
\printbibliography
\end{document} | {
"alphanum_fraction": 0.683535608,
"avg_line_length": 42.3980582524,
"ext": "tex",
"hexsha": "114aa3a97ef758e03d416b89ea56f83a7f50b817",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a516a20936e1ed7b9f07c546eee7aacf1831de65",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "agryman/mathz",
"max_forks_repo_path": "articles/manifolds/manifolds.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a516a20936e1ed7b9f07c546eee7aacf1831de65",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "agryman/mathz",
"max_issues_repo_path": "articles/manifolds/manifolds.tex",
"max_line_length": 151,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "a516a20936e1ed7b9f07c546eee7aacf1831de65",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "agryman/mathz",
"max_stars_repo_path": "articles/manifolds/manifolds.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-30T08:06:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-12-30T08:06:17.000Z",
"num_tokens": 1205,
"size": 4367
} |
\subsection{Classes, Interfaces, and Enumerations}
\hspace{\parindent} The structure of the code is similar to that explained in both the RASD and DD documents. The app communicates with the database through controllers and services, and the database returns the requested data. \newline
Since the Firebase database has no backend, we have changed our design rationale from the DD from thin-client to fat-client. This way practically all of the work is done by the application, with the database only being used for storing and loading data. Since the operations are very simple, this poses no issues for the functionality of the app. After adding the "Book a visit" feature, the app would become significantly more demanding and a switch to another database with a backend implementation would be welcome. However, this version of the app only takes a few megabytes of space and is very fast even on older phones, the only slowdown being the already mentioned Firebase data delay. \newline
One of the main challenges was to sync the synchronous application with the asynchronous database model. This meant making some design decisions that both look good and feel right when using the app. The app is expected to be fast and consistent, which is something our database cannot guarantee, so getting those two in line was a bit of an issue. \newline
Most of the work here is done by using listeners: callbacks that run on another thread and wait for the data from the database. This allows the app to keep working properly on its main thread without blocking on the data. When the data arrives, the app updates what is shown on the screen. When the wait is too long, that is more than a second, loading screens are introduced to keep the dynamic feel of the app (see the listener interfaces and their use in \texttt{RequestManager} in the code examples below).\newline
The source code is divided as follows:
\begin{itemize}
\item \textbf{Entities}
\begin{itemize}
\item ApplicationState
\item Store
\item StoreManager
\item Ticket
\item TicketState (enumeration)
\item Timeslot
\item User
\item UserType (enumeration)
\end{itemize}
\item \textbf{Services}
\begin{itemize}
\item DatabaseManagerService (interface)
\item DirectorService (interface)
\item EnterService (interface)
\item ExitService (interface)
\item LoginManagerService (interface)
\item QueueService (interface)
\item StoreSelectionManagerService (interface)
\item TicketService (interface)
\item \textbf{Implementation}
\begin{itemize}
\item DatabaseManager
\item Director
\item LoginManager
\item RequestManager
\item StoreManager
\item StoreSelectionManager
\end{itemize}
\end{itemize}
\item \textbf{Controllers}
\begin{itemize}
\item CustomerController
\item EncryptionService (not used in this version, encryption has been directly implemented in other classes)
\item ForgotPasswordController
\item HomeController
\item LoginController
\item PreLoginController
\item QrController
\item RegisterController (not used in this version)
\item ScannerController
\item StoreController
\item StoreManagerController
\item StrongAES
\item TicketController
\item UserProfileController (not used in this version)
\end{itemize}
\item \textbf{Listeners}
\begin{itemize}
\item OnCheckTicketListener
\item OnCredentialCheckListener
\item OnGetDataListener
\item OnGetTicketListener
\item OnGetTimeslotListener
\item OnTaskcompleteListener
\end{itemize}
\end{itemize}
All of the files are classes except those described otherwise. Every controller has an additional \textit{activity\_controllername.xml} file that defines the layout of the corresponding app page on the phone.
\subsection{Code examples}
\hspace{\parindent} Here we provide some code examples for the components. At least one component from each section is included to give an idea of how the rest of the components in that section look.\newline
\textbf{Entities - Store}
\begin{lstlisting}
// Store.java
package com.example.clup.Entities;
public class Store {
public String name, address, city; // variables that define each Store
public int maxNoCustomers, id; // variables that define each Store
// Store constructors
public Store(){}
public Store(int id, String name, String city){
this.id = id;
this.name = name;
this.city = city;
}
public Store(int id, String name, String address, String city){
this.id = id;
this.name = name;
this.address = address;
this.city = city;
}
public Store(int id, String name, String address, String city, int maxNoCustomers){
this.id = id;
this.name = name;
this.address = address;
this.city = city;
this.maxNoCustomers = maxNoCustomers;
}
// Store getters
public String getAddress() {
return address;
}
public String getCity() {
return city;
}
public String getName() {
return name;
}
public int getId() {
return id;
}
public int getMaxNoCustomers() {
return maxNoCustomers;
}
}
\end{lstlisting}
\textbf{Services - TicketService}
\begin{lstlisting}
// TicketService.java
package com.example.clup.Services;
import com.example.clup.Entities.Store;
import com.example.clup.Entities.Ticket;
import com.example.clup.OnCheckTicketListener;
import com.example.clup.OnGetDataListener;
import com.example.clup.OnGetTicketListener;
import com.example.clup.OnTaskCompleteListener;
public interface TicketService {
public void getTicket(Store store, OnGetTicketListener onGetTicketListener);
public void checkTicket(Ticket ticket, OnCheckTicketListener onCheckTicketListener);
public void checkQueue(Store store, OnCheckTicketListener onCheckTicketListener);
public void cancelTicket(Store store, Ticket ticket, OnTaskCompleteListener onTaskCompleteListener);
}
\end{lstlisting}
\textbf{Services (Implementation) - RequestManager}
\begin{lstlisting}
// RequestManager.java
package com.example.clup.Services.Implementation;
import com.example.clup.Entities.Store;
import com.example.clup.Entities.Ticket;
import com.example.clup.Entities.TicketState;
import com.example.clup.Entities.Timeslot;
import com.example.clup.OnCheckTicketListener;
import com.example.clup.OnGetDataListener;
import com.example.clup.OnGetTicketListener;
import com.example.clup.OnGetTimeslotListener;
import com.example.clup.OnTaskCompleteListener;
import com.example.clup.Services.DatabaseManagerService;
import com.example.clup.Services.QueueService;
import com.example.clup.Services.TicketService;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import java.sql.Time;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;
public class RequestManager implements QueueService, TicketService {
private StoreSelectionManager storeSelectionManager;
private DatabaseManager databaseManager = DatabaseManager.getInstance();
// Average waiting time and tickets lists have not been implemented in this version, as well as timeslots
// This can be used as a template for implementing the "Book a visit" feature
List<Ticket> tickets;
//TODO
private int averageMinutesInStore = 15, maxId = -1;
// retrieves Ticket and sets its state based on other Store attributes
@Override
public void getTicket(Store store, OnGetTicketListener onGetTicketListener) {
//System.out.println("Get ticket rm");
maxId = -1;
databaseManager.getStore(store, new OnGetDataListener() {
@Override
public void onSuccess(DataSnapshot dataSnapshot) {
if (Integer.parseInt(dataSnapshot.child("open").getValue().toString()) == 0) {
onGetTicketListener.onFailure();
return;
}
maxId = Integer.parseInt(dataSnapshot.child("maxId").getValue().toString());
Ticket ticket = new Ticket(maxId + 1, store);
int occupancy = Integer.parseInt(dataSnapshot.child("occupancy").getValue().toString());
int maxNoCustomers = Integer.parseInt(dataSnapshot.child("maxNoCustomers")
.getValue().toString());
int activeTickets = 0;
for (DataSnapshot i : dataSnapshot.child("Tickets").getChildren()) {
if (i.child("ticketState").getValue().toString().equals("ACTIVE"))
activeTickets++;
}
if (occupancy + activeTickets < maxNoCustomers) {
ticket.setTicketState(TicketState.ACTIVE);
ticket.setTimeslot(new Timeslot(new Timestamp(System.currentTimeMillis() + 1000 * 60 * 5))); // wait for customer 5 mins
} else {
ticket.setTicketState(TicketState.WAITING);
ticket.setTimeslot(new Timeslot(new Timestamp(0)));
}
databaseManager.persistTicket(ticket);
onGetTicketListener.onSuccess(ticket);
}
@Override
public void onFailure(DatabaseError databaseError){
}
});
}
// checks the Ticket state, whether it's ACTIVE or WAITING
@Override
public void checkTicket(Ticket ticket, OnCheckTicketListener onCheckTicketListener) {
maxId = -1;
databaseManager.getStore(ticket.getStore(), new OnGetDataListener() {
@Override
public void onSuccess(DataSnapshot dataSnapshot) {
if(dataSnapshot.child("Tickets").
hasChild(String.valueOf(ticket.getId())) == true) {
if(dataSnapshot.child("Tickets")
.child(String.valueOf(ticket.getId())).
child("ticketState").getValue().toString().
equals("ACTIVE")) {
//how much does he have left
onCheckTicketListener.onActive(Timestamp.
valueOf(dataSnapshot.child("Tickets").child(String.
valueOf(ticket.getId())).child("expires").getValue().toString()));
} else {
//calculate people in front
int peopleAhead = 1;
for (DataSnapshot i : dataSnapshot.child("Tickets").getChildren()) {
if (i.child("ticketState").getValue().toString().equals("WAITING") && Integer.parseInt(i.getKey()) < ticket.getId())
peopleAhead++;
}
onCheckTicketListener.onWaiting(peopleAhead);
}
}
else {
onCheckTicketListener.onBadStore("Ticket has already been used");
}
return;
}
@Override
public void onFailure(DatabaseError databaseError){
onCheckTicketListener.onBadStore("Bad store information - reload app");
}
});
}
// checks the current store queue to see how many customers are in line
@Override
public void checkQueue(Store store, OnCheckTicketListener onCheckTicketListener) {
maxId = -1;
databaseManager.getStore(store, new OnGetDataListener() {
@Override
public void onSuccess(DataSnapshot dataSnapshot) {
//calculate people in front
if (Integer.parseInt(dataSnapshot.child("open").getValue().toString()) == 0) {
onCheckTicketListener.onBadStore("The store is not open");
return;
}
int peopleAhead = 0;
for (DataSnapshot i : dataSnapshot.child("Tickets").getChildren()) {
if (i.child("ticketState").getValue().toString().equals("WAITING"))
peopleAhead++;
}
onCheckTicketListener.onWaiting(peopleAhead);
//System.out.println("AAAAA" + peopleAhead);
return;
}
@Override
public void onFailure(DatabaseError databaseError){
}
});
}
// cancels and deletes a Ticket
@Override
public void cancelTicket(Store store, Ticket ticket, OnTaskCompleteListener onTaskCompleteListener) {
databaseManager.getTicket(store, String.valueOf(ticket.getId()), new OnGetDataListener() {
@Override
public void onSuccess(DataSnapshot dataSnapshot) {
if (dataSnapshot.getValue() == null) {
onTaskCompleteListener.onFailure(0);
return;
}
dataSnapshot.getRef().setValue(null);
onTaskCompleteListener.onSuccess();
return;
}
@Override
public void onFailure(DatabaseError databaseError){
}
});
}
}
\end{lstlisting}
\textbf{Listeners - OnCheckTicketListener}
\begin{lstlisting}
// OnCheckTicketListener.java
package com.example.clup;
import java.sql.Timestamp;
public interface OnCheckTicketListener {
public void onWaiting(int peopleAhead);
public void onActive(Timestamp expireTime);
public void onBadStore(String error);
}
\end{lstlisting}
\textbf{Controllers - HomeController}
\begin{lstlisting}
// HomeController.java
package com.example.clup;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import android.Manifest;
import android.content.Context;
import android.content.Intent;
import android.content.SharedPreferences;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import com.example.clup.Entities.ApplicationState;
import com.google.firebase.FirebaseApp;
import com.google.firebase.auth.FirebaseAuth;
public class HomeController extends AppCompatActivity implements View.OnClickListener{
private Button storeButton, loginButton;
public static final String MyPREFERENCES = "MyPrefs" ;
private static final int MY_CAMERA_REQUEST_CODE = 100;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_home_controller);
storeButton = (Button) findViewById(R.id.storeButton);
loginButton = (Button) findViewById(R.id.loginButton);
// Update user
if (FirebaseAuth.getInstance().getCurrentUser() == null) System.out.println("NOPE");
// checks for camera permission and asks for it if it's not permitted - scanner will crash the app if
// the camera is not enabled
if (checkSelfPermission(Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) requestPermissions(new String[]{Manifest.permission.CAMERA}, MY_CAMERA_REQUEST_CODE);
storeButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startActivity((new Intent(v.getContext(), StoreController.class)));
}
});
loginButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v2) {
if (FirebaseAuth.getInstance().getCurrentUser() == null)
startActivity((new Intent(v2.getContext(), LoginController.class)));
else
startActivity((new Intent(v2.getContext(), PreLoginController.class)));
}
});
}
@Override
public void onClick(View v) {
}
// Sets the action of a back button pressed from Android
@Override
public void onBackPressed () {
((ApplicationState) getApplication()).clearAppState();
// Clears stack of activities
finishAffinity();
}
}
\end{lstlisting} | {
"alphanum_fraction": 0.6809289411,
"avg_line_length": 39.9143576826,
"ext": "tex",
"hexsha": "5b69dfbacdf32d86f09df0e38dca1b2e01c0b474",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-11-09T19:40:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-11-08T18:33:30.000Z",
"max_forks_repo_head_hexsha": "9385d689f30065e578cadb02ffd9ba24f258b51f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "robertodavinci/Software_Engineering_2_Project_Medvedec_Sikora",
"max_forks_repo_path": "Latex_ITD/Files/codestructure.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9385d689f30065e578cadb02ffd9ba24f258b51f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "robertodavinci/Software_Engineering_2_Project_Medvedec_Sikora",
"max_issues_repo_path": "Latex_ITD/Files/codestructure.tex",
"max_line_length": 714,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9385d689f30065e578cadb02ffd9ba24f258b51f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "robertodavinci/Software_Engineering_2_Project_Medvedec_Sikora",
"max_stars_repo_path": "Latex_ITD/Files/codestructure.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3128,
"size": 15846
} |
\documentclass[]{article}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{hyperref}
\usepackage{color}
\usepackage{soul}
\usepackage{amsmath}
\usepackage{amssymb}
\DeclareRobustCommand{\hlcyan}[1]{{\sethlcolor{cyan}\hl{#1}}}
\DeclareRobustCommand{\hlgreen}[1]{{\sethlcolor{green}\hl{#1}}}
\DeclareRobustCommand{\hlred}[1]{{\sethlcolor{red}\hl{#1}}}
\DeclareRobustCommand{\hlyellow}[1]{{\sethlcolor{yellow}\hl{#1}}}
\DeclareRobustCommand{\hlorange}[1]{{\sethlcolor{orange}\hl{#1}}}
%opening
\title{Session 8}
\author{Fakhir}
\begin{document}
\maketitle
\section*{Solutions}
Got it almost right \checkmark. But I should've done a more thorough analysis regarding the maximum heights of the waves, etc.
\end{document}
| {
"alphanum_fraction": 0.7452316076,
"avg_line_length": 22.9375,
"ext": "tex",
"hexsha": "3294d6b2ed1ae5298eead33c7bbab1fcc3c1f364",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "62189eb1515753acd9499bf2296af0311a76d23e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fakhirsh/JediTraining",
"max_forks_repo_path": "mit-ocw/[in progress] Single Variable Calculus, 2010/my solutions/1. Differentiation/Session8.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "62189eb1515753acd9499bf2296af0311a76d23e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fakhirsh/JediTraining",
"max_issues_repo_path": "mit-ocw/[in progress] Single Variable Calculus, 2010/my solutions/1. Differentiation/Session8.tex",
"max_line_length": 126,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "62189eb1515753acd9499bf2296af0311a76d23e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fakhirsh/JediTraining",
"max_stars_repo_path": "mit-ocw/[in progress] Single Variable Calculus, 2010/my solutions/1. Differentiation/Session8.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 250,
"size": 734
} |
\documentclass[]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdftitle={Statlearn - homework II},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\providecommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{Statlearn - homework II}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{}
\preauthor{}\postauthor{}
\date{}
\predate{}\postdate{}
\begin{document}
\maketitle
\section{Part I - Song genre
classification}\label{part-i---song-genre-classification}
\subsection{Installing and importing
libraries}\label{installing-and-importing-libraries}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# this part is to be executed only once to install libraries we need }
\CommentTok{# I kindly suggest you run this on Windows OS}
\CommentTok{# But if you feel like solving R dependency hell on Linux... give it a try.}
\CommentTok{# about macOS, I don't really know}
\CommentTok{# }
\CommentTok{# }
\CommentTok{# install.packages('signal')}
\CommentTok{# install.packages('audio')}
\CommentTok{# install.packages('wrassp')}
\CommentTok{# install.packages('warbleR')}
\CommentTok{# install.packages('tuneR')}
\CommentTok{# install.packages('audiolyzR')}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# then we import all libraries needed here}
\KeywordTok{suppressMessages}\NormalTok{(}\KeywordTok{require}\NormalTok{(signal, }\DataTypeTok{quietly =}\NormalTok{ T))}
\KeywordTok{library}\NormalTok{(signal)}
\KeywordTok{suppressMessages}\NormalTok{(}\KeywordTok{require}\NormalTok{(audio, }\DataTypeTok{quietly =}\NormalTok{ T)) }
\KeywordTok{library}\NormalTok{(audio)}
\KeywordTok{suppressMessages}\NormalTok{(}\KeywordTok{require}\NormalTok{(wrassp, }\DataTypeTok{quietly =}\NormalTok{ T))}
\KeywordTok{library}\NormalTok{(wrassp)}
\KeywordTok{library}\NormalTok{(warbleR)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Loading required package: maps
\end{verbatim}
\begin{verbatim}
## Loading required package: tuneR
\end{verbatim}
\begin{verbatim}
##
## Attaching package: 'tuneR'
\end{verbatim}
\begin{verbatim}
## The following object is masked from 'package:audio':
##
## play
\end{verbatim}
\begin{verbatim}
## Loading required package: seewave
\end{verbatim}
\begin{verbatim}
##
## Attaching package: 'seewave'
\end{verbatim}
\begin{verbatim}
## The following object is masked from 'package:signal':
##
## unwrap
\end{verbatim}
\begin{verbatim}
## Loading required package: NatureSounds
\end{verbatim}
\begin{verbatim}
##
## NOTE: functions are being renamed (run 'print(new_function_names)' to see new names). Both old and new names are available in this version
## Please see citation('warbleR') for use in publication
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(tuneR)}
\KeywordTok{library}\NormalTok{(audiolyzR)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Loading required package: hexbin
\end{verbatim}
\begin{verbatim}
## Loading required package: RJSONIO
\end{verbatim}
\begin{verbatim}
## Loading required package: plotrix
\end{verbatim}
\begin{verbatim}
##
## Attaching package: 'plotrix'
\end{verbatim}
\begin{verbatim}
## The following object is masked from 'package:seewave':
##
## rescale
\end{verbatim}
\subsection{Reading and describing
Data}\label{reading-and-describing-data}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# My path to the data }
\NormalTok{auPath <-}\StringTok{ "data_example"}
\NormalTok{labelsFile <-}\StringTok{ }\KeywordTok{paste0}\NormalTok{(auPath,}\StringTok{'/labels.txt'}\NormalTok{)}
\NormalTok{labelsFile}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "data_example/labels.txt"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# List the .au files}
\NormalTok{auFiles <-}\StringTok{ }\KeywordTok{list.files}\NormalTok{(auPath, }\DataTypeTok{pattern=}\KeywordTok{glob2rx}\NormalTok{(}\StringTok{'*.au'}\NormalTok{), }\DataTypeTok{full.names=}\OtherTok{TRUE}\NormalTok{)}
\NormalTok{auFiles}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "data_example/f1.au" "data_example/f2.au"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Number of files }
\NormalTok{N <-}\StringTok{ }\KeywordTok{length}\NormalTok{(auFiles)}
\end{Highlighting}
\end{Shaded}
We have a total of $N$ songs in our dataset.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{## Let's try to get the files in order }
\NormalTok{ord =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\NormalTok{N)}
\NormalTok{ordFileList =}\StringTok{ }\KeywordTok{paste0}\NormalTok{(}\KeywordTok{rep}\NormalTok{(}\KeywordTok{paste0}\NormalTok{(auPath,}\StringTok{'/f'}\NormalTok{)),}\KeywordTok{paste0}\NormalTok{(ord,}\KeywordTok{rep}\NormalTok{(}\StringTok{'.au'}\NormalTok{,N)))}
\NormalTok{ordFileList}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## [1] "data_example/f1.au" "data_example/f2.au"
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# let's also get labels for each file}
\NormalTok{labels <-}\StringTok{ }\KeywordTok{read.table}\NormalTok{(}\DataTypeTok{file=}\NormalTok{labelsFile, }\DataTypeTok{header=}\OtherTok{TRUE}\NormalTok{, }\DataTypeTok{sep=}\StringTok{" "}\NormalTok{,}\DataTypeTok{col.names =} \KeywordTok{c}\NormalTok{(}\StringTok{'id'}\NormalTok{,}\StringTok{'type'}\NormalTok{))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in read.table(file = labelsFile, header = TRUE, sep = " ",
## col.names = c("id", : incomplete final line found by readTableHeader on
## 'data_example/labels.txt'
\end{verbatim}
\begin{verbatim}
## Warning in read.table(file = labelsFile, header = TRUE, sep = " ",
## col.names = c("id", : header and 'col.names' are of different lengths
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{labels}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## id type
## 1 1 country
## 2 2 country
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{?read.csv}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## starting httpd help server ... done
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Load an audio file, e.g. the first one in the list above}
\NormalTok{x <-}\StringTok{ }\KeywordTok{read.AsspDataObj}\NormalTok{(ordFileList[}\DecValTok{1}\NormalTok{])}
\KeywordTok{str}\NormalTok{(}\KeywordTok{attributes}\NormalTok{(x))}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## List of 10
## $ names : chr "audio"
## $ trackFormats: chr "INT16"
## $ sampleRate : num 22050
## $ filePath : chr "data_example/f1.au"
## $ origFreq : num 0
## $ startTime : num 0
## $ startRecord : int 1
## $ endRecord : int 666820
## $ class : chr "AsspDataObj"
## $ fileInfo : int [1:2] 15 2
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#Then we set a fixed sample length for all files}
\CommentTok{# as the minimum length of all of them}
\NormalTok{fixedLength =}\StringTok{ }\DecValTok{22050} \OperatorTok{*}\StringTok{ }\DecValTok{30} \CommentTok{# default length}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{N) \{}
\NormalTok{ x <-}\StringTok{ }\KeywordTok{read.AsspDataObj}\NormalTok{(auFiles[i])}
\NormalTok{ min =}\StringTok{ }\KeywordTok{attributes}\NormalTok{(x)}\OperatorTok{$}\NormalTok{endRecord }\CommentTok{# the samples length of the current file}
\NormalTok{ fixedLength <-}\StringTok{ }\KeywordTok{ifelse}\NormalTok{(fixedLength}\OperatorTok{<=}\NormalTok{min, fixedLength, min) }\CommentTok{# we take the minimum}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
We can see from the output that the records were made at a sample rate of 22050 Hz for a duration of 30 seconds and therefore contain around $22050 \times 30 = 661500$ samples.
Now let's plot the first file's samples to get a general idea.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# (we only plot every ith element to accelerate plotting)}
\NormalTok{x =}\StringTok{ }\KeywordTok{read.AsspDataObj}\NormalTok{(ordFileList[}\DecValTok{1}\NormalTok{])}
\NormalTok{x}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Assp Data Object of file data_example/f1.au.
## Format: SND (binary)
## 666820 records at 22050 Hz
## Duration: 30.241270 s
## Number of tracks: 1
## audio (1 fields)
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{ith =}\StringTok{ }\DecValTok{22050} \OperatorTok{/}\DecValTok{5} \CommentTok{# ith element to plot; basically we plot one element every 0.2 s}
\NormalTok{x_axe =}\StringTok{ }\KeywordTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{,}\KeywordTok{numRecs.AsspDataObj}\NormalTok{(x) }\OperatorTok{-}\StringTok{ }\DecValTok{1}\NormalTok{, ith) }\OperatorTok{/}\StringTok{ }\KeywordTok{rate.AsspDataObj}\NormalTok{(x)}
\NormalTok{y_axe =}\StringTok{ }\NormalTok{x}\OperatorTok{$}\NormalTok{audio[}\KeywordTok{c}\NormalTok{(}\OtherTok{TRUE}\NormalTok{, }\KeywordTok{rep}\NormalTok{(}\OtherTok{FALSE}\NormalTok{,ith}\OperatorTok{-}\DecValTok{1}\NormalTok{))]}
\KeywordTok{plot}\NormalTok{(x_axe,}
\NormalTok{ y_axe,}
\DataTypeTok{type=}\StringTok{'l'}\NormalTok{,}
\DataTypeTok{xlab=}\StringTok{'time (s)'}\NormalTok{,}
\DataTypeTok{ylab=}\StringTok{'Audio samples'}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\includegraphics{song_classification_g_20_files/figure-latex/unnamed-chunk-6-1.pdf}
\subsection{Features Extractions}\label{features-extractions}
\subsubsection{Features to extract}\label{features-to-extract}
\paragraph{Zero crossing rate}\label{zero-crossing-rate}
The zero-crossing rate is the rate of sign-changes along a signal, i.e.,
the rate at which the signal changes from positive to zero to negative
or from negative to zero to positive.{[}1{]} This feature has been used
heavily in both speech recognition and music information retrieval,
being a key feature to classify percussive sounds.{[}2{]}
ZCR is defined formally as
\begin{equation*}
zcr = \frac{1}{T-1}\sum_{t=1}^{T-1}\mathbb{1}_{\mathbb{R}_{<0}}(s_t s_{t-1})
\end{equation*}
where $s$ is a signal of length $T$ and $\mathbb{1}_{\mathbb{R}_{<0}}$ is an indicator function (equal to $1$ when its argument is negative and $0$ otherwise).
\paragraph{Spectral properties}\label{spectral-properties}
The spectral properties are a set of statistics computed on the spectrum of an audio signal, such as:
\begin{itemize}
\tightlist
\item spectral centroid (the most important one)
\item spectral mean or median
\item spectral quartiles
\end{itemize}
We loop over the audio files and compute these statistics using the functions defined in the next subsection.
The spectral centroid is a measure used in digital signal processing to characterise a spectrum. It indicates where the center of mass of the spectrum is located. Perceptually, it has a robust connection with the impression of brightness of a sound.
It is calculated as the weighted mean of the frequencies present in the signal, determined using a Fourier transform, with their magnitudes as the weights.
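In symbols (one common formulation, using the magnitude spectrum $|X_k|$ at frequencies $f_k$):
\begin{equation*}
\mathrm{centroid} = \frac{\sum_{k} f_k\,|X_k|}{\sum_{k} |X_k|}
\end{equation*}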
\paragraph{Spectral roll-off}\label{spectral-roll-off}
The roll-off frequency is defined as the frequency under which some
percentage (cutoff) of the total energy of the spectrum is contained.
The roll-off frequency can be used to distinguish between harmonic
(below roll-off) and noisy sounds (above roll-off).
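One common formalisation (not the only one): for a cutoff $c$, e.g.\ $c = 0.85$, the roll-off is the smallest frequency $f_R$ such that
\begin{equation*}
\sum_{k:\, f_k \le f_R} |X_k|^2 \;\ge\; c \sum_{k} |X_k|^2 .
\end{equation*}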
\paragraph{Mel Frequency Cepstral
Coefficients}\label{mel-frequency-cepstral-coefficients}
Mel Frequency Cepstral Coefficients (MFCCs) are computed here for an object of class Wave. In speech recognition, MFCCs are used to extract the characteristics of the vocal tract from speech.
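At a high level, one standard recipe (stated as the usual textbook definition, not tied to a specific R function) is: split the signal into short frames, take the magnitude spectrum of each frame, pool it into $M$ mel-spaced filterbank energies $E_m$, and apply a discrete cosine transform to their logarithms:
\begin{equation*}
c_n = \sum_{m=1}^{M} \log(E_m)\,\cos\!\left[\frac{\pi n}{M}\left(m - \frac{1}{2}\right)\right]
\end{equation*}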
\paragraph{Chroma frequencies}\label{chroma-frequencies}
Chroma features are an interesting and powerful representation for music
audio in which the entire spectrum is projected onto 12 bins
representing the 12 distinct semitones (or chroma) of the musical
octave. Since, in music, notes exactly one octave apart are perceived as
particularly similar, knowing the distribution of chroma even without
the absolute frequency (i.e.~the original octave) can give useful
musical information about the audio -- and may even reveal perceived
musical similarity that is not apparent in the original spectra.
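One common mapping (the MIDI convention, with A4 = 440 Hz as reference) sends a frequency $f$ in Hz to the pitch number $p = 69 + 12\log_2(f/440)$, and the chroma class is then $\mathrm{round}(p) \bmod 12$.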
\subsubsection{Implementation with R}\label{implementation-with-r}
In R we principally used the packages \texttt{seewave}, \texttt{soundgen}, and \texttt{tuneR}.
\paragraph{Convert audio data to wave}\label{convert-audio-data-to-wave}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Transform}
\CommentTok{# x : array to transform }
\CommentTok{# rate : the sample rate of x }
\CommentTok{# bit : }
\CommentTok{# reduceRate : wether reduce the sample rate or not }
\CommentTok{# newRate # if down Sample is TRUE, new sample rate to use }
\NormalTok{transformToWave <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x, rate, }\DataTypeTok{bit =} \DecValTok{16}\NormalTok{,}\DataTypeTok{reduceRate =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{newRate =} \DecValTok{11025}\NormalTok{ )\{}
\NormalTok{ xwv =}\StringTok{ }\KeywordTok{Wave}\NormalTok{( }\KeywordTok{as.numeric}\NormalTok{(x), }\DataTypeTok{samp.rate =}\NormalTok{ rate, }\DataTypeTok{bit =}\NormalTok{ bit)}
\ControlFlowTok{if}\NormalTok{(reduceRate)\{}
\NormalTok{ xwv =}\StringTok{ }\KeywordTok{downsample}\NormalTok{(xwv, }\DataTypeTok{samp.rate =}\NormalTok{ newRate)}
\NormalTok{ \}}
\CommentTok{#transformedWave <- ifelse(reduceRate, downsample(xwv, samp.rate = newRate),xwv)}
\KeywordTok{return}\NormalTok{( xwv)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
\paragraph{Spectrum analysis, power spectrum and energy
band}\label{spectrum-analysis-power-spectrum-and-energy-band}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Compute the powerspectrum of the input signal}
\CommentTok{# x : audio samples array }
\CommentTok{# rate : samples rate }
\CommentTok{# The output is a matrix, where each column represents a power spectrum }
\CommentTok{# for a given time frame and each row represents a frequency.}
\NormalTok{powerSpectrum <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x , rate )\{}
\NormalTok{ out =}\StringTok{ }\KeywordTok{powspec}\NormalTok{( }\KeywordTok{as.numeric}\NormalTok{(x), rate)}
\KeywordTok{return}\NormalTok{(out)}
\NormalTok{\}}
\CommentTok{# Spectral info ------------------------------------------------------------}
\CommentTok{# calculate the fundamental frequency contour}
\CommentTok{# name : name of the file input}
\NormalTok{spectralInfo <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(name)\{}
\NormalTok{ f0vals =}\StringTok{ }\KeywordTok{ksvF0}\NormalTok{(name, }\DataTypeTok{toFile=}\NormalTok{F)}
\KeywordTok{return}\NormalTok{(f0vals)}
\NormalTok{\}}
\CommentTok{# --------------------------------------------------------------------}
\CommentTok{# Get the spectogram of a wave }
\CommentTok{# x : wave array }
\CommentTok{# winsize : Fourier transform window size}
\CommentTok{# fs : rate }
\CommentTok{# overlap : overlap with previous window, defaults to half the window length.}
\NormalTok{getSpecgram <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{ (x, winsize, fs ,overlap)\{}
\NormalTok{ sp <-}\StringTok{ }\KeywordTok{specgram}\NormalTok{(x, }\DataTypeTok{n =}\NormalTok{ winsize, }\DataTypeTok{Fs =}\NormalTok{ fs, }\DataTypeTok{overlap =}\NormalTok{ overlap)}
\KeywordTok{return}\NormalTok{(sp)}
\NormalTok{\}}
\CommentTok{#---------------------------------------------------------------------}
\CommentTok{#Frequency spectrum of a time wave}
\CommentTok{# x : an R object.}
\CommentTok{# fs : sampling frequency of wave (in Hz). Does not need to be specified if embedded in wave.}
\CommentTok{# wl : if at is not null, length of the window for the analysis (by default = 512).}
\CommentTok{# wn : window name, see ftwindow (by default "hanning").}
\CommentTok{# fftw : if TRUE calls the function FFT of the library fftw for faster computation. See Notes of the function spectro.}
\CommentTok{# norm #if TRUE the spectrum is normalised by its maximum.}
\NormalTok{getSpec <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{ (x, winsize, fs )\{}
\NormalTok{ sp <-}\StringTok{ }\KeywordTok{spec}\NormalTok{(x, }\DataTypeTok{f =}\NormalTok{ fs, }\DataTypeTok{wl =}\NormalTok{ winsize, }\DataTypeTok{fftw =} \OtherTok{TRUE}\NormalTok{,}\DataTypeTok{norm =}\OtherTok{TRUE}\NormalTok{)}
\KeywordTok{return}\NormalTok{(sp)}
\NormalTok{\}}
\CommentTok{#-----------------------------------------------------------------------------------}
\CommentTok{# To get spectral properties }
\CommentTok{# spec : a data set resulting of a spectral analysis obtained with spec or meanspec (not in dB).}
\CommentTok{# f :sampling frequency of spec (in Hz).}
\CommentTok{# str :logical, if TRUE returns the results in a structured table.}
\CommentTok{# flim :a vector of length 2 to specifgy the frequency limits of the analysis (in kHz)}
\CommentTok{# mel #a logical, if TRUE the (htk-)mel scale is used.}
\NormalTok{GetSpecProps <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x, fs)\{}
\NormalTok{ specProps =}\StringTok{ }\KeywordTok{specprop}\NormalTok{(x, }\DataTypeTok{f=}\NormalTok{ fs, }\DataTypeTok{mel =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{return}\NormalTok{(specProps)}
\NormalTok{\}}
\CommentTok{#---------------------------------------------------------------}
\CommentTok{# compute the zero crossing rate}
\CommentTok{# x : R wave object }
\CommentTok{#f :sampling frequency of wave (in Hz). Does not need to be specified if embedded in wave.}
\CommentTok{#wl: length of the window for the analysis (even number of points, by default = 512). If NULL the zero-crossing rate is computed of the complete signal.}
\CommentTok{#overlap : overlap between two successive analysis windows (in %) if wl is not NULL.}
\NormalTok{ zeroCrossingRate <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{ (x , fs, wl , overlap )\{}
\NormalTok{ cr =}\StringTok{ }\KeywordTok{zcr}\NormalTok{(x,}\DataTypeTok{f=}\NormalTok{ fs, }\DataTypeTok{wl =}\NormalTok{ wl, }\DataTypeTok{ovlp =}\NormalTok{ overlap)}
\KeywordTok{return}\NormalTok{(cr)}
\NormalTok{ \}}
\CommentTok{# Computation of MFCCs (Mel Frequency Cepstral Coefficients) for a Wave object}
\CommentTok{# x : Object of class Wave.}
\NormalTok{getMfccs <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x)\{}
\NormalTok{ mfccs =}\StringTok{ }\KeywordTok{MFCC}\NormalTok{(x, }\DataTypeTok{a =} \FloatTok{0.1}\NormalTok{, }\DataTypeTok{HW.width =} \FloatTok{0.025}\NormalTok{, }\DataTypeTok{HW.overlapping =} \FloatTok{0.25}\NormalTok{, }
\DataTypeTok{T.number =} \DecValTok{24}\NormalTok{, }\DataTypeTok{T.overlapping =} \FloatTok{0.5}\NormalTok{, }\DataTypeTok{K =} \DecValTok{12}\NormalTok{)}
\KeywordTok{return}\NormalTok{(mfccs)}
\NormalTok{\}}
\CommentTok{# Energy bands ----------------------------------------------------------------}
\CommentTok{# x : audio spectogram}
\CommentTok{# winsize : Fourier transform window size}
\CommentTok{# fs : rate }
\CommentTok{# nb : number of bands to select}
\CommentTok{# lowB :}
\CommentTok{# eps : default minimum energy value }
\NormalTok{energyBands <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(x,fs,nb, lowB,eps,winsize)\{}
\NormalTok{ntm <-}\StringTok{ }\KeywordTok{ncol}\NormalTok{(x}\OperatorTok{$}\NormalTok{S) }\CommentTok{# number of (overlapping) time segments}
\NormalTok{fco <-}\StringTok{ }\KeywordTok{round}\NormalTok{( }\KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{, lowB}\OperatorTok{*}\NormalTok{(fs}\OperatorTok{/}\DecValTok{2}\OperatorTok{/}\NormalTok{lowB)}\OperatorTok{^}\NormalTok{((}\DecValTok{0}\OperatorTok{:}\NormalTok{(nb}\OperatorTok{-}\DecValTok{1}\NormalTok{))}\OperatorTok{/}\NormalTok{(nb}\OperatorTok{-}\DecValTok{1}\NormalTok{)))}\OperatorTok{/}\NormalTok{fs}\OperatorTok{*}\NormalTok{winsize )}
\NormalTok{energy <-}\StringTok{ }\KeywordTok{matrix}\NormalTok{(}\DecValTok{0}\NormalTok{, nb, ntm)}
\ControlFlowTok{for}\NormalTok{ (tm }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{ntm)\{}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\OperatorTok{:}\NormalTok{nb)\{}
\NormalTok{ lower_bound <-}\StringTok{ }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{fco[i]}
\NormalTok{ upper_bound <-}\StringTok{ }\KeywordTok{min}\NormalTok{( }\KeywordTok{c}\NormalTok{( }\DecValTok{1} \OperatorTok{+}\StringTok{ }\NormalTok{fco[i }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{], }\KeywordTok{nrow}\NormalTok{(x}\OperatorTok{$}\NormalTok{S) ) )}
\NormalTok{ energy[i, tm] <-}\StringTok{ }\KeywordTok{sum}\NormalTok{( }\KeywordTok{abs}\NormalTok{(x}\OperatorTok{$}\NormalTok{S[ lower_bound}\OperatorTok{:}\NormalTok{upper_bound, tm ])}\OperatorTok{^}\DecValTok{2}\NormalTok{ )}
\NormalTok{ \}}
\NormalTok{\}}
\NormalTok{energy[energy }\OperatorTok{<}\StringTok{ }\NormalTok{eps] <-}\StringTok{ }\NormalTok{eps}
\NormalTok{energy =}\StringTok{ }\DecValTok{10}\OperatorTok{*}\KeywordTok{log10}\NormalTok{(energy)}
\KeywordTok{return}\NormalTok{(energy)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
\subsubsection{Dataset Creation}\label{dataset-creation}
Basically we loop over the audio files, extracting all features and saving them in a text file.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# first we define the general parameters}
\NormalTok{rate =}\StringTok{ }\DecValTok{22050}
\NormalTok{newrate =}\StringTok{ }\DecValTok{11050}
\NormalTok{reduceRate =}\StringTok{ }\OtherTok{FALSE}
\CommentTok{# STFT}
\NormalTok{winsize <-}\StringTok{ }\DecValTok{2048}
\NormalTok{nfft <-}\StringTok{ }\DecValTok{2048}
\NormalTok{hopsize <-}\StringTok{ }\DecValTok{512}
\NormalTok{overlap <-}\StringTok{ }\NormalTok{winsize }\OperatorTok{-}\StringTok{ }\NormalTok{hopsize}
\CommentTok{# Frequency bands selection}
\NormalTok{nb <-}\StringTok{ }\DecValTok{2}\OperatorTok{^}\DecValTok{3}
\NormalTok{lowB <-}\StringTok{ }\DecValTok{100}
\NormalTok{eps <-}\StringTok{ }\NormalTok{.Machine}\OperatorTok{$}\NormalTok{double.eps}
\CommentTok{# Number of seconds of the analyzed window}
\NormalTok{corrtime <-}\StringTok{ }\DecValTok{15}
\CommentTok{# the file list to use is our Ordered file list }
\NormalTok{data =}\StringTok{ }\KeywordTok{list}\NormalTok{()}
\ControlFlowTok{for}\NormalTok{( file }\ControlFlowTok{in}\NormalTok{ ordFileList)\{}
\CommentTok{# we only take the fixed length sample to make equal for all files }
\NormalTok{ x =}\StringTok{ }\KeywordTok{read.AsspDataObj}\NormalTok{(file)}\OperatorTok{$}\NormalTok{audio[}\DecValTok{1}\OperatorTok{:}\NormalTok{fixedLength]}
\NormalTok{ xWave =}\StringTok{ }\KeywordTok{transformToWave}\NormalTok{(x,rate)}
\NormalTok{ xPoweSpec =}\StringTok{ }\KeywordTok{powerSpectrum}\NormalTok{(x,rate)}
\NormalTok{ xSpecgram =}\StringTok{ }\KeywordTok{getSpecgram}\NormalTok{(x,}\DataTypeTok{fs =}\NormalTok{ rate,}\DataTypeTok{winsize =}\NormalTok{ winsize,}\DataTypeTok{overlap =}\NormalTok{ overlap)}
\NormalTok{ xSpectralInfo =}\StringTok{ }\KeywordTok{spectralInfo}\NormalTok{(file)}
\CommentTok{# we save all the values }
\CommentTok{#xData = data.frame(xWave,xPoweSpec,xSpecgram,xSpectralInfo)}
\CommentTok{#data <- c(data,xData)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
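The two commented lines inside the loop sketch the intended collection step. One minimal way to complete it (illustrative only: the flattening with \texttt{unlist} and the output file name \texttt{features.txt} are assumptions, not part of the original script) could be:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# inside the loop, replacing the two commented lines above:}
\CommentTok{# flatten the per-file features into one row (the exact flattening}
\CommentTok{# depends on what each helper returns) and store it under the file name}
\NormalTok{xData =}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\KeywordTok{t}\NormalTok{(}\KeywordTok{unlist}\NormalTok{(xSpectralInfo)))}
\NormalTok{data[[file]] =}\StringTok{ }\NormalTok{xData}
\CommentTok{# after the loop: stack all rows and save them as a plain-text table}
\NormalTok{features =}\StringTok{ }\KeywordTok{do.call}\NormalTok{(rbind, data)}
\KeywordTok{write.table}\NormalTok{(features, }\StringTok{"features.txt"}\NormalTok{, }\DataTypeTok{row.names =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}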
\subsection{Models to apply}\label{models-to-apply}
\subsubsection{List of models to use and reasons (limit to models seen
in
class)}\label{list-of-models-to-use-and-reasons-limit-to-models-seen-in-class}
\subsubsection{Models Implementation}\label{models-implementation}
\subsection{Classification}\label{classification}
\subsubsection{Classification performance for each
model}\label{classification-performance-for-each-model}
\subsubsection{Choose the best model}\label{chose-the-best-model}
\subsubsection{Determine each feature contribution to
model}\label{determine-each-feature-contribution-to-model}
\subsubsection{Maintain only the most important
ones}\label{maintain-only-the-most-important-ones}
\subsubsection{Final classification}\label{final-classification}
\subsection{Map}\label{map}
\section{Part II - Theory}\label{part-ii---theory}
\end{document}
| {
"alphanum_fraction": 0.718892274,
"avg_line_length": 40.1829436039,
"ext": "tex",
"hexsha": "ada884191ba2994cde423244caaa077ecec86096",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f87d184a191d5bf3becc97a0da667283c64c52bd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "arywatt/statlearn-homework",
"max_forks_repo_path": "song_classification_g_20.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f87d184a191d5bf3becc97a0da667283c64c52bd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "arywatt/statlearn-homework",
"max_issues_repo_path": "song_classification_g_20.tex",
"max_line_length": 466,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f87d184a191d5bf3becc97a0da667283c64c52bd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "arywatt/statlearn-homework",
"max_stars_repo_path": "song_classification_g_20.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 9019,
"size": 29213
} |
\documentclass[a4paper,11pt]{article}
\usepackage[colorlinks,linkcolor=red]{hyperref}
\usepackage{amsmath}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{color, soul}
\title{DIFMAP Guidance}
\author{DW}
\begin{document}
\setulcolor{red}
\setstcolor{green}
\sethlcolor{yellow}
\lstset{numbers=left,
numberstyle=\tiny,
keywordstyle=\color{blue!70}, commentstyle=\color{red!50!green!50!blue!50},
frame=shadowbox,
rulesepcolor=\color{red!20!green!20!blue!20}
}
\maketitle
\newpage
\tableofcontents
\newpage
\section{Download}
\par Introduction site: \url{https://science.nrao.edu/facilities/vlba/docs/manuals/oss2013a/post-processing-software/difmap}\\
In Linux, type
\begin{lstlisting}[language={[ANSI]C}]
lftp ftp://ftp.astro.caltech.edu/pub/difmap/
lftp :~> get difmap2.5e.tar.gz
lftp :~> get cookbook.ps.gz
lftp :~> quit
\end{lstlisting}
In the README file, the install procedure is given by the following commands:
\begin{lstlisting}[language={[ANSI]C}]
tar xzf difmap2.5e.tar.gz
cd uvf_difmap/
vi ./configure
./configure linux-i486-gcc
sudo emerge -av pgplot
./makeall
\end{lstlisting}
Here I use Gentoo; pgplot can also be installed by \fbox{ftp} from \fbox{/pub/pgplot}, or you can simply install it with (I did not try this)
\begin{lstlisting}[language={[ANSI]C}]
sudo apt-get install pgplot5
\end{lstlisting}
The data file (suffix .fits) can be downloaded from GitHub at \url{https://github.com/rstofi/VLBI_Imaging_Script/raw/master/VLBI_Imaging_Script/J0017%2B8135_S_1998_10_01_pus_vis.fits}, and using wget is just fine.
\section{Starting up}
\par In the download directory, type ./difmap to get into the workspace.
\tiny
\begin{lstlisting}[language={[ANSI]C}]
douwei@dpcg ~/difmap/uvf_difmap $ ./difmap
Caltech difference mapping program - version 2.5e (30 May 2019)
Copyright (c) 1993-2019 California Institute of Technology. All Rights Reserved.
Type 'help difmap' to list difference mapping commands and help topics.
Started logfile: difmap.log_8 on Sun Sep 29 13:49:37 2019
0>
\end{lstlisting}
\normalsize
\par A \fbox{difmap.log} file will be generated and all commands will be recorded in it; each line of program output is prefixed with a !, so the log file can be executed as a command file within \fbox{DIFMAP} by typing
\begin{lstlisting}[language={[ANSI]C}]
0>@difmap.log
\end{lstlisting}
Type \fbox{help} to get help, and use \fbox{exit} or \fbox{quit} to quit.
\section{Read data}
We begin by reading the downloaded data into \fbox{DIFMAP} with \fbox{observe}
\tiny
\begin{lstlisting}[language={[ANSI]C}]
0>observe J0017+8135_S_1998_10_01_pus_vis.fits
Reading UV FITS file: J0017+8135_S_1998_10_01_pus_vis.fits
AN table 1: 1533 integrations on 136 of 136 possible baselines.
AN table 2: 810 integrations on 136 of 136 possible baselines.
AN table 3: 240 integrations on 136 of 136 possible baselines.
Apparent sampling: 0.29775 visibilities/baseline/integration-bin.
*** This seems a bit low - see "help observe" on the binwid argument.
Found source: J0017+8135
There are 4 IFs, and a total of 4 channels:
IF Channel Frequency Freq offset Number of Overall IF
origin at origin per channel channels bandwidth
------------------------------------------------------------- (Hz)
01 1 2.22298e+09 4e+06 1 4e+06
02 2 2.24298e+09 4e+06 1 4e+06
03 3 2.33298e+09 4e+06 1 4e+06
04 4 2.36298e+09 4e+06 1 4e+06
Polarization(s): RR
Read 2 lines of history.
Reading 418384 visibilities.
\end{lstlisting}
use \fbox{header} to get more information about the observation.
\begin{lstlisting}
0>header
UV FITS miscellaneous header keyword values:
OBSERVER = "RDV11"
DATE-OBS = "1998-10-01"
ORIGIN = "AIPSvlb047 NBRIPM 31DEC08"
TELESCOP = "VLBA"
INSTRUME = "VLBA"
EQUINOX = 2000.00
Sub-array 1 contains:
136 baselines 17 stations
1533 integrations 6 scans
Station name X (m) Y (m) Z(m)
01 BR -2.112065e+06 3.705357e+06 4.726814e+06
02 FD -1.324009e+06 5.332182e+06 3.231962e+06
03 GC -2.281547e+06 1.453645e+06 5.756993e+06
04 HN 1.446375e+06 4.447940e+06 4.322306e+06
05 KK -5.543838e+06 2.054568e+06 2.387852e+06
06 KP -1.995679e+06 5.037318e+06 3.357328e+06
07 LA -1.449752e+06 4.975299e+06 3.709124e+06
08 MK -5.464075e+06 2.495249e+06 2.148297e+06
09 NL -1.308723e+05 4.762317e+06 4.226851e+06
10 NY 1.202463e+06 -2.527344e+05 6.237766e+06
11 ON 3.370606e+06 -7.119175e+05 5.349831e+06
12 OV -2.409150e+06 4.478573e+06 3.838617e+06
13 PT -1.640954e+06 5.014816e+06 3.575412e+06
14 SC 2.607849e+06 5.488070e+06 1.932740e+06
15 WF 1.492207e+06 4.458131e+06 4.296016e+06
16 MC 4.461370e+06 -9.195969e+05 4.449559e+06
17 GN 8.837727e+05 4.924386e+06 3.944042e+06
Sub-array 2 contains:
136 baselines 17 stations
810 integrations 4 scans
Station name X (m) Y (m) Z(m)
01 BR -2.112065e+06 3.705357e+06 4.726814e+06
02 FD -1.324009e+06 5.332182e+06 3.231962e+06
03 GC -2.281547e+06 1.453645e+06 5.756993e+06
04 HN 1.446375e+06 4.447940e+06 4.322306e+06
05 KK -5.543838e+06 2.054568e+06 2.387852e+06
06 KP -1.995679e+06 5.037318e+06 3.357328e+06
07 LA -1.449752e+06 4.975299e+06 3.709124e+06
08 MK -5.464075e+06 2.495249e+06 2.148297e+06
09 NL -1.308723e+05 4.762317e+06 4.226851e+06
10 NY 1.202463e+06 -2.527344e+05 6.237766e+06
11 ON 3.370606e+06 -7.119175e+05 5.349831e+06
12 OV -2.409150e+06 4.478573e+06 3.838617e+06
13 PT -1.640954e+06 5.014816e+06 3.575412e+06
14 SC 2.607849e+06 5.488070e+06 1.932740e+06
15 WF 1.492207e+06 4.458131e+06 4.296016e+06
16 MC 4.461370e+06 -9.195969e+05 4.449559e+06
17 GN 8.837727e+05 4.924386e+06 3.944042e+06
Sub-array 3 contains:
136 baselines 17 stations
240 integrations 2 scans
Station name X (m) Y (m) Z(m)
01 BR -2.112065e+06 3.705357e+06 4.726814e+06
02 FD -1.324009e+06 5.332182e+06 3.231962e+06
03 GC -2.281547e+06 1.453645e+06 5.756993e+06
04 HN 1.446375e+06 4.447940e+06 4.322306e+06
05 KK -5.543838e+06 2.054568e+06 2.387852e+06
06 KP -1.995679e+06 5.037318e+06 3.357328e+06
07 LA -1.449752e+06 4.975299e+06 3.709124e+06
08 MK -5.464075e+06 2.495249e+06 2.148297e+06
09 NL -1.308723e+05 4.762317e+06 4.226851e+06
10 NY 1.202463e+06 -2.527344e+05 6.237766e+06
11 ON 3.370606e+06 -7.119175e+05 5.349831e+06
12 OV -2.409150e+06 4.478573e+06 3.838617e+06
13 PT -1.640954e+06 5.014816e+06 3.575412e+06
14 SC 2.607849e+06 5.488070e+06 1.932740e+06
15 WF 1.492207e+06 4.458131e+06 4.296016e+06
16 MC 4.461370e+06 -9.195969e+05 4.449559e+06
17 GN 8.837727e+05 4.924386e+06 3.944042e+06
There are 4 IFs, and a total of 4 channels:
IF Channel Frequency Freq offset Number of Overall IF
origin at origin per channel channels bandwidth
------------------------------------------------------------- (Hz)
01 1 2.22298e+09 4e+06 1 4e+06
02 2 2.24298e+09 4e+06 1 4e+06
03 3 2.33298e+09 4e+06 1 4e+06
04 4 2.36298e+09 4e+06 1 4e+06
Source parameters:
Source: J0017+8135
RA = 00 17 08.475 (2000.0) 00 17 14.947 (apparent)
DEC = +81 35 08.137 +81 34 41.639
Antenna pointing center:
OBSRA = 00 17 08.475 (2000.0)
OBSDEC = +81 35 08.136
Data characteristics:
Recorded units are UNCALIB.
Recorded polarizations: RR
Phases are rotated 0 mas East and 0 mas North.
UVW coordinates are rotated by 0 degrees clockwise.
Scale factor applied to FITS data weights: 1
Coordinate projection: SIN
Summary of overall dimensions:
3 sub-arrays, 4 IFs, 4 channels, 2583 integrations
1 polarizations, and up to 136 baselines per sub-array
Time related parameters:
Reference date: 1998 day 274/00:00:00 (1998 Oct 01)
Julian Date: 2451087.50, Epoch J1998.746
GAST at reference date: 00 38 05.893
Coherent integration time = 0.0 sec
Incoherent integration time = 0.0 sec
Sum of scan durations = 5136 sec
UT range: 274/14:35:30 to 275/12:24:51
Mean epoch: JD 2451088.562 = J1998.749
\end{lstlisting}
\normalsize
\par \ul{In the cookbook:}
\par In order for editing and self-calibration to work, visibilities from different baselines \hl{must be grouped with the same integration times}.
\fbox{UV FITS} files \textbf{DO NOT} \hl{provide any means to map visibilities} on different baselines into integrations. Each visibility has its own time stamp, which need not agree with those on other baselines \hl{within the same logical integration}. \fbox{DIFMAP}, on the other hand, does require that visibilities be grouped into integrations. This is the reason for the 'binwid' argument of the observe command. If the visibilities do not lie on an integration grid, then you must specify a suitable integration time into which visibilities should be binned. Depending on how the \fbox{FITS} file has been processed, it may already have visibilities grouped into integrations with identical time stamps assigned to each grouped visibility, in which case no 'binwid' argument will be required. If you do not know what state your file is in, then try to read it with the \fbox{observe} command without specifying an integration time. If \fbox{observe} then reports an apparent sampling of $\leq0.5$, either run the \fbox{uvaver} command to re-grid the data or, equivalently, re-run \fbox{observe} with a suitable integration time. Other symptoms of incompletely binned integrations are that \fbox{selfcal} flags all of your data due to the lack of closure quantities, and that station based editing in \fbox{vplot} behaves like baseline based editing.
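\par For instance, since \fbox{observe} reported an apparent sampling of 0.29775 above, one could re-read the file with an explicit integration bin width; the 30-second value below is only a placeholder and should be tuned to the actual correlator integration time (see \fbox{help observe}):
\begin{lstlisting}
0>observe J0017+8135_S_1998_10_01_pus_vis.fits, 30
\end{lstlisting}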
\par To examine the data, we first type the command \fbox{select} (needed if there is more than one polarization).
\begin{lstlisting}
0>select
Selecting polarization: RR, channels: 1..4
Reading IF 1 channels: 1..1
Reading IF 2 channels: 2..2
Reading IF 3 channels: 3..3
Reading IF 4 channels: 4..4
\end{lstlisting}
Take a look at a plot of amplitude vs $u-v$ radius
\scriptsize
\begin{lstlisting}
0>radplot
Graphics device/type (? to see list, default /NULL): /xserve
Using default options string "m1"
Move the cursor into the plot window and press 'H' for help
\end{lstlisting}
\par Here we use \fbox{xpra} to show the picture, and therefore we choose \fbox{/xserve}. All the available devices are listed in the following:
\begin{lstlisting}
Graphics device/type (? to see list, default /NULL): ?
PGPLOT v5.2.2 Copyright 1997 California Institute of Technology
Interactive devices:
/TEK4010 (Tektronix 4010 terminal)
/GF (GraphOn Tek terminal emulator)
/RETRO (Retrographics VT640 Tek emulator)
/GTERM (Color gterm terminal emulator)
/XTERM (XTERM Tek terminal emulator)
/ZSTEM (ZSTEM Tek terminal emulator)
/V603 (Visual 603 terminal)
/TK4100 (Tektronix 4100 terminals)
/VMAC (VersaTerm-PRO for Mac, Tek 4105)
/VT125 (DEC VT125 and other REGIS terminals)
/XDISP (pgdisp or figdisp server)
/XWINDOW (X window window@node:display.screen/xw)
/XSERVE (A /XWINDOW window that persists for re-use)
Non-interactive file formats:
/CANON (Canon LBP-8/A2 Laser printer, landscape)
/CGM (CGM file, indexed colour selection mode)
/CGMD (CGM file, direct colour selection mode)
/CW6320 (Colorwriter 6320 plotter)
/GIF (Graphics Interchange Format file, landscape orientation)
/VGIF (Graphics Interchange Format file, portrait orientation)
/HPGL (Hewlett Packard HPGL plotter, landscape orientation)
/VHPGL (Hewlett Packard HPGL plotter, portrait orientation)
/HPGL2 (Hewlett-Packard graphics)
/HIDMP (Houston Instruments pen plotter)
/HP7221 (Hewlett-Packard HP7221 pen plotter
/LIPS2 (Canon LIPS2 file, landscape orientation)
/VLIPS2 (Canon LIPS2 file, portrait orientation)
/LATEX (LaTeX picture environment)
/NULL (Null device, no output)
/PGMF (PGPLOT metafile)
/PNG (Portable Network Graphics file)
/TPNG (Portable Network Graphics file - transparent background)
/PPM (Portable Pixel Map file, landscape orientation)
/VPPM (Portable Pixel Map file, portrait orientation)
/PS (PostScript file, landscape orientation)
/VPS (PostScript file, portrait orientation)
/CPS (Colour PostScript file, landscape orientation)
/VCPS (Colour PostScript file, portrait orientation)
/QMS (QUIC/QMS file, landscape orientation)
/VQMS (QUIC/QMS file, portrait orientation)
/VCANON (Canon LBP-8/A2 Laser printer, portrait)
/WD (X Window Dump file, landscape orientation)
/VWD (X Window Dump file, portrait orientation)
\end{lstlisting}
\normalsize
Pressing \fbox{H} in the plot window shows the help:
\scriptsize
\begin{lstlisting}
You requested help by pressing 'H'.
The following keys are defined when pressed inside the plot:
X - Quit radplt
L - Re-display whole plot
. - Re-display plot with alternate marker symbol.
n - Highlight next telescope
p - Highlight previous telescope
N - Step to the next sub-array to highlight.
P - Step to the preceding sub-array to highlight.
T - Specify highlighted telescope from keyboard
s - Show the baseline and time of the nearest point to the cursor
S - Show the amp/phase statistics of the data within a selected area.
V - Show the real/imag statistics of the data within a selected area.
A - (Left-mouse-button) Flag the point closest to the cursor
C - Initiate selection of an area to flag.
W - Toggle spectral-line channel based editing.
Z - Select a new amplitude or phase display range.
U - Select a new UV-radius display range.
Display mode options:
M - Toggle model plotting.
1 - Display amplitude only.
2 - Display phase only.
3 - Display amplitude and phase.
E - Toggle whether to display an error plot.
- - Toggle whether to display residuals.
+ - Toggle whether to use a cross-hair cursor if available.
\end{lstlisting}
\normalsize
Another useful display is a plot of the $u-v$ coverage. This may be obtained by typing
\begin{lstlisting}
0>uvplot
\end{lstlisting}
To look at a cut of amplitude and/or phase along any radial line in the $u-v$ plane use the command \fbox{projplot} to display the projected amplitude and phase with distance along the position angle of the majority of source structure.
\begin{lstlisting}
0>projplot 45
\end{lstlisting}
\par Use \fbox{tplot} to check whether data are missing or have gaps.
\begin{lstlisting}
0>tplot
\end{lstlisting}
\par color:\\
green: no edit\\
yellow: any data to an antenna are flagged\\
blue: antenna has been flagged in \fbox{selfcal} or \fbox{corplot}\\
red: all data to a given antenna are flagged
\section{Editing data}
\par To get rid of bad data, type
\begin{lstlisting}
0>vplot
\end{lstlisting}
use \fbox{scangap} to change the interscan gap (default 1 hour)
\scriptsize
\begin{lstlisting}
0>scangap
The delimiting interscan gap is 3600 seconds in all sub-arrays.
\end{lstlisting}
\normalsize
use \fbox{wtscale} to change the weight scale factor (default 1.0).
The Vplot key bindings:
\tiny
\begin{lstlisting}
H - List the following key bindings.
X - Exit vplot (right-mouse-button).
A - Flag or un-flag the visibility nearest the cursor (left-mouse-button).
U - Select a new time range (hit U again for the full range).
Z - Select a new amplitude or phase range (hit Z twice for full range).
C - Flag all data inside a specified rectangular box.
R - Restore data inside a specified rectangular box.
K - Flag all visibilities of a selected baseline and scan.
L - Redisplay the current plot.
n - Display the next set of baselines.
p - Display the preceding set of baselines.
N - Display the next sub-array.
P - Display the preceding sub-array.
] - Plot from the next IF.
[ - Plot from the preceding IF.
M - Toggle whether to display model visibilities.
F - Toggle whether to display flagged visibilities.
E - Toggle whether to display error bars.
G - Toggle between GST and UTC times along the X-axis.
S - Select the number of sub-plots per page.
O - Toggle between seeing all or just upper baselines.
1 - Plot only amplitudes.
2 - Plot only phases.
3 - Plot both amplitudes and phases.
- - Toggle whether to display residuals.
B - Toggle whether to break the plot into scans (where present).
V - Toggle whether to use flagged data in autoscaling.
+ - Toggle whether to use a cross-hair cursor if available.
T - Request a new reference telescope/baseline.
- (SPACE BAR) Toggle station based vs. baseline based editing.
I - Toggle IF editing scope.
W - Toggle spectral-line channel editing scope.
\end{lstlisting}
\normalsize
To write a copy of the edited data, type
\begin{lstlisting}
0>wobs bak.edt
\end{lstlisting}
\normalsize
\section{Difference mapping}
In each \caps{selfcal-mapplot-clean} iteration, the model is subtracted from the data in the $u-v$ plane. To start with the default 1 Jy point source model at the map center, type:
\tiny
\begin{lstlisting}
0>startmod
Applying default point source starting model.
Performing phase self-cal
Adding 1 model components to the UV plane model.
The established model now contains 1 components and 1 Jy
Correcting IF 1.
A total of 14903 telescope corrections were flagged in sub-array 1.
A total of 9156 telescope corrections were flagged in sub-array 2.
A total of 2135 telescope corrections were flagged in sub-array 3.
Correcting IF 2.
A total of 14904 telescope corrections were flagged in sub-array 1.
A total of 9156 telescope corrections were flagged in sub-array 2.
A total of 2136 telescope corrections were flagged in sub-array 3.
Correcting IF 3.
A total of 14906 telescope corrections were flagged in sub-array 1.
A total of 9156 telescope corrections were flagged in sub-array 2.
A total of 2136 telescope corrections were flagged in sub-array 3.
Correcting IF 4.
A total of 14906 telescope corrections were flagged in sub-array 1.
A total of 9288 telescope corrections were flagged in sub-array 2.
A total of 2137 telescope corrections were flagged in sub-array 3.
Fit before self-cal, rms=2.128069Jy sigma=0.004096
Fit after self-cal, rms=2.126510Jy sigma=0.004068
clrmod: Cleared the established, tentative and continuum models.
Redundant starting model cleared.
\end{lstlisting}
\normalsize
\par \fbox{selfcal} reports the rms difference between the model and the data and also sigma, which is the rms divided by the variance implied by the visibility weights (effectively, sigma is the square root of the reduced $\chi^2$).
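\par Schematically (this is a paraphrase, not DIFMAP's exact internal formula), for $N$ visibilities $V_i$ with model values $\hat{V}_i$ and errors $\sigma_i$ implied by the weights,
\begin{equation*}
\sigma \simeq \sqrt{\chi^2_{red}} = \left[\frac{1}{N}\sum_{i=1}^{N}\frac{|V_i-\hat{V}_i|^2}{\sigma_i^2}\right]^{1/2}
\end{equation*}
so smaller values indicate a better fit relative to the quoted errors.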
\par If dealing with a more complicated model than a point source, supply the name of a file containing that model to \fbox{startmod}.
\par Define the image size and cell size you wish to map. The image size must be an integer power of 2 and should be at least twice the maximum source dimension. The cell size should be small enough to allow 3 or more pixels across the synthesized beam. For example:
\scriptsize
\begin{lstlisting}
0>mapsize 256,0.2
Map grid = 256x256 pixels with 0.200x0.200 milli-arcsec cellsize.
\end{lstlisting}
\normalsize
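\par As a rough worked check of these numbers: the beam estimated later by \fbox{mapplot} has $b_{min}\approx 1.9$ mas, so a 0.2 mas cell gives about $1.9/0.2\approx 10$ pixels across the beam minor axis (comfortably above the 3-pixel guideline), while the resulting $256\times 0.2 = 51.2$ mas field must simply be large enough to contain the source.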
\par Alternatively, restrict the range of $u-v$ spacings that will be gridded:
\scriptsize
\begin{lstlisting}
0>uvrange 0,51.6
Only data in the UV range: 0 -> 51.6 (mega-wavelengths) will be gridded.
\end{lstlisting}
\normalsize
Use uniform weighting, with gridding weights scaled by errors raised to the power $-1$:
\scriptsize
\begin{lstlisting}
0>uvweight 2,-1
Uniform weighting binwidth: 2 (pixels).
Gridding weights will be scaled by errors raised to the power -1.
Radial weighting is not currently selected.
\end{lstlisting}
\normalsize
Use \fbox{mapplot} to take a look at the \hl{dirty map}
\scriptsize
\begin{lstlisting}
0>mapplot
Inverting map and beam
Estimated beam: bmin=1.936 mas, bmaj=2.084 mas, bpa=71.83 degrees
Estimated noise=479.048 mJy/beam.
Graphics device/type (? to see list, default /NULL): /xserve
Move the cursor into the plot window and press 'H' for help
\end{lstlisting}
\normalsize
Typing \fbox{H} lists the key bindings:
\scriptsize
\begin{lstlisting}
You have selected one window corner - Use one of the following keys
A - Select the opposite corner of the window you have started
D - Discard the incomplete window
The following keys may be selected when the cursor is in the plot
X - Quit this session
A - Select the two opposite corners of a new clean window.
D - Delete the window with a corner closest to the cursor.
S - Describe the area of the window with a corner closest to the cursor.
V - Report the value of the pixel under the cursor.
f - Fiddle the colormap contrast and brightness.
F - Reset the colormap contrast and brightness to 1, 0.5.
L - Re-display the plot.
G - Install the default gray-scale color map.
c - Install the default pseudo-color color map.
C - Install a color map named at the keyboard.
T - Re-display with a different transfer function.
Z - Select a sub-image to be displayed.
K - Retain the current sub-image limits for subsequent mapplot's
m - Toggle display of the model.
M - Toggle display of just the variable part of the model.
N - Initiate the description of a new model component.
R - Remove the model component closest to the cursor.
U - Remove the marker closest to the cursor.
+ - Toggle whether to use a cross-hair cursor if available.
H - List key bindings.
\end{lstlisting}
\subsection{Cleaning}
\normalsize
\par Choose a number of iterations and a loop gain for cleaning
\scriptsize
\begin{lstlisting}
0>clean 100,0.05
clean: niter=100 gain=0.05 cutoff=0
Component: 050 - total flux cleaned = 0.438179 Jy
Component: 100 - total flux cleaned = 0.501705 Jy
Total flux subtracted in 100 components = 0.501705 Jy
Clean residual min=-0.008169 max=0.039204 Jy/beam
Clean residual mean=0.001368 rms=0.004485 Jy/beam
Combined flux in latest and established models = 0.501705 Jy
\end{lstlisting}
\normalsize
\subsection{Self-Calibration}
\par With the improved, but still basically point-like, model just obtained, self-calibrate the phase by typing
\scriptsize
\begin{lstlisting}
0>selfcal
Performing phase self-cal
Adding 16 model components to the UV plane model.
The established model now contains 16 components and 0.501705 Jy
Correcting IF 1.
Correcting IF 2.
Correcting IF 3.
Correcting IF 4.
Fit before self-cal, rms=2.070359Jy sigma=0.002511
Fit after self-cal, rms=2.070296Jy sigma=0.002511
\end{lstlisting}
\normalsize
\par Use \fbox{mapplot} to see the effect of \fbox{gscale}. Use \fbox{gscale true} to allow the telescope amplitude factors to float freely. It is best to start with long solution intervals to ensure a high enough SNR. For example:
\scriptsize
\begin{lstlisting}
0>selfcal true,true,30
Performing amp+phase self-cal over 30 minute time intervals
Correcting IF 1.
Correcting IF 2.
Correcting IF 3.
Correcting IF 4.
Fit before self-cal, rms=2.070296Jy sigma=0.002511
Fit after self-cal, rms=2.013221Jy sigma=0.002459
\end{lstlisting}
\normalsize
\par If the amplitude corrections are not trustworthy, they can be undone by typing
\scriptsize
\begin{lstlisting}
0>selfcal true,true,30
0>uncal false,true
uncal: All telescope amplitude corrections have been un-done.
\end{lstlisting}
\normalsize
\par If the clean is too deep, we can try \fbox{clrmod true} to throw away your current model, and then iteratively issue \fbox{clean 200,0.03; keep; mapplot}. The \fbox{keep} command is necessary to force subtraction of the clean components from the visibility data, as opposed to subtraction in the image plane.
\section{Saving data, models, and windows}
\par Use \fbox{save} to save
\scriptsize
\begin{lstlisting}
0>save tmp
Writing UV FITS file: tmp.uvf
Writing 16 model components to file: tmp.mod
wwins: Wrote 1 windows to tmp.win
Inverting map and beam
Estimated beam: bmin=1.936 mas, bmaj=2.084 mas, bpa=71.83 degrees
Estimated noise=479.048 mJy/beam.
restore: Substituting estimate of restoring beam from last 'invert'.
Restoring with beam: 1.936 x 2.084 at 71.83 degrees (North through East)
Clean map min=-0.0079798 max=0.476 Jy/beam
Writing clean map to FITS file: tmp.fits
Writing difmap environment to: tmp.par
\end{lstlisting}
\normalsize
\par Individual \fbox{UV FITS}, model, window or map files may be written by typing:
\scriptsize
\begin{lstlisting}
0>wobs tmp.uvf
0>wmod tmp.mod
0>wwin tmp.win
0>wmap tmp.fits
\end{lstlisting}
\normalsize
\par Use \fbox{observe}, \fbox{rmod} and \fbox{rwin} to read in merge, model and window files, respectively.
\section{Finer points in mapping}
see index
\section{Generate output for hardcopy}
\section{Model fitting}
\end{document} | {
"alphanum_fraction": 0.6960367605,
"avg_line_length": 42.2572815534,
"ext": "tex",
"hexsha": "ff866321ea556cc4e1ee56ea153a5a43af345134",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "81892dc7ce9be76d2d85f19b1ad7ba7416f1ec7c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "dw839566105/dw839566105.github.io",
"max_forks_repo_path": "pdf/difmap/difmap.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "81892dc7ce9be76d2d85f19b1ad7ba7416f1ec7c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "dw839566105/dw839566105.github.io",
"max_issues_repo_path": "pdf/difmap/difmap.tex",
"max_line_length": 1371,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "81892dc7ce9be76d2d85f19b1ad7ba7416f1ec7c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "dw839566105/dw839566105.github.io",
"max_stars_repo_path": "pdf/difmap/difmap.tex",
"max_stars_repo_stars_event_max_datetime": "2019-10-15T02:52:27.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-12T13:19:53.000Z",
"num_tokens": 8062,
"size": 26115
} |
\section{Technical Approach}
\subsection{Theoretical Background}
The derivation of the micromorphic constitutive equations is very detailed, with full derivations presented in Regueiro~\cite{bib:regueiro_micro10} and the theory manual for the code~\cite{bib:miller17}. Briefly, the equations of motion which must be solved are the balance of linear momentum
\begin{equation}
\sigma_{ji,j} + \rho \left(f_i - a_i\right) = 0\\
\end{equation}
and the balance of the ``first moment of momentum''
\begin{equation}
\sigma_{ij} - s_{ij} + m_{kji,k} + \rho \left(l_{ji} - \omega_{ji}\right) = 0\\
\end{equation}
where $\left(\cdot\right)_{,j}$ indicates the derivative with respect to $x_j$ in the current configuration. We note that the terms in the balance equations are volume and area averages of quantities defined in the micro scale (indicated by $\left(\cdot\right)'$) via
\begin{align*}
\rho dv &\defeq \int_{dv} \rho' dv'\\
\sigma_{ji}n_j da &\defeq \int_{da} \sigma_{ji}'n_j'da'\\
\rho f_i dv &\defeq \int_{dv} \rho' f_i' dv'\\
\rho a_i dv &\defeq \int_{dv} \rho' a_i' dv'\\
s_{ij} dv &\defeq \int_{dv} \sigma_{ij}' dv'\\
m_{ijm} n_i da &\defeq \int_{da} \sigma_{ij}' \xi_m n_i' da'\\
\rho l_{ij} dv &\defeq \int_{dv} \rho' f_i' \xi_j dv'\\
\rho \omega_{ij} dv &\defeq \int_{dv} \rho' \ddot{\xi}_i \xi_j dv'\\
\end{align*}
The most striking result of these equations is that the Cauchy stress is no longer symmetric. This arises because, while we assert that classical continuum mechanics is obeyed at the micro-scale, at the macro scale we must handle moments applied pointwise due to the higher order stress $m_{ijk}$. This couple, along with the micro body couple and micro-spin, results in a generally asymmetric Cauchy stress.
Conceptually, we can understand the balance of first moment of momentum in the absence of body couples and micro-spin as the statement that the total stress of the body is the volume average of all the micro stresses ($s_{ij}$) added to the couple produced by the micro stresses acting on the lever arm $\xi_i$. The body couple results from a heterogeneous distribution of the body force per unit density and the micro-spin inertia results from the acceleration of the micro position vectors.
We define a mapping between the current and reference configurations for the position of $dv$ and $dv'$ as
\begin{align*}
F_{iI} &\defeq \frac{\partial x_i}{\partial X_I}\\
\xi_i &\defeq \chi_{iI}\Xi_I = \left(\delta_{iI}+\phi_{iI}\right)\Xi_I\\
\end{align*}
where we note the difference that $F_{iI}$ maps $dX_I$ into $dx_i$ through the differential relationship whereas $\chi_{iI}$ is purely a linear map between the configurations and is not defined through the differential elements.
\subsection{Algorithms}
The equations of motion will be solved using the finite element method in a so-called, ``Total Lagrangian,'' configuration. We do this by mapping the stresses back to the reference configuration (for details see Regueiro~\cite{bib:regueiro_micro10} or the theory manual~\cite{bib:miller17}) to find the balance of linear momentum for a single element $e$
\begin{align*}
\sum_{n=1}^{N^{nodes,e}} c^{n,e}_j \bigg\{&\int_{\partial \hat{\mathcal{B}}^{0,t,e}} \hat{N}^{n,e} F_{jJ} S_{IJ} \hat{J} \left(\frac{\partial X_{I}}{\partial \xi_{\hat{i}}}\right)^{-1} \hat{N}_{\hat{i}} d\hat{A}& + \int_{\hat{\mathcal{B}}^{0,e}} \big\{- \hat{N}^{n,e}_{,I} S_{IJ} F_{jJ} + \hat{N}^{n,e} \rho^0 \left(f_j - a_j\right) \big\} \hat{J} d\hat{V} = \mathcal{F}_j^{n,e}\bigg\}\\
\end{align*}
where we have transformed the equations into the element basis $e_\xi$ (indicated by $\hat{\left(\cdot\right)}$), $N$ is the shape function, $S_{IJ}$ is the second Piola Kirchhoff stress, and $\mathcal{F}_j^{n,e}$ is the residual. Note that this $\xi$ is not the same as the micro-position vector detailed above.
We also write the balance of the first moment of momentum as
\begin{align*}
\sum_{n=1}^{N^{nodes,e}} \eta_{ij}^{n,e} &\bigg\{\int_{\mathcal{B}^{0,e}} \bigg\{\hat{N}^{n,e} \left(F_{iI} \left(S_{IJ}-\Sigma_{IJ}\right) F_{jJ} + \rho^0\left(l_{ji} - \omega_{ji} \right)\right) - \frac{\partial \hat{N}^{n,e}}{\partial \xi_{\hat{i}}} \left(\frac{\partial X_{K}}{\partial \xi_{\hat{i}}}\right)^{-1} F_{jJ} \chi_{iI} M_{KJI} \bigg\} \hat{J} d\hat{V}\\
& + \int_{\partial \mathcal{B}^{0,t,e}} F_{jJ} \chi_{iI} M_{KJI} \hat{N}^n \hat{J} \left(\frac{\partial X_{K}}{\partial \xi_{\hat{i}}}\right)^{-1} \hat{N}_{\hat{i}} d\hat{A} = \mathcal{M}_{ij}^{n,e} \bigg\}\\
\end{align*}
We will organize these residuals into the residual vector using the following approach for a linear 8 noded hex element
\begin{align*}
\mathcal{R}^e &= \left\{\begin{array}{c}
\mathcal{F}_j^{1,e}\\
\mathcal{M}_j^{1,e}\\
\mathcal{F}_j^{2,e}\\
\mathcal{M}_j^{2,e}\\
\vdots\\
\mathcal{F}_j^{8,e}\\
\mathcal{M}_j^{8,e}\\
\end{array}\right\}
\end{align*}
where
\begin{align*}
\mathcal{M}_J^{n,e} = \left\{\begin{array}{c}
\mathcal{M}_{11}^{n,e}\\
\mathcal{M}_{22}^{n,e}\\
\mathcal{M}_{33}^{n,e}\\
\mathcal{M}_{23}^{n,e}\\
\mathcal{M}_{13}^{n,e}\\
\mathcal{M}_{12}^{n,e}\\
\mathcal{M}_{32}^{n,e}\\
\mathcal{M}_{31}^{n,e}\\
\mathcal{M}_{21}^{n,e}
\end{array}\right\}
\end{align*}
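For example, with this ordering the twelve degrees of freedom of node $n$ occupy rows $12\left(n-1\right)+1$ through $12n$ of $\mathcal{R}^e$: the first three are the components of $\mathcal{F}^{n,e}$ and the remaining nine the components of $\mathcal{M}^{n,e}$, for a total of $8\times 12 = 96$ entries.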
We solve the nonlinear equations using Newton-Raphson iteration, which means we require the Jacobian. We write the element tangent as
\begin{align*}
\mathcal{J}_{IJ}^e = \frac{\partial \mathcal{R}_I}{\partial \mathcal{U}_J} &= -\left[\begin{array}{cccccc}
\frac{\partial \mathcal{F}_{1}^{1,e}}{\partial u_1^{1,e}} & \frac{\partial \mathcal{F}_{1}^{1,e}}{\partial u_2^{1,e}} & \frac{\partial \mathcal{F}_{1}^{1,e}}{\partial u_3^{1,e}} & \frac{\partial \mathcal{F}_{1}^{1,e}}{\partial \phi_{11}^{1,e}} & \cdots & \frac{\partial \mathcal{F}_{1}^{1,e}}{\partial \phi_{21}^{8,e}}\\
\frac{\partial \mathcal{F}_{2}^{1,e}}{\partial u_1^{1,e}} & \frac{\partial \mathcal{F}_{2}^{1,e}}{\partial u_2^{1,e}} & \frac{\partial \mathcal{F}_{2}^{1,e}}{\partial u_3^{1,e}} & \frac{\partial \mathcal{F}_{2}^{1,e}}{\partial \phi_{11}^{1,e}} & \cdots & \frac{\partial \mathcal{F}_{1}^{1,e}}{\partial \phi_{21}^{8,e}}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
\frac{\partial \mathcal{M}_{2}^{8,e}}{\partial u_1^{1,e}} & \frac{\partial \mathcal{M}_{2}^{8,e}}{\partial u_2^{1,e}} & \frac{\partial \mathcal{M}_{2}^{8,e}}{\partial u_3^{1,e}} & \frac{\partial \mathcal{M}_{2}^{8,e}}{\partial \phi_{11}^{1,e}} & \cdots & \frac{\partial \mathcal{M}_{1}^{8,e}}{\partial \phi_{21}^{8,e}}\\
\end{array}\right]
\end{align*}
where the superscript numbers indicate the node number. This is a $96 \times 96$ matrix. We then assemble the individual Jacobian and residual for an element to form the global Jacobian and residual.
We write the linearized form of the residual as
\begin{equation}
\mathcal{R}_I^{n+1,k+1} \approx \mathcal{R}_I^{n,k} + \frac{\partial \mathcal{R}_I}{\partial \mathcal{U}_J} \Delta \mathcal{U}_J
\end{equation}
where $n$ is the current pseudo-timestep and $k$ is the iteration number.
We desire the residual at the next iteration to be zero so we write
\begin{equation}
\begin{aligned}
0 &= \mathcal{R}_I^k + \frac{\partial \mathcal{R}_I}{\partial \mathcal{U}_J} \Delta \mathcal{U}_J\\
\Rightarrow\ -\frac{\partial \mathcal{R}_I}{\partial \mathcal{U}_J} \Delta \mathcal{U}_J &= \mathcal{R}_I^k\\
\end{aligned}
\end{equation}
where $k$ indicates the sub-iteration. We now introduce an increment in pseudo-time $\Delta \hat{t}$ and write
\begin{align*}
\Delta \mathcal{U}_J &= \Delta \hat{t} \dot{\mathcal{U}}_J\\
\dot{\mathcal{U}}_J &= \alpha \dot{\mathcal{U}}_J^{k+1} + \left(1-\alpha\right) \dot{\mathcal{U}}_J^{k}
\end{align*}
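Combining the last two relations, the resulting update of the degrees of freedom at each sub-iteration is
\begin{equation}
\mathcal{U}_J^{k+1} = \mathcal{U}_J^{k} + \Delta \hat{t}\left[\alpha \dot{\mathcal{U}}_J^{k+1} + \left(1-\alpha\right) \dot{\mathcal{U}}_J^{k}\right]
\end{equation}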
$\alpha = 0$ indicates an explicit method and $\alpha = 1$ indicates an implicit method. Once a good initial condition has been established, Newton-Raphson iteration is performed to compute the proper value of $\dot{\mathcal{U}}_J^{k+1}$. Iterations will continue until the convergence of the residual vector is achieved.
We note that boundary conditions are enforced by either removing the relevant rows and columns in the residual and Jacobian in the case of a zero boundary condition or by first computing the resultant force, subtracting it from the residual, and then removing the rows and columns in the case of a non-zero boundary condition.
The matrix equation will be solved, at least initially, by using the Newton-Krylov solver contained in the C++ repository. This solver does not use the tangent but rather uses the residual to compute the required steps in the solution. Further efforts should involve implementing the code in a Newton-Raphson solver so that the tangent can be tested as well.
\FloatBarrier
| {
"alphanum_fraction": 0.6942807626,
"avg_line_length": 68.6904761905,
"ext": "tex",
"hexsha": "8d00086989664604946cf8046db5a18cf2f02629",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "dafc66df8a308e9fef8af4907de902464b84302b",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "lanl/tardigrade-micromorphic-element",
"max_forks_repo_path": "doc/Report/tex/technical_approach.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dafc66df8a308e9fef8af4907de902464b84302b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "lanl/tardigrade-micromorphic-element",
"max_issues_repo_path": "doc/Report/tex/technical_approach.tex",
"max_line_length": 492,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "dafc66df8a308e9fef8af4907de902464b84302b",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "lanl/tardigrade-micromorphic-element",
"max_stars_repo_path": "doc/Report/tex/technical_approach.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2937,
"size": 8655
} |
% !TEX TS-program = pdflatex
% !TEX root = Tes.tex
% !TEX spellcheck = en-EN
\documentclass[12pt,% % corpo del font principale
a4paper,% % A4 papers
%twoside,openright,% % twoside with free right side
oneside,openany,% % one side
titlepage,% % use a titlepage
headinclude,footinclude,% % header and foot header
BCOR5mm,% % rilegatura di 5 mm
cleardoublepage=empty,% % empty pages with no header and foot
tablecaptionabove,% % table caption above tables
floatperchapter,
]{scrreprt} % KOMA-Script report class;
\usepackage{braket}
\usepackage{changepage}
\usepackage[english]{babel} % latest language is predefined
\usepackage[T1]{fontenc} % font coding
\usepackage{pifont}
\usepackage{indentfirst} % indent first paragraph of each section
\usepackage{mparhack,fixltx2e,relsize} % fancy typographies stuff
\usepackage[eulerchapternumbers,% % chapter font Euler
subfig,% % in subfig objects
beramono,% % Bera Mono as fixed spacing font
eulermath,% % AMS Euler as math font
pdfspacing,% % improves line filling
listings,% % code output
% parts,% % uncomment for a document divided in parts
listsseparated,
]{classicthesis} % style ClassicThesis
%\setlength{\cftbeforeloftitleskip}{100pt}
%\renewcommand{\cftbeforeloftitleskip}{1000pt}
\usepackage{arsclassica} % modifies some aspects of ClassicThesis package
\let\marginpar\oldmarginpar % for margin notes with \todonotes (otherwise it conflicts with the new definition of \marginpar in classicthesis)
\usepackage[shadow]{todonotes} % for margin notes and comments
\usepackage{bookmark} % bookmarks
%*********************************************************************************
% Bibliography
%*********************************************************************************
%%\usepackage[style=authoryear,hyperref,backref,natbib, ,maxcitenames=1, mincitenames = 1, citestyle=authoryear-comp, backend=biber,sortcites,sorting=ynt]{biblatex}
%\usepackage[style=numeric,hyperref,backref,natbib]{biblatex}
%*********************************************************************************
% Graphics
%*********************************************************************************
\usepackage{graphicx} % images
\usepackage{subfigure}
\usepackage{wrapfig}
\usepackage{tikz}
\usetikzlibrary{mindmap,trees}
\usetikzlibrary{backgrounds}
\usepackage{verbatim}
\usepackage[dvipsnames]{xcolor}
% \usepackage{morefloats}
%\usepackage{chngcntr}
\usepackage{pdfpages}
\usepackage{braket}
\usepackage{amssymb}
\def\delequal{\mathrel{\ensurestackMath{\stackon[1pt]{=}{\scriptstyle\Delta}}}}
\usepackage{lscape}
%*********************************************************************************
% Tables
%*********************************************************************************
\usepackage{tabularx} % table of predefined length
\usepackage{siunitx}
\usepackage{pbox}
\usepackage{colortbl}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{rotating}
\usepackage{hhline}
\setlength\tabcolsep{3pt}
\usepackage{changepage}
% \usepackage[showframe=true]{geometry}
%*********************************************************************************
% Mathematics and symbols
%*********************************************************************************
\usepackage{amsmath, amssymb, amsthm} % mathematics stuff
\usepackage{mathrsfs}
\usepackage{calc}
\usepackage{algorithmic}
\usepackage[ruled]{algorithm}
\usepackage{latexsym}
\usepackage[geometry]{ifsym}
\usepackage{mathabx}
\usepackage{pifont}
%*********************************************************************************
% Personal
%*********************************************************************************
\usepackage[font=itshape]{quoting} % fancy quotation packages. [font=small] old option
\usepackage[english]{varioref} % complete reference package
\usepackage{hyperref}
\usepackage{url}
\usepackage[intoc, english, noprefix]{nomencl} %for list of symbols
\usepackage[normalem]{ulem}
\usepackage{chemfig} %for chemical formulas
\usepackage{eurosym} % euro symbol
\usepackage{epigraph}
\usepackage{calligra}
\usepackage{soul}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\setlength{\epigraphwidth}{\textwidth}
\newcommand{\textgreek}[1]{\begingroup\fontencoding{LGR}\selectfont#1\endgroup}
\lstset{language=R,
basicstyle=\small\ttfamily,
stringstyle=\color{DarkGreen},
otherkeywords={0,1,2,3,4,5,6,7,8,9},
morekeywords={TRUE,FALSE},
deletekeywords={data,frame,length,as,character},
keywordstyle=\color{blue},
commentstyle=\color{DarkGreen},
}
%*********************************************************************************
% Calling personal settings and making nomenclature
%*********************************************************************************
\input{custom-commands}
\input{general-settings} % general custom settings (margins etc)
% \makenomenclature
% \renewcommand{\nomname}{List of Symbols and Abbreviations}
\bibliographystyle{plain}
\begin{document}
%----------------------------------------------------------------------------------------
% TITLE AND AUTHOR(S)
%----------------------------------------------------------------------------------------
\title{\normalfont\spacedallcaps{Hazardous asteroids forecast via Markov random fields}} % The article title
\subtitle{Project for the course Probabilistic modelling (DSE)} % Uncomment to display a subtitle
\author{
Marzio De Corato
}
\date{} % An optional date to appear under the author(s)
%----------------------------------------------------------------------------------------
%----------------------------------------------------------------------------------------
% TABLE OF CONTENTS & LISTS OF FIGURES AND TABLES
%----------------------------------------------------------------------------------------
\maketitle % Print the title/author/date block
%\setcounter{tocdepth}{2} % Set the depth of the table of contents to show sections and subsections only
%\listoffigures % Print the list of figures%
%\listoftables % Print the list of tables
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figures/vesta_asteroid.jpg}
\captionsetup{labelformat=empty}
\caption{A NASA image of asteroid Vesta as taken by the spacecraft Dawn. Vesta is one of the biggest asteroids in the asteroid belt of the solar system (its volume is equal to $7.5\cdot10^{7}$ km$^{3}$). Image taken from \cite{vesta_source}}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{Figures/Tunguska.png}
\captionsetup{labelformat=empty}
\caption{A picture, provided by the Soviet Academy of Sciences in 1927, of Podkamennaya Tunguska. This place was hit, in 1908, by an asteroid of 50--60 m. The asteroid flattened almost 80 million trees over an area of 2,150 km$^{2}$. The explosion yield was close to 12 megatons: for reference, modern US nuclear bombs are in the range of 0.3 kilotons to 1.2 megatons, and the Hiroshima bomb was nearly 15 kilotons. Image taken from \cite{Tunguska_source}}
\end{center}
\end{figure}
\newpage
\epigraph{
\textit{This day may possibly be my last: but the laws of probability, so true in general, so fallacious in particular, still allow about fifteen years. }\\Edward Gibbon (1737-1794)
}
\epigraph{
\textit{In its efforts to learn as much as possible about nature, modern physics has found that certain things can never be “known” with certainty. Much of our knowledge must always remain uncertain. The most we can know is in terms of probabilities.}\\Richard P. Feynman (1918-1988)
}
\newpage
\section*{Abstract} Machine learning algorithms provide a promising approach to classify and predict natural or social phenomena. Differently from the theoretical approach, in which the laws that describe particular phenomena are derived from general assumptions using a mathematical language (e.g., a symmetry, the principle of energy conservation, the three Newton laws, the second principle of thermodynamics \cite{feynmanlectures}), machine learning algorithms start from very general assumptions/expressions, and their parameters are then optimized by likelihood maximization \cite{russell2010artificial,murphy2012machine}. The first approach allows obtaining interpretable predictions, the second one good forecasts with a reduced effort. However, the toll that has to be paid in the second case is the low interpretability \cite{russell2010artificial,murphy2012machine}. Since they provide the list of connections within the features, graphical methods represent a good compromise between the need for interpretability and forecasts obtained without developing a general theory. In order to prove the validity of this statement, I considered a dataset for which the theory that interconnects its features is known: in particular, I chose the asteroid hazardousness data provided by CNEOS \cite{cneos+nasa} and published on Kaggle \cite{kaggle_dataset}. The outcomes of the probabilistic methods for predicting asteroid hazardousness were compared with the ones provided by the theory and with the ones of Random Forest (RF), Support Vector Machines (SVM), Logistic Regression, and Quadratic Discriminant Analysis (QDA). The results show that the forecast performance of the probabilistic methods is better than QDA and almost equal to logistic regression, but lower than RF and SVM. However, since the list of connections correctly reflects the laws of celestial mechanics \cite{murray1999solar}, it can be said that in this case the probabilistic methods provide an interpretable and correct explanation of their mechanism. Therefore, contrary to other machine learning algorithms, such methods can be fully validated scientifically.
\newpage
\tableofcontents % Print the table of contents
\newpage % Start the article content on the second page, remove this if you have a longer abstract that goes onto the second page
\newpage
%----------------------------------------------------------------------------------------
% INTRODUCTION
%----------------------------------------------------------------------------------------
\chapter{Introduction} The description and the forecast of physical\footnote{Where here physical should be understood as measurable} phenomena can be attacked with two different approaches: starting from a restricted set of principles/axioms, one can formulate theoretical models that provide equations that describe such phenomena. This approach will be called here \textit{ab-initio}. On the other side, one can start from data and fit them to some general model (with a fixed or variable number of parameters). These methods will be called here the machine learning (ML) methods. In the ab-initio case, one can fully explain the laws obtained and gain an overall idea of why nature works in this way. The toll that has to be paid in this case is that, since the phenomena are much more complicated than the starting principles, a considerable part of them may be excluded by the assumptions made at the beginning. Furthermore, the calculation of the solution given the dynamical equations can be computationally expensive\footnote{This is the case, for instance, of the Hartree-Fock method or Density Functional Theory: such methods rewrite, with some approximations, the Schroedinger equation in a much more computationally affordable way. However, also in this case, the solution for solids can be costly \cite{martin_2004}}. On the other side, the ML methods, since they make much more general assumptions and work directly on data, are not limited to a particular class of phenomena. Therefore they do not require the development of a specific theory as the \textit{ab-initio} approach does: the same model can be applied to exoplanet habitability as well as to credit risk evaluation. The drawback of this approach is that the underlying mechanism, by which a forecast is preferred to another one, may not be interpretable. Such a dichotomy between these two methods can be explained with an example taken from astronomy: at the beginning of the XX century, astronomers knew that Mercury follows an elliptical orbit that, instead of being fixed, rotates around the Sun, as shown in Fig. \ref{Perihelion_precession2}. Such a phenomenon is called the \textit{precession of the perihelion of Mercury}. The crucial point is that there was no way to explain such a phenomenon from Newton's theory of gravity. At this point, scholars had two paths: describe its motion with an empirical law (perhaps with an empirical modification of Newton's gravity law) or reformulate Newtonian gravity from scratch. The second approach was the one followed by A. Einstein with his general relativity theory (further details are given in the A. Zee textbook \cite{zee2013einstein}). The graphical methods represent a good compromise between these two approaches: they can be used for every physical phenomenon and, since they provide the connection list, they also give an elegant and intuitive representation of their mechanisms. Therefore, if the theory of the involved process is known, one can use this class of algorithms and compare its findings with the theory. In this way, one can evaluate the quality of the forecasts and of the model provided by the graphical methods. The present work aims to perform such an analysis for a dataset that contains the features and the hazardousness\footnote{For Earth} of asteroids. For this dataset, indeed, the relationships between the various features are known.
This paper is organized as follows: first, a brief introduction to the basic theoretical concepts of the statistical methods used here will be provided, then the dataset will be described, and finally the principal results obtained with graphical methods will be presented and compared with other types of machine learning algorithms. Furthermore, a recap of celestial mechanics will be provided in Appendix A; this theory explains the connections and the meaning of the different features contained in the dataset. Finally, the code used to produce the results reported here is provided in Appendix B.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Perihelion_precession2.png}
\caption{Precession of the perihelion of Mercury. Image taken from \cite{Perihelion_precession}}
\label{Perihelion_precession2}
\end{center}
\end{figure}
\chapter{Theoretical Framework}
In this section, I am going to review the theoretical concepts that underlie the probabilistic methods used here: these will be exposed following the approaches of Murphy \cite{murphy2012machine}, Koller et al. \cite{koller2009probabilistic}, Højsgaard et al. \cite{hojsgaard2012graphical} and Russell et al. \cite{russell2010artificial}. Furthermore, a rapid overview of the main concepts of information theory will also be provided, following the Cover \cite{cover2006elements} and MacKay \cite{mackay2003information} approaches, because some of its quantities were used in the preliminary analysis of the dataset. Finally, I conclude this chapter with a rapid overview of algorithm interpretability, which, together with the accuracy and the confusion matrix, is necessary for the assessment of ML performance. On the other side, the concepts related to the celestial mechanics used here will be described, following the Murray approach \cite{murray1999solar}, in Appendix A.
\section{Probabilistic models}
Let us start by supposing that we would like to compactly represent a joint distribution such as \cite{murphy2012machine}:
\begin{equation}
p(x_{1},x_{2},...,x_{n})
\end{equation}
which can represent, for instance, words in a document or pixels of an image. First, we know that, using the chain rule, we can decompose it into the following form \cite{murphy2012machine}:
\begin{equation}
p(x_{1:V})=p(x_{1})p(x_{2}|x_{1})p(x_{3}|x_{2},x_{1})...p(x_{V}|x_{1:V-1})
\end{equation}
where $V$ is the number of variables and $1:V$ stands for $\{1,2,...,V\}$. This decomposition makes explicit the conditional probability tables or, in other terms, the transition probability tensors \cite{wu2017markov}. As one can point out, the number of parameters becomes cumbersome as the number of variables grows: indeed, the number of parameters required scales as $\mathcal{O}(K^{V})$. Such a challenging problem can be attacked by considering the concept of conditional independence, which is defined as \cite{murphy2012machine}:
\begin{equation}
X \perp Y| Z \iff p(X,Y|Z) = p(X|Z)p(Y|Z)
\end{equation}
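To appreciate the savings that such assumptions can bring, consider a worked example with $V=10$ binary variables: the full joint table requires $2^{10}-1=1023$ free parameters, whereas the first-order Markov chain introduced below needs only $1+9\cdot 2=19$.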
A particular case of this definition is the Markov assumption, by which \textit{the future is independent of the past given the present}, or in symbols \cite{murphy2012machine}:
\begin{equation}
p(\textbf{x}_{1:V})=p(x_{1})\prod^{V}_{t=1}p(x_{t}|x_{t-1})
\end{equation}
In this case, a first-order Markov chain is obtained, where the transition tensor is of second order \cite{wu2017markov}. Given this formalism, we are interested in finding an intelligent way to represent such a joint distribution graphically and intuitively: graph theory answers this quest. The nodes can be used to represent the random variables, while the presence or the lack of edges can be used to represent conditional independence \cite{murphy2012machine}. Bayesian networks consider directed edges, while Markov random fields (MRF) use only undirected ones. Consequently, while the concept of a topological ordering, by which parent nodes are labelled lower than their children, is well defined for Bayesian networks, it is not for MRFs. In order to solve this issue, it is helpful to consider the Hammersley-Clifford theorem as stated in \cite{murphy2012machine}:
\begin{theorem}[Hammersley-Clifford]
A positive distribution p(\textbf{y})>0 satisfies the CI properties of an undirected graph G iff p can be represented as a product of factors, one per maximal clique, i.e.
\begin{equation}
p(\textbf{y}|\theta)= \dfrac{1}{Z(\theta)}\prod_{c \in C }\psi_{c}(\textbf{y}_{c}|\theta_{c})
\end{equation}
where C is the set of all the (maximal) cliques of G, and Z($\theta$) is the partition function given by
\begin{equation}
Z(\theta):= \sum_{y}\prod_{c\in C}\psi_{c}(\textbf{y}_{c}|\theta_{c})
\end{equation}
Note that this partition function is what ensures the overall distribution sums to 1
\end{theorem}
This theorem allows us to represent a probability distribution with a potential function for each maximal clique in the graph. A particular case is the Gibbs distribution \cite{murphy2012machine}:
\begin{equation}
p(y|\theta)=\dfrac{1}{Z(\theta)} \exp\left(-\sum_{c}E(y_{c}|\theta_{c})\right)
\end{equation}
where $E(y_{c})>0$ represents the energy associated with the variables in clique $c$. This form can be adapted to an undirected graphical model (UGM) with the following expression \cite{murphy2012machine}:
\begin{equation}
\psi_{c}(y_{c}|\theta_{c})=\exp\left(-E(y_{c}|\theta_{c})\right)
\end{equation}
Finally, to reduce the computational cost, one can consider only pairwise interactions instead of the maximal cliques. This is analogous to what is usually (but not always) done in solid-state physics, where only the interaction between first-neighbour atoms is considered. An example is the Ising model: here we have a lattice of spins that can be either in $\ket{+}$ or in $\ket{-}$, and their interaction is modelled by \cite{murphy2012machine}:
\begin{equation}
\psi_{st}\left(y_{s},y_{t}\right) =
\begin{pmatrix}
e^{w_{st}} & e^{-w_{st}} \\
e^{-w_{st}} & e^{w_{st}} \\
\end{pmatrix}
\end{equation}
where $w_{st}=J$ represents the coupling strength between two neighbouring sites. The collective state is described by \cite{murphy2012machine}:
\begin{equation}
\ket{i_{1},i_{2},...,i_{n}}=\ket{i_{1}}\otimes\ket{i_{2}}\otimes...\otimes\ket{i_{n}}
\end{equation}
where $\otimes$ is the tensor product. If the coupling is positive and finite, we have an associative Markov network: configurations in which neighbouring sites agree are collectively favoured, so two collective states dominate, one with all spins in $\ket{+}$ and one with all spins in $\ket{-}$. Such a situation models, in principle, ferromagnetic materials, where an external magnetic field induces in the material a magnetisation with the same direction. On the other hand, if the material's magnetisation is opposite to the external field, and thus $J<0$, we have an anti-ferromagnetic system in which frustrated states are present. Furthermore, let us consider the unnormalised log probability of a collective state $\textbf{y}=\ket{i_{1},i_{2},...,i_{n}}$ \cite{murphy2012machine}:
\begin{equation}
\log\tilde{\textbf{p}}(y)= -\sum_{s\sim t}y_{s}w_{st}y_{t}
\end{equation}
If we also consider an external field \cite{murphy2012machine}:
\begin{equation}
\log\tilde{\textbf{p}}(y)= -\sum_{s\sim t}y_{s}w_{st}y_{t}+\sum_{s}b_{s}y_{s}
\end{equation}
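As an illustration (not part of the cited references), the following R snippet evaluates this unnormalised log-probability for a small one-dimensional chain with nearest-neighbour coupling $w_{st}=J$ and field $b$; the function name and the numerical values are chosen here purely for illustration.
\begin{verbatim}
# Unnormalised log-probability of a spin configuration y in {-1,+1}^n for a
# one-dimensional Ising chain with nearest-neighbour coupling J and field b,
# following log p~(y) = -sum_{s~t} y_s w_st y_t + sum_s b_s y_s
ising_logptilde <- function(y, J, b) {
  n <- length(y)
  -sum(J * y[1:(n - 1)] * y[2:n]) + sum(b * y)
}

y <- c(1, 1, -1, 1)                          # an example configuration
ising_logptilde(y, J = 0.5, b = rep(0.1, 4))
\end{verbatim}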
This last expression, which includes the external field term, is the Hamiltonian of an Ising system. This is not a simple coincidence: the Hamiltonian of a system represents, roughly speaking, its total energy. Thus, according to the Boltzmann (or Gibbs) distribution, we have \cite{murphy2012machine}:
\begin{equation}
P_{\beta}(\textbf{y})=\dfrac{e^{-\beta H(\textbf{y})}}{Z_{\beta}}
\end{equation}
where $\beta$ is proportional to the inverse of the system temperature. Coming back to the unnormalised probability of a collective state $\textbf{y}$, if we set $\boldsymbol{\Sigma}^{-1}=\textbf{W}$, $\boldsymbol{\mu}=\boldsymbol{\Sigma}\textbf{b}$ and $c=\frac{1}{2}\boldsymbol{\mu}^{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}$, we obtain a Gaussian \cite{murphy2012machine}:
\begin{equation}
\tilde{\textbf{p}}(y)\sim \exp\left( -\frac{1}{2} (\textbf{y}-\boldsymbol{\mu})^{T} \boldsymbol{\Sigma}^{-1} (\textbf{y}-\boldsymbol{\mu}) + c \right)
\end{equation}
In general, we speak of a Gaussian Markov random field for a joint distribution that can be decomposed in the following way \cite{murphy2012machine}:
\begin{equation}
p\left(\textbf{y}|\boldsymbol{\theta}\right) \propto \prod_{s\sim t} \psi_{st}\left(y_{s},y_{t}\right)\prod_{t}\psi_{t}\left(y_{t}\right)
\end{equation}
\begin{equation}
\psi_{st}\left( y_{s},y_{t} \right)=\exp\left( -\dfrac{1}{2} y_{s}\Delta_{st}y_{t} \right)
\end{equation}
\begin{equation}
\psi_{t}\left(y_{t}\right)= \exp \left( -\dfrac{1}{2}\Delta_{tt}y^{2}_{t}+\eta_{t}y_{t}\right)
\end{equation}
\begin{equation}
p\left(\textbf{y}|\boldsymbol{\theta}\right) \propto \exp \left( \boldsymbol{\eta}^{T} \textbf{y}-\dfrac{1}{2}y^{T}\Delta \textbf{y} \right)
\end{equation}
This last expression can be reduced to the multivariate Gaussian if one considers $\boldsymbol{\Delta}=\boldsymbol{\Sigma}^{-1}$ and $\boldsymbol{\eta}=\boldsymbol{\Delta}\boldsymbol{\mu}$. Given the network, we now move on to how the parameters can be estimated. Let us start from a Markov random field in log-linear form \cite{murphy2012machine}:
\begin{equation}
p\left(\textbf{y}|\boldsymbol{\theta}\right) = \dfrac{1}{Z(\theta)}\exp \left( \sum_{c}\boldsymbol{\theta}^{T}_{c}\phi_{c}\left(\textbf{y}\right)\right)
\end{equation}
thus we can define the log-likelihood as \cite{murphy2012machine}:
\begin{equation}
\mathcal{L}\left(\boldsymbol{\theta}\right):= \frac{1}{N}\sum_{i}\log p\left(\textbf{y}_{i}|\boldsymbol{\theta}\right)=\frac{1}{N}\sum_{i}\left[\sum_{c} \boldsymbol{\theta}^{T}_{c}\phi_{c}(y_{i})-\log Z\left(\boldsymbol{\theta}\right)\right]
\end{equation}
\begin{equation}
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\theta}_{c}}=\frac{1}{N}\sum_{i}\left[\phi_{c}(y_{i})-\frac{\partial}{\partial\boldsymbol{\theta}_{c}}\log Z(\boldsymbol{\theta})\right]
\end{equation}
\begin{equation}
\frac{\partial \log Z(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}_{c}}=\mathbb{E}\left[\phi_{c}(\textbf{y})|\boldsymbol{\theta}\right]=\sum_{\textbf{y}}\phi_{c}(\textbf{y})p(\textbf{y}|\boldsymbol{\theta})
\end{equation}
\begin{equation}
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\theta}_{c}}=\left[\frac{1}{N}\sum_{i}\phi_{c}(y_{i})\right]-\mathbb{E}\left[\phi_{c}(\textbf{y})\right]
\end{equation}
In the first term, $\textbf{y}$ is fixed to its observed values, while in the second it is free. This expression can be recast into a more explicit form \cite{murphy2012machine}:
\begin{equation}
\frac{\partial\mathcal{L}}{\partial\boldsymbol{\theta}_{c}}=\mathbb{E}_{p_{emp}}\left[\phi_{c}(\textbf{y})\right]-\mathbb{E}_{p_{(\cdot|\boldsymbol{\theta})}}\left[\phi_{c}(\textbf{y})\right]
\end{equation}
Therefore at the optimum we will have \cite{murphy2012machine}:
\begin{equation}
\mathbb{E}_{p_{emp}}\left[\phi_{c}(\textbf{y})\right]=\mathbb{E}_{p_{(\cdot|\boldsymbol{\theta})}}\left[\phi_{c}(\textbf{y})\right]
\end{equation}
From this expression, it is clear why this method is called moment matching. It is worth noting that such a computation is very expensive from a computational point of view: thus, scholars usually consider other techniques, or at least the stochastic gradient descent method. A full review can be found in \cite{murphy2012machine} and \cite{koller2009probabilistic}. Finally, we consider, as for the dataset analysed in this work, the case where both discrete and continuous variables are present, i.e. $x=\left(i_{1},...,i_{d},y_{1},...,y_{q} \right)$ with $d$ discrete variables and $q$ continuous variables. These are called in the literature mixed interaction models. In this case, the following density has to be considered \cite{hojsgaard2012graphical}:
\begin{equation}
\begin{split}
f(i,y)=& p(i)(2\pi)^{-q/2}\det(\Sigma)^{-1/2} \\
& \exp\left[-\dfrac{1}{2}\left(y-\mu(i)\right)^{T}\Sigma^{-1}\left(y-\mu(i)\right)\right]
\end{split}
\label{gaussMix}
\end{equation}
Which can be rewritten in the exponential family form \cite{hojsgaard2012graphical}:
\begin{equation}
\begin{split}
f(i,y) & = \exp\left\lbrace g(i)+\sum_{u}h^{u}(i)y_{u}-\dfrac{1}{2}\sum_{uv} y_{u}y_{v}k_{uv}\right\rbrace \\
&= \exp\left\lbrace g(i)+h(i)^{T}y-\dfrac{1}{2}y^{T}Ky \right\rbrace
\end{split}
\end{equation}
where $g(i)$, $h(i)$ and $K$ are the canonical parameters. These are connected with the parameters of expression \ref{gaussMix} by the following identities \cite{hojsgaard2012graphical}:
\begin{equation}
\begin{split}
K=&\Sigma^{-1} \\
h(i)=&\Sigma^{-1}\mu(i) \\
g(i)=&\log p(i) -\frac{1}{2}\log \det (\Sigma) \\
&-\dfrac{1}{2}\mu(i)^{T}\Sigma^{-1}\mu(i)-\dfrac{q}{2}\log 2\pi
\end{split}
\end{equation}
Moreover, one can further modify the previous form in order to obtain a particular factorial expansion: such models are referred to as homogeneous mixed interaction models \cite{hojsgaard2012graphical}.
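As a minimal numerical sketch of these identities (the helper function below is hypothetical and not part of the gRim machinery), one can map a set of moment parameters to the canonical ones directly in R:
\begin{verbatim}
# Map the moment parameters (p(i), mu(i), Sigma) of a CG density to the
# canonical parameters (g(i), h(i), K) using the identities above.
canonical_from_moments <- function(p_i, mu_i, Sigma) {
  q <- length(mu_i)
  K <- solve(Sigma)                              # K = Sigma^{-1}
  h <- K %*% mu_i                                # h(i) = Sigma^{-1} mu(i)
  g <- log(p_i) - 0.5*log(det(Sigma)) -
       0.5*t(mu_i) %*% K %*% mu_i - (q/2)*log(2*pi)
  list(g = as.numeric(g), h = h, K = K)
}

canonical_from_moments(p_i = 0.3, mu_i = c(1, -1), Sigma = diag(c(2, 1)))
\end{verbatim}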
\section{Information theory}
Given an ensemble of random variables, we can quantify the amount of information that one variable contains about another: such a quantity is called mutual information and is a crucial concept within information theory. This approach, developed by Claude Shannon decades before probabilistic graphical modelling, represents a complementary way to attack the problem of conditional dependence between random variables. Furthermore, as shown in Fig. \ref{Information_theory_connections}, this theory provides a formidable contribution to different scientific fields. Here I provide some basic concepts of this theory, following the Cover \cite{cover2006elements} and MacKay \cite{mackay2003information} approaches, that allow the concept of mutual information to be defined properly. The founding concept of information theory is entropy, a quantity that expresses the uncertainty of a random variable. Given a random variable $X$ with alphabet (the set of accessible states) $\mathcal{X}$ and probability mass function $p(x)=\Pr\left\lbrace X=x \right\rbrace,\; x \in \mathcal{X}$, we define the entropy of $X$ as $H(X)=-\sum_{x \in \mathcal{X}} p(x)\log p(x)$, where the logarithm is taken with base 2 \cite{cover2006elements}. In an analogous way, the joint entropy of two random variables $(X,Y)$ with a joint distribution $p(x,y)$ is defined as \cite{cover2006elements}:
\begin{equation}
H(X,Y)=-\sum_{x\in \mathcal{X}}\sum_{y\in \mathcal{Y}}p(x,y)\log p(x,y)
\end{equation}
Furthermore, we can define also the conditional entropy as \cite{cover2006elements}:
\begin{equation}
\begin{split}
H(Y|X)=& \sum_{x\in \mathcal{X} }p(x)H(Y|X=x)\\
=& -\sum_{x\in \mathcal{X}}\sum_{y\in \mathcal{Y}}p(x,y)\log p(y|x) \\
=& -E\log p(Y|X)
\end{split}
\end{equation}
The joint entropy and the conditional entropy are related by the chain rule \cite{cover2006elements}:
\begin{equation}
H(X,Y)=H(X)+H(Y|X)
\end{equation}
Such a rule can be extended to the following form \cite{cover2006elements}:
\begin{equation}
H(X,Y|Z)=H(X|Z)+H(Y|X,Z)
\end{equation}
Given a true distribution p and another distribution q, one can quantify how inefficient it is to describe p by means of q using the concept of relative entropy or Kullback-Leibler divergence \cite{cover2006elements,mackay2003information}:
\begin{equation}
D(p||q)=\sum p(x)\log\frac{p(x)}{q(x)}
\end{equation}
As stated by the Gibbs inequality \cite{cover2006elements,mackay2003information}:
\begin{equation}
D(p||q)\geq 0
\end{equation}
this quantity cannot be negative: describing a random variable with a distribution different from its true one can never be more efficient than using the true distribution itself. On these bases, we are now ready to introduce the concept of mutual information, which is defined as \cite{cover2006elements}:
\begin{equation}
\begin{split}
I(X;Y)&=\sum_{x\in \mathcal{X}}\sum_{y\in \mathcal{Y}}p(x,y)\log\dfrac{p(x,y)}{p(x)p(y)} \\
&=D(p(x,y)||p(x)p(y)) \\
&=H(X)-H(X|Y)=H(Y)-H(Y|X)
\end{split}
\end{equation}
As for the joint entropy, also in this case we have a chain rule \cite{cover2006elements}:
\begin{equation}
I(X_{1},X_{2},...,X_{n};Y)=\sum^{n}_{i=1}I(X_{i};Y|X_{i-1},X_{i-2},...,X_{1})
\end{equation}
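As a minimal sketch (an illustrative example with an arbitrary joint table, not taken from the cited texts), these quantities can be computed directly in R:
\begin{verbatim}
# Entropy, conditional entropy and mutual information (in bits) computed
# directly from a small joint probability table p(x, y).
p_xy <- matrix(c(0.3, 0.2,
                 0.1, 0.4), nrow = 2, byrow = TRUE)  # rows: x, columns: y
p_x <- rowSums(p_xy); p_y <- colSums(p_xy)

H <- function(p) -sum(p[p > 0]*log2(p[p > 0]))       # entropy of a distribution
H_X <- H(p_x); H_Y <- H(p_y); H_XY <- H(p_xy)
I_XY        <- H_X + H_Y - H_XY                      # mutual information
H_X_given_Y <- H_XY - H_Y                            # chain rule H(X,Y)=H(Y)+H(X|Y)
c(H_X = H_X, H_XY = H_XY, I_XY = I_XY, H_X_given_Y = H_X_given_Y)
\end{verbatim}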
The concepts reviewed here, and the relations that interconnect them, can be represented as shown in Fig. \ref{Entropy_MI}. Finally, I report the data processing inequality theorem, which connects information theory with Markov chains: if we have a Markov chain $X\rightarrow Y \rightarrow Z$, then $I(X;Y)\geq I(X;Z)$. As for the Gibbs inequality, the underlying idea is that no clever manipulation of the data can improve the inferences that can be made from them \cite{cover2006elements,mackay2003information}. Otherwise, we would have a clear violation of the second principle of thermodynamics (see, for instance, Maxwell's demon \cite{feynman2018feynman}).
\section{Algorithm interpretability}
Besides the accuracy and the other quantities derived from the confusion matrix, another critical feature is algorithm interpretability. This concept expresses how much an algorithm explains its predictions and, in general, its mechanics; in other words, how far the algorithm is from being a black box. Apart from all the legal problems connected with a black-box algorithm\footnote{Think, for instance, of the conditions imposed by the GDPR (e.g. the right to explanation).}, in the author's opinion the results provided by such an algorithm cannot be considered scientific: the scientific method requires explanation, not simply correct forecasts. Following the argument of Tarski et al. \cite{tarski1953undecidable}, which is based on mathematical logic, a formal theory T can be translated into S if and only if S can prove the theorems of T in its own language. On the other hand, we would also like an algorithm, as an explanation, to be complete: we expect it to make correct forecasts for all available data. However, as shown in \cite{doshi2017towards} and \cite{gilpin2018explaining}, the interpretability of an algorithm is linked with its incompleteness. This trade-off is inherent to formal systems, be they algorithms or scientific theories. For instance, let us consider the fall of objects: at first glance, one can consider only objects on Earth. In this case the acceleration is constant, and a straightforward theory is obtained. As one moves on to consider also the interaction between planets and stars, a much more complicated law must be adopted (of which the previous case is a particular instance). The first theory is fully explainable but poorly complete; the second is more difficult to explain but much more complete. This was elegantly stated by A. Einstein \cite{physics-reality}:
\begin{displayquote}
\textit{Science uses the totality of the primary concepts, i.e., concepts directly connected with sense experiences, and propositions connecting them. In its first stage of development, science does not contain anything else. Our everyday thinking is satisfied on the whole with this level. Such a state of affairs cannot, however, satisfy a spirit which is really scientifically minded; because the totality of concepts and relations obtained in this manner is utterly lacking in logical unity. In order to supplement this deficiency, one invents a system poorer in concepts and relations, a system retaining the primary concepts and relations of the “first layer” as logically derived concepts and relations. This new “secondary system” pays for its higher logical unity by having elementary concepts (concepts of the second layer), which are no longer directly connected with complexes of sense experiences. Further striving for logical unity brings us to a tertiary system, still poorer in concepts and relations, for the deduction of the concepts and relations of the secondary (and so indirectly of the primary) layer. Thus the story goes on until we have arrived at a system of the greatest conceivable unity, and of the greatest poverty of concepts of the logical foundations, which is still compatible with the observations made by our senses. We do not know whether or not this ambition will ever result in a definitive system. If one is asked for his opinion, he is inclined to answer no. While wrestling with the problems, however, one will never give up hope that this greatest of all aims can really be attained to a very high degree [...]The essential thing is the aim to represent the multitude of concepts and propositions, close to experience, as propositions, logically deduced from a basis, as narrow as possible, of fundamental concepts and fundamental relations which themselves can be chosen freely (axioms). The liberty of choice, however, is of a special kind; it is not in any way similar to the liberty of a writer of fiction. Rather, it is similar to that of a man engaged in solving a well-designed word puzzle. He may, it is true, propose any word as the solution; but, there is only one word which really solves the puzzle in all its parts. It is a matter of faith that nature– as she is perceptible to our five senses– takes the character of such a well-formulated puzzle. The successes reaped up to now by science do, it is true, give a certain encouragement for this faith.}
\end{displayquote}
Therefore, the evaluation of an algorithm's performance should not be carried out along a single dimension, the accuracy, but should also take its interpretability into account, as shown in Fig. \ref{ML_intepretability}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Information_theory_connections.jpg}
\caption{The connections of information theory with different scientific fields. Image taken from \cite{cover2006elements}}
\label{Information_theory_connections}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Entropy_MI.png}
\caption{The relations between the entropy, conditional entropy and mutual information. Image taken from \cite{cover2006elements}}
\label{Entropy_MI}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/ML_intepretability.png}
\caption{Different ML algorithms classified by their interpretability and by their accuracy. Image taken from \cite{ml_interpretability}}
\label{ML_intepretability}
\end{center}
\end{figure}
\chapter{Dataset description}
The asteroid dataset was retrieved from Kaggle \cite{kaggle_dataset}, which reports, in a more machine-readable form, the dataset of the Center for Near-Earth Object Studies (CNEOS) \cite{cneos+nasa}, a NASA research centre. Among the 40 features present in the dataset, the following were excluded: redundant features (e.g. distances given in miles instead of kilometres), features connected only to the alternative names of the asteroid, features connected to the name of the orbit, and the one connected with the orbiting planet (since it was the Earth for all records). The features retained were therefore reduced to 22. Their description is postponed to Appendix A, since it cannot be separated from the celestial mechanics concepts. Their enumeration is provided in Tab. \ref{tab_features}. Since only 16\% of the asteroids were hazardous, I reduced the number of non-hazardous ones in order to increase the proportion of hazardous ones. For this purpose, I constructed a dataset in which the hazardous/non-hazardous proportion was 1:5: all the hazardous asteroids were included, while the non-hazardous ones were randomly extracted\footnote{The author is aware that, in principle, the proportion should be far less unbalanced; however, the toll to be paid for this operation is an overall reduction of the cases in the dataset, which lowers the performance of the algorithms used here. The proportion used is a trade-off between these two contrasting requirements.}. Furthermore, the continuous measures in the dataset were demeaned and standardised.
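A minimal sketch of this rebalancing and standardisation step is shown below; the object and column names (\texttt{asteroids}, \texttt{Hazardous}) are hypothetical, and the actual code used for the thesis (Appendix B) may differ.
\begin{verbatim}
set.seed(1)
haz    <- asteroids[asteroids$Hazardous == TRUE, ]    # keep all hazardous cases
nonhaz <- asteroids[asteroids$Hazardous == FALSE, ]
nonhaz <- nonhaz[sample(nrow(nonhaz), 5*nrow(haz)), ] # random 1:5 subsample
balanced <- rbind(haz, nonhaz)

num_cols <- sapply(balanced, is.numeric)
balanced[num_cols] <- scale(balanced[num_cols])       # demean and standardise
\end{verbatim}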
\begin{table}[]
\caption{The features used for the present analysis. The units of measure are reported in the original dataset \cite{kaggle_dataset}. The explanation of these features is provided in Appendix A. }
\begin{center}
\begin{tabular}{c|c}
\hline
\textbf{Features} & \textbf{Type} \\ \hline
Neo Reference ID & not used \\ \hline
Absolute Magnitude & Continuous \\ \hline
Est Dia in KM (min) & Continuous \\ \hline
Est Dia in KM (max) & Continuous \\ \hline
Close Approach Date & Continuous \\ \hline
Epoch Date Close Approach & Continuous \\ \hline
Relative\_Velocity & Continuous \\ \hline
Miss\_Dist & Continuous \\ \hline
Min\_Orbit\_Intersection & Continuous \\ \hline
Jupiter\_Tisserand\_Invariant & Continuous \\ \hline
Epoch\_Osculation & Continuous \\ \hline
Eccentricity & Continuous \\ \hline
Semi Major Axis & Continuous \\ \hline
Inclination & Continuous \\ \hline
Asc Node Longitude & Continuous \\ \hline
Orbital Period & Continuous \\ \hline
Perihelion Distance & Continuous \\ \hline
Perihelion Arg & Continuous \\ \hline
Perihelion Time & Continuous \\ \hline
Mean\_Anomaly & Continuous \\ \hline
Mean\_Motion & Continuous \\ \hline
Hazardous & Categorical (Binary)
\end{tabular}
\end{center}
\label{tab_features}
\end{table}
\chapter{Results and discussion}
This chapter is organised in the following way: first, I report the results of the preliminary analysis performed on the dataset, namely the factor analysis of mixed data and the mutual information analysis of the asteroids' continuous variables versus their hazardousness. Then follows the analysis of the dataset performed with the probabilistic methods: after a preliminary analysis of the continuous variables, the mixed interaction model and the minForest model obtained for the whole dataset are presented and discussed. Finally, the probabilistic models obtained are compared with the outputs and performances of four machine learning algorithms (random forest, support vector machines, quadratic discriminant analysis and logistic regression).
\section{Preliminary analysis}
The first inspection performed on the dataset concerned the density distributions of a selection of continuous features that are known, from celestial mechanics, to be important for predicting the asteroids' dangerousness. These are reported in Fig. \ref{Density_relevant}. It can be seen that there is an evident distinction for the min orbit intersection, perihelion distance and eccentricity (note that these three parameters are different faces of the same coin). On the other hand, for the absolute magnitude the distinction seems less clear. This result, together with the overlap in the three previous features, follows from the fact that, according to the CNEOS definition (see Appendix A), a hazardous asteroid must have a min orbit intersection lower than a fixed threshold and an absolute magnitude brighter than a certain threshold: the fulfilment of both conditions makes the asteroid dangerous. Next, I moved on to a more systematic analysis with FAMD and mutual information. Figs. \ref{FADM}, \ref{FAMD_Quantitative variables} and \ref{FAMD_Individuals_(c)} report the main results of the FAMD (performed with the FactoMineR package \cite{le2008factominer}): in particular, from the correlation circle in Fig. \ref{FAMD_Quantitative variables}, the different correlations dictated by the laws of celestial mechanics can be recognised. For instance, there is a strong anti-correlation between the mean motion and the semi-major axis, due to the Kepler laws discussed in Appendix A. Furthermore, we see that the mean motion is, correctly, almost independent of the diameter (max or min) of the asteroid. As explained in Appendix A, this is another result of Newton's theory of gravitation for a two-body interaction: the mean motion of an asteroid, as long as its mass is much smaller than that of the Earth, is independent of its mass. On the other hand, the fact that the relative velocity is correlated with the diameter is a spurious correlation. At this point one can ask why the mean motion and the relative velocity are orthogonal: this outcome will be clarified from a theoretical point of view in Appendix A and also with the graphical models. Briefly, the motion of an object on an ellipse, as seen from a focus (the Earth), is not uniform: it is faster as the two bodies approach. Besides this inspection, a further analysis based on the concept of mutual information (summarised in the previous chapter) was performed: its result is reported in Fig. \ref{Mutual_information}. This figure summarises the ranking of the features considered in the dataset for the dangerousness classification of the asteroids, according to the following expression \cite{kratzer2018varrank}:
\begin{equation}
g(\alpha,\textbf{C},\textbf{S},f_{i})=MI(f_{i};\textbf{C})-\sum_{f_{s}\in S}\alpha(f_{i},f_{s},\textbf{C},\textbf{S})MI(f_{i};f_{s})
\end{equation}
where the first term $MI(f_{i};\textbf{C})$ is called relevance and measures the mutual information between the feature set of interest $\textbf{C}$ (only Hazardous in our case) and the analysed feature $f_{i}$; the term $MI(f_{i};f_{s})$ is called redundancy and measures the MI between the analysed feature and a chosen set $\textbf{S}$ of already selected features. Finally, $\alpha(f_{i},f_{s},\textbf{C},\textbf{S})$ is a normalisation function, which in our case was set to \cite{kratzer2018varrank}:
\begin{equation}
\alpha(f_{i},f_{s},\textbf{C},\textbf{S})=\dfrac{1}{|\textbf{S}|}
\end{equation}
following the Peng et al. approach \cite{peng2005feature}. It can be seen that the first place in the mutual information ranking, looking at the diagonal elements, is taken by the minimum orbit intersection: this is correct, since it is one of the parameters used by NASA to decide whether an asteroid is hazardous or not (see Appendix A). The second place is occupied by the epoch date of close approach. The third place goes to the eccentricity: this parameter is entangled with the minimum orbit intersection, and thus it is reasonable that it is important. Then two parameters related to the size of the asteroid appear: the estimated min diameter and the absolute magnitude. This fact is meaningful, since an asteroid with too small a volume will be destroyed by the Earth's atmosphere. Indeed, the absolute magnitude and the min orbit intersection are the features used by CNEOS to classify whether an asteroid is hazardous or not (see Appendix A). On the other hand, the remaining parameters seem to have too low an MI score to be of interest in this preliminary analysis.
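A minimal sketch of this preliminary analysis is given below; \texttt{balanced} is the hypothetical rebalanced data frame introduced earlier, and the \texttt{varrank()} arguments follow the package documentation as I recall it, so they should be treated as assumptions rather than the exact thesis code.
\begin{verbatim}
library(FactoMineR)
library(varrank)

balanced$Hazardous <- factor(balanced$Hazardous)  # FAMD needs a qualitative column
famd_res <- FAMD(balanced, graph = FALSE)         # factor analysis of mixed data
plot(famd_res)                                    # main FAMD plot

# Mutual-information ranking of the features against Hazardous
# (argument names as in the varrank documentation, to be double-checked)
mi_rank <- varrank(data.df = balanced,
                   variable.important = "Hazardous",
                   method = "peng",
                   discretization.method = "sturges",
                   algorithm = "forward", scheme = "mid")
plot(mi_rank)
\end{verbatim}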
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/DENSITY_Perihelion_Distance.pdf}
\subcaption{ \begin{center}
a) Perihelion Distance
\end{center}}
\vspace{4ex}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/DENSITY_Eccentricity.pdf}
\subcaption{ \begin{center}
b) Eccentricity
\end{center}}
\vspace{4ex}
\end{minipage} \\
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/DENSITY_Min_orbit_intersection.pdf}
\subcaption{ \begin{center}
c) Min orbit intersection
\end{center}}
\vspace{4ex}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/DENSITY_Absolute_magnitude.pdf}
\subcaption{ \begin{center}
d) Absolute magnitude
\end{center}}
\vspace{4ex}
\end{minipage}
\caption{Comparison between the density distributions of hazardous (red) and non-hazardous (light blue) asteroids for a selected set of features that, according to the theory, are interesting. Plot obtained with the ggplot2 package \cite{ggplot2}}
\label{Density_relevant}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/FAMD.pdf}
\caption{The FAMD main plot in which the correlation between the continuous and discrete variables is reported. Plot obtained from FactoMineR package \cite{le2008factominer}}
\label{FADM}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/FAMD_Quantitative variables.pdf}
\caption{The FAMD correlation circle for continuous variables as obtained from the FactoMineR package \cite{le2008factominer}}
\label{FAMD_Quantitative variables}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/FAMD_Individuals_(c).pdf}
\caption{Graph of individuals, for the qualitative variables, as obtained from the FactoMineR package \cite{le2008factominer}}
\label{FAMD_Individuals_(c)}
\end{center}
\end{figure}
\begin{landscape}
\begin{figure}
\begin{center}
\includegraphics[width=1.75\textheight]{Figures/Mutual_information.pdf}
\caption{Mutual information as obtained with the varrank package \cite{kratzer2018varrank}}
\label{Mutual_information}
\end{center}
\end{figure}
\end{landscape}
\pagebreak
\section{Probabilistic models} The analysis with the graphical models started by inspecting the relations between the continuous variables in the dataset; therefore, in this first step, the binary variable \textit{Hazardous} was excluded. For this purpose, among the different methods available, I considered the \textit{graphical least absolute shrinkage and selection operator} (GLASSO) as implemented in the \textit{glasso} R package \cite{friedman2008sparse,glasso}. After the different tests reported in Fig. \ref{GLASSO_convergence}, I took as the final result the graph obtained with $\rho$ (the parameter that penalises additional connections) equal to $0.3$. This choice is motivated by the fact that the connections dictated by celestial mechanics are correctly reproduced with this value. For instance, it can be seen that the diameter of the asteroids is not connected to any of the features related to their motion. Such a result is meaningful, since the volume/mass of the asteroids is far lower than the mass of the Earth; thus, as explained in Appendix A, there is no way the asteroid orbit can be modified by its mass. In addition, it can be seen that the close approach date and the epoch date are linked to each other but not to any other feature. This result is correct, since date and epoch are set on an arbitrary scale. Also, as expected, the perihelion time and the epoch of osculation are linked, with no dependence on the orbital parameters. Furthermore, the mean motion is not connected directly to the relative velocity but, correctly, both are connected to the perihelion distance.
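A minimal sketch of the GLASSO step is given below (not the exact thesis code; \texttt{balanced} is the hypothetical standardised data frame introduced earlier).
\begin{verbatim}
library(glasso)
library(igraph)

cont <- balanced[ , sapply(balanced, is.numeric)]   # continuous variables only
S <- cov(cont)                                      # empirical covariance

for (rho in c(0.1, 0.2, 0.3, 0.4)) {
  fit <- glasso(S, rho = rho)                       # penalised precision matrix
  adj <- (abs(fit$wi) > 1e-8) * 1                   # nonzero entries -> edges
  diag(adj) <- 0
  g <- graph_from_adjacency_matrix(adj, mode = "undirected")
  plot(g, vertex.label = colnames(cont), main = paste("rho =", rho))
}
\end{verbatim}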
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/GLASSO_0.1.pdf}
\subcaption{$\rho$=0.1}
\vspace{4ex}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/GLASSO_0.2.pdf}
\subcaption{$\rho$=0.2}
\vspace{4ex}
\end{minipage} \\
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/GLASSO_0.3.pdf}
\subcaption{$\rho$=0.3}
\vspace{4ex}
\end{minipage}%%
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/GLASSO_0.4.pdf}
\subcaption{$\rho$=0.4}
\vspace{4ex}
\end{minipage}%%
\caption{GLASSO analysis, performed with the glasso package \cite{friedman2008sparse,glasso}, with different $\rho$ parameter (the one that penalize further connections) for the Asteroid dataset without the discrete variable Hazardous. The plots were obtained with the \textit{igraph} package for R \cite{igraph}}
\label{GLASSO_convergence}
\end{figure}
Let us now move on to the mixed interaction model, where the hazardousness feature was also considered. The first model was calculated with the mgm package \cite{mgm,haslbeck2015mgm}, where nodewise regression is used \cite{meinshausen2006high}. The $k$ parameter was set equal to two, and a cross-validation (CV) with ten folds was considered. The result is reported in Fig. \ref{mgm}. From this figure, it can be seen that the features directly connected to the Hazardous node are the absolute magnitude, the min orbit intersection and the eccentricity. All these connections are supported by the theory. The min orbit intersection is the main parameter for the hazardousness value. The eccentricity, in turn, can be thought of as a parameter describing how close the celestial body comes at perihelion (indeed, this parameter is correctly linked with the perihelion distance). The absolute magnitude is also a meaningful parameter for the hazard evaluation since, if the asteroid is too small, it will be destroyed by the Earth's atmosphere. Furthermore, the parameters related to the diameters are again not connected with the orbital parameters. In addition, the relative velocity is not directly related to the mean motion, but the orbital parameters lie between them, as expected. Moving to the relations among the orbital parameters, it can be seen that there is a negative relation between the eccentricity and the perihelion distance: this is meaningful, since it comes from the definition of eccentricity. The other side of this coin is the positive relationship between the semi-major axis and the eccentricity. Finally, the connections with the Jupiter Tisserand invariant come directly from the definition of this parameter. These arguments become much clearer with the equations and plots provided in Appendix A. Thus, it can be said that the model produced is almost fully consistent with the astronomical laws. On this basis, the performance of the model was evaluated in terms of the confusion matrix, the ROC (receiver operating characteristic) curve and the $\phi$ coefficient (also known as the Matthews correlation coefficient); for this purpose the caret R package \cite{kuhn2008building,caret} was used. The confusion matrix obtained is reported in Fig. \ref{mgm_confusion}; the ROC curve is given in Fig. \ref{ROC_mgm} and corresponds to $\phi=0.6$. These performances will be commented on in the next section, where the performances of other ML algorithms on the same dataset are reported. Besides the mgm algorithm/package, other approaches were also considered for the evaluation of the mixed interaction graphical model: the function \textit{mmod()} of the gRim package \cite{hojsgaard2012graphical} and \textit{minforest()} of gRapHD (which uses the minForest method) \cite{de2009high}. In both cases, a stepwise algorithm was used. Their results are reported in Figs. \ref{mmod} and \ref{minforest}. It can be seen that in these models the minimum diameter is linked with the Hazardous feature, which is correct, but, without any physical meaning, this quantity is also linked to the features connected with the orbital parameters. A weak link with the orbital parameters is meaningful for the magnitude (as in the mgm model for the mean anomaly), because the magnitude depends on the distance between the asteroid and the observer. However, a direct link from the volume parameters (and thus from the mass) to the orbital ones is not acceptable.
In principle, one could set these links as forbidden in the algorithm (a blacklist), but in the author's view a model where a better\footnote{In terms of consistency with the astronomical laws.} result is obtained without constraints should be preferred to one obtained with a large number of constraints. Therefore, the mmod and minforest models are rejected in favour of the mgm one, obtained without constraints (blacklist and/or whitelist).
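A minimal sketch of the mgm fit and its evaluation is given below; the argument names and the structure of the prediction object follow the mgm and caret documentation as I recall them, so they should be treated as assumptions rather than the exact thesis code.
\begin{verbatim}
library(mgm)
library(caret)

X <- balanced
X$Hazardous <- as.numeric(factor(X$Hazardous))      # categorical node coded 1/2
type  <- ifelse(names(X) == "Hazardous", "c", "g")  # "c" categorical, "g" Gaussian
level <- ifelse(names(X) == "Hazardous", 2, 1)

fit <- mgm(data = as.matrix(X), type = type, level = level,
           k = 2, lambdaSel = "CV", lambdaFolds = 10)

pred    <- predict(fit, data = as.matrix(X))        # nodewise predictions
haz_col <- which(names(X) == "Hazardous")
confusionMatrix(factor(pred$predicted[, haz_col]), factor(X$Hazardous))
\end{verbatim}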
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/mgm.pdf}
\caption{The graphical model obtained with the mixed interaction model as implemented in the mgm package \cite{mgm,haslbeck2015mgm}. The plot was obtained with the qgraph package \cite{qgraph}}
\label{mgm}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/mgm_confusion.pdf}
\caption{The confusion matrix of the graphical model reported in Fig. \ref{mgm} as obtained from the Caret package \cite{kuhn2008building,caret}}
\label{mgm_confusion}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/ROC_mgm.pdf}
\caption{The ROC (Receiver operating characteristic) curve as obtained from the ROCR package \cite{sing2005rocr} and ggplot2 \cite{ggplot2}. The corresponding $\phi$ value is $0.6$ }
\label{ROC_mgm}
\end{center}
\end{figure}
\begin{landscape}
\begin{figure}
\begin{center}
\includegraphics[width=1.5\textheight]{Figures/mmod.pdf}
\caption{The graphical mixed interaction model obtained with the gRim package \cite{hojsgaard2012graphical}. The plot was obtained with the igraph package for R \cite{igraph} }
\label{mmod}
\end{center}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{figure}
\begin{center}
\includegraphics[width=1.5\textheight]{Figures/minforest.pdf}
\caption{The graphical mixed interaction model obtained with the gRapHD package \cite{de2009high} with a stepwise algorithm. The plot was obtained with the qgraph \cite{qgraph} package }
\label{minforest}
\end{center}
\end{figure}
\end{landscape}
\pagebreak
\section{Machine learning algorithms} The performance of the mgm model will now be compared with that of other ML algorithms. These include the random forest (RF, as implemented in the randomForest package \cite{rfor}), the support vector machine (SVM, as implemented in the e1071 package \cite{dimitriadou2008misc}), the quadratic discriminant analysis (QDA, as implemented in the MASS package \cite{MASS}) and the logistic regression (as implemented in the stats package \cite{stats}). Their performances are reported in Figs. \ref{CF_ML} and \ref{ROC_ML}, as well as in Tab. \ref{phi_values}. This comparison shows that the RF and the SVM outperform the mgm graphical method, while the logistic regression has similar performance. Thus, one may wonder about the advantage of using a graphical method instead of a random forest or an SVM, since its performance seems lower. The answer is the interpretability of the model provided: the RF, at least, can provide a variable importance ranking, as reported in Fig. \ref{RF_Importance} (note that, apart from the min orbit intersection, there is a slight reshuffling of the feature importance with respect to the ranking established by the MI in Fig. \ref{Mutual_information}). However, apart from this interpretation, the RF is a black box, as are the SVM, the logistic regression and the QDA. Conversely, the probabilistic graphs, by providing the list of connections among the random variables, give the user an interpretable model whose properties can also be compared, discussed and validated against the theory. Thus, the model developed in this way allows a more scientific evaluation than the black-box ones. In the author's view, this characteristic compensates for the gap in predictive power with respect to the RF or SVM methods.
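A minimal sketch of this benchmark is given below (not the exact thesis code; \texttt{balanced} is the hypothetical rebalanced data frame introduced earlier).
\begin{verbatim}
library(randomForest); library(e1071); library(MASS); library(caret)

balanced$Hazardous <- factor(balanced$Hazardous)
idx   <- createDataPartition(balanced$Hazardous, p = 0.7, list = FALSE)
train <- balanced[idx, ]; test <- balanced[-idx, ]

rf_fit  <- randomForest(Hazardous ~ ., data = train)
svm_fit <- svm(Hazardous ~ ., data = train)
qda_fit <- qda(Hazardous ~ ., data = train)
log_fit <- glm(Hazardous ~ ., data = train, family = binomial)

rf_pred  <- predict(rf_fit,  test)
svm_pred <- predict(svm_fit, test)
qda_pred <- predict(qda_fit, test)$class
log_prob <- predict(log_fit, test, type = "response")
log_pred <- factor(ifelse(log_prob > 0.5,
                          levels(test$Hazardous)[2], levels(test$Hazardous)[1]),
                   levels = levels(test$Hazardous))

confusionMatrix(rf_pred, test$Hazardous)  # repeat for svm_pred, qda_pred, log_pred
\end{verbatim}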
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/RF_confusion.pdf}
\subcaption{Random Forest}
\vspace{4ex}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/SVM_confusion.pdf}
\subcaption{SVM}
\vspace{4ex}
\end{minipage} \\
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/QDA_confusion.pdf}
\subcaption{QDA}
\vspace{4ex}
\end{minipage}%%
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/Logisic_confusion.pdf}
\subcaption{Logistic}
\vspace{4ex}
\end{minipage}%%
\caption{Confusion matrices for a selected set of ML algorithms as obtained from the Caret package \cite{kuhn2008building,caret}}
\label{CF_ML}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/ROC_RF.pdf}
\subcaption{Random Forest}
\vspace{4ex}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/ROC_SVM.pdf}
\subcaption{SVM}
\vspace{4ex}
\end{minipage} \\
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/ROC_QDA.pdf}
\subcaption{QDA}
\vspace{4ex}
\end{minipage}%%
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{Figures/ROC_logistic.pdf}
\subcaption{Logistic}
\vspace{4ex}
\end{minipage}%%
\caption{ROC curves for a selected set of ML algorithms. Plot obtained with the caret and ggplot2 packages \cite{kuhn2008building,ggplot2}}
\label{ROC_ML}
\end{figure}
\begin{table}[]
\caption{$\phi$ coefficient (also known as the Matthews correlation coefficient) for a selected set of ML algorithms, compared with the mgm model}
\begin{center}
\begin{tabular}{c|c}
Algorithm & $\phi$ \\ \hline
RF & 0.9876 \\ \hline
SVM & 0.7111 \\ \hline
logistic & 0.6173 \\ \hline
mgm & 0.5997 \\ \hline
QDA & 0.5562
\end{tabular}
\end{center}
\label{phi_values}
\end{table}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/RF_Importance.pdf}
\caption{Variable importance according to the random forest algorithm as implemented in \cite{rfor} package. Plot obtained with the ggplot2 package \cite{ggplot2}}
\label{RF_Importance}
\end{center}
\end{figure}
\chapter{Conclusions and Outlook}
The GLASSO (with a penalising parameter $\rho=0.3$) and the mgm algorithms provided graphical models that almost entirely reproduce the physical connections among the dataset features. In particular, in both models none of the orbital parameters is connected to the diameter of the asteroid, as stated by the theory. Furthermore, the mgm model can provide forecasts about the hazardousness of the asteroids with performances equal to or better than those of the logistic and QDA algorithms. On the other hand, although the mgm performances are lower than those of the RF and SVM, its interpretability is higher; such a feature compensates for the gap in forecast performance. As a future outlook, I plan to extend this analysis to physical phenomena for which there is no deterministic theory but only a probabilistic one\footnote{Following the argument of I. Prigogine \cite{prigogine2017non,prigogine1997end,nicolis1989exploring,prigogine1978time}, it must be said that a purely deterministic theory, if it exists, cannot be fully applied. Therefore, in principle, all possible theories should be considered probabilistic, because the initial conditions cannot be known with infinite precision. Even if one knew the equations of motion perfectly and could solve them exactly (which in most cases is not possible), a source of error would still persist and propagate with the Lyapunov exponent. However, as shown by Prigogine, if a system is at equilibrium this instability can be neglected, and in that case a deterministic approach is meaningful.}. For instance, one can consider earthquakes and volcanic eruptions \cite{sornette1989self,sparks2003forecasting,sornette2006critical}.
\chapter{Appendix A: Concepts of astronomy}
This Appendix is dedicated to reviewing the basic concepts of astronomy needed to understand the dataset's features and how these are interconnected. First, the basic concepts of celestial mechanics will be recapped following the Murray approach \cite{murray1999solar}: these describe the orbital parameters of the dataset. Then the concepts connected to the observation of the asteroids will be reviewed following the Burbine textbook \cite{burbine2016asteroids}. Finally, using the previous definitions, the classification of the asteroids and, in particular, the definition of their hazardousness will be provided following the Center for Near-Earth Object Studies (CNEOS) statements \cite{nasa_classification}.
\section{Celestial mechanics}
Let us start by considering two masses $m_{1}$ and $m_{2}$, which in the present case will be respectively the planet Earth and the asteroid. Their positions are given, respectively, by the two vectors $\textbf{r}_{1}$ and $\textbf{r}_{2}$, with the origin $O$ fixed in an inertial frame. Furthermore, we can define the relative position with the vector $\textbf{r}=\textbf{r}_{2}-\textbf{r}_{1}$. Since we suppose that the masses do not interact through the electromagnetic force, they are bound only by the gravitational interaction, which is given by Newton's law \cite{murray1999solar}:
\begin{equation}
\textbf{F}_{1}=\mathcal{G} \cdot \frac{m_{1}m_{2}}{r^{3}}\textbf{r}=m_{1} \ddot{\textbf{r}}_{1}
\end{equation}
\begin{equation}
\textbf{F}_{2}=-\mathcal{G} \cdot \frac{m_{1}m_{2}}{r^{3}}\textbf{r}=m_{2} \ddot{\textbf{r}}_{2}
\end{equation}
where $\mathcal{G}$ is the universal gravitational constant, and the second equality follows from Newton's second law $\textbf{F}=m\cdot \textbf{a}$, in which $\textbf{a}$ is the acceleration, calculated as the second derivative of the position vector $\textbf{r}$. Setting $\ddot{\textbf{r}}=\ddot{\textbf{r}}_{2}-\ddot{\textbf{r}}_{1}$ (thus considering the motion of the second body with respect to the first) and $\mu=\mathcal{G}(m_{1}+m_{2})$, the following differential equation is obtained from the previous two \cite{murray1999solar}:
\begin{equation}
\dfrac{d^{2}\textbf{r}}{dt^{2}}+\mu\dfrac{\textbf{r}}{r^{3}}=0
\end{equation}
It can be seen that $\textbf{r}$ and $\dot{\textbf{r}}$ always lie in the same plane: this is because the vector product $\textbf{r} \times \ddot{\textbf{r}}=0$; thus, integrating, one finds that $\textbf{r} \times \dot{\textbf{r}}=\textbf{h}$, where $\textbf{h}$ is a constant vector. Furthermore, the problem can be simplified by using polar coordinates $\hat{\textbf{r}}$ and $\hat{\boldsymbol{\theta}}$. In this case, position, velocity and acceleration have the following form \cite{murray1999solar}:
\begin{equation}
\textbf{r}=r\hat{\textbf{r}}
\end{equation}
\begin{equation}
\dot{\textbf{r}}=\dot{r}\hat{\textbf{r}}+r\dot{\theta}\hat{\boldsymbol{\theta}}
\label{eq_dyn_nop}
\end{equation}
\begin{equation}
\ddot{\textbf{r}}=\left(\ddot{r}-r\dot{\theta}^{2}\right)\hat{\textbf{r}}+\left[\dfrac{1}{r}\frac{d}{dt}\left(r^{2}\dot{\theta}\right)\right]\hat{\boldsymbol{\theta}}
\end{equation}
Therefore, the vector product between the position and the velocity has the following form \cite{murray1999solar}:
\begin{equation}
\textbf{h}=r^{2}\dot{\theta}\hat{\textbf{z}}
\end{equation}
where $\hat{\textbf{z}}$ is a unit vector perpendicular to the orbital plane; the modulus of $\textbf{h}$ is equal to \cite{murray1999solar}:
\begin{equation}
h=r^{2}\dot{\theta}
\end{equation}
If we consider the motion of the body $m_{2}$ in the time interval $\delta t$, the area $\delta A$ illustrated in Fig. \ref{Area_dynamics} is \cite{murray1999solar}:
\begin{equation}
\delta A \approx \dfrac{1}{2} r(r+dr)\sin(\delta\theta) \approx \dfrac{1}{2} r^{2}\delta\theta
\end{equation}
where the first-order Taylor expansion was used. Therefore \cite{murray1999solar}:
\begin{equation}
\dfrac{dA}{dt}=\dfrac{1}{2}r^{2}\dfrac{d\theta}{dt}=\dfrac{1}{2}h
\end{equation}
but we know that $h$ is constant; therefore the areal velocity $dA/dt$ is constant. This is the second Kepler law. Let us now come back to the equation of motion: recasting it in polar coordinates, its radial component reads \cite{murray1999solar}:
\begin{equation}
\ddot{r}-r\dot{\theta}^{2}=-\frac{\mu}{r^{2}}
\end{equation}
This differential equation can be rewritten as a harmonic oscillator with the substitutions $u=1/r$ and $h=r^{2}\dot{\theta}$ \cite{murray1999solar}:
\begin{equation}
\dot{r}=-\frac{1}{u}\dfrac{du}{d\theta}\dot{\theta}=-h\frac{du}{d\theta}
\end{equation}
\begin{equation}
\ddot{r}=-h\dfrac{d^{2}u}{d\theta^{2}}\dot{\theta}=-h^{2}u^{2}\frac{d^{2}u}{d\theta^{2}}
\end{equation}
\begin{equation}
\dfrac{d^{2}u}{d\theta^{2}}+u=\frac{\mu}{h^{2}}
\end{equation}
\begin{equation}
u=\frac{\mu}{h^{2}}\left[1+e\cos(\theta-\phi)\right]
\end{equation}
where the integration constants e and $\phi$ are respectively the amplitude and the phase. Therefore we have \cite{murray1999solar}:
\begin{equation}
r=\dfrac{p}{1+e\cos(\theta-\phi)}
\end{equation}
In this form we can recognise in $e$ the \textcolor{red}{eccentricity}, while $p$ is the semilatus rectum \cite{murray1999solar}:
\begin{equation}
p=\frac{h^{2}}{\mu}
\end{equation}
Depending on the eccentricity we have four possible conics \cite{murray1999solar}:
\begin{itemize}
\item circle: $e=0$ \quad $p=a$
\item ellipse: $0<e<1$ \quad $p=a(1-e^{2})$
\item parabola: $e=1$ \quad $p=2q$
\item hyperbola: $e>1$ \quad $p=a(e^{2}-1)$
\end{itemize}
in which $a$ is the semi-major axis of the conic. The shape of these orbits is reported in Fig. \ref{Conics}. All the asteroids considered here have an eccentricity $0<e<1$: they follow elliptical orbits in which $m_{1}$ (the Earth, in our two-body setup) lies at one of the two focal points. It is worth noting that this is the first Kepler law. Looking at Fig. \ref{Elliptical_orbit}, we can define the point of minimum distance between $m_{1}$ and the orbiting body as the pericentre or \textcolor{red}{perihelion}, and the point of maximum distance as the apocentre or aphelion. The \textcolor{red}{semi-major axis} $a$ is half the distance between the pericentre and the apocentre, while $b$ denotes the semi-minor axis. Using the following identity \cite{murray1999solar}:
\begin{equation}
b^{2}=a^{2}(1-e^{2})
\end{equation}
we get \cite{murray1999solar}:
\begin{equation}
r=\frac{a(1-e^{2})}{1+e\cos(\theta-\phi)}
\label{eq-mot}
\end{equation}
Furthermore, the third Kepler law can be quickly obtained by considering the area swept out in one \textcolor{red}{orbital period} $T$ (the time needed to complete a full revolution of the orbit), $A=\pi ab$. Since we know that this area is equal to $hT/2$ and that $h^{2}=\mu a(1-e^{2})$, we have \cite{murray1999solar}:
\begin{equation}
T^{2}=\dfrac{4\pi^{2}}{\mu}a^{3}
\end{equation}
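As a quick numerical check (an illustrative example added here), in heliocentric units, with $a$ in au and $T$ in years, $\mu\simeq 4\pi^{2}$ for a test mass orbiting the Sun, so $T=a^{3/2}$:
\begin{verbatim}
# Kepler's third law in heliocentric units: with a in au and T in years,
# mu = 4*pi^2 for a test mass orbiting the Sun, so T = a^(3/2).
period_years <- function(a_au) a_au^1.5
period_years(c(1, 1.5, 2.5))   # 1.00, ~1.84 and ~3.95 years
\end{verbatim}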
If we have two bodies, of masses $m$ and $m'$, that orbit around the Earth (of mass $m_{c}$), we can use the previous equation to obtain \cite{murray1999solar}:
\begin{equation}
\frac{m_{c}+m}{m_{c}+m'}=\left(\frac{a}{a'}\right)^{3}\left(\frac{T'}{T}\right)^{2}
\end{equation}
But since $m,m'\ll m_{c}$, the left-hand side is approximately unity \cite{murray1999solar}:
\begin{equation}
\frac{m_{c}+m}{m_{c}+m'}=\left(\frac{a}{a'}\right)^{3}\left(\frac{T'}{T}\right)^{2}\approx 1
\end{equation}
We see that the relation between the orbital parameters $T$ and $a$ is, to this approximation, independent of the orbiting mass. This statement can be extended to the other orbital parameters by considering the same approximation $m,m'\ll m_{c}$. This is why we expect that the mass/volume of the asteroid cannot depend on the orbital parameters. It is also useful to define the \textcolor{red}{mean motion} (a feature that is also present in the asteroids dataset) as \cite{murray1999solar}:
\begin{equation}
n=\frac{2\pi}{T}
\end{equation}
Therefore \cite{murray1999solar}:
\begin{equation}
\mu=n^{2}a^{3}
\end{equation}
\begin{equation}
h=na^{2}\sqrt{1-e^{2}}=\sqrt{\mu a(1-e^{2})}
\label{eq_h}
\end{equation}
From this we can see that the angular velocity $\dot{f}=h/r^{2}$ is a function of the position along the orbit. We now explore this statement further. Taking the scalar product of the equation of motion with $\dot{\textbf{r}}$, we can write \cite{murray1999solar}:
\begin{equation}
\dot{\textbf{r}}\cdot\ddot{\textbf{r}}+\mu\dfrac{\dot{r}}{r^{2}}=0
\end{equation}
whose integration gives \cite{murray1999solar}:
\begin{equation}
\frac{1}{2}v^{2}-\frac{\mu}{r}=C
\label{energy_cons}
\end{equation}
in which $v^{2}=\dot{\textbf{r}}\cdot\dot{\textbf{r}}$ and $C$ is the integration constant. This expression expresses energy conservation: the first term is the \textit{vis-viva} term (essentially the kinetic energy per unit mass), while the second is the potential energy per unit mass (rescaled with $\mu$). Let us now come back to Eq. \ref{eq-mot} and make the substitution $f=\theta-\phi$, where $f$ is called the true anomaly. Differentiating, we obtain \cite{murray1999solar}:
\begin{equation}
\dot{r}=\frac{r\dot{f}e\sin f}{1+e\cos f}
\end{equation}
Remembering the definition $h=r^{2}\dot{f}$, from Eq. \ref{eq_h} we have \cite{murray1999solar}:
\begin{equation}
\dot{r}=\frac{na}{\sqrt{1-e^{2}}}e\sin f
\end{equation}
\begin{equation}
r\dot{f}=\frac{na}{\sqrt{1-e^{2}}}\left(1+e\cos f\right)
\end{equation}
Therefore \cite{murray1999solar}:
\begin{equation}
\begin{split}
v^{2}&=\dfrac{n^{2}a^{2}}{1-e^{2}}\left(1+2e\cos f +e^{2}\right)= \\
&=\dfrac{n^{2}a^{2}}{1-e^{2}}\left(\dfrac{2a(1-e^{2})}{r}-(1-e^{2})\right)
\end{split}
\end{equation}
\begin{equation}
v^{2}=\mu\left(\frac{2}{r}-\dfrac{1}{a}\right)
\end{equation}
from which we see that the \textcolor{red}{velocity}\footnote{In the dataset this quantity is called relative velocity because the reference system is the Earth.} of the asteroid is maximum at the perihelion and minimum at the aphelion, with values respectively equal to \cite{murray1999solar}:
\begin{equation}
v_{perihelion}=na\sqrt{\dfrac{1+e}{1-e}}
\end{equation}
\begin{equation}
v_{aphelion}=na\sqrt{\dfrac{1-e}{1+e}}
\end{equation}
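As an illustrative numerical example (added here, with orbital elements chosen arbitrarily), the vis-viva equation and the two expressions above can be evaluated for a heliocentric orbit:
\begin{verbatim}
mu_sun <- 1.32712440018e11      # GM of the Sun in km^3/s^2
AU     <- 1.495978707e8         # astronomical unit in km

# vis-viva: v^2 = mu*(2/r - 1/a), with r and a converted from au to km
visviva <- function(r_au, a_au) sqrt(mu_sun * (2/(r_au*AU) - 1/(a_au*AU)))

a <- 1.5; e <- 0.3              # an illustrative asteroid orbit
visviva(a*(1 - e), a)           # speed at perihelion (km/s), ~33
visviva(a*(1 + e), a)           # speed at aphelion  (km/s), ~18

# cross-check with the closed form above: v_perihelion = n*a*sqrt((1+e)/(1-e))
n <- sqrt(mu_sun/(a*AU)^3)      # mean motion in rad/s
n*a*AU*sqrt((1 + e)/(1 - e))    # matches the perihelion speed
\end{verbatim}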
Another quantity that is contained in the asteroids dataset, and is useful to describe their orbits is the \textcolor{red}{mean anomaly}. This is defined as \cite{murray1999solar}:
\begin{equation}
M=n(t-\tau)
\end{equation}
where $\tau$ is the time of pericentre passage. The mean anomaly increases linearly with time at a constant rate equal to the mean motion, and at perihelion and aphelion it satisfies \cite{murray1999solar}:
\begin{itemize}
\item $M=f=0$\quad$t=\tau$\quad Perihelion
\item $M=f=\pi$\quad$t=\tau+T/2$ \quad Aphelion
\end{itemize}
Such relations should be intended as periodic, repeating at every multiple of the orbital period $T$. The geometrical interpretation of the angle associated with the mean anomaly is given in Fig. \ref{Mean_anomaly}. It can be proven (see \cite{murray1999solar}) that the mean anomaly is related to the eccentric anomaly $E$, which describes the position of the orbiting body, by the following expression, known as the Kepler equation \cite{murray1999solar}:
\begin{equation}
M=E-e\sin E
\end{equation}
Finally, as one moves to three dimensions, two further angles are necessary for the description of an orbit: these are shown in Fig. \ref{Inclination} and are the \textcolor{red}{inclination} of the orbit ($I$) and the \textcolor{red}{longitude of the ascending node} $\Omega$. Given these quantities, the Tisserand parameter with respect to a perturbing body of semi-major axis $a_{p}$ can be calculated as \cite{murray1999solar}:
\begin{equation}
T_{P}=\frac{a_{p}}{a}+2\cos I\sqrt{\dfrac{a}{a_{p}}(1-e^{2})}
\end{equation}
If Jupiter is considered as the perturbing body, we obtain the \textcolor{red}{Jupiter Tisserand invariant}. The underlying reason for this choice is to distinguish the Jupiter-family comets ($2<T_{J}<3$) from the asteroids, which typically have $T_{J}>3$.
\section{Observation}
The luminosity of an asteroid can be quantified with the concept of magnitude. It is worth pointing out that asteroids have no intrinsic luminosity: instead they reflect, usually not uniformly since they have no atmosphere, the radiation of stars or other celestial bodies with intrinsic luminosity.
First of all, before defining the concept of magnitude, it is helpful to introduce the concept of radiation flux. In our case, it can be thought of as the number of photons\footnote{At a first approximation, one can think of photons as packets of energy associated with the emitted light. A more formal and complete description can be found in \cite{feynman2018feynman}.} that cross a sphere centred on the light source and with radius equal to the observer's distance \cite{burbine2016asteroids}:
\begin{equation}
\Phi=\frac{L}{4\pi r^{2}}
\end{equation}
As one can note, there is a $1/r^{2}$ dependence, which comes from the fact that electromagnetic radiation, like the gravitational force, spreads in a spherically symmetric way\footnote{A formal argument on this point can be found in \cite{zee2013einstein}}. On this basis, it is possible to define the apparent magnitude as \cite{burbine2016asteroids}:
\begin{equation}
m=-2.5\log_{10}\Phi+C
\end{equation}
This definition is useful for the comparison between two light sources (e.g. two stars), since in this case we have \cite{burbine2016asteroids}:
\begin{equation}
m_{1}-m_{2}=-2.5\log_{10}\frac{\Phi_{1}}{\Phi_{2}}
\end{equation}
Finally, the \textcolor{red}{Absolute magnitude} $M$ can be defined as the magnitude that the object would have if it were placed at 1 AU\footnote{Astronomical Unit, the mean distance between the Earth and the Sun: $1.49\cdot 10^{11}$ m.} or 10 parsec\footnote{$3.08\cdot 10^{16}$ m. The physical meaning of this measure can be found in \cite{burbine2016asteroids}.} from the observer. Therefore \cite{burbine2016asteroids}:
\begin{equation}
M-m=-2.5\log_{10}\frac{\Phi\cdot d^{2} }{\Phi\cdot 10^{2}}
\end{equation}
where $d$ is the light source distance in parsec. Rearranging the terms, we have \cite{burbine2016asteroids}:
\begin{equation}
M-m=-5\log_{10}\left(\frac{d}{10}\right)
\end{equation}
\begin{equation}
M=m+5-5\log_{10}d
\end{equation}
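As a quick worked example (added here for illustration), following the formula above:
\begin{verbatim}
# M = m + 5 - 5*log10(d), with the distance d expressed in parsec
absolute_magnitude <- function(m, d_parsec) m + 5 - 5*log10(d_parsec)
absolute_magnitude(m = 10, d_parsec = 100)   # an object at 100 pc has M = 5
\end{verbatim}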
\section{Classification}
The Solar System is composed not only of planets and the Sun but also, as shown in Fig. \ref{Orbit_asteorids}, of a plethora of small bodies\footnote{Small compared to the size of planets.} whose orbits can come close to the Earth \cite{burbine2016asteroids}. In particular, if the perihelion distance of their orbit is less than 1.3 au, such objects are called near-Earth objects (NEO). These objects are of three types: comets, meteoroids and asteroids. The first are icy bodies that, as they move near the Sun, partially melt and release gases, producing coloured tails. Meteoroids are smaller\footnote{Compared to asteroids.} rocky/metallic objects with a diameter of less than 1 metre. Asteroids, on the other hand, are objects with a diameter larger than 1 m \cite{burbine2016asteroids}. The first two classes are excluded from the present analysis. The near-Earth asteroids, which are mainly located in the regions reported in Fig. \ref{heic1715c}, are classified, as shown in Fig. \ref{neo_orbit_types}, according to their semi-major axis (a), perihelion distance (q) and aphelion distance (Q). Here we report the CNEOS definitions \cite{nasa_classification}:
\begin{itemize}
\item \textbf{Atiras} $a < 1.0$ au, $Q < 0.983$ au \quad \textit{NEAs whose orbits are contained entirely within the orbit of the Earth (named after asteroid 163693 Atira)}
\item \textbf{Atens} $a < 1.0$ au, $Q > 0.983$ au \quad \textit{Earth-crossing NEAs with semi-major axes smaller than Earth's (named after asteroid 2062 Aten)}
\item \textbf{Apollos} $a>1.0$ au, $q<1.017$ au \quad \textit{Earth-crossing NEAs with semi-major axes larger than Earth's (named after asteroid 1862 Apollo)}
\item \textbf{Amors} $a>1.0$ au, $1.017<q<1.3$ au \quad \textit{Earth-approaching NEAs with orbits exterior to Earth's but interior to Mars' (named after asteroid 1221 Amor)}
\end{itemize}
Besides this classification, there is another one that involves the hazardousness of an asteroid \cite{nasa_classification}:
\begin{itemize}
\item \textbf{Potentially Hazardous Asteroids}: MOID $\leq 0.05$ au, $M \leq 22.0$ \quad \textit{NEAs whose Minimum Orbit Intersection Distance (MOID) with the Earth is 0.05 au or less and whose absolute magnitude (M) is 22.0 or brighter}
\end{itemize}
This is the formal definition by which the asteroids contained in the dataset analysed here are classified as hazardous or not.
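The two classifications above reduce to simple decision rules on the orbital parameters. The following Python sketch (purely illustrative function names, not part of the analysis code) encodes the CNEOS thresholds:
\begin{verbatim}
# Sketch of the CNEOS NEA classes from semi-major axis a,
# perihelion distance q and aphelion distance Q (all in au).
def nea_class(a, q, Q):
    if a < 1.0:
        return "Atira" if Q < 0.983 else "Aten"
    if q < 1.017:
        return "Apollo"
    return "Amor" if q < 1.3 else "not a NEA"

def is_potentially_hazardous(moid, M):
    # MOID <= 0.05 au and absolute magnitude M <= 22.0
    return moid <= 0.05 and M <= 22.0

print(nea_class(a=1.2, q=0.9, Q=1.5))             # Apollo
print(is_potentially_hazardous(moid=0.03, M=20))  # True
\end{verbatim}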
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Area_dynamics.png}
\caption{The portion of area $\delta A$ obtained when the position vector moves with an angle $\delta\theta$. Image taken from \cite{murray1999solar}}
\label{Area_dynamics}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Conics.png}
\caption{The four possible orbits as obtained from a section of a cone. Image taken from \cite{murray1999solar}}
\label{Conics}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Elliptical_orbit.png}
\caption{Main features of an elliptical orbit. Image taken from \cite{murray1999solar}}
\label{Elliptical_orbit}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Mean_anomaly.png}
\caption{The geometrical interpretation of mean anomaly: the left panel a) shows how the circumscribed circle should be drawn, while the right panel b) shows how the angle associated with the mean anomaly should be interpreted and its relation to the true anomaly angle f. Image taken from \cite{murray1999solar}}
\label{Mean_anomaly}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Inclination.png}
\caption{The parameters that are necessary for the description of an orbit in three dimension. Image taken from \cite{murray1999solar}}
\label{Inclination}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/Orbit_asteorids.png}
\caption{Orbits of potentially hazardous asteroids. Image taken from \cite{orbits-of-potentially-hazardous}}
\label{Orbit_asteorids}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/heic1715c.jpg}
\caption{Location of the solar system asteroid belt. Image taken from \cite{esahubble}}
\label{heic1715c}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{Figures/neo_orbit_types.jpg}
\caption{NASA classification of NEO asteroids accompanied by the legend for the parameters a, q and Q. Image taken from \cite{nasa_classification}}
\label{neo_orbit_types}
\end{center}
\end{figure}
\chapter{Appendix B: R code} Here I provide the code which I ran to obtain the results previously shown. In writing it, I took into consideration the examples provided in the package documentation, Prof. Nicolussi's lectures and the examples provided in the Hojsgaard textbook \cite{hojsgaard2012graphical}.
\definecolor{light-gray}{gray}{0.95}
\lstset{ columns=fullflexible, basicstyle=\ttfamily, backgroundcolor=\color{light-gray},xleftmargin=0.5cm,frame=lr,framesep=8pt,framerule=0pt,frame=single,breaklines=true, postbreak=\mbox{\textcolor{red}{$\hookrightarrow$}\space}}
\lstinputlisting[language=R]{Script_FINAL.R}
\bibliographystyle{unsrt}
\bibliography{sample.bib} % The file containing the bibliography
\newpage
%----------------------------------------------------------------------------------------
\end{document}
\documentclass{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{pgfplots}
\usepackage{tikz}
\usepackage{nicefrac}
\pgfplotsset{every axis legend/.append style={
at={(0,0)},
anchor=north east}}
\usetikzlibrary{shapes,positioning,intersections,quotes}
\definecolor{darkgreen}{rgb}{0.0, 0.6, 0.0}
\definecolor{darkred}{rgb}{0.7, 0.0, 0.0}
\title{Interpolation}
\begin{document}
\def\horzbar{\text{magic}}
\pagenumbering{gobble}
\maketitle
\newpage
\pagenumbering{arabic}
\section*{Introduction}
We have two arrays of numbers $X$ and $Y$. Array $X$ contains independent data points $x_i$ and array $Y$ contains dependent data points $y_i$, $i=1,\ldots,m$.
We want to find a function $\hat{y}(x)$ that takes exactly the given values $y_i$ at the given points $x_i$.\\
\section*{Linear Interpolation}
Linear interpolation is achieved by connecting two data points with a straight line.
For $x_i < x < x_{i+1}$:
$$\hat{y}(x) = y_i + \frac{(y_{i+1} - y_{i})(x - x_{i})}{(x_{i+1} - x_{i})}.$$
\section*{Derivation}
\begin{tikzpicture}
\draw [dashed] (-3, -1.9) -- (3, -1.9);
\draw [dashed] (3, -5) -- (3, -1.9);
\draw [-stealth](-3,-5) -- (9,-5);
\draw [-stealth](-3,-5) -- (-3,1);
\draw [stealth-stealth, blue](0,-3) -- (8,0);
\draw [stealth-stealth, red](8,0) -- (8,-3);
\draw [stealth-stealth, green](0,-3) -- (8,-3);
\draw [stealth-stealth, black](0,-3.1) -- (3,-3.1);
\node[above right=0pt of {(8, 0)}, outer sep=2pt,fill=none] {$(x_2, y_2)$};
\node[above right=0pt of {(8, -1.8)}, outer sep=2pt,fill=none, darkred] {$y_2 - y_1$};
\node[above right=0pt of {(4.5, -3.6)}, outer sep=2pt,fill=none, darkgreen] {$x_2 - x_1$};
\node[above right=0pt of {(1, -3.6)}, outer sep=2pt,fill=none] {$x - x_1$};
\node[above right=0pt of {(-1,-3.7)}, outer sep=2pt,fill=none] {$(x_1, y_1)$};
\node[above right=0pt of {(-1,-1.7)}, outer sep=2pt,fill=none] {$y$};
\node[above right=0pt of {(3,-4.2)}, outer sep=2pt,fill=none] {$x$};
\node[above right=0pt of {(3,-2.7)}, outer sep=2pt,fill=none] {$h$};
\end{tikzpicture}
$$\alpha = \frac{y_2 - y_1}{x_2 - x_1}$$
$$h = \alpha \cdot (x - x_1)$$
$$y = y_1 + h$$
$$y = y_1 + (x - x_1) \cdot \frac{y_2 - y_1}{x_2 - x_1}$$
\section*{Example}
We are given two points A(-2, 0) and B (2, 2).\\
\begin{tikzpicture}
\begin{axis}[
axis x line=middle,
axis y line=middle,
width=8cm,
height=8cm,
xmin=-5, % start the diagram at this x-coordinate
xmax= 5, % end the diagram at this x-coordinate
ymin=-5, % start the diagram at this y-coordinate
ymax= 5, % end the diagram at this y-coordinate
xlabel=$x$,
ylabel=$y$,
legend cell align=left,
legend pos=south east,
legend style={draw=none},
tick align=outside,
enlargelimits=false]
% plot the function
\addplot[domain=-5:5, blue, ultra thick,samples=500] {0.5*x + 1};
\fill[red] (700,700) circle (3pt);
\fill[red] (300, 500) circle (3pt);
\draw [dashed] (500, 700) -- (680, 700);
\draw [dashed] (700, 500) -- (700, 680);
\node[above right=0pt of {(255,510)}, outer sep=2pt,fill=none] {A};
\node[above right=0pt of {(655,710)}, outer sep=2pt,fill=none] {B};
\legend{$\nicefrac{1}{2} \cdot x$ + 1}
\end{axis}
\end{tikzpicture}
Let's try to evaluate the value of the function at $x=1$:
$$\hat{y}(x) = y_i + \frac{(y_{i+1} - y_{i})(x - x_{i})}{(x_{i+1} - x_{i})} = 0 + \frac{(2 - 0)(1 - (-2))}{(2 - (-2))} = 1.5$$
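A minimal sketch of the same computation in Python (standard library only; the function name is illustrative):
\begin{verbatim}
# Linear interpolation between (x1, y1) and (x2, y2).
def lerp(x, x1, y1, x2, y2):
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

# Points A(-2, 0) and B(2, 2), evaluated at x = 1.
print(lerp(1, -2, 0, 2, 2))  # 1.5
\end{verbatim}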
\newpage
\section*{Cubic Spline}
The interpolating function in cubic spline interpolation is a set of piecewise cubic functions.\\
For $x_i < x < x_{i+1}$:\\
We have two points $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ joined with a cubic polynomial:
$$S_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i$$
For $n$ points, there are $n-1$ cubic functions to find, and each cubic function requires four coefficients ($a_i, b_i, c_i, d_i$).
There are $4(n-1)$ unknowns to find.\\
\begin{tikzpicture}
\begin{axis}[
axis x line=middle,
axis y line=middle,
width=13cm, height=13cm, % size of the image
grid = none,
grid style={dashed, gray!0},
%xmode=log,log basis x=10,
%ymode=log,log basis y=10,
xmin=-2, % start the diagram at this x-coordinate
xmax= 4, % end the diagram at this x-coordinate
ymin=-7, % start the diagram at this y-coordinate
ymax= 7, % end the diagram at this y-coordinate
%/pgfplots/xtick={0,1,...,60}, % make steps of length 5
%extra x ticks={23},
%extra y ticks={0.507297},
axis background/.style={fill=white},
ylabel=y,
xlabel=x,
%xticklabels={,,},
%yticklabels={,,},
tick align=outside,
tension=0.08]
% plot the stirling-formulae
\addplot[name path global=a, domain=-2:4, blue, thick,samples=500] {-x*x*x + 4*x*x-x-4};
\fill[red] (211, 39.3) circle (3pt);
\fill[red] (455, 108.8) circle (3pt);
\node[above right=0pt of {(211, 39.3)}, outer sep=2pt,fill=none] {$y_1$};
\node[above right=0pt of {(455, 108.8)}, outer sep=2pt,fill=none] {$y_2$};
\node[above right=0pt of {(250, 98.8)}, outer sep=2pt,fill=none] {$S_1(x)$};
\draw [-stealth](280,98) -- (350,82);
\end{axis}
\end{tikzpicture}
\section*{Derivation}
We are trying to find a function $S_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i$ going through both points: $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$.
\begin{align}
S_i(x_i) &= y_i,\quad i = 1,\ldots,n-1,
\end{align}
\begin{align}
S_i(x_{i+1}) &= y_{i+1},\quad i = 1,\ldots,n-1,
\end{align}
Smoothness condition:
\begin{align}
S'_i(x_{i+1}) &= S^{\prime}_{i+1}(x_{i+1}),\quad i = 1,\ldots,n-2,
\end{align}
\begin{align}
S''_i(x_{i+1}) &= S''_{i+1}(x_{i+1}),\quad i = 1,\ldots,n-2,
\end{align}
Boundary condition: the curve is a ``straight line'' at the end points:
\begin{align}
S''_1(x_1) &= 0
\end{align}
\begin{align}
S''_{n-1}(x_n) &= 0
\end{align}
Let $h_{i}=x_{i}-x_{i-1}$ and let $M_i$ denote the common value of the second derivative at the knot $x_i$, i.e. $M_{i} = S''_{i}(x_{i}) = S''_{i+1}(x_{i})$.
For a natural spline, $S''_{1}(x_{0})= M_0 = 0$ and $S''_{n}(x_n) = M_n = 0$.
The other $M_i$ are unknown.
By Lagrange interpolation, we can interpolate each $S''_{i}$ on $[x_{i-1},x_{i}]$:
$$S''_{i}(x)=M_{i-1}{\frac {x_{i}-x}{h_{i}}}+M_{i}{\frac {x-x_{i-1}}{h_{i}}} \quad for \quad x\in [x_{i-1},x_{i}]$$
Integrating the above equation twice and using the conditions $S_{i}(x_{i-1})=y_{i-1}$ and $S_{i}(x_{i})=y_{i}$ to determine the constants of integration, we have:
$$ S_{i}(x)=M_{i-1}{\frac {(x_{i}-x)^{3}}{6h_{i}}}+M_{i}{\frac {(x-x_{i-1})^{3}}{6h_{i}}}+\left(y_{i-1}-{\frac {M_{i-1}h_{i}^{2}}{6}}\right){\frac {x_{i}-x}{h_{i}}}+\left(y_{i}-{\frac {M_{i}h_{i}^{2}}{6}}\right){\frac {x-x_{i-1}}{h_{i}}}$$
$${\text{for}}\quad x\in [x_{i-1},x_{i}] $$\\
This expression gives us the cubic spline $S(x)$ if $ M_{i},i=0,1,\cdots ,n$ can be determined.
$$S'_{i+1}(x)=-M_{i}{\frac {(x_{i+1}-x)^{2}}{2h_{i+1}}}+M_{i+1}{\frac {(x-x_{i})^{2}}{2h_{i+1}}}+{\frac {y_{i+1}-y_{i}}{h_{i+1}}}-{\frac {M_{i+1}-M_{i}}{6}}h_{i+1}$$
$$S'_{i+1}(x_{i})=-M_{i}{\frac {h_{i+1}}{2}}+{\frac {y_{i+1}-y_{i}}{h_{i+1}}}-{\frac {M_{i+1}-M_{i}}{6}}h_{i+1}$$
Similarly, when $x\in [x_{i-1},x_{i}]$, we can shift the index to obtain
\begin{align}
S'_{i}(x) &=-M_{i-1}{\frac {(x_{i}-x)^{2}}{2h_{i}}}+M_{i}{\frac {(x-x_{i-1})^{2}}{2h_{i}}}+{\frac {y_{i}-y_{i-1}}{h_{i}}}-{\frac {M_{i}-M_{i-1}}{6}}h_{i}
\end{align}
$$ S'_{i}(x_{i})=M_{i}{\frac {h_{i}}{2}}+{\frac {y_{i}-y_{i-1}}{h_{i}}}-{\frac {M_{i}-M_{i-1}}{6}}h_{i}$$
Since $ S'_{i+1}(x_{i})=S'_{i}(x_{i})$, we can derive:
$$\mu _{i}M_{i-1}+2M_{i}+\lambda _{i}M_{i+1}=d_{i}\quad {\text{for}}\quad i=1,2,\cdots ,n-1,$$
$$\mu _{i}={\frac {h_{i}}{h_{i}+h_{i+1}}},\quad \lambda _{i}=1-\mu _{i}={\frac {h_{i+1}}{h_{i}+h_{i+1}}},\quad {\text{and}}\quad d_{i}=6f[x_{i-1},x_{i},x_{i+1}]$$
and $f[x_{i-1},x_{i},x_{i+1}]$ is a divided difference.\\
According to different boundary conditions, we can solve the system of equations above to obtain the values of $M_{i}$'s.\\
$S'_{1}(x_{0})=f'_{0}$ and $S'_{n}(x_{n})=f'_{n}$. According to equation (7), we can obtain:
$$S'_{1}(x_{0})=-M_{0}{\frac {(x_{1}-x_{0})^{2}}{2h_{1}}}+M_{1}{\frac {(x_{0}-x_{0})^{2}}{2h_{1}}}+{\frac {y_{1}-y_{0}}{h_{1}}}-{\frac {M_{1}-M_{0}}{6}}h_{1}$$
$$\Rightarrow f'_{0}=-M_{0}{\frac {h_{1}}{2}}+f[x_{0},x_{1}]-{\frac {M_{1}-M_{0}}{6}}h_{1}$$
$$\Rightarrow 2M_{0}+M_{1}={\frac {6}{h_{1}}}(f[x_{0},x_{1}]-f'_{0})=6f[x_{0},x_{0},x_{1}]$$
Analogously:
$$ S'_{n}(x_{n})=-M_{n-1}{\frac {(x_{n}-x_{n})^{2}}{2h_{n}}}+M_{n}{\frac {(x_{n}-x_{n-1})^{2}}{2h_{n}}}+{\frac {y_{n}-y_{n-1}}{h_{n}}}-{\frac {M_{n}-M_{n-1}}{6}}h_{n}$$
$$M_{n-1}+2M_{n}={\frac {6}{h_{n}}}(f'_{n}-f[x_{n-1},x_{n}])=6f[x_{n-1},x_{n},x_{n}]$$
Let:\\
$\lambda _{0}=\mu _{n}=1,$\\
$d_{0}=6f[x_{0},x_{0},x_{1}]$ and\\
$d_{n}=6f[x_{n-1},x_{n},x_{n}]$
\begin{equation*}
\begin{bmatrix}
2 & \lambda_0 \\
\mu_1 & 2 & \lambda_1 \\
& \ddots & \ddots & \ddots \\
&& \ddots & \ddots & \ddots \\
&&& \ddots & \ddots & \ddots \\
&&&& \mu_{n-1} & 2 & \lambda_{n-1} \\
&&&&& \mu_{n} & 2 \\
\end{bmatrix}
%
\begin{bmatrix}
M_0 \\
M_1 \\
\vdots \\
\vdots \\
\vdots \\
M_{n-1} \\
M_n \\
\end{bmatrix}
=
%
\begin{bmatrix}
d_0 \\
d_1 \\
\vdots \\
\vdots \\
\vdots \\
d_{n-1} \\
d_n \\
\end{bmatrix}
\end{equation*}
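For the natural boundary conditions ($M_0 = M_n = 0$), only the interior rows of this system are needed; the sketch below (assuming NumPy is available, illustrative function name) assembles and solves them for the second derivatives $M_i$:
\begin{verbatim}
# Second derivatives M_i of a natural cubic spline (M_0 = M_n = 0).
import numpy as np

def natural_spline_M(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1                 # number of intervals
    h = np.diff(x)                 # h_i = x_i - x_{i-1}
    M = np.zeros(n + 1)
    if n < 2:
        return M                   # a single interval stays linear
    A = np.zeros((n - 1, n - 1))
    d = np.zeros(n - 1)
    for i in range(1, n):          # interior knots x_1 ... x_{n-1}
        mu = h[i - 1] / (h[i - 1] + h[i])
        lam = 1.0 - mu
        d[i - 1] = 6.0 * ((y[i + 1] - y[i]) / h[i]
                          - (y[i] - y[i - 1]) / h[i - 1]) / (h[i - 1] + h[i])
        A[i - 1, i - 1] = 2.0
        if i > 1:
            A[i - 1, i - 2] = mu
        if i < n - 1:
            A[i - 1, i] = lam
    M[1:n] = np.linalg.solve(A, d)
    return M

print(natural_spline_M([0, 1, 2, 3], [0, 1, 0, 1]))  # [ 0. -4.  4.  0.]
\end{verbatim}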
\newpage
\section*{Lagrange Polynomial Interpolation}
Lagrange polynomial interpolation gives us a single polynomial that connects all of the data points.
That polynomial is denoted as $L(x)$. It is true that $L(x_i) = y_i$ for all points $(x_i, y_i)$ .
$$L(x) = \sum_{i = 1}^n y_i P_i(x).$$
Each polynomial appearing in the sum is called a Lagrange basis polynomial, $P_i(x)$.
$$P_i(x) = \prod_{j = 1, j\ne i}^n\frac{x - x_j}{x_i - x_j}.$$
\section*{Example}
We are given three points A(-1, 1), B(2, 3) and C(3,5).\\
$$P_1(x) = \frac{(x - x_2)(x - x_3)}{(x_1-x_2)(x_1-x_3)} = \frac{(x - 2)(x - 3)}{(-1-2)(-1-3)} = \frac{1}{12}(x^2 - 5x + 6)$$
$$P_2(x) = \frac{(x - x_1)(x - x_3)}{(x_2-x_1)(x_2-x_3)} = \frac{(x + 1)(x - 3)}{(2 + 1)(2-3)} = -\frac{1}{3}(x^2 - 2x - 3)$$
$$P_3(x) = \frac{(x - x_1)(x - x_2)}{(x_3-x_1)(x_3-x_2)} = \frac{(x + 1)(x - 2)}{(3 + 1)(3-2)} =\frac{1}{4}(x^2 -x - 2)$$
$$ L(x) = 1 \cdot P_1(x) + 3 \cdot P_2(x) + 5 \cdot P_3(x) $$
$$ L(x) = \frac{1}{3} x^2 + \frac{1}{3} x + 1 $$
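This example can be verified numerically with a short sketch (plain Python, illustrative function name):
\begin{verbatim}
# Lagrange interpolation: L(x) = sum_i y_i * P_i(x).
def lagrange(x, xs, ys):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        p = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                p *= (x - xj) / (xi - xj)
        total += yi * p
    return total

xs, ys = [-1, 2, 3], [1, 3, 5]
print([lagrange(x, xs, ys) for x in xs])  # [1.0, 3.0, 5.0]
print(lagrange(0, xs, ys))                # 1.0, i.e. L(0) = 1
\end{verbatim}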
\begin{tikzpicture}
\begin{axis}[
axis x line=middle,
axis y line=middle,
width=10cm,
height=10cm,
xmin=-5, % start the diagram at this x-coordinate
xmax= 6, % end the diagram at this x-coordinate
ymin= -1, % start the diagram at this y-coordinate
ymax= 8, % end the diagram at this y-coordinate
xlabel=$x$,
ylabel=$y$,
legend cell align=left,
legend pos=north east,
legend style={draw=none},
tick align=outside,
enlargelimits=false,
xtick distance=1,
ytick distance=1]
% plot the function
\addplot[domain=-5:10, blue, ultra thick,samples=500] {1/3*(x^2 + x + 3)};
\fill[red] (400, 20) circle (3pt);
\fill[red] (700, 40) circle (3pt);
\fill[red] (800, 60) circle (3pt);
\node[above right=0pt of {(340, 13)}, outer sep=2pt,fill=none] {A};
\node[above right=0pt of {(630, 40)}, outer sep=2pt,fill=none] {B};
\node[above right=0pt of {(730, 60)}, outer sep=2pt,fill=none] {C};
\legend{$\frac{1}{3} x^2 + \frac{1}{3} x + 1$}
\end{axis}
\end{tikzpicture}
\end{document}
\graphicspath{{Ch5_2021_iccv/figs/}}
\chapter{{Generative Compositional Augmentations for Scene Graph Prediction}\label{ch:iccv2021}}
\input{Ch5_2021_iccv/prolog}
\section{Introduction\label{sec:intro}}
Reasoning about the world in terms of objects and relationships between them is an important aspect of human and machine cognition~\citep{greff2020binding}.
In our environment, we can often observe frequent compositions such as ``person on a surfboard'' or ``person next to a dog''. When we are faced with a rare or previously unseen composition such as ``dog on a surfboard'', to understand the scene we need to understand the concepts of `person', `dog', `surfboard' and `on'. While such unbiased reasoning about concepts is easy for humans, for machines this task has remained extremely challenging~\citep{atzmon2016learning, johnson2017clevr, bahdanau2018systematic, keysers2019measuring, lake2019compositional}.
Learning-based models tend to capture spurious statistical correlations in the training data~\citep{arjovsky2019invariant,niu2020counterfactual}, \eg~`person' rather than `dog' has always occurred on a surfboard. When the evaluation is explicitly focused on \textit{compositional generalization} -- ability to recognize novel or rare combinations of objects and relationships -- such models then can fail remarkably~\citep{atzmon2016learning, lu2016visual, tang2020unbiased, knyazev2020graph}.
\begin{figure}[t]
\centering
\includegraphics[width=0.57\textwidth]{motivation.pdf}
% \vspace{-5pt}
\caption{ \small (\textbf{a}) The triplet distribution in Visual Genome~\citep{krishna2017visual} is extremely long-tailed, with numerous few- and zero-shot compositions (highlighted in red and yellow respectively). (\textbf{b}) The training set contains a tiny fraction (3\%) of all possible triplets, while many other plausible triplets exist. We aim to ``hallucinate'' such compositions using GANs to increase the diversity of training samples and improve generalization. Recall results are from~\citep{tang2020unbiased}.}
\label{fig:iccv_motivation}
%\vspace{-5pt}
\end{figure}
Predicting compositions of objects and the relationships between them from images is part of the scene graph generation (SGG) task. SGG is important, because accurately inferred scene graphs can improve downstream results in tasks, such as VQA~\citep{zhang2019empirical,NSM2019,cangea2019videonavqa,lee2019visual,shi2019explainable,hildebrandt2020scene,damodaran2021understanding}, image captioning~\citep{yang2019auto, gu2019unpaired,li2019know,wang2019role,milewski2020scene}, retrieval~\citep{johnson2015image,belilovsky2017joint,tang2020unbiased,tripathi2019compact,schroeder2020structured} and others~\citep{agarwal2020visual,xu2020survey}.
However, inferring scene graphs accurately is challenging due to the long-tailed data distribution and the inevitable appearance of zero-shot (ZS) compositions (triplets) of objects and relationships at test time, \eg~``cup on surfboard''
(Figure~\ref{fig:iccv_motivation}).
The SGG results using the recent Total Direct Effect (TDE) method~\citep{tang2020unbiased} show a severe drop in ZS recall highlighting the extreme challenge of compositional generalization. This might appear surprising given that the marginal distributions in the entire scene graph dataset (\eg~Visual Genome~\citep{krishna2017visual}) and the ZS subset are very similar (\fig{\ref{fig:predicates}}). More specifically, the predicate and object categories that are frequent in the entire dataset, such as `on', `has' and `man', `person' \textit{also dominate} among the ZS triplets. For example, both ``cup on surfboard'' and ``bear has helmet'' consist of frequent entities, but represent extremely rare compositions (\fig{\ref{fig:iccv_motivation}}).
This strongly suggests that the challenging nature of correctly predicting ZS triplets does not directly stem from the imbalance of predicates (or objects), as commonly viewed in the previous SGG works, where the models attempt to improve mean (or predicate-normalized) recall metrics~\citep{chen2019knowledge, dornadula2019visual,tang2019learning,zhang2019graphical,tang2020unbiased,chen2019scene,zareian2020bridging,lin2020gps,zareian2020learning,yan2020pcpl}.
Therefore, we focus on compositional generalization and associated zero- and few-shot metrics.\looseness-1
\begin{figure}[t]
\begin{scriptsize}
\setlength{\tabcolsep}{5pt}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{rel_distr_test_VG_test_zs.pdf} & \includegraphics[width=0.4\textwidth]{obj_distr_test_VG_test_zs.pdf} \\
\end{tabular}
\end{center}
\end{scriptsize}
\vspace{-20pt}
\caption{ \small The distributions of top-25 predicate (\textbf{left}) and object (\textbf{right}) categories in Visual Genome~\citep{krishna2017visual} (split of~\citep{xu2017scene}).
}
\vspace{-12pt}
\label{fig:predicates}
\end{figure}
Despite recent improvements in compositional generalization within the SGG task~\citep{tang2020unbiased,knyazev2020graph,suhail2021energy}, the state-of-the-art result in zero-shot recall is still 4.5\% compared to 41\% for all-shot recall (Figure~\ref{fig:history}).
To address compositional generalization, we consider exposing the model to a large diversity of training examples that can lead to emergent generalization~\citep{hill2019environmental,ravuri2019seeing}. To avoid expensive labeling of additional data, we propose a compositional augmentation approach based on conditional generative adversarial networks (GANs)~\citep{goodfellow2014generative,mirza2014conditional}. Our general idea is augmenting the dataset by perturbing scene graphs and corresponding visual features of images, such that together they represent a novel or rare situation.
Overall, we make the following \textbf{contributions}:
\vspace{-2pt}
\begin{itemize}[labelsep=1pt]
\vspace{-5pt}
\itemsep0em
\item We propose scene graph perturbation methods (\S~\ref{sec:perturb}) as part of a GAN-based model (\S~\ref{sec:model}), to augment the training set with underrepresented compositions;
\vspace{-3pt}
\item We propose natural language- and dataset-based metrics to evaluate the quality of (perturbed) scene graphs (\S~\ref{sec:sg_quality});
\vspace{-3pt}
\item We extensively evaluate our model and outperform a strong baseline in zero-, few- and all-shot recall (\S~\ref{sec:iccv_exper}).
\vspace{-5pt}
\end{itemize}
Our code is available at {\url{https://github.com/bknyaz/sgg}}.
\begin{figure}
\begin{scriptsize}
%\setlength{\tabcolsep}{3.4pt}
%\renewcommand{\arraystretch}{1.3}
\begin{center}
\includegraphics[width=0.6\textwidth]{results_history.pdf}\vspace{-15pt}
\end{center}
%\begin{tabular}{cccccccc}
%\multicolumn{8}{c}{
%\vspace{-8pt}\\
%\hspace{48pt} \citep{xu2017scene} & \citep{zellers2018neural} & \citep{zellers2018neural} & \citep{chen2019knowledge} & \citep{tang2020unbiased} & \citep{tang2020unbiased} & \citep{knyazev2020graph} & \hspace{-10pt}{\tiny}\\
%\end{tabular}
\end{scriptsize}
\vspace{-5pt}
\caption{\small In this work, the compositional augmentations we propose improve on zero-shot (ZS) as well as all-shot recall.}
\vspace{-10pt}
\label{fig:history}
\end{figure}
\begin{figure}[t]
\centering
\vspace{-5pt}
\centering
{\includegraphics[width=0.92\textwidth,trim={0 0.3cm 0 0.2cm},clip]{perturbations.pdf}}
\vspace{-1pt}
\caption{\small Illustrative examples of different perturbation schemes we consider. Only the subgraph is shown for clarity. }
\vspace{-10pt}
\label{fig:perturb}
\end{figure}
\section{Related work}\label{sec:related}
\vspace{-5pt}
\textbf{Scene Graph Generation.} SGG~\citep{xu2017scene} extended an earlier visual relationship detection (VRD) task~\citep{lu2016visual,sadeghi2011recognition}, enabling generation of a complete scene graph (SG) for an image.
This spurred more research at the intersection of vision and language, where a SG can facilitate high-level visual reasoning tasks such as VQA~\citep{zhang2019empirical,NSM2019,shi2019explainable} and others~\citep{agarwal2020visual,xu2020survey,raboh2020differentiable}.
Follow-up SGG works~\citep{li2017scene,yang2018graph, zellers2018neural,zhang2019graphical,gu2019scene,tang2019learning,lu2019learning,lu2021multi} have significantly improved the performance in terms of all-shot recall (\fig{\ref{fig:history}}).
While the problem of zero-shot (ZS) generalization was already actively explored in the VRD task~\citep{zhang2017visual,yang2018shuffle,wang2019generating}, in a more challenging SGG task and on a realistic dataset, such as Visual Genome~\citep{krishna2017visual}, this problem has been addressed only recently in~\citep{tang2020unbiased}
by proposing Total Direct Effect (TDE), in~\citep{knyazev2020graph} by normalizing the graph loss, and in~\citep{suhail2021energy} by the energy-based loss.
Previous SGG works have not addressed the compositional generalization issue by synthesizing rare SGs.
The closest work that also considers a generative approach is~\citep{wang2019generating} solving the VRD task. Compared to it, our model follows a standard SGG pipeline and evaluation~\citep{xu2017scene,zellers2018neural} including object and predicate classification, instead of classifying only the predicate.
We also condition a GAN on SGs rather than triplets, which combinatorially increases the number of possible augmentations.
To improve SG's likelihood, we leverage both the language model and dataset statistics as opposed to random compositions as in~\citep{wang2019generating}.\looseness-1
\textbf{Predicate imbalance and mean recall.}
Recent SGG works have focused on the predicate imbalance problem~\citep{chen2019knowledge, dornadula2019visual,tang2019learning,zhang2019graphical,tang2020unbiased,chen2019scene,zareian2020bridging,lin2020gps,zareian2020learning,yan2020pcpl} and mean (over predicates) recall as a metric not sensitive to the dominance of frequent predicates. However, as we discussed in \S~\ref{sec:intro}, the challenge of compositional generalization does not directly stem from the imbalance of predicates, since frequent predicates (\eg~`on') still dominate in unseen/rare triplets (\fig{\ref{fig:predicates}}).
Moreover, \citep{tang2020unbiased} showed mean recall is relatively easy to improve by standard Reweight/Resample methods, while ZS recall is not.
\textbf{Data augmentation with GANs.} Data augmentation is a standard method for improving machine learning models \citep{ratner2017learning}. Typically these methods rely on domain specific knowledge such as applying known geometric transformations to images~\citep{devries2017improved,cubuk2018autoaugment}.
In the case of SGG we require more general augmentation methods, so here we explore a GAN-based approach as one of them.
GANs~\citep{goodfellow2014generative} have been significantly improved w.r.t.~stability of training and the quality of generated samples~\citep{brock2018large,karras2020training}, with recent works considering their usage for data augmentation~\citep{ravuri2019seeing,shin2018medical, sandfort2019data}. Furthermore, recent work has shown that it is possible to produce plausible out-of-distribution (OOD) examples conditioned on unseen label combinations, by intervening on the underlying graph~\citep{kocaoglu2017causalgan,casanova2020generating,sun2020learning,deng2021generative,greff2019multi}. In this work, we have direct access to the underlying graphs of images in the form of SGs, which allows us to condition on OOD compositions as in~\citep{casanova2020generating,deng2021generative}.\looseness-1
\section{Methods}\label{sec:iccv_methods}
\vspace{-5pt}
We consider a dataset of $N$ tuples ${\cal D}=\{(I,\graph,B)\}^N$, where $I$ is an image with a corresponding \textit{scene graph} $\graph$~\citep{johnson2015image} and bounding boxes $B$.
A scene graph $\graph=(O, R)$ consists of $n$ objects $O= \{o_1, ... , o_n\}$, and $m$ relationships between them $R=\{r_1, ..., r_m\}$.
For each object $o_i$ there is an associated bounding box
$b_i \in \mathbb{R}^{4}, B = \{b_1, ... , b_n\}$.
Each object $o_i$ is labeled with a particular category $o_i \in \cal{C}$, while each relationship $r_k=(i, e_k, j)$ is a triplet with a subject (start node) $i$, an object (end node) $j$ and a predicate $e_k \in {\cal R}$, where $\cal R$ is a set of all predicate classes.
For further convenience, we define a categorical triplet (\textit{composition}) $\tilde{r}_k=(o_i, e_k, o_j)$ consisting of object and predicate categories, $\tilde{R}=\{\tilde{r}_1, ..., \tilde{r}_m\}$.
An example of a scene graph is presented in Figure~\ref{fig:perturb} with objects $O=\{\text{\texttt{person}}, \text{\texttt{surfboard}}, \text{\texttt{wave}} \}$ and relationships $R=\{ (3,\text{\texttt{near}}, 1), (1,\text{\texttt{on}},2) \}$ and categorical relationships $\tilde{R}=\{ (\text{\texttt{wave}},\text{\texttt{near}},\text{\texttt{person}}), (\text{\texttt{person}},\text{\texttt{on}},\text{\texttt{surfboard}}) \}$.
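For concreteness, this example graph can be written down as plain data; the snippet below is a purely illustrative Python encoding, not the Visual Genome annotation format:
\begin{verbatim}
# Illustrative encoding of the example scene graph (1-based indices as in the text).
objects = {1: "person", 2: "surfboard", 3: "wave"}
relationships = [(3, "near", 1), (1, "on", 2)]   # (subject, predicate, object)
categorical = [(objects[s], p, objects[o]) for (s, p, o) in relationships]
print(categorical)  # [('wave', 'near', 'person'), ('person', 'on', 'surfboard')]
\end{verbatim}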
\vspace{-5pt}
\subsection{Generative compositional augmentations\label{sec:model}}
\vspace{-5pt}
In a given dataset $\cal D$, such as Visual Genome~\citep{krishna2017visual}, the distribution of triplets is extremely long-tailed with a small fraction of dominating triplets (\fig{\ref{fig:iccv_motivation}}). To address the long-tail issue, we consider a GAN-based approach to augment $\cal D$ and artificially upsample rare compositions.
Our model is based on the high-level idea of generating an additional set $\hat{\cal D} = \{ (\hat{I},\pgraph, \hat{B}) \}^{\hat{N}}$. A typical scene-graph-to-image generation pipeline is~\citep{johnson2018image} $\pgraph \rightarrow \hat{B} \rightarrow \hat{I} $. We describe our model accordingly by beginning with constructing $\pgraph$ and $\hat{B}$ (\S~\ref{sec:perturb}) followed by the generation of $\hat{I}$ (in our case, features) (\S~\ref{sec:generation}). See Figure~\ref{fig:iccv_overview} for the overall pipeline.
\begin{figure}[tbph]
\centering
%\vspace{-5pt}
{\includegraphics[width=0.99\textwidth, trim={1cm 0cm 2cm 0.5cm}, clip]{overview}}
\caption{\small Our generative scene graph augmentation pipeline with its main components: discriminators $D$, a generator $G$ and a scene graph classification model $F$. See \S~\ref{sec:iccv_methods} for a detailed description of our pipeline and model architectures.\looseness-1}
\vspace{-5pt}
\label{fig:iccv_overview}
\end{figure}
%\vspace{-3pt}
\subsubsection{Scene Graph Perturbations\label{sec:perturb}}
%\vspace{-3pt}
We propose three methods to synthetically upsample underrepresented triplets in the dataset (\fig{\ref{fig:perturb}}).
Our goal is to construct diverse compositions avoiding both very likely (already abundant in the dataset) and very unlikely (``implausible'') combinations of objects and predicates, so that the distribution of synthetic $\pgraph$ will resemble the tail of the real distribution of $\graph$.
To construct $\pgraph$, we perturb existing $\graph$ available in $\cal D$, since constructing graphs from scratch is more difficult:
$\graph \rightarrow \pgraph$. We focus on perturbing nodes only as it allows the creation of highly diverse compositions, so $\pgraph=(\hat{O}, R)$, where $\hat{O} = \{ \hat{o}_1, ..., \hat{o}_n\}$ are the replacement object categories. We perturb only $L\cdot n$ nodes, where $L \in \mathbb{R}^{[0,1]}$, so
$\hat{o}_i = o_i$ for $n (1 - L)$ nodes.
We sample $L\cdot n$ nodes for perturbation based on their sum of in and out degrees. Each scene graph typically has a few ``hub'' nodes densely connected to other nodes.
So, by perturbing the hubs, we introduce more novel compositions with fewer perturbations.\looseness-1
\textbf{\textsc{Rand}} (random) is the simplest strategy, where for a node $i$ we uniformly sample a category $\hat{o}$ from $\cal C$, so that $\hat{o}_i=\hat{o}$.
\textsc{\textbf{Neigh}} (semantic neighbors) leverages pretrained GloVe word embeddings~\citep{pennington2014glove} available for each of the object categories $\cal C$. Thus, given node $i$ of category $o_i$ we retrieve the top-k neighbors of $o_i$ in the embedding space using cosine similarity. We then uniformly sample $\hat{o}$ from the top-k neighbors replacing $o_i$ with $\hat{o}$.
\textsc{\textbf{\structn}} (graph-structured semantic neighbors). \textsc{Rand} and \textsc{Neigh} do not take into account the graph structure or dataset statistics leading to unlikely or not diverse enough compositions. To alleviate that, we propose the \structn~method. Given node $i$ of category $o_i$ in the graph $\cal G$, we consider all triplets $\tilde{R}_i=\{\tilde{r}_{k,i}\}$ in $\cal G$ that contain $i$ as the start or end node, i.e. $\tilde{r}_{k,i}=(o_i, e_k, o_j) \text{ or } (o_j, e_k, o_i)$.
For example in Figure~\ref{fig:perturb}, if $o_i$ is `person', then $\tilde{R}_i=\{ (\text{\texttt{person}},\text{\texttt{on}},\text{\texttt{surfboard}}), (\text{\texttt{wave}},\text{\texttt{near}},\text{\texttt{person}})\}$.
For each $\tilde{r}_{k,i}$ we find all triplets $\tilde{R}_c$ in the dataset $\cal D$ matching $(o_c, e_k, o_j)$ or $(o_j, e_k, o_c)$, where $o_c \neq o_i$ is a candidate replacement for $o_i$.
For each candidate $o_c$, we count matched triplets $n_c=|\tilde{R}_c|$ and define unnormalized probabilities $\hat{p}_c$ based on the inverse of $n_c$, namely $\hat{p}_c=1/n_c$.
This way we define a set of possible replacements $\{o_c, \hat{p}_c \}$ for node $i$.
One of our key observations is that depending on the evaluation metric and amount of noise in the dataset, we might want to avoid sampling candidates with very high $\hat{p}_c$ (low $n_c$).
Therefore, to control for that, we introduce an additional hyperparameter $\alpha$ that allows us to filter out candidates with $n_c < \alpha$ by setting their $\hat{p}_c$ to 0.
This way we can trade-off between upsampling rare and frequent triplets.
We then normalize $\hat{p}_c$ to obtain $p_c$ with $\sum_c p_c = 1$ and sample $o^\prime \sim p_c$. To further increase the diversity, the final $\hat{o}$ is chosen from the top-k semantic neighbors of $o^\prime$ as in \textsc{Neigh}, including $o^\prime$ itself.
\structn~is a sequential perturbation procedure, where for each node the perturbation is conditioned on the current graph state. In contrast, \textsc{Rand} and \textsc{Neigh} perturb all $L\cdot n$ nodes in parallel.\looseness-1
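To make the sampling step concrete, below is a simplified sketch of a single \structn-style node perturbation (hypothetical variable names; it omits the final top-k semantic-neighbor step and other details of our released implementation):
\begin{verbatim}
# Simplified sketch of perturbing one node with category o_i.
# triplet_counts[(s, p, o)] holds training-set counts of categorical triplets,
# categories is the list of object classes, alpha >= 1 is the frequency threshold.
import random

def perturb_node(o_i, incident_triplets, triplet_counts, categories, alpha):
    probs = {}
    for (s, p, o) in incident_triplets:          # triplets containing node i
        for c in categories:
            if c == o_i:
                continue
            cand = (c, p, o) if s == o_i else (s, p, c)
            n_c = triplet_counts.get(cand, 0)
            if n_c >= alpha:                     # drop too-rare candidates
                probs[c] = probs.get(c, 0.0) + 1.0 / n_c
    if not probs:
        return o_i                               # keep the original category
    cands, weights = zip(*probs.items())         # weights need not be normalized
    return random.choices(cands, weights=weights, k=1)[0]
\end{verbatim}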
\textbf{Bounding boxes.} Since we perturb only a few nodes, for simplicity we assume that the perturbed graph has the same bounding boxes $B$: $\hat{B}=B$. While one can reasonably argue that object sizes and positions vary a lot depending on the category, i.e. ``elephant'' is much larger than ``dog'', we can often find instances disproving that, \eg~if a toy ``elephant'' or a drawing of an elephant is present. Empirically we found this approach to work well.
%Please see \S~\ref{sec:pred_box} in \apdx~for the experiments with predicting $\hat{B}$ conditioned on $\pgraph$.
\subsubsection{Scene Graph to Visual Features\label{sec:generation}}
Given perturbed $(\pgraph, \hat{B})$, the next step in our GAN-based pipeline is to generate visual features (Figure~\ref{fig:iccv_overview}).
To train such a model, we first need to extract real features from the dataset ${\cal D}=\{(I,\graph,B)\}^N$.
Following~\citep{xu2017scene,zellers2018neural}, we use a pretrained and frozen object detector~\citep{ren2015faster} to extract global visual features $H$ from input images.
Then, given $B$ and $H$, we use RoIAlign~\citep{he2017mask} to extract visual features $(V,E)$ of nodes and edges, respectively. To extract edge features between a pair of nodes, the union of their bounding boxes is used~\citep{zellers2018neural}.
Since we do not update the detector, we do not need to generate images as in scene-graph-to-image models~\citep{johnson2018image}, just intermediate features $\hat{H}, \hat{V}, \hat{E}$.
\textbf{Main scene graph classification model $F$.}
Given extracted $(V,E)$, the main model $F$ predicts a scene graph $\graph=(O,R)$, i.e. it needs to correctly assign object labels $O$ to node features $V$ and predicate classes $R$ to edge features $E$.
Our pipeline is not constrained to the choice of $F$.
\textbf{Generator $G$.}
Our scene-graph-to-features generator $G$ follows the architecture of~\citep{johnson2018image}. First, a scene graph $\pgraph$ is processed by a graph convolutional network (GCN) to exchange information between nodes and edges. We found it beneficial to concatenate output GCN features of all nodes with visual features $V^\prime$, where $V^\prime$ are sampled from the set $\{V_{o_i}\}$ precomputed at the previous stage and ${o_i}$ is the category of node $i$.
By conditioning the generator on visual features, the main task of $G$ becomes simply to align and smooth the features appropriately, which we believe is easier than generating visual features from the categorical distribution.
In addition, the randomness of this sampling step injects noise improving the diversity of generated features.
The generated node features and the bounding boxes $\hat{B}$ are used to construct the layout followed by feature refinement~\citep{johnson2018image} to generate $\hat{H}$.
Afterwards, $(\hat{V}, \hat{E})$ are extracted from $\hat{H}$ the same way as $(V,E)$.\looseness=-1
\textbf{Discriminators $D$.}
We have independent discriminators for nodes and edges, $D_{\text{node}}$ and $D_{\text{edge}}$, that discriminate real features ($V$, $E$) from fake ones ($\hat{V}$, $\hat{E}$) conditioned on their class as per the CGAN~\citep{mirza2014conditional,radford2015unsupervised}. We add a global discriminator $D_{\text{global}}$ acting on feature maps $H$, which encourages global consistency between nodes and edges.
Thus, $D_{\text{node}}$ and $D_{\text{edge}}$ are trained to match marginal distributions, while $D_{\text{global}}$ is trained to match the joint distribution. The right balance between these discriminators should enable the generation of realistic visual features conditioned on OOD scene graphs. Please see our source code for the detailed architectures of $D$ and $G$.\looseness-1
\textbf{Losses.}
To train our generative model, we define several losses. These include the baseline SG classification loss \eqref{eq:baseline} and ones specific to our generative pipeline \eqref{eq:rec}-\eqref{eq:adv_full}. The latter are motivated by a CycleGAN~\citep{zhu2017unpaired} and, similarly, consist of the reconstruction and adversarial losses~\eqref{eq:rec}-\eqref{eq:adv_full}.
We use an improved \textbf{scene graph classification loss} from~\citep{knyazev2020graph}, which is a sum of the node cross-entropy loss ${\cal L}^{O}$ and graph density-normalized edge cross-entropy loss ${\cal L}^{R}$:
%
%\vspace{-5pt}
%\setlength{\abovedisplayskip}{2pt}
%\setlength{\belowdisplayskip}{2pt}
\begin{align}
\label{eq:baseline}
{\cal L}_\text{CLS} = {\cal L}(F(V, E), \graph) = {\cal L}^{O}(F( V, E), O) + {\cal L}^{R}(F( V, E), R).
\end{align}
%
${\cal L}^{R}$ is computed based on the ratio of foreground (annotated) to background (not annotated) edges in a batch of scene graphs~\citep{knyazev2020graph}.
To improve $F$ by training it on augmented features $(\hat{V}, \hat{E})$, we define the \textbf{reconstruction (cycle-consistency) loss} analogous to \eqref{eq:baseline}:
%
\begin{align}
\label{eq:rec}
{\cal L}_\text{REC} = {\cal L}(F(G(\pgraph, \hat{B}, V^\prime)), {\pgraph}) = {\cal L}^{O}(F( \hat{V}, \hat{E}), \hat{O}) + {\cal L}^{R}(F( \hat{V}, \hat{E} ), R).
\end{align}
%
We do not update $G$ on this loss to prevent its potential undesirable collaboration with $F$.
Instead, to train $G$ as well as $D$, we optimize \textbf{conditional adversarial losses}~\citep{mirza2014conditional}.
We first write these separately for $D$ and $G$ in a general form.
So, for some features $\bm{x}$ and their corresponding class $\bm{y}$:
%
\begin{align}
\label{eq:adv_D}
\mathcal{L}^D_{\text{ADV}}(\bm{x}, \bm{y}) =& \ \mathbb{E}_{\bm{x} \sim p_{\text{data}}(\bm{x})}[\log D(\bm{x}|\bm{y})] + \mathbb{E}_{\pgraph \sim p_{\pgraph}(\pgraph)}[\log (1-D(G(\pgraph)|\bm{y}))] \\
\mathcal{L}^G_{\text{ADV}}(\bm{y}) =& \ \mathbb{E}_{\pgraph \sim p_{\pgraph}(\pgraph)}[\log D(G(\pgraph)| \bm{y}) ].
\end{align}
%
We compute these losses for object and edge visual features by using the discriminators $D_{\text{node}}$ and $D_{\text{edge}}$. This loss is also computed for global features $H$ using $D_{\text{global}}$, so that the total discriminator and generator losses are:
%
\begin{align}
\label{eq:adv_full}
\mathcal{L}^D_{\text{ADV}} &= \mathcal{L}^D_{\text{ADV}}(V,O) + \mathcal{L}^D_{\text{ADV}}(E,R) + \mathcal{L}^D_{\text{ADV}}(H,\emptyset) \nonumber \\
\mathcal{L}^G_{\text{ADV}} &= \mathcal{L}^G_{\text{ADV}}(O) + \mathcal{L}^G_{\text{ADV}}(R) + \mathcal{L}^G_{\text{ADV}}(\emptyset),
\end{align}
%
\noindent where $\emptyset$ denotes that our global discriminator is unconditional for simplicity.
Thus, the total loss to minimize is:
\vspace{-5pt}
\begin{align}
\label{eq:total_loss}
\mathcal{L} = \underbrace{{\cal L}_{\text{CLS}} + {\cal L}_{\text{REC}}}_{\text{update } F} - \gamma(\underbrace{\mathcal{L}^D_{\text{ADV}}}_{\text{update } D} + \underbrace{\mathcal{L}^G_{\text{ADV}}}_{\text{update } G}),
\end{align}
%
\noindent where the loss weight $\gamma=5$ worked well in our experiments.
Compared to a similar work of~\citep{wang2019generating}, in our model all of its components ($F,D,G$) are learned jointly end-to-end.
\subsection{Semantic plausibility of scene graphs\label{sec:sg_quality}}
\textbf{Language model.} To directly evaluate the quality of perturbations, it is desirable to have some quantitative measure other than downstream SGG performance. We found that a cheap (relative to human evaluation) and effective way to achieve this goal is to use a language model. In particular, we use a pretrained BERT~\citep{devlin2018bert} model and estimate the ``semantic plausibility'' of both ground truth and perturbed scene graphs in the following way.
We create a textual query from a scene graph by concatenating all triplets (in a random order). We then mask out one of the perturbed nodes (in case of $\pgraph$) or a random node (in case of $\graph$) in the triplet, so that BERT can return (unnormalized) likelihood scores for the object category of the masked out token.
We have also considered using this strategy to create SG perturbations as an alternative to \structn. However, we did not find it effective for obtaining rare scene graphs, since BERT is not grounded to visual concepts and not aware of what is considered ``rare'' in a particular SG dataset. For qualitative evaluation and when BERT scores are averaged over many samples, we found them still useful as a rough measure of SG quality.
%Please see \S~\ref{apdx:bert} in \apdx~for an example of the BERT-based estimation of scene graph quality.\looseness-1
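A minimal sketch of this scoring step with the HuggingFace \texttt{transformers} library (the model name and the query are illustrative; in practice we read off the score of the original or perturbed object category):
\begin{verbatim}
# Score a masked object token in a scene graph flattened into text.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# One perturbed node is masked out in the concatenated triplets.
query = "wave near person . person on [MASK] ."
for prediction in fill_mask(query):   # top predictions with their scores
    print(prediction["token_str"], prediction["score"])
\end{verbatim}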
\textbf{Hit rate}. For perturbed SGs, we compute an additional qualitative metric, which we call the `Hit rate'. Assuming we perturbed $M$ triplets in total for all training SGs, this metric computes the percentage of the triplets matching an actual annotation in an evaluation test subset (zero-, few- or all-shot).\looseness-1
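The hit rate itself reduces to a simple set lookup over categorical triplets; a minimal sketch (illustrative function name):
\begin{verbatim}
# Percentage of perturbed triplets that match an annotated triplet
# in a given evaluation subset (zero-, few- or all-shot).
def hit_rate(perturbed_triplets, eval_triplets):
    eval_set = set(eval_triplets)
    hits = sum(t in eval_set for t in perturbed_triplets)
    return 100.0 * hits / max(len(perturbed_triplets), 1)

print(hit_rate([("cup", "on", "surfboard"), ("dog", "on", "table")],
               [("cup", "on", "surfboard")]))  # 50.0
\end{verbatim}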
\section{Experiments}
\label{sec:iccv_exper}
\vspace{-3pt}
\subsection{Dataset, models and hyperparameters\label{sec:settins}}
\vspace{-3pt}
We use a publicly available SGG codebase\footnote{\url{https://github.com/rowanz/neural-motifs}} for evaluation and baseline model implementations.
For the model $F$, we use Iterative Message Passing (IMP+)~\citep{xu2017scene, zellers2018neural} and Neural Motifs (NM)~\citep{zellers2018neural}.
IMP+ shows strong compositional generalization capabilities~\citep{knyazev2020graph} and is, therefore, explored in more detail in this work.
We use an improved loss for \eqref{eq:baseline} from~\citep{knyazev2020graph}, so we denote our baselines as IMP++ and NM++. %(Table~\ref{tab:losses}).
We use the default hyperparameters and identical setups for the baseline models without a GAN and our models with a GAN. We borrow the detector Faster-RCNN with the VGG16 backbone pretrained on Visual Genome (VG) from~\citep{zellers2018neural} and use it in all our experiments. We evaluate the models on a standard split of VG~\citep{krishna2017visual}, with the 150 most frequent object classes and 50 predicate classes, introduced in~\citep{xu2017scene}. The training set has 57723 and the test set has 26446 images. Similarly to~\citep{knyazev2020graph,wang2019generating,tang2020unbiased,suhail2021energy}, in addition to the all-shot (all test scene graphs) case, we define zero-shot, 10-shot and 100-shot test subsets.
For each such subset we keep only those triplets in a scene graph that occur 0, 1-10 or 11-100 times during training and remove samples without such triplets, which results in 4519, 9602 and 16528 test scene graphs (and images) respectively.
We use a held-out validation set of 5000 images for tuning the hyperparameters.
\begin{table}[tbhp]
\setlength{\tabcolsep}{0.5pt}
\tiny
\centering
\begin{center}
\caption{\small Results on Visual Genome~\citep{krishna2017visual} using models based on IMP++~\citep{knyazev2020graph}. The top-1 result in each column is \textbf{bolded} (ignoring \oracle). \oracle~results are an upper bound estimate of ZS recall obtained by directly using ZS test triplets for perturbations. }
\label{table:iccv_main_results}
\vspace{-2pt}
\begin{tabular}{l|c|cp{0.1cm}|c|cp{0.1cm}|c|cp{0.1cm}|c|c|c}
\toprule
& \multicolumn{2}{c}{\textsc{\textbf{Zero-shot Recall}}} & &
\multicolumn{2}{c}{\textsc{\textbf{10-shot Recall}}} & &
\multicolumn{2}{c}{\textsc{\textbf{100-shot Recall}}} & &
\multicolumn{3}{c}{\textsc{\textbf{All-Shot Recall}}}\Tstrut\Bstrut\\
\textsc{\textbf{Model}} &
\multicolumn{1}{c}{\scriptsize{SGCls}} & \multicolumn{1}{c}{\scriptsize{PredCls}} & &
\multicolumn{1}{c}{\scriptsize{SGCls}} & \multicolumn{1}{c}{\scriptsize{PredCls}} & & \multicolumn{1}{c}{\scriptsize{SGCls}} & \multicolumn{1}{c}{\scriptsize{PredCls}} & & \multicolumn{1}{c}{\scriptsize{SGCls}} & \multicolumn{1}{c}{\scriptsize{PredCls}} & \multicolumn{1}{c}{\scriptsize{SGCls-mR}}\\
\cline{1-3}\cline{5-6}\cline{8-9}\cline{11-13}
%\midrule
Baseline (IMP++) & 9.27\std{0.10} & 28.14\std{0.05} & & 21.80\std{0.19} & 42.78\std{0.32} & & 40.42\std{0.02} & 67.78\std{0.07} & & 48.70\std{0.08} & 77.48\std{0.09} & 27.78\std{0.10}\Tstrut\Bstrut\\
GAN+\structn, $\alpha=2$ & \textbf{9.89}\std{0.15} & 28.90\std{0.14} & & 21.96\std{0.30} & \textbf{43.79}\std{0.27} & & 41.22\std{0.33} & 69.17\std{0.24} & & 50.06\std{0.29} & 78.98\std{0.09} & 27.79\std{0.48}\\
GAN+\structn, $\alpha=5$ & 9.62\std{0.29} & \textbf{29.18}\std{0.33} & & \textbf{22.24}\std{0.11} & 43.74\std{0.10} & & 41.39\std{0.26} & 69.11\std{0.05} & & 50.14\std{0.21} & 78.94\std{0.03} & 27.98\std{0.23}\\
GAN+\structn, $\alpha=10$ & 9.84\std{0.17} & 28.90\std{0.46} & & 22.04\std{0.33} & 43.54\std{0.36} & & 41.46\std{0.15} & 69.13\std{0.24} & & 50.10\std{0.23} & 79.00\std{0.09} & 27.68\std{0.37}\\
GAN+\structn, $\alpha=20$ & 9.65\std{0.15} & 28.68\std{0.28} & & 21.97\std{0.30} & 43.64\std{0.20} & & 41.24\std{0.08} & \textbf{69.31}\std{0.17} & & 49.89\std{0.28} & 78.95\std{0.04} & 27.42\std{0.36}\Bstrut\\
\hline\hline
\multicolumn{2}{l}{\textbf{Ablated models}} \Tstrut\\
GAN (no perturb.) & 9.25\std{0.20} & 28.66\std{0.35} & & 22.15\std{0.21} & 43.66\std{0.29} & & \textbf{41.58}\std{0.20} & 69.16\std{0.16} & & \textbf{50.38}\std{0.28} & \textbf{79.05}\std{0.08} & 28.17\std{0.08}\\
GAN+\textsc{Rand} &
9.71\std{0.09} & 28.71\std{0.40} & & 21.89\std{0.21} & 43.33\std{0.18} & & 41.01\std{0.32} & 68.88\std{0.23} & & 49.83\std{0.32} & 78.84\std{0.10} & 27.45\std{0.48}\\
GAN+\textsc{Neigh} &
9.65\std{0.04} & 28.68\std{0.40} & & 21.86\std{0.23} & 43.77\std{0.15} & & 41.25\std{0.35} & 69.07\std{0.09} & & 50.00\std{0.36} & 78.94\std{0.10} & 27.41\std{0.51}\Bstrut \\
\hline\hline
\multicolumn{2}{l}{\textbf{Other baselines}} \Tstrut\\
\textsc{Reweight} & 9.58\std{0.14} & 28.27\std{0.22} & & 22.19\std{0.09} & 42.98\std{0.17} & & 40.00\std{0.01} & 65.27\std{0.13} & & 48.13\std{0.10} & 74.68\std{0.13} & \textbf{30.95}\std{0.05}\\
\textsc{Resample}-predicates & 9.13\std{0.06} & 27.77\std{0.10} & & 21.35\std{0.05} & 42.14\std{0.16} & & 39.69\std{0.06} & 66.74\std{0.01} & & 48.23\std{0.10} & 76.59\std{0.05} & 28.44\std{0.38} \\
\textsc{Resample}-triplets & 8.94\std{0.16} & 27.66\std{0.14} & & 21.65\std{0.10} & 42.60\std{0.17} & & 39.39\std{0.08} & 66.44\std{0.06} & & 47.77\std{0.10} & 76.38\std{0.14} & 27.56\std{0.10} \\
TDE & 9.21\std{0.21} & 27.91\std{0.09} & & 21.20\std{0.16} & 41.61\std{0.32} & & 39.72\std{0.10} & 65.40\std{0.21} & & 48.35\std{0.08} & 76.22\std{0.17} & 28.25\std{0.21}\Bstrut\\
\hline\hline
\multicolumn{2}{l}{\textbf{\textsc{Oracle} perturbations $\pgraph$}} \Tstrut\\
GAN+\oracle~$\pgraph$ &
10.11\std{0.34} & 29.27\std{0.10} & & 22.05\std{0.38} & 43.78\std{0.09} & & 41.38\std{0.50} & 69.06\std{0.16} & & 50.19\std{0.36} & 79.00\std{0.08} & 27.91\std{0.56}\\
GAN+\oracle~$\pgraph + \hat{B}$ & 10.52\std{0.31} & 29.43\std{0.42} & & 21.98\std{0.39} & 43.03\std{0.13} & & 41.12\std{0.19} & 68.73\std{0.17} & & 50.05\std{0.35} & 78.65\std{0.09} & 27.52\std{0.46}\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-10pt}
\end{table}
\begin{figure}[htpb]
%\vspace{-5pt}
\centering
\small
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{ccccc}
& \textbf{(a)} Zero-shot hit rate & \textbf{(b)} 10-shot hit rate &
\textbf{(c)} 100-shot hit rate &
\textbf{(d)} All-shot hit rate \vspace{-1pt} \\
{\includegraphics[align=c,width=0.13\textwidth,trim={8cm 3.5cm 1cm 3.5cm},clip]{allshot_vs_a_n3_topk3_legend.pdf}}
&
\includegraphics[align=c,width=0.2\textwidth,trim={0 0 0 0.5cm},clip]{zs_vs_a_n2_topk5.pdf} &
\includegraphics[align=c,width=0.2\textwidth,trim={0 0 0 0.5cm},clip]{10shot_vs_a_n3_topk3.pdf}
&
\includegraphics[align=c,width=0.2\textwidth,trim={0 0 0 0.5cm},clip]{100shot_vs_a_n3_topk3.pdf}
&
\includegraphics[align=c,width=0.2\textwidth,trim={0 0 0 0.5cm},clip]{allshot_vs_a_n3_topk3.pdf} \\
\end{tabular}
\vspace{-5pt}
\caption{\small Triplet hit rates (\S~\ref{sec:sg_quality}) versus the threshold $\alpha$ on four different VG test subsets using our perturbation methods.
}
%\vspace{-5pt}
\label{fig:hit_rates}
\end{figure}
\textbf{Baselines.}
In addition to the IMP++ and NM++ baselines, we evaluate \textsc{Resample}, \textsc{Reweight} and TDE~\citep{tang2020unbiased} when combined with IMP++.
\textsc{Resample} samples training images based on the inverse frequency of predicates/triplets~\citep{tang2020unbiased}. \textsc{Reweight} increases the softmax scores of rare predicate classes.
{TDE} debiases contextual edge features of an SGG model. We use the Total Effect (TE) variant according to Eq.~6 in~\citep{tang2020unbiased}, since applying TDE to IMP++ is not straightforward due to the absence of conditioning on node labels when making predictions for edges in IMP++.
\textsc{Reweight} and TDE/TE do not require retraining IMP++.\looseness-1
\vspace{-2pt}
\textbf{GAN.} To train the generator $G$ and discriminators $D$ of a GAN, we generally follow hyperparameters suggested by SPADE~\citep{SPADE}. In particular, we use Spectral Norm~\citep{miyato2018spectral} for $D$, Batch Norm~\citep{ioffe2015batch} for $G$, and TTUR~\citep{heusel2017gans} with learning rates of 1e-4 and 2e-4 for $G$ and $D$ respectively.
\textbf{Perturbation methods (\S~\ref{sec:perturb}).}
We found that perturbing $L=20\%$ of the nodes works well across the methods, which we use in all our experiments. For \textsc{Neigh} we use top-k=10 as a compromise between too limited diversity and plausibility. For \structn, we set top-k=5, as the method enables larger diversity even with very small top-k. To train the GAN-based models with \structn, we use frequency threshold $\alpha=[2, 5, 10, 20]$.
In addition to the proposed perturbation methods, we also consider so called \oracle~perturbations.
These are created by directly using ZS triplets from the test set (all obtained triplets are the same as ZS triplets, so that zero-shot hit rate is 100\%). We also evaluate \oracle+$\hat{B}$, which in addition to exploiting test ZS triplets, uses bounding boxes from the test samples corresponding to the resulted ZS triplets. \oracle-based results are an upper bound estimate of ZS recall, highlighting the challenging nature of the task.\looseness-1
\textbf{Evaluation.} Following prior work~\citep{xu2017scene,zellers2018neural,knyazev2020graph,tang2020unbiased}, we focus our evaluation on two standard SGG tasks: scene graph classification (\textbf{SGCls}) and predicate classification (\textbf{PredCls}), using recall (R@K) metrics.
%The scene graph generation (\textbf{SGGen}) results are presented in \S~\ref{sec:sggen} in \apdx.
Unless otherwise stated, we report results with K=100 for SGCls and K=50 for PredCls, since the latter is an easier task with saturated results for K=100. We compute recall \textit{without} the graph constraint in Table~\ref{table:iccv_main_results}, since it is a less noisy metric~\citep{knyazev2020graph}.
We emphasize performance metrics that focus on the ability to recognize rare and novel visual relationship compositions~\citep{knyazev2020graph,tang2020unbiased,suhail2021energy}: \textbf{zero-shot} and \textbf{10-shot} recalls.
In Tables~\ref{table:iccv_main_results} and~\ref{table:zs_results}, the mean and standard deviations of 3 runs (random seeds) are reported.
\vspace{-3pt}
\subsection{Results\label{sec:iccv_results}}
\vspace{-3pt}
\textbf{Main SGG results (Table~\ref{table:iccv_main_results}).}
First, we compare the baseline IMP++ to our GAN-based model trained \textit{without} and \textit{with} perturbation methods.
Even without any perturbations, the GAN-based model significantly outperforms IMP++, especially on the 100-shot and all-shot recalls.
GANs with simple perturbation strategies, \textsc{Rand} (as in~\citep{wang2019generating}) and \textsc{Neigh}, improve on zero-shots, but at a drop in the 100-shot and all-shot recalls.
GANs with \structn~further improve ZS and 10-shot recalls, but compared to \textsc{Rand} and \textsc{Neigh}, also show high recalls on the 100-shots and all-shots.%\looseness-1
For \structn, there is a connection between the SGG recall results (Table~\ref{table:iccv_main_results}) and triplet hit rates (\fig{\ref{fig:hit_rates}}) for different values of the threshold $\alpha$.
Specifically,
\structn~with lower $\alpha$ values upsamples more of the rare compositions leading to higher ZS and 10-shot \textit{hit rate} (\fig{\ref{fig:hit_rates}} a,b) and, as a result, higher ZS and 10-shot \textit{recalls} (Table~\ref{table:iccv_main_results}).
\structn~with higher $\alpha$ values upsamples more of the frequent compositions leading to higher 100-shot and all-shot \textit{hit rates} (\fig{\ref{fig:hit_rates}}~c,d) and, as a result, higher 100-shot and all-shot \textit{recalls}.
Compared to \textsc{Rand} and \textsc{Neigh}, the compositions obtained using \structn~have higher triplet hit rates due to better respecting the graph structure and dataset statistics. As a result, \structn~shows overall better recalls in SGG, even approaching the \oracle~model (Table~\ref{table:iccv_main_results}).
Devising a perturbation strategy that is universally strong across all metrics is challenging. \textsc{Neigh} can be viewed as such an attempt: it achieves average hit rates on all test subsets, but lower performance on all SGG metrics.\looseness-1
\begin{table}[t]
\begin{center}
\caption{\small ZS recall results on VG using the graph constraint evaluation. $^\dagger$The results are obtained with a more advanced feature extractor and, thus, are not directly comparable.}
\vspace{-5pt}
\scriptsize
\setlength{\tabcolsep}{5pt}
\label{table:zs_results}
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\textsc{\textbf{Model}}} &
\multicolumn{2}{c}{{\textbf{SGCls}}} &
\multicolumn{2}{c}{{\textbf{PredCls}}}\Tstrut\\
& \scriptsize zsR@50 & \scriptsize zsR@100 & \scriptsize zsR@50 & \scriptsize zsR@100 \\
\midrule
\textsc{Freq}~\citep{zellers2018neural} & 0.0 & 0.0 & 0.1 & 0.1\Tstrut\\
KERN~\citep{chen2019knowledge} & $-$ & 1.5 & 3.9 & $-$\\
VCTree$^\dagger$~\citep{tang2020unbiased} & 1.9 & 2.6 & 10.8 & 14.3\Bstrut\\
\hline
NM~\citep{zellers2018neural} & 1.1 & 1.7 & 6.5 & 9.5\Tstrut\\
NM$^\dagger$~\citep{tang2020unbiased} & 2.2 & 3.0 & 10.9 & 14.5\\
NM, TDE$^\dagger$~\citep{tang2020unbiased} & 3.4 & \textbf{4.5} & 14.4 & 18.2\\
NM, EBM$^\dagger$~\citep{suhail2021energy} & 1.3 & $-$ & 4.9 & $-$\\
NM++~\citep{knyazev2020graph} & 1.8\std{0.1} & 2.3\std{0.1} & 10.2\std{0.1} & 13.4\std{0.3}\\
NM++, GAN+\structn & 2.5\std{0.1} & 3.1\std{0.1} & 14.2\std{0.0} & 17.4\std{0.3}\Bstrut\\
\hline
IMP+~\citep{xu2017scene,zellers2018neural} & 2.5 & 3.2 & 14.5 & 17.2\Tstrut\\
IMP+, EBM$^\dagger$~\citep{suhail2021energy} & 3.7 & $-$ & 18.6 & $-$\\
IMP++~\citep{knyazev2020graph} & 3.5\std{0.1} & 4.2\std{0.2} & 18.3\std{0.4} & 21.2\std{0.5}\\
IMP++, TDE & 3.5\std{0.1} & 4.3\std{0.1} & 18.5\std{0.3} & 21.5\std{0.3}\\
IMP++, GAN+\structn & 3.7\std{0.1} & 4.4\std{0.1} & 19.1\std{0.3} & 21.8\std{0.4}\\
IMP++, GAN+\structn~(max) & \textbf{3.8} & \textbf{4.5} & \textbf{19.5} & \textbf{22.4}\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-15pt}
\end{table}
Among the alternatives to our GAN approach, \textsc{Reweight} improves on zero-shots, 10-shots and mean recall (SGCls-mR) (Table~\ref{table:iccv_main_results}). However, it downweights the class scores of frequent predicates, which directly degrades the 100-shot and all-shot recalls.
\textsc{Resample} underperforms on all metrics except for SGCls-mR. The main limitation of \textsc{Resample} is that when we resample images with rare predicates/triplets, those images are likely to contain annotations of frequent predicates/triplets.
Another method, TDE~\citep{tang2020unbiased}, only debiases the predicates, similarly to \textsc{Reweight} and \textsc{Resample}-predicates. So, it may benefit little in recognizing ZS triplets such as $(\text{\texttt{cup}}, \text{\texttt{on}}, \text{\texttt{surfboard}})$, because the predicate `on' is a frequent one.
ZS compositions with such frequent predicates are abundant in VG (\fig{\ref{fig:iccv_motivation}}). Thus, debiasing only the predicates fundamentally limits TDE's performance. In contrast, our GAN method does not suffer from this limitation, since we perturb scene graphs aiming to increase \textit{compositional diversity}, not merely the frequency of rare predicates.
As a result, our GAN method improves on all metrics, \textit{especially} on ZS (in relative terms).\looseness-1
\textbf{Comparison to other SGG works (Table~\ref{table:zs_results}).}
Our GAN approach also improves the ZS recall (zsR) of other SGG models, namely NM++. For example, in PredCls, GAN+\structn~improves the zsR of NM++ by 4 percentage points.
Compared to the other previous methods presented in Table~\ref{table:zs_results}, we obtain competitive ZS results, on par with or better than TDE~\citep{tang2020unbiased} and the recent EBM~\citep{suhail2021energy}. However, it is hard to directly compare to the results reported in~\citep{tang2020unbiased,suhail2021energy} due to the different object detectors and potential implementation discrepancies.
\begin{table}[t]
\centering
\caption{\small Evaluation of generated (fake) node features using the metrics of ``similarity'' between two distributions $X$ and $Y$~\citep{kynkaanniemi2019improved,naeem2020reliable}. The same held-out set of real test features ($Y \sim V$) is used as the reference distribution in all cases. The percentage in the superscripts denotes the relative drop of the average metric when switching from test to test-zs conditioning. For all metrics, higher is better.\looseness-1}
\label{tab:gen}
\vspace{-5pt}
\scriptsize
\setlength{\tabcolsep}{5pt}
\begin{tabular}{l|cc|cc|p{1.2cm}}
\toprule
\multirow{2}{*}{\tiny\bf\textsc{Distribution $X$}} & \multicolumn{2}{c|}{\bf Fidelity (realism)} & \multicolumn{2}{c|}{\bf Diversity} & \multicolumn{1}{c}{\multirow{2}{*}{\bf \textsc{Avg}}}\Tstrut\\
& \bf \textsc{Precision} & \bf \textsc{Density}
& \bf \textsc{Recall} & \bf \textsc{Coverage} & \Bstrut\\
\midrule
Real test & 0.74 & 1.02 & 0.75 & 0.97 & 0.87 \Tstrut\\
Real test-zs & 0.66 & 0.99 & 0.70 & 0.94 & 0.82$^{-6\%}$ \\
GAN: Fake test & 0.55 & 0.77 & 0.42 & 0.82 & 0.64 \\
GAN: Fake test-zs & 0.47 & 0.60 & 0.41 & 0.75 & 0.56$^{-13\%}$\\
\bottomrule
\end{tabular}
%\vspace{-10pt}
\end{table}
\begin{figure}[t]
\centering
\footnotesize
\setlength{\tabcolsep}{6pt}
\begin{tabular}{cc}
\textbf{\textsc{Real node features}} $V$ & \textbf{\textsc{Fake node features}} $\hat{V}$\vspace{1pt}\\
\includegraphics[width=0.35\textwidth]{tsne_gan_nodes_test_zs_real.pdf} & \includegraphics[width=0.35\textwidth]{tsne_gan_nodes_test_zs_fake.pdf}\vspace{-5pt}\\
\multicolumn{2}{c}{{\includegraphics[width=0.8\textwidth, trim={3cm 8.5cm 0.5cm 0.6cm}, clip]{tsne_gan_nodes_test_zs_legend.pdf}}}
\vspace{-5pt}
\end{tabular}
\caption{\small Real \textit{vs} generated node features plotted using t-SNE.}
\label{fig:tsne}
%\vspace{-10pt}
\end{figure}
\textbf{Evaluation of generated visual features.}
We evaluate the quality of generated features of our GAN trained with \structn~by comparing the generated (fake) features to the real ones. To obtain fake node features $\hat{V}$, we condition our GAN on test SGs. To obtain real node features $V$, we apply the pretrained object detector to test images as described in \S~\ref{sec:generation}.
First, for the qualitative evaluation of node features, we group features based on the object category's super-type, \eg `people' includes all features of `man', `woman', `person', etc. When projected on a 2D space using t-SNE~\citep{van2008visualizing}, the fake features $\hat{V}$ generated using our GAN are clustered similarly to the real features $V$ (\fig{\ref{fig:tsne}}). Therefore, qualitatively our GAN generates realistic and diverse features given a scene graph.\looseness-1
Second, we evaluate GAN features quantitatively. For that purpose, we follow~\citep{devries2020instance} and use Precision, Recall~\citep{kynkaanniemi2019improved} and Density, Coverage~\citep{naeem2020reliable} metrics.
These metrics compare the manifolds spanned by real and fake features and do not require any labels.
We consider two cases: conditioning our GAN on test SGs and on test zero-shot (test-zs) SGs. The motivation is similar to~\citep{casanova2020generating}: to understand whether novel compositions confuse the GAN and lead to poor features, which in our context may result in poor training of the main model $F$.
Indeed, the features generated conditioned on test-zs SGs significantly degrade in quality compared to test SGs, especially in terms of fidelity (Table~\ref{tab:gen}). This result suggests that it is more challenging to produce realistic features for rarer compositions, which limits our approach (see \S~\ref{sec:nolimit}).
The same qualitative and quantitative experiments for edge features $(E,\hat{E})$ and global features $(H,\hat{H})$ confirm our results: (1) when conditioned on test SGs, the generated features are realistic and diverse; (2) conditioning on more rare compositions degrades feature quality.% (see \S~\ref{apdx:gan}).\looseness-1
\begin{figure}[t]
\centering
\setlength{\tabcolsep}{0pt}
\vspace{-2pt}
\begin{tabular}{c}
\includegraphics[width=0.9\textwidth]{ablations.pdf}
\end{tabular}
\vspace{-10pt}
\caption{
\small Ablations of our GAN model on SGG and feature quality metrics. Error bars denote standard deviation. For feature quality the average metric on the test-zs SGs from Table~\ref{tab:gen} is used.}
%\vspace{-13pt}
\label{fig:ablations}
\end{figure}
\textbf{Ablations (Figure~\ref{fig:ablations}).} We also performed ablations to determine the effect of the proposed GAN losses \eqref{eq:total_loss} and other design choices on the (i) SGG performance and (ii) quality of generated features. As a reference model, we use our GAN model without any perturbations.
In general, all ablated GANs degrade both in (i) and (ii), with correlated drops between (i) and (ii). So, by improving generative models in future work, we can expect larger SGG gains. One exception is the GAN without the global terms in \eqref{eq:adv_full}, which performed better on zero-shots despite having lower feature quality. This might be explained by a regularization effect. We also found that this model did not combine well with perturbations.
\textbf{Evaluating the quality of SG perturbations.}
We show examples of SG perturbations in \fig{\ref{fig:examples}}. In the case of \textsc{Rand}, most of the created triplets are implausible as a result of random perturbations. \textsc{Neigh} leads to very likely compositions, but less often provides rare plausible compositions.
In contrast, \structn~can create plausible compositions that are rare or more frequent depending on $\alpha$.
We also analyzed the quality of real and perturbed SGs using the BERT-based metric (\S~\ref{sec:sg_quality}).
We found that the overall test set has on average the highest BERT scores, while lower-shot subsets gradually decrease in ``semantic plausibility'', which aligns with our intuition. We then perturbed all nodes of all test SGs using our perturbation strategies. Surprisingly, real test-zs SGs have very low plausibility close to \textsc{Rand}-based SGs. \textsc{Neigh} produces SGs of plausibility between real 10-shot and 100-shot SGs. In contrast, with \structn~we can gradually slide between low and high plausibility, which enabled better SGG results. The BERT scores, however, are not tied to the VG dataset. So, semantic plausibility per BERT may be different from the likelihood per VG.\looseness-1
\begin{figure}[t]
\centering
%\vspace{-5pt}
\centering
\setlength{\tabcolsep}{0pt}
\begin{tabular}{cc}
\includegraphics[align=c,width=0.4\textwidth,trim={0 0 0 0},clip]{sem_plaus_1_0_vs_a.pdf} &
{\includegraphics[align=c,width=0.2\textwidth,trim={13.5cm 3cm 5.5cm 1.5cm},clip]{sem_plaus_legend.pdf}}\\
\end{tabular}
\vspace{-5pt}
\caption{\small Semantic plausibility (as per BERT) depending on $\alpha$. These results should be interpreted with caution, because: (1) the variance of scores is very high (not shown); (2) in the zero- and few-shot test subsets the graphs are significantly smaller, which affects the amount of contextual information available to BERT. }
\label{fig:results_semantic}
\end{figure}
\begin{figure}[t]
%\vspace{-5pt}
\centering
\scriptsize
\newcommand{\width}{0.22\textwidth}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{p{0.25cm}c|c|c|c}
& \includegraphics[width=0.22\textwidth]{2350517_sup.png}
& \includegraphics[width=\width,trim={2cm 0.1cm 7cm 0.1cm},clip]{2350517_gt_graph_sup.png} &
\includegraphics[width=\width,trim={2cm 0.1cm 7cm 0.1cm},clip]{2350517_rand_L0_5_topk10_a2_graph_1.png} &
\includegraphics[width=\width,trim={2cm 0.1cm 7cm 0.1cm},clip]{2350517_neigh_L0_5_topk10_a2_graph_1.png} \\
& \textsc{\textbf{Image}} & \textbf{\textsc{Original SG}} & \textbf{\textsc{Rand}} & \textbf{\textsc{Neigh}}\Bstrut\\
\toprule %\\
\multicolumn{1}{c|}{\rotatebox[origin=c]{90}{\textbf{\structn}}} & \includegraphics[align=c,width=\width,trim={2cm 0.1cm 7cm 0.1cm},clip]{2350517_structn_L0_5_topk5_a2_graph_13.png} &
\includegraphics[align=c,width=\width,trim={2cm 0.1cm 7cm 0.1cm},clip]{2350517_structn_L0_5_topk5_a5_graph_3.png} &
\includegraphics[align=c,width=\width,trim={2cm 0.1cm 7cm 0.1cm},clip]{2350517_structn_L0_5_topk5_a10_graph_7.png} & \includegraphics[align=c,width=\width,trim={1.5cm 0.1cm 7cm 0.1cm},clip]{2350517_structn_L0_5_topk5_a20_graph_2.png}\Tstrut\\
& $\alpha=2$ & $\alpha=5$ & $\alpha=10$ & $\alpha=20$ \\
\end{tabular}
\vspace{-5pt}
\caption{\small Examples of perturbations (nodes in red) applied to a scene graph. The numbers on edges denote the count of triplets in the training set and a thick red arrow denotes matching a ZS triplet.\looseness-1}
\label{fig:examples}
%\vspace{-5pt}
\end{figure}
\subsection{Limitations\label{sec:nolimit}}
\vspace{-3pt}
Our method is limited in three main aspects. \textbf{First}, we rely on a pretrained object detector to extract visual features. Without generating augmentations all the way to the images --- in order to update the detector on rare compositions --- it is hard to obtain significantly stronger performance. While augmentations in the feature space can be effective~\citep{devries2017dataset, verma2019manifold}, their adoption for large-scale out-of-distribution generalization is underexplored.
\textbf{Second}, by making a simplification and keeping GT bounding boxes for perturbed scene graphs, we limit (1) the amount of perturbations we can make (if we permit many nodes to be perturbed, then it is hard to expect the same layout), and (2) the diversity of spatial compositions, which might be an important aspect of compositional generalization.
We verified this using \oracle~perturbations, which are created by directly using ZS triplets from the test set.
Using \oracle~with GT bounding boxes (our default setting) surprisingly does not result in large improvements. However, when we replace GT boxes with the ones taken from the corresponding samples of the test set, the results improve significantly.
This demonstrates that: (1) our GAN model may benefit from reliable bounding box prediction (e.g.~\citep{hong2018inferring}); (2) \structn~perturbations are already effective (close to \oracle) and improving the results further by relying solely on perturbations is challenging.
\textbf{Third}, the quality of generated features, especially, for novel and rare compositions is currently limited, which is also carefully analyzed in~\citep{casanova2020generating}. Addressing this challenge can further improve results both of \oracle~and non-\oracle~models.
\looseness-1
\vspace{-3pt}
\section{Conclusion}
\vspace{-5pt}
We focus on the compositional generalization problem within the scene graph generation task. Our GAN-based augmentation approach can be used with different SGG models and can improve their zero-, few- and all-shot SGG results. To obtain better SGG results using our augmentations, it is important to rely on the structure of scene graphs and tune the augmentation parameters towards a specific SGG metric. Our evaluation confirmed that our augmentations provide plausible compositions and the generator generally produces high-fidelity and diverse features enabling gains in SGG.\looseness-1
%*******************************************************************************
%****************************** Sixth Chapter *********************************
%*******************************************************************************
\chapter{Analysis of LISP mobility and ns-3 implementation}
\label{cha:ns-3}
% **************************** Define Graphics Path **************************
\ifpdf
\graphicspath{{Chapter7/Pics/Raster/}{Chapter7/Pics/PDF/}{Chapter7/}}
\else
\graphicspath{{Chapter7/Pics/Vector/}{Chapter7/}}
\fi
%-< ABSTRACT >--------------------------------------------------------------------
The \emph{Locator/Identifier Separation Protocol} (LISP), due to its map-and-encap approach, can bring benefits to mobility. LISP Mobile Node (LISP-MN) builds on the basic LISP functionality to provide mobility across networks. Thus, LISP can be implemented either on the border routers or directly on the end hosts to manage mobility. However, there are no experimental results comparing the advantages and disadvantages of each solution. Assessing LISP mobility performance requires testbeds or simulators. The basic LISP architecture is deployed on the LISP Beta Network and the LISP-Lab platform to offer researchers a realistic experimental environment, but neither supports LISP-MN. Moreover, since simulators can help researchers quickly verify proposals and test new features, we implemented mobility extensions in a LISP simulator.
% Some simulation models with LISP extensions are implemented on various simulators, but are not open source. Fortunately, there is an open source project implementing the fundamental LISP on ns-3 in 2016. Providing a free and flexible LISP simulator so to help researchers quickly test new LISP mobility behaviors motivates our work.
This chapter introduces the implementation of LISP mobility extensions under ns-3.27, leveraging an open source basic LISP simulator. In addition, this chapter analyzes different LISP mobility scenarios in terms of handover delay and LISP Control Plane overhead, and describes the characteristics of each scenario. % It also provides the evaluation results in mobility scenario to validate the model and shows when the current proposal of LISP-MN is behind a LISP-site has a very high delay during the handover procedure.
The rest of the chapter is organized as follows: Sec.~\ref{sec:ns3_ns3} and Sec.~\ref{sec:ns3_basic_lisp} respectively introduce ns-3 and the existing LISP simulator built on it. Sec.~\ref{sec:ns3_lispmn} analyzes the design and implementation of our prototype. Afterwards, Sec.~\ref{sec:ns3_analysis} illustrates three different LISP scenarios supporting mobility, presents their traffic schemas, models the handover delay and LISP Control Plane overhead, and compares their advantages and disadvantages. % Sec.~\ref{sec:evaluation} presents preliminary evaluation results of our implementations.
Sec.~\ref{sec:ns3_conclusion} provides some ideas for future evaluations based on the proposed simulation.
%-< ABSTRACT >--------------------------------------------------------------------
%%-< SECTION >--------------------------------------------------------------------
%\section{Related work}
%\label{sec:ns3_related_work}
\section{NS-3}
\label{sec:ns3_ns3}
ns-3~\cite{ns3} is a popular and free discrete-event network simulator for networking research. It is developed completely in the C++ programming language. %, because it better facilitated the inclusion of C-based implementation code.
The ns-3 architecture resembles that of a Linux computer, with applications, a TCP/IP protocol stack, network interfaces, sockets, etc. ns-3 is very well documented and has an active community, which makes it easy for researchers to adapt the ns-3 source code to their needs. Besides, ns-3 offers the possibility to visualize a simulation instance, allowing users to visually confirm that packets flow as they expect.
%\begin{itemize}
% \item Description of ns-3.
% \item The simulator supporting LISP is introduced in Sec.~\ref{subsec:implementation_OMNet}.
%\end{itemize}
%-< SUB SECTION >--------------------------------------------------------------------
\section{Basic LISP implementation on ns-3}
\label{sec:ns3_basic_lisp}
Simulation is becoming more important for deploying new technologies or as a proof of concept of new protocols. In the study of LISP, there exist a few simulators based on OMNeT++~\cite{vesely2015locator, vesely2014multicast, klein2012integration} or on Java~\cite{stockmayer2016jlisp}. However, these existing simulators are not open source, which prevents other researchers from modifying or adapting them for their own research purposes.
To the best of our knowledge, the only open-source LISP simulator that we found in the literature is the one proposed by Agbodjan~\cite{lionel2016}. The authors implemented a basic LISP simulator under ns-3.24, but this work can be further polished. For example, the encoding of LISP Control Plane messages does not respect RFC 6830~\cite{rfc6830}, so Wireshark~\cite{wireshark} cannot decode the captured packets in the correct format. % and there exist some bugs when using this simulator.
More importantly, its implementation has no support for LISP mobility. % Our implementation work is inspired by work of~\cite{lionel2016}.
Leveraging the source code of Agbodjan, we implement an open source LISP simulator with LISP mobility extensions. Besides, we also address the shortcomings of the original source code. For example, the encoding of LISP control messages now follows RFC 6830~\cite{rfc6830}, so that Wireshark can correctly decode these messages for analysis. The case of the Negative Map-Reply is also covered. Recall that the work of Agbodjan targets ns-3.24, while ns-3 has evolved considerably up to the latest version, ns-3.27. Our implementation is based on ns-3.27, which allows other researchers to benefit from the newest functionalities of the ns-3 simulator.
%-< SECTION >--------------------------------------------------------------------
\section{ LISP mobility extensions on ns-3}
\label{sec:ns3_lispmn}
Our implementation respects LISP RFC 6830~\cite{rfc6830} and the LISP mobility standard~\cite{meyer2016lisp}. As a design choice, we implement the LISP and LISP mobility functionalities by modifying and extending two existing ns-3 modules, \emph{internet} and \emph{internet-apps}, instead of creating a new independent module. The justification behind this design is that the LISP protocol and the legacy Internet module have an interdependent relationship: an IP layer packet is processed by LISP and then passed to the IP protocol again. However, this kind of mutually dependent relationship between modules is not supported by ns-3. Our implementation consists of two parts: the LISP Data Plane and the LISP Control Plane. The communication between the LISP Data and Control Planes is achieved via a dedicated socket (i.e. \emph{LispMappingSocket}) that inherits from the ns-3 \emph{Socket} class. The Data Plane is implemented in ``kernel space'' (i.e. ns-3's \emph{TCP/IP stack}) and the Control Plane in ``user space'' (i.e. an ns-3 \emph{Application}). Such a design is inspired by that of OpenLISP~\cite{saucez2009openlisp}.
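To make the Data/Control Plane interaction concrete, the following minimal, self-contained C++ sketch mimics the kind of notification that travels over the mapping socket when a cache miss occurs. All names (\texttt{DataPlaneStub}, \texttt{MappingSocketMsg}, etc.) are illustrative stand-ins, not the actual ns-3 or OpenLISP interfaces.
\begin{verbatim}
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <utility>

// Kinds of messages that could cross the Data/Control Plane boundary.
enum class MappingSocketMsgType : uint8_t { CacheMiss, MapEntryAdd, MapEntryDelete };

struct MappingSocketMsg {
  MappingSocketMsgType type;
  std::string eidPrefix;   // EID whose RLOC is unknown (cache miss) or updated
};

// The Data Plane holds a callback registered by the Control Plane application,
// mimicking the LispMappingSocket channel between the two planes.
class DataPlaneStub {
 public:
  void SetNotifyCallback(std::function<void(const MappingSocketMsg&)> cb) {
    m_cb = std::move(cb);
  }
  void OnCacheMiss(const std::string& eid) {
    if (m_cb) m_cb(MappingSocketMsg{MappingSocketMsgType::CacheMiss, eid});
  }
 private:
  std::function<void(const MappingSocketMsg&)> m_cb;
};

int main() {
  DataPlaneStub dataPlane;
  // Control Plane side: react to a cache miss by issuing a Map-Request
  // (represented here by a console message).
  dataPlane.SetNotifyCallback([](const MappingSocketMsg& msg) {
    if (msg.type == MappingSocketMsgType::CacheMiss)
      std::cout << "Control Plane: send Map-Request for EID " << msg.eidPrefix << std::endl;
  });
  dataPlane.OnCacheMiss("10.1.2.0/24");
  return 0;
}
\end{verbatim}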
The UML diagram of the proposed LISP and LISP mobility implementation is shown in Fig.~\ref{LISP_UML}. The darker blocks are classes already present in ns-3, while the white blocks refer to the classes that Agbodjan~\cite{lionel2016} added to ns-3. It should be noted that our implementation keeps the same class names as Agbodjan's implementation. Except for the class~\emph{Header}, we rewrote the contents of all his classes to support LISP mobility, especially the classes~\emph{LispEtrItrApplication},~\emph{LispOverIpv4Impl}, and~\emph{BasicMapTables}. % \yue{Add one sentence to differentiate our work compared with Lionel's work.}
This work only supports IPv4 at the time of this writing. IPv6 support (i.e. the IPv6-related implementation such as \emph{LispOverIpv6Impl}) is still in progress. In addition, the authentication procedure involved in LISP is not considered in our implementation.% \yue{This figure should be revised. For example, LispHeader should be renamed as LispDataPlaneHeader. MapTable should contains a list of MapEntry.}
%-< FIGURE >--------------------------------------------------------------------
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{Pics/LISP_NS3_UML}
\caption{UML diagram of the LISP and LISP mobility implementation. A solid arrow refers to a composition relation, while a blank one refers to an inheritance relation.}
\label{LISP_UML}
\end{figure*}
%-< END FIGURE >--------------------------------------------------------------------
%-< SUB SECTION >--------------------------------------------------------------------
\subsection{Implementation of LISP Data Plane}
\label{subsec:modifyInternet}
%\begin{itemize}
% \item Modification of Receive method
% \item Modification of Delivery method
% \item Implementation of LISP encapsulation and decapsulation
%\end{itemize}
The implementation of the LISP Data Plane mainly consists of the \emph{LispOverIp} and \emph{MapTable} classes and their subclasses, alongside some auxiliary classes (e.g. \emph{LispHeader}). In addition, to support the LISP functionalities, \emph{Ipv4L3Protocol}'s packet transmission, reception, forwarding and delivery procedures are adapted accordingly.
\subsubsection{LISP database and cache}
\label{subsec:database-impl}
Each LISP-speaking node should maintain a LISP database and a LISP cache for the LISP encapsulation and decapsulation operations. In our implementation, both the LISP database and the cache are modeled by the same class~\emph{MapTable}, which stores and manages EID-RLOC mapping information. This class is in charge of the CRUD (Create, Retrieve, Update, Delete) operations for mappings. Each mapping entry in the LISP database and cache is an instance of \emph{MapEntry}. For the sake of flexibility, the class \emph{MapTable} is an abstract base class; the CRUD methods are implemented in its subclass \emph{BasicMapTable}. The mapping search operation is a straightforward iteration over the LISP database or cache. Other users can provide their own LISP database and cache implementations, for example using more sophisticated mapping lookup algorithms, by extending the \emph{MapTable} class.
In addition, \emph{MapTable} has a callback which allows sending packets (either LISP Data Plane or Control Plane) buffered due to a LISP cache miss event, once the required EID-RLOC mapping information is inserted into the LISP cache. %\yue{Recall that now I add a callback into MapTable and I only implement the resending once the required cache is inserted into cache. WE currently only use this for LISP-control messages (SMR-invoked Map Request). We can easily extend this to support other LISP data plane packets buffering and resending once the required EID-RLOC mapping is obtained.}
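The following simplified, self-contained C++ sketch illustrates this design: EID-prefix CRUD operations plus an insertion callback that can be used to flush buffered messages. The names (\texttt{SimpleMapTable}, \texttt{MapEntry}) are illustrative and do not reproduce the actual ns-3 interfaces.
\begin{verbatim}
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct MapEntry {
  std::string eidPrefix;           // e.g. "10.1.2.0/24"
  std::vector<std::string> rlocs;  // locators associated with this EID prefix
};

class SimpleMapTable {
 public:
  using InsertCallback = std::function<void(const MapEntry&)>;

  void SetInsertCallback(InsertCallback cb) { m_onInsert = std::move(cb); }

  // Create / Update: store the mapping and fire the callback, so that any
  // message buffered while the mapping was missing can be re-sent.
  void Insert(const MapEntry& e) {
    m_entries[e.eidPrefix] = e;
    if (m_onInsert) m_onInsert(e);
  }

  // Retrieve: returns nullptr when no mapping is known (cache miss).
  const MapEntry* Lookup(const std::string& eidPrefix) const {
    auto it = m_entries.find(eidPrefix);
    return it == m_entries.end() ? nullptr : &it->second;
  }

  // Delete.
  void Remove(const std::string& eidPrefix) { m_entries.erase(eidPrefix); }

 private:
  std::map<std::string, MapEntry> m_entries;
  InsertCallback m_onInsert;
};
\end{verbatim}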
\subsubsection{Implementation of LISP encapsulation and decapsulation}
To integrate LISP and LISP mobility into the conventional Internet protocol stack, one key technical difficulty is that \emph{Ipv4L3Protocol} must determine when to pass a packet being processed to the LISP-related procedures and how to retrieve the associated mapping information. To this end, a new class called \emph{LispOverIp} and its extended classes
%(refer to Fig.~\ref{LISP_UML})
are added to the ns-3 \emph{internet} module. This class is in charge of checking whether LISP-related operations are necessary (\emph{NeedEncapsulation()}, \emph{NeedDecapsulation()}), of encapsulating conventional IP packets (i.e., \emph{LispOutput()}), and of decapsulating LISP packets (\emph{LispInput()}). It contains a smart pointer\footnote{A smart pointer is an abstract data type introduced in C++ that simulates a pointer while providing additional features, such as automatic memory management or bounds checking.} to the LISP database and LISP cache (e.g.~\emph{MapTable}) on which the mapping search is executed.
%\begin{figure*}[!t]
% \centering
% \includegraphics[width=\textwidth]{Pics/ns3_lisp_data_plane.eps}
% \caption{Illustration of LISP encapsulation and decapsulation}
%% \yue{This figure is actually the process of MN, xTR1/2. I forgot to draw to the processing of xTR3 and CN. I will make a draft of the left part and send it to you as soon as possible.}
% \label{fig:ns3-lisp-data-plane}
%\end{figure*}
We take the double encapsulation example shown in Fig.~\ref{LISP_archi_2encap} to illustrate how LISP encapsulation and decapsulation are implemented. We assume that the mapping entries required during the LISP Data Plane operations are already in the LISP-MN cache. A LISP-speaking node behind $xTR_1$ needs to communicate with CN behind $xTR_3$. Thus, a packet is first encapsulated within the considered node and forwarded to $xTR_1$. Subsequently, the packet is further encapsulated and forwarded. At $xTR_3$, % xTR which serves the CN,
the received packet is decapsulated twice and forwarded. Thus, the example involves packet transmission, forwarding and reception.
%is illustrated in Fig.~\ref{fig:ns3-lisp-data-plane}.
Within the LISP-MN node, when the upper layer of the IP protocol stack calls the \emph{Send()} method of \emph{Ipv4L3Protocol}, a packet comes down to the IP layer. The \emph{Send()} method is adapted so that it first verifies whether a \emph{LispOverIp} object is present. If so, some checks are conducted to determine whether this packet should be processed by \emph{LispOutput()} (to encapsulate the packet) or by the conventional packet transmission routine. For example, if both the source and destination IP addresses of this packet belong to the same network, the LISP-related processing (e.g., encapsulation) is skipped and the packet is handled as in a non-LISP network. Otherwise, the EID-RLOC mapping information is searched in the LISP cache and LISP database of the LISP-MN node.
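The encapsulation decision just described can be summarized by the following self-contained sketch (illustrative code, not the actual \emph{LispOverIp} method):
\begin{verbatim}
#include <cstdint>

// An IPv4 prefix in host byte order: (network, mask).
struct Prefix {
  uint32_t network;
  uint32_t mask;
  bool Contains(uint32_t addr) const { return (addr & mask) == (network & mask); }
};

// Encapsulation is skipped when both addresses fall inside the local EID
// prefix (purely intra-site traffic); otherwise the LISP cache/database must
// be consulted and an outer header built.
bool NeedEncapsulation(uint32_t src, uint32_t dst, const Prefix& localEidPrefix) {
  return !(localEidPrefix.Contains(src) && localEidPrefix.Contains(dst));
}
\end{verbatim}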
After encapsulation, the considered packet is forwarded to $xTR_1$. % This procedure is also illustrated in Fig.~\ref{fig:ns3-lisp-data-plane}.
The lower layer invokes the \emph{Receive()} method of \emph{Ipv4L3Protocol} to pass this packet to the IP layer. Since this packet is destined to a remote CN rather than to $xTR_1$ itself, it is processed by the patched \emph{IpForward()} method.
In this method, the need for LISP encapsulation is checked: \emph{LispOverIp} looks up the source RLOC (the RLOC of $xTR_1$) and the destination RLOC (the RLOC of $xTR_3$) to build the outer IP header. Once this step is done, \emph{Ipv4L3Protocol}'s \emph{Send()} method passes the encapsulated packet to the MAC layer for transmission.
When this packet arrives at $xTR_3$, which serves CN, $xTR_3$ finds that the destination of the packet is the node itself, so the packet is processed by the \emph{LocalDelivery()} method of \emph{Ipv4L3Protocol}. Before passing the packet to the transport layer, \emph{LocalDelivery()} checks whether it should be decapsulated. If so, it is passed to the \emph{LispInput()} method, in which the packet is decapsulated and re-injected into the IP stack; that is, the \emph{Receive()} method is called again after the decapsulation operation. If the packet still carries a LISP header, the procedure is repeated until no further decapsulation is needed. Finally, the packet, with the LISP-MN's EID as source address and CN's address as destination, is forwarded to CN.
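The receive-side behaviour, including the handling of double encapsulation, can be sketched as follows (again a simplified, self-contained illustration rather than the actual ns-3 code):
\begin{verbatim}
#include <iostream>
#include <string>

struct Packet {
  int lispHeaders;      // number of nested LISP encapsulations still present
  std::string innerDst; // destination of the innermost IP header
};

// Stand-in for LispOverIp::LispInput(): strip one LISP + outer IP header.
void LispInput(Packet& p) { --p.lispHeaders; }

// Stand-in for the patched local delivery path: keep decapsulating and
// re-injecting the packet until no LISP header remains, then deliver/forward.
void LocalDelivery(Packet& p) {
  while (p.lispHeaders > 0) {
    LispInput(p);       // decapsulate once, re-inject into the IP stack
  }
  std::cout << "deliver/forward inner packet to " << p.innerDst << std::endl;
}

int main() {
  Packet doubleEncap{2, "CN"};  // LISP-MN and xTR1 each added one encapsulation
  LocalDelivery(doubleEncap);   // xTR3 strips both headers before forwarding to CN
  return 0;
}
\end{verbatim}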
%-< SUB SECTION >--------------------------------------------------------------------
\subsection{Implementation of LISP Control Plane}
\label{subsec:control-plane-impl}
%\begin{itemize}
% \item Implementation of xTR under ns3
% \item Implementation of MS under ns3
% \item Socket communication between control plan and data plan
%\end{itemize}
The implementation of the LISP Control Plane should at least provide the ITR/ETR, MR and MS functionalities. In practice, the ITR and ETR functionalities are usually placed on the same router, called an xTR. In our implementation, they are included in the class \emph{LispEtrItrApplication}. The functionalities of the MR and MS are respectively implemented by the classes \emph{MapResolver} and \emph{MapServer}. The LISP Control Plane messages (Map-Register, Map-Request, etc.) are represented by the derived classes of \emph{LispControlMsg}. In addition, to communicate with the LISP Data Plane, a socket class \emph{LispMappingSocket} is proposed.
\subsubsection{Implementation of xTR functionalities}
An ns-3 node that runs \emph{LispEtrItrApplication} acts as a LISP-compatible router. It should be able to communicate with \emph{LispOverIp} on the same node (e.g. to be informed of cache miss events) and with other LISP-compatible routers (e.g. via Map-Request/Map-Reply).
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{Pics/xTR_state_transition.eps}
\caption{State transition diagram of xTR application} % \yue{Good remark. First in a state transition diagram, a block refers to a `state' and an arrow refers to a certain action leading to state change. A state transition diagram identifies all possible states and reflects how a state is transited to another one. Back to this diagram, this state transition diagram is questionable, because this digram reflects the behaviors of the xTR running within LISP-node and the SMR-invoked Map-Request will be sent back to the xTR which previously initiates one SMR. For this scenario, when in state of `Wait for Map-Notify', it has 2 possible actions: if the required EID-RLOC mapping (to xTR2 for example) is known, it can switch to state of `Wait for SMR-invoked' map request by action of 'Send SMR'. Otherwise, a cache miss event occurs and this makes the xTR sends Map-Request message and enter into state of `Wait for Map-Reply'. Conclusion: to be more clear, maybe first change the legend of this figure to "State transition diagram of xTR application running on LISP-MN node. SMR-invoked Map-Request is sent back to the xTR which initiates.". Second, if you think, `cache miss' is misleading. You can change it as `Send Map-Request'. In addition, I think modify the state name `Idle' to `Listening' is better, because xTR is always listening to a UDP port for the incoming LISP control messages. Up to you to make the change. }
\label{fig:xTR-state-transition}
\end{figure*}
The state transition diagram is illustrated in Fig.~\ref{fig:xTR-state-transition}. When the destination RLOC of a processed packet is not found in the cache of the xTR (here, the xTR functionality running on the LISP-MN), a cache miss event occurs and the LISP Data Plane (e.g. \emph{LispOverIp}) notifies the \emph{LispEtrItrApplication} on the same LISP-MN node of this event via a \emph{LispMappingSocket} socket. Upon reception of the cache miss event from the LISP Data Plane (i.e. the \emph{LispOverIp} object), \emph{LispEtrItrApplication} sends a Map-Request message to the LISP mapping system. Upon reception of the Map-Reply, the received EID-RLOC mapping is inserted into the LISP cache. It should be noted that, in our implementation, all packets whose destination maps to the still unresolved RLOC are dropped until the Map-Reply message is received. There is one exception: the processing of the SMR-invoked Map-Request message. If the RLOC of the xTR initiating the SMR (actually a local RLOC) is not found, the SMR-invoked Map-Request message is buffered and sent again once the required mapping information is inserted into the LISP cache. This is achievable thanks to a callback associated with the LISP cache insert operation.
Upon reception of a Map-Request, \emph{LispEtrItrApplication} performs a database lookup on \emph{MapTable} and generates the corresponding Map-Reply containing the EID-RLOC mapping.
When the xTR application starts or when the LISP database on a node is updated (e.g. during a handover), the xTR application sends a Map-Register message and waits for a Map-Notify message. In case of a LISP database update, the xTR also sends an SMR message to all xTRs whose RLOCs are present in its cache. %, upon the reception of Map-Notify message.
According to RFC 6830~\cite{rfc6830}, the xTR receiving an SMR has two possible ways to send the SMR-invoked Map-Request: towards the mapping system, or towards the xTR initiating the SMR. Both cases are implemented in our work and can be selected through an attribute of \emph{LispEtrItrApplication}. It should be noted that, with double encapsulation, if the SMR-invoked Map-Request is sent directly to the LISP-MN node whose LRLOC is unknown to the xTR, the first SMR-invoked Map-Request cannot be sent due to a cache miss. We therefore implement a callback function within \emph{BasicMapTable} which immediately sends the buffered SMR-invoked Map-Request upon insertion of the required EID-RLOC mapping into the cache. Note that this mechanism is designed only for LISP Control Plane messages; LISP Data Plane packets are dropped in case of a cache miss.
To support the LISP-MN feature, \emph{LispEtrItrApplication} also communicates with the DHCP client application. For example, once a LISP-MN obtains an IP address from the DHCP server, \emph{LispEtrItrApplication} receives the corresponding EID-RLOC mapping and sends a Map-Register message to the Map Server.
\subsubsection{Implementation of MR and MS}
A node that runs a \emph{MapServer} application acts as the MS in a LISP-enabled network. This class has a smart pointer to a LISP database (i.e.~\emph{MapTables}) that stores all the EID-RLOC mapping information. This application always listens on UDP port 4342. Upon reception of a Map-Register message, it retrieves the EID-RLOC mapping inside, inserts the latter into the LISP database and sends a Map-Notify message in response if necessary. %\yue{Because only a certain flag of LISP header is set as 1, a Map-Notify message is sent as a response to Map-Register message. Otherwise, no map-notify is sent. This is defied by RFC 6830.}
Each time the MS receives a Map-Request message, it looks up the required EID in its database. If the EID-RLOC mapping is found, the Map Server forwards the request to the corresponding xTR; otherwise it sends a Negative Map-Reply message to the querying xTR. It is worth noting that the class \emph{MapServer} is actually an abstract base class. The real functionalities are implemented by its subclasses, namely \emph{MapServerDdt} in our implementation. This design allows researchers to easily integrate and test their own MS implementations. In the current implementation, the role of the MR is to receive Map-Request messages from the xTRs and forward them to the MS.
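A compact, self-contained sketch of this Map-Server logic is given below (illustrative classes only, not the ns-3 \emph{MapServerDdt} API); the Map-Notify is printed unconditionally here, whereas in LISP it is only sent when requested by the registering xTR:
\begin{verbatim}
#include <iostream>
#include <map>
#include <string>

class MiniMapServer {
 public:
  // Map-Register: remember which xTR currently owns this EID prefix.
  void OnMapRegister(const std::string& eidPrefix, const std::string& xtrRloc) {
    m_db[eidPrefix] = xtrRloc;
    std::cout << "Map-Notify to " << xtrRloc << std::endl;
  }
  // Map-Request: forward to the owning xTR, or answer with a Negative Map-Reply.
  void OnMapRequest(const std::string& eidPrefix, const std::string& requester) {
    auto it = m_db.find(eidPrefix);
    if (it != m_db.end())
      std::cout << "forward Map-Request for " << eidPrefix
                << " to " << it->second << std::endl;
    else
      std::cout << "Negative Map-Reply for " << eidPrefix
                << " to " << requester << std::endl;
  }
 private:
  std::map<std::string, std::string> m_db;   // EID prefix -> registering xTR RLOC
};

int main() {
  MiniMapServer ms;
  ms.OnMapRegister("10.1.2.0/24", "192.0.2.1");
  ms.OnMapRequest("10.1.2.0/24", "198.51.100.7");   // known EID: forwarded
  ms.OnMapRequest("172.16.0.0/16", "198.51.100.7"); // unknown EID: negative reply
  return 0;
}
\end{verbatim}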
\subsection{Modification of DHCP client to support LISP mobility}
\label{subsec:DHCP}
%\begin{itemize}
% \item LISP-MN, in case of IPv4, need the intervention of DHCP procedure
% \item The current version of DHCPv4 is not compatible with LISP
% \item Implementation of LISP-compatible DHCPv4 based on conventional DHCPv4
%\end{itemize}
To support LISP mobility for IPv4, the intervention of DHCP is indispensable. Since ns-3.27, a DHCP client and server have been available in the \emph{internet-apps} module. However, the DHCP client of ns-3 is not compatible with LISP. Thus, the conventional DHCP client is adapted to support LISP.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{Pics/DHCP_transition_state.eps}
\caption{LISP-compatible DHCP client state transition diagram}
\label{fig:DHCP-state-transition}
\end{figure*}
The state transition diagram of the DHCP client is illustrated in Fig.~\ref{fig:DHCP-state-transition}. When a LISP-MN node roams into the area covered by another xTR, the boot procedure of the DHCP client is triggered once the link state changes. Afterwards, the DHCP client sequentially passes through the 'INIT', 'Selecting' and 'Requesting' states and enters the 'Bound' state after the reception of a DHCP ACK message from the DHCP server. In the 'Bound' state, apart from saving the newly obtained IPv4 address (namely the LRLOC) and the default gateway, the DHCP client checks whether the LRLOC differs from the one associated with its EID in the LISP database. If the LRLOC has changed, the DHCP client enters the 'LISP database update' state. The DHCP client is equipped with a dedicated socket of type \emph{LispMappingSocket}. Through this socket, the DHCP client notifies \emph{LispEtrItrApplication} by sending a dedicated message that contains the EID-LRLOC mapping. \emph{LispEtrItrApplication} is in charge of populating the received mapping entry into the LISP database and sending a Map-Register message to the Map Server.
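The extra check performed in the 'Bound' state can be sketched as follows (simplified, self-contained code with illustrative names; in the actual implementation the notification travels over the \emph{LispMappingSocket}):
\begin{verbatim}
#include <functional>
#include <iostream>
#include <string>

struct EidLrlocMapping {
  std::string eid;    // permanent EID assigned to the TUN device
  std::string lrloc;  // locally assigned RLOC obtained via DHCP
};

// If the freshly leased address differs from the stored LRLOC, notify the xTR
// application so it can update the LISP database and send a Map-Register.
void OnDhcpBound(const std::string& leasedAddr, EidLrlocMapping& current,
                 const std::function<void(const EidLrlocMapping&)>& notifyXtrApp) {
  if (leasedAddr != current.lrloc) {   // LRLOC changed after the handover
    current.lrloc = leasedAddr;
    notifyXtrApp(current);
  }
}

int main() {
  EidLrlocMapping m{"10.0.0.5", "192.0.2.10"};
  OnDhcpBound("203.0.113.4", m, [](const EidLrlocMapping& upd) {
    std::cout << "Map-Register: EID " << upd.eid
              << " -> LRLOC " << upd.lrloc << std::endl;
  });
  return 0;
}
\end{verbatim}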
To be compatible with DHCP, the conventional LISP-related processing is also modified. For example, a DHCP Discover message (an application layer message) is transmitted with its source IP address set to $0.0.0.0$. The \emph{Send()} method of \emph{Ipv4L3Protocol} is modified so that such a message is not processed by \emph{LispOverIp}.
%-< SUB SECTION >--------------------------------------------------------------------
\subsection{Integration of TUN net interface card}
\label{subsec:tundevice}
To support mobility, a LISP-speaking node can actually be regarded as a small LISP-Site. The xTR functionalities and the DHCP client should run on the LISP-speaking node, and the addresses of the MR and MS should be configured. As a LISP-MN, it has a static permanent EID and a dynamic RLOC assigned by the DHCP server. To differentiate it from the conventional RLOC of an xTR interface, this kind of RLOC is referred to as the local RLOC (LRLOC).
%There exist several possibilities about on which \acrlong{nic}s (\acrshort{nic}s) EID is configured: IP aliasing, loop back device and TUN device. IP aliasing consists of associating more than one IP address to a network interface. In case of LISP mobility, the supplementary IP address is EID. Loop back device can be also configured with EID to support LISP encapsulation.
As a design choice, we use the solution based on a TUN device, which is also applied by LISPmob~\cite{LISPmob}. In our implementation, different from a conventional LISP node, at least two \acrshort{nic}s should be installed on the node. One is a normal \acrshort{nic} such as~\emph{WifiNetDevice}; the DHCP client application runs on this card and thus the LRLOC is allocated to it. The other is a TUN-type card. The TUN \acrshort{nic} is a virtual card which actually invokes the~\emph{Send()} method of another, real \acrshort{nic}. The permanent EID is assigned to the TUN device.
Recall that after the DHCP procedure, the node is configured with a default gateway provided by the DHCP server. The routing table of the LISP-MN is modified so that packets from the application layer always use the EID as the source IP address of the inner IP header.
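One common way to achieve this, sketched below, is to install two `half-default' routes (0.0.0.0/1 and 128.0.0.0/1) pointing to the TUN device, so that every destination is covered and the inner source address is always the EID. The snippet is an illustrative routing-decision sketch, not ns-3 configuration code, and the concrete prefixes are one possible choice rather than a definitive description of the implementation.
\begin{verbatim}
#include <cstdint>
#include <iostream>

struct Route { uint32_t network, mask; const char* ifname; };

// Two /1 prefixes jointly cover the whole IPv4 space and steer all
// application traffic through the TUN device.
const Route kLispMnRoutes[] = {
  {0x00000000u, 0x80000000u, "tun0"},   // 0.0.0.0/1   -> TUN
  {0x80000000u, 0x80000000u, "tun0"},   // 128.0.0.0/1 -> TUN
};

const char* SelectInterface(uint32_t dst) {
  for (const Route& r : kLispMnRoutes)
    if ((dst & r.mask) == r.network) return r.ifname;
  return "eth0";   // unreachable here: the two /1 prefixes cover everything
}

int main() {
  // 198.51.100.9 = 0xC6336409 falls in 128.0.0.0/1 and is routed via tun0.
  std::cout << SelectInterface(0xC6336409u) << std::endl;
  return 0;
}
\end{verbatim}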
%\begin{table}[]
% \centering
% \caption{Static route table of LISP-MN}
% \label{tab:static-route-table}
% \begin{tabular}{@{}c|c@{}}
% \hline\hline
% Destination Prefix & Interface \\ \hline
% 0.0.0.0/1 & TUN \\ \hline
% 128.0.0.1/1 & TUN \\ \hline \hline
% \end{tabular}
%\end{table}
%-< SUB SECTION >--------------------------------------------------------------------
%-< SECTION >--------------------------------------------------------------------
\section{Theoretical analysis}
\label{sec:ns3_analysis}
IP mobility leveraging LISP can be implemented either on the border router or on the end host. To explore the characteristics of each scheme, we propose the following three scenarios and analyze the overall handover delay and LISP Control Plane overhead.
\begin{enumerate}[noitemsep,topsep=0pt]
\item LISP-MN in the non-LISP-Site (i.e., only the end host supports LISP).
\item MN in the LISP-Site (i.e., only the border router supports LISP).
\item LISP-MN in the LISP-Site (i.e., both the border router and the end host support LISP).
\end{enumerate}
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{Pics/LISP_mobility_archi}
\caption{General scenario for LISP mobility architecture}
\label{sim_archi}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
% -<Descriptions with parameters>--------------------------------------------------------------------
% All the scenarios are based on a same simulation architecture shown in Fig.~\ref{sim_archi}, but with some slight differences, which will be respectively specified in the following parts from Sec.~\ref{sec:ns3_analysis_lispmn} to Sec.~\ref{sec:ns3_analysis_lispmn_xTR}. In our designed architecture, an MN is initially in the subnet of $Router_1$. An \emph{echo} application on MN sends packets to a remote stationary node CN in the LISP-Site of $xTR_3$. The distance between xTR\_1 and xTR\_2 is $170 m$. The connection between MN and xTR\_1 can be either Wi-Fi or wired link. If they use Wi-Fi, MN will move into the subnet of $Router_2$ at speed of $7.07 m/s$ after several seconds when the simulation begins. The start time of movement is a random value in the range of $[x, x] s$. At a certain moment during the moving, the Wi-Fi link between MN and $Router_1$ is down, which triggers the handover procedure. Afterwards, MN connects to $Router_2$ and reestablishes the communication with CN node. If they use wired link, the connection of MN with $Router_1$ will be down and the one with $Router_2$ will be up at the same time. This action also triggers the handover procedure. Every link between two network entities in this simulation architecture is set to $20 ms$.
The network topology for all the scenarios is shown in Fig.~\ref{sim_archi}. An MN is initially in the subnet of $Router_1$. It exchanges packets with a remote stationary node CN situated in the LISP-Site of $xTR_3$. The roles of $Router_1$ and $Router_2$ differ slightly between the scenarios; this is specified in the following parts (Sec.~\ref{sec:ns3_analysis_lispmn} to Sec.~\ref{sec:ns3_analysis_lispmn_xTR}). To estimate the LISP mobility handover delay, we do not consider the delay due to the wireless link switch and use an intermediate router to connect the three routers and the mapping system. The connection between the MN and $Router_1$ can be either a Wi-Fi or a wired link. If a Wi-Fi link is used, the MN moves into the subnet of $Router_2$ at a constant speed. At a certain moment, the Wi-Fi link between the MN and $Router_1$ goes down, which triggers the handover procedure. Afterwards, the MN connects to $Router_2$ and re-establishes the communication with the CN node. If a wired link is used, the handover procedure is simulated as follows. The MN has two interfaces respectively connected to $Router_1$ and $Router_2$. At a certain time, the connection to $Router_1$ is set down and the one to $Router_2$ is set up. At the same time, the DHCP client on the interface connected to $Router_2$ is started, which triggers the handover procedure. Using a wired link between the MN and its routers only serves to test the different mechanisms in an ideal situation; it is not a realistic scenario.
In this chapter, the overall handover delay related to LISP, $D_{overall}$, is defined as the time interval between the first and the last LISP packets exchanged during the handover procedure. % last packet received by MN from CN via $Router_1$ and the first packet received by MN from CN via $Router_2$ after the link reestablishment.
The overall handover overhead $C_{overall}$ is defined as the number of LISP Control Plane messages exchanged during the handover procedure.
Depending on the scenario, the handover delay and overhead consist of different parts. All the delay components of the LISP Control Plane procedures involved during mobility are listed in Tab.~\ref{Symbols_numerical_analysis}.
%-< TABLE >-----------------------------------------------------------------
\begin{table}[!tb]
\centering
\caption{Symbols for numerical analysis}
\label{Symbols_numerical_analysis}{
\resizebox{0.6\textwidth}{!}{%
\begin{tabular}{@{}|c|c|@{}}
\hline\hline
Symbols & Explanations \\ \hline
$D_{overall}$ & Overall handover delay related to LISP \\ \hline
$D_{DHCP}$ & DHCP address configuration delay \\ \hline
$D_{Register}$ & Delay of sending Map-Register \\ \hline
$D_{Notify}$ & Delay of receiving Map-Notify \\ \hline
$D_{Request}$ & Delay of sending Map-Request to MDS \\ \hline
$D_{Reply}$ & Delay of receiving Map-Reply \\ \hline
$D_{Resolve}$ & Delay of resolving mapping information in MDS \\ \hline
$D_{SMR}$ & Delay of sending SMR \\ \hline
$D_{Request_{SMR}}$ & Delay of sending a SMR-invoked Map-Request \\ \hline
$D_{Link}$ & Link delay between two network entities \\ \hline
$T_{A-B}$ & Delay of packet transmission between A and B \\ \hline
$T_{timeout_{SMR}}$ & Timeout of SMR \\ \hline \hline
\end{tabular}
}}
\end{table}
%-< END TABLE >-----------------------------------------------------------------
%-< SUBSECTION >--------------------------------------------------------------------
\subsection{LISP-MN in non-LISP-Site}
\label{sec:ns3_analysis_lispmn}
The first scenario is the LISP-MN in a non-LISP-Site, where the border routers are conventional routers and LISP is only implemented on the mobile end host MN. In our simulation, the LISP-MN, with its permanent EID, is initially placed in the subnet of $Router_1$, with the IP address allocated by $Router_1$ as its RLOC. The remote CN is a conventional stationary end host residing in the LISP-Site of $xTR_3$. % The LISP-MN communicates with CN by encapsulating the packets on itself and decapsulating the packets on $xTR_3$. If we use Wi-Fi, the LISP-MN moves into subnet of $Router_2$ after the simulation begins. At a certain moment during the moving, the Wi-Fi link between LISP-MN and $Router_1$ is down, whereas LISP-MN detects $Router_2$, which triggers the handover procedure. If we use wired link, after a certain time that the simulation begins, we turn down the wired link between LISP-MN and $Router_1$, while set the link between LISP-MN and $Router_2$ up at the same time.
The LISP-MN first performs a DHCP procedure when it moves to $Router_2$, so that the latter allocates a new IP address as its RLOC. Then the LISP-MN must register its new mapping information with the mapping system, and also send an SMR to the communicating nodes present in its cache (only CN in our scenario). $xTR_3$ sends an SMR-invoked Map-Request to the mapping system to obtain the new mapping information of the LISP-MN. Afterwards, the LISP-MN re-establishes the communication with the CN node via $Router_2$. The detailed traffic schema related to the handover procedure is illustrated in Fig.~\ref{sim_schema_LISPMN}. % The total simulation time is set to $45s$ and the DHCP procedure delay is set to $1s$. We conduct many times of simulations with the various beacon interval of Wi-Fi channel in the range of $0.05s$ to $2s$.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{Pics/Mobility_LISPMN_schema_SMR_simplify}
\caption{Schema for LISP-MN mobility in non-LISP-Site (SMR-invoked Map-Request is sent to the mapping system)}
\label{sim_schema_LISPMN}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
The overall handover delay in this scenario is composed of two parts: the DHCP-related delay and the LISP-related delay. The DHCP delay consists of the LISP-MN sending a DHCP Discover message to $Router_2$, receiving a DHCP Offer message, sending a DHCP Request message, and receiving a DHCP ACK message. The delays of all the LISP procedures are presented in Fig.~\ref{sim_schema_LISPMN}. This chapter only focuses on the delay caused by LISP. Thus, the overall handover delay $D_{overall}$ related to LISP for the LISP-MN in a non-LISP-Site is:
\begin{eqnarray}
D_{overall} &=& D_{Register} + D_{Notify} + D_{SMR} + D_{Request_{SMR}} + D_{Reply}
\end{eqnarray}
To facilitate the comparison between the scenarios, we provide numerical results for the designed topology shown in Fig.~\ref{sim_archi}. The delay of every link between two network entities in this architecture is set to $1 ms$. According to the experimental results of Coras et al.~\cite{coras2014performance}, we set the resolving time in the mapping system to $200 ms$. Thus, the $D_{overall}$ of the first scenario in our designed architecture is:
\begin{eqnarray}
% &=& T_{MN-MDS} + T_{MDS-MN} + T_{MN-xTR_3} + (T_{xTR_3-MDS} + D_{Resolve} + T_{MDS-MN}) + T_{MN-xTR_3} \nonumber \\
D_{overall} &=& 3T_{MN-MDS} + 2T_{MN-xTR_3} + T_{xTR_3-MDS} + D_{Resolve} \nonumber \\
&=& 3* (3*D_{Link}) + 2*(3*D_{Link}) + 2*D_{Link} + D_{Resolve}\nonumber \\
% &=& 3* (3*1ms) + 2*(3*1ms) + 2*1ms + D_{Resolve} \nonumber \\
&=& 17D_{Link} + D_{Resolve} \nonumber \\
&=& 217 ms \nonumber
\end{eqnarray}
% (Min value of handover delay = 1.073031 s, where packet sending interval = 0.01 s)
The handover overhead is 6 LISP Control Plane messages. It includes 2 registration signalings when the LISP-MN connects to $Router_2$ (Map-Register and Map-Notify), and 4 signalings related to the SMR procedure: 1 SMR from the LISP-MN to $xTR_3$, 1 SMR-invoked Map-Request from $xTR_3$ to the mapping system, 1 Map-Request forwarded by the mapping system to the LISP-MN, and 1 Map-Reply from the LISP-MN to $xTR_3$.
%\begin{eqnarray}
%C_{overall} &=& C_{Register} + C_{Notify} + C_{SMR} + C_{Request_{SMR}} + C_{Reply} \nonumber \\
%&=& C_{Register} + C_{Notify} + C_{SMR} + 2C_{Request} + C_{Reply} \nonumber \\
%&=& 6 C
%\end{eqnarray}
There are two options when $xTR_3$ receives the SMR. It can send the SMR-invoked Map-Request to the mapping system as described before, or it can directly send the SMR-invoked Map-Request to the source locator address of the SMR~\cite{rfc6830}. In this case, the source of the SMR is the LISP-MN. Thus, the overall handover delay $D_{overall}$ is as follows:
\begin{eqnarray}
D_{overall} &=& D_{Register} + D_{Notify} + D_{SMR} + D_{Request_{SMR}} + D_{Reply}
\end{eqnarray}
The numerical result is:
\begin{eqnarray}
% &=& T_{MN-MDS} + T_{MDS-MN} + T_{MN-xTR_3} + T_{xTR_3-MN} + T_{MN-xTR_3} \nonumber \\
D_{overall} &=& 2T_{MN-MDS} + 3T_{MN-xTR_3} \nonumber \\
&=& 2* (3*D_{Link}) + 3*(3*D_{Link}) \nonumber \\
% &=& 2* (3*1ms) + 3*(3*1ms) \nonumber \\
&=& 15D_{Link} \nonumber \\
&=& 15 ms \nonumber
\end{eqnarray}
Compared with the solution of sending the SMR-invoked Map-Request to the mapping system, the overall handover delay when sending the SMR-invoked Map-Request back to the source of the SMR is smaller, since there is no resolving delay in the mapping system. % This solution is more interesting for the case that the distance to mapping system is much longer than that to the source of $SMR$.
In this case, the handover overhead associated with the LISP Control Plane is 5 messages. Since the SMR-invoked Map-Request is sent directly from $xTR_3$ to the LISP-MN instead of passing through the mapping system, one signaling message less is required than in the mapping-system option.
%\begin{eqnarray}
%C_{overall} &=& C_{Register} + C_{Notify} + C_{SMR} + C_{Request_{SMR}} + C_{Reply} \nonumber \\
%&=& C_{Register} + C_{Notify} + C_{SMR} + C_{Request} + C_{Reply} \nonumber \\
%&=& 5 C
%\end{eqnarray}
The advantages of this scenario, i.e., LISP-MN in non-LISP-Site, are:
\begin{inparaenum}[1)]
\item it can achieve handover across different subnets;
\item the numerical analysis indicates that the overall handover delay is small;
\item the overall overhead is also small: compared to the other two scenarios, described in detail in the following sections, mobility in this scenario does not generate much LISP Control Plane traffic.
\end{inparaenum}
However, since the routers remain normal routers in this scenario, it cannot help to reduce the BGP routing table size, which is the initial motivation of LISP. Moreover, each LISP-MN needs a permanent IP address as its EID, which increases the burden of IPv4 address allocation. Each permanent EID and its LRLOC must be registered with the mapping system, which also increases the size of the LISP mapping table.
%-< SUBSECTION >--------------------------------------------------------------------
\subsection{MN in LISP-Site}
\label{sec:ns3_analysis_xTR}
The second scenario is the MN in a LISP-Site, where the mobile node MN is a conventional host and LISP is only implemented on the border routers. In our simulation, the MN is initially placed in the subnet of $xTR_1$, with its assigned IP address as its EID. The remote CN is the same as in the first scenario. The MN communicates with CN through packets encapsulated at $xTR_1$ and decapsulated at $xTR_3$, and the MN moves into the coverage of $xTR_2$ after the simulation begins. Since the communication should not be interrupted during mobility, this scenario restricts the movement of the MN to the same subnet, i.e., one of the EID-prefixes of $xTR_1$ is the same as one of $xTR_2$'s. % At a certain moment during the moving, the Wi-Fi link between MN and $xTR_1$ is down, whereas MN detects $xTR_2$, which triggers the switching connection procedure. If we use wired link, after a certain time that the simulation begins, the wired link between MN and $xTR_1$ is down, meanwhile the link between MN and $xTR_2$ is up.
Similar to the first scenario, the DHCP procedure is necessary to trigger the registration of the new mapping information, but MN keeps its former IP address as its EID instead of $xTR_2$ distributing a new one to it. Then $xTR_2$ registers the new mapping information to the mapping system. As the mapping system finds out that the EID of MN has already been registered and associated with $xTR_1$, it sends a Map-Notify to both xTRs. The one sent to $xTR_2$ acknowledges the reception of its Map-Register, whereas the one sent to $xTR_1$ tells it that MN is now mapped to $xTR_2$ and that the remote CN should be informed to update its mapping information. Since MN previously used $xTR_1$ to communicate with CN, only $xTR_1$ (and not $xTR_2$) stores in its cache with whom MN was exchanging packets. Thus, $xTR_1$ sends a $SMR$ to $xTR_3$, and $xTR_3$ sends an SMR-invoked Map-Request to the mapping system to obtain the new mapping information of MN. Afterwards, MN re-establishes the communication with the CN via $xTR_2$. The detailed traffic schema related to the handover procedure is shown in Fig.~\ref{sim_schema_xTR}.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{Pics/Mobility_xTR_schema_SMR_simplify}
\caption{Schema for MN mobility in LISP-Site (SMR-invoked Map-Request is sent to the mapping system)}
\label{sim_schema_xTR}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
The overall handover delay $D_{overall}$ in this scenario is almost the same as in the first one:
\begin{eqnarray}
D_{overall} &=& D_{Register} + D_{Notify} + D_{SMR} + D_{Request_{SMR}} + D_{Reply}
\end{eqnarray}
The numerical result is as follows:
\begin{eqnarray}
% &=& T_{xTR_2-MDS} + T_{MDS-xTR_1} + T_{xTR_1-xTR_3} + T_{xTR_3-MDS} + D_{Resolve} + T_{MDS-xTR_2} + T_{xTR_2-xTR_3} \nonumber \\
D_{overall} &=& 2T_{xTR_2-MDS} + T_{MDS-xTR_1} + T_{xTR_1-xTR_3} + T_{xTR_3-MDS} + \nonumber \\
& & D_{Resolve} + T_{xTR_2-xTR_3} \nonumber \\
&=& 2* (2*D_{Link}) + 2*D_{Link} + 2*D_{Link} + 2*D_{Link} + D_{Resolve} + 2*D_{Link} \nonumber \\
% &=& 2* (2*1ms) + 2*1ms + 2*1ms + 2*1ms + D_{Resolve} + 2*1ms \nonumber \\
&=& 12D_{Link} + D_{Resolve} \nonumber \\
&=& 212 ms \nonumber
\end{eqnarray}
% $CheckAlive$ is the delay that xTR\_1 checks if MN still connects to it. For example, xTR\_1 can simply \emph{ping} MN. If MN still connects to it, it will reply to mapping system that itself is still RLOC of MN. Otherwise, if \emph{ping} meets timeout, xTR\_1 will tell the mapping system that MN has left, and sends SMR to the CNs in its cache. The later situation has higher delay, because xTR\_1 needs to wait until timeout of \emph{ping}. % (Min value of simulation = 1.067679 s, where includes 1 s of DHCP delay)
The handover overhead in this scenario is 7 messages. Besides the 4 messages used for the SMR procedure, 3 signaling messages are needed to complete the registration: 1 Map-Register from $xTR_2$ to the mapping system and 2 Map-Notify messages, one to $xTR_2$ and one to $xTR_1$.
%\begin{eqnarray}
%C_{overall} &=& C_{Register} + 2C_{Notify} + C_{SMR} + C_{Request_{SMR}} + C_{Reply} \nonumber \\
%&=& C_{Register} + 2C_{Notify} + C_{SMR} + 2C_{Request} + C_{Reply} \nonumber \\
%&=& 7 C
%\end{eqnarray}
As in the first scenario, there are two options: when $xTR_3$ receives the $SMR$ from $xTR_1$, it can also send the SMR-invoked Map-Request directly back to $xTR_1$, which implies that $xTR_1$ puts the new mappings into its database. The overall handover delay $D_{overall}$ is as follows:
\begin{eqnarray}
D_{overall} &=& D_{Register} + D_{Notify} + D_{SMR} + D_{Request_{SMR}} + D_{Reply}
\end{eqnarray}
The numerical result is:
\begin{eqnarray}
% &=& T_{xTR_2-MDS} + T_{MDS-xTR_1} + T_{xTR_1-xTR_3} + T_{xTR_3-xTR_2} + T_{xTR_2-xTR_3} \nonumber \\
&=& T_{xTR_2-MDS} + T_{MDS-xTR_1} + T_{xTR_1-xTR_3} + 2T_{xTR_2-xTR_3} \nonumber \\
&=& 2*D_{Link} + 2*D_{Link} + 2*D_{Link} + 2*(2*D_{Link}) \nonumber \\
% &=& 2*1ms + 2*1ms + 2*1ms + 2*(2*1ms) \nonumber \\
&=& 10D_{Link} \nonumber \\
&=& 10 ms \nonumber
\end{eqnarray}
Since the SMR-invoked Map-Request is not sent to the mapping system, there is no resolving delay. %, and the overall handover delay is much smaller than the former solution.
However, in this scenario, as $xTR_1$ is no longer in charge of MN, how long it should keep the CN entries for MN in its cache is an important point to discuss. If the expiry time is set too long, it unnecessarily wastes the resources of $xTR_1$. % Whereas if the time is too short, there is the risk that remote xTRs of CNs like $xTR_3$ in our scenario, do not have enough time to request the new mapping information. Thus, an optimal value of timeout that offers an appropriate tradeoff between saving the sources and effectively updating the new mapping information of remote remains an open question.
Thus, finding an optimal timeout value that offers an appropriate tradeoff between saving resources and effectively providing the new mapping information remains an open issue.
As in the first scenario, directly responding to the source locator of the $SMR$ requires one signaling message fewer than first querying the mapping system. Thus, the overall handover overhead in this case is 6 messages.
%\begin{eqnarray}
%C_{overall} &=& C_{Register} + 2C_{Notify} + C_{SMR} + C_{Request_{SMR}} + C_{Reply} \nonumber \\
%&=& C_{Register} + 2C_{Notify} + C_{SMR} + C_{Request} + C_{Reply} \nonumber \\
%&=& 6 C
%\end{eqnarray}
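The same kind of back-of-the-envelope check can be written in Python for the two options of this scenario, again using the assumed 2-link paths, $D_{Link} = 1$ ms and $D_{Resolve} = 200$ ms:
\begin{verbatim}
# Scenario 2 (MN in LISP-Site): compare the two SMR-reply options.
# Assumptions: every xTR <-> MDS and xTR <-> xTR path is 2 links,
# D_Link = 1 ms, D_Resolve = 200 ms.
D_LINK, D_RESOLVE = 1.0, 200.0
def T(hops):
    return hops * D_LINK

# Option A: SMR-invoked Map-Request sent to the mapping system
# (7 messages).  Critical path: Register, Notify (to xTR_1), SMR,
# Request (xTR_3 -> MDS), resolving, Request (MDS -> xTR_2), Reply.
delay_via_mds = T(2) + T(2) + T(2) + T(2) + D_RESOLVE + T(2) + T(2)

# Option B: SMR-invoked Map-Request sent back to the source of the
# SMR (6 messages): Register, Notify, SMR, Request, Reply.
delay_direct = T(2) + T(2) + T(2) + T(2) + T(2)

print(delay_via_mds, "ms vs", delay_direct, "ms")   # 212.0 ms vs 10.0 ms
\end{verbatim}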
Since the routers support LISP, i.e., act as xTRs in this scenario, it can help to reduce the BGP routing table size, which was the initial motivation for proposing LISP. Besides, the analysis indicates that the overall handover delay of this scenario is the shortest. However, it only supports mobility within the same subnet, which means that it cannot offer handover across different subnets. Thus, this scenario is better suited to the mobility of virtual machines in the data center.
%-< SUBSECTION >--------------------------------------------------------------------
\subsection{LISP-MN in LISP-Site}
\label{sec:ns3_analysis_lispmn_xTR}
The third scenario is the LISP-MN in LISP-Site, where both the border routers and the mobile node MN implement LISP. In our simulation, the LISP-MN with a permanent EID is initially placed in the subnet of $xTR_1$, with the IP address allocated by $xTR_1$ as its LRLOC. The remote CN is still the same as in the first two scenarios: a conventional stationary end host residing in the LISP-Site of $xTR_3$. The LISP-MN communicates with CN through double encapsulation: the first encapsulation is performed by LISP-MN itself and the second by $xTR_1$. When the LISP packets arrive at $xTR_3$, it needs to decapsulate them twice. % If we use Wi-Fi, the LISP-MN moves into subnet of $xTR_2$ after the simulation begins. At a certain moment during the moving, the Wi-Fi link between LISP-MN and $xTR_1$ is down, whereas LISP-MN detects $xTR_2$, which triggers the handover procedure. If we use wired link, after a certain time that the simulation begins, we turn down the wired link between LISP-MN and $xTR_1$, while set the link between LISP-MN and $xTR_2$ up at the same time.
LISP-MN first carries out a DHCP procedure with $xTR_2$, so that the latter assigns it a new IP address as its LRLOC. Then LISP-MN needs to register its new mapping information to the mapping system and also to send a $SMR$ to all the xTRs of CNs in its cache (in our scenario, only the xTR of CN, i.e., $xTR_3$). Once $xTR_3$ receives the $SMR$, it sends an SMR-invoked Map-Request to the mapping system to obtain the new mapping information of LISP-MN. The mapping information that $xTR_3$ obtains is the $<EID_{MN}, LRLOC>$ entry of LISP-MN. At this point it is still not able to send packets to LISP-MN, since it does not know how to route packets to the $LRLOC$ of LISP-MN, i.e., it lacks the mapping information for the $LRLOC$. Only when $xTR_3$ receives packets from CN to LISP-MN, which triggers a Map-Request to the mapping system, does $xTR_3$ learn the mapping information of the LRLOC, $<LRLOC, RLOC_{xTR_2}>$. Now $xTR_3$ holds both pieces of mapping information for LISP-MN. Afterwards, LISP-MN re-establishes the communication with the CN via $xTR_2$. The detailed traffic schema related to the handover procedure is illustrated in Fig.~\ref{Mobility_double_encap_schema_SMR_askMDS_simplify}.
Since this scenario uses double encapsulation, $xTR_3$ needs to know both the inner and the outer mapping information of LISP-MN in order to send packets. The overall handover delay $D_{overall}$ in this scenario is therefore larger than in the first two scenarios. The $D_{overall}$ is:
\begin{eqnarray}
D_{overall} &=& D_{Register} + D_{Notify} + D_{SMR} + D_{Request_{SMR}} + D_{Reply} + \nonumber \\
& & D_{Request}+ D_{Reply}
\end{eqnarray}
The numerical result is as follows:
\begin{eqnarray}
% &=& 2T_{MN-MDS} + T_{MN-xTR_3} + T_{xTR_3-MDS} + D_{Resolve} + T_{MDS-MN} + T_{MN-xTR_3} + T_{xTR_3-MDS} + D_{Resolve} + T_{MDS-xTR_2} + T_{xTR_2-xTR_3} \nonumber \\
&=& 3T_{MN-MDS} + 2T_{MN-xTR_3} + 2T_{xTR_3-MDS} + 2D_{Resolve} + \nonumber \\
& & T_{MDS-xTR_2} + T_{xTR_2-xTR_3} \nonumber \\
&=& 3* (3*D_{Link}) + 2*(3*D_{Link}) + 2*(2*D_{Link}) + 2D_{Resolve} + \nonumber \\
& & 2*D_{Link} + 2*D_{Link} \nonumber \\
% &=& 3* (3*1ms) + 2*(3*1ms) + 2*(2*1ms) + 2D_{Resolve} + \nonumber \\
% & & 2*1ms + 2*1ms \nonumber \\
&=& 23D_{Link} + 2D_{Resolve} \nonumber \\
&=& 423 ms \nonumber
\end{eqnarray}
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{Pics/Mobility_double_encap_schema_SMR_askMDS_simplify}
\caption{Schema for LISP-MN mobility in LISP-Site (SMR-invoked Map-Request is sent to the mapping system)}
\label{Mobility_double_encap_schema_SMR_askMDS_simplify}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
The double encapsulation causes not only a longer handover delay but also more handover overhead in the LISP Control Plane. Two messages are needed for the registration, and 4 signaling messages (the $SMR$, the two-hop SMR-invoked Map-Request via the mapping system, and the Map-Reply) are used to obtain the $LRLOC$ of the $EID$. Besides, two more Map-Requests (one from $xTR_3$ to the mapping system, one from the mapping system to $xTR_2$) and one more Map-Reply are required to get the $RLOC$ of the $LRLOC$. Thus, the handover overhead is 9 messages, which is 3 more than the corresponding option of the first scenario.
%\begin{eqnarray}
%C_{overall} &=& C_{Register} + C_{Notify} + C_{SMR} + C_{Request_{SMR}} + C_{Reply} + C_{Request} + C_{Reply} \nonumber \\
%&=& C_{Register} + C_{Notify} + C_{SMR} + 2C_{Request} + C_{Reply} + 2C_{Request} + C_{Reply} \nonumber \\
%&=& 9 C
%\end{eqnarray}
Unlike in the first two scenarios, where sending the SMR-invoked Map-Request directly back to the source of the $SMR$ yields a smaller overall handover delay, in the third scenario this solution leads to a larger delay instead. This is caused by the double encapsulation in this scenario, whereas the first two scenarios use only single encapsulation. When $xTR_3$ receives the $SMR$ from LISP-MN, it wants to send an SMR-invoked Map-Request to LISP-MN for the mapping information $<EID_{MN}, LRLOC>$, but it does not know how to reach LISP-MN, i.e., it lacks the mapping information $<LRLOC, RLOC_{xTR_2}>$. Thus, it discards the $SMR$ and first sends a Map-Request to the mapping system, after which the mapping information $<LRLOC, RLOC_{xTR_2}>$ is stored in its cache. It then waits for the next $SMR$ in order to send the SMR-invoked Map-Request to LISP-MN. The traffic schema is shown in Fig.~\ref{Mobility_double_encap_schema_SMR_askxTR_simplify}.
%-< FIGURE >--------------------------------------------------------------------
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{Pics/Mobility_double_encap_schema_SMR_askxTR_simplify}
\caption{Schema for LISP-MN mobility in LISP-Site (SMR-invoked Map-Request is sent to the source of SMR)}
\label{Mobility_double_encap_schema_SMR_askxTR_simplify}
\end{figure}
%-< END FIGURE >--------------------------------------------------------------------
The overall handover delay $D_{overall}$ when sending the SMR-invoked Map-Request to the source of the $SMR$ in this scenario is as follows, where $T_{timeout\_SMR}$ is the interval after which the $SMR$ is re-sent if nothing is received. We set it to $1 s$ in the simulation:
\begin{eqnarray}
D_{overall} &=& D_{Register} + D_{Notify} + D_{SMR} + T_{timeout\_SMR} + D_{SMR} + \nonumber \\
& & D_{Request_{SMR}} + D_{Reply}
\end{eqnarray}
The numerical result is as follows:
\begin{eqnarray}
&=& 2T_{MN-MDS} + 4T_{MN-xTR_3} + T_{timeout\_SMR}\nonumber \\
&=& 2* (3*D_{Link}) + 4*(3*D_{Link}) + T_{timeout\_SMR} \nonumber \\
% &=& 2* (3*1ms) + 4*(3*1ms) + 1s \nonumber \\
&=& 18D_{Link} + T_{timeout\_SMR} \nonumber \\
&=& 1018 ms \nonumber
\end{eqnarray}
Although sending the SMR-invoked Map-Request directly to the source of the $SMR$ results in a traffic schema different from the one where it is sent to the mapping system, the overall handover overhead is still 9 messages. The only difference between them is the order in which $xTR_3$ obtains the mapping information: in this case the outer mapping is obtained first, whereas in the previous case the inner mapping is obtained first.
%\begin{eqnarray}
%C_{overall} &=& C_{Register} + C_{Notify} + C_{SMR} + 2C_{Request} + C_{Reply} + C_{SMR} + \nonumber \\
%& & C_{Request_{SMR}} + C_{Reply} \nonumber \\
%&=& C_{Register} + C_{Notify} + 2C_{SMR} + 3C_{Request} + 2C_{Reply} \nonumber \\
%&=& 9 C
%\end{eqnarray}
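Analogously, the following Python sketch reproduces the two delay figures of this scenario; the assumed hop counts (3 links for the LISP-MN paths, 2 links for the xTR paths), $D_{Link} = 1$ ms, $D_{Resolve} = 200$ ms and $T_{timeout\_SMR} = 1$ s are taken from the analysis above:
\begin{verbatim}
# Scenario 3 (LISP-MN in LISP-Site): compare the two SMR-reply options.
# Assumptions: LISP-MN <-> MDS and LISP-MN <-> xTR_3 paths are 3 links,
# xTR <-> MDS and xTR <-> xTR paths are 2 links, D_Link = 1 ms,
# D_Resolve = 200 ms, T_timeout_SMR = 1000 ms.
D_LINK, D_RESOLVE, T_TIMEOUT_SMR = 1.0, 200.0, 1000.0
def T(hops):
    return hops * D_LINK

# Option A: SMR-invoked Map-Request sent to the mapping system.
delay_via_mds = (3*T(3)           # Register, Notify, MDS -> MN Request leg
                 + 2*T(3)         # SMR and Map-Reply (LISP-MN <-> xTR_3)
                 + 2*T(2)         # the two xTR_3 -> MDS Map-Requests
                 + 2*D_RESOLVE    # two resolutions in the mapping system
                 + T(2) + T(2))   # MDS -> xTR_2 Request, xTR_2 -> xTR_3 Reply

# Option B: the first SMR is discarded while xTR_3 resolves the outer
# mapping; the retransmitted SMR is then answered directly.
delay_direct = 2*T(3) + 4*T(3) + T_TIMEOUT_SMR

print(delay_via_mds, "ms vs", delay_direct, "ms")   # 423.0 ms vs 1018.0 ms
\end{verbatim}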
%%-< FIGURE >--------------------------------------------------------------------
%\begin{figure}[!t]
% \centering
% \includegraphics[width=0.8\textwidth]{Pics/Mobility_double_encap_schema_SMR_improving_simplify}
% \caption{Schema for LISP-MN in LISP-Site mobility}
% \label{Mobility_double_encap_schema_SMR_improving_simplify}
%\end{figure}
%%-< END FIGURE >--------------------------------------------------------------------
%If we do not consider the security issues, the handover schema can be simplified as shown in Fig.~\ref{Mobility_double_encap_schema_SMR_improving_simplify}.
%\begin{eqnarray}
% D_{overall} &=& D_{DHCP} + D_{Register} + D_{Notify} + D_{SMR} + D_{Request} + D_{Reply} + D_{Request_{SMR}} + D_{Reply} \nonumber \\
% &=& D_{DHCP} + 2T_{LISPMN-MDS} + D_{Resolve} + 3T_{LISPMN-xTR_3} \nonumber \\
% &=& D_{DHCP} + 2* (3*2ms) + 200ms + 3*(3*2ms) \nonumber \\
% &=& D_{DHCP} + 230 ms
%\end{eqnarray}
%where $D$ is the delay, $BI$ is Beacon Interval, subscriptions $Wi-Fi$, $DHCP$ and $SMR$ respectively refers to Wi-Fi association, DHCP procedure and LISP SMR. (Min value of handover delay = 1.300349, where packet sending interval = 0.02 s)
%
%After several executions of simulation program, we observe that the overall handover delay changes by the various beacon intervals, in particular the Wi-Fi association delay depends on the different beacon intervals, whereas LISP SMR procedure always cost around $3s$. To get the lower bound of overall handover delay, we can ignore the Wi-Fi association delay when the beacon interval is $500ms$, and the latency due to DHCP procedure is always $1s$. Thus, adopting LISP-MN to conduct the host-based mobility takes at least $4s$. Compared to current most stable solution for host-based IP mobility management MIPv6, which latency including L2 and L3 in a real Wi-Fi testbed is around $3.68s$~\cite{vassiliou2010analysis}, LISP-MN has a higher delay caused by the double encapsulation mechanism introduced by LISP-MN behind LISP-Site.
%
%During handover, CN can successfully receive packets from LISP-MN right after DHCP procedure being accomplished, but LISP-MN cannot receive the packets from CN until LISP SMR procedure is also finished. Thus, during DHCP procedure, all bi-directional transmitted packets are lost. To improve the performance, \cite{tang2017lisp} proposes a network-level LISP-MN solution, but has not validated their proposals neither in simulation nor in testbed. Our ns-3 implementation can be used to realize them.
Since both the MN and the border routers support LISP, the advantages of this scenario, i.e., LISP-MN in the LISP-Site, are:
\begin{inparaenum}[1)]
\item it can help to reduce the BGP routing table size;
\item it is able to achieve handover through different subnets.
\end{inparaenum}
However, as with the first scenario, each LISP-MN needs a permanent IP address as its EID, which increases the burden of IPv4 address allocation. Each permanent EID and its LRLOC need to be registered to the mapping system, which also increases the size of the LISP mapping table. Besides, the numerical analysis indicates that the overall handover delay is much longer than in the other two scenarios due to the double encapsulation.
Based on our designed topology, all the aforementioned analytical results are presented in Fig.~\ref{handover_delay_overhead_bar}. The left-hand figure shows the numerical overall handover delay related to LISP and the right-hand figure shows the handover overhead. Note that the time values are specific to the topology used as an example, while the number of messages does not depend on the specific topology but only on the scenario.
\begin{figure}[!t]
\begin{minipage}[c]{.5\linewidth}
\begin{center}
\includegraphics[width=\linewidth]{Pics/LISP_handover_delay_Bar.eps}
\end{center}
\end{minipage}
\begin{minipage}[c]{.5\linewidth}
\begin{center}
\includegraphics[width=\linewidth]{Pics/LISP_handover_overhead_Bar}
\end{center}
\end{minipage}
\caption{The handover delay related to LISP (left) and the handover overhead (right) grouped by three LISP mobility scenarios.}
\label{handover_delay_overhead_bar}
\end{figure}
%%-< SECTION >--------------------------------------------------------------------
%\section{Evaluations}
%\label{sec:ns3_evaluation}
% It runs an \emph{echo} application which sends packets to an \emph{echo} server on a remote stationary node CN situated in the LISP-Site of $xTR_3$.
% Every link between two network entities in this simulation architecture is set to $2 ms$. According to the experimental results in~\cite{coras2014performance}, we set the resolving time in the mapping system as $200 ms$.
%%-< FIGURE >--------------------------------------------------------------------
%\begin{figure}[!t]
% \centering
% \includegraphics[width=0.7\textwidth]{Pics/LISP_mobility_LISPMN_PacketInterval}
% \caption{Impact of packet sending interval on handover delay}
% \label{LISP_mobility_LISPMN_PacketInterval}
%\end{figure}
%%-< END FIGURE >--------------------------------------------------------------------
%
%%-< FIGURE >--------------------------------------------------------------------
%\begin{figure}[!t]
% \centering
% \includegraphics[width=0.7\textwidth]{Pics/LISP_mobility_xTR_PacketInterval}
% \caption{Impact of packet sending interval on handover delay}
% \label{LISP_mobility_xTR_PacketInterval}
%\end{figure}
%%-< END FIGURE >--------------------------------------------------------------------
%
%%-< FIGURE >--------------------------------------------------------------------
%\begin{figure}[!t]
% \centering
% \includegraphics[width=0.7\textwidth]{Pics/LISP_mobility_double_encap_PacketInterval}
% \caption{Impact of packet sending interval on handover delay}
% \label{LISP_mobility_double_encap_PacketInterval}
%\end{figure}
%%-< END FIGURE >--------------------------------------------------------------------
%-< SECTION >--------------------------------------------------------------------
\section{Summary}
\label{sec:ns3_conclusion}
%\begin{itemize}
% \item The validation of the implemented simulator
% \item LISP-MN handover analysis
% \item The potential of the implemented simulator
%\end{itemize}
% As a promising technology for the future Internet architecture, LISP attracts more and more attention.
There exist some LISP simulation implementations, but they are either proprietary or do not support the LISP mobility extension. Further, although measurements on LISP testbeds can provide real-time performance, the complicated topological structure makes them somewhat of a black-box test, which hinders finding the exact explanation for some results. This highlights the importance of having an open-source simulator for LISP, in particular one that supports LISP mobility functionality. In this chapter, starting from an implementation of basic LISP on ns-3.24, we first adapt it to ns-3.27 (the latest version at the moment of writing). To help researchers track the exchange of LISP packets in depth, we encode the LISP Data Plane packets so that Wireshark can decode them. Finally, we implement the LISP mobility extensions on top of it. There are three methods to support mobility in LISP: host-based (i.e., LISP-MN), network-based (i.e., xTR), and combined host- and network-based (i.e., LISP-MN behind an xTR) mobility. We analyze the overall handover delay and the LISP Control Plane overhead for each of them and compare their performance by listing their advantages and shortcomings. At the moment of writing, we are working on the simulation evaluation of the mobility scenarios. % The simulation results show that our implementation works well, and reveal the current LISP-MN proposal with a double encapsulation that has an high level delay during handover procedure. Our simulator can be a perfect choice to test the improvements of LISP-MN.
| {
"alphanum_fraction": 0.7333692006,
"avg_line_length": 122.4728682171,
"ext": "tex",
"hexsha": "7d475a6301733907aea6516ec243c377f320b71f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f2ae3525afe1e4f5be42daca2e932addbc66e00d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "SeleneLI/YueLI_thesis",
"max_forks_repo_path": "Chapter7/chapter7.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f2ae3525afe1e4f5be42daca2e932addbc66e00d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "SeleneLI/YueLI_thesis",
"max_issues_repo_path": "Chapter7/chapter7.tex",
"max_line_length": 1558,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "f2ae3525afe1e4f5be42daca2e932addbc66e00d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "SeleneLI/YueLI_thesis",
"max_stars_repo_path": "Chapter7/chapter7.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 16555,
"size": 63196
} |
\documentclass[12pt]{article}
\usepackage[top=1in, bottom=1.25in, left=1in, right=1in]{geometry}
\usepackage[]{graphicx}
\usepackage{amsmath}
\usepackage{multicol}
\usepackage{tikz}
\usepackage{authoraftertitle}
\usepackage{hyperref}
\setlength{\parindent}{0pt}
\setcounter{secnumdepth}{0}
\graphicspath{{images/}}
\tikzstyle{phase} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm, text centered, text width=3cm, draw=black]
\tikzstyle{arrow} = [thick, ->, >=stealth]
\title{
\begin{figure}[ht]
\centering
% \hspace{0.1cm}
% \begin{minipage}[b]{0.45\linewidth}
\includegraphics[scale=0.6]{Title_Photos.png}
% \end{minipage}
% \hspace{1cm}
% \begin{minipage}[b]{0.45\linewidth}
% \includegraphics[scale=0.45]{ETHenaLogo.png}
% \end{minipage}
\end{figure}\vspace{2cm}\\{\textbf{\Huge Luno Trading Challenge}} \vspace{1cm}\\ \textbf{{\Huge ETHena}}}
\date{}
\usepackage{authoraftertitle}
\author{\\
\author{}Sanchit Ajmera\\
\small{\affaddr{Department of Computing,}}\\
\small{\affaddr{Imperial College London}} \\
\small{\email{[email protected]}}
\and \\
\author{}Luqman Liaquat\\
\small{\affaddr{Department of Computing,}}\\
\small{\affaddr{Imperial College London}} \\
\small{\email{[email protected]}}
\and\\
\author{}Manuj Mishra\\
\small{\affaddr{Department of Computing,}}\\
\small{\affaddr{Imperial College London}} \\
\small{\email{[email protected]}}
\and\\
\author{}Shivam Patel\\
\small{\affaddr{Department of Mathematics,}}\\
\small{\affaddr{Imperial College London}} \\
\small{\email{[email protected]}}
\and\\
\author{}Devam Savjani\\
\small{\affaddr{Department of Computing,}}\\
\small{\affaddr{Imperial College London}} \\
\small{\email{[email protected]}}
}
\begin{document}
\begin{titlepage}
\maketitle
\end{titlepage}
\tableofcontents
\newpage
\begin{multicols}{2}
\section{Motivation}
We are a group of first year Imperial College London students, studying computing and mathematics. We all have an interest in trading and wanted to learn how we could implement algorithms to make useful and reliable predictions about the market. As a group of technically minded individuals, we were intrigued by blockchain and cryptocurrencies and this drew us to the Luno Challenge.
\\
Some of our team members have explored trading in the past and with our combined programming skills, we were confident that we could make a substantial profit. This competition has given us the opportunity to challenge ourselves and we have gained valuable insights into the world of blockchain, cryptocurrency and algorithmic trading.
\section{Overview}
\subsection{Luno API}
Understanding and working with the Luno API was a vital step in producing a well-made bot. Our team chose to primarily implement our bot in Go despite having no experience in the language. This was because the Luno Go SDK had comprehensive documentation, allowing us to quickly and easily setup the foundations for our bots.
\subsection{Historical Data}
An important feature we built, using the Luno API \texttt{GetTicker} function, was a live data collection and storage program on six currency pairs. As historical data was not natively available, we designed our own program to produce daily spreadsheet files with minute-interval data on bid and ask prices. We ensured this data collection was automated, backed up our data regularly, and included an anti-loss feature to prevent any possible errors from corrupting our file. The recent historical data we obtained was then used to carry out thorough backtesting on each iteration of our trading bots to measure progress and assess weaknesses. Once we were satisfied with the results, we deployed ETHena on the AWS server.
\subsection{Performance Reports}
We knew that iterative development, paired with constant performance reviews, was essential in fine-tuning ETHena. This led us to develop a utility feature which produced a daily report of ETHena's performance. The report featured an automatically generated graph which allowed us to pinpoint where a buy or sell order was executed and if it was the best decision in context of the market that day. An example of this graph can be found in the appendix (see Figure 1). Our performance report generation was a pivotal development and allowed us to identify several improvements that we would not have noticed otherwise.
\subsection{Email Notifications}
To deliver our performance reports in a convenient manner, we created an integrated email system. The system allowed us to receive notifications on important events that had occurred. These notifications covered trade histories, ETHena's status, and daily performance summaries which allowed each member to stay updated on ETHena's progress.
\\
Alongside the performance reports, the emails enabled us to carry out a more automated approach in development and execution.
\section{Project flow}
After our initial data collection, we started researching a host of trading strategies. We filtered through these and picked the most promising techniques (detailed below) which we then implemented and backtested. We did this multiple times on different data sets to allow us to fine-tune the variables without overfitting to a particular dataset. The final stage was to deploy the most successful bots onto the servers for live trading on the Luno market. As discussed, these bots underwent further testing and constant improvement throughout the competition.
\begin{center}
\scalebox{0.8}{
\begin{tikzpicture}[node distance=1cm]
\node (Research) [phase] {Research};
\node (Development) [phase, bottom of=Research, yshift=-2cm] {Development};
\node (Testing) [phase, bottom of=Development, yshift=-4cm] {Testing};
\node (Deployment) [phase, bottom of=Testing, yshift=-6cm] {Deployment};
\draw [arrow] (Research) -- (Development);
\draw [arrow] (Development) -- (Testing);
\draw [arrow] (Testing) -- (Deployment);
\end{tikzpicture}}
\end{center}
\section{Research}
An integral part of our project was to research strategic trading methods to predict market trends. Our team's intention was to go beyond this research by combining several strategies, indicators and risk management techniques to develop ETHena. Our core research has been summarised below.
\subsection{Moving Average Convergence Divergence (MACD)}
This was one of the first strategies we decided to explore and implement. The strategy considers two moving averages: long term and short term. New trends are identified when the two lines cross over. When a short term moving average crosses \textbf{above} a long term moving average it is known as a 'convergence' event signalling a new uptrend. The opposite (i.e. the short term average crossing \textbf{below} the long term average) is known as a divergence event, hence the name - Moving Average Convergence Divergence.
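As an illustration, a crossover check along these lines can be sketched in Python (which we also used for our offline analysis); the 12/26 window lengths below are placeholders rather than ETHena's actual settings.
\begin{verbatim}
# Illustrative crossover check; the 12/26 windows are
# example values, not necessarily ETHena's settings.
def sma(prices, n):
    return sum(prices[-n:]) / n

def macd_signal(prices, short_n=12, long_n=26):
    prev = prices[:-1]
    was_above = sma(prev, short_n) > sma(prev, long_n)
    is_above = sma(prices, short_n) > sma(prices, long_n)
    if is_above and not was_above:
        return "buy"    # convergence: new uptrend
    if was_above and not is_above:
        return "sell"   # divergence: new downtrend
    return None
\end{verbatim}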
\subsection{Relative Strength Index (RSI)}
The Relative Strength Index is a momentum indicator, which is intended to chart the strength or weakness of a stock based on the closing prices of a recent trading period. It signals when an asset is overbought or oversold allowing one to make predictions on market trends. The RSI formula is:
\begin{align*}
RSI = & \ 100 -\frac{100}{1+RS} \\
RS = & \ \frac{\text{Average Gain}}{\text{Average Loss}}
\end{align*}
The \textbf{Average Gain} and \textbf{Average Loss} are calculated from the previous $n$ differences in close prices of each trading interval. For ETHena we used $n=14$ and varied the trading interval which allowed us to run bots with varying risk levels.
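A minimal sketch of this computation, using simple averaging over the last $n=14$ close-price differences (shown here in Python, independently of ETHena's Go implementation), is:
\begin{verbatim}
# Illustrative RSI over the last n = 14 close-price
# differences (simple averaging).
def rsi(closes, n=14):
    start = len(closes) - n
    diffs = [closes[i] - closes[i - 1]
             for i in range(start, len(closes))]
    avg_gain = sum(d for d in diffs if d > 0) / n
    avg_loss = sum(-d for d in diffs if d < 0) / n
    if avg_loss == 0:
        return 100.0     # no losses: RSI saturates
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
\end{verbatim}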
\subsection{Exponential Moving Average (EMA)}
Exponential Moving Average is a moving average which exponentially prioritises the more recent prices, whereas the simple moving average (SMA) equally prioritises all data points. Using the EMA over the SMA greatly improved the effectiveness of our MACD and RSI strategies.
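For reference, a recursive EMA with the common smoothing factor $\alpha = 2/(n+1)$ can be sketched as follows; the exact $\alpha$ used in ETHena may differ.
\begin{verbatim}
# Illustrative EMA with smoothing factor 2 / (n + 1);
# the alpha actually used in ETHena may differ.
def ema(prices, n):
    alpha = 2.0 / (n + 1.0)
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1.0 - alpha) * value
    return value
\end{verbatim}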
\subsection{Candlestick Analysis}
After researching into strategies and speaking to day traders we found candlestick analysis to be one of the major indicators used when making decisions. This type of analysis involves spotting a variety of candlestick patterns including 123 Reversal, Hammer, Inverse Hammer, Three White Soldiers and Morning Star, which can signal future trends.
\subsection{EMA - Offset Algorithm}
The Offset strategy buys when the price significantly drops and sells when the price significantly rises in comparison to the moving average. We used this strategy as we noticed there were multiple instances of the market suffering a huge loss and then recovering in a short time frame. This is also known colloquially as a Flash Crash.
\subsection{Risk Management - Trailing Stoploss}
The Trailing Stoploss strategy is used to determine when to sell by maximising profits and placing a fixed limit on potential losses. As the price rises the stoploss will rise with it. The value of the stoploss will always be proportional to the maximum price the asset has risen to since buying. Based on our backtesting, this stoploss ratio was set to 99.75\%. We also implemented an initial bail-out value such that an emergency sell order was triggered if an acquired asset significantly dropped in value immediately after buying. ETHena was set to bail after a 1\% immediate loss.\\\\
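A simplified Python sketch of this logic, using the 99.75\% trailing ratio and the 1\% bail-out quoted above (the exact arming conditions in ETHena may differ):
\begin{verbatim}
# Simplified trailing stoploss: 99.75% of the running
# maximum, with a 1% bail-out below the entry price.
# The exact arming logic in ETHena may differ.
def should_sell(buy_price, prices_since_buy,
                stop_ratio=0.9975, bail_ratio=0.99):
    high = buy_price
    for p in prices_since_buy:
        high = max(high, p)
        if p <= buy_price * bail_ratio:
            return True   # emergency bail-out
        if high > buy_price and p <= high * stop_ratio:
            return True   # trailing stop hit
    return False
\end{verbatim}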
\subsection{Final Choice}
After backtesting on several different sets of data, we created ETHena, a bot which could trade using all of our strategies. We decided to use the Trailing Stoploss risk management feature by default to ensure our sell orders were placed at the best possible closing positions. This yielded substantial profit.
\section{GUI}
To help manage the bot, we created the ETHena GUI as shown below. The drop down menu for \textit{Name} allows us to choose who is running the bot, which then automatically selects their API keys. The sliders then allow us to set which strategies are being used and the weighting applied to each strategy. The time interval option sets how often calls to the market are made, effectively increasing or decreasing risk. The select option for \textit{Live} and \textit{Offline} allows one to decide whether to trade live or backtest on historical data. The final option then allows the user to select where the main.go file is stored on their system.\\\\
\begin{minipage}[b]{1\linewidth}
\includegraphics[width =\textwidth]{GUI-Image.png}
\centering
Figure 1: ETHena GUI
\end{minipage}
\begin{minipage}[b]{1\linewidth}
\includegraphics[width =\textwidth]{LoadingScreen.png}
\centering
Figure 2: ETHena Loading page
\vspace{0.25cm}
\end{minipage}
\begin{minipage}[b]{1\linewidth}
\includegraphics[width =\textwidth]{TUI.png}
\centering
Figure 3: ETHena TUI
\end{minipage}
\section{Tools and Technologies}
The primary programming language for this project was Go. Python was used initially to assist in data analysis and graphing when measuring the performance of each strategy. It was also used to create a front end GUI for a more user-friendly experience.
\section{Conclusion}
We have thoroughly enjoyed working on this project. Not only has it improved our programming skills significantly, it has provided us with greater knowledge of technical analysis within the trading sphere. We have each learnt a lot and are particularly proud of the effort that we've put in to produce a program this complex.
\section{Contributors}
This project was implemented by five 1\textsuperscript{st} year students at Imperial College London. Below are the backgrounds of our team contributors:
\\
\subsection{Sanchit Ajmera}
\vspace{-0.375cm}\textbf{\footnotesize{Joint Mathematics \& Computing}} \\ \\
Having a strong background in mathematics and an awareness of economics allowed me to contribute to the project by designing a scalable structure with Manuj. This set the foundation for the rest of the project and from this, I implemented a model to backtest various trading strategies on historical data. Later on, I aided in the production of additional features including an email notification system, daily update reports and the ETHena TUI.
\subsection{Luqman Liaquat}
\vspace{-0.375cm}\textbf{\footnotesize{Computing}} \\ \\
With a keen interest in computing and experience in Linux, I set up and managed the AWS server instances for the live deployment of ETHena across our accounts. I also worked on anti-loss features for the data collection used within our backtesting facilities to ensure our files remained consistent. Additionally, I assisted Shivam in combining the different trading strategies intuitively. This project has sharpened my skills with the Go programming language and has significantly improved my understanding of technical analysis for trading.
\subsection{Manuj Mishra}
\vspace{-0.375cm}\textbf{\footnotesize{Joint Mathematics \& Computing}} \\ \\
My key contribution in this project was to create the RSI-only bot on which ETHena was based. After working with Sanchit to create the foundation of the codebase, I set up the utilities which allowed ETHena to trade live on the Luno market. As a student of both Mathematics and Computer Science, I was keen to involve myself in every facet of this competition - from the market analysis techniques to the roots of the codebase. I've learnt so much throughout this process and I'm very grateful for the opportunity.
\subsection{Shivam Patel}
\vspace{-0.375cm}\textbf{\footnotesize{Mathematics}} \\ \\
I have years of experience trading in stocks and a strong interest in coding. This hackathon was a great opportunity for me to bring these two interests together. My contribution to this project was providing the trading knowledge to help my team members build the bots early on in the project. As my confidence with Go increased and with Luqman's help, I also then implemented all the strategies into a single bot - ETHena - using a weighted system for each strategy.
\vspace{4cm}
\subsection{Devam Savjani}
\vspace{-0.375cm}\textbf{\footnotesize{Computing}} \\ \\
Approaching this project from a computing background, I was very well versed in programming which helped me with the implementation of the GUI. In addition to my technical contributions, I was heavily involved in the research of various strategies including the development of the candlestick analysis strategies which overall greatly developed my trading skills and financial knowledge.
\section{Acknowledgements}
We would like to thank Adam Hicks and his team at Luno for their help throughout the competition. We would also like to thank En[code] Club for organising the Spark University Hackathon. Finally, a special thanks to Anthony Beaumont for his continued support and mentorship throughout this project.
\\We are also grateful for the following people that have enabled us to accomplish this.
\begin{itemize}
\item Golang packages
\begin{enumerate}
\item Excelize - xuri
\item Tealeg - Geoffrey J. Teale
\item gomail.v2 - Alexandre Cesaro
\end{enumerate}
\item Python modules
\begin{enumerate}
\item PySimpleGui - PySimpleGui Organisation
\item pandas - Wes McKinney
\item matplotlib - John D. Hunter, Michael Droettboom et al.
\end{enumerate}
\end{itemize}
To see our demo of ETHena click \href{https://youtu.be/INVkpd85hOY}{here} or visit this link: \href{https://youtu.be/INVkpd85hOY}{https://youtu.be/INVkpd85hOY}.
\end{multicols}
\newpage
\section{Appendix}
\begin{figure}[h!]
\includegraphics[angle=270, scale=0.53]{{Graph.png}}
\centering
\caption{Graph of trading on 16-Aug}
\end{figure}
\end{document} | {
"alphanum_fraction": 0.750152532,
"avg_line_length": 80.7389162562,
"ext": "tex",
"hexsha": "70c29876352dc58ab6e7758fe4b48c025e0b291e",
"lang": "TeX",
"max_forks_count": 13,
"max_forks_repo_forks_event_max_datetime": "2021-09-17T09:10:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-12-08T13:08:03.000Z",
"max_forks_repo_head_hexsha": "63bcbcde267168c89eecbdcb785808777c18f444",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Devam-Savjani/ETHena",
"max_forks_repo_path": "docs/report.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "3a2e413db574f43983915b92a0592c0c442dc110",
"max_issues_repo_issues_event_max_datetime": "2021-09-07T08:32:21.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-01-29T09:40:22.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "luqmanl/ETHena",
"max_issues_repo_path": "docs/report.tex",
"max_line_length": 721,
"max_stars_count": 23,
"max_stars_repo_head_hexsha": "63bcbcde267168c89eecbdcb785808777c18f444",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "SanchitAjmera/ETHena",
"max_stars_repo_path": "docs/report.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-19T01:19:26.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-17T01:42:11.000Z",
"num_tokens": 3730,
"size": 16390
} |
%-------------------------
% Resume in Latex
% Author: Sourabh Bajaj, modified by Jerred Shepherd
% License: MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage{fancyhdr}
\usepackage[english]{babel}
\usepackage{tabularx}
\usepackage{xcolor}
\usepackage{hyperref}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
\item\small{
\textbf{#1}{: #2 \vspace{-2pt}}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubSubheading}[2]{
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textit{\small#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{\Large Jerred Shepherd} & Email: \href{mailto:[email protected]}{[email protected]}\\
\href{https://shepherdjerred.com/}{https://shepherdjerred.com} & GitHub: \href{https://github.com/shepherdjerred}{https://github.com/shepherdjerred}
\end{tabular*}
%-----------EDUCATION-----------------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
{Harding University}{Searcy, AR}
{Bachelor of Science in Software Development; GPA: 3.18}{August 2015 -- April 2019}
\resumeSubHeadingListEnd
%-----------EXPERIENCE-----------------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading
{RStudio}{Seattle, WA}
{Software Engineer}{September 2021 - Present}
\resumeItemListStart
\resumeItem{Placeholder}
{Nothing yet :)}
\resumeItemListEnd
\resumeSubheading
{Amazon Web Services}{Seattle, WA}
{Software Development Engineer}{July 2019 - August 2021}
\resumeItemListStart
\resumeItem{\href{https://aws.amazon.com/about-aws/whats-new/2020/05/aws-systems-manager-now-supports-resource-groups-as-targets-for-state-manager/}{State Manager resource groups feature}}
{Designed, implemented, tested, and deployed a feature which added support for resource group targets to AWS Systems Manager's desired state configuration service.}
\resumeItem{Developer productivity improvements}{Developed several productivity tools for use by the State Manager team which led to a significant time savings when developing and deploying code including a notification service for CI/CD events, a service operations report generator, and an infrastructure CLI toolkit.}
\resumeItem{Intern mentorship}{Mentored an intern during their twelve week internship who was ultimately offered a full-time position. Identified a project and scoped the requirements for the intern. Helped the intern during their onboarding, design, and the development of their project which was deployed to production.}
\resumeItem{Front end for \href{https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager.html}{AWS Change Manager}}{Implemented the front-end for AWS Change Manager using React and TypeScript.}
\resumeItem{Service fleet optimization}
{Identified and implemented ideal server hardware configuration for team's software stack. Updated service dependencies and language runtimes. These improvements led to a 66\% reduction in server infrastructure cost.}
\resumeItem{Built service in \href{https://aws.amazon.com/blogs/publicsector/announcing-the-new-aws-secret-region/}{top secret AWS region}}{Modified service code and infrastructure while meeting top secret security requirements and constraints.}
\resumeItem{Infrastructure improvements}{Significantly reduced the time to build a new AWS region for the State Manager service. Identified and implemented process improvements which led to a drastic reduction in operational work.}
\resumeItemListEnd
\resumeSubheading
{Amazon Web Services}{Seattle, WA}
{Software Development Engineer Intern}{May 2018 - July 2018}
\resumeItemListStart
\resumeItem{\href{https://aws.amazon.com/about-aws/whats-new/2019/02/aws-systems-manager-state-manager-enables-document-sharing-across-accounts/}{State Manager document sharing}}
{Designed, implemented, tested, and deployed a feature which adds cross-account document sharing for AWS Systems Manager's desired state configuration service.}
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------PROJECTS-----------------
\section{Projects}
\resumeSubHeadingListStart
\resumeSubItem{\href{https://github.com/harding-capstone/engine}{Castle Casters}}
{A cross-platform game and game engine written from scratch in Java 11. Uses OpenGL for 2D graphics rendering and netty for low-level networking with TCP and UDP sockets. Includes \href{https://github.com/harding-capstone/ai}{an AI} trained with a genetic algorithm and a \href{https://github.com/harding-capstone/logic}{robust implementation} of the \href{https://en.wikipedia.org/wiki/Quoridor}{Quoridor} board game.}
\resumeSubItem{\href{https://better-skill-capped.shepherdjerred.com/}{Better Skill Capped}}{An improved front-end for the \href{https://www.skill-capped.com/lol}{Skill Capped} website which implements features the original is lacking such as fuzzy searching, video bookmarking, and offline video viewing. Written using Python, TypeScript, and React. Hosted on AWS with Lambda and S3.}
\resumeSubItem{\href{https://github.com/shepherdjerred/gpt-2-simple-sagemaker-container}{GPT-2 SageMaker Container}}{Docker image and AWS Lambda Function to train and serve a fine-tined GPT-2 model with AWS SageMaker}
\resumeSubHeadingListEnd
%-------------------------------------------
\end{document}
| {
"alphanum_fraction": 0.7206070061,
"avg_line_length": 47.6418918919,
"ext": "tex",
"hexsha": "9c93bdaea19f4938ba845fecfb1575a2b64c08df",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1a7e22f1f200d13d454de4a66bc6a7d89d5e68c3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "shepherdjerred/resume",
"max_forks_repo_path": "resume.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1a7e22f1f200d13d454de4a66bc6a7d89d5e68c3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "shepherdjerred/resume",
"max_issues_repo_path": "resume.tex",
"max_line_length": 426,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1a7e22f1f200d13d454de4a66bc6a7d89d5e68c3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "shepherdjerred/resume",
"max_stars_repo_path": "resume.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1812,
"size": 7051
} |
\documentclass[main.tex]{subfiles}
\begin{document}
\section*{Thu Dec 12 2019}
We discuss orbits in Kerr geometry.
In general, orbits are nonplanar. Say we have an orbit which is not aligned with the rotation plane: frame dragging will make its orbital plane precess, so the orbit will span a two-dimensional region.
An exceptional case is a planar orbit with \(\theta \equiv \pi /2\).
We will treat this case. Here, we have \(\rho^2 = r^2 + a^2 \cos^2 \theta \) but \(\cos \theta =0\) and \(\sin \theta = 1\): so \(\rho^2 = r^2\). Then the line element becomes
%
\begin{subequations}
\begin{align}
\begin{split}
\dd{s^2} = - \qty(1 - \frac{2GM}{r}) \dd{t^2}
- \frac{4GMa}{r} \dd{t} \dd{\varphi } + \\
+ \frac{r^2}{\Delta } \dd{r^2} + \qty(r^2+ a^2 + \frac{2GMa^2}{r}) \dd{\varphi^2}
\,,
\end{split}
\end{align}
\end{subequations}
%
where \(\Delta = r^2 - 2GMr + a^2\). We do not even write the \(\dd{\theta^2}\) term since \(\theta \) is constant.
We only outline the steps: first of all we introduce a 4-velocity
%
\begin{align}
u^{\alpha } = \left[\begin{array}{cccc}
u^{t} & u^{r} & 0 & u^{\varphi }
\end{array}\right]^{\top}
\,.
\end{align}
We have
%
\begin{align}
e = - \xi_{t} \cdot u = - g_{00} u^{t} - g_{03} u^{\varphi }
\,,
\end{align}
%
while
%
\begin{align}
l = \xi_{\varphi } \cdot u = g_{00} u^{t} + g_{30} u^{\varphi }
\,.
\end{align}
These are conserved in the motion, and they represent the energy and the angular momentum of the particle per unit mass as observed by a far away observer.
We insert these into \(u \cdot u = -1\): this gives us
%
\begin{align}
\frac{1}{2} \qty(\dv{r}{\tau })^2 + V _{\text{eff}} (r, e, l) = \frac{e^2 - 1}{2}
\,,
\end{align}
%
with
%
\begin{align}
V _{\text{eff}} (r, e, l) = -\frac{GM}{r} + \frac{l^2 - a^2 (e^2-1)}{2r^2} - \frac{GM(l - ae)^2}{r^3}
\,,
\end{align}
%
which, as we can see, reduces to Schwarzschild for \(a=0\).
Now we will consider circular orbits. These are characterized by \(r = \const\): therefore \(\dv*{r}{\tau }=0\). So the equation reduces to
%
\begin{align}
-\frac{GM}{r} + \frac{l^2 - a^2 (e^2-1)}{2r^2} - \frac{GM(l - ae)^2}{r^3} = \frac{e^2-1}{2}
\,,
\end{align}
%
which must certainly hold, but we should also require that we are at an extremum of the potential: we impose
%
\begin{align}
\dv{V}{r} = 0
\,.
\end{align}
%
This equation looks like:
%
\begin{align}
r^2 GM - r \qty(l^2 - a^2(e^2-1)) + 3GM (l-ae)^2=0
\,,
\end{align}
%
and we should look at the solution of this where \(\dv*[2]{V}{r}>0\), so that our orbit is stable.
We are interested in the Kerr ISCO: the infimum of the set of \(r\)s defined by the conditions
%
\begin{subequations}
\begin{align}
\begin{cases}
V _{\text{eff}} (r) &= \frac{e^2-1}{2} \\
\dv{V _{\text{eff}}}{r} (r)&= 0 \\
\dv[2]{V _{\text{eff}}}{r} (r) &\geq 0
\end{cases}
\,,
\end{align}
\end{subequations}
%
which is characterized by the second derivative actually being \emph{equal} to 0. We solve these for the variables \(r, e, l\).
The algebra is extremely involved, and will not be an exam requirement. We plot the solutions in a plane \(R _{\text{ISCO}} / GM\) versus \(a/GM \in [0,1]\). When \(a \neq 0\) we actually have two separate solutions, for the different signs of \(l\) (the difference is really between the relative signs of \(l\) and \(a\): we can alternatively write \(a \in [-1, 1]\) and \(l \geq 0\)).
As \( a \rightarrow 1\) we get \(r _{\text{ISCO}} \rightarrow GM\) if we are corotating, and \(r _{\text{ISCO}} \rightarrow 9GM\) if we are counterrotating.
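Although the closed-form algebra is involved, the system above is easy to solve numerically: the following sympy sketch (a quick cross-check, not from the lecture, in units \(G = M = 1\); the initial guess is the Schwarzschild solution and \(a\) is increased gradually so that Newton's method follows the corotating branch) reproduces \(r _{\text{ISCO}} = 6GM\) at \(a = 0\) and its decrease towards \(GM\) as \(a \to 1\).
\begin{verbatim}
# Numerical check of the ISCO conditions in units G = M = 1:
# solve V_eff = (e^2 - 1)/2, V_eff' = 0, V_eff'' = 0 for (r, e, l).
import sympy as sp

r, e, l = sp.symbols('r e l', positive=True)

def isco(a, guess):
    V = -1/r + (l**2 - a**2*(e**2 - 1))/(2*r**2) - (l - a*e)**2/r**3
    eqs = (V - (e**2 - 1)/2, sp.diff(V, r), sp.diff(V, r, 2))
    return sp.nsolve(eqs, (r, e, l), guess)

# step a gradually, reusing each solution as the next initial guess
sol = (6, 0.943, 3.464)            # Schwarzschild values of (r, e, l)
for a in (0.0, 0.2, 0.4, 0.6, 0.8):
    sol = tuple(isco(a, sol))
    print(a, sol[0])               # corotating r_ISCO shrinks from 6 towards 1
\end{verbatim}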
\subsection{Ergosphere}
As we get closer to the BH, we \emph{must} spin in the same direction it is.
This holds for \emph{any} motion, not just geodesic motion. A stationary observer has \(u^{\mu } = u^{t} (1, \vec{0})\); we can show that below a certain \(r\) this cannot satisfy \(u^{\mu } g_{\mu \nu } u^{\nu } = -1\). This is
%
\begin{align}
u^2 = - \qty(1 - \frac{2GMr}{\rho^2}) (u^{t})^2 = -1
\,,
\end{align}
%
which means that, since \(\dv{t}{\tau }\geq 0\), we must have
%
\begin{align}
1 - \frac{2GMr}{r^2 + a^2 \cos^2 \theta } \geq 0
\,,
\end{align}
%
which means
%
\begin{align}
r^2 + a^2 \cos^2 \theta \geq 2GMr
\,,
\end{align}
%
and unlike Schwarzschild this is not inside the horizon: the solutions are
%
\begin{align}
r_{E\pm } = GM \pm \sqrt{(GM)^2 - a^2 \cos^2 \theta }
\,,
\end{align}
%
and the sign is positive outside of the two solutions. Recall that the horizon is given by
%
\begin{align}
r_{H \pm } = GM \pm \sqrt{(GM)^2 - a^2}
\,,
\end{align}
%
so we can see that since \(0 \leq \cos^2 \theta \leq 1\) the horizon radii are \emph{inner} with respect to the ergo radii: the inequality is
%
\begin{align}
r_{E-} \leq r_{H-} \leq r_{H+} \leq r_{E+}
\,.
\end{align}
%
so we have a region \emph{outside the horizons}: \(r_{H+} \leq r \leq r_{E+}\) in which one \emph{cannot stay at rest}. The full inequalities defining the out-of-horizon ergoregion are:
%
\begin{align}
GM + \sqrt{(GM)^2 - a^2} \leq r \leq GM + \sqrt{(GM)^2 - a^2 \cos^2 \theta }
\,,
\end{align}
%
and to see what this looks like, we fix things: if \(\theta = 0\) we have \(r_{E+}= r_{H+}\), while on the equator \(\theta = \pi /2\) we have \(r_{E+} = 2GM\), while \(r_{H+} = GM + \sqrt{(GM)^2-a^2} < 2GM\). So, the maximum extension of the ergoregion is given by
%
\begin{align}
\Delta r(\theta = \pi /2) = GM - \sqrt{(GM)^2 - a^2} = GM \qty(1 - \sqrt{1 - \qty(\frac{a}{GM})^2})
\sim \frac{a^2}{2GM}
\,
\end{align}
if \(a \ll GM\), otherwise we must do the full calculation.
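A quick numerical check (in units \(G = M = 1\)) of how good this small-\(a\) estimate is:
\begin{verbatim}
# Equatorial ergoregion width: exact vs small-a approximation (G = M = 1).
from math import sqrt
for a in (0.1, 0.5, 0.9):
    exact = 1 - sqrt(1 - a**2)   # GM - sqrt((GM)^2 - a^2)
    approx = a**2 / 2            # a^2 / (2 GM)
    print(a, round(exact, 4), round(approx, 4))
# 0.1: 0.0050 vs 0.0050;  0.5: 0.1340 vs 0.1250;  0.9: 0.5641 vs 0.4050
\end{verbatim}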
\subsection{Penrose process}
It is possible to extract energy and momentum from a black hole.
We have a particle, called ``in'', which comes from infinity, reaches the ergosphere, goes inside of it, decays into a particle which we call ``out'' which goes to infinity plus a second particle which we call ``BH'' which goes inside the BH.
All of these move with geodesic motion.
For simplicity, we consider the process in the equatorial plane although this is not necessary.
In a LIF, energy and momentum are conserved on decay:
%
\begin{align}
p^{\mu }_{\text{in}} = p^{\mu }_{\text{out}} + p^{\mu } _{\text{BH}}
\,,
\end{align}
%
but since this is tensorial it holds in all frames.
A stationary observer at infinity observes \(E _{\text{in}} = -p _{\text{in}}^{0}\) and \(E _{\text{out}} = - p^{0} _{\text{out}}\), where the components of the momentum are written in the usual Schwarzschild coordinates.
Recall: \(\xi^{\alpha }= (1, \vec{0})\) is a Killing vector of this geometry. Therefore, \(E _{\text{in}} = - \xi \cdot p _{\text{in}}\) is conserved along the trajectory and the same holds for \(E _{\text{out}}\).
Projecting the conservation of momentum along \(- \xi \), we get
%
\begin{align}
E _{\text{in}} = E _{\text{out}} - \xi \cdot p_{\text{BH}}
\,,
\end{align}
%
which tells us how we can compare the infalling energy to the energy we get out.
If the particle BH reached infinity, then, \(- \xi \cdot p _{\text{BH}}\) would be its energy as measured by the observer and it would need to be positive.
However, it does not.
So, we can arrange our system so that \(-\xi \cdot p _{\text{BH}} < 0 \): then we have \(E _{\text{out}} > E _{\text{in}}\).
The ergoregion is precisely the one in which \(g_{tt} >0 \) instead of \(g_{tt}<0\) as usual.
So if the decay happens inside the ergoregion, the projection of the conservation of momentum along \(\xi = (1, \vec{0})\) is actually the conservation of a \emph{spatial} component of the momentum, which can have any sign.
From the POV of an outside observer, this energy must come from the BH: so the BH must have lost energy.
We cannot actually model this directly: we consider the geometry as fixed, since it almost is.
This is analogous to the angular momentum transferred in a gravitational slingshot.
We take an observer in the ergosphere at fixed \(r, \theta \) with velocity \(u^{\alpha } = u^{t}\qty(1, 0, 0, \Omega )^{\top}\) with \(\dv{\varphi }{t }= \Omega >0\).
% \todo[inline]{The derivative with respect to \(t\) or \(\tau \)?}
For this observer the measured energy of the ``BH'' particle is
%
\begin{align}
E _{\text{obs}} = - u _{\text{obs}} \cdot p _{\text{BH}}>0
\,.
\end{align}
%
We can rewrite the observer's four velocity as a combination of the Killing vectors:
%
\begin{align}
u^{\alpha } _{\text{obs}} = u^{t} _{\text{obs}} \xi_{(t)}^{\alpha } + u^{t} _{\text{obs}} \Omega _{\text{obs}} \xi_{(\varphi) }^{\alpha }
\,,
\end{align}
%
so the measured energy is given by
%
\begin{align}
E _{\text{obs}} = - u^{t} _{\text{obs}} \xi_{t} \cdot p _{\text{BH}} - u^{t} _{\text{obs}} \Omega _{\text{obs}} \xi_{\varphi } \cdot p _{\text{BH}}
= + u^{t} _{\text{obs}} \qty( e _{\text{BH}} - \Omega _{\text{obs}} l _{\text{BH}} )
> 0
\,,
\end{align}
%
but \(e _{\text{BH}} <0\) so this means that we \emph{must have} \(l _{\text{BH}}<0\).
\end{document}
| {
"alphanum_fraction": 0.6311036789,
"avg_line_length": 36.4634146341,
"ext": "tex",
"hexsha": "a0c24bfc6939fdfbf061eebee0fdc3f13dcfe139",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z",
"max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "jacopok/notes",
"max_forks_repo_path": "ap_first_semester/general_relativity/12dec.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "jacopok/notes",
"max_issues_repo_path": "ap_first_semester/general_relativity/12dec.tex",
"max_line_length": 387,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "jacopok/notes",
"max_stars_repo_path": "ap_first_semester/general_relativity/12dec.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z",
"num_tokens": 3201,
"size": 8970
} |
\newif\ifshowsolutions
\showsolutionstrue
\input{preamble}
\chead{%
{\vbox{%
\vspace{2mm}
\large
Machine Learning \& Data Mining \hfill
Caltech CS/CNS/EE 155 \hfill \\[1pt]
Miniproject 1\hfill
Released February 3, 2021 \\
}
}
}
\begin{document}
\pagestyle{fancy}
\section{Overview}
\noindent In miniproject 1, you will use data about the size, location, and time of reported wildfires to predict their cause. These wildfires were reported from the states of California and Georgia from 1992 to 2015, from small flames to massive forest fires.\\
\noindent You will participate in a competition on Kaggle, a site for data science competitions. Each team can make up to 10 submissions per day, and the submission window will close on Wednesday, February 10th at 2:00 PM PST. \textbf{You may not use any additional data sources.}\\
\noindent In this competition, use the training data (\texttt{WILDFIRES_TRAIN/WILDFIRES_TRAIN.csv}) to come up with predictions for the test data (\texttt{WILDFIRES_TEST/WILDFIRES_TEST.csv}). There will be a public leaderboard that will show your performance, but it only consists of half of the test set (and you don't know which half). The private leaderboard ranking with the other half of the test set will only be revealed when the whole competition ends. The competition will end on Wednesday, February 10th at 2:00 PM PST.\\
\noindent The links to the competitions, which include the datasets and guidebooks describing the datasets, can be found here: \url{https://www.kaggle.com/c/caltech-cs-155-2021-mp1-part-1}\\
\noindent There will be a benchmark submission added by the TAs. You should try to beat this benchmark.
\subsection{Your task:}
\noindent Each row in \texttt{WILDFIRES_TRAIN.csv} represents a fire event. The last column in \texttt{WILDFIRES_TRAIN.csv} is ``\texttt{LABEL}''. Each label is a description of the (statistical) cause of the fire.
\begin{itemize}
\item \texttt{LABEL} of 1: the fire originated from a natural cause (Lightning).
\item \texttt{LABEL} of 2: the fire originated from an accidental cause (in order of decreasing frequency: Debris Burning, Equipment Use, Children, Campfire, Smoking, Railroad, Powerline, Fireworks, Structure).
\item \texttt{LABEL} of 3: the fire originated from a malicious cause (Arson).
\item \texttt{LABEL} of 4: the fire originated from another cause (Miscellaneous, or Missing/Undefined).
\end{itemize}
\noindent Your task is to predict the target in \texttt{WILDFIRES_TEST.csv} to the best of your ability. It is generally encouraged to submit probabilities from your models instead of 0/1 predictions, as you get rewarded for having a non-zero probability of the correct class even if it is not the highest probability for that sample. This also represents real world conditions, where you would like to know the relative likelihoods of each class being the right one.\\
\noindent Please follow the format in the sample submission files (\texttt{sample_submission.csv}) when generating your submissions to Kaggle.\\
\subsection{Performance metric:}
\noindent The metric on which your model performance is tested is AUC, namely, the \underline{\textbf{\scshape a}}rea \underline{\textbf{\scshape u}}nder the receiver operating characteristic \underline{\textbf{\scshape c}}urve.
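\noindent As a concrete starting point, the sketch below shows one way to produce class probabilities and to check a multiclass AUC on a held-out split with scikit-learn. The feature handling here is deliberately crude (numeric columns only), and the exact submission layout should always be taken from \texttt{sample\_submission.csv}, not from this example.
\begin{verbatim}
# Rough sketch only: feature handling is minimal and the submission
# format must be checked against sample_submission.csv.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

train = pd.read_csv("WILDFIRES_TRAIN/WILDFIRES_TRAIN.csv")
test = pd.read_csv("WILDFIRES_TEST/WILDFIRES_TEST.csv")

X = train.drop(columns=["LABEL"]).select_dtypes("number")
y = train["LABEL"]

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# Multiclass AUC (one-vs-rest averaging) on the held-out split.
print(roc_auc_score(y_val, model.predict_proba(X_val), multi_class="ovr"))

# Class probabilities for the test set, one column per class.
proba = model.predict_proba(test[X.columns])
\end{verbatim}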
\section{Key Notes}
\begin{itemize}
\item The competitions end on Wednesday, February 10th at 2:00 PM PST.
\item The report is due on Thursday, February 11th at 9:00 PM PST, via Gradescope. See below for the report guidelines. The report should explain your process and results in a thorough manner.
\item You can work in groups of up to four people, but must make submissions from a single account.
\item You can make up to 10 submissions a day. However, at the end, you need to select the 2 submissions that you think will perform the best on the private test sets for both competitions.
\item If you have questions, please ask on Piazza! As with any Kaggle competition, it's best to get started early since you are only allowed to make 10 submissions a day.
\item You can use any open-source tools, using both concepts you learned in class as well as any other techniques you find online (except for existing code written to model this particular wildfire dataset), to get the best score that you can.
\item \textbf{You may collaborate fully within your team, but no collaboration is allowed between teams.}
    \item \textbf{You may not search for additional data related to this task; you may only train your models using the provided training set.}
\end{itemize}
\section{Report and Colab Demo Guidelines}
\medskip
\begin{itemize}
\item \textbf{Due date:} Thursday, February 11th at 9:00 PM PST
\item \textbf{Report (75 points):} The report should be written exactly to the length specifications given in this document. If a section of your report is too long for that section, please try to be more concise - there is an extra credit section for you to discuss other interesting insights/approaches that you tried that you can use as overflow. You are encouraged to use graphs in your report and Colab demo, as visualization is very helpful!
    \item \textbf{Colab Demo (15 points):} You should write a Colab notebook that presents one or more interesting approaches / insights in a runnable and clearly written manner, so the class can learn from each other's work. This can include data exploration, feature engineering, model regularization, or model ensembling. The notebook should be thoroughly annotated with markdown cells explaining what your code is doing and what point you're making. Here is a nice \href{https://www.kaggle.com/cdeotte/how-to-choose-cnn-architecture-mnist}{example}. Given that you have a 1 page limit for each section of your report, you can dive deeper on your approach/model selection/etc. Please try to include visualizations as well. To submit this, please share the public, read-only Colab link on Piazza in a public note, and attach the Piazza post link and the Colab link in your report.
\item \textbf{Please submit your report in groups rather than submitting it once per student!} You can see how to submit in groups here:\\ \url{https://www.gradescope.com/help#help-center-item-student-group-members}
\end{itemize}
\noindent We highly recommend that you use the LaTeX template provided to you and simply fill in the blanks. To collaborate on the report writing, we recommend using Overleaf (\url{https://www.overleaf.com/edu/caltech}), an online LaTeX editor. Caltech students can get a pro account for free using caltech.edu emails.
\noindent See our example file for guidelines. The structure is as follows:
\begin{enumerate}
\item \textbf{Introduction (15 points):} This section is purely for the TAs and should be brief. Maximum of \underline{1 page}.
\begin{itemize}
\item Group members
\item Team name (needs to match your team name on Kaggle)
\item What place you got on the private leaderboard for both competitions.
\item What AUC score you got on the private leaderboard for both competitions.
\item Division of labor: Your team must ensure that each member has an equal amount of workload during the competition. If there is a noticeable discrepancy in the division of labor, team members may receive differing grades.
\end{itemize}
\item \textbf{Overview (15 points):} This section should be a concise summary of your attempts. More detailed explanations should go in the next section. Maximum of \underline{1 page}.
\begin{itemize}
\item Models and techniques tried: What models did you try? What techniques did you use along with your models? Did you implement anything out of the ordinary?
Descriptions should be concise, at most 1-2 sentences. Again, more details can be included in the next section. However, this section is meant to be a more general overview.
\item Work timeline: What did your timeline look like for the competition?
\end{itemize}
\item \textbf{Approach (15 points):} This section should be a more detailed explanation of how you approached the competition. Maximum of \underline{1 page}.
\begin{itemize}
\item Data exploration, processing and manipulation: Did you manipulate the data or the features in any way, such as data cleaning or feature engineering? What techniques and libraries did you use to accomplish such manipulation? Please justify your methodologies.
\item Details of models and techniques: Why did you try the models and techniques that you used? What was that process like? What are the advantages and disadvantages of using such methods?
\end{itemize}
\item \textbf{Model Selection (15 points):} This section should outline how you chose the best models. Maximum of \underline{1 page}.
\begin{itemize}
\item Scoring: What optimization objectives did you use, and why? How did you score your models, and why? Which models scored the best?
\item Validation and test: How did you split your data? Did you use validation techniques? How did you test your models? What were the results of these tests, and what did the results tell you?
\end{itemize}
\item \textbf{Colab and Piazza link (15 points):} Please paste your Colab link and Piazza post link in a page on your report. Your piazza post only needs to contain your team name, team members names, and your Colab link. Maximum of \underline{1 page}.
\item \textbf{Conclusion (15 points):} This section should be used to summarize the report, as well as to include any additional details. Maximum of \underline{1 page}.
\begin{itemize}
\item Insights: Please answer the following questions
\begin{itemize}
\item Among all the features in the data, which features have the most influence on the prediction target? Why? List top 10 features. (Bonus points if you can analyze whether these 10 features positively or negatively influence the prediction target.)
\item Overall, what did you learn from this project?
\end{itemize}
\item Challenges: What could you have done differently? What obstacles did you encounter during the process?
\end{itemize}
\item \textbf{Extra Credit (10 points):} This section should be used to mention additional interesting insights and make concluding remarks. You can be creative!
\begin{itemize}
\item Examples
\begin{itemize}
\item Why do we use AUC as our Kaggle competition metric? Do you think there is a better metric for this project? Why, or why not?
\item Among the machine learning methods/pipelines that your group uses,
are there any methods/pipelines that are parallelizable? If so, how can they be parallelized? If not, why not? (You are not required to actually parallelize your codes)
\end{itemize}
\end{itemize}
\end{enumerate}
\section{Grading metrics}
\noindent For the competition, you will be scored on the test set. You will see results of the public leaderboard (results of your model on half of the test set) for the duration of the competition, and the private leaderboard results (results on the other half of the test set) will be released after the deadline.\\
\noindent The report and Colab is worth the majority of your grade. That is, we care more about the process and thoughts behind your results rather than the scores.
\end{document} | {
"alphanum_fraction": 0.7611164081,
"avg_line_length": 82.4071428571,
"ext": "tex",
"hexsha": "70fd87306105354be4a01d8f70b1f6dfe29bd528",
"lang": "TeX",
"max_forks_count": 25,
"max_forks_repo_forks_event_max_datetime": "2022-01-10T14:31:15.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-01-06T19:15:00.000Z",
"max_forks_repo_head_hexsha": "e149d82586a7c6b79a71a2e56213daa7dfacbff0",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "ichakraborty/CS155-iniproject",
"max_forks_repo_path": "projects/project1/miniproject1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e149d82586a7c6b79a71a2e56213daa7dfacbff0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "ichakraborty/CS155-iniproject",
"max_issues_repo_path": "projects/project1/miniproject1.tex",
"max_line_length": 883,
"max_stars_count": 14,
"max_stars_repo_head_hexsha": "e149d82586a7c6b79a71a2e56213daa7dfacbff0",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "ichakraborty/CS155-iniproject",
"max_stars_repo_path": "projects/project1/miniproject1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-24T23:49:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-05T06:54:16.000Z",
"num_tokens": 2676,
"size": 11537
} |
\section{Other Essentials \ldots}\label{sec:07}
The platform provides other essential functionality that cannot be described in detail as part of this document.
In this section, we would like to highlight some of it.
\paragraph{Application Security.}
The platform incorporates both authentication and authorisation mechanisms that govern application security.
The authentication mechanism is heavily influenced by the Amazon S3 token-based security scheme in combination with RSA encryption.
It works automatically out of the box and may require developers only to plug-in the server-side user identification model.
The default model implementation, provided as part of the platform, stores all user credentials in the back-end database.
Alternative application specific implementations may work with an LDAP server or any other organisation-specific mechanism.
Unlike authentication, the authorisation mechanism is used more explicitly during application development, which is required to support fine-grained access control.
The platform provides a unique declarative authorisation mechanism that ergonomically fits into the development model using the object-oriented paradigm.
The concept of a \emph{security token} represents a security demarcation mechanism, which can be applied to the smallest execution artifacts such as accessors to domain entity properties or any other method in the domain model.
As opposed to many other existing technologies where the authorisation mechanism is often associated with UI controls, the platform unifies the authorisation mechanism with the business domain model.
This way, no matter where in the application logic (including UI and other client- or server-side modules) certain business functionality is accessed, it is always protected.
Security tokens are type-safe and may form hierarchies by reusing the object-oriented mechanism of inheritance.
This provides a convenient way to logically group security tokens.
Another important concept is the \emph{security demarcation scope}.
It provides automatic conflict resolution for nested scopes of application logic, which are demarcated by security tokens with different permissions.
The concept of semantic transparency is also applied to the authorisation mechanism.
Application developers and administrators who control user permissions perceive security tokens in exactly the same semantic context.
For example, this facilitates cooperation of these stakeholders to devise the most appropriate structure and scope for security tokens.
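The platform itself is Java-based and has its own API for all of this; the fragment below is a deliberately simplified, hypothetical illustration (written in Python, with made-up names) of what declarative demarcation of business methods by hierarchical security tokens means in practice. It is not the platform's actual classes or annotations.
\begin{verbatim}
# Hypothetical illustration only -- not the actual platform API.
class SecurityToken:                     # root of a token hierarchy
    pass

class CustomerToken(SecurityToken):      # logical grouping by inheritance
    pass

class CustomerSaveToken(CustomerToken):  # fine-grained token for one action
    pass

GRANTED = {"CustomerSaveToken"}          # tokens granted to the current user

def has_permission(token):
    return token.__name__ in GRANTED

def requires(token):
    """Declare that a business method is demarcated by a security token."""
    def decorate(method):
        def guarded(self, *args, **kwargs):
            if not has_permission(token):
                raise PermissionError(token.__name__)
            return method(self, *args, **kwargs)
        return guarded
    return decorate

class Customer:
    @requires(CustomerSaveToken)
    def save(self):
        ...                              # runs only if the user is permitted
\end{verbatim}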
\paragraph{Users \& Roles.}
All application users are based on the user model provided by the platform.
Each TG-based application maintains the identity of the currently logged in user, which covers both client- and server-sides\footnote{
In order to support stateless server-side implementation, the platform does not maintain any user sessions at the server.
In order to identify requests from different users, an individual security token, associated with each logged in user, is transmitted with every HTTP request.}.
This way application business logic may easily access any information specific to the currently logged in user in order to gain even greater control over the execution flow.
In order to streamline application administration and reuse of different configurations (e.g. report configurations), the platform introduces the concept of the \emph{base-user} and \emph{based-on-user}.
The platform is very consistent in following its object-orientation and this new concept is not an exception.
There can be several base-users that are used for providing different application configurations.
The rest of application users are derived from any of the available base-users -- these are the based-on-users.
This leads to application configuration inheritance where based-on-users fully inherit a configuration, which is predefined for their respective base-user.
Any changes by base-users to configuration lead to the automatic pushing of these changes to all corresponding based-on-users.
At the same time, there is still a great degree of freedom in controlling individual based-on-users.
For example, application roles assignment and security token permissions are individually allocated for all users.
The platform also provides a model for automatic user-based data filtering to control user access to domain entities individually.
Developers only need to implement the filtering rules (e.g. specify the filtering criteria) and the platform fully automates their execution.
For example, any EQL query, either manually implemented by developers as part of some business logic or automatically composed by means of UI configuration tools, gets automatically transformed to incorporate filtering conditions.
\paragraph{Deployment \& Versioning.}
Application deployment and update facilities have a special place for business applications.
Unlike many other kinds of software, business applications belong to the most frequently modified systems, which reflects the real-life situation of fast changing business processes.
As shown in previous sections, the TG platform's development model unifies all aspects of application development around the concept of the business domain specifically to speed up the development life-cycle.
In order to deliver implemented changes to application users, the platform provides a deployment mechanism that orchestrates the delivery of applications and their incremental updates.
This mechanism implements a strict versioning model where only the client- and server-side applications of the same version can interact.
For example, if the server-side application was updated and there are older versions of the client-side application still actually running, then all requests from these clients to the server would be rejected with the requirement to the user to restart the application in order for it to be updated.
At the same time not all application modifications warrant a version change.
With this approach, developers can leverage both backward and non-backwards compatible changes.
The backward compatible modifications would not force an update for already running applications, but non-backwards compatible changes would.
| {
"alphanum_fraction": 0.8177805801,
"avg_line_length": 107.5254237288,
"ext": "tex",
"hexsha": "56c2202694d8cd5d7619af9e2c41a0d92910c058",
"lang": "TeX",
"max_forks_count": 8,
"max_forks_repo_forks_event_max_datetime": "2020-06-27T01:55:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-03-21T08:26:56.000Z",
"max_forks_repo_head_hexsha": "4efd3b2475877d434a57cbba638b711df95748e7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fieldenms/",
"max_forks_repo_path": "platform-doc/doc/architecture-overview/sections/07-deployment-and-versioning/deployment-and-versioning.tex",
"max_issues_count": 647,
"max_issues_repo_head_hexsha": "4efd3b2475877d434a57cbba638b711df95748e7",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T13:03:47.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-21T07:47:44.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fieldenms/",
"max_issues_repo_path": "platform-doc/doc/architecture-overview/sections/07-deployment-and-versioning/deployment-and-versioning.tex",
"max_line_length": 301,
"max_stars_count": 16,
"max_stars_repo_head_hexsha": "4efd3b2475877d434a57cbba638b711df95748e7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fieldenms/",
"max_stars_repo_path": "platform-doc/doc/architecture-overview/sections/07-deployment-and-versioning/deployment-and-versioning.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-17T22:38:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-22T05:42:26.000Z",
"num_tokens": 1131,
"size": 6344
} |
\section{Section 0}\label{sec:zero}
This is a reference \cite{tur38}. This is an acronym: \ac{MI}. Fun fact: when using it again, it will only be displayed like such: \ac{MI}.
Note, that the gray boxes on the cover page can be replaced. Simply replace the \code{logo.png} file in the \code{images} folder.
cref Demonstration: Cref at beginning of sentence, cref in all other cases. \Cref{fig:logo} shows a simple fact, although \cref{fig:logo} could also show something else. \Cref{tab:simple} shows a simple fact, although \cref{tab:simple} could also show something else. \Cref{sec:one} shows a simple fact, although \cref{sec:one} could also show something else.
\image{logo}{Simple Figure}
Brackets work as designed: <test>
\begin{inparaenum}
\item All these items...
\item ...appear in one line
\item This is enabled by the paralist package.
\end{inparaenum}
\javafile{SetOperation}{A simple Javafile as an example}
| {
"alphanum_fraction": 0.7524219591,
"avg_line_length": 44.2380952381,
"ext": "tex",
"hexsha": "1873735baad5ee3253424fcbe33c6a4d1937f533",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2016-05-03T17:01:39.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-05-03T17:01:39.000Z",
"max_forks_repo_head_hexsha": "d53d59f1d05025f4a3d2b3ee8f3ff22c69ce124f",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "christian-steinmeyer/theses-template",
"max_forks_repo_path": "sections/section-0.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d53d59f1d05025f4a3d2b3ee8f3ff22c69ce124f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "christian-steinmeyer/theses-template",
"max_issues_repo_path": "sections/section-0.tex",
"max_line_length": 359,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "d53d59f1d05025f4a3d2b3ee8f3ff22c69ce124f",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "christian-steinmeyer/theses-template",
"max_stars_repo_path": "sections/section-0.tex",
"max_stars_repo_stars_event_max_datetime": "2019-11-17T11:33:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-05-03T16:55:59.000Z",
"num_tokens": 261,
"size": 929
} |
% -*- root: ../gvoysey-thesis.tex -*-
\chapter{Discussion}
\label{chapter:Discussion}
\thispagestyle{myheadings}
% set this to the location of the figures for this chapter. it may
% also want to be ../Figures/2_Body/ or something. make sure that
% it has a trailing directory separator (i.e., '/')!
\graphicspath{{6_Discussion/Figures/}}
\section{Chapter Summary} % (fold)
\label{sec:discussion_summary}
This chapter compares the results obtained in this thesis with the human results obtained by~\cite{Mehraei2016Auditory}, and offers justifications and possible explanations for their similarities and differences.
% section discussion_summary (end)
\section{Nonlinear Behaviors in the Verhulst Model} % (fold)
\label{sec:nonlinear_behaviors_in_the_verhulst_model}
During the course of this work, an unexpected phenomenon was observed in the behavior of the Verhulst model in its response to stimuli of long duration. In response to a sustained pure tone stimulus, the model predicts a strong response along the sections of the basilar membrane near the frequency of the pure tone, consistent with intuition. Further, the model predicts small amplitude BM displacement at higher frequencies, correctly reflecting dispersion of energies along the BM. However, at the level of the IHC synapse, the off-frequency firing rate estimates are several times larger than the on-frequency response, and fall outside physiological boundaries. This behavior is not consistent with the BM displacement predictions of the previous stage of the model.
To relate basilar membrane displacement to IHC firing rates, the Verhulst model implements a three-store synaptic diffusion model adapted from~\cite{Westerman1988Diffusion} and extends it to have place-dependent initial values of vesicle state. Following~\cite{Liberman1978AuditoryNerve}, the saturated firing rate of a hair cell was also adapted to be place-dependent and used as a reset threshold for the diffusion model parameters. It is possible that in certain situations, this threshold is never reached and thus the firing rate estimate grows disproportionately, leading to the observed large-magnitude response at high frequencies to a low frequency tone.
This behavior would potentially overestimate the off-frequency basal (high frequency) response to a sustained, more apical (low frequency) stimulus. However, some evidence exists \citep{Kiang1974Tails,Yates1990Basilar} that basal responses to low-frequency stimuli can approach threshold in some cases, so an \emph{a priori} prediction of supra-threshold firing rate at high frequencies to a low frequency tone may not be inconsistent with predictions from physiological data.
% section nonlinear_behaviors_in_the_verhulst_model (end)
\section{Consequences of AN Population Response Modeling} % (fold)
\label{sec:consequences_of_percentage_weighting_degradation_for_synaptopathy}
To obtain the total contribution of one inner hair cell, and thus one CF, to the population response of the AN, the model scales the responses of a low-, mid-, and high-spontaneous rate modeled fiber by three linear weights, thus reflecting what proportion of spiral ganglia belong to a given category for that IHC. This approach makes two interrelated assumptions.
First, it assumes that the spontaneous behavior of a given fiber is sufficiently similar to that of all others of its spontaneous rate category that it is not necessary to simulate each fiber individually. In the case of the Verhulst model, this assumption is realistic since the model considers spontaneous rates to be fixed per fiber type. However, the Zilany model may be configured so that estimates of spontaneous rate contain additive white Gaussian noise with a different random seed for every simulation, so the firing statistics of a given fiber may differ both from others of its spontaneous rate class and from itself over sustained periods or repeated simulations.
Second, as a result of the stochasticity of the Zilany model, it would potentially be informative to investigate the loss of individual fibers in a Monte Carlo simulation to address the variance in model responses. This would further complicate simulation and increase the dimensionality of \emph{post-hoc} analysis.
These assumptions make computation of AN responses practical: only three fibers per CF are modeled. Using the default parameters that were used in this work, 3,000 fibers were simulated per model iteration. Simulating each fiber individually with individual stochastic spontaneous rates would incur a tenfold increase in the number of fibers to simulate, suggesting that a full exploration of the parameter space, as was done in this work, would take approximately 90 days to compute on the same computing infrastructure.
At the same time, it would more accurately reflect the consequences of cochlear synaptopathy. To the extent that the random noise in a fiber's spontaneous rate is orthogonal to that of any other fiber of the AN, and to the extent that this noise has random phase, any individual fiber will contribute a different amount to the compound action potential of the AN and its loss is not well represented by the current approach.
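As a concrete picture of the weighting scheme described at the start of this section, the sketch below combines three fiber-type responses per CF into a single population response; all array shapes, weights and fiber counts are placeholder values chosen for illustration, not quantities taken from either model.
\begin{verbatim}
# Illustrative sketch only: shapes, weights and counts are placeholders.
import numpy as np

n_cf, n_samples = 1000, 2000
rate_low = np.random.rand(n_cf, n_samples)    # low-SR fiber response per CF
rate_mid = np.random.rand(n_cf, n_samples)    # mid-SR fiber response per CF
rate_high = np.random.rand(n_cf, n_samples)   # high-SR fiber response per CF

# Fraction of fibers in each spontaneous-rate class (placeholder values).
w_low, w_mid, w_high = 0.15, 0.25, 0.60
fibers_per_cf = 10                            # placeholder innervation count

# Per-CF contribution, then summed over CF for the AN population response.
per_cf = fibers_per_cf * (w_low * rate_low + w_mid * rate_mid
                          + w_high * rate_high)
population_response = per_cf.sum(axis=0)

# Uniform synaptopathy: e.g. losing half of the low-SR fibers at every CF.
per_cf_impaired = fibers_per_cf * (0.5 * w_low * rate_low
                                   + w_mid * rate_mid + w_high * rate_high)
\end{verbatim}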
% section consequences_of_percentage_weighting_degradation_for_synaptopathy (end)
\section{Nonlinear Synaptopathic Models} % (fold)
\label{sec:nonlinear_synaptopathic_models}
The unexpectedly small effects of modeled synaptopathy on the overall model output may in part be due to the uniformity of the synaptopathic impairment that was simulated. While modeled impairment was specific to fibers of different spontaneous rates, it was applied uniformly over all CFs, as the variability of fiber type distribution per CF would impair some frequency ranges more than others for a given neuropathic condition.
However, sensorineural hearing loss, particularly age-related hearing loss, is often specific to high frequencies while leaving low frequency bands largely unchanged. Noise-induced or ototoxic hearing loss may have a narrower frequency band, leading to a notched audiogram while leaving other frequencies at normal thresholds, and models of synaptopathy that reflect these more complex losses may have more complex effects on simulation output.
Because the ABR arises from the synchronous activity of entire nerves or brainstem or midbrain areas, a frequency selective perturbation of the output of the AN should produce an effect of greater magnitude than the synaptopathy modeled in this work.
The minimal changes in Wave I peak amplitude are expected.
\cite{Liberman2014Efferent} and others have demonstrated the robustness of audiometric thresholds in animals with as little as 20\% of the original hair cell population intact, so the preservation of the AN compound action potential is consistent even with very severe synaptopathy. Wave I peak amplitudes will also vary with the stimulus.
% section nonlinear_synaptopathic_models (end) | {
"alphanum_fraction": 0.8210752688,
"avg_line_length": 151.6304347826,
"ext": "tex",
"hexsha": "75bafe821a85cfa4eada22ecf6f737e27d042185",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "766ed365f55ada08c3b6f548a6f857f9d3e49b91",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "gvoysey/thesis",
"max_forks_repo_path": "text/6_Discussion/discussion.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "766ed365f55ada08c3b6f548a6f857f9d3e49b91",
"max_issues_repo_issues_event_max_datetime": "2016-08-14T04:18:16.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-08-14T04:18:16.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "gvoysey/thesis",
"max_issues_repo_path": "text/6_Discussion/discussion.tex",
"max_line_length": 774,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "766ed365f55ada08c3b6f548a6f857f9d3e49b91",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "gvoysey/thesis",
"max_stars_repo_path": "text/6_Discussion/discussion.tex",
"max_stars_repo_stars_event_max_datetime": "2017-03-10T05:37:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-07-10T17:40:15.000Z",
"num_tokens": 1416,
"size": 6975
} |
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage[a4paper,width=150mm,top=25mm,bottom=25mm]{geometry}
\title{
{\includegraphics[width=3cm, height=2.5cm]{images/logo.png}}\\
{\large Faculty of Computers and Information}\\
{\textbf{Software Proposal Document for Project \\ Converting Handwriting into Text}}
}
\author{Callback Hell Avoiders Team \\ \\
Supervised by: Dr. Sara El-Metwally}
\date{March 4, 2022}
\begin{document}
\maketitle
\begin{abstract}
The main goal of this project is to create a desktop application that uses an AI algorithm to solve the problem of converting handwriting into machine-readable form.
\end{abstract}
\section{Introduction}
This is the \textbf{Capture-It} project, in which we try to solve a problem faced by many students: storing handwritten notes in digital format.
\section{Project Description}
\subsection{Technologies Used}
\begin{itemize}
    \item \textbf{Python3: } The main programming language used.
    \item \textbf{PyQt: } A GUI widget toolkit that provides Python bindings for Qt.
    \item \textbf{OCR: } Optical character recognition, a technology for automatically extracting printed or handwritten text from a scanned document, image file, or whiteboard photo and converting it into a machine-readable form that can then be edited, searched, or otherwise processed; a minimal sketch of this idea is given after this list.
\end{itemize}
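As a first sketch of the core recognition step, the fragment below assumes, purely for illustration, the open-source Tesseract engine accessed through the \texttt{pytesseract} package; the final AI approach used by the project may well be different, and the input file name is a placeholder.
\begin{verbatim}
# Illustrative sketch only: assumes Tesseract + pytesseract are installed;
# the project's final recognition model may differ.
import pytesseract
from PIL import Image

def image_to_text(path):
    """Load a scanned page or whiteboard photo and return recognised text."""
    image = Image.open(path).convert("L")   # greyscale often helps OCR
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    print(image_to_text("notes_page.jpg"))  # placeholder input file
\end{verbatim}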
\newpage
\section{Project Management and Deliverable}
\subsection{Team Members}
\begin{itemize}
\item Abdulla Nasser.
\item Fares Emad.
\item Faten ElSaeed.
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7701786815,
"avg_line_length": 33.1224489796,
"ext": "tex",
"hexsha": "7c8f5503db652a4cc6da5260e08b6261d959bb66",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a52e7b8077200024502e4577a76bcb8b2d6be2ef",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DrNykterstien/Capture-It",
"max_forks_repo_path": "Proposal/source/proposal.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a52e7b8077200024502e4577a76bcb8b2d6be2ef",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DrNykterstien/Capture-It",
"max_issues_repo_path": "Proposal/source/proposal.tex",
"max_line_length": 339,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "a52e7b8077200024502e4577a76bcb8b2d6be2ef",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "DrNykterstien/Capture-It",
"max_stars_repo_path": "Proposal/source/proposal.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-18T11:17:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-13T12:50:38.000Z",
"num_tokens": 408,
"size": 1623
} |
\documentclass{article}
\usepackage{lipsum}
\usepackage[margin=1.5in]{geometry}
\usepackage{titlesec}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{mathtools, amssymb, nccmath}
\usepackage{bigstrut, changepage, lipsum}
\usepackage{mathtools}
\newcommand{\code}{\texttt}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\usepackage{siunitx} % Required for alignment
% Specify images directory
\graphicspath{ {./report-images/} }
% Header and Footer stuff
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyfoot[R]{ \thepage\ }
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\newcommand{\sectionbreak}{\clearpage}
\setlength{\parindent}{0pt}
%
\begin{document}
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
\begin{titlepage} % Suppresses displaying the page number on the title page and the subsequent page counts as page 1
\newcommand{\HRule}{\rule{\linewidth}{0.5mm}}% Defines a new command for horizontal lines, change thickness here
\center % Centre everything on the page
%------------------------------------------------
% Headings
%------------------------------------------------
\textsc{\Large Basic orthogonal and periodic functions}\\[0.5cm] % Major heading such as course name
\textsc{\large Exercise 3}\\[0.5cm] % Minor heading such as course title
%------------------------------------------------
% Title
%------------------------------------------------
\HRule\\[0.6cm]
{\huge\bfseries Least-squares Approximation of cos(2x) and cos(4x) Using Chebyshev Polynomials}\\[0.25cm] % Title of your document
\HRule\\[1.5cm]
%------------------------------------------------
% Author(s)
%------------------------------------------------
\begin{minipage}{0.4\textwidth}
\begin{flushleft}
\large
\textit{Author}\\
\textsc{Cesare De Cal} % Your name
\end{flushleft}
\end{minipage}
~
\begin{minipage}{0.4\textwidth}
\begin{flushright}
\large
\textit{Professor}\\
\textsc{Annie Cuyt}\\ % Supervisor's name
[0.25cm]
\textit{Assistant Professor}\\
\textsc{Ferre Knaepkens} % Supervisor's name
\end{flushright}
\end{minipage}
\vfill\vfill\vfill
{\large\today}
\vfill
\end{titlepage}
%----------------------------------------------- Introduction ------------------------------------------------------
\section{Introduction}\label{sec:intro}
This exercise asks to compute the Chebyshev approximation
$$t(x)=\sum^n_{j=0}{a_jT_j(x)}$$
for the functions $f(x)=\cos(2x)$ and $f(x)=\cos(4x)$ over the interval $[-\pi,\pi]$ for $n = 6$, then plot the original functions together with their approximations and draw conclusions from the results.\\
As we've seen in class, Chebyshev polynomials are a set of orthogonal polynomials that can be used to compute a least-squares approximation of a function. For this exercise, I'll first compute the roots of the polynomial (also called Chebyshev nodes), then calculate the approximation coefficients for both functions. Finally, I'll evaluate the approximation at a set of points to see how well it matches the original function. During this process, I'll make sure to choose the correct data points, and the correct number of data points, to exploit the properties of these basis functions.
%---------------------------------- Tools ---------------------------------------------------------------------------
\section{Tools}
To solve this problem, I've used MATLAB as requested by the exercise. To make the computation more efficient, I wrote my own function which calculates the Chebyshev polynomial instead of using the built-in MATLAB function $\code{chebyshevT}$ which was noticeably slower. Writing my own function was just a matter of using the compact closed-form expression for the Chebyshev polynomials:
$$T_i(x)=\cos{(i\arccos(x))}$$
Built-in MATLAB functions have been used such as $\code{cos(x)}$ and $\code{acos(x)}$ to calculate the cosine and the inverse cosine function. $\code{linspace(start, end, nrOfPoints)}$ was used to create an array of equidistant points for plotting. The functions $\code{plot(x, y)}$, $\code{legend()}$, $\code{axis([xmin xmax ymin ymax])}$, $\code{ylabel(label)}$, and $\code{xlabel(label)}$ were also used for plotting.
\section{Computation}
Given $n=6$, I first calculate the $n+1$ Chebyshev nodes (the roots of $T_{n+1}$) using the formula:
$$x_j=\cos \Big( \frac{2j-1}{2(n+1)}\pi \Big), \qquad j=1,\dots,n+1$$
The resulting zeros are:
$$
\begin{bmatrix}
9.749279121818236e-01\\
7.818314824680298e-01\\
4.338837391175582e-01\\
6.123233995736766e-17\\
-4.338837391175581e-01\\
-7.818314824680295e-01\\
-9.749279121818237e-01\\
\end{bmatrix}
$$
Let's first analyze the function $\cos(2x)$. We've seen in class that we can calculate the coefficients with the formula
$$c_j=\frac{2}{N}\sum_{k=1}^{N}f(x_k)T_j(x_k), \qquad N = n+1,$$
where the $x_k$ are the Chebyshev nodes computed above. To better exploit the properties of Chebyshev polynomials, I rescale the interval from $[-\pi,\pi]$ to $[-1,1]$. This is done in the evaluation $f(x_k)$ by multiplying the argument by $\pi$. I find the following coefficients:
$$
\begin{bmatrix}
4.407675118300712e-01\\
3.489272363107635e-16\\
5.739892333362830e-01\\
-2.537652627714643e-16\\
6.516380108719009e-01\\
-1.554312234475219e-15\\
-7.019674665493458e-01\\
\end{bmatrix}
$$
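For reference, the node and coefficient computation described above can be sketched in a few lines of NumPy (this is a translation of the approach, not the MATLAB code actually used for this report):
\begin{verbatim}
# NumPy sketch of the method (not the original MATLAB implementation).
import numpy as np

n = 6                     # polynomial degree
N = n + 1                 # number of Chebyshev nodes
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))    # roots of T_{n+1}

def T(i, x):              # closed form T_i(x) = cos(i * arccos(x))
    return np.cos(i * np.arccos(x))

f = lambda x: np.cos(2 * np.pi * x)   # cos(2x) rescaled to [-1, 1]

# c_j = (2/N) * sum_k f(x_k) T_j(x_k); c_0 is halved when evaluating.
c = np.array([2.0 / N * np.sum(f(nodes) * T(j, nodes)) for j in range(N)])

def t(x):                 # evaluate the Chebyshev approximation
    return c[0] / 2 + sum(c[j] * T(j, x) for j in range(1, N))

x = np.linspace(-1.0, 1.0, 400)
print(np.max(np.abs(t(x) - f(x))))    # rough measure of the fit
\end{verbatim}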
The polynomial for the first function $\cos{2x}$ looks like this:
\begin{equation}
\begin{array}{l}
t(x) = 4.407675118300712e-01 \times T_0(x)+ 3.489272363107635e-16 \times T_1(x) + \\
5.739892333362830e-01 \times T_2(x) + \dots -7.019674665493458e-01 \times T_6(x)
\end{array}
\end{equation}
To plot these coefficients in the graph, I need to calculate their associated y values:
$$
\begin{bmatrix}
9.876173836808355e-01\\
1.986723718905843e-01\\
-9.149466098687946e-01\\
9.999999999999993e-01\\
-9.149466098687926e-01\\
1.986723718905768e-01\\
9.876173836808371e-01\\
\end{bmatrix}
$$
Calculating the coefficients for the $\cos(4x)$ function now only requires minor changes in the code. The following are the coefficients of the polynomial:
$$
\begin{bmatrix}
6.879841220704104e-01\\
2.537652627714643e-16\\
-1.535576743556636e-01\\
-1.459150260935920e-15\\
1.012919200612446e+00\\
-1.094362695701940e-15\\
5.104689360033142e-01
\end{bmatrix}
$$
The polynomial for the second function $\cos{4x}$ looks like this:
\begin{equation}
\begin{array}{l}
t(x) = 6.879841220704104e-01 \times T_0(x) + 2.537652627714643e-16 \times T_1(x) \\
-1.535576743556636e-01 \times T_2(x) + \dots + 5.104689360033142e-01 \times T_6(x)
\end{array}
\end{equation}
The associated y values for the given roots are the following:
$$
\begin{bmatrix}
9.507761930971577e-01\\
-9.210585772947382e-01\\
6.742545978208001e-01\\
1.000000000000000e+00\\
6.742545978207990e-01\\
-9.210585772947411e-01\\
9.507761930971639e-01\\
\end{bmatrix}
$$
In order to properly calculate the associated y values, I had to subtract half of the first coefficient from the evaluation. This is because the coefficient formula above gives twice the required constant term, so the $j=0$ coefficient must be halved before use. The plots for both functions are shown in the next section.\\
\section{Plots}
Function $\cos(2x)$ and $\cos(4x)$ respectively:\\
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{cos2x.jpg}\\
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{cos4x.jpg}
\section{Observations}
I first calculated the zeros of the Chebyshev polynomial so I could use them as interpolation nodes, because the resulting interpolation polynomial minimizes the effect of Runge's phenomenon. The approximations, however, still display a little of it. \\
The Chebyshev approximation for the function $\cos(2x)$ is quite accurate, whereas the approximation for $\cos(4x)$ is noticeably worse: with only degree $n=6$, it cannot capture all the oscillations of the original function.
\end{document} | {
"alphanum_fraction": 0.6799900075,
"avg_line_length": 36.3909090909,
"ext": "tex",
"hexsha": "d8212dfc1c34b13e6f3b3ec96438c93bc71946c5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ac2d64ea235d7bee9cf0de8bbe42d06a3986bd5a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "csr/MATLAB-Scientific-Programming",
"max_forks_repo_path": "Orthogonal Basis Function/Report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ac2d64ea235d7bee9cf0de8bbe42d06a3986bd5a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "csr/MATLAB-Scientific-Programming",
"max_issues_repo_path": "Orthogonal Basis Function/Report.tex",
"max_line_length": 563,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ac2d64ea235d7bee9cf0de8bbe42d06a3986bd5a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "csr/MATLAB-Scientific-Programming",
"max_stars_repo_path": "Orthogonal Basis Function/Report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2376,
"size": 8006
} |
%!TEX root = project.tex
\newpage
\paragraph{Abstract}
We have created a cross-platform multiplayer video game that can be played on a mobile device, on a traditional desktop or laptop, or with a virtual reality headset. The objective of the project is to create a heterogeneous system that demonstrates an ability to communicate between multiple languages, operating systems and databases. This will be achieved through the use of data serialisation and the marshalling and un-marshalling of data. The project will involve four programmers, using four different programming languages and three different databases. We also intended to deploy the system back-end on an AWS server and then deploy these separate systems within Docker.
\paragraph{Authors}
The authors of this Document are Michael Kidd, John Mannion, Raymond Mannion and Kevin Moran, The authors are 4th year students in Galway Mayo Institute of Technology.
\paragraph{Github Link} \url{https://github.com/Michael-Kidd/4th-Year---Main-Project}
\chapter{Introduction}
The idea for our project came from a video that showed a version of Mario Kart in virtual reality. It depicted the fan-favourite Mario Kart racing game that Nintendo originally released on 27 August 1992, but this time with a player sitting in a specially designed kart, placing a head-mounted display on their head and proceeding to play the game. There are currently 8 Mario Kart games, not including the VR or arcade versions. The unfortunate thing with this version of the game was that it was not open to the general public; it is simply an arcade version of the game and can only be played at designated venues.\newline
The project that we decided on was to create a similar style game to Mario Kart in virtual reality, however we also wanted that the game could be played cross-platform with a computer and also with mobile devices, while maintaining the ability for each of these versions to be able to play within the same instance. This would mean that a person playing the game on a mobile device and a person playing on a virtual reality headset could play together. From the very start we decided to make different versions of the game that would still allow the games to be played within the same instances. By separating the versions of the game we ensure that each version does not require nor depend on the packages and API's that the other versions depend on, for example the mobile and desktop versions would not depend on the Oculus API to function and therefore should not contain the files at all. \newline
Other kart games already exist in virtual reality, but they have not been very popular, as they often fall short in their implementation, for example by offering no hand movement in the game. Forcing the player to use a steering wheel, while not completely immersion-breaking, is still not ideal. The games are often single player, removing the need to program for a multiplayer environment. Another issue with many virtual reality games is that the player base is simply too small: the equipment needed to play is expensive, its setup takes up a large amount of space, and many people end up spending much of their time assembling and dismantling their virtual reality rigs. This is on top of the expensive gaming PC that a player must already own to run the games, which is sold separately. All in all, using virtual reality for gaming is expensive, and the owner must be willing to look past its limitations and requirements. Another issue with virtual reality gaming is that players with a virtual reality headset often cannot play games with friends who don't own the same technology. This is why, when designing this game, we decided it would be important to have a cross-platform game with no limitations on who could play with whom. For us, it was also important to show how a person moving their body and hands in virtual reality could be seen moving in real time.
\newline
We decided to create the main game within the Unity engine and using the C\# programming language, this will contain different scenes within the program that will serve different functions, the first being a login service. It will allow a user to create an account, then when a user has an account of their own it will allow the user to enter these details in order to login to the game. Since the game will be cross platform, a user can login to any of the different platforms and use the same credentials to gain access to the game. The login service would be programmed using the Java programming language and it will then use a MongoDB non relational database to be able to add new users when someone creates a new account. It will allow a user to access the game when they have entered the correct details. When the player enters incorrect details, they are refused entry to the game. After a successful login, the user will be presented with a screen where the user can host a game or find a game that is already being hosted by another user.\newline
The match finding service will be programmed using the Go programming language and will connect to a Redis Database. The purpose of this system will be to keep a record of all games that are being hosted, it will keep a record of the username of the person hosting the game and that persons IP address. This will allow other players to see a list of hosted games and a list of player names but will not need the persons IP address in order to access the game. Once the user selects a game that they wish to join or host, they will be sent to a lobby screen, where they will be able to see the list of other players that are playing the same game. When all players have clicked a button to signal they are ready, the game will start.\newline
The score keeping service will be programmed using the Python programming language and will connect to a MariaDB database. When the game has ended the players positions in the race will determine the score that they are given and their overall scores will be added to a database and sorted in order of how many points the players have accumulated. The player with the most points will be the first on the list and the player with the least points will be last in the list. When the players have completed the game they will see the position each player finished, then each players global position in the leader-board.\newline
For the back-end of the system, covering such features as user login, finding an active game and global score keeping, we intended to create three different programs that would run within an Amazon Web Services server. Within the server we will run one instance of Docker with three containers. Each container will house one of the programs and its corresponding database: either the Java login service with the MongoDB instance, the Go match-finding service with the Redis database, or the Python scoring service with the MariaDB database. \newline \newline
Part of the challenge of intentionally creating a heterogeneous system is that there will be issues with data serialisation and with marshalling and un-marshalling data from one language to another. The C\# program will communicate with three different programs that act as services to accomplish specific tasks. The connections between the client-side C\# program and each of the services will be made using an IP address and port number (socket) combination. Once the connections are made, the objects that will be added to a database, or that may need to be manipulated by the service programs, will be passed through these sockets as serialised objects using either JSON or XML. The advantage of these formats is that they can be used by many different languages and frameworks, as they are platform independent. This does not mean success is guaranteed: along the way there are bound to be issues that we encounter and that we have yet to account for in our planning. As this is a college project, the purpose of which is to develop us as programmers and to take us out of the areas in which we have grown comfortable, we believe it will present us with enough of a challenge to help us develop, while remaining a realistic target, so that at the end we have at least a minimum viable product that works as a game. \newline
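To make the serialisation idea concrete, the following rough sketch (in Python, since the scoring service will be written in Python) shows a service un-marshalling a JSON payload received over a TCP socket; the port, framing and field names below are placeholders rather than our final protocol.
\begin{verbatim}
# Rough sketch only: port, framing and field names are placeholders,
# not the game's final protocol.
import json
import socket

HOST, PORT = "0.0.0.0", 9090      # placeholder address for the scoring service

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            payload = conn.recv(4096).decode("utf-8")   # one small message
            score = json.loads(payload)                 # un-marshal JSON
            print(score["username"], score["position"]) # placeholder fields

# A C# or Java client only needs to send the same JSON text, for example:
# {"username": "player1", "position": 3}
\end{verbatim}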
We don't expect to have a completely polished product at the end. We are aware of our own limitations as programmers, of the fact that we have not yet reached the level of experience that can only be gained by working in the real world, and that we don't yet understand all the possible bugs a real finished product would face or the vulnerabilities such a product would contain. Therefore, we cannot completely anticipate how our system could be interfered with or manipulated. In a worst-case scenario, it might be possible for all the players' data to be intercepted; if such an attack succeeded, the usernames, passwords and IP addresses of users could be compromised. During the programming phase of the project we simply made the game and its features function; we did not make a conscious attempt at protecting the data or the transfers within the services or within the game itself. We are aware of these security issues, and if the project were being released to the public we would have made security a priority. But, as the project is for educational purposes, we have decided to leave this aspect out.
| {
"alphanum_fraction": 0.8075644223,
"avg_line_length": 283.0588235294,
"ext": "tex",
"hexsha": "50e47f7d112054fc0e9ba309be1eb00c81a76b07",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "319790b06a31511c4cbde41f23f5248a6e1f654e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Michael-Kidd/Dissertation",
"max_forks_repo_path": "TexFiles/intro.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "319790b06a31511c4cbde41f23f5248a6e1f654e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Michael-Kidd/Dissertation",
"max_issues_repo_path": "TexFiles/intro.tex",
"max_line_length": 1494,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "319790b06a31511c4cbde41f23f5248a6e1f654e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Michael-Kidd/Dissertation",
"max_stars_repo_path": "TexFiles/intro.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1878,
"size": 9624
} |
\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
%opening
\title{A Proposed Latecny Control Solution in Datacenter Marketing System}
\author{}
\begin{document}
\maketitle
\begin{abstract}
This short documentation blaablaa
\end{abstract}
\section{The Latency Problem Clarified}
\end{document}
| {
"alphanum_fraction": 0.785488959,
"avg_line_length": 15.0952380952,
"ext": "tex",
"hexsha": "18a16df55197763da1b4a840ba0bf2607785a5cd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "67bc485e73cf538498a89b28465afb822717affb",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Aliced3645/DataCenterMarketing",
"max_forks_repo_path": "Documentation/Latency_Control/Latency_Control.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "67bc485e73cf538498a89b28465afb822717affb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Aliced3645/DataCenterMarketing",
"max_issues_repo_path": "Documentation/Latency_Control/Latency_Control.tex",
"max_line_length": 74,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "67bc485e73cf538498a89b28465afb822717affb",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Aliced3645/DataCenterMarketing",
"max_stars_repo_path": "Documentation/Latency_Control/Latency_Control.tex",
"max_stars_repo_stars_event_max_datetime": "2015-05-23T00:07:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-05-23T00:07:36.000Z",
"num_tokens": 87,
"size": 317
} |
\section{Matrices}
\noindent
Matrices are rectangular arrays of mathematical objects, most often numbers. They are often used to represent linear transformations between two spaces and systems of linear equations. We denote the size of a matrix by giving the number of rows followed by the number of columns.
\ifodd\includeBackgroundReviewExamples\input{./backgroundReview/matrices/matrices_example.tex}\fi
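For instance, the matrix
\[
A = \left(\begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array}\right)
\]
has two rows and three columns, so it is a $2 \times 3$ matrix.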
% Types of matrices
\input{./backgroundReview/matrices/typesOfMatrices.tex}
% Row Reduction
\input{./backgroundReview/matrices/rowReduction.tex}
% Determinants
\input{./backgroundReview/matrices/determinants.tex}
% Eigenvalues/vectors
\input{./backgroundReview/matrices/eigenvaluesEigenvectors.tex} | {
"alphanum_fraction": 0.8122332859,
"avg_line_length": 50.2142857143,
"ext": "tex",
"hexsha": "61a68271353e4b7308cc42d7c451d03d01b84974",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "rawsh/Math-Summaries",
"max_forks_repo_path": "diffEq/backgroundReview/matrices/matrices.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "rawsh/Math-Summaries",
"max_issues_repo_path": "diffEq/backgroundReview/matrices/matrices.tex",
"max_line_length": 263,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3ad58ef55c176f7ebaf145144e0a4eb720ebde86",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "rawsh/Math-Summaries",
"max_stars_repo_path": "diffEq/backgroundReview/matrices/matrices.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 159,
"size": 703
} |
\documentclass[11pt,a4paper]{article}
\usepackage[a4paper,vmargin={22mm,20mm},hmargin={24mm,24mm}]{geometry}
\usepackage{needspace}
\setlength{\parskip}{\baselineskip}%
\setlength{\parindent}{0pt}%
%\usepackage{amsmath} % for align and intertext
\usepackage{graphicx}
%\usepackage{epic} % for dottedline
%\usepackage{color}
%AUSKOMMENTIERT, DA ENGLISCHER BERICHT
%\usepackage[ngerman]{babel}
\begin{document}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
%
\newcommand {\D} {\frac {d} {d z} } % \d exists for an accent
\newcommand {\DD}{\frac {d^{2}} {d z^{2}} }
%
%\newcommand{\dr} { \frac{ \partial}{\partial r} }
%
\newcommand {\dt} {\frac {\partial }{\partial t } }
\newcommand {\dx} {\frac {\partial }{\partial x } }
\newcommand {\dy} {\frac {\partial }{\partial y } }
\newcommand {\dz} {\frac {\partial }{\partial z } }
%
\newcommand {\h} {\widehat } % \H exists for an accent
\newcommand {\T} {\widetilde } % \t exists for an accent
%
\renewcommand{\abstractname}{Overview}
%\thispagestyle{empty}
%\enlargethispage{250mm}
\title{Atom-Parser Documentation}
\author{Benjamin Schrauf}
\date{\today}
\maketitle
\begin{abstract}
This Python script prepares the input data for the DFTB+ electron transport density functional theory code. The program takes a molecule, given in the form of atom positions, as input. Also provided are the maximum distances at which the atoms interact and the contact atoms through which electric current is introduced to the molecule. Using this information, the program divides the atoms into groups ("bins"). Each bin's atoms may only be in contact with atoms from at most two other bins. To satisfy this constraint, the program must generate an equivalent one-dimensional structure by merging together the appropriate bins and contacts until only two contacts are left.
\end{abstract}
\newpage
%\setcounter{page}{1}
\section{Task description}
The DFTB+ code uses tridiagonal Hamiltonian matrices to solve molecular electron transport problems. To generate a tridiagonal Hamiltonian matrix, the atoms need to be sorted into groups ("bins") in such a way that the atoms from one bin only interact with atoms from at most two other bins. Such an arrangement is only possible if the bins are ordered in a one-dimensional chain, with an electrical contact at each end. Thus, the sorting algorithm must be able to partition the atoms of an arbitrary molecule with many contacts into an equivalent one-dimensional array of bins with two electrical contacts forming the ends.
Because the DFTB+ code slows down dramatically for larger bin sizes, the sorting algorithm should also attempt to create bins of similar size.
The Atom-Parser algorithm is supposed to be a reliable data preparation tool for the users of the DFTB+ code. Therefore, the output of the partitioning process must be automatically checked for validity. This includes checking whether the atoms in each bin only interact with atoms in the two neighboring bins, checking whether all atoms of the molecule have been sorted, and checking for duplicate atoms within the sorted atoms.
\section{Algorithm}
The program assigns an index to each atom of the input molecule and associates each index with an atom position via a list. The program creates a list of atom indices for the atoms in each electrical contact, as well as a list of atom indices for the device atoms. Finally, the program generates a dictionary that defines the maximum interaction distance for any given pair of elements present in the molecule. Beyond this threshold, two atoms are assumed not to interact.
\subsection{Data preparation}
First, the program generates two $n \times n$ matrices, where $n$ is the number of atoms in the input molecule. In these matrices, each row and column is associated with an atom index. The first matrix (dist\_mtrx) gives the distance between each pair of atoms in the molecule; the second matrix (interact\_mtrx) contains Boolean values that indicate whether any given pair of atoms is within interaction distance.
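The construction of these two matrices can be sketched in a few lines of Python (an illustrative sketch only; apart from dist\_mtrx, interact\_mtrx, and the interaction-distance dictionary described above, the function and argument names are not taken from the actual code):
\begin{verbatim}
import numpy as np

def build_matrices(positions, elements, max_dist):
    """Illustrative sketch of the data-preparation step.

    positions: (n, 3) array of atom coordinates
    elements:  length-n list of element symbols
    max_dist:  dict mapping a sorted element pair, e.g. ("C", "H"),
               to the maximum interaction distance (an assumed convention)
    """
    n = len(positions)
    dist_mtrx = np.zeros((n, n))
    interact_mtrx = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            dist_mtrx[i, j] = np.linalg.norm(positions[i] - positions[j])
            cutoff = max_dist[tuple(sorted((elements[i], elements[j])))]
            # Self-interaction is excluded here (an assumption, not taken
            # from the documentation).
            interact_mtrx[i, j] = (i != j) and dist_mtrx[i, j] <= cutoff
    return dist_mtrx, interact_mtrx
\end{verbatim}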
\newpage
\subsection{Partitioning atoms into bins}
We will examine the sorting process for a simple test case: a one-dimensional molecule (see figure \ref{fig:chains_two_contacts}). The atoms are sorted into bins starting from all contacts simultaneously. In this case, there are only two contacts, a and b, at the two ends of the molecule. Starting from these two contacts (depicted as black boxes with orange atoms), the program generates a one-dimensional chain of bins (black circles with blue atoms), with the atoms in each bin only interacting with atoms in the previous and the next bin of the chain. In this manner, two chains a and b are generated, starting from the two contacts. Each bin in the chain belongs to a "generation", which is given by the number of steps a bin is away from the contact bin, at which the chain originates.
At some point, the tips of the chains collide. In the simple one-dimensional case, a collision of the chains implies that all atoms have been sorted. In a final step, the ends of the two chains are "glued" together to form the data structure "final\_chain".
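In outline, the partitioning described above is a breadth-first layering that grows one chain of bins per contact, one generation at a time. The following Python sketch conveys the idea (collision detection and the merging described below are omitted; apart from interact\_mtrx and the default generation limit, the names are illustrative):
\begin{verbatim}
def partition(contacts, interact_mtrx, max_generations=100):
    """Grow one chain of bins per contact, one generation per iteration.

    contacts:      list of contact bins, each a list of atom indices
    interact_mtrx: n x n Boolean interaction matrix
    Returns a list of chains; each chain is a list of bins.
    """
    n = len(interact_mtrx)
    chains = [[list(c)] for c in contacts]        # generation 0: contact bins
    assigned = {a for c in contacts for a in c}
    for _ in range(max_generations):
        grew = False
        for chain in chains:
            tip = chain[-1]
            # Next bin: all unassigned atoms within interaction range of the tip.
            new_bin = [j for j in range(n) if j not in assigned
                       and any(interact_mtrx[i][j] for i in tip)]
            if new_bin:
                assigned.update(new_bin)          # claim atoms for this chain
                chain.append(new_bin)
                grew = True
        if not grew:
            break
    return chains
\end{verbatim}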
\
\begin{figure}[h]
\centering
\includegraphics[height=12cm, width = 17cm]{Bilder/chains_two_contacts.png}
\caption{Generating the "final chain" data structure. Left side: Molecule with colored dots representing atoms. The black circles are the bins the atoms are sorted into, where atoms in one bin only interact with the atoms in the two adjacent bins. Center: Sorted chains a and b, consisting of bins. Right side: Data structure "final\_chain", consisting of a nested list containing atom indices. The "final\_chain" list is created by gluing together the ends of the two chains a and b.}
\label{fig:chains_two_contacts}
\end{figure}
\newpage
\subsection{Merging chains}
In general, the input molecules will not have a one-dimensional structure. In order to create the desired one-dimensional "final\_chain" data structure, we must merge pairs of chains together until only two colliding chains are left, which can then be glued together into a "final\_chain" as described above. The merging process is shown in figures \ref{fig:merging_first_collision} and \ref{fig:merging_second_collision}, using the simple example of a T-junction molecule with three contacts a,~b~and~c.
In this algorithm, the merging happens concurrently with the partitioning process. Pairs of chains are merged as soon as they collide. This approach has the disadvantage of not being able to minimize the size of the resulting bins, since chains are merged together irrespective of their size. A more optimized approach would first finish the partitioning before doing any merging. However, this approach would have been beyond the scope of the project and was therefore avoided.
Before merging, duplicate atoms must be removed from the chain tips. In particularly difficult cases, there can be multiple collisions between chains in a single generation. For this reason, the program finds all collisions in a given generation before attempting to merge chains.
\
\begin{figure}[h]
\centering
\includegraphics[height=12cm, width = 17cm]{Bilder/merging_first_collision.png}
\caption{Partitioning of a T-junction molecule. Left side: Molecule during partitioning process, after collision of chains a and c. Note that bin 4a is spacially non-contiguous: It consists of two groups of atoms that are outside of mutual interaction range. Such bins are generated because the partitioning process spreads like a wavefront throughout the molecule. Right side: Data structure "chains" in memory.}
\label{fig:merging_first_collision}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[height=12cm, width = 17cm]{Bilder/merging_second_collision.png}
\caption{Partitioning of a T-junction molecule. Left side: Fully partitioned molecule. Right side, top: Data structure "chains" in memory, after merging together chains a and c and continuing the partitioning to the last collision in generation 5. Two chains are merged by combining bins with the same indices. The two contact bins are also merged. The resulting chain will still fullfill the requirement that each bin have only two neighbours. Right side, bottom: Final chain formed by glueing together the ends of chains ac and b.}
\label{fig:merging_second_collision}
\end{figure}
\newpage
\vspace*{10cm}
\subsection{Merging dead ends}
After the last two chains have collided, there can still be unsorted atoms. We will call these unsorted atoms "dead ends". To generate the desired data structure "final chain", these dead ends must be merged into the existing chains.
The merging process is shown in figure \ref{fig:dead_ends}. Starting from the unsorted bin adjacent to the tip of the chain (here, bin eight), the atoms from each unsorted bin are assigned to the bins of the other chain, starting from the tip. This method ensures that the resulting final chain still fulfills the requirement that the atoms in each bin only interact with the atoms in at most two adjacent bins.
The dead end may be too long to be merged into the final chain. In this case, the dead end length can be halved by merging adjacent pairs of bins together. This process is repeated until the dead end is short enough to be merged into the final chain.
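The halving step can be pictured as merging adjacent pairs of bins, roughly as in this Python sketch (illustrative only; the actual implementation may handle the details differently):
\begin{verbatim}
def halve(dead_end):
    """Merge adjacent pairs of bins, roughly halving the dead end's length.

    dead_end: list of bins, each a list of atom indices
    """
    if len(dead_end) < 2:
        return list(dead_end)
    halved = [dead_end[i] + dead_end[i + 1]
              for i in range(0, len(dead_end) - 1, 2)]
    if len(dead_end) % 2 == 1:
        # Fold an odd trailing bin into the last merged bin.
        halved[-1] = halved[-1] + dead_end[-1]
    return halved
\end{verbatim}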
\
\begin{figure}[h]
\centering
\includegraphics[height=9cm, width = 17cm]{Bilder/dead_ends_cut.png}
\caption{Merging dead ends. Dead end shown in dark gray.}
\label{fig:dead_ends}
\end{figure}
\newpage
%\vspace*{10cm}
\section{Using the program}
Before starting the program, the user needs to specify an input molecule geometry. The files defining the molecule are stored in the folder "input\_files".
\subsection{Input file format specification}
Within the folder "input\_files", there are two files defining each molecule, called "FILENAME.dat" and "FILENAME\_transport.dat".
The "FILENAME.dat" file contains all the atom positions. In the first line, the number of atoms in the file is given. The second line is a comment line, which is skipped when reading the file. The following lines each specify the position of an atom in the molecule. Each of these lines starts with an element symbol, followed by three numbers, describing the X,Y and Z position of the molecule in a cartesian coordinate system. This file has the format of an ".XYZ" file, that can be displayed with molecule structure viewers such as Jmol.
The "FILENAME\_transport.dat" file has two functions. First, it specifies which of the atoms in the molecule are part of the electrical contacts, and which are part of the device to be partitioned. Second, the file defines an interaction distance for each pair of element types.
The "FILENAME\_transport.dat" file has the following format: The first line of the file starts with the keyword "Device" after which a range of device atom indices must be given, starting with index one. The indices correspond to the order in which the atoms are listed in "FILENAME.dat". The following lines define the contacts. They must start with the keywords "Contact1", "Contact2, and so on, followed once again by ranges of atom indices.
\subsection{Starting the program}
The entry point of the program is in "main.py". The program can be run using this file. Once the user has defined a molecule by writing two files "FILENAME.dat" and "FILENAME\_transport.dat" to the folder "input\_files", they must add an entry "FILENAME" to the list "ALL\_FILE\_NAMES" near the top of the file "main.py". The user can then specify which molecules are to be analyzed by assigning the appropriate slice of the list "ALL\_FILE\_NAMES" to the list "FILE\_NAMES".
The program caches the matrices (dist\_mtrx) and (interact\_mtrx). If the flag "LOAD\_CACHE\_DATA" is set to true, the program will load the cached matrices when it is run repeatedly. For large molecules, this can save considerable time.
The integer "MAX\_GENERATIONS" defines the maximum number of generations of bins the program will generate during a run. This limit was defined to avoid accidental infinite loops. Its default value is 100, which may have to be increased for very large molecules.
Finally, if "GLOBAL\_VERBOSITY\_FLAG" is set to true, the program will print additional debugging information.
%\section{References}
%\begin{thebibliography}{20}
%\bibitem{Rueckmann:2015} R"uckmann, Gl"uge, Windzio:
% {\em Hinweise zum Praktikum und zur Auswertung
% von Messergebnissen.}
% Bremen, November 2015.
%\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7749510763,
"avg_line_length": 69.8087431694,
"ext": "tex",
"hexsha": "c1c5235168f7f8de121f8d01d56732722c38c982",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "460c8685d8682e51a00f29583593ce0ce9c58b91",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "JebKerman86/Atom-Parser",
"max_forks_repo_path": "documentation/Documentation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "460c8685d8682e51a00f29583593ce0ce9c58b91",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "JebKerman86/Atom-Parser",
"max_issues_repo_path": "documentation/Documentation.tex",
"max_line_length": 792,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "460c8685d8682e51a00f29583593ce0ce9c58b91",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "JebKerman86/Atom-Parser",
"max_stars_repo_path": "documentation/Documentation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2933,
"size": 12775
} |
\documentclass[a4paper,12pt]{book}
\usepackage[english]{babel}
\usepackage{blindtext}
\begin{document}
\chapter{Exploring the page layout}
In this chapter we will study the layout of pages.
\section{Some filler text}
\blindtext
\section{A lot more filler text\protect\footnote{to fill the page}}
More dummy text\footnote{serving as a placeholder} will follow.
\subsection{Plenty of filler text}
\blindtext[10]
\end{document}
| {
"alphanum_fraction": 0.7671232877,
"avg_line_length": 31.2857142857,
"ext": "tex",
"hexsha": "65db4afade540b12ca3783df71bd2f4098d92d04",
"lang": "TeX",
"max_forks_count": 12,
"max_forks_repo_forks_event_max_datetime": "2022-03-01T21:30:13.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-05-11T00:40:28.000Z",
"max_forks_repo_head_hexsha": "49f6c9c8e0c9f7a6554e720c8a82978a5f5d1042",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "eagleqian/LaTeX-Beginner-s-Guide",
"max_forks_repo_path": "Chapter03/9867_03_15.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "49f6c9c8e0c9f7a6554e720c8a82978a5f5d1042",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "eagleqian/LaTeX-Beginner-s-Guide",
"max_issues_repo_path": "Chapter03/9867_03_15.tex",
"max_line_length": 68,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "49f6c9c8e0c9f7a6554e720c8a82978a5f5d1042",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "eagleqian/LaTeX-Beginner-s-Guide",
"max_stars_repo_path": "Chapter03/9867_03_15.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-01T21:30:11.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-11T01:15:14.000Z",
"num_tokens": 118,
"size": 438
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% About this LaTeX file:
%%
%% This is a sample LaTeX file for a UWaterloo exam/test document using the
%% Odyssey exam management system and the Crowdmark online grading system.
%% Both Odyssey and Crowdmark cover over parts of every test page.
%% This LaTeX file sets up a page layout with this in mind.
%% Look for this LaTeX file on the University of Waterloo's
%% help page for the Crowdmark system: https://uwaterloo.ca/crowdmark/
%% or https://uwaterloo.ca/crowdmark/midterms-and-final-exams.
%% Sample pdf files showing the page layout options are there too, as are
%% documents describing how to use the Odyssey and Crowdmark systems for
%% managing and grading (online grading) your tests, quizzes, and exams.
%% Why the page layout is the way it is:
%%
%% Odyssey uses the bottom .65 inches of every page for a page number
%% and about 4 inches at the top of the cover page for exam information.
%% Crowdmark uses the top 1.5 inches of every page (including the cover page)
%% for QR coded booklet/page information.
%% When Odyssey and Crowdmark are used together, the cover page starts
%% with a 1.5 inch QR code area followed by the Odyssey 4 inch exam area.
%% In fact, Odyssey can use as little as 3.75 inches when there are no special
%% materials listed for an exam. And, it can use more than 4 inches when there
%% are many listed materials.
%% How to use this LaTeX file:
%%
%% This sample LaTeX file can be used for 4 variations of page layout.
%% Two variations are for Crowdmark, when the LaTeX variable tmargin is
%% set to 1.5in (default):
%% * Odyssey and Crowdmark: use the file as is
%% (the cover page framed box with exam info is covered up by Odyssey)
%% * Crowdmark without Odyssey: use the file as is
%% (the framed box is not covered up)
%%
%% And two variations that do not use Crowdmark, when the LaTeX
%% variable tmargin is set to .25in (the larger Crowdmark top margin
%% of 1.5 inches is no longer needed):
%% * Odyssey without Crowdmark
%% * Without Odyssey or Crowdmark
% Version 1 (Feb 19, 2019), Paul Kates ([email protected]).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
% Use showframe to see layout boundaries during drafts, but not for a printed
% exam.
%\usepackage{showframe}
%% Page layout
\usepackage{geometry} % used for page layout
% Size numbers you can change:
% hmargin{leftside,rightside} page margins can be adjusted to your liking.
% The page looks symmetric with the left, right values of .5in and .68in.
%
% Size numbers not meant for change. Sizes are determined by the heights of
% the areas covered by Odyssey and Crowdmark.
% tmargin = margin along the top of every page
% = 1.5 inches when used with Crowdmark, or
% = .25 inches when used without Crowdmark
% bmargin = margin along the bottom of every page = .65 inches; from Odyssey
% \myodysseyheight = height of this file's cover page exam info area
% which is meant to fit inside (be covered up by)
%                     the Odyssey exam info area (of about 4in in height)
\newlength{\myodysseyheight}
\setlength{\myodysseyheight}{3.6in} % typical range [3.6,4] inches
\geometry{letterpaper, % 8.5 x 11 inch page (legalpaper 8.5x14in also works)
tmargin=1.5in, % page top margin when using Crowdmark
%tmargin=.25in, % page top margin when not using Crowdmark
hmargin={.5in,.68in}, % leftside, rightside page margins
% (page looks symmetric)
bmargin=.65in, % page bottom margin
includehead % place header in body of text below Crowdmark QR
}
%% Footer and header
% The footer below will be covered up by Odyssey's footer. But, when not using
% Odyssey, the footer will show the same information as Odyssey: exam title and
% page number.
\usepackage{lastpage} % for page number of last page
\usepackage{fancyhdr} % for setting footer
\pagestyle{fancy}
\fancyhead{} % turn off default header and footer
\fancyfoot{}
\fancyfoot[L]{University of Waterloo} % left, centre, right footers
\fancyfoot[C]{SE 465 Midterm Winter 2019}
\fancyfoot[R]{Page \thepage\ of \pageref{LastPage}}
\renewcommand{\headrulewidth}{0pt} % really turn off header rule
\renewcommand{\footrulewidth}{0.4pt} % default is 0pt
%% Other LaTeX packages and settings you use can go here:
%
\usepackage{mathtools, amssymb} % mathtools includes amsmath package
\usepackage{enumitem}
\usepackage{listings}
\usepackage{url}
\lstset{basicstyle=\footnotesize\ttfamily,breaklines=true}
%% Question grade point values in left margin
\reversemarginpar % put margin note/grade on left, default is rightside of page
\setlength{\marginparsep}{-.4in} % default 10pt
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Cover page of test:
\begin{document}
% Save original paragraph indentation size in case you want to restore it.
\newlength{\myoldparindent}
\setlength{\myoldparindent}{\parindent}
\setlength{\parindent}{0em} % turn off paragraph indentation (set length to 0)
% Use following line later if ever want to restore \parindent:
%\setlength{\parindent}{\myoldparindent}
% A framed box is placed around the area where Odyssey puts its cover page info.
% Anything put into this box will be covered up by Odyssey.
% You can put your exam info here for drafts.
% Or, use this area for your exam cover page if you decide not to use Odyssey.
% If you find that the box frame peeks below the Odyssey cover page info then
% either remove the framing lines by changing \fbox to \mbox below, or make
% a small reduction in the size of value \myodysseyheight (set above).
\fbox{ % to remove the frame lines, change this \fbox to \mbox
% Start of a minipage container (inside the fbox) for 2 inner minipages below.
\begin{minipage}[t][\myodysseyheight]{\textwidth}
% Some layout comments you can print on a draft cover page.
\begin{center}
%\large{\textbf{Space above this box is for a Crowdmark QR code}}\\[1ex]
%\large{\textbf{This boxed area will be covered up by Odyssey}}\\[2ex]
% Exam title information.
%
SE 465 Midterm Examination\\
University of Waterloo\\
Term: Winter \hspace{1cm} Year: 2019\\
\end{center}
% Required UWaterloo exam details for cover page:
\begin{minipage}[t]{3.5in} % half of the default 7 in page text width
Date: Thursday, February 28, 2019\\
Time: 18:30 – 20:00 (90 minutes)\\
Instructors: Patrick Lam\\
Lecture Section: 001\\
Exam Type: Open book, open notes, calculators with no communications capabilities\\
Number of exam pages (includes cover page): 8\\
\end{minipage} % end of first inner minipage of cover page exam details
\hfill%
%
% Student information area:
\begin{minipage}[t]{3.5in} % half of the page text width
\textit{Please Print}\\[1mm]
Last Name \hrulefill\\[2mm]
First Name \hrulefill\\[2mm]
UWaterloo ID \# \hrulefill\\[2mm]
Username \hrulefill\\[2mm]
\end{minipage} % end of second inner minipage of cover page student details
\end{minipage} % end of framed box container minipage
}
%% "Instructions to students" area of the cover page.
%% Every item here is optional. Even the grading box is here only as
%% an example.
%
% Adjust the vertical space here if the Odyssey exam info area grows larger
% and starts to cover up the grading box below.
%\vspace{1in} % height can be 0 to 1 inch to nicely position the grading box
% Grading box. Use the "Score" row for student scores if not using Crowdmark.
\begin{center}
\begin{tabular}{|l| c c c c c ||r|} \hline
Question & 1 & 2 & 3 & 4 & 5 & Total \\ \hline
Points & 15 & 10 & 15 & 10 & 10 & 60 \\ \hline
%Score & & & & & & & & & & & \\ \hline
\end{tabular}
\end{center}
%
\textbf{Instructions}
\begin{enumerate}
\item Turn off all communication devices. Communication devices must be stored with your personal items for the duration of the exam. Taking a communication device to a washroom break during this examination is not allowed and will be considered an academic offence.
\item I shuffled the order of the questions from what I said in class for better page breaks.
\item The exam lasts \textbf{90} minutes and there are 60 marks.
\item Verify that your name and student ID number is on the cover page.
\item If you feel like you need to ask a question, know that the most likely answer is ``Read the Question''. No questions are permitted. If you find that a question requires clarification, proceed by clearly stating any reasonable assumptions necessary to complete the question. If your assumptions are reasonable, they will be taken into account during grading.
\item Answer the questions in the spaces provided. If you require
additional space to answer a question, please use the second last page
and refer to this page in your solutions. You may tear off the last page
to use for rough work.
\item Do not write on the Crowdmark QR code at the top of each page.
\item Use a dark pencil or pen for your work.
% \item More instructions, about calculators, formula sheets, asking
% questions, etc.
\end{enumerate}
% Remove these 2 LaTeX commands when making your own cover page.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Second page of test, for exam questions.
%
\newpage
\renewcommand{\headrulewidth}{0.4pt} % put header rule on all non-cover pages
\section{Mutation Testing [10 Marks]}
Consider this function {\tt Location.add()}. Write down a (non-trivial, non-stillborn, non-equivalent) mutant to
the {\tt Location.add()} function (indicating your change) as well as a test case that kills
the mutant. (Use JUnit-like syntax; we don't care about the details).
Provide the expected and actual output for your testcase with the
original function as well as for the mutant.
\vspace*{1em}
Assume that you can create a {\tt Location}
object with an expression like {\tt new Location(0.0, 4.65, 3.50, W);}. Describe a {\tt Location} with a tuple like $\langle 0.0, 4.65, 3.50, W\rangle$.
\begin{lstlisting}[language=Java]
public Location(double x, double y, double z, Object world) {
this.x = x; this.y = y; this.z = z; this.world = world;
}
// credit: org.bukkit.Location.java
/**
* Adds the location by [sic] another.
*
* @param vec The other location
* @return the same location
* @throws IllegalArgumentException for differing worlds
*/
public Location add(Location vec) {
if (vec == null || vec.getWorld() != getWorld()) {
throw new IllegalArgumentException("Cannot add Locations of differing worlds");
}
x += vec.x;
y += vec.y;
z += vec.z;
return this;
}
\end{lstlisting}
\newpage
(Extra space for Q1 answers)
\newpage
%% private void removeFromOutgoing(Plugin plugin) {
%% synchronized (outgoingLock) {
%% Set<String> channels = outgoingByPlugin.get(plugin);
%% if (channels != null) {
%% String[] toRemove = channels.toArray(new String[0]);
%% outgoingByPlugin.remove(plugin);
%% for (String channel : toRemove) {
%% removeFromOutgoing(plugin, channel);
%% }
%% }
%% }
%% }
%% public Location setDirection(Vector vector) {
%% /*
%% * Sin = Opp / Hyp
%% * Cos = Adj / Hyp
%% * Tan = Opp / Adj
%% *
%% * x = -Opp
%% * z = Adj
%% */
%% final double _2PI = 2 * Math.PI;
%% final double x = vector.getX();
%% final double z = vector.getZ();
%% if (x == 0 && z == 0) {
%% pitch = vector.getY() > 0 ? -90 : 90;
%% return this;
%% }
%% double theta = Math.atan2(-x, z);
%% yaw = (float) Math.toDegrees((theta + _2PI) % _2PI);
%% double x2 = NumberConversions.square(x);
%% double z2 = NumberConversions.square(z);
%% double xz = Math.sqrt(x2 + z2);
%% pitch = (float) Math.toDegrees(Math.atan(-vector.getY() / xz));
%% return this;
%% }
\section{Branch and Statement Coverage [15 Marks]}
Here's the implementation of Java's \texttt{java.util.random.Random.nextInt} function. (5 Marks) Draw the control-flow graph
for \texttt{nextInt}. (5 Marks) Since the utility function \texttt{next()} returns pseudorandom bits,
discuss any difficulties that may arise in ensuring 100\% statement coverage for \texttt{nextInt()}.
How can you ensure 100\% statement coverage?
(5 Marks) What about branch coverage? What are the difficulties and how can you ensure 100\% branch coverage?
\vspace*{1em}
Assume that you can change any part of {\tt Random}'s state and call \texttt{nextInt} how you'd like; you may not change \texttt{nextInt} itself.
\begin{lstlisting}[language=Java]
// credit: java.util.Random
public int nextInt(int n) {
if (n<=0)
throw new IllegalArgumentException("n must be positive");
if ((n & -n) == n) // i.e., n is a power of 2
return (int)((n * (long)next(31)) >> 31);
int bits, val;
do {
bits = next(31);
val = bits % n;
} while(bits - val + (n-1) < 0);
return val;
}
\end{lstlisting}
\newpage
%Give a set of test cases that achieve 100\% statement coverage on this
%function.
%% private AtomicLong seed;
%% protected int next(int bits) {
%% long oldseed, nextseed;
%% do {
%% oldseed = seed.get();
%% nextseed = (oldseed * multiplier + addend) & mask;
%% } while (!seed.attemptUpdate(oldseed, nextseed));
%% return (int)(nextseed >>> (48 - bits));
%% }
(Extra space for Q2 answers)
\newpage
\section{Input Generation [10 Marks]}
You have a test suite which contains the following set of calls to a REST API (\url{https://reqres.in}, specifically).
I've put the POST and PUT payloads in braces after the URL.
\begin{lstlisting}
GET /api/users
GET /api/users/2
POST /api/users { "name": "plam", "job": "SE director" }
PUT /api/users/2 { "name": "plam", "commute": "bicycle" }
DELETE /api/users/2
POST /api/register { "email": "[email protected]", "password": "password1" }
POST /api/login { "email": "[email protected]", "password": "password2" }
POST /api/logout { "token": "QpwL5tke4Pnpja7X" }
\end{lstlisting}
(2 Marks) Provide an additional input that looks correct and an input that looks incorrect.
(6 Marks)~Provide pseudocode that programmatically generates correct inputs, including at least 4 of the calls above. Your pseudocode may call primitives that randomly generate an integer or string.
(2 Marks)~Describe how to programmatically generate incorrect inputs. (A good way of
describing is by modifying the pseudocode).
\newpage
\section{Finite State Machines [10 Marks]}
(4 Marks) Propose a Finite State Machine for the REST API in the previous question. The FSM should abstract away from the details in the
example requests. It should also include an appropriate cycle. (There are multiple possible correct answers). (6 Marks) Write down the test
requirements for Simple Round Trip Coverage and a test suite which satisfies these test requirements. Are there additional test
requirements for Complete Round Trip Coverage in your FSM?
%Here are some inputs. Provide a grammar and an additional input that conforms to the grammar.
%Then, give a second input that ``almost conforms'' to the grammar, in the sense that one change to the
%grammar suffices to generate your second input.
%--> sequence of API calls
%get list, retrieve thing, modify it, push thing
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Some tests may want to end with blank pages for student's rough work, or
%% for more space to write answers. Some optional instructions are given below.
%% \newpage
%% \begin{center}
%% Extra page for answers.
%% Please specify the question number here and the use of this page on
%% the question page.
%% \end{center}
\newpage
\section{Short Answer [15 Marks total]}
Answer these questions using at most three sentences. Each question is worth
3 marks.
\begin{enumerate}[label=(\alph*)]
\item Give an example of a bug that is best detected by exploratory testing.
\vspace*{6em}
\item Your test suite achieves 55\% statement coverage and the tests all pass. Without any further information, what is one fact that you can conclude about the statements that are reported as covered?
\vspace*{6em}
\item For the same test suite as above, what can you conclude about the 45\% of statements that aren't covered?
\vspace*{6em}
\item You are developing a test suite for a web page that is to be translated into multiple languages. How do you introduce a level of abstraction into your tests?
\vspace*{6em}
\item Consider a reachable fault $F$ that infects the program state, propagating to output. Say you delete the line of code containing $F$. Would you still expect a failure? Where is the fault now?
\vspace*{6em}
%\item Describe one idiom in the code which makes it difficult to achieve 100\% statement coverage.
%\item Web frontend testing
\end{enumerate}
\end{document}
| {
"alphanum_fraction": 0.6792420524,
"avg_line_length": 42.1177884615,
"ext": "tex",
"hexsha": "ca70da512c401ab5608baa033c964cf0e63f0cea",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2020-07-16T18:40:50.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-01-07T20:35:20.000Z",
"max_forks_repo_head_hexsha": "49cd6d94aa5b60a93b707557bfa5e786215f98c9",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "patricklam/stqam-2019",
"max_forks_repo_path": "exams/se465-midterm-w19.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "49cd6d94aa5b60a93b707557bfa5e786215f98c9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "patricklam/stqam-2019",
"max_issues_repo_path": "exams/se465-midterm-w19.tex",
"max_line_length": 368,
"max_stars_count": 20,
"max_stars_repo_head_hexsha": "49cd6d94aa5b60a93b707557bfa5e786215f98c9",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "patricklam/stqam-2019",
"max_stars_repo_path": "exams/se465-midterm-w19.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-15T23:48:37.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-07T19:33:38.000Z",
"num_tokens": 4507,
"size": 17521
} |
%% March 2018
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% AGUJournalTemplate.tex: this template file is for articles formatted with LaTeX
%
% This file includes commands and instructions
% given in the order necessary to produce a final output that will
% satisfy AGU requirements, including customized APA reference formatting.
%
% You may copy this file and give it your
% article name, and enter your text.
%
%
% Step 1: Set the \documentclass
%
% There are two options for article format:
%
% PLEASE USE THE DRAFT OPTION TO SUBMIT YOUR PAPERS.
% The draft option produces double spaced output.
%
%% To submit your paper:
\documentclass[draft]{agujournal2018}
\usepackage{apacite}
\usepackage{textcomp}
\usepackage{url} %this package should fix any errors with URLs in refs.
\usepackage{lineno}
%\linenumbers
%%%%%%%
% As of 2018 we recommend use of the TrackChanges package to mark revisions.
% The trackchanges package adds five new LaTeX commands:
%
% \note[editor]{The note}
% \annote[editor]{Text to annotate}{The note}
% \add[editor]{Text to add}
% \remove[editor]{Text to remove}
% \change[editor]{Text to remove}{Text to add}
%
% complete documentation is here: http://trackchanges.sourceforge.net/
%%%%%%%
\draftfalse
%% Enter journal name below.
%% Choose from this list of Journals:
%
% JGR: Atmospheres
% JGR: Biogeosciences
% JGR: Earth Surface
% JGR: Oceans
% JGR: Planets
% JGR: Solid Earth
% JGR: Space Physics
% Global Biogeochemical Cycles
% Geophysical Research Letters
% Paleoceanography and Paleoclimatology
% Radio Science
% Reviews of Geophysics
% Tectonics
% Space Weather
% Water Resources Research
% Geochemistry, Geophysics, Geosystems
% Journal of Advances in Modeling Earth Systems (JAMES)
% Earth's Future
% Earth and Space Science
% Geohealth
%
% ie, \journalname{Water Resources Research}
\journalname{JGR: Solid Earth}
\begin{document}
%% ------------------------------------------------------------------------ %%
% Title
%
% (A title should be specific, informative, and brief. Use
% abbreviations only if they are defined in the abstract. Titles that
% start with general keywords then specific terms are optimized in
% searches)
%
%% ------------------------------------------------------------------------ %%
% Example: \title{This is a test title}
\title{Primary and secondary red bed magnetization constrained by fluvial intraclasts}
%% ------------------------------------------------------------------------ %%
%
% AUTHORS AND AFFILIATIONS
%
%% ------------------------------------------------------------------------ %%
% Authors are individuals who have significantly contributed to the
% research and preparation of the article. Group authors are allowed, if
% each author in the group is separately identified in an appendix.)
% List authors by first name or initial followed by last name and
% separated by commas. Use \affil{} to number affiliations, and
% \thanks{} for author notes.
% Additional author notes should be indicated with \thanks{} (for
% example, for current addresses).
% Example: \authors{A. B. Author\affil{1}\thanks{Current address, Antartica}, B. C. Author\affil{2,3}, and D. E.
% Author\affil{3,4}\thanks{Also funded by Monsanto.}}
\authors{Nicholas L. Swanson-Hysell\affil{1}, Luke M. Fairchild\affil{1}, Sarah P. Slotznick\affil{1}}
% \affiliation{1}{First Affiliation}
% \affiliation{2}{Second Affiliation}
% \affiliation{3}{Third Affiliation}
% \affiliation{4}{Fourth Affiliation}
\affiliation{1}{Department of Earth and Planetary Science, University of California, Berkeley, CA, USA}
%(repeat as many times as is necessary)
%% Corresponding Author:
% Corresponding author mailing address and e-mail address:
% (include name and email addresses of the corresponding author. More
% than one corresponding author is allowed in this LaTeX file and for
% publication; but only one corresponding author is allowed in our
% editorial system.)
% Example: \correspondingauthor{First and Last Name}{[email protected]}
\correspondingauthor{Nicholas Swanson-Hysell}{[email protected]}
%% Keypoints, final entry on title page.
% List up to three key points (at least one is required)
% Key Points summarize the main points and conclusions of the article
% Each must be 100 characters or less with no special characters or punctuation
% Example:
% \begin{keypoints}
% \item List up to three key points (at least one is required)
% \item Key Points summarize the main points and conclusions of the article
% \item Each must be 100 characters or less with no special characters or punctuation
% \end{keypoints}
\begin{keypoints}
\item Red siltstone intraclasts reveal two ancient magnetizations held by hematite -- one acquired before redeposition and the other after burial
\item Fine-grained hematite spans from superparamagnetic to single domain leading to a wide range of unblocking temperatures and coercivities
\item Detrital hematite thermally unblocks in a narrow high temperature range that can be isolated through high-resolution thermal demagnetization
\end{keypoints}
%% ------------------------------------------------------------------------ %%
%
% ABSTRACT
%
% A good abstract will begin with a short description of the problem
% being addressed, briefly describe the new data or analyses, then
% briefly states the main conclusion(s) and how they are supported and
% uncertainties.
%% ------------------------------------------------------------------------ %%
%% \begin{abstract} starts the second page
\begin{abstract}
The magnetization of hematite-bearing sedimentary rocks provides critical records of geomagnetic reversals and paleogeography. However, the timing of hematite remanent magnetization acquisition is typically difficult to constrain. While detrital hematite in sediment can lead to a primary depositional remanent magnetization, alteration of minerals through interaction with oxygen can lead to the post-depositional formation of hematite. In this study, we use exceptionally-preserved fluvial sediments within the 1.1 billion-year-old Freda Formation to gain insight into the timing of hematite remanence acquisition and its magnetic properties. This deposit contains siltstone intraclasts that were eroded from a coexisting lithofacies and redeposited within channel sandstone. Thermal demagnetization, petrography and rock magnetic experiments on these clasts reveal two generations of hematite. One population of hematite demagnetized at the highest unblocking temperatures and records directions that rotated along with the clasts. This component is a primary detrital remanent magnetization. The other component is removed at lower unblocking temperatures and has a consistent direction throughout the intraclasts. This component is held by finer-grained hematite that grew and acquired a chemical remanent magnetization following deposition resulting in a population that includes superparamagnetic nanoparticles in addition to remanence-carrying grains. The data support the interpretation that magnetizations of hematite-bearing sedimentary rocks held by $>$400 nm grains that unblock close to the N\'eel temperature are more likely to record magnetization from the time of deposition. This primary magnetization can be successfully isolated from co-occurring authigenic hematite through high-resolution thermal demagnetization.
\end{abstract}
%% ------------------------------------------------------------------------ %%
%
% TEXT
%
%% ------------------------------------------------------------------------ %%
%%% Suggested section heads:
% \section{Introduction}
%
% The main text should start with an introduction. Except for short
% manuscripts (such as comments and replies), the text should be divided
% into sections, each with its own heading.
% Headings should be sentence fragments and do not begin with a
% lowercase letter or number. Examples of good headings are:
% \section{Materials and Methods}
% Here is text on Materials and Methods.
%
% \subsection{A descriptive heading about methods}
% More about Methods.
%
% \section{Data} (Or section title might be a descriptive heading about data)
%
% \section{Results} (Or section title might be a descriptive heading about the
% results)
%
% \section{Conclusions}
\section{Introduction}
The magnetizations of hematite-bearing sedimentary rocks known as ``red beds'' have provided ample opportunities for Earth scientists to gain insight into the ancient geomagnetic field and the paleogeographic positions of sedimentary basins. However, with these opportunities has come much scientific debate, leading to what has been referred to as the ``red bed controversy'' \citep{Butler1992a, Beck2003b, Van-Der-Voo2012a}. This controversy stems from the reality that hematite within sedimentary rocks can have two sources: 1) detrital grains that are within the sediment at the time of deposition; 2) grains that grow \textit{in situ} after the sediments have been deposited.
How does one constrain the relative age of hematite within sedimentary rocks? Many of the traditional paleomagnetic field tests are unable to differentiate between primary versus diagenetic remanence. For example, a structural fold test can constrain that a remanence direction was obtained prior to folding, but millions of years have typically passed between the deposition of a sediment and such tectonic tilting. Dual-polarity directions through a sedimentary succession are commonly interpreted as providing assurance that the remanence records primary or near-primary magnetization; however, hematite growth could occur significantly after deposition during a protracted period over which the geomagnetic field was in both reversed and normal polarities. Petrographic investigations are valuable, but it can be difficult to ascertain how much the petrographically observed hematite contributes to the magnetization and to unambiguously interpret whether observed grains are detrital or not (e.g. \citealp{Elmore1982a}). A common approach to classify hematite grains within red beds is into a fine-grained pigmentary population, typically interpreted to have formed within the sediment, and a coarser-grained population that has been referred to in the literature as ``specularite'' \citep{Butler1992a, Van-Der-Voo2012a}. \citet{Tauxe1980a} showed that sediments with abundant red pigmentary hematite in the Miocene Siwalik Group had lower thermal unblocking temperatures than grey samples dominated by a coarser-grained phase of specular hematite. An additional approach taken by \citet{Tauxe1980a}, and other workers going back to the work of \citet{Collinson1965a}, is to preferentially remove fine-grained pigmentary hematite through prolonged immersion in concentrated HCl acid. Paired chemical and thermal demagnetization have been interpreted to show that removal of pigmentary hematite coincides with removal of hematite associated with lower unblocking temperatures. These data support the interpretation that coarser grains that are more resistant to dissolution in acid correspond with those that carry remanence to the highest unblocking temperatures \citep{Tauxe1980a,Bilardello2010c}. Observations such as these have led to the practice of defining the characteristic remanent magnetization from hematite-bearing sediments as that held by the highest unblocking temperatures \citep{Van-Der-Voo2012a}. Additional lines of evidence in numerous successions have supported this approach. For example, in the well-studied Carboniferous Mauch Chunk Formation of Pennsylvania, remanence removed up to $\sim$660\textdegree C has uniform polarity and fails a fold test while the component removed upwards of 670\textdegree C is dual-polarity, was acquired before folding, and is interpreted as a primary magnetization \citep{Kent1985b, DiVenere1991a}. Nevertheless, the primary versus secondary nature of micron-scale ``specularite'' grains that likely carry this remanence has been one of the largest sources of contention in the ``red bed controversy'' \citep{Van-Houten1968a, Tauxe1980a, Butler1992a, Van-Der-Voo2012a}.
What is needed to most confidently address the timing of remanence acquisition is a process that reorients the sediment before it has been lithified. Two such processes are: 1) syn-sedimentary slumping wherein coherent sediment is reoriented through soft-sediment folding in the surface environment and 2) intraclasts comprised of the lithology of interest that have been liberated and redeposited within the depositional environment. Sediments that have undergone reorienting processes within the depositional environment can provide significant insight into whether magnetization was acquired before or after reorientation.
\citet{Tauxe1980a} studied 7 cobble-sized clasts within the Siwalik Group that were interpreted to have formed by cut-bank collapse and discovered that their magnetic remanence was acquired prior to clast reorientation. \citet{Molina-Garza1991a} observed dispersed magnetization directions in sandstone and siltstone clasts within the Triassic Moenkopi and Chinle formations in New Mexico and interpreted the characteristic remanence to be a detrital or early chemical remanence. An investigation by \citet{Purucker1980a} on red beds also of the Triassic Moenkopi Formation of Arizona used multiple such syn-sedimentary processes to gain insight into hematite acquisition. In their study, an intraformational landslide deposit with isoclinal folds of hematite-bearing claystone revealed non-uniform directions upon blanket demagnetization to 650\textdegree C that cluster better when corrected for their tilt, leading to a primary interpretation for their remanence. Scatter was also observed in intraformational conglomerate clasts weathered out of an underlying unit upon blanket thermal demagnetization to 630\textdegree C. However, the lack of principal component analysis makes it difficult to evaluate the coherency of the directions. Complicating matters, \citet{Larson1982b} analyzed shale rip-up clasts in the same Moenkopi Formation and used the fact that similar remanence directions were removed between clasts during thermal demagnetization up to 645\textdegree C as support for the hypothesis that red beds rarely reflect the geomagnetic field at the time of deposition. Evaluating the robustness of this result, as well as the varying results of similar field tests conducted by \citet{Liebes1982a} on the Chugwater and Moenkopi formations, is hindered by the cessation of thermal demagnetization before the N\'eel temperature of hematite and the lack of principal component analysis. These limitations are found in many studies from this era of research, when the red bed controversy was particularly fervent, as the work predates the widespread application of principal component analysis in conjunction with systematic progressive thermal demagnetization \citep{Kirschvink1980a, Van-Der-Voo2012a}. Using such methods, \citet{Opdyke2004a} analyzed 20 red siltstone rip-up clasts from the Mauch Chunk Formation and found that the remanence component that unblocks above 650 \textdegree C and passes a structural fold test was reoriented with the rip-up clasts providing strong evidence for a syndepositional or early post-depositional origin of the hematite.
In this study, we investigate cm-scale siltstone intraclasts within the ca. 1.1 Ga Freda Formation that were eroded by fluvial processes and redeposited amongst cross-stratified sandstones (Fig. \ref{fig:intraclast_images}). High-resolution thermal demagnetization data on these clasts constrain the timing of hematite acquisition by revealing a primary component that formed prior to the erosion of the clasts within the depositional environment and a secondary component that formed following their redeposition. Rock magnetic experiments constrain the magnetic mineralogy and provide additional insights into the grain size of the hematite populations that hold these remanences.
\begin{figure}[!ht]
\centering
\noindent\includegraphics[width=\textwidth]{figures/BRIC_clast_images_2.png}
\caption{\small{A: Siltstone intraclasts within the Freda Formation. The field photo shows an intact layer of siltstone below the hammer head which is topped by a bed of trough cross-stratified coarse sandstone with horizons of siltstone intraclasts. The hammer is 40 cm long. The inset photo is of an individual intraclast that was sampled as BRIC.26. B: A scan of a thin section of the BRIC.26 intraclast (upper half of image) and the coarse sand matrix (lower half of image). The red color of the intraclast is due to pigmentary hematite. C: Backscatter electron image of the siltstone clast from the region of the white box in B. The light-colored detrital grains that are labeled with arrows (light due to iron's high atomic number) were confirmed to be hematite through electron backscatter diffraction.}}
\label{fig:intraclast_images}
\end{figure}
The $\sim$5 km thick Freda Formation was deposited in the North American Midcontinent Rift as it was thermally subsiding following the cessation of widespread magmatic activity \citep{Cannon1992b}. The fluvial sediments of the Freda Formation are part of the Oronto Group and were conformably deposited following the deposition of the alluvial Copper Harbor Conglomerate and the lacustrine Nonesuch Formation \citep{Ojakangas2001a, Slotznick2018b}. Abundant fine-grained red siltstones within the Freda Formation have a well-behaved magnetic remanence dominated by hematite \citep{Henry1977a}. A maximum age constraint on the Freda Formation of 1085.57 $\pm$ 0.25/1.3 Ma (2$\sigma$ analytical/analytical + tracer + decay constant uncertainty; \citealp{Fairchild2017a}) is provided by a U-Pb date of a lava flow within the underlying Copper Harbor Conglomerate. Minor volcanics within the Freda Formation on the Keweenaw Peninsula are unlikely to be substantially younger than the youngest dated volcanics within the Midcontinent basin (1083.52 $\pm$ 0.23/1.2 Ma from the Michipicoten Island Formation; \citealp{Fairchild2017a}). An age of ca. 1080 Ma for the basal 500 meters of the Freda Formation is consistent with modeling of post-rift thermal subsidence \citep{Hutchinson1990a}.
The studied intraclast-bearing outcrop is located along the Bad River (northern Wisconsin) in the lower portion of the Freda Formation -- approximately 320 to 340 meters above its conformable base with the Nonesuch Formation (latitude: 46.3866 \textdegree N, longitude 90.6373 \textdegree W). The two main lithofacies in the studied outcrop are: (1) siltstone to very fine sandstone with planar lamination and horizons of ripple cross-stratification and (2) coarse to very coarse subarkosic sandstone with dune-scale trough cross-stratification (Fig. \ref{fig:intraclast_images}). These lithofacies are consistent with a fluvial depositional environment where the coarse sandstone facies are channel deposits and the siltstones are inner-bank or over-bank deposits. The coarse-grained sandstone contains horizons of tabular cm-scale intraclasts comprised of the red siltstone lithology that is present in underlying beds of intact siltstone (Fig. \ref{fig:intraclast_images}). These tabular clayey-silt intraclasts were eroded within the depositional environment and redeposited in the sandstone. Due to migrating channels in fluvial systems, it is expected that a river will erode its own sediments. The intraclasts would have been held together through cohesion resulting from the clay component within the sediment. Given that the clasts are large (1 to 7 cm) relative to their host sediment, that they are angular, and that they would have been fragile at the time of deposition, it is unlikely that they were transported far within the channel.
\section{Paleomagnetic Results}
Oriented samples were collected and analyzed from 39 Freda Formation intraclasts. The dimensions of the sampled clasts ranged from 2.2 x 1.4 x 0.5 cm to 7.2 x 2.3 x 1.2 cm. Given that the clasts were typically smaller than the 1-inch-diameter drill cores used for sampling, they were collected along with their sandstone matrix. These oriented cores were mounted onto quartz glass discs with Omega CC cement and the matrix material was micro-drilled away. The mounted clasts underwent stepwise thermal demagnetization in the UC Berkeley Paleomagnetism Lab using an ASC demagnetizer (residual fields $<$10 nT) with measurements made on a 2G DC-SQUID magnetometer. The demagnetization protocol had high resolution approaching the N\'eel temperature of hematite (5\textdegree C to 2\textdegree C to 1\textdegree C) resulting in 30 total thermal demagnetization steps (Fig. \ref{fig:intraclast_pmag}). All paleomagnetic data are available to the measurement level in the MagIC database (\url{https://earthref.org/MagIC/doi/}). \textit{So that reviewers have access to the data, they are currently available in CIT lab format and MagIC format here: \url{https://github.com/Swanson-Hysell-Group/2018_Red_Bed_Intraclasts}}.
\begin{figure}[!ht]
\noindent\includegraphics[width=\textwidth]{figures/BRIC_pmag.pdf}
\caption{\small{Paleomagnetic data from intraclasts reveal a mid-temperature component that typically unblocks prior to 655\textdegree C and a high-temperature component that typically unblocks between 655\textdegree C and 687\textdegree C. These components are present as varying fractions of the overall remanence as seen in the three individual clasts shown here on vector component plots and measurement-level equal area plots in tilt-corrected coordinates (developed using PmagPy; \citealp{Tauxe2016a}). The direction of the mid-temperature component is shown as green arrows on the vector component plots and green circles on the equal area plots while the high-temperature component is shown with purple symbols. The mid-temperature component has a similar direction among the clasts as can be seen on the component directions equal area plots (tilt-corrected mean declination: 252.4, inclination: -12.5, $\alpha_{95}$: 6.6). In contrast, the high-temperature component directions are dispersed.}}
\label{fig:intraclast_pmag}
\end{figure}
\begin{figure}[!ht]
\noindent\includegraphics[width=\textwidth]{figures/BRa_pmag.pdf}
\caption{\small{Paleomagnetic data from in-place siltstone beds reveal a mid-temperature component that typically unblocks prior to 655\textdegree C and a high-temperature component that typically unblocks between 655\textdegree C and 689\textdegree C. The direction of the mid-temperature component is shown as purple arrows on the vector component plots and purple circles on the equal area plots while the high-temperature component is shown with green symbols. The mid-temperature component is similar to that removed from the clasts. The high-temperature component is well-grouped and has a distinct direction from the mid-temperature component with lower inclination.}}
\label{fig:insitu_pmag}
\end{figure}
The clasts typically reveal two distinct magnetization directions. One direction was similar throughout the intraclasts and was typically removed between 200\textdegree C and 650\textdegree C (Fig. \ref{fig:intraclast_pmag}). This mid-temperature component is continuously unblocked between these temperatures with no or minimal downward inflection at $\sim$580\textdegree C that would indicate appreciable remanence associated with magnetite (Fig. \ref{fig:intraclast_pmag}). This component is directionally well-grouped indicating that it was acquired following deposition of the clasts (Fig. \ref{fig:intraclast_pmag}). The other component trends towards the origin and is removed by thermal demagnetization steps at the highest levels such that it typically can be fit by a least-squares line between 665\textdegree C and 688\textdegree C. The relative magnitude of the components varies between intraclasts (Fig. \ref{fig:intraclast_pmag}). While the high-temperature component can sometimes be fit as a line with a lower temperature bound of 660\textdegree C (BRIC.31a in Fig. \ref{fig:intraclast_pmag}), due to overlapping unblocking temperatures between the mid-temperature and high-temperature components, the lower bounds of the high-temperature fits sometimes need to be as high as 680\textdegree C (BRIC.41a in Fig. \ref{fig:intraclast_pmag}). Note that while the N\'eel temperature of hematite is sometimes given as 675\textdegree C in the paleomagnetic literature, experimental data often show the N\'eel temperature to be as high as 690\textdegree C \citep{Ozdemir2006a}. In the data from the clasts, there is typically a significant directional change in the specimen magnetization between the mid-temperature component and the high-temperature component (Fig. \ref{fig:intraclast_pmag}). As a result, 29 of the 39 analyzed intraclast specimens could be fit with distinct mid-temperature and high-temperature least-squares lines. An additional five specimens were undergoing directional change through the highest thermal demagnetization steps indicative of the presence of a distinct high-temperature component, but this component was not well-expressed enough to be fit. Five of the specimens showed no directional change and could be fit with a single mid-to-high-temperature component that is grouped with the mid-temperature component directions. In contrast to the well-grouped mid-temperature component, the high-temperature component directions are dispersed, indicating that the component was acquired prior to erosion and redeposition of the clasts. The high-temperature component directions are more dispersed in declination than in inclination, leading to a distribution that is not randomly dispersed on a sphere. Given that the clasts are tabular, were liberated along their depositional lamination, and subsequently landed roughly bedding-parallel, it is to be expected that the rotations were largely around a vertical axis -- preferentially changing declination.
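The component fits described above are standard least-squares line fits to the stepwise demagnetization data. As an illustration of that calculation only (the values below are synthetic placeholders rather than measured data, and this is not the fitting code used for this study), a minimal free line fit over a chosen temperature window can be sketched as:
\begin{verbatim}
import numpy as np

# synthetic Cartesian measurements (x = north, y = east, z = down) at
# successive thermal demagnetization steps; placeholder values only
steps = np.array([600, 620, 640, 655, 665, 675, 680, 685, 688])  # deg C
rng = np.random.default_rng(0)
true_dir = np.array([0.6, -0.7, 0.4]) / np.linalg.norm([0.6, -0.7, 0.4])
xyz = np.outer(np.linspace(5.0, 0.5, steps.size), true_dir)
xyz += rng.normal(scale=0.05, size=xyz.shape)

# free (non-anchored) least-squares line over the 665-688 deg C window:
# the principal direction of the centered measurements
window = (steps >= 665) & (steps <= 688)
pts = xyz[window]
centered = pts - pts.mean(axis=0)
direction = np.linalg.svd(centered, full_matrices=False)[2][0]
# orient the fitted vector from the first towards the last included step
if np.dot(direction, pts[0] - pts[-1]) < 0:
    direction = -direction

dec = np.degrees(np.arctan2(direction[1], direction[0])) % 360.0
inc = np.degrees(np.arcsin(direction[2]))
print(dec, inc)
\end{verbatim}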
In-place siltstone and very fine sandstone, representing the same lithologies that were liberated into intraclasts, were collected and analyzed following the same thermal demagnetization protocol. These samples were collected from a section stratigraphically below the intraclast-bearing outcrop along a small tributary creek to the Bad River (section BRa; latitude: 46.3852\textdegree N, longitude: 90.6337\textdegree W). These samples are between 50 and 70 meters above the base of the Freda Formation. The thermally demagnetized specimens display very similar demagnetization behavior to the intraclasts with a mid-temperature component that progressively unblocks up to $\sim$650 \textdegree C and then transitions to a slightly different direction that unblocks up to $\sim$689 \textdegree C. The mid-temperature component (tilt-corrected Dec = 256.2, Inc = -12.5, $\alpha_{95}$ = 3.6) shares a common mean with the mid-temperature component isolated from the intraclast samples (tilt-corrected Dec = 252.4, Inc = -12.5, $\alpha_{95}$ = 6.6). The high-temperature component directions are well-grouped, in contrast to their dispersion between the intraclasts, and have a direction that is similar to, but distinct from, the mid-temperature component (i.e., the two means can be distinguished at the 95$\%$ confidence level). The inclination of the high-temperature component mean is very close to horizontal (Fig. \ref{fig:insitu_pmag}; tilt-corrected Dec = 247.5, Inc = 3.0, $\alpha_{95}$ = 5.4).
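The mean directions and $\alpha_{95}$ values reported here are Fisher statistics of the fitted component directions. A minimal sketch of this calculation is given below; the directions in the example are placeholders rather than the measured data.
\begin{verbatim}
import numpy as np

def fisher_mean(decs_deg, incs_deg):
    """Fisher mean direction, precision parameter k and alpha_95 (degrees)."""
    d = np.radians(np.asarray(decs_deg))
    i = np.radians(np.asarray(incs_deg))
    # unit vectors (x = north, y = east, z = down)
    xyz = np.column_stack([np.cos(i) * np.cos(d),
                           np.cos(i) * np.sin(d),
                           np.sin(i)])
    n = len(xyz)
    r_vec = xyz.sum(axis=0)
    r = np.linalg.norm(r_vec)
    mean = r_vec / r
    dec = np.degrees(np.arctan2(mean[1], mean[0])) % 360.0
    inc = np.degrees(np.arcsin(mean[2]))
    k = (n - 1) / (n - r)
    a95 = np.degrees(np.arccos(1 - (n - r) / r *
                               ((1 / 0.05) ** (1 / (n - 1)) - 1)))
    return dec, inc, k, a95

# placeholder directions (degrees), not the measured data
decs = [250.0, 254.0, 248.0, 257.0, 251.0, 253.0]
incs = [-10.0, -14.0, -12.0, -11.0, -15.0, -13.0]
print(fisher_mean(decs, incs))
\end{verbatim}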
The paleomagnetic results on the in-place siltstone beds are similar to those obtained by \citet{Henry1977a} who studied the basal Freda Formation in the Presque Isle Syncline and White Pine Basin of northern Michigan. As in our results, their data revealed a distinct mid-temperature component with a shallow upwards inclination and a high-temperature component with a near horizontal inclination. A progression from horizontal to upward inclinations is consistent with the expected change through time if the movement along the Keweenawan Track persisted past the end of rift magmatism \citep{Fairchild2017a, Swanson-Hysell2018a} and is consistent with a later age of remanence acquisition for the mid-temperature component. While the inclinations of the mid- and high-temperature components are indistinguishable between our data and that of \citet{Henry1977a}, the declinations are different such that their declinations are 24\textdegree$\;$more northerly than those obtained for BRa. The origin of this difference in declination is unclear and could be associated with complications in the tilt-correction such as non-cylindrical folding or multiple tilting episodes (inclination is relative to bedding tilt so would not be affected by such processes). It is premature to recalculate a paleomagnetic pole for the Freda Formation. More analyses are needed from the Freda Formation: 1) to evaluate this declination discrepancy; 2) to develop enough directional data to robustly apply the elongation/inclination method for inclination flattening correction ($>$100 to 150 samples necessary per \citealp{Tauxe2008a}) to increase the quality of the paleomagnetic pole for the purposes of paleogeographic reconstruction; and 3) to expand the data to span the stratigraphic thickness of the formation as current results are limited to the basal portion of the formation.
\section*{Petrographic Results}
Petrography on the intraclasts reveals two distinct populations of hematite (Fig. \ref{fig:intraclast_images}). One population is fine-grained pigmentary hematite present dominantly within the clay-sized matrix and rimming detrital silt-sized grains. The zones of pigmentary hematite within the matrix remain cloudy to high magnification indicating that the grains are submicron in size. The other population of hematite has similar sizes and shapes to other detrital silt-sized grains -- typically ranging from 2 to 50 $\mu$m in diameter. These hematite grains were identified through reflected light microscopy with their mineralogy supported by energy-dispersive x-ray spectroscopy conducted on a scanning electron microscope (SEM) at Lawrence Berkeley National Laboratory and confirmed by electron backscatter diffraction on an SEM at UC Berkeley (see Supporting Information).
\section*{Rock Magnetic and M{\"o}ssbauer Spectroscopy Results}
The paleomagnetic data reveal that there are two distinct populations of remanence-carrying magnetic grains within the intraclasts and in-place siltstone with differing unblocking temperature ranges: one unblocking over a broad temperature range from 100 \textdegree C up to 650\textdegree C or higher and the other dominantly unblocking between 665\textdegree C and 690\textdegree C. Rock magnetic experiments and M{\"o}ssbauer spectroscopy can elucidate additional properties associated with the ferromagnetic mineralogy within the siltstone that is carrying the remanence as well as portions of the magnetic mineralogy that are not stable at room temperature.
\begin{figure}[!ht]
\noindent\includegraphics[width=0.9\textwidth]{figures/coercivity_spectra.pdf}
\caption{\small{Coercivity spectra developed from backfield demagnetization curves on BRIC intraclast specimens, where the points are the data. The data were modeled using log-Gaussian functions implemented with the Max UnMix software package \citep{Maxbauer2016a} with the yellow curve corresponding to the model. The coercivity spectra can be well-explained with two distributions: the higher coercivity distribution in purple and the lower coercivity distribution in green.}}
\label{fig:coercivity_spectra}
\end{figure}
Backfield demagnetization curves, where the specimens were saturated in a 1.8 T field followed by a progressively larger field in the opposite direction, were developed on a Princeton Measurements vibrating sample magnetometer at the Institute for Rock Magnetism. Coercivity spectra, the derivative of the backfield curves, were modeled using the Max UnMix software package (\citealp{Maxbauer2016a}; Fig. \ref{fig:coercivity_spectra}). These spectra are well fit with two log-normal distributions associated with two populations of grains. One population has a median coercivity of $\sim$300 mT and a distribution that extends from the lowest to the highest coercivities (Fig. \ref{fig:coercivity_spectra}). The other population has a higher median coercivity of $\sim$700 mT and a coercivity distribution that is limited to the high coercivity range (Fig. \ref{fig:coercivity_spectra}). The median coercivity of the high-coercivity phase corresponds well with the coercivity of single-domain hematite in the 300 nm to 10 $\mu$m range \citep{Ozdemir2014a}. The spread in coercivities associated with the lower coercivity population is consistent with these values down to those associated with finer-grained hematite; the coercivity of hematite becomes progressively lower for grains that are smaller than 300 nm in diameter \citep{Ozdemir2014a}.
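The unmixing shown in Fig. \ref{fig:coercivity_spectra} models the coercivity spectrum as a sum of log-Gaussian components. The sketch below illustrates the approach generically with synthetic data; it is not the Max UnMix code, and the component parameters are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian(logB, amp, mean, sigma):
    """One log-Gaussian coercivity component (logB = log10 field in mT)."""
    return amp * np.exp(-0.5 * ((logB - mean) / sigma) ** 2)

def two_components(logB, a1, m1, s1, a2, m2, s2):
    return log_gaussian(logB, a1, m1, s1) + log_gaussian(logB, a2, m2, s2)

# synthetic coercivity spectrum (dM/dlogB of a backfield curve); the
# parameters loosely mimic a broad ~300 mT and a narrower ~700 mT population
logB = np.linspace(0.5, 3.3, 120)            # 3 mT to 2000 mT
truth = two_components(logB, 1.0, np.log10(300), 0.45,
                       0.8, np.log10(700), 0.15)
rng = np.random.default_rng(1)
spectrum = truth + rng.normal(scale=0.02, size=logB.size)

# fit two log-Gaussian distributions to the spectrum
p0 = [1.0, 2.3, 0.4, 0.8, 2.9, 0.2]          # initial guesses
popt, _ = curve_fit(two_components, logB, spectrum, p0=p0)
for amp, mean, sigma in popt.reshape(2, 3):
    print(f"median coercivity ~{10**mean:.0f} mT, dispersion {sigma:.2f}")
\end{verbatim}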
\begin{figure}[!ht]
\noindent\includegraphics[width=\textwidth]{figures/low_temp_ac}
\caption{\small{Frequency and temperature dependence of magnetic susceptibility ($\chi$) for BRIC-20 and BRIC-26 from 10 to 300 K. The left panels show the in-phase susceptibility which is dominated by the paramagnetic component of the samples. In the middle panels, this paramagnetic component has been removed by creating a Curie-Weiss law model and subtracting it from the in-phase susceptibility data. Strong frequency dependence over the whole temperature range suggests a broad size distribution of nanoparticles (e.g. \citealp{Jackson2012a}). The right panels show the out-of-phase (quadrature) susceptibility as well as the $\pi/2$ relationship of $\chi''[\mathrm{viscosity}] = -(\pi/2)(\partial \chi'/\partial \ln f)$ where $f$ is frequency. These data document the presence of viscous superparamagnetic particles.}}
\label{fig:low_temp_ac}
\end{figure}
The frequency and temperature dependence of magnetic susceptibility was determined through experiments conducted on a Magnetic Properties Measurement System (MPMS) at the Institute for Rock Magnetism. The dependence of susceptibility on both temperature and frequency provides a sensitive and diagnostic means of characterizing magnetic nanoparticles \citep{Worm1999a}. We observe a frequency dependence of susceptibility that persists from 300 K down to 10 K (Fig. \ref{fig:low_temp_ac}). This frequency dependence can be attributed to viscous superparamagnetic grains whose magnetic viscosity has relaxation times comparable to the AC field reversal interval \citep{Worm1998a, Worm1999a}. This interpretation is supported by the frequency dependence of the in-phase susceptibility and the shared peak between the out-of-phase (quadrature) susceptibility and the $\pi/2$ law (\citealp{Mullins1973a}; Fig. \ref{fig:low_temp_ac}). That the frequency dependence extends across the full low-temperature range indicates a broad blocking temperature spectrum of viscous superparamagnetic grains associated with a wide size distribution of the nanoparticles.
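The $\pi/2$ law comparison shown in Fig. \ref{fig:low_temp_ac} predicts the out-of-phase susceptibility from the measured frequency dependence of the in-phase susceptibility. A minimal sketch of that prediction, with synthetic $\chi'(f)$ values standing in for the MPMS measurements, is:
\begin{verbatim}
import numpy as np

# in-phase susceptibility at several AC frequencies (synthetic stand-in
# values with a weak log-linear frequency dependence, as expected for
# viscous superparamagnetic grains)
freqs = np.array([1.0, 3.2, 10.0, 32.0, 100.0, 320.0, 1000.0])  # Hz
chi_prime = 5.0e-7 - 1.5e-8 * np.log(freqs)                     # arbitrary scale

# pi/2 law: chi'' ~ -(pi/2) * d(chi')/d(ln f)
dchi_dlnf = np.gradient(chi_prime, np.log(freqs))
chi_double_prime_pred = -(np.pi / 2.0) * dchi_dlnf

# compare this prediction against the measured quadrature susceptibility
print(chi_double_prime_pred)
\end{verbatim}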
Hysteresis loops were measured from 5 T to -5 T on the MPMS at varying temperatures from room temperature (300 K) down to 50 K (Fig. \ref{fig:low_temp_loops}). These low-temperature hysteresis data reveal a progressive increase in remanent magnetization ($M_r$) as temperature decreases (Fig. \ref{fig:low_temp_loops}) leading to $M_r$ values that are between 9 and 13$\%$ higher at 50 K than at 300 K. This increase in $M_r$ at low temperatures is likely associated with superparamagnetic grains transitioning to behave as stable single domain grains at lower temperature. There is also an increase in saturation magnetization ($M_s$) as temperature decreases (Fig. \ref{fig:low_temp_loops}). However, the hysteresis loops require subtraction of a large paramagnetic component that becomes progressively non-linear at low temperatures leading to the possibility that this increase in $M_s$ is an artefact rather than an aspect of the ferromagnetic mineralogy of the samples. This increase in $M_r$ is insensitive to this correction and is therefore a more robust feature of the data.
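The paramagnetic correction applied to the measured loops follows \citet{Jackson2010a}. As a simpler illustration of the underlying idea only (not the non-linear correction actually used, and with a synthetic loop standing in for the measurements), a linear high-field slope correction can be sketched as:
\begin{verbatim}
import numpy as np

# synthetic "measurement": a saturating ferromagnetic signal plus a linear
# paramagnetic contribution (values are illustrative only)
B = np.linspace(-5.0, 5.0, 401)                # applied field, tesla
ferro = 1.0e-3 * np.tanh(B / 0.3)              # saturates well below 5 T
para = 4.0e-4 * B                              # linear paramagnetic slope
M = ferro + para

# estimate the paramagnetic slope from the positive high-field segment,
# where the ferromagnetic contribution is effectively constant; a measured
# loop has two branches, which would each be treated this way
high = B > 4.0
slope, intercept = np.polyfit(B[high], M[high], 1)

# subtract the linear contribution to recover the ferromagnetic loop
M_corrected = M - slope * B
print(f"estimated paramagnetic slope: {slope:.2e} (true value 4.0e-04)")
\end{verbatim}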
\begin{figure}[!ht]
\noindent\includegraphics[width=\textwidth]{figures/low_temp_loops.pdf}
\caption{\small{Hysteresis loops measured from room temperature (300 K) down to 50 K for siltstone intraclast samples. The left panels are the raw uncorrected loops, the center panels remove the paramagnetic contribution using the methods of \citet{Jackson2010a} and the right panels zoom-in on applied fields close to 0 to illustrate the progressive increase in remanent magnetization ($M_r$) as temperature decreases.}}
\label{fig:low_temp_loops}
\end{figure}
M{\"o}ssbauer spectra were collected at the Institute for Rock Magnetism on bulk powders of intraclast samples at 300 K and 18 K (Fig. \ref{fig:mossbauer}). The main feature of these spectra is a magnetically split sextet with a magnetic hyperfine field of about 51.6 T -- diagnostic of hematite \citep{Dyar2006a}. Modeling of the spectra reveal that the majority of iron within the samples resides within hematite (58\% at 300 K and 60\% at 18 K). Due to the high-frequency of M{\"o}ssbauer spectroscopy, grains that are observed to be superparamagnetic at room temperature in typical rock magnetic experiments behave as stable ordered grains in M{\"o}ssbauer spectra. Nevertheless, there is a slight increase in the magnitude of the hematite sextet relative to the doublets in the 18 K spectrum, leading to the slight increase in modeled hematite content, likely associated with ordering of the smallest nanoparticles of hematite (\citealp{Bodker2000a}; Fig. \ref{fig:mossbauer}).
\begin{figure}[!ht]
\noindent\includegraphics[width=\textwidth]{figures/mossbauer.pdf}
\caption{\small{M{\"o}ssbauer spectra developed at room temperature (300 K) and low temperature (18 K) on a powder of intraclast sample BRIC.22. Data are shown as dots with the black line representing a model fit. The sextet portion of the fit is shown with the red and yellow curves (representing the spread in hyperfine splitting resulting from a natural population) and is diagnostic of hematite. The central peak is comprised of doublets dominated by Fe$^{3+}$. The height of the hematite sextet relative to the central peak is slightly higher in the low temperature experiment likely due to ordering of the smallest hematite nanoparticles.}}
\label{fig:mossbauer}
\end{figure}
\section*{DISCUSSION}
Single-domain hematite grains have high coercivities ($>$150 mT; \citealp{Ozdemir2014a}) and high unblocking temperatures. As a result, populations of hematite within rocks are stable on long timescales, resistant to overprinting, and therefore attractive for paleomagnetic study. In contrast to magnetite, hematite grains retain stable single-domain behavior in crystals $>$1 $\mu$m with the threshold to multidomain behavior occurring when grain diameters exceed $\sim$100 $\mu$m \citep{Kletetschka2002a, Ozdemir2014a}. Hematite nanoparticles with diameters $<$30 nm have superparamagnetic behavior wherein thermal fluctuation energy overwhelms the ability of the grain to retain a stable magnetization at Earth surface temperatures \citep{Ozdemir2014a}. Hematite grains become progressively less influenced by thermal fluctuations as they reach grain sizes of a few hundred nanometers at which point they are stable up to temperatures approaching the N\'eel temperature of $\sim$685\textdegree C \citep{Swanson-Hysell2011a, Ozdemir2014a}. As a result, there is a strong relationship between grain volume and unblocking temperature that can be utilized to estimate grain size following N\'eel relaxation theory \citep{Neel1949a, Swanson-Hysell2011a}. A hematite population that is progressively unblocking at thermal demagnetization steps well below the N\'eel transition temperature, such as the mid-temperature component of the intraclasts, is comprised of grains within the $\sim$30 to $\sim$400 nm size range (Fig. \ref{fig:summary}). This fine-grain size is consistent with the pigmentary phase observed within the intraclasts (Fig. \ref{fig:intraclast_images}).
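For reference, the single-grain relaxation time in N\'eel theory is
\begin{equation}
\tau = \frac{1}{C}\exp\left(\frac{K_{u}(T)\,V}{k_{B}T}\right),
\end{equation}
where $C$ is the attempt frequency (taken to be 10$^{10}$ s$^{-1}$ in Fig. \ref{fig:summary}), $K_{u}(T)$ is the temperature-dependent magnetic anisotropy constant of hematite, $V$ is grain volume and $k_{B}$ is the Boltzmann constant. A grain unblocks at the temperature where $\tau$ falls to the laboratory timescale (taken to be 5 minutes in Fig. \ref{fig:summary}), such that larger volumes unblock at progressively higher temperatures.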
Given the directional consistency of the mid-temperature component among the intraclasts (Fig. \ref{fig:intraclast_pmag}), this component must have dominantly formed as a chemical remanent magnetization after the intraclasts were redeposited in the channel. Chemical remanent magnetization acquisition by pigmentary hematite would have occurred as hematite grains grew to sizes above the superparamagnetic to stable single-domain transition resulting in the wide range of unblocking temperatures that is observed. The frequency dependence of susceptibility (Fig. \ref{fig:low_temp_ac}) and increase in remanent magnetization following saturation at low-temperature (Fig. \ref{fig:low_temp_loops}) both indicate the presence of a population of superparamagnetic grains. The coercivity spectra are consistent with a population of hematite that has a wide coercivity range extending from low coercivities up to high coercivities (the green component in the unmixing models of Fig. \ref{fig:coercivity_spectra}). Taken together and compared to the hematite coercivity compilation of \cite{Ozdemir2014a}, these data indicate that a population of authigenic hematite nanoparticles that spans from $<$30 nm to $>$300 nm is responsible for the post-depositional chemical remanent magnetization (Fig. \ref{fig:summary}).
In contrast, the sharp unblocking temperature close to the N\'eel temperature of the high-temperature component indicates that it is dominantly held by hematite grains that are $>$400 nm (based on N\'eel relaxation theory; Fig. \ref{fig:summary}) such as the silt-sized hematite grains observed petrographically (Fig. \ref{fig:intraclast_images}). The high-coercivity population within the coercivity spectra (purple curves in Fig. \ref{fig:coercivity_spectra}) is consistent with this grain-size interpretation (Fig. \ref{fig:summary}). The high-temperature remanence component held by these grains was rotated along with the clasts indicating that it is primary and was acquired prior to the redeposition of the cohesive silt clasts. That this component is held by larger grain sizes supports it being a detrital remanent magnetization, rather than a chemical remanent magnetization that formed very early prior to clast erosion.
Detailed rock magnetic data through the Nonesuch Formation, which immediately underlies the Freda Formation, reveal that the lacustrine lithologies preserve a depth-dependent environmental magnetic signature where the deep water facies have no hematite in contrast to hematite-rich shallow water facies in sediments of similar grain size \citep{Slotznick2018b}. This difference was interpreted by \citet{Slotznick2018b} as being due to microbial reductive dissolution of iron oxides at low oxygen levels in the deepest part of the lake and oxidation of the detrital input in the shallowest part of the lake. That this depth-dependent relationship is consistent over significant length scales across the Midcontinent Rift basin indicates that hematite formation in those sediments, and likely the Freda Formation as well, is associated with redox conditions at the time of deposition rather than the subsequent migration of fluids. Oxidation of iron in surface environments often begins with the formation of fine-grained poorly crystalline ferrihydrite, which transforms to stable crystalline hematite at neutral pH on geologically short timescales \citep{Cudennec2006a, Jiang2018a}. The broad unblocking temperatures we observe for the chemical remanent magnetization in the Freda intraclasts are similar to those in hematite populations produced through experimental ferrihydrite to hematite conversion \citep{Jiang2015a}. The direction of this chemical remanence is distinct from, but similar to, that of the detrital remanence with a change in both declination and inclination. This result suggests that the chemical remanence was acquired as plate motion continued at the end of the Keweenawan Track \citep{Swanson-Hysell2018a}. This chemical remanent magnetization direction is well-grouped (Fig. \ref{fig:insitu_pmag}) suggesting that the hematite that carries the remanence formed at a similar time rather than over a protracted period (which could lead to a streaked distribution; \citealp{Beck2003b}). One possibility is that the conversion from a phase associated with the ferrihydrite to hematite transition that formed in the sediments in the near-surface was thermally activated by the geothermal gradient as the sediments were buried. More than 4 km of Freda Formation sediments were deposited atop those investigated in this study within the thermally subsiding basin, which would have led to a protracted interval at temperatures $>$100 \textdegree C following deposition. Such thermal activation of the transition of ferrihydrite, and/or intermediary phases such as hydromaghemite, to hematite could explain the association of chemical remanence directions with burial within the Midcontinent Rift. This mechanism could also explain the association of authigenic hematite remanence with other types of tectonothermal events such as the well-documented syn-folding chemical remanence of the Mauch Chunk Formation that is associated with the Alleghanian orogeny \citep{Kent1985b, Opdyke2004a}.
\begin{figure}[!ht]
\noindent\includegraphics[width=\textwidth]{figures/component_summary.pdf}
\caption{\small{Left: Calculated unblocking temperatures using N\'eel thermal relaxation theory of idealized spherical hematite grains using a thermal fluctuation rate of 10$^{10}$ s$^{-1}$ and a relaxation time of 5 minutes for comparison to thermal demagnetization data (modified from \citealp{Swanson-Hysell2011a}). The unblocking temperatures of the mid-temperature chemical component (green) and the high-temperature detrital component (purple) are shown and can be used to infer grain size. The N\'eel transition temperature (T$_{\mathrm{N}}$) is shown with a dashed line. Right: Compilation of coercivity data from hematite as a function of grain diameter from \cite{Ozdemir2014a}. The higher coercivity population from Fig. \ref{fig:coercivity_spectra} corresponds to larger grain sizes than the lower coercivity component associated with the chemical remanence.}}
\label{fig:summary}
\end{figure}
The differential unblocking temperature spectra of the two components within the Freda intraclasts provide strong support for the argument of \citet{Jiang2015a} that chemical and detrital remanent magnetization can be distinguished due to detrital remanence unblocking at the highest temperatures. However, the Freda intraclast data also show that while the detrital remanent magnetization can be well-isolated at temperatures as low as 650\textdegree C (specimen BRIC.31a in Fig. \ref{fig:intraclast_pmag}), the chemical remanent magnetization thermal unblocking spectra can overlap with that of the detrital remanence and extend up to temperatures closer to the N\'eel temperature (specimen BRIC.41a in Fig. \ref{fig:intraclast_pmag}). Therefore, to isolate primary remanence in red beds, best practice should be to proceed with very high resolution thermal demagnetization steps above 600\textdegree C, and particularly above 650\textdegree C. Characteristic remanent magnetization directions associated with hematite that are fit to components that span a wide range of unblocking temperatures including those lower than $\sim$650 \textdegree C are likely convolving a pigmentary chemical remanence and a detrital remanence held by larger grains.
A complication with detrital remanent magnetization is its associated inclination shallowing -- an issue that has been explored in depth within the literature as pertains to hematite (e.g. \citealp{Tauxe1984a, Bilardello2010a}). The presence of both pigmentary and detrital hematite complicates efforts that seek to use bulk magnetic fabrics to correct for these effects. This reality necessitates the use of careful methodologies that target the fabric of only the highest-unblocking-temperature hematite (e.g. \citealp{Bilardello2015a}) or that take other approaches such as analyzing the elongation of the directional distribution and correcting to values taken from secular variation models (the elongation-inclination method of \citealp{Tauxe2004a}).
Hematite-bearing sedimentary rocks have varied characteristics, which has led to the argument that it is difficult to apply results from red beds in one formation to those from another. However, the rock magnetic properties of the hematite grain size distributions that emerge from these data are associated with their mode of incorporation into the sediment. Hydrodynamic sorting associated with the delivery of detrital hematite will lead to a narrower and coarser size distribution of grains than that of authigenic pigmentary hematite growth. Authigenic growth can lead to a distribution of grains that span from sub-30 nm superparamagnetic grains up to stable single-domain grains $>$300 nm in diameter. Such an authigenic population has distinct rock magnetic characteristics including a very broad coercivity distribution and viscous superparamagnetic grains that can be detected through low-temperature magnetometry.
Overall, the intraclast data, combined with those of the in-place siltstone, reveal that directional change at the highest unblocking temperatures provides an effective means to discriminate primary and secondary magnetizations within siltstones of the Freda Formation. The isolation of remanence carried by primary detrital hematite in $>$1 billion-year-old siltstones lends confidence to magnetostratigraphic records and paleogeographic interpretations that are based on interpretations of primary magnetization isolated from high-unblocking-temperature hematite in ancient red beds.
%Text here ===>>>
%%
% Numbered lines in equations:
% To add line numbers to lines in equations,
% \begin{linenomath*}
% \begin{equation}
% \end{equation}
% \end{linenomath*}
%% Enter Figures and Tables near as possible to where they are first mentioned:
%
% DO NOT USE \psfrag or \subfigure commands.
%
% Figure captions go below the figure.
% Table titles go above tables; other caption information
% should be placed in last line of the table, using
% \multicolumn2l{$^a$ This is a table note.}
%
%----------------
% EXAMPLE FIGURE
%
% \begin{figure}[h]
% \centering
% when using pdflatex, use pdf file:
% \includegraphics[natwidth=800px,natheight=600px]{figsamp.pdf}
%
% when using dvips, use .eps file:
% \includegraphics[natwidth=800px,natheight=600px]{figsamp.eps}
%
% \caption{Short caption}
% \label{figone}
% \end{figure}
%
% We recommend that you provide the native width and height (natwidth, natheight) of your figures.
% Specifying native dimensions ensures that your figures are properly scaled
%
%
% ---------------
% EXAMPLE TABLE
%
% \begin{table}
% \caption{Time of the Transition Between Phase 1 and Phase 2$^{a}$}
% \centering
% \begin{tabular}{l c}
% \hline
% Run & Time (min) \\
% \hline
% $l1$ & 260 \\
% $l2$ & 300 \\
% $l3$ & 340 \\
% $h1$ & 270 \\
% $h2$ & 250 \\
% $h3$ & 380 \\
% $r1$ & 370 \\
% $r2$ & 390 \\
% \hline
% \multicolumn{2}{l}{$^{a}$Footnote text here.}
% \end{tabular}
% \end{table}
%% SIDEWAYS FIGURE and TABLE
% AGU prefers the use of {sidewaystable} over {landscapetable} as it causes fewer problems.
%
% \begin{sidewaysfigure}
% \includegraphics[width=20pc]{figsamp}
% \caption{caption here}
% \label{newfig}
% \end{sidewaysfigure}
%
% \begin{sidewaystable}
% \caption{Caption here}
% \label{tab:signif_gap_clos}
% \begin{tabular}{ccc}
% one&two&three\\
% four&five&six
% \end{tabular}
% \end{sidewaystable}
%% If using numbered lines, please surround equations with \begin{linenomath*}...\end{linenomath*}
%\begin{linenomath*}
%\begin{equation}
%y|{f} \sim g(m, \sigma),
%\end{equation}
%\end{linenomath*}
%%% End of body of article
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Optional Appendix goes here
%
% The \appendix command resets counters and redefines section heads
%
% After typing \appendix
%
%\section{Here Is Appendix Title}
% will show
% A: Here Is Appendix Title
%
%\appendix
%\section{Here is a sample appendix}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Optional Glossary, Notation or Acronym section goes here:
%
%%%%%%%%%%%%%%
% Glossary is only allowed in Reviews of Geophysics
% \begin{glossary}
% \term{Term}
% Term Definition here
% \term{Term}
% Term Definition here
% \term{Term}
% Term Definition here
% \end{glossary}
%
%%%%%%%%%%%%%%
% Acronyms
% \begin{acronyms}
% \acro{Acronym}
% Definition here
% \acro{EMOS}
% Ensemble model output statistics
% \acro{ECMWF}
% Centre for Medium-Range Weather Forecasts
% \end{acronyms}
%
%%%%%%%%%%%%%%
% Notation
% \begin{notation}
% \notation{$a+b$} Notation Definition here
% \notation{$e=mc^2$}
% Equation in German-born physicist Albert Einstein's theory of special
% relativity that showed that the increased relativistic mass ($m$) of a
% body comes from the energy of motion of the body—that is, its kinetic
% energy ($E$)—divided by the speed of light squared ($c^2$).
% \end{notation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% ACKNOWLEDGMENTS
%
% The acknowledgments must list:
%
% >>>> A statement that indicates to the reader where the data
% supporting the conclusions can be obtained (for example, in the
% references, tables, supporting information, and other databases).
%
% All funding sources related to this work from all authors
%
% Any real or perceived financial conflicts of interests for any
% author
%
% Other affiliations for any author that may be perceived as
% having a conflict of interest with respect to the results of this
% paper.
%
%
% It is also the appropriate place to thank colleagues and other contributors.
% AGU does not normally allow dedications.
\acknowledgments
This research was supported by the Esper S. Larsen, Jr. Research Fund and the National Science Foundation through grant EAR-1419894. Rock magnetic experiments were conducted on a visiting fellowship at the Institute for Rock Magnetism which is made possible through the Instrumentation and Facilities program of the National Science Foundation, Earth Science Division and funding from the University of Minnesota. SPS was supported by the Miller Institute for Basic Research in Science. The Wisconsin Department of Natural Resources granted a research and collection permit that enabled sampling within Copper Falls State Park. Oliver Abbitt assisted with field work, Taiyi Wang assisted with paleomagnetic analyses and Tim Teague provided technical support for EBSD analyses. Peat Solheid, Mike Jackson, and Dario Bilardello provided technical support at the Institute for Rock Magnetism. Discussions with them, and with Josh Feinberg and Bruce Moskowitz, provided insight on rock magnetic data interpretation. Data associated with this study are available within the MagIC database and a Github repository associated with this work.
%% ------------------------------------------------------------------------ %%
%% References and Citations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% BibTeX is preferred:
%
\bibliography{../../references/allrefs}
%
% don't specify bibliographystyle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Please use ONLY \citet and \citep for reference citations.
% DO NOT use other cite commands (e.g., \cite, \citeyear, \nocite, \citealp, etc.).
%% Example \citet and \citep:
% ...as shown by \citet{Boug10}, \citet{Buiz07}, \citet{Fra10},
% \citet{Ghel00}, and \citet{Leit74}.
% ...as shown by \citep{Boug10}, \citep{Buiz07}, \citep{Fra10},
% \citep{Ghel00, Leit74}.
% ...has been shown \citep [e.g.,][]{Boug10,Buiz07,Fra10}.
\end{document}
More Information and Advice:
%% ------------------------------------------------------------------------ %%
%
% SECTION HEADS
%
%% ------------------------------------------------------------------------ %%
% Capitalize the first letter of each word (except for
% prepositions, conjunctions, and articles that are
% three or fewer letters).
% AGU follows standard outline style; therefore, there cannot be a section 1 without
% a section 2, or a section 2.3.1 without a section 2.3.2.
% Please make sure your section numbers are balanced.
% ---------------
% Level 1 head
%
% Use the \section{} command to identify level 1 heads;
% type the appropriate head wording between the curly
% brackets, as shown below.
%
%An example:
%\section{Level 1 Head: Introduction}
%
% ---------------
% Level 2 head
%
% Use the \subsection{} command to identify level 2 heads.
%An example:
%\subsection{Level 2 Head}
%
% ---------------
% Level 3 head
%
% Use the \subsubsection{} command to identify level 3 heads
%An example:
%\subsubsection{Level 3 Head}
%
%---------------
% Level 4 head
%
% Use the \subsubsubsection{} command to identify level 3 heads
% An example:
%\subsubsubsection{Level 4 Head} An example.
%
%% ------------------------------------------------------------------------ %%
%
% IN-TEXT LISTS
%
%% ------------------------------------------------------------------------ %%
%
% Do not use bulleted lists; enumerated lists are okay.
% \begin{enumerate}
% \item
% \item
% \item
% \end{enumerate}
%
%% ------------------------------------------------------------------------ %%
%
% EQUATIONS
%
%% ------------------------------------------------------------------------ %%
% Single-line equations are centered.
% Equation arrays will appear left-aligned.
Math coded inside display math mode \[ ...\]
will not be numbered, e.g.,:
\[ x^2=y^2 + z^2\]
Math coded inside \begin{equation} and \end{equation} will
be automatically numbered, e.g.,:
\begin{equation}
x^2=y^2 + z^2
\end{equation}
% To create multiline equations, use the
% \begin{eqnarray} and \end{eqnarray} environment
% as demonstrated below.
\begin{eqnarray}
x_{1} & = & (x - x_{0}) \cos \Theta \nonumber \\
&& + (y - y_{0}) \sin \Theta \nonumber \\
y_{1} & = & -(x - x_{0}) \sin \Theta \nonumber \\
&& + (y - y_{0}) \cos \Theta.
\end{eqnarray}
%If you don't want an equation number, use the star form:
%\begin{eqnarray*}...\end{eqnarray*}
% Break each line at a sign of operation
% (+, -, etc.) if possible, with the sign of operation
% on the new line.
% Indent second and subsequent lines to align with
% the first character following the equal sign on the
% first line.
% Use an \hspace{} command to insert horizontal space
% into your equation if necessary. Place an appropriate
% unit of measure between the curly braces, e.g.
% \hspace{1in}; you may have to experiment to achieve
% the correct amount of space.
%% ------------------------------------------------------------------------ %%
%
% EQUATION NUMBERING: COUNTER
%
%% ------------------------------------------------------------------------ %%
% You may change equation numbering by resetting
% the equation counter or by explicitly numbering
% an equation.
% To explicitly number an equation, type \eqnum{}
% (with the desired number between the brackets)
% after the \begin{equation} or \begin{eqnarray}
% command. The \eqnum{} command will affect only
% the equation it appears with; LaTeX will number
% any equations appearing later in the manuscript
% according to the equation counter.
%
% If you have a multiline equation that needs only
% one equation number, use a \nonumber command in
% front of the double backslashes (\\) as shown in
% the multiline equation above.
% If you are using line numbers, remember to surround
% equations with \begin{linenomath*}...\end{linenomath*}
% To add line numbers to lines in equations:
% \begin{linenomath*}
% \begin{equation}
% \end{equation}
% \end{linenomath*}
| {
"alphanum_fraction": 0.7774707439,
"avg_line_length": 91.574695122,
"ext": "tex",
"hexsha": "153fe36f9ea3a258a57a4877ee571abd0b0e5f97",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "74779a4d75747f2c9998a05f962dd90c8ab2662c",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "Swanson-Hysell-Group/2018_Red_Bed_Intraclasts",
"max_forks_repo_path": "Manuscript/Submitted_version/SwansonHysell_etal_intraclast_submitted.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74779a4d75747f2c9998a05f962dd90c8ab2662c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "Swanson-Hysell-Group/2018_Red_Bed_Intraclasts",
"max_issues_repo_path": "Manuscript/Submitted_version/SwansonHysell_etal_intraclast_submitted.tex",
"max_line_length": 3132,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "74779a4d75747f2c9998a05f962dd90c8ab2662c",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "Swanson-Hysell-Group/2018_Red_Bed_Intraclasts",
"max_stars_repo_path": "Manuscript/Submitted_version/SwansonHysell_etal_intraclast_submitted.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 13875,
"size": 60073
} |
\chapter{Tests against confirmed foragers}
\textit{Erratum: please note that these plots come from early stages of the experiment. What is here labeled as "Trips" should be understood as "Gaps". The plots were created before the introduction of the Gaps abstraction.}
The BeesBook dataset contains logs from an experiment in which artificial feeders were built near the hive and bees that visited them were recorded. This created a small set of ground truth data to operate on. For every bee in that set, I calculated the first and the last day she was seen at the feeder, and marked them on a plot that shows the number of Gaps for every day of her life.
Six of these plots are presented below. While the earliest and latest forage lines on some of them seem to correspond to peaks in Gaps quite well, it should be noted that those dates don't necessarily mark the beginning or end of foraging in a bee's life, merely the first and last times that bee was spotted at a feeder.
The overall impression from these plots was that the approach I took could work, but does not yet work correctly in its current form.
\clearpage
\begin{figure}[htbp!]
\centering
\includegraphics[width=1.0\textwidth]{End-sections/Appendix1/forager-trips.png}
% \caption[beedays-schlegel]{}
\label{fig:beedays-schlegel}
\end{figure} | {
"alphanum_fraction": 0.7864661654,
"avg_line_length": 60.4545454545,
"ext": "tex",
"hexsha": "139eb47f4b5c6ae690fd5bcc3dddd830cb813d5f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e0aaa4face587a658d9eae4105be46d84723993a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "janek/bsc-beesbook",
"max_forks_repo_path": "End-sections/Appendix1/appendix1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e0aaa4face587a658d9eae4105be46d84723993a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "janek/bsc-beesbook",
"max_issues_repo_path": "End-sections/Appendix1/appendix1.tex",
"max_line_length": 394,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "e0aaa4face587a658d9eae4105be46d84723993a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "janek/bsc-beesbook",
"max_stars_repo_path": "End-sections/Appendix1/appendix1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-01T12:43:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-01T12:43:09.000Z",
"num_tokens": 319,
"size": 1330
} |
\documentclass[a4paper,11pt]{ltxdoc}
\usepackage[utf8]{inputenc}
\usepackage{geometry}
\usepackage{amsmath}
\usepackage{color}
\usepackage{subfigure}
\usepackage{tabularx}
\usepackage{graphicx}
\usepackage{rotating}
\usepackage{natbib}
\usepackage{listings}
\usepackage{lmodern}
\usepackage{varwidth}
\newsavebox{\fmbox}
\definecolor{mygray}{rgb}{0.9,0.9,0.9}
\lstset{frame=single,breaklines=true,backgroundcolor=\color{mygray}}
\title{Freva -- BDG -- Basic Developer Guide (ALPHA VERSION)}
\begin{document}
\maketitle
\section{Introduction - Welcome to Freva}
In this guide we will explain the basic workflow of developing a plugin for Freva.
\subsection{Requirements for Plugin Developer}
The Central Evaluation System (CES) in MiKlip has different interfaces for an efficient use of data and tools/plug-ins\\
\\
\textbf{0. GENERAL}\\
A standalone tool must have the ability to be used by different users (think of using cdo). The output must land in a (definable) OUTPUTDIR and the temporary files must be in a (definable) CACHE. No data is allowed to be written in the directories of the tool! It should be installed on the MiKlip server with well-defined compiling instructions and/or the other programs needed (cdo, R, ncl, etc.) \\
\textbf{1. VERSIONING}\\
In MiKlip we are using Git for version control and effective development. There is more information on how to deal with git in the developer guide.
It makes sense to have a central repository, and the CES is one "user" using a running version of each tool in plugins4freva. When a developer has a new running version, the admin simply runs "git pull" to obtain it.\\
\textbf{2. PLUG-IN}\\
The plug-in framework brings different tools into the CES in a common way. The standalone tool gets a Python plugin file (".py") which enables the connection to the CES. There is a how2plugin section with examples and tips. Every tool has different requirements, and the plugged-in tools offer a variety of solutions for different tasks. See the ".py" files in the tool section plugins4freva\\
\textbf{3. DATA RETRIEVAL via --databrowser}\\
MiKlip decided to work with the CMOR standard. For a common usage of that standard, INTEGRATION developed the search tool "freva --databrowser". Using it enables developers to offer their user base a variety of experiments and makes sure all upcoming experiments in MiKlip are usable with their tool. When the directory structure of the data changes (which happens more often than you think), the tool developers don't need to take care of it.\\
\textbf{4. OUTPUT NETCDF}\\
Having output in NetCDF rather than in GRIB, SRV, or ASCII helps to understand what happened with the data. Of course this doesn't work with every output and depends on the tool.\\
\textbf{5. DOCUMENTATION}\\
Of course, the usage of the tool will be clear just from the "help" coming with the plugin framework (freva --plugin XXXX --help), but good documentation with examples helps the users to work with it.\\
\textbf{6. PRE-PROCESSING via plugged in tools}\\
Re-inventing the wheel doesn't make sense! If desired, you can use the other tools for pre- or post-processing your evaluation. For example, "pre" with the leadtimeselektor for using --databrowser in an advanced way including the leadtime ordering of your data, and "post" with the movieplotter for plotting your results. Of course this is not necessary for every application. A rough sketch of how the requirements above map onto a plugin is shown below.
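The following sketch is illustrative only: the parameter names ("outputdir", "input"), the external "mytool" command and the "result.nc" file name are made up for the example, while the API pieces (plugin.PluginAbstract, parameters, self.call, self.prepareOutput) are the ones described in the rest of this guide.
\begin{lstlisting}
#Illustrative skeleton only -- "mytool", "outputdir", "input" and
#"result.nc" are made-up names for this example
import os
from evaluation_system.api import plugin, parameters
class MyTool(plugin.PluginAbstract):
    __short_description__ = "Skeleton showing a definable output directory"
    __version__ = (0,0,1)
    __parameters__ = parameters.ParameterDictionary(
        #requirement 0: the user decides where the output lands
        parameters.Directory(name='outputdir', default='$USER_PLOTS_DIR',
                             help='Directory where all results are written'),
        parameters.File(name='input', mandatory=True,
                        help='Input NetCDF file, e.g. found with freva --databrowser'))
    def runTool(self, config_dict=None):
        outputdir = config_dict['outputdir']
        #requirements 2 and 4: call the standalone tool and keep all output
        #in the configured directory (never in the tool directory itself);
        #"result.nc" is whatever the tool is assumed to write there
        self.call('mytool --output %s %s' % (outputdir, config_dict['input']))
        return self.prepareOutput(os.path.join(outputdir, 'result.nc'))
\end{lstlisting}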
\subsection{Simple Examples}
\subsubsection{5 Minutes Hands on}
The only thing you need to do is to write a Python class that inherits from evaluation\_system.api.plugin.PluginAbstract, overrides one method and sets a couple of attributes. \\
This is a minimal working plugin, we will describe more interesting examples later on.\\
\begin{lstlisting}
from evaluation_system.api import plugin, parameters
class MyPlugin(plugin.PluginAbstract):
tool_developer = {'name':'Max Mustermann', 'email':'[email protected]'}
__short_description__ = "MyPlugin short description (just to know what it does)"
__version__ = (0,0,1)
__parameters__ = parameters.ParameterDictionary(parameters.Integer(name='solution', default=42))
def runTool(self, config_dict=None):
print "MyPlugin", config_dict
\end{lstlisting}
The plugin itself doesn't do much; it just prints its name and the configuration provided when it is called.
But you have a fully configurable plugin with a lot of functionality by just inheriting from the abstract class.\\
So how do we test it? Very, very simply... Here are the steps: \\
Activate the system as described in Using the system
\begin{lstlisting}
module load miklip-ces
\end{lstlisting}
Create a directory where the plugin will be created
\begin{lstlisting}
mkdir /tmp/myplugin
\end{lstlisting}
Copy the plugin from above in a file ending in '.py' in the mentioned directory
\begin{lstlisting}
#Use your preferred method of writing the contents to /tmp/myplugin/something.py
$ cat /tmp/myplugin/something.py
from evaluation_system.api import plugin, parameters
class MyPlugin(plugin.PluginAbstract):
__short_description__ = "MyPlugin short description (just to know what it does)"
__version__ = (0,0,1)
__parameters__ = parameters.ParameterDictionary(parameters.Integer(name='solution', default=42))
def runTool(self, config_dict=None):
print "MyPlugin", config_dict
#Set up the environment variable EVALUATION_SYSTEM_PLUGINS=<path>,<package>[:<path>,<package>] (that's a colon-separated list of comma-separated pairs which define the location (path) and package of the plugin).
#for bash
export EVALUATION_SYSTEM_PLUGINS=/tmp/myplugin,something
\end{lstlisting}
Test it
\begin{lstlisting}
$ freva --plugin
[...]
MyPlugin: MyPlugin short description (just to know what it does)
$ freva --plugin myplugin
MyPlugin {'solution': 42}
$ freva --plugin myplugin --help
MyPlugin (v0.0.1): MyPlugin short description (just to know what it does)
Options:
solution (default: 42)
No help available.
$ freva --plugin myplugin solution=777
MyPlugin {'solution': 777}
\end{lstlisting}
That's it!
\subsubsection{NCL simple plot}
Welcome to the first introduction to the evaluation system (and NCL). We'll try to keep things simple enough while showing you around.
\textbf{Target}\\
To create a plugin for the evaluation system that just creates a simple plot of a 2D variable.
\textbf{Procedure}\\
We'll go bottom-up starting from the result we want to achieve and building up the plugin out of it.
\textbf{Requirements}\\
This tutorial requires a bash shell, so if you are using any other shell (or don't know which one you are using), just start bash by issuing this at the prompt:
\begin{lstlisting}
bash
\end{lstlisting}
We'll try to be thorough and assume the user has little knowledge about programming, though some basic shell knowledge is required.
If not, just copy and paste the commands given here; you don't need to understand everything to get through the tutorial, though it will certainly help you locate errors and typos you might have.
\textbf{Notes}\\
The \$ symbol at the beginning of a line is used to mark a command that is given to the bash shell. You don't have to input that character when issuing the same command in your shell. When depicting an interactive shell session it denotes the commands issued to the shell, and the lines which do not start with the \$ character denote the output from the shell. We might skip the character when there's no expected output from the shell (or if it's not useful), so that you may Copy'n'Paste the whole block. We sometimes number the lines when dissecting a program for didactic reasons. Sadly, this breaks the ability to simply Copy'n'Paste the code. We'll try only to do that on complete programs that you might directly download from this page.
\textbf{Tutorial}
\textbf{Setting up the environment}\\
We'll need a file for testing; grab any that is suitable for the task. We can use the evaluation system to find one fast.
\begin{lstlisting}
module load project-ces
freva --databrowser --baseline 1 variable=tas | head -n1
/miklip/integration/data4miklip/model/baseline1/output/MPI-M/MPI-ESM-LR/asORAoERAa/day/atmos/tas/r1i1p1/tas_day_MPI-ESM-LR_asORAoERAa_r1i1p1_19600101-19691231.nc
\end{lstlisting}
Let's store that in a variable for later use:
\begin{lstlisting}
file=$(freva --databrowser --baseline 1 variable=tas | head -n1)
$ echo $file
/miklip/integration/data4miklip/model/baseline1/output/MPI-M/MPI-ESM-LR/asORAoERAa/day/atmos/tas/r1i1p1/tas_day_MPI-ESM-LR_asORAoERAa_r1i1p1_19600101-19691231.nc
$ export file
\end{lstlisting}
The last command (export) tells bash to pass that variable to other programs started from this shell (this is one of many methods of passing info to NCL). Ok now let's see what the file has to offer:
\begin{lstlisting}
$ ncdump -h $file | grep ') ;'
double time(time) ;
double time_bnds(time, bnds) ;
double lat(lat) ;
double lat_bnds(lat, bnds) ;
double lon(lon) ;
double lon_bnds(lon, bnds) ;
float tas(time, lat, lon) ;
\end{lstlisting}
As expected we have one variable, tas, but check the order in which the values are stored (time, lat, lon) as we will need them later.
Now let's start with our bottom-up procedure. We'll start ncl and try to get a plot by using the NCAR tutorial:
\begin{lstlisting}
$ ncl
-bash: ncl: command not found
\end{lstlisting}
That means we need to load the ncl module; this will also be required when creating the plug-in.
\begin{lstlisting}
$ module load ncl
$ ncl
Copyright (C) 1995-2012 - All Rights Reserved
University Corporation for Atmospheric Research
NCAR Command Language Version 6.1.0-beta
The use of this software is governed by a License Agreement.
See http://www.ncl.ucar.edu/ for more details.
ncl 0>
\end{lstlisting}
\textbf{NCL}\\
So we'll use NCL to produce a simple plot of a file. Let's use the NCL tutorial to generate a plot: \texttt{http://www.ncl.ucar.edu/Document/Manuals/Getting\_Started/Examples/gsun02n.shtml}
Now here's the complete session to generate a simple plot out of the information in the tutorial (you may Copy'n'Paste it into the ncl shell):
\begin{lstlisting}
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_code.ncl"
cdf_file = addfile("$NCARG_ROOT/lib/ncarg/data/cdf/contour.cdf","r")
temp = cdf_file->T(0,0,:,:) ; temperature
lat = cdf_file->lat ; latitude
lon = cdf_file->lon ; longitude
xwks = gsn_open_wks("x11","gsun02n") ; Open an X11 workstation
plot = gsn_contour(xwks,temp,False) ; Draw a contour plot.
\end{lstlisting}
(Remember to hit enter on the last line too! If not you'll see just an empty window...)\\
That should have displayed a simple contour plot over some sample data. The ncl shell blocks until you click on the plot (don't ask me why...)\\
That shows everything is set up as expected. Now let's display the file we selected:
\begin{lstlisting}
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_code.ncl"
file_path=getenv("file") ; get the value stored in the exported variable "file"
input_file = addfile(file_path,"r") ; now use that value to load the file
temp = input_file->tas(0,:,:) ; temperature (CMOR name tas - surface temparature)
lat = input_file->lat ; latitude
lon = input_file->lon ; longitude
xwks = gsn_open_wks("x11","gsun02n") ; Open the window with the tile "gsun02n" (same as before)
plot = gsn_contour(xwks,temp,False) ; Draw the contour plot again
\end{lstlisting}
So, the only changes are that in the second line we are retrieving the file name from the environment (remember the "export file" command?), and in the 4th we retrieve a variable named tas instead of T that has only 3 dimensions (2D + time) (remember the "ncdump -h" command?). So "tas(0,:,:)" means selecting all lat/lon elements of the first time-step. That's what we have plotted. \\
\\
From the information in the UCAR page we could build a better plot before finishing it up:
\begin{lstlisting}
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_code.ncl"
file_path=getenv("file")
input_file = addfile(file_path,"r")
var = input_file->tas(0,:,:) ; variable
lat = input_file->lat
lon = input_file->lon
resources = True ; Indicate you want to set some
; resources.
resources@cnFillOn = True ; Turn on contour line fill.
resources@cnMonoLineColor = False ; Turn off the drawing of
; contours lines in one color.
resources@tiMainString = var@standard_name + " (" + var@units + ")" ; a title
resources@tiXAxisString = lon@long_name
resources@tiYAxisString = lat@long_name
resources@sfXArray = lon
resources@sfYArray = lat
xwks = gsn_open_wks("x11","gsun02n")
plot = gsn_contour_map(xwks,var,resources) ; Draw a map
\end{lstlisting}
So, we are almost ready. We need to store the result in a file instead of displaying it, pass some parameters (the output directory and file) and extract others; e.g. the variable name is always the first string in the name of the file before the "\_" character (because the files we have follow the DRS standard), so we can use this to our advantage. \\
Another approach for passing values to the NCL program is by using the command line. We'll assume that a variable called plot\_name, holding the path to the file where the resulting plot should be stored, is already given; the same applies to file\_path (just to show a different procedure for passing values around). And last (and definitely least), we'll pack everything in a begin block so we have a proper script. \\
\\
So this is how it looks like:
\begin{lstlisting}
load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_code.ncl"
begin
input_file = addfile(file_path,"r")
file_name = systemfunc("basename " + file_path)
  tmp = str_split(file_name, "_")   ; the variable name is the first token of the file name
var_name = tmp(0)
delete(tmp)
var = input_file->$var_name$(0,:,:) ; first timestep of a 2D variable
lat = input_file->lat
lon = input_file->lon
resources = True ; Indicate you want to set some
; resources.
resources@cnFillOn = True ; Turn on contour line fill.
resources@cnMonoLineColor = False ; Turn off the drawing of
; contours lines in one color.
resources@tiMainString = var@standard_name + " (" + var@units + ")" ; a title
resources@tiXAxisString = lon@long_name
resources@tiYAxisString = lat@long_name
resources@sfXArray = lon
resources@sfYArray = lat
xwks = gsn_open_wks("eps",plot_name) ; create an eps file
plot = gsn_contour_map(xwks,var,resources) ; store the map
end
\end{lstlisting}
Store that in a file called plot.ncl. \\
So how can we test it? Simply by calling ncl in the directory where plot.ncl is located in the following way:
\begin{lstlisting}
ncl plot.ncl file_path=\"$file\" plot_name=\"output\"
\end{lstlisting}
You should now have a file called output.eps in your current directory, which you can view with evince or gs. \\
\\
You might have noticed the strange-looking \" characters. NCL requires string values to always be quoted. The shell uses quotes for something different, so we have to escape them, which basically means telling the shell to pass the " character as is instead of interpreting it in any particular way.
\\
If you want more info on NCL functions: http://www.ncl.ucar.edu/Document/Functions/\\
\\
\textbf{Wrapping up the script}\\
\\
One last check... the environment in which we started the previous ncl command had already been set up (with module load ncl). Next time we want to use it, that might not be the case, or a different ncl version might be loaded, etc. In summary, the program might stop working because the environment is somehow different.\\
\\
We should ensure the environment doesn't change, at least as much as we can.\\
What we need to do is to clean the environment, check that the program indeed fails, and set the environment up again so we are sure we have set it up properly.\\
Furthermore, we can do that without affecting our current shell by creating a subshell in bash using the () characters.
By the way, module purge will remove all modules from the environment. So the test might look like this:
\begin{lstlisting}
$ (module purge && ncl plot.ncl file_path=\"$file\" plot_name=\"output\")
bash: ncl: command not found
\end{lstlisting}
(The \&\& tells bash to issue the next command only if the previous one finished successfully.)
As expected it fails because of the missing ncl command. Now we can test if just adding the ncl module is enough:
\begin{lstlisting}
$ (module purge && module load ncl && ncl plot.ncl file_path=\"$file\" plot_name=\"output\")
Copyright (C) 1995-2012 - All Rights Reserved
University Corporation for Atmospheric Research
NCAR Command Language Version 6.1.0
The use of this software is governed by a License Agreement.
See http://www.ncl.ucar.edu/ for more details.
\end{lstlisting}
Now we are sure it works as intended.\\
\\
\textbf{Making a plug-in}\\
\\
Up to now we've seen an introduction to making a script that's usable. This has nothing to do with the evaluation system itself; it's just plain programming.
Since making a plug-in is so easy, we had to write something to fill up the tutorial :-)\\
\\
So now we just have to wrap it up in our plug-in. From the information in Developing a plugin, the Reference and some Python knowledge we (at least I) could infer a very simple plugin:
\begin{lstlisting}
import os
from evaluation_system.api import plugin, parameters
class NCLPlot(plugin.PluginAbstract):
    tool_developer = {'name':'Christopher Kadow', 'email':'[email protected]'}
    __short_description__ = "Creates a simple 2D contour map."
    __version__ = (0,0,1)
    #the ParameterDictionary below defines the plugin parameters
    __parameters__ = parameters.ParameterDictionary(
        parameters.File(name='file_path', mandatory=True, help='The NetCDF file to be plotted'),
        parameters.Directory(name='plot_name', default='$USER_PLOTS_DIR/output', help='The absolute path to the resulting plot'))

    def runTool(self, config_dict=None):
        file_path = config_dict['file_path']
        plot_name = config_dict['plot_name']

        result = self.call('module load ncl && ncl %s/plot.ncl file_path=\\"%s\\" plot_name=\\"%s\\"' % (self.getClassBaseDir(), file_path, plot_name))
        print result[0]
        #ncl adds the ".eps" suffix to our output file!
        plot_name += ".eps"
        if os.path.isfile(plot_name):
            return self.prepareOutput(plot_name)
\end{lstlisting}
That certainly looks more daunting than it is. (If you haven't done the 5 Minutes Hands On tutorial, do it now)\\
The differences from the dummy plug-in are:
\begin{lstlisting}
In 1 we import the os package that will be used in 20 to check if the file was properly created.
In 8 we provide a ParameterDictionary with some information about the parameters we expect.
(Check the Reference section for more information about parameters and how it's used in the system)
(You may refer directly to the source code: source:/src/evaluation_system/api/parameters.py)
Here we basically define two input variables:
file_path: a required file (just a string) with no default value.
plot_name: An optional string for saving the resulting plot. By default it will be stored in a system managed directory with the name output.eps
In 13 and 14 we read the variables passed to the application.
In 16 we call the script (which is assumed to be located in the same directory as the plugin).
(Check the code for more information source:/src/evaluation_system/api/plugin.py)
self.call is a method from the inherited class that issues a shell command and returns its output as a tuple.
self.getClassBaseDir() returns the location of the plugin.
%s is used to format strings in Python, so "%s - %s" % (1, 2) == "1 - 2"
\\" is a double escape sequence... \\ tells Python not to interpret the \ as anything special. Bash is then called with \" and we already saw why.
In 17 we display the output from the call. By default stderr (all errors) is redirected to stdout, so everything appears in the first element of the returned tuple (hence the result[0]).
In 19 we extend the name of the output file, as ncl automatically adds that suffix.
20 checks if the file exists and in such a case it is returned using a special construct which checks its status and stores some info for later use.
\end{lstlisting}
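To make the escaping and string formatting a little more tangible, here is a minimal sketch (not part of the plugin; the base directory is just an assumed example value standing in for self.getClassBaseDir()) that only builds and prints the command string self.call would receive:
\begin{lstlisting}
# Illustration only: how %-formatting and the double escape \\" combine.
base_dir = "/home/user/plugin"   # assumed value of self.getClassBaseDir()
file_path = "tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc"
plot_name = "first_plot"
cmd = 'module load ncl && ncl %s/plot.ncl file_path=\\"%s\\" plot_name=\\"%s\\"' % (base_dir, file_path, plot_name)
print cmd
# (output, abbreviated) -> module load ncl && ncl /home/user/plugin/plot.ncl file_path=\"tas_...\" plot_name=\"first_plot\"
\end{lstlisting}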
\textbf{Running the plugin}\\
We are done, now we have to test it. We will create a directory, store all files there (plot.ncl and nclplot.py) and tell the evaluation system to use the plugin.
\begin{lstlisting}
$ mkdir plugin
$ cp nclplot.py plot.ncl plugin
$ cd plugin
$ ls
nclplot.py nclplot.pyc plot.ncl
$ pwd
/home/user/plugin
$ export EVALUATION_SYSTEM_PLUGINS=/home/user/plugin,nclplot
\end{lstlisting}
Recalling from [5 Minutes Hands On], the EVALUATION\_SYSTEM\_PLUGINS variable shows the evaluation system where to find the code. It's a colon separated list of comma separated path, package pairs. Basically:
\begin{lstlisting}
EVALUATION_SYSTEM_PLUGINS=/some/path,mypackage:/oth/path,some.more.complex.package
\end{lstlisting}
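If it helps to see how such a value decomposes, here is a tiny Python sketch (purely illustrative; the paths and package names are just the ones from the example above):
\begin{lstlisting}
# Split the colon separated list into comma separated (path, package) pairs
spec = "/some/path,mypackage:/oth/path,some.more.complex.package"
pairs = [entry.split(",") for entry in spec.split(":")]
print pairs
# -> [['/some/path', 'mypackage'], ['/oth/path', 'some.more.complex.package']]
\end{lstlisting}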
We are done!
\\
\textbf{NCLPlot Usage}\\
Now let's see what the evaluation system already can do with our simple plugin. \\
Let's start by displaying some help:
\begin{lstlisting}
$ freva --plugin
NCLPlot: Creates a simple 2D contour map.
PCA: Principal Component Analysis
$ freva --plugin nclplot --help
NCLPlot (v0.0.1): Creates a simple 2D contour map.
Options:
file_path (default: None) [mandatory]
The NetCDF file to be plotted
plot_name (default: $USER_PLOTS_DIR/output)
The absolute path to the resulting plot
$ freva --plugin nclplot
ERROR: Missing required configuration for: file_path
\end{lstlisting}
So there you see what the parameter dictionary is being used for. Let's use \$file and create a plot.
\begin{lstlisting}
$ freva --plugin nclplot file_path=$file plot_name=first_plot
Copyright (C) 1995-2012 - All Rights Reserved
University Corporation for Atmospheric Research
NCAR Command Language Version 6.1.0
The use of this software is governed by a License Agreement.
See http://www.ncl.ucar.edu/ for more details.
$ evince first_plot.eps
\end{lstlisting}
And evince should display the plot. \\
\\
The configuration here is pretty simple, but you might still want to play around with it:
\begin{lstlisting}
$ freva --plugin nclplot --save-config --config-file myconfig.cfg --dry-run file_path=$file plot_name=first_plot
INFO:__main__:Configuration file saved in myconfig.cfg
$ cat myconfig.cfg
[NCLPlot]
#: The absolute path to the resulting plot
plot_name=first_plot
#: [mandatory] The NetCDF file to be plotted
file_path=tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc
$ rm first_plot.eps
$ freva --plugin nclplot --config-file myconfig.cfg
$ evince first_plot.eps
\end{lstlisting}
As we said, it makes little sense here, but if you have multiple parameters you might want to leave some predefined to suit your requirements.
See also how to use the Evaluation System User Configuration.\\
Now the system also offers a history. There's a lot to be said about it, so I'll just leave a sample session here until I write a tutorial about it. \\
\begin{lstlisting}
$ freva --history
21) nclplot [2013-01-11 14:44:15.297102] first_plot.eps {u'plot_name': u'first_plot', u'file_path': u'tas_Am...
22) ...
...
$ freva --history limit=1 full_text
21) nclplot v0.0.1 [2013-01-11 14:44:15.297102]
Configuration:
file_path=tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc
plot_name=first_plot
Output:
/scratch/users/test/ncl_plot/plugin/first_plot.eps (available)
$ rm /scratch/users/test/ncl_plot/plugin/first_plot.eps
$ freva --history limit=1 full_text
21) nclplot v0.0.1 [2013-01-11 14:44:15.297102]
Configuration:
file_path=tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc
plot_name=first_plot
Output:
/scratch/users/test/ncl_plot/plugin/first_plot.eps (deleted)
$ freva --history entry_ids=21 store_file=myconfig.cfg
Configuration stored in myconfig.cfg
$ cat myconfig.cfg
[NCLPlot]
#: The absolute path to the resulting plot
plot_name=first_plot
#: [mandatory] The NetCDF file to be plotted
file_path=tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc
$ freva --plugin NCLPlot --config-file myconfig.cfg
Copyright (C) 1995-2012 - All Rights Reserved
University Corporation for Atmospheric Research
NCAR Command Language Version 6.1.0
The use of this software is governed by a License Agreement.
See http://www.ncl.ucar.edu/ for more details.
$ freva --history limit=2 full_text
22) nclplot v0.0.1 [2013-01-11 14:51:40.910996]
Configuration:
file_path=tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc
plot_name=first_plot
Output:
/scratch/users/test/ncl_plot/plugin/first_plot.eps (available)
21) nclplot v0.0.1 [2013-01-11 14:44:15.297102]
Configuration:
file_path=tas_Amon_MPI-ESM-LR_decadal1990_r1i1p1_199101-200012.nc
plot_name=first_plot
Output:
/scratch/users/test/ncl_plot/plugin/first_plot.eps (modified)
$ freva --history --help
...
\end{lstlisting}
Well, this is it. I'll be adding more tutorials doing some more advanced stuff but I think now you have the knowledge to start creating plugins for the whole community.
\subsection{Parameters}
Parameters are central to the plugin and they are managed in the evaluation\_system.api.parameters module. Here is a brief summary of what they can do and how they can be used.
\\
\textbf{ParameterDictionary}\\
This class manages everything related to the set of parameters the plugin uses. It's an ordered dictionary so the order in which parameters are defined is the order in which they will be presented to the user in the help, configuration files, web forms, etc.\\
\textbf{ParameterType}\\
This is the central class handling parameters. Normally you use another class that inherits functionality from this one, but as most of the functionality is defined by this class, we will describe the options used in the constructor and therefore in all other classes inheriting from this one.\\
\begin{lstlisting}
Option Default Value Description
name None Name of the parameter
default None The default value if none is provided
(this value will also be validated and parsed, so it must be a
valid parameter value!)
mandatory False If the parameter is required (note that if there's a default value,
the user might not be required to set it, and can always change it,
though he/she is not allowed to unset it)
max_items 1 If set to > 1 it will cause the values to be returned in a list
(even if the user only provided 1). An error will be raised if more
values than those are passed to the plugin
item_separator , The string used to separate multiple values for this parameter.
In some cases (at the shell, web interface, etc) the user has always
the option to provide multiple values by re-using the same parameter
name (e.g. param1=a param1=b produces {'param1': ['a', 'b']}). But the
configuration file does not allow this at this time. Therefore it is better
to set up a separator, even though the user might not use it while giving
input. It need not be a single character, it can be any string (make sure it's
not a valid value!!)
regex .* A regular expression defining valid "string" values before parsing them to
their defining classes (e.g. an Integer might define a regex of "[0-9]+" to
prevent getting negative numbers). This will also be used in JavaScript,
so don't use fancy expressions, or make sure they are understood by both
Python and JavaScript.
help No help The help string describing what this parameter is good for.
print_format %s A python string format that will be used when displaying the value of
this parameter (e.g. %.2f to display always 2 decimals for floats)
\end{lstlisting}
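Just to illustrate how these options might be combined when declaring a parameter, here is a sketch (the parameter name and its values are invented for the example; the keyword names are the ones listed above):
\begin{lstlisting}
from evaluation_system.api import parameters

__parameters__ = parameters.ParameterDictionary(
    parameters.Float(name='threshold',
                     default=0.5,
                     mandatory=False,
                     max_items=3,
                     item_separator=',',
                     regex='[0-9.]+',
                     help='Up to three threshold values',
                     print_format='%.2f'))
\end{lstlisting}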
\begin{lstlisting}
Available Parameters
String:
validated as string
Float
validated as float
Integer
validated as integer
File
validated as string
shows select file widget on the website
Directory
validated as string
system does automatically append a unique id
InputDirectory
validated as string
system does not automatically append a unique id
CacheDirectory
validated as string
Bool
validated as boolean
shows radio buttons on the website
SolrField
shows solr_widget on the website
takes additional parameters:
facet: CMOR facet (required)
group: If you have more than one solr group in your plugin
like MurCSS (default=1)
multiple: Allow multiple selections (default=False)
predefined_facets: Fix some facets for the data search (default=None)
SelectField
only specified values can be selected
takes additional parameters:
options: python dictionary with "key":"value" pairs
\end{lstlisting}
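To give an idea of how the web-oriented parameter types might be declared, here is another sketch (the parameter names, options and facet values are invented; the keywords follow the list above):
\begin{lstlisting}
from evaluation_system.api import parameters

__parameters__ = parameters.ParameterDictionary(
    parameters.Bool(name='draw_coastlines', default=True,
                    help='Overlay coastlines on the plot'),
    parameters.SelectField(name='season',
                           options={'DJF': 'winter', 'JJA': 'summer'},
                           help='Season to analyse'),
    parameters.SolrField(name='variable', facet='variable',
                         multiple=True,
                         help='CMOR variable name(s) to search for'))
\end{lstlisting}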
\subsection{Special Variables}
The plugin has access to some special parameters which are setup for the user running the plugin. These are:
\begin{lstlisting}
Name Description
$USER_BASE_DIR Absolute path to the central directory for this user in
the evaluation system.
$USER_OUTPUT_DIR Absolute path to where the output data for this user is stored.
$USER_PLOTS_DIR Absolute path to where the plots for this user are stored.
$USER_CACHE_DIR Absolute path to where the cached data for this user is stored.
$SYSTEM_DATE Current date in the form YYYYMMDD (e.g. 20120130).
$SYSTEM_DATETIME Current date and time in the form YYYYMMDD_HHmmSS (e.g. 20120130_101123).
$SYSTEM_TIMESTAMP Milliseconds since epoch (i.e. a new number every millisecond,
e.g. 1358929581838).
$SYSTEM_RANDOM_UUID A random UUID string (e.g. 912cca21-6364-4f46-9b03-4263410c9899).
\end{lstlisting}
These can be used in the plugin like this:
\begin{lstlisting}
#[...]
__config_metadict__ = metadict(compact_creation = True,
        output_file = '$USER_OUTPUT_DIR/${input_file}',
        output_plot = '$USER_PLOTS_DIR/${input_file}',
        work = '$USER_CACHE_DIR',
        input_file = (None, dict(mandatory=True, type=str,
                                 help='The processed file'))
        )
\end{lstlisting}
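For comparison, something similar could presumably be written with the ParameterDictionary API used earlier in this guide (a sketch, not taken from the system's code):
\begin{lstlisting}
#[...]
__parameters__ = parameters.ParameterDictionary(
    parameters.File(name='input_file', mandatory=True,
                    help='The processed file'),
    parameters.Directory(name='output_file',
                         default='$USER_OUTPUT_DIR/${input_file}'),
    parameters.Directory(name='output_plot',
                         default='$USER_PLOTS_DIR/${input_file}'),
    parameters.Directory(name='work', default='$USER_CACHE_DIR'))
\end{lstlisting}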
\end{document}
\chapter{S.S. Liki}
\begin{enumerate}
\item \cs[2:00], walk up to \yuna, \sd, walk back to \wakka, \sd, walk back up to \yuna, \cs + 4 \skippablefmv, \sd\ from `Sin!'
\end{enumerate}
\begin{battle}[2000]{Sin Fin}
\begin{itemize}
\tidusf Defend
\switch{\yuna}{\lulu}
\luluf \thunder{} the Sin Fin
\kimahrif \lancet{} the Sin Fin
\enemyf Moves
\tidusf Defend
\kimahrif \lancet{} the Sin Fin
\luluf \thunder{} the Sin Fin
\switch{\tidus}{\yuna}
\summon{\valefor}
\valeforf Energy Blast \od\ on Sin Fin (change target!)
\end{itemize}
\end{battle}
\begin{enumerate}[resume]
\item \fmv+\cs[1:40]
\end{enumerate}
\begin{battle}[2000]{Sinspawn Echuilles}
\begin{itemize}
\tidusf Cheer x2
\wakkaf \darkattack{}
\tidusf Attack x2 \textit{if Str Node else} Cheer x2
\wakkaf Attack x2
\enemyf Blender
\wakkaf Attack x2
\tidusf Attack x2, one less if either \tidus\ crits or \wakka\ crits twice.
\tidusf \od
\end{itemize}
Check for \textbf{\icebrand{}, \iceball{}}
\end{battle}
\begin{enumerate}[resume]
\item \skippablefmv+\cs[1:30], \sd\ during \tidus\ monologue.
\end{enumerate}
"alphanum_fraction": 0.6834080717,
"avg_line_length": 28.5897435897,
"ext": "tex",
"hexsha": "8e1cebf012798936687462b90af698efeaaa7c76",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e4967187283746df2d9c74112b1e0e5e8b4f3a54",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Vardys/Final-Fantasy-X-Speedrun",
"max_forks_repo_path": "Final Fantasy X/Chapters/ssliki.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e4967187283746df2d9c74112b1e0e5e8b4f3a54",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Vardys/Final-Fantasy-X-Speedrun",
"max_issues_repo_path": "Final Fantasy X/Chapters/ssliki.tex",
"max_line_length": 129,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e4967187283746df2d9c74112b1e0e5e8b4f3a54",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Vardys/Final-Fantasy-X-Speedrun",
"max_stars_repo_path": "Final Fantasy X/Chapters/ssliki.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 461,
"size": 1115
} |
%%
%% Copyright 2007-2019 Elsevier Ltd
%%
%% This file is part of the 'Elsarticle Bundle'.
%% ---------------------------------------------
%%
%% It may be distributed under the conditions of the LaTeX Project Public
%% License, either version 1.2 of this license or (at your option) any
%% later version. The latest version of this license is in
%% http://www.latex-project.org/lppl.txt
%% and version 1.2 or later is part of all distributions of LaTeX
%% version 1999/12/01 or later.
%%
%% The list of all files belonging to the 'Elsarticle Bundle' is
%% given in the file `manifest.txt'.
%%
%% Template article for Elsevier's document class `elsarticle'
%% with harvard style bibliographic references
% \documentclass[preprint,12pt,authoryear]{elsarticle}
% \usepackage{graphicx}
% \usepackage{epstopdf}
% \epstopdfDeclareGraphicsRule{.pdf}{png}{.png}{convert #1 \OutputFile}
% \DeclareGraphicsExtensions{.png,.pdf}
%% Use the option review to obtain double line spacing
%% \documentclass[authoryear,preprint,review,12pt]{elsarticle}
%% Use the options 1p,twocolumn; 3p; 3p,twocolumn; 5p; or 5p,twocolumn
%% for a journal layout:
%% \documentclass[final,1p,times,authoryear]{elsarticle}
\documentclass[final,1p,times,twocolumn,authoryear]{elsarticle}
%% \documentclass[final,3p,times,authoryear]{elsarticle}
%% \documentclass[final,3p,times,twocolumn,authoryear]{elsarticle}
%% \documentclass[final,5p,times,authoryear]{elsarticle}
%% \documentclass[final,5p,times,twocolumn,authoryear]{elsarticle}
%% For including figures, graphicx.sty has been loaded in
%% elsarticle.cls. If you prefer to use the old commands
%% please give \usepackage{epsfig}
%% The amssymb package provides various useful mathematical symbols
\usepackage{amssymb}
%% The amsthm package provides extended theorem environments
%% \usepackage{amsthm}
\usepackage{lineno}
\usepackage{longtable}
\usepackage{hyperref}
\usepackage{natbib}
%% The lineno packages adds line numbers. Start line numbering with
%% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on
%% for the whole article with \linenumbers.
%% \usepackage{lineno}
\journal{Patterns}
\newcommand{\cvm}[1]{\protect{\texttt{{#1}}}}
\newcommand{\om}{O\&M}
\begin{document}
\begin{frontmatter}
%% Title, authors and addresses
%% use the tnoteref command within \title for footnotes;
%% use the tnotetext command for theassociated footnote;
%% use the fnref command within \author or \address for footnotes;
%% use the fntext command for theassociated footnote;
%% use the corref command within \author for corresponding author footnotes;
%% use the cortext command for theassociated footnote;
%% use the ead command for the email address,
%% and the form \ead[url] for the home page:
%% \title{Title\tnoteref{label1}}
%% \tnotetext[label1]{}
%% \author{Name\corref{cor1}\fnref{label2}}
%% \ead{email address}
%% \ead[url]{home page}
%% \fntext[label2]{}
%% \cortext[cor1]{}
%% \address{Address\fnref{label3}}
%% \fntext[label3]{}
\title{Exploiting and Extending Vocabularies for Faceted Browse in Earth System Science}
%% use optional labels to link authors explicitly to addresses:
%% \author[label1,label2]{}
%% \address[label1]{}
%% \address[label2]{}
\author[1]{Ruth E. Petrie}
\author[2,3]{Bryan Lawrence}
\author[1]{Martin Juckes}
\author[1,4]{Victoria Bennett}
\author[4]{Philip Kershaw}
\author[1]{Ag Stephens}
\author[4]{Alison Waterfall}
\author[5]{Antony Wilson}
\address[1]{National Centre for Atmospheric Science, CEDA, Science and Technology Facilities Council, UK}
\address[2]{National Centre for Atmospheric Science, Department of Meteorology, University of Reading, UK}
\address[3]{Department of Computer Science, University of Reading, UK}
\address[4]{National Centre for Earth Observation, CEDA, Science and Technology Facilities Council, UK}
\address[5]{Scientific Computing Department, Science and Technology Facilities Council, UK}
\begin{abstract}
The earth system grid federation (ESGF) deployed a faceted browsing system for finding data within a large (petascale) globally distributed climate data archive. This system relied on a ``Data Reference Syntax'' which was designed initially for providing meaningful navigable identifiers for data from the fifth climate model intercomparison project (CMIP5). In this paper we provide additional context and expand on the need for such systems, discuss the inherent dependencies on controlled vocabularies, and show how the DRS concept has been extended to provide support for additional projects (CORDEX, CLIPC, CMIP6). We discuss the nature of search and browse and the impact of linked data concepts on successful data discovery. We also discuss the wider applicability in environmental science of faceted browse coupled with meaningful data identifiers.
\end{abstract}
%%Graphical abstract
% \begin{graphicalabstract}
% \includegraphics{grabs}
% \end{graphicalabstract}
% %%Research highlights
% \begin{highlights}
% \item Research highlight 1
% \item Research highlight 2
% \end{highlights}
\begin{keyword}
Earth System Science \sep
Data \sep
Metadata \sep
Data Reference Syntax \sep
Faceted searching
%% keywords here, in the form: keyword \sep keyword
%% PACS codes here, in the form: \PACS code \sep code
%% MSC codes here, in the form: \MSC code \sep code
%% or \MSC[2008] code \sep code (2000 is the default)
\end{keyword}
\end{frontmatter}
%% \linenumbers
%% main text
\section{Introduction}
Earth System Science (ESS) has always had difficulties associated with both the number and volume of data products.
These difficulties arise because both earth observation and numerical simulation involve continually increasing data production driven by underlying trends in computing, always at the edge of what is possible, and so managing and manipulating such data at scale has always required heroic effort.
For example, the European Space Agency Sentinel satellites \citep{BerEA12} are currently (2018) approaching 15 TB/day \citep[5PB/year ---][]{Sentinel2019} and the next iteration (phase 6) of the Coupled Model Intercomparison Project (CMIP6) is anticipated to produce around 10-20 PB of data in the 2019-2020 timeframe.
One of the many challenges with this data is facilitating the discovery and use of climate data records.
Climate data comes from a variety of sources such as modelling (including global, regional and seasonal modelling), earth observation (e.g. from satellites, in-situ observations, radiosondes etc.), reanalysis (a combination of both model and observational data) and climate impact indicators (derived products).
All these different climate data products are produced within different parts of the ESS community and it is common for the different communities to have different data conventions (such as file formats) and standards.
Nonetheless, these heterogeneous data need to be processed automatically, a problem which grows larger with increasing volume and heterogeneity.
Such automated processing requires the use of data description and organisation methodologies which are well defined and structured, which support discovery via multiple methods and entry points, and can be applied consistently across the ESS community.
In this paper we discuss two key methodologies: controlled vocabularies and structured syntaxes to support faceted browse, and their application in both the Climate Information Platform for Copernicus (CLIPC) and the Earth System Grid Federation.
Controlled vocabularies are sets of terms and definitions which have some managed process for change and maintenance which attempts to maximise accuracy, minimise ambiguity and repetition, and support incremental updates.
They are typically managed in computer systems which provide both human and machine readable interfaces (e.g. the NERC vocabulary service available at \href{https://www.bodc.ac.uk/resources/products/web\_services/vocab/}
{https://www.bodc.ac.uk/resources/products/web\_services/vocab/},
\citealt{Latham2009, Leadbetter2012}).
Faceted Browse is a particular form of data navigation which will be familiar to many people from application in internet shopping where a set of search results can be progressively refined by a set of pre-defined characteristics of the products (e.g. when shopping for fridges, refining by dimensions, manufacturer, energy rating etc). An important characteristic of faceted browse is that it is not hierarchical (e.g. for the fridge purchase example, one could start with manufacturer and refine, or start with energy rating and refine, etc).
\textit{TODO: Ruth: Add paragraph on paper layout here, based on the following original text and the eventual structure of the paper}
In this paper we are interested in how the known technologies of faceted search are applied and utilised in ESS and how they could be applied to other scientific disciplines.
In order to achieve this, two things are required: firstly, a framework of both hardware and software infrastructure that supports faceted searching, and secondly, a well defined set of controlled vocabularies that are widely adopted by the community.
It is essential that both the data and metadata are easily accessible to all users. In the context of ``big data'' this means not only that users are able to discover the information that they need but also that they are able to work with the data within their information handling systems. This consists of two key aspects: (1) technical and domain specific terms must have accessible definitions and (2) such definitions must be provided in a form that other people's software can work with.
The aims of this paper are to describe the issues around data discovery through faceted search in the climate modelling community, by demonstrating how the Climate Information Platform for Copernicus (CLIPC) research project ?REF? utilised and extended the existing infrastructure to provide a single point of access to a variety of heterogeneous data.
\subsection{Data Discovery}
The notion of data discovery is understood by different communities differently, and even within one community the concept can mean different things depending on application.
For example, for some earth observation users, discovering a dataset would mean finding the set of all images from all sensors which are relevant for their location at a specific time, and for others, it might be to discover the set of all images from a particular sensor.
How this can be supported is a function of the available metadata, the way the data is organised, and the software which provides the discovery service. In particular, the success or failure of such discovery is often dependent on how the data publisher has organised data into datasets, what navigation facilities are provided to find datasets, and whether, and how, datasets can be subsetted.
There are two key steps involved in finding data which can be characterised as teleporting and orienteering \citep{Teevan2004}: the former involves ``jumping'' to the neighbourhood where the right data can be found, and the latter, to the process of navigating around to find exactly what is wanted.
These two steps need to be supported by metadata.
In practice there are several sorts of relevant metadata, particularly where data is kept in files on a disk or tape system as is often the case for high volume ESS data: metadata held in the files, metadata which appears in the physical layout of the files (e.g. directory names on a file system) and metadata held in databases or web-pages.
Whatever their source and location, they can be characterised using the taxonomy introduced in \cite{LawEA09}: ``A for archive'' metadata is necessary to navigate (orienteer) around a file system, potentially utilising information held in the files and the filepaths; ``B for Browse'' metadata is used for navigation too, but also to discriminate between similar datasets (e.g. two model simulations of the same phenomenon).
``C-Character'' holds information about actual or perceived quality, and ``D-Discovery'' provides the equivalent of dataset catalogue records, providing the information necessary for teleporting (to find a point from where one can orienteer).
If the datasets are not organised with sufficient granularity, or datasets structures differ, then many discovery use cases cannot exploit orienteering, either because the datasets are too large (one does not get close enough with teleporting for orienteering to work), or because the method of orienteering is too different between datasets.
When the information system and/or data are predominantly hierarchical (organised in simple tree structures) this problem is exacerbated. While teleporting can arrive at some point ``in the tree'', it becomes difficult to find similar parts of multiple trees via orienteering, unless the trees have similar structures.
When the datasets have too much granularity, there are too many potential teleporting targets and so the metadata system needs to compensate so that it can produce aggregated views that can be unfolded into the constituent parts as orienteering proceeds.
The obvious difficulty with aggregation is that there may be many different ways to do the aggregation.
Faceted browse with aggregation provides a sophisticated method of dealing with some of these issues, but only if it is underpinned by controlled vocabularies which ensure that different datasets can be viewed and/or aggregated using commonly understood terms.
Datasets then need to be tagged with the correct combination of terms, and the information system (hardware and software) need to deliver the requisite functionality.
Here we concentrate on the vocabularies and dataset granularities needed to deliver \dots
%TODO: Ruth completion
\textit{TODO: Ruth: COMPLETE}.
\section{Data Reference Syntax}
\subsection{History}
\label{s:history}
At the advent of the fifth climate model intercomparison project, CMIP5, it was apparent that the community was going to be faced with at least a petabyte of data organised into at least a million files.
Data was being produced by multiple organisations, using different software systems (models) according to the needs of a variety of experiments, and consisting of hundreds of different output variables being produced by simulations of ocean, atmosphere, land surface, etc.
It was going to be housed in distributed federation of data nodes, and users were going to be expected to find and download only the data of interest to them.
Experience from the earlier CMIP exercises had led to the knowledge that structured metadata was crucial, so the notion of quality A-metadata was already present, and it was known that mixing different variables within files could be problematic (greater chance for errors in any given file, more likelihood that users downloading files would be downloading data they didn't need).
However, it was also known that different groups liked to organise their data differently.
The solution which arose was the ``Data Reference Syntax" (DRS) introduced in \cite{TayEA12}, which provided a human and machine readable structured identifier for what became known as ``atomic datasets'' (nearly indivisible granules of data).
The DRS identifier utilised controlled vocabularies to provide CMIP5 both landing points for teleporting, and a nomenclature for a host of important routes to aggregating the atomic datasets.
It was also completely agnostic about file organisation (it is often necessary to remind people that the D does not stand for ``Directory'') allowing different groups to organise their data as they preferred.
The set of vocabularies which were chosen to construct a CMIP5 DRS identifier are listed in table \ref{cmip5-drs}.
An individual identifier is constructed by concatenating vocabulary members in a specific sequence, separated by dots:
\begin{quote}
\small\texttt{
\textless{}project\textgreater{}.\textless{}product\textgreater{}.\textless{}institute\textgreater{}.\textless{}model\textgreater{}.\textless{}experiment\textgreater{}.\textless{}time\_frequency\textgreater{}.\newline
\textless{}realm\textgreater{}.\textless{}cmor\_table\textgreater{}.\textless{}ensemble\textgreater{}.\textless{}version\textgreater{}}
\end{quote}
for example
\begin{quote}
\small\texttt{CMIP5.output1.MPI-M.MPI-ESM-LR.amip.mon.land.Lmon.r5i1p1.v20120529}
\end{quote}
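To make the structure concrete, the following minimal Python sketch (illustrative only, and not part of any of the systems described in this paper) decomposes such an identifier into its constituent facets, using the facet order given above:
\begin{verbatim}
FACETS = ["project", "product", "institute", "model",
          "experiment", "time_frequency", "realm",
          "cmor_table", "ensemble", "version"]

def parse_drs(identifier):
    values = identifier.split(".")
    if len(values) != len(FACETS):
        raise ValueError("not a valid CMIP5 DRS identifier")
    return dict(zip(FACETS, values))
\end{verbatim}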
\begin{table}[ht!]
\begin{tabular}{|p{3cm}|p{9.5cm}|}
\hline
\textbf{Facet} & \textbf{Definition} \\ \hline
project & Fixed as CMIP5\\ \hline
product & The type of output produced by the model. \\ \hline
institute & The climate modelling centre(s) or University responsible for the model. \\ \hline
model & The specific name of the climate model used. \\ \hline
experiment & The valid CMIP5 experiment short identifier. \\ \hline
time frequency & The temporal frequency of the output data, e.g. ``mon" for monthly data. \\ \hline
realm & The earth system realm of the data, e.g. ``atmos" for the atmosphere. \\ \hline
CMOR table & A lookup table that relates the frequency of a variable and its realm. \\ \hline
ensemble member & The specific ensemble member of the model run of the form r\textless{}L\textgreater{}i\textless{}M\textgreater{}p\textless{}N\textgreater{} where L, M and N are integers and r is for realisation; i for initialisation and p is for physics. \\ \hline
version & This is an ESGF version to uniquely identify the dataset and version control the data, it is given the form vYYYYMMDD. \\
\hline
\end{tabular}
\caption{Facet definitions for CMIP5 \label{cmip5-drs}\label{tab:cmip5}}
\end{table}
%TODO: This is not a complete list of CMIP5 identifiers, should we say something about the subset identifiers?
\subsection{The use of the DRS to support faceted browse in the ESGF}
The CMIP5 data were distributed by the Earth System Grid Federation (ESGF), an international collaboration built on a shared experience developing and deploying software infrastructure over the last two decades.
It was initially designed to handle the CMIP5 project alone \citep{WilEA11_esgf}, but has since grown to encompass a number of other projects, all deploying variants of the original DRS.
It now hosts in excess of 20 PB, and has been integral to recent assessments made by the Intergovernmental Panel on Climate Change (IPCC).
The ESGF is a global system with nodes distributed around the world and all continents represented.
Nodes interoperate with each other using a peer-to-peer paradigm, and provide the same set of standardised data search and access protocols.
Some nodes host data replicated from other nodes, but most simply provide local data for remote discovery and download.
The ESGF supports multiple projects, and the ESGF data discovery service relies heavily on the DRS concept, even for non-CMIP projects. The DRS identifier is the unique identifier which allows the system to know which datasets are replicated between nodes, and provides facets that both supports faceted browse in the discovery user interface and allows downstream applications to construct faceted search interfaces to ESGF data. An example of the faceted browse interface appears in figure \ref{fig:search-esgf-cmip5}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.32]{images/esgf_cmip5.png}
\caption{Screenshot of the Earth System Grid Federation (ESGF) CMIP5 faceted search interface. This example shows a case where no facets have been selected, so all results are returned. The CMIP5 \texttt{realm} facet is expanded on the left hand side showing the potential to select datasets from the seven different options in the CMIP5 realm vocabulary.\label{fig:search-esgf-cmip5} }
\end{figure}
\subsection{Underlying DRS principles for the ESGF}
The experience in ESGF showed that the DRS concept of conflating an identifier with a compound set of facets was very powerful, providing both machines and humans real utility.
However, this utility does not arise from chance, and it depends on the right set of facets, which in turn depend on how they are structured.
Facet terms should either be terms from a controlled vocabulary or be of a flexible structured form (e.g. sub-ranges within an axis, such as periods within a date range).
A controlled vocabulary (CV) is simply a list of terms with an associated precise definition that must be replicated in a precise form (including spelling, case and other characters).
Controlled vocabularies are widely used to organise and annotate large volumes of data, and are often used in file naming conventions, or to populate internal file metadata.
For use as an identifier and as facets they need to be unambiguous, fully partition the space of possible terms, and be distinct (no dataset may fall outside the domain of the vocabulary, or be capable of annotation by more than one term in each vocabulary).
For example, the CMIP5 facet \cvm{Realm} spans the full set of high level modelling domains expected within an earth system model, and can only take one of the permitted values: \cvm{atmos}, \cvm{ocean}, \cvm{land}, \cvm{landIce}, \cvm{seaIce}, \cvm{aerosol}, \cvm{atmosChem}, and \cvm{ocnBgchem} (where the latter covers ``ocean bio-geochemistry'' and the rest should be self evident).
Ensuring that these requirements are met is why they need ``control'' --- it must be difficult to inadvertently break these criteria, yet also allow new terms to be added as would be necessary if either the domain is expanded, or the facets need sub-division.
Facets which encompass flexible structures similarly need to span the range with no overlapping, but they too need to be controlled to an extent to ensure the facets are meaningful across data providers.
As an example, the CMIP5 facet ``Ensemble member'' must unambiguously identify all possible ensemble members within an ensemble (a set of simulations which vary somehow across a particular experimental configuration). However, while the extent of the domain of such ensemble members may not be known except by the data provider (and hence is not encoded), the structure of the possible domain is prescribed, in this case to a triplet of the form \cvm{r\textless{}L\textgreater{}i\textless{}N\textgreater{}p\textless{}M\textgreater{}} where \cvm{L, N} and \cvm{M} are integers and \cvm{r, i} and \cvm{p} indicate realisation, initialisation, and physics axes respectively, so that ensemble members can be identified in the space of all simulations carried out by a particular provider along those axes. So \cvm{r3i2p1} indicates the third realisation of the second kind of initialised simulation with the same physics as all other \cvm{r\textless{}L\textgreater{}i\textless{}N\textgreater{}p1} instances.
It should be noted that some structured facets need to exist for identification purposes, but not all structured facets are actually of use for faceted browse, as for example in this case, one is unlikely to look for all simulations which have the facet \cvm{r3i2p1}. By contrast a time-period structured facet may be of use in true faceted browse.
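As an illustration, a structured facet of this kind can be validated against a simple pattern; the following sketch (again purely illustrative) accepts well-formed ensemble member identifiers and rejects malformed ones:
\begin{verbatim}
import re

ENSEMBLE_MEMBER = re.compile(r"^r\d+i\d+p\d+$")

assert ENSEMBLE_MEMBER.match("r3i2p1")      # well formed
assert ENSEMBLE_MEMBER.match("r10i1p1")     # well formed
assert not ENSEMBLE_MEMBER.match("r3i2")    # incomplete triplet
\end{verbatim}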
Within ESGF the DRS has two further constraints: the initial controlled vocabulary for the first facet is the set of all projects with data on ESGF, and the last is a version number of the form \cvm{vYYYYMMDD} for that atomic dataset within the ESGF (multiple versions may occur as a dataset is updated or superseded).
The ESGF software expects that different projects will expose different DRS structures (and facets), but that all datasets within a project are identified with the same DRS faceted syntax.
\subsection{Provenance Principles for a DRS}
The use of the DRS within ESGF is primarily to allow data users to navigate amongst the available data using the data provenance during the browse phase of data selection.
The initial application in support of CMIP5 was organised around the CMIP protocols, but a more generic approach is necessary for wider applicability, particularly if the DRS is to apply to observations as well as simulation.
In the typical production of a dataset, there is a series of processes and operations applied, analyses conducted, and interim data results generated; that is, a complex scientific workflow is enacted before a scientific experiment or observation yields its final data output. These processes and interim data outputs, along with other related metadata, form the dataset provenance. Provenance, also known as lineage, is increasingly important for determining authenticity and quality, especially when comparing products within the growing volumes of public domain datasets. It is also an important part of determining if data is fit for the intended purpose(s).
Observations \& Measurements \cite[\om; ][]{Cox2016} provides a standards based framework for describing the characteristics of an event making an observation --- what was measured, the procedure used, etc (see figure \ref{fig:oandm}).
As a framework it provides hooks for specialist descriptions which provide detailed descriptions of these key characteristics, but without prescribing so much that it becomes unimplementable.
An integral concept within \om\ is the concept of a ``Sampling Feature'' which explicitly allows for the spatiotemporal characteristics of the measurement to be recognised and captured.
\begin{figure}
\centering
\includegraphics[]{images/basic_om_model.png}
\caption{Observations and Measurements schema\label{fig:oandm}}
\end{figure}
Conforming with \om\ provides a basis for extending DRS provenance concepts, by requiring that any DRS covers at least the following:
\begin{enumerate}
\item \textbf{Parties}: who is involved, ownership of the data; e.g. experimenter, institution;
\item \textbf{Procedure}: the process; e.g. model or instrument information;
\item \textbf{Sampling feature}: spatiotemporal information; e.g. the sampling frequency;
\item \textbf{Feature of Interest}: what is measured e.g. atmosphere, ocean, clouds;
\item \textbf{Observed property}: the parameter; e.g. air temperature.
\end{enumerate}
These categories are not intended to be exclusive as there is not always a clean separation between them, particularly when one considers different perspectives.
For example, in a modelling context cloud properties may be a feature of interest but in an observational context they may be the observed property.
The procedure used for an observation (or simulation) is crucial.
One goal of any DRS will be to provide hooks for navigation from data to information about procedures used.
Such navigation will depend on ancillary services which ideally share the same controlled vocabularies.
For example, in the case of CMIP6 the facet ``experiment" is shared by both the ESGF DRS and the Earth System Documentation system (\href{https://es-doc.org}{https://es-doc.org}), and it is possible to navigate between the DRS view of data in the ESGF and a description of the experimental protocol \citep{PasEA19} at \href{https://search.es-doc.org/?project=cmip6&documentType=cim.2.designing.NumericalExperiment&client=esdoc-url-rewrite}{es-doc.org}.
Provenance capture in a simulation workflow is relatively simple; however,
the Advanced Climate Research Infrastructure for Data \citep[ACRID, ][]{Shaon2012} project demonstrated that provenance capture is also possible for climate data observations (providing descriptions of data sources and versions, software versions, and processing options), but there is work to do to develop appropriate linking vocabularies.
\section{Exploiting Data Reference Syntaxes in real systems}
The CMIP5 DRS was deployed in ESGF, as described in section \ref{s:history}. The key components of the ESGF software and workflow are shown shaded in figure \ref{fig:arch}: a user interface exploits a catalog and delivery services.
The DRS provides organising principles for the catalog, and may be exploited by delivery services (including ancillary information services as discussed above).
In this section we discuss the extension of the original DRS to three important additional applications: to support the Climate Information Platform for Copernicus (CLIPC), the European Space Agency's Climate Change Indicators portal, and the sixth CMIP phase, CMIP6.
\begin{figure}
\centerline{\includegraphics[width=8cm]{figs/architecture_v3.png}}
\caption{Exploiting the data reference syntax within a software system. Data is published into catalogs, and user interfaces (discussed in the text) exploit those catalogs and a range of delivery services to allow users to navigate around data offerings before selecting and acquiring data. Such systems can exploit a Data Reference Syntax and their controlled vocabularies. \label{fig:arch}}
%TODO: Generate PDF version for final paper to increase quality
\end{figure}
\subsection{Climate information platform for Copernicus (CLIPC)}
CLIPC \cite{} includes a heterogeneous selection of data, extending the DRS considerably from the CMIP5 usage for global modelling by including selected datasets from the Regional Downscaling Experiment (CORDEX), the Met Office Hadley Centre's HadOBS project, and climate indicators calculated from the model data.
CORDEX was one of the first projects to be included in the ESGF as it expanded following CMIP5. It involved running regional climate models driven by boundary conditions from the CMIP5 archive for specific geographical domains \citep{Giorgi2009}.
The CORDEX DRS \cite[described in][]{christensen2014cordex} was a relatively straightforward extension of the CMIP5 DRS, with additional terms for domain, driving model, regional climate model, and regional climate model version --- with the project facet constrained to be ``CORDEX'' (related projects with different DRS facets and project name also exist or will exist, including CORDEX-Adjust, \citealt{Nikulin2016}, and CORDEX2, \citealt{GutEA16}).
For use within the CLIPC datasets, and associated DRS, not all the CORDEX terms were required as only one
\begin{table}[ht!]
\begin{tabular}{|p{3cm}|p{9.5cm}|}
\hline
\textbf{Facet} & \textbf{Definition} \\ \hline
\textit{project} & Fixed as cordex.\\ \hline
\textit{product} & The type of output produced by the model. \\ \hline
\textit{institute} & The institute responsible for the data. \\ \hline
domain & A predefined region of the global that the data covers. \\ \hline
\textit{driving model} & The specific name of the climate model used to provide the boundary conditions. \\ \hline
experiment & The valid CORDEX experiment short name.\\ \hline
\textit{ensemble} & The ensemble member of the model run, inherited from the global model run; given in the form r\textless{}L\textgreater{}i\textless{}M\textgreater{}p\textless{}N\textgreater{} where L, M and N are integers and r is for realisation; i for initialisation and p is for physics. \\ \hline
rcm\_name & The regional climate model name. \\ \hline
rcm\_version & The regional climate model version. \\ \hline
\textit{time frequency} & The temporal frequency of the output data. \\ \hline
\textit{variable} & The short variable name identifier. \\ \hline
\textit{version} & This is an ESGF version to uniquely identify the dataset and version control the data, it is given the form vYYYYMMDD. \\
\hline
\end{tabular}
\caption{Facet definitions for CORDEX. Facets denoted with italics share the same controlled vocabulary as used for CMIP5, except for driving model, where that facet uses the CMIP5 source facet vocabulary.
These facets are connected together using a ``." to construct the unique DRS:
\textless{}project\textgreater{}.\textless{}product\textgreater{}.\textless{}domain\textgreater{}.\textless{}institute\textgreater{}.\textless{}driving\_model\textgreater{}.\textless{}experiment\textgreater{}.\textless{}ensemble\textgreater{}.\newline\textless{}rcm\_name\textgreater{}.\textless{}rcm\_version\textgreater{}.\textless{}time\_frequency\textgreater{}.\textless{}variable\textgreater{}.\textless{}version\textgreater{}
\label{t:cordex}\label{tab:cordex}}
\end{table}
A CORDEX DRS example is \newline
\small{\texttt{cordex.output.AFR-44.DMI.ECMWF-ERAINT.evaluation.r1i1p1.HIRHAM5.v2.day.uas.v20140804}}
The CORDEX data is hosted directly by the ESGF, with a subset indexed within CLIPC. The necessary DRS to work with the ESGF was established as a fairly direct extension from CMIP5 (table \ref{t:cordex}), and is subsumed directly into CLIPC.
The DRS elements that were widely used and required for the publication of model data were not always appropriate for observational data.
In this section four different project DRS are considered: CMIP5, the Regional Downscaling Experiment (CORDEX), the ESA Climate Change Initiative (CCI) and the Met Office Hadley Centre (MOHC) HadOBS projects. CMIP5 is a global modelling project, CORDEX is a regional modelling project, ESA-CCI is a satellite observation project and HadOBS is a ground based observations project.
Table \ref{tab:drs-schema} shows the DRS elements used in each of the projects CMIP5, CORDEX, ESA-CCI and HadOBS. The facets shown in normal font are unique facets for a given project; the facets in italics are facets already defined which can be utilised by multiple projects. Since CMIP5 was the first project to use this approach to data discovery, all the facets used were uniquely defined for this project.
\subsubsection{ESA-CCI: ESA Climate Change Initiative}
The European Space Agency Climate Change Initiative (ESA-CCI) data were the first remotely sensed data to be published in ESGF, and they required a number of new facets. Firstly, consider the provenance facets. The term project is introduced; it is now commonplace to use project rather than activity, and the two are often used interchangeably. The product facet in the modelling community has a different meaning to that used in the EO community, therefore to work with the existing infrastructure the facet ``product string'' was included, where ``product string'' is the typical name of the EO product. The observation community also commonly has product versions associated with its data, to keep up-to-date with the most recent observations and methodologies; this was not required in the CMIP programme, so an additional facet was introduced to describe it. There are also a number of additional procedural facets required to describe the ESA-CCI data: the processing level, sensor id and platform id.
\begin{table}[ht!]
\begin{tabular}{|p{3cm}|p{9.5cm}|}
\hline
\textbf{Facet} & \textbf{Definition} \\ \hline
project & Fixed as clipc\\ \hline
product & The type of output; esacci \\ \hline
cci project & The ESA CCI essential climate variable project. \\ \hline
time frequency & The temporal frequency of the output data. \\ \hline
processing level & The level of processing applied to the observational data, e.g. L3, L4. \\ \hline
CCI geophysical parameter & The observed quantity. \\ \hline
sensor & The instrument name. \\ \hline
platform & The satellite that carried the sensor. \\ \hline
product string & An additional product descriptor required for uniqueness, could be the name of a processing algorithm. \\ \hline
product\_version & A version commonly associated with the dataset. \\ \hline
realization & The ensemble member. \\ \hline
version & This is an ESGF version to uniquely identify the dataset and version control the data, it is given the form vYYYYMMDD. \\
\hline
\end{tabular}
\caption{Facet definitions for ESA CCI\label{tab:cci}}
\end{table}
The CCI facets were connected together to produce unique dataset identifiers of the form:
\noindent\textless{}project\textgreater{}.\textless{}product\textgreater{}.\textless{}cci\_project\textgreater{}.\textless{}time\_frequency\textgreater{}.\textless{}processing\_level\textgreater{}.\textless{}cci\_geophysical\_parameter\textgreater{}.\\\textless{}sensor\_id\textgreater{}.\textless{}platform\_id\textgreater{}.\textless{}product\_string\textgreater{}.\textless{}product\_version\textgreater{}.\textless{}realization\textgreater{}.\textless{}esgf\_version\textgreater{}
An ESA-CCI DRS example is \newline
\small{\texttt{clipc.esacci.CLOUD.day.L3U.CLD\_PRODUCTS.MODIS.Aqua.MODIS\_AQUA.1-0.r1.v20120704}}.\normalsize
\subsubsection{MOHC HadOBS: Met Office Hadley Centre, observational data products}
The MOHC HadOBS observational data also required additional provenance information to describe the data effectively.
\begin{table}[ht!]
\begin{tabular}{|p{3cm}|p{9.5cm}|}
\hline
\textbf{Facet} & \textbf{Definition} \\ \hline
project & Fixed as CLIPC\\ \hline
product & The type of data \\ \hline
institute & The institute responsible for the data. \\ \hline
framework & Dataset framework \\ \hline
collection & The dataset collection \\ \hline
frequency & The temporal frequency of the output data. \\ \hline
realization & The ensemble member. \\ \hline
product\_version & A version commonly associated with the dataset. \\ \hline
version & This is an ESGF version to uniquely identify the dataset and version control the data, it is given the form vYYYYMMDD. \\
\hline
\end{tabular}
\caption{Facet definitions for HadOBS\label{tab:hadobs}}
\end{table}
These facets are connected together using a ``." to construct the unique DRS:
\noindent\textless{}project\textgreater{}.\textless{}product\textgreater{}.\textless{}institute\textgreater{}.\textless{}framework\textgreater{}.\textless{}collection\textgreater{}.\textless{}frequency\textgreater{}.\\\textless{}realization\textgreater{}.\textless{}product\_version\textgreater{}.\textless{}esgf\_version\textgreater{}
A HadOBS DRS example is \newline
\small{\texttt{clipc.insitu.MOHC.HadOBS.HadISDH.mon.r1.v2-1-0-2015p.v20151231}}.\normalsize
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.4]{images/clipc-esgf-search.png}
\caption{Example ESGF search for in-situ observational data\label{fig:clipc-hadobs}}
\end{figure}
\subsection{CMIP6}
Although not a part of the CLIPC project, since that project took place the phase 6 CMIP data (CMIP6) is now being released, and its DRS has been defined as shown in table \ref{tab:cmip6}. Many facets are common between CMIP5 and CMIP6, although some have been renamed, which is arguably not good practice. The largest change from CMIP5 to CMIP6 is the granularity level of the DRS. In CMIP5 all variables for a given dataset were included within the dataset, often 20-30 variables per dataset, which was not optimal from a search or data management perspective. Therefore within CMIP6 the variable has been elevated to be a distinct facet. This is extremely useful given the much larger volume of CMIP6: a user who is only interested in a couple of variables, for example sea ice and sea surface temperature, can simply search for these two variables and then narrow down their search from there.
\begin{table}[ht!]
\begin{tabular}{|p{3cm}|p{9.5cm}|}
\hline
\textbf{Facet} & \textbf{Definition} \\ \hline
mip\_era & The phase of CMIP in this case CMIP6; equivalent to the CMIP5 project.\\ \hline
activity\_drs & The model intercomparison project (MIP) to which the data belong. \\ \hline
institution\_id & The climate modelling centre(s) or University responsible for the model. \\ \hline
source\_id & The specific name of the climate model used. \\ \hline
experiment\_id & The valid CMIP6 experiment short identifier. \\ \hline
member\_id & The specific ensemble member of the model run, of the form r\textless{}L\textgreater{}i\textless{}N\textgreater{}p\textless{}M\textgreater{}f\textless{}R\textgreater{}, where L, N, M and R are integers and r denotes realisation, i initialisation, p physics and f forcing. \\ \hline
table\_id & A lookup table that relates the frequency of a variable and its realm. \\ \hline
variable\_id & A short variable name identifier. \\ \hline
grid\_label & A short grid type identifier. \\ \hline
version & An ESGF version used to uniquely identify the dataset and version-control the data; it has the form vYYYYMMDD. \\
\hline
\end{tabular}
\caption{Facet definitions for CMIP6}
\end{table}
These facets are connected together using a ``.'' to construct the unique DRS:
\textless{}mip\_era\textgreater{}.\textless{}activity\_drs\textgreater{}.\textless{}institution\_id\textgreater{}.\textless{}source\_id\textgreater{}.\textless{}experiment\_id\textgreater{}.\textless{}member\_id\textgreater{}.\newline\textless{}table\_id\textgreater{}.\textless{}variable\_id\textgreater{}.\textless{}grid\_label\textgreater{}.\textless{}version\textgreater{}
A CMIP6 DRS example is \newline \small{\texttt{CMIP6.CMIP.AWI.AWI-CM-1-1-MR.historical.r5i1p1f1.3hr.rldscs.gn.v20181218}}.\normalsize
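Because a DRS identifier is simply a ``.''-delimited string with a fixed facet order, it can be decomposed mechanically. The short Python sketch below is purely illustrative (the facet list and function name are our own and not part of any ESGF tooling):
\begin{verbatim}
# Illustrative only: split a CMIP6 dataset identifier into its facets.
CMIP6_FACETS = ["mip_era", "activity_drs", "institution_id", "source_id",
                "experiment_id", "member_id", "table_id", "variable_id",
                "grid_label", "version"]

def parse_cmip6_drs(dataset_id):
    """Return a facet-name -> value mapping for a CMIP6 dataset id."""
    values = dataset_id.split(".")
    if len(values) != len(CMIP6_FACETS):
        raise ValueError("expected %d facets, got %d"
                         % (len(CMIP6_FACETS), len(values)))
    return dict(zip(CMIP6_FACETS, values))

print(parse_cmip6_drs("CMIP6.CMIP.AWI.AWI-CM-1-1-MR.historical."
                      "r5i1p1f1.3hr.rldscs.gn.v20181218"))
\end{verbatim}
The same approach applies to the other project DRSs listed above, with their respective facet lists.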
\subsection{Analysis of DRS extensions}
\begin{table}[htb!]
\label{tab:drs-schema}
\small
\begin{tabular}{p{1.4cm} p{2.0cm} p{2.1cm} p{2.1cm} p{1.4cm} p{2.0cm} }
& \textbf{Provenance} & \textbf{Procedure} & \textbf{Sampling} & \textbf{Feature} & \textbf{Parameter} \\[2pt] \hline
\\
\textbf{CMIP5} & activity & model & frequency & realm & variable name \\
& product & experiment & & cmor\_table & \ \\
& institute & ensemble & \ & \ & \ \\[2pt] \hline
\\[-4pt]
\textbf{CORDEX} & {\textit{activity}} & domain & {\textit{frequency}} & \ & \ \\
& {\textit{product}} & driving\_model & \ & \ & \ \\
& {\textit{institute}} & experiment & \ & \ & \ \\
& & {\textit{ensemble}} & \ & \ & \ \\
& & {\textit{model}} & \ & \ & \ \\
& & rcm\_version & \ & \ & \ \\[2pt]
\textbf{HadOBS} & framework & \ & \ & \ & \ \\[2pt]
& collection & \ & \ & \ & \ \\[2pt]
\textbf{ESA-CCI} & project & processing level & time\_frequency & cci\_project & cci\_geo\_quantity$^*$ \\
& product\_string & sensor id & \ & \ & \ \\
& product\_version & platform id & \ & \ & \ \\
\ & \ & realization & \ & \ & \\[2pt] \hline
\\[-4pt]
\textbf{CMIP6} & mip\_era & source\_id & nom\_resolution$^*$ & realm & variable \\
 & activity & experiment\_id & sub-experiment & table\_id & cf\_std\_name$^*$ \\
 & model\_cohort & source\_type & grid\_label & \ & \ \\
 & product & variant\_label & frequency & \ & \ \\
 & institution\_id & & & & \\[2pt] \hline
\end{tabular}
\caption{DRS elements classified by schema element. (terms with $^*$ suffix have been abbreviated for presentation)}
%TODO: Expand table caption, move to later
\end{table}
\section{Improving navigability}
The benefits of using controlled vocabularies include flexibility, scalability and the linking of information subsystems.
One example of a mature controlled vocabulary within the climate science community is the Climate and Forecast (CF) standard names ?REF?. This is a list of variable names used in the climate and forecast community; each term has a precise spelling and definition. For example, air\_pressure\_at\_sea\_level has the definition ``sea\_level means mean sea level, which is close to the geoid in sea areas'' and canonical units of Pascals (Pa). Having precise definitions means that meteorologists and climate scientists anywhere in the world using this standard name know that they are referring to the same quantity. This becomes ever more important when considering more complex variables, for example radiative fluxes, which have a directional component and can be absolute or net. A name which clearly specifies the direction of the radiation and whether it is the absolute or net value is vital to ensure that variables are compared correctly and radiative budgets are calculated correctly.
Using CVs is an essential component of a DRS; however, CVs must be managed, and the complexity of this management varies. In the simple example of the ``realm'' facet of CMIP5 there are only seven terms in the CV. Where there are only a small number of terms they can be managed, for example, in GitHub, as is done for many of the CMIP6 controlled vocabularies, since they require minimal curation. In contrast, the CF standard name table (itself a CV) currently consists of around 6000 terms and is managed by community experts. In order to add a new standard name to the table, it must be proposed, moderated (by the community experts) and approved; this involves a substantial amount of effort and collaboration. It is recommended that, when using a CV in a DRS, terms should where possible be taken from existing CVs. In comparing the terms used in the ``frequency'' facet it has been noticed that different data producers sometimes use subtly different terminology. For example, it is not uncommon to see year and yr, or monthly and mon. While it is possible to use semantic web technologies such as SKOS to relate these terms, it would be most beneficial if terms were used consistently.
% Within the CMIP5 and CMIP6 projects, which are large international collaborations specialist documentation and distribution software has been used and is in development, these projects have necessarily had to develop and utilise large CVs.
A number of new controlled vocabularies have been defined for CLIPC. The content of these new controlled vocabularies has been defined in consultation with the data providers and curators. Provenance information is currently represented using PROV-O for new vocabularies to be incorporated into the NERC Vocabulary Server (NVS). The new vocabularies for CLIPC included defining conceptual schemes or themes such as the Global Climate Observing System (GCOS) Essential Climate Variable (ECV) domains and subdomains: atmospheric, terrestrial, atmospheric surface, atmospheric upper-air, atmospheric composition, oceanic surface, oceanic sub-surface. To reconcile the different vocabularies for the different climate data records the Simple Knowledge Organisation System (SKOS) is used to provide a mapping framework that links terms from different vocabularies using semantic mappings.
SKOS provides relational matches between two predefined vocabularies using the Resource Description Framework (RDF). The SKOS relationships between different vocabularies can be loosely defined as follows:
\begin{itemize}
\item associative: concepts are related, they may be approximately interchangeable; can be either close or exact relationships
\item hierarchical: concepts can be broader or narrower:
\begin{itemize}
\item broader: the current term has a more specific definition than the related term e.g. carbon dioxide has a broader relationship to greenhouse gases
\item narrower: the current term has a less specific definition than the related term e.g. atmospheric composition has a narrower relationship to greenhouse gases
\end{itemize}
\end{itemize}
Using SKOS, the internal hierarchical and associative mappings are defined; this allows terms within and across different controlled vocabularies to be related, greatly enriching the data search experience.
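As a concrete illustration, the sketch below records an associative (exact) match between two frequency terms and a hierarchical (broader) relationship between two concepts using the \texttt{rdflib} Python package; the vocabulary namespaces are invented for the example and do not correspond to the published NVS vocabularies:
\begin{verbatim}
# Illustrative only: the vocabulary URIs below are made up for this example.
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

VOCAB_A = Namespace("http://example.org/vocab-a/")
VOCAB_B = Namespace("http://example.org/vocab-b/")

g = Graph()
# Two frequency terms from different data producers meaning the same thing.
g.add((VOCAB_A.mon, SKOS.exactMatch, VOCAB_B.monthly))
# A hierarchical relationship: carbon dioxide is narrower than (has the
# broader concept) greenhouse gases.
g.add((VOCAB_A.carbon_dioxide, SKOS.broader, VOCAB_A.greenhouse_gases))

print(g.serialize(format="turtle"))
\end{verbatim}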
\subsection{The Climate Change Initiative (CCI) example}
The data reference syntax defined in Table~\ref{tab:cci} for the ESA CCI project was used not only in the CLIPC project but also in the ESA CCI portal.
Here a faceted search was implemented on the data, utilising linked data technologies to enhance search and discovery.
Example:
Screenshot:
%TODO: Lessons learned: CORDEX wild west, CMIP changing facets. Trade off between ease of use for producers and consumers.
\section{Summary and Future Work}
The first use of DRS in the CMIP5 project provided a comprehensive list of facets that were relatively simple to manage. A DRS:
\begin{itemize}
\item provides a unique identifier for the dataset,
\item provides a common terminology for a collection of datasets,
\item can aid filesystem management,
\item allows faceted searching.
\end{itemize}
Since then, many new projects have emerged that use the ESGF infrastructure for publication and thus need a well-designed DRS in order to provide a good-quality faceted data search. A number of problems are now emerging; they are detailed below.
There must be robust communication in this multi-disciplinary environment as this is a community exercise with technical constraints.
\subsection{Lessons learned}
\subsection{Governance}
\begin{itemize}
\item The importance of maintaining information over time, e.g. facet name inconsistencies such as table vs. table\_id.
\item The difficulty of information management in a globally distributed, differentially funded environment. Storing all the controlled vocabularies for each project in a central GitHub repository helps, but even this has its problems.
\item The social cost of the communication required to coordinate; shared controlled vocabularies are key to keeping this cost manageable.
\end{itemize}
\subsection{Flexibility and Futures}
\section*{SOME FIGURES}
The CLIPC portal allows users access to a visualisation tool.
\begin{figure}
\centering
\includegraphics[scale=0.2]{images/c4iportal.png}
\caption{Mapping tool from the CLIPC toolbox (if we use this one we will need to introduce the DRS for the impact indicators)}
\label{fig:c4i-mapping}
\end{figure}
Main CLIPC SEARCH
\begin{figure}
\centering
\includegraphics[scale=0.3]{images/CLIPC-seach.png}
\caption{CLIPC data search}
\label{fig:clipc-search}
\end{figure}
CLIPC as found in C4I
\begin{figure}
\centering
\includegraphics[scale=0.4]{images/CLIPC-search-in-C4I.png}
\caption{CLIPC data search through Climate for impacts portal}
\label{fig:clipc-c4i-search}
\end{figure}
CCI
\begin{figure}
\centering
\includegraphics[scale=0.3]{images/CCI.png}
\caption{CCI}
\label{fig:cci-portal}
\end{figure}
%% The Appendices part is started with the command \appendix;
%% appendix sections are then done as normal sections
%% \appendix
%% \section{}
%% \label{}
%% If you have bibdatabase file and want bibtex to generate the
%% bibitems, please use
%%
\bibliographystyle{elsarticle-harv}
\bibliography{biblio}
%% else use the following coding to input the bibitems directly in the
%% TeX file.
% \begin{thebibliography}{00}
% %% \bibitem[Author(year)]{label}
% %% Text of bibliographic item
% \bibitem[ ()]{}
% \end{thebibliography}
% \newpage
% \appendix \label{App:1}
% \LARGE\textbf{Appendix I: Facet definitions for projects within this paper}
% \normalsize
% \begin{longtable}{ p{1.8cm} | p{4.2cm} | p{5cm} | p{1.1cm} | p{3cm} }
% \hline
% \textbf{Project} & \textbf{Facet} & \textbf{Definition} & \textbf{Format} & \textbf{Value Example} \\ \hline
% \textbf{CMIP5} & & & & \\ \hline
% & activity & Model intercomparison activity or other data collection activity & CV & cmip5 \\ \hline
% & product & Type of CMIP output & CV & output1 \\ \hline
% & institute & Institute responsible for the model results & CV & MPI-M \\ \hline
% & model & Model used and its version & SF & MPI-ESM-LR \\ \hline
% & experiment & identifies either the experiment or both the experiment family & CV & abrupt4xCO2 \\ \hline
% & Frequency & Temporal frequency of output & CV & mon \\ \hline
% & modelling realm & High level modeling component & CV & atmos \\ \hline
% & MIP table & MIP reference lookup table for variable name & CV & Amon \\ \hline
% & ensemble member & Ensemble member reference code & SF & r1i1p1 \\ \hline
% & version number & Dataset publication version as date & SF & v20120602 \\ \hline
% & variable name & Abbreviation of variable name given by MIP table & CV & --many-- \\ \hline
% \textbf{CORDEX} & & & & \\ \hline
% & activity & Name of project & Fixed & cordex \\ \hline
% & product & Type of output & Fixed & output \\ \hline
% & domain & CORDEX domain name & CV & EUR-44 \\ \hline
% & institution & Acronym of the institution responsible for the simulation & CV & MOHC \\ \hline
% & GCMModelName & Driving model name or reanalysis data used as the driving data & CV & ECMWF-ERAINT \\ \hline
% & CMIP5ExperimentName & Driving experiment name, evaluation or CMIP5 experiment\_id & CV & evaluation \\ \hline
% & CMIP5EnsembleMember & Driving model ensemble member & SF & r0i0p0 \\ \hline
% & RCMModelName & Regional Climate Model name identifier (<instiution>-<regional model>) & CV & MOHC-HadGEM3-RA \\ \hline
% & RCMVersionID & Regional Climate Model version identifier & Free string & v1 \\ \hline
% & Frequency & Temporal frequency of output & CV & 3hr \\ \hline
% & variable name & CMIP5 variable name abbreviation & CV & clt \\ \hline
% & version number & Dataset publication version as date & SF & v20120602 \\ \hline
% & & & & \\ \hline
% \textbf{CCI} & & & & \\ \hline
% & project & Project name & Fixed & clipc \\ \hline
% & product & product type & Fixed & esacci \\ \hline
% & cci\_project & The ESA CCI Essential Climate Variable project & CV & CLOUD \\ \hline
% & time\_frequency & Temporal frequency of output, yr, mon, day, etc & CV & day \\ \hline
% & processing\_level & The processing level of the data, e.g. L3, L4 & CV & L3U \\ \hline
% & data\_type & The parameter assessed & CV & CLD\_PRODUCTS \\ \hline
% & sensor\_id & The remote sensing instrument name & CV & MODIS \\ \hline
% & platform\_id & The satellite platform that the sensor instrument is on & CV & Aqua \\ \hline
% & product\_string & Product name, sometimes an algorithm & SF & MODIS\_AQUA \\ \hline
% & product\_version & product version number & SF & 1-0 \\ \hline
% & realization & ensemble version & SF & r1 \\ \hline
% & version number & Dataset publication version as date & SF & v20120704 \\ \hline
% & & & & \\ \hline
% \textbf{HadOBS} & & & & \\ \hline
% & project & CLIPC & Fixed & clipc \\ \hline
% & product & Type of product & CV & insitu \\ \hline
% & inst & Institute & CV & MOHC \\ \hline
% & framework & Dataset framework & CV & HadOBS \\ \hline
% & collection & Name of data product & CV & HadISD \\ \hline
% & frequency & Temporal frequency of output & CV & subdaily \\ \hline
% & table & ? include - not been needed? & & \\ \hline
% & realization & Ensemble member & SF & r1 \\ \hline
% & product\_version & Product version number & SF & 1-0-4-2015p \\ \hline
% & version & Dataset publication version as date & SF & v20151231 \\ \hline
% \end{longtable}
\end{document}
\endinput
%%
%% End of file `elsarticle-template-harv.tex'.
\section{Workflow}
\label{S:WORKFLOW}
\begin{figure}[h]
\centering
\scalebox{0.7}{
\!\!\!\!\!\!\!\!\begin{tikzpicture}
% Node position
\node[paraamber](rawdata){\begin{tabular}{c} .MAT \\ and \\ .CSV data files \end{tabular}};
\node[esamber, below of=rawdata, yshift=-1.3cm] (preprocess) {\begin{tabular}{c} pre- \\ processing \end{tabular} };
\node[test, below of=preprocess, yshift=-1.3cm] (testpreprocess) {? };
\node[esbisque, right of=testpreprocess, xshift = 1.75cm ] (configmodel) { \begin{tabular}{c} model \\ configuration \end{tabular} };
\node[esbisque, right of=configmodel, xshift = 1.75cm ] (buildmodel) { \phantom{} build model \phantom{} };
\node[esbabyblueeyes, right of=buildmodel, xshift=1.75cm] (paramlearning){\begin{tabular}{c} parameters \\ learning \end{tabular}};
\node[parawhite, below of=paramlearning, xshift=0cm, yshift = -1.5cm] (inputcfg){\begin{tabular}{c} CFG\_*.m \\ DATA\_*.mat \end{tabular}};
\node[test, right of=paramlearning, xshift = 1.5cm] (testparamlearning) {?};
\node[esceladon, right of=testparamlearning, xshift = 1.5cm] (stateestimation) {\begin{tabular}{c} states \\ estimation \end{tabular}};
\node[test, right of=stateestimation, xshift = 1.5cm] (teststateestimation) {?};
\node[parawhite, right of=teststateestimation, xshift = 2.25cm] (results) {\begin{tabular}{c} CFG\_*.m \\ DATA\_*.mat \\ RES\_*.mat \\ PROJ\_*.mat \\ LOG\_*.txt \end{tabular}};
\node[user, below of=testpreprocess, yshift=-2.5cm] (user1) { \includegraphics[height=0.45cm]{./docfigs/user_logo.png}};
\node[user, below of=configmodel, yshift=-2.5cm] (user2) { \includegraphics[height=0.45cm]{./docfigs/user_logo.png}};
\node[user, below of=testparamlearning, yshift=-2.5cm] (user3) { \includegraphics[height=0.45cm]{./docfigs/user_logo.png}};
\node[user, below of=teststateestimation, yshift=-2.5cm] (user4) { \includegraphics[height=0.45cm]{./docfigs/user_logo.png}};
\node[eslightgray, above of=paramlearning, yshift=4cm, xshift=0cm](syntheticdatacreation) {\phantom{} synthetic data creation \phantom{}};
% Define path
\path[->, thick] (rawdata)edge(preprocess);
\path[-, thick] (preprocess)edge(testpreprocess);
\path[<->, thick] (testpreprocess)edge(user1);
\path[-, thick] (configmodel)edge(buildmodel);
\path[<->, thick] (configmodel)edge(user2);
\path[-, thick] (buildmodel)edge(paramlearning);
\path[-, thick] (paramlearning)edge(testparamlearning);
\path[<->, thick] (testparamlearning)edge(user3);
\path[-, thick] (testparamlearning)edge(stateestimation);
\path[-, thick] (stateestimation) edge (teststateestimation);
\path[<->, thick] (teststateestimation)edge(user4);
\path[-, thick] (testpreprocess) edge node[anchor=center, above] { yes} (configmodel);
\path[->, draw, thick] (testpreprocess.west) -| (-1.25cm,-3cm) |- node[anchor=center, above, rotate=90, fill=none]{ no} (preprocess.west);
\path[->, thick] (teststateestimation) edge node[anchor=center, above] { yes} (results);
\path[->, draw, thick] (testparamlearning.north) |- (9cm,-2.5cm) -| node[anchor=center, above, rotate=0, fill=none]{ no} (paramlearning.north);
\path[-, draw, thick] (testparamlearning) edge node[anchor=center, above] { yes} (stateestimation);
\path[->, draw, thick] (teststateestimation.north) |- (6cm,-2cm) -| node[anchor=center, above, rotate=0, fill=none]{ no} (buildmodel.north);
\path[->, draw, thick] (inputcfg) edge (paramlearning);
\path[->, draw, thick, dashed] (buildmodel.east) -| (6.75cm, -0.75cm) -| (syntheticdatacreation.south);
\path[->, draw, thick, dashed] (paramlearning.east) -| (9.75cm, -0.75cm) -| (syntheticdatacreation.south);
\path[->, draw, thick, dashed] (stateestimation.east) -| (14.5cm, -0.75cm) -| (syntheticdatacreation.south);
\path[->, draw, thick, dashed] (syntheticdatacreation.north) |- (9cm, 2cm) -| (rawdata.north);
\path[->, draw, thick, dashed] (syntheticdatacreation.north) |- (18cm, 2cm) -| (results.north);
% Rectangle
\draw [draw=amber, line width=0.5mm, dashed ] (-3cm, -3cm) rectangle ++(5cm , 4cm );
\draw [draw=bisque, line width=0.5mm, dashed ] (0.95cm,-5.55cm) rectangle ++(5.85cm , 2cm );
\draw [draw=babyblueeyes, line width=0.5mm, dashed ] (7cm,-5.55cm) rectangle ++(2.5cm , 2cm );
\draw [draw=celadon, line width=0.5mm, dashed ] (11.85cm,-5.55cm) rectangle ++(2.75cm , 2cm );
\draw [draw=lightgray, line width=0.5mm, dashed ] (6cm,-0.5cm) rectangle ++(4.5cm , 1.75cm );
% Text
\node[ above of = rawdata, xshift = -0.5cm, yshift = 0.325 cm, fill=white] { Section~\ref{S:DATALOADING} and~\ref{S:DATAEDITINGPREPROCESSING}};
\node[ above of = buildmodel, xshift = -1cm, yshift = 0.325 cm, fill=white] { Section~\ref{S:MODELCONFIGURATION} and~\ref{S:MODELCONSTRUCTION}};
\node[above of = paramlearning, xshift = 0cm, yshift = 0.325 cm, fill=white] { Section~\ref{S:PARAMESTIMATION} };
\node[above of = stateestimation, xshift = 0cm, yshift = 0.325 cm, fill=white] { Section~\ref{S:HIDDENSTATESESTIMATION} };
\node[above of = syntheticdatacreation, xshift = 0cm, yshift = 0.325 cm, fill=white] { Section~\ref{S:SYNTHETIC}};
\end{tikzpicture} }
\caption{OpenBDLM workflow}
\label{FIG:OpenBDLMworkflow}
\end{figure}
\input{section/OpenBDLMDataLoading.tex}
\input{section/OpenBDLMDataEditingPreProcessing}
\input{section/OpenBDLMModelConfiguration}
\input{section/OpenBDLMModelConstruction}
\input{section/OpenBDLMParamEstimation}
\input{section/OpenBDLMHiddenStatesEstimation}
\documentclass{finalreport}
\usepackage[utf8]{inputenc}
\DeclareSourcemap{
\maps[datatype=bibtex]{
\map{
\step[fieldsource=url,
match=\regexp{\\_},
replace=\regexp{_}]
}
}
}
\addbibresource{citations.bib}
%%% Template Usage
% 1. Go to "All Projects" and make a copy in Overleaf,
% or download the source to modify locally.
% 2. Fill in your name
% 3. Set the reportdate to Monday of the current week
\title{Using Toy Networks to Model Disruptions in Science Networks}
\author{Edris Qarghah}
\DTMsavedate{startdate}{2020-07-07}
\DTMsavedate{enddate}{2020-09-15}
\date{\DTMusedate{startdate}-\DTMusedate{enddate}}
\makenoidxglossaries
\newglossaryentry{esnet}{name={ESnet},
description={A high-bandwidth network, managed by the Lawrence Berkeley National Laboratory, that connects scientists at national laboratories, universities and other research institutions within the US}}
\newglossaryentry{bridge}{name={bridge-connected},
description={A subgraph connected to the rest of the graph by a single edge (a bridge\cite{bridge}), which would become a separate \gls{component} if that edge were removed}}
\newglossaryentry{betweenness}{name={Betweenness Centrality},
description={The centrality\cite{centrality} of a vertex in a graph can be defined in many ways. Betweenness centrality does so by calculating the number of paths along which a given node is essential (i.e., without the node, there would no longer be a connection between two other nodes)}}
\newglossaryentry{boundary}{name={boundary\cite{boundary}},
plural={boundaries},
description={A \gls{node} or set of nodes that are between two subgraphs}}
\newglossaryentry{hub}{name={hub\cite{hub}},
description={A \gls{node} with significantly more links than average}}
\newglossaryentry{degree}{name={degree\cite{degree}},
description={The number of edges connected to a node.}}
\newglossaryentry{anomaly}{name={anomaly detection},
description={The identification of events on the network that are outside the norm}}
\newglossaryentry{connected}{name={connected\cite{connectivity}},
description={A graph is said to be connected, if all nodes can be reached from all other nodes.}}
\newglossaryentry{kconnected}{name={$k$-connected\cite{connectivity}},
description={A graph is said to be $k$-connected, if removing $k$ edges somewhere in the graph would result in the graph no longer being \gls{connected}}}
\newglossaryentry{component}{name={component\cite{connectivity}},
description={A cohesive subgraph containing any number of connected nodes. A \gls{connected} graph has one component (the entire graph), but a \gls{kconnected} graph would break into more, smaller components if particular sets of $k$ edges were cut}}
\newglossaryentry{endpoint}{name={endpoint},
description={The devices that serve as the source or destination of a transmission along the network. While any device could serve as an endpoint, the only endpoints we are concerned with (as they are the only ones regarding which we have data), are \glspl{psnode}}}
\newglossaryentry{hop}{name={hop},
description={Each step along the network from one device to the next (and sometimes within the same device). The hop is usually documented via \gls{trace} and is identified by the \gls{ip} of the device the hop arrives at}}
\newglossaryentry{ipadd}{name={IP address\cite{ip}},
description={An Internet Protocol (IP) address is a 32-bit (IPv4) or 128-bit (IPv6) number that identifies a device on a network}}
\newglossaryentry{latency}{name={latency},
plural={latencies},
description={The amount of time it takes for one bit of data to travel along a network from one \gls{endpoint} to another}}
\newglossaryentry{tomography}{name={network tomography},
description={The study of the shape, state and other characteristics of a network using only data gathered from limited set of \glspl{endpoint}}}
\newglossaryentry{node}{name={node},
description={A device connected to the network which may serve as a \gls{hop} between two \glspl{endpoint}}}
\newglossaryentry{packetloss}{name={packet loss},
description={The percentage of packets of data that failed to reach their destination. The \gls{tcp} identifies and re-transmits lost packets, but this can slow transmission down to a trickle}}
\newglossaryentry{delay}{name={one-way delay},
description={The amount of time it takes for data to be transmitted from a source \gls{ps} node to a destination. This is measured by the difference in clock measurements between \glspl{endpoint}}}
\newglossaryentry{perfsonar}{name={perfSONAR\cite{perfsonar}},
description={Short for performance Service-Oriented Network monitoring ARchitecture, it is a network measurement toolkit designed to provide federated coverage of paths and help to establish end-to-end usage expectations}}
\newglossaryentry{psnode}{name={perfSONAR (PS) node},
description={Network \glspl{endpoint} equipped with \gls{perfsonar} that send regular communications to one another in order to collect network measurements (i.e., \gls{packetloss}, \gls{latency} and \gls{owd})}}
\newglossaryentry{trace}{name={traceroute},
description={Data collected about the \glspl{hop} between two \glspl{endpoint}}}
\newacronym{lhc}{LHC}{Large Hadron Collider\cite{lhc}}
\newacronym{owd}{OWD}{\gls{delay}}
\newacronym{ps}{PS}{\gls{perfsonar}}
\newacronym{tcp}{TCP}{Transmission Control Protocol\cite{tcp}}
\newacronym{ip}{IP}{Internet Protocol, though usually synechdoche for \gls{ipadd}}
\begin{document}
\maketitle
\newpage
\section{Abstract}
Tools exist for monitoring networks, but they often rely on measurements taken from a finite number of endpoints on the periphery of the network. These measurements enable the identification of network disruptions, but do not provide direct information on the status of individual connections on the network, making it difficult to pinpoint the location of those disruptions.
In this paper an approach is discussed for creating toy networks, adjusting them to resemble real network distributions and using them to model approaches to network tomography. We also discuss how to adapt these strategies for use with a real network by creating a representation of that network that is analogous to our toy model.
\section*{Introduction}
The \gls{lhc} at CERN\cite{cern} produces exabytes of data that is disseminated to high-energy physics labs around the world for analysis. The network that supports this transmission, which includes \gls{esnet}\cite{esnet} in the United States, is decentralized and consists of approximately $6,000$ individual \glspl{node}\footnote{A survey we conducted of \glspl{trace} from 7/7/2020 to 7/14/2020 found $5968$ unique \glspl{node}.}.
To monitor the activity on the network, certain \glspl{endpoint} are configured as \glspl{psnode}\cite{perfsonar} (around $420$ of them\footnote{The aforementioned survey found $423$ \gls{ps} nodes.}). These regularly transmit data to one another\footnote{The aforementioned survey found \gls{trace} between $24,503$ pairs of \gls{ps} nodes.} and record characteristics of those transmissions, such as \gls{owd}, \gls{packetloss} and \gls{latency}.
With this limited visibility, if there is a disruption somewhere along the network, it can be hard to pin down where the problem is, potentially leaving entire regions of researchers with slow or limited access to other regions for months at a time.
\subsection*{Identifying Network Disruptions}
This paper represents work toward the goal of developing better methods of determining the source of disruptions on the network used by the high-energy physics community. To address this, we need to:
\begin{itemize}
\item Identify that there \textit{is} an issue.
\begin{itemize}
\item This is done via \textbf{\gls{anomaly}}, the identification of events that are outside the norm. In our case, this would mean looking at \gls{packetloss}, \gls{owd} and other metrics to see whether any abnormalities may indicate there is a problem that needs to be addressed.
\end{itemize}
\item Identify \textit{where} the issue is occurring.
\begin{itemize}
\item To do this, we need to have an understanding of the topology of the network, which is achieved via the use of \textbf{\gls{tomography}}, which is the study of the shape, state and other characteristics of a network using only data gathered from limited set of \glspl{endpoint} (i.e., \glspl{psnode}).
\end{itemize}
\end{itemize}
\noindent In this paper, we are concerned with the latter problem, which we approached in three steps:
\begin{enumerate}
\item We created a toy network to use as a model.
\item We developed \gls{tomography} strategies using our toy model.
\item We created a representation of the real network that mirrors our toy model, so that tools/strategies can be adapted for use with real data.
\end{enumerate}
\section{Toy Network}
\subsection{Motivation}
We have limited visibility into the real network we are working with, as we only have the data collected by \gls{ps} nodes. Such data is incomplete, complicated and messy, so it is not ideal for testing ideas regarding network structure or determining causal relationships in network phenomena. Real data is subject to change for reasons that are sometimes unknowable and often completely outside our control.
This is what motivated the construction of a toy network, one where we could define all nodes and their connections. With such a model, we can make changes, observe the impact on various facets of the network and be certain this impact was caused by our changes. The \gls{tomography} strategies we develop in this controlled environment can then by adapted for use with the real network.
\subsection{Initial Parameters}
It is impractical to create a toy network on the same scale and with the same level of detail as the real one, so we started with the following parameters:
\begin{itemize}
\item The network consisted of 100 \glspl{node}.
\item 10 random nodes were selected to ``host'' \gls{perfsonar}.
\item Each node was a coordinate on the $x,y$ plane.
\item Each node had a random number of connections (up to 4) to its nearest neighbors.
\item The ``\gls{latency}'' for each link was the geometric distance between its endpoints.
\end{itemize}
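A minimal sketch of this first, na{\"i}ve version of the generator (using the \texttt{networkx} package described in~\nameref{app:tools}) might look as follows; the real generator evolved considerably beyond this, as described below:
\begin{verbatim}
# Sketch of the naive first toy network: random coordinates, up to 4
# nearest-neighbour links per node, Euclidean distance as "latency".
import math, random
import networkx as nx

random.seed(0)
coords = {i: (random.uniform(0, 100), random.uniform(0, 100))
          for i in range(100)}

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

G = nx.Graph()
for i, xy in coords.items():
    G.add_node(i, pos=xy)
for i in G.nodes:
    neighbours = sorted((n for n in coords if n != i),
                        key=lambda n: dist(i, n))
    for n in neighbours[:random.randint(1, 4)]:
        G.add_edge(i, n, weight=dist(i, n))

ps_nodes = random.sample(list(G.nodes), 10)   # the "perfSONAR" endpoints
\end{verbatim}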
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\linewidth]{week_1/Network_416b9f8671a64f04a87bbb59c431dc28.png}
\caption{Our first networks had no guarantee of being \gls{connected}, as seen by nodes 94, 8, 48 and 30.}
\end{figure}
\noindent There were some issues with such a na{\"i}ve approach:
\begin{itemize}
\item There were no long edges (we know this doesn't resemble reality, even without exploring real data, because of transatlantic cables\cite{transatlantic}).
\item There was no guarantee that all \glspl{component} would be \gls{connected} (for a cluster of nodes, the nearest neighbor to all those nodes may only be within that cluster).
\item The toy network was not grounded in any information about the real world (i.e., it may not resemble the real network at all, in which case it wouldn't be a very good proxy).
\item The ``\glspl{psnode}'' were indistinguishable from others and had no additional functionality.
\end{itemize}
\subsection*{Incremental Improvements}
As we developed our toy network, we made a wide variety of incremental improvements. The list below highlights changes in roughly chronological order.
\begin{itemize}
\item We created \gls{hub} nodes that served as a backbone for the network, which were randomly placed in quadrants and quadrants within those quadrants, recursively (it can be configured to any number of layers deep, though we ultimately settled on 5). These hub nodes were connected by edges to the hub nodes within their respective sub-quadrants, ensuring that there were some longer edges and some structure to the network.
\item We made sure that the network was \gls{kconnected} ($k$ could be configured, though we used $1$-connected), which is to say that there is at least one \gls{component} that could be separated from the rest of the network by cutting $k$ edges.
\item We colored special nodes (i.e., \gls{hub} and \gls{ps} nodes).
\item We found the shortest paths between \gls{ps} nodes, using the Euclidean distances between the nodes along the path.
\item We gave each edge a \gls{packetloss} (and displayed it) based on a distribution pulled from \hyperref[kibana]{Kibana}, but made the mistake of applying the distribution to individual edges and not entire paths.
\end{itemize}
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\linewidth]{week_2/NetworkZero.png}
\caption{\Gls{hub} (lilac) and \gls{ps} nodes (brown) are colored and there are no unconnected \glspl{component}, but there are components without \gls{ps} nodes (which would never have been seen/traversed on a real network) and the \gls{packetloss} along the edges proved unrealistic.}
\end{figure}
\begin{itemize}
\item We ensured that all \gls{degree}-$1$ nodes are \gls{ps} nodes (if a node is only connected to the network by a single edge, then the only way that node would ever be seen is if that node is itself a \gls{ps} node), but this didn't account for components that didn't contain a \gls{ps} node.
\item We made sure that all \gls{bridge} components (i.e., \glspl{component} that could be separated from the rest of the network by removing a single edge) have at least one \gls{ps} node (otherwise there would be no reason to ever traverse such a component).
\item We used low \gls{betweenness} (roughly a measure of how many paths would be disrupted if a node is removed) as a \gls{ps} selection criterion to ensure that \gls{boundary} nodes (ones connecting a \gls{component} to the rest of the network) are not selected.
\item We distributed \gls{ps} nodes to \glspl{component} proportional to the size of the component (to improve dispersion of nodes).
\end{itemize}
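A simplified sketch of two of these selection rules (degree-$1$ nodes and low \gls{betweenness}), assuming an existing \texttt{networkx} graph \texttt{G}, is shown below; the full selection logic also handles \gls{bridge} components and per-component quotas:
\begin{verbatim}
# Sketch: pick PS nodes, preferring degree-1 nodes and then nodes with the
# lowest betweenness centrality (so boundary nodes are avoided).
import networkx as nx

def select_ps_nodes(G, n_ps=10):
    # Degree-1 nodes can only ever appear in traces if they are PS endpoints.
    ps = {v for v, d in G.degree() if d == 1}
    centrality = nx.betweenness_centrality(G, weight="weight")
    for v in sorted(centrality, key=centrality.get):
        if len(ps) >= n_ps:
            break
        ps.add(v)
    return ps
\end{verbatim}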
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\linewidth]{week_3/04 - Component Issues.png}
\caption{The \glspl{component} (circled green) have a \gls{ps} node (circled red), but that node is the \gls{boundary}, so there is no reason the rest of that component would ever be traversed.}
\end{figure}
\begin{itemize}
\item Added versioning to graphs, so multiple versions of the same graph can be compared.
\item Made various improvements to increase graph readability and interpretability:
\begin{itemize}
\item Increased font size.
\item Changed \gls{ps} nodes to blue.
\item Thickened edges and added banded colors along paths between \gls{ps} nodes, so that you can see where they diverge.
\item Added ability to specify the paths to highlight.
\item Removed edge labels.
\end{itemize}
\end{itemize}
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\linewidth]{final/color.png}
\caption{Making graphs more readable was helpful in providing a means of visually validating results and troubleshooting as we worked on \gls{tomography}.}
\end{figure}
\subsection*{Incorporating Real Distributions}
Once we had a functioning toy network, we had to determine whether it was in any way a reasonable representation of the real network. In order to do this, we pulled information about the emergent characteristics of the toy network and the same information from the real network, first via \hyperref[kibana]{Kibana} and then directly using the \texttt{\hyperref[es]{elasticsearch}} client in Python (you can read more about these in ~\nameref{app:tools}).
There were three primary metrics we looked at, which all pertained to the paths between \gls{ps} nodes (because that is the only kind of information we have regarding the real network):
\begin{itemize}
\item The number of \glspl{hop} (how many nodes were along the path to any given destination from any given source).
\item The total \gls{latency} (sum of the edge lengths, which, in the toy network, was simply Euclidean distance).
\item The percent \gls{packetloss} (the product of \gls{packetloss} along each edge of a path).
\end{itemize}
In the real data, the number of routes with any given \gls{packetloss} was heavily skewed toward $0$, with a small spike at $100\%$ (when a route had \gls{packetloss}, it was much more likely to have lost all packets). By contrast, looking at all the \gls{ps} pairs for one toy network, we discovered that we were nearing $100\%$ \gls{packetloss}, because we failed to account for the multiplicative nature of \gls{packetloss}.
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{week_2/packetloss.png}
\caption{The count of paths is on a logarithmic scale, so is more heavily skewed than it appears at first glance.}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{final/toypacketloss.png}
\caption{Most paths in our network were originally approaching $100\%$ \gls{packetloss}.}
\end{figure}
The \gls{latency} in the real network proved problematic, as there were paths with negative latency, a peak near $0$ latency and what looked like exponential decay thereafter. It should be impossible to have even $0$ latency, as there is inherently some delay in communicating information over any distance, so this data was clearly erroneous and likely a consequence of clock sync issues.
\pagebreak
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{week_4/ExponentialDecay.png}
\caption{The real network had $0$ and negative \glspl{latency}, which is impossible, accompanied by what appeared to be exponential decay.}
\end{figure}
As a stop-gap measure, for want of a better solution, we worked under the (probably incorrect) assumption that all \glspl{latency} were simply offset by roughly $50$ ms. We also took the square root of the counts, to create a better comparison with the scale of the toy network. Though not perfect, we were able to generate some toy networks that roughly resembled this modified distribution.
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{week_4/ToyLatencyHist.png}
\caption{The blue dots represent the square root of the counts of the real network paths with a given \gls{latency}, normalized to a max of 6. The orange histogram represents the count of paths in a given \gls{latency} range.}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{week_4/Hops.png}
\caption{Though the counts required normalization, the average number of \glspl{hop} between \gls{ps} nodes was actually a fairly decent match for what was emerging from our procedurally generated toy networks.}
\end{figure}
\pagebreak
\section*{Tomography on the Toy Network}
Once we'd built out a fairly robust toy network, one that at least somewhat resembled the real one, and the tools to manipulate it (e.g., remove edges, calculate shortest paths, etc.), we were able to manipulate the network, observe changes in network metrics and determine whether it was possible to infer what had been manipulated from those results. The primary case we considered was edge deletion.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{final/OG_78_36.png}
\caption{A toy network, $G$, before any edges have been removed.}
\end{figure}
\noindent We used the following process to determine which edges were deleted:
\begin{enumerate}
\item Determine the shortest paths between all pairs of \gls{ps} nodes in a toy network, $G$.
\item Make a copy of that network, $H$.
\item Delete a single edge in $H$.
\item Determine the shortest paths between all pairs of \gls{ps} nodes in $H$.
\item Compare the shortest paths in $H$ to those in $G$:
\begin{enumerate}
\item Record edges on each path in $G$ that aren't on the corresponding path in $H$ (i.e., edges that were potentially removed).
\item Confirm that those edges were entirely removed from $H$ (i.e., they are not on any other paths in $H$).
\item Determine how many different paths each edge was removed from.
\begin{itemize}
\item It's likely that the edge removed from the most paths is the deleted edge.
\item If multiple edges were removed an equal number of times, there is ambiguity as to which was deleted.
\end{itemize}
\end{enumerate}
\item Repeat 2-5 for each edge in the network.
\end{enumerate}
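The procedure above can be sketched compactly with \texttt{networkx}; the fragment below handles a single deleted edge and assumes the graph remains \gls{connected} after the deletion (so that shortest paths always exist):
\begin{verbatim}
# Sketch of steps 2-5: delete one edge, recompute PS-to-PS shortest paths,
# and vote for the edges that vanished from the old paths.
from collections import Counter
import networkx as nx

def edge_key(u, v):
    return tuple(sorted((u, v)))        # ignore edge direction

def path_edges(path):
    return {edge_key(u, v) for u, v in zip(path, path[1:])}

def locate_deleted_edge(G, ps_nodes, deleted_edge):
    H = G.copy()
    H.remove_edge(*deleted_edge)
    pairs = [(a, b) for i, a in enumerate(ps_nodes) for b in ps_nodes[i+1:]]
    new_paths = {p: nx.shortest_path(H, *p, weight="weight") for p in pairs}
    used_in_H = set().union(*(path_edges(path) for path in new_paths.values()))
    votes = Counter()
    for pair in pairs:
        old = path_edges(nx.shortest_path(G, *pair, weight="weight"))
        for e in old - path_edges(new_paths[pair]):
            if e not in used_in_H:      # step 5b: gone from *all* paths in H
                votes[e] += 1
    # The deleted edge is most likely the one removed from the most paths.
    return votes.most_common()
\end{verbatim}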
While experimenting with this process, we also drew the network at each phase and highlighted paths, so that we could see the removed edge and the paths that were rerouted as a result. We plotted the impact of these changes on network metrics like \gls{latency}, \gls{packetloss} and the number of \glspl{hop} on paths.
\Gls{latency} necessarily increased, as we determined shortest path based on latency, so an edge removed from that path could not be replaced by one with smaller latency. It was quite possible, however, for the total number of \glspl{hop} or the \gls{packetloss} to go down as a result of moving edges, because in our toy network these were not factors in determining paths.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{final/New_78_36.png}
\caption{The toy network, $H$, a copy of $G$ with the edge from $78$ to $36$ removed. Note that $(78, 52)$ and $(52, 36)$ are now on a path and that some paths are still using $(36, 33)$ and $(33, 70)$. The significance of that second point is highlighted in the next figure.}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{final/Traceroute-78_36.png}
\caption{Only 5 edges were removed from paths as a result of the change. It is pretty clear that $(78, 36)$ is the removed edge, as it was removed from more than 4 times as many routes as any other. Furthermore, we can factor out edges that appear along some other path in $H$, such as $(36, 33)$ and $(33, 70)$.}
\end{figure}
In a real network, a connection between successive \glspl{hop} is rarely completely severed. This strategy would need to be adapted to consider some performance threshold on the real network as being equivalent to an edge being deleted in the toy model.
\section{Modelling the Real Network}
In order to begin applying the lessons learned from the toy network to the real one, we first needed a comparable representation of the real network or some part of it. The data aggregated by real \gls{ps} nodes are indexed several different ways in \hyperref[es]{ElasticSearch} to make it easier to track different characteristics. The two indices we were particularly concerned with were \texttt{trace\_derived\_v2}, which has summative data about all paths in the network over the entirety of our records, and \texttt{ps\_trace}, home of the individual \gls{trace} records the former is derived from.
From the aggregated data in \texttt{trace\_derived\_v2}, we were able to get a list of all \gls{ps} nodes that have served as a source or a destination. Originally, we looked to whittle down these pairs to form a list of paths to look for in the \texttt{ps\_trace} data. For example, we removed ones that had an average number of hops that was less than $1$ (unlike clock sync, we can't really come up with a reason why this would even be recorded as such, but it is clearly impossible to get from one endpoint to another without taking any hops).
We ultimately scrapped the approach of excluding routes based on particular criteria in favor of collecting data on all routes over a given time period and building a core network from that. Having collected $7$ days worth of \texttt{ps\_trace} data, we looked at how long, on average, the path between pairs of \gls{ps} nodes stayed stable.
The results were encouraging, as the majority of paths lasted the entire $168$ hours without changing even once. When counting pairs of \gls{ps} nodes that had shorter average path lives, the number of such pairs dropped precipitously as the average path life decreased.
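A sketch of the path-life computation for a single pair of \gls{ps} nodes is shown below; the input format (a chronologically ordered list of observed routes) is a simplification of the actual \texttt{ps\_trace} records:
\begin{verbatim}
# Sketch: average time a PS-pair's route stayed unchanged over the window.
def average_path_life(observations, window_hours=168):
    """observations: chronologically ordered list of hop tuples."""
    changes = 0
    previous = None
    for hops in observations:
        if previous is not None and hops != previous:
            changes += 1
        previous = hops
    # A route that never changed lives for the whole window; every change
    # splits the window into one more segment.
    return window_hours / (changes + 1)
\end{verbatim}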
\begin{figure}[!ht]
\centering
\includegraphics[width=.85\linewidth]{final/pathlife.png}
\caption{Most of the $22,606$ pairs of \gls{ps} nodes had a path that remained stable through the entire $7$ days of data (from $7/7/20-7/14/20$).}
\end{figure}
After determining that most routes are fairly stable, we identified which edges were most central to the network, as follows:
\begin{enumerate}
\item We created a list of all unique routes and their frequency.
\item We created a list of edges on each route, by combining all pairs of adjacent \glspl{hop}.
\item For each edge, we made a weighted sum of the number of unique routes it occurred on (weighted by the frequency of that route's occurrence).
\item We then sorted this list of edges in descending order by count, to get a list of edges from the most frequent (a measure of centrality) to the least frequent.
\end{enumerate}
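A sketch of this weighted counting, assuming the routes have already been reduced to a dictionary mapping each unique hop sequence to its frequency, is shown below:
\begin{verbatim}
# Sketch: weighted edge frequencies across all observed routes.
from collections import Counter

def edge_frequencies(route_counts):
    """route_counts: dict mapping a tuple of hop IPs to its frequency."""
    freq = Counter()
    for route, count in route_counts.items():
        for edge in zip(route, route[1:]):
            freq[edge] += count          # weighted by route frequency
    # Most central (most frequently traversed) edges first.
    return freq.most_common()
\end{verbatim}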
After creating a list of the most frequent edges, we took two different approaches to visualizing them. For both we used the \href{https://networkx.github.io/documentation/stable/reference/generated/networkx.drawing.layout.spring_layout.html}{spring layout} provided by the \hyperref[nx]{\texttt{networkx}} module, but in the first case we graphed only the $n$ most frequent edges, without creating a complete graph of the network. The results showed that the most central edges did tend to be connected, but that there were smaller high frequency clusters of edges that were presumably within a particular region (e.g., the \gls{ps} nodes in the UK are all configured to send messages to each other more frequently than to other nodes on the network).
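A sketch of this drawing step, assuming the weighted edge list produced above, is shown below; it uses the same \texttt{networkx} spring layout and \texttt{matplotlib} drawing helpers described in~\nameref{app:tools}:
\begin{verbatim}
# Sketch: draw only the n most frequent edges with a spring layout.
import matplotlib.pyplot as plt
import networkx as nx

def draw_top_edges(edge_freq, n):
    """edge_freq: list of ((src, dst), count) pairs, most frequent first."""
    H = nx.Graph([e for e, _ in edge_freq[:n]])
    pos = nx.spring_layout(H, seed=0)
    nx.draw(H, pos, node_size=10, width=0.5)
    plt.show()
\end{verbatim}
To reproduce the second, fixed-layout variant discussed later, the layout is instead computed once for the graph of all edges and only the selected edges and their endpoints are drawn against those positions.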
\begin{figure}[!ht]
\centering
\includegraphics[width=.48\linewidth]{final/n00400.png}
\includegraphics[width=.48\linewidth]{final/n00800.png}
\includegraphics[width=.48\linewidth]{final/n01200.png}
\includegraphics[width=.48\linewidth]{final/n01600.png}
\caption{Redrawing the network with the $400$ most central edges, the $800$ most central and so on, shows that there are clearly clusters of central edges and ones on the periphery probably represent more localized networks.}
\end{figure}
This activity on the periphery eventually decreased (i.e., the clusters became connected to the larger network) as we increased how many of the most frequent edges we considered. While this was expected, we also began to note a less expected phenomenon.
The number of most central edges actually being plotted was not quite tracking with the number of edges being supplied, and the discrepancy became more apparent as the number of edges to be plotted increased. Upon investigation, the reason appeared to be symmetric edges, which were supplied twice (e.g., once as $(a, b)$ and once as $(b, a)$) but only graphed once. We would have expected this to be a very common occurrence, yet, upon further investigation, only about $2\%$ of edges were symmetric.
\begin{figure}[!ht]
\centering
\includegraphics[width=.46\linewidth]{final/n02000.png}
\includegraphics[width=.46\linewidth]{final/n04000.png}
\includegraphics[width=.46\linewidth]{final/n06000.png}
\includegraphics[width=.46\linewidth]{final/n08000.png}
\includegraphics[width=.46\linewidth]{final/n10000.png}
\includegraphics[width=.46\linewidth]{final/n22817.png}
\caption{As the number of edges increased, the number of edges that remained unconnected decreased.}
\end{figure}
There was an interesting phenomenon once we finally reached a complete graphing of the network: the graph clearly split into two distinct clusters. This presumably represents the continental divide, but what was interesting is that the edges linking the two clusters were not of high enough frequency for the divide to become apparent early on.
To get an idea of how the network builds up as we increase the number of edges, it seemed prudent to repeat the process, but having fixed the entire graph layout before drawing any edges. This meant keeping the relative positions of all the nodes constant, but only drawing the nodes incident to the $n$ most frequent edges.
\begin{figure}[!ht]
\centering
\includegraphics[width=.48\linewidth]{final/f00200.png}
\includegraphics[width=.48\linewidth]{final/f03600.png}
\includegraphics[width=.48\linewidth]{final/f13000.png}
\includegraphics[width=.48\linewidth]{final/f22817.png}
\caption{The continental divide is more apparent when graphing edges in the context of their position in a spring layout for the entire network.}
\end{figure}
It appears that the majority of \gls{ps} nodes are in most frequent communication with other nodes that are in the same region (as opposed to ones on the other side of the Atlantic), which explains why that divide is not apparent until very late when graphing without the context of the entire network.
\pagebreak
\section*{Topics for Further Study}
With the determination of which edges are most frequent and the ability to construct graphs of the real network in the same fashion we did graphs of the toy network, the door is open to further investigation and transference of lessons learned on the toy network to use with the real network. Below is a brief list of just a few outstanding questions/topics that would be worth exploring further:
\begin{itemize}
\item How significant is the impact of the asymmetry of edges on the network?
\item Are there ways to account for this asymmetry?
\item Similarly, there may be multiple representations of the same devices and \gls{ps} nodes on the network, most notably IPv4 vs. IPv6 identifiers; can we map different representations of a device to the same device?
\item Knowing that certain edges are very central to the network, can we monitor those specific edges for potential performance issues, to allow us to identify network issues sooner?
\item Alternatively, could we use \gls{anomaly} to identify when to investigate some portion of the network?
\item Would the same strategy used with the toy network (determining edges that were removed from multiple impacted paths), allow us to pinpoint issues on the real network?
\end{itemize}
\section{Summary}
\section{Acknowledgments}
\pagebreak
\printbibheading
\printbibliography[keyword=major,heading=subbibliography,title={Primary Sources}]
\printbibliography[keyword=minor,heading=subbibliography,title={Further Reading}]
\section{Appendix A: Terms}\label{app:gloss}
\printnoidxglossaries
\section{Appendix B: Tools Used}\label{app:tools}
\subsection{Describing the network: \texttt{networkx}}\label{nx}
For creating an underlying representation of the network, we used the \texttt{networkx}\cite{nx} package in Python. This provided a graph object with various means of adding nodes, edges and metadata. It also provided tools for determining graph characteristics, such as bridge-connected components\cite{nxb}, $k$-connectivity\cite{nxk} and betweenness centrality\cite{nxc}.
\subsection{Drawing the network and plotting data: \texttt{matplotlib}}\label{mpl}
For drawing the network, \texttt{networkx}\cite{nx} integrates with \texttt{matplotlib}\cite{mpl}. This gave us a means of specifying (or not) the locations of nodes, as well as drawing, labeling and coloring specific nodes and edges.
We also used \texttt{matplotlib} in conjunction with \texttt{NumPy}\cite{np} arrays and \texttt{pandas}\cite{pd} dataframes in order to create various charts and histograms.
\subsection{Exploring real network data: Kibana}\label{kibana}
For preliminary data exploration, we used Kibana\cite{kibana}, which provided visualizations such as histograms and tables. It also provided, via the Console and the Inspect tool (available on all visualizations), a means of testing and exploring how ElasticSearch's Query DSL\cite{query} works.
\subsection{Querying real network data: ElasticSearch Query DSL}\label{es}
For pulling large amounts of data for analysis, we used the \texttt{elasticsearch}\cite{es} client in Python, which provides an API for ElasticSearch's Query DSL\cite{query}. More specifically, for small queries we used \texttt{search}, which can return only a limited number of results. For larger amounts of data, we used \texttt{scan}, a helper built on top of the \texttt{scroll} API.
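A minimal sketch of such a \texttt{scan} query is shown below; the index name matches the one discussed above, but the connection details and field names are placeholders rather than an exact record of our configuration:
\begin{verbatim}
# Sketch: stream a week of ps_trace documents with the scan helper.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(["https://localhost:9200"])   # placeholder endpoint
query = {"query": {"range": {"timestamp": {"gte": "now-7d"}}}}

for doc in scan(es, index="ps_trace", query=query):
    source = doc["_source"]
    # ... accumulate (src, dst) -> hop-list records here ...
\end{verbatim}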
\end{document}
\section{Application to synthetic data}
\label{sec:synt_tests}
We applied the proposed method to three synthetic data sets simulating different geological scenarios. The first one is generated by a model containing a set of multiple sources with different geometries, all of them with the same magnetization direction. The second is generated by a set of multiple magnetic bodies, one of them being a shallow-seated source with the same magnetization direction as the others. In the third test, we violate the hypothesis of unidirectional magnetization by simulating a shallow-seated source with a magnetization direction different from that of the other bodies.
In all tests, the simulated data were computed on a regular grid of $49 \times 25$ points (with a total of $N = 1225$ observations) at $z = -100$ m. The simulated
area extends over $12$ km along the $x$- and $y$-axes, resulting in a grid spacing of $250$ m and $500$ m along the $x$- and $y$-axis, respectively. The data were contaminated with pseudorandom Gaussian noise with zero mean and $10$ nT standard deviation. The geomagnetic field direction simulated was $I_0 = -40^\circ$ and $D_0 = -22^\circ$ for the inclination and declination, respectively. In the inversion, we use an equivalent layer composed by a grid of $49 \times 25$ dipoles (with a total of $M = 1225$ equivalent sources) positioned at a depth of $z_c = 1150$ m below the observation plane ($2.5$ times the greater grid spacing). We use the L-curve to choose the regularizing parameter ($\mu$). Our algorithm starts with an initial guess $\bar{\mathbf{q}}^{0} = (-10^\circ,-10^\circ)$ for inclination and declination, respectively.
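As an illustration of the survey geometry, a minimal NumPy sketch of the observation grid and noise contamination is given below; the forward modelling that produces the noise-free anomaly is not reproduced here, and the coordinate origin is assumed at the centre of the area:
\begin{verbatim}
# Sketch: 49 x 25 observation grid over 12 km x 12 km at z = -100 m,
# contaminated with zero-mean Gaussian noise of 10 nT standard deviation.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-6000.0, 6000.0, 49)      # 250 m spacing along x
y = np.linspace(-6000.0, 6000.0, 25)      # 500 m spacing along y
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.full(X.shape, -100.0)              # observation surface

anomaly = np.zeros(X.shape)               # placeholder for the simulated field
noisy_data = anomaly + rng.normal(0.0, 10.0, size=anomaly.shape)
\end{verbatim}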
\subsection{Unidirectional magnetization sources}
We generate a 3D prism with polygonal cross-section, whose top is positioned at a depth of $450$ m and whose bottom is at $3150$ m, with a magnetization intensity of $4$ A/m. We also generate two spheres with magnetization intensity equal to $3$ A/m and radius equal to $500$ m. The centers of the spheres are located at $x_c = 1800$ m, $y_c = -1800$ m, $z_c = 1000$ m and at $x_c = 800$ m, $y_c = 800$ m, $z_c = 1000$ m. We produce two rectangular prisms with $2.5$ A/m of magnetization intensity. The smaller prism has its top at a depth of $450$ m and side lengths of $1000$ m, $700$ m and $500$ m along the $x$-, $y$- and $z$-axes, respectively. The greater prism has its top at a depth of $500$ m and side lengths of $1000$ m, $2000$ m and $1550$ m along the $x$-, $y$- and $z$-axes. The total magnetization of
all simulated sources has inclination $-25^\circ$ and declination $30^\circ$.
The noise-corrupted data are shown in Figure \ref{fig:unidir_test}a.
Figure \ref{fig:unidir_test}b shows the predicted data produced by equivalent layer.
Figure \ref{fig:unidir_test}c shows the residuals defined as the difference between the
simulated data (Figure \ref{fig:unidir_test}a) and the predicted data
(Figure \ref{fig:unidir_test}b). The residuals appear normally distributed with a mean of
$-0.29$ nT and a standard deviation of $9.67$ nT as shown in Figure \ref{fig:unidir_test}d.
The estimated magnetization direction $\bar{\mathbf{q}}$ has inclination $-28.6^\circ$
and declination $30.7^\circ$, which are very close to the true values.
Figure \ref{fig:unidir_test}e shows the estimated magnetic-moment distribution $\bar{\mathbf{p}}$.
The convergence of the algorithm is shown in Figure \ref{fig:unidir_test}f. These results show
that the all-positive magnetic-moment distribution and the estimated magnetization direction
produce an acceptable data fitting.
\subsection{Unidirectional magnetization with shallow-seated source}
Here, we test the performance of the methodology in the presence of a shallow-seated source. The model is the same as in the previous test except for the smaller prism, whose top is now at a depth of $150$ m while its volume is maintained. The magnetization intensity of this shallow prism is equal to $1.5$ A/m. The magnetization direction of all sources has inclination $-25^\circ$ and declination $30^\circ$. The synthetic data are shown in Figure \ref{fig:unidir_shallow_test}a.
Figure \ref{fig:unidir_shallow_test}b shows the predicted total-field anomaly produced by
equivalent layer. Figure \ref{fig:unidir_shallow_test}c shows the residuals defined as the
difference between the simulated data (Figure \ref{fig:unidir_shallow_test}a) and the
predicted data (Figure \ref{fig:unidir_shallow_test}b). The residuals appear normally
distributed with a mean of $-0.42$ nT and a standard deviation of $10.67$ nT as shown in
Figure \ref{fig:unidir_shallow_test}d. Figure \ref{fig:unidir_shallow_test}e shows the
estimated magnetic-moment distribution $\bar{\mathbf{p}}$. The convergence of the
algorithm is shown in Figure \ref{fig:unidir_shallow_test}f. Despite the large residual
located above the shallow-seated source, we consider that the methodology produced a
reliable result because the estimated magnetization direction $\bar{\mathbf{q}}$ has
inclination $-28.8^\circ$ and declination $31.7^\circ$, which is very close to the
corresponding true magnetization direction, and because the all-positive magnetic-moment
distribution produces an acceptable data fitting.
\subsection{Shallow-seated source with different magnetization direction}
In this test, we simulate the presence of a shallow-seated body with a
magnetization direction different from that of the other magnetic sources. The shallow prism has the same dimensions
and magnetization intensity as in the previous test. However, the magnetization direction
of the shallow prism is $20^\circ$ of inclination and $-30^\circ$ of declination, while the
other sources have inclination $-25^\circ$ and declination $30^\circ$. The noise-corrupted
data are shown in Figure \ref{fig:unidir_shallow_diff_test}a.
Figure \ref{fig:unidir_shallow_diff_test}b shows the predicted total-field anomaly.
Figure \ref{fig:unidir_shallow_diff_test}c shows the residuals defined as the difference
between the simulated data (Figure \ref{fig:unidir_shallow_diff_test}a) and the predicted
data (Figure \ref{fig:unidir_shallow_diff_test}b). The residuals have a mean of $-0.71$ nT
and a standard deviation of $12.84$ nT as shown in Figure \ref{fig:unidir_shallow_diff_test}d.
The estimated magnetization direction $\bar{\mathbf{q}}$ has inclination $-30.4^\circ$
and declination $27.6^\circ$. Figure \ref{fig:unidir_shallow_diff_test}e shows the estimated
magnetic-moment distribution $\bar{\mathbf{p}}$. The convergence of the algorithm is shown
in Figure \ref{fig:unidir_shallow_diff_test}f. We also note that the estimated magnetization
direction is very close to the magnetization direction of most sources. Moreover, despite the
slight difference from the true magnetization direction, the estimated magnetic-moment
distribution produces an acceptable data fit. With the exception of the small area exactly
above the shallow-seated prism, most of the residuals are close to $0$ nT.
| {
"alphanum_fraction": 0.7810701956,
"avg_line_length": 102.2352941176,
"ext": "tex",
"hexsha": "9e139529b31a45172af85fb2aabaddb91c8017d9",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-03-17T15:32:29.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-03-17T15:32:29.000Z",
"max_forks_repo_head_hexsha": "dd929120b22bbd8d638c8bc5924d15f41831dce2",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "pinga-lab/eqlayer-magnetization-direction",
"max_forks_repo_path": "manuscript/simulations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dd929120b22bbd8d638c8bc5924d15f41831dce2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "pinga-lab/eqlayer-magnetization-direction",
"max_issues_repo_path": "manuscript/simulations.tex",
"max_line_length": 840,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "dd929120b22bbd8d638c8bc5924d15f41831dce2",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "pinga-lab/eqlayer-magnetization-direction",
"max_stars_repo_path": "manuscript/simulations.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-10T10:33:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-09-03T03:00:06.000Z",
"num_tokens": 1810,
"size": 6952
} |
\section{\sc Selected \\ Courses}
\vspace{-0.22cm}
\begin{center}
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{12pt}
\begin{tabular}{ c c }
\textbf{Pattern Recognition}: \hfill{17/20} & \textbf{Machine Learning}: \hfill{19.5/20} \\
\textbf{Data Mining}: \hfill{18.7/20} & \textbf{Technical Research}: \hfill{17.6/20} \\
\textbf{Data Structures}: \hfill{18.5/20} & \textbf{Algorithm Design}: \hfill{19.31/20}\\
\textbf{Engineering Statistics}: \hfill{18.5/20} & \textbf{Engineering Mathematics}: \hfill{19/20}\\
\textbf{Software Engineering}: \hfill{17.5/20} & \textbf{Microprocessors}: \hfill{19.42/20}\\
\textbf{Computer Aided Design}: \hfill{17.4/20} & \textbf{Engineering Ethics}: \hfill{20/20}\\
\textbf{Systems Analysis \& Design}: \hfill{19.68/20} & \textbf{Digital Design}: \hfill{20/20}\\
\end{tabular}
\end{center}
\endinput | {
"alphanum_fraction": 0.6871378911,
"avg_line_length": 50.7647058824,
"ext": "tex",
"hexsha": "cf80ff9a09cf7fc9b15e2ac4fc5e733e9ce7c253",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73545f3e25225d9bf972170551bb78f1d54964b3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "aligholami/RTP",
"max_forks_repo_path": "courses.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "73545f3e25225d9bf972170551bb78f1d54964b3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "aligholami/RTP",
"max_issues_repo_path": "courses.tex",
"max_line_length": 102,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "73545f3e25225d9bf972170551bb78f1d54964b3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "aligholami/RTP",
"max_stars_repo_path": "courses.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 333,
"size": 863
} |
\documentclass[a4paper]{article}
%% Language and font encodings
\usepackage{cmap} % поиск в PDF
\usepackage[T2A]{fontenc} % кодировка
\usepackage[utf8]{inputenc} % кодировка исходного текста
\usepackage[english]{babel} % локализация и переносы
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{bm}
\usepackage{color}
\usepackage{xcolor}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\setlength\parindent{0pt}
%\definecolor{darkOrange}{RGB}{255, 129, 0}
\definecolor{lightGray}{RGB}{236, 236, 236}
\definecolor{myGreen}{rgb}{0,0.6,0}
\definecolor{darkBlue}{RGB}{27,0,116}
\usepackage{caption}
\DeclareCaptionFont{white}{\color{white}}
\DeclareCaptionFormat{listing}{\colorbox{gray}{\parbox{\textwidth}{#1#2#3}}}
\captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white}
\renewcommand{\lstlistingname}{Listing}
\lstset{%
backgroundcolor=\color{lightGray},
commentstyle=\itshape\color{myGreen},
extendedchars=true,
keywordstyle=\bfseries\color{darkBlue},
language=Java,
otherkeywords={let,mut,pure,dirty,then,else,Int,Double,Bool,true,false,skip},
numbers=left, % where to put the line-numbers; possible values are (none, left, right)
numbersep=5pt
%stringstyle=\color{mymauve}
}
%% Sets page size and margins
\usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}
%% Spec title
\title{\textbf{Rogue} programming languages specs}
\author{Dmitry Kovanikov}
\date{}
\begin{document}
\maketitle
\textbf{Implementation language:} Haskell\\
\textbf{Target platform:} LLVM\\
\section*{Language features}
The \textbf{Rogue} programming language is a mix of the imperative and functional paradigms. You can write algorithms step by step, but the language also has plenty of functional features. It is statically typed with local type inference.
As a general-purpose programming language, it has the commonly used set of basics:
\begin{enumerate}
\item Variable declaration.
\item Basic arithmetic, logic and comparison operations.
\item Function declaration and calling.
\item Primitive types: Int, Bool, Double, Unit (Word? String? BigInteger?)
\end{enumerate}
Imperative language features are:
\begin{enumerate}
\item Variables can be mutable and immutable.
\item Named function arguments with default values.
\item Control-flow constructs: if-then-else, while and for loops.
\item TODO: Arrays with size in type?
\end{enumerate}
Functional language features are:
\begin{enumerate}
\item Pattern matching on constants.
\item Higher-order functions.
\item Anonymous functions.
\item Currying and partial application.
\item Classification of functions on pure and with side-effects.
\item Algebraic immutable data types.
\item TODO: parametric polymorphism.
\end{enumerate}
TODO: fully-compatible with Haskell.
\section*{Syntax examples}
\begin{lstlisting}[caption=Variables declaration]
mut x: Int = 0
mut y = 3
let z = true // `z` has type Bool
y = 5
\end{lstlisting}
Using the \textbf{mut} keyword you can create mutable variables, and with \textbf{let} --- immutable ones. Note that you can omit the type of a variable if it can be inferred from its local context, but variable initialization is required.
\begin{lstlisting}[caption=Function declaration]
pure f : (x: Double) -> (y: Double) -> Double {
let z = x + y
return z
}
dirty
g : (b: Bool) -> (mut x: Int) -> (y: Int = 5) -> Int
g (..) = {
if b { x += y }
return x
}
\end{lstlisting}
Function arguments are separated with the $\rightarrow$ symbol. Each argument should have a name and a type, and arguments can have default values. The last type has no name --- it is the type of the function result. All functions should be marked either \textbf{pure} or \textbf{dirty}. If a function has side effects (it changes some of its arguments or some global variables, or does some IO), then it should be marked as \textbf{dirty}. A function block should contain the \textbf{return} keyword if the result type of the function is not \textbf{Unit}.
The following calls of the function \texttt{g} are valid:
\begin{lstlisting}[caption=Function calls]
g true someX 4
g true someX // creates function with type `Int -> Int`
g(b = true, x = someX) // calls `g` with `y = 5`
g(x = someX) // creates function with type Bool -> Int
g() // creates function with type `Bool -> mut Int -> Int`
\end{lstlisting}
So \texttt{g \{x = someX\} true} is a valid call, whereas \texttt{g true \{x = someX\}} is not,
because partial function application loses all information about argument names. By convention, though, you shouldn't write \texttt{g \{x = someX\} true} because it is less obvious what this function call does.
TODO: Call like \texttt{g(..) }
Functions can also perform pattern matching, have guards, etc.
\begin{lstlisting}[caption=Pattern matching]
dirty h : (b: Bool) -> (mut x: Int) -> (y: Int) -> Int
h (y = 0) = {
x = x - 3
let a = x / 2
return x + a
}
h (y = 1)
| x > 1 = x - 1
| else = { let t = x % 2; return t }
h false 1 _ = x + 10 // `y` is not available here
h true (..) = y + 10 // {..} for keeping rest arguments names
h (..) = x + 1
\end{lstlisting}
Here's an example of a function that reads two integers from input and performs the binary pow algorithm.
\begin{lstlisting}[caption=Binary pow algorithm]
dirty binPow : Unit {
mut k = readInt()
mut n = readInt()
mut res = 1
while k > 0 {
if k % 2 == 1 {
res *= n
} else {
skip // `skip` is empty operator
}
n = n * n
k = k / 2
}
print res
}
\end{lstlisting}
An example of a function with a higher-order argument.
\begin{lstlisting}[caption=Higher order functions]
dirty decrementAndCheck : (mut x: Int)
-> (p: mut Int -> Int -> Bool)
-> Bool
{
x = x - 1
return (p x 3)
}
\end{lstlisting}
\end{document} | {
"alphanum_fraction": 0.7042181941,
"avg_line_length": 32.2568306011,
"ext": "tex",
"hexsha": "35c606dd9d98bc44b2fe6360ad3df15fd73a852c",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-07-12T08:26:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-07-12T08:26:01.000Z",
"max_forks_repo_head_hexsha": "614b937271b985cda88c1949ee51930a826adfb5",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "ChShersh/rogue-lang",
"max_forks_repo_path": "spec/rogue-lang-spec.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "614b937271b985cda88c1949ee51930a826adfb5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "ChShersh/rogue-lang",
"max_issues_repo_path": "spec/rogue-lang-spec.tex",
"max_line_length": 513,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "614b937271b985cda88c1949ee51930a826adfb5",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "chshersh/rogue-lang",
"max_stars_repo_path": "spec/rogue-lang-spec.tex",
"max_stars_repo_stars_event_max_datetime": "2019-11-27T04:01:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-30T17:08:57.000Z",
"num_tokens": 1686,
"size": 5903
} |
\section{Introduction}
\par
If the ultimate goal is to solve linear systems of the form
$AX = B$, one must compute an $A = LDU$, $A = U^TDU$ or
$A = U^HDU$ factorization, depending on whether the matrix $A$
is nonsymmetric, symmetric or Hermitian.
$D$ is a diagonal or block diagonal matrix,
$L$ is unit lower triangular,
and $U$ is unit upper triangular.
$A$ is sparse, but the sparsity structure of $L$ and $U$ will
likely be much larger than that of $A$,
i.e., they will suffer fill-in.
It is crucial to find a permutation matrix $P$ such that the factors of
$PAP^T$ have as little fill-in as can reasonably be expected.
\par
To illustrate, consider a 27-point finite difference operator defined
on an $n \times n \times n$ grid.
The matrix $A$ has $n^3$ rows and columns, and the number of
nonzero entries in $A$ is also $O(n^3)$.
Using the natural ordering, the numbers of entries in $L$ and $U$
are $O(n^5)$, and it takes $O(n^7)$ operations to compute the
factorization.
The banded and profile orderings \cite{geo81-book}
have the same complexity.
\par
Using the nested dissection ordering,
\cite{geo73-nested},
the factor storage is reduced to $O(n^4)$ and factor operations to
$O(n^6)$.
In practice, the minimum degree ordering has this same low-fill
nature, although topological counterexamples exist
\cite{ber90-mindeg}.
A unit cube is the worst case when comparing the banded and profile
orderings against the minimum degree and nested dissection orderings.
But, there is still a lot to be gained by using a good permutation
when solving most sparse linear systems, and the relative gain
becomes larger as the problem size increases.
\par
This short paper is a gentle introduction to the ordering methods
--- the background as well as the specific function calls.
But finding a good ordering is not enough.
The ``choreography'' of the factorization and solves, i.e., what
data structures and computations exist, and in a parallel
environment, which thread or processor does what and when,
is as crucial.
The structure of the factor matrices, as well as the structure of the
computations is controlled by a ``front tree''.
This object is constructed directly by the {\bf SPOOLES} ordering
software, or can be created from the graph of the matrix and an
outside permutation.
Various transformations on the front tree can make a large
difference in performance.
Some knowledge of the linear system (e.g., does it come from a 2-D
or 3-D problem? is it small or large?), coupled with some knowledge
of how to tailor a front tree, can be important to getting the best
performance from the library.
\par
Section~\ref{section:ordering} introduces some background on sparse
matrix orderings and describes the {\bf SPOOLES} ordering software.
Section~\ref{section:front-trees} presents the front tree object
that controls the factorization, and its various transformations
to improve performance.
| {
"alphanum_fraction": 0.775545549,
"avg_line_length": 45.109375,
"ext": "tex",
"hexsha": "9960941258ec2e2a76b09c10e475a5e5d93b0dbd",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alleindrach/calculix-desktop",
"max_forks_repo_path": "ccx_prool/SPOOLES.2.2/documentation/FrontTrees/intro.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alleindrach/calculix-desktop",
"max_issues_repo_path": "ccx_prool/SPOOLES.2.2/documentation/FrontTrees/intro.tex",
"max_line_length": 69,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alleindrach/calculix-desktop",
"max_stars_repo_path": "ccx_prool/SPOOLES.2.2/documentation/FrontTrees/intro.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 710,
"size": 2887
} |
\SetAPI{J-C}
\section{audit.verify.onload}
\label{configuration:AuditVerifyOnload}
\ClearAPI
\TODO%% GENERATED USAGE REFERENCE - DO NOT EDIT
\begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \\
\endhead
\hline
\type{com.koch.ambeth.audit.server.AuditEntryVerifier} &
\prettyref{module:Audit} \\
\hline
\type{com.koch.ambeth.audit.server.AuditEntryVerifier} &
\prettyref{module:Audit} \\
\hline
\end{longtable}
%% GENERATED USAGE REFERENCE END
\begin{lstlisting}[style=Props,caption={Usage example for \textit{audit.verify.onload}}]
audit.verify.onload=NONE
\end{lstlisting} | {
"alphanum_fraction": 0.7524752475,
"avg_line_length": 31.8947368421,
"ext": "tex",
"hexsha": "1669bf07f07b5480a5ad019b855c1c4a5a69cd3d",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z",
"max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Dennis-Koch/ambeth",
"max_forks_repo_path": "doc/reference-manual/tex/configuration/AuditVerifyOnload.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda",
"max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Dennis-Koch/ambeth",
"max_issues_repo_path": "doc/reference-manual/tex/configuration/AuditVerifyOnload.tex",
"max_line_length": 88,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Dennis-Koch/ambeth",
"max_stars_repo_path": "doc/reference-manual/tex/configuration/AuditVerifyOnload.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 199,
"size": 606
} |
\documentclass{IEEEtran}
\usepackage{graphicx}
\usepackage{svg}
\usepackage{siunitx}
\newcommand{\myroot}{../}
\newcommand{\Gensp}[1]{\emph{#1}}
\newcommand{\Hirudomedicinalis}{\Gensp{Hirudo~medicinalis}}
\title{Bio-inspired soft robot}
\author{M.~Descour, L.~Devries, and D.~Evangelista\thanks{Authors are with the United States Naval Academy, Department of Weapons \& Systems Engineering}}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
Soft robotics provides a solution to the lack of maneuverability, durability, and degrees of freedom illustrated by many traditional hard-bodied robots. Challenges of soft robotics include their actuation and controllability. By examining the locomotion of a leech over land, this research proposes a novel method of soft robotic control. Specifically, this research will focus on soft pneumatic actuators and their subsequent manipulation to accomplish locomotion. Research will be conducted to determine a system that optimally illustrates predetermined properties. The properties are the ability to move forward, the incorporation of coupled attachment points for maneuverability in variable environments, and open loop control. The demonstration plan of this research will consist of three separate proof-of-concept demonstrations. The first proof-of-concept demonstration will focus on the bending pneumatic actuator, determining the relationship between input pressure and bend angle as well as speed of actuation. The second proof-of-concept demonstration will determine a viable method for attachment/detachment that may also be incorporated into the soft actuator. The final proof-of-concept demonstration will be contingent upon the success of the previous two demonstration experiments. The final demonstration will examine the feasibility of locomotion, combining the previously acquired data. The total cost of this research is \$28,235 including materials, labor, and overhead. The research plan includes risk mitigation related to possible technical failures. This risk mitigation includes the design and fabrication portions of the research occurring in the early fall semester to allow adequate time for data collection.
\end{abstract}
\section{Background and Motivation}
\IEEEPARstart{T}{raditionally}, robots are thought of as having rigid, metallic bodies with discrete joints and hard material composition. However, these rigid bodies may have difficulties in manipulation and maneuverability, generally offering limited degrees of freedom. On the other hand, many natural organisms have bodies that are soft and flexible, with the ability to deform in various ways. Engineers have turned to biology as a source of inspiration for robotic designs. Soft, bio-inspired robots offer limitless degrees of freedom, allowing opportunities to bend, twist, expand, and contract in various ways \cite{rus2015design}. The multi-gait robot shown in Figure~\ref{f1} illustrates some of the potential advantages of soft robots, such as maneuverability combined with resiliency \cite{shepherd2011multigait}.
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.5\columnwidth]{\myroot/figures/proposal1.jpg}
\end{center}
\caption{The ``Resilient, Untethered Soft Robot'' \cite{shepherd2011multigait}}
\label{f1}
\end{figure}
The challenges of soft robotics include their actuation and subsequent controllability. As soft robotic designs lack many of the hard components such as servos and motors found in traditional machines, unique methods of actuation are incorporated. For soft robots, researchers have focused on two primary methods. In one method, variable length tendons, such as shape memory alloys, are embedded in soft materials. The other method, which will be discussed and researched in this paper, is pneumatic actuation. Pneumatic actuation for soft robotic systems was first explored in 1992, when channels in an elastomer were inflated via pressurized air. In this design, constructed asymmetry caused the actuator to move in a desired manner, as shown in Figure~\ref{f2} \cite{tanaka1992applying}. More recently, researchers have manipulated the actuation of a soft robot with the implementation of an inextensible layer, allowing variable bending based on input pressure \cite{polygerinos2015modeling}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{\myroot/figures/proposal2.png}
\end{center}
\caption{A 1992 soft robotic microactuator achieving desired movement due to asymmetry in the design \cite{tanaka1992applying}}
\label{f2}
\end{figure}
With the current research and developments into soft robotics, there exists a number of different real-world societal applications. The maneuverability of soft robotics allows for their potential implementation in otherwise difficult environments. Researchers have looked into robots capable of navigating obstacles such as rubble in search and rescue operations of natural disasters, such as earthquakes and hurricanes\cite{irv2017xxx}. Furthermore, due to the soft nature of the materials, soft robotic systems are used in the medical field as wearable applications. Soft robotics generally offer more comfort for human-use applications, as rigid metallic devices have the risk of causing damage to human tissue. Examples of medical applications include wearable devices for orthopedic rehabilitation \cite{par2014xxx} and soft sensing suits for lower limb measurement \cite{men2014xxx}.
Additionally, the capability to operate robotics in an underwater environment could be improved through the implementation of soft actuators due to increased maneuverability in difficult spaces. Focusing on the underwater environment, soft robotic research can be expanded to fit real-world military applications. Notably, the United States military uses autonomous underwater vehicles (AUVs) to accomplish missions in Intelligence, Surveillance, and Reconnaissance (ISR), and has pledged \$600 million to their development \cite{pomerleau2016dod}. Applications of a bio-inspired and highly maneuverable soft robot are not limited to conducting ISR at sea, and may be expanded to use on land and in littoral environments.
\section{Problem statement}
The medicinal leech (\Hirudomedicinalis) is capable of crawling movements, achieved via the sequential activation of muscle groups arranged in longitudinal segments and coupled attachment points at the anterior and posterior ends, as shown in Figure~\ref{f3} \cite{kristan2005neuronal}.
Pneumatic actuators, as shown in Figure~\ref{f4}, are capable of variable bending under pressure inputs \cite{polygerinos2015modeling}. The device in Figure~\ref{f4} consists of chambers embedded within Ecoflex silicone material that bend about an inextensible bottom layer upon pressurization.
While \cite{polygerinos2015modeling} examined the effects of pressure inputs on the bending of soft actuators, a method for locomotion was not considered. I propose to take inspiration from leech-like crawling to examine if a soft bending pneumatic actuator can accomplish locomotion via attachable ends. This research proposes a bio-inspired, pneumatically actuated, soft robot and control laws capable of the following properties:
\begin{enumerate}
\item The soft robot is able to move forward in a ``crawling'' motion, as well as turn via a twisting motion.
\item The soft robot incorporates bio-inspired coupled attachment points, allowing maneuverability in variable environments.
\item Control will be open loop. The input of the control law will be desired motion and the output will be pressure and subsequent shape modifications necessary for the desired motion.
\end{enumerate}
\begin{figure}
\caption{The successive stages of leech crawling \cite{kristan2005neuronal}}
\label{f3}
\begin{center}
\includegraphics[width=0.5\columnwidth]{\myroot/figures/proposal3.png}
\end{center}
\end{figure}
\begin{figure}
\caption{A soft pneumatic acutator bending under variable pressure inputs \cite{polygerinos2015modeling}}
\label{f4}
\begin{center}
\includegraphics[width=0.5\columnwidth]{\myroot/figures/proposal4.png}
\end{center}
\end{figure}
\section{Literature review}
In the scientific community, the medicinal leech (\Hirudomedicinalis) has served as an extensively studied organism in the fields of neuroscience and biology. This can be attributed to the relative simplicity of leech anatomy and leech behavior, both of which are discussed in \cite{kristan2005neuronal}. Researchers in \cite{kristan2005neuronal} have produced detailed descriptions of six common leech mechanisms: heartbeat, local bending, shortening, swimming, crawling, and feeding. Most relevant to the research of a bio-inspired soft robot are the locomotive behaviors of bending, shortening, swimming, and crawling. In \cite{kristan2005neuronal}, researchers documented the relationship between the circular and longitudinal muscles that enable these locomotive functions. With respect to crawling, leeches exhibit both vermiform crawling (extension and contraction of hydrostatic skeleton) and ``inch-worm'' crawling (similar to vermiform crawling, but the suckers are brought adjacent to each other at the end of each contraction). The researchers noted the greater efficiency of ``inch-worm'' crawling, despite its rarer implementation due to the natural instability of the leech.
Due to the simple hydrostatic skeletal structure of a leech, researchers in \cite{alscher1998simulating} have developed a mathematical model for the dynamical behavior of a leech. The model closely follows leech anatomy, including twenty-one compartmentalized segments, each with a fixed volume. The circular and longitudinal muscle movements are modeled by elastic edges acting as damped elastic springs. Through these constraints, researchers developed equations of motion modeling the pressure values in the compartmentalized segments and, subsequently, the leech movements themselves. The dynamic leech model was successfully simulated, demonstrating its capability to generate the leech movements of crawling and swimming \cite{alscher1998simulating}. The model is limited by a lack of experimental data to determine its validity, yet it provides a method in which the major constructional principles of the leech may be mathematically analyzed.
Soft robotics provide a solution to developing an experimental platform in which leech locomotion may be replicated. Soft robots, heavily inspired by nature, are composed of compliant materials with deformable bodies. The most common methods of soft robotic actuation are variable length tendons embedded in soft segment and pneumatic actuation to inflate embedded channels in soft material. In \cite{rus2015design}, the many challenges of soft robotics are identified, including controlling soft materials that bend, twist, and stretch, offering infinite degrees of freedom. Another identified challenge of soft robots is the implementation of power sources for actuation. Currently, power sources for pneumatically actuated soft robots are limited to pumps or compressed air cylinders, both of which are bulky and may potentially inhibit the maneuverability of the robot.
Despite these challenges, this research is most interested in pneumatic actuation for soft robots, due to its affordability and its customizability to a given application. Researchers at Harvard University have modeled, designed, and tested soft pneumatic actuators, analyzing the effect of input pressure to various outputs \cite{polygerinos2015modeling}. Two different models were designed for the analysis of the fabricated soft actuator. An analytical model was developed to define the relationship between the input pressure and bending angle in free space, using material and geometric properties of the actuator. In order to bypass some of the limits of the analytical model, a finite-element method model was also developed to model the nonlinear responses of the actuator at unpressurized and pressurized states. An experimental platform was constructed to validate the analytical and FEM models. In addition, the controllability of the actuator was illustrated through a feedback control loop embedded in the actuator that calculated bending angle from air pressure in real time.
Inflating pneumatic networks (``pneu-nets''), or small channels embedded in soft elastomeric materials, allow for sophisticated motions with simple controls and inputs. At Harvard University, researchers focused on improving existing pneumatic networks such as those seen in \cite{polygerinos2015modeling} for speed and overall efficiency \cite{mosadegh2014pneumatic}. Using silicone-based elastomers, the newly designed actuators were empirically tested and demonstrated. The new pneumatic network design features multiple advancements, including a higher speed of inflation, greater force exerted under a given pressure, a lower change in volume for a given degree of bending, and higher resiliency before failing. Their research is applicable as it optimizes a pneumatic network under the constraint of limited resources, such as lower operating pressures, smaller volumes, and smaller time constraints. In addition to comparisons with older pneumatic actuators, the newly designed pneumatic network demonstrated its speed and precision by playing notes on an electronic keyboard in succession, emulating human fingers \cite{mosadegh2014pneumatic}.
In response to the challenges presented in \cite{rus2015design} of the implementation of bulky power sources that restrict maneuverability, another team of researchers at Harvard University designed a completely untethered variant of a previously tethered multi-gait soft robot \cite{shepherd2011multigait} \cite{tolley2014resilient} . The untethered soft robot moves freely and is able to carry its own weight, including power source, over a substantial period. The soft robot also maintains its resiliency, previously seen in its tethered variant. Resiliency was experimentally tested under the exposure of a flame, run over a car, and walking outside in a snowstorm. The untethered and tethered variations of this soft robot demonstrate an ability to achieve ``walking'' locomotion through pneumatic actuation. Locomotion is accomplished via a four ``legged'' structure, with five total actuators acting in combination. The researchers noted the lack of optimization in the design of this particular soft robot, as the overall actuation speed and mobility could see improvement.
As shown in \cite{shepherd2011multigait} and \cite{tolley2014resilient} soft robots are capable of locomotion through pneumatic actuation. This research seeks to improve these methods of soft robot locomotion through inspiration taken from leech behavior discussed in \cite{kristan2005neuronal}. Specifically, leech ``inch-worm'' crawling, through the unique use of ``suckers'', has demonstrated its ability to be mathematically modeled in \cite{alscher1998simulating}. By implementing the leech’s ability to adhere to surfaces at both ends of its body structure to a soft robot, locomotion may be achieved. This research focuses on cutting down the number of actuators as seen in \cite{shepherd2011multigait} and \cite{tolley2014resilient}, while still accomplishing locomotion. The research done in \cite{polygerinos2015modeling} serves to provide insight on the effect of input pressure to the bending angle of a pneumatic actuator, as bending is a key component of the leech ``inch-worm'' crawl.
\section{Demonstration plan}
The demonstration plan for this research will consist of three separate proof-of-concept demonstrations. The proof-of-concept demonstrations allow for the design and creation of the actual pneumatic actuator, as well as determining a method to best accomplish attachment and detachment to reflect leech behavior. The demonstrations of the pneumatic actuator itself and the attachment/detachment mechanisms will be independent of one another, thus allowing for continuation of research given obstacles in the demonstration process. Ultimately, the goal is a final proof-of-concept experiment accomplishing locomotion of the actuator. This will require success in both of the previous proof-of-concept demonstrations, as both bending actuation and coupled attachment/detachment are necessary for the desired locomotion. The proof-of-concept demonstrations will display the properties of simplicity, replicability, and maneuverability.
\subsection{Proof-of-concept experiment: bending actuation}
The proof-of-concept demonstration for a bending actuator requires the construction of an electrical circuit as well as the design of the soft actuator itself. The circuit will be designed using guidance from the Soft Robotics Toolkit \cite{holland2014soft}, where the specific electrical components may also be found. In the circuit, a desired pressure output is sent to the microcontroller as a voltage value, which is then sent as data to the air pump. Air travels through the solenoid valve and pressurizes the soft actuator. The soft actuator will inflate, and a bending angle will be observed. A pressure reading from a pressure sensor will also be recorded. The position and bend angle of the actuator will be recorded and visually interpreted. Graph paper will be placed behind the soft actuator to aid in determining the bend angle, establishing how the pressure inputs relate to bend angle for the specific actuator.
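The following sketch outlines, for illustration only, how such a pressure sweep could be logged; \texttt{set\_pump\_pressure}, \texttt{read\_pressure}, and \texttt{read\_bend\_angle} are hypothetical stand-ins for the circuit command, the pressure sensor reading, and the manually measured bend angle.
\begin{verbatim}
# Illustrative sketch only: set_pump_pressure(), read_pressure() and
# read_bend_angle() are hypothetical stand-ins for the circuit command,
# the pressure sensor reading and the graph-paper angle measurement.
import time

def characterize_actuator(setpoints_kpa, settle_s=2.0):
    """Step through pressure setpoints and record (pressure, angle) pairs."""
    samples = []
    for p in setpoints_kpa:
        set_pump_pressure(p)       # command the circuit to the setpoint
        time.sleep(settle_s)       # allow the actuator to settle
        samples.append((read_pressure(), read_bend_angle()))
    return samples

# Example: sweep 0-60 kPa in 5 kPa steps and print a pressure/angle table.
for pressure, angle in characterize_actuator(range(0, 65, 5)):
    print(pressure, "kPa ->", angle, "deg")
\end{verbatim}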
The second aspect of the proof-of-concept experiment for the bending actuator will be the design and fabrication of the soft actuator itself. The intent is to create two different kinds of soft actuator, the PneuNet actuator (shown in Figure~\ref{f7}) and the soft fiber-reinforced actuator (shown in Figure~\ref{f8}). Existing designs found in the Soft Robotics Toolkit \cite{holland2014soft} will be used in the fabrication process. Both designs require 3-D printed molds filled with silicone material. The purpose of designing two different kinds of soft actuator is to compare their respective performance in simplicity, durability, and manipulability to fit the purposes of this research (i.e., incorporation of an attachment/detachment mechanism). Specifically, a measured metric will be the time required to reach a specific bend angle. Smaller times are desired, as this signifies increased speed and maneuverability of the final system.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{\myroot/figures/proposal5.png}
\end{center}
\caption{Circuit, from \cite{holland2014soft}, that will be implemented to provide pressure to the actuator}
\label{f5}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{\myroot/figures/proposal6.png}
\end{center}
\caption{Functional block diagram of the soft actuator}
\label{f6}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{\myroot/figures/proposal7.png}
\end{center}
\caption{Building schematic of a PneuNet actuator, from \cite{holland2014soft}}
\label{f7}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.75\columnwidth]{\myroot/figures/fabricationprocess.png}
\end{center}
\caption{Building schematic of a fiber-reinforced actuator \cite{holland2014soft}}
\label{f8}
\end{figure}
\subsection{Proof-of-concept experiment: attachment and detachment methodology}
The next proof-of-concept demonstration will be the design of an attachment and detachment mechanism that can be incorporated into the soft actuator. Research, as well as several designs, will need to be conducted for this proof-of-concept demonstration. The primary design that will be investigated is illustrated in the sketch shown in Figure~\ref{f9}. Vacuum chambers will be incorporated into the soft actuator, allowing for suction and subsequently a method for attachment. This design will require incorporating vacuum pumps into the circuitry described earlier in Figure~\ref{f5}. Other attachment mechanisms will be explored through prototyping and fabrication, such as the ``grippers'' used in the RiSE robot illustrated in Figure~\ref{f10}, as well as other bio-inspired adhesives such as starfish feet and the non-Newtonian fluids excreted by gastropods. The designs will be tested on surfaces of variable composition and incline. The performance of the designs will be visually interpreted as well as video recorded. The performance of the designs will be compared based on the properties of repeatability, simplicity, and ability to be incorporated into the soft actuator. The designs will also be compared based on the strength of the suction force. To measure this metric, a push-pull force gauge will be attached to the suction mechanism and the force required for detachment will be calculated. Higher forces required for detachment are preferable, as this will correlate with the maneuverability of the final system, specifically in varied environments.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{\myroot/figures/proposal9.png}
\end{center}
\caption{Sketch of the incorporation of a vacuum induced attachment and detachment into the soft actuator.}
\label{f9}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{\myroot/figures/proposal10.png}
\end{center}
\caption{RiSE Gecko robot capable of adhering to multiple surfaces \cite{RiSEphoto}}
\label{f10}
\end{figure}
\subsection{Proof-of-concept experiment: locomotion}
The final proof-of-concept demonstration requires successful demonstrations of the soft bending actuator as well as a working attachment and detachment method. Locomotion will require comprehension of the interaction between the bending of the soft actuator and the attachment method. For example, a specific bend angle of the actuator may be necessary to initiate an attachment to a surface inclined at a specific angle. Figure~\ref{f11} illustrates a proposed method to achieve leech-like locomotion with the soft actuator. Locomotion will require the ability to increase and decrease input pressure to change the bend angle ($\theta$) to a desired state. As input pressure increases, the bend angle will increase. In the proposed method, the robot will alternate between a small bend angle ($\theta_S$) and a large bend angle ($\theta_L$) to achieve locomotion. During this process, the forward and rear ends of the robot will also alternate between an attached and detached state. This proposed method of locomotion is solely for the purpose of producing forward movement of the actuator in a straight line.
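As an illustration only, the sketch below shows one plausible realization of this open-loop alternation; the bend angles, dwell time, and the \texttt{set\_bend\_angle} and \texttt{set\_sucker} helpers are hypothetical placeholders for the pressure commands sent to the actuator and to the attachment mechanisms, and the exact sequence will follow Figure~\ref{f11}.
\begin{verbatim}
# Illustrative open-loop gait sketch. set_bend_angle() and set_sucker()
# are hypothetical placeholders for the pressure commands that drive the
# bending actuator and the front/rear attachment mechanisms.
import time

THETA_S = 20.0   # small bend angle in degrees (assumed value)
THETA_L = 90.0   # large bend angle in degrees (assumed value)

def crawl_step(dwell_s=1.0):
    """One cycle of the proposed leech-like crawl."""
    # Anchor the rear, release the front, and flatten towards the small
    # bend angle so the free front end reaches forward.
    set_sucker("rear", attached=True)
    set_sucker("front", attached=False)
    set_bend_angle(THETA_S)
    time.sleep(dwell_s)

    # Anchor the front, release the rear, and arch towards the large
    # bend angle so the free rear end is drawn forward.
    set_sucker("front", attached=True)
    set_sucker("rear", attached=False)
    set_bend_angle(THETA_L)
    time.sleep(dwell_s)

def crawl(n_steps):
    # Repeat the cycle to produce straight-line forward motion.
    for _ in range(n_steps):
        crawl_step()
\end{verbatim}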
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{\myroot/figures/proposal11.png}
\end{center}
\caption{A proposed method for soft robot locomotion over seven stages}
\label{f11}
\end{figure}
\subsection{Time risks and mitigation}
The major time risks associated with this project involve the design and fabrication of the soft actuators as well as the attachment/detachment mechanism. Building and modifying an electro-pneumatic circuit for the purposes of this project also poses a time risk. In order to mitigate this time risk, a basic electro-pneumatic circuit will be completed as soon as possible. This basic circuit will have the ability to inflate and deflate a simple actuator, while providing the ability to be modified to incorporate additional air pumps or vacuum pumps as needed.
\subsection{Technical risks and mitigation}
Due to the novelty of the proposed attachment/detachment mechanism, there exists a technical risk of an ideal vacuum ``gripper'' not working at all. Another technical risk is the possibility of damaging the soft actuators during experimentation or improper handling. To mitigate this risk, multiple copies of the actuators will be built. Furthermore, construction of the soft actuators will occur immediately at the start of the fall semester.
\subsection{Justification of special high risk activities}
This project involves the purchase and care of live leeches, together with earthworms (\emph{Lumbricus sp.}) to feed them. The risk is low, as these organisms are invertebrates and can be cared for with relative ease. The leeches and earthworms will aid in the demonstration of this proposal, as the performance and components of the soft actuator will be compared with those of the actual annelids that are the source of the bio-inspiration.
\subsection{Budget}
\begin{table}
\caption{Budget}
\label{tbudget}
\begin{center}
\includegraphics[width=\columnwidth]{\myroot/tables/proposalbudget.png}
\end{center}
\end{table}
\section{Conclusion}
By examining leech locomotion on land, a novel method of bio-inspired soft robotic locomotion can be developed. This research focuses on accomplishing this system of locomotion via pneumatic actuators. Through three separate proof-of-concept demonstrations, the properties of the bio-inspired pneumatic actuator will be illustrated and analyzed. These properties include forward movement, inclusion of coupled attachment points, and open loop control. Successful research of the proposed system will aid in developing novel methods of search and rescue (SAR) and intelligence, surveillance, and reconnaissance (ISR) for real-world applications. As proof-of-concept demonstrations will be conducted in this research, the biggest risks are technical failures of the proposed designs. To mitigate this risk, the design and fabrication portions of the research will be conducted early in the fall semester to ensure the possibility of data collection.
\appendix
%Gantt chart here.
\begin{table*}
\caption{Gantt chart}
\label{tgantt}
\begin{center}
\includegraphics[width=\columnwidth]{\myroot/tables/proposalgantt.png}
\end{center}
\end{table*}
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,\myroot/references/descour}
\end{document}
| {
"alphanum_fraction": 0.8181607955,
"avg_line_length": 132.3775510204,
"ext": "tex",
"hexsha": "8a55a857afa54c8592d081c6debe39262992984b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ed17501625cc0e73b46bed39049684f9b3baafba",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "devangel77b/credle-manuscripts",
"max_forks_repo_path": "ew502proposal/es502proposal-descour.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ed17501625cc0e73b46bed39049684f9b3baafba",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "devangel77b/credle-manuscripts",
"max_issues_repo_path": "ew502proposal/es502proposal-descour.tex",
"max_line_length": 1727,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ed17501625cc0e73b46bed39049684f9b3baafba",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "devangel77b/credle-manuscripts",
"max_stars_repo_path": "ew502proposal/es502proposal-descour.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5571,
"size": 25946
} |
% BEGIN LICENSE BLOCK
% Version: CMPL 1.1
%
% The contents of this file are subject to the Cisco-style Mozilla Public
% License Version 1.1 (the "License"); you may not use this file except
% in compliance with the License. You may obtain a copy of the License
% at www.eclipse-clp.org/license.
%
% Software distributed under the License is distributed on an "AS IS"
% basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
% the License for the specific language governing rights and limitations
% under the License.
%
% The Original Code is The ECLiPSe Constraint Logic Programming System.
% The Initial Developer of the Original Code is Cisco Systems, Inc.
% Portions created by the Initial Developer are
% Copyright (C) 1995 - 2006 Cisco Systems, Inc. All Rights Reserved.
%
% Contributor(s):
%
% END LICENSE BLOCK
%
% @(#)umsprofile.tex 1.4 95/03/17
%
%
% umsprofile.tex
%
% REL DATE AUTHOR DESCRIPTION
% 8.5.90 Joachim Schimpf based on relase notes 2.3
%
\chapter{Profiling Prolog Execution}
\label{chapprofile}
\index{profiling}
%\section{Introduction}
%{\eclipse} contains two profiling tools that permit to collect statistics
%about the execution of a Prolog program.
%This information can be used to
%\begin{itemize}{}{\itemsep 1cm}
%\item find "hot spots" in a program that are worth optimising
%\item reveal unexpected behaviour of predicates (e.g. backtracking)
%\item provide mode declarations for the program
%\end{itemize}
%The first tool is the {\it profiler} which finds out how much time
%was spent in which procedure,
%the second tool is the {\it statistics} tool which collects
%for each called predicate the statistics about its behaviour
%during the program.
%\section{Using the Profiling Tool}
The profiling tool\footnote{%
The profiler requires a small amount of
hardware/compiler dependent code and may therefore not be available on
all platforms.}
helps to find "hot spots" in a program that are worth optimising.
It can be used any time with any compiled Prolog code,
it is not necessary to use a special compilation mode or set
any flags.
When
\begin{quote}
:- profile(Goal).
\end{quote}
is called, the profiler executes the {\it Goal} in the profiling mode,
which means that every 0.01s the execution is interrupted
and the profiler remembers the currently executing procedure.
When the goal succeeds or fails, the profiler reports the outcome
and then prints statistics about the time spent
in every procedure it encountered:
\begin{quote}
\begin{verbatim}
[eclipse 5]: profile(boyer).
rewriting...
proving...
goal succeeded
PROFILING STATISTICS
--------------------
Goal: boyer
Total user time: 10.65s
Predicate Module %Time Time
-------------------------------------------------
rewrite /2 eclipse 52.3% 5.57s
garbage_collect /0 sepia_kernel 23.1% 2.46s
rewrite_args /2 eclipse 16.6% 1.77s
equal /2 eclipse 4.7% 0.50s
remainder /3 eclipse 1.5% 0.16s
...
plus /3 eclipse 0.1% 0.01s
yes.
\end{verbatim}
\end{quote}
The profiler prints the predicate name and arity, its definition module,
percentage of total time spent in this predicate and the absolute time.
Some auxiliary system predicates are printed under a
common name without arity, e.g. {\it arithmetic} or {\it all\_solutions}.
Predicates which are local to locked modules are printed
together on one line that contains only the module name.
By default only predicates written in Prolog are profiled, i.e.
if a Prolog predicate calls an external or built-in predicate
written in C, the time will be assigned to the Prolog predicate.
The predicate {\bf profile(Goal, Flags)} can be used to change
the way profiling is performed; {\it Flags} is a list of flags.
Currently only the flag {\tt simple} is accepted and it
causes separate profiling of simple predicates, i.e.
those written in C:
\begin{quote}
\begin{verbatim}
[eclipse 6]: profile(boyer, [simple]).
rewriting...
proving...
goal succeeded
PROFILING STATISTICS
--------------------
Goal: boyer
Total user time: 10.55s
Predicate Module %Time Time
-------------------------------------------------
=.. /2 sepia_kernel 31.1% 3.28s
garbage_collect /0 sepia_kernel 23.5% 2.48s
rewrite /2 eclipse 21.6% 2.28s
rewrite_args /2 eclipse 17.2% 1.82s
equal /2 eclipse 4.1% 0.43s
remainder /3 eclipse 0.9% 0.10s
...
plus /3 eclipse 0.1% 0.01s
yes.
\end{verbatim}
\end{quote}
%\section{Using the Statistics Facility}
%The statistics tool is predicate based.
%The user can switch on statistics collection
%for all predicates or for selected ones.
%
%The statistics tool is closely related to the debugger.
%In order to apply it to a program, this program must be
%compiled in {\bf dbgcomp}-mode and it must be run with the debugger
%switched on.
%
%\noindent
%A sample output from the statistics tool looks like this:
%\begin{verbatim}
% PROCEDURE # MODULE #CALL #EXIT #TRY #CUT #NEXT #FAIL
%true /0 sepia_k 2 2 0 0 0 0
%fail /0 sepia_k 27 0 0 0 0 27
%set_flag /3 sepia_k 1 1 0 0 0 0
%env /0 sepia_k 1 1 1 0 2 0
%spaces /1 sepia_k 309 309 309 286 23 0
%! /0 sepia_k 286 286 0 0 0 0
%open /3 sepia_k 1 1 0 0 0 0
%|TOTAL: PROCEDURES: 7 627 600 310 286 25 27
%\end{verbatim}
%
%The numbers show how often the execution passed the various predicate ports
%(for a description of the ports see \ref{chapdebug}).
%In coroutine mode the table has 2 more columns for DELAY and WAKE ports.
%The relation between the debugger ports and the statistics counters is as
%follows:
%
%\begin{description}
%\item [CALL -] counts CALL ports
%\item [EXIT -] counts EXIT and *EXIT ports
%\item [TRY -] there is no corresponding port, it stands for entering the
%first of several matching clauses or a disjunction (choicepoint creation)
%\item [CUT -] counts CUT ports
%\item [NEXT -] counts NEXT ports
%\item [FAIL -] counts FAIL and *FAIL ports
%\item [DELAY -] counts DELAY ports (in coroutine mode only)
%\item [WAKE -] counts WAKE ports (in coroutine mode only)
%\end{description}
%
%\noindent
%Ports that can not be displayed by the debugger are not available for
%the statistics tool either, ie.
%\begin{itemize}
%\item subgoals of predicates that are set to {\tt skipped} (user predicates
%are not skipped by default)
%\item subgoals of predicates that are compiled in {\bf nodbgcomp}-mode
%\item untraceable predicates (user predicates and all built-ins are
%traceable by default)
%\end{itemize}
%
%\noindent
%There is a global flag {\tt statistics} (accessed with \bipref{set_flag/2}{../bips/kernel/env/set_flag-2.html},
%\bipref{get_flag/2}{../bips/kernel/env/get_flag-2.html}) that can take four possible values:
%\begin{itemize}
%\item {\bf off} - no procedure is counted
%\item {\bf some} - some specified (using \bipref{set_flag/3}{../bips/kernel/compiler/set_flag-3.html} or
%\bipref{set_statistics/2}{../bips/kernel/obsolete/set_statistics-2.html}) procedures are counted
%\item {\bf all} - all traceable procedures are counted
%\item {\bf mode} - like {\bf all}, but the mode usage is also collected
%\end{itemize}
%
%The output of the statistics tool goes to the {\tt output} stream.
%Most of the time it is useful to write it into a file using
%\begin{quote}\begin{verbatim}
%?- open(table, write, output), print_statistics, close(output).
%\end{verbatim}\end{quote}
%where it can be further processed.
%The statistics table can be sorted on a specified column
%with the Unix {\it sort(2)} command, e.g.
%\begin{quote}\begin{verbatim}
%sort -n -r +4 table
%\end{verbatim}\end{quote}
%will sort with procedures that exited most frequently first.
%
%To improve the performance of a program, the following
%considerations might apply:
%
%\begin{itemize}
%\item The {\bf CALL} ports show how often a procedure is called
%and hence procedures with many CALLS are crucial to the program's
%performance.
%
%\item Many {\bf TRY} ports show that either the procedure
%is really nondeterministic, or that it is written in such a manner
%that the compiler cannot decide which clause will match a given
%call and so it has to create a choice point and try several clauses
%in sequence.
%
%\item {\bf NEXT} ports most often mean that the compiler did not
%succeed in picking the right clause at the first try
%and so another one had to be tried.
%Rewriting the procedure might help, as well as providing mode
%declarations.
%
%\item If there are many fewer {\bf CUT} ports than {\bf CALL} ports
%of the procedure \bipref{!/0}{../bips/kernel/control/I-0.html}, it means that some cuts in
%the program source are redundant.
%\end{itemize}
%
%\subsection{Exhaustive Collection}
%To get complete statistics about a program execution, i.e. to collect
%information about all predicates executed, the global {\tt statistics}-flag
%is used. A sample session follows:
%\begin{verbatim}
%[eclipse 1]: [qsort]. % compile (in dbgcomp mode !)
%/.../qsort.pl compiled 2708 bytes in 0.02 seconds
%
%yes.
%[eclipse 2]: set_flag(statistics, all). % switch collecting on
%
%yes.
%[eclipse 3]: debug(go_qsort). % run program under debugger control
%Start debugging - leap mode
%Stop debugging.
%
%yes.
%[eclipse 4]: print_statistics. % print the results
% PROCEDURE # MODULE #CALL #EXIT #TRY #CUT #NEXT #FAIL
%go_qsort /0 eclipse 1 1 0 0 0 0
%list50 /1 eclipse 1 1 0 0 0 0
%qsort /3 eclipse 101 101 0 0 0 0
%partition /4 eclipse 275 275 225 103 122 0
%=< /2 sepia_k 225 103 0 0 0 122
%! /0 sepia_k 103 103 0 0 0 0
%|TOTAL: PROCEDURES: 6 706 584 225 103 122 122
%
%yes.
%\end{verbatim}
%By redirecting the {\tt output} stream the table can be printed into a file.
%It can then be easily sorted and printed.
%
%Calling {\tt set_flag(statistics, all)} again will reset all counters to zero,
%\newline
%{\tt set_flag(statistics, off)} will reset all counters to zero and
%disable further statistics.
%The current value of the global {\tt statistics}-flag can be queried with
%\bipref{get_flag/2}{../bips/kernel/env/get_flag-2.html} or \bipref{env/0}{../bips/kernel/env/env-0.html}.
%
%Counter values of individual predicates can be retrieved using
%the built-in \bipref{get_statistics/2}{../bips/kernel/obsolete/get_statistics-2.html}. It returns an 8-element list of the
%counters in the same order as displayed by \bipref{print_statistics/0}{../bips/kernel/obsolete/print_statistics-0.html}
%(i.e. \#CALL, \#EXIT, \#TRY, \#CUT, \#NEXT, \#FAIL, \#DELAY, \#WAKE).
%\begin{verbatim}
%[eclipse 1]: get_statistics(partition/4, Counters).
%
%Counters = [275, 275, 225, 103, 122, 0, 0, 0]
%yes.
%\end{verbatim}
%There is also a corresponding built-in \bipref{set_statistics/2}{../bips/kernel/obsolete/set_statistics-2.html} that allows
%initialising the counters. It can be used for collecting cumulative statistics.
%
%\subsection{Selective Collection}
%It is possible to collect statistic information only for some
%specified predicates.
%For that purpose, every predicate has an individual {\tt statistics}-flag.
%A predicate is selected for statistics by switching this flag to {\tt on},
%using:
%\begin{verbatim}
%set_flag(PredSpec, statistics, on).
%\end{verbatim}
%This will also initialise the predicate's counters to zero.
%Initialising the predicate's counters using \bipref{set_statistics/2}{../bips/kernel/obsolete/set_statistics-2.html} will also
%select this predicate for statistics and set its {\tt statistics}-flag.
%In both cases, the global flag will change to {\tt some}, provided its
%old value was not {\tt all}.
%\begin{verbatim}
%[eclipse 1]: set_flag(statistics, off). % reset all counters
%
%yes.
%[eclipse 2]: set_flag(partition/4, statistics, on),
% set_flag(qsort/3, statistics, on). % select some predicates
%
%yes.
%[eclipse 3]: get_flag(statistics, X).
%
%X = some
%yes.
%[eclipse 4]: debug(go_qsort). % run program under debugger control
%Start debugging - leap mode
%Stop debugging.
%
%yes.
%[eclipse 5]: print_statistics. % print the results
% PROCEDURE # MODULE #CALL #EXIT #TRY #CUT #NEXT #FAIL
%partition /4 eclipse 275 275 225 103 122 0
%qsort /3 eclipse 101 101 0 0 0 0
%|TOTAL: PROCEDURES: 2 376 376 225 103 122 0
%
%yes.
%\end{verbatim}
%
%\subsection{Obtaining mode information}
%\index{mode/1}
%\index{mode statistics}
%The global {\tt statistics}-flag can take another value called {\tt mode}.
%This has the same effect as {\tt all}, but in addition there is
%information collected about the actual arguments of predicate calls.
%The arguments are tested for being instantiated and for groundness.
%This information is helpful to provide mode declarations for
%a program.
%The results are displayed by the {\tt print_modes/0} built-in in the form
%of a mode declaration summarising the information that could be extracted
%from the predicate calls executed.
%The output is in the form of a mode declaration that can be read in and
%executed.
%\begin{verbatim}
%[eclipse 1]: set_flag(statistics,mode).
%
%yes.
%[eclipse 2]: debug(go_qsort).
%Start debugging - leap mode
%Stop debugging.
%
%yes.
%[eclipse 3]: print_modes.
%:- mode
% list50(-),
% qsort(++, -, ++),
% partition(++, ++, -, -).
%
%yes.
%\end{verbatim}
%Note that these modes are not the result of a program analysis.
%They just indicate what arguments occurred in the sample run of the program.
%Hence it may well be that running the program with different data
%produces different mode statistics.
%
%If a procedure already has an explicit mode declaration, the modes
%returned by the mode statistics will not be more restrictive than the
%declaration, e.g. if a declaration
%\begin{verbatim}
%:- mode p(+).
%\end{verbatim}
%exists and {\bf p/1} is called with a ground argument only, the generated
%mode will be {\bf +} rather than {\bf ++}.
%
| {
"alphanum_fraction": 0.6709953812,
"avg_line_length": 39.0052219321,
"ext": "tex",
"hexsha": "192c439587d3019c28c63563366f1c0dc12b24ec",
"lang": "TeX",
"max_forks_count": 55,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T05:00:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-03T05:28:12.000Z",
"max_forks_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lambdaxymox/barrelfish",
"max_forks_repo_path": "usr/eclipseclp/documents/userman/umsprofile.tex",
"max_issues_count": 12,
"max_issues_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f",
"max_issues_repo_issues_event_max_datetime": "2020-03-18T13:30:29.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-03-22T14:44:32.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lambdaxymox/barrelfish",
"max_issues_repo_path": "usr/eclipseclp/documents/userman/umsprofile.tex",
"max_line_length": 127,
"max_stars_count": 111,
"max_stars_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lambdaxymox/barrelfish",
"max_stars_repo_path": "usr/eclipseclp/documents/userman/umsprofile.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-01T23:57:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-02-03T02:57:27.000Z",
"num_tokens": 4040,
"size": 14939
} |
\chapter{Rendering Surface Geometries using the NMM approach}
\label{chap:diffflssnmm}
In section $\ref{sec:snakegeomrenderings}$ we mentioned that we use the FLSS approach instead of the NMM approach for producing our renderings applied to a snake mesh. The reason for choosing the FLSS approach was that it produces reliable results (according to its evaluation plots as discussed in section $\ref{sec:virtualtestbench}$). Furthermore, the colors of renderings resulting from the NMM approach look purplish compared to those produced by the FLSS approach, as shown in figure $\ref{fig:appendixflssvsnmm}$. This figure shows renderings of our snake mesh when using an Elaphe grating, produced by the FLSS approach (see figure $\ref{fig:appendixcompflsselaphe}$) and the NMM approach (see figure $\ref{fig:appendixcompnmmlaphe}$). We observe that pixels which have a bluish color tone in the FLSS renderings exhibit a purplish color tone at the corresponding positions in an NMM rendering. This color-tone shift towards the purple color region in NMM renderings does not correspond to reality. The issue is directly related to the non-uniform sampling of the wavelength spectrum in the NMM approach.
\begin{figure}[H]
\centering
\subfigure[FLSS approach]{
\includegraphics[scale=0.45]{appendix/flss.png}
\label{fig:appendixcompflsselaphe}
}
~
\subfigure[NMM approach]{
\includegraphics[scale=0.45]{appendix/nmm.png}
\label{fig:appendixcompnmmlaphe}
}
~
\caption[Comparing the NMM Approach with the FLSS Approach]{Comparing the FLSS rendering approach with the NMM approach by rendering an Elaphe grating.}
\label{fig:appendixflssvsnmm}
\end{figure}
In order to address this color-tone issue we have to revisit how we compute the color values in our renderings. For this purpose let us consider equation $\ref{eq:tristimrad}$, which defines how to compute the CIE XYZ color values. Without loss of generality let us consider the computation of the luminance $Y$, which is equal to:
\begin{align}
Y = \int_{\Lambda}L_\lambda(\omega_r)\overline{y}(\lambda)d\lambda \nonumber
\end{align}
In this formulation we integrate over the whole wavelength spectrum $\Lambda$ in order to compute the color value $Y$. However, in the NMM approach we perform a \emph{non-uniform} integration over the wavelength spectrum. Thus, instead of directly integrating over the wavelength spectrum, we integrate uniformly between the \emph{minimum} and \emph{maximum} wavenumber (denoted as $N_{min}$ and $N_{max}$, respectively) as explained in section $\ref{sec:nmmapproach}$. Therefore, we no longer integrate over the wavelength spectrum; rather, we integrate over the corresponding wavenumber range $[N_{min}, N_{max}]$. Hence, this represents a change of integration variables, which introduces a correction factor. Unfortunately, I did not take this factor into account in the NMM approach, and this is why my rendered images produced by the NMM approach look purplish. In the following I describe what this factor is equal to. \\
The wavenumber for a particular wavelength $\lambda$ is equal to
\begin{align}
k = \frac{2 \pi}{\lambda} \nonumber
\end{align}
The sampling of the NMM approach performs an integration over infinitesimal wavenumbers $dk$ instead of over $d\lambda$. By rearranging the definition of the wavenumber $k$ we can derive the following identity for the wavelength $\lambda$:
\begin{align}
\lambda = \frac{2 \pi}{k} \nonumber
\end{align}
Thus, the correction factor for changing the integration variables from $d\lambda$ to $dk$ can be computed as follows:
\begin{alignat}{4}
	& \frac{d\lambda}{dk} &&= \frac{d}{dk} \left(\frac{2 \pi}{k} \right) = -\frac{2 \pi}{k^2} \nonumber \\
	\Rightarrow{} & d\lambda &&= -\frac{2 \pi}{k^2} dk \nonumber
\end{alignat}
The minus sign only reflects that the wavelength decreases as the wavenumber increases; it is absorbed by exchanging the integration limits when switching to the wavenumber range $[N_{min}, N_{max}]$. This leads us to the final representation for performing an integration over the wavenumber range, as defined in equation $\ref{eq:wavenumberintegration}$:
\begin{align}
Y
&= \int_{\Lambda}L_\lambda(\omega_r)\overline{y}(\lambda)d\lambda \nonumber \\
&= \int_{N_{min}}^{N_{max}} L_k(\omega_r)\overline{y}(k) \frac{2 \pi}{k^2} dk
\label{eq:wavenumberintegration}
\end{align}
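As a minimal numerical sketch of why this factor matters (the spectral functions below are simple placeholders, not the actual CIE curves or the renderer's spectra), one can approximate the integral once by uniform sampling in wavelength and once by uniform sampling in wavenumber with the Jacobian factor $\frac{2\pi}{k^2}$; both estimates agree up to discretisation error, whereas omitting the factor overweights short wavelengths and shifts the result towards blue and purple tones.
\begin{verbatim}
# Illustrative sketch only: compare integration over wavelength with
# integration over wavenumber including the Jacobian factor 2*pi/k^2.
import numpy as np

def L(lam):     # toy spectral radiance (flat)
    return np.ones_like(lam)

def ybar(lam):  # toy sensitivity curve standing in for the CIE y-bar
    return np.exp(-0.5 * ((lam - 550e-9) / 50e-9) ** 2)

lam = np.linspace(380e-9, 780e-9, 20001)
Y_lambda = np.trapz(L(lam) * ybar(lam), lam)

k = np.linspace(2 * np.pi / 780e-9, 2 * np.pi / 380e-9, 20001)
lam_of_k = 2 * np.pi / k
Y_k = np.trapz(L(lam_of_k) * ybar(lam_of_k) * 2 * np.pi / k**2, k)

print(Y_lambda, Y_k)  # the two estimates agree up to discretisation error
\end{verbatim}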
| {
"alphanum_fraction": 0.7703847084,
"avg_line_length": 75.1454545455,
"ext": "tex",
"hexsha": "0278ec5b06fd45121a3428da559d63e2d2308489",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ef450c5420b768b2a1fd84c9ad768f34db12fc88",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "simplay/Bachelor-Thesis",
"max_forks_repo_path": "document/Source/Chapters/appendix_results.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "ef450c5420b768b2a1fd84c9ad768f34db12fc88",
"max_issues_repo_issues_event_max_datetime": "2016-05-13T14:35:57.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-05-13T14:35:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "simplay/Bachelor-Thesis",
"max_issues_repo_path": "document/Source/Chapters/appendix_results.tex",
"max_line_length": 1103,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ef450c5420b768b2a1fd84c9ad768f34db12fc88",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "simplay/Bachelor-Thesis",
"max_stars_repo_path": "document/Source/Chapters/appendix_results.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1108,
"size": 4133
} |
\subsection{posix -- The most common POSIX system calls}
To be done ....
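As a minimal, illustrative sketch only (the choice of calls below is an assumption about what this section will eventually cover), Python exposes the common POSIX system calls through the portable \texttt{os} module:
\begin{verbatim}
import os

pid = os.getpid()                # process id
cwd = os.getcwd()                # current working directory
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT, 0o644)  # low-level open
os.write(fd, b"hello\n")         # low-level write on the file descriptor
os.close(fd)
print(pid, cwd)
\end{verbatim}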
%
| {
"alphanum_fraction": 0.6842105263,
"avg_line_length": 15.2,
"ext": "tex",
"hexsha": "8cca4f61ef019756f3f961038a1620bd823c3063",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2016-11-24T19:55:47.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-11-24T19:55:47.000Z",
"max_forks_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "remigiusz-suwalski/programming-notes",
"max_forks_repo_path": "src/python3/sections/posix.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "remigiusz-suwalski/programming-notes",
"max_issues_repo_path": "src/python3/sections/posix.tex",
"max_line_length": 56,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "remigiusz-suwalski/programming-notes",
"max_stars_repo_path": "src/python3/sections/posix.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-28T05:03:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-02-28T05:03:18.000Z",
"num_tokens": 17,
"size": 76
} |
\chapter{Symbolic notation}
\label{app.notation}
\section{Alternative nomenclature}
\paragraph{Truth-functional logic.} TFL goes by other names. Sometimes it is called \emph{sentential logic}, because this branch of logic deals fundamentally with sentences. Sometimes it is called \emph{propositional logic}, because it might also be thought to deal fundamentally with propositions. We have used \emph{truth-functional logic} to emphasize that it deals only with assignments of truth and falsity to sentences and that its connectives are all truth-functional.
\paragraph{Formulas.} In \S\ref{s:TFLSentences}, we defined \emph{sentences} of TFL. These are also sometimes called `formulas' (or `well-formed formulas') since in TFL there is no distinction between a formula and a sentence.
\paragraph{Valuations.} Some texts call valuations \emph{truth-assignments} or \emph{truth-value assignments}.
\section{Alternative symbols}
In the history of formal logic, different symbols have been used at different times and by different authors. Often, authors were forced to use notation that their printers could typeset. This appendix presents some common symbols, so that you can recognize them if you encounter them in an article or in another book.
\paragraph{Negation.} Two commonly used symbols are the \emph{hoe}, `$\neg$', and the \emph{swung dash} or \emph{tilde}, `${\sim}$.' In some more advanced formal systems it is necessary to distinguish between two kinds of negation; the distinction is sometimes represented by using both `$\neg$' and `${\sim}$'. Older texts sometimes indicate negation by a line over the formula being negated, e.g., $\overline{A \eand B}$.
\paragraph{Disjunction.} The symbol `$\vee$' is typically used to symbolize inclusive disjunction. One etymology is from the Latin word `vel', meaning `or'.%In some systems, disjunction is written as addition.
\begin{table*}\centering\sffamily\footnotesize
\ra{1.25}
\begin{tabular}{@{}l l@{}}\toprule
\textth{Symbols of formal logic} & \\\midrule
negation & $\neg$, ${\sim}$\\
conjunction & $\wedge$, $\&$, {\scriptsize\textbullet}\\
disjunction & $\vee$\\
conditional & $\rightarrow$, $\supset$\\
biconditional & $\leftrightarrow$, $\equiv$\\
\bottomrule
\end{tabular}
\caption{}\label{symbols-all}
\end{table*}
\paragraph{Conjunction.}
Conjunction is often symbolized with the \emph{ampersand}, `{\&}'. The ampersand is a decorative form of the Latin word `et', which means `and'. (Its etymology still lingers in certain fonts, particularly in italic fonts; thus an italic ampersand might appear as `\emph{\&}'.) This symbol is commonly used in natural English writing (e.g. `Smith \& Sons'), and so even though it is a natural choice, many logicians use a different symbol to avoid confusion between the object and metalanguage---as a symbol in a formal system, the ampersand is not the English word `\&'. The most common choice now is `$\wedge$', which is a counterpart to the symbol used for disjunction. Sometimes a single dot, `{\scriptsize\textbullet}', is used. In some older texts, there is no symbol for conjunction at all; `$A$ and $B$' is simply written `$AB$'.
\paragraph{Conditional.} There are two common symbols for the conditional (which can also be called the \textit{material conditional}): the \emph{arrow}, `$\rightarrow$', and the \emph{hook}, `$\supset$'.
\paragraph{Biconditional.} The \emph{double-headed arrow}, `$\leftrightarrow$', is used in systems that use the arrow to represent the biconditional. Systems that use the hook for the conditional typically use the \emph{triple bar}, `$\equiv$', for the biconditional.
%
%
%
%\section*{Polish notation}
%
%This section briefly discusses sentential logic in Polish notation, a system of notation introduced in the late 1920s by the Polish logician Jan {\L}ukasiewicz.
%
%Lower case letters are used as sentence letters. The capital letter $N$ is used for negation. $A$ is used for disjunction, $K$ for conjunction, $C$ for the conditional, $E$ for the biconditional. (`A' is for alternation, another name for logical disjunction. `E' is for equivalence.)
%%\marginpar{
%%\begin{tabular}{cc}
%%notation & Polish\\
%%of TFL & notation\\
%%\enot & $N$\\
%%\eand & $K$\\
%%\eor & $A$\\
%%\eif & $C$\\
%%\eiff & $E$
%%\end{tabular}
%%}
%
%In Polish notation, a binary connective is written \emph{before} the two sentences that it connects. For example, the sentence $A\eand B$ of TFL would be written $Kab$ in Polish notation.
%
%The sentences $\enot A\eif B$ and $\enot (A\eif B)$ are very different; the main logical operator of the first is the conditional, but the main connective of the second is negation. In TFL, we show this by putting parentheses around the conditional in the second sentence. In Polish notation, parentheses are never required. The left-most connective is always the main connective. The first sentence would simply be written $CNab$ and the second $NCab$.
%
%This feature of Polish notation means that it is possible to evaluate sentences simply by working through the symbols from right to left. If you were constructing a truth table for $NKab$, for example, you would first consider the truth-values assigned to $b$ and $a$, then consider their conjunction, and then negate the result. The general rule for what to evaluate next in TFL is not nearly so simple. In TFL, the truth table for $\enot(A\eand B)$ requires looking at $A$ and $B$, then looking in the middle of the sentence at the conjunction, and then at the beginning of the sentence at the negation. Because the order of operations can be specified more mechanically in Polish notation, variants of Polish notation are used as the internal structure for many computer programming languages.
%
| {
"alphanum_fraction": 0.7553006604,
"avg_line_length": 82.2,
"ext": "tex",
"hexsha": "739903220fa85f5131d46e7a5a8b567ccd342964",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d3ee7928df9679c938298571a51e5505ea21920a",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "loighic/forallx-msu",
"max_forks_repo_path": "forallx-msu-part5--notation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d3ee7928df9679c938298571a51e5505ea21920a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "loighic/forallx-msu",
"max_issues_repo_path": "forallx-msu-part5--notation.tex",
"max_line_length": 838,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d3ee7928df9679c938298571a51e5505ea21920a",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "loighic/forallx-msu",
"max_stars_repo_path": "forallx-msu-part5--notation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1459,
"size": 5754
} |
% ******************************* Thesis Appendix A ****************************
\chapter{Supporting information: Chapter 2}
\singlespacing
\includepdf[pages={-}, rotateoversize, offset=0.4cm 0cm, addtotoc= {1,section,1,Supplementary figures and tables,hlabel}]{Appendix1/suppmat.pdf}
| {
"alphanum_fraction": 0.6145833333,
"avg_line_length": 41.1428571429,
"ext": "tex",
"hexsha": "79cce04623a47681db1790e99d7730c24ef166be",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9d13275747f193f3d73ff18dc79113d3fd968af1",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "andrewletten/LettenPhdThesis2015",
"max_forks_repo_path": "Appendix1/appendix1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9d13275747f193f3d73ff18dc79113d3fd968af1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "andrewletten/LettenPhdThesis2015",
"max_issues_repo_path": "Appendix1/appendix1.tex",
"max_line_length": 144,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9d13275747f193f3d73ff18dc79113d3fd968af1",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "andrewletten/LettenPhdThesis2015",
"max_stars_repo_path": "Appendix1/appendix1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 76,
"size": 288
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage[english]{babel}
\usepackage{csquotes}
\usepackage[backend=biber,style=unified]{biblatex}
\usepackage{hyperref}
\urlstyle{same}
\pagestyle{empty}
\addbibresource{unified-test.bib}
\begin{document}
\section*{Unified biblatex style sheet for linguistics}
\nocite{*}
\printbibliography[heading=none]
\end{document}
| {
"alphanum_fraction": 0.7866004963,
"avg_line_length": 17.5217391304,
"ext": "tex",
"hexsha": "06226986d5ba38ad0511d38f71e75630f1eb2f96",
"lang": "TeX",
"max_forks_count": 18,
"max_forks_repo_forks_event_max_datetime": "2021-11-11T15:31:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-06-06T18:43:33.000Z",
"max_forks_repo_head_hexsha": "d2c958f8b4bed9490d5dd144c21d450dee746ab2",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "alecshaw/biblatex-sp-unified",
"max_forks_repo_path": "unified-test.tex",
"max_issues_count": 42,
"max_issues_repo_head_hexsha": "d2c958f8b4bed9490d5dd144c21d450dee746ab2",
"max_issues_repo_issues_event_max_datetime": "2022-02-07T12:35:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-06-14T18:46:15.000Z",
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "alecshaw/biblatex-sp-unified",
"max_issues_repo_path": "unified-test.tex",
"max_line_length": 55,
"max_stars_count": 36,
"max_stars_repo_head_hexsha": "d2c958f8b4bed9490d5dd144c21d450dee746ab2",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "alecshaw/biblatex-sp-unified",
"max_stars_repo_path": "unified-test.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-06T18:46:58.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-29T21:17:52.000Z",
"num_tokens": 121,
"size": 403
} |
\documentclass[journal,12pt,twocolumn]{IEEEtran}
\usepackage{setspace}
\usepackage{gensymb}
\singlespacing
\usepackage[cmex10]{amsmath}
%\usepackage{amsthm}
%\interdisplaylinepenalty=2500
%\savesymbol{iint}
%\usepackage{txfonts}
%\restoresymbol{TXF}{iint}
%\usepackage{wasysym}
\usepackage{amsthm}
\usepackage{mathrsfs}
\usepackage{txfonts}
\usepackage{stfloats}
\usepackage{bm}
\usepackage{cite}
\usepackage{cases}
\usepackage{subfig}
\usepackage{longtable}
\usepackage{multirow}
\usepackage{enumitem}
\usepackage{mathtools}
\usepackage{steinmetz}
\usepackage{tikz}
\usepackage{circuitikz}
\usepackage{verbatim}
\usepackage{tfrupee}
\usepackage[breaklinks=true]{hyperref}
\usepackage{tkz-euclide} %loads TikZ and tkz-base
\usetikzlibrary{calc,math}
\usepackage{listings}
\usepackage{color}
\usepackage{array}
\usepackage{longtable}
\usepackage{calc}
\usepackage{multirow}
\usepackage{hhline}
\usepackage{ifthen}
\usepackage{lscape}
\usepackage{multicol}
\usepackage{chngcntr}
\DeclareMathOperator*{\Res}{Res}
\renewcommand\thesection{\arabic{section}}
\renewcommand\thesubsection{\thesection.\arabic{subsection}}
\renewcommand\thesubsubsection{\thesubsection.\arabic{subsubsection}}
\renewcommand\thesectiondis{\arabic{section}}
\renewcommand\thesubsectiondis{\thesectiondis.\arabic{subsection}}
\renewcommand\thesubsubsectiondis{\thesubsectiondis.\arabic{subsubsection}}
\hyphenation{op-tical net-works semi-conduc-tor}
\def\inputGnumericTable{} %%
\lstset{
%language=C,
frame=single,
breaklines=true,
columns=fullflexible
}
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{problem}{Problem}
\newtheorem{proposition}{Proposition}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{example}{Example}[section]
\newtheorem{definition}[problem]{Definition}
\newcommand{\BEQA}{\begin{eqnarray}}
\newcommand{\EEQA}{\end{eqnarray}}
\newcommand{\define}{\stackrel{\triangle}{=}}
\bibliographystyle{IEEEtran}
\providecommand{\mbf}{\mathbf}
\providecommand{\pr}[1]{\ensuremath{\Pr\left(#1\right)}}
\providecommand{\qfunc}[1]{\ensuremath{Q\left(#1\right)}}
\providecommand{\sbrak}[1]{\ensuremath{{}\left[#1\right]}}
\providecommand{\lsbrak}[1]{\ensuremath{{}\left[#1\right.}}
\providecommand{\rsbrak}[1]{\ensuremath{{}\left.#1\right]}}
\providecommand{\brak}[1]{\ensuremath{\left(#1\right)}}
\providecommand{\lbrak}[1]{\ensuremath{\left(#1\right.}}
\providecommand{\rbrak}[1]{\ensuremath{\left.#1\right)}}
\providecommand{\cbrak}[1]{\ensuremath{\left\{#1\right\}}}
\providecommand{\lcbrak}[1]{\ensuremath{\left\{#1\right.}}
\providecommand{\rcbrak}[1]{\ensuremath{\left.#1\right\}}}
\theoremstyle{remark}
\newtheorem{rem}{Remark}
\newcommand{\sgn}{\mathop{\mathrm{sgn}}}
\providecommand{\abs}[1]{\left\vert#1\right\vert}
\providecommand{\res}[1]{\Res\displaylimits_{#1}}
\providecommand{\norm}[1]{\left\lVert#1\right\rVert}
%\providecommand{\norm}[1]{\lVert#1\rVert}
\providecommand{\mtx}[1]{\mathbf{#1}}
\providecommand{\mean}[1]{E\left[ #1 \right]}
\providecommand{\fourier}{\overset{\mathcal{F}}{ \rightleftharpoons}}
%\providecommand{\hilbert}{\overset{\mathcal{H}}{ \rightleftharpoons}}
\providecommand{\system}{\overset{\mathcal{H}}{ \longleftrightarrow}}
%\newcommand{\solution}[2]{\textbf{Solution:}{#1}}
\newcommand{\solution}{\noindent \textbf{Solution: }}
\newcommand{\cosec}{\,\text{cosec}\,}
\providecommand{\dec}[2]{\ensuremath{\overset{#1}{\underset{#2}{\gtrless}}}}
\newcommand{\myvec}[1]{\ensuremath{\begin{pmatrix}#1\end{pmatrix}}}
\newcommand{\mydet}[1]{\ensuremath{\begin{vmatrix}#1\end{vmatrix}}}
\numberwithin{equation}{subsection}
\makeatletter
\@addtoreset{figure}{problem}
\makeatother
\let\StandardTheFigure\thefigure
\let\vec\mathbf
\renewcommand{\thefigure}{\theproblem}
\def\putbox#1#2#3{\makebox[0in][l]{\makebox[#1][l]{}\raisebox{\baselineskip}[0in][0in]{\raisebox{#2}[0in][0in]{#3}}}}
\def\rightbox#1{\makebox[0in][r]{#1}}
\def\centbox#1{\makebox[0in]{#1}}
\def\topbox#1{\raisebox{-\baselineskip}[0in][0in]{#1}}
\def\midbox#1{\raisebox{-0.5\baselineskip}[0in][0in]{#1}}
\vspace{3cm}
\title{Matrix Theory (EE5609) Challenging Problem}
\author{Arkadipta De\\MTech Artificial Intelligence\\AI20MTECH14002}
\maketitle
\newpage
%\tableofcontents
\bigskip
\renewcommand{\thefigure}{\theenumi}
\renewcommand{\thetable}{\theenumi}
\begin{abstract}
This document proves that a set of nonzero, mutually orthogonal vectors is linearly independent.
\end{abstract}
Download latex codes from
%
\begin{lstlisting}
https://github.com/Arko98/EE5609/tree/master/Challenge_2
\end{lstlisting}
%
\section{Problem}
Suppose that the nonzero vectors $\vec{v}_{1},\vec{v}_2,\dots,\vec{v}_{n}$ are mutually orthogonal, i.e., $\vec{v_i^T}\vec{v}_{j}=0$ for $i\not=j$. Prove that these vectors are also linearly independent.
\section{Proof}
Let us consider the following linear combination
\begin{align}
c_1\vec{v_1}+c_2\vec{v_2}+\dots+c_n\vec{v_n}= 0\label{eq1}
\end{align}
We have to show that in \eqref{eq1}, $c_1=0$, $c_2=0$ and so on up to $c_n=0$.\\
We compute the dot product of both sides of \eqref{eq1} with $\vec{v_i}$ as follows:
\begin{align}
\vec{v_i^T}\brak{c_1\vec{v_1}+c_2\vec{v_2}+\dots+c_n\vec{v_n}} &=0\\
\implies c_1\vec{v_i^T}\vec{v_1}+c_2\vec{v_i^T}\vec{v_2}+\dots+c_n\vec{v_i^T}\vec{v_n} &=0\\
\intertext{As $\vec{v_i^T}\vec{v}_{j}=0$ for all $i\not=j$}
\implies c_i\vec{v_i^T}\vec{v_i} &=0\\
\implies c_i\norm{\vec{v_i}}^2 &=0
\intertext{As the vectors are nonzero, $\norm{\vec{v_i}}^2 \not = 0$, hence}
\implies c_i & = 0\label{eqFinal}
\end{align}
\eqref{eqFinal} holds for every vector in the orthogonal set, hence,
\begin{align}
c_1 = c_2 = \dots = c_n = 0\label{eqproved}
\end{align}
Hence, from \eqref{eqproved}, the set of orthogonal vectors is linearly independent.
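As a quick numerical illustration of this result (a sketch only, not part of the proof; the vectors below are arbitrary examples), one can check that mutually orthogonal nonzero vectors form a matrix of full column rank, so the only solution of \eqref{eq1} is the trivial one.
\begin{lstlisting}
# Illustration only: mutually orthogonal nonzero vectors are independent,
# i.e. the matrix having them as columns has full column rank.
import numpy as np

v1 = np.array([1.0, 1.0, 2.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([1.0, 1.0, -1.0])
V = np.column_stack([v1, v2, v3])

G = V.T @ V                                      # Gram matrix
assert np.allclose(G - np.diag(np.diag(G)), 0)   # pairwise orthogonality
print(np.linalg.matrix_rank(V))                  # 3: the vectors are independent
\end{lstlisting}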
\end{document}
| {
"alphanum_fraction": 0.7011606997,
"avg_line_length": 35.7719298246,
"ext": "tex",
"hexsha": "c785a250574b98eaeeffe277d926ec9fd65abb74",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2020-10-01T17:05:21.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-02T11:29:27.000Z",
"max_forks_repo_head_hexsha": "7c72720b4e5241a9dc3b62b38d4537f2cdd67e07",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Arko98/EE5609-Matrix-Theory-",
"max_forks_repo_path": "Challenge_4/beamer.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7c72720b4e5241a9dc3b62b38d4537f2cdd67e07",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Arko98/EE5609-Matrix-Theory-",
"max_issues_repo_path": "Challenge_4/beamer.tex",
"max_line_length": 208,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7c72720b4e5241a9dc3b62b38d4537f2cdd67e07",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Arko98/EE5609-Matrix-Theory-",
"max_stars_repo_path": "Challenge_4/beamer.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2117,
"size": 6117
} |
\chapter{Facility Airside Operation and Design Overview}
\label{ch:Facility Airside Operation and Design Overview}
\section{Explanation of Facility Operation}
\label{sec:Explanation of Facility Operation}
Before listing the criteria and limits used to design the facility, it is beneficial to describe how the airside cycle within the facility will operate. The closed-loop airside subsystem contains the tested heat exchanger coil, along with an airflow measurement apparatus (i.e. code tester) and the necessary conditioning equipment to recirculate air and achieve the desired set point condition at the inlet of the tested coil. As air flows over the tested coil in the test section, the thermodynamic properties of air are modified through heat addition or rejection. Using the conditioning section, the air properties are then returned to the desired set point conditions before returning to the inlet of the tested coil. A schematic of the airside subsystem can be seen in Figure \ref{fig:TunnelAirsideSchematic}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{TunnelAirsideSchematic}
\caption{Simplified schematic of airside schematic with major components identified}
\label{fig:TunnelAirsideSchematic}
\end{figure}
Air travels through the tested coil, where properties are changed through heat addition, heat rejection, and/or dehumidification depending on the experiment. From the exit of the tested coil, the air flows through two sets of turning vanes and enters the conditioning section of the facility. The conditioning section contains a series combination of air filters, a code tester, conditioning coils, variable speed fans, electric reheat, steam humidification, and dampers. The air filters prevent any large debris or contaminants from entering subsequent conditioning equipment. The code tester allows for the calculation of airflow rate. It contains a nozzle plane with upstream and downstream air settling means, placed according to ASHRAE Standard 41.2 (2018) specifications. Four conditioning coils, arranged vertically into two pairs, counteract the change in air properties that occurred at the test coil by conditioning the air. A pair of dampers, downstream of the conditioning coils, determine which conditioning coil(s) air crosses over. Each coil can independently operate in heating or cooling mode. Variable speed fans provide the pressure rise needed to pass the air throughout the airside loop. Electric heaters provide reheat and are intended for precise air temperature control. Humidity control is achieved by a steam humidifier and injection manifold, allowing moisture to be reintroduced to the airstream. The damper located after the steam injection allows operation at reduced airflow rates by increasing static pressure on the fans. Upon leaving the conditioning section, air is returned to the test section via turning vanes. Air is then mixed to reduce temperature and humidity stratification throughout the cross section of the test section. Finally, a set of settling means creates a more uniform air velocity distribution before again arriving at the inlet of the tested coil.
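As a rough illustration of the kind of calculation the code tester enables (a simplified sketch with assumed values; the actual procedure follows ASHRAE Standard 41.2 and includes nozzle discharge and expansion corrections not shown here), the volumetric flow through a single nozzle can be estimated from the measured pressure drop across the nozzle plane:
\begin{verbatim}
# Simplified single-nozzle airflow estimate; all numbers are assumed,
# illustrative values, not facility measurements.
import math

Cd = 0.98                # assumed nozzle discharge coefficient
d = 0.178                # assumed throat diameter in m (about 7 in)
A = math.pi * d**2 / 4   # throat area in m^2
dP = 250.0               # assumed pressure drop across the nozzle in Pa
rho = 1.2                # assumed air density in kg/m^3

Q = Cd * A * math.sqrt(2.0 * dP / rho)   # volumetric flow in m^3/s
print(Q * 2118.88)                       # approximate conversion to CFM
\end{verbatim}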
\section{Design Operating Envelope}
\label{sec: Design Operating Envelope}
With a basic understanding of how the facility airside will operate, it is necessary to determine the physical size of the facility, as well as the limits of operation for set point conditions. Commercial-size heat exchanger coils are the primary type of coil to be tested in this facility. The test section was designed to best accommodate this type of heat exchanger. A target operating envelope, which includes the desired ranges of temperature, humidity, and airflow rate for the facility, was originally developed by Bach and Sarfraz (2016). This operating envelope has since been modified from what Bach and Sarfraz (2016) presented due to equipment limitations. The final operating envelope, shown in Table \ref{tab:OpEnvelope}, served as the basis for the final facility design.
\begin{table}[h]
\centering
\caption{Desired facility operating envelope, used as design inputs}
\label{tab:OpEnvelope}
\begin{tabular}{|c|c|}
\hline
\textbf{Parameter} & \textbf{Value} \\ \hline
{Temperature} & {0\degree F to 140\degree F} \\ \hline
{Humidity} & {20\% to 90\%} \\ \hline
{Test Coil Capacity} & {23 tons at 67\degree F} \\ \hline
{Maximum Air Flow Rate} & {8000 CFM} \\ \hline
{Overall Dimensions (L x W x H)} & {42 ft x 12 ft x 9 ft} \\ \hline
{Test Section Dimensions (W x H)} & {7 ft x 8 ft} \\ \hline
{Conditioning Section Dimensions (W x H)} & {4 ft x 8 ft} \\ \hline
\end{tabular}
\end{table}
The final design of the facility incorporated the design parameters seen above. The facility consists of two major subsystems: an airside subsystem where coils are tested and a conditioning subsystem which manages the heat within the airside subsystem. The primary focus of this thesis is to describe the design and construction of the airside subsystem.\\
| {
"alphanum_fraction": 0.7981687898,
"avg_line_length": 109.2173913043,
"ext": "tex",
"hexsha": "549a7b61e29c1b81c7a0b66b2d81c0307956b96e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "59f8339fe956b41e4599491079626be6211bd1cd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mkinche/Thesis-Revisions",
"max_forks_repo_path": "Ch-FacilityAirsideDesign/FacilityAirsideDesign.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "59f8339fe956b41e4599491079626be6211bd1cd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mkinche/Thesis-Revisions",
"max_issues_repo_path": "Ch-FacilityAirsideDesign/FacilityAirsideDesign.tex",
"max_line_length": 1903,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "59f8339fe956b41e4599491079626be6211bd1cd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mkinche/Thesis-Revisions",
"max_stars_repo_path": "Ch-FacilityAirsideDesign/FacilityAirsideDesign.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1093,
"size": 5024
} |
%-----------------------------------------------------------------
%Author: Yan Naing Aye
%Date: 2012 Feb 24
%-----------------------------------------------------------------
\documentclass[12pt,a4paper]{report} % Specifies the document class
\usepackage{graphicx} %for pdf, bitmapped graphics files
\usepackage{amsmath} %to facilitate writing math formulas and to improve the typographical quality
\usepackage{amssymb} %provides an extended symbol collection
\usepackage{wasysym} %provides many glyphs
%\usepackage{subfigure} %for subfigures
%\usepackage{subcaption}
\usepackage{epstopdf} %converts eps to pdf
%\usepackage{fullpage} %to use full page
\usepackage[table]{xcolor} %color extensions (for tables)
%\usepackage[numbers]{natbib} %reimplementation of \cite command
\usepackage{datetime} %date time
\usepackage[pdftitle={Aye-Thesis},pdfauthor={Yan Naing Aye}]{hyperref}
\hypersetup{
colorlinks,
citecolor=black,
filecolor=black,
linkcolor=black,
urlcolor=black
}
\usepackage[font={small,sf},labelfont=bf]{caption} %to change figure caption font
%\usepackage{float} %to use figure with [H] option
%\usepackage{fancyhdr} %fancy headers
%-----------------------------------------------------------------
%Macros
\def\titleSentence{REAL-TIME HIGH PERFORMANCE DISPLACEMENT SENSING IN HANDHELD INSTRUMENT FOR MICROSURGERY}
% I use this title in several places throughout the report;
% that is why I define it as the \titleSentence command,
% so that I only need to change it here and everything will be updated accordingly
%-----------------------------------------------------------------
\newdateformat{mydate}{\THEYEAR} %year only
%\newdateformat{mydate}{\monthname[\THEMONTH] \THEYEAR} %use this for month and year
%-----------------------------------------------------------------
%\newcommand{\ip}[2]{(#1, #2)}
% Define a new command called \los to include list of symbol easily.
% It is used in ListOfSymbols.tex file
\newcommand{\los}[2]{\parbox[t]{5cm}{$#1 \dotfill$} \parbox[t]{10cm}{#2}\\[0.6cm]} %-----------------------------------------------------------------
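% Example usage (the symbols below are hypothetical placeholders,
% to be replaced by the actual entries in ListOfSymbols.tex), e.g.:
% \los{f_s}{Sampling frequency}
% \los{\theta}{Deflection angle of the instrument tip}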
\renewcommand{\bibname}{References}%change Bibliography to References
\renewcommand{\baselinestretch}{1.5} %linespace
%\linespread{1.6}
%-----------------------------------------------------------------
\pagestyle{headings} %comment out usepackage{fullpage} to use headings style
%-----------------------------------------------------------------
\begin{document} %End of preamble and beginning of text.
%\input{./Files/Title_Comfirmation.tex} %Title page for comfirmation report
\input{./Files/Title_Hard.tex} %Title page for thesis hardbound
\input{./Files/Title.tex} %Title page for thesis
%-----------------------------------------------------------------------------
\newpage
\pagenumbering{roman} %add this if there is an Acknowledgment section
%\chapter*{Abstract}
%\addcontentsline{toc}{chapter}{Abstract}
\begin{abstract}
\input{./Files/Abstract}
\end{abstract}
%-----------------------------------------------------------------
\newpage
\chapter*{Acknowledgments}
\addcontentsline{toc}{chapter}{Acknowledgment}
\input{./Files/Acknowledge}
% -----------------------------------------------------------------
\newpage
\addcontentsline{toc}{chapter}{Table of Contents}
\tableofcontents
%------------------------------------------------------------------
\newpage
\addcontentsline{toc}{chapter}{List of Figures}
\listoffigures
%-----------------------------------------------------------------
\newpage
\addcontentsline{toc}{chapter}{List of Tables}
\listoftables
\clearpage
%-----------------------------------------------------------------
\newpage
\markboth{\MakeUppercase{List of Symbols and Abbreviations}}{\MakeUppercase{List of Symbols and Abbreviations}}
\chapter*{List of Symbols and Abbreviations\hfill}
\addcontentsline{toc}{chapter}{List of Symbols and Abbreviations}
\input{./Files/ListOfSymbols}
\clearpage
%-----------------------------------------------------------------
\newpage
\pagenumbering{arabic}
\setcounter{page}{1}
%-----------------------------------------------------------------
\input{./Files/Ch_Intro}
%-----------------------------------------------------------------
\input{./Files/Ch_Literature}
%-----------------------------------------------------------------
\input{./Files/Ch_SystemDesign}
%-----------------------------------------------------------------
\input{./Files/Ch_Conclusion}
%-----------------------------------------------------------------
\appendix
\input{./Files/ApErr}
%-----------------------------------------------------------------
\bibliographystyle{ieeetr}
%\bibliographystyle{cell}
%\bibliographystyle{apalike}
\bibliography{./Files/Ref}
\addcontentsline{toc}{chapter}{References}
% -----------------------------------------------------------------
\end{document}
% -----------------------------------------------------------------
| {
"alphanum_fraction": 0.504398827,
"avg_line_length": 46.0810810811,
"ext": "tex",
"hexsha": "112359eedf143409726b5453598647b26aba0092",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "790cd01434910c8e081dc8b21e0110614393e72a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "yan9a/LaTeX_Template_Thesis",
"max_forks_repo_path": "main_file.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "790cd01434910c8e081dc8b21e0110614393e72a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "yan9a/LaTeX_Template_Thesis",
"max_issues_repo_path": "main_file.tex",
"max_line_length": 210,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "790cd01434910c8e081dc8b21e0110614393e72a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "yan9a/LaTeX_Template_Thesis",
"max_stars_repo_path": "main_file.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1134,
"size": 5115
} |
\documentclass[accentcolor=tud2c,usenames,dvipsnames,colorbacktitle,inverttitle,landscape,german,presentation,t]{tudbeamer}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{slashed}
\usepackage{color}
\usepackage{physics}
\usepackage{graphicx}
\usepackage{braket}
% \usepackage[utf8]{inputenc}
\begin{document}
\input{macros.tex}
\setbeamerfont{footline}{size=\fontsize{1}{1}\selectfont}
\title{Chiral Green's functions and Ward identities}
\subtitle{\small{Matthias Heinz}}
\author{Matthias Heinz}
\institute[Institut f\"ur Kernphysik, TU Darmstadt]{Institut f\"ur Kernphysik, TU Darmstadt}
\date{January 30, 2020}
\setbeamertemplate{section in toc}[ball unnumbered]
\setbeamertemplate{subsection in toc}[ball unnumbered]
\nocite{*}
\begin{titleframe}
\vskip3em
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Outline:
\vskip2em
\begin{enumerate}
\item Ward identities in a $\U{1}$ example
\vskip2em
\item Chiral Ward identities via the algebra of currents
\vskip2em
\item The chiral generating functional
\end{enumerate}
\end{column}
\end{columns}
% \includegraphics[width=0.75\textwidth]{figures/05/critical_point_illustration}
% \\\footnotesize{Stephanov 2009}
\end{titleframe}
\section{Ward identities in a $\U{1}$ example}
\begin{frame}
\frametitle{Scalar $\Phi^4$ theory with a global $\U{1}$ symmetry}
\begin{equation*}
\mathcal{L}^0 = \frac{1}{2}(\dmulop{\Phidag}\dmuhip{\Phi})
- \frac{m^2}{2} \Phidag \Phi - \frac{\lambda}{4} (\Phidag \Phi)^2
\end{equation*}
\vskip3em
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Global $\U{1}$ symmetry:
\begin{equation*}
\begin{array}{cc}
\Phi \rightarrow (1 + i \epsilon) \Phi, &
\Phidag \rightarrow (1 - i \epsilon) \Phidag, \\
\end{array}
\end{equation*}
\vskip1em
Conserved Noether current:
\begin{equation*}
\Jmu = i (\dmuhip{\Phidag} \Phi - \Phidag \dmuhip{\Phi})
\end{equation*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Scalar $\Phi^4$ theory with a global $\U{1}$ symmetry}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Example Green's function:
\begin{equation*}
\Gmu(x,y,z) = \mel*{0}{\timeorder{\Phi(x)\Jmu(y)\Phidag(z)}}{0},
\end{equation*}
\vskip1em
Symmetry constraint:
\begin{equation*}
\begin{array}{cc}
\Jmu \rightarrow \Jmu, &
\Gmu \rightarrow \Gmu, \\
\end{array}
\end{equation*}
\vskip1em
Example Ward identity:
\begin{align*}
\dmulox{\Gmu(x,y,z)}{y} = & (\delta^4(y-x) - \delta^4(y-z))\mel*{0}{\timeorder{\Phi(x) \Phidag(z)}}{0} \\
& + \mel*{0}{\timeorder{\Phi(x) \dmulopx{\Jmu(y)}{y} \Phidag(z)}}{0},
\end{align*}
\end{column}
\end{columns}
\end{frame}
% \begin{frame}
% \frametitle{Recap of path integral formalism \\ Maybe just skip and explain on actual generating functional}
% \begin{columns}[c]
% \begin{column}{0.8\textwidth}
% Green's functions via path integral:
%
% \begin{equation*}
% \mel{0}{\timeorder{\Phidag(x) \Phi(y)}}{0} =
% \int \mathcal{D}\Phistar \mathcal{D}\Phi \Phistar(x) \Phi(y) \exp(i S[\Phi, \Phistar]),
% \end{equation*}
%
% Generating functional:
%
% \begin{equation*}
% W[j, j^{*}] = \mel*{0}{\timeorder{\exp(i\int d^4x[j(x) \Phidag (x) + j^{*}(x) \Phi(x)])}}{0},
% \end{equation*}
%
% Green's functions via functional derivatives:
%
% \begin{equation*}
% \mel{0}{\timeorder{\Phidag(x) \Phi(y)}}{0} = \left.\left(-i\frac{\delta}{\delta j(x)} \right) \left(-i\frac{\delta}{\delta j^{*}(y)} \right) W[j, j^{*}] \right\rvert_{j=0,j^{*}=0},
% \end{equation*}
% \end{column}
% \end{columns}
% \end{frame}
\begin{frame}
\frametitle{Generating functional for $\Phi^4$}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Generating functional:
\begin{equation*}
W[j, j^{*}, j_{\mu}] = \mel*{0}{\timeorder{\exp{i\int d^4x[j(x) \Phidag (x) + j^{*}(x) \Phi(x) + j_{\mu}(x) \Jmu(x)]}}}{0},
\end{equation*}
Our example Green's function:
\begin{equation*}
\Gmu(x,y,z) = \left.(-i)^3 \frac{\delta^3 W[j, j^{*}, j_{\mu}]}{\delta j^{*}(x) \delta j_{\mu}(y) \delta j(z)}\right\rvert_{j=0,j^{*}=0,j_{\mu}=0},
\end{equation*}
As path integral:
\vskip-1em
\begin{equation*}
\only<1>{W[j, j^{*}, j_{\mu}] = \int \mathcal{D}\Phistar \mathcal{D}\Phi \exp(i \int d^4x[\mathcal{L}^{0}(x) + \mathcal{L}_{\textrm{ext}}(x)]),}
\only<2>{W[j, j^{*}, j_{\mu}] = \int \mathcal{D}\Phistar \mathcal{D}\Phi \exp(i S[\Phi, \Phistar, j, j^{*}, j_{\mu}]),}
\end{equation*}
\only<1>{
\begin{equation*}
\mathcal{L}_{\textrm{ext}}(x) = j(x) \Phi^{*} (x) + j^{*}(x) \Phi(x) + j_{\mu}(x) \Jmu(x),
\end{equation*}
}
\only<2>{Note: Only in the presence of external fields can we demand that $\mathcal{L}$ remain invariant under \textit{local} transformations.}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{The master equation for $\Phi^4$}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Demanding $S[\Phi, \Phidag, j, j^{*}, j_{\mu}] = S[\Phi^{\prime}, \Phi^{\prime \dagger}, j^{\prime}, j^{\prime*}, j^{\prime}_{\mu}]$ gives:
\begin{align*}
j(x) & \rightarrow (1 + i \epsilon(x))j(x), \\
j^{*}(x) & \rightarrow (1 - i \epsilon(x))j^{*}(x), \\
      j_{\mu}(x) & \rightarrow j_{\mu}(x) - \dmulop{\epsilon(x)},
\end{align*}
% \begin{equation*}
% \begin{array}{ccc}
% j(x) \rightarrow (1 + i \epsilon(x))j(x), &
% j^{*}(x) \rightarrow (1 - i \epsilon(x))j^{*}(x), &
% j_{\mu}(x) \rightarrow j_{\mu} - \dmulop{\epsilon(x)}, \\
% \end{array}
% \end{equation*}
We observe that this also means:
\begin{equation*}
W[j, j^{*}, j_{\mu}] = W[j^{\prime}, j^{\prime*}, j^{\prime}_{\mu}],
\end{equation*}
Master equation:
\begin{equation*}
\only<1>{0 = \int d^{4}x \epsilon(x) \left[ i j(x) \frac{\delta}{\delta j(x)} - i j^{*}(x) \frac{\delta}{\delta j^{*}(x)} + \dmulox{\frac{\delta}{\delta j_{\mu}(x)}}{x} \right] W[j, j^{*}, j_{\mu}],}
\only<2>{0 = \left[ j(x) \frac{\delta}{\delta j(x)} - j^{*}(x) \frac{\delta}{\delta j^{*}(x)} - i \dmulox{\frac{\delta}{\delta j_{\mu}(x)}}{x} \right] W[j, j^{*}, j_{\mu}],}
\end{equation*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{QCD in the chiral limit}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
\begin{equation*}
\mathcal{L}_{\textrm{QCD}}^{0} = \sum_{l=u,d,s}(\bar{q}_{R,l}i\slashed{D}q_{R,l} + \bar{q}_{L,l}i\slashed{D}q_{L,l})
- \frac{1}{4} \mathcal{G}_{a\mu\nu} \mathcal{G}_{a}^{\mu\nu},
\end{equation*}
Symmetry group:
\begin{equation*}
\U{3}_{L}\times\U{3}_{R} \xrightarrow[]{\textrm{Quantization}}\suxsuxu
\end{equation*}
\end{column}
\end{columns}
\vskip3em
\begin{columns}[t]
\begin{column}{0.45\textwidth}
Conserved currents:
\begin{itemize}
\item $\vecoct = R_{a}^{\mu} + L_{a}^{\mu} = \vecoctexpl$,
\item $\axvoct = R_{a}^{\mu} - L_{a}^{\mu} = \axvoctexpl$,
\item $\vecsing = R^{\mu} + L^{\mu} = \vecsingexpl$,
\end{itemize}
\end{column}
\begin{column}{0.45\textwidth}
Color-neutral quadratic forms:
\begin{itemize}
\item $\scalardensityx{a}{x} = \scalardensityexplx{a}{x}$,
\item $\pscalardensityx{a}{x} = \pscalardensityexplx{a}{x}$,
\end{itemize}
Note: $a=0,...,8$
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Chiral Green's functions and Ward identities \\ \small{\textit{An example}}}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Green's function:
\begin{equation*}
\Gmu_{APab}(x, y) = \mel*{0}{\timeorder{\axvoct(x)\pscalardensityx{b}{y}}}{0},
\end{equation*}
Ward identity:
\end{column}
\end{columns}
\vskip1em
\begin{equation*}
\dmulox{\Gmu_{APab}(x,y)}{x} = \delta(x_0 - y_0) \mel*{0}{\commutator{A_{a}^{0}(x)}{P_{b}(y)}}{0}
+ \mel*{0}{\timeorder{\dmulopx{\axvoct(x)}{x}\pscalardensityx{b}{y}}}{0},
\end{equation*}
\vskip1em
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Generalization to any $(n+1)$-point functions:
\begin{align*}
% \begin{equation*}
\partial_{\mu}^{x} &\mel*{0}{\timeorder{\Jmu(x)A_1(x_1)\ldots A_n(x_n)}}{0} = \mel*{0}{\timeorder{\dmulopx{\Jmu(x)}{x}A_1(x_1)\ldots A_n(x_n)}}{0} \\
& + \delta(x^{0} - x_{1}^{0}) \mel*{0}{\timeorder{[J_{0}(x), A_{1}(x_{1})] A_{2}(x_2)\ldots A_n(x_n)}}{0} \\
& + \ldots \\
& + \delta(x^{0} - x_{n}^{0}) \mel*{0}{\timeorder{A_{1}(x_1)A_2(x_2) \ldots [J_{0}(x), A_{n}(x_{n})] }}{0},
% \end{equation*}
\end{align*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Algebra of currents}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
\begin{itemize}
\item We could now evaluate $[J_{0}(x), A_{n}(x_{n})]$ commutators
\item \textit{But we have to be careful}
\item QED current example:
\begin{itemize}
\item $[J_0(t, \vec{x}), J_{i}(t, \vec{y})] = 0$
\item from which one can show $\mel*{0}{J_0(t, \vec{x})}{n} = 0$
\end{itemize}
\item Fix: Schwinger term in original charge-current commutator
\item In general, charge-current commutation relations only determined up to a derivative of a delta function
\item Another problem: used naive time-ordered product rather than \textit{covariant} time-ordered product
\item Seagull terms from covariant time-ordering cancel with Schwinger terms (Feynman)
\end{itemize}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Chiral generating functional}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Extend chiral Lagrangian to include external fields (sources):
\begin{equation*}
\mathcal{L} = \mathcal{L}^{0}_{\textrm{QCD}} + \lext
\end{equation*}
with
\begin{equation*}
\only<1>{\lext = \sum_{a = 1}^{8} v_a^{\mu} \vecoct + \frac{1}{3} v_{(s)}^{\mu} \vecsing + \sum_{a=1}^{8} a_a^{\mu} \axvoct
- \sum_{a=0}^{8}s_{a} \scalardensity{a} + \sum_{a=0}^{8}p_{a} \pscalardensity{a},}
\only<2>{\color{black}\lext = \bar{q} \gamma_{\mu} \left( \color{red}v^{\mu} \color{black}+ \frac{1}{3} \color{red}v_{(s)}^{\mu} \color{black}+ \gamma_5 \color{red}a^{\mu} \color{black}\right) q
- \bar{q} ( \color{red}s \color{black}- i \gamma_5 \color{red}p\color{black}) q,}
\end{equation*}
\vskip1em
\pause
using definitions
\begin{equation*}
\begin{array}{cc}
v^{\mu} = \sum_{a=1}^{8} v_{a}^{\mu} \frac{\lambda_a}{2}, &
a^{\mu} = \sum_{a=1}^{8} a_{a}^{\mu} \frac{\lambda_a}{2}, \\
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{cc}
s = \sum_{a=0}^{8} s_{a} \lambda_a, &
p = \sum_{a=0}^{8} p_{a} \lambda_a, \\
\end{array}
\end{equation*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Chiral generating functional \\ \small{\textit{Some examples}}}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Generating functional:
\begin{equation*}
W[v,a,s,p] = \mel*{0}{\timeorder{\exp{i \int d^4x \lext(x)}}}{0}_{0},
\end{equation*}
\only<1>{Chiral limit example:}
\only<2>{Physical example:}
\begin{equation*}
\only<1>{\bar{u} u = \frac{1}{2} \bar{q} \left(\sqrt{\frac{2}{3}} \lambda_0 + \lambda_3 + \frac{1}{\sqrt{3}} \lambda_8 \right) q,}
\end{equation*}
\end{column}
\end{columns}
\begin{equation*}
\only<1>{\mel*{0}{\bar{u}(x) u(x)}{0}_{0} = \frac{i}{2} \left. \left[ \sqrt{\frac{2}{3}} \frac{\delta}{\delta s_0(x)} + \frac{\delta}{\delta s_3(x)} + \frac{1}{\sqrt{3}} \frac{\delta}{\delta s_8(x)} \right] W[v,a,s,p]\right\rvert_{v=a=s=p=0},}
\only<2>{\mel*{0}{\timeorder{\axvoct(x)\pscalardensityx{b}{y}}}{0} = (-i)^2 \left. \frac{\delta^2}{\delta a_{a\mu}(x) \delta p_{b}(y)} W[v,a,s,p] \right\rvert_{v=a=p=0,s=\textrm{diag}(m_{u}, m_{d}, m_{s})},}
\end{equation*}
\end{frame}
\begin{frame}
\frametitle{Constraining external fields}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
We demand of $\mathcal{L}$ that it is:
\begin{itemize}
\item Hermitian Lorentz scalar
\item Even under $P$ and $C$
\item Invariant under local chiral transformations
\end{itemize}
\pause
Parity:
\begin{align*}
v^{\mu} &\xrightarrow[]{P} v_{\mu}, \\
v^{\mu}_{(s)} &\xrightarrow[]{P} v^{(s)}_{\mu}, \\
a^{\mu} &\xrightarrow[]{P} - a_{\mu}, \\
s &\xrightarrow[]{P} s, \\
p &\xrightarrow[]{P} -p,
\end{align*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Constraining external fields}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
We demand of $\mathcal{L}$ that it is:
\begin{itemize}
\item Hermitian Lorentz scalar
\item Even under $P$ and $C$
\item Invariant under local chiral transformations
\end{itemize}
Charge conjugation:
\begin{align*}
v_{\mu} &\xrightarrow[]{C} -v^{T}_{\mu}, \\
v_{\mu}^{(s)} &\xrightarrow[]{C} -v^{(s)T}_{\mu}, \\
a_{\mu} &\xrightarrow[]{C} a^{T}_{\mu}, \\
s &\xrightarrow[]{C} s^{T}, \\
p &\xrightarrow[]{C} p^{T},
\end{align*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Constraining external fields}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
Local chiral transformation:
\begin{equation*}
\begin{array}{cc}
q_{R} \rightarrow \exp{-i\frac{\Theta(x)}{3}} V_{R}(x) q_{R}, &
q_{L} \rightarrow \exp{-i\frac{\Theta(x)}{3}} V_{L}(x) q_{L},
\end{array}
\end{equation*}
After splitting our external fields into $r_{\mu}=v_{\mu}+a_{\mu}$ and $l_{\mu}=v_{\mu}-a_{\mu}$:
\begin{align*}
r_{\mu} &\rightarrow V_{R} r_{\mu} V_{R}^{\dagger} + i V_{R} \dmulop{V_{R}^{\dagger}}, \\
l_{\mu} &\rightarrow V_{L} l_{\mu} V_{L}^{\dagger} + i V_{L} \dmulop{V_{L}^{\dagger}}, \\
v_{\mu}^{(s)} &\rightarrow v_{\mu}^{(s)} - \dmulop{\Theta}, \\
s + ip &\rightarrow V_{R} (s + ip) V_{L}^{\dagger}, \\
s - ip &\rightarrow V_{L} (s - ip) V_{R}^{\dagger},
\end{align*}
\end{column}
\end{columns}
\end{frame}
\begin{frame}
\frametitle{Key takeaways}
\begin{columns}[c]
\begin{column}{0.8\textwidth}
$\U{1}$ example:
\begin{itemize}
\item Local invariance of generating functional contains all Ward identities of theory
\end{itemize}
Chiral Ward identities from algebra of currents:
\begin{itemize}
\item Using the algebra of currents, one must tread with caution (Schwinger and seagull terms)
\end{itemize}
Generating functional for chiral Green's functions:
\begin{itemize}
\item Allows one to compute Green's functions for chiral limit and ``real'' world
\item Can constrain transformation behavior of external fields by invariance of generating functional under local transformations
\end{itemize}
\end{column}
\end{columns}
\end{frame}
%yank 7
% \begin{frame}
% \frametitle{Frame template}
% \begin{columns}[c]
% \begin{column}{0.8\textwidth}
% \end{column}
% \end{columns}
% \end{frame}
\begin{frame}[allowframebreaks]
\frametitle{References}
\bibliographystyle{apalike}
\bibliography{bibfile}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.567385528,
"avg_line_length": 33.5746887967,
"ext": "tex",
"hexsha": "4ac4be39b45d645886a9e7a4e6f1c5ad1e8cc0c4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a209e153c6847342bda44a7326d338fdcd15d63c",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "cheshyre/talks",
"max_forks_repo_path": "2020/01.30_Seminar_TheoreticalHadronPhysics/MHeinz_Chiral_Ward_Identities.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a209e153c6847342bda44a7326d338fdcd15d63c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "cheshyre/talks",
"max_issues_repo_path": "2020/01.30_Seminar_TheoreticalHadronPhysics/MHeinz_Chiral_Ward_Identities.tex",
"max_line_length": 245,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a209e153c6847342bda44a7326d338fdcd15d63c",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "cheshyre/talks",
"max_stars_repo_path": "2020/01.30_Seminar_TheoreticalHadronPhysics/MHeinz_Chiral_Ward_Identities.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6011,
"size": 16183
} |
\subsubsection{Broadcast time}\label{subsubsec:rect2krtime}
Since in all the experiments we reached a coverage of at least \(99\%\), here we
only discuss the broadcast time needed to reach the 99th percentile of the
coverage.
We have a large unexplained variation (\(12.31\%\)), similar to the other
scenarios. The reason for this result is the same and it will be discussed
in \chref{ch:starting-node}.
We can see that the most important factor is the broadcast radius, which
accounts for \(65.40\%\) of the variation, followed by the size of the hear
window (\(20.58\%\)). The other factors and their combinations are irrelevant.
These results are aligned with those obtained for the other scenarios, so the
same considerations apply.
As in the other scenarios, we had to apply a logarithmic transformation to the
predicted variable in order to meet the assumption of finite variance for the
residuals.
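As a purely illustrative aside, the following Python sketch shows how the percentages of variation in a \(2^k r\) analysis like this one can be computed after a logarithmic transformation of the response; the factors and response values below are synthetic, not data from these experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# hypothetical 2^2 r design: A = broadcast radius, B = hear window size,
# r = 3 replications per combination; the response values are synthetic
signs_A = np.array([-1, +1, -1, +1])
signs_B = np.array([-1, -1, +1, +1])
r = 3
base_time = np.exp(3.0 + 0.8 * signs_A + 0.4 * signs_B)
y = np.log(base_time[:, None] * rng.lognormal(0.0, 0.05, size=(4, r)))

cell_means = y.mean(axis=1)
q_A = (signs_A * cell_means).sum() / 4
q_B = (signs_B * cell_means).sum() / 4
q_AB = (signs_A * signs_B * cell_means).sum() / 4

# allocation of variation: SS_j = 2^k * r * q_j^2, SSE from replications
SS_A, SS_B, SS_AB = (4 * r * q**2 for q in (q_A, q_B, q_AB))
SSE = ((y - cell_means[:, None]) ** 2).sum()
SST = SS_A + SS_B + SS_AB + SSE
for name, ss in (("radius", SS_A), ("hear window", SS_B),
                 ("interaction", SS_AB), ("unexplained", SSE)):
    print(name, round(100 * ss / SST, 2), "%")
\end{verbatim}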
| {
"alphanum_fraction": 0.782937365,
"avg_line_length": 48.7368421053,
"ext": "tex",
"hexsha": "516155603cba065467996075d0afaeeeccb92801",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "SpeedJack/pecsn",
"max_forks_repo_path": "doc/chapters/scenarios/rectangular/2kr/time.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "SpeedJack/pecsn",
"max_issues_repo_path": "doc/chapters/scenarios/rectangular/2kr/time.tex",
"max_line_length": 80,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "SpeedJack/pecsn",
"max_stars_repo_path": "doc/chapters/scenarios/rectangular/2kr/time.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 214,
"size": 926
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
%\usepackage[letterpaper, margin=1in]{geometry}
% allows for temporary adjustment of side margins
\usepackage{chngpage}
\title{SchnellSort 2016}
\author{Greg Hogan \\ \href{mailto:[email protected]}{[email protected]} }
\date{August 2016}
\usepackage{comment}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{multirow}
\usepackage{titlesec}
\usepackage[backend=biber]{biblatex}
\addbibresource{references.bib}
\begin{document}
\maketitle
\section{Introduction}
Apache Flink \cite{apacheflink} is an open source platform for distributed stream and batch data processing. Flink programs are assembled from a fluent selection of map, reduce, and join transformations and interoperate within the Apache big data software stack. Each program is compiled, optimized, and executed in the Flink distributed runtime with non-blocking transformations operating concurrently. Flink places a particular focus on streaming but handles batch processing as the special case in which the input source is consumed.
Sorting is foundational but, in isolation, a painfully simple task for modern big data frameworks. This report presents an exploration of CloudSort \cite{cloudsort} using Apache Flink with an emphasis on documenting the process for others and our future selves.
The current CloudSort champion \cite{tritonsort2014} ran on Amazon Web Services \cite{amazonwebservices} and a follow-up report \cite{tritonsort2015} analyzed GraySort on Google Cloud Platform \cite{googlecloudplatform}. These and other prior benchmarks used persistent block storage \cite{tritonsort2014} or ephemeral instance storage \cite{tritonsort2015} \cite{apachespark2014}. The following Indy CloudSort explores the use and current limitations of object storage for persistent input and output datasets.
\section{CloudSort on Public Clouds}
Public clouds sell three kinds of storage:
\begin{itemize}
\item ephemeral instance storage (legacy Amazon EC2, Google local SSD)
\item persistent block storage (Amazon EBS, Google Persistent Disk)
\item persistent object storage (Amazon S3, Google Cloud Storage)
\end{itemize}
\begin{table}
\begin{adjustwidth}{-1.5in}{-1.5in}
\centering
\begin{tabular}{ | l | c | c | c | c | c | c | }
\hline
\multirow{2}{*}{Storage Type} & \multicolumn{3}{|c|}{Amazon Web Services} & \multicolumn{3}{|c|}{Google Cloud Platform} \\
\cline{2-7}
& S3 Standard & S3 Reduced & EBS gp2 & GCS Standard & GCS Reduced & Persistent Disk \\
\hline
\$/GiB-month & 0.03 & 0.024 & 0.10 & 0.026 & 0.02 & 0.17 \\
\$/100 TB-hour & 3.89 & 3.11 & 12.94 & 3.37 & 2.59 & 21.99 \\
\hline
\end{tabular}
\caption{Cost for cloud persistent storage}
\label{table:persistentcost}
\end{adjustwidth}
\end{table}
\begin{table}
\begin{adjustwidth}{-1.5in}{-1.5in}
\centering
\begin{tabular}{ | l | c | c | c | c | c | c | c | c | c | c | }
\hline
& \multicolumn{6}{|c|}{Amazon Web Services} & \multicolumn{4}{|c|}{Google Cloud Platform} \\
\hline
Instance type & c4 & c3 & m4 & r3 & i2 & x1 & highcpu & standard & highmem & Local SSD \\
\hline
Price (\$/hr) & 1.675 & 1.68 & 2.394 & 2.66 & 6.82 & 13.338 & 1.20 & 1.60 & 2.00 & 0.113 \\
\hline
Memory (GiB) & 60 & 60 & 160 & 244 & 244 & 1952 & 28.8 & 120 & 208 & - \\
Memory (\$/100 TB-hr) & baseline & - & 669.39 & 498.41 & 2603.37 & 571.49 & baseline & 408.48 & 423.20 & - \\
\hline
SSD (GB) & - & 640 & - & 640 & 6400 & 3840 & - & - & - & 375 \\
SSD (\$/100 TB-hr) & - & 0.79 & - & 153.91 & 80.40 & 303.73 & - & - & - & 30.14 \\
\hline
\end{tabular}
\caption{Cost for cloud ephemeral storage}
\label{table:ephemeralcost}
\end{adjustwidth}
\end{table}
The CloudSort sort benchmark requires that the input and output datasets be written to persistent storage. Comparative pricing is provided for persistent storage in table \ref{table:persistentcost} and ephemeral storage in table \ref{table:ephemeralcost}. A significant distinction is that persistent storage can be allocated by the gibibyte whereas ephemeral storage is a fixed instance allotment (excepting Google local SSD, which is allocated as a multiple of 375 GB disks). This means that clusters using persistent storage can be of nearly any size whereas using ephemeral storage may require a very large cluster and associated I/O. Not reflected in this table is SSD performance, which is very good for Google local SSD and very poor for Amazon c3 instances.
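As a rough cross-check of table \ref{table:persistentcost}, the hourly cost of holding 100 TB follows directly from the monthly per-GiB price. The short Python calculation below is my own arithmetic, assuming a 720-hour month, and approximately reproduces the table values (within a cent or two).
\begin{verbatim}
# approximate $/100 TB-hour from $/GiB-month, assuming a 720-hour month
GIB_PER_100TB = 100e12 / 2**30          # ~93,132 GiB

def per_100tb_hour(price_per_gib_month):
    return price_per_gib_month * GIB_PER_100TB / 720

for name, price in [("S3 Standard", 0.030), ("S3 Reduced", 0.024),
                    ("EBS gp2", 0.100), ("GCS Standard", 0.026),
                    ("GCS Reduced", 0.020), ("Persistent Disk", 0.170)]:
    print(name, round(per_100tb_hour(price), 2))
\end{verbatim}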
CloudSort proceeds in two phases. In the first phase data is read from persistent storage and shuffled across the network. In the second phase the output data is written to persistent storage. In the shuffle each byte transits two network interfaces. Records are sorted in the first phase before spilling to disk. Thus the cluster I/O and CPU requirements are much greater in the first phase compared with the second phase.
Phase 1:
\begin{itemize}
\item read input from persistent storage
\item split into records and range partition
\item shuffle records to remote worker
\item spill records
\end{itemize}
Phase 2:
\begin{itemize}
\item read spilled records
\item merge-sort and count duplicate keys
\item write output to persistent storage
\end{itemize}
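A minimal, single-process Python sketch of these two phases (my illustration only, not the Flink implementation; shuffling and duplicate-key counting are omitted) for 100-byte records with a 10-byte key:
\begin{verbatim}
import heapq

RECORD = 100   # 10-byte key followed by a 90-byte payload
KEY = 10

def spill_sorted_run(records, path):
    """Phase 1 (per worker): sort a batch of shuffled records, spill it."""
    records.sort(key=lambda r: r[:KEY])
    with open(path, "wb") as f:
        f.writelines(records)

def read_run(path):
    with open(path, "rb") as f:
        while True:
            rec = f.read(RECORD)
            if not rec:
                return
            yield rec

def merge_runs(run_paths, out_path):
    """Phase 2 (per worker): merge-sort the spilled runs, write output."""
    runs = [read_run(p) for p in run_paths]
    with open(out_path, "wb") as out:
        for rec in heapq.merge(*runs, key=lambda r: r[:KEY]):
            out.write(rec)
\end{verbatim}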
\begin{table}
\begin{adjustwidth}{-1.5in}{-1.5in}
\centering
\begin{tabular}{ | l | c | c | c | }
\hline
& \multicolumn{2}{|c|}{Amazon Web Services} & Google Cloud Platform \\
\cline{2-4}
& c4 w/EBS & c3 w/Instance Storage & n1-standard-8 w/Local SSD \\
\hline
Instance count & 98 & 164 & 280 \\
Instance disk (GB) & 1000 & 640 & 375 \\
Instance I/O (Gbps) & 10 + 4 EBS & 10 & 16 \\
\hline
Phase 1 maximum I/O (MB/s) & 416 & 380 & 272 \\
Phase 1 total I/O (GB/s) & 40.8 & 62.3 & 76.1 \\
Phase 1 minimum time (s) & 2453 & 1605 & 1314 \\
\hline
Phase 2 maximum I/O (MB/s) & 800 & 480 & 390 \\
Phase 2 total I/O (GB/s) & 78.4 & 78.7 & 109.2 \\
Phase 2 minimum time (s) & 1276 & 1271 & 916 \\
\hline
Overall minimum time (s) & 3729 & 2876 & 2230 \\
Instance cost (\$/hr) & 1.814 & 1.68 & 0.513 \\
Minimum compute cost (\$) & 184.15 & 220.11 & 88.98 \\
\hline
\end{tabular}
\caption{Comparison of Performance and Compute Cost}
\label{table:performancecomparision}
\end{adjustwidth}
\end{table}
Table \ref{table:performancecomparision} lists optimal cluster performance and cost of compute. The configurations include a 5\% storage buffer to allow for filesystem metadata, unbalanced spill file output (even with round-robin distribution the first disk may receive an extra file), and slight partition skew.
These baseline costs do not include the cost of persistent storage for the input and output datasets, require consistent maximum network and disk I/O, and assume no delay when starting the cluster and transitioning between sort phases.
Increasing the number of nodes in a small cluster reduces the required per-instance storage. For AWS, the optimal cluster configurations in table \ref{table:performancecomparision} mark the minimum storage to maintain maximum I/O. Increasing cluster size may result in a faster sort but will not reduce the sort cost. For GCE the cost may be further reduced if an instance can drive full disk I/O with fewer CPUs or less memory.
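The minimum times and costs in table \ref{table:performancecomparision} are simple arithmetic on the per-instance I/O figures. For example, the c4 with EBS column can be reproduced as follows (an illustration using the table's own numbers; the \$1.814 hourly rate presumably folds the attached gp2 volume into the instance price).
\begin{verbatim}
# reproduce the "c4 w/EBS" column of the comparison table
DATA = 100e12                            # 100 TB sort
instances = 98
phase1_io = 416e6                        # bytes/s per instance, phase 1
phase2_io = 800e6                        # bytes/s per instance, phase 2
price = 1.814                            # $/instance-hour

t1 = DATA / (instances * phase1_io)      # ~2453 s
t2 = DATA / (instances * phase2_io)      # ~1276 s
total = t1 + t2                          # ~3729 s
cost = instances * price * total / 3600  # ~$184
print(round(t1), round(t2), round(total), round(cost, 2))
\end{verbatim}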
\section{FlinkSort}
Code, artifacts, and results are available at flink-cloudsort \cite{flink-cloudsort}. All benchmarks were run with vanilla flink-1.1.1 \cite{flink-1.1.1} as the execution was not CPU bound. The code provides a custom IndyRecord implementation for 10-byte keys with 90-byte records. The partitioner and CRC32 for validation are adapted from Apache Hadoop \cite{apachehadoop}.
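For uniformly distributed Indy keys, range partitioning reduces to mapping the leading bytes of the 10-byte key onto a partition index. A minimal sketch of the idea (my illustration, not the adapted Hadoop code itself):
\begin{verbatim}
def indy_partition(key, num_partitions):
    """Map a 10-byte Indy key to a partition using its leading 32 bits.

    Indy keys are uniformly distributed, so equal-width key ranges yield
    balanced partitions without sampling the input.
    """
    prefix = int.from_bytes(key[:4], "big")
    return (prefix * num_partitions) >> 32

assert indy_partition(b"\x00" * 10, 2048) == 0
assert indy_partition(b"\xff" * 10, 2048) == 2047
\end{verbatim}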
There were three major challenges discovered during testing. First, profiling revealed that the Java implementation of SHA-1 used for SSL consumed 50\% of the available CPU cycles. This is solved with intrinsics provided by the upcoming release of Java 8 build 112 paired with a Skylake or newer generation processor implementing the Intel SHA Extensions \cite{intel-sha-extensions}. Since these processors may not be available from cloud providers for several years, the alternative was to pipe input and output through the AWS CLI \cite{awscli}.
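Conceptually, piping through the AWS CLI keeps TLS (and its SHA-1 cost) out of the JVM entirely: each transfer is a child process whose standard output is consumed as the record stream. A sketch of the download side (an illustration, not the project's exact code):
\begin{verbatim}
import subprocess

def open_s3_download(bucket, key):
    """Start `aws s3 cp s3://bucket/key -` and expose stdout as a stream."""
    return subprocess.Popen(
        ["aws", "s3", "cp", "s3://%s/%s" % (bucket, key), "-"],
        stdout=subprocess.PIPE,
    )

proc = open_s3_download("cloudsort", "input/block000000")
while True:
    rec = proc.stdout.read(100)      # one 100-byte record at a time
    if not rec:
        break
    # ... hand the record to the sort pipeline ...
proc.wait()
\end{verbatim}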
The second hurdle was poor network performance starting with 16 x c4.8xlarge instances. This was alleviated by configuring Flink's Netty stack with larger network buffers, increasing the Linux default of 4 MiB to 64 MiB, and increasing the number of Netty threads to equal the number of instance vcores (hyperthreads).
The third hurdle resulted from outlier performance when downloading from or uploading to Amazon S3. This was solved by killing and retrying transfers after a configurable timeout.
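The kill-and-retry behaviour can be sketched as follows (my illustration of the approach, not the repository's code; the file names in the usage example are hypothetical): run the transfer as a child process, kill it if it exceeds the configured timeout, and simply start over.
\begin{verbatim}
import subprocess

def transfer_with_retry(cmd, timeout_s, max_attempts=10):
    """Run an AWS CLI transfer, killing and retrying outlier attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            subprocess.run(cmd, check=True, timeout=timeout_s)
            return attempt
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            pass                       # slow or failed attempt: retry
    raise RuntimeError("transfer failed after %d attempts" % max_attempts)

# e.g. an upload governed by the 60 s upload timeout used in the benchmarks
transfer_with_retry(
    ["aws", "s3", "cp", "/dev/shm/part-0", "s3://cloudsort/output1/part-0"],
    timeout_s=60,
)
\end{verbatim}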
\section{Benchmarks}
\begin{table}
\begin{adjustwidth}{-1.5in}{-1.5in}
\centering
\begin{tabular}{ | l | c | c | c | c | c | }
\hline
Benchmark & \# Nodes & Average Time & Average Cost & Checksum & Duplicate Keys \\
\hline
Indy CloudSort & 129 & 6799.57 s & \$239.58 & 746a51007040ea07ed & 0 \\
\hline
\end{tabular}
\caption{Benchmark Summary}
\label{table:benchmarksummary}
\end{adjustwidth}
\end{table}
The valsort checksum and number of duplicate keys for the 100 TB of non-skewed data are listed in table \ref{table:benchmarksummary}. As in \cite{tritonsort2014}, there were no duplicate keys found. Checksums for any number of gigabytes up to a petabyte can be processed from the provided CRC file \cite{flink-cloudsort} with the following python summation. This is useful for testing smaller quantities of data. gensort \cite{gensort} runs fastest when writing to /dev/null.
The validation concatenation for the 400,384 output files is included in the flink-cloudsort repository \cite{flink-cloudsort}.
\begin{verbatim}
head -n ${BLOCKS} /path/to/crc32 | python -c "import sys; \
print hex(sum(int(l, 16) for l in sys.stdin))[2:].rstrip('L')"
\end{verbatim}
\begin{table}
\begin{adjustwidth}{-1.5in}{-1.5in}
\centering
\begin{tabular}{ | l | c | c | c | c | }
\hline
& Price & Run 1 & Run 2 & Run 3 \\
\hline
Time & & 7133 s & 6561 s & 6706 s \\
\hline
AWS c4.4xlarge Instances & & 129 & 129 & 129 \\
AWS c4.4xlarge Cost & \$0.838/instance-hr & \$214.20 & \$197.02 & \$201.38 \\
\hline
AWS EBS gp2 GiB & & 98994 & 98994 & 98994 \\
AWS EBS gp2 Cost & \$0.10/GiB-mo & \$27.25 & \$25.06 & \$25.62 \\
\hline
AWS S3 Cost & \$0.024/GiB-mo & \$7.59 & \$7.09 & \$7.23 \\
\hline
AWS S3 LIST, PUT & & 401,537 & 401,627 & 401,541 \\
AWS S3 LIST, PUT Cost & \$0.005 per 1,000 & \$2.01 & \$2.01 & \$2.01 \\
\hline
AWS S3 GET & & 100,330 & 100,149 & 100,185 \\
AWS S3 GET Cost & \$0.004 per 10,000 & \$0.05 & \$0.05 & \$0.05 \\
\hline
Total Cost & & \$251.10 & \$231.23 & \$236.39 \\
\hline
\end{tabular}
\caption{Benchmark Results}
\label{table:benchmarkresults}
\end{adjustwidth}
\end{table}
The three runs in table \ref{table:benchmarkresults} average \$239.58. Each cluster used one master c4.4xlarge instance and 128 worker c4.4xlarge instances. Earlier tests were run with c4.8xlarge instances running two Flink TaskManagers each, one per NUMA domain. Clusters were launched in a placement group for maximum networking performance. When launched in a placement group each instance is throttled to 5 Gbps outside the cluster. Communication to Amazon S3 was consistently throttled to 4.50 Gbps. Early tests with large c4.8xlarge clusters did not perform as well as later tests run with c4.4xlarge instances, but as with all things ``cloud'' it is nearly impossible to divine the reason.
For EBS storage, the master node was allocated a 50 GiB root partition. Worker nodes were allocated an 8 GiB root partition and three 255 GiB partitions for spilling intermediate data. The total allocation for spilled data was 105 TB.
The AWS S3 storage was computed by script (available as parse\_bytes.py \cite{flink-cloudsort}) as storing the input dataset over the full runtime of the sort and the output dataset from when the result output records were written to local memory. Flink JobManager and TaskManager statistics were collected (again, available in the repository) and each gauge value was processed from the previous timestamp.
These benchmarks were run using AWS S3 Reduced Redundancy Storage. This was not driven by cost or performance concerns but rather the consideration that creating and quickly deleting hundreds of terabytes should be performed with as light an impact as possible.
This Reduced Redundancy Storage satisfies the CloudSort requirements as described by Amazon: ``The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage.'' \cite{awsreducedredundancystorage}
AWS S3 LIST requests return up to 1,000 results, requiring 100 requests for the 100,000 x 1 GB input files. The AWS S3 GET request count includes the extra requests for terminated downloads. The AWS S3 PUT requests are counted for the 400,384 x 256 MiB result files as well as the extra requests for terminated uploads.
It is critical to adjust the AWS CLI ``multipart\_chunksize'' to larger than 8 MiB. At the default size the cost of PUT requests for the 100 TB output is \$62.50.
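Back-of-the-envelope (my arithmetic; the quoted \$62.50 works out if the default chunk is treated as roughly 8 MB): the 100 TB output at the default chunk size becomes about 12.5 million multipart PUT requests, versus one PUT per 256 MiB file once the chunk size exceeds the file size.
\begin{verbatim}
# PUT request cost for the 100 TB output
PUT_PRICE = 0.005 / 1000                 # $ per PUT/LIST request

default_puts = 100e12 / 8e6              # ~12.5 million multipart parts
single_put_per_file = 400384             # one PUT per 256 MiB output file

print(round(default_puts * PUT_PRICE, 2))         # ~62.50
print(round(single_put_per_file * PUT_PRICE, 2))  # ~2.00 (plus LIST requests)
\end{verbatim}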
\section{Running the sort}
The following sections document the five phases to running flink-cloudsort on Amazon Web Services as benchmarked in this report.
\subsection{Creating an Amazon Machine Instance}
The custom AMI is created by launching the latest Amazon Linux AMI then applying the following commands. This installs required software, optimizes the system, and configures passwordless SSH.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
sudo su
yum-config-manager --enable epel
yum update -y
# http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
sed -i 's/\(kernel .*\)/\1 xen_blkfront.max=256/' /boot/grub/grub.conf
reboot
sudo su
yum install -y fio collectl ganglia-gmetad ganglia-gmond ganglia-web git htop iftop iotop pdsh \
sysstat systemtap telnet xfsprogs
stap-prep
# optional: first download then install Oracle JDK
yum localinstall -y jdk-*.rpm && rm -f jdk-*.rpm
# optional: Amazon's Linux AMI is not kept up-to-date
pip install --upgrade awscli
# install GNU Parallel
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - pi.dk/3) | bash
rm -rf parallel*
# increase the number of allowed open files and the size of core dumps
cat <<EOF > /etc/security/limits.conf
* soft nofile 1048576
* hard nofile 1048576
* soft core unlimited
* hard core unlimited
EOF
cat <<EOF > /etc/pam.d/common-session
session required pam_limits.so
EOF
# mount and configure EBS volumes during each boot
cat <<EOF >> /etc/rc.local
mkdir -p /volumes
format_and_mount() {
blockdev --setra 512 /dev/xvd\$1
echo 1024 > /sys/block/xvd\$1/queue/nr_requests
/sbin/mkfs.ext4 -m 0 /dev/xvd\$1
mkdir /volumes/xvd\$1
mount /dev/xvd\$1 /volumes/xvd\$1
mkdir /volumes/xvd\$1/tmp
chmod 777 /volumes/xvd\$1/tmp
}
for disk in b c d; do
format_and_mount \${disk} &
done
EOF
sed -i 's/^PermitRootLogin .*/PermitRootLogin without-password/' /etc/ssh/sshd_config
service sshd restart
ssh-keygen -N "" -t rsa -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat <<EOF > ~/.ssh/config
Host *
LogLevel ERROR
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
chmod 600 ~/.ssh/config
rm -rf /tmp/*
> ~/.bash_history && history -c && exit
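# back in the ec2-user shell: repeat the passwordless SSH setup for that account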
ssh-keygen -N "" -t rsa -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat <<EOF > ~/.ssh/config
Host *
LogLevel ERROR
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
chmod 600 ~/.ssh/config
> ~/.bash_history && history -c && exit
\end{verbatim}
\end{adjustwidth}
\subsection{Starting an Amazon EC2 cluster using Spot Instances}
The following configuration and command starts a cluster in a placement group for low latency and high throughput networking. For larger clusters it is recommended to start an additional, on-demand instance to operate as the master node and monitor the cluster. This node can be created without the block devices used for spilled data. The placement group, subnet, AMI, EFS, key, and security group must be created and configured before launching the cluster.
The user-data initialization of cluster instances mounts a common Amazon EFS network filesystem from which the Flink software is run.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
INSTANCE_TYPE=c4.4xlarge
AVAILABILITY_ZONE=us-east-1a
PLACEMENT_GROUP=my-pg-a
SUBNET_ID=subnet-d67eb769
AMI=ami-815f3b96
EFS_ID_AND_REGION=fs-3f744dd8.efs.us-east-1
KEY_NAME=MyKey
SECURITY_GROUP_ID=sg-c25e687f
USER_DATA=$(base64 --wrap=0 <<EOF
#!/bin/bash
mkdir /efs && mount -t nfs4 -o nfsvers=4.1 ${AVAILABILITY_ZONE}.${EFS_ID_AND_REGION}.amazonaws.com:/ /efs
EOF
)
LAUNCH_SPECIFICATION=$(cat <<EOF
{
"ImageId": "${AMI}",
"KeyName": "${KEY_NAME}",
"UserData": "${USER_DATA}",
"InstanceType": "${INSTANCE_TYPE}",
"Placement": {
"AvailabilityZone": "${AVAILABILITY_ZONE}",
"GroupName": "${PLACEMENT_GROUP}"
},
"BlockDeviceMappings": [
{ "DeviceName": "/dev/sdb",
"Ebs": { "VolumeSize": 255, "DeleteOnTermination": true, "VolumeType": "gp2", "Encrypted": true } },
{ "DeviceName": "/dev/sdc",
"Ebs": { "VolumeSize": 255, "DeleteOnTermination": true, "VolumeType": "gp2", "Encrypted": true } },
{ "DeviceName": "/dev/sdd",
"Ebs": { "VolumeSize": 255, "DeleteOnTermination": true, "VolumeType": "gp2", "Encrypted": true } }
],
"SubnetId": "${SUBNET_ID}",
"EbsOptimized": true,
"SecurityGroupIds": [ "${SECURITY_GROUP_ID}" ]
}
EOF
)
SPOT_PRICE="0.50"
INSTANCE_COUNT=128
aws ec2 request-spot-instances --spot-price $SPOT_PRICE --instance-count $INSTANCE_COUNT \
--type "one-time" --launch-specification "${LAUNCH_SPECIFICATION}"
\end{verbatim}
\end{adjustwidth}
\subsection{Generating and validating input data}
Cluster-specific configuration must first be updated.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
# AWS CLI configuration and credentials
$ cat ~/.aws/config
[default]
output = json
region = us-east-1
s3 =
multipart_threshold = 1073741824
multipart_chunksize = 1073741824
$ cat ~/.aws/credentials
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
# piping data through the AWS CLI looks to ignore the configuration,
# so the multipart configuration should also be changed in
# /usr/local/lib/python2.7/site-packages/awscli/customizations/s3/transferconfig.py
DEFAULTS = {
'multipart_threshold': 1024 * (1024 ** 2),
'multipart_chunksize': 1024 * (1024 ** 2),
# save the list of workers - could also filter on instance type;
# remove master node from list of IPs if also captured by the filter
aws ec2 describe-instances --filter Name=placement-group-name,Values=my-pg | \
python -c $'import json, sys; print "\\n".join(i["PrivateIpAddress"] for r in \
json.load(sys.stdin)["Reservations"] for i in r["Instances"] \
if i["State"]["Name"] == "running")' > ~/workers && wc ~/workers
# copy AWS CLI configuration and credentials to each worker
pdsh -w ^/home/ec2-user/workers mkdir -p /home/ec2-user/.aws
pdcp -r -w ^/home/ec2-user/workers /home/ec2-user/.aws/ /home/ec2-user/.aws
\end{verbatim}
\end{adjustwidth}
Ganglia is useful for monitoring aggregate network I/O. It could be useful for monitoring Flink, but because the Flink reporter broadcasts a unique ID for each TaskManager the Ganglia daemon is overwhelmed and crashes.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
# edit /etc/ganglia/gmond.conf with master IP
cat <<EOF > /etc/httpd/conf.d/ganglia.conf
#
# Ganglia monitoring system php web frontend
#
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
Require all granted
</Location>
EOF
sudo pdcp -r -w ^/home/ec2-user/workers /etc/ganglia/gmond.conf /etc/ganglia/gmond.conf
sudo pdsh -w ^/home/ec2-user/workers service gmond start
sudo $GMOND_CONF /etc/ganglia/gmond.conf
sudo service gmond start
sudo service gmetad start
sudo service httpd start
\end{verbatim}
\end{adjustwidth}
The 100 TB of input data is constructed in blocks using gensort \cite{gensort}. The following command uses GNU Parallel \cite{gnuparallel} to distribute work among nodes. It is preferable to write to shared memory both for performance and so that instances may be configured without storage.
When uploading 1 GB blocks, the md5 checksums can be validated against the list provided in the flink-cloudsort repository \cite{flink-cloudsort}. Use "head -n" if validating less than a petabyte.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
# create and upload files with proper MD5 etags
BLOCKS=100000
BUCKET=cloudsort
GENSORT=/efs/bin/gensort
GENDIR=/dev/shm
PARALLELISM=8
PREFIX=input
RECORDS=$((10*1000*1000)) # 1 GB
STORAGE_CLASS=REDUCED_REDUNDANCY
WORKER_IPS=~/workers
# first remove any files
pdsh -w ^/home/ec2-user/workers rm -f /dev/shm/block\*
# generate and upload input data
seq -f "%06g" 0 $(($BLOCKS - 1)) | \
parallel -j ${PARALLELISM} --slf ${WORKER_IPS} --bar --timeout 120 --retries 490 "${GENSORT} \
-b{}${RECORDS:1} ${RECORDS} ${GENDIR}/block{},buf && aws s3api put-object --storage-class ${STORAGE_CLASS} \
--bucket ${BUCKET} --key ${PREFIX}/block{} --body ${GENDIR}/block{} > /dev/null && rm /dev/shm/block{}"
# fetch MD5 checksums for validation of uploaded files
aws s3api list-objects-v2 --bucket cloudsort --prefix input | python -c $'import json, sys; print \
"\\n".join(file["ETag"].strip(\'"\') for file in json.load(sys.stdin)["Contents"])' > md5
\end{verbatim}
\end{adjustwidth}
\subsection{Executing FlinkSort}
The following script should be run with nohup in case the SSH session is terminated. It should be run from the flink-1.1.1 directory. Usage would be ``nohup ./run.sh 1 \textgreater\ run1.log \&''. Multiple runs can be initiated by creating and running with nohup an outer script which calls run.sh more than once.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
#!/usr/bin/env bash
# flush Linux memory caches
sudo sh -c "sync ; echo 3 > /proc/sys/vm/drop_caches"
sudo pdsh -w ^/home/ec2-user/workers sh -c "sync ; echo 3 > /proc/sys/vm/drop_caches"
CLOUDSORT_DIR=/efs/cloudsort
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <run id>"
fi
RUN_ID=$1
RUN_DIR=${CLOUDSORT_DIR}/run/${RUN_ID}
if [ -d "$RUN_DIR" ]; then
echo "Run directory $RUN_DIR already exists!"
exit -1
fi
# '-u' prevents python from buffering stdout and stderr
python -u ${CLOUDSORT_DIR}/statsd_server.py 9020 > statsd_server.log &
statsd_server_pid=$!
# record time before starting cluster
date +%s.%N
./bin/start-cluster.sh
# read JobManager configuration
CONF=conf/flink-conf.yaml
read HOST PORT SLOTS <<<$(python -c 'import yaml; conf=yaml.load(open("'${CONF}'")); \
print conf["jobmanager.rpc.address"], conf["jobmanager.web.port"], conf["parallelism.default"]')
# wait for all TaskManagers to start
while [ $SLOTS -ne `curl -s http://${HOST}:${PORT}/overview | python -c $'import sys, json; \
data=sys.stdin.read(); print(json.loads(data)["slots-total"] if data else 0)'` ] ; do sleep 1 ; done
# execute FlinkSort
./bin/flink run -q -class org.apache.flink.cloudsort.indy.IndySort \
${CLOUDSORT_DIR}/flink-cloudsort-0.1-dev_shm_timeout.jar \
--input awscli --input_bucket cloudsort --input_prefix input/ \
--output awscli --output_bucket cloudsort --output_prefix output${RUN_ID}/ \
--buffer_size 67108864 --chunk_size 250000000 --concurrent_files 16 --storage_class REDUCED_REDUNDANCY \
--download_timeout 120 --upload_timeout 60
date +%s.%N
./bin/stop-cluster.sh
if kill -0 $statsd_server_pid > /dev/null 2>&1; then
kill $statsd_server_pid
else
echo "No statsd_server found with PID $statsd_server_pid"
fi
mkdir -p $RUN_DIR
mv statsd_server.log $RUN_DIR
mv log $RUN_DIR
mkdir log
\end{verbatim}
\end{adjustwidth}
\subsection{Validating the output data}
Generating input and validating output can be performed on instances without additional storage. Since spot pricing only applies to the instances themselves (not to EBS volumes), this can result in substantial savings.
\begin{adjustwidth}{-1.5in}{-1.5in}
\begin{verbatim}
# validate using valsort
PARALLELISM=8
WORKER_IPS=~/workers
VALSORT=/efs/bin/valsort
DATDIR=/efs/validate
BUCKET=cloudsort
PREFIX=output
# run valsort on each output file and save validation files
aws s3 ls s3://${BUCKET}/${PREFIX}/ --recursive | awk '{print $4}' | \
parallel -j ${PARALLELISM} --slf ${WORKER_IPS} --bar --retries 490 "mkfifo /tmp/fifo{#} ; \
mkdir -p ${DATDIR}/{//} ; aws s3 cp s3://${BUCKET}/{} - > /tmp/fifo{#} & ${VALSORT} -o ${DATDIR}/{}.dat \
/tmp/fifo{#},buf ; rm /tmp/fifo{#}"
# concatenate validation files and run valsort on the full set
find ${DATDIR} -type f -name '*.dat' -printf '%P\n' | sort -V | xargs -i{} cat ${DATDIR}/{} > \
${DATDIR}/checksums
${VALSORT} -s ${DATDIR}/checksums
# delete the cloudsort output
aws s3 rm --recursive s3://cloudsort/${PREFIX}
\end{verbatim}
\end{adjustwidth}
\section{Conclusion}
Given a time machine I would not attempt this sort benchmark; however, I look forward to further optimizing Flink and SchnellSort for next year. Had this been a sponsored attempt it would have been much less stressful. This report will be included at a high level in my presentation this month at Flink Forward 2016 \cite{flink-forward}.
The cloud is beautiful, and powerful resources can be obtained very cheaply (particularly on nights and weekends). It is also a black box and frustratingly difficult to apprehend.
\begin{verbatim}
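# halt every worker instance once benchmarking is complete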
sudo pdsh -w ^/home/ec2-user/workers sudo shutdown -h now
\end{verbatim}
\printbibliography
\end{document}
| {
"alphanum_fraction": 0.7242423081,
"avg_line_length": 44.7667238422,
"ext": "tex",
"hexsha": "58bddfff3b6e20c9121cbd3cd09400ab7b908439",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e7c28c6fbb65cc4dce4b74150f28e26bd2523e93",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "greghogan/flink-cloudsort",
"max_forks_repo_path": "sortbenchmark/SchnellSort 2016/main.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "e7c28c6fbb65cc4dce4b74150f28e26bd2523e93",
"max_issues_repo_issues_event_max_datetime": "2019-07-02T17:23:02.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-07-02T17:23:02.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "greghogan/flink-cloudsort",
"max_issues_repo_path": "sortbenchmark/SchnellSort 2016/main.tex",
"max_line_length": 765,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e7c28c6fbb65cc4dce4b74150f28e26bd2523e93",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "greghogan/flink-cloudsort",
"max_stars_repo_path": "sortbenchmark/SchnellSort 2016/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7369,
"size": 26099
} |
\documentclass[a4paper, 10pt]{article}
%\topmargin-1.5cm
\usepackage{fancyhdr}
\usepackage{pagecounting}
\usepackage[dvips]{color}
% Color Information from - http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/latex_advanced/node13.html
% NEW COMMAND
% marginsize{left}{right}{top}{bottom}:
%\marginsize{3cm}{2cm}{1cm}{1cm}
%\marginsize{0.85in}{0.85in}{0.625in}{0.625in}
%\advance\oddsidemargin-0.85in
%\advance\evensidemargin-0.85in
%\textheight8.5in
%\textwidth6.75in
\newcommand\bb[1]{\mbox{\em #1}}
\def\baselinestretch{1.25}
%\pagestyle{empty}
\newcommand{\hsp}{\hspace*{\parindent}}
\definecolor{gray}{rgb}{0.4,0.4,0.4}
\newcommand{\authorname}{Katy Huff}
\newcommand{\longauthorname}{Dr. Kathryn~Huff}
\newcommand{\authorsite}{katyhuff.github.io}
\newcommand{\myitem}[1]{\item[\textcolor{gray}{\textbf{#1}}]}
\newcommand{\boldblue}[1]{\textcolor{cyan}{\textbf{#1}}}
\begin{document}
\pagestyle{fancy}
%\pagenumbering{gobble}
%\fancyhead[location]{text}
% Leave Left and Right Header empty.
%\lhead{}
%\rhead{}
%\rhead{\thepage}
\lhead{\textcolor{gray}{\it \authorname}}
\rhead{\textcolor{gray}{\thepage/\totalpages{}}}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\fancyfoot[C]{\footnotesize \textcolor{gray}{\authorsite}}
\begin{center}
{\LARGE \bf Self Evaluation}\\
\vspace*{0.1cm}
{\normalsize \longauthorname}
\end{center}
%\vspace*{0.2cm}
% Introduction
The following is a non-exhaustive list of activities I have participated in
since arriving in Berkeley in September 2013. During that time, I have been, on
average, funded by the FHR project ($\sim$30\%), NSSC
($\sim$30\%), BIDS ($\sim$25\%), and
LLNL ($\sim$15\%). \boldblue{Especially notable accomplishments are in blue}.
\section*{FHR-Related}
\begin{itemize}
\myitem{PyRK} I have written a \boldblue{Python package for 0-D accident transient
modeling in nuclear reactors (PyRK) \cite{huff_pyrk_2015}}. I have conducted an
SFR analysis for validation. Additionally, this tool is expected to be
used as an engine for running accident transient experiments in CIET this fall.
\myitem{PB-FHR Analysis} The main purpose of PyRK, however, is simulation of
Pebble-Bed Fluoride-Salt-Cooled High-Temperature Reactor (PB-FHR) transients.
Accordingly, I am working on a manuscript related to my current results from
PyRK for the case of reactivity insertions and Loss of Heat Sink (LOHS)
transients in the PB-FHR. I will soon distribute a draft to my collaborators
for review and participation, in the hope that it can be submitted
late this summer.
\myitem{MOOSE Extension} Development of my 3D, multi-scale, multi-physics PB-FHR
model has begun. Using Pronghorn and Rattlesnake within the MOOSE framework, I
can couple thermal hydraulics on coupled coarse and fine meshes. This requires
me to modify the pebble-bed flow model (currently for gaseous flow) to allow
molten salt coolant in the pebble bed. I expect this analysis software will be
\boldblue{ready for demonstration on the NERSC resources in late fall 2015.}
\myitem{NERSC} I am pleased that, based on a proposal I submitted with BIDS,
\boldblue{I was awarded a significant time allocation on NERSC.} I intend to
use my NERSC allocation (millions of cpu hours) to conduct my PB-FHR transient
analysis this fall (MOOSE extension above).
\myitem{INL LDRD} Related to my MOOSE extension, I am a co-investigator on an
LDRD proposal that, if funded, will provide travel funding to INL, access to
potentially validating data from the industry co-investigator, and potential
summer funding for a Berkeley student.
\myitem{COMSOL ATWS} I refactored Scarlat's AHTR COMSOL model to include the geometry
and neutronics of the PB-FHR and ran the first PB-FHR ATWS analysis. The
pressure drop optimization was conducted by Huddar and our results were
included in the PB-FHR Mk1 Design Report.
\myitem{BDBE Workshop} I led analysis for the Source Term Analysis and
Radiological Release Pathways section of the never-released 2013 BDBE workshop
white paper. I also assisted with Beyond Design Basis Event Analysis
Methods and Experimental Gaps section of the white paper.
\end{itemize}
\section*{Other Nuclear Engineering}
\begin{itemize}
\myitem{PyNE} I am a contributor to the PyNE (python for nuclear engineering
toolkit). Accordingly, I co-authored a PyNE ANS conference paper
\cite{bates_pyne_2014} and hosted two PyNE hackathons (NSSC 2013 and BIDS
2014). In the hackathons, we had developers join us for a few days and we
improved this open source package immensely. \boldblue{This package is used by many
nuclear engineers at universities and the national laboratories (including some
in the FHR group at Berkeley).}
\myitem{Cyclus} Continuing my involvement, I contributed to the most recent
release of Cyclus and have helped to conduct Fuel Cycle analyses in
collaboration with DOE and Prof. Fratoni. Finally, I \boldblue{submitted two
manuscripts} this year related to my past work with the Cyclus project. Though
they have not yet been accepted, they are in revision.
\myitem{Cyder} I \boldblue{submitted and resubmitted a journal article} based
on my dissertation work concerning hydrologic and thermal modeling of nuclear
waste repositories.
I am awaiting review comments on the resubmitted paper.
\myitem{FCWMD Vice-Chair} Continuing my service to ANS, I am now the
\boldblue{Vice-Chair of the Fuel Cycle and Waste Management Division.}
\myitem{Conference Papers} I submitted an ANS summary on the topic of
a nuclear engineering course syllabus based on my recently published book.
\myitem{BFF Program} I was invited to and attended a ``Building Future Faculty
Program'' at NCSU. This has already contributed to my pursuit of a faculty
position in Nuclear Engineering.
\myitem{Mentorship} To varying degrees, I have guided the research computation
of numerous students in the NE department including Xin Wang, Blake Huff, Tommy
Cisneros, Ryan Bergmann, Kelly Rowland, Madicken Munk, Grant Buster, Josh
Howland, and Russell Nibbelink.
\end{itemize}
\section*{Scientific Computing Education}
\begin{itemize}
\myitem{Best Practices} In 2014, I coauthored an extremely popular paper on how
best to use computers in science\cite{wilson_best_2014}. It has been
\textcolor{cyan}{\textbf{cited 80 times}}.
\myitem{The Hacker Within} I have led a \boldblue{popular weekly seminar on
scientific computing, attended by many nuclear engineering graduate students}.
I've brought dozens of people into the NSSC and BIDS spaces with this meeting.
For this seminar, I schedule tutorials on tools and best practices for
scientific computing. It is popular among nuclear engineers and physicists, but
attracts a diversity of individuals. This recent success has inspired my PhD
advisor to reboot the original THW in Madison, and Dr. Arna Karnik has started
her own chapter of THW within the physics department at Swinburne University
in Melbourne, Australia.
\myitem{Software Carpentry} I am the current \boldblue{elected Chair
of the Software Carpentry Foundation Steering Committee}. This international
nonprofit organization focuses on teaching scientific computing skills to
scientists. Software Carpentry is responsible for over 100 workshops per year
and now has dozens of university, laboratory, and governmental partners. My
leadership has additionally led to a BIDS collaboration with them and has
facilitated sold out workshops in the BIDS space.
\myitem{Case Studies} With the BIDS Reproducibility Working Group, I
have helped collect case studies of reproducible workflows in scientific work
on campus. As a result, I am on track to be a \boldblue{chapter author} on the book form
of this collection and look forward to co-authoring an extended whitepaper or
journal article on the lessons learned.
\myitem{MOOSE Workshop} Professor Fratoni and I have arranged for a workshop on a
multiphysics simulation environment, MOOSE.
\myitem{WiSE Workshop} I was the lead instructor for a workshop at LBNL
dedicated to women in science and engineering.
\myitem{GitHub Town Hall} I was invited by GitHub to visit the UW eScience space
and to sit on a panel discussing ``What Academia Can Learn From Open Source''.
\myitem{O'Reilly Book} Between May 2014 and January 2015, \boldblue{I wrote a
book} to help students and researchers in the physical sciences to conduct the
computational aspects of their research more effectively. It's called Effective
Computation In Physics and hundreds of copies have already been sold.
\myitem{Guest Lectures} I have served as a guest lecturer for many lessons in
NE155 and NE255.
\end{itemize}
\section*{Other}
\begin{itemize}
\myitem{SciPy} I have served as the Technical Program Chair (2013 and 2014) and
Proceedings Chair (2015) for SciPy, a conference on the scientific use of
python that brings together an entire community at the intersection of science
and programming.
\myitem{ASPP School} I have been invited two years in a row to be the
\boldblue{keynote speaker} for a week-long Advanced Scientific Programming in
Python Summer School in Europe (Croatia 2014, Munich 2015).
\myitem{BSN Deep Dive} I was invited to be a mentor for a weekend
retreat intended to help female Berkeley graduate students in STEM. This is
called a ``deep-dive'' and was hosted by the Berkeley Science Network at
Asilomar in March 2015.
\myitem{NEUP 2013 Proposal} I was primary author, but not PI on an NEUP
Proposal on reactor technology analysis in the context of fuel cycles that was
invited back for a full proposal but ultimately was not awarded.
\myitem{NEUP 2014 Proposal} I was PI on an NEUP Proposal on laser isotopic
separation that was invited back for a full proposal but ultimately was not
awarded.
\myitem{NSF 2014 Proposal} I was a co-investigator on an NSF proposal (led by
Professor Slaybaugh) related to scientific computing education. It was ultimately not
awarded.
\end{itemize}
\bibliographystyle{plain}
\bibliography{eval}
\end{document}
| {
"alphanum_fraction": 0.7858846918,
"avg_line_length": 50.0497512438,
"ext": "tex",
"hexsha": "8791315fde31add1ea68340c91fb5000c9100784",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9b3ca3ae8b47242d10a376e6ee9b55bb2b7354b1",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "katyhuff/bids",
"max_forks_repo_path": "six_month/eval/eval.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9b3ca3ae8b47242d10a376e6ee9b55bb2b7354b1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "katyhuff/bids",
"max_issues_repo_path": "six_month/eval/eval.tex",
"max_line_length": 104,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9b3ca3ae8b47242d10a376e6ee9b55bb2b7354b1",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "katyhuff/bids",
"max_stars_repo_path": "six_month/eval/eval.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2665,
"size": 10060
} |
\font\mainfont=cmr8
\font\mi=cmti8
\font\subsectionfont=cmbx8
\font\sectionfont=cmbx10
\font\headingfont=cmbx12
\font\titlefont=cmbx14
\def\RCS$#1: #2 ${\expandafter\def\csname RCS#1\endcsname{#2}}
\def\heading#1{\vskip 12 pt \leftline{\headingfont #1}}
\newcount\footnotes \footnotes=0
\def\footnoter#1{\advance\footnotes by 1 \footnote{$^{\the\footnotes}$}{\rm #1}}
\newcount\sectionnum \sectionnum=0
\newcount\subsectionnum \subsectionnum=0
\def\section#1{\vskip 12 pt \advance\sectionnum by 1 \subsectionnum=0 \leftline{\sectionfont \the\sectionnum. #1}}
\def\subsection#1{\vskip 12 pt \advance\subsectionnum by 1 \leftline{\subsectionfont \the\sectionnum.\the\subsectionnum. #1}}
\def\title#1{\centerline{\titlefont #1} \centerline{\sevenrm \RCSId} \vskip 24pt}
\newcount\itemnum
\def\items{\advance\itemnum by 1 \itemitem {\the\itemnum)}}
\def\iDesk{{\mi iDesk}}
\def\iDesks{{\mi iDesks}}
\def\NA{{\mi Not Applicable}}
\def\userline#1{\leftline{\hskip 24 pt #1}}
\def\user#1#2#3#4#5{\vskip 12 pt \userline{User name: #1}\userline{User role: #2}\userline{Subject matter experience: #3}\userline{Technological experience: #4}\userline{Other user characteristics: #5}}
\def\userpriority#1#2#3{\vskip 12 pt \userline{User category: #1}\userline{Priority: #2}\userline{Estimated percentage: #3}}
\def\name#1#2{\leftline{\hskip 24 pt {\mi #1}: #2}}
\def\funcrec#1#2{\vskip 12 pt \leftline{\hskip 24 pt{Functional Requirement: #1}}\leftline{\hskip 24 pt{Fit Criterion: #2}}}
\parskip 12 pt
\parindent 24 pt
\title{Volere Requirements Specification Template.}
\mainfont
\heading{Project Drivers.}
\section{The Purpose of the Product.}
\subsection{The user problem or background to the project effort.}
To develop the interface for an interactive desk, the \iDesk, to be used in lecture theatres to allow students to take notes and have access to the multi-media presentation given in that lecture.
\subsection{Goals of the project.}
We want the students to be able to use the \iDesk\ as a replacement for the traditional note taking apparatus used in lecture theatres.
Fit Criterion:
\itemnum=0
\parskip 0 pt
\items Allow users to log in.
\items Download a set of lecture notes for a particular subject.
\items Move through this set of lecture notes at their own pace.
\items Write notes on a touch sensitive screen using a stylus.
\items Provide an audio feed of the lecturer's speech.
\items Have a live video feed of the lecturer's presentation.
\items Display subtitles for the audio feed, below the main display.
\items Display scanned images of white boards and blackboards.
\items Toggle between displays in the two display windows. E.g. have any of the three input displays (lecture notes, scanned boards and live video) available in either the right or left display window.
\items Users should be able to customise their input options. E.g. hide any or all of the five input styles listed.
\items Users should be able to save their chosen ``images'' of the lecture, including their notes.
\items Print some of the inputs to local printers.
\items Retrieve data from a central storage server.
\parskip 12 pt
\section{Client, Customer and other Stake holders.}
\subsection{The client is the person/s paying for the development and owner of the delivered system.}
The University of Wollongong.
\subsection{The customer is the person/s who will buy the product from the client.}
The University of Wollongong.
\subsection{Other stake holders.}
\itemnum=0
\items Peter Hyland, senior lecturer.
\parskip 0 pt
\items Sally Schreiber, CSCI324 student.
\items Simon Bland, CSCI324 student.
\items Phillip Street, CSCI324 student.
\items Peter de Zwart, CSCI324 student.
\parskip 12 pt
\section{Users of the Product.}
\subsection{The users of the product.}
A note about the various students: they may come from very diverse backgrounds of technical expertise. Some students will have had very little exposure to computers before, whilst some may be technophiles whose entire life experience is from a computer. This is usually reflected in which University faculty the student originates from. It could reasonably be assumed that a student from the Arts faculty would have less technical
savvy than a student from the Informatics faculty. This is a generalisation, nothing more.
\user{Undergraduate Students.}{Use of the \iDesk\ for lecture notes.}{Novice.}{Novice to Journeyman.}{Intelligent enough to attend University.}
\user{Honours Students.}{Use of the \iDesk\ for lecture notes.}{Novice.}{Novice to Journeyman.}{Completing the honours component of their bachelors degree.}
\user{Pass Masters Students.}{Use of the \iDesk\ for lecture notes.}{Novice.}{Novice.}{Completing a pass Masters.}
\user{Research Masters Students.}{Use of the \iDesk\ for lecture/seminar notes.}{Journeyman.}{Journeyman.}{Completing a research Masters.}
\user{Doctoral Students.}{Use of the \iDesk\ for lecture/seminar notes.}{Journeyman to Master.}{Journeyman to Master.}{Completing a Doctoral research paper.}
\user{Post Doctoral Student.}{Use of the \iDesk\ for lecture/seminar notes.}{Master.}{Master.}{Completed at least one Doctorate.}
\user{Lecturer.}{Use of the \iDesk\ to create the various multi-media presentation used in a lecture/seminar.}{Journeyman to Master.}{Journeyman to Master.}{Attained enough experience and knowledge to teach students in the field of the lecture.}
\user{Administration.}{Maintenance of the \iDesk.}{Journeyman.}{Master.}{Specialist user attuned to the care and feeding of an \iDesk.}
\subsection{The priorities assigned to users.}
\userpriority{Undergraduate Students.}{Key User.}{\%80.}
\userpriority{Honours Students.}{Secondary User.}{\%4.}
\userpriority{Pass Masters Students.}{Unimportant User.}{\%5.}
\userpriority{Research Masters Students.}{Secondary User.}{\%3.}
\userpriority{Doctoral Students.}{Secondary User.}{\%2.}
\userpriority{Post Doctoral Students.}{Secondary User.}{\%1.}
\userpriority{Lecturer.}{Key User.}{\%5.}
\userpriority{Administration.}{Unimportant User.}{\%1.}
\subsection{User participation}
Of all the user categories comprised of students, it is expected that they will provide adequate feedback and user testing through the prototyping stage. It is assumed that one of the larger lecture theatres will be outfitted with prototype \iDesks\ where evaluation sheets will be collected after each lecture. The lecturer will also be required to submit their own evaluation of how the lecture went with the \iDesks\ in comparison to how it may have gone without them.
Lecturers will be required to provide the ``business knowledge'' to help with the delivery of content to the users \iDesk. They will also be instrumental in the prototyping of the interface.
\heading{Project Constraints.}
\section{Mandated Constraints.}
\subsection{Solution constraints.}
The interface must use a non-proprietary operating system to reduce the total cost of ownership per \iDesk.
The interface must fit in a display that measures 360mm by 260mm, with a resolution of 1024 by 768 pixels.
\subsection{Implementation environment of the current system.}
The \iDesk\ will be deployed in a lecture theatre environment, subject to the rigours of student abuse. The \iDesk\ will be situated in a networked environment where each \iDesk\ will have access to central resources for the purpose of storage and printing.
\subsection{Partner applications.}
The \iDesk\ will have to be capable of understanding contemporary document formats that are used by lecturers to convey the content of their lectures. Most notably, the following formats will be used:
\itemnum=0
\parskip 0 pt
\items Microsoft Office\footnoter{http://office.microsoft.com/} documents.
\items Adobe\footnoter{http://www.adobe.com/} Portable Document Format (PDF) and PostScript (PS) documents.
\items Motion Picture Experts Group\footnoter{http://mpeg.telecomitalialab.com/} (MPEG) layers.
\items Joint Picture Experts Group\footnoter{http://www.jpeg.org/} (JPEG) image format.
\parskip 12 pt
\subsection{Commercial off the shelf packages.}
To aid in reducing the total cost of ownership per \iDesk, the office productivity software package Open Office\footnoter{http://www.openoffice.org/} should be used in place of the Microsoft Office suite of software.
\subsection{Anticipated workplace environment.}
The \iDesk\ will be deployed in an environment where noise is not acceptable; therefore, a standard mini-stereo jack should be used for audio output. It can be assumed that students will provide their own earphones, or that suitable earphones will be available on loan.
As the users are supposed to be placing the majority of their cognitive effort on the assimilation of the lecture content, the theme of the interface should be as unobtrusive as possible, except in case of errors, where a suitable dialog should appear.
\subsection{How long do the developers have to build the system?}
\NA
\subsection{What is the financial budget for the system?}
\NA
\section{Naming Conventions and Definitions.}
\vskip 12 pt
\parskip 0 pt
\name{iDesk}{An interactive desk used for taking notes using a stylus and viewing lecture multi-media content obtained from a storage server.}
\name{lecture}{A formal method of disclosure, intended for instruction.}
\name{multi-media}{transmission that combines multiple media of communication, e.g. text and graphics, etc...}
\name{media}{A substance of transmission, e.g. sound through air.}
\name{storage server}{A special purpose electronic device for the mass storage of information.}
\name{mass storage}{A term signifying the storage of a large magnitude of information, generally one terabyte of higher.}
\name{window}{A componentised interface of some sort that does not take up the entirety of a computer screen.}
\name{stylus}{An input device that consists of a rigid plastic instrument, analogous to a pencil, used to write on a touch sensitive computer screen.}
\name{lecture theatre}{A room where a lecture is delivered, consisting of seating apparati designed for maximum discomfort.}
\name{authentication}{Where a user is determined to be who they claim they are.}
\name{authorisation}{Where a user is allowed to use the determined resources, usually after their identity has been confirmed.}
\parskip 12 pt
\section{Relevant Facts and Assumptions.}
\subsection{External factors that have an effect on the product, but are not mandated constraints.}
As most lectures are delivered with minimal lighting, the intensity of the interfaces colours should not be overwhelming as to throw a glow upon a user, so as not to bathe them in a pool of light, otherwise creating a ghostly countenance to the students in a lecture hampering the delivery of the lectures content.
\subsection{Assumptions that the team are making about the project.}
All students behave in a rational manner. As this is a utopian assumption, it is realistic to change the assumption to, students act in a rational manner relative to the norms of the society they live in. Similar to relative primes as used in cryptography.
\heading{Functional Requirements.}
\section{The Scope of the Work.}
\subsection{The context of the work.}
\NA
\subsection{Work partitioning.}
\NA
\section{The Scope of the Product.}
\subsection{Product Boundary.}
\NA
\subsection{Use case list.}
\NA
\section{Functional and Data Requirements.}
\subsection{Functional requirements.}
\funcrec{Allow only authenticated and authorised users to log in to the \iDesk.}{Unauthenticated or unauthorised users must not be able to use the \iDesk.}
\funcrec{Download the set of lecture notes for the current subject.}{Only allow the set of lecture notes for the current subject to be downloaded.}
\funcrec{Move through this set of lecture notes at their own pace.}{Ensure that the lecture notes do not advance nor retard the user assimilation of the lecture note content.}
\funcrec{Write notes on a touch sensitive screen using a stylus.}{A hand mashing the screen will not be considered as input.}
\funcrec{Provide an audio feed of the lecturer's speech.}{The audio feed will consist of only the lecturer's speech and filter out background noise.}
\funcrec{Have a live video feed of the lecturer's presentation.}{The video feed will track the lecturer as they manoeuvre around the lecture theatre.}
\funcrec{Display subtitles for the audio feed, below the main display.}{The automatic transcription software is accurate to \%95 of words transcribed.}
\funcrec{Display scanned images of white boards and blackboards.}{These images will be kept in JPEG format.}
\funcrec{Toggle between displays in the two display windows.}{A user can not have two of the same inputs displayed in two windows.}
\funcrec{Users should be able to customise their input options.}{Any input switched off must be able to be switched back on.}
\funcrec{Users should be able to save their chosen ``images'' of the lecture, including their notes.}{The ability to save must be able to be done to a central storage server or to a local peripheral.}
\funcrec{Print some of the inputs to local printers.}{Input is printed to local printer.}
\funcrec{Retrieve data from a central storage server.}{Only the authenticated user can access their stored data.}
\subsection{Data requirements.}
See dictionary.
\heading{Non-Functional Requirements.}
\section{Look and Feel Requirements.}
\subsection{The interface.}
The interface will use a simple colour scheme with understated colours as to discourage distraction from the content of the lecture content.
Usage of text in the interface will be minimalised, with the visible text of a sufficiently sized font to ensure practical readability for the average student, with the ability to zoom in for those with defective eyesight.
All screens of the interface will adhere to the same look and feel to aid flow through the various screens.
The interface needs to have an intuitive, icon-based flow to reduce the cognitive burden of the user.
\subsection{The style of the product.}
The product is to have a boring appearance, to suit the rest of the University of Wollongong.
\section{Usability Requirements.}
\subsection{Ease of use.}
The product shall be easy to use for any student that can attend the University of Wollongong.
The product must be usable for people with little grasp of the English language, as the majority of the student corpus are incapable of forming a rudimentary sentence to elucidate a clue.
\subsection{Ease of learning.}
The product needs to be easy to learn from a student's perspective to aid its adoption in lectures.
The product needs to provide a simple on-line help facility that covers the basic usage of the \iDesk.
\section{Performance Requirements.}
\subsection{Speed requirements.}
The \iDesk\ will need to be able to fulfill the need for streaming video, natural writing input via a stylus and presentation slides, yet not be so powerful as to turn a lecture theatre into a sauna.
\subsection{Safety critical requirements.}
\NA
\subsection{Precision requirements.}
\NA
\subsection{Reliability and Availability requirements.}
The \iDesk\ will need to be operational during University operating hours and be robust enough to recover from errors with little downtime.
\subsection{Capacity requirements.}
\NA
\subsection{Scalability requirements.}
\NA
\section{Operational Requirements.}
\subsection{Expected physical environment.}
The \iDesk\ will have to be rugged enough to withstand the abuses of students who have little regard for property. As we are designing software, the problems of physical damage will have to be addressed by the hardware manufacturers.
\subsection{Expected technological environment.}
The \iDesk\ will have to be able to interface with a TCP/IP network to access the centralised storage server and authentication server.
\subsection{Partner applications.}
\NA
\subsection{Supportability.}
\NA
\section{Maintainability and Portability Requirements.}
\subsection{How easy must it be to maintain this product?}
\NA
\subsection{Are there special conditions that apply to the maintenance of this product?}
\NA
\subsection{Portability requirements.}
\NA
\section{Security Requirements.}
\subsection{Is the system confidential?}
Users' personal files for the lecture notes are confidential: only the owning user should be able to access their files. This can be done by correlating a user's session with their authentication information. As only an authenticated user may use an \iDesk, the storage server can use the same mechanism of authentication for access to a user's files.
\subsection{File integrity requirements.}
\NA
\subsection{Audit requirements.}
\NA
\section{Cultural and Political Requirements.}
\subsection{Are there any special factors about the product that would make it unacceptable for some political reason?}
Yes, you communist pig.
\section{Legal Requirements.}
\subsection{Does the system fall under the jurisdiction of any law?}
\NA
\subsection{Are there any standards with which we must comply?}
\NA
\heading{Project Issues.}
\section{Open Issues.}
\subsection{Issues that have been raised and do not yet have a conclusion.}
Who will be manufacturing the hardware?
\section{Off-the-Shelf Solutions.}
\subsection{Is there a ready-made system that could be bought?}
No.
\subsection{Can ready-made components be used for this product?}
Yes. There exists a large body of Open Source\footnoter{http://www.opensource.org/} software that can function as the media viewers.
\subsection{Is there something that we could copy?}
See above.
\section{New Problems.}
\subsection{What problems could the new system cause in the current environment?}
\subsection{Will the new development affect any of the installed system?}
There is currently no installed system.
\subsection{Will any of our existing users be adversely affected by the new development?}
We have no existing users.
\subsection{What limitations exist in the anticipated implementation environment that may inhibit the new system?}
The physical size of the \iDesk\ is limited by the current seating arrangements of lecture theatres. Furthermore, the seating arrangements of lecture theatres differ, so it may be difficult to settle on a standard size for the \iDesk\ without changing the existing lecture theatre seating layouts.
\subsection{Will the new system create other problems?}
A large amount of networking infrastructure would have to be created to facilitate the networked environment that the \iDesk\ will reside in for centralised storage and authentication.
\section{Tasks.}
\subsection{What steps have to be taken to deliver the system?}
\NA
\subsection{Development phases.}
\NA
\section{Cutover.}
\subsection{What special requirements do we have to get the existing data and procedures to work for the new system?}
\NA
\subsection{What data has to be modified/translated for the new system?}
\NA
\section{Risks.}
The major risk is that the interface will be rejected by the main body of students; if they find it distasteful, our key user base will not use the product.
There is also the risk that the cost of hand-held computing devices is falling far enough that the majority of students may own a powerful hand-held computer that can already function like an \iDesk.
\section{Costs.}
\NA
\section{User Documentation and Training.}
\subsection{The plan for building the user documentation.}
\NA
\section{Waiting Room.}
\NA
\section{Ideas for Solutions.}
\NA
\bye
| {
"alphanum_fraction": 0.7848107722,
"avg_line_length": 40.5921325052,
"ext": "tex",
"hexsha": "0a1dbb04aa00fa9b6ce64a666d9a2ea2050b5630",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-07-03T09:22:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-03T09:22:08.000Z",
"max_forks_repo_head_hexsha": "b64d28b47381ea1e8c6b5282910365dc4292d57f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "felx/detritus",
"max_forks_repo_path": "src/UOW/CSCI324/ass5/volere.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b64d28b47381ea1e8c6b5282910365dc4292d57f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "felx/detritus",
"max_issues_repo_path": "src/UOW/CSCI324/ass5/volere.tex",
"max_line_length": 481,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b64d28b47381ea1e8c6b5282910365dc4292d57f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "felx/detritus",
"max_stars_repo_path": "src/UOW/CSCI324/ass5/volere.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4665,
"size": 19606
} |
\subsection*{RA Ex. 1.3}
We consider a Monte Carlo algorithm $A$ for a problem $\Pi$ whose expected running time is at most $T(n)$ on any instance of size $n$ and that produces a correct solution with probability $\gamma(n)$. Furthermore, given a solution to $\Pi$, we can verify its correctness in time $t(n)$.
We wish to obtain a Las Vegas algorithm that always gives a correct answer to $\Pi$ and runs in expected time at most
$$
\frac{T(n) + t(n)}{\gamma(n)}
$$
This can be obtained by repeating the Monte Carlo algorithm $A$ until a correct solution to $\Pi$ is produced.
We know that the expected running time is at most $T(n)$ on any instance of size $n$, so the expected running time is also at most $T(n)$ for each repetition in the new algorithm.
\\
Furthermore, for each repetition, we also need to verify that the solution is correct, which adds $t(n)$ to the running time of each repetition.
Since the expected running time for each repetition is at most $T(n) + t(n)$, we just need to determine how many repetitions are needed to produce a correct solution to $\Pi$. We call this number $X$.
\\
To determine $X$, we first consider the following observations:
\\
- In each repetition, the solution produced is either correct, which we consider a success, or incorrect, which we consider a failure.
\\
- Each repetition is independent.
\\
- Each repetition has the same probability $\gamma(n)$ for producing a success (meaning a correct solution to $\Pi$).
\\
Hence, we can model $X$, the number of repetitions needed to obtain the first success, as a geometrically distributed random variable. Then we know from App. C.4 (C.32), that
$$
E\left[X\right] = \frac{1}{\gamma(n)}
$$
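For completeness, this expectation can also be derived directly from the definition of the geometric distribution with success probability $\gamma(n)$ (a standard calculation, included here only as a reminder):
$$
E\left[X\right] = \sum_{k=1}^{\infty} k \left(1-\gamma(n)\right)^{k-1} \gamma(n) = \frac{\gamma(n)}{\left(1-\left(1-\gamma(n)\right)\right)^{2}} = \frac{1}{\gamma(n)}
$$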
Since we have now determined $E\left[X\right]$, we can show that the new Las Vegas algorithm runs in expected time at most
$$
E\left[\left(T(n) + t(n)\right) \cdot X \right] = \left(T(n) + t(n)\right) \cdot E\left[X\right] = \frac{T(n) + t(n)}{\gamma(n)}
$$ | {
"alphanum_fraction": 0.7234042553,
"avg_line_length": 59.1290322581,
"ext": "tex",
"hexsha": "0ffe6c8a0f07b7561596a397b81b98cb13ec2bc9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pdebesc/AADS",
"max_forks_repo_path": "Uge2/Ex.RA-1.3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "pdebesc/AADS",
"max_issues_repo_path": "Uge2/Ex.RA-1.3.tex",
"max_line_length": 286,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a26e24d18adee973d3ce88bdfd96d857ec472fdf",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "pdebesc/AADS",
"max_stars_repo_path": "Uge2/Ex.RA-1.3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 469,
"size": 1833
} |
\chapter{Interpolation Convergence Proofs}
\label{chap:cvip_converge}
This chapter works through the convergence proofs for MSN interpolation on
Chebyshev nodes. We focus on interpolating up to degree $2n$.
| {
"alphanum_fraction": 0.8181818182,
"avg_line_length": 29.8571428571,
"ext": "tex",
"hexsha": "266be117e7aba0b35acab038149ed429b68d79f6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03",
"max_forks_repo_licenses": [
"0BSD"
],
"max_forks_repo_name": "chgorman/UCSB-Dissertation-Template",
"max_forks_repo_path": "tex/vand_interp_conv.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"0BSD"
],
"max_issues_repo_name": "chgorman/UCSB-Dissertation-Template",
"max_issues_repo_path": "tex/vand_interp_conv.tex",
"max_line_length": 74,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c57b9e5209e93ecb79abb364dbad29037a2aed03",
"max_stars_repo_licenses": [
"0BSD"
],
"max_stars_repo_name": "chgorman/UCSB-Dissertation-Template",
"max_stars_repo_path": "tex/vand_interp_conv.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 53,
"size": 209
} |
\documentclass[serif,mathserif]{beamer}
\usepackage{amsmath, amsfonts, epsfig, xspace}
\usepackage{algorithm,algorithmic}
\usepackage{pstricks,pst-node}
\usepackage{multimedia}
\usepackage[normal,tight,center]{subfigure}
\setlength{\subfigcapskip}{-.5em}
\usepackage{beamerthemesplit}
\usetheme{lankton-keynote}
\author[ ]{Using firefox for remote communication \quad \includegraphics[width=5.0cm]{img/firefoxtunnel.jpg}}
\title[firefox tunnel\hspace{2em}\insertframenumber/\inserttotalframenumber]{Firefox tunnel}
\date{ CoolerVoid - [email protected] - December 21, 2017} %leave out for today's date to be inserted
\institute{Illustration by Anthony S Waters}
\begin{document}
\maketitle
% \section{Introduction} % add these to see outline in slides
\begin{frame}
\frametitle{whoami}
CoolerVoid is just another computer programmer and infosec guy.
\end{frame}
\begin{frame}
\frametitle{Introduction}
Motivations:\pause
\begin{itemize}
\item It is a different technique; you will not find it in msfvenom, veil...\pause
\item RedTeam operations.\pause
\item Improve the work.\pause
\item Bypass any firewall. %leave out the \pause on the final item
\end{itemize}
\end{frame}
% \section{Main Body} % add these to see outline in slides
\begin{frame}
\frametitle{The Justify}
\begin{itemize}
\item \includegraphics[width=10.0cm]{img/tunnel1.png}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The Justify}
\begin{itemize}
\item \includegraphics[width=10.0cm]{img/tunnel2.png}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The Justify}
\begin{itemize}
\item \includegraphics[width=10.0cm]{img/tunnel4.png}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The Justify}
\begin{itemize}
\item \includegraphics[width=10.0cm]{img/tunnel6.png}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The Justify}
\begin{itemize}
\item \includegraphics[width=10.0cm]{img/tunnel8.png}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{How to - Part 1}
The recipe, part 1 - the web client:\pause
\begin{itemize}
\item Upload the directory firefox\char`_shell to your remote host running httpd. \pause
\item Put that directory in the root directory of the web server, for example /var/www/html. \pause
\item Set read and write permissions on all files in the directory with the chmod command.\pause
\item If you browse to http://machine/firefox\char`_shell/firefox\char`_cmd\char`_tunnel.php?input=1 you can control the remote server. %leave out the \pause on the final item
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{How to - Part 2}
The recipe, part 2 - the firefox tunnel server:\pause
\begin{itemize}
\item You need mingw32, gcc, c++ and make to build.\pause
\item In the file firefox\char`_tunnel.cpp, change the domain variable on line 20 to your remote machine's IP address or DNS name. \pause
\item Compile in the root directory of the project with the command ``mingw32-make''; the resulting exe file is placed in the bin directory. \pause
\item Execute the file and control it using the PHP client. \pause
\item If you browse to http://machine/firefox\char`_shell/firefox\char`_cmd\char`_tunnel.php?input=1 you can control the remote server.
\end{itemize}
\end{frame}
% \section{Conclusion} % add these to see outline in slides
\begin{frame}
\frametitle{Demo}
\begin{itemize}
\item At YouTube
\item Look that following:
\item https://www.youtube.com/watch?v=C23N4yDRkjU
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Credits}
\begin{itemize}
\item Thank you
\item Any doubts? Talk to me: [email protected]
\item Twitter: @Cooler\char`_freenode
\item Github: https://github.com/CoolerVoid/
%http://newsgroups.derkeiler.com/Archive/Comp/comp.text.tex/2007-11/msg00299.html
\end{itemize}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.739211014,
"avg_line_length": 31.2148760331,
"ext": "tex",
"hexsha": "6720d543d927b6ddd09dba4b22073b5a44ca8b95",
"lang": "TeX",
"max_forks_count": 17,
"max_forks_repo_forks_event_max_datetime": "2021-03-03T10:16:50.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-12-30T17:59:34.000Z",
"max_forks_repo_head_hexsha": "aea99992cc2580227ee1a636dcab12e133b75e31",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "vaginessa/firefox_tunnel",
"max_forks_repo_path": "doc/beamer_demo.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "aea99992cc2580227ee1a636dcab12e133b75e31",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "vaginessa/firefox_tunnel",
"max_issues_repo_path": "doc/beamer_demo.tex",
"max_line_length": 172,
"max_stars_count": 72,
"max_stars_repo_head_hexsha": "aea99992cc2580227ee1a636dcab12e133b75e31",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "vaginessa/firefox_tunnel",
"max_stars_repo_path": "doc/beamer_demo.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-10T19:54:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-21T23:26:33.000Z",
"num_tokens": 1204,
"size": 3777
} |
\documentclass[11pt]{article}
\textheight 22cm
\textwidth 16cm
\hoffset= -0.6in
\voffset= -0.5in
\setlength{\parindent}{0cm}
\setlength{\parskip}{10pt plus 2pt minus 2pt}
\pagenumbering{roman}
\setcounter{page}{-9}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bd}{\begin{displaymath}}
\newcommand{\ed}{\end{displaymath}}
\begin{document}
\title{Astrophysics II: Laboratory 3}
\author{\Large Binary Stars and Stellar Masses}
\maketitle
\setlength{\parindent}{0.2pt}
\setlength{\parskip}{2ex}
\section{{\bf Objectives:}}
\begin{itemize}
\item Continue learning functionality of MATLAB.\\
- Visualize data.\\
- Use visualization in aid of fitting function to data.
\item Use Spectroscopic measurements of a stellar absorption line to determine binary masses.\\
\end{itemize}
\section{{\bf Introduction}}
Our sun appears to be a rarity in space. Approximately two-thirds of all solar-type field stars are members of binary systems, and recent studies suggest that virtually all stars begin life as members of multiple systems. Consequently, many of the stars you see at night are actually binaries, comprised of two stars gravitationally bound in orbit with one another. These binary systems are important astrophysical laboratories because they allow us to deduce the properties of the constituent stars more accurately than we can with single stars. The physics that governs how stars orbit one another was developed by Newton and Kepler over three hundred years ago, and can be summarized by the equation
\be
P^{2}=\frac{4\pi^{2}}{G(M_{P}+M_{S})}a^{3}
\ee
where $P$ is the period of the orbit, $G$ is the gravitational constant, $M_{P}$ and $M_{S}$ are the masses of the primary and secondary stars respectively, and $a$ is the sum of the semi-major axes of the two orbits, $a=a_{P}+a_{S}$. In mks units, $G = 6.67 \times 10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}}$, but these units are not the units of choice. If masses are measured in solar masses, distances in astronomical units, and periods in years, then the application of Newton's law to the Earth-Sun system gives $4\pi^{2}/G=1$.
Binary stars fall into several categories, depending on their observed properties: optical doubles, visual binaries, composite spectrum binaries, astrometric binaries, spectroscopic binaries, and eclipsing binaries (photometric binaries). Here we focus on the case of spectroscopic binaries in the case where the spectrum from the binary system exhibits a doublet of the same HI absorption line.
Your primary task in this lab is to measure the wavelength of the H$\alpha$ line vs time in a series of stellar spectra. This can be done simply by plotting the spectra in MATLAB and then estimating by eye the shift in wavelength, which corresponds to a Doppler shift in the low-velocity limit. We assume here that the components of the binary in question follow circular orbits. This is not true for all binaries, but for the present system it is valid. This means that the orbital eccentricity is zero and that the orbital velocities are constant at all times.
\section{{\bf Procedure}}
\begin{enumerate}
\item Open MATLAB.
\item Download the seven data files at www.astro.umd.edu/$\sim$mavara/lab3-121/ and save them into the MATLAB directory.
\item Type the following commands at the MATLAB command line to load the data in each of these files into MATLAB:\\
$<<$ load(`binary1.dat')\\
$<<$ load(`binary2.dat')\\
etc.....
\item Note that these data files each contain a matrix of two columns, the first signifying wavelength in angstroms and the second a normalized flux.
\item Plot a few of these spectra to get a feel for what they're showing (a sketch of the relevant MATLAB commands is given after this list).
\item The primary absorption feature in these plots is a doubling of the H$\alpha$ line. Notice that there are two strong absorption features in each spectrum and that they are of unequal depths. What does that difference signify in a normalized spectrum?
\item Use the equation
\be
\frac{\lambda-\lambda_{0}}{\lambda_{0}}=\frac{v}{c}
\ee
to determine the corresponding radial velocities of both components of the binary system for each spectrum, where $\lambda_{0}$ is the rest wavelength of the H$\alpha$ line.
\item The Spectra, in order, were taken on the Julian Dates given in Table 1.
\begin{table}[ht]
\caption{Dates of Observations}
\centering
\begin{tabular}{c c}
\hline \hline
Filename & Julian Date\\ [0.5ex]
\hline
binary1.dat & 2441578.831 \\
binary2.dat & 2441579.581 \\
binary3.dat & 2441580.742 \\
binary4.dat & 2441581.943 \\
binary5.dat & 2441582.670 \\
binary6.dat & 2441582.982 \\
binary7.dat & 2441583.960 \\
\hline
\end{tabular}
\end{table}
\item Plot the radial velocity from the absorption feature of each contributing star vs date, plotting both sets of data on the same plot.
\item Estimate the period, amplitude, etc., and plot a sine function over the data to confirm your estimates.
\item Note: In general, primary refers to the brighter or more massive star in a binary system.
\item Use our assumptions about this particular stellar binary system and the velocities to measure the semi-major axis of the orbit.
\item Use the simplified (units of stellar mass, etc.) version of Equation 1 to calculate the combined mass.
\item What are the masses of the two components in solar masses?\\
(Recall, from the definition of center of mass, $a_{S}M_{S} = a_{P}M_{P}$. Your measurement demonstrates the importance of studying binary stars. Virtually everything we know about stellar masses comes from analyzing their motions.)
\end{enumerate}
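As a rough sketch of the plotting step referred to in the list above (the variable name binary1 is created automatically by the load command; the exact commands you use may differ):\\
$<<$ plot(binary1(:,1), binary1(:,2))\\
$<<$ xlabel(`Wavelength (Angstroms)')\\
$<<$ ylabel(`Normalized flux')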
\section{{\bf Questions}}
{\it Now let's think about the usefulness of what we've learned.}
What is the primary problem with the measurements of masses of the primary and secondary we obtained?
How might we go about solving this problem?
What additional information about the binary system have we acquired with the spectroscopic information?
Can we be sure that the primary absorption features in the spectrum represent the same line? What else can particular lines tell us?
What would the spectrum of a triple system look like?
\\
\\
\\
* Note that this lab has borrowed heavily from Dr. Christopher Palma's Lab 4 of Astro 293, spring 2003.
http://www.astro.psu.edu/$\sim$cpalma/astro293/
\end{document}
| {
"alphanum_fraction": 0.7717020931,
"avg_line_length": 55.5225225225,
"ext": "tex",
"hexsha": "9d64da5d73c79de7f68d7e94f1ddb8146653ba57",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b1f139027dfe036276476c5b0fd83a515fba515c",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "cheyu-c/cheyu-c.github.io",
"max_forks_repo_path": "MATLAB/ASTR121/labBinary/astr121lab3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b1f139027dfe036276476c5b0fd83a515fba515c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "cheyu-c/cheyu-c.github.io",
"max_issues_repo_path": "MATLAB/ASTR121/labBinary/astr121lab3.tex",
"max_line_length": 703,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b1f139027dfe036276476c5b0fd83a515fba515c",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "cheyu-c/cheyu-c.github.io",
"max_stars_repo_path": "MATLAB/ASTR121/labBinary/astr121lab3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1522,
"size": 6163
} |
\chapter{Axiomatizing Valid General Concept Inclusions of Finite
Interpretations}
\label{cha:axiom-valid-el}
Our considerations about extracting general concept inclusions from erroneous data will be
based on previous results obtained by Baader and Distel~\cite{Diss-Felix} on extracting
all \emph{valid} general concept inclusions from a given finite interpretation. In this
section, we shall therefore review the notions and results from this work that are
necessary for our own.
The problem of extracting all valid general concept inclusions from a finite
interpretation can be made more precise as follows. Let $\mathcal{I} =
(\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite interpretation over $N_C$ and
$N_R$, \ie $\Delta^{\mathcal{I}}$ is a finite set. The task then is to find the set of
all general concept inclusions $C \sqsubseteq D$ with $C, D \in \ELbot(N_C, N_R)$ which
are valid in $\mathcal{I}$.
Of course, the set of all valid general concept inclusions is infinite in general. This
is because if $C \sqsubseteq D$ holds in $\mathcal{I}$, and $r \in N_R$, then $\exists
r. C \sqsubseteq \exists r. D$ holds in $\mathcal{I}$ as well. Such an infinite set is
hardly usable to represent knowledge suitable for machine consumption. Therefore, the
considerations in~\cite{Diss-Felix} concentrate on finding \emph{finite bases} of
$\mathcal{I}$, \ie sets of valid general concept inclusions of $\mathcal{I}$ that are also
\emph{complete}. We shall introduce these notions briefly in \Cref{sec:bases-gener-conc}.
One of the main results of~\cite{Diss-Felix} then is that finite bases for finite
interpretations $\mathcal{I}$ always exist, and we shall discuss them in
\Cref{sec:base-all-valid}. These results have been obtained by exploiting a close
connection between description logics and formal concept analysis. It is therefore
crucial that we introduce this connection first, and we shall do so
in~\Cref{sec:motivation}. In particular, we shall talk about \emph{induced contexts} and
\emph{model-based most-specific concept descriptions}.
\section{Bases of General Concept Inclusions}
\label{sec:bases-gener-conc}
General concept inclusions have a \emph{model-based semantics}, \ie their semantics is
defined in terms of being valid in some interpretation. We can therefore introduce the
notions of \emph{entailment} and \emph{completeness} as follows. Also notice the
similarity of this definition to \Cref{def:sound-complete-base}.
\begin{Definition}
\label{def:entailment-of-gcis}
Let $\mathcal{L} \cup \set{ C \sqsubseteq D }$ be a set of general concept inclusions
over $N_C$ and $N_R$. We shall say that $\mathcal{L}$ \emph{entails} $C \sqsubseteq D$,
  written $\mathcal{L} \models (C \sqsubseteq D)$ if and only if for all interpretations
  $\mathcal{I}$ over $N_C$ and $N_R$ it is true that if $\mathcal{I} \models \mathcal{L}$, then
$\mathcal{I} \models \set{ C \sqsubseteq D }$ as well.
Let $\mathcal{K}$ be another set of general concept inclusions over $N_C$ and $N_R$.
Then $\mathcal{K}$ is said to be \emph{sound} for $\mathcal{L}$ if and only if all
general concept inclusions in $\mathcal{K}$ are entailed by $\mathcal{L}$.
$\mathcal{K}$ is said to be \emph{complete} for $\mathcal{L}$ if and only if all general
concept inclusions in $\mathcal{L}$ are \emph{entailed} by $\mathcal{K}$. $\mathcal{K}$
is said to be a base for $\mathcal{L}$ if and only if $\mathcal{K}$ is sound and
complete for $\mathcal{L}$.
\end{Definition}
Let $\mathcal{I}$ be a finite interpretation over $N_C$ and $N_R$, and let us denote with
$\Th(\mathcal{I})$ the set of all \ELgfpbot general concept inclusions over $N_C$ and
$N_R$ which are valid in $\mathcal{I}$, \ie
\begin{equation*}
\Th(\mathcal{I}) := \set{ C \sqsubseteq D \mid C, D \in \ELgfpbot(N_{C}, N_{R}),
C^{\mathcal{I}} \subseteq D^{\mathcal{I}} }.
\end{equation*}
Let $\mathcal{K}$ be a set of general concept inclusions over $N_C$ and $N_R$. If
$\mathcal{K}$ is a base of $\Th(\mathcal{I})$, we shall simply say that $\mathcal{K}$ is a
\emph{base} of $\mathcal{I}$. If $\mathcal{K}$ consists of $\ELbot$ general concept
inclusions only, we shall say that $\mathcal{K}$ is an \emph{\ELbot base} of
$\mathcal{I}$. Otherwise, we shall occasionally say that $\mathcal{K}$ is an
\emph{\ELgfpbot base} of $\mathcal{I}$.
Notice that in the case that $\mathcal{K}$ is a base of $\mathcal{I}$, all general concept
inclusions in $\mathcal{K}$ have to hold in $\mathcal{I}$: the set $\Th(\mathcal{I})$ is
\emph{closed under entailment} in the sense that every \ELgfpbot general concept inclusion
over $N_C$ and $N_R$ which is entailed by $\Th(\mathcal{I})$ is already contained in this
set. Therefore, if $\mathcal{K}$ is sound for $\Th(\mathcal{I})$, it must be contained in
this set and thus $\mathcal{K}$ is a set of general concept inclusions which are valid in
$\mathcal{I}$. Moreover, since $\mathcal{K}$ is complete for $\Th(\mathcal{I})$, every
\ELgfpbot general concept inclusion over $N_C$ and $N_R$ that holds in $\mathcal{I}$ is
entailed by $\mathcal{K}$.
\section{Linking Formal Concept Analysis and Description Logics}
\label{sec:motivation}
Description logics and formal concept analysis are connected by a number of similar
notions. As an example, let us consider a formal context $\con K = (G, M, I)$ and a set
$A \subseteq M$. The set $A'$ then is the set of all objects of $\con K$ which have all
the attributes in $A$. We can view this fact from another perspective: if $A = \set{ m_1,
\dots, m_n }$, then we can think of the attributes $m_1, \dots, m_n$ as
\emph{propositions}, and the fact that $(g, m) \in I$ as saying that $g$ \emph{satisfies}
the proposition $m$. Then $g \in A'$ means that $g$ \emph{satisfies} the conjunction of
all propositions in $A$.
Let us reformulate this using description logics. To this end, let us define $N_C := M$
and $N_R = \emptyset$. Then we can think of $\con K$ as an interpretation
$\mathcal{I}_{\con K} = (G, \cdot^{\mathcal{I}_{\con K}})$ where
\begin{equation}
\label{eq:17}
m^{\mathcal{I}_{\con K}} := \set{ g \in G \mid (g, m) \in I } = \set{ m }'.
\end{equation}
Then we have $A' = (m_1 \sqcap \dots \sqcap m_n)^{\mathcal{I}_{\con K}}$ for all finite $A
= \set{ m_1, \dots, m_n } \subseteq M$. Indeed, if we consider a description logic that
only allows for conjunction $\sqcap$, then we can view finite formal contexts, the
derivation of sets of attributes, and even implications as special cases of finite
interpretations, extensions of concept descriptions, and general concept inclusions. Thus
the derivation operator $(\cdot)' \colon \subsets{M} \to \subsets{G}$ naturally
corresponds to computing the extension of concept descriptions in interpretations.
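For a small illustration (this example is ours and serves only to make the correspondence
concrete), consider the formal context
\begin{equation*}
  \con K =
  \begin{array}{c|cc}
    ~ & m_1 & m_2 \\\midrule
    g_1 & \times & \times \\
    g_2 & \times & . \\
  \end{array}
\end{equation*}
over $N_C = \set{ m_1, m_2 }$ and $N_R = \emptyset$. Then $m_1^{\mathcal{I}_{\con K}} =
\set{ g_1, g_2 }$ and $m_2^{\mathcal{I}_{\con K}} = \set{ g_1 }$, and indeed
\begin{equation*}
  \set{ m_1, m_2 }' = \set{ g_1 } = (m_1 \sqcap m_2)^{\mathcal{I}_{\con K}}.
\end{equation*}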
However, the other derivation operator $(\cdot)' \colon \subsets{G} \to \subsets{M}$ does
not have such a correspondence in description logics. This gap shall be filled by
considering \emph{model-based most-specific concept descriptions}, which we introduce in
\Cref{sec:defin-and-basic}.
The connection between description logics and formal concept analysis expressed in
\eqref{eq:17} only works in one direction: it allows us to represent basic notions of formal
concept analysis in terms of description logics, but not vice versa. Even if we restrict
our attention to the rather light-weight description logic \ELbot, it is not clear how to
represent an interpretation by means of notions from formal concept analysis.
To approach this issue, we shall introduce \emph{induced contexts} in
\Cref{sec:induced-contexts}. Such contexts allow to express tight connections between the
notions of formal concept analysis and description logics, and, since induced contexts are
just formal contexts, still allow the application of standard methods from formal concept
analysis, such as the extraction of bases. This fact will be exploited when we discuss
the computation of finite bases in \Cref{sec:base-all-valid}.
\subsection{Model-Based Most-Specific Concept Descriptions}
\label{sec:defin-and-basic}
Let $\con K = (G, M, I)$, and let us try to motivate how to find a natural correspondence
of the derivation operator $(\cdot)' \colon \subsets{G} \to \subsets{M}$ within
description logics. Let $B \subseteq G$ be a set of objects of $\con K$. Then the set $A
:= B'$ can be thought of as the \emph{most-specific} set of attributes that
\emph{describe} $B$, \ie
\begin{enumerate}[i. ]
\item $B \subseteq A'$, \ie $A$ \emph{describes} $B$, and
\item for all sets $C \subseteq M$ that satisfy $B \subseteq C'$ (that describe $B$) it is
true that $C \subseteq A$, ($A$ contains \emph{more} attributes than $C$, \ie is
\emph{more specific}).
\end{enumerate}
The last point is true because if $B \subseteq C'$, then by
\Cref{lem:derivation-is-galois-connection} it is true that $C \subseteq B' = A$. Notice
that the description of $A$ as a most-specific description of $B$ is also a
characterization, \ie if $A$ is the most-specific description of $B$ in the above sense,
then $A = B'$.
To mimic this \emph{most-specific description} in description logics, Baader and Distel
introduce the notion of \emph{most-specific concept descriptions}.
\begin{Definition}[Model-Based Most-Specific Concept Description]
\label{def:most-specific-concept-description}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be an interpretation,
and let $X \subseteq \Delta^{\mathcal{I}}$. A \emph{model-based most-specific concept
description} of $X$ in $\mathcal{I}$ is a concept description $C$ such that
\begin{enumerate}[i. ]
\item $X \subseteq C^{\mathcal{I}}$, and
\item for each concept description $D$ satisfying $X \subseteq D^{\mathcal{I}}$ it is
true that $C \sqsubseteq D$, \ie $C$ is subsumed by $D$.
\end{enumerate}
\end{Definition}
If a model-based most-specific concept description $C$ for $X$ in $\mathcal{I}$ exists, it
is unique up to equivalence: if $D$ is another such model-based most-specific concept
description, then $C \sqsubseteq D$ and $D \sqsubseteq C$, by the last condition of the
definition. Therefore, $C \equiv D$. Because of this, we can talk about \emph{the}
model-based most-specific concept description of $X$ in $\mathcal{I}$, and shall denote it
with $X^{\mathcal{I}}$, to stress the similarity to the derivation operator from formal
concept analysis. We shall also write $X^{\mathcal{I}\mathcal{I}}$ instead of
$(X^{\mathcal{I}})^{\mathcal{I}}$ and $C^{\mathcal{I}\mathcal{I}}$ instead of
$(C^{\mathcal{I}})^{\mathcal{I}}$ for syntactic convenience.
The existence of model-based most-specific concept descriptions, however, is not clear per
se, and the choice of the description logic in which we look for model-based most-specific
concept descriptions is crucial here: if we only consider \ELbot concept descriptions,
then model-based most-specific concept descriptions do not necessarily exist, as is shown
in \Cref{expl:mmscs-may-not-exist-in-ELbot}. However, if we allow all concept
descriptions in \Cref{def:most-specific-concept-description} to be \ELgfp or \ELgfpbot
concept descriptions, then the existence of model-based most-specific concept descriptions
can be guaranteed.
\begin{Theorem}[Theorem 4.7 of~\cite{Diss-Felix}]
\label{thm:existence-of-mmscs-in-ELgfpbot}
Model-based most-specific concept descriptions exist in \ELgfp\ and \ELgfpbot for all
finite interpretations $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ and
sets $X \subseteq \Delta^{\mathcal{I}}$, and they can be computed effectively.
\end{Theorem}
The computation of model-based most-specific concept descriptions can be achieved using
\emph{\EL description graphs}, least common subsumers and
\emph{simulations}~\cite{DBLP:conf/ijcai/Baader03a,Diss-Felix}. See~\cite[Section
4.1.2]{Diss-Felix} for details on this.
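For a concrete (purely illustrative) example, let $N_C = \set{ A }$ and $N_R = \set{ r }$,
and let $\mathcal{I}$ be the interpretation with $\Delta^{\mathcal{I}} = \set{ x, y }$,
$A^{\mathcal{I}} = \set{ y }$ and $r^{\mathcal{I}} = \set{ (x, y) }$. Then, up to
equivalence,
\begin{equation*}
  \set{ y }^{\mathcal{I}} = A, \quad
  \set{ x }^{\mathcal{I}} = \exists r. A, \quad
  \set{ x, y }^{\mathcal{I}} = \top, \quad
  \emptyset^{\mathcal{I}} = \bot,
\end{equation*}
since $y$ satisfies $A$ and has no $r$-successors, while $x$ satisfies no concept name and
its only $r$-successor is $y$. We shall reuse this small interpretation for illustration
below.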
We have motivated model-based most-specific concept descriptions by most-specific
descriptions in formal contexts, and for this we have made use of the fact that the
derivation operators form a Galois connection. It is therefore only natural to expect
that model-based most-specific concept descriptions are part of a Galois connection, too.
However, we have to notice that we cannot expect to obtain a Galois connection in the
sense of \Cref{sec:galois-connections}, simply because the relation $\sqsubseteq$ is not
antisymmetric, and thus not an order relation: it may be the case that $C \sqsubseteq D$
and $D \sqsubseteq C$, but $D \neq C$. We can remedy this fact by considering concept
descriptions only \emph{up to equivalence}: instead of a single concept description $C$,
we always consider the set $[C]$ of all concept descriptions which are equivalent to $C$.
Then $[C] \sqsubseteq [D]$ is well-defined for all concept descriptions $C$ and $D$, and
$\sqsubseteq$ indeed yields an order relation this way. This is only a technical detail,
however, and we shall not make it explicit in our following considerations.
\begin{Lemma}[Lemma 4.1 of~\cite{Diss-Felix}]
\label{lem:mmsc-and-extension-are-galois-connection}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be an interpretation
over $N_C$ and $N_R$, $X \subseteq \Delta^{\mathcal{I}}$ and $C$ an \ELgfpbot concept
description over $N_C$ and $N_R$. Then
\begin{equation}
\label{eq:19}
X \subseteq C^{\mathcal{I}} \iff X^{\mathcal{I}} \sqsubseteq C.
\end{equation}
In particular, for $X, Y \subseteq \Delta^{\mathcal{I}}$ and for \ELgfpbot concept
descriptions $C, D$ over $N_C$ and $N_R$, it is true that
\begin{enumerate}[i. ]
\item $X \subseteq Y \implies X^{\mathcal{I}} \sqsubseteq Y^{\mathcal{I}}$,
\item $C \sqsubseteq D \implies C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$,
\item $X \subseteq X^{\mathcal{I}\mathcal{I}}$,
\item $C^{\mathcal{I}\mathcal{I}} \sqsubseteq C$,
\item $X^{\mathcal{I}} \equiv X^{\mathcal{I}\mathcal{I}\mathcal{I}}$,
\item $C^{\mathcal{I}} = C^{\mathcal{I}\mathcal{I}\mathcal{I}}$.
\end{enumerate}
\end{Lemma}
\begin{Proof}
We only show \eqref{eq:19}, the other claims follow from
\Cref{lem:properties-of-galois-connections} and the above-made considerations. If $X
\subseteq C^{\mathcal{I}}$, then $X^{\mathcal{I}} \sqsubseteq C$ because
$X^{\mathcal{I}}$ is by definition the most-specific concept description that contains
$X$ in its extension. Conversely, if $X^{\mathcal{I}} \sqsubseteq C$, then by
definition $X^{\mathcal{I}\mathcal{I}} \subseteq C^{\mathcal{I}}$. But since
$X^{\mathcal{I}}$ is the model-based most-specific concept description of $X$ in
$\mathcal{I}$, it contains $X$ in its extension, \ie $X \subseteq
X^{\mathcal{I}\mathcal{I}}$. Therefore, $X \subseteq C^{\mathcal{I}}$.
\end{Proof}
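For instance, in the small interpretation with $\Delta^{\mathcal{I}} = \set{ x, y }$,
$A^{\mathcal{I}} = \set{ y }$ and $r^{\mathcal{I}} = \set{ (x, y) }$ introduced above for
illustration, we have $\set{ x } \subseteq (\exists r. \top)^{\mathcal{I}}$ and,
correspondingly, $\set{ x }^{\mathcal{I}} \equiv \exists r. A \sqsubseteq \exists r. \top$,
in accordance with \eqref{eq:19}.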
Another useful property is the following, rather technical proposition.
\begin{Proposition}[Lemma 4.2 of~\cite{Diss-Felix}]
\label{prop:double-II-under-I}
Let $\mathcal{I}$ be an interpretation over $N_C$ and $N_R$, and let $C, D$ be \ELgfpbot
concept descriptions over $N_C$ and $N_R$ and let $r \in N_R$. Then
\begin{enumerate}[i. ]
\item $(C \sqcap D)^{\mathcal{I}} = (C^{\mathcal{I}\mathcal{I}} \sqcap
D)^{\mathcal{I}}$, and
\item $(\exists r. C)^{\mathcal{I}} = (\exists r. C^{\mathcal{I}\mathcal{I}})^{\mathcal{I}}$.
\end{enumerate}
\end{Proposition}
\begin{Proof}
For the first claim we use \Cref{lem:mmsc-and-extension-are-galois-connection} and obtain
\begin{equation*}
(C \sqcap D)^{\mathcal{I}} = C^{\mathcal{I}} \cap D^{\mathcal{I}} =
C^{\mathcal{I}\mathcal{I}\mathcal{I}} \cap D^{\mathcal{I}} =
(C^{\mathcal{I}\mathcal{I}} \sqcap D)^{\mathcal{I}}.
\end{equation*}
For the second one we observe that
\begin{align*}
(\exists r. C^{\mathcal{I}\mathcal{I}})^{\mathcal{I}}
&= \set{ x \in \Delta^{\mathcal{I}} \mid \exists y \in \Delta^{\mathcal{I}} \st (x, y)
\in r^{\mathcal{I}} \wedge y \in C^{\mathcal{I}\mathcal{I}\mathcal{I}} }\\
&= \set{ x \in \Delta^{\mathcal{I}} \mid \exists y \in \Delta^{\mathcal{I}} \st (x, y)
\in r^{\mathcal{I}} \wedge y \in C^{\mathcal{I}} } \\
&= (\exists r. C)^{\mathcal{I}},
\end{align*}
again because of $C^{\mathcal{I}} = C^{\mathcal{I}\mathcal{I}\mathcal{I}}$ from
\Cref{lem:mmsc-and-extension-are-galois-connection}.
\end{Proof}
\subsection{Induced Contexts}
\label{sec:induced-contexts}
We already have seen how formal contexts can be represented as interpretations. In this
section we shall introduce the approach of Baader and Distel of \emph{induced contexts},
which provides the inverse direction, \ie which allows us to represent interpretations as
formal contexts.  The notion of induced contexts was also used implicitly in the work of
\textcite{books/math/Prediger00} in her study on \emph{terminological attribute logic}.
\begin{Definition}[Induced Context]
\label{def:induced-context}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite
interpretation over $N_C$ and $N_R$, and let $M$ be a set of concept descriptions over
$N_C$ and $N_R$. The \emph{induced context} of $\mathcal{I}$ and $M$ is the formal
context $\con K_{\mathcal{I}, M} = (\Delta^{\mathcal{I}}, M, \nabla)$, where for $x \in
\Delta^{\mathcal{I}}$ and $C \in M$
\begin{equation*}
(x, C) \in \nabla \diff x \in C^{\mathcal{I}}.
\end{equation*}
\end{Definition}
Induced formal contexts do not necessarily represent the interpretation $\mathcal{I}$
completely; indeed, what is represented of $\mathcal{I}$ heavily depends on the choice of
the set $M$ of concept descriptions.  We shall later see that we can choose this set $M$
to obtain a close connection between bases of $\con K_{\mathcal{I}, M}$ and bases of
$\mathcal{I}$.
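For a small illustration, let again $N_C = \set{ A }$ and $N_R = \set{ r }$, let
$\mathcal{I}$ be given by $\Delta^{\mathcal{I}} = \set{ x, y }$, $A^{\mathcal{I}} =
\set{ y }$ and $r^{\mathcal{I}} = \set{ (x, y) }$, and choose $M = \set{ A, \exists r.
\top }$. Then
\begin{equation*}
  \con K_{\mathcal{I}, M} =
  \begin{array}{c|cc}
    ~ & A & \exists r. \top \\\midrule
    x & . & \times \\
    y & \times & . \\
  \end{array}
\end{equation*}
since $A^{\mathcal{I}} = \set{ y }$ and $(\exists r. \top)^{\mathcal{I}} = \set{ x }$.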
We start our considerations about induced contexts by introducing some auxiliary notions
first. For a finite set $U \subseteq M$ we define the set
\begin{equation*}
\bigsqcap U :=
\begin{cases}
\top & \text{ if } U = \emptyset, \\
\bigsqcap_{V \in U} V & \text{otherwise}.
\end{cases}
\end{equation*}
We call $\bigsqcap U$ the \emph{concept description defined by} $U$. Furthermore, for a
concept description $C$ we define the \emph{projection} of $C$ onto $M$ as
\begin{equation*}
\pr_M(C) := \set{ D \in M \mid C \sqsubseteq D }.
\end{equation*}
Concept descriptions defined by subsets of $M$ together with projections capture some kind
of notion of \emph{upper approximation} in terms of $M$: if $C$ is a concept description,
then the most-specific concept description $D$ satisfying $C \sqsubseteq D$ that can be
defined by a subset of $M$ is given by
\begin{equation*}
D = \bigsqcap \pr_M(C).
\end{equation*}
This is reminiscent of our introductory motivation for model-based most-specific concept
descriptions, and indeed there are similarities.  One of them is that the mappings $U
\mapsto \bigsqcap U$ and $C \mapsto \pr_M(C)$ satisfy the main condition of an antitone
Galois connection.
\begin{Lemma}
\label{lem:pr-bigsqcap-forms-Galois-connection}
Let $M$ be a finite set of concept descriptions over $N_C$ and $N_R$. Then for each $U
\subseteq M$ and each concept description $C$ over $N_C$ and $N_R$ it is true that
\begin{equation*}
C \sqsubseteq \bigsqcap U \iff U \subseteq \pr_M(C).
\end{equation*}
In particular, the following statements holds for all $U, V \subseteq M$ and all concept
descriptions $C, D$ over $N_C$ and $N_R$.
\begin{enumerate}[i. ]
\item $C \sqsubseteq D \implies \pr_M(D) \subseteq \pr_M(C)$,
\item $U \subseteq V \implies \bigsqcap V \sqsubseteq \bigsqcap U$,
\item $C \sqsubseteq \bigsqcap \pr_M(C)$,
\item $U \subseteq \pr_M(\bigsqcap U)$.
\end{enumerate}
\end{Lemma}
\begin{Proof}
Assume $C \sqsubseteq \bigsqcap U$. Then $\pr_M(\bigsqcap U) \subseteq \pr_M(C)$, since
every concept description $D \in M$ satisfying $\bigsqcap U \sqsubseteq D$ also
satisfies $C \sqsubseteq D$. Furthermore, $U \subseteq \pr_M(\bigsqcap U)$, since for
each $F \in U$ it is true that $\bigsqcap U \sqsubseteq F$. Thus
\begin{equation*}
U \subseteq \pr_M(\bigsqcap U) \subseteq \pr_M(C).
\end{equation*}
For the converse direction, assume that $U \subseteq \pr_M(C)$. Then $\bigsqcap
\pr_M(C) \sqsubseteq \bigsqcap U$. Since for each $D \in \pr_M(C)$ it is true that $C
\sqsubseteq D$, we also have $C \sqsubseteq \bigsqcap \pr_M(C)$. In sum, we obtain
\begin{equation*}
C \sqsubseteq \bigsqcap \pr_M(C) \sqsubseteq \bigsqcap U.
\end{equation*}
\end{Proof}
For certain concept descriptions $C$, the upper approximation provided by $\bigsqcap
\pr_M(C)$ coincides with $C$. Those concept descriptions are exactly those which are
\emph{expressible in terms of} $M$, \ie there exists a subset $N \subseteq M$ such that $C
\equiv \bigsqcap N$.
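To illustrate the difference (with an ad-hoc choice of $M$), let $N_C = \set{ A }$, $N_R =
\set{ r }$ and $M = \set{ A, \exists r. \top }$. The concept description $A \sqcap \exists
r. \top$ is expressible in terms of $M$, whereas $\exists r. A$ is not: we have
$\pr_M(\exists r. A) = \set{ \exists r. \top }$, and hence the upper approximation
$\bigsqcap \pr_M(\exists r. A) = \exists r. \top$ is strictly more general than $\exists
r. A$.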
\begin{Lemma}[\cite{Diss-Felix}]
\label{lem:characterizing-expressible-in-terms-of}
Let $M \cup \set{C}$ be a set of concept descriptions over $N_C$ and $N_R$. Then $C$ is
expressible in terms of $M$ if and only if
\begin{equation*}
C \equiv \bigsqcap \pr_M(C).
\end{equation*}
\end{Lemma}
\begin{Proof}
Clearly, if $C \equiv \bigsqcap \pr_M(C)$, then $C$ is expressible in terms of $M$.
Conversely, if $C$ is expressible in terms of $M$, then $C \equiv \bigsqcap N$ for some
$N \subseteq M$. Then $C \sqsubseteq D$ for all $D \in N$, and therefore $N \subseteq
\pr_M(C)$. By \Cref{lem:pr-bigsqcap-forms-Galois-connection}, it is thus true that
\begin{equation*}
C \sqsubseteq \bigsqcap \pr_M(C) \sqsubseteq \bigsqcap N \equiv C
\end{equation*}
and therefore $C \equiv \bigsqcap \pr_M(C)$.
\end{Proof}
We can now state some connections between the derivation operators of an induced context
on one side, and computing the extension of a concept description as well as model-based
most-specific concept descriptions on the other. These results are rather technical but
necessary for our further considerations. We include the proofs of these statements here,
as they are rather simple and may help to better understand the corresponding claims.
\begin{Proposition}[Lemma~4.11 and~4.12 of~\cite{Diss-Felix}]
\label{prop:connection-I-prime-1}
Let $\mathcal{I}$ be a finite interpretation and $M$ be a finite set of concept
descriptions. Then for every concept description expressible in terms of $M$ it is true
that
\begin{equation*}
C^{\mathcal{I}} = (\pr_M(C))',
\end{equation*}
and for $O \subseteq \Delta^{\mathcal{I}}$ it is true that
\begin{equation*}
O' = \pr_M(O^{\mathcal{I}}),
\end{equation*}
where the derivation is conducted in $\con K_{\mathcal{I}, M}$.
\end{Proposition}
\begin{Proof}
Since $C$ is expressible in terms of $M$,
\Cref{lem:characterizing-expressible-in-terms-of} yields $C \equiv \bigsqcap \pr_M(C)$.
Thus
\begin{align*}
x \in C^{\mathcal{I}}
& \iff x \in (\bigsqcap \pr_M(C))^{\mathcal{I}} \\
& \iff \forall D \in \pr_M(C) \holds x \in D^{\mathcal{I}} \\
& \iff x \in (\pr_M(C))',
\end{align*}
since $(\pr_M(C))' = \set{ x \in \Delta^{\mathcal{I}} \mid \forall D \in \pr_M(C) \holds
x \in D^{\mathcal{I}} }$.
If $O \subseteq \Delta^{\mathcal{I}}$, then
\begin{align*}
D \in O'
& \iff \forall g \in O \holds g \in D^{\mathcal{I}} \\
& \iff O \subseteq D^{\mathcal{I}} \\
& \iff O^{\mathcal{I}} \sqsubseteq D \\
& \iff D \in \pr_M(O^{\mathcal{I}}),
\end{align*}
where $O \subseteq D^{\mathcal{I}} \iff O^{\mathcal{I}} \sqsubseteq D$ holds due to
\Cref{lem:mmsc-and-extension-are-galois-connection}.
\end{Proof}
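In the induced context $\con K_{\mathcal{I}, M}$ of our illustrative interpretation
($\Delta^{\mathcal{I}} = \set{ x, y }$, $A^{\mathcal{I}} = \set{ y }$, $r^{\mathcal{I}} =
\set{ (x, y) }$) with $M = \set{ A, \exists r. \top }$, we have, for instance, $(\exists
r. \top)^{\mathcal{I}} = \set{ x } = (\pr_M(\exists r. \top))'$ and $\set{ x }' = \set{
\exists r. \top } = \pr_M(\set{ x }^{\mathcal{I}})$.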
\begin{Proposition}[Lemma~4.10 and~4.11 of~\cite{Diss-Felix}]
\label{prop:connection-I-prime-2}
Let $\mathcal{I}$ be a finite interpretation and let $M$ be a finite set of concept
descriptions. Then each $B \subseteq M$ satisfies
\begin{equation*}
B' = (\bigsqcap B)^{\mathcal{I}},
\end{equation*}
and if $A \subseteq \Delta^{\mathcal{I}}$ is such that $A^{\mathcal{I}}$ is expressible
in terms of $M$, then
\begin{equation*}
\bigsqcap A' \equiv A^{\mathcal{I}},
\end{equation*}
where all derivations are conducted in $\con K_{\mathcal{I}, M} = (\Delta^{\mathcal{I}},
M, \nabla)$.
\end{Proposition}
\begin{Proof}
Observe that $x \in B'$ if and only if $x \in C^{\mathcal{I}}$ for all $C \in B$.
Therefore
\begin{equation*}
x \in B' \iff \forall C \in B \holds x \in C^{\mathcal{I}} \iff x \in \bigcap_{C \in
B} C^{\mathcal{I}} = (\bigsqcap B)^{\mathcal{I}},
\end{equation*}
and therefore $B' = (\bigsqcap B)^{\mathcal{I}}$.
If $A \subseteq \Delta^{\mathcal{I}}$ is such that $A^{\mathcal{I}}$ is expressible in
terms of $M$, then by \Cref{lem:characterizing-expressible-in-terms-of} it is true that
\begin{equation*}
A^{\mathcal{I}} \equiv \bigsqcap \pr_M(A^{\mathcal{I}}).
\end{equation*}
By \Cref{prop:connection-I-prime-1}, $\pr_M(A^{\mathcal{I}}) = A'$, and thus
$A^{\mathcal{I}} \equiv \bigsqcap A'$ as required.
\end{Proof}
\begin{Proposition}
\label{prop:connection-I-prime-3}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite
interpretation and let $M$ be a set of concept descriptions. Let $A \subseteq
\Delta^{\mathcal{I}}$ such that $A^{\mathcal{I}}$ is expressible in terms of $M$. Then
$A^{\mathcal{I}\mathcal{I}} = A''$, where the derivations are conducted in $\con
K_{\mathcal{I}, M}$.
\end{Proposition}
\begin{Proof}
Again, by \Cref{lem:characterizing-expressible-in-terms-of} we have $A^{\mathcal{I}}
\equiv \bigsqcap \pr_M(A^{\mathcal{I}})$ and thus
\begin{align*}
A^{\mathcal{I}\mathcal{I}}
&= \bigl(\bigsqcap \pr_M(A^{\mathcal{I}})\bigr)^{\mathcal{I}} \\
&= \pr_M(A^{\mathcal{I}})' \\
&= A''
\end{align*}
by \Cref{prop:connection-I-prime-1} and \Cref{prop:connection-I-prime-2}.
\end{Proof}
We can rephrase some of the above results as follows. Let $\mathcal{I}$ be a finite
interpretation and let us call a concept description $C$ a \emph{model-based most-specific
concept description} of $\mathcal{I}$ if it is the model-based most-specific concept
description of some subset of $\Delta^{\mathcal{I}}$. Note that $C$ is a model-based
most-specific concept description of $\mathcal{I}$ if and only if $C \equiv
C^{\mathcal{I}\mathcal{I}}$.
Let $M$ be a set of concept descriptions such that all model-based most-specific concept
descriptions are expressible in terms of $M$. If we then identify equivalent model-based
most-specific concept descriptions and order them by $\sqsubseteq$, then the resulting
ordered set is dually isomorphic to the lattice of intents of $\con K_{\mathcal{I}, M}$.
Note that with $\Int(\con K_{\mathcal{I}, M})$ we denote the set of intents of $\con
K_{\mathcal{I}, M}$.
\begin{Corollary}[contains Corollary~4.13 of~\cite{Diss-Felix}]
\label{cor:mmsc-lattice}
Let $\mathcal{I}$ be a finite interpretation and let $M$ be a set of concept
descriptions such that model-based most-specific concept descriptions of $\mathcal{I}$
are expressible in terms of $M$. Denote with $\mathcal{M}$ the set of all model-based
most-specific concept descriptions considered up to equivalence. Then the mapping
\begin{equation*}
\begin{array}{cccc}
\phi \colon & \Int(\con K_{\mathcal{I}, M}) & \to & \mathcal{M} \\
~ & U & \mapsto & \bigsqcap U
\end{array}
\end{equation*}
is an order-isomorphism between $(\Int(\con K_{\mathcal{I}, M}), \subseteq)$ and
$(\mathcal{M}, \sqsupseteq)$, where
\begin{equation*}
\phi^{-1}(C) = \pr_M(C) \quad (C \in \mathcal{M}).
\end{equation*}
In particular this means
\begin{enumerate}[i. ]
\item\label{item:10} $\bigsqcap U \in \mathcal{M}$ for all $U \in \Int(\con K_{\mathcal{I}, M})$,
\item\label{item:11} $\pr_M(C) \in \Int(\con K_{\mathcal{I}, M})$ for all $C \in \mathcal{M}$,
\item\label{item:12} $U \subseteq V$ implies $\bigsqcap U \sqsupseteq \bigsqcap V$ for
all $U, V \subseteq M$,
\item\label{item:13} $C \sqsubseteq D$ implies $\pr_M(C) \supseteq \pr_M(D)$ for all $C,
D \in \mathcal{M}$,
\item\label{item:14} $\pr_M(\bigsqcap U) = U$ for all $U \in \Int(\con K_{\mathcal{I}, M})$,
\item\label{item:15} $\bigsqcap \pr_M(C) \equiv C$ for each $C \in \mathcal{M}$.
\end{enumerate}
Additionally,
\begin{equation}
\label{eq:18}
\begin{aligned}
U'' &= \pr_M((\bigsqcap U)^{\mathcal{I}\mathcal{I}}), \\
C^{\mathcal{I}\mathcal{I}} &= \bigsqcap (\pr_M(C))''
\end{aligned}
\end{equation}
is true for all $U \subseteq M$ and all concept descriptions $C$ expressible in terms of
$M$, and where the derivations are done in $\con K_{\mathcal{I}, M}$.
\end{Corollary}
\begin{Proof}
Claims~(\ref{item:12}) and~(\ref{item:13}) are already contained in
\Cref{lem:pr-bigsqcap-forms-Galois-connection}, and (\ref{item:15}) is just
\Cref{lem:characterizing-expressible-in-terms-of} again. We show the other claims step
by step.
For~(\ref{item:10}) let $U \in \Int(\con K_{\mathcal{I}, M})$. Then $U = U''$, and thus
\begin{equation*}
\bigsqcap U = \bigsqcap U'' \equiv (U')^{\mathcal{I}} = (\bigsqcap U)^{\mathcal{I}\mathcal{I}}
\end{equation*}
by \Cref{prop:connection-I-prime-2}. Thus $U \in \mathcal{M}$ up to equivalence.
For~(\ref{item:11}) let $C \in \mathcal{M}$. Then $C \equiv C^{\mathcal{I}\mathcal{I}}$
and $C$ is expressible in terms of $M$. From \Cref{prop:connection-I-prime-1} it
follows
\begin{align*}
\pr_M(C)
&= \pr_M(C^{\mathcal{I}\mathcal{I}}) \\
&= (C^{\mathcal{I}})' \\
&= \pr_M(C)''
\end{align*}
and thus $\pr_M(C) \in \Int(\con K_{\mathcal{I}, M})$.
For~(\ref{item:14}) let again $U \in \Int(\con K_{\mathcal{I}, M})$. We first observe
that $U \subseteq \pr_M(\bigsqcap U)$ by \Cref{lem:pr-bigsqcap-forms-Galois-connection}.
Furthermore, for each concept description $D$ it is true that
\begin{align*}
D \in \pr_M(\bigsqcap U)
&\iff \bigsqcap U \sqsubseteq D \\
&\:\implies (\bigsqcap U)^{\mathcal{I}} \subseteq D^{\mathcal{I}} \\
&\iff U' \subseteq \set{ D }'\\
&\iff U'' \supseteq \set{ D }'' \ni D \\
&\iff D \in U'' = U,
\end{align*}
using \Cref{prop:connection-I-prime-2} for $(\bigsqcap U)^{\mathcal{I}} = U'$, and the
definition of $\con K_{\mathcal{I}, M}$ to obtain $D^{\mathcal{I}} = \set{ D }'$. Thus,
$\pr_M(\bigsqcap U) \subseteq U$ and equality follows.
For the equations given in~(\ref{eq:18}) we observe
\begin{align*}
\pr_M((\bigsqcap U)^{\mathcal{I}\mathcal{I}})
&= \pr_M((U')^{\mathcal{I}}) \\
&= U''
\intertext{by \Cref{prop:connection-I-prime-2} and \Cref{prop:connection-I-prime-1}, and}
\bigsqcap (\pr_M(C))''
&\equiv (\pr_M(C)')^{\mathcal{I}} \\
&= C^{\mathcal{I}\mathcal{I}},
\end{align*}
again because of \Cref{prop:connection-I-prime-2} and \Cref{prop:connection-I-prime-1},
for every $U \subseteq M$ and every concept description $C$ expressible in terms of $M$.
\end{Proof}
The equivalence $\bigsqcap (\pr_M(C))'' \equiv C^{\mathcal{I}\mathcal{I}}$ does not hold
in general for concept descriptions $C$, as the following trivial example shows.
\begin{Example}
\label{expl:counterexample}
Let $N_C = \emptyset$ and $N_R = \set{ \mathsf{r} }$, and let $\mathcal{I} =
(\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be an interpretation over $N_C$ and $N_R$
with $\Delta^{\mathcal{I}} = \set{ x }$ and $r^{\mathcal{I}} = \emptyset$. Then the
model-based most-specific concept descriptions of $\mathcal{I}$ are, up to equivalence,
just $\top$ and $\bot$. Let $M = \set{ \bot }$. Then clearly all model-based
most-specific concept descriptions of $\mathcal{I}$ are expressible in terms of $M$.
Then
\begin{equation*}
\con K_{\mathcal{I}, M} =
\begin{array}{c|c}
~ & \bot \\\midrule
x & . \\
\end{array}
\end{equation*}
Now consider $C = \exists r. \top$. Then on the one hand,
\begin{equation*}
C^{\mathcal{I}\mathcal{I}} = \emptyset^{\mathcal{I}} = \bot,
\end{equation*}
but on the other hand
\begin{equation*}
\bigsqcap \pr_M(C)'' = \bigsqcap \emptyset'' = \bigsqcap \emptyset = \top,
\end{equation*}
so $C^{\mathcal{I}\mathcal{I}} \neq \bigsqcap \pr_M(C)''$.
\end{Example}
A useful consequence of \Cref{cor:mmsc-lattice} is the following result.
\begin{Lemma}
\label{lem:double-II-gets-double-prime}
Let $\mathcal{I}$ be a finite interpretation, and let $U \subseteq M_{\mathcal{I}}$. Then
\begin{equation*}
(\bigsqcap U)^{\mathcal{I}\mathcal{I}} = \bigsqcap U'',
\end{equation*}
where the derivations are done in $\con K_{\mathcal{I}}$.
\end{Lemma}
\begin{Proof}
Clearly $\bigsqcap U$ is expressible in terms of $M_{\mathcal{I}}$. Thus
\Cref{cor:mmsc-lattice} yields
\begin{equation*}
(\bigsqcap U)^{\mathcal{I}\mathcal{I}} = \bigsqcap(\pr_{M_{\mathcal{I}}}(\bigsqcap U))''.
\end{equation*}
By \Cref{prop:connection-I-prime-1} it is true that $\pr_{M_{\mathcal{I}}}(\bigsqcap U)'
= (\bigsqcap U)^{\mathcal{I}}$, thus
\begin{align*}
(\bigsqcap U)^{\mathcal{I}\mathcal{I}}
&= \bigsqcap ((\bigsqcap U)^{\mathcal{I}})'\\
&= \bigsqcap U''
\end{align*}
where $(\bigsqcap U)^{\mathcal{I}} = U'$ is true due to
\Cref{prop:connection-I-prime-2}.
\end{Proof}
\section{Computing Bases of Valid GCIs of a Finite Interpretation}
\label{sec:base-all-valid}
Using the notions of model-based most-specific concept descriptions and induced contexts,
we are finally prepared to introduce some of the main results of~\cite{Diss-Felix} on
computing bases of finite interpretations. The main idea behind these results is to use
ideas and methods from formal concept analysis, either by simulating them in a description
logic setting, or by transforming the initially given interpretation into formal contexts
and applying standard methods from formal concept analysis to it.
Recall that for a (finite) formal context $\con K = (G, M, I)$ the set
\begin{equation*}
\set{ A \to A'' \mid A \subseteq M }
\end{equation*}
is always a base of $\con K$. This is because every valid implication $(A \to B) \in
\Th(\con K)$ already follows from $A \to A''$, because if $\con K \models (A \to B)$, then
$A' \subseteq B'$, hence $B \subseteq B'' \subseteq A''$, and thus $\set{ A \to A'' } \models (A \to B)$.
Having introduced model-based most-specific concept descriptions, we are able to simulate
this result in terms of description logics as follows.
\begin{Lemma}[Lemma 4.3 of~\cite{Diss-Felix}]
\label{lem:simple-entailment-with-mmsc}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be an interpretation,
and let $C \sqsubseteq D$ be a general concept inclusion that is valid in $\mathcal{I}$.
Then $C \sqsubseteq C^{\mathcal{I}\mathcal{I}}$ is valid in $\mathcal{I}$ as well, and
$C \sqsubseteq D$ follows from $C \sqsubseteq C^{\mathcal{I}\mathcal{I}}$.
\end{Lemma}
The following statement is then a simple corollary.
\begin{Corollary}
\label{cor:Felix-base-B0}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be an interpretation
over $N_C$ and $N_R$. Then
\begin{equation}
\label{eq:20}
\mathcal{B}_0 := \set{ C \sqsubseteq C^{\mathcal{I}\mathcal{I}} \mid C \in
\ELgfpbot(N_C, N_R), C \neq \bot }
\end{equation}
is a base of $\mathcal{I}$.
\end{Corollary}
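In our small illustrative interpretation ($\Delta^{\mathcal{I}} = \set{ x, y }$,
$A^{\mathcal{I}} = \set{ y }$, $r^{\mathcal{I}} = \set{ (x, y) }$), for example, $(\exists
r. \top)^{\mathcal{I}} = \set{ x }$ and thus $(\exists r. \top)^{\mathcal{I}\mathcal{I}}
\equiv \exists r. A$. Hence $\mathcal{B}_0$ contains, up to equivalence, the general
concept inclusion $\exists r. \top \sqsubseteq \exists r. A$, which indeed holds in
$\mathcal{I}$, and from which every valid general concept inclusion with premise $\exists
r. \top$ follows by \Cref{lem:simple-entailment-with-mmsc}.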
Of course, this base is not finite in general; indeed, it is infinite whenever
$N_R \neq \emptyset$.  However, based on this result, Baader and Distel investigate
subsets of $\mathcal{B}_0$ and finally arrive at a finite base.  The first step in this
direction is to show that it suffices to consider only \ELbot concept descriptions, as
described in the next theorem.  The main advantage of this result is that we can now use
induction over the premises of general concept inclusions.
\begin{Theorem}[Theorem~5.7 of~\cite{Diss-Felix}]
\label{thm:Felix-base-B1}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be an interpretation
over $N_C$ and $N_R$. Then
\begin{equation}
\label{eq:21}
\mathcal{B}_1 := \set{ C \sqsubseteq C^{\mathcal{I}\mathcal{I}} \mid C \in \ELbot(N_C,
N_R), C \neq \bot }
\end{equation}
is a base of $\mathcal{I}$.
\end{Theorem}
The proof of this theorem is quite involved, and again makes use of \EL description graphs
and simulations between them. We shall not go into details here, and refer the reader
to~\cite[Section 5.1.1]{Diss-Felix}.
The base $\mathcal{B}_1$ still is not finite in general. To achieve finiteness, we
consider a particular finite set $M_{\mathcal{I}}$ of concept descriptions which turns out
to be enough, in the sense that we only need to consider general concept inclusions $C
\sqsubseteq C^{\mathcal{I}\mathcal{I}}$ where $C = \bigsqcap U$ for some $U \subseteq
M_{\mathcal{I}}$. Since $M_{\mathcal{I}}$ is finite, the resulting set of general concept
inclusions is finite and therefore yields a finite base of $\mathcal{I}$.
\begin{Definition}[$M_{\mathcal{I}}$]
\label{def:M_I}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite
interpretation over $N_C$ and $N_R$. Then
\begin{equation*}
M_{\mathcal{I}} := N_C \cup \set{ \bot } \cup \set{ \exists r. X^{\mathcal{I}} \mid
r \in N_R, X \subseteq \Delta^{\mathcal{I}}, X \neq \emptyset }.
\end{equation*}
\end{Definition}
The definition of $M_{\mathcal{I}}$ may seem opaque at first sight.  However, since
this set will play a major role in our further considerations, we shall give some
intuition as to why it is defined the way it is.
Note that $M_{\mathcal{I}}$ is finite, since $\mathcal{I}$ is finite and thus there are
only finitely many subsets of $\Delta^{\mathcal{I}}$.  Furthermore, notice that
$M_{\mathcal{I}}$ can be computed using the Next-Closure algorithm from
\Cref{thm:next-closure}.  More precisely, we can compute all concept descriptions
$X^{\mathcal{I}}$ by noticing that $X^{\mathcal{I}} \equiv
X^{\mathcal{I}\mathcal{I}\mathcal{I}}$, and we can compute the sets
$X^{\mathcal{I}\mathcal{I}}$ using Next-Closure because the mapping $X \mapsto
X^{\mathcal{I}\mathcal{I}}$ is a closure operator on $\subsets{\Delta^{\mathcal{I}}}$.
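For illustration, the following Python sketch shows the Next-Closure algorithm for an
arbitrary closure operator on a finite, linearly ordered base set; to enumerate the sets
$X^{\mathcal{I}\mathcal{I}}$, one would instantiate the argument \texttt{clo} with the
mapping $X \mapsto X^{\mathcal{I}\mathcal{I}}$ and \texttt{base} with a list of the
elements of $\Delta^{\mathcal{I}}$.  All names are ours and not part
of~\cite{Diss-Felix}.
\begin{verbatim}
def next_closure(A, base, clo):
    """Return the lectically next closed set after A, or None if A is the last.

    `base` is a list fixing a linear order on the underlying set,
    `clo` maps a subset of `base` to its closure.
    """
    for i in range(len(base) - 1, -1, -1):
        m = base[i]
        if m in A:
            A = A - {m}
        else:
            B = clo(A | {m})
            # lectic condition: B may not contain an element smaller
            # than m that is missing from A
            if not any(x in B and x not in A for x in base[:i]):
                return B
    return None

def all_closed_sets(base, clo):
    """Enumerate all closed sets of `clo` in lectic order."""
    A = clo(set())
    while A is not None:
        yield A
        A = next_closure(A, base, clo)
\end{verbatim}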
Before we show how the set $M_{\mathcal{I}}$ helps in finding finite bases, we note an
important property of it.
\begin{Lemma}[Lemma~5.9 of~\cite{Diss-Felix}]
\label{lem:mmsc-are-expressible-in-terms-of-M_I}
Let $\mathcal{I}$ be a finite interpretation and let $C$ be a model-based most-specific
concept description of $\mathcal{I}$. Then $C$ is expressible in terms of
$M_{\mathcal{I}}$.
\end{Lemma}
Let us define
\begin{equation}
\label{eq:26}
\mathcal{B}_2 := \set{ \bigsqcap U \sqsubseteq (\bigsqcap U)^{\mathcal{I}\mathcal{I}}
\mid U \subseteq M_{\mathcal{I}} }.
\end{equation}
Then clearly $\mathcal{B}_2 \models (C \sqsubseteq C^{\mathcal{I}\mathcal{I}})$ for $C \in
N_C$ or $C = \bot$. For $C = D \sqcap E$, and assuming by induction that $\mathcal{B}_2
\models (D \sqsubseteq D^{\mathcal{I}\mathcal{I}})$ and $\mathcal{B}_2 \models (E
\sqsubseteq E^{\mathcal{I}\mathcal{I}})$, we can find that
\begin{equation*}
\mathcal{B}_2 \models (D \sqcap E \sqsubseteq D^{\mathcal{I}\mathcal{I}} \sqcap E^{\mathcal{I}\mathcal{I}}).
\end{equation*}
But then $D^{\mathcal{I}\mathcal{I}} \sqcap E^{\mathcal{I}\mathcal{I}}$ is expressible in
terms of $M_{\mathcal{I}}$ (as a conjunction of model-based most-specific concept
descriptions, using \Cref{lem:mmsc-are-expressible-in-terms-of-M_I}), so
\begin{equation*}
\mathcal{B}_2 \models ((D^{\mathcal{I}\mathcal{I}} \sqcap E^{\mathcal{I}\mathcal{I}})
\sqsubseteq (D^{\mathcal{I}\mathcal{I}} \sqcap E^{\mathcal{I}\mathcal{I}})^{\mathcal{I}\mathcal{I}}).
\end{equation*}
Using \Cref{prop:double-II-under-I} we obtain $(D^{\mathcal{I}\mathcal{I}} \sqcap
E^{\mathcal{I}\mathcal{I}})^{\mathcal{I}\mathcal{I}} \equiv (D \sqcap
E)^{\mathcal{I}\mathcal{I}}$, so all in all
\begin{equation*}
\mathcal{B}_2 \models (D \sqcap E \sqsubseteq (D \sqcap E)^{\mathcal{I}\mathcal{I}}).
\end{equation*}
Notice that the main arguments here are \Cref{prop:double-II-under-I} and that all
model-based most-specific concept descriptions are expressible in terms of
$M_{\mathcal{I}}$.
If $C = \exists r. D$, and assuming by induction that $\mathcal{B}_2 \models (D \sqsubseteq
D^{\mathcal{I}\mathcal{I}})$, we first obtain
\begin{equation}
  \label{eq:22}
  \mathcal{B}_2 \models ( \exists r. D \sqsubseteq \exists r. D^{\mathcal{I}\mathcal{I}} ).
\end{equation}
But then $(\exists r. D^{\mathcal{I}\mathcal{I}}) \in M_{\mathcal{I}}$ up to equivalence,
so
\begin{equation*}
  \mathcal{B}_2 \models ( \exists r. D^{\mathcal{I}\mathcal{I}} \sqsubseteq (\exists
  r. D^{\mathcal{I}\mathcal{I}})^{\mathcal{I}\mathcal{I}}).
\end{equation*}
Using \Cref{prop:double-II-under-I} again we obtain $(\exists
r. D^{\mathcal{I}\mathcal{I}})^{\mathcal{I}\mathcal{I}} \equiv (\exists
r. D)^{\mathcal{I}\mathcal{I}}$, so
\begin{equation*}
  \mathcal{B}_2 \models ( \exists r. D \sqsubseteq (\exists r. D)^{\mathcal{I}\mathcal{I}} ),
\end{equation*}
\ie $\mathcal{B}_2 \models ( C \sqsubseteq C^{\mathcal{I}\mathcal{I}} )$.
Notice that the crucial property in this argument is that $M_{\mathcal{I}}$ contains
concept descriptions of the form $\exists r. D^{\mathcal{I}\mathcal{I}}$, and that
\Cref{prop:double-II-under-I} has been used again.
The preceding argument then shows the following claim.
\begin{Theorem}[Theorem~5.10 of~\cite{Diss-Felix}]
\label{thm:Felix-base-B2}
Let $\mathcal{I}$ be a finite interpretation. Then $\mathcal{B}_2$ as defined in
\Cref{eq:26} is a finite base of $\mathcal{I}$.
\end{Theorem}
A practical disadvantage of the finite base $\mathcal{B}_2$ is its size, which may be
exponential in $\abs{ M_{\mathcal{I}} }$, which itself may be exponential in the size of
$\Delta^{\mathcal{I}}$. To remedy this, we use methods from formal concept analysis to
extract bases from formal contexts. In particular, recall that the canonical base of a
formal context is minimal in size among all bases of a formal context, and that it can be
computed effectively. Having this in mind, we further observe that if we consider the
induced formal context $\con K_{\mathcal{I}} := \con K_{\mathcal{I}, M_{\mathcal{I}}}$,
then the set $\mathcal{L} := \set{ A \to A'' \mid A \subseteq M_{\mathcal{I}} }$ is a base
of $\con K_{\mathcal{I}}$, and that
\begin{equation*}
\mathcal{B}_2 = \bigsqcap \mathcal{L} := \set{ \bigsqcap A \sqsubseteq \bigsqcap A''
\mid (A \to A'') \in \mathcal{L} }.
\end{equation*}
Recall that $\bigsqcap A'' \equiv (\bigsqcap A)^{\mathcal{I}\mathcal{I}}$ by
\Cref{cor:mmsc-lattice}.
We can generalize this observation as follows: if $\mathcal{L} \subseteq \Th(\con
K_{\mathcal{I}})$ is a base of $\con K_{\mathcal{I}}$ which only contains implications of
the form $U \to U''$, then the set $\bigsqcap \mathcal{L}$ defined as
\begin{equation*}
\bigsqcap \mathcal{L} := \set{ \bigsqcap U \sqsubseteq (\bigsqcap
U)^{\mathcal{I}\mathcal{I}} \mid (U \to U'') \in \mathcal{L} }
\end{equation*}
is a base of $\mathcal{I}$.  Note that then $\bigsqcap \mathcal{L}$ is always a
subset of $\mathcal{B}_2$, but $\bigsqcap \mathcal{L}$ may be much smaller than
$\mathcal{B}_2$, for example if $\mathcal{L}$ is irredundant or even minimal.
However, there is a redundancy in $\mathcal{L}$ which cannot be removed this way: if $C, D
\in M_{\mathcal{I}}$ such that $C$ is subsumed by $D$, then the implication $\set{ C } \to
\set{ D }$ will always be true in $\con K_{\mathcal{I}}$. But this means that this
implication has to be contained implicitly or explicitly in any base of $\con
K_{\mathcal{I}}$. On the other hand, the resulting GCI $C \sqsubseteq D$ is trivial, and
thus dispensable.
We can alleviate this situation by making use of bases with background knowledge. The
background knowledge we are interested in would be
\begin{equation}
\label{eq:28}
\mathcal{S}_{\mathcal{I}} := \set{ \set{ C } \to \set{ D } \mid C, D \in
M_{\mathcal{I}}, C \sqsubseteq D }.
\end{equation}
A base of $\con K_{\mathcal{I}}$ with background knowledge $\mathcal{S}_{\mathcal{I}}$ no
longer has to contain the information about the implications in
$\mathcal{S}_{\mathcal{I}}$, and may thus be smaller than a base without this
background knowledge.
\begin{Theorem}[Theorem~5.12 of~\cite{Diss-Felix}]
\label{thm:Felix-base-B3}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite
interpretation, and let $\mathcal{L}$ be a base of $\con K_{\mathcal{I}}$ with
background knowledge $\mathcal{S}_{\mathcal{I}}$. Assume that $\mathcal{L}$ only
contains implications of the form $U \to U''$ for some $U \subseteq M_{\mathcal{I}}$.
Then $\bigsqcap \mathcal{L}$ is a finite base of $\mathcal{I}$.
\end{Theorem}
We can extend this connection between bases of $\con K_{\mathcal{I}}$ and bases of
$\mathcal{I}$ even further: if $\mathcal{L}$ is the canonical base of $\con K_{\mathcal{I}}$
with background knowledge $\mathcal{S}_{\mathcal{I}}$, then $\bigsqcap \mathcal{L}$ is a
minimal base of $\mathcal{I}$.
\begin{Theorem}[Theorem~5.18 of~\cite{Diss-Felix}]
\label{thm:Felix-5.18}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite
interpretation, and define
\begin{equation*}
\mathcal{B} := \bigsqcap \set{ A \to A'' \mid (A \to A'') \in \Can(\con
K_{\mathcal{I}}, \mathcal{S}_{\mathcal{I}}) }.
\end{equation*}
Then $\mathcal{B}$ is a minimal base of $\mathcal{I}$.
\end{Theorem}
So far, all bases we have obtained were \ELgfpbot-bases, \ie the GCIs contained in these
bases were allowed to contain proper \ELgfpbot concept descriptions.  From a logical
point of view this is not a problem.  However, \ELgfpbot concept descriptions are
inherently harder to read, since they allow for \enquote{local recursion} within concept
descriptions.  This may be undesirable, as those concept descriptions may have to be
inspected by domain experts for their validity, and those experts are not necessarily
experts in logic as well.
On the other hand, \ELbot concept descriptions are much easier to read, and thus obtaining
\ELbot bases instead of \ELgfpbot bases may be much more desirable.  To this end, Baader and
Distel discuss a way to obtain such \ELbot bases from arbitrary \ELgfpbot bases by
\emph{unravelling}.
The crucial observation towards obtaining \ELbot bases from \ELgfpbot bases is that given
a finite interpretation $\mathcal{I}$ and a concept description $C$ it is true for $d \in
\NN_0$ \enquote{large enough} that $C^{\mathcal{I}} = C_d^{\mathcal{I}}$. Recall that $C_d$
denotes the unravelling of $C$ up to depth $d$.
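For illustration, the following Python sketch computes such an unravelling for \ELgfp
concept descriptions given in a simplified representation: a TBox is a dictionary
mapping each defined concept name to a pair consisting of a set of primitive concept
names and a list of role successors.  This representation and all names are ours, and
the sketch glosses over the formal details of \ELgfp syntax.
\begin{verbatim}
def unravel(name, depth, tbox):
    """Approximate the ELgfp concept (name, tbox) by an EL concept
    of role depth at most `depth`.

    tbox maps each defined concept name to a pair
      (set of primitive concept names, list of (role, defined name)).
    The result is a pair (primitives, list of (role, sub-concept)).
    """
    primitives, successors = tbox[name]
    if depth == 0:
        # drop all existential restrictions
        return (set(primitives), [])
    return (set(primitives),
            [(role, unravel(succ, depth - 1, tbox))
             for role, succ in successors])

# Example: the cyclic definition A = P and (exists r. A), unravelled to
# depth 2, yields the EL concept P and (exists r. (P and (exists r. P))).
tbox = {"A": ({"P"}, [("r", "A")])}
print(unravel("A", 2, tbox))
\end{verbatim}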
\begin{Lemma}[Lemma~5.5 of~\cite{Diss-Felix}]
\label{lem:Felix-lemma-5.5}
Let $\mathcal{I} = (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ be a finite
interpretation, and let $C = (A, \mathcal{T})$ be an \ELgfp concept description.
Then for $d = \abs{ N_D(\mathcal{T}) } \cdot \abs{ \Delta^{\mathcal{I}} } + 1$ it is
true that $C^{\mathcal{I}} = C_d^{\mathcal{I}}$.
\end{Lemma}
Secondly, unravelling up to depth $d$ respects the structure of \ELbot concept
descriptions, as formulated in the following lemma.
\begin{Lemma}[Lemma~5.19 of~\cite{Diss-Felix}]
\label{lem:unravelling-is-homomorphism}
Let $C, D$ be two \ELgfp concept descriptions. Then
\begin{enumerate}[i. ]
\item $(\exists r. C)_d \equiv \exists r. C_{d-1}$,
\item $(C \sqcap D)_d \equiv C_d \sqcap D_d$.
\end{enumerate}
\end{Lemma}
To unravel an \ELgfpbot base $\mathcal{B}$ of $\mathcal{I}$, the idea is simply to
unravel every GCI $(C \sqsubseteq D) \in \mathcal{B}$ \enquote{deep enough}, \ie to
replace each such GCI by $C_d \sqsubseteq D_d$, where $d$ is chosen as in
\Cref{lem:Felix-lemma-5.5}.  This, however, may not be enough, as we may no longer be
able to entail GCIs of the form $(X^{\mathcal{I}})_d \sqsubseteq X^{\mathcal{I}}$ for
$X \subseteq \Delta^{\mathcal{I}}$ from the base thus obtained.  To remedy this, some
extra GCIs need to be added.
\begin{Theorem}[Theorem~5.21 of~\cite{Diss-Felix}]
\label{thm:unravelling-ELgfpbot-bases}
Let $\mathcal{I}$ be a finite interpretation and let $\mathcal{B}$ be a finite \ELgfpbot
base of $\mathcal{I}$. Then
\begin{equation*}
\mathcal{B}_{\mathsf{u}} := \set{ C_d \sqsubseteq (C^{\mathcal{I}\mathcal{I}})_d \mid (C
\sqsubseteq D) \in \mathcal{B} } \cup \set{ (X^{\mathcal{I}})_d \sqsubseteq
(X^{\mathcal{I}})_{d+1} \mid X \subseteq \Delta^{\mathcal{I}}, X \neq \emptyset }
\end{equation*}
is a finite \ELbot base of $\mathcal{I}$, where $d \in \NN_0$ is defined as in
\Cref{lem:Felix-lemma-5.5}.
\end{Theorem}
We shall only give some intuition why this theorem is correct, as we shall discuss its
proof when we generalize it to bases of confident GCIs in \Cref{sec:unrav-elgfpb-bases}.
An important observation is that the set
\begin{equation*}
\mathcal{X} := \set{ (X^{\mathcal{I}})_d \sqsubseteq (X^{\mathcal{I}})_{d+1} \mid X
\subseteq \Delta^{\mathcal{I}}, X \neq \emptyset }
\end{equation*}
satisfies for all $X \subseteq \Delta^{\mathcal{I}}$
\begin{enumerate}[i. ]
\item $\mathcal{X} \models ( (X^{\mathcal{I}})_k \sqsubseteq (X^{\mathcal{I}})_{k+1} )$
for all $k \in \NN_0, k \geq d$, and
\item $\mathcal{X} \models ( (X^{\mathcal{I}})_d \sqsubseteq X^{\mathcal{I}} )$.
\end{enumerate}
The first property can be shown by induction over $k$, and for the second property we
observe that if $\mathcal{J}$ is a finite interpretation such that $\mathcal{J} \models
\mathcal{X}$, then by the first property
\begin{equation*}
((X^{\mathcal{I}})_d)^{\mathcal{J}} \subseteq ((X^{\mathcal{I}})_{d+1})^{\mathcal{J}}
\subseteq ((X^{\mathcal{I}})_{d+2})^{\mathcal{J}} \subseteq \dots
\end{equation*}
Since $\mathcal{J}$ is finite, for $k$ large enough it is true that
$((X^{\mathcal{I}})_k)^{\mathcal{J}} = ((X^{\mathcal{I}})_{k+1})^{\mathcal{J}}$ and thus
\begin{equation*}
((X^{\mathcal{I}})_k)^{\mathcal{J}} = (X^{\mathcal{I}})^{\mathcal{J}}.
\end{equation*}
Thus, $\mathcal{J} \models ( (X^{\mathcal{I}})_d \sqsubseteq X^{\mathcal{I}} )$ and
therefore $\mathcal{X} \models ( (X^{\mathcal{I}})_d \sqsubseteq X^{\mathcal{I}} )$,
because $\ELbot$ has the finite model property.
But then if $(C \sqsubseteq D) \in \mathcal{B}$, then $\mathcal{B}_{\mathsf{u}} \models (
(C^{\mathcal{I}\mathcal{I}})_d \sqsubseteq C^{\mathcal{I}\mathcal{I}})$ by the argument
just shown, and $\mathcal{B}_{\mathsf{u}} \models ( C_d \sqsubseteq
(C^{\mathcal{I}\mathcal{I}})_d )$, because this GCI is contained in
$\mathcal{B}_{\mathsf{u}}$. Thus
\begin{equation*}
\mathcal{B}_{\mathsf{u}} \models ( C \sqsubseteq C_d \sqsubseteq (C^{\mathcal{I}\mathcal{I}})_d
\sqsubseteq C^{\mathcal{I}\mathcal{I}}),
\end{equation*}
and \Cref{lem:simple-entailment-with-mmsc} yields $\mathcal{B}_{\mathsf{u}} \models (C
\sqsubseteq D)$.  Thus, $\mathcal{B}_{\mathsf{u}}$ entails all GCIs in $\mathcal{B}$, and
since $\mathcal{B}$ is complete for $\mathcal{I}$, $\mathcal{B}_{\mathsf{u}}$ is complete
for $\mathcal{I}$ as well.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../main"
%%% End:
% LocalWords: Prediger gener conc
%%!TEX encoding = UTF-8 Unicode
% According to UA rules, font size should range from 10 to 12pt.
\documentclass[11pt,a4paper,openright,final,twoside,onecolumn]{memoir}
\listfiles
\fixpdflayout
\usepackage[utf8]{inputenc}
% Computer Modern Typewritter (For bold ttfamily in listings)
\usepackage{lmodern}
% OR... Bera Mono
%\usepackage[scaled]{beramono} % TTT Font
%\usepackage{anyfontsize} % As the name says...
\usepackage[T1]{fontenc}
%For PDF merging
\usepackage{pdfpages}
%SET DPI to 300
\pdfpxdimen=\dimexpr 1in/300\relax
\usepackage{morewrites} % Allow more simultaneous output streams for packages that write auxiliary files
%For English and Portuguese languages
%Portuguese will be the default.
%Use \setdefaultlanguage to change it
\usepackage{csquotes}
\usepackage[english,portuguese]{babel}
% For custom date format
\usepackage{datetime}
\newdateformat{thesisdate}{\monthname[\THEMONTH] \THEYEAR} % Month Year
\usepackage{microtype} % Make pdf look better
% Uncomment to enable floats on facing pages
%\usepackage{dpfloat}
%Side by side figures
% Eg. Fig 1a, Fig 1b
\usepackage[hang,small,bf]{caption}
%\let\tion\undefined
%\let\subfloat\undefined
\usepackage{subcaption}
%\RequirePackage{textcase}
% Dropped Caps
%\usepackage{lettrine}
% Configure Hyperlink color
%\usepackage[breaklinks=true,colorlinks=false,linkcolor=blue]{hyperref}
% Or use the default
\usepackage{hyperref}
%Optional: Redefine section names
%\def\sectionautorefname{Section}
%\def\chapterautorefname{Chapter}
%\def\figureautorefname{Figure}
%\def\listingautorefname{Listing}
%\def\tableautorefname{Table}
%For PDF Comments
\usepackage{comment}
\usepackage{pdfcomment}
\usepackage{bookmark} % New Bookmarks
%For Multiple columns in Glossary
\usepackage{multicol}
%Math symbols
\usepackage{amsmath}
\usepackage{amssymb}
%Graphics
\usepackage{graphicx}
%Colors
\usepackage{xcolor}
%Euro symbol
\usepackage{eurosym}
% Code boxes
\usepackage[outputdir=build]{minted}
\renewcommand\listingscaption{Código}
\fvset{fontsize=\footnotesize} % Make Code blocks smaller than text
%Biber using IEEE style for proper UTF-8 support
\usepackage[backend=biber,style=ieee, sorting=none]{biblatex}
\bibliography{bib/references.bib, bib/rfc.bib}
%Use acronyms
\usepackage[printonlyused]{acronym} % For acronyms
% Enable chart support through pgf and tikz
\usepackage[version=0.96]{pgf}
\usepackage{tikz}
\usepackage{pgf-umlsd}
\usetikzlibrary{arrows,shadows,trees,shapes,snakes,automata,backgrounds,petri,mindmap} % for pgf-umlsd
%For Electric Circuits
\usepackage[detect-weight=true, binary-units=true]{siunitx}
\sisetup{load-configurations = binary}
\usepackage[american,cuteinductors,smartlabels]{circuitikz}
\usetikzlibrary{calc}
\ctikzset{bipoles/thickness=1}
\ctikzset{bipoles/length=0.8cm}
\ctikzset{bipoles/diode/height=.375}
\ctikzset{bipoles/diode/width=.3}
\ctikzset{tripoles/thyristor/height=.8}
\ctikzset{tripoles/thyristor/width=1}
\ctikzset{bipoles/vsourceam/height/.initial=.7}
\ctikzset{bipoles/vsourceam/width/.initial=.7}
\tikzstyle{every node}=[font=\small]
\tikzstyle{every path}=[line width=0.8pt,line cap=round,line join=round]
% For inline TT text (e.g. code snippets)
\usepackage{verbatim}
%Frames around figures and allow force placement
\usepackage{float}
%Configure Float style
%\floatstyle{boxed}
%\restylefloat{table}
%\restylefloat{figure}
%\restylefloat{lstlisting}
%For test purposes
\usepackage{lipsum}
%Keep floats inside section!
\usepackage[section]{placeins}
\let \oldsubsubsection \subsubsection
\renewcommand{\subsubsection}[2][]{
\FloatBarrier
\oldsubsubsection#1{#2}
}
\let \oldsubsection \subsection
\renewcommand{\subsection}[2][]{
\FloatBarrier
\oldsubsection#1{#2}
}
\let \oldsection \section
\renewcommand{\section}[2][]{
\FloatBarrier
\oldsection#1{#2}
}
\let \oldchapter \chapter
\renewcommand{\chapter}[2][]{
\FloatBarrier
\oldchapter#1{#2}
}
%%%% Use the built-in division styling
\headstyles{memman}
%%% ToC down to subsections
\settocdepth{subsection}
%%% Numbering down to subsections as well
\setsecnumdepth{subsection}
%%%% extra index for first lines
\makeindex[lines]
%Margins for University of Aveiro Thesis
\setlrmarginsandblock{3cm}{2.5cm}{*}
\setulmarginsandblock{3cm}{3cm}{*}
\checkandfixthelayout
%Or custom spacing
%\addtolength{\parskip}{0.5\baselineskip}
\linespread{1.5}
\begin{document}
\includepdf[pages=-]{cover.pdf}
%
%Front matter
%Custom Chapter style named thesis
\makechapterstyle{thesis}{% Based on ell
\chapterstyle{default}
\renewcommand*{\chapnumfont}{\normalfont\sffamily}
\renewcommand*{\chaptitlefont}{\normalfont\Huge\sffamily}
\settowidth{\chapindent}{\chapnumfont 111}
\renewcommand*{\chapterheadstart}{\begingroup
\vspace*{\beforechapskip}%
\begin{adjustwidth}{}{-\chapindent}%
\hrulefill
\smash{\rule{0.4pt}{15mm}}
\end{adjustwidth}\endgroup}
\renewcommand*{\printchaptername}{}
\renewcommand*{\chapternamenum}{}
\renewcommand*{\printchapternum}{%
\begin{adjustwidth}{}{-\chapindent}
\hfill
\raisebox{10mm}[0pt][0pt]{\fontsize{30}{25}\selectfont\chapnumfont \thechapter}%
\hspace*{1em}
\end{adjustwidth}\vspace*{-3.0\onelineskip}}
\renewcommand*{\printchaptertitle}[1]{%
\vskip\onelineskip
\raggedleft {\chaptitlefont ##1}\par\nobreak\vskip 4\onelineskip}}
%Select chapter style from existing or select custom
%\chapterstyle{thesis} % Others: dowding, demo2, dash, chappell, brotherton, bianchi, ger, madsen, tatcher, veelo,indexes)
% thesis can also be used as defined previously
%
%If you feel adventurous you can also define all aspects of your theme
%Use either this input or the chapterstyle before
%\input{custom-theme.tex}
\chapterstyle{veelo}
%Exclude sub figures from List of Figures
%\captionsetup[subfloat]{list=no}
% Texts
\newenvironment{introduction}
{%
\begin{minipage}{\textwidth}%
\itshape%
}
{%
\end{minipage}%
\par\addvspace{2\baselineskip plus 0.2\baselineskip minus 0.2\baselineskip}%
}
%Select Page style
\pagestyle{plain}
\frontmatter
\tightlists
\midsloppy
\raggedbottom
\setcounter{tocdepth}{2} %subsections are added to the TOC
\setcounter{secnumdepth}{4} %subsubsections are numbered
%%Optional! Remove in final version.
{\small\listofpdfcomments[liststyle=SubjectAuthor]}
\cleardoublepage
%Table of contents
{\small\tableofcontents}
\cleardoublepage
%List of figures
{\small\listoffigures}
%List of tables
\cleardoublepage
{\small\listoftables}
%Print Glossary
{\small\include{glossary}}
%
%Main document starts here
%
\mainmatter
% Start of Thesis text ----------------------------------------------------------
%Line spacing: 1.5 pt
\OnehalfSpacing
\include{chapters/chapter1}
%\include{chapter2}
%\include{chapter3}
%\include{chapter4}
% End of Thesis text ---------------------------------------------------------
% Including files is advised:
%Appendix
\backmatter
%Print all used references
\begingroup
\renewcommand{\bibfont}{\footnotesize}
%Redefine References name
\defbibheading{bibliography}[Referências]{
\chapter{#1}
}
\SingleSpacing
\setlength\bibitemsep{8pt}
\printbibliography[heading=bibliography]
\endgroup
%Load appendix
%\include{appendix-a}
\end{document}
% Notes:
% - Compare to Gaia RV error / RUWE
% - Cross-match to Kepler, K2, TESS 2 min cadence, show some examples
% - Short-period things: allude to Daunt et al. (in prep)?
% -
% Relevant papers:
% - Asteroseismic modes / RV: https://ui.adsabs.harvard.edu/abs/2020MNRAS.493.1388Y/abstract
% - https://ui.adsabs.harvard.edu/abs/2018MNRAS.480L..48Y/abstract
% \begin{figure}[!t]
% \begin{center}
% % \includegraphics[width=0.9\textwidth]{visitstats.pdf}
% {\color{red} Figure placeholder}
% \end{center}
% \caption{%
% TODO
% \label{fig:chiplots}
% }
% \end{figure}
\PassOptionsToPackage{usenames,dvipsnames}{xcolor}
\documentclass[modern]{aastex63}
% \documentclass[twocolumn]{aastex63}
% Load common packages
\usepackage{microtype} % ALWAYS!
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{booktabs}
\usepackage{graphicx}
% \usepackage{color}
\usepackage{enumitem}
\setlist[description]{style=unboxed}
% Hogg's issues
\renewcommand{\twocolumngrid}{\onecolumngrid} % guess what this does HAHAHA!
\setlength{\parindent}{1.1\baselineskip}
\addtolength{\topmargin}{-0.2in}
\addtolength{\textheight}{0.4in}
\sloppy\sloppypar\raggedbottom\frenchspacing
% For referee:
\newcommand{\changes}[1]{{\color{violet}#1}}
% Numbers:
\newcommand{\nsources}{\ensuremath{232\,495}}
% \newcommand{\Kmin}{M_{\rm min}}
% \newcommand{\Kminval}{512}
% \newcommand{\nbinary}{\ensuremath{19\,635}}
% \newcommand{\goldsample}{\textit{Gold Sample}}
% \newcommand{\ngold}{\ensuremath{1\,032}}
% \newcommand{\nbimodal}{\ensuremath{127}}
% Other
\newcommand{\visit}{visit}
\newcommand{\thisdr}{\dr{17}}
\graphicspath{{figures/}}
\input{preamble.tex}
\shorttitle{Close binaries in APOGEE DR17}
\shortauthors{Price-Whelan et al.}
\begin{document}
\title{Close Binary Companions in the APOGEE Survey Data Release 17: \\
TODO}
\author[0000-0003-0872-7098]{Adrian~M.~Price-Whelan}
\affiliation{Center for Computational Astrophysics, Flatiron Institute,
Simons Foundation, 162 Fifth Avenue, New York, NY 10010, USA}
\email{[email protected]}
\correspondingauthor{Adrian M. Price-Whelan}
% \author[0000-0003-2866-9403]{David~W.~Hogg}
% \affiliation{Center for Computational Astrophysics, Flatiron Institute,
% Simons Foundation, 162 Fifth Avenue, New York, NY 10010, USA}
% \affiliation{Center for Cosmology and Particle Physics,
% Department of Physics,
% New York University, 726 Broadway,
% New York, NY 10003, USA}
% \affiliation{Max-Planck-Institut f\"ur Astronomie,
% K\"onigstuhl 17, D-69117 Heidelberg, Germany}
% \author[0000-0003-4996-9069]{Hans-Walter~Rix}
% \affiliation{Max-Planck-Institut f\"ur Astronomie,
% K\"onigstuhl 17, D-69117 Heidelberg, Germany}
% % APOGEE:
% \author[0000-0002-1691-8217]{Rachael~L.~Beaton}
% \altaffiliation{Hubble Fellow}
% \altaffiliation{Carnegie-Princeton Fellow}
% \affiliation{Department of Astrophysical Sciences, Princeton University,
% 4 Ivy Lane, Princeton, NJ~08544}
% \affiliation{The Observatories of the Carnegie Institution for Science,
% 813 Santa Barbara St., Pasadena, CA~91101}
% \author[0000-0002-7871-085X]{Hannah~M.~Lewis}
% \affiliation{Department of Astronomy, University of Virginia,
% Charlottesville, VA 22904-4325, USA}
% \author[0000-0002-1793-3689]{David~L.~Nidever}
% \affiliation{Department of Physics, Montana State University,
% P.O. Box 173840, Bozeman, MT 59717-3840}
% \affiliation{NSF’s National Optical-Infrared Astronomy Research Laboratory,
% 950 North Cherry Ave, Tucson, AZ 85719}
% % APOGEE alphabetical:
% \author{Andr\'es~Almeida}
% \affiliation{Instituto de Investigaci\'on Multidisciplinario en Ciencia y
% Tecnolog\'ia, Universidad de La Serena, Benavente 980,
% La Serena, Chile}
% \author{Carles~Badenes}
% \affiliation{Department of Physics and Astronomy,
% and Pittsburgh Particle Physics, Astrophysics and Cosmology Center
% (PITT PACC), University of Pittsburgh, 3941 O’Hara Street,
% Pittsburgh, PA 15260, USA}
% \author[0000-0003-1086-1579]{Rodolfo~Barba}
% \affiliation{Departamento de Astronom\'ia, Facultad de Ciencias,
% Universidad de La Serena, Cisternas 1200, La Serena, Chile}
% \author{Timothy~C.~Beers}
% \affiliation{Department of Physics and JINA Center for the Evolution of the
% Elements, University of Notre Dame, Notre Dame, IN 46556, USA}
% \author{Joleen~K.~Carlberg}
% \affiliation{Space Telescope Science Institute, 3700 San Martin Dr,
% Baltimore MD 21218}
% \author{Nathan~De~Lee}
% \affiliation{Department of Physics, Geology, and Engineering Technology,
% Northern Kentucky University, Highland Heights, KY 41099}
% \affiliation{Department of Physics and Astronomy, Vanderbilt University,
% VU Station 1807, Nashville, TN 37235, USA}
% \author{Jos\'e~G.~Fern\'andez-Trincado}
% \affiliation{Instituto de Astronom\'ia y Ciencias Planetarias,
% Universidad de Atacama, Copayapu 485, Copiap\'o, Chile}
% \author[0000-0002-0740-8346]{Peter~M.~Frinchaboy}
% \affiliation{Department of Physics \& Astronomy, Texas Christian University,
% Fort Worth, TX, 76129, USA}
% % \author{Domingo~An\'ibal Garc\'ia-Hern\'andez}
% \author{D.~A.~Garc\'ia-Hern\'andez}
% \affiliation{Instituto de Astrof\'isica de Canarias (IAC), E-38205 La Laguna,
% Tenerife, Spain}
% \affiliation{Universidad de La Laguna (ULL), Departamento de Astrof\'isica,
% E-38206 La Laguna, Tenerife, Spain}
% \author[0000-0002-8179-9445]{Paul~J.~Green}
% \affil{Center for Astrophysics | Harvard \& Smithsonian, 60 Garden Street,
% Cambridge, MA 02138, USA}
% \author{Sten~Hasselquist}
% \altaffiliation{NSF Astronomy and Astrophysics Postdoctoral Fellow}
% \affiliation{Department of Physics and Astronomy, University of Utah,
% 115 S. 1400 E., Salt Lake City, UT 84112, USA}
% \author{Pen\'elope~Longa-Pe{\~n}a}
% \affiliation{Centro de Astronom{\'i}a (CITEVA), Universidad de Antofagasta,
% Avenida Angamos 601, Antofagasta 1270300, Chile}
% \author{Steven~R.~Majewski}
% \affiliation{Department of Astronomy, University of Virginia,
% Charlottesville, VA 22904-4325, USA}
% \author{Christian~Nitschelm}
% \affiliation{Centro de Astronom{\'i}a (CITEVA), Universidad de Antofagasta,
% Avenida Angamos 601, Antofagasta 1270300, Chile}
% \author{Jennifer~Sobeck}
% \affiliation{Department of Astronomy, University of Washington, Box 351580,
% Seattle, WA 98195, USA}
% \author[0000-0002-3481-9052]{Keivan~G.~Stassun}
% \affiliation{Department of Physics and Astronomy, Vanderbilt University,
% VU Station 1807, Nashville, TN 37235, USA}
% \author[0000-0003-1479-3059]{Guy~S.~Stringfellow}
% \affiliation{Center for Astrophysics and Space Astronomy,
% Department of Astrophysical and Planetary Sciences,
% University of Colorado, 389 UCB,Boulder, CO 80309-0389, USA}
% \author{Nicholas~W.~Troup}
% \affiliation{Department of Physics, Salisbury University, Salisbury, MD 21801}
\begin{abstract}\noindent
TODO
\end{abstract}
% \keywords{}
\section*{~}\clearpage
\section{Introduction} \label{sec:intro}
Stuff.
\section{Data} \label{sec:data}
% We use spectroscopic data from data release 16 (\dr{16}) of the \apogee\ surveys
% (\citealt{Majewski:2017, DR16}; J\"onsson et al., in prep.).
% \apogee\ is a component of the Sloan Digital Sky Survey IV (\sdssiv;
% \citealt{Gunn:2006, Blanton:2017}); its main goal is to survey the chemical
% and dynamical properties of stars across much of the Milky Way disk by obtaining
% high-resolution ($R \sim 22,500$; \citealt{Wilson:2019}), infrared ($H$-band)
% spectroscopy of hundreds of thousands of stars.
% The primary survey targets are selected with simple color and magnitude cuts
% \citep{Zasowski:2013, Zasowski:2017}, but the survey uses fiber-plugged plates
% that cover only a small fraction of the available area, which leads to extremely
% nonuniform coverage of the Galactic stellar distribution (see, e.g., Figure~1 in
% \citealt{DR16}).
% \dr{16} is the first \sdss\ data release to contain \apogee\ data observed with
% a duplicate of the \apogee\ spectrograph on the 2.5m Ir\'en\'ee du Pont
% telescope \citep{Bowen:1973} at Las Campanas Observatory, providing access to
% targets in the Southern Hemisphere.
% For the first time, this data release also contains calibrated stellar
% parameters for dwarf stars (J\"onsson et al., in prep.).
% These two facts mean that \dr{16} contains nearly three times more sources with
% calibrated stellar parameters than the previous public data release, \dr{14}
% (\citealt{Abolfathi:2017, Holtzman:2018}; see Section~4 of \citealt{DR16} for
% many more details about \apogee\ \dr{16}).
% Most \apogee\ stars are observed multiple times in separate ``visits'' that are
% combined before the \apogee\ data reduction pipeline \citep{Nidever:2015,
% Zamora:2015, ASPCAP} determines stellar parameters and chemical abundances for
% each source.
% While the visit spectra naturally provide time-domain velocity information about
% sources (thus enabling searches for massive companions), studying stellar
% multiplicity is not the primary goal of the survey:
% The cadence and time baseline for a typical \apogee\ source is primarily
% governed by trying to schedule a set number of visits determined by
% signal-to-noise thresholds for the faintest targets in a given field.
% A small number of fields (five) were designed specifically for companion studies
% and have $>10$ visits spaced to enable binary-system characterization.
% While some past studies have made use of other fields with large numbers of
% visits to study binary-star systems \citep{Troup:2016, Fernandez-Trincado:2019},
% a consequence of this strategy is that the time resolution and number of visits
% for the vast majority of \apogee\ sources in \dr{16} is not sufficient for fully
% determining companion orbital properties, as illustrated below.
% Still, the large number of targets in \apogee\ and the dynamic range in stellar
% and chemical properties offers an exciting opportunity to study the
% \emph{population} of binary-star systems as a function of these intrinsic
% properties, even if most individual systems are poorly constrained.
% We have previously developed tools to enable such studies \citep{thejoker}, as
% summarized in \sectionname~\ref{sec:methods} below.
% Here, we describe quality cuts we apply to the \apogee\ \dr{16} catalogs before
% proceeding, and modifications to the visit-level velocity uncertainties to
% account for the fact that they are generally underestimated by the \apogee\ data
% reduction pipeline.
\subsection{Quality Cuts and Selecting the Parent Sample}
The primary goal of this \documentname\ is to produce a catalog of posterior
samplings in Keplerian orbital parameters for \emph{all} high-quality \apogee\
sources in \dr{16} with multiple, well-measured radial velocities.
We therefore impose a set of quality cuts to sub-select \apogee\ \dr{16} sources
by rejecting sources or visits using the following \apogee\
bitmasks (\citealt{Holtzman:2018}, J\"onsson et al., in prep.):
\begin{itemize}
\item Source-level (\texttt{allStar}) \texttt{STARFLAG} must not contain
\texttt{VERY\_BRIGHT\_NEIGHBOR}, \texttt{SUSPECT\_RV\_COMBINATION} (bitmask
values: 3, 16)
\item Source-level (\texttt{allStar}) \texttt{ASPCAPFLAG} must not contain
\texttt{TEFF\_BAD}, \texttt{LOGG\_BAD}, \texttt{VMICRO\_BAD},
\texttt{ROTATION\_BAD}, \texttt{VSINI\_BAD} (bitmask values: 16, 17, 18, 26,
30)
\item Visit-level (\texttt{allVisit}) \texttt{STARFLAG} must not contain
\texttt{VERY\_BRIGHT\_NEIGHBOR}, \texttt{SUSPECT\_RV\_COMBINATION},
\texttt{LOW\_SNR}, \texttt{PERSIST\_HIGH}, \texttt{PERSIST\_JUMP\_POS},
\texttt{PERSIST\_JUMP\_NEG} (bitmask values: 3, 4, 9, 12, 13, 16)
\end{itemize}
These bitmasks are designed to remove the most obvious data reduction or
calibration failures that would directly impact the visit-level radial-velocity
determinations.
However, we later impose a stricter set of quality masks when showing results in
\sectionname~\ref{sec:gold-sample}.
After applying the above masks, we additionally reject any source with $<3$
visits.
Our final parent sample contains \nsources\ unique sources, selected from the
$437,485$ unique sources in all of \apogee\ \dr{16}.
Of the $\approx$$200,000$ sources removed, the vast majority were dropped
because they had $<3$ visits ($\approx$$17\,000$ were removed by the quality
cuts).
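As a rough illustration of how such bitmask and visit-count cuts can be applied to an
\texttt{allStar}-like table, the following \package{numpy} sketch selects sources that
survive the cuts.  The column names, function names, and bit numbers shown here are
illustrative placeholders based on the lists above, not a definitive reading of the
\apogee\ data model.
\begin{verbatim}
import numpy as np

# Bits to reject (assumed bit numbers; see the APOGEE bitmask
# documentation for the authoritative definitions).
# VERY_BRIGHT_NEIGHBOR, SUSPECT_RV_COMBINATION
STARFLAG_BAD_BITS = [3, 16]
# TEFF_BAD, LOGG_BAD, VMICRO_BAD, ROTATION_BAD, VSINI_BAD
ASPCAPFLAG_BAD_BITS = [16, 17, 18, 26, 30]

def any_bit_set(flag_column, bits):
    """True where any of the given bits is set in an integer flag column."""
    flags = np.asarray(flag_column, dtype=np.int64)
    mask = np.zeros(len(flags), dtype=bool)
    for bit in bits:
        mask |= (flags & (1 << bit)) != 0
    return mask

def parent_sample_mask(starflag, aspcapflag, n_visits, min_visits=3):
    """Boolean mask selecting sources that pass the quality cuts."""
    good = ~any_bit_set(starflag, STARFLAG_BAD_BITS)
    good &= ~any_bit_set(aspcapflag, ASPCAPFLAG_BAD_BITS)
    good &= np.asarray(n_visits) >= min_visits
    return good
\end{verbatim}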
% Notebook: Figure-DR16-statistics.ipynb
% \begin{figure}[!t]
% \begin{center}
% \includegraphics[width=0.7\textwidth]{specHR.pdf}
% \end{center}
% \caption{%
% Two spectroscopic (ASPCAP) stellar parameters---effective temperature, $T_{\rm
% eff}$, and log-surface gravity, $\log g$---of the \apogee\ \dr{16} sources that
% pass our quality cuts.
% These sources represent our ``parent sample.''
% The pixel coloring indicates the number of sources in each bin of stellar
% parameters.
% The outlined regions roughly identify the red giant branch (upper polygon,
% blue), subgiant branch (middle polygon, black), and (FGK-type) main sequence
% (lower polygon, green).
% The numbers next to each selection polygon indicate the number of sources in
% each.
% \label{fig:specHR}
% }
% \end{figure}
% \figurename~\ref{fig:specHR} shows the sources in our parent sample---i.e.,
% \apogee\ sources with 3 or more visits that pass the quality cuts described
% above---as a function of spectroscopic stellar parameters $T_{\rm eff}$,
% effective temperature, and $\log g$, log-surface gravity.
% While the majority of sources are giant-branch stars ($>150\,000$), a
% substantial number of main-sequence stars are present ($>60\,000$), thanks to
% the \apogee\ data reduction pipeline improvements for \dr{16} (J\"onsson et al.,
% in prep.).
% Figure~\ref{fig:visitstats} shows some statistics about the time coverage of the
% visits for sources in our parent sample.
% About half of the sources have a small number of visits spread over a small time
% baseline (the time spanned from the first to last visit for each source): $50\%$
% of sources have $<5$ visits over $<100~{\rm days}$.
% About $7\%$ of sources ($15\,366$) have $\geq 10$ visits over $\geq 100~{\rm
% days}$.
% Notebook: Figure-DR16-statistics.ipynb
% \begin{figure}[!t]
% \begin{center}
% \includegraphics[width=0.9\textwidth]{visitstats.pdf}
% \end{center}
% \caption{%
% Some statistics of \apogee\ \dr{16} visits.
% \textbf{Left:} The number of sources with more than a given number of visits,
% $n_{\rm vis}$.
% While $\approx$$50\%$ of sources have 3 visits, ($114\,263$, $57\,593$,
% $15\,862$) sources have $> (3, 5, 10)$ visits, respectively.
% A very small number of sources have $>50$ visits.
% \textbf{Right:} The number of sources with a time baseline, $\tau$, longer than
% given (on the horizontal axis).
% While $\approx$$50\%$ of sources have a time baseline $\tau \lesssim 56~{\rm
% days}$, ($88\,737$, $9\,743$) sources have $\tau > (100, 1\,000)~{\rm days}$.
% \label{fig:visitstats}
% }
% \end{figure}
\subsection{Re-calibrating the \apogee\ Visit Velocity Uncertainties}
\label{sec:visitcalib}
The principal data products used in this work are the \apogee\ ``\visit'' radial
velocity measurements (RVs).
% , which are released in the ``allVisit'' data file.
Each \apogee\ visit spectrum for a source is generated from a nightly
combination of individual exposures that are all typically taken within a 1--2
hour time block on a given night.
The visit spectra thus provide time-resolved stellar parameters with a minimum
time separation of about one day.
For the most recent data release, \apogee\ \thisdr, the RVs for
each source's visit spectra are derived using a new, more accurate, and more
stable pipeline,
\package{doppler},\footnote{\url{https://github.com/dnidever/doppler}}
which ultimately computes the RVs by cross-correlating a given visit spectrum
with a template spectrum whose stellar parameters are set by an initial guess of
the combined (over all visits) spectrum for the source.
% ^ TODO: cite Doppler / DR17 paper?
From this procedure, the \apogee\ pipeline generates the visit RVs and an
estimate of the uncertainty associated with each visit RV measurement, computed
using TODO \citep{TODO}.
When using RV measurements to infer binary star orbital parameters, the
precision of the derived parameters is strongly dependent on both the intrinsic
and the reported uncertainties of the RV data.
The \emph{intrinsic} RV measurement uncertainties set the theoretical,
minimum-amplitude detectability thresholds for RV-variable sources.
The \emph{reported} RV uncertainties also affect the orbital parameter samplings
returned:
For example, if the reported uncertainties are underestimated, scatter between the
visit RV measurements that is actually due to intrinsic measurement noise will
generally be interpreted as binarity, which will bias any population-level inferences
we can make about the binary fraction, among other parameters.
If the uncertainties are overestimated, we will preferentially miss
low-amplitude RV variations, which will limit our sensitivity to longer-period
and lower-mass systems.
We have found that the visit RV uncertainties in \apogee\ \thisdr\ are
underestimated, as has been pointed out for previous iterations of the pipeline
\citep[e.g.,][]{TODO, BadenesIthink, Price-Whelan:2020}.
To demonstrate this for \thisdr, we select stars with successfully measured
stellar parameters (\logg, \Teff, \mh, \vsini) in \apogee\ \thisdr.
We then select visits for which there is no detected secondary stellar spectrum
(i.e., SB2 systems; \texttt{N\_COMPONENTS==1}), which have no RV quality flags
triggered (\texttt{RV\_FLAG==0}), and which have good-quality combined-source
spectra (i.e., the \texttt{STARFLAG} bitmask must not contain bits
\texttt{VERY\_BRIGHT\_NEIGHBOR}, \texttt{PERSIST\_HIGH},
\texttt{SUSPECT\_RV\_COMBINATION}, \texttt{RV\_REJECT}, or
\texttt{RV\_SUSPECT}).
We only keep visits when a source has three or more total visits that pass these
quality checks.
\figurename~\ref{fig:chiplots} (left panel) shows the distribution of
uncertainty-normalized differences between the visit RVs $v_{nk}$ and the mean
$\langle v_{nk} \rangle_k$ over all $k$ visits of each source $n$, for all sources
with three or more visits that pass the quality cuts described above.
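If the reported visit uncertainties were accurate, these normalized differences would
be distributed approximately as a standard normal for RV-constant sources, with
binaries contributing extended tails.  The following sketch outlines the computation
(the array and function names here are illustrative, not pipeline quantities):
\begin{verbatim}
import numpy as np

def normalized_rv_residuals(visit_rv, visit_rv_err, source_id, min_visits=3):
    """Return (v_nk - <v_n>_k) / sigma_nk for every visit, grouped by source.

    All inputs are 1D arrays with one entry per visit; sources with fewer
    than `min_visits` visits are assigned NaN.
    """
    v = np.asarray(visit_rv, dtype=float)
    err = np.asarray(visit_rv_err, dtype=float)
    sid = np.asarray(source_id)

    chi = np.full_like(v, np.nan)
    for s in np.unique(sid):
        idx = np.where(sid == s)[0]
        if idx.size < min_visits:
            continue
        chi[idx] = (v[idx] - v[idx].mean()) / err[idx]
    return chi
\end{verbatim}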
% Visit-error-calibrate.ipynb
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\textwidth]{chi-distr.pdf}
\end{center}
\caption{%
TODO
\label{fig:chiplots}
}
\end{figure}
% Visit-error-calibrate.ipynb
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\textwidth]{mean-visit-rv-err.pdf}
\end{center}
\caption{%
TODO
\label{fig:mean-rv-err}
}
\end{figure}
\acknowledgements
It is a pleasure to thank
% APW acknowledgements support and space from the Max-Planck-Institut f\"ur
% Astronomie during initial work on this project.
% We thank the anonymous referee for constructive comments that improved this
% manuscript.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P.
Sloan Foundation, the U.S. Department of Energy Office of Science, and the
Participating Institutions. SDSS-IV acknowledges support and resources from the
Center for High-Performance Computing at the University of Utah. The SDSS web
site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the Brazilian
Participation Group, the Carnegie Institution for Science, Carnegie Mellon
University, the Chilean Participation Group, the French Participation Group,
Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de
Canarias, The Johns Hopkins University, Kavli Institute for the Physics and
Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley
National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP),
Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut
f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische
Physik (MPE), National Astronomical Observatories of China, New Mexico State
University, New York University, University of Notre Dame, Observat\'ario
Nacional / MCTI, The Ohio State University, Pennsylvania State University,
Shanghai Astronomical Observatory, United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University
of Colorado Boulder, University of Oxford, University of Portsmouth, University
of Utah, University of Virginia, University of Washington, University of
Wisconsin, Vanderbilt University, and Yale University.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
\software{
Astropy \citep{astropy, astropy:2018},
apred \citep{Nidever:2015},
ASPCAP \citep{ASPCAP},
exoplanet \citep{exoplanet:exoplanet},
gala \citep{gala},
IPython \citep{ipython},
numpy \citep{numpy},
pymc3 \citep{Salvatier2016},
schwimmbad \citep{schwimmbad:2017},
scipy \citep{scipy},
theano \citep{theano},
thejoker \citep{thejoker, Price-Whelan:2019a}
}
\appendix
% \section{Data tables}
% \label{sec:datatables}
% The primary data product released with this \documentname\ are the posterior
% samplings generated for each of \nsources\ sources in \apogee\ \dr{16};
% \changes{These samplings will be released in the upcoming intermediate SDSS data
% release ``DR16+'' (expected in mid-2020).}
% However, we also compute summary information and statistics aboutf these
% samplings and provide these data in \tablename~\ref{tbl:metadata}.
% We also define a \goldsample\ of high-quality, uniquely solved binary-star
% systems (see \sectionname~\ref{sec:gold-sample}) and release summary information
% along with cross-matched data from \gaia\ \dr{2} and the \acronym{STARHORSE}
% catalog of stellar parameters in \tablename~\ref{tbl:goldsample}.
% \input{tables/metadata-schema.tex}
% \input{tables/goldsample-schema.tex}
\bibliographystyle{aasjournal}
\bibliography{dr17binaries}
\end{document}
% !TeX root = ../thesis.tex
\subsection{Meshing examples}
\paragraph{}
In this section, some other mesh examples with irregular geometric boundaries are considered.
Fig.~\ref{oct_ex:mesh_spinner} shows the mesh generated for a spinner with a CAD input illustrated in Fig.~\ref{oct_ex:mesh_spinner_cad}.
Fig.~\ref{oct_ex:mesh_sphnix} shows the mesh generated for the Egypt Sphinx with a CAD input plotted in Fig.~\ref{oct_ex:mesh_sphnix_cad}.
\begin{figure}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/spinner_cad.png}
}
\caption[CAD design for spinner]{CAD design for the spinner}
\label{oct_ex:mesh_spinner_cad}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/spinner_full.eps}
}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/spinner_full_top.eps}
}
\end{subfigure} \\
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/spinner_full_side.eps}
}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/spinner_part.eps}
}
\end{subfigure}
\caption[Mesh for the spinner]{Mesh for the spinner}
\label{oct_ex:mesh_spinner}
\end{figure}
% ---- %
\begin{figure}
\centering
\scalebox{0.4}{
\includegraphics{octree/ex_images/sphnix_cad.png}
}
\caption[CAD design for the Egypt Sphinx]{CAD design for the Egypt Sphinx}
\label{oct_ex:mesh_sphnix_cad}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/sphnix_full.eps}
}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.25}{
\includegraphics{octree/ex_images/sphnix_front.eps}
}
\end{subfigure} \\
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.2}{
\includegraphics{octree/ex_images/sphnix_side.eps}
}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\scalebox{0.125}{
\includegraphics{octree/ex_images/sphnix_edges.eps}
}
\end{subfigure}\\
\begin{subfigure}[b]{1\linewidth}
\centering
\scalebox{0.3}{
\includegraphics{octree/ex_images/sphnix_internal.eps}
}
\end{subfigure}
\caption[Mesh for the Egypt Sphinx]{Mesh for the Egypt Sphinx}
\label{oct_ex:mesh_sphnix}
\end{figure}
\documentclass[]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{natbib}
\bibliographystyle{plainnat}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
\date{}
\begin{document}
\textbf{Journal Options:} Water Research, \textbf{Water Resources
Research}, Freshwater Biology, Journal of Hydrology, Ecohydrology,
Journal of Environmental Quality, Hydrobiologia, JAWRA
\section{A hierarchical model of daily stream temperature for regional
predictions}\label{a-hierarchical-model-of-daily-stream-temperature-for-regional-predictions}
\subsubsection{Daniel J. Hocking, Ben Letcher, and Kyle
O'Neil}\label{daniel-j.-hocking-ben-letcher-and-kyle-oneil}
*Daniel J. Hocking
(\href{mailto:[email protected]}{\nolinkurl{[email protected]}}), US
Geological Survey, Conte Anadromous Fish Research Center, Turners Falls,
MA, USA
\subsection{Abstract}\label{abstract}
Set up the problem. Explain how you solve it. Tell what you find.
Explain why it's the best thing ever.
\subsection{Introduction}\label{introduction}
Temperature is a critical factor in regulating the physical, chemical,
and biological properties of streams. Warming stream temperatures
decrease dissolved oxygen, decrease water density, and alter the
circulation and stratification patterns of streams (refs).
Biogeochemical processes such as nitrogen and carbon cycling are also
temperature dependent and affect primary production, decomposition, and
eutrophication (refs). Both physical properties and biogeochemical
processes influence the suitability for organisms living in and using
the stream habitat beyond just primary producers. Additionally,
temperature can have direct effects on the biota, especially
poikilotherms such as invertebrates, amphibians, and fish
\citep[e.g.,][]{Kanno2013, Xu2010, Xu2010a, Al-Chokhachy2013a}. Given
commercial and recreational interests, there is a large body of
literature describing the effects of temperature on fish, particularly
the negative effects of warming temperatures on cool-water fishes such
as salmonids. Finally, stream temperature can even affect electricity generation,
drinking water, and recreation (see van Vliet et al 2011). Therefore,
understanding and predicting stream temperatures are important for a
multitude of stakeholders.
Stream temperature models can be used for explanatory purposes
(understanding factors and mechanisms affecting temperature) and for
prediction. Predictions can be spatial and temporal including
forecasting and hindcasting. Predictions across space are especially
valuable because there is often a need for information at locations with
little or no observed temperature data. For example, many states have
regulations related to the management of streams classified as cold,
cool, and warm waters (refs), but because of the tremendous number of
headwater streams it is impossible classify most streams based on
observed data. Therefore, modeled stream temperature is needed to
classify most streams for regulatory purposes. Forecasting can provide
immediate information such as the expected temperature the next hour,
day, or week as well as long-term information about expected
temperatures months, years, and decades in the future. Hindcasting can
be used to examine temperature variability and trends over time and for
model validation. Both forecasting and hindcasting are useful for
understanding climate change effects on stream temperature regimes.
Given the importance of temperature in aquatic systems, it is not
surprising that there are a variety of models and approaches to
understanding and predicting stream temperature. Stream temperature
models are generally divided into three categories: deterministic (also
called process-based or mechanistic), stochastic, and statistical
\citep{Chang2013, Caissie2006, Benyahya2007}. Deterministic models are
based on heat transfer and are often modeled using energy budgets
\citep{Benyahya2007, Caissie2006}. The models require large amounts of
detailed information on the physical properties of the stream and
adjacent landscape as well as hydrology and meteorology. These models
are useful for detailed assessments and scenario testing. However,
the data requirements preclude the models from being applied over large
spatial extents.
Stochastic models attempt to combine pattern (seasonal and spatial
trends) with the random deviations to describe and predict environmental
data \citep{Chang2013, Sura2006, Kiraly2002}. Stochastic models of
stream temperature generally rely on relationships between air and water
temperature combined with random noise and an autoregressive correlation
structure, often decomposed into seasonal and annual components. These models
are most commonly used to model daily temperature fluctuations because of
their ability to address autocorrelation and approximate the near-random
variability in environmental data
\citep{Kiraly2002, Caissie2001, Ahmadi-Nedushan2007}. A limitation is
that the physical processes driving temperature fluctuations are not
elucidated with these models. They are generally used to describe
characteristics and patterns in a system and to forecast these patterns
in the future \citep{Kiraly2002}. Additionally, stochastic models rely
on continuous, often long, time series from a single or a few locations.
Inference cannot be made to other locations without assuming that the
patterns and random deviations are identical at those locations.
As with stochastic models, statistical models generally rely on
correlative relationships between air and water temperatures, but also
typically include a variety of other predictor variables such as basin,
landscape, and land-use characteristics. Statistical models are often
linear with normally distributed error and therefore used at weekly or
monthly time steps to avoid problems with temporal autocorrelation at
shorter time steps (e.g.~daily, hourly, sub-hourly). Parametric,
nonlinear regression models have been developed to provide more
information regarding mechanisms than traditional statistical models
without the detail of physical deterministic models \citep{Mohseni1998}.
Researchers have also developed geospatial regression models that
account for spatial autocorrelation within dendritic stream networks
\citep{Isaak2010b, Peterson2010, Peterson2013}. However, due to the
complexity of the covariance structure of network geostatistical models,
they are best used for modeling single temperature values across space
(e.g.~summer maximum, July mean, etc.) rather than daily temperatures
\citep{Peterson2010, Peterson2007, VerHoef2010}. Additionally,
statistical machine learning techniques such as artificial neural
networks have been used to model stream temperatures when unclear
interactions, nonlinearities, and spatial relationships are of
particular concern \citep{Sivri2009, Sivri2007, DeWeber2014}.
In contrast with deterministic approaches, statistical models require
less detailed site-level data and therefore can be applied over greater
spatial extents than process-based models. They also can describe the
relationships between additional covariates and stream temperature,
which is a limitation of stochastic models. These relationships can be
used to understand and predict anthropogenic effects on stream
temperature such as timber harvest, impervious development, and water
control and release \citep{Webb2008}. Quantifying the relationship
between anthropogenic effects, landscape characteristics, meteorological
patterns, and stream temperature allows for prediction to new sites and
times using statistical models. This is advantageous for forecasting and
hindcasting to predict and understand climate change effects on stream
temperatures. This is critical because not all streams respond
identically to air temperature changes and the idiosyncratic responses
may be predicted based on interactions of known factors such as flow,
precipitation, forest cover, basin topology, impervious surfaces, soil
characteristics, geology, and impoundments \citep{Webb2008}.
Letcher et al. \citeyearpar{Letcher2016t} outline six general challenges
of statistical stream temperature models including accounting for 1) the
non-linear relationship between air and water temperature at high and
low air temperatures, 2) different relationships between air and water
temperature in the spring and fall (hysteresis), 3) thermal inertia
resulting in lagged responses of water temperature to changes in air
temperature, 4) incomplete time series data and locations with large
differences in the amount of available data, 5) spatial and temporal
autocorrelation, and 6) important predictors of stream water temperature
other than air temperature. They developed a statistical model that
addresses aspects of non-linear relationships, hysteresis, thermal
inertia, and spatial and temporal autocorrelation but their analysis was
limited to a single small network of streams with long time series
\citep{Letcher2016t}.
We describe a novel statistical model of daily stream temperature that
incorporates features of stochastic models and extends the Letcher et
al. \citeyearpar{Letcher2016t} framework to large geographic areas. This
model handles time series data of widely varying duration from many
sites using a hierarchical mixed model approach to account for
autocorrelation at specific locations within watersheds. It incorporates
catchment, landscape, and meteorological covariates for explanatory and
predictive purposes. It includes an autoregressive function to account
for temporal autocorrelation in the time series, a challenge with other
statistical models at fine temporal resolution. Additionally, our
hierarchical Bayesian approach readily allows for complete accounting of
uncertainty. We use the model to predict daily stream temperature across
the northeastern United States over a 36-year time record.
\subsection{Methods}\label{methods}
\subsubsection{Water temperature data}\label{water-temperature-data}
We gathered stream temperature data from state and federal agencies,
individual academic researchers, and non-governmental organizations
(NGOs) from Maine to Virginia (Figure \#. \textbf{map}). The data were
collected using automated temperature loggers. The temporal frequency of
recording ranged from every 5 minutes to once per hour. These data were
consolidated in a PostgreSQL database linked to a web service at
\url{http://www.db.ecosheds.org}. Data collectors can upload data at
this website and choose whether to make the data publicly available or
not. The raw data are stored in the database and users can flag problem
values and time series. Only user-reviewed data are used in the analysis
and flagged values are excluded. For our analysis, we performed some
additional automated and visual quality assurance and quality control
(QAQC) on the sub-daily values, summarized to mean daily temperatures
and performed additional QAQC on the daily values. The QAQC was intended
to flag and remove values associated with logger malfunctions,
out-of-water events (including first and last days when loggers were
recording but not yet in streams), and days with incomplete data, which
would alter the daily mean. The QAQC webtool used for flagging
questionable data can be found at \url{http://db.ecosheds.org/qaqc}. We
also developed an R (ref) package for analyzing stream temperature data
from our database, including the QAQC functions which can be found at
\url{https://github.com/Conte-Ecology/conteStreamTemperature}. The R
scripts using these functions for our analysis are available at
\url{https://github.com/Conte-Ecology/conteStreamTemperature_northeast}.
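For illustration, a minimal sketch of the sub-daily to daily-mean summarization step is shown below; the column names are placeholders, and the actual functions are those in the conteStreamTemperature package noted above.
\begin{verbatim}
# Hypothetical sketch: summarize sub-daily logger records to daily means
# and drop incomplete days; column names are placeholders.
library(dplyr)

daily <- raw %>%
  mutate(date = as.Date(datetime)) %>%
  group_by(series_id, date) %>%
  summarise(mean_temp = mean(temp),
            n_obs     = n()) %>%
  filter(n_obs == max(n_obs))  # crude completeness check: keep only full days
\end{verbatim}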
Stream reach (stream section between any two confluences) was our finest
spatial resolution for the analysis. In the rare case where we had
multiple logger locations within the same reach (1,672 locations from
1,377 reaches) recording at the same time, we used the mean value from
the loggers for a given day. In the future, with sufficient within reach
data, it would be possible to use our modeling framework to also
estimate variability within reach by adding one more level to the
hierarchical structure of the model (see Statistical Model description
below).
\emph{Stream network delineation and landscape data}
Temperature logger locations were spatially mapped to the stream reaches
of a high resolution network of hydrologic catchments developed across
the Northeastern United States. The National Hydrography Dataset High
Resolution Delineation Version 2 (NHDHRDV2) maintains a higher
resolution and catchment areal consistency than the established NHDPlus
Version 2 dataset. The main purpose of the higher resolution is to
capture small headwaters that may be critical to ecological assessment.
A summary of this dataset with links to detailed documentation can be
found in the \href{http://conte-ecology.github.io/shedsGISData/}{SHEDS
Data project}.
\subsubsection{Meteorological and landscape
data}\label{meteorological-and-landscape-data}
The landscape and meteorological data were assembled from various
sources. These variables were spatially attributed to the hydrologic
catchments for incorporation into the model. The variables used in the
model are described in (Table 0?). All of the variables referenced in
the table refer to values calculated for the downstream point of each
catchment (confluence pour point).
\begin{longtable}[c]{@{}cllll@{}}
\toprule
\begin{minipage}[b]{0.10\columnwidth}\centering\strut
Variable
\strut\end{minipage} &
\begin{minipage}[b]{0.24\columnwidth}\raggedright\strut
Description
\strut\end{minipage} &
\begin{minipage}[b]{0.14\columnwidth}\raggedright\strut
Source
\strut\end{minipage} &
\begin{minipage}[b]{0.23\columnwidth}\raggedright\strut
Processing
\strut\end{minipage} &
\begin{minipage}[b]{0.16\columnwidth}\raggedright\strut
GitHub Repository
\strut\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.10\columnwidth}\centering\strut
Total Drainage Area
\strut\end{minipage} &
\begin{minipage}[t]{0.24\columnwidth}\raggedright\strut
The total contributing drainage area from the entire upstream network
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedright\strut
\href{http://conte-ecology.github.io/shedsData/}{The SHEDS Data project}
\strut\end{minipage} &
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
The individual polygon areas are summed for all of the catchments in the
contributing network
\strut\end{minipage} &
\begin{minipage}[t]{0.16\columnwidth}\raggedright\strut
\href{https://github.com/Conte-Ecology/shedsData/tree/master/NHDHRDV2}{NHDHRDV2}
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\centering\strut
Riparian Forest Cover
\strut\end{minipage} &
\begin{minipage}[t]{0.24\columnwidth}\raggedright\strut
The percentage of the upstream 61 m (200 ft) riparian buffer area that
is covered by trees taller than 5 meters
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedright\strut
\href{http://www.mrlc.gov/nlcd06_data.php}{The National LandCover
Database (NLCD)}
\strut\end{minipage} &
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
All of the NLCD forest type classifications are combined and attributed
to each riparian buffer polygon using GIS tools. All upstream polygon
values are then aggregated.
\strut\end{minipage} &
\begin{minipage}[t]{0.16\columnwidth}\raggedright\strut
\href{https://github.com/Conte-Ecology/shedsData/tree/master/basinCharacteristics/rasterPrep/nlcdLandCover}{nlcdLandCover}
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\centering\strut
Daily Precipitation
\strut\end{minipage} &
\begin{minipage}[t]{0.24\columnwidth}\raggedright\strut
The daily precipitation record for the individual local catchment
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedright\strut
\href{https://daymet.ornl.gov/}{Daymet Daily Surface Weather and
Climatological Summaries}
\strut\end{minipage} &
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
Daily precipitation records are spatially assigned to each catchment
based on overlapping grid cells using the
\href{https://github.com/Conte-Ecology/zonalDaymet}{zonalDaymet} R
package
\strut\end{minipage} &
\begin{minipage}[t]{0.16\columnwidth}\raggedright\strut
\href{https://github.com/Conte-Ecology/shedsData/tree/master/daymet}{daymet}
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\centering\strut
Upstream Impounded Area
\strut\end{minipage} &
\begin{minipage}[t]{0.24\columnwidth}\raggedright\strut
The total area in the contributing drainage basin that is covered by
wetlands, lakes, or ponds that intersect the stream network
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedright\strut
\href{http://www.fws.gov/wetlands/Data/Data-Download.html}{U.S. Fish \&
Wildlife Service (FWS) National Wetlands Inventory}
\strut\end{minipage} &
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
All freshwater surface water bodies are attributed to each catchment
using GIS tools. All upstream polygon values are then aggregated.
\strut\end{minipage} &
\begin{minipage}[t]{0.16\columnwidth}\raggedright\strut
\href{https://github.com/Conte-Ecology/shedsData/tree/master/basinCharacteristics/rasterPrep/fwsWetlands}{fwsWetlands}
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\centering\strut
Percent Agriculture
\strut\end{minipage} &
\begin{minipage}[t]{0.24\columnwidth}\raggedright\strut
The percentage of the contributing drainage area that is covered by
agricultural land (e.g.~cultivated crops, orchards, and pasture)
including fallow land.
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedright\strut
\href{http://www.mrlc.gov/nlcd06_data.php}{The National LandCover
Database}
\strut\end{minipage} &
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
All of the NLCD agricultural classifications are combined and attributed
to each catchment polygon using GIS tools. All upstream polygon values
are then aggregated.
\strut\end{minipage} &
\begin{minipage}[t]{0.16\columnwidth}\raggedright\strut
\href{https://github.com/Conte-Ecology/shedsData/tree/master/basinCharacteristics/rasterPrep/nlcdLandCover}{nlcdLandCover}
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\centering\strut
Percent High Intensity Developed
\strut\end{minipage} &
\begin{minipage}[t]{0.24\columnwidth}\raggedright\strut
The percentage of the contributing drainage area covered by places where
people work or live in high numbers (typically defined as areas covered
by more than 80\% impervious surface)
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedright\strut
\href{http://www.mrlc.gov/nlcd06_data.php}{The National LandCover
Database}
\strut\end{minipage} &
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
The NLCD high intensity developed classification is attributed to each
catchment polygon using GIS tools. All upstream polygon values are then
aggregated.
\strut\end{minipage} &
\begin{minipage}[t]{0.16\columnwidth}\raggedright\strut
\href{https://github.com/Conte-Ecology/shedsData/tree/master/basinCharacteristics/rasterPrep/nlcdLandCover}{nlcdLandCover}
\strut\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\subsubsection{Statistical model}\label{statistical-model}
Statistical models of stream temperature often rely on the close
relationship between air temperature and water temperature. However,
this relationship breaks down during the winter in temperate zones,
particularly as streams freeze, thereby changing their thermal
properties. Many researchers and managers are interested in the
non-winter effects of temperature. The winter period, when phase change
and ice cover alter the air-water relationship, differs in both time
(annually) and space. We developed an index of air-water synchrony
(\(Index_{sync}\)) so we can model the portion of the year that is not
affected by freezing properties. The index is the difference between air
and observed water temperatures divided by the water temperature plus
0.000001 to avoid division by zero.
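Written explicitly for a given reach and day (subscripts suppressed), the index is
\[ Index_{sync} = \frac{T_{air} - T_{water}}{T_{water} + 0.000001} \]
where \(T_{air}\) and \(T_{water}\) are the daily air and observed water temperatures.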
We calculate the \(Index_{sync}\) for each day of the year at each reach
for each year with observed data. We then calculate the 99.9\%
confidence interval of \(Index_{sync}\) for days 125 through 275 of the
year (5 May and 2 October). Then, moving from the middle of
the year (day 180) to the beginning of the year, we searched for the
first time when 10 consecutive days were not within the 99.9\% CI. This
was selected as the spring breakpoint. Similarly, moving from the middle
to the end of the year, the first event with fewer than 16 consecutive
days within the 99.9\% CI was assigned as the autumn breakpoint.
Independent breakpoints were estimated for each reach-year combination.
For reach-years with insufficient data to generate continuous trends and
confidence intervals, we used the mean breakpoints across years for
that reach. If there was not sufficient local reach information, we used
the mean breakpoints from the smallest hydrologic unit the reach is
nested in (i.e.~check for mean from HUC12, then HUC10, HUC8, etc.). More
details regarding the identification of the synchronized period can be
found in Letcher et al. \citeyearpar{Letcher2016t}. The portion of the
year between the spring and autumn breakpoints was used for modeling the
non-winter, approximately ice-free stream temperatures.
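As an illustration only, a minimal sketch of the spring breakpoint search for a single reach-year is given below; \texttt{index} is assumed to be a numeric vector of daily \(Index_{sync}\) values ordered by day of year and \texttt{ci} a two-element vector holding the lower and upper 99.9\% interval limits (both hypothetical names, not the actual analysis code).
\begin{verbatim}
# Hypothetical sketch: scan backward from day 180 for the first run of
# 10 consecutive days falling outside the 99.9% interval.
find_spring_breakpoint <- function(index, ci, run_length = 10) {
  outside <- index < ci[1] | index > ci[2]  # TRUE when a day is outside the CI
  for (d in seq(180, run_length, by = -1)) {
    window <- outside[(d - run_length + 1):d]
    if (!anyNA(window) && all(window)) {
      return(d)  # day ending the first qualifying run encountered
    }
  }
  NA_integer_  # insufficient data; fall back to reach or HUC means
}
\end{verbatim}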
We used a generalized linear mixed model to account for correlation in
space (stream reach nested within HUC8). This allowed us to incorporate
short time series as well as long time series from different reaches and
disjunct time series from the same reaches without risk of
pseudoreplication (ref: Hurlbert). By limiting stream drainage area to
\textless{}200 \(km^2\) and only modeling the synchronized period of the
year, we were able to use a linear model, avoiding the non-linearities
that occur at very high temperatures due to evaporative cooling and near
0 C due to phase change \citep{Mohseni1999}.
We assumed stream temperature measurements were normally distributed
following,
\textbf{need to decide how to handle naming subscripts vs.~indexing
subscripts and superscripts}
\begin{itemize}
\tightlist
\item
maybe do naming as subscripts and indexing in bracketed subscripts
\item
drawback would be random vs.~fixed subscripts still
\item
another alternative is to have different variable names for everything
so don't reuse X, and B, mu, beta, or sigma
\item
This might be easier when I reduce the complexity of the year random
effects
\end{itemize}
\[ t_{h,r,y,d} \sim \mathcal{N}(\mu_{h,r,y,d}, \sigma) \]
where \(t_{h,r,y,d}\) is the observed stream water temperature at the
reach (\(r\)) within the sub-basin identified by the 8-digit Hydrologic
Unit Code (HUC8; \(h\)) for each day (\(d\)) in each year (\(y\)). We
describe the normal distribution based on the mean (\(\mu_{h,r,y,d}\))
and standard deviation (\(\sigma\)) and assign a vague prior of
\(\sigma = 100\). The mean temperature is modeled to follow a linear
trend
\[ \omega_{h,r,y,d} = X_0 B_0 + X_{h,r} B_{h,r} + X_{h} B_{h} + X_{y} B_{y} \]
but the expected mean temperature (\(\mu_{h,r,y,d}\)) was also adjusted
based on the residual error from the previous day
\[ \mu_{h,r,y,d} = \begin{cases}
 \omega_{h,r,y,d} + \delta(t_{h,r,y,d-1} - \omega_{h,r,y,d-1}) & \quad \text{if $t_{h,r,y,d-1}$ is observed} \\
 \omega_{h,r,y,d} & \quad \text{if $t_{h,r,y,d-1}$ is not observed}
\end{cases}
\]
where \(\delta\) is an autoregressive {[}AR(1){]} coefficient and
\(\omega_{h,r,y,d}\) is the expected temperature before accounting for
temporal autocorrelation in the error structure.
\(X_{0}\) is the \(n \times K_0\) matrix of predictor values. \(B_0\) is
the vector of \(K_0\) coefficients, where \(K_0\) is the number of fixed
effects parameters including the overall intercept. We used 15 fixed
effect parameters including the overall intercept and interactions.
These were 2-day total precipitation, 30-day cumulative precipitation,
drainage area, upstream impounded area, percent forest cover within the
catchment and upstream catchments and various two- and three-way
interactions (Table 1?). We assumed the following distributions and
vague priors for the fixed effects coefficients
\[ B_0 \sim \mathcal{N}(0,\sigma_{k_0}), \text{for $k_0 = 1,...,K_0$,} \]
\[ B_0 = \beta_{0}^{1},...,\beta_{0}^{K_{0}} \sim \mathcal{N}(0, 100) \]
\[ \sigma_{k_0} = 100 \]
\(B_{h,r}\) is the \(R \times K_{R}\) matrix of regression coefficients
where \(R\) is the number of unique reaches and \(K_{R}\) is the number
of regression coefficients that vary randomly by reach within HUC8. The
effects of daily air temperature and mean air temperature over the
previous 7 days varied randomly with reach and HUC8 (Table 1). We
assumed prior distributions of
\[ B_{h,r} \sim \mathcal{N}(0,\sigma_{k_{r}}), \text{for $k_{r} = 1,...,K_{R}$,} \]
with a uniform prior on the standard deviation \citep{Gelman2006}
\[ \sigma_{k_{r}} \sim uniform(0,100) \]
\(X_{h}\) is the matrix of predictors whose coefficients vary by HUC8. We allowed for
correlation among the effects of these HUC8 coefficients as described by
Gelman and Hill \citeyearpar{Gelman2007}.
\(B_{h}\) is the \(H \times K_{H}\) matrix of coefficients where \(H\)
is the number of HUC8 groups and \(K_H\) is the number of parameters
that vary by HUC8 including a constant term. In our model,
\(K_{H} = K_{R}\) and we assumed prior distributions of
\[ B_{h} \sim \mathcal{N}(M_{h},\Sigma_{B_{h}}), \text{for $h = 1,...,H$} \]
where \(M_{h}\) is a vector of length \(K_{H}\) and \(\Sigma_{B_{h}}\)
is the \(K_{H} \times K_{H}\) covariance matrix.
\[ M_{h} \sim \text{MVN}(\mu_{1:K_h}^h, \sigma_{1:K_h}^h) \]
\[ \mu_{1}^h = 0; \mu_{2:K_h}^h \sim \mathcal{N}(0, 100) \]
\[ \Sigma_{B_{h}} \sim \text{Inv-Wishart}(\text{diag}(K_{h}), K_{h}+1) \]
We also allowed for the intercept to vary randomly by year. We assumed
prior distributions of
\[ B_{y} \sim \mathcal{N}(0,\sigma_{y}) \]
\[ \sigma_{y} \sim uniform(0,100) \]
To estimate all the parameters and their uncertainties, we used a
Bayesian analysis with a Gibbs sampler implemented in JAGS (ref) through
R (ref) using the rjags package (ref). This approach was beneficial for
hierarchical model flexibility and tractability for large datasets. We
used vague priors for all parameters so all inferences would be based on
the data. We ran 13,000 iterations on each of three chains with
independent random starting values. We discarded the first 10,000
iterations and thinned the remainder, saving every third iteration, for
a total of 3,000 iterations across the three chains to use for inference.
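For clarity, a minimal \texttt{rjags} sketch consistent with these sampler settings is shown below; the model file name, data list, and monitored nodes are placeholders rather than the actual model code.
\begin{verbatim}
# Illustrative only: file, data, and monitored node names are placeholders.
library(rjags)

jm <- jags.model(file = "daily_temperature_model.txt",
                 data = jags_data,  # observations, covariates, and indices
                 n.chains = 3)      # independent random starting values

update(jm, n.iter = 10000)          # burn-in iterations to discard

post <- coda.samples(jm,
                     variable.names = c("B.0", "sigma", "delta"),
                     n.iter = 3000, # post-burn-in iterations per chain
                     thin = 3)      # keep every third iteration
\end{verbatim}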
\subsubsection{Model validation}\label{model-validation}
To validate our model, we held out 10\% of stream reaches at random. We
also held out 10\% of remaining reach-year combinations with observed
temperature data at random. Additionally, we excluded all 2010 data
because it was an especially warm summer across the northeastern U.S.
based on the mean summer Daymet air temperatures. This approach was also
used by \citet{DeWeber2014a} and helps to assess the model's predictive
ability under future warming conditions. The held-out data included reaches
with no observations located within subbasins both with and without data,
which will be important when using this model with future climate
predictions. The most
challenging validation scenario was at reaches within HUC8s without any
data in a year without any data. In total, 26.4\% of observations and
33.3\% of reaches were held out for validation.
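A minimal sketch of this hold-out scheme is given below, using a hypothetical data frame \texttt{obs} with \texttt{reach} and \texttt{year} columns; it is not the actual analysis code.
\begin{verbatim}
# Hypothetical sketch of the validation hold-out.
set.seed(1)
reaches <- unique(obs$reach)
holdout_reach <- sample(reaches, size = round(0.1 * length(reaches)))

remaining  <- obs[!(obs$reach %in% holdout_reach), ]
ry         <- unique(remaining[, c("reach", "year")])
holdout_ry <- ry[sample(nrow(ry), size = round(0.1 * nrow(ry))), ]

held_out <- obs$reach %in% holdout_reach |
  paste(obs$reach, obs$year) %in% paste(holdout_ry$reach, holdout_ry$year) |
  obs$year == 2010                 # all 2010 observations held out
fit_data   <- obs[!held_out, ]
valid_data <- obs[held_out, ]
\end{verbatim}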
\begin{figure}[htbp]
\centering
\includegraphics{Figures/locationMap.png}
\caption{Figure 1.}
\end{figure}
\subsubsection{Derived metrics}\label{derived-metrics}
We used the meteorological data from Daymet to predict daily temperatures
for all stream reaches (\textless{}200 km\(^2\)) in the region for the
synchronized period of the year from 1980-2015. The predictions are
conditional on the specific random effects where available and receive
the mean effect for reaches, HUC8s, and years when no data were
collected. From these daily predictions, we derived a variety of metrics
to characterize the stream thermal regime. These include the mean (over
the 36 years) July temperature, mean summer temperature, mean number of
days per year above a thermal threshold (18, 20, and 22 C used by
default), frequency of years that the mean daily temperature exceeds
each of these thresholds, and the maximum 7-day and 30-day moving means
for each year and across all years. We also calculated the resistance of
water temperature to changes in air temperature during peak air
temperature (summer) based on the cumulative difference between the
daily air and water temperatures. Finally, we assessed the thermal
sensitivity of each stream reach as the change in daily stream
temperature per 1 C change in daily air temperature. This is essentially
the reach-specific air temperature coefficient converted back to the
original scale from the standardized scale.
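To illustrate how such metrics can be derived from the daily predictions, a brief sketch is shown below; it assumes a hypothetical data frame \texttt{pred} with columns \texttt{reach}, \texttt{year}, \texttt{date}, and predicted daily temperature \texttt{temp}, and is not the code used for the analysis.
\begin{verbatim}
# Hypothetical sketch of a few derived metrics from daily predictions
# (assumes a full synchronized season of daily values per reach-year).
library(dplyr)
library(zoo)

annual <- pred %>%
  group_by(reach, year) %>%
  summarise(mean_jul    = mean(temp[format(date, "%m") == "07"]),  # mean July temp
            days_gt_18  = sum(temp > 18),                          # days above 18 C
            max_mean_30 = max(rollmean(temp, 30, fill = NA),
                              na.rm = TRUE))                       # max 30-day mean

metrics <- annual %>%
  group_by(reach) %>%
  summarise(across(c(mean_jul, days_gt_18, max_mean_30), mean))    # means across years
\end{verbatim}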
\subsection{Results}\label{results}
To fit the model, we used 129,026 daily temperature observations from
627 stream reaches representing 1,051 reach-year combinations within 44
HUC8 subbasins between 1995 and 2013, excluding all records from 2010.
\emph{Evaluation of MCMC convergence (visual and R-hat)}
The iterations of the three MCMC chains converged on a single area of
high posterior probability while exhibiting minimal autocorrelation,
based on visual inspection of the iteration traceplots, partial vs.~full
density plots, and autocorrelation (ACF) plots. The potential scale
reduction factors (PSRF, \(\hat{R}\)) for all parameters and the
multivariate PSRF were \textless{} 1.1, further indicating good
convergence of the MCMC chains \citep{Brooks1998}.
\emph{Coefficient estimates from the model}
Most variables and their interactions were significant with 95\%
Credible Intervals (CRI) that did not overlap zero (Table 1). The only
non-significant parameters were the interactions of air temperature with
forest cover and of air temperature with impounded area. Drainage area alone
was not significant but it was significant in its interactions with all
combinations of air temperature and precipitation (Table 1). Air
temperature (1-day and 7-day) was the primary predictor of daily water
temperature. The effect of air temperature was dampened by interactions
with precipitation and drainage area (negative 3-way interactions; Table
1). There was also a large autocorrelation coefficient (AR1 = 0.77),
indicating that if the covariate-only portion of the model over- or
under-predicted temperature by 1 C yesterday, today's expected
temperature would be adjusted by 0.77 C in the same direction.
\emph{Variability at the reach and HUC scales}
There was much more unexplained random variation among sites than among
HUC8s, but the effects of air temperature on water temperature were only
slightly more variable among sites compared with HUC8s. There was very
little random variability among years not explained by other parameters
(Table 1).
\emph{Evaluation of model fit and predictive power}
\textbf{If use full region add map of average RMSE for streams with data
or locations so can see that it works equally well in north and south -
since no data in Pa or NY}
The overall Root Mean Squared Error (RMSE) was 0.58 C and the residuals
were normally distributed and unbiased (exhibiting no visual
heterogeneity), indicating that the model was a good approximation of
the process generating the data. These predicted values are adjusted for
residual error, but to understand how well the model predicts
temperatures when the previous day's observed temperature is unknown, it
is better to use the predictions prior to adjusting with the residual
AR1 term. The RMSE for the fitted data using unadjusted predictions was
0.89 C. All additional predictions and summaries use the unadjusted
values to better understand the predictive abilities of the model.
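For reference, the RMSE comparison described above amounts to the following sketch, where \texttt{obs}, \texttt{pred\_adj}, and \texttt{pred\_unadj} are hypothetical vectors of observed, AR1-adjusted, and unadjusted predicted daily temperatures.
\begin{verbatim}
# Minimal sketch; vector names are placeholders.
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2, na.rm = TRUE))
rmse(obs, pred_adj)    # RMSE with AR1-adjusted predictions (reported 0.58 C)
rmse(obs, pred_unadj)  # RMSE with unadjusted predictions (reported 0.89 C)
\end{verbatim}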
Specifically, to evaluate the spatial and temporal predictive power of
our model, we used independent validation data consisting of 46,290
daily temperature observations from 313 stream reaches representing 383
reach-year combinations within 36 HUC8 subbasins between 1998 and 2013.
The overall unadjusted RMSE for all validation data was 1.81 C. Similar
to the fitted data, there was no bias in the predictions of the
validation data, with the potential exception of slight over-prediction
at very low temperatures and possible slight under-prediction at very
high temperatures (figure - appendix?).
\includegraphics{Figures/validation_plot.jpg}
To assess predictive accuracy in warm years without data, we calculated
the RMSE for all reaches in 2010 (excluded from model fitting) to be
1.85 C. The RMSE in 2010 for reaches that had data in other years used
in the model fitting was 1.77 C, whereas reaches that had no data in
other years had an overall RMSE of 1.95 C in 2010 (no information about
the specific reach or year in a warm year).
Interestingly, there appears to be only a slight improvement in RMSE
with increases in the amount of data used in the model fitting or years
of observed data (appendix figure).
\includegraphics{Figures/rmse_2010_obs_plot.jpg} Similarly, there is no
effect of the amount of validation data for a reach on the RMSE estimate
of that reach (appendix figure).
\includegraphics{Figures/rmse_2010_valid_obs_plot.jpg}
\subsection{Discussion}\label{discussion}
Most aquatic organisms inhabiting streams are ectothermic and are
therefore sensitive to changes in stream temperatures. Although air
temperature can be used as a proxy for water temperature in small
streams, there is considerable variability in the relationship between
air and water temperatures. Additionally, land-use change (e.g.~forest
cover, impervious surfaces) and modifications to the stream network
(e.g.~undersized culverts, dams) influence water temperature differently
than air temperature. It is also impossible to monitor water temperature
across all streams; therefore, regional models are needed to predict
stream temperatures across time and space accounting for differences in
the landscape and land-use. Many fish biologists have focused on weekly,
monthly, or summer-only models of stream temperature to relate warm
conditions to trout distributions (refs). However, daily temperatures
are useful because they can be used in observation processes when
activity or detection is dependent on the current thermal conditions
(refs) and they can be summarized into any derived metrics of interest.
Depending on the species, life-stage, or management options, decision
makers and biologists might be interested in different metrics such as
degree days since an event (e.g.~oviposition, hatching), frequency of
thermal excursions, magnitude of excursions, mean summer temperature, or
variability in temperature of different time frames, all of which can be
derived from daily temperature predictions. Daily temperatures can also
relate more closely to state agency regulations such as the frequency of
daily temperatures over a threshold when classifying cold, cool, and
warm streams for legal protection (MA Department of Environmental
Protection, CALM Regulations, Gerry Szal \emph{personal communication} -
should probably find a real reference for this). Without knowing in
advance all the potential uses of predicted stream temperatures, a daily
model provides the flexibility to derive the values needed for
particular decisions.
To accommodate these flexible needs, we developed a daily stream
temperature model that takes advantage of diverse data sources to make
predictions across a large region. Our model fit the data well as
indicated by the RMSE \textless{} 1 C and had a good ability to predict
daily stream temperatures across space and time. With regards to
predicting temperatures in warm years without fitted data, such as 2010,
the model predicted temperatures well even in reaches with no other data
(RMSE = 1.95 C). The predictions were even better at reaches with data
from other years (RMSE = 1.77 C), indicating that reach-specific data
can improve predictions in future years but this improvement is not
dramatic. The lack of dramatic improvement is likely due to multiple
factors.
Some of the reach-level variability is probably accounted for by other
nearby reaches within the same HUC8 (influence of HUC8 random effects).
We did not have sufficient data from combinations of reaches, HUC8, and
years to compare the RMSE for HUC8 with single versus multiple observed
reaches, but based on similar levels of variability explained at the
reach and HUC8 levels it is likely that having data from other reaches
in a HUC8 improves the predictions for unmonitored reaches in the same
HUC8. Therefore, on average, predictions will be worse at reaches within
HUC8 with no data. There are also local conditions that vary in time to
influence stream temperatures beyond what is included in the model. If
the effect of these unmodeled covariates were constant in time, we would
expect more of the variation to be captured by the random reach effects
and therefore a larger difference in the RMSE in 2010 between reaches
with other years of data and reaches with no observed data. Time-varying
ground-surface water interactions are likely a major source of the
unexplained uncertainty in model predictions. Ground-surface water
interactions are particularly complex in the northeastern U.S. and
depend on dynamics of precipitation, temperature, snowmelt, local
geology, land-use, and landscape physiognomy (refs - I'm just making
this up based on physics and basic ecosystem processes). The amount of
groundwater entering streams depends on these time-varying conditions
but the temperature of the groundwater is also variable, depending on
the residence time, depth, and past weather conditions (refs). How much
the ground water affects the temperature of the stream water depends on
the volume and temperature of each source of water. We do not currently
have any landscape or environmental conditions that can predict these
ground-surface water interactions over broad space in the northeastern
U.S. However, work towards this is in progress and has been applied to
other areas (refs: than and others), and any appropriate predictors
could be added to our model without needing to change the overall
structure of the model.
\emph{interpretation of parameter estimates}
Of the parameters currently modeled, the current day's air temperature
and the mean air temperature over the previous 7 days had the largest
effect on daily stream water temperature. This is not surprising as we
limited our analysis to small streams and to the synchronized period of
the year when air and water temperature are most correlated. Past
studies of small streams have also found air temperature to be the main
predictor of stream temperature (refs) --compare specific coefficients
and TS to other papers?--
\emph{partitioning of variability}
However, the effects of air temperature and 7-day air temperature were
not identical across space. These effects varied moderately across sites
and HUC8 (Table 1), with similar variance for both temperature effects
although the daily air temperature had a slightly larger mean effect
(Table 1). Additionally, air temperature had significant 3-way
interactions with precipitation and drainage area. We used 2-day
precipitation x drainage area as an index of flow associated with storms
and 30-day precipitation x drainage area as an index of baseflow in
these small headwater streams (A. Rosner \emph{personal communication}).
Therefore, the negative 3-way interactions with air temperature are what
we would expect, indicating that at high flows the effect of air
temperature on water temperature is dampened. The effect sizes of these
interactions are extremely small, likely due in part to the
coarseness of using precipitation x drainage area as an index of flow
and not accounting for local ground-surface water interactions.
Air temperature did not interact significantly with percent forest cover
or impounded stream area. Alone, forest cover had a significant, but
small, negative effect on stream temperature during the synchronized
period, whereas impounded area had a significant, moderately large
positive effect on temperature (Table 1).
We did not include other predictors previously found to be important in
statistical models because of correlation with existing covariates or a
lack of variability in the potential predictor across the study area.
For example, elevation can be a useful predictor of stream temperature
(refs) but it lacks a specific mechanistic relationship and covaries
strongly with air temperature across the region. Similarly, human
development and impervious surfaces can affect stream temperature but in
the northeastern U.S. these exhibited high negative correlation with
forest cover and both variables could not be included in the model. As
more data become available through our data portal
\url{http://db.ecosheds.org}, it may be possible to separate the effects
of forest cover and human development variables. Likewise, agricultural
land-use can influence stream temperature or the effect of air
temperature on stream temperature \citep{DeWeber2014a}, but there were
insufficient observations over a range of agriculture in our data to
include it in the current model. Agriculture can be added to a future
version of the model as we expand coverage to the mid-Atlantic region of
the U.S. and as more data are added to our database. Shading can also
influence local stream conditions but is challenging to quantify over
large regions. As a step in this direction it would be possible to
replace forest cover at the catchment or watershed scale with canopy
cover within a riparian buffer area. Both riparian and drainage-level
forest cover could be included in the model if there were sufficient
data and they were not overly correlated.
\emph{Disagreement (conflicting evidence? confused terminology)
regarding the drivers of stream temperature}
\emph{Benefits of our approach}
\textbf{relate it to the 6 challenges of statistical models that Ben
describes}
\emph{Letcher et al. \citeyearpar{Letcher2016t} outline six general
challenges of statistical stream temperature models including accounting
for 1) the non-linear relationship between air and water temperature at
high and low air temperatures, 2) different relationships between air
and water temperature in the spring and fall (hysteresis), 3) thermal
inertia resulting in lagged responses of water temperature to changes in
air temperature, 4) incomplete time series data and locations with large
differences in the amount of available data, 5) spatial and temporal
autocorrelation, and 6) important predictors of stream water temperature
other than air temperature.}
Our model addresses a number of these challenges. Temperature loggers are
relatively cheap and easy to deploy, so many reaches have data, but the
records span widely varying lengths of time at different reaches. Our
model incorporates reaches with any length of record (a few days to
decades). Reaches with little data contribute less to the model but
still provide some local and spatial information; the more data a
location has, the more informative it is and the less its estimates
shrink toward the mean values. Reaches with no data can be predicted
based on covariate values and HUC-level random effects but do not get
reach-specific coefficient effects.
The model also separates uncertainty in estimates and predictions from
variability across space and time. The random reach, HUC, and year
effects explicitly address spatial and temporal variability, allowing
for a more complete accounting of uncertainty.
\emph{limitations}
Ground-surface water interactions are not included in the model. However,
if remotely sensed predictors of these interactions could be developed,
or exist in a particular region, they could easily be included as
site-level predictors.
\emph{future developments}
\begin{itemize}
\tightlist
\item
groundwater
\item
within reach variability
\item
autoregressive temperature not just residuals
\item
detailed effects of impoundments (exponential decay with distance)
\item
spatial autocorrelation
\item
expand to larger spatial extent
\item
nonlinear relationships
\item
model winter
\item
adjust breakpoint sync function to adjust with different stream
conditions, elevations, and locations
\item
dynamic model (effect of air temperature varies over time)
\end{itemize}
\emph{derived metrics}
We used the Daymet air temperature and precipitation along with
landscape covariates to predict daily stream temperatures in each reach
and then calculated derived metrics of potential interest to biologists,
managers, and policy makers.
We generated maps of mean derived metrics from temperatures predicted
over the Daymet record (1980-2013). When scaled to view the entire
region the patterns generally follow air temperature patterns with
cooler temperatures at higher elevations and latitudes and warmer
temperatures in urban, coastal, and lowland areas. An example of this
can be seen on the annual 30-day maximum of the mean daily stream
temperature map. However, when zoomed in to view individual catchments
on the HUC8 or HUC10 scale, it is clear that there is considerable local
variation in water temperatures (Figure \#)
\includegraphics{Figures/Inset3.png} based on forest cover, drainage
area, and local reach effects (unaccounted for local conditions
including ground-surface water interactions), as expected based on the
model coefficients and past research \citep{Kanno2013}. In lieu of
presenting small static maps, many of which would look somewhat similar
at the regional scale, we added maps of the derived metrics to our web
application which can be found at \url{http://ice.ecosheds.org/}
\emph{add special manuscript ice link}. Users can zoom to specific areas
and view information about individual stream reaches and associated
catchments. There is also the ability to filter to locate areas with
specific conditions. Our main Interactive Catchment Explorer (ICE) for
the northeastern and mid-Atlantic regions of the U.S. with information
about the landscape conditions and Brook Trout occupancy in addition to
stream temperatures can be found at \url{http://ice.ecosheds.org/} and
will be regularly updated as new data become available. This is part of
our web platform for Spatial Hydro-Ecological Decision Systems (SHEDS;
\url{http://ecosheds.org/}) where we present visualizations linking
datasets, statistical models, and decision support tools to help improve
natural resource management decisions. Interested users can contribute,
view, and download (if user-designated as publicly available) data at
\url{http://db.ecosheds.org/}. As noted above, these data will be used
to further improve model estimates and predictions, which will be
presented in ICE.
Although many of the derived metrics relating to peak temperatures have
relatively similar broad-scale spatial patterns, there are some metrics
that quantify other aspects of the thermal regime. For example, we
calculated the resistance of water temperature to changes in air
temperature during peak air temperature (summer) based on the cumulative
difference between the daily temperatures. The distribution of
resistance values was much more right-skewed than the annual 30-day
maximum temperature (Figure
\#). \includegraphics{Figures/metrics_histograms.jpg} This metric is
intended as a potential index of ground water influence on stream
temperature. Streams with larger resistance values would be expected to
have higher ground water influence because they would essentially be
buffered from changes in air temperature during the warmest part of the
year (\emph{could make figure to depict this for two extreme cases}).
This value could be adjusted for drainage area or flow since it is
possible that larger streams always fluctuate less and it could be
divided by mean water temperature during the summer to make it reflect
the relative resistance. We anticipate future efforts to quantify the
influence of ground water in summer stream temperature and explore how
well this metric is able to predict those values. Similarly, thermal
sensitivity (Figure \# - histograms above) or the size of the specific
reach random effect could serve as indicators of ground water influence.
In particular, the specific reach slope of air temperature suggests that
reaches with larger coefficients are highly responsive to changes in air
temperature (little ground water buffering) and reaches with small
coefficients are insensitive to changes in air temperature and therefore
likely to have significant ground water influence. These metrics are
hypothesized to indicate ground water influence but remain to be tested.
Given the differences in the distributions of these metrics (Figure \#
histograms), it is likely that some will be considerably more effective
as ground water indices than other metrics. A similar effort has
recently shown promise in creating a ground water influence index from
stream temperature data (ref: snyder, than and colleagues). These
indices would currently only apply to reaches with observed data, so the
next step would be to find landscape and geological parameters that
could predict the best ground water index across the region.
\subsection{Acknowledgments}\label{acknowledgments}
Thanks to A. Rosner for thoughtful discussions related to the analysis
and inference.
J. Walker for database creation and management, development of the
Interactive Catchment Explorer, and discussions.
Groups who provided data
\subsection{Tables}\label{tables}
Table 1. Regression summary table with coefficient estimates including
the mean, standard deviation (SD), and 95\% credible intervals (LCRI =
2.5\%, UCRI = 97.5\%).
\begin{longtable}[c]{@{}rrrrr@{}}
\toprule
\begin{minipage}[b]{0.37\columnwidth}\raggedleft\strut
Parameter
\strut\end{minipage} &
\begin{minipage}[b]{0.08\columnwidth}\raggedleft\strut
Mean
\strut\end{minipage} &
\begin{minipage}[b]{0.07\columnwidth}\raggedleft\strut
SD
\strut\end{minipage} &
\begin{minipage}[b]{0.10\columnwidth}\raggedleft\strut
LCRI
\strut\end{minipage} &
\begin{minipage}[b]{0.10\columnwidth}\raggedleft\strut
UCRI
\strut\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
Intercept
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
16.69
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.135
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
16.4182
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
16.949
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
1.91
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.022
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
1.8620
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
1.950
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
7-day AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
1.36
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.029
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
1.3015
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
1.417
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
2-day Precip
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.06
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.002
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.0546
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.063
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
30-day Precip
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.01
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.006
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.0005
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.026
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
Drainage Area
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.04
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.096
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.1452
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.232
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
Impounded Area
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.50
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.095
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.3181
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.691
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
Forest Cover
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.15
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.047
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.2455
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.059
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x 2-day Precip
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.02
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.002
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.0195
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.028
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x 30-day Precip
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.01
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.004
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0224
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.007
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x Drainage
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.06
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.029
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.1170
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.006
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x Impounded Area
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.02
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.029
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0345
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.077
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x Forest
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.02
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.015
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0508
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.009
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
2-day Precip x Drainage
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.04
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.002
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0424
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.034
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
30-day Precip x Drainage
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.06
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.006
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0709
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.046
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x 2-day Precip x Drainage
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.01
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.002
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0156
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.008
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AirT x 30-day Precip x Drainage
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
-0.01
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.004
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.0193
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
-0.004
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.37\columnwidth}\raggedleft\strut
AR1
\strut\end{minipage} &
\begin{minipage}[t]{0.08\columnwidth}\raggedleft\strut
0.77
\strut\end{minipage} &
\begin{minipage}[t]{0.07\columnwidth}\raggedleft\strut
0.002
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.7681
\strut\end{minipage} &
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
0.776
\strut\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\textbf{Random effects:}
\begin{longtable}[c]{@{}rrrr@{}}
\toprule
\begin{minipage}[b]{0.10\columnwidth}\raggedleft\strut
Group
\strut\end{minipage} &
\begin{minipage}[b]{0.14\columnwidth}\raggedleft\strut
Coef
\strut\end{minipage} &
\begin{minipage}[b]{0.06\columnwidth}\raggedleft\strut
SD
\strut\end{minipage} &
\begin{minipage}[b]{0.12\columnwidth}\raggedleft\strut
Variance
\strut\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
Site
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
Intercept
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
1.03
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
1.060
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
0.29
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
0.083
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
7-day AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
0.35
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
0.120
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
HUC8
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
Intercept
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
0.59
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
0.345
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
0.27
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
0.072
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
7-day AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
0.26
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
0.066
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.10\columnwidth}\raggedleft\strut
Year
\strut\end{minipage} &
\begin{minipage}[t]{0.14\columnwidth}\raggedleft\strut
Intercept
\strut\end{minipage} &
\begin{minipage}[t]{0.06\columnwidth}\raggedleft\strut
0.28
\strut\end{minipage} &
\begin{minipage}[t]{0.12\columnwidth}\raggedleft\strut
0.076
\strut\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\textbf{HUC8 coefficient correlations:}
\begin{longtable}[c]{@{}rrrr@{}}
\toprule
\begin{minipage}[b]{0.16\columnwidth}\raggedleft\strut
~
\strut\end{minipage} &
\begin{minipage}[b]{0.15\columnwidth}\raggedleft\strut
Intercept
\strut\end{minipage} &
\begin{minipage}[b]{0.09\columnwidth}\raggedleft\strut
AirT
\strut\end{minipage} &
\begin{minipage}[b]{0.15\columnwidth}\raggedleft\strut
7-day AirT
\strut\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.16\columnwidth}\raggedleft\strut
Intercept
\strut\end{minipage} &
\begin{minipage}[t]{0.15\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.09\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.15\columnwidth}\raggedleft\strut
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedleft\strut
AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.15\columnwidth}\raggedleft\strut
0.64
\strut\end{minipage} &
\begin{minipage}[t]{0.09\columnwidth}\raggedleft\strut
\strut\end{minipage} &
\begin{minipage}[t]{0.15\columnwidth}\raggedleft\strut
\strut\end{minipage}\tabularnewline
\begin{minipage}[t]{0.16\columnwidth}\raggedleft\strut
7-day AirT
\strut\end{minipage} &
\begin{minipage}[t]{0.15\columnwidth}\raggedleft\strut
0.338
\strut\end{minipage} &
\begin{minipage}[t]{0.09\columnwidth}\raggedleft\strut
0.234
\strut\end{minipage} &
\begin{minipage}[t]{0.15\columnwidth}\raggedleft\strut
\strut\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\subsection{Figures (do this as a separate file, then merge the PDF,
because otherwise it gets mixed in with the citations during
pandoc)}\label{figures-do-this-as-a-separate-file-then-merge-the-pdf-because-otherwise-getting-mixed-in-with-citations-during-pandoc}
Figure \#. Map of the mean annual maximum 30-day mean stream temperature
(mean temperature during the warmest 30-day period each year). The inset
shows the degree of local variation that is not clearly visible on the
regional map. Gray areas have no predictions, usually because they are in
larger streams, outside the bounds of the data used in the model
(\textgreater{}200 \(km^2\) drainage area). Results are presented as
catchments delineated from the stream reaches because, at this scale,
stream lines would blend together and would not provide a smooth visual
map surface - \emph{not sure if I need to include this, maybe wait to see
if reviewers say anything}
Figure \#. Hierarchical model structure. Refer to the Methods section for
specific definitions of the parameters. In general, \emph{t} represents
the daily temperature observation, \(\mu\) is a mean, \(\sigma\) is a
standard deviation, \(\omega\) is the expected value before accounting
for autocorrelation in the daily measurements, \(\delta\) is the
autocorrelation term, \(B\) are vectors of coefficients, and \(\rho\) is
the correlation coefficient among random HUC coefficients. Subscripts
represent levels at which the parameter varies and bracketed subscripts
are for identification only.
Figure 1. Example of adding a figure.
\begin{figure}[htbp]
\centering
\includegraphics{Figures/MADEP_W2033_T1.png}
\caption{}
\end{figure}
\subsection{Literature Cited}\label{literature-cited}
\bibliography{northeast_temperature_refs.bib}
\end{document}
| {
"alphanum_fraction": 0.7968298845,
"avg_line_length": 44.7784511785,
"ext": "tex",
"hexsha": "04ca1fa9b8954ce6bdd095276a11ff89f0394ab4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "92df915cb263e0aab6afffa51fb2fd6fbb0e8709",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Conte-Ecology/conteStreamTemperature_northeast",
"max_forks_repo_path": "manuscripts/northeast_temperature_ms2.tex",
"max_issues_count": 27,
"max_issues_repo_head_hexsha": "92df915cb263e0aab6afffa51fb2fd6fbb0e8709",
"max_issues_repo_issues_event_max_datetime": "2016-07-20T18:00:30.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-01-12T20:35:28.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Conte-Ecology/conteStreamTemperature_northeast",
"max_issues_repo_path": "manuscripts/northeast_temperature_ms2.tex",
"max_line_length": 133,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "92df915cb263e0aab6afffa51fb2fd6fbb0e8709",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Conte-Ecology/conteStreamTemperature_northeast",
"max_stars_repo_path": "manuscripts/northeast_temperature_ms2.tex",
"max_stars_repo_stars_event_max_datetime": "2018-05-05T17:08:47.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-05-05T17:08:47.000Z",
"num_tokens": 18567,
"size": 66496
} |
Several particle types can be introduced in GIFMod, each having different properties, and a different constitutive model can be assigned to each group. The transport of particles in a GI system is governed by the following general mass balance equation:
\begin{equation}
\label{eq:20}
\begin{split}
\frac{d\Gamma_{i,l} G_{p,l,i}}{dt} =\\ \beta_{p,l} \bigg[\sum_{j=1}^{nj} pos \big(Q_{ij}+v_{s,p,ij}A_{ij}\big)G_{p,l,j}-\sum_{j=1}^{nj} pos \big(-Q_{ij}-v_{s,p,ij}A_{ij}\big)G_{p,l,i}\bigg]\\
-S_i \big(\sum_{l'=1}^{nl_p}\textbf{K}_{p,l,l'}G_{p,l,i}-\sum_{l'=1}^{nl_p}\textbf{K}_{p,l,l'}G_{p,l',i}\big) + \beta_{p,l} \sum_{j=1}^{nj} A_{ij}\frac{D_{p,ij}}{d_{ij}}(G_{p,l,j}-G_{p,l,i})
\end{split}
\end{equation}
In Eq. \ref{eq:20}, the first term on the right-hand side is advection due to flow and settling, the second term represents the exchange of particles between phases (i.e.\ mobile, attached, etc.), and the last term represents dispersion and diffusion of particles.
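As an illustration of how these terms act (this is simply a special case of Eq. \ref{eq:20}, not an additional model): for a particle class with a single mobile phase ($\beta_{p,l}=1$), no settling, no phase exchange and no dispersion, the equation reduces to a purely advective, upwind balance between block $i$ and its neighbours,
\[
\frac{d\Gamma_{i,l} G_{p,l,i}}{dt} = \sum_{j=1}^{nj} pos\big(Q_{ij}\big)G_{p,l,j}-\sum_{j=1}^{nj} pos\big(-Q_{ij}\big)G_{p,l,i},
\]
so inflows carry the neighbouring concentration into the block while outflows remove the block's own concentration.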
Three pre-defined models for particle transport are provided in GIFMod, respectively named Single Phase, Dual Phase and Triple Phase. In the single phase model a particle is assumed to reside only in the mobile phase, with no interaction with the solid phases. If a particle type is assigned a dual phase model, particles can exchange between the mobile and attached phases. Finally, a triple phase model means that particles can be in the mobile, reversibly attached and irreversibly attached phases.
\begin{itemize}
\item \textbf{Single phase model: } If a particle type is assigned a single phase model, there will be only one phase with $\alpha_{p,1}=1$ and there will be no mass exchange between phases. The interaction between aqueous particles and the air-water interface (AWI) is assumed to occur instantaneously based on an aqueous-AWI partitioning coefficient $K_{aw}$. $\Gamma$ in Eq. \ref{eq:20} in this case will be a single-member vector equal to $S_i + K_{aw} S_{aw}$. By default the air-water interface area is assumed to be proportional to the volume of the air phase (i.e. $S_{aw} = \sout{V}_i - S_i$) under unsaturated conditions in the unsaturated soil block.
\item \textbf{Dual phase model: } In a dual phase model, particles can be in two phases: mobile and attached. $\Gamma$, the $\textbf{K}$ matrix and $\vec{\beta}_p$ will respectively be:
\begin{equation}
\label{eq:21}
\vec{\Gamma}_i =
\begin{bmatrix}
S_i \\
\sout{V}_i \rho_i
\end{bmatrix}
\end{equation}
\\
\begin{equation}
\label{eq:22}
\vec{K}_p =
\begin{bmatrix}
0 & f_i \alpha_p \eta_p \ |v_i| \ (1-\frac{G_{p,s}}{G_{s,max}}) \\
f_i k_{p,rel} pos(|v_i|-v_{crit}) & 0\\
\end{bmatrix}
\end{equation}
\\
\begin{equation}
\label{eq:23}
\vec{\beta}_p =
\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}
\end{equation}
\item \textbf{Triple phase model: } In a triple phase model, particles can be in three phases: mobile, irreversibly attached and reversibly attached. $\Gamma$, the $\textbf{K}$ matrix and $\vec{\beta}_p$ will respectively be:
\begin{equation}
\label{eq:24}
\vec{\Gamma}_i =
\begin{bmatrix}
S_i \\
\sout{V}_i \rho_i \\
\sout{V}_i \rho_i
\end{bmatrix}
\end{equation}
\\
\noindent
\begin{equation}
\label{eq:25}
\vec{K}_p =
\begin{bmatrix}
0 & \zeta_p f_i \alpha_p \eta_p \ |v_i| \ (1-\frac{G_{p,s}}{G_{s,max}}) & (1-\zeta_p) f_i \alpha_p \eta_p \ |v_i| \ (1-\frac{G_{p,s}}{G_{s,max}})\\
0 & 0 & 0\\
f_i k_{p,rel} pos(|v_i|-v_{crit}) & 0 & 0\\
\end{bmatrix}
\end{equation}
\\
\begin{equation}
\label{eq:26}
\vec{\beta}_p =
\begin{bmatrix}
1 \\
0 \\
0 \\
\end{bmatrix}
\end{equation}
\end{itemize}
\subsection{Particle properties}
\begin{itemize}
\item \textbf{Attachment efficiency: } or sticking efficiency, $\alpha_p$ in Eqs. \ref{eq:22} and \ref{eq:25}. This is the probability that a particle encountering a collector sticks to it. Not available for the single phase particle model.
\item \textbf{Collection efficiency: } or transport efficiency, $\eta_p$ in Eqs. \ref{eq:22} and \ref{eq:25}. The probability that a particle approaching a collector encounters it. Not available for the single phase particle model.
\item \textbf{Critical velocity for release: } Attached particles start to be released when the velocity exceeds this value. The rate of release of attached particles is proportional to the difference between the velocity in a block and the critical velocity (Eqs. \ref{eq:22} and \ref{eq:25}). Not available for the single phase particle model.
\item \textbf{Diffusion coefficient: } $D_{c}$, the diffusion coefficient of the particle type being defined.
\item \textbf{Dispersivity: } Dispersivity of the particle type. The mechanical dispersion coefficient is calculated as $D_{p,ij} = \alpha_{D,p} |v_{ij}| + D_{c,p}$.
\item \textbf{Irreversible collection fraction: } $\zeta_p$ in Eq. \ref{eq:25}. Only available for the triple phase particle model.
\item \textbf{Medium bulk density: } if entered, this overrides the bulk density specified in the block properties for the particle transport module.
\item \textbf{Partitioning coefficient to air-water interface: } The value of $K_{aw}$.
\item \textbf{Settling velocity: } Settling velocity $v_{s,p}$ in Eq. \ref{eq:20}.
\item \textbf{Specific surface area: } $f$ in Eqs. \ref{eq:22} and \ref{eq:25}.
\end{itemize}
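As a rough illustrative calculation (the numbers are assumed here purely for illustration; they match the values used in the colloid transport example below): with a specific surface area $f = 10000\,m^{-1}$, attachment efficiency $\alpha_p = 1$, collection efficiency $\eta_p = 10^{-6}$, a velocity $|v| = 2.5\,m/day$ and negligible surface coverage ($G_{p,s} \ll G_{s,max}$), the first-order deposition rate coefficient appearing in Eqs. \ref{eq:22} and \ref{eq:25} is
\[
f \, \alpha_p \, \eta_p \, |v| = 10000 \times 1 \times 10^{-6} \times 2.5 = 0.025\ day^{-1}.
\]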
\subsection{Example: Colloid transport in a one dimensional saturated soil column under steady flow condition: }
In this example we show the creation of a simple model of multi-disperse particle transport and filtration in a one-dimensional column under steady-state flow conditions using GIFMod. The column is assumed to have a length of $50cm$ and a cross-sectional area of $100cm^2$, and is discretized into 10 layers. A dummy reservoir will also be added at the bottom of the column to ease imposition of the downstream boundary condition through the prescribed flow feature.
\begin{itemize}
\item \textbf{Create the first block and assign the properties: } Add a Darcy block from the top menu. Set the following properties:
- \textbf{Bottom Area: }\textit{$0.01m^2$}
- \textbf{Depth: }\textit{$0.05m$}
- \textbf{Bulk Density: }\textit{$1600 kg/m^3$}
- \textbf{Saturated moisture content: } \textit{$0.4$}
- \textbf{Initial moisture content: } \textit{$0.4$}
- \textbf{Storativity: } \textit{1$m^{-1}$}
\item \textbf{Create a vertical array: } Right-click on the block that was just created and then click on "Make grid of blocks" item. Select "Vertical 2D Grid" and then change the number of rows to 10. Click "Ok". \\
- \textbf{Note: } When a vertical grid is made the bottom area of the original block will be copied as the interface area of the connectors.
\item \textbf{Introducing particles: } We will model three different particle types with three different collection efficiency ($\eta_p$) values. In order to introduce the particles, from the Project Explorer right-click on \textbf{Water Quality}$\rightarrow$\textbf{Particles} and then select "Add Particles" from the drop-down menu.
\item \textbf{Setting particle properties: } Set the following properties for the particle:
- \textbf{Name: } \textit{Particle I}
- \textbf{Attachment Efficiency: }\textit{1}
- \textbf{Collection Efficiency: } \textit{1e-6}
- \textbf{Model: }\textit{Dual Phase}
- \textbf{Specific Surface Area: }\textit{10000$m^{-1}$}
\\\\
Add two more particle types with the following properties:
- \textbf{Name: }\textit{Particle II}
- \textbf{Attachment Efficiency: }\textit{1}
- \textbf{Collection Efficiency: }\textit{1e-5}
- \textbf{Model: }\textit{Dual Phase}
- \textbf{Specific Surface Area: }\textit{10000$m^{-1}$}
\\\\
- \textbf{Name: }\textit{Particle III}
- \textbf{Attachment Efficiency: }\textit{1}
- \textbf{Collection Efficiency: }\textit{1e-4}
- \textbf{Model: }\textit{Dual Phase}
- \textbf{Specific Surface Area: }\textit{10000$m^{-1}$}
\item \textbf{Save: } It is now a good time to save the work.
\item \textbf{Inflow file: } Here we create the inflow file. It is assumed that the flow rate is constant and equal to $0.01m^3/day$, which results in a velocity of $2.5m/day$ over the entire experiment, which lasts 1 day. The particle concentration for all three particle types is $10mg/L$ for a period of 1 hr, followed by no particles in the inflow. The inflow file will look like Figure \ref{fig:12}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Images/Figure12.png} \\
\caption{Inflow file for particle transport in soil example}\label{fig:12}
\end{center}
\end{figure}
Create or download "inflow.txt" from the example folder. Click on block \textbf{Darcy (1)} and choose the inflow file from inflow time series field in the properties dialog box. \\
Right-click on the \textbf{Darcy (1)} block and visualize the time series using \textbf{Plot Inflow Properties} menu item. The inflow concentration for particle I for example should look like Figure \ref{fig:13}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Images/Figure13.png} \\
\caption{Concentration of Particle I in the inflow}\label{fig:13}
\end{center}
\end{figure}
\item \textbf{Outflow: } In order to impose the outflow boundary condition a dummy reservoir will be added to the bottom of the soil column. The type of the block used for this reservoir is not important because we will use the \textbf{prescribed flow} feature to control the outflow. Let's use a storage block to create the dummy outflow reservoir. Add a storage by clicking on the storage icon \includegraphics[width=0.5cm]{Icons/storage_icon.png} on the top tool bar. Move the newly added storage block to the bottom of the column and connect it to the block named \textbf{Darcy (10)}. \\
- Enter a non-zero \textbf{length} for the newly added connector and a non-zero \textbf{bottom area} and \textbf{depth} for the newly added storage block. These values are required for the model to run. Also, to prevent the storage block from becoming over-saturated, enter a value of zero in the \textbf{Head-Storage relationship} property of the newly added storage block and set the bulk density of the storage block to \textit{1600$kg/m^3$}. \\
\textbf{Note: } If the value of \textbf{Medium Bulk Density} is entered in the particle properties of a particle type, it will be used as the media density when performing the mass balance for that particle class.
\item \textbf{Outflow rate time-series: } We want the outflow rate to be equal to the inflow rate. This can be accomplished in two ways: one is to turn on "Steady State Hydraulics" in \textbf{Settings$\rightarrow$Project Settings}; the other is to assign a \textbf{prescribed flow} to the bottom connector. We are going to use the second approach here. The flow time series to be prescribed to the \textbf{Darcy (10)-Pond (1)} connector should look like Figure \ref{fig:14}. Name the file "outflow.txt".
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Images/Figure14.png} \\
\caption{Outflow file}\label{fig:14}
\end{center}
\end{figure}
Please note that the header line of the file shown in Figure \ref{fig:14} is ignored by the program and is only there for the user's documentation. \\
- Click on the \textbf{Darcy (10)-Pond (1)} connector, and choose "outflow.txt" from the \textbf{Prescribed flow} property. \\
- Set \textbf{Use Prescribed Flow} to "Yes". \\
- Save the project.
- Now the model is ready to run. Click on the run icon \includegraphics[width=0.5cm]{Icons/run_icon.png} on the left side tool bar and wait for the simulation to end. \\
- Right-click on the block named \textbf{Darcy (10)} and choose \textbf{Water Quality Results$\rightarrow$ Particle I$\rightarrow$ Mobile} to see the breakthrough curve for particle type I. \\
Do the same thing for \textbf{Particle II} and \textbf{Particle III}.
Right-click on the graph containing the Particle II results and click on the \textbf{Copy Curve} item, then click on the graph containing the Particle I results, right-click and select the \textbf{Paste} item. \\ - Do the same thing with Particle III to see the three breakthrough curves for the three particle types in a single graph. You can change the format of each curve by right-clicking on the graph and selecting the name of each curve (Figure \ref{fig:30}).
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Images/Figure30.png} \\
\caption{Particle breakthrough curves for the three particle classes}\label{fig:30}
\end{center}
\end{figure}
\end{itemize}
\newpage
\subsection{Example: Particle settling in a one-dimensional water body: }
In this example we create a simple model of particles settling in a one-dimensional water column. Similar to the previous example, three particle classes will be considered, but this time their settling velocities will be different. We will use an array of storage blocks to create the water column. The water column is assumed to have a height of 1.5m, which is discretized into 10 equally sized layers, each represented by a single storage block. The particles are assumed to be initially uniformly distributed and are allowed to settle under gravity.
\begin{itemize}
\item \textbf{Create a new project} \\
\item \textbf{Introducing particle classes: }
From the \textbf{Project Explorer} right-click on \textbf{Water Quality}$\rightarrow$\textbf{Particles} and then select "Add Particles" from the drop-down menu.
\item \textbf{Setting particle properties: } Set the following properties for the particle:
- \textbf{Name: } \textit{Particle I}
- \textbf{Model: }\textit{Single Phase}
- \textbf{Settling velocity: }\textit{10$m/day$}
\\\\
Add two more particle types with the following properties:
- \textbf{Name: } \textit{Particle II}
- \textbf{Model: }\textit{Single Phase}
- \textbf{Settling velocity: }\textit{1$m/day$}
\\\\
- \textbf{Name: } \textit{Particle III}
- \textbf{Model: }\textit{Single Phase}
- \textbf{Settling velocity: }\textit{0.1$m/day$}
\item \textbf{Save: } It is now a good time to save the work.
\item \textbf{Create a storage block and set the properties: } Add a storage by clicking on the storage icon \includegraphics[width=0.5cm]{Icons/storage_icon.png} on the top tool bar. \\
- Set the following properties for the storage block:
- \textbf{Bottom Area: }\textit{10$m^2$}
- \textbf{Depth: }\textit{0.15m} \\
- \textbf{Initial water depth: }\textit{0.15m} \\
- \textbf{Saturated moisture content: } \textit{1} (There is no solid media in the storage blocks) \\
- \textbf{Initial particle concentration: } Click on the "..." symbol in front of the \textbf{Particle Initial Concentration} label in the properties window and enter a concentration of 10 for the three particle classes, as shown in Figure \ref{fig:17}. \\
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Images/Figure17.png} \\
\caption{Particle initial concentration}\label{fig:17}
\end{center}
\end{figure}
\item \textbf{Creating an array of storage blocks: } Right-click on the storage block that was just created and choose the \textbf{Make Grid of Blocks} item from the menu. Choose the \textbf{vertical 2D grid} option and enter 10 in the \textbf{Number of rows} box.
\item \textbf{Turn settling on for all the connectors: } By default the settling option for the connectors is set to \textbf{No}. In order to allow settling transport to occur via the connectors, the \textbf{Settling} option should be set to \textbf{Yes}. Select all the connectors one by one and set the \textbf{Settling} property to \textbf{Yes}.
\item \textbf{Run the model: } Click on the run button on the left-hand side tool bar \includegraphics[width=0.5cm]{Icons/run_icon.png} and wait for the simulation to end.
\item \textbf{Checking the results: } Right-click on the \textbf{Storage (10)} block and choose \textbf{Plot Water Quality Results}$\rightarrow$\textbf{Particle I}$\rightarrow$\textbf{Mobile} to see the temporal variation of the \textbf{Particle I} particle class at the bottom of the column. Do the same thing for the \textbf{Particle II} and \textbf{Particle III} particle groups. You can copy and paste all curves into one window to compare the results for each particle class (Figure \ref{fig:31}).
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm]{Images/Figure31.png} \\
\caption{Concentration of the three particle classes at the bottom storage block}\label{fig:31}
\end{center}
\end{figure}
\item \textbf{Adding particle diffusion: } Change the \textbf{Diffusion coefficient} of all the particles to 0.1$m^2/day$ by clicking on each particle type under \textbf{Project Explorer}$\rightarrow$\textbf{Water Quality}$\rightarrow$\textbf{Particles} and changing the \textbf{diffusion coefficient} property to 0.1$m^2/day$. Run the simulation again. Diffusion should prevent all the particles from settling to the bottom block.
\end{itemize} | {
"alphanum_fraction": 0.7388328318,
"avg_line_length": 81.0861244019,
"ext": "tex",
"hexsha": "d2f39b5721fb64e3f1169075dca5145d868c4ce5",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2018-08-30T10:56:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-11-09T22:00:45.000Z",
"max_forks_repo_head_hexsha": "1fa9eda21fab870fc3baf56462f79eb800d5154f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ArashMassoudieh/GIFMod_",
"max_forks_repo_path": "GIFMod User's Manual/Particles.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "1fa9eda21fab870fc3baf56462f79eb800d5154f",
"max_issues_repo_issues_event_max_datetime": "2017-07-04T05:43:37.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-07-04T05:40:30.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ArashMassoudieh/GIFMod_",
"max_issues_repo_path": "GIFMod User's Manual/Particles.tex",
"max_line_length": 662,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "1fa9eda21fab870fc3baf56462f79eb800d5154f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ArashMassoudieh/GIFMod_",
"max_stars_repo_path": "GIFMod User's Manual/Particles.tex",
"max_stars_repo_stars_event_max_datetime": "2018-08-28T06:08:45.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-20T19:32:27.000Z",
"num_tokens": 4784,
"size": 16947
} |
\documentclass{article}
\title{Exercise 9}
\begin{document}
\maketitle
\section{Review of Assignment 3}
~\\[5cm]
\section{FizzBuzz with Cuts}
It's time to use cuts to make a FizzBuzz program with example outputs as follows:
\begin{verbatim}
fizz(0) => fizzbuzz
fizz(1) => 1
fizz(2) => 2
fizz(3) => fizz
fizz(4) => 4
fizz(5) => buzz
\end{verbatim}
Utilise cuts to make this work.
\end{document}
| {
"alphanum_fraction": 0.7032418953,
"avg_line_length": 14.8518518519,
"ext": "tex",
"hexsha": "54feea921382ed7fed27a889437022fd85795194",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "63fc3fc1a6eb3a3258d7f3f6174b0d4f93acc666",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jb567/comp304-tutorials",
"max_forks_repo_path": "ex9/ex9.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "63fc3fc1a6eb3a3258d7f3f6174b0d4f93acc666",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jb567/comp304-tutorials",
"max_issues_repo_path": "ex9/ex9.tex",
"max_line_length": 80,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "63fc3fc1a6eb3a3258d7f3f6174b0d4f93acc666",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jb567/comp304-tutorials",
"max_stars_repo_path": "ex9/ex9.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 139,
"size": 401
} |
\section{Parameters}\label{sec:parameters}
\begin{itemize}
\item \standout{Floorplan}:
\begin{description}
\item[userCount] \textit{(int)} The number \(N\) of
users inside the network (default: \code{100}).
\item[sizeX] \textit{(meters)} The horizontal size of
the floorplan (default: \code{150m}).
\item[sizeY] \textit{(meters)} The vertical size of the
floorplan (default: \code{150m}).
\item[indexStartingNode] \textit{(int distribution)} A
random number generated from a distribution to
select a random user to broadcast the message at
the start of the simulation (default:
\code{intuniform(0, userCount-1)}).
\end{description}
\item \standout{User}:
\begin{description}
\item[posX] \textit{(meters)} The X-axis position of the
user in the floorplan (default: set by Floorplan
as \code{uniform(0m, sizeX)}).
\item[posY] \textit{(meters)} The Y-axis position of the
user in the floorplan (default: set by Floorplan
as \code{uniform(0m, sizeY)}).
\item[sendOnStart] \textit{(bool)} Specifies if the user
should send out the message at the start of
simulation (default: \code{false}).
\item[slotDuration] \textit{(seconds)} The duration of a
time slot (default: \code{1s}).
\item[broadcastRadius] \textit{(meters)} The broadcast
radius \(R\) (default: \code{20m}).
\item[hearWindow] \textit{(int)} The time window of
\(T\) slots that the user should wait before
relaying the message (default: \code{5}).
\item[maxCopies] \textit{(int)} The maximum number of
copies \(m\) that the user can receive to
decide to relay the message (default: \code{3}).
Note that our model counts also the first
message as a ``copy'', but this is not an issue
since the problem defined in \chref{ch:specs}
says ``less than \(m\) copies'' (\emph{strict}
constraint) while here we talk about the
``maximum number of copies (including first
message)'' (\emph{loose} constraint).
\item[relayDelay] \textit{(int distribution)} The number
of time slots \(\delta\) to add to the time
window \(T\) before relaying the message
(default: \code{intuniform(0, 3)}).
\end{description}
\item \standout{Oracle}:
\begin{description}
\item[slotDuration] \textit{(seconds)} The duration of a
time slot. It should be equal to
\code{User.slotDuration}.
\item[timeout] \textit{(int)} The number of time slots
with no network activity the oracle must wait
before stopping the simulation. It should be
\(\geq T + \max(\delta) + 1\).
\end{description}
\end{itemize}
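For reference, the parameters above are the ones typically assigned from the
\omnetpp{} INI file. The following sketch shows one possible set of
assignments using the default values listed above; the module paths
(\code{Floorplan}, \code{user[*]}, \code{oracle}) and the configuration name
are illustrative assumptions and must match the actual NED network
definition:
\begin{verbatim}
[Config Example]
network = Floorplan
Floorplan.userCount = 100
Floorplan.sizeX = 150m
Floorplan.sizeY = 150m
**.user[*].slotDuration = 1s
**.user[*].broadcastRadius = 20m
**.user[*].hearWindow = 5
**.user[*].maxCopies = 3
**.user[*].relayDelay = intuniform(0, 3)
**.oracle.slotDuration = 1s
**.oracle.timeout = 9  # >= T + max(delta) + 1 = 5 + 3 + 1
\end{verbatim}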
\begin{tcolorbox}[title=Note]
Since users only operate at time slot intervals, the parameter
\code{slotDuration} does not affect the behavior or the performance of
the network in any way. The only effect is that times \exgratia{total
broadcast time} are scaled, so we will fix this parameter to \(1s\). In
this way, each second of simulation time represents a time slot and the
analysis becomes easier since, by default, \omnetpp{} records emitted
signals with the timestamp at which the emission occurs.
Of course this is not the case in a real system (each slot is probably
much shorter than 1 second) but this decision will not invalidate any
consideration made in our analysis, except for the time scaling effect.
In the following we may use the terms ``1 slot'' and ``1 second''
interchangeably.
\end{tcolorbox}
| {
"alphanum_fraction": 0.7051989544,
"avg_line_length": 43.582278481,
"ext": "tex",
"hexsha": "5cd6d3d79e2dcfa2af355250c867f3af8052d72b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "SpeedJack/pecsn",
"max_forks_repo_path": "doc/chapters/model/parameters.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "SpeedJack/pecsn",
"max_issues_repo_path": "doc/chapters/model/parameters.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "40c757cddec978e06de766c9dff00abf57ccd6b3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "SpeedJack/pecsn",
"max_stars_repo_path": "doc/chapters/model/parameters.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1009,
"size": 3443
} |
%# -*- coding: utf-8-unix -*-
\chapter{Inverse Kinematics of a Binary Variable Geometry Trussarm}
\label{chap:faq}
\section{Introduction}
| {
"alphanum_fraction": 0.7391304348,
"avg_line_length": 27.6,
"ext": "tex",
"hexsha": "6ceaefe9a936753537caa05f183f6a73f249dfd1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ddcd9e2ea6618bef55ca13302e7332d537571b8d",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "onenightinminhangcampus/SJTU-ENG-Thesis",
"max_forks_repo_path": "tex/chapter06.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ddcd9e2ea6618bef55ca13302e7332d537571b8d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "onenightinminhangcampus/SJTU-ENG-Thesis",
"max_issues_repo_path": "tex/chapter06.tex",
"max_line_length": 67,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ddcd9e2ea6618bef55ca13302e7332d537571b8d",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "onenightinminhangcampus/SJTU-ENG-Thesis",
"max_stars_repo_path": "tex/chapter06.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 38,
"size": 138
} |
%
% Module TODO:MODNUM Chapter TODO:CHAPNUM Program Documentation
% CSC160-C00: Computer Science I (C++)
% Author: Ashton Hellwig
%
\documentclass[a4paper,11pt]{article}
% Packages
\usepackage[english]{babel} % Internationalization
\usepackage{soul} % Highlighting
\usepackage{hyperref} % Links (internal and external)
\usepackage{fancyhdr} % Headers and footers
\usepackage[dvipsnames]{xcolor} % Text Colors
\usepackage{listings} % Code Snippets
\usepackage{algorithmicx} % Algorithmic notation support
\usepackage{algpseudocode} % Algorithmic notation environments
\usepackage{enumitem} % Ordered lists
\usepackage{geometry} % Page layout
\usepackage{graphicx} % Image support
\usepackage[toc, page]{appendix} % Appendix
\usepackage{amsmath} % Mathematical Typesetting
% Colors
\newcommand{\commentstylecolor}{\color{Gray}}
\newcommand{\keywordstylecolor}{\color{MidnightBlue}}
\newcommand{\stringstylecolor}{\color{ForestGreen}}
\newcommand{\questioninput}{\color{Red}}
\newcommand{\answertcolor}{\color{Green}}
\newcommand{\myanswer}{\answertcolor{\hl}}
% Image Directory
\graphicspath{ {screenshots/} }
% Hyperlink Setup
\hypersetup{
colorlinks = true,
urlcolor = blue,
linkcolor = blue
}
% Syntax-Highlighting for Code Snippets
\lstset{
backgroundcolor=\color{white},
breaklines=true,%
captionpos=b,%
frame=tb,%
tabsize=4,%
numbers=left,%
showstringspaces=false,%
commentstyle=\commentstylecolor,%
keywordstyle=\keywordstylecolor,%
stringstyle=\stringstylecolor%
}
% Page Configuration
%% Style
\pagestyle{fancy}
%% Layout
\geometry{%
a4paper,%
top=2.5cm,%
bottom=2.5cm,%
left=2.5cm,%
right=2.5cm%
}
\setlength{\headheight}{12pt}
\setlength{\floatsep}{12pt}
%% Title page
\title{Chapter TODO:CHAPNUM Program Documentation}
\author{Ashton Hellwig}
\date\today
\setcounter{tocdepth}{3}
%% Subsequent pages
\lhead{CSC160}
\rhead{Computer Science I (C++)}
\lfoot{MTODO:MODNUMCTODO:CHAPNUM}
\rfoot{A. Hellwig}
% Document Content
\begin{document}
% Title Page
\maketitle
\tableofcontents
\listoffigures
\newpage
% Problem Analysis
\section{Problem Analysis}
The problem states:
\begin{quotation}
Write a program that uses \textbf{while} loops to perform the following:
\begin{enumerate}
\item Prompt the user to input two integers: firstNum and secondNum
(firstNum must be less than secondNum).
\item Output all odd numbers between firstNum and secondNum.
\item Output the sum of all even numbers between firstNum and secondNum.
\item Output the numbers and their squares between 1 and 10.
\item Output the sum of the square of the odd numbers between firstNum
and secondNum.
\item Output all uppercase letters.
\end{enumerate}
\end{quotation}
\subsection{Data}
Available data includes:
\begin{enumerate}
\item There are two variables: \texttt{firstNum} and \texttt{secondNum}
\item \texttt{firstNum} must always be less than \texttt{secondNum}
\end{enumerate}
\subsection{Desired Output}
\begin{figure}[h]
\caption{main.cpp output}
\begin{lstlisting}[language=bash]
Odd numbers between firstNum and secondNum:
Sum of even numbers between firstNum and secondNum:
firstNum =
firstNumSquares between 1 and 10:
secondNum =
secondNumSquares between 1 and 10:
The sum of the square of the odd numbers between firstNum and secondNum =
All uppercase letters used were:
\end{lstlisting}
\label{fig:do}
\end{figure}
% Algorithm
\newpage
\section{Algorithm}
Below is the algorithm for the program.
\begin{figure}[h]
\caption{Chapter 5 Program Algorithm}
\vspace{12pt}
\begin{algorithmic}
%% Variables
\State \Comment{--Variables--}
\State
\State $firstNum\gets $
\Comment{Needs user input}
\State $secondNum\gets $
\Comment{Needs user input}
%% Prompt Lines
\State \Comment{--Prompt Lines--}
\State \Call{toOutput}{} ``Please enter the values for firstNum''
\State $firstNum\gets input$
\State \Call{toOutput}{} ``Please enter the values for secondNum''
\State $secondNum\gets input$
\end{algorithmic}
\label{alg:c5}
\end{figure}
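To make the algorithm concrete, below is a minimal sketch (not the full graded
solution) of how the prompts and the first two requirements could be
implemented with \textbf{while} loops in C++; the variable names follow the
problem statement:
\begin{lstlisting}[language=C++]
#include <iostream>

int main() {
    int firstNum = 0;
    int secondNum = 0;

    // Prompt for the two bounds (firstNum must be less than secondNum).
    std::cout << "Please enter the values for firstNum: ";
    std::cin >> firstNum;
    std::cout << "Please enter the values for secondNum: ";
    std::cin >> secondNum;

    // Output all odd numbers between firstNum and secondNum.
    std::cout << "Odd numbers between firstNum and secondNum: ";
    int i = firstNum + 1;
    while (i < secondNum) {
        if (i % 2 != 0) {
            std::cout << i << " ";
        }
        i++;
    }
    std::cout << std::endl;

    // Output the sum of all even numbers between firstNum and secondNum.
    int evenSum = 0;
    i = firstNum + 1;
    while (i < secondNum) {
        if (i % 2 == 0) {
            evenSum += i;
        }
        i++;
    }
    std::cout << "Sum of even numbers between firstNum and secondNum: "
              << evenSum << std::endl;

    return 0;
}
\end{lstlisting}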
% User Documentation
\newpage
\section{User Documentation}
%% Usage
\subsection{Build}
The following are instructions with two use cases:
\begin{itemize}
\item Within Visual Studio 2017
\item Bundled Release
\item with GNU G++
\end{itemize}
\subsubsection{Within Visual Studio}
Simply load \texttt{ChapterTODO:CHAPNUM.sln} in Microsoft Visual Studio and
build/run the \texttt{release} version. If you require debugging
information, switch the configuration to \texttt{debug}.
\subsubsection{Bundled Release}
\begin{enumerate}
    \item Navigate to the unzipped folder containing the binary,
        \textbf{with a terminal emulator or command prompt}; this will
        (most likely) mean running:
\begin{lstlisting}[language=bash]
cd %USERPROFILE%\Downloads\ChapterTODO:CHAPNUM\x64\Release\
\end{lstlisting}
\item To run the program simply issue this within the command
prompt
\begin{lstlisting}[language=bash]
.\ChapterTODO:CHAPNUM.exe
\end{lstlisting}
\end{enumerate}
Of course, if preferred, you may also navigate to the release folder in
File Explorer and double-click the executable (\texttt{ChapterTODO:CHAPNUM.exe}).
\subsubsection{With GNU G++}
If you prefer to use an open source debugger and compiler then I assume
the following:
\begin{enumerate}
\item You have installed \href{http://www.mingw.org/}{MinGW} and
it is in your \texttt{\$PATH}
\item You have installed the
\href{http://www.mingw.org/wiki/MSYS}{MSYS Tools} and they are in
your \texttt{\$PATH}
\end{enumerate}
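Assuming those tools are installed and, as an example, the source file is
named \texttt{main.cpp} (adjust the file name to whatever the project
actually uses), the program can then be compiled and run from the
MinGW/MSYS shell roughly as follows:
\begin{lstlisting}[language=bash]
g++ -std=c++14 -Wall -o ChapterTODO:CHAPNUM.exe main.cpp
./ChapterTODO:CHAPNUM.exe
\end{lstlisting}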
% Appendix
\newpage
\appendix
% Appendix A
\section{Appendix A}
\end{document}
| {
"alphanum_fraction": 0.6519464533,
"avg_line_length": 29.6757990868,
"ext": "tex",
"hexsha": "bdd561a3e9905d594af229ca6b682c2e30b69868",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2204b7d30675b306231f1263c037bdc3b13fc4f2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ashellwig/generator-csc160-program",
"max_forks_repo_path": "generators/app/templates/doc/assigned/main.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "2204b7d30675b306231f1263c037bdc3b13fc4f2",
"max_issues_repo_issues_event_max_datetime": "2020-02-09T08:37:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-02-09T08:29:36.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ashellwig/generator-csc160-program",
"max_issues_repo_path": "generators/app/templates/doc/assigned/main.tex",
"max_line_length": 84,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2204b7d30675b306231f1263c037bdc3b13fc4f2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ashellwig/generator-csc160-program",
"max_stars_repo_path": "generators/app/templates/doc/assigned/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1727,
"size": 6499
} |
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{geometry}
\usepackage{enumerate}
\usepackage{natbib}
\usepackage{float}%稳定图片位置
\usepackage{graphicx}%画图
\usepackage[english]{babel}
\usepackage{a4wide}
\usepackage{indentfirst}%缩进
\usepackage{enumerate}%加序号
\usepackage{multirow}%合并行
\title{\large UM-SJTU JOINT INSTITUTE\\Introduction to Computer Organization\\(VE370)\\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\
Project 3\\\ \\\ Literature Research \\\ \\\ \\\ \\\ \\\ }
\author{Name: Pan Chongdan\\ID: 516370910121}
\date{Date: \today}
\begin{document}
\maketitle
\newpage
\section{Abstract}
Moore's Law, which was stated by Gordon Moore in 1965, says that the number of transistors on an integrated circuit of the same size doubles every 18 months. Accordingly, the cost of computing also falls by about half every 18 months, which is why computers have become so cheap and accessible. Moore's Law has held for more than fifty years and has greatly stimulated the development of the IT industry. However, Moore's Law will eventually fail, because the size of a transistor cannot shrink indefinitely. My literature research covers the factors leading to the failure of Moore's Law and possible solutions.
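As a quick back-of-the-envelope illustration of the pace this implies (my own arithmetic, not a figure from the cited literature): doubling every 18 months means that over a single decade the transistor count grows by a factor of
$$2^{120/18}=2^{6.67}\approx 100,$$
i.e.\ roughly a hundredfold.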
\section{Moore's Law's Limit}
\subsection{Size Problem}
At present, the feature size of chip interconnects is about 15nm. If it keeps shrinking at the current rate, the scale will drop below 5nm before 2020, which leaves only about 10 atoms across a wire. Because of quantum uncertainty, transistors at that scale will no longer be reliable.
\par In addition, if we keep shrinking the size of the chip, the wires become closer and closer to each other; when they are too close, quantum tunnelling can occur, so electrons jump from one wire to another and the circuit can no longer work.
\subsection{Material Problem}
Moore's first law defines the improved transistor drive current $$I_{dsat}=\beta (V_{gs}-V_t)^2/2$$ where $V_{gs}$ is the gate-to-source voltage, $V_t$ is the device threshold, and $\beta$ is the MOS transistor gain factor. The gain factor $\beta$ depends on both the process parameters and the device geometry, and is given by
$$\beta=\mu\varepsilon/t_{ox}(W/L)$$ where $\mu$ is the effective surface mobility of the carriers in the channel, $\varepsilon$ is the permittivity of the gate dielectric, $t_{ox}$ is the thickness of the gate dielectric, $W$ is the width of the channel and $L$ is the length of the channel. The gain factor $\beta$ thus consists of a process-dependent factor $\mu\varepsilon/t_{ox}$, which contains all the process terms that account for such factors as doping density and gate oxide thickness. The process-dependent factor is sometimes written as $\mu C_{ox}$ where $C_{ox}=\kappa\varepsilon_0 A/t_{ox}$, and $t_{ox}$ is limited by leakage and manufacturability. Thus the best options to increase the device performance are to increase the $\kappa$ of the dielectric and to improve the channel mobility.
\par Nowadays, $SiO_2$ is the best gate dielectric material used in transistors, but we need to find a high-$\kappa$ dielectric to replace $SiO_2$. The new material must also achieve low electrical leakage and have negligible trap densities to meet gate leakage and reliability requirements. The search for such low-cost new materials has been one of the most difficult problems to solve in order to keep Moore's Law going.
\section{Solution}
\subsection{Cloud Computing}
Since we cannot keep lowering the cost of the computers in users' hands, we can simply move the CPU out of the computer, so users no longer need to worry about the size of the chip. With cloud computing technology, the computation is done in the cloud, and the user's computer only needs to send signals to and receive results from the CPU in the cloud.
\par In addition, if we build many CPUs together in the cloud, we can save cost by building a highly integrated CPU with a different architecture designed specifically for large-scale computation.
\subsection{More Moore's Law}
\begin{figure}[H]
\centering
\includegraphics[scale=1.5]{P1.png}
\end{figure}
Currently, circuits are designed on one plane; according to the ``More Moore's Law'' concept, they can be designed in three dimensions. This does not improve the performance of the circuit significantly, but it reduces voltage leakage, which is called a ``power-driven technology transition''.
\subsection{More than Moore's Law}
"More than Moore's Law" focus on the variety of chip's function, which means opening minds and making more advance in other fields such sensor or power to improve product's performance instead of being stuck in figuring how to making smaller chips. For example, we can make more development on wireless communication and power transfer technology so our cell phone can work faster and longer. In addition, engineers can put more efforts on development virtual reality and artificial intelligence.
\section{Conclusion}
In reality, there is always a limit to the size of a transistor, but there is no limit to human intelligence. So if we can decrease the cost of chips through other aspects, such as algorithms or architecture, there is a chance that we can keep Moore's Law alive.
\section{Reference}
\begin{enumerate}[-]
\item Robert R.Schaller. "Moore's Law: Past, Present and Future." IEEE Spectrum. June 1997.
\item Andrew B.Kahang. "Scaling: More than Moore's Law." IEEE Design $\&$ Test of Computers.
\item Young-Kai Chen. "More than Moore's law — Scaling with silicon photonics."2016 International Symposium on VLSI Technology, Systems and Application. 30 May 2016.
\item J.Prased "Challenges and opportunities for the universities to support future technology developments in the semiconductor industry: staying on the Moore's Law." Proceedings of the 15th Biennial University/Government/ Industry Microelectronics Symposium. July 2003.
\end{enumerate}
\end{document} | {
"alphanum_fraction": 0.779541142,
"avg_line_length": 109.3773584906,
"ext": "tex",
"hexsha": "36319b75b8215ab7db02a5800f318ce442b77d92",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "430ffaeea0830dc3105883374dd729fe9f86cc55",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "PANDApcd/SemiConductorCircuit",
"max_forks_repo_path": "VE370ComputerOrganization/HW/Project/Project3/Report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "430ffaeea0830dc3105883374dd729fe9f86cc55",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "PANDApcd/SemiConductorCircuit",
"max_issues_repo_path": "VE370ComputerOrganization/HW/Project/Project3/Report.tex",
"max_line_length": 801,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "430ffaeea0830dc3105883374dd729fe9f86cc55",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "PANDApcd/SemiConductorCircuit",
"max_stars_repo_path": "VE370ComputerOrganization/HW/Project/Project3/Report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1394,
"size": 5797
} |
% chktex-file 46
\documentclass[a4paper,11pt,parskip=half]{scrartcl}
\usepackage{graphicx}
\usepackage[utf8]{inputenc} %-- pour utiliser des accents en français
\usepackage{amsmath,amssymb,amsthm}
\usepackage[round]{natbib}
\usepackage{url}
\usepackage{xspace}
\usepackage[left=20mm,top=20mm]{geometry}
\usepackage[ruled,vlined,linesnumbered]{algorithm2e}
\usepackage{subcaption}
\usepackage{mathpazo}
\usepackage{booktabs}
\usepackage{hyperref}
\usepackage{listings}
\newcommand{\ie}{i.e.}
\newcommand{\eg}{e.g.}
\newcommand{\reffig}[1]{Figure~\ref{#1}}
\newcommand{\refsec}[1]{Section~\ref{#1}}
\setcapindent{1em} %-- for captions of Figures
\addtokomafont{title}{\raggedright\setlength{\tabcolsep}{0pt}}
\addtokomafont{author}{\raggedright\setlength{\tabcolsep}{0pt}\normalsize}
\addtokomafont{date}{\raggedright\setlength{\tabcolsep}{0pt}\normalsize}
\title{\LARGE Homework \#2---GEO1111}
\author{Hugo de Jonge, \#343483}
\date{\today}
\begin{document}
\maketitle
%%%
%
\section{Introduction}%
\label{sec:intro}
This is a demo file for \LaTeX; it can be used as a template for homework assignments.
It contains examples of all (?) the things you should want to do with \LaTeX.
%%%
%
\section{Cross-references}
The command \texttt{\textbackslash{}ref} can be used for chapters, sections, subsections, figures, tables, etc.
Alternatively, the macros defined above (lines 21 and 22 of the \texttt{.tex} file) can be used.
Section~\ref{sec:intro} is what you are currently reading, and Section~\ref{sec:code} shows how to put code.
\refsec{sec:intro} is what you are currently reading, and \refsec{sec:code} shows how to put code.
%%%
%
\section{Figures}%
\label{sec:figures}
Figure~\ref{fig:sometriangles} is a simple figure.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figs/sometriangles.png}
\caption{One nice figure}%
\label{fig:sometriangles}
\end{figure}
Notice that all figures in your document should be referenced to in the main text.
The same applies to tables and algorithms.
It is recommended \emph{not} to force-place your figures (\eg\ with commands such as: \texttt{\textbackslash{}newpage} or by forcing a figure to be at the top of a page).
\LaTeX\ usually places the figures automatically rather well.
Only if you still have small problems at the end of your thesis should you then solve them.
As shown in \autoref{fig:sidebyside},
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[angle=90,width=\linewidth]{figs/sometriangles.png}
\caption{}\label{fig:sidebyside:1}
\end{subfigure}%
\qquad %-- that adds some space between th 2 figures
\begin{subfigure}[b]{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/lod1.png}
\caption{}\label{fig:sidebyside:2}
\end{subfigure}%
\caption[Shortened title for the list of figures]{Two figures side-by-side. (a) A triangulation of 2 polygons. (b) Something not related at all.}%
\label{fig:sidebyside}
\end{figure}
it is possible to have two figures (or more) side by side.
You can also refer to a subfigure: see \autoref{fig:sidebyside:2}.
\subsection[Shorter section name for the TOC]{Figures in PDF are possible and even encouraged!}%
\label{sec:pdf}
If you use Adobe Illustrator or \href{http://ipe7.sourceforge.net}{Ipe} you can make your figures vectorial and save them in PDF\@.
You include a PDF the same way as you do for a PNG, see \autoref{fig:pdffig},
\begin{figure}
\centering
\begin{subfigure}[b]{0.28\linewidth}
\centering
\includegraphics[page=1,width=\linewidth]{figs/tricat.pdf}
\caption{2 polygons}\label{fig:pdffig:1}
\end{subfigure}%
\qquad %-- that adds some space between th 2 figures
\begin{subfigure}[b]{0.28\linewidth}
\centering
\includegraphics[page=2,width=\linewidth]{figs/tricat.pdf}
\caption{CDT }\label{fig:pdffig:2}
\end{subfigure}%
\qquad %-- that adds some space between th 2 figures
\begin{subfigure}[b]{0.28\linewidth}
\centering
\includegraphics[page=3,width=\linewidth]{figs/tricat.pdf}
\caption{with colours}\label{fig:pdffig:3}
\end{subfigure}%
\caption{Three PDF figures.}%
\label{fig:pdffig}
\end{figure}
%%%
%
\section{How to add references?}
References are best handled using Bib\TeX.
See the \texttt{refs/myreferences.bib} file.
A good cross-platform reference manager is \href{http://jabref.sourceforge.net/}{JabRef}.
\citet{Descartes37} wrote this and that~\citep{Voronoi08,Delaunay34}.
Instead of citing the whole paper~\citep{Delaunay34}, it is also possible to cite only the authors (\eg\ \citeauthor{Delaunay34}).
%%%
%
\section{Footnotes}
Footnotes are a good way to write text that is not essential for the understanding of the text\footnote{but please do not overuse them}.
%%%
%
\section{Equations}
Equations and variables can be put inline in the text, but also numbered.
Let $S$ be a set of points in $\mathbb{R}^d$.
The Voronoi cell of a point $p \in S$, defined $\mathcal{V}_{p}$, is the set of points $x \in \mathbb{R}^d$ that are closer to $p$ than to any other point in $S$; that is:
\begin{equation}
\mathcal{V}_p = \{x \in \mathbb{R}^{d} \ | \ \|x-p\| \, \leq \, \|x-q\|, \ \forall \, q \in S \}.
\end{equation}
The union of the Voronoi cells of all generating points $p \in S$ form the Voronoi diagram of $S$, defined VD($S$).
%%%
%
\section{Tables}
The package \texttt{booktabs} permits you to make nicer tables than the basic ones in \LaTeX.
See for instance \autoref{tab:example}.
\begin{table}
\centering
\begin{tabular}{@{}lrrcrrc@{}} \toprule
& \multicolumn{2}{c}{3D model} && \multicolumn{2}{c}{input} \\
\cmidrule{2-3} \cmidrule{5-6}
& solids & faces && vertices & constraints \\
\toprule
\textbf{campus} & 370 & 4~298 && 5~970 & 3~976 \\
\textbf{kvz} & 637 & 6~549 && 8~951 & 13~571 \\
\textbf{engelen} & 1~629 & 15~870 && 23~732 & 15~868 \\
\bottomrule
\end{tabular}
\caption{Details concerning the datasets used for the experiments.}%
\label{tab:example}
\end{table}
%%%
%
\section{Plots}
The best way is to use \href{http://matplotlib.org}{matplotlib}, or its more beautiful version (\href{http://stanford.edu/~mwaskom/software/seaborn/index.html}{seaborn}).
With these, you can use Python to generate nice PDF plots, such as that in Figure~\ref{fig:myplot}.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{plots/myplot.pdf}
\caption{A super plot}%
\label{fig:myplot}
\end{figure}
In the folder \texttt{./plots/}, there is an example of a CSV file of the temperature of Delft, taken somewhere.
From this CSV, the plot is generated with the script \texttt{createplot.py}.
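As an illustration, a minimal version of such a script could look like the sketch below (the CSV file name and the column names \texttt{date} and \texttt{temperature} are assumptions; adapt them to the actual file):
\begin{lstlisting}[language=Python]
import pandas as pd
import matplotlib.pyplot as plt

# Read the CSV; the file name and column names are assumed for this sketch.
df = pd.read_csv('plots/delft_temperature.csv', parse_dates=['date'])

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(df['date'], df['temperature'], linewidth=1)
ax.set_xlabel('Date')
ax.set_ylabel('Temperature (deg C)')
fig.tight_layout()

# Save as a vector PDF so it can be included with \includegraphics.
fig.savefig('plots/myplot.pdf')
\end{lstlisting}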
%%%
%
\section{Pseudo-code}%
\label{sec:code}
Please avoid putting code (Python, C++, Fortran) in your thesis.
Small excerpts are probably fine (for some cases), but do not put all the code in an appendix.
Instead, put your code somewhere online (\eg\ GitHub) and put \emph{pseudo-code} in your thesis.
The package \texttt{algorithm2e} is pretty handy, see for instance the \autoref{alg:walk}.
All your algorithms will be automatically added to the list of algorithms at the beginning of the thesis.
\begin{algorithm}
\KwIn{A Delaunay tetrahedralization $\mathcal{T}$, a starting tetrahedron $\tau$, and a query point $p$}
\KwOut{$\tau_r$: the tetrahedron in $\mathcal{T}$ containing $p$}
\BlankLine
\While{$\tau_r$ not found}
{
\For{$i \leftarrow 0$ \KwTo 3}
{
$\sigma_i \leftarrow$ get face opposite vertex $i$ in $\tau$\;
\If{Orient($\sigma_i, p$) $< 0$\nllabel{l:walk}}
{
$\tau \leftarrow$ get neighbouring tetrahedron of $\tau$ incident to $\sigma_i$\;
break\;
}
}
\If{$i=3$}
{
\tcp{all the faces of $\tau$ have been tested}
\Return{$\tau_r$ = $\tau$}
}
}
\caption[W\textsc{alk}]{W\textsc{alk} ($\mathcal{T}$, $\tau$, $p$)}%
\label{alg:walk}
\end{algorithm}
Observe that you can put labels on certain lines (with \texttt{\textbackslash{}nllabel\{\}}) and then refer to them: this is happening on line~\ref{l:walk} of \autoref{alg:walk}.
If you want to put some code (or XML for instance), use the package \texttt{listings}, \eg\ you can wrap it in a Figure so that it does not span over multiple pages.
\begin{figure}
\begin{footnotesize}
\begin{lstlisting}
<gml:Solid>
<gml:exterior>
<gml:CompositeSurface>
<gml:surfaceMember>
<gml:Polygon>
<gml:exterior>
<gml:LinearRing>
<gml:pos>0.000000 0.000000 1.000000</gml:pos>
<gml:pos>1.000000 0.000000 1.000000</gml:pos>
<gml:pos>1.000000 1.000000 1.000000</gml:pos>
<gml:pos>0.000000 1.000000 1.000000</gml:pos>
<gml:pos>0.000000 0.000000 1.000000</gml:pos>
</gml:LinearRing>
</gml:exterior>
<gml:interior>
...
</gml:surfaceMember>
</gml:CompositeSurface>
</gml:interior>
</gml:Solid>
\end{lstlisting}
\end{footnotesize}
\caption{Some GML for a \texttt{gml:Solid}.}%
\label{fig:codegml}
\end{figure}
%%%
%
\section{Miscellaneous}%
\label{sec:misc}
This is the way to properly write these abbreviations, \ie\ so that the spacing is correct.
And this is how you use an example, \eg\ like this.
You should use one \texttt{-} for an hyphen between words (`multi-dimensional'), two \texttt{--} for a range between numbers (`1990--1995'), and three \texttt{---} for a punctuation in a sentence (`I like---unlike my father---to build multi-dimensional models').
\bibliographystyle{abbrvnat}
\bibliography{refs/myreferences}
\end{document}
| {
"alphanum_fraction": 0.6991389148,
"avg_line_length": 33.46875,
"ext": "tex",
"hexsha": "cb68f1e18621285d45a066cf4735260dfcdd4843",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-07-09T14:56:21.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-07-09T11:03:18.000Z",
"max_forks_repo_head_hexsha": "c2d17a7a23d6e2740364a29f27232a329eec48e4",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "tudelft3d/latex-getting-started",
"max_forks_repo_path": "template/mytemplate.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "c2d17a7a23d6e2740364a29f27232a329eec48e4",
"max_issues_repo_issues_event_max_datetime": "2021-07-09T13:07:35.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-07-09T13:07:35.000Z",
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "tudelft3d/latex-getting-started",
"max_issues_repo_path": "template/mytemplate.tex",
"max_line_length": 262,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "c2d17a7a23d6e2740364a29f27232a329eec48e4",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "tudelft3d/latex-getting-started",
"max_stars_repo_path": "template/mytemplate.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-04T07:49:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-07-22T13:10:31.000Z",
"num_tokens": 3107,
"size": 9639
} |
%**********************************************************************%
%* Physics for Poets Learning Notes
%* 4th edition
%* Author: Robert March
%* Chapter: 11
%* Notes: camilo tejeiro
%**********************************************************************%
% article type 12 pt font.
\documentclass[12pt, letterpaper]{article}
%----------------------- External Packages ----------------------------%
% package to insert images to our doc.
\usepackage{graphicx}
%------------------ Dimensions and Page Layout--------------------------%
%----------------------- LaTeX Environments ---------------------------%
%------------------------- LaTeX Commands -----------------------------%
%------------------------- Document Content ---------------------------%
% make title built in command values
\title{Chapter 11, $E=mc^2$ and all that.}
\author{Learning Notes, Physics for Poets}
\date{}
\begin{document}
\maketitle
\section{The Meaning of $E=mc^2$}
\textit{Ever since Hiroshima, this formula has been associated in the public
mind with nuclear energy. However, we must emphasize (at the risk of
repetition) that this formula applies equally well to all forms of
energy. It is a universal description of nature; as valid for a bonfire
as for a nuclear weapon.}
$E=mc^2$ is a statement saying that, for all practical purposes,
mass and energy are identical; mass is energy and all energy has mass.
$c^2$ is just a conversion factor from units of mass to units of energy,
much like the factor used when converting from miles to km.
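As a rough illustrative figure (numbers rounded), converting just one kilogram of mass gives
$$E = mc^2 \approx 1\,\mathrm{kg}\times\left(3\times10^{8}\,\mathrm{m/s}\right)^2 = 9\times10^{16}\,\mathrm{J},$$
which is why even tiny changes in mass correspond to enormous amounts of energy.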
\section{The Mass Increase}
So just as energy can change, mass is no longer a constant.
What we used to call mass, we must now define as the rest mass, i.e.
the mass of the object at rest. This is because, as we speed
up an object to relativistic speeds, its mass changes;
from our point of view (i.e. our frame of reference) the object
starts to get more and more massive.
So we introduce the symbol $m_0$ to designate the mass of an
object at rest (the ``constant'' mass we are familiar with) and the
changing or relativistic mass then becomes:
$$m=\gamma m_0$$
From this equation, we can see that at rest our Lorentz factor ($\gamma$)
is 1 and our mass is equal to our rest mass, however as our object speeds
up, our Lorentz factor increases and our object starts becoming more
massive. As our object's speed approaches that of light we see that
our mass approaches infinity. Thus, this \textit{relativistic mass
increase} enforces the speed of light as our upper boundary; our
maximum speed.
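For reference, the Lorentz factor takes the standard form
$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$
which equals 1 when $v = 0$ and grows without bound as $v$ approaches $c$.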
\medskip
\textbf{but what does this mean?}
It means that as we do more work to increase the object's speed near
that of light, we are still increasing its momentum, but our
energy investment is going towards increasing the object's mass while
the increases in speed are very small and harder and harder to achieve.
\section{Summary}
Despite relativistic reinterpretations of space and time, momentum conservation
survives if we simply allow mass to vary with velocity according to
the Lorentz factor.
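In symbols, the momentum that is conserved is then
$$p = \gamma m_0 v.$$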
\end{document}
| {
"alphanum_fraction": 0.6247695144,
"avg_line_length": 40.675,
"ext": "tex",
"hexsha": "2877ae3c25722fa1aead2640f7faf952d226cf4b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "30b729e92b7fcfeffdb32a8afa4cd5d7082d4c25",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "camilotejeiro/book_learning_notes",
"max_forks_repo_path": "physics_for_poets-4ed-robert_march/chapter_11-emc2_and_all_that/chapter_11-emc2_and_all_that.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "30b729e92b7fcfeffdb32a8afa4cd5d7082d4c25",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "camilotejeiro/book_learning_notes",
"max_issues_repo_path": "physics_for_poets-4ed-robert_march/chapter_11-emc2_and_all_that/chapter_11-emc2_and_all_that.tex",
"max_line_length": 84,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "30b729e92b7fcfeffdb32a8afa4cd5d7082d4c25",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "camilotejeiro/book_learning_notes",
"max_stars_repo_path": "physics_for_poets-4ed-robert_march/chapter_11-emc2_and_all_that/chapter_11-emc2_and_all_that.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 733,
"size": 3254
} |
\chapter{Storing a Path as a Tree}
\label{append:treap}
\vspace*{2em}
In phase 3 (\autoref{algo:dham:ph3}) of the DHAM, we construct a path out of the 2 cycles being patched and search (DFS/BFS) the tree formed on this path by double rotation operations. Now, there are 2 parts to this problem: one is to store the path in such a way that the double rotation operations can be done efficiently, and the other is to identify valid double rotations, given such a path.
If we make the obvious decision to store the path as a linked list, so that a double rotation takes constant time (just swapping 2 of the links), the process of identifying a valid operation ends up being linear because we have to traverse the path.
If we instead store the path as an array, which makes generating valid operations faster, we find that executing the operation itself now requires updating up to a linear number of elements in the array.
To get around this difficulty, we store our path as a Randomized Search Tree \cite{treap} to get a $\bigO(\log n)$ bound on both of the operations.
\section{Randomized Search Tree}
The Randomized Search Tree (or treap), as given by Aragon and Seidel \cite{treap}, is based on 2 auxiliary operations---split (around a given key) and merge (assuming elements within the trees are in order of keys)---which implement the main operations like insertion, deletion, union, etc. The reason we choose this data structure is that we can use these auxiliary operations to also implement a double rotation by ``splitting'' out a sub-array and ``merging'' it at the end.
To attain all the desired functionality, we make some modifications to the data structure.
\subsection{Implicit Keys}
Doing a double rotation changes the position of elements in the structure. In this situation the keys (or indices) of these elements become inaccurate. So, instead of storing the key as a value within the node, we maintain the keys of all nodes implicitly.
This is achieved by storing the size of the tree rooted at a node, and using this value to compute keys on the fly.
The updated node looks like this:
\begin{lstlisting}
struct Node {
	Node *l, *r, *p;               // left child, right child, parent
	int val, y, subtree_size = 1;  // value, heap priority, size of subtree
	void recalc();
};
int cnt(Node* n) { return n ? n->subtree_size : 0; }
void Node::recalc() { subtree_size = cnt(l) + cnt(r) + 1; }
\end{lstlisting}
The key of a node is the number of elements with a smaller key which, because of the BST structure, is the number of elements in its left subtree plus the elements already passed on the way down. So while traversing down the tree, we can keep a running sum of these counts (adding the size of the left subtree plus one every time we descend to the right) to implicitly maintain the key of the node we are currently visiting.
Modifying the split function accordingly, we get
\begin{lstlisting}
pair<Node*, Node*> split(Node* n, int k) {
if (!n) return {};
if (cnt(n->l) >= k) {
auto pa = split(n->l, k);
n->l = pa.second;
n->recalc();
return {pa.first, n};
} else {
auto pa = split(n->r, k - cnt(n->l) - 1);
n->r = pa.first;
n->recalc();
return {n, pa.second};
	}
}
\end{lstlisting}
The merge function remains largely the same since we do not access the keys of a node, assuming them to already be in order.
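For completeness, a standard treap merge looks roughly as follows (a sketch only: it assumes the \texttt{y} priorities form a max-heap, that every key in \texttt{a} precedes every key in \texttt{b}, and---as in the listings above---parent pointers are not maintained):
\begin{lstlisting}
Node* merge(Node* a, Node* b) {
	if (!a || !b) return a ? a : b;
	if (a->y > b->y) {
		// a has the higher priority, so it stays on top
		a->r = merge(a->r, b);
		a->recalc();
		return a;
	} else {
		b->l = merge(a, b->l);
		b->recalc();
		return b;
	}
}
\end{lstlisting}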
\subsection{key $\iff$ value conversions}
With keys being maintained automatically, we only need to add the functionality to return the value given a key (accessing an element) and the key given a value (a search operation). These allow us to generate valid double rotations efficiently.
These can simply be implemented in $\bigO(\log n)$ as shown below
\begin{lstlisting}
int key(Node* root, Node* x) {
	if (root==nullptr || x==nullptr) return -1;
int ans = cnt(x->l);
while(x != root) {
auto par = x->p;
if (par->r == x)
ans += 1 + cnt(par->l);
x = par;
}
return ans;
}
int value(Node *n, int key) {
if (!n) return -1;
if (cnt(n->l) == key) return n->val;
else if (cnt(n->l) > key) return value(n->l, key);
else return value(n->r, key - cnt(n->l) - 1);
}
\end{lstlisting}
\newpage
\section{Double Rotation}
Armed with the tools described above, we can implement the double rotation operation to move the subarray $[l, r)$ to the index $k$ in $\bigO(\log n)$ time.
\begin{lstlisting}
void move(Node*& t, int l, int r, int k) {
Node *a, *b, *c;
	tie(a,b) = split(t, l); tie(b,c) = split(b, r - l); // b holds [l, r)
	// ins(x, y, pos) is assumed to split x at pos and merge y in between
	// (the usual insert-subtree helper; its definition is not shown here).
	if (k <= l) t = merge(ins(a, b, k), c);
	else t = merge(a, ins(c, b, k - r));
}
\end{lstlisting}
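As an illustrative call (assuming \texttt{root} holds the treap for the whole path), \texttt{move(root, 2, 5, 0);} moves the elements at positions $[2, 5)$ to the front of the path.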
| {
"alphanum_fraction": 0.7074799644,
"avg_line_length": 48.8260869565,
"ext": "tex",
"hexsha": "ea8eca04b70f7f19b38c112f1e0e8cf329dec36f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "LaughingBudda/hachikuji",
"max_forks_repo_path": "tex/chapters/ap-treap.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "LaughingBudda/hachikuji",
"max_issues_repo_path": "tex/chapters/ap-treap.tex",
"max_line_length": 470,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0d65ecec12dd843dfb7e3828ac3b5c3824ce6901",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "LaughingBudda/hachikuji",
"max_stars_repo_path": "tex/chapters/ap-treap.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1183,
"size": 4492
} |
\chapter{Identifying roots of non-linear equations}
| {
"alphanum_fraction": 0.7962962963,
"avg_line_length": 13.5,
"ext": "tex",
"hexsha": "2d2c310d4e7581968c7425ed581ba825eff687f7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/computer/rootsNonLinear/00-00-Chapter_name.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/computer/rootsNonLinear/00-00-Chapter_name.tex",
"max_line_length": 51,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/computer/rootsNonLinear/00-00-Chapter_name.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 12,
"size": 54
} |
\chapter*{Vita}
\addcontentsline{toc}{chapter}{Vita} %add Vita section to Table of Contents
Vita may be provided by doctoral students only. The length of the vita is preferably one page. It may include the place of birth and should be written in third person. This vita is similar to the author biography found on book jackets. | {
"alphanum_fraction": 0.7926829268,
"avg_line_length": 109.3333333333,
"ext": "tex",
"hexsha": "0251ab27cf867ec9dfb1f0d93c84b4662e668d67",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0b547dca6b5a25a0d5d3fb36f34b41de4161ad0f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "scottschoenjr/gt-diss",
"max_forks_repo_path": "vita.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0b547dca6b5a25a0d5d3fb36f34b41de4161ad0f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "scottschoenjr/gt-diss",
"max_issues_repo_path": "vita.tex",
"max_line_length": 235,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0b547dca6b5a25a0d5d3fb36f34b41de4161ad0f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "scottschoenjr/gt-diss",
"max_stars_repo_path": "vita.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 77,
"size": 328
} |
% Awesome Source CV LaTeX Template
%
% This template has been downloaded from:
% https://github.com/darwiin/awesome-neue-latex-cv
%
% Author:
% Christophe Roger
%
% Template license:
% CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
%Section: Interests
\section{\texorpdfstring{\color{Blue}Interests}{Interests}}
\begin{tabular}{rl}
	\textsc{Machine learning:} & iOS, Android, \textbf{Windows Phone}\\
\textsc{Web development:} & HTML5, CSS3 \\
\textsc{Reading} & \\
	\textsc{Writing} & \\
\textsc{Open source development} & \\
\end{tabular} | {
"alphanum_fraction": 0.6728187919,
"avg_line_length": 29.8,
"ext": "tex",
"hexsha": "34c53317b7bad6b016fe995298099163f2f5f104",
"lang": "TeX",
"max_forks_count": 15,
"max_forks_repo_forks_event_max_datetime": "2021-12-17T15:38:47.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-04-15T13:12:34.000Z",
"max_forks_repo_head_hexsha": "0d235a48f29aeb4070aafdccbaef3356e0862fd2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alanverdugo/resume",
"max_forks_repo_path": "section_interests.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0d235a48f29aeb4070aafdccbaef3356e0862fd2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alanverdugo/resume",
"max_issues_repo_path": "section_interests.tex",
"max_line_length": 72,
"max_stars_count": 27,
"max_stars_repo_head_hexsha": "0d235a48f29aeb4070aafdccbaef3356e0862fd2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alanverdugo/resume",
"max_stars_repo_path": "section_interests.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-17T15:38:41.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-04-15T20:28:36.000Z",
"num_tokens": 185,
"size": 596
} |
\section{Big Data}
\begin{definition}[ACID]\label{def:acid}
\begin{description}
\item[Atomicity:] An action will either completely fail or completely succeed, nothing in between.
        \item[Consistency:] An action takes the database from one valid state to another valid state; all defined rules and constraints still hold afterwards.
\item[Isolation:] Executing multiple actions simultaneously yields the same result as if all the actions
            were executed serially (that is, no messing up because of parallel execution).
\item[Durability:] If an action is committed, it will remain in the system even if there are
power losses, crashes or errors.
\end{description}
\end{definition}
\begin{definition}[At-least-once ingestion]\label{def:atleastonce}
If any message is lost, retransmit it. Thus, we can guarantee that a message will be received.
There is a possibility of duplicates.
\end{definition}
\begin{definition}[At-most-once ingestion]\label{def:atmostonce}
    Every message is delivered at most once, so there are no duplicates.
    There is, however, a possibility of message loss.
\end{definition}
\begin{definition}[BASE]\label{def:base}
    An alternative way of analyzing a DBMS\@. It is less strict than \nameref{def:acid}.
\begin{description}
        \item[Basic Availability] the database should work most of the time
\item[Soft-state] replicas don't always have to be consistent
\item[Eventual consistency] An update doesn't have to be seen by all peers right away
\end{description}
\end{definition}
\begin{definition}[CAP theorem]\label{def:captheorem}
Working with Big Data involves dealing with inputs so large that conventional methods do not work.
    This also means that we have to compromise, not opting for ``optimal'' solutions like an RDBMS.
We have
\begin{description}
\item[Consistency:] Whether or not copies of data are the same across all nodes.
For example, having a server in London with data X and another server in USA with data X', where X' is intended to be equal to X, but it is not.
        \item[Availability:] Whether or not we can guarantee success or failure (alternatively: every request returns a non-error response).
        \item[Partition tolerance:] If a node goes down, will the system continue to operate?
\end{description}
    The CAP theorem states that any big-data system can only achieve two out of three letters (CA, CP or AP).
\end{definition}
\begin{proof}
Assume a system has two nodes: A and B. A and B cannot communicate.
We write data X to node A and B. Then, we write $X'$ to node A. Following that we want to read $X$ from node B.
If node A and B do not talk together, we will not achieve consistency (node B doesn't know X is updated in A).
    This also means we struggle with availability: $X'$ has not been written to both nodes.
    If we do let nodes A and B talk together, then node B depends on A, so we are not partition tolerant.
Hence, achieving all three letters is not possible in this case.
\end{proof}
\begin{definition}[Consistent hashing]\label{def:consistenthashing}
    A hashing scheme that scales well. When the number of buckets changes (the domain of the function expands or shrinks), you don't need to
    remap the majority of the keys as you would have to in normal hashing.
    Contrast this with a simple hashing algorithm $h(x) = x \bmod k$: when $k$ changes, almost every key ends up in a different bucket.
    A minimal hash-ring sketch of one common construction is given right after this definition.
\end{definition}
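The following hash-ring sketch is purely illustrative (the \texttt{HashRing} type, its method names, and the use of a single hash function with no virtual nodes are assumptions for brevity, not taken from any particular library): node names are hashed onto a circle of values, and a key is served by the first node clockwise from its own hash.
\begin{verbatim}
#include <map>
#include <string>
#include <functional>

struct HashRing {
  std::map<std::size_t, std::string> ring; // position on circle -> node
  std::hash<std::string> h;

  void addNode(const std::string& node)    { ring[h(node)] = node; }
  void removeNode(const std::string& node) { ring.erase(h(node)); }

  // Only the keys between the affected node and its predecessor on the
  // circle move when a node is added or removed.
  std::string lookup(const std::string& key) const {
    if (ring.empty()) return "";
    auto it = ring.lower_bound(h(key));      // first node clockwise
    if (it == ring.end()) it = ring.begin(); // wrap around the circle
    return it->second;
  }
};
\end{verbatim}
Production systems typically place several \emph{virtual nodes} per physical node on the ring to even out the load.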
\begin{definition}[Equi-depth histogram]\label{def:equidepthhistogram}
    Take the query:
    \begin{verbatim} SELECT * FROM Person WHERE AGE > 24 \end{verbatim}
    Here, we can optimize the query by using an equi-depth histogram:
    instead of preparing to return each row, we first process the $AGE$ column and count
    how many occurrences we get. Then, we estimate parallelism, return size, etc., and answer the query.
If $AGE$ is indexed, we could make estimates like this in $O(1)$.
\end{definition}
\begin{definition}[Equi-width histogram]
Similar to \nameref{def:equidepthhistogram}, except we fix bucket sizes.
For example, for an age query we don't return the number of people above a threshold age,
instead we bucketize, e.g. everyone under 24 in one bucket, everyone over 24 in another,
or splitting per 5 years, etc.
\end{definition}
\begin{definition}[Exactly once ingestion]\label{def:exactlyonce}
Every message is delivered once. No duplicates. Hard to implement (you need to retain data for a longer amount of time and have synchronization protocols).
\end{definition}
\begin{definition}[Fault Tolerance]
In Big Data, there are two types of failure that can occur
\begin{description}
        \item[Soft:] unexpected errors, such as null values or code errors
\item[Hard:] loss of nodes, etc.
\end{description}
\end{definition}
\begin{definition}[Five V's of Big Data]\label{def:fiveV}
\begin{description}
\item[Volume:] How much data do we have?
\item[Veracity:] Can we trust the data we have?
\item[Variety:] What types of data do we read?
\item[Value:] Is it really worth while to do big data solutions?
\item[Velocity:] How fast is data coming in?
\end{description}
\end{definition}
\begin{definition}[Log-structured Merge Tree]\label{def:LSM}
(LSM) A data-structure optimized for indexed access with a high insert volume.
It typically stores a lot of data in key-value form in-memory, and flushes out
when exceeding a memory threshold.
\end{definition}
\begin{definition}[Map-side join]\label{def:mapjoin}
Similar to \nameref{def:reducejoin}, but we avoid the second pass of sorting.
A few conditions are necessary to do the join in the first stage of our job.
\begin{enumerate}
\item Data is already sorted by join key inside partitions
\item Input datasets have same number of partitions (difficult if doing joins
for $> 2$ tables).
\end{enumerate}
\end{definition}
\begin{definition}[Reduce-side join]\label{def:reducejoin}
On the map-side of things, we simply sort the data by join-key. E.g. a mapper A
will give reducer A' all entries with key K. Then, reducer A' performs another sort,
joining entries from two or more tables on K.
\end{definition}
\begin{definition}[Side-Data]\label{def:sidedata}
Read-only data needed by a job to process a dataset.
\end{definition}
\begin{definition}[Uber task]\label{def:ubertask}
    A term used in MapReduce to denote a job so small that parallelizing it
    will not yield a benefit, thus leaving the job to be resolved in the same JVM
as the master.
\end{definition}
\begin{definition}[Write-ahead log]\label{def:WAL}
    WAL:\@ write actions to a log before performing them. Important for \nameref{def:acid} properties.
\end{definition}
| {
"alphanum_fraction": 0.7315436242,
"avg_line_length": 50.3970588235,
"ext": "tex",
"hexsha": "0639d8a924e60ad034acd84aa893db9ed1258e0e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8afc580cce2ece4f129c006af3879bb738bb2269",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "andsild/NotusVitae",
"max_forks_repo_path": "src/def/bigdata.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8afc580cce2ece4f129c006af3879bb738bb2269",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "andsild/NotusVitae",
"max_issues_repo_path": "src/def/bigdata.tex",
"max_line_length": 159,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8afc580cce2ece4f129c006af3879bb738bb2269",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "andsild/NotusVitae",
"max_stars_repo_path": "src/def/bigdata.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1685,
"size": 6854
} |
%********************************************************************
% Appendix
%*******************************************************
% If problems with the headers: get headings in appendix etc. right
%\markboth{\spacedlowsmallcaps{Appendix}}{\spacedlowsmallcaps{Appendix}}
\chapter{Appendix}
No appendix content yet. | {
"alphanum_fraction": 0.4829721362,
"avg_line_length": 40.375,
"ext": "tex",
"hexsha": "4143b6afdec1bfef4a621b080bfbc7a002275ec9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "04ada5a7ea6c8f65b3185e4f73de1768d6fa501f",
"max_forks_repo_licenses": [
"CC0-1.0",
"CC-BY-4.0"
],
"max_forks_repo_name": "grself/BASV316_Text_Markdown",
"max_forks_repo_path": "source/00Appendix.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "04ada5a7ea6c8f65b3185e4f73de1768d6fa501f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0",
"CC-BY-4.0"
],
"max_issues_repo_name": "grself/BASV316_Text_Markdown",
"max_issues_repo_path": "source/00Appendix.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "04ada5a7ea6c8f65b3185e4f73de1768d6fa501f",
"max_stars_repo_licenses": [
"CC0-1.0",
"CC-BY-4.0"
],
"max_stars_repo_name": "grself/BASV316_Text_Markdown",
"max_stars_repo_path": "source/00Appendix.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 58,
"size": 323
} |
% !TeX root = ../main.tex
\section{Extended Backus-Naur Form of HCL grammar}
\label{AppendixEBNF}
\begin{sidewaysfigure}
\begin{align*}
\texttt{<Program>}\to & \texttt{ <Cmds> \$}\\
\texttt{<Cmds>}\to & \texttt{ \{<Cmd>\}}\\
\texttt{<Cmd>}\to & \texttt{ <VarDcl> linebreak}\\
| & \texttt{ <Assign> linebreak}\\
| & \texttt{ <Expr> linebreak}\\
| & \texttt{ <ReturnCmd> linebreak}\\
\texttt{<Dcl>}\to & \texttt{ <ImplicitType> identifier [equals <DclValue>]}\\
\texttt{<ImplicitType>}\to & \texttt{ <Type>}\\
| & \texttt{ func}\\
| & \texttt{ var}\\
\texttt{<Type>}\to & \texttt{ number}\\
| & \texttt{ text}\\
| & \texttt{ tuple sqBracketL [<TypeList>] sqBracketR}\\
| & \texttt{ list sqBracketL [<Type>] sqBracketR}\\
| & \texttt{ bool}\\
| & \texttt{ func sqBracketL [<TypeListNoneAndGenerics>] sqBracketR}\\
| & \texttt{ none}\\
\texttt{<TypeList>}\to & \texttt{ <Type> [comma <TypeList>]}\\
\texttt{<TypeListGenerics>}\to & \texttt{ <TypeGenerics> [separator <TypeListGenerics>] }\\
\texttt{<TypeNoneAndGenerics>}\to & \texttt{<TypeGenerics>}\\
| & \texttt{ none}\\
\end{align*} %pagebreak
\end{sidewaysfigure}
\begin{sidewaysfigure}
\begin{align*}
\texttt{<TypeListNoneAndGenerics>} \to & \texttt{ <TypeNoneAndGenerics> [separator <TypeListNoneAndGenerics>]}\\
\texttt{<TypeGenerics>}\to & \texttt{<Type>}\\
| & \texttt{ identifier}\\
\texttt{<Expr>}\to & \texttt{ <FunctionCall>}\\
| & \texttt{ <Value>}\\
| & \texttt{ parenL <Expr> parenR }\\
\texttt{<Value>}\to & \texttt{ <Literal>}\\
| & \texttt{ identifier}\\
\texttt{<Literal>}\to & \texttt{ literalNumber}\\
| & \texttt{ literalText}\\
| & \texttt{ literalBool}\\
| & \texttt{ <LiteralTuple>}\\
| & \texttt{ <LiteralList>}\\
\texttt{<Values>}\to & \texttt{ <Value> [comma <Values>]}\\
\texttt{<LiteralTuple>}\to & \texttt{ parenL <Values> parenR}\\
\texttt{<LiteralList>}\to & \texttt{ sqBracketL <Values> sqBracketR}\\
\texttt{<DclValue>}\to & \texttt{ <Expr>}\\
| & \texttt{ <LambdaExpr>}\\
\texttt{<Assign>}\to & \texttt{ identifier equals <DclValue>}\\
\texttt{<LambdaExpr>}\to & \texttt{ parenL [<FunDclParams>] parenR colon <TypeNoneAndGenerics> \{linebreak\} <LambdaBody>}\\
\texttt{<LambdaBody>}\to & \texttt{ curlyL <Cmds> curlyR}\\
\end{align*} %pagebreak
\end{sidewaysfigure}
\begin{sidewaysfigure}
\begin{align*}
\texttt{<FunDclParams>}\to & \texttt{ <FunDclParam> [comma <FunDclParams>]}\\
\texttt{<FunDclParam>}\to & \texttt{ <TypeListGenerics> identifier}\\
\texttt{<FunctionCall>}\to & \texttt{ identifier}\\
| & \texttt{ <Arg> identifier [<Args>]}\\
\texttt{<Args>}\to & \texttt{ \{<Arg>\}+}\\
| & \texttt{ parenL \{<Arg>\}+ parenR}\\
\texttt{<Arg>}\to & \texttt{[colon]<Value>}\\
| & \texttt{ <LambdaExpr>}\\
| & \texttt{ <LambdaBody>}\\
\texttt{<ReturnCmd>}\to & \texttt{ return <Expr>}
\end{align*}
\end{sidewaysfigure}
\newpage | {
"alphanum_fraction": 0.6389954656,
"avg_line_length": 42.1617647059,
"ext": "tex",
"hexsha": "4e37735c2094cae31e73b62bd60a7a79213e991e",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-07-04T13:55:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-04T13:55:53.000Z",
"max_forks_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "C0DK/P3-HCL",
"max_forks_repo_path": "report/Appendix/ExtendedBackusNaur.tex",
"max_issues_count": 10,
"max_issues_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac",
"max_issues_repo_issues_event_max_datetime": "2018-05-13T17:37:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-02-17T14:30:17.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "C0DK/P3-HCL",
"max_issues_repo_path": "report/Appendix/ExtendedBackusNaur.tex",
"max_line_length": 123,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "b6adbc46cc7347aacd45dce5fddffd6fee2d37ac",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "C0DK/P3-HCL",
"max_stars_repo_path": "report/Appendix/ExtendedBackusNaur.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-09T11:33:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-02-08T12:34:50.000Z",
"num_tokens": 1156,
"size": 2867
} |
% This is the document class for FARs, derived from the latex article
% class
\documentclass{mfe-nzers}
%If a document is typeset as draft it will be double spaced, with
%linenumbers, and with a watermark \documentclass[draft]{aebr}
% The FAR requires two titles. The first is the short title which
% goes on the page footers. The second is the long title which goes on
% the cover page. You are only allowed to use the LaTeX macros \\
% and \emph inside of titles safely. Others may result in
% unpredictable behaviour.
\title{Example report}{An example Fisheries Assessment Report (FAR)}
% Each FAR has a subtitle
\subtitle{A report for testing purposes}
%The date must be split into two parts for compatibility with reports.
% The year is specified with \date, and the month is specified with \reportmonth
\date{2015}
\reportmonth{January}
% Authors must be listed with their full names separated by \and.
\author{John Smith \and Jane E. Smith}
% The subcaption package is used to create subfigures in this example.
\usepackage{subcaption}
% The lipsum package is used to generate filler text. This won't be
% needed in your actual report.
\usepackage{lipsum}
% Each report has an ISBN and report number. If any of these are not
% included a place holder value will be used.
\isbn{XX-XXXXX-XX}
\reportno{XX}
% This specified the location of the bibliography file
\addbibresource{test.bib}
% The document begins here
\begin{document}
% Generate the title page
\maketitle
% Generate the table of contents.
\tableofcontents
% An FAR must contain an executive summary, for which \summary will generate a
% title and table of contents entry.
\summary
% The summary must start with a citation for the current document which can be
% automatically generated using \citeself.
\citeself
\lipsum[1]
\clearpage
\section{Introduction}
%filler text
\lipsum[1]
\begin{figure}[h]
\begin{center}
\includegraphics[width=40mm,height=40mm]{FAR}
\end{center}
\caption{This is a caption that is longer than one line. We are just
making sure that it wraps successfully. The figure is the cover of
the report.}
\end{figure}
\section{Methods}
\subsection{Details}
\lipsum[2]
% An example of how to do subfigures using subcaption.
\begin{figure}[h]
\begin{center}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=40mm]{FAR}
\caption{Cover of a Fisheries Assessment Report (FAR).}
\end{subfigure}\qquad
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=40mm]{FAR}
\caption{Cover of an Aquatic Environment and Biodiversity Report
(AEBR).}
\end{subfigure}
\end{center}
\end{figure}
\section{Results}
\section{Discussion}
\subsection{A first level subsection}
\subsubsection{A second level subsection}
\lipsum[3]
\paragraph{A minor heading}
\lipsum[5]
Examples of how citations can be used in different ways:
% Examples of how to do citations. Note the cite, citet and citep work as standard.
\begin{itemize}
\item citet \citet{baker_nzclassification_2010}
\item citep \citep{doc_sealion_2009}
\item parencite \parencite{gales_phocarctos_2008}
\item nptextcite \nptextcite{mpi_review_2012}
\item fullcite \fullcite{mpi_review_2012}
\item fullcitebib \fullcitebib{mpi_review_2012}
\item citeyear \citeyear{robertson_population_2011}
%\item \footcite{mpi_review_2012}
\item textcite \textcite{roe_necropsy_2007}
\end{itemize}
% Bibliography should be printed starting on a new page.
\clearpage
\printbibliography
%End of document
\end{document}
| {
"alphanum_fraction": 0.7625142207,
"avg_line_length": 26.6363636364,
"ext": "tex",
"hexsha": "d7e7e88093ef86f8dbd4ecb76a7fe3df83f456cc",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "54a2e158e1e341096728f30331c762a4b591bb45",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "MfE-NZ/mfe-latex-templates",
"max_forks_repo_path": "mfe-nzers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "54a2e158e1e341096728f30331c762a4b591bb45",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "MfE-NZ/mfe-latex-templates",
"max_issues_repo_path": "mfe-nzers.tex",
"max_line_length": 84,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "54a2e158e1e341096728f30331c762a4b591bb45",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "MfE-NZ/mfe-latex-templates",
"max_stars_repo_path": "mfe-nzers.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 989,
"size": 3516
} |
\documentclass[10pt]{article}
\usepackage{hyperref}
\title{ADMIT Code Testing}
\author{Marc Pound}
\oddsidemargin 0.0 pt
\textwidth 6.5in
\textheight 9in
\topmargin -0.25in
\begin{document}
\maketitle
\section{Unit Tests}\label{s-unittests}
A unit test should test only the functionality and correctness of a class
and not rely too heavily on external classes. The AT and BDP base classes
can be used in most cases where an AT or BDP is needed for the unit test.
Unit tests should aim to cover as much of the code as possible, even
if for trivial tests.
We have adopted the Python
\href{http://docs.python.org/2/library/unittest.html}{\tt unittest}
module as our unit testing framework. This module is the incorporation
of the PyUnit package into the Python main line. The structure and use
of {\tt unittest} is similar to cppunit and JUnit that we are already
familiar with through CARMA development. Furthermore, since it is in
the Python main line, we do not have to worry about relying on external
packages with uncertain future support.
Use of {\tt unittest} is straightforward. Unit test classes derive
from {\tt unittest.TestCase} and define setUp() and tearDown() methods
for initialization and clean up. Any method with name prefix {\tt test\_}
automatically gets run by unittest.main(). Note that the test methods
are run in alphabetical order, so use method naming to control test order
if you need to or set up a suite following the web page instructions.
Various assert methods are provided to test conditions, which the test
runner will collect and report at the end of the run. Note the {\tt
assertRaises()} method provided: it is important for us to test that
exceptions are raised under exceptional conditions. See the unittest web
page and unittest\_FM.py for further examples.
\section{Integration Tests}\label{s-integrationtests}
The purpose of integration testing is to verify that individual tasks
and classes work correctly as a group. For example, an integration
test might insert several ATs into the FlowManager, then run the flow
verifying the output at each stage against fiducial data. The {\tt unittest}
infrastructure should also work well for conducting integration tests. Using
it also means we only have to learn one package for all of our testing.
\section{Guidelines For Writing Tests}\label{s-guidelines}
Below are some simple guidelines for creating unit and integration tests.
Following these guidelines will make automated testing easier as well
as help us to understand each other's tests.
\noindent Requirements for unit and integration tests:
\begin{enumerate}
\item All unit and integration tests must live in a {\tt test} subdirectory.
\item All unit test file names must have the prefix {\tt unittest\_}, e.g. ``unittest\_FM.py'' is the Flow Manager unit test.
\item All integration test file names must have a prefix {\tt integrationtest\_}.
\item All data required for integration testing must reside either in
the test subdirectory in which the test code resides or in \$ADMIT/admit/data.
\item Unit tests should minimize clutter to stdout so that the PASS/FAIL output is easy to see.
\end{enumerate}
The top-level Makefile now has a target `make unit' which will run
unit tests and `make integration' to run integration tests. Both use
a simple {\it find} command to just-in-time generate the list of tests to
be run, e.g.
\begin{verbatim}
find . -path \*test/unittest_\* -print -exec python {} \;
find . -path \*test/integrationtest_\* -print -exec casarun {} \;
\end{verbatim}
%\section{Code Coverage}\label{s-codecoverage}
%
%Code coverage measurements indicate the effectiveness
%of tests by showing which code is and isn't actually
%executed by the tests. The coverage tool most recommended is
%\href{http://nedbatchelder.com/code/coverage}{coverage.py}.
%Currently, our codebase is not so large that we need coverage measurements,
%but we can think about whether this is something we want.
%
\section{Automated Build and Testing}\label{s-automated}
Automatic compiling and running of unit and integration tests is
managed by buildbot (\url{http://www.buildbot.net}). We have leveraged
the buildbot master that was set up on chara.astro.umd.edu for CARMA,
and added an ADMIT buildbot slave on eris.astro.umd.edu. The build
checks out a full CVS tree, builds source and documentation, and
runs unit and integration tests. This is done once per hour, but
can also occur in response to checkins or a build request submitted
on the web page. Built documentation is viewable at \url{http://carma.astro.umd.edu/admit}.
Instructions on how to set up buildbot and the files needed for our
particular installation are in \$ADMIT/buildbot.
\end{document}
| {
"alphanum_fraction": 0.7827280405,
"avg_line_length": 45.9805825243,
"ext": "tex",
"hexsha": "77416fcde90708a6507d309419669f63fafb27f0",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2017-03-30T18:58:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-11-10T14:10:22.000Z",
"max_forks_repo_head_hexsha": "1cae54d1937c9af3f719102838df716e7e6d655c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "teuben/admit",
"max_forks_repo_path": "doc/CodeTesting.tex",
"max_issues_count": 48,
"max_issues_repo_head_hexsha": "1cae54d1937c9af3f719102838df716e7e6d655c",
"max_issues_repo_issues_event_max_datetime": "2021-09-08T14:51:10.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-10-04T01:25:33.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "teuben/admit",
"max_issues_repo_path": "doc/CodeTesting.tex",
"max_line_length": 125,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "bbf3d79bb6e1a6f7523553ed8ede0d358d106f2c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "astroumd/admit",
"max_stars_repo_path": "doc/CodeTesting.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-03T19:23:06.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-01T17:26:28.000Z",
"num_tokens": 1116,
"size": 4736
} |
\subsection{The true density matrix}
\begin{frame}[fragile]
\frametitle{The \emph{true} density matrix}
\framesubtitle{Averaging several contributions}
\footnotesize
\begin{block}{Density matrix using non-equilibrium Green functions}
\vskip -2ex
\begin{align*}
\only<2->{\color{gray}}
\DM &
\only<2->{\color{gray}}
=\frac1{2\pi}
\iint_\BZ\dEBZ\cd \kk \dd\E\, \sum_\idxE\Spec_{\idxE,\kk}(\E) n_{F,\idxE}(\E)
\eikr
\\
\only<2->{\color{gray}}
\DM^\varsigma &
\only<2->{\color{gray}}
=
{\frac i{2\pi}\iint_\BZ\dEBZ\cd \kk\dd\E%
\left[\G_\kk - \G^\dagger_\kk \right]\eikr n_{F,\varsigma}(\E)}
+
{\frac1{2\pi}\sum_{\idxE'\neq\varsigma}\iint_\BZ\dEBZ\cd \kk\dd\E\, %
\G_\kk \Scat_{\idxE',\kk}
\G^\dagger_\kk\eikr\big[n_{F,\idxE'}(\E)-n_{F,\varsigma}(\E)\big]}
\\
\DM^\varsigma &=
{\DE^\varsigma}
+
{\sum_{\idxE'\neq\varsigma}\ncor_{\idxE'}^\varsigma}
\uncover<2->{=
\only<2->{\color{ok}}
% Essentially these
\langle\dots\rangle n_{F,\varsigma}(\E)
+
\sum_{\idxE'\neq\varsigma}\langle\dots\rangle [n_{F,\idxE'}(\E)-n_{F,\varsigma}]}
\end{align*}
\end{block}
\pause
\pause
\begin{center}
\def\eta{0.1}%
\def\radius{3.25}%
\def\lineS{-1}%
\def\poles{4}%
\def\poleSep{.25}%
% Calculate alpha angle
\pgfmathparse{\poleSep*(\poles+.5)/\radius}%
\edef\betaA{\pgfmathresult}%
\pgfmathparse{atan(\betaA)}%
\edef\alphaA{\pgfmathresult}%
\pgfmathparse{asin(\betaA)}%
\edef\betaA{\pgfmathresult}%
\begin{tikzpicture}[scale=.75]
% The axes
\begin{scope}[draw=gray!80!black,thick,->]
\draw (-2*\radius+\lineS-.5,0) -- (\radius+1.5,0) node[text=black,below] {$E$};
\draw (0,0) -- (0,\radius+.5) node[text=black,left] {$\Im$};
\end{scope}
\node[below] (mu-1) at (0,0) {$\mu_1$};
% The specific coordinates on the path
\coordinate (EB) at (-2*\radius+\lineS,\eta);
\coordinate (C-mid) at ({-\radius+\lineS-sin(\alphaA)*\radius},{cos(\alphaA)*\radius});
\coordinate (C-end) at (\lineS,{\poleSep*(\poles+.5)});
\coordinate (L-end) at (\radius,{\poleSep*(\poles+.5)});
\coordinate (L-end-end) at (\radius+1,{\poleSep*(\poles+.5)});
\coordinate (real-L-end) at (\radius,\eta);
\coordinate (real-L-end-end) at (\radius+1,\eta);
\begin{scope}[thick]
% The path (we draw it backwards)
\draw[->-=.3,very thick] (L-end) -- node[above right]
{$\mathcal L$} (C-end);
\draw[->-=.333,->-=.666,very thick] (C-end) to[out=90+\betaA,in=\alphaA] (C-mid)
node[above]
{$\mathcal C$}
to[out=180+\alphaA,in=90] (EB);
\draw[->-=.25,->-=.75] (EB) -- (real-L-end) node[above left] {$\mathcal R$};
% draw the continued lines
\draw[densely dotted] (real-L-end) -- (real-L-end-end);
\draw[densely dotted] (L-end) -- (L-end-end);
\end{scope}
% Draw the poles
\foreach \pole in {1,...,14} {
\ifnum\pole>\poles
\draw (0,\pole*\poleSep) circle (2pt);
\else
\fill (0,\pole*\poleSep) circle (2pt);
\fi
}
\node[left,anchor=east] at (0,{\poleSep*(\poles/2+.5)}) {$z_\nu$};
% correct size
\path[use as bounding box] (-8,-.5) rectangle ++(13,4.5);
\draw[densely dotted] (real-L-end-end) to[out=0,in=0] (L-end-end);
\draw[densely dotted,thick] (EB) -- ++(-.5,0);
% Draw the 2nd poles
\def\muB{1}
\node[below] (mu-2) at (\muB,0) {$\mu_2$};
\foreach \pole in {1,...,14} {
\ifnum\pole>\poles
\draw[bad] (\muB,\pole*\poleSep) circle (2pt);
\else
\fill[bad] (\muB,\pole*\poleSep) circle (2pt);
\fi
}
\def\muC{2}
\node[below] (mu-3) at (\muC,0) {$\mu_3$};
\foreach \pole in {1,...,14} {
\ifnum\pole>\poles
\draw[ok] (\muC,\pole*\poleSep) circle (2pt);
\else
\fill[ok] (\muC,\pole*\poleSep) circle (2pt);
\fi
}
\draw[<->]%decorate,decoration=brace]
($(mu-2.south)+(0,0)$) --
node[below left=6pt] {$\ncor^1_2,\color{bad}\ncor^2_1$}
($(mu-1.south)+(0,0)$);
\draw[<->]%decorate,decoration=brace]
($(mu-3.south)+(0,-.4)$) --
node[below=3pt] {$\color{bad}\ncor^2_3\color{black},\color{ok}\ncor^3_2$}
($(mu-2.south)+(0,-.4)$);
\draw[<->]
($(mu-3.south)+(0,-1.3)$) --
node[below=4pt] {$\ncor^1_3,\color{ok}\ncor^3_1$}
($(mu-1.south)+(0,-1.3)$);
\node at ($(mu-3.south east) + (1.5, -0.5)$) {3 electrodes!};
\end{tikzpicture}
\end{center}
\end{frame}
\begin{frame}
\frametitle{The \emph{true} density matrix}
\framesubtitle{Methods}
\begin{block}{Example of numeric integration of equilibrium contour}
\begin{center}
\begin{tikzpicture}[scale=.8]
\begin{axis}[width=14cm,height=8cm,name=circ,only marks,
ymin=0,ymax=15.5,xmin=-31,xmax=1.5,
xtick={-28,-24,-20,-16,-12,-8,-4},
extra x ticks={0.25},extra x tick label={$\mu$},
xlabel={Real Energy [eV]},
ylabel={Imaginary Energy [eV]}]
\addplot table {../data/EQ_circle.dat};
\addplot table {../data/EQ_fermi.dat};
\addplot table {../data/EQ_pole.dat};
\draw[densely dashed,green!50!black,very thick] (axis cs:-30,0.1) --
(axis cs:2,0.1);
\draw[->,>=latex] (axis cs:-.5,0) -- (axis cs:-8.75,2.2);
\draw[->,>=latex] (axis cs:-.5,1.25) -- (axis cs:-8.75,10.75);
\node[rotate=35] at (axis cs:-21.5,11.5) {Gauss-Legendre};
\end{axis}
\begin{axis}[width=6cm,height=5cm,only marks,
at={($(circ.south)+(0,1cm)$)},anchor=south,
xtick={-0.5,0,0.5},
extra x ticks={0.25},extra x tick label={$\mu$},
xmin=-.5,xmax=.75,ymin=0,ymax=1.3]
\addplot table {../data/EQ_circle.dat};
\addplot table {../data/EQ_fermi.dat};
\addplot table {../data/EQ_pole.dat};
\draw[<->] (axis cs:.3,0.568494) -- node[sloped,anchor=south] {$2\pi k_BT$} (axis cs:.3,0.406067);
\draw[densely dashed,green!50!black,very thick]
(axis cs:-1,0.03) -- (axis cs:1,0.03);
\node[anchor=south] at (axis cs:0.25,1) {Gauss-Fermi};
\node[rotate=90,anchor=south] at (axis cs:0.25,.5) {Poles};
\end{axis}
\end{tikzpicture}
\end{center}
\end{block}
\begin{block}<2->{Example of numeric integration of non-equilibrium contour, $\delta \E
\approx 0.01\,\mathrm{eV}$}
\begin{center}
\begin{tikzpicture}[scale=.8]
\begin{axis}[width=15cm,height=3cm,
ymin=0,ymax=1.5,xmin=-1,xmax=1,gen/.style={only marks,opacity=.7},
xlabel={Energy [eV]},
ylabel={Weight}]
\addplot[gen,bad,domain=-.55:.55,samples=30] {1./(exp((x-0.5)/0.01)
+ 1)- 1./(exp((x+0.5)/0.01) + 1)};
\draw[<->,bad,very thick] (axis cs:-.5, .1) -- (axis cs:.5,.1) node[midway,above]
{$n_F(\E+0.5\,\mathrm{eV}) - n_F(\E-0.5\,\mathrm{eV})$};
\end{axis}
\end{tikzpicture}
\end{center}
\end{block}
\end{frame}
\subsection{Weighing the density matrix}
\begin{frame}[fragile]
\frametitle{Weighing the density matrix}
\framesubtitle{Choosing the average}
\footnotesize
\begin{block}{Density matrix using non-equilibrium Green functions}
\vskip -2ex
\begin{columns}
\column{.1\textwidth}
\column{.3\textwidth}
Equivalent, but different!
\column{.6\textwidth}
\begin{align*}
\DM^\idxE &=
% {\DE^\idxE}
% +
% {\sum_{\idxE'\neq\idxE}\ncor_{\idxE'}^\idxE}
% =
% Essentially these
\langle\dots\rangle n_{F,\idxE}(\E)
+
\sum_{\idxE'\neq\idxE}\langle\dots\rangle [n_{F,\idxE'}(\E)-n_{F,\idxE}]
\\
\DM^\varsigma &=
% {\DE^\varsigma}
% +
% {\sum_{\idxE|\varsigma_\idxE\neq\varsigma}\ncor_{\idxE'}^\varsigma}
% =
\langle\dots\rangle n_{F,\varsigma}(\E)
+
\sum_{\idxE|\varsigma_\idxE\neq\varsigma} \langle\dots\rangle[n_{F,\varsigma_\idxE}(\E)-n_{F,\varsigma}]
\end{align*}
\end{columns}
\end{block}
\begin{block}<2->{Estimating $\DM$}
\begin{itemize}
\item<+->
TranSiesta calculates \emph{all} $\DM^\varsigma$ and estimates the true $\DM$ by an
average:
\begin{align*}
\shortintertext{for $2$ electrodes:}
w_i & = \frac{(\ncor_i)^2}{(\ncor_1)^2+(\ncor_2)^2}
& &w_1+w_2=1
\uncover<+->{
\\
\shortintertext{for $N$ electrodes:}
w_\varsigma &=
\prod_{\varsigma'\neq\varsigma}
\big(\smash{\sum_{\idxE|\varsigma_\idxE\neq\varsigma'}}(\ncor^{\varsigma'}_\idxE)^2\big)
\Big/
\Big\{
\sum_{\varsigma'}\prod_{\varsigma''\neq\varsigma'}
\big(\smash{\sum_{\idxE|\varsigma_\idxE\neq\varsigma''}}(\ncor^{\varsigma''}_\idxE)^2\big)
\Big\}
& &\sum_iw_i=1
\\
\DM &=\sum_\varsigma \DM^\varsigma w_\varsigma
}
\end{align*}
\item<+->%
Estimation of the ``error''
\begin{align*}
\mathrm e_{\mathrm{max}} &= \max\big[
\DM^{\varsigma'}-\DM^\varsigma;
\DM^{\varsigma''}-\DM^{\varsigma};
\DM^{\varsigma''}-\DM^{\varsigma'};
\dots\big]
\\
\mathrm e_{w} &= \max\big[
\DM^{\varsigma}-\DM;
\DM^{\varsigma'}-\DM;
\dots\big].
\end{align*}
\end{itemize}
\end{block}
\uncover<4->{%
\begin{tikzpicture}[remember picture,overlay]
\node[rotate=30,inner sep=20cm,anchor=mid,
fill=white,opacity=.7,text width=20cm,align=center,
text opacity=1] at (current page.center)
{\huge This is why we need \emph{both} a chemical \emph{and} an electrode block};
\end{tikzpicture}
}
\end{frame}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "talk"
%%% End:
| {
"alphanum_fraction": 0.5302630292,
"avg_line_length": 32.1603773585,
"ext": "tex",
"hexsha": "bb060a4232b67232c510c8cb74a17c824762ff59",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-06-17T10:18:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-01-27T10:27:51.000Z",
"max_forks_repo_head_hexsha": "5367ca2130b7cf82fefd4e2e7c1565e25ba68093",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rwiuff/QuantumTransport",
"max_forks_repo_path": "ts-tbt-sisl-tutorial-master/presentations/03/contour.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "5367ca2130b7cf82fefd4e2e7c1565e25ba68093",
"max_issues_repo_issues_event_max_datetime": "2020-03-31T03:17:38.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-03-31T03:17:38.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rwiuff/QuantumTransport",
"max_issues_repo_path": "ts-tbt-sisl-tutorial-master/presentations/03/contour.tex",
"max_line_length": 112,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "5367ca2130b7cf82fefd4e2e7c1565e25ba68093",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rwiuff/QuantumTransport",
"max_stars_repo_path": "ts-tbt-sisl-tutorial-master/presentations/03/contour.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-25T14:05:45.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-09-25T14:05:45.000Z",
"num_tokens": 3714,
"size": 10227
} |
\documentclass[../redux]{subfiles}
\begin{document}
\subsection{Ducks-Sagas connection}
As mentioned above, we tried to keep our sagas divided along the ducks components of our application.\\
Almost all of our sagas are connected to a single duck. After fetching data, a saga can call the connected duck's actions, for example to set a list in the store.
For readability, the Duck and Saga interfaces are not shown in the UML diagram below.
\subsubsection{UML}
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{"diagrammi/redux/ducks-sagas connection"}
\caption{ducks-sagas connection}
\label{fig:Ducks-sagas connection}
\end{figure}
\end{document} | {
"alphanum_fraction": 0.7596566524,
"avg_line_length": 49.9285714286,
"ext": "tex",
"hexsha": "efd2086feba4de49ceaba9409f3603d79ac5589b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8bd2f83b3d458e60e6185a91ec66a1a35468f1ef",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "M9k/Marvin-Documentazione---353",
"max_forks_repo_path": "Esterni/ManualeSviluppatore/redux/Duck-saga.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8bd2f83b3d458e60e6185a91ec66a1a35468f1ef",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "M9k/Marvin-Documentazione---353",
"max_issues_repo_path": "Esterni/ManualeSviluppatore/redux/Duck-saga.tex",
"max_line_length": 169,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8bd2f83b3d458e60e6185a91ec66a1a35468f1ef",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "M9k/Marvin-Documentazione---353",
"max_stars_repo_path": "Esterni/ManualeSviluppatore/redux/Duck-saga.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 187,
"size": 699
} |
\section{How Does Sections, Subsections, and Subsections Look?}
Well, like this
\subsection{This is a Subsection}
and this
\subsubsection{This is a Subsubsection}
and this.
\paragraph{A Paragraph}
You can also use paragraph titles which look like this.
\subparagraph{A Subparagraph} Moreover, you can also use subparagraph titles which look like this\todo{Is it possible to add a subsubparagraph?}. They have a small indentation as opposed to the paragraph titles.
\todo[inline,color=green]{I think that a summary of this exciting chapter should be added.}
| {
"alphanum_fraction": 0.7946428571,
"avg_line_length": 40,
"ext": "tex",
"hexsha": "68cba73e774e6e1a8a9f17d9db3b0fcb721ce692",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-03-26T09:28:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-03-26T09:28:18.000Z",
"max_forks_repo_head_hexsha": "4c94618954bb5055623420486f2a88a03c1c7b1b",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Adrast/UCN-Latex-Templates",
"max_forks_repo_path": "UCNReportTemplate-en/sections/Chapter1/sections.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4c94618954bb5055623420486f2a88a03c1c7b1b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Adrast/UCN-Latex-Templates",
"max_issues_repo_path": "UCNReportTemplate-en/sections/Chapter1/sections.tex",
"max_line_length": 211,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "4c94618954bb5055623420486f2a88a03c1c7b1b",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Adrast/UCN-Latex-Templates",
"max_stars_repo_path": "UCNReportTemplate-en/sections/Chapter1/sections.tex",
"max_stars_repo_stars_event_max_datetime": "2018-10-12T13:12:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-04-04T20:24:49.000Z",
"num_tokens": 131,
"size": 560
} |
\documentclass[
shownotes,
xcolor={svgnames},
hyperref={colorlinks,citecolor=DarkBlue,linkcolor=DarkRed,urlcolor=DarkBlue}
]{beamer}
\usepackage{animate}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{pifont}
\usepackage{mathpazo}
%\usepackage{xcolor}
\usepackage{multimedia}
\usepackage{fancybox}
\usepackage[para]{threeparttable}
\usepackage{multirow}
\setcounter{MaxMatrixCols}{30}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage{lscape}
\usepackage[compatibility=false,font=small]{caption}
\usepackage{booktabs}
\usepackage{ragged2e}
\usepackage{chronosys}
\usepackage{appendixnumberbeamer}
\usepackage{animate}
\setbeamertemplate{caption}[numbered]
\usepackage{color}
%\usepackage{times}
\usepackage{tikz}
\usepackage{comment} %to comment
%% BibTeX settings
\usepackage{natbib}
\bibliographystyle{apalike}
\bibpunct{(}{)}{,}{a}{,}{,}
\setbeamertemplate{bibliography item}{[\theenumiv]}
% Defines columns for bespoke tables
\usepackage{array}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\usepackage{xfrac}
\usepackage{multicol}
\setlength{\columnsep}{0.5cm}
% Theme and colors
\usetheme{Boadilla}
% I use steel blue and a custom color palette. This defines it.
\definecolor{andesred}{HTML}{af2433}
% Other options
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
\usefonttheme{serif}
\setbeamertemplate{itemize items}[default]
\setbeamertemplate{enumerate items}[square]
\setbeamertemplate{section in toc}[circle]
\makeatletter
\definecolor{mybackground}{HTML}{82CAFA}
\definecolor{myforeground}{HTML}{0000A0}
\setbeamercolor{normal text}{fg=black,bg=white}
\setbeamercolor{alerted text}{fg=red}
\setbeamercolor{example text}{fg=black}
\setbeamercolor{background canvas}{fg=myforeground, bg=white}
\setbeamercolor{background}{fg=myforeground, bg=mybackground}
\setbeamercolor{palette primary}{fg=black, bg=gray!30!white}
\setbeamercolor{palette secondary}{fg=black, bg=gray!20!white}
\setbeamercolor{palette tertiary}{fg=white, bg=andesred}
\setbeamercolor{frametitle}{fg=andesred}
\setbeamercolor{title}{fg=andesred}
\setbeamercolor{block title}{fg=andesred}
\setbeamercolor{itemize item}{fg=andesred}
\setbeamercolor{itemize subitem}{fg=andesred}
\setbeamercolor{itemize subsubitem}{fg=andesred}
\setbeamercolor{enumerate item}{fg=andesred}
\setbeamercolor{item projected}{bg=gray!30!white,fg=andesred}
\setbeamercolor{enumerate subitem}{fg=andesred}
\setbeamercolor{section number projected}{bg=gray!30!white,fg=andesred}
\setbeamercolor{section in toc}{fg=andesred}
\setbeamercolor{caption name}{fg=andesred}
\setbeamercolor{button}{bg=gray!30!white,fg=andesred}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{graphicx}
\makeatletter
\definecolor{airforceblue}{rgb}{0.36, 0.54, 0.66}
\usepackage{tikz}
% Tikz settings optimized for causal graphs.
\usetikzlibrary{shapes,decorations,arrows,calc,arrows.meta,fit,positioning}
\tikzset{
-Latex,auto,node distance =1 cm and 1 cm,semithick,
state/.style ={ellipse, draw, minimum width = 0.7 cm},
point/.style = {circle, draw, inner sep=0.04cm,fill,node contents={}},
bidirected/.style={Latex-Latex,dashed},
el/.style = {inner sep=2pt, align=left, sloped}
}
\makeatother
%%%%%%%%%%%%%%% BEGINS DOCUMENT %%%%%%%%%%%%%%%%%%
\begin{document}
\title[Lecture 15]{Lecture 15: \\ Linear Model Selection}
\subtitle{Big Data and Machine Learning for Applied Economics \\ Econ 4676}
\date{\today}
\author[Sarmiento-Barbieri]{Ignacio Sarmiento-Barbieri}
\institute[Uniandes]{Universidad de los Andes}
\begin{frame}[noframenumbering]
\maketitle
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------%
\begin{frame}
\frametitle{Agenda}
\tableofcontents
\end{frame}
%----------------------------------------------------------------------%
\section{Motivation }
\subsection{Recap: Overfit }
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Overfit and out of Sample Prediction}
\begin{itemize}
\item In ML we care about out-of-sample prediction
\medskip
\item Overfit: complex models predict very well in sample but ``badly'' out of sample
\medskip
\item Choose the right complexity level
\medskip
\item How do we measure the out of sample error?
\medskip
\item $R^2$ doesn't work: it measures in-sample fit and is non-decreasing in complexity (PS1)
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Overfit and out of Sample Prediction}
\begin{figure}[H] \centering
\captionsetup{justification=centering}
\includegraphics[scale=0.4]{figures/Fig0}
\end{figure}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Motivation}
\begin{itemize}
\item Estimating test error: two approaches
\medskip
\begin{enumerate}
\item We can directly estimate the test error, using either a validation set approach or a cross-validation approach
\medskip
\item We can indirectly estimate test error by making an adjustment to the training error to account for overfitting.
\medskip
\begin{itemize}
\item AIC, BIC, $C_p$ and Adjusted $R^2$
\medskip
\item These techniques adjust the training error for the model size, and can be used to select among a set of models with different numbers of variables.
\medskip
\item I'll focus on AIC and BIC. They are intimately related to more classical notions of hypothesis testing.
\end{itemize}
\end{enumerate}
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\section{Classical Framework for Model Selection}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Classical Framework for Model Selection }
\begin{itemize}
\item The framework for model selection can be described as follows.
\item We have a collection of parametric models
\begin{align}
\{f_j(x_i,\theta)\}
\end{align}
\item where $\theta \in \Theta_j$ for $j = 1,\dots, J$.
\item Some linear structure is usually imposed on the parameter space, so typically $\Theta_j=m_j\cap\Theta_J$, where $m_j$ is a linear subspace of $\mathcal{R}^{p_J}$ of dimension $p_j$ and $p_1 < p_2 < \dots < p_J$.
%\item To formally justify some of our subsequent connections to hypothesis testing it would be also necessary to add the requirement that the models are nested, i.e., that $\Theta_1\subset\Theta_2\subset \dots \Theta_J$
\item e.g.
\begin{align}
y=X_{n\times p_j}\beta+u
\end{align}
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\subsection{AIC: Akaike Information Criterion}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{AIC}
\begin{itemize}
\item Akaike (1969) was the first to offer a unified approach to the problem of model selection.
\item His point of view was to choose a model from the set $\{f_j\}$ which performed well when evaluated on the basis of forecasting performance.
\item His criterion, which has come to be called the Akaike information criterion is
\begin{align}
AIC(j) = l_j(\hat \theta) - p_j
\end{align}
\item where $l_j(\hat\theta)$ is the log likelihood corresponding to the $j$-th model, maximized over $\theta\in\Theta_j$.
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{AIC}
\begin{align}
AIC(j) = l_j(\hat \theta) - p_j
\end{align}
\begin{itemize}
\item Akaike’s model selection rule was simply to maximize AIC over the $j$ models, that is to choose the model $j^*$ which maximizes $AIC(j)$.
\item This approach seeks to balance improvement in the fit of the model, as measured by the value of the likelihood, with a penalty term, $p_j$.
\item Thus one often sees this and related procedures referred to as penalized likelihood methods.
\item The trade-off is simply: does the improvement which comes inevitably from expanding the dimensionality of the model compensate for the increased penalty?
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\subsection{SIC/BIC: Schwarz/Bayesian Information Criterion}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{BIC}
\begin{itemize}
\item Schwarz (1978) showed that while the $AIC$ approach may be quite satisfactory for selecting a forecasting model,
\item it has the unfortunate property of being inconsistent: as $n \rightarrow \infty$, it tends to choose too large a model with positive probability.
\item Schwarz (1978) formalized the model selection problem from a Bayesian standpoint:
\begin{align}
SIC(j) = l_j(\hat \theta) -\frac{1}{2} p_j log(n)
\end{align}
\item It has the property that as $n\rightarrow \infty$, presuming that there is a true model $j^*$, the selected model $\hat j =\text{argmax}_j\,\, SIC(j)$ satisfies
\begin{align}
p(\hat j = j^*) \rightarrow 1
\end{align}
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{AIC vs BIC}
\begin{align}
AIC(j) = l_j(\hat \theta) - p_j
\end{align}
\begin{align}
SIC(j) = l_j(\hat \theta) - p_j \frac{1}{2} log(n)
\end{align}
\begin{itemize}
\item Note that
\begin{align}
\frac{1}{2} log(n) > 1 \quad \text{for} \quad n \geq 8
\end{align}
\item The SIC penalty is larger than the AIC penalty, so SIC tends to pick a smaller model.
\item In effect, by letting the penalty tend to infinity slowly with n, we eliminate the tendency of AIC to choose too large a model.
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\subsection{Connection to Classical Hypothesis Testing: General}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Connection to Classical Hypothesis Testing: General}
\begin{itemize}
\item Recall the likelihood ratio tests that we classically use to assess goodness of fit and to compare models.
\medskip
\item Suppose that we are comparing a larger model $j$ to a smaller model $i$
\begin{align}
T_n=2(l_j(\hat \theta_j)-l_i(\hat \theta_i))
\end{align}
\item It can be shown that $T_n \rightarrow \chi^2_{p_j-p_i}$ for $p_j >p_i=p^*$.
\medskip
\item So classical hypothesis testing would suggest that we should reject an hypothesized smaller model $i$, in favor of a larger model $j$ iff $T_n$ exceeds an appropriately chosen critical value from the $\chi^2_{p_j-p_i}$ table
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Connection to AIC}
AIC chooses $j$ over $i$, iff
\begin{align}
l_j(\hat \theta) - p_j > l_i(\hat \theta) - p_i
\end{align}
\begin{align}
l_j(\hat \theta) - l_i(\hat \theta) > p_j - p_i
\end{align}
\begin{align}
2\frac{l_j(\hat \theta) - l_i(\hat \theta)}{p_j - p_i} > 2
\end{align}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Connection to SIC}
In contrast Schwarz would choose $j$ over $i$, iff
\begin{align}
\frac{2(l_j(\hat \theta_j)-l_i(\hat \theta_i))}{p_j-p_i} > log(n)
\end{align}
Then $log(n)$ can be interpreted as an implicit critical value for the model selection decision based on SIC
\end{frame}
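%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Implicit critical values: a quick check in \texttt{R}}
A minimal sketch (illustration only, base \texttt{R} functions): the classical 5\% critical value for one restriction versus the implicit AIC and SIC ``critical values'' $2$ and $log(n)$.
\begin{verbatim}
# classical 5% critical value for one restriction
qchisq(0.95, df = 1)     # ~ 3.84

# implicit critical values of the penalized criteria
2                        # AIC
log(c(10, 100, 1000))    # SIC: ~ 2.30, 4.61, 6.91
\end{verbatim}
Once $n$ exceeds $e^{3.84}\approx 47$, the SIC rule is more conservative than both the AIC rule and the classical 5\% test.
\end{frame}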
%----------------------------------------------------------------------%
\subsection{AIC/SIC in the linear regression model}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{AIC/SIC in the linear regression model}
Recall that for the Normal/Gaussian linear regression model the log likelihood function is
\begin{align}
l(\beta,\sigma^2) = -\frac{n}{2}log(2\pi)-\frac{n}{2}log(\sigma^2) -\frac{1}{2\sigma^2} (y-X\beta)'(y-X\beta)
\end{align}
evaluating at $\hat \beta$ and at $\hat{\sigma}^2=(y-X\hat\beta)'(y-X\hat\beta)/n$ we get the concentrated/profile log-likelihood
\begin{align}
l(\hat \beta,\hat \sigma^2) = -\frac{n}{2}log(2\pi)-\frac{n}{2}log(\hat \sigma^2) -\frac{n}{2}
\end{align}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{AIC/SIC in the linear regression model}
Thus maximizing SIC
\begin{align}
l_i - \frac{1}{2} p_i log(n)
\end{align}
is equivalent to minimizing
\begin{align}
\frac{n}{2}log(\hat \sigma_i^2) + \frac{1}{2} p_i log(n)
\end{align}
or minimizing
\begin{align}
log(\hat \sigma_i^2) + \frac{p_i}{n} log(n)
\end{align}
\begin{itemize}
\item Similarly for AIC.
\item When using software it is important to check what is being computed. In \texttt{R}, the function \texttt{AIC} is to be minimized rather than maximized: it computes $-2l_i+kp_i$ with $k=2$ as the default, which can be changed, e.g.\ $k=log(n)$ gives SIC (a sketch follows on the next slide).
\end{itemize}
\end{frame}
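%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{AIC/SIC in \texttt{R}: a minimal sketch}
A hedged illustration with base \texttt{R} only (the built-in \texttt{mtcars} data and the variables below are chosen purely for illustration):
\begin{verbatim}
small <- lm(mpg ~ wt,             data = mtcars)
large <- lm(mpg ~ wt + hp + disp, data = mtcars)

# R's AIC() returns -2*logLik + k*p and is to be *minimized*
AIC(small); AIC(large)            # default penalty k = 2
n <- nobs(small)
AIC(small, k = log(n))            # k = log(n) gives SIC/BIC
BIC(small); BIC(large)            # shortcut for the same thing
\end{verbatim}
The smaller value of \texttt{AIC()} (or \texttt{BIC()}) points to the preferred model.
\end{frame}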
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Comparison LR, t, AIC, BIC in the linear regression model}
Example of adding one more covariate $p_j-p_i =1$
\begin{align}
T_n=2(l_j(\hat \theta_j)-l_i(\hat \theta_i)) \rightarrow \chi^2_{p_j-p_i}
\end{align}
\begin{align}
\frac{T_n}{p_j-p_i} \rightarrow \frac{\chi^2_{p_j-p_i}}{p_j-p_i}\approx F_{p_j-p_i,\infty}
\end{align}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Comparison LR, t, AIC, BIC in the linear regression model}
\begin{minipage}[t]{0.52\linewidth}
\scriptsize
\begin{align}
\sqrt{2(l_j(\hat \theta_j)-l_i(\hat \theta_i))} \rightarrow \sqrt{F_{1,\infty}}=t_\infty \nonumber
\end{align}
\begin{align}
\sqrt{2 (l_j(\hat \theta) - l_i(\hat \theta))} > \sqrt{2} \nonumber
\end{align}
\begin{align}
\sqrt{2(l_j(\hat \theta_j)-l_i(\hat \theta_i)) }> \sqrt{log(n)} \nonumber
\end{align}
\end{minipage}
\hfill
\begin{minipage}[t]{0.43\linewidth}%
\begin{figure}[H] \centering
\captionsetup{justification=centering}
\includegraphics[scale=0.3]{figures/Fig1}
\end{figure}
\end{minipage}
\end{frame}
%----------------------------------------------------------------------%
\section{Model Selection in Practice}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Model Selection in Practice}
\begin{itemize}
\item We have $M_k$ models
\bigskip
\item We want to find the model that best predicts out of sample
\bigskip
\item We have a number of ways to go about it
\bigskip
\begin{itemize}
\item Best Subset Selection
\medskip
\item Stepwise Selection
\begin{itemize}
\item Forward selection
\medskip
\item Backward selection
\end{itemize}
\end{itemize}
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\subsection{Best Subset Selection}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Best Subset Selection}
\begin{enumerate}
\item Let $M_0$ denote the null model, which contains no predictors. This model simply predicts the sample mean for each observation.
\bigskip
\item For $k=1,2,\dots,p$:
\medskip
\begin{enumerate}
\item Fit all $\binom{p}{k}$ models that contain exactly k predictors
\medskip
\item Pick the best among these $\binom{p}{k}$ models, and call it $M_k$, where {\it best} means the one with the smallest $SSR$
\end{enumerate}
\bigskip
\item Select a single best model from among $M_0,\dots, M_p$ using cross-validated prediction error, AIC ($C_p$), BIC, or adjusted $R^2$.
\end{enumerate}
\end{frame}
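%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Best Subset Selection: a sketch in \texttt{R}}
A minimal sketch, assuming the \texttt{leaps} package is installed (the data set and \texttt{nvmax} are illustrative choices only):
\begin{verbatim}
library(leaps)
fit <- regsubsets(mpg ~ ., data = mtcars, nvmax = 8)
s   <- summary(fit)

s$outmat            # best model M_k for each size k
which.min(s$bic)    # size chosen by BIC
which.max(s$adjr2)  # size chosen by adjusted R^2
\end{verbatim}
\end{frame}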
%----------------------------------------------------------------------%
\subsection{Stepwise Selection}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Stepwise Selection}
\begin{itemize}
\item For computational reasons, best subset selection cannot be applied with very large p.
\medskip
\item Best subset selection may also suffer from statistical problems when $p$ is large: the larger the search space, the higher the chance of finding models that look good on the training data, even though they might not have any predictive power on future data.
\medskip
\item Thus an enormous search space can lead to overfitting and high variance of the coefficient estimates.
\medskip
\item For both of these reasons, stepwise methods, which explore a far more restricted set of models, are attractive alternatives to best subset selection.
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Forward Stepwise Selection}
\begin{itemize}
\item Forward stepwise selection begins with a model containing no predictors, and then adds predictors to the model, one-at-a-time, until all of the predictors are in the model.
\bigskip
\item In particular, at each step the variable that gives the greatest additional improvement to the fit is added to the model.
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Forward Stepwise Selection}
\begin{enumerate}
\item Let $M_0$ denote the null model, which contains no predictors. This model simply predicts the sample mean for each observation.
\bigskip
\item For $k=0,1,\dots,p-1$:
\medskip
\begin{enumerate}
\item Consider all $p-k$ models that augment the predictors in $M_k$ with one additional predictor.
\medskip
\item Choose the best among these $p - k$ models, and call it $M_{k+1}$, where {\it best} means the one with the smallest $SSR$
\end{enumerate}
\bigskip
\item Select a single best model from among $M_0,\dots, M_p$ using cross-validated prediction error, AIC ($C_p$), BIC, or adjusted $R^2$.
\end{enumerate}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Forward Stepwise Selection}
\begin{itemize}
\item Computational advantage over best subset selection is clear.
\item It is not guaranteed to find the best possible model out of all $2^p$ models containing subsets of the p predictors.
\item ISLR Example
\end{itemize}
\begin{figure}[H] \centering
\captionsetup{justification=centering}
\includegraphics[scale=0.4]{figures/Fig2}
\end{figure}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Backward Stepwise Selection}
\begin{itemize}
\item Like forward stepwise selection, backward stepwise selection provides an efficient alternative to best subset selection.
\bigskip
\item However, unlike forward stepwise selection, it begins with the full least squares model containing all p predictors, and then iteratively removes the least useful predictor, one-at-a-time.
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Backward Stepwise Selection}
\begin{enumerate}
\item Let $M_0$ denote the null model, which contains no predictors. This model simply predicts the sample mean for each observation.
\bigskip
\item For $k=p,p-1,\dots,1$:
\medskip
\begin{enumerate}
\item Consider all $k$ models that contain all but one of the predictors in $M_k$, for a total of $k-1$ predictors
\medskip
\item Choose the best among these $k$ models, and call it $M_{k-1}$, where {\it best} means the one with the smallest $SSR$
\end{enumerate}
\bigskip
\item Select a single best model from among $M_0,\dots, M_p$ using cross-validated prediction error, AIC ($C_p$), BIC, or adjusted $R^2$.
\end{enumerate}
\end{frame}
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Backward Stepwise Selection}
\begin{itemize}
\item Like forward stepwise selection, the backward selection approach searches through only $1 + p(p + 1)/2$ models, and so can be applied in settings where p is too large to apply best subset selection
\item Like forward stepwise selection, backward stepwise selection is not guaranteed to yield the best model containing a subset of the p predictors.
\item Backward selection requires that the number of samples n is larger than the number of variables p (so that the full model can be fit). In contrast, forward stepwise can be used even when $n < p$, and so is the only viable subset method when p is very large.
\end{itemize}
\end{frame}
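%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Stepwise Selection with \texttt{step()}: a sketch}
A minimal sketch using \texttt{stats::step()} (data set and scope are illustrative only); \texttt{step()} searches by penalized likelihood, so it matches the AIC/SIC discussion rather than the pure smallest-$SSR$ rule on the previous slides.
\begin{verbatim}
null <- lm(mpg ~ 1, data = mtcars)   # M_0
full <- lm(mpg ~ ., data = mtcars)   # all p predictors

# forward: start from M_0, add one variable at a time
fwd <- step(null, scope = formula(full),
            direction = "forward", k = 2)   # AIC penalty

# backward: start from the full model, drop one at a time
bwd <- step(full, direction = "backward",
            k = log(nrow(mtcars)))          # SIC/BIC penalty
\end{verbatim}
\end{frame}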
%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Validation and Cross-Validation}
\begin{itemize}
\item Each of the procedures returns a sequence of models $M_k$ indexed by model size $k = 0,1,2,\dots$ .
\item Our job here is to select $\hat k$. Once selected, we will return model $M_{\hat{k}}$
\item We compute the validation set error or the cross-validation error for each model $M_k$ under consideration, and then select the $k$ for which the resulting estimated test error is smallest.
\item This procedure has an advantage relative to AIC ($C_p$), BIC, and adjusted $R^2$, in that it provides a direct estimate of the test error, and doesn't require an estimate of the error variance $\sigma^2$
\item It can also be used in a wider range of model selection tasks, even in cases where it is hard to pinpoint the model degrees of freedom (e.g. the number of predictors in the model) or hard to estimate the error variance $\sigma^2$
\end{itemize}
\end{frame}
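%----------------------------------------------------------------------%
\begin{frame}[fragile]
\frametitle{Choosing $k$ with a validation set: a sketch}
A minimal validation-set sketch in base \texttt{R} (the split and the small sequence of candidate models are arbitrary illustrations):
\begin{verbatim}
set.seed(1)
train <- sample(nrow(mtcars), 22)
forms <- list(mpg ~ wt,
              mpg ~ wt + hp,
              mpg ~ wt + hp + disp)

val.mse <- sapply(forms, function(f) {
  fit <- lm(f, data = mtcars[train, ])
  mean((mtcars$mpg[-train] -
          predict(fit, mtcars[-train, ]))^2)
})
which.min(val.mse)   # index of the selected model size
\end{verbatim}
Replacing the single split by $K$-fold cross-validation gives a less noisy estimate of the test error.
\end{frame}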
%----------------------------------------------------------------------%
\section{Review \& Next Steps}
%----------------------------------------------------------------------%
\begin{frame}
\frametitle{Review \& Next Steps}
\begin{itemize}
\item Today:
\medskip
\begin{itemize}
\item Basic Classical Framework for Model Selection AIC, SIC/BIC
\medskip
\item Model Selection in Practice
\begin{itemize}
\item Best Subset Selection
\medskip
\item Stepwise Selection
\end{itemize}
\end{itemize}
\bigskip
\item Next class: Regularization/Shrinkage Methods
\bigskip
\item Questions? Questions about software?
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
\section{Further Readings}
%----------------------------------------------------------------------%
\begin{frame}
\frametitle{Further Readings}
\begin{itemize}
\item Friedman, J., Hastie, T., \& Tibshirani, R. (2001). The elements of statistical learning (Vol. 1, No. 10). New York: Springer series in statistics.
\medskip
\item James, G., Witten, D., Hastie, T., \& Tibshirani, R. (2013). An introduction to statistical learning (Vol. 112, p. 18). New York: springer.
\medskip
\item Koenker, R. (2013) Economics 508: Lecture 4. Model Selection and Fishing for Significance. Mimeo
\end{itemize}
\end{frame}
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
\end{document}
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
| {
"alphanum_fraction": 0.643040792,
"avg_line_length": 35.9434482759,
"ext": "tex",
"hexsha": "84c6c4db39eeca4181fc329815454f06f1443175",
"lang": "TeX",
"max_forks_count": 13,
"max_forks_repo_forks_event_max_datetime": "2020-08-20T18:33:49.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-08-11T15:51:49.000Z",
"max_forks_repo_head_hexsha": "03ba452c7c44c31a872b024d8ab24aea89970aec",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Nurdaneta/Big-Data-Machine-Learning-Course",
"max_forks_repo_path": "Lecture15/Lecture15.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "03ba452c7c44c31a872b024d8ab24aea89970aec",
"max_issues_repo_issues_event_max_datetime": "2020-08-19T13:40:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-08-19T13:40:05.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Nurdaneta/Big-Data-Machine-Learning-Course",
"max_issues_repo_path": "Lecture15/Lecture15.tex",
"max_line_length": 264,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "03ba452c7c44c31a872b024d8ab24aea89970aec",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Nurdaneta/Big-Data-Machine-Learning-Course",
"max_stars_repo_path": "Lecture15/Lecture15.tex",
"max_stars_repo_stars_event_max_datetime": "2020-08-19T16:12:25.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-18T16:35:54.000Z",
"num_tokens": 7323,
"size": 26059
} |
\chapter{A New Hope} | {
"alphanum_fraction": 0.75,
"avg_line_length": 20,
"ext": "tex",
"hexsha": "aa72fbdeb3f7ed333b59789d45a28e201056ca1f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a9633a0e42f6dc67cb351b9d59d859ddfe15e3a8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hbrausen/thesis_template",
"max_forks_repo_path": "MainMatter/04_ANewHope.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a9633a0e42f6dc67cb351b9d59d859ddfe15e3a8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hbrausen/thesis_template",
"max_issues_repo_path": "MainMatter/04_ANewHope.tex",
"max_line_length": 20,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a9633a0e42f6dc67cb351b9d59d859ddfe15e3a8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hbrausen/thesis_template",
"max_stars_repo_path": "MainMatter/04_ANewHope.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7,
"size": 20
} |
% List of important shorcuts
% Based on
% https://github.com/goerz/Refcards/raw/master/my_vim_mappings/my_vim_mappings.tex
%
% This work is licensed under the Creative Commons
% Attribution-Noncommercial-Share Alike 3.0 License.
% To view a copy of this license, visit
% http://creativecommons.org/licenses/by-nc-sa/
% \pdfoutput=1
\pdfpageheight=21cm
\pdfpagewidth=29.7cm
% Font definitions
\font\bigbf=cmbx12
\font\smallrm=cmr8
\font\smalltt=cmtt8
\font\tinyit=cmmi5
\def\title#1{\hfil{\bf #1}\hfil\par\vskip 2pt\hrule}
\def\cm#1#2#3{{\it#1} {\tt#2 }\dotfill#3\par}
\def\dcm#1#2#3#4#5{{\it#1} {\tt#2 } {\it#3} {\tt#4 }\dotfill#5\par}
\def\cn#1{\hfill$\lfloor$ #1\par}
\def\section#1{\vskip 0.7cm {\it#1\/}\par}
% Characters definitions
\def\\{\hfil\break}
\def\backspace{$\leftarrow$}
\def\ctrl{{\rm\char94}\kern-1pt}
\def\enter{$\hookleftarrow$}
\def\or{\thinspace{\tinyit{or}}\thinspace}
\def\key#1{$\langle${\rm{\it#1\/}}$\rangle$}
\def\rapos{\char125}
\def\lapos{\char123}
\def\bs{\char92}
%\def\leader{\char92}
\def\leader{<Leader>}
\def\tmux{<C-B>}
\def\locleader{<LocalLeader>}
\def\ileader{\ctrl L}
\def\tilde{\char126}
\def\lbracket{[}
\def\rbracket{]}
% Three columns definitions
\parindent 0pt
\nopagenumbers
\hoffset=-1.56cm
\voffset=-1.54cm
\newdimen\fullhsize
\fullhsize=27.9cm
\hsize=8.5cm
\vsize=19cm
\def\fullline{\hbox to\fullhsize}
\let\lr=L
\newbox\leftcolumn
\newbox\midcolumn
\output={
\if L\lr
\global\setbox\leftcolumn=\columnbox
\global\let\lr=M
\else\if M\lr
\global\setbox\midcolumn=\columnbox
\global\let\lr=R
\else
\tripleformat
\global\let\lr=L
\fi\fi
\ifnum\outputpenalty>-20000
\else
\dosupereject
\fi}
\def\tripleformat{
\shipout\vbox{\fullline{\box\leftcolumn\hfil\box\midcolumn\hfil\columnbox}}
\advancepageno}
\def\columnbox{\leftline{\pagebody}}
% Card content
% Header
%\hrule\vskip 3pt
\title{Shortcut Reference Thorben}
\vskip 0.3cm
{\tt \leader}: {\tt Space}% \hskip1cm {\tt \locleader}: {\tt \bs{}}
\vskip -0.3cm
\section{tmux}
%
\cm{}{\tmux{}c}{new window}
\cm{}{\tmux{}$\{$|,-$\}$}{split left/right or top/bottom}
\cm{}{\tmux{},}{rename window}
\cm{}{\tmux{}t}{thumbs}
\cm{}{\tmux{}!}{break pane out into own window}
\cm{}{\tmux{}$\{$s,w$\}$}{list of sessions, list of windows}
%
\cm{}{C-S$\{\leftarrow$,$\rightarrow\}$}{move the tab left/right}
\cm{}{C-$\{$hjkl$\}$}{move between windows}
\section{vim}
%
\cm{}{jk}{Esc}
\cm{}{C-n}{open completion menu}
\cm{}{C-k}{fzf search files}
\cm{}{C-S-f}{ripgrep via fzf}
\cm{}{C-$\{${h,j,k,l}$\}$}{$\leftarrow$,$\downarrow$,$\uparrow$,$\rightarrow$ through splits}
\cm{}{A-S-$\{${h,j,k,l}$\}$}{resize current split}
\cm{}{\leader{}$\{$b,h$\}$}{fzf buffers, history}
%
\section{coc.nvim}
\cm{}{C-Space}{trigger completion}
\cm{}{[g, ]g}{prev/next diagnostic}
\cm{}{gd, gy, gi}{go to definition, type def., impl.,}
\cm{}{gr}{list references}
%
\cm{}{\leader{}rn}{rename}
%\dcm{n:}{\leader d}{i:}{\ileader d}{Insert date stamp}
% Footer
\vfill \hrule\smallskip
% {\smallrm This work is licensed under the Creative Commons
% Attribution-Noncommercial-Share Alike 3.0 License.
% To view a copy of this license, visit
% http://creativecommons.org/licenses/by-nc-sa/ \\---
% (CC) {\oldstyle 2012} by Michael Goerz.
% % Ending
\supereject
\if L\lr \else\null\vfill\eject\fi
\if L\lr \else\null\vfill\eject\fi
\bye
% EOF
| {
"alphanum_fraction": 0.6704242065,
"avg_line_length": 24.4275362319,
"ext": "tex",
"hexsha": "b02181756f35922baa4527d2a97734c4dff8ee83",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "15700b436371ea8517a708cefc3efe192e406a35",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "thorbenk/dotfiles",
"max_forks_repo_path": "ref/shortcuts.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "15700b436371ea8517a708cefc3efe192e406a35",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "thorbenk/dotfiles",
"max_issues_repo_path": "ref/shortcuts.tex",
"max_line_length": 93,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "15700b436371ea8517a708cefc3efe192e406a35",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "thorbenk/dotfiles",
"max_stars_repo_path": "ref/shortcuts.tex",
"max_stars_repo_stars_event_max_datetime": "2017-04-16T10:17:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-16T10:17:32.000Z",
"num_tokens": 1318,
"size": 3371
} |
\documentclass{homework}
\course{Math 5522H}
\author{Jim Fowler}
\input{preamble}
\begin{document}
\maketitle
\begin{inspiration}
Nature laughs at the difficulties of integration.
\byline{Pierre-Simon Laplace} % what's the original source for this quotation?
\end{inspiration}
\section{Terminology}
\begin{problem}
What does it mean to say that $\Omega \subset \C$ is connected? Is path-connected?
\end{problem}
\begin{solution}
$\Omega\subset \C$ is said to be connected if, for any two disjoint open sets $U,V\subset \C$ containing $\Omega$ in their union, either $U\cap \Omega$ or $V\cap \Omega$ is empty. $\Omega$ is path-connected if for every pair of points $z_0,z_1\in\Omega$ there is a continuous map $\gamma:[0,1]\to\Omega$ with $\gamma(0)=z_0$ and $\gamma(1)=z_1$.
\end{solution}
\begin{problem}
What is a piecewise-smooth \textbf{curve}? When are two curves ``the same''?
\end{problem}
\begin{solution}
A smooth curve is a smooth function $\gamma$ from an interval $[t_0, t_1]$, $t_0, t_1\in\R$, to the complex numbers, such that $\gamma'(t)\neq 0$ at every point $t$. A piecewise smooth curve is the function defined on the interval $[t_0, t_n]$ by $n$ smooth curves $$\gamma_i:[t_{i-1}, t_i]\mapsto \C$$ satisfying $\gamma_{i}(t_{i}) = \gamma_{i+1}(t_{i})$ for $i\in \{1,\dots, n-1\}$.
Two curves $\gamma_a:[a_0, a_1] \mapsto \C, \gamma_b:[b_0, b_1]\mapsto \C$ are said to be the same if there is a smooth increasing bijection $f: [a_0, a_1] \mapsto [b_0, b_1]$ such that $\forall t\in [a_0, a_1], \gamma_a(t) = \gamma_b(f(t))$.
\end{solution}
\begin{problem}
Define $\int_\gamma f(z) \, dz$ and define $\int_\gamma f(z) \, dx$ and define $\int_\gamma f(z) \, dy$.
\end{problem}
\begin{solution}
First, some notation for $\gamma$. Write $\gamma:[a,b]\to \C$, $\gamma(t) = x(t) + iy(t)$ with $x,y$ real valued. Then
\begin{align*}
\int_\gamma f(z) \, dz &:= \int_{a}^b \gamma'(t)f(\gamma(t)) \, dt\quad \color{purple}\text{ complex derivative }\\
\int_\gamma f(z) \, dx &:= \int_{a}^b x'(t)f(\gamma(t)) \, dt\quad\color{purple}\text{ first coordinate derivative }\\
\int_\gamma f(z) \, dy &:= \int_{a}^b y'(t)f(\gamma(t)) \,dt\quad \color{purple}\text{ second coordinate derivative }
\end{align*}
\end{solution}
\begin{problem}
Define $\int_\gamma f(z) \, d\conj{z}$.
\end{problem}
\begin{solution}
\begin{align*}
\int_\gamma f(z) \, \conj{dz} &:= \conj{\int_{a}^b \gamma'(t)\conj{f(\gamma(t))} \, dt}\\
\end{align*}
\end{solution}
\begin{problem}
Define $\int_\gamma f(z) \, \abs{dz}$.
\end{problem}
\begin{solution}
\begin{align*}
\int_\gamma f(z) \, \abs{dz} &:= \int_{a}^b \abs{\gamma'(t)}f(\gamma(t)) \,dt \\
\end{align*}
\end{solution}
\begin{problem}
What does it mean to say that a $1$-form is \textbf{exact}?
\end{problem}
\begin{solution}
$f(z)\,dz$ is \textbf{exact} if it has a \textbf{primitive} -- a complex differentiable function $F(z)$ such that $\frac{d}{dz} F(z) = f(z)$.
\end{solution}
\begin{problem}
What are the \textbf{poles} and \textbf{zeros} of a rational
function $p(z)/q(z)$?
\end{problem}
\begin{solution}
Assuming that $p(z)$ and $q(z)$ share no zeros as part of the definition of a rational function:
The \textbf{poles} of $p(z)/q(z)$ are the points at which $q(z)=0$. The $\textbf{zeros}$ are the points at which $p(z)=0$.
\end{solution}
\section{Numericals}
\begin{problem}
Consider a piecewise smooth curve $\gamma$ tracing the boundary of the square
$$S = \{ z = x+iy\in \C : \abs{x} \mbox{ and } \abs{y} \leq 1 \}.$$
Compute $\displaystyle\int_\gamma \frac{1}{z} \, dz$ by hand.
\end{problem}
\begin{solution}
Going counter-clockwise, we recognize $\gamma = \gamma_1 + \gamma_2 + \gamma_3 + \gamma_4$ with $\gamma_i$ defined on the interval $[-1, 1]$ satisfying
\begin{align*}
\gamma_1 = 1 + it &\qquad \gamma_3 = -1 - it \\
\gamma_2 = -t + i &\qquad \gamma_4 = t - i
\end{align*}
Now we can compute the integral:
\begin{align*}
\int_{\gamma} \frac{1}{z} \, dz
&= \int_{-1}^1 \frac{\gamma_1'}{1 + it} \, dt +
\int_{-1}^1 \frac{\gamma_2'}{-t + i} \, dt +
\int_{-1}^1 \frac{\gamma_3'}{-1 - it} \, dt +
\int_{-1}^1 \frac{\gamma_4'}{t - i} \, dt\\
&= \int_{-1}^1 \frac{i}{1 + it} \, dt +
\int_{-1}^1 \frac{-1}{-t + i} \, dt +
\int_{-1}^1 \frac{-i}{-1 - it} \, dt +
\int_{-1}^1 \frac{1}{t - i} \, dt\\
&= \int_{-1}^1 \frac{4}{t - i} \, dt\\
&= 4\Log(t - i)|_{t=-1}^1\color{purple}\text{ using the branch of log with argument in }[0,2\pi)\text{, analytic near the segment} \\
&= 4(\Log(1 - i) - \Log(-1 - i))\\
&= 4((\ln\sqrt{2} + i7\pi/4) - (\ln\sqrt{2} + i5\pi/4))\\
&= 4(i\pi/2) = 2\pi i
\end{align*}
\end{solution}
\begin{problem}\label{integral-powers-of-z}Consider the curve $\gamma : [0,2\pi] \to \C$ given by $\gamma(\theta) = e^{i\theta}$. For an integer $n \in \Z$, compute $\displaystyle\int_\gamma z^n \, dz$ and $\displaystyle\int_\gamma \conj{z}^n \, dz$.
\end{problem}
\begin{solution}
\begin{align*}
\int_\gamma z^ndz &= \int_{0}^{2\pi} \gamma'(\theta) e^{i\theta n} d\theta\\
&= i\int_{0}^{2\pi}e^{i(1+n)\theta} d\theta\\
&= \begin{cases}i\int_{0}^{2\pi} e^{i(1+n)\theta} d\theta & n \neq -1\\
i\int_{0}^{2\pi} 1 d\theta & n = -1 \end{cases}\\
&= \begin{cases} 0 & n \neq -1\\
2\pi i & n = -1 \end{cases}
\end{align*}
For the second integral,
\begin{align*}
\int_\gamma \conj{z}^ndz &= \int_{0}^{2\pi} \gamma'(\theta) e^{-i\theta n} d\theta\\
&= i\int_{0}^{2\pi}e^{i(1-n)\theta} d\theta\\
&= \begin{cases}i\int_{0}^{2\pi} e^{i(1-n)\theta} d\theta & n \neq 1\\
i\int_{0}^{2\pi} 1 d\theta & n = 1 \end{cases}\\
&= \begin{cases} 0 & n \neq 1\\
2\pi i & n = 1 \end{cases}
\end{align*}
\end{solution}
\begin{problem}\label{one-over-z-around-circle}Let $\gamma:[a, b]\mapsto \C$ be a (positively oriented) parametrization of a circle
in the plane, and suppose the image of $\gamma$ does not include the
origin. Compute $\displaystyle\int_\gamma \frac{1}{z} \, dz$.
\end{problem}
\begin{solution}
Since $\gamma$ is a positively oriented parameterization of a circle, we can find a differentiable map $f:[a,b]\mapsto [0, 2\pi]$ such that $r(e^{if(t)} + z)= \gamma(t)$, where $r\in\R$ is the radius and $rz$ is the center of the circle parameterized by $\gamma$.
\begin{align*}
\int_\gamma \frac{1}{z} \, dz &= \int_a^{b} \frac{\frac{d}{dt}\gamma(t)}{\gamma(t)}dt\\
&= \int_a^{b} \frac{\frac{d}{dt}(re^{if(t)}+ rz)}{re^{if(t)} + rz}dt\\
&= \int_a^{b} \frac{if'(t)e^{if(t)}}{e^{if(t)} + z}dt\\
&= \int_0^{2\pi} \frac{ie^{iu}}{e^{iu} + z}du \quad \color{purple} u=f(t), du = f'(t) dt\\
&= \log(e^{ui} + z)\big|_{u=0}^{2\pi}
\end{align*}
Though we may be a bit suspicious about which log we are using. However, since the interior of $\gamma$ does not contain the origin, we can choose a branch of the log such that the angle which has a discontinuity is disjoint from the set of $\theta$ that satisfy $re^{i\theta}= e^{ui} +z$ for some $r, u$. Then the value of the above integral is 0 since $e^0 = e^{2\pi i}$.
On the other hand, if the interior of $\gamma$ does contain the origin, we can imagine that the argument of the parameter passed to log will continuously increase by $2\pi$, so if we switch branches of the log at the place that the derivative is continuous, the difference between the start and end point will be exactly $2\pi i$.
\end{solution}
\begin{problem}\label{lacunary-series}What is radius $R$ of convergence of
$\displaystyle\sum_{n=1} x^{(n!)}$? (This is a \textbf{lacunary
series} with large gaps between nonzero terms.)
\end{problem}
\begin{solution}
We use the following fact: the radius of convergence of a power series $R$ satisfies
\[\frac{1}{R} = \limsup_{k\to \infty} \sqrt[k]{|x_k|}\]
So we need to compute
\[\limsup_{n\to \infty} \sqrt[n!]{1} = 1\]
Therefore, the radius of convergence is 1.
\end{solution}
\section{Exploration}
\begin{problem}For the series in \ref{lacunary-series}, find a dense
subset of the circle $\{ z \in \C : \abs{z} = 1 \}$ where the series
diverges.
\end{problem}
\begin{solution}
Let $z=e^{2i\pi \theta}$ with $\theta$ some rational number $n/m$; such points form a dense subset of the circle. Note that for $x\geq m$, $m$ divides $x!$, so
\[z^{x!} = (e^{2i\pi n/m})^{x!} = e^{2i\pi n (x!/m)} = 1,\]
so infinitely many terms equal 1; the terms do not tend to zero and the series diverges at every such $z$.
\end{solution}
\begin{problem}
For which $z \in \C$ with $|z|=1$ does the series
$\displaystyle\sum_{n=0}^{\infty} \frac{z^n}{n}$ converge? Diverge?
\end{problem}
\begin{solution}
If $z=1$ then it's the harmonic series so it diverges. Otherwise it goes in a big spiral that obviously converges. But how to prove... Let's check by letting $z=e^{i\theta}, \theta\neq 0$ and using the integral test (checking if the integral from some lower bound to infinity is finite). It suffices to show that the real and imaginary part converge. For the imaginary part:
\begin{align*}
\int_{\pi/\theta}^\infty \frac{\sin(x\theta)}{x} dx &=
\sum_{k=1}^\infty (\int_{(2k + 1)\pi /\theta}^{(2k+2)\pi/\theta} \frac{\sin(x\theta)}{x} dx +
\int_{2k\pi/\theta}^{(2k + 1)\pi/\theta} \frac{\sin(x\theta)}{x} dx)
\end{align*}
Setting $u=x\theta, du = \theta dx$:
\begin{align*}
\int_{\pi/\theta}^\infty \frac{\sin(x\theta)}{x} dx &= \sum_{k=1}^\infty (\int_{(2k + 1)\pi}^{(2k+2)\pi} \frac{\sin(u)}{u} du +
\int_{2k\pi}^{(2k + 1)\pi} \frac{\sin(u)}{u} du)\\
&= \sum_{k=1}^\infty \int_{(2k + 1)\pi}^{(2k+2)\pi} \sin(u)\left(\frac{1}{u} - \frac{1}{u-\pi}\right) du \quad\color{purple} \text{ each term is nonnegative}\\
&\leq \sum_{k=1}^\infty \int_{(2k + 1)\pi}^{(2k+2)\pi} \left(\frac{1}{2k\pi} - \frac{1}{(2k+2)\pi}\right) du \\
&= \sum_{k=1}^\infty \left(\frac{1}{2k} - \frac{1}{2k+2}\right) = \frac{1}{2}
\end{align*}
This integral is finite so the imaginary part of the series converges. An analogous argument with cosine shows that the real part of the series converges, so the series must converge in the complex plane.
\end{solution}
\begin{problem}
Suppose $f : \C \to \C$ is a rational function which
sends the unit circle to the real line, i.e., for $z \in \C$ with
$|z| = 1$ we have $f(z) \in \R$. Inspired
by \ref{schwarz-reflection-principle}, compute
$\overline{f(1/\conj{z})}$ and then discuss the relationship between
the poles and zeros of $f$.
\end{problem}
\begin{solution}
If $|z|=1$, then $f(z)=\conj{f(z)}$ and $1/\conj{z} = z$. Thus
$\conj{f(1/\conj{z})} = f(z)$ on the unit circle. Since these are rational functions that agree on an infinite set of points, they must be equal everywhere.
Thus for any zero or pole $z_0$ of $f$, there is another zero or pole respectively at $\frac{1}{\conj{z_0}}= \frac{z_0}{|z_0|^2}.$
\end{solution}
\begin{problem}
In lecture, we briefly saw an example (the
\textbf{topologist's sine curve}) of a subset of $\mathbb{C}$ which
is connected but not path-connected. For open subsets of
$\mathbb{C}$, what is the relationship between connectedness and
path-connectedness?
\end{problem}
\begin{solution}
If an open subset of $\C$ is path-connected, it is also connected: for contradiction, suppose there were a disconnection formed from disjoint open sets $U$, $V$. Choosing points $P_U \in U, P_V \in V$, there is a path $f:[0, 1]\mapsto \C$ from $P_U$ to $P_V$. Let $s = \sup \{x\in [0,1]: f(x) \in U\}$. $s$ is not 1 since $f$ is continuous and there is a ball around $f(1)$ contained in $V$. Hence by the definition of sup, we can find a sequence $u_i \in [0, 1]$ with $f(u_i) \in U$ that converges to $s$, and any sequence of points $v_i \in [0, 1]$ with $v_i > s$ converging to $s$ has $f(v_i) \in V$, so $f(s)$ is a boundary point of both $U$ and $V$ contained in either $U$ or $V$, a contradiction.
Next we show that if an open subset $S\subset \C$ is connected, it is also path-connected. For each point $p \in S$, define $P$ to be the set of points of $S$ that we can reach from $p$ by a path in $S$. $P$ is open: if we can reach a point $q$, then since $S$ is open there is a (convex) open ball around $q$ contained in $S$, and every point of that ball can be reached by first going to $q$ and then along a straight segment.
Now consider the complement $S\setminus P$. This set is also open: if we cannot reach a point $s\in S\setminus P$, then we cannot reach any point in an open ball around it contained in $S$. So $S=(S\setminus P)\cup P$ is a union of two disjoint open sets. As $S$ is connected, this implies that one of $S\setminus P$ and $P$ is empty, and as $p\in P$, this implies $S\setminus P = \emptyset$ and $S=P$. Recalling the definition of $P$, we can reach any point in $S$ by a path starting at $p$, so we can reach any point from any other point by going first to $p$ and then to the second point.
\end{solution}
\begin{problem}\label{argument-principle-numerical}Consider $\gamma : [0,2\pi] \to \C$ given by $\gamma(\theta) = e^{i\theta}$. For an integer $n \in \Z$ and $f(z) = z^n$, compute
\[
\frac{1}{2\pi i} \displaystyle\int_\gamma \frac{f'(z)}{f(z)} \, dz
\]
in two different ways. First, evaluate $f'(z)/f(z)$ and invoke
\ref{integral-powers-of-z}. Second, describe a curve $\gamma_n$ and
interpret $\int_\gamma \frac{f'(z)}{f(z)} \, dz$ as
$\int_{\gamma_n} dz/z$ for that different curve $\gamma_n$. (This
is our first glimpse of the \textbf{argument principle}.)
\end{problem}
\begin{solution}
\[\frac{f'(z)}{f(z)} = \frac{nz^{n-1}}{z^n} = \frac{n}{z}\]
Then using \ref{integral-powers-of-z}, the integral's value is $n$.
Next, consider going along the curve $\gamma_n = f\circ \gamma$, it's the path that $f(z)$ traces out as we go along the curve. Algebraically, let $u = f(z)$ so $du = f'(z) dz.$
Then
\[
\int_{\gamma} \frac{f'(z)}{f(z)}dz = \int_{\gamma_n} \frac{1}{u} du = \int \frac{\gamma_n'(t)}{\gamma_n(t)}dt = \int_0^{2\pi} \frac{ine^{int}}{e^{int}}dt = 2in\pi
\]
Dividing by $2\pi i$, the value of the integral we were given is $n$.
\end{solution}
\begin{problem}\label{one-over-z-w-around-circle}
Consider the curve $\gamma : [0,2\pi] \to \C$ given by $\gamma(\theta) = e^{i\theta}$. For $w \in \C$ with $|w| \neq 1$, compute
\[
\frac{1}{2\pi i} \displaystyle\int_\gamma \frac{1}{z-w} \, dz
\]
perhaps by invoking \ref{one-over-z-around-circle}.
\end{problem}
\begin{solution}
Let $v = z - w,$ so $dv = dz$. Defining the function $\gamma_2(t) := \gamma(t) - w$, we see that
\[\int_\gamma \frac{1}{z-w}dz = \int_{\gamma_2} \frac{1}{v}dv\]
This is exactly the integral around a circle, and \ref{one-over-z-around-circle} shows that if the circle doesn't enclose the origin, then the integral is 0 and otherwise, (the specific proof I did shows), it is $2\pi i$. Thus
\[
\frac{1}{2\pi i} \displaystyle\int_\gamma \frac{1}{z-w} \, dz = \begin{cases}1 & |w| < 1\\ 0 & |w| > 1\end{cases}.
\]
\end{solution}
\begin{problem}
Yet again consider the curve $\gamma : [0,2\pi] \to \C$ given by $\gamma(\theta) = e^{i\theta}$. For distinct $w_1, w_2 \in \C$ with $\abs{w_1} \neq 1$ and $\abs{w_2} \neq 1$, let $f(z) = (z - w_1) (z - w_2)$ and compute
\[
\frac{1}{2\pi i} \displaystyle\int_\gamma \frac{f'(z)}{f(z)} \, dz.
\]
\end{problem}
\begin{solution}
Let $\gamma_1 = f\circ \gamma = (e^{i\theta} - w_1)(e^{i\theta} - w_2)$, and make the substitution $w=f(z)$ so $dw = f'(z)dz$.
Then
\begin{align*}
\frac{1}{2\pi i} \int_\gamma \frac{f'(z)}{f(z)} \, dz
&= \frac{1}{2\pi i} \int_{\gamma_1}\frac{1}{w} \, dw\\
&= \frac{1}{2\pi i} \int_0^{2\pi} \frac{\gamma_1'(t)}{\gamma_1(t)} \, dt\\
    &= \frac{1}{2\pi i} \int_0^{2\pi} \frac{ie^{i\theta}}{e^{i\theta} - w_1} + \frac{ie^{i\theta}}{e^{i\theta} - w_2} \, dt\\
    &= \frac{1}{2\pi i} (\int_0^{2\pi} \frac{ie^{i\theta}}{e^{i\theta} - w_1} dt + \int_0^{2\pi} \frac{ie^{i\theta}}{e^{i\theta} - w_2}dt )\\
    &= \frac{1}{2\pi i} (\int_{\gamma} \frac{1}{z - w_1} dz + \int_{\gamma} \frac{1}{z - w_2}dz )
\end{align*}
Invoking \ref{one-over-z-w-around-circle}, this is
\[\left(\begin{cases}1 & |w_1| < 1\\ 0 & |w_1| > 1\end{cases} \right) +
\left(\begin{cases}1 & |w_2| < 1\\ 0 & |w_2| > 1\end{cases}\right)\]
\end{solution}
\section{Prove or Disprove and Salvage if Possible}
\begin{problem}
If $e^z = e^w$, then $z = w$.
\end{problem}
\begin{solution}
False: $e^0 = e^{2\pi i}= 1$. However, since $e^{x + iy} = e^x(\cos(y) + i\sin(y))$, we can salvage: if $e^z = e^w$ then $\exists \, k\in\Z$ s.t. $z = w + 2\pi ik$.
\end{solution}
\begin{problem}
If $f \, dz$ is exact, then $\displaystyle\int_\gamma f \, dz = 0$.
\end{problem}
\begin{solution}
Additionally we require that $\gamma:[a,b]\to\C$ is a closed curve, i.e.\ $\gamma(a)=\gamma(b)$. In this case, letting $F$ be a primitive of $f$,
\begin{align*}
\int_\gamma f \, dz
&= \int_a^b f(\gamma(t))\,\gamma'(t) \, dt \\
&= \int_a^b \frac{d}{dt}(F\circ\gamma)(t) \, dt\\
&= F(\gamma(b)) - F(\gamma(a)) = 0
\end{align*}
\end{solution}
\begin{problem}% orientation issues
Suppose $\gamma : [0,1] \to \C$ is a smooth curve, and $p : [0,1] \to [0,1]$ is a smooth bijection.
Then
\[
\int_\gamma f(z) \, dz = \int_{\gamma \circ p} f(z) \, dz.
\]
\end{problem}
\begin{solution}
If $p(0)=1$, then $p$ reverses the orientation of the curve, which negates the integral.
Otherwise, we can prove it as follows.
\begin{align*}
\int_{\gamma \circ p} f(z) \, dz &= \int_0^1 \frac{d}{dt}(\gamma\circ p)(t)\; f((\gamma \circ p)(t)) \,dt\\
&= \int_{0}^1 \gamma'(p(t))\,p'(t)\, f(\gamma(p(t)))\, dt \\
&= \int_{0}^1 \gamma'(u)\, f(\gamma(u))\, du \quad \color{purple} u=p(t), du = p'(t)dt\\
&= \int_{\gamma} f(z) \, dz
\end{align*}
\end{solution}
\begin{problem}\label{identity-theorem}If $f \in \C[z]$ is a polynomial with infinitely many zeros, then
$f \equiv 0$.
\end{problem}
\begin{solution}
By the fundamental theorem of algebra, a nonzero polynomial $f$ of degree $n$ splits completely in $\C$ as $c\prod_{i=1}^n (z - r_i)$ where the $r_i$ are its roots. Such a product is nonzero at any number that is not one of the finitely many $r_i$, so a nonzero polynomial has only finitely many zeros; since $f$ has infinitely many, we must have $f\equiv 0$.
\end{solution}
\end{document}
| {
"alphanum_fraction": 0.3548112135,
"avg_line_length": 95.6978851964,
"ext": "tex",
"hexsha": "50c1c4aade17a405ef6761d2665c4a85baadb669",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "Alex7Li/math5522h",
"max_forks_repo_path": "problem-solutions/sol3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "Alex7Li/math5522h",
"max_issues_repo_path": "problem-solutions/sol3.tex",
"max_line_length": 759,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9f1fa070997f40e11e981c49e7d6fb9556e128d6",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "Alex7Li/math5522h",
"max_stars_repo_path": "problem-solutions/sol3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6892,
"size": 31676
} |
\section{Outlier Detection Engine}
\label{sec:implementation}
Each tuple is extended with all of the possible expansions of all of its fields.
These expanded tuples are fed into the statistical analyzer, which looks at the distribution of values in each expanded column and identifies tuples with expanded attribute values, or pairs of attribute values, that are outliers.
\input{overview}
\input{expansion}
%\input{preprocessing}
\input{statistical-analysis}
\input{model-creation}
\input{outlier-detection}
| {
"alphanum_fraction": 0.815324165,
"avg_line_length": 46.2727272727,
"ext": "tex",
"hexsha": "87e6abd77c893a8c88f82862864c301537e3241e",
"lang": "TeX",
"max_forks_count": 16,
"max_forks_repo_forks_event_max_datetime": "2022-02-28T06:42:36.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-04-21T12:28:33.000Z",
"max_forks_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "adrianlut/raha",
"max_forks_repo_path": "raha/tools/dBoost/paper/vldb/implementation.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d",
"max_issues_repo_issues_event_max_datetime": "2020-10-08T11:19:03.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-10-08T11:19:03.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "adrianlut/raha",
"max_issues_repo_path": "raha/tools/dBoost/paper/vldb/implementation.tex",
"max_line_length": 229,
"max_stars_count": 30,
"max_stars_repo_head_hexsha": "027ebeaf0ac4b524dc49df94e7bbc7be4391213d",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "adrianlut/raha",
"max_stars_repo_path": "raha/tools/dBoost/paper/vldb/implementation.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-07T07:44:58.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-05T12:03:45.000Z",
"num_tokens": 109,
"size": 509
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={homework\_01},
pdfauthor={Esteban Jorquera},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs,array}
\usepackage{calc} % for calculating minipage widths
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
\ifluatex
\usepackage{selnolig} % disable illegal ligatures
\fi
\title{homework\_01}
\author{Esteban Jorquera}
\date{2022-02-13}
\begin{document}
\maketitle
\hypertarget{practical-bam-file}{%
\subsection{Practical: BAM file}\label{practical-bam-file}}
\begin{verbatim}
# opens a qlogin session
qlogin
# copies the NA20538.bam file from its parent directory to a local work directory
cp /mnt/Timina/bioinfoII/data/format_qc/NA20538.bam /mnt/Citosina/amedina/ejorquera/BioInfoII/
# checks module availability
module avail
# loads a version (1.9) of the required program samtools, available from the module list
module load samtools/1.9
# displays samtools view options
samtools view -man
# loads the bam file and print only the header
samtools view -H NA20538.bam | less -S
\end{verbatim}
\hypertarget{what-does-rg-stand-for}{%
\section{What does RG stand for?}\label{what-does-rg-stand-for}}
RG stands for read group
\hypertarget{what-is-the-lane-id-lane-is-the-basic-independent-run-of-a-high--throughput-sequencing-machine.-for-illumina-machines-this-is-the-physical-sequencing-lane.-reads-from-one-lane-are-identified-by-the-same-read-group-id-and-the-information-about-lanes-can-be-found-in-the-header-in-lines-starting-with-rg.}{%
\section{What is the lane ID? (``Lane'' is the basic independent run of
a high- throughput sequencing machine. For Illumina machines, this is
the physical sequencing lane. Reads from one lane are identified by the
same read group ID and the information about lanes can be found in the
header in lines starting with
``@RG''.)}\label{what-is-the-lane-id-lane-is-the-basic-independent-run-of-a-high--throughput-sequencing-machine.-for-illumina-machines-this-is-the-physical-sequencing-lane.-reads-from-one-lane-are-identified-by-the-same-read-group-id-and-the-information-about-lanes-can-be-found-in-the-header-in-lines-starting-with-rg.}}
The lane ID is referenced by the ``ID'' tag
\hypertarget{what-is-the-sequencing-platform}{%
\section{What is the sequencing
platform?}\label{what-is-the-sequencing-platform}}
The sequencing platform is referenced by the ``PL'' tag; according to
this, the platform used was Illumina.
\hypertarget{what-version-of-the-human-assembly-was-used-to-perform-the-alignments}{%
\section{What version of the human assembly was used to perform the
alignments?}\label{what-version-of-the-human-assembly-was-used-to-perform-the-alignments}}
Found at the @SQ lines (Reference sequence dictionary), referenced in
the ``AS'' tag (NCBI37) and in the ``UR'' tag
(\url{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/reference/human_g1k_v37.fasta.gz}),
according to them, the reference genome assembly used corresponds to
GRCh37 (Genome Reference Consortium Human Build 37), specifically the
one downloaded from the 1000 Genomes Project
\hypertarget{what-programs-were-used-to-create-this-bam-file}{%
\section{What programs were used to create this BAM
file?}\label{what-programs-were-used-to-create-this-bam-file}}
Found at each @PG line, referenced in the ``ID'' tags (program name) and
``VN'' tags (version number), the programs used were:
\begin{itemize}
\tightlist
\item Genome Analysis Toolkit (GATK) - IndelRealigner, version 1.0.4487
\item Genome Analysis Toolkit (GATK) - TableRecalibration, version 1.0.4487
\item Burrows-Wheeler Aligner (bwa), version 0.5.5
\end{itemize}
\hypertarget{what-version-of-bwa-was-used-to-align-the-reads}{%
\section{What version of bwa was used to align the
reads?}\label{what-version-of-bwa-was-used-to-align-the-reads}}
As mentioned before, the used version of bwa corresponds to version
0.5.5, as mentioned in its ``VN'' tag
\hypertarget{what-is-the-name-of-the-first-read}{%
\section{What is the name of the first
read?}\label{what-is-the-name-of-the-first-read}}
\begin{verbatim}
# loads the bam file and also prints the header
samtools view -h NA20538.bam | less -S
\end{verbatim}
As shown in the first line that does not correspond to the header, the
first read is named ERR003814.1408899
\hypertarget{what-position-does-the-alignment-of-the-read-start-at}{%
\section{What position does the alignment of the read start
at?}\label{what-position-does-the-alignment-of-the-read-start-at}}
The alignment of read ERR003814.1408899 starts at position 19999970, as shown in
the fourth column (POS).
\hypertarget{what-is-the-mapping-quality-of-the-first-read}{%
\section{What is the mapping quality of the first
read?}\label{what-is-the-mapping-quality-of-the-first-read}}
The mapping quality of read ERR003814.1408899 is 23, as shown in
the fifth column (MAPQ).
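As a quick non-interactive check of the three answers above, the relevant SAM columns (QNAME, POS and MAPQ are columns 1, 4 and 5) of the first alignment record can be extracted directly (an added convenience command):
\begin{verbatim}
# print read name, alignment start position and mapping quality of the first alignment record
samtools view NA20538.bam | head -n 1 | cut -f 1,4,5
\end{verbatim}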
\section{Practical bcf}
\hypertarget{what-is-a-bcf}{%
\section{What is a bcf?}\label{what-is-a-bcf}}
A bcf file is the binary counterpart of a vcf (Variant Call Format) file,
which is a text file format for storing gene sequence variations; bcf
therefore stands for Binary variant Call Format.
\hypertarget{can-you-convert-bcf-to-vcf-using-bcftools-how}{%
\section{Can you convert bcf to vcf using bcftools?
How?}\label{can-you-convert-bcf-to-vcf-using-bcftools-how}}
Yes, by executing the following code:
\begin{verbatim}
# copies the 1kg.bcf file, and its index file from their parent directory to a local work directory
cp /mnt/Timina/bioinfoII/data/format_qc/1kg.bcf /mnt/Citosina/amedina/ejorquera/BioInfoII/
cp /mnt/Timina/bioinfoII/data/format_qc/1kg.bcf.csi /mnt/Citosina/amedina/ejorquera/BioInfoII/
# checks module availability
module avail
# loads a version (1.10.2) of the required program bcftools, available from the module list
module load bcftools/1.10.2
# loads the bcf file and saves it as a vcf file
bcftools view /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf > 1kg.vcf
\end{verbatim}
The view command makes bcftools read the binary file; redirecting its output
to a vcf file produces a human-readable version.
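Equivalently, bcftools can be told the output type and file name explicitly (a small variation using the standard -O/-o options):
\begin{verbatim}
# write an uncompressed vcf directly, without shell redirection
bcftools view -O v -o /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.vcf /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf
\end{verbatim}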
\hypertarget{how-many-samples-are-in-the-bcf-hint-use-the--l-option.}{%
\section{How many samples are in the BCF? Hint: use the -l
option.}\label{how-many-samples-are-in-the-bcf-hint-use-the--l-option.}}
\begin{verbatim}
# asks the bcf file for the sample list and counts the lines present
bcftools query -l /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf | wc -l
\end{verbatim}
query extracts fields from either a vcf or a bcf file. The -l option
makes query output the list of samples in the file, and piping this into
``wc -l'' counts the number of lines in the output.
Therefore, there are 50 samples in the file.
\hypertarget{what-is-the-genotype-of-the-sample-hg00107-at-the-position-2024019472-hint-use-the-combination-of--r--s-and--f-tgt-options.}{%
\section{\texorpdfstring{What is the genotype of the sample HG00107 at
the position 20:24019472? Hint: use the combination of -r, -s, and -f
`{[} \%TGT{]}\n'
options.}{What is the genotype of the sample HG00107 at the position 20:24019472? Hint: use the combination of -r, -s, and -f `{[} \%TGT{]}' options.}}\label{what-is-the-genotype-of-the-sample-hg00107-at-the-position-2024019472-hint-use-the-combination-of--r--s-and--f-tgt-options.}}
\begin{verbatim}
# shows the record in the bcf file for sample HG00107 at chromosome 20, position 24019472
bcftools view -r 20:24019472 -s HG00107 /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf
\end{verbatim}
At position 20:24019472, sample HG00107 has a heterozygous genotype, as
indicated by the ``0/1'' value of the ``GT'' tag. We cannot tell which
chromosome carries the reference (A) and which the alternative (T) allele,
because the ``/'' symbol (unphased) is used instead of ``\textbar{}'' (phased).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{table }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{\textquotesingle{}20\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}24019472\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}.\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}A\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}T\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}999\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}.\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}AN=2;AC=1;AC\_Het=16;AC\_Hom=2;AC\_Hemi=0\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}GT:PL:DP\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}0/1:235;0;148:16\textquotesingle{}}\NormalTok{), }\AttributeTok{ncol=}\DecValTok{10}\NormalTok{, }\AttributeTok{byrow=}\ConstantTok{TRUE}\NormalTok{)}
\FunctionTok{colnames}\NormalTok{(table) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{\textquotesingle{}\#CHROM\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}POS\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}ID\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}REF\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}ALT\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}QUAL\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}FILTER\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}INFO\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}FORMAT\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}HG00107\textquotesingle{}}\NormalTok{)}
\NormalTok{table\_k }\OtherTok{\textless{}{-}} \FunctionTok{as.data.frame}\NormalTok{(table)}
\FunctionTok{kable}\NormalTok{(table\_k,}\AttributeTok{align =} \StringTok{"c"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{longtable}[]{@{}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.07}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.09}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.04}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.04}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.04}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.05}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.07}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.35}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.09}}
>{\centering\arraybackslash}p{(\columnwidth - 18\tabcolsep) * \real{0.16}}@{}}
\toprule
\#CHROM & POS & ID & REF & ALT & QUAL & FILTER & INFO & FORMAT &
HG00107 \\
\midrule
\endhead
20 & 24019472 & . & A & T & 999 & . &
AN=2;AC=1;AC\_Het=16;AC\_Hom=2;AC\_Hemi=0 & GT:PL:DP &
0/1:235;0;148:16 \\
\bottomrule
\end{longtable}
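The genotype alone can also be extracted with bcftools query, which is where the -f format option from the hint belongs (an added sketch; the printed alleles should correspond to the A/T shown above):
\begin{verbatim}
# print chromosome, position and the translated genotype (TGT) of sample HG00107 at 20:24019472
bcftools query -r 20:24019472 -s HG00107 -f '%CHROM\t%POS[\t%SAMPLE=%TGT]\n' /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf
\end{verbatim}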
\hypertarget{how-many-positions-there-are-with-more-than-10-alternate-alleles-see-the-infoac-tag.-hint-use-the--i-filtering-option.}{%
\section{How many positions there are with more than 10 alternate
alleles? (See the INFO/AC tag.) Hint: use the -i filtering
option.}\label{how-many-positions-there-are-with-more-than-10-alternate-alleles-see-the-infoac-tag.-hint-use-the--i-filtering-option.}}
\begin{verbatim}
# asks the bcf file for any position with more than 10 alternate alleles and counts the lines present
bcftools query -f '%CHROM\t%POS\t%REF\t%ALT[\t%SAMPLE=%GT]\n' -i 'AC>10' /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf | wc -l
\end{verbatim}
query shows the data contained in the bcf file in the format given with the
-f option. Here the format string sets the output to chromosome, position,
reference allele, alternative allele, and each sample's genotype; the -i
filtering expression keeps only the lines whose ``AC'' value is greater than
10; and piping the result into ``wc -l'' counts the number of lines in the output.
Therefore, there are 4778 positions with more than 10 alternate
alleles.
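The same count can be cross-checked without query, using bcftools view's include filter and header suppression (an added alternative, not part of the original answer):
\begin{verbatim}
# count records where the alternate allele count (AC) exceeds 10, using view instead of query
bcftools view -H -i 'AC>10' /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf | wc -l
\end{verbatim}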
\hypertarget{list-all-positions-where-hg00107-has-a-non-reference-genotype-and-the-read-depth-is-bigger-than-10.}{%
\section{List all positions where HG00107 has a non-reference genotype
and the read depth is bigger than
10.}\label{list-all-positions-where-hg00107-has-a-non-reference-genotype-and-the-read-depth-is-bigger-than-10.}}
\begin{verbatim}
# lists positions where HG00107 has a non-reference genotype and read depth > 10, then counts them
bcftools query -f '%CHROM\t%POS\t%REF\t%ALT[\t%SAMPLE=%GT]\n' -s HG00107 -i 'DP>10 && GT="alt"' /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf | wc -l
\end{verbatim}
query shows the data contained in the bcf file in the format given with the
-f option; the format string again outputs chromosome, position, reference
allele, alternative allele, and the sample's genotype. The -s HG00107 option
restricts the evaluation to sample HG00107; the -i filtering expression keeps
only the lines with a read depth (``DP'') greater than 10 and a genotype
(``GT'') different from the reference; and piping the result into ``wc -l''
counts the number of lines in the output.
Therefore, there are 451 positions where sample HG00107 has a
non-reference genotype and the read depth is greater than 10.
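Because the question asks for the list of positions rather than their number, dropping the final ``wc -l'' prints them; piping into head previews the first ten lines (same filter as above):
\begin{verbatim}
# print the qualifying positions themselves (preview the first ten lines)
bcftools query -f '%CHROM\t%POS\t%REF\t%ALT[\t%SAMPLE=%GT]\n' -s HG00107 -i 'DP>10 && GT="alt"' /mnt/Citosina/amedina/ejorquera/BioInfoII/1kg.bcf | head
\end{verbatim}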
\section{Practical stats}
\hypertarget{using-samtools-stats-answer}{%
\section{Using samtools stats
answer}\label{using-samtools-stats-answer}}
\begin{verbatim}
# running samtools stats without arguments displays its usage and options
samtools stats
\end{verbatim}
\hypertarget{what-is-the-total-number-of-reads}{%
\section{What is the total number of
reads?}\label{what-is-the-total-number-of-reads}}
\begin{verbatim}
# shows stats for the NA20538.bam file and uses grep and cut to only show the summary numbers
samtools stats NA20538.bam | grep ^SN | cut -f 2-
\end{verbatim}
According to the summary numbers, the total number of reads is 347367.
\hypertarget{what-proportion-of-the-reads-were-mapped}{%
\section{What proportion of the reads were
mapped?}\label{what-proportion-of-the-reads-were-mapped}}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# total reads results of previous code}
\NormalTok{reads\_raw }\OtherTok{\textless{}{-}} \DecValTok{347367}
\CommentTok{\# total mapped reads results of previous code}
\NormalTok{reads\_mapped }\OtherTok{\textless{}{-}} \DecValTok{323966}
\CommentTok{\# percent of the total reads that were mapped}
\NormalTok{reads\_percent }\OtherTok{=}\NormalTok{ reads\_mapped}\SpecialCharTok{*}\DecValTok{100}\SpecialCharTok{/}\NormalTok{reads\_raw}
\end{Highlighting}
\end{Shaded}
The proportion of reads that were mapped corresponds to approximately 93.3\%.
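The two numbers used above can be pulled straight out of the stats output by matching their summary-number labels (an added convenience; label spellings as printed by samtools stats):
\begin{verbatim}
# extract the raw total and mapped read counts from the summary numbers
samtools stats NA20538.bam | grep ^SN | grep -e 'raw total sequences:' -e 'reads mapped:'
\end{verbatim}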
\hypertarget{how-many-reads-were-mapped-to-a-different-chromosome}{%
\section{How many reads were mapped to a different
chromosome?}\label{how-many-reads-were-mapped-to-a-different-chromosome}}
According to the summary data of the NA20538.bam file, 4055 reads had
pairs on different chromosomes
\hypertarget{what-is-the-insert-size-mean-and-standard-deviation}{%
\section{What is the insert size mean and standard
deviation?}\label{what-is-the-insert-size-mean-and-standard-deviation}}
According to the summary data of the NA20538.bam file, the mean insert
size was 190.3 and its standard deviation was 136.4.
\end{document}
| {
"alphanum_fraction": 0.7525384538,
"avg_line_length": 49.364764268,
"ext": "tex",
"hexsha": "a111bf28309fff7aa8e35492592c5ecf0e61b757",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-02-24T02:04:59.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-02-23T18:21:28.000Z",
"max_forks_repo_head_hexsha": "42b8d19c68da6195afcf8b7e3995982a6b11a15a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RamesDiego/bioinfo",
"max_forks_repo_path": "BioInfoII_homework_01.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "42b8d19c68da6195afcf8b7e3995982a6b11a15a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RamesDiego/bioinfo",
"max_issues_repo_path": "BioInfoII_homework_01.tex",
"max_line_length": 899,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "42b8d19c68da6195afcf8b7e3995982a6b11a15a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RamesDiego/bioinfo",
"max_stars_repo_path": "BioInfoII_homework_01.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6389,
"size": 19894
} |
\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{fullpage}
\usepackage{tikz}
\usepackage{enumitem}
\usetikzlibrary{shapes,arrows,calc,automata}
\tikzstyle{bt} = [rectangle, draw, fill=blue!20,
text width=4em, text centered, rounded corners, minimum height=2em]
\lstset{ %
language=Java,
basicstyle=\small \ttfamily,commentstyle=\scriptsize\itshape,showstringspaces=false,breaklines=true,numbers=left}
\usepackage{fontspec}
\setmonofont{Cousine}[Scale=MatchLowercase]
\begin{document}
\title{Software Testing, Quality Assurance \& Maintenance (ECE453/CS447/SE465): Midterm}
\author{}
\renewcommand{\today}{}
\maketitle
~\\[-7em]
\begin{center}
{\Large February 17, 2017}
\end{center}
This open-book midterm has 5 questions and 80 points. Answer the
questions in your answer book. You may consult any printed material
(books, notes, etc).
\section*{Question 1: Test Design (10 points)}
Sentence: Splitting the test makes it easier to pinpoint the cause of the failure
because of the descriptive test name and because the tests run more quickly.
\begin{lstlisting}[basicstyle=\scriptsize \ttfamily]
@Test
public void testStatic() throws Throwable {
// L3
}
@Test
public void testDeprecated() throws Throwable {
// L4
}
@Test
public void testReturnTypes() throws Throwable {
// L5, L6
}
@Test
public void testParameterTypes() throws Throwable {
// L7
}
@Test
public void testExceptionTypes() throws Throwable {
// L8
}
\end{lstlisting}
\newpage
\section*{Question 2: Fuzzing (10 points)}
\begin{lstlisting}
void naiveComputeAngleTest() {
Random r = new Random();
computeAngle(r.nextInt(), r.nextInt(), r.nextInt());
}
\end{lstlisting}
This is the bottleneck I mentioned in the notes. Most of the time the
execution of the method under test would simply stop at the {\tt
IllegalArgumentException} and not reach the key {\tt atan2} call. So, no,
{\tt naiveComputeAngleTest} would not give you good insight into what
{\tt computeAngle} does.
For the next part, I asked the TAs to be fairly lenient, especially with
respect to rounding issues. In particular it was OK to assume that the
{\tt sqrt} was close enough often enough. There are more clever solutions possible.
But I did ask you to never call {\tt computeAngle} with bad args, so you had
to filter that out.
\begin{lstlisting}
void smartComputeAngleTest() {
Random r = new Random();
while (true) {
int x = r.nextInt(), y = r.nextInt();
    int z = (int) Math.sqrt(x*x + y*y);
if (z*z == x*x + y*y) {
computeAngle(x, y, z);
}
}
}
\end{lstlisting}
A more clever solution (thanks Jun!) would be:
\begin{lstlisting}
void smartComputeAngleTest() {
Random r = new Random();
while (true) {
int a = r.nextInt(), b = r.nextInt();
computeAngle(a*a-b*b, 2*a*b, a*a+b*b); // revised
}
}
\end{lstlisting}
\newpage
\section*{Question 3: Short-circuit evaluation (15 points)}
Whoops. I had 8 nodes. Sorry. Also, there was a missing return, which you might choose to include or not.
\begin{center}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,
semithick,initial text=]
\node[bt] (1) {L3};
\node[bt] (2) [below of=1, yshift=.8em] {L4};
\node[bt,text width=10em] (3) [below of=2] {if (index $>$ 0) (L5a)};
\node[bt,text width=13em] (4) [below right of=3,yshift=-1em] {if (formatString...) (L5b)};
\node[bt] (5) [below of=4,xshift=.4em] {(L7-8)};
\node[bt] (6) [below left of=5,xshift=-.3em] {(L11)};
\node[bt] (7) [below of=6] {(L13)};
\node[bt] (8) [left of=2,xshift=-5em] {(L15)};
\path (1) edge node {} (2)
(2) edge node {T} (3)
(2) edge node[above] {F} (8)
(3) edge node[yshift=-.6em] {T} (4)
(4) edge node[yshift=-.6em] {T} (5)
(4) ++ (-1, -.5) edge node[left] {F} (6)
(3.south west) edge[bend right] node[left] {F} (6.south west)
(6) edge node {} (7);
\draw [->] (5.east) .. controls ++ (4, 0) and (4, -2) .. (2.east);
\draw [->] (7.west) .. controls ++ (-3, 0) and (-3, -1) .. (2.west);
\end{tikzpicture}
\end{center}
The missing branch is L5a true, L5b false (i.e.
\verb+(index > 0) && formatString.charAt(index - 1) != '!'+);
the case \verb+"$"+ is L5a false and done, while the case \verb+"!$"+ is
L5a true, L5b true and done.
The question was somewhat ambiguous about whether the case should achieve 100\%
branch coverage on its own or combined with the two previous test cases.
Due to ambiguity, TAs should accept either one, but it's pretty easy to
make a standalone case that achieves 100\% branch coverage: \verb+"$!$$"+
achieves all of the previous branches and also includes a case where
\verb+formatString.charAt(index - 1) == '!'+.
\newpage
\section*{Question 4: Statement and Branch Coverage (20 points)}
Sorry again. I added a node at the last minute. For reference:
\begin{lstlisting}
public static List<String> wrap(String input, int line_length) {
List<String> rv = new ArrayList<String>();
int last_break = -1, last_space = 0;
for (int i = 0;
i < input.length();
i++) {
if (input.charAt(i) == ' ') {
last_space = i;
}
if (i - last_break > line_length) {
rv.add(input.substring(last_break + 1, last_space));
last_break = i;
}
}
if (last_space >= last_break + 1) {
rv.add(input.substring(last_break + 1, last_space));
}
return rv;
}
\end{lstlisting}
\begin{center}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,
semithick,initial text=]
\node[bt] (1) {L2-5};
\node[bt] (2) [below of=1, yshift=.8em] {L6};
\node[bt] (3) [below of=2] {L8};
\node[bt] (4) [below right of=3,xshift=3em,yshift=-.5em] {L9};
\node[bt] (5) [below of=3,yshift=-2em] {L12};
\node[bt] (6) [below right of=5,xshift=3em] {L13-14};
\node[bt] (7) [below left of=6,xshift=-3em] {L7};
\node[bt] (8) [below of=7] {L17};
\node[bt] (9) [below right of=8,yshift=-.5em] {L18};
\node[bt] (10) [below left of=9,yshift=-.5em] {L20};
\path (1) edge node {} (2)
(2) edge node {T} (3)
(3) edge node[yshift=-.6em] {T} (4)
(3) edge node {F} (5)
(4) edge node {} (5)
(5) edge node {T} (6)
(5) edge node {F} (7)
(6) edge node {} (7)
(8) edge node[right] {T} (9)
(8) edge node[left] {F} (10)
(9) edge node {} (10);
\draw [->] (2.west) .. controls ++ (-4, 0) and (-5, -6) .. node[left] {F} (8);
\draw [->] (7.west) .. controls ++ (-3, 0) and (-3, -1) .. (2.south west);
\end{tikzpicture}
\end{center}
The argument should proceed node-by-node. Lines 2--5 and L6 are obvious enough
that they don't need to be mentioned. Continuing to Line 8, we initially have
{\tt i = 0} and \verb+input.length > 0+, so we definitely enter the loop.
We also observe that the loop iterates over every possible {\tt i} (it doesn't skip any).
We reach line 9 because the input string contains spaces. Once we're in the loop,
Line 12 is unavoidable. \verb+last_break+ starts at -1 and is only changed inside the
if, and \verb+i - last_break+ exceeds \verb+line_length+ because we observe more than
one line in the output, so we definitely execute Lines 13--14.
Finally, Line 7 is unavoidable inside the loop as well. Because the program
terminates, we definitely exit the loop and reach Line 17. Finally,
we reach Line 18 because \verb+last_space+ is 13 on exit while \verb+last_break+ is 9.
Line 20 is obvious.
The branches that might not be covered given statement coverage are L8--L12, L12--L7, and L17--L20.
L8--L12 is executed on a non-space input, which the input clearly includes.
L12--L7 also gets executed for characters that don't cause line breaks, which
clearly exists because we observe lines that are longer than 1 letter long.
This input does not execute the branch L17--L20 because we only execute L17 once
and, as we argued above, that execution takes the true branch.
\newpage
\section*{Question 5: Understanding Mutation (25 points)}
For part (a), there are lots of non-trivial mutants. One such mutant is to replace
the constant "1" on line 18 by the constant "0".
This will cause the second line of the output to start with a space.
The mutant is non-trivial: any input which doesn't wrap (e.g. same {\tt input},
\verb+line_length = 40+) does not kill the mutant.
Part (b): Test case \verb+input = "one three two", line_length = 9+
without trailing space has the same expected output (\verb+one three\ntwo+)
and actual output \verb+one three+.
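For completeness, here is a small self-contained check of that test case (an addition, not part of the original solutions); it assumes the {\tt wrap} method from Question 4 lives in a class called {\tt Wrapper}:
\begin{lstlisting}
import java.util.Arrays;
import java.util.List;

public class WrapCheck {
    public static void main(String[] args) {
        // Expected wrapping of "one three two" at line length 9.
        List<String> expected = Arrays.asList("one three", "two");
        // Wrapper.wrap is assumed to hold the wrap() method from Question 4.
        List<String> actual = Wrapper.wrap("one three two", 9);
        System.out.println("expected: " + expected);
        System.out.println("actual:   " + actual);
        // With the buggy implementation this prints FAIL: the trailing "two" is dropped.
        System.out.println(expected.equals(actual) ? "PASS" : "FAIL");
    }
}
\end{lstlisting}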
Part (c) depends on your answer from part (b).
Another mutant is on Line 17: changing \verb!"+ 1"! to
\verb!"+ 0"! will cause a crash on my input from (b).
On the other hand, \verb+input = "one three two ", line_length = 9+ shows
that the mutant is non-trivial. (I'd intended a fix for the
bug but that's actually harder than it seems, especially
without a computer.)
%Part (c): The most obvious mutant is the one that happens to fix the bug (which
%clearly also kills your test case), but many other mutants are also possible.
%The fix is on line 14: replace {\tt i} with \verb+last_space+.
\end{document}
| {
"alphanum_fraction": 0.6599634369,
"avg_line_length": 36.32421875,
"ext": "tex",
"hexsha": "3d7651f406b95cf578e67bf818e7e967ebcf57c9",
"lang": "TeX",
"max_forks_count": 14,
"max_forks_repo_forks_event_max_datetime": "2018-04-14T20:06:46.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-09T18:29:15.000Z",
"max_forks_repo_head_hexsha": "63db2b4e97c0f53e49f3d91696e969ed73d67699",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "patricklam/stqam-2017",
"max_forks_repo_path": "exams/midterm-solutions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "63db2b4e97c0f53e49f3d91696e969ed73d67699",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "patricklam/stqam-2017",
"max_issues_repo_path": "exams/midterm-solutions.tex",
"max_line_length": 113,
"max_stars_count": 30,
"max_stars_repo_head_hexsha": "63db2b4e97c0f53e49f3d91696e969ed73d67699",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "patricklam/stqam-2017",
"max_stars_repo_path": "exams/midterm-solutions.tex",
"max_stars_repo_stars_event_max_datetime": "2018-04-15T22:27:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-12-17T01:10:11.000Z",
"num_tokens": 2894,
"size": 9299
} |
\documentclass{article}
\begin{document}
\author{Dr. Desmond Moru}
\title{Cardano's Formula for Cubic Equations}
\maketitle
\begin{center}
\textbf{Abstract}
\end{center}
Gerolamo Cardano was born in Pavia in 1501 as the illegitimate child of a jurist. He attended the University of Padua and became a physician in the town of Sacco after being rejected by the College of Physicians in Milan, and he later treated the Pope. He was also an astrologer and an avid gambler, and he wrote the Book on Games of Chance, the first serious treatise on the mathematics of probability.
\section{ Introduction to Cardano's Formula}
Cardano's formula solves a cubic equation of the form
\begin{equation}
x^3 + a_{1}x^2 + a_{2}x + a_{3} = 0
\end{equation}
the parameters Q, R, S and T can be computed as
\begin{equation}
Q=\frac{3a_{2}-a_{1}^2}{9} \qquad R=\frac{9a_{1}a_{2}-27a_{3}-2a_{1}^3}{54} \qquad
S=\sqrt[3]{R+\sqrt{Q^3 + R^2}} \qquad
T=\sqrt[3]{R-\sqrt{Q^3 + R^2}}
\end{equation}
to give the roots:
\begin{flushleft}
\begin{equation}
x_{1} = S + T-\frac{1}{3}a_{1}
\end{equation}
\end{flushleft}
\begin{flushleft}
\begin{equation}
x_{2}=-\frac{S+T}{2} - \frac{a_1}{3} + i\,\frac{\sqrt{3}(S-T)}{2}
\end{equation}
\end{flushleft}
\begin{equation}
x_{3}=-\frac{S+T}{2} - \frac{a_1}{3} - i\,\frac{\sqrt{3}(S-T)}{2}
\end{equation}
Note: the coefficient of $x^3$ must be 1 before the formula is applied.
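As a quick illustrative check (an example added here, not part of Cardano's original text), take the cubic $x^3 + x - 2 = 0$, so that $a_1 = 0$, $a_2 = 1$ and $a_3 = -2$. Then
\begin{equation}
Q=\frac{1}{3} \qquad R=1 \qquad
S=\sqrt[3]{1+\sqrt{\frac{1}{27}+1}}\approx 1.264 \qquad
T=\sqrt[3]{1-\sqrt{\frac{1}{27}+1}}\approx -0.264
\end{equation}
and $x_{1}=S+T-\frac{1}{3}a_{1}=1$, which is indeed a root of $x^3+x-2=0$.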
\end{document} | {
"alphanum_fraction": 0.6805949008,
"avg_line_length": 30.6956521739,
"ext": "tex",
"hexsha": "9dc40ffbf100f719eccc3053642363248453b92e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2c5a99694226c11f3c1a0feb885bf5300e7132ae",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adaobi15/adaobiCSC101",
"max_forks_repo_path": "class project iia .tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2c5a99694226c11f3c1a0feb885bf5300e7132ae",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adaobi15/adaobiCSC101",
"max_issues_repo_path": "class project iia .tex",
"max_line_length": 398,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2c5a99694226c11f3c1a0feb885bf5300e7132ae",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adaobi15/adaobiCSC101",
"max_stars_repo_path": "class project iia .tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 519,
"size": 1412
} |
%8
\chapter{Clause combinations} \label{chap:8}
%\hypertarget{RefHeading22901935131865}
Some linguistic models, the mainstream generative grammar in particular, disregard the distinction between a clause and a sentence, but here the distinction is maintained. One of the main reasons is the medial clause system operating in Mauwake. A simple sentence in Mauwake consists of one clause, but if that is a verbal clause, it must be a finite clause, not a medial one, as medial clauses only function within a sentence in combination with other clauses. Their distribution is restricted to non-final position in a sentence -- they may occur sentence-finally only if they are dislocated or the final clause is ellipted. Medial clauses also add the chaining structure to the clause combination possibilities (\sectref{sec:8.2}), besides regular coordination (\sectref{sec:8.1}) and subordination (\sectref{sec:8.3}).
A sentence has the following features. It consists of one or more clauses. The end of a sentence is marked in speech by a falling intonation, or by a slightly rising intonation in polar questions, and normally a pause. The sentence-final falling intonation is clear, and can be distinguished from a less noticeable fall at the end of a non-final finite clause. In writing the end of a sentence is marked by a full stop, a question mark or an exclamation mark.
A simple sentence is the same as a clause, and was discussed in \chapref{chap:5}. When two main clauses are joined in a coordinate sentence, they are independent of each other as to their functional sentence type. In \REF{ex:8:x1352} the first clause is declarative and the second one interrogative; in \REF{ex:8:x1358} the first clause is imperative and the second one declarative, but the order could also be reversed.
\ea%x1352
\label{ex:8:x1352}
\gll Yo owora=ko me aaw-e-m, no moram efa ma-i-n?\\
1s.\textsc{unm} betelnut=\textsc{nf} not take-\textsc{pa}-1s 2s.\textsc{unm} why 1s.\textsc{acc} say-\textsc{Np}-\textsc{pr}.2s\\
\glt `I didn't take the betelnut, why do you accuse me?'
\z
\ea%x1358
\label{ex:8:x1358}
\gll Ni uf-owa ikiw-eka, yo miatin-i-yem. \\
2p.\textsc{unm} dance-\textsc{nmz} go-\textsc{imp}.2p 1s.\textsc{unm} dislike-\textsc{Np}-\textsc{pr}.1s\\
\glt `(You) go to dance, I don't want to.'
\z
In clause chaining (\sectref{sec:8.2}) and in complex clauses involving main and subordinate\linebreak clauses (\sectref{sec:8.3}), the situation is more complicated. Formally almost all of the subordinate and medial clauses are neutral/declarative. A subordinate clause typically lacks an illocutionary force of its own \citep[32]{Cristofaro2003} and conforms to the functional sentence type of the main clause. In the examples \REF{ex:8:x1357}--\REF{ex:8:x1898}, the subordinate clauses are in brackets.
\ea%x1357
\label{ex:8:x1357}
\gll {\ob}Ni ifa nia keraw-i-ya nain{\cb} sira kamenap on-i-man?\\
2p.\textsc{unm} snake 2p.\textsc{acc} bite-\textsc{Np}-\textsc{pr}.3s that1 custom what.like do-\textsc{Np}-\textsc{pr}.2p\\
\glt `When a snake bites you, what do you do?'
\z
\ea%x1897
\label{ex:8:x1897}
\gll Ni {\ob}yapen ... wiar in-em-ik-e-man nain{\cb} kerer-omak-eka!\\
2p.\textsc{unm} inland {\dots} 3.\textsc{dat} sleep-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-2p that1 arrive-\textsc{distr/pl}-2p.\textsc{imp}\\
\glt `Those (many) of you, who have stayed inland, arrive (back in your villages)!'
\z
\ea%x1898
\label{ex:8:x1898}
\gll {\ob}Ni uf-ep-na{\cb} ni maadara me iirar-eka.\\
2p.\textsc{unm} dance-\textsc{ss}.\textsc{seq}=\textsc{tp} 2p.\textsc{unm} forehead.ornament not remove-2p.\textsc{imp}\\
\glt `If/when you have danced, do not remove your forehead ornaments.'
\z
The non-polar questions are an exception, since the question word may also be in a subordinate clause \REF{ex:8:x1362}. When a subordinate clause contains a question word, the illocutionary force of a question spreads to whole sentence.
\ea%x1362
\label{ex:8:x1362}
\gll No {\ob}\textstyleEmphasizedVernacularWords{kaaneke} \textstyleEmphasizedVernacularWords{ikiw-owa}{\cb} efa maak-i-n?\\
2s.\textsc{unm} where.\textsc{cf} go-\textsc{nmz} 1s.\textsc{unm} tell-\textsc{Np}-\textsc{pr}.2s\\
\glt `You are telling me to go where?'
\z
A medial clause is coordinate with the main clause but dependent on it (\sectref{sec:8.2}). The imperative form is only possible in finite verbs, and the polar question marker only occurs sentence-finally.\footnote{As an alternative marker, the \textsc{qm} is used in non-final clauses as well (\sectref{sec:3.12.8}, \sectref{sec:8.1.2}).} Because of these formal restrictions, it is impossible to have an imperative or interrogative medial clause coordinated with a declarative main clause. A medial clause commonly conforms to the illocutionary force of the final clause, but it does not need to do so. In the examples \REF{ex:8:x1899} and \REF{ex:8:x1900} the bracketed medial clause is questioned with the main clause, in \REF{ex:8:x1901} and \REF{ex:8:x1902} it is not.
\ea%x1899
\label{ex:8:x1899}
\gll {\ob}Maamuma uruf-ap{\cb} ma-i-n-i? \\
money see-\textsc{ss}.\textsc{seq} say-\textsc{pa}-2s=\textsc{qm}\\
\glt `Have you seen the money and (so) ask?'
\z
\ea%x1900
\label{ex:8:x1900}
\gll {\ob}Yo pina on-amkun=ko{\cb} efa uruf-a-man=i?\\
1s.\textsc{unm} guilt do-1s/p.\textsc{ds}=\textsc{nf} 2s.\textsc{acc} see-\textsc{pa}-2p=\textsc{qm}\\
\glt `Did I do wrong and you saw me?'
\z
\ea%x1901
\label{ex:8:x1901}
\gll {\ob}Sande erup weeser-eya{\cb} owowa ekap-e-man=i? \\
week two finish-2/3s.\textsc{ds} village come-\textsc{pa}-2p=\textsc{qm}\\
\glt `When two weeks were finished, did you (then) come to the village?'
\z
\ea%x1902
\label{ex:8:x1902}
\gll {\ob}...ikoka ekap-ep{\cb} sira nain piipua-i-nan=i e weetak? \\
later come-\textsc{ss}.\textsc{seq} habit that1 leave-\textsc{Np}-\textsc{fu}.2s=\textsc{qm} or no\\
\glt `{\dots}later when you come, will you drop that habit or not?'
\z
When a medial clause itself contains a question word, the illocutionary force spreads to the whole sentence \REF{ex:8:x1363}, \REF{ex:8:x1903}.
\ea%x1363
\label{ex:8:x1363}
\gll {\ob}\textstyleEmphasizedVernacularWords{No} \textstyleEmphasizedVernacularWords{maa} \textstyleEmphasizedVernacularWords{mauwa} \textstyleEmphasizedVernacularWords{uruf-ap}{\cb} soran-ep kirir-e-n?\\
2s.\textsc{unm} thing what see-\textsc{ss}.\textsc{seq} be.startled-\textsc{ss}.\textsc{seq} shout-\textsc{pa}-2s\\
\glt `What did you see and (then) got startled and shouted?'
\z
\ea%x1903
\label{ex:8:x1903}
\gll {\ob}\textstyleEmphasizedVernacularWords{Naareke} \textstyleEmphasizedVernacularWords{nia} \textstyleEmphasizedVernacularWords{maak-eya}{\cb} ekap-e-man? \\
who.\textsc{cf} 2p.\textsc{acc} tell-2/3s.\textsc{ds} come-\textsc{pa}-2p\\
\glt `Who told you to come?' (Lit: `Who told you and you came?')
\z
When the final clause is in the imperative mood, the implication of a command often extends backwards to a medial verb marked for the same subject \REF{ex:8:x1365}, but not so easily to one marked for a different subject. In (7.\ref{ex:7:x1364}) above, the command/request extends to the medial clause, whereas in \REF{ex:8:x1356} it does not. For more examples, see (7.\ref{ex:7:x1082})--(7.\ref{ex:7:x1083}) above.
\ea%x1365
\label{ex:8:x1365}
\gll {\ob}\textstyleEmphasizedVernacularWords{No} \textstyleEmphasizedVernacularWords{nena} \textstyleEmphasizedVernacularWords{maa} \textstyleEmphasizedVernacularWords{fariar-ep}{\cb} \textstyleEmphasizedVernacularWords{muuka} \textstyleEmphasizedVernacularWords{nain} \textstyleEmphasizedVernacularWords{arim-ow-e}.\\
2s.\textsc{unm} 2s.\textsc{gen} food abstain-\textsc{ss}.\textsc{seq} son that1 grow-\textsc{caus}-\textsc{imp}.2s\\
\glt `Abstain from (certain) food(s) and bring up the son.'
\z
\ea%x1356
\label{ex:8:x1356}
\gll {\ob}Nefa war-iwkin{\cb} \textstyleEmphasizedVernacularWords{naap} \textstyleEmphasizedVernacularWords{ ma-e}. \\
2s.\textsc{acc} shoot-2/3p.\textsc{ds} thus say-\textsc{imp}.2s \\
\glt `(If/when) they shoot you, (then) say like that.'
\z
Although it is impossible to have an imperative verb form in a medial clause, a ``soft'' command/request (\sectref{sec:7.3}) may be used in medial clauses, as it takes the medial verb form. In \REF{ex:8:x1366}, the first clause is a request, the second one a statement.
\ea%x1366
\label{ex:8:x1366}
\gll Aite, {\ob}\textstyleEmphasizedVernacularWords{i} \textstyleEmphasizedVernacularWords{ aaya=ko} \textstyleEmphasizedVernacularWords{ yia} \textstyleEmphasizedVernacularWords{ aaw-om-aya}{\cb} enim-i-yan. \\
1s/p.mother 1p.\textsc{unm} sugarcane=\textsc{\textsc{nf}} 1p.\textsc{acc} get-\textsc{ben}-\textsc{bnfy}2.2/3s.\textsc{ds} eat-\textsc{Np}-\textsc{fu}.1p\\
\glt `Mother, get us sugarcane and we will eat it.'
\z
\section{Coordination of clauses}\label{sec:8.1}
%\hypertarget{RefHeading22921935131865}
Coordination links units of ``equivalent syntactic status'' \citep[93]{Crystal1997}. Clausal coordination commonly refers to the coordination of main clauses, as that is much more frequent than the coordination of subordinate clauses. In the following, it is assumed that the discussion is about main clause coordination unless stated otherwise.
The main clauses joined by coordination are independent in the sense that they could stand alone as individual sentences. Examples \REF{ex:8:x1352} and \REF{ex:8:x1358} above show that they can even manifest different functional sentence types. But they are called clauses firstly because they are coordinated within one sentence, and secondly for the sake of consistency, since the coordinated medial (\sectref{sec:8.2.1}) and subordinate clauses (\sectref{sec:8.3.7}) could not be called sentences.
As \citet[848]{Givon1990} points out, no clause in a text is truly independent from its context. Likewise, the coordination vs. subordination of clauses is in many languages a matter of degree rather than a clear-cut distinction.
Although the chaining of medial and final clauses (\sectref{sec:8.2}) is the main strategy for combining clauses in Mauwake, coordination of main clauses is also common. It is used not only for the cross-linguistically typical cases of conjunction, disjunction, and adversative relations between clauses, but also for causal and consecutive relations.
\subsection{Conjunction} \label{sec:8.1.1}
%\hypertarget{RefHeading22941935131865}
Conjunction is the most neutral form of coordination: two or more clauses are joined in a sentence, with or without a link between them. If there is a link, it is a pragmatic additive that does not specify the semantic relationship between the clauses. This sometimes allows different interpretations for the relationship, but usually the context constrains the interpretation considerably.
\subsubsection{Juxtaposition}
%\hypertarget{RefHeading22961935131865}
In juxtaposition\footnote{Also called ``zero strategy'' by \citet[25]{Payne1985}.} two or more clauses are joined without any linking device at all. According to \citet[8]{Haspelmath2007} unwritten languages tend to lack their own coordinators and therefore use more juxtaposition and/or coordinators borrowed from other, more prestigious languages.
In Mauwake, juxtaposition is the most typical strategy for conjunction overall. Especially the coordination of verbless clauses is often symmetrical: the reversal of the conjuncts is possible without a change of meaning \REF{ex:8:x1367}, \REF{ex:8:x1390}.
\ea%x1367
\label{ex:8:x1367}
\gll Wi Yaapan emeria weetak, mua manek=iw. \\
3p.\textsc{unm} Japan woman no man big=\textsc{lim}\\
\glt `The Japanese didn't have any wives, (they were) just the men.'
\z
\ea%x1390
\label{ex:8:x1390}
\gll Kuuten wiawi iperowa, yo auwa kapa=ke. \\
Kuuten 3s/p.father firstborn 1s.\textsc{unm} 1s/p.father lastborn=\textsc{cf} \\
\glt `Kuuten's father was the firstborn (son), my father was the lastborn.'
\z
When one of the conjuncts is a verbless clause and another is a verbal one, symmetrical conjunction is quite common \REF{ex:8:x1391}:
\ea%x1391
\label{ex:8:x1391}
\gll I uruwa miim-i-mik, ni sosora=ke.\\
1p.\textsc{unm} loincloth precede-\textsc{Np}-\textsc{pr}.1/3p 2p.\textsc{unm} grass.skirt=\textsc{cf}\\
\glt `We father's side of the family (lit: loincloth) go first, you are mother's side (lit: grass skirt).'
\z
Symmetrical conjunction of verbal clauses may be used, when there is parallelism between the clauses \REF{ex:8:x1368}, \REF{ex:8:x1392}:
\ea%x1368
\label{ex:8:x1368}
\gll Na-emi wi afa ar-omak-e-mik, osaiwa ar-e-mik, biri-birin-e-mik.\\
say-\textsc{ss}.\textsc{sim} 3p.\textsc{unm} flying.fox become-\textsc{distr}/\textsc{pl}-\textsc{pa}-1/3p bird.of.paradise become-\textsc{pa}-1/3p \textsc{rdp}-fly-\textsc{pa}-1/3p\\
\glt `Saying so, they became many flying foxes, they became birds of paradise, they flew (away).'
\z
\ea%x1392
\label{ex:8:x1392}
\gll Aria makera miirifa okaiwi soo=pa kaik-i-mik, okaiwi pia kaik-i-mik.\\
alright cane end other.side trap=\textsc{loc} tie-\textsc{Np}-\textsc{pr}.1/3p other.side bamboo tie-\textsc{Np}-\textsc{pr}.1/3p\\
\glt `Alright we tie one end of the cane to the trap, the other to a (piece of) bamboo.'
\z
In \REF{ex:8:x1851} the medial clause relates to both of the final clauses, not just to the first one:
\ea%x1851
\label{ex:8:x1851}
\gll Koora=pa efa uruf-am-ik-eya \textstyleEmphasizedVernacularWords{ikiw}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{i}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{nen} \textstyleEmphasizedVernacularWords{ekap}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{i}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{nen}.\\
house=\textsc{loc} 1s.\textsc{acc} see-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} go-\textsc{Np}-\textsc{fu}.1s come-\textsc{Np}-\textsc{fu}.1s\\
\glt `You see me from the house and/as I will go and come.'
\z
When the coordination is not symmetrical, the clause in the second conjunct is an example or an explanation of the first clause \REF{ex:8:x1370}, or it follows the first one in a temporal sequence \REF{ex:8:x1369}.
\ea%x1370
\label{ex:8:x1370}
\gll Auwa aite wia karu-i-yen, owowa=pa wia uruf-u.\\
1s/p.father 1s/p.mother 3p.\textsc{acc} visit-\textsc{Np}-\textsc{fu}.1p village=\textsc{loc} 3p.\textsc{acc} see-1d.\textsc{imp}\\
\glt `We'll visit my parents, let's see them in the village.'
\z
\ea%x1369
\label{ex:8:x1369}
\gll Miiw-aasa um-eya miiw-aasa nain on-am-ika-iwkin \textstyleEmphasizedVernacularWords{epa} \textstyleEmphasizedVernacularWords{kokom(a)-ar-e-k,} \textstyleEmphasizedVernacularWords{epa} \textstyleEmphasizedVernacularWords{iimeka} \textstyleEmphasizedVernacularWords{tuun-e-k}.
\\
land-canoe die-2/3s.\textsc{ds} land-canoe that1 do-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} place dark-\textsc{inch}-\textsc{pa}-3s place ten count?-\textsc{pa}-3s\\
\glt `The truck broke and while they were fixing the truck it became dark, (then) it was midnight.'
\z
A fairly common structure is one where the first conjunct is not directly followed by another finite clause but by one or more medial clauses before the final clause \REF{ex:8:x1371}:
\ea%x1371
\label{ex:8:x1371}
\gll \textstyleEmphasizedVernacularWords{Ikemika} \textstyleEmphasizedVernacularWords{kaik-ow(a)} \textstyleEmphasizedVernacularWords{mua} \textstyleEmphasizedVernacularWords{nain} \textstyleEmphasizedVernacularWords{nop-a-mik}, imen-ap maak-iwkin \textstyleEmphasizedVernacularWords{o} \textstyleEmphasizedVernacularWords{miim-o-k}.\\
wound tie-\textsc{nmz} man that1 search-\textsc{pa}-1/3p find-\textsc{ss}.\textsc{seq} tell-2/3p.\textsc{ds} 3s.\textsc{unm} precede-\textsc{pa}-3s\\
\glt`They looked for the medical orderly, and when they found him and told him, he went ahead of them.'
\z
Juxtaposition in itself is neutral and only shows that the two or more clauses are somehow connected with each other, but it can be used when propositions joined by it have different semantic relationships with each other \REF{ex:8:x1404}, \REF{ex:8:x1425}.
\ea%x1404
\label{ex:8:x1404}
\gll Waaya maneka marew pun, mua unowa me wia pepek-er-a-k.\\
pig big no(ne) also man many not 3p.\textsc{acc} enough-\textsc{inch}-\textsc{pa}-3s\\
\glt`Also, the pig was not big, (so) it was not enough for many people.'
\z
\ea%x1425
\label{ex:8:x1425}
\gll Ni iperuma fain me enim-eka, inasin(a) mua=ke. \\
2p.\textsc{unm} eel this not eat-\textsc{imp}.2p spirit man=\textsc{cf} \\
\glt`Don't eat this eel, (because) it is a spirit man.'
\z
\subsubsection{Conjunction with coordinating connectives} \label{sec:8.1.1.2}
%\hypertarget{RefHeading22981935131865}
Two of the three pragmatic connectives (\sectref{sec:3.11.1}) are used as clausal coordinators: the additive \textstyleStyleVernacularWordsItalic{ne} and \textstyleStyleVernacularWordsItalic{aria}, `alright' which marks a break in the topic chain. \textstyleStyleVernacularWordsItalic{Ne} can be used in some of the contexts where mere juxtaposition is also used, but it is less frequent. If the second conjunct is an explanation or example of the first one, conjoining the clauses with \textstyleStyleVernacularWordsItalic{ne} is not allowed. Example \REF{ex:8:x1372} is a case of symmetrical coordination, but if the order of the two conjuncts were reversed, the adverbial \textstyleStyleVernacularWordsItalic{pun} `also', which has to be in the second conjunct, would not move to the first conjunct with the rest of the clause.
\ea%x1372
\label{ex:8:x1372}
\gll I mua=ko me wia furew-a-mik, \textstyleEmphasizedVernacularWords{ne} yiena pun mukuna=ko me op-a-mik.\\
1p.\textsc{unm} man=\textsc{nf} not 3p.\textsc{acc} sense-\textsc{pa}-1/3p \textsc{add} 1p.\textsc{gen} also fire=\textsc{nf} not hold-\textsc{pa}-1/3p \\
\glt`We didn't sense anyone there and we ourselves did not hold fire either.'
\z
The example \REF{ex:8:x1373} is syntactically neutral, but semantically it is interpreted as both temporal and consecutive sequence.
\ea%x1373
\label{ex:8:x1373}
\gll ...maa wiar fe-feef-omak-e-mik, \textstyleEmphasizedVernacularWords{ne} wi ikiw-e-mik ...\\
food 3.\textsc{dat} \textsc{rdp}-spill-\textsc{distr}/\textsc{pl}-\textsc{pa}-1/3p \textsc{add} 3p.\textsc{unm} go-\textsc{pa}-1/3p \\
\glt`{\dots} they\textsubscript{i} spilled their\textsubscript{j} food, and (so/then) they\textsubscript{j} went (away) {\dots}'
\z
When there are more than two coordinated clauses in a sentence without any intervening medial clauses, it is common to have \textstyleStyleVernacularWordsItalic{ne} joining the last two clauses \REF{ex:8:x1374}:
\ea%x1374
\label{ex:8:x1374}
\gll Mua kuum-e-mik nain me wia kuuf-a-mik, me wia furew-a-mik, \textstyleEmphasizedVernacularWords{ne} me wia imen-a-mik. \\
man burn-\textsc{pa}-1/3p that1 not 3p.\textsc{acc} see-\textsc{pa}-1/3p not 3p.\textsc{acc} sense-\textsc{pa}-1/3p \textsc{add} not 3p.\textsc{acc} find-\textsc{pa}-1/3p \\
\glt`We didn't see the men who burned it, we didn't sense them and we didn't find them.'
\z
The connective \textstyleStyleVernacularWordsItalic{ne} is also used in sentences where an adversative interpretation can be applied.\footnote{Using \citegen[28]{Haspelmath2007} terms, \textit{ne} in the adversative function could be called an \textit{oppositive} coordinator, as the second coordinand does not cancel an expectation like it does in adversative clauses formed with either the demonstrative \textit{nain} or the topic marker -\textit{na} (\sectref{sec:8.3.4}).} Example \REF{ex:8:x1375} describes a couple that stayed in the village during the war and placed some of their belongings outside their house to show that there were people living in the village, while many others ran away into the rainforest.
\ea%x1375
\label{ex:8:x1375}
\gll Amina, wiowa, eka napia koor(a) miira=pa iimar-aw-ikiw-e-mik, \textstyleEmphasizedVernacularWords{ne} wi unowa baurar-e-mik. \\
pot spear water bamboo house front=\textsc{loc} stand-\textsc{caus}-go-\textsc{pa}-1/3p \textsc{add} 3p.\textsc{unm} many flee-\textsc{pa}-1/3p \\
\glt`We placed the pots, spears and bamboo water containers in line in front of the house, but many ran away.'
\z
The connective \textstyleStyleVernacularWordsItalic{aria} 'alright' may be used when there is a change of topic or an unexpected development within the sentence \REF{ex:8:x1376}, \REF{ex:8:x1377}.
\ea%x1376
\label{ex:8:x1376}
\gll Epa wii-wiim-ik-ua, \textstyleEmphasizedVernacularWords{aria} wi sawur=ke ekap-ep takira nain samapora onaiya akua aaw-e-mik.\\
place \textsc{rdp}-dawn-be-\textsc{pa}.3s alright 3p.\textsc{unm} spirit=\textsc{cf} come-\textsc{ss}.\textsc{seq} boy that1 bed with shoulder take-\textsc{pa}-1/3p \\
\glt`It was getting light, and spirits came and carried the boy with his bed (away) on their shoulders.'
\z
\ea%x1377
\label{ex:8:x1377}
\gll Iiriw muuka oko wiawi onak urera maa uup-e-mik, \textstyleEmphasizedVernacularWords{aria} maa me wu-om-a-mik yon{\dots} \\
earlier boy other 3s/p.father 3s/p.mother afternoon food cook-\textsc{pa}-1/3p alright food not put-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-1/3p perhaps \\
\glt`Long ago, the parents of a boy cooked food in the afternoon, (but) perhaps they did not put any food for him {\dots}'
\z
It is also the default coordinator when a non-verbal constituent in two or more otherwise very similar conjuncts are contrasted \REF{ex:8:x1379}, or emphasized \REF{ex:8:x1380}, in coordinated clauses.
\ea%x1379
\label{ex:8:x1379}
\gll Yo Malala mauw-owa nia asip-i-yem, \textstyleEmphasizedVernacularWords{aria} yena owowa, Moro owowa wia asip-i-yem.\\
1s.\textsc{unm} Malala work-\textsc{nmz} 2p.\textsc{acc} help-\textsc{Np}-\textsc{pr}.1s alright 1s.\textsc{gen} village Moro village 3p.\textsc{acc} help-\textsc{Np}-\textsc{pr}.1s \\
\glt`I help you Malala people with your work, and I help my village, Moro village.'
\z
\ea%x1380
\label{ex:8:x1380}
\gll Eema pun ekap-ep yia maak-e-k, \textstyleEmphasizedVernacularWords{aria} buburia ona pun ekap-ep yia maak-e-k. \\
Eema also come-\textsc{ss}.\textsc{seq} 1p.\textsc{acc} tell-\textsc{pa}-3s alright bald 3s.\textsc{gen} also come-\textsc{ss}.\textsc{seq} 1p.\textsc{acc} tell-\textsc{pa}-3s \\
\glt`Eema came and told us, and the bald man himself too came and told us.'
\z
\subsection{Disjunction} \label{sec:8.1.2}
%\hypertarget{RefHeading23001935131865}
The speech of the Mauwake people tends to be rather concrete in the sense that they do not speculate much on different abstract alternatives. So disjunction of clauses, although possible, is not common. Disjunction is marked by the connective \textstyleStyleVernacularWordsItalic{e} `or' placed between the conjuncts \REF{ex:8:x1385} (\sectref{sec:3.11.2}).
\ea%x1385
\label{ex:8:x1385}
\gll Nain=ke napum-ar-i-mik \textstyleEmphasizedVernacularWords{e} um-i-mik, mua oko napum-ar-e-k nain erewar-e-n. \\
that1=\textsc{cf} sickness-\textsc{inch}-\textsc{Np}-\textsc{pr}.1/3p or die-\textsc{Np}-\textsc{pr}.1/3p man other sickness-\textsc{inch}-\textsc{pa}-3s that1 foresee-\textsc{pa}-2s \\
\glt`That is about people becoming sick or dying, you foresaw (in a dream) that some man became sick.'
\z
Sometimes the question marker -\textstyleStyleVernacularWordsItalic{i} replaces the connective \REF{ex:8:x1387}.
\ea%x1387
\label{ex:8:x1387}
\gll Aria no ikoka mua owawiya irak-ep=\textstyleEmphasizedVernacularWords{i} kamenap on-ap yo me efar kerer-e, no nomokowa Kululu fan-e-k a.\\
alright 2s.\textsc{unm} later man with fight-\textsc{ss}.\textsc{seq}=\textsc{qm} how do-\textsc{ss}.\textsc{seq} 1s.\textsc{unm} not 1s.\textsc{dat} arrive-\textsc{imp}.2s 2s.\textsc{unm} 2s/p.brother Kululu here-\textsc{pa}-3s \textsc{intj}\\
\glt`Alright, later when you fight with your husband or do something like that, do not come to me, your brother Kululu is right here.'
\z
Alternative questions (\sectref{sec:7.2.2}) have the question marker -\textstyleStyleVernacularWordsItalic{i} cliticized to the end of the clause at least in the first conjunct. Closed alternative questions leave the question mark out of the last conjunct \REF{ex:8:x1386}.
\ea%x1386
\label{ex:8:x1386}
\gll Ikoka ekap-ep feeke sira nain piipua-i-nan=\textstyleEmphasizedVernacularWords{i} \textstyleEmphasizedVernacularWords{e} weetak?\\
later come-\textsc{ss}.\textsc{seq} here.\textsc{cf} habit that1 leave-\textsc{Np}-\textsc{fu}.2s=\textsc{qm} or no\\
\glt`Later when you come, will you here leave that habit or not?'
\z
Open alternative questions have the question marker in all the conjuncts \REF{ex:8:x1384}.
\ea%x1384
\label{ex:8:x1384}
\gll Mua oko miira inawera=pa uruf-ap ma-i-mik, mua oko=ke napuma aaw-o-k=\textstyleEmphasizedVernacularWords{i} \textstyleEmphasizedVernacularWords{e} um-o-k=\textstyleEmphasizedVernacularWords{i}?\\
man other face dream=\textsc{loc} see-\textsc{ss}.\textsc{seq} say-\textsc{Np}-\textsc{pr}.1/3p man other=\textsc{cf} sickness get-\textsc{pa}-3s=\textsc{qm} or die-\textsc{pa}-3s=\textsc{qm}\\
\glt`When we see some man's face in a dream we say, ``Has some other man become sick or died (or possibly neither)?'' '
\z
\subsection{Adversative coordination} \label{sec:8.1.3}
%\hypertarget{RefHeading23021935131865}
There is no adversative coordinator in Mauwake. It was mentioned above (\sectref{sec:3.11.1}, 8.1.1.2) that the pragmatic additive connective \textstyleStyleVernacularWordsItalic{ne}, which is semantically neutral, is possible when there is a relationship between clauses that may be interpreted as contrastive \REF{ex:8:x1388}.
\ea%x1388
\label{ex:8:x1388}
\gll Iir nain Kedem manek akena keker op-a-k \textstyleEmphasizedVernacularWords{ne} Yoli weetak.\\
time that Kedem big very fear hold-\textsc{pa}-3s \textsc{add} Yoli no \\
\glt`That time Kedem was very scared but Yoli wasn't.'
\z
There are two strategies that can be used when a strong adversative is needed. A `but'-protasis \citep[237]{Reesink1983b} may be marked by either the distal demonstrative \textstyleStyleVernacularWordsItalic{nain} `that' (\sectref{sec:3.6.2}), or the topic marker -\textstyleStyleVernacularWordsItalic{na} (§\sectref{sec:3.12.7.1}, \ref{sec:8.3.4}), added to a finite clause. Adversative clauses with the demonstrative \textstyleStyleVernacularWordsItalic{nain} differ from nominalized clauses functioning as complement clauses or relative clauses in the following respects. Intonationally, \textstyleStyleVernacularWordsItalic{nain} is the initial element in the second one of the contrasted clauses, rather than a final element in a subordinate clause, and it is often preceded by a short pause \REF{ex:8:x1395}. The protasis may even be a separate sentence \REF{ex:8:x728}.
\ea%x1395
\label{ex:8:x1395}
\gll Panewowa nain, wi iiriw eno-wa en-e-mik, \textstyleEmphasizedVernacularWords{nain} me onak-e-mik.\\
old.person that1 3p.\textsc{unm} earlier eat-\textsc{nmz} eat-\textsc{pa}-1/3p that1 not give.3s-\textsc{pa}-1/3p\\
\glt`As for the old woman, they (already) ate the meal earlier but did not give (any of it) to her to eat.'
\z
\ea%x728
\label{ex:8:x728}
\gll Yo bom koor miira=pa efar or-om-ik-ua. \textstyleEmphasizedVernacularWords{Nain} yo me baurar-em-ik-e-m. \\
1s.\textsc{unm} bomb house face=\textsc{loc} 1s.\textsc{dat} fall-\textsc{ss}.\textsc{sim}-be-\textsc{pa}.3s that1 1s.\textsc{unm} not flee-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1s \\
\glt`Bombs kept dropping in front of my house. But I didn't keep running away.'
\z
The examples \REF{ex:8:x1389} and \REF{ex:8:x1394} are structurally very similar to sentences with relative clauses (\sectref{sec:8.3.1.2}). But here the demonstrative \textstyleStyleVernacularWordsItalic{nain} is part of the adversative clause and is preceded by a pause.
\ea%x1389
\label{ex:8:x1389}
\gll Mera eka enim-i-mik, \textstyleEmphasizedVernacularWords{nain} i mangala me enim-i-mik, waaya me enim-i-mik.\\
fish water eat-\textsc{Np}-\textsc{pr}.1/3p that1 1p.\textsc{unm} shellfish not eat-\textsc{Np}-\textsc{pr}.1/3p pig not eat-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`We eat fish soup, but we don't eat shellfish, (and) we don't eat pork.'
\z
\ea%x1394
\label{ex:8:x1394}
\gll I nan soomar-e-mik, \textstyleEmphasizedVernacularWords{nain} i mukuna=ko me op-a-mik.\\
1p.\textsc{unm} there walk-\textsc{pa}-1/3p that1 1p.\textsc{unm} fire=\textsc{nf} not hold-\textsc{pa}-1/3p \\
\glt`We walked there, but we did not hold/have any fire.'
\z
Compare \REF{ex:8:x1394} with the relative clause \REF{ex:8:x1396}, where the demonstrative functions as a relative marker and comes at the end of the clause. This is shown by the slightly rising intonation on \textstyleStyleVernacularWordsxiiptItalic{nain}, as well as a pause following it in spoken text:\footnote{This similarity creates a problem with written texts that do not have adequate punctuation. Sometimes either interpretation is acceptable.}
\ea%x1396
\label{ex:8:x1396}
\gll I nan soomar-e-mik \textstyleEmphasizedVernacularWords{nain}, i mukuna=ko me op-a-mik.\\
1p.\textsc{unm} there walk-\textsc{pa}-1/3p that1 1p.\textsc{unm} fire=\textsc{nf} not hold-\textsc{pa}-1/3p\\
\glt`We who walked there didn't hold/have any fire.' (Or: `When we walked there, we didn't hold/have any fire.')
\z
The adversative sentences formed with the topic marker -\textstyleStyleVernacularWordsItalic{na} are complex rather than coordinate sentences (\sectref{sec:8.3.4}).
\subsection{Consecutive coordination}
%\hypertarget{RefHeading23041935131865}
Within a sentence, clauses are typically connected by one of the syntactically neutral strategies, which leave the semantic relationship implied. Some sentences using juxtaposition \REF{ex:8:x1425}, the pragmatic additive \textstyleStyleVernacularWordsItalic{ne} \REF{ex:8:x1373} or clause chaining \REF{ex:8:x1412} can be interpreted as having a consecutive relationship between the clauses, although this does not show in the syntax. This section deals with the cases where the consecutive relationship is marked overtly.
Relationships of cause and effect, or reason and result,\footnote{Reason-result relationship presupposes the presence of reasoning in the process, cause-effect relationship does not.} are central in the discussion of causal and consecutive clauses. It seems that currently Mauwake may be developing a distinction between cause and reason on one hand, and between effect and result on the other. But the tendency, if there, is not very strong (\sectref{sec:3.11.2}).
Both the clauses in a sentence expressing a cause-effect or reason-result relationship are main clauses and are in a coordinate relationship with each other. It is common for the two clauses to form separate sentences rather than be within the same sentence.
The tendency to present events in the same order that they occur, common to languages in general, is very strong in Papuan languages. Consequently, there is a strong preference to present a cause clause before an effect clause (\citealt[409]{Haiman1980}, \citealt[59]{Roberts1987}, \citealt{Reesink1987}). In Mauwake consecutive coordination is the default, unmarked strategy for those sentences that express cause-effect or reason-result relationships\linebreak overt\-ly, because their structure follows this principle \REF{ex:8:x1400}, whereas in causal coordination sentences the effect is stated before the cause.
\ea%x1400
\label{ex:8:x1400}
\gll Emar, nos=ke yo efa kemal-ep iripuma fain ifakim-o-n, \textstyleEmphasizedVernacularWords{naapeya} iripuma fain ik-ep enim-e.\\
1s/p.friend 2s.\textsc{cf}=\textsc{cf} 1s.\textsc{unm} 1s.\textsc{acc} pity-\textsc{ss}.\textsc{seq} iguana this kill-\textsc{pa}-2s therefore iguana this roast-\textsc{ss}.\textsc{seq} eat-\textsc{imp}.2s\\
\glt`Friend, it was you who pitied me and killed this iguana, therefore you roast and eat this iguana.'
\z
Effect and result clauses use \textstyleStyleVernacularWordsItalic{naapeya/naeya} `therefore, (and) so' (\sectref{sec:3.11.2}) as their connective \REF{ex:8:x1401}--\REF{ex:8:x1403}.
\ea%x1401
\label{ex:8:x1401}
\gll Koora fuluwa unowa marew, \textstyleEmphasizedVernacularWords{naapeya} in-i-mik nain dabela me senam furew-i-mik.\\
house hole many no(ne) therefore sleep-\textsc{Np}-\textsc{pr}.1/3p that1 cold not too.much sense-\textsc{Np}-\textsc{pr}.1/3p\\
\glt `The houses do not have many windows, so those who sleep (there) do not sense/feel the cold too much.'
\z
\ea%x1402
\label{ex:8:x1402}
\gll Pita weke wiar um-o-k, \textstyleEmphasizedVernacularWords{naapeya} o suule me iw-a-k. \\
Pita 3s/p.grandfather 3.\textsc{dat} die-\textsc{pa}-3s therefore 3s.\textsc{unm} school not go-\textsc{pa}-3s\\
\glt`Pita's grandfather died, so he (Pita) didn't go to school.'
\z
\ea%x1405
\label{ex:8:x1405}
\gll {\dots}pika oona me kekan-ow-a-k, \textstyleEmphasizedVernacularWords{naeya} uura ewar maneka=ke kerer-emi koora nain wiar teek-a-k.\\
...wall support not be.strong-\textsc{caus}-\textsc{pa}-3s therefore night wind big=\textsc{cf} appear-\textsc{ss}-\textsc{sim} house that1 3.\textsc{dat} tear-\textsc{pa}-3s\\
\glt`He did not strengthen the wall supports, so at night a big wind arose and tore down his house.'
\z
\ea%x1408
\label{ex:8:x1408}
\gll No nena pun pina sira naap nain on-i-n, \textstyleEmphasizedVernacularWords{naeya} nos pun opora=pa ika-i-nan.\\
2s.\textsc{unm} 2s.\textsc{gen} also guilt custom thus that1 do-\textsc{Np}-\textsc{pr}.2s therefore 2s.\textsc{fc} also talk=\textsc{loc} be-\textsc{Np}-\textsc{fu}.2s\\
\glt`You yourself do bad things like that too, therefore you too will be under accusation.'
\z
\textstyleStyleVernacularWordsItalic{Naapeya} can also co-occur with the conjunctive coordinator \textstyleStyleVernacularWordsItalic{ne} \REF{ex:8:x1403}.
\ea%x1403
\label{ex:8:x1403}
\gll Epa nan soomar-em-ik-ok or-o-mik, \textstyleEmphasizedVernacularWords{ne} \textstyleEmphasizedVernacularWords{naapeya} pina wi wiar korin-e-k. \\
place there walk-\textsc{ss}.\textsc{sim}-be-\textsc{ss} descend-\textsc{pa}-1/3p \textsc{add} therefore guilt 3p.\textsc{unm} 3.\textsc{acc} stick-\textsc{pa}-3s\\
\glt`They were walking there in that place and came down, and so the guilt (for starting a forest fire) stuck to them.'
\z
The use of \textstyleStyleVernacularWordsItalic{naapeya} and \textstyleStyleVernacularWordsItalic{naeya} is both external and internal, i.e., they connect events in a situation as well as ideas in a text. The internal use of \textstyleStyleVernacularWordsItalic{ne naapeya} and \textstyleStyleVernacularWordsItalic{aria naapeya} is restricted to intersentential contexts; they refer to a longer stretch of the preceding text as their protasis \REF{ex:8:x1407}.
\ea%x1407
\label{ex:8:x1407}
\gll \textstyleEmphasizedVernacularWords{Aria} \textstyleEmphasizedVernacularWords{naapeya} wi inasina ook-i-mik sira nain me wiar ook-eka. \\
alright therefore 3p.\textsc{unm} spirit follow-\textsc{Np}-\textsc{pr}.1/3p custom that1 not 3.\textsc{dat} follow-\textsc{imp}.2p\\
\glt`So therefore do not follow the behavior of those who follow/believe in spirits.'
\z
As an internal connective, \textstyleStyleVernacularWordsItalic{naeya} mainly connects full sentences \REF{ex:8:x1410} and only seldom clauses within a sentence \REF{ex:8:x1411}:
\ea%x1410
\label{ex:8:x1410}
\gll No mua woos reen-owa=ke, \textstyleEmphasizedVernacularWords{naeya} no kema kir-owa miatin-i-n.\\
2s.\textsc{unm} man head dry-\textsc{nmz}=\textsc{cf} therefore 2s.\textsc{unm} liver turn-\textsc{nmz} dislike-\textsc{Np}-\textsc{pr}.2s\\
\glt`You are hard-headed, therefore you do not like to change your (bad) ways.'
\z
\ea%x1411
\label{ex:8:x1411}
\gll Ni sira-sira naap on-i-man. \textstyleEmphasizedVernacularWords{Naeya} opora iiriw ma-e-k nain pepek akena nia ma-e-k.\\
2p.\textsc{unm} \textsc{rdp}-custom thus do-\textsc{Np}-\textsc{pr}.2p therefore talk earlier say-\textsc{pa}-3s that1 enough very 2p.\textsc{acc} say-\textsc{pa}-3s\\
\glt`You do (bad) things like that. Therefore the talk that he already said about you is very accurate.'
\z
\textstyleStyleVernacularWordsItalic{Neemi} is a consecutive coordinator that almost exclusively conjoins full sentences rather than clauses within a sentence; \REF{ex:8:x1409} is from a translated text but is considered natural. Example (3.\ref{ex:3:x736}) is repeated here as \REF{ex:8:x1904}. \textstyleStyleVernacularWordsItalic{Neemi} is an internal connective, used only in reasoning, and it requires some point of similarity between the two conjuncts.
\ea%x1904
\label{ex:8:x1904}
\gll Teeria fain K10 wu-a-mik. \textstyleEmphasizedVernacularWords{Neemi} wi teeria nain pun K10 wu-a-mik.\\
group this K10 put-\textsc{pa}-1/3p therefore 3p.\textsc{unm} group that1 too K10 put-\textsc{pa}-1/3p\\
\glt`This group put down ten kina. Therefore that group put down ten kina, too.'
\z
\ea%x1409
\label{ex:8:x1409}
\gll Krais sirir-owa aaw-omak-e-k, \textstyleEmphasizedVernacularWords{neemi} is pun unowiya naap aaw-i-mik.\\
Christ hurt-\textsc{nmz} get-\textsc{distr}/\textsc{pl}-\textsc{pa}-3s therefore 1p.\textsc{fc} also all thus get-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`Christ received a lot of pain, so we all too get (pain) like that.'
\z
The connective \textstyleStyleVernacularWordsItalic{naap nain} is used almost exclusively intersententially \REF{ex:8:x1905}; between clauses within a sentence it is possible but rare \REF{ex:8:x1424}:
\ea%x1905
\label{ex:8:x1905}
\gll Naeya nokar-e-mik, ``\textstyleEmphasizedVernacularWords{Naap} \textstyleEmphasizedVernacularWords{nain} no naareke?'' \\
therefore ask-\textsc{pa}-1/3p thus that1 2s.\textsc{unm} who.\textsc{cf}\\
\glt`Therefore they asked, ``So then, who are you?'' '
\z
\ea%x1424
\label{ex:8:x1424}
\gll Wiam arow pepek nan urup-e-mik nain, \textstyleEmphasizedVernacularWords{naap} \textstyleEmphasizedVernacularWords{nain} yo moram urup-e-m. \\
3p.\textsc{refl} three enough there ascend-\textsc{pa}-1/3p that1 thus that1 1s.\textsc{unm} why/in.vain ascend-\textsc{pa}-1s \\
\glt`(Since it is the case that) those three are enough and came up, so then why did I have to come up? (or: {\dots}so then I came up in vain).'
\z
\subsection{Causal coordination, ``afterthought reason''}
%\hypertarget{RefHeading23061935131865}
Causal coordination is a very marked structure, which shows in the unusual ordering of the clauses: the causal clause follows rather than precedes the consequent clause. The causal clause in Mauwake begins with the connective \textstyleStyleVernacularWordsItalic{moram} `because' (\sectref{sec:3.11.2}), which is originally the interrogative word for `why'. There are two possible origins for this atypical structure. It may be a recent calque on the Tok Pisin causal construction, which uses \textstyleForeignWords{bilong wanem} `why/because' as the connector and has the same ordering of the two clauses. The ordering of the clauses also suggests that it may have originated as an ``afterthought reason'',\footnote{The term was suggested by Ger Reesink.} even though currently it is used when the cause or reason is emphasized \REF{ex:8:x1417}, \REF{ex:8:x1420}.
\ea%x1417
\label{ex:8:x1417}
\gll Owowa mamaiya soora weetak, \textstyleEmphasizedVernacularWords{moram} iwera isak-omak-e-mik.\\
village near forest no because coconut plant-\textsc{distr}/\textsc{pl}-\textsc{pa}-1/3p\\
\glt`There is no forest near the village, because we have planted a lot of coconut palms.'
\z
\ea%x1420
\label{ex:8:x1420}
\gll Poh San uruf-ap kema ten-e-mik, \textstyleEmphasizedVernacularWords{moram} i kema naap suuw-a-mik, napuma me sariar-owa ik-ua.\\
Poh San see-\textsc{ss}.\textsc{seq} liver fall-\textsc{pa}-1/3p because 1p.\textsc{unm} liver thus push-\textsc{pa}-1/3p sickness not heal-\textsc{nmz} be-\textsc{pa}.3s\\
\glt`We saw Poh San and were relieved (lit: liver fell), because we had thought that (her) sickness hadn't healed yet (but it had).'
\z
\textstyleStyleVernacularWordsItalic{Moram wia} is used almost exclusively between full sentences \REF{ex:8:x1906}; the example \REF{ex:8:x1421} is the only intra-sentential instance of \textstyleStyleVernacularWordsItalic{moram wia} in the data. I have not noticed any semantic difference caused by the addition of the negator.
\ea%x1906
\label{ex:8:x1906}
\gll ...maamuma senam aaw-e-mik. \textstyleEmphasizedVernacularWords{Moram} \textstyleEmphasizedVernacularWords{wia}, maa ele-eliwa sesek-a-mik.\\
money too/very.much get-\textsc{pa}-1/3p why not thing/food \textsc{rdp}-good sell-\textsc{pa}-1/3p\\
\glt`{\dots}they got a lot of money. (That's) because they sold good food.'
\z
\ea%x1421
\label{ex:8:x1421}
\gll Iir nain yo owowa=pa=ko me mauw-a-m, \textstyleEmphasizedVernacularWords{moram} \textstyleEmphasizedVernacularWords{wia} yo Ukarumpa urup-owa=ke na-ep mauw-owa miatin-e-m.\\
time that1 1s.\textsc{unm} village=\textsc{loc}=\textsc{nf} not work-\textsc{pa}-1s because not 1s.\textsc{unm} Ukarumpa ascend-\textsc{nmz}=\textsc{cf} say-\textsc{ss}.\textsc{seq} work-\textsc{nmz} dislike-\textsc{pa}-1s\\
\glt`That time I did not work in the village, because I thought that I was due to go up to Ukarumpa, and (so) I didn't like to work.'
\z
A causal and a consecutive connective can co-occur in the same sentence. When that happens, the consecutive clause occurs twice: first without a connective, and then again after the causal clause with a connective \REF{ex:8:x1422}, \REF{ex:8:x1423}. This underlines the strong preference for keeping the cause-effect (or reason-result) order.
\ea%x1422
\label{ex:8:x1422}
\gll I epa unowa=ko me soomar-e-mik, \textstyleEmphasizedVernacularWords{moram} owowa maneka, \textstyleEmphasizedVernacularWords{naapeya} soomar-owa lebum(a)-ar-e-mik.\\
1p.\textsc{unm} place many=\textsc{nf} not walk-\textsc{pa}-1/3p because village big therefore walk-\textsc{nmz} lazy-\textsc{inch}-\textsc{pa}-1/3p\\
\glt`We didn't walk in many places, because the village/town was big, therefore we didn't care to walk.'
\z
\ea%x1423
\label{ex:8:x1423}
\gll Mua lebuma emeria me wi-i-mik, \textstyleEmphasizedVernacularWords{moram} emeria muukar-eya muuka nain maa mauwa enim-i-non, \textstyleEmphasizedVernacularWords{naapeya} mua lebuma emeria me wi-i-mik.\\
man lazy woman not give.them-\textsc{Np}-\textsc{pr}.1/3p because woman give.birth-2/3s.\textsc{ds} son that1 food what eat-\textsc{Np}-\textsc{fu}.3s therefore man lazy woman not give.them-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`We do not give wives to lazy men, because when the woman bears a child what would it eat, therefore we do not give wives to lazy men.'
\z
\subsection{Apprehensive coordination} \label{sec:8.1.6}
%\hypertarget{RefHeading23081935131865}
A less common clause type, that of apprehensive clauses \citep[61]{Roberts1987}, also called negative purpose clauses (\citealt[444]{Haiman1980}, \citealt[188]{ThompsonEtAl1985}), is perhaps more commonly subordinate than coordinate. But in Mauwake the apprehensive clauses are coordinated finite clauses \REF{ex:8:x1426}, \REF{ex:8:x1427}, originally separate sentences \REF{ex:8:x1428}. The apprehension clause is introduced by the indefinite \textstyleStyleVernacularWordsItalic{oko} `other' (\sectref{sec:3.7.2}), which has also developed the meaning `otherwise'.
% * <[email protected]> 2015-05-22T13:42:29.423Z:
%
% Thompson EtAl has now been added to the bibl.
%
% ^ <[email protected]> 2015-07-27T13:35:28.523Z.
\ea%x1426
\label{ex:8:x1426}
\gll Ni maa uru-uruf-ami ik-eka, \textstyleEmphasizedVernacularWords{oko} mua oko=ke nia peeskim-i-kuan.\\
2p.\textsc{unm} thing \textsc{rdp}-see-\textsc{ss}.\textsc{sim} be-\textsc{imp}.2p other man other=\textsc{cf} 2p.\textsc{acc} cheat-\textsc{Np}-\textsc{fu}.3p\\
\glt`Watch out, otherwise/lest you get cheated.'
\z
\ea%x1427
\label{ex:8:x1427}
\gll Naap on-owa weetak, \textstyleEmphasizedVernacularWords{oko} yiena sira puuk-i-yen. \\
thus do-\textsc{nmz} no other 1p.\textsc{gen} custom cut-\textsc{Np}-\textsc{fu}.1p\\
\glt`We must not do like that, otherwise/lest we break our custom/law (or: {\dots} lest we ourselves break the custom/law).'
\z
\ea%x1428
\label{ex:8:x1428}
\gll Naap yo aakisa efa uruf-i-n. \textstyleEmphasizedVernacularWords{Oko} neeke soomar-ekap-em-ik-omkun ma-i-nan, ``{\dots}''\\
thus 1s.\textsc{unm} now 1s.\textsc{acc} see-\textsc{Np}-\textsc{pr}.2s other there.\textsc{cf} walk-come-\textsc{ss.sim}-be-1s/p.\textsc{ds} say-\textsc{Np}-\textsc{fu}.2s\\
\glt`So you see me now. Otherwise I'll be walking there and you will say, ``{\dots}'''
\z
% * <[email protected]> 2015-05-22T13:45:33.811Z:
%
% What is inconsistent here? 'walk' is the best gloss for soomar-
%
% ^ <[email protected]> 2015-08-20T14:45:54.981Z.
% * <[email protected]> 2015-05-27T13:14:20.188Z:
%
% OK, found the inconsistency: ss-sim should be ss.sim. Corrected it
%
% ^ <[email protected]> 2015-08-20T14:46:00.292Z.
\section{Clause chaining} \label{sec:8.2}
%\hypertarget{RefHeading23101935131865}
Clause chaining is a feature typical of Papuan languages, and of the Trans-New Guinea languages in particular.\footnote{\citet[36]{Wurm1982} seems to consider clause chaining a genetic feature of the \textsc{tng} languages, but \citet[xlvii]{Haiman1980} suggests that it is an areal feature. \citet[122]{Roberts1997}, with the most data to date, suggests that there is a combination of both, but leaves the final decision open.} A sentence may consist of several medial clauses\footnote{The terms \textit{medial} and \textit{final} clauses are well established in Papuan linguistics. } where the verbs have medial verb inflection (\sectref{sec:3.8.3.5}), and a final clause where the verb has ``normal'' finite inflection (\sectref{sec:3.8.3.4}). Clause chaining indicates either temporal sequence or simultaneity between adjacent clauses.
The division into just medial and final clauses is not adequate for describing the system. \citet[xii]{HaimanEtAl1983} call the medial clauses \textstyleEmphasizedWords{{marking clauses}} and the clauses following them \textstyleEmphasizedWords{{reference clauses}}.\footnote{\citet{Comrie1983} and \citet{Roberts1997} call them \textit{marked clauses} and \textit{controlling clauses}, respectively.} Marking clause is simply another name for a medial clause and will not be used here. But a reference clause may be medial or finite\footnote{I prefer the term \textit{finite} to \textit{final} clauses (and verbs), as it is the finiteness rather than the position in the sentence that is important in their relation with medial clauses. Subordinate clauses are the most typical \textit{non-final} finite clauses, and they may also have medial clauses preceding them and relating to them.} -- what is important is that both the temporal relationship of the medial verb and its person reference are stated in relation to the reference clause. When a reference clause for a preceding medial clause is also a medial clause, it again has its own reference clause following it.
The medial clauses linked by clause chaining are sometimes called \textstyleEmphasizedWords{{cosubordinate}} (\citealt{Olson1981}, \citealt[257]{FoleyEtAl1984})\footnote{This is cosubordination at the \textit{peripheral} level; verb serialization is cosubordination at the core or nuclear level.} or \textstyleEmphasizedWords{{coordinate-}}\textstyleEmphasizedWords{{dependent}} \citep[177]{Foley1986}, because they share features with both coordinate and subordinate clauses. Their relationship with each other and with the following finite clause is essentially coordinate,\footnote{\citet{Roberts1988a} presents several syntactic arguments to show that basically switch reference is indeed coordination rather than subordination. But he also argues for a separate subordinate switch reference in Amele and some other languages.} but the medial clauses are dependent on the finite clause both for their absolute tense and, in the case of ``same subject'' forms, for their person/number specification.
Another term commonly used for the chained clauses, \textstyleEmphasizedWords{{switch-reference clauses}} (\textsc{sr}),\footnote{Clause chaining and switch reference are two separate strategies, but in Papuan languages the two very often go together \citep[104]{Roberts1997}.} is related to their other function as a reference-tracking device \citep[ix]{HaimanEtAl1983}. They typically indicate whether their topic/subject is the same as, or different from, the topic/subject of the following clause. This is discussed below in \sectref{sec:8.2.3}. In this grammar the two terms are used interchangeably, as in Mauwake the medial verbs not only indicate a temporal relationship but are used for reference tracking as well.
\subsection{Chained clauses as coordinate clauses} \label{sec:8.2.1}
%\hypertarget{RefHeading23121935131865}
It is widely accepted that the relationship of medial clauses to their reference clauses is basically coordinate, but with some special features and exceptions.\footnote{E.g., \citet[175, 193]{Reesink1987}, \citet[51]{Roberts1988a}, \citet{Roberts1997}.} In Mauwake medial clauses are subordinate only if subordinated with the topic/conditional marker -\textstyleStyleVernacularWordsItalic{na}; otherwise they are coordinate.
Instead of giving background information like subordinate clauses do, medial clauses are predications that carry on the foreground story line \REF{ex:8:x2000}. But they are also different from coordinate finite clauses. The similarities and differences are discussed in this section.
The pragmatic additives \textstyleStyleVernacularWordsItalic{ne} \REF{ex:8:x1485} and \textstyleStyleVernacularWordsItalic{aria} \REF{ex:8:x1486} (\sectref{sec:3.11.1}) can occur between a medial clause and its reference clause, as between normal coordinate clauses. This is uncommon, however.
\ea%x1444
\label{ex:8:x2000}
\gll Wiawi ikiw-ep maak-eya, \textbf{ne} wiawi=ke maak-e-k {\dots} \\
3s/p.father go-\textsc{ss}.\textsc{seq} tell-2/3s.\textsc{ds} \textsc{add} 3s/p.father=\textsc{cf} tell-\textsc{pa}-3s\\
\glt`She went to her father and told him, and her father told her {\dots}'
\z
\ea%x1485
\label{ex:8:x1485}
\gll ... wiena en-emi, epira lolom if-emi \textbf{ne} owowa p-urup-em-ik-e-mik.\\
... 3p.\textsc{gen} eat-\textsc{ss}.\textsc{sim} plate mud spread-\textsc{ss}.\textsc{sim} \textsc{add} village \textsc{bpx}-ascend-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p\\
\glt`They ate it themselves, spread mud on the plates, and brought them up to the village.'
\z
\ea%x1442
\label{ex:8:x1486}
\gll I ikoka yien=iw urup-ep nia maak-omkun ora-iwkin, \textstyleEmphasizedVernacularWords{aria} owawiya feeke pok-ap ik-ok eka liiwa muuta en-ep \textstyleEmphasizedVernacularWords{aria} ni soomar-ek-eka.\\
1p.\textsc{unm} later 1p.\textsc{gen}=\textsc{lim} ascend-\textsc{ss}.\textsc{seq} 2p.\textsc{acc} tell-1s/p.\textsc{ds} descend-2/3p.\textsc{ds} alright together here.\textsc{cf} sit-\textsc{ss}.\textsc{seq} be-\textsc{ss} water little only eat-\textsc{ss}.\textsc{seq} alright 2p.\textsc{unm} walk-go-2p.\textsc{imp}\\
\glt`Later we (by) ourselves will come up and tell you (to come), and when you come down we will sit here together and eat a little bit of soup and then you can walk back.'
\z
Coordinated main clauses are free in regard to their mood and, related to that, their functional sentence type. Medial clauses have no marking for mood; they usually conform to the mood of the finite clause, but this is a pragmatic matter, not a syntactic requirement.
When either the medial clause or the finite clause is a question, the whole sentence is interrogative, even if the other clause is a statement. In \REF{ex:8:x1449} the finite clause is a polar question, but the medial clause is not questioned. In the story that \REF{ex:8:x1452} is taken from, the killing is not questioned, only the manner. But since a medial clause cannot take the question marker, the verb in the finite clause has to carry the marking.
\ea%x1449
\label{ex:8:x1449}
\gll Sande erup weeser-eya owowa ekap-e-man=i? \\
week two finish-2/3s.\textsc{ds} village come-\textsc{pa}-2p=\textsc{qm}\\
\glt`Two weeks were finished, and did you (then) come to the village?'\footnote{Another possible translation is `When the two weeks were finished, did you (then) come to the village?' but this does not reflect the coordinate relationship of the clauses in the original.}
\z
\ea%x1452
\label{ex:8:x1452}
\gll Naap on-ap ifakim-i-nen=i?\\
thus do-\textsc{ss}.\textsc{seq} kill-\textsc{Np}-\textsc{fu}.1s=\textsc{qm}\\
\glt`Shall I do like that and kill her?' (Or: `Is it in that way that I shall kill her?')
\z
A non-polar question can occur either in a medial clause \REF{ex:8:x1451} or in a finite clause \REF{ex:8:x1450}.
\ea%x1451
\label{ex:8:x1451}
\gll No sira kamenap on-eya napuma fain nefar kerer-e-k?\\
2s.\textsc{unm} custom how do-2/3s.\textsc{ds} sickness this 2s.\textsc{dat} appear-\textsc{pa}-3s\\
\glt`What did you do (so that) this sickness came to you?'
\z
\ea%x1450
\label{ex:8:x1450}
\gll No karu-emi kame kaanek ikiw-o-n? \\
2s.\textsc{unm} run-\textsc{ss}.\textsc{sim} side where go-\textsc{pa}-2s\\
\glt`You ran and where did you go?'
\z
For more examples, see (7.\ref{ex:7:x1082})--(7.\ref{ex:7:x1083}) in \sectref{sec:7.3} and the introductory section to \chapref{chap:8}.
In regard to the scope of negation, the same-subject medial clauses differ from all other clauses. Negative spreading (\sectref{sec:6.2.4}) in both directions is allowed only between \textstyleAcronymallcaps{\textsc{ss}} medial clauses and their reference clauses, and even there it is not very common. Backwards spreading is especially rare. In the following examples, negative spreading takes place in \REF{ex:8:x1443} and \REF{ex:8:x1447}, but not in \REF{ex:8:x1446} and \REF{ex:8:x1448}. Between other types of clauses negative spreading is not permitted at all.
\ea%x1443
\label{ex:8:x1443}
\gll Nainiw \textbf{ekap-ep} maa \textbf{me} \textbf{sesenar-e-mik}. \\
again come-\textsc{ss}.\textsc{seq} food not sell-\textsc{pa}-1/3p\\
\glt`They did not come back and sell food.'
\z
\ea%x1447
\label{ex:8:x1447}
\gll Ikiw-em-ik-ok \textbf{me} \textbf{kir-ep} \textbf{uruf-e}, no oram woolal-ikiw-em-ik-e.\\
go-\textsc{ss}.\textsc{sim}-be-\textsc{ss} not turn-\textsc{ss}.\textsc{seq} look-\textsc{imp}.2s 2s.\textsc{unm} just paddle-go-\textsc{ss}.\textsc{sim}-be-\textsc{imp}.2s\\
\glt`While going, don't turn and look back, just keep paddling.'
\z
\ea%x1446
\label{ex:8:x1446}
\gll Yaapan=ke urup-em-ika-iwkin wi Australia=ke wia uruf-ap baurar-emi \textstyleEmphasizedVernacularWords{me} \textstyleEmphasizedVernacularWords{yia} \textstyleEmphasizedVernacularWords{maak-e-mik}.\\
Japan=\textsc{cf} ascend-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} 3p.\textsc{unm} Australia=\textsc{cf} 3p.\textsc{acc} see-\textsc{ss}.\textsc{seq} flee-\textsc{ss}.\textsc{sim} not 1p.\textsc{acc} tell-\textsc{pa}-1/3p\\
\glt`When the Japanese were coming up the Australians saw them and ran away and/but did not tell us.'
\z
\ea%x1448
\label{ex:8:x1448}
\gll Iiriw auwa=ke sira fain \textbf{me} \textbf{paayar-ep} muuka momor wiar aaw-em-ik-e-mik.\\
earlier 1s/p.father=\textsc{cf} custom this not understand-\textsc{ss}.\textsc{seq} son indiscriminately 3.\textsc{dat} get-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p\\
\glt`Earlier our (fore)fathers didn't understand this custom, and (so) they adopted (lit: got/took) children indiscriminately.'
\z
Like coordinated main clauses and unlike subordinate clauses, medial clauses are not embedded as constituents in other clauses. However, a medial clause may interrupt its reference clause and appear inside it, if the subject or object noun phrase of the reference clause is fronted as the theme and thus precedes the interrupting medial clause \REF{ex:8:x1464}. For more examples, see (3.\ref{ex:3:x539}) and (3.\ref{ex:3:x540}). In the examples, the reference clause is bolded and the intervening medial clause is placed within square brackets.
\ea%x1464
\label{ex:8:x1464}
\gll Aria \textstyleEmphasizedVernacularWords{yena} \textstyleEmphasizedVernacularWords{mua} \textstyleEmphasizedVernacularWords{pun} {\ob}irak-owa kerer-owa epa weeser-em-ik-eya{\cb} \textstyleEmphasizedVernacularWords{iirar-iwkin} owowa ekap-o-k, o amia mua=pa ik-ok.\\
alright 1s.\textsc{gen} man too fight-\textsc{nmz} appear-\textsc{nmz} time finish-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} remove-2/3p.\textsc{ds} village come-\textsc{pa}-3s 3s.\textsc{unm} bow man=\textsc{loc} be-\textsc{ss}\\
\glt`Alright, the war was getting close and they dismissed my husband and he came to the village, after he had been a soldier.'
\z
In \REF{ex:8:x1465}, both the object and the subject are fronted. After the first medial clause, the object of the finite clause is fronted as the theme of the remainder of the sentence, and it pulls with it the subject, marked with the contrastive focus marker. In the free translation, a passive is used because the object is fronted as the theme.
\ea%x1465
\label{ex:8:x1465}
\gll Sisina=pa wu-ap \textstyleEmphasizedVernacularWords{papako}\textsubscript{O} \textstyleEmphasizedVernacularWords{mua=ke}\textsubscript{S} {\ob}mera saa urup-eya{\cb} \textstyleEmphasizedVernacularWords{patopat=iw} \textstyleEmphasizedVernacularWords{mik-i-mik}. \\
shallow.water=\textsc{loc} put-\textsc{ss}.\textsc{seq} some man=\textsc{cf} fish sand ascend-2/3s.\textsc{ds} fishing.spear=\textsc{inst} spear-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`They drive (lit: put) them to the shallow water and the fish ascend to the beach and (then) some are speared by men with a fishing spear.'
\z
Examples \REF{ex:8:x1466}--\REF{ex:8:x1468} show that some of the same-subject medial clauses interrupting the reference clause, especially those with a directional verb or the verb \textstyleStyleVernacularWordsItalic{aaw}- `take, get', may be in the process of grammaticalizing into serial verbs:
\ea%x1466
\label{ex:8:x1466}
\gll \textstyleEmphasizedVernacularWords{I} \textstyleEmphasizedVernacularWords{iwer(a)} \textstyleEmphasizedVernacularWords{eka} {\ob}iki(w-e)p{\cb} \textstyleEmphasizedVernacularWords{nop-a-mik}. \\
1p.\textsc{unm} coconut water go-\textsc{ss}.\textsc{seq} fetch-\textsc{pa}-1/3p\\
\glt`We went and fetched coconut water.'
\z
\ea%x1467
\label{ex:8:x1467}
\gll \textstyleEmphasizedVernacularWords{Yo} \textstyleEmphasizedVernacularWords{merena} {\ob}fura aaw-ep{\cb} \textstyleEmphasizedVernacularWords{puuk-a-m}. \\
1s.\textsc{unm} leg knife take-\textsc{ss}.\textsc{seq} cut-\textsc{pa}-1s\\
\glt`I took a knife and cut (into) the leg.' (Or: `I cut into the leg with a knife.')
\z
\ea%x1468
\label{ex:8:x1468}
\gll Um-eya \textstyleEmphasizedVernacularWords{merena} \textstyleEmphasizedVernacularWords{ere-erup} {\ob}ifara aaw-ep{\cb} \textstyleEmphasizedVernacularWords{kaik-ap} nabena suuw-ap akua aaw-ep or-o-m.\\
die-2/3s.\textsc{ds} leg \textsc{rdp}-two rope take-\textsc{ss}.\textsc{sim} tie-\textsc{ss}.\textsc{seq} carrying.pole push-\textsc{ss}.\textsc{seq} shoulder take-\textsc{ss}.\textsc{seq} descend-\textsc{pa}-1s\\
\glt`It (=a pig) died and I took a rope and tied its legs two and two together and pushed it to a carrying pole and carried it down on my shoulder.'
\z
\textstyleEmphasizedWords{{Right-dislocation}} of a medial clause is not unusual. One reason commonly given for right-dislocations is an afterthought: the speaker notices something that should be part of the sentence and adds it to the end \REF{ex:8:x1471}. Another reason is giving prominence to the dislocated clause, since the end of a sentence is a focal position. The right-dislocation of same-subject sequential medial clauses in particular breaks the iconicity between the events and the sentence structure, and has this effect. Consequently, the right-dislocated \textstyleAcronymallcaps{\textsc{ss}} sequential clauses, like the ones in examples \REF{ex:8:x1469} and \REF{ex:8:x1470}, are much more prominent than medial clauses in their normal position.
\ea%x1471
\label{ex:8:x1471}
\gll Or-op naap wia uruf-a-mik, {\ob}mua oona, eneka, woosa kia kir-em-ik-eya{\cb}. \\
descend-\textsc{ss}.\textsc{seq} thus 3p.\textsc{acc} see-\textsc{pa}-1/3p man bone tooth head white turn-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds}\\
\glt`They went down and saw them like that, the people's bones, teeth and heads turning white.'
\z
\ea%x1469
\label{ex:8:x1469}
\gll Aw-iki(w-e)m-ik-eya wiena mua unowa fiker(a) epia nain ook-i-kuan, {\ob}wiowa aaw-ep{\cb}.\\
burn-go-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} 3p.\textsc{gen} man many kunai.grass fire that1 follow-\textsc{Np}-\textsc{fu}.3p spear take-\textsc{ss}.\textsc{seq}\\
\glt`It keeps burning and many men follow the kunai grass fire, having taken spears.'
\z
\ea%x1470
\label{ex:8:x1470}
\gll Aaya muuna kuisow enim-i-mik, {\ob}aite=ke manina=pa yia aaw-om-iwkin{\cb}. \\
sugarcane joint one eat-\textsc{Np}-\textsc{pr}.1/3p 1s/p.mother=\textsc{cf} garden=\textsc{loc} 1p.\textsc{acc} get-\textsc{ben}-2/3p.\textsc{ds}\\
\glt`We eat one joint of sugarcane, when/after our mothers have gotten it for us from the garden.'
\z
\subsection{Temporal relations in chained clauses}
%\hypertarget{RefHeading23141935131865}
Clause chaining in Mauwake distinguishes between sequential and simultaneous actions in the chained clauses, but only when the clauses have the same subject (\sectref{sec:3.8.3.5.1}). The sequential action verb in \REF{ex:8:x1431} indicates that one action is finished before the next one starts.
\ea%x1431
\label{ex:8:x1431}
\gll No nainiw kir-\textstyleEmphasizedVernacularWords{ep} ikiw-\textstyleEmphasizedVernacularWords{ep} owow mua wia maak-eya urup-\textstyleEmphasizedVernacularWords{ep} mukuna nain umuk-uk. \\
2s.\textsc{unm} again turn-\textsc{ss}.\textsc{seq} go-\textsc{ss}.\textsc{seq} village man 3p.\textsc{acc} tell-2/3s.\textsc{ds} ascend-\textsc{ss}.\textsc{seq} fire that1 extinguish-\textsc{imp}.3p\\
\glt`Turn around, go and tell the village men and let them come up and extinguish the fire.'
\z
When a clause has a simultaneous action medial verb \REF{ex:8:x1432}, it indicates at least some overlap with the action in the following clause.
\ea%x1432
\label{ex:8:x1432}
\gll Or-\textstyleEmphasizedVernacularWords{omi} yo koka koora=pa nan efa wu-\textstyleEmphasizedVernacularWords{ami} ma-e-k, `` ... '' \\
descend-\textsc{ss}.\textsc{sim} 1s.\textsc{unm} jungle house=\textsc{loc} there 1s.\textsc{acc} put-\textsc{ss}.\textsc{sim} say-\textsc{pa}-3s\\
\glt`As he went down, he put me in the jungle house and said, `` ... '' '
\z
Simultaneity vs. sequentiality is not always a choice between absolutes; sometimes it is a relative matter. Example \REF{ex:8:x1433} refers to a situation where a man came back home from a period of labour elsewhere and got married upon arrival. In actual life there may have been a gap of at least several days, possibly longer, but because the two events were so closely linked in the speaker's mind, the simultaneous action form was used when the story was told decades after the events took place.
\ea%x1433
\label{ex:8:x1433}
\gll Ekap-\textstyleEmphasizedVernacularWords{emi} yo efa aaw-o-k. \\
come-\textsc{ss}.\textsc{sim} 1s.\textsc{unm} 1s.\textsc{acc} take-\textsc{pa}-3s\\
\glt`He came and married me.'
\z
The simultaneous action form is less marked than the sequential action form: when the relative order of the actions or events is not relevant, the simultaneous action form is used. In example \REF{ex:8:x1437}, the order of the preparations for a pighunt is not crucial, but the sequential action form on the last medial verb indicates that all the actions take place before leaving, rather than just at the time of leaving.
\ea%x1437
\label{ex:8:x1437}
\gll Maa en-ep-pu-\textstyleEmphasizedVernacularWords{ami} top aaw-\textstyleEmphasizedVernacularWords{emi} moma unukum-\textstyleEmphasizedVernacularWords{emi} kapit, wiowa aaw-\textstyleEmphasizedVernacularWords{ep} fikera iw-i-mik.\\
food eat-\textsc{ss}.\textsc{seq}-\textsc{cmpl}-\textsc{ss}.\textsc{sim} trap take-\textsc{ss}.\textsc{sim} taro wrap-\textsc{ss}.\textsc{sim} trap.frame spear take-\textsc{ss}.\textsc{seq} kunai.grass go-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`We eat, take the trap, wrap taro, take the trap frame and spear(s) and go to the kunai grass area.'
\z
A medial verb takes its temporal specification from the tense of the closest following finite clause \REF{ex:8:x1442}--\REF{ex:8:x1445}, or, in the case of a right-dislocated medial clause, from the preceding finite clause \REF{ex:8:x1471}.
\ea%x1442
\label{ex:8:x1442}
\gll Nomokowa maala war-ep ekap-ep ifa nain ifakim-\textstyleEmphasizedVernacularWords{o}-k.\\
tree long cut-\textsc{ss}.\textsc{seq} come-\textsc{ss}.\textsc{seq} snake that1 kill-\textsc{pa}-3s\\
\glt`He cut a long stick, came and killed the snake.'
\z
\ea%x1444
\label{ex:8:x1444}
\gll Mua=ke kais-ap neeke wu-ap miiw-aasa nop-ap miiw-aasa=ke iwer(a) ififa nain aaw-ep p-ekap-ep epia koora mamaiya=pa wu-eya fook-\textstyleEmphasizedVernacularWords{i-mik}.\\
man=\textsc{cf} husk-\textsc{ss}.\textsc{seq} there.\textsc{cf} put-\textsc{ss}.\textsc{seq} land-canoe fetch-\textsc{ss}.\textsc{seq} land-canoe=\textsc{cf} coconut dry that1 take-\textsc{ss}.\textsc{seq} \textsc{\textsc{bp}x}-come-\textsc{ss}.\textsc{seq} fire house near=\textsc{loc} put-2/3s.\textsc{ds} split-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`Men husk them (coconuts) and put them there and fetch a truck, and the truck takes the dry coconuts and brings them close to the drying shed (lit: fire house), and we split them.'
\z
\ea%x1445
\label{ex:8:x1445}
\gll Ikoka mua ar-ep emeria aaw-ep kamenap on-\textstyleEmphasizedVernacularWords{i-nan}? \\
later man become-\textsc{ss}.\textsc{seq} woman take-\textsc{ss}.\textsc{seq} how do-\textsc{Np}-\textsc{fu}.2s\\
\glt`Later when you become a man and take a wife, what will you do?'
\z
The \textstyleAcronymallcaps{\textsc{ds}} medial verbs (\sectref{sec:3.8.3.5.2}) do not differentiate between sequential and simultaneous action. Sequential action \REF{ex:8:x1502} is the default interpretation for verbs other than \textstyleStyleVernacularWordsItalic{ik}- `be', which is interpreted as simultaneous with the verb in the reference clause \REF{ex:8:x1503}. So in order to specify that two or more actions by different participants took place at the same time, the speaker needs to use the continuous aspect form \REF{ex:8:x1472}:
\ea%x1502
\label{ex:8:x1502}
\gll Maa unowa ifer-aasa=ke p-urup\textstyleEmphasizedVernacularWords{-eya} miiw-aasa=ke fan p-ir-am-ik-ua.\\
thing many sea-canoe=\textsc{cf} \textsc{bpx}-ascend-2/3s.\textsc{ds} land-canoe=\textsc{cf} here \textsc{bpx}-come-\textsc{ss}.\textsc{sim}-be-\textsc{pa}.3s\\
\glt`The cargo was brought up (to the coast) by ship(s), and (then) trucks kept bringing it here.'
\z
\ea%x1503
\label{ex:8:x1503}
\gll Wi yapen=pa \textstyleEmphasizedVernacularWords{ik-}omak\textstyleEmphasizedVernacularWords{-iwkin} Amerika kerer-e-mik.\\
3p.\textsc{unm} inland=\textsc{loc} be-\textsc{distr}/\textsc{pl}-2/3p.\textsc{ds} America appear-\textsc{pa}-1/3p\\
\glt`Many people were inland and the Americans arrived.'
\z
\ea%x1472
\label{ex:8:x1472}
\gll Ek-ap umuk-i-nen na-ep on-am\textstyleEmphasizedVernacularWords{-ik-eya} ifa=ke keraw-a-k, ...\\
go-\textsc{ss}.\textsc{seq} extinguish-\textsc{Np}-\textsc{fu}.1s say-\textsc{ss}.\textsc{seq} do-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} snake=\textsc{cf} bite-\textsc{pa}-3s\\
\glt`He went and as he was trying to extinguish it (a fire), a snake bit him, {\dots}'
\z
Although the chaining structure itself only specifies the temporal relationship between the clauses and is otherwise neutral, it is especially open to a causal/consecutive interpretation. \citet[237]{Reesink1983b} notes this for different-subject medial verbs in Usan; in Mauwake this interpretation is not very common in general, and it is more frequent with \textsc{ds} predicates \REF{ex:8:x1434}, \REF{ex:8:x1412} than with \textsc{ss} verbs.
\ea%x1434
\label{ex:8:x1434}
\gll Yo maamuma marew\textstyleEmphasizedVernacularWords{-eya} maak-e-m, {\textquotedbl}Iir oko=pa ni-i-nen.''\\
1s.\textsc{unm} money no(ne)-2/3s.\textsc{ds} tell-\textsc{pa}-1s time other=\textsc{loc} give.you-\textsc{Np}-\textsc{fu}.1s\\
\glt`I had no money and I told him (or: Because I had no money I told him), ``I'll give it to you another time.'' '
\z
\ea%x1412
\label{ex:8:x1412}
\gll Iperowa=ke kekan-\textstyleEmphasizedVernacularWords{iwkin} ma-e-mik, ``Aria, ...'' \\
middle.aged=\textsc{cf} be.strong-2/3p.\textsc{ds} say-\textsc{pa}-1/3p alright\\
\glt`The elders insisted, and (so) we said, ``All right, {\dots}'' '
\z
The causal/consecutive interpretation is most common when the object of a transitive medial clause becomes the subject in an intransitive reference clause: in example \REF{ex:8:x1504} `the son' is the object of the first two clauses and the subject of the final clause.
\ea%x1504
\label{ex:8:x1504}
\gll {\ob}\textstyleEmphasizedVernacularWords{Muuka}{\cb}\textsubscript{O} p-or-op \textstyleEmphasizedVernacularWords{p-er-iwkin} \textstyleEmphasizedVernacularWords{yak-i-ya}. \\
son \textsc{bpx}-descend-\textsc{ss}.\textsc{seq} \textsc{bpx}-go-2/3p.\textsc{ds} bathe-\textsc{Np}-\textsc{pr}.3s\\
\glt`They bring the son down (from the house) and take him (to the well) and (so) he bathes.'
\z
Cognition verbs and feeling or experiential verbs seem to be the only ones that allow a causal/consecutive interpretation when a medial clause has a \textsc{ss} verb \REF{ex:8:x1440}--\REF{ex:8:x1484}:
\ea%x1440
\label{ex:8:x1440}
\gll Siiwa, epa maak-e-mik nain \textstyleEmphasizedVernacularWords{paayar-ep} ma-e-k, ``Amerika aakisa irak-owa kerer-e-mik.''\\
moon place/time tell-\textsc{pa}-1/3p that1 understand-\textsc{ss}.\textsc{seq} say-\textsc{pa}-3s America now fight-\textsc{nmz} appear-\textsc{pa}-1/3p\\
\glt`He understood the month and time/place that they (had) told him, and (so) he said, ``Now the Americans have come to fight.'' '
\z
\ea%x1441
\label{ex:8:x1441}
\gll ... ne wi ikiw-e-mik, \textstyleEmphasizedVernacularWords{kerewar-ep} ikiw-e-mik. \\
... \textsc{add} 3p.\textsc{unm} go-\textsc{pa}-1/3p become.angry-\textsc{ss}.\textsc{seq} go-\textsc{pa}-1/3p\\
\glt`{\dots} and they went; they were angry and (so) they went.'
\z
\ea%x1484
\label{ex:8:x1484}
\gll Mua oko=ko \textstyleEmphasizedVernacularWords{napum-ar-}\textstyleEmphasizedVernacularWords{ep} ikemika kaik-ow(a) mua wiar ikiw-o-k.\\
man other=\textsc{cf} sickness-\textsc{inch}-\textsc{ss}.\textsc{seq} wound tie-\textsc{nmz} man 3.\textsc{dat} go-\textsc{pa}-3s\\
\glt`A man got sick and (so) he went to a doctor.'
\z
\subsection{Person reference in chained clauses} \label{sec:8.2.3}
%\hypertarget{RefHeading23161935131865}
The switch-reference marking tracks the referents in a different way from the person/number marking in finite verbs. The medial verb suffix indicates whether the clause has the same subject/topic as the reference clause that comes after it, and the \textstyleAcronymallcaps{\textsc{ds}} suffixes also have some specification of the subject (\sectref{sec:3.8.5.2}). In \REF{ex:8:x1436}, the subjects are a man and his wife in the first two clauses and in the last one, and a spirit man in all the others:
\ea%x1436
\label{ex:8:x1436}
\gll Ikiw-\textstyleEmphasizedVernacularWords{ep}\textsubscript{i} nan ika-\textstyleEmphasizedVernacularWords{iwkin}\textsubscript{i} inasina mua\textsubscript{j} ifa puuk-\textstyleEmphasizedVernacularWords{ap}\textsubscript{j} solon-\textstyleEmphasizedVernacularWords{ep}\textsubscript{j} urup-\textstyleEmphasizedVernacularWords{ep}\textsubscript{j} manina=pa waaya puuk-\textstyleEmphasizedVernacularWords{ap}\textsubscript{j} moma wiar en-em-ik-\textstyleEmphasizedVernacularWords{eya}\textsubscript{j} uruf-a-mik\textsubscript{i}.\\
go-\textsc{ss}.\textsc{seq} there be-2/3p.\textsc{ds} spirit man snake change.into-\textsc{ss}.\textsc{seq} crawl-\textsc{ss}.\textsc{seq} ascend-\textsc{ss}.\textsc{seq} garden=\textsc{loc} pig change.into-\textsc{ss}.\textsc{seq} taro 3.\textsc{dat} eat-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} see-\textsc{pa}-1/3p\\
\glt`They went and were there, and a spirit man came and changed into a snake and crawled up and in the garden it changed into a pig and as it was eating their taro they saw it.'
\z
Because the switch-reference marking relates to the subject/topic in two different clauses at the same time, it sometimes gives rise to ambiguities that need to be resolved. If the subjects in adjacent clauses are partially the same and partially different, a choice has to be made whether they are marked as \textsc{ss} or \textsc{ds}; only a few Papuan languages have the option of marking both \textsc{ss} and \textsc{ds} on the same verb \citep{Roberts1997}. Also, if the \textsc{sr} marking is considered to track the syntactic subject, there are a number of apparent irregularities in the marking. These have been discussed in particular by \citet{Reesink1983a} and \citet{Roberts1988b} with reference to Papuan languages. The next three subsections describe how Mauwake deals with these questions.
\subsubsection{Partitioning of the participant set}
%\hypertarget{RefHeading23181935131865}
When one of the subjects is plural and the other is a singular included in the plural, this mismatch theoretically allows for a number of different choices in the switch-reference marking, but in practice each language limits the choice in its own way.\footnote{For a summary of how different Papuan languages treat this area of ambiguity, see \citet{Reesink1983a}, \citet[201--202]{Reesink1987} and \citet[87--91]{Roberts1988b}.} \tabref{tab:15:switchref} shows the options for Mauwake.
\begin{table}
\caption{Switch-reference marking with partial overlap of subjects}
\label{tab:15:switchref}
\begin{tabular}{llll}
\mytoprule
\multicolumn{2}{l}{{\bfseries Singular to plural}}
& \multicolumn{2}{l}{{\bfseries Plural to singular}}\\
\midrule
1s {{\textgreater}} 1p & \textsc{ss} & 1p {{\textgreater}} 1s & \textsc{ss}\\
2s {{\textgreater}} 1p & \textsc{ds} & 1p {{\textgreater}} 2s & \textsc{ss}\\
2s {{\textgreater}} 2p & \textsc{ss}/\textsc{ds} & 1p {{\textgreater}} 3s & \textsc{ss}\\
3s {{\textgreater}} 1p & \textsc{ss}/\textsc{ds} & 2p {{\textgreater}} 2s & \textsc{ss}\\
3s {{\textgreater}} 2p & \textsc{ss}/\textsc{ds} & 2p {{\textgreater}} 3s & \textsc{ss}\\
3s {{\textgreater}} 3p & \textsc{ss}/\textsc{ds} & 3p {{\textgreater}} 3s & \textsc{ss}\\
\mybottomrule
\end{tabular}
\end{table}
When a plural subject changes into a singular, the suffix is always the one used for same subject \REF{ex:8:x1438}.
\ea%x1438
\label{ex:8:x1438}
\gll {\dots}owowa urup-e-mik. Owowa urup-\textbf{ep } o koora ikiw-o-k.\\
{\dots}village ascend-\textsc{pa}-1/3p village ascend-\textsc{ss}.\textsc{seq} 3s.\textsc{unm} house go-\textsc{pa}-3s\\
\glt`{\dots}We came up to the village. After we came up to the village he went into the house.'
\z
When a singular subject changes into a plural, there is more variation. A first person singular changing into a plural calls for same-subject marking \REF{ex:8:x1435}, but a second person singular switching into a first person plural requires different-subject marking, even when this second person singular is part of the group denoted by the first person plural \REF{ex:8:x1439}.
\ea%x1435
\label{ex:8:x1435}
\gll Mik-ap, patot=iw mik-ap, aaw-ep, aasa=pa wu-ap, amap-urup-ep, yena koora=pa wu-ap, uuriw epa wiim-eya or-op, saa=pa pa-\textstyleEmphasizedVernacularWords{ep} uup-e-mik.\\
spear-\textsc{ss}.\textsc{seq} fishing.spear=\textsc{inst} spear-\textsc{ss}.\textsc{seq} take-\textsc{ss}.\textsc{seq} canoe=\textsc{loc} put-\textsc{ss}.\textsc{seq} \textsc{\textsc{bp}x}-ascend-\textsc{ss}.\textsc{seq} 1s.\textsc{gen} house=\textsc{loc} put-\textsc{ss}.\textsc{seq} morning place get.light-2/3s.\textsc{ds} descend-\textsc{ss}.\textsc{seq} sand=\textsc{loc} butcher-\textsc{ss}.\textsc{seq} cook-\textsc{pa}-1/3p\\
\glt`I speared it, I speared it with a fishing spear, and took it and put it in the canoe, brought it up and put it in my house, and in the morning when it was light I went down and butchered it on the beach, and \textstyleEmphasizedWords{we} cooked it.'
\z
\ea%x1439
\label{ex:8:x1439}
\gll Ekap-\textbf{eya} ikiw-i-yen.\\
come-2/3s.\textsc{ds} go-\textsc{Np}-\textsc{fu}.1p\\
\glt`When you come we (including you) will go.'
\z
When a second person plural switches into a first person plural (including the people indicated by the 2p), the marking has to be for different subject, but in the opposite case, the first person plural changing into the second person plural (again included in the 1p), the marking can be either for same or different subject. Both of these are exemplified in \REF{ex:8:x248}. Here the switch from first person plural to second person plural is marked with the \textsc{ss} marking.
\ea
\label{ex:8:x248}
\gll I ikoka yien=iw urup-ep nia maak-omkun ora-\textstyleEmphasizedVernacularWords{iwkin} aria owawiya feeke pok-ap ik-ok eka liiwa muuta en-\textbf{ep} aria ni soomar-ek-eka. \\
1p.\textsc{unm} later 1p.\textsc{gen}=\textsc{lim} ascend-\textsc{ss}.\textsc{seq} 2p.\textsc{acc} tell-1s/p.\textsc{ds} descend-2/3p.\textsc{ds} alright together here.\textsc{cf} sit-\textsc{ss}.\textsc{seq} be-\textsc{ss} water a.little only eat-\textsc{ss}.\textsc{seq} alright 2p.\textsc{unm} walk-go-\textsc{imp}.2p\\
\glt `Later we (by) ourselves will come up and tell you (to come), and when you come down we will sit here together and eat a bit of something and then you (can) walk back.'
\z
% * <[email protected]> 2015-05-22T15:03:35.027Z:
%
% In the pdf the example is OK
%
% ^ <[email protected]> 2015-07-27T15:16:38.516Z.
With the rest, the speaker has a choice between the two forms. This choice is probably pragmatic and depends on whether the speaker wants to stress the change or the continuity of the referents \citep[47]{Franklin1983}.
\subsubsection{Tracking a subject high in topicality}
%\hypertarget{RefHeading23201935131865}
\citet[xi]{HaimanEtAl1983} claim that it is strictly the syntactic subject whose reference is tracked, but this statement has been challenged and modified by several others.\footnote{\citet{Givon1983,Reesink1983a,Reesink1987,Roberts1988b,Roberts1997} and \citet{Farr1999} among others.} If it is accepted as such, both Mauwake and other Papuan languages present a number of irregularities that have to be explained somehow.
\citet[242--243]{Reesink1983a} suggests that the switch-reference system does monitor the subject co-referentiality in the medial clause and its reference clause, but topicality considerations cause apparent ``anomalies'' in the basic system. \citet{Roberts1988b} makes a well-supported claim for Amele that in fact it is the topic that is tracked rather than the syntactic subject, or semantic agent, and he tentatively extends the claim to cover other Papuan languages as well. His later survey \citep{Roberts1997} presents a more balanced view that \textstyleAcronymallcaps{\textsc{sr}} can be either agent-oriented or topic-oriented, while maintaining that in most Papuan languages it is topic-oriented.
In a nominative-accusative language like Mauwake the syntactic subject, the semantic agent and the pragmatic topic coincide most of the time. The \textstyleAcronymallcaps{\textsc{sr}} marking tracks the subject, but when there is competition between a more topical and less topical subject in clause chains, it is the more topical one that is tracked. An object, even if it is the topic, does not participate in the \textstyleAcronymallcaps{\textsc{sr}} marking.
Competition between a more topical and a less topical subject most commonly occurs when a clause with an inanimate subject intervenes between clauses where there is an animate/human subject. Even here the ``normal'' \textstyleAcronymallcaps{\textsc{sr}} strategy is used, if the inanimate subject is topical enough to control the \textstyleAcronymallcaps{\textsc{sr}} marking in the same way as animate subjects do. In the following examples, the drying of the soup \REF{ex:8:x1474} and the bending of the coconut palm \REF{ex:8:x1480} are important events in the development of the story and so the regular \textstyleAcronymallcaps{\textsc{sr}} marking is maintained. In \REF{ex:8:x1480}, the coconut palm can also be interpreted as a volitional participant, as it bends and straightens itself according to the needs of the people.
\largerpage
\ea%x1474
\label{ex:8:x1474}
\gll Uup-em-ika-\textstyleEmphasizedVernacularWords{iwkin} maa eka saanar-em-ik-\textstyleEmphasizedVernacularWords{eya} iki(w-e)p eka un-ep ekap-ep amina=pa feef-am-ik-e-mik.\\
cook-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} food water dry-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} go-\textsc{ss}.\textsc{seq} water draw-\textsc{ss}.\textsc{seq} come-\textsc{ss}.\textsc{seq} pot=\textsc{loc} pour-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p\\
\glt`They were cooking it and the soup kept drying and they kept going and drawing water and coming and pouring it in the pot.'
% * <[email protected]> 2015-05-22T15:06:00.152Z:
%
% Gap above the free translation too wide?
%
\z
\ea%x1480
\label{ex:8:x1480}
\gll Emeria panewowa nain wiimasip erup wia aaw-ep owow uruma or-op iimar-ep ika-\textstyleEmphasizedVernacularWords{iwkin} iwera oko mekemkar-ep or-\textstyleEmphasizedVernacularWords{eya} wi iwera ir-\textstyleEmphasizedVernacularWords{iwkin} nainiw kaken iimar-e-k.\\
woman old that1 3s/p.grandchild two 3p.\textsc{acc} take-\textsc{ss}.\textsc{seq} village open.place descend-\textsc{ss}.\textsc{seq} stand.up-\textsc{ss}.\textsc{seq} be-2/3p.\textsc{ds} coconut other bend-\textsc{ss}.\textsc{seq} descend-2/3s.\textsc{ds} 3p.\textsc{unm} coconut climb-2/3p.\textsc{ds} again straight stand.up-\textsc{pa}-3s\\
\glt`The old woman took the two grandchildren and they went down to the village square and were standing there, and a coconut palm bent down and they climbed up the coconut palm and it stood up straight again.'
\z
When an inanimate subject is low in topicality, the \textstyleAcronymallcaps{\textsc{sr}} marking of the previous clause disregards it and indicates same-subject continuation, but the verb in the inanimate clause has to indicate a change of subject if a more topical subject follows. In many Papuan languages this structure is typical of temporal and climate expressions and other impersonal predications (\citealt{Reesink1983a}, \citealt{Roberts1988b}), which are often used for giving backgrounded\footnote{\citet[244]{Farr1999} calls this \textit{on-line background} to distinguish it from the off-line background information of subordinate clauses.} information. In examples \REF{ex:8:x1482} and \REF{ex:8:x1475} the verb in the initial medial clause, which predicates the action of human participants, is marked as having the same subject following, even though the next clause mentions the coming of darkness or dawn. Returning to the main-line action requires different-subject marking. In the examples, the ``skipped'' medial clauses are in brackets.
\ea%x1482
\label{ex:8:x1482}
\gll Aria maa en-ep naap ik-\textstyleEmphasizedVernacularWords{ok} {\ob}kokom-ar-\textstyleEmphasizedVernacularWords{e}\textstyleEmphasizedVernacularWords{y}\textstyleEmphasizedVernacularWords{a}{\cb} in-e-mik.\\
alright food eat-\textsc{ss}.\textsc{seq} thus be-\textsc{ss} dark-\textsc{inch}-2/3s.\textsc{ds} sleep-\textsc{pa}-1/3p\\
\glt`Alright we ate and stayed like that and (then) it became dark and we slept.'
\z
\ea%x1475
\label{ex:8:x1475}
\gll In-\textstyleEmphasizedVernacularWords{ep} {\ob}epa wiim-\textstyleEmphasizedVernacularWords{eya}{\cb} onak maak-e-mik, ``{\dots''}\\
sleep-\textsc{ss}.\textsc{seq} place dawn-2/3s.\textsc{ds} 3s/p.mother tell-\textsc{pa}-1/3p\\
\glt`They slept, and when it dawned they told their mother, ``{\dots}'' '
\z
If the impersonal predicate is important for the main story line, rather than providing backgrounded information, the impersonal verb itself is placed as a final verb, and the verb in the preceding medial clause is marked for different subject \REF{ex:8:x1492}. \citet[206]{Reesink1987} notes a similar rule for Usan.
\ea%x1492
\label{ex:8:x1492}
\gll Kir-ep ekap-em-ika-\textstyleEmphasizedVernacularWords{iwkin} epa wiim-o-k. \\
turn-\textsc{ss}.\textsc{seq} come-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} place dawn-\textsc{pa}-3s\\
\glt`They turned and as they were coming, it dawned.'
\z
In many Papuan languages, the impersonal predications include a number of experiential verbs (\citealt[204]{Reesink1987}, \citealt{Roberts1997}). In Mauwake, most of the experiential expressions are adjunct plus verb constructions (\sectref{sec:3.8.5.2.1}), where the experiencer is a subject rather than an object; in chained clauses these behave in a regular manner. But those few experiential expressions that are impersonal do not trigger \textsc{ds} marking in the preceding medial clause, because the inanimate subject in the experiential clause is not topical enough to do it. In \REF{ex:8:x1491}, the first person singular subject of the medial clauses becomes the object of the final clause, but the medial clause has same subject marking:
\ea%x1491
\label{ex:8:x1491}
\gll Uuw-ap uuw-\textstyleEmphasizedVernacularWords{ap} oona=ke efa sirir-i-ya.\\
work-\textsc{ss}.\textsc{seq} work-\textsc{ss}.\textsc{seq} bone=\textsc{cf} 1s.\textsc{acc} hurt-\textsc{Np}-\textsc{pr}.3s\\
\glt`I worked and worked and my bones hurt.'
\z
The verb \textstyleStyleVernacularWordsItalic{weeser}- `finish' is often used in chained clauses to indicate the finishing of an action. In this function, its low-topicality subject, the nominalized form of the preceding verb, is never mentioned overtly, and the preceding medial clause has \textsc{ss} marking \REF{ex:8:x1483}:
\ea%x1483
\label{ex:8:x1483}
\gll Uup-\textstyleEmphasizedVernacularWords{ep} {\ob}weeser-\textstyleEmphasizedVernacularWords{eya}{\cb} aria oposia gelemuta wiam erup fain wia wu-om-a-m.\\
cook-\textsc{ss}.\textsc{seq} finish-2/3s.\textsc{ds} alright meat small 3p.\textsc{refl} two this 3p.\textsc{acc} put-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-1s \\
\glt`I cooked it and when it was finished, all right, I put (aside) a little of the meat for these two (women).'
\z
In \REF{ex:8:x1476}, there are two intervening clauses with different low-topicality inanimate subjects. The same-subject marking of the first clause ``jumps over'' these two clauses and refers to the subject in the last clause. The two clauses in between both have \textsc{ds} marking.
\ea%x1476
\label{ex:8:x1476}
\gll Maa uup-\textstyleEmphasizedVernacularWords{ep} {\ob}fofola urup-\textstyleEmphasizedVernacularWords{eya}{\cb} {\ob}maa op-\textstyleEmphasizedVernacularWords{iya}{\cb} iiw-o-k.\\
food cook-\textsc{ss}.\textsc{seq} foam rise-2/3s.\textsc{ds} food be.done-2/3s.\textsc{ds} dish.out-\textsc{pa}-3s\hspace{-1mm}\\
\glt`She cooked the food and when it boiled and was done she dished it out.'
\z
Although human subjects are typically high on the topicality hierarchy \citep[364]{Givon1984}, even a human subject may occasionally be so low in topicality that it gets overlooked in the \textstyleAcronymallcaps{\textsc{sr}} marking \REF{ex:8:x1477}, \REF{ex:8:x1478}.\footnote{\citet[236--237]{Reesink1983a} gives similar examples from other Papuan languages.} What is particularly striking about example \REF{ex:8:x1477} is that the overlooked clause has a subject in the first person singular, which is usually considered to be topically the highest possible subject. A plausible explanation is that politeness and hospitality require the host of a big meal to downplay his own importance in this way.
\ea%x1477
\label{ex:8:x1477}
\gll Efa arew-\textstyleEmphasizedVernacularWords{ap} {\ob}maa eka liiwa muuta on-\textstyleEmphasizedVernacularWords{amkun}{\cb} en-ep-pu-ami soomar-ek-eka. \\
1s.\textsc{acc} wait-\textsc{ss}.\textsc{seq} food water little only make-1s/p.\textsc{ds} eat-\textsc{ss}.\textsc{seq}-\textsc{cmpl}-\textsc{ss}.\textsc{sim} walk-go-\textsc{imp}.2p\\
\glt`Wait for me, and when I have made just a little soup you eat it and then you (may) go.'
\z
\ea%x1478
\label{ex:8:x1478}
\gll Ikiw-\textstyleEmphasizedVernacularWords{ep} {\ob}mua nain urema osarena=pa iimar-ep ik-\textstyleEmphasizedVernacularWords{eya}{\cb} ona mua nain ifakim-o-k. \\
go-\textsc{ss}.\textsc{seq} man that1 bandicoot path=\textsc{loc} stand-\textsc{ss}.\textsc{seq} be-2/3s.\textsc{ds} 3s.\textsc{gen} man that1 kill-\textsc{pa}-3s\\
\glt`She went and as the man was standing on the bandicoot path she killed that husband of hers.'
\z
In process descriptions, the identity of the people performing the actions is not important, and their topicality is low. In \REF{ex:8:x1481}, the person watching the fire in the coconut drying shed is not mentioned in any way. This example is also like \REF{ex:8:x1476} above in that two clauses with different low-topicality subjects, one of them here [+human], intervene between the second \textsc{ss} clause and the final clause, where the original subject is picked up.
\ea%x1481
\label{ex:8:x1481}
\gll Epia wu-ap ikiw-\textstyleEmphasizedVernacularWords{ep} {\ob}iwera kuuf-am-ik-\textstyleEmphasizedVernacularWords{eya}{\cb} {\ob}iwera reen-\textstyleEmphasizedVernacularWords{eya}{\cb} iwer urupa anum-i-mik. \\
fire put-\textsc{ss}.\textsc{seq} go-\textsc{ss}.\textsc{seq} coconut watch-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} coconut dry-2/3s.\textsc{ds} coconut shell knock-\textsc{Np}-\textsc{pr}.1/3p \\
\glt`We/they put them (the coconuts) on the fire and go, and (someone) keeps watching the coconuts and they dry and (then) we/they knock the shells away.'
\z
Even an inanimate subject may override an animate/human one in \textsc{sr} marking, if its topicality is high enough. In \REF{ex:8:x1479} the subject/topic is \textstyleForeignWords{kunai} grass and the burning of the grass, which is such an important part of a pighunt that the hunt itself is called \textstyleStyleVernacularWordsItalic{fiker(a) kuumowa} `kunai-burning'. The grass is a continuing topic from the previous several sentences, so a noun phrase is not used for marking it.
\ea%x1479
\label{ex:8:x1479}
\gll Kuum-\textstyleEmphasizedVernacularWords{iwkin} aw-\textstyleEmphasizedVernacularWords{emi} {\ob}mua unow maneka iiwawun fikera kuum-emi saawirin-ow-\textstyleEmphasizedVernacularWords{iwkin}{\cb} aria fiker epia aw-i-non.\\
burn-2/3p.\textsc{ds} burn-\textsc{ss}.\textsc{sim} man many very altogether kunai.grass burn-\textsc{ss}.\textsc{sim} round-\textsc{caus}-2/3p.\textsc{ds} alright kunai.grass fire burn-\textsc{Np}-\textsc{fu}.3s\\
\glt`They burn it and it burns and all the men burn and surround the kunai grass, (and) alright the kunai fire will burn.'
\z
\subsubsection{Apparent mismatches of reference}
%\hypertarget{RefHeading23221935131865}
A medial verb with \textsc{ds} marking is used in two types of construction where it does not indicate a change of subject. Both types have two or more clauses with identical \textsc{ds} marking even though the subject is the same; only the last of those clauses actually indicates a change of subject. The first type is recursion of a \textsc{ds} verb \REF{ex:8:x1493}, indicating continuity; the identification of the subject is suspended until the repetition ends \citep[201]{Reesink1987}.
\ea%x1493
\label{ex:8:x1493}
\gll Wiawi kuum-\textstyleEmphasizedVernacularWords{eya} kuum-\textstyleEmphasizedVernacularWords{eya} kuum-\textstyleEmphasizedVernacularWords{eya} aw-ep eka iw-a-k na wia, eka=ke saanar-e-k. \\
3s/p.father burn-2/3s.\textsc{ds} burn-2/3s.\textsc{ds} burn-2/3s.\textsc{ds} burn-\textsc{ss}.\textsc{seq} river enter-\textsc{pa}-3s but no river=\textsc{cf} dry-\textsc{pa}-3s\\
\glt`It kept burning and burning their father and he burned and entered the river but no, the river dried.'
\z
A medial clause that has the same subject as the following medial clause may have \textsc{ds} marking if both the medial clauses relate to the same finite clause as their reference clause, and the first of the medial clauses gets expanded or defined more closely in the second one. The \textsc{ds} verbs may be identical \REF{ex:8:x1494}, but they do not need to be \REF{ex:8:x1495}, \REF{ex:8:x1496}.
\ea%x1494
\label{ex:8:x1494}
\gll Efa uruf-am-ik-\textstyleEmphasizedVernacularWords{eya}, koora=pa efa uruf-am-ik-\textstyleEmphasizedVernacularWords{eya} ikiw-i-nen ekap-i-nen.\\
1s.\textsc{acc} see-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} house=\textsc{loc} 1s.\textsc{acc} see-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} go-\textsc{Np}-\textsc{fu}.1s come-\textsc{Np}-\textsc{fu}.1s \\
\glt`You will keep seeing me, you will keep seeing me from the house, and I will come and go.'
\z
\ea%x1495
\label{ex:8:x1495}
\gll ...pon sisina=pa ik-\textstyleEmphasizedVernacularWords{eya}, piipa unowa=pa soomar-em-ik-\textstyleEmphasizedVernacularWords{eya} mik-a-m. \\
{\dots}turtle shallow.water=\textsc{loc} be-2/3s.\textsc{ds} seaweed many=\textsc{loc} walk-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} spear-\textsc{pa}-1s\\
\glt`{\dots} the turtle was in the shallow water, it was walking among a lot of seaweed and I speared it.'
\z
\ea%x1496
\label{ex:8:x1496}
\gll No ikoka era=pa wia far-\textstyleEmphasizedVernacularWords{eya}, owora wia maak-\textstyleEmphasizedVernacularWords{eya}, aria mua=ke naap me nefa ma-i-nok, ``...'' \\
2s.\textsc{unm} later road=\textsc{loc} 3p.\textsc{acc} call-2/3s.\textsc{ds} betelnut 3p.\textsc{acc} tell-2/3s.\textsc{ds} alright man=\textsc{cf} thus not 2s.\textsc{acc} say-\textsc{Np}-\textsc{fu}.3s\\
\glt`Later, when you see them on the road, when you ask them for betelnut, alright let your husband not say about you that {\dots}'
\z
The \textsc{ss} medial form of the verb `be' is used in the expression \textstyleStyleVernacularWordsItalic{naap ikok} `it is/was thus (and)', regardless of the following subject/topic \REF{ex:8:x1500}, \REF{ex:8:x1501}. The construction seems to have grammaticalized as an expression of an indefinite time span.
\ea%x1500
\label{ex:8:x1500}
\gll \textstyleEmphasizedVernacularWords{Naap} \textstyleEmphasizedVernacularWords{ik-ok} wi Saramun=ke wiisa uf-e-mik. \\
thus be-\textsc{ss} 3p.\textsc{unm} Saramun=\textsc{cf} dance.name dance-\textsc{pa}-1/3p \\
\glt`It was like that and (then) the Saramun people danced \textstyleForeignWords{wiisa}.'
\z
\ea%x1501
\label{ex:8:x1501}
\gll ...mua me wia imen-a-mik. \textstyleEmphasizedVernacularWords{Naap} \textstyleEmphasizedVernacularWords{ik-ok} sarere uura buburia ona amia mua wiar kerer-ep opaimika=pa yia wu-a-k.\\
{\dots}man not 3p.\textsc{acc} find-\textsc{pa}-1/3p thus be-\textsc{ss} Saturday night bald 3s.\textsc{gen} bow man 3.\textsc{dat} appear-\textsc{ss}.\textsc{seq} talk=\textsc{loc} 1p.\textsc{acc} put-\textsc{pa}-3s\\
\glt`{\dots} we didn't find the men. It was like that, and on Saturday evening the bald man himself went to the police and accused us.'
\z
Even when the final clause is verbless \REF{ex:8:x1497}, \REF{ex:8:x1498}, or missing completely because of ellipsis \REF{ex:8:x1499}, a medial clause is still possible. In both cases, the \textsc{sr} marking is based on what the expected subject would be, if there were one.
\ea%x1497
\label{ex:8:x1497}
\gll Naap ik-\textstyleEmphasizedVernacularWords{ok} uruf-am-ika-\textstyleEmphasizedVernacularWords{iwkin} wia. \\
thus be-\textsc{ss} see-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} no\\
\glt`He was like that and they were watching him, but no (he didn't get any better).'
\z
\ea%x1498
\label{ex:8:x1498}
\gll Iinan aasa gurun-owa miim-\textstyleEmphasizedVernacularWords{ap} eka=iw umuk-owa ewur. \\
sky canoe rumble-\textsc{nmz} hear-\textsc{ss}.\textsc{seq} water=\textsc{inst} extinguish-\textsc{nmz} quickly\\
\glt`We heard the rumble of the airplane(s) and quickly extinguished (the fires) with water (lit: and the extinguishing with water quickly).'
\z
The two sentences preceding the example sentence \REF{ex:8:x1499} mention American airplanes that flew over and dropped messages during the Second World War. The ``same subject'' needs to be picked from there -- as the story continues without another reference to the Americans for a while -- and the elliptical clause construed as something like \textstyleStyleVernacularWordsItalic{naap onamik} `and they did so'.
\ea%x1499
\label{ex:8:x1499}
\gll Wi Yaapan nan ik-e-mik nain wia uruf-\textstyleEmphasizedVernacularWords{ap}. \\
3p.\textsc{unm} Japan there be-\textsc{pa}-1/3p that1 3p.\textsc{acc} see-\textsc{ss}.\textsc{seq}\\
\glt`They had seen that the Japanese were there (and so they {\ob}the Americans{\cb} did so).'
\z
\subsubsection{Medial clauses as a complementation strategy for perception verbs} \label{sec:8.2.3.4}
%\hypertarget{RefHeading23241935131865}
Perception verbs in Mauwake mostly use a medial clause as a complementation strategy \citep[371]{Dixon2010a}, when the object of the perception verb is an \textstyleEmphasizedWords{{activity}} \REF{ex:8:x1509}--\REF{ex:8:x1511}.\footnote{\citet[237]{Reesink1983b} notes this for Usan too.} Regular, nominalized complement clauses are only used with perception verbs when a \textstyleEmphasizedWords{{fact}} is reported (\sectref{sec:8.3.2.2}).
\ea%x1509
\label{ex:8:x1509}
\gll Moma wiar \textstyleEmphasizedVernacularWords{en-em-ik-eya} uruf-a-mik. \\
taro 3.\textsc{dat} eat-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} see-\textsc{pa}-1/3p \\
\glt`It was eating their taro, and they saw it.' (Or: `They saw that it was eating their taro.')
\z
\ea%x1510
\label{ex:8:x1510}
\gll Aara \textstyleEmphasizedVernacularWords{muuk-ar-ep} \textstyleEmphasizedVernacularWords{ik-eya} uruf-a-mik.\\
hen son-\textsc{inch}-\textsc{ss}.\textsc{seq} be-2/3s.\textsc{ds} see-\textsc{pa}-1/3p\\
\glt`The hen had laid an egg and we saw it.' (Or: `We saw that the hen had laid an egg.')
\z
\ea%x1511
\label{ex:8:x1511}
\gll Yo me baliwep paayar-e-m, oram iperowa=ke \textstyleEmphasizedVernacularWords{nanar-iwkin} miim-a-m.\\
1s.\textsc{unm} not well understand-\textsc{pa}-1s just middle.aged=\textsc{cf} tell.story-2/3p.\textsc{ds} hear-\textsc{pa}-1s \\
\glt`I do not understand it well, I have just heard the older people tell stories about it.'
\z
\subsubsection{Tail-head linkage} \label{sec:8.2.3.5}
%\hypertarget{RefHeading23261935131865}
Tail-head linkage is a typical feature especially in oral texts\footnote{With the development of written style, this feature is becoming less prominent.} in Papuan languages. It is an inter-sentential cohesive device and could be understood to belong outside ``syntax proper'', if syntax is defined very narrowly. It is mentioned here as it is an important linking device, and the chaining structure is used for it. In narratives and in descriptions of processes, tail-head linkage is utilized to tie sentences together within a thematic paragraph.
The tail-head link is formed when one sentence ends in a finite clause (``tail''), and the next sentence begins with a medial clause (``head'') that copies the verb but changes it into a medial one. The information in this medial clause is given rather than new, unlike in most other medial clauses. \citet[200--201]{Foley1986} claims for Yimas, and assumes for the rest of Papuan languages, that these medial clauses are subordinate, but at least in Mauwake they are not -- they are coordinate like other medial clauses. In a narrative, the final verbs, which then get recapitulated in the next sentence, carry the core of the story line \REF{ex:8:x1505}.
\ea%x1505
\label{ex:8:x1505}
\gll Wafur-a-k na weetak, \textstyleEmphasizedVernacularWords{ufer-a-k}. \textstyleEmphasizedVernacularWords{Ufer-ap} nainiw burir aaw-ep woosa=pa aruf-eya waaya nain \textstyleEmphasizedVernacularWords{in-e-k}. \textstyleEmphasizedVernacularWords{In-eya} yena ikiw-emi nainiw wiowa erup ar-ow-amkun iiwawun \textstyleEmphasizedVernacularWords{um-o-k}. \textstyleEmphasizedVernacularWords{Um-eya} merena ere-erup kaik-ap {\dots}\\
throw-\textsc{pa}-3s but no miss-\textsc{pa}-3s miss-\textsc{ss}.\textsc{seq} again axe take-\textsc{ss}.\textsc{seq} head=\textsc{loc} hit-2/3s.\textsc{ds} pig that1 lie.down-\textsc{pa}-3s lie.down-2/3s.\textsc{ds} 1s.\textsc{gen} go-\textsc{ss}.\textsc{sim} again spear two become-\textsc{caus}-1s/p.\textsc{ds} altogether die-\textsc{pa}-3s die-2/3s.\textsc{ds} leg \textsc{rdp}-two tie-\textsc{ss}.\textsc{seq}\\
\glt`He threw it (=a spear) but no, he missed. He missed it and again took an axe and hit it on the head and the pig fell down. It fell down and I myself went and speared it twice and it died completely. It died and I tied its legs two and two together and {\dots}'
\z
The repeated verb retains its arguments, but there is a choice in how overtly they and the peripherals are marked in the medial clause. Retaining them makes the medial clause more emphatic, and the first element becomes a theme for the new sentence (\sectref{sec:9.1}). In \REF{ex:8:x1505} only the verbs are copied; \REF{ex:8:x1506} copies the subject as well, \REF{ex:8:x1507} the object and \REF{ex:8:x1508} the locative adverbial.
\ea%x1506
\label{ex:8:x1506}
\gll \textstyleEmphasizedVernacularWords{Miiw-aasa} \textstyleEmphasizedVernacularWords{samor-ar-e-k.} \textstyleEmphasizedVernacularWords{Miiw-aasa} \textstyleEmphasizedVernacularWords{samor-ar-eya}{\dots} \\
land-canoe bad-\textsc{inch}-\textsc{pa}-3s land-canoe bad-\textsc{inch}-2/3s.\textsc{ds}\\
\glt`The car broke. The car broke and {\dots}'
\z
\ea%x1507
\label{ex:8:x1507}
\gll Owowa or-op, wuailal-ep \textstyleEmphasizedVernacularWords{akia} \textstyleEmphasizedVernacularWords{ik-e-k}. \textstyleEmphasizedVernacularWords{Akia} \textstyleEmphasizedVernacularWords{ik-ep} en-em-ik-ok, {\dots} \\
village descend-\textsc{ss}.\textsc{seq} be.hungry-\textsc{ss}.\textsc{seq} banana roast-\textsc{pa}-3s banana roast-\textsc{ss}.\textsc{seq} eat-\textsc{ss}.\textsc{sim}-be-\textsc{ss}\\
\glt`He came down to the village, was hungry and roasted bananas. He roasted bananas and was eating them, and {\dots}'
\z
\ea%x1508
\label{ex:8:x1508}
\gll P-ikiw-ep \textstyleEmphasizedVernacularWords{Bogia=pa} \textstyleEmphasizedVernacularWords{nan} \textstyleEmphasizedVernacularWords{wu-a-mik}. \textstyleEmphasizedVernacularWords{Bogia=pa} \textstyleEmphasizedVernacularWords{nan} \textstyleEmphasizedVernacularWords{wu-ap} i kiiriw ekap-e-mik. \\
\textsc{bpx}-go-\textsc{ss}.\textsc{seq} Bogia=\textsc{loc} there put-\textsc{pa}-1/3p Bogia=\textsc{loc} there put-\textsc{ss}.\textsc{seq} 1p.\textsc{unm} again come-\textsc{pa}-1/3p \\
\glt`We took it (=his body) and put/buried it in Bogia. We put it in Bogia and came back again.'
\z
Most commonly the derivational morphology in the two verbs is identical, but sometimes the derivation in the finite verb is dropped from the medial verb \REF{ex:8:x1513}, \REF{ex:8:x1514}. In \REF{ex:8:x1513}, there is a good reason for dropping the benefactive marking from the repeated verb: the spear was thrown for someone's benefit, but it missed, and consequently there was no benefit for anyone.
\ea%x1513
\label{ex:8:x1513}
\gll Olas=ke ekap-emi wiowa \textstyleEmphasizedVernacularWords{wafur-om-a-k}. \textstyleEmphasizedVernacularWords{Wafur-a-k} na weetak, ufer-a-k. \\
Olas=\textsc{cf} come-\textsc{ss}.\textsc{sim} spear throw-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-3s throw-\textsc{pa}-3s but no miss-\textsc{pa}-3s \\
\glt`Olas came and threw a spear for him. He threw it but no, he missed.'
\z
\ea%x1514
\label{ex:8:x1514}
\gll Epa wiim-eya mua \textstyleEmphasizedVernacularWords{karer-omak-e-mik}. \textstyleEmphasizedVernacularWords{Karer-a-p} ma-e-mik, ``{\dots''}\\
place dawn-2/3s.\textsc{ds} man gather-\textsc{distr}/\textsc{pl}-\textsc{pa}-1/3p gather-\textsc{ss}.\textsc{seq} say-\textsc{pa}-1/3p\\
\glt`It dawned and many men gathered. They gathered and said, ``{\dots}'' '
\z
Adding new derivation to the medial verb is possible, but rare: the example (3.\ref{ex:3:x237}) is repeated below as \REF{ex:8:x1515}.
\ea%x1515
\label{ex:8:x1515}
\gll Ikiwosa wiar pepekim-ep \textstyleEmphasizedVernacularWords{kaik-a-m}. \textstyleEmphasizedVernacularWords{Kaik-om-ap}{\dots} \\
head 3.\textsc{dat} measure-\textsc{ss}.\textsc{seq} tie-\textsc{pa}-1s tie-\textsc{ben}-\textsc{bnfy}2.\textsc{ss}.\textsc{seq}\\
\glt`I measured her head and tied it (=headdress). I tied it for her and {\dots}'
\z
Similarly, aspect marking normally stays the same in both verbs, but it is also possible to have aspect marking on the medial verb even though the finite verb has no aspect marking \REF{ex:8:x1516}, \REF{ex:8:x1517}. When new information is added to the verb either by derivation or by aspect marking, it is less clear whether this is still a true case of tail-head linkage.
\ea%x1516
\label{ex:8:x1516}
\gll ...nomokowa maala war-ep, ekap-ep ifa nain \textstyleEmphasizedVernacularWords{ifakim-o-k}. \textstyleEmphasizedVernacularWords{Ifakim-em-ik-eya} ifa nain=ke siowa wasirk-a-k.\\
{\dots}tree long cut-\textsc{ss}.\textsc{seq} come-\textsc{ss}.\textsc{seq} snake that1 beat-\textsc{pa}-3s beat-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} snake that1=\textsc{cf} dog release-\textsc{pa}-3s\\
\glt`{\dots}he cut a long stick, came, and beat up the snake. As he was beating it, the snake released the dog.'
\z
\ea%x1517
\label{ex:8:x1517}
\gll Moma manina mokomokoka \textstyleEmphasizedVernacularWords{nop-i-mik}. \textbf{Nop-ap-pu-ap} nomokowa war-i-mik. \\
taro garden first clear-\textsc{Np}-\textsc{pr}.1/3p clear-\textsc{ss}.\textsc{seq}-\textsc{cmpl}-\textsc{ss}.\textsc{seq} tree cut-\textsc{Np}-\textsc{pr}.1/3p \\
\glt`First we clear (the undergrowth for) taro garden. When we have cleared it we cut the trees.'
\z
The tail-head linkage disregards right-dislocated items that come between the two verbs \REF{ex:8:x1518}, \REF{ex:8:x1519}.
\ea%x1518
\label{ex:8:x1518}
\gll Ne kiiriw nan Medebur \textstyleEmphasizedVernacularWords{ek-a-mik}, mua napuma onaiya. \textstyleEmphasizedVernacularWords{Ek-ap} Medebur=pa neeke {\dots}\\
\textsc{add} again there Medebur go-\textsc{pa}-1/3p man sick with go-\textsc{ss}.\textsc{seq} Medebur=\textsc{loc} there.\textsc{cf}\\
\glt`And again from there they went to Medebur, with the sick man. They went and there in Medebur {\dots}'
\z
\ea%x1519
\label{ex:8:x1519}
\gll ...pok-ap ika-iwkin mua wiar \textstyleEmphasizedVernacularWords{ekap-e-mik}, wiinar-ep. \textstyleEmphasizedVernacularWords{Ekap-emi} wia maak-e-mik, ``Maa iiw-eka.'' \\
sit.down-\textsc{ss}.\textsc{seq} be-2/3p.\textsc{ds} man 3.\textsc{dat} come-\textsc{pa}-1/3p make.planting.holes-\textsc{ss}.\textsc{seq} come-\textsc{ss}.\textsc{sim} 3p.\textsc{acc} tell-\textsc{pa}-1/3p food dish.out-\textsc{imp}.2p\\
\glt`{\dots} they were sitting and their husbands came, having made the planting holes. They came and told them, ``Dish out the food.'' '
\z
A summary tail-head linkage with a generic verb \REF{ex:8:x1520}, a common feature in many \textstyleAcronymallcaps{tng} languages, is used very little in Mauwake.
\ea%x1520
\label{ex:8:x1520}
\gll Or-omi \textstyleEmphasizedVernacularWords{ma-em-ik-e-mik}, ``Eka mamaiya akena i yoowa me aaw-i-yen.'' \textstyleEmphasizedVernacularWords{Naap} \textstyleEmphasizedVernacularWords{on-am-ika-iwkin} eka owowa kerer-ep {\dots}\\
descend-\textsc{ss}.\textsc{sim} say-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p river near very 1p.\textsc{unm} hot not get-\textsc{Np}-\textsc{fu}.1p thus do-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} river village appear-\textsc{ss}.\textsc{seq}\\
\glt`They went down and were saying, ``Very near the river we won't get hot.'' They were doing like that and (then) the river reached the village and {\dots}'
\z
\section{Subordinate clauses: embedding and hypotaxis} \label{sec:8.3}
%\hypertarget{RefHeading23281935131865}
Subordinate clauses are a problematic area to define both cross-linguistically (\citealt{HaimanEtAl1984}, \citealt[317]{MathiessenEtAl1988}) and even within one language \citep[848]{Givon1990}. It seems that there is a continuum from fully independent to embedded clauses (\citealt[207]{Reesink1987}, \citealt[189]{Lehmann1988}).
Rather than treating subordinate clauses as one group, it is helpful to differentiate between embedding and hypotaxis. Embedded clauses have a function in the main clause: relative clauses as qualifiers within a \textstyleAcronymallcaps{np}, complement clauses as objects or subjects, and adverbial clauses as adverbials. Hypotactic clauses are also dependent on the main clause, but they do not function as a constituent in it (\citealt[219]{Halliday1994}, \citealt{Lehmann1988}). Even though subordination is ``a negative term which lumps together all deviations from some `main clause' norm'' \citep[510]{HaimanEtAl1984}%Thompson
, the term still has limited usefulness, as there are some rules that affect both embedded and hypotactic clauses.
In Mauwake, subordinate clauses usually precede the main clauses, and they have a non-final intonation pattern. The initial position is related to the pragmatic function of topic that these clauses often have \citep[187]{Lehmann1988}; but when the subordinate clause is right-dislocated, it does not have a topic function.\footnote{For a discussion on the topic function of subordinate clauses see, e.g., \citet{Reesink1983b, Reesink1987}, \citet{MathiessenEtAl1988}, \citet{Lehmann1988}, and \citet{ThompsonEtAl2007}.} The semantic function varies according to the type of subordinate clause.
Embedded clauses in Mauwake are nominalized clauses: relative clause nominalization (\textstyleAcronymallcaps{rc}) (\sectref{sec:8.3.1}) is always done with the demonstrative \textstyleStyleVernacularWordsItalic{nain} `that' added to a finite clause, whereas complement clauses (\textstyleAcronymallcaps{cc}) (\sectref{sec:8.3.2}) can use either one of the two nominalization strategies (\sectref{sec:5.7}). The locative and temporal adverbial clauses (\sectref{sec:8.3.3}), like the relative clauses, are Type 2 nominalized clauses (\sectref{sec:5.7.2}). All of these clauses bear out \citegen[236]{Reesink1983b} claim that ``subordinate clauses, especially in sentence-initial position, are natural vehicles for the speaker's presuppositions''.\footnote{``Presuppositions'' here refer to pragmatic, not logical-semantic presuppositions.} \citet[230]{Reesink1983b} also suggests that the origin of the relative clause is in a paratactic construction. At least in Mauwake this seems to be true not only of the relative clause but of the complement clause (\sectref{sec:8.3.2}) as well.
The hypotactic conditional and concessive clauses are dependent on their main clause, but not embedded in it.
\subsection{Relative clauses} \label{sec:8.3.1}
%\hypertarget{RefHeading23301935131865}
I define a restrictive relative clause (\textsc{rc}),\footnote{This definition only applies to restrictive relative clauses; non-restrictive \textsc{rc}s (\sectref{sec:8.3.1.4}) are not real \textsc{rc}s although they are structurally similar to the real \textsc{rc}s.} following \citet[206]{Andrews2007b}, as a ``subordinate clause which delimits the reference of a \textsc{np} by specifying the role of the referent of that \textsc{np} in the situation described by the \textsc{rc}''.
\newpage
The relative clause is a statement about some noun phrase in the main clause. That \textstyleAcronymallcaps{\textsc{np}} is here called the antecedent \textstyleAcronymallcaps{\textsc{np}} (\textstyleAcronymallcaps{\textsc{antnp}}),\footnote{This is often called Head NP, but because it is not grammatically a ``head'' of anything, I prefer to call it antecedent \textsc{np}. The name ``antecedent'' is also somewhat of a misnomer, as in Mauwake it does not \textit{precede} the \textsc{relnp}.} since it is the unit that the coreferential \textstyleAcronymallcaps{\textsc{np}} in the relative clause derives its meaning from \citep[20]{Crystal1997}. The coreferential \textstyleAcronymallcaps{\textsc{np}} in the \textstyleAcronymallcaps{\textsc{rc}} is called the relative \textstyleAcronymallcaps{\textsc{np}} (\textstyleAcronymallcaps{\textsc{relnp}}).\footnote{\citet[142]{Keenan1985} calls it a domain noun.}
Often the referent of the \textstyleAcronymallcaps{\textsc{antnp}} is assumed to be known to the hearer but not necessarily easily accessible, so the \textstyleAcronymallcaps{\textsc{rc}} gives background information to help the hearer identify the referent.
The relative marker is the distal-1 demonstrative \textstyleStyleVernacularWordsItalic{nain} `that' (\sectref{sec:3.6.2}) occurring clause-finally in the relative clause \REF{ex:8:x1527}. It has a slightly rising non-final intonation indicating that the sentence continues; right-dislocated \textstyleAcronymallcaps{\textsc{rc}}s have sentence-final falling intonation. Givenness is an essential part of the meaning of the demonstrative, which is also used in \textstyleAcronymallcaps{\textsc{np}}s \REF{ex:8:x1528}. The demonstrative in effect makes the \textstyleAcronymallcaps{\textsc{rc}} into a noun phrase. The similarity of the two structures can be seen in the examples below.
\ea%x1527
\label{ex:8:x1527}
\gll {\ob}Takira gelemuta nain{\cb}\textsubscript{NP} uruf-a-m.\\
boy small that1 see-\textsc{pa}-1s \\
\glt`I saw that/the small boy.'
\z
\ea%x1528
\label{ex:8:x1528}
\gll {\ob}Takira me arim-o-k nain{\cb}\textsubscript{\textsc{rc}} uruf-a-m. \\
boy not grow-\textsc{pa}-3s that1 see-\textsc{pa}-1s \\
\glt`I saw the boy that has not grown.'
\z
\subsubsection{The type and position of the relative clause}
%\hypertarget{RefHeading23321935131865}
In typological terms, relative clauses in Mauwake are mostly replacive, also called internal \REF{ex:8:x1529}--\REF{ex:8:x1546}. A normal finite clause is made into a noun phrase by the addition of the demonstrative \textstyleStyleVernacularWordsItalic{nain}, and the \textstyleAcronymallcaps{\textsc{relnp}} inside the \textstyleAcronymallcaps{\textsc{rc}} replaces the \textstyleAcronymallcaps{\textsc{antnp}}. Pre-nominal \textstyleAcronymallcaps{\textsc{rc}}s, where the \textstyleAcronymallcaps{\textsc{rc}} precedes the \textstyleAcronymallcaps{\textsc{antnp}}, are cross-linguistically more typical of \textstyleAcronymallcaps{\textsc{ov}} languages than replacive ones \citep[144]{Keenan1985}, but the latter are also common in Papuan languages (\citealt[229]{Reesink1983b} and \citealt[219]{Reesink1987}, \citealt[49]{Roberts1987}, \citealt[281]{Farr1999}, \citealt[193]{Whitehead2004}). Often both pre-nominal and replacive \textstyleAcronymallcaps{\textsc{rc}}s are possible, with one or the other being the dominant type.
\ea%x1529
\label{ex:8:x1529}
\gll {\ob}Ni \textstyleEmphasizedVernacularWords{nomona} unuf-a-man nain{\cb}, aria iimeka kuisow na-e-man. \\
2p.\textsc{unm} stone call-\textsc{pa}-2p that1 alright ten one say-\textsc{pa}-2p \\
\glt`The money that you mentioned, alright you said ten (kina).'
\z
\ea%x1530
\label{ex:8:x1530}
\gll Ne {\ob}eka opora \textstyleEmphasizedVernacularWords{biiris} marew nain{\cb} wiena on-am-ik-e-mik. \\
\textsc{add} river mouth bridge no(ne) that1 3p.\textsc{gen} do-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p\\
\glt`And they themselves kept making bridges to river channels that didn't have them.'
\z
\ea%x1545
\label{ex:8:x1545}
\gll {\ob}\textstyleEmphasizedVernacularWords{Mua} kuum-e-mik nain{\cb} me wia kuuf-a-mik. \\
man burn-\textsc{pa}-1/3p that1 not 3p.\textsc{acc} see-\textsc{pa}-1/3p\\
\glt`We/They did not see the men that burned it.'
\z
\ea%x1546
\label{ex:8:x1546}
\gll Ne {\ob}\textstyleEmphasizedVernacularWords{akia} ik-e-k nain{\cb} me en-e-k. \\
\textsc{add} banana roast-\textsc{pa}-3s that1 not eat-\textsc{pa}-3s\\
\glt`And/but he did not eat the banana(s) that he roasted.'
\z
It is possible to retain the antecedent \textstyleAcronymallcaps{\textsc{np}}, in which case the relative clause is not replacive but pre-nominal. In Mauwake this is not common; it is used when the noun phrase that is relativized is given extra emphasis \REF{ex:8:x1532}.
\ea%x1532
\label{ex:8:x1532}
\gll {\ob}\textstyleEmphasizedVernacularWords{Fofa} ikiw-e-mik nain{\cb}, \textstyleEmphasizedVernacularWords{fofa} nain yo me paayar-e-m. \\
day go-\textsc{pa}-1/3p that1 day that1 1s.\textsc{unm} not know-\textsc{pa}-1s\\
\glt`The day that they went, I do not know the day/date.'
\z
Even though the \textstyleAcronymallcaps{\textsc{rc}} is usually embedded in the main clause, it can be right-dislocated. In that case the main clause contains the antecedent \textstyleAcronymallcaps{\textsc{np}}, and the relative \textstyleAcronymallcaps{\textsc{np}} is deleted from the \textstyleAcronymallcaps{\textsc{rc}}. In this way the first of the coreferential \textstyleAcronymallcaps{\textsc{np}}s is retained for easier processing. Reasons for right-dislocating a relative clause are, firstly, a long \textstyleAcronymallcaps{\textsc{rc}}, which would be hard to process sentence-medially \REF{ex:8:x1533}; secondly, focus on the \textstyleAcronymallcaps{\textsc{rc}}; or thirdly, an afterthought: something that the speaker still wants to add \REF{ex:8:x1534}.
\ea%x1533
\label{ex:8:x1533}
\gll \textstyleEmphasizedVernacularWords{Wi} \textstyleEmphasizedVernacularWords{teeria} \textstyleEmphasizedVernacularWords{papako} o asip-a-mik, {\ob}ona eka sesenar-ep wienak-e-k nain{\cb}. \\
3p.\textsc{unm} group other 3s.\textsc{unm} help-\textsc{pa}-1/3p 3s.\textsc{gen} water buy-\textsc{ss}.\textsc{seq} feed.them-\textsc{pa}-3s that1\\
\glt`Another group helped him, (those) that he had bought and given beer to.'
\z
\ea%x1534
\label{ex:8:x1534}
\gll \textstyleEmphasizedVernacularWords{I} \textstyleEmphasizedVernacularWords{mua} yiam ikur, {\ob}fikera ikiw-e-mik nain{\cb}. \\
1p.\textsc{unm} man 1p.\textsc{refl} five kunai.grass go-\textsc{pa}-1/3p that1\\
\glt`There were five of us men that went to the kunai grass (=pig-hunting).'
\z
In a very rare case the antecedent \textstyleAcronymallcaps{\textsc{np}} is deleted and the relative \textstyleAcronymallcaps{\textsc{np}} is retained in the right-dislocated \textstyleAcronymallcaps{\textsc{rc}}. What makes it possible in example \REF{ex:8:x1535} may be that the verb in the main clause can only have some food (or medicine/poison) as its object, so the object, although usually present, may also be left out.
\ea%x1535
\label{ex:8:x1535}
\gll Wi mua ... ekap-iwkin wienak-e-mik, {\ob}\textstyleEmphasizedVernacularWords{maa} nop-a-mik nain{\cb}.\\
3p.\textsc{unm} man ... come-2/3p.\textsc{ds} feed.them-\textsc{pa}-1/3p food search-\textsc{pa}-1/3p that1\\
\glt`The men {\dots} came, and we gave it to them to eat, (that is,) the food that we had searched for.'
\z
\citet[144--146]{Comrie1981} presents another typology based on how the role of the relative \textstyleAcronymallcaps{\textsc{np}} is presented in the \textstyleAcronymallcaps{\textsc{rc}}. Basically Mauwake is of the ``gap type'', which ``does not provide any overt indication of the role of the head within the relative clause''. Noun phrases get very little case marking for their clausal role, and this is reflected in the \textstyleAcronymallcaps{\textsc{rc}} too. This results in ambiguous relative clauses when both a third person subject \textstyleAcronymallcaps{\textsc{np}} and a third person object \textstyleAcronymallcaps{\textsc{np}} are present in the \textstyleAcronymallcaps{\textsc{rc}} and the context does not make the meaning clear enough \REF{ex:8:x1548}:
\ea%x1548
\label{ex:8:x1548}
\gll {\ob}Siowa kasi keraw-a-k nain{\cb} um-o-k. \\
dog cat bite-\textsc{pa}-3s that1 die-\textsc{pa}-3s\\
\glt`The dog that bit the cat died.' Or: `The dog that the cat bit died.'
\z
The ambiguity can be avoided by adding the contrastive focus marker to the subject when the object is fronted to the theme position.\footnote{For some reason this is done in relative clauses mainly with human subjects, although the contrastive focus marker can be added to non-human subjects as well.} Although this is not case marking, it can function as such, because the subject is the best candidate for contrastive focus marking \REF{ex:8:x1549} (\sectref{sec:3.12.7.2}).
\ea%x1549
\label{ex:8:x1549}
\gll {\ob}Mua ona emeria=ke aruf-a-k nain{\cb} uruf-a-m. \\
man 3s.\textsc{gen} woman=\textsc{cf} hit-\textsc{pa}-3s that1 see-\textsc{pa}-1s\\
\glt`I saw the man whose wife hit him.'
\z
\citegen[140]{Comrie1981} ``non-reduction type'' is exhibited in Mauwake by those few cases where the relative \textstyleAcronymallcaps{\textsc{np}} keeps its oblique case marking. With overt case marking on the \textstyleAcronymallcaps{\textsc{relnp}}, the \textstyleAcronymallcaps{\textsc{antnp}} has to be retained \REF{ex:8:x1544}:
\ea%x1544
\label{ex:8:x1544}
\gll [\textbf{Burir=iw} nomokowa war-e-m nain,] burir nain duduw-ar-e-k.\\
axe=\textsc{inst} tree cut-\textsc{pa}-1s that1 axe that1 blunt-\textsc{inch}-\textsc{pa}-3s\\
\glt`The axe with which I cut trees became blunt.'
\z
But when the case marking does not appear in the \textstyleAcronymallcaps{\textsc{rc}}, the \textstyleAcronymallcaps{\textsc{antnp}} is not present in the main clause either, and the \textstyleAcronymallcaps{\textsc{rc}} is a gapping-type relative clause \REF{ex:8:x1541}:
\ea%x1541
\label{ex:8:x1541}
\gll {\ob}\textbf{Burir} nomokowa war-e-m nain=ke{\cb} duduw-ar-e-k. \\
axe tree cut-\textsc{pa}-1s that1=\textsc{cf} blunt-\textsc{inch}-\textsc{pa}-3s\\
\glt`The axe with which I cut trees became blunt.'
\z
\subsubsection{The structure of the relative clause} \label{sec:8.3.1.2}
%\hypertarget{RefHeading23341935131865}
In Mauwake, the most typical relative clause is syntactically like a finite main clause, plus the distal-1 deictic \textstyleStyleVernacularWordsItalic{nain} functioning as a clause-final relative marker. It was mentioned in \sectref{sec:5.7.2} that this is one strategy for nominalizing clauses in Mauwake. The demonstrative as a possible origin of a relative marker is well attested cross-linguistically (e.g. \citealt[342]{Dixon2010b}).
The verb of the relative clause is a fully inflected finite verb. But when a non-verbal clause is a relative clause, it has no verb and is structurally like other non-verbal clauses \REF{ex:8:x1943}.
\ea%x1943
\label{ex:8:x1943}
\gll Ne {\ob}eka opora biiris marew nain{\cb} wiena on-am-ik-e-mik.\\
\textsc{add} river mouth bridge no(ne) that1 3p.\textsc{gen} do-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p\\
\glt`And they themselves kept making bridges to rivers that didn't have them.'
\z
The relative \textstyleAcronymallcaps{\textsc{np}} tends to be initial in the \textstyleAcronymallcaps{\textsc{rc}} regardless of its syntactic function. This is because it often has the pragmatic function of theme, which takes the clause-initial position. The initial position is easy to have also because a typical clause in Mauwake has so few noun phrases: in many \textstyleAcronymallcaps{\textsc{rc}}s the \textstyleAcronymallcaps{\textsc{relnp}} is the only noun phrase \REF{ex:8:x1552}.
\ea%x1552
\label{ex:8:x1552}
\gll {\ob}Moma p-or-o-mik nain{\cb} wiar sesenar-e-mik.\\
taro \textsc{bpx}-descend-\textsc{pa}-1/3p that1 3.\textsc{dat} buy-\textsc{pa}-1/3p \\
\glt`They\textsubscript{i} bought from them\textsubscript{j} the taro that they\textsubscript{j} brought down.'
\z
When a personal pronoun functions as the subject and the relative \textstyleAcronymallcaps{\textsc{np}} has some other syntactic role, the pronoun tends to keep its initial position, thus maintaining the basic constituent order. The personal pronouns are high on the topicality hierarchy \citep[166]{Givon1976}, so it is natural that they tend to keep the clause-initial and also sentence-initial position. Since the object \textstyleStyleVernacularWordsItalic{sirirowa} `pain' in \REF{ex:8:x1531} is not fronted, a temporal adverbial also keeps the place it would have in a neutral main clause.
The tense in the \textstyleAcronymallcaps{\textsc{rc}} can be past \REF{ex:8:x1552} or present \REF{ex:8:x1559}, but not future. For future meaning, the present tense form has to be used \REF{ex:8:x1531}.
\ea%x1531
\label{ex:8:x1531}
\gll {\ob}Yo ikoka sirir-owa aaw-i-yem nain{\cb}, nis pun eliw aaw-owen=i?\\
1s.\textsc{unm} later hurt-\textsc{nmz} get-\textsc{Np}-\textsc{pr}.1s that1 2p.\textsc{fc} also well get-\textsc{fu}.2p=\textsc{qm}\\
\glt`Is it all right that you will also get the pain that I (will) later get?'
\z
As was mentioned above, the antecedent \textstyleAcronymallcaps{\textsc{np}} only rarely occurs overtly. But a relative \textstyleAcronymallcaps{\textsc{np}} can also be deleted if it is generic \REF{ex:8:x1561}, or recoverable from situational \REF{ex:8:x1555} or textual context \REF{ex:8:x1559}. In \REF{ex:8:x1555}, the deleted \textstyleAcronymallcaps{\textsc{relnp}} can either be generic `what/whatever' or it may be \textstyleStyleVernacularWordsItalic{opora} `talk'; in \REF{ex:8:x1559}, the speaker is describing the process of making a fishtrap, which has already been mentioned in previous sentences.
\ea%x1561
\label{ex:8:x1561}
\gll {\ob}Iinan aasa=pa or-omi kiikir furew-a-mik nain{\cb} dabela. \\
sky canoe=\textsc{loc} descend-\textsc{ss}.\textsc{sim} first sense-\textsc{pa}-1/3p that1 cold\\
\glt`What we first sensed/felt when we descended from the plane was the cold.'
\z
\ea%x1555
\label{ex:8:x1555}
\gll {\ob}Kululu ma-e-k nain{\cb} kirip-i-yem. \\
Kululu say-\textsc{pa}-3s that1 turn/reply-\textsc{Np}-\textsc{pr}.1s\\
\glt`I reply to what Kululu said.'
\z
\ea%x1559
\label{ex:8:x1559}
\gll Aria {\ob}malol=pa ifemak-i-mik nain{\cb} aana puuk-i-mik.\\
alright deep.sea=\textsc{loc} press-\textsc{Np}-\textsc{pr}.1/3p that1 cane cut-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`Alright for those that we lower to the deep sea we cut cane.'
\z
In \REF{ex:8:x1556}, there is no other indication of the relative \textstyleAcronymallcaps{\textsc{np}} than the person suffix of the verb. The group of women referred to were mentioned as a noun phrase only near the beginning of the story, whereas the example is from close to the end:
\ea%x1556
\label{ex:8:x1556}
\gll Domora=pa or-omi nan ik-e-mik, {\ob}afa ar-e-mik nain{\cb}. \\
Domora=\textsc{loc} descend-\textsc{ss}.\textsc{sim} there be-\textsc{pa}-1/3p flying.fox become-\textsc{pa}-1/3p that1\\
\glt`They went down from Domora and were there, those (women) who became flying foxes.'
\z
In the following two examples the \textstyleAcronymallcaps{\textsc{rc}s} are identical, but they have a different relative \textstyleAcronymallcaps{\textsc{np}}. The \textstyleAcronymallcaps{\textsc{relnp}} of \REF{ex:8:x1557} is \textstyleStyleVernacularWordsItalic{mukuruna} `noise', but the \textstyleAcronymallcaps{\textsc{relnp}} of \REF{ex:8:x1558}, \textstyleStyleVernacularWordsItalic{wi} `they', only shows in the verbal suffix. The obligatory accusative pronoun in the main clause provides a key for the interpretation of \REF{ex:8:x1558}.
\ea%x1557
\label{ex:8:x1557}
\gll {\ob}Mukuruna wua-i-mik nain{\cb} ikiw-ep miim-eka. \\
noise put-\textsc{Np}-\textsc{pr}.1/3p that1 go-\textsc{ss}.\textsc{seq} hear-\textsc{imp}.2p\\
\glt`Go and listen to the noise that they are making.'
\z
\ea%x1558
\label{ex:8:x1558}
\gll {\ob}Mukuruna wua-i-mik nain{\cb} ikiw-ep wia miim-eka.\\
noise put-\textsc{Np}-\textsc{pr}.1/3p that1 go-\textsc{ss}.\textsc{seq} 3p.\textsc{acc} hear-\textsc{imp}.2p\\
\glt`Go and listen to those who are making the noise.'
\z
The antecedent in most relative clauses has a specific reference. In Mauwake, when the reference is generic, a very generic noun is chosen as the head of the relativized \textstyleAcronymallcaps{\textsc{np}} and is modified by a question word \REF{ex:8:x1562}, \REF{ex:8:x1563}. So-called free \citep[213]{Andrews2007b} or condensed \citep[359]{Dixon2010b} relative clauses, which usually replace the whole \textstyleAcronymallcaps{\textsc{np}} with a generic or interrogative pronoun, are not used in Mauwake.
\ea%x1562
\label{ex:8:x1562}
\gll {\ob}Maa mauwa maak-i-n nain{\cb} me nefa miim-i-non.\\
thing what tell-\textsc{Np}-\textsc{pr}.2s that1 not 2s.\textsc{acc} hear-\textsc{Np}-\textsc{fu}.3s\\
\glt`Whatever you tell him, he will not hear.'
\z
\ea%x1563
\label{ex:8:x1563}
\gll {\ob}Mua naareke kema enek-ar-i-ya nain{\cb} eka dabela enim-i-nok.\\
man who.\textsc{cf} liver tooth-\textsc{inch}-\textsc{Np}-\textsc{pr}.3s that1 water cold eat-\textsc{Np}-\textsc{imp}.3s\\
\glt`Whoever is thirsty must drink (cold) water.'
\z
When the antecedent is generic and human, there are two more possibilities for the \textsc{relnp}: it may be \textstyleStyleVernacularWordsItalic{mua} `man, person' \REF{ex:8:x1564} or the third person singular pronoun, plus the specifier \textstyleStyleVernacularWordsItalic{ena} \REF{ex:8:x1565} (\sectref{sec:3.12.7.1}).
\ea%x1564
\label{ex:8:x1564}
\gll {\ob}Mua ena kema enek-ar-i-ya nain{\cb} ... \\
man \textsc{spec} liver tooth-\textsc{inch}-\textsc{Np}-\textsc{pr}.3s that1\\
\glt`Whoever is thirsty{\dots}'
\z
\ea%x1565
\label{ex:8:x1565}
\gll {\ob}O ena kema enek-ar-i-ya nain{\cb} ... \\
3s.\textsc{unm} \textsc{spec} liver tooth-\textsc{inch}-\textsc{Np}-\textsc{pr}.3s that1\\
\glt`Whoever is thirsty{\dots}'
\z
Non-verbal descriptive clauses can be made into relative clauses, but it is only in the negative that they are recognizable as such. In the affirmative, they are exactly like noun phrases with a demonstrative \REF{ex:8:x1550}, and because the meanings are so similar, it can be questioned whether there is such a thing as an affirmative non-verbal descriptive \textstyleAcronymallcaps{\textsc{rc}} at all in Mauwake.
\ea%x1550
\label{ex:8:x1550}
\gll {\ob}Mua eliwa nain{\cb} kookal-i-yem.\\
man good that1 like-\textsc{Np}-\textsc{pr}.1s\\
\glt`I like the good man.' Or `I like the man that is good.'
\z
In the negative, these clauses are different from the noun phrases because the negation is placed before the non-verbal predicate \REF{ex:8:x1551}.
\ea%x1551
\label{ex:8:x1551}
\gll {\ob}Koora \textstyleEmphasizedVernacularWords{me} maneka nain{\cb} uruf-a-m. \\
house not big that1 see-\textsc{pa}-1s\\
\glt`I saw the house that is not big.'
\z
\subsubsection{Relativizable noun phrase positions}
%\hypertarget{RefHeading23361935131865}
Several \textstyleAcronymallcaps{\textsc{np}} functions can be relativized, and Mauwake conforms to \citegen{KeenanEtAl1977} Noun Phrase Accessibility Hierarchy:\footnote{As Mauwake adjectives do not have comparative forms there can be no relativization for an object of comparison, which in Keenan and Comrie's hierarchy is the hardest to relativize.} the higher up an \textstyleAcronymallcaps{\textsc{np}} is in the hierarchy, the easier it is to relativize. Noun phrases with the following functions can be relativized: subject, object, recipient, beneficiary, instrument, comitative, object of genitive, temporal and locative.
Subject \REF{ex:8:x1537} and object \REF{ex:8:x1538} are the most frequent functions of the \textstyleAcronymallcaps{\textsc{relnp}}.
\ea%x1537
\label{ex:8:x1537}
\gll {\ob}\textstyleEmphasizedVernacularWords{Mesa} \textstyleEmphasizedVernacularWords{asia} fiker(a) gone=pa ika-i-ya nain{\cb} aaw-em-ik-e-m.\\
wingbean wild kunai.grass middle=\textsc{loc} be-\textsc{Np}-\textsc{pr}.3s that1 get-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1s\\
\glt`I kept picking wild wingbeans that are/grow in the middle of the kunai grass.'
\z
\ea%x1538
\label{ex:8:x1538}
\gll Muuka, {\ob}yo \textstyleEmphasizedVernacularWords{opora} nefa maak-i-yem nain{\cb} miim-ap ook-e.\\
son 1s.\textsc{unm} talk 2s.\textsc{acc} tell-\textsc{Np}-\textsc{pr}.1s that1 hear-\textsc{ss}.\textsc{seq} follow-\textsc{imp}.2s\\
\glt`Son, listen to and follow the talk that I am telling you.'
\z
Recipient \REF{ex:8:x1539} and beneficiary \REF{ex:8:x1547} can be relativized, but in natural texts a relativized beneficiary is very infrequent.
\ea%x1539
\label{ex:8:x1539}
\gll {\ob}\textstyleEmphasizedVernacularWords{Takira} iwoka iw-e-m nain{\cb} yena aamun=ke.\\
boy yam give.him-\textsc{pa}-1s that1 1s.\textsc{gen} 1s/p.younger.sibling=\textsc{cf}\\
\glt`The boy that I gave yam to is my younger brother.'
\z
\ea%x1547
\label{ex:8:x1547}
\gll Ne {\ob}\textstyleEmphasizedVernacularWords{wi} \textstyleEmphasizedVernacularWords{emeria} \textstyleEmphasizedVernacularWords{papako} iiriw sawur wia iirar-om-a-k nain{\cb} {\dots}\\
\textsc{add} 3p.\textsc{unm} woman some earlier bad.spirit 3p.\textsc{acc} remove-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-3s that1\\
\glt`And some women, from (lit: for) whom he had removed bad spirits, {\dots}'
\z
When an instrument is relativized, the \textstyleAcronymallcaps{\textsc{relnp}} either takes the instrumental case marking \REF{ex:8:x1544} or has no case marking \REF{ex:8:x1553}:
\ea%x1553
\label{ex:8:x1553}
\gll Aria {\ob}\textstyleEmphasizedVernacularWords{maa} \textstyleEmphasizedVernacularWords{unowa} wakesim-e-mik nain{\cb} sererk-a-mik.\\
alright thing many cover-\textsc{pa}-1/3p that1 distribute-\textsc{pa}-1/3p\\
\glt`Alright they distributed the many things with which they had covered her (body).'
\z
A comitative \textstyleAcronymallcaps{np} (\sectref{sec:4.1.3}) containing a comitative postposition may be relativized \REF{ex:8:x1542}, but one formed with a comitative clitic may not.
\ea%x1542
\label{ex:8:x1542}
\gll {\ob}\textbf{Mua} \textbf{nain} \textbf{ikos} ikiw-e-mik nain{\cb} napum-ar-e-k. \\
man that1 with go-\textsc{pa}-1/3p that1 sick-\textsc{inch}-\textsc{pa}-3s\\
\glt`That man with whom I went became sick.'
\z
The object of genitive, or object of \textstyleEmphasizedWords{{possessive}} as it should be called when describing Mauwake grammar, only uses the dative pronoun (\sectref{sec:3.5.5}) when relativized \REF{ex:8:x1543}, not the unmarked (\sectref{sec:3.5.2.1}) or genitive (\sectref{sec:3.5.4}) pronoun.
\ea%x1543
\label{ex:8:x1543}
\gll {\ob}\textbf{Mua} emeria \textbf{wiar} um-o-k nain=ke{\cb} baurar-ep owowa oko ikiw-o-k.\\
man woman 3.\textsc{dat} die-\textsc{pa}-3s that1=\textsc{cf} flee-\textsc{ss}.\textsc{seq} village other go-\textsc{pa}-3s\\
\glt`The man whose wife died went away\footnote{Moving to another village after some misfortune is quite common, and the verb `flee' is used in this context but here it does not have a strongly negative connotation; this is reflected in the free translation.} to another village.'
\z
Temporal \REF{ex:8:x1554} and locative \REF{ex:8:x1560} \textstyleAcronymallcaps{\textsc{rc}}s are structurally identical to the other \textstyleAcronymallcaps{\textsc{rc}}s when the relativized temporal or locative \textstyleAcronymallcaps{\textsc{np}} does not have an adverbial function in the main clause.
\ea%x1554
\label{ex:8:x1554}
\gll {\ob}Fofa ikiw-e-mik nain{\cb} me paayar-e-m. \\
day go-\textsc{pa}-1/3p that1 not understand-\textsc{pa}-1s\\
\glt`I don't know the day that they went.'
\z
\ea%x1560
\label{ex:8:x1560}
\gll {\ob}Koora maneka wiena opora siisim-i-mik nain{\cb} uruf-a-mik. \\
house big 3p.\textsc{gen} talk write-\textsc{Np}-\textsc{pr}.1/3p that1 see-\textsc{pa}-1/3p\\
\glt`We saw the big house where they write their talk (=printshop).'
\z
When the relativized temporal \textstyleAcronymallcaps{\textsc{np}} is a temporal in the main clause as well, the relative marker can optionally be replaced by the locative deictic \textstyleStyleVernacularWordsItalic{nan} or \textstyleStyleVernacularWordsItalic{neeke} `there' \REF{ex:8:x1625}.
\ea%x1625
\label{ex:8:x1625}
\gll [Aite uroma yaki-e-k fofa nain/nan/neeke] auwa Madang ikiw-o-k.\\
1s/p.mother stomach wash-\textsc{pa}-3s day that1/there/there.\textsc{cf} 1s/p.father Madang go-\textsc{pa}-3s\\
\glt`The day that mother gave birth, father went to Madang.'
\z
When the relativized locative \textstyleAcronymallcaps{\textsc{np}} is also a constituent in the main clause, the relative marker has to be replaced by \textstyleStyleVernacularWordsItalic{nan} or \textstyleStyleVernacularWordsItalic{neeke} \REF{ex:8:x1622}.
\ea%x1622
\label{ex:8:x1622}
\gll Or-op {\ob}i koora ik-e-mik neeke{\cb} ekap-o-k.\\
descend-\textsc{ss}.\textsc{seq} 1p.\textsc{unm} house be-\textsc{pa}-1/3p there.\textsc{cf} come-\textsc{pa}-3s\\
\glt`It descended and came to the house/building where we were.'
\z
Temporal adverbial clauses, which are structurally close to relative clauses, are discussed below in \sectref{sec:8.3.3.1}, and locative adverbial clauses in \sectref{sec:8.3.3.2}.
\subsubsection{Non-restrictive relative clauses} \label{sec:8.3.1.4}
%\hypertarget{RefHeading23381935131865}
Non-restrictive, or appositional, relative clauses are structurally exactly like restrictive relative clauses, but their function is different. They do not delimit the reference of the antecedent \textstyleAcronymallcaps{\textsc{np}}. Instead, they give new information about it. Functionally they are like a coordinate clause added to the main clause.
Because of the structural and even intonational similarity, it is sometimes difficult to tell if a particular \textstyleAcronymallcaps{\textsc{rc}} is restrictive or non-restrictive. When the \textstyleAcronymallcaps{\textsc{antnp}} is a proper noun or when it includes a first or second person singular pronoun, the \textstyleAcronymallcaps{\textsc{rc}} is usually non-restrictive \REF{ex:8:x1567}, \REF{ex:8:x1568}:
\ea%x1567
\label{ex:8:x1567}
\gll Bang=ke ekap-o-k, {\ob}Ponkila aaw-o-k nain{\cb}.\\
Bang=\textsc{cf} come-\textsc{pa}-3s Ponkila get-\textsc{pa}-3s that1\\
\glt`Bang came, (he) who married Ponkila.'
\z
\ea%x1568
\label{ex:8:x1568}
\gll Yo nena owowa {\ob}moma marew nain{\cb} miatin-i-yem. \\
1s.\textsc{unm} 2s.\textsc{gen} village taro no(ne) that1 dislike-\textsc{Np}-\textsc{pr}.1s\\
\glt`I don't like your village, which doesn't have taro.'
\z
The proximate demonstrative \textstyleStyleVernacularWordsItalic{fain} `this' can also function as a relative marker in the non-restrictive \textstyleAcronymallcaps{\textsc{rc}}s but not in restrictive ones \REF{ex:8:x1536}:
\ea%x1536
\label{ex:8:x1536}
\gll Nomokowa unowa fan-e-mik, {\ob}Simbine ekap-omak-e-mik fain{\cb}.\\
2s/p.brother many here-\textsc{pa}-1/3p Simbine come-\textsc{distr}/\textsc{pl}-\textsc{pa}-1/3p this\\
\glt`Your many (clan) brothers are here, these Simbine people who came.'
\z
When the \textstyleAcronymallcaps{\textsc{antnp}} is a pronoun other than first or second singular, the \textstyleAcronymallcaps{\textsc{rc}} may be either restrictive or non-restrictive \REF{ex:8:x1570}.
\ea%x1570
\label{ex:8:x1570}
\gll I mua yiam ikur, {\ob}fikera ikiw-e-mik nain{\cb}. \\
1p.\textsc{unm} man 1p.\textsc{refl} five kunai.grass go-\textsc{pa}-1/3p that1\\
\glt`There were five of us men who went to the kunai grass (= pig-hunting).' Or: `We were five men, who went pig-hunting.'
\z
\subsection{Complement clauses and other complementation strategies} \label{sec:8.3.2}
%\hypertarget{RefHeading23401935131865}
The prototypical function of a complement clause is as a subject or object in a main clause. In Mauwake, a complement clause proper functions as an object of a complement-taking verb (\textstyleAcronymallcaps{\textsc{ctv}}), and occasionally as a subject in a non-verbal clause. Structurally it is a Type 2 nominalized clause: a finite clause that has the distal-1 demonstrative \textstyleStyleVernacularWordsItalic{nain} `that' occurring as a nominaliser clause-finally (\sectref{sec:5.7.2}). The complement clause precedes the complement-taking verb. The complement clause differs from the relative clause in that none of the noun phrases inside it is an \textstyleAcronymallcaps{\textsc{antnp}} or a \textstyleAcronymallcaps{\textsc{relnp}}.
The division of complements into different types, ``Fact, Activity and Potential'', that \citet[371]{Dixon2010b} provides, is crucial for the use of the different complementation strategies in Mauwake. A complement clause is normally used when a \textstyleAcronymallcaps{\textsc{ctv}} needs a fact-type object complement.
Besides the regular complement clause described above, Mauwake has other complementation strategies. The indirect speech clauses are ordinary sentences embedded in the utterance clause (\sectref{sec:8.3.2.1.2}). Medial clauses are used as the main complementation strategy for activity-type complements with perception verbs (\sectref{sec:8.3.2.2}). Clauses with a nominalized verb are used for potential-type complements with various \textstyleAcronymallcaps{\textsc{ctv}}s. The regular complement clause and the clause with a nominalized verb may occur as a subject of a clause (\sectref{sec:8.3.2.4}).
Since one \textstyleAcronymallcaps{\textsc{ctv}} can take more than one complementation strategy, the main grouping below is done according to the \textstyleAcronymallcaps{\textsc{ctv}}s.
\subsubsection{Complements of utterance verbs} \label{sec:8.3.2.1}
%\hypertarget{RefHeading23421935131865}
Some utterance verbs (\sectref{sec:3.8.4.4.6}) are also used for thinking, so speech and thought are discussed as one group.
The status of direct quote clauses (\sectref{sec:8.3.2.1.1}) as complement clauses is questionable, but they are discussed here because of their co-occurrence with the utterance verbs and their similarity with the indirect quotes (\sectref{sec:8.3.2.1.2}), which are complement clauses.
The most important of the utterance verbs is \textstyleStyleVernacularWordsItalic{na}- `say, think'. It is used as the utterance verb for indirect quote complements, which in turn have grammaticalized, together with the same-subject sequential form of the verb, as desiderative (\sectref{sec:8.3.2.1.3}) and purpose clauses (\sectref{sec:8.3.2.1.4}) and the conative construction (\sectref{sec:8.3.2.1.5}).
\paragraph[Direct speech]{Direct speech} \label{sec:8.3.2.1.1}
%\hypertarget{RefHeading23441935131865}
It seems to be a universal feature of direct quote clauses that they behave independently of their matrix clauses. If they are considered complement clauses of utterance verbs, their independence sets them apart from all the other complement clauses \citep[303]{Munro1982}. \citet[398]{Dixon2010a} maintains that direct speech quotes are not any kind of complementation.
A direct quote may be a whole discourse on its own, not just a clause within a sentence.
It is rather typical in Papuan languages to have a strict quote formula both before and after a quotation, or at least before it (\citealt[120]{Franklin1971}, \citealt[1]{Davies1981}, \citealt[12]{Roberts1987}, \citealt{Farr1999}, \citealt[128]{Hepner2002}). It is also common that either there is no separate structure for indirect speech \citep[2]{Davies1981} or that direct and indirect speech are so similar that they are often hard to distinguish from each other \citep[14]{Roberts1987}.
In the use of quote formulas, Mauwake is much freer than Papuan languages in general. A direct quotation in Mauwake is often preceded or followed by one of the utterance verbs. The verbs \textstyleStyleVernacularWordsItalic{na}- `say/think' and \textstyleStyleVernacularWordsItalic{naak}- `say/tell' are almost exclusively used after quotes. Enclosing a quote between two utterance verbs is not frequent \REF{ex:8:x1571}:
\ea%x1571
\label{ex:8:x1571}
\gll Ne ona mua pun \textbf{ma-e-k}, ``Eka maneka nain=ke iwa-mi ifakim-o-k,'' \textstyleEmphasizedVernacularWords{na-e-k}.\\
\textsc{add} 3s.\textsc{gen} man also say-\textsc{pa}-3s river big that1=\textsc{cf} come-\textsc{ss}.\textsc{sim} kill-\textsc{pa}-3s say-\textsc{pa}-3s\\
\glt`Her husband also said, ``The big river came and killed her,'' he said.'
\z
Most commonly, only a speech verb precedes the quote \REF{ex:8:x1578}, \REF{ex:8:x1579}:
\ea%x1578
\label{ex:8:x1578}
\gll Panewowa=ke \textbf{ma-e-k}, ``Yo nia maak-emkun opaimika efa fien-a-man.''\\
old=\textsc{cf} say-\textsc{pa}-3s 1s.\textsc{unm} 2p.\textsc{acc} tell-2/3p.\textsc{ds} talk 1s.\textsc{acc} disobey-\textsc{pa}-2p\\
\glt`The old (woman) said, ``When I told you, you disobeyed me.'' '
\z
\ea%x1579
\label{ex:8:x1579}
\gll Iiw-ep wiipa muuka nain wia \textbf{maak-e-k}, ``Auwa maa p-ikiw-om-aka.\\
dish.out-\textsc{ss}.\textsc{seq} daughter son that1 3p.\textsc{acc} tell-\textsc{pa}-3s 1s/p.father food \textsc{bpx}-go-\textsc{ben}-\textsc{bnfy}2.\textsc{imp}.2p\\
\glt`She dished out (the food) and told the children, ``Take the food to father.''{}'
\z
A single utterance verb \REF{ex:8:x1580} or a whole quote-closing clause \REF{ex:8:x1583} may follow the quote. A quote-closing clause has to be used when the quotation consists of several sentences.
\ea%x1580
\label{ex:8:x1580}
\gll ``No bom fain=iw mera kuum-e,'' \textbf{naak-e-mik}. \\
2s.\textsc{unm} bomb this=\textsc{inst} fish burn-\textsc{imp}.2s tell-\textsc{pa}-1/3p\\
\glt` ``Blast fish with this bomb,'' they told him.'
\z
\ea%x1583
\label{ex:8:x1583}
\gll ``I muuka marew a, wiipa marew a,'' \textbf{naap} \textbf{wia} \textstyleEmphasizedVernacularWords{maak-e-k}.\\
1p.\textsc{unm} son no(ne) ah daughter no(ne) ah thus 3p.\textsc{acc} tell-\textsc{pa}-3s\\
\glt` ``We have no son, we have no daughter,'' he told them like that.'
\z
In narratives where there are several exchanges between the participants, it is possible to leave out the utterance verb \REF{ex:8:x1581}, \REF{ex:8:x1582} and even the \textstyleAcronymallcaps{\textsc{np}} referring to the speaker of the utterance \REF{ex:8:x1581}, if the speaker is clear enough from the context. A good speaker creates variety in the text by utilizing all these different possibilities.
\ea%x1581
\label{ex:8:x1581}
\gll Ne onak=ke \textstyleEmphasizedVernacularWords{{\O}}, ``A, ifera feeke un-eka.'' Ne wi maak-e-mik, ``Wia, i oro-or-op un-i-yan.'' ``A, neeke-r=iw un-eka.'' ``Weetak, i oro-ora-i-yan.''\\
\textsc{add} 3s/p.mother {\O} Ah, salt.water here.\textsc{cf} fetch(water)-\textsc{imp}.2p \textsc{add} 3p.\textsc{unm} tell-\textsc{pa}-1/3p No 1p.\textsc{unm} \textsc{rdp}-descend-\textsc{ss}.\textsc{seq} fetch-\textsc{Np}-\textsc{fu}.1p Ah there.\textsc{cf}-{\O}=\textsc{lim} fetch-\textsc{imp}.2p no 1p.\textsc{unm} \textsc{rdp}-descend-\textsc{Np}-\textsc{fu}.1p\\
\glt`And their mother (said), ``Ah, fetch the sea water (from) here.'' But they told her, ``No, we'll go down (to the deep sea) and fetch it.'' ``Ah, fetch it right there.'' ``No, we'll go down a long way.'' '
\z
\ea%x1582
\label{ex:8:x1582}
\gll ``Mauwa ar-e-n, amia=iya nenar-e-mik=i?'' Sarak=ke \textstyleEmphasizedVernacularWords{{\O}}.\hspace{-1mm}\\
what become-\textsc{pa}-2s bow=\textsc{com} shoot.you-\textsc{pa}-1/3p=\textsc{qm} Sarak=\textsc{cf} {\O}\\
\glt```What happened to you, did they shoot you with a gun?'' Sarak (asked).'
\z
\paragraph[Indirect speech]{Indirect speech} \label{sec:8.3.2.1.2}
%\hypertarget{RefHeading23461935131865}
Indirect speech quotes, which report speech or thought, are objects of speech verbs.
Most indirect quotes in Mauwake are syntactically identical to direct quotes. There is an intonational difference: the indirect quote is part of the intonation contour of the main clause, rather than having a contour of its own as a direct quote has. The quote is almost always followed by the utterance verb \textstyleStyleVernacularWordsItalic{na}- `say, think' \REF{ex:8:x1585}; but it is also possible for the verb \textstyleStyleVernacularWordsItalic{ma}- `say' to precede it, in which case both the utterance verb and the quote have their own intonation contour \REF{ex:8:x1587}.\footnote{In Amele, the absence of the speech verb before the quote is the main criterion for indirect speech \citep[14]{Roberts1987}. In Mauwake, this cannot be used as a criterion, as the occurrence of speech verbs with direct quotes varies so much.} An indirect quote is never enclosed between two utterance verbs.
\ea%x1585
\label{ex:8:x1585}
\gll Aria, Kalina, {\ob}Amerika ekap-e-mik{\cb} na-i-mik. \\
alright Kalina America come-\textsc{pa}-1/3p say-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`Alright, Kalina, they say that the Americans have come.'
\z
\ea%x1587
\label{ex:8:x1587}
\gll Ma-e-m, {\ob}nena owowa=pa ik-o-n{\cb}.\\
say-\textsc{pa}-1s 2s.\textsc{gen} village=\textsc{loc} be-\textsc{pa}-2s\\
\glt`I said (to her\textsubscript{i}) that you\textsubscript{j} are in your own village.'
\z
As direct quotes behave independently of their matrix clauses, they often have a separate deictic centre. But indirect quotes vary in this respect. Deictic elements, which get part or all of their interpretation from the situational context, are often the same in indirect quotes as they would be in direct quotes \REF{ex:8:x1584}, \REF{ex:8:x1280}:
\ea%x1584
\label{ex:8:x1584}
\gll Aite=ke {\ob}manina yook-e{\cb} na-eya o ook-e.\\
1s/p.mother=\textsc{cf} garden follow.me-\textsc{imp}.2s say-2/3s.\textsc{ds} 3s.\textsc{unm} follow.her-\textsc{imp}.2s\\
\glt`When mother tells you to follow her to the garden, follow her.'
\z
\ea%x1280
\label{ex:8:x1280}
\gll Ni Krais {\ob}yena teeria efar ik-eka{\cb} na-ep nia far-eya ona teeria wiar ik-e-man.\\
2p.\textsc{unm} Christ 1s.\textsc{gen} family 1s.\textsc{dat} be-\textsc{imp}.2p say-\textsc{ss}.\textsc{seq} 2p.\textsc{acc} call-2/3s.\textsc{ds} 3s.\textsc{gen} family 3.\textsc{dat} be-\textsc{pa}-2p\\
\glt`Christ called you to be his family and (now) you are his family.'
\z
But the deictic centre may also shift partly or completely towards that of the matrix clause. When this happens, the pronouns shift most readily, followed by the adverbs. In \REF{ex:8:x1586} a second person pronoun has replaced the proper name or third person pronoun that would have been used in a direct quote.
\ea%x1586
\label{ex:8:x1586}
\gll Sarak oo, Amerika ekap-ep Ulingan nan ik-e-mik, {\ob}\textstyleEmphasizedVernacularWords{nefa} ikum-i-mik{\cb} na-i-mik oo. \\
Sarak \textsc{intj} America come-\textsc{ss}.\textsc{seq} Ulingan there be-\textsc{pa}-1/3p 2s.\textsc{acc} wonder.about-\textsc{Np}-\textsc{pr}.1/3p say-\textsc{Np}-\textsc{pr}.1/3p \textsc{intj}\\
\glt`Sarak! The Americans have come and are in Ulingan and they say that they are wondering where you are!'
\z
When reported by the addressee of the example clause \REF{ex:8:x1281}, only the pronoun in the reported clause \REF{ex:8:x1282} is different:
\ea%x1281
\label{ex:8:x1281}
\gll No owowa ikiw-ep buk nain sesek-om-e.\\
2s.\textsc{unm} village go-\textsc{ss}.\textsc{seq} book that1 send-\textsc{ben}-\textsc{bnfy}1.\textsc{imp}.2s\\
\glt`When you go to the village, send the book to me.'
\z
\ea%x1282
\label{ex:8:x1282}
\gll {\ob}\textstyleEmphasizedVernacularWords{Yo} owowa \textstyleEmphasizedVernacularWords{ikiw-ep} buk nain sesek-om-e{\cb} efa na-e-k. \\
1s.\textsc{unm} village go-\textsc{ss}.\textsc{seq} book that1 send-\textsc{ben}-\textsc{bnfy}1.\textsc{imp}.2s 1s.\textsc{acc} say-\textsc{pa}-3s\\
\glt`He told me to send that book to him (lit: me) when I would go to the village.'
\z
The verbs are most resistant to deictic shift. In \REF{ex:8:x1264}, even though the verb root changes, it still retains the tense and person marking of the direct quote \REF{ex:8:x1265}. Both the temporal adverb and the pronoun are shifted to reflect the deictic centre of the matrix clause.
\ea%x1264
\label{ex:8:x1264}
\gll Uurika nefar ikiw-i-nen. \\
tomorrow 2s.\textsc{dat} go-\textsc{Np}-\textsc{fu}.1s\\
\glt`Tomorrow I'll come (lit: go) to you.'
\z
\ea%x1265
\label{ex:8:x1265}
\gll {\ob}\textstyleEmphasizedVernacularWords{Ikoka} \textstyleEmphasizedVernacularWords{efar} \textstyleEmphasizedVernacularWords{ekap-}i-nen{\cb} na-e-k na weetak.\\
Later(today) 1s.\textsc{dat} come-\textsc{Np}-\textsc{fu}.1s say-\textsc{pa}-3s but no\\
\glt`He said that he would come to me today, but he hasn't.'
\z
In \REF{ex:8:x1266} below, the person suffix is also changed from that in \REF{ex:8:x1267} to reflect the situation of the new speech act participants.
\ea%x1267
\label{ex:8:x1267}
\gll Ona owowa=pa ik-ua.\\
3s.\textsc{gen} village=\textsc{loc} be-\textsc{pa}.3s\\
\glt`She is in her own village.'
\z
\ea%x1266
\label{ex:8:x1266}
\gll Ma-e-m, {\ob}\textstyleEmphasizedVernacularWords{nena} owowa=pa \textstyleEmphasizedVernacularWords{ik-o-n}{\cb}. \\
say-\textsc{pa}-1s 2s.\textsc{gen} village=\textsc{loc} be-\textsc{pa}-2s\\
\glt`I said (to her) that you are in your own village.'
\z
The deictic shift would need more study to ascertain if there are specific rules governing this variation in indirect quotes.
When the verb \textstyleStyleVernacularWordsItalic{na}- `say, think' indicates thinking, the complement clause is usually an indirect quote rather than a direct one \REF{ex:8:x1588}, \REF{ex:8:x1589}.
\ea%x1588
\label{ex:8:x1588}
\gll {\ob}Muuka ifera me enim-i-non{\cb} na-ep me uruf-a-m.\\
boy salt.water not drink-\textsc{Np}-\textsc{fu}.3s think-\textsc{ss}.\textsc{seq} not look-\textsc{pa}-1s\\
\glt`Thinking that the boy wouldn't drown I didn't watch him.'
\z
\ea%x1589
\label{ex:8:x1589}
\gll Mua pepena=ke {\ob}menat=ke ek-i-ya{\cb} na-ep menat ora-i-nan.\\
man inexperienced=\textsc{cf} tide=\textsc{cf} go-\textsc{Np}-\textsc{pr}.3s think-\textsc{ss}.\textsc{seq} tide descend-\textsc{Np}-\textsc{fu}.2s\\
\glt`An inexperienced man will think that the tide is going down and will go to fish at low tide.'
\z
Indirect non-polar questions are similar to the corresponding direct questions apart from possible adjustments to deictic elements \REF{ex:8:x1592}, \REF{ex:8:x1590}.
\ea%x1592
\label{ex:8:x1592}
\gll {\ob}Wi uf-ow(a) epa kaaneke ik-ua{\cb} na-e-k.\\
3p.\textsc{unm} dance-\textsc{nmz} place where.\textsc{cf} be-\textsc{pa}.3s say-\textsc{pa}-3s\\
\glt`He asked where their dancing place was.'
\z
\ea%x1590
\label{ex:8:x1590}
\gll {\ob}O ikoka sesa kamenap aaw-i-non{\cb} na-e-k.\\
3s.\textsc{unm} later price what.like get-\textsc{Np}-\textsc{fu}.3s say-\textsc{pa}-3s\\
\glt`He asked what kind of wages he would get later.'
\z
Polar questions, when indirect, have to be alternative questions \REF{ex:8:x1591}. The verb \textstyleStyleVernacularWordsItalic{naep} may be deleted when the indirect question is sentence-final \REF{ex:8:x1593}.
\ea%x1591
\label{ex:8:x1591}
\gll {\ob}Beel-al-i-non=i kamenion{\cb} na-ep uruf-am-ik-ua.\\
rotten-\textsc{inch}-\textsc{Np}-\textsc{fu}.3s=\textsc{qm} or.what think-\textsc{ss}.\textsc{seq} see-\textsc{ss}.\textsc{sim}-be-\textsc{pa}.3s\\
\glt`He was watching (thinking) whether it would rot or what would happen.'
\z
\ea%x1593
\label{ex:8:x1593}
\gll Wi iwera iinan=pa ik-ok iwer(a) popoka wafur-am-ik-e-mik, {\ob}eka saanar-e-k=i eewuar{\cb} {\O}. \\
3p.\textsc{unm} coconut top=\textsc{loc} be-\textsc{ss} coconut unripe throw-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p water dry-\textsc{pa}-3s=\textsc{qm} not.yet\\
\glt`They were at the top of the coconut palm and threw unripe coconuts (thinking) whether the water had dried or not yet.'
\z
\paragraph[Desiderative clauses]{Desiderative clauses}\label{sec:8.3.2.1.3}
%\hypertarget{RefHeading23481935131865}
It is very common in Papuan languages that an indirect quote construction with the intended action verb in future tense, imperative or irrealis form expresses a want/wish\footnote{In Mauwake, the verb \textit{kookal}- `like, love, desire' is mostly used with an \textsc{np} object, but it can take a clausal complement as well. It does not indicate intention or purpose. The complement is either type of nominalized clause (\sectref{sec:5.7}).}, desire or intention to do something (\citealt[254--259]{Reesink1987}, \citealt[157]{Foley1986}, \citealt[112]{Hardin2002}, \citealt[76--77]{Hepner2002}).
In Mauwake, the future, imperative, counterfactual and nominalized forms of the main verb are possible in the complement clause. In desiderative clauses, the verb \textstyleStyleVernacularWordsItalic{na-} `say/think' is always in the medial same-subject sequential form \textstyleStyleVernacularWordsItalic{naep}; in purpose clauses, other forms are possible as well. Historically, there probably always used to be a clause with a finite verb following the clause expressing intention or desire \REF{ex:8:x367}; synchronically the finite clause is often missing \REF{ex:8:x368}, especially when the verb would be the same as in the complement.
\ea%x367
\label{ex:8:x367}
\gll Niena {\ob}maa enim-u{\cb} na-ep iiw-eka. \\
2p.\textsc{gen} food eat-\textsc{imp}.1d say-\textsc{ss}.\textsc{seq} dish.out-\textsc{imp}.2p\\
\glt`If you want/intend to eat food, dish it out (yourselves).'
\z
\ea%x368
\label{ex:8:x368}
\gll Yo {\ob}opora gelemuta=ko ma-i-nen{\cb} na-ep.\\
1s.\textsc{unm} talk little=\textsc{nf} say-\textsc{nps}-\textsc{fu}.1s say-\textsc{ss}.\textsc{seq}\\
\glt`I want to tell a little story.' Or: `I'm going to tell a little story.'
\z
The main verb in the complement clause is either marked for first person\footnote{Purpose clauses may use other person forms as well (\sectref{sec:8.3.2.1.4}).} or is nominalized. Mauwake uses the future \REF{ex:8:x368} or imperative form \REF{ex:8:x369} of the main verb for intention or a clear/certain wish, and the counterfactual form for a wish that has less potential to be realized. The latter is also the most polite form to use, if the wish indicates a request \REF{ex:8:x370}.
\ea%x369
\label{ex:8:x369}
\gll {\ob}Haussik p-ek-u{\cb} na-ep miiw-aasa nop-a-mik.\\
aidpost \textsc{\textsc{bp}x}-go-\textsc{imp}.1d say-\textsc{ss}.\textsc{seq} land-canoe search-\textsc{pa}-1/3p\\
\glt`We/they wanted to take her to the aidpost and looked for a vehicle.'
\z
\ea%x370
\label{ex:8:x370}
\gll {\ob}Yo=ko wia uruf-ek-a-m{\cb} na-ep.\\
1s.\textsc{unm}=\textsc{nf} 3p.\textsc{acc} see-\textsc{cntf}-\textsc{pa}-1s say-\textsc{ss}.\textsc{seq}\\
\glt`I would like to see them.'
\z
The nominalized form is mostly used in complement clauses that can also be interpreted as purpose clauses. In ``pure'' desiderative clauses it is practical to use the nominalized form, especially if the first person marking in the verb might make it harder to process the meaning \REF{ex:8:x1610}:
\ea%x1610
\label{ex:8:x1610}
\gll Ne {\ob}o uruf-owa{\cb} ne {\ob}maa en-owa asip-owa{\cb} na-ep=na eliw asip-uk.\\
\textsc{add} 3s.\textsc{unm} see-\textsc{nmz} \textsc{add} food eat-\textsc{nmz} help-\textsc{nmz} say-\textsc{ss}.\textsc{seq}=\textsc{tp} well help-\textsc{imp}.3p\\
\glt`And if they want to see him and help him with food, let them help him.'
\z
\paragraph[Purpose clauses]{Purpose clauses} \label{sec:8.3.2.1.4}
%\hypertarget{RefHeading23501935131865}
Purpose is both conceptually close and structurally similar to the desiderative, and in Mauwake many of the desiderative clauses can be interpreted as purpose clauses. This is particularly so when the main verb is in the nominalized form \REF{ex:8:x371}, \REF{ex:8:x345}. But a truly desiderative clause, even with an action nominal, is never right-dislocated, whereas a purpose clause \REF{ex:8:x372} often is. The nominalized form in the main verb is common:
\ea%x371
\label{ex:8:x371}
\gll {\ob}Weniwa=pa en-owa{\cb} na-ep uuw-i-mik. \\
famine=\textsc{loc} eat-\textsc{nmz} say-\textsc{ss}.\textsc{seq} work-\textsc{Np}-\textsc{pr}.1/3p\\
\glt`We work in order to (be able to) eat during the time of hunger.'
\z
\ea%x345
\label{ex:8:x345}
\gll {\ob}Wi Amerika wiam=iya irak-owa{\cb} na-ep ikiw-e-mik.\\
3p.\textsc{unm} America 3p=\textsc{com} fight-\textsc{nmz} say/think-\textsc{ss}.\textsc{seq} go-\textsc{pa}-1/3p\\
\glt`They went to fight with the Americans.'
\z
\ea%x372
\label{ex:8:x372}
\gll Ona siowa ikos manina ikiw-e-mik, {\ob}pika on-owa{\cb} na-ep.\\
3s.\textsc{gen} dog with garden go-\textsc{pa}-1/3p fence make-\textsc{nmz} say-\textsc{ss}.\textsc{seq}\\
\glt`He went to the garden with his dog, in order to make a fence.'
\z
Future and imperative forms are also used in the purpose clause. When the subject in the purpose clause is the same as the subject of the utterance verb and the main clause, the first person future form is used for singular \REF{ex:8:x1614}, \REF{ex:8:x1616} and first person dual imperative for plural \REF{ex:8:x1620}.
\ea%x1614
\label{ex:8:x1614}
\gll {\ob}Nain nefa maak-i-nen{\cb} na-ep yo ep-a-m.\\
that1 2s.\textsc{acc} tell-\textsc{Np}-\textsc{fu}.1s say-\textsc{ss}.\textsc{seq} 1s.\textsc{unm} come-\textsc{pa}-1s\\
\glt`I came to tell you that.'
\z
\ea%x1616
\label{ex:8:x1616}
\gll No {\ob}owora sesenar-i-nen{\cb} na-ep Kainantu fofa ikiw-ep neeke aaw-i-nan.\\
2s.\textsc{unm} betelnut buy-\textsc{Np}-\textsc{fu}.1s say-\textsc{ss}.\textsc{seq} Kainantu market go-\textsc{ss}.\textsc{seq} there.\textsc{cf} get-\textsc{nps}-\textsc{fu}.2s\\
\glt`To buy betelnut you will (need to) go to Kainantu market and get it \textstyleEmphasizedWords{{there}}.'
\z
\ea%x1620
\label{ex:8:x1620}
\gll Ne {\ob}haussik p-ek-u{\cb} na-ep miiw-aasa nop-a-mik.\\
\textsc{add} aidpost \textsc{bpx}-go-\textsc{imp}.1d say-\textsc{ss}.\textsc{seq} land-canoe search-\textsc{pa}-1/3p\\
\glt`And they searched for a truck (in order) to take him to the aidpost.'
\z
When the subject of the verb in the main clause differs from that of the purpose clause, the verb inside the purpose clause has to be in the imperative \REF{ex:8:x1062}--\REF{ex:8:x1627}. The whole purpose clause is structurally like a direct quote of the ``inner speech'' verb \textstyleStyleVernacularWordsItalic{naep}, so there is no deictic shift of the kind that may take place in indirect quotes.
\ea%x1062
\label{ex:8:x1062}
\gll {\ob}Me yiar-uk{\cb} na-ep koka=pa ik-e-mik. \\
not shoot.us-\textsc{imp}.3p say-\textsc{ss}.\textsc{seq} jungle=\textsc{loc} be-\textsc{pa}-1/3p\\
\glt`We stayed in the jungle so that they would not shoot us.'
\z
\ea%x346
\label{ex:8:x346}
\gll {\ob}Auwa=ke o=ko amukar-inok{\cb} na-ep maa naap sirar-em-ik-e-mik.\\
1s/p.father=\textsc{cf} 3s.\textsc{unm}=\textsc{nf} scold-\textsc{imp}.3s say-\textsc{ss}.\textsc{seq} thing thus make-\textsc{ss}.\textsc{sim}-be-\textsc{pa}-1/3p\\
\glt`They kept doing things like that so that father would scold \textstyleEmphasizedWords{him} (and not them).'
\z
\ea%x1615
\label{ex:8:x1615}
\gll Nain {\ob}ni amis-ar-eka{\cb} na-ep feenap on-i-yem.\\
that1 2p.\textsc{unm} knowledge-\textsc{inch}-\textsc{imp}.2p say-\textsc{ss}.\textsc{seq} like.this do-\textsc{Np}-\textsc{pr}.1s\\
\glt`But I am doing this so that you would know.'
\z
\ea%x1617
\label{ex:8:x1617}
\gll {\ob}Efa asip-e{\cb} na-ep ekap-e-m. \\
1s.\textsc{acc} help-\textsc{imp}.2s say-\textsc{ss}.\textsc{seq} come-\textsc{pa}-1s\\
\glt`I came so that you would help me.'
\z
\ea%x1618
\label{ex:8:x1618}
\gll {\ob}Feenap nokar-eka{\cb} na-ep yia sesek-a-k. \\
like.this ask-\textsc{imp}.2p say-\textsc{ss}.\textsc{seq} 1p.\textsc{acc} send-\textsc{pa}-3s\\
\glt`He sent us to ask (you) like this.'
\z
\ea%x1619
\label{ex:8:x1619}
\gll {\ob}Yo efa miim-eka{\cb} na-ep wapena wu-ami ma-e-k,{\dots}\\
1s.\textsc{unm} 1s.\textsc{acc} hear-\textsc{imp}.2p say-\textsc{ss}.\textsc{seq} hand put-\textsc{ss}.\textsc{sim} say-\textsc{pa}-3s\\
\glt`He raised his hand for them to listen to him and said, {\dots}'
\z
\ea%x1627
\label{ex:8:x1627}
\gll Ne wi popor-ar-urum-ep ik-ok ifana muutiw wu-am-ika-i-kuan, {\ob}mua unuma wia miim-u{\cb} na-ep.\\
\textsc{add} 3p.\textsc{unm} silent-\textsc{inch}-\textsc{distr}/\textsc{a}-\textsc{ss}.\textsc{seq} be-\textsc{ss} ear only put-\textsc{ss}.\textsc{sim}-be-\textsc{Np}-\textsc{fu}.3p man name 3p.\textsc{acc} hear-\textsc{imp}.1d say-\textsc{ss}.\textsc{seq}\\
\glt`And they all will be quiet and listen carefully in order to hear the men's names.'
\z
There is no raising of negation from the subordinate to the main clause \REF{ex:8:x1623}.
\ea%x1623
\label{ex:8:x1623}
\gll {\ob}Yo me pina=pa nia wu-ek-a-m{\cb} na-ep ma-i-yem. \\
1s.\textsc{unm} not guilt=\textsc{loc} 2p.\textsc{acc} put-\textsc{cntf}-\textsc{pa}-1s say-\textsc{ss}.\textsc{seq} say-\textsc{Np}-\textsc{pr}.1s\\
\glt`I am not saying (this) to put guilt on you.' (=I am saying this, but not in order to put guilt on you.)
\z
A purpose clause does not always have the auxiliary \textstyleStyleVernacularWordsItalic{naep}. A clause with just a nominalized verb is used especially with the directional verbs \REF{ex:8:x1659}, \REF{ex:8:x1658}:
\ea%x1659
\label{ex:8:x1659}
\gll {\ob}Yo yena emeria aaw-owa{\cb} urup-e-m.\\
1s.\textsc{unm} 1s.\textsc{gen} woman take-\textsc{nmz} ascend-\textsc{pa}-1s\\
\glt`I came up to take my wife.'
\z
\ea%x1658
\label{ex:8:x1658}
\gll Bogia ikiw-e-mik, {\ob}opaimika aakun-owa{\cb}. \\
Bogia go-\textsc{pa}-1/3p talk talk-\textsc{nmz}\\
\glt`We went to Bogia to talk.'
\z
A clause with a nominalized verb plus a clause-final distal-1 demonstrative \textstyleStyleVernacularWordsItalic{nain} `that' is also possible, but less common \REF{ex:8:x1633}, \REF{ex:8:x1634}. I have not observed a functional difference between the different purpose structures.
\ea%x1633
\label{ex:8:x1633}
\gll Tunde=pa {\ob}maa muutitik uruf-owa nain{\cb} soomar-e-mik.\\
Tuesday=\textsc{loc} thing all.kinds see-\textsc{nmz} that1 walk-\textsc{pa}-1/3p\\
\glt`On Tuesday we walked to see all kinds of things.'
\z
\ea%x1634
\label{ex:8:x1634}
\gll Ifemak-ep nomona iinan=pa wua-i-nan, {\ob}ikoka ifera me p-ikiw-owa nain{\cb}. \\
press-\textsc{ss}.\textsc{seq} stone on.top=\textsc{loc} put-\textsc{Np}-\textsc{fu}.2s later sea not \textsc{bpx}-go-\textsc{nmz} that1\\
\glt`You press it down and put stones on top (or: put it on top of stones/corals) so that the sea would not later take it away.'
\z
\paragraph[Conative clauses: `try' ]{Conative clauses: `try'} \label{sec:8.3.2.1.5}
%\hypertarget{RefHeading23521935131865}
Instead of using a verbal construction with the verb `see' for conative modality -- expressing the attempt to do something -- which \citet[152]{Foley1986} claims to be almost universal for Papuan languages, Mauwake makes use of a structure where the desiderative is followed by the verb \textstyleStyleVernacularWordsItalic{on-} `do' as the verb in its reference clause \REF{ex:8:x373}. Usan uses an identical construction for the same purpose \citep[258]{Reesink1987}.
\ea%x373
\label{ex:8:x373}
\gll {\ob}Mukuna umuk-u na-ep on-a-mik{\cb}=na me pepek.\\
fire extinguish-\textsc{imp}.1d say-\textsc{ss}.\textsc{seq} do-\textsc{pa}-1/3p=\textsc{tp} not enough\\
\glt`We tried to extinguish the fire but were not able to.'
\z
When this structure is used, it is implied that somehow or other the effort fails \REF{ex:8:x374}, \REF{ex:8:x1606}:
\ea%x374
\label{ex:8:x374}
\gll {\ob}Emeria aruf-i-nen na-ep on-am-ik-eya{\cb} op-a-mik.\\
woman hit-\textsc{Np}-\textsc{fu}.1s say-\textsc{ss}.\textsc{seq} do-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} hold-\textsc{pa}-1/3p\\
\glt`When he was trying to hit the woman, they grabbed him.'
\z
\ea%x1606
\label{ex:8:x1606}
\gll {\ob}Wia uruf-ek-a-m na-ep on-a-k on-a-k{\cb} weetak, o me wia uruf-a-k.\\
3p.\textsc{acc} see-\textsc{cntf}-\textsc{pa}-1s say-\textsc{ss}.\textsc{seq} do-\textsc{pa}-3s do-\textsc{pa}-3s no 3s.\textsc{unm} not 3p.\textsc{acc} see-\textsc{pa}-3s\\
\glt`He tried and tried to see them, but no, he didn't see them.'
\z
The conative structure is not used when the effort is successful \REF{ex:8:x375}, nor when the `trying' is not so much an effort to do something as experimenting \REF{ex:8:x376}. In these cases the verb \textstyleStyleVernacularWordsItalic{akim-} `try' is used, which is neutral as to the outcome. It requires a nominalized verb in the complement clause.
\ea%x375
\label{ex:8:x375}
\gll {\ob}Aasa keraw-owa{\cb} akim-ap akim-ap amis-ar-i-nan.\\
canoe carve-\textsc{nmz} try-\textsc{ss}.\textsc{seq} try-\textsc{ss}.\textsc{seq} knowledge-\textsc{inch}-\textsc{Np}-\textsc{fu}.2s\\
\glt`After trying and trying to carve a canoe, you will know (how to do it).'
\z
\ea%x376
\label{ex:8:x376}
\gll {\ob}Weria op-ap wiinar-owa nain{\cb} akim-am-ik-e.\\
planting.stick hold-\textsc{ss}.\textsc{seq} make.planting.holes-\textsc{nmz} that1 try-\textsc{ss}.\textsc{sim}-be-\textsc{imp}.2s\\
\glt`Keep trying/learning to make planting holes with the planting stick.'
\z
\paragraph[Complements of other utterance verbs ]{Complements of other utterance verbs}
%\hypertarget{RefHeading23541935131865}
The verb \textstyleStyleVernacularWordsItalic{ma}- `say, talk' can take a regular complement clause, which is of the fact type \citep[389]{Dixon2010b}. This clause functions as an object of the verb in the same way as an \textstyleAcronymallcaps{\textsc{np}} with the head noun \textstyleStyleVernacularWordsItalic{opora} (or \textstyleStyleVernacularWordsItalic{opaimika}) `talk/story' in \REF{ex:8:x1595}:
\ea%x1595
\label{ex:8:x1595}
\gll {\ob}Opora gelemuta=ko{\cb}\textsubscript{\textsc{np}} ma-i-nen na-ep.\\
talk little=\textsc{nf} say-\textsc{Np}-\textsc{fu}.1s say/think-\textsc{ss}.\textsc{seq}\\
\glt`I want to tell a little story.'
\z
The complement clause says something about the contents of the story and functions as a kind of title. This type of structure is quite common in Papuan languages\footnote{\citet[231]{Reesink1987} treats them under relative clauses and considers them equivalents of English cleft sentences.} and is used mainly in an opening or closing formula in narrative texts \REF{ex:8:x1596}:
\ea%x1596
\label{ex:8:x1596}
\gll Aria yo aakisa {\ob}takira en-owa gelemuta wia on-om-a-mik nain{\cb}\textsubscript{\textsc{cc}} ma-i-yem.\\
alright 1s.\textsc{unm} now child eat-\textsc{nmz} little 3p.\textsc{acc} make-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-1/3p that1 say-\textsc{Np}-\textsc{pr}.1s\\
\glt`Alright now I tell about our making a feast for the children.'
\z
The complement ``clause'' may actually be a whole sentence, since it is possible to have medial clauses preceding the finite clause of the complement \REF{ex:8:x1597}:
\ea%x1597
\label{ex:8:x1597}
\gll {\ob}Tunde=pa fikera kuum-iwkin ikiw-ep waaya mik-a-m nain{\cb}\textsubscript{\textsc{cc}} ma-i-yem.\\
Tuesday=\textsc{loc} kunai.grass burn-2/3p.\textsc{ds} go-\textsc{ss}.\textsc{seq} pig spear-\textsc{pa}-1s that1 say-\textsc{Np}-\textsc{pr}.1s\\
\glt`I tell about that when they burned kunai grass on Tuesday and I went and speared a pig.'
\z
Often the sentence has both a \textstyleAcronymallcaps{\textsc{np}} containing a word for `story' and the complement clause \REF{ex:8:x1594}. The relationship of these two \textstyleAcronymallcaps{\textsc{np}}s is not really appositional, because the nominalized clause modifies the other \textstyleAcronymallcaps{\textsc{np}}. But the nominalized clause is not a prototypical \textstyleAcronymallcaps{\textsc{rc}} either, in spite of identical structure, because \textstyleStyleVernacularWordsItalic{opora} is neither an antecedent \textstyleAcronymallcaps{\textsc{np}} nor a relative \textstyleAcronymallcaps{\textsc{np}} that would have a function in the \textstyleAcronymallcaps{\textsc{rc}.} I consider the nominalized clause a modifier of the other \textstyleAcronymallcaps{\textsc{np}}, and the whole comparable to the \textstyleAcronymallcaps{\textsc{np}} in \REF{ex:8:x1598}.\footnote{\citet{ComrieEtAl1995} present another alternative: treating complement clauses like this and relative clauses as a single construction, where the structure only indicates that the subordinate clause is connected to an \textsc{np}, and the interpretation of their relationship is done pragmatically. This possibility would need more investigation in Mauwake.}
\ea%x1594
\label{ex:8:x1594}
\gll Aria yo aakisa {\ob}fikera ikum kuum-e-mik nain{\cb}\textsubscript{\textsc{cc}} opora gelemuta=ko ma-i-yem.\\
alright 1s.\textsc{unm} now kunai.grass illicitly burn-\textsc{pa}-1/3p that1 story little=\textsc{nf} say-\textsc{Np}-\textsc{pr}.1s\\
\glt`Alright now I tell a little story about the kunai grass that was burned by arson.'
\z
\ea%x1598
\label{ex:8:x1598}
\gll manina uuw-owa opora \\
garden work-\textsc{nmz} talk\\
\glt`garden work talk / talk (n.) about garden work'
\z
Another complementation strategy for utterance verbs is a clause with a nominalized verb. It is used when the event expressed in the clause is regarded as potential, rather than an actual activity or a fact. The following example has two levels of complementation, as the verb in the nominalized complement also takes a nominalized complement \REF{ex:8:x1599}:
\ea%x1599
\label{ex:8:x1599}
\gll I {\ob}yiena {\ob}miiw-aasa muf-owa{\cb} ikiw-owa{\cb} na-em-ik-omkun o ar-e-k. \\
1p.\textsc{unm} 1p.\textsc{gen} land-canoe pull-\textsc{nmz} go-\textsc{nmz} say-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} 3s.\textsc{unm} become-\textsc{pa}-3s\\
\glt`While we were talking about our going to fetch a vehicle, she died (lit: became).'
\z
The same strategy is used with the verb \textstyleStyleVernacularWordsItalic{maak}- `tell' when it is used in the sense of ordering someone to do something \REF{ex:8:x1630}:
\ea%x1630
\label{ex:8:x1630}
\gll Emar, {\ob}no muut fain uf-owa{\cb} nefa maak-e-m.\\
friend 2s.\textsc{unm} only this dance-\textsc{nmz} 2s.\textsc{acc} tell-\textsc{pa}-1s\\
\glt`Friend, I told you to dance only this.'
\z
\subsubsection{Complements of perception verbs} \label{sec:8.3.2.2}
%\hypertarget{RefHeading23561935131865}
It was mentioned above (\sectref{sec:8.2.3.4}) that the chaining structure is the main complementation strategy for perception verbs in Mauwake when the complement is an activity \REF{ex:8:x1512} or event \REF{ex:8:x1600}. These are not genuine complement clauses, as they are not embedded in the main clause, but they perform the same function as regular complement clauses do.
\ea%x1512
\label{ex:8:x1512}
\gll {\ob}Mukuruna wu-am-ika-iwkin{\cb} i miim-a-mik.\\
noise put-\textsc{ss}.\textsc{sim}-be-2/3p.\textsc{ds} 1p.\textsc{unm} hear-\textsc{pa}-1/3p\\
\glt`We heard you making (the) noise.'
\z
\ea%x1600
\label{ex:8:x1600}
\gll {\ob}Urema maneka um-ep ika-eya{\cb} uruf-a-mik. \\
bandicoot big die-\textsc{ss}.\textsc{seq} be-2/3s.\textsc{ds} see-\textsc{pa}-1/3p\\
\glt`They saw the big bandicoot dead (=having died).'
\z
A regular complement clause is only used with perception verbs about a past activity, when the complement clause reports a fact rather than an activity \REF{ex:8:x1628}, \REF{ex:8:x1629}.
\ea%x1628
\label{ex:8:x1628}
\gll Iikir-ami {\ob}iwera nain emeria ar-e-p ik-ua nain{\cb}\textsubscript{\textsc{cc}} uruf-ap {\dots}\\
get.up-\textsc{ss}.\textsc{sim} coconut that1 woman become-\textsc{pa}-3s be-\textsc{pa}.3s that1 see-\textsc{ss}.\textsc{seq}\\
\glt`He got up and saw that the coconut had become a woman, and {\dots}'
\z
\ea%x1629
\label{ex:8:x1629}
\gll {\ob}Yeesus owow iinan urup-o-k nain{\cb}\textsubscript{\textsc{cc}} uruf-ap kemel-a-mik.\\
Jesus village above ascend-\textsc{pa}-3s that1 see-\textsc{ss}.\textsc{seq} rejoice-\textsc{pa}-1/3p\\
\glt`They saw that Jesus ascended into heaven, and rejoiced.'
\z
When a perception verb takes an indirect question as a complement, it has to be a regular complement clause \REF{ex:8:x1631}:
\ea%x1631
\label{ex:8:x1631}
\gll Ni {\ob}kakala sira kamenap eliw-ar-i-ya nain{\cb}\textsubscript{\textsc{cc}} uruf-eka.\\
2p.\textsc{unm} flower custom what.like good-\textsc{inch}-\textsc{Np}-\textsc{pr}.3s that1 see-\textsc{imp}.2p\\
\glt`See how the flowers grow.'
\z
\subsubsection{Complements of cognitive verbs}
%\hypertarget{RefHeading23581935131865}
The verbs for knowing, \textstyleStyleVernacularWordsItalic{amisar}- and \textstyleStyleVernacularWordsItalic{paayar}-, together cover the cognitive area of knowing facts and skills, coming to realize, and understanding. When the complement clause indicates contents of factual knowledge, it is usually a regular complement clause \REF{ex:8:x1602}.
\ea%x1602
\label{ex:8:x1602}
\gll O {\ob}kaanek aaw-ep p-ekap-om-a-mik nain{\cb} me amis-ar-e-k.\\
3s.\textsc{unm} where.\textsc{cf} get-\textsc{ss}.\textsc{seq} \textsc{bpx}-come-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-1/3p that1 not knowledge-\textsc{inch}-\textsc{pa}-3s\\
\glt`He didn't know where they got it from and brought it to him.'
\z
It seems that a clause with a nominalized verb is also used as a ``fact'' complement but only when it refers to pre-knowledge of an event \REF{ex:8:x1605}. It could also be understood as a ``potential'' type complement, in which case it is natural that it uses this complementation strategy. This requires more investigation.
\ea%x1605
\label{ex:8:x1605}
\gll {\ob}O ikiw-owa nain{\cb} amis-ar-e-n=i? \\
3s.\textsc{unm} go-\textsc{nmz} that1 knowledge-\textsc{inch}-\textsc{pa}-2s=\textsc{qm}\\
\glt`Did you know about his going?'
\z
When the complement is about knowing a skill, the verb in the complement clause is in nominalized form, or a medial clause is used \REF{ex:8:x1603}, \REF{ex:8:x1849}:
\ea%x1603
\label{ex:8:x1603}
\gll {\ob}Nain on-owa (nain){\cb} me amis-ar-e-m.\\
that1 do-\textsc{nmz} that1 not knowledge-\textsc{inch}-\textsc{pa}-1s\\
\glt`I don't know how to do that.'
\z
\ea%x1849
\label{ex:8:x1849}
\gll {\ob}Sawiter inera on-ap{\cb} amis-ar-e-k.\\
Sawiter basket make-\textsc{ss}.\textsc{seq} knowledge-\textsc{inch}-\textsc{pa}-3s\\
\glt`Sawiter knows how to make baskets.'
\z
When the complement indicates lack of some experience, a construction with a medial clause is used. In this case, the main clause is in the negative, and the scope of the negation has to extend to the medial clause \REF{ex:8:x1604}:
\ea%x1604
\label{ex:8:x1604}
\gll {\ob}Owora en-ep{\cb} me paayar-e-m.\\
betelnut eat-\textsc{ss}.\textsc{seq} not understand-\textsc{pa}-1s\\
\glt`I'm not used to eating betelnut.' Or: `I don't know how to eat betelnut.'
\z
\subsubsection{Complement clauses as subjects}\label{sec:8.3.2.4}
%\hypertarget{RefHeading23601935131865}
Both types of nominalized clause (\sectref{sec:5.7.1}, \ref{sec:5.7.2}) may be used as subjects in verbless clauses, even though this function for complement clauses is not common. A clause with a nominalized verb is used when the activity is potential \REF{ex:8:x1636}, \REF{ex:8:x1637}.
\ea%x1636
\label{ex:8:x1636}
\gll {\ob}Maa wiar ikum aaw-owa{\cb} eliwa=ki? \\
thing 3.\textsc{dat} illicitly take-\textsc{nmz} good=\textsc{cf}.\textsc{qm}\\
\glt`Is stealing from others good?'
\z
\ea%x1637
\label{ex:8:x1637}
\gll {\ob}Maa eneka me en-owa{\cb} maa marew.\\
thing tooth not eat-\textsc{nmz} thing no(ne)\\
\glt`Not eating meat is all right.'
\z
A regular complement clause with a finite verb is used when the activity is considered a fact \REF{ex:8:x1639}:
\ea%x1639
\label{ex:8:x1639}
\gll {\ob}Ni unuma niam p-ir-i-man nain{\cb} eliw(a) marew.\\
2p.\textsc{unm} name 2p.\textsc{refl} \textsc{bpx}-ascend-\textsc{Np}-\textsc{pr}.2p that1 good no(ne)\\
\glt`That you praise yourselves (lit: lift up your own name) is not good.'
\z
\subsection{Adverbial clauses} \label{sec:8.3.3}
%\hypertarget{RefHeading23621935131865}
Adverbial clauses are a very small group of subordinate clauses. They are Type 2 nominalized clauses (\sectref{sec:5.7.2}), and they perform the same function in a clause as a temporal or locative adverbial phrase.
\subsubsection{Temporal clauses} \label{sec:8.3.3.1}
%\hypertarget{RefHeading23641935131865}
The presence of the distal-1 demonstrative \textstyleStyleVernacularWordsItalic{nain} `that' indicates the pragmatic difference between the temporal clauses and those medial clauses that may get a temporal interpretation: the temporal clauses are presented as given information \REF{ex:8:x1540}--\REF{ex:8:x1624}, whereas the medial clauses usually present new information \REF{ex:8:x1632}, except when they occur in tail-head constructions.
\ea%x1540
\label{ex:8:x1540}
\gll Ni {\ob}ifa nia keraw-i-ya nain{\cb} sira kamenap on-i-man?\\
2p.\textsc{unm} snake 2p.\textsc{acc} bite-\textsc{Np}-\textsc{pr}.3s that1 custom what.like do-\textsc{Np}-\textsc{pr}.2p\\
\glt`When a snake bites you, what do you do?'
\z
\ea%x1569
\label{ex:8:x1569}
\gll {\ob}Maa fain pakak na-e-k nain{\cb} yo soran-e-m.\\
thing this bang say-\textsc{pa}-3s that1 1s.\textsc{unm} be.startled-\textsc{pa}-1s\\
\glt`When this thing went ``bang!'' I got startled.'
\z
\ea%x1624
\label{ex:8:x1624}
\gll {\ob}Yo napum-ar-e-m nain{\cb} eneka maay-ar-e-m. \\
1s.\textsc{unm} sick-\textsc{inch}-\textsc{pa}-1s that1 tooth long-\textsc{inch}-\textsc{pa}-1s\\
\glt`When I got sick, I became hungry for meat (lit: my teeth got long).'
\z
\ea%x1632
\label{ex:8:x1632}
\gll Yo napum-ar-ep eneka maay-ar-e-m.\\
1s.\textsc{unm} sick-\textsc{inch}-\textsc{ss}.\textsc{seq} tooth long-\textsc{inch}-\textsc{pa}-1s\\
\glt`I got sick and became hungry for meat.'
\z
\subsubsection{Locative clauses} \label{sec:8.3.3.2}
%\hypertarget{RefHeading23661935131865}
Locative adverbial clauses use a clause-final deictic locative \textstyleStyleVernacularWordsItalic{nan} \REF{ex:8:x1621} or \textstyleStyleVernacularWordsItalic{neeke} \REF{ex:8:x1626} `there' instead of the demonstrative \textstyleStyleVernacularWordsItalic{nain} `that'. Note that in \REF{ex:8:x1621} the locative noun \textstyleStyleVernacularWordsItalic{manina} `garden' is not a relative \textstyleAcronymallcaps{\textsc{np}}; if there were one, that would be \textstyleStyleVernacularWordsItalic{epa} `place' immediately preceding \textstyleStyleVernacularWordsItalic{nan} `there'.
\ea%x1621
\label{ex:8:x1621}
\gll I naap ikiw-ep {\ob}yiena manina on-a-mik nan{\cb} ik-e-mik.\\
1p.\textsc{unm} thus go-\textsc{ss}.\textsc{seq} 1p.\textsc{gen} garden make-\textsc{pa}-1p there be-\textsc{pa}-1p\\
\glt`We went there and stayed where we had made our gardens.'
\z
\ea%x1626
\label{ex:8:x1626}
\gll {\ob}Luuwa niir-i-mik neeke{\cb} soomar-e-mik.\\
ball play-\textsc{Np}-\textsc{pr}.1/3p there.\textsc{cf} walk-\textsc{pa}-1/3p\\
\glt`We walked (to) where they play football.'
\z
The following example is actually a locative relative clause, since it has a relative \textstyleAcronymallcaps{\textsc{np}} \textstyleStyleVernacularWordsItalic{kame} `side' that has a function in both clauses \REF{ex:8:x1638}:
\ea%x1638
\label{ex:8:x1638}
\gll {\ob}No in-i-n kame nan{\cb} urup-ep tepak iw-a-mik. \\
2s.\textsc{unm} sleep-\textsc{Np}-\textsc{pr}.2s side there ascend-\textsc{ss}.\textsc{seq} inside go-\textsc{pa}-1/3p\\
\glt`They climbed up on the side where you sleep and went inside.'
\z
\subsection{Adversative subordinate clause} \label{sec:8.3.4}
%\hypertarget{RefHeading23681935131865}
Coordinate adversative clauses were discussed in \sectref{sec:8.1.3}.
The topic marker -\textstyleStyleVernacularWordsItalic{na} (\sectref{sec:3.12.7.1}) marks an adversative clause when the main clause cancels an expectation, either expressed in the text or assumed to be in the hearer's mind. For this reason, this construction is used when some effort is frustrated \REF{ex:8:x729}, or when there is a strong element of surprise \REF{ex:8:x730} in the main clause.
\ea%x729
\label{ex:8:x729}
\gll Mukuna nain umuk-a-mik=\textstyleEmphasizedVernacularWords{na} me pepek.\\
fire that1 quench-\textsc{pa}-1/3p=\textsc{tp} not able\\
\glt`They tried to quench the fire, but couldn't.'
\z
\ea%x730
\label{ex:8:x730}
\gll Ekap-ep uruf-a-k=\textstyleEmphasizedVernacularWords{na} ifa maneka=ke siowa wasi-ep-pu-eya {\dots} \\
come-\textsc{ss}.\textsc{seq} see-\textsc{pa}-3s=\textsc{tp} snake big=\textsc{cf} dog tie.around-\textsc{ss}.\textsc{seq}-\textsc{cmpl}-2/3s.\textsc{ds}\\
\glt`He came and looked, but a snake had tied itself around the dog, and/but {\dots}'
\z
In \REF{ex:8:x1393}, what the boys expect to see is a crocodile, but it turns out to be a turtle.
\ea%x1393
\label{ex:8:x1393}
\gll Takir(a) oko=ke pon muneka wu-ek-a-m na-ep urup-em-ika-eya uruf-ap tuar=ke na-ep alu-emi baurar-e-k. Takir(a) unowa ekap-ep uruf-a-mik=\textstyleEmphasizedVernacularWords{na} pon=ke, ne unow=iya op-ap kirip-a-mik.\\
boy other=\textsc{cf} turtle egg put-\textsc{cntf}-\textsc{pa}-1s say-\textsc{ss}.\textsc{seq} ascend-\textsc{ss}.\textsc{sim}-be-2/3s.\textsc{ds} see-\textsc{ss}.\textsc{seq} crocodile=\textsc{cf} say-\textsc{ss}.\textsc{seq} shout-\textsc{ss}.\textsc{sim} flee-\textsc{pa}-3s boy many come-\textsc{ss}.\textsc{seq} see-\textsc{pa}-1/3p=\textsc{tp} turtle=\textsc{cf} \textsc{add} many=\textsc{com} hold-\textsc{ss}.\textsc{seq} turn-\textsc{pa}-1/3p\\
\glt`A boy saw a turtle coming up (to the beach) to lay eggs and thought it was a crocodile, and shouted and fled. Many boys came and saw/looked, but it was a turtle, and they all together grabbed and turned it.'
\z
In \REF{ex:8:x1397}, a man talks to his son whom he wanted and expected to be a good person:
\ea%x1397
\label{ex:8:x1397}
\gll Aakisa yo nefa uruf-i-yem=\textstyleEmphasizedVernacularWords{na} no mua eliw marew.\\
now 1s.\textsc{unm} 2s.\textsc{acc} look-\textsc{Np}-\textsc{pr}.1s=\textsc{tp} 2s.\textsc{unm} man good no(ne)\\
\glt`I now look at you but you are not a good man.'
\z
Because these clauses express a cancellation or frustration of an expectation, a negator commonly follows as the first element in the main clause, and often the negator is the only element left of the main clause, as in \REF{ex:8:x1398}.
\ea%x1398
\label{ex:8:x1398}
\gll Marasin wu-om-a-mik=\textstyleEmphasizedVernacularWords{na} weetak.\\
medicine put-\textsc{ben}-\textsc{bnfy}2.\textsc{pa}-1/3p=\textsc{tp} no\\
\glt`They injected medicine in him, but no (it had no effect).'
\z
\subsection{Conditional clauses} \label{sec:8.3.5}
%\hypertarget{RefHeading23701935131865}
\citet{Haiman1978} was the first to clearly describe the close connection between conditionals and topics, and it has since been attested in various languages \citep[292]{ThompsonEtAl2007}. In many Papuan languages, the connection is very evident (\citealt[235--244]{Reesink1987}, \citealt[304--308]{MacDonald1990}, \citealt[263]{Farr1999}). The protasis -- the subordinate clause expressing the condition -- provides the presupposition for the apodosis, the asserted main clause. In other words, ``it constitutes the framework which has been selected for the following discourse'' \citep[585]{Haiman1978}.
The conditional clauses in Mauwake can be grouped into three main groups on semantic and structural grounds: imaginative, predictive, and reality conditionals.\footnote{The terminology is from \citet[255]{ThompsonEtAl2007}.} Imaginative and predictive conditionals together belong to the unreality conditionals. Reality conditionals only include habitual/generic conditionals, as there are no present or past conditionals.
The protasis clause, expressing the condition, is placed before the apodosis clause, which gives the consequence. Right-dislocation of the protasis is possible but rare. The verb forms in the protasis and the apodosis depend on the type of conditional. The topic marker -\textstyleStyleVernacularWordsItalic{na} is used as the conditional marker in the unreality conditional clauses, where it is cliticized to the last element of the protasis clause, usually the verb. Reality conditional clauses do not have a conditional marker, so structurally the protasis and apodosis are ordinary juxtaposed clauses. The intonation in the protasis has a slight rise towards the end, stronger with the topic marker -\textstyleStyleVernacularWordsItalic{na} than without it.
In \textstyleEmphasizedWords{{imaginative conditional clauses}}, the verb in both the protasis and the apodosis is in the counterfactual mood, which is marked by the suffix -\textstyleStyleVernacularWordsItalic{ek}. The conditional/topic marker -\textstyleStyleVernacularWordsItalic{na} is always present. The same form is used for semantically counterfactual and hypothetical conditionals. The counterfactual interpretation \REF{ex:8:x1645} is more common, but especially if there is a reference to present \REF{ex:8:x1646} or future time \REF{ex:8:x1647}, it forces a hypothetical interpretation.
\ea%x1645
\label{ex:8:x1645}
\gll {\ob}Yo Sek haussik ikiw-\textstyleEmphasizedVernacularWords{ek}-a-m=\textstyleEmphasizedVernacularWords{na}{\cb} miiw-aasa=pa uroma yaki-\textstyleEmphasizedVernacularWords{ek}-a-m.\\
1s.\textsc{unm} Sek hospital go-\textsc{cntf}-\textsc{pa}-1s=\textsc{tp} land-canoe=\textsc{loc} stomach wash-\textsc{cntf}-\textsc{pa}-1s\\
\glt`If I had gone to the Sek hospital, I would have given birth in the truck.'
\z
\ea%x1646
\label{ex:8:x1646}
\gll {\ob}Yena aamun aakisa uruf-\textstyleEmphasizedVernacularWords{ek}-a-m=\textstyleEmphasizedVernacularWords{na}{\cb} kemel-\textstyleEmphasizedVernacularWords{ek}-a-m.\\
1s.\textsc{gen} 1s/p.younger.sibling now see-\textsc{cntf}-\textsc{pa}-1s=\textsc{tp} be.happy-\textsc{cntf}-\textsc{pa}-1s\\
\glt`If I saw my younger brother now, I would be happy.'
\z
\ea%x1647
\label{ex:8:x1647}
\gll {\ob}Morauta iimar-ow(a) mua ik-\textstyleEmphasizedVernacularWords{ek}-a-k=\textstyleEmphasizedVernacularWords{na},{\cb} uurika ikiw-ep maak-\textstyleEmphasizedVernacularWords{ek}-a-mik.\\
Morauta stand.up-\textsc{nmz} man be-\textsc{cntf}-\textsc{pa}-3s=\textsc{tp} tomorrow go-\textsc{ss}.\textsc{seq} tell-\textsc{cntf}-\textsc{pa}-1/3p\\
\glt`If Morauta were the leader, we would go and talk to him tomorrow.'
\z
Usually the context determines the interpretation, but without a clear context the sentence may be ambiguous \REF{ex:8:x1648}:
\ea%x1648
\label{ex:8:x1648}
\gll {\ob}Inasin napuma ik-\textstyleEmphasizedVernacularWords{ek}-a-k=\textstyleEmphasizedVernacularWords{na}{\cb} sariar-\textstyleEmphasizedVernacularWords{ek}-a-k.\\
spirit/white.man sickness be-\textsc{cntf}-\textsc{pa}-3s=\textsc{tp} recover-\textsc{cntf}-\textsc{pa}-3s\\
\glt`If it were the white man's sickness\footnote{This is contrasted with \textit{owow napuma} `village sickness', caused by sorcery.} he would recover.' Or: `If it had been the white man's sickness, he would have recovered.'
\z
\textstyleEmphasizedWords{{Predictive conditionals}} are the most frequently used and show the greatest variation morphologically. The apodosis, and consequently the whole sentence, may be either a statement with a future tense verb, or a command with an imperative verb. The verb in the protasis may be in either present or future indicative, in imperative, or in medial form. The conditional/topic marker at the end of the protasis is obligatory.
When the predictive conditional is a statement, the verb in both the protasis and in the apodosis is usually in the future tense \REF{ex:8:x1652}.
\ea%x1652
\label{ex:8:x1652}
\gll {\ob}No oram mokok=iw \textstyleEmphasizedVernacularWords{ika-i-nan=na}{\cb} ikoka mua lebuma \textstyleEmphasizedVernacularWords{ika}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{i}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{nan}.\\
2s.\textsc{unm} just eye=\textsc{inst} be-\textsc{Np}-\textsc{fu}.2s=\textsc{tp} later man lazy be-\textsc{Np}-\textsc{fu}.2s\\
\glt`If you just watch with your eyes (without joining the work) you will be(come) a lazy man.'
\z
The protasis may have a medial verb form if the condition is likely to be fulfilled \REF{ex:8:x1654}, or if the protasis consists of two or more clauses that are in a medial-final relationship \REF{ex:8:x1653}.
\ea%x1654
\label{ex:8:x1654}
\gll {\ob}Emeria \textstyleEmphasizedVernacularWords{sesenar-ek-a-m} \textstyleEmphasizedVernacularWords{na-ep=na}{\cb} waaya ten erup naap wienak-i-non.\\
woman buy-\textsc{cntf}-\textsc{pa}-1s say/think-\textsc{ss}.\textsc{seq}=\textsc{tp} pig ten two thus feed.him-\textsc{Np}-\textsc{fu}.3s\\
\glt`If/when he wants to buy a wife, he will give him (=the bride's father) twenty or so pigs.'
\z
\ea%x1653
\label{ex:8:x1653}
\gll {\ob}Yaapan me \textstyleEmphasizedVernacularWords{piipu-ap=na} anane epaskun ika-i-nan=na{\cb} no iiwawun weeser-i-nan.
\\
Japan not leave-\textsc{ss}.\textsc{seq}=\textsc{tp} always together be-\textsc{Np}-\textsc{fu}.2s=\textsc{tp} 2s.\textsc{unm} altogether finish-\textsc{Np}-\textsc{fu}.2s\\
\glt`If you don't leave the Japanese but are always together, you will be finished altogether.'
\z
Predictive conditionals allow right-dislocation of the protasis, but it is uncommon \REF{ex:8:x1662}:
\ea%x1662
\label{ex:8:x1662}
\gll Owora fain aite panewowa onak-e, {\ob}ekap-ep \textstyleEmphasizedVernacularWords{kerer-eya=na}{\cb}. \\
betelnut this 1s/p.mother old feed-\textsc{imp}.2s come-\textsc{ss}.\textsc{seq} arrive-2/3s.\textsc{ds}=\textsc{tp}\\
\glt`Give these betelnuts to old mother to eat, if she comes and arrives here.'
\z
In those instances where the conditional marker is attached to a predicate that is not originally a verb, the predicate needs to have medial verb marking \REF{ex:8:x1660}, \REF{ex:8:x1661} (\sectref{sec:3.8.3.5}).
\ea%x1660
\label{ex:8:x1660}
\gll {\ob}\textstyleEmphasizedVernacularWords{Weetak-eya}\textstyleEmphasizedVernacularWords{=na}{\cb} weetak.\\
no-2/3s.\textsc{ds}=\textsc{tp} no\\
\glt`If not, then not.'
\z
\ea%x1661
\label{ex:8:x1661}
\gll {\ob}Mauw-owa \textstyleEmphasizedVernacularWords{manek-aya=na}{\cb} yia maak-i-non.\\
work-\textsc{nmz} big-2/3s.\textsc{ds}=\textsc{tp} 1p.\textsc{acc} tell-\textsc{Np}-\textsc{fu}.3s\\
\glt`If the work is big, he will tell us.'
\z
When the apodosis is in the imperative, there is normally some expectation that the condition is to be fulfilled. When the likelihood is high, the medial form is used in the protasis \REF{ex:8:x1650}, \REF{ex:8:x1649}. Present tense \REF{ex:8:x1651} and imperative \REF{ex:8:x1656} indicate less likelihood, and future tense \REF{ex:8:x1657} the least, for the condition to be fulfilled.
\ea%x1650
\label{ex:8:x1650}
\gll {\ob}Wia \textstyleEmphasizedVernacularWords{uruf-ap=na}{\cb} wia maak-e.\\
3p.\textsc{acc} see-\textsc{ss}.\textsc{seq}=\textsc{tp} 3p.\textsc{acc} tell-\textsc{imp}.2s\\
\glt`If/when you see them, tell them.'
\z
\ea%x1649
\label{ex:8:x1649}
\gll {\ob}Maa mauwa nefa \textstyleEmphasizedVernacularWords{maak-iwkin=na}{\cb} opaimika miim-e.\\
thing what 2s.\textsc{acc} tell-2/3p.\textsc{ds}=\textsc{tp} talk listen-\textsc{imp}.2s\\
\glt`Whatever they may tell you, listen to the talk.' (Lit: `If they tell you what(ever), listen to the talk.')
\z
\ea%x1651
\label{ex:8:x1651}
\gll Koora pun naap: {\ob}mua oko naareke koora \textstyleEmphasizedVernacularWords{kua-i-ya=na}{\cb} o asip-e.\\
house also thus man other who.\textsc{cf} house build-\textsc{Np}-\textsc{pr}.3s=\textsc{tp} 3s.\textsc{unm} help-\textsc{imp}.2s\\
\glt`A house is like that too: if/when any man builds a house, help him.'
\z
\ea%x1656
\label{ex:8:x1656}
\gll {\ob}Ni kirip-owa \textstyleEmphasizedVernacularWords{ika-inok}=\textstyleEmphasizedVernacularWords{na}{\cb} kirip-eka.\\
2p.\textsc{unm} reply-\textsc{nmz} be-\textsc{imp}.3s=\textsc{tp} reply-\textsc{imp}.2p\\
\glt`If you have something to reply, then reply.'
\z
\ea%x1657
\label{ex:8:x1657}
\gll {\ob}Wia \textstyleEmphasizedVernacularWords{uruf-i-nan=na}{\cb} wia maak-e.\\
3p.\textsc{acc} see-\textsc{Np}-\textsc{fu}.2s=\textsc{tp} 3p.\textsc{acc} tell-\textsc{imp}.2s\\
\glt`If you (happen to) see them, tell them.'
\z
\textstyleEmphasizedWords{{Reality conditional clauses}} are morpho-syntactically different from the other conditional clauses in that they are not marked with the topic marker. The protasis and apodosis are juxtaposed main clauses in future tense \REF{ex:8:x1644}, but this construction is mainly used to encode habitual or generic conditions. The protasis can never be right-dislocated, since it does not have the topic marker.
\ea%x1644
\label{ex:8:x1644}
\gll {\ob}No inasin(a) unuma me unuf-i-nan{\cb}, mua oko=ke waaya nain mik-ap nefar aaw-i-non.\\
2s.\textsc{unm} spirit name not call-\textsc{Np}-\textsc{fu}.2s man other=\textsc{cf} pig that1 spear-\textsc{ss}.\textsc{seq} 2s.\textsc{dat} take-\textsc{Np}-\textsc{fu}.3s\\
\glt`If you don't call the spirit name, another man will spear the pig and take it from you.'
\z
If there are two protasis clauses, they may be juxtaposed without a connective \REF{ex:8:x1635} or joined with the pragmatic additive \textstyleStyleVernacularWordsItalic{ne} \REF{ex:8:x1643}.
\ea%x1635
\label{ex:8:x1635}
\gll {\ob}Nena kuuf-i-nan, parew-i-non{\cb}, eliw perek-i-nan.\\
2s.\textsc{gen} see-\textsc{Np}-\textsc{fu}.2s mature-\textsc{Np}-\textsc{fu}.3s well harvest-\textsc{Np}-\textsc{fu}.2s\\
\glt`If you see it yourself and it has matured, you may harvest it.'
\z
\ea%x1643
\label{ex:8:x1643}
\gll {\ob}Yo um-i-nen ne yena emeria mua oko aaw-i-non{\cb}, muuka onaiya me ikiw-i-non.\\
1s.\textsc{unm} die-\textsc{Np}-\textsc{fu}.1s \textsc{add} 1s.\textsc{gen} woman man other take-\textsc{Np}-\textsc{fu}.3s son with not go-\textsc{Np}-\textsc{fu}.3s\\
\glt`If I die and my wife takes another husband, she will not go (to him) with the son.'
\z
When a sentence contains alternatives expressed by two sets of reality conditional constructions, these are joined by the pragmatic additive \textstyleStyleVernacularWordsItalic{ne} \REF{ex:8:x1642}.
\ea%x1642
\label{ex:8:x1642}
\gll {\ob}Yo auwa miiwa=pa mauw-i-nen{\cb}, irak-owa marew, ne {\ob}yo aite miiwa=pa mauw-i-nen{\cb}, irak-owa ika-i-non.\\
1s 1s/p.father land=\textsc{loc} work-\textsc{Np}-\textsc{fu}.1s fight-\textsc{nmz} no(ne) \textsc{add} 1s 1s/p.mother land=\textsc{loc} work-\textsc{Np}-\textsc{fu}.1s fight-\textsc{nmz} be-\textsc{Np}-\textsc{fu}.3s\\
\glt`If I work on my father's land there is no fighting (over land), but if I work on my mother's land there will be fighting.'
\z
The same construction can encode a simple coordinate relationship, but it is less common. In spoken text a slightly falling intonation at the end of the first clause indicates a coordinate sentence \REF{ex:8:x1850}.
\ea%x1850
\label{ex:8:x1850}
\gll Oko=ke pusun-emi feeke \textstyleEmphasizedVernacularWords{ikiw}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{i}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{non}, a mua oko=ke \textstyleEmphasizedVernacularWords{mik}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{i}\textstyleEmphasizedVernacularWords{-}\textstyleEmphasizedVernacularWords{non}.\\
other=\textsc{cf} run.loose-\textsc{ss}.\textsc{sim} here.\textsc{cf} go-\textsc{Np}-\textsc{fu}.3s ah man other=\textsc{cf} spear-\textsc{Np}-\textsc{fu}.3s\\
\glt`Another (pig) will run loose and run this way, ah, another man will spear it.'
\z
\subsection{Concessive clauses}
%\hypertarget{RefHeading23721935131865}
Concessive clauses may look exactly like the predictive conditional clauses \REF{ex:8:x1655}. If the context is not clear enough, the phrase \textstyleStyleVernacularWordsItalic{nain pun} `that too' may be added between the protasis and the apodosis for clarification \REF{ex:8:x1430}.
\ea%x1655
\label{ex:8:x1655}
\gll {\ob}Naapeya aara=ki e kasi=ke um-inok=na{\cb} ni nain kema bagiw-ir-ap malaria sevis me wia iirar-eka.\\
therefore hen=\textsc{cf}.\textsc{qm} or cat=\textsc{cf} die-\textsc{imp}.3s=\textsc{tp} 2p.\textsc{unm} that1 liver hatred-rise-\textsc{ss}.\textsc{seq} malaria service not 3p.\textsc{acc} remove-\textsc{imp}.2p\\
\glt`Therefore, (even) if hens or cats die, do not get angry and drive away the Malaria Service people.'
\z
\ea%x1430
\label{ex:8:x1430}
\gll {\ob}Naap yia ma-ikuan=na{\cb} \textstyleEmphasizedVernacularWords{nain} \textstyleEmphasizedVernacularWords{pun} ni kekan-ep sira eliwa ook-eka.\\
thus 1p.\textsc{acc} say-\textsc{fu}.3p=\textsc{tp} that too 2p.\textsc{unm} be.strong-\textsc{ss}.\textsc{seq} custom good follow-\textsc{imp}.2p\\
\glt`Even if they talk about us like that, be strong and follow the good custom/ways.'
\z
\subsection{Coordination of subordinate clauses}\label{sec:8.3.7}
%\hypertarget{RefHeading23741935131865}
Subordinate clauses may also be coordinated with each other, although in normal speech the frequency of these constructions is low. The only subordinate clauses in the natural text data conjoined either by juxtaposition or with the additive \textstyleStyleVernacularWordsItalic{ne} are relative clauses \REF{ex:8:x1381}, \REF{ex:8:x1382}. The distal demonstrative \textstyleStyleVernacularWordsItalic{nain}, functioning as a relative marker, is attached to the end of each relative clause.
\ea%x1381
\label{ex:8:x1381}
\gll ...{\ob}\textstyleEmphasizedVernacularWords{waaya} \textstyleEmphasizedVernacularWords{koka=pa} \textstyleEmphasizedVernacularWords{ika-i-ya} \textstyleEmphasizedVernacularWords{nain}{\cb}\textsubscript{\textsc{rc}}, {\ob}\textstyleEmphasizedVernacularWords{sokowa} \textstyleEmphasizedVernacularWords{maneka=pa}
\textstyleEmphasizedVernacularWords{ika-i-ya} \textstyleEmphasizedVernacularWords{nain}{\cb}\textsubscript{\textsc{rc}} kanu-ep aap-ekap-ep fikera=pa-r=iw fiirim-eka.\\
pig jungle=\textsc{loc} be-\textsc{Np}-\textsc{pr}.3s that1 grove big=\textsc{loc} be-\textsc{Np}-\textsc{pr}.3s that1 chase-\textsc{ss}.\textsc{seq} \textsc{\textsc{bp}x}-come-\textsc{ss}.\textsc{seq} kunai.grass=\textsc{loc}-{\O}=\textsc{lim} gather-\textsc{imp}.2p\\
\glt`{\dots}chase the pigs that are in the jungle (and) that are in the big grove(s) and bring them and gather them right inside the kunai grass (area).'
\z
\ea%x1382
\label{ex:8:x1382}
\gll Ne {\ob}\textstyleEmphasizedVernacularWords{o} \textstyleEmphasizedVernacularWords{maa} \textstyleEmphasizedVernacularWords{kamenap} \textstyleEmphasizedVernacularWords{on-eya} \textstyleEmphasizedVernacularWords{wiar} \textstyleEmphasizedVernacularWords{uruf-i-n} \textstyleEmphasizedVernacularWords{nain}{\cb}\textsubscript{\textsc{rc}} \textstyleEmphasizedVernacularWords{ne} {\ob}\textstyleEmphasizedVernacularWords{wiar} \textstyleEmphasizedVernacularWords{miim-i-n} \textstyleEmphasizedVernacularWords{nain}{\cb}\textsubscript{\textsc{rc}} wia maak-em-ika-i-nan.\\
\textsc{add} 3s.\textsc{unm} thing how do-\textsc{ss}.\textsc{seq} 3.\textsc{dat} see-\textsc{Np}-\textsc{pr}.2s that1 \textsc{add} 3.\textsc{dat} hear-\textsc{Np}-\textsc{pr}.2s that1 3p.\textsc{acc} tell-\textsc{ss}.\textsc{sim}-be-\textsc{Np}-\textsc{fu}.2s\\
\glt`And you will keep telling them that which you see and which you hear him do.'
\z
The chaining structure is also used to coordinate relative clauses \REF{ex:8:x1463} and complement clauses that have a nominalized verb (5.\ref{ex:5:x1845}), copied as \REF{ex:8:x1848} below:
\ea%x1463
\label{ex:8:x1463}
\gll {\ob}\textstyleEmphasizedVernacularWords{Ni} \textstyleEmphasizedVernacularWords{manina} \textstyleEmphasizedVernacularWords{urup-ep} \textstyleEmphasizedVernacularWords{episowa} \textstyleEmphasizedVernacularWords{perek-a-man} \textstyleEmphasizedVernacularWords{nain}{\cb}\textsubscript{\textsc{rc}} auwa p-ikiw-om-aka.\\
2p.\textsc{unm} garden ascend-\textsc{ss}.\textsc{seq} tobacco pick-\textsc{pa}-2p that1 1s/p.father \textsc{\textsc{bp}x}-go-\textsc{ben}-\textsc{bnfy}2.\textsc{imp}.2p\\
\glt`Take to father the tobacco that you went up to the garden and picked.'
\z
\ea%x1848
\label{ex:8:x1848}
\gll Toiyan iiriw maak-ep-pu-a-mik, {\ob}\textbf{uuriw} \textbf{yia} \textstyleEmphasizedVernacularWords{aaw-ep} \textstyleEmphasizedVernacularWords{Madang} \textstyleEmphasizedVernacularWords{ikiw-owa}{\cb}\textsubscript{\textsc{cc}} nain\\
Toiyan already tell-\textsc{ss}.\textsc{seq}-\textsc{cmpl}-\textsc{pa}-1/3p morning 1p.\textsc{acc} take-\textsc{ss}.\textsc{seq} Madang go-\textsc{nmz} that1\\
\glt`We already told Toiyan about taking us in the morning and going to Madang.'
\z | {
"alphanum_fraction": 0.7461321336,
"avg_line_length": 69.1335331335,
"ext": "tex",
"hexsha": "407bbf5e7965c638fc1eddc582c5d8b57b50783d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b77322ca18c02451e56d53b703dc4481e2297394",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "langsci/67",
"max_forks_repo_path": "chapters/8.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_issues_repo_issues_event_max_datetime": "2016-06-15T07:58:16.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-06-15T07:56:22.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "langsci/sidl",
"max_issues_repo_path": "finished/Berghall/chapters/8.tex",
"max_line_length": 1254,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "ca113bd66d56345895af9a6d5bd9adbcde69fc22",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "langsci/sidl",
"max_stars_repo_path": "finished/Berghall/chapters/8.tex",
"max_stars_repo_stars_event_max_datetime": "2020-03-09T06:28:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-03-09T06:28:07.000Z",
"num_tokens": 71922,
"size": 207608
} |
% \chapter{Conclusion}\label{ch:conclusion}
\section{Conclusion}\label{sec:conclusion}
We assembled 2020's state-of-the-art single-view view-synthesis pipeline. We applied Multiplane Images, which are essentially mini local-light-field representations, to the field of 3D video chat because they are one of the first representations capable of real-time, high-quality, spatially consistent view synthesis. We finished implementing both directions of a potentially real-time rendering pipeline that takes in the head pose of each ``viewer'' video frame and re-renders the corresponding ``viewee'' video frame --- the one that syncs with the timestamp of the ``viewer''.
| {
"alphanum_fraction": 0.8036253776,
"avg_line_length": 132.4,
"ext": "tex",
"hexsha": "24ccbadd161a42f940ddaa4f66c63110073ffa35",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c1161004f6f65f40143754d87a4663796a705a5d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "au001/thesis-template",
"max_forks_repo_path": "chapters/ch5-discussion-conclusion/sec2-conclusion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c1161004f6f65f40143754d87a4663796a705a5d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "au001/thesis-template",
"max_issues_repo_path": "chapters/ch5-discussion-conclusion/sec2-conclusion.tex",
"max_line_length": 573,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c1161004f6f65f40143754d87a4663796a705a5d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "au001/thesis-template",
"max_stars_repo_path": "chapters/ch5-discussion-conclusion/sec2-conclusion.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 139,
"size": 662
} |
\subsection{Computer Science}
The official course plan can be found here:
\href{http://www.uq.edu.au/study/plan_display.html?acad_plan=COSCIX2030}{\nolinkurl{http://www.uq.edu.au/study/plan_display.html?acad_plan=COSCIX2030}}
For Science students, there is also this helpful guide:
\href{http://planner.science.uq.edu.au/content/bsc/computer-science-major}{\nolinkurl{http://planner.science.uq.edu.au/content/bsc/computer-science-major}}
\subsubsection{First Year}
\begin{center}
\begin{multicols}{2}
Semester 1 \\
\courselink{CSSE1001} \\
\courselink{INFS1200} \\
\courselink{MATH1061} \\
\textbf{SCIE1000} \\
\vfill
\columnbreak
Semester 2 \\
\courselink{STAT1201} \\
\courselink{MATH1051} \\
\textbf{Elective} \\
\textbf{Elective} \\
\end{multicols}
\end{center}
\subsubsection{Second Year}
\begin{center}
\begin{multicols}{2}
Semester 1 \\
\courselink{CSSE2002} \\
\courselink{CSSE2010} \\
\textbf{Elective} \\
\textbf{Elective} \\
\vfill
\columnbreak
Semester 2 \\
\courselink{CSSE2310} \\
\textbf{Elective} \\
\textbf{Elective} \\
\textbf{Elective} \\
\end{multicols}
\end{center}
\subsubsection{Third Year}
\begin{center}
\begin{multicols}{2}
Semester 1 \\
\textbf{Elective} \\
\textbf{Elective} \\
\textbf{Elective} \\
\textbf{Elective} \\
\vfill
\columnbreak
Semester 2 \\
\courselink{COMP3506} \\
\courselink{DECO3801} \\
\textbf{Elective} \\
\textbf{Elective} \\
\end{multicols}
\end{center} | {
"alphanum_fraction": 0.7339055794,
"avg_line_length": 21.1818181818,
"ext": "tex",
"hexsha": "1bd9c037d7dadf953f40ff69c27d3ad4ca29b95b",
"lang": "TeX",
"max_forks_count": 14,
"max_forks_repo_forks_event_max_datetime": "2017-06-15T12:18:41.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-08-04T13:34:03.000Z",
"max_forks_repo_head_hexsha": "a026bfdb8f7b2085509995bd0f99ece30fa51442",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "UQComputingSociety/subject-guide",
"max_forks_repo_path": "tex/courses/csci.tex",
"max_issues_count": 22,
"max_issues_repo_head_hexsha": "a026bfdb8f7b2085509995bd0f99ece30fa51442",
"max_issues_repo_issues_event_max_datetime": "2017-06-18T07:36:48.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-08-04T13:48:34.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "UQComputingSociety/subject-guide",
"max_issues_repo_path": "tex/courses/csci.tex",
"max_line_length": 155,
"max_stars_count": 8,
"max_stars_repo_head_hexsha": "a026bfdb8f7b2085509995bd0f99ece30fa51442",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "UQComputingSociety/subject-guide",
"max_stars_repo_path": "tex/courses/csci.tex",
"max_stars_repo_stars_event_max_datetime": "2021-02-07T03:00:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-08-04T14:08:24.000Z",
"num_tokens": 507,
"size": 1398
} |
\hypertarget{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException}{
\section{glite::wms::jdl::Ad\-Semantic\-Path\-Exception Class Reference}
\label{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException}\index{glite::wms::jdl::AdSemanticPathException@{glite::wms::jdl::AdSemanticPathException}}
}
{\tt \#include $<$Request\-Ad\-Exceptions.h$>$}
Inheritance diagram for glite::wms::jdl::Ad\-Semantic\-Path\-Exception::\begin{figure}[H]
\begin{center}
\leavevmode
\includegraphics[height=3cm]{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException}
\end{center}
\end{figure}
\subsection*{Public Member Functions}
\begin{CompactItemize}
\item
\hyperlink{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException_a0}{Ad\-Semantic\-Path\-Exception} (std::string file, int line, std::string method, int code, std::string attr\_\-name, std::string path\_\-name)
\end{CompactItemize}
\subsection{Detailed Description}
\hyperlink{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException}{Ad\-Semantic\-Path\-Exception} - raised when a mandatory attribute is missing from the class\-Ad
\subsection{Constructor \& Destructor Documentation}
\hypertarget{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException_a0}{
\index{glite::wms::jdl::AdSemanticPathException@{glite::wms::jdl::Ad\-Semantic\-Path\-Exception}!AdSemanticPathException@{AdSemanticPathException}}
\index{AdSemanticPathException@{AdSemanticPathException}!glite::wms::jdl::AdSemanticPathException@{glite::wms::jdl::Ad\-Semantic\-Path\-Exception}}
\subsubsection[AdSemanticPathException]{\setlength{\rightskip}{0pt plus 5cm}glite::wms::jdl::Ad\-Semantic\-Path\-Exception::Ad\-Semantic\-Path\-Exception (std::string {\em file}, int {\em line}, std::string {\em method}, int {\em code}, std::string {\em attr\_\-name}, std::string {\em path\_\-name})}}
\label{classglite_1_1wms_1_1jdl_1_1AdSemanticPathException_a0}
Raised when a Path attribute is missing, a specified path cannot be found, or incompatible attributes are specified together
The documentation for this class was generated from the following file:\begin{CompactItemize}
\item
\hyperlink{RequestAdExceptions_8h}{Request\-Ad\-Exceptions.h}\end{CompactItemize}
| {
"alphanum_fraction": 0.797752809,
"avg_line_length": 56.2105263158,
"ext": "tex",
"hexsha": "9453a19ac9f0559e4cd42a0beb9233ddb25eb66b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5b2adda72ba13cf2a85ec488894c2024e155a4b5",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "italiangrid/wms",
"max_forks_repo_path": "users-guide/WMS/autogen/jdl/classglite_1_1wms_1_1jdl_1_1AdSemanticPathException.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5b2adda72ba13cf2a85ec488894c2024e155a4b5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "italiangrid/wms",
"max_issues_repo_path": "users-guide/WMS/autogen/jdl/classglite_1_1wms_1_1jdl_1_1AdSemanticPathException.tex",
"max_line_length": 302,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "5b2adda72ba13cf2a85ec488894c2024e155a4b5",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "italiangrid/wms",
"max_stars_repo_path": "users-guide/WMS/autogen/jdl/classglite_1_1wms_1_1jdl_1_1AdSemanticPathException.tex",
"max_stars_repo_stars_event_max_datetime": "2019-01-18T02:19:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-18T02:19:18.000Z",
"num_tokens": 682,
"size": 2136
} |
\vfill \eject
\section{{\tt allInOne.c} -- A Serial $QR$ Driver Program}
\label{section:QR-serial-driver}
\begin{verbatim}
/* QRallInOne.c */
#include "../../misc.h"
#include "../../FrontMtx.h"
#include "../../SymbFac.h"
/*--------------------------------------------------------------------*/
int
main ( int argc, char *argv[] ) {
/*
--------------------------------------------------
QR all-in-one program
(1) read in matrix entries and form InpMtx object
of A and A^TA
(2) form Graph object of A^TA
(3) order matrix and form front tree
(4) get the permutation, permute the matrix and
front tree and get the symbolic factorization
(5) compute the numeric factorization
(6) read in right hand side entries
(7) compute the solution
created -- 98jun11, cca
--------------------------------------------------
*/
/*--------------------------------------------------------------------*/
char *matrixFileName, *rhsFileName ;
ChvManager *chvmanager ;
DenseMtx *mtxB, *mtxX ;
double facops, imag, real, value ;
double cpus[10] ;
ETree *frontETree ;
FILE *inputFile, *msgFile ;
FrontMtx *frontmtx ;
Graph *graph ;
int ient, irow, jcol, jrhs, jrow, msglvl, neqns,
nedges, nent, nrhs, nrow, seed, type ;
InpMtx *mtxA ;
IV *newToOldIV, *oldToNewIV ;
IVL *adjIVL, *symbfacIVL ;
SubMtxManager *mtxmanager ;
/*--------------------------------------------------------------------*/
/*
--------------------
get input parameters
--------------------
*/
if ( argc != 7 ) {
fprintf(stdout,
"\n usage: %s msglvl msgFile type matrixFileName rhsFileName seed"
"\n msglvl -- message level"
"\n msgFile -- message file"
"\n type -- type of entries"
"\n 1 (SPOOLES_REAL) -- real entries"
"\n 2 (SPOOLES_COMPLEX) -- complex entries"
"\n matrixFileName -- matrix file name, format"
"\n nrow ncol nent"
"\n irow jcol entry"
"\n ..."
"\n note: indices are zero based"
"\n rhsFileName -- right hand side file name, format"
"\n nrow "
"\n entry[0]"
"\n ..."
"\n entry[nrow-1]"
"\n seed -- random number seed, used for ordering"
"\n", argv[0]) ;
return(0) ;
}
msglvl = atoi(argv[1]) ;
if ( strcmp(argv[2], "stdout") == 0 ) {
msgFile = stdout ;
} else if ( (msgFile = fopen(argv[2], "a")) == NULL ) {
fprintf(stderr, "\n fatal error in %s"
"\n unable to open file %s\n",
argv[0], argv[2]) ;
return(-1) ;
}
type = atoi(argv[3]) ;
matrixFileName = argv[4] ;
rhsFileName = argv[5] ;
seed = atoi(argv[6]) ;
/*--------------------------------------------------------------------*/
/*
--------------------------------------------
STEP 1: read the entries from the input file
and create the InpMtx object of A
--------------------------------------------
*/
inputFile = fopen(matrixFileName, "r") ;
fscanf(inputFile, "%d %d %d", &nrow, &neqns, &nent) ;
mtxA = InpMtx_new() ;
InpMtx_init(mtxA, INPMTX_BY_ROWS, type, nent, 0) ;
if ( type == SPOOLES_REAL ) {
for ( ient = 0 ; ient < nent ; ient++ ) {
fscanf(inputFile, "%d %d %le", &irow, &jcol, &value) ;
InpMtx_inputRealEntry(mtxA, irow, jcol, value) ;
}
} else {
for ( ient = 0 ; ient < nent ; ient++ ) {
fscanf(inputFile, "%d %d %le %le", &irow, &jcol, &real, &imag) ;
InpMtx_inputComplexEntry(mtxA, irow, jcol, real, imag) ;
}
}
fclose(inputFile) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n input matrix") ;
InpMtx_writeForHumanEye(mtxA, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
----------------------------------------
STEP 2: read the right hand side entries
----------------------------------------
*/
inputFile = fopen(rhsFileName, "r") ;
fscanf(inputFile, "%d %d", &nrow, &nrhs) ;
mtxB = DenseMtx_new() ;
DenseMtx_init(mtxB, type, 0, 0, nrow, nrhs, 1, nrow) ;
DenseMtx_zero(mtxB) ;
if ( type == SPOOLES_REAL ) {
for ( irow = 0 ; irow < nrow ; irow++ ) {
fscanf(inputFile, "%d", &jrow) ;
for ( jrhs = 0 ; jrhs < nrhs ; jrhs++ ) {
fscanf(inputFile, "%le", &value) ;
DenseMtx_setRealEntry(mtxB, jrow, jrhs, value) ;
}
}
} else {
for ( irow = 0 ; irow < nrow ; irow++ ) {
fscanf(inputFile, "%d", &jrow) ;
for ( jrhs = 0 ; jrhs < nrhs ; jrhs++ ) {
fscanf(inputFile, "%le %le", &real, &imag) ;
DenseMtx_setComplexEntry(mtxB, jrow, jrhs, real, imag) ;
}
}
}
fclose(inputFile) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n rhs matrix in original ordering") ;
DenseMtx_writeForHumanEye(mtxB, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
-------------------------------------------------
STEP 3 : find a low-fill ordering
(1) create the Graph object for A^TA or A^HA
(2) order the graph using multiple minimum degree
-------------------------------------------------
*/
graph = Graph_new() ;
adjIVL = InpMtx_adjForATA(mtxA) ;
nedges = IVL_tsize(adjIVL) ;
Graph_init2(graph, 0, neqns, 0, nedges, neqns, nedges, adjIVL,
NULL, NULL) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n graph of A^T A") ;
Graph_writeForHumanEye(graph, msgFile) ;
fflush(msgFile) ;
}
frontETree = orderViaMMD(graph, seed, msglvl, msgFile) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n front tree from ordering") ;
ETree_writeForHumanEye(frontETree, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
-----------------------------------------------------
STEP 4: get the permutation, permute the matrix and
front tree and get the symbolic factorization
-----------------------------------------------------
*/
oldToNewIV = ETree_oldToNewVtxPerm(frontETree) ;
newToOldIV = ETree_newToOldVtxPerm(frontETree) ;
InpMtx_permute(mtxA, NULL, IV_entries(oldToNewIV)) ;
InpMtx_changeStorageMode(mtxA, INPMTX_BY_VECTORS) ;
symbfacIVL = SymbFac_initFromGraph(frontETree, graph) ;
IVL_overwrite(symbfacIVL, oldToNewIV) ;
IVL_sortUp(symbfacIVL) ;
ETree_permuteVertices(frontETree, oldToNewIV) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n old-to-new permutation vector") ;
IV_writeForHumanEye(oldToNewIV, msgFile) ;
fprintf(msgFile, "\n\n new-to-old permutation vector") ;
IV_writeForHumanEye(newToOldIV, msgFile) ;
fprintf(msgFile, "\n\n front tree after permutation") ;
ETree_writeForHumanEye(frontETree, msgFile) ;
fprintf(msgFile, "\n\n input matrix after permutation") ;
InpMtx_writeForHumanEye(mtxA, msgFile) ;
fprintf(msgFile, "\n\n symbolic factorization") ;
IVL_writeForHumanEye(symbfacIVL, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
------------------------------------------
STEP 5: initialize the front matrix object
------------------------------------------
*/
frontmtx = FrontMtx_new() ;
mtxmanager = SubMtxManager_new() ;
SubMtxManager_init(mtxmanager, NO_LOCK, 0) ;
if ( type == SPOOLES_REAL ) {
FrontMtx_init(frontmtx, frontETree, symbfacIVL, type,
SPOOLES_SYMMETRIC, FRONTMTX_DENSE_FRONTS,
SPOOLES_NO_PIVOTING, NO_LOCK, 0, NULL,
mtxmanager, msglvl, msgFile) ;
} else {
FrontMtx_init(frontmtx, frontETree, symbfacIVL, type,
SPOOLES_HERMITIAN, FRONTMTX_DENSE_FRONTS,
SPOOLES_NO_PIVOTING, NO_LOCK, 0, NULL,
mtxmanager, msglvl, msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
-----------------------------------------
STEP 6: compute the numeric factorization
-----------------------------------------
*/
chvmanager = ChvManager_new() ;
ChvManager_init(chvmanager, NO_LOCK, 1) ;
DVzero(10, cpus) ;
facops = 0.0 ;
FrontMtx_QR_factor(frontmtx, mtxA, chvmanager,
cpus, &facops, msglvl, msgFile) ;
ChvManager_free(chvmanager) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n factor matrix") ;
fprintf(msgFile, "\n facops = %9.2f", facops) ;
FrontMtx_writeForHumanEye(frontmtx, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
--------------------------------------
STEP 7: post-process the factorization
--------------------------------------
*/
FrontMtx_postProcess(frontmtx, msglvl, msgFile) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n factor matrix after post-processing") ;
FrontMtx_writeForHumanEye(frontmtx, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
-------------------------------
STEP 8: solve the linear system
-------------------------------
*/
mtxX = DenseMtx_new() ;
DenseMtx_init(mtxX, type, 0, 0, neqns, nrhs, 1, neqns) ;
FrontMtx_QR_solve(frontmtx, mtxA, mtxX, mtxB, mtxmanager,
cpus, msglvl, msgFile) ;
if ( msglvl > 1 ) {
fprintf(msgFile, "\n\n solution matrix in new ordering") ;
DenseMtx_writeForHumanEye(mtxX, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
-------------------------------------------------------
STEP 9: permute the solution into the original ordering
-------------------------------------------------------
*/
DenseMtx_permuteRows(mtxX, newToOldIV) ;
if ( msglvl > 0 ) {
fprintf(msgFile, "\n\n solution matrix in original ordering") ;
DenseMtx_writeForHumanEye(mtxX, msgFile) ;
fflush(msgFile) ;
}
/*--------------------------------------------------------------------*/
/*
------------------------
free the working storage
------------------------
*/
InpMtx_free(mtxA) ;
FrontMtx_free(frontmtx) ;
Graph_free(graph) ;
DenseMtx_free(mtxX) ;
DenseMtx_free(mtxB) ;
ETree_free(frontETree) ;
IV_free(newToOldIV) ;
IV_free(oldToNewIV) ;
IVL_free(symbfacIVL) ;
SubMtxManager_free(mtxmanager) ;
/*--------------------------------------------------------------------*/
return(1) ; }
/*--------------------------------------------------------------------*/
\end{verbatim}
| {
"alphanum_fraction": 0.4974526579,
"avg_line_length": 34.6766666667,
"ext": "tex",
"hexsha": "f7962abab35545307efde9af6be3f269b22b258e",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alleindrach/calculix-desktop",
"max_forks_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial_driver.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alleindrach/calculix-desktop",
"max_issues_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial_driver.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alleindrach/calculix-desktop",
"max_stars_repo_path": "ccx_prool/SPOOLES.2.2/documentation/AllInOne/QR_serial_driver.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2796,
"size": 10403
} |
\intermediate{\subsection{Downloads and Installations}}
\step{Download and install \beast.}{
\beast is available at
\href{http://beast.bio.ed.ac.uk/Main_Page}{\url{http://beast.bio.ed.ac.uk/Main_Page}}.
This tutorial is written for version 1.7.5 of \beast.
}
\step{Download the data and other files from Google Drive.}{
Download the \localfile{div-time-tutorial.zip} archive from
\href{http://www.phyletica.com/downloads/div-time-tutorial.zip}{\url{http://www.phyletica.com/downloads/div-time-tutorial.zip}}
to your desktop, and unzip the archive.
You should now have a \localfile{div-time-tutorial} folder on your desktop,
and it should contain the files and folders shown in
Box~\ref{box:tutorialDir}.
\begin{textbox}
\centering
\fbox{\begin{minipage}[c][10em][c]{0.5\textwidth}
\ttfamily
\begin{compactitem}
\item div-time-tutorial/
\begin{compactitem}
\item crocodylia-cytb.nex
\item yule.py
\item output/
\begin{compactitem}
\item crocodylia-cytb-run1.log
\item crocodylia-cytb-run2.log
\item crocodylia-cytb-run1.trees
\item crocodylia-cytb-run2.trees
\end{compactitem}
\end{compactitem}
\end{compactitem}
\end{minipage}}
\caption{The files required for this tutorial.}
\label{box:tutorialDir}
\end{textbox}
}
\intermediate{\subsection{Setting up XML file with \program{BEAUTi}}}
\step{Launch BEAUTi.}{Begin by launching the \program{BEAUTi} program. If you
are using Mac OSX or Windows, you should be able to do this by double
clicking on the application. If everything is working correctly, a window
should appear that looks something like Figure~\ref{fig:beautiInit}.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.7\textwidth]{../screenshots/beauti-init.jpg}}
\caption{BEAUTi window.}
\label{fig:beautiInit}
\end{figure}
}
\step{Import the data in \localfile{crocodylia-cytb.nex}.}{
Import the sequence data from the file \localfile{crocodylia-cytb.nex}
using the drop-down menu \subItem{File}{Import Data} or using the
\plusbutton button near the bottom-left corner of the window.
You should be able to confirm that \program{BEAUTi} successfully imported
24 sequences of nucleotides of length 1137
(Figure~\ref{fig:beautiDataImported}).
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.8\textwidth]{../screenshots/beauti-data-imported.jpg}}
\caption{The data successfully loaded by BEAUTi.}
\label{fig:beautiDataImported}
\end{figure}
}
\step{Inspect the alignment.}{
Double click on the file name \localfile{crocodylia-cytb.nex} to bring up a
window displaying the aligned sequences. It is always good practice to make
sure everything looks as you expect. The cytochrome b gene is
protein-coding, and aligns well across Crocodylia without gaps.
}
\step{Define taxon sets.}{
Next, we need to define some sets of taxa. Later, we will be able to use
each of these sets to place priors on the age of their most recent common
ancestor (MRCA).
Let's start by defining the set for the clade
containing the fossil taxon \emph{Navajosuchus mooki}; this clade has
the family name Alligatoridae.
Click on the \menutab{Taxa} tab. Once in the \menutab{Taxa}
tab, click on the \plusbutton near the bottom-left corner of the window.
This will create an untitled taxon set in the left-most box in the window.
Change the name of this taxon set to \taxonset{Alligatoridae} and enter \fieldvalue{65}
into the Age column. This age is simply a starting value for the age of the
MRCA of \taxonset{Alligatoridae}. It will ensure that the initial tree used
to start the analysis is consistent with the lower bound of our fossil
calibration (which will be 64 million years).
We do not want to constrain \taxonset{Alligatoridae} to be monophyletic, so
leave the \field{Mono?} box unchecked. Also, we are confident that
\emph{Navajosuchus mooki} is nested within \taxonset{Alligatoridae}, and so
we will leave the \field{Stem?} box unchecked. Because we only have a
single tree, you can leave the \field{Tree} column unchanged.
Next, we need to highlight the species that belong to
\taxonset{Alligatoridae} within the \field{Excluded Taxa} window, and move
them over to the \field{Included Taxa} window using the ``$\to$'' button.
\taxonset{Alligatoridae} includes the following genera:
\begin{compactitem}
\item \emph{Alligator}
\item \emph{Caiman}
\item \emph{Melanosuchus}
\item \emph{Paleosuchus}
\end{compactitem}
Highlight the species for these genera and move them over to the
\field{Included Taxa} window.
If you did everything correctly, your BEAUTi window should look like
Figure~\ref{fig:beautiAlligatoridae}.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.9\textwidth]{../screenshots/beauti-taxon-set-alligatoridae.jpg}}
\caption{The taxon set \taxonset{Alligatoridae} correctly defined.}
\label{fig:beautiAlligatoridae}
\end{figure}
Next, let's define a taxon set for the genus \emph{Crocodylus}, which we
will later use to specify an age prior corresponding to the fossil
\emph{Crocodylus palaeindicus}. Click on the \plusbutton again to create a
new taxon set, and name it \taxonset{Crocodylus}. Specify a starting age of
\fieldvalue{13}, and leave the \field{Mono?} unchecked.
We are confident that \emph{Crocodylus palaeindicus} is more closely
related to all \emph{Crocodylus} species than to any other crocodylians.
However, we are not confident that it is nested within extant species of
\emph{Crocodylus} and suspect it is actually sister to them (as illustrated
in Figure~\ref{fig:crocFossils}). As a result, we want to check the
\field{Stem?} box. This specifies that the node we are interested in
calibrating is the MRCA of all \emph{Crocodylus} sequences and their next
closest relative (i.e., the stem node of \emph{Crocodylus}).
Make sure the \taxonset{Crocodylus} taxon set is highlighted, and then
highlight all the \emph{Crocodylus} species in the \field{Excluded Taxa}
window and move them over to the \field{Included Taxa} window.
Lastly, we need to define a taxon set for the genus \emph{Caiman}, which we
will later use when specifying a calibration informed by the age of the
oldest known \emph{Caiman} fossils. Click the \plusbutton to create a new
taxon set, name it \taxonset{Caiman}, specify a starting age of
\fieldvalue{10}, and leave \field{Mono?} unchecked.
As with \emph{Crocodylus} we don't know if the oldest \emph{Caiman} fossil
taxa are nested within or sister to extant \emph{Caiman} species, and so we
need to check the \field{Stem?} box.
Highlight the three \emph{Caiman} species and move them over to the
\field{Included Taxa} window.
We will also be specifying a prior for the age of the root node of the
tree, but we do not need to define a taxon set for this, because the root
node is always defined by BEAUTi (you will see this later).
Before you proceed to the next step, double check the three taxon sets
you just defined and make sure you did not make a mistake with their
ages or in selecting the species associated with them. Even a single
misplaced species can lead to some very bizarre results!
If you did everything correctly, your BEAUTi window should look similar to
Figure~\ref{fig:beautiTaxSets}.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.9\textwidth]{../screenshots/beauti-taxon-sets.jpg}}
\caption{All three taxon sets defined.}
\label{fig:beautiTaxSets}
\end{figure}
}
% \step{Define geographic traits.}{
% Next we will use the \menutab{Traits} tab to define a new character for
% the geographic location of each species.
% This step is unrelated to divergence-time estimation. It will allow us to
% estimate the geographic locations of ancestral species across the
% phylogeny.
% Investigators are often interested in estimating ancestral states for
% characters of interest, so it is worth seeing how that can be done for in
% \beast while jointly estimating the phylogeny and divergence times.
% However, you will be learning all about ancestral character-state
% estimation in coming weeks, and so we will gloss over a lot of details to
% maintain the focus on divergence-time estimation.
% Once in the \menutab{Traits} tab, click the \plusbutton button (or the
% \field{Add trait} button). Once the \field{Create or Import Trait(s)}
% sub-window pops up, change the \field{Name} to \fieldvalue{geography}, set
% the \field{Type} to \fieldvalue{discrete}, and check the \field{Create a
% corresponding data partition} box
% (Figure~\ref{fig:beautiCreateTraitSubWindow}). Then, click \field{OK}.
% \begin{figure}[htbp]
% \centering
% \fbox{\includegraphics[width=0.4\textwidth]{../screenshots/beauti-create-trait-subwindow.jpg}}
% \caption{Creating a new trait.}
% \label{fig:beautiCreateTraitSubWindow}
% \end{figure}
% Next, click the \field{Guess trait values} button. Once the sub-window
% pops up, select \field{Defined by its order} and set its drop-down field to
% \fieldvalue{last}. Put \fieldvalue{\_} (underscore) in the \field{with
% delimiter} field (Figure~\ref{fig:beautiGuessTraitSubWindow}). Click
% \field{OK}.
% If successful, the \menutab{Traits} tab should look like
% Figure~\ref{fig:beautiTraits}.
% \begin{figure}[htbp]
% \centering
% \fbox{\includegraphics[width=0.5\textwidth]{../screenshots/beauti-guess-trait-subwindow.jpg}}
% \caption{\field{Guess trait} options.}
% \label{fig:beautiGuessTraitSubWindow}
% \end{figure}
% \begin{figure}[htbp]
% \centering
% \fbox{\includegraphics[width=0.8\textwidth]{../screenshots/beauti-traits.jpg}}
% \caption{Geographic traits successfully defined.}
% \label{fig:beautiTraits}
% \end{figure}
% }
\step{Define Markov-chain models of substitution}{
Next, we need to set up our continuous-time Markov chain (CTMC) model
of nucleotide substitution under the \menutab{Sites} tab.
Once in the \menutab{Sites} tab, and with \fieldvalue{crocodylia-cytb} selected
in the left \field{Substitution Model} window, select the following options:
\begin{compactdesc}
\centering
\item[\field{Substitution Model:}] \fieldvalue{HKY}
\item[\field{Base frequencies:}] \fieldvalue{Estimated}
\item[\field{Site Heterogeneity Model:}] \fieldvalue{Gamma}
\item[\field{Number of Gamma Categories:}] \fieldvalue{4}
\item[\field{Partition into codon positions:}] \fieldvalue{3 partitions: positions 1, 2, 3}
\end{compactdesc}
Lastly, check all three \field{Unlink parameters} options (Figure~\ref{fig:beautiCytbModel}).
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.8\textwidth]{../screenshots/beauti-cytb-model-no-geo.jpg}}
\caption{The CTMC model of nucleotide substitution for cytb.}
\label{fig:beautiCytbModel}
\end{figure}
% Next, select \fieldvalue{geography} in the left \field{Substitution Model}
% window, and select \fieldvalue{Asymmetric substitution model} for the
% \field{Discrete Trait Substitution Model} drop-down option. Check the box
% for \field{Infer social network with BSSVS} (Figure~\ref{fig:beautiGeoModel}).
% \begin{figure}[htbp]
% \centering
% \fbox{\includegraphics[width=0.8\textwidth]{../screenshots/beauti-geo-model.jpg}}
% \caption{The CTMC model for the geographic character.}
% \label{fig:beautiGeoModel}
% \end{figure}
}
\step{Define clock models.}{
Next, we need to move to the \menutab{Clocks} tab and specify our models of
branch rates across the tree. BEAUTi provides one strict-clock and
three relaxed-clock options:
\begin{compactdesc}
\item[\fieldvalue{Strict clock}] Assumes a constant rate of
substitution across all the branches of the tree.
\item[\fieldvalue{Lognormal relaxed clock (Uncorrelated)}] Assumes that
the rates of substitution on each branch of the tree are independent
and drawn from a single, discretized lognormal distribution
\citep{Drummond2006}.
\item[\fieldvalue{Exponential relaxed clock (Uncorrelated)}] Assumes that
the rates of substitution on each branch of the tree are independent
and drawn from a single, exponential distribution
\citep{Drummond2006}.
\item[\fieldvalue{Random local clock}] Uses Bayesian stochastic search
variable selection (BSSVS) to average over local clock models
(i.e., it averages over the number of rate changes and their
locations) \citep{DrummondSuchard2010}.
\end{compactdesc}
In general, it is best to compare (or sample over) different clock models.
But, for the sake of keeping this tutorial simple, we will select the most
commonly used relaxed-clock model for the cytb data.
For the \fieldvalue{crocodylia-cytb} data, select the \fieldvalue{Lognormal
relaxed clock (Uncorrelated)}.
% For the \fieldvalue{geography} trait, select the \fieldvalue{Strict clock
% model}.
Make sure to click the \field{Estimate} box
(Figure~\ref{fig:beautiClocks}). You do not need to worry about the
\field{Clock Model Group} options in the lower window.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.8\textwidth]{../screenshots/beauti-clocks-no-geo.jpg}}
\caption{The clock-model settings.}
\label{fig:beautiClocks}
\end{figure}
}
\step{Select the tree prior.}{
Next, let's move to the \menutab{Trees} tab to specify the prior and
starting conditions for our tree.
Select the \fieldvalue{Speciation: Birth-Death Process} option from the
drop-down for the \field{Tree Prior} option.
Select \fieldvalue{Random starting tree} in the lower window
(Figure~\ref{fig:beautiTrees}).
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.8\textwidth]{../screenshots/beauti-trees.jpg}}
\caption{The tree-prior settings.}
\label{fig:beautiTrees}
\end{figure}
BEAUTi offers a number of tree prior options. Those labeled
\fieldvalue{Speciation} are based on stochastic branching processes that
assume the tips of the phylogeny are species. The \fieldvalue{Coalescent}
tree priors are based on stochastic processes of lineage coalescence that
assume the tips of the tree are gene copies within a population.
}
% \step{Set the \field{States} options.}{
% Move to the \menutab{States} tab.
% With the \fieldvalue{crocodylia-cytb} \field{Partition} selected in the
% left column, leave all of the settings unchecked and the \field{Error
% Model} \fieldvalue{Off} (Figure~\ref{fig:beautiCytbStates}).
% Next, select the \fieldvalue{geography} \field{Partition} in the left column,
% check the \field{Reconstruct states at all ancestors} box, and leave all other
% options unchecked (Figure~\ref{fig:beautiGeoStates}).
% \begin{figure}[htbp]
% \centering
% \fbox{\includegraphics[width=0.7\textwidth]{../screenshots/beauti-cytb-states.jpg}}
% \caption{The \menutab{States} settings for cytb.}
% \label{fig:beautiCytbStates}
% \end{figure}
% \begin{figure}[htbp]
% \centering
% \fbox{\includegraphics[width=0.5\textwidth]{../screenshots/beauti-geo-states.jpg}}
% \caption{The \menutab{States} settings for geography.}
% \label{fig:beautiGeoStates}
% \end{figure}
% }
\step{Select priors for parameters.}{
Move to the \menutab{Priors} tab.
Here we see that all of the model parameters and statistics that we
specified under the other tabs are listed.
Now, we can specify prior probability distributions on the
substitution-model parameters, relaxed-clock parameters, tree-prior
parameters, and the time (age) of the most recent common ancestor (TMRCA)
of the taxon sets we specified earlier.
Let's start by selecting priors for the \fieldvalue{CP1.mu},
\fieldvalue{CP2.mu}, and \fieldvalue{CP3.mu} parameters. These are
relative-rate parameters that allow sites at the three codon positions to
evolve at different rates. Based on our knowledge of the redundancy of the
genetic code, we expect the sites at third-codon position to evolve more
rapidly than the first and second codon sites \emph{a priori}.
So we will specify our priors accordingly.
Click on the \field{Prior} column for the \fieldvalue{CP1.mu} parameter.
In the window that pops up, select an \fieldvalue{Exponential} \field{Prior
Distribution}, and specify a \field{Mean} of \fieldvalue{0.5}.
Do the same for \fieldvalue{CP2.mu}.
For \fieldvalue{CP3.mu}, also select an \fieldvalue{Exponential}
\field{Prior}, but set the \field{Mean} to \fieldvalue{5.0}
(Figure~\ref{fig:beautiPriorsCPmu}).
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-cp1mu.jpg}
\caption{CP1.mu.}
\label{fig:beautiPriorsCP1mu}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-cp2mu.jpg}
\caption{CP2.mu.}
\label{fig:beautiPriorsCP2mu}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-cp3mu.jpg}
\caption{CP3.mu.}
\label{fig:beautiPriorsCP3mu}
\end{subfigure}
\caption{Priors for relative-rate parameters.}
\label{fig:beautiPriorsCPmu}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-ucldmean-no-geo.jpg}
% \caption{crocodylia-cytb.ucld.mean.}
\caption{ucld.mean.}
\label{fig:beautiPriorsUcldMean}
\end{subfigure}
% \begin{subfigure}[b]{0.29\textwidth}
% \includegraphics[width=\textwidth]{../screenshots/beauti-prior-clockrate.jpg}
% \caption{geography.clock.rate.}
% \label{fig:beautiPriorsClockRate}
% \end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-birthrate.jpg}
\caption{birthDeath.meanGrowthRate.}
\label{fig:beautiPriorsBirthRate}
\end{subfigure}
\caption{Priors for clock rates and diversification rate.}
\label{fig:beautiPriorsClocks}
\end{figure}
Next, click on the \field{Prior} column for the
% \fieldvalue{crocodylia-cytb.ucld.mean} parameter.
\fieldvalue{ucld.mean} parameter.
This parameter controls the mean of the log-normal distribution from which
the rates of each branch of the tree are drawn.
Because we will be using fossils to calibrate the overall rate of substitution,
we will use a very diffuse prior for this parameter.
Select \fieldvalue{Exponential} for the \field{Prior}
and specify \fieldvalue{0.05} for the \field{Mean}
(Figure~\ref{fig:beautiPriorsUcldMean}).
Because we will be specifying node-age priors in units of millions of
% years, the mean of 0.05 for prior on \fieldvalue{crocodylia-cytb.ucld.mean}
years, the mean of 0.05 for the prior on \fieldvalue{ucld.mean}
translates to a mean rate of 5\% per million years.
% Specify an \fieldvalue{Exponential} prior for the
% \fieldvalue{geography.clock.rate}, but with a \field{Mean} of
% \fieldvalue{0.01} (Figure~\ref{fig:beautiPriorsClockRate}).
The default prior for the \fieldvalue{birthDeath.meanGrowthRate}
is \fieldvalue{Uniform} from \fieldvalue{0} to \fieldvalue{10000}.
This is a very broad prior.
Based on our knowledge of the crocodylian fossil record, we can get
a rough idea of our prior expectations for this parameter.
Given that the oldest crocodylian fossil is 71.3 million years old, we know
the height of our tree is at least that.
From this, we can calculate a pure-birth (Yule process) rate that has an
expected tree height of 71.3 million years.
I have included a Python script \localfile{yule.py} in the tutorial
download that performs such calculations.
This program is hosted via \href{http://git-scm.com/}{git} on
\href{https://github.com/}{GitHub} at
\href{https://github.com/joaks1/pyule.git}{\url{https://github.com/joaks1/pyule.git}};
license information and documentation are available on the GitHub site.
You do not have to do this now, but if you open a \program{Terminal}
window and invoke the script as follows:
\hspace{1cm}\cmd{python yule.py height 71.3 24}
you get the following output:
\cmd{ntips = 24}\\
\cmd{rate = 0.0389334947792}\\
\cmd{height = 71.3}\\
\cmd{length = 590.750974976}
From this output, we see that the largest pure-birth rate we would expect is
around 0.04 (a minimal Python sketch of this calculation is given after this step).
This allows us to specify a much better prior than the default.
Specify an \fieldvalue{Exponential} prior for the
\fieldvalue{birthDeath.meanGrowthRate}, but with a \field{Mean} of
\fieldvalue{0.1} (Figure~\ref{fig:beautiPriorsBirthRate}).
You will also specify an \fieldvalue{Exponential} prior for the
\fieldvalue{ucld.stdev} parameter.
{\color{red}We will assign you a value to specify for the \field{Mean} of
this distribution.}
At the end of the lab, we will compare our findings to see how sensitive
the analysis is to this prior.
}
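The following is a minimal Python sketch of the kind of calculation that
\localfile{yule.py} performs; the function names and interface here are
illustrative rather than the script's actual code. It assumes the standard
pure-birth expectations: for a Yule tree with $n$ tips and birth rate
$\lambda$, the expected root height is $\sum_{k=2}^{n} 1/(k\lambda)$ and the
expected total tree length is $(n-1)/\lambda$, which reproduces the numbers
shown above.
\begin{verbatim}
# yule_sketch.py -- illustrative re-implementation of the calculation that
# yule.py performs (hypothetical names; not the script's actual interface).

def yule_rate_from_height(height, ntips):
    """Birth rate whose expected root height equals `height` for `ntips` tips."""
    # Expected root height of a Yule tree: sum_{k=2..ntips} 1/(k*rate)
    harmonic_tail = sum(1.0 / k for k in range(2, ntips + 1))
    return harmonic_tail / height

def expected_tree_length(rate, ntips):
    """Expected total branch length of a Yule tree: (ntips - 1)/rate."""
    return (ntips - 1) / rate

if __name__ == "__main__":
    height, ntips = 71.3, 24
    rate = yule_rate_from_height(height, ntips)
    print("ntips  =", ntips)
    print("rate   =", rate)                               # ~0.0389, as above
    print("height =", height)
    print("length =", expected_tree_length(rate, ntips))  # ~590.75
\end{verbatim}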
\step{Select priors for node ages!}{
Next, we need to specify our node-age priors based on fossil information.
For \fieldvalue{tmrca(Alligatoridae)} select \fieldvalue{Gamma} for the
\field{Prior Distribution}, and specify \fieldvalue{2} for both the
\field{Shape} and \field{Scale} and \fieldvalue{64} for the
\field{Offset}
(Figure~\ref{fig:beautiPriorsAlligatoridae}).
For \fieldvalue{tmrca(Crocodylus)} select \fieldvalue{Exponential} for the
\field{Prior Distribution}, and specify \fieldvalue{10} for the mean
and \fieldvalue{12} for the \field{Offset}
(Figure~\ref{fig:beautiPriorsCrocodylus}).
For \fieldvalue{tmrca(Caiman)} select \fieldvalue{Exponential} for the
\field{Prior Distribution}, and specify \fieldvalue{4} for the mean
and \fieldvalue{9} for the \field{Offset}
(Figure~\ref{fig:beautiPriorsCaiman}).
For \fieldvalue{treeModel.rootHeight} select \fieldvalue{Gamma} for the
\field{Prior Distribution}, and specify \fieldvalue{78} for the
\field{Initial value}, \fieldvalue{1.5} for the \field{Shape},
\fieldvalue{6.0} for the \field{Scale}, and \fieldvalue{71.3} for the
\field{Offset}. Also, make sure \field{Truncate to} is unchecked
(Figure~\ref{fig:beautiPriorsRoot}).
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-alligatoridae.jpg}
\caption{tmrca(Alligatoridae).}
\label{fig:beautiPriorsAlligatoridae}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-crocodylus.jpg}
\caption{tmrca(Crocodylus).}
\label{fig:beautiPriorsCrocodylus}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-caiman.jpg}
\caption{tmrca(Caiman).}
\label{fig:beautiPriorsCaiman}
\end{subfigure}
\begin{subfigure}[b]{0.315\textwidth}
\includegraphics[width=\textwidth]{../screenshots/beauti-prior-root.jpg}
\caption{treeModel.rootHeight.}
\label{fig:beautiPriorsRoot}
\end{subfigure}
\caption{Priors for node ages.}
\label{fig:beautiPriorsNodeAges}
\end{figure}
After setting all the above priors, your \menutab{Priors} tab window should
look like Figure~\ref{fig:beautiPriors}.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=1.0\textwidth]{../screenshots/beauti-priors.jpg}}
\caption{The prior settings.}
\label{fig:beautiPriors}
\end{figure}
}
\step{Specify MCMC settings and generate \beast XML files.}{
Next, move to the \menutab{MCMC} tab.
Change the following settings:
\begin{compactdesc}
\centering
\item[\field{Length of chain:}] \fieldvalue{1000000} (1 million)
\item[\field{Echo state to screen every:}] \fieldvalue{1000}
\item[\field{Log parameters every:}] \fieldvalue{1000}
\end{compactdesc}
Leave the remaining options at their default values
(Figure~\ref{fig:beautiMCMC}).
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=1.0\textwidth]{../screenshots/beauti-mcmc.jpg}}
\caption{The MCMC settings.}
\label{fig:beautiMCMC}
\end{figure}
Next, click the \field{Generate BEAST File\ldots} in the bottom-right
corner of the window.
A subwindow will pop up warning you that some of the priors are still at
their default values.
You can ignore this and click \field{Continue}.
Another subwindow will appear for specifying the name and location for
saving the XML file. You can leave the name the same and save the file to
the \localfile{div-time-tutorial} folder on your desktop.
Due to time constraints, in this lab we are running a single, short
MCMC chain to sample from the joint posterior distribution of the model we
just finished specifying.
As a result, the posterior estimates will have a large amount of
MCMC sampling error.
For a dataset of this size, and a model with this many parameters, we need
to run a longer chain in BEAST to get a more robust sample from the
posterior.
We will not do this today, but in general, you should always run multiple,
independent MCMC analyses in order to increase the posterior sample size,
and to help assess whether the chains converged to the same stationary
distribution.
Also, you should always run an MCMC analysis that samples from the joint
prior distribution (i.e., an analysis that ignores the data). This allows
you to evaluate the interaction of all the priors you have specified for
the various parameters, and also gives you an idea of how much the data are
influencing certain parameter estimates.
We will not do this today, but you can do so by checking the \field{Sample
from prior only-create empty alignment} box and creating another XML file
(make sure you change the name!).
Later, when your analysis is running, we will look at results from multiple,
longer chains and from a chain that sampled only from the prior.
}
\intermediate{\subsection{Running the XML file with \beast}}
\step{Run the XML file in \beast.}{
Launch the \beast program. If you are using Mac OSX or Windows, you should
be able to do this by double clicking on the application.
After the \beast window appears, click the \field{Choose File\ldots} button,
and select the XML file you just created (Figure~\ref{fig:beast}).
Click \field{Run}. The analysis should take about 20 minutes.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=0.5\textwidth]{../screenshots/beast.jpg}}
\caption{The \beast GUI window.}
\label{fig:beast}
\end{figure}
}
\intermediate{\subsection{Inspecting previous results with \program{Tracer}}}
\intermediate{
In the \localfile{div-time-tutorial} folder, there is a subfolder called
\localfile{output} containing \localfile{.log} and \localfile{.trees} files
from analyses I ran previously.
}
\step{Inspect previous results with \program{Tracer}.}{
Launch the \program{Tracer} program.
Load the \localfile{crocodylia-cytb-run1.log} and
\localfile{crocodylia-cytb-run2.log} in the \localfile{output} directory
into
\program{Tracer} using \subItem{File}{Import Trace File\ldots} or the
\plusbutton button.
Use Tracer to inspect the behavior of the two MCMC chains.
\question{Does it look like the chains reached stationarity?}
\question{Does it look like both chains converged to the same stationary
distribution?}
\question{What do we call this stationary distribution?}
Now, load the \localfile{crocodylia-cytb-prior.log} file into
\program{Tracer}.
This log file is from the analysis that sampled only from the prior
distribution.
Use \program{Tracer} to compare the prior and posterior samples.
\question{Are any of the parameter estimates similar between the prior and
posterior? Which ones?}
}
\intermediate{\subsection{Summarizing the trees with \program{LogCombiner} and \program{TreeAnnotator}}}
\intermediate{
Once you have reviewed the log files from the independent runs in Tracer and determined that they have
reached stationarity, you can combine the sampled trees into a single tree file.
}
\step{Combine tree files using \program{LogCombiner}.}{
Launch the \program{LogCombiner} program.
Change the \field{File type:} to \field{Tree Files}.
Load the \localfile{crocodylia-cytb-run1.trees} and \localfile{crocodylia-cytb-run2.trees} file in the \localfile{output} directory into
\program{LogCombiner} using the \plusbutton button.
Select the \field{Choose File\ldots} button and specify the \localfile{output}
directory and a file name, \localfile{crocodylia-cytb-runs1and2.trees}.
Specify an appropriate burn-in value based on what you saw in \program{Tracer}.
Click \field{Run}.
}
\intermediate{Now you have a single tree file with all the trees from the two independent runs called \localfile{crocodylia-cytb-runs1and2.trees}.
TreeAnnotator will summarize the trees and identify the topology with the best posterior support, and summarize the age estimates for each node in the tree.
}
\step{Summarize the trees using \program{TreeAnnotator}.}{
Launch the \program{TreeAnnotator} program.
Specify the burnin value as \fieldvalue{0} (we removed the burn-in with \program{LogCombiner}).
For the \field{Target tree type} field, choose \fieldvalue{Maximum clade credibility tree}.
For the \field{Node heights} field, choose \fieldvalue{Median heights}.
Select the \field{Input Tree File} button and select the file \localfile{crocodylia-cytb-runs1and2.trees}.
Select the \field{Output File} button and specify the \localfile{output} directory and a
file name, \localfile{crocodylia-MCC.tre}.
Click \field{Run}.
}
\intermediate{\subsection{Visualizing the tree in \program{FigTree}}}
\step{Look at the summary tree in \program{FigTree}.}{
Launch the \program{FigTree} program, and load the \localfile{crocodylia-MCC.tre} file
you just created with \program{TreeAnnotator}.
Check the \field{Scale Axis} option in the left column, and check the
\subItem{Scale Axis}{Reverse axis} box.
Check the \field{Node Bars} option and select
\fieldvalue{height\_95\%\_HPD} for the \subItem{Node bars}{Display} field.
\question{What is the age of the most recent common ancestor of all
\emph{Crocodylus} species?}
\question{What is the age of the stem node for \emph{Crocodylus}?}
}
\intermediate{\subsection{Inspecting your results with \program{Tracer}}}
\step{Inspect the results of your short analysis with \program{Tracer}.}{
If your analysis has finished, launch the \program{Tracer} program and load
the log file created by \program{BEAST}.
\question{What was the mean you specified for the prior on
\fieldvalue{ucld.stdev}?}
\question{What is your estimate of the mean and 95\% HPD interval for the
age of the stem node for \emph{Crocodylus} (hint: the
\fieldvalue{tmrca(Crocodylus)} statistic)?}
\question{Compare your estimate with your classmates that used a different
prior on \fieldvalue{ucld.stdev}. Are the results sensitive to this prior?
Is there a trend?}
}
| {
"alphanum_fraction": 0.7011452682,
"avg_line_length": 46.9306930693,
"ext": "tex",
"hexsha": "3c9c29e5007040aa9753ec1be7150e0c92796902",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "19aed5f7543e442086e578ae44fd5a2de30c1f5c",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "joaks1/applied-phylogenetics",
"max_forks_repo_path": "div-time-estimation/lab/tutorial/div-time-tutorial-steps.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "19aed5f7543e442086e578ae44fd5a2de30c1f5c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "joaks1/applied-phylogenetics",
"max_issues_repo_path": "div-time-estimation/lab/tutorial/div-time-tutorial-steps.tex",
"max_line_length": 156,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "19aed5f7543e442086e578ae44fd5a2de30c1f5c",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "joaks1/applied-phylogenetics",
"max_stars_repo_path": "div-time-estimation/lab/tutorial/div-time-tutorial-steps.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 9195,
"size": 33180
} |
\chapter{Results}\label{c3}
\section{Overall results}
\autoref{t4} shows the overall results obtained when the criterion hyperparameter is optimised for DT and RF, and the weights hyperparameter is optimised for kNN, through five-fold cross-validation, using both imbalanced and balanced training data. The optimal hyperparameter is the one that produces the highest average F1 score. For each classification performed, the optimal hyperparameters were found to be the non-default values. The mean and standard deviation were obtained by averaging all scores output by all turbines for the optimal hyperparameter. From these results, all three classifiers performed better, with higher mean and lower standard deviation scores, when trained on imbalanced datasets using the multilabel classification approach compared to balanced datasets with separate estimators for each label. The F1 scores for DT, RF and kNN are higher by 0.6\,\%, 0.5\,\% and 1.9\,\% respectively using imbalanced data compared to balanced data. The best performance was by RF using imbalanced data, which had the highest mean and lowest deviation scores. The kNN classifier, meanwhile, produced the results with the lowest mean and highest deviations. An attempt was made to further improve the performance of RF by optimising the number of estimators hyperparameter, but this was not possible as the process was found to exceed the available RAM.

Hence, the only hyperparameter considered for further tuning is the k value for kNN using the imbalanced dataset. The default value of k in scikit-learn is 5, and values between 1 and 200 were tested. \autoref{f3} shows the optimal k values found for each turbine, which are the values that produce the highest average F1 score. The optimal k is 13 or less for 17 turbines, and more than 100 for 5 turbines. Based on the overall scores in \autoref{t4}, the optimisation did increase the F1 score of kNN by 0.6\,\% compared to using the imbalanced data without k optimisation, but compared to the F1 scores of DT and RF using imbalanced data, this is still lower by 5.1\,\% and 6.1\,\% respectively.
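For illustration, a hyperparameter search of this kind can be sketched with scikit-learn as follows; this is a simplified, hypothetical example using synthetic binary indicator labels rather than the actual turbine data and feature set.
\begin{verbatim}
# Illustrative sketch: tuning the criterion hyperparameter of a multilabel
# random forest with five-fold cross-validation, scored by average F1.
# The data here are synthetic placeholders, not the turbine SCADA data.
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder feature matrix X and multilabel targets y (one column per
# turbine-category label, e.g. 'electrical system', 'pitch control').
X, y = make_multilabel_classification(n_samples=2000, n_features=20,
                                      n_classes=6, random_state=0)

search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid={"criterion": ["gini", "entropy"]},  # default vs non-default
    scoring="f1_macro",                             # average F1 across labels
    cv=5,                                           # five-fold cross-validation
)
search.fit(X, y)
print("Best criterion:", search.best_params_["criterion"])
print("Mean CV F1    :", search.best_score_)
\end{verbatim}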
\begin{table}
\centering
\caption{\label{t4}Overall precision, recall and F1 scores for optimising hyperparameters for decision trees and random forests, and k nearest neighbours. The mean and standard deviation are obtained by averaging all scores output by all turbines for the optimal hyperparameter. The values are colour-coded to show better performances (i.e., higher mean and lower standard deviation) in darker shades and worse performances in lighter shades.}
\includegraphics[width=\textwidth]{../images/t4}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{../images/f3}
\caption{\label{f3}Number of neighbours, or k value for each turbine optimised based on the average F1 score through five-fold cross-validation. The optimal k is 13 or less for 17 turbines, and more than 100 for 5 turbines.}
\end{figure}
The time taken to execute the Python code using the optimal hyperparameters to produce the results for all 25 turbines, which includes reading the merged CSV file, processing and labelling samples, classification using cross-validation, and calculation of performance metrics, is listed for each classifier in \autoref{t5}. Because other processes were running in the background at the time of execution and some runs were interrupted by computer crashes, the time taken could not be measured accurately, and these values are only approximate. Overall, balancing the training data is shown to increase the training time, which is expected as the training data will be larger and separate estimators are used for each label, compared to just one when using imbalanced data. DT and RF took only 8 hours with imbalanced data. Even with balanced training data, DT and RF took only 18 hours, compared to kNN with imbalanced data, which took 20 hours. The relatively long timings make kNN an inefficient classifier compared to DT and RF. As a result, the remaining results will focus only on the classifier with the best performance, which is RF. The other classifiers, however, could be tested more efficiently if better computing resources were available.
\begin{table}
\centering
\caption{\label{t5}Time taken to run each classifier using imbalanced and balanced datasets for the 30-month period. These timings are approximate as the RAM was not utilised fully by the Python application due to other processes running in the background, and the application had to be restarted a number of times due to system crashes.}
\includegraphics{../images/t5}
\end{table}
\section{Performance of each turbine and label}
The classification results using random forests for each turbine and label in full can be found in \autoref{a3}. The scores of each performance metric from cross-validation were grouped by turbine or label and then averaged to produce the mean scores; the maximum and minimum values were also found. The turbine with the worst performance is turbine 1, with mean and minimum F1 scores of 87\,\% and 44\,\% respectively using imbalanced data, and 86\,\% and 41\,\% respectively using balanced data. Four other turbines had minimum scores of less than 70\,\%, namely turbines 7, 9, 15 and 16. Looking at the labels, turbine category 10, which is `electrical system', had the worst performance, with mean and minimum F1 scores of 84\,\% and 44\,\% respectively using imbalanced data, and 82\,\% and 41\,\% respectively using balanced data. These minimum scores correspond to the scores for turbine 1. Therefore, it can be deduced that the classifier's ability to predict faults in the electrical system is relatively low. This is followed closely by turbine category 11, `pitch control', which has mean and minimum F1 scores of 84\,\% and 57\,\% respectively using imbalanced data, and 83\,\% and 55\,\% respectively using balanced data. For all other turbine categories, the minimum score did not drop below 75\,\%.
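As an illustration, the per-turbine and per-label summaries could be produced with a short pandas aggregation, assuming the cross-validation scores have been collected into a table with one row per turbine, label and fold; the file name and column names below are placeholders.
\begin{verbatim}
#Sketch: summarising cross-validation F1 scores per turbine and per label.
#The file name and column names are placeholders.
import pandas as pd

scores = pd.read_csv("cv_scores.csv")  #columns: turbine, label, fold, f1
per_label = scores.groupby("label")["f1"].agg(["mean", "min", "max"])
per_turbine = scores.groupby("turbine")["f1"].agg(["mean", "min", "max"])
print(per_label.sort_values("mean"))
\end{verbatim}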
\autoref{f4} shows the various turbine categories quantified by downtime frequency on the left and downtime period on the right, both per turbine per year. Looking only at the turbine categories used as labels, `pitch control' and `electrical system' are the two categories causing the most downtime events, and they are in the top three in terms of downtime period. These two labels also had the worst performance scores.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../images/f4}
\caption{\label{f4}Bar chart showing the various turbine categories quantified by the downtime frequency per turbine per year on the left, and downtime period, in hours, per turbine per year on the right. This was plotted using the downtime data.}
\end{figure}
\section{Performance of each class}
Since turbine category 10 was found to have the worst performance, the performance of each class for this label is examined in more detail by obtaining confusion matrices. A confusion matrix displays, for each class, the number of samples predicted correctly and what the wrongly predicted samples were classified as \cite{33M}. This allows a decision to be made on whether the number of classes and intervals used for fault prediction can be tweaked for better classifier performance. The matrices were first obtained for all turbines with only this label, using both imbalanced and balanced training data. Through five-fold cross-validation, a total of 125 matrices were produced, which were then combined and normalised to produce the classification accuracy. The confusion matrices are shown in \autoref{a4}. 93-95\,\% of `normal' and 73-75\,\% of `curtailment' samples were classed correctly. In comparison, only 21-24\,\% of `faulty' samples were classified correctly, with 47-48\,\% misclassified as `curtailment' and 21-26\,\% misclassified as `normal'.
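A minimal sketch of how the per-fold matrices could be accumulated and row-normalised with scikit-learn is given below; \texttt{clf}, \texttt{X} and \texttt{y} are placeholders for the classifier and data (assumed to be NumPy arrays), and only a single label is considered.
\begin{verbatim}
#Sketch: combining per-fold confusion matrices and normalising the rows.
#clf, X and y are placeholders for the classifier, features and labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

classes = np.unique(y)
total = np.zeros((len(classes), len(classes)))
for train, test in StratifiedKFold(n_splits=5).split(X, y):
    clf.fit(X[train], y[train])
    total += confusion_matrix(y[test], clf.predict(X[test]), labels=classes)

#normalise each row so the diagonal gives the per-class accuracy:
normalised = total / total.sum(axis=1, keepdims=True)
\end{verbatim}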
Due to this misclassification percentage being higher than the accuracy of the `faulty' class, the classification was repeated after dropping all rows labelled `curtailment', effectively removing the class. There is a significant improvement in the accuracy of `faulty' samples, from 21-24\,\% to 41-44\,\%. However, the majority of samples belonging to this class (44-49\,\%) were still misclassified as `normal'. The same misclassification pattern is seen for the `X hours before fault' classes, with or without the use of the `curtailment' class: as X increases, the accuracy decreases and the percentage of samples misclassified as `normal' increases.
To make a comparison, the same analysis was repeated for turbine category 5, which is `gearbox'. This category was chosen as its mean F1 score was relatively high (92\,\% compared to 84\,\% for turbine category 10), it causes the second longest downtime period based on \autoref{f4}, and it indicates a problem in the mechanical system rather than the electrical system. 96-97\,\% of `normal' and 83\,\% of `curtailment' samples were classed correctly. In comparison, 43-44\,\% of `faulty' samples were classified correctly, with 16-21\,\% misclassified as `curtailment' and 34-40\,\% misclassified as `normal'. The overall performance was better than for turbine category 10, but the misclassification of the `faulty' class as `normal' was higher. Removing the `curtailment' class increased the accuracy of `faulty' samples from 43-44\,\% to 48-55\,\%. However, the misclassification of this class as `normal' was still high (41-48\,\%).
Overall, using a balanced dataset decreased the misclassification rate of the `X hours before fault' classes as `normal', but increased the misclassification of the `faulty' class as `normal'.
\section{Feature importance}
The importance of each feature, expressed as a set of normalised scores \cite{Rudy13}, was also obtained in a similar manner to the confusion matrices. The higher the feature importance, the more influence the feature had in determining the class of the samples. The feature importances for turbine categories 10 and 5 are shown in \autoref{t6}. For both turbine categories, the wind speed and nacelle position were found to be the most important features, and the maximum, average and deviation of the active power were found to be the least important, regardless of training data balancing. The wind direction was the third most important feature for turbine category 10 regardless of balancing, and for turbine category 5 using imbalanced data. In the case of balanced data for turbine category 5, the third most important feature was the pitch angle.
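For reference, a fitted scikit-learn random forest exposes these normalised importance scores directly through its \texttt{feature\_importances\_} attribute, as in the sketch below; \texttt{X}, \texttt{y} and \texttt{feature\_names} are placeholders for the training data and the corresponding column names.
\begin{verbatim}
#Sketch: extracting feature importance from a fitted random forest.
#X, y and feature_names are placeholders for the data and column names.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier().fit(X, y)
importance = pd.Series(forest.feature_importances_, index=feature_names)
print(importance.sort_values(ascending=False))
\end{verbatim}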
\begin{table}
\centering
\caption{\label{t6}Feature importance for turbine categories 10 and 5 using random forests and either imbalanced (I) or balanced (B) training data. The values are normalised and colour-coded, transitioning from red (lower importance) to yellow (intermediate) to green (higher importance).}
\includegraphics[width=\textwidth]{../images/t6}
\end{table}
| {
"alphanum_fraction": 0.7918154353,
"avg_line_length": 188.3275862069,
"ext": "tex",
"hexsha": "5013b11968aac71d5b47ac27a0ff6e5138b5c6f5",
"lang": "TeX",
"max_forks_count": 8,
"max_forks_repo_forks_event_max_datetime": "2021-06-26T15:04:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-03-01T21:24:46.000Z",
"max_forks_repo_head_hexsha": "b0ea6de909ccd5bb425cee291ca3c252c11df4eb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "nmstreethran/WindTurbineClassification",
"max_forks_repo_path": "docs/results.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b0ea6de909ccd5bb425cee291ca3c252c11df4eb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "nmstreethran/WindTurbineClassification",
"max_issues_repo_path": "docs/results.tex",
"max_line_length": 2057,
"max_stars_count": 34,
"max_stars_repo_head_hexsha": "b0ea6de909ccd5bb425cee291ca3c252c11df4eb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "nmstreethran/WindTurbineClassification",
"max_stars_repo_path": "docs/results.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-20T09:59:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-03-01T21:24:40.000Z",
"num_tokens": 2459,
"size": 10923
} |
\documentclass[Chemistry.tex]{subfiles}
\begin{document}
\chapter{Inorganic Chemistry Summary of Reactions}
\begin{tabularx}{\textwidth}[c]{clXc}
\sltbcap{Reaction of period 3 elements with oxygen}{tb:a9.eox}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Observations} & \textbf{Flame} \\
\midrule \endhead
\ch{Na} & \ch{2 Na\solid{} + 1/2 O2\gas{} -> Na2O\solid{}} & Burns very vigorously & Yellow \\
\midrule
\ch{Mg} & \ch{Mg\solid{} + 1/2 O2\gas{} -> MgO\solid{}} & Burns very vigorously & Bright white \\
\midrule
\ch{Al} & \ch{4 Al\solid{} + 3 O2\gas{} -> 2 Al2O3\solid{}} & Must be heated to \SI{800}{\celsius} due to the unreactive oxide layer & --- \\
\midrule
\ch{Si} & \ch{Si\solid{} + O2\gas{} -> SiO2\solid{}} & Reacts slowly with strong heat & --- \\
\midrule
\ch{P} & \begin{varwidth}[t]{0.4\textwidth}\ch{P4\solid{} + 3 O2\gas{} -> P4O6\solid{}}\\\ch{P4\solid{} + 5 O2\gas{} -> P4O10\solid{}}\end{varwidth} & Reacts vigorously, forming a dense white fume of \ch{P4O10} & Brilliant yellow \\
\midrule
\ch{S} & \begin{varwidth}[t]{0.4\textwidth}\ch{S\solid{} + O2\gas{} -> SO2\gas{}}\\\ch{SO2\gas{} + 1/2 O2\gas{}~(excess) -> SO3\gas{}}\end{varwidth} & Burns slowly & Blue \\
\bottomrule
\end{tabularx}
%
\begin{longtable}[c]{cll}
\sltbcap{Reaction of period 3 elements with chlorine}{tb:a9.ecl}
\toprule
& \sltbhdr{Chloride} & \sltbhdr{Observations} \\
\midrule\endhead
\ch{Na} & \ch{NaCl\solid{}} & Reacts very vigorously \\
\midrule
\ch{Mg} & \ch{MgCl2\solid{}} & Reacts vigorously \\
\midrule
\ch{Al} & \ch{AlCl3\solid{}} & Reacts vigorously; dimerises to \ch{Al2Cl6} \\
\midrule
\ch{Si} & \ch{SiCl4\lqd{}} & Reacts slowly \\
\midrule
\ch{P4\solid{}} & \ch{PCl3\lqd{}} & In limited \ch{Cl2}; reacts slowly \\
\ch{P4\solid{}} & \ch{PCl5\solid{}} & In excess \ch{Cl2}; reacts slowly \\
\midrule
\ch{S} & \ch{S2Cl2\lqd{}} & Reacts slowly \\
\bottomrule
\end{longtable}
%
\begin{longtable}[c]{cll}
\sltbcap{Reaction of period 3 elements with water}{tb:a9.ew}
\toprule
& \sltbhdr{Equation} & \textbf{Observations} \\
\midrule\endhead
\ch{Na} & \ch{2 Na\solid{} + 2 H2O\lqd{} -> 2 NaOH\aq{} + H2\gas{}} & Reacts very vigorously \\
\midrule
\ch{Mg} & \ch{Mg\solid{} + H2O\gas{} -> MgO\solid{} + H2\gas{}} & Reacts with steam only \\
\midrule
\ch{Al} & \ch{2 Al\solid{} + 3 H2O\gas{} -> Al2O3\solid{} + 3 H2\gas{}} & Reacts with steam only \\
\midrule
\ch{Cl2} & \ch{Cl2\gas{} + H2O\lqd{} -> HClO\aq{} + HCl\gas{}} & Hydrolyses to \pH 2 solution \\
\midrule
Rest & \multicolumn{2}{l}{Does not react} \\
\bottomrule
\end{longtable}
%
\clearpage
\begin{tabularx}{\textwidth}[c]{clXcc}
\sltbcap{Reaction of period 3 oxides with water}{tb:a9.oxw}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Remarks} & \(\pH\) & \textbf{UI} \\
\midrule\endhead
\ch{Na2O} & \ch{Na2O\solid{} + H2O\lqd{} -> 2 NaOH\aq{}} & Reacts vigorously & \num{13} & Violet \\
\midrule
\ch{MgO} & \ch{MgO\solid{} + H2O\lqd{} <=> Mg(OH)2\aq{}} & Reacts less vigorously and dissolves sparingly as its lattice energy is high, leading to a high enthalpy of solution. & \num{9} & Blue \\
\midrule
\ch{Al2O3} & \multicolumn{4}{l}{Does not react; too much energy required to cause detachment of ions from the lattice structure} \\
\midrule
\ch{SiO2} & \multicolumn{4}{l}{Does not react; too much energy required to break its very stable giant molecular structure} \\
\midrule
\ch{P4O6} & \ch{P4O6\solid{} + 6 H2O\lqd{} -> 4 H3PO3\aq{}} & & \num{2} & Red \\
\ch{P4O10} &\ch{P4O10\solid{} + 6 H2O\lqd{} -> 4 H3PO4\aq{}} \\
\cmidrule{1-3}
\ch{SO2} &\ch{SO2\gas{} + H2O\lqd{} -> H2SO3\aq{}} \\
\ch{SO3} & \ch{SO3 + H2O\lqd{} -> H2SO4\aq{}} \\
\cmidrule{1-3}
\ch{Cl2O7} & \ch{Cl2O7\aq{} + H2O\lqd{} -> 2 HClO4\aq{}} \\
\bottomrule
\end{tabularx}
%
\begin{longtable}[c]{cll}
\sltbcap{Reaction of period 3 oxides with acid and base}{tb:a9.oa}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Remarks} \\
\midrule\endhead
\ch{Na2O} & \ch{Na2O\solid{} + 2 H^+\aq{} -> 2 Na^+\aq{} + H2O\lqd{}} & \\
\midrule
\ch{MgO} & \ch{MgO\solid{} + 2 H^+\aq{} -> Mg^{2+}\aq{} + H2O\lqd{}} & \\
\midrule
\ch{Al2O3} & \ch{Al2O3\solid{} + 6 H^+\aq{} -> 2 Al^{3+}\aq{} + 3 H2O\lqd{}} & With acid \\
\ch{Al2O3} & \ch{Al2O3\solid{} + 2 OH^-\aq{} + 3 H2O\lqd{} -> 2 Al(OH)4^{-}\aq{}} & With base \\
\midrule
\ch{SiO2} & \ch{SiO2\solid{} + 2 OH^{-}\aq{} -> SiO3^{2-}\aq{} + H2O\lqd{}} & \\
\midrule
\ch{P4O6} & \ch{P4O6\solid{} + 8 OH^{-}\aq{} -> 4 HPO3^{2-}\aq{} + 2 H2O\lqd{}} & \\
\ch{P4O10} & \ch{P4O10\solid{} + 12 OH^{-}\aq{} -> 4 PO4^{3-}\aq{} + 6 H2O\lqd{}} & \\
\midrule
\ch{SO2} & \ch{SO2\gas{} + 2 OH^{-}\aq{} -> SO3^{2-}\aq{} + H2O\lqd{}} & \\
\ch{SO3} & \ch{SO3\gas{} + 2 OH^{-}\aq{} -> SO4^{2-}\aq{} + H2O \lqd{}} & \\
\midrule
\ch{Cl2O7} & \ch{Cl2O7\aq{} + 2 OH^{-}\aq{} -> 2 ClO4^{-}\aq{} + H2O\lqd{}} & \\
\bottomrule
\end{longtable}
%
\begin{tabularx}{\textwidth}[c]{clXcc}
\sltbcap{Reaction of period 3 chlorides with water}{tb:a9.clw}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Remarks} & \(\pH\) & \textbf{UI} \\
\midrule\endhead
\ch{NaCl} & \ch{NaCl\solid{} -> Na^+\aq{} + Cl^-\aq{}} & Undergoes hydration & 7 & Green \\
\midrule
\ch{MgCl2} & \multicolumn{2}{l}{\begin{varwidth}[t]{0.7\textwidth}\ch{MgCl2\solid{} + 6 H2O\lqd{} -> {[}Mg(H2O)6{]}^{2+}\aq{} + 2 Cl^{-}\aq{}}\\\ch{{[}Mg(H2O)6{]}^{2+}\aq{} + H2O\lqd{} <=> {[}Mg(H2O)5(OH){]}^{+}\aq{} + H3O^{+}\aq{}}\end{varwidth}} & \num{6.5} & Orange \\\addlinespace
\multicolumn{5}{p{0.9\textwidth}}{Undergoes hydration and then slight hydrolysis to form a slightly acidic solution, as the relatively high charge density of the hydrated \slch{Mg^{2+}} ion polarises the electron cloud of one of the surrounding water molecules, weakening and breaking the \slch{O-H} bond, releasing a proton.} \\
\midrule
\ch{Al2Cl6} & \multicolumn{2}{l}{\begin{varwidth}[t]{0.7\textwidth}\ch{AlCl3\solid{} + 6 H2O\lqd{} -> {[}Al(H2O)6{]}^{3+}\aq{} + 3 Cl^{-}\aq{}}\\\ch{{[}Al(H2O)6{]}^{3+}\aq{} + H2O\lqd{} <=> {[}Al(H2O)5(OH){]}^{2+}\aq{} + H3O^{+}\aq{}}\end{varwidth}} & \num{3} & Orange \\\addlinespace
\multicolumn{5}{p{0.9\textwidth}}{Undergoes hydration and hydrolysis to form an acidic solution, for reasons similar to \slch{MgCl2}.} \\
\midrule
\ch{SiCl4} & \ch{SiCl4\lqd{} + 2 H2O\lqd{} -> SiO2\solid{} + 4 HCl\gas{}} & Hydrolyses completely & \num{2} & Red \\
\cmidrule{1-3}
\ch{PCl3} & \ch{PCl3\lqd{} + 3 H2O\lqd{} -> H3PO3\aq{} + 3 HCl\gas{}} \\
\ch{PCl5} & \ch{PCl5\solid{} + 4 H2O\lqd{}~(excess) -> H3PO4\aq{} + 5 HCl\gas{}} & (hot) \\
\ch{PCl5} & \ch{PCl5\solid{} + H2O\lqd{} -> POCl3\aq{} + 2 HCl\aq{}} & (cold) \\
\ch{POCl3} & \ch{POCl3\aq{} + 3 H2O\lqd{} -> H3PO4\aq{} + 3 HCl\aq{}} \\
\cmidrule{1-3}
\ch{S2Cl2} & \ch{2 S2Cl2\lqd{} + 2 H2O\lqd{} -> 3 S\solid{} + SO2\gas{} + 4 HCl\gas{}} \\
\cmidrule{1-3}
\ch{Cl2} & \ch{Cl2\gas{} + H2O\lqd{} -> HClO\aq{} + HCl\gas{}} \\
\bottomrule
\end{tabularx}
%
\clearpage
\begin{longtable}[c]{cccccccc}
\sltbcap{Reactions of group II elements and oxides}{tb:a9.gii}
\toprule
\multicolumn{2}{c}{\textbf{Element}} & \multicolumn{2}{c}{\textbf{Reaction with water}} & \multicolumn{4}{c}{\textbf{Oxide}}\\
\cmidrule(r){1-2} \cmidrule(lr){3-4} \cmidrule(l){5-8}
& \textbf{Flame} & \textbf{Cold} & \textbf{Steam} & & \textbf{Solubility} & \(\pH\) & \textbf{UI}\\
\cmidrule(r){1-2} \cmidrule(lr){3-4} \cmidrule(l){5-8}\endhead
\slch{Be} & --- & --- & --- & \ch{BeO} & \multicolumn{3}{c}{Insoluble}\\
\slch{Mg} & brilliant white & --- & Forms oxide & \slch{MgO} & Slightly & \num{9} & Blue\\
\slch{Ca} & red & \multicolumn{2}{c}{Forms hydroxide} & \slch{CaO} & Yes & \numrange{10}{13} & Violet\\
\slch{Sr} & crimson & \multicolumn{2}{c}{Forms hydroxide} & \slch{SrO} & Yes & \numrange{10}{13} & Violet\\
\slch{Ba} & green & \multicolumn{2}{c}{Forms hydroxide} & \slch{BaO} & Yes & \numrange{10}{13} & Violet\\
\bottomrule
\end{longtable}
%
\begin{longtable}[c]{cccccc}
\sltbcap{Colours of group VII elements}{tb:a9.g7.col}
\toprule
& \multicolumn{5}{c}{\textbf{Colour in state}}\\
\cmidrule{2-6}
& \textbf{Gas} & \textbf{Liquid} & \textbf{Solid} & \textbf{Aqueous} & \textbf{Organic} \\
\ch{Cl2} & \multicolumn{2}{c}{Greenish-yellow} & --- & Greenish-yellow & Yellow \\
\ch{Br2} & \multicolumn{2}{c}{Reddish-brown} & --- & Yellow & Orange \\
\ch{I2} & Violet & --- & Black & Brown & Violet \\
\bottomrule
\end{longtable}
%
\begin{tabularx}{\textwidth}[c]{clX}
\sltbcap{Reactions of group VII elements with hydrogen}{tb:a9.g7.eh}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Observations} \\
\midrule\endhead
\ch{F2} & \ch{H2\gas{} + F2\gas{} -> 2 HF\gas{}} & Explosive even in the dark \\
\ch{Cl2} & \ch{H2\gas{} + Cl2\gas{} -> 2 HCl\gas{}} & Explosive in sunlight; does not react at r.t.p. or in the dark \\
\ch{Br2} & \ch{H2\gas{} + Br2\gas{} -> 2 HBr\gas{}} & Heat and \ch{Pt} catalyst \\
\ch{I2} & \ch{H2\gas{} + I2\gas{} -> 2 HI\gas{}} & Reacts reversibly at \SI{400}{\celsius} with \ch{Pt} catalyst \\
\bottomrule
\end{tabularx}
%
\begin{tabularx}{\textwidth}[c]{clX}
\sltbcap{Reactions of group VII elements with \ch{NaOH\aq{}}}{tb:a9.g7.enaoh}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Remarks} \\
\midrule\endhead
\ch{X2} & \ch{2 OH^-\aq{} + X2\aq{} -> X^-\aq{} + XO^-\aq{} + H2O\lqd{}} & R.t.p. dilute \ch{NaOH} and \ch{Cl2} or \SI{0}{\celsius} dilute \ch{NaOH} and \ch{Br2} \\
\ch{XO^-} & \ch{3 XO^-\aq{} -> 2 X^-\aq{} + XO3^-\aq{}} & On warming \\
\midrule
\ch{X2} & \ch{6 OH^-\aq{} + 3 X2\aq{} -> 5 X^-\aq{} + XO3^-\aq{} + 3 H2O\lqd{}} & Hot concentrated \ch{NaOH} (\SI{70}{\celsius}) and \ch{Cl2} \textbf{or} \ch{Br2} or \ch{I2} \\
\bottomrule
\end{tabularx}
%
\begin{tabularx}{\textwidth}[c]{clX}
\sltbcap{Reactions of group VII elements with concentrated \ch{H2SO4}}{tb:a9.g7.eh2so4}
\toprule
& \sltbhdr{Equation} & \sltbhdr{Remarks} \\
\midrule\endhead
\ch{F2}, \ch{Cl2} & \ch{NaX\solid{} + H2SO4\lqd{} -> HX\gas{} + NaHSO4\solid{}} & \ch{HX} is not further oxidised by \ch{H2SO4} as the latter is not powerful enough an oxidising agent \\
\midrule
\ch{Br2} & \begin{varwidth}[t]{0.5\textwidth}\ch{NaBr\solid{} + H2SO4\lqd{} -> HBr\gas{} + NaHSO4\solid{}}\\\ch{2 HBr\gas{} + H2SO4\lqd{} -> Br2\gas{} + SO2\gas{} + 2 H2O\lqd{}}\end{varwidth} & \ch{HBr} forms white fumes; \ch{SO2} is pungent \\
\midrule
\ch{I2} & \begin{varwidth}[t]{0.5\textwidth}\ch{NaI\solid{} + H2SO4\lqd{} -> HI\gas{} + NaHSO4\solid{}}\\\ch{8 HI\gas{} + H2SO4\lqd{} -> 4 I2\gas{} + H2S\gas{} + 4 H2O\lqd{}}\end{varwidth} & \ch{HI} forms white fumes; \ch{H2S} is pungent \\
\bottomrule
\end{tabularx}
%
\clearpage
\begin{longtable}[c]{llll}
\sltbcap{Colours of transition metal compounds and complexes}{tb:a9.tm.col}
\toprule
\sltbhdr{Ion} & \multicolumn{3}{c}{\textbf{Species and colour}} \\
\midrule\endhead
\ch{V}(II) & \ch{[V(H2O)6]^{2+}}: violet \\
\ch{V}(III) & \ch{[V(H2O)6]^{3+}}: green \\
\ch{V}(IV) & \ch{[VO(H2O)5]^{2+}}: blue \\
\ch{V}(V) & \ch{[VO2(H2O)4]^{+}}: yellow \\
\midrule
\ch{Cr}(II) & \ch{[Cr(H2O)6]^{2+}}: blue \\
\ch{Cr}(III) & \ch{[Cr(H2O)6]^{3+}}: green & \ch{[Cr(OH)6]^{3-}}: deep green & \ch{[Cr(NH3)6]^{3+}}: purple \\
\ch{Cr}(VI) & \ch{CrO4^{2-}}: yellow & \ch{Cr2O7^{2-}}: orange \\
\midrule
\ch{Mn}(II) & \ch{[Mn(H2O)6]^{2+}}: pink/colourless \\
\ch{Mn}(III) & \ch{[Mn(H2O)6]^{3+}}: red \\
\ch{Mn}(IV) & \ch{MnO2}: brown solid \\
\ch{Mn}(VI) & \ch{MnO4^{2-}}: green \\
\ch{Mn}(VII) & \ch{MnO4^-}: purple \\
\midrule
\ch{Fe}(II) & \ch{[Fe(H2O)6]^{2+}}: pale green & \ch{[Fe(CN)6]^{4-}}: yellow \\
\ch{Fe}(III) & \ch{[Fe(H2O)6]^{3+}}: yellow & \ch{[Fe(CN)6]^{3-}}: orange-red & \ch{[Fe(H2O)5(SCN)]^{2+}}: blood red \\
\midrule
\ch{Co}(II) & \ch{[Co(H2O)6]^{2+}}: pink & \ch{[Co(NH3)6]^{2+}}: pale brown & \ch{[CoCl4]^{2-}}: blue \\
\ch{Co}(III) & \ch{[Co(H2O)6]^{3+}}: dark brown \\
\midrule
\ch{Ni}(II) & \ch{[Ni(H2O)6]^{2+}}: green & \ch{[Ni(NH3)6]^{2+}}: blue & \ch{[Ni(CN)4]^{2-}}: yellow \\
\midrule
\ch{Cu}(I) & \ch{Cu2O}: reddish-brown solid \\
\ch{Cu}(II) & \ch{[Cu(H2O)6]^{2+}}: blue & \ch{[Cu(NH3)4(H2O)2]^{2+}}: dark blue & \ch{[CuCl4]^{2-}}: yellow \\
\midrule
\ch{Ag}(I) & \ch{[Ag(H2O)2]^{+}}: colourless & \ch{[Ag(NH3)2]^{+}}: colourless \\
\bottomrule
\end{longtable}
%
\begin{longtable}[c]{ccccc}
\sltbcap{Transition metal precipitates soluble when excess ligand added}{tb:a9.tm.ex}
\toprule
\sltbhdr{Precipitate} & \sltbhdr{Colour} & \sltbhdr{Soluble in excess of} & \sltbhdr{Complex} & \sltbhdr{Colour} \\
\midrule\endhead
\ch{Cr(OH)3} & Green & \ch{NaOH\aq{}} & \ch{[Cr(OH)6]^{3-}} & Deep green \\
\midrule
\ch{Zn(OH)2} & White & \ch{NaOH\aq{}} & \ch{[Zn(OH)4]^{2-}} & Colourless \\
\ch{Zn(OH)2} & White & \ch{NH3\aq{}} & \ch{[Zn(NH3)4]^{2+}} & Colourless \\
\midrule
\ch{Cu(OH)2} & Blue & \ch{NH3\aq{}} & \ch{[Cu(NH3)4(H2O)2]^{2+}} & Deep blue \\
\midrule
\ch{Co(OH)2} & Blue & \ch{NH3\aq{}} & \ch{[Co(NH3)6]^{2+}} & Pale brown \\
\midrule
\ch{Ni(OH)2} & Green & \ch{NH3\aq{}} & \ch{[Ni(NH3)6]^{2+}} & Blue \\
\bottomrule
\end{longtable}
\end{document}
| {
"alphanum_fraction": 0.5919601838,
"avg_line_length": 51.6205533597,
"ext": "tex",
"hexsha": "3d4aacaa448bde52dd50daebe269d8fe3dcb8355",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5afdc9a71c37736aacf3ae1db9d0384cdb6a0348",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "oliverli/A-Level-Notes",
"max_forks_repo_path": "TeX/Chemistry/a_inorganic_reactions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5afdc9a71c37736aacf3ae1db9d0384cdb6a0348",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "oliverli/A-Level-Notes",
"max_issues_repo_path": "TeX/Chemistry/a_inorganic_reactions.tex",
"max_line_length": 330,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "5afdc9a71c37736aacf3ae1db9d0384cdb6a0348",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "oliverli/A-Level-Notes",
"max_stars_repo_path": "TeX/Chemistry/a_inorganic_reactions.tex",
"max_stars_repo_stars_event_max_datetime": "2020-08-05T11:44:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-05T11:44:33.000Z",
"num_tokens": 6020,
"size": 13060
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Copyright (c) 2016, Perry L Miller IV
% All rights reserved.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Discussion about bullets.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Bullets}
\label{sec:discussion_bullets}
% Start some bullets.
\begin{itemize}
\item All work and no play makes Jack a dull boy.
All work and no play makes Jack a dull boy.
\item All work and no play makes Jack a dull boy.
All work and no play makes Jack a dull boy.
% End the bullets.
\end{itemize}
| {
"alphanum_fraction": 0.4045698925,
"avg_line_length": 25.6551724138,
"ext": "tex",
"hexsha": "601fc75297cd70b7a0c2f53504a8e0b126804b6c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "47d00447a47ce0696b711b70828116794e83504e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "perryiv/latex_starter_kit",
"max_forks_repo_path": "source/discussion/bullets.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "47d00447a47ce0696b711b70828116794e83504e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "perryiv/latex_starter_kit",
"max_issues_repo_path": "source/discussion/bullets.tex",
"max_line_length": 80,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "47d00447a47ce0696b711b70828116794e83504e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "perryiv/latex_starter_kit",
"max_stars_repo_path": "source/discussion/bullets.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 129,
"size": 744
} |