\sekshun{Conversions}
\label{Conversions}
\index{conversions}
A \emph{conversion} converts an expression of one type to another type,
possibly changing its value.
\index{conversions!source type}
\index{conversions!target type}
We refer to these two types as the \emph{source} and \emph{target} types.
Conversions can be either
implicit~(\rsec{Implicit_Conversions}) or
explicit~(\rsec{Explicit_Conversions}).
\section{Implicit Conversions}
\label{Implicit_Conversions}
\index{conversions!implicit}
An \emph{implicit conversion} is a conversion that occurs implicitly,
that is, not due to an explicit specification in the program.
Implicit conversions occur at the locations in the program listed below.
Each location determines the target type.
The conversion must be allowed between the source and target types,
and those two types determine whether and how the expression's value changes.
\index{conversions!implicit!occurs at}
An implicit conversion occurs at each of the following program locations:
\begin{itemize}
\item In an assignment, the expression on the right-hand side of
the assignment is converted to the type of the variable
or another lvalue on the left-hand side of the assignment.
\item The actual argument of a function call or an operator is converted
to the type of the corresponding formal argument, if the formal's
intent is \chpl{in} or \chpl{const in} or an abstract intent
(\rsec{Abstract_Intents}) with the semantics of
\chpl{in} or \chpl{const in}.
\item The formal argument of a function call is converted
to the type of the corresponding actual argument, if the formal's
intent is \chpl{out}.
\item The return or yield expression within a function without a \chpl{ref}
return intent is converted to the return type of that function.
\item The condition of a conditional expression,
conditional statement, while-do or do-while loop statement
is converted to the boolean type~(\rsec{Implicit_Statement_Bool_Conversions}).
A special rule defines the allowed source types and
how the expression's value changes in this case.
\end{itemize}
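For example, in the following sketch (illustrative only; the procedure
\chpl{f} and its formal \chpl{x} are hypothetical names), the first two cases
above both apply: the assignment converts \chpl{i} to \chpl{real}, and the
call converts the actual argument \chpl{i} to the type of the \chpl{in} formal.
\begin{chapel}
var r: real;
var i: int = 3;
r = i;                  // assignment: i is implicitly converted to real
proc f(in x: real) { }
f(i);                   // 'in' formal: i is implicitly converted to real
\end{chapel}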
\index{conversions!implicit!allowed types}
Implicit conversions \emph{are allowed} between
the following source and target types,
as defined in the referenced subsections:
\begin{itemize}
\item numeric, boolean, and enumerated types~(\rsec{Implicit_NumBoolEnum_Conversions}),
\item class types~(\rsec{Implicit_Class_Conversions}),
\item integral types in the special case when the expression's value
is a compile-time constant~(\rsec{Implicit_Compile_Time_Constant_Conversions}), and
\item from an integral or class type to \chpl{bool}
in certain cases~(\rsec{Implicit_Statement_Bool_Conversions}).
\end{itemize}
In addition,
an implicit conversion from a type to the same type is allowed for any type.
Such a conversion does not change the value of the expression.
% TODO: If an implicit conversion is not allowed, it is an error.
Implicit conversion is not transitive. That is, if an implicit conversion
is allowed from type \chpl{T1} to \chpl{T2} and from \chpl{T2} to \chpl{T3},
that by itself does not allow an implicit conversion
from \chpl{T1} to \chpl{T3}.
\subsection{Implicit Numeric, Bool and Enumeration Conversions}
\label{Implicit_NumBoolEnum_Conversions}
\index{conversions!numeric}
\index{conversions!implicit!numeric}
Implicit conversions among numeric types are allowed when
all values representable in the source type can also be represented
in the target type, retaining their full precision.
%
%REVIEW: vass: I did not understand the point of the following,
% so I am commenting it out for now.
%When the implicit conversion is from an integral to a real type, source
%types are converted to type \chpl{int} before determining if the
%conversion is valid.
%
In addition, implicit conversions from
types \chpl{int(64)} and \chpl{uint(64)} to types \chpl{real(64)}
and \chpl{complex(128)} are allowed, even though they may result in a loss of
precision.
%REVIEW: hilde
% Unless we are supporting some legacy behavior, I would recommend removing this
% provision. A loss of precision is a loss of precision, so I would favor
% consistent behavior that does not lead to surprising results. EVERY ``if''
% costs money: which is to say that if a behavior can be described simply, it can
% be implemented simply.
\begin{rationale}
We allow these additional conversions because they are an important
convenience for application programmers. Therefore we are willing to
lose precision in these cases. The largest real and complex types
are chosen to retain precision as often as possible.
\end{rationale}
\index{conversions!boolean}
\index{conversions!implicit!boolean}
Any boolean type can be implicitly converted to any other boolean type,
retaining the boolean value.
Any boolean type can be implicitly converted to any integral type
by representing \chpl{false} as 0 and \chpl{true} as 1,
except that (if applicable)
a boolean cannot be converted to \chpl{int(1)}.
% Rationale: because 1 cannot be represented by \chpl{int(1)}.
\begin{rationale}
We disallow implicit conversion of a boolean to
a real, imaginary, or complex type because of the following.
We expect that the cases where such a conversion is needed
will more likely be unintended by the programmer.
Marking those cases as errors will draw the programmer's attention.
If such a conversion is actually desired, a cast~(\rsec{Explicit_Conversions})
can be inserted.
\end{rationale}
\index{conversions!enumerated types}
\index{conversions!implicit!enumerated types}
An expression of an enumerated type can be implicitly converted
to an integral type, provided that all of the constants defined by the
enumerated type are representable by the integral type.
% Requiring an explicit cast to convert an integer to an enumerated type
% is consistent with C# and later versions of C++.
Legal implicit conversions with numeric, boolean and enumerated types
may thus be tabulated as follows:
\begin{center}
\begin{tabular}{l|llllll}
& \multicolumn{6}{c}{Target Type} \\ [4pt]
Source Type & bool($t$) & uint($t$) & int($t$) & real($t$) & imag($t$) & complex($t$) \\ [3pt]
\cline{1-7} \\
bool($s$) & all $s,t$ & all $s,t$ & all $s$; $2 \le t$ & & & \\ [7pt]
enum & & (see rules) & (see rules) & & & \\ [7pt]
uint($s$) & & $s \le t$ & $s < t$ & $s \le mant(t)$ & & $s \le mant(t/2)$ \\ [7pt]
uint(64) & & & & real(64) & & complex(128) \\ [7pt]
int($s$) & & & $s \le t$ & $s \le mant(t)+1$ & & $s \le mant(t/2)+1$ \\ [7pt]
int(64) & & & & real(64) & & complex(128) \\ [7pt]
real($s$) & & & & $s \le t$ & & $s \le t/2$ \\ [7pt]
imag($s$) & & & & & $s \le t$ & $s \le t/2$ \\ [7pt]
complex($s$) & & & & & & $s \le t$ \\ [5pt]
\end{tabular}
\end{center}
Here, $mant(i)$ is the number of bits in the (unsigned) mantissa of
the $i$-bit floating-point type.\footnote{For the IEEE 754 format,
$mant(32)=24$ and $mant(64)=53$.}
%
Conversions for the default-sized numeric types (\chpl{int}, \chpl{uint},
\chpl{real}, \chpl{complex}, etc.) are the same as for their
explicitly-sized counterparts.
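For example, under these rules the following declarations are accepted or
rejected as indicated (an illustrative sketch, not drawn from the Chapel test
suite):
\begin{chapel}
var i32: int(32) = 7;
var i64: int(64) = i32;    // allowed: every int(32) value fits in int(64)
var r64: real(64) = i64;   // allowed by the special int(64)-to-real(64) rule
var i16: int(16);
// i16 = i64;              // not allowed implicitly: the value may not fit
i16 = i64: int(16);        // an explicit cast is required instead
\end{chapel}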
\subsection{Implicit Compile-Time Constant Conversions}
\label{Implicit_Compile_Time_Constant_Conversions}
\index{conversions!numeric!parameter}
\index{conversions!implicit!parameter}
The following implicit conversion of a parameter is allowed:
\begin{itemize}
\item A parameter of type \chpl{int(64)} can be implicitly converted
to \chpl{int(8)}, \chpl{int(16)}, \chpl{int(32)}, or any unsigned integral type if the
value of the parameter is within the range of the target type.
\end{itemize}
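For example, assuming the literal below gets the default type \chpl{int(64)}:
\begin{chapel}
param small = 42;          // a compile-time constant of type int(64)
var i8: int(8) = small;    // allowed: 42 is within the range of int(8)
var u: uint = small;       // allowed: 42 is non-negative
\end{chapel}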
\subsection{Implicit Statement Bool Conversions}
\label{Implicit_Statement_Bool_Conversions}
\index{conversions!boolean!in a statement}
\index{conversions!implicit!boolean}
In the condition of an if-statement, while-loop, and do-while-loop,
the following implicit conversions to \chpl{bool} are supported:
\begin{itemize}
\item An expression of integral type is taken to be false if it is zero and is true otherwise.
\item An expression of a class type is taken to be false if it is nil and is true otherwise.
\end{itemize}
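For example, in the following sketch (assuming the class-variable default
value of \chpl{nil} used by this specification):
\begin{chapel}
class Node { }
var head: Node;            // default value is nil
var count = 3;
while count do             // integral condition: loops while count != 0
  count -= 1;
if head then               // class condition: false here, since head is nil
  writeln("head is not nil");
\end{chapel}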
\section{Explicit Conversions}
\label{Explicit_Conversions}
\index{conversions!explicit}
Explicit conversions require a cast in the code. Casts are defined
in~\rsec{Casts}. Explicit conversions are supported between more
types than implicit conversions, but explicit conversions are not
supported between all types.
The explicit conversions are a superset of the implicit conversions.
In addition to the following definitions,
an explicit conversion from a type to the same type is allowed for any type.
Such a conversion does not change the value of the expression.
\subsection{Explicit Numeric Conversions}
\label{Explicit_Numeric_Conversions}
\index{conversions!numeric}
\index{conversions!explicit!numeric}
Explicit conversions are allowed from any numeric type, boolean, or
string to any other numeric type, boolean, or string.
% A valid \chpl{bool} value behaves like a single unsigned bit.
When a \chpl{bool} is converted to a \chpl{bool}, \chpl{int}
or \chpl{uint} of equal or larger size, its value is zero-extended to fit the
new representation. When a \chpl{bool} is converted to a
smaller \chpl{bool}, \chpl{int} or \chpl{uint}, its most significant
bits are truncated (as appropriate) to fit the new representation.
When an \chpl{int}, \chpl{uint}, or \chpl{real} is converted to a \chpl{bool}, the result is \chpl{false} if the number was equal to 0 and \chpl{true} otherwise.
% This has the odd effect that a bool stored in a signed one-bit bitfield would
% change sign without generating a conversion error. But its subsequent
% conversion back to a bool would yield the original value.
% In regard to supporting bitfields: Be careful what you wish for.
% The source type determines whether a value is zero- or sign-extended.
When an \chpl{int} is converted to a larger \chpl{int} or \chpl{uint}, its value is
sign-extended to fit the new representation.
When a \chpl{uint} is converted to a larger \chpl{int} or \chpl{uint}, its value
is zero-extended.
When an \chpl{int} or \chpl{uint} is converted to an \chpl{int} or \chpl{uint}
of the same size, its binary representation is unchanged.
When an \chpl{int} or \chpl{uint} is converted to a smaller \chpl{int}
or \chpl{uint}, its value is truncated to fit the new representation.
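For example, the following casts illustrate sign extension, zero extension,
and truncation (an uncompiled sketch; the noted values follow the rules
above):
\begin{chapel}
var i: int(16) = -1;
writeln(i: int(32));       // sign-extended: -1
var u: uint(8) = 255;
writeln(u: int(16));       // zero-extended: 255
var big: int = 300;
writeln(big: uint(8));     // truncated to 8 bits: 44
\end{chapel}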
\begin{future}
There are several kinds of integer conversion which can result in a loss of
precision. Currently, the conversions are performed as specified, and no error
is reported. In the future, we intend to improve type checking, so the user can
be informed of potential precision loss at compile time, and actual precision
loss at run time. Such cases include:
%
% An exception is thrown if the source value cannot be represented in the target type.
\begin{itemize}
\item when an \chpl{int} is converted to a \chpl{uint} and the original value is
negative;
\item when a \chpl{uint} is converted to an \chpl{int} and the sign bit of the result
is true;
\item when an \chpl{int} is converted to a smaller \chpl{int} or \chpl{uint} and any
of the truncated bits differs from the original sign bit;
\item when a \chpl{uint} is converted to a smaller \chpl{int} or \chpl{uint} and any
of the truncated bits is true.
\end{itemize}
\end{future}
\begin{rationale}
For integer conversions, the default behavior of a program should be to produce
a run-time error if there is a loss of precision. Thus, cast expressions not only
give rise to a value conversion at run time, but amount to an assertion
that the required precision is preserved. Explicit conversion procedures would be
available in the run-time library so that one can perform explicit conversions
that result in a loss of precision but do not generate a run-time diagnostic.
\end{rationale}
When converting from a \chpl{real} type to a larger \chpl{real} type, the
represented value is preserved. When converting from a \chpl{real} type to a
smaller \chpl{real} type, the closest representation in the target type is
chosen.\footnote{When converting to a smaller real type, a loss of precision is \emph{expected}.
Therefore, there is no reason to produce a run-time diagnostic.}
When converting to a \chpl{real} type from an integer type, integer types
smaller than \chpl{int} are first converted to \chpl{int}. Then, the closest
representation of the converted value in the target type is chosen. The exact
behavior of this conversion is implementation-defined.
When converting from \chpl{real($k$)} to \chpl{complex($2k$)}, the original
value is copied into the real part of the result, and the imaginary part of the
result is set to zero. When converting from a \chpl{real($k$)} to
a \chpl{complex($\ell$)} such that $\ell > 2k$, the conversion is performed as
if the original value is first converted to \chpl{real($\ell/2$)} and then
to \chpl{complex($\ell$)}.
The rules for converting from \chpl{imag} to \chpl{complex} are the same as for
converting from real, except that the imaginary part of the result is set using
the input value, and the real part of the result is set to zero.
\subsection{Explicit Tuple to Complex Conversion}
\label{Explicit_Tuple_to_Complex_Conversion}
\index{conversions!tuple to complex}
\index{conversions!explicit!tuple to complex}
A two-tuple of numerical values may be converted to a \chpl{complex} value. If
the destination type is \chpl{complex(128)}, each member of the two-tuple must
be convertible to \chpl{real(64)}. If the destination type
is \chpl{complex(64)}, each member of the two-tuple must be convertible
to \chpl{real(32)}. The first member of the tuple becomes the real part of the
resulting complex value; the second member of the tuple becomes the imaginary
part of the resulting complex value.
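For example (an illustrative sketch):
\begin{chapel}
var t = (1.5, 2.5);
var z = t: complex(128);   // real part 1.5, imaginary part 2.5
\end{chapel}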
\subsection{Explicit Enumeration Conversions}
\label{Explicit_Enumeration_Conversions}
\index{conversions!enumeration}
\index{conversions!explicit!enumeration}
Explicit conversions are allowed from any enumerated type to any
integer or real type, \chpl{bool}, or \chpl{string}, and vice versa.
When the target type is an integer type, the value is first converted to its
underlying integer type and then to the target type, following the rules above
for converting between integers.
When the target type is a real or complex type, the value is first converted to
its underlying integer type and then to the target type.
The conversion of an enumerated type to \chpl{imag} is not permitted.
When the target type is \chpl{bool}, the value is first converted to its
underlying integer type. If the result is zero, the value of the \chpl{bool}
is \chpl{false}; otherwise, it is \chpl{true}.
When the target type is \chpl{string}, the value becomes the name of the
enumerator. % in the execution character set.
When the source type is \chpl{bool}, enumerators corresponding to the values 0
and 1 in the underlying integer type are selected, corresponding to input values
of \chpl{false} and \chpl{true}, respectively.
%REVIEW: hilde
% As with default values for variables of enumerated types, I am pushing for the
% simplest implementation -- in which the conversion does not actually change
% the stored value. This means that it may be possible for an enumerated variable
% to assume a value that does not correspond to any of its enumerators. Further
% encouragement to always supply a default clause in your switch statements!
When the source type is a real or integer type, the value is converted to the
target type's underlying integer type.
The conversion from \chpl{complex} and \chpl{imag} types to an enumerated type is not
permitted.
When the source type is \chpl{string}, the enumerator whose name matches the value of the input
string is selected. If no such enumerator exists, a runtime error occurs.
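For example, given the hypothetical enumerated type below, the following
conversions follow the rules of this subsection:
\begin{chapel}
enum color { red = 1, green = 2, blue = 4 };
writeln(color.blue: int);     // 4, the underlying integer value
writeln(color.green: string); // the enumerator's name: green
var c = "blue": color;        // string-to-enum: selects color.blue
\end{chapel}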
\subsection{Explicit Class Conversions}
\label{Explicit_Class_Conversions}
\index{conversions!class}
\index{conversions!explicit!class}
An expression of static class type \chpl{C} can be explicitly
converted to a class type \chpl{D} provided that \chpl{C} is derived
from \chpl{D} or \chpl{D} is derived from \chpl{C}.
When at run time the source expression refers to an instance of
\chpl{D} or a subclass of \chpl{D}, its value is not changed.
Otherwise, or when the source expression is \chpl{nil},
the result of the conversion is \chpl{nil}.
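For example, given the hypothetical classes below (and assuming the semantics
of this specification, in which a failing downcast produces \chpl{nil} rather
than an error):
\begin{chapel}
class Base { }
class Derived : Base { }
var b: Base = new Derived();
var d = b: Derived;        // the instance is a Derived: d refers to it
var b2: Base = new Base();
var d2 = b2: Derived;      // the instance is not a Derived: d2 is nil
\end{chapel}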
\subsection{Explicit Record Conversions}
\label{Explicit_Record_Conversions}
\index{conversions!records}
\index{conversions!explicit!records}
An expression of record type \chpl{C} can be explicitly converted to
another record type \chpl{D} provided that \chpl{C} is derived
from \chpl{D}. There are no explicit record conversions that are not
also implicit record conversions.
\subsection{Explicit Range Conversions}
\label{Explicit_Range_Conversions}
\index{conversions!range}
\index{conversions!explicit!range}
An expression of stridable range type can be explicitly converted
to an unstridable range type, changing the stride to 1 in the process.
\subsection{Explicit Domain Conversions}
\label{Explicit_Domain_Conversions}
\index{conversions!domain}
\index{conversions!explicit!domain}
An expression of stridable domain type can be explicitly converted
to an unstridable domain type, changing all strides to 1 in the process.
\subsection{Explicit Type to String Conversions}
\label{Explicit_Type_to_String_Conversions}
\index{conversions!type to string}
\index{conversions!explicit!type to string}
A type expression can be explicitly converted to a \chpl{string}. The resultant
\chpl{string} is the name of the type.
\begin{chapelexample}{explicit-type-to-string.chpl}
For example:
\begin{chapel}
var x: real(64) = 10.0;
writeln(x.type:string);
\end{chapel}
\begin{chapeloutput}
real(64)
\end{chapeloutput}
This program will print out the string \chpl{"real(64)"}.
\end{chapelexample}
% ***********************************************************************************
% Pure LaTeX part to be inserted in a document (be careful of dependencies on packages & commands)
% Prepared by XXX and YYY under the supervision of Arnaud de La Fortelle
% Fall 2017
% 2D wave propagation subsection of the modeling part
% ***********************************************************************************
\subgroup{1}{Bradley Cage and Lin Yang}
\paragraph{Description}
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{Figures/2D_waves_system.png}
\caption{The membrane system }
\label{2D_waves_system.fig}
\end{figure}
Our system consists of a flexible membrane stretched to some shape, with all of its edges fixed in place. The desired goal is to understand the vertical position of the various points on the membrane over time. The membrane's vertical deflections are small compared to its overall size, and deflections happen only in the vertical direction.
This 2D system is a continuation of the 1D wave equation, and is a natural precursor to the 3D wave case.
\paragraph{Model}
Assumptions:
\begin{itemize}
\item Membrane has uniform planar density $\rho$
\item The tension per unit length, $F_t$, caused by stretching the membrane is the same at all points and in all directions and does not change during the motion
\item Vertical position is given by some function $u(x,y,t)$
\end{itemize}
\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{Figures/2D_waves_model.png}
\caption{The force analysis of a small section of the membrane system }
\label{2D_waves_model.fig}
\end{figure}
We begin from basic principles.
$$\Sigma F = m\vec{a}$$
\noindent Taking some small section of the membrane, $\ud x$ by $\ud y$, we can replace the mass by $\rho \ud x\ud y$ and the acceleration by the second derivative of position with respect to time, which lets us rewrite the equation.
\begin{equation}
\label{no_balance}
\Sigma F = \rho \ud x\ud y \frac{\partial^2u}{\partial t^2}
\end{equation}
\noindent Performing a force balance on the section of membrane in the x and y directions gives the tension acting on each side, which we then resolve into its vertical component. Remember that since the tension is constant per unit length, we must multiply the force acting on each side by the length of that side. Thus, the force balance lets us rewrite $\Sigma F$ (that is, we only consider vertical forces, those acting in the $x-u$ and $y-u$ planes):
$$\Sigma F = F_x + F_y$$
$$F_x = F_t\ud y\Big[\sin\big( \theta (x+\ud x,y,t) \big) - \sin \big(\theta (x,y,t)\big)\Big]$$
$$F_y = F_t\ud x\Big[\sin\big( \beta (x,y+\ud y,t) \big) - \sin \big(\beta (x,y,t)\big)\Big]$$
\noindent We can confidently use the small angle approximation for $\sin$ in the x direction
$$ \sin(\theta) \approx \tan(\theta) = \frac{\partial u}{\partial x} = u_x$$
\noindent and likewise in the y direction
$$ \sin(\beta) \approx \tan(\beta) = \frac{\partial u}{\partial y} = u_y$$
\noindent to get our equations into the form
$$F_x = F_t\ud y\Big[u_x(x+\ud x,y,t) - u_x(x,y,t)\Big]$$
$$F_y = F_t\ud x\Big[u_y(x,y+\ud y,t) - u_y(x,y,t)\Big]$$
\noindent From there we can sum these forces and plug them back into Equation \ref{no_balance}
$$\rho \ud x\ud y \frac{\partial^2u}{\partial t^2} = F_t\bigg[\ud y\Big[u_x(x+\ud x,y,t) - u_x(x,y,t)\Big]+\ud x\Big[u_y(x,y+\ud y,t) - u_y(x,y,t)\Big] \bigg]$$
\noindent We then divide by $\ud x$ and $\ud y$ and take the limit as $\ud x,\ud y \to 0$:
$$\rho\frac{\partial^2u}{\partial t^2} = \lim_{\ud x,\ud y\to 0} F_t \bigg[ \frac{u_x(x+\ud x,y,t) - u_x(x,y,t)}{\ud x} + \frac{u_y(x,y+\ud y,t) - u_y(x,y,t)}{\ud y} \bigg]$$
\noindent We recognize that we now have derivatives in the form of difference quotients, which in the limit become the partial derivatives of $u_x$ and $u_y$ (recall that $u$ is a function of multiple variables):
\begin{equation}
\rho \frac{\partial^2u}{\partial t^2} = F_t\bigg[\frac{\partial}{\partial x}u_x + \frac{\partial}{\partial y}u_y \bigg] = F_t\bigg[\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\bigg]
\end{equation}
\noindent Dividing by the uniform tension $F_t$, we reach our final form.
\begin{equation}
\frac{\rho}{F_t}\frac{\partial^2u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}
\end{equation}
\noindent We can adhere to standard conventions and write our final 2D wave equation as
\begin{align}
a^2 \frac{\partial^2u}{\partial t^2} &= \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} & a &= \sqrt{\frac{\rho}{F_t}}
\label{final_eq}
\end{align}
\noindent Equation \ref{final_eq} is also commonly written using the Laplace operator:
\begin{equation}
a^2 \frac{\partial^2u}{\partial t^2} = \nabla^2 u
\end{equation}
\paragraph{Initial and boundary conditions}
The initial state (i.e. at $t=0$) of the system is given by two functions:
\begin{itemize}
\item the vertical position everywhere on the membrane, $u(x,y,0)=u_0(x,y)$;
\item the vertical velocity of the membrane, $\frac{\partial u(x,y,0)}{\partial t}=u_1(x,y)$. When --- as usually assumed --- the membrane starts at rest, $u_1=0$.
\end{itemize}
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{Figures/2D_waves_boundary_conditions.png}
\caption{The membrane system's boundaries}
\label{2D_waves_boundary_conditions.fig}
\end{figure}
\noindent The membrane is rectangular with edge lengths $a$ and $b$, so its area is $ab$.
For the membrane, the boundary conditions can be of two types (or a combination of both):
\begin{itemize}
\item The vertical positions of the 4 edges of the membrane remain fixed at all times, usually at 0, i.e., $u(x,0,t)=0$, $u(0,y,t)=0$, $u(a,y,t)=0$, $u(x,b,t)=0$.
\item Another type of boundary condition is the reflecting or no-flux boundary condition: this implies a derivative condition along the normal at the boundary and it is more technical to write (though not less meaningful in terms of physics).
\end{itemize}
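\noindent For instance, writing the edge lengths as $a$ and $b$ as above (not to be confused with the coefficient $a$ of Equation \ref{final_eq}), one can verify directly that for positive integers $m,n$ the standing-wave mode
$$u_{mn}(x,y,t) = \sin\Big(\frac{m\pi x}{a}\Big)\sin\Big(\frac{n\pi y}{b}\Big)\cos(\omega_{mn} t),
\qquad \omega_{mn} = \pi\sqrt{\frac{F_t}{\rho}}\sqrt{\frac{m^2}{a^2}+\frac{n^2}{b^2}}$$
satisfies the fixed-edge boundary conditions, the at-rest initial condition $u_1=0$, and Equation \ref{final_eq}: indeed $\frac{\partial^2u_{mn}}{\partial t^2} = -\omega_{mn}^2\,u_{mn}$ while $\frac{\partial^2 u_{mn}}{\partial x^2} + \frac{\partial^2 u_{mn}}{\partial y^2} = -\pi^2\Big(\frac{m^2}{a^2}+\frac{n^2}{b^2}\Big)u_{mn}$, and the two sides agree by the choice of $\omega_{mn}$.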
Some hints for the control and the cost function of the 2D wave equation:
\begin{itemize}
\item Control: Maintain the highest vertical deflection of the membrane to be some constant height H
\item The cost function then could be finding the smallest force exerted on the membrane to maintain the highest height H in the membrane system
\end{itemize}
\documentclass[11pt, oneside]{article} % use "amsart" instead of "article" for AMSLaTeX format
% \usepackage{draftwatermark}
% \SetWatermarkText{Draft}
% \SetWatermarkScale{5}
% \SetWatermarkLightness {0.9}
% \SetWatermarkColor[rgb]{0.7,0,0}
\usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\geometry{landscape} % Activate for for rotated page geometry
%\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}				% Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
% TeX will automatically convert eps --> pdf in pdflatex
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{mathrsfs}
\usepackage{hyperref}
\usepackage{url}
\usepackage{subcaption}
\usepackage{authblk}
\usepackage{mathtools}
\usepackage{graphicx}
\usepackage[export]{adjustbox}
\usepackage{fixltx2e}
\usepackage{hyperref}
\usepackage{alltt}
\usepackage{color}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{float}
\usepackage{bigints}
\usepackage{braket}
\usepackage{siunitx}
\theoremstyle{definition}
\newtheorem{thm}{Theorem}[section]
% \newtheorem{defn}[thm]{Definition}
\newtheorem{definition}{Definition}[section]
\newtheorem{example}{Example}[section]
% \newtheorem[thm]{exmp}
\newtheorem{proposition}{Proposition}[section]
\newcommand{\veq}{\mathrel{\rotatebox{90}{$=$}}}
\DeclareMathOperator{\bda}{\Big \downarrow}
\DeclareMathOperator{\mymod}{\text{mod}}
\DeclareMathOperator{\E}{\mathbb{E}}
\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\newcommand{\argmin}{\operatornamewithlimits{argmin}}
\title{A Few Notes on Groups, Rings, and Fields}
\author{David Meyer \\ dmm@\{1-4-5.net,uoregon.edu\}}
\date{Last update: September 10, 2017} % Activate to display a given date or no date
\begin{document}
\maketitle
\section{Introduction}
Suppose we want to solve an equation of the form
\begin{equation}
f(x) = x^{n-1} + a_{n-2}x^{n-2} + a_{n-3}x^{n-3} + \cdots + a_{1}x + a_0 = 0
\label{eqn:f(x)}
\end{equation}
\bigskip
\noindent
where the coefficients\footnote{Note that the largest degree term ($x^{n-1}$) has coefficient 1. This is called a \emph{monic} polynomial.}
$a_i \in \mathbb{Q}$. We can notice quite a few interesting things about $f(x)$. For example, if $R$ is a ring then the ring of polynomials in $x$
with coefficients in $R$, denoted $R[x]$, consists of all formal sums
\begin{equation*}
f(x) = \sum\limits_{i = 0}^{\infty} a_ix^i
\end{equation*}
\bigskip
\noindent
where $a_i = 0$ for all but finitely many values of $i$.
\bigskip
\noindent
The fundamental theorem of algebra \cite{steed2014} tells us that for any $n > 0$
and arbitrary complex coefficients $a_{n-1}, \hdots, a_0 \in \mathbb{C}$ there is a complex solution
$x = \lambda \in \mathbb{C}$. If we iterate the process we find that
\begin{equation}
f(x) = (x - \lambda_0)(x - \lambda_1) \cdot \hdots \cdot (x - \lambda_{n - 1}) = 0
\label{eqn:factorization}
\end{equation}
\bigskip
\noindent
for $\lambda_0, \lambda_1, \hdots, \lambda_{n-1} \in \mathbb{C}$. Here $f(x) = 0$ iff $x = \lambda_j$ for some
$j \in \{0,1,\hdots, n-1\}$.
\bigskip
\noindent
\textbf{Aside:} What is being assumed here? Well, we are assuming that if $r \cdot s = 0$ then either $r$ or $s$ (or both) equal zero.
If $r \neq 0$ and $s \neq 0$ but $r \cdot s = 0$ we call $r$ and $s$ \emph{zero divisors}. A commutative ring with no zero divisors
is called an \emph{integral domain}\footnote{Saying that $F$ has no zero divisors is equivalent to saying that $F$ has a cancellation law.}.
The canonical example of an integral domain is the integers $\mathbb{Z}$.
\bigskip
\noindent
BTW, why is $\mathbb{Z}$ not a field? Well, consider for example that $2 \in \mathbb{Z}$ but $\frac{1}{2} \notin \mathbb{Z}$ so not every
non-zero $n \in \mathbb{Z}$ has an inverse in $\mathbb{Z}$ and so $\mathbb{Z}$ is not a field. Every \emph{finite}
integral domain is a field however (Theorem \ref{thm:finite_id_is_a_field}).
\bigskip
\noindent
Note that if we have zero divisors then the factorization shown in Equation \ref{eqn:factorization} might not find all of
the roots of $f(x)$ (values of $x$ for which $f(x) = 0$). Why? Consider the following example:
\begin{equation}
\begin{array}{rcll}
x^2 + 5x + 6 \equiv 0 \text{ mod } 12
&\Rightarrow& (x + 2) \cdot (x + 3) \equiv 0 \text{ mod } 12
\end{array}
\label{eqn:roots}
\end{equation}
\bigskip
\noindent
Here we can read off the roots $x \equiv -2 \text{ mod } 12 \Rightarrow x = 10 \text{ mod } 12$ and
$x \equiv -3 \text{ mod } 12 \Rightarrow x = 9 \text{ mod } 12$. So we have two roots (mod 12)
at $x = 9$ and $x = 10$. But are these all of the roots? Well, the answer is no. Consider
$f(1) \text{ mod } 12 \equiv (1^2 + 5 + 6) \text{ mod } 12 \equiv 0 \text{ mod } 12$. In addition,
$f(6) \text{ mod } 12 \equiv (36 + 30 +6) \text{ mod } 12 \equiv 72 \text{ mod } 12 \equiv 0 \text{ mod } 12$.
\bigskip
\noindent
So the roots of Equation \ref{eqn:roots} are $\{1,6,9,10\}$. Why were we only able to find
two of the roots (9 and 10) by factoring? It is because the ring $\mathbb{Z}_{12}$ has
zero divisors. What are the zero divisors in $\mathbb{Z}_{12}$? Well
\begin{equation}
\begin{array}{lrcll}
\,\; 2 \cdot 6 \equiv 12 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
\,\; 3 \cdot 4 \equiv 12 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
\,\; 4 \cdot 3 \equiv 12 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
\,\; 6 \cdot 2 \equiv 12 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
\,\; 8 \cdot 3 \equiv 24 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
\,\; 9 \cdot 8 \equiv 72 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
10 \cdot 6 \equiv 60 \text{ mod } 12 \equiv 0 \text{ mod } 12 \\
\end{array}
\end{equation}
\bigskip
\noindent
Note that if $p$ is a prime then $\mathbb{Z}_{p}$ is an integral domain (has no zero divisors).
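\bigskip
\noindent
To see why, suppose $a \cdot b \equiv 0 \text{ mod } p$ for $a, b \in \mathbb{Z}_p$. Then $p$ divides $a \cdot b$, and since $p$ is prime, $p$ divides
$a$ or $p$ divides $b$ (Euclid's lemma); that is, $a \equiv 0 \text{ mod } p$ or $b \equiv 0 \text{ mod } p$, so $\mathbb{Z}_p$ has no zero divisors.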
\bigskip
\noindent
So the condition we need is that the coefficients are drawn from an integral domain.
\begin{thm}
Every field $F$ is an integral domain.
\label{thm:integral_domain}
\end{thm}
\noindent
\textbf{Proof:} Recall that if $F$ is a field then each non-zero $r \in F$ has an inverse $r^{-1}$.
So suppose $r,s \in F$ and $r \neq 0$ such that $r \cdot s = 0$. Then the claim is that $s = 0$.
Why? Consider
\begin{equation}
\begin{array}{rcll}
r \cdot s
&=& 0 &\quad \mathrel{\#} \text{assumption with $r \neq 0$} \\
&\Rightarrow& r^{-1} \cdot (r \cdot s) = r^{-1} \cdot 0 &\quad \mathrel{\#} \text{multiply both sides by $r^{-1}$} \\
&\Rightarrow& r^{-1} \cdot (r \cdot s) = 0 &\quad \mathrel{\#} x \cdot 0 = 0 \\
&\Rightarrow& (r^{-1} \cdot r) \cdot s = 0 &\quad \mathrel{\#} \text{multiplication is associative} \\
&\Rightarrow& s = 0 &\quad \mathrel{\#} r^{-1} \cdot r = 1
\end{array}
\end{equation}
\bigskip
\noindent
So $r$ is not a zero divisor. But every non-zero element $r$ of the field $F$ has an inverse ($r$ is a ``unit'')
so $F$ has no zero divisors and is by definition an integral
domain. $\square$
\bigskip
\noindent
Theorem \ref{thm:finite_id_is_a_field} below shows a limited version of this theorem in the other direction:
Every finite integral domain is a field.
%\newpage
\section{Splitting Fields}
Recall that the ring of polynomials over a field $F$, denoted $F[x]$, is defined as follows\footnote{I reversed the order
of Equation \ref{eqn:f(x)} since it's an easier form to work with. In addition, we can assume $a_{n-1} = 1$ since $f(x)$
is monic.}
\begin{definition}
\textbf{Polynomial Ring over $\mathbf{F}$:} The polynomial ring over $F$ is defined as
\begin{equation*}
F[x] = \{f(x) \mid f(x) = a_0x^0 + a_1x^1 +a_2x^2 + \hdots + a_{n-1}x^{n-1} \}
\end{equation*}
\bigskip
\noindent
with $a_i \in F$ and with the usual ring properties.
\bigskip
\noindent
Aside on notation: while $F[x]$ is defined as above, $F(x)$ is defined differently.
\begin{equation*}
F(x) = \Bigg \{\frac{p(x)}{q(x)} \; \bigg \lvert \; p(x),q(x) \in F[x],\; q(x) \neq 0 \Bigg \}
\end{equation*}
\label{def:polynomials}
\end{definition}
\noindent
There doesn't seem to be any standard convention as to the definitions of $F[x]$ vs. $F(x)$. I've seen $F(x)$ used to mean what I
defined as $F[x]$ above.
\begin{definition}
\textbf{Splitting Field:} Let $f \in F[x]$. An extension field\footnote{$E$ is an extension field of $F$ if $F$ is a subfield of $E$.}
$E$ of $F$, written $E/F$, is called a \emph{splitting field} for $f$ over $F$ if the following two
conditions are satisfied:
\begin{enumerate}
\item $f$ factors into linear polynomials (``splits'' or ``splits completely'') in $E[x]$
\item $f$ does not split completely in $K[x]$ for any $F \subsetneq K \subsetneq E$
\end{enumerate}
\label{def:splitting_field}
\end{definition}
\subsection{The Evaluation Homomorphism: $\boldsymbol{e: F[x] \rightarrow F[\alpha]}$}
TBD
%\newpage
\subsection{Examples}
\begin{example}
$\mathbb{Q}[\sqrt{2}]$ is a splitting field for $x^2 - 2$ over $\mathbb{Q}$. \\
\noindent
Why? Consider the conditions in Definition \ref{def:splitting_field}: First,
the polynomial $x^2 -2$ factors into linear polynomials (``splits'') in $\mathbb{Q}[\sqrt{2}][x]$: $x^2 -2 = (x - \sqrt{2})(x + \sqrt{2})$.
To see this, consider
\begin{equation*}
\begin{array}{rcll}
\mathbb{Q}[\sqrt{2}]
&=& a_0(\sqrt{2})^0 + a_1(\sqrt{2})^1 + a_2(\sqrt{2})^2 + a_3(\sqrt{2})^3 + a_4(\sqrt{2})^4 + \cdots + a_{n-1}(\sqrt{2})^{n-1}
&\mathrel{\#} \text{defn $\mathbb{Q}[\sqrt{2}]$} \\
&=& a_0 + a_1 \sqrt{2}+ a_2 2 + a_3 2 \sqrt{2} + a_4 4 + a_5 4 \sqrt{2} + \cdots + a_{n-1}2^{\frac{n-1}{2}}
&\mathrel{\#} \text{simplify} \\
&=& (a_0 + a_2 2 + a_4 4 + \cdots) + (a_1 + a_3 2 + a_5 4 + \cdots) \sqrt{2}
&\mathrel{\#} \text{group terms} \\
&=& a + b \sqrt{2}
&\mathrel{\#} a +b \sqrt{2} \in \mathbb{Q} [\sqrt{2}]\\
\end{array}
\end{equation*}
\bigskip
\noindent
Note that here $a = a_0 + a_2 2 + a_4 4 + \cdots$ and $b = a_1 + a_3 2 + a_5 4 + \cdots$ and
that $a,b \in \mathbb{Q}$ since $\mathbb{Q}$ is closed under addition and multiplication.
\bigskip
\noindent
Next we need to see what $\mathbb{Q}[\sqrt{2}][x]$ looks like. We saw above that the elements of $\mathbb{Q}[\sqrt{2}]$ have the
form $a +b \sqrt{2}$ for $a,b \in \mathbb{Q}$. So an element $p(x) \in \mathbb{Q}[\sqrt{2}][x]$ looks like (Definition \ref{def:polynomials})
\begin{equation*}
\begin{array}{rlll}
p(x)
&=& \sum\limits_{i = 0}^{n-1} (a_i+ b_i \sqrt{2}) x^i \\
&=& (a_0 +b_0 \sqrt{2})x^0 + (a_1 +b_1 \sqrt{2})x^1 + (a_2 +b_2 \sqrt{2})x^2 + \cdots + (a_{n-1} + b_{n-1}\sqrt{2}) x^{n-1}
\end{array}
\end{equation*}
\bigskip
\noindent
for some $a_i, b_i \in \mathbb{Q}$.
\bigskip
\noindent
Now, if we consider the case in which $a_0 = 0, b_0 = 1, a_1 = 1, b_1 = 0$ and
$a_i = b_i = 0$ for $1 < i \leq n - 1$ we get an element $p(x) \in \mathbb{Q}[\sqrt{2}][x]$
that looks like
\begin{equation*}
\begin{array}{rlll}
p(x)
&=& (a_0 + b_0 \sqrt{2}) x^0 + (a_1 + b_1 \sqrt{2}) x^1 + \sum\limits_{i = 2}^{n-1} (a_i+ b_i \sqrt{2}) x^i \\
&=& (0 + 1 \sqrt{2}) 1 + (1 + 0 \sqrt{2}) x + \sum\limits_{i = 2}^{n-1} 0 \\
&=& \sqrt{2} + x \\
&=& x + \sqrt{2}
\end{array}
\end{equation*}
\bigskip
\noindent
so we can see that $x^2 - 2$ splits in $\mathbb{Q}[\sqrt{2}][x]$ since $x^2 -2 = (x - \sqrt{2})(x + \sqrt{2})$ (let $b_0 = -1$ to get the $(x - \sqrt{2})$ factor).
\bigskip
\noindent
So the first criterion of Definition \ref{def:splitting_field} is satisfied, but is there a field $K$ that splits $x^2 - 2$ such that
$\mathbb{Q} \subsetneq K \subsetneq \mathbb{Q}[\sqrt{2}]$ (the second criterion in Definition \ref{def:splitting_field})? Well, if we consider
$\mathbb{Q}[\sqrt{2}]$ as a vector space over $\mathbb{Q}$ we see that it has dimension 2 (written $[\mathbb{Q}[\sqrt{2}]: \mathbb{Q}] = 2$),
so there is no field $K$ such that $\mathbb{Q} \subsetneq K \subsetneq \mathbb{Q}[\sqrt{2}]$. So the second criterion is satisfied and so
$\mathbb{Q}[\sqrt{2}]$ is a splitting field for $f(x) = x^2 - 2$.
\end{example}
\begin{example}
$\mathbb{Q}[\sqrt[3]{2}]$ is \emph{not} a splitting field for $x^3 - 2$ over $\mathbb{Q}$.
\bigskip
\noindent
Why? Well, it is because the polynomial $x^3 - 2$ does not split in $\mathbb{Q}[\sqrt[3]{2}][x]$. But still why? After all,
$x^3 - 2$ does have a root at $\sqrt[3]{2}$ in $\mathbb{Q}[\sqrt[3]{2}]$. However,
if we divide $x^3 - 2$ by $x - \sqrt[3]{2}$ we see that
\begin{equation}
x^3 - 2 = (x - \sqrt[3]{2})(x^2 + \sqrt[3]{2} x + (\sqrt[3]{2})^2)
\label{eqn:x^3-2}
\end{equation}
\bigskip
\noindent
and it turns out that $h(x) = x^2 + \sqrt[3]{2} x + (\sqrt[3]{2})^2$ is \emph{irreducible}\footnote{A polynomial $p(x)$ is
irreducible over a field if it cannot be written as a product $p(x) = g(x) \cdot h(x)$ of two polynomials of lower degree with coefficients in that field.} in $\mathbb{Q}[\sqrt[3]{2}]$.
This is because the roots of $h(x)$ are complex, but everything in $\mathbb{Q}[\sqrt[3]{2}]$ is real.
\end{example}
\bigskip
\noindent
So what is a splitting field for $x^3 - 2$ over $\mathbb{Q}$? Well, we know $x^3-2$ factors as
shown in Equation \ref{eqn:x^3-2} in $\mathbb{Q}[\sqrt[3]{2}]$, so one approach would be to adjoin the (complex) roots of $x^2 + \sqrt[3]{2} x + (\sqrt[3]{2})^2$
to $\mathbb{Q}[\sqrt[3]{2}]$.
\bigskip
\noindent
The idea to "keep adding roots of irreducible factors" is the core idea in the proof that every polynomial has a splitting field. This
observation leads to the following proposition:
\bigskip
\begin{proposition}
Let $f \in F[x]$ and $E$ be an extension field of $F$. If $E$ contains the roots $\alpha_1, \cdots, \alpha_n$ of $f$ and $f$ splits
in $F[\alpha_1, \cdots, \alpha_n][x]$ then $F[\alpha_1, \cdots, \alpha_n]$ is a splitting field for $f$ over $F$.
\end{proposition}
\noindent
\textbf{Proof: } Because $f$ splits in $F[\alpha_1, \hdots, \alpha_n]$ we only need to show that $f$ doesn't split
in a proper subfield of $F[\alpha_1, \hdots, \alpha_n]$ containing $F$. Suppose $K$ is such a proper subfield. Then there is at least one root
$\alpha_i$ such that $\alpha_i \notin K$. But this would mean that $f$ would not split in $K$, because
if it did then $\alpha_i$ would be a root of one of the linear factors in $K[x]$; this would contradict our assumption
that $\alpha_i \notin K$. So such a $K$ does not exist. $\square$
\bigskip
\noindent
This result guarantees that if you can find all the roots of a polynomial in \emph{some} extension field, then you can construct a splitting field easily.
This is great for polynomials that are in, say, $\mathbb{Q}[x]$ because it is often easy to find roots in $\mathbb{C}$. But what about more obscure fields like
$\mathbb{Z}_7$, where we don't have a good understanding of its extension fields? It is not obvious (at least to me) that polynomials
over these fields have splitting fields, but luckily it turns out they do.
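\bigskip
\noindent
For a concrete example, take $f(x) = x^2 + 1 \in \mathbb{Z}_7[x]$. The squares mod 7 are $\{1, 2, 4\}$, so $-1 \equiv 6$ is not a square and $f$ has
no root in $\mathbb{Z}_7$; being quadratic, it is therefore irreducible over $\mathbb{Z}_7$. Adjoining a root $\alpha$ of $f$ gives $\mathbb{Z}_7[\alpha]$,
a field with $49$ elements in which $f$ splits as $(x - \alpha)(x + \alpha)$; since $[\mathbb{Z}_7[\alpha] : \mathbb{Z}_7] = 2$, there is no proper
intermediate field, so $\mathbb{Z}_7[\alpha]$ is a splitting field for $f$ over $\mathbb{Z}_7$.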
\bigskip
\noindent
\textbf{Aside: } We saw that every field is an integral domain (Theorem \ref{thm:integral_domain}). Here we
observe that any finite integral domain (like $\mathbb{Z}_7$) is a field.
\bigskip
\begin{thm}
Every finite integral domain is a field.
\label{thm:finite_id_is_a_field}
\end{thm}
\bigskip
\noindent
\textbf{Proof:} The proof is based on the fact that since $R$ is an integral domain it has a cancellation law (or equivalently, $R$ has no zero divisors).
Having a cancellation law means that
\begin{equation}
ab = ac \implies b = c
\label{eqn:cancellation_law}
\end{equation}
\bigskip
\noindent
To see why any finite integral domain $R$ is a field, take any nonzero $r \in R$ and consider its powers $r, r^2, r^3, \hdots$; since
$R$ has no zero divisors, $r^k \neq 0$ for all $k \geq 1$. Since $R$ is finite we will have $r^k = r^l$ for some $k$ and $l$ such that $k > l$. Then
\bigskip
\begin{equation*}
\begin{array}{rlll}
r^k
&=& r^l & \qquad \mathrel{\#} \text{$R$ is a finite integral domain} \\
&\Rightarrow& r \cdot r^{k-1} = r \cdot r^{l -1} & \qquad \mathrel{\#} \text{factor out $r$} \\
&\Rightarrow& r^{k-1} = r^{l -1} & \qquad\mathrel{\#} \text{use cancellation law (cancel $r$, Equation \ref{eqn:cancellation_law}}) \\
&\Rightarrow& r \cdot r^{k-2} = r \cdot r^{l -2} & \qquad\mathrel{\#} \text{factor out $r$} \\
&\Rightarrow& r^{k-2} = r^{l -2} & \qquad\mathrel{\#} \text{use cancellation law (cancel $r$, Equation \ref{eqn:cancellation_law})} \\
&\vdots && \qquad\mathrel{\#} \text{iterate $l - 1$ times} \\
&\Rightarrow& r^{k - l + 1} = r^1 & \qquad\mathrel{\#} \text{...} \\
&\Rightarrow& r \cdot r^{k - l} = r \cdot r^0 & \qquad\mathrel{\#} \text{factor out $r$} \\
&\Rightarrow& r^{k-l} = r^{0} & \qquad\mathrel{\#} \text{use cancellation law (cancel $r$, Equation \ref{eqn:cancellation_law})} \\
&\Rightarrow& r^{k-l} = 1 & \qquad\mathrel{\#} r^0 = 1
\end{array}
\end{equation*}
\bigskip
\noindent
So $ r^{k-l} = 1 $. If $k-l = 1$ then $r = 1$, which is its own inverse. Otherwise
$k-l > 1$ and $r^{k-l} = 1 \Rightarrow r \cdot r^{k-l-1} = 1$. So $r^{-1} = r^{k-l -1}$ and
every non-zero $r \in R$ has an inverse. Thus every non-zero $r \in R$ is a unit and so $R$ is a field. $\square$
\section{Note: Gauss and the Gaussian Integers $\boldsymbol{\mathbb{Z}[i]}$}
First, recall that the Gaussian Integers $\mathbb{Z}[i] = \{a + bi \mid a,b \in \mathbb{Z} \text{ and } i = \sqrt{-1} \}$.
Gauss found that the polynomial $a^2 + b^2$ had a unique factorization (would ``split'') in $\mathbb{Z}[i]$:
\begin{equation*}
\begin{array}{rlll}
a^2 + b^2
&=& (a - bi)(a + bi)
\end{array}
\end{equation*}
\bigskip
\noindent
The natural question was whether there are other values that could be adjoined to $\mathbb{Z}$ to form a new number
system in which some polynomial would split. For example
\begin{equation*}
\begin{array}{rlll}
\mathbb{Z}[\sqrt{-5}]
&=& \{a+b\sqrt{-5} \mid a,b \in \mathbb{Z}\}
\end{array}
\end{equation*}
\bigskip
\noindent
Here we can factor, say, $6$ in $\mathbb{Z}[\sqrt{-5}]$ as $6 = 2 \cdot 3 = (1 - \sqrt{-5}) \cdot (1 + \sqrt{-5})$; this factorization
into irreducibles is not unique, so $\mathbb{Z}[\sqrt{-5}]$ does not have unique factorization. So the natural question
is for which adjoined values we do get unique factorization into irreducible factors. It turns out there
are precisely nine such numbers, $\{1,2,3,7,11,19,43,67,163\}$ (Gauss discovered this sequence but couldn't prove that these were
the only such numbers). That is, only the negative square root of these numbers can be adjoined to $\mathbb{Z}$ to get a ring with
unique factorization. This is the set
\begin{equation*}
\begin{array}{rlll}
\{\sqrt{-1}, \sqrt{-2}, \sqrt{-3}, \sqrt{-7}, \sqrt{-11}, \sqrt{-19}, \sqrt{-43}, \sqrt{-67}, \sqrt{-163}\}
\end{array}
\end{equation*}
\bigskip
\noindent
Interestingly, a Heegner number (so named for the amateur mathematician who proved Gauss's conjecture) is a square-free positive integer $d$
such that the imaginary quadratic field $\mathbb {Q} [\sqrt {-d}]$ has unique factorization.
\bigskip
\noindent
These numbers turn up in all kinds of interesting places, including Ramanujan's constant $e^{{\pi {\sqrt {163}}}}$. For example
\begin{center}
\begin{equation*}
\begin{array}{llll}
e^{{\pi {\sqrt {19}}}} &\approx 96^{3} + 744 - 0.22 \\
e^{{\pi {\sqrt {43}}}} &\approx 960^{3}+744-0.000\,22\\
e^{{\pi {\sqrt {67}}}} &\approx 5\,280^{3}+744-0.000\,0013\\
e^{{\pi {\sqrt {163}}}} &\approx 640\,320^{3}+744-0.000\,000\,000\,000\,75
\end{array}
\end{equation*}
\end{center}
\noindent
or alternatively
\begin{equation*}
\begin{array}{llll}
e^{{\pi {\sqrt {19}}}} &\approx 12^{3}(3^{2}-1)^{3}+744-0.22 \\
e^{{\pi {\sqrt {43}}}} &\approx 12^{3}(9^{2}-1)^{3}+744-0.000\,22\\
e^{{\pi {\sqrt {67}}}} &\approx 12^{3}(21^{2}-1)^{3}+744-0.000\,0013\\
e^{{\pi {\sqrt {163}}}} &\approx 12^{3}(231^{2}-1)^{3}+744-0.000\,000\,000\,000\,75
\end{array}
\end{equation*}
\bigskip
\begin{thm}
If $m$ is an integer then either $m^2 \equiv 0 \; (\mymod 4)$ or $m^2 \equiv 1 \; (\mymod 4)$.
\end{thm}
\bigskip
\noindent \textbf{Proof:} Let $m \in \mathbb{Z}$. Then $m$ is either even or $m$ is odd.
\begin{equation*}
\begin{array}{llll}
\textbf{Case I:}
& \text{Assume $m$ is even.} \\
& \text{If $m$ is even then there exists $k \in \mathbb{Z}$ such that $m = 2k$.} \\
& \text{Then $m^2 = 4k^2$, and so $4 \lvert m^2$ and hence $m^2 \equiv 0 \; (\mymod 4)$.} \\ \\
\textbf{Case II:}
& \text{Assume $m$ is odd.} \\
& \text{If $m$ is odd then there exists $k \in \mathbb{Z}$ such that $m = 2k + 1$.} \\
& \text{Then $m^2 = 4k^2 +4k + 1 \Rightarrow m^2 - 1 = 4(k^2 + k)$ so} \\
& \text{$4 \lvert (m^2 - 1)$. Therefore $(m^2 - 1) \equiv 0 \; (\mymod 4)$ and} \\
&m^2 \equiv 1 \; (\mymod 4).
\end{array}
\end{equation*}
\bigskip
\noindent
Thus if $m$ is an integer then either $m^2 \equiv 0 \; (\mymod 4)$ or $m^2 \equiv 1 \; (\mymod 4)$. $\square$
\bigskip
\noindent
Recall that a \emph{unit} in a ring $R$ is an element which has a multiplicative inverse.
\begin{proposition}
Let $F$ be a field and let $F[x]$ be the polynomial ring over $F$. Then units in $F[x]$ are exactly the nonzero elements of $F$.
\end{proposition}
\noindent
\textbf{Proof:} First, observe that the nonzero elements of $F$ are invertible in $F$ since $F$ is a field. These elements are
also invertible in $F[x]$, since their inverses in $F$ are also (constant) elements of $F[x]$.
\bigskip
\noindent
Suppose, on the other hand, that $f(x) \in F[x]$ is invertible. That is, $f(x)g(x) = 1$
for some $g(x) \in F [x]$. Then $\deg (f \cdot g) = \deg f + \deg g = \deg 1 = 0$,
which requires both $f$ and $g$ to have degree 0. In particular, $f$
must have degree 0. So $f$ is a nonzero constant, i.e. $f$ is an element of $F$. $\square$
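\bigskip
\noindent
The hypothesis that $F$ is a field (so $F$ has no zero divisors) matters here: the degree formula $\deg (f \cdot g) = \deg f + \deg g$ can fail when
the coefficient ring has zero divisors. For example, in $\mathbb{Z}_4[x]$ we have $(1 + 2x)(1 + 2x) = 1 + 4x + 4x^2 = 1$, so $1 + 2x$ is a unit of
positive degree.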
\begin{proposition}
Let $R$ be a commutative ring and let $a$ be a unit in $R$. Then $a$ divides $r$ for all $r \in R$.
\end{proposition}
\noindent
\textbf{Proof:} First assume $1 \in R$ ($R$ is a ring rather than a rng). Then $a$ a unit in $R$ means that there exists $b \in R$ such
that $ab = 1$. Note that $ab \in R$ since $R$ is closed under multiplication.
\bigskip
\noindent
Now let $r$ be an arbitrary element of $R$. Then
\begin{equation*}
\begin{array}{rlll}
r
&=& 1 \cdot r &\qquad\qquad\mathrel{\#} \text{$1$ is the multiplicative identity}\\
&=& (ab) \cdot r &\qquad\qquad\mathrel{\#} \text{$a$ a unit $\Rightarrow$ $1 = ab$ with $ab \in R$} \\
&=& a \cdot (br) &\qquad\qquad\mathrel{\#} \text{multiplication is associative} \\
&\Rightarrow& a|r &\qquad\qquad\mathrel{\#}\text{$a|r \Rightarrow r = a \cdot m$. Here $m = br$. $\square$}
\end{array}
\end{equation*}
\bigskip
\begin{proposition}
Let $R$ be a commutative ring and let $a$ and $b$ be units in $R$. Then $ab$ is a unit in $R$.
\end{proposition}
\noindent
\textbf{Proof:} Let $a, b \in R$ be units. Then there exist $c, d \in R$ such that $ac = 1$ and
$bd = 1$. To show that $ab$ is a unit in $R$ consider
\begin{equation*}
\begin{array}{rlll}
ac
&=& a(1c) &\qquad\qquad\qquad \mathrel{\#} c = 1 c \\
&=& a(1)c &\qquad\qquad\qquad \mathrel{\#} \text{multiplication is associative} \\
&=& a(bd)c &\qquad\qquad\qquad \mathrel{\#} \text{$b$ a unit so $1 = bd$} \\
&=& abdc &\qquad\qquad\qquad \mathrel{\#} \text{multiplication is still associative} \\
&=& (ab)(dc) &\qquad\qquad\qquad \mathrel{\#} \text{multiplication is associative} \\
&=& 1 &\qquad\qquad\qquad \mathrel{\#} ac = 1 \\
\end{array}
\end{equation*}
\bigskip
\noindent
So $(ab)(dc) = 1$ which implies that $ab$ is a unit in $R$ with inverse $dc$. $\square$
\newpage
\section{Acknowledgements}
\bibliographystyle{plain}
\bibliography{/Users/dmm/papers/bib/qc}
\end{document}
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{setspace}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\title{Chapter 4\\Tensors and Differential Forms}
\author{solutions by Hikari}
\date{August 2021}
\begin{document}
\newcommand{\T}{\mathrm}
\newcommand{\V}{\mathbf}
\newcommand{\pdv}[2]{\frac{\partial#1}{\partial#2}}
\newcommand{\del}{\boldsymbol{\nabla}}
\newcommand{\VE}{\mathbf{\hat{e}}}
\newcommand{\EE}{\pmb{\varepsilon}}
\maketitle
\section*{4.1 Tensor Analysis}
\paragraph{4.1.1}
Let all the components of a tensor $\T{A}$ vanish in a coordinate system $K$. For any coordinate system $K'$, the components of $\T{A}$ in $K'$ are linear combinations of components of $\T{A}$ in $K$ according to the transformation laws of tensors, and is therefore zero. So in every coordinate systems, all the components of $\T{A}$ vanish.
\paragraph{4.1.2}
\[
A_{ij}=\sum_k\sum_l\pdv{(x^0)^k}{x^i}\pdv{(x^0)^l}{x^j}A^0_{kl}=\sum_k\sum_l\pdv{(x^0)^k}{x^i}\pdv{(x^0)^l}{x^j}B^0_{kl}=B_{ij}
\]
\paragraph{4.1.3}
Let the vector be $\V{A}$, and its components be $A^i$ and $(A')^i$ in the two reference frames. For $i=1,2,3$, $A^i=0$ and $(A')^i=0$. Applying the transformation law,
\[
(A')^i=\sum_j\pdv{(x')^i}{x^j}A^j=\pdv{(x')^i}{x^0}A^0
\]
For $i=1,2,3$, $(A')^i=0$, but at least one of $\pdv{(x')^i}{x^0}\neq0$, so $A^0$ must be zero. So all the components of $\V{A}$ in the first reference frame vanish, and by exercise 4.1.1, all the components of $\V{A}$ vanish in every reference frame. In particular, the zeroth component of $\V{A}$ vanish in every reference frame.
\paragraph{4.1.4}
Let $\T{A}$ be an isotropic second-rank tensor in 3-D space. Consider the $90^\circ$ rotation about $x_3$ axis. Then $(x')^1=x^2$, $(x')^2=-x^1$, $(x')^3=x^3$. So
\[
A^{11}=(A')^{11}=\sum_i\sum_j\pdv{(x')^1}{x^i}\pdv{(x')^1}{x^j}A^{ij}=\pdv{(x')^1}{x^2}\pdv{(x')^1}{x^2}A^{22}=A^{22}
\]
Similarly, we can prove $A^{22}=A^{33}$, so $A^{11}=A^{22}=A^{33}=k$, $k$ is a constant.
Consider the $180^\circ$ rotation about $x_3$ axis. Then $(x'')^1=-x^1$, $(x'')^2=-x^2$, $(x'')^3=x^3$. So
\[
A^{13}=(A'')^{13}=\sum_i\sum_j\pdv{(x'')^1}{x^i}\pdv{(x'')^3}{x^j}A^{ij}=\pdv{(x'')^1}{x^1}\pdv{(x'')^3}{x^3}A^{13}=(-1)(1)A^{13}=-A^{13}
\]
so $A^{13}=0$. Similarly, we can prove $A^{31}=A^{12}=A^{21}=A^{23}=A^{32}=0$. Therefore,
\[
A^{ij}=
\begin{cases}
k,\quad i=j\\
0,\quad i\neq j
\end{cases}
\]
which is $k\delta^i_j$.
\paragraph{4.1.5}
[First relation]
If $i=k$, then $R_{iklm}=-R_{kilm}=-R_{iklm}$, so $R_{iklm}=0$. The same holds for $l=m$, so $R_{iklm}\neq0$ only if $i\neq k$ and $l\neq m$. $R_{ik\_\_}$ will determine $R_{ki\_\_}$, and $R_{\_\_lm}$ will determine $R_{\_\_ml}$, so if we let $(i,k),(l,m)\in\{(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\}$, then all the other components are determined. So the number of independent components is $6\times6=36$.
\medskip
[Second relation]
If $(i,k)\neq(l,m)$, then $R_{iklm}$ determines $R_{lmik}$. So the number of components reduced is $C^6_2=15$, and the number of independent components becomes $36-15=21$.
\medskip
[Third relation]
If one of $k,l,m$ equals $i$, let it be $k$, then the relation becomes $R_{iilm}+R_{ilmi}+R_{imil}=0$. Using the first two relations, it becomes $R_{imil}+R_{miil}=0$, which is the first relation, so no new information is obtained. If two of $k,l,m$ are equal, let it be $k=l$, then the relation becomes $R_{ikkm}+R_{ikmk}+R_{imkk}=0$. Using the first relation, it becomes $R_{ikkm}+R_{ikmk}=0$, which is the first relation, so no new information is obtained. So the relation furnishes new information only if all four indices are different.
Using the first relation, $R_{iklm}+R_{ilmk}+R_{imkl}=0$ becomes $R_{ikml}+R_{ilkm}+R_{imlk}=0$, so the parity of the permutation of $k,l,m$ does not matter.
Using the first two relations, $R_{iklm}+R_{ilmk}+R_{imkl}=0$ becomes $R_{klmi}+R_{kmil}+R_{kilm}=0$, so whether the first index is $1,2,3$ or $4$ does not matter. Therefore, let $i=1$ and $(k,l,m)=(2,3,4)$; then we get one new equation, so the number of independent components becomes $21-1=20$.
\paragraph{4.1.6}
If two of $i,k,l,m$ are equal, let it be $i=k$, then $T_{iklm}=-T_{kilm}=-T_{iklm}$, so $T_{iklm}=0$. So $T_{iklm}\neq0$ only if all the indices are different. But there are only three possible values (3-D space) for the four indices, so at least two of $i,k,l,m$ are equal, so $T_{iklm}=0$. Therefore, there are no independent components.
\paragraph{4.1.7}
By the transformation law,
\[
(T')_{\cdots i}=\sum\cdots\sum_k\cdots\pdv{x^k}{(x')^i}T_{\cdots k}
\]
Defining $\left(\pdv{T}{x} \right)_{\cdots ij}=\pdv{T_{\cdots i}}{x_j}$. If the transformation is linear, then $\pdv{^2x^\mu}{(x')^j(x')^i}=0$ for all $\mu$. So
\[
\left(\pdv{T}{x} \right)'_{\cdots ij}=\pdv{(T')_{\cdots i}}{(x')^j}=\sum\cdots\sum_k\cdots\pdv{x^k}{(x')^i}\sum_l\pdv{x^l}{(x')^j}\pdv{T_{\cdots k}}{x^l}
\]
\[
=\sum\cdots\sum_k\sum_l\cdots\pdv{x^k}{(x')^i}\pdv{x^l}{(x')^j}\pdv{T_{\cdots k}}{x^l}
\]
\[
=\sum\cdots\sum_k\sum_l\cdots\pdv{x^k}{(x')^i}\pdv{x^l}{(x')^j}\left(\pdv{T}{x}\right)_{\cdots kl}
\]
which is the transformation law for tensors of rank $n+1$, so $\left(\pdv{T}{x} \right)_{\cdots ij}=\pdv{T_{\cdots i}}{x_j}$ is a tensor of rank $n+1$.
\paragraph{4.1.8}
By the transformation law,
\[
(T')_{ijk\cdots}=\sum_l\sum_m\sum_n\cdots\sum\pdv{x^l}{(x')^i}\pdv{x^m}{(x')^j}\pdv{x^n}{(x')^k}\cdots T_{lmn\cdots}
\]
Note that in Cartesian coordinates, $\pdv{x^m}{(x')^j}=\pdv{(x')^j}{x^m}$. So
\[
\sum_j\pdv{(T')_{ijk\cdots}}{(x')^j}=\sum_j\sum_l\sum_m\sum_n\cdots\sum\pdv{x^l}{(x')^i}\pdv{x^m}{(x')^j}\pdv{x^n}{(x')^k}\cdots\pdv{T_{lmn\cdots}}{(x')^j}
\]
\[=
\sum_l\sum_m\sum_n\cdots\sum\pdv{x^l}{(x')^i}\pdv{x^n}{(x')^k}\cdots\sum_j\pdv{(x')^j}{x^m}\pdv{T_{lmn\cdots}}{(x')^j}
\]
\[
=\sum_l\sum_m\sum_n\cdots\sum\pdv{x^l}{(x')^i}\pdv{x^n}{(x')^k}\cdots\pdv{T_{lmn\cdots}}{x^m}
\]
\[
=\sum_l\sum_n\cdots\sum\pdv{x^l}{(x')^i}\pdv{x^n}{(x')^k}\cdots\sum_m\pdv{T_{lmn\cdots}}{x^m}
\]
which is the transformation law for tensors of rank $n-1$, so $\sum_j\pdv{T_{ijk\cdots}}{x^j}$ is a tensor of rank $n-1$.
\paragraph{4.1.9}
When defining $x_4=ict$, the Lorentz transformations take the form
\[
\begin{pmatrix}
x'_1\\x'_2\\x'_3\\x'_4
\end{pmatrix}=
\begin{pmatrix}
\gamma&0&0&i\gamma\beta\\
0&1&0&0\\
0&0&1&0\\
-i\gamma\beta&0&0&\gamma
\end{pmatrix}
\begin{pmatrix}
x_1\\x_2\\x_3\\x_4
\end{pmatrix}
\]
or $\V{x'}=\T{U}\V{x}$, where $\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$, $\beta=\frac{v}{c}$. It can be verified that $\T{U}$ is orthogonal, so $\T{U}^{-1}=\T{U}^T$, and $\V{x}=\T{U}^T\V{x'}$.\quad $\pdv{x'_i}{x_j}=\T{U}_{ij}$, and $\pdv{x_j}{x'_i}=\T{U}^T_{ji}=\T{U}_{ij}$, so $\pdv{x'_i}{x_j}=\pdv{x_j}{x'_i}$. (Therefore, as long as the transformation is orthogonal, this relation holds.)
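\noindent
(Explicit check of the orthogonality claim, added for completeness: the nontrivial $2\times2$ block of $\T{U}\T{U}^T$ has diagonal entries $\gamma^2+(i\gamma\beta)^2=\gamma^2(1-\beta^2)=1$ and off-diagonal entries $\gamma(-i\gamma\beta)+(i\gamma\beta)\gamma=0$, so $\T{U}\T{U}^T$ is indeed the identity.)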
\[
({\square}^2)'=\sum_i\pdv{^2}{(x'_i)^2}=\sum_i\pdv{}{x'_i}(\pdv{}{x'_i})=\sum_i\sum_j\pdv{x_j}{x'_i}\pdv{}{x_j}(\pdv{}{x'_i})=\sum_i\sum_j\pdv{x_j}{x'_i}\pdv{^2}{x_j\partial x'_i}
\]
\[
=\sum_j\sum_i\pdv{x'_i}{x_j}\pdv{^2}{x'_i\partial x_j}=\sum_j\sum_i\pdv{x'_i}{x_j}\pdv{}{x'_i}(\pdv{}{x_j})=\sum_j\pdv{}{x_j}(\pdv{}{x_j})=\sum_j\pdv{^2}{{x_j}^2}={\square}^2
\]
so the d’Alembertian is invariant under Lorentz transformation.
\paragraph{4.1.10}
(Using the Einstein convention)
\[
K_{mn}A^mB^n=K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}(A')^i(B')^j=(K')_{ij}(A')^i(B')^j
\]
so
\[
\left[(K')_{ij}-K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j} \right](A')^i(B')^j=0
\]
Because $\T{A'}$ and $\T{B'}$ are arbitrary, the coefficient must vanish. (For example, to prove $(K')_{12}-K_{mn}\pdv{x^m}{(x')^1}\pdv{x^n}{(x')^2}$=0, set $(A')^1=(B')^2=1$, all the other components of $\T{A'}$ and $\T{B'}=0$.) Therefore,
\[
(K')_{ij}=K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}
\]
which means that $K_{ij}$ is a second-rank tensor.
\paragraph{4.1.11}
(Using the Einstein convention)
\[
(B')^k_i=\pdv{(x')^k}{x^p}\pdv{x^m}{(x')^i}B^p_m
=\pdv{(x')^k}{x^p}\pdv{x^m}{(x')^i}K_{mn}A^{np}
\]
\[
=\pdv{(x')^k}{x^p}\pdv{x^m}{(x')^i}K_{mn}\pdv{x^n}{(x')^j}\pdv{x^p}{(x')^l}(A')^{jl}
\]
\[
=K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}\left(\pdv{(x')^k}{x^p}\pdv{x^p}{(x')^l} \right)(A')^{jl}
\]
\[
=K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}\delta^k_l(A')^{jl}
\]
\[
=K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}(A')^{jk}
\]
\[
=(K')_{ij}(A')^{jk}
\]
so
\[
\left[(K')_{ij}-K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j} \right](A')^{jk}=0
\]
Because $\T{A'}$ is arbitrary, the coefficient must vanish. Therefore,
\[
(K')_{ij}=K_{mn}\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}
\]
which means that $\T{K}$ is a second-rank tensor.
\section*{4.2 Pseudotensors, Dual Tensors}
\renewcommand{\arraystretch}{1.5}
\paragraph{4.2.1}
Let the transformation matrix from $\V{x}$ to $\V{x'}$ be $\T{A}$, then
\[
\T{A}=
\begin{pmatrix}
\pdv{(x')^1}{x^1}&\pdv{(x')^1}{x^2}&\pdv{(x')^1}{x^3}\\
\pdv{(x')^2}{x^1}&\pdv{(x')^2}{x^2}&\pdv{(x')^2}{x^3}\\
\pdv{(x')^3}{x^1}&\pdv{(x')^3}{x^2}&\pdv{(x')^3}{x^3}\\
\end{pmatrix}
\]
(taking $n=3$ for example). Then $\det{(\T{A})}=\varepsilon_{ljk}\pdv{(x')^l}{x^1}\pdv{(x')^j}{x^2}\pdv{(x')^k}{x^3}$ (using the Einstein convention; we use $l$ instead of $i$ for a reason that will soon become clear). If we permute $(1,2,3)$, then we must permute $(l,j,k)$ in the same way to retain the value of $\pdv{(x')^l}{x^1}\pdv{(x')^j}{x^2}\pdv{(x')^k}{x^3}$. Therefore,
\begin{equation}
\varepsilon_{ljk}\pdv{(x')^l}{x^m}\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}=\varepsilon_{mnp}\varepsilon_{ljk}\pdv{(x')^l}{x^1}\pdv{(x')^j}{x^2}\pdv{(x')^k}{x^3}=\varepsilon_{mnp}\det{(\T{A})}
\end{equation}
$C_i$ is a pseudovector, so the transformation law gives
\[
(C')_i=\det{(\T{A})}\pdv{x^m}{(x')^i}C_m=\det{(\T{A})}\pdv{x^m}{(x')^i}\frac{1}{2}\varepsilon_{mnp}C^{np}\quad \textit{(combine $\det{(\T{A})}$ and $\varepsilon_{mnp}$)}
\]
\[
=\frac{1}{2}\pdv{x^m}{(x')^i}(\varepsilon_{mnp}\det{(\T{A})})C^{np}\quad\textit{(use equation 1)}
\]
\[
=\frac{1}{2}\pdv{x^m}{(x')^i}\varepsilon_{ljk}\pdv{(x')^l}{x^m}\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}\quad\textit{(combine $\pdv{x^m}{(x')^i}$ and $\pdv{(x')^l}{x^m}$)}
\]
\[
=\frac{1}{2}\delta^l_i\varepsilon_{ljk}\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}
\]
\[
=\frac{1}{2}\varepsilon_{ijk}\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}
=\frac{1}{2}\varepsilon_{ijk}(C')^{jk}
\]
the last equality holds because $C_i=\frac{1}{2}\varepsilon_{ijk}C^{jk}$ holds in all coordinate systems, so $(C')_i=\frac{1}{2}\varepsilon_{ijk}(C')^{jk}$.
If $j\neq k$, let $i$ be the remaining value other than $j,k$, then $\varepsilon_{ijk}\neq0$, so we have
\[
(C')^{jk}=\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}
\]
If $j=k$, then $\pdv{(x')^j}{x^p}\pdv{(x')^k}{x^n}=\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}$, and because $\T{C}$ is antisymmetric, $(C')^{jk}=C^{jk}=0$ for $j=k$. Therefore,
\[
\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}\]
\[
=\pdv{(x')^j}{x^1}\pdv{(x')^k}{x^2}(C^{12}+C^{21})+
\pdv{(x')^j}{x^1}\pdv{(x')^k}{x^3}(C^{13}+C^{31})+
\pdv{(x')^j}{x^2}\pdv{(x')^k}{x^3}(C^{23}+C^{32})
\]
\[
=0=(C')^{jk}
\]
so $(C')^{jk}=\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}$ still holds. Therefore, in all cases,
\[
(C')^{jk}=\pdv{(x')^j}{x^n}\pdv{(x')^k}{x^p}C^{np}
\]
holds, which implies that $C^{jk}$ is a tensor.
\paragraph{4.2.2}
If there is a one-to-one correspondence between two sets, then the numbers of elements of the two sets need to be the same. If there is a one-to-one correspondence between the components of a vector $C_i$ and the components of a tensor $(AB)^{jk}$, because the numbers of components of $C_i$ and $(AB)^{jk}$ are different ($n$ and $n^2$), it should mean that the one-to-one correspondence exists between \textit{independent} components of $C_i$ and $(AB)^{jk}$, so the number of \textit{independent} components of $C_i$ and $(AB)^{jk}$ should be the same.
By the antisymmetry property of $(AB)^{jk}$, $(AB)^{jj}=-(AB)^{jj}$, so $(AB)^{jj}=0$, and $(AB)^{jk}=-(AB)^{kj}$. So the number of \textit{independent} components of $(AB)^{jk}$, which is $\frac{n\times n-n}{2}$, should be equal to $n$, the number of \textit{independent} components of $C_i$. So
\[
\frac{n\times n-n}{2}=n
\]
so $n=3$.
\paragraph{4.2.3}
\[
\del\cdot\del\times\V{A}=\pdv{}{x^i}(\del\times\V{A})_i=\pdv{}{x^i}\varepsilon_{ijk}\pdv{}{x^j}A^k
=(\varepsilon_{ijk}\pdv{}{x^i}\pdv{}{x^j})A^k=0
\]
because $\pdv{}{x^i}\pdv{}{x^j}=\pdv{}{x^j}\pdv{}{x^i}$ and $\varepsilon_{ijk}+\varepsilon_{jik}=0$.
\[
(\del\times\del\varphi)_i=\varepsilon_{ijk}\pdv{}{x^j}(\del\varphi)_k=\varepsilon_{ijk}\pdv{}{x^j}\pdv{}{x^k}\varphi=0
\]
because $\pdv{}{x^j}\pdv{}{x^k}=\pdv{}{x^k}\pdv{}{x^j}$ and $\varepsilon_{ijk}+\varepsilon_{ikj}=0$.
\paragraph{4.2.4}
(a)
\[
(A')^{ik}_{jl}=\pdv{(x')^i}{x^m}\pdv{x^n}{(x')^j}\pdv{(x')^k}{x^p}\pdv{x^q}{(x')^l}\delta^m_n\delta^p_q
\]
\[
=\pdv{(x')^i}{x^m}\pdv{x^m}{(x')^j}\pdv{(x')^k}{x^p}\pdv{x^p}{(x')^l}
\]
\[
=\pdv{(x')^i}{(x')^j}\pdv{(x')^k}{(x')^l}=\delta^i_j\delta^k_l
\]
(b)
\[
(B')^{ij}_{kl}=\pdv{(x')^i}{x^m}\pdv{x^p}{(x')^k}\pdv{(x')^j}{x^n}\pdv{x^q}{(x')^l}(\delta^m_p\delta^n_q+\delta^m_q\delta^n_p)
\]
\[
=\pdv{(x')^i}{(x')^k}\pdv{(x')^j}{(x')^l}+\pdv{(x')^i}{(x')^l}\pdv{(x')^j}{(x')^k}=
\delta^i_k\delta^j_l+\delta^i_l\delta^j_k
\]
(c)
\[
(C')^{ij}_{kl}=\pdv{(x')^i}{x^m}\pdv{x^p}{(x')^k}\pdv{(x')^j}{x^n}\pdv{x^q}{(x')^l}(\delta^m_p\delta^n_q-\delta^m_q\delta^n_p)
\]
\[
=\pdv{(x')^i}{(x')^k}\pdv{(x')^j}{(x')^l}-\pdv{(x')^i}{(x')^l}\pdv{(x')^j}{(x')^k}=
\delta^i_k\delta^j_l- \delta^i_l\delta^j_k
\]
\paragraph{4.2.5}
\setcounter{equation}{0}
As in Exercise 4.2.1, let $\T{A}$ be the transformation matrix from $\V{x}$ to $\V{x'}$; then $\det(\T{A})=\varepsilon_{kl}\pdv{(x')^k}{x^1}\pdv{(x')^l}{x^2}$, and
\begin{equation}
\varepsilon_{kl}\pdv{(x')^k}{x^m}\pdv{(x')^l}{x^n}=
\varepsilon_{mn}\varepsilon_{kl}\pdv{(x')^k}{x^1}\pdv{(x')^l}{x^2}=\varepsilon_{mn}\det(\T{A})
\end{equation}
$\varepsilon_{ij}$ is defined as $\varepsilon_{11}=\varepsilon_{22}=0$, $\varepsilon_{12}=1$, $\varepsilon_{21}=-1$, regardless of which coordinate system it is in. So it takes the same values in all coordinate systems, and $(\varepsilon')_{kl}=\varepsilon_{kl}$. (We will use this later.)
\[
(\varepsilon')_{ij}=(\varepsilon')_{kl}(\delta')^k_i(\delta')^l_j=\varepsilon_{kl}\pdv{(x')^k}{(x')^i}\pdv{(x')^l}{(x')^j}\]
\[
=\varepsilon_{kl}\pdv{(x')^k}{x^m}\pdv{x^m}{(x')^i}\pdv{(x')^l}{x^n}\pdv{x^n}{(x')^j}\quad\textit{(combining $\varepsilon_{kl}\pdv{(x')^k}{x^m}\pdv{(x')^l}{x^n}$)}
\]
\[
=\left(\varepsilon_{kl}\pdv{(x')^k}{x^m}\pdv{(x')^l}{x^n}\right)\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}\quad\textit{(use equation 1)}
\]
\[
=\varepsilon_{mn}\det(\T{A})\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}=\det(\T{A})\pdv{x^m}{(x')^i}\pdv{x^n}{(x')^j}\varepsilon_{mn}
\]
which is the transformation law for a second-rank pseudotensor. Therefore, $\varepsilon_{ij}$ is a second-rank pseudotensor. This does not contradict the uniqueness of $\delta^i_j$, because we proved that $\delta^i_j$ is the only isotropic second-rank \textit{tensor} (up to a coefficient), while $\varepsilon_{ij}$ is an isotropic second-rank \textit{pseudotensor} (it fails to follow the tensor transformation law under improper rotations).
\paragraph{4.2.6}
$\varepsilon=
\begin{pmatrix}
0&1\\-1&0
\end{pmatrix}$ in matrix form, and let the orthogonal transformation be
$\T{S}=
\begin{pmatrix}
\cos\varphi&\sin\varphi\\-\sin\varphi&\cos\varphi
\end{pmatrix}$. Then the similarity transformation is
\[
\varepsilon'=\T{S}\varepsilon{\T{S}}^T=
\begin{pmatrix}
\cos\varphi&\sin\varphi\\-\sin\varphi&\cos\varphi
\end{pmatrix}
\begin{pmatrix}
0&1\\-1&0
\end{pmatrix}
\begin{pmatrix}
\cos\varphi&-\sin\varphi\\\sin\varphi&\cos\varphi
\end{pmatrix}=
\begin{pmatrix}
0&1\\-1&0
\end{pmatrix}
\]
so $\varepsilon_{ij}$ is invariant under this orthogonal similarity transformation. This corresponds to the isotropy of $\varepsilon_{ij}$ under proper rotations.
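\noindent
(For contrast, a check that is not part of the exercise: take the improper transformation $\T{S}=
\begin{pmatrix}
1&0\\0&-1
\end{pmatrix}$ with $\det(\T{S})=-1$. Then $\T{S}\varepsilon\T{S}^T=-\varepsilon$, so the naive tensor law would flip the sign of $\varepsilon_{ij}$; only the extra factor $\det(\T{S})$ in the pseudotensor law of Exercise 4.2.5 restores $\varepsilon'=\varepsilon$.)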
\paragraph{4.2.7}
Using the result from Exercise 2.1.9, we have $\varepsilon^{mnk}\varepsilon_{ijk}=\delta^m_i\delta^n_j-\delta^m_j\delta^n_i$, so
\[
\varepsilon^{mnk}A_k
=\varepsilon^{mnk}\frac{1}{2}\varepsilon_{ijk}B^{ij}
=\frac{1}{2}(\delta^m_i\delta^n_j-\delta^m_j\delta^n_i)B^{ij}=\frac{1}{2}(B^{mn}-B^{nm})=B^{mn}
\]
\section*{4.3 Tensors in General Coordinates}
\paragraph{4.3.1}
$q^i,q^j,q^k$ are independent, so $\EE^i,\EE^j,\EE^k$ are linearly independent. (If $\EE^i,\EE^j,\EE^k$ were linearly dependent, that is, $a\EE^i+b\EE^j+c\EE^k=0$ with not all coefficients zero, then $\pdv{(aq^i+bq^j+cq^k)}{x}\VE_x+\pdv{(aq^i+bq^j+cq^k)}{y}\VE_y+\pdv{(aq^i+bq^j+cq^k)}{z}\VE_z=0$, so $aq^i+bq^j+cq^k=d$, a constant, which would mean $q^i,q^j,q^k$ are dependent.)
Express $\frac{\EE_j\times\EE_k}{\EE_j\times\EE_k\cdot\EE_i}$ in the bases of $\EE^i,\EE^j,\EE^k$, and note that $\EE^p\cdot\EE_q=\delta^p_q$, no matter whether $\EE^i,\EE^j,\EE^k$ and $\EE_i,\EE_j,\EE_k$ are orthogonal. Then
\[
\frac{\EE_j\times\EE_k}{\EE_j\times\EE_k\cdot\EE_i}=A_i\EE^i+A_j\EE^j+A_k\EE^k
\]
\[
\frac{\EE_j\times\EE_k\cdot\EE_i}{\EE_j\times\EE_k\cdot\EE_i}=1=A_i
\]
\[
\frac{\EE_j\times\EE_k\cdot\EE_j}{\EE_j\times\EE_k\cdot\EE_i}=0=A_j
\]
\[
\frac{\EE_j\times\EE_k\cdot\EE_k}{\EE_j\times\EE_k\cdot\EE_i}=0=A_k
\]
so
\[
\frac{\EE_j\times\EE_k}{\EE_j\times\EE_k\cdot\EE_i}=\EE^i
\]
\paragraph{4.3.2}
(a) If $i\neq j$, then $g_{ij}=\EE_i\cdot\EE_j=0$, so $g_{ij}$ is diagonal.
(b)
$g_{ji}=0$ when $i\neq j$, so
\[
g^{ii}g_{ii}\quad\textit{(no summation on i)}
\]
\[
=g^{ij}g_{ji}\quad\textit{(summation on $j$)}
\]
\[
=\delta^i_i\quad\textit{(by definition of $g^{ij}$)}
\]
\[
=1
\]
so
\[
g^{ii}=\frac{1}{g_{ii}}
\]
(c) $\EE_j\cdot\EE_i=0$, so
\[
(\EE^i\cdot\EE^i)(\EE_i\cdot\EE_i)\quad\textit{(no summation on i)}
\]
\[
=(\EE^i\cdot\EE^j)(\EE_j\cdot\EE_i)\quad\textit{(summation on $j$)}
\]
\[
=\delta^i_i=1\quad\textit{(by Eq. $4.46$)}
\]
so $|\EE^i|^2|\EE_i|^2=1$, which means
\[
|\EE^i|=\frac{1}{|\EE_i|}
\]
\paragraph{4.3.3}
\[
(\EE^i\cdot\EE^j)(\EE_j\cdot\EE_k)=(\pdv{q^i}{x}\pdv{q^j}{x}+\pdv{q^i}{y}\pdv{q^j}{y}+\pdv{q^i}{z}\pdv{q^j}{z})(\pdv{x}{q^j}\pdv{x}{q^k}+\pdv{y}{q^j}\pdv{y}{q^k}+\pdv{z}{q^j}\pdv{z}{q^k})
\]
Note that $j$ is summed, so $\pdv{q^j}{x}\pdv{x}{q^j}=\pdv{x}{x}=1$, and $\pdv{q^j}{x}\pdv{y}{q^j}=\pdv{y}{x}=0$. Similarly, $\pdv{q^j}{y}\pdv{y}{q^j}=\pdv{q^j}{z}\pdv{z}{q^j}=1$, and other cross terms are zero. So the equation becomes
\[
\pdv{q^i}{x}\pdv{x}{q^k}+\pdv{q^i}{y}\pdv{y}{q^k}+\pdv{q^i}{z}\pdv{z}{q^k}=\pdv{q^i}{q^k}=\delta^i_k
\]
\paragraph{4.3.4}
\[
\Gamma^m_{jk}\,\EE_m=\pdv{\EE_k}{q^j}=\pdv{^2x}{q^j\partial q^k}\VE_x+\pdv{^2y}{q^j\partial q^k}\VE_y+\pdv{^2z}{q^j\partial q^k}\VE_z
\]
\[
=\pdv{^2x}{q^k\partial q^j}\VE_x+\pdv{^2y}{q^k\partial q^j}\VE_y+\pdv{^2z}{q^k\partial q^j}\VE_z=
\pdv{\EE_j}{q^k}=\Gamma^m_{kj}\,\EE_m
\]
so $(\Gamma^m_{jk}-\Gamma^m_{kj})\EE_m=0$. Because the $\EE_m$ are linearly independent, $\Gamma^m_{jk}-\Gamma^m_{kj}$ must be zero for every $m$, so
\[
\Gamma^m_{jk}=\Gamma^m_{kj}
\]
\paragraph{4.3.5}
$(q^1,q^2,q^3)=(\rho,\varphi,z)$, and $x=\rho\cos\varphi$, $y=\rho\sin\varphi$, $z=z$. So
\[
\EE_1=\pdv{x}{\rho}\VE_x+\pdv{y}{\rho}\VE_y+\pdv{z}{\rho}\VE_z=\cos\varphi\,\VE_x+\sin\varphi\,\VE_y
\]
\[
\EE_2=\pdv{x}{\varphi}\VE_x+\pdv{y}{\varphi}\VE_y+\pdv{z}{\varphi}\VE_z=-\rho\sin\varphi\,\VE_x+\rho\cos\varphi\,\VE_y
\]
\[
\EE_3=\pdv{x}{z}\VE_x+\pdv{y}{z}\VE_y+\pdv{z}{z}\VE_z=\VE_z
\]
\[
(g_{ij})=
\begin{pmatrix}
\EE_1\cdot\EE_1&\EE_1\cdot\EE_2&\EE_1\cdot\EE_3\\
\EE_2\cdot\EE_1&\EE_2\cdot\EE_2&\EE_2\cdot\EE_3\\
\EE_3\cdot\EE_1&\EE_3\cdot\EE_2&\EE_3\cdot\EE_3\\
\end{pmatrix}=
\begin{pmatrix}
1&0&0\\
0&{\rho}^2&0\\
0&0&1
\end{pmatrix}
\]
Because $g^{ij}g_{jk}=\delta^i_k$ (the unit matrix), $(g^{ij})=(g_{ij})^{-1}$, the matrix inverse. Therefore,
\[
(g^{ij})=
\begin{pmatrix}
1&0&0\\
0&\frac{1}{{\rho}^2}&0\\
0&0&1
\end{pmatrix}
\]
\paragraph{4.3.6}
Differentiating $\EE^i\cdot\EE_k=\delta^i_k$ with respect to $q^j$, we have
\[
\pdv{\EE^i}{q^j}\cdot\EE_k+\EE^i\cdot\pdv{\EE_k}{q^j}=0
\]
\[
\pdv{\EE^i}{q^j}\cdot\EE_k=-\EE^i\cdot\pdv{\EE_k}{q^j}=-\EE^i\cdot(\Gamma^\mu_{jk}\EE_\mu)=-\Gamma^i_{jk}
\]
so
\[
\pdv{\EE^i}{q^j}=-\Gamma^i_{jk}\EE^k
\]
when expanded in the contravariant basis.
$V_{i;j}$ is defined by $\pdv{\V{V'}}{q^j}=V_{i;j}\EE^i$. Expanding the vector in the contravariant basis, $\V{V'}=V_i\EE^i$, and differentiating, we have
\[
\pdv{\V{V'}}{q^j}=\pdv{V_i}{q^j}\EE^i+V_i\pdv{\EE^i}{q^j}
\]
\[
=\pdv{V_i}{q^j}\EE^i-V_i\Gamma^i_{jk}\EE^k\quad\textit{(interchange $i$ and $k$ in the second term)}
\]
\[
=(\pdv{V_i}{q^j}-V_k\Gamma^k_{ji})\EE^i\quad\textit{( $i$ and $j$ in $\Gamma^k_{ji}$ can be interchanged)}
\]
\[
=(\pdv{V_i}{q^j}-V_k\Gamma^k_{ij})\EE^i=V_{i;j}\EE^i
\]
So
\[
V_{i;j}=\pdv{V_i}{q^j}-V_k\Gamma^k_{ij}
\]
because the $\EE^i$ are linearly independent.
\paragraph{4.3.7}
\[
\pdv{V_i}{q^j}-V_k\Gamma^k_{ij}=\pdv{(g_{ik}V^k)}{q^j}-V_k\Gamma^k_{ij}
\]
\[
=g_{ik}\pdv{V^k}{q^j}+\pdv{g_{ik}}{q^j}V^k-V_k\Gamma^k_{ij}
\]
\[
=g_{ik}\pdv{V^k}{q^j}+\pdv{(\EE_i\cdot\EE_k)}{q^j}V^k-V_k\Gamma^k_{ij}
\]
\[
=g_{ik}\pdv{V^k}{q^j}+V^k\EE_i\cdot\pdv{\EE_k}{q^j}+V^k\EE_k\cdot\pdv{\EE_i}{q^j}-V_k\Gamma^k_{ij}
\]
\[
=g_{ik}\pdv{V^k}{q^j}+V^m\EE_i\cdot\pdv{\EE_m}{q^j}+V_k\EE^k\cdot\pdv{\EE_i}{q^j}-V_k\Gamma^k_{ij}
\]
\[
=g_{ik}\pdv{V^k}{q^j}+V^m(g_{ik}\EE^k)\cdot\pdv{\EE_m}{q^j}+V_k\Gamma^k_{ij}-V_k\Gamma^k_{ij}
\]
\[
=g_{ik}\pdv{V^k}{q^j}+g_{ik}V^m\Gamma^k_{mj}
\]
\[
=g_{ik}\left[\pdv{V^k}{q^j}+V^m\Gamma^k_{mj} \right]
\]
\noindent
(Or note that $\V{V'}=V_i\EE^i=V^k\EE_k$, so
\[
\pdv{\V{V'}}{q^j}=\left[\pdv{V_i}{q^j}-V_k\Gamma^k_{ij} \right]\EE^i=\left[\pdv{V^k}{q^j}+V^m\Gamma^k_{mj} \right]\EE_k
\]
Taking the scalar product of both sides with $\EE_i$, and noting that $\EE^i\cdot\EE_i=1$ (the other $\EE^k\cdot\EE_i$ vanish) and $\EE_k\cdot\EE_i=g_{ik}$, we obtain
\[
\pdv{V_i}{q^j}-V_k\Gamma^k_{ij}=\left[\pdv{V^k}{q^j}+V^m\Gamma^k_{mj} \right]g_{ik}
\]
which is another verification.)
\paragraph{4.3.8}
From Eq. 4.63, $\Gamma^n_{ij}=\frac{1}{2}g^{nk}\left[\pdv{g_{ik}}{q^j}+\pdv{g_{jk}}{q^i}-\pdv{g_{ij}}{q^k} \right]$. Because $g^{ij}$ has only diagonal components, $g^{nk}\neq0$ only when $n=k$, so $\Gamma^n_{ij}=\frac{1}{2}g^{nn}\left[\pdv{g_{in}}{q^j}+\pdv{g_{jn}}{q^i}-\pdv{g_{ij}}{q^n} \right]$ ($n$ is not summed). The only non-constant component of $(g_{ij})$ is $g_{22}={\rho}^2=(q^1)^2$, so the only non-zero derivative of $g_{ij}$ is $\pdv{g_{22}}{q^1}=2\rho$. When $n=1$, $\pdv{g_{in}}{q^j}+\pdv{g_{jn}}{q^i}=0$, so both $i$ and $j$ must be $2$ to have non-zero $\Gamma^n_{ij}$. When $n=2$, $\pdv{g_{ij}}{q^n}=0$, so one of $i,j$ must be $1$ and the other must be $2$ to make $\pdv{g_{in}}{q^j}+\pdv{g_{jn}}{q^i}\neq0$. When $n=3$, none of the derivatives can be non-zero.
Therefore, there are only three nonzero $\Gamma^n_{ij}$: $\Gamma^1_{22}$, $\Gamma^2_{12}$, $\Gamma^2_{21}$.
\[
\Gamma^1_{22}=\frac{1}{2}(1)[-2\rho]=-\rho
\]
\[
\Gamma^2_{12}=\Gamma^2_{21}=\frac{1}{2}(\frac{1}{{\rho}^2})[2\rho]=\frac{1}{\rho}
\]
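\noindent
(As a quick consistency check, not required by the exercise: from the explicit bases found in Exercise 4.3.5, $\pdv{\EE_2}{q^2}=-\rho\cos\varphi\,\VE_x-\rho\sin\varphi\,\VE_y=-\rho\,\EE_1=\Gamma^1_{22}\,\EE_1$, and $\pdv{\EE_2}{q^1}=-\sin\varphi\,\VE_x+\cos\varphi\,\VE_y=\frac{1}{\rho}\EE_2=\Gamma^2_{12}\,\EE_2$, reproducing the values just computed.)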
\paragraph{4.3.9}
$V^i_{;j}=\pdv{V^i}{q^j}+V^k\Gamma^i_{kj}$, so
\[
V^1_{;2}=\pdv{V^1}{q^2}+V^2\Gamma^1_{22}=\pdv{V^\rho}{\varphi}-V^\varphi\rho=V^\rho_{;\varphi}
\]
\[
V^2_{;1}=\pdv{V^2}{q^1}+V^2\Gamma^2_{21}=\pdv{V^\varphi}{\rho}+V^\varphi\frac{1}{\rho}=V^\varphi_{;\rho}
\]
\[
V^2_{;2}=\pdv{V^2}{q^2}+V^1\Gamma^2_{12}=\pdv{V^\varphi}{\varphi}+V^\rho\frac{1}{\rho}=V^\varphi_{;\varphi}
\]
For all the other $i,j$,
\[
V^i_{;j}=\pdv{V^i}{q^j}
\]
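\medskip
\noindent
(Contracting these results gives a useful cross-check that is not asked for in the exercise:
\[
V^i_{;i}=\pdv{V^\rho}{\rho}+\frac{V^\rho}{\rho}+\pdv{V^\varphi}{\varphi}+\pdv{V^z}{z}=\frac{1}{\rho}\pdv{(\rho V^\rho)}{\rho}+\pdv{V^\varphi}{\varphi}+\pdv{V^z}{z}
\]
which is the divergence in circular cylindrical coordinates expressed in the contravariant components.)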
\paragraph{4.3.10}
$g_{ij;k}$ and $g^{ij}_{;k}$ are not defined in the text, but I think they are probably defined as \\$\pdv{(g_{ij}\EE^i\cdot\EE^j)}{q^k}=g_{ij;k}\,\EE^i\cdot\EE^j$ and $\pdv{(g^{ij}\EE_i\cdot\EE_j)}{q^k}=g^{ij}_{;k}\,\EE_i\cdot\EE_j$.
\[
\pdv{(g_{ij}\EE^i\cdot\EE^j)}{q^k}=\pdv{g_{ij}}{q^k}\EE^i\cdot\EE^j+g_{ij}\pdv{\EE^i}{q^k}\cdot\EE^j+g_{ij}\EE^i\cdot\pdv{\EE^j}{q^k}
\]
\[
=\pdv{g_{ij}}{q^k}\EE^i\cdot\EE^j+g_{ij}(-\Gamma^i_{k\alpha}\EE^\alpha)\cdot\EE^j+g_{ij}\EE^i\cdot(-\Gamma^j_{k\beta}\EE^\beta)
\]
\textit{(interchange $i$ and $\alpha$ in the second term, and interchange $j$ and $\beta$ in the last term)}
\[
=\pdv{g_{ij}}{q^k}\EE^i\cdot\EE^j-g_{\alpha j}\Gamma^\alpha_{ki}\EE^i\cdot\EE^j-g_{i\beta}\Gamma^\beta_{kj}\EE^i\cdot\EE^j
\]
\[
=\left[\pdv{g_{ij}}{q^k}-g_{j\alpha }\Gamma^\alpha_{ik}-g_{i\beta}\Gamma^\beta_{jk}\right]\EE^i\cdot\EE^j
\]
so
\[
g_{ij;k}=\pdv{g_{ij}}{q^k}-g_{j\alpha }\Gamma^\alpha_{ik}-g_{i\beta}\Gamma^\beta_{jk}\quad\textit{(using Eq. $4.63$)}
\]
\[
=\pdv{g_{ij}}{q^k}-g_{j\alpha}\frac{1}{2}g^{\alpha m}\left[\pdv{g_{im}}{q^k}+\pdv{g_{km}}{q^i}-\pdv{g_{ik}}{q^m} \right]-g_{i\beta}\frac{1}{2}g^{\beta n}\left[\pdv{g_{jn}}{q^k}+\pdv{g_{kn}}{q^j}-\pdv{g_{jk}}{q^n} \right]
\]
\[
=\pdv{g_{ij}}{q^k}-\frac{1}{2}\delta^m_j\left[\pdv{g_{im}}{q^k}+\pdv{g_{km}}{q^i}-\pdv{g_{ik}}{q^m} \right]-\frac{1}{2}\delta^n_i\left[\pdv{g_{jn}}{q^k}+\pdv{g_{kn}}{q^j}-\pdv{g_{jk}}{q^n} \right]
\]
\[
=\pdv{g_{ij}}{q^k}-\frac{1}{2}\left[\pdv{g_{ij}}{q^k}+\pdv{g_{jk}}{q^i}-\pdv{g_{ik}}{q^j} \right]-\frac{1}{2}\left[\pdv{g_{ij}}{q^k}+\pdv{g_{ik}}{q^j}-\pdv{g_{jk}}{q^i} \right]
\]
\[
=\pdv{g_{ij}}{q^k}-\pdv{g_{ij}}{q^k}=0
\]
We can prove $g^{ij}_{;k}=0$ in a similar way. Or we can note that \[g^{lj}_{;k}\,\EE_l\cdot\EE_j=\pdv{(g^{lj}\EE_l\cdot\EE_j)}{q^k}=\pdv{(g_{lj}\EE^l\cdot\EE^j)}{q^k}=g_{lj;k}\,\EE^l\cdot\EE^j=0\]
Multiplying both sides by $\EE^j\cdot\EE^i$, and noting that $(\EE_l\cdot\EE_j)(\EE^j\cdot\EE^i)=\delta^i_l$, we have
\[
g^{lj}_{;k}(\EE_l\cdot\EE_j)(\EE^j\cdot\EE^i)=0\]
\[g^{lj}_{;k}\delta^i_l=0\]
\[g^{ij}_{;k}=0
\]
\paragraph{4.3.11}
From Example 4.3.1, the metric tensor of spherical polar coordinates is
\[
(g_{ij})=
\begin{pmatrix}
1&0&0\\
0&r^2&0\\
0&0&r^2\sin^2\theta
\end{pmatrix}
\]
so $[\det(g)]^{1/2}=r^2\sin\theta$. Using Eq. 4.69, we have
\[
\del\cdot\V{V}=\frac{1}{[\det(g)]^{1/2}}\pdv{}{q^k}\left([\det(g)]^{1/2}V^k \right)
\]
\[
=\frac{1}{r^2\sin\theta}\left[\pdv{}{r}(r^2\sin\theta V^r)+\pdv{}{\theta}(r^2\sin\theta V^\theta)+\pdv{}{\varphi}(r^2\sin\theta V^\varphi) \right]
\]
To compare with the results in Chapter 3, note that
\begin{align*}
\EE_r&=\pdv{\V{r}}{r}=\VE_r & \EE_r V^r&=\VE_r(V^r)=\VE_r V_r & V^r&=V_r\\
\EE_\theta&=\pdv{\V{r}}{\theta}=r\VE_\theta & \EE_\theta V^\theta&=\VE_\theta(rV^\theta)=\VE_\theta V_\theta & V^\theta&=\frac{1}{r}V_\theta\\
\EE_\varphi&=\pdv{\V{r}}{\varphi}=r\sin\theta\VE_\varphi & \EE_\varphi V^\varphi&=\VE_\varphi(r\sin\theta V^\varphi)=\VE_\varphi V_\varphi & V^\varphi&=\frac{1}{r\sin\theta}V_\varphi\\
\end{align*}
Substituting $V^r,V^\theta,V^\varphi$ into the above equation, we recover Eq. 3.157.
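\noindent
(For reference, carrying out the substitution explicitly gives
\[
\del\cdot\V{V}=\frac{1}{r^2}\pdv{}{r}(r^2V_r)+\frac{1}{r\sin\theta}\pdv{}{\theta}(\sin\theta\,V_\theta)+\frac{1}{r\sin\theta}\pdv{V_\varphi}{\varphi}
\]
the standard expression for the divergence in spherical polar coordinates, which is presumably what Eq. 3.157 states.)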
\paragraph{4.3.12}
$A_i=\pdv{\varphi}{q^i}$ because it is the gradient of a scalar. From Exercise 4.3.6 we have
\[
A_{i;j}=\pdv{A_i}{q^j}-A_k\Gamma^k_{ij}=\pdv{^2\varphi}{q^j\partial q^i}-A_k\Gamma^k_{ij}
\]
\[
=\pdv{^2\varphi}{q^i\partial q^j}-A_k\Gamma^k_{ji}=\pdv{A_j}{q^i}-A_k\Gamma^k_{ji}=A_{j;i}
\]
so
\[
A_{i;j}-A_{j;i}=0
\]
\section*{4.4 Jacobians}
\paragraph{4.4.1}
(a) If $f(u,v)=0$, then by differentiating we have
\[
\pdv{f}{x}=\pdv{f}{u}\pdv{u}{x}+\pdv{f}{v}\pdv{v}{x}=0
\]
\[
\pdv{f}{y}=\pdv{f}{u}\pdv{u}{y}+\pdv{f}{v}\pdv{v}{y}=0
\]
\[
\pdv{f}{z}=\pdv{f}{u}\pdv{u}{z}+\pdv{f}{v}\pdv{v}{z}=0
\]
which means $\pdv{u}{x}:\pdv{v}{x}=\pdv{u}{y}:\pdv{v}{y}=\pdv{u}{z}:\pdv{v}{z}=(-\pdv{f}{v}):\pdv{f}{u}$, so $\del u=(\pdv{u}{x},\pdv{u}{y},\pdv{u}{z})$ is parallel to $\del v=(\pdv{v}{x},\pdv{v}{y},\pdv{v}{z})$, and therefore $(\del u)\times(\del v)=0$.
\medskip
If $(\del u)\times(\del v)=0$, then $(\pdv{u}{x},\pdv{u}{y},\pdv{u}{z})=a(\pdv{v}{x},\pdv{v}{y},\pdv{v}{z})$ (or with $u$ and $v$ interchanged; $a$ can be zero if one or both of $\del u$, $\del v$ vanish). So $\pdv{(u-av)}{x}=\pdv{(u-av)}{y}=\pdv{(u-av)}{z}=0$, which means that $u-av=b$, a constant, so $u-av-b=0$ is the required relation between $u$ and $v$.
(b)
\[
J=
\begin{vmatrix}
\pdv{u}{x}&\pdv{u}{y}\\
\pdv{v}{x}&\pdv{v}{y}
\end{vmatrix}
=\pdv{u}{x}\pdv{v}{y}-\pdv{u}{y}\pdv{v}{x}
=\left[(\pdv{u}{x}\VE_x+ \pdv{u}{y}\VE_y)\times(\pdv{v}{x}\VE_x+\pdv{v}{y}\VE_y)\right]_z=\left[(\del u)\times(\del v)\right]_z=0
\]
\paragraph{4.4.2}
$h_1$,$h_2$ are defined as
\[
\pdv{\V{r}}{q_1}=h_1\VE_1=\pdv{x}{q_1}\VE_x+\pdv{y}{q_1}\VE_y
\]
\[
\pdv{\V{r}}{q_2}=h_2\VE_2=\pdv{x}{q_2}\VE_x+\pdv{y}{q_2}\VE_y
\]
Because $\VE_1$ and $\VE_2$ are orthogonal, the area of the parallelogram formed by $h_1\VE_1$ and $h_2\VE_2$ is $h_1h_2$; the area of the parallelogram also equals the determinant of the components in Cartesian coordinates, so
\[
Area=h_1h_2=
\begin{vmatrix}
\pdv{x}{q_1}&\pdv{y}{q_1}\\
\pdv{x}{q_2}&\pdv{y}{q_2}\\
\end{vmatrix}=
\pdv{x}{q_1}\pdv{y}{q_2}-\pdv{x}{q_2}\pdv{y}{q_1}
\]
\paragraph{4.4.3}
(a) Solving for $x$ and $y$, we have $x=\frac{vu}{v+1}$, $y=\frac{u}{v+1}$, so
\[
J=
\begin{vmatrix}
\pdv{x}{u}&\pdv{y}{u}\\
\pdv{x}{v}&\pdv{y}{v}\\
\end{vmatrix}=\left(\frac{v}{v+1}\right)\left(\frac{-u}{(v+1)^2}\right)-\left(\frac{1}{v+1}\right)\left(\frac{u}{v+1}-\frac{vu}{(v+1)^2}\right)=\frac{-u}{(v+1)^2}
\]
(b)
\[
J^{-1}=
\begin{vmatrix}
\pdv{u}{x}&\pdv{v}{x}\\
\pdv{u}{y}&\pdv{v}{y}\\
\end{vmatrix}=(1)(\frac{-x}{y^2})-(\frac{1}{y})(1)=\frac{-(x+y)}{y^2}=\frac{(v+1)^2}{-u}
\]
so
\[
J=\frac{1}{J^{-1}}=\frac{-u}{(v+1)^2}
\]
\section*{4.5 Differential Forms}
\paragraph{4.5.1}
($i,j,k$ denote any cyclic permutation of $1,2,3$)
\[
*1=(1)(-1)^0dt\wedge dx_1\wedge dx_2\wedge dx_3=dt\wedge dx_1\wedge dx_2\wedge dx_3\]
\[
*dx_i=(-1)(-1)^1dt\wedge dx_j\wedge dx_k=dt\wedge dx_j\wedge dx_k
\]
\[
*dt=(1)(-1)^0dx_1\wedge dx_2\wedge dx_3=dx_1\wedge dx_2\wedge dx_3
\]
\[
*(dx_j\wedge dx_k)=(1)(-1)^2dt\wedge dx_i=dt\wedge dx_i
\]
\[
*(dt\wedge dx_i)=(1)(-1)^1dx_j\wedge dx_k=-dx_j\wedge dx_k
\]
\[
*(dx_1\wedge dx_2\wedge dx_3)=(-1)(-1)^3dt=dt
\]
\[
*(dt\wedge dx_i\wedge dx_j)=(1)(-1)^2dx_k=dx_k
\]
\[
*(dt\wedge dx_1\wedge dx_2\wedge dx_3)=(1)(-1)^3=-1
\]
\paragraph{4.5.2}
Let the force field be $\V{F}=F_x\VE_x+F_y\VE_y+F_z\VE_z$, so the infinitesimal work done is $dw=\V{F}\cdot d\V{r}=F_xdx+F_ydy+F_zdz$, and $w=F_x(x_2-x_1)+F_y(y_2-y_1)+F_z(z_2-z_1)$. Substituting, we get $F_x=\frac{a}{3}$, $F_y=\frac{b}{2}$, $F_z=c$. So
\[
dw=\frac{a}{3}dx+\frac{b}{2}dy+cdz
\]
\section*{4.6 Differentiating Forms}
\paragraph{4.6.1}
(a) $d\omega_1=dx\wedge dy+dy\wedge dx=0$
(b) $d\omega_2=dx\wedge dy-dy\wedge dx=2dx\wedge dy$
(c) $d(d\omega_2)=2d(dx)\wedge dy-2dx\wedge d(dy)=0$
\paragraph{4.6.2}
\[
d\omega_3=(ydx+xdy)\wedge dz+(zdx+xdz)\wedge dy-(zdy+ydz)\wedge dx=2zdx\wedge dy-2ydz\wedge dx
\]
\[
d(d\omega_3)=2dz\wedge dx\wedge dy-2dy\wedge dz\wedge dx=0
\]
\paragraph{4.6.3}
(a)
\[
\omega_2\wedge\omega_3=(x\,dy-y\,dx)\wedge(xy\,dz+xz\,dy-yz\,dx)=x^2y\,dy\wedge dz+xy^2\,dz\wedge dx
\]
\[
d(\omega_2\wedge\omega_3)=2xy\,dx\wedge dy\wedge dz+2xy\,dy\wedge dz\wedge dx=4xy\,dx\wedge dy\wedge dz
\]
(b)
\[
d(\omega_2\wedge\omega_3)=(d\omega_2)\wedge\omega_3-\omega_2\wedge(d\omega_3)=2xy\,dx\wedge dy\wedge dz-(-2xy)\,dy\wedge dz\wedge dx=4xy\,dx\wedge dy\wedge dz
\]
\section*{4.7 Integrating Forms}
\paragraph{4.7.1}
\[
A(x,y,z)dx\wedge dy\wedge dz
\]
\[
=A(u,v,w)(\pdv{x}{u}du+\pdv{x}{v}dv+\pdv{x}{w}dw)\wedge(\pdv{y}{u}du+\pdv{y}{v}dv+\pdv{y}{w}dw)\wedge(\pdv{z}{u}du+\pdv{z}{v}dv+\pdv{z}{w}dw)
\]
\[
=A(u,v,w)\left[\pdv{x}{u}(\pdv{y}{v}\pdv{z}{w}-\pdv{y}{w}\pdv{z}{v})-\pdv{x}{v}(\pdv{y}{u}\pdv{z}{w}-\pdv{y}{w}\pdv{z}{u})+\pdv{x}{w}(\pdv{y}{u}\pdv{z}{v}-\pdv{y}{v}\pdv{z}{u}) \right]du\wedge dv\wedge dw
\]
\[
=A(u,v,w)
\begin{vmatrix}
\pdv{x}{u}&\pdv{y}{u}&\pdv{z}{u}\\
\pdv{x}{v}&\pdv{y}{v}&\pdv{z}{v}\\
\pdv{x}{w}&\pdv{y}{w}&\pdv{z}{w}\\
\end{vmatrix}
du\wedge dv\wedge dw\]
\[=A(u,v,w)\pdv{(x,y,z)}{(u,v,w)}du\wedge dv\wedge dw
\]
\paragraph{4.7.2}
$\int\displaylimits_S\del\times\V{H}\cdot d\V{a}=kI=k\int\displaylimits_S\V{J}\cdot d\V{a}$, where $\V{J}$ is the current density. In terms of differential forms, the equation becomes
\[
\left[\pdv{H_z}{y}-\pdv{H_y}{z}\right]dy\wedge dz+\left[\pdv{H_x}{z}-\pdv{H_z}{x}\right]dz\wedge dx+\left[\pdv{H_y}{x}-\pdv{H_x}{y}\right]dx\wedge dy=kJ_x\,dy\wedge dz+kJ_y\,dz\wedge dx+kJ_z\,dx\wedge dy
\]
and the corresponding components of the two sides are respectively equal.
\paragraph{4.7.3}
If $\pdv{f}{x}=A$ and $\pdv{f}{y}=B$, then $\pdv{A}{y}=\pdv{^2f}{y\partial x}=\pdv{^2f}{x\partial y}=\pdv{B}{x}$, so being exact implies being closed.
If $\pdv{A}{y}=\pdv{B}{x}=\varphi(x,y)$, then $A=\int_{y_0}^y\varphi(x,y)dy+h(x)$ and $B=\int_{x_0}^x\varphi(x,y)dx+g(y)$. Let $f=\int_{x_0}^x\int_{y_0}^y\varphi(x,y)dxdy+\int_{x_0}^xh(x)dx+\int_{y_0}^yg(y)dy$, then $\pdv{f}{x}=\int_{y_0}^y\varphi(x,y)dy+h(x)=A$ and $\pdv{f}{y}=\int_{x_0}^x\varphi(x,y)dx+g(y)=B$, so being closed implies being exact. Therefore, being closed and being exact are sufficient and necessary conditions for each other.
\medskip
To find the function $f$ for an exact $Adx+Bdy$, let $f=\int_{x_0}^xA(x,y)dx+\int_{y_0}^yB(x_0,y)dy$; then $\pdv{f}{x}=A(x,y)$, and $\pdv{f}{y}=\int_{x_0}^x\pdv{A(x,y)}{y}dx+B(x_0,y)=\int_{x_0}^x\pdv{B(x,y)}{x}dx+B(x_0,y)=B(x,y)-B(x_0,y)+B(x_0,y)=B(x,y)$. So $f$ satisfies the condition.
\medskip
(1) $\pdv{y}{y}=\pdv{x}{x}=1$, so $ydx+xdy$ is closed and exact. Let $x_0=y_0=0$, then \[f=\int_{x_0}^xydx+\int_{y_0}^yx_0dy=xy\]
(2) $\pdv{}{y}(\frac{y}{x^2+y^2})\neq\pdv{}{x}(\frac{x}{x^2+y^2})$, so $\frac{ydx+xdy}{x^2+y^2}$ is neither closed nor exact.
\medskip
(3) $\pdv{}{y}[\ln(xy)+1]=\pdv{}{x}(\frac{x}{y})=\frac{1}{y}$, so $[\ln(xy)+1]dx+\frac{x}{y}dy$ is closed and exact. Let $x_0=0$, then
\[
f=\int_{x_0}^x[\ln(xy)+1]dx+\int_{y_0}^y\frac{x_0}{y}dy=x\ln(xy)
\]
(4) $\pdv{}{y}(\frac{-y}{x^2+y^2})=\pdv{}{x}(\frac{x}{x^2+y^2})=\frac{-x^2+y^2}{(x^2+y^2)^2}$, so $\frac{-y}{x^2+y^2}dx+\frac{x}{x^2+y^2}dy$ is closed and exact. Let $x_0=0$, then
\[
f=\int_{x_0}^x\frac{-y}{x^2+y^2}dx+\int_{y_0}^y\frac{x_0}{{x_0}^2+y^2}dy=-\tan^{-1}\frac{x}{y}
\]
(5) $f(z)\,dz=(x+iy)dx+(-y+ix)dy$.\quad$\pdv{(x+iy)}{y}=\pdv{(-y+ix)}{x}=i$, so $(x+iy)dx+(-y+ix)dy$ is closed and exact. Let $x_0=y_0=0$, then
\[
f=\int_{x_0}^x(x+iy)dx+\int_{y_0}^y(-y+ix_0)dy=\frac{x^2-y^2}{2}+ixy
\]
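\noindent
(As a check: $f=\frac{x^2-y^2}{2}+ixy=\frac{(x+iy)^2}{2}=\frac{z^2}{2}$, consistent with $\int f(z)\,dz=\int z\,dz$ for $f(z)=z$.)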
\end{document}
| {
"alphanum_fraction": 0.596569702,
"avg_line_length": 39.7620730271,
"ext": "tex",
"hexsha": "10a8fc828a75c86081ded8603a9c4e455c5fe3e6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "hikarimusic2002/Solutions",
"max_forks_repo_path": "Mathematical Methods for Physicists/Chapter 04/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "hikarimusic2002/Solutions",
"max_issues_repo_path": "Mathematical Methods for Physicists/Chapter 04/main.tex",
"max_line_length": 763,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3f48f7e1e97cc78c01142936a267255f7164f6a4",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "hikarimusic2002/Solutions",
"max_stars_repo_path": "Mathematical Methods for Physicists/Chapter 04/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 16668,
"size": 33758
} |
\documentclass[11pt]{article}
\usepackage{graphics}
\usepackage{amsmath, amsthm, amssymb, latexsym}
\usepackage{hyperref}
%\usepackage[firstpage]{draftwatermark}
\usepackage{draftwatermark}
%\newtheorem{theorem}{Theorem}
%\newtheorem{definition}{Definition}
%\newtheorem{lemma}{Lemma}
%\newtheorem{corollary}{Corollary}
\begin{document}
\title{\textbf{Collecting Data with Unlock}}
\author{James Percent \\
[email protected]}
\date{\today}
\parskip 11pt
\parindent 0pt
\maketitle
%\tableofcontents
\section{Overview}
This document provides details about using the Unlock framework, on the Mouth workstation, to collect data for offline analysis. Section~\ref{unlock} covers Mouth account setup, Unlock software setup and MOBIlab hardware setup, and Section~\ref{collector} covers collecting data.
\section{Setup}\label{unlock}
A Mouth account is required to get started. If you do not have a Mouth account, then send me an email and I'll create one for you.
Next, login to Mouth and launch the Git Bash Shell. To launch the Git Bash Shell, click Start Menu$\rightarrow$Git Bash Shell. See Figure~\ref{fig:gitbash}.
Note that the images in this document have high enough resolution to clearly read the text on them, so if you're having trouble reading the text, then use the Zoom In feature of your PDF viewer to increase the size of the image.
The next step is Github configuration. Github needs to be configured with your public ssh key from Mouth. You do not have a public key on Mouth by default, so we need to generate one using ssh-keygen. After we generate a public key, we need to upload it to Github. Finally, after the public key is uploaded, we can clone the repo.
Figure~\ref{fig:bash-cmds} shows the commands that need to be run from Mouth; in particular note that the generated public key is located in the .ssh directory of your home directory; also note, public keys end in $.pub$. To upload the key to Github, click on the Account settings icon, and, after that page loads, click on SSH Keys link.
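For reference, the commands shown in Figure~\ref{fig:bash-cmds} are along the following lines (the key file name and repository URL below are assumptions inferred from the rest of this document; use the exact values shown in the figure and on your Github account).
\begin{verbatim}
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub
$ git clone git@github.com:NeuralProsthesisLab/unlock-npl.git
\end{verbatim}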
The MOBIlab device must be connected to collect data. Connecting the MOBIlab to Mouth consists of 2 steps: powering on the MOBIlab and connecting its USB fob. The MOBIlab USB is the one with the green key-ring identifier.
\begin{figure}[]
\resizebox{\textwidth}{!}{\includegraphics{images/git-bash0.png}}
\caption{\label{fig:gitbash} Git bash shell}
\end{figure}
\begin{figure}[]
\resizebox{\textwidth}{!}{\includegraphics{images/bash-cmds.png}}
\caption{\label{fig:bash-cmds} Bash commands }
\end{figure}
\section{Collecting Data}\label{collector}
After completing the steps in Section~\ref{unlock}, a directory called unlock-npl should exist in your home directory; enter this directory. Collector.py is the program that we use to collect data. Running the collector is simple. To see the options run the following command.
\begin{verbatim}
$ python collector.py --help
\end{verbatim}
Running an m-sequence visualization, with 4 cues, separated by 5 seconds, can be accomplished by executing the following command.
\begin{verbatim}
$ python collector.py -v left,right,up,down -m
\end{verbatim}
The output file consists of a sequence of samples separated by line breaks. Each sample consists of $n = channels + cue$ integers, separated by tabs. For example, a 4 channel sample would look like the following.
\begin{verbatim}
11123 -2123 23456 1234 0
\end{verbatim}
The cue will be the last integer in the sequence. A cue value can only be $0$ or $1$. Figure~\ref{fig:cmds} shows the help output and some possible executions. The default number of channels is 4, the default duration between cue events is 5 seconds, and the default output file will be named gtec concatenated with a timestamp. Figure~\ref{fig:mseq} shows what an m-sequence visualization looks like.
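As an illustration of how such a file might be read back for offline analysis, the following Python sketch parses the tab-separated format described above (the file name is a placeholder for whatever collector.py actually produced).
\begin{verbatim}
import numpy as np

# Placeholder file name; substitute the actual "gtec<timestamp>" file.
data = np.loadtxt("gtec-output.txt", delimiter="\t")
channels = data[:, :-1]   # one column per channel
cues = data[:, -1]        # last column is the 0/1 cue flag
print(len(data), "samples;", int(cues.sum()), "cue events")
\end{verbatim}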
\begin{figure}[]
\resizebox{\textwidth}{!}{\includegraphics{images/commands.png}}
\caption{\label{fig:cmds} Example collector usage}
\end{figure}
\begin{figure}[]
\resizebox{\textwidth}{!}{\includegraphics{images/msequence.png}}
\caption{\label{fig:mseq} Example of running the m-sequence collector}
\end{figure}
%\begin{thebibliography}{99}
%\bibitem{unlock} http:\/\/github.com/NeuralProsthesisLab/unlock-npl.
%\bibitem{mobilab} http:\/\/www.gtec.at\/Products/Hardware-and-Accessories\/g.MOBIlab-Specs-Features
%\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7711806348,
"avg_line_length": 50.3333333333,
"ext": "tex",
"hexsha": "b2f55143fde7b71ceeafa5dd832831cd8762ab27",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-03-28T15:47:58.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-05-21T12:38:42.000Z",
"max_forks_repo_head_hexsha": "0c4d95abdab288d3e657ca2db867b06f755f26ff",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "NeuralProsthesisLab/unlock",
"max_forks_repo_path": "unlock/doc/offline-analysis.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "0c4d95abdab288d3e657ca2db867b06f755f26ff",
"max_issues_repo_issues_event_max_datetime": "2015-05-21T16:03:43.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-05-21T01:02:50.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "NeuralProsthesisLab/unlock",
"max_issues_repo_path": "unlock/doc/offline-analysis.tex",
"max_line_length": 405,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "0c4d95abdab288d3e657ca2db867b06f755f26ff",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "NeuralProsthesisLab/unlock",
"max_stars_repo_path": "unlock/doc/offline-analysis.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-03T21:50:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-05-05T01:08:55.000Z",
"num_tokens": 1141,
"size": 4379
} |
\section{Conclusion} \label{gua:sec:concl}
We have extended the cutoffs for guarded protocols of Emerson and
Kahlon~\cite{Emerson00} to support local deadlock detection, fairness
assumptions, and open systems.
In particular, our results imply the decidability of the parameterized model checking problem for this class of systems and specifications,
which to the best of our knowledge was unknown before.
%Our results allow us to model check
%guarded protocols that satisfy not
%only safety, but also liveness conditions, for an arbitrary number of
%components.
Furthermore, the cutoff results can easily be integrated into
the parameterized synthesis approach~\cite{JB14}.
%~\cite{Jacobs14,Khalimov13,Khalimov13a}.
%An approach for using cutoff results in
%synthesis has been introduced by
%Jacobs and Bloem~\cite{Jacobs14}. It has been described in detail for the
%case of
%token-passing systems. Follow-up papers have shown how to make the approach
%more efficient~\cite{KhalimovJB13b}, and how to use it for the synthesis of a
%large
%case study, the AMBA bus arbiter~\cite{BloemJK14}.
Since conjunctive guards can model atomic sections and read-write locks,
and disjunctive guards can model pairwise rendezvous
(for some classes of specifications, see~\cite{EmersonK03}),
our results apply to a wide spectrum of systems models.
But the expressive power of the model %and flexibility of the results
comes at a high cost: cutoffs are linear in the size of a process, and
are shown to be tight (with respect to this parameter).
For conjunctive systems, our new results are restricted to systems with
1-conjunctive guards, effectively only allowing to model a single shared
resource.
We conjecture that our proof methods can be extended to systems with
more general conjunctive guards, at the price of bigger cutoffs.
We leave this extension and the question of finding cutoffs that are independent of the size of processes for future research.
We did preliminary experiments~\cite{SimonThesis} by implementing the synthesizer inside our parameterized synthesizer PARTY~\cite{party}.
Applying it to real-world applications is left for future work.
%
%
%We are working on a prototype implementation (\url{https://bitbucket.org/parsy/guarded_synthesis/}), which however is currently limited to very small systems.\sj{we should re-formulate or remove the comment on implementation}
%In future work, we will try to lift the restrictions of our results for conjunctive systems, and investigate cutoffs that are independent of the size of the components' state spaces.
%\ak{remove this promise?}
\ak{note that EK have better complexities for 'for all paths' properties.
As a future work, one can look if our cutoffs can be improved.}
%This is due
%to the growth of the cutoff (linearly) and the set of possible transition
%guards (doubly exponential) in the size of process templates.
%In the future,
%we will look into cutoffs that are independent of the size of process
%templates.
| {
"alphanum_fraction": 0.7953042328,
"avg_line_length": 56,
"ext": "tex",
"hexsha": "d6b1e163858eaa8315fc6c508e68affb6dbb7c49",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "74a7a4c6ed06aa2894d2ba05f417f5f812730b78",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "5nizza/phd-thesis",
"max_forks_repo_path": "thesis/guarded-systems/conclusion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74a7a4c6ed06aa2894d2ba05f417f5f812730b78",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "5nizza/phd-thesis",
"max_issues_repo_path": "thesis/guarded-systems/conclusion.tex",
"max_line_length": 227,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "74a7a4c6ed06aa2894d2ba05f417f5f812730b78",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "5nizza/phd-thesis",
"max_stars_repo_path": "thesis/guarded-systems/conclusion.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 711,
"size": 3024
} |
\documentclass[aspectratio=169]{beamer}
\usepackage[utf8]{inputenc} % required for umlauts
\usepackage[english]{babel} % language
%\usepackage[sfdefault]{roboto} % enable sans serif font roboto
%\usepackage{libertine} % enable this on Windows to allow for microtype
\usepackage[T1]{fontenc} % required for output of umlauts in PDF
\usepackage{mathtools} % required for formulas
\usepackage{caption} % Customize caption aesthetics
\usepackage{tcolorbox} % fancy colored boxes
\usepackage{xcolor} % Highlighting
\usepackage{soul}
\usepackage{graphicx} % required to insert images
\usepackage{subcaption} % enable sub-figure
\usepackage[space]{grffile} % insert images baring a filename which contains spaces
\usepackage{float} % allow to forcefully set the location of an object
\usepackage[tracking=true]{microtype} % required to change character spacing
\usepackage[style=numeric,backend=biber]{biblatex}
\usepackage{hyperref} % insert clickable references
\usepackage{datetime} % flexible date specification
\newcommand{\leadingzero}[1]{\ifnum#1<10 0\the#1\else\the#1\fi}
\newcommand{\todayddmmyyyy}{\leadingzero{\day}.\leadingzero{\month}.\the\year}
\newcommand{\mathcolorbox}[2]{\colorbox{#1}{$\displaystyle #2$}}
\usepackage{geometry}
\usepackage{scrextend} % allow arbitrary indentation
\usepackage{color}
\setbeamercolor{title}{fg=orange}
\setbeamertemplate{title}{
\color{orange}
\textbf{\inserttitle}
}
\setbeamercolor{tableofcontents}{fg=orange}
\setbeamercolor{section in toc}{fg=black}
\setbeamercolor{subsection in toc}{fg=black}
\setbeamertemplate{frametitle}{
%\vspace{0.5em}
\color{orange}
\begin{center}
\textbf{\insertframetitle} \\
{\small \insertframesubtitle}
\end{center}
}
\setbeamertemplate{footline}[text line]{
\parbox{\linewidth}{
\color{gray}
\vspace*{-1em}
PSRC 2018
\hfill
Gordian (\href{mailto:[email protected]}{[email protected]})
\hfill
\insertpagenumber
}
}
\setbeamertemplate{navigation symbols}{}
\setbeamertemplate{itemize item}{\color{black}$\bullet$}
\setbeamertemplate{itemize subitem}{\color{black}$\circ$}
\setbeamercolor{block title}{fg=black}
\captionsetup{font=scriptsize,labelfont={bf,scriptsize}}
\title{Seventh Weekly Update on `Optimization~of~Particle~Identification'}
\subtitle{Neyman Pearson by detector, pt and cosTheta; Abundance comparisons; Neural Network for different optimizers and various parameters}
\author[Edenhofer]{\href{mailto:[email protected]}{Gordian Edenhofer}}
\institute[LMU]{
Working Group of Prof.~Dr.~Kuhr \\
Faculty of Physics \\
Excellence Cluster Universe
}
\date[BA Thesis 2018]{\today}
\subject{Particle Physics}
\begin{document}
\section{Git log}
\begin{frame}
\frametitle{\insertsection}
\begin{itemize}
\item Neyman Pearson
\begin{itemize}
\item{By pt (\textit{Appendix})}
\item{By cosTheta (\textit{Appendix})}
\end{itemize}
\item Abundance comparisons
\item Neural Network
\begin{itemize}
\item{By optimizer}
\item{By batch size}
\end{itemize}
\end{itemize}
\end{frame}
\section{Neyman Pearson}
\subsection{Anomalies}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.7\textheight,keepaspectratio]{{{../res/charged 01/General Purpose Statistics: Relative p Abundance in Likelihood Ratio Bins for ALL detector}}}
\caption{`ALL' detector}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.7\textheight,keepaspectratio]{{{../res/charged 01/General Purpose Statistics: Relative p Abundance in Likelihood Ratio Bins for CDC detector}}}
\caption{`CDC' detector}
\end{subfigure}
\caption{Relative $p$ abundance in likelihood ratio bins for various detectors.}
\end{figure}
\end{frame}
\section{Abundance comparisons}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.7\textheight,keepaspectratio]{{{../res/charged 01/Diff Abundances: Particle Abundances in the K+-Data via PID, via flat Bayes}}}
\caption{PID, flat Bayes}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.7\textheight,keepaspectratio]{{{../res/charged 01/Diff Abundances: Particle Abundances in the K+-Data via flat Bayes, by pt & cos(Theta)}}}
\caption{flat Bayes, by $p_t$ \& $\cos(\Theta)$}
\end{subfigure}
\caption{Particle Abundances for different selection method with an exclusive classification.}
\end{figure}
\end{frame}
\section{Neural Network by optimizer}
\subsection{Fair sampling}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca fair nLayers7 Optimizerrmsprop LearningRateNone nEpochs20 BatchSize192}}}
\caption{RMSprop}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca fair nLayers7 Optimizeradagrad LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adagrad}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca fair nLayers7 Optimizeradadelta LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adadelta}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca fair nLayers7 Optimizeradam LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adam}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca fair nLayers7 Optimizeradamax LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adamax}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca fair nLayers7 Optimizernadam LearningRateNone nEpochs20 BatchSize192}}}
\caption{Nadam}
\end{subfigure}
\caption{Accuracy by optimizer for a PCA feature selection and using fair sampling.}
\end{figure}
\end{frame}
\subsection{Biased sampling}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca biased nLayers7 Optimizerrmsprop LearningRateNone nEpochs20 BatchSize192}}}
\caption{RMSprop}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca biased nLayers7 Optimizeradagrad LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adagrad}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca biased nLayers7 Optimizeradadelta LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adadelta}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca biased nLayers7 Optimizeradam LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adam}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca biased nLayers7 Optimizeradamax LearningRateNone nEpochs20 BatchSize192}}}
\caption{Adamax}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\textwidth,height=0.26\textheight,keepaspectratio]{{{../res/sample/Neural Network Model: Accuracy pca biased nLayers7 Optimizernadam LearningRateNone nEpochs20 BatchSize192}}}
\caption{Nadam}
\end{subfigure}
\caption{Accuracy by optimizer for a PCA feature selection and using biased sampling.}
\end{figure}
\end{frame}
\section{Appendix}
\subsection{Anomalies in bins}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=0.6\textheight,keepaspectratio]{{{../res/charged 01/pidProbability Approach: Relative p Abundance in Likelihood Ratio Bins for ALL detector for equal size pt bins}}}
\caption{Relative $p$ Abundance in Likelihood Ratio Bins for the `ALL' detector using \textit{equal~height} $p_t$ bins.}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=0.6\textheight,keepaspectratio]{{{../res/charged 01/pidProbability Approach: Relative p Abundance in Likelihood Ratio Bins for ALL detector for equal size cos(Theta) bins}}}
\caption{Relative $p$ Abundance in Likelihood Ratio Bins for the `ALL' detector using \textit{equal~height} $cos(\Theta)$ bins.}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=0.6\textheight,keepaspectratio]{{{../res/charged 01/pidProbability Approach: Relative p Abundance in Likelihood Ratio Bins for CDC detector for equal size pt bins}}}
\caption{Relative $p$ Abundance in Likelihood Ratio Bins for the `CDC' detector using \textit{equal~height} $p_t$ bins.}
\end{figure}
\end{frame}
\subsection{pidProbability scatter plot}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=0.6\textheight,keepaspectratio]{{{../res/charged 01/General Purpose Statistics: p pidPropability multi-axes Histogram of pt, cosTheta above 0.05 Threshold}}}
\caption{Multi-axes Histogram for the $p$-pidProbability above a 0.05 threshold as scatter plot depending on $p_t$ and $cos(\Theta)$.}
\end{figure}
\end{frame}
\subsection{White paper of detector design}
\begin{frame}
\frametitle{\insertsection}
\framesubtitle{\insertsubsection}
\begin{figure}
\centering
\includegraphics[width=\textwidth,height=0.65\textheight,keepaspectratio]{{{../res/Belle 2 detector design white paper}}}
\caption{Belle 2 preliminary detector design at construction stage.}
\end{figure}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.7601104579,
"avg_line_length": 38.4452054795,
"ext": "tex",
"hexsha": "49de2bed1f50b753a781d830b4b0f08d981720b0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Edenhofer/PID-boost",
"max_forks_repo_path": "doc/updates/07-Weekly Update.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Edenhofer/PID-boost",
"max_issues_repo_path": "doc/updates/07-Weekly Update.tex",
"max_line_length": 216,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Edenhofer/PID-boost",
"max_stars_repo_path": "doc/updates/07-Weekly Update.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3552,
"size": 11226
} |
\chapter{Scientific Thesis}
\lipsum[1]
\section{Different kinds of thesis}
\lipsum[2]
\subsection{Exam papers}
\lipsum[3]
\subsection{Scientific papers}
\lipsum[4]
\subsection{Scientific monographs}
\lipsum[5]
\subsection{Scientific reports}
\lipsum[6]
\section{Different kinds of research}
\lipsum[7]
\subsection{Literature-focused research}
\lipsum[8]
\subsection{Empirical descriptive research}
\lipsum[9]
\subsection{Theory grounding research}
\lipsum[10]
\subsection{Theory scrutinizing research}
\lipsum[11]
\subsection{Methodology research}
\lipsum[12]
| {
"alphanum_fraction": 0.7615780446,
"avg_line_length": 12.1458333333,
"ext": "tex",
"hexsha": "539756286baa9163f3617cfed1938bbb60881f24",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-05-20T21:00:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-05-20T21:00:55.000Z",
"max_forks_repo_head_hexsha": "dda01c1442c417dc729d72c7dca58612f3d911f6",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "danieldinter/fsfm-tex-template",
"max_forks_repo_path": "example/chapters/1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dda01c1442c417dc729d72c7dca58612f3d911f6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "danieldinter/fsfm-tex-template",
"max_issues_repo_path": "example/chapters/1.tex",
"max_line_length": 43,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "dda01c1442c417dc729d72c7dca58612f3d911f6",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "danieldinter/fsfm-tex-template",
"max_stars_repo_path": "example/chapters/1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 161,
"size": 583
} |
%
% Copyright (c) 2011 - 2016
% University of Houston System and UT-Battelle, LLC.
% Copyright (c) 2009 - 2016
% Silicon Graphics International Corp. SHMEM is copyrighted
% by Silicon Graphics International Corp. (SGI) The OpenSHMEM API
% (shmem) is released by Open Source Software Solutions, Inc., under an
% agreement with Silicon Graphics International Corp. (SGI).
%
% All rights reserved.
%
% Redistribution and use in source and binary forms, with or without
% modification, are permitted provided that the following conditions
% are met:
%
% o Redistributions of source code must retain the above copyright notice,
% this list of conditions and the following disclaimer.
%
% o Redistributions in binary form must reproduce the above copyright
% notice, this list of conditions and the following disclaimer in the
% documentation and/or other materials provided with the distribution.
%
% o Neither the name of the University of Houston System,
% UT-Battelle, LLC. nor the names of its contributors may be used to
% endorse or promote products derived from this software without specific
% prior written permission.
%
% THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
% "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
% LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
% A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
% HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
% SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
% TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
% PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
% LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
% NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
% SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
%
\documentclass[english]{article}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\usepackage{listings}
\usepackage{babel}
\usepackage{url}
\usepackage{graphicx}
\usepackage[unicode=true]{hyperref}
\usepackage[usenames,dvipsnames]{color}
\definecolor{ListingBG}{rgb}{0.91,0.91,0.91}
\usepackage{courier}
\usepackage{listings}
\lstset{
basicstyle=\scriptsize\ttfamily,
backgroundcolor=\color{ListingBG},
frame=tlBR,
showspaces=false,
showstringspaces=false,
showtabs=false
}
% \usepackage[disable]{todonotes}
% --------------------------------------------------------------------------
\providecommand{\tabularnewline}{\\}
\newcommand{\lyxrightaddress}[1]{
\par {\raggedleft \begin{tabular}{l}\ignorespaces
#1
\end{tabular}
\vspace{1.4em}
\par}
}
\newenvironment{lyxlist}[1]
{\begin{list}{}
{\settowidth{\labelwidth}{#1}
\setlength{\leftmargin}{\labelwidth}
\addtolength{\leftmargin}{\labelsep}
\renewcommand{\makelabel}[1]{##1\hfil}}}
{\end{list}}
% --------------------------------------------------------------------------
\usepackage{xspace}
\newcommand{\openshmem} {\mbox{OpenSHMEM}\xspace}
% --------------------------------------------------------------------------
\begin{document}
\title{\openshmem Reference Library Implementation}
\author{Tony Curtis <[email protected]>}
\maketitle
\lyxrightaddress{Computer Science Department, University of Houston}
\pagebreak{}
\tableofcontents{}
\pagebreak{}
\section{Sponsorship}
Work on the \openshmem project is sponsored by the
\href{http://www.ornl.gov/}{Oak Ridge National Laboratory} Extreme
Scale System Center and the \href{http://www.defense.gov/}{U.S. Department
of Defense}.
Development at the University of Houston was supported in part by the
National Science Foundation's Computer Systems Research program under
Award No. CRI-0958464. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors
and do not necessarily reflect the views of the National Science
Foundation.
\section{Introduction}
This document is an overview of the implementation strategy for the
initial, reference version of \openshmem. For completeness, we also
discuss the concept of a partitioned global address space.
\section{Terminology}
An \openshmem program consists of a number of processors executing
separate processes. A processor is referred to as a ``processing
element'', or PE. All PEs run the same program in the SPMD model,
although they can discover which position, or rank, they occupy within
the program and diverge in behavior.
The number of PEs participating in the program is set at launch-time
(although not all PEs need to do work). PEs are numbered monotonically
increasing from 0 up to $N-1$, where $N$ is the total number of PEs.
PEs are assumed to be equidistant from each other for communication
purposes; no topological information is currently exposed.
Communication occurs through point-to-point one-sided routines,
point-to-point remote atomic operations, and collective operations. A
one-sided operation is a communication in which one PE transfers data
to another PE, but the ``other'' PE does not participate: the data
transfer does not cause the other PE to be interrupted to acknowledge
the transfer (assuming the hardware underneath \openshmem allows this).
\section{Partitioned Global Address Space}
Parallel programs running in a distributed environment access both
local and remote data. The model used to construct the program can
either expose or hide this distribution. Exposed models include that
of MPI and PVM, in which explicit messages are required to pass data
between processors participating in a parallel program. Hidden models
include those with a Global Address Space (GAS), in which there
appears to be memory accessible from all processors. This memory may
be physically accessible, or may in fact be made available through I/O
operations over network interconnects.
SHMEM provides a symmetric view of memory, in which processors
allocate variables in concert but have their own copies. A processor
can then ``put'' or ``get'' data to, or from, other processors by
requesting a specific variable on another processor. SHMEM provides
for address translation when required to allow a variable allocated by
one processor to be accessed by another, because in a number of
environments it is not guaranteed that address spaces are
uniform. This allocation-in-concert of separate variables is termed
Partitioned Global Address Space (PGAS).
Clusters that use interconnects with remote direct memory access
(rDMA) are of particular interest to the PGAS community as they
provide hardware off-load capability to avoid interrupts during
one-sided communications.
\section{SHMEM History and \openshmem}
The SHMEM communications library was originally developed as a
proprietary application interface by Cray for their T3D systems and
subsequently the T3E models. These systems typically consisted of a
memory subsystem with a logically shared address space over physically
distributed memories, a memory interconnect network, a set of
processing elements (PEs), a set of input-output gateways, and a host
subsystem. The systems were designed to support latency hiding
mechanisms such as prefetch queues, remote stores and the Block
Transfer Engine (BLT). The prefetch queues allowed the users to issue
multiple asynchronous single-word reads which could overlap with
computation. Remote stores enabled PEs to directly write to other PE's
memory asynchronously, while the BLT could hide latency while
transferring blocks of data efficiently between local memory and
remote memory locations. The explicit shared memory programming method
allowed structured communication via shared memory on Cray MPP
systems.
SHMEM was later adapted by SGI for its products based on the Numa-Link
architecture and included in the Message Passing Toolkit (MPT). Other
SHMEM implementations grew out of the SGI and Cray implementations,
including Quadrics, HP, IBM, gpSHMEM and SiCortex, but diverged from
the original libraries as they developed. These implementations of the
SHMEM API support C, C++, and Fortran programs; however, the
differences between SHMEM implementations' semantics and APIs are
subtle, resulting in portability and correctness
issues. The U.S. Department of Defense funded a collaboration between Oak
Ridge National Laboratory and the University of Houston to develop a
specification for a uniform SHMEM API. The \openshmem specification was
announced to address the divergence of the SHMEM APIs.
\section{The Reference \openshmem Library}
The overall structure of the reference library is shown below. We use
an intermediate library such as GASNet or ARMCI to abstract away from
a particular interconnect/platform, although there is nothing to stop
a more direct approach being used. Internally, the reference
implementation of \openshmem provides private APIs for memory
management, the communications layer, tracing/debugging and
(eventually) support for adaptivity to choose things like a barrier
algorithm at run-time.
\medskip{}
\begin{center}
\includegraphics[scale=0.5]{implementation}
\end{center}
\section{Implementation Strategy}
In this section we talk about how the reference library was written
and why certain implementation strategies were chosen.
\subsection{GASNet}
GASNet provides access to the symmetric memory areas. A memory
management library marshals accesses to these areas during allocation
and freeing of symmetric variables in user code, usually through a
call like \texttt{shmalloc()} or \texttt{shfree()}.
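For illustration, the following minimal sketch (not taken from the
library's test suite) shows the symmetric allocation calls mentioned
above; every PE makes the same calls so that the allocations stay in
step.
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Symmetric allocation sketch}]
#include <shmem.h>

int
main (void)
{
  shmem_init ();

  /* every PE executes the allocation "in concert"; the result is a
     symmetric array that remote PEs can address */
  long *counters = (long *) shmalloc (16 * sizeof (*counters));

  /* ... use the symmetric memory via puts, gets and atomics ... */

  shfree (counters);
  return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}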
When you call \texttt{gasnet\_attach()} and ask for segment information,
each PE has access to an array of segments, one segment per PE. Each PE
initializes a memory pool within its own segment. The setup is
handled either by GASNet internally (``fast''/``large'' models) or by
\openshmem itself (``everything'' model). The table of segments
allows any PE to know the virtual location and size of the segment
belonging to any other PE.
If the platform allows it, GASNet can align all the segments at the
same address, which means that all PEs see the same address for
symmetric variables and there's no address translation.
In the general case though, segments are not aligned (e.g.\ due to a
security measure like process address space randomization by the
OS). However, each PE can see the addresses of the segments of the
other PEs locally, and can therefore do address translation.
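The idea can be sketched as follows; the structure and function names
here are purely illustrative and are not the library's actual
internals.
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Address translation sketch (illustrative)}]
#include <stddef.h>

/* Illustrative only: a per-PE record of every PE's segment,
   populated from the GASNet segment table at start-up. */
typedef struct { void *base; size_t size; } seg_info_t;
extern seg_info_t seg[];   /* indexed by PE number */

/* Translate a local symmetric address into the corresponding
   address within the target PE's (possibly unaligned) segment. */
static void *
translate (void *local_addr, int my_pe, int target_pe)
{
  size_t offset = (char *) local_addr - (char *) seg[my_pe].base;
  return (char *) seg[target_pe].base + offset;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}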
Currently alignment is not checked for, so we're coding to the
``worst case scenario''. That just adds a \emph{small} overhead if
the segments are in fact aligned. The library should at some point
introduce code that differentiates between aligned and non-aligned
environments with optimized code for the former case (GASNet provides
a macro you can test against).
\subsubsection{Segment Models}
GASNet supports 3 segment models: ``fast'', ``large'' and
``everything''. ``fast'' and ``large'' allow a certain region of
process memory to be registered. ``everything'' maps the entire
process memory, but is not supported by all interconnect conduits, and
often shows very poor performance in the instances where the author
has tried to use it.
For the ``fast'' and ``large'' models, extra support has to be added
to handle communication with global variables, because only the
symmetric heap is exposed by GASNet. Currently this is done via Active
Messages.
For the SMP conduit, PSHM support is required to run parallel threaded
programs with \openshmem. This excludes the ``everything'' model (at
least for the architectures at hand).
\subsection{Initialization}
In \texttt{src/updown/} we handle setting up the \openshmem runtime,
and eventual shutdown. Shutdown is implicit in user code: there is no
call to do this in SGI SHMEM, so we register a handler to be called
when \texttt{main()} exits. (Cray SHMEM has an explicit finalize call,
however, and a profiling interface proposal has suggested introducing
this to \openshmem.) The segment exchange is a target for future
optimization: in large programs, the start-up time will become
burdensome due to the large number of address/size
communications. Strategies for avoiding this include lazy
initialization and hierarchical or directory-based lookups.
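A minimal sketch of the implicit-shutdown mechanism is shown below;
the function names are illustrative rather than the library's actual
internals, and the standard C \texttt{atexit()} hook is assumed to be
the registration mechanism.
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Implicit shutdown sketch (illustrative)}]
#include <stdlib.h>

/* Illustrative names only: the library's internal tear-down routine */
static void
exit_handler (void)
{
  /* release the symmetric heap, shut down GASNet, ... */
}

static void
initialize_library (void)
{
  /* ... set up segments, memory pools, service thread, ... */

  /* SGI SHMEM has no explicit finalize call, so arrange for the
     tear-down to run automatically when main() returns */
  atexit (exit_handler);
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}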
\subsection{Incorporating \openshmem into Programs}
For C, the appropriate header file must be included:
\texttt{<shmem.h>}~\footnote{The location \texttt{<mpp/shmem.h>} is
also accepted, but is now considered deprecated and the preprocessor
can generate a warning if desired}. The first \openshmem call in
any program must be \texttt{shmem\_init ()}. The number of PEs is
taken from the invoking environment. Currently this number is assumed
to be fixed throughout the lifetime of the program, but fault
tolerance extensions could generalize this notion. Below are simple C
and Fortran program templates:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Simple C ``hello world'' program}]
#include <stdio.h>
#include <shmem.h>
int
main(int argc, char *argv[])
{
int me, npes;
shmem_init ();
me = shmem_my_pe (); /* which PE I am */
npes = shmem_n_pes (); /* how many PEs in program */
printf("Hello from PE %d of %d\n", me, npes);
return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=Fortran,caption={Simple Fortran ``hello world'' program}]
program hello
include 'shmem.fh'
integer :: shmem_my_pe, shmem_n_pes
integer :: me, npes
call shmem_init()
me = shmem_my_pe()
npes = shmem_n_pes()
print *, 'Hello from PE ', me, ' of ', npes
end program hello
\end{lstlisting}
\end{minipage}
\subsection{Communications Substrate}
The \openshmem library has been written to sit on top of any
communications library that can provide the required
functionality. Initially we have targeted GASNet. The directory
\texttt{src/comms/gasnet} provides implementations of the internal
API. All subsequent references to GASNet should be read with an eye on
the abstraction process.
\subsection{Servicing Communications}
GASNet provides this functionality in some cases. Where it does not,
the mainline code needs to spin on variable waits
(e.g.\ \texttt{shmem\_long\_wait\_until}) to poll GASNet; otherwise
progress is automatic via a servicer unit, implemented with a progress
thread that polls in a continuous loop.
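For illustration, the minimal program below (assuming at least two
PEs and the standard \texttt{shmem\_long\_p} and
\texttt{shmem\_long\_wait\_until} calls) shows the kind of variable
wait that drives this polling.
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Waiting on a symmetric flag}]
#include <shmem.h>

long flag = 0;   /* symmetric: global variables live in the data segment */

int
main (void)
{
  shmem_init ();   /* run with at least 2 PEs */

  if (shmem_my_pe () == 0) {
    /* deliver a single value into PE 1's copy of "flag" */
    shmem_long_p (&flag, 1L, 1);
  }
  else if (shmem_my_pe () == 1) {
    /* spinning here also gives the library a chance to poll GASNet */
    shmem_long_wait_until (&flag, SHMEM_CMP_EQ, 1L);
  }
  return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}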
\subsection{Memory Management}
Initially we tried to use the TLSF library (as used in the SiCortex
SHMEM implementation):
\url{http://rtportal.upv.es/rtmalloc/}
but this proved to have weird interactions with Open MPI. Tracking
program progress with valgrind
\url{http://www.valgrind.org/}
suggested that system memory calls were
being intercepted.
So, following the Chapel lead,
\url{http://chapel.cray.com/}
we now use the ``dlmalloc'' library
\url{http://g.oswego.edu/dl/html/malloc.html}
to manage allocations in the symmetric memory space.
\subsection{Point-to-point routines}
Point-to-point operations are a thin layer on top of GASNet. The
non-blocking put operations with implicit handles provide a way to
subsequently fence and barrier. However, tracking individual handles
explicitly with a hash table keyed on the address of symmetric
variables may give better performance, and this needs to be looked
into.
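As a minimal illustration of the put/fence layering (using standard
OpenSHMEM calls, not code taken from the library itself), a data block
can be delivered and then flagged as ready only after the implicit
handles have been drained:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Put followed by quiet and notification}]
#include <shmem.h>

#define N 1024

long dest[N];    /* symmetric */
long notify = 0; /* symmetric */

void
send_block (long *src, int target_pe)
{
  /* layered on GASNet's implicit-handle non-blocking put */
  shmem_long_put (dest, src, N, target_pe);

  /* wait for all outstanding puts from this PE to complete ... */
  shmem_quiet ();

  /* ... so the notification flag cannot overtake the data */
  shmem_long_p (&notify, 1L, target_pe);
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}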
The Quadrics extensions that add non-blocking calls into the API
proper have already been requested for the \openshmem development. An
initial attempt at these is already in the library, and it passes the
Cray verification tests.
\subsection{Atomic Operations}
Atomic operations include swaps, fetch-and-add and locks (discussed
separately in \ref{sub:Locks}). The first two are handled via GASNet's
Active Messages. Increment was originally layered on top of add
(increment = add 1, after all) but was rewritten with its own
handlers. The payload for increment can be ever so slightly smaller
than for add since there's no need to pass the value to add. In large
applications, even such a small saving could add up (if you'll pardon
the pun).
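A minimal sketch of the fetch-and-add and increment calls discussed
above (standard OpenSHMEM routines, not the library's internal
handlers) is:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Remote atomic operations}]
#include <stdio.h>
#include <shmem.h>

long counter = 0;   /* symmetric */

int
main (void)
{
  shmem_init ();

  /* every PE bumps PE 0's counter; fetch-and-add returns the old value */
  long old = shmem_long_fadd (&counter, 1L, 0);

  /* increment carries no operand, hence the slightly smaller payload */
  shmem_long_inc (&counter, 0);

  shmem_barrier_all ();
  if (shmem_my_pe () == 0)
    printf ("counter = %ld (this PE's fadd saw %ld)\n", counter, old);
  return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}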
Earlier versions of the implementation had a single handler lock
variable per operation (one for all adds, one for all increments,
\emph{etc.}).
% However, we've now added a hash table to dynamically
% allocate and manage per-target-address handler locks. Large-scale
% atomic operations, like add-scatters across multiple variables could
% easily benefit from this, as the lock granularity then permits
% concurrent discrete memory accesses.
However, the \openshmem 1.3 specification clarified atomicity
guarantees: there is now a lock per data-type (integer, float, double,
...).
\subsection{\label{sub:Locks}Locks}
\openshmem provides routines to claim, release and test global locks.
These can be used for mutual-exclusion regions. Our implementation is
from the Quadrics library, which is a version of the
Mellor-Crummey-Scott algorithm (``Algorithms for Scalable
Synchronization on Shared-Memory Multiprocessors'' by John
M. Mellor-Crummey and Michael L. Scott). The locks are layered on top
of \openshmem primitives, so there are no Elan dependencies.
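A minimal mutual-exclusion sketch using the lock routines (the
symmetric lock variable must be initialized to zero on all PEs; the
update of \texttt{total} on PE 0 is only an example workload) looks
like this:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Global lock usage sketch}]
#include <shmem.h>

long mutex = 0;   /* symmetric lock variable; must start at 0 */
long total = 0;   /* symmetric, protected by the lock */

void
add_to_total (long value)
{
  shmem_set_lock (&mutex);              /* claim the global lock   */
  long t = shmem_long_g (&total, 0);    /* read PE 0's copy        */
  shmem_long_p (&total, t + value, 0);  /* write the updated value */
  shmem_quiet ();                       /* complete the put ...    */
  shmem_clear_lock (&mutex);            /* ... before releasing    */
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}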
\subsection{Barrier and broadcast}
The initial version is naive, making the root of the broadcast a
bottleneck. This is partly intentional, to allow a student at UH to
explore better algorithms and work out how to demonstrate and document
the improvements. We would like to collect some locality information
inside the library to help decide communication order inside these
algorithms: PEs that differ in rank by large amounts are likely to be
further away topologically too, so by sending to more distant PEs
first, we can stagger the network traffic and balance the latencies
better. A proper measurement of ``distance'' is needed here. ``hwloc''
provides a per-system distance metric in NUMA terms. A simple
extension could e.g.\ just multiply the distance by some constant when
moving off-node to penalize network traffic.
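For reference, a minimal broadcast invocation (standard OpenSHMEM
calls; the exact names of the \texttt{pSync} sizing and fill constants
depend on the specification version, with older headers prefixing them
with an underscore) is:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Broadcast from PE 0 to all PEs}]
#include <shmem.h>

long pSync[SHMEM_BCAST_SYNC_SIZE];   /* symmetric */
long src[4], dst[4];                 /* symmetric */

int
main (void)
{
  shmem_init ();

  for (int i = 0; i < SHMEM_BCAST_SYNC_SIZE; i += 1)
    pSync[i] = SHMEM_SYNC_VALUE;
  shmem_barrier_all ();   /* every PE must initialize pSync first */

  /* broadcast four 64-bit words from PE 0 to all other PEs */
  shmem_broadcast64 (dst, src, 4, 0, 0, 0, shmem_n_pes (), pSync);

  shmem_barrier_all ();
  return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}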
\subsection{Collects}
The directories \texttt{src/fcollect} and \texttt{src/collect}
implement the collector routines (concatenating arrays on a set of PEs
into a target array on all of those PEs).
\begin{description}
\item[fcollect] is pretty easy since all the PEs must contribute the
same amount of data. This means we can just pre-compute where each PE
writes to its targets.
\item[collect] is harder because each PE can write a different
amount. We considered two ways of handling this:
\begin{enumerate}
\item initial exchange of sizes ``from the left'' so each PE can
compute its write locations; then same as fcollect
\item wavefront: PEs wait for notification from PEs before them in the
set (lower numbered). This passes the offsets across the set.
\end{enumerate}
We used \#2. \#1 potentially generates a network storm as all PEs wait
to work out where to write and then all write at once; \#2 staggers the
offset notification with a wave of writes moving up the PE numbers.
\end{description}
\subsection{Reductions}
Reductions coalesce data from a number of PEs into either a single
variable or array on all participating PEs. The coalescing involves
some kind of arithmetic or logic operation (e.g.\ sum, product,
exclusive-or). The current implementation is probably naive, using gets. A version with
puts that can overlap communication and the computation of the
reduction operation should be more scalable. However, the code is
rather compact and all ops use the same template. A future version of
\openshmem may add user-defined reductions, and in fact the framework
for this is already in place: all that is needed is a specification of
the \openshmem API.
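As an example of the common template from the caller's side (standard
OpenSHMEM calls; the work and synchronization array constants are the
specification's, with older headers prefixing them with an
underscore), a sum reduction over all PEs looks like:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Sum reduction across all PEs}]
#include <shmem.h>

#define N 8

long src[N], dst[N];   /* symmetric */
/* at least max(N/2+1, SHMEM_REDUCE_MIN_WRKDATA_SIZE); over-sizing is harmless */
long pWrk[N/2 + 1 + SHMEM_REDUCE_MIN_WRKDATA_SIZE];
long pSync[SHMEM_REDUCE_SYNC_SIZE];

int
main (void)
{
  shmem_init ();

  for (int i = 0; i < SHMEM_REDUCE_SYNC_SIZE; i += 1)
    pSync[i] = SHMEM_SYNC_VALUE;
  for (int i = 0; i < N; i += 1)
    src[i] = shmem_my_pe ();
  shmem_barrier_all ();

  /* element-wise sum of "src" across all PEs; result appears on every PE */
  shmem_long_sum_to_all (dst, src, N, 0, 0, shmem_n_pes (), pWrk, pSync);
  return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}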
\subsection{Address and PE Accessibility}
\openshmem allows us to test whether PEs are currently reachable, and
whether addresses on remote PEs are addressable. GASNet is used to
``ping'' the remote PE and then we wait for an ``ack'' with a
configurable timeout. It remains to be seen how useful this is, and
whether it can be used for future fault tolerance purposes.
For now, the implementation simply says ``yes'' to this query, but in
the future we will add some real tests, e.g.\ for fault tolerance.
Some infrastructure is already there for this, but it isn't quite doing
what was intended.
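From the user's point of view the queries look like this (standard
OpenSHMEM calls, shown only for illustration):
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={PE and address accessibility queries}]
#include <stdio.h>
#include <shmem.h>

long x;   /* symmetric */

int
main (void)
{
  shmem_init ();

  int pe = (shmem_my_pe () + 1) % shmem_n_pes ();

  if (shmem_pe_accessible (pe) && shmem_addr_accessible (&x, pe))
    printf ("PE %d and its copy of x are reachable\n", pe);
  return 0;
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}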
\subsection{Tracing Facility}
This library contains \textbf{trace points} with categorized
messages. These are listed in section~\ref{sec:Environment-Variables}.
A high-resolution clock is maintained to timestamp such messages.
Numerically sorting the output on the first field can thus help
understand the order in which events happened.
\subsection{C++}
The C++ interface is basically the C one. There is one point of
contention, namely complex numbers. The SGI documentation refers only
to the use of C99 ``complex'' modifiers, not to C++'s
\texttt{complex<T>}. The use of complex number routines (e.g.\
reductions) in C++ is thus not clearly specified.
\subsection{Fortran}
The Fortran interface is very similar to that of C. The names of
various routines are different to accommodate the various type
differences, e.g.\ \texttt{shmem\_integer\_put()} instead of
\texttt{shmem\_int\_put()}.
The biggest difference is in the symmetric memory management routines.
These have completely different names and parameters compared to the C
interface.
The \openshmem implementation handles Fortran with a very thin wrapper
on top of C. Mostly this involves catching Fortran's pass-by-reference
variables and dereferencing them in the underlying C call.
The main development has been on a CentOS platform with GNU
4.1.2-redhat. There seem to be some issues with this compiler's
handling of Cray pointers: even the simplest programs (no \openshmem
content at all) produce a segmentation fault. Later versions
(e.g.\ 4.5.0 and above) behave better.
\section{Undefined Behavior}
Many routines are currently specified only in terms of ``correct''
behavior. What happens when something goes wrong is not always
specified. This section attempts to set out a few of these scenarios:
\begin{itemize}
\item put to PE out of range: suppose we do a put to ``right
neighbor'' (pe + 1). The highest-numbered PE will attempt to
communicate with a PE that does not exist.
\item library not initialized: virtually all \openshmem routines will
have major problems if the library has not been
initialized. Implementations can handle this situation in different
ways.
\end{itemize}
\section{Configuration and Installation}
There is a top-level \texttt{configure} script that is a simplified
version of the GNU autotools. This script will eventually become a full
GNU autotools setup and will do many more feature tests. So the usual procedure
applies:
\begin{lstlisting}
$ /path/to/source/configure [--options...]
$ make
$ make install
\end{lstlisting}
The \texttt{configure} script accepts a
% make sure we get a real double-dash
\texttt{-{}-help} option that lists all the various settings. Note
that you can run \texttt{configure} from a separate build directory,
or directly in the source tree.
\section{Environment Variables\label{sec:Environment-Variables}}
The behavior of the \openshmem library can be controlled via a number
of environment variables. For SGI compatibility reasons, we support
the ``SMA'' variables and our own new ones.
\subsection{SGI Environment Variables}
\subsubsection*{\texttt{SMA\_VERSION}}
Print the library version at start-up.
\subsubsection*{\texttt{SMA\_INFO}}
Print helpful text about all these environment variables.
\subsubsection*{\texttt{SMA\_SYMMETRIC\_SIZE}}
Number of bytes to allocate for symmetric heap.
\subsubsection*{\texttt{SMA\_DEBUG}}
Enable debugging messages.
\subsection{Reference Implementation Variables}
\subsubsection*{\texttt{SHMEM\_LOG\_LEVELS}}
A comma, space, or semi-colon separated list of logging/trace
facilities to enable debugging messages. The facilities currently
include the case-insensitive names:
\begin{table}[h]
\begin{tabular}{|l|l|}
\hline
\texttt{FATAL} & something unrecoverable happened, abort \tabularnewline
\hline
\hline
\texttt{DEBUG} & used for debugging purposes \tabularnewline
\hline
\texttt{INFO} & something interesting happened \tabularnewline
\hline
\texttt{VERSION} & show this library version \tabularnewline
\hline
\texttt{SYMBOLS} & to inspect the symbol table information \tabularnewline
\hline
\hline
\texttt{INIT} & set-up of the program \tabularnewline
\hline
\texttt{FINALIZE} & tear-down of the program \tabularnewline
\hline
\texttt{ATOMIC} & remote atomic operations \tabularnewline
\hline
\texttt{AUTH} & when something is attempted but not allowed \tabularnewline
\hline
\texttt{BARRIER} & about barrier operations \tabularnewline
\hline
\texttt{BROADCAST} & about broadcast operation \tabularnewline
\hline
\texttt{REDUCTION} & about reduction operations \tabularnewline
\hline
\texttt{CACHE} & cache flushing operations \tabularnewline
\hline
\texttt{COLLECT} & about collect and fcollect operation \tabularnewline
\hline
\texttt{QUIET} & tracing network quiet events \tabularnewline
\hline
\texttt{FENCE} & tracing network fence events \tabularnewline
\hline
\texttt{LOCK} & related to setting, testing and clearing locks \tabularnewline
\hline
\texttt{MEMORY} & symmetric memory information \tabularnewline
\hline
\texttt{NOTICE} & important event, but non-fatal (see below) \tabularnewline
\hline
\texttt{SERVICE} & related to the network service thread \tabularnewline
\hline
\texttt{PROFILING} & for the PSHMEM profiling interface \tabularnewline
\hline
\texttt{MODULES} & loadable routines \tabularnewline
\hline
\end{tabular}
\end{table}
\subsubsection*{\texttt{SHMEM\_LOG\_FILE}}
A filename to which to write log messages. All PEs append to this
file. The default is for all PEs to write to standard error. (Per-PE
log files might be an interesting addition.)
\subsubsection*{\texttt{SHMEM\_SYMMETRIC\_HEAP\_SIZE}}
The number of bytes to allocate for the symmetric heap area. Binary
unit modifiers ``K'' (kibi, $2^{10}$), ``M'' (mebi, $2^{20}$), and so on can be used. The default is 2G.
\subsubsection*{\texttt{SHMEM\_BARRIER\_ALGORITHM}}
The version of the barrier to use. The default is ``naive''. This is
designed to allow people to plug in and test other variants easily.
\subsubsection*{\texttt{SHMEM\_BARRIER\_ALL\_ALGORITHM}}
As for \texttt{SHMEM\_BARRIER\_ALGORITHM}, but separating these two
allows us to optimize if e.g.\ hardware has special support for global
barriers.
\subsubsection*{\texttt{SHMEM\_PE\_ACCESSIBLE\_TIMEOUT}}
The number of seconds to wait for PEs to reply to accessibility
checks. The default is 1.0 (i.e.\ the value may be fractional).
\section{Alternate collective algorithms}
A module system coupled with the above environment variables allows
runtime decisions to be made about which algorithm should be used for
different collective routines.
\subsection{Writing a New Collective Algorithm}
To add a new implementation of a broadcast, barrier or collect, the
following steps are required:
\begin{enumerate}
\item Add a source file to the appropriate directory.
\item Write the 32- and 64-bit routines.
\end{enumerate}
\noindent
An outline file is:
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Outline of broadcast implementation}]
#include <...>
void
shmemi_broadcast32_clever (...)
{
...
}
void
shmemi_broadcast64_clever (...)
{
...
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}
``shmemi\_'' is the internal implementation namespace.
The names of the new routines are added to the ``impl.h'' file in the
sub-directory, and the loader file in the collective's sub-directory
is configured to understand the new name and to assign the function
pointers accordingly.
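The loader step can be sketched as follows; the code is purely
illustrative (the actual loader file and the ``naive'' routine names
may differ), and the environment-variable plumbing is assumed rather
than shown.
\vspace{0.1in}
\begin{minipage}{0.75\linewidth}
\begin{lstlisting}[language=C,caption={Function-pointer dispatch sketch (illustrative)}]
#include <string.h>

extern void shmemi_broadcast32_clever ();   /* from the new source file */
extern void shmemi_broadcast64_clever ();
extern void shmemi_broadcast32_naive ();    /* illustrative default */
extern void shmemi_broadcast64_naive ();

typedef void (*bcast_fn) ();

static bcast_fn broadcast32 = shmemi_broadcast32_naive;
static bcast_fn broadcast64 = shmemi_broadcast64_naive;

void
register_broadcast (const char *name)  /* from the algorithm environment variable */
{
  if (strcmp (name, "clever") == 0) {
    broadcast32 = shmemi_broadcast32_clever;
    broadcast64 = shmemi_broadcast64_clever;
  }
  /* unknown names fall back to the naive versions */
}
\end{lstlisting}
\end{minipage}
\vspace{0.1in}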
\section{Compiling and Running Programs}
The SGI SHMEM is provided as part of the Message-Passing Toolkit (MPT)
in the
\href{http://www.sgi.com/products/software/propack.html}{ProPack}
suite. Compilation uses a standard C, C++ or Fortran compiler
(e.g.\ GNU, Intel) and links against the SMA and MPI libraries.
In order to abstract the compilation and launching process for
\openshmem we have provided four wrapper programs. There is no
requirement to provide anything like this; it is merely a convenience.
\begin{enumerate}
\item \texttt{oshcc}: for compiling and linking C programs.
\item \texttt{oshc++/oshcxx}: for compiling and linking C++ programs.
\item \texttt{oshfort}: for compiling and linking F77/F90 programs.
\item \texttt{oshrun}: to launch programs.
\end{enumerate}
The similarity to the style of wrappers found in many MPI
implementations is obvious and intentional. Currently these wrappers
handle a few special situations: e.g.\ oshcc/oshcxx and oshfort detect
when they should not do linking and stop the underlying compiler from
complaining about link options that are present but unused. The compiler scripts
are generated from a common template.
The run wrapper currently detects which GASNet conduit is being used
and sets up the environment accordingly to launch the program
\footnote{Not sure if this is the best place to do this check, or if
the build process should work this out in advance to streamline the
installed code.}.
\subsection{pkg-config}
An interface to pkg-config has also been added: the package name is
``openshmem''. So you can compile and link separately e.g.\ with
\begin{lstlisting}
$ gcc $(pkg-config --cflags openshmem) program.c $(pkg-config --libs openshmem)
\end{lstlisting}
The pkgconfig file can be found in \emph{installroot/lib/pkgconfig}.
\section{Future Plans}
Ideas for extensions to SHMEM that could go into \openshmem need to be
solicited from, and evaluated by, the SHMEM user and vendor community. A
decision process will determine which ideas are eventually
implemented. The library that this document refers to is hopefully a
good platform for these developments.
A number of extensions have already been proposed, and in fact have
been implemented in other SHMEM libraries. These include (but are not
limited to)
\begin{itemize}
\item thread-safety: providing thread-safe SHMEM routines that can
operate in threaded environments, e.g.\ alongside OpenMP;
\item non-blocking puts: put routines that return per-communication
handles to the caller. The handles can be tested later for completion
(present in Cray and Quadrics SHMEMs); this extension may require
revamping the way implicit handles are used in GASNet, since we will
be generating calls with explicitly generated handles. Building a
handle pool on which to synchronize later should take care of this.
\item locality: exposing information about topology to the library
and/or its API;
\item Fortran module, C++ API: provide better language support.
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7772985244,
"avg_line_length": 39.2802547771,
"ext": "tex",
"hexsha": "4db237284985b89763a7d2ebe8fe0c8f3caa16bd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "59b370dc564cee4f6337f33a0bcb0448e7533855",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "openshmem-org/openshmem-async",
"max_forks_repo_path": "doc/openshmem-implementation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "59b370dc564cee4f6337f33a0bcb0448e7533855",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "openshmem-org/openshmem-async",
"max_issues_repo_path": "doc/openshmem-implementation.tex",
"max_line_length": 85,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "59b370dc564cee4f6337f33a0bcb0448e7533855",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "openshmem-org/openshmem-async",
"max_stars_repo_path": "doc/openshmem-implementation.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-12T04:17:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-03-22T21:59:41.000Z",
"num_tokens": 7337,
"size": 30835
} |
\chapter{Introduction}
This project aims at creating a simple model of our solar system using {\em MATLAB}. While putting emphasis on the visuals, we kept the mathematical aspect simple.\\
Trajectories are not calculated and the orbits are perfect circles, because the difference would be hardly noticeable. The sizes and distances are mostly to scale: we use a logarithmic scale for the sizes and a linear, though very small, scale for the distances.
\pagebreak | {
"alphanum_fraction": 0.8004115226,
"avg_line_length": 97.2,
"ext": "tex",
"hexsha": "2fd3cd14a425aa27a20dd5b1b3a548f8dcf43a27",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0a4f78a5deadcd8dba0da63ae8370aa19a8857b9",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "polarcode/SolarSystemSimulation",
"max_forks_repo_path": "docs/inputfiles/Introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0a4f78a5deadcd8dba0da63ae8370aa19a8857b9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "polarcode/SolarSystemSimulation",
"max_issues_repo_path": "docs/inputfiles/Introduction.tex",
"max_line_length": 285,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0a4f78a5deadcd8dba0da63ae8370aa19a8857b9",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "polarcode/SolarSystemSimulation",
"max_stars_repo_path": "docs/inputfiles/Introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 102,
"size": 486
} |
\documentclass{article}
\usepackage[fancyhdr,pdf]{latex2man}
\input{common.tex}
\begin{document}
\begin{Name}{3}{unw\_apply\_reg\_state}{David Mosberger-Tang}{Programming Library}{unw\_apply\_reg\_state}unw\_apply\_reg\_state -- apply a register state update to a cursor
\end{Name}
\section{Synopsis}
\File{\#include $<$libunwind.h$>$}\\
\Type{int}
\Func{unw\_apply\_reg\_state}(\Type{unw\_cursor\_t~*}\Var{cp},
\Type{void~*}\Var{reg\_states\_data});\\
\section{Description}
The \Func{unw\_apply\_reg\_state}() routine updates the register values
of a cursor according to the instructions in \Var{reg\_states\_data},
which have been obtained by calling \Var{unw\_reg\_states\_iterate}.
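The following minimal sketch (error handling omitted) illustrates the
intended pairing: the callback passed to
\Func{unw\_reg\_states\_iterate}() saves the opaque register-state
blob, which can later be replayed onto a cursor positioned in the same
procedure. The callback's signature is assumed to match the one
documented in \SeeAlso{unw\_reg\_states\_iterate(3)}.
\begin{verbatim}
#include <libunwind.h>
#include <stdlib.h>
#include <string.h>

/* Callback for unw_reg_states_iterate(): keep a private copy of the
   register-state blob so it can be replayed later. */
static int
save_reg_state (void *token, void *reg_states_data,
                size_t reg_states_data_size,
                unw_word_t start_ip, unw_word_t end_ip)
{
  void **saved = (void **) token;
  *saved = malloc (reg_states_data_size);
  memcpy (*saved, reg_states_data, reg_states_data_size);
  return 0;
}

/* Later, with a cursor positioned inside the same procedure: */
static void
replay (unw_cursor_t *cursor, void *saved)
{
  unw_apply_reg_state (cursor, saved);
}
\end{verbatim}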
\section{Return Value}
On successful completion, \Func{unw\_apply\_reg\_state}() returns 0.
Otherwise the negative value of one of the error-codes below is
returned.
\section{Thread and Signal Safety}
\Func{unw\_apply\_reg\_state}() is thread-safe. If cursor \Var{cp} is
in the local address-space, this routine is also safe to use from a
signal handler.
\section{Errors}
\begin{Description}
\item[\Const{UNW\_EUNSPEC}] An unspecified error occurred.
\item[\Const{UNW\_ENOINFO}] \Prog{Libunwind} was unable to locate
unwind-info for the procedure.
\item[\Const{UNW\_EBADVERSION}] The unwind-info for the procedure has
  a version or format that is not understood by \Prog{libunwind}.
\end{Description}
In addition, \Func{unw\_apply\_reg\_state}() may return any error
returned by the \Func{access\_mem}() call-back (see
\Func{unw\_create\_addr\_space}(3)).
\section{See Also}
\SeeAlso{libunwind(3)},
\SeeAlso{unw\_reg\_states\_iterate(3)}
\section{Author}
\noindent
David Mosberger-Tang\\
Email: \Email{[email protected]}\\
WWW: \URL{http://www.nongnu.org/libunwind/}.
\LatexManEnd
\end{document}
| {
"alphanum_fraction": 0.7513842746,
"avg_line_length": 28.21875,
"ext": "tex",
"hexsha": "c67cc3ebfafdeb86d203e52a1d98c19afc40fb2b",
"lang": "TeX",
"max_forks_count": 3629,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T21:52:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-11-25T23:29:16.000Z",
"max_forks_repo_head_hexsha": "72bee25ab532a4d0636118ec2ed3eabf3fd55245",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pyracanda/runtime",
"max_forks_repo_path": "src/coreclr/pal/src/libunwind/doc/unw_apply_reg_state.tex",
"max_issues_count": 37522,
"max_issues_repo_head_hexsha": "72bee25ab532a4d0636118ec2ed3eabf3fd55245",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T23:58:30.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-11-25T23:30:32.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "pyracanda/runtime",
"max_issues_repo_path": "src/coreclr/pal/src/libunwind/doc/unw_apply_reg_state.tex",
"max_line_length": 173,
"max_stars_count": 12278,
"max_stars_repo_head_hexsha": "72bee25ab532a4d0636118ec2ed3eabf3fd55245",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "pyracanda/runtime",
"max_stars_repo_path": "src/coreclr/pal/src/libunwind/doc/unw_apply_reg_state.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T21:12:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-29T17:11:33.000Z",
"num_tokens": 547,
"size": 1806
} |
\section{Specifications}
\subsection{Raw data size}
\begin{itemize}
\item HSC-RC2: 432 exposures; about 3TB.
\item DC2's test-med-1 dataset: 345 exposures; about 1TB.
\item all DC2 run 2.2i is about 10 TB.
\end{itemize}
\subsection{Butler repository}
A Postgres database will be used as the Butler Registry.
The Butler datastore can be either an object store or a POSIX shared file system; the datastore should be accessible from the execution nodes.
\subsection{CPU cycles to the processing}
Less than 1000 node-hours, based on old data: \url{https://confluence.lsstcorp.org/display/~sthrush/Node+Utilization+for++HSC-RC2+Reprocessing+Jobs}.
More current data is needed: how much CPU time was used for the more recent NCSA runs?
\subsection{Medium-sized DRP workflow}
A standard DRP HSC-RC2 or DC2-test-med-1 workflow includes multiple (around 6?) BPS submissions run as a sequence.
Definitions of the included Science Pipelines tasks are part of the stack release
e.g. \url{https://github.com/lsst/obs_subaru/blob/master/pipelines/DRP.yaml}.
Currently, the submission orchestration is handled manually by humans; the new campaign tooling or tentative campaign prototype should do this to allow automatic submissions.
\subsection{Rubin software stack}
Weekly stack releases need to be provided.
For execution,
PanDA should soon be able to use Rubin's official stack Docker releases
(not available today, but hopefully soon).
For developers, an installation on the filesystem may be needed to allow version mix-and-match (developer processing is a stretch goal).
\subsection{Workflow management system}
This will use the PanDA system and the PanDA plugin as implemented in the \texttt{ctrl\_bps} package.
The frontend is part of the Rubin \texttt{lsst\_distrib} software stack.
The PanDA system provides a monitoring page, but customization is needed if this is the only monitoring page.
\subsection{Submission management}
Automatic tooling does not exist as of today.
A mechanism to manage BPS submissions is needed, including tools for tracking submissions.
The new campaign tooling from DP0.2 may be used here.
A first prototype at SLAC can be a hack-around for a known dataset and workflow, without the new campaign tooling.
For automatic submission, the triggering mechanism may be cron jobs on a submission machine, kubernetes cron jobs, Apache Airflow, jenkins-like triggering, GH Action, and so on.
This may be set up on a regular schedule to generate the parameters and submit to PanDA via BPS.
Need to be able to easily and accurately track the status of submissions.
\subsection{Policy for output data storage}
In the short term all outputs are saved.
Later, a policy for removing old runs is wanted.
Steve Pietrowicz is developing Butler expiration tooling for the OODS; it might be applicable here.
\subsection{PanDA DOMA authentication}
It may be better for DRP to run as a service account that operators can sudo to or otherwise get credentials for, rather than have individual users with their own database and object store accounts/credentials. This will avoid having releases (which may have parts run by different operators, or no operator at all if triggered by cron) be mishmashes from different users.
PanDA DOMA authorization lasts 24 hours. Another issue is the access permissions for the area in the repo where the outputs are written: the service account's umask isn't quite right to allow others in the same group to access those directories. The current SLAC setup needs adjustments.
\subsection{Profiling (bonus)}
Jim Chiang's profiling tools, created for DESC DC2 at NERSC, may be applicable here.
\subsection{PanDA as a developer service (bonus)}
The next step is to allow running workflows with a non-released stack.
This might need a shared stack on a POSIX shared filesystem that all execution nodes can access,
or a standard CVMFS \texttt{lsst\_distrib} distribution plus the developer's own ticket branch on disk.
\section{Needs}
\begin{itemize}
\item One or two maintainers of the Butler repos. They put together the Butler repos and ensure the readiness and correctness of the Butler repos. Occasional updates to the repos may be needed. Data Curation group?
\item PanDA setup: SLAC as a PanDA client site. Submission environment.
PanDA needs to be able to use Rubin's standard weekly stack releases without Sergey building images each time.
\item submission mechanism: (1) the ability to submit automatically and programmatically;
(2) authentication for PanDA.
\item submission tracking tooling, including status monitoring of a sequence of submissions. The PanDA monitoring page as-is today is not sufficient; developers need to know the relevant overall run status and failures a lot more easily than the current page and developers aren't expected to know PanDA specifics.
\end{itemize}
\section{Smaller test}
An automatic weekly pipeline\_check or ci\_hsc run.
This needs a fresh registry every time, creating the repo from scratch.
Currently at NCSA the ci\_hsc run is done weekly, manually.
| {
"alphanum_fraction": 0.8008072654,
"avg_line_length": 55.6741573034,
"ext": "tex",
"hexsha": "11d4ef8210f3a95afd259cbd705e6aa270f448a8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "37db6b6256d480ebfd1a8b259657f0f0501d4e82",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "lsst/rtn-024",
"max_forks_repo_path": "spec.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "37db6b6256d480ebfd1a8b259657f0f0501d4e82",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "lsst/rtn-024",
"max_issues_repo_path": "spec.tex",
"max_line_length": 373,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "37db6b6256d480ebfd1a8b259657f0f0501d4e82",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "lsst/rtn-024",
"max_stars_repo_path": "spec.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1117,
"size": 4955
} |
\documentclass[9pt,twocolumn,twoside]{../../styles/osajnl}
\usepackage{fancyvrb}
\journal{i524}
\title{Automated Sharded MongoDB Deployment and Benchmarking for Big Data Analysis}
\author[1,*]{Mark McCombe}
\author[1,*]{Gregor von Laszewski}
\affil[1]{School of Informatics and Computing, Bloomington, IN 47408, U.S.A.}
\affil[*]{Corresponding author: [email protected]}
\dates{Project: S17-IO-3012, \today}
\ociscodes{MongoDB, Cloud Computing, Ansible, Python, Cloudmesh Client, OpenStack, I524}
% replace this with your url in github/gitlab
\doi{Report: \url{https://github.com/cloudmesh/sp17-i524/tree/master/project/S17-IO-3012/report/report.pdf}\\
Code: \url{https://github.com/cloudmesh/sp17-i524/tree/master/project/S17-IO-3012/code}}
\begin{abstract}
Using Python, Ansible, Bash Shell, and Cloudmesh Client a fully
automated process is created for deploying a configurable MongoDB
sharded cluster on Chameleon, FutureSystems, and Jetstream cloud
computing environments. A user runs a single Python program which
configures and deploys the environment based on parameters specified
for numbers of Config Server Replicas, Mongos Instances, Shards, and
Shard Replication. The process installs either MongoDB version 3.4 or
3.2 as requested by the user. Additionally, functionality exists to
run benchmarking tests for each deployment, capturing statistics in a
file as input for python visualization programs, the results of which
are displayed in this report. These reports depict the impact of
MongoDB version and degrees of sharding and replication on
performance. Key performance findings regarding version, sharding, and
replication are abstracted from this analysis. As background,
technologies and concepts key to the deployment and benchmarking, such
as MongoDB, Python, Ansible, Cloudmesh Client, and OpenStack are
examined.
\end{abstract}
\setboolean{displaycopyright}{true}
\begin{document}
\maketitle
\section{Introduction}
As the final project for I524, Big Data Software and Projects, Spring
2017, a Python program invoking Bash shell scripts, Ansible playbooks,
and Cloudmesh Client commands has been created to fully automate a
configurable deployment of a MongoDB sharded cluster on various
clouds. Chameleon Cloud, FutureSystems, and Jetstream are the
currently supported cloud environments. The scripts have been
developed and tested on an Ubuntu 16.04 LTS (Xenial Xerus) Virtual
Machine running in Virtual Box. Using the Python cmd command line
interface, the program project.py accepts parameters for deployment
cloud, MongoDB version, Config Server replication, number of Mongos
instances, number of Data Shards, and Shard replication.
Also via project.py, automated benchmarking tests can be run. Tests
were performed with various sharding and replication configurations to
assess their impact on performance. Additionally, tests were run
against MongoDB versions 3.4 and 3.2 to uncover any performance
differences between these versions. Performance results are captured
and graphed using Python's matplotlib, the results of which are
displayed and analyzed in this report.
\section{Infrastructure}
Three clouds were selected for deployment: Chameleon Cloud,
FutureSystems (also referred to as Kilo in some sections of this
document), and Jetstream. In our automated deployment and benchmarking
process, the cloud name is passed as a parameter to the deploy
function of the main project.py script and a customized version of
MongoDB is deployed to the selected cloud.
\subsection{OpenStack}
Chameleon Cloud, FutureSystems and Jetstream all utilize OpenStack.
OpenStack is a free, open source cloud computing platform, primarily
deployed as IaaS \cite{www-wikiOpenStack}. OpenStack was created in
2010 as joint project between NASA and Rackspace that is currently
managed by the OpenStack Foundation \cite{www-wikiOpenStack}. Open
Stack is open source software released under the Apache 2.0 license
\cite{www-openStackFAQ}.
Open Stack has various components, also known by code names
\cite{www-wikiOpenStack}. Examples of OpenStack components (and code
names) are Compute (Nova), Networking (Neutron), Block Storage
(Cinder), Identity (Keystone), Image (Glance), Object Storage (Swift),
Dashboard (Horizon), Orchestration (Heat), Workflow (Mistral),
Telemetry (Ceilometer), OpenStack Telemetry (Ceilometer), Database
(Trove), Elastic Map Reduce (Sahara), Bare Metal (Ironic), Messaging
(Zaqar), Shared File System (Manila), DNS (Designate), Search
(Searchlight), and Key Manager (Barbican) \cite{www-wikiOpenStack}.
\subsection{Chameleon Cloud}
Chameleon is funded by the National Science Foundation and provides
computing resources to the open research community. The Chameleon
testbed is hosted at the Texas Advanced Computing Center and the
University of Chicago. Chameleon provides resources to facilitate
research and development in areas such as Infrastructure as a Service,
Platform as a Service, and Software as a Service. Chameleon provides
both an OpenStack Cloud and Bare Metal High Performance Computing
Resources \cite{www-chamAbout}.
\subsection{FutureSystems}
FutureSystems is a computing environment run by Indiana University
that supports educational and research activities
\cite{www-futureSystems}. FutureSystems is directed by Geoffrey C. Fox
and Gregor von Laszewski, both of Indiana University
\cite{www-futureSystems}. For our deployment, we utilize the OpenStack
Kilo Cloud, running on the India machine. Because the environment is
by default referred to as Kilo in the Cloudmesh documentation and
setup file, it is referred to as both FutureSystems and Kilo in
subsequent sections of this document and the accompanying diagrams.
\subsection{Jetstream}
Jetstream is a cloud computing environment implemented by many
academic and industry partners including the University of Texas at
Austin's Texas Advanced Computing Center (TACC), the Computation
Institute at the University of Chicago, the University of Arizona, the
University of Texas San Antonio, Johns Hopkins University, Penn State
University, Cornell University, the University of Arkansas at Pine
Bluff, the National Snow and Ice Data Center (NSIDC), the Odum
Institute at the University of North Carolina, the University of
Hawaii, and Dell \cite{www-jetstream1}. At Indiana University,
leadership is provided by the Pervasive Technology Institute with
involvement from several members of the School of Informatics and
Computing including Beth Plale, Katy Borner, and Volker Brendel
\cite{www-jetstream2}.
\subsection{Cloud Hardware Comparison}
Table \ref{tab:cloud-comparison} shows a comparison of key computing
resources on Chameleon, FutureSystems, and Jetstream cloud
environments.
\begin{table}[htbp]
\centering
\caption{\bf Cloud Hardware Specification Comparison \cite{www-chamHardware} \cite{www-kiloHardware} \cite{www-jetHardware}}
\begin{tabular} {| c | c | c | c |}
\hline
& FutureSystems & Chameleon & Jetstream \\ [0.5ex]
\hline
CPU & Xeon E5-2670 & Xeon X5550 & Haswell E-2680 \\
\hline
cores & 1024 & 1008 & 7680 \\
\hline
speed & 2.66GHz & 2.3GHz & 2.5GHz\\
\hline
RAM & 3072GB & 5376GB & 40TB\\
\hline
storage & 335TB & 1.5PB & 2 TB\\ [1ex]
\hline
\end{tabular}
\label{tab:cloud-comparison}
\end{table}
\section{Python/cmd}
Python is utilized in two portions of the automated process. First,
the main script, project.py, is a Python program that utilizes the cmd
module to provide a simple command line interface \cite{www-cmd}
accepting parameters for deployment configuration. project.py also
provides other functionality such as cluster deletion, benchmarking,
benchmarking summarization and reporting, and data distribution
reporting. Second, several visualization programs for benchmarking
analysis are written in Python, utilizing the matplotlib and pandas
modules.
\section{Ansible}
Ansible is open source software typically used to automate software
provisioning and configuration management. Ansible uses Playbooks
specified in YAML file format to accomplish this goal. Ansible runs on
Linux/Unix and requires Python \cite{www-wikiAnsible}.
In our deployment, virtual machines are created using Cloudmesh Client
cluster commands. Once they are created, all direct cloud interaction
for the MongoDB software installation and environment customization
and setup is performed via Ansible playbooks.
\section{Cloudmesh Client}
The Cloudmesh Client toolkit is an open source client interface that
standardizes access to various clouds, clusters, and workstations
\cite{www-cloudmesh}. Cloudmesh Client is a python based application
developed by Gregor von Laszewski and other collaborators, primarily
at Indiana University.
In the deployment, Cloudmesh Client is used to handle most interaction
with the Virtual Machines in the clouds. Cloudmesh Client provides
functionality in three main areas: Key Management, OpenStack Security,
and virtual machine management. For key management, Cloudmesh's key
add and upload commands simplify secure interaction with the cloud
environments. For OpenStack security, Cloudmesh's secgroup commands
allow new security rules to be added and uploaded to the cloud.
Virtual machine management is performed with Cloudmesh's cluster
functionality, which allows easy creation and deletion of virtual
machines and communication between them.
Cloudmesh Client simplifies and standardizes interaction with the
cloud for these tasks. This allows us to more easily port the
deployment to additional clouds that are supported by Cloudmesh.
Furthermore, by encapsulating the logic necessary to perform these
tasks we are shielded from changes in interfaces made by individual
clouds.
\section{MongoDB}
MongoDB is a popular open source, document oriented noSQL database. It
stores documents in JSON-like schema-less formats called collections
\cite{www-MonWiki}. DBEngines ranks MongoDB as the most popular noSQL
store and as the fifth most popular Database Management System overall
\cite{www-dbEngines}.
\subsection{Architecture}
A sharded cluster in MongoDB has three main components, all of which
will be implemented in our deployment:
\begin{itemize}
\item Config Servers - hold configurations setting and metadata
\item Mongos - a query routing interface between applications and the cluster
\item Shards - subsets of data
\end{itemize}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/sharded-cluster-production-architecture.png}}
\caption{Sharded MongoDB Architecture \cite{www-mongoComponents}}
\label{fig:mongodb-arch}
\end{figure}
Figure \ref{fig:mongodb-arch} depicts a sharded MongoDB environment
with two Mongos instances and two data Shards. The replica sets shown
for both Config Servers and Shards may have any number of replicas
within the set.
\subsection{Config Servers}
Config Servers stored metadata for sharded MongoDB clusters. This
metadata includes information about the state and structure of the
data and components of the sharded cluster \cite{www-mongoConfig}.
Config Servers also contain authentication information. For example,
information about the key files used for internal authentication
between the nodes is stored in the Config Servers
\cite{www-mongoConfig}.
In production deployments, it is recommended for Config Servers to be
deployed in 3 member replica sets \cite{www-mongoComponents}. The
rationale behind a 3 member set is discussed in more detail in the
Replication subsection that follows.
In our deployment and benchmarking automation, the degree of
replication in the Config Server Replica Set is controlled by the third parameter
to the main project.py script. For example, specifying 1 will create a
Replica Set with one Config Server (no replication), specifying 3
will create a Replica Set with three Config Servers, and so on.
\subsection{Mongos Routers}
Mongos is a query routing service used in sharded MongoDB
configurations. Queries from applications go through Mongos, which
locates the data in the sharded cluster. The Mongos instances
accomplish this by reading and caching data from the Config Servers
\cite{www-mongoMongos}.
For applications with high performance or availability requirements
multiple Mongos instances may be ideal. In a high volume application,
spreading the routing load over multiple Mongos instances can benefit
performance. Additionally, multiple Mongos instances may increase
availability in the case where a Mongos instance fails
\cite{www-mongoConfig}.
In our deployment and benchmarking automation, the number of Mongos
instances is controlled by the fourth parameter to the main project.py
script. For example, specifying 1 will create one Mongos instance,
specifying 3 will create three Mongos instances, and so on.
\subsection{Shards}
Sharding, or distributing data across machines, is used by MongoDB to
support large data sets and provide high throughput
\cite{www-sharding}. Our deployment and benchmarking will test the
performance of various numbers of shards, measuring the performance
improvements associated with sharding in MongoDB.
Documents are distributed among the shards using a shard key. A
sharded collection must have one, and only one, shard key which must
have a supporting index \cite{www-sharding}. Since the shard key is
critical to performance and efficiency, particular care must be given
to shard key selection \cite{www-shardkey}. In our performance testing
a key was chosen that would distribute data relatively evenly across
the shards, but was not used in retrieving the data as the more costly
retrieval of not using an index provided a better test case.
In our deployment and benchmarking automation, Sharding is controlled
by the fifth parameter to the main project.py script. For example,
specifying 1 cause only 1 shard to be created, specifying 3 will cause
three shards to be created, and so on.
\subsection{Replication}
In databases, replication provides data redundancy leading to greater
availability and fault tolerance. Replication in MongoDB is achieved
via Replica Sets \cite{www-replication}. Replica Sets were implemented
in our deployment for both Config Servers and Shards.
Two key benefits provided by replication are redundancy and fault
tolerance. Each Replica in a Replica Set provides another copy of
data. Higher redundancy means more nodes can be lost without data
being lost. Higher numbers of Replicas in a set also increase fault
tolerance, which leads to increased availability. As a general rule, a
set will be able to tolerate faults while the majority of its nodes
are still available.
\begin{table}[htbp]
\centering
\caption{\bf Fault Tolerance by Replica Set Size \cite{www-mongoRepDep}}
\begin{tabular}{|c | c | c|}
\hline
Replica Members & Majority Needed & Fault Tolerance \\ [0.5ex]
\hline\hline
2 & 1 & 0 \\
\hline
3 & 2 & 1 \\
\hline
4 & 3 & 1 \\
\hline
5 & 3 & 2 \\
\hline
6 & 4 & 2\\
\hline
7 & 4 & 3\\
\hline
8 & 5 & 3\\ [1ex]
\hline
\end{tabular}
\label{tab:fault-tolerance}
\end{table}
As shown in Table \ref{tab:fault-tolerance}, odd numbers of members in
a replica set are a better choice for fault tolerance. For example,
both a 3 and a 4 member replica set can only tolerate one member failing while
maintaining availability. This is because a majority of the members
must be available to maintain availability. In a 3 replica set the
majority is 2, so it can tolerate 1 member failing. In a 4 replica
set, the majority is 3, so it can still only tolerate 1 member
failing. Increases in fault tolerance only occur when the next odd
numbered member of a replica set is added \cite{www-mongoRepDep}.
For production systems, a standard deployment is a 3 Replica Set
\cite{www-mongoRepDep}. A 3 replica set provides 3 copies of the data
for redundancy and fault tolerance if 1 member of the set were to
fail. In a situation where availability was of higher concern, a 5
replica set would provide 4 copies of the data for redundancy and
fault tolerance if 2 members of the set were to fail.
In our automated deployment and benchmarking process, the degree of
replication for Shards is controlled by the sixth parameter to the
main project.py script. For example, specifying 1 will create a
replica set per shard with only one copy of data (essentially no
replication, although technically we create a one member replica set),
specifying 3 will cause a replica set of with three copies to be
created, and so on.
\subsection{MongoDB Versions}
The most current version of MongoDB is version 3.4, which was released
on November 29, 2016. Based on an input parameter, our deployment will
install either version 3.4 or version 3.2, the prior version of
MongoDB. Many enhancements were made for version 3.4 impacting Sharded
Clusters, Replica Sets, Aggregation, Indexes, Views, Security, Tools,
and Platform Support. The complete list of 3.4 features and
enhancements can be found in the Release Notes
\cite{www-mongoRelease}.
In our automated deployment and benchmarking process, the version of
MongoDB installed is controlled by the second parameter to the main
project.py script. Specifying 34 will install version 3.4. Specifying
32 will install 3.2. These versions were selected as they are the two
most recent major versions of MongoDB and because they are the only
two compatible with Ubuntu 16.04 LTS (Xenial Xerus).
\subsection{Security}
There are two levels of security to consider in a sharded MongoDB
deployment: internal and external authentication.
In our deployment the various MongoDB components (config servers,
mongos instances, shards, and replicas) all reside on separate Virtual
Machines. These machines must be able to communicate with each other.
Two steps were necessary to enable this secure internal communication.
First, the ports used by MongoDB (27017, 27018, 27019, and 28017)
needed to be opened for communication. This was accomplished by adding
appropriate security group rules to the clouds through the Cloudmesh
client. Second, MongoDB requires internal authentication to be
performed with either key files or x.509 certificates
\cite{www-mongoAuth}. In our deployment, authentication is done with
key files. A new key file is automatically created for each deployment
and distributed to all of the virtual machines on the selected cloud.
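The key file itself is simply a long random string shared by every member of the deployment. A hedged sketch of how such a key file could be generated is shown below; the actual scripts may generate it differently, and the file name is illustrative.
\begin{verbatim}
# Sketch: generate a MongoDB key file for internal authentication.
# The deployment scripts may use a different mechanism; the file
# name below is illustrative only.
import base64, os

def write_keyfile(path="mongodb-keyfile"):
    # Up to 1024 base64 characters are allowed; 756 random bytes
    # encode to 1008 characters.
    key = base64.b64encode(os.urandom(756)).decode("ascii")
    with open(path, "w") as fh:
        fh.write(key + "\n")
    os.chmod(path, 0o600)  # mongod rejects overly open permissions

write_keyfile()
\end{verbatim}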
For external authentication, three users are created. The user
\emph{admin} is created with the role \emph{userAdminAnyDatabase}.
\emph{admin} performs administrative functions such as creating the
other users. The user \emph{cluster\_admin\_user} is created with the
role \emph{clusterAdmin}. \emph{cluster\_admin\_user} performs
sharding functions such as sharding the collection and checking its
data distribution. \emph{user1} is a standard user with readWrite
permissions. \emph{user1} performs the benchmarking tests and other
functions not requiring administrative privileges.
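A hedged pymongo sketch of creating these three users with the stated roles is shown below; the host name and passwords are placeholders, and the deployment itself performs this step through an Ansible playbook rather than this code.
\begin{verbatim}
# Sketch: create the three users described above with pymongo.
# Host, port, and passwords are placeholders, not project values.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")

# Administrative user that can manage other users.
client.admin.command("createUser", "admin",
    pwd="admin-password",
    roles=[{"role": "userAdminAnyDatabase", "db": "admin"}])

# Cluster administrator used for sharding-related functions.
client.admin.command("createUser", "cluster_admin_user",
    pwd="cluster-password",
    roles=[{"role": "clusterAdmin", "db": "admin"}])

# Standard readWrite user that runs the benchmarks.
client.mlb.command("createUser", "user1",
    pwd="user1-password",
    roles=[{"role": "readWrite", "db": "mlb"}])
\end{verbatim}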
\section{Deployment}
The automated process fully deploys a sharded MongoDB environment with
the Cloud, MongoDB version, number of Config Servers, Mongos
Instances, Shards, and degree of Replication specified as input
parameters.
\subsection{Computing Resources}
In all cases, virtual machines are deployed with the Ubuntu 16.04 LTS
(Xenial Xerus) operating system. On OpenStack, the flavor of the
machine determines the amount of computing resources (CPU, memory,
storage) allocated to it. In our testing, m1.medium was used as the
flavor for Chameleon Cloud and FutureSystems, while m1.small was used
on Jetstream. Jetstream has more resources allocated to each flavor
than Chameleon and FutureSystems, which are similar. In order to
perform similar tests on each cloud, flavors with identical CPU and
memory were selected. Table \ref{tab:computing-resources} shows the
comparative resources of the flavors used in our testing. While
storage is lower on Jetstream, it is sufficient for our tests and
should not significantly impact performance.
\begin{table}[htbp]
\centering
\caption{\bf Computing Resources}
\begin{tabular}{|c | c | c | c| c|}
\hline
Cloud & Flavor & VCPU & RAM (GB) & Size (GB) \\ [0.5ex]
\hline
Chameleon & m1.medium & 2 & 4 & 40 \\
\hline
FutureSystems & m1.medium & 2 & 4 & 40 \\
\hline
Jetstream & m1.small & 2 & 4 & 20 \\
\hline
\end{tabular}
\label{tab:computing-resources}
\end{table}
\subsection{Deployment Process}
Several programs are involved in the deployment. A high level overview of each is provided.
\subsubsection{project.py}
The deployment process is invoked by running the deploy function of
project.py, passing six required parameters: cloud (chameleon,
jetstream, or kilo), version (32 or 34 for version 3.2 or 3.4), size
of the config server replica set (a number, 1 or greater), number of
mongos routers (a number, 1 or greater), number of data shards (a
number, 1 or greater), and size of data shard replica sets.
Project.py calls a bash script, deploy.sh, which runs two bash shell
scripts to accomplish the deployment: cluster.sh and ansible.sh.
\subsubsection{cluster.sh}
Cluster.sh does the work of creating the cluster in the specified
cloud environment. First, it creates a key file needed for secure
access between the nodes, then uses Cloudmesh secgroup commands to
build and upload a new security group that makes the ports necessary
for MongoDB (27017, 27018, 27019, and 28017) accessible. Next, it uses
Cloudmesh client cluster commands to launch the appropriate number of
virtual machines in the desired cloud. Then, it builds a file,
inventory.txt, with sections for each MongoDB component (Config
Servers, Mongos Instances, and Shard Replica Sets), allocating the
correct number of IP addresses to each. Finally, cluster.sh builds a
few complex commands that will need to be run later in the process by
Ansible.
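The exact layout of inventory.txt is defined by cluster.sh; the following simplified Python sketch illustrates how an Ansible inventory with one section per component might be assembled (group names and formatting are illustrative, not the project's actual format):
\begin{verbatim}
# Sketch: build an Ansible-style inventory with one section per
# MongoDB component. Group names and layout are illustrative only.
def build_inventory(ips, n_config, n_mongos, n_shards, n_replicas):
    ips = list(ips)
    lines = ["[config]"] + [ips.pop(0) for _ in range(n_config)]
    lines += ["", "[mongos]"] + [ips.pop(0) for _ in range(n_mongos)]
    for shard in range(n_shards):
        lines += ["", "[shard%d]" % shard]
        lines += [ips.pop(0) for _ in range(n_replicas)]
    return "\n".join(lines) + "\n"

# Example: 1 config server, 1 mongos, 2 shards of 1 replica each.
addresses = ["10.0.0.%d" % i for i in range(1, 5)]
print(build_inventory(addresses, 1, 1, 2, 1))
\end{verbatim}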
\subsubsection{ansible.sh}
After the virtual machines have been created by cluster.sh, ansible.sh
uses the inventory.txt file to execute the following Ansible playbooks
on the appropriate virtual machines (a minimal invocation sketch
follows the list).
\begin{enumerate}
\item install-python.yaml - Installs Python, if not already installed.
This step is necessary because the Ubuntu Xenial image on
FutureSystems does not have Python installed, and Python is required
by Ansible.
\item mongo-install32.yaml - Using apt\_key and apt\_repository,
installs the packages for version 3.2 of MongoDB on all virtual
machines.
\item mongo-install34.yaml - Using apt\_key and apt\_repository,
installs the packages for version 3.4 of MongoDB on all virtual
machines.
\item add-mongo-key.yaml - Uploads the key file created in cluster.sh
to all virtual machines.
\item mongo-config.yaml - On Config Servers only, stops the mongod
service and uses a template file to start the mongod process for a
Config Server.
\item mongo-config2.yaml - On only one Config Server, uses a template
file to initiate the primary Config Server.
\item mongo-mongos.yaml - On Mongos Instances only, stops the mongod
service and uses a template file to start the mongos process for a
Mongos instance.
\item mongo-users.yaml - On only one Mongos Instance, creates several
users needed in later steps.
\item mongo-shard.yaml - On Shards only, stops the mongod service and
uses a template file to start the mongod process for a Shard.
\item mongo-shard2.yaml - On the primary Shard in each Replica Set,
uses a file built in cluster.sh to initiate the Shards.
\item add-shards.yaml - On one Mongos instance, uses a file built in
cluster.sh to add all of the Shards.
\item create-sharded-collection.yaml - Uploads several files to one
Mongos instance that will be needed for benchmarking and shards the
collection (benchmarking setup, not included in deployment times).
\item getdata.yaml - Downloads and unarchives the pitches data from an
AWS S3 directory. Also creates a smaller version for testing
(benchmarking setup, not included in deployment times).
\end{enumerate}
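ansible.sh is a Bash script, but the essential step is simply running each playbook in order against the generated inventory. A hedged Python sketch of that invocation (the playbook list is abbreviated and the flags shown are standard ansible-playbook options):
\begin{verbatim}
# Sketch: run the playbooks listed above against inventory.txt.
# ansible.sh is a Bash script; this Python equivalent is only
# illustrative, and the playbook list is abbreviated.
import subprocess

PLAYBOOKS = [
    "install-python.yaml",
    "mongo-install34.yaml",
    "add-mongo-key.yaml",
    # ... remaining playbooks in the order listed above
]

for playbook in PLAYBOOKS:
    subprocess.run(
        ["ansible-playbook", "-i", "inventory.txt", playbook],
        check=True)  # abort the deployment if a playbook fails
\end{verbatim}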
The kill function in project.py will delete and deallocate the last
existing cluster on the cloud to clean up after the test is complete.
\subsection{Deployment Timing}
The configuration parameters and cluster and Ansible deployment times
are captured in a file for each deployment (benchmarking timings are
later captured as well). Total run time for a few interesting
configurations is shown in Table \ref{tab:deploy-times}.
Deployment A shows a simple deployment with only one of each component
being created. This deployment may only be suitable for a development
or test environment. Deployment A completed in 330 seconds.
Deployment B shows a more complex deployment with production-like
replication factors for Config Servers and Shards and an additional
Mongos instance. This deployment may be suitable for a production
environment as it has greater fault tolerance and redundancy.
Deployment B took 1059 seconds to deploy.
Deployment C shows a deployment focused on high performance. It has a
high number of shards, nine, but no fault tolerance or redundancy. The
deployment may be suitable where performance needs are high and
availability is less critical. Deployment C finished in 719 seconds.
\begin{table}[htbp]
\centering
\caption{\bf Deployment Times on Chameleon Cloud in Seconds}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
& Config & Mongos & Shards & Replicas & Seconds \\
\hline
A & 1 & 1 & 1 & 1 & \emph{330} \\
\hline
B & 3 & 2 & 3 & 3 & \emph{1059} \\
\hline
C & 1 & 1 & 9 & 1 & \emph{719} \\ [1ex]
\hline
\end{tabular}
\label{tab:deploy-times}
\end{table}
The total number of virtual machines is highly correlated with
deployment time, as booting the machines and installing the software,
tasks that occur for all nodes, take the most time. The additional
steps to configure Config Servers, Mongos Instances, Replicas, and
Shards run in relatively similar times, so the specific type of
component created has little impact on the deployment time. For
example, holding all other deployment variables at 1, a deployment
with five Config Servers took 534 seconds, one with five Mongos
Instances took 556 seconds, one with five Shards took 607 seconds, and
one with a five-member Shard Replica Set took 524 seconds. There is a
small extra overhead when starting additional data shards, but a
strong correlation exists between total nodes and runtime for all
configurations.
Deployment times for version 3.4 were very similar to those for
version 3.2. Table \ref{tab:deploy-times2} also shows empirically that
configurations with the same total number of nodes, but very different
mixes of Config Servers, Mongos Instances, Replicas, and Shards, take
very similar times to launch.
The total number of nodes in a deployment can be calculated from the
parameters to the deployment script as
\[ \mathrm{total\ nodes} = c + m + (s \times r). \]
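For example, deployment A in Table \ref{tab:deploy-times} uses $1 + 1 + (1 \times 1) = 4$ nodes, deployment B uses $3 + 2 + (3 \times 3) = 14$ nodes, and deployment C uses $1 + 1 + (9 \times 1) = 11$ nodes, which is consistent with the strong correlation between node count and deployment time noted above.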
\begin{table}[htbp]
\centering
\caption{\bf Deployment Times on Chameleon Cloud in Seconds}
\begin{tabular}{| c | c | c | c | c |}
\hline
Config Servers & Mongos & Shards & Replicas & Time in \\
-c & -m & -s & -r & Seconds
\\ [0.5ex]
\hline
\hline
\emph{5} & 1 & 1 & 1 & \emph{534} \\
\hline
1 & \emph{5} & 1 & 1 & \emph{556} \\
\hline
1 & 1 & \emph{5} & 1 & \emph{607} \\
\hline
1 & 1 & 1 & \emph{5} & \emph{524} \\ [1ex]
\hline
\end{tabular}
\label{tab:deploy-times2}
\end{table}
Due to Chameleon Cloud having the most reliable and consistent
performance of the three clouds, performance numbers are presented
only for selected runs on Chameleon. These numbers are proportionately
representative of deployment timings on Jetstream and FutureSystems.
\section{Benchmarking}
After the sharded MongoDB instance has been fully deployed, a
benchmarking process is run to assess performance of the
configuration. This process has also been fully automated. It is
invoked by running the benchmark function of project.py and passing
either the parameter large (for a full benchmark test) or small (for a
reduced test).
\subsection{Data Set}
The data set used in the benchmarking testing and analysis was Major
League Baseball PITCHf/x data obtained by using the program Baseball
on a Stick (BBOS) \cite{www-bbos}. BBOS is a Python program created by
\emph{willkoky} on GitHub that extracts data from mlb.com and loads
it into a MySQL database. While it would be possible to convert this
program to populate the MongoDB database directly, collecting all of
the data is a time consuming process. Therefore, the data was captured
locally to the default MySQL database and then extracted to a CSV
file. This file contains 5,508,014 rows and 61 columns. It is
1,588,996,075 bytes in size uncompressed.
\subsection{Methodology}
There are several goals of the benchmarking process. The primary
benchmarking goal of the project is to assess the impact of sharding
on performance in MongoDB. Since replication was also built into the
deployment process, a secondary goal was to assess the impact of
replica sets on performance. A third goal is to assess performance of
MongoDB version 3.4 versus version 3.2, specifically for various shard
configurations. A final objective is to assess the relative performance
of the Chameleon, FutureSystems, and Jetstream cloud computing
environments.
The benchmarking tests are designed to assess performance in three
situations: Reads, Writes, and MapReduce operations.
\subsubsection{Impact on Reads}
To assess the impact of different configurations on reads, we use
MongoDB's find command. We read the data previously loaded by the
mongoimport command into the pitches collection. The find command
retrieves documents that meet specified criteria. In this case, we
search for pitches with a speed over 100 mph, a relatively rare event
in baseball. To limit the information sent back over the network, we
only return a count of these events; the count returned is 3,632 out
of 5,508,014 total documents. The column we search on does not have an
index, as the goal is to test the impact of sharding on a long-running
query.
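A hedged pymongo equivalent of this read test is sketched below; the connection string and the name of the speed field are assumptions, and the benchmark itself issues the query from the mongos instance rather than through this code.
\begin{verbatim}
# Sketch: count pitches faster than 100 mph. The connection string
# and the speed field name ("start_speed") are assumptions.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user1:user1-password@mongos-host:27017/mlb")
pitches = client.mlb.pitches

# Return only a count so little data crosses the network; the
# field is deliberately left unindexed.
fast = pitches.find({"start_speed": {"$gt": 100}}).count()
print(fast, "of", pitches.count(), "pitches exceed 100 mph")
\end{verbatim}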
\subsubsection{Impact on Writes}
To assess the impact of different configurations on writes, we use
MongoDB's mongoimport command. Mongoimport is a command line tool
capable of loading JSON, CSV, or TSV files \cite{www-mongoimport}. In
this case, we load a CSV file into the pitches collection in the mlb
database.
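A hedged sketch of this write test as a mongoimport invocation is shown below; the host, credentials, and file name are placeholders, and the flags shown are standard mongoimport options.
\begin{verbatim}
# Sketch: load the pitches CSV through a mongos router.
# Host, credentials, and file name are placeholders.
import subprocess

subprocess.run([
    "mongoimport",
    "--host", "mongos-host", "--port", "27017",
    "--username", "user1", "--password", "user1-password",
    "--authenticationDatabase", "mlb",
    "--db", "mlb", "--collection", "pitches",
    "--type", "csv", "--headerline",
    "--file", "pitches.csv",
], check=True)
\end{verbatim}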
\subsubsection{Impact on MapReduce}
To assess the impact of MongoDB version, sharding, and replication on
MapReduce operations, a simple MapReduce job was written against the
pitches collection to compute the average speed of pitches that were
strikes versus those that were not strikes \cite{www-mapreduceEx}
\cite{www-mapreduce}.
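A hedged sketch of such a MapReduce job through an older pymongo release is shown below; the strike indicator and speed field names are assumptions about the PITCHf/x schema, not the project's actual fields.
\begin{verbatim}
# Sketch: average pitch speed for strikes vs. non-strikes.
# Field names ("type" == "S" for a strike, "start_speed") and the
# connection string are assumptions.
from bson.code import Code
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user1:user1-password@mongos-host:27017/mlb")

mapper = Code("""function () {
    emit(this.type == "S", { total: this.start_speed, count: 1 });
}""")

reducer = Code("""function (key, values) {
    var out = { total: 0, count: 0 };
    values.forEach(function (v) {
        out.total += v.total; out.count += v.count;
    });
    return out;
}""")

finalizer = Code("function (key, v) { return v.total / v.count; }")

# inline_map_reduce returns the reduced documents directly.
results = client.mlb.pitches.inline_map_reduce(
    mapper, reducer, finalize=finalizer)
for doc in results:
    print("strike" if doc["_id"] else "not a strike", doc["value"])
\end{verbatim}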
\subsection{Benchmarking Process}
The benchmarking process is invoked by running the benchmark function
of the script project.py with the large parameter. Results for each
test are automatically captured in the file benchmark\_datetime.csv.
This file includes the configuration the test was run under (cloud,
MongoDB version, config server replication factor, mongos instances,
number of shards, and shard replication factor) along with the run
times of the find, mongoimport, and MapReduce commands. After all
tests are run, a shell script, combine\_benchdeploy.sh, combines all
files into one file, benchmark\_combined.csv.
The graphical depictions of the test results shown in the next section
were created by running Python programs that average the run times
across the shard, replication, and version configurations shown. For
consistency, config server replication and mongos instances were both
kept at one for all benchmarking tests. Additionally, replication was
kept at one for the sharding and version tests, and sharding was kept
at one for the replication tests. This methodology allows us to
isolate the variable we are assessing. All sharding tests were run
with one, two, three, five, seven, and nine shards. All replication
tests were run with one, two, three, five, and seven replicas.
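A hedged sketch of the kind of averaging and plotting these report programs perform over benchmark\_combined.csv is shown below; the column names are assumptions, and the project's actual scripts may differ.
\begin{verbatim}
# Sketch: average find run times per shard count and plot them.
# Column names in benchmark_combined.csv are assumed.
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

times = defaultdict(list)
with open("benchmark_combined.csv") as fh:
    for row in csv.DictReader(fh):
        times[int(row["shards"])].append(float(row["find_seconds"]))

shards = sorted(times)
averages = [sum(times[s]) / len(times[s]) for s in shards]

plt.plot(shards, averages, marker="o")
plt.xlabel("Number of shards")
plt.ylabel("Average find run time (s)")
plt.savefig("shard_find.png")
\end{verbatim}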
To set up for the test, a compressed version of the file has been
stored in an Amazon Web Services S3 directory. This file is pre-staged
on a Mongos instance during the deployment (but excluded from the run
time) and is loaded into a collection named \emph{pitches} in MongoDB
using mongoimport before the find and MapReduce commands are run.
Before the benchmarking process can be run, a collection must be
created and sharded. This is also done via Ansible during the
deployment in preparation for benchmarking. For reproducibility of
benchmarks, the benchmarking process also deletes any data from the
pitches collection that may have been loaded prior to running
mongoimport.
The shard key for the pitches collection is set to pitchID, a key
unique to each pitch document. Selecting pitchID as the shard key
should cause the data to be distributed reasonably evenly among the
shards. Data distribution is analyzed in a subsequent section.
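A hedged sketch of the commands involved in sharding the collection on pitchID is shown below; it is run against a mongos as the cluster administrator, the connection details are placeholders, and in the project this step is carried out from an Ansible playbook.
\begin{verbatim}
# Sketch: enable sharding on the mlb database and shard the
# pitches collection on pitchID. Connection details are
# placeholders; the project performs this step via Ansible.
from pymongo import MongoClient

client = MongoClient("mongodb://cluster_admin_user:cluster-password"
                     "@mongos-host:27017/admin")

# A ranged shard key needs a supporting index on the collection.
client.mlb.pitches.create_index("pitchID")
client.admin.command("enableSharding", "mlb")
client.admin.command("shardCollection", "mlb.pitches",
                     key={"pitchID": 1})
\end{verbatim}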
\subsection{Data Distribution}
To explore how data was allocated among the shards, a function called
distribution was built into project.py. This function runs the
getShardDistribution() command, which reports on how data and
documents are distributed among shards \cite{www-shardDist}. Tables
\ref{tab:data-dist32} and \ref{tab:data-dist34} show the results of
tests with one, three, and five shards in version 3.2 and 3.4 of
MongoDB. The results show the data is well distributed, although
interestingly, in all cases there is some minor skew, with the first
shard holding the most data. The results also show that data
distribution is similar in both versions of MongoDB.
\begin{table}[htbp]
\centering
\caption{\bf Data Distribution among Shards (\%) - Version 3.2}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
Shards & 1 & 2 & 3 & 4 & 5 \\ [0.5ex]
\hline
\hline
1 & 100 & & & & \\
\hline
3 & 35.84 & 32.18 & 31.96 & & \\
\hline
5 & 23.04 & 19.27 & 19.40 & 19.38 & 18.89 \\
\hline
\end{tabular}
\label{tab:data-dist32}
\end{table}
\begin{table}[htbp]
\centering
\caption{\bf Data Distribution among Shards (\%) - Version 3.4}
\begin{tabular}{| c | c | c | c | c | c |}
\hline
Shards & 1 & 2 & 3 & 4 & 5 \\ [0.5ex]
\hline
\hline
1 & 100 & & & & \\
\hline
3 & 36.23 & 31.82 & 35.75 & & \\
\hline
5 & 22.26 & 19.67 & 19.42 & 19.37 & 19.26 \\
\hline
\end{tabular}
\label{tab:data-dist34}
\end{table}
\subsection{Benchmarking Analysis}
\subsubsection{Cloud Analysis}
Chameleon Cloud was significantly more stable and reliable than the
FutureSystems and Jetstream Clouds in our testing. Chameleon yielded
the fastest and most consistent results with very few errors.
Jetstream initially had stability problems that were eventually
resolved by the Jetstream support team; once these issues were
resolved, Jetstream performance and stability were very close to
Chameleon's. FutureSystems performance was the poorest with respect to
run time, and environmental errors were initially frequent, although
after allocating new floating IPs tests completed successfully. Due to
its stability and performance, Chameleon was chosen as the environment
for comparing MongoDB version 3.4 with version 3.2.
\subsubsection{Impact of Sharding on Reads}
Figure \ref{fig:shard-find} depicts the impact of various numbers of
shards on the performance of a find command in the Chameleon,
FutureSystems, and Jetstream Clouds. All three clouds show a strong
overall decline in run time as the number of shards increases, which
shows the positive impact of sharding on performance. For all clouds,
reads took over 35 seconds with one shard and less than 10 seconds
with five shards. This is a significant gain in performance.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/shard_find.png}}
\caption{Find Command - Sharding Test}
\label{fig:shard-find}
\end{figure}
All three clouds show a particularly large gain in performance when
increasing from one shard to two. Run time for two shards is less than
one third the run time for one shard. Increases in shards beyond two
show much smaller incremental gains.
Performance on Chameleon Cloud and Jetstream is very similar for the
find test. FutureSystems performance is worse, although
proportionately better than on the mongoimport test. This is an
interesting observation, as for both deployment and mongoimport,
performance was much better on Chameleon and Jetstream than on
FutureSystems. One difference from the mongoimport test is that much
less data is being sent over the network, so network speeds could be a
factor in this discrepancy.
Figure \ref{fig:shard-find} can be recreated by running the program
benchmark\_shards\_find.py passing the file benchmark\_combined.csv as
a parameter. It plots the average run time for each configuration as
shown using matplotlib. This report is run automatically by the report
function of project.py.
\subsubsection{Impact of Sharding on Writes}
Figure \ref{fig:shard-import} depicts the impact on performance of
various numbers of shards on a mongoimport command in the three
clouds. For all clouds, run time of the mongoimport command in our
tests does not appear to be impacted by the number of shards. Since
the same amount of data is written with more computing resources
available when there are more shards, we might expect to see a
performance gain. However, there are possible explanations for
performance not improving. First, the mongoimport command may not
write data in parallel. This is not indicated in the documentation,
but it seems likely that it reads the file serially. Second, resources
on the server the data is written to may not be the bottleneck in the
write process. Other resources, such as the network, seem more likely
to be the bottleneck. Since we are always going over the network from
the mongos instance to a data shard, regardless of the number of
shards, a bottleneck in the network would impact all shard
configurations equally.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/shard_import.png}}
\caption{Mongoimport Command - Sharding Test}
\label{fig:shard-import}
\end{figure}
While sharding did not benefit a single-threaded mongoimport command,
it is likely it would benefit other heavy write operations,
particularly those coming through multiple mongos instances. In a
non-sharded environment, this would lead to a heavy load on the single
data shard. In a sharded environment, the load on each shard would
drop as the number of shards increased.
While performance on Chameleon and Jetstream was very similar for the
find command, performance of the mongoimport command was significantly
better on Chameleon and Jetstream than on FutureSystems (Kilo); we see
approximately 50\% better performance on both Chameleon and Jetstream
Clouds compared to FutureSystems. Jetstream performance is slightly
better than Chameleon's for the import test.
Figure \ref{fig:shard-import} can be recreated by running the program
benchmark\_shards\_import.py passing the file benchmark\_combined.csv
as a parameter. It plots the average run time for each configuration
as shown using matplotlib. This report is run automatically by the
report function of project.py.
\subsubsection{Impact of Sharding on MapReduce}
Figure \ref{fig:shard-mapreduce} shows the performance of MapReduce
across various sharding configurations on our three clouds. These
results are relatively similar to the find results. While results are
inconsistent, particularly on FutureSystems, likely due to
environmental issues, all clouds show an overall decrease in
processing time with addition of shards. Relative to mongoimport
performance, performance is more similar across the three clouds for
MapReduce.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/shard_mapreduce.png}}
\caption{MapReduce - Sharding Test}
\label{fig:shard-mapreduce}
\end{figure}
Figure \ref{fig:shard-mapreduce} can be recreated by running the
program benchmark\_shards\_mapreduce.py passing the file
benchmark\_combined.csv as a parameter. It plots the average run time
for each configuration as shown using matplotlib. This report is run
automatically by the report function of project.py.
\subsubsection{Impact of Replication on Reads}
Figure \ref{fig:replica-find} depicts the impact on performance of
various numbers of replicas on a find command in Chameleon,
FutureSystems, and Jetstream Clouds. These results show no correlation
between the number of replicas and find performance.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/replica_find.png}}
\caption{Find Command - Replication Test}
\label{fig:replica-find}
\end{figure}
Similarly to other tests, performance on Chameleon was best for the
majority of the test runs in the find replication test, followed by
Jetstream, with FutureSystems performing the worst.
Figure \ref{fig:replica-find} can be recreated by running the program
benchmark\_replicas\_find.py passing the file benchmark\_combined.csv
as a parameter. It plots the average run time for each configuration
as shown using matplotlib. This report is run automatically by the
report function of project.py.
\subsubsection{Impact of Replication on Writes}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/replica_import.png}}
\caption{Mongoimport Command - Replication Test}
\label{fig:replica-import}
\end{figure}
Figure \ref{fig:replica-import} depicts the impact on performance of
various numbers of replicas on a mongoimport command on our three
Clouds. The results show poorer write performance as the number of
replicas increases. Given that an extra copy of the data is written
with each increase in the replication factor, this performance hit is
expected.
Performance on Jetstream and Chameleon were very close on this test
with Chameleon only performing significantly better with four or more
replicas. FutureSystems import performance was by far the worst of the
three clouds.
Figure \ref{fig:replica-import} can be recreated by running the
program benchmark\_shards\_import.py passing the file
benchmark\_combined.csv as a parameter. It plots the average run time
for each configuration as shown using matplotlib. This report is run
automatically by the report function of project.py.
\subsubsection{Impact of Replication on MapReduce}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/replica_mapreduce.png}}
\caption{MapReduce - Replication Test}
\label{fig:replica-mapreduce}
\end{figure}
As shown in Figure \ref{fig:replica-mapreduce}, replication appears to
have no impact on MapReduce operations. While there are variations in
FutureSystems and Jetstream performance for different numbers of
replicas, they do not follow a consistent pattern and appear to be
caused by environmental issues. This result is consistent with the
find command, which also reads data and likewise showed no penalty
from increased replication.
As with several other tests, Chameleon MapReduce performance was the
best, followed by Jetstream, with FutureSystems again being the worst.
Figure \ref{fig:replica-mapreduce} can be recreated by running the
program benchmark\_shards\_import.py passing the file
benchmark\_combined.csv as a parameter. It plots the average run time
for each configuration as shown using matplotlib. This report is run
automatically by the report function of project.py.
\subsubsection{Impact of Version and Sharding on Reads}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/version_find.png}}
\caption{Find Command - Version 3.2 vs 3.4}
\label{fig:version-find}
\end{figure}
Figure \ref{fig:version-find} shows the MongoDB version 3.4 and 3.2
find performance on Chameleon Cloud. Results are very close, with
version 3.2 having the best performance for one shard and performance
being similar for all other sharding levels.
Figure \ref{fig:version-find} can be recreated by running the program
benchmark\_version\_find.py passing the file benchmark\_combined.csv
as a parameter. It plots the average run time for each configuration
as shown using matplotlib. This report is run automatically by the
report function of project.py.
\subsubsection{Impact of Version and Sharding on Writes}
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/version_import.png}}
\caption{Mongoimport Command - Version 3.2 vs 3.4}
\label{fig:version-import}
\end{figure}
Figure \ref{fig:version-import} shows the MongoDB version 3.4 and 3.2
mongoimport performance on Chameleon Cloud. Runtimes are similar for
each version. Version 3.2 is slightly faster at the lowest sharding
levels and Version 3.4 is slightly faster at the highest sharding
level. Given the mixed results and close run times, neither version
shows a significant advantage for write operations.
Figure \ref{fig:version-import} can be recreated by running the
program benchmark\_version\_find.py passing the file
benchmark\_combined.csv as a parameter. It plots the average run time
for each configuration as shown using matplotlib. This report is run
automatically by the report function of project.py.
\subsubsection{Impact of Version and Sharding on MapReduce}
Figure \ref{fig:version-mapreduce} shows the MongoDB version 3.4 and
3.2 MapReduce performance on Chameleon Cloud. Runtimes are similar
for each version, with each version being faster at some shard levels
in no apparent pattern. Given the mixed results and close run times,
neither version shows a significant advantage for MapReduce
operations.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/version_mapreduce.png}}
\caption{MapReduce - Version 3.2 vs 3.4}
\label{fig:version-mapreduce}
\end{figure}
Figure \ref{fig:version-mapreduce} can be recreated by running the
program benchmark\_version\_find.py passing the file
benchmark\_combined.csv as a parameter. It plots the average run time
for each configuration as shown using matplotlib. This report is run
automatically by the report function of project.py.
\section{Summary}
We have created, tested, and demonstrated a fully automated program to
configure and deploy a sharded MongoDB cluster to three cloud
environments: Chameleon, Jetstream, and FutureSystems. Using a
combination of Python, Bash, and Cloudmesh Client, a cluster is
dynamically deployed with a selected number of Config Server Replicas,
Mongos Routers, Shards, and Shard Replicas and either MongoDB version
3.4 or 3.2. Functions also exist for terminating the environment,
reporting on data distribution, benchmarking, and reporting on
performance testing.
An automated benchmarking process to show the impact of well
distributed data across shards of a large data set has been run for
various configurations. The impact of MongoDB version 3.4 versus 3.2,
sharding, and replication on performance has been assessed. Testing
showed performance and stability on Chameleon Cloud to be the best of
our three cloud environments, with Jetstream a close second after an
environmental issue was resolved by the support team. FutureSystems
performance consistently lagged behind the other two clouds. A key
finding is that read performance, typically a high priority for NoSQL
data stores and Big Data operations, increases significantly as shards
are added. Testing also showed that a predictable performance penalty
is associated with replication. Our comparison of versions 3.4 and 3.2
showed no significant performance differences across various sharding
levels.
\section*{Acknowledgements}
This work was conducted with the help of resources provided by
FutureSystems and Chameleon Cloud.
% Bibliography
\bibliography{references}
\section*{Author Biographies}
\begingroup
\setlength\intextsep{0pt}
\begin{minipage}[t][3.2cm][t]{1.0\columnwidth} % Adjust height [3.2cm] as required for separation of bio photos.
\noindent
{\bfseries Mark McCombe} received his B.S. (Business Administration/Finance) and M.S. (Computer Information Systems) from Boston University. He is currently studying Data Science at Indiana University Bloomington.
\end{minipage}
\endgroup
\newpage
\appendix
\section{Code References}
References used in the deployment, benchmarking, and visualization programs are formally documented here as well as noted in comments in the code \cite{www-bashNum} \cite{www-lastChar} \cite{www-configOpts} \cite{www-bashArgs} \cite{www-cmVms} \cite{www-python1} \cite{www-python2} \cite{www-python3} \cite{www-ansibleDir} \cite{www-mongoAnsible} \cite{www-ansibleCopy} \cite{www-ansibleHost} \cite{www-installMongo} \cite{www-ansiblePython}.
\section{Execution Instructions}
The project should be run on an Ubuntu 16.04 LTS (Xenial Xerus)
machine. The required modules for the project can be installed in a
virtualenv virtual environment using the file
project/S17-IO-3012/code/requirements.txt.
The main script, project/S17-IO-3012/code/bin/project.py, can be run
to execute all functionality. The five project.py functions (deploy,
kill, benchmark, report, distribution) are described in its help
output; sample instructions for each function are provided below.
\subsection{deploy}
Runs a deployment. Takes 6 parameters:
\begin{enumerate}
\item Cloud - chameleon, jetstream, or kilo (futuresystems)
\item MongoDB Version - 34 for version 3.4, 32 for version 3.2
\item Config Server Replication Size - a number
\item Mongos Router Instances - a number
\item Shard Count - a number
\item Shard Replication Size - a number
\end{enumerate}
Simple example:
deploy chameleon 34 1 1 1 1
More complex examples:
deploy chameleon 32 3 2 3 3
deploy kilo 34 2 2 1 1
\subsection{kill}
Deletes and undefines the current cluster. No parameters.
\subsection{benchmark}
Runs a benchmark mongoimport, find, and MapReduce and logs timings.
Takes one required parameter - \emph{large} or \emph{small} (for
testing purposes).
\subsection{report}
Regenerates PNG files in the code/report/ directory based on current
benchmarks.
\subsection{distribution}
Shows the data distribution of the current configuration. For
meaningful results, it must be run after benchmark.
\section{Directory Structure}
The project/S17-IO-3012/code contains several directories.
\subsection{benchmark/} Contains all benchmark timing logs
\subsection{bin/} Contains all Bash and Python code
\subsection{configfiles/} Contains all configuration file templates
\subsection{deploy/} Contains all deployment timing logs
\subsection{json/} Contains all json documents
\subsection{playbooks/} Contains all Ansible YAML files
\subsection{report/} Contains all reports in PNG format
\subsection{stdlist/} Contains all bash script output logs
\subsection{work/} Contains temporary work files
\end{document}
| {
"alphanum_fraction": 0.7934041386,
"avg_line_length": 39.9624119029,
"ext": "tex",
"hexsha": "2f21d07fcbd0961b04516b4bf2eaf25d7181928f",
"lang": "TeX",
"max_forks_count": 294,
"max_forks_repo_forks_event_max_datetime": "2018-07-13T01:32:24.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-09T13:18:39.000Z",
"max_forks_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "cloudmesh/sp17-i524",
"max_forks_repo_path": "project/S17-IO-3012/report/report.tex",
"max_issues_count": 98,
"max_issues_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412",
"max_issues_repo_issues_event_max_datetime": "2017-10-27T11:30:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-01-19T04:24:02.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "cloudmesh/sp17-i524",
"max_issues_repo_path": "project/S17-IO-3012/report/report.tex",
"max_line_length": 438,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "suunni/sp17-i524",
"max_stars_repo_path": "project/S17-IO-3012/report/report.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-14T19:13:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-10-30T09:54:25.000Z",
"num_tokens": 12092,
"size": 51032
} |
\documentclass[12pt]{article}
\usepackage{psfig}
\usepackage{fullpage}
% bold math italic font
\newcommand{\mbf}[1]{\mbox{\boldmath $#1$}}
% symbol used for sqrt(-1)
\newcommand{\Ci}{\ensuremath{i}}
\newcommand{\var}{{\rm var}}
\newcommand{\trace}{{\rm tr}}
\newcommand{\Rotation}{{\bf R}}
\newcommand{\Boost}{{\bf B}}
\newcommand{\vRotation}[1][n]{\ensuremath{\Rotation_{\mbfs{\hat #1}}}}
\newcommand{\vBoost}[1][m]{\ensuremath{\Boost_{\mbfs{\hat #1}}}}
\newcommand{\rotat}{\ensuremath{\vRotation(\phi)}}
\newcommand{\boost}{\ensuremath{\vBoost(\beta)}}
\newcommand{\pauli}[1]{\ensuremath{ {\mbf{\sigma}}_{#1} }}
\begin{document}
\section{Introduction}
This is a quick reference for the Measurement and Error Analysis
Library (MEAL). The classes defined within the {\tt MEAL} namespace
implement a general-purpose library for performing non-linear
least-squares and first-order error propagation.
\section{Function Characteristics}
A mathematical function or expression is characterized by its return
type and input variables; a distinction is made between function
parameters, ${\mbf a}=(a_0, a_1, ... , a_N)$ and independent
variables, or arguments, ${\mbf x}=(x_0, x_1, ... , x_M)$. Parameters
are double-precision floating point values that can vary freely during
modeling. Arguments have arbitrary type and cannot be treated as free
variables during modeling.
The parameter and argument access interface is defined by the {\tt
Function} base class, and the return type is defined by {\em
evaluation base classes} that inherit {\tt Function}. The evaluation
base classes define an {\tt evaluate} method that returns a result,
$f({\mbf x},{\mbf a})$, and the partial derivatives of this result
with respect to each of the parameters, $\partial f/\partial a_i$.
The behaviour of function expression classes is organized into
three categories: Parameter, Argument, and Evaluation. These
behaviours are implemented by the children of three base classes,
each of which inherits the {\tt FunctionPolicy} base class.
The behaviour policies are mutually exclusive, and derived classes can
incorporate pre-defined behaviours by setting the appropriate
policy attribute.
\subsection{Parameter Policy}
A function expression may have an arbitrary number of scalar parameters,
$\mbf{a}=(a_0, a_1, ... , a_N)$. Each parameter has an associated
name, estimated variance, and a flag that indicates if the parameter
is free or fixed. Parameter management and access is implemented by
children of the {\tt ParameterPolicy} base class.
\subsection{Argument Policy}
A function expression may be further parameterized by an arbitrary number
of independent variables, or arguments, ${\mbf x}=(x_0, x_1, ... ,
x_M)$. Unlike parameters, arguments have no specified type, no
estimated variance, and can never be free. Because they have no
specified type, the interface between a {\tt Function} and its
arguments is mediated through the {\tt Argument} and {\tt
Argument::Value} abstract base classes. Argument management and
behaviour is implemented by children of the {\tt ArgumentPolicy} base
class.
\subsection{Evaluation Policy}
The return type of a function expression is unspecified in the
{\tt Function} base class definition. Therefore, derived classes must
inherit an {\em evaluation base class}. The evaluation base classes are
children of the {\tt Function} base class that define an {\tt evaluate} method
and an evaluation policy. There are currently two evaluation base
classes:
\begin{itemize}
\item {\tt Scalar} - returns a scalar (double-precision) value
\item {\tt Complex2} - returns a $2\times2$ complex (double-precision) matrix
\end{itemize}
In addition, a number of template classes, known as {\tt Rules},
implement basic rules of calculus that may be used to simplify the
computation of more complicated expressions and their partial
derivatives. There are currently two policies for dealing with the
evaluation of a evaluation base-derived class: {\tt Cached} and {\tt
NotCached}.
\section{Modular Construction}
TO DO: Show how new functions can be built up from more basic elements.
\section{Example Usage}
\subsection{Non-linear Least-Squares Estimation}
TO DO: Document lmfit
\subsection{Error Propagation}
The {\tt Estimate} template class is very useful for storing a value
and its estimated variance. There are also operators and functions
which enable the propagation of error estimates to derived quantities;
for example:
\begin{verbatim}
Estimate<float> y (0.596,0.0034);
Estimate<float> x (-0.83,0.0072);
Estimate<float> z = pow (y,x);
\end{verbatim}
automatically computes the variance of the new variable, {\tt z}.
However, the {\tt Estimate} template class fails when a variable
appears more than once in an expression; e.g.
\begin{verbatim}
Estimate<float> ratio = x/x;
\end{verbatim}
should yield $1\pm0$; however, the {\tt Estimate} template class does
not recognize that the numerator and denominator are the same
variable, and incorrectly sums the weighted variances.
The problem of correctly computing the partial derivatives of an
expression with respect to its variables makes use of the exact same
functionality used to generate the gradient and Hessian matrix in
non-linear least squares fitting.
A simplified interface to this functionality is implemented by the
{\tt ScalarMath} class. {\tt ScalarMath} objects may be conveniently
initialized as a single parameter and its estimated variance using the
{\tt Estimate} template class. As with float and double types, {\tt
ScalarMath} objects may be combined using normal arithmetic operations
and basic mathematical functions, creating {\tt Scalar} functions of
any number of parameters. For example:
\begin{verbatim}
MEAL::ScalarMath x (Estimate<double> (0.9587, 0.00058));
MEAL::ScalarMath y (Estimate<double> (-0.283, 0.00034));
cerr << "Polar angle = " << atan2 (y, x) << endl;
\end{verbatim}
yields the output
\begin{verbatim}
Polar angle = (-0.287039 +/- 0.0189612)
\end{verbatim}
\noindent
As with any native type, the {\tt ScalarMath} class can be used as a
template argument, e.g.
\begin{verbatim}
complex<MEAL::ScalarMath> z (Estimate<double> (0.87, 0.0041),
Estimate<double> (2.38, 0.0095));
complex<MEAL::ScalarMath> w (Estimate<double> (1.74, 0.0081),
Estimate<double> (-.63, 0.0043));
Jones<MEAL::ScalarMath> jones (z, conj(z),
conj(w), w);
\end{verbatim}
enabling error propagation through increasingly complex expressions.
\end{document}
\section model Function Components
All model components that inherit the MEAL::Function abstract
base class represent functions of an arbitrary number of variables.
A distinction is made between independent variables, or arguments,
\f${\bf x}=(x_0, x_1, ... , x_M)\f$, and model parameters, \f${\bf
a}=(a_0, a_1, ... , a_N)\f$. Through use of the
MEAL::Argument and MEAL::Argument::Value abstract base
classes, model components may be constrained by one or more
independent variables of arbitrary type. The model parameters,
\f${\bf a}\f$, represent double precision floating point values
that may need to be constrained by some fitting technique. Function
classes should define an evaluation function that returns a result,
\f$M\f$, and the partial derivative of this result with respect to
each of the model parameters, \f$\partial M/\partial a_i\f$. The
independent variables, \f$\bf x\f$, represent known values, such as
observing frequency and epoch, that may be used to further constrain
a model.
The MEAL::Function class does not define the type of value that
it represents. This is defined by derived types, which must define
a type named Result and a method named evaluate:
virtual Result evaluate (std::vector<Result>* gradient = 0) const = 0;
The evaluate method returns a value of the type specified by Result
and, if a pointer to a vector of Result is passed as the first
argument, the vector will return the gradient of the return value
with respect to the model parameters.
The Result type and evaluate method are implemented by two main
classes of MEAL::Function derived components:
<UL>
<LI> MEAL::Scalar - a scalar function,
\f$f({\bf a}; {\bf x})\f$, such as the MEAL::Polynomial
<LI> MEAL::Complex2 - a complex 2x2 matrix function,
\f$J({\bf a}; {\bf x})\f$, such as the MEAL::Coherency matrix
and the MEAL::Rotation transformation.
</UL>
\subsection calculus Partial Derivatives
A number of template classes may be used to simplify the modular
construction of more complicated functions. These templates
implement the following basic rules of differentiation:
<UL>
<LI> MEAL::ChainRule - an arbitrary function in which
one or more parameters is set equal to the ordinate of a
MEAL::Scalar function
<LI> MEAL::BinaryRule - an associative binary operation, such
as the sum (MEAL::SumRule) or product
(MEAL::ProductRule).
</UL>
| {
"alphanum_fraction": 0.7469919417,
"avg_line_length": 39.9074889868,
"ext": "tex",
"hexsha": "c0007d7e70fa5756d16d0a058508558d32063a7f",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-02-13T20:08:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-02-13T20:08:14.000Z",
"max_forks_repo_head_hexsha": "453c4dc05b8e901ea661cd02d4f0a30665dcaf35",
"max_forks_repo_licenses": [
"AFL-2.1"
],
"max_forks_repo_name": "xuanyuanstar/psrchive_CDFT",
"max_forks_repo_path": "More/MEAL/MEAL.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "453c4dc05b8e901ea661cd02d4f0a30665dcaf35",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"AFL-2.1"
],
"max_issues_repo_name": "xuanyuanstar/psrchive_CDFT",
"max_issues_repo_path": "More/MEAL/MEAL.tex",
"max_line_length": 78,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "453c4dc05b8e901ea661cd02d4f0a30665dcaf35",
"max_stars_repo_licenses": [
"AFL-2.1"
],
"max_stars_repo_name": "xuanyuanstar/psrchive_CDFT",
"max_stars_repo_path": "More/MEAL/MEAL.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2270,
"size": 9059
} |
\documentclass[12pt,letterpaper]{article}
\usepackage{graphicx,wrapfig,colordvi,color,citesort,times,array,subfigure,floatflt,makeidx,framed,boxedminipage,url,hyperref}
\graphicspath{{images/}}
\definecolor{UglyBrown}{rgb}{0.7,0.4,0.1}
\newcommand{\newtext}[1]{\textcolor{UglyBrown}{#1}}
\newcommand{\contributors}{
Thomas Parker\\
Devon Jones\\
James Dempsey\\
Aaron Divinsky\\
Tir Gwaith (Andrew McDougall)\\
Frank Kliewe\\
Eddy Anthony\\
Thomas Clegg\\
Paul W. King\\
Paul Grosse\\
Terry FitzSimons\\
Chris Chandler\\
Martijn Verburg\\
Connor Petty\\
}
\newcommand{\lastupdate}{June 27, 2008}
\newcommand{\versionEOS}{0.60}
\newcommand{\docnumEOS}{2.0}
\newcommand{\lastversionEOS}{0.51}
\newcommand{\pcgenversEOS}{5.16}
\newcommand{\systemEOS}{Rules Persistence System}
\makeindex
\hoffset=-0.6in
\voffset=-0.6in
\textwidth=6.5in
\textheight=9.0in
\parindent=0.25in
\parskip=1.0ex
\newcommand{\system}{\systemEOS{} }
\newcommand{\version}{\versionEOS{} }
\newcommand{\docnum}{\docnumEOS{} }
\newcommand{\lastversion}{\lastversionEOS{} }
\newcommand{\pcgenvers}{\pcgenversEOS{} }
\definecolor{Red}{rgb}{1,0,0}
\definecolor{Blue}{rgb}{0,0,1}
\definecolor{Black}{rgb}{0,0,0}
\newcommand{\redtext}[1]{\textcolor{Red}{#1}}
\newcommand{\revisedtext}[1]{\textcolor{Blue}{#1}}
\newcommand{\newtextPointZeroFive}[1]{#1}
\newcommand{\revisedtextPointZeroFive}[1]{#1}
\newcommand{\deletedTextPointZeroFive}[1]{}
\newcommand{\newtextPointFive}[1]{\newtext{#1}}
\newcommand{\revisedtextPointFive}[1]{\revisedtext{#1}}
\newcommand{\deletedTextPointFive}[1]{}
\newcommand{\textem}[1]{\emph{#1}}
\newcommand{\ix}[1]{#1\index{#1}}
\newcommand{\tool}[1]{\textbf{\ix{#1}}}
\newcommand{\rejected}[1]{}
\newcommand{\nsection}[1]{\newpage \section{#1}}
\newcommand{\lsection}[1]{\label{#1}\section{#1}}
\newcommand{\lnsection}[1]{\label{#1}\nsection{#1}}
\newcommand{\lsubsection}[1]{\label{#1}\subsection{#1}}
\newcommand{\lnsubsection}[1]{\newpage\label{#1}\subsection{#1}}
\newcommand{\lsubsubsection}[1]{\subsubsection{#1}\label{#1}}
\newcommand{\lisection}[1]{\section{#1}\label{#1}\index{#1}}
\newcommand{\lisubsection}[1]{\subsection{#1}\label{#1}\index{#1}}
\newcommand{\lisubsubsection}[1]{\lsubsubsection{#1}\label{#1}\index{#1}}
\newcommand{\myref}[1]{\ref{#1} #1}
\newcommand{\sidebar}[5]{
\noindent
\definecolor{shadecolor}{#2}{#3}
\begin{minipage}{\textwidth}
\setlength{\parskip}{-11pt}
\begin{shaded}\textbf{#1: #4}\end{shaded}
\begin{boxedminipage}{\textwidth}
#5
\end{boxedminipage}
\end{minipage}
}
\newcommand{\authors}{Members of the PCGen Development Team and Community (see contributors)}
\newcommand{\csidebar}[6]{
\noindent
\definecolor{shadecolor}{#2}{#3}
\begin{minipage}{\textwidth}
\setlength{\parskip}{-11pt}
\begin{shaded}\textbf{\textcolor{#6}{#1: #4}}\end{shaded}
\begin{boxedminipage}{\textwidth}
#5
\end{boxedminipage}
\end{minipage}
}
\newcommand{\sbdevon}[2]{\sidebar{Devon's Thoughts}{gray}{0.9}{#1}{#2}}
\newcommand{\sbjames}[2]{\sidebar{James' Thoughts}{gray}{0.9}{#1}{#2}}
\newcommand{\sbtom}[2]{\sidebar{Tom's Thoughts}{gray}{0.9}{#1}{#2}}
\newcommand{\sbnote}[2]{\sidebar{Note}{gray}{0.9}{#1}{#2}}
\newcommand{\sbwhatis}[2]{\sidebar{What Is}{gray}{0.8}{#1}{#2}}
\newcommand{\sbeffect}[2]{\sidebar{Effect}{gray}{0.8}{#1}{#2}}
\newcommand{\sbexception}[2]{\sidebar{Exception}{gray}{0.7}{#1}{#2}}
\newcommand{\sbwarning}[2]{\csidebar{Warning}{gray}{0.3}{#1}{#2}{white}}
\newcommand{\sbold}[2]{\csidebar{Original Proposal}{gray}{0.3}{#1}{#2}{white}}
\newcommand{\vbar}{\hspace{0.2mm}\rule{0.2mm}{3mm}\hspace{0.2mm}}
\newcommand{\openfig}{\begin{figure}[hbt]}
\newcommand{\closefig}[1]{\vspace*{-0.15in}\caption{#1}\end{figure}}
\newcommand{\openffig}[2]{\begin{floatingfigure}[#1]{#2}}
\newcommand{\closeffig}[1]{\caption{#1}\end{floatingfigure}}
\newcommand{\nocapcloseffig}{\end{floatingfigure}}
\newcommand{\mapffig}[4]{\openffig{#1}{#2}\includegraphics[scale=0.675]{#3}\closeffig{#4}}
\newcommand{\mapffigl}[3]{\mapffig{l}{#1}{#2}{#3}}
\newcommand{\mapffigr}[3]{\mapffig{r}{#1}{#2}{#3}}
%\newcommand{\mapfig}[2]{\openfig\centerline{\includegraphics[scale=0.675]{#1}}\closefig{#2}}
%\newcommand{\subf}[3]{\subfigure[#1]{#2\includegraphics[scale=0.675]{#3}}}
%\newcommand{\maptwofig}[7]{\openfig\begin{center}\mbox{\subf{#1}{#2}{#3}\hspace{0.25in}\subf{#4}{#5}{#6}}\end{center}\caption{#7}\end{figure}}
%\newcommand{\mapfigl}[3]{\mapfig{#2}{#3}}%\mapfig{l}{#1}{#2}{#3}}
%\newcommand{\mapfigr}[3]{\mapfig{#2}{#3}}%\mapfig{r}{#1}{#2}{#3}}
%\newcommand{\mapscalefig}[5]{\openffig{#1}{#2}\includegraphics[#3]{#4}\nocapcloseffig}
\newcommand{\mapscalefig}[3]{\openfig\centerline{\includegraphics[#1]{#2}}\closefig{#3}}
\newcommand{\mapfig}[2]{\openfig\centerline{\includegraphics[scale=0.675]{#1}}\closefig{#2}}
\newcommand{\subf}[3]{\subfigure[#1]{#2\includegraphics[scale=0.675]{#3}}}
\newcommand{\maptwofig}[7]{\openfig\begin{center}\mbox{\subf{#1}{#2}{#3}\hspace{0.25in}\subf{#4}{#5}{#6}}\end{center}\caption{#7}\end{figure}}
\newcommand{\basis}{\noindent\textem{Basis for this requirement:} }
\newcommand{\impl}{\noindent\textem{Implementation:} }
\newcommand{\under}{\noindent\textem{Underlying Requirement(s):} }
\renewcommand{\baselinestretch}{1.0}
\hyphenpenalty=9999
\exhyphenpenalty=9999
\pagestyle{empty}
\sloppy
%\thispagestyle{empty}
%============================================================================
\begin{document}
\begin{center}
\vspace*{2.5in}
{ \Huge PCGen 5.16 Architecture Overview }
{ \huge Version \version (Draft) }
{ \huge \system Sub-Document }
{ \Large Document Index Number \docnum }
{ \large \authors }
\lastupdate
\end{center}
\vspace*{2.5in}
\noindent Portions \copyright 2006-2008 Individually by \authors.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License.
To view a copy of this license, visit \url{http://creativecommons.org/licenses/by-nc-sa/2.5/}
or send a letter to Creative Commons, 543 Howard Street, 5th Floor,
San Francisco, California, 94105, USA.
\noindent ``d20 System'' is a trademark or registered trademark
of Wizards of the Coast, Inc., a subsidiary of Hasbro, Inc., in the United States and/or other countries.
All other trademarks mentioned are the property of their respective owners.
\newpage
\pagestyle{plain}
\setcounter{page}{1}
\pagenumbering{roman}
\tableofcontents
\listoftables
\listoffigures
\lnsection{Purpose and Scope of this Document}
\setcounter{page}{1}
\pagenumbering{arabic}
In 2005, the PCGen project formed an Architecture team to help define the
path forward for the software structure of PCGen. The subsequent discussions
and architecture development led to a new proposed architecture for PCGen.
This document is one of a series of documents resulting from those efforts.
This document is primarily intended to
communicate the architecture of PCGen \pcgenvers \systemEOS. See Figure
\ref{Fig: Layered Subsystems} to get an understanding of the place of the \system
within the entire PCGen code base and architecture.
This document provides a detailed overview of the architecture of a specific
portion of PCGen. The overall architecture and further details of
other subsystems and processes are provided in separate documents.
\openfig\centerline{\includegraphics[width=6.5in]{rsf.pdf}}\closefig{\label{Fig: Layered Subsystems}Layered View of PCGen \pcgenvers subsystems}
\lnsection{Contributors}
Contributors to this document, either directly or by feedback and comments:\\
\contributors
\lsection{Document Conventions}
Java Classes, Interfaces, or other forms of items that can be found in the code are
presented in this document in \textem{italics}.
\lnsection{\system - Background}
The \system is one of the major components of PCGen, and a focus area for PCGen \pcgenversEOS.
The \system is responsible for loading game system and component data from the
persistence data file format and saving it back into that data file format. It is aware of the
internal storage of information within PCGen only to the point it is required to store that
information for use by the core of PCGen. The \system is not capable of interpreting much
in the way of meaning of the values it is storing.
This document describes the changes being implemented in the \systemEOS, and
provides general guidance on how to interact with the interface/API of the \systemEOS.
\lsubsection{\system Major Concepts}
\lsubsubsection{The Data Persistence Format}
The data persistence format is typically known to the data developers as ``Game Mode'',
``PCC'' or ``LST'' files. These vary slightly in syntax, but are generally tab-delimited
text files. Generally ``PCC'' files are used to identify which ``LST'' files should be
loaded, although there is also limited support for some Global tokens to be used in
``PCC'' files.
\lsubsubsection{Data Load}
Load is the set of events that occurs when data persistence files are loaded into PCGen.
The Data Load process is intended to include full parsing of the data in the data
persistence files (see \myref{Catch Errors Early}) and not while Player Characters are
being created. The Data Load process is shown in Figure \ref{Fig: Flow of Data Load}.
\lsubsubsection{Runtime}
Runtime processes and events are those that take place while a Player Character is
being created. The \system is not responsible for Runtime behavior, although the
\system is responsible for ensuring that the Data Load process produces entries
in the Rules Data Store that minimizes the possible errors or unexpected conditions
at Runtime.
\lnsection{\system Sub-components}
\openfig\centerline{\includegraphics[width=6.5in]{rse.pdf}}\closefig{\label{Fig: Subcomponents and Relationships}View of PCGen \pcgenvers \system components and context to other subsystems of PCGen }
\openfig\centerline{\includegraphics[width=6.5in]{rslf.pdf}}\closefig{\label{Fig: Flow of Data Load}Flow of Data Load in PCGen \pcgenvers \system }
\openfig\centerline{\includegraphics[width=6.5in]{rsuf.pdf}}\closefig{\label{Fig: Flow of Data Unload}Flow of Data Unload in PCGen \pcgenvers \system }
\lsubsection{System Loader}
The System Loader is responsible for receiving a set of Campaigns that should be
loaded during Data Load. These Campaigns are analyzed by the System Loader, and
then the System Loader provides a list of Source Entries to each File Loader to
indicate what files (or more generally, URIs) should be loaded.
For backwards compatibility in the transition from PCGen 5.14, the System Loader is
in \textem{LstSystemLoader} in the \textem{pcgen.persistence.lst} package.
\lsubsection{File Loaders}
File Loaders are Classes that perform the specific actions necessary to load a specific
LST file from the file system (or URI) and parse the file into
individual tags. The File Loaders then identify the appropriate Token and pass the
value of the tag to the individual Token to be parsed.
For backwards compatibility in the transition from PCGen 5.14, the File Loaders are in the
\textem{pcgen.persistence} package.
\lsubsection{Tags and Tokens}
Data that defines the rule constructs of a Class, Race, Language, and other characteristics
and attributes of a PlayerCharacter are stored in LST files. These LST files are parsed to
import the information from those LST files into objects within the PCGen program.
For purposes of this document, the individual entries in the data persistence
files will be referred to as ``Tags''. To store information, these tags separate the ``key''
(used to indicate the tag name) and the ``value'' (the information the tag is conveying to
PCGen) by a colon. A simple example can be taken from the Common Language in the
Language LST file from SRD 3.0:\\
\texttt{TYPE:Spoken}
This indicates the ``TYPE'' tag is indicating a value of ``Spoken''.
For purposes of this document, the code that parses the value of a tag and translates the
tag syntax into the internal data structure of PCGen is called a Token.
Occasionally, you will find the terms ``tag'' and ``token'' used differently;
which name is used will likely depend on the perspective of the
speaker.\footnote{This document makes an effort to use the terms Tag and Token in the context
they are defined in this section. This is a decision for clarity, and is not a judgement
of the ``best'' method for referring to the items in the data persistence format or the code.}
Types of tokens for PCGen \pcgenvers are located in \textem{pcgen.rules.persistence.token}.
Table \ref{Token Interfaces} explains the different types of tokens, in order
to clarify the use of each of the token interfaces.
\begin{table}
\begin{center}
\begin{tabular}{ | m{2.1in} | m{4in} | }
\hline
\textbf{Token Interface from \textem{pcgen.rules.persistence.token}} & \textbf{Interface Description} \\ \hline \hline
CDOMToken & A \textem{CDOMToken} is a PCGen \pcgenvers token that has a \textem{parse()} method for processing the value of a tag and a \textem{getTokenName()} method to identify the name of the tag the Token can process. Note that a \textem{CDOMToken} does not require \textem{unparse()}, as compatibility tokens from previous revisions would not implement that behavior. \\ \hline
CDOMPrimaryToken & A \textem{CDOMPrimaryToken} is a \textem{CDOMToken} that is active (not deprecated) in the current version of PCGen, and thus implements the \textem{unparse()} method. \\ \hline
CDOMSubToken & A \textem{CDOMSubToken} is a \textem{CDOMToken} that is designed to operate for subtokens (see \myref{Subtokens}). \\ \hline
CDOMSecondaryToken & A \textem{CDOMSecondaryToken} is a \textem{CDOMSubToken} that is active (not deprecated) in the current version of PCGen, and thus implements the \textem{unparse()} method. \\ \hline
CDOMCompatibilityToken & A \textem{CDOMCompatibilityToken} is a \textem{CDOMToken} that implements a tag syntax from a previous version of PCGen, see \myref{Token Compatibility}. Once deprecated, a \textem{CDOMPrimaryToken} would become a \textem{CDOMCompatibilityToken}. \\ \hline
CDOMCompatibilitySubToken & A \textem{CDOMCompatibilitySubToken} is a CDOMSubToken that implements a tag syntax from a previous version of PCGen, see \myref{Token Compatibility}. Once deprecated, a \textem{CDOMSecondaryToken} would become a \textem{CDOMCompatibilitySubToken}. \\ \hline
ClassWrappedToken & A \textem{ClassWrappedToken} is a specialized \textem{CDOMToken} designed to provide compatibility for using CLASS tokens on the level 1 CLASSLEVEL line, see \myref{Class Wrapped Token}. \\ \hline
PreCompatibilityToken & A \textem{PreCompatibilityToken} is a token for wrapping the PCGen 5.14-type Prerequisite tokens into \pcgenversEOS-type tokens, see \myref{Prerequisite Tags}. \\ \hline
\end{tabular}
\end{center}
\caption{Token Interfaces and usage in PCGen \pcgenvers}
\label{Token Interfaces}
\end{table}
Tokens that implement these interfaces, which are effectively extensions to the \systemEOS,
are found in the \textem{plugin.lsttokens} package.
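The exact interfaces in \textem{pcgen.rules.persistence.token} carry additional details, but the
following simplified Java sketch, based only on the descriptions in Table \ref{Token Interfaces},
illustrates the relationship between the parse-only and parse/unparse interfaces. The parameters
shown for \textem{parse()} are assumptions made for illustration.
\begin{verbatim}
// Simplified sketch of the token interfaces; the method names follow the
// table above, but the real PCGen signatures differ in detail.
interface CDOMToken<T>
{
    String getTokenName();                                   // tag this token handles
    boolean parse(LoadContext context, T obj, String value); // true = value was valid
}

interface CDOMPrimaryToken<T> extends CDOMToken<T>
{
    // Active tokens can also write the internal state back out as tag values.
    String[] unparse(LoadContext context, T obj);            // null = tag not used
}
\end{verbatim}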
\sbnote{A Note of Caution on Tags/Tokens}{Different tags have different behavior with respect to
overwriting or appending values. Some tags may only allow a single value (e.g. RACETYPE
from Race LST files), and subsequent references to that tag in the same object
will overwrite the original value. Other tags are effectively lists
(e.g. the Global TYPE tag), and subsequent calls to that tag
will append values to the list. The behavior should be noted within the tag documentation
distributed with PCGen.}
\lsubsection{Load Context}
The Load Context is the translator for information loaded by the
File Loaders and Tokens. The translation is from the persistence data file format
to the internal structure used by the Rules Data Store. The LoadContext provides
various services; some are discussed in later sections.
Context Classes of the \system are found in the \textem{pcgen.rules} package.
\lsubsection{Token Library}
The Token Library is the storage location for the Tokens, and is queried by the LoadContext
once the key and value of a tag has been established by a File Loader. The Token Library is
initialized by the Startup System when plugins are loaded by PCGen.
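As a rough illustration only (the real \textem{TokenLibrary} also distinguishes token types and
the file types in which a tag may legally appear), the library can be pictured as a map from tag
names to Token instances, populated when plugins are loaded:
\begin{verbatim}
import java.util.HashMap;
import java.util.Map;

// Rough sketch only; the class shape and method names are assumptions.
final class TokenLibrary
{
    private static final Map<String, CDOMToken<?>> TOKENS =
        new HashMap<String, CDOMToken<?>>();

    // Called by the startup/plugin system for every Token that is discovered.
    static void register(CDOMToken<?> token)
    {
        TOKENS.put(token.getTokenName(), token);
    }

    // Called (via the LoadContext) once a File Loader has isolated a tag name.
    static CDOMToken<?> getToken(String tagName)
    {
        return TOKENS.get(tagName);
    }
}
\end{verbatim}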
\lnsection{Functional Requirements}
The following functional requirements are provided for use in understanding the basis for the PCGen \pcgenvers
\system architecture. Functional requirements are constraints on features as they appear to a user
of PCGen \pcgenversEOS. Many of these Functional Requirements are shared with the overall PCGen
Functional Requirements.
\lsubsection{Compatibility with Previous Versions}
PCGen \pcgenvers must be capable of fully loading PCGen 5.14 LST data files. Data from older 5.x releases
(in particular PCGen 5.12 and 5.10) may be compatible with PCGen \pcgenversEOS; however, there are known
ambiguities that prevent 100\% compatibility with older releases.
This requirement must be met. Other features of the system (requirements and other good design guidelines)
will be sacrificed to meet this requirement.
\basis There is a tremendous investment in the PCGen 5.x code, data and documentation. This compatibility
requirement ensures the ability to leverage that time investment, especially in data and documentation.
\lsubsection{Increased Flexibility}
Eliminating hard coded values and special cases that currently exist in the PCGen code base will
allow PCGen to more easily support non-d20 systems. In addition, increased flexibility will provide
for faster feature enhancement, providing additional benefit to users.
\basis Desire for faster turnover of feature requests, and long-term strategy to expand
the PCGen universe to include non-d20 based game systems to increase function
for existing users and to attract new users to PCGen.
\lnsection{Structural Requirements}
The following structural requirements are provided for use in understanding the basis for the
PCGen \pcgenvers \system architecture. Structural requirements are constraints on features as they appear
to a developer of PCGen \pcgenversEOS.\footnote{It is recognized that some of these items would
qualify more as ``design'' than ``architecture'', as they are general features or characteristics
of well-written software systems. Without any disrespect to software architecture purists, we
include a number of those design characteristics, as they help to contrast the PCGen
\pcgenvers architecture with the architecture of earlier versions of PCGen.} Many of these Structural
Requirements are shared with the overall PCGen Structural Requirements.
\lsubsection{Minimize Process/Structural Models}
PCGen \pcgenvers \system should have a minimal set of design structures used to specify the functions
required to build a Player Character.
\basis This minimizes the number of mental models a developer must understand. Reduced quantity
of design patterns and structures improves the ability of new (and existing!) developers to
understand the code. This also allows greater code reuse and reduces code duplication (which
is subject to copy/paste error). Selection of appropriate models will also reduce the number
of exceptions in the code, eliminating further risk of bugs and confusion for developers.
\lsubsection{Information Hiding}
The format of data files on disk should be independent of the processing of a game system or
Player Character.
\basis This insulates the code (modifying a player character or the game system data)
from changes in the data file format. This increases the flexibility to add features to
PCGen without core changes. This facilitates unit testing by improving component isolation.
\lsubsection{Data Encapsulation}
There should be defined and limited interfaces between modules of PCGen.
\basis This improves code maintainability. This truly insulates modules of PCGen from
changes elsewhere in the system. This therefore facilitates parallel development. This
also improves the ability to unit test the code, as ``mock'' objects that implement the
framework can be used for testing.
\lsubsection{Catch Errors Early}
Errors in the data files should be caught during data persistence file load, and should not
trigger runtime errors.
\basis Because the \system is independent of the internal data structure,
all items will be resolved at Data Load. This will ensure that all objects
in a given namespace possess a unique KEY, regardless of the source file. In addition,
all object references can be validated to ensure each refers to an object that was actually
constructed and loaded.
\lsubsection{Stable Code Structure}
Dependencies between packages and classes should form a Directed Acyclic Graph.
\basis The PCGen code base is over 2,000 classes. This is a significant code base and
requires isolating subsystems and defining dependencies in order to minimize the impact
of code changes. Improved code structure also facilitates testing. Eliminating tangles
in Class dependency will improve the ability to write true unit tests (tests that work on a
single Class). This means it will be possible to catch smaller errors due to incorrect
modifications of the PCGen code base. Improved structure also facilitates understanding
of the code by developers. The overall impact of good code structure is seen by both
developers and end users as an improved speed and ability to modify the PCGen code.
\lsubsection{Avoid Contracts}
Two forms of contracts should be avoided: (1) When a developer makes a code modification,
the developer should not be forced to make a matching modification in another location in the
code. (2) When a change is made to the internal data structure, a significant amount of
``reorganization'' to ensure a valid data structure should not be necessary.
\basis Contracts cause problems because they introduce bugs into software when matching changes
are not made. They also make it more difficult for developers to understand the architecture
of a system, because they are forced to focus on adhering to the contracts. Data structure
contracts can result in invalid data structures, and often lead to performance issues as the
data structure is validated and corrected.
\lsubsection{Minimize Order Dependency}
The number of order dependent operations should be minimized.
\basis Order dependency can cause significant issues, especially as it is hard to maintain
accurate documentation of such restrictions. Operations that are not order dependent can
also be parallelized (e.g. on multi-processor systems) to improve performance.
\lnsection{\system Key Design Decisions}
\lsubsection{Token/File Loader System}
This is a shared design characteristic with PCGen 5.14. File Loaders are key components of
the \systemEOS. File Loader instances are specific to a given file type. When processing
a file, the File Loader splits the file into separate lines, splits the lines (if necessary)
into separate tags, and then submits the tags to the Tokens.
\under \myref{Information Hiding}, \myref{Data Encapsulation}, \myref{Increased Flexibility}
\basis This abstracts specific individual components of the data persistence format from the
internal data structure (and each other).
\impl The interactions of Tokens, File Loaders, and other elements of the \system are shown in
Figure \ref{Fig: Flow of Data Load}.
File Loaders are created by the \system for each file type. While these may share
a single Class (for code reuse), each instance is specialized to a specific data persistence
format file type. The list of available tags is also specific to a given data persistence
file type. This allows features to be limited to certain objects to avoid nonsensical
situations (e.g. you can't assign types of material components to a Race). A collection
of Global tags that can be used in nearly all data persistence files is also available.
Each Token Class is stored in a separate file, independent of the core of PCGen, to allow each
token to be independently updated, removed, or otherwise manipulated without altering or
impacting other Tokens. Individual Token files are in the \textem{pcgen.plugin.lsttokens} package.
These persistence Tokens are non-abstract Classes that
implement the \textem{LstToken} interface. When PCGen is launched, all plugins of PCGen are
evaluated, and Tokens are specifically placed into the \textem{TokenLibrary}. The Tokens each
have a method (\textem{getTokenName()}) that identifies the tag the Token processes.
Keeping each Token in an individual Class keeps the Token Classes very simple,
which makes them easy to test, modify, and understand (as they are effectively atomic to the
processing of a specific token). One goal of the PCGen \system is to ensure that all of the
parsing of LST files is done within the Tokens and not in the core of PCGen. This makes
adding new tags to the LST files reasonably painless (though changes to the core or
export system may also be required to add the necessary functionality). It at least facilitates
the long-term goal of altering the behavior of PCGen without forcing a recompile of core PCGen
code.
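A rough sketch of the per-line work performed by a File Loader is shown below. The tab-separated
tag layout, the \textem{objectFor()} helper, and the unchecked cast are assumptions made for
illustration only; success and failure handling is covered under
\myref{Only valid Tokens may impact the Rules Data Store}.
\begin{verbatim}
// Sketch only: assumes tab-separated tags and NAME:VALUE tag syntax.
for (String line : lines)                        // lines read from the source URI
{
    String[] fields = line.split("\t");
    CDOMObject obj = objectFor(context, fields[0]);  // hypothetical helper
    for (int i = 1; i < fields.length; i++)
    {
        int colon = fields[i].indexOf(':');
        if (colon < 0)
        {
            continue;                            // not a KEY:VALUE tag
        }
        String key = fields[i].substring(0, colon);
        String value = fields[i].substring(colon + 1);
        CDOMToken<CDOMObject> token =
            (CDOMToken<CDOMObject>) TokenLibrary.getToken(key);  // unchecked cast
        if (token != null)
        {
            token.parse(context, obj, value);    // commit/rollback handled later
        }
    }
}
\end{verbatim}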
\sbnote{On Transition from PCGen 5.14}{PCGen 5.14 used a slightly different storage system
for tokens. It stored tokens in a \textem{TokenStore}, which was effectively
a Map of Maps. The first Map was keyed by an interface identifying the type of Token, and the second
Map was used to identify the token from the tag name. As the conversion to the new Token
system is being done gradually, some tokens in PCGen \pcgenvers may remain in the PCGen 5.14
style. New tokens will always extend (perhaps indirectly through another Interface)
\textem{CDOMToken}.}
\lsubsection{Data Modification during Data Load}
This is a shared design characteristic with PCGen 5.14. The \system supports
modifying, copying or forgetting objects defined in the data persistence files.
\under \myref{Information Hiding}, \myref{Data Encapsulation}, \myref{Increased Flexibility}
\basis This allows users to modify base data to easily produce new Races, Abilities, or other items
without risk of copy/paste error.
\impl The data persistence file format supports three special functions that can be performed
on data persistence entries.
\begin{description}
\item[.COPY] allows a data file to copy an existing object. This .COPY entry need not worry about
file load order (see \myref{Data Persistence File Load Order Independence}). The value preceding
the .COPY string identifies the object to be copied. This identifier is the KEY
(or KEY and CATEGORY) of the object to be copied. The identifier for the new copy is placed
after an equals sign that follows the .COPY String, e.g.:\\
\texttt{Dodge.COPY=MyDodge}
\item[.MOD] allows a data file to modify an existing object. This .MOD entry need not worry about
file load order (see \myref{Data Persistence File Load Order Independence}). All .MOD entries will
be processed after all .COPY entries, regardless of the source file. The value preceding
the .MOD string identifies the object to be modified. This identifier is the KEY
(or KEY and CATEGORY) of the object to be modified. If more than one .COPY token produces an
object with the same identifier, then a duplicate object error will be generated.
\item[.FORGET] allows a data file to remove an existing object from the Rules Data Store. This
.FORGET entry need not worry about file load order
(see \myref{Data Persistence File Load Order Independence}). All .FORGET entries will
be processed after all .COPY and .MOD entries, regardless of the source file. The value preceding
the .FORGET string identifies the object to be removed from the Rules Data Store.
\end{description}
\lsubsection{Subtokens}
This is a shared design characteristic with PCGen 5.14. Some tags have complex behavior
that significantly differs based on the first argument in the value of the tag. In order
to simplify tag parsing and Token code, these Tokens implement a Sub-token structure,
which delegates parsing of the tag value to a Token specialized to the first argument
in the value of the tag.
\under \myref{Data Encapsulation}, \myref{Information Hiding}, \myref{Increased Flexibility}
\basis This design is primarily intended to separate out code for different subtokens. This
provides increased ability to add new subtokens without altering existing code. This provides
increased flexibility for developers, and ensures that unexpected side effects from code changes
don't impact other features of PCGen.
\impl The flow of events during Data Load when Subtokens are present is shown as an optional
series of events in Figure \ref{Fig: Flow of Data Load}.
The LoadContext is capable of processing subtokens for a given Token. Any token
which delegates to subtokens can call \textem{processSubToken(T, String, String, String)}
from LoadContext in order to delegate to subtokens. This delegation will return a boolean
value to indicate success (\textem{true}) or failure (\textem{false}) of the delegation.
The exact cause of the failure is reported to the \textem{Logging} utility.
Note that it is legal for a subtoken to be valid only in a single object type (such as
a Race), even if the ``primary'' token is accepted universally. This greatly simplifies
the restriction of subtokens to individual file types without placing a burden on
the primary token to establish legal values. Resolution of those restrictions is handled
entirely within the LoadContext.
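For illustration, a primary token whose value begins with a subtoken selector might delegate as
follows; the \texttt{|} separator and the argument order of \textem{processSubToken()} are
assumptions made for this sketch.
\begin{verbatim}
// Sketch of delegation to subtokens; the '|' separator is illustrative only.
public boolean parse(LoadContext context, CDOMObject obj, String value)
{
    int sep = value.indexOf('|');
    String subKey = (sep < 0) ? value : value.substring(0, sep);
    String rest = (sep < 0) ? "" : value.substring(sep + 1);
    // The LoadContext locates the subtoken registered for (this tag, subKey)
    // and returns true only if that subtoken reports success.
    return context.processSubToken(obj, getTokenName(), subKey, rest);
}
\end{verbatim}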
\lsubsection{\system I/O}
The input and output of data persistence information should be an integral part of the
\systemEOS. In PCGen 5.14, Tokens and the \system were only responsible for input from
the data persistence file format. In PCGen \pcgenversEOS, the \system is responsible
for both input and output.
\under \myref{Data Encapsulation}, \myref{Information Hiding}
\basis Adding output to the persistence system provides the ability to reuse the
\system in a data file editor, as well as the runtime system. This sharing
of code helps to guarantee the integrity of the data file editor. Such a structure
also facilitates unit testing, as the \system can be tested independently
of the core code.
\impl Each token has the ability to both ``parse'' and ``unparse'' information
for the \systemEOS. Parsing is the act of reading a token value from a data
persistence file and placing it into the internal rules data structure. Unparsing
is the act of reading the internal data structure and writing out the appropriate
syntax into a data persistence file.
In addition to other benefits, this parse/unparse structure allows Tokens to be
tested without major dependence on other components of PCGen. These tests are found in
the \textem{plugin.lsttokens} package of the \textem{code/src/utest} source directory.
As explained in Section \ref{Token/File Loader System} (Token/File Loader System), the File Loaders
separate out the tags in an input file and call the parse method on the appropriate
Tokens. In order to unparse a loaded object back to the data persistence syntax,
all Tokens that could be used in the given object type must be called.
Unparsing a particular object is managed by the \textem{unparse(T)}
method of \textem{LoadContext}. This process includes delegation of the unparse to all
subtokens (see Section \ref{Subtokens}), as depicted in Figure \ref{Fig: Flow of Data Unload}.
Because all tokens are called when unparsing an object, it is important that tokens
properly represent when they are not used. This is done by returning \textem{null} from
the \textem{unparse} method of the Token.
Some tokens can be used more than once in a given object (e.g. BONUS), and thus must be
capable of indicating each of the values for the multiple tag instances. Since Tokens
do not maintain state, the unparse method must only be called a single time to get all
of the values; thus, the unparse method returns an array of String objects to indicate
the list of values for each instance of the tag being unparsed.
The context is responsible for prepending the name of the tag to the returned value;
just as the token is not responsible for removing or ignoring the name of the tag in
the value passed into the \textem{parse} method, it does not prepend the name of the tag
to the value(s) returned from the \textem{unparse} method.
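A rough sketch of how the unparse side might assemble the output for one object follows;
\textem{tokensFor()}, \textem{getKeyName()}, and the tab-separated output format are assumptions
made for illustration.
\begin{verbatim}
// Sketch of assembling the output line for one object from its tokens.
StringBuilder line = new StringBuilder(obj.getKeyName());   // hypothetical accessor
for (CDOMPrimaryToken<CDOMObject> token : tokensFor(obj))   // hypothetical helper
{
    String[] values = token.unparse(context, obj);
    if (values == null)
    {
        continue;                      // this token is not used by the object
    }
    for (String value : values)
    {
        // The loader/context, not the token, prepends the tag name.
        line.append('\t').append(token.getTokenName()).append(':').append(value);
    }
}
\end{verbatim}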
\lsubsection{Independent Data Persistence Interface}
The Data Persistence format must be independent of internal data structure. (The subsystems of PCGen
other than the \system should not have detailed knowledge of the data persistence file format).
\under \myref{Information Hiding}, \myref{Catch Errors Early}
\basis This abstracts the data persistence format from the internal data structure. It forces the entire
persistence contents to be parsed on data load. This ensures any errors in data files are
caught in the \system at data load, rather than at runtime.
\impl During the load of data from the data persistence format, each Token is required to
fully parse the information and validate the information as much as possible. This ensures
that errors in the data files are caught as they are loaded, and not at runtime. The \system
is responsible for ensuring data integrity of the rules data to the rest of the PCGen system,
and the Tokens are the ``front lines'' of fulfilling that responsibility.
Beyond the tokens, a load subsystem translates between the data persistence file format parsed by
the Tokens and the internal data structure.\footnote{Some might argue this
system fits the Data Mapper design pattern, although it's not strictly using relational databases.}
This system is currently known as a \textem{LoadContext}. The details of the translation take
various forms, and those structures are explained in later sections.
\lsubsection{Only valid Tokens may impact the Rules Data Store}
There is a risk that a partially-parsed Token from an invalid data persistence entry could
lead to an unknown state within the Rules Data Store. Therefore, a Token should only be
responsible for controlling the state of the Rules Data Store if the token parse
completes successfully.
\under \myref{Information Hiding}
\basis This greatly simplifies the implementation of Tokens, as they are not required to
analyze or defer method calls to the \textem{LoadContext} until after the data persistence syntax
is established to be valid.
\impl During the load of data from the data persistence format, each Token may fully parse
the provided value and make any necessary calls to the LoadContext. This can be done even if
subsequent information indicates to the Token that there is an error in the Token value.
Specifically, individual Tokens should be free to take any action on the \textem{LoadContext},
and are not responsible for the consequences of those method calls unless the Token indicates
that the value from the data persistence format was valid. This indication of
validity is made by returning \textem{true} from the parse method of the Token.\footnote{This is
effectively the \textem{Unit of Work} design pattern.}
If a Token returns \textem{true}, indicating the token value was valid, then the File Loader that
called the Token is responsible for telling the \textem{LoadContext} to \textem{commit()}
the changes defined by the Token. This process is shown in the ``Transaction Success Response''
section of Figure \ref{Fig: Flow of Data Load}.
If the Token returns \textem{false}, then the File Loader is
responsible for calling the \textem{rollback()} method of \textem{LoadContext} to indicate that
no changes should be made to the Rules Data Store and that the tentative changes proposed by the Token
should be discarded. This process is shown in the ``Transaction Failure Response'' section
of Figure \ref{Fig: Flow of Data Load}.
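In code, the File Loader's handling of a single parsed tag therefore looks roughly like the
following sketch; the logging call is illustrative only.
\begin{verbatim}
// Sketch of the transaction handling described above.
boolean valid = token.parse(context, obj, value);
if (valid)
{
    context.commit();     // apply the tentative changes to the Rules Data Store
}
else
{
    context.rollback();   // discard the Token's tentative changes
    Logging.errorPrint("Invalid " + token.getTokenName()   // illustrative logging
        + " value: " + value);
}
\end{verbatim}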
\lsubsection{Data Persistence File Load Order Independence}
Items in the rules structure may refer to each other, by granting certain features, possessing
certain prerequisites, or by other means. For example, an Ability A may grant Ability B, but
we cannot reasonably require that Ability A appears before Ability B. More specifically, due
to known interactions, it is impossible to choose a load order for files and entries
that guarantees objects will be constructed before references to those objects
are encountered. Order independence of persistent data is therefore an architectural requirement.
\under \myref{Information Hiding}, \myref{Data Encapsulation}, \myref{Catch Errors Early}
\basis Allowing references to be used before objects are constructed ensures full parsing of the data
persistence file syntax during load, which improves error catching capability at load time and
should improve runtime performance.
\impl A two pass load system is required in order to ensure separation of the data
persistence format and the internal data structure. In
PCGen \pcgenvers, any Token may request a reference to an object, regardless of whether that object has
been constructed in the \textem{LoadContext}. This is done through a \textem{ReferenceContext}.
The \textem{ReferenceContext} is capable of returning three types of references.
\begin{description}
\item[Single References] refer to a single object, and are useful in situations where single elements (or
perhaps lists of single elements) are specified in the value of the Token.
\item[Type References] refer to groups of objects, and are used in situations where entire types are
referenced by the TYPE= identifier.
\item[All References] refer to all objects of a given type, and are used in situations where the
special ``ALL'' or ``ANY'' identifiers are used.
\end{description}
These references requested by the tokens can then be placed into objects (Abilities, Skills, etc.)
and the underlying object(s) to which the reference refers can be established at runtime.
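The method and type names in the following sketch are hypothetical, but they illustrate how a
Token might request each of the three reference flavors from the \textem{ReferenceContext}:
\begin{verbatim}
// Hypothetical accessor names; the actual method names and generics differ.
CDOMReference<Language> single = refContext.getSingleReference(Language.class, "Common");
CDOMReference<Language> byType = refContext.getTypeReference(Language.class, "Spoken");
CDOMReference<Language> all    = refContext.getAllReference(Language.class);
\end{verbatim}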
There are two issues introduced with a system that is capable of referencing objects before
they are constructed.
The first issue is that references might be made to objects that don't exist. This problem
cannot be detected until the entire load operation is complete. The \system makes a call to
the \textem{validate()} method of \textem{ReferenceContext} to test whether any references were
made where the appropriate referred-to object was not constructed during data persistence file load.
In order to provide for minimal functionality without truly understanding the reference, PCGen
\pcgenvers constructs a dummy (empty) object with the given identifier.
Second, the References constructed during data persistence file load
must be resolved before they are used during ``runtime''. Therefore, the
\system is responsible for resolving any references after a collection of Campaigns are loaded.
This resolution is driven through the \textem{resolveReferences()} method of \textem{LoadContext}.
Due to the construction of dummy objects during the \textem{validate()} step,
\textem{resolveReferences()} must be called after \textem{validate()}.
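The resulting load sequence can be sketched as follows; \textem{loadCampaigns()} and the variable
names are stand-ins for the actual driver code.
\begin{verbatim}
// Order matters: validate() may construct dummy objects that
// resolveReferences() then relies upon.
loadCampaigns(context, selectedCampaigns);   // parse all data persistence files
referenceContext.validate();                 // detect references to unconstructed objects
context.resolveReferences();                 // bind references to the loaded objects
\end{verbatim}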
\lsubsection{Shared Persistence System with Editor}
The data persistence system should be usable for both a data file editor and the
runtime character generation program.
\under Code Reuse (general design characteristic), \myref{Catch Errors Early}
\basis A significant investment made in ensuring that persistent data is read
without errors should be reused across both a data file editor and the runtime
system. Consolidation reduces the risk of error and ensures that the editor
will always be up to date (a problem in PCGen 5.14). Furthermore, additional
editing capabilities (e.g. editing data in place) that are not available in PCGen
5.14 can be added once a full-capability editor is available.
\impl The \system is responsible for tracking detailed changes made by the Tokens during
Data Load (see \myref{Only valid Tokens may impact the Rules Data Store}). This
change-tracking information allows the load system to serve as both a runtime load system and a
file editor load system.
As noted in \myref{\system I/O}, tags may overwrite previous values
or add to the set of values for that tag. In the case of an editor,
it is critically important not to lose information that
would later be overwritten in a runtime environment. A simple example would be
the use of a .MOD to alter the number of HANDS on a Race. This alteration should
be maintained in the file that contained the .MOD, and the value (or unspecified
default) in the original Race should not be lost. This is done by tracking the
URI from which data is loaded.
In order to maintain simplicity in the Tokens, they are kept URI-ignorant. File Loaders
are responsible for calling the \textem{setSourceURI(URI)} method on LoadContext
to identify the source of data being processed in the \textem{parse()} method of a Token.
File Loaders are also responsible for calling the \textem{setExtractURI(URI)} method
on LoadContext to identify any restriction on data that should be written out
during calls to the \textem{unparse()} method of a Token. The LoadContext is responsible
for restricting responses to the extract URI in return values from any methods used by
\textem{unparse()} to extract information from the LoadContext.
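The division of labor is easiest to see in a sketch of the calling code; variable names are
illustrative and file iteration is omitted.
\begin{verbatim}
// Load side: identify the source before any parse() calls for this file.
context.setSourceURI(fileURI);
token.parse(context, obj, value);

// Editor-save side: restrict what unparse() may see to a single source file.
context.setExtractURI(fileURI);
String[] values = token.unparse(context, obj);   // only data loaded from fileURI
\end{verbatim}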
\lsubsection{Token Compatibility}
***PLACEHOLDER: Describe Compatibility system and impact on TokenLibrary***
\lnsection{Known Issues}
\lsubsection{Prerequisite Tags}
Currently the Prerequisite tags are an exception to the parsing system. The Prerequisite tags
have a prefix of ``PRE'' and are followed by the Prerequisite name, e.g. PREFEAT. This means that
the Prerequisite tags do not follow the traditional method of having a unique name before
the colon. Also, Prerequisite tags can have a leading \texttt{!} to negate the Prerequisite.
In order to address this situation of a different token definition system, the
\textem{PreCompatibilityToken} provides a wrapper into the new PCGen \pcgenvers token syntax.
There is a feature request to convert the Prerequisite tags into two separate buckets, PRE: and REQ:
(Prerequisites and Requirements) based on their current behavior. This is (probably) not within
the scope of PCGen \pcgenversEOS.
\lsubsection{Class Wrapped Token}
The point behind a ClassWrappedToken is to provide compatibility for what may be bad
behavior in data files.
Many Class tokens in PCGen 5.14 ignore the class level, so they are technically Class tags
and not CLASSLEVEL tags. Yet, PCGen 5.14 allows those tags to
appear on class level lines. This is a bit deceptive to users, in that the effect will always
be on the class and will not appear at the specified level.
Unfortunately, one cannot simply remove support for using CLASS tokens on CLASSLEVEL lines,
because if they are used at level 1, then they are equivalent to appearing on a CLASS line.
Certainly, the data monkeys use it that way. For example, Blackguard in RSRD advanced uses
EXCHANGELEVEL on the first level line.
The entire ClassWrappedToken system is therefore a workaround for data monkeys using CLASS
tokens on CLASSLEVEL lines, and it should only work at level one; otherwise,
expectations for when the token will take effect are not met.
%\newpage
%\printindex
\end{document}
%% For double-blind review submission, w/o CCS and ACM Reference (max submission space)
\documentclass[acmsmall]{acmart}\settopmatter{printfolios=true,printccs=false,printacmref=false}
%% For double-blind review submission, w/ CCS and ACM Reference
%\documentclass[acmsmall,review,anonymous]{acmart}\settopmatter{printfolios=true}
%% For single-blind review submission, w/o CCS and ACM Reference (max submission space)
%\documentclass[acmsmall,review]{acmart}\settopmatter{printfolios=true,printccs=false,printacmref=false}
%% For single-blind review submission, w/ CCS and ACM Reference
%\documentclass[acmsmall,review]{acmart}\settopmatter{printfolios=true}
%% For final camera-ready submission, w/ required CCS and ACM Reference
%\documentclass[acmsmall]{acmart}\settopmatter{}
%% Journal information
%% Supplied to authors by publisher for camera-ready submission;
%% use defaults for review submission.
\acmJournal{PACMPL}
\acmVolume{1}
\acmNumber{CONF} % CONF = POPL or ICFP or OOPSLA
\acmArticle{1}
\acmYear{2018}
\acmMonth{1}
\acmDOI{} % \acmDOI{10.1145/nnnnnnn.nnnnnnn}
\startPage{1}
%% Copyright information
%% Supplied to authors (based on authors' rights management selection;
%% see authors.acm.org) by publisher for camera-ready submission;
%% use 'none' for review submission.
\setcopyright{none}
%\setcopyright{acmcopyright}
%\setcopyright{acmlicensed}
%\setcopyright{rightsretained}
%\copyrightyear{2018} %% If different from \acmYear
%% Bibliography style
\bibliographystyle{ACM-Reference-Format}
%% Citation style
%% Note: author/year citations are required for papers published as an
%% issue of PACMPL.
\citestyle{acmauthoryear} %% For author/year citations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Note: Authors migrating a paper from PACMPL format to traditional
%% SIGPLAN proceedings format must update the '\documentclass' and
%% topmatter commands above; see 'acmart-sigplanproc-template.tex'.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Some recommended packages.
\usepackage{booktabs} %% For formal tables:
%% http://ctan.org/pkg/booktabs
\usepackage{subcaption} %% For complex figures with subfigures/subcaptions
%% http://ctan.org/pkg/subcaption
\usepackage{syntax}
\usepackage{xspace}
\usepackage{cleveref}
\newcommand{\f}[1]{\texttt{#1}}
\newcommand{\true}{\f{true}\xspace}
\newcommand{\false}{\f{false}\xspace}
\newcommand{\pair}{\f{pair}\xspace}
\begin{document}
%% Title information
\title[]{Co-Synthesis of Untyped Lambda Calculus Expressions}
%% \subtitle{Subtitle} %% \subtitle is optional
%% Author information
%% Contents and number of authors suppressed with 'anonymous'.
%% Each author should be introduced by \author, followed by
%% \authornote (optional), \orcid (optional), \affiliation, and
%% \email.
%% An author may have multiple affiliations and/or emails; repeat the
%% appropriate command.
%% Many elements are not rendered, but should be provided for metadata
%% extraction tools.
%% Author with single affiliation.
\author{David Thien}
\affiliation{
\institution{University of California, San Diego} %% \institution is required
}
\email{[email protected]} %% \email is recommended
%% Author with two affiliations and emails.
\author{Nico Lehmann}
\affiliation{
\institution{University of California, San Diego} %% \institution is required
}
\email{[email protected]} %% \email is recommended
\input{abstract}
%% 2012 ACM Computing Classification System (CSS) concepts
%% Generate at 'http://dl.acm.org/ccs/ccs.cfm'.
\begin{CCSXML}
<ccs2012>
<concept>
<concept_desc>CSE 291: Program Synthesis</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
</concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[500]{Software and its engineering~General programming languages}
\ccsdesc[300]{Social and professional topics~History of programming languages}
%% End of generated code
%% Keywords
%% comma separated list
\keywords{lambda calculus, Church encoding, synthesis} %% \keywords are mandatory in final camera-ready submission
%% \maketitle
%% Note: \maketitle command must come after title commands, author
%% commands, abstract environment, Computing Classification System
%% environment and commands, and keywords command.
\maketitle
\input{introduction}
\input{background}
\input{synthesis}
\input{co-synthesis}
\input{optimizations}
\input{future}
\input{conclusion}
%% Bibliography
% \bibliography{bibfile.bib}
%% Appendix
%\appendix
%\section{Appendix}
% Text of appendix \ldots
\end{document}
\chapter{Proposed Framework}
\label{chap:framework}
This chapter presents a framework to allow for the portable serialization of floating-point numbers that attempts to address the issues presented in Chapter \ref{chap:challenges}. In addition to the core framework, a number of helper utilities are provided, as well as details regarding integration with the Boost Serialization Library, and an example usage of the framework.
\section{Framework Core}
\label{sec:framework_core}
The framework consists of two major components: a set of standards dictating how floating-point numbers will be serialized, and a pair of abstract base classes that provide the interface for serialization and de-serialization. Users of the framework must implement these interfaces so that the serialization implementation saves floating-point values in a manner compliant with the framework requirements, while the de-serialization implementation loads framework compliant values and converts them to a format compatible with the user's platform.
\subsection{Serialization Standards}
When using this framework, users shall adhere to the following standards to ensure that floating-point values may be saved and loaded in a consistent manner:
\begin{enumerate}
\item Binary Representation
\begin{enumerate}
\item Within the framework, floating-point values shall always be treated as binary values.
\begin{enumerate}
\item The types \code{float}, \code{double}, and \code{long double} shall only be used to obtain the representation when a value enters the framework before serialization, and to set the final floating-point number as the value leaves the framework following de-serialization. In other words, no floating-point instructions or casts are to be used in the serialization or de-serialization code. This ensures that the underlying bit pattern is only changed when desired by the developer.
\item The type \code{uint32\_t} shall be used to hold the binary representation of a \code{float}, and the type \code{uint64\_t} shall be used to hold the binary representation of a \code{double}.
\item Since \code{long double} may vary in size, its binary representation shall be held in 128 bits using a pair of \code{uint64\_t} values. True 128 bit integer types remain uncommon, therefore the choice of \code{uint64\_t[2]} ensures greater portability.
\end{enumerate}
\item The integer types that hold the binary values shall be serialized using a byte ordering consistent with how the containing serialization library handles integer types. In this thesis, the Boost Serialization Library is used, and integer types are serialized using the built-in functionality of that library.
\item When a \code{long double} is serialized, the integer representing the most significant portion of the binary value shall be written first, followed by the integer representing the least significant portion of the binary value.
\end{enumerate}
\item Floating-Point Format
\begin{enumerate}
\item All values shall be serialized using a bit pattern that represents a value conforming to the appropriate IEEE-754 format. \code{long double} values shall always use the 128 bit Extended Double Precision format.
\item Whenever possible, values shall be serialized as a normal number.
\item Any number that falls within the range of IEEE-754 denormal values shall be serialized using a denormal number representation. Zero always uses a denormal representation.
\item Any number that falls outside the range of IEEE-754 normal values shall be serialized as either $+\infty$ or $-\infty$.
\item All NaN values shall be considered to be QNaN.
\item During de-serialization, an implementation must recognize all denormal, infinity, and NaN values, and convert them to an appropriate format as required by the target floating-point format.
\begin{enumerate}
\item If a denormal value is detected during de-serialization and the target floating-point format does not support denormal values or is otherwise incapable of representing the value, the value shall be replaced with the target format's representation of zero.
\item When a NaN value is detected during de-serialization and the target floating-point format is IEEE-754, it is recommended to replace the loaded binary value with the target system's QNaN representation, obtained using the facilities of the C++ Standard Library\footnote{\code{std::numeric\_limits<T>::quiet\_NaN()}; \code{T} is the floating-point type.}.
\item If an infinity or NaN value is detected during de-serialization and the target floating-point format is incapable of representing either infinity or NaN, it is recommended to signal an error condition or to raise an exception. If de-serialization is allowed to continue, the value returned in place of infinity or NaN is implementation-defined.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\subsection{Serialization Interface}
The abstract base class (interface) \code{FP\_Serializer}, shown in Listing~\ref{listing:framework_serializer}, provides the means by which the local floating-point type is converted to a binary value suitable for serialization. A serialization library using this framework shall require the user to provide an implementation of the \code{FP\_Serializer} interface before serialization begins.
When the library encounters a floating-point value, it shall call the corresponding \code{convert()} method. The parameter \code{in} shall be the floating-point value to be serialized, passed using the floating-point format of the current platform. The implementation shall set the parameter \code{out} to be the binary value that will be serialized, conforming to the requirements listed above. The library shall then perform serialization of the integer value containing the binary representation stored in \code{out}.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=The \code{FP\_Serializer} interface., label=listing:framework_serializer]
class FP_Serializer
{
public:
virtual ~FP_Serializer() {}
virtual void convert(const float in, uint32_t& out) const = 0;
virtual void convert(const double in, uint64_t& out) const = 0;
virtual void convert(const long double in, LongDouble& out) const = 0;
};
\end{lstlisting}
\end{singlespace}
\end{minipage}
\subsection{De-serialization Interface}
The counterpart to \code{FP\_Serializer} is the abstract base class (interface) \code{FP\_Deserializer}, shown in Listing~\ref{listing:framework_deserializer}. It is used to convert the binary value loaded during de-serialization into a suitable floating-point value that is compatible with the current platform. A serialization library using this framework shall require the user to provide an implementation of the \code{FP\_Deserializer} interface before de-serialization begins.
When the library receives a request that a floating-point value be loaded, it shall first de-serialize the appropriate integer type, which will contain the binary representation of the floating-point value. This integer shall then be passed to the appropriate \code{convert()} method as the parameter \code{in}. The \code{convert()} method determines what type of floating-point value the \code{in} parameter represents, and calls the appropriate method to handle the conversion of the loaded type. Four methods are provided for each type: \code{loadedNorm()}, \code{loadedDenorm()}, \code{loadedNan()}, \code{loadedInf()}. Each method takes a single parameter, which is the binary representation of the loaded value, and performs a conversion of the binary representation to the appropriate floating-point type. This floating-point value shall then be assigned to the \code{out} parameter of the \code{convert()} method.
Note that it is not strictly necessary to implement all four ``loaded'' methods for each type. For example, the binary representation of a \code{float} specified by this framework may be converted directly to the \code{float} type of a platform that conforms to IEEE-754 (see Section~\ref{sec:framework_example}). However, even in that case it is still recommended to implement \code{loadedNan()}, as previously discussed.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=The \code{FP\_Deserializer} interface., label=listing:framework_deserializer]
class FP_Deserializer
{
public:
virtual ~FP_Deserializer() {}
virtual void convert(const uint32_t in, float& out) const = 0;
virtual float loadedNorm(const uint32_t v) const = 0;
virtual float loadedDenorm(const uint32_t v) const = 0;
virtual float loadedNan(const uint32_t v) const = 0;
virtual float loadedInf(const uint32_t v) const = 0;
virtual void convert(const uint64_t in, double& out) const = 0;
virtual double loadedNorm(const uint64_t v) const = 0;
virtual double loadedDenorm(const uint64_t v) const = 0;
virtual double loadedNan(const uint64_t v) const = 0;
virtual double loadedInf(const uint64_t v) const = 0;
virtual void convert(const LongDouble in, long double& out) const = 0;
virtual long double loadedNorm(const LongDouble v) const = 0;
virtual long double loadedDenorm(const LongDouble v) const = 0;
virtual long double loadedNan(const LongDouble v) const = 0;
virtual long double loadedInf(const LongDouble v) const = 0;
};
\end{lstlisting}
\end{singlespace}
\end{minipage}
\section{Provided Utilities}
\label{sec:framework_utilities}
To facilitate the identification and conversion of floating-point types within implementations of \code{FP\_Serializer} and \code{FP\_Deserializer}, a set of utility functions and values are provided as part of the framework.
\subsection{Floating-Point Field Masks}
A number of integer constants are supplied that implementations may use when manipulating floating-point numbers as binary values. These constants are shown in Listing~\ref{listing:framework_masks}. Masks are supplied for the 32, 64, and 128 bit types that are used for serialization within the framework. For each type, five constants are provided:
\begin{description}
\item[\code{biasN} :] The bias for the type, as listed in Table~\ref{table:fp_types}.
\item[\code{signMaskN} :] A mask that may be used to obtain the sign bit for the type.
\item[\code{exponentMaskN} :] A mask that may be used to obtain the exponent field for the type.
\item[\code{fractionMaskN} :] A mask that may be used to obtain the fraction field for the type.
\item[\code{nanTypeMaskN} :] A mask that may be used to obtain the QNaN/SNaN flag bit if the type conforms to IEEE-754-2008.
\end{description}
In the name of the constant, \code{N} indicates the size of the type, in bits. The 128 bit type actually defines a pair of constants for each item listed above, with \code{hi} or \code{lo} appended to the name, indicating whether that mask applies to the most significant or least significant portion of the value.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=Floating-point field masks., label=listing:framework_masks]
const uint32_t bias32 = 127;
const uint32_t signMask32 = UINT32_C(0x80000000);
const uint32_t exponentMask32 = UINT32_C(0x7f800000);
const uint32_t fractionMask32 = UINT32_C(0x007fffff);
const uint32_t nanTypeMask32 = UINT32_C(0x00400000);
const uint64_t bias64 = 1023;
const uint64_t signMask64 = UINT64_C(0x8000000000000000);
const uint64_t exponentMask64 = UINT64_C(0x7ff0000000000000);
const uint64_t fractionMask64 = UINT64_C(0x000fffffffffffff);
const uint64_t nanTypeMask64 = UINT64_C(0x0008000000000000);
const uint64_t bias128 = 16383;
const uint64_t signMask128hi = UINT64_C(0x8000000000000000);
const uint64_t signMask128lo = UINT64_C(0);
const uint64_t exponentMask128hi = UINT64_C(0x7fff000000000000);
const uint64_t exponentMask128lo = UINT64_C(0);
const uint64_t fractionMask128hi = UINT64_C(0x0000ffffffffffff);
const uint64_t fractionMask128lo = UINT64_C(0xffffffffffffffff);
const uint64_t nanTypeMask128hi = UINT64_C(0x0000800000000000);
const uint64_t nanTypeMask128lo = UINT64_C(0);
\end{lstlisting}
\end{singlespace}
\end{minipage}
\subsection{Floating-Point Type Functions}
As discussed in Section~\ref{sec:challenges_identifying_types}, the C++ Standard Library has only recently added functionality to identify the various types of floating-point values. Therefore, the framework includes a selection of utility functions for identifying floating-point values. These functions are intended to be used with binary values within the framework; therefore, all of them take the binary representation of the floating-point value as an integer parameter. The following functions are included, with overloads supplied for each of the 32, 64, and 128 bit floating-point types:
\begin{description}
\item[\code{isNeg()} :] Returns true if the value is negative, regardless of its type.
\item[\code{isDenormal()} :] Returns true if the value is a denormal number.
\item[\code{isNormal()} :] Returns true if the value is a normal number.
\item[\code{isZero()} :] Returns true if the value is either $+0$ or $-0$.
\item[\code{isPosZero()} :] Returns true only if the value is $+0$.
\item[\code{isNegZero()} :] Returns true only if the value is $-0$.
\item[\code{isInf()} :] Returns true if the value is either $+\infty$ or $-\infty$.
\item[\code{isPosInf()} :] Returns true only if the value is $+\infty$.
\item[\code{isNegInf()} :] Returns true only if the value is $-\infty$.
\item[\code{isNaN()} :] Returns true if the value is Not a Number. Does not distinguish between Quiet or Signaling NaN.
\end{description}
Listing~\ref{listing:framework_type_funcs} contains the prototypes for the included utility functions.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=Floating-point type utility functions., label=listing:framework_type_funcs]
bool isNeg(const uint32_t v);
bool isNeg(const uint64_t v);
bool isNeg(const LongDouble v);
bool isDenormal(const uint32_t v);
bool isDenormal(const uint64_t v);
bool isDenormal(const LongDouble v);
bool isNormal(const uint32_t v);
bool isNormal(const uint64_t v);
bool isNormal(const LongDouble v);
bool isZero(const uint32_t v);
bool isZero(const uint64_t v);
bool isZero(const LongDouble v);
bool isPosZero(const uint32_t v);
bool isPosZero(const uint64_t v);
bool isPosZero(const LongDouble v);
bool isNegZero(const uint32_t v);
bool isNegZero(const uint64_t v);
bool isNegZero(const LongDouble v);
bool isInf(const uint32_t v);
bool isInf(const uint64_t v);
bool isInf(const LongDouble v);
bool isPosInf(const uint32_t v);
bool isPosInf(const uint64_t v);
bool isPosInf(const LongDouble v);
bool isNegInf(const uint32_t v);
bool isNegInf(const uint64_t v);
bool isNegInf(const LongDouble v);
bool isNaN(const uint32_t v);
bool isNaN(const uint64_t v);
bool isNaN(const LongDouble v);
\end{lstlisting}
\end{singlespace}
\end{minipage}
\subsection{\code{LongDouble} Type}
\code{long double} may vary in width from 64 to 128 bits, as discussed in Section~\ref{sec:challenges_ld_size}. At this time, integers larger than 64 bits are not commonly available. Therefore, the \code{LongDouble} union type shown in Listing~\ref{listing:framework_longdouble} is provided for ease of accessing the binary representation of a \code{long double} number, as well as converting a binary value back to a \code{long double} type. Within the framework, \code{LongDouble} is always used to encapsulate the pair of \code{uint64\_t} values that hold the binary representation of a \code{long double}. The individual bytes of the type are also accessible, if needed.
Note that on big-endian systems, the integer \code{LongDouble::u64[0]} will refer to the most significant portion of the \code{long double} value, while on little-endian systems it will refer to the least significant portion of the value.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=The \code{LongDouble} type., label=listing:framework_longdouble]
union LongDouble
{
long double ld;
uint8_t u8[16];
uint64_t u64[2];
};
\end{lstlisting}
\end{singlespace}
\end{minipage}
\subsection{System Endianness Identification}
Ideally all endianness concerns would be delegated to the containing serialization library. However, as indicated in the previous section, when dealing with the binary representation of \code{long double} values it is necessary for the developer to be aware of the endianness of the current system. The framework provides a utility function, shown in Listing~\ref{listing:framework_endian}, to detect the endianness of the platform.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=The endianness identifier utility function., label=listing:framework_endian]
bool isLittleEndian()
{
union
{
uint16_t u16;
uint8_t u8[2];
} endian;
endian.u16 = 0x1234;
return endian.u8[0] == 0x34;
}
\end{lstlisting}
\end{singlespace}
\end{minipage}
\subsection{\code{long double} Type Identifier}
As discussed in Section~\ref{sec:challenges_ld_size}, 10 byte \code{long double} values may be stored using 16 bytes of memory. The framework includes a utility function to address this issue, shown in Listing~\ref{listing:framework_ldtype}. This is an implementation of the algorithm from Listing~\ref{listing:10in16}.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=The \code{long double} type identifier utility function., label=listing:framework_ldtype]
bool LDis10in16()
{
if (sizeof(long double) != 16)
return false;
LongDouble ld = { -1.0L };
if (isLittleEndian())
return !(ld.u64[1] & signMask128hi);
else
return !(ld.u64[0] & signMask128hi);
}
\end{lstlisting}
\end{singlespace}
\end{minipage}
\section{Integration With the Boost Serialization Library}
\label{sec:framework_integration}
The Boost Serialization Library, discussed in Section~\ref{sec:background_boost}, is the serialization library used for testing the framework in this thesis. The example \code{Archive} classes provided with the library are \code{portable\_binary\_oarchive}, used for serialization, and \code{portable\_binary\_iarchive}, used for de-serialization. In order to accommodate this floating-point framework, these classes were modified as follows:
\begin{enumerate}
\item Addition of the floating-point interfaces.
\begin{enumerate}
\item A protected member was added to \code{portable\_binary\_oarchive}, which is used to hold an implementation of the \code{FP\_Serializer} interface.
\item A protected member was added to \code{portable\_binary\_iarchive}, which is used to hold an implementation of the \code{FP\_Deserializer} interface.
\end{enumerate}
\item Each class was modified so that the methods that perform floating-point serialization function as described in Section~\ref{sec:framework_core}.
\end{enumerate}
Listing~\ref{listing:framework_boost_oarchive} and Listing~\ref{listing:framework_boost_iarchive} show the changes made to these classes.
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=Modifications to the \code{portable\_binary\_oarchive} class., label=listing:framework_boost_oarchive]
class portable_binary_oarchive
{
protected:
FP_Serializer& m_fp_serializer;
public:
void save(const float& t)
{
uint32_t v;
this->m_fp_serializer.convert(t, v);
this->primitive_base_t::save(v);
}
void save(const double& t)
{
uint64_t v;
this->m_fp_serializer.convert(t, v);
this->primitive_base_t::save(v);
}
void save(const long double& t)
{
LongDouble out;
this->m_fp_serializer.convert(t, out);
this->primitive_base_t::save(out.u64[0]);
this->primitive_base_t::save(out.u64[1]);
}
};
\end{lstlisting}
\end{singlespace}
\end{minipage}
\noindent
\begin{minipage}{\linewidth}
\begin{singlespace}
\begin{lstlisting}[caption=Modifications to the \code{portable\_binary\_iarchive} class., label=listing:framework_boost_iarchive]
class portable_binary_iarchive
{
protected:
FP_Deserializer& m_fp_deserializer;
public:
void load(float& t){
uint32_t v;
this->primitive_base_t::load(v);
this->m_fp_deserializer.convert(v, t);
}
void load(double& t){
uint64_t v;
this->primitive_base_t::load(v);
this->m_fp_deserializer.convert(v, t);
}
void load(long double& t)
{
LongDouble in;
this->primitive_base_t::load(in.u64[0]);
this->primitive_base_t::load(in.u64[1]);
this->m_fp_deserializer.convert(in, t);
}
};
\end{lstlisting}
\end{singlespace}
\end{minipage}
\section{Example Usage on IEEE-754 Platforms}
\label{sec:framework_example}
To demonstrate the usage of this floating-point framework, an example implementation has been provided that operates on systems supporting IEEE-754. This implementation supports only true IEEE-754 platforms; specifically, platforms that use the x86 Extended Precision type (see Section~\ref{sec:challenges_x86_ext}) are not supported.
Listing~\ref{listing:framework_ieee_serializer} shows the implementation of the \code{FP\_Serializer} interface. In this implementation, \code{float} and \code{double} types are both serialized directly, as their formats are already compliant with the requirements listed in Section~\ref{sec:framework_core}. \code{long double} types are made to conform to the framework specifications by adjusting their binary representation as described in Section~\ref{sec:challenges_ld_size}.
\begin{singlespace}
\lstinputlisting[label=listing:framework_ieee_serializer, caption=The \code{IEEE754Serializer} class.]{code/ieee754serializer.h}
\end{singlespace}
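The listing above is included from an external source file. For orientation, the \code{float} and \code{double} paths described in the text amount to copying the bit pattern directly into the fixed-width integer expected by the archive. The fragment below is only a hypothetical sketch of that idea; the member function signatures mirror the archive modifications shown earlier, but everything else is an assumption, and the actual class may, for example, use unions rather than \code{memcpy}.
\begin{singlespace}
\begin{lstlisting}[caption=A hypothetical sketch of the direct \code{float}/\code{double} conversion paths.]
#include <cstdint>
#include <cstring>

// Hypothetical sketch only -- not the thesis implementation.
struct IEEE754SerializerSketch
{
  void convert(const float& in, uint32_t& out) const
  {
    // On an IEEE-754 platform a float is already a binary32 value,
    // so its bits can be copied unchanged.
    std::memcpy(&out, &in, sizeof(out));
  }
  void convert(const double& in, uint64_t& out) const
  {
    // Likewise, a double is already a binary64 value.
    std::memcpy(&out, &in, sizeof(out));
  }
  // The long double overload must additionally rewrite the 10-in-16
  // layout discussed earlier and is omitted from this sketch.
};
\end{lstlisting}
\end{singlespace}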
Listing~\ref{listing:framework_ieee_deserializer} shows the implementation of the \code{FP\_Deserializer} interface. In this implementation, the binary values loaded for \code{float} and \code{double} types are converted directly to their respective floating-point types. \code{long double} values are modified as described in Section~\ref{sec:challenges_ld_size} to match the size of that type on the current platform.
\begin{singlespace}
\lstinputlisting[label=listing:framework_ieee_deserializer, caption=The \code{IEEE754Deserializer} class.]{code/ieee754deserializer.h}
\end{singlespace}
% Project for Integration Workshop. Dept. Mathematics. UArizona
% Write a first-order ODE solver for a system of ODEs
\section{First-order ODE Integrators}
Consider the differential equation
\begin{equation}
\label{eq:ode_system}
\begin{cases} \dot{x}(t) = -y, & \quad x(0) = 1\\ \dot{y}(t) = \phantom{-}x, & \quad y(0) = 0 \end{cases}
\end{equation}
% which we can also write in matrix notation as
% \begin{equation}
% \label{eq:ode_matrix}
% \dot{\bm{x}}(t) = \begin{bmatrix*}[r] 0 & -1 \\ 1 & 0 \end{bmatrix*} \bm{x}, \quad \bm{x}(0) = \begin{pmatrix} 1\\0 \end{pmatrix}
% \end{equation}
Use pencil-and-paper to solve the ODE. Make a quiver plot. Sketch solutions.\\
In this project, you will build a numerical solver for differential equations like this one. You will test your solver on this ODE, and then use it to solve a more interesting ODE.
For this problem, discretize the time interval $0 \leq t \leq T$ into $n+1$ points. That is, set $\Delta t = T/n$ and $t_k = k \, \Delta t$, so $ t = \{t_0, t_1, \dots, t_n\}$. \\
\textit{Forward Differences:}
\begin{enumerate}[(a)]
\item Use what's called a \textit{first-order forward difference} approximation to the derivatives $\dot{x}$ and $\dot{y}$ at each $t$ as follows:
\begin{equation}
\label{eq:forward_difference}
\dot{x}(t) \approx \frac{x(t + \Delta t) - x(t)}{\Delta t} \qquad \text{and} \qquad \dot{y}(t) \approx \frac{y(t + \Delta t) - y(t)}{\Delta t}.
\end{equation}
Substitute equation (\ref{eq:forward_difference}) into equation (\ref{eq:ode_system}) and show that we can approximate the differential equation by the discrete time-stepping process:
\begin{equation}
\label{eq:forward_euler}
\begin{cases} x_{k+1} = x_k - (\Delta t) y_k, & \quad x_0 = 1\\ y_{k+1} = y_k + (\Delta t) x_k, & \quad y_0 = 0\\ \end{cases}
\end{equation}
\item Implement equation (\ref{eq:forward_euler}) for $0 \leq t \leq 1$ and plot the trajectory given by the solution.
\item Is your approximation qualitatively correct? Will it remain qualitatively correct on $0 \leq t \leq T$ as $T$ gets very large? Explain.\\
\textit{Hint:} It may be useful to rewrite the system of difference equations in matrix form,
\begin{equation*}
\bm{x}_{k+1}
% = \bm{x}_n + (\Delta t) \begin{bmatrix*}[r] 0 & -1 \\ 1 & 0 \end{bmatrix*} \bm{x}_n
= \begin{bmatrix} 1 & -\Delta t \\ \Delta t & 1\end{bmatrix} \bm{x}_k, \quad \bm{x}_0 = \begin{pmatrix} 1\\0 \end{pmatrix}
\end{equation*}
and to analyze the eigenvalues of the matrix.
\end{enumerate}
\textit{Backward Differences:}
\begin{enumerate}[(a),resume]
\item We could also have solved this problem by a different approach, using what's called a \textit{first-order backward difference}:
\begin{equation}
\label{eq:backward_difference}
\dot{x}(t) \approx \frac{x(t) - x(t - \Delta t)}{\Delta t} \qquad \text{and} \qquad \dot{y}(t) \approx \frac{y(t) - y(t - \Delta t)}{\Delta t}.
\end{equation}
Show that the backward difference gives the approximation
\begin{equation}
\label{eq:backward_euler}
\begin{cases} x_{k} = x_{k-1} - (\Delta t) y_k, & \quad x_0 = 1\\ y_{k} = y_{k-1} + (\Delta t) x_k, & \quad y_0 = 0\\ \end{cases}
\end{equation}
which can be rewritten as
\begin{equation*}
\bm{x}_{k}
% = \bm{x}_n + (\Delta t) \begin{bmatrix*}[r] 0 & -1 \\ 1 & 0 \end{bmatrix*} \bm{x}_n
= \left(\begin{bmatrix} 1 & \Delta t \\ -\Delta t & 1\end{bmatrix}\right)^{-1} \bm{x}_{k-1}, \quad \bm{x}_0 = \begin{pmatrix} 1\\0 \end{pmatrix}
\end{equation*}
\item Implement equation (\ref{eq:backward_euler}) for $0 \leq t \leq 1$ and plot the trajectory given by the solution.
\item Is your approximation qualitatively correct? Will it remain qualitatively correct on $0 \leq t \leq T$ as $T$ gets very large? Explain.
\end{enumerate}
\textit{Bonus:}
\begin{enumerate}[(a),resume]
\item (The really fun stuff) Use your ODE solver to solve $\dot{x} = -y$, $\dot{y} = \sin x$.
\end{enumerate}
\newpage
\documentclass{article}
\usepackage[margin=1in]{geometry}
\usepackage[table,xcdraw]{xcolor}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{titlesec}
\usepackage{titling}
\usepackage{setspace}
\titleformat{\section}
{\large\bfseries}
{Method \thesection:}
{0em}
{ }[]
\renewcommand{\maketitle}{\hspace{-22px} \textbf{{\theauthor} \\
ASEN5158 \\
\today \\
{\thetitle}} \titlerule}
\definecolor{lightgray}{gray}{0.8}
\renewcommand{\arraystretch}{3}
\begin{document}
\title{Homework 5}
\author{Connor Johnstone}
\maketitle
\setstretch{1.25}
\section{Introduction}
Most space operations in the modern age involve launches to LEO, MEO, HEO, or GEO and have increasingly utilized smaller and smaller payloads. As a result, exploration into Heavy Lift Launch Vehicles stalled during the period in which the Shuttle could provide for all heavy-lift needs. But in the wake of the end of the Shuttle program, the world has once again been looking to vehicles with higher and higher launch capabilities, such as would be required by a human exploration mission to the Moon. As such, we have a few currently available launch options for such a mission and a host of unproven and in-development options.
\section{Launch Vehicle Table}
\begin{center}
\begin{table}[h!]
\begin{tabular}{m{5em}
>{\columncolor[HTML]{EFEFEF}}m{6em} m{14em}
>{\centering\columncolor[HTML]{EFEFEF}}m{5.4em} m{8em}}
\cellcolor[HTML]{9B9B9B}\textbf{Launch Vehicle} & \cellcolor[HTML]{9B9B9B}\textbf{Provider} & \cellcolor[HTML]{9B9B9B}\textbf{Launch Site} & \cellcolor[HTML]{9B9B9B}\textbf{Mass to LEO} & \cellcolor[HTML]{9B9B9B}\textbf{Shroud Dimensions} \\ \hline
\multicolumn{1}{c|}{Atlas V} & ULA (NASA) & Cape Canaveral (28N), Vandenberg (35N) & 20,520 kg & 4.2m x 11m \\
\multicolumn{1}{c|}{Delta IV Heavy} & ULA (NASA) & Cape Canaveral (28N), Vandenberg (35N) & 28,790 kg & 5m x 19m \\
\multicolumn{1}{c|}{Falcon Heavy} & SpaceX & Cape Canaveral (28N) & 63,800 kg & 3.66m x 70m \\
\multicolumn{1}{c|}{BFR} & SpaceX & None Yet (South Texas?) & 150,000+ kg & 9m x 118m \\
\multicolumn{1}{c|}{SLS} & ULA (NASA) & None Yet (Typical NASA) & 130,000 kg & 8.4m x 111m \\
\multicolumn{1}{c|}{New Glenn} & Blue Origin & None Yet & 45,000 kg & 7m x 82m \\
\multicolumn{1}{c|}{H-IIA} & Mitsubishi & Tanegashima (30N) & 15,000 kg & 4m x 53m \\
\multicolumn{1}{c|}{Long March 5} & CALT & Wenchang (19N) & 25,000 kg & 5m x 57m
\end{tabular}
\end{table}
\end{center}
\begin{thebibliography}{9}
\bibitem{atlas}
Atlas V Data Sheet,
http://www.ulalaunch.com/site/pages/Products\_AtlasV.shtml,
United Launch Alliance
\bibitem{delta}
Delta IV Heavy User Guide, \\
http://www.ulalaunch.com/site/docs/product\_cards/guides/Delta\%20IV\%20Users\%20Guide\%20June\%202013.pdf,
United Launch Alliance
\bibitem{falcon}
Falcon Heavy Official Page,
http://spacex.com/falcon-heavy,
SpaceX
\bibitem{bfr}
BFR Official Page,
https://www.spacex.com/starship,
SpaceX
\bibitem{sls}
SLS Overview,
http://www.nasa.gov/exploration/systems/sls/overview.html,
NASA
\bibitem{glenn}
Blue Origin Website,
https://www.blueorigin.com/,
Blue Origin
\bibitem{h2a}
H-IIA English Launch Services,
http://www.jaxa.jp/projects/rockets/h2a/index\_e.html,
Mitsubishi
\bibitem{lm5}
The New Generation of Launch Vehicles in China,
http://www.iafastro.net/download/congress/IAC-14/DVD/full/IAC-14/D2/1/manuscripts/IAC-14,D2,1,11,x20929.pdf,
International Astronautical Federation
\end{thebibliography}
\end{document}
% !TeX root = main.tex
\chapter{Sparse Matrix Vector Multiplication}
\glsresetall
\label{chapter:spmv}
Sparse matrix vector multiplication (SpMV) takes a sparse matrix, i.e., one in which most of its elements are zero, and multiplies it by a vector. The vector itself may be sparse as well, but often it is dense. This is a common operation in scientific applications, economic modeling, data mining, and information retrieval. For example, it is used in iterative methods for solving sparse linear systems and eigenvalue problems. It is an operation in PageRank and it is also used in computer vision, e.g., image reconstruction.
This chapter introduces several new HLS concepts, and reinforces some previously discussed optimizations. One goal of the chapter is to introduce a more complex data structure. We use a \gls{crs} representation to hold the sparse matrix. Another goal is to show how to perform testing. We build a simple structure for a testbench that can be used to help determine if the code is functionally correct. This is an important aspect of hardware design, and \VHLS makes it easy to test many aspects of the generated RTL with the same high-level C testbench. This is one of the big advantages of HLS over RTL design. We also show how you can perform C/RTL cosimulation using the testbench and \VHLS tool. This is necessary to derive the performance characteristics for the different SpMV designs. Since the execution time depends upon the number of entries in the sparse matrix, we must use input data in order to determine the clock cycles for the task interval and task latency.
\section{Background}
\begin{figure}
\centering
\includegraphics[width= .65\textwidth]{images/crs}
\caption{A $4 \times 4$ matrix $\mathbf{M}$ represented in two different ways: as a `dense' matrix stored in a two-dimensional array, and as a sparse matrix stored in the compressed row storage (\gls{crs}) form, a data structure consisting of three arrays. }
\label{fig:crs}
\end{figure}
Figure~\ref{fig:crs} shows an example of a $4 \times 4$ matrix $\mathbf{M}$ represented in two different ways. Figure~\ref{fig:crs} a) shows the normal representation of the matrix as a two-dimensional array of 16 elements. Each element is stored in its own location in the array. Figure~\ref{fig:crs} b) shows the same matrix represented in \gls{crs} format. The \gls{crs} representation is a data structure consisting of three arrays. The \lstinline{values} array holds the value of each non-zero element in the matrix. The \lstinline{columnIndex} and \lstinline{rowPtr} arrays encode information about the location of these non-zero elements in the matrix. \lstinline{columnIndex} stores the column of each element, while \lstinline{rowPtr} contains the index in \lstinline|values| of the first element in each row. The \gls{crs} format avoids storing values in the matrix that are zero, although there is nothing to prevent a zero from being explicitly represented in the \lstinline|values| array. In the example, however, we see that the \lstinline|values| array does not, in fact, contain any zero elements. The tradeoff is that some additional book-keeping information (the \lstinline|columnIndex| and \lstinline|rowPtr| arrays) is required in order to properly interpret and manipulate the matrix. The \gls{crs} form is commonly used when large matrices contain only a small number of non-zero elements (typically 10 percent or less), enabling these matrices to be stored with less memory and manipulated with fewer operations. However, the \gls{crs} form places no requirements on the sparsity of the matrix, so it is a general approach that can be used for any matrix, though not necessarily the most efficient one. The \gls{crs} form is also not the only efficient representation of sparse matrices. Depending on the characteristics of the matrix and the types of operations to be performed, other sparse representations can also be used.
More precisely, the \gls{crs} format uses a data structure consisting of three arrays: \lstinline{values}, \lstinline{columnIndex}, and \lstinline{rowPtr}. The \lstinline{values} and \lstinline{columnIndex} arrays each have an entry for each of the non-zero elements in the sparse matrix $\mathbf{M}$. These arrays represent the matrix $\mathbf{M}$ stored in a row-wise fashion, i.e., left to right, and top to bottom. The data in the matrix is stored in the \lstinline|values| array, while the \lstinline{columnIndex} array contains the horizontal location of the data in the matrix. If \lstinline|values[k]| represents $M_{ij}$, then \lstinline|columnIndex[k]| $= j$. The array \lstinline{rowPtr} has size $n+1$ for an $n$-row matrix. \lstinline|rowPtr[k]| contains the total number of elements in all the rows in the matrix prior to row $k$, with the first element \lstinline|rowPtr[0] = 0| and the last element \lstinline|rowPtr[n]| always giving the total number of non-zero elements in the matrix. As a result, if \lstinline|values[k]| represents $M_{ij}$, then \lstinline|rowPtr[i]| $\leq k < $\lstinline|rowPtr[i+1]|. If row \lstinline|k| contains any non-zero elements, then \lstinline|rowPtr[k]| will contain the index of the first element in the row. Note that if there are rows in the matrix without a non-zero element, then values in the \lstinline|rowPtr| array will repeat.
Looking at Figure~\ref{fig:crs} a), we can scan the matrix in row-major order to determine the \lstinline{values} array in \gls{crs} form. Whenever we find a non-zero element, its value is stored at the next available index $i$ in the \lstinline|values| array and its column is stored at \lstinline{columnIndex[i]}. In addition, whenever we start scanning a new row, we store the next available index $i$ in the \lstinline|rowPtr| array. As a result, the first element in the \lstinline|rowPtr| array is always zero.
Looking at Figure~\ref{fig:crs} b), we can also convert the matrix back to a two-dimensional array representation. The first step is to determine the number of elements in each row of the matrix from the \lstinline|rowPtr| array. The number of elements in row $i$ is the difference \lstinline|rowPtr[i+1]-rowPtr[i]|. Then the row can be reconstructed by iterating through the \lstinline|values| array starting at \lstinline|values[rowPtr[i]]|. In our example matrix, because the first two elements of the \lstinline|rowPtr| array are $0$ and $2$, we know that there are 2 elements in the first row, i.e., \lstinline|values[0]| and \lstinline|values[1]|. The first non-zero element in the \lstinline{values} data structure, \lstinline|values[0]|, is $3$. This value is in column 0, since \lstinline{columnIndex[0]} = 0. Similarly, the second non-zero value is the value 4 in column 1. The second row of the matrix has elements with $k \in [2,4)$, the third row has elements with $k \in [4,7)$, and so on. In this case, there are 9 non-zero entries, thus the last entry in the \lstinline{rowPtr} data structure is 9.
\begin{exercise}
Given a 2-dimensional array representing a matrix, write the C code to convert the matrix to \gls{crs} form. Write the corresponding C code to convert the matrix in \gls{crs} form back to a 2-dimensional array.
\end{exercise}
It turns out that using the \gls{crs} form, we can multiply a sparse matrix with a vector relatively efficiently without explicitly converting the matrix back to a 2-dimensional array. In fact, for large matrices with a small number of non-zero elements, sparse matrix-vector multiply is much more efficient than the dense matrix-vector multiply we discussed in chapter \ref{chapter:dft}. This is because we can compute the non-zero elements of the result by only looking at the non-zero elements of the operands.
\section{Baseline Implementation}
\begin{figure}
\lstinputlisting{examples/spmv.cpp}
\caption{ The baseline code for sparse matrix vector (SpMV) multiplication, which performs the operation $y = \mathbf{M} \cdot x$. The variables \lstinline{rowPtr}, \lstinline{columnIndex}, and \lstinline{values} hold $\mathbf{M}$ in \gls{crs} format. The first \lstinline{for} loop iterates across the rows while the second nested \lstinline{for} loop iterates across the columns of $\mathbf{M}$ by multiplying each non-zero element by the corresponding element in the vector \lstinline{x} which results in one element in the resulting vector \lstinline{y}. }
\label{fig:spmv_arch1}
\end{figure}
\begin{figure}
\lstinputlisting{examples/spmv.h}
\caption{ The header file for \lstinline{spmv} function and testbench. }
\label{fig:spmv.h}
\end{figure}
Figure \ref{fig:spmv_arch1} provides a baseline code for sparse matrix vector multiplication. The \lstinline{spmv} function has five arguments. The arguments \lstinline{rowPtr}, \lstinline{columnIndex}, and \lstinline{values} correspond to the input matrix $\mathbf{M}$ in \gls{crs} format. These are equivalent to the data structures shown in Figure \ref{fig:crs}. The argument \lstinline{y} holds the output result $y$ and the argument \lstinline{x} holds the input vector $x$ to be multiplied by the matrix. The variable \lstinline{NUM_ROWS} indicates the number of rows in the matrix $\mathbf{M}$. The variable \lstinline{NNZ} is the number of non-zero elements in the matrix $\mathbf{M}$. Finally, the variable \lstinline{SIZE} is the number of elements in the arrays \lstinline{x} and \lstinline{y}.
The outer \lstinline{for} loop, labeled \lstinline{L1}, iterates across each row of the matrix. Multiplying this row of the matrix with the vector $x$ will produce one element of $y$. The inner loop, labeled \lstinline{L2}, loops across the elements in the columns of the matrix $\mathbf{M}$. The \lstinline{L2} loop iterates \lstinline{rowPtr[i+1]} $-$ \lstinline{rowPtr[i]} times, corresponding to the number of non-zero entries in that row. For each entry, we read the value of the non-zero element of the $\mathbf{M}$ matrix from the \lstinline{values} array and multiply it by the corresponding value of the vector $x$ read from the \lstinline{x} array. That value is located at index \lstinline{columnIndex[k]} of the \lstinline{x} array, since the data structure \lstinline{columnIndex} holds the column of element \lstinline{k}.
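Since Figure \ref{fig:spmv_arch1} pulls its source from an external file, the following fragment is a minimal sketch of the structure just described, using the loop labels and argument names from the text. The element type \lstinline{DTYPE} and the exact declarations in \lstinline{spmv.h} are assumptions here, and the actual file may differ in its details.
\begin{lstlisting}
#include "spmv.h" // assumed to define DTYPE, NUM_ROWS, NNZ, and SIZE

void spmv(int rowPtr[NUM_ROWS + 1], int columnIndex[NNZ],
          DTYPE values[NNZ], DTYPE y[SIZE], DTYPE x[SIZE]) {
 L1: for (int i = 0; i < NUM_ROWS; i++) {
    DTYPE y0 = 0;
 L2: for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
      // Multiply each non-zero element of row i by the matching entry
      // of the vector x and accumulate the result.
      y0 += values[k] * x[columnIndex[k]];
    }
    y[i] = y0;
  }
}
\end{lstlisting}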
\begin{figure}
\lstinputlisting[format=none]{examples/spmv-top.cpp}
\caption{ A simple testbench for our \lstinline{spmv} function. The testbench generates one example and computes the matrix vector multiplication using a sparse (\lstinline{spmv}) and non-sparse function (\lstinline{matrixvector}).}
\label{fig:spmv_test}
\end{figure}
\section{Testbench}
Figure \ref{fig:spmv_test} shows a simple testbench for the \lstinline{spmv} function. The testbench starts by defining the \lstinline{matrixvector} function. This is a straightforward implementation of matrix vector multiplication. This does not assume a sparse matrix and does not use the \gls{crs} format. We will compare the output results from this function with the results from our \lstinline{spmv} function.
\begin{aside}
A common testbench will implement a ``golden'' reference implementation of the function that the designer wishes to synthesize. The testbench will then compare the results of the golden reference with those generated from the code that is synthesized by the \VHLS tool. A best practice for the testbench is to use alternative implementations for the golden reference and the synthesizable code. This provides more assurance that both implementations are correct.
\end{aside}
The testbench continues in the \lstinline{main} function. Here we set the \lstinline{fail} variable equal to $0$ (later code sets this to $1$ if the output data from \lstinline{spmv} does not match that from the function \lstinline{matrixvector}). Then we define a set of variables that correspond to the matrix $\mathbf{M}$, the input vector $x$, and the output vector $y$. In the case of $\mathbf{M}$, we have both the ``normal'' form and the \gls{crs} form (stored in the variables \lstinline{values}, \lstinline{columnIndex}, and \lstinline{rowPtr}). The values of the $\mathbf{M}$ matrix are the same as shown in Figure \ref{fig:crs}. We have two versions of the output vector $y$. The \lstinline{y_sw} array stores the output from the function \lstinline{matrixvector} and the \lstinline{y} array has the output from the function \lstinline{spmv}.
After defining all of the input and output variables, we call the \lstinline{spmv} and \lstinline{matrixvector} functions using the appropriate data. The following \lstinline{for} loop compares the output results from both of the functions by comparing the elements from \lstinline{y_sw} with those in \lstinline{y}. If any of them are different, we set the \lstinline{fail} flag equal to $1$. Lastly, we print out the results of the test and then return the \lstinline{fail} variable.
This testbench is relatively simple and probably insufficient to ensure that the implementation is correct. Primarily, it only tests one example matrix, whereas a better testbench would test multiple matrices. It is common, for instance, to randomly generate inputs for testing, in addition to explicitly verifying important corner-cases. In this case, we need to make sure to vary not only the values being operated on, which will be passed to our accelerator when it is executing, but also to vary the compile-time parameters which might be used to create different accelerators with different tradeoffs. The key difference is that we can randomly generate multiple data values to operate on and test them all in the same execution of the program, using multiple function calls. Compile-time parameters, on the other hand, require the code to be recompiled every time parameters change.
\begin{exercise}
Create a more sophisticated testbench which generates multiple sets of test data using a random number generator. The compile-time parameters of the sparse matrix should be modifiable (e.g., \lstinline{SIZE}, \lstinline{NNZ}, etc.). Create an HLS synthesis script which executes the same code multiple times for different reasonable compile-time parameters.
\end{exercise}
\section{Specifying Loop Properties}
If you directly synthesize this code, you will get results for the clock period and utilization. However, you will not get the number of clock cycles either in terms of task latency or initiation interval. This is because this depends upon the input data, which is external to the \lstinline{spmv} function itself. Primarily, the performance depends on the number of times the body of the inner loop is executed, which is equal to the number of non-zero elements in $\mathbf{M}$. We know that the number of non-zero elements is limited by the constant \lstinline{NNZ} in the code, but it is possible to call the code with matrices of different sizes, so the actual number of iterations is data-dependent. In addition, the performance may vary depending on the location of the non-zero elements and the optimization directives utilized during synthesis. To make matters worse, the number of iterations depends on the input in a complex way and many potential inputs don't actually represent valid matrices. Thus, it is very difficult for a tool to determine the total number of clock cycles for the \lstinline{spmv} function without complex analysis and additional information. \VHLS is unable to perform this analysis.
\begin{exercise}
What are the preconditions for the spmv function to work correctly? Prove that given these preconditions, the body of the inner loop does, in fact, execute exactly once for each non-zero element in the matrix.
\end{exercise}
There are several ways to leverage the tool to derive some performance estimates, however. One method is to provide the \VHLS tool additional information about the loop bounds. This can be done using the \lstinline{loop_tripcount} directive, which enables the designer to specify a minimum, maximum, and/or average number of iterations for each particular loop. By providing these values, the \VHLS tool is capable of providing an estimate on the number of clock cycles.
\begin{aside}
Use the \lstinline{loop_tripcount} directive to specify minimum, maximum, and/or average number of iterations for a loop with a variable bound. This enables the \VHLS tool to provide an estimate on the number of clock cycles for the design. This does not impact the results of the synthesis; it only affects the synthesis report.
\end{aside}
\begin{exercise}
Add a \lstinline{loop_tripcount} directive to the \lstinline{spmv} function. The syntax for the pragma form of the directive is \lstinline{#pragma HLS loop_tripcount min=X, max=Y, avg=Z} where \lstinline{X}, \lstinline{Y}, and \lstinline{Z} are constant positive integers. Which loops require this directive? What happens to the synthesis report when you change the different parameters (\lstinline{min}, \lstinline{max}, and \lstinline{avg})? How does this affect the clock period? How does it change the utilization results?
\end{exercise}
The \lstinline{loop_tripcount} directive enables the designer to get a rough idea about the performance of a function. This can enable comparison between different implementations of the same function either by applying different optimization directives or by restructuring the code itself. However, it may be difficult or impossible to determine the \lstinline{min}, \lstinline{max}, and \lstinline{avg} parameters. It can also be difficult to provide tight bounds on the \lstinline{min} and \lstinline{max} parameters. If there is a testbench, there is another more accurate method to calculate the total number of clock cycles required for the \lstinline{spmv} function. This is done by performing C/RTL cosimulation.
\section{C/RTL Cosimulation}
C/RTL cosimulation performs automatic testing of the \gls{rtl} designs that are generated by the \VHLS tool. It does this by executing the synthesized code together with the provided testbench. The execution is instrumented to record the input and output values for each execution of the synthesized code. The input values are converted to cycle-by-cycle \term{input vectors}. The input vectors are used in an RTL-level simulation of the generated RTL design and the resulting \term{output vectors} are captured. The testbench code can then be executed again replacing the synthesized code with the captured input and output values. The testbench code can then return a zero value (indicating success) or a non-zero value (indicating failure).
The C/RTL cosimulation flow combines the cycle-accurate RTL design generated from the \VHLS tool with input values provided from the C testbench. As a result, it can generate accurate estimates of the performance of the generated RTL design which reflect any HLS optimizations, even in the presence of data-dependent behavior. The minimum, maximum, and average latency and interval of the synthesized function are automatically extracted after simulation completes.
Note that these numbers only correspond to the clock cycles derived from the input data used by the testbench. Thus, they are only as good as the testbench itself. To put it another way, if the testbench does not exercise the function in a manner that is consistent with how it will be used upon deployment, the results will not be accurate. In addition, the input testvectors are generated with idealized timing that does not accurately model the behavior of external interfaces. The actual performance may be lower if execution stalls waiting for input data, or if there is contention waiting for external memory access to complete. Nevertheless, it provides a convenient method for determining clock cycles that does not require the designer to estimate the loop bounds for a variable loop.
\begin{aside}
C/RTL cosimulation provides the latency for functions with variable loop bounds. It reports the minimum, maximum, and average clock cycles for function latency and function interval. These latency values are directly dependent upon the input data from the C testbench.
\end{aside}
\begin{exercise}
What are the minimum, maximum, and average clock cycles for the \lstinline{spmv} function latency and function interval when using the testbench provided in Figure \ref{fig:spmv_test}?
\end{exercise}
\section{Loop Optimizations and Array Partitioning}
Now that we have a method to gather all of the performance and utilization estimates from the \VHLS tool, let us consider how to best optimize the function. Pipelining, loop unrolling, and data partitioning are the most common first steps in optimizing a design. The typical approach is to start with the innermost loop, and then move outwards as necessary.
In this example, pipelining the inner \lstinline{L2} loop is perhaps the first and easiest optimization to consider. This overlaps the execution of the consecutive iterations of this loop, which can result in a faster overall implementation. Without pipelining, each iteration of the \lstinline{L2} loop occurs sequentially. Note that the iterations of the \lstinline{L1} loop are still done sequentially.
\begin{figure}
\centering
%\includegraphics[width= .85\textwidth]{images/spmv_pipeline_inner}
\includesvg{spmv_behavior}
\caption{Architecture and behavior of the \lstinline{spmv} code with a pipelined inner loop.}
\label{fig:spmv_pipeline_inner}
\end{figure}
Figure \ref{fig:spmv_pipeline_inner} illustrates the approximate manner in which the \lstinline{spmv} function executes when pipelining the \lstinline{L2 for} loop. Each iteration of the inner \lstinline{L2} loop is pipelined with II=3. Pipelining allows multiple iterations of the inner loop from the same iteration of the outer loop to execute concurrently. In this case, the II of the inner loop is limited by a recurrence through the accumulation. II=3 is achieved because we've assumed that the adder has a latency of 3 clock cycles. Iterations of the outer loop are not pipelined, so the inner loop must completely finish and flush the pipeline before the next iteration of the outer \lstinline{L1} loop begins.
\begin{exercise}
Pipeline the innermost \lstinline{L2 for} loop. This can be done by adding a pipeline directive to the \lstinline{spmv} code from Figure \ref{fig:spmv_arch1}. What is the achieved initiation interval (II)? What happens to the results as you specify an II argument, and increase or decrease the target II?
\end{exercise}
Looking at this behavior, we see that there are several factors limiting the performance of the loop. One factor is the recurrence through the adder that limits the achieved loop II. A second factor is that iterations of the outer loop are not pipelined. An efficient solution for sparse matrix-vector multiply would likely come close to using each multiplier and adder every clock cycle. This design is far from that.
In Section \ref{subsec:mvmul_implementation} we explored several design optimization techniques, including pipelining different loops, loop unrolling, and array partitioning. Understanding the tradeoffs between these techniques can be somewhat challenging, since they are often dependent on one another. We must often apply these techniques together with a carefully chosen goal in mind in order to get a benefit, and applying one technique without applying another can actually make matters worse. For instance, when performing loop unrolling, the designer must be careful to understand the effects that this has upon memory accesses. Increasing the number of operations that can execute concurrently doesn't help if performance is limited by available memory ports. Similarly, providing more memory ports if there are insufficient operations to utilize those memory ports (or if the addresses of each memory operation can't be easily partitioned) can also incur a resource cost without increasing performance.
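As a concrete illustration of how such combinations are expressed, the fragment below corresponds roughly to Case 8 of Table \ref{table:spmv_optimizations}. The pragma spellings follow the positional Vivado HLS syntax, and the fragment is meant only as a sketch of the mechanics rather than as a recommended configuration.
\begin{lstlisting}
void spmv(int rowPtr[NUM_ROWS + 1], int columnIndex[NNZ],
          DTYPE values[NNZ], DTYPE y[SIZE], DTYPE x[SIZE]) {
  // Four elements of values and columnIndex can be read each cycle only
  // if those arrays are split across four physical memories.
#pragma HLS array_partition variable=values cyclic factor=4
#pragma HLS array_partition variable=columnIndex cyclic factor=4
 L1: for (int i = 0; i < NUM_ROWS; i++) {
    DTYPE y0 = 0;
 L2: for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
#pragma HLS pipeline
#pragma HLS unroll factor=4
      // Note that x[columnIndex[k]] is a data-dependent access, so
      // partitioning x does not by itself guarantee conflict-free reads.
      y0 += values[k] * x[columnIndex[k]];
    }
    y[i] = y0;
  }
}
\end{lstlisting}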
% NOTE: most of the description below is redundant with previous discussion and doesn't actually lead to a better design in this case.
%In the \lstinline{spmv} function, we read from three arrays on every iteration of the \lstinline{L2 for} loop -- \lstinline{values}, \lstinline{x}, and \lstinline{columnIndex}. Thus, when we unroll this loop, we need to consider the additional read operations that occur upon these arrays. Each iteration of the \lstinline{L2 for} loop, accesses \lstinline{values} and \lstinline{columnIndex} using the variable \lstinline{k}, which increases by $1$ on every iteration. In fact, we know that we are accessing these arrays sequentially across the entire function, i.e., last iteration of \lstinline{L2} and the first iteration of the following execution of \lstinline{L2} (as we move to the next iteration of \lstinline{L1}) are sequential. The access pattern of the \lstinline{x} array is a bit more complicated and depends upon the locations of the non-zero elements in the input matrix $\mathbf{M}$. This makes it difficult to optimize these memory accesses. Therefore, we only consider the optimization of the arrays \lstinline{values} and \lstinline{columnIndex}.
%Assume that all of the data in the array \lstinline{values} is stored in one memory. Also assume that the entire array \lstinline{columnIndex} is stored in memory, which is separate from the memory storing \lstinline{values}. Finally, assume that we unroll the \lstinline{L2 for} loop by a factor of four. In this case, each iteration of the unrolled loop requires four read operations from both memories. If the memories do not have four read ports, then the \VHLS tool must sequentialize these read operations. This will reduce the performance. And we are not able to take advantage of all of the parallelism that is exposed by the unroll optimization.
%We can eliminate the need for sequentialization by creating more read ports. The way to do this is through data partitioning, i.e., separate the \lstinline{values} and \lstinline{columnIndex} into multiple memories. This can be done manually by refactoring the code. Or it can be done automatically by the \VHLS tool using the \lstinline{array_partition} directive. This directive splits the array into multiple smaller memories based upon the \lstinline{factor} argument. For example, setting \lstinline{factor = 2} splits the array into two memories, and \lstinline{factor = 4} divides the array across four memories.
%The next question is how exactly to divide the arrays. This can be done in a \lstinline{block}, \lstinline{cyclic}, or \lstinline{complete} manner. This is specified using the \lstinline{type} argument. A \lstinline{block} partition takes consecutive elements of the array and puts them in the same memory. For example, the directive \lstinline{#pragma HLS array_partition variable=values block factor=2} will put the elements of the first half of the array \lstinline{values} into one memory and the elements of the second half into another memory. A \lstinline{cyclic} partition takes consecutive elements and puts them in different arrays. Thus, the directive \lstinline{#pragma HLS array_partition variable=values block factor=2} puts every even element of \lstinline{values} into one memory, and every odd element into another separate memory.
%Given that we are accessing the elements from the arrays \lstinline{values} and \lstinline{columnIndex} in a sequential manner, we want to make sure that each consecutive element is in a separate memory. This would be done using a \lstinline{cyclic} partitioning. It should be clear that it is important to consider the data partitioning when we perform loop unrolling.
%Loop unrolling can be used in combination with pipelining. Unrolling reduces the number of iterations of the loop while performing more work per iteration. The addition of pipelining enables the iterations to occur in an overlapping fashion.
%It is also possible to perform pipelining, unrolling, and data partitioning in combination. However, the effects of each of these is not necessarily straightforward.
To see some of the complexity in applying these combinations of transforms, we encourage you to perform the following exercise:
\begin{exercise}
Synthesize the \lstinline{spmv} design using the directives specified in each of the eleven cases from Table \ref{table:spmv_optimizations}. Each case has different pipeline, unroll, and partitioning directives for the different loops and arrays. These partitionings should be done across the three arrays (\lstinline{values}, \lstinline{columnIndex}, and \lstinline{x}). What sort of trends do you see? Does increasing the unroll and partitioning factors help or hurt when it comes to utilization? How about performance? Why?
\end{exercise}
\begin{table}[htbp]
\centering
\caption{Potential optimizations for sparse matrix-vector multiplication. }
\begin{tabular}{*{4}{l}}
\toprule
%& \multicolumn{2}{c}{Optimizations} \\
%& \multicolumn{2}{c}{Full} & \multicolumn{2}{c}{Full} \\
%\cmidrule(lr){2-3}
%\cmidrule(lr){4-6}
%\cmidrule(lr){8-9}
%\cmidrule(lr){10-11}
& L1 & L2 \\
\midrule
Case 1 & - & - \\
Case 2 & - & pipeline \\
Case 3 & pipeline & - \\
Case 4 & unroll=2 & - \\
Case 5 & - & pipeline, unroll=2 \\
Case 6 & - & pipeline, unroll=2, cyclic=2 \\
Case 7 & - & pipeline, unroll=4 \\
Case 8 & - & pipeline, unroll=4, cyclic=4 \\
Case 9 & - & pipeline, unroll=8 \\
Case 10 & - & pipeline, unroll=8, cyclic=8 \\
Case 11 & - & pipeline, unroll=8, block=8 \\
\bottomrule
\end{tabular}
\label{table:spmv_optimizations}
\end{table}
If you performed the previous exercise, you should have seen that blindly applying optimization directives may not always provide you with the expected results. It is usually more effective to consider the properties of an application under design, and to select optimizations with a particular design goal in mind. Of course, this requires some intuition behind the capabilities and limitations of a particular tool being used. While it is certainly difficult (perhaps impossible?) to understand every detail of a complex tool like \VHLS, we can build a mental model of the most critical aspects.
One of the options we considered in cases 3 and 4 above was to optimize the outer \lstinline{L1} loop directly, by pipelining or unrolling it, rather than the inner loop. This transformation has the effect of increasing the potential parallelism within one task. In order to perform this optimization, the \VHLS tool must fully unroll inner loops, like the \lstinline{L2} loop in this code. If full unrolling is possible, this can reduce the cost of calculating the loop bounds and can also eliminate recurrences in the code. However, in this code, the inner loop cannot be unrolled by \VHLS because the loop bound is not constant.
\begin{exercise}
Add a directive to pipeline the outermost \lstinline{L1} loop, i.e., implement case 3 above. What is the initiation interval (II) when you do not set a target II? What happens to the utilization? How does explicitly increasing the II change the utilization results? How does this compare to pipelining the \lstinline{L2} loop? How does this compare to the baseline design (no directives)? What is happening when you attempt to pipeline this outer loop? (hint: check the synthesis log)
\end{exercise}
Another option to increase parallelism is \gls{partial_loop_unrolling} of the inner loop, as in cases 5 through 11. This transformation exposes more parallelism by allowing more operations from the same loop iteration to be executed concurrently. In some cases, more operations can increase performance by enabling \VHLS to instantiate more operators when pipelining the inner loop. In this code, however, it is still difficult to improve the II of the inner loop because of the recurrence through the accumulation. On the other hand, because the II is greater than 1, many of those operations can be shared on the same operators.
A partially unrolled version of the code is shown in Figure \ref{fig:spmv_unrolled}. In this code, the L2 loop has been split into two loops labeled \lstinline|L2_1| and \lstinline|L2_2|. The innermost \lstinline|L2_2| executes a parameterized number of times, given by the compile-time parameter \lstinline|S|. The body of the inner loop contains the body of the original \lstinline|L2| loop, along with a condition that arises from the loop bound of the original \lstinline|L2| loop. In this code, we now have an arbitrary number of multiply and add operations to execute in the body of the \lstinline|L2_1| loop, given by the parameter \lstinline|S|, and a single recurrence through the accumulation \lstinline|y0 += yt|.
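Since the listing in Figure \ref{fig:spmv_unrolled} is included from an external file, the fragment below sketches one way the restructuring described above might look. The loop labels, the parameter \lstinline|S|, and the variables \lstinline|y0| and \lstinline|yt| are taken from the text; everything else is an assumption, and the actual code may differ in its details.
\begin{lstlisting}
 L1: for (int i = 0; i < NUM_ROWS; i++) {
    DTYPE y0 = 0;
 L2_1: for (int k = rowPtr[i]; k < rowPtr[i + 1]; k += S) {
#pragma HLS pipeline
      DTYPE yt = 0;
 L2_2: for (int j = 0; j < S; j++) {
        // Guard derived from the original L2 loop bound: positions past
        // the end of the row contribute nothing to the sum.
        int kj = k + j;
        DTYPE term = (kj < rowPtr[i + 1]) ?
                     values[kj] * x[columnIndex[kj]] : (DTYPE)0;
        // L2_2 is fully unrolled when L2_1 is pipelined, since S is a
        // compile-time constant.
        yt += term;
      }
      // The only true loop-carried recurrence.
      y0 += yt;
    }
    y[i] = y0;
  }
\end{lstlisting}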
Note that the code in Figure \ref{fig:spmv_unrolled} is slightly different from the code that is generated from automatic loop unrolling. Automatic loop unrolling duplicates operations, but must also preserve the order of each operation (additions in this case). This results in a long chain of operation dependencies in the inner loop shown on the left side of Figure \ref{fig:spmv_partial_unroll}. Reordering the operations results in the operation dependencies shown on the right side of the figure. In this case, only the final accumulation results in a recurrence. When using floating-point data types, this reordering of operations can slightly change the behavior of the program, so \VHLS does not apply this kind of operation reordering automatically.
\begin{figure}
\lstinputlisting{examples/spmv_unrolled.cpp}
\caption{A partially unrolled version of the \lstinline|spmv| code from Figure \ref{fig:spmv_arch1}.}
\label{fig:spmv_unrolled}
\end{figure}
\begin{figure}
\centering
\includesvg{spmv_partial_unroll}
\caption{Two different partially unrolled versions of an accumulation. The version on the left has a recurrence with three additions, whereas the version on the right only has one addition in the recurrence.}
\label{fig:spmv_partial_unroll}
\end{figure}
A possible implementation of this design is shown in Figure \ref{fig:spmv_unrolled_behavior}. In this case, \lstinline|S = 3| to match the best achievable II where there is a latency of 3 through the adder. Here we see that all the operations have been successfully shared on a single multiplier and adder. Comparing this behavior to the behavior in Figure \ref{fig:spmv_pipeline_inner}, we see that there are some disadvantages. In particular, the depth of the pipeline of the inner loop is much longer, which implies that the number of cycles to flush the pipeline before starting a new iteration of the outer \lstinline|L1| loop is much larger. Processing of the non-zero elements in a row also occurs in blocks of size \lstinline|S|. A row with 3 elements takes exactly the same time to compute as a row with one element. The remaining operations which are still scheduled in the loop pipeline must still `execute' even though their results are discarded. In order to rigorously compare the characteristics of the two designs, we need to understand the expected number of non-zero elements in each row of the matrix. Fewer non-zero elements per row would favor the first implementation, while more non-zero elements per row would favor the second implementation.
\begin{figure}
\centering
\includesvg{spmv_unrolled_behavior}
\caption{Architecture and behavior of the \lstinline{spmv} code based on the partially unrolled and pipelined inner loop shown in Figure \ref{fig:spmv_unrolled}.}
\label{fig:spmv_unrolled_behavior}
\end{figure}
Notice that there is, to some extent a chicken-and-egg problem here. We need to know the target device and clock period to determine the number of pipeline stages required for the adder to meet timing. Only after we know the number of pipeline stages (perhaps by running with \lstinline|S=1| and investigating the \VHLS logs to identify the adder recurrence) can we select an appropriate version of the parameter \lstinline|S| that achieves II=1. Once we've determined \lstinline|S|, we can run \gls{cosimulation} to determine the achieved performance on a set of benchmark test data. Because of the variable loop bounds, the achieved performance is data-dependent so we might have to explore different values of \lstinline|S| to determine the value that maximizes performance. Changing the target device or clock period might affect all of these decisions! Although it may seem like high-level synthesis provides little assistance in solving this problem, it's still much faster (and possible to easily script) compared to evaluating each new version with a new \gls{rtl} design that must be verified!
\begin{exercise}
The behavior in Figure \ref{fig:spmv_unrolled_behavior} is achieved when \lstinline|S| is the same as the number of pipeline stages for the adder. What happens to the behavior when \lstinline|S| is set larger? What happens to the behavior when it is set smaller? What happens when the target II is smaller than \lstinline|S|? What happens when the target II is larger?
\end{exercise}
\section{Conclusion}
In this chapter, we looked at sparse matrix-vector multiplication (SpMV). This continues our study of matrix operations. This operation is particularly interesting because it uses a unique data structure. In order to reduce the amount of storage, the matrix is stored in a compressed row storage format. This requires a design that uses some indirect references to find the appropriate entry in the matrix.
This chapter is the first to discuss at length the testing and simulation abilities of the \VHLS tool. We provide a simple testbench for SpMV and describe how it can be integrated into the HLS work-flow. Additionally, we describe the C/RTL cosimulation features of the \VHLS tool. This is particularly important for us in order to get precise performance results. The task interval and task latency depend upon the input data. The less sparse the matrix, the more computation that must be performed. The cosimulation provides a precise trace of execution using the given testbench. This allows the tool to compute the clock cycles to include in the performance results. Finally, we discuss optimizing the code using loop optimizations and array partitioning.
\section{Secondary Storage}
\paragraph{Secondary Storage --- Structure}
\begin{itemize}
\item hard disk drives
\item solid state drive
\item RAID structure
\item tertiary storage devices (DVD, magnetic tape)
\end{itemize}
\paragraph{Hard Disk Drives --- Anatomy}
\begin{itemize}
\item stack of magnetic platters
\item disk arm carries one disk head per recording surface; the heads read from/write to the platters
\item \textbf{Storage}:
\begin{itemize}
\item platters divided into concentric \emph{tracks}
\item \emph{cylinder}: stack of tracks of fixed radius
\item tracks of fixed radius divided into \emph{sectors}
\end{itemize}
\end{itemize}
\paragraph{Flash Memory}
\begin{itemize}
\item \textbf{advantages}:
\begin{itemize}
\item[+] solid state
\item[+] lower power consumption/heat
\item[+] no mechanical seek
\end{itemize}
\item \textbf{disadvantages}:
\begin{itemize}
\item[-] limited number of overwrites
\item[-] limited durability
\end{itemize}
\end{itemize}
\paragraph{RAID}
\begin{itemize}
\item \textbf{Idea}: improve performance/reliability of storage system by storing redundant data
\end{itemize}
\begin{figure}[h]\centering\label{RAID}\includegraphics[width=0.2\textwidth]{RAID}\end{figure}
\chapter{Data Structures}
\cpbookimport{OrderedSet.h}
\cpbookimport{Bit.h}
\cpbookimport{ST.h}
\cpbookimport{STLazy.h}
\cpbookimport{STPersistent.h}
\cpbookimport{STPersistentFast.h}
\cpbookimport{DSU.h}
\cpbookimport{2dBit.h}
\cpbookimport{treap.h}
\cpbookimport{ImpTreap.h}
\documentclass[slidestop]{beamer}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage{hyperref}
\title{Open and Reproducible Scientific Programming}
\providecommand{\myConference}{Lab-J work discussion}
\providecommand{\myDate}{Wednesday, 30 January 2013}
\author{Martijn Vermaat}
\providecommand{\myGroup}{}
\providecommand{\myDepartment}{Department of Human Genetics}
\providecommand{\myCenter}{Center for Human and Clinical Genetics}
\providecommand{\lastCenterLogo}{
\raisebox{-0.1cm}{
\includegraphics[height = 1cm]{lgtc_logo}
}
}
\providecommand{\lastRightLogo}{
\includegraphics[height = 0.7cm]{nbic_logo}
%\includegraphics[height = 0.8cm]{nwo_logo_en}
}
\usetheme{lumc}
\begin{document}
% This disables the \pause command, handy in the editing phase.
%\renewcommand{\pause}{}
% Make the title page.
\bodytemplate
%\section{Introduction}
\frame{
\frametitle{Open and Reproducible Scientific Programming}
\tableofcontents
}
\section{Academic publishing}
\begin{frame}
\frametitle{Scientific findings}
Should be
\begin{itemize}
\item Replicable
\uncover<2->{Verify method and result}
\item Reproducible
\uncover<3->{Verify result by other means}
\item Reusable
\uncover<4->{Build on previous results}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Data}
\begin{itemize}[<+->]
\item Data should be published with the paper
\item Policies often include additional requirements
\item Data standards
\item Specific databases
\item E.g. publish genomic variants
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Data policy example}
\begin{quote}
Authors must follow standards and practice for data deposition in
publicly available resources including those created for gene sequences,
microarray expression, structural studies, and similar kinds of
data. Failure to comply with community standards may result in rejection.
\end{quote}
(PLOS ONE Publication Criteria)
\end{frame}
\begin{frame}
\frametitle{Source code}
\begin{itemize}[<+->]
\item Results often depend on computational analysis
\item Source code should be published with the paper
\item Are there policies?
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Source code policy example (1)}
\begin{quote}
Of the 20 most-cited journals in 2010 from all fields of science, only
three (including Science) have editorial policies requiring
availability of computer source code upon publication. This stands in stark
contrast to near-universal agreement among the 20 on policies regarding
availability of data and other enabling materials.
\end{quote}
(A. Morin et al. Shining Light into Black Boxes. Science 13 April 2012:
Vol. 336 no. 6078 pp. 159--160)
\end{frame}
\begin{frame}
\frametitle{Source code policy example (2)}
\begin{quote}
Although it is now accepted that data should be made available on
request, the current regulations regarding the availability of software
are inconsistent. We argue that, with some exceptions, anything less than
the release of source programs is intolerable for results that depend on
computation.
\end{quote}
(D.C. Ince et al. The case for open computer programs. Nature 482, 485--488
(23 February 2012))
\end{frame}
\begin{frame}
\frametitle{Source code policy example (3)}
\begin{quote}
Nature does not require authors to make code available, but we do
expect a description detailed enough to allow others to write their
own code to do a similar analysis.
\end{quote}
(Devil in the details. Nature 470, 305--306 (17 February 2011))
\end{frame}
\begin{frame}
\frametitle{Source code policy example (4)}
\begin{quote}
The editors of Bioinformatics encourage authors to make their source
code available and, if possible, to provide access through an open source
license (see http://www.opensource.org for examples). Authors should make
every effort to use URLs that will remain stable. At the minimum, authors
must provide one of: webserver, source code or binary.
\end{quote}
(Bioinformatics Instructions to Authors)
\end{frame}
\begin{frame}
\frametitle{Source code policy example (5)}
\begin{quote}
To address the growing complexity of data and analyses, Science is
extending our data access requirement listed above to include computer
codes involved in the creation or analysis of data.
\end{quote}
(B. Hanson et al. Making Data Maximally Available. Science 11 February 2011:
Vol. 331 no. 6018 p. 649)
\end{frame}
% Todo: new section?
\begin{frame}
\frametitle{Value of software}
\begin{quote}
Time spent on software development that doesn't result in
widely-recognized deliverables such as publications or grants is essentially
time wasted, and will be inversely correlated with your chances of success
as an academic.
\end{quote}
\vspace{1cm}
\pause
\begin{itemize}[<+->]
\item What counts in academia?
\item Reputation
\item Recognize software as a product of research
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Value of software}
Recent change to the NSF grant proposal guide:
\begin{quote}
The acknowledgement of datasets, patents, software, and copyrights as
citable products of research, eligible for inclusion in a researcher's
biosketch.
\end{quote}
(National Science Foundation, January 2013)
\end{frame}
\begin{frame}
\frametitle{Example: 1000 Genomes Project}
\begin{quote}
The 1000 Genomes Project, for example, a project to sequence and
analyse more than a thousand genomes, has carefully detailed its
workflows, and makes both its data and its procedures available for
the world to see.
\end{quote}
(Devil in the details. Nature 470, 305--306 (17 February 2011))
\end{frame}
\begin{frame}
\frametitle{Example: ENCODE}
\begin{quote}
As part of the supplementary material for this paper, we have established
a virtual machine instance of the software, using the code bundles from
ftp.ebi.ac.uk/pub/databases/ensembl/encode/supplementary/, where each
analysis program has been tested and run.
\end{quote}
\begin{quote}
Where possible the VM enables complete reproduction of the analysis as it
was performed to generate the figures, tables or other information.
\end{quote}
(ENCODE Virtual Machine and Cloud Resource)
\end{frame}
\section*{}
\frame{
\frametitle{Open and Reproducible Scientific Programming}
\tableofcontents
}
\section{Reasons not to open your source code}
\begin{frame}
\frametitle{1. ``Obligates me to support it''}
\pause
No, it doesn't.
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[clip=true,trim=0cm 17cm 0cm 0cm,width=\paperwidth,keepaspectratio=true]{publish}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{2. ``I'm ashamed of my bad code''}
\pause
\begin{itemize}[<+->]
\item If it is good enough to do the job, it's good enough to be released
(in support of the publication)
\item Opening might improve it
\item CRAPL license:
\begin{enumerate}
  \item Source and modifications used to validate scientific claims must be
    released with those claims
\item Authors are absolved of shame, embarrassment and ridicule for
ugly code
\end{enumerate}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{3. ``It takes time/effort/money''}
\pause
\begin{itemize}
\item It doesn't have to
\item For scientists, publication tends to imply perfectionism
\item Very good infrastructure is available (GitHub, Bitbucket, Google
Code, SourceForge)
\item These are free and easy to use
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{4. ``They will steal my idea''}
\pause
\begin{itemize}
\item I think this rarely matters in practice
\item If you must, await publication
\item Getting it out claims territory %establishes precedence
\end{itemize}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{preprints1}
\end{center} \vfill}
}
\frame{}
}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{preprints2}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{5. ``An algorithmic description is good enough''}
\pause
\begin{itemize}
\item It really isn't
\item Descriptions are ambiguous and imperfect
\item Makes replication impossible
\item Makes reuse impossible
\end{itemize}
\end{frame}
\section*{}
\frame{
\frametitle{Open and Reproducible Scientific Programming}
\tableofcontents
}
\section{Sharing computational analyses}
\begin{frame}
\frametitle{Replicability}
Not only for verification, but also to regenerate lost data and to generate
further data.
\vspace{0.5cm}
\pause
Requirements:
\begin{enumerate}
\item Same input data
\item Same build tools and settings
\item Same source code
\item Same software settings
\item Same OS and environment
\item Same hardware
\end{enumerate}
\vspace{0.5cm}
\pause
Provenance needed
\end{frame}
\begin{frame}
\frametitle{Data analysis}
\begin{itemize}
\item<1-> Many workflow systems exist
\item<2-> myExperiment to share workflows
\item<3-> Taverna (using web services)
\item<4-> A more extreme approach:
\end{itemize}
\uncover<4->{
\begin{quote}
We propose capturing and exchanging computational pipelines using
complete digital representations of the entire computing environment
needed to execute the pipeline.
\end{quote}
(J.T. Dudley et al. In silico research in the era of cloud computing. Nature
Biotechnology 28, 1181--1185 (2010))
}
\end{frame}
\begin{frame}
\frametitle{Integrate code with data, figures, and text}
\begin{itemize}[<+->]
\item Literate programming (e.g. Sweave)
\item IPython Notebook
\item Great for exploratory research
\end{itemize}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{notebook}
\end{center} \vfill}
}
\frame{}
}
\section*{}
\frame{
\frametitle{Open and Reproducible Scientific Programming}
\tableofcontents
}
\section{Best practices}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.55\paperwidth,keepaspectratio=true]{wtfm}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{The next step}
\begin{itemize}
\item We can do better than just having code
\item Policies should define additional requirements
\item Follow best practices (compare data standards)
\item Academics are not necessarily good programmers
\end{itemize}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[clip=true,trim=1cm 10cm 3cm 0cm,width=\paperwidth,keepaspectratio=true]{compute}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{Rules, commandments, and best practices}
Prli\'c A, Procter JB. {\bf Ten Simple Rules for the Open Development of
Scientific Software}. PLoS Comput Biol 8(12), 2012.
\vspace{0.5cm}
Vince Buffalo. {\bf The ten commandments of scientific coding}. Published on
GitHub, August 10, 2012.
\vspace{0.5cm}
Wilson et al. {\bf Best Practices for Scientific Computing}. Preprint,
arXiv:1210.0530, 29 Nov 2012.
\end{frame}
\begin{frame}
\frametitle{Ten simple rules for the open development of scientific
software}
\begin{enumerate}
\item Don't Reinvent the Wheel
\item Code Well
\item Be Your Own User
\item Be Transparent
\item Be Simple
\item Don't Be a Perfectionist
\item Nurture and Grow Your Community
\item Promote Your Project
\item Find Sponsors
\item Science Counts
\end{enumerate}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{commandments}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item {\bf Thou shall use version control}
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Thou shall use version control}
Two main uses:
\pause
\begin{enumerate}[<+->]
\item Keeping track of files, versions, changes
\item Collaboration
\end{enumerate}
\vspace{1cm}
\pause
Everything that has been created manually goes in.
\vspace{1cm}
\pause
Being able to go back to computation of previous results aids
replicability.
\end{frame}
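\begin{frame}[fragile]
\frametitle{Thou shall use version control}
A minimal session (Git shown as an example, file names made up;
any modern version control system works similarly):
\begin{verbatim}
git init
git add analysis.py data/README
git commit -m "first working version of the analysis"
# ... edit, rerun, repeat ...
git log
git checkout <commit> -- analysis.py
\end{verbatim}
Every published figure can then be tied to one specific commit.
\end{frame}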
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item {\bf Thou shall use existing libraries whenever possible}
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Thou shall use existing libraries whenever possible}
\begin{itemize}[<+->]
\item Re-use instead of rewrite
\item Search on Google, GitHub, etc
\item Document dependencies and their versions
\end{itemize}
\end{frame}
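\begin{frame}[fragile]
\frametitle{Thou shall use existing libraries whenever possible}
One way to document dependencies and their versions in a Python project
(package names and version numbers below are only an example) is a pinned
requirements file:
\begin{verbatim}
# requirements.txt
numpy==1.6.2
scipy==0.11.0
biopython==1.60
\end{verbatim}
\verb|pip install -r requirements.txt| then recreates the same set of
packages on another machine.
\end{frame}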
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item {\bf Thou shall try to unit test}
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Thou shall try to unit test}
\begin{itemize}[<+->]
\item Prevent regressions
\item Turn bugs into test cases
\item Automate this
\end{itemize}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{travis}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item {\bf Thou shall read code other than thy own}
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Thou shall read code other than thy own}
\begin{itemize}
\item Like academics should keep up with the literature
\item Prevents isolation and tunnel vision
\item Code review (ideally pre-merge)
\end{itemize}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{review}
\end{center} \vfill}
}
\frame{}
}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{comments}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item {\bf Thou shall write documentation}
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.9\paperwidth,keepaspectratio=true]{overlyhonestmethods}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{Thou shall write documentation}
\begin{itemize}[<+->]
\item For users and for developers
\item Interfaces and reasons, not implementations
\item Refactor rather than explain
\item Ideally embedded in the software
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item {\bf Thou shall write modular code}
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Thou shall write modular code}
\begin{itemize}[<+->]
\item Generalize instead of copy/paste and edit
\item Have a single representation for every piece of data
\item One extreme would be Taverna
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The ten commandments of scientific coding}
\begin{enumerate}
\item Thou shall use version control
\item Thou shall comment thy code
\item Thou shall use existing libraries whenever possible
\item Thou shall try to unit test
\item Thou shall not make up statistical procedures
\item Thou shall read code other than thy own
\item Thou shall write documentation
\item Thou shall beware of floating point issues
\item Thou shall write modular code
\item Thou shall follow coding standards
\end{enumerate}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[clip=true,trim=1.5cm 21cm 0cm 0cm,width=\paperwidth,keepaspectratio=true]{practices}
\end{center}\vspace{3cm} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{Best practices for scientific computing}
\begin{enumerate}
\item Write programs for people, not computers
\item Automate repetitive tasks
\item Use the computer to record history
\item Make incremental changes
\item Use version control
\item Don't repeat yourself (or others)
\item Plan for mistakes
\item Optimize software only after it works correctly
\item Document design and purpose, not mechanics
\item Collaborate
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Initiatives}
\begin{itemize}
\item Science Code Manifesto
\item Bioinformatics Testing Consortium
\item Software Carpentry
\item Our programming course
\end{itemize}
\end{frame}
{
\setbeamercolor{background canvas}{bg=}
\usebackgroundtemplate{
\parbox[c][\paperheight]{\paperwidth}{%
\vfill \begin{center}
\includegraphics[width=0.8\paperwidth,keepaspectratio=true]{manifesto}
\end{center} \vfill}
}
\frame{}
}
\begin{frame}
\frametitle{Programming course with Python}
Why Python?
\begin{itemize}
\item Language itself (level of abstraction, syntax)
\item Suitable for exploratory research
\item Investment of the scientific community and our department
\item Many bioinformatics tools available
\end{itemize}
\end{frame}
\section*{Open and Reproducible Scientific Programming}
\begin{frame}
\frametitle{Conclusions}
\begin{itemize}
\item Disclosure and publication requirements for source code are not in
  line with those for other types of scientific data and materials
\item This is changing
\item There is no good reason not to open code
\item Tool support for managing scientific code
\item Many best practices are available
\end{itemize}
\end{frame}
\section*{Questions?}
\lastpagetemplate
\begin{frame}
\begin{center}
Acknowledgements:\\
\vspace{0.8cm}
Jeroen Laros\\
Leon Mei\\
Johan den Dunnen
\end{center}
\vspace{1cm}
{\tiny
Links to all sources are included on last slide
}
\end{frame}
\section*{Extra}
\extrapagetemplate
\begin{frame}
\frametitle{Stop Hosting Data and Code on your Lab Website (1)}
Survey of URL stability:
\begin{itemize}
\item Of the 1630 URLs identified in PubMed abstracts, only 63\% were
consistently available
\item That rate was far worse for anonymous login FTP sites (33\%)
\end{itemize}
Wren, Jonathan D. 404 not found: the stability and persistence of URLs
published in MEDLINE. Bioinformatics 20.5 (2004): 668-672.
\end{frame}
\extrapagetemplate
\begin{frame}
\frametitle{Stop Hosting Data and Code on your Lab Website (2)}
Survey of 1000 web services published in the Nucleic Acids Research Web
Server Issue (2003--2009):
\begin{itemize}
\item 72\% available at the published address
\item Functionality could not be tested for 33\% because no example data
  were provided; 13\% did not work as expected
\item Positive functionality confirmed for 45\%
\item 274 of 872 corresponding authors answered email
\item Of these 78\% said a service was developed by a student or temporary
researcher, and many had no plan for maintenance after the researcher
had moved on to a permanent position
\end{itemize}
Schultheiss, Sebastian J., et al. Persistence and availability of web
services in computational biology. PLoS ONE 6.9 (2011): e24914.
\end{frame}
\section*{Sources}
\extrapagetemplate
\begin{frame}
Clickable links, in no particular order
\vspace{0.3cm}
{\fontsize{3.5}{5}\selectfont
\href{http://www.nature.com/nbt/journal/v28/n11/full/nbt1110-1181.html}{J.T. Dudley and A.J. Butte. In silico research in the era of cloud computing. Nature Biotechnology 28, 1181--1185 (2010)}\\
\href{http://www.sciencemag.org/content/336/6078/159.summary}{A. Morin et al. Shining Light into Black Boxes. Science 13 April 2012: Vol. 336 no. 6078 pp. 159--160}\\
\href{http://www.nature.com/nature/journal/v482/n7386/full/nature10836.html}{D.C. Ince, L. Hatton and J. Graham-Cumming. The case for open computer programs. Nature 482, 485--488 (23 February 2012)}\\
\href{http://www.sciencemag.org/content/331/6018/649.summary}{B. Hanson, A. Sugden, and B. Alberts. Making Data Maximally Available. Science 11 February 2011: Vol. 331 no. 6018 p. 649}\\
\href{http://www.nature.com/nature/journal/v470/n7334/full/470305b.html}{Editorial. Devil in the details. Nature 470, 305--306 (17 February 2011)}\\
\href{http://www.sciencemag.org/content/334/6060/1226.abstract}{R.D. Peng. Reproducible Research in Computational Science. Science 2 December 2011: Vol. 334 no. 6060 pp. 1226--1227}\\
\href{http://online.liebertpub.com/doi/abs/10.1089/omi.2006.10.94}{C. Brooksbank and J. Quackenbush. Data Standards: A Call to Action. OMICS: A Journal of Integrative Biology. June 2006, 10(2): 94--99}\\
\href{http://arxiv.org/abs/1210.0530}{G. Wilson et al. Best Practices for Scientific Computing. Preprint, arXiv:1210.0530, 29 Nov 2012}\\
\href{http://arxiv.org/abs/1010.1092}{K.A. Baggerly and K.R. Coombes. Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology. Preprint, arXiv:1010.1092, 6 Oct 2010}\\
\href{http://www.nature.com/news/2010/101013/full/467775a.html}{Z. Merali. Computational science: ...Error. Nature 467, 775--777 (2010)}\\
\href{https://www.zotero.org/scottbot/items/itemKey/GCQEH25G}{B. D. McCullough et al. Lessons from the JMCB Archive. Journal of Money, Credit, and Banking. Volume 38, Number 4, June 2006}\\
\href{http://www.ncbi.nlm.nih.gov/pubmed/16646837}{R. Gentleman. Reproducible research: a bioinformatics case study. Stat Appl Genet Mol Biol. 2005;4:Article2. Epub 2005 Jan 11}\\
\href{http://www.nature.com/news/2010/101013/full/467753a.html}{N. Barnes. Publish your computer code: it is good enough. Nature 467, 753 (2010)}\\
\href{http://www.oxfordjournals.org/our_journals/bioinformatics/for_authors/general.html}{Oxford Bioinformatics Instructions to Authors}\\
\href{http://www.plosone.org/static/publication}{PLOS ONE Publication Criteria}\\
\href{http://www.galter.northwestern.edu/news/index.cfm/2012/10/9/Datasets-Software-Eligible-for-Listing-in-NSF-Biosketches}{P. Shaw. Datasets, Software Eligible for Listing in NSF Biosketches. Galter Health Sciences Library website (9 October 2012)}\\
\href{http://scofield.bx.psu.edu/~dannon/encodevm/}{ENCODE Virtual Machine and Cloud Resource}\\
\href{http://simplystatistics.org/2013/01/23/statisticians-and-computer-scientists-if-there-is-no-code-there-is-no-paper/}{J. Leek. Statisticians and computer scientists -- if there is no code, there is no paper. Simply Statistics (23 January 2013)}\\
\href{http://www.bendmorris.com/2012/12/what-incentives-are-there-to-maintain.html}{B. Morris. What incentives are there to maintain software in academia? Ben Morris' Macroblog (29 December, 2012)}\\
\href{http://gettinggeneticsdone.blogspot.nl/2013/01/stop-hosting-data-and-code-on-your-lab.html}{S. Turner. Stop Hosting Data and Code on your Lab Website. Getting Genetics Done (8 January 2013)}\\
\href{http://www.johndcook.com/blog/2010/10/19/buggy-simulation-code-is-biased/}{J.D. Cook. Buggy code is biased code. The Endeavour (19 October 2010)}\\
\href{https://gist.github.com/3311557}{V. Buffalo. The Ten Commandments of Scientific Coding. GitHub Gist (8 Oct 2012)}\\
\href{http://www.slideshare.net/jandot/b-temperton-the-bioinformatics-testing-consortium}{B. Temperton. The Bioinformatics Testing Consortium. BOSC2012 (Jul 16, 2012)}\\
\href{https://speakerdeck.com/ptomato/open-and-reproducible-scientific-programming}{P. Chimento. Open and reproducible scientific programming. Apr 5, 2012}\\
\href{http://www.osnews.com/story/19266/WTFs_m}{OSNews on WTFs/m}\\
\href{https://twitter.com/ianholmes/status/288689712636493824}{Ian Holmes (@ianholmes) on Twitter about \#overlyhonestmethods}\\
\href{https://twitter.com/luispedrocoelho/status/238632048313647104}{Luis Pedro Coelho (@luispedrocoelho) on Twitter about scientists ashamed of their code}\\
\href{https://twitter.com/ianholmes/status/250608615361241089}{Ian Holmes (@ianholmes) on Twitter about biologists, physicists and preprints (1)}\\
\href{https://twitter.com/ianholmes/status/250608825428746242}{Ian Holmes (@ianholmes) on Twitter about biologists, physicists and preprints (2)}\\
\href{http://jrjohansson.github.com/}{R. Johansson. Lectures on scientific computing with Python}\\
\href{http://matt.might.net/articles/crapl/}{M. Might. The CRAPL: An academic-strength open source license}\\
\href{http://sciencecodemanifesto.org/}{Science Code Manifesto}\\
\href{http://biotest.cgrb.oregonstate.edu/}{Bioinformatics Testing Consortium}\\
\href{http://software-carpentry.org/}{Software Carpentry}\\
\href{http://ipython.org/}{F. P\'erez and B.E. Granger. IPython: A System for Interactive Scientific Computing. Computing in Science and Engineering, vol. 9, no. 3, pp. 21--29, May/June 2007}\\
\href{https://travis-ci.org/}{Travis CI. A hosted continuous integration service for the open source community}\\
\href{https://github.com/}{GitHub. Build software better, together}\\
}
\end{frame}
\end{document}
\title{\bf{\huge{
DRAFT *** DRAFT *** DRAFT \\
Users Manual for \\
BRL-CAD Graphics Editor \\
MGED
}}}
\author{
Keith A. Applin \\
Michael J. Muuss \\
Robert J. Reschly \\
{\em US Army Ballistic Research Laboratory} \\
{\em Aberdeen Proving Ground, MD} \\
\and
Alan Collier \\
{\em US Army Foreign Science and Technology Center} \\
{\em Charlottesville, VA} \\
\and
Mike Gigante \\
Ian Overend \\
{\em The Royal Melbourne Institute of Technology} \\
{\em Australia}
}
\date{6-October-1988}
\maketitle
\tableofcontents
\listoffigures
% ---------------------------------------------------------------------------
\chapter{INTRODUCTION}
Computer graphics is one of the fastest growing fields
in the computer industry.
Computer graphics has applications in many diverse areas, from electronic
games to medicine; from cartoons to the space industry. Just
what is interactive computer graphics and why is it so versatile?
Human visual perception is quite keen and communicating with a
computer is generally faster and easier
with images, rather than with
numbers. Furthermore, by
having the computer continuously updating a display,
the display itself becomes the communications medium.
The user converses with the computer through the display using
devices such as light pens,
mice, data tablets, buttons, and
knobs. The response of the computer is immediately reflected
on the display,
providing a fast communication channel between person and machine.
This technology is called interactive computer graphics.
As the Army's lead laboratory for vulnerability technology, the
Ballistic Research Laboratory (BRL) constantly performs
analyses for a wide variety of military systems.
Three dimensional computer models of the
physical characteristics of these systems
are vital to these studies.
Since the mid-1960's, BRL has used a solid modeling technique
called Combinatorial Solid Geometry (CSG or COMGEOM)
for representing these models.
The COMGEOM technique uses
Boolean logic operations to combine basic geometric
shapes or primitives to produce complex three-dimensional objects.
The COMGEOM geometric models are processed by
the Geometric Information
For Targets (GIFT)
\cite{gift1,gift2}
and LIBRT
\cite{solid-models}
for use in follow-on engineering analysis.
Geometric models are large collections
of numerical data which have traditionally
been created and edited manually, and analyzed in a batch environment.
The production and modification of geometric models has been a slow,
labor-intensive
process.
In 1980, BRL initiated an effort to improve the response
time of the geometric modeling process by applying interactive
computer graphics techniques.
As a result of this work, BRL
developed the Multi-device Graphics EDitor (MGED),
an interactive editor for solid models
based on the COMGEOM technique.
Using MGED, a designer can build, view, and modify model descriptions
interactively by manipulating the graphical representation,
receiving immediate visual feedback on a graphics display.
MGED replaces the manual method for the production
and modification of geometric models.
Before MGED was built,
existing packages were evaluated with respect to
their utility for the geometric modeling process.
Quite an exhaustive search of commercially available systems
was conducted and none were found which met
the BRL requirements.
A study was initiated to examine the feasibility of producing
the required capability in-house;
this study produced a preliminary version of MGED which
quite convincingly demonstrated the
feasibility of such an undertaking \cite{interactive-construction}.
It was then decided to develop MGED into a full production code.
Production MGED code has been used since January 1982 to
build models interactively at BRL.
This report is intended to serve as a user manual
for the MGED program.
The process of viewing and editing a description using MGED
is covered in detail. The internal data structure is also covered, as
it is an important part of the overall design.
All the commands will be discussed and a command summary table presented.
Also, a section will be devoted to the hardware interfaces for each
major class of workstations which MGED supports.
\section{Philosophy}
The role of CAD models at BRL differs somewhat from
that of CAD models being built in the automobile and aerospace industries,
resulting in some different design choices
being made in the BRL-CAD software.
Because BRL's main use for these models is to conduct detailed
performance and survivability analyses of large complex vehicles,
it is required that the model of an entire vehicle be completely contained
in a single database suitable for interrogation by the application codes.
This places especially heavy demands on the database software.
At the same time, these analysis codes require less detail
than would be required if NC machining were the primary goal.
At BRL, there are only a small number of primary designers responsible
for the design of a vehicle, and for the construction of the corresponding
solid model. Together they decide upon and construct the
overall structure of the model,
then they perform the work of building substructures in parallel,
constantly combining intermediate results into the full model database.
Because of the need to produce rapid prototypes (often creating a full design
within a few weeks), there is no time for a separate integration stage;
subsystem integration must be an ongoing part of the design process.
Once an initial vehicle design is completed, there is usually the
need for exploring many alternatives. Typically, between three and twelve
variations of each design need to be produced, analyzed, and optimized
before recommendations for the final design can be made.
Also, there is a constantly changing definition of performance;
new developments may necessitate rapidly re-evaluating
all the designs of the past several years for trouble spots.
The user interface is designed to be powerful and ``expert friendly'' rather
than foolproof for a novice to use.
However, it only takes about two days for new users to start doing useful
design work with MGED.
True proficiency comes with a few months practice.
Finally, it is vitally important that the software offer the same capabilities
and user interface across a wide variety of display and processor hardware.
Government procurement regulations make single-vendor solutions difficult.
The best way to combat this is with highly portable software.
\section{Displays Supported}
It is important for a CAD system to have a certain degree of independence
from any single display device in order to provide longevity of the
software and freedom from a single equipment supplier.
The MGED editor supports serial use of multiple displays by way of
an object-oriented programmatic
interface between the editor proper and the display-specific code.
All display-specific code for each type of hardware is isolated
in a separate {\em display manager} module.
High performance of the display manager was an important design goal.
Existing graphics libraries
were considered, but no well established standard existed with the necessary
performance and 3-dimensional constructs.
By having the display manager modules incorporated as a direct part of
the MGED editor, the high rates of display update necessary to deliver
true interactive response are possible, even when using CPUs of modest power.
An arbitrary number of
display managers may be included in a copy of MGED, allowing the user
to rapidly and conveniently move his editing session from display to display.
This is useful for switching between several displays, each of
which may have unique benefits: one might have color capability,
and another might have depth cueing.
The {\bf release} command closes out MGED's use of the current
display, and does an implicit attach to the ``null'' display manager.
This can be useful to allow another user to briefly examine an image
on the same display hardware without having to lose the state of
the MGED editing session. The {\bf attach} command is used to
attach to a new display via its appropriate display manager routines.
If another display is already attached, it is released first.
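For example, a user might briefly surrender the display hardware and
later resume editing with a sequence such as the following.
(The display type given to {\bf attach} is illustrative only;
the names actually accepted depend on which display managers were
compiled into the particular copy of MGED.)
\begin{verbatim}
release
attach ir
\end{verbatim}
Between the two commands another program may use the display hardware;
the state of the editing session itself is preserved.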
The null display manager also allows the MGED editor to be run from a normal
alphanumeric terminal with no graphic display at all. This can be useful
when the only tasks at hand involve viewing or changing
database structures, or entering or adjusting geometry parameters
in numerical form.
Creation of a new display manager module in the ``{\bf C}'' language
\cite{c-prog-lang}
generally takes an experienced
programmer from one to three days.
The uniform interface to the display manager provides two levels
of interactive support.
The first level of display support includes
the Tektronix 4014, 4016, and compatible displays,
including the Teletype 5620 bit-mapped displays.
However, while storage-tube style display devices allow MGED to
deliver the correct functionality, they lack the
rate of screen refresh needed for productive interaction.
The second level of support, including real-time interaction,
is provided by
the Vector General 3300 displays,
the Megatek 7250 and 7255 displays,
the Raster Technologies Model One/180 display,
the Evans and Sutherland PS300 displays
with either serial, parallel, or Ethernet attachment,
the Sun workstations,
and the Silicon Graphics IRIS workstation family.
\section{Portability}
Today, the half-life of computer technology is
approximately two to three years.
To realize proper longevity of the modeling software, it needs to be written
in a portable language to allow the software to be moved readily from
processor to processor without requiring the modeling software or users
to change.
Then, when it is desirable to
take advantage of the constantly increasing
processor capabilities and similarly increasing memory capacity by replacing
the installed hardware base, there are a minimum of ancillary costs.
Also, it may be desirable to connect together processors from a variety
of vendors, with the workload judiciously allocated to
the types of hardware that best support the requirements of each particular
application program.
This distribution of processing, coupled with the fact that
users are spread out over multiple locations, makes networking a vital
ingredient as well.
BRL's strategy for achieving this high level of portability was to target
all the software for the UNIX operating system,
\cite{unix-ts-sys},
with all the software written in the ``{\bf C}''
programming language \cite{c-prog-lang}.
The entire BRL-CAD Package, including the MGED editor
is currently running on all UNIX machines at BRL,
under several versions of the UNIX operating system, including
Berkeley 4.3 BSD UNIX, Berkeley 4.2 BSD UNIX, and AT\&T System V UNIX.
The list of manufacturers and models of CPUs that support the UNIX
operating system \cite{modern-tools-hi-res}
is much too lengthy to include here. However, BRL
has experience using this software on
DEC VAX 11/750, 11/780, 11/785 processors,
Gould PN6000 and PN9000 processors,
Alliant FX/8 and FX/80 processors (including systems with eight CPUs),
Silicon Graphics IRIS 2400, 2400 Turbo, 3030, 4-D, and 4-D/GT workstations,
the Cray X-MP, the Cray-2,
and the ill-fated Denelcor HEP H-1000 parallel supercomputer.
\section{Object-Oriented Design}
The central editor code has four sets of object-oriented interfaces
to various subsystems, including database access, geometry processing,
display management, and command parser/human interface.
In each case, a common interface has been defined for the set of
functions that implement the subsystem;
multiple instances of these function sets can exist.
The routines in each instance of a subsystem are completely independent
of all the routines in other function sets, making it easy to add new
instances of the subsystem. A new type of primitive geometry,
a new display manager, a new database interface, or a new command
processor can each be added simply by writing all the routines
to implement a new subsystem.
This approach greatly simplifies software maintenance, and allows
different groups to have responsibility for the
creation and enhancement of features within each of the subsystems.
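As an illustration of this style of interface in the ``{\bf C}'' language,
a subsystem such as the display manager can be captured as a table of
function pointers; each supported display contributes one instance of the
table, and the editor proper calls only through the table.
The declarations below are a schematic sketch written for this manual,
not the actual MGED source.
\begin{verbatim}
/* Schematic object-oriented subsystem interface in C.
 * Each display type supplies one instance of this function table;
 * the editor calls only through the table, never directly.
 * Names are illustrative, not the real MGED declarations.
 */
struct display_manager {
    char *dm_name;                       /* short display name      */
    int  (*dm_open)(void);               /* attach the display      */
    void (*dm_close)(void);              /* release the display     */
    void (*dm_drawline)(int x1, int y1,
                        int x2, int y2); /* vector in screen space  */
    void (*dm_update)(void);             /* flush/refresh screen    */
};

/* A "null" instance lets the editor run with no graphics at all. */
static int  null_open(void)  { return 0; }
static void null_close(void) { }
static void null_drawline(int x1, int y1, int x2, int y2) { }
static void null_update(void) { }

struct display_manager dm_null = {
    "null", null_open, null_close, null_drawline, null_update
};
\end{verbatim}
Adding support for new display hardware then amounts to writing one more
instance of this table, without touching the editor proper.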
\chapter{THE COMBINATORIAL GEOMETRY METHODOLOGY}
\section{Background}
Since the MGED system is presently based on the COMGEOM solid modeling
technique, a brief overview of the COMGEOM technique is required
to effectively use MGED.
For more detailed information on the COMGEOM technique see
\cite{gift1,gift2}.
\begin{figure}[tb]
\begin{tabular}{l l}
Symbol & Name \\
\\
ARS & Arbitrary Triangular Surfaced Polyhedron \\
ARB & Arbitrary Convex Polyhedron \\
ELLG & General Ellipsoid \\
POLY & Polygonal Faceted Solid \\
SPL & Non-Uniform Rational B-Spline (NURB) \\
TGC & Truncated General Cone \\
TOR & Torus \\
HALF & Half Space (Plane)
\end{tabular}
\caption{Basic Solid Types \label{list-of-basic-solids} }
\end{figure}
\begin{figure}[tb]
\begin{tabular}{l l}
Symbol & Name \\
\\
RPP & Rectangular Parallelepiped \\
BOX & Box \\
RAW & Right Angle Wedge \\
SPH & Sphere \\
RCC & Right Circular Cylinder \\
REC & Right Elliptical Cylinder \\
TRC & Truncated Right Cylinder \\
TEC & Truncated Elliptical Cylinder \\
\end{tabular}
\caption{Special-Case Solid Types \label{list-of-special-case-solids} }
\end{figure}
The COMGEOM technique utilizes two basic entities - a solid and a region.
A solid is defined as one of fifteen basic geometric shapes or
primitives. Figure \ref{list-of-basic-solids} lists the
basic solid types, and Figure \ref{list-of-special-case-solids}
lists special cases of the basic solid types for which support exists.
The individual parameters of each solid define the solid's
location, size, and orientation. A region is a combination of
one or more solids and is defined as the volume occupied
by the resulting combination of solids.
Solids are combined into regions using any of three logic
operations: union (OR), intersection (+), or difference (-).
The union of two solids is defined as the volume in either
of the solids.
The difference of two solids is defined as the volume of the first
solid minus the volume of the second solid.
The intersection of two solids is defined as the volume
common to both solids.
%%% XXX Figure 1 presents a graphical representation of these operations.
Any number of solids may be combined to produce a region.
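For example (the solid names here are hypothetical), a washer-shaped
region could be formed by subtracting an inner cylinder from an outer
cylinder and then adding a mounting lug with the union operator:
\begin{center}
washer = ( outer.c $-$ inner.c ) OR lug.s
\end{center}
The resulting region occupies exactly the volume of the outer cylinder
that lies outside the inner cylinder, plus the volume of the lug.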
As far as the COMGEOM technique is concerned, only a region can
represent an actual component of the model.
Regions are homogeneous; they are composed of a single material.
Each region represents a single object in the model;
the solids are only building blocks which are combined to
define the {\em shape} of the regions.
Since regions represent the components of the model, they
are further identified by code numbers.
These code numbers either identify the region as
a model component (nonzero item code)
or as air (nonzero air code).
Any volume not defined as a region is assumed to be ``universal air'' and
is given an air code of ``01''.
If it is necessary to distinguish between universal ``01'' air and any
other kind of air, then that volume must be defined as a region
and given an air code other than ``01''.
Normally, regions cannot occupy the same volume (overlap),
but regions identified with
air codes can overlap with any region identified as a component
(i.e. one that has a nonzero item code).
Regions identified with different air codes, however, cannot overlap.
\section{Directed Acyclic Graph and Database Details}
One of the critical aspects of a graphics software package
is its internal data structure.
Since geometric models often result
in very large volumes of data being generated,
the importance of the data structure here is emphasized.
Thus it is felt that a brief introduction to the
organization of the MGED database is
important for all users.
The database is stored as a single,
binary, direct-access
UNIX file for efficiency and cohesion,
with fixed length records called database {\em granules}.
Each object occupies one or more granules of storage.
The user sees and manipulates the directed acyclic graphs
through UNIX-style path names (e.g., car/chassis/door),
but in a global namespace.
There can be many independent or semi-independent
directed acyclic graphs within the same database,
each defining different models.
Such a graph can also make heavy use of the {\em instancing} capability.
As mentioned earlier, the
{\em leaves} of the graph are the primitive solids.
Commands exist to import sub-trees from other databases and libraries,
and to export sub-trees to other databases.
Also, converters exist to dump databases in printable form for
non-binary interchange.
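As a small illustration (all names other than car/chassis/door are
hypothetical), a fragment of such a graph might be organized as shown
below; the combination {\em door} is instanced under two different
assemblies, while the ``.s'' names are primitive solids at the leaves:
\begin{verbatim}
car/
    chassis/
        frame.s
        door/
    body/
        hull.s
        door/        (the same combination, instanced again)
\end{verbatim}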
\section{Model Building Philosophy}
The power of a full directed acyclic graph structure for representing
the organization of the database gives a designer a great deal of
flexibility in structuring a model.
In order to prevent chaos, most designers at BRL choose to
design the overall structure of their model in a top-down manner,
selecting meaningful names for the major structures and sub-structures
within the model.
Actual construction of the details of the
model generally proceeds in a bottom-up
manner, where each sub-system is fabricated from component primitives.
The first sub-systems to be constructed are the chassis and skin of the
vehicle, after which a set of analyses are run to validate the geometry,
checking for unintentional gaps in the skin or for solids which overlap.
The second stage of model construction is to build the features of the
main compartments of the vehicle. If necessary for the analysis
codes that will be used, the different types of air compartments within
the model also need to be described.
The final stage of model construction is to build the internal
objects to the desired level of detail.
This might include modeling engines, transmissions, radios,
people, seats, etc.
In this stage of modeling, the experienced designer will draw heavily on the
parts-bin of model components and on pieces extracted from earlier
models, modifying those existing structures to meet his particular
requirements.
Throughout the model building process it is important for the model builder
to choose part names carefully, as the MGED database currently has a
global name space, with individual node names limited to 16 characters.
In addition, BRL has defined conventions for naming the elements in the
top three levels of database structure,
allowing people to
easily navigate within models prepared at
different times by different designers.
This naming convention
facilitates the integration of design changes into existing models.
\chapter{THE BASIC EDITING PROCESS}
\section{Interaction Forms}
Textual and numeric interaction with the MGED editor is the most
precise editing paradigm because it allows exact
manipulation of known configurations.
This works well when the user is designing the model
from an existing drawing, or when all dimensions are known (or are computable)
in advance.
The use of a
tablet or mouse, knob-box or dial-box, buttons, and a joystick
are all simultaneously supported by MGED for analog inputs.
Direct graphic interaction via a ``point-push-pull'' editing paradigm
tends to be better for
prototyping, developing arbitrary geometry, and fitting
together poorly specified configurations.
Having both types of interaction capability available at all times
allows the user to select the style of interaction that best
meets his immediate requirements.
\section{The Faceplate}
\mfig faceplate, The MGED Editor Faceplate.
\mfig buttonmenu, The Pop-Up Button Menu.
When the MGED program has a display device attached, it
displays a border around the region of the screen being used
along with some ancillary status information. Together, this
information is termed the editor ``faceplate''.
See Figure \ref{faceplate}.
In the upper left corner of the display is a small enclosed area
which is used to display the current editor state;
this is discussed further in the Editor States section, below.
Underneath the state display is a zone in which three ``pop-up'' menus
may appear.
The top menu is termed the ``button menu,'' as it
contains menu items which duplicate many of the functions assigned to
the button box.
Having these frequently used
functions available on a pop-up menu
can greatly decrease the number of times that the user needs to remove
his hand from the pointing device (either mouse or tablet puck)
to reach for the buttons.
An example of the faceplate and first level menu is shown in
Figure \ref{buttonmenu}.
The second menu is used primarily for the various editing states,
at which time it contains all the editing operations which are generic
across all objects (scaling, rotation, and translation).
The third menu contains selections for object-specific editing operations.
The choices on these menus are detailed below.
It is important to note that while some display hardware that MGED runs on
has inherent support for pop-up menus included, MGED does not
presently take advantage of that support, preferring to depend
on the portable menu system within MGED instead.
It is not clear whether the slight increase in functionality that might
accrue from using display-specific menu capabilities would offset the
slight nuisance of a non-uniform user interface.
Running across the entire bottom of the faceplate is a thin rectangular
display area which holds two lines of text.
The first line always contains a numeric display of the model-space
coordinates of the center of the view and the current size of
the viewing cube, both in the currently selected editing units.
The first line also contains the current rotation rates.
The second line has several uses, depending on editor mode.
Normally it displays the formal name of the database that is being
edited, but in various editing states this second line will instead
contain certain path selection information.
When the angle/distance cursor function is activated, the second
line will be used to display the current settings of the cursor.
It is important to mention that while the database records all
data in terms of the fixed base unit of millimeters, all numeric interaction between
the user and the editor is in terms of the user-selected display [or local] units.
The user may select from millimeters, centimeters, meters, inches, and
feet, and the currently active display units are noted in the first
display line.
The concept of the ``viewing cube'' is an important one.
Objects drawn on the screen are clipped in X, Y, and Z, to the size
indicated on the first status line.
This feature allows extraneous wireframes which are positioned within view
in X and Y, but quite far away in the Z direction to not be seen,
keeping the display free from irrelevant objects when zooming in.
Some display managers can selectively enable and disable Z axis clipping
as a viewing aid.
\section{The Screen Coordinate System}
\mfig coord-axes, The Screen Coordinate System.
The MGED editor uses the standard right-handed
screen coordinate system,
as shown in Figure \ref{coord-axes}.
The Z axis is perpendicular to the screen and the positive Z direction is
out of the screen. The directions of positive (+) and negative (-) axis
rotations are also indicated. For these rotations, the ``Right
Hand Rule'' applies: Point the thumb of the right hand along the direction
of the +X axis, and the other fingers will describe the sense of positive
rotation.
\section{Changing the View}
At any time in an editing session, the user may add one or more
subtrees to the active model space. If the viewing cube is
suitably positioned, the newly added subtrees are drawn on the display.
(The ``reset'' function can always be activated to get the entire active
model space into view).
The normal mode of operation is for users to work with wireframe
displays of the unevaluated primitive solids. These wireframes can be
created from the database very rapidly.
\mfig crod, An Engine Connecting Rod.
\mfig crod-close, {Close-Up Connecting Rod, Showing Z-clipping}.
On demand, the user can request the calculation of
approximate boundary wireframes that account for
all of the boolean operations specified along the arcs of the
directed acyclic graph in the database.
This is a somewhat time consuming process, so it is not used
by default, but it is quite reasonable to use whenever the
design has reached a plateau.
Note that these boundary wireframes are not stored in the database,
and are generally used as a visualization aid for the designer.
Figure \ref{crod} shows an engine connecting rod.
On the left side is the wireframe of the unevaluated primitives
that the part is modeled with, and on the right side is the approximate
boundary wireframe that results from evaluating the boolean expressions.
Also, at any time the user can cause any part of the active model space
to be dropped from view.
This is most useful when joining two complicated subsystems
together; the first would be called up into the active model space,
manipulated until ready, and then the second subsystem would also be
called up as well. When any necessary adjustments had been made,
perhaps to eliminate overlaps or to change positioning tolerances,
one of the subassemblies could be dropped from view,
and editing could proceed.
The position, size, and orientation of the viewing cube can be
arbitrarily changed during an editing session.
The simplest way to change the view is by selecting one of nine
built in preset views, which can be accomplished by a simple keyboard
command, or by way of a button press or first level menu selection.
The view can be rotated and translated to any arbitrary position.
The user is given the ability to execute a {\bf save view} button/menu
function that attaches the current view to a {\bf restore view} button/menu
function.
The rate of rotation around each of the X, Y, and Z axes
can be selected by knob, joystick, or keyboard command.
Because the rotation is specified as a rate, the view
will continue to rotate about the view center until the rotation
rate is returned to zero.
(A future version of MGED will permit selection of rate or value
operation of the knobs).
Similarly, the zoom rate (in or out) can be set by keyboard
command or by rotating a control dial.
Also, displays with three or more mouse buttons have binary (2x) zoom
functions assigned to two of the buttons.
Finally, it is possible to set a slew rate to translate the view
center along any axis in the current viewing space, selectable
either by keyboard command or control dial.
In VIEW state, the main mouse button translates the
view center; the button is defined to cause the indicated point to become
the center of the view.
The assignment of zoom and slew functions to the mouse buttons tends to
make wandering around in a large model very straightforward.
The user uses the binary zoom-out button to get an overall view, then
moves the new area for inspection to the center of the view and uses
the binary zoom-in button to obtain a ``close up'' view.
Figure \ref{crod-close}
shows such a close up view of the engine connecting rod.
Notice how the wireframe is clipped in the Z viewing direction
to fit within the viewing cube.
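The rate semantics can be pictured with a small fragment of ``{\bf C}''
(a schematic sketch written for this manual, not the actual MGED code):
on every pass through the display refresh loop the current rates are
integrated into the accumulated view rotation, so a non-zero rate keeps
the view turning until the knob is returned to zero.
\begin{verbatim}
#define NAXES 3

static double rate[NAXES];    /* degrees per second, set by the knobs  */
static double angle[NAXES];   /* accumulated view rotation, in degrees */

void
update_view(double dt)        /* dt = seconds since the last refresh */
{
    int i;
    for (i = 0; i < NAXES; i++) {
        angle[i] += rate[i] * dt;      /* keeps turning while rate != 0 */
        if (angle[i] >= 360.0) angle[i] -= 360.0;
        if (angle[i] <   0.0)  angle[i] += 360.0;
    }
    /* ... rebuild the model-to-view rotation matrix from angle[] ... */
}
\end{verbatim}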
\section{Model Navigation}
In order to assist the user in creating and manipulating a complicated
hierarchical model structure, there is a whole family of editor commands
for examining and searching the database.
In addition, on all keyboard commands, UNIX-style regular-expression
pattern matching, such as ``*axle*'' or ``wheel[abcd]'', can be used.
The simplest editor command ({\bf t}) prints a table of contents, or directory,
of the node names used in the model. If no parameters are specified,
all names in the model are printed,
otherwise only those specified are printed.
The names of solids are printed unadorned, while the names of combination
(non-terminal) nodes are printed with a slash (``/'') appended to them.
If the user is interested in obtaining detailed information about the
contents of a node, the list ({\bf l}) command will provide it.
For combination (non-terminal) nodes, the information about all departing
arcs is printed, including the names of the nodes referenced, the boolean
expressions being used, and an indication of any translations and rotations
being applied.
For leaf nodes, the primitive solid-specific ``describe yourself''
function is invoked, which provides a formatted display of the parameters
of that solid.
The {\bf tops} command is used to find the names of all nodes which are
not referenced by any non-terminal nodes; such nodes are either
unattached leaf nodes, or tree tops.
To help visualize the tree structure of the database,
the {\bf tree} command exists to
print an approximate representation of the database subtree below the
named nodes.
The {\bf find} command can be used to find the names of all non-terminal
nodes which reference the indicated node name(s). This can be very helpful
when trying to decide how to modify an existing model.
A related command ({\bf paths}) finds the full tree path specifications
which contain a specified graph fragment, such as ``car/axle/wheel''.
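For example, a typical navigation sequence might be entered as follows;
the node names (``car'', ``axle'', ``wheel'') are purely illustrative:
\begin{verbatim}
t *axle*
l wheel
tops
tree car
find wheel
paths car/axle/wheel
\end{verbatim}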
In addition to these commands, several more commands exist
to support specialized types of searching through the model database.
\section{Editor States}
The MGED editor operates in one of six states.
Either of the two PICK states can be entered by button press,
menu selection, or keyboard command. The selection of the desired
object can be made either by using {\em illuminate mode}, or by
keyboard entry of the name of the object.
Illuminate mode is arranged such that if there are {\bf n} objects visible on
the screen, then the screen is divided into {\bf n} horizontal bands.
By moving the cursor (via mouse or tablet) up and down through these bands,
the user will cause each solid in turn to be highlighted on the screen,
with the solid's name displayed in the faceplate.
The center mouse button is pressed when the desired solid is located, causing
a transition to the next state (Object Path, or Solid Edit).
Illuminate mode offers significant advantages over more conventional pointing
methods when the desired object lies in a densely populated region of the
screen. In such cases, pointing methods have a high chance of making an
incorrect selection.
However, in sparsely populated regions of the screen, a pointing paradigm
would be more convenient, and future versions of MGED will support this.
\section{Model Units}
All databases start with an ``ident'' record which contains
a title string that identifies the model, the
current local units (e.g., mm, cm or inches) of the model,
and a database version identification number.
As noted, all numerical information
in the database is stored in the fixed base
unit of millimeters,
and all work (input and output) is done in a user-selected local unit.
The user can change his local unit at any time
by using the {\bf units} command.
This way of handling units was selected to free the user from worrying
about units conversion when components are drawn from the ``parts bin''.
\chapter{PERIPHERAL DEVICES}
Before we discuss the features of MGED, we will introduce
the hardware devices used to implement them.
These devices are the ``tools of the trade'' for the MGED user.
We will discuss only basic operational characteristics here.
Specific use of these devices will be covered in the later sections
on the viewing and editing features of MGED.
\section{Joystick}
The joystick is a mechanical device used to do rotations in MGED.
Any movement left or right rotates the display about the
X-axis. Any movement up or down rotates the display
about the Y-axis. When the joystick top is twisted in a clockwise or
counterclockwise direction, the display rotates about the Z-axis.
Any combination motion of the joystick will produce a ``combined''
rotation about the appropriate axes.
As implemented on the Vector General hardware,
all of these motions have a spring return to a null center position.
\section{Button Box}
The button box contains a collection of buttons.
On each button is a light that can be lit under program control.
Pressing a button sends a ``press'' event to MGED,
and results in an action occurring, or a condition being set.
The exact functions assigned to these buttons will be discussed
in the sections on viewing the display and on editing.
\subsection{Vector General Buttons}
\PostScriptPicture 6in by 5.6in, fig-vg-buttons.ps, Vector General Button Assignments, vg-buttons.
\PostScriptPicture 4.5in by 3.25in, fig-sgi-buttons.ps, Silicon Graphics Button Assignments, sgi-buttons.
% \gluein 4.5in by 4.5in, Vector General Button Assignments, vg-buttons.
% XXX \gluein 4.5in by 4in, Megatek Dial and Button Box, mg-buttons.
The Vector General has thirty-two buttons.
Figure \ref{vg-buttons} depicts the functions programmed
for each button.
The buttons in the shaded area are used for editing while the
rest are used for viewing the display.
\subsection{Megatek Buttons}
\begin{figure}[tbp]
\begin{tabular}{rl}
Button & Function\\
\\
1 & View Mode: Restores View \\
& Edit Mode: Translation in the Object-Edit mode \\
2 & View Mode: Saves View \\
& Edit Mode: Translation in the Object-Edit mode \\
3 \\
& Edit Mode: Saves the model being displayed on the screen \\
4 & Off: Viewing mode \\
& On: Edit mode \\
5 & View Mode: Resets View \\
& Edit Mode: Scaling in the Object-Edit mode \\
6 \\
& Edit Mode: Rotation in the Object-Edit mode \\
7 & View Mode: Angle/Distance Cursor \\
& Edit Mode: Translation in the Object-Edit mode \\
8 \\
& Edit Mode: Rejects display and returns to Viewing display \\
9 & View Mode: Bottom View \\
& Edit Mode: Scaling in the Solid-Edit mode \\
10 & View Mode: Left View \\
& Edit Mode: Rotation in the Solid-Edit mode \\
11 & View Mode: Rear View \\
& Edit Mode: Translation in the Solid-Edit mode \\
12 & View Mode: 90, 90 View \\
& Edit Mode: Restores Edit mode menu \\
13 & View Mode: Top View \\
& Edit Mode: Transfers from Viewing to Solid Pick \\
14 & View Mode: Right View \\
& Edit Mode: Transfers from Viewing to Object Pick \\
15 & View Mode: Front View \\
\\
16 & View Mode: 35/45 View
\end{tabular}
\caption{Megatek Buttons \label{mg-button-table} }
\end{figure}
The Megatek button box
is a general purpose input/output device that communicates with
MGED through an intelligent control unit. The device has eight
rotatable knobs and 16 buttons with lights.
% XXX See Figure \ref{mg-buttons}.
The ``buttons'' and ``knobs'' of the Megateks are located in the same box.
There are not enough buttons to have just one assigned meaning, hence
most buttons have dual functions.
To toggle the functions of the buttons,
use the upper right button (toggle button).
When the light on this button is ON, the function listed on the RIGHT above
each button is the current function.
When the light on the ``toggle'' button is OFF, the functions labeled on the
LEFT are then in effect.
The left/right meaning of these buttons is grouped generally according to
viewing functions on the left and editing functions on the right.
Figure \ref{mg-button-table}
summarizes the uses of the buttons.
Depressing a button switches its light on and off. Many of these buttons serve a
dual role depending upon the selected mode (viewing or editing). The mode is
selected by depressing button 4. If light 4 is off, the system is performing
in the viewing mode, and the commands shown in the top half of the table are
executed. If light 4 is on, the system is performing in the edit mode, and
the commands shown in the bottom half are executed.
\subsection{Silicon Graphics Buttons}
The button box layout for the SGI Iris is given
in Figure \ref{sgi-buttons}.
Note that the ``right'' button shows you the right side of the
model, as if you were looking in from the left.
To achieve the customary draftsman views, this function
goes on the left.
The upper left button is the {\bf help} key.
If this button is held down, and any other button (or knob)
is activated, a descriptive string is displayed in the eight character
LED display on the button box.
The upper right button is used to reset all the knobs to zero.
This is useful to halt a runaway rotation or zoom operation.
%\begin{figure}[tbp]
%{\tt \begin{verbatim}
% |---------|---------|---------|---------|
% | | | | Zero |
% | Help | ADC | Reset | Knobs |
%|---------|---------|---------|---------|---------|---------|
%| Obj | Obj | Obj | Obj | | Save |
%| Scale | ScaleX | ScaleY | ScaleZ | empty | View |
%|---------|---------|---------|---------|---------|---------|
%| Obj | Obj | Obj | Obj | | Restore |
%| TransX | TransY | TransXY | Rotate | empty | View |
%|---------|---------|---------|---------|---------|---------|
%| Solid | Solid | Solid | Solid | Obj | Solid |
%| Trans | Rot | Scale | Menu | Pick | Pick |
%|---------|---------|---------|---------|---------|---------|
%| REJECT | Bottom | Top | Rear | 90,90 | ACCEPT |
%|---------|---------|---------|---------|---------|---------|
% | Right | Front | Left | 35,25 |
% |---------|---------|---------|---------|
%\end{verbatim} }
%\caption{Silicon Graphics Button Layout \label{XXsgi-buttons} }
%\end{figure}
\section{Knobs (Dials)}
The knobs (or control dials) are used to send digital information
to the computer.
As a knob is turned, a succession of numbers becomes available for
use by the computer.
The knobs can be used to rotate a displayed object about the x, y, or z
axis, translate the object along the x or y axis, and change the size of the
view. The action performed by these knobs is continuous; it is initiated by
turning the knob in the proper direction and terminated by turning the knob
in the opposite direction.
\subsection{Vector General Knobs}
\PostScriptPicture 4.5in by 2.8in, fig-vg-knobs.ps, Vector General Knob Assignments, vg-knobs.
\PostScriptPicture 4.5in by 3.25in, fig-sgi-knobs.ps, Silicon Graphics Knob Assignments, sgi-knobs.
% \gluein 4.5in by 3.5in, Vector General Knobs, vg-knobs.
% \gluein 4.5in by 3.5in, Silicon Graphics Knobs, sgi-knobs.
Figure \ref{vg-knobs} depicts the functions assigned to each of the ten knobs.
The exact functions of each of these knobs will be discussed in
the angle distance cursor section and in the viewing features section.
\subsection{Megatek Knobs}
The ``buttons'' and ``knobs'' of the Megateks are located in the same box.
% XXX as shown in Figure \ref{mg-buttons}.
There are not enough knobs to have ONE assigned meaning, hence
three knobs have dual functions.
The second function of the first three knobs is only in effect when
the angle-distance cursor (ADC) is on the screen.
\subsection{Silicon Graphics Knobs}
Figure \ref{sgi-knobs} depicts the functions assigned to the
eight knobs on the Silicon Graphics knob box.
In normal operation, the left knobs provide rotations,
and the right knobs provide translations and zooming.
When the angle/distance cursor is activated, some of the
knobs are redefined.
\section{Mouse or Data Tablet}
Moving the mouse or the data tablet ``pen'' causes a cursor
on the screen to move.
The screen X-Y coordinates of the cursor can be sensed by MGED
at any time.
Clicking one of the mouse buttons,
or depressing the tip of the pen, results in MGED receiving
a special event notification.
The meaning of this mouse event depends on the current editing mode
and which portion of the display faceplate the cursor is located in.
Below is a list of some of the functions the mouse is used for in MGED:
\begin{itemize}
\item
Selecting editing menus, edit functions (move faces, move edges,
etc.), and viewing functions (selected from the main edit menu); move the
pointer to the appropriate edit function and press the center mouse button.
\item
Pointing functions; interactively positioning a solid primitive
relative to other solids, with the position or size update
displayed at the same time. Position the pointer where required and click
the center mouse button.
\item
Scaling of the view size; enlarge or reduce for a more detailed view of
the object. The left button shrinks the view size, and the right button
enlarges the view size.
\item
During the solid or object illuminate phase of editing,
the screen is divided into
invisible horizontal sections.
The available selections are scanned by moving the mouse up and down.
\item
When MGED is in the viewing state,
and a mouse event is received which is not in the menu area of the faceplate,
the point at which the cursor is pointing will be translated to become
the center of the current view.
By pointing and clicking the center mouse button,
the center of the viewing cube
can be moved to allow close-up viewing of different areas in your
model.
\end{itemize}
\subsection{Vector General Data Tablet}
Position information is entered using a pen-like stylus.
The distance this pen is from the tablet is important.
If the pen tip is within one half inch of the tablet surface, the cursor
location on the screen corresponds to the X,Y location of the
pen on the tablet. This condition is called the ``near'' position.
If the pen is more than one half inch from the tablet surface, the
cursor remains located in the center of the screen.
When the pen is pressed against the tablet surface,
the pressure switch is activated and a ``mouse'' event
is sent to MGED.
\subsection{Megatek Data Tablet}
Some Megatek systems enter position data on the data tablet
using a pen-like stylus.
If the tip of the stylus is
within one-half inch of the surface of the tablet, a ``star'' corresponding
to this location is displayed on the display screen. If the tip is moved
more than one-half inch from the surface, the position of the star remains
fixed. When the stylus is pressed against the tablet surface, the pressure
switch is activated and a ``mouse'' event is sent to MGED.
Other Megatek data tablets have a mouse instead of a pen.
This mouse has four buttons on it.
The yellow (top) button is used during illumination and editing just as the
pen on the Vector General terminals.
However, in the viewing mode, when it is pushed, the point at which it was
pointing will be drawn at the center of the screen.
The blue (bottom) button has this same function at ALL times and is used
to ``slew'' the display during editing.
The white (left) and the green (right) buttons on the mouse are used for
zooming the display at a fixed rate.
The white button will zoom out and the green button will zoom in.
\subsection{Silicon Graphics Mouse}
The left and right mouse buttons are used for binary (2x) zooming,
and the center mouse button is used for all other MGED mouse functions.
On the Silicon Graphics 3-D workstations, MGED can be run directly,
or it can be run under the window manager MEX. In both cases,
MGED opens two windows, one outlined in white for all text interaction,
and one outlined in yellow for all graphics display.
When running MGED directly (without MEX), all mouse events are
sent to MGED, regardless of where the mouse is pointing.
In order to shift emphasis between the graphics and text windows,
the smaller one can be enlarged by pointing the cursor within the
boundaries of the smaller window, and pressing the center button.
This enlarges that window, and reduces the size of the other window.
When MEX is running, it is necessary to follow the MEX convention of
moving the cursor into the desired window, and clicking the right mouse
button, to ``attach'' all input to that window.
This has the unfortunate consequence of requiring a lot of extra
mouse clicking, because the graphics window needs to be attached
when using the buttons, knobs, and mouse, while the text window
needs to be attached in order to enter keyboard commands.
On the Silicon Graphics 4-D workstations running 4Sight,
mouse events are sent to MGED only when the cursor is within the
boundaries of the MGED graphics window.
\subsection{Sun Workstation Mouse}
On the Sun workstation, MGED must be run in a {\bf suntools} window.
The main consequence of this is that mouse events are sent to MGED
only when the cursor is within the boundaries of the MGED graphics window
on the screen.
The left and right mouse buttons are used for binary (2x) zooming,
and the center mouse button is used for all other MGED mouse functions.
\section{Keyboard}
The keyboard is used to issue commands and supply parameters to MGED.
It is also used to log in and out of the UNIX system, and to
run other UNIX programs.
All characters typed on the keyboard,
with the exception of the user's password, are displayed (echoed) on the
monitor.
In this text, all input typed by the user is
shown in {\em italics}, while all literal MGED output
is shown in {\tt typewriter font}.
All entries are
terminated by depressing the RETURN key.
This action immediately precedes the execution of the directive.
In most cases, lower case letters
must be used. A space must be used between the command and its
arguments.
Embedded blanks are not allowed.
Entering Control/H causes the cursor to backspace and erase entered
information.
An MGED command is interrupted by entering Control/C.
End-of-File is sent to MGED by entering Control/D.
The graphics editor displays the prompt
\begin{verbatim}
mged>
\end{verbatim}
on the display whenever it is ready to accept a command from the keyboard.
\chapter{OPERATING INSTRUCTIONS}
\section{Entering the Graphics Editor}
Type {\em mged filename}, e.g.:
\begin{verbatim}
mged s_axle.g
mged shaft.g
mged fred.g
\end{verbatim}
where the filename is the name of the UNIX file in which
your object description data is stored.
By convention, the
extension ``.g'' on the filename
signifies a graphics file; this is good practice, but is not required.
If the named database does not already exist,
MGED will ask whether it should create a new database:
{\tt \begin{verse}
\% {\em mged new.g} \\
BRL-CAD Release 3.0 Graphics Editor (MGED) \\
\ \ \ \ Tue Sep 6 02:52:55 EDT 1988 \\
\ \ \ \ mike@video:/cad/mged.4d \\
new.g: No such file or directory \\
Create new database (y|n)[n]? {\em y} \\
attach (nu|tek|tek4109|ps|plot|sgi)[nu]? {\em sgi} \\
ATTACHING sgi (SGI 4d) \\
Untitled MGED Database (units=mm) \\
mged>
\end{verse} }
Here, the {\em italic} type indicates the user's response:
{\em y} instructs MGED to create the new database, and
{\em sgi} instructs MGED to attach to a window
on the Silicon Graphics (SGI) workstation.
Directives to the graphics editor are made by
\begin{enumerate}
\item entering information from the keyboard,
shown here in the text by the use of {\em italics},
\item using the stylus to select items from
the menu (select), and
\item pressing buttons and twisting knobs on the
function control box (press, twist).
\end{enumerate}
The prompt for a command is {\tt mged>}.
\subsection{Running MGED on a Silicon Graphics}
When running MGED from the console of a Silicon Graphics workstation,
MGED retains the text window from which it was started,
and opens a second window for the graphics display.
By default, the graphics window is quite large, and the text window
is rather small.
On the SGI 3-D workstations,
should you wish to have a large text window to scan printed output, move
the mouse pointer into the text window and click the center mouse button.
Use the reverse procedure to regain a large graphics window, i.e.,
move the mouse pointer into the graphics window
and click the center mouse button.
\subsection{Running MGED on a Tektronix}
To run MGED on the tek4014 class of terminals one needs to have TWO
terminals: the graphics terminal (a 4014, or one which emulates a 4014) and
another terminal on which to enter commands.
The procedure is as follows:
\begin{enumerate}
\item login on the graphics terminal
\item enter {\em tty} to find out which terminal number has been assigned
\item enter {\em sleep 32000} to put the graphics terminal in sleep mode
\item login on the other terminal
\item enter {\em mged file} to execute MGED
\item enter {\em tek} to select the tek4014 device processor
\item enter the tty value found in step 2.
\item perform editing
\item enter {\em q} to quit MGED
\item enter control-c on the graphics terminal to end the sleep mode
\item logout on both terminals
\end{enumerate}
Since there are no knobs or buttons on the tek4014 class of terminals, one
is forced to use the {\em press} and {\em knob} commands to emulate these
peripherals.
Other commands which can/should be used are:
\begin{tabular}{rl}
ill & put up a desired path \\
center & slew the display \\
size & zoom the display \\
sed & solid edit immediately
\end{tabular}
The main motivation behind writing a driver for the tek4014 terminals
was to allow the use of the Teletype 5620 terminals.
These graphic terminals have an internal processor and different windows
can be set up which represent different terminals.
Hence two terminals are NOT necessary.
The use of the Teletype 5620 terminals is then the same as the procedure
outlined above, except each window represents a terminal.
\section{The Pop-Up Button Menu}
The default MGED faceplate is shown in Figure \ref{faceplate}.
If the BUTTON MENU area on the screen is selected with the mouse,
then the pop-up button menu appears, as shown in Figure \ref{buttonmenu}.
This menu can be very useful in reducing the amount of hand motion
between the mouse and the button box.
\section{Starting Your Model}
Modeling practices using MGED can be quite individual. The following is a
suggested modeling method to start with; you may end up developing your own
style as you become more familiar with MGED.
First of all, decide how you want to represent your model, including the
amount of detail, types of solids and regions necessary. Have an accurate
sketch or engineering drawing available, so that you can easily transfer its
information into the types of primitive solids necessary to create your model.
Where possible it is recommended to start with a large block solid and
``subtract'' pieces from it. In this way you avoid errors with abutting
faces of a collection of solids ``unioned'' together.
Next the solids are created using the
{\em make}, {\em cp}, {\em mirror} or {\em in}
commands. Depending on the complexity of the model, the solids may be
created in the desired location or created at the origin and later
translated to the desired location. Creation at the origin provides
an opportunity to take advantage of possible symmetries in the geometry.
Once all the solids are finished, it is time to create the region(s),
which will describe (to MGED) how to combine the solids to represent
the model.
The region(s) are then given the desired item/air code (if this is
necessary; otherwise leave it as the system default value) and material
codes. The regions are then put into a group, usually for organizational
convenience only.
A group has no operations as such (like union [u], intersection [+], or
difference [-]) and is just a collection of objects for the convenient naming
of a whole screen or collection of objects.
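As a rough sketch of this suggested workflow, using the {\em in}, {\em r},
and {\em g} commands described in the following chapters (the names and
dimensions here are purely illustrative, and the subtraction operator is
typed as a single minus sign), one might enter:
\begin{verbatim}
in base.s rpp 0 100 0 50 0 10
in hole.s rcc 50 25 0 0 0 10 5
r bracket.r u base.s - hole.s
g bracket bracket.r
\end{verbatim}
Here the region subtracts the cylindrical hole from the base block, and the
group simply collects the region under a convenient name.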
\chapter{CREATING NEW OBJECTS}
\section{Creating New Leaves (Solids/Primitives)}
A family of commands exists to allow the user to add more actual
solids (leaf nodes) to the model database. To obtain a precise
duplicate of an existing solid (presumably to be changed by a
subsequent editing command), the copy ({\em cp}) command can be used.
It is important to note that the copy operation is different from
creating an {\em instance} of an existing solid; there are occasions
to use both operations. If the precise configuration of the solid
desired is not important, the {\em make} command can be used to create
a stock prototype solid of the desired type with the given name, which
can then be edited to suit. The {\em mirror} command makes a
duplicate of an existing solid reflected about a given coordinate
axis.
If the actual numeric parameters of a solid are known, then the {\em in}
command can be used. In addition to prompting for the descriptions of
the full generic primitive solids, this command also accepts
abbreviated input formats. For example, a wedge or an RPP can be entered
with a minimum of parameters, even though a database ARB8 is created.
Similarly, the parameters for a right circular cylinder can be given,
resulting in a truncated general cone (TGC) being stored.
This is not a very sophisticated way to build solids, but it receives
a surprising amount of use.
A number of commands also exist to create new solids with some
higher level description. For example, the {\em inside} command
creates a new solid inside an existing solid, separated from the
existing solid by specified tolerances. This is quite useful for
creating hollow objects such as fuel tanks.
It is possible to create a plate with a specified
azimuthal orientation and fallback angle, or to create an ARB8 (plate)
by specifying three points and a thickness, or to create an ARB8
given one point, an azimuthal orientation, and a fallback angle.
\section{Specific Cases}
After having started MGED and created a new database, the next
step is to use the {\em units} command to tell the system that you will
be entering values in mm, cm, m, in or ft. For our example we will
work in mm:
\begin{verbatim}
units mm
\end{verbatim}
Now you may give your database a title using the {\em title} command as in:
\begin{verbatim}
title Mechanical Bracket
\end{verbatim}
This title (``Mechanical Bracket'') now appears at the bottom left-hand
corner of the graphics window. Further examples:
\begin{verbatim}
title six wheeled tank
title stub axle
\end{verbatim}
At this point you can start creating your solid objects using the ``arbs'',
``sph'', ``tor'', {\em etc.},
arguments to the {\em make} or {\em in} command.
The {\em make} command gives you a solid of a default size;
you then have to use the
solid edit mode to interactively edit the solid to the desired size.
The {\em make} command is entered as in the following examples:
\begin{verbatim}
make box arb8
make cyl rcc
make ball sph
\end{verbatim}
The first argument is the solid name, and the second argument is
the primitive type.
The {\em in} command expects you to key in a set of parameters to describe your
solid; the parameters can be the x, y, and z of a vertex (such as the
corner of an ARB8), the x, y, and z of a vector (such as the height
or H vector of a BOX), or a radius (such as for a torus).
Below is a list of primitives
with their {\em in} commands as requested by MGED and sample input.
Reading an {\em in} file into an MGED data file will be discussed later.
Note how providing incomplete input to the {\em in} command will result
in MGED repeating the prompt for the missing information.
\mfig ex.rpp, Example RPP.
\subsection{RPP (rectangular parallelepiped)}
{\tt
mged> {\em in name rpp}\\
Enter XMIN, XMAX, YMIN, YMAX, ZMIN, ZMAX: {\em 0 25} \\
Enter YMIN, YMAX, ZMIN, ZMAX: {\em 0 50} \\
Enter ZMIN, ZMAX: {\em 0 100} \\
}
This sequence produces the RPP shown in Figure \ref{ex.rpp}.
\mfig ex.box, Example BOX.
\subsection{BOX (BOX)}
{\tt
mged> {\em in my box} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of vector H: {\em 25 0 0} \\
Enter X, Y, Z of vector W: {\em 0 50 0} \\
Enter X, Y, Z of vector D: {\em 0 0 100} \\
}
This sequence produces the BOX shown in Figure \ref{ex.box}.
\mfig ex.arb8, Example ARB8.
\subsection{ARB8: Arbitrary Convex Polyhedron, 8 Vertices}
{\tt
mged> {\em in poly arb8} \\
Enter X, Y, Z for point 1: {\em 0 0 0} \\
Enter X, Y, Z for point 2: {\em 0 150 0} \\
Enter X, Y, Z for point 3: {\em 0 150 200} \\
Enter X, Y, Z for point 4: {\em 0 0 200} \\
Enter X, Y, Z for point 5: {\em 75 0 0} \\
Enter X, Y, Z for point 6: {\em 75 150 0} \\
Enter X, Y, Z for point 7: {\em 75 150 200} \\
Enter X, Y, Z for point 8: {\em 75 0 200} \\
}
This sequence produces the ARB8 shown in Figure \ref{ex.arb8}.
\mfig ex.arb4, Example ARB4.
\subsection{ARB4: Arbitrary Convex Polyhedron, 4 vertices}
{\tt
mged> {\em in a4 arb4} \\
Enter X, Y, Z for point 1: {\em 0 0 0} \\
Enter X, Y, Z for point 2: {\em 10 60 0} \\
Enter X, Y, Z for point 3: {\em 40 20 0} \\
Enter X, Y, Z for point 4: {\em 20 15 70} \\
}
This sequence produces the ARB4 shown in Figure \ref{ex.arb4}.
\mfig ex.rcc, Example Right Circular Cylinder.
\subsection{RCC (Right Circular Cylinder)}
{\tt
mged> {\em in rcyl rcc} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of height (H) vector: {\em 0 0 60} \\
Enter radius: {\em 15} \\
}
This sequence produces the RCC shown in Figure \ref{ex.rcc}.
Note that in this case, the A, B, C, and D vectors have magnitudes
equal to the radius, 15.
\mfig ex.trc, Example Truncated Right Cylinder.
\subsection{TRC (Truncated Right Cylinder)}
{\tt
mged> {\em in trcyl trc} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of height (H) vector: {\em 40 0 0} \\
Enter radius of base: {\em 20} \\
Enter radius of top: {\em 10} \\
}
This sequence produces the TRC shown in Figure \ref{ex.trc}.
Note that the magnitudes of A and B equal the base radius, 20, and the
magnitudes of C and D equal the top radius, 10.
\mfig ex.raw, Example Right Angle Wedge.
\subsection{RAW (Right Angle Wedge)}
{\tt
mged> {\em in weg raw} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of vector H: {\em 40 0 0} \\
Enter X, Y, Z of vector W: {\em 0 70 0} \\
Enter X, Y, Z of vector D: {\em 0 0 100} \\
}
This sequence produces the RAW shown in Figure \ref{ex.raw}.
\mfig ex.sph, Example Sphere.
\subsection{SPH (Sphere)}
{\tt
mged> {\em in ball sph} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter radius: {\em 50} \\
}
This sequence produces the sphere shown in Figure \ref{ex.sph}.
Note that the A, B, and C vectors all have magnitude equal to
the radius, 50.
\mfig ex.ellg, Example General Ellipsoid.
\subsection{ELLG (General Ellipsoid)}
{\tt
mged> {\em in egg ellg} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of vector A: {\em 20 0 0} \\
Enter X, Y, Z of vector B: {\em 0 60 0} \\
Enter X, Y, Z of vector C: {\em 0 0 40} \\
}
This sequence produces the ellipsoid shown in Figure \ref{ex.ellg}.
\mfig ex.tor, Example Torus.
\subsection{TOR (Torus)}
{\tt
mged> {\em in tube tor} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of normal vector: {\em 0 0 50} \\
Enter radius 1: {\em 20} \\
Enter radius 2: {\em 10} \\
}
This sequence produces the torus shown in Figure \ref{ex.tor}.
\section{Creating New Combinations}
Non-terminal nodes in the directed acyclic graph stored in the database
are also called {\em combinations}.
It is possible to extend the definition of a non-terminal node by
adding an instance of an existing node to the non-terminal node
with an associated boolean
operation of union; this is done by the {\em i}
(instance) command. To start with, such an instance has an identity
matrix stored in the arc; the user needs to separately edit the
arc to move the instance to some other location.
If the non-terminal node being extended does not exist, it is created first.
The instance command provides the simplest way to create a reference to
another node. Instances of a whole list of nodes can be added to a
non-terminal node by way of the group {\em g} command.
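For example, assuming that {\em i} takes the existing node followed by the
name of the combination to be extended, and that {\em g} takes the group name
followed by its member nodes (the names below are hypothetical), one might
enter:
\begin{verbatim}
i wheel axle
g suspension axle spring
\end{verbatim}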
If instances of a list of nodes with non-union boolean operations
are to be added to a non-terminal node, the region {\em r} command
accepts a list of (operation, name) pairs, where the single lower case
character ``u'' indicates union, ``--'' indicates subtraction, and
``+'' indicates intersection. The first operation specified
is not significant. An example of this command might be:
\begin{center}
{\em r non-terminal u node1 -- node2 + node3}
\end{center}
For historical reasons,
there is no explicit grouping possible, occasionally forcing
the user to create intermediate non-terminal nodes to allow the
realization of the desired boolean formula.
It is also important to note that for the same reasons
there is an {\em implicit} grouping between union terms, i.e.
\begin{center}
u n1 -- n2 + n3 u n4 -- n5
\end{center}
is evaluated as
\begin{center}
(n1 -- n2 + n3) union (n4 -- n5)
\end{center}
rather than
\begin{center}
((((n1 -- n2) + n3) union n4) -- n5)
\end{center}
Therefore, you can think of the solids
on either side of the union operators as surrounded by parentheses.
The order of the region members is
critical, and must be scrutinized when members are added to or deleted from a
region.
The order can be printed out using the {\em l} or {\em cat} commands.
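For example, using a hypothetical region name:
\begin{verbatim}
l bracket.r
cat bracket.r
\end{verbatim}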
\chapter{VIEWING FUNCTIONS}
The MGED viewing features are designed to allow one to examine an object
in close detail. Any of the viewing features can be invoked at any time.
It should be noted that these functions do not change the actual data,
only the way the data is displayed.
\section{Preset Views}
Six standard views (front, rear, top, bottom, left, and right) and one
oblique view (azimuth 35, elevation 25 isometric) are each assigned to
function buttons; the views 35 25 (isometric), top, right, front, and
45 45 are available from the screen editor menu. Hence, any of these views
is immediately available at the press of the appropriate function button or
by mouse selection. The views available are not limited to these standard views,
however, as the display can be rotated to any view by using the dial box. By
pressing the function button labeled ``save view'' or entering
the keyboard {\em saveview} command, the present view as displayed is
saved (to be used for raytracing and
producing colored pictures, which will be discussed later).
At any time, the saved view can be immediately returned to the
screen by pressing the ``restore view'' function button. The
``restore view'' button will be lit whenever a view has been saved.
The function button labeled ``reset'',
restores the display to the default view (front) when pressed.
\section{View Translation}
The displays can be panned or slewed on the screen in two ways -- using the
mouse pointer or by using the dial box knobs. When one is editing, the mouse
functions are not available for slewing, hence one must use the dial box knobs
to slew the display.
To slew the display using the control knobs, one uses the knobs labeled
``slew x'' or ``slew y''. The null position on these knobs is in the
center, or straight up. If the ``slew x'' knob is turned clockwise of center,
the display will move to the right. If it is turned counterclockwise, the
display will move to the left. For the ``slew y'' control knob, clockwise
of center moves the display up and counterclockwise moves the display
down. The further these knobs are turned from center, the faster the display
moves.
\section{View Zooming}
One can zoom the display by using the dial box knob labeled ``zoom''.
Again, the null position of this knob is in the center, or straight
up. Turning this knob clockwise of center causes the display to increase in
size producing a zoom-in effect. Turning this knob counterclockwise of center
causes the display to decrease in size or zoom-out. Again, the further the
``zoom'' knob is turned from center, the faster the zooming will occur.
\mfig adc, The Angle Distance Cursor.
\section{The Angle Distance Cursor (ADC)}
The angle distance cursor is a construction aid used to measure
angles and distances. It should be noted that all measurements are
made in the projected space of the screen, so one should measure only
in a view normal to the surface where the measurement is to take place.
The ADC is placed on (or removed from) the display by pushing the ``ADC''
button.
The ADC consists of three cursors which cover the entire screen.
Figure \ref{adc} depicts the ADC as it appears on the screen.
All the cursors are centered at the same point and can be moved to any
location on the screen. Two of these cursors rotate for angle measuring
purposes. Angle cursor 1 is solid while angle cursor 2 is dashed. Angle
cursor 1 has movable tic marks for measuring distances on the screen.
The two angle cursors move with the horizontal and vertical
lines of the main cursor.
The resulting effect is the moving of the center point
horizontally or vertically.
The ADC is controlled by the bottom row
of the (Megatek) knobs:
\begin{tabular}{rl}
Knob & Function \\
\\
6 & moves the center in the horizontal direction \\
7 & moves the center in the vertical direction \\
8 & rotates angle cursor 1 (alpha) \\
9 & rotates angle cursor 2 (beta) \\
10 & moves the tic marks
\end{tabular}
Whenever the ADC is on the screen, there is a readout at the bottom of
the screen listing pertinent information about the ADC.
This information includes the angles that angle cursors 1 and 2
have been rotated (alpha and beta), the distance the tic marks are
from the center of the ADC, and the location of the center of the ADC.
This information is continually updated on the screen.
\chapter{MGED EDITING FEATURES}
The heart of the MGED system is its editing features.
The editing features are divided into two classes: object editing
and solid editing.
Object editing is designed to allow one to change the location,
size, and orientation of an object.
Recall that an object is defined as the basic data unit of the
MGED system and includes both combinations and solids.
In the case of a solid, one needs to change not only its location,
size, and orientation, but also its ``shape''.
Changing the shape of a solid means changing any of its individual parameters.
Hence, solid editing is handled separately.
\section{Combination Editing (OBJECT EDIT)}
Before being able to enter the OBJECT EDIT state
(i.e. edit non-terminal),
it is necessary to pass through two intermediate states
in which the full path of an object to be edited is specified,
and the location of one arc along that path is designated for editing.
It is possible to create a transformation matrix to be applied
above the root of the tree, affecting everything in the path,
or to apply the matrix between any pair of nodes.
For example, if the full path /car/chassis/door is specified,
the matrix could be applied above the node ``car'', between
``car/chassis'', or between ``chassis/door''.
The transformation matrix to be applied at the
designated location can be created by the concatenation of
operations, each specified through several types of user direction.
Trees can be rotated around the center of the viewing cube;
this rotation can be specified in degrees via keyboard command, or can
be controlled by the rotation of a set of control dials or motions on
a three-axis joystick.
Translation of trees can be specified in terms of a precise new location
via keyboard command, or by adjusting a set of control dials.
Tree translation can also be accomplished by pointing and
clicking with the mouse or tablet.
Uniform and single-axis (affine) scaling of a tree can be controlled
by a numeric scale factor via keyboard command, or by way of repeated
analog scaling by pointing and clicking with the mouse or tablet.
Before we discuss the editing features of MGED, we will discuss
how one selects objects for editing.
\section{Selecting Objects For Editing}
To select a displayed object for editing, press the object illuminate button
or select ``Object illum'' from the ***BUTTON MENU***.
The object selection is a two-step process.
Whenever an object is displayed (using the {\em e} command), all paths in the
object's hierarchy are traversed recursively, accumulating the transformation
matrices. When the bottom of the path (a solid) is encountered, the
accumulated
transformations are applied to the solid's parameters and the solid is drawn.
Thus every solid displayed is really a path ending with that solid.
If the object has been displayed using the {\em E} command, the same procedure
is followed, but only until a region is encountered.
Then all members of the region have the accumulated transformations
applied and the region is then ``evaluated'' and drawn.
In the first step of the object selection process, the path is selected.
Again, the data tablet is divided into as many horizontal sections as there are
paths drawn.
The path (solid or evaluated region) corresponding to the horizontal
section the pen/mouse is located in will be illuminated (brighter on B/W
displays and white on color displays).
This complete path is also listed on the display.
When the pen/mouse is pressed the illuminated path is selected.
In the second step, a member of the selected path is chosen.
All editing will then be applied to this member.
The tablet is divided into as many horizontal sections as there are
path members.
The word ``[MATRIX]'' is used to illuminate path members and will appear
above the member corresponding to the location of the pen/mouse.
Pressing the pen/mouse when the desired path member is ``illuminated''
will put MGED in the object edit state.
The editing will be performed on the path member selected.
If a solid is located at the bottom of this path, it becomes the key
solid and its vertex becomes the key point.
If an evaluated region is at the bottom of the path, the center of
this region becomes the key point.
All object editing is done with respect to this key point.
The object editing features can be invoked in any order and at any time
once an object has been selected for editing. During object editing, any of
the viewing features, such as changing views, zooming, and slewing, can be
used, and in fact, are usually quite useful. Again, the only way to exit the
object editing mode is to ``accept'' or ``reject'' the editing.
If the ``reject'' button is pressed (or selected from the edit menu), the
object will return to its pre-edit state. If the ``accept'' button is pressed
(or selected from the edit menu), the data base will be changed to reflect the
object editing performed.
\section{Object Edit State}
When MGED enters the object edit state, the following occurs:
\begin{enumerate}
\item all the solids/evaluated regions of the edited object
become illuminated
\item the key solid's parameters are labeled OR \\
the center of the key evaluated region is marked
\item the key solid's parameters are listed and
continually updated OR \\
the key evaluated region's center is listed and
continually updated
\item the ***OBJ EDIT*** menu is displayed
\end{enumerate}
\section{Translate An Object}
There are three ways to translate an object: translate in the screen
X direction only (X move), translate in the screen Y direction only
(Y move) or just straight translation (XY move).
In all cases, the complete object is translated so that the ``key point''
is positioned at the desired location.
The {\em translate} command is used to enter a precise location (x,y,z) for
the key point.
Entering {\em translate x y z} will move the complete object so that the key
point will be at coordinates (x,y,z).
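For example, to move the object so that its key point lands at the
(illustrative) coordinates (40, 20, 10) in the current local units:
\begin{verbatim}
translate 40 20 10
\end{verbatim}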
\section{Rotate An Object}
Rotation of the object may be accomplished by selecting the ``Rotate''
menu item, or pressing the ``Rotate'' button.
Turning the knobs results in the object being rotated.
The {\em rotobj x y z} command can be used here to specify
a precise rotation in degrees.
While in this edit state, the only way to rotate the view is to use
the {\em vrot x y z} command.
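For example, the following (illustrative) commands rotate the edited object
90 degrees about the x-axis and then rotate the view 45 degrees about the
z-axis:
\begin{verbatim}
rotobj 90 0 0
vrot 0 0 45
\end{verbatim}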
\section{Scale An Object}
\subsection{Global Scale}
To select global object scale, press the object scale button or select
``Scale'' from the ***OBJ EDIT*** menu.
When the pen/mouse is pressed, the edited object is scaled about the key
point by an amount
proportional to the distance the pen/mouse is from the center of the screen.
If the pen/mouse is above the center, the edited object will become larger.
If it is below the center, the object will become smaller.
The {\em scale} command can be used to enter precise scale factors.
The value entered is applied to the object as it existed when object
scale was entered.
Hence entering {\em scale 1} will return the object to its size when the
object scale session first started.
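For example, to make the edited object half the size it had when object
scale was entered:
\begin{verbatim}
scale 0.5
\end{verbatim}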
\subsection{Local Scale}
Local object scaling is allowed about any of the coordinate axes.
To select local scaling, press one of the buttons (OBJ Scale X, OBJ Scale Y,
or OBJ Scale Z) or select ``Scale X'', ``Scale Y'', or ``Scale Z'' from
the ***OBJ EDIT*** menu.
When the pen/mouse is pressed, the edited object is scaled in the selected
coordinate axis only, about the key point.
The amount of scaling is proportional to the distance the pen/mouse is
from the center of the screen.
If the pen/mouse is above the center, the edited object will become larger
in the selected axis direction.
If it is below the center, the object will become smaller in the selected
axis direction.
The {\em scale} command can be used to enter precise scale factors.
The value entered is applied to the object as it existed when local object
scale was entered.
Hence entering {\em scale 1} will return the object (in the selected axis
direction) to its size when the object scale session first started.
\section{Solid Editing}
There are two classes of editing operations that can be performed on
leaf nodes, the primitive solids.
The first class of operations are generic operations which can be applied to
any type of solid, and the second class of operations are those operations
which are specific to a particular type of solid.
Generic operations which can be applied to all primitive solids are
rotation, translation and scaling.
Recall that primitives can be treated as any other object and ``object
edited'' as detailed above.
Each primitive solid also has a variety of editing operations available that
are specific to the definition of that solid. These operations are
detailed below.
The solid editing mode is necessary to
perform changes to the basic shapes of solids.
Precise modifications
of the shape are possible (using the {\em p} keyboard command) in the solid
editing mode.
The solid editing feature allows the user to interactively translate,
rotate, scale, and modify individual parameters of a solid. Whenever one is
in the solid edit mode, the parameters of the solid being edited are listed
and continually updated at the top of the screen. Certain parameters are
also labeled on the solid being edited. Solid editing is generally used to
``build'' objects by producing solids of the desired shape and size in the
correct orientation and position. Once the object is built, object editing
is used to scale, orient, and position the object in the description. The
general philosophy of solid editing is to first create a solid with the
desired name and then to edit this solid. As an example, suppose one were
to build an object called ``BRACKET''; to produce the base of the object the
primitive solid type ARB8 (see Figure 1) would be used along with either the
{\em in} command or {\em make} command, so one would type:
\begin{verse}
in btm box 0 0 0 0 -90 0 40 0 0 0 0 6 \\
make block arb8
\end{verse}
A new solid called ``btm'' or ``block'' would be created and displayed on the
screen. These solids would then be edited using solid editing to produce the
solid parameters for the shape desired.
\section{Selecting Solids For Editing}
The procedure for solid editing is quite similar to that for object editing.
First, solid edit state must be entered, by pressing the
``solid illuminate'' button, or selecting the ``solid illum'' menu item.
Second,
a solid is selected for
editing using the illuminate mode, just as in object editing,
by moving the cursor up and down and choosing the desired solid.
The solid data is listed at the top of the screen and a
header depending on the solid type is written above the solid editing data.
Third, select the appropriate function button or edit menu operations,
and perform the editing desired. Finally, the solid
editing mode is exited
by either accepting or rejecting the editing performed.
A solid must be displayed before it can be picked for editing.
To pick a displayed solid for editing, press the ``solid illum'' button or
select ``Solid Illum'' from the ***BUTTON MENU***.
The data tablet and pen/mouse are then used to pick the solid.
The surface of the data tablet is divided into as many horizontal sections
as there are solids displayed.
The displayed solid corresponding to the horizontal section the pen/mouse
is located in will be ``illuminated'' (it will become brighter on black and
white devices and white on color devices).
The complete hierarchical path to reach the solid is also listed on the
display. When the pen/mouse is pressed, MGED enters the solid edit state
with the illuminated solid as the solid to be edited.
If the solid is not multiply referenced, entering {\em sed solidname} on
the keyboard will immediately put MGED in the solid edit state with
{\em solidname} as the edited solid.
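For example, assuming the sphere ``ball'' created earlier with
{\em make ball sph} is displayed and is not multiply referenced:
\begin{verbatim}
sed ball
\end{verbatim}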
\section{Solid Edit State}
When MGED enters the solid edit state, the following occurs:
\begin{enumerate}
\item the edited solid remains illuminated
\item the edited solid's parameters are labeled
\item the edited solid's parameters are listed (and continually updated)
\item the ***SOLID EDIT*** menu is displayed
\item the parameter edit menu is initially displayed (default)
\end{enumerate}
\section{Rotate A Solid}
Solid rotation allows the user to rotate the solid being edited to any
desired orientation. The rotation is performed about the vertex of the
solid. To select this option, one presses the function button labeled
``solid rotate'' or selects from the edit menu on screen.
The rotation can be done using the dial box or one can input exact angles
of rotation of the solid by using the {\em p} keyboard command.
For example, typing:
{\em \center
p alpha beta gamma
}
will rotate the solid {\em alpha} degrees about the x-axis, {\em beta} degrees
about the y-axis and {\em gamma} degrees about the z-axis. Alpha, beta, and
gamma are measured from the original ``zero'' orientation of the solid,
defined when the ``solid edit'' function button was
pressed. Hence, typing
{\em \center
p 0 0 0
}
will always return the solid to its original position (its position when the
current solid editing session began) before accepting edit.
To select solid rotation, press the solid rotate button or select ``Rotate''
from the ***SOLID EDIT*** menu.
The joystick or appropriate rotation knobs will then rotate the edited solid
about the coordinate axes.
The solid is rotated about its vertex.
The parameter (p) command can be used to make precise rotation changes.
The values entered after the p are absolute -- the rotations are applied
to the solid as it existed when solid rotation was first selected.
Thus entering {\em p 0 0 0} will ``undo'' any rotations performed since
solid rotation was selected.
The rotation about the z-axis is done first, then the y, then the x.
\section{Translate A Solid}
Solid translation allows the user to place the solid being edited anywhere
in the description. To invoke this option, one presses the function
button labeled ``solid trans'' or selects from the screen edit
menu. To move the solid, use the mouse pointer to position the solid and
click the center mouse button. Whenever the mouse button is pressed, the
VERTEX of the solid moves to that location on the screen.
One can read the actual coordinates of the vertex on the top of the
screen, along with other data. If the actual desired coordinates of the
vertex are known, one can place the solid exactly using the {\em p} keyboard
command. For example, to place a solid's vertex at the coordinates (x,y,z)
one would type:
{\em \center
p 40 20 10
}
The solid would then jump to this location.
To select solid translation, press the solid translate button or
select ``Translate'' from the ***SOLID EDIT*** menu.
When the pen/mouse is pressed, the vertex of the edited solid will
move to that location.
The parameter (p) command can be used to translate the solid to
a precise location.
Entering {\em p x y z} will place the vertex of the edited solid at (x, y, z).
\section{Scale A Solid}
The solid SCALE feature allows the user to scale the solid being edited to any
desired size. The scaling is done about the vertex of the solid, hence NO
translation of the solid occurs. The scaling is performed using the mouse
pointer and clicking the center mouse button, just as in object scaling. One
can input an exact scale factor using the {\em p} keyboard command.
For example, typing
{\em \center
p factor
}
will scale the solid by an amount equal to {\em factor}. The value of
{\em factor} is absolute -- the original solid is scaled. By setting {\em factor}
equal to one (1), the original size solid will be displayed on the screen
before accepting your edit.
To select solid scale, press the solid scale button or select ``Scale''
from the ***SOLID EDIT*** menu.
When the pen/mouse is pressed, the edited solid is scaled by an amount
proportional to the distance the pen/mouse is from the center of the screen.
If the pen/mouse is above the center, the edited solid will become larger.
If it is below the center, the solid will become smaller.
The parameter (p) command can be used to enter precise scale factors.
The value entered is applied to the solid as it existed when solid
scale was entered.
Hence entering {\em p 1} will return the solid to its size when the solid
scale session first started.
\section{Solid Parameter Editing}
To modify individual solid parameters, press the menu button or select
``edit menu'' from the ***SOLID EDIT*** menu.
A menu listing what parameter editing is available for that particular
solid type will be displayed.
Using the pen/mouse select the desired item(s) from this menu.
For most of the parameter editing, the {\em p} command can be used to
make precise changes.
Parameter editing is the default edit mode entered when MGED first
enters the solid edit state.
In the following paragraphs, we will discuss parameter editing
for each of the MGED general types of solids.
\mfig menu-arb-ctl, ARB Control Menu.
\subsection{ARB Parameter Editing}
The GENERAL ARB class of solids represents all the convex polyhedrons
(RPP, BOX, RAW, and ARBs).
The ARBs comprise five classes of polyhedrons each with a characteristic
number of vertices.
These are the ARB8, ARB7, ARB6, ARB5, and ARB4, where the ARB8 has
eight vertices, etc.
During editing, all the vertices are labeled on the screen.
An ARB is defined by a fixed number of vertices where all faces must
be planar. This fact means that during parameter editing, movement
of individual vertices in faces containing four vertices is not allowed.
There are three classes of parameter editing that can be done to ARBs:
move edges,
move faces, and rotate faces. There is an ``ARB control menu''
(see Figure \ref{menu-arb-ctl}) from
which one selects the type of parameter editing to be done.
A specific ARB edit menu will appear dependent on which parameter editing
option was selected. The ``return'' entry on each of these specific menus
will return the ``ARB control'' menu to the screen.
Note that there are several keyboard commands that apply only to ARB solids
which are being edited in SOLID EDIT state.
One such command is {\em mirface}, which replaces a designated
face of the ARB with a copy of an original face mirrored about
the indicated axis.
Another such command is {\em extrude}, which projects a designated face
a given amount in the indicated direction.
\mfig menu-arb8-edge, Move Edge Menu for ARB8.
\mfig menu-arb4-edge, Move Edge Menu for ARB4.
\mfig menu-arb8-face, Move Face Menu for ARB8.
\mfig menu-arb4-face, Move Face Menu for ARB4.
\subsection{Move ARB Edges}
To move an ARB edge, select the desired edge from the ``move edge'' menu.
For example, Figure \ref{menu-arb8-edge} shows the menu for
moving an edge of an ARB8, and Figure \ref{menu-arb4-edge}
shows the menu for moving an edge of an ARB4.
A point is then ``input'' either through a pen press or through the {\em p}
command.
The line containing the selected edge is moved so that it passes through
the coordinates of the input point.
Any affected faces are automatically adjusted to remain planar.
\subsection{Move ARB Faces}
To move an ARB face, select the desired face from the ``move face'' menu.
A point is then ``input'' either through a pen press or through the {\em p}
command. The plane containing the edited face is then moved so that it
contains the input point. The new face is then calculated and the ARB
is displayed.
The move face menus for an ARB8 are shown
in Figure \ref{menu-arb8-face}, and the move face menus for an ARB4
are shown in Figure \ref{menu-arb4-face}.
\mfig menu-arb8-rot, Rotate Face Menu for ARB8.
\mfig menu-arb4-rot, Rotate Face Menu for ARB4.
\subsection{Rotate ARB Faces}
ARB faces may be rotated around any of the vertices comprising that face.
First, select the desired face from the ``rotate face'' menu. You will then
be asked to select the vertex number around which to rotate the face.
The face can be rotated about the three coordinate axes. The knobs (Rotate X,
Rotate Y, and Rotate Z) are used for this purpose. For precise rotations,
use the {\em p} command. If three values are entered after the {\em p}, then
they are interpreted as angles (absolute) of rotation about the X, Y, Z axes
respectively. If only two values are entered, then they are considered as
rotation and fallback angles for the normal to that face. The {\em eqn}
command can also be used here to define the plane equation coefficients of
the face being rotated.
The rotate face menus for an ARB8 are shown
in Figure \ref{menu-arb8-rot}, and the rotate face menus for an ARB4
are shown in Figure \ref{menu-arb4-rot}.
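For illustration, after selecting a face and a rotation vertex, precise
rotations might be entered with the {\em p} command as follows (the angle
values are placeholders); the first form gives absolute rotations about the
X, Y, and Z axes, while the second gives rotation and fallback angles for the
face normal.
{\em
p 30 0 45 \\
p 90 0 \\
}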
\mfig ped-tgc, Typical TGC During Parameter Editing.
\subsection{Truncated General Cone (TGC) Parameter Editing}
The TGC general class of solids includes all the cylindrical COMGEOM solids.
The defining parameters of the TGC are two base vectors (A and B), a height
vector (H), two top vectors (C and D), and the vertex (V).
The top vectors C and D are directed the same as the base vectors A and
B respectively,
hence the top vectors are defined only by their lengths (c and d).
During solid editing, only vectors A and B are
labeled on the display.
Figure \ref{ped-tgc} depicts a typical TGC during parameter editing.
It is possible to change the length of the H, A, B, C, or D
vectors, resulting in a change in height or eccentricity of the
end plates. The overall size of the A,B or C,D end plates can
be adjusted, or the size of both can be changed together, leaving
only the H vector constant.
The H vector or the base plate (AXB) can be rotated.
Recall that vectors A \& C and vectors B \& D have like directions, hence
rotating the base (AXB) will automatically rotate the top (CXD).
Finally, one can move the end of the height vector H
with the TGC becoming or remaining
a right cylinder (move end H (rt)),
or with the orientation of the base (and top)
unchanged (move end H).
Either the mouse/tablet or the {\em p} command can be used.
These functions are selected from the menu which can be seen
in Figure \ref{ped-tgc}.
\mfig ped-ell, Ellipsoid Parameter Editing Menu.
\subsection{Ellipsoid Parameter Editing}
The ELLG general class represents all the ellipsoidal solids, including
spheres and ellipsoids of revolution.
The defining parameters of the ELLG are three mutually perpendicular
vectors (A, B, and C) and the vertex (V).
When an ELLG is being edited, only vectors A and B are labeled on the display.
Figure \ref{ped-ell} depicts a typical ELLG during parameter editing.
The parameter editing of the ELLG consists of scaling the lengths of the
individual vectors A, B, C.
One may also scale all these vectors together to the same length.
The scaling of these vectors is done using the data tablet/mouse in
exactly the same manner as in object scaling.
The {\em p} keyboard command again can be used to produce a vector of
desired length.
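For illustration, after selecting one of the vector scaling menu entries, a
vector of a precise length might be produced as follows (the length value is
a placeholder):
{\em
p 150 \\
}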
\mfig ped-tor, Torus Parameter Editing Menu.
\subsection{Torus Parameter Editing}
The TOR general class of solids contains only one type of torus, one with
circular cross-sections.
The defining parameters of the TOR are two radii (r1 and r2), a normal
vector (N), and the vertex (V).
The scalar r1 is the distance from the vertex to the midpoint of the
circular cross section.
The scalar r2 is the radius of the circular cross-section.
The vector N is used to orient the torus.
During solid editing, none of these parameters are labeled on the screen.
Figure \ref{ped-tor} depicts a typical torus during parameter editing.
The parameter editing of the TOR consists of scaling the radii, hence the
menu contains only two members.
\chapter{KEYBOARD COMMANDS}
The MGED keyboard commands are used to maintain overall control of the
system and to perform general housekeeping functions.
They are summarized in Figure \ref{cmd-summary}.
\begin{figure}[tb]
\begin{tabular}{l l l}
Key & Argument[s] & Description \\
\\
e & obj1* obj2* ... objn* & display objects on the screen \\
E & obj1* obj2* ... objn* & display objects evaluating regions \\
B & obj1* obj2* ... objn* & Zap screen, display objects \\
d & obj1* obj2* ... objn* & delete objects from screen \\
cp & oldobj newobj & copy 'oldobj' to 'newobj' \\
cpi & oldtgc newtgc & copy 'oldtgc' to 'newtgc' inverted \\
Z & -none- & Zap (clear) the screen \\
g & groupname obj1* obj2*....objn* & group objects \\
r & region op1 sol1....opn soln & create/modify a region \\
i & object instname & create instance of an object \\
mv & oldname newname & rename object \\
mvall & oldname newname & rename all occurrences of an object \\
l & object* & list object information \\
kill & obj1* obj2* ... objn* & remove objects from the file \\
killall & obj1* obj2* ... objn* & remove object[s] + references from file \\
killtree & obj1* obj2* ... objn* & remove complete paths **CAREFUL** \\
t & object* & table of contents \\
mirror & -[axis] oldobj newobj & mirror image of an object \\
mirface & \#\#\#\# axis & mirror face \#\#\#\# about an axis \\
extrude & \#\#\#\# distance & extrude an arb face \\
item & region item air & change region item/air codes \\
mater & region material los & change region mat/los codes \\
rm & comb mem1* mem2*....memn* & delete members from combination \\
units & mm|cm|m|in|ft & change the units of the objectfile \\
title & new-title & change the title of the description \\
p & dx [dy dz] & precise commands for solid editing \\
rotobj & xdeg ydeg zdeg & rotate(absolute) an edited object \\
scale & factor & scale(absolute) an edited object \\
translate & x y z & translate an edited object \\
arb & name rot fb & make arb8 with rot and fb \\
analyze & solids & print much info about a solid \\
summary & s|r|g & solid/region/group summary \\
tops & -none- & list all top level objects \\
find & obj1* obj2* ... objn* & find all references to an object \\
area & [endpoint-tolerance] & find presented area of E'd objects \\
\end{tabular}
\caption{MGED Command Summary 1 \label{cmd-summary} }
\end{figure}
\begin{figure}
\begin{tabular}{l l l}
Key & Argument[s] & Description \\
\\
plot & [-zclip] [-2d] [out-file] [| filter] & make UNIX-plot of view \\
color & low high r g b str & assign color(r g b) to item range \\
edcolor & -none- & text edit the color/item assignments \\
prcolor & -none- & print the current color/item assignments \\
make & name type & create and display a primitive \\
fix & -none- & restart the display after hangup \\
rt & [options] & raytrace view onto framebuffer \\
release & -none- & release current display processor \\
attach & nu|tek|tek4109|plot|mg|vg|rat & attach new display processor \\
ae & az elev & rotate view w/azim and elev angles \\
regdef & item [air los mat] & set default codes for next region created \\
ted & -none- & text edit a solids parameters \\
vrot & xdeg ydeg zdeg & rotate view \\
ill & name & illuminate object \\
sed & solidname & solid edit the named solid \\
center & x y z & set view center \\
press & button-label & emulate button press \\
knob & id value & emulate knob twist \\
size & value & set view size \\
x & -none- & debug list of objects displayed \\
status & -none- & print view status \\
refresh & -none- & send new control list \\
edcomb & comb flag item air mat los & edit comb record info \\
edgedir & delta\_x delta\_y delta\_z & define direction of an ARB edge being moved \\
in & name type {parameters} & type-in a new solid directly \\
prefix & string obj1* obj2* ... objn* & prefix objects with 'string' \\
keep & file.g obj1* obj2* ... objn* & keep objects in 'file.g' \\
tree & obj1* obj2* ... objn* & list tree for objects \\
inside & --prompted for input-- & find inside solid \\
\end{tabular}
\caption{MGED Command Summary 2 }
\end{figure}
\begin{figure}
\begin{tabular}{l l l}
Key & Argument[s] & Description \\
\\
solids & file obj1* obj2* ... objn* & make ascii solid parameter summary in 'file' \\
regions & file obj1* obj2* ... objn* & make ascii region summary in 'file' \\
idents & file obj1* obj2* ... objn* & make ascii region ident summary in 'file' \\
edcodes & obj1* obj2* ... objn* & edit region ident codes \\
dup & file {prefix} & checks for dup names in 'file' \& current file \\
cat & file {prefix} & cat's 'file' onto end of current file \\
track & --prompted for input-- & builds track given appropriate 'wheel' data \\
3ptarb & --prompted for input-- & makes arb8 given 3 pts, etc. \\
rfarb & --prompted for input-- & makes arb8 given point, rot, fallback, etc. \\
whichid & ident1 ident2 ... identn & list all regions with given ident \\
paths & --prompted for input-- & lists all paths matching input path \\
listeval & -prompted for input-- & gives 'evaluated' path summary \\
copyeval & --prompted for input-- & copy an 'evaluated' path-solid \\
tab & obj1* obj2* ... objn* & list objects as stored in data file \\
push & obj1* obj2* ... objn* & push object transformations to solids \\
facedef & \#\#\#\# {data} & define plane of an edited ARB face \\
eqn & A B C & define plane coefficients of rotating ARB face \\
q & -none- & quit \\
\% & -none- & escape to shell \\
? & -none- & help message \\
\end{tabular}
\caption{MGED Command Summary 3 }
\end{figure}
\section{Copy Object}
{\em \center cp oldobj newobj}
This command is used to produce a copy of an object (solid or comb).
In this case, the
object "oldobj" will be copied into an object called "newobj".
Examples:
{\em
cp arb8 hullbot.s \\
cp tgc wheelrim.s \\
cp torso.r driver\_torso \\
cp proto.man driver \\
}
\section{Zap Screen}
{\em \center Z}
This is the Zap command. It clears all objects from the screen.
\section{Drop objects from display screen}
{\em \center d obj1 obj2 ... objn}
This command allows one to remove objects from the display screen. In
this case "obj1" thru "objn" will be removed from the display.
\section{Move (rename) object}
{\em \center mv old new}
This command is used to rename objects in the data file. In this
case, the object "old" will be renamed "new".
A note of caution: the name is changed only in the object record itself, not
in any member records. Thus if the object "old" appears as a member
of any other object, the name will not be changed there.
To rename all occurrences of an object, use the "mvall" command.
Examples:
{\em
mv test hull \\
mv g00 air \\
mv g1 turret \\
}
\section{Set Local Working Units}
{\em \center units ab}
This command allows one to change the local or working units at ANY time.
The only allowable values for "ab" are "mm", "cm", "m", "in", or "ft".
Examples:
{\em
units mm \\
units in \\
}
\section{Group objects}
{\em \center g group obj1 obj2 ..... objn}
This command creates or appends to a combination record and
is used to group objects together either for editing or displaying
purposes. In this case, "obj1" through "objn" are added as members
to the combination "group". If "group" does not exist, it is
created and "obj1" through "objn" are added as members.
NOTE: no checking is done to see whether "obji" is already a member of "group".
Examples:
{\em
g shell hull turret \\
g tank wheels engine crew shell \\
g tank track \\
}
\section{Create Region}
{\em \center
r region op1 sol1 op2 sol2 .... opn soln
}
This command is used to create regions or append to regions.
If "region" exists, then solids "sol1" through "soln" are
added as members with "op1" through "opn" as the defining operations.
If "region" does not exist, then it is created and solids "sol1" through
"soln" are added as members with "op1" through "opn" as the
defining operations. A region is merely a combination
record with a flag set and is distinguished from other combinations (groups)
since it has meaning to the COMGEOM solid modeling system.
Note that "+" or "u" must be the first operations in a region.
When a region is created, the item and air codes are set equal to default values.
If the "regdef" command has been used, then those values will be used,
otherwise the values "1000 0 100 1" will be used respectively.
To change
the item and air codes use the "item" command.
The "edcodes" command is probably the easiest and fastest way to change these
identifying codes.
Note: In the past, all members of a region had to be solids, but
recently combinations have been allowed as members of regions. Hence,
the names "soli" can also be combinations (groups) now.
Also, as in grouping, no checking is done for members already in the region.
Examples:
{\em
r hulltop.r + hulltop.s -- hullleft.s -- hullright.s \\
r gun + gun.s -- gunin.s \\
r gunair + gunin.s \\
}
\section{Instance an object}
{\em \center
i object combname
}
This command is used to make an instance of an object.
An instance of an object is produced by creating a combination
and making the object a member.
In this case, an instance of "object" is made by
creating the combination record "combname" (if "combname" does not
already exist) and adding "object" as a member.
If "combname" already exists, then "object" is added as the next member.
An instance is used to refer to an object, without making actual copies
of the object. Instances are useful when one is adding a certain
component to a target description many times.
Furthermore, any modifications to an object which has been instanced need only be
done in the original (prototype) object.
These modifications will then be automatically reflected in all the
instances of the object.
Examples:
{\em
i heround he1 .he1. \\
i heround he2 .he2. \\
i heat heat1 .heat1. \\
i heat heat2 .heat2. \\
}
\section{Change Title of Database}
{\em \center
title newtitle
}
This command allows one to change the title of the model database at any time.
The string "newtitle" will become the new title, and may contain blanks.
The title is limited to 72 characters including blanks.
Examples:
{\em
title XM89A -- New version of tank \\
title M345 (groups are m345 and m345a) \\
}
\section{Extrude}
{\em \center
extrude \#\#\#\# distance
}
This command allows the user to project
a face(\#\#\#\#) of an arb being edited a normal distance to create a new arb.
The value of "face" is 4 digits such as 1256. If the face is projected
in the wrong direction use a negative "distance".
One common use for this command is
for producing armor plates of a desired thickness.
Examples:
{\em
extrude 1234 20 \\
extrude 2367 34.75 \\
extrude 2367 -34.75 \\
}
\section{Remove members from Combination}
{\em \center
rm comb mem1 mem2 .... memN
}
This command allows one to delete members from a combination (group or region) record.
In this case, members "mem1" through "memn" will be deleted from
the combination "comb".
Examples:
{\em
rm tank hull wheels \\
rm region1 solid8 solid112 \\
rm turtop.r tursidel.s tursider.s \\
}
\section{List Object Information}
{\em \center
l object
}
This command is used to list information about objects in the data file.
The information listed depends on what type of record "object" is.
If "object" is a combination record, then the members are listed.
If "object" is a solid record, then the MGED general solid type and
the parameters as presently in the data file are listed.
Note: only the solid parameters as they exist in the solid record are
listed, no transformation matrix is applied.
Hence, if the solid was edited as a member of a combination, the "l"
command will not reflect the editing in the listed parameters.
To produce this type of listing, see the "listeval" command.
Examples:
{\em
l hull \\
l turret \\
l turtop.s \\
l arb8 \\
}
\section{Analyze Solid}
{\em \center
analyze solid
}
This command produces information about a solid (all except ARS).
The information includes surface area(s) and volume.
Also, in the case of ARBs, edge lengths and rot and fallback angles
and plane equations are given for each face.
If "solid" is present that solid name will be used and analyzed.
If "solid" is absent, the solid at the bottom of the present path
being edited will be analyzed.
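Examples (the solid name is a placeholder; the second form analyzes the
solid currently being edited):
{\em
analyze hullbot.s \\
analyze \\
}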
\section{Mirror Object}
{\em \center
mirror -[axis] oldobj newobj
}
This command is used to create a new object which is
the mirror image of an existing object about an axis.
The object may be either a solid or a combination.
In this case, a mirror image of the object "oldobj" will be created
about the axis indicated by "axis" and the new object record will
be called "newobj".
The only acceptable values for the parameter "axis" are "x", "y", and "z".
Examples:
{\em
mirror -y tur.left.s tur.right.s \\
mirror -z tur.top.s tur.bot.s \\
mirror -x tur.front.s tur.back.s \\
mirror -y lt\_gun rt\_gun \\
}
\section{Create ARB8}
{\em \center
arb name rot fb
}
This command allows one to create an arb8
with the desired rotation and fallback angles.
In this case, an arb8 with the name of "name" will be created with the desired
rotation angle of "rot" degrees and the fallback angle of "fb" degrees.
Examples:
{\em
arb top1.s 0 90 \\
arb sidelt.s 90 0 \\
arb upglacis.s 0 60 \\
}
\section{Change Item (Ident) and Air codes of Region}
{\em \center
item region ident air
}
This command allows one to change the item or
air code numbers of a region. If the air code ("air") is not included,
a zero is assumed.
To change the air code, a zero item code must be used (see second
example below).
Examples:
{\em
item region1 105 \\
item region7 0 2 \\
item region11 129 0 \\
}
\section{Specify Material Properties}
{\em \center
mater comb [material]
}
This command is used to change the material properties specification for
a combination.
\section{Edit Combination Record Info}
{\em \center
edcomb comb regionflag regionid air los GIFTmater
}
This command is used to change the material and the los percent for
a region.
\section{Edit (Display) an object on the screen}
{\em \center
e obj1 obj2 ... objn
}
This command allows one to display objects on the screen.
In this case, "obj1" thru "objn" will be displayed on the screen.
\section{Evaluated Display of Object on the screen}
{\em \center
E obj1 obj2 ... objn
}
This command is the same as the "e" command, except the regions will
be evaluated before being displayed.
\section{Zap screen and Display Object}
{\em \center
B obj1 obj2 ... objn
}
This command is the same as the "e" command, except that the screen
is cleared (Zap) before the objects are displayed.
\section{Kill (delete) object from database}
{\em \center
kill obj1 obj2 ... objn
}
This command allows one to remove objects from the file itself.
Only the object records themselves are removed; any references made
to these objects will still exist.
To remove the references also, see the "killall" command.
\section{List Table of Contents}
{\em \center
t obj1 obj2 ... objn
}
This is the table of contents command. If arguments are present, a list
of all objects in the file matching these names will be printed.
If there are no arguments, then a listing of all objects will be printed.
\section{List Tree Tops}
{\em \center
tops
}
This command will search the target file hierarchy, and list all "top level"
objects (objects which are not members of any other object).
This command is useful to make sure objects have been grouped properly.
\section{Make prototypical solid}
{\em \center
make name type
}
This command will create a solid of a specified type.
This solid will be named "name" and the solid type will be "type".
The acceptable types are: arb8, arb7, arb6, arb5, arb4, tor, tgc, tec,
rec, trc, rcc, ellg, ell, sph.
This new solid will be drawn at the center of the screen.
This command should be used to create
solids for editing.
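Examples (the solid names are placeholders):
{\em
make wheel.s tor \\
make hull.s arb8 \\
}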
\section{Mirror ARB Face}
{\em \center
mirface \#\#\#\# axis
}
This command allows one to mirror a face of an edited arb about an axis.
This command is quite useful for adding air to a "symmetric" target.
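For illustration, mirroring face 1234 of the edited arb about the y axis
might be entered as follows (the face number is a placeholder):
{\em
mirface 1234 y \\
}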
\section{Print Summary of Objects}
{\em \center
summary s|r|g
}
This command will produce a summary of objects in the target file.
If the options s, r, or g are entered a listing of the solids, regions,
or groups will also be presented.
\section{Specify Numeric Parameter(s)}
{\em \center
p dx [dy dz]
}
This is the parameter modification command and is used during solid
editing to make precise changes.
The actual meaning of the values typed after the "p" depends on which
editing option is being performed.
If one were translating a solid, then the values would be the x,y,z
coordinates of the vertex of the solid.
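For illustration, while translating a solid, a hypothetical entry such as
the following would place the vertex of the solid at the given x, y, z
coordinates (the values are placeholders):
{\em
p 100 50 0 \\
}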
\section{Release Current Display}
{\em \center
release
}
This command releases the current display device,
and attaches the null device.
\section{Attach to Display Device}
{\em \center
attach device
}
This command allows one to attach a display device (the present display device
is released first).
The present acceptable values for "device" are vg, mg, tek, rat, plot, tek4109, ir, sgi, and nu.
The "plot" device will produce a UNIX-plot of the present view (including the
faceplate) on a display device using a specific filter.
You will be asked which filter to use.
Sample filters include "tplot" and "plot-fb".
\section{Numeric Object Rotation Edit}
{\em \center
rotobj xdeg ydeg zdeg
}
This command allows one to make precise rotations of an object during
object editing.
MGED must be in the "object edit" state for this command to have effect.
If "object rotation" is not in effect, MGED will select this option for you
and perform the rotation.
The object will be rotated "xdeg" about the x-axis, "ydeg" about the
y-axis, and "zdeg" about the z-axis.
The rotation is "absolute"; that is, the total rotation since the beginning
of object editing will be equal to the input values.
The rotation is done about the "KEY" point for the object being edited.
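Examples (the angle values are placeholders):
{\em
rotobj 0 0 90 \\
rotobj 45 0 0 \\
}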
\section{Scale Edited Object}
{\em \center
scale xxx.xx
}
This command allows one to make precise scaling changes to an object
during object editing.
MGED must be in the "object edit" state for this command to have effect.
If one of the object scale options is not in effect, the "global scale"
options will be selected.
The object will be scaled by a TOTAL amount equal to the input value.
If one of the local scale options is in effect, the object will be
scaled in the selected axis direction by an amount equal to the input value.
The scaling is done about the "KEY" point of the object being edited.
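Example (the factor is a placeholder; a factor of 2 makes the object twice
its original size):
{\em
scale 2 \\
}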
\section{Translate Edited Object}
{\em \center
translate xxx.xx yyy.yy zzz.zz
}
This command allows one to make precise translation changes to an object
during object editing.
MGED must be in the "object edit" state for this command to have effect.
If the object translation option is not in effect, this option will be
selected and the translation performed.
The "KEY" point of the object being edited will move to the input coordinates.
\section{Fix Broken Hardware (sometimes)}
{\em \center
fix
}
This command will "fix" (restart) the display device after a hardware error.
\section{Ray-Trace Current View}
{\em \center
rt [-s\#]
}
This command will run the {\bf rt}(1) program
to produce a color shaded image of objects on the currently
selected framebuffer.
The resolution of the image (number of rays) is equal to "\#" from the "-s"
(square view resolution) option.
If the -s option is absent, 50x50 ray resolution will be used.
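Example (the resolution value is a placeholder):
{\em
rt -s128 \\
}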
\section{Emulate Knob Twist}
{\em \center
knob id value
}
This command is used to emulate a "knob twist".
Generally this command is used for display devices which have no actual
knob peripherals (e.g. tek).
Any non-zero number entered for "value" is converted to 1 (if "value" is
greater than zero) or is converted to -1 (if "value" is less than zero).
The user must enter the same command with "value" equal to zero to
stop the action invoked by the knob twist.
The "id" defines which knob is to be "twisted":
\begin{tabular}{rl}
x & rotates about x-axis \\
y & rotates about y-axis \\
z & rotates about z-axis \\
X & slew view in x direction \\
Y & slew view in y direction \\
Z & zoom the view \\
\end{tabular}
Examples:
{\em
knob x 1 \\
knob x 0 \\
knob Z -1 \\
knob Z 0 \\
}
\section{Solid\_Edit Named Solid}
{\em\center
sed name
}
This command allows one to immediately enter the solid edit mode
with the solid "name" as the edited solid.
Note that the solid must be displayed but not multiply referenced.
\section{Illuminate Named Object}
{\em\center
ill name
}
This command is used to illuminate an object ... a path containing this
object ("name") will be illuminated.
This command is primarily used with display devices which do not have
a tablet to pick objects for editing.
\section{Rotate the View}
{\em\center
vrot xdeg ydeg zdeg
}
This command rotates the VIEW "xdeg" degrees about the screen x-axis,
"ydeg" degrees about the screen y-axis, and "zdeg" degrees about the
screen z-axis.
This command is useful when the precise rotation desired is known.
It is also useful when in a rotation edit mode, and the viewing
rotation needs to be changed, without affecting the current edit.
\section{Move Screen Center}
{\em\center
center xx.xx yy.yy zz.zz
}
This command moves the screen center to (xx.xx, yy.yy, zz.zz).
Using this command is one way of slewing the view.
\section{Set View Size}
{\em\center
size xx.xx
}
This command sets the view size to xx.xx and is one way of zooming
the display.
Making the view size smaller has the effect of zooming in on the view.
\section{Extended List of all Objects in Displaylist}
{\em\center
x
}
This command produces a list of all objects displayed, listing the center
of the object, its size, and if it is in the present view.
It is intended primarily for software debugging.
\section{Refresh Display}
{\em\center
refresh
}
This command will send a new display list to the display device.
\section{Print View Status}
{\em\center
status
}
This is a debug command which prints the status of the current view,
including all viewing and editing matrices.
\section{Simulate Button Press}
{\em\center
press button-label
}
This command allows one to emulate a button press and is generally used
on display devices which do not have actual button peripherals.
The following are the strings allowed for "button-label" and all produce
the indicated view on the device screen:
top, bottom, right, left, front, rear, 90,90, 35,25
The following is a listing of the remaining acceptable strings for
"button-label" and the resulting action:
\begin{tabular}{rl}
reset & reset the view \\
save & save the present view \\
restore & restore the saved view \\
adc & display the angle-distance cursor \\
oill & begin object illumination (pick) \\
sill & begin solid illumination (pick) \\
oscale & object scale \\
ox & object translation in x direction only \\
oy & object translation in y direction only \\
oxy & object translation \\
orot & object rotation \\
sedit & put up solid parameter menu \\
srot & solid rotation \\
sxy & solid translation \\
sscale & solid scale \\
accept & accept editing done \\
reject & reject editing done \\
\end{tabular}
Examples:
{\em
press 90,90 \\
press front \\
press oill \\
press orot \\
press reject \\
}
\section{Escape to the Shell}
{\em\center
\%
}
This command allows one to escape to the shell to perform multiple commands
without having to terminate the current MGED session.
To return to mged, enter a control-d to the Shell.
Note that the "!" escape at the beginning of a line can be used
to send a single command to the shell.
\section{Get Short Help Listing}
{\em\center
?
}
This is the short form of the help command,
that lists the names of all MGED commands.
\section{Get Long Help Listing}
{\em\center
help
}
This is the long form of the help command,
that produces a listing of all the available commands and their
arguments, and a one-sentence summary of the command's purpose.
\section{Exit (Quit) MGED}
{\em\center
q
}
Running the "q" command, or entering an End-Of-File (EOF) (typ. Control/D)
is the normal way of exiting MGED.
\section{Copy and Translate TGC}
{\em \center
cpi oldtgc newtgc
}
This command is a specialized copy command and is designed to be used when one
is "running wires" in a description.
The object being copied must be a cylindrical solid (TGC).
The following occurs when cpi is used: first the cylinder ("oldtgc") is
copied to "newtgc"; then "newtgc" is translated to the end of "oldtgc";
then "newtgc" is displayed; and finally, MGED is put in the SOLID EDIT state
with "newtgc" as the edited solid.
\section{Remove Object and All References}
{\em \center
killall obj1 obj2 ... objn
}
This command will accomplish two things: first, the object[s] will be
removed from the data file just as in the "kill" command; second, all
references to the object[s] will also be removed.
\section{Remove Complete Tree}
{\em \center
killtree obj1 obj2 ... objn
}
This command will remove from the file complete trees originating with
obj1, obj2, ..., objn.
Every object in the designated paths will be removed from the file, hence
CAUTION is urged.
Make sure that "killtree" is what you want to do.
Using the "paths" or "tree" command on an object before "killtree" will
show what objects will be killed.
\section{Add Color To Display}
{\em \center
color low high r g b string
}
This command allows one to make color assignments to a range of item codes.
The arguments "low" and "high" are the item ranges.
The arguments "r", "g", and "b" are the red, green, and blue values
respectively (the range of these numbers is generally 0-255).
The argument "string" is a string describing this class of items.
A blank is considered a terminator, so there can be no blanks in
this string.
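For illustration, assigning red to the item code range 1000 through 1999
might be entered as follows (the ranges, color values, and label are
placeholders):
{\em
color 1000 1999 255 0 0 hull\_items \\
}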
\section{Edit Display Colors}
{\em \center
edcolor
}
This command allows one to edit the existing color assignments (table).
The changes are made using the user's "selected" text editor,
found in the environment variable EDITOR.
Note that this method of specifying object colors is obsolete,
and has been replaced by the {\em mater} command.
\section{Print Display Colors}
{\em \center
prcolor
}
This command prints the color assignments as they presently exist.
\section{Find Objects}
{\em \center
find obj1 obj2 ... objn
}
This command will find ALL references of obj1 obj2 ... objn in the file.
\section{Estimate Presented Area}
{\em \center
area [tolerance]
}
This command finds an estimate of the presented area of all E'd objects
in the present view from that aspect.
The argument "tolerance" is the tolerance for the endpoints of
line segments being "equal" and is optional.
\section{Produce UNIX Plot}
{\em \center
plot [-zclip] [-2d] [out-file] [| filter]
}
This command is used to produce a UNIX-plot hardcopy of the present view of
the 'geometry' on the display device.
The MGED faceplate will not be drawn.
Some useful examples are:
\begin{tabular}{rl}
plot-fb & LIBFB framebuffer (low res) \\
plot-fb -h & LIBFB framebuffer (high res) \\
tplot -Ti10 & Imagen laser printer \\
tplot -Tmeg & Megatek 7250 \\
tplot -T4014 & Tek4014 \\
tplot -Thpgl & HP7550A plotter
\end{tabular}
\section{Text Edit Solid Parameters}
{\em \center
ted
}
This command allows one to edit a solid's parameters using the text editor
defined by the user's path.
The solid being edited (solid edit mode) will be the one that will be
"text edited".
\section{Keyboard Input of Solid Parameters}
{\em \center
in name type {parameters}
}
This command allows one to enter a new solid directly via the keyboard.
The user will be prompted for any missing input and the solid will be
displayed on the screen. WARNING: only minimal checking of parameters
is done.
\section{Change View Azimuth, Elevation}
{\em \center
ae az elev
}
This command sets the display viewing angles using the input azimuth (az)
and elevation (elev) angles.
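Example (these angles give the standard 35/25 view):
{\em
ae 35 25 \\
}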
\section{Define Region Identifiers}
{\em \center
regdef item [air los mat]
}
This command allows one to change the default codes which will be given
to the next region created.
If the "air" code is non-zero, then the "item" code will be set to zero.
\section{Change Edge Direction}
{\em \center
edgedir delta\_x delta\_y delta\_z
}
This command allows one to define the direction of an ARB edge being moved.
If only two arguments are input, the code assumes these to be the
rotation and fallback angles for the edge.
If three arguments are present, the code will use them as "deltas" to
define the direction of the edge.
Note that this command can be useful to find the intersection of
a line (edge) with planes (faces).
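For illustration, the two forms of input might look like the following (the
values are placeholders); the first constrains the edge to move along the x
direction, and the second gives rotation and fallback angles for the edge.
{\em
edgedir 1 0 0 \\
edgedir 45 30 \\
}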
\section{Prefix Object Names}
{\em \center
prefix string obj1 obj2 ... objn
}
This command will prefix obj1, obj2, .... objn with "string". All occurrences
of these names will be prefixed. String matching is allowed for the objects
to prefix.
\section{Keep Objects in Another File}
{\em \center
keep file.g obj1 obj2 ... objn
}
This command allows one to keep the listed objects in the file "file.g".
This command is useful for pulling out parts of a description.
\section{List Object Hierarchy}
{\em \center
tree obj1 obj2 ... objn
}
This command will print the tree structure for the objects listed.
\section{Create Inside Solid}
{\em \center
inside
}
This command is used to define a solid (the inside solid) such that when it is subtracted
from a given solid (the outside solid), the resulting region will have the desired
thicknesses.
To invoke this option, enter "inside".
You will then be prompted for the required data.
If you are in the solid edit mode, the "outside solid" will default to the
solid presently being edited.
If you are in the object edit mode, the "outside solid" will default to the
"key solid" of the object path being edited.
If you are not in an edit mode, you will then be asked for the name of
the "outside solid".
Next, you will be asked what name you wish to call the new "inside solid"
to be calculated.
Finally, you will be asked to enter the thickness[es], depending on the
solid type.
The "inside solid" will then be displayed on the screen.
If the thickness values input are negative, then the thickness will be
directed to the "outside" of the solid.
\section{Produce ASCII Summary of Solids}
{\em \center
solids file obj1 obj2 ... objn
}
This command will produce an ascii summary of all solids involved with
objects obj1, obj2, ..., objn. This summary will be written in 'file'.
In this file, all regions and solids are numbered and will match the
numbers of a COMGEOM deck produced by VDECK or GIFT if the objects are
entered in the same order.
The file 'file' will be overwritten if it already exists.
String matching is allowed for the objects.
\section{Produce ASCII Summary of Regions}
{\em \center
regions file obj1 obj2 ... objn
}
This command will produce an ascii summary table of all regions involved with
objects obj1, obj2,...,objn.
This summary table will be written in 'file'.
In this file, all regions and solids will be numbered and will match the numbers
of a COMGEOM description produced by VDECK or GIFT if the objects are entered
in the same order.
This file will be identical to that produced by the "solids" command, except that
the actual solid parameters are not listed.
The file 'file' will be overwritten if it already exists.
String matching is allowed for the object names.
\section{Produce ASCII Summary of Idents}
{\em \center
idents file obj1 obj2 ... objn
}
This command will produce an ascii summary table of all region idents involved with
objects obj1, obj2,...,objn.
This summary table will be written in 'file'.
In this file, all regions will be numbered and will match the region numbers
of a COMGEOM description produced by VDECK or GIFT if the objects are entered
in the same order.
At the end of this file will be the same region information, but ordered
by ident number.
The file 'file' will be overwritten if it already exists.
String matching is allowed for the object names.
\section{List Objects Paths}
{\em \center
paths
}
This command will print ALL paths matching an "input" path.
You will be asked to enter the path to match.
The path members are entered on ONE line, separated by spaces.
The input path need not be a complete path, but must contain at least one object.
All paths with the same first members as the input path will be listed.
This command is useful for finding complete paths which begin with certain objects.
\section{List Evaluated Path Solid}
{\em \center
listeval
}
This command will "evaluate" and list ALL paths matching an input path, including the
parameters of the solids at the bottom of each path.
These parameters will reflect any editing contained in the path listed.
You will be asked to enter the path to match.
The path members are entered on ONE line, separated by spaces.
The input path need not be a complete path, but must contain at least
one object.
Note that since the solid parameters are printed, this could be a
rather lengthy listing depending on the completeness of the input path.
\section{Evaluate Path and Copy Solid}
{\em \center
copyeval
}
This command allows one to copy an "evaluated solid", that is a complete path
ending in a solid.
You will be asked to enter a complete path.
Again, this path is entered on ONE line with the members separated by spaces.
If you do not know the complete path, use the "paths" command above to find it.
Next, you will be asked to enter the name of this copied solid.
The input path will be traversed and the accumulated path transformations will
be applied to the parameters of the bottom solid.
These new parameters will then be the copied solid.
Note: this command is useful for making "dummy solids" to subtract for
overlaps between objects which have been "object" edited.
\section{Edit Objects Region Identifiers}
{\em \center
edcodes obj1 obj2 ... objn
}
This command provides an easy way to modify the code numbers (item, air, material, and los)
of ALL regions found in paths beginning with an object.
For each object, all paths beginning with that object are traversed until a region
is encountered...at which time the following is listed:
item air mat los /obj1/.../region.n
The cursor then jumps back to "item" at the beginning of the line.
At this time the cursor can be advanced only by entering a new item code
(only digits are allowed) or by hitting the "space bar" or "tab key".
The space and tab will move the cursor to the next position in the line.
Pressing the "backspace key" will move the cursor to the beginning of
the next location to the left.
Moving the cursor "past" the los location
will return it to the beginning of the line.
When the code numbers are as you desire,
a RETURN will print the next line
for editing.
At any time, pressing the "R" key will restore the idents on the current
line to the way they were originally.
A Control/C or a "q" will abort the process at the current line.
String matching is allowed for the object names.
\section{List Object As Stored}
{\em \center
tab obj1 obj2 ... objn
}
This command will produce a listing of objects as they are stored in
the MGED object data file.
String matching is allowed for the objects.
\section{List Regions With Given Ident}
{\em \center
whichid item1 item2 ... itemn
}
This command will list ALL regions in the data file which have certain
item codes.
\section{Create ARB Given 3 Points}
{\em \center
3ptarb
}
This command will produce a "plate mode" arb8 given 3 points on one face, 2 coordinates
of the 4th point on that face, and a thickness.
The 3rd coordinate of the 4th point will be solved for and the "other" face
will be a normal distance (== the desired thickness) away.
You will be asked for all necessary input.
\section{Create ARB Given Point and Angles}
{\em \center
rfarb
}
This command will produce a "plate mode" arb8 given a point on one face,
rotation and fallback angles for that face, 2 coordinates of the 3
remaining points on that face, and a thickness.
The 3rd coordinates of the three points will be solved for, and the other
face will be a normal distance equal to "thickness" away.
You will be asked for all necessary input.
\section{Push Editing Down Paths}
{\em \center
push obj1 obj2 ... objn
}
This command will "push" an object's path transformations to the solid's parameters
located at the bottom of each path.
If a conflict is encountered, then an error message is printed and NOTHING is done.
A conflict occurs when the same solid "ends" different paths but the transformations
are different.
Conflicts could occur when "instanced" or "copied" groups occur in an object's
paths or when a solid is a member of 2 regions which have been edited separately.
A complication of this command, which will not be listed as a conflict, is when a solid's parameters have been
changed by a push, yet this solid is referenced by another object which was NOT
included in the push.
The user should beware of this situation.
This command can be very useful when "adding" parts from another file.
Once the "object editing" of these new parts is completed, the push command
will put the editing done to the solid level.
Since the added parts should have no cross-references with the existing
objects, there should be no problems.
\section{Check For Duplicate Names}
{\em \center
dup file.g {string}
}
This command will compare the current object data file with another
MGED file "file.g" and list any object names common to both files.
If "string" is present, all object names in "file.g" will be prefixed with
"string" before comparing for duplicate names.
Generally, one uses "string" only when duplicate names are found without it.
\section{Concat Files}
{\em \center
concat file.g {string}
}
This command will concatenate another MGED file "file.g" onto the present
object data file.
If "string" is present, all names in "file.g" will be prefixed with this
string.
No objects from "file.g" will be added if the name already occurs in the
current object file...the object names will be listed and skipped.
Hence, this command should be used in conjunction with the "dup" command to
eliminate any problems with duplicate names.
\section{Create Pseudo-Track}
{\em \center
track
}
This command adds track components to the data file to fit specified
"wheel" data.
Solids, regions, and a group containing the regions will be created and
displayed on the screen.
You will be prompted for all required data.
\section{Define ARB Face}
{\em \center
facedef \#\#\#\#
}
This command is used to define the face plane of an ARB that is being edited.
The following is the option menu displayed when this command is used:
\begin{tabular}{rl}
a & planar equation \\
b & 3 points \\
c & rot and fb angles + fixed point \\
d & same plane thru fixed point \\
q & quit \\
\end{tabular}
To select any of these methods of defining a plane, enter the appropriate
letter. You will then be asked for the desired input.
\section{Define Plane Equation of ARB Face}
{\em \center
eqn [A B C]
}
This command is used when one is rotating the face of an edited ARB and defines
the coefficients of the face's plane equation (Ax + By + Cz = D).
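For illustration, a face might be forced into a horizontal plane (normal
along +z) with a hypothetical entry such as:
{\em
eqn 0 0 1 \\
}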
\section{Move (Rename) Everywhere}
{\em \center
mvall old new
}
This command is used to rename all occurrences of an object in the data file. In this
case, the object "old" will be renamed "new" for every occurrence.
\section{Miscellaneous Commands}
It is possible to view, specify, and text-edit
information pertaining to the material type and color of various
parts of the model tree. This is an interim capability
intended to provide enough material properties information for
current rendering and analysis purposes until the design of a
full material properties database can be finalized.
In addition to a variety of usual database manipulation
and status commands, there are commands to
compare the current database for name overlap (conflicts)
with another database, as well as commands to import and export
subtrees to/from the current database.
If name conflicts between the two databases
do exist, there are commands to rename an individual node without
changing any of the references to it (``mv''), or to rename
a node and change all the references to it (``mvall'').
Another command which is useful for preparing to move subtrees between
databases is the ``push'' command,
which adjusts the transformation matrices from the indicated point
down to the leaves of the directed acyclic graph, leaving the higher
level arcs with identity matrices.
\section{UNIX-Plot Output}
The ``plot'' command can store an exact image of the current
(non-faceplate) display on the screen, either using the
System V standard 2-D monochrome UNIX-Plot ({\em plot(4)}) format,
or the BRL 3-D color extended-UNIX-Plot format.
These plots can be sent to a disk file,
or ``piped'' directly to a filter process.
This can be useful for making hard copies of the current MGED view
for showing to others,
using a local pen plotter or laser printer.
\section{Ray-Tracing the Current View}
An important capability even beyond the ability to generate an
evaluated boundary wireframe is the ability of MGED to initiate a
quick ray-trace rendering of the current view on any nearby framebuffer!
This is implemented by using the MGED ``rt'' command to fork off
an instance of the RT program, and sending the RT program
a description of the current view
with any user-specified options.
This allows the designer to
use the power of MGED to select the desired view, and then to
quickly verify the geometry and light source placement.
A 50 by 50 pixel rendering of the current view can usually be done
in less than a minute (on a DEC VAX-780 class processor), and allows
for general verification before the designer uses the ``saveview''
command to submit a batch job for a high resolution ray-trace of
the same view.
\section{Animation}
The MGED editor includes a number of features which are useful
for developing animation tools and scripts.
The full description of the current viewing transformation and eye position
can be saved in a file,
and such a previously saved view can be read back at any time,
immediately changing the editor's view.
In addition, the current viewing transformation and eye position can be
appended to a file containing a collection of keyframes.
Most importantly, a file full of keyframe information, either raw keyframe
positions or smoothly interpolated keyframe sequences, can
be ``played'' in real time using MGED,
providing a powerful and fast animation preview capability.
As a separate animation capability intended
for developing demonstrations and instructional material relating to the
use of the MGED editor,
all user interactions with the editor can be recorded in a file,
along with an indication of the time elapsed between user actions.
This file can be adjusted using a normal text editor to remove any errors,
or to eliminate dead time where the user stopped to think.
Once created, this session script can be replayed through the editor
at any time, either to provide a smooth ``canned'' demonstration
before a live audience, or to create a film or videotape.
\chapter{TUTORIALS ON VIEWING AND STATES}
Tutorials with illustrations are provided to give the MGED user a
step-by-step walk-through of the basic capabilities of the graphics
editor.
Standard UNIX login and logout procedures appropriate to each site
should be followed prior to
beginning and after ending the tutorials.
Each of the tutorials will use the solids contained in the MGED database called
``prim.g''.
These can be obtained by making a copy of ``db/prim.g''
from the BRL-CAD Package distribution tree. It is important to make
a copy of the database and work with that, rather than using the
supplied one. Changes made during the editing process are written
to the database when they are {\sl accepted}.
The first tutorial shows a sample invocation dialogue. All other
tutorials start at the first MGED prompt ({\tt mged> }). If the user
wishes to continue from one tutorial to the next without leaving MGED,
issue the {\em press reject} and {\em press reset} commands
before starting a new tutorial.
User input will be shown in an
{\em emphasized} font, and MGED output will appear in a {\tt typewriter}
font. If the user input is shown on the same line as a prompt, the
input is literal. If the user input is shown on a line by itself,
it is a directive, and is entered in an appropriate fashion.
The tutorials are self-contained, and if the user wishes to proceed to
the next tutorial without exiting MGED,
the RESET button should be pressed
to return to the top view, where the model XYZ axes
map to the screen XYZ axes.
The standard recovery procedure when in the middle of an editing operation
is to select REJECT edit. Control is
returned to the viewing state, and the user can restart with the last edit (e)
command used in the tutorial.
\section{States Within the Edit Process}
In this tutorial, the user will invoke MGED on a file called ``prim.g'';
attach a {\sl display manager\/}; explore the various MGED states;
and finally, exit MGED. An MGED database has a treelike structure. The
leaves are the individual solids, and the other nodes are groupings
of those solids. The solid editing functions are concerned with defining
and modifying the leaves, and the object editing functions operate
on groups, which are Boolean combinations of solids. One useful mental
model is to envision solid editing as operating directly on a leaf and
object editing as operating on the arc connecting a pair of nodes. The
object edit will affect everything below the selected arc (this is why
there is an additional state transition when object editing).
\section{Viewing State}
The first task is to invoke MGED. This tutorial will assume the user
has a copy of the ``prim.g'' database in the current directory.
\noindent
{\tt \$ }{\em mged prim.g}\\
{\tt BRL-CAD Release 3.0 Graphics Editor (MGED) Compilation 82}\\
{\tt Thu Sep 22 08:08:39 EDT 1988}\\
{\tt [email protected]:/cad/.mged.4d2}\\
\noindent
{\tt attach (nu|tek|tek4109|ps|plot|sgi)[nu]? }{\em sgi}\\
{\tt ATTACHING sgi (SGI 4d)}\\
{\tt Primitive Objects (units=mm)}\\
{\tt mged> }\\
The first three lines give information about which version of MGED is running,
when it was compiled, and who compiled it. The next line is the display
manager attach prompt. This prompt provides a list of available display
managers, then shows what the default will be (selected if the user answers
with a carriage return). In this case, the Silicon Graphics 4d display
manager was selected, as is noted by the following line.
Next the title of the database and
the unit of measurement used in the database are printed,
and finally, the first prompt is issued.
At this point MGED has loaded ``prim.g''; attached the SGI display;
and is awaiting commands. Attaching a display also causes what
is known as the MGED {\sl faceplate} to be drawn on the graphics display.
The faceplate has several features of interest. In the upper left corner
of the display is a box which always shows the current MGED {\sl state}.
This can be one of six states: {\bf VIEWING}, {\bf SOL PICK},
{\bf SOL EDIT}, {\bf OBJ PICK}, {\bf OBJ PATH}, or {\bf OBJ EDIT}.
Immediately below is the menu area. The only menu item initially shown is
one labeled {\bf BUTTON MENU}. This menu item toggles the display of the
button menu entries when {\sl selected} (more on selection later).
At the bottom of the display are two status lines. The first line
contains information about the current view.
The entry labeled {\bf cent=} gives the {\sl model space} coordinates
of the dot in the center of the display.
The entry labeled {\bf sz=} reflects the current size in model units of
the {\sl viewing cube}. The viewing cube is a mathematical construct
centered on the dot in the center of the display. The {\bf ang=}
display shows the current rate of rotation in each of the three axes.
The bottom line is used for several kinds of information.
In the {\bf VIEWING} state, it displays the title of the database.
The MGED viewing features are designed to allow the user to examine
models at different angles.
Preset views can be invoked at
any time by using either the menu or the button box.
Selecting a preset view does
not change the coordinates of the primitives,
but instead changes the angle from which these primitives are
displayed. Five standard views (top, right, front, 35/25, and 45/45) can
be obtained by using either the bottom menu on the display screen or the
control box.
Three additional views (bottom, left, and rear) can be obtained
by using the button box, but not by using the menu.
The normal or default viewing state is the ``top'' orientation,
with model +X pointing towards the right of the screen,
model +Y pointing towards the top of the screen,
and model +Z pointing out of the screen.
In the ``top'' view, the model and screen axes are the same.
The ``reset'' button and ``Reset Viewsize'' menu items also
result in a ``top'' view.
The following table shows the angles of rotation to obtain the other views.
\begin{tabular}{l l}
View & Angle of Rotation (from top) \\
\\
Top & 0, 0, 0 \\
Bottom & 180, 0, 0 \\
Right & 270, 0, 0 \\
Left & 270, 0, 180 \\
Front & 270, 0, 270 \\
Rear & 270, 0, 90 \\
35, 25 & 295, 0, 235 \\
\end{tabular}
\noindent
{\tt mged>\ }{\em e arb8}\\
{\tt vectorized in 0 sec}\\
{\tt mged>\ }{\em size 12}\\
{\tt mged> }\\
\mfig t1-top-vw, ``arb8'' Top View.
The {\bf e} command causes the named object(s) -- a solid named ``arb8''
in this case
-- to be displayed, and the {\bf size} command sets the size of the
viewing cube. Figure \ref{t1-top-vw} shows what the display currently
looks like. In this view, the X-axis is to the right, the Y-axis points
up, and the Z-axis is perpendicular to (poking out of) the screen.
\noindent
{\em Twist the {\bf Y ROT} knob clockwise and back.}\\
{\em Twist the {\bf X ROT} knob counterclockwise and back.}\\
These knobs, along with the {\bf Z ROT} knob, rotate the viewing cube.
Use of the rotation
knobs allows the user to view the model from any orientation.
Turning a knob clockwise causes a rotation in the positive direction,
while turning a knob counterclockwise causes a negative rotation
(right-hand rule). The knobs are rate based, not position based;
once a rotation has been started, it will continue until the
knob is returned to zero (or the {\bf zeroknobs} button is pressed).
Rotations are about the viewing cube (screen) axes, not the model axes.
Systems without knobs can use the {\bf knob} command.
\noindent
{\em Move the mouse (or pen) until the cursor is in the {\bf BUTTON MENU}
block and then press the middle mouse button (depress the pen).}\\
\mfig t1-rot-vw, ``arb8'' Rotated View.
Pressing the middle mouse button (or the pen) {\sl selects} something.
When the cursor is inside the menu area, a selection
causes the event described by the menu item to occur.
Selecting {\bf BUTTON MENU} causes the button menu to appear on the left
side of the screen. The {\bf BUTTON MENU} menu item is
a toggle; subsequent selection of this item will cause the button menu
to disappear.
Figure \ref{t1-rot-vw} shows the new display.
\noindent
{\em Move the cursor from the menu area to a point near the
upper left corner of the solid and select it (press the center mouse
button).}\\
In the {\bf VIEWING} state, making a selection while outside of the menu
area will move the selected point to the center of the display. Look
carefully at the center of the display; the point just selected is now
located at the center dot. Use the {\bf center} command to reset any
translations made with the mouse.
\noindent
{\tt mged> }{\em center 0 0 0}\\
{\tt mged> }\\
From the {\bf VIEWING} state, the user will normally transition to either the
{\bf SOL PICK} or {\bf OBJ PICK} state.
The {\bf SOL PICK} state is selected by:
\begin{itemize}
\item Selecting the {\bf Solid Illum} button menu entry, or,
\item Pressing the {\bf sill} button (this button may be labeled
using some variation of ``Solid Illum''), or,
\item Typing {\bf press sill}.
\end{itemize}
Similar entries ({\bf Object Illum}) and ({\bf oill}) exist for transitioning
into the {\bf OBJ PICK} state.
In general, the {\bf press} command is the basic mechanism (type
{\bf press help} for a list of available commands). Most of the press
commands have been mapped onto a button box if it is available,
and some of the
most common are also mapped into the {\bf BUTTON MENU} so they can
be accessed without letting go of the mouse.
\section{Solid Pick State}
\noindent
{\em Place MGED in the {\bf SOL PICK} state using one of the
above mechanisms.}\\
\mfig t1-sol-pk, MGED In Solid Pick State.
Upon entering the {\bf SOL PICK} state, the display will look similar to
Figure \ref{t1-sol-pk}. The {\bf SOL PICK} state is used to select which
of the displayed solids is to be edited. Note that the color of the
solid has changed from red to white. The screen is divided into as many
horizontal zones as there are solids displayed, and each zone is
assigned to one solid. As the mouse is moved vertically through each
zone, the corresponding solid is highlighted (``illuminated'') by
drawing it in white. In this instance, there is only one solid being
displayed, so this state is relatively uninteresting.
If the system being used has no mouse, there is no reason to enter the
{\bf SOL PICK} state. The user will instead transition directly to
the {\bf SOL EDIT} state using the {\bf sed} command.
\noindent
{\tt mged> }{\em press reject}\\
{\tt mged> }{\em e ellg}\\
{\tt mged> }\\
{\em Press the {\bf sill} button}\\
\mfig t1-2s-pk, MGED In Solid Pick with Two Solids.
Note that the first action taken was to {\sl reject} the edit. Any time MGED
is not in the {\bf VIEWING} state, a {\sl reject} command (via
{\bf press}, button, or mouse) discards all editing changes accumulated
since the last transition out of the {\bf VIEWING} state, and places
MGED in the {\bf VIEWING} state.
The display should now look similar to Figure \ref{t1-2s-pk}.
Notice that one solid is white and
the name of that solid is displayed in the upper left corner of the
display, as well as in the bottom status line. The solid to be edited is
selected by moving the mouse up and down until the zone corresponding to
the desired solid is reached, and then making a selection.
Once a solid has been selected,
MGED enters the {\bf SOL EDIT} state.
\section{Solid Edit State}
\noindent
{\tt mged> }{\em d ellg}\\
{\tt mged> }\\
{\em Select the solid called ``arb8''.}\\
\mfig t1-sol-ed, Solid Edit State.
The {\bf d} command removes something from the display. In this
case, the solid ``ellg'' was removed to reduce clutter.
The display should now look like Figure \ref{t1-sol-ed}.
When MGED enters the solid edit state, the following occurs:
\begin{itemize}
\item The solid selected for editing remains illuminated,
\item The solid is labeled,
\item The coordinates (or dimensions) associated with the labels,
and other information, are displayed to the right of the menu area,
\item If the solid is a member of one or more groups, a similar set
of coordinates called the {\sl PATH} is displayed immediately below
the first set of coordinates,
\item The {\bf *SOLID EDIT*} menu is displayed, and,
\item A solid specific edit menu (in this case the {\bf ARB MENU})
is displayed.
\end{itemize}
The {\bf *SOLID EDIT*} menu provides access to generic operations (translation, rotation
and scaling) common to all solids.
The solid specific edit menu is a list of editing operations specific to that solid type.
Selecting one of these menu items causes a submenu with solid type specific
choices to be displayed. To remove this submenu, select either the
{\bf RETURN} item in the submenu, or the {\bf edit menu} item in the
{\bf *SOLID EDIT*} menu.
It is in this state that the solid is altered to meet the modeler's
requirements. The shape, positioning, and orientation of the solid is
changed using numeric keyboard input, positioning of the mouse, or by
use of the knobs. Once the solid has been altered, the edit is
either accepted or rejected. Accepting the edit causes all changes
made to be written to the database; rejecting the edit ``throws them
away''. Either operation will terminate the edit session and return MGED
to the {\bf VIEWING} state.
\noindent
{\em Reject the edit.}\\
\section{Object Pick State}
\noindent
{\em Place MGED in the {\bf OBJ PICK} state.}\\
\mfig t1-obj-pk, Object Pick State.
Figure \ref{t1-obj-pk} shows what the display looks like when in the
{\bf OBJ PICK} state. As with the {\bf SOL PICK} state, a single solid is
selected. This solid becomes the reference solid for the object edit.
In the {\bf OBJ PICK} state, the solid will be shown
as a member of one or more objects. Less obvious is the fact that the
local axes associated with the selected solid are the axes used for the
entire object during the object edit.
\section{Object Path State}
\noindent
{\em Select ``arb8''.}\\
\mfig t1-obj-ph, Object Path Selection State.
MGED transitions into the {\bf OBJ PATH} state once a solid has been
picked from {\bf OBJ PICK}. Figure \ref{t1-obj-ph} is the display in
the {\bf OBJ PATH} state. When in this state the extent of the editing
operation is set. Everything below the editing point is affected by the
edit. The editing point is shown by the {\sl MATRIX} label in the
display. It is shown as {\bf [MATRIX]} in the upper left part of the
display and as {\bf \_\_MATRIX\_\_} in the second status line. The editing
point is chosen with the same mechanism used by {\bf SOL PICK} and
{\bf OBJ PICK}. This time, there is one horizontal zone for each node in
the path between the root and selected leaf. Moving the mouse up and down
moves the editing point up and down in the tree. Once again, having a
simple database and only one object in view makes for a relatively
uninteresting situation.
\section{Object Edit State}
\noindent
{\em Select the editing point above ``arb8''.}\\
\mfig t1-obj-ed, Object Edit State.
MGED is now in the {\bf OBJ EDIT} state and the display should look like
Figure \ref{t1-obj-ed}.
When MGED enters the object edit state, the following occurs:
\begin{itemize}
\item The reference solid remains illuminated,
\item The reference solid is labeled,
\item The information associated with the labels is displayed to the right
of the menu area, and
\item The {\bf *OBJ EDIT*} menu is displayed.
\end{itemize}
The {\bf OBJ EDIT} state is used to modify the
Homogeneous Transform Matrix selected during the {\bf OBJ PATH} state.
Permissible operations include uniform and affine scaling of the objects,
as well as translation and rotation.
As with the {\bf SOL EDIT} state, MGED accepts changes entered using
the keyboard, mouse or knobs.
This concludes the first tutorial. Examples of the appearance of MGED
in each of the six states have been given, along with some idea of what
each of the states is used for. All that remains is to reject the current
edit, and exit MGED. Strictly speaking, the {\bf q} command could be entered
directly, but doing so can become a dangerous habit.
\noindent
{\em Select {\bf REJECT Edit} using the mouse.}\\
{\em Press the {\bf reject} button.}\\
{\tt mged> }{\em d arb8}\\
{\tt mged> }{\em q}\\
{\tt \$ }\\
\section{Editing in the Plane of the Screen}
\mfig plane-top1, A Top View of the Coordinate Axes.
When MGED is in a ``translate'' mode within an edit state,
the plane of the mouse or data tablet is mapped to
the plane of the screen, to permit moving objects in a
controlled way in two of the three available dimensions.
The orientation of the plane of the screen is determined by the
currently selected view.
In most circumstances, users will find that repositioning objects
is easiest when the plane of the screen is oriented in an
axis-aligned view. This is most easily accomplished by utilizing
one of the preset views.
For this exercise, obtain a copy of the {\em axis.g} database,
and run MGED, e.g.:
\noindent{\tt
\$ cp cad/db/axis.g . \\
\$ mged axis.g \\
BRL-CAD Release 3.0 Graphics Editor (MGED) Compilation 82 \\
Thu Sep 22 08:08:39 EDT 1988 \\
[email protected]:/cad/.mged.4d2 \\
\\
attach (nu|tek|tek4109|ps|plot|sgi)[nu]? {\em sgi} \\
ATTACHING sgi (SGI 4d) \\
X,Y,Z Coordinate Axis (units=none) \\
mged> {\em e axis} \\
vectorized in 0 sec \\
{\em Select ``Top'' in the Button menu} \\
mged> \\
}
\subsection{Top View}
\mfig plane-top2, Translating from the Top View.
The top view is the default view. The orientation of the axes
is shown in Figure \ref{plane-top1}.
The surface of the viewing screen and the graphics tablet is the XY plane.
Edit changes using the graphics tablet will affect only the X and Y
coordinates of the primitive.
\noindent{\tt
mged> {\em sed x} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged>
}
Select different points on the tablet with the mouse, each time
pressing the middle mouse button.
Notice how the X and Y coordinates of the V vector change,
but the Z coordinate does not.
An example of this is shown in Figure \ref{plane-top2};
compare the values of V with those in Figure \ref{plane-top1}.
{\em Select ``REJECT Edit'' in the Button menu}
\subsection{Bottom View}
\mfig plane-bot1, A Bottom View of the Coordinate Axes.
\mfig plane-bot2, Translating from the Bottom View.
\noindent{\tt
mged> {\em press bottom} \\
mged> {\em sed x} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged>
}
The {\em press bottom} command selects the bottom view of the
model, and the new configuration of the axes can be seen in
Figure \ref{plane-bot1}.
The surface of the viewing screen and the mouse or tablet
are still in the XY plane.
Edit changes using the graphics tablet will affect only the X and Y
components of the solid.
Select different points on the tablet with the mouse and notice the
changes in the coordinates;
compare the values of V with those in Figure \ref{plane-bot2}.
{\em Select ``REJECT Edit'' in the Button menu}
\subsection{Right View}
\mfig plane-right1, A Right View of the Coordinate Axes.
\mfig plane-right2, Translating from the Right View.
\noindent{\tt
{\em Select ``Right'' in the Button menu} \\
mged> {\em sed x} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged>
}
The right hand view has been selected. Model +X still proceeds to the right,
but now model +Z is at the top of the screen, and model +Y is
pointing out of the screen.
This new configuration is depicted in Figure \ref{plane-right1}.
The surface of the viewing screen and the graphics tablet is the XZ plane.
Edit changes using the graphics tablet will affect only the X and Z
coordinates of the solid.
Select different points on the tablet with the mouse and notice the
changes in the V coordinates; only the X and Z components change,
as in Figure \ref{plane-right2}.
{\em Select ``REJECT Edit'' in the Button menu}
\subsection{Front View}
\mfig plane-front1, A Front View of the Coordinate Axes.
\mfig plane-front2, Translating from the Front View.
\noindent{\tt
{\em Select ``Front'' in the Button menu} \\
mged> {\em sed x} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged>
}
The front view has been selected. Model +X points out of the screen,
model +Y points to the right, and model +Z points towards the top
of the screen, as shown in Figure \ref{plane-front1},
which has been slightly rotated off the preset view to improve
the legibility of the axis labels.
The surface of the viewing screen and the graphics tablet is the YZ
plane. Edit changes will affect only the Y and Z
coordinates of the primitive, as shown in Figure \ref{plane-front2}.
Select different points on the tablet with the mouse and notice the
changes in the coordinates.
{\em Select ``REJECT Edit'' in the Button menu}
\subsection{35, 25 View}
\mfig plane-35a, An Oblique 35,25 View of the Coordinate Axes.
\mfig plane-35b, Translating in the 35,25 View.
\noindent{\tt
{\em Select ``35,25'' in the Button menu} \\
mged> {\em sed x} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged>
}
Figure \ref{plane-35a} is the 35,25 view of the axes model.
The axes are no longer
parallel or perpendicular to the viewing surface or to the graphics tablet.
Edit changes using the graphics tablet will affect all of the coordinates of
the solid, in a manner that is visually intuitive when the solid
is moved around on the screen.
Select different points on the tablet with the mouse and notice the
changes in the coordinates, such as in Figure \ref{plane-35b}.
Note how all three components of the V vector have changed.
{\em Select ``REJECT Edit'' in the Button menu}
\chapter{TUTORIALS ON EDITING SOLIDS}
The Solid Editing state of MGED is used to modify the fundamental
parameters of an individual solid.
Each solid must be modified individually.
\section{Solid Edit: A Six-Sided Polyhedron}
\mfig es8-top, Top View of ARB8.
This section illustrates the use of commands while in
SOL EDIT state to alter the
shape of a polyhedron with six faces and eight vertices (ARB8).
\noindent{\tt
\$ {\em mged es.g} \\
BRL-CAD Release 3.0 Graphics Editor (MGED) Compilation 82 \\
Thu Sep 22 08:08:39 EDT 1988 \\
[email protected]:/cad/.mged.4d2 \\
\\
es.g: No such file or directory \\
Create new database (y|n)[n]? {\em y} \\
attach (nu|tek|tek4109|ps|plot|sgi)[nu]? {\em sgi} \\
ATTACHING sgi (SGI 4d) \\
Untitled MGED Database (units=mm) \\
mged> {\em in arb8 rpp -1 1 -1 1 -1 1} \\
mged> {\em size 10} \\
mged>
}
Figure \ref{es8-top}
is a top view of the six-sided polyhedron.
The Z-axis is perpendicular to the viewing screen.
Next, the view is rotated so that all sides can be seen.
\noindent{\tt
mged> {\em Twist ROTY knob clockwise and restore} \\
mged> {\em Twist ROTX knob counter-clockwise and restore} \\
mged>
}
\mfig es8-rot, A Rotated View of the ARB8.
Figure \ref{es8-rot} shows a better perspective of the solid.
The next step in this tutorial is to transfer to the solid edit state.
This can be accomplished in two ways: either by going through
the SOL PICK state (``illuminate mode'') or by direct transfer via
keyboard command.
Using illuminate mode is better when the name of the solid to be
edited may not be known, while the keyboard command is generally
preferred when the name of the solid is known.
\noindent{\tt
mged> {\em Select the ``Solid Illum'' entry in the button menu} \\
mged> {\em Move the mouse out of the menu area} \\
mged> {\em Click the mouse to enter SOL EDIT state} \\
mged>
}
To perform a direct transfer from the viewing state to the solid edit state
using a keyboard command, enter:
\noindent{\tt
mged> {\em sed arb8} \\
mged>
}
\mfig es8-sed, An ARB8 in Solid Edit State.
Figure \ref{es8-sed} corresponds to the view on the display.
The ARB8 MENU is unique to the ARB primitive,
and lists operations that can only be performed on an ARB solid.
The items in the ARB8 MENU are
selected by using the mouse.
Each of the other types of solids has a
similar unique menu.
When one of these items is selected, the top level ARB8 MENU disappears,
to be replaced with the indicated subordinate menu.
The top-level menu reappears when either
the ``edit menu'' item in the SOLID EDIT menu is selected,
or the ``RETURN'' item in the subordinate menu is selected.
The SOLID EDIT menu applies to all
solids when in the SOL EDIT state.
The items in the SOLID EDIT menu are selected
by either using the mouse or by depressing the appropriate button on the
button box.
When any of the SOLID EDIT menu items are selected
(e.g., ``Rotate'', ``Translate'', ``Scale''), the solid-specific menu
disappears.
The top-level solid-specific menu reappears when
the ``edit menu'' item in the SOLID EDIT menu is selected.
The {\em p [params]} command is used to
make precise changes, where the numeric value of the parameter being
edited is known.
Values for all parameters in the ARB8
and SOLID EDIT menus can be specified by using the {\em p} command,
or by pointing and clicking with the mouse.
\mfig es8-tr0, Translating ARB8 Point 1 to the Origin.
\subsection{Translate Operation}
\noindent{\tt
mged> {\em Select the ``Translate'' entry in the solid edit menu} \\
mged> {\em p 0 0 0} \\
mged>
}
Point 1 of the primitive is moved to point 0 0 0,
as shown in Figure \ref{es8-tr0}.
The translate solid operation is selected
by either picking ``Translate'' on the solid edit menu
with the mouse, by depressing the Solid Translate button on the button box,
or by entering the {\em press sxy} command.
Parameters to the translate solid operation
are of the form {\em p a b c}
where {\em a}, {\em b}, and {\em c} are the new coordinates
of point 1 in the solid.
The other points are translated so that they keep the same position
relative to point 1.
The general form of the new coordinates for point i is
\begin{center}
\begin{verbatim}
x'[i] = x[i] + a - x[1]
y'[i] = y[i] + b - y[1]
z'[i] = z[i] + c - z[1]
\end{verbatim}
\end{center}
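As a small worked check of this formula (assuming point 1 of this ARB8 starts at
(1, -1, -1), as the restore command below implies), the {\em p 0 0 0} command given
above sets (a, b, c) = (0, 0, 0), so every vertex is simply shifted by (-1, 1, 1):
\begin{center}
\begin{verbatim}
x'[i] = x[i] + 0 - 1    = x[i] - 1
y'[i] = y[i] + 0 - (-1) = y[i] + 1
z'[i] = z[i] + 0 - (-1) = z[i] + 1
\end{verbatim}
\end{center}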
The command
\noindent{\tt
mged> {\em p 1 -1 -1} \\
mged>
}
can be used to restore the primitive to the original position.
\subsection{Rotate Operation}
\mfig es8-xrot, Solid Edit Rotation of 45 Degrees about X.
The rotate operation is initiated by either selecting Rotate on the menu
screen with the mouse,
by depressing the Solid Rotate button on the button box,
or by entering the {\em press srot} command on the keyboard.
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 45 0 0} \\
mged>
}
The parameter {\em p} command is used to make precise rotation changes.
The
command is entered in the form {\em p a b c} where
{\em a}, {\em b}, and {\em c} are the angles
(in degrees) of rotation about the x, y, and z axes respectively. Point 1,
the vertex, remains fixed, and the solid is rotated about this point. A
positive angle of rotation is counter-clockwise when viewed in the positive
direction along an axis.
The order of rotation is not commutative.
Rotation takes place about the
Z axis, Y axis, and X axis in that order.
Figure \ref{es8-xrot} shows the rotation of 45 degrees about the X axis.
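For reference, a sketch of the effect of this particular rotation (assuming
coordinates are measured relative to point 1, which stays fixed): a point
(x, y, z) is carried to (x', y', z') by the standard 45 degree rotation matrix
about X, the same matrix that reappears in the torus tutorial later.
\begin{verbatim}
[x']   [1    0      0   ] [x]
[y'] = [0  .7071 -.7071 ] [y]
[z']   [0  .7071  .7071 ] [z]
\end{verbatim}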
%The following is the formula for moving point (x,y,z) through angles
%(a, b, c) to point (x', y', z')
%\begin{verbatim}
%[X'] [cos b cos c -cos b sin c sin b ][X]
%[Y']=[sin a sin b cos c +cos a sin b cos a cos c -sin a sin b sin c -sin a cos b ][Y]
%[Z'] [sin a sin c -cos a sin b cos c cos a sin b sin c + sin a cos c cos a cos b ][Z]
%\end{verbatim}
The values entered after the p are absolute - the rotations are applied to
the primitive as it existed when solid rotation was first selected. Thus
entering {\em p 0 0 0} will undo any rotations
performed since solid rotation was begun.
\mfig es8-yrot, Solid Edit Rotation of 45 Degrees about Y.
\mfig es8-zrot, Solid Edit Rotation of 45 Degrees about Z.
\noindent{\tt
mged> {\em p 0 45 0} \\
mged>
}
Figure \ref{es8-yrot} displays the solid
after it has been rotated about the Y axis.
\noindent{\tt
mged> {\em p 0 0 45} \\
mged>
}
Figure \ref{es8-zrot} displays the solid
after it has been rotated about the Z axis.
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
This restores the original orientation of the solid.
\subsection{Scale Operation}
\mfig es8-scale, ARB8 Scale Increased by 2X.
\noindent{\tt
mged> {\em Select the ``Scale'' entry in the solid edit menu} \\
mged> {\em p 2} \\
mged>
}
Figure \ref{es8-scale} corresponds to the view that is shown on the display.
The scale operation may be initiated by either selecting
the Scale entry on the menu with the mouse,
by depressing the Solid Scale button,
or by entering {\em press sscale} on the keyboard.
The parameter command {\em p n} is
used to enter a precise scale factor, where {\em n} is
the scale factor.
The coordinates of point 1 remain the same. The distances
from point 1 to the other points are multiplied by the scale value {\em n}.
The general equations for the transformation from point p to p' are
\begin{verbatim}
x'[i] = x[1] + n (x[i] - x[1] )
y'[i] = y[1] + n (y[i] - y[1] )    i != 1
z'[i] = z[1] + n (z[i] - z[1] )
\end{verbatim}
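As a small worked check (assuming point 1 of this ARB8 is at (1, -1, -1) and
taking the diagonally opposite vertex at (-1, 1, 1)), the {\em p 2} command above
moves that vertex to twice its original distance from point 1:
\begin{verbatim}
x' =  1 + 2 (-1 - 1)    = -3
y' = -1 + 2 ( 1 - (-1)) =  3
z' = -1 + 2 ( 1 - (-1)) =  3
\end{verbatim}
That is, the vertex moves from (-1, 1, 1) to (-3, 3, 3).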
The size of the primitive may be changed by depressing the mouse at
different positions. When the mouse is clicked, the
edited primitive is scaled about point 1 (the key point)
by an amount proportional
to the distance the mouse is from the center of the screen. If the mouse
is above the center of the screen, the edited primitive will become larger.
If the
mouse is below the center, the primitive will become smaller.
The value of {\em n} entered is applied to the primitive as it existed when the
solid scale state was entered.
Entering {\em p 1} will return the primitive
to the size it had when the solid scale operation first started.
\noindent{\tt
mged> {\em p 1} \\
mged>
}
\mfig es8-edge1, ARB8 Edge 15 Moved Through (9, -2, -2).
\mfig es8-edge2, ARB8 Edge 12 Moved Through (2, 5, -2).
\mfig es8-edge3, ARB8 Edge 14 Moved Through (2, -2, 7).
\subsection{Moving Edges}
The move edge command permits moving an edge so that it
passes through the selected point.
\noindent{\tt
mged> {\em Select the ``edit menu'' entry in the solid edit menu} \\
mged> {\em Select the ``move edges'' entry in the ARB menu} \\
mged> {\em Select the ``move edge 15'' entry in the ARB8 edges menu} \\
mged> {\em p 9 -2 -2} \\
mged>
}
The edge 15 is moved so that it passes through the point (9, -2, -2). The
coordinates of the new points 1 and 5 are the intersection of the new edge
with the planes 234 and 678. Since both the old edge and new edge 15 are
parallel to the X axis,
the X coordinate of the point given by the {\em p} command
has no meaning. The X coordinates for points 1 and 5 are not changed.
See Figure \ref{es8-edge1}.
\noindent{\tt
mged> {\em p 9 -1 -1} \\
mged>
}
This restores the original shape.
The choice of ``9'' for the X coordinate was arbitrary.
\noindent{\tt
mged> {\em Select the ``move edge 12'' entry in the ARB8 edges menu} \\
mged> {\em p 2 5 -2} \\
mged>
}
The edge 12 is parallel to the Y axis. This command moves the points 1
and 2 so that their X and Z coordinates are 2 and -2.
See Figure \ref{es8-edge2}.
The Y coordinates are not changed.
To restore the view, enter:
\noindent{\tt
mged> {\em p 1 5 -1} \\
mged>
}
The choice of ``5'' for the Y coordinate was arbitrary.
\noindent{\tt
mged> {\em Select the ``move edge 14'' entry in the ARB8 edges menu} \\
mged> {\em p 2 -2 7} \\
mged>
}
The edge 14 is parallel to the Z axis.
This command moves the points 1
and 4 so that their X and Y coordinates are 2 and -2.
See Figure \ref{es8-edge3}.
The Z coordinates are not changed.
\noindent{\tt
mged> {\em p 1 -1 7} \\
mged>
}
This restores the original shape.
The choice of ``7'' for the Z coordinate was arbitrary.
\subsection{Extrude Command}
\mfig es8-ex1, ARB8 Rear Face Extruded 5 Units in -Z.
\mfig es8-ex2, ARB8 Rear Face Extruded 3 Units in +Z.
The extrude command is used to move the face opposite the specified face
so that it lies a given distance from the specified face.
This command can only be used when an ARB solid is in
the solid edit state.
\noindent{\tt
mged> {\em extrude 1265 5} \\
mged>
}
In Figure \ref{es8-ex1},
the plane opposite the surface whose points are 1, 2, 6, and 5
is moved to a distance of 5 in the positive Z direction from plane 1265. Note
that the points were selected counter-clockwise when viewed in the positive
direction along the Z axis.
\noindent{\tt
mged> {\em extrude 1562 3} \\
mged>
}
In Figure \ref{es8-ex2},
the plane opposite the surface 1562 is moved to a distance of 3
in the negative Z direction from 1562. Note that the points were selected
clockwise when viewed in the positive direction along the Z axis.
\noindent{\tt
mged> {\em extrude 1265 2} \\
mged>
}
This restores the original shape of this solid.
To return control to the VIEWING state, select the ``REJECT Edit''
item on the button menu, press the ``reject'' button on the button box,
or enter the command {\em press reject} on the keyboard.
Then, enter
\noindent{\tt
mged> {\em d arb8} \\
mged>
}
to drop the ARB8 from view.
\section{Solid Edit: A Five-Sided Polyhedron}
\mfig es5-top, Top View of an ARB5.
\mfig es5-rot, A Rotated View of the ARB5.
\mfig es5-sed, The ARB5 in Solid Edit State.
This tutorial illustrates the application of the SOL EDIT state to the ARB5
solid.
In this tutorial, the view is modified by using the rotation
knobs so that all sides can be seen.
\noindent{\tt
mged> {\em size 6} \\
mged> {\em in arb5 arb5} \\
Enter X, Y, Z for point 1: {\em 0 0 0} \\
Enter X, Y, Z for point 2: {\em 0 0 1} \\
Enter X, Y, Z for point 3: {\em 0 1 1} \\
Enter X, Y, Z for point 4: {\em 0 1 0} \\
Enter X, Y, Z for point 5: {\em -1 .5 .5} \\
mged>
}
Figure \ref{es5-top} is the display of arb5
in the VIEWING state that is seen when
the solid is first created.
In this view, the Z axis is perpendicular to the viewing screen.
\noindent{\tt
mged> {\em Twist ROTY knob clockwise and restore} \\
mged> {\em Twist ROTX knob counter-clockwise and restore} \\
mged>
}
These actions generate a view shown in Figure \ref{es5-rot}
that shows all sides.
\noindent{\tt
mged> {\em Select the ``Solid Illum'' entry in the button menu} \\
mged> {\em Move the mouse out of the menu area} \\
mged> {\em Click the mouse to enter SOL EDIT state} \\
mged>
}
These actions will place MGED in the SOL EDIT state
as shown in Figure \ref{es5-sed}.
\subsection{Translate Operation}
\mfig es5-tr, Translating an ARB5.
\noindent{\tt
mged> {\em Select the ``Translate'' entry in the solid edit menu} \\
mged> {\em p -1 -1 1} \\
mged>
}
This command causes point 1 to be moved to coordinates (-1, -1, 1), and the
other points are moved so that they keep the same position relative to point
1. See Figure \ref{es5-tr}.
Enter this command to restore the solid to its original location:
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
\subsection{Rotate Operation}
\mfig es5-xrot, ARB5 Solid Edit Rotation about X.
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 45 0 0} \\
mged>
}
Figure \ref{es5-xrot} shows a rotation of 45 degrees
about an axis parallel to the X axis.
The rotate command is entered in the form {\em p a b c}
where {\em a}, {\em b}, and {\em c} are the angles
(in degrees) of rotation about axes parallel to the x, y, and z axes that intersect at point 1.
All rotation takes place about point 1.
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
This restores the original orientation of the solid.
\subsection{Scale Operation}
\mfig es5-scale, ARB5 Scale Increased by 2X.
\noindent{\tt
mged> {\em Select the ``Scale'' entry in the solid edit menu} \\
mged> {\em p 2} \\
mged>
}
Figure \ref{es5-scale} shows the change in the primitive.
Point 1 remains the same
and the distances of the other points from point 1 are multiplied by 2.
Entering {\em p 1} will return the primitive
to the size it had when the solid scale operation first started.
\noindent{\tt
mged> {\em p 1} \\
mged>
}
\subsection{Move Edge Command}
\mfig es5-edge1, ARB5 Edge 14 Moved Through (1, 1, 1).
\mfig es5-edge2, ARB5 Point 5 Moved to (-1.5, 1, 1).
\mfig es5-edge3, ARB5 Edge 45 Moved Through (-1.5, 1, 1).
\mfig es5-edge4, ARB5 Edge 12 Moved Through (2, 1, 2).
\noindent{\tt
mged> {\em Select the ``edit menu'' entry in the solid edit menu} \\
mged> {\em Select the ``move edges'' entry in the ARB menu} \\
mged> {\em Select the ``move edge 14'' entry in the ARB8 edges menu} \\
mged> {\em p 1 1 1} \\
mged>
}
The edge 14 is moved so that it passes through the point (1, 1, 1).
Note that this point is the mid-point between points 1 and 4.
See Figure \ref{es5-edge1}.
\noindent{\tt
mged> {\em p 0 2 0} \\
mged>
}
This restores the original shape.
\noindent{\tt
mged> {\em Select the ``move point 5'' entry in the ARB5 edges menu} \\
mged> {\em p -1.5 1 1} \\
mged>
}
The point 5 is moved to location -1.5, 1, 1. See Figure \ref{es5-edge2}.
\noindent{\tt
mged> {\em p -1 .5 .5} \\
mged>
}
This restores the original shape.
\noindent{\tt
mged> {\em Select the ``move edge 45'' entry in the ARB5 edges menu} \\
mged> {\em p -1.5 1 1} \\
mged>
}
In Figure \ref{es5-edge3}, the edge 45 is moved
so that it passes through the point (-1.5, 1, 1).
Note that this point lies between the points 4 and 5.
\noindent{\tt
mged> {\em p -1 .5 .5} \\
mged>
}
This restores the original shape.
\noindent{\tt
mged> {\em Select the ``move edge 12'' entry in the ARB5 edges menu} \\
mged> {\em p 2 1 2} \\
mged>
}
In Figure \ref{es5-edge4},
the edge 12 is moved so that it passes through the point (2, 1, 2).
Note that the coordinates correspond to point 2.
The movement of the edges may yield unpredictable results when the edges
are not parallel to one of the axes.
To return control to the VIEWING state, select the ``REJECT Edit''
item on the button menu, press the ``reject'' button on the button box,
or enter the command {\em press reject} on the keyboard.
Then, enter
\noindent{\tt
mged> {\em d arb5} \\
mged>
}
to drop the ARB5 from view.
\section{Solid Edit: Alter a Cylinder}
\mfig esc-top, Top View of a Cylinder.
\mfig esc-rot, A Rotated View of the Cylinder.
\mfig esc-sed, A Cylinder in Solid Edit State.
This tutorial illustrates the application of the SOL EDIT state to
cylinder solids.
\noindent{\tt
mged> {\em size 12} \\
mged> {\em in cyl rcc} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of height (H) vector: {\em 2 0 0} \\
Enter radius: {\em 1} \\
mged>
}
Figure \ref{esc-top} is the display of the cylinder solid
when viewed from the top.
Since the Z axis is perpendicular to the viewing screen, not all sides of
the solid can be seen.
\noindent{\tt
mged> {\em Twist ROTY knob clockwise and restore} \\
mged> {\em Twist ROTX knob counter-clockwise and restore} \\
mged>
}
These actions generate a view, Figure \ref{esc-rot}, that shows all sides.
\noindent{\tt
mged> {\em Select the ``Solid Illum'' entry in the button menu} \\
mged> {\em Move the mouse out of the menu area} \\
mged> {\em Click the mouse to enter SOL EDIT state} \\
mged>
}
Figure \ref{esc-sed} is the view that displays the menu
for the SOL EDIT state.
The
point V is at the origin (0,0,0) in this example and is at the center of the
circle that contains points A and B. H is the center of the
circle that contains points C and D. The coordinates of H are the coordinates
of the vector from V to H and represent the position of H relative to V. Mag
is the magnitude of this vector and is given by the formula
\begin{center}
\begin{verbatim}
Mag = sqrt( x^2 + y^2 + z^2 )
\end{verbatim}
\end{center}
``H dir cos'' are the direction cosines of the vector H, which
is perpendicular to the plane of the points A, B, and V. The coordinates of A
are the coordinates of the vector from V through A, and Mag is the magnitude of
the vector from V to A. The coordinates of B are the coordinates of the
vector from V through B, and Mag is the magnitude of the vector from V to B.
The values for c and d are the magnitudes of the vectors from the tip of
vector H to the points C
and D respectively.
``A x B dir cos'' represents the direction cosines of the
vector ``A x B''.
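A small worked example using the cylinder just created (vertex at the origin,
H vector (2, 0, 0), radius 1):
\begin{verbatim}
Mag H     = sqrt( 2^2 + 0^2 + 0^2 ) = 2
H dir cos = ( 2/2, 0/2, 0/2 )       = ( 1, 0, 0 )
Mag A     = Mag B = 1   (the radius entered above)
\end{verbatim}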
\subsection{Translate Operation}
\mfig esc-tr, Translating Cylinder Vertex to (1, 1, 1).
\noindent{\tt
mged> {\em Select the ``Translate'' entry in the solid edit menu} \\
mged> {\em p 1 1 1} \\
mged>
}
The location of the vertex point V is moved to (1, 1, 1).
The locations of the other
points relative to V remain the same. See Figure \ref{esc-tr}.
Move the mouse anywhere on the screen (outside the menu area), and click.
Notice
that the cylinder is moved so that V is placed at this location, and the
coordinates of the other points remain the same relative to V.
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
This restores the solid to the original location.
\subsection{Rotate Operation}
\mfig esc-xrot, Solid Edit Rotation of 45 Degrees about X.
\mfig esc-yrot, Solid Edit Rotation of 45 Degrees about Y.
\mfig esc-zrot, Solid Edit Rotation of 45 Degrees about Z.
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 45 0 0} \\
mged>
}
When viewing in the positive X direction, the cylinder is rotated
counter-clockwise 45 degrees about an axis through point V parallel to the X axis.
See Figure \ref{esc-xrot}.
\noindent{\tt
mged> {\em p 0 45 0} \\
mged>
}
When viewing in the positive Y direction, the cylinder is rotated
counter-clockwise 45 degrees about an axis through point V parallel to the Y axis.
See Figure \ref{esc-yrot}.
\noindent{\tt
mged> {\em p 0 0 45} \\
mged>
}
When viewing in the positive Z direction, the cylinder is rotated
counter-clockwise 45 degrees about an axis through point V parallel to the Z axis.
See Figure \ref{esc-zrot}.
The command
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
will restore the cylinder to the original orientation.
\subsection{Scale Operation}
\mfig esc-scale, Cylinder Scale Increased by 1.5X.
\noindent{\tt
mged> {\em Select the ``Scale'' entry in the solid edit menu} \\
mged> {\em p 1.5} \\
mged>
}
The point V remains fixed; the distance H between the two end-plate ellipses
is multiplied by 1.5.
See Figure \ref{esc-scale}.
The command
\noindent{\tt
mged> {\em p 1} \\
mged>
}
restores the original scale.
\subsection{Scale H Command}
\mfig esc-sh, Cylinder Scale H Vector.
\noindent{\tt
mged> {\em Select the ``edit menu'' entry in the solid edit menu} \\
mged> {\em Select the ``scale H'' entry in the TGC menu} \\
mged> {\em p 1} \\
mged>
}
The magnitude of the vector H is reduced from 2 to 1.
See Figure \ref{esc-sh}. The command
\noindent{\tt
mged> {\em p 2} \\
mged>
}
will restore the original shape.
\subsection{Scale A Command}
\mfig esc-sa, Cylinder Scale A Vector.
\noindent{\tt
mged> {\em Select the ``scale A'' entry in the TGC menu} \\
mged> {\em p 2} \\
mged>
}
The magnitude of the vector through point A is increased to 2, i.e.,
the length of the axis of the ellipse through point A is set equal to p.
See Figure \ref{esc-sa}. The command
\noindent{\tt
mged> {\em p 1} \\
mged>
}
will restore the original shape.
\subsection{Scale B Command}
\mfig esc-sb, Cylinder Scale B Vector.
\noindent{\tt
mged> {\em Select the ``scale B'' entry in the TGC menu} \\
mged> {\em p 2} \\
mged>
}
The magnitude of the vector through point B is increased to 2, i.e.,
the length of the axis of the ellipse through point B is set equal to p.
See Figure \ref{esc-sb}. The command
\noindent{\tt
mged> {\em p 1} \\
mged>
}
will restore the original shape.
\subsection{Scale C Command}
\mfig esc-sc, Cylinder Scale C Vector.
\noindent{\tt
mged> {\em Select the ``scale C'' entry in the TGC menu} \\
mged> {\em p 2} \\
mged>
}
The magnitude of the vector through point C is increased to the value of
p. The length of the axis of the ellipse through point C is set equal to the
value of p. See Figure \ref{esc-sc}. The command
\noindent{\tt
mged> {\em p 1} \\
mged>
}
will restore the original shape.
\subsection{Scale D Command}
\mfig esc-sd, Cylinder Scale D Vector.
\noindent{\tt
mged> {\em Select the ``scale D'' entry in the TGC menu} \\
mged> {\em p 2} \\
mged>
}
The magnitude of the vector through point D is changed to the value of p.
The length of the axis of the ellipse through point D is set equal to the
value of p. See Figure \ref{esc-sd}. The command
\noindent{\tt
mged> {\em p 1} \\
mged>
}
will restore the original shape.
The scale H, A, B, C, and D commands provide for setting the magnitude
equal to the value entered by the {\em p} command.
The solid edit {\bf scale} operation provides
for multiplying {\bf all} the vectors by the value
entered by the {\em p} command.
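To make the distinction concrete, a small illustration using the current
values of this cylinder (Mag H = 2, Mag A = Mag B = 1):
\begin{verbatim}
Solid edit Scale, p 2 :  Mag H: 2 -> 4,  Mag A: 1 -> 2,  Mag B: 1 -> 2
TGC menu scale H, p 1 :  Mag H: 2 -> 1,  A and B unchanged
\end{verbatim}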
\subsection{Move End H Command}
\mfig esc-mh, Cylinder Move End of H.
\noindent{\tt
mged> {\em Select the ``move end H'' entry in the TGC menu} \\
mged> {\em p 3} \\
mged>
}
The length of the vector H is changed to the value of p.
See Figure \ref{esc-mh}. The command
\noindent{\tt
mged> {\em p 2} \\
mged>
}
will restore the original shape.
\subsection{Move End H (rt) Command}
\mfig esc-mhrt, Cylinder Move End of H \& Rotate.
\noindent{\tt
mged> {\em Select the ``move end H(rt)'' entry in the TGC menu} \\
mged> {\em p 3} \\
mged>
}
This command is similar to the ``move end H'' command except the vector
through point A is rotated so its direction is in the -Y direction.
See Figure \ref{esc-mhrt}. The command
\noindent{\tt
mged> {\em p 2} \\
mged>
}
will restore the original shape, but not the original orientation.
To return control to the VIEWING state, select the ``REJECT Edit''
item on the button menu, press the ``reject'' button on the button box,
or enter the command {\em press reject} on the keyboard.
Then, enter
\noindent{\tt
mged> {\em d cyl} \\
mged>
}
to drop the cylinder from view.
\section{Solid Edit: Alter Ellipsoid}
\mfig ese-top, Top View of an Ellipsoid.
\mfig ese-sed, An Ellipsoid in Solid Edit State.
This tutorial illustrates the application of the SOL EDIT state to the
ellipsoid primitive.
\noindent{\tt
mged> {\em press reset} \\
mged> {\em size 6} \\
mged> {\em in ell ellg} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of vector A: {\em 1 0 0} \\
Enter X, Y, Z of vector B: {\em 0 .3536 -0.3536} \\
Enter X, Y, Z of vector C: {\em 0 .3536 0.3536} \\
mged>
}
Figure \ref{ese-top} is the display of the primitive in the viewing state.
Since the Z axis is perpendicular to the viewing screen, not all sides of
the solid can be seen.
\noindent{\tt
mged> {\em Twist ROTY knob clockwise and restore} \\
mged> {\em Twist ROTX knob counter-clockwise and restore} \\
mged>
}
These actions generate a view that shows all sides.
\noindent{\tt
mged> {\em Select the ``Solid Illum'' entry in the button menu} \\
mged> {\em Move the mouse out of the menu area} \\
mged> {\em Click the mouse to enter SOL EDIT state} \\
mged>
}
The display will change from the VIEWING state through the SOL PICK state to
the SOL EDIT state. Figure \ref{ese-sed} is the view that is displayed.
The coordinates of the points A, B, and C are given by the product of the
magnitude of each vector and its X, Y, and Z direction cosines. In
the display, the coordinates are:
\begin{center}
\begin{verbatim}
A = (1, 0, 0)
B = (0, 0.3536, -0.3536)
C = (0, 0.3536, 0.3536)
\end{verbatim}
\end{center}
or
\begin{center}
\begin{verbatim}
A = ( 1* cos 0, 1* cos 90, 1* cos 90 )
B = (.5* cos 90, .5* cos 45, -.5* cos 45 )
C = (.5* cos 90, .5* cos 45, .5* cos 45 )
\end{verbatim}
\end{center}
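As a check, the magnitudes recover the 1 and 0.5 used in the direction cosine
form above; for B, for example:
\begin{verbatim}
Mag B = sqrt( 0^2 + 0.3536^2 + (-0.3536)^2 ) = 0.5
\end{verbatim}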
\subsection{Translate Operation}
\mfig ese-tr, Translating Ellipsoid to (-1, 1, 1).
\noindent{\tt
mged> {\em Select the ``Translate'' entry in the solid edit menu} \\
mged> {\em p -1 1 1} \\
mged>
}
The key point V is moved to (-1, 1, 1) and the ellipsoid maintains its
relative position to V. See Figure \ref{ese-tr}.
While in the SOL EDIT state, the solid may be translated by
using the mouse. These changes are not numerically exact, but they can be
useful to visually position a solid with respect to other solids.
Move the mouse to a position outside the menu area on the screen.
Click the mouse.
The center point (V) of the ellipsoid will be translated to that point.
Note that only the value of the coordinates of V are changed.
The command
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
will restore the original position.
\subsection{Rotate Operation}
\mfig ese-xrot, Solid Edit Rotation of 45 Degrees about X.
\mfig ese-yrot, Solid Edit Rotation of 45 Degrees about Y.
\mfig ese-zrot, Solid Edit Rotation of 45 Degrees about Z.
The rotate operation is initiated by either selecting Rotate on the menu
screen with the mouse,
by depressing the Solid Rotate button on the button box,
or by entering the {\em press srot} command on the keyboard.
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 45 0 0} \\
mged>
}
Figure \ref{ese-xrot} shows the rotation of the ellipsoid about its X axis.
The angle of rotation is counter-clockwise when viewed in the positive X
direction. The direction cosines of vectors VB and VC are changed by 45 degrees.
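A worked example of the effect on the vectors, using the values entered when
the ellipsoid was created and assuming the same 45 degree rotation matrix about
X given in the torus tutorial later:
\begin{verbatim}
B = (0, .3536, -.3536)  ->  (0, .5,  0)
C = (0, .3536,  .3536)  ->  (0,  0, .5)
\end{verbatim}
That is, after the rotation B lies along the +Y axis and C lies along the +Z axis.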
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 0 45 0} \\
mged>
}
Figure \ref{ese-yrot} shows the rotation of the ellipsoid about its Y axis.
The angle
of rotation is counter-clockwise when viewed in the positive Y direction. The
rotation is applied to the solid's original orientation, so it is not necessary
to restore the orientation between rotations.
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 0 0 45} \\
mged>
}
Figure \ref{ese-zrot} shows the rotation of the ellipsoid about its Z axis.
The angle
of rotation is counter-clockwise when viewed in the positive Z direction.
The command
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
restores the original orientation of the solid.
\subsection{Scale Operation}
\mfig ese-scale, Ellipsoid Scale Decreased.
\noindent{\tt
mged> {\em Select the ``Scale'' entry in the solid edit menu} \\
mged> {\em p .5} \\
mged>
}
Point V is not changed,
but the distance from V to the surface of the ellipsoid is multiplied by 0.5,
because the magnitudes of the vectors are multiplied by 0.5.
See Figure \ref{ese-scale}.
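In terms of the vector magnitudes (a small worked illustration based on the
values entered when the ellipsoid was created):
\begin{verbatim}
Mag A: 1.0 -> 0.5      Mag B: 0.5 -> 0.25      Mag C: 0.5 -> 0.25
\end{verbatim}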
Move the mouse to a position outside the menu area and above the X axis,
and click the mouse.
Notice that the size of the ellipsoid has grown, i.e.,
the magnitudes of the vectors have increased.
Move the mouse to a position below the X axis, and click the mouse.
Notice that the size of the ellipsoid has decreased.
The command
\noindent{\tt
mged> {\em p 1} \\
mged>
}
will restore the original scale.
NOTE:
The use
of the scale operation from the Solid Edit menu
will result in the values of all the vectors being
multiplied by the value of the scale.
Use of the scale operation from the Ellipsoid menu
with a particular vector A, B, or C changes the
magnitude of that vector to the value of the scale.
\subsection{Scale A Command}
\mfig ese-sa, Ellipsoid Scale A Vector.
\noindent{\tt
mged> {\em Select the ``edit menu'' entry in the solid edit menu} \\
mged> {\em Select the ``scale A'' entry in the ellipsoid menu} \\
mged> {\em p 1.5} \\
mged>
}
The magnitude of the vector to point A is set equal to the value of p
(e.g. 1.5).
The components of the vector are (1.5, 0, 0) since the vector was
parallel to the X axis. See Figure \ref{ese-sa}. The command
\noindent{\tt
mged> {\em Select the ``scale A'' entry in the ellipsoid menu} \\
mged> {\em p 1} \\
mged>
}
will restore the original shape.
\subsection{Scale B Command}
\mfig ese-sb, Ellipsoid Scale B Vector.
\noindent{\tt
mged> {\em Select the ``scale B'' entry in the Ellipsoid menu} \\
mged> {\em p 1.5} \\
mged>
}
The magnitude of the vector to point B is set equal to the value of p
(e.g. 1.5).
The coordinates of the vector are the product of p and the
direction cosines of B. See Figure \ref{ese-sb}. The command
\noindent{\tt
mged> {\em p 0.5} \\
mged>
}
will restore the original shape.
\subsection{Scale C Command}
\mfig ese-sc, Ellipsoid Scale C Vector.
\noindent{\tt
mged> {\em Select the ``scale C'' entry in the Ellipsoid menu} \\
mged> {\em p 1.5} \\
mged>
}
The magnitude of the vector to point C is set equal to the value of p
(i.e., 1.5).
The coordinates of the vector are the product of p and the
direction cosines of C. See Figure \ref{ese-sc}. The command
\noindent{\tt
mged> {\em p 0.5} \\
mged>
}
will restore the original shape.
To return control to the VIEWING state, select the ``REJECT Edit''
item on the button menu, press the ``reject'' button on the button box,
or enter the command {\em press reject} on the keyboard.
Then, enter
\noindent{\tt
mged> {\em d ell} \\
mged>
}
to drop the ellipsoid from view.
\section{Solid Edit: Alter Torus}
\mfig est-top, Top View of a Torus.
\mfig est-sed, The Torus in Solid Edit State.
This tutorial illustrates the application of the SOL EDIT state to the
torus solid.
\noindent{\tt
mged> {\em size 6} \\
mged> {\em in tor tor} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of normal vector: {\em 0 1 0} \\
Enter radius 1: {\em 1} \\
Enter radius 2: {\em 0.2} \\
mged>
}
Figure \ref{est-top} is the display of the torus solid in viewing state.
Since the Z-axis is perpendicular to the viewing screen, not all sides of
the solid can be seen.
\noindent{\tt
mged> {\em Twist ROTY knob clockwise and restore} \\
mged> {\em Twist ROTX knob counter-clockwise and restore} \\
mged>
}
These actions generate a view of the torus that shows all sides,
as shown in Figure \ref{est-sed}.
\noindent{\tt
mged> {\em Select the ``Solid Illum'' entry in the button menu} \\
mged> {\em Move the mouse out of the menu area} \\
mged> {\em Click the mouse to enter SOL EDIT state} \\
mged>
}
The torus is a ring whose cross-section is a circle. The distance from
the vertex to the center of the cross-section is r1, and r2 is the radius of
the circular cross-section.
Let the points I and O be the intersection of the line x=-z and the torus.
Then,
\begin{center}
\begin{verbatim}
I = (-(r2-r1) cos 45, 0, (r2-r1) cos 45 )
O = (-(r2+r1) cos 45, 0, (r2+r1) cos 45 )
\end{verbatim}
\end{center}
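Plugging in the values entered above (r1 = 1, r2 = 0.2, cos 45 = .7071)
gives, numerically,
\begin{center}
\begin{verbatim}
I = ( .566, 0, -.566 )
O = (-.849, 0,  .849 )
\end{verbatim}
\end{center}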
\subsection{Translate Operation}
\mfig est-tr, Translating a Torus.
\noindent{\tt
mged> {\em Select the ``Translate'' entry in the solid edit menu} \\
mged> {\em p -.5 -1 .5} \\
mged>
}
The vertex V of the torus is moved to (-.5, -1, .5).
See figure \ref{est-tr}. The
coordinates of the other points remain the same, relative to the vertex.
The command
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
will restore the original position.
\subsection{Rotate Operation}
\mfig est-xrot, Torus Solid Edit Rotation about X.
\mfig est-yrot, Torus Solid Edit Rotation about Y.
\mfig est-zrot, Torus Solid Edit Rotation about Z.
\noindent{\tt
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 45 0 0} \\
mged>
}
The torus is rotated 45 degrees counter-clockwise about the positive X axis.
The
coordinates of the points I and O are transformed using the following matrix:
\begin{verbatim}
[x'] [1 0 0 ] [x]
[y']=[0 .7071 -.7071] [y]
[z'] [0 .7071 .7071] [z]
\end{verbatim}
See Figure \ref{est-xrot}.
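For example, applying this matrix to the normal vector of the torus, which was
entered as (0, 1, 0), carries it to
\begin{verbatim}
(0, 1, 0)  ->  (0, .7071, .7071)
\end{verbatim}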
\noindent{\tt
mged> {\em p 0 45 0} \\
mged>
}
The torus is rotated 45 degrees counter-clockwise about the positive Y axis.
See Figure \ref{est-yrot}.
\noindent{\tt
mged> {\em p 0 0 45} \\
mged>
}
The torus is rotated 45 degrees counter-clockwise about the positive Z axis.
See Figure \ref{est-zrot}.
The original orientation is restored by entering
\noindent{\tt
mged> {\em p 0 0 0} \\
mged>
}
\subsection{Scale Operation}
\mfig est-scale, Torus Scale Increased.
\noindent{\tt
mged> {\em Select the ``Scale'' entry in the solid edit menu} \\
mged> {\em p 1.5} \\
mged>
}
The vertex remains the same and all distances from the vertex are
multiplied by 1.5, the value entered with p. See Figure \ref{est-scale}.
To return to the original scale, enter
\noindent{\tt
mged> {\em p 1} \\
mged>
}
\subsection{Scale Radius 1 Command}
\mfig est-sr1, Scale Torus Radius 1.
\noindent{\tt
mged> {\em Select the ``edit menu'' entry in the solid edit menu} \\
mged> {\em Select the ``scale radius 1'' entry in the TORUS menu} \\
mged> {\em p 1.5} \\
mged>
}
The distance from the vertex to the center of the cross-section of the
ring is set equal to the value given with {\em p}, e.g., 1.5.
See Figure \ref{est-sr1}.
The original scale can be restored with
\noindent{\tt
mged> {\em p 1} \\
mged>
}
\subsection{Scale Radius 2 Command}
\mfig est-sr2, Scale Torus Radius 2.
\noindent{\tt
mged> {\em Select the ``edit menu'' entry in the solid edit menu} \\
mged> {\em Select the ``scale radius 2'' entry in the TORUS menu} \\
mged> {\em p 0.5} \\
mged>
}
The radius of the circular cross-section of the ring is set equal
to the value given with {\em p}, e.g., 0.5.
This value must remain less than the value for r1.
See Figure \ref{est-sr2}.
The command
\noindent{\tt
mged> {\em p 0.2} \\
mged>
}
will restore the original shape.
To return control to the VIEWING state, select the ``REJECT Edit''
item on the button menu, press the ``reject'' button on the button box,
or enter the command {\em press reject} on the keyboard.
Then, enter
\noindent{\tt
mged> {\em d tor} \\
mged>
}
to drop the torus from view.
\chapter{TUTORIALS ON OBJECT EDITING}
\section{Object Editing a Six-Sided Polyhedron}
This tutorial illustrates the application of the {\bf OBJ EDIT} state to
the ``arb8'' primitive. The editing of the arb8 illustrates the
absolute movement of the points.
\mfig eo-start, ``arb8'' Object Edit; Top View.
\noindent
{\tt
mged> {\em e arb8}\\
vectorized in 0 seconds\\
mged> {\em size 8}\\
mged>\\
{\em Select the {\bf BUTTON MENU} if not already displayed.} \\
{\em Select the {\bf Object Illum} menu entry.} \\
{\em Move the mouse away from the menu area and select twice.} \\
}
These operations select an object for editing.
Control is passed through the {\bf OBJ PICK} and {\bf OBJ PATH} states
to the {\bf OBJ EDIT} state. The display should look similar to
Figure \ref{eo-start}.
\subsection{Scale Operation}
\mfig eo-scale, ``arb8'' Object Edit; Scaled by 0.5.
\noindent
{\tt
mged> {\em scale 0.5}\\
mged>\\
}
As always, the selected operation operates with respect to the key
vertex -- point 1 remains the same, and the distances from point 1 to
the other points are multiplied by the scale factor. See Figure
\ref{eo-scale}. What will happen if another {\bf scale 0.5} command
is given? The {\bf scale} operator is an absolute operator. It sets
the scale factor associated with a particular transformation matrix.
It does not multiply the current transformation matrix scale factor by
the new scale factor.
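A small illustration of this absolute behavior (not a sequence to type in;
the values shown are just the scale factor stored in the matrix):
\begin{verbatim}
scale 0.5   ->  scale factor = 0.5
scale 0.5   ->  scale factor = 0.5  (still half size, not one quarter)
scale 1     ->  scale factor = 1    (original size)
\end{verbatim}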
\subsection{X, Y, and XY Move Operation}
\mfig eo-xyzmove, ``arb8'' Object Edit; Translated to (0.5, -2, 1.5).
\noindent
{\tt
mged> {\em scale 1}\\
mged> {\em translate .5 -2 1.5}\\
mged>\\
{\em Select the {\bf X move} menu entry.}\\
}
The first two commands undo the effects of the previous scale operation, and
translate the key point to (0.5, -2, 1.5). The coordinates of the other
points are changed accordingly, preserving their positions relative
to point 1 (see Figure \ref{eo-xyzmove}).
The last operation above placed MGED into a state where the key point
of the object will track the X component of successive selects, but not the Y
component. Note that on some displays point 1 may not be directly
visible. It is actually behind point 4. Watch the area listing the
information concerning ``arb8'' as selections are made.
\noindent
{\em Do several selects while moving the cursor slowly in a circle.}\\
Observe that only the X axis information changes. Similarly, the {\bf
Y move} tracks only changes in the Y axis, and {\bf XY move} tracks
changes in both axes.
\mfig eo-xymove, XY Move.
\noindent
{\tt mged> }{\em press 45,45}\\
{\tt mged> }\\
{\em Do several more selects while moving the cursor slowly in a circle.}\\
This time, moving point 1 modifies the model X and Y coordinates (observe the
changes in the list of vertices). If {\bf Y move} is selected, all
three coordinates are modified. These operators work in screen space,
not model space, so using them with an oblique view moves the model in
more than one axis (see Figure \ref{eo-xymove}, for example).
\subsection{Rotate Operation}
\mfig eo-arbrot, ``arb8'' Rotated by (30, 45, 60).
\noindent
{\tt
mged> {\em press reset}\\
mged> {\em translate 0 0 0}\\
mged> {\em rotobj 30 45 60}\\
mged>\\
}
The primitive is rotated 60, 45, and 30 degrees about the z, y, and x axes,
in that order, as shown in Figure \ref{eo-arbrot}.
Note that the coordinates of the points are changed when scaling,
translation, and rotation are performed.
\noindent
{\tt
mged> {\em press reset}\\
mged> {\em d arb8}\\
mged>\\
}
If an object which is being edited is deleted from view, MGED transitions
back into the {\bf VIEWING} state.
\section{Object Editing an Ellipsoid}
\mfig eo-ellg, Object Edit; Ellipse viewed from 45,45 preset.
\noindent
{\tt
mged> {\em e ellg}\\
mged>\\
{\em Select the {\bf 45,45} button menu entry.}\\
{\em Select the {\bf Object Illum} button menu entry.}\\
{\em Move the cursor outside the menu area and select twice.}\\
}
Control is again passed from the {\bf OBJ PICK} state through the
{\bf OBJ PATH} state to the {\bf OBJ EDIT} state.
When editing a cylinder, ellipsoid, or torus, the coordinates of the
primitives are set relative to the center of the primitive, and are not
changed by any translation of the primitive. Figure \ref{eo-ellg}
represents the display.
\subsection{Scale Operation}
\mfig eo-ellg2x, Object Edit; Ellipse scaled up by 2.
\noindent
{\tt
mged> {\em scale 2}\\
mged>\\
}
The magnitudes of the vectors from the center V to the points A, B, and C
are multiplied by the scale factor 2. The location of the center V is
unchanged. See Figure \ref{eo-ellg2x}.
\subsection{Move Operations}
\mfig eo-ellgxyz, Object Edit; Ellipse Translated.
\noindent
{\tt
mged> {\em scale 1}\\
mged>\\
{\em Select the {\bf XY move} button menu entry.}\\
{\em Move the cursor to some location away from the menu area and select.}\\
}
As with the arb8 in the previous section, the ellipsoid is moved
in the model space plane parallel to the plane of the screen so the
key point (vertex V) is ``placed at'' the point corresponding to the
cursor location. Note that there is no explicit control over the
location of the point with respect to screen Z (depth in the viewing
cube); the selected point has the same screen Z as the original point.
The screen should look similar to Figure \ref{eo-ellgxyz}.
\noindent
{\tt
mged> {\em d ellg}\\
mged>\\
}
\section{Object Path and Object Edit}
This section illustrates the use of the {\bf OBJ PATH} state to select the
number of objects that are affected by one edit command. In the previous
sections the user was shown how to manipulate only one primitive. A group
of primitives may also be edited as a single entity. This section
shows how an entire group may be edited without addressing each individual
primitive.
MGED generally has several ways to achieve a particular result. In
this section, keyboard commands are used instead of the control buttons and
the display menu. The creation and saving of special primitives is
illustrated.
In the database, the original primitives are centered around
the origin. Copies of these primitives will be made, translated away
from the origin and saved for future editing.
\subsection{Organize the Primitives and Groups}
\mfig eo-stacked, Stacked Primitives.
\noindent
{\tt
mged> {\em cp arb8 arb8.s}\\
mged> {\em cp ellg ellg.s}\\
mged> {\em cp tgc tgc.s}\\
mged> {\em cp tor tor.s}\\
mged>\\
}
Figure \ref{eo-stacked} shows the four primitives: arb8.s, ellg.s, tgc.s,
and tor.s. One convention frequently used by experienced modelers is to
tack an identifying suffix on the names of the various primitives and
objects. Often, a ``.s'' suffix denotes a {\em solid}, a ``.r'' denotes
a {\em region} and a ``.g'' denotes a {\em group} (note that this is a
different ``.g'' from the ``.g'' suffix used with filenames).
\noindent
{\tt
mged> {\em size 16}\\
mged> {\em sed arb8.s}\\
mged> {\em press sxy}\\
mged> {\em p 3 -3 1}\\
mged> {\em press accept}\\
mged>\\
}
Several things happened in the above sequence. The net result is that
the solid arb8.s was ``unstacked'' using a Solid Edit so it is more
visible. The same sequence of operations will be performed with the
other objects to move them to other locations.
\mfig eo-spread, Primitives After Translation.
\noindent
{\tt
mged> {\em sed ellg.s}\\
mged> {\em press sxy}\\
mged> {\em p -3 3 1}\\
mged> {\em press accept}\\
mged> {\em sed tgc.s}\\
mged> {\em press sxy}\\
mged> {\em p -3 -3 1}\\
mged> {\em press accept}\\
mged> {\em sed tor.s}\\
mged> {\em press sxy}\\
mged> {\em p 3 3 1}\\
mged> {\em press accept}\\
mged>\\
}
The screen should now look like Figure \ref{eo-spread}. The next step
is to group the primitives:
\noindent
{\tt
mged> {\em g a.g tgc.s arb8.s}\\
mged> {\em g b.g ellg.s tor.s}\\
mged> {\em g c.g a.g b.g}\\
mged> {\em B c.g}\\
vectorized in 0 sec\\
mged> {\em tree c.g}
\begin{verbatim}
| c.g_____________| a.g_____________| tgc.s
|                 |                 | arb8.s
|                 | b.g_____________| ellg.s
|                                   | tor.s
\end{verbatim}
\noindent
mged>\\
}
The {\em group} operator ({\bf g}) generates an object, named by the
first argument, which is the union of all objects named in succeeding
arguments. Therefore, the object ``a.g'' is composed of the union of
``tgc.s'' and ``arb8.s''. Likewise, the object ``c.g'' is the union of
``a.g'' and ``b.g''. The next command in the above sequence is called
the {\em blast} command. It is effectively a {\em zap} ({\bf Z})
followed by an {\em edit} ({\bf e}).
The final command is the {\em tree} command. It is intended to give the
user some idea of the hierarchical structure of an object. It presents
a tree laid on its side. The root is at the left, and the leaves are
at the right. A vertical bar denotes a connection at a given level, with
the proviso that a vertical bar having a line of underscores coming
in from the left represents the start of a particular subtree when read
from top down (``ellg.s'' and ``arb8.s'' do not have a common parent).
\mfig eo-grpath, Object Path With ``tor.s'' as Reference Solid.
\noindent
{\tt
mged> {\em press oill}\\
mged>\\
{\em Move the cursor up and down the screen until the primitive ``tor.s''
is illuminated, then select}\\
}
Selecting the solid ``tor.s'' transitions MGED into the {\bf OBJ PATH}
state, and establishes ``tor.s'' as the reference solid for any future
editing operations.
Note that the name ``tor.s'' is shown in the upper left corner of the
display, and on the second status line at the bottom of the display
(Figure \ref{eo-grpath}).
The {\bf OBJ PATH} state has little meaning unless there is more than one path
or group in the display. One of the following paths may be selected:
\begin{quote}
c.g/b.g/\_MATRIX\_/tor.s \\
c.g/\_MATRIX\_/b.g/tor.s \\
\_MATRIX\_/c.g/b.g/tor.s
\end{quote}
Although the torus primitive has been selected as the reference solid,
the position of {\bf \_MATRIX\_} determines the extent of the effects of
the edit. The first choice affects only the torus. The second choice
affects everything under the group ``b.g'' (the torus and ellipsoid).
The third choice affects all of the primitives. Remember though, that
in all cases, what is being edited is the Homogeneous Transformation
Matrix (thought of as the arc connecting objects), not the underlying
solid.
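In summary, the three choices above affect the following portions of the tree:
\begin{verbatim}
c.g/b.g/_MATRIX_/tor.s     edits tor.s only
c.g/_MATRIX_/b.g/tor.s     edits b.g  (tor.s and ellg.s)
_MATRIX_/c.g/b.g/tor.s     edits c.g  (all four solids)
\end{verbatim}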
\subsection{Editing One Primitive}
\mfig eo-gredit, Object Edit With ``tor.s'' as Reference Solid.
\noindent
{\em Move the cursor up and down until {\tt /c.g/b.g/\_MATRIX\_/tor.s} appears
in the bottom status line, then select.}\\
Figure \ref{eo-gredit} is the new display. Note that the torus is
illuminated. The {\bf OBJ EDIT} state has been reached.
\mfig eo-tor111, Object Edit Affecting Torus Only.
\noindent
{\tt
mged> {\em translate 1 1 1}\\
mged>\\
}
The key point of the torus is moved to 1, 1, 1. The other primitives are
not moved. See Figure \ref{eo-tor111}.
\noindent
{\tt
mged> {\em press reject}\\
mged>\\
}
\subsection{Editing a Group of Two Primitives}
\mfig eo-bgrp, Torus and Ellipsoid Selected for Object Edit.
\noindent
{\tt
mged> {\em press oill}\\
mged>\\
{\em Move the cursor up and down the screen until the primitive ``tor.s''
is illuminated, then select.}\\
{\em Move the cursor up and down until {\tt /c.g/\_MATRIX\_/b.g/tor.s} appears
in the bottom status line, then select.}\\
}
Control has again been passed to the {\bf OBJ EDIT} state. Notice that
both the torus and ellipsoid are illuminated (Figure \ref{eo-bgrp}).
Although only the parameters for the torus will be changed in the
display, the ellipsoid in group ``b.g'' will be affected by the edit.
\mfig eo-bgrp311, Torus and Ellipsoid Translated by (3, 1, 1).
\noindent
{\tt
mged> {\em translate 3 1 1}\\
mged>\\
}
The key point of the torus is moved to 3, 1, 1. The ellipsoid is
moved by the same amount. See Figure \ref{eo-bgrp311}.
\noindent
{\tt
mged> {\em press reject}\\
mged>\\
}
\subsection{Editing Two Groups of Four Primitives}
\mfig eo-cgrp, All Primitives Selected for Object Edit.
\noindent
{\tt
mged> {\em press oill}\\
mged>\\
{\em Move the cursor up and down the screen until the primitive ``tor.s''
is illuminated, then select.}\\
{\em Move the cursor up and down until {\tt /\_MATRIX\_/c.g/b.g/tor.s} appears
in the bottom status line, then select.}\\
}
Control has again been passed to the {\bf OBJ EDIT} state. Notice that
all of the primitives are illuminated (Figure \ref{eo-cgrp}).
Although only the parameters for the torus will be changed in the
display, all of the primitives under the group ``c.g'' will be affected by the edit.
\mfig eo-cgrp321, All Primitives Translated by (3, 2, 1).
\noindent
{\tt
mged> {\em translate 3 2 1}\\
mged>\\
}
The key point of the torus is moved to 3, 2, 1. All other primitives are
moved by the same amount. See Figure \ref{eo-cgrp321}.
\noindent
{\tt
mged> {\em press reject}\\
mged>\\
}
Control has now returned to the VIEWING state.
\chapter{BUILDING A SET OF COORDINATE AXES}
\mfig axis-3525, The Model Axes Viewed from 35,25.
\mfig rmit-3525, Example Axes Viewed from 35,25.
MGED does not display a set of XYZ coordinate axes on the
screen.
When you are in the 35,25 (isometric) viewing state, the axes are
positioned as in Figure \ref{axis-3525}.
This database can be found in cad/db/axis.g.
If you would like a set of coordinate axes to assist in model building,
the easiest thing to do is to
construct three axes using ``rcc'' cylinder primitives via the ``in'' command:
\noindent{\tt
mged> {\em in x rcc 0 0 0 50 0 0 1} \\
mged> {\em in y rcc 0 0 0 0 100 0 1} \\
mged> {\em in z rcc 0 0 0 0 0 150 1} \\
mged>
}
with the short leg as the ``x'' axis, the next longer leg the ``y'' axis, and the longest
leg the ``z'' axis,
as in Figure \ref{rmit-3525}.
Now, at any stage through construction of the model,
the `solid' or `object illuminate' mode can be used
to identify which axis cylinder is going where; they
will have the solid names ``x'', ``y'', and ``z''.
The name of the solid will also be
displayed in the top left hand corner of the graphics window
and at the bottom of this window.
Before going on to create a model, construct the three axes cylinders
with the ``in'' commands mentioned above.
Select the ``button menu'' in the
upper left corner of the graphics window
to enable the button menu, and
select 35,25 from this menu.
Your axes will be displayed as shown in Figure \ref{rmit-3525}.
\chapter{BUILDING A TIN WOODSMAN}
\mfig wm-prims, WoodsMan Primitives.
The purpose of this tutorial is to demonstrate how to build a model using
a few basic primitives. The model to be constructed is a tin woodsman.
The four primitives used in the construction of the tin woodsman
are an ARB8, a cylinder, an ellipsoid, and a torus.
These four primitives will be duplicated several times,
and each copy will be modified using solid editing,
to obtain the required shapes.
The finished version of this database can be found in
the BRL-CAD Package file ``db/woodsman.g''.
\section{Create Primitives}
\noindent{\tt
\$ {\em mged woodsman.g} \\
BRL-CAD Release 3.0 Graphics Editor (MGED) Compilation 82 \\
Thu Sep 22 08:08:39 EDT 1988 \\
[email protected]:/cad/.mged.4d2 \\
\\
woodsman.g: No such file or directory \\
Create new database (y|n)[n]? {\em y} \\
attach (nu|tek|tek4109|ps|plot|sgi)[nu]? {\em sgi} \\
ATTACHING sgi (SGI 4d) \\
Untitled MGED Database (units=mm) \\
mged> {\em size 20} \\
mged> {\em title A Tin Woodsman} \\
mged> {\em in solid8 rpp} \\
Enter XMIN, XMAX, YMIN, YMAX, ZMIN, ZMAX: {\em -1 1 -1 1 -1 1} \\
mged> {\em in torus tor} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of normal vector: {\em 0 1 0} \\
Enter radius 1: {\em 1} \\
Enter radius 2: {\em 0.2} \\
mged> {\em in ellipsoid ellg} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of vector A: {\em 1 0 0} \\
Enter X, Y, Z of vector B: {\em 0 0.5 0} \\
Enter X, Y, Z of vector C: {\em 0 0 0.5} \\
mged> {\em in cylinder rcc} \\
Enter X, Y, Z of vertex: {\em 0 0 0} \\
Enter X, Y, Z of height (H) vector: {\em 2 0 0} \\
Enter radius: {\em 1} \\
mged>
}
At this point, the screen should look like Figure \ref{wm-prims}.
\section{Copy Primitives and Set Up for Edit}
Although eight copies of the ellipsoid and two copies of the cylinder
shall be used in the final solid, fewer copies are made initially since there
is replication in the editing of these primitives.
\noindent{\tt
mged> {\em cp ellipsoid e.2} \\
mged> {\em cp ellipsoid e.6} \\
mged> {\em cp cylinder c.1} \\
mged> {\em cp solid8 s.1} \\
mged> {\em cp torus t.1} \\
mged> {\em Z} \\
mged> {\em e e.* c.1 t.1 s.1} \\
vectorized in 0 sec \\
mged> \\
}
\mfig wm-hat1, Funnel Bowl Cylinder After Rotation.
\mfig wm-hat2, Funnel Bowl Cylinder After End Scaling.
\mfig wm-hat3, Funnel Bowl Cylinder After Moving.
\mfig wm-tube, Funnel Tube Scaled and Positioned.
\section{Create Funnel Hat}
The solid ``cylinder'' has a height vector (``H'') which is 2mm
long.
This will be used to good advantage, to make the Tin Woodsman's
funnel hat, with the bowl of the funnel being 2mm high,
and the tube of the funnel being 2mm long.
The tube of the funnel will point straight up the +Y axis.
\noindent{\tt
mged> {\em sed c.1} \\
mged> {\em Select the ``Rotate'' entry in the solid edit menu} \\
mged> {\em p 0 45 90} \\
mged>
}
This places the cylinder so that the lines BD and AC are at the outer
ends of the cylinder. See Figure \ref{wm-hat1}.
Next, the cylinder is shaped to look like the top of a funnel.
The vectors c and d are scaled.
\noindent{\tt
{\em Select the ``edit menu'' entry in the solid edit menu} \\
{\em Select the ``scale c'' entry in the TGC menu} \\
mged> {\em p .1} \\
{\em Select the ``scale d'' entry in the TGC menu} \\
mged> {\em p .1} \\
mged> \\
}
Figure \ref{wm-hat2} is the new shape of the cone.
Note how the on-screen display records the new lengths of the ``c''
and ``d'' vectors.
This cone must be moved to the planned location for the top of the head.
\noindent{\tt
{\em Select the ``Translate'' entry in the solid edit menu} \\
mged> {\em p 0 2.2 0} \\
{\em Select the ``ACCEPT Edit'' entry in the button menu} \\
mged> \\
}
The bottom of the hat is now properly shaped and positioned. The
new version of the solid ``c.1'' has been saved in the model database.
The editor returns to the viewing state. See Figure \ref{wm-hat3}.
A copy of the saved ``c.1'' cone is made. A byproduct of the {\em cp}
command is to display the new solid, as if the {\em cp c.1 c.2} command
had been immediately followed by an {\em e c.2} command.
This new solid will be edited to make the neck of the funnel,
which is the top of the hat.
The ``c.2'' copy of the cone must be
scaled down to become a tube and the tube must be placed on top of the
cone ``c.1''.
\noindent{\tt
mged> {\em cp c.1 c.2} \\
mged> {\em sed c.2} \\
{\em Select ``scale A,B'' in the TGC menu} \\
mged> {\em p 0.1} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p 0 4.2 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu} \\
mged>
}
Figure \ref{wm-tube} is the new shape of the funnel tube.
The woodsman's hat is composed of the solids c.1 and c.2.
\mfig wm-head, Head Sphere.
\section{Building the Head}
The head of our Tin Woodsman is perfectly spherical, and will be located
at coordinates (0, 2, 0).
While it would be possible to duplicate the ellipsoid solid created above
and modify it to produce the desired sphere, all the parameters
of the head sphere are known, so it is more economical simply to use
the {\em in} command to construct it directly.
Figure \ref{wm-head} shows the results of this operation.
\noindent{\tt
mged> {\em in e.1} \\
Enter solid type: {\em sph} \\
Enter X, Y, Z of vertex: {\em 0 2 0} \\
Enter radius: {\em 1} \\
mged>
}
\mfig wm-collar, The Woodsman's Collar.
\section{Building the Collar}
The torus (primitive t.1) is used to build a collar between the head and
the body.
The ring of the collar is scaled to 0.1 of its original size,
and repositioned at the base of the head.
The results of this step are shown in Figure \ref{wm-collar}.
\noindent{\tt
mged> {\em sed t.1} \\
{\em Select ``scale radius 2'' in the TORUS menu} \\
mged> {\em p 0.1} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p 0 1 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu} \\
mged>
}
\mfig wm-body, The Woodsman's Body.
\section{Building the Body}
The ARB8 (primitive s.1) is used to build the body.
The original height of s.1 is only 2mm; the required
length of the body is 3mm, so the extrusion command is
used to adjust the position of the lower (-Y) face of the solid.
The result is shown in Figure \ref{wm-body}.
\noindent{\tt
mged> {\em sed s.1} \\
mged> {\em extrude 2367 3} \\
{\em Select ``ACCEPT Edit'' in the Button menu} \\
mged>
}
\mfig wm-arm1, An Upper Arm Prototype.
\mfig wm-arm2, The Woodsman's Arms.
\section{Building the Arms}
The ellipsoid primitive (e.2) is used to build the upper and lower parts
of the left and right arms.
The original solid ``e.2'' has the major axis of the ellipse
oriented along the X axis.
The arms need to have the major axis oriented along the Y axis,
so first the solid is rotated.
A more graceful arm is obtained by decreasing the length of
the B and C vectors, and the resulting upper arm solid can be
seen in Figure \ref{wm-arm1}.
\noindent{\tt
mged> {\em sed e.2} \\
{\em Select ``Rotate'' in the Solid Edit menu} \\
mged> {\em p 0 45 90} \\
{\em Select ``edit menu'' in the Solid Edit menu} \\
{\em Select ``scale B'' in the ELLIPSOID menu} \\
mged> {\em p 0.25} \\
{\em Select ``scale C'' in the ELLIPSOID menu} \\
mged> {\em p 0.25} \\
mged>
}
This e.2 solid will now be moved into final position as the upper left arm.
Then it will be duplicated three times
to make the rest of the arm parts. Finally, each new arm
part will be translated into the proper position,
as seen in Figure \ref{wm-arm2}.
\noindent{\tt
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p -1.3 0 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the upper left arm} \\
mged> {\em cp e.2 e.3} \\
mged> {\em cp e.2 e.4} \\
mged> {\em cp e.2 e.5} \\
mged> {\em sed e.3} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p -1.3 -2 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the lower left arm} \\
mged> {\em sed e.4} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p 1.3 0 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the upper right arm} \\
mged> {\em sed e.5} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p 1.3 -2 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the lower right arm} \\
mged>
}
\mfig wm-leg1, The First Leg.
\mfig wm-final1, The Tin Woodsman.
\section{Building the Legs}
The ellipsoid primitive (e.6) is used as a prototype
from which to build the upper and lower parts of both legs.
The primitive e.6 is scaled, rotated, and translated
into position as the upper left leg, as seen in Figure \ref{wm-leg1}.
Then, copies are made and translated to the remaining positions,
just like the arms were.
\noindent{\tt
mged> {\em sed e.6} \\
{\em Select ``Rotate'' in the Solid Edit menu} \\
mged> {\em p 0 45 90} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p -0.5 -3 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the upper left leg} \\
mged> {\em cp e.6 e.7} \\
mged> {\em cp e.6 e.8} \\
mged> {\em cp e.6 e.9} \\
mged> {\em sed e.7} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p -0.5 -5 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the lower left leg} \\
mged> {\em sed e.8} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p 0.5 -3 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the upper right leg} \\
mged> {\em sed e.9} \\
{\em Select ``Translate'' in the Solid Edit menu} \\
mged> {\em p 0.5 -5 0} \\
{\em Select ``ACCEPT Edit'' in the Button menu -- This is the lower right leg} \\
}
Figure \ref{wm-final1} is the view on the screen, the Tin Woodsman.
Take a moment to use the rotation knobs to view the model from various
angles.
\section{Building Regions}
So far, this example has concentrated on describing the basic shapes
involved in making the Tin Woodsman, without concern for establishing
a proper hierarchical structure. To remedy this,
the various solids will now be grouped by purpose and composition.
First, a region will be constructed to contain the torso,
and the color of ``cadet blue'' will be assigned:
\noindent{\tt
mged> {\em r torso.r u s.1} \\
Defaulting item number to 1001 \\
Creating region id=1000, air=0, los=100, GIFTmaterial=1 \\
mged> {\em mater torso.r} \\
Material = \\
Material? (CR to skip) {\em plastic} \\
Param = \\
Parameter string? (CR to skip) {\em [RETURN]} \\
Color = (No color specified) \\
Color R G B (0..255)? (CR to skip) {\em 95 159 159} \\
Inherit = 0: lower nodes (towards leaves) override \\
Inheritance (0|1)? (CR to skip) {\em [RETURN]} \\
mged>
}
Second, a region will be constructed to contain the collar,
which will be colored red:
\noindent{\tt
mged> {\em r collar.r u t.1} \\
Defaulting item number to 1003 \\
Creating region id=1002, air=0, los=100, GIFTmaterial=1 \\
mged> {\em mater collar.r} \\
Material = \\
Material? (CR to skip) {\em plastic} \\
Param = \\
Parameter string? (CR to skip) {\em [RETURN]} \\
Color = (No color specified) \\
Color R G B (0..255)? (CR to skip) {\em 255 127 0} \\
Inherit = 0: lower nodes (towards leaves) override \\
Inheritance (0|1)? (CR to skip) {\em [RETURN]} \\
mged>
}
Third, a region will be constructed to contain all the limbs,
and a flesh color will be assigned.
Even though none of the limbs touch each other, note how they
are combined with the UNION operation, to create a single
object of uniform composition and color.
\noindent{\tt
mged> {\em r limbs.r u e.2 u e.3 u e.4 u e.5 u e.6 u e.7 u e.8 u e.9} \\
Defaulting item number to 1001 \\
Creating region id=1000, air=0, los=100, GIFTmaterial=1 \\
mged> {\em mater limbs.r} \\
Material = \\
Material? (CR to skip) {\em plastic} \\
Param = \\
Parameter string? (CR to skip) {\em [RETURN]} \\
Color = 0 0 0 \\
Color R G B (0..255)? (CR to skip) {\em 255 200 160} \\
Inherit = 0: lower nodes (towards leaves) override \\
Inheritance (0|1)? (CR to skip) {\em [RETURN]} \\
mged>
}
Next, the funnel needs to be placed in a region.
For the sake of simplicity, the funnel will be solid, rather
than having a hollow center.
Note that the interior of the funnel overlaps with the top
of the Woodsman's head.
The funnel can be made ``form fitting'' by subtracting out
the overlap zone:
\noindent{\tt
mged> {\em r funnel.r u c.1 - e.1 u c.2 - e.1} \\
Defaulting item number to 1004 \\
Creating region id=1003, air=0, los=100, GIFTmaterial=1 \\
mged> {\em mater funnel.r} \\
Material = \\
Material? (CR to skip) {\em plastic} \\
Param = \\
Parameter string? (CR to skip) {\em sh=100} \\
Color = (No color specified) \\
Color R G B (0..255)? (CR to skip) {\em 35 107 142} \\
Inherit = 0: lower nodes (towards leaves) override \\
Inheritance (0|1)? (CR to skip)
mged> {\em l funnel.r} \\
funnel.r (len 4) REGION id=1003 (air=0, los=100, GIFTmater=1) -- \\
Material 'plastic' 'sh=100' \\
Color 35 107 142 \\
\ \ u c.1 \\
\ \ - e.1 \\
\ \ u c.2 \\
\ \ - e.1 \\
mged>
}
\mfig wm-hat-E, Evaluation of Funnel Hat Region.
Note how the boolean expression was written.
The concept that we need to express here is
the combination of all the funnel parts, minus the
portion of the head that overlaps with the inside of the funnel.
The natural way to write this is
\begin{center}
(c.1 union c.2) - e.1
\end{center}
but note that there are no grouping operations permitted in the {\em r}
command.
Furthermore, for historic reasons, union operations bind more loosely than
intersection and subtraction, i.e., there are implied groups
between union operations. Thus, the expression above needs to be
rewritten as the formula:
\begin{center}
(c.1 - e.1) union (c.2 - e.1)
\end{center}
which with the binding precedence can be expressed as:
\begin{center}
c.1 - e.1 union c.2 - e.1
\end{center}
which is what was entered in the sequence above.
To see the effect that this command had on the shape of ``funnel.r'',
run these commands, the effect of which is shown in Figure \ref{wm-hat-E}:
\noindent{\tt
mged> {\em Z } \\
mged> {\em E funnel.r } \\
vectorized in 1 sec \\
mged>
}
These regions should now be collected into a single group
for convenience in referencing. This can be done with these commands:
\noindent{\tt
mged> {\em g man.g collar.r funnel.r limbs.r torso.r} \\
mged> {\em Z} \\
mged> {\em e man.g} \\
vectorized in 1 sec \\
mged>
}
The grouping {\em g} command combined the regions, the Zap command {\em Z}
cleared the screen, and the edit {\em e man.g} command drew the whole
object.
As an exercise, run
the database structure printing command {\em tree man.g}
to obtain a simple depiction of the tree structure that has been created.
For the final step of this example, the model will be ray-traced.
Run the command:
\noindent{\tt
mged> {\em rt -s128} \\
rt -s50 -M -s128 woodsman.g man.g \\
db title: A Tin Woodsman \\
Buffering single scanlines \\
initial dynamic memory use=35152.\\
Interpreting command stream in old format\\
GETTREE: 0.01 CPU secs in 1 elapsed secs (1\%)\\
\\
...................Frame 0...................\\
PREP: 0.01 CPU secs in 0.01 elapsed secs (100\%)\\
shooting at 13 solids in 4 regions \\
model X(-2,2), Y(-6,7), Z(-2,2)\\
Beam radius=0.078125 mm, divergence=0 mm/1mm\\
\\
SHOT: 3.73 CPU secs in 6 elapsed secs (62.1667\%)\\
Additional dynamic memory used=29728. bytes\\
3515 solid/ray intersections: 1005 hits + 2510 miss\\
pruned 28.6\%: 13647 model RPP, 8197 dups, 10740 RPP\\
Frame 0: 16384 pixels in 3.73 sec = 4392.49 pixels/sec\\
Frame 0: 16384 rays in 3.73 sec = 4392.49 rays/sec (RTFM)\\
\\
Press RETURN to reattach\\
{\em [RETURN]} \\
mged>
}
\chapter{BUILDING A ROBOT ARM}
\mfig robot, The RMIT Robot Arm.
The model shown in Figure \ref{robot}
will be described with step-by-step instructions
on how to build and display it.
This is the MGED input file:
\begin{verbatim}
in btm box 0 0 0 0 -90 0 40 0 0 0 0 6
in btm1 box 0 -90 0 0 -61.549 0 40 0 0 0 0 6
in rad rcc 20 -150 0 0 0 6 8
in cyl rcc 20 -45 6 0 0 30 20
in cyl1 rcc 20 -45 0 0 0 36 15.5
in cyl2 rcc 20 -45 0 0 0 36 12.5
in hole rcc 8 -8 0 0 0 6 3
in hole1 rcc 32 -8 0 0 0 6 3
cp hole1 hole2
in gus raw 21.5 -25.3 6 0 0 30 0 25.3 0 -3 0 0
in cnr box 0 0 0 6 6 0 6 0 0 0 0 6
in cnr1 box 34 0 0 0 -6 0 6 0 0 0 0 6
cp cnr cnr2
cp cnr1 cnr3
in rad1 rcc 6 -6 0 0 0 6 6
in rad2 rcc 34 -6 0 0 0 6 6
in head rcc 20 -45 36 0 0 30 18
in shaft rcc 20 -45 36 0 0 -50 12.5
in han rcc 20 -45 51 0 120 0 6
in ball sph 20 75 51 15
in cut box 20 -45 0 0 50 0 25 0 0 0 0 40
in squ box 12 -53 -14 0 16 0 16 0 0 0 0 -30
r handle u squ u shaft u han - ball
r knob u ball
r cor u cnr2 + rad1
r cor1 u cnr3 + rad2
in hole4 rcc 20 -150 0 0 0 6 3
cp hole2 hole3
r base u btm u btm1 - hole2 - hole3 - hole4 u rad - hole4
g all base handle knob
size 300
e all
\end{verbatim}
This is the MGED dialog:
\begin{verbatim}
mged mark
BRL Graphics Editor (MGED) Version 2.31
Sat Oct 17 20:33:05 PDT 1987
mg\@godzilla:/usr/staff/mg/brlcad/mged
mark: No such file or directory
Crete new database (y/n)[n]? y
attach (nu|tek|plot|ir) [nu]? nu
ATTACHING nu (Null Display)
Untitled MGED Database (units=mm)
mged> in btm box 0 0 0 0 -90 0 40 0 0 0 0 6
mged> in btm1 box
Enter X, Y, Z of vertex: 0 -90 0
Enter X, Y, Z of vector H: 0 -61.549 0
Enter X, Y, Z of vector W: 50 0 0 40 0 0
Enter X, Y, Z of vector D: 0 0 6
mged> in rad rcc 20 -150 0 0 0 6 8
mged> in cyl rcc
Enter X, Y, Z of vertex: 20 -45 6
Enter X, Y, Z of height (H) vector: 0 0 30
Enter radius: 20
mged> in cyl1 rcc 20 -45 0 0 0 36 15.5
mged> in cyl2 rcc 20 -45 0 0 0 36 12.5
mged> in hole rcc 8 -8 0 0 0 6 3
mged> in hl ole 1 rcc 2 32 -8 0 0 0 6 2 3
mged> cp hole1 hole2
mged> in gus raw
Enter X, Y, Z of vertex: 21.5 -25.3 6
Enter X, Y, Z of vector H: 0 0 30
Enter X, Y, Z of vector W: 0 25.3 6
Enter X, Y, Z of vector D: -3 0 0
mged> in cnr box 0 0 0 06 6 0 6 0 0 0 0 6
mged> incnr1 box 34 0 0 0 -6 0 6 0 0 0 0 6
incnr1: no such command, type ? for help
mged> in cnr1 box 34 0 0 0 -6 0 6 0 0 0 0 6
mged> cp cnf r cnr2
mged> in cp cnr1 cnr3
mged> in rad1 rcc 6 -6 0 0 0 6 6
mged> in rad2 rcc 34 -6 0 0 0 6 6
mged> in shaft rcc 20 -45 36 0 0 30 18
mged> in shaft rcc 20 -45 36 0 0 -50 12.5
mged> in han rcc 20 -45 51 0 120 0 6
mged> in ball sph
Enter X, Y, Z of vertex: 20 75 51
Enter radius: 15
mged> in cut box 20 -45 0 0 50 0 25 0 0 0 040
Enter Z: 03 NOTE: error again
mged> killall cut
mged> in cut box 20 -45 0 0 50 0 25 0 0 0 0 40
mged> in squ box 12 -53 -14 0 16 0 16 0 0 0 0- -30
mged> r handle + squ shaft u han u ball
Defaulting item number to 1001
Creating region id=1000, air=0, los=100, GIFT material=1
mged> r knob + ball
Defaulting item number to 1002
Creating region id=1001, air=0, los=100, GIFT material=1
mged> r cor + cnr2 + rad1
Defaulting item number to 1003
Creating region id=1002, air=0, los=100, GIFT material=1
mged> r cor1 + cnr3 + rad2
Defaulting item number to 1004
Creating region id=1003, air=0, los=100, GIFT material=1
mged> mater knob plastic
Was
Parameter string? n
Override material color (y/n)[n]? y
R G B (0..255)? 255 0 0 NOTE: This is color RED
mged> mater handle plastic
mged> Was
Parameter string? n
Override material color (y/n)[n]? y
R G B (0..255)? 219 147 112 NOTE: This is color TAN
mged> r base + btm u btm1 u gus cyl - cyl1 m1 - hole2 -hole3 -hole4 u rad-
hole4
mged> error in number of args! NOTE: Typing errors
mged> r base + btm u btm1 - hole2 - hole3 - hole4 u rad - hole4
Defaulting item number to 1005
Creating region id=1004, air=0, los=100, GIFTmaterial=1
dir_lookup: could not find "hole3"
skipping hole3
dir_lookup: could not find "hole4"
skipping hole4
dir_lookup: could not find "hole4"
skipping hole4
mged> t
ball cnr3 gus knob/
base/ cor/ han rad
btm cor1/ handle rad1
btm1 cut head rad2
cnr cyl hole shaft
cnr1 cyl1 hole1 squ
cnr2 cyl2 hole2
mged> in hole 4 rcc 20 -150 0 0 0 6 3
mged> cp hole2 hole3
mged> killall base NOTE: Redo "base" region
mged> r base + btm u btm1 - hole2 - hole3 - hole4 u rad - hole4
Defaulting item number to 1006
Creating region id=1005, air=0, los=100, GIFTmaterial=1
mged> g all base handle knob
mged> tree all
| all_________________| base_________| btm
| btm1
| hole2
| hole 3
| hole4
| rad
| hole4
| handle______________| squ
| shaft
| han
| ball
| knob________________| ball
| handle_______________| squ
| shaft
| han
| ball
| knob_________________| ball
mged> l base
base (len 9) REGION id=1005 (air=0, los=100, GIFTmater=1)--
+ btm
u btm1
- hole2
- hole3
- hole4
u rad
- hole4
u handle
u knob
mged> l gus
gus: ARB8 (ARB6)
1 (21.5000, -25.3000, 6.0000)
2 (21.5000, 0.0000, 6.0000)
3 (21.5000, 0.0000, 6.0000)
4 (21.5000, -25.3000, 36.0000)
5 (18.5000, -25.3000, 6.0000)
6 (18.5000, 0.0000, 6.0000)
7 (18.5000, 0.0000, 6.0000)
8 (18.5000, -25.3000, 36.0000)
mged> l ball
ball: ELL
V (20.0000, 75.0000, 51.0000)
A (15.0000, 0.0000, 0.0000) Mag=15.000000
A dir cos=(0.0, 90.0, 90.0), rot=0.0, fb=0.0
B (0.0000, 15.0000, 0.0000) Mag=15.000000
B dir cos=(90.0, 0.0, 90.0) rot=90.0, fb=0.0
C (0.0000, 0.0000, 15.0000) Mag=15.000000
C dir cos=(90.0, 90.0, 0.0) rot=90.0, fb=90.0
mged> l knob
knob (len 1) REGION id=1001 (air=0, los=100, GIFTmater=1)--
Material "plastic"
Color 255 0 0
+ ball
mged> l handle
handle (len 4) REGION id=1000 (air=0, los=100, GIFT MATER=1)--
Material "plastic" "n
Color 219 147 112
+ squ
u shaft
u han
u ball
mged> canter-0-75 0
mged> size 300
mged> tops
all/ cor1/ cyl2 hole1
cnr cut gus
cnr1 cyl head
cor/ cyl1 hole
mged> analyze cyl
cyl: TGC
V (20.0000, -45.0000, 6.0000)
H (0.0000, 0.0000, 30.0000) Mag=30.000000
H dir cos=(90.0, 90.0, 0.0), rot=90.0, fb=90.0
A (-17.5032, -9.6767, 0.0000) Mag=20.000000
B (9.6767,-17.5032, 0.0000) Mag=20.000002
c=20.000000, d=20.000002
AxB dir cos=(90.0, 90.0, 0.0), rot=90.0,fb=90.0
Surface Areas: base(AxB)=1256.6371
top(CxD)=1256.6371 side=3769.9114
Total Surface Area=6283.1855
Volume=37699.1132 (0.0100 gal)
mged> q
\end{verbatim}
\chapter{RT MATERIAL TYPE, PROPERTIES, and COLOR}
First, the solids must be formed into a ``region'', e.g.:
{\em\center
r ball u torus u tube-hole
}
To change material type, properties, and color, use the ``mater'' command:
{\tt
mged> {\em mater base} \\
Material = \\
Material? (CR to skip) {\em plastic} \\
Param = \\
Parameter string? (CR to skip) {\em sh=10 dl=0.2 sp=0.8 re=0.75} \\
Color = (No color specified) \\
Color R G B (0..255)? (CR to skip) {\em 112 219 147} \\
Inherit = 0: lower nodes (towards leaves) override \\
Inheritance (0|1)? (CR to skip) {\em 0} \\
mged> \\
}
For the values in the parameter string for the material ``plastic'',
you can enter such things as:
``shininess (sh)'',
``specular lighting fraction (sp)'',
``diffuse lighting fraction (di)'',
``transmission fraction (tr)'',
``reflection fraction (re)'', and
``refractive index (ri)''.
Two formulas must hold to keep the material ``physical'':
sp + di = 1.0, and tr + re = 1.0.
Suggested values for these properties are listed below:
{\center sh=10, dl=0.2, sp=0.8, re=0.75}
NOTE: Not all of these fields need to be input; you
can use the system defaults for the rest.
To display objects in different colors on the screen, each object must
be a region with its own material properties and colors. All regions must be
displayed on screen before a ray tracing can be performed (region objects can
have cutouts to display other parts).
\begin{tabular}{r r r l}
R & G & B & COLOR \\
112 & 219 & 147 & aquamarine \\
50 & 204 & 153 & med aquamarine \\
0 & 0 & 0 & black \\
0 & 0 & 255 & blue \\
95 & 159 & 159 & cadet blue \\
66 & 66 & 111 & corn flower blue \\
107 & 35 & 142 & dk slate blue \\
191 & 216 & 216 & light blue \\
143 & 143 & 188 & light steel blue \\
50 & 50 & 204 & medium blue \\
127 & 0 & 255 & medium slate blue \\
47 & 47 & 79 & midnight blue \\
35 & 35 & 142 & navy blue \\
50 & 153 & 204 & sky blue \\
0 & 127 & 255 & slate blue \\
35 & 107 & 142 & steel blue \\
255 & 127 & 0 & coral \\
0 & 255 & 255 & cyan \\
142 & 35 & 35 & firebrick \\
204 & 127 & 50 & gold \\
219 & 219 & 112 & golden rod \\
234 & 234 & 173 & med goldenrod \\
0 & 255 & 0 & green \\
47 & 79 & 47 & dark green \\
79 & 79 & 47 & dk olive green \\
35 & 142 & 35 & forest green \\
50 & 204 & 50 & lime green \\
107 & 142 & 35 & med forest green \\
66 & 111 & 66 & medium sea green \\
127 & 255 & 0 & med spring green \\
143 & 188 & 143 & pale green \\
35 & 142 & 107 & sea green \\
0 & 255 & 127 & spring green \\
153 & 204 & 50 & yellow green \\
47 & 79 & 79 & dk slate grey \\
84 & 84 & 84 & dim grey \\
168 & 168 & 168 & light grey \\
\end{tabular}
\begin{tabular}{r r r l}
R & G & B & COLOR \\
159 & 159 & 95 & khaki \\
255 & 0 & 255 & magenta \\
142 & 35 & 107 & maroon \\
204 & 50 & 50 & orange \\
219 & 112 & 219 & orchid \\
153 & 50 & 204 & dark orchid \\
147 & 112 & 219 & medium orchid \\
188 & 143 & 143 & pink \\
234 & 173 & 234 & plum \\
255 & 0 & 0 & red \\
79 & 47 & 47 & indian red \\
219 & 112 & 147 & medium violet \\
255 & 0 & 127 & orange red \\
204 & 50 & 153 & violet red \\
111 & 66 & 66 & salmon \\
142 & 107 & 35 & sienna \\
219 & 147 & 112 & tan \\
216 & 191 & 216 & thistle \\
173 & 234 & 234 & turquoise \\
112 & 147 & 219 & dk turquoise \\
112 & 219 & 219 & med turquoise \\
79 & 47 & 79 & violet \\
159 & 95 & 159 & blue violet \\
216 & 216 & 191 & wheat \\
252 & 252 & 252 & white \\
255 & 255 & 0 & yellow \\
147 & 219 & 112 & green yellow
\end{tabular}
The available material types are: plastic, mirror, glass, and texture.
Shininess is specified as, for example, sh=16.
Typical refractive indices are: crown glass = 1.52, flint glass = 1.65,
rock salt = 1.54, water = 1.33, and diamond = 2.42.
For a mirror, use re=1.0 (tr=0).
\chapter{RAYTRACING YOUR CREATION}
Once you have finished creating all your solids, positioned them in
their correct relationships to each other, formed all your regions (forming
your finished object), and created any groups required, you can
raytrace the view displayed on the screen.
Note: if you want to display solids or objects (collections of solids
regioned together) in different colors, each of the solids or objects must be
a separate region so you can give it a specific color.
The raytracing command is
{\em\center
rt [-s\#]
}
This command produces a color shaded image of the solids or objects on
the display. This color shaded image will appear on a frame buffer display.
The resolution of the image (number of rays) is equal to ``\#'' from the ``-s''
option. If the ``-s'' option is absent, a 50x50 ray resolution will be used (a very
coarse raytrace). The higher the ``-s'' value, the better the raytracing, but
the longer it takes to display.
The recommended value of the ``-s'' option for good picture
quality and reasonable speed of display is 256. Some examples follow:
{\em
rt -s128 \\
rt \\
rt -s256 \\
}
When the rt command is given, the text and graphics window will appear,
and then the frame buffer (the picture window) starts to appear. The first scanned
display will be whatever was previously stored in the frame buffer; it will then be
overwritten with your picture (sometimes two buffer scans are displayed before yours).
The default background color is blue with steel grey colored solids and
objects. The terminal will beep when the scanned picture is finished; press
return to get back to the ``text and graphic'' window.
With the blue background it is sometimes hard to visualize the raytraced
picture; two things you can do to improve the situation are:
(a) Make separate regions for all solids and objects, so that you can
assign a specific color to each region; this can be a time-consuming task if
you have a lot of solids and objects.
(b) Construct, as a separate region, three thin flat plates to form two
walls with a bottom, as shown below. Use ``make name arb8'',
then solid edit this arb8, using move faces to obtain the required thickness;
then use the ``cp''
(copy) command to make two more copies, which you can rotate into their
respective orientations and translate all three into the correct positions
relative to each other and to the solids and objects you are displaying.
The advantage of doing this is to give the light source something
to reflect off, giving back lighting and improving contrast considerably.
With the
three plates formed into their own region, you can delete them from the screen
with the ``d'' command, rotate your creation, then re-display your plates (walls)
with the ``e'' command to do another rt. The walls need to be deleted from the
display when you rotate your objects,
otherwise everything will rotate together.
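As a sketch of this procedure, the command sequence might look like the
following (the names ``wall1.s'', ``wall2.s'', ``floor.s'' and ``walls.r''
are only illustrative):
\noindent{\tt
mged> {\em make wall1.s arb8} \\
{\em Solid edit wall1.s: use move faces to obtain the desired thickness, then ACCEPT} \\
mged> {\em cp wall1.s wall2.s} \\
mged> {\em cp wall1.s floor.s} \\
{\em Solid edit wall2.s and floor.s: rotate and translate them into position, then ACCEPT} \\
mged> {\em r walls.r u wall1.s u wall2.s u floor.s} \\
mged> {\em mater walls.r} \\
mged> {\em d walls.r} \\
mged> {\em e walls.r} \\
mged>
}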
A bonus of having constructed these three walls is that you can quickly
change the material type to ``mirror'' so that you can get reflections of the
three hidden faces.
\chapter{CONCLUSIONS}
MGED performs two basic functions:
viewing and editing.
The standard viewing capabilities of zooming, slewing,
slicing, and rotation are available.
Likewise, all the standard editing features are also available.
The user easily traverses the hierarchical data structure, applying
the editing functions of rotation, translation, and scaling to any
position in the hierarchy.
The hierarchical structure can be modified and regrouped, and regions
can be created and modified.
Specific parameter editing can also be applied to the solids to produce
any shape solid desired.
For several decades, the production and modification of geometric models
suitable for sophisticated engineering analysis
has been a slow, labor-intensive procedure.
In an effort to improve the response time of geometric models,
the Ballistic
Research Laboratory (BRL) has developed an interactive model editor
for their combinatorial solid geometry modeling system (The BRL-CAD Package).
The user interface to the geometry of these models
is a program called the Multi-device Graphics Editor (MGED)
that is designed to replace the
traditional manual method
for producing and modifying model databases.
Using MGED, the geometric models
are interactively viewed, modified, and constructed with immediate visual
feedback at each step.
When desired, the MGED editor can be operated without the need for
explicit numerical input
and opens a new dimension in the model building process.
MGED has made great gains in reducing the bottleneck in
the creation of high resolution geometric models.
\documentclass{article}
\usepackage{xcolor}
\usepackage{array}
\usepackage{graphicx}
\usepackage{multirow}
\title{GST 108: Introduction to Quantitative Reasoning}
\date{2021 - 11 - 18}
\author{Nnadiukwu Miracle}
\begin{document}
\pagecolor{white}
\pagenumbering{gobble}
\maketitle
\newpage
\pagenumbering{arabic}
\centering
\section*{LOGIC GATES}
\begin{table}[h!]
\begin{tabular}{m{4cm} m{10cm}}
\color{blue}
{\LARGE\textbf{A logic gate is a building block of a digital circuit which is at the heart of any computer operation.}} & \includegraphics[width=1\linewidth]{circuit.png} \\
\end{tabular}
\end{table}
\newpage
\section*{LOGIC GATES}
\begin{table}[h!]
\begin{tabular}{m{4cm} m{10cm}}
\color{blue}
{\LARGE\textbf{Behind every digital system is a logic gate.
}} & \includegraphics[width=1\linewidth]{devices.jpg} \\
\end{tabular}
\end{table}
\newpage
\section*{LOGIC GATES}
\begin{table}[h!]
\begin{center}
\begin{tabular}{c c c}
\multicolumn{3}{m{12cm}}{\LARGE{\textbf {Logic gates perform logical operations that take binary input (0s and 1s) and produce a single binary output. They are used in most electronic devices, including:}}} \\
& & \\
\color{red}{\Large \textbf{Smartphones}} & \color{green}{\Large \textbf{Tablets}} & \color{blue}{\Large \textbf{Memory devices}}\\
\includegraphics[width=0.3\linewidth]{phone.png} & \includegraphics[width=0.3\linewidth]{tablet.png} & \includegraphics[width=0.3\linewidth]{memcard.png} \\
\end{tabular}
\end{center}
\end{table}
\newpage
\section*{LOGIC GATES}
{\LARGE{\textbf {Now think of a logic gate like a light switch: it is either in an ON or an OFF position. Similarly, the input and output terminals are always in one of two binary states, false (0) and true (1). Each gate has its own logic, or set of rules, that determines how it acts based on multiple inputs, as outlined in a truth table.}}}
\newpage
\section*{LOGIC GATES}
{\LARGE{\textbf {Combining tens, thousands, or millions of logic gates makes it possible for a computer to perform highly complex operations and tasks at ever increasing speeds.
}}}
\newpage
\section*{LOGIC GATES}
{\LARGE{\textbf {A gate is a basic electronic circuit which operates on one or more signals to produce an output signal.}}}
\section*{Logic gates are digital circuits constructed from diodes, transistors, and resistors connected in such a way that the circuit output is the result of a basic logic operation \color{green}(OR, AND, NOT) \color{black}performed on the inputs.}
\newpage
\section*{TYPES OF LOGIC GATES}
{\LARGE{\textbf {Fundamental gates are \color{red}AND\color{black}, \color{red} OR \color{black}and \color{red}NOT
}}}
\begin{table}[h!]
\begin{center}
\begin{tabular}{c c c}
\multicolumn{3}{m{36cm}}{\includegraphics[width=0.3\linewidth]{gates.png}}\\
\end{tabular}
\end{center}
\end{table}
{\LARGE{\textbf {Derived Gates are \color{red}NAND\color{black}, \color{red}NOR\color{black}, \color{red}XOR \color{black}and \color{red}XNOR \color{black}(derived from the fundamental gates)
}}}
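\newpage
\section*{LOGIC GATES}
{\LARGE{\textbf {For reference, the behaviour of these gates on two inputs A and B can be summarized in a truth table (NOT is shown acting on A alone):
}}}
\begin{table}[h!]
\begin{center}
\begin{tabular}{c c | c c c | c c c c}
A & B & AND & OR & NOT A & NAND & NOR & XOR & XNOR \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\
\end{tabular}
\end{center}
\end{table}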
\end{document}
\section{Implementation of \fun{txSize}}
\label{sec:txSize}
The minimum fee calculation in Figure~\ref{fig:defs:protocol-parameters-helpers}
depends on an abstract $\fun{txSize}$ function, which we describe here.
We define $\fun{txSize}$ as:
$$\fun{txSize}~tx~=~\var{numInputs} \cdot 40 + \var{numOutputs} \cdot 65 + \var{rest},$$
where
\begin{itemize}
\item $\var{numInputs}$ is the number of transaction inputs in $\var{tx}$,
\item $\var{numOutputs}$ is the number of transaction outputs in $\var{tx}$,
\item $\var{tx'}$ is identical to $\var{tx}$, except that it has
\begin{itemize}
\item no inputs,
\item no outputs,
\item a fee of zero
\end{itemize}
\item $\var{rest}$ is the number of serialized bytes in $\var{tx'}$,
as defined in Appendix~\ref{sec:cddl}.
\end{itemize}
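The calculation itself is straightforward; as a minimal sketch
(the function and argument names below are only illustrative,
not part of the specification):
\begin{lstlisting}[backgroundcolor = \color{lightgray}]
def tx_size(num_inputs, num_outputs, rest):
    # txSize as defined above: each input is bounded by 40 serialized
    # bytes, each output by 65, and rest is the number of serialized
    # bytes of the transaction with its inputs and outputs removed
    # and its fee set to zero.
    return num_inputs * 40 + num_outputs * 65 + rest
\end{lstlisting}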
We now justify this calculation.
Using the number of bytes in the serialized transaction is problematic for a couple of reasons.
First, the fee is listed in the transaction, so there is a circularity problem.
Second, algorithms implementing coin selection
(choosing which unspent transaction outputs to consume)
would have to make heavy use of serialization.
Besides these two issues, however, the number of serialized bytes
does exactly what we want.
Therefore we calculate the transaction size by first computing
the number of bytes in a modified version of the transaction
that has no inputs, no outputs, and has a fee of zero,
and then we adjust accordingly by the number of inputs and outputs.
As given by the CDDL spec in Appendix~\ref{sec:cddl},
a transaction input is serialized as:
\begin{lstlisting}[backgroundcolor = \color{lightgray}]
transaction_input = [ transaction_id : $hash32 , index : uint ]
\end{lstlisting}
which is bounded by 40 bytes.
Similarly, a transaction output is serialized as:
\begin{lstlisting}[backgroundcolor = \color{lightgray}]
transaction_output = [address, amount : uint]
\end{lstlisting}
which is bounded by 65 bytes.
\documentclass[../PHYS306Notes.tex]{subfiles}
\begin{document}
\section{Lecture 35}
\subsection{Lecture Notes - Course Review II}
\subsubsection{Examples of Canonical Transformations}
Only transformations in which new coordinates obey Hamilton's equations of motion are canonical. Suppose we have our generalized position $q$ that maps to a new variable $Q = p$ and our generalized momentum $p$ that maps to a new variable $P = -q$. The original coordinates satisfy Hamilton's equations, so:
\[\dot{q} = \dpd{\HH}{p}, \quad \dot{p} = -\dpd{\HH}{q}\]
We therefore have that:
\[\dot{Q} = \dot{p} = -\dpd{\HH}{q} = -\dpd{\HH}{(-P)} = \dpd{\HH}{P}\]
So the first equation of motion is satisfied. Playing the same game with $\dot{P}$, we have:
\[\dot{P} = -\dot{q} = -\dpd{\HH}{p} = -\dpd{\HH}{Q}\]
So this also works out. Checking the (fundamental) Poisson brackets, we have that:
\[[Q, Q] = [p, p] = 0\]
\[[P, P] = [-q, -q] = [q, q] = 0\]
\[[Q, P] = [p, -q] = -[p, q] = -(-[q, p]) = [q, p] = 1\]
So we have checked that this is a canonical transformation in a completely equivalent way. Recall that we can come up with generating functions $F$ which link the old and new variables. One example (in the HW) was:
\[F_1(q, Q) = qe^Q\]
Now we can calculate the momenta:
\[p_i = \dpd{F_1}{q_i}, \quad P_i = -\dpd{F_1}{Q_i}\]
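For instance, for the $F_1$ above these relations give
\[p = \dpd{F_1}{q} = e^Q, \qquad P = -\dpd{F_1}{Q} = -qe^Q = -qp\]
so the generated transformation is $Q = \ln p$, $P = -qp$; one can quickly check that it is canonical, since $[Q, P] = \dpd{Q}{q}\dpd{P}{p} - \dpd{Q}{p}\dpd{P}{q} = 0\cdot(-q) - \frac{1}{p}\cdot(-p) = 1$.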
A second example is:
\[F_2(p, Q) = -(e^Q - 1)^2\]
Where:
\[q_i = -\dpd{F_2}{p_i}, \quad P_i = -\dpd{F_2}{Q_i}\]
The significance of all of this is that there is a subclass of transformations, where due to the Jacobian determinant being equal to 1, the metric of phase space is preserved (this relates to a fundamental symmetry).
\subsubsection{Practice Problem}
\begin{center}
\includegraphics[scale=0.5]{Lecture-35/l35-img1.png}
\end{center}
Two equal masses are connected by a string of length $l$ that runs through the tip of a cone. One mass is free to move inside, while the other moves without friction on the surface.
\begin{enumerate}
\item Set up suitable generalized coordinates.
\begin{s}
It is most natural to use spherical coordinates. For $m_1$ we use coordinates $(r, \theta, \phi)$. For $m_2$ we have coordinates $(l - r, \pi - \alpha, \beta)$. We have 6 variables and two constraints, so 4 variables are independent. We put the origin at the top of the cone. Take $r, \theta, \phi, \beta$ as our generalized coordinates.
\end{s}
\item Find the Lagrangian and the equations of motion. Are there cyclic coordinates?
\begin{s}
Velocities are given by:
\[m_1: (\dot{r}, r\dot{\theta}, r\dot{\phi}\sin\theta)\]
\[m_2: (-\dot{r}, 0, (l-r)\dot{\beta}\sin(\pi - \alpha))\]
\[\LL = \frac{m}{2}\left[2\dot{r}^2 + r^2\dot{\theta}^2 + r^2\dot{\phi}^2\sin^2\theta + (l-r)^2\dot{\beta}^2\sin^2(\pi - \alpha)\right] - mgr\cos\theta + mg(l-r)\cos\alpha\]
To find the equations of motion, we use the Euler Lagrange equations. We notice that the Lagrangian has no dependence on $\phi$ or $\beta$ and hence these coordinates are cyclic.
\end{s}
\item Find the Hamiltonian.
\begin{s}
For the cyclic coordinates, we have:
\[p_\phi = mr^2\dot{\phi}\sin^2\theta = \text{Const.}\]
\[p_\beta = m(l-r)^2\dot{\beta}\sin^2(\pi - \alpha) = \text{Const.}\]
For the $r$ equations, we have:
\[2\ddot{r} - r(\dot{\theta}^2 + \dot{\phi}^2\sin^2\theta) + (l-r)\dot{\beta}^2\sin^2\alpha + g(\cos\theta + \cos\alpha) = 0\]
For the $\theta$ equation we have:
\[r\ddot{\theta} + 2\dot{r}\dot{\theta} - r\dot{\phi}^2\sin\theta\cos\theta - g\sin\theta = 0\]
\end{s}
\item What is the angular velocity of the particle on the outside if it moves in a circular orbit?
\begin{s}
We already calculated $p_\phi$ and $p_\beta$, which were constant. Calculating the other two, we have:
\[p_r = \dpd{\LL}{\dot{r}} = 2m\dot{r}\]
\[p_\theta = \dpd{\LL}{\dot{\theta}} = mr^2\dot{\theta}\]
The Hamiltonian is given by:
\[\HH = p_r\dot{r} + p_\theta\dot{\theta} + p_\phi\dot{\phi} + p_\beta\dot{\beta} - \LL\]
Hence:
\[\HH = \frac{p_r^2}{4m} + \frac{p_\theta^2}{2mr^2} + \frac{p_\phi^2}{2mr^2\sin^2\theta} + \frac{p_\beta^2}{2m(l-r)^2\sin^2\alpha} + mgr\cos\theta - mg(l-r)\cos\alpha\]
Here we see that the Hamiltonian is just $\HH = T + U$, as the transformation is indeed natural. Moving on to solving the question, in a circular orbit $r = \text{Const.}$, so we can solve for:
\[\dot{\beta} = \frac{p_\beta}{m(l-r)^2\sin^2\alpha}\]
\end{s}
\end{enumerate}
\end{document} | {
"alphanum_fraction": 0.6478427829,
"avg_line_length": 65.1029411765,
"ext": "tex",
"hexsha": "50f00db22c369591a7bb557ff124a7b744ed7a41",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9394a8cd986722b6fdcb57c8846c6b0d52c23188",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RioWeil/PHYS306-notes",
"max_forks_repo_path": "Lecture-35/Lecture-Notes-35.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9394a8cd986722b6fdcb57c8846c6b0d52c23188",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RioWeil/PHYS306-notes",
"max_issues_repo_path": "Lecture-35/Lecture-Notes-35.tex",
"max_line_length": 308,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9394a8cd986722b6fdcb57c8846c6b0d52c23188",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RioWeil/PHYS306-notes",
"max_stars_repo_path": "Lecture-35/Lecture-Notes-35.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1544,
"size": 4427
} |
\section{Disentangling Adversarial Robustness\\[2px]\hspace*{-6px}and Generalization}
\label{sec:main}
To clarify the relationship between adversarial robustness and generalization, we explicitly distinguish between regular and on-manifold adversarial examples, as illustrated in \figref{fig:introduction}. Then, the hypothesis \cite{TsiprasARXIV2018,SuARXV2018} that robustness and generalization are contradicting goals is challenged in four arguments: regular \red{unconstrained} adversarial examples leave the manifold; adversarial examples constrained to the manifold exist; robustness against on-manifold adversarial examples is essentially generalization; \red{robustness against regular adversarial examples is not influenced by generalization when controlled through the amount of training data}. Altogether, our results imply that adversarial robustness and generalization are not opposing objectives \red{and both robust and accurate models are possible} but require higher sample complexity.
\vskip 0px
\subsection{Experimental Setup}
\myparagraph{Datasets:} We use \MNIST \cite{CohenARXIV2017}, F(ashion)-MNIST \cite{XiaoARXIV2017} and \Celeb \cite{LiuICCV2015} for our experiments ($240\text{k}/40\text{k}$, $60\text{k}/10\text{k}$ and $182\text{k}/20\text{k}$ training/test images); \Celeb has been re-sized to $56{\times}48$ and we classify ``Male'' \vs ``Female''. Our synthetic dataset, \Fonts, consists of letters ``A'' to ``J'' of $1000$ Google Fonts randomly transformed (uniformly over translation, shear, scale, rotation in $[-0.2,0.2]$, $[-0.5,0.5]$, $[0.75,1.15]$, $[-\nicefrac{\pi}{2},\nicefrac{\pi}{2}]$) using a spatial transformer network \cite{JaderbergNIPS2015} such that the generation process is completely differentiable. The latent variables correspond to the transformation parameters, font and class. We generated $960\text{k}/40\text{k}$ (balanced) training/test images of size $28{\times}28$.
We consider classifiers with three (four on \Celeb) convolutional layers ($4\times4$ kernels; stride $2$; $16$, $32$, $64$ channels), each followed by ReLU activations and batch normalization \cite{IoffeICML2015}, and two fully connected layers. The networks are trained using ADAM \cite{KingmaICLR2015}, with learning rate $0.01$ (decayed by $0.95$ per epoch), weight decay $0.0001$ and batch size $100$, for $20$ epochs. Most importantly, to control their generalization performance, we use $N$ training images, with $N$ between $250$ and $40\text{k}$; for each $N$, we train $5$ models with random weight initialization \cite{GlorotAISTATS2010} an report averages.
We learn class-specific \VAEGANs, similar to \cite{LarsenICML2016,RoscaARXIV2017}, to approximate the underlying manifold; we refer to the supplementary material for details.
\myparagraph{Attack:} Given an image-label pair $(x,y)$ from an unknown data distribution $p$ and a classifier $f$, an adversarial example is a perturbed image $\tilde{x} = x + \delta$ which is mis-classified by the model, \ie, $f(\tilde{x}) \neq y$. While our results can be confirmed using other attacks and norms (see the supplementary material for \cite{CarliniSP2017} and transfer attacks), for clarity, we concentrate on the $L_{\infty}$ white-box attack by Madry \etal \cite{MadryICLR2018} that directly maximizes the training loss,
\vskip -14px
\begin{align}
\max_\delta \cL(f(x + \delta), y)\quad\text{s.t.}\quad\|\delta\|_{\infty} \leq \epsilon, \tilde{x}_i \in [0,1],\label{eq:main-off-manifold-attack}
\end{align}
\vskip -2px
\noindent using projected gradient descent; where $\cL$ is the cross-entropy loss and $\tilde{x}=x+\delta$. The $\epsilon$-constraint is meant to ensure perceptual similarity. We run $40$ iterations of ADAM \cite{KingmaICLR2015} with learning rate $0.005$ and consider $5$ restarts, (distance and direction) uniformly sampled in the $\epsilon$-ball for $\epsilon = 0.3$. Optimization is stopped as soon as the predicted label changes, \ie, $f(\tilde{x}) \neq y$. We attack $1000$ test images.
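For concreteness, a minimal PyTorch sketch of the attack in \eqnref{eq:main-off-manifold-attack} is given below; it uses plain signed gradient ascent rather than ADAM, omits the random restarts and early stopping described above, and the function and variable names are purely illustrative:
\begin{verbatim}
import torch
import torch.nn.functional as F

def linf_attack(model, x, y, eps=0.3, lr=0.005, steps=40):
    # Maximize the cross-entropy loss subject to ||delta||_inf <= eps
    # and x + delta in [0, 1].
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()           # ascent step
            delta.clamp_(-eps, eps)                   # project onto eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep image in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()
\end{verbatim}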
\begin{figure}
\centering
\vskip -0.3cm
\hskip -0.1cm
\begin{subfigure}[t]{0.5\textwidth}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{experiments_hypo1_b.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=0.9\textwidth]{experiments_hypo1_c.pdf}
\end{subfigure}
\end{subfigure}
\vskip -6px
\caption{Distance of adversarial examples to the true, on \Fonts (left), or approximated, on \MNIST (right), manifold. We show normalized histograms of the $L_2$ distance of adversarial examples to their projections onto the manifold ($4377$/$3837$ regular adversarial examples on \Fonts/\MNIST; $667$ on-manifold adversarial examples on \MNIST). Regular adversarial examples exhibit a significant distance to the manifold; on \MNIST, clearly distinguishable from on-manifold adversarial examples.}
\label{fig:main-hypo1}
\vskip 0px
\end{figure}
\myparagraph{Adversarial Training:} An established defense is adversarial training, \ie, training on adversarial examples crafted during training \cite{ZantedschiAISEC2017,MiyatoICLR2016,HuangARXIV2015,ShahamNEUROCOMPUTING2018,SinhaICLR2018,LeeARXIV2017b,MadryICLR2018}. Madry \etal \cite{MadryICLR2018} consider the min-max problem
\vskip -14px
\begin{align}
\hskip -8px \min_w \sum_{n = 1}^N{\max_{\|\delta\|_{\infty} \leq \epsilon,x_{n,i}{+}\delta_i\in[0,1]}}{\cL}(f(x_n{+}\delta; w), y_n)
\label{eq:main-off-manifold-adversarial-training}
\end{align}
\vskip -2px
\noindent where $w$ are the classifier's weights and $x_n$ the training images. As shown in the supplementary material, we considered different variants \cite{SzegedyARXIV2013,GoodfellowARXIV2014,MadryICLR2018}; in the paper, however, we follow common practice and train on $50\%$ clean images and $50\%$ adversarial examples \cite{SzegedyARXIV2013}. For $\epsilon = 0.3$, the attack (for the inner optimization problem) is run for full $40$ iterations, \ie, is not stopped at the first adversarial example found. Robustness of the obtained network is measured by computing the attack \textbf{success rate}, \ie, the fraction of successful attacks on correctly classified test images, as, \eg, in \cite{CarliniSP2017}, for a fixed~$\epsilon$; lower success rate indicates higher robustness of the network.
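A minimal sketch of one training step of the $50\%/50\%$ variant used here, building on the attack sketch above (again, names are illustrative):
\begin{verbatim}
def adversarial_training_step(model, optimizer, x, y, eps=0.3):
    # Replace half of the batch by adversarial examples, then take a
    # standard cross-entropy step on the mixed batch.
    n = x.size(0) // 2
    x_adv = linf_attack(model, x[:n], y[:n], eps=eps)
    x_mix = torch.cat([x_adv, x[n:]], dim=0)
    optimizer.zero_grad()      # also clears gradients left by the attack
    loss = F.cross_entropy(model(x_mix), y)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}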
\vskip 0px
\subsection{Adversarial Examples Leave the Manifold}
The idea of adversarial examples leaving the manifold is intuitive on \MNIST where particular background pixels are known to be constant, see \figref{fig:main-examples}. If an adversarial example $\tilde{x}$ manipulates these pixels, it has zero probability under the data distribution and its distance to the manifold, \ie, the distance to its projection $\pi(\tilde{x})$ onto the manifold, should be non-zero. On \Fonts, with known generative process in the form of a decoder $\dec$ mapping latent variables $z$ to images $x$, the projection is obtained iteratively: $\pi(\tilde{x}) = \dec(\tilde{z})$ with $\tilde{z} = \argmin_{z} \|\dec(z) - \tilde{x})\|_2$ and $z$ constrained to valid transformations (font and class, known from the test image $x$, stay constant). On \MNIST, as illustrated in \red{\figref{fig:main-illustration-2} (right)}, the manifold is approximated using $50$ nearest neighbors; the projection $\pi(\tilde{x})$ onto the sub-space spanned by the $x$-centered nearest neighbors is computed through least squares. On both \Fonts and \MNIST, the distance $\|\tilde{x} - \pi(\tilde{x})\|_2$ is considered to asses whether the adversarial example $\tilde{x}$ actually left the manifold.
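A minimal sketch of this local projection (with images flattened to vectors; names are illustrative):
\begin{verbatim}
import numpy as np

def project_onto_local_manifold(x_adv, x, neighbors):
    # Approximate the manifold around x by the sub-space spanned by its
    # x-centered nearest neighbors and project x_adv onto it via least
    # squares; the distance to the manifold is then ||x_adv - proj||_2.
    B = (neighbors - x).T                    # columns span the sub-space
    coeff, *_ = np.linalg.lstsq(B, x_adv - x, rcond=None)
    return x + B @ coeff                     # projection pi(x_adv)
\end{verbatim}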
On \Fonts, \figref{fig:main-hypo1} (left) shows that regular adversarial examples clearly exhibit non-zero distance to the manifold. In fact, the projections of these adversarial examples to the manifold are almost always the original test images; as a result, the distance to the manifold is essentially the norm of the corresponding perturbation: $\|\tilde{x} - \pi(\tilde{x})\|_2 \approx \|\tilde{x} - x\|_2 = \|\delta\|_2$. This suggests that the adversarial examples leave the manifold in an almost orthogonal direction. On \MNIST, in \figref{fig:main-hypo1} (right), these results can be confirmed in spite of the crude local approximation of the manifold. Again, regular adversarial examples seem to leave the manifold almost orthogonally, \ie, their distance to the manifold coincides with the norm of the corresponding perturbations. These results show that regular adversarial examples essentially \emph{are} off-manifold adversarial examples; this finding is intuitive as for well-trained classifiers, leaving the manifold should be the ``easiest'' way to fool it; \red{results on \Fashion as well as a more formal statement of this intuition can be found in the supplementary material.}
\vskip 0px
\subsection{On-Manifold Adversarial Examples}
Given that regular adversarial examples leave the manifold, we intend to explicitly compute on-manifold adversarial examples. To this end, we assume our data distribution $p(x,y)$ to be conditional on the latent variables $z$, \ie, $p(x,y|z)$, corresponding to the underlying, low-dimensional manifold. \red{On this manifold, however, there is no notion of ``perceptual similarity'' in order to ensure label invariance, \ie, distinguish valid on-manifold adversarial examples, \figref{fig:introduction} (b), from invalid ones that change the actual, true label, \figref{fig:introduction} (c):}
\begin{definition}[On-Manifold Adversarial Example]
Given the data distribution $p$, an on-manifold adversarial example for $x$ with label $y$ is a perturbed version $\tilde{x}$ such that $f(\tilde{x}) \neq y$ but $p(y | \tilde{x}) > p(y' | \tilde{x}) \;\forall\; y' \neq y$.\label{def:main-on-manifold-adversarial-example}
\end{definition}
\vskip 2px
\noindent Note that the posteriors $p(y|\tilde{x})$ correspond to the true, unknown data distribution; any on-manifold adversarial example $\tilde{x}$ violating \defref{def:main-on-manifold-adversarial-example} changed its actual, true label.
In practice, we assume access to an encoder and decoder modeling the (class-conditional) distributions $p(z|x,y)$ and $p(x|z,y)$ -- in our case, achieved using \VAEGANs \cite{LarsenICML2016,RoscaARXIV2017}. Then, given the encoder \red{$\enc$ and decoder $\dec$} and as illustrated in \figref{fig:main-illustration-2} (left), we obtain the latent code $z = \enc(x)$ and compute the perturbation $\zeta$ by maximizing:
\vskip -14px
\begin{align}
\max_\zeta \cL(f(\dec(z + \zeta)), y)\quad\text{s.t.}\quad\|\zeta\|_{\infty}\leq \eta.\label{eq:main-on-manifold-attack}
\end{align}
\vskip -2px
\noindent The image-constraint, \ie, $\dec(z + \zeta) \in[0,1]$, is enforced by the decoder; the $\eta$-constraint can, again, be enforced by projection; and we can additionally enforce a constraint on $z + \zeta$, \eg, corresponding to a prior on $z$. Label invariance, as in \defref{def:main-on-manifold-adversarial-example}, is ensured by considering only class-specific encoders and decoders, \ie, the data distribution is approximated per class. We use $\eta = 0.3$ and the same optimization procedure as for \eqnref{eq:main-off-manifold-attack}; on approximated manifolds, the perturbation $z + \zeta$ is additionally constrained to $[-2,2]^{10}$, corresponding to a truncated normal prior from the class-specific \VAEGANs; we attack $2500$ test images.
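A sketch of the attack in \eqnref{eq:main-on-manifold-attack}, implemented as projected gradient ascent in latent space; the step size, interfaces and defaults below are illustrative assumptions, not the exact implementation used for the experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def on_manifold_attack(model, dec, z, y, eta=0.3, steps=40, lr=0.05, z_box=2.0):
    # Maximize the loss of f(dec(z + zeta)) subject to ||zeta||_inf <= eta;
    # the decoder itself enforces dec(z + zeta) in [0, 1].
    zeta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(dec(z + zeta)), y)
        grad, = torch.autograd.grad(loss, zeta)
        with torch.no_grad():
            zeta += lr * grad.sign()                         # ascent step
            zeta.clamp_(-eta, eta)                           # project onto the eta-ball
            zeta.copy_((z + zeta).clamp(-z_box, z_box) - z)  # keep z + zeta in, e.g., [-2, 2]
    return dec(z + zeta).detach()
\end{verbatim}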
On-manifold adversarial examples obtained through \eqnref{eq:main-on-manifold-attack} are similar to those crafted in \cite{GilmerICLRWORK2018}, \cite{SchottARXIV2018}, \cite{AthalyeARXIV2018} or \cite{ZhaoICLR2018}. However, in contrast to \cite{GilmerICLRWORK2018,SchottARXIV2018,AthalyeARXIV2018}, we directly compute the perturbation $\zeta$ on the manifold instead of computing the perturbation $\delta$ in the image space and subsequently projecting $x + \delta$ to the manifold. Also note that enforcing any similarity constraint through a norm on the manifold is significantly more meaningful compared to using a norm on the image space, as becomes apparent when comparing the obtained on-manifold adversarial examples in \figref{fig:main-examples} to their regular counterparts. Compared to \cite{ZhaoICLR2018}, we find on-manifold adversarial examples using a gradient-based approach instead of randomly sampling the latent space.
\figref{fig:main-examples} shows on-manifold adversarial examples for all datasets, which we found significantly harder to obtain compared to their regular counterparts. On \Fonts, using the true, known class manifolds, on-manifold adversarial examples clearly correspond to transformations of the original test image -- reflecting the true latent space. For the learned class manifolds, the perturbations are less pronounced, often manipulating boldness or details of the characters. Due to the approximate nature of the learned \VAEGANs, these adversarial examples are strictly speaking not always part of the true manifold -- as can be seen for the irregular ``A'' (\figref{fig:main-examples}, 6th column). On \MNIST and \Fashion, on-manifold adversarial examples represent meaningful manipulations, such as removing the tail of a hand-drawn ``8'' (\figref{fig:main-examples}, 10th column) or removing the collar of a pullover (\figref{fig:main-examples}, 11th column), in contrast to the random noise patterns of regular adversarial examples. However, these usually incur a smaller change in the image space, which also explains why regular, unconstrained adversarial examples almost always leave the manifold. Still, on-manifold adversarial examples are perceptually close to the original images. On \Celeb, the quality of on-manifold adversarial examples is clearly limited by the approximation quality of our \VAEGANs. Finally, \figref{fig:main-hypo1} (right) shows that on-manifold adversarial examples are closer to the manifold than regular adversarial examples -- in spite of the crude approximation of the manifold on \MNIST.
\begin{figure}[t]
\centering
\vskip -0.3cm
\begin{subfigure}{0.28\textwidth}
\includegraphics[width=1\textwidth]{main_illustration_c.pdf}
\end{subfigure}
\begin{subfigure}{0.18\textwidth}
\includegraphics[width=1\textwidth,trim={0 0 1.9cm 0},clip]{main_illustration_a.pdf}
\end{subfigure}
\vskip -8px
\caption{\red{Left: On-manifold adversarial examples can be computed using learned, class-specific \VAEGANs \cite{LarsenICML2016,RoscaARXIV2017}. The perturbation $\zeta$ is obtained via \eqnref{eq:main-on-manifold-attack} and added to the latent code $z = \enc(x)$ yielding the adversarial example $\tilde{x} = \dec(z + \zeta)$ with difference $\delta = \tilde{x} - x$ in image space. Right: The distance of a regular adversarial example $\tilde{x}$ to the manifold, approximated using nearest neighbors, is computed as the distance to its orthogonal projection $\pi(\tilde{x})$: $\|\tilde{x} - \pi(\tilde{x})\|_2$. Large distances indicate that the adversarial example likely left the manifold.}}
\label{fig:main-illustration-2}
\vskip 0px
\end{figure}
\vskip 0px
\subsection{On-Manifold Robustness is Essentially\\[2px]\hspace*{-5px}Generalization}
\begin{figure*}
\centering
\vskip -0.3cm
\hskip -0.25cm
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_fonts_error.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_fonts_on_true.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_emnist_error.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_fashion_error.pdf}
\end{subfigure}
\\
\hskip -0.25cm
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_fonts_error_on_true_alt.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_fonts_error_on_learned_alt.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_emnist_error_on_learned_alt.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo3_fashion_error_on_learned_alt.pdf}
\end{subfigure}
\\
\fcolorbox{black!50}{white}{
\begin{subfigure}[t]{0.975\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{experiments_hypo3_legend.pdf}
\end{subfigure}
}
\vskip -6px
\caption{On-manifold robustness is strongly related to generalization, as shown on \Fonts, \MNIST and \Fashion considering on-manifold success rate and test error. Top: test error and on-manifold success rate in relation to the number of training images. As test error reduces, so does on-manifold success rate. Bottom: on-manifold success rate plotted against test error reveals the strong relationship between on-manifold robustness and generalization.}
\label{fig:main-hypo3}
\vskip 0px
\end{figure*}
We argue that on-manifold robustness is nothing other than generalization: as on-manifold adversarial examples have non-zero probability under the data distribution, they are merely generalization errors. This is illustrated in \figref{fig:main-hypo3} (top left), which reports test error and on-manifold success rate on \Fonts. As expected, better generalization, \ie, using more training images $N$, also reduces on-manifold success rate. In order to make this relationship explicit, \figref{fig:main-hypo3} (bottom) plots on-manifold success rate against test error. Then, especially for \Fonts and \MNIST, the relationship of on-manifold robustness and generalization becomes apparent. On \Fashion, the relationship is less pronounced because on-manifold adversarial examples, computed using our \VAEGANs, are not close enough to real generalization errors. However, even on \Fashion, the experiments show a clear relationship between on-manifold robustness and generalization.
\vskip 0px
\subsubsection{On-Manifold Adversarial Training\\[2px]Boosts Generalization}
Given that generalization positively influences on-manifold robustness, we propose to adapt adversarial training to the on-manifold case in order to boost generalization:
\vskip -14px
\begin{align}
\min_w \sum_{n=1}^N \max_{\|\zeta\|_{\infty} \leq \eta} \cL(f(\dec(z_n + \zeta); w), y_n),
\label{eq:main-on-manifold-adversarial-training}
\end{align}
\vskip -2px
\noindent with $z_n = \enc(x_n)$ being the latent codes corresponding to training images $x_n$. Then, on-manifold adversarial training corresponds to robust optimization \wrt the true, or approximated, data distribution. For example, with the perfect decoder on \Fonts, the inner optimization problem finds ``hard'' images irrespective of their likelihood under the data distribution. For approximate $\dec$, the benefit of on-manifold adversarial training depends on how well the true data distribution is matched, \ie, how realistic the obtained on-manifold adversarial examples are; in our case, this depends on the quality of the learned \VAEGANs.
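A schematic outer training loop for \eqnref{eq:main-on-manifold-adversarial-training}, reusing the latent-space attack sketched above for the inner maximization; the encoder, decoder, data loader and optimizer are placeholders, and the sketch trains purely on on-manifold adversarial examples as written in the equation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def on_manifold_adversarial_training(model, enc, dec, loader, optimizer,
                                     eta=0.3, epochs=20):
    # Robust optimization w.r.t. the (approximated) data distribution.
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                z = enc(x)                       # latent codes of the training images
            x_adv = on_manifold_attack(model, dec, z, y, eta=eta)  # inner maximization
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)                # outer minimization
            loss.backward()
            optimizer.step()
    return model
\end{verbatim}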
\begin{figure*}
\centering
\vskip -0.3cm
\hskip -0.25cm
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo4_fonts_off.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo4_fonts_error_off_alt.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo4_emnist_error_off_alt.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_hypo4_fashion_error_off_alt.pdf}
\end{subfigure}
\\
\fcolorbox{black!50}{white}{
\begin{subfigure}[t]{0.975\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{experiments_hypo4_legend.pdf}
\end{subfigure}
}
\vskip -6px
\caption{Regular robustness is not related to generalization, as demonstrated on \Fonts, \MNIST and \Fashion considering test error and (regular) success rate. On \Fonts (left), success rate is not influenced by test error, except for adversarial training. Plotting success rate against test error highlights the independence of robustness and generalization; however, different training strategies exhibit different robustness-generalization characteristics.}
\label{fig:main-hypo4}
\vskip 0px
\end{figure*}
Instead of approximating the manifold using generative models, we can exploit known invariances of the data. Then, adversarial training can be applied to these invariances, assuming that they are part of the true manifold. In practice, this can, for example, be accomplished using adversarial deformations \cite{AlaifariARXIV2018,XiaoICLR2018,EngstromARXIV2017}, \ie, adversarially crafted transformations of the image. For example, as on \Fonts, we consider $6$-degrees-of-freedom transformations corresponding to translation, shear, scaling and rotation:
\vskip -14px
\begin{align}
\min_w \sum_{n = 1}^N \max_{\|t\|_{\infty} \leq \eta, t \in \mR^6} \cL(f(T(x_n; t); w), y_n),
\label{eq:main-stn-adversarial-training}
\end{align}
\vskip -2px
\noindent where $T(x; t)$ denotes the transformation of image $x$ with parameters $t$ and the $\eta$-constraint ensures similarity and label invariance. Again, the transformations can be applied using spatial transformer networks \cite{JaderbergNIPS2015} such that $T$ is differentiable; $t$ can additionally be constrained to a reasonable space of transformations. We note that a similar approach has been used by Fawzi \etal \cite{FawziICIP2016} to boost generalization on, \eg, MNIST \cite{LecunIEEE1998}. However, the approach was considered an adversarial variant of data augmentation and was not motivated through the lens of on-manifold robustness. We refer to \eqnref{eq:main-stn-adversarial-training} as adversarial transformation training and note that, on \Fonts, this approach is equivalent to on-manifold adversarial training as the transformations coincide with the actual, true manifold by construction. We also include a data augmentation baseline, where the transformations $t$ are applied randomly.
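Adversarial transformation training can be sketched along the same lines; here the $6$-degrees-of-freedom transformation is represented, for simplicity, as a perturbation of the identity affine matrix applied with a spatial transformer (\texttt{affine\_grid}/\texttt{grid\_sample}), which is an illustrative simplification of the translation/shear/scale/rotation parametrization and of the $\eta$ used in the experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def spatial_transform(x, t):
    # Differentiable 6-parameter affine transformation of a batch of images.
    n = x.size(0)
    identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]], device=x.device)
    theta = identity.unsqueeze(0).expand(n, -1, -1) + t.view(n, 2, 3)
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def adversarial_transformation_step(model, optimizer, x, y,
                                    eta=0.1, steps=10, lr=0.02):
    t = torch.zeros(x.size(0), 6, device=x.device, requires_grad=True)
    for _ in range(steps):                                # inner maximization over t
        loss = F.cross_entropy(model(spatial_transform(x, t)), y)
        grad, = torch.autograd.grad(loss, t)
        with torch.no_grad():
            t += lr * grad.sign()
            t.clamp_(-eta, eta)                           # ||t||_inf <= eta
    optimizer.zero_grad()                                 # outer minimization over w
    F.cross_entropy(model(spatial_transform(x, t.detach())), y).backward()
    optimizer.step()
\end{verbatim}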
We demonstrate the effectiveness of on-manifold adversarial training in \figref{fig:main-hypo3} (top). On \Fonts, with access to the true manifold, on-manifold adversarial training is able to boost generalization significantly, especially for low $N$, \ie, few training images. Our \VAEGAN approximation on \Fonts seems to be good enough to preserve the benefit of on-manifold adversarial training. On \MNIST and \Fashion, the benefit reduces with the difficulty of approximating the manifold; this is the ``cost'' of imperfect approximation. While the benefit is still significant on \MNIST, it diminishes on \Fashion. However, both on \MNIST and \Fashion, identifying invariances and utilizing adversarial transformation training recovers the boost in generalization, especially in contrast to the random data augmentation baseline. Overall, on-manifold adversarial training is a promising tool for improving generalization and we expect its benefit to increase with better generative models.
\vskip 0px
\subsection{Regular Robustness is Independent of\\[2px]\hspace*{-5px}Generalization}
\red{We argue that generalization, as measured \emph{on} the manifold \wrt the data distribution, is mostly independent of robustness against regular, possibly off-manifold, adversarial examples when varying the amount of training data}. Specifically, in \figref{fig:main-hypo4} (left) for \Fonts, it can be observed that -- except for adversarial training -- the success rate is invariant to the test error. \red{This can best be seen when plotting the success rate against test error for different numbers of training examples, \cf \figref{fig:main-hypo4} (middle left): only for adversarial training does a clear relationship exist; for the remaining training schemes success rate is barely influenced by the test error. In particular, better generalization does not worsen robustness.} Similar behavior can be observed on \MNIST and \Fashion, see \figref{fig:main-hypo4} (right). Here, it can also be seen that different training strategies exhibit different characteristics \wrt robustness and generalization. \red{Overall, regular robustness and generalization are not necessarily contradicting goals.}
As mentioned in \secref{sec:introduction}, these findings are in contrast to related work \cite{TsiprasARXIV2018,SuARXV2018} claiming that an inherent trade-off between robustness and generalization exists. For example, Tsipras \etal \cite{TsiprasARXIV2018} use a synthetic toy dataset to theoretically show that no model can be both robust and accurate (on this dataset). However, they allow the adversary to produce perturbations that change the actual, true label \wrt the data distribution, \ie, the considered adversarial examples are not adversarial examples according to \defref{def:main-on-manifold-adversarial-example}. Thus, it is unclear whether the suggested trade-off actually exists \red{for real datasets}; our experiments, \red{at least, as well as further analysis in the supplementary material,} seem to indicate the contrary. Similarly, Su \etal \cite{SuARXV2018} experimentally show a trade-off between adversarial robustness and generalization by studying different models on ImageNet \cite{RussakovskyIJCV2015}. However, Su \etal compare the robustness and generalization characteristics of different models (\ie, different architectures, training strategies \etc), while we found that the generalization performance does not influence robustness for any \emph{arbitrary, but fixed} model.
\vskip 0px
\subsection{Discussion}
\label{subsec:main-discussion}
Our results imply that robustness and generalization are not \red{necessarily} conflicting goals, as believed in related work \cite{TsiprasARXIV2018,SuARXV2018}. This means that, in practice, for any arbitrary but fixed model, better generalization will not worsen regular robustness. Different models (architectures, training strategies \etc) might, however, exhibit different robustness and generalization characteristics, as also shown in \cite{SuARXV2018,RozsaICMLA2016}. For adversarial training, on regular adversarial examples, the commonly observed trade-off between robustness and generalization is explained by the tendency of adversarial examples to leave the manifold. As a result, the network has to learn (seemingly) random, but adversarial, noise patterns \emph{in addition} to the actual task at hand, rendering the learning problem harder. On simple datasets, such as \MNIST, these adversarial directions might avoid overfitting; on harder tasks, \eg, \Fonts or \Fashion, the discrepancy in test error between normal and adversarial training increases. \red{Our results also support the hypothesis that regular adversarial training has higher sample complexity \cite{SchmidtARXIV2018,KhouryARXIV2018}. In fact, on \Fonts, adversarial training can reach the same accuracy as normal training with roughly twice the amount of training data, as demonstrated in \figref{fig:main-disc-2} (top). Furthermore, as illustrated in \figref{fig:main-disc-2} (bottom), the trade-off between regular robustness and generalization can be controlled by combining regular and on-manifold adversarial training, \ie, boosting generalization while reducing robustness.}
\begin{figure}
\centering
\vskip -0.3cm
\hskip -0.25cm
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_fonts_error_resnet.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_fonts_error_off_resnet.pdf}
\end{subfigure}
\\
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_fonts_error_off_accuracy.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_emnist_error_off_accuracy.pdf}
\end{subfigure}
\\[-2px]
\fcolorbox{black!50}{white}{
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1.025\textwidth]{experiments_disc_2_legend.pdf}
\end{subfigure}
}
\vskip -6px
\caption{\red{Adversarial training on regular adversarial examples, potentially leaving the manifold, renders the learning problem more difficult. Top: With roughly $1.5$ to $2$ times the training data, adversarial training can still reach the same accuracy as normal training; results for ResNet-13~\cite{HeCVPR2016}. Bottom: Additionally, the trade-off can be controlled by combining regular and on-manifold adversarial training; results averaged over $3$ models.}}
\label{fig:main-disc-2}
\vskip 0px
\end{figure}
The presented results can also be confirmed on more complex datasets, such as \Celeb, and using different threat models, \ie, attacks. On \Celeb, where \VAEGANs have difficulties approximating the manifold, \figref{fig:main-disc-1} (top left) shows that on-manifold robustness still improves with generalization although most on-manifold adversarial examples are not very realistic, see \figref{fig:main-examples}. Similarly, regular robustness, see \figref{fig:main-disc-1} (top right), is not influenced by generalization; here, we also show that the average distance of the perturbation, \ie, average $\|\delta\|_{\infty}$, when used to assess robustness, leads to the same conclusions. Similarly, as shown in \figref{fig:main-disc-1} (bottom), our findings are confirmed using Carlini and Wagner's attack \cite{CarliniSP2017} with $L_2$-norm -- to show that the results generalize across norms. However, overall, we observed lower success rates using \cite{CarliniSP2017} and the $L_2$ norm. \red{Finally, our results can also be reproduced using transfer attacks (\ie, black-box attacks, which are generally assumed to be subsumed by white-box attacks \cite{AthalyeARXIV2018}) as well as different architectures such as multi-layer perceptrons, ResNets~\cite{HeCVPR2016} and VGG~\cite{SimonyanARXIV2014}, as detailed in the supplementary material.}
\begin{figure}
\centering
\vskip -0.3cm
\hskip -0.25cm
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_celeba_on_learned.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_celeba_off_distance.pdf}
\end{subfigure}
\\
\hskip -0.25cm
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_emnist_error_on_learned_cw_alt.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\vskip 0px
\centering
\includegraphics[width=1\textwidth]{experiments_disc_emnist_error_off_cw_alt.pdf}
\end{subfigure}
\\[-2px]
\fcolorbox{black!50}{white}{
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1.01\textwidth]{experiments_disc_1_legend.pdf}
\end{subfigure}
}
\vskip -6px
\caption{Results on \Celeb and using the $L_2$ Carlini and Wagner \cite{CarliniSP2017} attack. On \Celeb, as the class manifolds are significantly harder to approximate, the benefit of on-manifold adversarial training diminishes. For \cite{CarliniSP2017}, we used $120$ iterations; our hypotheses are confirmed, although \cite{CarliniSP2017} does not use the training loss as attack objective and the $L_2$ norm changes the similarity-constraint for regular and on-manifold adversarial examples.}
\label{fig:main-disc-1}
\vskip 0px
\end{figure} | {
"alphanum_fraction": 0.7636119749,
"avg_line_length": 105.8129032258,
"ext": "tex",
"hexsha": "1a51722b3148d9c147851dbab4565f887e8fd5ff",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-03-29T13:19:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-14T22:47:46.000Z",
"max_forks_repo_head_hexsha": "5bbb149f58614253e06632538570f13645c86f71",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "davidstutz/cvpr2019-adversarial-robustness",
"max_forks_repo_path": "paper/sec_main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5bbb149f58614253e06632538570f13645c86f71",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "davidstutz/cvpr2019-adversarial-robustness",
"max_issues_repo_path": "paper/sec_main.tex",
"max_line_length": 1654,
"max_stars_count": 14,
"max_stars_repo_head_hexsha": "5bbb149f58614253e06632538570f13645c86f71",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "davidstutz/cvpr2019-adversarial-robustness",
"max_stars_repo_path": "paper/sec_main.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-06T11:44:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-22T11:41:23.000Z",
"num_tokens": 8782,
"size": 32802
} |
\section{21. Go away and come back}
This chapter goes into mixing the use of other programs with Vim, either by executing a program from inside Vim or by leaving Vim and coming back later.
Furthermore, it covers the ways to remember the state of Vim and restore it later.
%\localtableofcontentswithrelativedepth{+1} % I couldn't get this to work TMC
\etocsetnexttocdepth{2} % This produces the same result TMC
\etocsettocstyle{\subsection*{Contents}}{}
\localtableofcontents
\subsection{Suspend and resume}
Like most Unix programs Vim can be suspended by pressing CTRL-Z.
This stops Vim and takes you back to the shell it was started in.
You can then do any other commands until you are bored with them.
Then bring back Vim with the "\texttt{fg}" command.
CTRL-Z\\
\{any sequence of shell commands\}\\
fg
You are right back where you left Vim, nothing has changed.
In case pressing CTRL-Z doesn't work, you can also use "\texttt{:suspend}".
Don't forget to bring Vim back to the foreground; otherwise you would lose any changes that you made!
Only Unix has support for this.
On other systems Vim will start a shell for you.
This also has the functionality of being able to execute shell commands.
But it's a new shell, not the one that you started Vim from.
When you are running the GUI you can't go back to the shell where Vim was started.
CTRL-Z will minimize the Vim window instead.
\subsection{Executing shell commands}
To execute a single shell command from Vim use "\texttt{:!\{command\}}".
For example, to see a directory listing:
\begin{Verbatim}[samepage=true]
:!ls
:!dir
\end{Verbatim}
The first one is for Unix, the second one for MS-Windows.
Vim will execute the program.
When it ends you will get a prompt to hit <Enter>.
This allows you to have a look at the output from the command before returning to the text you were editing.
The "\texttt{!}" is also used in other places where a program is run.
Let's take a look at an overview:
\begin{Verbatim}[samepage=true]
:!{program} execute {program}
:r !{program} execute {program} and read its output
:w !{program} execute {program} and send text to its input
:[range]!{program} filter text through {program}
\end{Verbatim}
Notice that the presence of a range before "\texttt{!\{program\}}" makes a big difference.
Without it, Vim executes the program normally; with a range, a number of text lines is filtered through the program.
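For example, to filter the first ten lines of the file through the external "\texttt{sort}" program (assuming such a program is available, as on Unix), replacing them with its output:
\begin{Verbatim}[samepage=true]
 :1,10!sort
\end{Verbatim}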
Executing a whole row of programs this way is possible.
But a shell is much better at it.
You can start a new shell this way:
\begin{Verbatim}[samepage=true]
:shell
\end{Verbatim}
This is similar to using CTRL-Z to suspend Vim.
The difference is that a new shell is started.
When using the GUI the shell will be using the Vim window for its input and output.
Since Vim is not a terminal emulator, this will not work perfectly.
If you have trouble, try toggling the \texttt{'guipty'} option.
If this still doesn't work well enough, start a new terminal to run the shell in.
For example with:
\begin{Verbatim}[samepage=true]
:!xterm&
\end{Verbatim}
\subsection{Remembering information; viminfo}
After editing for a while you will have text in registers, marks in various files, a command line history filled with carefully crafted commands.
When you exit Vim all of this is lost.
But you can get it back!
The viminfo file is designed to store status information:
\begin{itemize}
\item Command-line and Search pattern history
\item Text in registers
\item Marks for various files
\item The buffer list
\item Global variables
\end{itemize}
Each time you exit Vim it will store this information in a file, the viminfo file.
When Vim starts again, the viminfo file is read and the information restored.
The \texttt{'viminfo'} option is set by default to restore a limited number of items.
You might want to set it to remember more information.
This is done through the following command:
\begin{Verbatim}[samepage=true]
:set viminfo=string
\end{Verbatim}
The string specifies what to save.
The syntax of this string is an option character followed by an argument.
The option/argument pairs are separated by commas.
Take a look at how you can build up your own viminfo string.
First, the \texttt{'} option is used to specify for how many files you save marks (a-z).
Pick a nice even number for this option (1000, for instance).
Your command now looks like this:
\begin{Verbatim}[samepage=true]
:set viminfo='1000
\end{Verbatim}
The f option controls whether global marks (A-Z and 0-9) are stored.
If this option is 0, none are stored.
If it is 1 or you do not specify an f option, the marks are stored.
You want this feature, so now you have this:
\begin{Verbatim}[samepage=true]
:set viminfo='1000,f1
\end{Verbatim}
The \texttt{<} option controls how many lines are saved for each of the registers.
By default, all the lines are saved.
If 0, nothing is saved.
To avoid adding thousands of lines to your viminfo file (which might never get used and makes starting Vim slower) you use a maximum of 500 lines:
\begin{Verbatim}[samepage=true]
:set viminfo='1000,f1,<500
\end{Verbatim}
Other options you might want to use:
\begin{center} \begin{longtable}{c l}
\texttt{@} & number of lines to save from the input line history \\
\texttt{:} & number of lines to save from the command line history \\
\texttt{/} & number of lines to save from the search history \\
\texttt{r} & removable media, for which no marks will be stored (can be used several times) \\
\texttt{!} & global variables that start with an uppercase letter and don't contain lowercase letters \\
\texttt{h} & disable \texttt{'hlsearch'} highlighting when starting \\
\texttt{\%} & the buffer list (only restored when starting Vim without file arguments) \\
\texttt{c} & convert the text using \texttt{'encoding'} \\
\texttt{n} & name used for the viminfo file (must be the last option) \\
\end{longtable} \end{center}
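Putting several of these together, a more complete setting could look like this (the file name after "\texttt{n}" is just an example):
\begin{Verbatim}[samepage=true]
 :set viminfo='1000,f1,<500,:100,/100,h,n~/.myviminfo
\end{Verbatim}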
See the \texttt{'viminfo'} option and |\texttt{:h viminfo-file}| for more information.
When you run Vim multiple times, the last one exiting will store its information.
This may cause information stored by previously exited Vims to be lost.
Each item can be remembered only once.
\subsubsection{Getting back to where you stopped vim}
You are halfway through editing a file and it's time to leave for the holidays.
You exit Vim and go enjoy yourself, forgetting all about your work.
After a couple of weeks you start Vim, and type:
\begin{Verbatim}[samepage=true]
'0
\end{Verbatim}
And you are right back where you left Vim.
So you can get on with your work.
Vim creates a mark each time you exit Vim.
The last one is \texttt{'0}.
The position that \texttt{'0} pointed to is made \texttt{'1}.
And \texttt{'1} is made to \texttt{'2}, and so forth.
Mark \texttt{'9} is lost.
The |\texttt{:h :marks}| command is useful to find out where \texttt{'0} to \texttt{'9} will take you.
\subsubsection{Getting back to some file}
If you want to go back to a file that you edited recently, but not when exiting Vim, there is a slightly more complicated way.
You can see a list of files by typing the command:
\begin{Verbatim}[samepage=true]
:oldfiles
1: ~/.viminfo
2: ~/text/resume.txt
3: /tmp/draft
\end{Verbatim}
Now you would like to edit the second file, which is in the list preceded by "\texttt{2:}".
You type:
\begin{Verbatim}[samepage=true]
:e #<2
\end{Verbatim}
Instead of "\texttt{:e}" you can use any command that has a file name argument, the "\texttt{\#<2}" item works in the same place as "\texttt{\%}" (current file name) and "\texttt{\#}" (alternate file name).
So you can also split the window to edit the third file:
\begin{Verbatim}[samepage=true]
:split #<3
\end{Verbatim}
That \texttt{\#<123} thing is a bit complicated when you just want to edit a file.
Fortunately there is a simpler way:
\begin{Verbatim}[samepage=true]
:browse oldfiles
1: ~/.viminfo
2: ~/text/resume.txt
3: /tmp/draft
-- More --
\end{Verbatim}
You get the same list of files as with |\texttt{:h :oldfiles}|.
If you want to edit "\texttt{resume.txt}", first press "\texttt{q}" to stop the listing.
You will get a prompt:
\begin{Verbatim}[samepage=true]
Type number and <Enter> (empty cancels):
\end{Verbatim}
Type "\texttt{2}" and press <Enter> to edit the second file.
More info at |\texttt{:h :oldfiles}|, |\texttt{:h v:oldfiles}| and |\texttt{:h c\_\#<}|.
\subsubsection{Move info from one vim to another}
You can use the "\texttt{:wviminfo}" and "\texttt{:rviminfo}" commands to save and restore the information while still running Vim.
This is useful for exchanging register contents between two instances of Vim, for example.
In the first Vim do:
\begin{Verbatim}[samepage=true]
:wviminfo! ~/tmp/viminfo
\end{Verbatim}
And in the second Vim do:
\begin{Verbatim}[samepage=true]
:rviminfo! ~/tmp/viminfo
\end{Verbatim}
Obviously, the "w" stands for "write" and the "r" for "read".
The \texttt{!} character is used by "\texttt{:wviminfo}" to forcefully overwrite an existing file.
When it is omitted, and the file exists, the information is merged into the file.
The \texttt{!} character used for "\texttt{:rviminfo}" means that all the information is used, this may overwrite existing information.
Without the \texttt{!} only information that wasn't set is used.
These commands can also be used to store info and use it again later.
You could make a directory full of viminfo files, each containing info for a different purpose.
\subsection{Sessions}
Suppose you are editing along, and it is the end of the day.
You want to quit work and pick up where you left off the next day.
You can do this by saving your editing session and restoring it the next day.
A Vim session contains all the information about what you are editing.
This includes things such as the file list, window layout, global variables, options and other information.
(Exactly what is remembered is controlled by the \texttt{'sessionoptions'} option, described below.)
The following command creates a session file:
\begin{Verbatim}[samepage=true]
:mksession vimbook.vim
\end{Verbatim}
Later if you want to restore this session, you can use this command:
\begin{Verbatim}[samepage=true]
:source vimbook.vim
\end{Verbatim}
If you want to start Vim and restore a specific session, you can use the following command:
\begin{Verbatim}[samepage=true]
vim -S vimbook.vim
\end{Verbatim}
This tells Vim to read a specific file on startup.
The \texttt{'S'} stands for session (actually, you can source any Vim script with \texttt{-S}, thus it might as well stand for "source").
The windows that were open are restored, with the same position and size as before.
Mappings and option values are like before.
What exactly is restored depends on the \texttt{'sessionoptions'} option.
The default value is `blank,buffers,curdir,folds,help,options,winsize'.
\begin{center} \begin{tabular}{l l}
blank & keep empty windows \\
buffers & all buffers, not only the ones in a window \\
curdir & the current directory \\
folds & folds, also manually created ones \\
help & the help window \\
options & all options and mappings \\
winsize & window sizes \\
\end{tabular} \end{center}
Change this to your liking.
To also restore the size of the Vim window, for example, use:
\begin{Verbatim}[samepage=true]
:set sessionoptions+=resize
\end{Verbatim}
\subsubsection{Session here, session there}
The obvious way to use sessions is when working on different projects.
Suppose you store your session files in the directory "\texttt{\textasciitilde/.vim}".
You are currently working on the "secret" project and have to switch to the "boring" project:
\begin{Verbatim}[samepage=true]
:wall
:mksession! ~/.vim/secret.vim
:source ~/.vim/boring.vim
\end{Verbatim}
This first uses "\texttt{:wall}" to write all modified files.
Then the current session is saved, using "\texttt{:mksession!}".
This overwrites the previous session.
The next time you load the secret session you can continue where you were at this point.
And finally you load the new "boring" session.
If you open help windows, split and close various windows, and generally mess up the window layout, you can go back to the last saved session:
\begin{Verbatim}[samepage=true]
:source ~/.vim/boring.vim
\end{Verbatim}
Thus you have complete control over whether you want to continue next time where you are now, by saving the current setup in a session, or keep the session file as a starting point.
Another way of using sessions is to create a window layout that you like to use, and save this in a session.
Then you can go back to this layout whenever you want.
For example, this is a nice layout to use:
\begin{Verbatim}[samepage=true]
+----------------------------------------+
| VIM - main help file |
| |
|Move around: Use the cursor keys, or "h|
|help.txt================================|
|explorer | |
|dir |~ |
|dir |~ |
|file |~ |
|file |~ |
|file |~ |
|file |~ |
|~/=========|[No File]===================|
| |
+----------------------------------------+
\end{Verbatim}
This has a help window at the top, so that you can read this text.
The narrow vertical window on the left contains a file explorer.
This is a Vim plugin that lists the contents of a directory.
You can select files to edit there.
More about this in the next chapter.
Create this from a just started Vim with:
\begin{Verbatim}[samepage=true]
:help
CTRL-W w
:vertical split ~/
\end{Verbatim}
You can resize the windows a bit to your liking.
Then save the session with:
\begin{Verbatim}[samepage=true]
:mksession ~/.vim/mine.vim
\end{Verbatim}
Now you can start Vim with this layout:
\begin{Verbatim}[samepage=true]
vim -S ~/.vim/mine.vim
\end{Verbatim}
Hint: To open a file you see listed in the explorer window in the empty window, move the cursor to the filename and press "\texttt{O}".
Double clicking with the mouse will also do this.
\subsubsection{Unix and ms-windows}
Some people have to do work on MS-Windows systems one day and on Unix another day.
If you are one of them, consider adding "\texttt{slash}" and "\texttt{unix}" to \texttt{'sessionoptions'}.
The session files will then be written in a format that can be used on both systems.
This is the command to put in your vimrc file:
\begin{Verbatim}[samepage=true]
:set sessionoptions+=unix,slash
\end{Verbatim}
Vim will use the Unix format then, because the MS-Windows Vim can read and write Unix files, but Unix Vim can't read MS-Windows format session files.
Similarly, MS-Windows Vim understands file names with / to separate names, but Unix Vim doesn't understand \textbackslash.
\subsubsection{Sessions and viminfo}
Sessions store many things, but not the position of marks, contents of registers and the command line history.
You need to use the viminfo feature for these things.
In most situations you will want to use sessions separately from viminfo.
This can be used to switch to another session, but keep the command line history.
And yank text into registers in one session, and paste it back in another session.
You might prefer to keep the info with the session.
You will have to do this yourself then.
Example:
\begin{Verbatim}[samepage=true]
:mksession! ~/.vim/secret.vim
:wviminfo! ~/.vim/secret.viminfo
\end{Verbatim}
And to restore this again:
\begin{Verbatim}[samepage=true]
:source ~/.vim/secret.vim
:rviminfo! ~/.vim/secret.viminfo
\end{Verbatim}
\subsection{Views}
A session stores the looks of the whole of Vim.
When you want to store the properties for one window only, use a view.
The use of a view is for when you want to edit a file in a specific way.
For example, you have line numbers enabled with the \texttt{'number'} option and defined a few folds.
Just like with sessions, you can remember this view on the file and restore it later.
Actually, when you store a session, it stores the view of each window.
There are two basic ways to use views.
The first is to let Vim pick a name for the view file.
You can restore the view when you later edit the same file.
To store the view for the current window:
\begin{Verbatim}[samepage=true]
:mkview
\end{Verbatim}
Vim will decide where to store the view.
When you later edit the same file you get the view back with this command:
\begin{Verbatim}[samepage=true]
:loadview
\end{Verbatim}
That's easy, isn't it?
Now suppose you want to view the file without the \texttt{'number'} option on, or with all folds open; you can set the options to make the window look that way.
Then store this view with:
\begin{Verbatim}[samepage=true]
:mkview 1
\end{Verbatim}
Obviously, you can get this back with:
\begin{Verbatim}[samepage=true]
:loadview 1
\end{Verbatim}
Now you can switch between the two views on the file by using "\texttt{:loadview}" with and without the "\texttt{1}" argument.
You can store up to ten views for the same file this way, one unnumbered and nine numbered 1 to 9.
\subsubsection{A view with a name}
The second basic way to use views is by storing the view in a file with a name you chose.
This view can be loaded while editing another file.
Vim will then switch to editing the file specified in the view.
Thus you can use this to quickly switch to editing another file, with all its options set as you saved them.
For example, to save the view of the current file:
\begin{Verbatim}[samepage=true]
:mkview ~/.vim/main.vim
\end{Verbatim}
You can restore it with:
\begin{Verbatim}[samepage=true]
:source ~/.vim/main.vim
\end{Verbatim}
\subsection{Modelines}
When editing a specific file, you might set options specifically for that file.
Typing these commands each time is boring.
Using a session or view for editing a file doesn't work when sharing the file between several people.
The solution for this situation is adding a modeline to the file.
This is a line of text that tells Vim the values of options, to be used in this file only.
A typical example is a C program where you make indents by a multiple of 4 spaces.
This requires setting the \texttt{'shiftwidth'} option to 4.
This modeline will do that:
\begin{Verbatim}[samepage=true]
/* vim:set shiftwidth=4: */
\end{Verbatim}
Put this line as one of the first or last five lines in the file.
When editing the file, you will notice that \texttt{'shiftwidth'} will have been set to four.
When editing another file, it's set back to the default value of eight.
For some files the modeline fits well in the header, thus it can be put at the top of the file.
For text files and other files where the modeline gets in the way of the normal contents, put it at the end of the file.
The \texttt{'modelines'} option specifies how many lines at the start and end of the file are inspected for containing a modeline.
To inspect ten lines:
\begin{Verbatim}[samepage=true]
:set modelines=10
\end{Verbatim}
The \texttt{'modeline'} option can be used to switch this off.
Do this when you are working as root on Unix or Administrator on MS-Windows, or when you don't trust the files you are editing:
\begin{Verbatim}[samepage=true]
:set nomodeline
\end{Verbatim}
Use this format for the modeline:
\begin{Verbatim}[samepage=true]
any-text vim:set {option}={value} ... : any-text
\end{Verbatim}
The "\texttt{any-text}" indicates that you can put any text before and after the part that Vim will use.
This allows making it look like a comment, like what was done above with /* and */.
The "\texttt{ vim:}" part is what makes Vim recognize this line.
There must be white space before "\texttt{vim}", or "\texttt{vim}" must be at the start of the line.
Thus using something like "\texttt{gvim:}" will not work.
The part between the colons is a "\texttt{:set}" command.
It works the same way as typing the "\texttt{:set}" command, except that you need to insert a backslash before a colon (otherwise it would be seen as the end of the modeline).
Another example:
\begin{Verbatim}[samepage=true]
// vim:set textwidth=72 dir=c\:\tmp: use c:\tmp here
\end{Verbatim}
There is an extra backslash before the first colon, so that it's included in the "\texttt{:set}" command.
The text after the second colon is ignored, thus a remark can be placed there.
For more details see |\texttt{:h modeline}|.
\clearpage
| {
"alphanum_fraction": 0.7301479834,
"avg_line_length": 38.6504672897,
"ext": "tex",
"hexsha": "0ba4e1fc8af30b17a5850eb4310f592e06f9b2e8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cfcf5be0bf33362cd326a283c8ae65a9caa59adf",
"max_forks_repo_licenses": [
"OML"
],
"max_forks_repo_name": "tristanchase/LaTeX-Vim-User-Manual",
"max_forks_repo_path": "src/usr_21.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cfcf5be0bf33362cd326a283c8ae65a9caa59adf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"OML"
],
"max_issues_repo_name": "tristanchase/LaTeX-Vim-User-Manual",
"max_issues_repo_path": "src/usr_21.tex",
"max_line_length": 206,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cfcf5be0bf33362cd326a283c8ae65a9caa59adf",
"max_stars_repo_licenses": [
"OML"
],
"max_stars_repo_name": "tristanchase/LaTeX-Vim-User-Manual",
"max_stars_repo_path": "src/usr_21.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5200,
"size": 20678
} |
\documentclass[../main.tex]{subfiles}
\begin{document}
\section{Torsion \& The Frenet-Serret Formulas}
We have previously defined some properties of curves, and some associated vectors, namely the unit tangent vector \(\vec{T}\), the curvature \(\kappa\), the principal normal vector \(\vec{N}\), and the binormal vector \(\vec{B}\). We have also defined the osculating planes and circles associated with curves, which function as approximations to curves different from the typical linear ones. Another property which we may wish to consider is torsion, which encodes the rate of change of the binormal vector.
\begin{definition}{Torsion}{}
Consider
\[
\frac{d\vec{B}}{ds}
\]
which is the rate of change of the binormal per unit length. This vector is perpendicular to both \(\vec{B}\) and \(\vec{T}\). The fact that it is perpendicular to \(\vec{B}\) follows from the fact that the derivative of a vector of constant magnitude (\(\vec{B}\) having constant unit magnitude) is always perpendicular to that vector. For \(\vec{T}\), the relevant fact is that
\[
\frac{d\vec{B}}{ds}=\frac{d\vec{T}}{ds}\times\vec{N}+\vec{T}\times\frac{d\vec{N}}{ds}
\]
but
\[
\frac{d\vec{T}}{ds}=\kappa\vec{N}
\]
so the first term is zero, and thus
\[
\frac{d\vec{B}}{ds}=\vec{T}\times\frac{d\vec{N}}{ds}
\]
and therefore \(\frac{d\vec{B}}{ds}\) must be perpendicular to \(\vec{T}\), and we have
\[
\frac{d\vec{B}}{ds}=-\tau\vec{N}
\]
where \(\tau\) is defined to be the \emph{torsion} of the curve.
\end{definition}
\begin{example}{}{}
Consider the helix parameterized by \(\vec{r}=a\cos{t}\i+a\sin{t}\j+bt\k\). Find the binormal vector and the torsion.
\tcblower
We already know
\[
v(t)=\sqrt{a^2+b^2}=\frac{ds}{dt}
\]
\[
\vec{T}=\frac{1}{\sqrt{a^2+b^2}}(-a\sin{t}\i+a\cos{t}\j+b\k)
\]
\[
\vec{N}=-\cos{t}\i-\sin{t}\j
\]
\[
\kappa=\frac{a}{a^2+b^2}
\]
so to find \(\vec{B}\), we must simply compute \(\vec{T}\times\vec{N}\), which yields
\[
\vec{B}=\frac{1}{\sqrt{a^2+b^2}}(b\sin{t}\i-b\cos{t}\j+a\k)
\]
The torsion is then obtained by differentiating \(\vec{B}\) with respect to \(s\). This may be done using the chain rule, noting that
\[
\frac{d\vec{B}}{dt}=\frac{1}{\sqrt{a^2+b^2}}(b\cos{t}\i+b\sin{t}\j)
\]
and using the previously calculated value for \(\frac{ds}{dt}\) we obtain
\[
\frac{d\vec{B}}{ds}=-\frac{b}{a^2+b^2}(-\cos{t}\i-\sin{t}\j)=-\frac{b}{a^2+b^2}\vec{N}
\]
and therefore we have
\[
\tau=\frac{b}{a^2+b^2}
\]
\end{example}
It may be noted that the helix has constant curvature and torsion. As such, the helix is among the simplest 3D curves, having curvature and torsion which are constant, but non-zero.
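As a quick consistency check (not part of the original derivation), the same values follow from the standard formulas
\[
\kappa=\frac{|\vec{r}\,'\times\vec{r}\,''|}{|\vec{r}\,'|^3}\qquad\text{and}\qquad\tau=\frac{(\vec{r}\,'\times\vec{r}\,'')\cdot\vec{r}\,'''}{|\vec{r}\,'\times\vec{r}\,''|^2}
\]
which are quoted here without proof, writing \(\vec{r}\,'\) for \(\frac{d\vec{r}}{dt}\). For the helix,
\[
\vec{r}\,'\times\vec{r}\,''=ab\sin{t}\i-ab\cos{t}\j+a^2\k
\]
so \(|\vec{r}\,'\times\vec{r}\,''|^2=a^2(a^2+b^2)\), \(|\vec{r}\,'|^3=(a^2+b^2)^{3/2}\), and \((\vec{r}\,'\times\vec{r}\,'')\cdot\vec{r}\,'''=a^2b\), which again gives \(\kappa=\frac{a}{a^2+b^2}\) and \(\tau=\frac{b}{a^2+b^2}\).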
\begin{definition}{Frenet-Serret Formulas}{}
The \emph{Frenet-Serret formulas} define the derivatives of \(\vec{T}\), \(\vec{B}\), and \(\vec{N}\). We have previously found
\begin{align*}
\frac{d\vec{T}}{ds}&=\kappa\vec{N}\\
\frac{d\vec{B}}{ds}&=-\tau\vec{N}
\end{align*}
but we require a formula for the derivative of \(\vec{N}\). We know that \(\vec{B}=\vec{T}\times\vec{N}\), which implies \(\vec{N}=\vec{B}\times\vec{T}\), and therefore
\begin{align*}
\frac{d\vec{N}}{ds}&=\frac{d}{ds}\vec{B}\times\vec{T}\\
&=\frac{d\vec{B}}{ds}\times\vec{T}+\vec{B}\times\frac{d\vec{T}}{ds}\\
&=-\tau\vec{N}\times\vec{T}+\vec{B}\times\kappa\vec{N}\\
&=\tau(\vec{T}\times\vec{N})-\kappa(\vec{N}\times\vec{B})\\
&=\tau\vec{B}-\kappa\vec{T}
\end{align*}
This completes the Frenet-Serret formulas, which may also be written in a matrix form as
\[
\frac{d}{ds}
\begin{bmatrix}
\vec{T}\\
\vec{N}\\
\vec{B}
\end{bmatrix}=
\begin{bmatrix}
0&\kappa&0\\
-\kappa&0&\tau\\
0&-\tau&0
\end{bmatrix}
\begin{bmatrix}
\vec{T}\\
\vec{N}\\
\vec{B}
\end{bmatrix}
\]
\end{definition}
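As a brief sanity check (again not part of the original derivation), the third formula can be verified directly on the helix from the earlier example, where \(\kappa=\frac{a}{a^2+b^2}\) and \(\tau=\frac{b}{a^2+b^2}\). Differentiating \(\vec{N}=-\cos{t}\i-\sin{t}\j\) gives
\[
\frac{d\vec{N}}{ds}=\frac{1}{\sqrt{a^2+b^2}}(\sin{t}\i-\cos{t}\j)
\]
while
\[
\tau\vec{B}-\kappa\vec{T}=\frac{(b^2+a^2)\sin{t}\i-(b^2+a^2)\cos{t}\j+(ab-ab)\k}{(a^2+b^2)^{3/2}}=\frac{1}{\sqrt{a^2+b^2}}(\sin{t}\i-\cos{t}\j)
\]
so the two sides agree, as the Frenet-Serret formulas require.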
\end{document}
| {
"alphanum_fraction": 0.4019297954,
"avg_line_length": 60.7171717172,
"ext": "tex",
"hexsha": "1cfc56209707c0242bdd4323a190e381d9f6f9ff",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ec5356ff816c2f51828e4f8b64b5ae0a8b0c8cd3",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "CrashAndSideburns/MATH227-Notes",
"max_forks_repo_path": "src/lec_6.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ec5356ff816c2f51828e4f8b64b5ae0a8b0c8cd3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "CrashAndSideburns/MATH227-Notes",
"max_issues_repo_path": "src/lec_6.tex",
"max_line_length": 524,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "ec5356ff816c2f51828e4f8b64b5ae0a8b0c8cd3",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "CrashAndSideburns/MATH227-Notes",
"max_stars_repo_path": "src/lec_6.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-25T04:16:23.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-01-25T04:16:23.000Z",
"num_tokens": 1476,
"size": 6011
} |
\chapter{TRANSPOSITION SYSTEMS}
\section{MONOPHASE TRANSPOSITION SYSTEMS}
\subsection{Transposition Systems Employing Geometric Designs}
In part one brief mention was made of the use of geometric designs
and figures other than rectangles in producing transposition ciphers. It
was stated that triangles, trapezoids, and polygons of various symmetrical
shapes can be employed. Figures of these types form connecting links
between the methods that use simple rectangular designs and the more
complicated methods that use figures in which transposition takes place
along diagonals.
\subsection{Trapezoidal Designs}
a. A trapezoid or, more accurately, a truncated triangle, of prearranged dimensions as regards the number of cells (which in this case are rhombs) into which it is to be partitioned, is constructed. There will be left on one side of the design a series of small triangles which are not to be used for inscribing letters, and are therefore crossed off in the design, as shown in figure 24. Only two agreements are necessary in order to fix the dimensions of the design: a keyword or keyphrase to determine the number of cells at the base of the design, and an understanding as to the height of the design expressed in number of cells. The successive horizontal rows of cells will decrease by one in number from bottom to top of the design. In figure 24, the keyphrase NO CANDY FOR ISSUE is used as a basis for deriving a numerical key of 15 elements, and it is assumed that by prearrangement it was agreed that the height of the design should be eight cells. Therefore, the bottom row has 15 cells, the next one upwards, 14, the next, 13, and so on, to the last, with 8 cells. The inscription may follow any route agreed upon; in the example, it follows the normal manner of writing. The transcription follows the numerical key order, yielding this cryptogram:
ODAIK AEDME HPODV ITEIP NHUET BOBRO
HDTFS EISNI ETBEF BCBTM ESHGA RTORD
IRERE AWARR ERTNS IEPVR VASEO FTEDL
NA
[Figure 24. The trapezoidal design: the plain text is inscribed in horizontal rows and the columns are taken off in the order of the numerical key 7-9-2-1-8-3-15-5-10-11-6-12-13-14-4 derived from NO CANDY FOR ISSUE.]

b. Decryptographing is merely the reverse of cryptographing, there being no difficulties provided that the design has been correctly constructed. For this purpose cross-section paper will be found useful. The analysis of such a cryptogram is somewhat complicated by the presence of columns having varying numbers of letters; it may be further complicated by following complex routes in inscription. It is also possible to follow a numerical key in the inscription of the plain text in horizontal lines; this additional procedure would further complicate and delay solution.
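The derivation of a numerical key from a keyword or keyphrase, as used above, is easily mechanized. The following short Python sketch (not part of the original text) numbers the letters in alphabetical order, resolving ties from left to right:
\begin{verbatim}
def numerical_key(keyphrase):
    # Number the letters in alphabetical order; equal letters are
    # numbered from left to right, as in NO CANDY FOR ISSUE above.
    letters = [c for c in keyphrase.upper() if c.isalpha()]
    order = sorted(range(len(letters)), key=lambda i: (letters[i], i))
    key = [0] * len(letters)
    for rank, i in enumerate(order, start=1):
        key[i] = rank
    return key

print(numerical_key("NO CANDY FOR ISSUE"))
# [7, 9, 2, 1, 8, 3, 15, 5, 10, 11, 6, 12, 13, 14, 4]
\end{verbatim}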
\subsection{Triangular Designs}
a. The simplest way of drawing up a triangle for cryptographing is to take cross-section paper, draw a square the side of which is equal to the length agreed upon as expressed in the number of cells, and then draw a diagonal cutting the large square into two equal triangles. This is shown in figure 25, where the length agreed upon is nine, that is, nine cells per side.

[Figure 25. A square of nine cells per side, cut by a diagonal into two equal triangles.]

The letters of the plain text are inscribed in accordance with any prearranged route, the one illustrated in figure 26 being a simple method wherein the letters are inscribed in horizontal lines in the normal manner. When so inscribed, the letters in the diagram will form $2n - 1$ columns, where $n$ is the number of cells forming one of the sides of the square from which the triangle has been constructed. The total number of letters that can be inscribed within the triangle is the sum of $n + (n-1) + (n-2) + (n-3) + \cdots + 1$. For a triangle based upon a side of 9 cells, the sum is $9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 45$. The letters may then be transcribed to form the cryptogram by following another route, or by following a derived numerical key applied to the base of the triangle. A simple method of deriving a key of $2n - 1$ elements from a key of $n$ elements or letters is exemplified herewith. Let the key be DIAGONALS, a word of nine letters. Extend this key to $2n - 1$ places by repetition, and then assign numerical values as usual:

$n = 9$; $2n - 1 = 17$

D I A G O N A L S D I A G O N A L
5-9-1-7-15-13-2-11-17-6-10-3-8-16-14-4-12

This numerical key is the one that has been employed in enciphering the message in Figure 26.
5-9-1-7-15-13-2-11-17-6-10-3-8-16-14-4-12
Cryptogram:
RICRC OCSGE DOONI UAOOE
SEYID RTISS DTSNR AUNTN
PERTR
Figure 26.
8‘2
fiver—'M -"-"" ' ' " '
REF ID:A56932
Cryptogram:
UUSOC YNTSO REOYS ONRER
DRITI DTOGD RANEO RICSN
CTRNI GENNE ATGSR OSIIR
SOIET RTUAI POECO TNESS
DPRCD AURSD
Figure 27.
b. By a slight change in procedure it is possible to encipher a message and produce a text which, for the sake of accuracy in special cases, is double the original length, but which is self-checking. Suppose that instead of applying a single numerical key to the base of the triangle, a double-length key is applied to the legs, as shown in figure 27. Here the key is TRIANGLES, extended to double length by simple repetition, as follows:

1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18
Keyword: TRIANGLESTRIANGLES
Numerical key: 17-13-7-1-11-5-9-3-15-18-14-8-2-12-6-10-4-16

This key is applied to the legs of the triangle beginning at the lower left-hand corner. The transcription then follows key-number order, which results in doubling the length of the message, but the repeated letters are scattered throughout the whole message. In decryptographing such a message the clerk merely omits the second occurrence of a letter if it agrees (in identity) with its first appearance in the text.
c. Many variations in inscription and transcription can be employed
in the case of triangles as well as trapezoids. Some of the variations in
the case of triangles are shown in figure 28.
17-13-7-1-11-5-9-3-15-14-8-2-12-6-10-4-16
Inscription: Up left side, down right, alternately.
Transcription: (a) In rows from the base line, left to right and right to left, alternately, upwards:
PISOS RNATU SIERS etc.
(b) In diagonals from right leg, in key-number order:
RIEDR OUAYN etc.
(c) In rows from left leg, in key-number order:
CTGEO YTCEU etc.
(d) From columns in key-number order:
CNROI TUGRU etc.
Figure 28.

\subsection{Diagonal Methods}

a. A method involving diagonal transposition which is reported to have been employed by the French Army in World War I is now to be described. A numerical key is derived from a fairly long word or phrase, and a rectangle is constructed, as in figure 29. The text is inscribed in this rectangle in normal fashion, nulls being employed, if necessary, to complete the last line of the rectangle.
Message: ENEMY BATTERY LOCATED AT WOODS 1,000 YARDS
SOUTHEAST OF MUMMASBURG HEAVY ARTILLERY
STOP THEY ARE FIRING AT RATE OF THREE ROUNDS
PER MINUTE FOR THE BATTERY X WILLS, MAJ.
Keyphrase: MIDNIGHT RIDE OF PAUL REVERE.
Enciphering diagram:
M I D N I G H T R I D E O F P A U L R E V E R E
15-11-2-16-12-9-10-22-19-13-3-4-17-8-18-1-23-14-20-5-24-6-21-7
[Enciphering rectangle: the message inscribed in normal fashion in rows of 24 letters under the numerical key; the letters lying on the six selected diagonals are boxed.]
Cryptogram:
ADARR SESAR NUANX YAAPH HAURA UWYFW
RHEDO TETFS HETBE RTOIL TGIMO EITJO
YRURB TMSFT AHUTT NSLAE YEFYO RESTE
AESII EDLRT MNORE OLDYO ECAGR YTUMR
BDSVE LOHTN ATOMO ETEFS TANM
Figure 29.
b. The correspondents agree beforehand upon several diagonals which
run from left to right, and from right to left and which intersect,
thus cutting up the design quite thoroughly. In figure 29 let these selected
diagonals be those indicated by the numbers from 1 to 6, inclusive,
the odd ones indicating diagonals running from left to right. In the
transcription, the letters along the indicated diagonals are first set down
in groups of five, proceeding in key—number order. Correspondents must
also agree beforehand as to whether a letter which lies at the intersection
of two diagonals will be taken both times it is encountered or taken
only once and, if so, whether on its first or second appearance. After
all these letters have been written down, one then proceeds with the
remaining letters in the usual columnar manner, omitting the letters
which have already been taken. The cryptographing process will become
clear upon the study of the example in figure 29.
89. Interrupted Keyword Transposition
a. This method of transposition is a development of a more simple
method wherein the transposition follows a numerical key. The latter
must first be described. A keyword or keyphrase of fair length is selected
and a numerical key derived from it. Let this key be the phrase UNI-
FORMITY OF METHOD.
Keyphrase:     U  N  I  F  O  R  M  I  T  Y  O  F  M  E  T  H  O  D
Numerical key: 17-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1
The plain text is then written out in horizontal lines corresponding
to the length of the key; then transposition is effected within each row,
according to the sequence of numbers applicable, as shown in figure 30.
Message: ADMINISTRATIVE ORDERS MUST BE COMPLETED AND
READY TO ACCOMPANY FIELD ORDERS NOT LATER
THAN 5:00 RM. THIS DATE.
Enciphering diagram :
17-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1
A D M I N I S T R A T I V E O R D E
R S M U S T B E C O M P L E T E D A
N D R E A D Y T O A C C O M P A N Y
F I E L D O R D E R S N O T L A T E
R T H A N F I V E P M T H I S D A T
E
Cryptogram:
EEIIR MTSVD NTDIR OAAAE UPEME BLSSM
DTCTR OYMEC ARTYO DACND OPNAE TLNAE
DROID STOEL FRTIA TDHVI HTNMA FESRP
E
Figure 30.
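The within-row transposition just illustrated can be sketched in a few lines of Python; this is an editorial illustration and the helper names are mine. The key is derived by the usual alphabetical ranking, and within each row the letters are taken in key-number order. Run on the message of figure 30, it reproduces the cryptogram shown there.

def numerical_key(keyphrase):
    letters = [c for c in keyphrase.upper() if c.isalpha()]
    order = sorted(range(len(letters)), key=lambda i: (letters[i], i))
    key = [0] * len(letters)
    for rank, position in enumerate(order, start=1):
        key[position] = rank
    return key

def encipher_by_rows(plaintext, key):
    text = [c for c in plaintext.upper() if c.isalpha()]
    width = len(key)
    rows = [text[i:i + width] for i in range(0, len(text), width)]
    out = []
    for row in rows:
        # within each row, take the letters in key-number order
        for rank in range(1, width + 1):
            position = key.index(rank)
            if position < len(row):
                out.append(row[position])
    return "".join(out)

key = numerical_key("UNIFORMITY OF METHOD")   # 17-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1
message = ("ADMINISTRATIVE ORDERS MUST BE COMPLETED AND READY TO "
           "ACCOMPANY FIELD ORDERS NOT LATER THAN FIVE PM THIS DATE")
print(encipher_by_rows(message, key))   # begins EEIIR MTSVD NTDIR OAAAE ... as in figure 30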
b. In the foregoing case the encipherment takes place only by trans-
position within rows, but it is possible to complicate the method by
transposing, in addition, the rows as a whole, employing the same key
or only a portion of it, as much as is required. Thus, if the message
contained 18 rows of 18 letters each, then the transposition of rows
could be effected according to key—number order, the last row being
taken first (since the number 1 of the numerical key happens in this
case to be at the end of the numerical key), the 14th row being taken
second (since the number 2 of the numerical key is the 14th number),
and so on. Where the message does not contain as many complete rows
as there are numbers in the key, the transposition takes place in key-
number order nevertheless, the rows being taken in the numerical order
of the numbers present. Using the same key and message as in the
foregoing case, the encipherment would be as shown in figure 31.
Enciphering diagram:
17-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1
17: ADMINISTRATIVEORDE
10: RSMUSTBECOMPLE TE DA
6: NDREADYTOACCOM PA NY
3: FIELDORDERSNOT LA TE
11: RTHANFIVEPMTHI SD AT
14: E
Cryptogram:
ETLNA EDROI DSTOE LFRYM ECART YODAC
NDOPN AAEUP EMEBL SSMDT CTROT IATDH
VIHTN MAFES RPEEE IIRMT SVDNT DIROA
A
Figure 31
c. From the preceding method it is but a step to the method of
interrupted key transposition now to be described. Instead of writing
the text in regular-length groups corresponding to the length of the
key, it is written out in irregular groups the lengths of which vary
according to some prearranged plan. For example, note the basis of
the variable grouping in figure 32, which uses the same message and
key as in a above.
d. This method may be combined with that shown in b above, thus
further complicating the system. In decryptographing such a message it
is best to use cross—section paper, block out the cells to be occupied by
letters in the deciphering diagram, and indicate the key numbers appli-
cable to each line. This will facilitate the process materially and help
eliminate errors.
Enciphering diagram:
17-10-6-3-11-14-8-7-15-18-12-4-9-2-16-5-13-1
A D M I N I S T R A T I V E O R D E
R S M U S T B E C O M P L E T E D A
N D R E A D Y T O A C C O M P A N Y
F I E L D O R D E R S N O T L A T E
R T H A N F I V E P M T H I S D A T
E
[Second diagram of figure 32: the same text reinscribed in rows of irregular, prearranged lengths under the key; the final row is completed with nulls.]
(*The four final letters LCEP are nulls, to complete the row.)
Cryptogram (columnar transposition in key-number sequence):
EEEDI UAEAT IIIPC OERRM MDRPO AFHTE
TIHTS BYFTP AVLRP DSEDM NLNTN SANEV
STMCD CDITD YREDR COEEO EARTN OSTAM
AOALL
Figure 32.
e. Another method of interrupted transposition is that which employs
a rather long sequence of digits to control the interruption. In order
to avoid the necessity of carrying around such a written sequence, it
is possible to agree upon a number whose reciprocal when converted
by actual division into its equivalent decimal number will give a long
series of digits. For example, the reciprocal of 7, or 1/7, yields a
repeating sequence of six digits: 142857142857 . . .; the reciprocal
of 49, 1/49, yields a repeating sequence of 42 digits, etc. Zeros, when they
appear, are omitted from the sequence. Suppose the number 19 is agreed
upon, the reciprocal of which yields the sequence (0)52631578947368421.
On cross-section paper mark off sets of cells corresponding in number
to the successive digits. Thus:
5 2 6 3 1 5
|.....|..|......|...|.|.....|
Let the message be ATTACK HAS BEEN POSTPONED.
Encipherment :
5 2 6 3 1 5
| A H E S O | T A | T S N T N D | A B P | C | K E O P E |
Cryptogram:
AHESO TATSN TNDAB PCKEO PE
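The reciprocal-digit interruption can be sketched in Python; this is an editorial illustration and the function names are mine. As the example reads, the digits of 1/19 (zeros omitted) mark off sets of cells, the plain-text letters are dealt one per set in rotation, and the cryptogram is the contents of the sets read straight through; the sketch reproduces the cryptogram above.

def key_digits(denominator, count):
    # Successive decimal digits of 1/denominator, zeros omitted (cf. 1/19 above).
    digits, remainder = [], 1
    while len(digits) < count:
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        if digit:
            digits.append(digit)
    return digits

def encipher(plaintext, denominator=19):
    text = [c for c in plaintext.upper() if c.isalpha()]
    # mark off just enough sets of cells to hold the whole message
    sizes, total = [], 0
    for d in key_digits(denominator, len(text)):
        sizes.append(d)
        total += d
        if total >= len(text):
            break
    text += ["X"] * (total - len(text))   # nulls fill any leftover cells (cf. par. g)
    sets = [[] for _ in sizes]
    letters = iter(text)
    for _ in range(max(sizes)):           # deal one letter into each unfilled set per round
        for cells, size in zip(sets, sizes):
            if len(cells) < size:
                cells.append(next(letters))
    return "".join("".join(cells) for cells in sets)

print(encipher("ATTACK HAS BEEN POSTPONED"))   # AHESOTATSNTNDABPCKEOPE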
f. To decryptograph such a message, the cryptogram is written down
in a series of cross-section cells, which are then blocked off in sets
according to the numerical key:
5 2 6 3 1 5
| A H E S O | T A | T S N T N D | A B P | C | K E O P E |
Taking the letters in consecutive order out of the successive sets, and
crossing them off the series at the same time as they are being written
down to construct the plain text, the message is found to begin with
the following two words:
5 2 6 3 1 5
| A H E S O | T A | T S N T N D | A B P | C | K E O P E |
ATTACK HAS . . .
g. Preparatory to cryptographing, it is necessary to find the length
of the message to be enciphered and then to mark off as many cells as
will be required for encipherment. Nulls are used to fill in cells that
are not occupied after enciphering the whole message. The secrecy of
the method depends, of course, upon the reciprocal selected, but there
is no reason why any fraction that will yield a long series of digits
cannot be employed. If the selection of key numbers were restricted
to reciprocals, the secrecy would be more limited in scope than is actually
necessitated by the method itself.
90. Permutation Method
a. An old method, known in literature as the aerial telegraphy method,1
forms the basis of this system. A set of permutations of 3, 4, . . .
9 digits is agreed upon and these permutations are listed in a definite
series. As an example, let these permutations be made of the digits 1
to 5, selecting only four of the possible 120. Suppose those selected
are the following, set down in successive lines of the diagram in
figure 33a:
Permutation
2 3 1 5 4 2 3 1 5 4
3 2 5 1 4 3 2 5 1 4
1 5 3 2 4 1 5 3 2 4
4 3 1 5 2 4 3 1 5 2
Figure 33a.
1 So named because it was first devised and employed in messages transmitted by a system of
semaphore signaling in practical usage in Europe before the electrical telegraph was invented.
The letters of the plain text, taken in sets of fives, are distributed
within the sections of the diagram in accordance with the permuta-
tions indicated above the sections and also at the left. Thus, the first
five letters of the text, supposing them to be the initial letters of. the
word RECOMMENDATIONS, are inserted in the following positions:
Permutation
2 3 1 5 4
E C R M O
The next five letters are inscribed in the second line of the diagram
in the sections indicated by the permutation above and at the- left of
the line. Thus:
Permutation
            2 3 1 5 4
2 3 1 5 4   E C R M O
3 2 5 1 4   N E A M D
This process is continued for each line and for as many lines as there
are permutations indicated at the left. In the foregoing case, after
twenty letters have been inserted, one inserts a second set of five
letters again on the first line, placing the letters of this second set
immediately to the right of those of the first set, respectively in key-
number order. The succeeding lines are treated in similar fashion
until the whole message has been enciphered. The following example
will illustrate the process:
Message: RECOMMENDATIONS FOR LOCATION OF NEW
BALLOON POSITIONS MUST BE SUBMITTED
BEFORE 12TH AIRDROME COMPANY CHANGES
COMMAND POST TOMORROW.
Enciphering diagram:
Permutation
    2 3 1 5 4    2 3 1 5 4
    EASEOM CTIDMA RCOTRM MOIECD OITBEN
    3 2 5 1 4    3 2 5 1 4
    NOSRPS ESNOMO ANUTNT MNOFOP DFMEAT
    1 5 3 2 4    1 5 3 2 4
    TESWYO SLSTNR OBBLHO IWTECM NAEFAR
    4 3 1 5 2    4 3 1 5 2
    LNIRCB* ROMESC* FLUHGO OPTDOD OOBAEW
*The letters B, G, and D are nulls, to complete the figure.
Figure 33b.
The letters of the cipher text are taken from the diagram according
to any prearranged route, the most simple being to transcribe the lines
of letters in groups of fives, thus:
EASEO MCTID MARCO TRMNO IECDO ITBEN
NOSRP SESNO MOANU TNTMN OFOPD FMEAT
TESWY OSLST NROBB LHOIW TECMN AEFAR
LNIRC BROME SCFLU HGOOP TDODO OBAEW
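The placement rule used above can be stated in one line: section i of a line receives the letter whose number is the i-th digit of that line's permutation. The following tiny Python sketch is an editorial illustration (the helper name is mine), checked against the first two sets of the example.

def place(letters, permutation):
    # Section i of the line receives the letter numbered permutation[i].
    return [letters[p - 1] for p in permutation]

print(place("RECOM", [2, 3, 1, 5, 4]))   # ['E', 'C', 'R', 'M', 'O']
print(place("MENDA", [3, 2, 5, 1, 4]))   # ['N', 'E', 'A', 'M', 'D']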
b. The foregoing method when employed in its most simple form
does not yield cryptograms of even a moderate degree of security;
but if the method of inscription and transcription is varied and made
more complex, the degree of security may be increased quite notice-
ably. It is possible to use longer permutations, based on sets of 6,
7, 8, or 9 digits, but in every case the successive permutations must
be prearranged as regards both their exact composition and their order
or arrangement in the diagram.
91. Transposition Method Using Special Figures
a. The method now to be described is useful only in special cases
where the correspondence is restricted to brief communications between
a very limited number of persons. It is necessary to agree in advance on
certain particulars, as will be seen. Let the message to be enciphered
be the following:
FOUR TRANSPORTS WILL BE COMPLETED BY END
OF APRIL AND SIX MORE BY END OF JULY.
Note the following figures and encipherment:
[Figure 34: the message inscribed along a chain of the agreed figures, several figures to the line; the diagram itself is not legible in this copy.]
Cryptogram:
ORPSL OFUTA SOTWL BCMRN RIEPE BDPAI
LTDYN OARLN SXEEF IDMRE FYOEY NOJLB
DU
Figure 34.
b. It will be noted that it is essential to agree in advance not only
upon the nature of the figure but also upon the number of figures per
line.
c. The next series is a modification of the preceding. The same
message will be employed, with a double-cross figure, five figures per
line.
[Figure 35: the same message inscribed on a line of five double-cross figures; the diagram and its cryptogram are not legible in this copy.]
Figure 35.
d. Still another series may be formed, as follows:
[The figures of this third series are not legible in this copy.]
Cryptogram:
FSLLN NOIPP LEEID AUWOM BYTRO RRSRO
EBEPF TTCDA LOOMA DRFXN NEJID EBYUS
YL
Figure 36.
e. A figure of different form than the preceding forms the basis of
the next type.
OOEDRTOYRWPNNLE
FPBEURCBTSMEAILDSLT
FNROPSBJIXEL
OAODADEFRIYULMNY
Cryptogram:
OOEDR TOYRW PNNLE FPBEU RCBTS
MEAIL DSLTF NROPS BJIXE LOAOD
ADEFR IYULM NY
Figure 37.
f. From the foregoing examples, it is obvious that many other figures
may be used for effective transpositions of this kind, such as stars
of varying numbers of points, polygons of various symmetrical shapes,
etc. It is merely necessary to agree upon the figures, the number of
figures per line, the starting points of the inscription and transcription
processes.
g. The method lends itself readily to combination with simple
monoalphabetic substitution, yielding cryptograms of a rather high
degree of security.
Section II. POLYPHASE TRANSPOSITION SYSTEMS
92. Polyphase Transposition Methods in General
a. In paragraph 33, brief mention was made of transposition systems
in which two or more processes of rearrangement are involved. It was
stated that only a very limited number of such transposition methods
are practicable for military use, but that the degree of security afforded
by them is considerably greater than that afforded by certain much
more complicated substitution methods. The methods referred to are
those which involve two or more successive transpositions, and merely
for purposes of brevity in reference they will here be called polyphase
transposition methods to distinguish them from the single monophase
methods thus far described.
b. It is obvious that a polyphase transposition method may involve
2, 3, . . . successive transpositions of the letters of the plain text.
To describe these methods in general terms, one may indicate that
the letters resulting from a first transposition, designated as the T—l
transposition, form the basis of a second, or T—2 transposition. If the
process is continued, there may be T—3, T—4 . . . transpositions, and
each may involve the use of a geometric figure or design. For con—
venience, the design involved in accomplishing the T—1 transposition
may be designated as the D—1 design; that involved in accomplishing
the T—2 transposition as the D—2 design, etc. However, it may as well
be stated at this point, that so far as military cryptography is concerned,
methods which involve more than D—2 and T—2 elements are entirely
impractical and often those which involve no more than D—2 and T—2
elements are also impracticable for such use.
93. True and False Polyphase Transpositions
a. It is possible to perform two or more transpositions with the
letters of a text and yet the final cryptogram will be no more difficult
to solve than if only a single transposition had been effected. The equiva-
lent of this in the case of substitution ciphers is to encipher a mono-
alphabetic cryptogram by means of a second single alphabet; the final
result is still a monoalphabetic substitution cipher. Likewise, if a mes-
sage had been enciphered by a simple form of route transposition
and a second and similar or approximately similar form of simple
route transposition is again applied to the text of the first transposition,
the final text is still that of a monophase transposition cipher. Again,
two transpositions may be accomplished without really affecting a more
thorough scrambling of the letters composing the original text. Examples
will serve to clarify the differences between false and true polyphase
transposition.
b. Note the following simple columnar transposition cipher pre-
pared according to the method described in paragraph 27 :
Message: DELIVER ALL AMMUNITION TO 4TH DIVISION
DUMP.
Keyword: SCHEDULE
Derived numerical key: 7-1-5-3-2-8-6-4
Enciphering rectangle:
7-1-5-3-2-8-6-4
D E L I V E R A
L L A M M U N I
T I O N T O F O
U R T H D I V I
S I O N D U M P
Cryptogram (T-1):
ELIRI VMTDD IMNHN AIOIP LAOTO
RNFVM DLTUS EUOIU
Figure 38.
In producing the foregoing cryptogram only the columns were trans—
posed. Suppose that by prearrangement, using the keyword BREAK
(derived numerical key = 2—5—3—1—4), the horizontal lines of the fore—
going enciphering rectangle were also to be transposed. For example,
let the horizontal lines of the rectangle D—l be transposed immediately
before taking the letters out of the columns of the design (in key-number
order) to form the cipher text. Thus:
[D-1 rectangle of figure 38 with the row key 2-5-3-1-4, derived from BREAK, written at the left of its horizontal lines.]
Cryptogram (T—2):
REIIL DVTDM HINNM IAOPI TLOOA VRFMN
UDTSL IEOUU
Figure 39.
c. The foregoing, however, is not a case of true polyphase or so-
called double transposition. The same final result may be accomplished
in a way which will at first glance appear quite different but is in
reality one that accomplishes the same two operations by combining
them in one operation. Let the message be inscribed as before, but this
time with both numerical keys applied to the top and side of the
rectangle. Then let another rectangle of the same dimensions, but with
numbers in straight sequence instead of key-number sequence, be set
alongside it. Thus:
   7-1-5-3-2-8-6-4        1-2-3-4-5-6-7-8
2  D E L I V E R A     1
5  L L A M M U N I     2
3  T I O N T O F O     3
1  U R T H D I V I     4
4  S I O N D U M P     5
       D-1                   D-2
Figure 40.
Each letter D—l is now transferred to that cell in D—2 which is indicated
by the row and column indicators of the letter in D—l. For example,
the first letter, D, of D—l, has the indicators 2—7 and it is placed in
the 2-7 cell in D-2; the second letter of D-1, which is E, is placed
in the 2-1 cell of D-2, and so on. The final result is as follows:
   7-1-5-3-2-8-6-4        1-2-3-4-5-6-7-8
2  D E L I V E R A     1  R D H I T V U I
5  L L A M M U N I     2  E V I A L R D E
3  T I O N T O F O     3  I T N O O F T O
1  U R T H D I V I     4  I D N P O M S U
4  S I O N D U M P     5  L M M I A N L U
       D-1                   D-2
Figure 41.
It will be seen that if the columns of D—2 are now read downwards
in straight order from left to right the final cryptogram is identical
with that obtained in figure 39: REIIL DVTDM, etc.
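The cell-transfer rule just described is easy to verify mechanically. The Python sketch below is an editorial illustration (the helper names are mine): each D-1 letter is moved to the D-2 cell named by its row- and column-key numbers, and D-2 is then read by columns. It reproduces the cryptogram of figure 39.

def derive_key(word):
    order = sorted(range(len(word)), key=lambda i: (word[i], i))
    key = [0] * len(word)
    for rank, position in enumerate(order, start=1):
        key[position] = rank
    return key

col_key = derive_key("SCHEDULE")   # 7-1-5-3-2-8-6-4
row_key = derive_key("BREAK")      # 2-5-3-1-4
rows = ["DELIVERA", "LLAMMUNI", "TIONTOFO", "URTHDIVI", "SIONDUMP"]

# transfer each D-1 letter to the D-2 cell named by its row and column indicators
d2 = [[""] * len(col_key) for _ in row_key]
for r, row in enumerate(rows):
    for c, letter in enumerate(row):
        d2[row_key[r] - 1][col_key[c] - 1] = letter

# read D-2 down the columns, from left to right
cipher = "".join(d2[r][c] for c in range(len(col_key)) for r in range(len(row_key)))
print(cipher)   # REIILDVTDMHINNMIAOPITLOOAVRFMNUDTSLIEOUU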
d. The foregoing cipher, often called the Nihilist Cipher, is referred
to in some of the older literature as a double transposition cipher
because it involves a transposition of both columns and rows; and
indeed as described in b above it seems to involve a double process.
It is, however, not an example of true double transposition. When
the mechanism of this cipher is compared with that now to be
described, the great difference in the cryptographic security of the two
methods will become apparent.
94. True Double Transposition
In the form of the false double transposition described above, it is
only entire columns and entire rows that are transposed. The disarrange-
ment of the letters is after all not very thorough. In true double trans-
position this is no longer the case, for here the letters of columns and
rows become so thoroughly rearranged that the final text presents a
complete scrambling almost as though the letters of the message had
been tossed into a hat and then drawn out at random.
Section III. TRUE DOUBLE TRANSPOSITION
95. True Double Transposition of the Columnar Type
a. It is by what is apparently a simple modification of certain of the
columnar methods already described that an exceedingly good true
double transposition can be effected. Let a numerical key be derived
from a keyword in the usual manner and let the message be written
out under this key to form a rectangle in the usual manner for colum-
nar transposition. The length of the message itself determines the
exact dimensions of the rectangle thus formed, and whether or not it
is completely or incompletely filled.
b. In its most effective form the double transposition is based upon
an incompletely filled rectangle; that is, one in which one or more cells
in the last line remain unfilled. An example of the method now follows:
Let the keyword be INTERNATIONAL; the message to be enciphered,
as follows:
OUR ATTACK SLOWING UP IN FRONT OF HILL 1000 YARDS
SOUTHEAST OF GOLDENVILLE STOP REQUEST PROMPT
REENFORCEMENT.
Keyword: INTERNATIONAL
Derived numerical key: 4-7-12-3-11-8-1-13-5-10-9-2-6
4-7-12-3-11-8-1-13-5-10-9-2-6
O U R A T T A C K S L O W
I N G U P I N F R O N T O
F H I L L O N E T H O U S
A N D Y A R D S S O U T H
E A S T O F G O L D E N V
I L L E S T O P R E Q U E
S T P R O M P T R E E N F
O R C E M E N T
            D-1

4-7-12-3-11-8-1-13-5-10-9-2-6
A N N D G O P N O T U T N
U N
            D-2
Figure 42a.
The first, or D-1, rectangle is inscribed in the usual manner of simple
numerical key columnar transposition. It is shown as D-1 in the accom-
panying figure. The letters of the T-1 transposition are then inscribed
in the second, or D-2, rectangle in the normal manner of writing, that
is, from left to right and from the top downwards. This is shown in
D-2 of figure 42a for the first two columns of D-1 (in numerical key
order) after transfer of their letters into D-2. The letters of the
remaining columns of D-1 are transferred in the same manner into
D-2, yielding the following rectangle:
4-7-12-3-11-8-1-13-5-10-9-2-6
A N N D G O P N O T U T N
U N A U L Y T E R E O I F
A E I S O K R T S L R R W
O S H V E F U N H N A L T
R T I O R F T M E L N O U
E Q E S O H O D E E T P L
A O S O M R G I D S L P C
C F E S O P T T
Figure 42b.
For the T-2 text the letters are transcribed from the D-2 rectangle,
reading down the columns in key-number order, and grouping the letters
in fives. The cryptogram is as follows:
PTRUT OGTTI RLOPP DUSVO SOSAU AOREA
CORSH EEDNF WTULC NNEST QOFOY KFFHR
PUORA NTLTE LNLES GLOER OMONA IHIES
ENETN MDIT
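The whole of the foregoing reduces to applying one columnar transposition twice. The compact Python sketch below is an editorial illustration (the function names are mine); reading the incomplete columns in key-number order twice over reproduces the T-2 cryptogram just given.

def derive_key(word):
    order = sorted(range(len(word)), key=lambda i: (word[i], i))
    key = [0] * len(word)
    for rank, position in enumerate(order, start=1):
        key[position] = rank
    return key

def columnar(text, key):
    # Inscribe the text in rows under the key (the last row may be short),
    # then read the columns in key-number order.
    columns = {k: [] for k in key}
    for i, letter in enumerate(text):
        columns[key[i % len(key)]].append(letter)
    return "".join("".join(columns[k]) for k in sorted(columns))

key = derive_key("INTERNATIONAL")   # 4-7-12-3-11-8-1-13-5-10-9-2-6
plain = ("OURATTACKSLOWINGUPINFRONTOFHILLONETHOUSANDYARDS"
         "SOUTHEASTOFGOLDENVILLESTOPREQUESTPROMPTREENFORCEMENT")
print(columnar(columnar(plain, key), key))
# PTRUTOGTTIRLOPPDUSVOSOSAUAOREA ... MDIT (the T-2 cryptogram above)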
c. In paragraph 29 a variation of the simple columnar key method
of transposition was described. If the process therein indicated is
repeated, double transposition is effected. The following example will
serve to illustrate the method, using the same message and key as were
used in paragraph 29:
Message: REQUEST IMMEDIATE REENFORCEMENTS
Keyword: P R 0 D U C T
Derived numerical key: 4-5-3-2-7-1-6
Encipherment:
      4-5-3-2-7-1-6 4-5-3-2-7-1-6 4-5-3-2-7-1-6 4-5-3-2-7-1-6 4-5
Text: R E Q U E S T I M M E D I A T E R E E N F O R C E M E N T S
T-1:  S I N E U E E E Q M R C R I T O T E M E R S T A F N E D E M
T-2:  E R E E E R E F N M T A S E T S E I Q O T M E I R D U C M N
Cryptogram:
EREEE REFNM TASET SEIQO TMEIR
DUCMN
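Writing the text in a single long line under the repeated key and collecting the letters standing under key-number 1, then 2, and so on, is the same columnar operation in another dress. The short self-contained Python sketch below is an editorial illustration (the helper name is mine); it reproduces the T-1 and T-2 lines of the diagram above.

def take_by_key(text, key):
    # Collect the letters standing under key-number 1, then 2, and so on.
    return "".join(text[i] for k in range(1, len(key) + 1)
                   for i in range(len(text)) if key[i % len(key)] == k)

key = [4, 5, 3, 2, 7, 1, 6]   # derived from PRODUCT
t1 = take_by_key("REQUESTIMMEDIATEREENFORCEMENTS", key)
t2 = take_by_key(t1, key)
print(t1)   # SINEUEEEQMRCRITOTEMERSTAFNEDEM
print(t2)   # EREEEREFNMTASETSEIQOTMEIRDUCMN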
d. In some respects this modified method is simpler for the novice to
perform correctly than is that employing rectangles. Experience has
shown that many inexpert cryptographic clerks fail to perform the two
transpositions correctly when D—1 and D—2 rectangles are employed
in the work.
96. General Remarks on True Polyphase Transposition
a. The cryptographic security of the true double transposition method
deserves discussion. Careful study of a cryptogram enciphered by the
double transposition method set forth in paragraph 95 b and c will
indicate that an extremely thorough scrambling of the letters is indeed
brought about by the method. Basically, its principle is the splitting up
of the adjacent or successive letters constituting the plain text by two
sets of “cuts”, the second of which is in a direction that is perpendicular
to the first, with the individual “cuts” of both sets arranged in a
variable and irregular order. It is well adapted for a regular and
voluminous exchange of cryptograms between correspondents, because
even if many messages in the same key are intercepted, so long as no
two messages are identical in length, they can only be cryptanalyzed
after considerable effort.
b. Triple and quadruple transpositions of the same nature are possible
but not practical for serious usage. Theoretically, a continuation or
repetition of the transposition process will ultimately bring about a condi-
tion wherein the D-n rectangle is identical with the D-1 rectangle; in
other words, after a certain number of transpositions the rectangle pro-
duced by a repetition of the cryptographing process results finally in
decryptographing the message. Exactly how many repetitive transposi-
tions intervene in such cases is extremely variable and depends upon
factors lying outside the scope of this text.
c. In the example of cryptographing given in paragraph 95b, the
D-1 and D-2 rectangles are identical in dimensions, and identical
numerical keys are applied to effect the T—l and T—2 transpositions.
It is obvious, however, that it is not necessary to maintain these identi—
ties; D—1 and D—2 rectangles of different dimensions may readily be
employed, and even if it is agreed to have the dimensions identical, the
numerical keys for the two transpositions may be different. Furthermore,
it is possible to add other variable elements. (1) The direction or manner
of inscribing the letters in the D—1 rectangle may be varied; (2) the
direction of reading off or taking the letters out of the D—1 rectangle
in effecting the T—l transposition, that is, in transferring them into the
D—2 rectangle, may be varied; (3) the direction of inscribing these
letters in the D—2 rectangle may be varied; ( 4) the direction of reading
off or taking the letters out of the D—2 rectangle in effecting the T—Z
transposition may be varied.
d. The solution of cryptograms enciphered upon the double transposi-
tion principle is often made possible by the presence of certain plain—text
combinations, such as QU and CH (in German). For this reason, care-
ful cryptographers substitute a single letter for such combinations, as
decided upon by preagreement. For example, in one case the letter Q
was invariably used as a substitute for the compound CH, with good
effect.
Section IV. GRILLES AND OTHER TYPES OF MATRICES
97. Type of Cryptographic Grilles
Broadly speaking, cryptographic grilles2 are sheets of paper, card-
board, or thin metal in which perforations have been made for the
uncovering of spaces in which letters (or groups of letters, syllables,
entire words) may be written on another sheet of paper upon which the
grille is superimposed. This latter sheet, usually made also of cross-
section paper, will hereafter be designated for purposes of brevity in
reference as the grille grid, or grid. Its external dimensions are the
same as those of the grille. Grilles are of several types depending upon
their construction and manner of employment. They will be treated here
under the titles of (1) simple grilles, (2) revolving grilles, (3) non-
perforated grilles, and (4) “post card” grilles.
98. Simple Grilles
a. These consist usually of a square in which holes or apertures have
been cut in prearranged positions. When the grille is superimposed upon
2 Also often called “stencils.” The general term matrix (plural, matrices) is very useful in
referring to a geometric figure or diagram used for transposition purposes. Other terms in
common use are cage, frame, bar, etc.
the grid, these apertures disclose cells on the grid, in which cells letters,
groups of letters, syllables, or entire words may be inscribed. An example
is shown in figure 43. The four sides of the obverse surface of the grille
are designated by the figures 1, 2, 3, 4; the four sides of the reverse
surface, by the figures 5, 6, 7, 8. These figures are employed to indicate
the position of the grille upon the grid in encipherment.
b. (1) In cryptographing a message the grille is placed upon the grid,
in one of the eight possible positions: Obverse surface up, with
figure 1, 2, 3, or 4 at the top left; or reverse surface up, with
[Figure 43: a simple grille shown twice, obverse (sides 1-4) and reverse (sides 5-8), with its apertures indicated.]
Figure 43.
figure 5, 6, 7, or 8 at the top left. The letters of the plain text
are then inscribed in the cells disclosed by the apertures, follow—
ing any prearranged route. In figure 44, the normal manner of
writing, from left to right, and from the top downwards, has
been followed in the inscription, the message being ALL
DESTROYERS OUTSIDE.
[Figure 44: the grille of figure 43 placed on the grid in position 1, with the letters of ALL DESTROYERS OUTSIDE inscribed in the cells disclosed by the apertures.]
Figure 44.
(2) The transcription process now follows. The cipher text is
written down, the letters being taken by following any pre-
arranged route, which must be perpendicular to the route of
inscription, otherwise the letters will follow in plain-text order.
In the following, the route is by columns from left to right.
Cryptogram:
LRTAD TSSER YOIDS ELOEU
(3) If the number of letters of the plain—text message exceeds the
number of cells disclosed by one placement of the grille, the
letters given by this placement are written down (in crypto-
graphic order), and then the grille is placed in the next position
on a fresh grid; the process is continued in this manner until
the entire message has been cryptographed. The several sections
of the cipher letters resulting from the placements of the grille
on successive grids merely follow each other in the final crypto-
gram. In this manner of employment it is only necessary for
the correspondents to agree upon the initial position of the grille
and its successive positions or placements.
c. It is obvious that by the use of a simple grille the letters of a
message to be cryptographed may be distributed within an enveloping
message consisting mostly of “dummy” text, inserted for purposes of
enabling the message to escape suppression in censorship. For example,
suppose the grille shown in figure 43 is employed in position 1 and the
message to be conveyed is ALL DESTROYERS OUTSIDE. The
letters of this message are inscribed in their proper places on the grid,
exactly as shown in figure 44. An “open” or disguising text is now to
be composed; the latter serving as an envelope or “cover” for the letters
of the secret text, which remain in the positions in which they fall on
the grid. The open or disguising text, in other words, is built around or
superimposed on the secret text. Note how this is done in figure 45, with
an apparently innocent message reading:
I HAVE WORKED VERY WELL ALL DAY, TRYING TO GET
EVERYTHING STRAIGHTENED UP BEFORE GOING ON MY
NEXT TRIP SOUTH, BUT INSIDE TEN DAYS . . .
1 (5)
I H A V E W O R K E
D V E R Y W E L L A
L L D A Y T R Y I N
G T O G E T E V E R
Y T H I N G S T R A
I G H T E N E D U P
B E F O R E G O I N
G O N M Y N E X T T
R I P S O U T H B U
T I N S I D E T E N
Figure 45.
d. The foregoing method naturally requires the transmission of con—
siderably more text than is actually necessary for conveying the message
intended. Where questions of censorship are not involved, the method
is therefore impractical. A modification of the method suggests itself in
the use of a transparent sheet of paper superimposed upon a square or
other figure in which the individual cells are irregularly numbered and
the inscription process follows the sequence of numbers. An example is
shown in figure 46, using the message ROCK CREEK BRIDGE WILL
BE DESTROYED WHEN TAIL HAS CROSSED.
[Figure 46: a design of 48 irregularly numbered cells, with the 48 letters of the message inscribed by following the sequence of numbers.]
Figure 46.
The transcription may now follow any prearranged route. The normal
method of reading would produce the cryptogram beginning WCTEH
OEERI, etc. It is obvious that the correspondents must possess designs
with identically numbered cells.3
99. Revolving Grilles
a. In this type of grille (see fig. 47a) the apertures are also formed
by perforating a sheet of cross-section paper according to prearrange-
ment, but these apertures are so distributed that when the grille is
turned four times successively through angles of 90° and set in four
grille positions on the grid, all the cells on the grid are disclosed in turn.
(The preparation of such grilles is discussed in par. 103.) If letters are
inserted in the cells so disclosed, then after a complete revolution of the
grille every one of the cells of the grid will contain a letter and thus the
grid will be completely filled. For this reason such a grille is also called
a self—filling, or an automatic-completion grille. The secrecy of messages
enciphered by its means is dependent upon the distribution or position of
the apertures, the sequence of grille positions on the grid, that is, whether
in the order 1, 2, 3, 4 clockwise, or 1, 3, 4, 2, etc., and the route followed
in inscribing and transcribing the letters in the cells of the grid. For each
position of the grille, one-fourth the total number of letters of the text
is inscribed; hence it is convenient to refer to “sections” of the text, it
being understood that each section consists of one-fourth the total num-
ber of letters.
b. There are two possible procedures so far as the inscription-trans-
scription sequence is concerned. (1) The letters of the plain text may be
inscribed in the cells of the grid through the apertures disclosed by the
grille and then, when the grid has been completely filled, the grille
removed, and the letters transcribed from the grid according to a pre-
arranged route; or, (2) the letters of the plain text may first be inscribed
in the cells of the grid according to a prearranged route and then the
grille applied to the completely-filled grid to give the sequence of letters
3 The system employed by the French Army in 1886 was of the nature here described.
Cryptogram :
LHICV YROOT WILHN FSOMT
HURTI TCULO ROEDA TMVUI
ESTEL YFRMU RNSFE FASES
ESEAT OIDTL YNOIN AHEAH
EDFOT NHSHH ETAMI YOSRE
Figure 47.
forming the cipher text of the transcription process. The first method
will be described in c below; the second in e below.
c. Taking the simplest manner of inscribing the letters, that is, from
left to right and from the top downwards, the letters of the first section
of the text are inscribed in the cells disclosed by the apertures, the grille
being in the first position. This is shown in b of figure 47. The grille is
then given a quarter turn clockwise, bringing figure 2 to the top left. If the
grille has been correctly prepared, none of the cells disclosed in the
second grille position on the grid will be occupied by a letter. The letters
of the second section are then inscribed, this being shown in c of figure
47. In d and e of figure 47, the results of inscribing the third and fourth
sections, respectively, are shown. The letters of the cryptogram are
then taken out of the completed grid by following any prearranged route
of transcription. The cryptogram below has been transcribed by follow-
ing down the columns in succession from left to right.
d. To decryptograph such a message, the cipher letters are inscribed
columnwise in a grid 10 by 10 (that is, one composed of 100 cells, 10 per
side) and then the grille applied to the square in four consecutive posi-
tions corresponding to those used in cryptographing. The letters dis-
closed by each placement of the grille are written down as they appear,
section after section.
e. The second manner of employing a revolving grille is merely the
reciprocal of the first. The procedure followed in the first method to
decryptograph a message is followed in the second method to crypto—
graph a message; and the procedure followed in the first method to
cryptograph is followed in the second method to decryptograph.
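The first manner of employment is sketched below in Python as an editorial illustration. The 4 by 4 perforation set is my own and is illustrative only (it is not the grille of figure 47), and the function names are mine; the plain text is inscribed through the apertures in four successive positions and the completed grid is transcribed by columns.

def quarter_turn(mask):
    # Rotate an n x n aperture mask 90 degrees clockwise.
    n = len(mask)
    return [[mask[n - 1 - c][r] for c in range(n)] for r in range(n)]

def grille_encipher(plaintext, mask):
    n = len(mask)
    letters = iter(plaintext)
    grid = [[""] * n for _ in range(n)]
    for _ in range(4):                       # the four positions of the grille
        for r in range(n):
            for c in range(n):
                if mask[r][c]:
                    grid[r][c] = next(letters)
        mask = quarter_turn(mask)
    # transcribe down the columns, from left to right
    return "".join(grid[r][c] for c in range(n) for r in range(n))

# illustrative 4 x 4 perforation set: each cell of the grid is disclosed exactly once
mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(grille_encipher("ATTACKPOSTPONEDX", mask))   # ATNDTAEXCPSPKOTO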
100. Grilles of Other Geometric Forms
Grilles are not limited to square—shaped figures. They may be equi—
lateral triangles, pentagons, hexagons, and so on. Any figure which can
be pivoted upon a central point and which when revolved upon this
pivot can be placed in a succession of homologous positions over a grid
corresponding to the grille will serve equally well. A triangle affords
three grille positions, a pentagon, five, and so on.
101. Polyphase Transposition by Grilles
One grille may be employed to inscribe the letters of the message on
the grid, and a second, and different, grille employed to transcribe them
from the grid to form the final text of the cryptogram. This would con-
stitute a real double transposition method of great complexity. Polyphase
transposition by a series of grilles is of course possible.
102. Increasing the Security of Revolving Grilles
a. The total number of letters which a grille will exactly encipher is
termed its capacity. If the number of letters of a message is always equal
to the total capacity of the grille, this information is of great aid in solu-
tion by the enemy. For example, a message of 64 letters indicates a grille
8 by 8 with 16 apertures; one of 144 letters, a grille 12 by 12 with 36
apertures, and so on. There are, however, methods of employing a
grille so that it will serve to encipher messages the lengths of which are
greater or less than the capacity of the grille.
b. When the total number of letters is less than the capacity of the
grille, no modification in method of use is necessary. Encipherment of
such a message comes to a close when the last plain-text letter has been
inscribed. In decryptographing such a message, the recipient must strike
out, on the grid upon which he is to inscribe the cipher text, a number of
cells corresponding to the difference between the number of letters of
the text as received and the total capacity of the grille. The location of
the cells to be thus eliminated must be prearranged, and it is best usually
to strike them off from the final positions of the grid.
[Figure 48: (a) a grille with sixteen apertures; (b) a composite grid of sixty-four cells with twelve cells struck off from the last column and row, the remaining cells numbered 1 to 52 to show the positions taken by the letters of the text.]
Figure 48.
c. When the total number of letters is equal to or greater than the
capacity of the grille, a grid of greater capacity than that of the grille
can be prepared, on which the grille may be positioned several times,
thus forming a large or composite grid composed by the juxtaposition
of the several small grids. If there are a few cells in excess of the actual
number required, these may be struck off from the large grid at pre-
arranged points, for example, from the last column and row, as shown
in b of figure 48. The grille is then placed in its first position in turn on
each of the component grids, then in its second position, and so on. An
example will serve to illustrate. A message of fifty-two letters is to be
enciphered with the grille shown in a of figure 48, the capacity of which
is sixteen letters. The number of letters of the message being greater than
three times sixteen, the composite grid must be composed of four small
grids containing a total of sixty-four cells. Therefore, twelve of these
cells must be eliminated. These are shown in b of figure 48, together with
the number indicating the positions occupied by the letters of the text.
103. Construction of Revolving Grilles
a. There are several ways of preparing revolving grilles, of which the
one described below is the most simple. All methods make use of cross-
section paper.
b. Suppose a revolving grille with a capacity of 100 letters is to be
constructed. The cells of a sheet of cross-section paper 10 by 10 are
numbered consecutively in bands from the outside to the center, in the
manner shown in a of figure 49. It will be noted that in each band, if n
is the number of cells forming one side of the band, the highest number
assigned to the cells in each band is n - 1.
c. It will be noted that in each band there is a quadruplication of
each digit; the figure 1 appears four times, the figure 2 appears four
times, and so on. From each receding band there is to be cut out
(n - 1) cells: from the outermost band, therefore, nine cells are to be
cut out; from the next band, seven; from the next, five; from the next,
three; and from the last, one cell. In determining specifically what cells
are to be cut out in each band, the only rules to be observed are these:
(1) One and only one cell bearing the figure 1 is to be cut out, one
and only one cell bearing the figure 2 is to be cut out, and so on; (2) as
random a selection as possible is to be made among the cells available
for selection for perforation. In b of figure 49 is shown a sample grille
prepared in this way.
d. If the side of the grille is composed of an odd number of cells, the
innermost band will consist of but one cell. In such case this central cell
must not be perforated.
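One way of realizing this construction in code is sketched below; this is an editorial illustration and all names are mine. The cells of each band are numbered so that the four like-numbered cells are rotations of one another; cutting exactly one cell per figure per band therefore yields a self-filling (revolving) grille.

import random

def band_of(r, c, n):
    return min(r, c, n - 1 - r, n - 1 - c)

def band_numbering(n):
    # Number each concentric band clockwise so that every figure 1 .. (side - 1)
    # appears four times in its band, the four like-numbered cells being
    # rotations of one another.
    numbers = [[0] * n for _ in range(n)]
    for b in range(n // 2):
        side = n - 2 * b
        walk = ([(b, b + i) for i in range(side - 1)] +
                [(b + i, n - 1 - b) for i in range(side - 1)] +
                [(n - 1 - b, n - 1 - b - i) for i in range(side - 1)] +
                [(n - 1 - b - i, b) for i in range(side - 1)])
        for i, (r, c) in enumerate(walk):
            numbers[r][c] = i % (side - 1) + 1
    return numbers

def make_revolving_grille(n, rng=random):
    # Cut one randomly chosen cell for each figure of each band; the central
    # cell of an odd-sided grille is never perforated.
    numbers = band_numbering(n)
    mask = [[0] * n for _ in range(n)]
    for b in range(n // 2):
        side = n - 2 * b
        for figure in range(1, side):
            candidates = [(r, c) for r in range(n) for c in range(n)
                          if band_of(r, c, n) == b and numbers[r][c] == figure]
            r, c = rng.choice(candidates)
            mask[r][c] = 1
    return mask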
e. It is obvious that millions of differently perforated grilles may be
constructed. Grilles of fixed external dimensions may be designated by
indicators, as was done by the German Army in 1915 when this system
was employed. For example, the FRITZ grille might indicate a 10 by 10
grille, serving to encipher messages of about 100 letters; the ALBERT
grille might indicate a 12 by 12 grille, serving to encipher messages of
about 144 letters, and so on. Thus, with a set of grilles of various dimen-
sions, all constructed by a central headquarters and distributed to lower
units, systematic use of grilles for messages of varying lengths can be
afforded.
f. A system for designating the positions of the perforated cells of a
grille may be established between correspondents, so that the necessity
for physical transmission of grilles for intercommunication is eliminated.
An example of a possible system is that which is based upon the co—
ordinate method of indicating the perforations. The columns from left
to right and the rows from bottom to top are designated by .the letters
A, B, C, . . . Thus, the grille shown in b of figure 49 would have the
following formula:
ADG; BBEH; CDJ; DEG; EACH; FFI; GE; HBDHJ; IDG;
JABFI.
[Figure 49: (a) a 10 by 10 square with its cells numbered in bands from the outside to the center; (b) a sample revolving grille perforated according to these rules.]
Figure 49.
g. Given the formula, the eight corners of the grille can be labeled in
various ways by prearrangement; but the simplest method is that shown
in connection with b of figure 49. Then the initial position of the grille
can be indicated by the number which appears at the upper left-hand
corner when the grille is placed on the grid, ready for use. Thus, position
1 indicates that the grille is in position with the figure 1 at the upper
left-hand corner; position 3, with the figure 3 at the upper left—hand
corner, etc. .
h. The direction of revolving the grille can be clockwise or counter-
clockwise, so that correspondents must make arrangements beforehand
as to which direction is to be followed.
i. Revolving grilles can be constructed so that they have two operating
faces, an obverse and a reverse face. They may be termed revolving-
reversible grilles. The principles of their construction merely involve a
modification of those described in connection with ordinary revolving
grilles. A revolving—reversible grille will have eight possible placement
indicators; usually positions 1 and 5, 2 and 6, and so forth, correspond
in this obverse—reverse relationship, as shown in figure 43.
j. The principles of construction described above apply also to grilles
of other shapes, such as triangles, pentagons, and so forth.
104. Nonperforated Grilles
a. All the effects of a grille with actual perforations may be obtained
by the modified use of a nonperforated grille. Let the cells that would
normally be cut out in a grille be indicated merely by crosses thereon,
and then on a sheet of cross-section paper let the distribution of'- letters
resulting from each placement of the grille on a grid be indicated by
inserting crosses in the appropriate cells, as shown in figure 50.
[Figure 50a shows the grille with its apertures indicated by crosses; figure 50b shows, for each of the four grille positions, the cells disclosed on the grid.]
Figure 50a.          Figure 50b.
b. Note should be made of the fact that in figure 50b the distribu—
tion of crosses shown in the third row of cells is the reverse of that
shown in the first; the distribution shown in the fourth row is the reverse
of that shown in the second. This rule is applicable to all revolving
grilles and is of importance in solution.
c. If the letters of the text are now inscribed (normal manner of
writing) in the cells not eliminated by crosses, and the letters transcribed
from columns to form the cryptogram, the results are the same as though
a perforated grille had been employed. Thus:
[Figure 50c: the letters of the text inscribed in the cells not eliminated by crosses.]
Cryptogram:
EWCRA EOLDA RDDAT Y
Figure 50c.
d. It is obvious that a numerical key may be applied to effect a
columnar transposition in the foregoing method, giving additional
security.
e. The method is applicable to grilles of other shapes, such as tri-
angles, pentagons, hexagons, octagons, etc.
f. In figure 50c it is noted that there are many cells that might be
occupied by letters but are not. It is obvious that these may be filled with
nulls so that the grid is completely filled with letters. Long messages may
be enciphered by the superposition of several diagrams of the same
dimensions as figure 50c.
105. Rectangular or "Post Card" Grilles
a. The grille shown in figure 51 differs from the ordinary revolving
grille in that (1) the apertures are rectangular in shape, and are greater
in width, thus permitting of inscribing several letters in the cells dis—
closed on the grid by each perforation of the grille; and (2) the grille
itself admits of but two positions with its obverse side up and two with
its reverse side up. In figure 51 the apertures are numbered in succes—
sion from top to bottom in four series, each applying to one position of
the grille; the numbers in parentheses apply to the apertures when the
grille is reversed; the numbers at the corners apply to the four positions
in which the grille may be placed upon the grid.
b. One of the ways in which such a grille may be used is to write the
first letter of the text at the extreme left of the cell disclosed by aperture
1, the second letter, at the extreme left of the cell disclosed by aperture 2,
and so on. The grille is retained in the same position and the 17th letter
is written immediately to the right of the 1st, the 18th immediately to the
right of the 2d, and so on. Depending upon the width of the aperture,
and thus of the cells disclosed on the grid, 2, 3, 4 . . . letters may be
inserted in these cells. When all the cells have been filled, the grille may
then be placed in the second position, then the third, and finally, the
fourth.
[Figure 51: a rectangular ("post card") grille, its apertures numbered from top to bottom in four series, one for each position; the numbers in parentheses apply when the grille is reversed, and the numbers at the corners indicate the four positions.]
Figure 51.
c. Another way in which the grille may be used is to change the
position of the grille after the 16th letter has been inserted, then after
the 32d, 48th, and 64th; the 65th letter is then inserted to the right of
the 1st, the 81st to the right of the 17th, and so on until the grid is com-
pleted.
d. Whole words may, of course, be inserted in the cells disclosed by
the apertures, instead of individual letters, but the security of the latter
method is much lower than that of the former.
e. The text of the grid may be transcribed (to form the cryptogram)
by following any prearranged route.
f. The successive positions of a post card grille may be prearranged.
The order 1, 2, 3, 4 is but one of 24 different sequences in which it may
be superimposed upon the grid.
g. A modification of the principles set forth in paragraph 103, dealing
with the construction of revolving grilles, is applied in the construction
of rectangular or “post card” grilles. Note the manner in which the
cells in a of figure 51 are assigned numbers; homologous cells in each
band receive the same number. In a of figure 52 there are three bands,
numbered from 1 to 8, 9 to 16, and 17 to 24. Then in each band one
and only one cell of the same numbered set of four cells is cut out. For
example, if cell 1a is selected for perforation from band 1 (as indicated
by the check mark in that cell), then a cross is written in the other three
homologous cells, 1b, c, and d, to indicate that they are not available for
selection for perforation. Then a cell bearing the number 2 in band 1
is selected, for example, 2c, and at once 2a, b, and d are crossed off as
being ineligible for selection, and so on. In c of figure 52 is shown a
grille as finally prepared, the nonshaded cells representing apertures.
h. The grille, c of figure 52, is a "six-column" one, that is, the cells
form six columns. It is obvious that grilles with any even number of
columns of cells are possible. The number of apertures in each band
should be equal and this number multiplied by the number of bands and
then by 4 should equal the capacity of the grille. In the case of the one
shown in c of figure 52, the capacity is 8 by 3 by 4 or 96 cells; this is
the same as is obtained merely by multiplying the height (in cells) by the
[Figure 52: (a) the bands of a rectangular grille, homologous cells in each band bearing the same number, with check marks showing the cells selected for perforation and crosses the cells thereby eliminated; (c) the finished six-column grille, the nonshaded cells representing apertures.]
Figure 52.
number of columns, 16 X 6 = 96. If four letters are inscribed in each
rectangle, the capacity of the grille in terms of letters is 384. The grid in
this case would, after completion, present 24 columns of letters, to which
a numerical key for a second transposition can be applied in transcrip-
tion to produce the final text of the cryptogram.
106. Indefinite or Continuous Grilles
a. In his Manual of Cryptography, Sacco illustrates a type of grille
which he has devised and which has elements of practical importance.
An example of such a grille is shown in figure 53. This grille contains 20
columns of cells, and each column contains 5 apertures distributed at
random in the column. There are therefore 100 apertures in all, and
this is the maximum number of letters which may be enciphered in one
position of the grille. The plain text is inscribed vertically, from left to
right, using only as many columns as may be necessary to inscribe the
complete message. A 25-letter message would require but 5 columns. To
form the cryptogram the letters are transcribed horizontally from the
rows, taking the letters from left to right as they appear in the apertures.
If the total number of letters is not a multiple of 5, sufficient nulls are
added to make it so. In decryptographing, the total number of letters is
divided by 5, this giving the number of columns employed. The cipher
text is inscribed from left to right and top downwards in the apertures
in the rows of the indicated number of columns and the plain text then
reappears in the apertures in the columns, reading downward and from
left to right. (It is, of course, not essential that nulls be added in the
encipherment to make the length of the cryptogram an exact multiple of
5, for the matter can readily be handled even if this is not done. In de-
cipherment the total number of letters divided by 5 will give the number
of complete columns; the remainder left over from the division will give
the number of cells occupied by letters in the last column on the right.)
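A sketch of the continuous grille in Python follows, as an editorial illustration. Since the figure is not reproduced here, the number of rows (10), the use of X as the null, and the randomly generated aperture positions are assumptions of mine rather than the grille of figure 53a, so the output will not match figure 53b.

import random

ROWS, COLS = 10, 20     # the text fixes 20 columns of 5 apertures; 10 rows is assumed

def make_continuous_grille(rng=random):
    # Each column receives 5 apertures at randomly chosen rows.
    return [sorted(rng.sample(range(ROWS), 5)) for _ in range(COLS)]

def encipher(plaintext, grille):
    text = [c for c in plaintext.upper() if c.isalpha()]
    while len(text) % 5:
        text.append("X")                  # nulls bring the length to a multiple of 5
    ncols = len(text) // 5                # only as many columns as are needed
    grid = {}
    letters = iter(text)
    for col in range(ncols):              # inscribe vertically, left to right
        for row in grille[col]:
            grid[(row, col)] = next(letters)
    # transcribe horizontally from the rows, left to right
    return "".join(grid[(row, col)] for row in range(ROWS)
                   for col in range(ncols) if (row, col) in grid)

grille = make_continuous_grille()
# output differs from figure 53b because the aperture positions here are random
print(encipher("AM RECEIVING HEAVY MACHINE GUN FIRE FROM HILL SIX TWO ZERO", grille))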
[Figure 53a: the indefinite or continuous grille, with the position-designating letter A at the upper left and B, inverted, at the lower right.]
Figure 53a.
b. Such a grille can assume 4 positions, two obverse and two reverse.
Arrangements must be made in advance as to the sequence in which the
various positions will be employed. That is why the grille shown in figure
53a has the position-designating letter “A” in the upper left-hand corner
and the letter “B” (upside down) in the lower right—hand corner. On the
obverse side of the grille would be the position-designating letters “C”
and “D.”
c. Figure 53b shows how a message is enciphered.
Message:
AM RECEIVING HEAVY MACHINE GUN FIRE FROM HILL SIX TWO ZERO.
Figure 53b.
Cryptogram:
EGIIX FNNEA YTHFL RIRMO IOLWE MERVA ERMAH EGSOA ICUEC NVHIZ.
(The letters E and A in the 10th column are nulls. Columns 11 to 20 are not
used at all, the irregular right-hand edge of the grille merely indicating that this
portion of the grille remains vacant.)
Section V. MISCELLANEOUS TRANSPOSITION SYSTEMS
107. Complex Route Transposition
a. In figure 54 a route for inscribing letters within a rectangle is
indicated by a sequence of numbers. The initial point may be at any of
the four corners of the rectangle, or it may be at any other point, as pre—
arranged. The letters may be inscribed to form the rectangle by following
the route indicated and then transcribed from the rectangle to form the
cryptogram by following another route; or the letters may be inscribed
according to one route and transcribed according to the numerical
route indicated.
b. A variation of the foregoing is that illustrated in figure 55, wherein
the inscription follows the route shown by the arrows. The initial point
of inscription is indicated by the figure 1, and the final point, by the
figure 2.
c. In the foregoing case, the route is a succession of the moves made
by the king in the game of chess; it forms the so-called “king’s tour”,
in which the playing piece makes a complete or reentrant journey cover—
ing all cells of the chessboard, each cell being traversed only once. A
route composed of a succession of moves made by the knight, or the so-
called “knight’s tour”, is also possible, but in order to be practical a grid
with the cells numbered in succession would have to be prepared for the
correspondents, since millions of different reentrant knight’s tours can
be constructed4 on a chessboard of the usual 64 cells.
Figure 54. Figure 55.
108. Transposition of Groups of Letters, Syllables, and Words
There is nothing in the previously described methods which precludes
the possibility of their application to pairs of letters, sets of three or
more letters, or even syllables and whole words. Nor, of course, is their
use limited to operations with plain text; they may be applied as second-
ary steps after a substitutive process has been completed (see sec. I, ch.
10).
109. Disguised Transposition Methods
a. The system often encountered in romances and mystery stories,
wherein the message to be conveyed is inserted in a series of nonsig—
nificant words constructed with the purpose of avoiding or evading sus-
picion, is a species of this form of “open” cryptogram involving trans—
position. The “open” or enveloping, apparently innocent text may be
designated as the exle-mal text; the secret or cryptographic text may be.
designated as the internal text. A complicated example of external or
open and internal or secret text is that shown in paragraph 98.
4 See Ball, W. W. R., Mathematical Recreations and Essays, London, 1928.
b. Little need be said of the method based upon constructing external
text the letters of which, at prearranged positions or intervals, spell out
the internal text. For example, it may be prearranged that every fourth
letter of the external text forms the series of letters for spelling out
the internal text, so that only the 4th, 8th, 12th . . . letters of the external
text are significant. The same rule may apply to the complete words of
the external text, the nth, 2nth, 3nth, . . . words forming the internal text. The
preparation of the external text in a suitable form to escape suspicion
is not so easy as might be imagined, when efficient, experienced, and
vigilant censorship is at work. Often the paragraph or passage containing
the secret text is sandwiched in between other paragraphs added to pad
the letter as a whole with text suitable to form introductory and closing
matter to help allay suspicion as to the presence of secret, hidden text.
c. A modification of the foregoing method is that in which the lst,
3d, 5th, . . . words of a secret message are transmitted at one time or by
one agency of communication, and the 2d, 4th, 6th, . . . words of the
message are transmitted at another time or by another agency of com-
munication. Numerous variations of this scheme will suggest themselves,
but they are not to be considered seriously as practical methods of secret
intercommunication.
d. Two correspondents may agree upon a specific size of paper and a
special diagram drawn upon this sheet, the lines of which pass through
the words or letters of the internal text as they appear in the external
text. For example, the legs of an equilateral triangle drawn upon the
sheet of paper can serve for this purpose. This method is practicable
only when messages can be physically conveyed by messenger, by the
postal service, or by telephotographic means. Many variations of this
basic scheme may perhaps be encountered in censorship work.
110. Cipher Machines for Effecting Transposition
These may be dismissed with the brief statement that if any exist
today they are practically unknown. A few words are devoted to the
subject in paragraph 147.
| {
"alphanum_fraction": 0.7480773817,
"avg_line_length": 21.9196341065,
"ext": "tex",
"hexsha": "197d7b5c286e6bac171f83d92109665de0c3e300",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "098f202e65b281ca56ea7ea3736b71934ec59727",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "dreemkiller/MilitaryCryptography",
"max_forks_repo_path": "AdvancedMilitaryCryptography/Chapter1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "098f202e65b281ca56ea7ea3736b71934ec59727",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "dreemkiller/MilitaryCryptography",
"max_issues_repo_path": "AdvancedMilitaryCryptography/Chapter1.tex",
"max_line_length": 94,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "098f202e65b281ca56ea7ea3736b71934ec59727",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "dreemkiller/MilitaryCryptography",
"max_stars_repo_path": "AdvancedMilitaryCryptography/Chapter1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 19608,
"size": 67096
} |
\chapter{Book binders in Okinawa} \label{appA}
\section*{Permanent Binding}
Black Buckram with gold lettering in Arial font, following the template in Appendix 2 of the official guidelines (not this \LaTeX\ document). Spine lettering includes the title (or a brief version), OIST PhD Thesis, your name, and the year.
Print on acid-free paper.
Book binding services are available from Shocopy (Naha) or Laminex (Okinawa City). The cost is approx. 8,000 yen per volume. The Graduate School will pay for a supervisor copy. You may bind additional copies at your own expense.
\subsection*{Shocopy}
Located in Naha.
\textbf{URL}: \url{http://www.shocopy.com/} (Japanese only)
\textbf{Address}: Kume 1-4-25, Naha, Okinawa, 900-0033.
\textbf{TEL}: 098-866-5027
\textbf{FAX}: 098-866-5144
Closed on weekends and holidays. Opening hours: 9:00 to 17:30.
\subsection*{Laminex}
Located in Koza, Okinawa City.
\textbf{URL}: \url{http://www.laminex-c.jp/content/view/18/31/} (Japanese only)
\textbf{Address}: Uechi 2-9-6, Okinawa City, Okinawa, 904-0031.
\textbf{Email}: [email protected]
\textbf{TEL}: 098-932-1234
\textbf{FAX}: 098-933-2001
| {
"alphanum_fraction": 0.7399299475,
"avg_line_length": 28.55,
"ext": "tex",
"hexsha": "0eebd6e264c454ac50979490e323141d8aaf6056",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "d3f5a1e554efe56ccab299b45017316edfddef6f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "leios/LaTeX-templates",
"max_forks_repo_path": "PhD Thesis/MainText/appendixA.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d3f5a1e554efe56ccab299b45017316edfddef6f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "leios/LaTeX-templates",
"max_issues_repo_path": "PhD Thesis/MainText/appendixA.tex",
"max_line_length": 230,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "d3f5a1e554efe56ccab299b45017316edfddef6f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "leios/LaTeX-templates",
"max_stars_repo_path": "PhD Thesis/MainText/appendixA.tex",
"max_stars_repo_stars_event_max_datetime": "2019-02-08T04:37:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-02-07T04:59:36.000Z",
"num_tokens": 375,
"size": 1142
} |
%----------------------------------------------------------------------------------------
% Introduction
%----------------------------------------------------------------------------------------
\setcounter{page}{1} % Sets counter of page to 1
\section{Introduction} % Add a section title
In mathematical statistics, a problem that often comes up with basic tests (t-tests, analysis of variance, etc.) is dealing with small sample sizes. When looking specifically at the two-sample t-test, the condition that has to be met, and is taught in all introductory statistics courses, is normality of both samples. However, in the case where this condition cannot be met, we hope to have a sample size greater than $30$. But this magic number $30$ can be misleading. Real-world data often contains more than $30$ observations, yet creating models such as linear regression models is still difficult because real-world data often fails to meet critical model conditions such as normality of residuals and homoscedasticity. Because of this, we cannot blindly trust quantities like the $\hat{\beta}_i$'s in linear regression models and the residual deviance in logistic regression models. There are also times when real-world data is too small, and then we cannot even use the Central Limit Theorem to assume our statistics come from an approximately normal distribution. However, there is a solution to both of these problems: randomization-based inference. Another common solution is bootstrapping; however, that is most useful when we want to develop robust estimates for statistics and standard errors as well as counteract sampling error. We focus instead on randomization methods, which rely on re-randomizing units within our sample. Setting aside all model distribution assumptions, randomization-based inference only cares whether the sample that you have is typical of the population. In this paper, we will explain what randomization-based inference is in detail while walking through a short example that points out each step, explain why this process works, elaborate on some of the limitations that come with this test, and explore Monte Carlo methods for re-randomization.
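As a concrete sketch of the re-randomization idea (the data, function name, and variable names below are invented for illustration), the following code re-shuffles the group labels many times to build a null distribution for a difference in means and reports an approximate two-sided p-value.
\begin{verbatim}
# Minimal sketch of a randomization (permutation) test for the difference in
# means of two small samples; the data below are invented for illustration.
import random

def permutation_test(group_a, group_b, n_permutations=10000, seed=0):
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                      # re-randomize group labels
        new_a, new_b = pooled[:len(group_a)], pooled[len(group_a):]
        diff = sum(new_a) / len(new_a) - sum(new_b) / len(new_b)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations              # approximate two-sided p-value

p_value = permutation_test([12.1, 9.8, 11.4, 10.9], [8.7, 9.1, 10.2, 8.9])
\end{verbatim}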
| {
"alphanum_fraction": 0.7447679709,
"avg_line_length": 244.2222222222,
"ext": "tex",
"hexsha": "8f2712541f9b9bcc53d7f146d781b83b6e071d67",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "jake-caldwell/Math420Proj",
"max_forks_repo_path": "Project/Math%20420%20Final%20Paper/content/1-introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "jake-caldwell/Math420Proj",
"max_issues_repo_path": "Project/Math%20420%20Final%20Paper/content/1-introduction.tex",
"max_line_length": 1906,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "460deedb53a0a18509bc412c012f33e04440e9bb",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "jake-caldwell/Math420Proj",
"max_stars_repo_path": "Project/Math%20420%20Final%20Paper/content/1-introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 406,
"size": 2198
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[left=2cm, right=4cm, top=2cm]{geometry}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{amsfonts}
\usepackage{amsmath}
\numberwithin{equation}{section}
\newcommand{\bracket}[1]{|#1\rangle}
\graphicspath{ {../../figures/esl/} }
\title{Notes: \\The Elements of Statistical Learning}
\author{Daniel Saunders}
\begin{document}
\maketitle
\textbf{Notes}
\begin{enumerate}
\item Some \textit{emphasis} is from the book, some is added.
\item Abbreviations are used liberally and must sometimes be inferred from context.
\end{enumerate}
\section{Introduction}
\label{chapter:1}
\textit{Statistical learning} plays a key role in many areas of science, namely statistics, data mining, and artificial intelligence, and intersects with engineering and other disciplines.
The book is about learning from data. Typically, we have a quantitative or categorical outcome measurement that we want to predict based on a set of \textit{features}. We have a \textit{training} set of data, in which we observe both the outcome and the features for a set of objects. Using this data, we build a prediction model (\textit{learner}) which enables us to predict outcomes for unseen objects.
The above describes the \textit{supervised learning} problem, called so because of the presence of the outcome measurement to guide the learning process. In the \textit{unsupervised learning} problem, no outcome measurements are available, so we must instead describe how the data are organized or clustered.
\section{Overview of Supervised Learning}
\subsection{Introduction}
For each of the examples in Chapter \ref{chapter:1}, there is a set of variables known as the \textit{inputs} (measured or preset), which have influence over one or more \textit{outputs}. For each example, the goal is to use the inputs to predict the outputs. This is known as \textit{supervised learning}.
In the statistics / pattern recognition literature, the inputs are often called the \textit{predictors}, \textit{independent variables}, or \textit{features}, whereas the outputs are called the \textit{responses} or \textit{dependent variables}.
\subsection{Variable Types and Terminology}
Output variables may vary in nature; some \textit{quantitative} measurements are larger than others, and measurements that are close in value are close in nature. On the other hand, \textit{qualitative} measurements assume values in a finite set, without explicit ordering, and sometimes are descriptive labels rather than numbers to denote the classes. Qualitative variables are sometimes referred to as \textit{categorical} variables, \textit{discrete} variables, or \textit{factors}.
The distinction in output type has led to a naming convention for prediction tasks: \textit{regression} when we predict quantitative outputs, and \textit{classification} when we predict qualitative outputs. Both can be viewed as tasks in function approximation.
Inputs can also vary in measurement type, with some qualitative and some quantitative variables. Some methods are better suited to one type or the other, or both.
A third variable type is \textit{ordered categorical} (e.g., small, medium, or large), where there is an ordering, but no metric notion is appropriate.
Qualitative variables are typically represented numerically by codes (sometimes referred to as \textit{targets}). Binary variables can be represented simply by 0 and 1, or -1 and 1. With more than two categories, a commonly used coding is via \textit{dummy variables} (\textit{one-hot encoding}), where a $K$-level qualitative variable is represented by a vector of $K$ bits, only one of which is ``on'' at a time.
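As a small illustration (the level names below are invented), a sketch of this coding:
\begin{verbatim}
# Minimal sketch of dummy-variable (one-hot) coding for a K-level qualitative
# variable; the level names are invented for illustration.
def one_hot(value, levels):
    return [1 if value == level else 0 for level in levels]

levels = ["small", "medium", "large"]
print(one_hot("medium", levels))   # [0, 1, 0]
\end{verbatim}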
\subsection{Two Simple Approaches to Prediction: Least Squares and Nearest Neighbors}
\end{document} | {
"alphanum_fraction": 0.7915017155,
"avg_line_length": 62.1147540984,
"ext": "tex",
"hexsha": "8a52c6a4394408c5205d44c312dca20c1196d7a7",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-07-28T05:17:21.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-07-28T05:17:21.000Z",
"max_forks_repo_head_hexsha": "27dba28768ba4829f4bad922d73439454b9fed82",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "djsaunde/djsaunde.github.io",
"max_forks_repo_path": "notes/books/esl/els.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "27dba28768ba4829f4bad922d73439454b9fed82",
"max_issues_repo_issues_event_max_datetime": "2022-02-26T03:49:48.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-02-25T10:34:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "djsaunde/djsaunde.github.io",
"max_issues_repo_path": "notes/books/esl/els.tex",
"max_line_length": 469,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "27dba28768ba4829f4bad922d73439454b9fed82",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "djsaunde/djsaunde.github.io",
"max_stars_repo_path": "notes/books/esl/els.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-27T15:16:31.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-12-23T06:47:51.000Z",
"num_tokens": 876,
"size": 3789
} |
\documentclass[Economics.tex]{subfiles}
\begin{document}
\chapter{Market Structure}
\section{Objectives of firms}
Profit is the difference between the total revenue and total cost. In economics, firms are assumed profit-motivated and so will price to maximise profits.
Profit is maximised if a firm produces at an output where marginal cost equals marginal revenue when marginal cost is increasing, where marginal cost (\mrg{C}) is the cost of producing the last unit of output, and marginal revenue (\mrg{R}) is the revenue gained by selling the last unit of output. If the firm produces at a level below that output, then producing an additional unit of output adds more to revenue than to cost, increasing profit. While \mrg{R} > \mrg{C}, profits are increased by producing more. If the firm produces at a level above that output, then producing an additional unit of output adds more to cost than revenue, decreasing profit. While \mrg{C} > \mrg{R}, profits are increased by producing less. Therefore, profit is maximised at the point where \(\mrg{C} = \mrg{R}\) when \mrg{C} is increasing.
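As a rough numerical illustration (the marginal revenue and marginal cost schedules below are invented), the following sketch picks out the profit-maximising output as the largest output for which marginal revenue still covers marginal cost:
\begin{verbatim}
# Minimal numerical sketch with invented schedules: profit is maximised at the
# largest output for which marginal revenue still covers marginal cost.
marginal_revenue = [10, 10, 10, 10, 10, 10]   # constant MR of 10 per unit
marginal_cost    = [7, 5, 6, 8, 10, 13]       # MC falls, then rises

profit_max_output = max(
    q + 1
    for q in range(len(marginal_cost))
    if marginal_revenue[q] >= marginal_cost[q]
)
print(profit_max_output)   # 5: at the 5th unit MR = MC and MC is increasing
\end{verbatim}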
In reality, firms may not be able to identify the profit maximising price and output, as there is lack of information and the ceteris paribus assumption does not hold, as conditions of demand and supply are continuously changing, so even if firms know the profit maximising output level, they might not be able to determine the correct price to sell their goods. Notwithstanding that, it is difficult for firms to even determine the demand curve they face or predict how competitors may behave in response to their actions.
Many firms thus practice cost-plus pricing, where they estimate their long run average cost and add a profit margin to arrive at the price.
Some firms may choose not to maximise profits to avoid unwanted attention from the government, who may be concerned that these firms are exploiting customers, or from other firms, who are likely to be interested in any profitable assets they have.
Firms can choose to maximise sales revenue, as it can make it easier for them to take out loans. Sales revenue is maximised when \(\mrg{R} = 0\), as total revenue will be at a peak at this point. The level of output is likely to be past the profit maximising point, but the firm can still earn supernormal, normal or subnormal profits, depending on total revenue and total cost at this point.
Firms can choose to maximise sales volume, as employees' status and salary are often more linked to the size of the firm than its profitability. Sales volume is maximised when average cost equals average revenue. A firm can choose to produce above this level, but this level is the highest output level that the firm can earn normal profits. The firm will always earn normal profits at this point.
Firms can choose to profit satisfice, where firms aim for a profit level that will keep shareholders happy.
Firms may be reluctant to accept the risks and pressures associated with fiercely competitive policies, or they may be aiming to satisfy other stakeholders. Stakeholders include consumers, who want low prices and high quality; workers, who want high wages and job satisfaction and security; suppliers, who want high prices; community, who want employment without congestion; and environmentalists, who want clean environments and the conservation of flora and fauna. Profit satisficing may conflict with profit maximisation in the short run, but is compatible in the long run. Showing concern for the environment, for example, by avoiding nature reserves or not selling genetically modified foods, may raise a firm's costs, but it will also provide the firm with good publicity and may increase demand for the firm's products, creating brand loyalty. In the long run, revenue may rise more than costs, increasing profit.
\section{Costs of production}
A plant is a site where production or distribution of a product occurs. A firm is a decision-making unit that hires factors, combines them to create output and then sell the output; firms own plants. An industry is a collection of firms producing similar goods and services.
Factors of production are inputs used to produce output. They can be either fixed factors, which are factors whose quantity cannot be increased or decreased in a given time period e.g.\ land and machines, or variable factors, which are factors whose quantity can be changed within the given time period, e.g.\ labour and raw materials.
In production, a short-run time period is a time period in which there is at least one fixed factor; output can only be increased by using more variable factors. The long-run is when all factors are variable.
Typically, at small outputs, the marginal cost is high; it decreases to a minimum as output expands, and then starts increasing past that minimum. As the firm expands its output initially, it will employ more units of variable factors, causing the combination of fixed and variable factors to improve e.g.\ due to specialisation, raising the marginal productivity of the variable factor. Assuming the price of the variable input is constant, \mrg{C} will fall. Past a certain output, the law of diminishing marginal returns takes effect, and the proportion of factors becomes less efficient, reducing the marginal productivity of the variable factor and so \mrg{C} rises.
\section{Economies of scale}
The long run costs of firms are generally affected by economies and diseconomies of scale.
Dis\slash{}economies of scale are rises\slash{}falls in unit cost enjoyed by the firm or firms from growth of the firm (internal) or expansion of the entire industry (external). Internal economies or diseconomies of scale are represented by movements along the long run average cost (\slmf{LRAC}) curves of the firm, while external economies or diseconomies of scale are represented by \slmf{LRAC} shifts.
\subsection{Internal economies}
Technical economies can lead to internal economies of scale. They occur when a firm is able to have functional specialisation of labour, allowing the efficiency of labour to increase, lowering the unit cost of production. Also, some inputs e.g.\ machines can only be employed in large indivisible units that can produce a large output, which is unsuitable for small firms, but greatly economical for larger firms. Some inputs like machines can result in greater output with a less than proportionate increase in cost e.g.\ a double decker bus can carry double the passengers a single decker bus can, but the cost of purchasing and operating it may not necessarily double.
Internal economies of scale can arise due to marketing economies. As a firm becomes bigger, it purchases its inputs in bulk and so may secure discounts on purchases of inputs, as suppliers are eager to get the firm's orders. A larger firm is able to spread its advertising costs over a larger output and so unit cost is reduced.
Internal economies of scale can arise due to financial economies, where they can obtain loans at lower interest rates due to greater credit worthiness; it can also issue shares to the public to raise funds in the capital market.
Internal economies of scale can arise due to risk-bearing economies, where larger firms can deal with risks better, through diversification: if one product is not selling well, it can depend on its other products for revenue, so it is less likely to shut down than a smaller firm selling only one or two products.
Managerial economies of scale arise when firms can hire professionals in various fields to specialise and lead different departments that can help to increase a firm's output, lowering its unit cost.
\subsection{Internal diseconomies}
Internal diseconomies of scale can occur due to managerial diseconomies, which are the most common reason, as other types of diseconomies, like technical diseconomies, can simply be avoided by creating smaller plants. A firm can grow so large that it becomes difficult to manage, and more bureaucracy is involved in making decisions, slowing operations down. Paperwork can reduce work efficiency, resulting in lower productivity and thus higher unit cost. Management can find it hard to coordinate the operations of large firms, resulting in inefficiency.
Employees in a large organisation can experience feelings of alienation as the firm may become insensitive to the needs of its workers, affecting worker morale, resulting in lower productivity. An employee in a large organisation that receives a fixed salary may have little motivation to be efficient, as the quality of his work rarely translates into greater salaries.
Financial diseconomies can occur when firms become too big and borrow too much without repaying debts, affecting the firm's credit worthiness. Banks would be reluctant to offer loans to the firm, and may begin to charge higher interest, increasing the cost of borrowing.
Risk-bearing diseconomies can occur if one branch of a firm has poor performance, which leads to negative spill over effects on other branches, increasing cost of production.
The minimum efficient scale of production is the output at the lowest point of the \slmf{LRAC}. It represents the output after which internal diseconomies will start taking effect.
\subsection{External economies}
External economies of scale can occur as when an industry expands, amenities will be developed. As firms set up in an area, the government will develop amenities for the industries, reducing costs for individual firms and facilitating production. The government will also develop a better transport network so raw materials can be transported to, and outputs away from, the firms more efficiently, reducing transport costs.
External diseconomies of scale can occur when an industry expands excessively.
The increased demand for factors of production can lead to a shortage of the factors, leading to higher prices and so higher unit cost at all levels of output.
If too many firms concentrate in one area, this can result in traffic congestion, leading to loss of man-hours as time is wasted waiting for traffic, etc. Noise, water and air pollution may also result, forcing the government to impose taxes and fines. This all leads to increased unit costs at all outputs.
\section{Size of a firm}
Various demand factors can affect the size of a firm.
If the demand for a good is small, a firm that produces that good only or similar goods with small demand will remain small simply because the market for their produces is small.
Some consumers prefer firms that are more personal with their customers, something large firms cannot as easily provide. Thus, smaller firms can usually coexist with larger ones.
Supply factors also affect the size of a firm. Some goods are naturally suited for small firms e.g.\ dentists, where a single dentist can only work for so many hours before his efficiency drops.
A firm will enter short run shutdown when its \(\slmf{AVC} > \slmf{AR}\) or equivalently \(\slmf{TVC} > \slmf{TR}\), as when this is the case, the firm would incur the least loss by producing no output. There is no such thing as long run subnormal profit, as a firm would simply exit the market.
\section{Features of market structures}
Market structure is the way in which goods and services are supplied by firms in an industry. The market structure a firm operates in will determine its behaviour, or pricing and output decision and competitive strategies, and performance, or profitability and efficiency level.
\subsection{Barriers to entry}
Some markets have barriers to entry (BTE). A barrier to entry is something that prevents the entry of new firms into an industry, thereby limiting the amount of competition faced by existing firms. There are various types of barriers to entry.
Natural barriers to entry include economies of scale and natural monopolies, among others.
Economies of scale can be a barrier to entry when the minimum efficient scale (MES) of production is large compared to market demand; firms incur huge outlays in terms of infrastructural investment and a large output is needed to produce a good at its lowest unit cost. Therefore, new small firms entering such a market would find it difficult to compete.
In the extreme case where economies of scale persist past market demand, a natural monopoly arises where the MES is so large to the extent where one firm alone can satisfy the entire market demand, and if the demand is split equally with another firm, both firms' average costs increase to the point where they earn subnormal profits.
If a firm owns the resources needed to produce a particular good, then that firm can prevent other firms from entering the industry. E.g.\ Debeers owns most of the world's diamond mines and so monopolises the world's production of diamonds.
Some manufacturing industries use capital-intensive production techniques, so a large capital outlay is required to start production that may hinder entrants to the market.
State-created barriers to entry are, as in the name, created by the state.
Licenses are exclusive permits to produce owned by a firm. If a firm does not own a license to produce a controlled good, it cannot produce that good. Thus, licenses act as a barrier to entry.
Patents, copyrights and trademarks are granted by the government; they grant a firm the exclusive license to produce a good or use a specific technique for a period, in order to promote innovation. Other firms that wish to produce the same good or use the same method must pay royalties to the firm.
Firms can create barriers to entry to try to enjoy monopoly power and long-run supernormal profit.
\begin{itemize}
\item Advertising and brand name image creation helps to create brand loyalty in a market, making it more difficult for a newcomer to enter.
\item Firms can produce many varieties of a product (product proliferation), making entry difficult.
\item Firms may maintain excess productive capacity, discouraging potential entrants, as they know that incumbent firms will easily increase output to depress price when they enter.
\item Firms can practice predatory pricing, which is the setting of price to levels so low that entrants are discouraged from entering the market.
\item Firms can also practice limit pricing, which is when firms set prices low and restrict their profits to avoid attracting potential entrants.
\item Firms can practice restrictive practices e.g.\ exclusive dealing arrangements with merchants that stock only that firm's products, like discounts or other favourable trading terms.
\end{itemize}
\subsection{Price discrimination}
Price discrimination occurs when a firm sells the same product to different groups of consumers at different prices for reasons not due to cost, to increase profits by reducing consumer surplus.
For price discrimination to be possible, the firm must be a price setter, markets must be separated and no resale between segmented markets should be possible, and the \PE[D] in the segmented markets must be different to enable the monopolist to charge different prices.
Firms need to be price setters as they need to be able to set different prices without losing market share completely, with their market power derived from product differentiation. The market needs to be separated with no arbitrage possible, otherwise a consumer can easily defeat price discrimination by buying from someone who has purchased the good at a lower price.
1st degree price discrimination is when the seller charges the maximum price that a consumer is willing to pay for that unit of output e.g.\ via an auction. This removes the entire consumer surplus.
2nd degree price discrimination is when the seller charges the same consumer different prices for different quantities sold e.g.\ car park charges. When a natural monopoly is regulated, it may practice two-tier pricing, where it charges some users a higher price and others a lower price, so that the monopoly can survive while also being more equitable.
3rd degree price discrimination is when the seller divides his consumers into different groups and charges a different price to each group e.g.\ in movie tickets or buffets. In this case, the firm produces at the point where the combined \(\mrg{C} = \mrg{R}\); the \mrg{C} is then equated to the \mrg{R} of separate markets to find the output distribution between the markets.
\subsection{Collusion}
Firms can choose to collude or compete. When they collude, they can form a cartel, in which firms coordinate their activities and act as a single firm. They can also initiate price leadership, where firms follow the largest firm's prices (dominant price leadership) or use prices that best reflect market conditions (barometric).
Cartels are generally illegal in countries, and they may not last very long because of disputes and the incentive to cheat e.g.\ selling below the agreed price or selling more than their assigned quota.
\subsection{Competition}
When competing, firms can do so using price or non-price strategies. Price strategies are those directly related to decreasing their price, while non-price strategies are things like advertising or product proliferation.
Price competition occurs when a firm lowers its price in order to attract customers away from rivals. This can become full-fledged price wars, where firms continually lower prices until other firms are driven out of the market. This may be good, but it may also mean the firm that wins now gets to enjoy monopoly power.
Forms of non-price competition include product development and advertising. Product development differentiates a firm's products from others, and advertising informs consumers of and persuades consumers to purchase the firm's goods, in order to increase demand for their product as well as make their product less substitutable by other firms' i.e.\ decrease \PE[D] for their product.
However, non-price competition tends to involve extra costs that may not be worth the benefit received from undertaking them.
\section{Spectrum of market competition}
\begin{itemize}
\item In the perfectly competitive market, there are a large number of firms selling a homogenous product. There are no BTE to the market, and there is perfect knowledge in the market i.e.\ producers know all costs and there are no industry secrets.
\item In the monopolistic competitive market, there are a large number of firms selling differentiated products. There are few BTE to the market, but there may not be perfect knowledge.
\item In the oligopolistic market, there are a few large firms, selling differentiated or homogenous products.
\item Finally, in a monopoly, there is one firm selling a unique product, and there are very strong BTE.
\end{itemize}
The characteristics of market structures lead to their behaviours.
\subsection{Market power}
In markets where there are a large number of firms selling homogenous products, each firm produces a small proportion of the market output i.e.\ has a small market share, and so a change in any firm's output will not significantly change the market price. Thus firms in such a market, i.e.\ perfectly competitive market, are price takers, and have no market power, and face a perfectly price elastic demand.
Conversely, firms that sell a differentiated product or have a large market share can raise their price without losing all their customers and so they are said to have a degree of market power as they can restrict output to increase price above marginal cost. These firms face a downward-sloping demand.
\subsection{Types of profit}
In the short run, all firms can earn all kinds of profits (supernormal, normal and subnormal), but the presence of barriers to entry affects the long run profits of a firm. Firms in markets with no barriers to entry i.e.\ perfectly competitive or monopolistic competitive markets cannot earn supernormal or subnormal profits in the long run.
If firms are earning supernormal profits in the short run, new firms will be attracted by the supernormal profits and enter the industry. Firms earning supernormal profits will also expand and so industry output increases at all prices, therefore supply increases, leading to a fall in price, ceteris paribus. Supernormal profits will be competed away until all firms earn only normal profits.
If firms are earning subnormal profits in the short run, some firms will shut down so industry output decreases at all prices, therefore supply decreases, leading to an increase in price, ceteris paribus. Total revenue and thus profits will increase until all firms earn normal profits.
When all firms are earning normal profits, there will be no incentive for new firms to enter; existing firms stay in the industry as their revenue is sufficient to cover all their costs, and the industry is now in long run equilibrium as there is no tendency for firms to move in or out of the industry.
Firms in markets with barriers to entry i.e.\ oligopoly or monopoly can earn supernormal or normal profits in the long run, due to the significant barriers to entry preventing firms from entering to compete away supernormal profits. No firm will have subnormal profits in the long run as such firms will exit.
Firms in markets with market power i.e.\ not perfectly competitive are allocative inefficient.
\subsection{Efficiency}
Assuming the firm is profit maximising, it will produce at a level where \(P > \mrg{C}\), so consumers value the last unit of the good more than it costs to produce it; the good is thus underproduced: increasing output can increase consumer's surplus and thus welfare. This is allocative inefficiency. There is a deadweight loss caused to society that increases with the market power a firm has.
Firms in markets with no market power are allocative efficient. In a perfectly competitive industry, the price consumers pay, which reflects the value consumers place on extra units of the good, is equal to the cost of producing the last unit of output. When \(P > \mrg{C}\), the value consumers place on the last unit of output is greater than the cost of producing it, so more should be consumed, and vice versa. Thus allocative efficiency occurs when \(P = \mrg{C}\).
Productive efficiency occurs when firms are producing a given output at the lowest possible cost i.e.\ they are producing on their \slmf{LRAC}. Since firms are assumed to be profit maximising, they would want to minimise costs, so they will choose the lowest cost method of production possible; therefore all firms are productive efficient.
Dynamic efficiency occurs when product and process innovation occurs. Firms that can earn supernormal profits in the long run are generally dynamic efficient, as supernormal profits provide funds for a firm to channel to funding research and development, so firms with supernormal profits are able to innovate. Such firms are usually forced to innovate to maintain their dominant position in the market. Generally, the more competition a firm faces, the more they need to innovate to maintain their dominant position, so oligopolies are more dynamic efficient than monopolies.
X-inefficiency is the difference between efficient behaviour of firms assumed by theory and their observed behaviour. It occurs when efficiency is not achieved due to a lack of competitive pressure. Generally, firms experiencing more competition are less X-inefficient as they have competitive pressure to force them to keep \slmf{AC} low, and innovate to maintain their position in the market. The perfectly competitive market is completely X-efficient because if it is less cost efficient than other firms, it will make subnormal profit and be driven out of the market; firms in such a market also have no incentive to advertise as all firms make homogenous products.
Advertising is usually considered wasteful as it is an extra cost on top of what is needed to operate. In this light, the perfectly competitive and monopoly markets are not as wasteful as there is no need for them to advertise; the monopolistic competitive and oligopolistic markets tend to advertise more, and so waste more.
\subsection{Equity}
Equity is the fairness and justness of the allocation of resources.
Generally, a firm structure in which firms can earn supernormal profits in the long run is inequitable as supernormal profits go to shareholders, who generally consist of higher-income earners, worsening the income distribution as the rich get more.
However, even if a firm cannot earn supernormal profit in the long run, it does not guarantee equity as there is no guarantee that goods produced are distributed to individuals fairly, especially if the good is a necessity; those who earn higher income have the ability to purchase more, while the poor might not get enough because they do not have enough money to purchase what they need.
\subsection{Variety}
Generally, firms that sell differentiated products produce more variety and so allow consumer choice, which is a part of consumer welfare. Only the perfectly competitive market cannot have variety; in other markets, there can be a large variety of goods.
\subsection{Mutual interdependence}
Firms in market structures with few firms tend to exhibit mutual interdependence. Collusion is an extreme case of this, of course, but its effects are also seen when these firms compete, in the form of price rigidity, which is modelled using a kinked demand curve.
Two key assumptions are made in the kinked demand model, namely that if a firm lowers price, rival firms will follow suit and lower their price to avoid losing customers to the former firm; and if a firm raises price, rival firms will stay at the original price to gain customers from the former firm. These assumptions result in a demand curve that is price elastic above the equilibrium price, and price inelastic below the equilibrium price, and so there is a kink at the equilibrium price.
The corresponding \mrg{R} curve is made from the composition of the two separate \mrg{R} curves that the two segments of demand would separately produce, and there is a discontinuous range in the \mrg{R} at equilibrium output; if \mrg{C} at the equilibrium varies but stays within the discontinuous range, price will not change and so prices are rigid.
\end{document}
| {
"alphanum_fraction": 0.8022812252,
"avg_line_length": 149.5875706215,
"ext": "tex",
"hexsha": "51c661f7036f79c9706540c490f25749805d56fa",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5afdc9a71c37736aacf3ae1db9d0384cdb6a0348",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "oliverli/A-Level-Notes",
"max_forks_repo_path": "TeX/Economics/3_firms.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5afdc9a71c37736aacf3ae1db9d0384cdb6a0348",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "oliverli/A-Level-Notes",
"max_issues_repo_path": "TeX/Economics/3_firms.tex",
"max_line_length": 921,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "5afdc9a71c37736aacf3ae1db9d0384cdb6a0348",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "oliverli/A-Level-Notes",
"max_stars_repo_path": "TeX/Economics/3_firms.tex",
"max_stars_repo_stars_event_max_datetime": "2020-08-05T11:44:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-08-05T11:44:33.000Z",
"num_tokens": 5266,
"size": 26477
} |
\section{Basics of Reinforcement Learning}
\label{sec:basics_reinforcement_learning}
\emph{Reinforcement Learning} (RL) is a general class of algorithms in the field of \emph{Machine Learning} (ML) that allows an agent to learn how to behave in a stochastic and possibly unknown environment, where the only feedback consists of a scalar reward signal \cite{sutton1998introduction}. The goal of the agent is to learn by trial-and-error which actions maximize his long-run rewards. However, since the environment evolves stochastically and may be influenced by the actions chosen, the agent must balance his desire to obtain a large immediate reward by acting greedily and the opportunities that will be available in the future. Thus, RL algorithms can be seen as computational methods to solve sequential decision problems by directly interacting with the environment.\\
\subsection{Markov Decision Processes}
\label{sec:markov_decision_processes}
Sequential decision problems are typically formalized using \emph{Markov Decision Processes} (MDP). An MDP is a stochastic dynamical system specified by the tuple $<\S, \A, \calP, \calR, \gamma>$, where $(\S, \calS)$ is a measurable state space, $(\A, \calA)$ is a measurable action space, $\calP: \S \times \A \times \calS \to \R$ is a Markov transition kernel, $\calR: \S \times \A \to \R$ is a reward function and $0 < \gamma < 1$ is the discount factor. Suppose that at time $t$ the system is in state $S_t = s$ and that the agent takes action $A_t = a$, then, regardless of the previous history of the system, the probability to find the system in a state belonging to $B\in\calS$ at time $t+1$ is given by
\begin{equation}
\calP(s, a, B) = \P{S_{t+1} \in B | S_t = s, A_t = a}
\end{equation}
Following this random transition, the agent receives a stochastic reward
$R_{t+1}$. The reward function $\calR(s, a)$ gives the expected reward
obtained when action $a$ is taken in state $s$, i.e.
\begin{equation}
\calR(s, a) = \E{R_{t+1} | S_t = s, A_t = a}
\end{equation}
This feedback mechanism between the environment and the agent is illustrated in Figure \ref{fig:sequential_decision_problem}. At any time step, the agent selects his actions according to a certain policy $\pi: \S \times \calA \to \R$ such that for every $s \in \S$, $C \mapsto \pi(s,C)$ is a probability distribution over $(\A, \calA)$. Hence, a policy $\pi$ and an initial state $s_0 \in \S$ determine a random state-action-reward sequence ${\{(S_t, A_t, R_{t+1})\}}_{t\geq 0}$ with values on $\S \times \A \times \R$.
\begin{figure}[t]
\centering
\begin{tikzpicture}[node distance = 6em, auto, thick]
\node [block] (Agent) {Agent};
\node [block, below of=Agent] (Environment) {Environment};
\path [line] (Agent.0) --++ (4em,0em) |- node [near start]{Action $a_t$} (Environment.0);
\path [line] (Environment.190) --++ (-6em,0em) |- node [near start]{State $s_{t}$} (Agent.170);
\path [line] (Environment.170) --++ (-4.25em,0em) |- node [near start, right] {Reward $r_{t+1}$} (Agent.190);
\end{tikzpicture}
\caption{Agent-environment interaction in sequential decision problems.}
\label{fig:sequential_decision_problem}
\end{figure}
In an infinite horizon task, the agent's performance is typically measured as the total discounted reward obtained following a specific policy
\begin{equation}
G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}
\end{equation}
Since this gain is stochastic, the agent considers its expected value, which is typically called \emph{state-value function}
\begin{equation}
V_\pi(s) = \E[\pi]{G_t|S_t = s}
\end{equation}
where the subscript in $\mathbb{E}_{\pi}$ indicates that all the actions are selected according to policy $\pi$. The state-value function measures how good it is for the agent to be in a given state and follow a certain policy. Similarly, we introduce the \emph{action-value function}
\begin{equation}
Q_\pi(s,a) = \E[\pi]{G_t|S_t = s, A_t = a}
\end{equation}
We have the following relationship between $V_\pi$ and $Q_\pi$
\begin{equation}
V_\pi(s) = \int_\A \pi(s,a) Q_\pi(s,a) da
\end{equation}
Almost all reinforcement learning algorithms are designed to estimate these
value functions and are typically based on the Bellman equations.
\begin{equation}
V_\pi(s) = \calR_\pi(s) + \gamma T_\pi V_\pi(s)
\label{eq:bellman_expectation_eq_V}
\end{equation}
\begin{equation}
Q_\pi(s,a) = \calR(s,a) + \gamma T_a V_\pi(s)
\label{eq:bellman_expectation_eq_Q}
\end{equation}
where we denoted by $T_a$ (resp. $T_\pi$) the transition operator for action
$a$ (resp. for policy $\pi$)
\begin{equation}
T_a F(s) = \E{F(S_{t+1})|S_t = s, A_t = a} = \int_\S \calP(s, a, s') F(s') ds'
\end{equation}
\begin{equation}
T_\pi F(s) = \E[\pi]{F(S_{t+1})|S_t = s} = \int_\A \pi(s,a) \int_\S \calP(s,a,s') F(s') ds' da
\end{equation}
These equations can be rewritten as fixed-point equations which, under some formal assumptions on the reward function, admit a unique solution by the contraction mapping theorem. The agent's goal is to select a policy $\pi_*$ that maximizes his expected return in all possible states. Such a policy is called \emph{optimal} and the corresponding value functions are called \emph{Optimal State-Value Function}
\begin{equation}
V_*(s) = \sup_\pi V_\pi(s)
\end{equation}
and \emph{Optimal Action-Value Function}
\begin{equation}
Q_*(s,a) = \sup_\pi Q_\pi(s,a)
\end{equation}
The optimal value functions satisfy the following Bellman equations.
\begin{equation}
V_*(s) = \sup_a Q_*(s,a) = \sup_a \left\{\calR(s,a) + \gamma T_a V_*(s)\right\}
\end{equation}
\begin{equation}
\begin{split}
Q_*(s,a) &= \calR(s,a) + \gamma T_a V_*(s)\\
&= \calR(s,a) + \gamma \int_\S \calP(s,a,s') \sup_{a'} Q_*(s', a') ds'
\end{split}
\end{equation}
Again, these are fixed-point equations for which the existence and uniqueness of a solution is guaranteed by the contraction mapping theorem. Given the optimal action-value function $Q_*$, an optimal policy is obtained by selecting in each state the action which maximizes $Q_*$
\begin{equation}
a_* = \argsup_a Q_*(s,a)
\end{equation}
This greedy policy is deterministic and only depends on the current state of the system.
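As a concrete illustration of these fixed-point equations, the sketch below (a tiny finite MDP with invented dynamics, using NumPy) computes $V_*$ by repeatedly applying the Bellman optimality operator and reads the greedy policy off $Q_*$; this is the dynamic programming approach discussed in the next subsection.
\begin{verbatim}
# Minimal value-iteration sketch for a tiny finite MDP with invented dynamics:
# V* is the fixed point of the Bellman optimality operator, and the greedy
# policy is obtained by taking the argmax of Q* in each state.
import numpy as np

gamma = 0.9
# P[a, s, s']: transition probabilities; R[s, a]: expected rewards (illustrative)
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 1.0], [0.0, 2.0], [5.0, 0.0]])

V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * (P @ V).T      # Q[s,a] = R(s,a) + gamma * sum_s' P(s,a,s') V(s')
    V_new = Q.max(axis=1)          # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

greedy_policy = Q.argmax(axis=1)   # a*(s) = argmax_a Q*(s, a)
\end{verbatim}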
\subsection{Policy Gradient Methods}
The standard way to solve MDPs is through dynamic programming, which simply consists in solving the Bellman fixed-point equations discussed in the previous chapter. Following this approach, the problem of finding the optimal policy is transformed into the problem of finding the optimal value function. However, apart from the simplest cases where the MDP has a limited number of states and actions, dynamic programming becomes computationally infeasible. Moreover, this approach requires complete knowledge of the Markov transition kernel and of the reward function, which in many real-world applications might be unknown or too complex to use. \emph{Reinforcement Learning} (RL) is a subfield of Machine Learning which aims to turn the infeasible dynamic programming methods into practical algorithms that can be applied to large-scale problems. RL algorithms are based on two key ideas: the first is to use samples to compactly represent the unknown dynamics of the controlled system. The second idea is to use powerful function approximation methods to compactly estimate value functions and policies in high-dimensional state and action spaces. In this section we will only focus on a particular class of algorithms called \emph{Policy Gradient Methods}, which have proved successful in many applications. For a more complete introduction to RL, the reader may consult \cite{sutton1998introduction}, \cite{szepesvari2010algorithms} or \cite{wiering2012reinforcement}.\\
In \emph{policy gradient methods} \cite{peters2008reinforcement}, the optimal policy is approximated using a parametrized policy $\pi: \S \times \calA \times \Theta \to \R$ such that, given a parameter vector $\theta \in \Theta \subseteq \R^{D_\theta}$, $\pi(s, B; \theta) = \pi_\theta(s, B)$ gives the probability of selecting an action in $B \in \calA$ when the system is in state $s \in \S$.
The general goal of policy optimization in reinforcement learning is to
optimize the policy parameters $\theta \in \Theta$ so as to maximize a certain
objective function $J: \Theta \to \R$
\begin{equation}
\theta^* = \argmax_{\theta \in \Theta} J(\theta)
\end{equation}
In the following, we will focus on gradient-based and model-free methods that exploit
the sequential structure of the reinforcement learning problem. The idea of
policy gradient algorithms is to update the policy parameters using the gradient ascent direction of the objective function
\begin{equation}
\theta_{k+1} = \theta_k + \alpha_k \nabla_\theta J\left(\theta_k\right)
\end{equation}
where $\{\alpha_k\}_{k\geq 0}$ is a sequence of learning rates. Typically, the
gradient of the objective function is not known and its approximation is the key component of every policy gradient algorithm. It is a well-known result from stochastic optimization \cite{kushner2003stochastic} that, if the gradient estimate is unbiased and the learning rates satisfy the \emph{Robbins-Monro conditions}
\begin{equation}
\sum_{k=0}^\infty \alpha_k = \infty \;\;\;\;\;\; \sum^{\infty}_{k=0}
\alpha_k^2 < \infty
\end{equation}
the learning process is guaranteed to converge at least to a local optimum of
the objective function. In an episodic environment where the system always starts from an initial state $s_0$, the typical objective function is the start value.
\begin{equation}
J_{\text{start}}(\theta) = V_{\pi_\theta}(s_0) = \E[\pi_\theta]{G_0 |
S_0 = s_0}
\end{equation}
In a continuing environment, where no terminal state exists and the task might go on forever, it is common to use the average value
\begin{equation}
J_{\text{avV}}(\theta) = \E[S \sim d^{\theta}]{V_{\pi_\theta}(S)} = \int_\S
d^{\theta}(s) V_{\pi_\theta}(s) ds
\end{equation}
where $d^\theta$ is the stationary distribution of the Markov chain induced by $\pi_\theta$. Alternatively, one may use the average reward per time step
\begin{equation}
J_{\text{avR}}(\theta) = \rho(\theta) = \E[\substack{S \sim d^{\theta}\\A \sim \pi_\theta}]{\calR(S,A)}
= \int_\S d^{\theta}(s) \int_\A \pi_\theta(s,a) \calR(s,a) da ds
\end{equation}
Luckily, the same methods apply with minor changes to the three objective functions.
\subsubsection{Policy Gradient Theorem}
The \emph{policy gradient theorem} \cite{sutton1999policy} shows that the gradient can be rewritten in a form suitable for estimation from experience aided by an approximate action-value or advantage function.
\begin{theorem}[Policy Gradient]
\label{thm:risk_neutral_policy_gradient}
Let $\pi_\theta$ be a differentiable policy. The policy gradient for the average reward formulation is given by
\begin{equation}
\nabla_\theta \rho(\theta) =
\E[\substack{S \sim d^\theta\\A \sim \pi_\theta}]{\nabla_\theta\log
\pi_\theta(S,A) Q_{\theta}(S, A)}
\end{equation}
where $d^\theta$ is the stationary distribution of the Markov chain induced by $\pi_\theta$. The policy gradient for the start value formulation is given by
\begin{equation}
\nabla_\theta J_{\text{start}}(\theta) =
\E[\substack{S \sim d_\gamma^\theta(s_0, \cdot)\\A \sim \pi_\theta}]{\nabla_\theta\log
\pi_\theta(S,A) Q_{\theta}(S, A)}
\end{equation}
where $d_\gamma^\theta(s_0, \cdot)$ is the $\gamma$-discounted visiting distribution over states starting from the initial state $s_0$ and following policy $\pi_\theta$
\begin{equation}
d_\gamma^\theta(s, x) = \sum_{k=0}^{\infty} \gamma^k \calP_\theta^{(k)}(s, x)
\end{equation}
\end{theorem}
Let us notice that we can subtract a state-dependent baseline from the action-value function without changing the value of the expectation, indeed
\begin{equation*}
\begin{split}
\E[\substack{S \sim d^\theta\\A \sim \pi_\theta}]{\nabla_\theta\log
\pi_\theta(S,A) B_\theta(S)}
&= \int_\S d^\theta(s) \int_\A \pi_\theta(s,a) \nabla_\theta\log
\pi_\theta(s,a) B_\theta(s) da ds\\
&= \int_\S d^\theta(s) B_\theta(s) \int_\A \nabla_\theta \pi_\theta(s,a) da ds\\
&= \int_\S d^\theta(s) B_\theta(s) \nabla_\theta \underbrace{\int_\A \pi_\theta(s,a) da}_{= 1} ds = 0
\end{split}
\end{equation*}
Hence, the policy gradient theorem can be rewritten as
\begin{equation}
\label{eq:pg_theorem_baseline}
\nabla_\theta \rho(\theta) =
\E[\substack{S \sim d^\theta\\A \sim \pi_\theta}]{\nabla_\theta\log
\pi_\theta(S,A) \left(Q_{\pi_\theta}(S, A) - B_\theta(S)\right)}
\end{equation}
The baseline can be chosen so as to minimize the variance of the gradient estimate, which can prove beneficial for the convergence of the algorithm \cite{peters2008reinforcement}. This result can be used as the starting point to derive several policy gradient methods that use different approximations of the action-value function, which is typically unknown. For instance, in an episodic MDP the action-value function can be estimated with the total return obtained on a sample trajectory
\begin{equation}
Q_\theta(s_0,a_0) \approx \sum_{t=0}^{T^{(m)}} \gamma^t r_{t+1}^{(m)}
\end{equation}
Combining this remark with a Monte Carlo approximation of Eq. (\ref{eq:pg_theorem_baseline}), we obtain the \emph{Monte Carlo Policy Gradient} algorithm \cite{baxter2001infinite} (also known as GPOMDP) for which the pseudocode is reported in Algorithm \ref{algo:GPOMDP}.
\begin{algorithm}[t]
\caption{GPOMDP}
\label{algo:GPOMDP}
\begin{algorithmic}[0]
\Require{\\
\begin{itemize}
\item Initial policy parameters $\theta_0 = (\theta_0^1, \ldots, \theta_0^{D_\theta})^T$
\item Learning rate $\{\alpha_k\}$
\item Number of trajectories $M$
\end{itemize}
}
\Ensure Approximation of the optimal policy $\pi_{\theta^*} \approx \pi_*$
\begin{algorithmic}[1]
\State Initialize $k = 0$
\Repeat
\State Sample $M$ trajectories $h^{(m)} = \{(s_t^{(m)}, a_t^{(m)}, r_{t+1}^{(m)})\}_{t = 0}^{T^{(m)}}$ of the MDP under policy $\pi_{\theta_k}$
\State Compute the optimal baseline
\begin{equation}
\widehat{b}_k^n = \frac{\sum^{M}_{m=1} \left[ \sum_{i=0}^{T^{(m)}}
\partial_{\theta_k} \log \pi_\theta\left(s_i^{(m)}, a_i^{(m)}\right) \right]^2
\sum^{T^{(m)}}_{j=0} \gamma^j r_{j+1}^{(m)}}{\sum^{M}_{m=1} \left[ \sum_{i=0}^{T^{(m)}} \partial_{\theta_k} \log \pi_\theta\left(s_i^{(m)}, a_i^{(m)}\right) \right]^2}
\end{equation}
\State Approximate policy gradient
\begin{equation}
\frac{\partial}{\partial\theta^n} J_{\text{start}}(\theta_k) \approx \widehat{g}_k^n = \frac{1}{M} \sum^{M}_{m=1} \sum_{i=0}^{T^{(m)}}
\frac{\partial}{\partial\theta^n} \log \pi_{\theta_k}\left(s_i^{(m)}, a_i^{(m)}\right) \left(
\sum^{T^{(m)}}_{j=i} \gamma^j r_{j+1}^{(m)} - \widehat{b}_k^n \right)
\end{equation}
\State Update actor parameters $\theta_{k+1} = \theta_k + \alpha_k \widehat{g}_k $.
\State $k \leftarrow k + 1$
\Until{converged}
\end{algorithmic}
\end{algorithmic}
\end{algorithm}
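A compact sketch in the spirit of Algorithm \ref{algo:GPOMDP} is given below. It assumes a Gaussian policy with a linear mean and an episodic environment object \texttt{env} exposing \texttt{reset()} and \texttt{step(a)}; these names are our own assumptions, the optimal baseline is omitted, and the discounted reward-to-go is used as a crude estimate of the action-value function.
\begin{verbatim}
# Minimal Monte Carlo policy-gradient sketch (GPOMDP-style, without the
# optimal baseline). Policy: a ~ N(theta^T s, sigma^2). `env` is assumed to
# provide reset() -> s and step(a) -> (s_next, r, done).
import numpy as np

def sample_episode(env, theta, sigma=0.1, max_steps=200):
    states, actions, rewards = [], [], []
    s = env.reset()
    for _ in range(max_steps):
        a = float(np.dot(theta, s) + sigma * np.random.randn())
        s_next, r, done = env.step(a)
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
        if done:
            break
    return np.array(states), np.array(actions), np.array(rewards)

def gpomdp_update(env, theta, n_episodes=20, gamma=0.99, alpha=0.01, sigma=0.1):
    grads = []
    for _ in range(n_episodes):
        S, A, R = sample_episode(env, theta, sigma)
        # likelihood scores: grad_theta log pi(s, a) = (a - theta^T s) s / sigma^2
        scores = ((A - S @ theta)[:, None] * S) / sigma**2
        discounts = gamma ** np.arange(len(R))
        # discounted reward-to-go, a crude estimate of Q(s_t, a_t)
        togo = np.array([np.sum(discounts[t:] * R[t:]) for t in range(len(R))])
        grads.append(np.sum(scores * togo[:, None], axis=0))
    return theta + alpha * np.mean(grads, axis=0)   # gradient ascent step
\end{verbatim}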
\subsubsection{Parameter-Based Policy Gradient Methods}
In Monte Carlo Policy Gradient, trajectories are generated by sampling at each
time step an action according to a stochastic policy $\pi_\theta$ and the
objective function gradient is estimated by differentiating the policy with
respect to the parameters. However, sampling an action from the policy at each
time step leads to a large variance in the sampled histories and therefore in
the gradient estimate, which can in turn slow down the convergence of the
learning process. To address this issue, the \emph{policy gradient with parameter-based exploration} (PGPE) method \cite{sehnke2008policy} replaces the search in the policy space with a direct search in the model parameter space. Given an episodic MDP, PGPE considers a deterministic controller $F: \S \times \Theta \to \A$ that, given a set of parameters $\theta \in \Theta \subseteq \R^{D_\theta}$, maps a state $s \in \S$ to an action $a = F(s; \theta) = F_\theta(s) \in \A$. The policy parameters are drawn from a probability distribution $p_\xi$, with hyper-parameters $\xi \in \Xi \subseteq \R^{D_\xi}$. Combining these two hypotheses, the agent follows a stochastic policy $\pi_\xi$ defined by
\begin{equation}
\forall B \in \calA ,\ \pi_\xi(s,B) = \pi(s, B; \xi) = \int_\Theta p_\xi(\theta)
\ind{F_{\theta}(s)\in B} d\theta
\end{equation}
In this setting, the policy gradient theorem can be reformulated in the following way
\begin{theorem}[Parameter-Based Policy Gradient]
Let $p_\xi$ be differentiable with respect to $\xi$; then the gradient of the average reward is given by
\begin{equation}
\nabla_\xi J(\xi) = \E[\substack{S \sim d^\xi\\\theta \sim p_\xi}]{\nabla_\xi \log p_\xi(\theta) Q_{\pi_\xi}(S, \theta)}
\end{equation}
where we denote $Q_{\pi_\xi}(S, \theta) = Q_{\pi_\xi}(S, F_\theta(S))$.
\end{theorem}
This expression is very similar to the original policy gradient theorem, but
the expectation is taken over the controller parameters instead of the action space, and the likelihood score of the controller parameter distribution replaces that of the stochastic policy. We can therefore interpret this result as if the agent directly selected the parameters $\theta$ according to a policy $p_\xi$, which then leads to an action through the deterministic mapping $F_\theta$: it is as if the agent's policy were defined in the parameter space rather than in the control space. As in standard policy gradient methods, we can subtract a state-dependent baseline $B_\xi(S)$ from the gradient without introducing bias
\begin{equation}
\nabla_\xi J(\xi) = \E{\nabla_\xi \log p_\xi(\theta) \left(Q_{\pi_\xi}(S,
\theta) - B_\xi(S)\right)}
\end{equation}
The PGPE algorithm, outlined in Algorithm \ref{algo:PGPE}, employs a Monte Carlo approximation of this gradient, where the action-value function is estimated using the returns on a sampled trajectory of the MDP. The benefit of this approach is that the controller is deterministic and therefore the actions do not need to be sampled at each time step, with a consequent reduction of the variance of the gradient estimate. Indeed, it is sufficient to sample the parameters $\theta$ once at the beginning of the episode and then generate an entire trajectory by following the deterministic policy $F_\theta$. As an additional benefit, the parameter gradient is
estimated by direct parameter perturbations, without having to backpropagate
any derivatives, which makes it possible to use non-differentiable controllers. Again, the baseline can be chosen so as to minimize the variance of the gradient estimate \cite{zhao2011analysis}.
\begin{algorithm}[t!]
\caption{Episodic PGPE algorithm}
\label{algo:PGPE}
\begin{algorithmic}[0]
\Require{\\
\begin{itemize}
\item Initial hyper-parameters $\xi_0 = (\xi_0^1, \ldots, \xi_0^{D_\xi})^T$
\item Learning rate $\{\alpha_k\}$
\item Number of trajectories $M$
\end{itemize}
}
\Ensure Approximation of the optimal policy $F_{\xi^*} \approx \pi_*$
\begin{algorithmic}[1]
\State Initialize $k = 0$
\Repeat
\For {$m = 1, \ldots, M$}
\State Sample controller parameters $\theta^{(m)} \sim p_{\xi_k}$
\State Sample trajectory $h^{(m)} = \{(s_t^{(m)}, a_t^{(m)}, r_{t+1}^{(m)})\}_{t = 0}^{T^{(m)}}$ under policy $F_{\theta^{(m)}}$
\EndFor
\State Compute optimal baseline
\begin{equation}
\widehat{b}_k^n = \frac{\sum^{M}_{m=1} \left[\partial_{\xi^n} \log p_{\xi_k} \left(\theta^{(m)}\right)\right]^2 \sum^{T^{(m)}}_{j=0} \gamma^j r_{j+1}^{(m)}}{\sum^{M}_{m=1} \left[\partial_{\xi^n} \log p_{\xi_k} \left(\theta^{(m)}\right)\right]^2}
\end{equation}
\State Approximate policy gradient
\begin{equation}
\frac{\partial}{\partial\xi^n} J_{\text{start}}(\xi_k) \approx \widehat{g}_k^n = \frac{1}{M} \sum^{M}_{m=1}
\frac{\partial}{\partial\xi^n} \log p_{\xi_k}\left(\theta^{(m)}\right) \left(
\sum^{T^{(m)}}_{j=0} \gamma^j r_{j+1}^{(m)} - \widehat{b}_k^n \right)
\end{equation}
\State Update hyper-parameters using gradient ascent $\xi_{k+1} = \xi_k + \alpha_k \widehat{g}_k$
\State $k \leftarrow k + 1$
\Until{converged}
\end{algorithmic}
\end{algorithmic}
\end{algorithm}
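For comparison, a minimal sketch of one PGPE update (Algorithm~\ref{algo:PGPE}) is given below, again only as an illustration: the Gaussian hyper-distribution, the placeholder functions \texttt{rollout} and \texttt{make\_controller}, and the trajectory format are assumptions of the example.
\begin{verbatim}
import numpy as np

def pgpe_update(mu, sigma, make_controller, rollout, gamma, M, alpha):
    """One PGPE update of Gaussian hyper-parameters (mu, sigma).

    make_controller(theta) -> deterministic policy F_theta
    rollout(policy)        -> list of rewards from one sampled trajectory
    (both functions are assumed to be supplied by the environment)
    """
    D = mu.shape[0]
    thetas = np.zeros((M, D))
    returns = np.zeros(M)
    for m in range(M):
        thetas[m] = mu + sigma * np.random.randn(D)      # theta ~ p_xi
        rewards = rollout(make_controller(thetas[m]))    # deterministic F_theta
        returns[m] = sum(gamma ** j * r for j, r in enumerate(rewards))

    # Score of the Gaussian hyper-distribution w.r.t. mu and sigma
    d_mu = (thetas - mu) / sigma ** 2
    d_sigma = ((thetas - mu) ** 2 - sigma ** 2) / sigma ** 3
    score = np.hstack([d_mu, d_sigma])                   # shape (M, 2D)

    # Variance-minimizing baseline and Monte Carlo gradient estimate
    b = (score ** 2 * returns[:, None]).sum(0) / (score ** 2).sum(0)
    g = (score * (returns[:, None] - b)).mean(0)

    step = alpha * g
    # In practice sigma should be kept strictly positive after the update.
    return mu + step[:D], sigma + step[D:]
\end{verbatim}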
| {
"alphanum_fraction": 0.7224617214,
"avg_line_length": 73.6532846715,
"ext": "tex",
"hexsha": "2abd4e0a385437c6698f556c3230cec8e305ebb2",
"lang": "TeX",
"max_forks_count": 34,
"max_forks_repo_forks_event_max_datetime": "2021-08-21T21:48:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-05-15T07:51:52.000Z",
"max_forks_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pnecchi/Thesis",
"max_forks_repo_path": "Pacs/Report/Sections/2_basics_of_reinforcement_learning.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "pnecchi/Thesis",
"max_issues_repo_path": "Pacs/Report/Sections/2_basics_of_reinforcement_learning.tex",
"max_line_length": 1474,
"max_stars_count": 80,
"max_stars_repo_head_hexsha": "1a3ae97023acff1ee5e2d197a446734117a6fb99",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "AmineAboussalah/Thesis",
"max_stars_repo_path": "Pacs/Report/Sections/2_basics_of_reinforcement_learning.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-24T23:47:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-06-13T15:20:29.000Z",
"num_tokens": 6147,
"size": 20181
} |
\documentclass[12pt]{article}
\usepackage{fancyhdr}
\usepackage{color}
\usepackage{multicol}
\usepackage{enumitem}
\usepackage{graphicx}
\usepackage{sectsty}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{hyperref}
\usepackage{array}
\newcommand{\sectionbreak}{\clearpage}
\usepackage{tikz}
\usetikzlibrary{arrows,shapes,trees}
\allsectionsfont{\centering}
\usepackage{draftwatermark}
\SetWatermarkText{\copyright wolf-math.com}
\SetWatermarkScale{4}
\SetWatermarkLightness{1}
\usepackage[margin=1in, headsep=0pt]{geometry}
\setlength{\parindent}{0cm}
\pagestyle{empty}
\begin{document}
Mr. Wolf \\ wolf-math.com
\section*{Trig Pre-Requisites}
\subsection*{Goals}
\textbf{SWBAT} calculate and apply the Pythagorean Theorem.\\
\textbf{SWBAT} calculate and apply the distance formula.\\
\textbf{SWBAT} graph circles using $x^2+y^2=r^2$\\
\subsection*{Standards}
\textbf{Expressing Geometric Properties with Equations \hfill G-GPE}\\
Translate between the geometric description and the equation for a
conic section\\
1. Derive the equation of a circle of given center and radius using the
Pythagorean Theorem; complete the square to find the center and
radius of a circle given by an equation.\\
Use coordinates to prove simple geometric theorems algebraically\\
7. Use coordinates to compute perimeters of polygons and areas of
triangles and rectangles, e.g., using the distance formula. \\
\textbf{Geometry \hfill 8.G}\\
Understand and apply the Pythagorean Theorem.\\
6. Explain a proof of the Pythagorean Theorem and its converse.\\
7. Apply the Pythagorean Theorem to determine unknown side lengths
in right triangles in real-world and mathematical problems in two and
three dimensions.\\
8. Apply the Pythagorean Theorem to find the distance between two
points in a coordinate system.\\
\subsection*{Connections}
\textbf{Now} we are learning the equation of a circle, the Pythagorean Theorem and the distance formula, all of which are prerequisites to trigonometry.\\
\textbf{Later} we are going to learn basic trig ratios and how to solve right triangles with those ratios.\\
\let\stdsection\section
\renewcommand\section{\newpage\stdsection}
\section*{The Pythagorean Theorem}
$$a^2+b^2=c^2$$
Given any right triangle, if the lengths of the legs (the shorter sides) are known, but the hypotenuse (the longest side) is unknown, the length of the hypotenuse can be figured out with the Pythagorean Theorem, $a^2+b^2=c^2$, where $a$ and $b$ are the legs, and $c$ is the hypotenuse.\\
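A quick worked instance: if the legs are $a=3$ and $b=4$, then
$$c=\sqrt{a^2+b^2}=\sqrt{3^2+4^2}=\sqrt{25}=5$$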
\textbf{Example:}
\pagebreak
\section*{The Distance Formula}
An application of the \textit{Pythagorean Theorem} is the \textbf{Distance Formula}. With it the distance between two points on a graph can be calculated.\\
$$d=\sqrt{(x_1-x_2)^{2}+(y_1-y_2)^{2}}$$
\textbf{Example 1:} Let point A be $(1,1)$, and point B be $(4,5)$. Find the distance between them.\\
\begin{center}
%\includegraphics[scale=.5]{graph1.jpg}\\
\end{center}
With the distance formula the distance can be calculated without graphing.\\
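Working Example 1 with the formula:
$$d=\sqrt{(1-4)^{2}+(1-5)^{2}}=\sqrt{9+16}=\sqrt{25}=5$$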
\textbf{Example 2:} Find the distance between $(2,3)$ and $(4,5)$.\\
\vspace{1cm}
\textbf{Example 3:} $(0,0)$ to $(5,5)$.\\
\vspace{1cm}
\textbf{You Try:}
\begin{enumerate}
\item $(-4,10)$ and $(7,-6)$\\
\item $(10,2)$ and $(12, 17)$\\
\item $(-1,-5)$ and $(4,-6)$\\
\end{enumerate}
\section*{Equation of a Circle}
\subsection*{Graphing Circles}
A circle whose center is at the origin can be graphed with the equation $$r^2=x^2+y^2$$ where $r$ is the radius, $x$ is the $x$ coordinate, and $y$ is the $y$ coordinate. \\
\textbf{Example 1:} $x^2+y^2=1$
\begin{center}
\begin{tikzpicture}[scale=1.]
\draw[help lines] (-1.5,-1.5) grid (1.5,1.5);
\draw[style=thick] (0,0) circle (1);
\draw[<->] (-1.5,0) -- (1.5,0);
\draw[<->] (0,1.5) -- (0,-1.5);
\draw (0,0) -- (.7,.7);
\fill[black] (.7,.7) circle (.5ex);
\fill[black] (0,0) circle (.5ex);
\end{tikzpicture}
\end{center}
\textbf{Example 2:} $x^2+y^2=4$ -- what's the radius?
\begin{center}
\begin{tikzpicture}[scale=.85]
\draw[help lines] (-2.2,-2.2) grid (2.2,2.2);
\draw[style=thick] (0,0) circle (2);
\draw[<->] (-2.3,0) -- (2.3,0);
\draw[<->] (0,2.3) -- (0,-2.3);
\draw (0,0) -- (1.414,1.414);
\fill[black] (1.4,1.4) circle (.5ex);
\fill[black] (0,0) circle (.5ex);
\end{tikzpicture}
\end{center}
The circle can be moved around. In the following equation the center would be at the point $(h,k)$. Notice that there are minus signs in the parentheses.
$$r^2=(x-h)^2+(y-k)^2$$
\textbf{Example 3:} $9=(x-2)^2+(y+3)^2$
\begin{center}
\begin{tikzpicture}[scale=.75]
\draw[help lines] (-2,-6) grid (5,1);
\draw[<->] (-2.1,0) -- (5.2,0);
\draw[<->] (0,1.2) -- (0, -6.2);
\draw[style=thick] (2,-3) circle (3);
\fill[black] (2,-3) circle (1ex);
\end{tikzpicture}
\end{center}
\pagebreak
\subsection*{Writing Equations of Circles}
$$r^2=(x-h)^2+(y-k)^2$$
We can write the equation of any circle given certain information.\\
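For instance, a circle with center $(3,5)$ and radius $4$ has the equation $4^2=(x-3)^2+(y-5)^2$, that is, $(x-3)^2+(y-5)^2=16$.\\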
\textbf{Example 4:} Center at $(-1,4)$, radius $=3$
\vspace{1in}
\textbf{You Try:} Center at $(2,4)$, radius $=6$
\vspace{1in}
\hrulefill
\textbf{Example 5:} Center at $(-5,-2)$ point on circle $(1,1)$\\
\textbf{Hint:} The distance between them is the radius.\\
\vspace{1in}
\textbf{You Try:} Center at $(-5,-4)$ point on circle $(-4,-2)$
\vspace{1in}
\hrulefill
\textbf{Example 6:} What is the equation of this graph?
\begin{tikzpicture}[scale=.65]
\draw[help lines] (-2,-6) grid (5,1);
\draw[<->] (-2.1,0) -- (5.2,0);
\draw[<->] (0,1.2) -- (0, -6.2);
\draw[style=thick] (2,-3) circle (3);
\fill[black] (2,-3) circle (1ex);
\end{tikzpicture}
\section*{Review So Far...}
Mr. Wolf \hfill \textbf{NAME:\underline{\hspace{2in}}}\\
Pre-Calculus \hfill \textbf{Date: \underline{\hspace{1in}}}\\
\subsection*{Pythagorean Theorem}
Given the triangle diagram below, determine the unknown distance.
$$a^2+b^2=c^2 \hspace{1in} c=\sqrt{a^2+b^2}$$
\begin{center}
\begin{tikzpicture}[scale=.8]
\path[draw, shade] (0,0) --
(4,0) node[pos=.5,below] {a} --
(4,3)node[pos=.5,right] {b} --cycle;
\node (c) at (2,2) {c};
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item $a=6$, \hspace{1cm} $b=6$,\hspace{1cm} $c=$\underline{\hspace{1in}}\\
\item $a=$\underline{\hspace{1in}}, \hspace{1cm}$b=12$,\hspace{1cm} $c=15$\\
\item $a=8$, \hspace{1cm}$b=6$, \hspace{1cm}$c=$\underline{\hspace{1in}}\\
\item $a=5$, \hspace{1cm}$b=$\underline{\hspace{1in}}, \hspace{1cm}$c=5\sqrt{2}$\\
\item $a=10$,\hspace{1cm} $b=\sqrt{21}$,\hspace{1cm} $c=$\underline{\hspace{1in}}
\end{enumerate}
\hrulefill
\subsection*{The Distance Formula}
$$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$$
Find the distance between the two points.\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $(0,0),(12,5)$\\
\item $(-3,-5),(5,10)$\\
\item $(-10,-10),(10,11)$\\
\item $(-5,2), (19,-5)$\\
\item $(4,-2),(-1,8)$\\
\end{multicols}
\end{enumerate}
\pagebreak
\subsection*{Equation of a Circle}
$$r^2=x^2+y^2$$
$$r^2=(x-h)^2+(y-k)^2$$
Write the equation of a circle with the following criteria\\
\begin{enumerate}[resume]
\item $r=4$, center at the origin\\
\item $r=9$, center at $(4,9)$\\
\item $r=1$, center at $(-1,-1)$\\
\item $r=\sqrt{2}$, center at $(-3,7)$\\
\item $r=\sqrt{5}$, center at $(5,-1)$\\
\item center at origin, passes through the point $(9,40)$\\
\end{enumerate}
\hrulefill
Sketch the following equations.\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $x^2+y^2=4$\\
% \includegraphics[scale=.5]{graph1.jpg}
\item $(x-1)^2+(y+3)^2=9$\\
% \includegraphics[scale=.5]{graph1.jpg}
\end{multicols}
\end{enumerate}
\section*{Review}
Mr. Wolf \hfill \textbf{NAME:\underline{\hspace{2in}}}\\
Pre-Calculus \hfill \textbf{Date: \underline{\hspace{1in}}}\\
\subsection*{Pythagorean Theorem}
Given the triangle diagram below, determine the unknown distance. (2 points each)
$$a^2+b^2=c^2 \hspace{1in} c=\sqrt{a^2+b^2} \hspace{1in} a=\sqrt{c^2-b^2}$$
\begin{center}
\begin{tikzpicture}[scale=.8]
\path[draw, inner color=blue ] (0,0)--
(0,3) node[pos=.5,left] {a} --
(4,0) --cycle;
\node (c) at (2,2) {c};
\node (b) at (2,-.5) {b};
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item $a=21$, \hspace{1in} $b=20$,\hspace{1in} $c=$\underline{\hspace{1in}}\\
\item $a=$\underline{\hspace{1in}}, \hspace{.5cm} $b=40$,\hspace{1in} $c=41$\\
\item $a=15$, \hspace{1in}$b=8$, \hspace{1in}$c=$\underline{\hspace{1in}}\\
\item $a=1$, \hspace{1in}$b=$\underline{\hspace{1in}},\hspace{.5cm} $c=\sqrt{2}$\\
\item $a=1$,\hspace{1in} $b=\sqrt{2}$,\hspace{1in} $c=$\underline{\hspace{1in}}
\end{enumerate}
\hrulefill
\subsection*{The Distance Formula}
$$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$$
Find the distance between the two points. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $(-5,0),(3,15)$\\
\item $(-13,-7),(7,14)$\\
\item $(-5,2), (19,-5)$\\
\item $(5,-2),(0,8)$\\
\end{multicols}
\end{enumerate}
\pagebreak
\subsection*{Equation of a Circle}
$$r^2=x^2+y^2$$
$$r^2=(x-h)^2+(y-k)^2$$
Write the equation of a circle with the following criteria. (2 points each)\\
\begin{enumerate}[resume]
\item $r=3$, center at the origin\\
\item $r=5$, center at $(3,-5)$\\
\item $r=7$, center at $(-2,-9)$\\
\item $r=\sqrt{5}$, center at $(-2,4)$\\
\item $r=\sqrt{8}$, center at $(6,-2)$\\
\item center at $(1,1)$, passes through the point $(15,12)$\\
\end{enumerate}
\hrulefill
Sketch the following equations. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $x^2+y^2=16$\\
% \includegraphics[scale=.5]{graph1.jpg}
\item $(x+2)^2+(y-3)^2=4$\\
% \includegraphics[scale=.5]{graph1.jpg}
\end{multicols}
\end{enumerate}
\section*{Quiz.}
Mr. Wolf \hfill \textbf{NAME:\underline{\hspace{2in}}}\\
Pre-Calculus \hfill \textbf{Date: \underline{\hspace{1in}}}\\
\subsection*{Pythagorean Theorem}
Given the triangle diagram below, determine the unknown distance. (2 points each)
$$a^2+b^2=c^2 \hspace{1in} c=\sqrt{a^2+b^2} \hspace{1in} a=\sqrt{c^2-b^2} \hspace{1in} b=\sqrt{c^2-a^2}$$
\begin{center}
\begin{tikzpicture}[scale=.8]
\path[draw, fill=blue!15 ] (0,0)--
(0,3) node[pos=.5,left] {a} --
(4,0) --cycle;
\node (c) at (2,2) {c};
\node (b) at (2,-.5) {b};
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item $a=10$, \hspace{1in} $b=24$,\hspace{1in} $c=$\underline{\hspace{1in}}\\
\item $a=$\underline{\hspace{1in}}, \hspace{.5cm} $b=80$,\hspace{1in} $c=82$\\
\item $a=6$, \hspace{1in}$b=8$, \hspace{1in}$c=$\underline{\hspace{1in}}\\
\item $a=1$, \hspace{1in}$b=$\underline{\hspace{1in}},\hspace{.5cm} $c=\sqrt{3}$\\
\item $a=1$,\hspace{1in} $b=1$,\hspace{1in} $c=$\underline{\hspace{1in}}
\end{enumerate}
\hrulefill
\subsection*{The Distance Formula}
$$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$$
Find the distance between the two points. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $(-5,0),(-2,-4)$\\
\item $(-13,-20),(-4,20)$\\
\item $(-5,2), (0,14)$\\
\item $(5,-2),(-3,10)$\\
\end{multicols}
\end{enumerate}
\pagebreak
\subsection*{Equation of a Circle}
$$r^2=x^2+y^2$$
$$r^2=(x-h)^2+(y-k)^2$$
Write the equation of a circle with the following criteria. (2 points each)\\
\begin{enumerate}[resume]
\item $r=4$, center at the origin\\
\item $r=6$, center at $(-3,-4)$\\
\item $r=2$, center at $(0,7)$\\
\item $r=\sqrt{3}$, center at $(9,7)$\\
\item $r=\sqrt{7}$, center at $(6,0)$\\
\item center at $(0,0)$, passes through the point $(33,56)$\\
\end{enumerate}
\hrulefill
Sketch the following equations. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $x^2+y^2=25$\\
% \includegraphics[scale=.75]{graph1.jpg}
\item $(x-1)^2+(y+1)^2=16$\\
% \includegraphics[scale=.75]{graph1.jpg}
\end{multicols}
\end{enumerate}
\section*{Quiz:}
Mr. Wolf \hfill \textbf{NAME:\underline{\hspace{2in}}}\\
Pre-Calculus \hfill \textbf{Date: \underline{\hspace{1in}}}\\
\subsection*{Pythagorean Theorem}
Given the triangle diagram below, determine the unknown distance. (2 points each)
$$a^2+b^2=c^2 \hspace{1in} c=\sqrt{a^2+b^2} \hspace{1in} a=\sqrt{c^2-b^2} \hspace{1in} b=\sqrt{c^2-a^2}$$
\begin{center}
\begin{tikzpicture}[scale=.8]
\path[draw, fill=blue!15 ] (0,0)--
(0,3) node[pos=.5,left] {a} --
(4,0) --cycle;
\node (c) at (2,2) {c};
\node (b) at (2,-.5) {b};
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item $a=5$, \hspace{1in} $b=12$,\hspace{1in} $c=$\underline{\hspace{1in}}\\
\item $a=$\underline{\hspace{1in}}, \hspace{.5cm} $b=40$,\hspace{1in} $c=41$\\
\item $a=9$, \hspace{1in}$b=12$, \hspace{1in}$c=$\underline{\hspace{1in}}\\
\item $a=2$, \hspace{1in}$b=$\underline{\hspace{1in}},\hspace{.5cm} $c=3$\\
\item $a=2$,\hspace{1in} $b=2$,\hspace{1in} $c=$\underline{\hspace{1in}}
\end{enumerate}
\hrulefill
\subsection*{The Distance Formula}
$$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$$
Find the distance between the two points. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $(-15,-10),(-6,30)$\\
\item $(7,-2),(-1,10)$\\
\item $(0,10),(3,6)$\\
\item $(3,0), (8,12)$\\
\end{multicols}
\end{enumerate}
\pagebreak
\subsection*{Equation of a Circle}
$$r^2=x^2+y^2$$
$$r^2=(x-h)^2+(y-k)^2$$
Write the equation of a circle with the following criteria. (2 points each)\\
\begin{enumerate}[resume]
\item $r=1$, center at the origin\\
\item $r=2$, center at $(3,-4)$\\
\item $r=3$, center at $(-2,12)$\\
\item $r=\sqrt{5}$, center at $(-10,-10)$\\
\item $r=\sqrt{11}$, center at $(-1,2)$\\
\item center at $(-10,-10)$, passes through the point $(23,46)$\\
\end{enumerate}
\hrulefill
Sketch the following equations. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $x^2+y^2=25$\\
% \includegraphics[scale=.75]{graph1.jpg}
\item $(x-1)^2+(y+1)^2=16$\\
% \includegraphics[scale=.75]{graph1.jpg}
\end{multicols}
\end{enumerate}
\section*{Quiz.:}
Mr. Wolf \hfill \textbf{NAME:\underline{\hspace{2in}}}\\
Pre-Calculus \hfill \textbf{Date: \underline{\hspace{1in}}}\\
\subsection*{Pythagorean Theorem}
Given the triangle diagram below, determine the unknown distance. (2 points each)
$$a^2+b^2=c^2 \hspace{1in} c=\sqrt{a^2+b^2} \hspace{1in} a=\sqrt{c^2-b^2} \hspace{1in} b=\sqrt{c^2-a^2}$$
\begin{center}
\begin{tikzpicture}[scale=.8]
\path[draw, fill=blue!15 ] (0,0)--
(0,3) node[pos=.5,left] {a} --
(4,0) --cycle;
\node (c) at (2,2) {c};
\node (b) at (2,-.5) {b};
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item $a=3$,\hspace{1in} $b=3$,\hspace{1in} $c=$\underline{\hspace{1in}}\\
\item $a=70$, \hspace{1in} $b=24$,\hspace{1in} $c=$\underline{\hspace{1in}}\\
\item $a=2$, \hspace{1in}$b=$\underline{\hspace{1in}},\hspace{.5cm} $c=3$\\
\item $a=15$, \hspace{1in}$b=20$, \hspace{1in}$c=$\underline{\hspace{1in}}\\
\item $a=$\underline{\hspace{1in}}, \hspace{.5cm} $b=80$,\hspace{1in} $c=89$\\
\end{enumerate}
\hrulefill
\subsection*{The Distance Formula}
$$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$$
Find the distance between the two points. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $(0,0), (5,12)$\\
\item $(-6,1),(-3,-3)$\\
\item $(-10,-20),(-1,20)$\\
\item $(5,5),(-3,17)$
\end{multicols}
\end{enumerate}
\pagebreak
\subsection*{Equation of a Circle}
$$r^2=x^2+y^2$$
$$r^2=(x-h)^2+(y-k)^2$$
Write the equation of a circle with the following criteria. (2 points each)\\
\begin{enumerate}[resume]
\item $r=10$, center at the origin\\
\item $r=9$, center at $(-8,-7)$\\
\item $r=6$, center at $(5,-4)$\\
\item $r=\sqrt{3}$, center at $(2,1)$\\
\item $r=\sqrt{8}$, center at $(-4,0)$\\
\item center at $(0,0)$, passes through the point $(11,60)$\\
\end{enumerate}
\hrulefill
Sketch the following equations. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $x^2+y^2=25$\\
% \includegraphics[scale=.75]{graph1.jpg}
\item $(x-1)^2+(y+1)^2=16$\\
% \includegraphics[scale=.75]{graph1.jpg}
\end{multicols}
\end{enumerate}
\section*{Quiz::}
Mr. Wolf \hfill \textbf{NAME:\underline{\hspace{2in}}}\\
Pre-Calculus \hfill \textbf{Date: \underline{\hspace{1in}}}\\
\subsection*{Pythagorean Theorem}
Given the triangle diagram below, determine the unknown distance. (2 points each)
$$a^2+b^2=c^2 \hspace{1in} c=\sqrt{a^2+b^2} \hspace{1in} a=\sqrt{c^2-b^2} \hspace{1in} b=\sqrt{c^2-a^2}$$
\begin{center}
\begin{tikzpicture}[scale=.8]
\path[draw, fill=blue!15 ] (0,0)--
(0,3) node[pos=.5,left] {a} --
(4,0) --cycle;
\node (c) at (2,2) {c};
\node (b) at (2,-.5) {b};
\end{tikzpicture}
\end{center}
\begin{enumerate}
\item $a=10$, \hspace{1in} $b=\sqrt{21}$,\hspace{1in} $c=$\underline{\hspace{1in}}\\
\item $a=$\underline{\hspace{1in}}, \hspace{.5cm} $b=80$,\hspace{1in} $c=82$\\
\item $a=30$, \hspace{1in}$b=40$, \hspace{1in}$c=$\underline{\hspace{1in}}\\
\item $a=1$, \hspace{1in}$b=$\underline{\hspace{1in}},\hspace{.5cm} $c=\sqrt{3}$\\
\item $a=7$,\hspace{1in} $b=7$,\hspace{1in} $c=$\underline{\hspace{1in}}
\end{enumerate}
\hrulefill
\subsection*{The Distance Formula}
$$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$$
Find the distance between the two points. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $(5,-2),(-3,10)$\\
\item $(-8,2),(-2,-4)$\\
\item $(-10,-20),(-1,20)$\\
\item $(-5,-10), (5,14)$\\
\end{multicols}
\end{enumerate}
\pagebreak
\subsection*{Equation of a Circle}
$$r^2=x^2+y^2$$
$$r^2=(x-h)^2+(y-k)^2$$
Write the equation of a circle with the following criteria. (2 points each)\\
\begin{enumerate}[resume]
\item $r=5$, center at the origin\\
\item $r=6$, center at $(-4,-7)$\\
\item $r=3$, center at $(8,1)$\\
\item $r=\sqrt{10}$, center at $(11,12)$\\
\item $r=\sqrt{13}$, center at $(-14,0)$\\
\item center at $(0,0)$, passes through the point $(55,48)$\\
\end{enumerate}
\hrulefill
Sketch the following equations. (3 points each)\\
\begin{enumerate}[resume]
\begin{multicols}{2}
\item $x^2+y^2=25$\\
% \includegraphics[scale=.75]{graph1.jpg}
\item $(x-1)^2+(y+1)^2=16$\\
% \includegraphics[scale=.75]{graph1.jpg}
\end{multicols}
\end{enumerate}
\section*{Answer Key}
\begin{multicols}{3}
Quiz 1
\begin{enumerate}
\item 26
\item 18
\item 10
\item $\sqrt{2}$
\item $\sqrt{2}$
\item 5
\item 41
\item 13
\item $4\sqrt{13}$
\item $4^2=x^2+y^2$
\item $6^2=(x+3)^2+(y+4)^2$
\item $2^2=x^2+(y-7)^2$
\item $3=(x-9)^2+(y-7)^2$
\item $7=(x-6)^2+y^2$
\item $65^2=x^2+y^2$\\
\end{enumerate}
Quiz 2
\begin{enumerate}
\item 13
\item 9
\item 15
\item $\sqrt{5}$
\item $2\sqrt{2}$
\item 41
\item $4\sqrt{13}$
\item 5
\item 13
\item $1=x^2+y^2$
\item $2^2=(x-3)^2+(y+4)^2$
\item $3^2=(x+2)^2+(y-12)^2$
\item $5=(x+10)^2+(y+10)^2$
\item $11=(x+1)^2+(y-2)^2$
\item $65^2=(x+10)^2+(y+10)^2$
\end{enumerate}
Quiz 3
\begin{enumerate}
\item $3\sqrt{2}$
\item 74
\item $\sqrt{5}$
\item 25
\item 39
\item 13
\item 5
\item 41
\item $4\sqrt{13}$
\item $10^2=x^2+y^2$
\item $9^2=(x+8)^2+(y+7)^2$
\item $6^2=(x-5)^2+(y+4)^2$
\item $3=(x-2)^2+(y-1)^2$
\item $8=(x+4)^2+y^2$
\item $61^2=x^2+y^2$
\end{enumerate}
\end{multicols}
\hrulefill
Quiz 4
\begin{multicols}{2}
\begin{enumerate}
\item 11
\item 18
\item 50
\item $\sqrt{2}$
\item $7\sqrt{2}$
\item $4\sqrt{13}$
\item $6\sqrt{2}$
\item 41
\item 26
\item $5^2=x^2+y^2$
\item $6^2=(x+4)^2+(y+7)^2$
\item $3^2=(x-8)^2+(y-1)^2$
\item $10=(x-11)^2+(y-12)^2$
\item $13=(x+14)^2+y^2$
\item $73^2=x^2+y^2$
\end{enumerate}
\end{multicols}
\end{document} | {
"alphanum_fraction": 0.617509434,
"avg_line_length": 19.61994077,
"ext": "tex",
"hexsha": "619a77a4360cfe787af7a551f4fca2bc18fa6e4c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c3d1d38ac3728e7524469143cf8ed5372f36fe17",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "wolf-math/Big-Open-Source-Math-Book",
"max_forks_repo_path": "07_Trigonometry/07.1_prerequisites/5.1 prereq.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c3d1d38ac3728e7524469143cf8ed5372f36fe17",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "wolf-math/Big-Open-Source-Math-Book",
"max_issues_repo_path": "07_Trigonometry/07.1_prerequisites/5.1 prereq.tex",
"max_line_length": 287,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c3d1d38ac3728e7524469143cf8ed5372f36fe17",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "wolf-math/Big-Open-Source-Math-Book",
"max_stars_repo_path": "07_Trigonometry/07.1_prerequisites/5.1 prereq.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8221,
"size": 19875
} |
%!TEX root = ../report.tex
%
% Introduction
%
\section{Introduction}
%General description of the problem and its context, current solutions, and road map of the project.
%This days designers and developers have available a large number of Computer Aided Design(CAD) tools to produce their models. This tools are getting more and more powerful and with more features available. The traditional way of modeling where the user adds geometry one by one does not present performance issues about rendering. On the other hand it is being more and more common the concept of generative design, that is a design method that is based on a programming approach which allows architects and designers to model large amounts of shapes with significantly less effort.
%As technology evolves people have more powerful devices and they want to take advantage of that. They want to have more realistic experiences with larger, more detailed and complex contents.
%And this is observable in the graphic contents. With the recent extra high definition on screens and the computational power of the machines beating records, the graphic content has to follow up that characteristics in quantity as well as in quality. The issue is that the manual content generation takes a long work time from architects and designers to achieve this quality, which implies high costs.
Graphic contents are mainly used for entertainment, both in the gaming and movie industries, but they are also used in many other different areas. The fields of architecture and design, for instance, use this technology to experiment and model new designs, from small objects like a plate, to buildings or even entire cities. Unfortunately, manual modeling of large sets of potentially complex shapes is tiresome and very costly.
%In this field they also face the problems that raises from the modeling of really big sets of objects and forms manually, which is slow and error prone. \emph{This work addresses the problem of large content creation and will focus on the fields of architecture and design.}
The obvious solution to this problem is to hire more architects or designers in order to increase productivity and reduce the time needed. However, experience has shown that this solution is not scalable, because doubling the number of architects or designers working in a project will not double their overall productivity. Also, this solution has a big impact on financial costs, that would take immediately out of the market producers with fewer resources.
A solution for this problem is the use of generative design (GD). This is a design method that is based on a programming approach which allows architects and designers to model large volumes of complex shapes with significantly less effort. They can model cities, buildings, trees, and many other objects that are, usually, too big or complex for a manual approach.
\begin{wrapfigure}{r}{0.5\textwidth}
% \vspace{-15pt}
\centering
\includegraphics[width=0.5\textwidth]{img/Architecture/GD-Common-Pipeline.png}
\caption{Common Generative Design Pipeline}
\label{fig:GD_Pipeline}
\vspace{-15pt}
\end{wrapfigure}
Although most computer-aided design (CAD) applications provide programming languages for generative design, programs written in these languages have very limited portability. Additionally, the provided languages, such as AutoLisp, C++ or Visual Basic, are not pedagogical and are difficult to use even for experienced programmers. All these problems create barriers to the adoption of this approach, especially by users who are not used to coding~\cite{ramos_et_al:OASIcs:2014:4565}.
There are several generative design (GD) tools, such as Grasshopper\footnote{\url{http://www.grasshopper3d.com/}} and Rosetta\cite{Leit2012}, that aim to break down some of these barriers and bring these users closer to programming. With these tools, users can create their models using pedagogical and easy-to-use languages. These systems implement the straightforward pipeline presented in Figure~\ref{fig:GD_Pipeline}.
%\begin{figure}[htbp]
% \centering
% \includegraphics[width=0.45\textwidth]{img/Architecture/GD-Common-Pipeline.png}
% \caption{Common Generative Design Pipeline}
% \label{fig:GD_Pipeline}
%\end{figure}
Users implement their models through the GD tool interface. All the geometry data is then serialized and transferred through some transport mechanism. On the other side, within the CAD application, this data has to be deserialized. The CAD application processes the deserialized data, producing geometry. Finally, the geometry is moved to the GPU, which renders it. All these steps are time-consuming, due to the large amount of data that needs to be transferred, and this creates a performance problem.
One big difference between GD and traditional approaches is that users do not see the result of their program while they code. They follow a code-execute-visualize loop in which they make changes to the code, execute it, and visualize the resulting model. This makes it difficult for them to understand the impact of changes to their programs. It would be much more productive if they could easily understand the correlation between their program and the resulting model, and if they could experiment with values in their program and see the effects on the model. To help them with this, there is the concept of \emph{immediate feedback}. Immediate feedback is a mechanism that allows users to quickly see the results of the changes they make. It can be implemented, for instance, through sliders associated with values in the program: when a slider is moved, the effect of that change should be visualized immediately.
However, there is a problem: CAD applications are built mainly for manual modeling and are not prepared to quickly handle large amounts of geometry. Running the code produces much more geometry, and much faster, than manual modeling, so the user can create massive amounts of geometry that overload the CAD application. Because of this, it is hard to get good performance, especially with large models, which makes true immediate feedback impossible.
This work proposes a solution to this problem and aims to generate large volumes of geometry as close to real time as possible. It does so by skipping some steps of the pipeline while drastically decreasing the amount of data transferred between the remaining ones. First, since our goal is just visualization, we aim to get the geometry to the GPU as fast as possible, skipping the CAD layer and eliminating the first communication steps. Second, we reduce the amount of data transferred by sending only a very concise description of the geometry and generating the actual geometry on the GPU. To implement the generation of the geometry, procedural techniques such as Fractals (Section~\ref{ssub:fractals}), Cellular Automata (Section~\ref{sub:cellular_automaton}), and L-Systems (Section~\ref{ssub:l_systems}) will be applied.
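As a purely illustrative sketch (not the implementation developed in this work), the following Python fragment shows why a procedural description is so concise: a one-character axiom and a single L-System rewriting rule expand into over a thousand drawing commands, which a later stage can interpret into geometry on the GPU. The rule set shown is an assumption chosen only for the example.
\begin{verbatim}
# Minimal L-system expansion: an axiom plus rewriting rules grow into a
# long string of drawing commands (e.g. turtle graphics) that a later
# stage interprets as geometry. The rule below is illustrative only.

def expand(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

if __name__ == "__main__":
    rules = {"F": "F+F-F-F+F"}   # 'F' = draw forward, '+'/'-' = turn
    description = expand("F", rules, 4)
    print(len(description))      # the 1-character axiom yields 1249 commands
\end{verbatim}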
To improve visualization performance, techniques such as Level Of Detail (Section~\ref{ssub:level_of_detail}) and Occlusion Culling (Section~\ref{ssub:occlusion_culling}) are explored.
| {
"alphanum_fraction": 0.8082474227,
"avg_line_length": 134.7222222222,
"ext": "tex",
"hexsha": "2d6b1933b64f10de687f67f39e76a1aeeb8553d7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "496ecd2bf9885b6fa634cb958b696dad7a2166b7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "arturalkaim/v2ProceduralGeneration",
"max_forks_repo_path": "sections/3-introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "496ecd2bf9885b6fa634cb958b696dad7a2166b7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "arturalkaim/v2ProceduralGeneration",
"max_issues_repo_path": "sections/3-introduction.tex",
"max_line_length": 961,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "496ecd2bf9885b6fa634cb958b696dad7a2166b7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "arturalkaim/v2ProceduralGeneration",
"max_stars_repo_path": "sections/3-introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1502,
"size": 7275
} |
\documentclass{article}
\usepackage[letterpaper, margin=1.3cm]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{siunitx}
\usepackage[fleqn]{mathtools}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{mathrsfs}
\usepackage{datetime}
\usepackage{microtype}
\usepackage[l2tabu, orthodox]{nag}
\newcommand\aug{\fboxsep=-\fboxrule\!\!\!\fbox{\strut}\!\!\!}
\title{MATH 225 Assignment 4}
\author{Michael Kwok}
\date{2020-08-01}
\begin{document}
\maketitle
\subsection*{a}
Yes. Each eigenvalue has an algebraic multiplicity of 1.
Using the inequality $1 \leq d_{\lambda} \leq m_{\lambda}$:
since each eigenvalue has $m_{\lambda} = 1$, it follows that $d_{\lambda} = 1$.
Each eigenvalue therefore contributes an eigenspace with a basis of size 1, and since eigenvectors belonging to distinct eigenvalues are linearly independent, together they form a basis of eigenvectors, so the matrix is diagonalizable.
\subsection*{b}
No. By the Invertible Matrix Theorem, an invertible matrix cannot have 0 as an eigenvalue.
\subsection*{c}
The list of eigenvalues is: $\{ 4,4,1,1,0\}$.
Proof:
Let $v_1, v_2, \ldots, v_n$ be eigenvectors of $A$.
Each $v_i$ corresponds to an eigenvalue $\lambda_i$ of $A$.
\begin{align*}
A v_i &= \lambda_i v_i\\
A^2 v_i &= AA v_i\\
&= A \lambda_i v_i\\
&= \lambda_i A v_i\\
&= \lambda^2_i v_i
\end{align*}
\subsection*{d}
Yes, it is diagonalizable: if $A = PDP^{-1}$ with $D$ diagonal, then $A^2 = PD^2P^{-1}$, and $D^2$ is also diagonal.
\begin{align*}
A^2 &= AA\\
&= PDP^{-1}PDP^{-1}\\
&= P DD P^{-1}\\
&= P D^2 P^{-1}
\end{align*}
\end{document} | {
"alphanum_fraction": 0.6856330014,
"avg_line_length": 24.6666666667,
"ext": "tex",
"hexsha": "6159a52b2fb870fa6cea0ac2d32fff43d9b64777",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "n30phyte/SchoolDocuments",
"max_forks_repo_path": "Assignments/MATH225/MATH225As4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "n30phyte/SchoolDocuments",
"max_issues_repo_path": "Assignments/MATH225/MATH225As4.tex",
"max_line_length": 110,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "79652ec7e3345d67e67f0cffe3bea468708622bd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "n30phyte/SchoolDocuments",
"max_stars_repo_path": "Assignments/MATH225/MATH225As4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 503,
"size": 1406
} |
%mainfile: ../master.tex
\section{Parallelism}\label{sec:parallelism}
In the continued effort to squeeze as much power out of computers as possible, the computer science community is at a point where increasing the clock speed of processors is no longer as viable a solution as it used to be. This has spawned an increased interest in finding other ways to increase computational power. One of these ways is parallelism.
Parallelism is the act of dividing calculations into independently solvable parts, and then solving them on multiple processors before finally gathering the individual results and combining them. The main benefit of parallelising anything is to gain \emph{speed up} in the computation time of problems. This will be described in \cref{sup}.
With parallelism you gain \emph{speed up} by combining multiple processing units. This is seen in newer CPUs as multiple cores, but on a larger scale the same principle can be used to create supercomputers capable of performing immense calculations.
But even without a supercomputer, a distributed network of multiple computers can provide large amounts of parallel computing power. As this is a foreseeable future, we predict an increase in the access to, and need for, parallel systems.
\subsection{Speed Up}\label{sup}
For parallel computing, Gene Amdahl, an American computer scientist, defined a law describing the theoretical maximum speed up achievable using multiple processors. For example, if 95\% of a program can be parallelised, the maximum speed up is limited to 20×, no matter how many processors are used.
The law can be used to describe the speed up achievable by a given system, based on the fraction of parallelisable code, through the following equations \cite{wiki_amdahl}.
Let $n \in \mathbb{N}$ be the number of execution units and $B \in [0, 1]$ the fraction of the algorithm that is strictly serial. The time an algorithm takes to finish with $n$ execution units is then
\begin{equation}
T(n) = T(1)(B + \frac{1}{n} (1 - B))
\end{equation}
The theoretical speed up can then be described by:
\begin{equation}
S(n) = \frac{T(1)}{T(n)} = \frac{T(1)}{T(1)(B + \frac{1}{n} (1 - B))} = \frac{1}{B + \frac{1}{n} (1 - B)}
\end{equation}
By this law, maximising the fraction of parallelisable code increases the highest achievable speed up, thereby improving how the solution scales across multiple processors. A small numerical illustration is given below.
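As an illustration of the formula (our own example, with an assumed serial fraction of 5\%), the following Python snippet computes the theoretical speed up for different numbers of execution units:
\begin{verbatim}
def amdahl_speedup(B, n):
    """Theoretical speed up S(n) for serial fraction B on n units."""
    return 1.0 / (B + (1.0 - B) / n)

# With 5% strictly serial code (B = 0.05) the speed up approaches
# 1/B = 20 no matter how many execution units are added.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2))
\end{verbatim}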
\subsection{Types of Tasks}\label{top}
Tasks within a problem can be dependent on each other, in the sense that one task needs the output of a computation done by another task. This section will describe two types of problems relevant when doing parallel computations \cite{gribblelab,compLLNL}.
\subsubsection{Embarrassingly Parallel Problems}
A problem can be described as being \emph{embarrassingly parallel} when the tasks within the problem are independent of each other, and can therefore be parallelised without any concern for the order in which the tasks are executed. Such problems are trivially parallel in nature because of this independence. An example of this type of problem is the incrementation of a large matrix: the individual cells in the matrix are completely independent of each other and can therefore be incremented without regard to the other cells in the matrix.
\subsubsection{Serial Problems with Dependencies}
Although multiple similar simulations can be regarded as independent of each other, as utilised by the Monte Carlo method, most simulations do not satisfy the condition of being independent. Instead, they are inherently sequential and form a class of problems that cannot be split into independent sub-problems. In some cases it is not possible to gain any speed up by trying to parallelise a problem that is not parallelisable; the only thing a simulation designer achieves in this case is to add overhead to the computations. An example is calculating the Fibonacci series by $f(n) = f(n-1)+f(n-2)$, where $f(n)$ depends on first finding the previous values of $f$.
\subsection{Parallel Data Access}
When trying to solve problems that are not embarrassingly parallel with a parallel approach, some problems arise when multiple processes try to access the same memory. Among these, the most common are race conditions and deadlocks.
\subsubsection{Race Conditions}
This problem arises when multiple processes want to modify the same data in memory or a similar resource, and the outcome of the modification depends on the internal timing of the processes. We call the resource ``the shared resource'' and the code which works with the shared resource ``the critical region''. An example of a race condition is two concurrent processes that both want to raise the value of an integer by 1. On a typical modern processor, this operation could be split into the following three atomic operations\footnote{Atomic operations are operations which the hardware manufacturer ensures happen without disruptions and that cannot be split into smaller operations}:
\begin{itemize}
\item Copy the current value of the integer from main memory into register A
\item Calculate the value from register A, add 1 to it and place the result in register B
\item Take the new value from register B and override the integer in memory
\end{itemize}
Since we do not know when each process will try to access the memory, the value can either be raised by one, if both processes read it before the other writes it back, or raised by two, if one process finishes before the other copies the value from memory. This is a well-known problem that exists in many situations where multiple processes work with non-atomic operations on the same memory. Software solutions have been developed that ensure only one instance of the critical region has permission to access the shared resource at a time; in particular, Gary L. Peterson's algorithm, published in 1981, is still used today. Other solutions have been developed by creating atomic assembly instructions that can set a flag, thereby ensuring that only one critical region accesses the shared resource at a time. The sketch below illustrates the problem and a lock-based solution.
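The following Python sketch (our own illustration, not taken from the cited sources) shows the lost-update problem with an unprotected counter and removes it by guarding the critical region with a lock:
\begin{verbatim}
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(times):
    global counter
    for _ in range(times):
        value = counter          # copy the value from memory
        counter = value + 1      # write back; another thread may interleave

def safe_increment(times):
    global counter
    for _ in range(times):
        with lock:               # only one critical region at a time
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_increment))   # often less than 200000
print("with lock:   ", run(safe_increment))     # always 200000
\end{verbatim}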
\subsubsection{Deadlocks}
Deadlocks are a type of problem that occurs when two or more processes each wait for at least one of the others to finish before finishing themselves, and therefore none of them ever finishes. This is a common problem within concurrent programming. Edward G. Coffman described four conditions that must all be present if a deadlock is to occur~\cite{Coffman:1971}:
\begin{description}
\item[Mutual Exclusion] Resources can not be shared simultaneously by two processes.
\item[Hold and Wait] A process is at some point holding one resource, while waiting for another.
\item[No Preemption] A resource cannot be released externally from a process; only the process holding it can release it.
\item[Circular Wait] A process must be waiting for a resource which is held by another process, which is waiting for the first process to release a resource.
\end{description}
By knowing these four conditions, we can try to prevent deadlocks by making sure that at least one of them is never present, as the sketch below illustrates for the circular-wait condition.
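As an illustration of removing the circular-wait condition (again our own example), the Python sketch below imposes a global lock ordering: every thread acquires the two locks in the same order, so no cycle of processes each holding one lock and waiting for the other can form.
\begin{verbatim}
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name):
    # Both threads acquire lock_a before lock_b (a global lock ordering).
    # Acquiring the locks in opposite orders in the two threads could
    # instead satisfy the circular-wait condition and deadlock.
    with lock_a:
        with lock_b:
            print(name, "holds both locks")

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start()
t2.start()
t1.join()
t2.join()
\end{verbatim}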
| {
"alphanum_fraction": 0.7986751152,
"avg_line_length": 103.6417910448,
"ext": "tex",
"hexsha": "c8e55bec68ab79bef7d7c5ce892c6ddb98c6cb6a",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2016-04-12T20:49:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-04-12T20:49:43.000Z",
"max_forks_repo_head_hexsha": "1fb4ce407174224efce92aa3ee5e1ac5704a307b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "simonvandel/P4",
"max_forks_repo_path": "Report/Analysis/Parallelism.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1fb4ce407174224efce92aa3ee5e1ac5704a307b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "simonvandel/P4",
"max_issues_repo_path": "Report/Analysis/Parallelism.tex",
"max_line_length": 813,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "1fb4ce407174224efce92aa3ee5e1ac5704a307b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "simonvandel/TLDR",
"max_stars_repo_path": "Report/Analysis/Parallelism.tex",
"max_stars_repo_stars_event_max_datetime": "2015-02-18T13:38:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-02-18T13:38:49.000Z",
"num_tokens": 1478,
"size": 6944
} |
\section{Abstractions}
\label{sec:abstractions}
StackwalkerAPI contains two interfaces: the Stackwalking Interface and the
Callback Interface. The stackwalking interface is used to walk the call stack,
query information about stack frames, and collect basic information about
threads. The Callback Interface is used to provide custom mechanisms for walking
a call stack. Users who operate in one of StackwalkerAPI's standard
configurations do not need to use the Callback Interface.
Figure~\ref{fig:object-ownership} shows the ownership hierarchy for
StackwalkerAPI's classes. Ownership is a ``contains'' relationship; if one class
owns another, then instances of the owner class maintain an exclusive instance
of the other. For example, in Figure~\ref{fig:object-ownership} each Walker
instance contains exactly one instance of a ProcessState object. No other
instance of Walker uses that instance of ProcessState.
The remainder of this
section briefly describes the six classes that make up StackwalkerAPI's two
interfaces. For more details, see the class descriptions in
Section~\ref{sec:api}.
\input{fig/object-ownership}
\subsection{Stackwalking Interface}
\begin{description}
\item [Walker] The Walker class is the top-level class used for collecting
stackwalks. It provides a simple interface for requesting a stackwalk.
Each Walker object is associated with one process, but may walk the call
stacks of multiple threads within that process.
\item [Frame] A call stack is returned as a vector of Frame objects, where
each Frame object represents a stack frame. It can provide information
about the stack frame and basic information about the function, signal
handler or other mechanism that created it. Users can request
information such as the symbolic name associated with the Frame object,
and values of its saved registers.
\end{description}
\subsection{Callback Interface}
StackwalkerAPI includes default implementations of the Callback Interface on
each of its supported platforms. These default implementations allow
StackwalkerAPI to work "out of the box" in a standard configuration on each
platform. Users can port StackwalkerAPI to new platforms or customize its call
stack walking behavior by implementing their own versions of the classes in the
Callback Interface.
\begin{description}
\item [FrameStepper] A FrameStepper object describes how to walk through a
single type of stack frame. Users can provide an implementation of this
interface that allows StackwalkerAPI to walk through new types of stack
frames. For example, the DyninstAPI uses this interface to extend
StackwalkerAPI to allow it to walk through stack frames created by
instrumentation code.
\item [StepperGroup] A StepperGroup is a collection of FrameStepper objects and
criteria that describes when to use each type of FrameStepper. These
criteria are based on simple address ranges in the code space of the target
process. In the above example with DyninstAPI, it would be the job of the
StepperGroup to identify a stack frame as belonging to instrumentation code
and use the instrumentation FrameStepper to walk through it.
\item [ProcessState] A ProcessState interface describes how to access data in
the target process. To walk a call stack, StackwalkerAPI needs to access
both registers and memory in the target process; ProcessState provides an
interface that StackwalkerAPI can use to access that information.
StackwalkerAPI includes two default implementations of ProcessState for each
platform: one to collect a first party stackwalk in the current process, and
one that uses a debugger interface to collect a third party stackwalk in
another process.
\item [SymbolLookup] The SymbolLookup interface is used to associate a symbolic
name with a stack frame. A stackwalk returns a collection of addresses in
the code space of a binary. This class uses the binary's symbol table to map
those addresses into symbolic names. A default implementation of this class,
which uses the DynSymtab package, is provided with StackwalkerAPI. A user
could, for example, use this interface to allow StackwalkerAPI to use libelf
to look up symbol names instead.
\end{description}
| {
"alphanum_fraction": 0.7892314772,
"avg_line_length": 55.0126582278,
"ext": "tex",
"hexsha": "8ad06d4707bea51d4b5efcdce163d56c18f749eb",
"lang": "TeX",
"max_forks_count": 18,
"max_forks_repo_forks_event_max_datetime": "2021-10-14T10:17:39.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-11-04T03:44:22.000Z",
"max_forks_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "Vtech181/Path_Armor",
"max_forks_repo_path": "Dyninst-8.2.1/stackwalk/doc/2-Abstractions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "Vtech181/Path_Armor",
"max_issues_repo_path": "Dyninst-8.2.1/stackwalk/doc/2-Abstractions.tex",
"max_line_length": 80,
"max_stars_count": 47,
"max_stars_repo_head_hexsha": "9879a85c7ba56b443aeccde730778dcb6d55a39d",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "Vtech181/Path_Armor",
"max_stars_repo_path": "Dyninst-8.2.1/stackwalk/doc/2-Abstractions.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-18T11:23:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-14T23:12:32.000Z",
"num_tokens": 907,
"size": 4346
} |
%-------------------------------------
% LaTeX Resume
% Author : Aaqib Bashir
% License : MIT
%-------------------------------------
\documentclass[letterpaper,12pt]{article}[leftmargin=*]
\usepackage[empty]{fullpage}
\usepackage{enumitem}
\usepackage{ifxetex}
\ifxetex
\usepackage{fontspec}
\usepackage[xetex]{hyperref}
\else
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[pdftex]{hyperref}
\fi
\usepackage{fontawesome}
\usepackage[sfdefault,light]{FiraSans}
\usepackage{anyfontsize}
\usepackage{xcolor}
\usepackage{tabularx}
%-------------------------------------------------- SETTINGS HERE --------------------------------------------------
% Header settings
\def \fullname {Aaqib Bashir}
\def \subtitle {}
\def \linkedinicon {\faLinkedin}
\def \linkedinlink {https://www.linkedin.com/in/aaqibbashir/}
\def \linkedintext {aaqibbashir}
\def \phoneicon {\faPhone}
\def \phonetext {+91 000 111 22 33}
\def \emailicon {\faEnvelope}
\def \emaillink {mailto: [email protected]}
\def \emailtext {[email protected]}
\def \githubicon {\faGithub}
\def \githublink {https://github.com/aaqibb13}
\def \githubtext {aaqibb13}
\def \websiteicon {\faGlobe}
\def \websitelink {https://researchgate.net/profile/Aaqib_Bashir2}
\def \websitetext {Aaqib\_Bashir2}
\def \headertype {\doublecol} % \singlecol or \doublecol
% Misc settings
\def \entryspacing {-0pt}
\def \bulletstyle {\faAngleRight}
% Define colours
\definecolor{primary}{HTML}{000000}
\definecolor{secondary}{HTML}{0D47A1}
\definecolor{accent}{HTML}{263238}
%\definecolor{links}{HTML}{1565C0}
\definecolor{links}{HTML}{008080}
\definecolor{ternary}{HTML}{008080}
%-------------------------------------------------------------------------------------------------------------------
% Defines to make listing easier
\def \linkedin {\linkedinicon \hspace{3pt}\href{\linkedinlink}{\linkedintext}}
\def \phone {\phoneicon \hspace{3pt}{ \phonetext}}
\def \email {\emailicon \hspace{3pt}\href{\emaillink}{\emailtext}}
\def \github {\githubicon \hspace{3pt}\href{\githublink}{\githubtext}}
\def \website {\websiteicon \hspace{3pt}\href{\websitelink}{\websitetext}}
% Adjust margins
\addtolength{\oddsidemargin}{-0.55in}
\addtolength{\evensidemargin}{-0.55in}
\addtolength{\textwidth}{1.1in}
\addtolength{\topmargin}{-0.6in}
\addtolength{\textheight}{1.1in}
% Define the link colours
\hypersetup{
colorlinks=true,
urlcolor=links,
}
% Set the margin alignment
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
%-------------------------
% Custom commands
% Sections
\renewcommand{\section}[2]{\vspace{5pt}
\colorbox{ternary}{\color{white}\raggedbottom\normalsize\textbf{{#1}{\hspace{7pt}#2}}}
}
% Entry start and end, for spacing
\newcommand{\resumeEntryStart}{\begin{itemize}[leftmargin=2.5mm]}
\newcommand{\resumeEntryEnd}{\end{itemize}\vspace{\entryspacing}}
% Itemized list for the bullet points under an entry, if necessary
\newcommand{\resumeItemListStart}{\begin{itemize}[leftmargin=4.5mm]}
\newcommand{\resumeItemListEnd}{\end{itemize}}
% Resume item
\renewcommand{\labelitemii}{\bulletstyle}
\newcommand{\resumeItem}[1]{
\item\small{
{#1 \vspace{-2pt}}
}
}
% Entry with title, subheading, date(s), and location
\newcommand{\resumeEntryTSDL}[4]{
\vspace{-1pt}\item[]
\begin{tabularx}{0.97\textwidth}{X@{\hspace{60pt}}r}
\textbf{\color{primary}#1} & {\firabook\color{accent}\small#2} \\
\textit{\color{accent}\small#3} & \textit{\color{accent}\small#4} \\
\end{tabularx}\vspace{-6pt}
}
% Entry with title and date(s)
\newcommand{\resumeEntryTD}[2]{
\vspace{-1pt}\item[]
\begin{tabularx}{0.97\textwidth}{X@{\hspace{60pt}}r}
\textbf{\color{primary}#1} & {\firabook\color{accent}\small#2} \\
\end{tabularx}\vspace{-6pt}
}
% Entry for special (skills)
\newcommand{\resumeEntryS}[2]{
\item[]\small{
\textbf{\color{primary}#1 }{ #2 \vspace{-6pt}}
}
}
% Double column header
\newcommand{\doublecol}[6]{
\begin{tabularx}{\textwidth}{Xr}
{
\begin{tabular}[c]{l}
\fontsize{35}{45}\selectfont{\color{primary}{{\textbf{\fullname}}}} \\
{\textit{\subtitle}} % You could add a subtitle here
\end{tabular}
} & {
\begin{tabular}[c]{l@{\hspace{1.5em}}l}
{\small#4} & {\small#1} \\
{\small#5} & {\small#2} \\
{\small#6} & {\small#3}
\end{tabular}
}
\end{tabularx}
}
% Single column header
\newcommand{\singlecol}[6]{
\begin{tabularx}{\textwidth}{Xr}
{
\begin{tabular}[b]{l}
\fontsize{35}{45}\selectfont{\color{primary}{{\textbf{\fullname}}}} \\
{\textit{\subtitle}} % You could add a subtitle here
\end{tabular}
} & {
\begin{tabular}[c]{l}
{\small#1} \\
{\small#2} \\
{\small#3} \\
{\small#4} \\
{\small#5} \\
{\small#6}
\end{tabular}
}
\end{tabularx}
}
\begin{document}
%-------------------------------------------------- BEGIN HERE --------------------------------------------------
%---------------------------------------------------- HEADER ----------------------------------------------------
\headertype{\linkedin}{\github}{\website}{\phone}{\email}{} % Set the order of items here
\vspace{-10pt} % Set a negative value to push the body up, and the opposite
%-------------------------------------------------- EDUCATION --------------------------------------------------
\section{\faGraduationCap}{Education}
\resumeEntryStart
\resumeEntryTSDL
{University of Kashmir}{2015 -- 2019}
{B.E. Computer Engineering (\textbf{70.2\%})}{Hazratbal, J\&K, India}
\resumeEntryEnd
%-------------------------------------------------- EXPERIENCE --------------------------------------------------
\section{\faGraduationCap}{Internships}
\resumeEntryStart
\resumeEntryTSDL
{Brillwork}{April 2019 -- January 2020}
{Researcher}{Switzerland}
\resumeEntryTSDL
{CNS INFOTEL PVT. LTD}{Feb 2018}
{Network Engineer Trainee}{Srinagar, J\&K, India}
\resumeEntryEnd
%------------------------------------------------------------PUBLICATIONS----------------------------------%
\section{\faPencilSquare}{Publications}
\resumeEntryStart
\resumeEntryTSDL
{Blockchain Driven Access Control Mechanisms, Models and Frameworks: A State of the Art Review}{October 2020}
{Under Review}{}
\resumeEntryEnd
\resumeEntryStart
\resumeEntryTSDL
{Applicability of Mobile Contact Tracing in Fighting Pandemic (COVID-19): Issues, Challenges and Solutions}{September 2020}
{Computer Science Review -- IF:7.707 (Q1)}{Elsevier}
\resumeEntryEnd
\resumeEntryStart
\resumeEntryTSDL
{Taxonomy of Blockchain Driven Access Control Frameworks, Models and Schemes for IoT}{September 2020}
{Presented at: 2nd International Workshop on Blockchain Technologies for Robotic Systems, Naples, Italy }{(Springer)}
\resumeEntryEnd
\resumeEntryStart
\resumeEntryTSDL
{Group Testing: Leveraging a Mathematical Tool for Effective COVID-19 Testing}{August 2020}
{Elsevier's SSRN}{}
\resumeEntryEnd
%-------------------------------------------------- MEMBERSHIPS --------------------------------------------------
\section{\faTags}{Memberships}
\resumeEntryStart
\resumeEntryTD
{Institute of Electrical and Electronics Engineers (Student Member)}{2020}
\resumeEntryEnd
\resumeEntryStart
\resumeEntryTD
{The Society of Digital Information and Wireless Communication (Member)}{Since 2019}
\resumeEntryEnd
%-------------------------------------------------- PROGRAMMING SKILLS --------------------------------------------------
\section{\faGears}{Skills}
\resumeEntryStart
\resumeEntryS{Programming Languages } {\LaTeX, Python, C, NodeJS}
\resumeEntryS{Hard Skills } {Research \& Development, High Learnability Quotient, Scientific Writing}
\resumeEntryEnd
\section{\faCertificate}{Awards, Participations \& Certifications}
\resumeEntryStart
\resumeEntryTSDL
{Reviewer Recognition}{December 2020}
{Peer Reviewer}{IEEE ACCESS}
\resumeEntryTSDL
{Indian Workshop of Post Quantum Cryptography (IWPQC)}{December 2020}
{Student Participation}{(Online Event)}
\resumeEntryTSDL
{Algebraic Number Theory Symposium 2020}{June--July 2020}
{Student Participation}{University of Auckland (Online)}
\resumeEntryTSDL
{Statement of Participation}{July 2020}
{Information Security}{The Open University}
\resumeEntryTSDL
{Certificate of Brilliance}{July 2020}
{Software Testing Certification Course}{Eduonix Learning Solutions Pvt Ltd}
\resumeEntryTSDL
{Certificate of Completion}{May 2020}
{COVID-19 Contact Tracing}{Johns Hopkins University (Coursera)}
\resumeEntryTSDL
{Reviewer Certificate}{April 2020}
{Manuscript JCS-6251}{Journal of Computer Science}
\resumeEntryTSDL
{Certificate of Participation}{June 2019}
{Data Science \& Information Security Bootcamp}{IUST \& MACAUT WB}
\resumeEntryTSDL
{Certificate of Participation}{August 2018}
{Workshop on IoT}{CETPA INFOTECH PVT. LTD}
\resumeEntryEnd
\end{document}
\documentclass[]{spie} %>>> use for US letter paper
%\documentclass[a4paper]{spie} %>>> use this instead for A4 paper
%\documentclass[nocompress]{spie} %>>> to avoid compression of citations
\renewcommand{\baselinestretch}{1.0} % Change to 1.65 for double spacing
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{graphicx}
\graphicspath{{./graph/}} % Images folders
\DeclareGraphicsExtensions{.jpg,.png,.pdf,.eps}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\usepackage{fontspec}
\usepackage[caption=false]{subfig}
\title{Liquid Nitrogen Cooling in IR Thermography applied to steel specimen}
\author[a]{L. Lei}
\author[b]{G. Ferrarini}
\author[b]{A. Bortoin}
\author[b]{G. Cadelano}
\author[b]{P. Bison}
\author[a]{X. Maldague}
\affil[a]{LVSN, University Laval, 1065 avenue de la Médecine, Québec (Québec) G1V 0A6 Canada}
\affil[b]{CNR-ITC, Corso Stati Uniti 4, 35127 Padova PD, Italy}
\authorinfo{Further author information: \\ L. Lei: E-mail: [email protected]\\ P. Bison: E-mail: [email protected]}
% Option to view page numbers
\pagestyle{empty} % change to \pagestyle{plain} for page numbers
\setcounter{page}{301} % Set start page numbering at e.g. 301
\begin{document}
\maketitle
\begin{abstract}
Pulsed Thermography (PT) is one of the most common methods among the Active Thermography procedures of Thermography for NDT \& E (Nondestructive Testing \& Evaluation), owing to the rapidity and convenience of this inspection technique. Flashes or lamps are often used to heat the samples in traditional PT. This paper mainly explores exactly the opposite external stimulation in IR Thermography: cooling instead of heating. A steel sample with flat-bottom holes of different depths and sizes has been tested (along with a preliminary test on industrial CFRP samples). Liquid nitrogen (LN2) is sprinkled on the surface of the specimen and the whole process is captured by a thermal camera. To obtain a good comparison, two other classic NDT techniques, Pulsed Thermography and Lock-In Thermography, are also employed; in particular, the Lock-In method is implemented with three different frequencies. In the image processing procedure, the Principal Component Thermography (PCT) method has been applied to all thermal image sequences. For the Lock-In results, both Phase and Amplitude images are generated by the Fast Fourier Transform (FFT). Results show that all techniques reveal part of the defects, while the LN2 technique displays the flaws only at the beginning of the test. Moreover, a binary threshold post-processing is applied to the thermal images, and by comparing these images to a binary map of the location of the defects, the corresponding Receiver Operating Characteristic (ROC) curves are established and discussed. A comparison of the results indicates that the best ROC curve is obtained by the Flash technique.
\end{abstract}
% Include a list of keywords after the abstract
\keywords{ Infrared Thermography, NDT \& E, Liquid Nitrogen cooling, ROC curve }
\section{INTRODUCTION}
\label{sec:introduction} % \label{} allows reference to this section
In the Nondestructive Testing \& Evaluation (NDT \& E) field, Active InfraRed Thermography is a technique widely used to assess the condition of material components\cite{Maldague2001theory}. Traditionally, Pulsed Thermography (PT) deploys a thermal stimulation pulse (flash or lamp heating) to produce a thermal contrast between the feature of interest and the background, and then monitors the time evolution of the surface temperature. Owing to its rapidity and convenience, numerous studies have been devoted to this technique\cite{Maldague1993Nondestructive,Maldague1994bInfra,2011-ClementeIbarra-Castanedo,2007-Ibarra-Castanedo,duan2013quantitative}.
However, only a limited number of studies investigating a cold approach (cooling as the external stimulation in Active Infrared Thermography) have been performed in the past, mostly on the inspection of industrial products\cite{endohdynamical2012,2012-LewisHom,Lei2016detection}. Thus, the advantages and convenience of using a cold stimulation in IR Thermography still remain to be investigated in detail and better understood. The aim of this paper is to explore the opposite external stimulation in IR Thermography: cooling instead of heating. The three methods (the two other traditional techniques, Pulsed Thermography and Lock-in Thermography, act as the reference) will be applied to a steel slab with flat-bottom holes of different sizes. The thermographic images of the experiments will be processed to eventually produce a binary map of the location of the defects. This map will be statistically evaluated in terms of sensitivity and specificity\cite{Fawcett2006} by comparison with the ’true’ map of the defects, furnishing a ranking of the three stimulation methods.
\section{Experimental setup} % (fold)
\label{sec:experimental_setup}
A one-side stimulation approach, also known as the reflection scheme, is often used in Infrared Thermography for NDT \& E: both the stimulation device and the camera stay on the same side of the sample under test. This approach, as applied in practice, is shown in Figure~\ref{Exp_setup}.
\begin{figure}[ht]
\centering
\subfloat[Pulsed Thermography set-up]
{
\includegraphics[scale=0.3]{graph/Flash_Setup.png}
}
%\hspace{5pt}
\subfloat[Lock-in Thermography set-up]
{
\includegraphics[scale=0.3]{graph/LIT_setup.png}
}
\caption{Experimental set-up in the \textit{reflection} mode}
\label{Exp_setup}
\end{figure}
The following equipment is set up for this study:
\begin{itemize}
\item Infrared Camera FLIR SC3000 (320$\times$240 pixels, 50Hz, GaAs, 8-9 $\mu m$)
\item Two pairs of halogen lamps with a heat source of \textbf{(???)$W$} (\textbf{REF TO Paolo})
\item One pair of modulated halogen lamps \textbf{(???$W$)} serving as the Lock-in stimulation
\item An insulated bottle full of liquid nitrogen
\end{itemize}
\subsection{Specimen} % (fold)
\label{sub:specimen}
In this study, a steel specimen prepared with flat-bottom holes of different depths and sizes is examined. Its dimensions are depicted in Figure~\ref{specimen}.
\begin{figure}[ht]
\centering
% \begin{tabular}{c} %% tabular useful for creating an array of images
\includegraphics[scale=0.4]{graph/specimen_schema.pdf}
% \end{tabular}
\caption{Steel sample dimension details, with flat-bottom holes of different depths and sizes.}
%>>>> use \label inside caption to get Fig. number with \ref{}
\label{specimen}
\end{figure}
There are seventeen holes, whose diameters vary from 0.4~$cm$ to 3~$cm$ and whose depths vary from 0.3~$cm$ to 0.9~$cm$ (measured from the bottom). The whole thickness is 3~$cm$ (\textbf{???Verify with Paolo}). This specimen is painted before the test, in order to increase its emissivity and to obtain a homogeneous external stimulation.
% subsection specimen (end)
\subsection{Stimulation Techniques} % (fold)
\label{sub:stimulation_techniques}
Three external stimulations are deployed on the sample, in order to obtain a good comparison of the results:
\begin{itemize}
\item Pulsed Thermography (PT)
\item Lock-in Thermography (LIT)
\item Liquid Nitrogen cooling (LN2)
\end{itemize}
Known as a traditional and fast technique in NDT \& E, Pulsed Thermography acts as the reference during this test. Neglecting heat exchange with the environment, a pulse of energy $Q$ delivered on a layer of thickness $L$, characterized by a density $\rho$, a specific heat $C_p$ and a thermal conductivity $\lambda$ (or a thermal diffusivity $\alpha$), gives rise to the following temperature evolution on the heated surface:
\begin{equation}
T(t) = \frac{Q}{\rho C_p L}[1+2\sum_{n=1}^{\infty} e^{-\frac{n^2 \pi ^2\alpha t}{L^2}}]
\label{eq_pt}
\end{equation}
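For clarity, we recall that the thermal diffusivity appearing above is defined in terms of the other material properties by
\begin{equation*}
\alpha = \frac{\lambda}{\rho\, C_p} .
\end{equation*}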
For the case of modulated periodic heating, three different angular frequencies $\omega$ of 100~$Hz$ (LIT16), 200~$Hz$ (LIT8) and 400~$Hz$ (LIT4) (\textbf{???VERIFY with Paolo}) are used in Lock-in Thermography.
By the convolution integral, Eq~(\ref{eq_pt}) becomes:
\begin{equation}
T(t) = \frac{W}{\lambda}\frac{\alpha}{L}\int_0^t d\tau \Big(1+\sin(\omega \tau - \frac{\pi}{2})\Big)\Big\{1+2\sum_{n=1}^{\infty} e^{-\frac{n^2 \pi ^2\alpha(t-\tau)}{L^2}}\Big\}
\end{equation}
where $W$ is the absorbed heating power.
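As a reminder of a standard Lock-in Thermography relation (independent of this particular setup), the modulation frequency controls the probed depth through the thermal diffusion length
\begin{equation*}
\mu = \sqrt{\frac{2\alpha}{\omega}} ,
\end{equation*}
so that lower modulation frequencies probe deeper below the surface.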
The liquid nitrogen used in the test is poured directly onto the sample to cool it: LN2 is sprinkled from the specimen center and allowed to spread toward the edges. The whole capture duration is 500 frames at an image frequency of 50~$Hz$, i.e., 10 seconds of recording. The pouring (cooling) time is about 2 seconds, corresponding to 100 frames of the resulting sequence. Its set-up is shown in Figure~\ref{Exp_LN2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{graph/LN2_setup.png}
\caption{Experimental set-up for LN2 cooling}
\label{Exp_LN2}
\end{figure}
% subsection stimulation_techniques (end)
% section experimental_setup (end)
\section{Processing Methods} % (fold)
\label{sec:processing_methods}
The following image-processing techniques and data-analysis methods are employed for this study:
\begin{itemize}
\item Principal Component Thermography (PCT)
\item Phase and Amplitude images Fast Fourier Transform (FFT)
\item Receiver Operating Characteristic curves (ROC Curves)
\end{itemize}
\subsection{Principal Component Thermography (PCT)}
The Principal Component Thermography technique\cite{Rajic2002} uses “singular value decomposition (SVD) to reduce the matrix of observations to a highly compact statistical representation of the spatial and temporal variations relating to contrast information associated with underlying structural flaws”.
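In the usual implementation of this idea (sketched here with our own notation), the sequence of thermograms is unfolded into a matrix $A$ whose rows correspond to the pixels and whose columns to the acquisition times, and then factorized as
\begin{equation*}
A = U\, \Sigma\, V^{T} ,
\end{equation*}
where the columns of $U$ (the empirical orthogonal functions) are reshaped into the component images used in the following.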
\subsection{FFT in Phase and Amplitude for LIT}
Besides being a common technique for NDT \& E, the Fast Fourier Transform in LIT\cite{wu1998lock} is also one of the most applied techniques in IR Thermography; it is based on the periodic heating of the object under test. A thermal wave is thereby generated and propagates inside the material. In real experimental cases the thermal wave is composed of a principal frequency and several harmonics, and the amplitude of the Fast Fourier Transform is a function of frequency. By selecting the component with the highest amplitude it is possible to produce a map of the phase at the corresponding frequency, in which the defects appear enhanced.
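Concretely, for each pixel the discrete Fourier transform of the recorded temperature evolution $T(n\Delta t)$, $n = 0, \dots, N-1$, is computed with the standard definition
\begin{equation*}
F_k = \sum_{n=0}^{N-1} T(n\Delta t)\, e^{-2\pi i\, nk/N} = \mathrm{Re}_k + i\, \mathrm{Im}_k ,
\end{equation*}
from which the amplitude image $A_k = \sqrt{\mathrm{Re}_k^{2} + \mathrm{Im}_k^{2}}$ and the phase image $\phi_k = \arctan(\mathrm{Im}_k / \mathrm{Re}_k)$ at frequency index $k$ are built.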
\subsection{ROC Curve analysis} % (fold)
\label{sub:roc_curve_analysis}
The Receiver Operating Characteristic (ROC) curve is a statistical technique that helps to visualize, organize and select classifiers based on their performance. The curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. More details about concepts and definitions can be found in [T. Fawcett 2006]\cite{Fawcett2006}.
In this study, a binary map of the defect locations is built and compared to the post-processed gray-scale images; the lay-out is sketched in Figure~\ref{binary}. The main algorithm for the calculation of TPR and FPR is as follows (the rate definitions used at each threshold are recalled after the list):
\begin{enumerate}
\item Get the best gray-scale result from post-processing thermal images as the test image;
\item Resize the defect map as the same size of the test image;
\item Choose a number of threshold steps $N$ ($N=1000$ in this study) and establish the threshold values $0, \frac{1}{N}, \frac{2}{N}, \dots, 1$ $(=\frac{N}{N})$;
\item For each threshold, binarize the test image with that threshold and compare it to the defect map, obtaining the corresponding TPR and FPR values;
\item Iterate the binarization and comparison over all thresholds to plot the whole curve.
\end{enumerate}
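For completeness, the rates computed at each threshold follow the standard definitions, where TP, FP, TN and FN denote the pixel counts of true positives, false positives, true negatives and false negatives obtained from the comparison with the defect map:
\begin{equation*}
\mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} , \qquad
\mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}} .
\end{equation*}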
\begin{figure}[ht]
\centering
\subfloat[Binary map of defects]
{
\includegraphics[scale=0.195]{graph/Schema_done.png}
\label{bin_map}
}
\subfloat[Cool map]
{
\includegraphics[scale=0.8]{graph/Cool_ROC.png}
\label{cool_map}
}
\caption{One example of ROC analysis (LN2 results) and binary map}
\label{binary}
\end{figure}
% subsection roc_curve (end)
% section methods (end)
\section{Results \& Discussion} % (fold)
\label{sec:results_&_discussion}
Figure~\ref{raw_results} illustrates the raw thermal images of PT and LN2, while the Lock-In FFT results can be found in Figure~\ref{LIT_results}.
\begin{figure}[ht]
\centering
\subfloat[Flash Raw Frame 23]
{
\includegraphics[scale=0.5]{graph/Flash_Raw23.png}
\label{Flash_raw23}
}
\hspace{10pt}
\subfloat[Flash Raw Frame 60]
{
\includegraphics[scale=0.5]{graph/Flash_Raw60.png}
\label{Flash_raw60}
}
\hspace{10pt}
\subfloat[LN2 Raw Frame 41]
{
\includegraphics[scale=0.5]{graph/Cool_Raw.png}
\label{LN2_raw}
}
\caption{Thermal Raw Images of PT and LN2 stimulation techniques}
\label{raw_results}
\end{figure}
\begin{figure}[ht]
\centering
\subfloat[LIT4 FFT in Amplitude]
{
\includegraphics[scale=0.5]{graph/LIT4_AMP.png}
}
\hspace{10pt}
\subfloat[LIT4 FFT in Phase]
{
\includegraphics[scale=0.5]{graph/LIT4_PHA.png}
}
\hspace{10pt}
\subfloat[LIT8 FFT in Amplitude]
{
\includegraphics[scale=0.5]{graph/LIT8_AMP.png}
}
\hspace{10pt}
\subfloat[LIT8 FFT in Phase]
{
\includegraphics[scale=0.5]{graph/LIT8_PHA.png}
\label{LIT8_ph}
}
\hspace{10pt}
\subfloat[LIT16 FFT in Amplitude]
{
\includegraphics[scale=0.5]{graph/LIT16_AMP.png}
\label{LIT16_ph}
}
\hspace{10pt}
\subfloat[LIT16 FFT in Phase]
{
\includegraphics[scale=0.5]{graph/LIT16_PHA.png}
}
\caption{FFT in amplitude and Phase results for LIT}
\label{LIT_results}
\end{figure}
\subsection{Thermal images comparison}
From the results above, the raw images in Figure~\ref{raw_results} show that the most detectable flaws are the ones with a high aspect ratio (i.e., diameter-to-depth). Moreover, for the PT stimulation, a small hole with diameter 0.4~$cm$ and depth\footnote{It should be noted that the depth values mentioned here and in the following are measured from the bottom of the sample; the real corresponding depths are therefore obtained by subtracting these values from the thickness.} 0.9~$cm$ (upper left in Figure~\ref{Flash_raw23}) appears in frame 23 and disappears afterwards, while two holes with diameter 2~$cm$ and depth 0.9~$cm$ and two with diameter 3~$cm$ and depth 0.5~$cm$ (center of Figure~\ref{Flash_raw60}) appear in frame 60.
For the LIT results, the FFT in Amplitude gives better flaw detectability than the FFT in Phase, as there is some noise in all Phase images. LIT8 and LIT16 reveal about four more flaws than LIT4. However, in the FFT Phase images of LIT8 and LIT16 some inverted gray-scale values appear, which might be due to the reverse-image problem.
After processing by PCT, Figure~\ref{PCT_results} exhibits a clearer outcome. It can be observed that most of the flaws are visible, especially in the PT image, while fewer are visible in the LIT4 image.
%The corresponding PCT results are represented in Figure~\ref{PCT_results}.
\begin{figure}[ht]
\centering
\subfloat[Flash PCT 2nd Image]{
\includegraphics[scale=0.5]{graph/Flash_PCT_2.png}
}
\hspace{10pt}
\subfloat[LN2 PCT 2nd Image]{
\includegraphics[scale=0.5]{graph/Cool_PCT_2.png}
}
\hspace{10pt}
\subfloat[LIT4 PCT 3rd Image]{
\includegraphics[scale=0.5]{graph/LIT4_PCT_3.png}
}
\\ %\hspace{10pt}
\subfloat[LIT8 PCT 3rd Image]{
\includegraphics[scale=0.5]{graph/LIT8_PCT_3.png}
}
\hspace{10pt}
\subfloat[LIT16 PCT 3rd Image]{
\includegraphics[scale=0.5]{graph/LIT16_PCT_3.png}
}
% \includegraphics[scale=0.4]{graph/LIT4_PCT_3.png}
% \includegraphics[scale=0.4]{graph/LIT8_PCT_3.png}
% \includegraphics[scale=0.4]{graph/LIT16_PCT_3.png}
\caption{PCT results of corresponding technique}
\label{PCT_results}
\end{figure}
Comparing the processed thermal images, the following observations can be made:
\begin{itemize}
\item All techniques present part of the flaws in the sample;
\item The PCT post-processing method improves the results for all stimulation techniques;
\item The most defects are exhibited by the Flash stimulation with PCT processing;
%\item For LIT8 and LIT16, there are some reverse image question.
\end{itemize}
\subsection{Corresponding ROC curves comparison}
The ROC curves obtained by comparing the binary map of the defect locations to the above results are presented in Figure~\ref{ROC_curve}.
\begin{figure}[ht]
\centering
\subfloat[ROC from PCT Results]
{
\includegraphics[scale=0.5]{graph/ROC_PCT.png}
}
\hspace{10pt}
\subfloat[ROC from LIT FFT in amplitude results]
{
\includegraphics[scale=0.5]{graph/ROC_LIT_AMP.png}
}
\caption{ROC curves obtained from above results}
\label{ROC_curve}
\end{figure}
Here the \textit{sensitivity} is the true positive rate and \textit{1-specificity} is the false positive rate.
From the curves of the PCT results, one can easily notice that the five curves have almost the same classification performance at the beginning. When the \textit{tp rate} reaches $0.3$, the PT curve becomes the nearest to the northwest corner (where the \textit{tp rate} is higher, the \textit{fp rate} is lower, or both). The second one is the LN2 curve, which however shows a slightly higher \textit{tp rate} than PT after the \textit{fp rate} reaches $0.4$. The LIT8 and LIT16 curves show almost the same performance before the \textit{tp rate} attains $0.7$; after that, LIT16 has a higher \textit{tp rate} than LIT8. The LIT4 curve, in contrast, stays closest to the diagonal line $y=x$, which represents the strategy of randomly guessing a class.
The same holds for the curves of the LIT Amplitude results: LIT4 shows an unfavorable classification performance, and LIT16 again has a slightly higher \textit{tp rate} than LIT8. Because of the reverse-image problem, only the LIT Amplitude ROC curves are plotted.
% section results_&_discussion (end)
\section{Conclusion} % (fold)
\label{sec:conclusion}
In this study an opposite external stimulation, cooling instead of heating, is investigated in IR Thermography for NDT \& E.
A steel specimen is used to test the three different stimulations, which are compared both through the thermal images and through an ROC analysis.
Results show that all techniques present part of the flaws in the sample, whereas the LN2 technique reveals the defects only at the beginning of the test; this may be due to the high conductivity of steel.
Among the thermal results, the PCT post-processing method improves the results for all techniques, and the most defects are exhibited by the Flash stimulation with PCT processing.
The ROC curve analysis has provided a straightforward classification comparison, in which the best curve is obtained by the Flash technique with PCT processing.
In future work, other common composite materials such as CFRP (carbon fiber reinforced polymer) and GFRP (glass fiber reinforced polymer) will be chosen as specimens. The liquid nitrogen pouring may be replaced by spraying onto the sample surface, which can reduce the problem of inhomogeneous cooling. To enhance the penetration of heat inside the sample, heating one side while cooling the other side might also be taken into consideration.
Even though the LN2 technique has not shown sufficient advantages in this study, this exploration of an opposite way of external stimulation in InfraRed Thermography might propel new ideas and approaches for NDT \& E.
% Begin the Introduction below the Keywords. The manuscript should not have headers, footers, or page numbers. It should be in a one-column format. References are often noted in the text and cited at the end of the paper.
% % \begin{table}[ht]
% % \caption{Margins and print area specifications.}
% % \label{tab:Paper Margins}
% % \begin{center}
% % \begin{tabular}{|l|l|l|}
% % \hline
% % \rule[-1ex]{0pt}{3.5ex} Margin & A4 & Letter \\
% % \hline
% % \rule[-1ex]{0pt}{3.5ex} Top margin & 2.54 cm & 1.0 in. \\
% % \hline
% % \rule[-1ex]{0pt}{3.5ex} Bottom margin & 4.94 cm & 1.25 in. \\
% % \hline
% % \rule[-1ex]{0pt}{3.5ex} Left, right margin & 1.925 cm & .875 in. \\
% % \hline
% % \rule[-1ex]{0pt}{3.5ex} Printable area & 17.15 x 22.23 cm & 6.75 x 8.75 in. \\
% % \hline
% % \end{tabular}
% % \end{center}
% % \end{table}
% LaTeX margins are related to the document's paper size. The paper size is by default set to USA letter paper. To format a document for A4 paper, the first line of this LaTeX source file should be changed to \verb|\documentclass[a4paper]{spie}|.
% Authors are encouraged to follow the principles of sound technical writing, as described in Refs.~\citenum{Alred03} and \citenum{Perelman97}, for example. Many aspects of technical writing are addressed in the {\em AIP Style Manual}, published by the American Institute of Physics. It is available on line at \url{https://publishing.aip.org/authors}. A spelling checker is helpful for finding misspelled words.
% An author may use this LaTeX source file as a template by substituting his/her own text in each field. This document is not meant to be a complete guide on how to use LaTeX. For that, please see the list of references at \url{http://latex-project.org/guides/} and for an online introduction to LaTeX please see \citenum{Lees-Miller-LaTeX-course-1}.
% % \section{FORMATTING OF MANUSCRIPT COMPONENTS}
% This section describes the normal structure of a manuscript and how each part should be handled. The appropriate vertical spacing between various parts of this document is achieved in LaTeX through the proper use of defined constructs, such as \verb|\section{}|. In LaTeX, paragraphs are separated by blank lines in the source file.
% At times it may be desired, for formatting reasons, to break a line without starting a new paragraph. This situation may occur, for example, when formatting the article title, author information, or section headings. Line breaks are inserted in LaTeX by entering \verb|\\| or \verb|\linebreak| in the LaTeX source file at the desired location.
% %\subsection{Title and Author Information}
% %\label{sec:title}
% The article title appears centered at the top of the first page. The title font is 16 point, bold. The rules for capitalizing the title are the same as for sentences; only the first word, proper nouns, and acronyms should be capitalized. Avoid using acronyms in the title. Keep in mind that people outside your area of expertise might read your article. At the first occurrence of an acronym, spell it out, followed by the acronym in parentheses, e.g., noise power spectrum (NPS).
% The author list is in 12-pt. regular, centered. Omit titles and degrees such as Dr., Prof., Ph.D., etc. The list of affiliations follows the author list. Each author's affiliation should be clearly noted. Superscripts may be used to identify the correspondence between the authors and their respective affiliations. Further author information, such as e-mail address, complete postal address, and web-site location, may be provided in a footnote by using \verb|\authorinfo{}|, as demonstrated above.
% \subsection{Abstract and Keywords}
% The title and author information is immediately followed by the Abstract. The Abstract should concisely summarize the key findings of the paper. It should consist of a single paragraph containing no more than 250 words. The Abstract does not have a section number. A list of up to eight keywords should immediately follow the Abstract after a blank line. These keywords will be included in a searchable database at SPIE.
% \subsection{Body of Paper}
% The body of the paper consists of numbered sections that present the main findings. These sections should be organized to best present the material. See Sec.~\ref{sec:sections} for formatting instructions.
% \subsection{Appendices}
% Auxiliary material that is best left out of the main body of the paper, for example, derivations of equations, proofs of theorems, and details of algorithms, may be included in appendices. Appendices are enumerated with uppercase Latin letters in alphabetic order, and appear just before the Acknowledgments and References. Appendix~\ref{sec:misc} contains more about formatting equations and theorems.
% \subsection{Acknowledgments}
% In the Acknowledgments section, appearing just before the References, the authors may credit others for their guidance or help. Also, funding sources may be stated. The Acknowledgments section does not have a section number.
% \subsection{References}
% SPIE is able to display the references section of your paper in the SPIE Digital Library, complete with links to referenced journal articles, proceedings papers, and books, when available. This added feature will bring more readers to your paper and improve the usefulness of the SPIE Digital Library for all researchers. The References section does not have a section number. The references are numbered in the order in which they are cited. Examples of the format to be followed are given at the end of this document.
% The reference list at the end of this document is created using BibTeX, which looks through the file {\ttfamily report.bib} for the entries cited in the LaTeX source file. The format of the reference list is determined by the bibliography style file {\ttfamily spiebib.bst}, as specified in the \verb|\bibliographystyle{spiebib}| command. Alternatively, the references may be directly formatted in the LaTeX source file.
% For books\cite{Lamport94,Alred03,Goossens97}, the listing includes the list of authors, book title, publisher, city, page or chapter numbers, and year of publication. A reference to a journal article\cite{Metropolis53} includes the author list, title of the article (in quotes), journal name (in italics, properly abbreviated), volume number (in bold), inclusive page numbers, and year. By convention\cite{Lamport94}, article titles are capitalized as described in Sec.~\ref{sec:title}. A reference to a proceedings paper or a chapter in an edited book\cite{Gull89a} includes the author list, title of the article (in quotes), volume or series title (in italics), volume number (in bold), if applicable, inclusive page numbers, publisher, city, and year. References to an article in the SPIE Proceedings may include the conference name (in italics), as shown in Ref.~\citenum{Hanson93c}. For websites\cite{Lees-Miller-LaTeX-course-1} the listing includes the list of authors, title of the article (in quotes), website name, article date, website address either enclosed in chevron symbols ('\(<\)' and '\(>\)'), underlined or linked, and the date the website was accessed.
% If you use this formatting, your references will link your manuscript to other research papers that are in the CrossRef system. Exact punctuation is required for the automated linking to be successful.
% Citations to the references are made using superscript numerals, as demonstrated in the above paragraph. One may also directly refer to a reference within the text, e.g., ``as shown in Ref.~\citenum{Metropolis53} ...''
% \subsection{Footnotes}
% Footnotes\footnote{Footnotes are indicated as superscript symbols to avoid confusion with citations.} may be used to provide auxiliary information that doesn't need to appear in the text, e.g., to explain measurement units. They should be used sparingly, however.
% Only nine footnote symbols are available in LaTeX. If you have more than nine footnotes, you will need to restart the sequence using the command \verb|\footnote[1]{Your footnote text goes here.}|. If you don't, LaTeX will provide the error message {\ttfamily Counter too large.}, followed by the offending footnote command.
% % \section{SECTION FORMATTING}
% %\label{sec:sections}
% Section headings are centered and formatted completely in uppercase 11-point bold font. Sections should be numbered sequentially, starting with the first section after the Abstract. The heading starts with the section number, followed by a period. In LaTeX, a new section is created with the \verb|\section{}| command, which automatically numbers the sections.
% Paragraphs that immediately follow a section heading are leading paragraphs and should not be indented, according to standard publishing style\cite{Lamport94}. The same goes for leading paragraphs of subsections and sub-subsections. Subsequent paragraphs are standard paragraphs, with 14-pt.\ (5 mm) indentation. An extra half-line space should be inserted between paragraphs. In LaTeX, this spacing is specified by the parameter \verb|\parskip|, which is set in {\ttfamily spie.cls}. Indentation of the first line of a paragraph may be avoided by starting it with \verb|\noindent|.
% \subsection{Subsection Attributes}
% The subsection heading is left justified and set in 11-point, bold font. Capitalization rules are the same as those for book titles. The first word of a subsection heading is capitalized. The remaining words are also capitalized, except for minor words with fewer than four letters, such as articles (a, an, and the), short prepositions (of, at, by, for, in, etc.), and short conjunctions (and, or, as, but, etc.). Subsection numbers consist of the section number, followed by a period, and the subsection number within that section.
% \subsubsection{Sub-subsection attributes}
% The sub-subsection heading is left justified and its font is 10 point, bold. Capitalize as for sentences. The first word of a sub-subsection heading is capitalized. The rest of the heading is not capitalized, except for acronyms and proper names.
% \section{FIGURES AND TABLES}
% Figures are numbered in the order of their first citation. They should appear in numerical order and on or after the same page as their first reference in the text. Alternatively, all figures may be placed at the end of the manuscript, that is, after the Reference section. It is preferable to have figures appear at the top or bottom of the page. Figures, along with their captions, should be separated from the main text by at least 0.2 in.\ or 5 mm.
% Figure captions are centered below the figure or graph. Figure captions start with the figure number in 9-point bold font, followed by a period; the text is in 9-point normal font; for example, ``{\footnotesize{Figure 3.} Original image...}''. See Fig.~\ref{fig:example} for an example of a figure caption. When the caption is too long to fit on one line, it should be justified to the right and left margins of the body of the text.
% Tables are handled identically to figures, except that their captions appear above the table.
% \begin{figure} [ht]
% \begin{center}
% \begin{tabular}{c} %% tabular useful for creating an array of images
% \includegraphics[height=5cm]{mcr3b.eps}
% \end{tabular}
% \end{center}
% \caption[example]
% %>>>> use \label inside caption to get Fig. number with \ref{}
% { \label{fig:example}
% Figure captions are used to describe the figure and help the reader understand it's significance. The caption should be centered underneath the figure and set in 9-point font. It is preferable for figures and tables to be placed at the top or bottom of the page. LaTeX tends to adhere to this standard.}
% \end{figure}
% \section{MULTIMEDIA FIGURES - VIDEO AND AUDIO FILES}
% Video and audio files can be included for publication. See Tab.~\ref{tab:Multimedia-Specifications} for the specifications for the mulitimedia files. Use a screenshot or another .jpg illustration for placement in the text. Use the file name to begin the caption. The text of the caption must end with the text ``http://dx.doi.org/doi.number.goes.here'' which tells the SPIE editor where to insert the hyperlink in the digital version of the manuscript.
% Here is a sample illustration and caption for a multimedia file:
% \begin{figure} [ht]
% \begin{center}
% \begin{tabular}{c}
% \includegraphics[height=5cm]{MultimediaFigure.jpg}
% \end{tabular}
% \end{center}
% \caption[example]
% { \label{fig:video-example}
% A label of “Video/Audio 1, 2, …” should appear at the beginning of the caption to indicate to which multimedia file it is linked . Include this text at the end of the caption: \url{http://dx.doi.org/doi.number.goes.here}}
% \end{figure}
% \begin{table}[ht]
% \caption{Information on video and audio files that must accompany a manuscript submission.}
% \label{tab:Multimedia-Specifications}
% \begin{center}
% \begin{tabular}{|l|l|l|}
% \hline
% \rule[-1ex]{0pt}{3.5ex} Item & Video & Audio \\
% \hline
% \rule[-1ex]{0pt}{3.5ex} File name & Video1, video2... & Audio1, audio2... \\
% \hline
% \rule[-1ex]{0pt}{3.5ex} Number of files & 0-10 & 0-10 \\
% \hline
% \rule[-1ex]{0pt}{3.5ex} Size of each file & 5 MB & 5 MB \\
% \hline
% \rule[-1ex]{0pt}{3.5ex} File types accepted & .mpeg, .mov (Quicktime), .wmv (Windows Media Player) & .wav, .mp3 \\
% \hline
% \end{tabular}
% \end{center}
% \end{table}
% \appendix %>>>> this command starts appendixes
% \section{MISCELLANEOUS FORMATTING DETAILS}
% \label{sec:misc}
% It is often useful to refer back (or forward) to other sections in the article. Such references are made by section number. When a section reference starts a sentence, Section is spelled out; otherwise use its abbreviation, for example, ``In Sec.~2 we showed...'' or ``Section~2.1 contained a description...''. References to figures, tables, and theorems are handled the same way.
% \subsection{Formatting Equations}
% Equations may appear in line with the text, if they are simple, short, and not of major importance; e.g., $\beta = b/r$. Important equations appear on their own line. Such equations are centered. For example, ``The expression for the field of view is
% \begin{equation}
% \label{eq:fov}
% 2 a = \frac{(b + 1)}{3c} \, ,
% \end{equation}
% where $a$ is the ...'' Principal equations are numbered, with the equation number placed within parentheses and right justified.
% Equations are considered to be part of a sentence and should be punctuated accordingly. In the above example, a comma follows the equation because the next line is a subordinate clause. If the equation ends the sentence, a period should follow the equation. The line following an equation should not be indented unless it is meant to start a new paragraph. Indentation after an equation is avoided in LaTeX by not leaving a blank line between the equation and the subsequent text.
% References to equations include the equation number in parentheses, for example, ``Equation~(\ref{eq:fov}) shows ...'' or ``Combining Eqs.~(2) and (3), we obtain...'' Using a tilde in the LaTeX source file between two characters avoids unwanted line breaks.
% \subsection{Formatting Theorems}
% To include theorems in a formal way, the theorem identification should appear in a 10-point, bold font, left justified and followed by a period. The text of the theorem continues on the same line in normal, 10-point font. For example,
% \noindent\textbf{Theorem 1.} For any unbiased estimator...
% Formal statements of lemmas and algorithms receive a similar treatment.
\newpage
\acknowledgments % equivalent to \section*{ACKNOWLEDGMENTS}
This research was supported by the governments of Italy and Quebec, and by the Natural
Sciences and Engineering Research Council of Canada (NSERC). We are also thankful to
our collaborative institute CNR-ITC Padova which provided expertise that greatly helped
in this research.
% References
\bibliography{Biblio_th} % bibliography data in report.bib
\bibliographystyle{spiebib} % makes bibtex use spiebib.bst
\end{document}
\documentclass[11pt]{amsbook}
\usepackage{../HBSuerDemir} % ------------------------
\begin{document}
% ++++++++++++++++++++++++++++++++++++++
\hPage{b2p2/312}
% ++++++++++++++++++++++++++++++++++++++
\begin{align*}
4A \frac{\hDif A}{\hDif t} = x(y^2+z^2) \frac{\hDif x}{\hDif t} + y(z^2+x^2) \frac{\hDif y}{\hDif t} + z(x^2+y^2) \frac{\hDif z}{\hDif t}
\end{align*}
When P is at $P_{O}(6, 0, 0)$, Q is at $Q_{O}(0, 9, 0)$ and R is at $R_{O}(0, 0, 12)$, with area $A = |P_{O} Q_{O} R_{O}| = 9\sqrt{61}$. Then
\begin{align*}
4 \cdot 9\sqrt{61}\frac{\hDif A}{\hDif t} = 6(225)\cdot 2 + 9(180)\cdot 3 + 12(117)\cdot 4 \\
9\sqrt{61}\frac{\hDif A}{\hDif t} = 675 + 27\cdot 45 + 12\cdot 117 \\
\sqrt{61}\frac{\hDif A}{\hDif t} = 75 + 135 + 156 = 366 \\
\frac{\hDif A}{\hDif t} = \frac{366}{\sqrt{61}} = 6\sqrt{61}\ \text{unit}^2/\text{sec}
\end{align*}
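As a check, the area of the triangle cut off on the coordinate axes by the intercepts $x$, $y$, $z$ is $A = \tfrac{1}{2}\sqrt{x^2y^2 + y^2z^2 + z^2x^2}$, which at this instant indeed gives
\begin{align*}
A = \tfrac{1}{2}\sqrt{36\cdot 81 + 81\cdot 144 + 144\cdot 36} = \tfrac{1}{2}\sqrt{19764} = 9\sqrt{61}.
\end{align*}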
\subsection{TAYLOR'S FORMULA AND SERIES}
\begin{thm} If $f(x, y)$ has continuous partial derivatives up to order $n+1$ in a neighborhood of $(a, b) \in \upsilon_{f}$, then
\begin{align*}
f(x, y) = f(a, b) + \sum_{k=1}^{n} \frac{1}{k!} \Big( (x-a) \frac{\partial}{\partial x} + (y-b) \frac{\partial}{\partial y} \Big)^{k} f(x, y) \Big|_{(a, b)} + R_{n+1}
\end{align*}
where the remainder is given by
\begin{align*}
R_{n+1} = \frac{1}{(n+1)!} \Big( (x-a) \frac{\partial}{\partial x} + (y-b) \frac{\partial}{\partial y} \Big)^{n+1} f(x, y) \Big|_{(x^*, y^*)}
\end{align*}
with $(x^*, y^*)$ a point on the open segment $(P_{O}P)$ joining $P_{O}(a, b)$ to $P(x, y)$.
\end{thm}
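For example, for $n = 1$ the formula reduces to the familiar linear (tangent plane) approximation with remainder,
\begin{align*}
f(x, y) = f(a, b) + (x-a) \frac{\partial f}{\partial x}(a, b) + (y-b) \frac{\partial f}{\partial y}(a, b) + R_{2}.
\end{align*}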
\begin{proof} Since every point of the line segment [$P_{O}$P] can be represented parametrically as
\begin{align*}
x = a+ht,\qquad y = b+kt\qquad 0\leq t \leq 1,
\end{align*}
\includegraphics[width=0.35\textwidth]{images/b2p2-312-fig01}
The end points of the segment correspond to $t=0$ and $t=1$ (observe that $h$, $k$ are direction numbers of the line segment).
Substituting (2) in $f(x, y)$ gives the function
\end{proof}
\end{document}
\documentclass[norsk,titlepage]{ntnuthesis}
\title{Your Long Title}
\shorttitle{Your Short Title}
\author{Your Name}
\shortauthor{Your Short Name}
\date{\displaydate{date}}
\addbibresource{thesis.bib}
\input{glossary.tex} % add glossary and acronym lists before document
\begin{document}
\input{chapters/0a-abstract.tex}
\input{chapters/0b-sammendrag.tex}
\tableofcontents
\listoffigures
\listoftables
\lstlistoflistings
\printglossary[type=\acronymtype] % Print acronyms
\printglossary % Print glossary
\input{chapters/1-introduction}
\input{chapters/2-usage.tex}
\input{chapters/3-structure.tex}
\input{chapters/4-conclusion.tex}
\chapter*{\bibname}
\printbibliography[heading=none]
\input{chapters/papers.tex}
\appendix
\input{appendices/a-appendix.tex}
\end{document}
\subsubsection{Time Series Normalization}
\begin{frame}{Time Series Normalization}{$\eta$ Normalization}
Let $Q = (q_1, \dots, q_n)$ be a time series of length $n$
\begin{block}{$\eta$ Normalization}
$\eta(q_i) = q_i - \mu$
\end{block}
\begin{block}{Mean of $Q$}
$\mu = \frac{1}{n} \sum \limits_{i=1}^{n} q_i$
\end{block}
\end{frame}
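% Added worked example (the numbers are ours, not taken from the plotted data);
% it illustrates the effect of subtracting the mean.
\begin{frame}{Time Series Normalization $\eta$}{Numeric Example}
  Given the time series $Q = (2, 4, 4, 4, 5, 5, 7, 9)$ with $n = 8$
  \begin{block}{Mean}
    $\mu = \frac{1}{8}(2+4+4+4+5+5+7+9) = 5$
  \end{block}
  \begin{block}{$\eta$-normalized series}
    $\eta(Q) = (-3, -1, -1, -1, 0, 0, 2, 4)$
  \end{block}
\end{frame}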
\begin{frame}<handout:0>{Time Series Normalization $\eta$}{Example}
\begin{center}
\resizebox {\textwidth} {!} {
\begin{tabular}{cc}
\resizebox* {!} {0.3\textwidth} {
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
xlabel=time,
ylabel=acceleration,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight,
reverse legend,
legend pos=south east]
\addplot[red, thick, mark=none] table {../data/fig/norm1/q.dat};
\addlegendentry{Q}
\addplot[blue, thick, mark=none] table {../data/fig/norm1/c.dat};
\addlegendentry{C}
\end{axis}
\end{tikzpicture}
} & \quad
\resizebox* {!} {0.3\textwidth} {
\begin{tabular}[b]{ll}
\begin{turn}{90}
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-40,
ymax=40,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[red, ultra thick, mark=none] table {../data/fig/norm1/q.dat};
\end{axis}
\end{tikzpicture}
\end{turn} \hspace*{3em} &
\begin{tikzpicture}
\begin{axis}[
enlargelimits=false,
ymin=0,
ymax=47,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
colorbar,
colormap/viridis high res]
\addplot[matrix plot*,
mesh/cols=48,
point meta=explicit] table[meta=C] {../data/fig/norm1/matrix.dat};
\end{axis}
\end{tikzpicture}\\
&
\\[1em]
&
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-40,
ymax=40,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[blue, ultra thick, mark=none] table {../data/fig/norm1/c.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
}
\end{tabular}
}
\end{center}
\end{frame}
\begin{frame}{Time Series Normalization $\eta$}{Example}
\begin{center}
\resizebox {\textwidth} {!} {
\begin{tabular}{cc}
\resizebox* {!} {0.3\textwidth} {
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
xlabel=time,
ylabel=acceleration,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight,
reverse legend,
legend pos=south east]
\addplot[gray, quiver={u=\thisrow{u}, v=\thisrow{v}}] table {../data/fig/norm1/path.dat};
\addplot[red, thick, mark=none] table {../data/fig/norm1/q.dat};
\addlegendentry{Q}
\addplot[blue, thick, mark=none] table {../data/fig/norm1/c.dat};
\addlegendentry{C}
\end{axis}
\end{tikzpicture}
} & \quad
\resizebox* {!} {0.3\textwidth} {
\begin{tabular}[b]{ll}
\begin{turn}{90}
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-40,
ymax=40,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[red, ultra thick, mark=none] table {../data/fig/norm1/q.dat};
\end{axis}
\end{tikzpicture}
\end{turn} \hspace*{3em} &
\begin{tikzpicture}
\begin{axis}[
enlargelimits=false,
ymin=0,
ymax=47,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
colorbar,
colormap/viridis high res]
\addplot[matrix plot*,
mesh/cols=48,
point meta=explicit] table[meta=C] {../data/fig/norm1/matrix.dat};
\addplot[white, ultra thick, mark=*, mark size=1] table {../data/fig/norm1/matrix_path.dat};
\end{axis}
\end{tikzpicture}\\
&
\\[1em]
&
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-40,
ymax=40,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[blue, ultra thick, mark=none] table {../data/fig/norm1/c.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
}
\end{tabular}
}
\end{center}
\end{frame}
\begin{frame}{Time Series Normalization}{$z$ Normalization}
Let $Q = (q_1, \dots, q_n)$ be a time series of length $n$
\begin{block}{$z$ Normalization}
$z(q_i) = \frac{q_i - \mu}{\sigma}$
\end{block}
\begin{block}{Standard deviation of $Q$}
$\sigma = \sqrt{\frac{1}{n} \sum \limits_{i=1}^{n} (q_i - \mu)^2}$
\end{block}
\end{frame}
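% Added worked example (the numbers are ours, not taken from the plotted data);
% it illustrates how mu and sigma are obtained and how z rescales the series.
\begin{frame}{Time Series Normalization $z$}{Numeric Example}
  Given the time series $Q = (2, 4, 4, 4, 5, 5, 7, 9)$ with $n = 8$
  \begin{block}{Mean and standard deviation}
    $\mu = \frac{40}{8} = 5 \qquad \sigma = \sqrt{\frac{32}{8}} = 2$
  \end{block}
  \begin{block}{$z$-normalized series}
    $z(Q) = (-1.5, -0.5, -0.5, -0.5, 0, 0, 1, 2)$
  \end{block}
\end{frame}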
\begin{frame}<handout:0>{Time Series Normalization $z$}{Example}
\begin{center}
\resizebox {\textwidth} {!} {
\begin{tabular}{cc}
\resizebox* {!} {0.3\textwidth} {
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
xlabel=time,
ylabel=acceleration,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight,
reverse legend,
legend pos=south east]
\addplot[red, thick, mark=none] table {../data/fig/norm2/q.dat};
\addlegendentry{Q}
\addplot[blue, thick, mark=none] table {../data/fig/norm2/c.dat};
\addlegendentry{C}
\end{axis}
\end{tikzpicture}
} & \quad
\resizebox* {!} {0.3\textwidth} {
\begin{tabular}[b]{ll}
\begin{turn}{90}
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-2,
ymax=2,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[red, ultra thick, mark=none] table {../data/fig/norm2/q.dat};
\end{axis}
\end{tikzpicture}
\end{turn} \hspace*{3em} &
\begin{tikzpicture}
\begin{axis}[
enlargelimits=false,
ymin=0,
ymax=47,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
colorbar,
colormap/viridis high res]
\addplot[matrix plot*,
mesh/cols=48,
point meta=explicit] table[meta=C] {../data/fig/norm2/matrix.dat};
\end{axis}
\end{tikzpicture}\\
&
\\[1em]
&
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-2,
ymax=2,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[blue, ultra thick, mark=none] table {../data/fig/norm2/c.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
}
\end{tabular}
}
\end{center}
\end{frame}
\begin{frame}{Time Series Normalization $z$}{Example}
\begin{center}
\resizebox {\textwidth} {!} {
\begin{tabular}{cc}
\resizebox* {!} {0.3\textwidth} {
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
xlabel=time,
ylabel=acceleration,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight,
reverse legend,
legend pos=south east]
\addplot[gray, quiver={u=\thisrow{u}, v=\thisrow{v}}] table {../data/fig/norm2/path.dat};
\addplot[red, thick, mark=none] table {../data/fig/norm2/q.dat};
\addlegendentry{Q}
\addplot[blue, thick, mark=none] table {../data/fig/norm2/c.dat};
\addlegendentry{C}
\end{axis}
\end{tikzpicture}
} & \quad
\resizebox* {!} {0.3\textwidth} {
\begin{tabular}[b]{ll}
\begin{turn}{90}
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-2,
ymax=2,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[red, ultra thick, mark=none] table {../data/fig/norm2/q.dat};
\end{axis}
\end{tikzpicture}
\end{turn} \hspace*{3em} &
\begin{tikzpicture}
\begin{axis}[
enlargelimits=false,
ymin=0,
ymax=47,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
colorbar,
colormap/viridis high res]
\addplot[matrix plot*,
mesh/cols=48,
point meta=explicit] table[meta=C] {../data/fig/norm2/matrix.dat};
\addplot[white, ultra thick, mark=*, mark size=1] table {../data/fig/norm2/matrix_path.dat};
\end{axis}
\end{tikzpicture}\\
&
\\[1em]
&
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=47,
ymin=-2,
ymax=2,
hide x axis,
hide y axis,
width=\axisdefaultwidth,
height=0.7*\axisdefaultheight]
\addplot[blue, ultra thick, mark=none] table {../data/fig/norm2/c.dat};
\end{axis}
\end{tikzpicture}
\end{tabular}
}
\end{tabular}
}
\end{center}
\end{frame}
\documentclass[epsfig,10pt,fullpage]{article}
\newcommand{\LabNum}{5}
\newcommand{\CommonDocsPath}{../../../common/docs}
\input{\CommonDocsPath/preamble.tex}
\begin{document}
\centerline{\huge Digital Logic}
~\\
\centerline{\huge Laboratory Exercise \LabNum}
~\\
\centerline{\large Timers and Real-time Clock}
~\\
The purpose of this exercise is to study the use of clocks in timed circuits. The designed
circuits are to be implemented on an Intel\textsuperscript{\textregistered} FPGA DE10-Lite, DE0-CV, DE1-SoC, or DE2-115 board.
\section*{Background}
\addcontentsline{toc}{1}{Background}
In the VHDL hardware description language we can describe a variable-size counter by
using a GENERIC declaration. An example of an {\it n}-bit counter is shown in
Figure~\ref{fig:n_counter}.
\begin{figure}[H]
\begin{center}
\begin{minipage}[t]{12.5 cm}
\begin{tabbing}
LIBRARY ieee;\\
USE ieee.std\_logic\_1164.all;\\
USE ieee.std\_logic\_unsigned.all;\\
~\\
ZZ\=GENERICZZ\=reset\_nZZ\=: OUTZZ\=STD\_LOGIC;\kill
ENTITY counter IS\\
\>GENERIC (\>n : NATURAL := 4 );\\
ZZ\=PORT (Z\=reset\_nZZ\=: OUTZZ\=STD\_LOGIC;\kill
\>PORT (\>clock \>: IN\>STD\_LOGIC;\\
\>\>reset\_n \>: IN \>STD\_LOGIC;\\
\>\>Q \>: OUT \>STD\_LOGIC\_VECTOR(n$-$1 DOWNTO 0) );\\
END ENTITY;\\
~\\
ARCHITECTURE Behavior OF counter IS\\
ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\kill
\>SIGNAL value : STD\_LOGIC\_VECTOR(n$-$1 DOWNTO 0);\\
BEGIN\\
\>PROCESS (clock, reset\_n)\\
\>BEGIN\\
\>\>IF (reset\_n = '0') THEN\\
\>\>\>value $<$= (OTHERS =$>$ '0');\\
\>\>ELSIF ((clock'EVENT) AND (clock = '1')) THEN\\
\>\>\>value $<$= value + 1;\\
\>\>END IF;\\
\>END PROCESS;\\
\>Q $<$= value;\\
END Behavior;\\
\end{tabbing}
\end{minipage}
\end{center}
\caption{A VHDL description of an {\it n}-bit counter.}
\label{fig:n_counter}
\end{figure}
The parameter {\it n} specifies the number of bits in the counter. A particular value of
this parameter is defined by using a GENERIC MAP statement. For example, an 8-bit
counter can be specified as:
\begin{center}
\begin{minipage}[t]{12.5 cm}
\begin{tabbing}
ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\kill
\>eight\_bit: counter\\
\>\>GENERIC MAP ( n =$>$ 8 )\\
\>\>PORT MAP (clock, reset\_n, Q);
\end{tabbing}
\end{minipage}
\end{center}
By using parameters we can instantiate counters of different sizes in a logic circuit, without having to create a new module for each counter.
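As a syntax illustration only (the entity and generic names below are ours, not part of the exercises), more than one parameter can be declared and set in the same way, and a matching instantiation overrides both generics:
\begin{center}
\begin{minipage}[t]{12.5 cm}
\begin{tabbing}
ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\kill
\>ENTITY two\_param\_counter IS\\
\>\>GENERIC ( n : NATURAL := 8;\\
\>\>\>\>\>k : NATURAL := 20 );\\
\>\>PORT ( clock, reset\_n : IN STD\_LOGIC;\\
\>\>\>\>\>Q : OUT STD\_LOGIC\_VECTOR(n$-$1 DOWNTO 0) );\\
\>END ENTITY;\\
~\\
\>my\_counter: two\_param\_counter\\
\>\>GENERIC MAP ( n =$>$ 5, k =$>$ 20 )\\
\>\>PORT MAP (clock, reset\_n, Q);
\end{tabbing}
\end{minipage}
\end{center}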
\section*{Part I}
\addcontentsline{toc}{2}{Part I}
Create a modulo-$k$ counter by modifying the design of an 8-bit counter to contain an
additional parameter. The counter should count from $0$ to $k-1$. When the counter reaches
the value $k-1$, then the next counter value should be $0$. Include an output from the
counter called {\it rollover} and set this output to 1 in the clock cycle where the count value
is equal to $k-1$.
Perform the following steps:
\begin{enumerate}
\item Create a new Quartus\textsuperscript{\textregistered} project which will be used to implement the desired circuit
on your DE-series board.
\item Write a VHDL file that specifies the circuit for {\it k} = 20, and an appropriate
value of $n$. Your circuit should use pushbutton {\it KEY}$_0$ as an asynchronous reset
and {\it KEY}$_1$ as a manual clock input.
The contents of the counter should be displayed on the red lights {\it LEDR}. Also display
the {\it rollover} signal on one of the LEDR lights.
\item Include the VHDL file in your project and compile the circuit.
\item Simulate the designed circuit to verify its functionality.
\item Make the necessary pin assignments needed to implement the circuit on your
DE-series board, and compile the circuit.
\item Verify that your circuit works correctly by observing the lights.
\end{enumerate}
\section*{Part II}
\addcontentsline{toc}{3}{Part II}
Using your modulo-counter from Part I as a subcircuit,
implement a 3-digit BCD counter (hint: use multiple counters, not just one). Display the
contents of the counter on the 7-segment displays, {\it HEX2$-$0}. Connect all of the counters
in your circuit to the 50-MHz clock signal on your DE-series board, and make the BCD counter
increment at one-second intervals.
Use the pushbutton switch {\it KEY}$_0$ to reset the BCD counter to 0.
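{\bf Hint:} With the 50-MHz clock, one second corresponds to $k = 50\,000\,000$ clock cycles, so the modulo-$k$ counter from Part~I needs $n = \lceil \log_2 k \rceil = 26$ bits; its {\it rollover} output then pulses once per second and can be used as an enable for the BCD digits.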
\section*{Part III}
\addcontentsline{toc}{4}{Part III}
Design and implement a circuit on your DE-series board that acts as a real-time clock.
It should display the minutes (from 0 to 59) on {\it HEX}$5-4$, the seconds (from 0 to 59)
on {\it HEX}$3-2$, and hundredths of a second (from 0 to 99) on {\it HEX}$1-0$. Use the
switches {\it SW}$_{7-0}$ to preset the minute
part of the time displayed by the clock when {\it KEY}$_1$ is pressed.
Stop the clock while {\it KEY}$_0$ is pressed and let the clock continue when
{\it KEY}$_0$ is released.
\section*{Part IV}
\addcontentsline{toc}{5}{Part IV}
An early method of telegraph communication was based on the Morse code. This code uses
patterns of short and long pulses to represent a message. Each letter is represented as a
sequence of dots (a short pulse) and dashes (a long pulse). For example, the first eight
letters of the alphabet have the following representation:
\begin{table}[H]
\begin{center}
\begin{minipage}[t]{12.5 cm}
\begin{tabbing}
ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\=ZZ\kill
\>A\>\>{\bf $\bullet$ ---}\\
\>B\>\>{\bf --- $\bullet$ $\bullet$ $\bullet$}\\
\>C\>\>{\bf --- $\bullet$ --- $\bullet$}\\
\>D\>\>{\bf --- $\bullet$ $\bullet$}\\
\>E\>\>{\bf $\bullet$}\\
\>F\>\>{\bf $\bullet$ $\bullet$ --- $\bullet$}\\
\>G\>\>{\bf --- --- $\bullet$}\\
\>H\>\>{\bf $\bullet$ $\bullet$ $\bullet$ $\bullet$}\\
\end{tabbing}
\end{minipage}
\end{center}
\end{table}
Design and implement a circuit that takes as input one of the first eight letters of the
alphabet and displays the Morse code for it on a red LED. Your circuit should use
switches {\it SW}$_{2-0}$ and pushbuttons {\it KEY}$_{1-0}$ as inputs. When a user
presses {\it KEY}$_1$, the circuit should display the Morse code for a letter specified
by {\it SW}$_{2-0}$ (000 for A, 001 for B, etc.), using 0.5-second pulses to represent dots,
and 1.5-second pulses to represent dashes. Pushbutton {\it KEY}$_0$ should function as
an asynchronous reset. A high-level schematic diagram of the circuit is shown in
Figure~\ref{fig:morse_code_cct}.
~\\
~\\
{\bf Hint:} Use a counter to generate 0.5-second pulses, and another counter to keep
the {\it LEDR}$_0$ light on for either 0.5 or 1.5 seconds.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.9]{figures/fig_morse_code_circuit_schematic.pdf}
\end{center}
\caption{High-level schematic diagram of the circuit for part IV.}
\label{fig:morse_code_cct}
\end{figure}
\input{\CommonDocsPath/copyright.tex}
\end{document}
| {
"alphanum_fraction": 0.7086801427,
"avg_line_length": 38.8901734104,
"ext": "tex",
"hexsha": "3386d52e5adb0a6b8c108b898d27198eea23b865",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-12-15T16:44:27.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-12-15T16:44:27.000Z",
"max_forks_repo_head_hexsha": "f4119b617a5af228a032f8f0ff27a299b496ad78",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fpgacademy/Lab_Exercises_Digital_Logic",
"max_forks_repo_path": "vhdl/lab5/doc/vhdl_lab5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f4119b617a5af228a032f8f0ff27a299b496ad78",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fpgacademy/Lab_Exercises_Digital_Logic",
"max_issues_repo_path": "vhdl/lab5/doc/vhdl_lab5.tex",
"max_line_length": 142,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "f4119b617a5af228a032f8f0ff27a299b496ad78",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fpgacademy/Lab_Exercises_Digital_Logic",
"max_stars_repo_path": "vhdl/lab5/doc/vhdl_lab5.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-09T23:21:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-09T23:21:40.000Z",
"num_tokens": 2086,
"size": 6728
} |
\documentclass[international_finance_p1.tex]{subfiles}
\begin{document}
\setbeamercovered{transparent}
\section{International Monetary Arrangements}
\subsection{The Gold Standard and Interwar Period}
\begin{frame}{The Gold Standard: 1880 to 1914. The Interwar Period: 1918 to 1939}
\begin{itemize}[<+->]
\item
Under a gold standard, currencies are valued in terms of their gold equivalent (an ounce of gold was worth USD20.67 in terms of the U.S. dollar over the gold standard period).
\item
The First World War ended Britain’s financial preeminence.
\item
The United States had risen to the status of the world’s dominant banker country.
\end{itemize}
\end{frame}
\begin{frame}{}
% Table generated by Excel2LaTeX from sheet 'Лист1'
\begin{table}[htbp]
\centering
\fontsize{6pt}{6pt}\selectfont
\caption{Leading central bank/treasury gold reserves (in metric tons of fine gold)}
\begin{tabular}{lrrrrrrrr}
\toprule
Year & 1845 & 1850 & 1855 & 1860 & 1865 & 1870 & 1875 & 1880 \\
\midrule
UK & 82 & 104 & 74 & 78 & 93 & 161 & 154 & 170 \\
France & 2 & 3.5 & 32.8 & 105 & 194 & 217 & 337 & 242 \\
Germany & n/a & n/a & n/a & n/a & n/a & n/a & 43 & 81 \\
Italy & n/a & n/a & n/a & n/a & n/a & 30.8 & 26 & 22 \\
Russia & n/a & n/a & 81 & n/a & 57 & 160 & 230 & 195 \\
USA & n/a & n/a & n/a & n/a & n/a & 107 & 87 & 208 \\
\bottomrule
\end{tabular}%
\label{tab:gold1845}%
\raggedright
\footnotesize
Source: World Gold Council. Historical Data - Annual time series on
World Official Gold Reserves since 1845. 10th August 2011.
\end{table}%
\end{frame}
\begin{frame}
% Table generated by Excel2LaTeX from sheet 'Лист1'
\begin{table}[htbp]
\centering
\fontsize{6pt}{6pt}\selectfont
\caption{Leading central bank/treasury gold reserves (in metric tons of fine gold)}
\begin{tabular}{lrrrrrrrr}
\toprule
Year & 1885 & 1890 & 1895 & 1900 & 1905 & 1910 & 1913 & 1915 \\
\midrule
UK & 141 & 166 & 305 & 198 & 199 & 223 & 248 & 585 \\
France & 344 & 370 & 460 & 544 & 836 & 952 & 1030 & 1457 \\
Germany & 99 & 186 & 252 & 211 & 267 & 240 & 437 & 876 \\
Italy & 142 & 133 & 132 & 115 & 285 & 350 & 355 & 397 \\
Russia & 195 & 312 & 695 & 661 & 654 & 954 & 1233 & 1250 \\
USA & 371 & 442 & 169 & 602 & 1149 & 1660 & 2293 & 2568 \\
\bottomrule
\end{tabular}%
\label{tab:gold1885}%
\raggedright
\footnotesize
Source: World Gold Council. Historical Data - Annual time series on
World Official Gold Reserves since 1845. 10th August 2011.
\end{table}%
\end{frame}
\begin{frame}
% Table generated by Excel2LaTeX from sheet 'Лист1'
\begin{table}[htbp]
\centering
\fontsize{6pt}{6pt}\selectfont
\caption{Leading central bank/treasury gold reserves (in metric tons of fine gold)}
\begin{tabular}{lrrrrrr}
\toprule
Year & 1920 & 1925 & 1930 & 1935 & 1940 & 1945 \\
\midrule
UK & 864 & 1045 & 1080 & 1464 & n/a & 1773 \\
France & 1622 & 1201 & 3160 & 3907 & 1773 & 1378 \\
Germany & 391 & 432 & 794 & 56 & n/a & n/a \\
Italy & 307 & 498 & 420 & 240 & 122 & 28 \\
Russia & n/a & 141 & 375 & 7456 & n/a & n/a \\
USA & 3679 & 5998 & 6358 & 8998 & 19543 & 17848 \\
\bottomrule
\end{tabular}%
\label{tab:gold1920}%
\raggedright
\footnotesize
Source: World Gold Council. Historical Data - Annual time series on
World Official Gold Reserves since 1845. 10th August 2011.
\end{table}%
\end{frame}
\begin{frame}{}
\begin{itemize}[<+->]
\item
A run on U.S. gold at the end of 1931 led to a 15 percent drop in U.S. gold holdings, and by 1933 the United States had abandoned the gold standard.
\item
The early to mid-1930s was a period of competitive devaluations and foreign exchange controls.
\end{itemize}
\end{frame}
\subsection{The Bretton Woods Agreement}
\begin{frame}{The Bretton Woods Agreement: 1944 to 1973 and its breakdown}
\begin{itemize}[<+->]
\item
The Bretton Woods agreement required each country to fix the value of its currency in terms of an anchor currency, namely the dollar.
\item
The U.S. dollar was the key currency in the system, and USD1 was defined as being equal in value to 1/35 ounce of gold.
\item
Since every currency had an implicitly defined gold value, through the link to the dollar, all currencies were linked in a system of fixed exchange rates.
\end{itemize}
\end{frame}
\begin{frame}{}
\begin{itemize}[<+->]
\item
Nations were committed to maintaining the parity value of their currencies within 1 percent of parity.
\item
When a country was experiencing difficulty maintaining its parity value because of balance of payments disequilibrium, it could turn to the International Monetary Fund (IMF).
\end{itemize}
\end{frame}
\begin{frame}{}
\begin{itemize}[<+->]
\item
The IMF was created to monitor the operation of the system and provide short-term loans to countries experiencing temporary balance of payments difficulties.
\item
IMF conditions for loans were changes in domestic economic policy aimed at restoring balance of payments equilibrium.
\end{itemize}
\end{frame}
\begin{frame}{}
\begin{itemize}[<+->]
\item
The failure to realign currency values in the face of fundamental economic change spelled the beginning of the end for the gold exchange standard of the Bretton Woods agreement by the late 1960s.
\item
In December 1971, the dollar per gold exchange value was changed from USD35 to USD38.02 per ounce of gold. But the dollar was still inconvertible into gold.
\item
The speculative capital flows of 1972 and early 1973 led to a further devaluation of the dollar in February 1973, when the official price of an ounce of gold rose from USD38 to USD42.22.
\item
In March 1973, the major currencies began to float.
\end{itemize}
\end{frame}
\subsection{Floating Exchange Rates}
\begin{frame}{Floating Exchange Rates: starting from 1973}
The types of exchange rate systems:
\begin{enumerate}
\item Free floating.
\item Managed floating.
\item Horizontal bands.
\item Crawling pegs.
\item Crawling bands.
\item Fixed peg.
\item Currency board.
\item ``Dollarization'' or no separate legal tender.
\end{enumerate}
\end{frame}
\begin{frame}{}
\begin{itemize}[<+->]
\item
\textbf{“Dollarization”}, where the central bank of the country has completely given up control of the money supply and adopted some other country’s currency.
\item
\textbf{Purely floating}, where the central bank retains domestic control over the currency in the country.
\item
In between these extremes, the central bank has some degree of control over the money supply.
\end{itemize}
\end{frame}
\begin{frame}{What is ``Legal Tender''}
\begin{block}{Legal tender }
\quad is any official medium of payment recognized by law that can be used to extinguish a public or private debt, or meet a financial obligation. The national currency is legal tender in practically every country. A creditor is obligated to accept legal tender toward repayment of a debt. Legal tender can only be issued by the national body that is authorized to do so, such as the U.S. Treasury in the United States and the Royal Canadian Mint in Canada.
\end{block}
The term "legal tender" is from Middle English tendren, French tendre (verb form), meaning to offer. The Latin root is tendere (to stretch out), and the sense of tender as an offer is related to the etymology of the English word "extend" (to hold outward).
\end{frame}
\begin{frame}{What does ``Tender'' mean}
\begin{block}{To tender}
To invite bids for a project, or to accept a formal offer such as a takeover bid. Tender usually refers to the process whereby governments and financial institutions invite bids for large projects that must be submitted within a finite deadline. The term also refers to the process whereby shareholders submit their shares or securities to a takeover offer.
\end{block}
\end{frame}
\begin{frame}{Legal Tender and Standard of Deferred Payment}
A debt is a deferred payment; a standard of deferred payment is what they are denominated in.
Since the value of money – be it dollars, gold, or others – may fluctuate over time via inflation and deflation, the value of deferred payments (the real level of debt) likewise fluctuates.
A device is termed ``legal tender'' if it may serve to discharge (pay off) debts; thus, while US dollars are not backed by gold or any other commodity, they draw value from being legal tender – being usable to pay off debts.
\end{frame}
\begin{frame}{Characteristics Associated with Countries Choosing to Peg or Float}
% Table generated by Excel2LaTeX from sheet 'Лист2'
\begin{table}[htbp]
\centering
\begin{tabular}{cc}
\toprule
Peggers & Floaters \\
\midrule
Small size & Large size \\
Open economy & Closed economy \\
Harmonious inflation rate & Divergent inflation rate \\
Concentrated trade & Diversified trade \\
\bottomrule
\end{tabular}%
\label{tab:pegfloat}%
\end{table}%
\end{frame}
\subsection{Plaza and Louvre Accord}
\begin{frame}{Plaza Accord}
\begin{itemize}[<+->]
\item
Between 1980 and 1985 the dollar had appreciated by about 50\% against the Japanese yen, the Deutsche Mark, the French franc and the British pound.
\item
The strong dollar led to a campaign asking for protection against foreign competition.
\item
On September 22, 1985, at the Plaza Hotel in New York City, the governments of France, West Germany, Japan, the United States, and the United Kingdom signed the accord to depreciate the U.S. dollar relative to the Japanese yen and the German Deutsche Mark by intervening in currency markets.
\item
The exchange rate value of the dollar versus the yen declined by 51\% from 1985 to 1987.
\end{itemize}
\end{frame}
\begin{frame}{Louvre Accord}
\begin{itemize}[<+->]
\item
The Louvre Accord was an agreement, signed on February 22, 1987 in Paris, that aimed to stabilize the international currency markets and halt the continued decline of the US Dollar caused by the Plaza Accord.
\item
The Louvre Accord helped prevent a recession because it stopped the value of the U.S. Dollar from decreasing any further in relation to other currencies.
\item
Countries agreed to reduce budget deficits and government spending and to cut taxes, while the USA agreed to hold interest rates low.
\end{itemize}
\end{frame}
\subsection{The European Monetary System}
\begin{frame}{The European Monetary System and the Euro}
\begin{itemize}[<+->]
\item
The European Monetary System (EMS) was established in March 1979.
\item
The member countries agreed to maintain small exchange rate fluctuations among themselves, while allowing free float against outside currencies.
\end{itemize}
\end{frame}
\begin{frame}{The theory of optimum currency area}
{Professor Robert Mundell of Columbia University, 1961}
\begin{block}{Criterion for a common currency zone }
The relevant criterion for identifying and designing a common currency zone is the degree of factor (i.e., capital and labor) mobility within the zone; a high degree of factor mobility would provide an adjustment mechanism, providing an alternative to country-specific monetary/currency adjustments.
\end{block}
\end{frame}
\begin{frame}[shrink=15]{Monetary Unions I}
% Table generated by Excel2LaTeX from sheet 'Лист2'
\begin{table}[htbp]
\centering
\begin{tabular}{ll}
\toprule
\multicolumn{1}{c}{Monetary union} & \multicolumn{1}{c}{Participants} \\
\midrule
\pbox{3cm}{European Union (EU)} & \pbox{6cm}{Austria, Belgium, Bulgaria, Cyprus, Czech Republic, Germany, Denmark, Spain, Estonia, Finland, France, United Kingdom, Greece, Croatia, Hungary, Ireland, Italy, Lithuania, Luxembourg, Latvia, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Sweden} \\
\midrule
\pbox{3cm}{Commonwealth of \\Independent States (CIS)} & \pbox{6cm}{Armenia, Azerbaijan, Belarus, Kazakhstan, Kyrgyzstan, Moldova, Russia, Tajikistan, Uzbekistan} \\
\midrule
\pbox{3cm}{Asian Monetary Unit (AMU)} & \pbox{6cm}{Australia, Brunei, China, Indonesia, India, Japan, Cambodia, South Korea, Laos, Myanmar, Malaysia, New Zealand, Philippines, Singapore, Thailand, Vietnam} \\
\bottomrule
\end{tabular}%
\label{tab:unions1}%
\end{table}%
\end{frame}
\begin{frame}[shrink=10]{Monetary Unions II}
% Table generated by Excel2LaTeX from sheet 'Лист2'
\begin{table}[htbp]
\centering
\begin{tabular}{ll}
\toprule
\multicolumn{1}{c}{Monetary union} & \multicolumn{1}{c}{Participants} \\
\midrule
\pbox{4cm}{East African Community (EAC)} & \pbox{5cm}{Burundi, Kenya, Rwanda, Tanzania, Uganda} \\
\midrule
\pbox{4cm}{Economic Community of \\West African States (Ecowas)} & \pbox{5cm}{Benin, Burkina Faso, Cape Verde, Gambia, Ghana, Guinea, Guinea-Bissau, Ivory Coast, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, Togo} \\
\midrule
\pbox{4cm}{Bolivarian Alliance for the Peoples of Our America (ALBA)} & \pbox{5cm}{Antigua and Barbuda, Bolivia, Cuba, Dominica, Ecuador, Grenada, Saint Kitts and Nevis, Saint Lucia, Nicaragua, Saint Vincent and the Grenadines, Venezuela} \\
\bottomrule
\end{tabular}%
\label{tab:unions2}%
\end{table}%
\end{frame}
\begin{frame}{Existing and emerging monetary unions}
\includegraphics[scale=0.40]{img/monetaryunions}
\end{frame}
\begin{frame}{Convergence of monetary policy}
\begin{itemize}[<+->]
\item
the country’s inflation rate did not exceed the average of the lowest three member country rates by more than 1.5 percentage points;
\item
its interest rate on long-term government bonds did not exceed those of the three lowest-inflation members by more than 2 percentage points;
\item
the country’s government budget deficit did not exceed 3 percent of GDP, and outstanding government debt did not exceed 60 percent of GDP.
\end{itemize}
\end{frame}
\begin{frame}{The optimum currency area}
\begin{itemize}[<+->]
\item
The geographical region that could gain economic efficiency by fixing exchange rates within a group and floating exchange rates with the rest of the world.
\item
Necessary condition is perfect mobility of the factors of production.
\end{itemize}
\end{frame}
\begin{frame}{The European Central Bank (ECB)}
\begin{itemize}[<+->]
\item
Created on June 1, 1998, in Frankfurt, Germany.
\item
The European Central Bank (ECB) is responsible for monetary policy of the Eurozone.
\item
The ECB is governed by a president and a board of the heads of national central banks.
\item
The main purpose of the ECB is to keep inflation under control.
\end{itemize}
\end{frame}
\begin{frame}{}
\begin{itemize}[<+->]
\item
The new European currency, the euro, made its debut on January 1, 1999. The symbol is €, and the ISO code is EUR.
\item
In the transition years of 1999 to 2001, people used the euro as a unit of account, denominating financial asset values and transactions in euro amounts. Bank accounts were available in euros and credit transactions were denominated in euros.
\item
Euro notes and coins began to circulate on January 1, 2002.
\end{itemize}
\end{frame}
\begin{frame}{Member states}
As of August 2014, the euro had been adopted by 18 member states of the European Union:
Austria (1999), Belgium (1999), Cyprus (2008), Estonia (2011), Finland (1999), France (1999), Germany (1999), Greece (2001), Ireland (1999), Italy (1999), Latvia (2014), Luxembourg (1999), Malta (2008), the Netherlands (1999), Portugal (1999), Slovakia (2009), Slovenia (2007), and Spain (1999).
\end{frame}
\end{document}
\XtoCBlock{Cos}
\label{block:Cos}
\begin{figure}[H]\includegraphics{Cos}\end{figure}
\begin{XtoCtabular}{Inports}
In & Input u\tabularnewline
\hline
\end{XtoCtabular}
\begin{XtoCtabular}{Outports}
Out & Result of cos(u)\tabularnewline
\hline
\end{XtoCtabular}
\subsubsection*{Description:}
Cosine computation of input value.
% include optional documentation file
\InputIfFileExists{\XcHomePath/Library/Math/Doc/Cos_Info.tex}{\vspace{1ex}}{}
\subsubsection*{Implementations:}
\begin{tabular}{l l}
\textbf{FiP8} & 8 Bit Fixed Point Implementation\tabularnewline
\textbf{FiP16} & 16 Bit Fixed Point Implementation\tabularnewline
\textbf{FiP32} & 32 Bit Fixed Point Implementation\tabularnewline
\textbf{Float32} & 32 Bit Floating Point Implementation\tabularnewline
\textbf{Float64} & 64 Bit Floating Point Implementation\tabularnewline
\end{tabular}
\XtoCImplementation{FiP8}
\index{Block ID!4864}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & FiP8 \tabularnewline
\textbf{ID} & 4864 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & Cos\_FiP8.c \tabularnewline
\textbf{H filename} & Cos\_FiP8.h \tabularnewline
\end{tabular}
\vspace{1ex}
8 Bit Fixed Point Implementation
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
int8 *In;
int8 Out;
} COS_FIP8;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Cos_FiP8.tex}{}{}
\fi
\XtoCImplementation{FiP16}
\index{Block ID!4865}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & FiP16 \tabularnewline
\textbf{ID} & 4865 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & Cos\_FiP16.c \tabularnewline
\textbf{H filename} & Cos\_FiP16.h \tabularnewline
\end{tabular}
\vspace{1ex}
16 Bit Fixed Point Implementation
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
int16 *In;
int16 Out;
} COS_FIP16;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Cos_FiP16.tex}{}{}
\fi
\XtoCImplementation{FiP32}
\index{Block ID!4866}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & FiP32 \tabularnewline
\textbf{ID} & 4866 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & Cos\_FiP32.c \tabularnewline
\textbf{H filename} & Cos\_FiP32.h \tabularnewline
\end{tabular}
\vspace{1ex}
32 Bit Fixed Point Implementation
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
int32 *In;
int32 Out;
} COS_FIP32;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Cos_FiP32.tex}{}{}
\fi
\XtoCImplementation{Float32}
\index{Block ID!4867}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & Float32 \tabularnewline
\textbf{ID} & 4867 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & Cos\_Float32.c \tabularnewline
\textbf{H filename} & Cos\_Float32.h \tabularnewline
\end{tabular}
\vspace{1ex}
32 Bit Floating Point Implementation
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
float32 *In;
float32 Out;
} COS_FLOAT32;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Cos_Float32.tex}{}{}
\fi
\XtoCImplementation{Float64}
\index{Block ID!4868}
\nopagebreak[0]
% Implementation details
\begin{tabular}{l l}
\textbf{Name} & Float64 \tabularnewline
\textbf{ID} & 4868 \tabularnewline
\textbf{Revision} & 0.1 \tabularnewline
\textbf{C filename} & Cos\_Float64.c \tabularnewline
\textbf{H filename} & Cos\_Float64.h \tabularnewline
\end{tabular}
\vspace{1ex}
64 Bit Floating Point Implementation
% Implementation data structure
\XtoCDataStruct{Data Structure:}
\begin{lstlisting}
typedef struct {
uint16 ID;
float64 *In;
float64 Out;
} COS_FLOAT64;
\end{lstlisting}
\ifdefined \AddTestReports
\InputIfFileExists{\XcHomePath/Library/Math/Doc/Test_Cos_Float64.tex}{}{}
\fi
\chapter{Introduction}
Figure \ref{book_design} illustrates the basic structure of a book. It is common to use book format when writing a thesis\cite{Mori2008}.
\begin{figure}[h]
\centering
\includegraphics[width=15cm]{structure/main_matter/figures/book_design.png}
\caption{Components of a book}
\label{book_design}
\end{figure}
\input{structure/main_matter/sections/1_introduction/motivation}
\subsection{Bias and variance of the Robinson estimator}
Robinson: can't have confounding in the dummy case, but can in the real case; is this a general result from the propensity score literature?
Framing: partialling out is an alternative to OLS when the requirement \(n \gg p\) does not hold, and an alternative to LASSO etc.
\(\hat \theta \approx N(\theta, V/n)\)
\(V=(E[\hat D^2])^{-1}E[\hat D^2\epsilon^2](E[\hat D^2])^{-1}\)
These are robust standard errors.
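A minimal computational sketch of this estimator and its sandwich variance, assuming cross-fitted nuisance estimates from scikit-learn (the random forest learners and the variable names are illustrative assumptions, not part of these notes):
\begin{verbatim}
# Hedged sketch: Robinson partialling-out with cross-fitted nuisances.
# X: covariates (n x p), D: treatment, Y: outcome (numpy arrays).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def robinson_theta(X, D, Y, cv=5):
    d_hat = cross_val_predict(RandomForestRegressor(), X, D, cv=cv)
    y_hat = cross_val_predict(RandomForestRegressor(), X, Y, cv=cv)
    d_res = D - d_hat                    # \hat D = D - E[D|X]
    y_res = Y - y_hat                    # partialled-out outcome
    theta = np.sum(d_res * y_res) / np.sum(d_res ** 2)
    eps = y_res - theta * d_res          # residual epsilon
    j = np.mean(d_res ** 2)
    v = np.mean(d_res ** 2 * eps ** 2) / j ** 2   # sandwich V
    se = np.sqrt(v / len(Y))             # sqrt(V / n)
    return theta, se
\end{verbatim}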
\subsubsection{Moments of the Robinson estimator}
If the errors are IID then
\(Var(\hat \theta) =\dfrac{\sigma^2_\epsilon }{\sum_i(X_i-\hat X_i)^2}\)
Otherwise, we can use GLS.
What are the properties of the estimator?
\(E[\hat \theta ]=E\left[\dfrac{\sum_i (X_i-\hat X_i)(y_i-\hat y_i)}{\sum_i(X_i-\hat X_i)^2}\right]\)
\subsection{Non-linear treatment effects in the Robinson estimator}
To do: a page on reformulating the treatment effect as non-linear. It can be done; show that it can be estimated using an arg min.
https://arxiv.org/pdf/1712.04912.pdf
\subsection{DML}
To do for DML: a page on orthogonality scores, a page on constructing them, and a page on using them to estimate parameters (GMM).
We have \(P(X)=f(\theta , \rho)\)
\(\hat \theta = f(X, n)\)
\(\theta = g(\rho , X)\)
So error is:
\(\hat \theta - \theta=f(X, n)-g(\rho , X)\)
Bias is defined as:
\(Bias(\hat \theta, \theta ) = E[\hat \theta - \theta]=E[\hat \theta ] - \theta \)
\(Bias = E[\hat \theta - \theta]=E[f(X, n)-g(\rho , X)]\)
\(Bias = E[\hat \theta - \theta]=E[f(X, n)]-g(\rho ,X)\)
Double ML: regress the variable attached to each parametric parameter on an ML fit of the other variables.
eg: get \(e(x|z)\)
\(e(d|x)\)
\(d=m(x)+v\)
\(d\) is correlated with \(x\), so there is bias.
\(v\) is correlated with \(d\) but not with \(x\); use it as an ``IV''.
Still need estimate for \(g(x)\).
for iterative, process is:
+ estimate \(g(x)\)
+ plug into other and estimate theta
+ this section should be in sample splitting. rename iterative estimation. separate pages for bias, variance
+ how does this work? The paper says random forest regression and OLS. Initialise \(\theta \) randomly?
+ page on bias, variance, efficiency?
+ page on sample splitting, why?
+ page on goal: \(x\) and \(z\) orthogonal for split sampling
+ page on \(X=m_0(Z)+\mu\), first stage machine learning, synthetic instrumental variables? h3 on that for multiple variables on interest. regression for each
\subsection{DML1}
Divide the sample into \(k\) folds.
For each fold, do ML on the nuisance functions (how?), using all instances outside of the fold.
Then do GMM using the orthogonality condition to calculate \(\theta \) (how?), using the instances in the fold.
Average the \(\theta \) estimates from the folds.
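A hedged sketch of these steps under the partially linear model (the random forest learners and the residual-on-residual moment are assumptions used for illustration, not from these notes):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml1_theta(X, D, Y, k=5):
    thetas = []
    for train, test in KFold(n_splits=k, shuffle=True).split(X):
        m = RandomForestRegressor().fit(X[train], D[train])  # E[D|X]
        g = RandomForestRegressor().fit(X[train], Y[train])  # E[Y|X]
        d_res = D[test] - m.predict(X[test])
        y_res = Y[test] - g.predict(X[test])
        # orthogonality/GMM step reduces to residual-on-residual OLS here
        thetas.append(np.sum(d_res * y_res) / np.sum(d_res ** 2))
    return np.mean(thetas)
\end{verbatim}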
\subsection{Last stage Robinson}
Separate page for the last stage: note that we can do OLS, GLS, etc.\ with a choice of \(\Omega \).
\subsection{Special lnear groups \(SL(n, F)\)}
The special linear group, \(SL(n,F)\), is the subgroup of \(GL(n,F)\) whose elements have determinant \(1\).
That is, \(|M|=1\).
These are endomorphisms, not forms.
\documentclass{beamer}
%\usetheme{AnnArbor}
%\usetheme{Antibes}
%\usetheme{Bergen}
%\usetheme{Berkeley}
%\usetheme{Berlin}
%\usetheme{Boadilla}
%\usetheme{boxes}
%\usetheme{CambridgeUS}
\usetheme{Copenhagen}
%\usetheme{Darmstadt}
%\usetheme{default}
%\usetheme{Frankfurt}
%\usetheme{Goettingen}
%\usetheme{Hannover}
%\usetheme{Ilmenau}
%\usetheme{JuanLesPins}
%\usetheme{Luebeck}
%\usetheme{Madrid}
%\usetheme{Malmoe}
%\usetheme{Marburg}
%\usetheme{Montpellier}
%\usetheme{PaloAlto}
%\usetheme{Pittsburgh}
%\usetheme{Rochester}
%\usetheme{Singapore}
%\usetheme{Szeged}
%\usetheme{Warsaw}
\title{Introduction to Python}
\usepackage{listings}
\subtitle{Functional Programming in Python}
\date{\today}
\begin{document}
\lstset{language=Python}
\begin{frame}
\titlepage
\end{frame}
\section{Introduction}
\subsection{Introduction}
\begin{frame}[fragile]{Lambda Calculus}
\begin{itemize}
\item {
\textbf{Idea:} Create a syntax to describe computation and study it formally as an abstract process.
}
\pause
\item {
First formulated by Alonzo Church just as Turing was inventing Turing machines.
}
\pause
\item {
Formally equivalent to Turing machines (proved by Turing). Led to the Church-Turing Thesis.
}
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Lambda Calculus in the Real World}
\begin{itemize}
\item{ \textbf{1958:} Lisp invented - a realisation of the Lambda Calculus. Nowadays: Common Lisp, Scheme, Clojure}
\pause
\item{\textbf{1970s:} ML invented at University of Edinburgh as a language to prove theorems with. Nowadays: OCaml, Standard ML.
}
\pause
\item{\textbf{1987:} Haskell language invented, taught at Edinburgh to first years}
\pause
\item{\textbf{Present day:} Erlang, Python, C++ (ish), Swift and many more!}
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Concepts}
\begin{itemize}
\item { \textbf{Functions are First-Class Citizens:} Functions are just examples of data - they can be parameters to functions and return values from functions
}
\pause
\item{ \textbf{Pure functions:} Functions have no side effects. In particular, every function returns the same result for the same arguments.
}
\pause
\item{\textbf{Recursion:} Because of the ``no mutation'' philosophy, recursion is always preferred over iteration.
}
\pause
\item{\textbf{Python is \emph{not} a pure-functional language. It's up to you what you use and ignore from the above
}}
\end{itemize}
\end{frame}
\section{Functional Programming In Action}
\subsection{Helper Functions and Closures}
\begin{frame}[fragile]{Helper Functions}
\begin{itemize}
\item { You can define functions inside functions. These will be inaccessible to the outside world.
\begin{block}{}
\begin{lstlisting}[frame=single]
def f(n):
    def g(m):
        return m*m  # helper hidden from callers of f
    return g(n)
\end{lstlisting}
\end{block}
}
\item{ This is useful for de-cluttering your code and hiding functionality you don't want to be public.}
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Closures}
\begin{itemize}
\item { Helper functions are more powerful than at first they seem. Remember functions can be return values as well!
}
\pause
\item { When you return a function Python will \emph{capture} the local variables it needs for later use.
\begin{block}{}
\begin{lstlisting}[frame=single]
def powerFactory(n):
    def g(x):
        return x ** n
    return g
\end{lstlisting}
\end{block}
}
\pause
\item{
\large{\textbf{ Demo Time!}}
}
\end{itemize}
\end{frame}
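\begin{frame}[fragile]{Closures: Usage Sketch}
A hedged usage sketch of the \texttt{powerFactory} closure above (output comments assume Python 2):
\begin{block}{}
\begin{lstlisting}[frame=single]
square = powerFactory(2)
cube = powerFactory(3)
print square(5)
# prints 25
print cube(2)
# prints 8
\end{lstlisting}
\end{block}
\end{frame}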
\subsection{Anonymous Functions}
\begin{frame}[fragile]{Lambdas}
\begin{itemize}
\item { Sometimes it can be annoying to come up with names for functions you will only use once.
}
\pause
\item{Use Lambdas to get around this
\begin{block}{}
\begin{lstlisting}[frame=single]
def powerFactory(n):
    return (lambda x: x ** n)
\end{lstlisting}
\end{block}
}
\end{itemize}
\end{frame}
\subsection{Demo}
\begin{frame}[fragile]{Demo}
\huge{\textbf{Demo Time!}}
\end{frame}
\section{Map, Reduce and Filter}
\begin{frame}[fragile]{Introduction}
\begin{itemize}
\item {Map, Reduce and Filter are three common functions from functional programming.
}
\item{Each function acts on a list. Since these are functional they do not mutate the list!
}
\pause
\item{The names have since become famous because of the ``MapReduce'' framework invented by Google for Big Data calculations.
}
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Map}
\begin{itemize}
\item {Map takes a function of one variable and a list and acts componentwise on the list.
}
\pause
\item{For example:
\begin{block}{}
\begin{lstlisting}[frame=single]
x = [1,2,3]
print map(lambda z: z ** 2, x)
# prints [1, 4, 9]
\end{lstlisting}
\end{block}
}
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Reduce}
\begin{itemize}
\item {Reduce takes a function of two variables, a list and an initial value and 'folds' down the list.
}
\pause
\item{For example:
\begin{block}{}
\begin{lstlisting}[frame=single]
x = [1,2,3]
print reduce(lambda y,z: y + z, x, 0)
# prints 6
\end{lstlisting}
\end{block}
}
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Filter}
\begin{itemize}
\item {Filter takes a function of one variable that returns True or False, and a list; it returns the list containing only
those elements on which the function returns True.
}
\pause
\item{For example:
\begin{block}{}
\begin{lstlisting}[frame=single]
x = [1,2,3]
print filter(lambda z: z % 2 == 1, x)
# prints [1, 3]
\end{lstlisting}
\end{block}
}
\end{itemize}
\end{frame}
\end{document}
\chapter{Zyelios GPU}
\section{General Information}
\subsection{Basics}
ZGPU is a vector graphics processor, which runs a certain set of instructions to display an image on screen. It has two basic modes of operation: frame-based mode, in which the processor core executes a certain subprogram each time a new frame must be drawn to the screen, and asynchronous mode, which allows ZGPU instructions to run without being tied to screen refreshes.
Even though the ZGPU works with vector graphics, the final result is rasterized into one of two buffers. These two buffers are called the \emph{front} buffer and the \emph{texture} buffer. Both buffers are 512x512 pixels in size, and can be used by the programmer. There is support for a native vertex mode for drawing graphics, but it has certain limitations.
By default the ZGPU runs in the frame-based mode, which makes use of the \emph{front} buffer to store data that will then be displayed on screen. When running in asynchronous mode the graphics are drawn to one of the two buffers, and are then output to the screen.
\subsection{Features}
Most of the features in the ZGPU are controlled via internal registers (see page \pageref{gpuregs} for a complete list of all the registers). They are located in the register memory, which starts at address 63488 and ends at 65535.
The memory area between addresses 65536 and 131071 (inclusive) is reserved as an additional external memory bus. It is very slow, but it allows the GPU to communicate with other devices. If several GPUs are running at the same time, access to these memory areas will be concurrent, and so additional synchronization is required to prevent race conditions or collisions of any sort.
The simplest program the GPU can execute is the following (it makes use of the frame-based mode of execution):
\begin{verbatim}
dtest; //Output test pattern to screen
dexit; //Finish execution
\end{verbatim}
It's possible to set up asynchronous rendering instead of frame-based rendering:
\begin{verbatim}
//Setup entrypoints
dentrypoint 0,DrawThread;
dentrypoint 4,AsyncThread;
//Disable hardware clear so drawing thread does not wipe
//the image on screen
mov #regHWClear,0;
//Set asynchronous thread frequency (speed)
mov #regAsyncFreq,200000;
//Run the thread
mov #regAsyncClk,1;
dexit;
DrawThread: //Do nothing
dexit;
AsyncThread:
dbegin;
dtest;
dend;
dvsync; //Add frame synchronization
jmp AsyncThread;
\end{verbatim}
ZGPU makes use of the vector extension (see page \pageref{vectorext}) which allows it to work with matrices and vectors:
\begin{verbatim}
//Generate rotation, translation matrices
mrotate mRotateMatrix,vRotate;
mtranslate mTranslateMatrix,vTranslate;
//Create model matrix
mmov mModelMatrix,mRotateMatrix;
mmul mModelMatrix,mTranslateMatrix;
\end{verbatim}
The ZGPU supports 2D and 3D graphics, which must be drawn as polygons:
\begin{verbatim}
dvxdata_2f polydata,4; //4 vertices
dvxdata_3f cubedata,12; //12 triangles
....
polydata:
db 0, 0;
db 10, 0;
db 10, 10;
db 0, 10;
cubedata:
db -1,-1,-1; //Triangle 1
db 1,-1,-1;
db 1, 1,-1;
db -1,-1,-1; //Triangle 2
db 1, 1,-1;
db -1, 1,-1;
...
\end{verbatim}
Polygons can be drawn in both normal mode and indexed mode. They can also be drawn solid-colored, textured, or wireframe:
\begin{verbatim}
//Load array of vertices
mov #regVertexArray,cube_varray;
dvxdata_3f cube_idxarray,12; //Draw all faces
dvxdata_3f_wf cube_idxarray,6; //Draw faces 1-3 as wireframe
//Load array of vertices with texture coords
mov #regVertexArray,cube_varray_tex;
dvxdata_3f_tex cube_idxarray,12; //Draw all faces, textured
cube_varray:
db -1,-1,-1; //0
db -1,-1, 1; //1
db -1, 1,-1; //2
db -1, 1, 1; //3
db 1,-1,-1; //4
db 1,-1, 1; //5
db 1, 1,-1; //6
db 1, 1, 1; //7
cube_idx:
db 0,4,6; db 0,6,2; //Face 1
db 5,1,7; db 1,3,7; //Face 2
db 4,0,5; db 0,1,5; //Face 3
db 2,6,7; db 3,2,7; //Face 4
db 0,2,3; db 1,0,3; //Face 5
db 6,4,7; db 4,5,7; //Face 6
\end{verbatim}
It supports a vertex buffer, which serves as temporary storage for 2D/3D data before it is rendered on screen. This allows depth-sorting within the buffer, and other features:
\begin{verbatim}
//Enable vertex buffer features
denable 0; //Vertex buffer
denable 1; //ZSorting
denable 2; //Lighting
//Add commands to vertex buffer
dcolor cube_color;
dvxdata_3f cube_data,12;
//Flush vertex buffer
dvxflush;
//Disable vertex buffer and its features
ddisable 0;
\end{verbatim}
There is support for texturing using both custom textures and textures available externally:
\begin{verbatim}
mov #regVertexMode,1; //Enable vertex mode
mov #regTexSize,128; //Texture size
denable 5; //Enable custom texture mapping
dcolor white; //Set color to white
dtexture 2; //Pick texture #2
drectwh rect_pos,rect_size;
...
ddisable 5; //Disable custom texture mapping
dxtexture texture_name; //Pick texture
drectwh rect_pos,rect_size;
....
string texture_name,"brick/brickfloor001a";
\end{verbatim}
ZGPU supports various 2D transformations to move shapes on screen:
\begin{verbatim}
dmove target_pos; //move to position
drotatescale 1.23,2; //Rotate by 1.23 radians, and scale up twice
drect rect_pos1,rect_pos2; //Draw rectangle around 0,0 point
....
vector2f target_pos,256,256; //Screen center
vector2f rect_pos1,-50,-50; //Two endpoints for rectangle
vector2f rect_pos2, 50, 50;
\end{verbatim}
There is also support for performing similar transformations on the textures, independently of the previous transformations (rotation is usually performed around the texture centerpoint):
\begin{verbatim}
denable 5; //Enable custom texturing
mov #regTexRotation,1.23; //Rotate texture by 1.23 radians
mov #regTexOffsetV,0.2; //Offset V coordinates by 0.2
dvxtexpoly horizon_polygon,4;
\end{verbatim}
\section{Features Reference}
\subsection{Basic Graphics}
The basic graphics output in the GPU makes use of a few control instructions (such as \reg{DCOLOR}, which changes the current drawing color), and a few drawing instructions (for example \reg{DRECT}, \reg{DLINE}, etc).
The basic graphics output only requires the frame-based drawing mode. The GPU will clear the screen to black each frame, and set the current color to black too. To draw something the color must first be set to the wanted color, and then some drawing instructions must be executed:
\begin{verbatim}
dcolor white;
drect rect_point1,rect_point2;
dexit; //Program must be correctly terminated
//Compiler macros for data:
color white,255,255,255;
vec2f rect_point1,50,50;
vec2f rect_point2,100,150;
\end{verbatim}
These are all the basic drawing instructions that can be used:
\singlespacing
\begin{longtable}{|c|p{3.4in}|} \hline
Instruction & Description \\ \hline
\reg{DRECT} & Draw a rectangle between two endpoints \\ \hline
\reg{DRECTWH} & Draw a rectangle at some point (first operand), with some size (second operand) \\ \hline
\reg{DORECT} & Similar to \reg{DRECT}, but draws a rectangle outline \\ \hline
\reg{DORECTWH} & Similar to \reg{DRECTWH}, but draws a rectangle outline \\ \hline
\reg{DCIRCLE} & Draw a circle at some point (first operand), with some radius (second operand) \\ \hline
\reg{DLINE} & Draws a line between two points. Width specified with the \reg{DSETWIDTH} instruction \\ \hline
\reg{DVXPOLY} & Draw a custom polygon \\ \hline
\end{longtable}
\onehalfspacing
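As an illustrative sketch (the coordinate values are arbitrary, and the line width is set here through the \reg{LineWidth} register rather than the \reg{DSETWIDTH} opcode), several of these instructions can be combined in a single frame:
\begin{verbatim}
dcolor white;
mov #regLineWidth,4; //Line width used by DLINE
dline line_p1,line_p2;
dorectwh box_pos,box_size;
dcircle circle_pos,40;
dexit;
//Compiler macros for data:
color white,255,255,255;
vec2f line_p1,10,10;
vec2f line_p2,200,120;
vec2f box_pos,250,250;
vec2f box_size,100,60;
vec2f circle_pos,400,100;
\end{verbatim}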
It's possible to specify the quality at which the circle is drawn:
\begin{verbatim}
dcolor white;
mov #regCircleQuality,8; //8 vertices in the circle
dcircle pos,256; //Draw a circle in middle of the screen,
//and covering the entire screen
dexit;
//Compiler macros for data:
color white,255,255,255;
vec2f pos,256,256;
\end{verbatim}
It's also possible to draw 2D polygons (each polygon may have up to 128 vertices in it):
\begin{verbatim}
dcolor white;
dvxpoly polygon_data,4;
dexit;
//Compiler macros for data:
color white,255,255,255;
polygon_data: //Polygon for a distorted rectangle
db 50,50;
db 190,50;
db 120,190;
db 50,120;
\end{verbatim}
It's possible to use all of these instructions to draw textured data (see page \pageref{gputexturing}).
\subsection{Asynchronous Thread}
The asynchronous thread runs in parallel to the main frame-based rendering thread, but it is not synchronized to frame boundaries (while the normal frame-based mode will restart execution each time it must render a new frame). It's possible to use both at the same time, or use just one of the two.
The asynchronous thread is not active by default, but it can be started up using the following code:
\begin{verbatim}
//Setup entrypoints
dentrypoint 0,DrawThread;
dentrypoint 4,AsyncThread;
//Set asynchronous thread frequency (speed)
mov #regAsyncFreq,200000;
//Run the thread
mov #regAsyncClk,1;
dexit;
DrawThread: //Do nothing
dexit;
AsyncThread:
...
jmp AsyncThread;
\end{verbatim}
The asynchronous thread frequency may be set up to 1,200,000. If the asynchronous thread encounters an error, and there is no specified error handler, it will simply shut down (and reset the \reg{AsyncClk} register back to 0).
It's possible to perform rendering in the asynchronous thread in two ways. There are built-in opcodes which allow drawing to the texture buffer, and then copying that image back into the front buffer. They require the hardware clear feature to be disabled though:
\begin{verbatim}
mov #regHWClear,0;
....
AsyncThread:
dbegin; //Start drawing
... //Drawing code of any length
dend; //Copy the image to front buffer
dvsync; //If rendering is too fast, it can be synchronized with frame
//generation, making it less resource intensive
jmp AsyncThread;
\end{verbatim}
It's also possible to manually switch buffers for drawing. See page \pageref{gpubuffers} for more information on that.
\subsection{Error Handling}
If any error is encountered during GPU execution, it will be handled in one of the following ways:
\begin{itemize}
\item In frame-based mode with no entrypoint set for the error handler, the GPU will display an error screen, detailing the error code and the error address.
\item In frame-based mode with an entrypoint set for the error handler, the GPU will jump to the error handler. There must be no error occurring in the error handler itself, or the GPU will be stuck in an infinite loop until the frame ends.
\item In asynchronous mode with no error handler, an error will cause the thread to halt.
\item In asynchronous mode with an error handler defined, the thread will jump over to that error handler. Just as with the frame-based mode, an error inside the error handler will cause an infinite loop.
\end{itemize}
The entrypoint for the error handler in the frame-based mode is \reg{3}, and the entrypoint for the asynchronous thread error handler is \reg{5}. The error code will be passed in the \reg{LINT} internal register, and the error parameter is passed in the \reg{LADD} register. Here's an example of how to set up an error handler in both threads:
\begin{verbatim}
//Setup entrypoints
dentrypoint 0,DrawThread;
dentrypoint 3,DrawError;
dentrypoint 4,AsyncThread;
dentrypoint 5,AsyncError;
....
DrawError:
cpuget R0,28; //Read error parameter
cpuget R1,27; //Read error code
....
dexit;
AsyncError: //Similar to DrawError
cpuget R0,28; //Read error parameter
cpuget R1,27; //Read error code
....
Stop: dvsync; jmp Stop; //Stop with infinite loop
\end{verbatim}
\subsection{Coordinate Transformations}
The GPU provides several coordinate transformations. These allow the programmer to control how the coordinates generated by the drawing instructions are mapped to screen coordinates. The GPU native screen size is always 512x512 pixels (the size of the rasterizer buffer/front buffer).
The coordinate transformation pipe (routine) can be selected using the \reg{DCPIPE} opcode:
\begin{verbatim}
dcpipe 2; //Select transformation pipe 2
\end{verbatim}
These coordinate transformation pipes are supported:
\singlespacing
\begin{longtable}{|c|p{3.4in}|} \hline
Index & Description \\ \hline
\reg{0} & Coordinates are unchanged \\ \hline
\reg{1} & Screen height/width specified by \reg{Width} and \reg{Height} registers \\ \hline
\reg{2} & Coordinates must be in 0..1 range \\ \hline
\reg{3} & Coordinates must be in -1..1 range \\ \hline
\reg{4} & All drawing is offset so point (0,0) is the screen center \\ \hline
\end{longtable}
\onehalfspacing
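For example, pipe 2 lets all coordinates be given in the 0..1 range (an illustrative sketch, reusing the compiler macro style of the earlier examples):
\begin{verbatim}
dcpipe 2; //Coordinates are now in the 0..1 range
dcolor white;
drectwh corner_pos,corner_size; //Covers the top-left quarter of the screen
dcpipe 0; //Back to raw 512x512 coordinates
....
color white,255,255,255;
vec2f corner_pos,0,0;
vec2f corner_size,0.5,0.5;
\end{verbatim}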
Before the coordinates are mapped to the screen ones, the GPU also performs additional transformations based on the values of several registers. This allows for scaling, rotating, and offsetting the result of drawing instructions:
\begin{verbatim}
dmove offset; //Move by vector 'offset'
drotatescale 1.23,2; //Scale up twice, rotate by 1.23 radians
drect ...; //Draw something
drotatescale 0,1; //Reset rotation, scale
dmove 0; //Reset offset
\end{verbatim}
Rotation is clockwise, and the argument is specified in radians. It's possible to scale each axis separately; see the list of internal registers on page \pageref{gpuregs}.
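For instance, a hedged sketch of axis-wise scaling through the registers (reusing the rectangle endpoints defined in the earlier example):
\begin{verbatim}
mov #regScaleX,2; //Stretch horizontally
mov #regScaleY,0.5; //Squash vertically
drect rect_point1,rect_point2;
mov #regScaleX,1; //Reset the scale
mov #regScaleY,1;
\end{verbatim}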
\subsection{Vertex Transformations}
The GPU can also perform transformations on separate vertices which are being drawn via the drawing instructions. This is usually used to provide 3D graphics support.
\subsection{Drawing 3D Graphics}
no chapter
\subsection{Rasterizer Control}
no chapter
\subsection{Cursor Control}
no chapter
\subsection{Color Transformation Control}
no chapter
\subsection{Font Rendering}
no chapter
paramlist
\subsection{3D Rendering Control}
no chapter
\subsection{Indexed Rendering}
no chapter
\subsection{Switching buffers} \label{gpubuffers}
no chapter
\subsection{Texturing} \label{gputexturing}
no chapter
\subsection{Texture Transformations}
no chapter
\subsection{Advanced Rendering Instructions}
no chapter
DDFRAME, DDTERRAIN
\section{Internal Registers} \label{gpuregs}
The internal registers of the ZGPU are mapped to memory, and are available as memory locations. They can be read and written at any time, and they control various aspects of the ZGPU operation.
All of these registers are available in the HL-ZASM compiler by prepending the \reg{reg} prefix to the register's name.
Memory offsets \reg{63488}..\reg{64511} are mapped to the IOBus (external ports). The memory offsets \reg{65536}..\reg{131071} are mapped to the MemBus, allowing for access to external devices from the GPU. There is support for both reading and writing this memory, although at very low speed.
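As a hedged illustration (assuming the same general-purpose registers and addressing style as in the earlier examples), a program can read the cursor registers and mirror them out over the MemBus:
\begin{verbatim}
mov R0,#regCursorX; //Cursor X coordinate (0..1)
mov R1,#regCursorY; //Cursor Y coordinate (0..1)
mov #65536,R0; //First MemBus-mapped cell (slow external access)
mov #65537,R1;
\end{verbatim}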
\singlespacing
\begin{longtable}{|c|c|p{3.4in}|} \hline
Name & Address & Description \\ \hline
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\regentry{Clk} {65535}{Current GPU power state (if set to 0, the GPU will be shut down)}
\regentry{Reset} {65534}{Reset the GPU state}
\regentry{HWClear} {65533}{Enables or disables the hardware clear (front buffer filling to black)}
\regentry{VertexMode} {65532}{Enables or disables the vertex mode (raw vertex output instead of rasterizing)}
\regentry{Halt} {65531}{Halts the current GPU execution, and preserves image in the front buffer}
\regentry{RAMReset} {65530}{Clears the GPU RAM}
\regentry{AsyncReset}      {65529}{Reset the asynchronous thread state}
\regentry{AsyncClk} {65528}{Asynchronous thread execution state}
\regentry{AsyncFreq} {65527}{Asynchronous thread frequency}
\regentry{Index} {65526}{GPU index, can be between 0 and 31}
\regentry{HScale} {65525}{Horizontal image scale (for rasterized output)}
\regentry{VScale} {65524}{Vertical image scale (for rasterized output)}
\regentry{HWScale} {65523}{Hardware image scale}
\regentry{Rotation} {65522}{Rotation of the rasterized image. 0 for 0 deg, 1 for 90 deg, 2 for 180 deg, 3 for 270 deg}
\regentry{TexSize} {65521}{Subtexture size}
\regentry{TexDataPtr} {65520}{Pointer to texture data for load by the GPU}
\regentry{TexDataSz} {65519}{Size of the texture data for load by the GPU}
\regentry{RasterQ} {65518}{Rasterizer quality}
\regentry{TexBuffer} {65517}{Buffer used for the texturing (0: front buffer, 1: texture buffer)}
\regentry{Width} {65515}{Screen width (resolution)}
\regentry{Height} {65514}{Screen height (resolution)}
\regentry{Ratio} {65513}{Current screen ratio (physical)}
\regentry{ParamList} {65512}{Pointer to list of parameters for the \reg{DWRITEFMT} instruction, or 0 if unused}
\regentry{CursorX} {65505}{X coordinate of the cursor (0..1)}
\regentry{CursorY} {65504}{Y coordinate of the cursor (0..1)}
\regentry{Cursor} {65503}{Should the cursor be drawn on screen}
\regentry{CursorButtons} {65502}{State of the cursor buttons}
\regentry{BrightnessW} {65495}{Total screen brightness}
\regentry{BrightnessR} {65494}{R component brightness}
\regentry{BrightnessG} {65493}{G component brightness}
\regentry{BrightnessB} {65492}{B component brightness}
\regentry{ContrastW} {65491}{Total screen contrast}
\regentry{ContrastR} {65490}{R component contrast}
\regentry{ContrastG} {65489}{G component contrast}
\regentry{ContrastB} {65488}{B component contrast}
\regentry{CircleQuality} {65485}{Circle output quality (number of vertices). Can be between 3 and 128}
\regentry{OffsetX} {65484}{X offset for screen coordinates of all drawn graphics}
\regentry{OffsetY} {65483}{Y offset for screen coordinates of all drawn graphics}
\regentry{Rotation} {65482}{Rotation in radians for screen coordinates of all drawn graphics}
\regentry{Scale} {65481}{Scale (1 is normal scale) for screen coordinates of all drawn graphics}
\regentry{CenterX} {65480}{X coordinate of centerpoint of rotation (see \reg{Rotation} register)}
\regentry{CenterY} {65479}{Y coordinate of centerpoint of rotation (see \reg{Rotation} register)}
\regentry{CircleStart} {65478}{Circle start angle (in radians)}
\regentry{CircleEnd} {65477}{Circle end angle (in radians)}
\regentry{LineWidth} {65476}{Line width}
\regentry{ScaleX} {65475}{X component of the scale for screen coordinates of all drawn graphics}
\regentry{ScaleY} {65474}{Y component of the scale for screen coordinates of all drawn graphics}
\regentry{FontHalign} {65473}{Font horizontal align mode}
\regentry{ZOffset} {65472}{Extra Z offset for all coordinates passed into vertex pipe}
\regentry{FontValign} {65471}{Font vertical align mode}
\regentry{CullDistance} {65470}{Culling distance}
\regentry{CullMode} {65469}{Face culling mode (0: front, 1: back)}
\regentry{LightMode} {65468}{Lighting mode (0: two-side, 1: front, -1: back)}
\regentry{VertexArray} {65467}{Pointer to array of vertices for indexed rendering}
\regentry{TexRotation} {65466}{Texture rotation in radians}
\regentry{TexScale} {65465}{Texture scale (1 is normal)}
\regentry{TexCenterU} {65464}{U component of centerpoint of texture rotation}
\regentry{TexCenterV} {65463}{V component of centerpoint of texture rotation}
\regentry{TexOffsetU} {65462}{U offset for the texture output}
\regentry{TexOffsetV} {65461}{V offset for the texture output}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{longtable}
\onehalfspacing
\newpage
\section{Instruction Set Reference}
\sintrentry{DTEST}{200}{0}
This opcode generates a test pattern image on the GPU screen, somewhat similar to the PAL TV test pattern. It ignores any coordinate or texture transformations, and ignores all previous color commands/settings.
The rightmost (black) bar will be left transparent.
\textbf{Pseudocode:}
\begin{verbatim}
W = ScreenWidth
H = ScreenHeight
for bar=0,6 do
SetColor(TEST_PATTERN_COLOR[bar])
Rectangle(W*0.125*bar,0,W*0.125,H*0.80)
end
for gray=0,7 do
SetColor(31*gray,31*gray,31*gray,255)
Rectangle(W*0.125*gray,H*0.80,W*0.125,H*0.20)
end
\end{verbatim}
\intrentry{DEXIT/DVSYNC}{201}{0}
Finishes drawing the current frame (used only in the frame-based mode). This must be the last instruction in any program that makes use of the frame-based mode.
The execution may be terminated before this instruction is reached if the number of cycles spent drawing the current frame exceeds the total limit.
Its implementation is exactly the same as that of the \reg{IDLE} instruction (while in the frame-based mode).
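For example, a minimal frame-based program could be structured as follows (a sketch; it assumes that zero-operand instructions are written without arguments):
\begin{verbatim}
//Minimal frame-based program
dclr;        //Clear the current buffer
//... drawing instructions go here ...
dexit;       //Finish the frame (must be the last instruction)
\end{verbatim}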
\textbf{Pseudocode:}
\begin{verbatim}
INTR = 1
\end{verbatim}
\intrentry{DCLR}{202}{0}
Clears the screen/current buffer by filling it with a black background.
\textbf{Pseudocode:}
\begin{verbatim}
SetColor(0,0,0,0)
Rectangle(0,0,ScreenWidth,ScreenHeight)
\end{verbatim}
\intrentry{DCLRTEX}{203}{0}
Clears the screen/current buffer by filling it with a texture.
If vertex texturing is enabled, it will use the texture specified by the \reg{DTEXTURE} opcode. Otherwise it will use the texture specified by the \reg{DXTEXTURE} opcode.
If no texture was defined, it will fill the buffer with a solid black color.
\textbf{Pseudocode:}
\begin{verbatim}
BindState()
SetColor(0,0,0,255)
Rectangle(0,0,ScreenWidth,ScreenHeight)
\end{verbatim}
\intrentry{DVXFLUSH}{204}{0}
Draws all the pending polygons in the vertex buffer to the screen, and clears the buffer.
This instruction is used with the vertex buffer enabled. It will perform Z-sorting, clipping, etc., and draw the output to the screen.
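For example, a simple buffered 3D draw might look like this (a sketch; \reg{Model} is a hypothetical label pointing to triangle data in the format described under \reg{DVXDATA\textunderscore 3F} below):
\begin{verbatim}
denable 0;           //Enable the vertex buffer
dvxdata_3f Model,2;  //Queue two triangles into the buffer
dvxflush;            //Z-sort the buffer and draw it to the screen
\end{verbatim}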
\textbf{Pseudocode:}
\begin{verbatim}
FlushBuffer()
\end{verbatim}
\intrentry{DVXCLEAR}{205}{0}
Clears any pending polygons from the vertex buffer.
\textbf{Pseudocode:}
\begin{verbatim}
ClearBuffer()
\end{verbatim}
\intrentry{DSETBUF\textunderscore VX}{206}{0}
Sets the current drawing target to the raw vertex output. This opcode can only be used when vertex mode is active, and it is the default target for the vertex mode.
\textbf{Pseudocode:}
\begin{verbatim}
SetRenderTarget(2)
\end{verbatim}
\intrentry{DSETBUF\textunderscore SPR}{207}{0}
Sets the current drawing target to the texture buffer. Also known as \reg{DBACKBUF}.
\textbf{Pseudocode:}
\begin{verbatim}
SetRenderTarget(1)
\end{verbatim}
\intrentry{DSETBUF\textunderscore FBO}{208}{0}
Sets the current drawing target to the front/main buffer. Also known as \reg{DFRONTBUF}.
\textbf{Pseudocode:}
\begin{verbatim}
SetRenderTarget(0)
\end{verbatim}
\intrentry{DSWAP}{209}{0}
Copies contents of the texture buffer into the front buffer.
\textbf{Pseudocode:}
\begin{verbatim}
Copy(RenderTarget(1),RenderTarget(0))
\end{verbatim}
\intrentry{DVXPIPE}{210}{1}
Selects the current vertex pipe/vertex transformation mode. This controls the transformation that brings world-space coordinates into screen coordinates. \reg{X} can be one of the following pipes:
\begin{itemize}
\item \reg{0}: X, Y coordinates are used as the screen coordinates
\item \reg{1}: Y, Z coordinates are used as the screen coordinates
\item \reg{2}: X, Z coordinates are used as the screen coordinates
\item \reg{3}: Uses basic 3D perspective projection (Z: depth)
\item \reg{4}: Transforms X, Y coordinates with the current model matrix
\item \reg{5}: Performs 3D transformation with projection and model matrices
\end{itemize}
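For example, a basic 3D setup could select the perspective pipe (sketch):
\begin{verbatim}
dvxpipe 3; //Basic 3D perspective projection (Z: depth)
\end{verbatim}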
\textbf{Pseudocode:}
\begin{verbatim}
VertexPipe = X
\end{verbatim}
\intrentry{DCPIPE}{211}{1}
Selects the current coordinate pipe/coordinate transformation mode. This controls the transformation that converts the screen coordinates into true coordinates, which are correctly mapped to the buffer. \reg{X} can be one of the following pipes:
\begin{itemize}
\item \reg{0}: No transformation
\item \reg{1}: Mapped to screen using the \reg{Width} and \reg{Height} registers.
\item \reg{2}: Coordinates are transformed from 0..1 range.
\item \reg{3}: Coordinates are transformed from -1..1 range.
\item \reg{4}: Coordinates are relative to the screen center (and not the top-left corner).
\end{itemize}
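For example (sketch):
\begin{verbatim}
dcpipe 2; //Screen coordinates are given in the 0..1 range
\end{verbatim}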
\textbf{Pseudocode:}
\begin{verbatim}
CoordinatePipe = X
\end{verbatim}
\intrentry{DENABLE}{212}{1}
Enables one of the internal GPU modes/switches. \reg{X} can be one of the following:
\begin{itemize}
\item \reg{0}: Vertex buffer
\item \reg{1}: Z-Sorting for the triangles in the vertex buffer.
\item \reg{2}: Flat face lighting using the internal lighting system
\item \reg{3}: Front/back face culling
\item \reg{4}: Distance-based culling
\item \reg{5}: Texturing using the internal GPU buffers
\end{itemize}
For example:
\begin{verbatim}
//Prepare 3D drawing
mov #regCullDistance,4.0; //Setup culling distance
denable 0; //Vertex buffer
denable 1; //ZSorting
denable 2; //Lighting
denable 3; //Face culling
denable 4; //Distance-based culling
\end{verbatim}
\textbf{Pseudocode:}
\begin{verbatim}
MODE_SWITCH[X] = 1
\end{verbatim}
\intrentry{DDISABLE}{213}{1}
Disables one of the internal GPU modes/switches. \reg{X} can be one of the following:
\begin{itemize}
\item \reg{0}: Vertex buffer
\item \reg{1}: Z-Sorting for the triangles in the vertex buffer.
\item \reg{2}: Flat face lighting using the internal lighting system
\item \reg{3}: Front/back face culling
\item \reg{4}: Distance-based culling
\item \reg{5}: Texturing using the internal GPU buffers
\end{itemize}
For example:
\begin{verbatim}
//Finish 3D drawing
ddisable 0; //Vertex buffer
ddisable 1; //ZSorting
ddisable 2; //Lighting
ddisable 3; //Face culling
ddisable 4; //Distance-based culling
\end{verbatim}
\textbf{Pseudocode:}
\begin{verbatim}
MODE_SWITCH[X] = 0
\end{verbatim}
\intrentry{DCLRSCR}{214}{1}
Clears the screen with the specified color.
\textbf{Pseudocode:}
\begin{verbatim}
SetColor(X)
Rectangle(0,0,ScreenWidth,ScreenHeight)
\end{verbatim}
\intrentry{DCOLOR}{215}{1}
Sets the current drawing color.
If the vertex buffer is enabled, it will change the color of all the following polygons in the buffer (until the next \reg{DCOLOR} command, or end of buffer).
\textbf{Pseudocode:}
\begin{verbatim}
SetColor(X)
\end{verbatim}
\intrentry{DTEXTURE}{216}{1}
Sets a texture from one of the internal buffers. The buffer the texture data is taken from is specified by the \reg{TexBuffer} register.
By default it will take the entire buffer as the texture, but it is possible to specify smaller subtextures of the buffer. For example, it is possible to use four 256x256 textures, or sixteen 128x128 textures. In that case, the \reg{X} parameter specifies which subtexture must be used.
Example:
\begin{verbatim}
mov #regTexBuffer,0; //Select front buffer
mov #regTexSize,128; //128x128 subtextures
dtexture 2; //Bind subtexture #2
\end{verbatim}
\textbf{Pseudocode:}
\begin{verbatim}
SetBufferTexture(X)
\end{verbatim}
\intrentry{DSETFONT}{217}{1}
Sets the current font for all operations which output text. There are 8 fonts available:
\begin{itemize}
\item \reg{0}: Letter Gothic (Lucida Console)
\item \reg{1}: Courier New
\item \reg{2}: Trebuchet
\item \reg{3}: Arial
\item \reg{4}: Times New Roman
\item \reg{5}: Coolvetica
\item \reg{6}: Akbar
\item \reg{7}: CSD
\end{itemize}
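For example (a sketch; the font size is set with \reg{DSETSIZE}, described next):
\begin{verbatim}
dsetfont 4;  //Times New Roman
dsetsize 24; //Font size 24
\end{verbatim}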
\textbf{Pseudocode:}
\begin{verbatim}
Font = X
\end{verbatim}
\intrentry{DSETSIZE}{218}{1}
Sets the current font size for all operations which output text. The size can be any integer value between 4 and 200.
\textbf{Pseudocode:}
\begin{verbatim}
\end{verbatim}
\intrentry{DMOVE}{219}{1}
Sets the offset for 2D drawing. This instruction will offset the screen coordinates of all following rendering instructions by the given vector.
\reg{X} must be a pointer to a vector, or it can be 0. If \reg{X} is equal to zero, the offset will be removed.
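Example (a sketch; it assumes that a label can be passed directly as the vector pointer, and uses the \reg{vector2f} declaration syntax shown later in this chapter):
\begin{verbatim}
dmove MoveOffset; //Offset all 2D drawing by (16,16)
//...
dmove 0;          //Remove the offset again

MoveOffset:
  vector2f offset,16,16;
\end{verbatim}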
\textbf{Pseudocode:}
\begin{verbatim}
Registers[OffsetX] = X.x
Registers[OffsetY] = X.y
\end{verbatim}
\intrentry{DVXDATA\textunderscore 2F}{220}{2}
Draws a single 2D polygon with up to 128 vertices. Also known as \reg{DVXPOLY}.
If vertex array mode is not used, \reg{X} points to an array of polygon vertex coordinates, and \reg{Y} specifies the total count of vertices.
In vertex array mode, \reg{X} points to an array of indexes into the vertex array, and \reg{Y} specifies the total count of vertices.
\textbf{Pseudocode:}
\begin{verbatim}
VDATA = Registers[VertexArray]
for IDX=1,MIN(128,Y) do
if VDATA > 0 then
VIDX = ReadCell(X+IDX-1)
VD[IDX] = {
x = ReadCell(VDATA+VIDX*2+0),
y = ReadCell(VDATA+VIDX*2+1)}
else
VD[IDX] = {
x = ReadCell(X+(IDX-1)*2+0),
y = ReadCell(X+(IDX-1)*2+1)}
end
ComputeTextureUV(VD[IDX],VD[IDX].x/512,VD[IDX].y/512)
end
DrawToBuffer(VD)
\end{verbatim}
\intrentry{DVXDATA\textunderscore 2F\textunderscore TEX}{221}{2}
Draws a single textured 2D polygon with up to 128 vertices. Also known as \reg{DVXTEXPOLY}.
If vertex array mode is not used, \reg{X} points to an array of polygon vertex coordinates and texture coordinates, and \reg{Y} specifies the total count of vertices.
In vertex array mode, \reg{X} points to an array of indexes into the vertex array, and \reg{Y} specifies the total count of vertices.
\textbf{Pseudocode:}
\begin{verbatim}
VDATA = Registers[VertexArray]
for IDX=1,MIN(128,Y) do
if VDATA > 0 then
VIDX = ReadCell(X+IDX-1)
VD[IDX] = {
x = ReadCell(VDATA+VIDX*4+0),
y = ReadCell(VDATA+VIDX*4+1)}
ComputeTextureUV(VD[IDX],
ReadCell(VDATA+VIDX*4+2),
ReadCell(VDATA+VIDX*4+3))
else
VD[IDX] = {
x = ReadCell(X+(IDX-1)*4+0),
y = ReadCell(X+(IDX-1)*4+1)}
ComputeTextureUV(VD[IDX],
ReadCell(X+(IDX-1)*4+2),
ReadCell(X+(IDX-1)*4+3))
end
end
DrawToBuffer(VD)
\end{verbatim}
\intrentry{DVXDATA\textunderscore 3F}{222}{2}
Draws a single 3D polygon with up to 128 triangles.
If vertex array mode is not used, \reg{X} points to an array of triangle vertex coordinates, and \reg{Y} specifies the total count of triangles.
In vertex array mode, \reg{X} points to an array of indexes into the vertex array, and \reg{Y} specifies the total count of triangles.
\textbf{Pseudocode:}
\begin{verbatim}
VDATA = Registers[VertexArray]
for IDX=1,MIN(128,Y) do
if VDATA > 0 then
VIDX1 = ReadCell(X+(IDX-1)*3+0)
VIDX2 = ReadCell(X+(IDX-1)*3+1)
VIDX3 = ReadCell(X+(IDX-1)*3+2)
VD[1] = {
x = ReadCell(VDATA+VIDX1*3+0),
y = ReadCell(VDATA+VIDX1*3+1),
z = ReadCell(VDATA+VIDX1*3+2)}
VD[2] = {
x = ReadCell(VDATA+VIDX2*3+0),
y = ReadCell(VDATA+VIDX2*3+1),
z = ReadCell(VDATA+VIDX2*3+2)}
VD[3] = {
x = ReadCell(VDATA+VIDX3*3+0),
y = ReadCell(VDATA+VIDX3*3+1),
z = ReadCell(VDATA+VIDX3*3+2)}
else
VD[1] = {
x = ReadCell(X+(IDX-1)*9+0),
y = ReadCell(X+(IDX-1)*9+1),
z = ReadCell(X+(IDX-1)*9+2)}
VD[2] = {
x = ReadCell(X+(IDX-1)*9+3),
y = ReadCell(X+(IDX-1)*9+4),
z = ReadCell(X+(IDX-1)*9+5)}
VD[3] = {
x = ReadCell(X+(IDX-1)*9+6),
y = ReadCell(X+(IDX-1)*9+7),
z = ReadCell(X+(IDX-1)*9+8)}
end
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],1,0)
ComputeTextureUV(VD[3],1,1)
DrawToBuffer(VD)
end
\end{verbatim}
\intrentry{DVXDATA\textunderscore 3F\textunderscore TEX}{223}{2}
Draws a single textured 3D polygon with up to 128 triangles.
If vertex array mode is not used, \reg{X} points to an array of triangle vertex coordinates and texture coordinates, and \reg{Y} specifies the total count of triangles.
In vertex array mode, \reg{X} points to an array of indexes into the vertex array, and \reg{Y} specifies the total count of triangles.
\textbf{Pseudocode:}
\begin{verbatim}
VDATA = Registers[VertexArray]
for IDX=1,MIN(128,Y) do
if VDATA > 0 then
VIDX1 = ReadCell(X+(IDX-1)*3+0)
VIDX2 = ReadCell(X+(IDX-1)*3+1)
VIDX3 = ReadCell(X+(IDX-1)*3+2)
VD[1] = {
x = ReadCell(VDATA+VIDX1*5+0),
y = ReadCell(VDATA+VIDX1*5+1),
z = ReadCell(VDATA+VIDX1*5+2),
}
VD[2] = {
x = ReadCell(VDATA+VIDX2*5+0),
y = ReadCell(VDATA+VIDX2*5+1),
z = ReadCell(VDATA+VIDX2*5+2),
}
VD[3] = {
x = ReadCell(VDATA+VIDX3*5+0),
y = ReadCell(VDATA+VIDX3*5+1),
z = ReadCell(VDATA+VIDX3*5+2),
}
ComputeTextureUV(VD[1],
ReadCell(VDATA+VIDX1*5+3),
ReadCell(VDATA+VIDX1*5+4))
ComputeTextureUV(VD[2],
ReadCell(VDATA+VIDX2*5+3),
ReadCell(VDATA+VIDX2*5+4))
ComputeTextureUV(VD[3],
ReadCell(VDATA+VIDX3*5+3),
ReadCell(VDATA+VIDX3*5+4))
else
VD[1] = {
x = ReadCell(X+(IDX-1)*15+0),
y = ReadCell(X+(IDX-1)*15+1),
z = ReadCell(X+(IDX-1)*15+2),
}
VD[2] = {
x = ReadCell(X+(IDX-1)*15+5),
y = ReadCell(X+(IDX-1)*15+6),
z = ReadCell(X+(IDX-1)*15+7),
}
VD[3] = {
x = ReadCell(X+(IDX-1)*15+10),
y = ReadCell(X+(IDX-1)*15+11),
z = ReadCell(X+(IDX-1)*15+12),
}
ComputeTextureUV(VD[1],
ReadCell(X+(IDX-1)*15+ 3),
ReadCell(X+(IDX-1)*15+ 4))
ComputeTextureUV(VD[2],
ReadCell(X+(IDX-1)*15+ 8),
ReadCell(X+(IDX-1)*15+ 9))
ComputeTextureUV(VD[3],
ReadCell(X+(IDX-1)*15+13),
ReadCell(X+(IDX-1)*15+14))
end
DrawToBuffer(VD)
end
\end{verbatim}
\intrentry{DVXDATA\textunderscore 3F\textunderscore WF}{224}{2}
Draws a single wireframe 3D polygon with up to 128 triangles.
If vertex array mode is not used, \reg{X} points to an array of triangle vertex coordinates, and \reg{Y} specifies the total count of triangles.
In vertex array mode, \reg{X} points to an array of indexes into the vertex array, and \reg{Y} specifies the total count of triangles.
\textbf{Pseudocode:}
\begin{verbatim}
VDATA = Registers[VertexArray]
for IDX=1,MIN(128,Y) do
if VDATA > 0 then
VIDX1 = ReadCell(X+(IDX-1)*3+0)
VIDX2 = ReadCell(X+(IDX-1)*3+1)
VIDX3 = ReadCell(X+(IDX-1)*3+2)
VD[1] = {
x = ReadCell(VDATA+VIDX1*3+0),
y = ReadCell(VDATA+VIDX1*3+1),
z = ReadCell(VDATA+VIDX1*3+2)}
VD[2] = {
x = ReadCell(VDATA+VIDX2*3+0),
y = ReadCell(VDATA+VIDX2*3+1),
z = ReadCell(VDATA+VIDX2*3+2)}
VD[3] = {
x = ReadCell(VDATA+VIDX3*3+0),
y = ReadCell(VDATA+VIDX3*3+1),
z = ReadCell(VDATA+VIDX3*3+2)}
else
VD[1] = {
x = ReadCell(X+(IDX-1)*9+0),
y = ReadCell(X+(IDX-1)*9+1),
z = ReadCell(X+(IDX-1)*9+2)}
VD[2] = {
x = ReadCell(X+(IDX-1)*9+3),
y = ReadCell(X+(IDX-1)*9+4),
z = ReadCell(X+(IDX-1)*9+5)}
VD[3] = {
x = ReadCell(X+(IDX-1)*9+6),
y = ReadCell(X+(IDX-1)*9+7),
z = ReadCell(X+(IDX-1)*9+8)}
end
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],1,0)
ComputeTextureUV(VD[3],1,1)
DrawToBuffer(VD,WIREFRAME)
end
\end{verbatim}
\intrentry{DRECT}{225}{2}
Draws a single rectangle. \reg{X} is a pointer to a vector which specifies the top-left vertex, and \reg{Y} is a pointer to a vector which specifies the bottom-right vertex.
\textbf{Pseudocode:}
\begin{verbatim}
VD[1] = {
x = ReadCell(X+0),
y = ReadCell(X+1)}
VD[2] = {
x = ReadCell(Y+0),
y = ReadCell(X+1)}
VD[3] = {
x = ReadCell(Y+0),
y = ReadCell(Y+1)}
VD[4] = {
x = ReadCell(X+0),
y = ReadCell(Y+1)}
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],1,0)
ComputeTextureUV(VD[3],1,1)
ComputeTextureUV(VD[4],0,1)
DrawToBuffer(VD)
\end{verbatim}
\intrentry{DCIRCLE}{226}{2}
Draws a circle or a sector with a specific radius and angles.
\textbf{Pseudocode:}
\begin{verbatim}
R = Y
SIDES = clamp(ReadCell(65485),3,64)
START = ReadCell(65478)
END = ReadCell(65477)
STEP = (END-START)/SIDES
VEC = ReadVector2f(X)
for IDX=1,SIDES do
VD[1] = {
x = VEC.x + R*sin(START+STEP*(IDX+0)),
y = VEC.y + R*cos(START+STEP*(IDX+0))}
VD[2] = {
x = VEC.x,
y = VEC.y}
VD[3] = {
x = VEC.x + R*sin(START+STEP*(IDX+1)),
y = VEC.y + R*cos(START+STEP*(IDX+1))}
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],1,0)
ComputeTextureUV(VD[3],1,1)
DrawToBuffer(VD)
end
\end{verbatim}
\intrentry{DLINE}{227}{2}
Draws a line between two points specified by the vectors \reg{X} and \reg{Y}.
\textbf{Pseudocode:}
\begin{verbatim}
DrawLine(ReadVector2f(X),ReadVector2f(Y))
\end{verbatim}
\intrentry{DRECTWH}{228}{2}
Draws a single rectangle. \reg{X} is a pointer to a vector which specifies the top-left corner coordinates, and \reg{Y} is a pointer to a vector which specifies the rectangle size.
\textbf{Pseudocode:}
\begin{verbatim}
VD[1] = {
x = ReadCell(X+0),
y = ReadCell(X+1)}
VD[2] = {
x = ReadCell(X+0)+ReadCell(Y+0),
y = ReadCell(X+1)}
VD[3] = {
x = ReadCell(X+0)+ReadCell(Y+0),
y = ReadCell(X+1)+ReadCell(Y+1)}
VD[4] = {
x = ReadCell(X+0),
y = ReadCell(X+1)+ReadCell(Y+1)}
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],1,0)
ComputeTextureUV(VD[3],1,1)
ComputeTextureUV(VD[4],0,1)
DrawToBuffer(VD)
\end{verbatim}
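Example (a sketch; it assumes that labels can be passed directly as the vector pointers, and uses the \reg{vector2f} declaration syntax shown later in this chapter):
\begin{verbatim}
drectwh RectPos,RectSize; //Draw a 64x32 rectangle at (32,32)

RectPos:
  vector2f pos,32,32;
RectSize:
  vector2f size,64,32;
\end{verbatim}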
\intrentry{DORECT}{229}{2}
Draws an outline of a rectangle. \reg{X} is a pointer to a vector which specifies the top-left vertex, and \reg{Y} is a pointer to a vector which specifies the bottom-right vertex.
The line width can be specified with the \reg{DSETWIDTH} instruction.
\textbf{Pseudocode:}
\begin{verbatim}
VD[1] = {
x = ReadCell(X+0),
y = ReadCell(X+1)}
VD[2] = {
x = ReadCell(Y+0),
y = ReadCell(X+1)}
VD[3] = {
x = ReadCell(Y+0),
y = ReadCell(Y+1)}
VD[4] = {
x = ReadCell(X+0),
y = ReadCell(Y+1)}
DrawLine(VD[1],VD[2])
DrawLine(VD[2],VD[3])
DrawLine(VD[3],VD[4])
DrawLine(VD[4],VD[1])
\end{verbatim}
\intrentry{DTRANSFORM2F}{230}{2}
Transforms a 2D vector using the projection and the modelview matrices.
\textbf{Pseudocode:}
\begin{verbatim}
\end{verbatim}
\intrentry{DTRANSFORM3F}{231}{2}
Transforms a 3D vector using the projection and the modelview matrices.
\textbf{Pseudocode:}
\begin{verbatim}
\end{verbatim}
\intrentry{DSCRSIZE}{232}{2}
Sets the current screen size.
\textbf{Pseudocode:}
\begin{verbatim}
Registers[Width] = X
Registers[Height] = Y
\end{verbatim}
\intrentry{DROTATESCALE}{233}{2}
Rotates and scales the coordinates of all following graphics instructions. The default centerpoint of rotation is (0,0), which can be changed using the \reg{CenterX} and \reg{CenterY} registers.
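For example (a sketch; the rotation is given in radians):
\begin{verbatim}
mov #regCenterX,256;  //Rotate around the point (256,256)...
mov #regCenterY,256;
drotatescale 0.785,2; //...by roughly 45 degrees, and scale everything 2x
\end{verbatim}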
\textbf{Pseudocode:}
\begin{verbatim}
Registers[Rotation] = X
Registers[Scale] = Y
\end{verbatim}
\intrentry{DORECTWH}{234}{2}
Draws an outline of a rectangle. \reg{X} is a pointer to a vector which specifies the top-left corner coordinates, and \reg{Y} is a pointer to a vector which specifies the rectangle size.
The line width can be specified with the \reg{DSETWIDTH} instruction.
\textbf{Pseudocode:}
\begin{verbatim}
VD[1] = {
x = ReadCell(X+0),
y = ReadCell(X+1)}
VD[2] = {
x = ReadCell(X+0)+ReadCell(Y+0),
y = ReadCell(X+1)}
VD[3] = {
x = ReadCell(X+0)+ReadCell(Y+0),
y = ReadCell(X+1)+ReadCell(Y+1)}
VD[4] = {
x = ReadCell(X+0),
y = ReadCell(X+1)+ReadCell(Y+1)}
DrawLine(VD[1],VD[2])
DrawLine(VD[2],VD[3])
DrawLine(VD[3],VD[4])
DrawLine(VD[4],VD[1])
\end{verbatim}
\intrentry{DCULLMODE}{235}{2}
Sets the current culling mode and lighting mode.
\reg{X} sets the culling mode:
\begin{itemize}
\item \reg{0}: front face culling
\item \reg{1}: back face culling
\end{itemize}
\reg{Y} sets the lighting mode:
\begin{itemize}
\item \reg{0}: double-side lighting
\item \reg{1}: front side lighting
\item \reg{-1}: back side lighting
\end{itemize}
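For example (sketch):
\begin{verbatim}
dcullmode 1,0; //Back face culling, double-side lighting
\end{verbatim}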
\textbf{Pseudocode:}
\begin{verbatim}
Register[CullMode] = X
Register[LightMode] = Y
\end{verbatim}
\intrentry{DPIXEL}{238}{2}
Outputs a single pixel to the screen. \reg{X} is a pointer to a vector which specifies the coordinates on screen (they can be non-integer, which will cause an anti-aliasing effect), and \reg{Y} is a pointer to the color of the pixel.
\textbf{Pseudocode:}
\begin{verbatim}
SetPixel(X,Y)
\end{verbatim}
\intrentry{DWRITE}{240}{2}
Writes a null-terminated string to the screen. \reg{X} is a pointer to a vector that specifies the position of the string on screen, and \reg{Y} is a pointer to the first character of the string.
\textbf{Pseudocode:}
\begin{verbatim}
TEXT = VM:ReadString(Y)
FontWrite(X,TEXT)
\end{verbatim}
\intrentry{DWRITEI}{241}{2}
Writes an integer value to the screen. \reg{X} is a pointer to a vector that specifies the position of the string on screen, and \reg{Y} is the value that must be drawn on screen.
\textbf{Pseudocode:}
\begin{verbatim}
FontWrite(X,Integer(Y))
\end{verbatim}
\intrentry{DWRITEF}{242}{2}
Writes a floating-point value to the screen. \reg{X} is a pointer to a vector that specifies the position of the string on screen, and \reg{Y} is the value that must be drawn on screen.
\textbf{Pseudocode:}
\begin{verbatim}
FontWrite(X,Y)
\end{verbatim}
\intrentry{DENTRYPOINT}{243}{2}
Sets one of the GPU entrypoints. Each entrypoint corresponds to a specific function.
\textbf{Pseudocode:}
\begin{verbatim}
\end{verbatim}
\intrentry{DSETLIGHT}{244}{2}
Sets parameters of one of the 8 lights supported by the GPU. \reg{X} is the light index (0..7), and \reg{Y} points to the following data structure:
\begin{verbatim}
LightData:
vector4f position,<x>,<y>,<z>,0;
vector4f color,<r>,<g>,<b>,<brightness>;
\end{verbatim}
Light brightness is usually set to 1, but can vary.
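Example (a sketch; it assumes that a label can be passed directly as the data pointer, and that the color components follow the 0..255 convention used elsewhere in this chapter):
\begin{verbatim}
dsetlight 0,Light0; //Set light #0

Light0:
  vector4f position,0,0,10,0;
  vector4f color,255,255,255,1;
\end{verbatim}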
\textbf{Pseudocode:}
\begin{verbatim}
if (X < 0) or (X > 7) then
Interrupt(19,0)
else
Lights[X] = {
Position = ReadVector4f(Y+0),
Color = ReadVector4f(Y+4)}
end
\end{verbatim}
\intrentry{DGETLIGHT}{245}{2}
Reads light data for one of the 8 lights supported by the GPU. \reg{X} is the light index (0..7), and \reg{Y} points to the following data structure, which will be filled with light data:
\begin{verbatim}
LightData:
vector4f position,<x>,<y>,<z>,0;
vector4f color,<r>,<g>,<b>,<brightness>;
\end{verbatim}
\textbf{Pseudocode:}
\begin{verbatim}
N/A
\end{verbatim}
\intrentry{DWRITEFMT}{246}{2}
Writes a formatted string to the screen. \reg{X} points to a vector that specifies the position of the string on screen, and \reg{Y} is a pointer to the string that must be drawn on screen.
Variables used in the string format must follow the string data. If the \reg{ParamList} register is set, then the variables used in the string format start at that offset instead.
\textbf{Pseudocode:}
\begin{verbatim}
N/A
\end{verbatim}
\intrentry{DWRITEFIX}{247}{2}
Writes a fixed-point value to the screen. \reg{X} is a pointer to a vector that specifies the position of the string on screen, and \reg{Y} is the value that must be drawn on screen.
\textbf{Pseudocode:}
\begin{verbatim}
N/A
\end{verbatim}
\intrentry{DTEXTWIDTH}{248}{2}
Returns the width of a string using the current font, and writes it to \reg{X}.
\textbf{Pseudocode:}
\begin{verbatim}
\end{verbatim}
\intrentry{DTEXTHEIGHT}{249}{2}
Returns the height of a string using the current font, and writes it to \reg{X}.
\textbf{Pseudocode:}
\begin{verbatim}
\end{verbatim}
%\intrentry{DLOOPXY}{259}{2}
%
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
\intrentry{MLOADPROJ}{271}{1}
Loads the given matrix into the GPU projection matrix. \reg{X} points to the matrix.
\textbf{Pseudocode:}
\begin{verbatim}
ProjectionMatrix = ReadMatrix(X)
\end{verbatim}
\intrentry{MREAD}{272}{1}
Reads the GPU model matrix. \reg{X} points to the matrix into which the model matrix will be written.
\textbf{Pseudocode:}
\begin{verbatim}
WriteMatrix(X,ModelMatrix)
\end{verbatim}
\intrentry{DT}{274}{1}
Returns the time passed since the last frame (works only in frame-based mode).
\textbf{Pseudocode:}
\begin{verbatim}
X = TimerDT
\end{verbatim}
\intrentry{DSHADE}{276}{1}
Shades the current color by a specific amount. \reg{X} is the shading value. A value between 0 and 1 will make the color darker, a value of 1 will not change the current color, and a value higher than 1 will make the color brighter.
There is no normalization, so values outside of the 0..1 range might produce out-of-range colors.
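For example (sketch):
\begin{verbatim}
dshade 0.5; //Make the current color twice as dark
\end{verbatim}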
\textbf{Pseudocode:}
\begin{verbatim}
Color.x = Color.x*X
Color.y = Color.y*X
Color.z = Color.z*X
SetColor(Color)
\end{verbatim}
\intrentry{DSETWIDTH}{277}{1}
Sets the line width.
\textbf{Pseudocode:}
\begin{verbatim}
Register[LineWidth] = X
\end{verbatim}
\intrentry{MLOAD}{278}{1}
Loads the given matrix into the GPU model matrix. \reg{X} points to the matrix.
\textbf{Pseudocode:}
\begin{verbatim}
ModelMatrix = ReadMatrix(X)
\end{verbatim}
\intrentry{DSHADENORM}{279}{1}
Shades the current color by a specific amount. \reg{X} is the shading value. A value between 0 and 1 will make the color darker, a value of 1 will not change the current color, and a value higher than 1 will make the color brighter.
The resulting color is normalized, so it's possible to use values outside of the 0..1 range.
\textbf{Pseudocode:}
\begin{verbatim}
Color.x = Clamp(Color.x*X,0,255)
Color.y = Clamp(Color.y*X,0,255)
Color.z = Clamp(Color.z*X,0,255)
SetColor(Color)
\end{verbatim}
\intrentry{DDFRAME}{280}{1}
Draws a framed rectangle. \reg{X} points to the following data structure:
\begin{verbatim}
FrameData:
vector2f position,<x>,<y>;
vector2f size,<w>,<h>;
vector4f info,<shadow>,<highlight>,<face>,<border size>;
\end{verbatim}
The \reg{info} entry stores pointers to colors that must be used in rendering.
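Example (a sketch; it assumes that labels can be passed directly as pointers, both for the structure itself and for the color entries inside \reg{info}; the colors are declared here as \reg{vector4f} blocks of which only the first three components are read):
\begin{verbatim}
ddframe Frame0; //Draw a 128x64 frame at (32,32) with a 2 unit border

Frame0:
  vector2f position,32,32;
  vector2f size,128,64;
  vector4f info,CShadow,CHighlight,CFace,2;

CShadow:
  vector4f color,80,80,80,0;
CHighlight:
  vector4f color,220,220,220,0;
CFace:
  vector4f color,160,160,160,0;
\end{verbatim}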
\textbf{Pseudocode:}
\begin{verbatim}
V1 = ReadVector2f(X+0)
V2 = ReadVector2f(X+2)
V3 = ReadVector4f(X+4)
CSHADOW = ReadVector3f(V3.x)
CHIGHLIGHT = ReadVector3f(V3.y)
CFACE = ReadVector3f(V3.z)
VD1[1] = {
x = V3.w + V1.x,
y = V3.w + V1.y}
VD1[2] = {
x = V3.w + V1.x + V2.x,
y = V3.w + V1.y}
VD1[3] = {
x = V3.w + V1.x + V2.x,
y = V3.w + V1.y + V2.y}
VD1[4] = {
x = V3.w + V1.x,
y = V3.w + V1.y + V2.y}
VD2[1] = {
x = -V3.w + V1.x,
y = -V3.w + V1.y}
VD2[2] = {
x = -V3.w + V1.x + V2.x,
y = -V3.w + V1.y}
VD2[3] = {
x = -V3.w + V1.x + V2.x,
y = -V3.w + V1.y + V2.y}
VD2[4] = {
x = -V3.w + V1.x,
y = -V3.w + V1.y + V2.y}
VD3[1] = {
x = V1.x,
y = V1.y}
VD3[2] = {
x = V1.x + V2.x,
y = V1.y}
VD3[3] = {
x = V1.x + V2.x,
y = V1.y + V2.y}
VD3[4] = {
x = V1.x,
y = V1.y + V2.y}
ComputeTextureUV(VD1[1],0,0)
ComputeTextureUV(VD1[2],1,0)
ComputeTextureUV(VD1[3],1,1)
ComputeTextureUV(VD1[4],0,1)
ComputeTextureUV(VD2[1],0,0)
ComputeTextureUV(VD2[2],1,0)
ComputeTextureUV(VD2[3],1,1)
ComputeTextureUV(VD2[4],0,1)
ComputeTextureUV(VD3[1],0,0)
ComputeTextureUV(VD3[2],1,0)
ComputeTextureUV(VD3[3],1,1)
ComputeTextureUV(VD3[4],0,1)
SetColor(CSHADOW)
DrawToBuffer(VD1)
SetColor(CHIGHLIGHT)
DrawToBuffer(VD2)
SetColor(CFACE)
DrawToBuffer(VD3)
\end{verbatim}
%\intrentry{DDBAR}{281}{1}
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
%\intrentry{DDGAUGE}{282}{1}
%
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
\intrentry{DRASTER}{283}{1}
Sets the raster quality.
\textbf{Pseudocode:}
\begin{verbatim}
Registers[RasterQ] = X
\end{verbatim}
\intrentry{DDTERRAIN}{284}{1}
Draws 3D terrain. \reg{X} points to the terrain data.
\textbf{Pseudocode:}
\begin{verbatim}
W = ReadCell(X+0)
H = ReadCell(X+1)
R = clamp(floor(ReadCell(X+2)),0,16)
U = ReadCell(X+3)
V = ReadCell(X+4)
MinX = clamp(floor(W/2 + U - R),1,W-1)
MinY = clamp(floor(H/2 + V - R),1,H-1)
MaxX = clamp(floor(W/2 + U + R),1,W-1)
MaxY = clamp(floor(H/2 + V + R),1,H-1)
for PX=MinX,MaxX do
for PY=MinY,MaxY do
XPOS = PX - W/2 - U - 0.5
YPOS = PY - H/2 - V - 0.5
if (PX > 0) and (PX <= W-1) and (PY > 0) and (PY <= H-1) and (XPOS^2+YPOS^2 <= R^2) then
Z1 = ReadCell(X+16+(PY-1)*W+(PX-1))
Z2 = ReadCell(X+16+(PY-1)*W+(PX-0))
Z3 = ReadCell(X+16+(PY-0)*W+(PX-0))
Z4 = ReadCell(X+16+(PY-0)*W+(PX-1))
VD[1] = { x = XPOS, y = YPOS, z = Z1 }
VD[2] = { x = XPOS+1, y = YPOS, z = Z2 }
VD[3] = { x = XPOS+1, y = YPOS+1, z = Z3}
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],1,0)
ComputeTextureUV(VD[3],1,1)
DrawToBuffer(VD)
VD[1] = { x = XPOS, y = YPOS, z = Z1}
VD[2] = { x = XPOS, y = YPOS+1, z = Z4}
VD[3] = { x = XPOS+1, y = YPOS+1, z = Z3}
ComputeTextureUV(VD[1],0,0)
ComputeTextureUV(VD[2],0,1)
ComputeTextureUV(VD[3],1,1)
DrawToBuffer(VD)
end
end
end
\end{verbatim}
%\intrentry{DLOADBYTES}{290}{2}
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
\intrentry{DMULDT}{294}{2}
Multiplies \reg{Y} by the time-step and writes the result into \reg{X}. Used in frame-based mode to provide smooth animations.
\textbf{Pseudocode:}
\begin{verbatim}
X = Y * TimerDT
\end{verbatim}
%\intrentry{DSMOOTH}{297}{2}
%
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
\intrentry{DBEGIN}{298}{0}
Starts asynchronous drawing. Used only in the asynchronous thread.
\textbf{Pseudocode:}
\begin{verbatim}
SetRenderTarget(1)
\end{verbatim}
\intrentry{DEND}{299}{0}
Ends asynchronous drawing, and outputs the drawn image to the screen.
\textbf{Pseudocode:}
\begin{verbatim}
FlushBuffer()
Copy(1,0)
SetRenderTarget(2)
\end{verbatim}
%\intrentry{DROTATE}{300}{1}
%
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
%\intrentry{DTRANSLATE}{301}{1}
%
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
%\intrentry{DSCALE}{302}{1}
%
%\textbf{Psuedocode:}
%\begin{verbatim}
%\end{verbatim}
\intrentry{DXTEXTURE}{303}{1}
Binds a predefined texture. \reg{X} points to the string that contains the texture name. If \reg{X} is equal to 0, the texture will be unbound.
\textbf{Pseudocode:}
\begin{verbatim}
if X > 0 then
NAME = VM:ReadString(X)
SetTexture(NAME)
else
SetTexture(0)
end
\end{verbatim}
\documentclass[11pt]{article}
\usepackage{fullpage}
\usepackage{titling}
\usepackage{indentfirst}
\usepackage{amsmath, amssymb}
\usepackage{tikz}
\usepackage[colorlinks,linkcolor=blue]{hyperref}
\setlength{\parindent}{0em}
\setlength{\droptitle}{-3em}
\begin{document}
\title{Factor Graph}
\date{}
\maketitle
\vspace{-5em}
\section{Variables and Factors}
DeepDive uses factor graphs to perform learning and inference. A factor graph is a type of probabilistic graphical model.
There are two types of nodes in a factor graph, (random) variables and factors. A random variable can be used to quantitatively describe an event. For example, we can use a random variable to denote if John smokes. If John smokes, the random variable takes a value of 1, and 0 if John does not smoke. For now, DeepDive only supports boolean variables, so we will constrain our discussion to boolean variables.
A factor is a function of variables, and is used to evaluate the relations among variable(s). For example, a function imply(A, B) means if A, then B. Now suppose we have the relation that ``if John smokes then he has cancer''. Here we have two variables, one indicating if John smokes, the other indicating if John has cancer. Thus, imply(smoke, cancer) expresses the rule above.
The figure shows an example of a factor graph, where $v_1$ and $v_2$ are two variables, and $f_1, f_2$ are two factors. Factor $f_1$ is connected with $v_1$ and $v_2$, while $f_2$ is connected with $v_2$. We will use this example to illustrate some basic concepts about factor graphs.
\begin{center}
\includegraphics[width=2in]{factor_graph.png}
\end{center}
\section{Possible Worlds and Probabilities}
A possible world is a particular possible assignment to every variable, denoted by $I$. We can also think of it as each variable taking a particular value.
\begin{itemize}
\item How many possible worlds are there in the factor graph above? Each variable can take value 0 or 1, and there are two variables. So we have four possible worlds. The possible worlds are shown in the table below, with each column representing a possible world.
\begin{center}
\begin{tabular}{|l|llll|}
\hline
$v_1$ & 0 & 0 & 1 & 1\\
\hline
$v_2$ & 0 & 1 & 0 & 1\\
\hline
\end{tabular}
\end{center}
\end{itemize}
How do we define the probability of a possible world? We define it through factor functions. We give different weights to factor functions, to express the relative influence of each factor on the probability. Factors with larger weights have a greater impact on the probability. The probability of a possible world is then defined to be proportional to some measure of the weighted combination of factor functions (for how to define such a measure, please refer to [Factor Graphs and the Sum-Product Algorithm] \url{http://www.comm.utoronto.ca/~frank/papers/KFL01.pdf}), i.e., for the above graph,
\[ \text{Pr}(I) \propto \text{measure}\{w_1 f_1(v_1, v_2) + w_2 f_2(v_2)\}. \]
Here, $w_1, w_2$ are weights associated with factor functions.
\begin{itemize}
\item Suppose $f_1$ is the imply function with weight $w_1 = 1$, and $f_2$ is isTrue with weight $w_2 = 0.5$ (for an explanation of the types of factor functions, see \url{http://deepdive.stanford.edu/inference_rule_functions.html}). What is the probability of the possible world $v_1 = 1, v_2 = 0$ proportional to (in terms of measure)?
Here, $f_1(v_1, v_2)$ = imply(1, 0) = 0, and $f_2(v_2)$ = isTrue(0) = 0. Thus, the answer is easily computed by measure$(w_1 f_1(v_1, v_2) + w_2 f_2(v_2))$ = measure$(1 \cdot 0 + 0.5 \cdot 0)$ = measure(0).
\end{itemize}
It is not convenient to express the probability as merely proportional to something; we would rather have an absolute value. To define absolute probabilities of possible worlds, we can simply normalize the quantities above over all possible worlds. That is, we define the probability of a possible world $I$ as
\[ \text{Pr}(I) = \frac{\text{measure}\{w^T f(I)\}}{\sum_{J} \text{measure}\{w^T f(J)\}}, \]
where the sum is over all possible worlds.
\begin{itemize}
\item What's the probability of the possible world $v_1=1,v_2=0$?
\end{itemize}
\section{Marginal Inference and Weight Learning}
Now, we can perform marginal inference on factor graphs. A marginal inference is to infer the probability of one variable taking a particular value. For example, if we would like to infer whether John has cancer, and this is expressed using a variable $v_1$, it means we would like to infer the probability of $v_1 = 1$. It is straightforward to define this probability as just the sum of the probabilities of the possible worlds that contain the specific value for that variable. This is similar to the relation between marginal probability and joint probability. The marginal inference for the event $\{v_1 = 1\}$ is expressed as
\[ \text{Pr}\{v_1 = 1\} = \sum_{I:v_1=1} \text{Pr}(I). \]
\begin{itemize}
\item What is the result of the marginal inference $\text{Pr}\{v_1 = 1\}$?
\end{itemize}
In DeepDive, you can assign factor weights manually, or you can let DeepDive learn weights automatically. In order to learn weights automatically, you must have enough training data available. DeepDive chooses weights that agree most with the training data. Formally, the training data is just a set of possible worlds, and we choose weights by maximizing the probabilities of these possible worlds.
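One common way to write this objective (a sketch, where $\mathcal{T}$ denotes the set of training worlds and the dependence on $w$ enters through the weighted factor functions) is
\[ w^{*} = \operatorname*{arg\,max}_{w} \prod_{I \in \mathcal{T}} \text{Pr}(I), \]
i.e., we pick the weights under which the training worlds are jointly most probable.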
\end{document}
% ----------------------------------------------------------------------
\lecture{Language}{language}
% ----------------------------------------------------------------------
\part{Language}
% ----------------------------------------------------------------------
\section{Base language}
% ------------------------------
\subsection{Motivation}
% ------------------------------
\input{language/motivation}
%------------------------------------------------------------
\subsection{Integrity constraint}
% ------------------------------
\input{language/basic-integrity-constraint}
% ------------------------------
\subsection{Choice rule}
% ------------------------------
\input{language/basic-choice-rule}
% ------------------------------
\subsection{Cardinality rule}
% ------------------------------
\input{language/basic-cardinality-rule}
% ------------------------------
\subsection{Weight rule}
% ------------------------------
\input{language/basic-weight-rule}
% ------------------------------
\subsection{Conditional literal}
% ------------------------------
\input{language/conditional-literal}
%------------------------------------------------------------
\section{Optimization}
% ------------------------------
\input{language/optimization-statements}
% ----------------------------------------------------------------------
\section{Formats}
% ------------------------------
\input{language/gringo-formats}
% ------------------------------
\subsection{Input format}
% ------------------------------
\input{language/gringo}
% ------------------------------
\subsection{Intermediate format}
% ------------------------------
% \subsubsection{smodels format}
% ------------------------------
\input{language/smodels-format}
% ------------------------------
% \subsubsection{aspif format}
% ------------------------------
\input{language/aspif-format}
% ------------------------------
% \subsection{Output formats}
% ----------------------------------------------------------------------
\section{Summary}
% ------------------------------
\input{language/summary}
% ----------------------------------------------------------------------
%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End:
\chapter*{Acknowledgements}
\markboth{Acknowledgements}{Acknowledgements}
% \addcontentsline{toc}{chapter}{Acknowledgements}
% put your text here
\vspace{0.2\textheight}
I would first like to thank my family for having supported me for so long:
I would like to thank my mother for having always been there and believed in me,
my father for financially supporting me for so long and having faith in my endeavors,
and my sister for trying to remind me of what's important.\\
As with countless other souls, I had to make some last-minute decisions
due to Covid-19; I am very grateful to both Martin and Praneeth for
agreeing to take me on board for my thesis on such short notice.
I owe a special thanks to Yunus Inan; I was very lucky to have as a
flatmate such a talented information theorist --- some ideas and a lot
of the proofs came as a result of our interactions.
I would also like to thank Sidak Pal Singh for reading an early draft of the thesis
and giving many valuable comments. \\
I would finally like to thank Darkhan Musanov, whose strong conviction in me
had an important role in my decision to leave business school to try something
more mathematical.
\bigskip
\noindent\textit{Lausanne, \today}
\hfill Ignacio S. Aleman
\section{What have I done?}
\begin{frame}{\insertsec}
I've been doing Neural Networks-related courses on Coursera
\begin{center}
\includegraphics[width=.5\textwidth]{images/coursera_logo}
\end{center}
\begin{itemize}
\item Neural Networks and Deep Learning (\emph{94.3\%})
\item Improving Deep Neural Networks: Hyperparameter tuning, Regularization and
Optimization (\emph{97.8\%})
\item Convolutional Neural Networks (\emph{94.4\%})
\end{itemize}
\end{frame}
% !TEX root = ../main.tex
\section{Contents}
\begin{frame}
\frametitle{Holistic approach}
\begin{figure}[H]
\centering
\begin{tikzpicture}
%\draw [thin] (0,0) circle [radius=2];
\draw (0,2) node{Synchronisation};
\draw (1.732,-1) node{Topology};
\draw (-1.732,-1) node{Plasticity};
\draw[black, <->, thick] (-1,1.73)to[out=180+25,in=120-7.5](-1.85,-0.77);
\draw[black, <->, thick] (1,1.73)to[out=-25,in=180-120+7.5](1.85,-0.77);
\draw[black, <->, thick] (-1.59,-1.22)to[out=-60+7.5,in=-120-7.5](1.59,-1.22);
\end{tikzpicture}
\end{figure}
\end{frame}
\section{Introduction}
\begin{frame}
\frametitle{Neuron dynamics}
How do neurons communicate?
\begin{itemize}
\item Neurotransmitters \\
\item Action potential = explosion of electrical activity \\ [0.5cm]
\end{itemize}
%How can we capture this behaviour?
%\begin{itemize}
%\item Human brain $\sim$ 100 billion neurons \\
%\item \MFR: average dynamics of the network \\ [0.5cm]
%\end{itemize}
How do neurons learn?
\begin{itemize}
\item \textsl{Fire and wire}: Correlation of neuronal activity \\
\end{itemize}
\end{frame}
\section{\theory The Theta Neuron Model}
\begin{frame}
\frametitle{Model Description}
%\tabitem Formulation
%\begin{align*}
%\dot{\theta} = (1-\cos \theta)+(1+\cos \theta) \cdot I \qquad \theta \in \T
%\end{align*}
\tabitem Normal form of SNIC bifurcation
\begin{figure}[H]
\minipage{0.33\linewidth}
\centering
\begin{tikzpicture}
\draw[thick] (0,0) circle [radius=1];
\draw (0,-1.2) node[below]{Excitable regime: $I < 0$};
\draw (-1,0) node[left]{$\pi$};
\draw[fill=black, black] (1,0) circle [radius=0.025];
\draw (1,0) node[right]{0};
\draw[fill=red, red] (-1,0) circle [radius=0.05];
\draw (-1,0) node[right]{spike};
\draw[black, ->, thick] (0.866, 0.5)to[out=-60,in=90](1,0);
\draw[fill=white, draw=black] (0.866,0.5) circle [radius=0.1];
\draw (0.866,0.5) node[left]{\small{threshold}};
\draw[fill=black, draw=black] (0.866,-0.5) circle [radius=0.1];
\draw (0.866,-0.5) node[left]{\small{rest}};
\draw[black, ->, thick] (0.5,0.866)to[out=150,in=0](0,1);
\draw[black, ->, thick] (-0.5,-0.866)to[out=-30,in=180](0,-1);
\end{tikzpicture}
\endminipage
\minipage{0.33\linewidth}
\centering
\begin{tikzpicture}
\draw[thick] (0,0) circle [radius=1];
\draw (0,-1.2) node[below]{Bifurcation: $I = 0$};
\draw (1.1,0) node[right]{0};
\draw[fill=red, red] (-1,0) circle [radius=0.05];
\draw (-1,0) node[right]{spike};
\draw (-1,0) node[left]{$\pi$};
\draw[fill=gray, draw=black] (1,0) circle [radius=0.1];
\draw[black, ->, thick] (0.5,0.866)to[out=150,in=0](0,1);
\draw[black, ->, thick] (-0.5,-0.866)to[out=-30,in=180](0,-1);
\end{tikzpicture}
\endminipage
\minipage{0.33\linewidth}
\centering
\begin{tikzpicture}
\draw[thick] (0,0) circle [radius=1];
\draw (0,-1.2) node[below]{Periodic regime: $I > 0$};
\draw (-1,0) node[left]{$\pi$};
\draw (1,0) node[right]{0};
\draw[fill=black, black] (1,0) circle [radius=0.025];
\draw[fill=red, red] (-1,0) circle [radius=0.05];
\draw (-1,0) node[right]{spike};
\draw[black, dotted] (0,0)to(1,0);
\draw(0,0) node[above]{$\theta$};
\draw[black, dotted] (0,0)to(0.866,0.5);
\draw[black, ->, thick] (0.5,0.866)to[out=150,in=0](0,1);
\draw[black, ->, thick] (-0.5,-0.866)to[out=-30,in=180](0,-1);
\end{tikzpicture}
\endminipage
\label{fig:thetaneuronbifurcationtikz}
\end{figure}
\end{frame}
%\begin{frame}
%\frametitle{Response}
%\tabitem Formulate bifurcations in terms of spiking frequency or phase angle
%\begin{figure}[H]
%\centering
%\includegraphics[width = \textwidth]{../Figures/ThetaNeuronfIandPRC.pdf}
%\label{fig:ThetaNeuronfIandPRC}
%\end{figure}
%\end{frame}
\section{\theory Network Topologies}
\begin{frame}
\frametitle{Three basic networks}
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth]{../Figures/Distributions/1D.pdf}
\label{fig:1Dpdfs}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Networks of Theta neurons}
\tabitem For arbitrary network topology:
\begin{align*}
\dot{\theta}_{i} &=\left(1-\cos \theta_{i}\right)+\left(1+\cos \theta_{i}\right) \cdot \left[\eta_{i} + I_{i}(t)\right] \qquad \theta_i \in \T^N \\
I_{i}(t) &=\frac{\kappa}{\kmean} \sum_{j=1}^{N} A_{i j} \cdot \mathcal{P}_{n}(\theta_{j})
\end{align*}
\tabitem Capture average/mean synchronisation
\begin{align*}
Z(t) = \frac{1}{N} \sum_{j=1}^N e^{\ic\theta_j} \qquad Z \in \C
\end{align*}
\end{frame}
\section{\theory Mean Field Reduction}
\begin{frame}
\frametitle{Predict synchronisation dynamics}
The \MFR = solution for $Z(t)$\\
\tabitem Simple for fixed-degree networks: one equation\\[0.5cm]
\textsl{Solution?} Formulate $Z(t)$ per degree $z(\k,t)$!\\
\tabitem Fewer unique degrees than neurons \\
\tabitem = fewer equations! \\
%\tabitem $M_{\k} \ll N$ unique node degrees \\
%\tabitem Only $M_{\k}$ equations left \\
%\tabitem Weighed by $P(\k)$
\begin{align*}
\bar{Z}(t) &= \frac{1}{N} \sum_{\k} P(\k) z(\k, t) \qquad \bar{Z} \in \C \label{eq:OttAntonsenMeanField}
\end{align*}
%Problem: $M_{\k}$ still too large when $P(\k)$ is bivariate
\end{frame}
\begin{frame}
\frametitle{Fixed-degree networks}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{../Figures/PhaseSpace/MFRPSR.pdf}
\caption{PSR}
\label{fig:MFRPSR}
\end{subfigure} \hfill
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{../Figures/PhaseSpace/MFRPSS.pdf}
\caption{PSS}
\label{fig:MFRPSS}
\end{subfigure} \hfill
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{../Figures/PhaseSpace/MFRCPW.pdf}
\caption{CPW}
\label{fig:MFRCPW}
\end{subfigure}
\label{fig:macroscopicstatesfixeddegree}
\end{figure}
\end{frame}
\section{\mywork Mean Field Reductions for undirected graphs}
\begin{frame}
\frametitle{Goals}
$Z(t)$ can be measured and predicted: are they the same? \\
\begin{itemize}
\item Formulate directed networks \\
\item Construct adjacency matrix from degree distribution \\
\item Initial conditions
\item Compare directed and undirected networks
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Directed networks}
%Directed networks = bivariate degree distribution = asymmetric adjacency matrix \\
Sample degree vectors from bivariate distribution? Difficult!\\
\tabitem Use independent univariate distributions as marginals \\
\tabitem In- and out-degree vectors are a permutation of each other
\end{frame}
\begin{frame}
\frametitle{Directed networks}
\begin{figure}[ht]
\centering
\includegraphics[width = 0.95\textwidth]{../Figures/Distributions/2D.pdf}
\label{fig:2Ddistributions}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Adjacency matrix}
Find a probable solution by sampling from the in- and out-degrees
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth]{../Figures/Adjacency_matrices.pdf}
\label{fig:adjacencymatrices}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Initial conditions}
\begin{figure}[H]
\centering
\includegraphics[trim=0cm 8.5cm 0cm 0cm, clip=true, width = 0.75\textwidth]{../Figures/PhaseSpace/Mappings.pdf}
\label{fig:mappings}
\end{figure}
\end{frame}
%\begin{frame}
%\frametitle{Final conditions}
%\begin{figure}[H]
%\centering
%\includegraphics[width = \textwidth]{../Figures/Distributions/FinalConditions.pdf}
%\label{fig:FinalConditions}
%\end{figure}
%\end{frame}
\begin{frame}
\frametitle{Results}
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth, trim={0 3mm 0 3mm},clip]{../Figures/InspectMeanFieldFixedDegree.pdf}
\label{fig:InspectMeanFieldFixedDegree}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Results}
\begin{figure}[H]
\centering
\includegraphics[width = 0.45\textwidth]{../Figures/PhaseSpace/ScalefreeLimCycles.pdf}
\label{fig:InspectMeanFieldScaleFreePhaseSpace}
\end{figure}
\end{frame}
\section{\theory Hebbian Learning and Synaptic Plasticity}
\begin{frame}
\frametitle{Temporal Interpretation: \STDP}
\tabitem \textsl{Fire and wire}: correlate successive action potentials
\begin{align*}
\Delta \text{Synaptic strength } \sim \sum_{t_{j}^{f}, t_{i}^{n} \in \mathcal{T}} W\left(t_{j}^{f}-t_{i}^{n}\right)
\end{align*}
\tabitem \IP: adjust neuron sensitivity to incoming action potentials
\begin{figure}[H]
\centering
\includegraphics[width = \textwidth]{../Figures/Learning/LearningWindows.pdf}
\label{fig:LearningWindows}
\end{figure}
\end{frame}
\section{\mywork Emerging Network Topologies}
\begin{frame}
\frametitle{\STDP}
\begin{figure}[H]
\centering
\includegraphics[trim=0cm 24.7cm 0cm 8.4cm, clip=true, height = 0.455\textheight]{../Figures/Learning/STDP.pdf}
\label{fig:STDP}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[trim=0cm 7.2cm 0cm 26cm, clip=true, height = 0.45\textheight]{../Figures/Learning/STDP.pdf}
\label{fig:STDP}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{\STDP + \IP}
\begin{figure}[H]
\centering
\includegraphics[trim=0cm 28.5cm 0cm 7.1cm, clip=true, height = 0.475\textheight]{../Figures/Learning/STDPandIP.pdf}
\label{fig:STDP}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[trim=0cm 13.3cm 0cm 22.7cm, clip=true, height = 0.45\textheight]{../Figures/Learning/STDPandIP.pdf}
\label{fig:STDP}
\end{figure}
\end{frame}
\section{Conclusion}
\begin{frame}
\frametitle{Accomplishments}
\tabitem Built and simulated directed networks \\
\tabitem Compared simulation and prediction at every timepoint \\
\tabitem Network structure emerges from learning strategy \\
\tabitem Unification of dynamics \textsl{on} and \textsl{of} networks
\end{frame}
\begin{frame}
\frametitle{Future Work}
\tabitem Further investigation of initial and final conditions \\
\tabitem Computational challenges
\end{frame}
\documentclass[DIN, pagenumber=false, parskip=half]{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[margin=0pt, landscape]{geometry}
\usepackage{graphics}
\usepackage{color}
\usepackage{url}
\usepackage[
colorlinks=false,
pdftitle={Cheatsheet Template},
pdfauthor={Michael Mueller},
pdfsubject={Compilation of useful shortcuts.},
pdfkeywords={Random Software, Cheatsheet}
]{hyperref}
\setlength{\unitlength}{1mm}
\pagestyle{empty}
\setlength{\parindent}{0pt}
\definecolor{mygray}{gray}{.75}
\renewcommand{\dots}{\ \dotfill{}\ }
\title{Cheatsheet Template}
\author{Michael Mueller}
\date{\today}
\begin{document}
\begin{picture}(297,210)
\put(10,200){
\begin{minipage}[t]{85mm}
\section*{Cheatsheet Template}
\paragraph{Window manager control} \ \\
Client = Application window.\ \\
Mod4 + Ctrl + r\dots{}Restart awesome\\
Mod4 + Return\dots{}Start terminal in current tag\\
Mod4 + F1\dots{}Run terminal prompt\\
Mod4 + F4\dots{}Run Lua code prompt\\
%Mod4 + Ctrl + i\dots{}Print client class and instance\\
\paragraph{Clients} \ \\
Mod4 + Shift + r \dots{}Redraw the focused window\\
Mod4 + m\dots{}Maximize client\\
Mod4 + f\dots{}Set client fullscreen\\
Mod4 + Shift + c\dots{}Kill focused client\\ \\
Mod4 + 1\dots{}Go to Tag 1\\
Mod4 + Ctrl + 1-9\dots{}Toggle tag view\\
Mod4 + t\dots{}Mark a client\\
Mod4 + Shift + 1-9\dots{}Tag marked clients with tag\\
Mod4 + Shift + Ctrl + 1-9\dots{}Toggle tag on client\\
\paragraph{Mouse} \ \\
B1, B2, B3 = Mouse buttons 1--3.\ \\
Mod4 + B1 on tag\dots{}Tag client with this tag\\
Mod4 + B1 on client\dots{}Move window\\
Mod4 + B3 on tag\dots{}Toggle this tag for client\\
Mod4 + B3 on client\dots{}Resize window\\
B3 clicked on tag\dots{}Add tag to current view\\
\end{minipage}
}
\put(105,190.5){
\begin{minipage}[t]{85mm}
\paragraph{Navigation} \ \\
Mod4 + j\dots{}Focus next client\\
Mod4 + k\dots{}Focus previous client\\
Mod4 + u\dots{}Focus first urgent client\\
Mod4 + Left\dots{}View previous tag\\
Mod4 + Right\dots{}View next tag\\
Mod4 + 1-9\dots{}Switch to tag 1-9\\
Mod4 + Ctrl + j\dots{}Focus next screen\\
Mod4 + Ctrl + k\dots{}Focus previous screen\\
Mod4 + Esc\dots{}Focus previously selected tag set\\ \\
\paragraph{Layout modification} \ \\
Mod4 + Shift + k / j\dots{}Rotate clients around\\
Mod4 + h / l\dots{}Change master width by 5\%\\
Mod4 + Shift + h\dots{}Number of master windows +1\\
Mod4 + Shift + l\dots{}Number of master windows --1\\
Mod4 + Ctrl + h\dots{}Number of columns for non-master windows +1\\
Mod4 + Ctrl + l\dots{}Number of columns for non-master windows --1\\ \\
Mod4 + Space\dots{}Next layout\\
Mod4 + Shift + Space\dots{}Previous layout\\
Mod4 + Ctrl + Space\dots{}Floating master\\
Mod4 + Ctrl + Return\dots{}Swap focused client with master\\
\end{minipage}
}
\put(200,189){
\begin{minipage}[t]{85mm}
\paragraph{Important files} \ \\ \\
\texttt{\textasciitilde/.config/awesome/rc.lua}\\
\texttt{/etc/xdg/awesome/rc.lua}\\ \\
\paragraph{Links and information} \ \\ \\
\url{http://awesome.naquadah.org/}\\
\url{http://awesome.naquadah.org/wiki/}\\
\begin{picture}(0,10)
\linethickness{0.5mm}
\put(0,0){\color{mygray}\line(1,0){30}}
\end{picture}
\footnotesize{
Created by Michael M\"uller, 2010\\
\url{http://micha.elmueller.net/}\\
Released under the MIT license.\\
The \LaTeX{} source and license for\\
this sheet can be found at github:\\
\url{http://github.com/cmichi/latex-template-collection}.
}
\end{minipage}
}
\end{picture}
\end{document}
| {
"alphanum_fraction": 0.6376279192,
"avg_line_length": 26.6503496503,
"ext": "tex",
"hexsha": "6866c5943cfe9ecdab680ae9d6b602111e45262d",
"lang": "TeX",
"max_forks_count": 252,
"max_forks_repo_forks_event_max_datetime": "2022-03-10T12:11:32.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-02-10T08:15:54.000Z",
"max_forks_repo_head_hexsha": "57b6379fd4e44d506ae2516e94fc1f1bc129ffec",
"max_forks_repo_licenses": [
"Unlicense",
"MIT"
],
"max_forks_repo_name": "pumpkink/latex-template-collection",
"max_forks_repo_path": "cheatsheet/cheatsheet.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "57b6379fd4e44d506ae2516e94fc1f1bc129ffec",
"max_issues_repo_issues_event_max_datetime": "2017-05-04T16:54:59.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-12-10T09:55:23.000Z",
"max_issues_repo_licenses": [
"Unlicense",
"MIT"
],
"max_issues_repo_name": "pumpkink/latex-template-collection",
"max_issues_repo_path": "cheatsheet/cheatsheet.tex",
"max_line_length": 74,
"max_stars_count": 887,
"max_stars_repo_head_hexsha": "57b6379fd4e44d506ae2516e94fc1f1bc129ffec",
"max_stars_repo_licenses": [
"Unlicense",
"MIT"
],
"max_stars_repo_name": "pumpkink/latex-template-collection",
"max_stars_repo_path": "cheatsheet/cheatsheet.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T03:52:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-03T04:28:39.000Z",
"num_tokens": 1302,
"size": 3811
} |
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{url}
\usepackage[]{hyperref}
\usepackage{caption}
\usepackage{listings}
\usepackage{color}
\usepackage{pythonhighlight}
% *** GRAPHICS RELATED PACKAGES ***
%\usepackage[pdftex]{graphicx}
\usepackage{graphicx}
%\usepackage[dvips]{graphicx}
% to place figures on a fixed position
\usepackage{float}
\usepackage[margin=1in]{geometry}
\title{Hyperbolic Geometry of Complex Networks – Queuing Theory I. Home Assignment II.}
\author{Ferenc Nandor Janky - OA8AT9}
\date{}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
The task was to study Sections I, II, III, and IV.A of~\cite{HyperbolicGeoNetworks} and, in any programming language/tool, to generate a network according to the model described in the paper
(in Section IV.A.) with the following parameters: N=5000 (the number of nodes), R=14 (the radius of the disk on which the nodes are uniformly distributed). Then calculate numerically the empirical average degree, and plot on a log-log scale the empirical degree distribution of this generated network.
\section{Implementation}
The implementation of the graph generation and analysis has been done in Python language. The software is available on \url{https://github.com/fecjanky/QT_home_assignment/blob/master/ha2/toki2.py}.
The graph generation was implemented as described in~\cite{HyperbolicGeoNetworks}. The node density along the radial polar coordinate on the disc of radius \emph{R} followed an exponential distribution, $\rho(r) \simeq e^{r}$.
The connection probability was a step function of the hyperbolic distance $x$ between two nodes on the disc, given by $p(x) = \Theta(R - x)$.
For the implementation the following Python libraries have been utilized:
\begin{itemize}
\item \verb!matplotlib!, for creating the representation of the generated graph in polar coordinates and for plotting the degree distribution
\item \verb!networkx!, for graph analysis
\end{itemize}
The generator function for the points is shown in Listing~\ref{lst:python}. There was a small offset between the simulated and theoretical results (see Section~\ref{sect:metrics}), which could have been caused by a bias in the generation of random nodes.
\newpage
\begin{lstlisting}[style=mypython,caption={The function used for generating random nodes},label={lst:python}]
def lte(a, b):
return math.isclose(a, b) or a < b
# use rejection sampling to generate points with a given distribution
def generate_points(self, distribution=None):
points = []
if distribution is None:
distribution = lambda r: math.sinh(r) / (math.cosh(self.radius) - 1)
for i in range(0, self.nodecount):
azimuth = random.uniform(0, 2 * math.pi)
d_point = (random.uniform(0, self.radius), random.uniform(0, distribution(self.radius)))
while True:
d_accept = distribution(d_point[0])
if lte(d_point[1], d_accept):
break
d_point = (random.uniform(0, self.radius), random.uniform(0, distribution(self.radius)))
points.append(PolarPoint(radius=d_point[0], azimuth=azimuth))
return points
\end{lstlisting}
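For completeness, Listing~\ref{lst:edges} sketches one possible way of connecting the generated nodes: the hyperbolic distance between two points is computed from their polar coordinates, and an edge is added whenever that distance does not exceed \emph{R}, implementing the step-function connection probability $p(x) = \Theta(R - x)$. This is an illustrative reconstruction rather than the exact code of the linked implementation; the helper \texttt{hyperbolic\_distance}, the attribute names of \texttt{PolarPoint}, and the all-pairs loop are assumptions made for clarity.
\begin{lstlisting}[style=mypython,caption={Illustrative sketch (not part of the original implementation) of edge creation from hyperbolic distances},label={lst:edges}]
import math
import networkx as nx

def hyperbolic_distance(p1, p2):
    # Hyperbolic-plane distance (curvature -1) between two points
    # given in polar coordinates (radius, azimuth).
    dtheta = math.pi - abs(math.pi - abs(p1.azimuth - p2.azimuth))
    cosh_d = (math.cosh(p1.radius) * math.cosh(p2.radius)
              - math.sinh(p1.radius) * math.sinh(p2.radius) * math.cos(dtheta))
    return math.acosh(max(cosh_d, 1.0))

def build_graph(points, radius):
    # Connect every pair of nodes whose hyperbolic distance is at most R,
    # i.e. the connection probability p(x) = Theta(R - x).
    graph = nx.Graph()
    graph.add_nodes_from(range(len(points)))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if hyperbolic_distance(points[i], points[j]) <= radius:
                graph.add_edge(i, j)
    return graph
\end{lstlisting}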
\section{Results}
\subsection{Theoretical metrics}
To calculate the theoretical average degree the following equations were used from \cite{HyperbolicGeoNetworks}:
\begin{equation}
N = \nu\, e^{R/2} \;\rightarrow\; \nu \simeq 4.559, \quad \mbox{if } R=14 \mbox{ and } N=5000
\end{equation}
\begin{equation}
R = 2 \ln[8 N / (\pi \overline{k})] \;\rightarrow\; \overline{k} \simeq 11.61, \quad \mbox{if } R=14 \mbox{ and } N=5000
\end{equation}
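The numerical values above follow directly from these relations; as a sanity check, a few lines of Python (using only the standard library) reproduce them:
\begin{lstlisting}[style=mypython]
import math

N, R = 5000, 14
nu = N / math.exp(R / 2)                      # from N = nu * e^(R/2)
k_bar = 8 * N / (math.pi * math.exp(R / 2))   # from R = 2*ln(8N / (pi * k_bar))
print(round(nu, 3), round(k_bar, 2))          # prints 4.559 11.61
\end{lstlisting}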
\subsection{Empirical metrics}\label{sect:metrics}
The polar plot of the generated network can be seen in Figure~\ref{fig:graph}. The empirical average degree calculated for the generated graph was:
\begin{equation}
\overline{k}_{sim} = 11.8084
\end{equation}
The relative error between the average degree of the generated network and the theoretical average was:
\begin{equation}
\epsilon = \frac{\vert \overline{k} - \overline{k}_{sim} \vert}{\overline{k}} \times 100\% = 1.71\%
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figures/result_graph.png}
\caption{The polar plot of the generated graph according to the rules in \cite{HyperbolicGeoNetworks}}
\label{fig:graph}
\end{figure}
Figure~\ref{fig:graph_stats} shows the empirical degree distribution of the generated network on a log-log scale. It resembles the expected Poissonian distribution.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figures/result_graph_stats.png}
\caption{The plot on a log-log scale of the empirical degree distribution of the generated network}
\label{fig:graph_stats}
\end{figure}
The average degree as a function of radial distance from the center of the disc is illustrated in Figure~\ref{fig:graph_radius} alongside the theoretical curve. The slope of the empirical curve is similar; however, there was a constant offset between the two, which might have been caused by the bias in the random point generation.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figures/result_graph_radial_stats.png}
\caption{The plot of the average degree as a function of radial distance of the generated network}
\label{fig:graph_radius}
\end{figure}
\section{Conclusion}
As part of this homework, a random network was generated using hyperbolic geometry, based on \cite{HyperbolicGeoNetworks}, to study the structure and function of complex networks in purely geometric terms. The connection probability is a function of the hyperbolic distance between two nodes: whenever that distance is below a threshold, an edge is present between them, resulting in a structure similar to one obtained by specifying an edge probability for a random graph and generating edges accordingly.
The simulation results resembled the theoretical results, as well as those presented in \cite{HyperbolicGeoNetworks}, with around 2\% relative error.
\bibliographystyle{unsrt}
\bibliography{references}
\end{document} | {
"alphanum_fraction": 0.7525823112,
"avg_line_length": 43.9432624113,
"ext": "tex",
"hexsha": "08ec17f32aa412089b581f2617cb4930744af991",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cba71c2d3213b4af8f7dfe633eb83fcc0405ebd8",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "fecjanky/QT_home_assignment",
"max_forks_repo_path": "ha2/doc/ha2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cba71c2d3213b4af8f7dfe633eb83fcc0405ebd8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "fecjanky/QT_home_assignment",
"max_issues_repo_path": "ha2/doc/ha2.tex",
"max_line_length": 322,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cba71c2d3213b4af8f7dfe633eb83fcc0405ebd8",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "fecjanky/QT_home_assignment",
"max_stars_repo_path": "ha2/doc/ha2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1539,
"size": 6196
} |
%This is a template for producing LIPIcs conference volumes.
%The usage of this file together with lipicsmaster-v2019.cls should be
%straightforward. There is no separate documentation.
\documentclass[a4paper,UKenglish]{lipicsmaster-v2021}
%for A4 paper format use option "a4paper", for US-letter use option "letterpaper"
%for british hyphenation rules use option "UKenglish", for american hyphenation rules use option "USenglish"
%for producing a PDF according the PDF/A standard, add "pdfa"
%\graphicspath{{./graphics/}}%helpful if your graphic files are in another directory
\bibliographystyle{plainurl}% the mandatory bibstyle
\editor{John Q. Public}{Dummy University Computing Laboratory, [Address], Country \and Second affiliation, Country}{[email protected]}{https://orcid.org/0000-0002-1825-0097}%TODO mandatory, please use full name; only 1 editor per \editor macro; first two parameters are mandatory, other parameters can be empty.
\editor{Joan R. Access}{School of Computer Science, University City, Country2}{[email protected]}{}
%TODO please enter name of conference without abbreviate title
\EventTitle{42nd Conference on Very Important Topics}
%TODO please enter macros as provided by LIPIcs office%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\EventEditors{John Q. Open and Joan R. Access}
\EventNoEds{2}
\EventLongTitle{42nd Conference on Very Important Topics (CVIT 2016)}
\EventShortTitle{CVIT 2016}
\EventAcronym{CVIT}
\EventYear{2016}
\EventDate{December 24--27, 2016}
\EventLocation{Little Whinging, United~Kingdom}
\EventLogo{}
\SeriesVolume{42}
\ArticleNo{0}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\ccsdesc[100]{\textcolor{red}{Replace ccsdesc macro with valid one}}%TODO mandatory: Please choose ACM 2012 classifications from https://dl.acm.org/ccs/ccs_flat.cfm .
\ISBN{424-2-424242-424-2}%TODO mandatory: Please enter ISBN as provided by LIPIcs office
%\DatePublished{June 6, 2018}%leave empty; will be entered by LIPIcs office before publication
%\title{\huge 1st Conference on Very Important Topics} %use only to override default title
%\subtitle{CVIT 2016, December 24--27, 2016, Little Whinging, United Kingdom} %use only to override default subtitle
%\titlepagebottomline{LIPIcs -- Vol.~\printSeriesVolume{} -- \printEventShortTitle \qquad \qquad \qquad www.dagstuhl.de/lipics} %override text line in the yellow box on the titlepage
%\serieslogo{} %please provide filename (without suffix)
%\nolinenumbers %uncomment to disable line numbering
\begin{document}
\frontmatter
%%
%% PAGE 1: Cover page
%%%
\maketitle
%%
%% PAGE 2: Bibliographic data (editors, ACM classification, ISBN, license, DOI, ...)
%%
\begin{publicationinfo}%for page ii, please fill as required
\sffamily
\emph{Editors}
\printEditorLong
\bigskip
\bigskip
\emph{ACM Classification 2012}\\
\printSubjclass
\bigskip
\bigskip
{\Large\bfseries\sffamily \href{https://www.dagstuhl.de/dagpub/\printISBN}{ISBN \printISBN}}
\bigskip
\bigskip
\emph{Published online and open access by}\newline
Schloss Dagstuhl -- Leibniz-Zentrum f\"ur Informatik GmbH, Dagstuhl Publishing, Saarbr\"ucken/Wadern, Germany. Online available at \href{https://www.dagstuhl.de/dagpub/\printISBN}{https://www.dagstuhl.de/dagpub/\printISBN}.
\bigskip
\emph{Publication date}\newline
\printDatePublished
\bigskip
\bigskip
\emph{Bibliographic information published by the Deutsche Nationalbibliothek}\newline
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available in the Internet at \href{https://portal.dnb.de}{https://portal.dnb.de}.
\bigskip
\emph{License}\newline
This work is licensed under a Creative Commons Attribution 4.0 International license (CC-BY~4.0):\\ \href{https://creativecommons.org/licenses/by/4.0/legalcode}{https://creativecommons.org/licenses/by/4.0/legalcode}.\\
In brief, this license authorizes each and everybody to share (to copy, distribute and transmit) the work under the following conditions, without impairing or restricting the authors' moral rights:
\marginpar{\hspace*{0.2\marginparwidth}\includegraphics[width=0.75\marginparwidth]{cc-by.pdf}}
\begin{itemize}
\item Attribution: The work must be attributed to its authors.
\end{itemize}
\smallskip
The copyright is retained by the corresponding authors.
\bigskip
\bigskip
\bigskip
\bigskip
Digital Object Identifier: \href{https://doi.org/\printDOI}{\printDOI}
\vfill
\textbf{\href{https://www.dagstuhl.de/dagpub/\printISBN}{ISBN \printISBN}}\qquad \qquad \textbf{\href{https://www.dagstuhl.de/dagpub/1868-8969}{ISSN 1868-8969}} \hfill \textbf{\href{https://www.dagstuhl.de/lipics}{https://www.dagstuhl.de/lipics}}
%%
%% PAGE 3: LIPIcs series information
%%
\newpage
\ \\
\bigskip
\bigskip
\bigskip
{\Large LIPIcs -- Leibniz International Proceedings in Informatics}
\bigskip
LIPIcs is a series of high-quality conference proceedings across all fields in informatics.
LIPIcs volumes are published according to the principle of Open Access, i.e., they are available online and free of charge.
\bigskip
\bigskip
\bigskip
\emph{Editorial Board}
%LIPIcs Board members as of June 2021
\begin{itemize}
\item Luca Aceto (\emph{Chair}, Reykjavik University, IS and Gran Sasso Science Institute, IT)
\item Christel Baier (TU Dresden, DE)
\item Mikolaj Bojanczyk (University of Warsaw, PL)
\item Roberto Di Cosmo (Inria and Universit\'e de Paris, FR)
\item Faith Ellen (University of Toronto, CA)
\item Javier Esparza (TU M\"unchen, DE)
\item Daniel Kr\'al' (Masaryk University - Brno, CZ)
\item Meena Mahajan (Institute of Mathematical Sciences, Chennai, IN)
\item Anca Muscholl (University of Bordeaux, FR)
\item Chih-Hao Luke Ong (University of Oxford, GB)
\item Phillip Rogaway (University of California, Davis, US)
\item Eva Rotenberg (Technical University of Denmark, Lyngby, DK)
\item Raimund Seidel (Universit\"at des Saarlandes, Saarbr\"ucken, DE and Schloss Dagstuhl -- Leibniz-Zentrum f\"ur Informatik, Wadern, DE)
\end{itemize}
\bigskip
\bigskip
\bigskip
{\large\bfseries\sffamily \href{https://www.dagstuhl.de/dagpub/1868-8969}{ISSN 1868-8969}}
\bigskip
\bigskip
\bigskip
{\Large\bfseries\sffamily \href{https://www.dagstuhl.de/lipics}{https://www.dagstuhl.de/lipics}}
\vfill
%%
%% PAGE 4: (empty)
%%
\newpage
\thispagestyle{empty}
\ \\
\end{publicationinfo}
%%
%% PAGE 5 and more: TOC etc.
%%
%TODO please fill or comment out
\begin{dedication}
Insert dedication here.
\end{dedication}
\begin{contentslist}
%Please note, that the table of contents will be generated by the LIPIcs office based on the order provided in the Dagstuhl Submission System. So you can keep this section as is!!!
%To generate the table of contents copy all the .vtc files
%of the contributions to your working directory.
%For every contribution type a line
%\inputtocentry{dummycontribution}
%where the argument of \inputtocentry is the name of
%the vtc file without suffix.
%Alternatively write e.g.
\contitem
\title{Preface}
\author{John Q. Open}
\page{0:vii}
%\part{} %use if volume is divided in parts
\part{Regular Papers}
\contitem
\title{Mmmmm $\ldots$ donuts}
\author{Homer J. Simpson}
\page{1:1--1:23}
\inputtocentry{lipics-v2019-sample-article}
\end{contentslist}
%TODO please fill or comment out
\chapter{Preface}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut mattis
elementum fermentum. Pellentesque habitant morbi tristique senectus et
netus et malesuada fames ac turpis egestas. Nulla sapien magna,
bibendum in dictum sed, egestas vel purus. Pellentesque id ornare
lacus. Pellentesque justo elit, sodales a fringilla vitae, gravida sed
elit. Etiam turpis eros, tincidunt sit amet tempor sed, gravida quis
eros. Mauris et nunc enim. Ut congue rhoncus odio vitae lacinia. Nunc
placerat est eu eros dignissim ac tristique nisi placerat. Fusce et
hendrerit justo. Nunc feugiat pulvinar nunc ac tincidunt. Donec eu
pharetra metus. Cras malesuada ante accumsan purus dignissim
euismod. Curabitur risus ante, aliquet ut suscipit eget, vulputate non
ligula. Nullam eleifend malesuada est, nec adipiscing sapien eleifend
eget. Vestibulum fringilla diam id felis sagittis aliquet. Maecenas
sed metus vel dui vulputate pretium et id lacus. Ut eget libero augue,
ut aliquet orci. Integer sed nunc id massa interdum imperdiet. Nunc ut
consequat eros.
Morbi hendrerit dapibus augue. Proin sed adipiscing ipsum. Ut
vulputate ultricies diam id dictum. Nunc pharetra imperdiet
sodales. Morbi convallis massa vitae justo adipiscing nec congue nunc
fringilla. Pellentesque eu rhoncus ligula. Nam eros neque, hendrerit a
rhoncus vel, molestie nec nulla. Ut iaculis vulputate mauris, non
scelerisque dolor fringilla sit amet. Pellentesque habitant morbi
tristique senectus et netus et malesuada fames ac turpis
egestas. Quisque vitae accumsan risus. Sed molestie dictum
venenatis. Sed odio justo, gravida et vulputate eu, congue et lorem.
%optional
\begin{participants}
\chapter[Authors]{List of Authors}
%use \participant for every author, eg.:
\participant John Q. Public\\
Dummy University Computing Laboratory\\
Address, Country\\
[email protected]
\end{participants}
\end{document}
| {
"alphanum_fraction": 0.7713913892,
"avg_line_length": 33.6532846715,
"ext": "tex",
"hexsha": "845347bf70ed39ec0962575b470360f1f6892e52",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-04-26T07:18:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-04-02T19:43:59.000Z",
"max_forks_repo_head_hexsha": "638327e7a2b4e347a4a6e88b740e2c27f3caa0f6",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "dagstuhl-publishing/styles",
"max_forks_repo_path": "LIPIcs/editors/lipics-v2021-sample-frontmatter.tex",
"max_issues_count": 20,
"max_issues_repo_head_hexsha": "638327e7a2b4e347a4a6e88b740e2c27f3caa0f6",
"max_issues_repo_issues_event_max_datetime": "2022-03-25T14:21:55.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-09-18T06:22:44.000Z",
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "dagstuhl-publishing/styles",
"max_issues_repo_path": "LIPIcs/editors/lipics-v2021-sample-frontmatter.tex",
"max_line_length": 318,
"max_stars_count": 14,
"max_stars_repo_head_hexsha": "638327e7a2b4e347a4a6e88b740e2c27f3caa0f6",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "dagstuhl-publishing/styles",
"max_stars_repo_path": "LIPIcs/editors/lipics-v2021-sample-frontmatter.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-06T16:42:28.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-02-27T05:37:34.000Z",
"num_tokens": 2646,
"size": 9221
} |
\hypertarget{section}{%
\section{1}\label{section}}
\bibleverse{1} Paul, and Silvanus, and Timotheus, unto the church of the
Thessalonians which is in God the Father and in the Lord Jesus Christ:
Grace be unto you, and peace, from God our Father, and the Lord Jesus
Christ. \bibleverse{2} We give thanks to God always for you all, making
mention of you in our prayers; \bibleverse{3} Remembering without
ceasing your work of faith, and labour of love, and patience of hope in
our Lord Jesus Christ, in the sight of God and our Father;
\bibleverse{4} Knowing, brethren beloved, your election of God.
\bibleverse{5} For our gospel came not unto you in word only, but also
in power, and in the Holy Ghost, and in much assurance; as ye know what
manner of men we were among you for your sake. \bibleverse{6} And ye
became followers of us, and of the Lord, having received the word in
much affliction, with joy of the Holy Ghost: \bibleverse{7} So that ye
were ensamples to all that believe in Macedonia and Achaia.
\bibleverse{8} For from you sounded out the word of the Lord not only in
Macedonia and Achaia, but also in every place your faith to God-ward is
spread abroad; so that we need not to speak any thing. \bibleverse{9}
For they themselves shew of us what manner of entering in we had unto
you, and how ye turned to God from idols to serve the living and true
God; \bibleverse{10} And to wait for his Son from heaven, whom he raised
from the dead, even Jesus, which delivered us from the wrath to come.
\hypertarget{section-1}{%
\section{2}\label{section-1}}
\bibleverse{1} For yourselves, brethren, know our entrance in unto you,
that it was not in vain: \bibleverse{2} But even after that we had
suffered before, and were shamefully entreated, as ye know, at Philippi,
we were bold in our God to speak unto you the gospel of God with much
contention. \bibleverse{3} For our exhortation was not of deceit, nor of
uncleanness, nor in guile: \bibleverse{4} But as we were allowed of God
to be put in trust with the gospel, even so we speak; not as pleasing
men, but God, which trieth our hearts. \bibleverse{5} For neither at any
time used we flattering words, as ye know, nor a cloke of covetousness;
God is witness: \bibleverse{6} Nor of men sought we glory, neither of
you, nor yet of others, when we might have been burdensome, as the
apostles of Christ. \bibleverse{7} But we were gentle among you, even as
a nurse cherisheth her children: \bibleverse{8} So being affectionately
desirous of you, we were willing to have imparted unto you, not the
gospel of God only, but also our own souls, because ye were dear unto
us. \bibleverse{9} For ye remember, brethren, our labour and travail:
for labouring night and day, because we would not be chargeable unto any
of you, we preached unto you the gospel of God. \bibleverse{10} Ye are
witnesses, and God also, how holily and justly and unblameably we
behaved ourselves among you that believe: \bibleverse{11} As ye know how
we exhorted and comforted and charged every one of you, as a father doth
his children, \bibleverse{12} That ye would walk worthy of God, who hath
called you unto his kingdom and glory.
\bibleverse{13} For this cause also thank we God without ceasing,
because, when ye received the word of God which ye heard of us, ye
received it not as the word of men, but as it is in truth, the word of
God, which effectually worketh also in you that believe. \bibleverse{14}
For ye, brethren, became followers of the churches of God which in
Judaea are in Christ Jesus: for ye also have suffered like things of
your own countrymen, even as they have of the Jews: \bibleverse{15} Who
both killed the Lord Jesus, and their own prophets, and have persecuted
us; and they please not God, and are contrary to all men:
\bibleverse{16} Forbidding us to speak to the Gentiles that they might
be saved, to fill up their sins alway: for the wrath is come upon them
to the uttermost.
\bibleverse{17} But we, brethren, being taken from you for a short time
in presence, not in heart, endeavoured the more abundantly to see your
face with great desire. \bibleverse{18} Wherefore we would have come
unto you, even I Paul, once and again; but Satan hindered us.
\bibleverse{19} For what is our hope, or joy, or crown of rejoicing? Are
not even ye in the presence of our Lord Jesus Christ at his coming?
\bibleverse{20} For ye are our glory and joy.
\hypertarget{section-2}{%
\section{3}\label{section-2}}
\bibleverse{1} Wherefore when we could no longer forbear, we thought it
good to be left at Athens alone; \bibleverse{2} And sent Timotheus, our
brother, and minister of God, and our fellowlabourer in the gospel of
Christ, to establish you, and to comfort you concerning your faith:
\bibleverse{3} That no man should be moved by these afflictions: for
yourselves know that we are appointed thereunto. \bibleverse{4} For
verily, when we were with you, we told you before that we should suffer
tribulation; even as it came to pass, and ye know. \bibleverse{5} For
this cause, when I could no longer forbear, I sent to know your faith,
lest by some means the tempter have tempted you, and our labour be in
vain. \bibleverse{6} But now when Timotheus came from you unto us, and
brought us good tidings of your faith and charity, and that ye have good
remembrance of us always, desiring greatly to see us, as we also to see
you: \bibleverse{7} Therefore, brethren, we were comforted over you in
all our affliction and distress by your faith: \bibleverse{8} For now we
live, if ye stand fast in the Lord. \bibleverse{9} For what thanks can
we render to God again for you, for all the joy wherewith we joy for
your sakes before our God; \bibleverse{10} Night and day praying
exceedingly that we might see your face, and might perfect that which is
lacking in your faith? \bibleverse{11} Now God himself and our Father,
and our Lord Jesus Christ, direct our way unto you. \bibleverse{12} And
the Lord make you to increase and abound in love one toward another, and
toward all men, even as we do toward you: \bibleverse{13} To the end he
may stablish your hearts unblameable in holiness before God, even our
Father, at the coming of our Lord Jesus Christ with all his saints.
\hypertarget{section-3}{%
\section{4}\label{section-3}}
\bibleverse{1} Furthermore then we beseech you, brethren, and exhort you
by the Lord Jesus, that as ye have received of us how ye ought to walk
and to please God, so ye would abound more and more. \bibleverse{2} For
ye know what commandments we gave you by the Lord Jesus. \bibleverse{3}
For this is the will of God, even your sanctification, that ye should
abstain from fornication: \bibleverse{4} That every one of you should
know how to possess his vessel in sanctification and honour;
\bibleverse{5} Not in the lust of concupiscence, even as the Gentiles
which know not God: \bibleverse{6} That no man go beyond and defraud his
brother in any matter: because that the Lord is the avenger of all such,
as we also have forewarned you and testified. \bibleverse{7} For God
hath not called us unto uncleanness, but unto holiness. \bibleverse{8}
He therefore that despiseth, despiseth not man, but God, who hath also
given unto us his holy Spirit. \bibleverse{9} But as touching brotherly
love ye need not that I write unto you: for ye yourselves are taught of
God to love one another. \bibleverse{10} And indeed ye do it toward all
the brethren which are in all Macedonia: but we beseech you, brethren,
that ye increase more and more; \bibleverse{11} And that ye study to be
quiet, and to do your own business, and to work with your own hands, as
we commanded you; \bibleverse{12} That ye may walk honestly toward them
that are without, and that ye may have lack of nothing.
\bibleverse{13} But I would not have you to be ignorant, brethren,
concerning them which are asleep, that ye sorrow not, even as others
which have no hope. \bibleverse{14} For if we believe that Jesus died
and rose again, even so them also which sleep in Jesus will God bring
with him. \bibleverse{15} For this we say unto you by the word of the
Lord, that we which are alive and remain unto the coming of the Lord
shall not prevent them which are asleep. \bibleverse{16} For the Lord
himself shall descend from heaven with a shout, with the voice of the
archangel, and with the trump of God: and the dead in Christ shall rise
first: \bibleverse{17} Then we which are alive and remain shall be
caught up together with them in the clouds, to meet the Lord in the air:
and so shall we ever be with the Lord. \bibleverse{18} Wherefore comfort
one another with these words.
\hypertarget{section-4}{%
\section{5}\label{section-4}}
\bibleverse{1} But of the times and the seasons, brethren, ye have no
need that I write unto you. \bibleverse{2} For yourselves know perfectly
that the day of the Lord so cometh as a thief in the night.
\bibleverse{3} For when they shall say, Peace and safety; then sudden
destruction cometh upon them, as travail upon a woman with child; and
they shall not escape. \bibleverse{4} But ye, brethren, are not in
darkness, that that day should overtake you as a thief. \bibleverse{5}
Ye are all the children of light, and the children of the day: we are
not of the night, nor of darkness. \bibleverse{6} Therefore let us not
sleep, as do others; but let us watch and be sober. \bibleverse{7} For
they that sleep sleep in the night; and they that be drunken are drunken
in the night. \bibleverse{8} But let us, who are of the day, be sober,
putting on the breastplate of faith and love; and for an helmet, the
hope of salvation. \bibleverse{9} For God hath not appointed us to
wrath, but to obtain salvation by our Lord Jesus Christ, \bibleverse{10}
Who died for us, that, whether we wake or sleep, we should live together
with him. \bibleverse{11} Wherefore comfort yourselves together, and
edify one another, even as also ye do.
\bibleverse{12} And we beseech you, brethren, to know them which labour
among you, and are over you in the Lord, and admonish you;
\bibleverse{13} And to esteem them very highly in love for their work's
sake. And be at peace among yourselves. \bibleverse{14} Now we exhort
you, brethren, warn them that are unruly, comfort the feebleminded,
support the weak, be patient toward all men. \bibleverse{15} See that
none render evil for evil unto any man; but ever follow that which is
good, both among yourselves, and to all men. \bibleverse{16} Rejoice
evermore. \bibleverse{17} Pray without ceasing. \bibleverse{18} In every
thing give thanks: for this is the will of God in Christ Jesus
concerning you. \bibleverse{19} Quench not the Spirit. \bibleverse{20}
Despise not prophesyings. \bibleverse{21} Prove all things; hold fast
that which is good. \bibleverse{22} Abstain from all appearance of evil.
\bibleverse{23} And the very God of peace sanctify you wholly; and I
pray God your whole spirit and soul and body be preserved blameless unto
the coming of our Lord Jesus Christ. \bibleverse{24} Faithful is he that
calleth you, who also will do it. \bibleverse{25} Brethren, pray for us.
\bibleverse{26} Greet all the brethren with an holy kiss.
\bibleverse{27} I charge you by the Lord that this epistle be read unto
all the holy brethren. \bibleverse{28} The grace of our Lord Jesus
Christ be with you. Amen.

The first epistle unto the Thessalonians was written from Athens.
| {
"alphanum_fraction": 0.7733052705,
"avg_line_length": 60.9786096257,
"ext": "tex",
"hexsha": "d19c40185f73987f41c6fef5358b06c764049c01",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "bibliadelpueblo/BibliaLibre",
"max_forks_repo_path": "Bibles/English.KingJames/out/tex/52-1 Thessalonians.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "bibliadelpueblo/BibliaLibre",
"max_issues_repo_path": "Bibles/English.KingJames/out/tex/52-1 Thessalonians.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "039ab9b18364ecade1d56695cb77c40ee62b1317",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "bibliadelpueblo/BibliaLibre",
"max_stars_repo_path": "Bibles/English.KingJames/out/tex/52-1 Thessalonians.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3150,
"size": 11403
} |
\subsubsection{Hand Crossbow}\label{weapon:handCrossbow}
Weapon, Hand Crossbow, One-Handed, Unusual, Ranged\\
Ammunition: Bolts\\
Size: S\\
Cost: 380 Gold
\textbf{Draw/Sheath} \\
2 AP draw, 4 AP sheath
\textbf{Shoot} \\
2 AP, DE to hit, Critical on 19 and 20, \passus{25} Reach\\
2d12 + \sfrac{1}{2}\texttimes DE Piercing Damage
\textbf{Reload} \\
4 AP from a back quiver\\
3 AP form a belt quiver\\
The hand crossbow has to be reloaded after every shot.\\
Reloading the Hand Crossbow is a two-handed activity.
| {
"alphanum_fraction": 0.73046875,
"avg_line_length": 26.9473684211,
"ext": "tex",
"hexsha": "7b66450add42cfd995682ba68307d9e8c23f57ac",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_forks_repo_path": "items/equipment/weapons/unusual/handcrossbow.tex",
"max_issues_count": 155,
"max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_issues_repo_path": "items/equipment/weapons/unusual/handcrossbow.tex",
"max_line_length": 59,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_stars_repo_path": "items/equipment/weapons/unusual/handcrossbow.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z",
"num_tokens": 177,
"size": 512
} |
% Template for PLoS% Version 3.5 March 2018
%
% % % % % % % % % % % % % % % % % % % % % %
%
% -- IMPORTANT NOTE
%
% This template contains comments intended
% to minimize problems and delays during our production
% process. Please follow the template instructions
% whenever possible.
%
% % % % % % % % % % % % % % % % % % % % % % %
%
% Once your paper is accepted for publication,
% PLEASE REMOVE ALL TRACKED CHANGES in this file
% and leave only the final text of your manuscript.
% PLOS recommends the use of latexdiff to track changes during review, as this will help to maintain a clean tex file.
% Visit https://www.ctan.org/pkg/latexdiff?lang=en for info or contact us at [email protected].
%
%
% There are no restrictions on package use within the LaTeX files except that
% no packages listed in the template may be deleted.
%
% Please do not include colors or graphics in the text.
%
% The manuscript LaTeX source should be contained within a single file (do not use \input, \externaldocument, or similar commands).
%
% % % % % % % % % % % % % % % % % % % % % % %
%
% -- FIGURES AND TABLES
%
% Please include tables/figure captions directly after the paragraph where they are first cited in the text.
%
% DO NOT INCLUDE GRAPHICS IN YOUR MANUSCRIPT
% - Figures should be uploaded separately from your manuscript file.
% - Figures generated using LaTeX should be extracted and removed from the PDF before submission.
% - Figures containing multiple panels/subfigures must be combined into one image file before submission.
% For figure citations, please use "Fig" instead of "Figure".
% See http://journals.plos.org/plosone/s/figures for PLOS figure guidelines.
%
% Tables should be cell-based and may not contain:
% - spacing/line breaks within cells to alter layout or alignment
% - do not nest tabular environments (no tabular environments within tabular environments)
% - no graphics or colored text (cell background color/shading OK)
% See http://journals.plos.org/plosone/s/tables for table guidelines.
%
% For tables that exceed the width of the text column, use the adjustwidth environment as illustrated in the example table in text below.
%
% % % % % % % % % % % % % % % % % % % % % % % %
%
% -- EQUATIONS, MATH SYMBOLS, SUBSCRIPTS, AND SUPERSCRIPTS
%
% IMPORTANT
% Below are a few tips to help format your equations and other special characters according to our specifications. For more tips to help reduce the possibility of formatting errors during conversion, please see our LaTeX guidelines at http://journals.plos.org/plosone/s/latex
%
% For inline equations, please be sure to include all portions of an equation in the math environment. For example, x$^2$ is incorrect; this should be formatted as $x^2$ (or $\mathrm{x}^2$ if the romanized font is desired).
%
% Do not include text that is not math in the math environment. For example, CO2 should be written as CO\textsubscript{2} instead of CO$_2$.
%
% Please add line breaks to long display equations when possible in order to fit size of the column.
%
% For inline equations, please do not include punctuation (commas, etc) within the math environment unless this is part of the equation.
%
% When adding superscript or subscripts outside of brackets/braces, please group using {}. For example, change "[U(D,E,\gamma)]^2" to "{[U(D,E,\gamma)]}^2".
%
% Do not use \cal for caligraphic font. Instead, use \mathcal{}
%
% % % % % % % % % % % % % % % % % % % % % % % %
%
% Please contact [email protected] with any questions.
%
% % % % % % % % % % % % % % % % % % % % % % % %
\documentclass[10pt,letterpaper]{article}
\usepackage[top=0.85in,left=2.75in,footskip=0.75in]{geometry}
% amsmath and amssymb packages, useful for mathematical formulas and symbols
\usepackage{amsmath,amssymb}
% Use adjustwidth environment to exceed column width (see example table in text)
\usepackage{changepage}
% Use Unicode characters when possible
\usepackage[utf8x]{inputenc}
% textcomp package and marvosym package for additional characters
\usepackage{textcomp,marvosym}
% cite package, to clean up citations in the main text. Do not remove.
\usepackage{cite}
% Use nameref to cite supporting information files (see Supporting Information section for more info)
\usepackage{nameref,hyperref}
% line numbers
\usepackage[right]{lineno}
% ligatures disabled
\usepackage{microtype}
\DisableLigatures[f]{encoding = *, family = * }
% color can be used to apply background shading to table cells only
\usepackage[table]{xcolor}
% array package and thick rules for tables
\usepackage{array}
% text style packages
\usepackage{soul}
% package including float barrier
\usepackage{placeins}
%comments (Yarden) margin was changed to accomodate longer text in todos
\usepackage[colorinlistoftodos, textwidth=2in]{todonotes}
%\usepackage[disable]{todonotes}
\reversemarginpar
\setlength{\marginparwidth}{2in}
% create "+" rule type for thick vertical lines
\newcolumntype{+}{!{\vrule width 2pt}}
% create \thickcline for thick horizontal lines of variable length
\newlength\savedwidth
\newcommand\thickcline[1]{%
\noalign{\global\savedwidth\arrayrulewidth\global\arrayrulewidth 2pt}%
\cline{#1}%
\noalign{\vskip\arrayrulewidth}%
\noalign{\global\arrayrulewidth\savedwidth}%
}
% \thickhline command for thick horizontal lines that span the table
\newcommand\thickhline{\noalign{\global\savedwidth\arrayrulewidth\global\arrayrulewidth 2pt}%
\hline
\noalign{\global\arrayrulewidth\savedwidth}}
% Remove comment for double spacing
%\usepackage{setspace}
%\doublespacing
% Text layout
\raggedright
\setlength{\parindent}{0.5cm}
\textwidth 5.25in
\textheight 8.75in
\usepackage{ragged2e}
\usepackage[rightcaption]{sidecap}
\usepackage{floatrow}
% Bold the 'Figure #' in the caption and separate it from the title/caption with a period
% Captions will be left justified
\usepackage[aboveskip=1pt,labelfont=bf,labelsep=period,justification=raggedright,singlelinecheck=off]{caption}
\renewcommand{\figurename}{Fig}
\newcommand{\beginsupplement}{%
\setcounter{table}{0}
\renewcommand{\thetable}{S\arabic{table}}%
\setcounter{figure}{0}
\renewcommand{\thefigure}{S\arabic{figure}}%
}
% Use the PLoS provided BiBTeX style
\bibliographystyle{plos2015.bst}
% Remove brackets from numbering in List of References
\makeatletter
\renewcommand{\@biblabel}[1]{\quad#1.}
\makeatother
% Header and Footer with logo
\usepackage{lastpage,fancyhdr,graphicx}
\graphicspath{ {./doc/Figures/} }
\usepackage{epstopdf}
%\pagestyle{myheadings}
\pagestyle{fancy}
\fancyhf{}
%\setlength{\headheight}{27.023pt}
%\lhead{\includegraphics[width=2.0in]{PLOS-submission.eps}}
\rfoot{\thepage/\pageref{LastPage}}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrule}{\hrule height 2pt \vspace{2mm}}
\fancyheadoffset[L]{2.25in}
\fancyfootoffset[L]{2.25in}
\lfoot{\today}
%% Include all macros below
\newcommand{\todoycgr}[1]{
\todo[bordercolor=green, color=green!40, size=\small]{#1}
}
\newcommand{\todoycpu}[1]{
\todo[bordercolor=purple, color=purple!40, size=\small]{#1}
}
\newcommand{\tododnor}[1]{
\todo[bordercolor=orange, color=orange!40, size=\small]{#1}
}
\newcommand{\todotg}[1]{
\todo[bordercolor=yellow, color=yellow!40, size=\small]{#1}
}
%% END MACROS SECTION
\begin{document}
\vspace*{0.2in}
% Title must be 250 characters or less.
\begin{flushleft}
{\Large
\textbf\newline{TweetyNet: A neural network that enables high-throughput, automated annotation of birdsong}
% Please use "sentence case" for title and headings (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns).
}
\newline
% Insert author names, affiliations and corresponding author email (do not include titles, positions, or degrees).
\\
Yarden Cohen\textsuperscript{1\Yinyang*},
David Nicholson\textsuperscript{2\Yinyang},
Alexa Sanchioni\textsuperscript{1},
Emily K. Mallaber\textsuperscript{1},
Viktoriya Skidanova\textsuperscript{1},
Timothy J. Gardner\textsuperscript{3\ddag}
\\
\bigskip
\textbf{1} Biology department, Boston University, Boston, MA, USA
\\
\textbf{2} Biology department, Emory University, Atlanta, GA, USA
\\
\textbf{3} Phil and Penny Knight Campus for Accelerating Scientific Impact, University of Oregon, Eugene, OR, USA
\\
\bigskip
% Insert additional author notes using the symbols described below. Insert symbol callouts after author names as necessary.
%
% Remove or comment out the author notes below if they aren't used.
%
% Primary Equal Contribution Note
\Yinyang These authors contributed equally to this work.
% Additional Equal Contribution Note
% Also use this double-dagger symbol for special authorship notes, such as senior authorship.
% \ddag These authors also contributed equally to this work.
% Current address notes
% \textcurrency Current Address: Dept/Program/Center, Institution Name, City, State, Country % change symbol to "\textcurrency a" if more than one current address note
% \textcurrency b Insert second current address
% \textcurrency c Insert third current address
% Deceased author note
% \dag Deceased
% Group/Consortium Author Note
% \textpilcrow Membership list can be found in the Acknowledgments section.
% Use the asterisk to denote corresponding authorship and provide email address in note below.
* [email protected]
\ddag [email protected]
% \todoycgr{Yarden Cohen, GREEN - small comments that are not critical}
% \todoycpu{Yarden Cohen, PURPLE, large comments comments that are important}
% \tododnor{David Nicholson, ORANGE, comment}
% \todotg{Tim G, YELLOW, comment}
\end{flushleft}
\justify
% Please keep the abstract below 300 words
\section*{Abstract}
Songbirds have long been studied as a model system of sensory-motor learning. Many analyses of birdsong require time-consuming manual annotation of the individual elements of song, known as syllables or notes. Here we describe the first automated algorithm for birdsong annotation that is applicable to complex song such as canary song. We developed a neural network architecture, “TweetyNet”, that is trained with a small amount of hand-labeled data using supervised learning methods. We first show that TweetyNet achieves significantly lower error on Bengalese finch song than a similar method, using less training data, and that it maintains low error rates across days. Applied to canary song, TweetyNet achieves fully automated annotation, accurately capturing the complex statistical structure previously discovered in a manually annotated dataset. We conclude that TweetyNet will make it possible to ask a wide range of new questions focused on complex songs for which manual annotation was impractical.
% Please keep the Author Summary between 150 and 200 words
% Use first person. PLOS ONE authors please skip this step.
% Author Summary not valid for PLOS ONE submissions.
% \section*{Author summary}
% TBD \todoycpu{suggest not including this section for biorxiv - it is journal specific}
\linenumbers
% Use "Eq" instead of "Equation" for equation citations.
\section*{Introduction}
\label{intro}
Songbirds provide an excellent model system for investigating sensorimotor learning \cite{mooney_neurobiology_2009}.
Like many motor skills,
birdsong consists of highly stereotyped gestures executed in a sequence \cite{fee_songbird_2010}.
In this and many other ways, birdsong resembles speech:
song is learned by juveniles from a tutor, like babies learning to talk \cite{brainard_what_2002}.
A key advantage of songbirds as a model system for studying vocal learning is that birds sing spontaneously,
often producing hundreds or thousands of song bouts a day.
This provides a detailed readout of how song is acquired during development,
and how this skilled behavior is maintained in adulthood.
Leveraging the amount of data that songbirds produce requires
methods for high-throughput automated analyses.
For example, automated methods for measuring similarity of juvenile and tutor song
across development \cite{tchernichovski_procedure_2000,mets_automated_2018}
led to important advances in understanding the behavioral
\cite{tchernichovski_dynamics_2001,mets_learning_2019}
and genetic \cite{mets_genetic_2018} bases of how vocalizations are learned.
These examples demonstrate how automated methods that enable analysis of
large-scale behavioral datasets
contribute to realizing the potential of songbirds as a model system.
However, this potential to address central questions of sensorimotor learning
is currently hindered by a lack of high-throughput automated methods
for scaling up other types of analyses.
The central issue is that many analyses require researchers to annotate song.
Annotation is a time-consuming process done by hand
(typically with GUI-based applications, e.g., Praat, Audacity, Chipper \cite{noauthor_praat_nodate,noauthor_audacity_nodate, searfoss2020chipper}).
An example of Bengalese finch song annotated with a GUI is shown in Fig.~\ref{fig0}.
Researchers annotate song by dividing it up into segments (red lines in Fig.~\ref{fig0}),
often referred to as syllables or notes,
and assigning labels to those segments (letters in Fig.~\ref{fig0}).
Annotation makes several types of analyses possible.
For example, annotation is required to build statistical models of syntax
\cite{markowitz_long-range_2013,jin2011compact,berwick2011songs,hedley2016complexity},
to fit computational models of motor learning that
precisely quantify how single syllables change over the course of an experiment
\cite{sober2009adult,sober2012vocal},
and to relate behavior to neural activity
\cite{wohlgemuth_linked_2010,aronov_specialized_2008,hahnloser_ultra-sparse_2002}.
Annotating song greatly increases our ability to leverage songbirds as a model system
when answering questions about how the brain produces syntax observed in
sequenced motor skills, and how the brain learns to adaptively control muscles.
\begin{figure}[!ht]
\includegraphics[scale=1.0]{Figures/fig0/annotfig.png}
\caption{{\bf Annotation of birdsong.}
\textbf{A.} Spectrogram showing a brief clip of Bengalese finch song
with different syllable types.
\textbf{B.} Text labels over red segments are applied by human annotators to assign those segments to various syllable classes.
\textbf{C.} Segments were extracted from song by finding continuous periods above a fixed amplitude threshold.
Red arrow to left of panel \textbf{C} indicates the user-defined amplitude threshold.
}
\label{fig0}
\end{figure}
Previous work has been done on automating annotation,
as we briefly review below in \nameref{Related work},
but these methods are challenged by the variable song of some species.
To illustrate these challenges, Fig~\ref{fig1}A-C presents examples of annotated songs from different species.
When a species' song consists of just a few syllables sung repeatedly in a fixed motif, methods based on template matching or
other algorithms (see \nameref{Related work} below) can be applied.
This is true for zebra finches, as can be seen in a song from one individual shown in Fig~\ref{fig1}A.
However, many species have songs that are more complex than the stereotyped motif of zebra finches.
Complex songs can contain a large vocabulary of syllable types arranged in multiple motifs or phrases,
with phrases sequenced according to complex transition statistics. For example, Bengalese
finch song contains ``branch points'', where a given syllable may transition to more than one other class of
syllable. An example of a branch point is indicated above the spectrogram in Fig~\ref{fig1}B. In addition,
Bengalese finch song can contain syllables that repeat, with the number of repeats varying from rendition to
rendition. Both branch points and repeats prevent existing algorithms from effectively annotating Bengalese
finch song (Fig~\ref{fig1}E). Canary song is even more complex (Fig~\ref{fig1}C). Some individuals may have as many as 50 unique classes of syllables in their repertoire. Bouts of canary song can last more than a minute instead of a few seconds (Fig~\ref{fig1}D). These long songs contain individual syllable types that can be very short, under 10ms, or very long, ranging up to 500ms (Fig~\ref{fig1}F). Some syllables are very quiet, and others loud.
Because of this extreme range of amplitude, common methods for segmenting audio of song into syllables can fail.
Segments are typically defined as periods where the smoothed sound envelope or other song-related acoustic features \cite{tchernichovski_procedure_2000} stay above some threshold, indicated by the dashed lines in Fig~\ref{fig2}. In the case of canary song, if sound energy or other acoustic features are filtered on timescales short enough to accurately segment the shortest syllables, then the longest syllables will be subdivided. This problem is also commonly encountered when analyzing the variable songs of young zebra finches.
Fig~\ref{fig2} illustrates how canary song is difficult to segment in an automated manner.
Finally, canary song has a hierarchical structure where syllables occur in trilled repetitions, called phrases, that themselves obey long-range syntax rules \cite{markowitz_long-range_2013,gardner_freedom_2005}. Phrases can differ in duration depending on the type of syllable being repeated, and, similarly, inter-syllable silent gaps vary widely in duration (Fig~\ref{suppfig_canary_phrases_diff_gaps}). Because of all this complexity, there are currently no automated methods for accurate annotation of canary song.
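To make the conventional segmentation approach concrete, the following sketch (our illustration, not code taken from any of the cited tools) smooths the amplitude envelope with a moving average and marks onsets and offsets wherever the envelope crosses a fixed threshold; the smoothing window and threshold are placeholder parameters. It is exactly this kind of fixed-threshold logic that breaks down for song with the extreme amplitude and duration ranges described above.
\begin{verbatim}
import numpy as np

def segment_by_threshold(audio, samp_freq, threshold, smooth_ms=2.0):
    # Smooth the rectified waveform with a moving average
    # (placeholder window length), then find runs above threshold.
    win = max(1, int(samp_freq * smooth_ms / 1000))
    envelope = np.convolve(np.abs(audio), np.ones(win) / win, mode="same")
    above = envelope > threshold
    crossings = np.diff(above.astype(int))
    onsets = np.nonzero(crossings == 1)[0] + 1
    offsets = np.nonzero(crossings == -1)[0] + 1
    if above[0]:
        onsets = np.insert(onsets, 0, 0)
    if above[-1]:
        offsets = np.append(offsets, len(audio))
    return onsets / samp_freq, offsets / samp_freq  # times in seconds
\end{verbatim}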
\begin{figure}[!ht]
\includegraphics[scale=0.725]{Figures/fig1/Figure1_v4.png}
\caption{{\bf The challenge of annotating complex songs.}
\textbf{A.} The zebra finch repeating motif allows annotation by matching its template spectrogram without segmenting different syllables (colored bars).
\textbf{B.} Bengalese finch songs segmented to syllables shows variable transitions and changing numbers of syllable repeats.
\textbf{C.} A third of one domestic canary song of median duration segmented to syllables reveals repetitions (phrase) structure.
\textbf{D.} The median, 0.25, and 0.75 quantiles of song durations (x-axis) and number of syllables per song (y-axis) for two canary strains, Bengalese finches, and zebra finches (color coded).
\textbf{E.} Variable songs are not suited for template matching. Songs contain repeating sequences of syllables, but because of sequence variability, songs with more syllables (x-axis) share smaller sequence fractions (y-axis).
\textbf{F.} Distributions of syllable duration for one domestic canary. The bird had 20 different syllable types (x-axis, ordered by mean syllable duration). Box plot shows median, 0.25 and 0.75 quantiles of syllable durations. Whiskers show the entire range.}
\label{fig1}
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=0.2]{Figures/fig2/fig2_v2.png}
\caption{{\bf Examples of failure to segment canary song.}
\textbf{A.} Several seconds of domestic canary song, presented as a spectrogram, beneath a plot of a band-pass filtered sound amplitude. To segment song, an amplitude threshold can be taken, marked by the dashed line on the amplitude trace, and then an automated program finds continuous segments of above-threshold amplitude and marks the onset and offset times of those segments (green, red lines in the spectrogram panel).
\textbf{B.} Focusing on three examples (a-c, matching panel A), segmenting by threshold crossing with a fixed filtering bandwidth does not work well for canaries. Above-threshold amplitudes are shown as bold colored lines and reveal that syllables of type 'a' are broken into two components while syllables of type 'c' are not separated by low amplitude.}
\label{fig2}
\end{figure}
\subsection*{Proposed Method and Related Work}
\label{Related work}
Previous work to automate annotation, referenced above, is briefly reviewed here.
The crucial point is that none of these methods work for canary song, for the
reasons outlined and demonstrated in Figs.~\ref{fig1} and~\ref{fig2}, necessitating the
development of an algorithm like the one we present.
However, for birdsong that consists largely of a single fixed motif, like that of zebra finches,
several methods have been widely used, including semi-automatic clustering methods
\cite{burkett2015voice,daou2012computational},
and template matching \cite{anderson1996template,yamahachi_undirected_2020,pearre_fast_2017}.
Several studies have also applied supervised learning algorithms
to annotation, such as Hidden Markov Models \cite{kogan1998automated},
k-Nearest Neighbors \cite{songbrowser},
and support vector machines \cite{tachibana2014semi}.
These algorithms can annotate more variable song with branch points and repeats,
like that of Bengalese finches,
but they all require segmenting song to extract the engineered features
used to train the algorithms (e.g. acoustic parameters like pitch and duration).
To our knowledge there has been no large-scale comparison
of performance of these different algorithms,
but at least one study suggests they may not generalize well across songs of different
individuals \cite{nicholson2016comparison}.
Additionally, feature extraction can fail if segmentation is noisy,
e.g. because of changes in audio equipment set-up, background noises, etc.
Here again we stress that canary song exhibits wide ranges in amplitude,
and often requires annotators to set multiple thresholds to successfully extract segments.
These factors contribute to the lack of automated algorithms for annotating canary song.
Given these issues, we sought to develop an algorithm for automated annotation
that (1) can learn features from data, and
(2) does not require segmented syllables to predict annotations.
To meet both these criteria, we developed an artificial neural network
that we call TweetyNet (Fig~\ref{fig_tweetynet_architecture}).
TweetyNet takes as input windows from spectrograms of song
and produces labels for each time bin of that spectrogram window.
TweetyNet requires no pre-processing of song spectrograms; most importantly, segmentation of song into syllables is not needed. Silent gaps between syllables are labeled in the training data, and time bins given this silent label ('Unl.' in Fig~\ref{fig_tweetynet_architecture}) define the gaps between syllables when TweetyNet inference is applied to a new song.
Essentially, the network combines two types of layers found in neural networks: (1) convolutional layers, common in computer vision tasks for learning features of images \cite{goodfellow_deep_2016,farabet_learning_2013,krizhevsky_imagenet_2012}, and (2) recurrent layers, often used to predict sequences \cite{graves_supervised_2012}. A recurrent layer is a natural choice because the input spectrogram is defined by two axes (time and frequency) with very different correlation structure. Specifically, the temporal dimension of songbird vocalization, like music and environmental noises, contains regularities on multiple time scales that are unrelated to the regularities along the frequency axis. The bidirectional LSTM (Long Short-Term Memory) recurrent layer is designed to capture these temporal correlations \cite{bock_polyphonic_2012-1,parascandolo_recurrent_2016}.
To predict annotation, we feed consecutive windows from spectrograms to trained networks and
then concatenate the output vectors of labeled timebins.
Finally, we find uninterrupted runs of a single syllable label in this framewise classification and treat them as annotated syllable segments. As discussed below, this final step can include a ``cleanup'' step that rejects segments shorter than a minimum syllable duration and/or assigns a single label, by majority vote, to each run of consecutive time bins not labeled as silence (c.f. examples at the bottom of Fig~\ref{fig_tweetynet_architecture}). In the rest of the results below we show that this simple method, trained end-to-end,
provides robust predictions of segment onsets, offsets, and labels.
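The conversion from framewise classification to annotated segments can be illustrated with a short sketch. The Python code below is a simplified stand-in rather than the exact implementation used in this work; the 2.7 ms bin duration matches the value reported in \nameref{Methods}, and \texttt{net} is a placeholder for a trained TweetyNet model.
\begin{verbatim}
import numpy as np

def frames_to_segments(frame_labels, bin_dur, silence=0):
    """Collapse per-time-bin labels into (onset_s, offset_s, label) segments,
    skipping runs of the silent / unlabeled class."""
    segments, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            if frame_labels[start] != silence:
                segments.append((start * bin_dur, i * bin_dur, frame_labels[start]))
            start = i
    return segments

# Hypothetical usage: classify consecutive spectrogram windows, concatenate
# the per-bin label vectors, then extract labeled segments.
# frame_labels = np.concatenate([net(w).argmax(axis=1) for w in windows])
# segments = frames_to_segments(frame_labels, bin_dur=0.0027)
\end{verbatim}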
\begin{figure}[!ht]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top}}}]{figure}[\FBwidth]
{\includegraphics[scale=0.66]{figures/mainfig_tweetynet_architecture_and_basic_operation/mainfig_tweetynet_architecture_operations_and_post_processing.png}}
{\caption{{\bf TweetyNet architecture and operation.}
TweetyNet takes as input a window,
specified in time bins, from a spectrogram and in a sequence of steps (top to bottom) outputs a sequence of labeled segments:
(1) The convolutional blocks produce a set of feature maps
by convolving (asterisk) their input and a set of learned filters (greyscale boxes).
A max-pooling operation down samples the feature maps.
(2) The recurrent layer is made up of Long Short Term Memory (LSTM) units.
This step is designed to capture dependencies across time
using both forward (F) and backward (B) passes through time to learn the weights governing the dynamics of the layer's internal state.
(3) A learned matrix projects the LSTM state onto the different syllable classes at each time bin
resulting in a vector of label probabilities.
(4) Each time bin is labeled by choosing the class with the highest probability (argmax).
(5) The labeled bins separate continuous song segments from no-song segments (Unl.).
(6) Post-processing rejects short segments and annotates each silence-separated song segment with a single label using the majority vote of time bins in the segment. Fig.~\ref{suppfig_example_tensors} shows example tensor shapes following each step of the deep network.
(i-iv) Zoom in on an aberrant sound (i), maximum label probabilities, and post-processing.}
\label{fig_tweetynet_architecture}}
\end{figure}
Surprisingly, beyond the work previously cited, we find little research that addresses the problem of learning to classify each time bin of a vocalization, either for human speech or birdsong.
The architecture we present here is somewhat similar to early deep network models for speech recognition,
but a crucial difference is that state-of-the-art models in that area
map directly from sequences of acoustic features to sequences of words \cite{graves2006connectionist}.
The success of these state-of-the-art models is attributed to the fact that they learn this mapping from speech to text while \emph{avoiding} the intermediate step of classifying each frame of audio, as has previously been shown \cite{graves_supervised_2012}.
In other words, they avoid the problem of classifying every frame that we set out to solve.
The architecture that we develop is most directly related to those that have been used for event
detection in audio and video \cite{bock_polyphonic_2012-1,parascandolo_recurrent_2016}
and for phoneme classification and sequence labeling~\cite{graves_framewise_2005,graves_supervised_2012}.
The closest prior model for segmenting and labeling birdsong is that of Koumura and Okanoya \cite{koumura_automatic_2016-1}.
Several aspects of that study provide context for the contributions of our work.
The authors compared different pipelines that combine a neural network for recognizing syllable
segments with Hidden Markov Models that learn to predict syllable sequences and in this way
improve the output of the network. They measured performance of these pipelines on a large dataset of
hand-annotated Bengalese finch song which they made publicly available \cite{koumura_birdsongrecognition_2016}.
In summary, the key prior art is the important work of Koumura and Okanoya \cite{koumura_automatic_2016-1}. That work anticipates the overall structure of our model, but achieves it through the integration of multiple distinct components that are individually optimized. In contrast, TweetyNet is a single neural network trained end-to-end, meaning it does not require optimizing multiple models.
Below we show that TweetyNet meets our criteria for an algorithm that
learns features from data and does not require segmented song to make predictions.
To do so we benchmark TweetyNet on Bengalese finch and canary song,
comparing its performance to \cite{koumura_automatic_2016-1} where possible.
Additionally we show that we achieve robust performance:
across songs of individuals, which can vary widely even within a species;
across many bouts of song from one individual, e.g. across days of song; and
across multiple species.
Lastly, we show that only a small amount of manually annotated data was required
to train TweetyNet models accurately enough to recreate, and add detail to, the deep structure of canary syntax.
\section*{Results}
\label{Results}
\subsection*{TweetyNet annotates Bengalese finch song with low error rates across individuals.}
We first set out to test whether our network robustly annotates syllables across a large number of
individual birds. To do so, we made use of the
publicly available repository of Bengalese Finch song \cite{koumura_birdsongrecognition_2016},
used to benchmark hybrid neural network-HMM models from \cite{koumura_automatic_2016-1}
as referenced in \nameref{Related work}.
The repository contains song from 10 individual birds, with hundreds of bouts of hand-annotated
song for each bird. Each individual's song had a different number of syllables and obeyed a different syntax.
To benchmark TweetyNet models on this dataset, we generated learning curves that plot error of the
model as a function of the size of the training set (duration in seconds).
The learning curves give us an estimate of the smallest amount of hand-labeled training data
we would need to obtain the lowest error that the TweetyNet model can achieve.
For each bird we split the data into fixed training and test sets,
with durations of 900 and 400 seconds respectively.
Then for each training set duration we trained 10
replicates with a randomly-drawn subset of the training data.
We computed error metrics for each training replicate on the
held-out test set for each individual. (See \nameref{Methods} for details.)
As shown in Fig~\ref{fig4}, these learning curves
demonstrate that TweetyNet models achieved low error rates across all ten birds.
We first looked at the frame error, the percentage of time bins in a spectrogram
for which the label predicted by the model
did not match the ground-truth label.
For all birds (solid colored lines),
TweetyNet models achieve less than 8\% frame error with the smallest training set duration
of 30 seconds (Fig~\ref{fig4}A).
From the learning curve we can estimate that across birds, the lowest frame error
that TweetyNet models produce is roughly 4\%, and that they achieve this with just
180 seconds (three minutes) of training data. (For specific values, see Table~\ref{table:1}.)
Larger training sets did not further reduce error.
To better understand how well the network segments and labels songs, we used another metric, the syllable error rate, which is analogous to the word error rate that is widely used in the speech recognition literature. This metric is an edit distance that counts the number of edits (insertions and deletions) needed to convert a predicted sequence of syllables into the ground-truth sequence. The error rate is normalized by dividing it by the length of the sequences for comparison across birds (e.g. if one bird sang more syllables per bout than another).
Measuring the syllable error rate confirmed that TweetyNet consistently achieved similar error rates across the ten birds, as shown in
Fig \ref{fig4}B. Because this metric was also used in \cite{koumura_automatic_2016-1}
(as "note error rate"), we can compare our results directly to theirs.
As indicated by blue circles in Fig \ref{fig4}B, the best-performing models in that study achieved syllable
error rates of 0.83 and 0.46 with two and eight minutes of training data, respectively. TweetyNet always
achieved much lower syllable error rates. Taken together, the results from benchmarking TweetyNet on this
dataset indicate that the architecture performs well across the song of many individual birds. In addition, it dramatically outperforms existing models with less training data, and does so while being trained end-to-end
without requiring optimizations of multiple steps in a pipeline.
\begin{figure}[!ht]
\includegraphics[scale=0.75]{Figures/fig4/fig4-learning-curves-ai.png}
\caption{{\bf TweetyNet annotates song with low error rates across ten individual Bengalese finches.}
Model ‘learning curves’ showing the reduction in annotation error (y-axis)
on a held-out test set as a function of the size of the training set (x-axis).
Shown are the ‘frame error rate’ (\textbf{A}) measuring the percent of mislabeled time bins
and the ‘syllable error rate’ (\textbf{B}) measuring the normalized sequence edit distance.
Each colored line corresponds to one bird from the dataset.
The solid line indicates mean error across ten training replicates
for each training set duration, and the translucent error band
around the solid lines indicates standard deviation.
Thicker black lines indicate the mean across birds.
Circular blue markers indicate mean syllable error rate across birds
reported in \cite{koumura_automatic_2016-1}
for a different algorithm using the same dataset.}
\label{fig4}
\end{figure}
\subsection*{TweetyNet models achieve low error across days even when trained with just the first three minutes of song recorded.}
We next sought to benchmark TweetyNet in a scenario similar to long-term behavioral experiments
for which we hope to automate annotation.
For this purpose we used another publicly-available repository \cite{nicholson_bengalese_2017}
with hand-labeled song from four Bengalese finches.
Importantly, the repository contains most or all of the songs sung
by each bird for multiple consecutive days, as is typically done
during a long-term behavioral experiment,
and annotation for all those songs (recall that experimenters usually are able to annotate only a limited number).
Here we sought to measure how well TweetyNet models
would perform when an experimenter takes the \textit{first} set of songs of
some duration $n$ and annotates those songs manually before using them to train a network.
This stands in contrast to the experiment in Fig.~\ref{fig4},
where we trained multiple replicates with random subsets
of songs from a larger training set, in order to obtain a better estimate of expected error rates.
Of course our goal is to avoid the need for experimenters to label a large dataset by
hand and then use it to train multiple replicates with random subsets of that data,
just to find the best performing network.
If we show that we can achieve comparable error rates with just the first $n$ minutes of song, we
can be more confident that TweetyNet models will robustly segment and label hours of song recorded across days.
Using the learning curves in Fig~\ref{fig4} we estimated that three minutes
of data was the shortest duration training set we could use to obtain
the lowest error rate achieved by models.
Thus, we trained single TweetyNet models with the first three minutes of song
sung by a bird on one day, and then measured the accuracy of
that model using all other songs across multiple days.
The test datasets we used to obtain these measures were in almost all cases at least as large as
those we used to benchmark models in the learning curves.
The mean duration of these test datasets was 1528 seconds
(standard deviation of 888.6 seconds,
i.e. 25 minutes mean, 14 minutes standard deviation),
in contrast to Fig~\ref{fig4} where we measured error with
a test set of 400 seconds (6 minutes 40 seconds).
Hence this approach gave us multiple estimates of
how a single trained model performs on relatively large datasets.
TweetyNet models trained in this manner did achieve low frame error (Fig~\ref{fig5}A) and
low syllable error rates (Fig~\ref{fig5}B) across days without exhibiting large fluctuations.
The frame error ranged from 2-4\% across 3-5 days of song,
comparable to those observed when training with a random subset of songs, as in Fig~\ref{fig4}.
In one case, for one bird, the frame error did increase on the last day, but was still
within the low end of the range seen for all birds, and this increase did not appear to translate into
an increase in the syllable error rate (Fig~\ref{fig5}B and Fig~\ref{fig5}C, bird ID or60yw70, red line).
We also found that TweetyNet models trained on the first three minutes of song
maintained a low syllable error rate across days (Fig~\ref{fig5}B and Fig~\ref{fig5}C),
again comparable to what we observed in the learning curves (Fig~\ref{fig4}B).
Here we additionally tested whether a simple post-processing step could
further lower the error rate. This ``majority vote'' transform consists of
defining syllable segments as continuous sequences of labeled spectrogram time bins bordered by time bins that the network
predicted were ``unlabeled''/``silent'', finding the label that
occurred most frequently across time bins within that segment, and then assigning that
label to the segment, and to all time bins within the segment.
As shown in Fig~\ref{fig5}C, this simple post-processing step
did lower the syllable error rate of TweetyNet models.
We did not find that this post-processing step had a large effect on the
frame error (not shown in plot), from which we infer that this
transform removes small frame errors (e.g. a single time bin) that give
rise to spurious extra segments,
and correcting these in turn produces a large drop in the syllable error rate.
Hence we have shown using Bengalese finch song
that TweetyNet outperforms existing models and that,
with only minimal cleaning of its output, analyses of behavioral experiments
can be scaled up to very large datasets.
\begin{figure}[!ht]
\includegraphics[scale=0.75]{Figures/fig5/fig5-error-across-days-ai.png}
\caption{{\bf TweetyNet models achieve low error across days of Bengalese finch song,
even when trained with just the first three minutes of song recorded}.
\textbf{A.} TweetyNet models trained on the first three minutes of song
from day 1 achieved low frame error across days.
The mean duration of the set of songs for each day
that we used to measure error was 1528 seconds (888.6 S.D.),
i.e. 25 minutes (14 minutes S.D.).
Different line colors and styles indicate individual birds.
\textbf{B.} TweetyNet models trained on the first three minutes of song
from day 1 also demonstrate a low syllable error rate across days.
\textbf{C.} The syllable error rates in \textbf{B} further improve
after applying a “majority vote” post-processing
(assigning the dominant label in each continuous segment of time bins
not annotated as ‘silence’, see methods).
For one bird (or60yw70), the error did increase on the last day,
but was still within the low end of the range seen for all birds.
}
\label{fig5}
\end{figure}
\subsection*{TweetyNet annotates minutes-long canary songs with low error rates across individuals}
After demonstrating TweetyNet's high performance across multiple individuals of the same species and across multiple songs of individual birds, we wanted to test TweetyNet across species. We chose the domestic canary (\textit{Serinus canaria}), a species for which there are no published annotation algorithms and whose rich song repertoire offers a unique opportunity for neuroscience research \cite{markowitz_long-range_2013,gardner_freedom_2005,alonso_low-dimensional_2009,appeltants_effect_2005,alliende_species-specific_2013}.
As in our first test in Bengalese finches, we curated training sets of 1-10 minutes of song from three canaries and measured the frame error rates in a held-out test set 20-30 minutes long. (Training sets are longer than in the Bengalese finch tests since canary songs can last a minute or more, and even sparse sampling of the full repertoire requires these longer training sets.) Still, Fig~\ref{fig_canary_lc} shows that in three canaries the model learning curves asymptote with 8-10 minute training sets to frame error rates similar to TweetyNet's performance on Bengalese finches.
\begin{figure}[!ht]
\includegraphics[scale=1.0]{figures/mainfig_canary_learning_curve/mainfig_CanaryLC.png}
\caption{{\bf TweetyNet segments and labels canary song with low error rates, similar to Bengalese finches, across individuals.}
Models were trained on 60s-600s of song from each individual. The mean frame error (lines) of five models (markers) trained with different randomly-drawn subsets of the training set was measured on a separate 1500-2000s test set from each individual. The asymptotic error rates, annotated to the right of the curves, overlap with the error rates in the Bengalese finch data sets.}
\label{fig_canary_lc}
\end{figure}
Unlike TweetyNet's performance on Bengalese finches, the frame error rates in annotating canary songs cannot be compared to alternative algorithms using published data and results. Furthermore, the length of these songs, which usually contain hundreds of syllables, means that even at very low error rates we expect annotation errors in many songs (Table~\ref{table:1}, Fig~\ref{suppfig_errors_in_boundaries}). These annotation errors can occur at the onset of song and in transitions between canary phrases (Fig~\ref{fig7}) and affect analyses of canary syntax.
To gauge the effect of such errors, in the next section we evaluate the accuracy of syntax models estimated from TweetyNet's automatic annotation.
\begin{table}[h!]
\centering
\begin{tabular}{ | c || m{5em} | m{5em} | m{5em} | m{5em} | m{5em} || m{5em} | }
\hline
dataset & training set duration (s) & frame error (\%) & syllable error rate & syllable error rate (majority vote) & \% Near Boundary \\
\hline
B.F. 1 & 120 & 3.5$\pm$0.5 & 0.05$\pm$0.01 & n/a & n/a\\
\hline
B.F. 1 & 180 & 3.4$\pm$0.5 & 0.04$\pm$0.01 & n/a & n/a\\
\hline
B.F. 1 & 480 & 3.3$\pm$0.5 & 0.04$\pm$0.01 & n/a & n/a\\
\hline
B.F. 2 & 180 & 2.9$\pm$1.4 & 0.2$\pm$0.09 & 0.06$\pm$0.04 & 64.9$\pm$14.3\\
\hline
Can. & 240 & 3.9$\pm$1.0 & 0.155$\pm$0.072 & 0.076$\pm$0.037 & 51.1$\pm$10.7\\
\hline
Can. & 600 & 3.1$\pm$0.8 & 0.09$\pm$0.016 & 0.051$\pm$0.011 & 58.3$\pm$11.3\\
\hline
Can. & 6000 & 2.1$\pm$0.8 & 0.069$\pm$0.013 & 0.031$\pm$0.005 & 68.6$\pm$13.9\\
\hline
\end{tabular}
\caption{{\bf Error metrics of TweetyNet models for different species and training set sizes}
For each Bengalese finch (B.F.) and canary (Can.) data set we evaluate test-set error metrics for models trained on several training-set sizes (measured in seconds). Presented are the mean $\pm$ standard deviation across all birds and experiment replicates. The \textit{frame error rate} and \textit{syllable error rate} columns present the raw error shown in the learning curves (Figs.~\ref{fig4},\ref{fig_canary_lc}). The \textit{syllable error rate (majority vote)} column shows the syllable error rate after applying post-hoc cleaning of the annotation, where we assigned a single label to each segment by majority vote and discarded all segments below a set duration (methods). The \textit{\% Near Boundary} column shows the percent of frame errors involving silent periods that occur within 0-2 time bins of syllable boundaries (onsets and offsets, see \nameref{Methods}).}
\label{table:1}
\end{table}
\begin{figure}[!ht]
\includegraphics[scale=1.0]{Figures/fig7/fig7_model_err_examples.png}
\caption{{\bf Variants of canary song introduce segmentation and annotation errors.}
Canary vocalizations contain variations that challenge TweetyNet. The examples in panels A-E show spectrograms on top of the time-aligned likelihood (gray scale) assigned by a trained TweetyNet model to each of the labels (y-axis, 30 syllable types and the tag \textit{unl.} for the unlabeled segments). Green and red vertical lines and the numbers on top of the spectrograms mark the onsets, offsets, and labels predicted by the model.
\textbf{A,B.} Transitions between syllables can occur without a silent gap. In this example, TweetyNet assigns elevated likelihood to both syllables (c.f. pink arrow). In rare variants the model ignores the first syllable (A).
\textbf{C.} Syllables produced weakly or in deformed variants still receive elevated likelihood (arrows) but may nevertheless be ignored because the unlabeled class receives a higher likelihood.
\textbf{D.} Transitions between phrases of very similar syllables (22 \textrightarrow 1) introduce label confusion.
\textbf{E.} Canaries can produce completely overlapping syllables. The model assigns high likelihood to both classes but is forced to choose only one.}
\label{fig7}
\end{figure}
\subsection*{Automated analysis of canary song structure. }
Sequences of canary phrases contain transitions with different 'memory' depths. Namely, the probability distribution of transition outcomes from a given phrase is captured by Markov chains with variable lengths. As shown in a recent study in Waterslager canaries, this syntax structure is captured parsimoniously by probabilistic suffix trees (PST) \cite{ron_power_1996,markowitz_long-range_2013}.
The root node in these graphical models, appearing in the middle of Fig~\ref{fig_automatic_canary_syntax}A,B, represents the zero-order Markov, or base rate, frequencies of the different phrases, labelled in different colors and letters. Each branch, emanating from the colored letters in Fig~\ref{fig_automatic_canary_syntax}, represents the set of Markov chains that end in the specific phrase type designated by that label. For example, the 'A' branch in Fig~\ref{fig_automatic_canary_syntax}A includes the first order Markov model 'A' and the second order Markov chains 'FA' and '1A' representing the second order dependence of the transition from phrase 'A'.
These models are built by iterative addition of nodes up the branch to represent longer Markov chains, or a transition's dependence on longer sequences of song history.
Figure~\ref{fig_automatic_canary_syntax} and Supplementary Figures~\ref{suppfig_tweetynet_vs_600_hand_labeled_songs},\ref{suppfig_tweetynet_vs_1764_hand_labeled_songs} demonstrate that TweetyNet parses domestic canary song with an accuracy sufficient to extract its long-range order. In these figures, we set parameters of the PST estimation algorithm to derive the deepest syntax structure possible without overfitting, as practiced in a recent study \cite{markowitz_long-range_2013} that used about 600 hand-annotated songs of Waterslager canaries. In this example, using 2.2\% of the data set, about 40 songs, to train a TweetyNet model and predict the rest of the data reveals the deep structures shown in Fig~\ref{fig_automatic_canary_syntax}A, comparable to using 600 hand-annotated songs of the same bird. With more training data, TweetyNet's accuracy improves, as does the statistical strength of the syntax model. In Fig~\ref{fig_automatic_canary_syntax}B a TweetyNet model was trained on 19\% of the data, about 340 songs, and predicted the rest of the data. The resulting syntax model can be elaborated to greater depth without overfitting. To cross-check this deeper model, we manually annotated all 1764 songs of that bird, revealing a very similar syntax model (Fig~\ref{fig_automatic_canary_syntax}B).
In sum, we find that TweetyNet, trained on a small sample of canary song, is accurate enough to automatically derive the deep structure that has formed the basis of recent studies \cite{markowitz_long-range_2013,cohen_hidden_2020}.
\begin{figure}[!ht]
\includegraphics[scale=0.85]{figures/mainfig_automatic_canary_syntax/mainfig_automatic_canary_syntax.png}
\caption{{\bf Example of reproducing syntax dependencies seen in \textit{Waterslager} canaries and then using a TweetyNet model to automatically process a larger dataset, adding detail and memory limits to the syntax structure.}
\textbf{A.} Long-range order found in 600 domestic canary songs annotated with a human proofreader (methods; similar dataset size to \cite{markowitz_long-range_2013}). Letters and colors label different phrase types. Each branch terminating in a given phrase type indicates the extent to which song history impacts transition probabilities following that phrase. Each node corresponds to a phrase sequence, annotated in its title, and shows a pie chart representing the outgoing transition probabilities from that sequence. The nodes are scaled according to their frequency (legend). Nodes that can be grouped together (chunked as a sequence) without significantly reducing the power of the model are labeled with blue text. The songs used to create the PST are a subset of 1764 songs. A TweetyNet model was trained using 2.2\% of that dataset (9.5\% of the data in A). The PST created from the model's predicted annotation of the entire dataset is very similar to A (see full comparison in Supplementary Figure~\ref{suppfig_tweetynet_vs_600_hand_labeled_songs}). Here, branch differences between the hand-labeled and model-labeled songs are marked by red and blue dashed lines for added and missed branches, respectively. \textbf{B.} Using all 1764 hand-labeled songs allowed creating a PST with greater detail. Compared to panel A, some branches did not grow. An almost identical PST was created \textit{without} a human proofreader from a TweetyNet model trained on 19\% of the data. The fluctuation in transition probabilities accumulates in long sequences and, in this example, increased the minimal sequence probability included in the PST. This difference prevented the inclusion of the 'N' branch (see full comparison in Supplementary Figure~\ref{suppfig_tweetynet_vs_1764_hand_labeled_songs}).}
\label{fig_automatic_canary_syntax}
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=1.0]{figures/mainfig_accuracy_in_large_datasets/mainfig_accuracy_in_large_datasets.png}
\caption{{\bf Using datasets more than 5 times larger than previously explored increases statistical power and the precision of syntax models.}
\textbf{A.} Ten-fold cross validation is used to select the minimal node probability for the PSTs (x-axis). Lines show the mean negative log-likelihood of test set data estimated by PSTs in 10 repetitions (methods). Curves are calculated for datasets that are subsampled from about 5000 songs. Red dots show the minimal values, the optimum for building the PSTs.
\textbf{B.} The decrease in optimal minimal node probability (y-axis, red dots in panel A) with increasing dataset size (x-axis) is plotted as gray lines for 6 birds. The average across animals is shown as black dots and a black line.}
\label{fig_precision_in_large_datasets}
\end{figure}
\subsection*{Larger data sets of annotated canary song add details and limit the memory of the syntax structure}
The increase in syntax detail, presented in Fig~\ref{fig_automatic_canary_syntax}B, is possible because rarer nodes can be added to the PST without over-fitting the data. Formally, the increase in PST precision for larger data sets is defined by the decrease in the minimal node frequency allowed when building PST models (Fig~\ref{fig_precision_in_large_datasets}), as measured by model cross validation (methods). In our data set, we find an almost linear relation between the number of songs and this measure of precision, amounting to close to a tenfold precision improvement.
In Fig~\ref{fig_automatic_canary_syntax}B, this increased precision allowed reliably adding longer branches to the PST to represent longer Markov chains (in comparison to Fig~\ref{fig_automatic_canary_syntax}A). In this example, using a dataset 3 times larger revealed a 5-deep branch that initiates with the beginning of song ('1ABGN'), indicating a potential global time-in-song dependency of that transition. The PST in Fig~\ref{fig_automatic_canary_syntax}B also has branches that did not 'grow' compared to Fig~\ref{fig_automatic_canary_syntax}A when more songs were analyzed (e.g. the 'B', 'Q', and 'R' branches), indicating a potential cutoff of memory depth that is crucial in studying the neural mechanisms of song sequence generation.
The data sets used in Figs~\ref{fig_automatic_canary_syntax} and \ref{fig_precision_in_large_datasets} are about 10 times larger than those of previous studies. To ascertain the accuracy of the syntax models, in creating these data sets we manually proofread TweetyNet's results (see methods). Across 5 different human proofreaders, we compared the time required to manually annotate canary song with the proofreading time and found that using TweetyNet saves 95-97.5 percent of the labor.
\newline
Taken together, the TweetyNet algorithm allowed us to annotate many more songs of individual complex singers than previously demonstrated, with high accuracy across individuals and across species. This accuracy allowed fully-automated analyses, saved most of the labor, and revealed novel details of canary syntax in a new strain.
\section*{Discussion}
\label{Discussion}
The family of songbirds that learns by imitation consists of over 4500 species. Some of these singers, such as the canary, produce songs that are much too complex to be automatically annotated with existing methods, and for these complex singers little is known about the syntax structure and organization of song. Even for birds with simple adult songs, a detailed description of song development will require the application of new methods. This is particularly true for early song development, where template-based extraction of song syllables and clustering of syllable forms provide an incomplete picture of the full variability of song.
A recent study illustrated the surprises that await a more detailed analysis of song. The canary, one of the most widely bred species of domesticated songbird, was recorded for 2 hours or more and hundreds of songs were manually annotated and cross validated. This data set revealed a new complexity to the statistical structure of canary song: the song follows long-range rules in which specific subsets of syllables follow transition statistics governed by 4th- and 5th-order Markov processes in phrase types \cite{markowitz_long-range_2013}. This rich behavior motivated another recent study to implant miniature microscopes in singing canaries, and the recorded neural signals included hierarchical memory traces corresponding to the complex syntax \cite{cohen_hidden_2020}. The sophistication of the neural representation of song in canaries was largely unanticipated based on decades of neural recordings in simpler singers.
The present project was motivated by these recent studies and the knowledge that new fundamental discoveries in vocal learning and neural dynamics will follow if automated annotation of complex song becomes possible. Some methods for automated annotation exist, but previous work suggests these methods have their own limitations, especially when applied to song with many syllable types and variable sequence such as that of Bengalese finches and canaries.
The TweetyNet algorithm described here is a work in progress, with many clear paths for improvement. Still, the syllable error rates reported here are dramatic improvements over a prior model for song parsing. We used publicly-available datasets of Bengalese finch song to benchmark TweetyNet. We showed that it achieves low error rates across many individuals. On Bengalese finch data, our single network trained end-to-end performs better than a previously proposed hybrid HMM-neural network model and does so with less training data (Fig.~\ref{fig4}). We then showed that TweetyNet models achieve low error across days, in thousands of Bengalese finch songs, even when trained with just the first three minutes of song. This experiment, while strongly restricting the data available for model training, demonstrates the usefulness of TweetyNet in 'real-life' laboratory settings, for experimentalists who want to hand-annotate as little as possible.
We next reported that TweetyNet was sufficiently accurate to reproduce the recent findings on the complex syntax structure of canary song with fully automated machine-classified song. Specifically, a TweetyNet model trained on just 10 minutes of canary song could accurately recover the statistical structure reported from 600 manually annotated songs, exceeding 100 minutes of song. Furthermore, a deep network trained on 340 annotated songs, about 19\% of the data, could classify a larger data set of more than 1700 songs and build a much more complete statistical model of song, revealing additional depth to the long-range syntax rules and extending prior reports on the complexity of canary song behavior. This more complex statistical model was validated using a manually curated data set of all songs.
With a trained model performing at this level it becomes feasible to examine the effect of social context on song syntax, circadian variations in syntax, or the effects of distinct neural perturbations that could affect song syntax while keeping syllable forms intact.
On top of sequence variations, many song studies require syllable similarity metrics to examine the effects of such neural or song perturbations, or the ontogeny of syllable forms through development.
Here we used TweetyNet to classify the most likely syllable at every time point, focusing not on variations in syllable form but on the sequential structure of song syntax. However, syllable classification is the final processing step in TweetyNet, achieved by maximum a-posteriori (MAP, or argmax) estimation following the calculation of similarity to all possible syllables. Thus, the full likelihood function that TweetyNet produces prior to classification may itself be a useful metric of syllable structure, allowing, for example, the time course of syllable form to be examined through development or as a result of neural perturbations. A syllable similarity metric that can be assigned at each point in time or frame of a spectrogram without syllable segmentation is, by itself, a new development in the field and can be used, in future development, to improve TweetyNet and to apply it to many more species whose song is difficult to segment.
To make TweetyNet useful to a large research community, we developed the \texttt{vak} library, a user-friendly toolbox that enables
researchers to apply TweetyNet simply by adapting existing configuration files. This library does not require extensive programming knowledge or expertise in neural networks. The framework will allow users to explore different methods of optimizing neural network models that might improve segmentation, and also to generate alternative architectures that could incorporate distinct features and topologies. For example, in many domains transformer networks have recently replaced LSTMs for sequence processing \cite{vaswani_attention_2017}. Substituting transformer layers for the LSTM layer could provide advances here.
Aspects of other deep networks applied to animal motor control may improve TweetyNet. Examples include object detection architectures \cite{coffey_deepsqueak_2019, mathis_deeplabcut_2018}
applied to mouse ultrasonic vocalizations and animal motion tracking,
and generative architectures applied to birdsong and other vocalizations
\cite{goffinet_inferring_2019,sainburg2019animal,sainburg2019latent}.
Lastly we note that in principle TweetyNet and the \texttt{vak} library can be applied to any other annotated vocalization, including calls of bats, mouse ultrasonic vocalizations, and dolphin communication.
We do not claim to have achieved the best possible method for automated annotation
of vocalizations with neural networks using supervised learning methods, although
we have aimed to establish a strong baseline for the work that will build upon ours.
That said, we are confident our method enables songbird researchers to automate annotation
required for analyses that address central questions of sensorimotor learning.
\section*{Materials and methods}
\label{Methods}
\subsection*{Ethics declaration}
All procedures were approved by the Institutional Animal Care and Use Committees of Boston University (protocol numbers 14-028 and 14-029). Song data were collected from n = 5 adult male canaries. Canaries were individually housed for the entire duration of the experiment and kept on a light–dark cycle matching the daylight cycle in Boston (42.3601 N). The birds were not used in any other experiments.
\subsection*{Data availability}
Datasets of annotated Bengalese finch song are available at
\url{https://figshare.com/articles/BirdsongRecognition/3470165} and \url{https://figshare.com/articles/Bengalese_Finch_song_repository/4805749}.
Datasets of annotated canary song are available at \url{https://datadryad.org/stash/share/lXWpizOCPjW1V63_yD8KSnj0huB-jYTJ0EfbBsNxHzU}.
Model checkpoints, logs, and source data files for figures are available at \url{https://datadryad.org/stash/share/q_N9D6dfZp_phGPQrUbeinGqcgd-lB4JZeIsd_tGXAs}
\subsection*{Code availability}
\label{methods:code}
The code implementing the TweetyNet architecture,
and code to reproduce figures in this paper, are available at
\url{https://github.com/yardencsGitHub/tweetynet}
(version 0.4.3, 10.5281/zenodo.3978389).
To aid with reproducibility of our experiments,
and to make TweetyNet more accessible to researchers studying birdsong
and other animal vocalizations, we developed a software library,
\texttt{vak}, available at \url{https://github.com/NickleDave/vak}.
Both TweetyNet and \texttt{vak} are implemented using
the following open-source scientific Python libraries:
torch \cite{paszke_automatic_2017},
torchvision \cite{marcel_torchvision_2010},
numpy \cite{walt_numpy_2011, harris2020array},
scipy \cite{virtanen_scipy_2020},
dask \cite{dask_development_team_dask_2016},
pandas \cite{team_pandas-devpandas_2020},
matplotlib \cite{Hunter:2007,thomas_a_caswell_2020_4030140},
seaborn \cite{michael_waskom_2020_4019146},
jupyter \cite{kluyver2016jupyter},
attrs \cite{attrs}
and tqdm \cite{casper_da_costa_luis_2020_4054194}.
\subsection*{Data collection}
\subsubsection*{Use of available datasets}
Bengalese finch song is from two publicly-available repositories.
The first \cite{koumura_birdsongrecognition_2016} was used for results in Figs~\ref{fig_tweetynet_architecture} and~\ref{fig4} and can be found at
\url{https://figshare.com/articles/BirdsongRecognition/3470165}. It accompanied the paper \cite{koumura_automatic_2016-1}.
The second \cite{nicholson_bengalese_2017} was used for results in Fig~\ref{fig5} and can be found at \url{https://figshare.com/articles/Bengalese_Finch_song_repository/4805749}.
Apart from recordings made for this manuscript we used publicly available datasets of Waterslager canary songs \cite{markowitz_long-range_2013}, Bengalese finch songs \cite{koumura_automatic_2016-1} and Zebra finch songs \cite{otchy_acute_2015}.
\subsubsection*{Domestic canary song screening}
Birds were individually housed in soundproof boxes and recorded for 3-5 days (Audio-Technica AT831B Lavalier Condenser Microphone, M-Audio Octane amplifiers, HDSPe RayDAT sound card and VOS games’ Boom Recorder software on a Mac Pro desktop computer). In-house software was used to detect and save only sound segments that contained vocalizations. These recordings were used to select subjects that are copious singers ($\ge 50$ songs per day) and produce at least 10 different syllable types.
\subsubsection*{Domestic canary audio recording}
All data used in this manuscript was acquired between late April and early May 2018 – a period during which canaries perform their mating season songs. Birds were individually housed in soundproof boxes and recorded for 7-10 days (Audio-Technica AT831B Lavalier Condenser Microphone, M-Audio M-track amplifiers, and VOS games’ Boom Recorder software on a Mac Pro desktop computer). In-house software was used to detect and save only sound segments that contained vocalizations. Separate songs were defined by silence gaps exceeding 1 second.
\subsection*{Audio processing}
\subsubsection*{Segmenting annotated phrases of Waterslager canaries}
The dataset of Waterslager canaries was available from a previous project in the Gardner lab \cite{markowitz_long-range_2013}. These songs were previously segmented into phrases, trilled repetitions of syllables, and not into individual syllables. To include these data in Fig~\ref{fig1} we needed to break annotated phrase segments into syllable segments. In each segmented phrase, we separated vocalization and noise fluctuations between vocalizations by fitting a 2-state hidden Markov model with Gaussian emission functions to the acoustic signal. The suspected syllable segments resulting from this procedure were proofread and manually corrected using a GUI developed in-house (\url{https://github.com/yardencsGitHub/BirdSongBout/tree/master/helpers/GUI}).
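As an illustration of this step, the sketch below fits a two-state Gaussian HMM to a phrase's acoustic envelope using the \texttt{hmmlearn} library. It is only a minimal stand-in under those assumptions; the actual in-house code and the acoustic features it operates on differ.
\begin{verbatim}
import numpy as np
from hmmlearn import hmm

def vocal_mask(envelope):
    """Fit a 2-state Gaussian HMM to the acoustic envelope of one annotated
    phrase and return a boolean mask of samples assigned to the louder state.
    Contiguous runs of True are then treated as candidate syllable segments."""
    X = np.asarray(envelope, dtype=float).reshape(-1, 1)
    model = hmm.GaussianHMM(n_components=2, n_iter=100).fit(X)
    states = model.predict(X)
    voc_state = int(np.argmax(model.means_.ravel()))  # state with the higher mean
    return states == voc_state
\end{verbatim}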
\subsubsection*{Preparing data sets of domestic canaries}
\paragraph{Bootstrapping annotation with TweetyNet}
In this manuscript we used annotated domestic canary datasets an order of magnitude larger than previously published. To create these datasets we used TweetyNet followed by manual proofreading of its results. This process, described below, allowed 'bootstrapping' TweetyNet's performance.
\vspace{2mm}
\newline
Song syllables were segmented and annotated in a semi-automatic process:
\begin{itemize}
\item A set of $\sim$100 songs was manually segmented and annotated using a GUI developed in-house (\url{https://github.com/yardencsGitHub/BirdSongBout/tree/master/helpers/GUI}). This set was chosen to include all potential syllable types as well as cage noises.
\item The manually labeled set was used to train TweetyNet (\url{https://github.com/yardencsGitHub/tweetynet}).
\item In both the training phase of TweetyNet and the prediction phase for new annotations, data is fed to TweetyNet in segments of 1 second and TweetyNet’s output is the most likely label for each 2.7msec time bin in the recording.
\item The trained algorithm annotated the rest of the data and its results were manually verified and corrected.
\end{itemize}
\paragraph{Assuring the identity and separation of syllable classes}
The manual steps in the pipeline described above can still miss rare syllable types or mislabel syllables into the wrong classes. To make sure that the syllable classes were well separated, all the spectrograms of every instance of every syllable, as segmented in the previous section, were zero-padded to the same duration. An outlier detection algorithm (IsolationForest: \url{https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html}) was used to flag and re-check potentially mislabeled syllables or previously unidentified syllable classes.
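A minimal sketch of this check, assuming a list of per-syllable spectrograms (2-D arrays) as input; parameter choices are illustrative defaults rather than the values used in this study.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_outlier_syllables(spectrograms):
    """Zero-pad each syllable spectrogram to the longest duration, flatten,
    and flag candidate outliers for manual re-inspection."""
    max_t = max(s.shape[1] for s in spectrograms)
    padded = np.stack([np.pad(s, ((0, 0), (0, max_t - s.shape[1])))
                       for s in spectrograms])
    X = padded.reshape(len(spectrograms), -1)
    flags = IsolationForest(random_state=0).fit_predict(X)  # -1 marks outliers
    return np.where(flags == -1)[0]
\end{verbatim}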
\paragraph{Preparing spectrogram inputs for TweetyNet}
Spectrograms were created from audio files using custom Numpy (Bengalese finch) or Matlab (canary) code.
All spectrograms for song from a given species were created with the same parameters (e.g., number of
samples in the window for the Fast Fourier Transform). From initial studies we found that it was necessary
to perform standard transforms on spectrograms such as a log transform in order for the neural network to
learn. We did not notice any difference depending on the nature of the transform (i.e., we also used log + 1),
although we did not study this systematically.
\subsection*{Network Architecture}
The network takes a 2D window from a spectrogram as input (c.f. top of Fig~\ref{fig_tweetynet_architecture}) and produces as output
labels for each time bin in the window.
The spectrogram window passes through two standard convolutional blocks,
each of which consists of a convolutional layer and a max pooling layer.
The convolutional layer ('2D conv.' in Fig~\ref{fig_tweetynet_architecture}) performs a cross-correlation-like
operation (asterisk in Fig~\ref{fig_tweetynet_architecture}) between the spectrogram window and learned filters (greyscale boxes in Fig~\ref{fig_tweetynet_architecture}) to produce feature maps.
The max pooling layer ('Pooling' in Fig~\ref{fig_tweetynet_architecture}) uses a similar operation to further reduce feature maps to maximum values within a sliding
window (orange bin in Fig~\ref{fig_tweetynet_architecture}). Importantly, the window size we use in the max pooling layer has a ``width'' of one time bin, so that this
layer does not down-sample along the time axis.
The output of the second convolutional block passes through a recurrent layer made up of LSTM units, where by default
the number of units equals the number of time bins in the spectrogram window. This default choice, like other model parameters, can be changed by the user.
The final layer in TweetyNet is a projection ($\overrightarrow{W}_{t,s}$, purple matrix in Fig~\ref{fig_tweetynet_architecture}) of the recurrent layer's output onto the different syllable classes, $s = 1, \ldots, n$, resulting in a vector of $n$ syllable-similarity scores for each spectrogram time bin $t$. The number of classes, $n$, is predetermined by the user and includes a class for no-song time bins ('Unl.' in Fig~\ref{fig_tweetynet_architecture}). At present this non-song class includes both background noises and silence, and future iterations of the model may distinguish between these for better performance. To segment syllables, the bin-wise syllable-similarity scores are first used to select a single syllable class per time bin by choosing the label with the highest syllable-similarity score. Since similarity scores can be normalized, this is akin to maximum a-posteriori (MAP) label selection. Then, the labeled time bins are used to separate continuous song segments from no-song segments and to annotate each song segment with a single label using a majority decision across the time bins in that segment.
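For concreteness, the architecture just described can be summarized in a short PyTorch sketch. The kernel sizes, channel counts, and hidden dimensions below are illustrative placeholders rather than the published hyperparameters; the pooling kernels have a width of one time bin so that only the frequency axis is down-sampled, as described above.
\begin{verbatim}
import torch
from torch import nn

class FrameClassifier(nn.Module):
    """Sketch of a TweetyNet-like frame classifier: two convolution / max-pooling
    blocks, a bidirectional LSTM, and a per-time-bin projection onto classes."""
    def __init__(self, n_freq_bins, n_classes, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(5, 5), padding=(2, 2)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(8, 1)),  # pool frequency only, keep time bins
            nn.Conv2d(32, 64, kernel_size=(5, 5), padding=(2, 2)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(8, 1)),
        )
        feat = 64 * (n_freq_bins // 64)        # channels x remaining frequency bins
        self.lstm = nn.LSTM(feat, hidden, batch_first=True, bidirectional=True)
        self.project = nn.Linear(2 * hidden, n_classes)

    def forward(self, spect_window):           # shape (batch, 1, freq, time)
        maps = self.conv(spect_window)          # shape (batch, chan, freq', time)
        batch, chan, freq, time = maps.shape
        seq = maps.permute(0, 3, 1, 2).reshape(batch, time, chan * freq)
        out, _ = self.lstm(seq)                 # shape (batch, time, 2 * hidden)
        return self.project(out)                # per-bin class scores
\end{verbatim}
Training such a model then reduces to minimizing a cross-entropy loss between the per-bin class scores and the per-bin ground-truth labels.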
\subsection*{Training and benchmarking TweetyNet}
\label{methods:training}
Benchmarking of TweetyNet was performed with the \texttt{vak} library.
We apply standard methods for benchmarking supervised machine learning algorithms, following best practices \cite{james2013introduction}.
We leverage functionality of the \texttt{vak} library
that extends best practices for benchmarking to the domain where
dataset size is measured in duration, as described in \nameref{methods:learning curves}.
\subsubsection*{Data transformations}
As stated above, the input to the network consists of spectrogram windows. To produce this input,
we slid a window of fixed length across spectrograms, essentially creating an array of every possible
window from each spectrogram. This array was randomly permuted then fed to the network in minibatches
during training, along with the expected output, vectors of labels for each timebin in the
spectrogram windows. These vectors of labeled timebins are produced programmatically by \texttt{vak}
from annotations consisting of segment labels and their onset and offset times.
For Bengalese finch song we used windows of 88 time bins,
and for canary song we used windows of 370 time bins.
We carried out preliminary experiments where we varied the window size
for Bengalese finch song, but did not find that larger windows
greatly increased accuracy, although they did increase training time.
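A simplified sketch of this data transformation, assuming a spectrogram array of shape (frequency bins, time bins) and annotations given as segment onset/offset times with integer labels; the actual \texttt{vak} implementation differs in its details.
\begin{verbatim}
import numpy as np

def label_vector(n_bins, bin_dur, onsets_s, offsets_s, labels, silence=0):
    """Build a per-time-bin label vector from segment annotations."""
    vec = np.full(n_bins, silence, dtype=int)
    for on, off, lab in zip(onsets_s, offsets_s, labels):
        vec[int(on / bin_dur):int(off / bin_dur) + 1] = lab
    return vec

def all_windows(spect, frame_labels, width):
    """Every possible window of `width` time bins, paired with its label vector
    (88 bins for Bengalese finch song, 370 for canary song in this study)."""
    n = spect.shape[1] - width + 1
    windows = [spect[:, i:i + width] for i in range(n)]
    targets = [frame_labels[i:i + width] for i in range(n)]
    return windows, targets
\end{verbatim}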
\subsubsection*{Learning curves}
\label{methods:learning curves}
For the studies shown in Figs.~\ref{fig4},\ref{fig_canary_lc}, we created learning curves
that display a metric such as the frame error rate as a function of the amount of training data.
For each individual bird we fit networks with training sets of increasing size (duration in seconds)
and then measured performance on a separate test set.
In the case of Bengalese finches, we used training sets with durations ranging from 30-480 seconds.
For each network trained, audio files were drawn at random from a fixed-size total training set of 900 seconds
until the target size (e.g. 60 seconds) was reached. If the total duration of the randomly drawn audio files
extended beyond the target duration, they were clipped at the target duration
in a way that ensured all syllable classes were still present in the training set.
For each bird we trained ten replicates, where each
replicate had a different subset of randomly-drawn audio files to create the target training set size.
For all Bengalese finches, we measured accuracy on a separate test set with a fixed size of 400s. We chose
to use a totally-separate fixed-size set (instead of e.g. using the remainder of the training data set) so
we could be sure that any variance in our measures across training replicates could be attributed to
the randomly-drawn training set, and not to changes in the test set.
We computed metrics such as frame error rate and syllable error rate on the held-out test set for each bird.
For canaries we used test sets of 1500-2000 seconds and training sets of 60-600 seconds for the learning curves in Fig.~\ref{fig_canary_lc}.
For the result in Table~\ref{table:1} we used a test set of 5000 seconds and a training set of 6000 seconds.
The method for generating learning curves as just described is built into the \texttt{vak} library and
can be reproduced using its \textit{learncurve} functionality in combination with the configuration files
we shared (reference link) and the publicly-available datasets.
\subsubsection*{Metrics}
We measured performance with two metrics. The first is the frame error rate, which simply measures for each acoustic frame
(in our case each time bin in a spectrogram) whether the predicted label matches the ground truth
label. Hence the frame error rate ranges between 0 and 1, i.e. it can be stated as a percent, and
gives an intuitive measure of a model's overall performance. Previous work on supervised sequence labeling, including
bidirectional-LSTM architectures similar to ours, has used this metric \cite{graves_supervised_2012,graves_framewise_2005}.
The second metric we used is commonly called the word error rate in the speech recognition literature,
and here we call it the syllable error rate. This metric is an edit distance that counts the number of edits (insertions and deletions) needed to convert a predicted sequence into the ground-truth sequence. The error rate is normalized by dividing it by the length of the sequences.
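For concreteness, both metrics can be computed with a few lines of Python. The sketch below uses the standard Levenshtein distance (counting insertions, deletions, and substitutions) normalized by the length of the ground-truth sequence; it is an illustration rather than the exact implementation used for the results above.
\begin{verbatim}
import numpy as np

def frame_error(pred_bins, true_bins):
    """Fraction of time bins whose predicted label differs from ground truth."""
    pred_bins, true_bins = np.asarray(pred_bins), np.asarray(true_bins)
    return np.mean(pred_bins != true_bins)

def syllable_error_rate(pred_seq, true_seq):
    """Levenshtein edit distance between label sequences, normalized by the
    length of the ground-truth sequence."""
    d = np.zeros((len(pred_seq) + 1, len(true_seq) + 1), dtype=int)
    d[:, 0] = np.arange(len(pred_seq) + 1)
    d[0, :] = np.arange(len(true_seq) + 1)
    for i in range(1, len(pred_seq) + 1):
        for j in range(1, len(true_seq) + 1):
            cost = 0 if pred_seq[i - 1] == true_seq[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,          # deletion
                          d[i, j - 1] + 1,          # insertion
                          d[i - 1, j - 1] + cost)   # substitution
    return d[-1, -1] / len(true_seq)
\end{verbatim}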
In Table~\ref{table:1} we provide two additional measures. The first is a lower bound on the percent of all frame errors that can be attributed to slightly misaligned syllable onsets and offsets.
These syllable boundaries are naturally variable in creating the ground-truth hand-annotated data sets. Spectrogram time bins in which a trained TweetyNet model and the ground truth disagree and only one of them assigns the 'unlabeled' tag can potentially lie near segment boundaries. In Fig~\ref{suppfig_errors_in_boundaries} we show the histogram of distances, in spectrogram bins, of such frame errors from ground-truth segment boundaries. The majority are concentrated 0-2 bins away from the boundaries, amounting to the overall percentages summarized in Table~\ref{table:1}. The second is the syllable error rate after applying post-hoc cleaning of the annotation. This cleanup is done in two steps: (1) discard all segments shorter than 5 msec (using 10 msec adds an insignificant improvement in some birds) and (2) assign a single label to each segment of time bins not labeled as 'silence' by majority vote.
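A minimal sketch of this two-step cleanup, assuming integer per-time-bin labels with 0 reserved for the 'silence'/'unlabeled' class and a known time-bin duration; it is an illustration, not the exact cleanup code used here.
\begin{verbatim}
import numpy as np

def clean_segments(frame_labels, bin_dur, silence=0, min_dur=0.005):
    """Group non-silent time bins into silence-separated segments, discard
    segments shorter than min_dur, and label each remaining segment by
    majority vote over its time bins."""
    labels = np.asarray(frame_labels)
    segments, start = [], None
    for i, lab in enumerate(np.append(labels, silence)):  # sentinel closes last run
        if lab != silence and start is None:
            start = i
        elif lab == silence and start is not None:
            if (i - start) * bin_dur >= min_dur:           # step (1): duration filter
                majority = int(np.bincount(labels[start:i]).argmax())  # step (2)
                segments.append((start * bin_dur, i * bin_dur, majority))
            start = None
    return segments
\end{verbatim}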
\subsubsection*{Model output as syllable likelihoods}
In Fig~\ref{fig7} we present model outputs one step prior to assigning the most likely label to each spectrogram time bin. At that stage, just before the \textit{argmax(N)} step in Fig~\ref{fig_tweetynet_architecture}, the model output for a given time bin $t$ is a real-valued affinity $a(t,s)\in\mathcal{R}$ for each predefined syllable class $s$. In Fig~\ref{fig7} we convert these numbers to likelihoods by subtracting the minimum value and normalizing separately for each time bin: $L(t,s)=\frac{a(t,s)-\min_{s'}a(t,s')}{\sum_{\sigma}[a(t,\sigma)-\min_{s'}a(t,s')]}$. This transformation was done for presentation only. Applying the commonly-used softmax transform ($x_i\rightarrow\frac{\exp(x_i)}{\sum_j\exp(x_j)}$) would be equivalent since we only keep the maximal value.
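In code, this per-time-bin normalization amounts to the following minimal sketch (which ignores the degenerate case in which all affinities in a bin are equal):
\begin{verbatim}
import numpy as np

def to_likelihoods(affinities):
    """Convert raw outputs a(t, s) of shape (time_bins, classes) into
    per-time-bin 'likelihoods' L(t, s) as in the equation above."""
    shifted = affinities - affinities.min(axis=1, keepdims=True)
    return shifted / shifted.sum(axis=1, keepdims=True)
\end{verbatim}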
\subsection*{Data analysis - song structure}
\subsubsection*{Shared template dependence on number of syllables in song (Fig~\ref{fig1}e)}
For each bird we define an upper bound for repeating parts of songs using pairwise comparisons. For each song we examined all other songs with an equal or larger number of syllables and found the longest shared string of consecutive syllables. The fraction of shared syllables is the ratio between the length of the shared sequence and the number of syllables in the first, shorter, song. Then, we bin songs by syllable counts (bin size is 10 syllables) and calculate the mean and standard deviation across all pairwise comparisons.
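A single pairwise comparison can be sketched as a longest-common-substring computation over the syllable label sequences; the binning and averaging over all pairs are omitted, and the function name is ours.
\begin{verbatim}
def shared_fraction(song_a, song_b):
    """Fraction of song_a (the shorter song) covered by the longest string
    of consecutive syllables it shares with song_b."""
    best = 0
    # Dynamic programming over matching suffixes (longest common substring)
    prev = [0] * (len(song_b) + 1)
    for a in song_a:
        curr = [0] * (len(song_b) + 1)
        for j, b in enumerate(song_b, start=1):
            if a == b:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best / len(song_a)
\end{verbatim}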
\subsubsection*{Probabilistic suffix tree (Figs~\ref{fig_automatic_canary_syntax},\ref{suppfig_tweetynet_vs_600_hand_labeled_songs}, and \ref{suppfig_tweetynet_vs_1764_hand_labeled_songs})}
For each canary phrase type we describe the dependency of the following transition on previous phrases with a probabilistic suffix tree. This method was described in a previous publication from our lab (Markowitz et al., 2013, code in \url{https://github.com/jmarkow/pst}). Briefly, the tree is a directed graph in which each phrase type is a root node representing the first-order (Markov) transition probabilities to downstream phrases, including the end of song. The pie charts in Figs~\ref{fig_automatic_canary_syntax},\ref{suppfig_tweetynet_vs_600_hand_labeled_songs}, and \ref{suppfig_tweetynet_vs_1764_hand_labeled_songs} show such probabilities. Upstream nodes represent higher-order Markov chains that are added sequentially if they significantly add information about the transition.
\subsection*{Model cross validation to determine minimal node frequency}
To prevent overfitting, nodes in the probabilistic suffix trees are added only if they appear more often than a threshold frequency, $P_{min}$. To determine $P_{min}$ we replicate the procedure in \cite{markowitz_long-range_2013} and carry out a 10-fold cross-validation procedure. In this procedure the dataset is randomly divided into a training set, containing 90 percent of songs, and a test set, containing 10 percent of songs. A PST is created using the training set and used to calculate the negative log-likelihood of the test set. This procedure is repeated 10 times for each value of $P_{min}$, the x-axis in Fig~\ref{fig_precision_in_large_datasets}a. For data sets of different sizes (curves in Fig~\ref{fig_precision_in_large_datasets}a, x-axis in Fig~\ref{fig_precision_in_large_datasets}b) the mean negative log-likelihood across the 10 cross-validation subsets and across 10 data sets, the y-axis in Fig~\ref{fig_precision_in_large_datasets}a, is then used to find the optimal value of $P_{min}$ - the minimum negative log-likelihood that corresponds to the highest precision without over-fitting the training set. All PSTs in Figs~\ref{fig_automatic_canary_syntax},\ref{suppfig_tweetynet_vs_600_hand_labeled_songs}, and \ref{suppfig_tweetynet_vs_1764_hand_labeled_songs} are created using the cross-validated $P_{min}$.
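The cross-validation loop can be sketched as below; \texttt{fit\_pst} and \texttt{neg\_log\_likelihood} are placeholders standing in for the PST code referenced above, and the additional averaging across the 10 data sets is omitted.
\begin{verbatim}
import numpy as np

def choose_p_min(songs, p_min_values, fit_pst, neg_log_likelihood,
                 n_folds=10, seed=0):
    """Pick the minimum node frequency P_min by 10-fold cross-validation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(songs)), n_folds)
    mean_nll = []
    for p_min in p_min_values:
        scores = []
        for k in range(n_folds):
            test_idx = set(folds[k].tolist())
            train = [s for i, s in enumerate(songs) if i not in test_idx]
            test = [s for i, s in enumerate(songs) if i in test_idx]
            model = fit_pst(train, p_min)
            scores.append(neg_log_likelihood(model, test))
        mean_nll.append(np.mean(scores))
    # Optimal P_min minimizes the mean held-out negative log-likelihood
    return p_min_values[int(np.argmin(mean_nll))]
\end{verbatim}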
\section*{Acknowledgments}
This study was supported by NIH grants R01NS104925,
R24NS098536, and R01NS118424 (T.J.G.). We thank J. Markowitz and T.M. Otchy for sharing song datasets, and Nvidia Corporation for a technology grant (Y.C., Sober lab).
\section*{Supporting information}
\beginsupplement
% Include only the SI item label in the paragraph heading. Use the \nameref{label} command to cite SI items in the text.
%\paragraph*{Fig. S1}
%\label{S1_Fig}
%{\bf Consecutive canary phrases can include acoustically-similar syllables but differ in the duration of inter-syllabic gaps.}
\begin{figure}[!ht]
\includegraphics[scale=1.0]{Figures/Supplementaries/Supp_Figure1_1.png}
\caption{{\bf Example of two consecutive canary phrases that differ mostly in inter-syllable gaps.} In this case, annotation methods that first segment syllables and then use acoustic parameters to classify them will introduce errors. By simultaneously learning acoustic and sequence properties, TweetyNet overcomes this weakness.}
\label{suppfig_canary_phrases_diff_gaps}
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=0.75]{Figures/Supplementaries/suppfig_tweetynet_tensor_shapes_example.png}
\caption{{\bf The shape of tensors (multi-dimensional arrays) that result from each operation the network performs.} In this example the input is a spectrogram window of T time bins and 513 frequency bins. The first and second convolution steps learn 32 and 64 5x5 features, respectively, and both pooling steps operate on non-overlapping windows of 8 frequency bins and 1 time bin (to keep temporal resolution). In this example the size of the LSTM (hidden) state is 512 to match its input. The LSTM's output tensor is 1024 elements tall because we concatenate the forward and backward runs.}
\label{suppfig_example_tensors}
\end{figure}
%\paragraph*{Fig. S2}
%\label{S2_Fig}
\begin{figure}[!ht]
\includegraphics[scale=1.0]{Figures/Supplementaries/EdgeErrorDistCanaries.png}
\caption{{\bf Most errors of trained TweetyNet models are disagreements on syllable boundaries of 0-2 time bins.} Potential syllable boundary disagreements are time bins in which the ground truth test set and the trained TweetyNet model disagree and just one of them assigns the 'unlabeled' silence tag. The histograms show the distances of those time bins from the nearest syllable boundary in test sets 5000 seconds long.}
\label{suppfig_errors_in_boundaries}
\end{figure}
%\paragraph*{Fig. S3}
%\label{S3_Fig}
\begin{figure}[!ht]
\includegraphics[scale=0.85]{figures/Supplementaries/suppfig_tweety_vs_600_hand_labeled_songs.png}
\caption{{\bf Detailed comparison of syntax structure in 600 hand-labeled or TweetyNet-labeled canary songs.} In support of Fig~\ref{fig_automatic_canary_syntax}A, we plot the full probabilistic suffix trees created from 600 hand-labeled canary songs ({\bf A}) and from a TweetyNet model trained on 2.2 percent of this bird's song ({\bf B}).}
\label{suppfig_tweetynet_vs_600_hand_labeled_songs}
\end{figure}
%\paragraph*{Fig. S4}
%\label{S4_Fig}
\begin{figure}[!ht]
\includegraphics[scale=0.85]{figures/Supplementaries/suppfig_tweety_vs_1764_hand_labeled_songs.png}
\caption{{\bf Detailed comparison of syntax structure in 1764 hand-labeled or TweetyNet-labeled canary songs.} In support of Fig~\ref{fig_automatic_canary_syntax}B, we plot the full probabilistic suffix trees created from 1764 hand-labeled canary songs ({\bf A}) and from a TweetyNet model trained on 19 percent of this bird's song ({\bf B}).}
\label{suppfig_tweetynet_vs_1764_hand_labeled_songs}
\end{figure}
\nolinenumbers
% Either type in your references using
% \begin{thebibliography}{}
% \bibitem{}
% Text
% \end{thebibliography}
%
% or
%
% Compile your BiBTeX database using our plos2015.bst
% style file and paste the contents of your .bbl file
% here. See http://journals.plos.org/plosone/s/latex for
% step-by-step instructions.
%
% \begin{thebibliography}{10}
% \bibitem{bib1}
% Conant GC, Wolfe KH.
% \newblock {{T}urning a hobby into a job: how duplicated genes find new
% functions}.
% \newblock Nat Rev Genet. 2008 Dec;9(12):938--950.
% \bibitem{bib2}
% Ohno S.
% \newblock Evolution by gene duplication.
% \newblock London: George Alien \& Unwin Ltd. Berlin, Heidelberg and New York:
% Springer-Verlag.; 1970.
% \bibitem{bib3}
% Magwire MM, Bayer F, Webster CL, Cao C, Jiggins FM.
% \newblock {{S}uccessive increases in the resistance of {D}rosophila to viral
% infection through a transposon insertion followed by a {D}uplication}.
% \newblock PLoS Genet. 2011 Oct;7(10):e1002337.
% \end{thebibliography}
\FloatBarrier
\bibliography{CohenNicholsonGardner2020.bib}
\end{document}
\section{State of the Art}
\subsection{General framework}
%Energy efficiency in data centers
%Power density rises because the number of racks increases even as energy efficiency improves
%PUE
There is a substantial body of research on how to reduce power consumption in data centers. The work most relevant to this thesis is explained below. This research is focused on several lines: reducing the consumption of the servers, improving the PUE of data centers, raising the power density in racks, and establishing new actuation techniques to manage servers. These are some of the most important works related to this topic:
\begin{itemize}
\item As shown by Koomey in 2011 \cite{koomey2011growth}, the IT sector must be concerned about data center consumption. Although power consumption has not risen as much as expected, due to the economic crisis and to the fact that infrastructure providers have understood that they had to minimize power consumption, the electricity used in global data centers in 2010 was between 1.1\% and 1.5\% of total electricity use. This factor is one of the motivations for this thesis.
\item Work by Emerson \cite{emersonDC} shows how power consumption density will increase in data centers in the coming years. The idea is that data centers need to incorporate new servers every week to absorb the current demand. This, combined with a continuous growth of the power density of the servers, makes it very important to make servers more efficient.
%http://www.emersonnetworkpower.com/en-US/Latest-Thinking/Data-Center-2025/Documents/002401_DataCenter2025Report_HR_INTERACTIVE.PDF
%[6] de aqui? file:///C:/Users/alvaro.m.lopez/Downloads/00%20ALVARO/publications-devel/mzapater_IGSC15_submitted.pdf
\item A key challenge is to achieve a good trade-off between energy savings and server performance. Accordingly, a few researchers have recently started to investigate coordinated solutions for performance and power management in a dynamic cloud environment \cite{Kumar:2009:VLC:1555228.1555262}. The point is that the best policy for a subsystem does not depend only on the subsystem, but also on the full environment.
%http://dl.acm.org/citation.cfm?id=1555262
\item PUE is considered one of the key metrics of well-performing, green-designed data centers. It was first published by The Green Grid in 2007 as the reference metric of power usage effectiveness, due to the comparison that PUE makes between the consumption of the servers and the total consumption of the facility. The most important considerations about PUE, and why there are better metrics, are collected by The Green Grid in these documents \cite{amer2013pue} \cite{GreenGrid2010Eff}.
%http://www.thegreengrid.org/~/media/WhitePapers/WP49-PUE%20A%20Comprehensive%20Examination%20of%20the%20Metric_v6.pdf?lang=en
%http://www.energystar.gov/ia/partners/prod_development/downloads/Data_Center_Metrics_Task_Force_Recommendations_V2.pdf?6107-55e3
\end{itemize}
\subsection{Energy Efficiency Metrics}
%% Would it be important to talk about this?
%%http://www.infoq.com/articles/power-consumption-servers
All the figures on waste and consumption mentioned above are really important, but it is necessary to have metrics to compare different DCs. The most widespread metric is the Power Usage Effectiveness (PUE).
PUE is a measure defined by The Green Grid that tells us how efficiently a DC uses energy by comparing the energy spent in the IT environment with all the energy spent in the DC. \cite{originalPUE}
\begin{equation*}
	PUE = \frac{\text{Total Facility Energy}}{\text{IT Energy}}
\end{equation*}
Looking at this equation, the following keys can be extracted:
\begin{wrapfigure}{l}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{percConsumoDC}
\caption{Data center power usage distribution}
\end{wrapfigure}
Notice that PUE is always higher than 1 because IT Energy is contained in Total Facility Energy.
The perfect PUE would be 1. In this case, all the energy consumed by the IT equipment - servers, storage, network equipment... - would account for the entirety of the energy consumption. This would be the ideal situation, and there is no DC with this effectiveness.
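For example, a hypothetical facility that draws a total of 1.5 MW while its IT equipment draws 1.2 MW would have
\begin{equation*}
PUE = \frac{1.5\ \text{MW}}{1.2\ \text{MW}} = 1.25,
\end{equation*}
meaning that for every watt delivered to the IT equipment, an additional 0.25 W is spent on cooling, power distribution and other overheads.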
Figure \ref{fig:pue-average} represents the evolution of PUE in Google's DCs over the last eight years. Google DCs are considered among the most power-effective DCs in the world since they have the lowest PUE.
\begin{figure}[h]
\includegraphics[scale=0.5]{pue-average} % Could use [width=\linewidth] instead
\caption{PUE evolution of Google's DCs \cite{puegrafica}}
\label{fig:pue-average} %Sets a label for the figure
\end{figure}
%\ref{fig:pue-average} % Returns the figure number
%\pageref{fig:pue-average} %Returns the page the figure is on
As can be seen in the plot, the PUE of Google DCs has progressively decreased year by year, recently approaching 1.
\subsubsection{Limitations of PUE \cite{ignacioPaper}}
Even though it seems a good idea to compare DCs based on their PUE, PUE is not the only comparison we should make when talking about energy effectiveness.
As mentioned above, there are two contributors to the energy consumption: the servers and the rest of the subsystems, and in this thesis we will focus on the consumption of the servers. According to the PUE formula, if the IT consumption is reduced, the PUE increases even though the total energy consumption has been minimized and the DC has become more efficient.
Consequently, it can be said that PUE is a really important metric that has to be kept in mind, but it cannot be the single metric and must be accompanied by absolute metrics such as consumption per server or consumption per full-load hour of each server. \cite{amer2013pue}
\subsection{Open Compute Project}
The Open Compute Project (OCP) was announced in April 2011. This project is led by Facebook with the aim of designing more efficient data centers. All the policies and designs introduced by the OCP share the same idea: making the DC a single environment where all the components - servers, building and software - can work at the same time and in a synchronized way, so that power-consumption and load-sharing policies do not take decisions according to one component but to the whole DC.
To achieve this, they share specifications so that companies can create a new server based on the OCP specifications and, finally, the community can check whether it really minimizes the energy consumption and suggest improvements. In this Bachelor Thesis an Intel version of the OCP server specification called Decathlete is used.
The OCP context has been chosen for this work because it is a four-year project in which some of the most important companies of the DC sector have participated.
Finally, it is important to mention that the OCP has some principles that it shares with this Bachelor Thesis. The most important points are the following ones:
\begin{itemize}
	\item First of all, it is based on open software and hardware. Everyone can make and test their own changes without any restriction. This improves the speed at which the design is upgraded.
	\item Second, the OCP promotes a big research community. All new ideas are well-received at first. Then, the OCP decides which contributions best suit the project and incorporates them into the main project.
	\item Thirdly, the main objective of the OCP is to create a new open standard so that several vendors can use the same standard.
\end{itemize}
\subsection{Green LSI framework}
Within this general framework, the Green LSI team also wants to participate in the optimization of the power consumption of DCs. Our work is focused on making the DC a single environment where all the components run at the same time. This synchronization is mainly targeted at reducing the power consumption.
This thesis will be useful for the Green LSI research for two reasons:
\begin{itemize}
	\item[$-$] Firstly, several lines of work of the GreenLSI have the aim of creating high-level policies that do not have actuation support. The group needs tools to implement these policies on the group's servers.
	\item[$-$] Secondly, the results of this thesis can be applied to every server of the group, so every line of work will have low-level support.
\end{itemize}
%%During this year, several works of the GreenLSI have been published.
%%Could talk about Juan Carlos and Marina as two examples of the Green LSI
%2015-ieee-cloudd = Marina
%2015-igsc = Marina
%2014-tpds = Zapater
%All three go here
\subsection{Current Server-control Policies}
%Vasic N et al (2009) Making cluster applications energy-aware. In: Proc of automated ctrl for datacenters and clouds
%http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=0BAF373A528501F6B3ABF23028EBC947?doi=10.1.1.169.6157&rep=rep1&type=pdf
%46. Srikantaiah S et al (2008) Energy aware consolidation for cloud computing. In: Proc of HotPower
%http://research.microsoft.com/pubs/75408/srikantaiah_hotpower08.pdf
%Paper de alfonso
%Paper de ignacio
%Any actuation policy that only discusses how to manage consumption
As we can see in the paragraphs above, some policies at different levels inside the DC - servers, racks, whole DC - have been mentioned. One of them is at the server level. Policies at the server level need actuation support to implement them.
There are several research works focused on changing the configuration of a server to improve its performance, minimizing its consumption or a combination of both.
\begin{itemize}
	\item Research by Aransay et al. \cite{ignacioPaper} suggests a new algorithm to decide which server a new task should be allocated to in a data center. This algorithm recommends using a reputation system. Each server has a reputation calculated using its CPU temperature, and the most suitable server to allocate new workload is established using this algorithm.
	\item In research by Felter et al. \cite{Felter:2005:PAR:1088149.1088188}, peak power consumption is reduced by using workload-guided dynamic allocation of power among components. The idea is that not all workloads need the same resources, and these resources can be switched between power-saving and performance depending on the needs.
	\item This document \cite{export:75408} suggests consolidating applications in cloud computing to reduce the energy consumption. This is because when an application needs few resources, the idle consumption predominates over the dynamic consumption, so the server is not taking advantage of its resources. When applications are consolidated, this issue disappears.
\end{itemize}
\section{Multi-Point Search}\label{sec:multi_point}
As has been mentioned, the multi-point selection perturbative hyper-heuristic employs a genetic algorithm which explores the space of heuristic combinations. This approach is similar to that of Raghavjee and Pillay \cite{raghavjee2015genetic} in that low-level heuristic combinations are evolved to produce the combinations that best improve on an initial solution. In addition, Evohyp \cite{pillay2017evohyp} - a Java toolkit for evolutionary algorithm hyper-heuristics - was used for the implementation of the genetic algorithm.
A single chromosome in the genetic algorithm population is represented as a heuristic combination - for example, \emph{AEBDACA}. To evaluate the chromosome, the heuristic combination is applied to the initial solution to produce a new solution. The low-level heuristics in the combination are applied to the solution in turn until the solution has been changed \emph{n} times, where \emph{n} has again been arbitrarily chosen as the number of exams that need to be scheduled for the problem instance. The procedure for applying a perturbative heuristic sequence is shown in algorithm \ref{alg:multi_point}.
Following the application of a heuristic sequence, the fitness of a solution is calculated in the same manner as suggested by Pillay \cite{pillay2010evolving}, in that fitness is a measure of the soft constraint cost multiplied by the hard constraint cost incremented by one.
\begin{algorithm}[H]\label{alg:multi_point}
\SetAlgoLined
currentSolution = initialSolution\;
\BlankLine
\For{$i\gets1$ \KwTo $n$}{
heuristic = H.elementAt($i \mod H.length$)\;
currentSolution = heuristic.applyTo(currentSolution)\;
}
\BlankLine
return currentSolution\;
\caption{Applying a Perturbative Heuristic Sequence}
\end{algorithm}
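For clarity, the evaluation of a single chromosome can be sketched as follows. This is an illustrative Python sketch rather than the Evohyp (Java) implementation; \texttt{apply\_heuristic} and the \texttt{soft\_cost}/\texttt{hard\_cost} attributes are placeholders for the corresponding operations on a timetable solution.
\begin{verbatim}
def evaluate_chromosome(heuristics, initial_solution, n):
    """Apply a heuristic combination (e.g. "AEBDACA") to the initial
    solution, cycling through it for n changes, then score the result."""
    solution = initial_solution
    for i in range(1, n + 1):
        heuristic = heuristics[i % len(heuristics)]
        solution = apply_heuristic(heuristic, solution)  # placeholder
    # Fitness: soft constraint cost scaled by (hard constraint cost + 1),
    # so any hard-constraint violation heavily penalises the solution.
    return solution.soft_cost * (solution.hard_cost + 1)
\end{verbatim}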
Furthermore, the genetic algorithm employs tournament selection as the selection method and uses crossover and mutation as genetic operators - these techniques are all supported by the Evohyp \cite{pillay2017evohyp} toolkit. Of course, as is the case with any genetic algorithm approach, there are specific parameters required by the algorithm. The parameters, in this case, were largely chosen based on proportions that have been used previously with similar approaches to the same benchmark set \cite{pillay2010evolving, raghavjee2015genetic}. However, many parameters were scaled down to smaller values. The genetic parameters used were selected as follows:
\begin{itemize}
\item The Population Size was selected as 50.
\item The Tournament Size was selected as 5.
\item The Number of Generations was selected as 10.
\item The Mutation Rate was selected as 0.3.
\item The Crossover rate was selected as 0.7.
\item The Maximum Initial Length was selected as 20.
\item The Maximum Offspring Length was selected as 20.
\item The Mutation Length was selected as 5.
\end{itemize}
"alphanum_fraction": 0.7974465148,
"avg_line_length": 90.5625,
"ext": "tex",
"hexsha": "297af33335ae7cd9547e6bb74043284550e1c425",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ec6e229960efa4afe0c1f36a06c722e54778459b",
"max_forks_repo_licenses": [
"AFL-3.0"
],
"max_forks_repo_name": "marcus-bornman/cos_790_assignment_2",
"max_forks_repo_path": "assets/report/03_multi_point/multi_point.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ec6e229960efa4afe0c1f36a06c722e54778459b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"AFL-3.0"
],
"max_issues_repo_name": "marcus-bornman/cos_790_assignment_2",
"max_issues_repo_path": "assets/report/03_multi_point/multi_point.tex",
"max_line_length": 660,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ec6e229960efa4afe0c1f36a06c722e54778459b",
"max_stars_repo_licenses": [
"AFL-3.0"
],
"max_stars_repo_name": "marcus-bornman/cos_790_assignment_2",
"max_stars_repo_path": "assets/report/03_multi_point/multi_point.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 650,
"size": 2898
} |
\pagebreak
\pagenumbering{arabic}
\chapter{Introduction}
Since neurons are driven by electric currents, changing the electrical conditions of their surrounding environment can alter their behavior. Both electrically and magnetically induced stimulation have been used for many years, although the latter was studied several years earlier. Each method is based on different physics, although the results are similar.
\begin{wrapfigure}{l}{0.40\textwidth}
\centering
\includegraphics[width = 0.3\textwidth]{assets/images/cortical_layers.pdf}
\caption{Cellular structure of the neocortex. (Purves et al.\cite{Purves2012}, Figure 27.1(B) p.628)}
\label{fig:cortical_layers}
\end{wrapfigure}
Because of the layered structure (\autoref{fig:cortical_layers}) of the neocortex, the electric fields induced by \gls{TMS} act mainly on interneurons and collaterals of the tangentially oriented pyramidal cells, whereas the total electric field induced by electrical stimulation is perpendicular to the cortical surface and thus acts mainly on pyramidal cells of \textit{layer V} since most of them are orthogonal to the cortex. As a result, \gls{TMS}-induced fields have a low penetration depth. In contrast, \gls{tES} induced fields can penetrate deep into the brain, but at the expense of spatial resolution. Therefore, \gls{tES} is mostly used to try to reach deep targets.
\gls{DBS}, particularly the non-invasive methods achieving it, is an emerging need, as the functionality of most crucial brain components lies deep within the brain. Although successful invasive methods can, for example, alleviate Parkinson's symptoms, non-invasive methods offer greater scalability and lower risk to the patient. However, the challenge with non-invasive methods is the lack of spatial resolution and precise targeting of the desired area, usually on the order of tens of millimeters (e.g., pituitary gland \cite{Yadav2017_pituitary}).
In 2017, a new method that seemingly revolutionizes the way non-invasive \gls{DBS} is performed, by exploiting clever tricks or \textit{"hacks"} of neuron physics, was presented by Nir Grossman and his team \cite{Grossman2017} for the first time. The method of \paper{Grossman}{Grossman2017} uses pairs of electrodes operated with high-frequency alternating currents whose envelope oscillates at the desired frequency, obtained by setting different frequencies between the pairs of electrodes. The significant advantage is reaching deep targets and stimulating only the regions of interest without affecting surrounding areas or the entire brain.
The Grossman method is a promising technique for the future of \gls{DBS}. It is investigated in this work to evaluate its potential application in humans, as only spherical and simple phantom models were studied in the original work \cite{Grossman2017}, in conjunction with successful experiments in murine models. Expanding the study on realistic human brain models will help understand \gls{tTIS} specifics, and tests on healthy individuals will consolidate the method's effectiveness. Ultimately, this work aims to enable optimized and personalized treatment planning for humans based on rigorous computer simulations.
As a closing remark, it has been a firm belief of the author that the greatest benefit to science and society comes from the open sharing of knowledge, and true to this belief, all the code, drafts, text, and all originally created materials of this work are publicly available on the GitLab repository \cite{thesis_repo}.
\section{Methods of Electrical Stimulation}
To better understand the primary advantages of the \gls{tTIS} method, the different methods for achieving electrical stimulation shall be touched upon for comparison purposes. An overview of the two main methods is presented in this section. These are the most commonly used ones and form the basis for almost all other electrical stimulation method variants.
\subsection{Transcranial Direct Current Stimulation \textit{(tDCS)}}
\gls{tDCS} delivers constant current via electrodes placed on the head, targeting to stimulate the cortical areas. Numerous studies show promising results in depression treatment \cite{Moffa2020,Brunoni2016}, and \gls{tDCS} is increasingly used in such cases \cite{Nitsche2008}.
Although \gls{tDCS} is useful for overall cortical stimulation, the lack of focality is one of the principal issues. As shown in \autoref{fig:tdcs_pattern}, the electric current flows through an extensive area, potentially affecting unwanted regions of the brain.
\begin{figure}[H]
\centering
\includegraphics[width = 0.75\textwidth]{assets/images/tdcs_pattern.png}
\caption{\gls{tDCS} pattern showcase with P7 \textit{(active)}, F7 electrodes based on the 10-20 system.}
\label{fig:tdcs_pattern}
\end{figure}
Furthermore, achieving neural firing synchronization is impossible through \gls{tDCS} since this kind of effect requires alternating current. The method that has such an ability is \gls{tACS}.
\subsection{Transcranial Alternating Current Stimulation \textit{(tACS)}}
In many aspects, \gls{tACS} and \gls{tDCS} are identical. As mentioned before, the key characteristic where \gls{tACS} differs is the neural firing synchronization. Like \gls{tDCS}, this method also lacks focality.
\gls{tACS} requires using low-frequency currents in the order of tens of Hz since at those frequencies, the neurons are firing the action potentials. The problem arising here again is the stimulation of a large area, synchronized at the stimulating current frequency. Such synchronization can have adverse effects, i.e., stimulating and synchronizing unwanted areas.
Stimulating with higher frequency currents renders the method useless because the neuronal membrane can either not follow the oscillation, or the frequency is too high for the desired synchronization. This problem is solved by the \gls{tTIS} method, which is the focus of this work.
\subsection{Transcranial Temporal Interference Stimulation \textit{(tTIS)}}
\gls{tTIS} is essentially \gls{tACS} with a small, yet remarkable, modification. Taking advantage of the superposition principle of the electric field and utilizing the emerging interference patterns is the essence of this approach. The current flowing through each electrode pair has a frequency high enough that it does not modulate any neurons by itself, as opposed to \gls{tACS}, where the frequency must be in the target modulation range. The modulator here is the envelope (\autoref{fig:modulation_showcase}) of the temporal interference of the two electric fields, given that the envelope oscillates at the difference of the two electrode frequencies.
\begin{figure}[H]
\centering
\includegraphics[width = 0.95\textwidth]{assets/images/modulation_envelope.pdf}
\caption{Wave superposition pattern with equal amplitude waves}
\label{fig:modulation_showcase}
\end{figure}
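A minimal numerical sketch of this superposition, assuming equal amplitudes and illustrative frequencies of 2 kHz and 2.04 kHz, is given below; the sum oscillates near the carrier frequency while its envelope oscillates at the 40 Hz difference.
\begin{verbatim}
import numpy as np

f1, f2 = 2000.0, 2040.0                # illustrative electrode frequencies (Hz)
t = np.linspace(0.0, 0.1, 20000)       # 100 ms sampled at 200 kHz
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), so the envelope is:
envelope = 2.0 * np.abs(np.cos(np.pi * (f2 - f1) * t))
# The ~2 kHz carrier is too fast for neurons to follow, while the 40 Hz
# envelope lies within the physiological range.
\end{verbatim}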
This method was tested by \paper{Grossman}{Grossman2017} on murine brains, and the results agreed closely with the respective simulations. Targeted neuro-modulation was achieved with much better focality and depth than with the traditional methods. Although this process works well on murine brains, it poses a great challenge for human models, as the geometry and the volume change dramatically. A sample image from this work can be seen in \autoref{fig:ttis_pattern}, where the focality and depth of Grossman's method, in comparison with \autoref{fig:tdcs_pattern}, are visible.
\begin{figure}[H]
\centering
\includegraphics[width = 0.70\textwidth]{assets/images/ttis_pattern.png}
    \caption{Electric field pattern of the \gls{tTIS} method using P8 \textit{(active)}, F8 as the base frequency electrodes and P7 \textit{(active)}, F7 as the secondary \textit{(delta)} frequency electrodes. The base frequency is $f = 1\; kHz$ and $\Delta f=40\; Hz$.}
\label{fig:ttis_pattern}
\end{figure}
This approach seems promising. It is yet to be reinforced by follow-up works and real experiments on human subjects in the clinical setting, potentially aiding in treating some diseases.
\section{Theoretical Background}
To grasp the physics behind the simulations, a brief explanation regarding current conduction within tissues is presented in \autoref{sec:e_ohmic_qs}, followed by an analysis regarding the \gls{tTIS} hypothesis based on Grossman et al.\cite{Grossman2017} work.
\subsection{Electric Field Ohmic Quasi-static Approximation}
\label{sec:e_ohmic_qs}
Generally, Maxwell's equations for electromagnetic wave propagation in a medium are as follows:
% Maxwells equations
\begin{center}
\begin{minipage}{.35\linewidth}
\begin{equation}
\nabla\cdot\vec{E}=\dfrac{\rho}{\epsilon}
\end{equation}
\end{minipage}
\begin{minipage}{.35\linewidth}
\begin{equation}
\nabla\cdot\vec{B} = 0
\end{equation}
\end{minipage}\break
\begin{minipage}{.35\linewidth}
\begin{equation}
\label{eq:maxwell_curl_e}
\nabla\times\vec{E}=-\dfrac{\partial\vec{B}}{\partial t}
\end{equation}
\end{minipage}
\begin{minipage}{.35\linewidth}
\begin{equation}
\nabla\times\vec{B} = \mu\Bigg(\vec{J} + \epsilon\dfrac{\partial\vec{E}}{\partial t}\Bigg)
\end{equation}
\end{minipage}
\end{center}
\noindent The problem in question, finding the electrical field distribution in a volume using low frequencies, can be approached by simplifying the general form of Maxwell's equations and deriving the Quasi-static approximation format. The first step is to define the assumptions taken for such an approach to be valid.
For the frequencies in the \si{kHz} range used in the studied problem, the displacement current can be neglected, making the Ohmic currents dominant. Also, since the magnetic field is not time-variant, based on \autoref{eq:maxwell_curl_e}, we can write:
\begin{equation}
\label{eq:curl_zero_e_field}
\nabla\times\vec{E} = \vec{0}
\end{equation}
and as we know, when a field is irrotational, then it can be calculated from a scalar potential ($\phi$) as seen below:
\begin{equation}
\label{eq:e_field_from_potential}
\boxed{\vec{E} = -\nabla\phi}
\end{equation}
\noindent Moreover, since the sum of currents entering and exiting the volume is zero \textit{(Kirchhoff's current law)}, we can denote:
\begin{equation}
\nabla\cdot\vec{J} = 0
\end{equation}
where $\vec{J}$ is the Ohmic current density given below, as it is assumed that there are no current sources in the volume:
\begin{equation}
\label{eq:sigma_e_0}
\vec{J} = \sigma\vec{E}\Rightarrow\boxed{\nabla\cdot\big(\sigma\vec{E}\big) = 0}
\end{equation}
with $\sigma$ being the electrical conductivity of each tissue. Finally, based on \cref{eq:e_field_from_potential,eq:sigma_e_0} the final relationship describing the problem can be derived:
\begin{equation}
\label{eq:laplace_e}
\boxed{\nabla\cdot(\sigma\nabla\phi) = 0}
\end{equation}
\autoref{eq:laplace_e} describes the problem of conduction in bulk conductors. However, it is worth noting that the equation is only valid in the frequency range where displacement-current effects are negligible, and only when no charge is generated \textit{(charge is conserved)}. Furthermore, care shall be taken with the conductivity values, which may depend on the stimulating frequency for different materials.
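As a toy illustration of \autoref{eq:laplace_e}, a one-dimensional two-layer conductor with Dirichlet boundary conditions can be solved with a few lines of finite differences. The conductivity values below are illustrative, and the actual simulations in this work solve the full 3-D problem on realistic head models.
\begin{verbatim}
import numpy as np

# Solve d/dx( sigma(x) dphi/dx ) = 0 on [0, L] with phi(0) = V0, phi(L) = 0
N = 200                                   # interior grid points
L = 0.1                                   # domain length (m)
x = np.linspace(0.0, L, N + 2)
h = x[1] - x[0]
sigma = np.where(x < L / 2, 0.33, 0.01)   # two layers, e.g. tissue vs. bone (S/m)
V0 = 1.0

# Conductivity at cell interfaces (harmonic mean keeps the flux continuous)
s_iface = 2.0 * sigma[:-1] * sigma[1:] / (sigma[:-1] + sigma[1:])

A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    s_left, s_right = s_iface[i], s_iface[i + 1]
    A[i, i] = -(s_left + s_right)
    if i > 0:
        A[i, i - 1] = s_left
    if i < N - 1:
        A[i, i + 1] = s_right
b[0] -= s_iface[0] * V0                   # boundary value folded into the RHS
phi = np.linalg.solve(A, b)               # potential at interior nodes
E = -np.gradient(np.concatenate(([V0], phi, [0.0])), h)   # E = -grad(phi)
\end{verbatim}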
\pagebreak
\subsection{Temporal Interference}
The utilization of \gls{tTIS} to achieve targeted \gls{DBS} was first introduced by \paper{Grossman}{Grossman2017}. This technique takes advantage of spatial electromagnetic wave interference, using frequencies in the \si{kHz} range, which have almost no effect on neurons, since neurons do not respond to frequencies higher than 1\si{kHz} \cite{Hutcheon2000}.
\\\vspace{1pt}
\begin{wrapfigure}{r}{0.48\textwidth}
\vspace{-10pt}
\centering
\includegraphics[width = 0.44\textwidth]{assets/images/brain_figure_ttis.pdf}
\caption[Depiction of the \gls{tTIS} pattern and the vector direction of the electric field. The purple area is the \gls{ROI} where interference happens.]{Depiction of the \gls{tTIS} pattern and the vector direction of the electric field. The purple area is the \gls{ROI} where interference happens. Image by \href{https://pixabay.com/users/openclipart-vectors-30363/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=150935}{OpenClipart-Vectors} from \href{https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=150935}{Pixabay}}
\label{fig:brain_elec_demo}
\end{wrapfigure}
Using two pairs of electrodes (\autoref{fig:brain_elec_demo}), with each pair having a slightly different frequency, an interference pattern can be generated in the conducting medium, oscillating at the difference of the two frequencies. Depending on the medium's nature, the pattern can vary, and as illustrated in \paper{Grossman}{Grossman2017}, when a uniform medium is used, the pattern is easily calculated.
Since the modulation happens in the 3D space, there will be different patterns in the $x$, $y$ and $z$ directions. According to Grossman et al. \cite[page 20]{Grossman2017}, at any location $\vec{r} = (x,y,z)$ the envelope amplitude of the \gls{AM} of the electric field produced by the temporal interference is calculated as:
\begin{equation}
\label{eq:directional_amplitude}
\vec{E}(\vec{n},\vec{r}) = \Big|\big|(\vec{E_1} + \vec{E_2})\cdot\vec{n}\big| - \big|(\vec{E_1} - \vec{E_2})\cdot\vec{n}\big|\Big|
\end{equation}
where $\vec{E_1} = \vec{E_1}(\vec{r})$, $\vec{E_2} = \vec{E_2}(\vec{r})$ are the electric fields coming from the two electrodes and $\vec{n} = \vec{n}(\vec{r})$ is the unit vector at the direction of interest.
\\\vspace{1pt}
What is of interest is the maximum amplitude of modulation at a specific location, because the modulation will vary with time between zero and its maximum. To calculate this amplitude across all directions, the analysis in Grossman et al. \cite[page 20]{Grossman2017} can be complemented by the analysis conducted in Rampersad et al. \cite[section 2.5]{Rampersad2019}. Based on the two publications mentioned above, a complete description will be given here. The formula to calculate the maximum modulation amplitude along all directions at a specific location, $\vec{r} = (x,y,z)$, is:
\begin{equation}
\label{eq:max_mod_amplitude}
\vec{E}_{AM}^{max}(\vec{r}) = \begin{cases}
2\big|\vec{E_2}\big| & \text{if}\; \big|\vec{E_2}\big| < \big|\vec{E_1}\big|\cos\alpha \\
&\\
                        2\dfrac{\Big|\vec{E_2}\times\big(\vec{E_1} - \vec{E_2}\big)\Big|}{\big|\vec{E_1} - \vec{E_2}\big|} & \text{otherwise}
\end{cases}
\end{equation}
where $\alpha$ is the angle between $\vec{E_1}$ and $\vec{E_2}$, while \autoref{eq:max_mod_amplitude} holds only if $\alpha < 90\si{\degree}$. Whenever $\alpha \geq 90\si{\degree}$, the sign of one of the two fields can be flipped \textit{(this must be done consistently)}, since reaching peak field strength at different time points across different areas is what causes the $< 90\si{\degree}$ rule to be violated. This change is possible considering that \autoref{eq:max_mod_amplitude} calculates the maximum effect over one oscillation, so that the overall effect is taken into account. The calculation of $\vec{E}_{AM}^{max}$ can be seen in \autoref{alg:max_modulation_amplitude} in \autoref{appndx:algorithms}.
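A direct translation of \autoref{eq:max_mod_amplitude} into code, including the sign flip discussed above and assuming the convention $|\vec{E_1}| \geq |\vec{E_2}|$ (the fields are swapped otherwise), could look as follows. This sketch handles a single location only and is not the implementation referenced in \autoref{appndx:algorithms}.
\begin{verbatim}
import numpy as np

def max_modulation_amplitude(E1, E2):
    """Maximum temporal-interference envelope amplitude at one location.
    E1, E2: 3-vectors of the two fields at that point (illustrative sketch)."""
    E1, E2 = np.asarray(E1, float), np.asarray(E2, float)
    if np.linalg.norm(E2) > np.linalg.norm(E1):   # enforce |E1| >= |E2|
        E1, E2 = E2, E1
    if np.dot(E1, E2) < 0:                        # alpha >= 90 deg: flip one field
        E2 = -E2
    n1, n2 = np.linalg.norm(E1), np.linalg.norm(E2)
    cos_alpha = np.dot(E1, E2) / (n1 * n2)
    if n2 < n1 * cos_alpha:
        return 2.0 * n2
    diff = E1 - E2
    if np.linalg.norm(diff) == 0.0:               # identical fields
        return 2.0 * n2
    return 2.0 * np.linalg.norm(np.cross(E2, diff)) / np.linalg.norm(diff)
\end{verbatim}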
\chapter{Testing}
\section{Server tests}
\subsection{Goal 1}
\textbf{The system should offer the possibility to create a new account}
\testTable{Registration with empty email}{Incorrect email error response}{Incorrect email error response}
\testImage{"testServer/emptyEmail".png}
\testTable{Registration with empty password}{Incorrect password error response}{Incorrect email error response}
\testImage{"testServer/emptyPassword".png}
\subsection{Goal 2}
\textbf{The system should be able to handle a login phase}
\testTable{Login with wrong client credentials}{Wrong credentials error response}{Wrong credentials error response}
\testImage{"testServer/wrongCredentials".png}
\testTable{Login with wrong user credentials}{Wrong username or password error response}{Wrong username or password response}
\testImage{"testServer/wrongUserPassword".png}
\subsection{Other tests}
\testTable{Access auth-protected API with correct access token}{Response successfully returns}{Response successfully returns}
\testImage{"testServer/correctToken".png}
\testTable{Access auth-protected API with wrong access token}{Wrong access token error response}{Wrong access token error response}
\testImage{"testServer/wrongToken".png}
\section{Client test}
\subsection{Goal 1}
\textbf{The system should offer the possibility to create a new account}
\testTable{Registration with invalid email}{Incorrect email error message}{Incorrect email error message}
\testImage{"testClient/invalidRegistration".png}
\testTable{Registration with non-matching password confirmation}{Non-matching passwords error message}{Non-matching passwords error message}
\testImage{"testClient/nonmatchingPassword".png}
\TestTable{User is at the login page}{The user wants to register inside Travlendar, so he taps on the REGISTER button}{The user inputs a valid email and a password for the new account; a view appears in order to confirm the inserted password}{As expected: registration done message}
\testImage{"testClient/correctRegistration".png}
\subsection{Goal 2}
\textbf{The system should be able to handle a login phase}
\testTable{Login with wrong user credentials}{Wrong username or password error message}{Wrong username or password error message}
\testImage{"testClient/invalidLogin".png}
\subsection{Goal 4}
\textbf{The system should allow the user to insert an appointment according to his necessities and his preferences}
\TestTable{The appointment list is blank}{The user wants to insert a new appointment, say a Software Engineering II lesson dated 15-01-2018, located in Via Camillo Golgi 42 and which lasts 2 hours. So, from the appointments section, the user adds a new appointment with these characteristics.}
{The appointment should be created and thus appear in the appointments list.}{As expected}
\testImage{testClient/appointment_creation.png}
\TestTable{The appointment list is not empty}{The user wants to see the characteristics of a previously created appointment, say the Software Engineering II lesson created before, so the user taps the appointment on the appointments list}{A view should appear, showing the characteristics of the selected appointment}{As expected}
\testImage{testClient/appointment_details.png}
\subsection{Goal 5}
\textbf{The system should provide a way to modify an inserted appointment}\\
\TestTable{The appointment list is not empty}{The user wants to modify the Software Engineering II lesson's details, setting the starting time to 10.30, so he taps the appointment, clicks on the edit button and modifies the starting time}{The appointment should be modified according to the new value of the starting time}{As expected}
\testImage{testClient/appointment_editing.png}
\TestTable{The appointment list is not empty}{The user wants to delete the Software Engineering II appointment, so he long-presses this appointment in the appointment view and decides to delete it}{The appointment should disappear from the view}{As expected}
\testImage{testClient/appointment_deletion.png}
\subsection{Goal 6}
\textbf{The system should provide a way to create a valid schedule of the user appointments when requested and display the scheduling result}\\
\TestTable{
User has no computed schedule in his schedule list.}
{Creation of a new schedule.}
{{\begin{enumerate}
\item a click on the add schedule button is performed and the user is redirected to the schedule creation view.
    \item the user selects the date for which he/she wants to compute his/her schedule.
\item the empty fields are filled by the user.
\item the button for computing the schedule is clicked and the progress bar is shown.
    \item the user is redirected to the schedule list, where the newly computed schedule is added.
\item with one click on the created schedule the schedule results are shown.
\end{enumerate}}}
{The outcome is equal to the expected behaviour.}
\TestImage{test_goal_6/2}{execution of point 2}
\TestImage{test_goal_6/3}{execution of point 5}
\TestImage{test_goal_6/4}{execution of point 6}
\TestTable{User has no computed schedule in his schedule list.}
{Creation of a new schedule with a constraint on one of his appointments}
{Creation of a new schedule that doesn't use the car for its appointments}
{The computed schedule doesn't involve the car}
\TestImage{test_scheduler/2}{computed schedule}
\TestTable{User has no computed schedule in his schedule list.}
{Creation of a new schedule.}
{{\begin{enumerate}
\item a click on the add schedule button is performed and the user is redirected to the schedule creation view.
    \item the user selects the date for which he/she wants to compute his/her schedule.
\item the empty fields are filled by the user.
\item a time-slot constraint for car is set
\item the button for computing the schedule is clicked and the progress bar is shown.
    \item the user is redirected to the schedule list, where the newly computed schedule is added.
\item with one click on the created schedule the schedule results are shown.
\end{enumerate}}}
{The outcome is equal to the expected behaviour: the car is not used during the specified time slot}
\testImage{"test_goal_6/timeSlotConstraint".png}
\testImage{"test_goal_6/timeSlotConstraintResult".png}
\TestTable{The user has three appointments in the same time slot ranging from 3 pm to 6 pm}
{The user wants to schedule his appointments, starting from 3 pm in Via Privata Giovanni Ventura, Lambrate, optimizing the cost of the schedule}
{It is expected that the scheduler will prefer travelling by public travel means and on foot, providing directions to reach the appointments and setting an actual starting time for each of them}
{The actual behaviour is equal to the expected one. In particular, notice that the scheduler determines by itself the order in which the appointments should be visited, taking into account the optimization criteria}
\testImage{test_goal_6/schedule_time_slot.png}
\subsection{Goal 7}
\textbf{The system should let the user create valid multiple schedules and decide which one is chosen for the current day}\\
\TestTable{Schedule list is not empty}{The user wants to run a schedule, so he taps on it and runs it}{The schedule is run}{As expected}
\testImage{testClient/schedule_running.png}
\subsection{Goal 8}
\textbf{The system should be able to book the travel means involved in the current schedule under user approval}
\TestTable{A previously computed schedule involves some public travel means}{The user runs that schedule}{The application asks the user if he wants to buy the tickets for that schedule}{As expected}
\testImage{testClient/tickets.png}
\subsection{Goal 9}
\textbf{The system should be able to display in real time the user's position and the directions to be followed in order to arrive at the next appointment on a dynamically updated map}\\
\TestTable{Schedule list is not empty}{The user wants to see the directions to follow for a schedule, so he runs it}{The schedule directions are shown on the main page}{As expected}
\testImage{testClient/directions.png}
\subsection{Hyperbolic discounting}
\chapter{Security\label{security}}
Considerations on using mod_python in a secure manner can be found
in the \citetitle[http://wiki.apache.org/mod_python]{mod_python wiki}
at \citetitle[http://wiki.apache.org/mod_python/CategorySecurity]{CategorySecurity}.
% !TEX program = xelatex
\documentclass{resume}
%\usepackage{zh_CN-Adobefonts_external} % Simplified Chinese Support using external fonts (./fonts/zh_CN-Adobe/)
%\usepackage{zh_CN-Adobefonts_internal} % Simplified Chinese Support using system fonts
\begin{document}
\pagenumbering{gobble} % suppress displaying page number
\name{Yu Zi}
\basicInfo{
(+1) 412-482-0674 $\bullet$
[email protected] $\bullet$
  2630 Mt Royal Road, Pittsburgh, PA 15217
}
\section{Education}
\datedsubsection{\textbf{Carnegie Mellon University }\textit{Master of Information System Management (BIDA)}}{Aug. 2021 -- Aug. 2022}
\begin{itemize}[parsep=0.5ex]
\item \textbf{Average GPA:} \textit{\textbf{3.92} / 4}
\end{itemize}
\datedsubsection{\textbf{The University of Auckland }\textit{Bachelor of Software Engineering (Honours)}}{Jul. 2014 -- May. 2018}
\begin{itemize}[parsep=0.5ex]
\item \textbf{Average GPA:} \textit{\textbf{7.75} / 9 (First Class, \textbf{3.82} / 4)}
\end{itemize}
\section{Skills}
\begin{itemize}[parsep=0.5ex]
\item \textbf{Programming Languages:} \textbf{Python}, \textbf{Java}, JavaScript, Matlab, R, C
\item \textbf{Other Knowledge:} Spark, MySQL, HBase, Kubernetes, Docker, Kafka, Samza, MongoDB
\item \textbf{Interests:} Software Engineering, Cloud Computing, Machine Learning
\end{itemize}
\section{Work Experience}
\datedsubsection{\textbf{Kakapo Technologies Limited., New Zealand } \textit{(Full-time Developer)}}{Apr. 2018 - May. 2021}
\begin{flushleft}
Develop and maintain a website for National Australia Bank to reconcile trade information across disparate systems.\linebreak It regularly extracts data from systems and identifies trades that should be the same, locating any differences
\begin{itemize}
\item Multiple Python services that run on AWS to monitor the arrival of data from different systems and \linebreak coordinate the graph of thousands of reconciliation actions each day
\item Parsers and Builders that handle the processing and analyzing of data in hundreds of different formats
\end{itemize}
\end{flushleft}
%\datedsubsection{\textbf{SJGTW Electrical Commercial Company, China } \textit{(Intern)}}{Jun. 2017 - Jul. 2017}
%\begin{flushleft}
%Built a tool that automatically classifies building materials given the name and model (with Keras)
%\begin{itemize}
% \item Used Neural Network to predicate the class of material, achieving a correctness rate of over 80\%
%\end{itemize}
%\end{flushleft}
% Reference Test
%\datedsubsection{\textbf{Paper Title\cite{zaharia2012resilient}}}{May. 2015}
%An xxx optimized for xxx\cite{verma2015large}
%\begin{itemize}
% \item main contribution
%\end{itemize}
\section{Projects}
\begin{flushleft}
\datedsubsection{\textbf{Cloud-Based Twitter User Recommendation System}}{\textit{Cloud Computing Project (2021)}}
A cloud-based web service that recommends close friends with similar interests for a given input user.
\begin{itemize}
	\item Used Spark to process an over-1TB Twitter dataset, cleaning it and pre-calculating interaction \& common hashtag \linebreak \& keyword scores between contact users (users who have retweeted or replied to each other)
	\item Stored the processed dataset in MariaDB on Amazon Relational Database Service
	\item Deployed a web service responding to user requests on AWS EKS, reaching a throughput of 16,000 requests/sec\linebreak within a \$1.2/hour budget.
\end{itemize}
\datedsubsection{\textbf{Real-Time Cabs Matching System}}{\textit{Cloud Computing Project (2021)}}
Used Kafka and Samza stream processing on EMR to match customers with cabs based on distances, preferences, etc.
\datedsubsection{\textbf{Cloud-based Anomaly Route Detection System}}{\textit{Part IV Project Research (2017)}}
Worked under the supervision of \textit{Dr. Xuyun Zhang} to build a mobile application which can tell users whether \linebreak the taxi driver is taking an anomalous route
\begin{itemize}
	\item A mobile application to collect real-time trajectory data and send it to a back-end server for analysis
	\item Scripts to pre-process the initial dataset using Hadoop MapReduce
	\item A cloud-based Java backend to store and normalize route data, train a prediction model periodically, \linebreak and evaluate the current route with the trained model, achieving a correctness rate of over 90\%
\end{itemize}
\datedsubsection{\textbf{Improvements to a Git client based on NodeJS}}{\textit{Summer Research Intern (2016)}}
Made improvements to a semi-finished Git client, adding an abstract graphical view of the commit history and an interactive GUI that enables users to manipulate commits by dragging nodes of the commit tree
\end{flushleft}
\section{Honors}
\datedline{\textbf{\textit{Dean's Honours List}} of Faculty of Engineering, University of Auckland}{2017}
\datedline{\textbf{\textit{\nth{11}}} in \textbf{\textit{ACM-ICPC}} Programming Contest \textbf{South Pacific Regional Final}}{2016}
\datedline{\textbf{\textit{\nth{2} Place}}, in \textbf{\textit{New Zealand Programming Contest}} (Tertiary Open Category)}{2016}
\datedline{\textbf{\textit{\nth{87}}} in \textbf{\textit{IEEEXtreme}} Programming (out of 1823 teams, Top 5\%)}{2016}
\datedline{\textbf{\textit{\nth{3} Place}}, in \textbf{\textit{New Zealand Programming Contest}} (Tertiary Intermediate Category)}{2015}
\end{document}
\documentclass[]{article}
% Get better typography
\usepackage[protrusion=true,expansion=true]{microtype}
% For algorithms
\usepackage[boxruled,linesnumbered,vlined,inoutnumbered]{algorithm2e}
\SetKwInOut{Parameter}{Parameters}
% For basic math, align, fonts, etc.
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{mathrsfs}
\usepackage{enumitem}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
% For color
\usepackage{xcolor}
\definecolor{light-grey}{rgb}{0.9,0.9,0.9}
\definecolor{dark-red}{rgb}{0.4,0.15,0.15}
\definecolor{dark-blue}{rgb}{0,0,0.7}
% For links (e.g., clicking a reference takes you to the phy)
\usepackage{hyperref}
\hypersetup{
colorlinks, linkcolor={dark-blue},
citecolor={dark-blue}, urlcolor={dark-blue}
}
%-------------------------
% BEGIN DOCUMENT / TITLE
%-------------------------
\begin{document}
\begin{center}
\begin{Large}
CMPSCI 687 Homework 4
\end{Large}
\\
Due November 14, 2019, 11:55pm Eastern Time
\end{center}
\addcontentsline{toc}{subsection}{\textbf{Homework 4}}
\noindent {\bf Instructions: } Collaboration is not allowed on any part of this assignment. Submissions must be typed (hand written and scanned submissions will not be accepted). You must use \LaTeX. The assignment should be submitted as five documents: a .pdf with your written answers, two .hpp files, and two .cpp files as described in the programming portion.
\\\\
\section*{Programming (75 Points Total)}
In this assignment, you will implement Sarsa and $Q$-learning, and will apply them to a gridworld (not the one from the course notes), mountain car, acrobot, and cart-pole. Begin with the source code provided \href{https://people.cs.umass.edu/~pthomas/courses/CMPSCI_687_Fall2019/HW4Source.zip}{here} (see the previous assignments for instructions regarding opening the project). Look at main.cpp, starting with the function main. Look through how this code functions: it applies Sarsa and Q-learning to various MDPs in sequence. Hyperparameters (not good ones!) are specified for each environment in main.cpp.\footnote{For this assignment, you may view the iOrder and dOrder hyperparameters as both being the order of the Fourier basis, and you may always set them to the same value.} The code for Sarsa should be in Sarsa.hpp (a header file, that defines the Sarsa class) and Sarsa.cpp (the source file, that includes the actual code for all of the functions that a Sarsa object requires). Similarly the code for Q-learning is split across QLearning.hpp and QLearning.cpp. You should fill code into Sarsa.hpp, Sarsa.cpp, QLearning.hpp, and QLearning.cpp, and these four files are the four that you should submit with your assignment.
To be clear, your edits should be: 1) changing the hyperparameters specified in main.cpp (you do not have to submit these, but will report hyper-parameter values in your write-up), 2) adding code to the train function in QLearning.cpp (you may change QLearning.hpp and other functions in QLearning.cpp, but this is not necessary), 3) adding code to Sarsa.hpp (private member variables) and Sarsa.cpp (likely all of the functions except for ``getAction'' will have code that you add).
After reading through main.cpp to see what it does, look through Gridworld.hpp and Gridworld.cpp. Gridworld.hpp and Gridworld.cpp have been commented more heavily than the files for the other MDPs. These provide an example of a class in C++. The .hpp file contains the definition of the Gridworld object, and the .cpp file implements the functions that it requires. This also shows how the environments all work. Notice, for example, that the getState() function normalizes the state for you -- it returns the state as a vector, each element of which is in the interval $[0,1]$. Notice also that this code is set up to work well with linear function approximation, as the state is a vector of floating point numbers (not yet features!) that can be provided to the FourierBasis class to convert to features.
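For intuition, a Fourier basis maps the normalized state vector $s \in [0,1]^k$ to features of the form $\phi_i(s) = \cos(\pi\, c_i \cdot s)$ for integer coefficient vectors $c_i$. The sketch below only illustrates that idea -- it is not the provided FourierBasis class, whose interface, coefficient ordering, and handling of iOrder/dOrder may differ:
\begin{verbatim}
// Illustrative sketch of Fourier-basis features for a normalized state
// s in [0,1]^k. NOT the provided FourierBasis class.
#include <cmath>
#include <vector>

std::vector<double> fourierFeatures(const std::vector<double>& s, int order) {
    const double pi = 3.14159265358979323846;
    int k = (int)s.size();
    int numFeatures = 1;
    for (int i = 0; i < k; i++) numFeatures *= (order + 1); // (order+1)^k terms

    std::vector<double> phi(numFeatures);
    for (int idx = 0; idx < numFeatures; idx++) {
        int rem = idx;            // decode idx into a coefficient vector c,
        double dot = 0.0;         // written in base (order + 1)
        for (int j = 0; j < k; j++) {
            int c = rem % (order + 1);
            rem /= (order + 1);
            dot += c * s[j];
        }
        phi[idx] = std::cos(pi * dot);
    }
    return phi;
}
\end{verbatim}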
Now that you've read through main.cpp and Gridworld.cpp, look at QLearning.hpp and QLearning.cpp. QLearning.hpp and QLearning.cpp have been commented more heavily than the files for Sarsa. Most of this class is implemented for you. The ``train'' function has not been implemented fully -- you must fill this in. Notice some useful functions have been provided in MathUtils.hpp, like ``dot''. Also, note that this time we are not using the Eigen library. Don't be afraid to use for loops though, as these are very efficient in C++. The computational bottleneck in this code is usually computing the cosines in the FourierBasis object. This is why we compute and store the features for state $s'$ in an $(s,a,r,s')$ tuple, so that we can re-use them at the next iteration for state $s$. We could be even more efficient by not recomputing features whenever the agent is asked for an action (right now, QLearning will compute features for state $s$ twice, once in the train function and once in the getAction function). For this assignment, this inefficiency is ok.
Once you have implemented the train function in QLearning.cpp, try setting the hyperparameters in main.cpp to get results similar to those in the provided file ``plots.xlsx''. If, after running your code, you copy the contents of the other .csv files over the entries in plots.xlsx, it should update to show the plots we want. You are welcome to use your own system (e.g., write your own Python code) to make plots from the output .xlsx files. Hint: For both Q-Learning and Sarsa, set $q(s',a')=0$ when computing the TD-error if $s'$ is a terminal state, since we know this action-value and therefore do not need to approximate it.
Next, look at Sarsa.hpp and Sarsa.cpp. These are left more empty for you to fill in. Importantly, we're making this harder for you than just putting in the pseudocode. Look back at Section 3.1. The pseudocode there works well for Q-learning, but not as well for Sarsa, since Sarsa requires the action $a'$ to update. Notice that main.cpp implements the pseudocode from Section 3.1. So, you must write Sarsa in a way that works with this setup. Hint: you will want the agent to have some memory, perhaps remembering which states, actions, and/or rewards it saw previously.
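To make the update you need to implement concrete, here is a rough sketch of a Q-learning train step with linear function approximation. The names (\texttt{w}, \texttt{phi}, \texttt{alpha}, \texttt{gamma}) are placeholders and the signature is not the one in QLearning.hpp, but the sketch follows the terminal-state hint above:
\begin{verbatim}
// Sketch of one Q-learning update for a transition (s, a, r, s').
// Placeholder names only; the provided QLearning class interface differs.
#include <algorithm>
#include <vector>

static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (size_t i = 0; i < a.size(); i++) sum += a[i] * b[i];
    return sum;
}

// w[a] is the weight vector for action a; phi/phiPrime are features of s/s'.
void qLearningUpdate(std::vector<std::vector<double>>& w,
                     const std::vector<double>& phi, int action, double reward,
                     const std::vector<double>& phiPrime, bool sPrimeIsTerminal,
                     double alpha, double gamma) {
    double maxQNext = 0.0;                    // q(s', .) = 0 at terminal states
    if (!sPrimeIsTerminal) {
        maxQNext = dot(w[0], phiPrime);
        for (size_t a = 1; a < w.size(); a++)
            maxQNext = std::max(maxQNext, dot(w[a], phiPrime));
    }
    double delta = reward + gamma * maxQNext - dot(w[action], phi); // TD error
    for (size_t i = 0; i < phi.size(); i++)
        w[action][i] += alpha * delta * phi[i];
}
\end{verbatim}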
Point allocations for this assignment will be determined at the time of grading, based on which mistakes and issues are common.
\begin{enumerate}
\item Describe the process of implementing Q-Learning. Did everything work immediately? Did you have bugs that you had to fix? What were they?
\\\\
\textcolor{blue}{
To implement Q-learning, I had to implement the TD update by calculating the TDError in the train method using the following formulation:\\
$\delta = r + \gamma*\max_{a' \in \mathcal{A}}(q(s',a')) - q(s,a)$\\
$\max_{a' \in \mathcal{A}}(q(s',a')) = maxQ(\phi(s'))$ \\
$q(s,a) = w[a] \cdot \phi(s)$\\
I then updated the weight for action $a$ as follows:\\
$w[a][i] += \alpha*\delta*\phi[i]$ $\forall i $\\
I was able to get everything working immediately and there were no bugs in the code.
}
\item Describe the process of implementing Sarsa. Did everything work immediately? Did you have bugs that you had to fix? What were they?
\\\\
\textcolor{blue}{
Implementation of Sarsa was more nuanced. The TD error requires both $a$ and $a'$, but sampling $a'$ inside the train method led to a mismatch between the action used during the update and the action used to interact with the environment in the runExperiment method, which would sample the action again. I kept variables to track the previous action ($prevAction$), the previous reward ($prevR$) and the previous value of $\phi(s)$ ($prevPhi$), and updated the weights for that action in the subsequent call to train. I had to skip this update in the very first call to train. I also had to handle the case of $s'$ being terminal separately in order to additionally update the weights for action $a$ (a rough sketch of this bookkeeping appears after this list of questions). Updates were made using the following formulation:\\
$\delta = prevR + \gamma*q(s,a) - q(prevState,prevAction)$\\
$q(s,a) = w[a] \cdot \phi(s) = w[a] \cdot \phi $\\
$q(prevState,prevAction) = w[prevAction] \cdot prevPhi$\\
I then updated the weight for action $prevAction$ as follows:\\
$w[prevAction][i] += \alpha*\delta*prevPhi[i]$ $\forall i $\\
Additionally, when $s'$ was terminal, the following update would also be made for action $a$, using $\delta = r - q(s,a)$ since $q(s',a')$ is taken to be zero at terminal states:\\
$w[a][i] += \alpha*\delta*\phi[i]$ $\forall i $\\
I was able to get everything working immediately and there were no bugs in the code.
}
\item Describe the process of optimizing the hyperparameters for Q-Learning for MountainCar, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.005,
\gamma = 0.99,
\epsilon = 0.1,
iOrder = 3,
dOrder = 2
$\\
I started with $dOrder$ of 2 as I had used a $2^{nd}$ order Fourier basis for the BBO implementations in the earlier homework. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and values of $\alpha$ in the range of $1$ to $0.001$. I started with small values for both and then gradually increased them. Increasing $\epsilon$ improved the learning, but increasing $\alpha$ seemed to cause divergence where the algorithm would suddenly stop learning and returns would go down. I then kept the value of $\alpha$ small and increased $\epsilon$ till the algorithm converged. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well.
}
\item Describe the process of optimizing the hyperparameters for Q-Learning for CartPole, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.01,
\gamma = 0.9,
\epsilon = 0.05,
iOrder = 3,
dOrder = 2
$\\
I started with $iOrder$ of 2 as I had used a $2^{nd}$ order Fourier basis for the BBO implementations in the earlier homework. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and values of $\alpha$ in the range of $1$ to $0.001$. I started with small values for both and then gradually increased $\epsilon$ till the algorithm converged without having to increase $\alpha$. Increasing $\epsilon$ also increased the standard deviation in the result. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.9$ worked well.
}
\item Describe the process of optimizing the hyperparameters for Q-Learning for Acrobot, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.001,
\gamma = 0.99,
\epsilon = 0.1,
iOrder = 4,
dOrder = 3
$\\
Acrobot required a substantially higher order representation. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and found the lower values worked better. I searched for values of $\alpha$ in the range of $1$ to $0.001$. I started with a high value of $\alpha$ and a low value of $\epsilon$ but the algorithm did not converge. I then started reducing $\alpha$ and increasing $\epsilon$ to add more randomness to the process. Increasing $\epsilon$ increased the standard deviation in the result. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well.
}
\item Describe the process of optimizing the hyperparameters for Q-Learning for Gridworld, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.05,
\gamma = 0.99,
\epsilon = 0.001,
iOrder = 1,
dOrder = 0
$\\
As suggested, I did not change iOrder and dOrder to keep a tabular representation. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and found that lower values worked better. I searched for values of $\alpha$ in the range of $1$ to $0.001$; very high values resulted in some quick high returns that then fell off and did not converge, whereas very low values did not seem to learn fast enough and stayed at low returns. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well.
}
\item Describe the process of optimizing the hyperparameters for Sarsa for MountainCar, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.005,
\gamma = 0.99,
\epsilon = 0.1,
iOrder = 3,
dOrder = 2
$\\
I started with $iOrder$ of 2 as I had used a $2^{nd}$ order Fourier basis for the BBO implementations in the earlier homework. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and values of $\alpha$ in the range of $1$ to $0.001$. I started with small values for both and then gradually increased them. Increasing $\epsilon$ improved the learning, but increasing $\alpha$ seemed to cause divergence where the algorithm would suddenly stop learning and returns would go down. I then kept the value of $\alpha$ small and increased $\epsilon$ till the algorithm converged. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well. Surprisingly, the same values of hyperparameters that worked for Q-learning also worked for Sarsa.
}
\item Describe the process of optimizing the hyperparameters for Sarsa for CartPole, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.01,
\gamma = 0.99,
\epsilon = 0.1,
iOrder = 3,
dOrder = 2
$\\
I started with $dOrder$ of 2 as I had used a $2^{nd}$ order Fourier basis for the BBO implementations in the earlier homework. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and values of $\alpha$ in the range of $1$ to $0.001$. I started with small values for both and then gradually increased $\epsilon$ till the algorithm converged without having to increase $\alpha$. Increasing $\epsilon$ also increased the standard deviation in the result. Sarsa required a higher randomness factor than Q-learning. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well.
}
\item Describe the process of optimizing the hyperparameters for Sarsa for Acrobot, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.001,
\gamma = 0.99,
\epsilon = 0.01,
iOrder = 4,
dOrder = 3
$\\
Acrobot required a substantially higher order representation. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and values of $\alpha$ in the range of $1$ to $0.001$. I started with a high value of $\alpha$ and a low value of $\epsilon$ but the algorithm did not converge. I then started reducing $\alpha$ and increasing $\epsilon$ to add more randomness to the process. Sarsa was able to converge with a smaller value of $\epsilon$ than Q-learning. Increasing $\epsilon$ increased the standard deviation in the result. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well.
}
\item Describe the process of optimizing the hyperparameters for Sarsa for Gridworld, and report the final hyperparameters that you found.
\\\\
\textcolor{blue}{
The final hyperparameters found were :\\
$
\alpha = 0.05,
\gamma = 0.99,
\epsilon = 0.001,
iOrder = 1,
dOrder = 0
$\\
As suggested, I did not change iOrder and dOrder to keep a tabular representation. I searched for values of $\epsilon$ in the range of $0.1$ to $0.001$ and found that lower values worked better. I searched for values of $\alpha$ in the range of $1$ to $0.001$; very high values resulted in some quick high returns that then fell off and did not converge, whereas very low values did not seem to learn fast enough and stayed at low returns. I tried a few values of $\gamma$ ranging from $0.9$ to $0.999$ and found that $0.99$ worked well. Surprisingly, the same hyperparameters that worked for Q-learning also worked for Sarsa.
}
\item Provide four plots, one per environment, showing the learning curves for Sarsa and Q-learning with the best hyperparameters that you found. Keep the number of trials, number of episodes, and maxEpisodeLength terms from the provided main.cpp. Include error bars showing one standard deviation. These plots can be created using any plotting software of your choice.
\begin{figure}[h!]
\hspace*{0.1\textwidth}\includegraphics[width=0.8\textwidth]{MountainCar.png}
\label{fig: Learning Curve - Mountain Car}
\end{figure}
\begin{figure}
\hspace*{0.1\textwidth}\includegraphics[width=0.8\textwidth, ]{CartPole.png}
\label{fig: Learning Curve - CartPole}
\end{figure}
\begin{figure}
\hspace*{0.1\textwidth}\includegraphics[width=0.8\textwidth]{Acrobot.png}
\label{fig: Learning Curve - Acrobot}
\end{figure}
\begin{figure}
\hspace*{0.1\textwidth}\includegraphics[width=0.8\textwidth]{Gridworld.png}
\label{fig: Learning Curve - Gridworld}
\end{figure}
\pagebreak
\item Compare your experiences with Q-Learning and Sarsa to your experiences with BBO. Which did you find easier to get working? Which algorithms learned fastest on Cart-Pole (in HW2, you implemented BBO algorithms for Cart-Pole)?
\\\\
\textcolor{blue}{
Answer:\\
Q-learning and Sarsa were much easier to train and converged remarkably faster than the BBO algorithms. The BBO algorithms took several thousand episodes to converge, while Sarsa takes only about 30 episodes and Q-learning about 40 to converge on the Cart-Pole domain. Sarsa and Q-learning learned the fastest, with Q-learning converging slightly slower than Sarsa but to a better policy. Sarsa seemed more sensitive to the hyperparameters than Q-learning.
}
\item Be sure to submit your QLearning.hpp (even if it is unchanged, as recommended), QLearning.cpp, Sarsa.hpp, and Sarsa.cpp files with your write-up.
\textcolor{blue}{Submitted}
\end{enumerate}
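For reference, a rough sketch of the delayed Sarsa bookkeeping described in the answer to question 2 is shown below. The member names are placeholders rather than the interface of the provided Sarsa class:
\begin{verbatim}
// Sketch of Sarsa with a one-step delay so that a' is the action actually taken.
#include <vector>

struct SarsaSketch {
    std::vector<std::vector<double>> w;   // one weight vector per action
    std::vector<double> prevPhi;          // phi of the previous state
    int prevAction = -1;                  // -1 means "no previous step yet"
    double prevR = 0.0;
    double alpha = 0.001, gamma = 1.0;    // illustrative defaults

    static double dot(const std::vector<double>& a, const std::vector<double>& b) {
        double s = 0.0;
        for (size_t i = 0; i < a.size(); i++) s += a[i] * b[i];
        return s;
    }

    // phi: features of the current state, action: action taken in it,
    // reward: reward just received, sPrimeIsTerminal: is the next state terminal?
    void train(const std::vector<double>& phi, int action,
               double reward, bool sPrimeIsTerminal) {
        if (prevAction >= 0) {            // delayed update, skipped on first call
            double delta = prevR + gamma * dot(w[action], phi)
                                 - dot(w[prevAction], prevPhi);
            for (size_t i = 0; i < prevPhi.size(); i++)
                w[prevAction][i] += alpha * delta * prevPhi[i];
        }
        if (sPrimeIsTerminal) {           // final update of the episode: q(s',.) = 0
            double delta = reward - dot(w[action], phi);
            for (size_t i = 0; i < phi.size(); i++)
                w[action][i] += alpha * delta * phi[i];
            prevAction = -1;              // reset for the next episode
        } else {
            prevPhi = phi; prevAction = action; prevR = reward;
        }
    }
};
\end{verbatim}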
Note: This code is written to be relatively simple, as many of you are new to C++. This does not represent best coding practices (e.g., we could make use of subclasses).
\end{document}
%\section{State of the art}
%\label{sec12}
%The rapid development in the field of artificial intelligence (AI) in the recent years occurred due to the rapid growth in computational powers that boosted deep learning approach to evolve.
%Deep learning or in other words (artificial neural network ANN) is a very promising technique due to their numerous applications such as machine translation, speech recognition, computer vision, among others.
%We can say that the breakthroughs in computer vision began after 2012 when Alex Krizhevsky, et al. implemented a convolutional neural network (CNN) called \say{AlexNet}~\cite{Krizhevsky2012} which won the competition of 2012 ImageNet challenge.
%AlexNet achieved state-of-the-art recognition accuracy against all traditional machine learning and computer vision techniques.
%Therefore, AlexNet is considered a significant breakthrough in the field of machine learning and computer vision as image object detection, segmentation, video classification, object tracking, among other tasks.
%The following years witnessed several computer vision architectures with larger and deeper designs, such as VGGNet~\cite{Simonyan2015}, GoogLeNet~\cite{szegedy2015going} and ResNet~\cite{he2016deep}.
%As a result of this advancement in deep learning techniques, the fields of NDT/SHM began to utilise ANN techniques in their approaches to damage detection and localisation, because these techniques have the potential to overcome the issues of conventional damage detection and localisation by avoiding the complex process of handcrafted feature extraction and providing an automatic feature extraction solution, and because of their capacity to adapt to big data (e.g. acquired measurements from the investigated
%structures).
%That means the performance of ANN increases with a neural network (NN) size as well as the size of data used for supervised learning. Damage detection and localisation in composite materials by utilising elastic wave propagation have been investigated by several authors who have adapted ANN techniques in their models.
%However, all the literature in this field were conducted on truss structures and signal data acquired by accelerometers.
%In this project, we take a further step regarding investigating the wavefield propagation signals since it is more sensitive to local damage.
%For this purpose, a large dataset of full wavefield of propagating elastic waves was generated to simulate the experimentally generated data which resembles measurements acquired by SLDV.
%!TEX root = ../main.tex
\subsection{Pecking Order}
\objective{Build and decompose a composition of functions and their derivatives.}
In elementary school, we learned that a mathematics sentence has a
certain order of operations: some things must happen before others.
If we want to add before we multiply, we must notate parentheses,
because normally multiplication is repeated addition, something of a higher
order than simple addition. These parentheses lead to a phenomenon
in mathematics that is difficult and unnatural, compared to all other
languages: inside to outside reading.
For example, in the function $f(x)=\frac{2(x^2+1)}{3}$, you know to follow a
sequence of operations on any given input:
\begin{itemize}
\item square it
\item add 1
\item multiply by two
\item divide by three
\end{itemize}
Natural language does something like this, but it is quite confusing to read
(try saying it aloud to yourself): I saw the child the man my girl hates fathered.
It might be better to make this entirely left-branching: I saw the child who
the man fathered who my girl hates!
Our example utilizes only elementary functions: squaring, adding, multiplying,
and dividing. If we want to use arbitrary functions, we must write
$f(g(h(x)))$ or $(f\circ g\circ h)(x)$.
\subsection{Domain and Range}
Taking a function as a composition of two other (e.g. $f(x)=g(h(x))$), we observe
a ``chain'' of inputs to outputs. This might be compared to the game of telephone,
where your ear receives a message from the person before you, garbles it in your
brain, and then outputs a transformed version from your mouth, to the next person's
ear. In math, there is a domain for $h(x)$, a set of numbers it can take for input.
These numbers are mapped onto a range of outputs, which are then fed into
$g(x)$. Now $g(x)$ has its own domain, and the output from $h$ is a subset of that
absolute domain. Having received some input, $h(x)$, from a portion of its domain,
$g$ now outputs a unique range.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.8,
>=stealth,
bullet/.style={
fill=black,
circle,
inner sep=1pt
},
projection/.style={
->,
thick,
shorten <=2pt,
shorten >=2pt
},
]
\draw (0, 0) circle [x radius=2, y radius=3];
\node [bullet, label=below:\(x\)] (x) at (-1, -0.5) {};
\node[font=\large] (X) at (0, 4) {\(X\)};
\begin{scope}[xshift=4cm]
\draw (0, 0) circle [x radius=1, y radius=3.5]; \node [bullet,
label=above:\(h(x)\)] (fx) at (0.3, 2) {};
\node[font=\large] (Y) at (0, 4) {\(Y\)};
\end{scope}
\begin{scope}[xshift=8cm]
\draw (0, 0) circle [x radius=2, y radius=1.5]; \node [bullet,
label=below:\(g(h(x))\)] (gfx) at (-0.5, -0.1) {};
\node[font=\large] (Z) at (0, 4) {\(Z\)};
\end{scope}
\draw [projection] (x) -- (fx);
\draw [projection] (fx) -- (gfx);
\draw [projection] (X) -- (Y)
node [pos=0.5, above] {\(h\)};
\draw [projection] (Y) -- (Z)
node [pos=0.5, above] {\(g\)};
\draw [out=45, in=180-45, projection, line width=1.5pt, red!80!black]
(X) .. controls ++(1, 1) and ++(-1, 1) .. (Z)
node [pos=0.5, above] {\(f = g \circ h\)};
\end{tikzpicture}
\caption{Composition of functions \cite{sxtikzcomp}.}
\end{center}
\end{figure}
\subsection{Decomposition}
Any function with more than one operation can be written as the composition of
other functions. For example, if $f(x)=x^2+1$ then we could construct
functions $g(x)=x^2$ and $h(x)=x+1$ such that $f(x)=h(g(x))$ ($g(h(x))$ would be
$(x+1)^2$).
\subsection{The Chain Rule}\index{derivative!chain rule}
The algebra of functions continues, even into the realm of derivatives. Two functions
added would have a simple derivative: the sum of their respective derivatives. But what
of two functions composed? What is the derivative of $g(h(x))$?
Here, Leibniz's notation is more helpful:
$$f(x)=g(h(x)), \quad \frac{df}{dx}=\frac{dg}{dh}\cdot\frac{dh}{dx}$$
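For example, take the composition $g(h(x))=(x+1)^2$ from the previous subsection, where $g(x)=x^2$ and $h(x)=x+1$. Then $\frac{dg}{dh}=2h(x)=2(x+1)$ and $\frac{dh}{dx}=1$, so
$$\frac{d}{dx}(x+1)^2 = 2(x+1)\cdot 1 = 2x+2,$$
which agrees with expanding $(x+1)^2=x^2+2x+1$ and differentiating term by term.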
\paragraph{Trigonometric Derivatives}\index{derivative!of sine}
While you are not responsible for proving and understanding trigonometric functions
until Part~\ref{pt:trig}, it is helpful to being memorizing facts about them now. This will allow
you to practice manipulating the functions like any other. However, to do so, you will
need to know the derivative of sine and cosine.\index{derivative!of cosine}
\begin{itemize}
\item $f(x) = \sin(x)$
\item $f'(x) = \cos(x)$
\item $f''(x) = -\sin(x)$
\item $f'''(x) = -\cos(x)$
\item $f^{(4)}(x) = \sin(x)$
\end{itemize}
As you can see, a straightforward, four-step cycle emerges. The derivative of sine is
cosine. The derivative of cosine is negative sine.
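Combined with the chain rule from the previous subsection, these facts already let us differentiate compositions such as
$$\frac{d}{dx}\sin\left(x^2\right) = \cos\left(x^2\right)\cdot 2x,$$
where the outer function (sine) contributes the $\cos(x^2)$ and the inner function ($x^2$) contributes the $2x$.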
\clearpage
\subsection{Type} % (fold)
\label{sub:type}
All values within a program will have a \textbf{type}. The type indicates how the data stored in the computer's memory is interpreted by the program. There are three basic data types available in a programming language, as shown in \fref{fig:program-creation-type}.
\begin{itemize}
\item \textbf{Textual} data such as `\emph{Fred}', `\emph{Hello World}', `\emph{23}', and `\emph{This is text!}'.
\item \textbf{Whole numbers} such as \emph{1}, \emph{0}, \emph{-5}, and \emph{37}.
\item \textbf{Real numbers} such as \emph{0.5}, \emph{-126.0}, \emph{3.141516}, and \emph{23.981}.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{./topics/program-creation/diagrams/Type}
\caption{Types define how values are interpreted and the operations that can be performed on the data.}
\label{fig:program-creation-type}
\end{figure}
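The following minimal sketch (written here in C++ purely for illustration; the same ideas apply in any programming language) declares one value of each of these kinds of data and shows how the type controls which operations are allowed:

\begin{verbatim}
#include <string>

int main()
{
    std::string name = "Fred";    // textual data
    int count = 23;               // a whole number (an Integer)
    double amount = 3.141516;     // a real number (a Floating Point value)

    count = count + 1;            // mathematic operations work on numbers...
    amount = amount / 2;          // ...of both numeric kinds
    // name = name * 2;           // ...but not on textual data: this would not compile
    return 0;
}
\end{verbatim}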
\mynote{
\begin{itemize}
	\item A type is an \textbf{artefact}; there are a number of existing types that you can use, and later you will see how to create your own types.
\item The concepts related to expressions are shown in Figure \ref{fig:program-creation-type}.
\item A type is a programming artefact that indicates a kind of data.
\item The type determines the basic actions that can be performed on the value.
\item The type determines the amount of memory needed to store a value of that kind.
\item Whole numbers are usually called \textbf{Integers}.
\item Real numbers are usually represented as \textbf{Floating Point} values. These values have a limited precision, supporting only a certain number of digits of precision.
\item Textual values can contain numbers as text characters. For example, the text `\emph{23}' is the character `\emph{2}' followed by the character `\emph{3}' - it is not the number \emph{23}.
\item You can perform mathematic operations on numeric data, but not on textual data.
\end{itemize}
}
% section program (end)
% Created: 2013-09-24
\documentclass[english]{../thermomemo/thermomemo}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{siunitx}
\usepackage{amsfonts,amsmath,amsthm,amssymb}
\usepackage{hyperref}
\usepackage{cleveref}
\usepackage{pgfplots}
\usepackage{tikz}
\usepackage{color}
\usepackage{url}
\usepackage{graphicx}
\usepackage[bf,small,margin=10pt]{caption}
\usepackage[margin=1em]{subcaption}
\usepackage{bm}
\usepackage{booktabs}
\usepackage[numbers]{natbib}
\definecolor{urlblue}{RGB}{70,130,180}
\hypersetup{
colorlinks=true,
linkcolor=black,
urlcolor=urlblue,
citecolor=black,
}
% TikZ and pgfplots
\usetikzlibrary{arrows}
\pgfplotsset{
compat=1.6,
width=0.8\textwidth,
grid=major,
every axis legend/.append style={
at={(0,1.02)},
anchor=south west,
},
every axis plot/.style={
black,
solid,
thick,
},
legend cell align=left,
legend style={column sep=0.5em,draw=white},
xlabel={$T\;[\si{K}]$},
ylabel={$p\;[\si{bar}]$},
}
\newcommand{\myplotNL}[2]{
\addplot[#1] file[skip first] {data/#2.dat};
}
\newcommand{\myplot}[3]{
\addplot[#1] file[skip first] {data/#2.dat};
\addlegendentry{#3}
}
\crefname{equation}{Eq.}{Eqs.}
\crefname{section}{Section}{Sections}
\crefname{chapter}{Chapter}{Chapters}
\crefname{figure}{Figure}{Figures}
\crefname{subfigure}{Figure}{Figures}
\crefname{table}{Table}{Tables}
\title{Documentation of the binary-XY code in thermopack}
\author{Morten Hammer}
\date{\today}
\graphicspath{{gfx/}}
% Fancy differential from Claudio Beccari, TUGboat
% * No need for manual tweak of spaces.
% * Copied from Svend Tollak Munkejord.
\makeatletter
\newcommand*{\dif}{\@ifnextchar^{\DIfF}{\DIfF^{}}}
\def\DIfF^#1{\mathop{\mathrm{\mathstrut d}}\nolimits^{#1}\gobblesp@ce}
\def\gobblesp@ce{\futurelet\diffarg\opsp@ce}
\def\opsp@ce{%
\let\DiffSpace\!%
\ifx\diffarg(%
\let\DiffSpace\relax
\else
\ifx\diffarg[%
\let\DiffSpace\relax
\else
\ifx\diffarg\{%
\let\DiffSpace\relax
\fi\fi\fi\DiffSpace}
\makeatother
% New commands
\newcommand*{\ifrac}[2]{\ensuremath{#1/#2}}
\newcommand*{\volume}{V}
\newcommand*{\area}{A}
\newcommand*{\volumeint}[1]{\ensuremath{\int_{\volume}#1\dif\volume}}
\newcommand*{\areaint}[1]{\ensuremath{\int_{\area}#1\dif\area}}
\newcommand*{\dt}[1]{\ensuremath{\frac{\dif #1}{\dif t}}}
\newcommand*{\dti}[1]{\ifrac{\dif #1}{\dif t}}
\newcommand*{\pdt}[1]{\ensuremath{\frac{\partial #1}{\partial t}}}
\newcommand*{\pdti}[1]{\ifrac{\partial #1}{\partial t}}
\newcommand*{\td}[2]{\frac{\mathrm{d} #1}{\mathrm{d} #2}}
\newcommand*{\tdi}[1]{\mathrm D #1 /\mathrm Dt}
\newcommand*{\od}[2]{\ensuremath{\frac{\dif#1}{\dif{#2}}}}
\newcommand*{\odi}[2]{\ensuremath{{\dif#1}/{\dif{#2}}}}
\newcommand*{\pd}[2]{\ensuremath{\frac{\partial #1}{\partial{#2}}}}
\newcommand*{\pdi}[2]{\ensuremath{{\partial #1}/{\partial{#2}}}}
\newcommand*{\vct}[1]{\ensuremath{\boldsymbol{#1}}}
\newcommand*{\normal}{\ensuremath{\vct n}}
\renewcommand*{\div}{\boldsymbol\nabla\cdot}
\newcommand*{\divs}{\boldsymbol\nabla_{\text s}\cdot}
\newcommand*{\grad}{\boldsymbol\nabla}
\newcommand*{\grads}{\boldsymbol\nabla_{\text s}}
\newcommand*{\lapl}{\boldsymbol\Delta}
\newcommand*{\lapls}{\boldsymbol\Delta_{\text s}}
\newcommand*{\sint}[2]{\ensuremath{\int_{#1}#2\dif #1}}
\newcommand*{\surint}[2]{\ensuremath{\int_{#1}#2\dif S}}
\newcommand*{\chr}{\chi}
\newcommand*{\bigo}[1]{\ensuremath{\mathcal{O}\left(#1\right)}}
\newcommand*{\smallo}[1]{\ensuremath{o\left(#1\right)}}
\newcommand*{\eint}[1]{\ensuremath{\int_{-\infty}^\infty #1\dif z}}
\newcommand*{\zint}[1]{\ensuremath{\int #1\dif z}}
\newcommand*{\ejmp}[1]{\ensuremath{\left[#1\right]_{-\infty}^\infty}}
\newcommand*{\zlimp}{\ensuremath{\lim_{z\to\infty}}}
\newcommand*{\zlimm}{\ensuremath{\lim_{z\to-\infty}}}
\newcommand*{\zlimpm}{\ensuremath{\lim_{z\to\pm\infty}}}
\newcommand*{\ndot}{\ensuremath{\vct n\cdot}}
\newcommand*{\vext}{\ensuremath{V_{\text{ext}}}}
\newcommand*{\txin}{\ensuremath{\textup{in}\ }}
\newcommand*{\txon}{\ensuremath{\textup{on}\ }}
\newcommand{\spec}{\text{spec}}
\newcommand{\coto}{\ensuremath{\text{CO}_{\text{\scriptsize 2}}}}
\newcommand{\no}{\ensuremath{\text{NO}}}
\newcommand{\CO}{CO\ensuremath{_2}}
\newcommand{\N}{N\ensuremath{_2}}
\newcommand{\thermo}{\textsc{Thermopack}}
\newcommand{\TPlib}{\textsc{TPlib}}
\newcommand*{\out}{\ensuremath{\text{out}}}
\newcommand*{\ideal}{\ensuremath{\text{ideal}}}
\newcommand*{\recovery}{\ensuremath{\text{recovery}}}
\newcommand*{\capture}{\ensuremath{\text{capture}}}
\begin{document}
\frontmatter
\tableofcontents
\clearpage
\section{Introduction}
This memo presents the equations required to describe a binary Txy or Pxy plot.
\section{Governing equations}
\label{sec:eqn}
The equation system $\mathbf{F(X)}=\mathbf{0}$, is defined by equation
\eqref{eq:fug_eq} to \eqref{eq:X}.
\begin{align}
f_i & = \ln K_i + \ln \hat{\varphi}_i\left(\mathbf{y}\right)- \ln
\hat{\varphi}_i\left(\mathbf{x}\right) = 0, \quad
i=1,2 \label{eq:fug_eq} \\
f_{i+2} & = y_i - K_i x_i = 0, \quad i=1,2 \label{eq:k_eq} \\
f_{5} & = x_1 + x_2 - 1 = 0 \label{eq:x_eq} \\
f_{6} & = y_1 + y_2 - 1 = 0 \label{eq:y_eq} \\
  f_{7} & = S - S_{\spec} = 0 \label{eq:spec_eq}
\end{align}
\begin{equation}
\label{eq:F}
\mathbf{F} = \begin{pmatrix}
f_1 \\
\vdots \\
f_7
\end{pmatrix}
\end{equation}
The last variable will be $\ln T$ or $\ln P$. For simplicity, the
equations are written out for a binary Txy plot, so the temperature is
constant and the last variable is $\ln P$. The Pxy
problem can be developed in the same manner.
\begin{equation}
\label{eq:X}
\mathbf{X} = \begin{pmatrix}
\ln \mathbf{K} \\
\mathbf{x} \\
\mathbf{y} \\
\ln P
\end{pmatrix}
\end{equation}
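To make the structure of $\mathbf{F}$ concrete, the sketch below assembles the residual for the Txy case with $T$ fixed. It is written in C++ purely as an illustration and is not the actual \thermo{} implementation; \texttt{lnPhi} is a placeholder for the equation-of-state routine returning $\ln \hat{\varphi}_i$.
\begin{verbatim}
// Sketch only: residual F(X) for the Txy case, X = (ln K1, ln K2, x1, x2,
// y1, y2, ln P). lnPhi() is a placeholder for the EoS fugacity routine.
#include <array>
#include <cmath>

double lnPhi(double T, double P, const std::array<double,2>& z, int i);

std::array<double,7> residual(const std::array<double,7>& X,
                              double T, double Sspec, int ispec) {
    std::array<double,2> K = { std::exp(X[0]), std::exp(X[1]) };
    std::array<double,2> x = { X[2], X[3] };
    std::array<double,2> y = { X[4], X[5] };
    double P = std::exp(X[6]);

    std::array<double,7> F;
    for (int i = 0; i < 2; i++) {
        F[i]     = X[i] + lnPhi(T, P, y, i) - lnPhi(T, P, x, i); // fugacity eqs.
        F[i + 2] = y[i] - K[i] * x[i];                           // K-value eqs.
    }
    F[4] = x[0] + x[1] - 1.0;          // sum of liquid mole fractions
    F[5] = y[0] + y[1] - 1.0;          // sum of vapour mole fractions
    F[6] = X[ispec] - Sspec;           // specification equation
    return F;
}
\end{verbatim}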
% \nomenclature[mu]{$\mathbf{u}$}{Variable vector}
% \nomenclature[mb]{$\mathbf{B}$}{Nonconservative flux matrix}
% \nomenclature[mf]{$\mathbf{F}$}{Flux function}
% \nomenclature[ms]{$\mathbf{s}$}{Source function}
% \nomenclature[mw]{$\mathbf{w}$}{Nonconservative flux}
% \nomenclature[ah]{$h$}{Enthalpy}
\subsection{Changing the specification}
Differentiating, we get:
\begin{align}
\td{\mathbf{F(X)}}{\mathbf{X}}\td{\mathbf{X}}{S_{\spec}}
+\td{\mathbf{F(X)}}{S_{\spec}} &= \mathbf{0}\\
\td{\mathbf{F(X)}}{\mathbf{X}}\td{\mathbf{X}}{S_{\spec}}
= \begin{pmatrix} \mathbf{0}\\
1
\end{pmatrix}
\end{align}
We are therefore able to produce new initial guesses when we change
the specification, $S_{\spec}$.
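As an illustrative sketch only (C++ with placeholder names; the actual solver and Jacobian factorisation in the code differ), the sensitivity can be used to extrapolate an initial guess for the next specification value:
\begin{verbatim}
// Sketch only: extrapolate X to a new specification value using dX/dS_spec.
// solveLinearSystem() is a placeholder for an existing factorisation of dF/dX.
#include <array>

using Vec7 = std::array<double,7>;
using Mat7 = std::array<Vec7,7>;

Vec7 solveLinearSystem(const Mat7& J, const Vec7& rhs);  // placeholder

Vec7 extrapolate(const Vec7& X, const Mat7& J, double dS) {
    Vec7 rhs{};
    rhs[6] = 1.0;                          // right-hand side (0, ..., 0, 1)^T
    Vec7 dXdS = solveLinearSystem(J, rhs); // dX/dS_spec
    Vec7 Xnew = X;
    for (int i = 0; i < 7; i++) Xnew[i] += dXdS[i] * dS;
    return Xnew;
}
\end{verbatim}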
\subsection{Termination}
The binaryXY routine needs to terminate when one of the following
situations occurs:
\begin{enumerate}
\item One of the components become zero.
\item Critical point or azeotrope is
reached. ($\mathbf{x}=\mathbf{y}$) \label{enum:crit}
\item User given maximum in pressure or minimum in temperature is reached. \label{enum:temp}
\end{enumerate}
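A minimal sketch of such a termination test for the Txy case is shown below; the tolerance and the pressure limit are illustrative placeholders.
\begin{verbatim}
// Sketch only: termination test for tracing a binary Txy line.
#include <array>
#include <cmath>

bool shouldTerminate(const std::array<double,2>& x, const std::array<double,2>& y,
                     double P, double Pmax, double tol = 1.0e-8) {
    bool componentGone = x[0] < tol || x[1] < tol || y[0] < tol || y[1] < tol;
    bool critOrAzeotrope = std::fabs(x[0] - y[0]) < tol;  // x = y for a binary
    bool limitReached = P > Pmax;                         // user-given limit
    return componentGone || critOrAzeotrope || limitReached;
}
\end{verbatim}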
\section{Results}
The mixture \coto-\no~has been tested with SRK and PR. Both have been
tested with the binary interaction parameter $k_{ij}=0$ and with a
fitted $k_{ij}$ (see the figure captions). The plots have been initialised at the \coto~bubble point.
See Figures \ref{fig:srk_0}, \ref{fig:srk_opt}, \ref{fig:pr_0} and \ref{fig:pr_opt}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{NO_SRK_Txy_k0.pdf}
  \caption{Mole fraction of \no~in a \coto-\no~mixture with SRK and $k_{ij}=0$.}
\label{fig:srk_0}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{NO_SRK_Txy.pdf}
  \caption{Mole fraction of \no~in a \coto-\no~mixture with SRK and $k_{ij}=-0.119$.}
\label{fig:srk_opt}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{NO_PR_Txy_k0.pdf}
  \caption{Mole fraction of \no~in a \coto-\no~mixture with PR and $k_{ij}=0$.}
\label{fig:pr_0}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{NO_PR_Txy.pdf}
  \caption{Mole fraction of \no~in a \coto-\no~mixture with PR and $k_{ij}=-0.105$.}
\label{fig:pr_opt}
\end{figure}
\appendix
\section{Jacobian}
This appendix gives the Jacobian required for a Newton solver of the equation
system in section \ref{sec:eqn}.
\begin{align}
\pd{f_1}{\ln K_1} & = 1 \\
\pd{f_1}{\ln K_2} & = 0 \\
\pd{f_1}{x_i} & = - \pd{\ln
\hat{\varphi}_1\left(\mathbf{x}\right)}{x_i}, \quad
i=1,2\\
\pd{f_1}{y_i} & = \pd{\ln
\hat{\varphi}_1\left(\mathbf{y}\right)}{y_i}, \quad
i=1,2\\
\pd{f_1}{\ln P} & = P\left(\pd{\ln
\hat{\varphi}_1\left(\mathbf{y}\right)}{P} -\pd{\ln
\hat{\varphi}_1\left(\mathbf{x}\right)}{P}\right)
\end{align}
\begin{align}
\pd{f_2}{\ln K_1} & = 0 \\
\pd{f_2}{\ln K_2} & = 1 \\
\pd{f_2}{x_i} & = - \pd{\ln
\hat{\varphi}_2\left(\mathbf{x}\right)}{x_i}, \quad
i=1,2\\
\pd{f_2}{y_i} & = \pd{\ln
\hat{\varphi}_2\left(\mathbf{y}\right)}{y_i}, \quad
i=1,2\\
\pd{f_2}{\ln P} & = P\left(\pd{\ln
\hat{\varphi}_2\left(\mathbf{y}\right)}{P} -\pd{\ln
\hat{\varphi}_2\left(\mathbf{x}\right)}{P}\right)
\end{align}
\begin{align}
\pd{f_3}{\ln K_1} & = -K_1 x_1 \\
\pd{f_3}{\ln K_2} & = 0 \\
\pd{f_3}{x_1} & = - K_1 \\
\pd{f_3}{x_2} & = 0 \\
\pd{f_3}{y_1} & = 1 \\
\pd{f_3}{y_2} & = 0 \\
\pd{f_3}{\ln P} & = 0
\end{align}
\begin{align}
\pd{f_4}{\ln K_1} & = 0 \\
\pd{f_4}{\ln K_2} & = -K_2 x_2 \\
\pd{f_4}{x_1} & = 0\\
\pd{f_4}{x_2} & = - K_2 \\
\pd{f_4}{y_1} & = 0 \\
\pd{f_4}{y_2} & = 1 \\
\pd{f_4}{\ln P} & = 0
\end{align}
\begin{align}
  \pd{f_5}{X_i} & = 0 \quad i=1,2,5,6,7\\
  \pd{f_5}{X_i} & = 1 \quad i=3,4
\end{align}
\begin{align}
  \pd{f_6}{X_i} & = 0 \quad i=1,2,3,4,7\\
  \pd{f_6}{X_i} & = 1 \quad i=5,6
\end{align}
\begin{equation}
\pd{f_7}{X_i} =\begin{cases} 1 & i=i_{\spec}\\
0 & \text{otherwise}
\end{cases}
\end{equation}
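For illustration, the sketch below fills the Jacobian with the entries listed above (again in C++ rather than the actual \thermo{} code); \texttt{dLnPhi\_dx} and \texttt{dLnPhi\_dP} are placeholders for the equation-of-state derivative routines.
\begin{verbatim}
// Sketch only: analytic Jacobian dF/dX, X = (ln K1, ln K2, x1, x2, y1, y2, ln P).
#include <array>
#include <cmath>

using Vec7 = std::array<double,7>;
using Mat7 = std::array<Vec7,7>;

// Placeholders for d ln(phi_i)/d z_j and d ln(phi_i)/d P at fixed T.
double dLnPhi_dx(double T, double P, const std::array<double,2>& z, int i, int j);
double dLnPhi_dP(double T, double P, const std::array<double,2>& z, int i);

Mat7 jacobian(const Vec7& X, double T, int ispec) {
    std::array<double,2> K = { std::exp(X[0]), std::exp(X[1]) };
    std::array<double,2> x = { X[2], X[3] };
    std::array<double,2> y = { X[4], X[5] };
    double P = std::exp(X[6]);

    Mat7 J{};                                 // all entries start at zero
    for (int i = 0; i < 2; i++) {             // rows 1-2: fugacity equations
        J[i][i] = 1.0;
        for (int j = 0; j < 2; j++) {
            J[i][2 + j] = -dLnPhi_dx(T, P, x, i, j);
            J[i][4 + j] =  dLnPhi_dx(T, P, y, i, j);
        }
        J[i][6] = P * (dLnPhi_dP(T, P, y, i) - dLnPhi_dP(T, P, x, i));
    }
    for (int i = 0; i < 2; i++) {             // rows 3-4: y_i - K_i x_i
        J[2 + i][i]     = -K[i] * x[i];
        J[2 + i][2 + i] = -K[i];
        J[2 + i][4 + i] = 1.0;
    }
    J[4][2] = J[4][3] = 1.0;                  // row 5: x_1 + x_2 - 1
    J[5][4] = J[5][5] = 1.0;                  // row 6: y_1 + y_2 - 1
    J[6][ispec] = 1.0;                        // row 7: specification equation
    return J;
}
\end{verbatim}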
\end{document}
\documentclass{article}
\usepackage[utf8]{inputenc}
\title{PS7 Yarberry}
\author{Megan N. Yarberry }
\date{March 10, 2020}
\begin{document}
\maketitle
\section{Question 6}
\begin{table}[!htbp] \centering
\caption{}
\label{}
\begin{tabular}{@{\extracolsep{5pt}}ccccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
Statistic & \multicolumn{1}{c}{N} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{St. Dev.} & \multicolumn{1}{c}{Min} & \multicolumn{1}{c}{Pctl(25)} & \multicolumn{1}{c}{Pctl(75)} & \multicolumn{1}{c}{Max} \\
\hline \\[-1.8ex]
logwage & 1,669 & 1.625 & 0.386 & 0.005 & 1.362 & 1.936 & 2.261 \\
hgc & 2,229 & 13.101 & 2.524 & 0 & 12 & 15 & 18 \\
tenure & 2,229 & 5.971 & 5.507 & 0.000 & 1.583 & 9.333 & 25.917 \\
age & 2,229 & 39.152 & 3.062 & 34 & 36 & 42 & 46 \\
\hline \\[-1.8ex]
\end{tabular}
\end{table}
At what rate are log wages missing?
\begin{itemize}
    \item Just over 25 percent of the logwage data is missing (560 of the 2,229 observations)
\end{itemize}
Do you think the logwage variable is most likely to be MCAR, MAR, or MNAR?
\begin{itemize}
    \item MAR: the missing log wages can plausibly be predicted with a regression using the other data that are currently available, so they are most likely missing at random
\end{itemize}
\section{Question 7}
\begin{table}[!htbp] \centering
\caption{Regression Results}
\label{}
\begin{tabular}{@{\extracolsep{5pt}}lccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{3}{c}{\textit{Dependent variable:}} \\
\cline{2-4}
\\[-1.8ex] & \multicolumn{3}{c}{logwage} \\
\\[-1.8ex] & (1) & (2) & (3)\\
\hline \\[-1.8ex]
hgc & 0.062$^{***}$ & 0.049$^{***}$ & 0.062$^{***}$ \\
& (0.005) & (0.004) & (0.004) \\
& & & \\
 college (not college grad) & 0.146$^{***}$ & 0.160$^{***}$ & 0.146$^{***}$ \\
& (0.035) & (0.026) & (0.025) \\
& & & \\
tenure & 0.023$^{***}$ & 0.015$^{***}$ & 0.023$^{***}$ \\
& (0.002) & (0.001) & (0.001) \\
& & & \\
age & $-$0.001 & $-$0.001 & $-$0.001 \\
& (0.003) & (0.002) & (0.002) \\
& & & \\
 married (single) & $-$0.024 & $-$0.029$^{**}$ & $-$0.024$^{*}$ \\
& (0.018) & (0.014) & (0.013) \\
& & & \\
Constant & 0.639$^{***}$ & 0.833$^{***}$ & 0.639$^{***}$ \\
& (0.146) & (0.115) & (0.111) \\
& & & \\
\hline \\[-1.8ex]
Observations & 1,669 & 2,229 & 2,229 \\
R$^{2}$ & 0.195 & 0.132 & 0.268 \\
Adjusted R$^{2}$ & 0.192 & 0.130 & 0.266 \\
Residual Std. Error & 0.346 (df = 1663) & 0.311 (df = 2223) & 0.300 (df = 2223) \\
F Statistic & 80.508$^{***}$ (df = 5; 1663) & 67.496$^{***}$ (df = 5; 2223) & 162.884$^{***}$ (df = 5; 2223) \\
\hline
\hline \\[-1.8ex]
\textit{Note:} & \multicolumn{3}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{tabular}
\end{table}
The true value is $\hat{\beta}_1 = 0.093$.
Comparing the three regressions above, every estimate of the schooling coefficient (0.062, 0.049 and 0.062) lies below this true value, so dropping or imputing the missing log wages biases the estimated coefficient downward.
\section{Question 8}
For my final project I have made some slow progress in scraping the data. I have a clear direction, since I want to use something similar to what my thesis for my master's program is going to cover. Now it is just a matter of scraping the extra data needed to test my hypothesis. With everything going on right now with my internship, teaching and my own classes I have not been able to get very far, but I look forward to doing some deep diving over spring break when I have a little more time.
\end{document}
% !TEX root = ./IF-2016-Part_B.tex
\addcontentsline{toc}{section}{\hspace{-0.5cm}Document 1}
\addcontentsline{toc}{section}{Start Page}
\phantom{a}
\vspace{15mm}
\begin{center}
\Large{
\textbf{START PAGE}
\vspace{15mm}
MARIE SK\L{}ODOWSKA-CURIE ACTIONS\\
\vspace{1cm}
\textbf{\acf{IF}}\\
\textbf{Call: H2020-MSCA-IF-2016}
\vspace{2cm}
PART B
\vspace{2.5cm}
``{\sc \ac{PropAcronym}\xspace}''
\vspace{2cm}
\textbf{This proposal is to be evaluated as:}
\vspace{.5cm}
\textbf{[Standard EF] [CAR] [RI] [GF]}\\
}
\large{[Delete as appropriate]}
\end{center}
\vspace{1cm}
\newpage
\setcounter{tocdepth}{1}
\tableofcontents
\newpage
\addcontentsline{toc}{section}{List of Participating Organisations}
\section*{List of Participating Organisations}
\label{sec:participants}
Please provide a list of all participating organisations (both beneficiaries and, where applicable, partner organisations%
\footnote{All partner organisations should be listed here, including secondments})
indicating the legal entity, the department carrying out the work and the supervisor.
\medskip\noindent
If a secondment in Europe is planned but the partner organisation is not yet known, as a minimum the type of organisation foreseen (academic/non-academic) must be stated.
\medskip\noindent
For non-academic beneficiaries, please provide additional detail as indicated in the table below.
\newcommand\rotx[1]{\rotatebox[origin=c]{90}{\textbf{#1}}}
\newcommand\roty[1]{\rotatebox[origin=c]{90}{\parbox{4cm}{\raggedright\textbf{#1}}}}
\newcommand\MyHead[2]{\multicolumn{1}{l|}{\parbox{#1}{\centering #2}}}
\noindent\begin{tabular}{|m{2.4cm}|m{1cm}|b{1em}|b{1em}|c|m{2.5cm}|m{2cm}|c|}
\hline
\textbf{Participants}
& \MyHead{1cm}{\textbf{Legal\\Entity\\Short\\Name}}
& \rotx{Academic}
& \rotx{Non-academic}
& \textbf{Country}
& \MyHead{2.1cm}{\textbf{Dept. / \\Division / \\Laboratory}}
& \textbf{Supervisor}
& \MyHead{2.5cm}{\textbf{Role of\\Partner\\Organisation\footnotemark}} \\
\hline
\ul{Beneficiary} & & & & & & & \\\hline
- NAME & & & & & & & \\\hline
\ul{Partner} \ul{\mbox{Organisation}} & & & & & & & \\\hline
- NAME & & & & & & & \\\hline
\end{tabular}
\vspace{\baselineskip}
\footnotetext{For example hosting secondments, for GF hosting the outgoing phase, etc.}
\noindent
{\bf Data for non-academic beneficiaries}\\
\noindent\begin{tabular}{|m{1.7cm}|m{2cm}|m{1.8cm}|c|c|m{2.5cm}|c|c|c|}
\hline
\textbf{Name}
& \roty{Location of research premises (city / country)}
& \roty{Type of R\&D activities}
& \roty{No. of fulltime employees}
& \roty{No. of employees in R\&D}
& \roty{Website}
& \roty{Annual turnover (approx. in Euro)}
& \roty{Enterprise status (Yes/No)}
& \roty{SME status\footnotemark (Yes/No)}
\\\hline
& & & & & & & & \\\hline
\end{tabular}
\vspace{\baselineskip}
\footnotetext{As defined in \href{http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2003:124:0036:0041:en:PDF}{Commission Recommendation 2003/261/EC.}}
\noindent
{\bf Please note that:}
\begin{itemize}
\item Any inter-relationship between different participating institution(s) or individuals and other entities/persons (e.g. family ties, shared premises or facilities, joint ownership, financial interest, overlapping staff or directors, etc.) \textbf{must} be declared and justified \textbf{in this part of the proposal};
\item The information in the table for non-academic beneficiaries \textbf{must be based on current data, not projections}.
\end{itemize}
\newpage
\markStartPageLimit
\section{Excellence}
\label{sec:excellence}
~\footnote{Literature should be listed in footnotes, font size 8 or 9.
All literature references will count towards the page limit.}
\subsection{Quality and credibility of the research/innovation action (level of novelty, appropriate consideration of inter/multidisciplinary and gender aspects)}
\label{sec:excellence_quality}
You should develop your proposal according to the following lines:
\begin{itemize}
\item \ul{Introduction, state-of-the-art, objectives and overview of the action}
\item \ul{Research methodology and approach}: highlight the type of research / innovation activities proposed
\item \ul{Originality and innovative aspects of the research programme}: explain the contribution that the project is expected to make to advancements within the project field. Describe any novel concepts, approaches or methods that will be employed.
\item The \ul{gender dimension} in the research content (if relevant)
\item The \ul{interdisciplinary} aspects of the action (if relevant)
\item Explain how the high-quality, novel research is the most likely to open up the best career possibilities for the {\em experienced researcher} and new collaboration opportunities for the host organisation(s).
\end{itemize}
\subsection{Quality and appropriateness of the training and of the two way transfer of knowledge between the researcher and the host}
\label{sec:excellence_transfer}
Describe the training that will be offered.
\noindent
Outline how a two way transfer of knowledge will occur between the researcher and the host institution(s):
\begin{itemize}
\item Explain how the \emph{experienced researcher} will gain new knowledge during the fellowship at the hosting organisation(s)
\item Outline the previously acquired knowledge and skills that the researcher will transfer to the host organisation(s).
\end{itemize}
For Global Fellowships explain how the newly acquired skills and knowledge in the Third Country will be transferred back to the host institution in Europe (the beneficiary) during the incoming phase.
\subsection{Quality of the supervision and of the integration in the team/institution}
\label{sec:excellence_supervision}
\begin{itemize}
\item Qualifications and experience of the supervisor(s)
\end{itemize}
\noindent
Provide information regarding the supervisor(s):
the level of experience on the research topic proposed and their track record of work,
including main international collaborations,
as well as the level of experience in supervising researchers.
Information provided should include participation in projects, publications, patents and any other relevant results.
\begin{itemize}
\item Hosting arrangements%
\footnote{The hosting arrangements refer to the integration of the researcher into their new environment at the premises of the host.
They do not refer to the infrastructure of the host as described in the Quality and efficiency of the implementation criterion.}
\end{itemize}
\noindent
The application must show that the experienced researcher will be well integrated within the team/institution in order that all parties gain the maximum knowledge and skills from the fellowship.
The nature and the quality of the research group/environment as a whole should be outlined,
together with the measures taken to integrate the researcher in the different areas of expertise, disciplines, and international networking opportunities that the host could offer.
\medskip\noindent
For GF both phases should be described\----for the outgoing phase, specify the practical arrangements in place to host a researcher coming from another country,
and for the incoming phase specify the measures planned for the successful (re-)integration of the researcher.
\subsection{Capacity of the researcher to reach or re-enforce a position of professional maturity/independence}
\label{sec:excellence_maturity}
Applicants should demonstrate how the proposed research and personal experience will contribute to their further professional development as an independent/mature researcher.
\medskip\noindent
Describe {\bf briefly} how the host will contribute to the advancement of the researcher's career.
\medskip\noindent
Therefore, a complete {\bf Career Development Plan should not be included in the proposal},
but it is part of implementing the action in line with the European Charter for Researchers.
\newpage
\section{Impact}
\label{sec:impact}
\subsection{Enhancing the potential and future career prospects of the researcher}
\label{sec:impact_researcher}
Explain the expected \ul{impact of the planned research and training} on the career prospects of the experienced researcher after the fellowship.
Which \ul{new competences} will be acquired?
\subsection{Quality of the proposed measures to exploit and disseminate the action results}
\label{sec:impact_dissemination}
Describe how the new knowledge generated by the action will be disseminated and exploited,
e.g. communicated, transferred into other research settings or, if appropriate, commercialised.
\medskip\noindent
What is the dissemination strategy\----targeted at scientists, potential users and the wider research and innovation community\----to achieve the potential impact of the action?
\medskip\noindent
Please make also reference to the "Dissemination \& exploitation" section of the H2020 Online Manual%
\footnote{\url{http://ec.europa.eu/research/participants/docs/h2020-funding-guide/grants/grant-management/dissemination-of-results_en.htm}}.
\medskip\noindent
The following section of the European Charter for Researchers refers specifically to dissemination:
\bigskip\noindent
\setlength{\fboxsep}{3mm}
\fbox{\parbox{.95\textwidth}{
{\large {\bf Dissemination, exploitation of results}}
\medskip\noindent
All researchers should ensure, in compliance with their contractual arrangements, that the results of their research are disseminated and exploited, e.g. communicated, transferred into other research settings or, if appropriate, commercialised.
Senior researchers, in particular, are expected to take a lead in ensuring that research is fruitful and that results are either exploited commercially or made accessible to the public (or both) whenever the opportunity arises.
}}
\medskip\noindent
Concrete planning for section~\ref{sec:impact_dissemination} must be included in the Gantt Chart (see point~\ref{sec:implementation_work_plan}).
\subsection{Quality of the proposed measures to communicate the action activities to different target audiences}
\label{sec:impact_communication}
Please also refer to the guidelines {\em \href{http://ec.europa.eu/research/participants/data/ref/h2020/other/gm/h2020-guide-comm_en.pdf}{Communicating EU research and innovation guidance for project participants}}%
\footnote{\url{http://ec.europa.eu/research/participants/data/ref/h2020/other/gm/h2020-guide-comm_en.pdf}}
as well as to the "communication" section of the H2020 Online Manual%
\footnote{\url{http://ec.europa.eu/research/participants/docs/h2020-funding-guide/grants/grant-management/communication_en.htm}}.
\medskip\noindent
Concrete planning for section~\ref{sec:impact_communication} must be included in the Gantt Chart (see point~\ref{sec:implementation_work_plan}).
\medskip\noindent
The following section of the European Charter for Researchers refers specifically to public engagement:
\bigskip\noindent
\setlength{\fboxsep}{3mm}
\fbox{\parbox{.95\textwidth}{
{\large {\bf Public engagement}}
\medskip\noindent
Researchers should ensure their research activities are made known to society at large in such a way that they can be understood by non-specialists, thereby improving the public's understanding of science.
Direct engagement with the public will help researchers to better understand public interest in priorities for science and technology and also the public's concerns.
}}
\newpage
\section{Quality and Efficiency of the Implementation}
\label{sec:implementation}
\subsection{Coherence and effectiveness of the work plan}
\label{sec:implementation_work_plan}
The proposal should be designed in such a way as to achieve the desired impact.
A Gantt Chart should be included in the text, listing the following:
\begin{itemize}
\item \ul{Work Package titles (for EF there should be at least 1 WP)};
\item \ul{List of major deliverables, if applicable;}%
\footnote{A deliverable is a distinct output of the action, meaningful in terms of the action's overall objectives, and may be a report, a document, a technical diagram, a piece of software, etc.
Should the applicants wish to participate in the pilot on Open Research Data, the Data Management Plan should be indicated here.\\
Deliverable numbers should be ordered according to delivery dates.
Please use the numbering convention <WP number>.<number of deliverable within that WP>.
For example, deliverable 4.2 would be the second deliverable from work package 4.}
\item \ul{List of major milestones}, if applicable;%
\footnote{Milestones are control points in the action that help to chart progress.
Milestones may correspond to the completion of a key deliverable, allowing the next phase of the work to begin.
They may also be needed at intermediary points so that, if problems have arisen, corrective measures can be taken.
A milestone may be a critical decision point in the action where, for example, the researcher must decide which of several technologies to adopt for further development.}
\item \ul{Secondments, if applicable.}
\end{itemize}
\noindent
The schedule should be in terms of the number of months elapsed from the start of the project.
\begin{figure}[!htbp]
\begin{center}
\begin{ganttchart}[
canvas/.append style={fill=none, draw=black!5, line width=.75pt},
hgrid style/.style={draw=black!5, line width=.75pt},
vgrid={*1{draw=black!5, line width=.75pt}},
title/.style={draw=none, fill=none},
title label font=\bfseries\footnotesize,
title label node/.append style={below=7pt},
include title in canvas=false,
bar label font=\small\color{black!70},
bar label node/.append style={left=2cm},
bar/.append style={draw=none, fill=black!63},
bar progress label font=\footnotesize\color{black!70},
group left shift=0,
group right shift=0,
group height=.5,
group peaks tip position=0,
group label node/.append style={left=.6cm},
group progress label font=\bfseries\small
]{1}{24}
\gantttitle[
title label node/.append style={below left=7pt and -3pt}
]{Month:\quad1}{1}
\gantttitlelist{2,...,24}{1} \\
\ganttgroup{Work Package}{1}{10} \\
\ganttgroup{Deliverable}{5}{15} \\
\ganttgroup{Milestone}{5}{5} \\
\ganttgroup{Secondment}{20}{23} \\
\ganttgroup{Conference}{16}{16} \\
\ganttgroup{Workshop}{17}{17} \\
\ganttgroup{Seminar}{18}{18} \\
\ganttgroup{Dissemination}{23}{24} \\
\ganttgroup{Public engagement}{4}{5} \\
\ganttgroup{Other}{7}{10}
\end{ganttchart}
\end{center}
\caption{Example Gantt Chart}
\end{figure}
\subsection{Appropriateness of the allocation of tasks and resources}
\label{sec:implementation_resources}
Describe how the work planning and the resources will ensure that the research and training objectives will be reached.
\medskip\noindent
Explain why the amount of person-months is appropriate in relation to the activities proposed.
\subsection{Appropriateness of the management structure and procedures, including risk management}
\label{sec:implementation_management}
Describe the:
\begin{itemize}
\item \ul{Organisation and management structure}, as well as the progress monitoring mechanisms put in place, to ensure that objectives are reached;
\item \ul{Research and/or administrative risks that might endanger reaching the action objectives} and the contingency plans to be put in place should risks occur.
\end{itemize}
\subsection{Appropriateness of the institutional environment (infrastructure)}
\label{sec:implementation_infrastructure}
The active contribution of the beneficiary to the research and training activities should be described.
For GF, the role of partner organisations in Third Countries for the outgoing phase should also be described.
\begin{itemize}
\item \ul{Give a description of the main tasks} and commitments of the beneficiary and all partner organisations (if applicable).
\item Describe the infrastructure, logistics and facilities offered, insofar as they are necessary for the good implementation of the action.
\end{itemize}
\markEndPageLimit
\section{Projects}
\textbf{SimpleNavierStokes.jl} (2020) \\
{A \link{https://github.com/Emadmasroor/SimpleNavierStokes.jl}{Julia package}, \link{https://emadmasroor.github.io/blog/2020/12/16/CFD-tutorial-in-julia/}{blog post}, and open-source \link{https://nextjournal.com/emadmasroor/CFD-tutorial-in-Julia}{notebook} to serve as a beginner's tutorial for writing incompressible Navier-Stokes solvers using the $\omega$--$\psi$ formulation.}
% \respto{2-22}{\color{blue}Given a group of software projects,
% there often exists one exemplar project
% that offers the best prediction for all others.
% Such ``bellwether projects''
% can be used to make quality predictions that are general
% to that group.}
% Existing methods for finding bellwethers are very slow. When applied to the 697 projects studied here, standard bellwether methods took 60 days of CPU to find and certify the bellwethers.
% Hence, we propose a faster way to find bellwethers.
% GENERAL applies hierarchical clustering to groups of project data. At each level within a tree of clusters, one bellwether is computed from sibling projects, then promoted up the tree. This hierarchical method is a scalable approach to learning effective models from very
% large data sets. For hundreds of projects, the defect prediction models generated from GENERAL's bellwether were just as good as those found via standard methods.
%Many Large organizations and many open source communities are using large scale data analysis on historical data available to them. With large quantity of data available to them, how should they reason about software quality? Should they use general defect prediction models that hold over many projects? Or must they use an ever-changing set of defect prediction models that are continually build and adapted to the task at hand? If the latter were true then there would be no stable models and conclusions about what is best practice for SE for avoiding defects (since those best practices would keep changing as we move from project to project). As discussed in section~\ref{sec:Motivation}, such conclusion instability has detrimental implications for {\em generality, trust, insight, training}, and {\em tool development}.}
% Researchers and industry practitioners makes use of different machine learning models to automatically generate software quality models(defect predictor) from project data comprises of software quality metrics (Commonly used software metrics for defect prediction are complexity metrics (such as lines of code, Halstead metrics, McCabe’s cyclometic complexity, and CK metrics) and process metrics(such as Number of revisions, lines added and deleted)) . Researches and industry practitioners use these defect predictors to predict for defect in new set of changes and to learn what are the important metrics that are responsible for finding these defects. These quality metrics can be collected with much ease from the code base of different software systems from large organizations and open source community, and
% After a decade of intensive research into automated software analytics, what general principles have we learned? While that work has generated specific results about specific projects~\cite{Bird:-24015,menzies2013software}, it has failed (so far) to deliver general principles that are demonstrably useful across many projects~\cite{menzies2013guest} (for an example of how {\em more} data can lead to {\em less} general conclusions, see below in {\S}2a).
% Is that the best we can do?
% How should we reason about software quality? Should we use general models that hold over many projects? Or must we use an ever-changing set of ideas that are continually adapted to the task at hand?
% Or does the truth lie somewhere in-between?
% To say that another way:
% \bi
% \item
% Are there general principles we can use to guide project management, software standards, education, tool development, and legislation about software?
% \item
% Or is software engineering some ``patchwork quilt'' of ideas and methods where it only makes sense to reason about specific, specialized, and small sets of projects?
% \ei
% If the latter were true then
% then there would be no stable conclusions about what is best practice for SE (since those best practices would keep changing as we move from project to project). As discussed in section~\ref{sec:Motivation}, such conclusion instability has detrimental implications for {\em generality, trust, insight, training}, and {\em tool development}.
% \respto{1-3}{\color{blue}Researchers and industry practitioners make use of software analytics for many tasks to assess software quality, such as:
% \bi
% \item Predicting whether a submitted code change is likely to be buggy or not.
% \item Improving code quality by detecting code smells.
% \item Issue lifetime estimation to enable effective development and maintenance of their software systems.
% \ei
% Large organizations and many open source communities uses data-driven decision making, where they learn using large scale data analysis on historical data available to them. With large quantity of data available to them, how should they reason about software quality? Should they use general models that hold over many projects? Or must they use an ever-changing set of ideas that are continually adapted to the task at hand? If the latter were true then there would be no stable conclusions about what is best practice for SE (since those best practices would keep changing as we move from project to project). As discussed in section~\ref{sec:Motivation}, such conclusion instability has detrimental implications for {\em generality, trust, insight, training}, and {\em tool development}.}
%One explanation for the limited conclusions (so far) from automated analytics is {\em how much} data we are using for analysis. A typical software analytics research paper uses less than a few dozen projects (exceptions: see~\cite{krishna18a, zhao17, agrawal18}). Such small samples can never represent something as diverse as software engineering.
%{\color{blue} Finding general defect predictors across many projects is a complex task.
%For a long list of conclusions that were found to be unstable across multiple projects, see~\cite{Me13}. This problem is so endemic in software engineering that other journals have devoted entire special issues to the topic~\cite{Menzies2012}.}
%There are many reasons for that including how the models were certified (20 repeats with different train/test sets)
%and the complexity of the analysis procedure (which includes fixing class imbalance and feature selection). But the major cause of this slow down was that those methods required an $O(N^2)$ comparison between $N=697$ projects.
%dramatically improves on existing bellwether methods.
% \bi
% \item
% This paper applies GENERAL and traditional $O(N^2)$ bellwether to 697 projects.
% GENERAL and the traditional approach terminated in 1.5 and 72 hours (respectively).
% \item
% Figure~\ref{fig:cost} shows a hypothetical cost comparison in AWS between standard bellwethers and GENERAL when running for 100 to 1,000,000 projects. Note that GENERAL
% is inherently more scalable.
% \ei
% \begin{figure}[!t]
% \centering
% \includegraphics[width=\linewidth]{figs/cost.pdf}
% \caption{Hypothetical cost comparison between GENERAL and default Bellwether.}
% \label{fig:cost}
% \end{figure}
% \noindent Overall, the contributions of this paper are
% \bi
% \item \textbf{Hierarchical bellwethers for transfer learner:} We
% offer a novel hierarchical clustering bellwether algorithm called GENERAL (described in section~\ref{GENERAL}) that
% finds bellwether in hierarchical clusters, then promotes those bellwether to upper levels
% The final project that is promoted to the root of the hierarchy is returned as ``the'' bellwether.
% \item \textbf{Showing inherent generality in SE:}
% In this study we discover a source data set for transfer learner from a large number of projects, hence proving generality in the SE datasets (where some datasets can act as exemplars for the hundreds of other projects). To be more specific we can create a generalized defect prediction model that can act as a general model for rest of the projects in that SE community.
% \item \textbf{Knowledge about software quality that are general
% to hundreds of software projects:}
% As said above, in this sample of 697 projects, we find that code interface issues are the dominant factor on software defects.
% In this section, we ask ``Why even bother to transfer knowledge between projects?''. In several recent studies ~\cite{bettenburg2012think, menzies2012local, posnett2011ecological} with readily-available data from SE repositories, numerous authors report the locality effect in SE; i.e. general models outperformed by specialized models localized to particular parts of the data.
% For example.
% Menzies et al. explored local vs global learning in defect prediction and effort estimation~\cite{menzies2012local} and found that
% learning rules from specific local data was more
% effective than learning rules from the global space.
% On the other hand, Herbold et al.~\cite{herbold2017global} offered an opposite conclusion.
% In their study regarding global vs local model for cross-project defect prediction,
% they saw that local models offered little to no improvement over models learned
% from all the global data.
% One explanation for this discrepancy is the size of number of projects that they explored.
% \respto{2-14}Menzies, Herbold et al. explored less than two dozen projects to their conclusions. Accordingly, here, we explore nearly 700 projects to verify their findings. As shown below, the results of this paper agree more with Herbold et al. than Menzies et al. since we show that one global model (learned from a single bellwether projects) does just as well as anything else.
% \respto{2-14}Menzies, Herbold et al. explored less than two dozen projects which raises issues of external validity in their conclusions. Accordingly, here, we explore nearly 700 projects. As shown below, the results of this paper agree more with Herbold et al. than Menzies et al. since we show that one global model (learned from a single bellwether projects) does just as well as anything else.
%\respto{2-24}{\color{blue} It turns out that developers are not the only one's confused about how various factors influence software projects. Much recent research calls into question the ``established wisdoms'' of SE field.
%Note that if the reader disputes any of the above, then we ask how would you challenge the items on this list? Where would you get the data, from enough projects, to successfully refute the above? And where would you get that data? And how would you draw conclusions from that large set? Note that the answers to these questions requires learning from multiple projects. Hence, this paper.
% Other researchers ~\cite{kocaguneli2012, kocaguneli2011find} doubted that a fixed value of k was appropriate for all data. That work recursively bi-clustered the source data, then pruned the cluster sub-trees with greatest ``variance'' (where the ``variance'' of a sub-tree is the variance of the conclusions in its leaves). This method combined row selection with row pruning (of nearby rows with large variance). Other similarity methods~\cite{Zhang16aa} combine domain knowledge with automatic processing: e.g. data is partitioned using engineering judgment before automatic tools cluster the data. To address variations of software metrics between different projects, the original metric values were discretized by rank transformation according to similar degree of context factors.
% \bi
% \item {\color{blue}Recall is the ratio between predicted actual target class examples vs all target class examples; i.e. $ \frac{TP}{TP+FN} $, which means in an ideal scenario where we identify all the actual target class as target class without missing the recall will be one.
% When recall is maximal, we are finding all the target class.}
% \item {\color{blue} Precision is the ratio between predicted actual target class examples vs all predicted target class examples; i.e. $ \frac{TP}{TP+FP} $. When precision is maximal, all the reports of defect modules are
% actually buggy (so the users waste no time looking at results that do not matter to them).}
% \item {\color{blue}popt(20) is a cost sensitive productivity based metric, which represents the percentage of total defect identified by reading 20\% of the code. This means more defects we can identify by reading only 20\% of the code is better for the developers to localize and fix the defects.}
% \item {\color{blue}ifa\_auc is XXX.}
% \ei
% In such {\em multi-objective} problems, one model is better than another if it
% satisfies a ``domination predicate''.
% We use the Zitler indicator dominance
% predictor~\cite{zit02} to select our bellwether (since this is known to select better models
% for 5-goal optimization~\cite{Sayyad:2013,Sayyad:2013:SPL}).
% This predicate favors model
% $y$ over model $x$ if $x$ ``loses'' most:
% \begin{equation}\label{eq:cdom}
% \begin{array}{rcl}
% \textit{worse}(x,y)& =& \textit{loss}(x,y) > \textit{loss}(y,x)\\
% \textit{loss}(x,y)& = &\sum_j^n -e^{\Delta(j,x,y,n)}/n\\
% \Delta(j,x,y,n) & = & w_j(o_{j,x} - o_{j,y})/n
% \end{array}
% \end{equation}
% where ``$n$'' is the number of objectives (for us, $n=5$) and $w_j\in \{-1,1\}$ depending on whether
% we seek to maximize goal $x_j$.
% An alternative to the Zitler indicator is ``boolean domination''
% that says one thing is better than another if it is no worse on any criterion and better on at least one criterion. We prefer Equation~\ref{eq:cdom} to boolean domination since we
% have a 5-goal optimization problem and it is known that boolean domination often fails for 3 or more goals~\cite{Wagner:2007,Sayyad:2013}.
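% A minimal sketch (ours, not the paper's code) of the Zitler indicator in Equation~\ref{eq:cdom}, assuming each model is summarized by a vector of objective scores and that $w_j=+1$ for goals to maximize and $w_j=-1$ for goals to minimize:
% \begin{lstlisting}
% import math
%
% def zitler_prefers(y, x, w):
%     """True if model y is preferred over model x, i.e. x 'loses' most.
%     x, y are objective vectors; w[j] is +1 (maximize) or -1 (minimize)."""
%     def loss(a, b):
%         n = len(a)
%         return sum(-math.exp(w[j] * (a[j] - b[j]) / n)
%                    for j in range(n)) / n
%     return loss(x, y) > loss(y, x)
% \end{lstlisting}
% For the five goals used here, w would be (+1, +1, +1, -1, -1) for (recall, precision, popt20, pf, ifa\_auc).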
% The above equation is actually Pearson's correlation where
% all variables have been standardized. To be applied
% for discrete class learning (as done by KDP and this paper),
% Hall et al. employ the Fayyad Irani discretizer~\cite{FayIra93Multi} then apply the following
% entropy-based measure to infer $r$ (the degree of associations
% between discrete sets $X$ and $Y$):
% \begin{equation}\label{eq:cfs}
% r_{\mathit{xy}}=2\times \left[ \frac{H(x) + H(y) - H(x,y)}{H(y)+H(x)} \right]
% \end{equation}
% where $H$ is the standard information gain measure used in
% decision tree learning.
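% A minimal sketch (ours, not Hall et al.'s code) showing how Equation~\ref{eq:cfs} could back the merit() call in the pseudo-code below; combining the per-feature correlations into a subset score via merit $= k\,\overline{r_{cf}}/\sqrt{k+k(k-1)\,\overline{r_{ff}}}$ follows Hall's usual CFS formulation and is an assumption on our part. Features are assumed to be already discretized, and data is assumed to expose data.columns (a list of feature columns) and data.klass (the class column):
% \begin{lstlisting}
% import math
% from collections import Counter
%
% def entropy(xs):
%     n = len(xs)
%     return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())
%
% def r(x, y):
%     # Equation above: 2 * (H(x) + H(y) - H(x,y)) / (H(x) + H(y))
%     hx, hy, hxy = entropy(x), entropy(y), entropy(list(zip(x, y)))
%     return 2.0 * (hx + hy - hxy) / (hx + hy) if hx + hy else 0.0
%
% def merit(data, features):
%     k = len(features)
%     r_cf = sum(r(data.columns[f], data.klass) for f in features) / k
%     pairs = [(f, g) for i, f in enumerate(features) for g in features[i + 1:]]
%     r_ff = sum(r(data.columns[f], data.columns[g]) for f, g in pairs) / len(pairs) if pairs else 0.0
%     return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)
% \end{lstlisting}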
% \lstset{language=Python}
% \lstset{frame=lines}
% \lstset{label={lst:code_direct}}
% \lstset{basicstyle=\footnotesize}
% \begin{center}\begin{minipage}{2.5in}\begin{lstlisting}
% def CFS(data):
%     # Greedy forward selection (a debugged version of the original
%     # sketch; the accompanying text describes a best-first search with
%     # backtracking, which this simplification omits).
%     selected, score = [], -1
%     while True:
%         best_feature, best_score = None, score
%         for feature in range(data.features):
%             if feature in selected:
%                 continue
%             tmp = merit(data, selected + [feature])  # see above equation
%             if tmp > best_score:
%                 best_feature, best_score = feature, tmp
%         if best_feature is None:   # no candidate improves the merit
%             break
%         selected.append(best_feature)
%         score = best_score
%     return selected
% \end{lstlisting}
% \end{minipage}\end{center}
% \end{figure}
% This algorithm works in 6 stages.
% (1) \textbf{Feature Extraction:} In this stage the whole dataset is used to extract features from each project. This is done using the FSS algorithm as shown in~\ref{subsec:FSS}. Here each project is sent to the FSS and the FSS returns most suitable features for building models, we do this for every project and that information as a vector (i.e. a vector or length equal to total number of features, where 0,1 represents a feature being absent,selected) that represents each project. By performing this we have a vector representation of each project in the dataset. GENERAL uses this information to create the hierarchical clusters to find the communities. We perceive this is a good representation of community, as in this work we try to find community which has similar information distribution according to the attributes. Thus 2 projects with similar features selected have much higher chance of building similar models.
% (2) \textbf{Cluster Creation:} After the feature extraction has been done, the data is sent to a modified BIRCH algorithm. The algorithm requires a branching factor (i.e. Maximum number of CF sub-clusters in each node) and threshold value (i.e. when to form a new cluster based on radius). For this experiment we have set the branching factor as 20 and threshold value as 0.5. Using this version of BIRCH algorithm we build the hierarchical cluster, while storing all necessary details about the cluster like parent-child node, data points, level information, etc. This stage returns a Clustering Feature Tree (CF Tree) with all this information, which is passed to the next phase of experiment.
% (3) \textbf{Bellwether selection phase 1:} The CF Tree from the last phase is passed to the hierarchical bellwether. In this phase we use \textit{bellwether method} to identify bellwether at the leaf level. Using the CF Tree, we identify the clusters at the leaf level where each cluster represents the smallest community produced by the BIRCH algorithm in the last phase. Here we use the default ``Bellwether'' to perform a $ N*(N-1) $ comparison at each cluster. Here we select each project in the cluster one by one as a source dataset for transfer learning apply SMOTE as mentioned in sec~\ref{subsec:SMOTE} to handle any data imbalance and then use the FSS algorithm to get rid of any unnecessary attributes as mentioned in sec~\ref{subsec:FSS}. This informative and balanced dataset is used to build a LR model as mentioned in sec~\ref{subsec:LR} and we measure the performance of all other projects in the community (cluster). The performance measures that are used are mentioned in sec~\ref{sec:Measures}. To find the bellwether in each community we use cdom function to find the best source dataset among each cluster considering all performance measures. This phase returns a bellwether for each cluster at the leaf level of the CF Tree.
% (4) \textbf{Bellwether promotion:} In this phase of the algorithm is an iterative process, here we receive the selected bellwethers from the child clusters of each cluster in the level. This is called the bellwether promotion. Here each parent cluster instead of being represented by all projects within them, they are represented by only the bellwethers in them.
% (5) \textbf{Bellwether selection phase 2:} In this phase instead of finding a bellwether for each cluster at the level by performing a $ N*(N-1) $ comparison, we select the projects which represented as bellwether at the child nodes and then try to find a bellwether among them. So at each cluster at the level we perform a $ M*(M-1) $ comparison at each cluster where M is the selected bellwether from child clusters. This creates an order of magnitude faster \textbf{bellwether method}. This is again an iterative process, and the end of all the iterations, we will have a bellwether at each cluster at every level.
% (6) \textbf{Bellwether Prediction:} This is the transfer learning phase, when a new project is evaluated, we will use the FSS algorithm to get its features, then use the feature vectors to identify which cluster it belongs and use that cluster's bellwether as the transfer learning model.
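% A minimal sketch (ours, not the paper's implementation) of the bellwether selection and promotion in stages (3)--(5), assuming a hypothetical cluster tree whose nodes expose .children and .projects, a user-supplied objectives(src, rest) helper that trains on src, tests on the other projects and returns the aggregated goal vector, and a domination predicate such as zitler_prefers above:
% \begin{lstlisting}
% def best_of(candidates, objectives, prefers):
%     # M*(M-1) tournament: keep the candidate preferred most often
%     # against its peers under the domination predicate.
%     scored = [(c, objectives(c, [p for p in candidates if p is not c]))
%               for c in candidates]
%     wins = [(sum(1 for d, od in scored if d is not c and prefers(oc, od)), c)
%             for c, oc in scored]
%     return max(wins, key=lambda t: t[0])[1]
%
% def general_bellwether(node, objectives, prefers):
%     if not node.children:                                    # stage (3): leaf cluster
%         return best_of(node.projects, objectives, prefers)
%     promoted = [general_bellwether(kid, objectives, prefers)  # stage (4): promote
%                 for kid in node.children]
%     return best_of(promoted, objectives, prefers)             # stage (5): compare bellwethers
% \end{lstlisting}
% Stage (6) then routes a new project to its cluster (via the feature vector from FSS) and reuses that cluster's bellwether model.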
% \begin{figure}[!t]
% \centering
% \includegraphics[width=\linewidth]{figs/BUBBLE.png}
% \caption{GENERAL algorithm.}
% \label{fig:GENERAL}
% \end{figure}
% In this study we try to establish the presence of generality in SE datasets. We do this by analyzing the presence of bellwether incrementally by adding more and more projects and how the bellwether's predictive power changes. In this case to show the presence of generality in SE datasets the predictive power of the bellwether should look like the \textcolor{ao(english)}{GREEN} in figure~\ref{fig:predictive_power}, that is the predictive power of bellwether should increase or remains same, if our results look like the \textcolor{red}{RED} curve, that will show absence of generality in SE datasets.
% In order to achieve this, we try to explore the \textit{bellwether effect} as mentioned in ~\ref{sec:related}. We know the default \textit{bellwether method} is very expensive ($ O(N^2) $). Thus in this paper we proposes an alternative transfer learning method (GENERAL), that explores \textit{bellwether effect} by exploring an order of magnitude faster \textit{bellwether method}. Our approach has three key components:
% \bi
% \item A feature extractor to find a representation of each project, which will be used for clustering the projects.
% \item A hierarchical clustering model to use the features extracted from previous step to build the hierarchical cluster.
% \item A transfer learning model to identify bellwether in the hierarchical cluster.
% \ei
% GENERAL employs a few different algorithms to compose its 4 different components:
% \subsubsection{\textbf{Feature Subset Selection (FSS):}}
% \label{subsec:FSS}
% To extract features from each dataset, we use a feature selector algorithm called Feature Subset Selection(FSS)~\cite{hall1999correlation,hall1997feature}. Which is a process of identifying and removing as much irrelevant and redundant information as possible. This is achieved using a correlation based feature evolution strategy to evaluate importance of an attribute and a best first search strategy with backtracking that moves through the search space by making local changes to the current feature subset. Here if the path being explored begins to look less promising, the best first search can back-track to a more promising previous subset and continue the search from there. Given enough time, a best first search will explore the entire search space, so it uses a stopping criterion (i.e. no improvement for five consecutive attributes).
% {\small
% \begin{figure}[]
% \small
% \inputminted[numbersep=2pt, linenos=true, fontsize=\small]{python}{pseudocode/cfs.py}
% \vspace{-0.2cm}
% \caption{Pseudo-code of Feature Subset Selection}
% \label{fig:GAP_pseudocode}
% \vspace{-0.3cm}
% \end{figure}
% }
% \subsection{Performance Measures}
% \label{sec:Measures}
% In this section, we introduce the following 5 evaluation measures used in this study to evaluate the performance of machine learning models. Suppose we have a dataset with M changes and N defects. After inspecting 20\% LOC, we inspected $m$ changes and found $n$ defects. Also, when we find the first defective change, we have inspected k changes. Using
% this data, we can define 5 evaluation measures as follows:
% (1) \textbf{Recall:} This is the proportion of inspected defective changes among all the actual defective changes; i.e. $n/N$.
% Recall is used in many previous studies~\cite{kamei2012large,yang2016effort,yang2017tlel,xia2016collective,yang2015deep}.
% (2) \textbf{Precision:} This is the proportion of inspected defective changes among all the inspected changes; i.e. $n/m$. A low Precision indicates that developers would encounter more false alarms, which may have negative impact on developers' confidence on the prediction model.
% (3) \textbf{pf:} This is the proportion of all suggested defective changes which are not actual defective changes among all the suggested defective changes. A high {\em pf} suggests developers will encounter more false alarms which may have negative impact on developers' confidence in the prediction model.
% (4) \textbf{popt20:} This is the proportion of actual defective changes found when 20\% of the LOC modified by all changes has been inspected.
% A high {\em popt20} values mean that developers can find most bugs in a small percent of the code.
% To compute Popt20, we divided the test set into the modules predicted to be faulty (set1)
% and predicted to be bug-free (set2). Each set was then sorted in ascending order by lines
% of code. We then ran down set1, then set2, till 20\% of the total lines of code
% were reached-- at which point {\em popt20} is the percent of buggy modules seen up to that point.
% (5) \textbf{ifa\_auc:} Number of initial false alarms encountered before we find the first defect. Inspired by previous studies on fault localization~\cite{parnin2011automated, kochhar2016practitioners, xia2016automated}, we caution that if the top-k changes recommended by the model are all false alarms, developers would be frustrated and are not likely to continue inspecting the other changes. Parnin and Orso ~\cite{parnin2011automated} found that developers would stop inspecting suspicious statements, and turn back to traditional debugging, if they could not get promising results within the first few statements they inspect. Using the nomenclature reported about {\em ifa$=k$}. In this study we use a modified version of {\em ifa} called ifa\_auc, which calculates {\em ifa} based on efforts spent on inspecting the code. We use gradually increment the efforts spent by increasing the total LOC inspected and calculate ifa on each iteration to get the area under the curve (auc), here the x-axis is the percentage of effort spent on inspection and y-axis is {\em ifa}.
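% A minimal sketch (ours) of how recall, precision, pf and popt20 could be computed as described above, assuming each change is a hypothetical (loc, predicted, actual) triple with boolean defect flags; ifa\_auc is omitted:
% \begin{lstlisting}
% def confusion_metrics(changes):
%     tp = sum(1 for _, p, a in changes if p and a)
%     fp = sum(1 for _, p, a in changes if p and not a)
%     fn = sum(1 for _, p, a in changes if not p and a)
%     tn = sum(1 for _, p, a in changes if not p and not a)
%     recall    = tp / (tp + fn) if tp + fn else 0.0
%     precision = tp / (tp + fp) if tp + fp else 0.0
%     pf        = fp / (fp + tn) if fp + tn else 0.0
%     return recall, precision, pf
%
% def popt20(changes):
%     # Inspect predicted-defective changes first, then predicted-clean
%     # ones, each sorted by ascending LOC, until 20% of the total LOC
%     # is read; report the percentage of actual defects seen by then.
%     budget = 0.20 * sum(loc for loc, _, _ in changes)
%     bugs = sum(1 for _, _, a in changes if a)
%     order = (sorted((c for c in changes if c[1]), key=lambda c: c[0]) +
%              sorted((c for c in changes if not c[1]), key=lambda c: c[0]))
%     seen_loc, seen_bugs = 0, 0
%     for loc, _, actual in order:
%         if seen_loc + loc > budget:
%             break
%         seen_loc += loc
%         seen_bugs += bool(actual)
%     return 100.0 * seen_bugs / bugs if bugs else 0.0
% \end{lstlisting}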
% In the literature, we saw most of the previous studies have shown bellwether effect with very small datasets. In order to use bellwether to prove presence of generality in SE domain datasets, we first have to showcase the bellwether method suggested by Krishna et al. in their experiment works for large datasets. We use our defect prediction dataset with 697 projects for this purpose.
% We divide the dataset in train\_1 and test\_1 set and then performed a $ N*(N-1) $ comparison on all the projects in train\_1 set to find a bellwether, and then dividing each project in test\_1 set into train\_2 and test\_2 using a train\_test split. We use the train\_2 to train a LR model and test on test\_2 which is represented as \textit{self} in the figures. Similarly we use the bellwether project from train\_1 to train a LR model and test it on test\_2 which is represented as \textit{bellwether0}. We use statistical tests mentioned in section~\ref{stats} to compare the performance of \textit{self} vs \textit{bellwether0} for all the performance measures mentioned in section~\ref{sec:Measures}, which is shown
% \begin{figure}[!b]
% \centering
% \includegraphics[width=\linewidth]{figs/Time.png}
% \caption{Mean run-time for one run of standard bellwether and GENERAL (with SMOTE and CFS).}
% \label{fig:time}
% \end{figure}
% \fig{compare} shows the mean number of comparisons required for finding a bellwether using conventional bellwether versus GENERAL for different community size of 45,90, 180, 360 and 627 projects. We can see from the \fig{compare} for increasing community size the number of comparisons required increases rapidly for conventional bellwether, while with GENERAL the number of comparisons required are relatively small. This shows the new bellwether method (aka GENERAL) is scalable with increasing community size.
% similarly \fig{compare_time} shows the mean run times for one run of GENERAL versus traditional bellwether for different community size of 45,90, 180, 360 and 627 projects. It is evident from the figure with growing community size the new bellwether method takes lesser amount of time ($\approx 4$ times faster in community size of 50 while $\approx 30$ times faster in a community size of 627 projects ).
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%%%%%%%%%%%Latex Table%%%%%%%%%%%%%%%%%%%%%%%%%
% \begin{figure}[!b]
% {\scriptsize
% {\scriptsize \begin{tabular}{p{.1cm}lp{1.5cm}rrc}
% \arrayrulecolor{darkgray}
% \rowcolor[gray]{.9} & rank & treatment & mean & sd & \\
% \multirow{5}{*}{\rotatebox[origin=c]{90}{Recall}}
% & 1 & GENERAL\_level2 & 38 & 32 & \quart{22}{38}{32}{16} \\
% & 1 & bellwether0 & 39 & 31 & \quart{23}{39}{31}{15} \\
% & 1 & ZeroR & 40 & 49 & \quart{16}{40}{49}{25} \\
% & 2 & GENERAL\_level1 & 48 & 33 & \quart{31}{48}{33}{16} \\
% & 3 & self & 55 & 27 & \quart{41}{55}{27}{12} \\
% & 3 & GENERAL\_level0 & 56 & 31 & \quart{42}{56}{31}{16} \\
% & 4 & global & 78 & 40 & \quart{57}{78}{40}{20} \\ \hline
% \multirow{5}{*}{\rotatebox[origin=c]{90}{Pf}}
% & 1 & GENERAL\_level2 & 28 & 28 & \quart{14}{28}{28}{13} \\
% & 1 & bellwether0 & 28 & 25 & \quart{15}{28}{25}{11} \\
% & 1 & self & 30 & 20 & \quart{20}{30}{20}{10} \\
% & 1 & GENERAL\_level1 & 35 & 28 & \quart{21}{35}{28}{14} \\
% & 1 & ZeroR & 39 & 49 & \quart{15}{39}{49}{25} \\
% & 2 & GENERAL\_level0 & 47 & 31 & \quart{31}{47}{31}{15} \\
% & 3 & global & 79 & 39 & \quart{59}{79}{39}{19} \\\hline
% \multirow{5}{*}{\rotatebox[origin=c]{90}{Precision}} & 1 & ZeroR & 21 & 30 & \quart{6}{21}{30}{15} \\
% & 2 & global & 35 & 28 & \quart{21}{35}{28}{14} \\
% & 2 & GENERAL\_level2 & 39 & 34 & \quart{22}{39}{34}{17} \\
% & 2 & bellwether0 & 40 & 33 & \quart{23}{40}{33}{16} \\
% & 2 & GENERAL\_level1 & 42 & 31 & \quart{26}{42}{31}{15} \\
% & 2 & GENERAL\_level0 & 44 & 30 & \quart{28}{44}{30}{15} \\
% & 2 & self & 50 & 30 & \quart{35}{50}{30}{15} \\\hline
% \multirow{5}{*}{\rotatebox[origin=c]{90}{Popt20}} & 1 & ZeroR & 13 & 16 & \quart{5}{13}{16}{8} \\
% & 2 & GENERAL\_level2 & 26 & 22 & \quart{15}{26}{22}{11} \\
% & 2 & global & 26 & 13 & \quart{19}{26}{13}{6} \\
% & 2 & bellwether0 & 28 & 21 & \quart{17}{28}{21}{9} \\
% & 2 & GENERAL\_level0 & 28 & 15 & \quart{22}{28}{15}{8} \\
% & 2 & GENERAL\_level1 & 28 & 19 & \quart{19}{28}{19}{9} \\
% & 3 & self & 35 & 19 & \quart{26}{35}{19}{10} \\\hline
% \multirow{5}{*}{\rotatebox[origin=c]{90}{ifa\_auc}} & 1 & ZeroR & 7 & 11 & \quart{1.12}{6.77}{11.29}{5.65} \\
% & 2 & global & 19 & 16 & \quart{11.35}{19.38}{16.06}{8.03} \\
% & 3 & GENERAL\_level2 & 22 & 14 & \quart{14.48}{21.59}{14.21}{7.10} \\
% & 3 & bellwether0 & 23 & 14 & \quart{15.48}{22.56}{14.15}{7.07} \\
% & 3 & self & 23 & 12 & \quart{17.09}{22.93}{11.68}{5.84} \\
% & 3 & GENERAL\_level1 & 23 & 14 & \quart{16.10}{22.98}{13.74}{6.87} \\
% & 3 & GENERAL\_level0 & 25 & 13 & \quart{17.87}{24.51}{13.27}{6.63} \\
% \end{tabular}}
% }
% \caption{Statistical results comparison. The ``rank'' column at left comes from
% the statistical analysis methods of \S\ref{eval}. Note that for {\em pf} and {\em ifa}, rank=1 is the best rank, while for all other performance measures ranks $\in\{3,4\}$ are best.
% }\label{fig:Statistical}
% \end{figure}
% {\small
% \begin{figure*}[!t]
% \centering
% \begin{tikzpicture}[nodes={draw, circle,fill=darkgray!60}, ->,sibling distance=.7cm,minimum size=.1cm,scale=.5]
% \node{627}
% child { node {57}
% child {node {9}}
% child {node {9}}
% child {node {7}}
% child {node {17}}
% child {node {13}}}
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child { node {127}
% child {node {3}}
% child {node {9}}
% child {node {19}}
% child {node {15}}
% child {node {4}}
% child {node {17}}
% child {node {15}}
% child {node {19}}
% child {node {11}}
% child {node {15}}}
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child { node {183}
% child {node {8}}
% child {node {3}}
% child {node {12}}
% child {node {16}}
% child {node {20}}
% child {node {18}}
% child {node {12}}
% child {node {4}}
% child {node {12}}
% child {node {16}}
% child {node {5}}
% child {node {17}}
% child {node {9}}
% child {node {9}}
% child {node {10}}
% child {node {14}}}
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child [missing]
% child { node {260}
% child {node {17}}
% child {node {17}}
% child {node {20}}
% child {node {14}}
% child {node {9}}
% child {node {13}}
% child {node {18}}
% child {node {10}}
% child {node {18}}
% child {node {10}}
% child {node {6}}
% child {node {15}}
% child {node {14}}
% child {node {9}}
% child {node {10}}
% child {node {8}}
% child {node {7}}
% child {node {13}}
% child {node {15}}}
% ;
% \end{tikzpicture}
% \caption{Example of Hierarchical Clustering for 627 projects}
% \label{fig:example tree}
% \end{figure*}
% }
% As to {\em ZeroR}, we cannot recommend that approach.
% While {\em ZeroR}
% makes few mistakes (low {\em ifa}s and low {\em pf}s), it scores badly
% on other measures (very low {\em recall}s and {\em popt(20)}).
% In this section, we return to
% In RQ4, we try to answer the question if learning from too many projects detrimental effect, this question has two parts, one on predictive power, the other on making general conclusion and conclusion instability. Figure~\ref{fig:Statistical} shows the results of statistical significance and effect size tests to rank them in order for all the different methods(treatments) used in this experiment. In this figure for a performance measure two methods showing same rank means there performance is not statistically significantly different, while a different ranks mean they are different, while a smaller rank is better if the performance goal is negative (i.e. Pf), and higher rank is better in case of positive goal (i.e. Recall).
% To answer the first part of the question, if learning from too many projects have detrimental effect on predictive power of models, we compare the results of default ``bellwether method'' (a.k.a bellwether0) proposed by Krishna et al. and GENERAL(a.k.a GENERAL\_level0) method. The results of scott-knott test shows that for positive goals such as Recall the GENERAL\_level0 is significantly doing better than Bellwether0 and it is doing as good as or better for precision ans recall. While for negative goals such as ifa\_auc GENERAL\_level0 is doing as good as bellwether0. Although in case of Pf they score different ranks with GENERAL showing higher Pf then bellwether0, it is not by much. Similarly while comparing GENERAL\_level0 with global (which is learning from all the projects) shows although it achieves higher Recall, but it has very high Pf, low precision. Which answers the first part of the question that learning from too many projects do have detrimental effect on predictive power of models.
% Now to answer the second part of the question, that is if learning from too many projects creates conclusion instability, we will look at figure~\ref{fig:FSS_compare}. Figure~\ref{fig:FSS_compare} show the distribution of attributes selected while building a ML model for defect prediction using local models(a.k.a self), the \textcolor{red}{{\bf red bars}} shows the attributes selected by the ``Bellwether Project''. Conclusion instability causes vastly different and often contradicting conclusions to be derived from a data source. This sort of instability is very prevalent in several domains of software engineering. We can see from figure~\ref{fig:FSS_compare}, when learning from local data, each model selected different sets of attributes and resulted in selecting almost all different attributes. That means there is no general conclusion can be drawn for defect prediction models by saying which attributes are important, this results in conclusion instability and that effects the trusts on those models, the insights that can be drawn from them. Which farther affects training and tool development as mentioned in sec~\ref{sec:Motivation}. From these results we can see learning from too many data can cause conclusion instability and thus affecting generality in SE domain. In Summary, we can say
% \begin{figure*}[h]
% \centering
% \includegraphics[width=\linewidth]{figs/FSS_compare.png}
% \caption{Distribution of features selected using self model and ``Bellwether'' model.}
% \label{fig:FSS_compare}
% \end{figure*}
% \begin{figure*}
% \centering
% \begin{subfigure}[b]{0.475\textwidth}
% \centering
% \includegraphics[width=\textwidth]{figs/fss_all0.pdf}
% \caption[Network2]%
% {{\small Network 1}}
% \label{fig:mean and std of net14}
% \end{subfigure}
% \hfill
% \begin{subfigure}[b]{0.475\textwidth}
% \centering
% \includegraphics[width=\textwidth]{figs/fss_all1.pdf}
% \caption[]%
% {{\small Network 2}}
% \label{fig:mean and std of net24}
% \end{subfigure}
% \vskip\baselineskip
% \begin{subfigure}[b]{0.475\textwidth}
% \centering
% \includegraphics[width=\textwidth]{figs/fss_all2.pdf}
% \caption[]%
% {{\small Network 3}}
% \label{fig:mean and std of net34}
% \end{subfigure}
% \quad
% \begin{subfigure}[b]{0.475\textwidth}
% \centering
% \includegraphics[width=\textwidth]{figs/fss_all3.pdf}
% \caption[]%
% {{\small Network 4}}
% \label{fig:mean and std of net44}
% \end{subfigure}
% \caption[ The average and standard deviation of critical parameters ]
% {\small The average and standard deviation of critical parameters: Region R4}
% \label{fig:mean and std of nets}
% \end{figure*}
% \begin{figure*}[]
% \centering
% \includegraphics[width=\linewidth]{figs/fss_1.pdf}
% \caption{Distribution of features selected using self model and ``Bellwether'' model.}
% \label{fig:FSS_level0}
% \end{figure*}
% \subsection*{RQ6: What exactly did we learn from all those projects?}
% \label{sec:rq6}
% Having demonstrated that we can quickly find bellwethers
% from hundreds of software projects, it is appropriate to ask
% what model was learned from all that data. This is an important question for this research since if we cannot show the lessons
% learned from our 627 projects, then all the above is wasted effort.
% Table~\ref{tbl:coefs} shows the weights learned by logistic
% regression after feature selection using the bellwether project
% selected by {\em GENERAL\_level0}. Note that:
% \bi
% \item
% The number of features that appear in Table~\ref{tbl:coefs} is much smaller than the list of features shown in Table~\ref{tbl:metric}.
% That is, our bellwether is reporting that only a few features
% are most important for predicting software defects.
% \item
% Table~\ref{tbl:coefs} is sorted by the absolute value of the weights
% associated with those features. The last two features have near
% zero weights; i.e. they have negligible effect.
% \ei
% Apart from the negligible features, all that is left are NPRM, NPNM, RFC , and CBO. As shown in Table~\ref{tbl:metric}, these features
% all relate to class interface concepts; specifically:
% \bi
% \item
% The number of public and private methods;
% \item
% The average number of methods that respond to an incoming message;
% \item
% Inter-class coupling.
% \ei
% \begin{table}[!t]
% \centering
% \begin{tabular}{|l|l|l|l|} \hline
% Rank & Attr & coef & Odds ratio \\ \hline
% 1 & avg\_NPRM & 2.23 & 9.26 \\ \hline
% 2 & avg\_NPBM & -1.31 & 0.27 \\ \hline
% 3 & max\_NPBM & -1.12 & 0.33 \\ \hline
% 4 & max\_RFC & 0.74 & 2.09 \\ \hline
% 5 & total\_NPBM & -0.70 & 0.50 \\ \hline
% 6 & max\_CBO & -0.64 & 0.53 \\ \hline
% 7 & total\_ModifiedLOC & 0.10 & 1.10 \\ \hline
% 8 & avg\_WMC & 0.07 & 1.07 \\ \hline
% \end{tabular}
% \caption{Importance of coefficients on \textit{log p} from the logistic regression model of the ``Bellwether'' shown in Fig~\ref{fig:FSS_compare}. Here the odds ratio gives the multiplicative change in the odds of being defective for a one-unit increment in the respective variable.}\label{tbl:coefs}
% \end{table}
% \fig{FSS_compare} shows what might be learned with and without
% the methods of this paper. Recall that the learners used in this research used feature selection and logistic regression.
% \begin{itemize}
% \item The gray bars in \fig{FSS_compare} show how often the features
% of Table~\ref{tbl:metric} were selected in the models learned from
% local data using {\em self}.
% \item
% The red bars in \fig{FSS_compare} shows which features
% used in the local models that also appeared in the model learned from the bellwether.
% Note that only a very
% small subset of the features seen in the {\em self} models
% were found useful in the bellwether model of Table~\ref{tbl:coefs}.
% \end{itemize}
% Just to say the obvious:
% when learning
% local models from very many projects, there is a wide range
% of features used in the model.
% It is far easier to
% definitively learn lessons from a much smaller range
% of features, such as those listed in Table~\ref{tbl:coefs}.
% Based on these results
% we can say that for predicting defects, in this sample of features taken from 627 projects:
% In summary:
% \begin{RQ}
% {\respto{2-9} {\color{blue}In terms of defect prediction,
% issues of inter-class interface are paramount.
% Many other issues are far less important, such as file size, depth of inheritance tree, intra-method complexity, and revision history.}}
% \end{RQ}
% Just to say the obvious:
% when learning
% local models from many projects, there is a wide range
% of features used in the model.
% It is far easier to
% definitively learn knowledge from a much smaller range
% of features, such as those listed in Table~\ref{tbl:coefs}.
% Based on these results
% we can say that for predicting defects, in this sample of features taken from 627 projects:
% In summary:
% \begin{RQ}
% {\respto{2-9} {\color{blue}In terms of defect prediction,
% issues of inter-class interface are paramount.
% Many other issues are far less important, such as file size, depth of inheritance tree, intra-method complexity, and revision history.}}
% \end{RQ}
% \begin{RQ}
% {\respto{2-9} {\color{blue}In terms of defect prediction, depending on your goals and project community different attributes are important. We can see in this experiment for risk-adverse development issues of inter-class interface are paramount. While for other cases issues of inter-class interface along with file size, intra-method complexity and revision histories are important}}
% \end{RQ}
% number of features that appear in Table~\ref{tbl:coefs_4} is much smaller than the list of features shown in Table~\ref{tbl:metric}.
% That is, our bellwether is reporting that only a few features
% are most important for predicting software defects.
% \item
% Table~\ref{tbl:coefs} is sorted by the absolute value of the weights
% associated with those features. The last two features have near
% zero weights; i.e. they have negligible effect.
% \ei
% Apart from the negligible features, all that is left are NPRM, NPNM, RFC , and CBO. As shown in Table~\ref{tbl:metric}, these features
% all relate to class interface concepts; specifically:
% \bi
% \item
% The number of public and private methods;
% \item
% The average number of methods that respond to an incoming message;
% \item
% Inter-class coupling.
% \ei
% Similarly if we choose cost-adverse development, then Table~\ref{tbl:coefs_4} and Figure~\ref{fig:FSS_level1} shows the attributes deemed important by logistic regression models on 4 different clusters selected by {\em GENERAL\_level1}. In this case we can see, for each cluster there are specific set of attributes which are important for predicting for defect in that cluster.
% \bi
% \item The number of attributes selected in each cluster is very small, giving us the advantage of selecting a small number of attributes which are really important for a certain project community.
% \item For the selected projects in cluster 1, by analyzing the odds ratios of the important attributes we can say that a higher number of protected methods and instance variables increases the chance of introducing a defect in a class, while an increasing number of immediate subclasses reduces the probability of having a defect in the class.
% \item Similarly, for projects in cluster 2, having a higher number of immediate subclasses or a higher comment-to-code ratio indicates a lower probability of defects, while a higher number of revisions or number of commented lines increases the chance of having a defect.
% \item Cluster 3 projects show similar findings to cluster 1; in addition, higher lines of code increases the chance of defects, while higher cyclomatic complexity lowers the chance of having a defect.
% \item For the selected projects in cluster 4, the model says that a higher number of lines of code, number of revisions, number of revisions on a file and higher coupling between objects increases the chance of defects in a module.
% \ei
% \begin{table}[!t]
% \centering
% \begin{tabular}{|l|l|l|l|} \hline
% Rank & Attr & coef & Odds ratio \\ \hline
% 1 & avg\_NPM & 1.82 & 6.17 \\ \hline
% 2 & avg\_NPBM & 1.29 & 3.62 \\ \hline
% 3 & max\_NPBM & 0.34 & 1.41 \\ \hline
% 4 & max\_RFC & 0.15 & 2.09 \\ \hline
% 5 & total\_NPBM & -0.70 & 0.50 \\ \hline
% 6 & max\_CBO & -0.64 & 0.53 \\ \hline
% 7 & total\_ModifiedLOC & 0.10 & 1.10 \\ \hline
% 8 & avg\_WMC & 0.07 & 1.07 \\ \hline
% \end{tabular}
% \caption{Importance of coefficients on \textit{log p} from the logistic regression model of the ``Bellwether'' shown in Fig~\ref{fig:FSS_level0}. Here the odds ratio shows how a one-unit increment in the respective variable changes the odds of being defective.}\label{tbl:coefs}
% \end{table}
% \begin{table}[!t]
% \centering
% \begin{tabular}{|c|c|c|c|}
% \hline
% \rowcolor[HTML]{C0C0C0}
% Rank & Attribute & Coef & Odd\_ratio \\ \hline
% 1 & avg\_NPM & 2.09 & 8.05 \\ \hline
% 2 & total\_NPM & 0.70 & 2.01 \\ \hline
% 3 & max\_NIV & 0.44 & 1.55 \\ \hline
% 4 & avg\_FANOUT & 0.31 & 1.37 \\ \hline
% \rowcolor[HTML]{EFEFEF}
% 5 & avg\_AddedLOC & 0.15 & 1.16 \\ \hline
% \rowcolor[HTML]{EFEFEF}
% 6 & max\_AddedLOC & 0.11 & 1.11 \\ \hline
% \rowcolor[HTML]{EFEFEF}
% 7 & RCC & 0.06 & 1.06 \\ \hline
% \rowcolor[HTML]{EFEFEF}
% 8 & max\_NIM & -0.12 & 0.89 \\ \hline
% \rowcolor[HTML]{EFEFEF}
% 9 & NFix & -0.12 & 0.88 \\ \hline
% 10 & CL & -0.45 & 0.64 \\ \hline
% 11 & max\_NPM & -0.48 & 0.62 \\ \hline
% 12 & avg\_NOC & -1.76 & 0.17 \\ \hline
% \end{tabular}
% \caption{Importance of coefficients on \textit{log p} from the logistic regression model of the ``Bellwether'' shown in Fig~\ref{fig:FSS_level0}. Here the odds ratio shows how a one-unit increment in the respective variable changes the odds of being defective. The grayed out attributes are the ones which do not have significant importance. This is based on Cohen's delta calculated on the odds ratios of the selected attributes, which is $\approx 0.2$. An odds ratio of 1 means the condition or event under study is equally likely to occur in both groups, so using the Cohen's delta value from the experiment we remove the attributes with an odds ratio between 1.2 and 0.8.}\label{tbl:coefs}
% \end{table}
% Please add the following required packages to your document preamble:
% \usepackage[table,xcdraw]{xcolor}
% If you use beamer only pass "xcolor=table" option, i.e. \documentclass[xcolor=table]{beamer}
% \begin{table*}[]
% \centering
% \begin{tabular}{|c|c|c|c|c|c|c|c|c|}
% \hline
% Rank & \multicolumn{2}{c|}{Cluster 1} & \multicolumn{2}{c|}{Cluster 2} & \multicolumn{2}{c|}{Cluster 3} & \multicolumn{2}{c|}{Cluster 4} \\ \hline
% 1 & Attribute & coef & Attribute & coef & Attribute & coef & Attribute & coef \\ \hline
% 2 & avg\_NOC & -1.77 & avg\_NOC & -0.54 & total\_NPM & 0.53 & max\_ModifiedLOC & -1.11 \\ \hline
% 3 & avg\_NPM & 1.33 & CL & 0.39 & NRev & 0.35 & avg\_ModifiedLOC & 0.99 \\ \hline
% 4 & total\_NPM & 1.10 & RCC & -0.29 & total\_CC & -0.33 & total\_ModifiedLOC & 0.91 \\ \hline
% 5 & CL & -0.44 & NFix & 0.18 & avg\_ModifiedLOC & -0.27 & NFix & 0.49 \\ \hline
% 6 & max\_NIV & 0.42 & max\_NOM & 0.13 & LOC & 0.25 & total\_CBO & 0.29 \\ \hline
% 7 & avg\_FANOUT & 0.21 & avg\_WMC & -0.13 & RCC & -0.22 & NRev & 0.19 \\ \hline
% 8 & max\_NPM & 0.17 & NRev & 0.12 & total\_NIV & 0.18 & LOC & 0.13 \\ \hline
% 9 & max\_AddedLOC & 0.12 & total\_CC & -0.11 & NFix & -0.16 & total\_DeletedLOC & -0.12 \\ \hline
% 10 & avg\_AddedLOC & 0.11 & total\_ModifiedLOC & -0.07 & NStmt & 0.10 & total\_NIM & -0.06 \\ \hline
% 11 & max\_NIM & -0.08 & MNL & -0.07 & total\_RFC & 0.08 & & \\ \hline
% 12 & RCC & 0.06 & & & max\_ModifiedLOC & -0.04 & & \\ \hline
% 13 & NFix & -0.04 & & & & & & \\ \hline
% \end{tabular}
% \caption{Cluster-wise importance of coefficients on \textit{log p} from logistic regression models of the ``Bellwether'' found in GENERAL\_level1. Here the odds ratio shows how a one-unit increment in the respective variable changes the odds of being defective.}
% \label{tbl:coefs_4}
% \end{table*}
% \begin{table}[]
% \begin{tabular}{|c|c|c|c|c|c|}
% \hline
% \rowcolor[HTML]{C0C0C0}
% Attributes & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 & Cluster 5 \\ \hline
% NStmt & 1 & & & 1 & \\ \hline
% NRev & 1 & & & 1 & \\ \hline
% NFix & 1 & 1 & & 1 & \\ \hline
% NPRM & 1 & 1 & & & 1 \\ \hline
% NOM & 1 & & 2 & & \\ \hline
% LOC & 2 & 1 & 1 & 1 & 2 \\ \hline
% NIV & & 1 & & & \\ \hline
% FANIN & & 2 & & & 1 \\ \hline
% NOC & & 1 & 1 & & 1 \\ \hline
% FANOUT & & & 1 & 2 & 1 \\ \hline
% CBO & & 1 & 2 & 1 & \\ \hline
% DIT & & & 2 & & 1 \\ \hline
% CL & & & 2 & & \\ \hline
% RFC & & & & 2 & 1 \\ \hline
% CC & & & & & 2 \\ \hline
% NPM & & 1 & & 1 & 2 \\ \hline
% \end{tabular}
% \end{table} | {
"alphanum_fraction": 0.6761593877,
"avg_line_length": 74.7601683029,
"ext": "tex",
"hexsha": "5dbaaff08d227f763b27a0cddb149a503e8bf684",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-02-12T17:18:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-02-12T17:18:43.000Z",
"max_forks_repo_head_hexsha": "d44e0ca7e04143af14b1549c86c4c4c0860ea153",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "ai-se/BUBBLE_TSE",
"max_forks_repo_path": "culled.tex",
"max_issues_count": 46,
"max_issues_repo_head_hexsha": "d44e0ca7e04143af14b1549c86c4c4c0860ea153",
"max_issues_repo_issues_event_max_datetime": "2020-02-17T20:59:40.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-02-14T15:39:15.000Z",
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "Suvodeep90/BUBBLE_TSE",
"max_issues_repo_path": "culled.tex",
"max_line_length": 1314,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d44e0ca7e04143af14b1549c86c4c4c0860ea153",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "Suvodeep90/BUBBLE_TSE",
"max_stars_repo_path": "culled.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 14227,
"size": 53304
} |
\section{Instruction semantics}
\label{sec:Semantics}
In this section we define the meaning of IAM instructions as a \emph{state
transformer}, i.e. a function from the state space to itself.
\begin{figure}[H]
\centering
$T = \{s_1 \rightarrow s_2 : s_1 \in S, s_2 \in S\}$
\label{}
\caption{A set of all machine state transformers.}
\end{figure}
Every instruction transforms the machine state in its own
way; thus we may define the instruction semantics as a mapping from instructions
to state transformers.
\begin{figure}[H]
\centering
$ \textrm{execute}:I \rightarrow T$
\label{}
\caption{Instruction interpreter.}
\end{figure}
Here $I$ is a finite set represented in Haskell as
the~\mintinline{haskell}{Instruction} data type.
The state-transition representation is the core concept of the developed
Haskell framework. Haskell's concept of a~\emph{state monad} provides a way to
emulate mutable state behaviour in a purely functional programming language
without explicit state threading. Consider the~\mintinline{haskell}{Machine} type,
which is essentially a parametrised state-transition builder:
\begin{figure}[H]
\begin{minted}{haskell}
newtype Machine a = Machine {runMachine :: State MachineState a}
\end{minted}
\caption{The IAM state transformer.}
\end{figure}
Every function with its return type marked as~\mintinline{haskell}{Machine a} is a computation
yielding a value of type~\mintinline{haskell}{a} and possibly altering the state of the machine.
We define the semantics by implementing an interpreter
function~\mintinline{haskell}{execute} that considers every data constructor of
the~\mintinline{haskell}{Instruction} type and builds a computation in
the~\mintinline{haskell}{Machine} monad --- a transformer of the IAM state. This function
is the heart of the simulation and formal verification framework.
\begin{figure}[H]
\begin{minted}{haskell}
execute :: Instruction -> Machine ()
execute (Halt ) = executeHalt
execute (Load rX dmemaddr) = executeLoad rX dmemaddr
execute (LoadMI rX dmemaddr) = executeLoadMI rX dmemaddr
execute (Set rX simm ) = executeSet rX simm
execute (Store rX dmemaddr) = executeStore rX dmemaddr
execute (Add rX dmemaddr) = executeAdd rX dmemaddr
execute (Jump simm ) = executeJump simm
execute (JumpZero simm ) = executeJumpZero simm
\end{minted}
\caption{An interpreter of IAM instruction set.}
\label{execute}
\end{figure}
Let us now consider the semantics of some of the instructions.
The simplest state transformation is halting. If the interpreter sees the halt instruction, it
sets the~\mintinline{haskell}{Halted} flag, preventing any further execution.
The~\mintinline{haskell}{writeFlag} function performs the actual machine state modification,
advancing the clock by one tick (using the~\mintinline{haskell}{delay} function)
and setting the flag.
\begin{figure}[H]
\begin{minted}{haskell}
executeHalt :: Machine ()
executeHalt = writeFlag Halted true
writeFlag :: Flag -> SBool -> Machine ()
writeFlag flag value = do
delay 1
modify $ \state ->
state { flags = writeArray (flags state)
(flagId flag)
value }
\end{minted}
\caption{Semantics of the~\mintinline{haskell}{Halt} instruction.}
\label{haltSemantics}
\end{figure}
As a more involved example, let us define the semantics of the addition instruction.
It operates on a register containing the first term and a memory location referring
to the second one. The~\mintinline{haskell}{executeAdd} function retrieves the terms,
adds them, places the result in the register, and sets the \mintinline{haskell}{Zero}
flag if the result is zero.
\begin{figure}[H]
\begin{minted}{haskell}
executeAdd :: Register -> MemoryAddress -> Machine ()
executeAdd rX dmemaddr = do
x <- readRegister rX
y <- readMemory dmemaddr
let z = x + y
writeFlag Zero (z .== 0)
writeRegister rX z
\end{minted}
\caption{Semantics of the~\mintinline{haskell}{Add} instruction.}
\label{addSemantics}
\end{figure}
Here,~\mintinline{haskell}{readRegister} and~\mintinline{haskell}{readMemory}
are functions dedicated to querying the state of the register bank and the memory.
They are implemented in terms of SBV's array accessors:
\begin{figure}[H]
\begin{minted}{haskell}
readRegister :: Register -> Machine Value
readRegister register = do
currentState <- get
delay 1
pure $ readArray (registers currentState) register
readMemory :: MemoryAddress -> Machine Value
readMemory address = do
currentState <- get
delay 2
pure $ readArray (memory currentState) address
\end{minted}
\caption{Register bank and memory accessors.}
\label{readRegMem}
\end{figure}
Both functions retrieve the current state of the machine, advance the clock by
a predefined number of cycles, and query the machine state for the desired value.
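The dual write accessors are not shown above; a minimal sketch of
\mintinline{haskell}{writeRegister}, mirroring \mintinline{haskell}{writeFlag}, could
look as follows (an illustration rather than the framework's exact code):
\begin{minted}{haskell}
-- Illustrative sketch: write a value into a register, advancing the clock.
writeRegister :: Register -> Value -> Machine ()
writeRegister register value = do
    delay 1
    modify $ \currentState ->
        currentState { registers = writeArray (registers currentState)
                                              register
                                              value }
\end{minted}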
It is worth noting that in the implementation of these functions
we are using Haskell as a~\emph{metalanguage}. We are operating the machine as
a puppet master, using the external meta-notions of addition, comparison and let-binding.
From the machine's point of view, we have unlimited memory and act instantly,
which gives us unlimited modeling power. Later in this section we will see how
hard it is to achieve such power using only IAM's internal entities.
As the most sophisticated example we give here, consider the semantics of the
\mintinline{haskell}{JumpZero} instruction. It uses SBV's symbolic conditional
operation~\mintinline{haskell}{ite} to test if the~\mintinline{haskell}{Zero} flag
is set, and, if so, modifies the machine's instruction counter by a provided offset.
\begin{figure}[H]
\begin{minted}{haskell}
executeJumpZero :: Byte -> Machine ()
executeJumpZero offset = do
zeroIsSet <- readFlag Zero
ic <- instructionCounter <$> get
let ic' = ite zeroIsSet (ic + fromByte offset) ic
modify $ \currentState ->
currentState {instructionCounter = ic'}
\end{minted}
\caption{Semantics of the~\mintinline{haskell}{JumpZero} instruction.}
\label{jumpZeroSemantics}
\end{figure}
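The \mintinline{haskell}{readFlag} function used above is analogous to
\mintinline{haskell}{readRegister}; a minimal sketch (an illustration rather than the
framework's exact code) is:
\begin{minted}{haskell}
-- Illustrative sketch: query a flag of the machine state.
readFlag :: Flag -> Machine SBool
readFlag flag = do
    currentState <- get
    delay 1
    pure $ readArray (flags currentState) (flagId flag)
\end{minted}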
The semantics of the conditional jump instruction turns out to be one of the
most delicate parts of a verification framework that relies on symbolic execution.
It is vital to bear in mind the notion of~\emph{symbolic termination}. In our case
the situation becomes difficult if the instruction counter value becomes
purely symbolic, i.e. the termination of the program starts to depend on a symbolic value.
There are ways to control symbolic termination, but the current implementation of
the verification framework is fragile and relies on the user to prevent such dangerous
situations from happening. Currently, the verification framework uses an external
counter of state transitions that guarantees termination.
The implementations of these three instruction interpreters cover all the interesting
features of the framework's implementation. Please consult the framework
repository~\cite{IAMGithub} for the full source code. | {
"alphanum_fraction": 0.7607344633,
"avg_line_length": 40.6896551724,
"ext": "tex",
"hexsha": "ebfd48ce55111a84539b596bec08f097e82c0e42",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tuura/redfin",
"max_forks_repo_path": "papers/haskell-symposium-2019/tex/semantics.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tuura/redfin",
"max_issues_repo_path": "papers/haskell-symposium-2019/tex/semantics.tex",
"max_line_length": 96,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "8931f3f8cdee7dd877c84563a81ee7f70e92bf3e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tuura/redfin",
"max_stars_repo_path": "papers/haskell-symposium-2019/tex/semantics.tex",
"max_stars_repo_stars_event_max_datetime": "2020-04-05T19:13:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-04-05T19:13:46.000Z",
"num_tokens": 1743,
"size": 7080
} |
\section{Overview}
The purpose of the AOS is to optimize the image quality across the field by controlling the surface figures
of the mirrors (M1M3 and M2) and to maintain the relative position of the three optical systems
(M1M3 mirror, M2 mirror and the camera).
This section provides an overview of the open-loop and closed-loop AOS operations, including
Look-Up Tables,
mirror positioning, hard points, mirror Force Balance Systems,
Control strategy and image quality error budget.
% The mirror surfaces are adjusted by means of figure control actuators that support the mirrors.
% Although they are commonly called ``figure control actuators'' the majority of their load is utilized to
% support the mirrors against the forces of gravity. The relative rigid body positions of M1M3, M2 and
% the camera are controlled through the M2 and Camera hexapods.
% The M2 and the Camera are positioned relative to the M1M3.
% The AOS is principally operated off of a Look-Up Table (LUT).
% The LUT provides open loop, near optimum values for all actuator forces and hexapod positions.
% The LUT values vary principally with elevation angle and secondarily with temperature.
% Although the LUT values are near optimum, as a result of non-repeatable effects,
% they are inadequate to reach the image quality requirements. These effects include temperature
% distributions on the telescope structure and mirrors along with wind loading and hysteresis.
% Corrections are added to the LUT values based on wavefront measurements from the wavefront sensors
% in the camera's focal plane.
% The position of the mirrors relative to their mirror cells is controlled with hard points.
% The proper load is maintained in the hard points by applying distributed loads through the figure control
% actuators in the form of correction added to the LUT values. Since the wavefront correction
% is intended to bend the mirror and apply no net forces and the force balance correction is intended to
% produce specific sets of net forces without bending the mirror, the two systems are compatible and
% can operate simultaneously.
% The force balance offset is added along with the wavefront correction to the LUT values.
% To allow more rapid responses, the force balance is accomplished directly by the mirror support control systems.
% This allows the force balance to accommodate dynamic loading and the quasi-static component of wind loading.
% Control strategy and image quality error budget
% Operational considerations
\section{Active Optics Hardware Performance}
This section gives a very brief summary of AOS hardware performance.
Readers are referred to other construction papers (\cite{PSTN-006}, \cite{PSTN-046}, and \cite{PSTN-011}) for more details.
M1M3 mirror and cell assembly: Mirror Lab testing, M3 in-situ
M2 mirror and cell assembly: testing at Harris and summit
Hexapods and camera rotator: Moog testing and summit re-verification
Corner raft wavefront sensor performance
% mention IOTA?
Alignment System performance
Refer to \cite{PSTN-032} for general LSST system performance.
\section{Curvature Wavefront Sensing}
First, provide a one-paragraph summary of wavefront sensing, touching on
design considerations and the FFT and EXP algorithms; refer to Applied Optics~\cite{2015ApOpt..54.9045X}.
The rest of this section shows some real-world examples,
demonstrating wavefront sensor image processing, source selection, donut de-blending, and wavefront sensing Zernike measurements.
\section{State Estimation and Control}
Here we talk about optical sensitivity matrix,
M1M3 and M2 bending modes, and their evolutions.
Then we talk about the optical state estimator, the optimal controller, sensitivity matrix truncation,
and system observability and controllability. Refer to \cite{2014SPIE.9150E..0HA}
\section{Look-Up Table Construction and Operation}
Here we discuss the evolution of the LUTs throughout construction and commissioning.
We started from finite-element based LUTs, then went through a few rounds of refinements via component-level optical testing, On-Axis calibration, ComCam, and eventually LSSTCam, which covers the full field.
\section{Image Quality and Ellipticity Performance}
Give quick examples from ComCam~\cite{PSTN-033} and LSSTCam~\cite{PSTN-034}.
Also refer to \cite{PSTN-004} and \cite{PSTN-032} for performance.
Discuss AOS performance under various control strategies.
Discuss field variations of the Zernikes, and compare to aberration theory predictions
(\cite{2011PASP..123..812S, 2012SPIE.8444E..55S}).
\section{Simulations and Early Testing with other Telescopes}
Role of simulations in system development, refer to simulation and modeling
papers~\cite{2012SPIE.8444E..4PC,2016SPIE.9911E..18A, 2018SPIE10705E..0PX}.
Early testing with data from other telescopes, for example, \cite{2016SPIE.9906E..4JX}
Reconciliation of simulated performance vs measured performance.
\section{Conclusions}
The AOS has been working reliably since xxx 2021.
Future explorations: forward modeling, machine learning.
| {
"alphanum_fraction": 0.8055720213,
"avg_line_length": 45.5945945946,
"ext": "tex",
"hexsha": "e0c34ea2ce2f417a730e14d733d0a332a3e83c0d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e7b2d22d57d202c23de6034608c14b0d16cfda2b",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "lsst-pst/pstn-008",
"max_forks_repo_path": "body.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e7b2d22d57d202c23de6034608c14b0d16cfda2b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "lsst-pst/pstn-008",
"max_issues_repo_path": "body.tex",
"max_line_length": 202,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e7b2d22d57d202c23de6034608c14b0d16cfda2b",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "lsst-pst/pstn-008",
"max_stars_repo_path": "body.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1148,
"size": 5061
} |
\lesson{2}{Nov 15 2021 Mon (12:15:15)}{Polynomial Transformations}{Unit 3}
\begin{definition}[Theorems]
A theorem is a statement that has been proven from previously established results. I will go over these three \bf{Theorems}:
\begin{itemize}
\item Fundamental Theorem of Algebra
\item Factor Theorem
\item Remainder Theorem
\end{itemize}
\end{definition}
\begin{theorem}[Fundamental Theorem of Algebra]
The Fundamental Theorem of Algebra generally states that the degree of a polynomial is equal to the number of zeros (both real and complex, counted with multiplicity) of the function.
By the \bf{Fundamental Theorem of Algebra}, the polynomial function $f(x) = x^2 - 3x - 28$ has two zeros since the degree of the function is two. To determine these zeros, replace the function notation of $f(x)$ with $0$ and solve by factoring.
To factor, let's use the \bf{Quadratic Equation}:
\begin{align}
    x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \\
    f(x) &= x^2 - 3x - 28 \\
    a &= 1, b = -3, c = -28 \\
    x &= \frac{-(-3) \pm \sqrt{(-3)^2 - 4 \times 1(-28)}}{2 \times 1} \\
    x &= \frac{3 \pm \sqrt{9 + 112}}{2} = \frac{3 \pm \sqrt{121}}{2} = \frac{3 \pm 11}{2} \\
    x &= \frac{3 + 11}{2} \rm{ AND } x = \frac{3 - 11}{2} \\
    x &= 7 \rm{ AND } -4 \rightarrow (x + 4)(x - 7)
\end{align}
The zero product property tells us that for these factors to result in a product of $0$, one or both of them must equal $0$.
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tabular}{cccccc}
& 0 & = & x & + & 4 \\
- & 4 & = & & - & 4 \\
\hline
& x & = & & - & 4 \\
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}[c]{0.5\textwidth}
\centering
\begin{tabular}{cccccc}
& 0 & = & x & - & 7 \\
+ & 7 & = & & + & 7 \\
\hline
& x & = & & + & 7 \\
\end{tabular}
\end{minipage}
The \bf{Zeros} of $f(x) = x^2 - 3x - 28$ are: $x_1 = -4, x_2 = 7$.
\end{theorem}
\begin{marginfigure}
\centering
\incfig{fundamental-theorem-of-algebra}
\sidecaption{$f(x) = x^2 - 3x - 28$ Graphed.}
\label{fig:fundamental-theorem-of-algebra}
\end{marginfigure}
\begin{theorem}[Factor Theorem]
The \bf{Factor Theorem} states that a first degree binomial is a factor of a polynomial function if the remainder, when the polynomial is divided by the binomial, is $0$.
To determine whether $x - 5$ is a factor of the function $f(x) = -4x^3 + 21x^2 - 25$, set up a division problem whereby $-4x^3 + 21x^2 - 25$ is divided by $x - 5$.
\[ \polylongdiv{-4x^3 + 21x^2 - 25}{x - 5} \]
When the function $f(x) = -4x^3 + 21x^2 - 25$ is divided by the binomial $x - 5$, the remainder is $0$. So, $x - 5$ is a factor of the function $f(x) = -4x^3 + 21x^2 - 25$.
\end{theorem}
\begin{theorem}[Remainder Theorem]
The \bf{Remainder Theorem} states that when the opposite of the constant from the binomial divisor is substituted into a function for $x$, the result is the remainder.
When the polynomial function $f(x) = x^4 + 11x^3 + 26x^2 + 15x - 17$ is divided by $x + 8$, the remainder is the final value on the bottom row of the division.
\[ \polylongdiv{x^4 + 11x^3 + 26x^2 + 15x - 17}{x + 8} \]
When the opposite constant in the divisor is substituted into the function, the result will be the same as the remainder in the division process.
\begin{align}
f(x) &= x^4 + 11x^3 + 26x^2 + 15x - 17 \\
f(-8) &= (-8)^4 + 11(-8)^3 + 26(-8)^2 + 15(-8) - 17 \\
&= 4096 + 11(-512) + 26(64) + 15(-8) - 17 \\
&= 4096 - 5632 + 1664 - 120 - 17 \\
&= -9 \leftarrow \rm{Remainder}
\end{align}
\end{theorem}
\subsubsection*{Using these Theorems}
Given a polynomial function $f(x)$ and a number $a$, if $(x - a)$ is a factor of $f(x)$, then $a$ is a $0$ of the polynomial.
The binomial $(x - a)$ can be proved as a factor of $f(x)$ by:
\begin{itemize}
\item Using \bf{Long Division} with $(x - a)$ as the divisor.
\item Using factoring methods when appropriate (grouping, completing the square, \ldots).
\end{itemize}
If a is a $0$ of the polynomial function $f(x)$, then:
\begin{itemize}
\item The graph of $f(x)$ crosses the $x$ axis at $(a,0)$.
\item Substituting $a$ into $(x - a)$ will equal $0$
\item Substituting $a$ into $f(x)$ will result in $f(a) = 0$.
\end{itemize}
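For example, combining these facts with the Factor Theorem example above: since $x - 5$ is a factor of $f(x) = -4x^3 + 21x^2 - 25$, substituting $x = 5$ into $f(x)$ must give a remainder of $0$.
\begin{align}
    f(5) &= -4(5)^3 + 21(5)^2 - 25 \\
    &= -4(125) + 21(25) - 25 \\
    &= -500 + 525 - 25 \\
    &= 0
\end{align}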
\newpage
| {
"alphanum_fraction": 0.5836506159,
"avg_line_length": 40.5909090909,
"ext": "tex",
"hexsha": "60196add1630d19716481de74bdb185e7be64dfc",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad",
"max_forks_repo_licenses": [
"Info-ZIP"
],
"max_forks_repo_name": "SingularisArt/notes",
"max_forks_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Info-ZIP"
],
"max_issues_repo_name": "SingularisArt/notes",
"max_issues_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-2.tex",
"max_line_length": 248,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad",
"max_stars_repo_licenses": [
"Info-ZIP"
],
"max_stars_repo_name": "SingularisArt/notes",
"max_stars_repo_path": "Grade-10/semester-1/hs-algebra-2/unit-3/lesson-2.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-16T07:29:05.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-08-31T12:45:26.000Z",
"num_tokens": 1523,
"size": 4465
} |
\section{File System Implementation}
\paragraph{Disk Structure}
\begin{itemize}
\item \textbf{Partitions}: disk can be subdivided into partitions
\item \textbf{Raw usage}: disks/partitions can be used raw (unformatted) or formatted with file system
\item \textbf{Volume}: entry containing FS
\begin{itemize}
\item tracks that file system's info is in device directory or volume table of contents
\end{itemize}
\item \textbf{FS diversity}: there are general purpose and special purpose FS
\end{itemize}
\paragraph{File Systems --- Logical vs. Physical}
\begin{itemize}
\item \textbf{Logical}: can consist of different physical file systems
\item \textbf{Placement}: file system can be mounted at any place within another file system
\item \textbf{Mounted local root}: bit in i-node of local root in mounted file system identifies this directory as mount point
\end{itemize}
\paragraph{File Systems --- Layers}
\begin{itemize}
\item \textbf{Layer 5}: applications
\item \textbf{Layer 4}: logical file system
\item \textbf{Layer 3}: file-organization module
\item \textbf{Layer 2}: basic file system
\item \textbf{Layer 1}: I/O control
\item \textbf{Layer 0}: devices
\end{itemize}
\paragraph{File Systems --- Virtual}
\begin{itemize}
\item \textbf{Principle}: provide object-oriented way of implementing file systems
\begin{itemize}
\item same API used for different file system types
\end{itemize}
\end{itemize}
\begin{figure}[h]\centering\label{VirtualFileSystem}\includegraphics[width=0.33\textwidth]{VirtualFileSystem}\end{figure}
\paragraph{Files --- Implementation}
\begin{itemize}
\item \textbf{Meta data} must be tracked:
\begin{itemize}
\item which logical block belongs to which file?
\item block order?
\item which blocks are free for next allocation?
\end{itemize}
\item \textbf{Block identification}: blocks on disk must be identified by FS (given logical region of file)
\begin{itemize}
\item[$ \to $] meta data needed in \emph{file allocation table}, \emph{directory} and \emph{inode}
\end{itemize}
\item \textbf{Block management}: creating/updating files might imply allocating new/modifying old disk blocks
\end{itemize}
\paragraph{Allocation --- Policies}
\begin{itemize}
\item \textbf{Preallocation}:
\begin{itemize}
\item \emph{problem}: need to know maximum file size at creation time
\item often difficult to reliably estimate maximum file size
\item users tend to overestimate file size to avoid running out of space
\end{itemize}
\item \textbf{Dynamic allocation}: allocate in pieces as needed
\end{itemize}
\paragraph{Allocation --- Fragment size}
\begin{itemize}
\item \textbf{Extremes}:
\begin{itemize}
\item fragment size = length of file
\item fragment size = smallest disk block size (= sector size)
\end{itemize}
\item \textbf{Trade-offs}:
\begin{itemize}
\item \emph{contiguity}: speedup for sequential accesses
\item \emph{small fragments}: larger tables needed to manage free storage and file access
\item \emph{large fragments}: improve data transfer
\item \emph{fixed-size fragments}: simplifies space reallocation
\item \emph{variable-size fragments}: minimizes internal fragmentation, can lead to external fragmentation
\end{itemize}
\end{itemize}
\paragraph{Allocation --- File space}
\begin{itemize}
\item \textbf{Contiguous}
\item \textbf{Chained}
\item \textbf{Indexed}:
\begin{itemize}
\item fixed block fragments
\item variable block fragments
\end{itemize}
\end{itemize}
\begin{figure}[h]\centering\label{AllocationOverview}\includegraphics[width=0.33\textwidth]{AllocationOverview}\end{figure}
\paragraph{Allocation --- Contiguous}
\begin{itemize}
\item \textbf{Principle}: array of $ n $ contiguous logical blocks reserved per file (to be created)
\item \textbf{Periodic compaction}: overcome external fragmentation
\end{itemize}
\begin{figure}[h]\centering\label{AllocationContiguous}\includegraphics[width=0.33\textwidth]{AllocationContiguous}\end{figure}
\paragraph{Allocation --- Chained}
\begin{itemize}
\item \textbf{Principle}: linked list of logical blocks per file
\begin{itemize}
\item FAT or directory contains address of first file block
\item[$ \to $] \emph{no external fragmentation}: any free block can be added to chain
\end{itemize}
\end{itemize}
\begin{figure}[h]\centering\label{AllocationChained}\includegraphics[width=0.33\textwidth]{AllocationChained}\end{figure}
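As a simplified illustration, reading a chained file amounts to following the list of block numbers, starting from the first block recorded in the FAT or directory:
\begin{verbatim}
# Simplified sketch: follow a FAT-style chain of block numbers.
def read_file_chained(fat, blocks, first_block, EOF=-1):
    data, block = [], first_block
    while block != EOF:
        data.append(blocks[block])   # data stored in this block
        block = fat[block]           # FAT entry: next block, or EOF marker
    return b"".join(data)
\end{verbatim}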
\paragraph{Allocation --- Indexed}
\begin{itemize}
\item \textbf{Principle}: FAT contains one-level index table per file
\begin{itemize}
\item \emph{generalization}: $ n $-level index table
\item index has one entry per allocated file block
\item FAT contains block number for index
\end{itemize}
\end{itemize}
\begin{figure}[h]\centering\label{AllocationIndexed}\includegraphics[width=0.33\textwidth]{AllocationIndexed}\end{figure}
\paragraph{Directories --- Implementation}
\begin{itemize}
\item \textbf{Simple directory} (MS-DOS):
\begin{itemize}
\item fixed-size entries
\item disk addresses + attributes in directory entry
\end{itemize}
\item \textbf{i-node reference directory} (UNIX):
\begin{itemize}
\item entry refers to i-node containing attributes
\end{itemize}
\end{itemize}
\paragraph{Disk Blocks --- Buffering}
\begin{itemize}
\item \textbf{Buffering}: disk blocks buffered in main memory
\item \textbf{Access}: buffer access done via hash table
\begin{itemize}
\item blocks with same hash value are chained together
\end{itemize}
\item \textbf{Replacement}: LRU
\item \textbf{Management}: free buffer is managed via doubly-linked list
\end{itemize}
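As a simplified illustration of this scheme (a real kernel chains buffers on hash buckets and keeps free buffers on a doubly-linked list):
\begin{verbatim}
from collections import OrderedDict

class BufferCache:
    """Toy buffer cache: hash lookup by block number, LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()          # block_no -> data (recency order)

    def get(self, block_no, read_from_disk):
        if block_no in self.buffers:          # hit: mark most recently used
            self.buffers.move_to_end(block_no)
            return self.buffers[block_no]
        data = read_from_disk(block_no)       # miss: read block from disk
        if len(self.buffers) >= self.capacity:
            self.buffers.popitem(last=False)  # evict least recently used
        self.buffers[block_no] = data
        return data
\end{verbatim}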
\paragraph{File Systems --- Journaling}
\begin{itemize}
\item \textbf{Principle}: record each update to file system as \emph{transaction}
\begin{itemize}
\item written to log
\end{itemize}
\item \textbf{Committed} transaction = written to log
\begin{itemize}
\item[$ \to $] \emph{problem}: file system may not yet be updated
\end{itemize}
\item \textbf{Writing} transactions from log to FS is asynchronous
\item \textbf{Modifying} FS $ \to $ transaction removed from log
\item \textbf{Crash} of file system $ \to $ remaining transactions in log must still be performed
\end{itemize}
\paragraph{File Systems --- Log-structured}
\begin{itemize}
\item \textbf{Principle}: use disk as circular buffer
\begin{itemize}
\item write all updates (including i-nodes, meta data and data) to end of log
\end{itemize}
\item \textbf{Buffering}: all writes initially buffered in memory
\item \textbf{Writing}: periodically write within 1 segment (1 MB)
\item \textbf{Opening}: locate i-node, find blocks
\item \textbf{Clearing}: reclaim segments at the other end whose data is no longer used
\end{itemize} | {
"alphanum_fraction": 0.7435448578,
"avg_line_length": 39.8546511628,
"ext": "tex",
"hexsha": "ef7f8cfb6a759bcd8d2a5213da7952a51d9009c9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4e9d784cf8a615a98f1c6f07d730fe855a920b57",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Jintzo/OS",
"max_forks_repo_path": "chapters/18_FileSystemImplementation.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "4e9d784cf8a615a98f1c6f07d730fe855a920b57",
"max_issues_repo_issues_event_max_datetime": "2017-12-31T11:57:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-12-02T12:22:38.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Jintzo/OS",
"max_issues_repo_path": "chapters/18_FileSystemImplementation.tex",
"max_line_length": 128,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4e9d784cf8a615a98f1c6f07d730fe855a920b57",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Jintzo/OS",
"max_stars_repo_path": "chapters/18_FileSystemImplementation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1901,
"size": 6855
} |
%!TEX root = thesis.tex
\chapter{Proof of Concept}
\label{chapter:prototyp}
| {
"alphanum_fraction": 0.7402597403,
"avg_line_length": 15.4,
"ext": "tex",
"hexsha": "2a15412db2357c62e758a76645ef8e78f7efacfc",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-11-15T10:48:03.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-02-07T10:03:35.000Z",
"max_forks_repo_head_hexsha": "62e6fb4d801fa9aaa84adc5e2150d38ff6a1eacd",
"max_forks_repo_licenses": [
"Beerware"
],
"max_forks_repo_name": "T1m1/customized-latex-htwg-template-master",
"max_forks_repo_path": "04_prototype/project.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "62e6fb4d801fa9aaa84adc5e2150d38ff6a1eacd",
"max_issues_repo_issues_event_max_datetime": "2021-05-30T18:47:32.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-28T12:08:30.000Z",
"max_issues_repo_licenses": [
"Beerware"
],
"max_issues_repo_name": "T1m1/customized-latex-htwg-template-master",
"max_issues_repo_path": "04_prototype/project.tex",
"max_line_length": 26,
"max_stars_count": 10,
"max_stars_repo_head_hexsha": "62e6fb4d801fa9aaa84adc5e2150d38ff6a1eacd",
"max_stars_repo_licenses": [
"Beerware"
],
"max_stars_repo_name": "T1m1/customized-latex-htwg-template-master",
"max_stars_repo_path": "04_prototype/project.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-20T15:43:16.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-02-07T10:03:36.000Z",
"num_tokens": 22,
"size": 77
} |
\subsubsection{\stid{1.12} Runtime System for Application-Level Power Steering on Exascale Systems}
\paragraph{Overview}
Power remains a critical constraint for Exascale. As we design supercomputers at larger scales, power becomes an expensive and limited resource. Inefficient management of power leads to added operational costs as well as low scientific throughput. Although hardware advances will contribute a certain amount towards achieving high energy efficiency, they will not be sufficient, creating a need for a sophisticated system software approach. Significant advances in software technologies are thus required to ensure that Exascale systems achieve high performance with effective utilization of available power. Distributing available power to nodes while adhering to system, job and node constraints involves complex decision making in software.
The ECP PowerSteering project is developing a \emph{job-level} power management runtime system that will optimize performance of Exascale scientific applications transparently under power and/or energy constraints. Existing research efforts, including Conductor and Adagio, are being actively integrated into Intel's GEOPM runtime system, an ongoing open source effort led by Intel. This integration expands GEOPM?s capabilities with the latest research while providing a production-grade, industry-supported open source solution. By developing new platform plugins, this project also supports upcoming target platforms and paradigms for ECP beyond the Intel architectures, and incorporates task-based programming models such as Legion. By being both configurable and cross-platform, GEOPM will help applications achieve maximum performance under a power constraint.
This project is essential for ECP because it enables Exascale applications to operate safely with optimal performance under power and energy constraints. This project is also essential for building a sophisticated hierarchical software stack proposed by the ECP Argo and ECP Flux projects. Additionally, the project fulfills an essential need for ECP by enabling vendor and academic collaborations that provide for accelerated adoption of best practices and better interoperability at scale. By leveraging the GEOPM software developed in this project, compute centers can safely operate under power and energy constraints while maximizing performance and scientific throughput.
\paragraph{Key Challenges}
Power management in software is challenging due to the dynamic phase behavior of applications, processor manufacturing variability, and the increasing heterogeneity of node-level components. While several scattered research efforts exist, a majority of these efforts are site-specific, require substantial programmer effort, and often result in suboptimal application performance and system throughput. Additionally, these approaches are not production-ready and are not designed to cooperate in an integrated manner. A holistic, generalizable and extensible approach is still missing in the HPC community, and a goal for the ECP PowerSteering project is to provide a solution for this technology gap.
Another set of challenges come from portability issues. Existing solutions are targeted toward specific Intel microarchitectures as well as programming models. Additionally, some of the existing solutions violate the specified power budget before reaching a steady state, resulting in power fluctuations as well as unsafe operation. As part of this project, we strive to provide portability as well as safe operation using both hardware-level and application-level information for adaptive configuration selection and critical path analysis.
\paragraph{Solution Strategy}
Our solution is to develop a job-level runtime system (Intel GEOPM) that can operate transparently to user applications, and can also cooperate with HPC resource managers and node-level tools. We are taking a two-pronged approach. First, we are working toward consolidating existing research efforts from the community to develop high-quality plugins for GEOPM that can be deployed at Exascale. In parallel, we are developing new algorithms in GEOPM to address other Exascale challenges such as heterogeneity and variation. While GEOPM already provides some baseline algorithms, the existing capabilities are not programmer transparent and not sufficient for Exascale. Our advanced algorithms analyze critical paths of scientific applications transparently, balance power between different components intelligently, and provide mechanisms to capture fine-grained application semantics through Caliper. Additionally, these advanced algorithms will support non-Intel architectures such as IBM/NVIDIA and novel task-based programming models such as Legion, which are critical for portability in the future. We also intend for GEOPM to be a part of a holistic power management stack that does dynamic, hierarchical power management and works closely with resource managers such as SLURM or Flux. In order to accomplish portability and smooth integration, we are closely collaborating with ECP Argo and ECP Flux projects, with University of Arizona, and with Intel and IBM.
\paragraph{Recent Progress}
Recently, we achieved two milestones in March 2018. The first was to update the power model for our plugin to incorporate application phases and manufacturing variation, and the second milestone was to support task-based programming models in GEOPM. We developed an offline power/performance model based on processor characterization over codes with a broad spectrum of compute- and memory-boundedness at different processor power caps. We also updated the configuration space exploration to use this model to adjust per-MPI rank performance measurements over each computation phase.
We are now working on testing and evaluation of our framework with the new model and collecting new data on the Quartz cluster at LLNL. Some early results are presented in Figure \ref{fig:MG}. The figure shows the compute phase of MG.C, where the runtime system uses a non-linear power-performance model during the configuration exploration phase to account for manufacturing variability. For our second milestone, we developed an MPI + Legion + GEOPM interoperability benchmark that allows us to use GEOPM for dynamic power management of task-based models.
\begin{figure}[t]
\centering
\includegraphics[scale = 0.6]{projects/2.3.1-PMR/2.3.1.12-Power-Steering/power_model.png}
\caption{Non-linear power-performance model in use for MG.C during configuration exploration phase for the runtime system}
\label{fig:MG}
\end{figure}
\paragraph{Next Steps}
We will continue our research and development work as planned toward the September 2018 milestones. More specifically, we are working on porting GEOPM to non-Intel architectures (IBM Power8 or Power9, and NVIDIA GPUs are candidates). We will also enhance our variation-aware and phase aware model with advanced machine learning and statistical techniques. We also plan to improve the overhead of the configuration exploration function by selecting configurations that minimize sampling overhead without a significant impact on the prediction accuracy of the power model especially at lower power budgets.
One of our current challenges is to gain access to non-Intel architectures such as IBM Power8/Power9 and NVIDIA GPUs with elevated privileges that are required for power management. We are working with LLNL to gain such access. Additionally, for our Legion work, we are working toward understanding mappers as well as task distribution better in order to determine the spatial and temporal aspects of power management with GEOPM plugins. We are also looking into S3D application code as part of our Legion power model exploration. Lastly, we are looking into adding Spack support for installing GEOPM.
| {
"alphanum_fraction": 0.832106599,
"avg_line_length": 212.972972973,
"ext": "tex",
"hexsha": "94f5c44053dfb0b0b7ca9007df43300fb633a124",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_forks_repo_path": "projects/2.3.1-PMR/2.3.1.12-Power-Steering/2.3.1.12-Power-Steering.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_issues_repo_path": "projects/2.3.1-PMR/2.3.1.12-Power-Steering/2.3.1.12-Power-Steering.tex",
"max_line_length": 1470,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_stars_repo_path": "projects/2.3.1-PMR/2.3.1.12-Power-Steering/2.3.1.12-Power-Steering.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1465,
"size": 7880
} |
\documentclass[11pt]{article}
\usepackage{common}
\usepackage{booktabs}
\usepackage{wrapfig}
\usepackage{titling}
\usepackage{titlesec}
\usepackage{float}
\usepackage[margin=0.99in]{geometry}
\titlespacing\section{0pt}{4pt}{2pt}
\setlength{\parskip}{0.35em}
\setlength{\droptitle}{-5em}
\pagestyle{plain}
\title{Practical 2: Classification\\ Classifying Malicious Software}
\author{Antonio Copete, [email protected], Camelot.ai Copete \\
Fangli Geng, [email protected], Camelot.ai fangligeng\\
Junran Yang, [email protected], Camelot.ai junran}
\begin{document}
\maketitle{}
\section{Technical Approach}
The purpose of this practical was to classify a test set of 3,724 files in XML format into 15 classes of malware, using a training set of 3,086 labeled files. Our general approach consisted of the following steps:
\begin{enumerate} %[leftmargin=1cm]
\item \textbf{Feature Engineering}\\
Initial inspection showed the XML data files to be structured hierarchically into a series of \emph{processes}, in turn composed of a series of \emph{threads}, in turn composed of a series of \emph{system calls}. Each of these was identified by a \emph{tag} as well as a number of \emph{attributes} indicated as sets of key-value pairs. Our simplest approach began by drawing $n$-grams (ordered sets) of up to 4 consecutive system call tags, initially without attributes, and our initial exploratory analysis tested the hypothesis that the presence and order of such system call tags were correlated with the distinct anomalous behavior found in different types of malware.
More advanced versions of this approach found in the literature \cite{canali} also suggested drawing features based on $n$-bags (unordered non-consecutive sets) as well as $n$-tuples (ordered non-consecutive sets) of system calls and processes, as well as including attributes such as file names, file sizes, URLs, execution times, etc. We found some of these attributes to be oddly predictive of specific classes, such as the file \verb|maxtrox.txt| always being related to malware of class VB, and \verb|GrayPigeon_Hacker.com.cn| always indicating malware of class Hupigon. In addition, we also took the total number of processes and threads into account.
To analyze the system call tags (including 4-grams, 29,813 features in total) and predictive attributes (110,465 features in total), we tried two methods: using either raw counts or TF-IDF (term frequency---inverse document frequency) values \cite{canzanese}. The TF-IDF value measures the relative frequency of a token across the data set; it is proportional to the number of times a token appears in a given file, offset by the frequency of the token across all files, which helps adjust for the absolute frequency of a given token in the whole dataset. The scikit-learn module \verb|sklearn.feature_extraction.text| includes the objects \verb|CountVectorizer| and \verb|TfidfVectorizer|, which tokenize large strings of tags (with or without arguments) into $n$-grams, and then transform them into counts and TF-IDF values in the resulting feature matrix, respectively (a sketch of this step is shown after this list).
\item \textbf{Feature Selection}\\
Since we derived roughly 140,000 features from a training set of just over 3,000 files, it was easy for our model to overfit the data. To avoid overfitting, we used the sklearn module \verb|SelectFromModel| together with the classifier \verb|sklearn.ensemble.RandomForestClassifier| to reduce the feature set, using cross-validation to determine the features we wanted to keep. More specifically, we ran a loop that repeatedly dropped features with importance of less than 10\% of the mean importance among all features, monitoring the mean score from 5-fold cross-validation in each iteration (see the sketch after this list). The highest score occurred with around 5,000 features, roughly 1/20 of the original feature count.
% Antonio: I also dropped features with low variance across both the training and test set (unsupervised feature selection), and used a similar strategy to drop features according to RF importance. I went more conservatively and dropped only features that had importance = 0 in all 5 cross-validation sets. But Fangli's strategy is good, so we can leave it at that.
\item\textbf{Model Selection}\\
We began by exploring a number of classifiers for model selection from scikit-learn, including \verb|RandomForestClassifier| (Random Forest), \verb|svm.SVC| and \verb|svm.LinearSVC| (Support Vector), and \verb|linear_model.SGDClassifier| (Stochastic Gradient Descent), with default hyper-parameters. The Random Forest classifier yielded the best score of 0.78105 on the test data, compared to 0.57211 for SVC, 0.76789 for Linear SVC, and 0.77158 for SGD. We thus selected Random Forest as our main model and proceeded to tune it in the subsequent steps.
In addition, we found the class distribution in the training set to be heavily imbalanced. We therefore took a sequential approach to model training, dividing the data into 4 main categories: None (52.14\%), Swizzor (17.56\%), VB (12.18\%) and Others (18.12\%). We first trained the model on each of the first 3 categories, and then separately on the minor categories. This approach resulted in an improvement of 0.02 in the test score, compared to training on all categories jointly (0.81211 vs 0.79263). We tried the same approach on a Neural Network for comparison, which yielded a score of 0.79105 for training on separate categories, again confirming Random Forest as the better model to choose.
Late in the process, we also found a bug in the original code we were provided for extracting system call tags, which was causing the tags for every other thread to be skipped altogether. Our final results are based on the corrected version of feature extraction, but due to time constraints we limited ourselves to 4-gram tokens of system call tags without attributes. With the corrected feature set and a Random Forest classifier trained jointly across all categories, we obtained an improvement of 0.015 in the test data score (0.80789 vs 0.79263).
In our last attempt at model experimentation, we were inspired by a Microsoft Malware Classification Challenge (BIG2015)\footnote{https://www.kaggle.com/c/malware-classification/discussion/13897} to utilize xgboost and a semi-supervised method to detect malware from assembly code. Gradient Boosting is different from Random Forest in that it focuses more on reducing bias rather than variance. After grid searching for hyperparameters, the highest accuracy we obtained with xgboost was 0.82526. Furthermore, we tried semi-supervised Gradient Boosting, aiming to further reduce the bias of the model by incorporating information from the test dataset into the training dataset. We first generated pseudo-labels for the test set by selecting the most likely classification based on our best model of the training set alone. Then we predicted on the test set again in a cross-validation fashion using both the training data and the test data.
As an example, consider a test set that is split into 4 folds: A, B, C and D. We draw a new training set that is composed of test folds A, B and C with their pseudo-labels, in addition to the original training set, and we then predict on test set fold D. The same method is used to predict on A, B and C. However, this approach did not provide us with decent results. We suspect that, given the relatively low accuracy of the pseudo-labels (0.82526), the added test data introduced more incorrect information than it could offer in reducing bias.
\end{enumerate}
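The listing below is a minimal sketch of the tokenization and feature selection loop described in steps 1--2 above; it is illustrative rather than our exact code. The \verb|load_tag_strings| and \verb|load_labels| helpers are hypothetical placeholders for the XML parsing, and the attribute features and category split are omitted.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score

# Hypothetical helpers: one space-separated string of system call tags
# per file, plus the corresponding class labels.
docs, y = load_tag_strings("train"), load_labels("train")

# 1- to 4-grams of system call tags, weighted by TF-IDF.
vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(1, 4),
                             token_pattern=r"\S+")
X = vectorizer.fit_transform(docs)

# Iteratively drop features below 10% of the mean importance, tracking
# the mean 5-fold cross-validation score of each candidate feature set.
best_score, best_X = 0.0, X
while True:
    clf = RandomForestClassifier(n_estimators=400, n_jobs=-1, random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    if score > best_score:
        best_score, best_X = score, X
    selector = SelectFromModel(clf.fit(X, y), threshold="0.1*mean", prefit=True)
    X_reduced = selector.transform(X)
    if X_reduced.shape[1] == X.shape[1]:   # nothing left to drop
        break
    X = X_reduced
\end{verbatim}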
\section{Results}
\begin{table}[htbp]
\centering
\begin{tabular}{cclccccc}
\toprule
Rank & \multicolumn{1}{p{1cm}}{\centering Max Depth} & \multicolumn{1}{p{0.8cm}}{\centering $\eta$} &
\multicolumn{1}{p{2cm}}{\centering Min. Child Weight} & colsample & \multicolumn{1}{p{2cm}}{\centering CV Mean Error} &
\multicolumn{1}{p{2cm}}{\centering Best Num. Round} & Final Score\\
\midrule
1 & 4 & 0.15 & 1 & 0.5 & 0.093332 & 49 & 0.82158\\
2 & 6 & 0.2 & 1 & 1 & 0.093334 & 25 & 0.811054\\
3 & 4 & 0.1 & 1 & 0.5 & 0.093983 & 68 & 0.82526\\
4 & 4 & 0.15 & 2 & 0.5 & 0.094306 & 54 & - - -\\
5 & 6 & 0.15 & 1 & 1 & 0.094953 & 31 & - - -\\
\bottomrule
\end{tabular}
\caption{\label{tab:results} Result tables for the best 5 cross-validation scores by Gradient Boosting with grid search}
\end{table}
Table \ref{tab:results} shows the best 5 cross-validation scores by Gradient Boosting with grid search. The hyperparameters we tuned were \verb|max_depth| (the maximum depth of a tree, same definition as GBM, set as [2, 4, 6]), $\eta$ (analogous to the learning rate in GBM, set as [0.05, 0.1, 0.15, 0.2]), \verb|min_child_weight| (minimum sum of weights of all observations required in a child, set as [1, 2]), and \verb|colsample_bytree| (the subsample ratio of columns for each split, in each level, set as [0.5, 1]). These 48 possible configurations were drawn from the values most commonly used in practice. We also tested models with higher and lower numbers of boosting rounds, optimizing the result by cross-validation.
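One way to carry out such a grid search is with xgboost's built-in cross-validation, as in the sketch below; it is illustrative, and assumes \verb|X| and \verb|y| are the selected feature matrix and integer-encoded class labels from the pipeline above, with the grids taken from the text.
\begin{verbatim}
import itertools
import xgboost as xgb

dtrain = xgb.DMatrix(X, label=y)   # y: integer-encoded labels (0..14)

grid = itertools.product([2, 4, 6],               # max_depth
                         [0.05, 0.1, 0.15, 0.2],  # eta
                         [1, 2],                  # min_child_weight
                         [0.5, 1])                # colsample_bytree
results = []
for max_depth, eta, min_child_weight, colsample in grid:
    params = {"objective": "multi:softmax", "num_class": 15,
              "max_depth": max_depth, "eta": eta,
              "min_child_weight": min_child_weight,
              "colsample_bytree": colsample}
    cv = xgb.cv(params, dtrain, num_boost_round=200, nfold=5,
                metrics="merror", early_stopping_rounds=20, seed=0)
    best_round = int(cv["test-merror-mean"].idxmin())
    results.append((cv["test-merror-mean"].min(), best_round,
                    max_depth, eta, min_child_weight, colsample))

for row in sorted(results)[:5]:   # five configurations with lowest CV error
    print(row)
\end{verbatim}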
Figures \ref{fig:results_1}--\ref{fig:results_3} show the results of the tuning process for the Random Forest model for 2 hyperparameters: the maximum tree depth \verb|max_depth|, as well as the number of estimators (trees) \verb|n_estimators|. Results are shown for a validation set (blue) drawn from 20\% of the training data in a stratified fashion (i.e. preserving class proportions), as well as for the remaining training data (red).
\begin{figure}[H]
\minipage{0.48\textwidth}
\includegraphics[width=\linewidth]{hyperparameter_tuning_1}
\caption{Max depth range from 10 to 235, and number of estimators range from 10 to 460.}
\label{fig:results_1}
\endminipage\hfill
\minipage{0.48\textwidth}
\includegraphics[width=\linewidth]{hyperparameter_tuning_2}
\caption{Max depth range from 10 to 45, and number of estimators range from 10 to 460.}
\label{fig:results_2}
\endminipage\hfill
\minipage{1\textwidth}
\begin{center}
\includegraphics[width=0.48\linewidth]{hyperparameter_tuning_3}
\caption{Max depth range from 40 to 49, and number of estimators range from 400 to 490.}
\label{fig:results_3}
\end{center}
\endminipage
\end{figure}
We first ran the experiment in a rough region (Figure~\ref{fig:results_1}): \verb|max_depth| ranged from 10 to 235 with a step of 25, while \verb|n_estimators| ranged from 10 to 460 with a step of 50. From this experiment we found that when the depth is larger than 60, increasing the depth of the decision trees no longer contributes to the train or test score. That is because we only have a training set of $\sim3,000$ samples, and making the trees deeper than the square root of the number of samples does not contribute significantly to their accuracy. We then ran 2 experiments with finer sampling (Figures~\ref{fig:results_2} and \ref{fig:results_3}). From these results we found that we should make the depth as close to the square root of the number of samples as possible, while choosing the number of estimators to be around 400, since larger values no longer improve the performance of the model significantly.
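For reference, the coarse sweep described above can be written as the following sketch, assuming \verb|X| and \verb|y| are the selected features and labels and using the stratified 20\% validation split described earlier.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            stratify=y, random_state=0)
scores = {}
for max_depth in range(10, 236, 25):
    for n_estimators in range(10, 461, 50):
        clf = RandomForestClassifier(n_estimators=n_estimators,
                                     max_depth=max_depth,
                                     n_jobs=-1, random_state=0)
        clf.fit(X_tr, y_tr)
        scores[(max_depth, n_estimators)] = (clf.score(X_tr, y_tr),    # train
                                             clf.score(X_val, y_val))  # validation
\end{verbatim}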
\section{Discussion}
Our general strategy focused first on feature generation, first tokenizing the data files into tags of system calls and processes ---both with and without attributes---, and then drawing features based on $n$-grams (ordered consecutive sets) of tokens, and later generalizing to $n$-bags (unordered non-consecutive sets) and $n$-tuples (ordered non-consecutive sets). Different values of $n$ (1, 2, 3, 4, and 10) were considered in the process. The features themselves were either the absolute number of occurrences of the given subsets of tokens, or their TF-IDF values to express them as a proportion of their overall occurrence. While some attributes such as certain file names proved to be of some predictive value, in the end we obtained our best results from 4-grams of system call tags.
Our model selection process considered several classifiers, including Random Forest, C-Support Vector, Linear-Support Vector and Stochastic Gradient Descent, which we tried on different subsets of the training data and evaluated using 5-fold cross-validation. The experimentation process showed Random Forest to be the most robust model, and we proceeded to tune its hyperparameters (the number of trees in the model and their maximum depth) in order to achieve the best possible results. We also performed feature reduction by iteratively discarding features of low importance and monitoring the cross-validated scores, resulting in an optimal set of $\sim5,000$ features from an initial set of $\sim140,000$ features. One further refinement to our approach consisted of using a 2-step process in training our model, first training on 4 broad categories to predict the 3 most frequent classes, and then predicting separately on the 12 remaining minor classes.
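The 2-step process can be sketched as follows; this is an illustration in which \verb|X|, \verb|y| and \verb|X_test| denote the training features, training labels and test features, and the mapping of the 15 classes onto the 4 broad categories is assumed.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Collapse the 15 labels into the 4 broad categories.
y = np.asarray(y)
coarse_y = np.array([c if c in ("None", "Swizzor", "VB") else "Others"
                     for c in y])

coarse = RandomForestClassifier(n_estimators=400, random_state=0)
coarse.fit(X, coarse_y)                                      # step 1 model
minor = RandomForestClassifier(n_estimators=400, random_state=0)
minor.fit(X[coarse_y == "Others"], y[coarse_y == "Others"])  # step 2 model

pred = coarse.predict(X_test).astype(object)   # step 1: broad category
mask = pred == "Others"
pred[mask] = minor.predict(X_test[mask])       # step 2: refine minor classes
\end{verbatim}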
Our best result on the Camelot public leaderboard was a score of 0.81632 obtained from the tuned Random Forest classifier; we later improved on it by experimenting with a semi-supervised Gradient Boosting model, which gave a score of 0.82263. For the final leaderboard scored on the private dataset, our best score was 0.81798, which placed us among the top 15 scores. Our placement turned out to be much better on the private dataset than on the public one, which suggests that our model was robust and did not overfit the data as much as other teams' models did.
As for further improvements with more time for this practical, we would first have experimented further with features beyond $n$-grams of system calls (such as $n$-bags, $n$-tuples, and combinations thereof) to build richer sets of features. Also, given the promise of the semi-supervised Gradient Boosting approach we discovered late in the game, we would have done further tuning of its parameters, as well as training it separately on different categories of data in a 2-step process, as we did with Random Forest.
\newpage
\nocite{rozenberg}
\bibliographystyle{apalike}
\bibliography{P2_dualspace}
\end{document}
\documentclass[../main.tex]{subfiles}
\begin{document}
\begin{center}
\centering
{
\includegraphics[width=1.0\linewidth]{vmv}
}
\captionof{figure}{VMV-VGG architecture \cite{MIPR}.}
\label{SHREC19}
\end{center}
The View and Majority Vote based 3D scene retrieval algorithm (VMV) utilizes the VGG-16 architecture, as illustrated in \textbf{Fig.}~\ref{SHREC19}.
\subsubsection{3D Scene View Sampling}
Each 3D scene model is placed inside a 3D sphere and observed by an automated QMacro that captures 13 scene views. Of these 13 unique perspectives, 12 are uniformly sampled along the equator of the sphere, while the last view is taken from a top-down perspective, as shown in \textbf{Fig.}~\ref{apartment_building_outdoor}.
\begin{center}
\centering
{
\includegraphics[width=1.0\linewidth]{apartment_building_outdoor.pdf}
}
\captionof{figure}{An example of the 13 sampled scene view images of an apartment scene model \cite{MIPR}.}
\label{apartment_building_outdoor}
\end{center}
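For concreteness, the 13 viewing directions described above can be generated as 12 equally spaced azimuths on the equator plus one top-down view; the snippet below is our own sketch of that sampling scheme, not the authors' QMacro.
\begin{verbatim}
# Sketch: 12 equatorial viewing directions (every 30 degrees) plus
# one top-down view, expressed as unit camera-direction vectors.
import math

views = [(az, 0.0) for az in range(0, 360, 30)]  # (azimuth, elevation) in deg
views.append((0.0, 90.0))                        # top-down view
for az, el in views:
    x = math.cos(math.radians(el)) * math.cos(math.radians(az))
    y = math.cos(math.radians(el)) * math.sin(math.radians(az))
    z = math.sin(math.radians(el))
    print(f"view direction: ({x:+.2f}, {y:+.2f}, {z:+.2f})")
\end{verbatim}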
\subsubsection{Data Augmentation}
They applied several augmentations (e.g., rotations, translations and reflections)~\cite{3DICPR} to the dataset to avoid overfitting. These augmentations extended the dataset to 500 times its initial size.
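A minimal sketch of such augmentations with Pillow is shown below; the exact transforms and their parameters in \cite{3DICPR} may differ.
\begin{verbatim}
# Sketch: simple rotation, reflection and translation augmentations of a
# single scene view image using Pillow (file names are placeholders).
from PIL import Image, ImageChops, ImageOps

view = Image.open("scene_view.png")
augmented  = [view.rotate(angle) for angle in (5, -5, 10, -10)]  # rotations
augmented += [ImageOps.mirror(view), ImageOps.flip(view)]        # reflections
augmented += [ImageChops.offset(view, 10, 10)]                   # translation
for i, img in enumerate(augmented):
    img.save(f"scene_view_aug_{i}.png")
\end{verbatim}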
\subsubsection{Pre-training and Fine-tuning}
They performed domain adaptation of VGG2 on the Places scene image dataset \cite{zhou2017places} for 100 epochs. After this adaptation phase, a second phase of domain adaptation was performed on VGG2 with the 2D scene view training dataset.
\subsubsection{Image/View Classification and Majority Vote-Based Label Matching}
Classification probability distributions were obtained from the trained VGG2 on the target 2D scene view testing dataset. A query image and each model's 13 scene views are then used to generate a rank list for the query using a majority vote-based label matching method.
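For illustration, a simplified version of the majority-vote step is sketched below: each of a model's 13 views receives a predicted class label and the most frequent label is taken as the model's label, which is then matched against the query's predicted label when building the rank list. This sketch is ours, not the authors' implementation.
\begin{verbatim}
# Sketch: majority vote over the 13 per-view class predictions of a model.
from collections import Counter

view_predictions = ["apartment", "apartment", "office", "apartment",
                    "street", "apartment", "apartment", "office",
                    "apartment", "apartment", "street", "apartment",
                    "apartment"]                    # toy labels for 13 views
model_label, votes = Counter(view_predictions).most_common(1)[0]
print(model_label, votes)                           # prints: apartment 9
\end{verbatim}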
\end{document}
\section{education}
\begin{entrylist}
%------------------------------------------------
\courseentry
{2011--2012}
{Bachelor {\normalfont of Commerce}}
{The University of California, Berkeley}
{Money Is The Root Of All Evil -- Or Is It?}
{This thesis explored the idea that money has been the cause of untold anguish and suffering in the world. I found that it has, in fact, not.}
{Business for Business Gurus, Money 101}
%------------------------------------------------
\end{entrylist}
\documentclass{ncjms}
\usepackage{verbatim}
\newcommand{\e}{\ensuremath{\varepsilon}}
\newcommand\sumtoinfty[1]{\ensuremath \sum_{n=#1}^\infty a_n}
%
% %actual document starts here
%
\begin{document}
% input the title of the manuscript
\title{A Survey on the Visual Perceptions of Gaussian Noise Filtering on Photography}
\titlerunning{Investigating Image Quality Loss $\ldots$} %specify header with (shorter) title
% specify the author(s) of the manuscript
\author[Aidan J. Draper]{Aidan J. Draper}
\address[Aidan J. Draper]{Department of Math and Statistics, Elon University, Elon, NC 27244, US}
\email[Corresponding author]{[email protected]}
\urladdr{http://www.aidandraper.com/} % Delete if not wanted.
%second author
\author{Laura L. Taylor}
\address[Laura L. Taylor]{Department of Math and Statistics, Elon University, Elon, NC 27244, US}
\email{[email protected]}
\authorsrunning{A. J. ~Draper and L. L. Taylor} %specify header with author names, use "et al." if too many
\begin{abstract}
Statisticians, as well as machine learning and computer vision experts, have been studying image reconstitution through denoising different domains of photography, such as textual documentation, tomographic, astronomical, and low-light photography. In this paper, we apply common inferential kernel filters in the R and python languages, as well as Adobe Lightroom's denoise filter, and compare their effectiveness in removing noise from JPEG images. We ran standard benchmark tests to evaluate each method's effectiveness for removing noise. In doing so, we also surveyed students at Elon University about their opinion of a single filtered photo from a collection of photos processed by the various filter methods. Many scientists believe that noise filters cause blurring and image quality loss so we analyzed whether or not people felt as though denoising causes any quality loss as compared to their noiseless images. Individuals assigned scores indicating the image quality of a denoised photo compared to its noiseless counterpart on a 1 to 10 scale. Survey scores are compared across filters to evaluate whether there were significant differences in image quality scores received. Benchmark scores were compared to the visual perception scores. Then, an analysis of covariance test was run to identify whether or not survey training scores explained any unplanned variation in visual scores assigned by students across the filter methods.
\end{abstract}
%
% Include AMS Subject Classifications as can be find on http://www.ams.org/msc/
\subjclass[2010]{68U10; 62-06; 93-04; 60A99} %separated by semicolon
%
% Include keywords for the paper, separated by semi-colons; ending with a point.
\keywords{image denoising; image processing; Statistics in signal processing; kernel filter methods; Shiny applications; OpenCV; photography; Gaussian noise.}
\date{\today}
\maketitle
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Actual body of the manuscript starts here
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
The world is full of many naturally-occurring signals. Because there are so many, it becomes rather hard to isolate them at times. Photographers try to isolate light signals during exposure, which can often be difficult. Many other domains of science (Ecology: \citealp{statespace16}; Microscopy: \citealp{perfeval16}; Tomography: \citealp{imgproctom06}; Astronomy: \citealp{1502901}; Physics: \citealp{939832}) study signals that can be even harder to single out. Even with the immense collection of advanced electronics found in today's average digital camera, captured light signals are still prone to random noise from sources such as the camera's sensor or circuitry and as a response to qualities of the environment, like temperature. This is what makes the field of signal denoising so compelling. In addition, no other physical environments resist noise as poorly as low-light environments do. In specific, dusk light is arguably the most detrimental to a photographer's image quality, yet it is still a popular time to shoot. This has increased the prevalence of noise in modern photography, which motivates this study.
Many researchers have spent much of their life investigating methods for properly denoising signals. In doing so, standardized formulas have been developed to validate the effectiveness of these methods. However, past studies have reported that the benchmark scores typically associated with these methods are uncorrelated with unbiased observers' opinions of denoised-image quality \citep{blur, survey_blur, perf_eval}. These studies induced continued research into the investigation of image quality in this paper.
In studying signal processing, it was apparent that there is a void in computer vision literature surrounding the visual perception of image quality in denoising methods. Additionally, this study differs from past work for three main reasons. First, it focuses on the subdomain of low-light photography due to current interests surrounding this setting in social media. This meant creating a unique low-light image dataset to suit the study. Second, quality of filtered grayscale images was investigated and visual perceptions of college students were collected. This was for convenience and to offer a different perspective than previous studies have had. Third, actual noise was captured rather than simulating the disruption of random pixels on a noiseless image. This adds some complications later, but also, provides real world examples of the filter methods' abilities. Lastly, a proprietary image denoising method from Adobe's creative cloud was included. This is probably the most accessible method to photographers and has also received a fair amount of criticism in the photography community, which made it an interesting addition to this study.
In this paper, five filter methods' performances on a single noisy photo were evaluated. After implementing these filtering methods on a single image, the benchmark scores of the filtered images were compared across methods. Then, a survey collected undergraduate college students' perceptions of the filtered images' quality in relation to a noiseless image. A one-way ANOVA test and an ANCOVA test were performed on the visual perception scores about the quality of filtered images in relation to their desired non-noisy state.
This paper is organized in the following way. This section proceeds with the background surrounding the subdomain of signal denoising in computer vision and, in doing so, the methods performed on the test image are shared. Section 2 describes the methodology and experimental design, which includes the R Shiny App survey that was built and the process in which the experiment was performed. Section 3 shares the results of the benchmark tests on the single image as well as survey participants perception of the filtered photos. A discussion about the experiment results is provided in Section 4. Finally, Section 5 discusses the conclusions that are drawn from the study.
\subsection{Background}
Computer vision is believed to be driven by two main pursuits for knowledge. From one perspective, scientists look to model human vision processes. Interest surrounds mimicking common human ability and understanding how human perception and comprehension occur. On the other end of the spectrum, scientists look to improve autonomy in machines and perform advanced tasks, such as identifying objects or understanding dynamically-changing scenes, that are unrelated to understanding how human vision works \citep{zhang}. These philosophies inevitably overlap at times, but they are ultimately the motivators that drive the study of vision in computer science. The field is said to have emerged partly as a result of Larry Roberts's thesis at MIT, where he introduced the concept of extracting 3-dimensional shapes from 2-dimensional images using ``line drawing'' to retrieve edge information \citep{larry}. Many scientists would follow in his footsteps in studying a subdomain of computer vision that is today known as edge detection. Self-driving vehicles are some of the newest technologies that require advanced methods for scene understanding that mimic human perception, while also performing many other processes that go beyond our human capability, like tracking distance from other vehicles. This paper analyzes the work of one sub-field of computer vision, image denoising.
The study of noise removal would be launched by the second motivator. Signal denoising researchers look to filter noise to improve pre-processing for machine autonomy and a broad scope of other science processes. Image denoising has been used to cleanup tomographic photos for electron identification \citep{fernandez}. It has also been used to process and cleanup many other signals, including electronic hisses, magnetized particles from magnetic film, tendrils and candlesticks from stock market data, and grain from satellite images. The specific study of digital image denoising emerged after the invention of the first charge-couple device camera in 1975. These charge-coupled devices would allow electronic storage of images and later, the computation of pixels. Charge-coupling devices can still be found in a wide variety of devices, including most modern compact lens and digital single-lens reflex cameras, computer vision robots, and satellites.
% need to add more on the individual distributions
In the field of signal processing, there are a few different types of noise that haunt images. What makes statistics useful is that most types of noise follow the same probability density functions of some common random variables. The most common found in low-light photography are Gaussian noise and Poisson noise, which follow the distribution of their respective random variables.
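For intuition, both kinds of noise are easy to simulate on a clean image, as in the NumPy sketch below; note that this is purely illustrative, since the noisy photos in this study were captured in camera rather than simulated.
\begin{verbatim}
# Sketch: simulating additive Gaussian noise and signal-dependent
# Poisson noise on an 8-bit grayscale image stored as a NumPy array.
import numpy as np

clean = np.random.rand(256, 256) * 255                    # stand-in image

gaussian = np.clip(clean + np.random.normal(0, 15, clean.shape), 0, 255)
poisson  = np.clip(np.random.poisson(clean), 0, 255).astype(float)
print(gaussian.mean(), poisson.mean())
\end{verbatim}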
There are many heuristic methods for noise removal, but none have been able to fully denoise a scene to this date. Competition in the field emerged for the title of ``best-approach". Noise reduction is computationally demanding so methods are judged by their effectiveness and running-time. The need to evaluate algorithms with slightly-differing, approximate results led to the development of common scores for evaluating denoised photos. The most common include mean squared error (MSE), r-squared ($R^2$), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and run time. Their formulas are described below.
\begin{equation}
\text{MSE} = \frac{\sum_{x} \sum_{y} \mid \text{filtered state} - \text{true state} \mid^2}{\text{N}_{\text{True State}}}
\end{equation}
\begin{equation}
R^2 = 1 - \frac{\sum_{x} \sum_{y}\big(\text{true state} - \text{filtered state} \big)^2}{\sum_{x} \sum_{y}\big(\text{true state} - \mu_{\text{true state}} \big)^2}
\end{equation}
\begin{equation}
\text{PSNR} = 20 \log_{10}\big(\frac{R^2}{\text{MSE}}\big)
\end{equation}
\begin{equation}
\text{SSIM} = \frac{\big( 2\mu_{\text{true state}}\mu_{\text{filtered state}} + c_1\big)*\big( 2\sigma_{\text{true state, filtered state}} + c_2\big)}{\big( \mu^2_{\text{true state}} + \mu^2_{\text{filtered state}} + c_1\big) * \big( \sigma^2_{\text{true state}} + \sigma^2_{\text{filtered state}} + c_2\big)}
\end{equation}
\newline
\noindent{\textit{True state} and \textit{filtered state} describe the respective pixel matrices of the two images. Totals, means and standard deviations are evaluated over the entirety of the image matrix. Higher PSNR values indicate better image restoration quality in most cases. Lower MSE scores indicate less error between the true image and the filtered image, which is preferred. $R^2$ indicates how much of the variation in the filtered image can be explained by the noiseless image, which is strongly influenced by how many $(x,y)$ pixel values they share. Higher $R^2$ values indicate a stronger relationship between the noiseless image and the filtered image. Higher values of SSIM indicate a stronger structural similarity between the two images. SSIM evaluates photos similarly to PSNR and MSE, except that it also considers the interdependence of pixels that are spatially similar.}
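These scores are straightforward to compute in practice. The following sketch, included for illustration only, computes all four with NumPy and scikit-image rather than the R code used in this study; note that scikit-image uses the conventional PSNR definition, so its values may differ by a constant factor from those given by the PSNR formula above.
\begin{verbatim}
# Illustrative sketch: computing MSE, R-squared, PSNR and SSIM for a
# filtered image against its noiseless counterpart (8-bit grayscale).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

true_img = np.random.randint(0, 256, (512, 512)).astype(np.uint8)  # stand-in
filt_img = np.random.randint(0, 256, (512, 512)).astype(np.uint8)  # stand-in

diff = filt_img.astype(float) - true_img.astype(float)
mse = np.mean(diff ** 2)
r2 = 1 - np.sum(diff ** 2) / np.sum((true_img - true_img.mean()) ** 2)
psnr = peak_signal_noise_ratio(true_img, filt_img)   # conventional definition
ssim = structural_similarity(true_img, filt_img)
print(mse, r2, psnr, ssim)
\end{verbatim}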
In this paper, the effectiveness of a few of the most common noise removal methods is explored, but for the purpose of cleaning up Gaussian noise, or `grain', from low-light images.
% I should further elaborate on each formula described
\subsection{Filter Methods}
There are many strategies employed in noise removal. Motwani et al. dissect and classify most methods currently used in the field into a tree graph \citep{Mukesh}. They classify every algorithm into two overarching categories: Spatial Domain and Transform Domain. Spatial domain methods implement box filters, while transform domain methods are slightly more complex to classify, but can be generalized as relying on a basis function that differs from the traditional box filter. As mentioned earlier, the study was conducted using five of the most common, and arguably simplest, image denoising methods: the Three-by-three Mean Filter, the Non-local Means Filter, the Bilateral Filter, and two levels (50\% and 100\%) of the Adobe Lightroom CC denoise filter. Each method is dissected in this section.
\subsubsection{Three-by-three Mean Filter}
The Mean Filter is a rudimentary approach to noise filtering. It implements a box filter of size $z$ by $z$. This box filter is a smaller matrix that traverses the photo and calculates the mean for the center value of every three-by-three matrix that fits within the photo matrix. This box matrix is typically denoted as $x$ with a size of $n$, or $z*z$. The method is classified as a spatial domain linear filter known to be used to specifically target a decrease in mean squared error. The Mean Filter has received criticism for destroying edges, erasing fine details, and blurring lines in images \citep{Mukesh}. It is expressed symbolically as follows:
\begin{equation}
I^{filtered}(x)=\frac{1}{n}\sum_{x_i\in \Omega}x_i
\end{equation}
\noindent {where $\Omega$ here denotes the $z$ by $z$ window of $n$ pixels centered at $x$ and $I^{filtered}$ is the resultant filtered image. This experiment employs a Three-by-three Mean Filter, a typical size for this method that causes less blurring than some of the larger spatial filter sizes.}
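For reference, a three-by-three mean filter can be applied in a single OpenCV call; the snippet below is a sketch of this step with placeholder file names, not the exact script used in this study.
\begin{verbatim}
# Sketch: 3x3 mean (box) filter with OpenCV.
import cv2

noisy = cv2.imread("streetlamp_noisy.jpg", cv2.IMREAD_GRAYSCALE)
mean_filtered = cv2.blur(noisy, (3, 3))   # average over each 3x3 window
cv2.imwrite("streetlamp_mean3x3.jpg", mean_filtered)
\end{verbatim}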
\subsubsection{Non-local Means Filter}
One of the first public introductions of the Non-local Means Filter was during an IEEE conference by \cite{baudes}. The algorithm relates to many linear filtering methods, but it calculates the weighted probability impact of each pixel in averaging the pixel of interest based on the similarity of neighboring pixel scores within the box instead of using a standard approach for probability weights, such as Gaussian probabilities or equal weights. It is expressed symbolically as:
\begin{equation}
I^{filtered}(p)=\frac{1}{C(p)}\int_{\Omega}v(q)f(p,q)dq.
\end{equation}
\noindent{For this study, \citeauthor{opencv_library}'s \citeyearpar{opencv_library} OpenCV Non-local Means Filter was specifically implemented because their method has been optimized to decrease run time as much as possible.}
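For grayscale images, this implementation is exposed as \verb|fastNlMeansDenoising|; the call below is a sketch with illustrative parameter values, not the ones used in this study.
\begin{verbatim}
# Sketch: OpenCV Non-local Means denoising of a grayscale image.
import cv2

noisy = cv2.imread("streetlamp_noisy.jpg", cv2.IMREAD_GRAYSCALE)
# arguments: source, destination (None), filter strength h,
# template window size, search window size
nlm_filtered = cv2.fastNlMeansDenoising(noisy, None, 10, 7, 21)
cv2.imwrite("streetlamp_nlm.jpg", nlm_filtered)
\end{verbatim}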
\subsubsection{Bilateral Filter}
The Bilateral Filter is a more advanced, edge-preserving filter that weights each neighboring pixel both by its difference in intensity from the pixel of interest (the $f_r$ term) and by its spatial distance from that pixel (the $g_s$ term), so that pixels lying across an edge contribute little to the average. Symbolically, this formula is:
\begin{equation}
I^{filtered}(x)=\frac{1}{W_p}\sum_{x_{i}\in \Omega}I(x_i)f_r(||I(x_i)-I(x)||)g_s(||x_i-x||)
\end{equation}
\noindent{Again, \citeauthor{opencv_library}'s \citeyearpar{opencv_library} OpenCV Bilateral Filter was specifically implemented rather than the traditional formula because OpenCV's method has been optimized for run time.}
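The corresponding OpenCV call is sketched below; the neighborhood diameter and sigma values are illustrative, not the ones used in this study.
\begin{verbatim}
# Sketch: OpenCV Bilateral Filter on a grayscale image.
import cv2

noisy = cv2.imread("streetlamp_noisy.jpg", cv2.IMREAD_GRAYSCALE)
# arguments: source, neighborhood diameter d, sigmaColor, sigmaSpace
bilateral_filtered = cv2.bilateralFilter(noisy, 9, 75, 75)
cv2.imwrite("streetlamp_bilateral.jpg", bilateral_filtered)
\end{verbatim}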
\subsubsection{Adobe Lightroom Denoise Filter}
At the time of this study, there was no public information shared about Adobe's methods. It appears they have sacrificed some performance to remain scalable on larger images and to run at an acceptable speed for users. In the authors' opinion, at 100\%, their method appears to blur edges at an extreme rate: photos appear washed out and fine details are lost.
\section{Methodology}
Noise is considered to be a quality that negatively impacts the perception of an image. One of the most typically seen random variable distributions of noise is Gaussian noise, which is an unwanted channel that is captured during the camera's acquisition of the desired light wavelengths. Many have explored methodology for replacing the unwanted pixels with computed pixels that replicate what the desired wavelengths may have shown using a variety of mathematical approaches for approximation and estimation. The common approach for testing these methods involves mimicking a noisy image and then, taking a true image with no noise so that they can compare how well their computed pixels replicate the photo that would be originally desired. They use the standard benchmark scores (PSNR, MSE, $R^2$, and SSIM) to score their results.
This study models the process of analyzing methods' abilities to approximate missed pixels of the true signal, but goes a step further by surveying college undergraduates to investigate whether they believe image quality has in fact improved.
Introductory mathematics and statistics students attending Elon University were chosen as a sample population in order to expose younger students to undergraduate research. Additionally, it provided a convenient sampling frame for the time restrictions of this project. Under IRB approval, instructors from the previously mentioned courses were emailed and asked to share the survey link with students in their sections. Student participation was completely voluntary. Images of the survey are included in \autoref{fig:survey} below. There were a total of nine slides in the survey, which are displayed in order in \autoref{fig:survey}. The survey included:
\begin{enumerate}
\item Introduction slide - describes the survey, who to contact, and that response is optional,
\item First instructions slide - informs users how to go about rating the following images,
\item First true state image slide - a reference image to compare the following training images to,
\item Training image one slide - an unfiltered noisy image with a horizontal score bar at the bottom,
\item Training image two slide - a filtered noisy image with a horizontal score bar at the bottom,
\item Training image three slide - the same true state image with a horizontal score bar at the bottom and a ``Submit Part One'' button that leads to the second part of the survey,
\item Second instructions slide - informs the user how to go about rating the following image,
\item Second true state image slide - a reference image to compare with the filtered image of interest,
\item Filtered image of interest slide - a randomly-selected filtered image (either Three-by-three Mean, Non-local Means, Bilateral, Adobe 50\% or Adobe 100\%), a horizontal score bar to rate the image, and a ``Submit Part Two'' button to send the scores to a Google Sheet.
\end{enumerate}
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\linewidth]{page1.png}
\includegraphics[width=0.3\linewidth]{page2.png}
\includegraphics[width=0.3\linewidth]{page3.png}
\includegraphics[width=0.3\linewidth]{page4.png}
\includegraphics[width=0.3\linewidth]{page5.png}
\includegraphics[width=0.3\linewidth]{page6.png}
\includegraphics[width=0.3\linewidth]{page7.png}
\includegraphics[width=0.3\linewidth]{page8.png}
\includegraphics[width=0.3\linewidth]{page9.png}
\caption{A breakdown of the Shiny application survey.}
\label{fig:survey}
\end{figure}
\noindent{The experiment process conducted will be expanded on below.}
\subsection{Protocol}
The first step was to compile an original dataset of true and noisy images to test. Although some image datasets exist (RENOIR: \citealp{renoir}; Darmstadt: \citealp{darmstadt}), none specifically model low-light photography, which was of interest in this study. The camera used was a full-frame Canon 6D Mark I, released in 2012, with a Canon EF 17-40mm f/4L USM lens attached. Two photos were taken at each location with greatly varying settings. First, the original photo would be shot with an ISO of around 200-600 depending on the scene. The aperture was set to the lowest setting in order to let the most light in. The shutter of the camera would be left open for 1 to 5 seconds in order to properly expose the photo at such a low ISO setting. The second photo was shot using the same aperture. However, the ISO setting was increased to just below the camera's maximum light sensitivity setting, which was around 20000 or 25600 ISO for different scenes, to dramatize the amount of noise captured in the image. There are approximately 20 million pixels in each image, so noise will not be as apparent when the image is presented at its regular size.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{streetlamp_true.jpg}
\includegraphics[width=0.4\linewidth]{streetlamp_noisy.jpg}
\caption{A true state image (left) and a noisy image (right) taken with different camera settings.}
\label{fig:streetlamp}
\end{figure}
After photos had been captured, they were initially brought into Lightroom where the same grayscale filter was applied to every image. Images were then exported out of their CR2 Canon RAW image formats and into the best possible resolution JPEG format. The Three-by-three Mean, Non-local Means, and Bilateral Filter python scripts all ran on the noisy image of interest. The unfiltered noisy image was also brought into Adobe Lightroom so that the denoise filter could be applied at the 50\% and 100\% levels. The respective filtered photos were saved and loaded into RStudio. An R script imported the true and filtered images and converted them into matrices. Then, MSE, PSNR, R-Squared, and SSIM were calculated for each filter variation of the same subject matter.
To capture student perceptions of the filtered images, a survey was built using Shiny \citep{shiny} in R. A specific frame was selected so that there was a control in subject matter presented to participants. The subject matter selected for the noisy, filtered and true state images can be seen in \autoref{fig:streetlamp} from earlier. The survey began by presenting respondents with a noiseless photo and then, asking them to rate three additional photos of the same content, but with varying degrees of image quality. The first training photo presented was the unfiltered noisy photo. The second training photo was a Non-local Means filtered photo. The third training photo was the same photo as the noiseless photo. These three scores were meant to serve as an indication of an inability to identify noise, or image quality, in respondents while also taking into account that respondents will have different metrics for image quality, naturally. Once the initial training portion was complete, respondents were asked to rate a final photo about its image quality on a scale of 1-10 in comparison to its respective noiseless image. A score of one represented an image that had far inferior quality as compared to the original. A score of five indicated equal image quality to the respective noiseless image. Lastly, a score of ten meant that the respondent felt that the photo had far superior image quality to the noiseless photo.
Once the respondent had selected scores for those three training images, they proceeded to the images of interest. Again, the noiseless image was presented first and then, they were asked to compare a filtered photo that had been randomly assigned to them to the noiseless photo of the same subject matter.
To emphasize differences in images, a blown-up region of each image was included in the bottom right-hand corners. These blown-up regions were 600-by-600 pixel grids from sections of the photo that included an edge. In Photoshop, these regions were enlarged to approximately 1200-by-1200 pixels. An example of this layout can be seen in \autoref{fig:blownup}. After the participant rated the last image, scores were sent to a Google Sheet.
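The blown-up corners were produced in Photoshop; an equivalent crop-and-enlarge step is sketched below with OpenCV, with placeholder coordinates chosen only for illustration.
\begin{verbatim}
# Sketch: crop a 600x600 region containing an edge and paste a 2x
# enlargement of it into the lower right corner of the image.
import cv2

img = cv2.imread("shoes_noise.jpg", cv2.IMREAD_GRAYSCALE)
y0, x0 = 800, 1200                           # placeholder crop corner
patch = img[y0:y0 + 600, x0:x0 + 600]
blowup = cv2.resize(patch, (1200, 1200), interpolation=cv2.INTER_NEAREST)
img[-1200:, -1200:] = blowup
cv2.imwrite("shoes_noise_blowup.jpg", img)
\end{verbatim}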
\begin{figure}
\includegraphics[width=0.7\linewidth]{shoes_noise.jpg}
\caption{An example of an image with a blown-up region (in the lower right corner).}
\label{fig:blownup}
\end{figure}
\section{Experiment Results}
Visual interpretations of image quality were made for each processed photo prior to running benchmark tests. Zoomed-in regions of each filtered photo were produced so that differences between methods may be easier to compare in this report. The resultant images are shown next to the original noisy image in \autoref{fig:images}. Note that this paragraph is subjective based on the visual opinions of the authors. The authors believe that, visually, the Adobe 50\% (e) and Bilateral Filter (d) produced the most compelling filtered images. This decision can be justified by their preservation of fine-detail, as well as decent edge preservation, when compared to the original noisy image (a). These images appear less washed due to a lack of blurring. The Three-by-three Mean Filter (b) also produced less blurring, but noise was far more present in this image, which is most visible in the gradient of light originating from the street lamp. Noise is less present in the light gradient for images $d$ and $e$ of \autoref{fig:images}. Although images $c$ and $f$ also show less noise in the streetlamp light gradient, the smoothness of the gradient is an indicator of blurring, which is more obvious when contrasting the top of the telephone pole across filter methods. Images $c$ and $f$ have sacrificed edge preservation, arguably just as an important of a factor in evaluating the clarity of photos, for noise reduction. The least compelling result was from the Non-local Means Filter (c), which did not preserve edges or fine details well. The filtered image has no noise, but has also destroyed image clarity in the process, as the top of the telephone pole appears to blend into the background. The Three-by-three Mean Filter (b) appeared to be least effective at removing noise, but detail and edges have been preserved due to the lack of blurring. The Adobe 100\% (f) produced the most blurring, yet noise is nonexistent in its resultant image.
\begin{figure}[h]
\centering
\includegraphics[width=0.3\linewidth]{zoom_noise.png}
\includegraphics[width=0.3\linewidth]{zoom_mean.png}
\includegraphics[width=0.3\linewidth]{zoom_nonlocal.png}
\includegraphics[width=0.3\linewidth]{zoom_bilateral.png}
\includegraphics[width=0.3\linewidth]{zoom_lr50.png}
\includegraphics[width=0.3\linewidth]{zoom_lr100.png}
\caption{A grid of 600 by 600 pixel regions of an unfiltered image and its filtered images. The regions are ordered as follows: Noisy (a), Three-by-three Mean (b), Non-local Means (c), Bilateral (d), Adobe 50\% (e), and Adobe 100\% (f).}
\label{fig:images}
\end{figure}
% keep working on this paragraph (WHERE TO START)
Benchmark scores were calculated for every filter method, as well as the noisy photo, in comparison to a noiseless image. With the exception of the SSIM scores for the Adobe 50\% and Adobe 100\% filters, all methods scored higher than the unfiltered noisy image in the benchmark tests, which means almost every filter method showed some degree of improvement in image reconstitution. The debatable filters are Adobe 50\% and Adobe 100\%, who scored well in every test except for SSIM. PSNR, MSE and $R^2$ scored images similarly and ranked methods from most improved to least improved as Bilateral, Adobe 100\%, Non-local Means, Three-by-three Means, and Adobe 50\%. SSIM offered a different conclusion that declared Adobe 50\% and Adobe 100\% as less structurally similar to the noisy photo, which would mean that the benchmark believes image reconstitution has worsened after filtering. The Bilateral Filter still outperformed every other filter (SSIM=0.8577, PSNR=47.8103, MSE=264.8607, $R^2$=0.7217, Run time=0.1186 seconds), which declares it as the quickest and most effective filtering method in this study. \autoref{table:benchmark} reports the five benchmark scores for these images for reference.
\begin{table}[ht]
\caption{Benchmark Results for the Street Lamp Image.} % title of Table
\centering % used for centering table
\begin{tabular}{|c|r|r|r|r|r|} % centered columns (4 columns)
\hline %inserts double horizontal lines
& \multicolumn{1}{c|}{\textbf{SSIM}} & \multicolumn{1}{c|}{\textbf{PSNR}} & \multicolumn{1}{c|}{\textbf{MSE}} & \multicolumn{1}{c|}{$\mathbf{R^2}$} & \multicolumn{1}{c|}{\textbf{Run time (s)}} \\ [0.5ex] % inserts table
%heading
\hline % inserts single horizontal line
\textbf{\textit{Unfiltered}} & 0.7520 & 41.7691 & 530.4268 & 0.4426 & N/A \\
\textbf{3x3 Mean} & 0.8432 & 46.8833 & 294.3851 & 0.6732 & 91.9112 \\
\textbf{Non-local} & 0.8445 & 46.9004 & 293.8056 & 0.6913 & 20.5480 \\
\textbf{Bilateral} & 0.8577 & 47.8103 & 264.8607 & 0.7217 & 0.1886 \\
\textbf{Adobe 50\%} & 0.7111 & 46.3890 & 311.6243 & 0.6732 & 0.5532 \\
\textbf{Adobe 100\%} & 0.7245 & 47.5747 & 271.8604 & 0.7143 & 1.3872 \\ [1ex] % [1ex] adds vertical space
\hline %inserts single line
\end{tabular}
\label{table:benchmark} % is used to refer this table in the text
\end{table}
% TALK ABOUT BENCHMARK RESULTS MORE HERE
In total, there were 89 responses received from the survey as far more students chose not to respond than to respond to the survey. However, it is important to note that three data points had to be removed from the experiment. For at least one of the training images, the respondent marked a zero value for a training score, which is nonexistent on the range of possible values. This is likely from a malfunction in collecting data in Shiny Apps or sending the data using the `googlesheets' R library \citep{googlesheets}. The descriptive statistics of image quality scores, with the bad data points removed, are visible in \autoref{table:desc_survey}.
\begin{table}[ht]
\caption{Summary of survey responses based on filter presented to respondent. Aside from sample size, units of measurement are points on a scale from 1 (inferior quality) to 10 (superior quality).} % title of Table
\centering % used for centering table
\begin{tabular}{|c|r|r|r|r|} % centered columns (4 columns)
\hline
& \multicolumn{1}{c|}{\textbf{n}} & \multicolumn{1}{c|}{$\mathbf{\bar{x}}$} & \multicolumn{1}{c|}{$\mathbf{\Tilde{x}}$} & \multicolumn{1}{c|}{\textbf{\textit{s}}}\\ [0.5ex]
% inserts table
%heading
\hline % inserts single horizontal line
\textbf{3x3 Mean} & 15 & 3.6 & 4.0 & 0.99 \\
\textbf{Non-local} & 18 & 4.1 & 4.0 & 1.76 \\
\textbf{Bilateral} & 18 & 4.0 & 3.5 & 1.91 \\
\textbf{Adobe 50\%} & 18 & 4.4 & 4.0 & 2.04 \\
\textbf{Adobe 100\%} & 17 & 5 & 4.0 & 2.42 \\ [1ex] % [1ex] adds vertical space
\hline %inserts single line
\end{tabular}
\label{table:desc_survey} % is used to refer this table in the text
\end{table}
\noindent{The Adobe 100\% Filter had the largest mean image quality score ($\bar{x}_{\text{Adobe 100\%}} = 5.0$), followed by the Adobe 50\% Filter ($\bar{x}_{\text{Adobe 50\%}} = 4.4$). They also had the largest spread to their image scores ($s_{\text{Adobe 100\%}}=2.42$, $s_{\text{Adobe 50\%}}=2.04$). Image scores for the Adobe 100\% were skewed right, which is apparent from a sample mean larger than the sample median. This characteristic tells us that a few respondents scored this filter group far better than others did. The filter group that received the most criticism, but also the most precise scores, was the Three-by-three Mean Filter ($\bar{x}_{\text{3x3 Mean}} = 3.6$, $s_{\text{3x3 Mean}}=0.99$). Surprisingly, the Bilateral Filter received lower scores for image quality ($\bar{x}_{\text{Bilateral}}$=4.0, $\Tilde{x}_{\text{Bilateral}}$=3.5), even though it outperformed every filter during benchmark testing ($\text{SSIM}_{\text{Bilateral}}$=0.8577, $\text{PSNR}_{\text{Bilateral}}$=47.8103). Note that the lack of responses, as well as bad data points, led to uneven sample sizes between filter groups.}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\linewidth]{boxplot.png}
\caption{Boxplots of the participants' image quality perception score distributions by filtering method.}
\label{fig:box}
\end{figure}
The distribution of image quality scores by filter groups is visualized in \autoref{fig:box}. Filter group image score sample means remain fairly similar. Shockingly, all filter method image score distributions, with the exception of the Three-by-three Mean filter, are being skewed by a few scores that would imply a superior image quality to the filtered photo's noiseless counterpart. The programmers of these algorithms would surely be satisfied knowing these select respondents' sentiments, although, there is likely another conclusion that could be drawn from this observation. Adobe 100\% received almost every image quality score, giving it the largest range, suggesting that respondents' opinions differed the most in judging this image. The overlap in interquartile ranges suggests a non-significant p-value from the ANOVA test.
\begin{table}[ht]
\caption{An ANOVA table for the image quality perception scores.} % title of Table
\centering % used for centering table
\begin{tabular}{|c| rrrrr|} % centered columns (4 columns)
\hline %inserts double horizontal lines
& \multicolumn{1}{c}{\textbf{Df}} & \multicolumn{1}{c}{\textbf{SSE}} & \multicolumn{1}{c}{\textbf{MSE}} & \multicolumn{1}{c}{\textbf{F-score}} & \multicolumn{1}{c|}{\textbf{P-value}} \\ [0.5ex]
%heading
\hline % inserts single horizontal line
\textbf{Treatment} & 4 & 18.36 & 4.590 & 1.269 & 0.289 \\ % inserting body of the table
\textbf{Error} & 81 & 292.99 & 3.617 & & \\ [1ex]
\hline
\textbf{Total} & 85 & 311.35 & & & \\ % [1ex] adds vertical space
\hline %inserts single line
\end{tabular}
\label{table:anova_table} % is used to refer this table in the text
\end{table}
As is apparent in \autoref{table:anova_table}, the one-way analysis of variance test yielded insufficient evidence of a difference in the population mean image quality score across the noise filtering methods tested (F(4,81) = 1.269, p = 0.289). More of the residual error is found within individual methods. This is likely due to the great variation in scores received from respondents for individual methods, which is shown in \autoref{fig:box}. With the exception of the Three-by-three Mean Filter, the remaining groups had standard deviations of approximately two, according to \autoref{table:desc_survey}, implying that the scores were fairly spread around the mean and that respondents often did not share the same opinions about the image quality of each filtered photo. In testing assumptions, it became apparent that there was one outlier with a Studentized residual greater than three ($s_{e_{21}} = 3.46$). There was still insufficient evidence to reject the null hypothesis under an alpha level of 0.05 when testing the data with no outliers. Log and square root transformations were tested. The residual error had a better spread across fitted values, but again, the conclusion about the null hypothesis would not change.
When analyzing the image quality scores, it became apparent that the initial training scores could be indicators of unexplained variance in the final image score received, since the training score distributions showed trends in variation similar to the final image score distributions. As a result, an ANCOVA test was conducted to determine if removing the potential effect of training scores, or individual respondent image quality subjectivity, would lead to sufficient evidence to reject the null hypothesis that each filter group had the same population mean image quality score. Additionally, an analysis of covariance could explain how differences in training scores affected the final image scores received between filter methods. It was suggested that the average training score of the initial three images could be an influence on, and explain uncontrolled variation in, respondents' image quality scores. The unequal slopes model for the relationship between mean training score and image quality score is graphed in \autoref{fig:un_ancova}.
% talk about ANCOVA and provide formula with indicator variables for both equal and unequal variance
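As a point of reference, the one-way ANOVA and both ANCOVA models can be fitted with a few lines of statsmodels code; the sketch below assumes a data frame with columns \verb|method|, \verb|train_mean|, and \verb|score| (the column names are ours) and is not the code used for this analysis.
\begin{verbatim}
# Sketch: one-way ANOVA and ANCOVA (equal and unequal slopes) with
# statsmodels; 'method' is the filter group, 'train_mean' the mean
# training score and 'score' the final image quality score.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("survey_scores.csv")        # hypothetical file

anova   = smf.ols("score ~ C(method)", data=df).fit()
equal   = smf.ols("score ~ C(method) + train_mean", data=df).fit()
unequal = smf.ols("score ~ C(method) * train_mean", data=df).fit()

print(anova_lm(anova))                        # one-way ANOVA table
print(anova_lm(equal, unequal))               # test of the interaction effect
\end{verbatim}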
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{unequal.png}
\caption{The unequal slopes ANCOVA lines fitted to scatterplots by image filter.}
\label{fig:un_ancova}
\end{figure}
There was slightly insufficient evidence of an interaction effect between the mean training score and the final image quality score in the unequal slopes model (F(4,81) = 2.40, p = 0.057). The least parallel slope to the rest of the filter methods was the slope for the Bilateral Filter. This result technically validates the use of the equal slopes model, but 95\% confidence intervals were calculated for both models because of how small the p-value was. The 95\% confidence intervals are reported in \autoref{table:un_ancova_ci}.
\begin{table}[ht]
\caption{The unequal slopes 95\% Confidence Interval estimates of the mean training score on image score slopes for each filter method.} % title of Table
\centering % used for centering table
\begin{tabular}{|c| r|r|r|} % centered columns (4 columns)
\hline %inserts double horizontal lines
& \multicolumn{1}{c|}{\textbf{Slope Estimate}} & \multicolumn{2}{c|}{\textbf{95\% C.I.}} \\ [0.5ex]
%heading
\hline
\textbf{3x3 Mean} & 0.3943 & -0.6183 & 1.4069 \\
\textbf{Non-local} & 1.1482 & 0.2251 & 2.0712 \\
\textbf{Bilateral} & 0.0847 & -0.8195 & 0.9888 \\
\textbf{Adobe 50\%} & 0.5238 & -0.2800 & 1.3277 \\
\textbf{Adobe 100\%} & 1.2403 & 0.5791 & 1.9015 \\ [1ex]
\hline %inserts single line
\end{tabular}
\label{table:un_ancova_ci} % is used to refer this table in the text
\end{table}
\noindent{According to the unequal slopes model, two filter methods had significantly positive 95\% confidence intervals for the estimated slope of mean training image score on image quality score (Adobe 100\%: 95\% CI = [0.5791, 1.9015], Non-local: [0.2251, 2.0712]). The unequal slopes model equation that estimates the score received on the final image based on filter method and mean score of training images (t) is given by:}
\begin{equation}
\begin{array}{l}
\Hat{y}=-1.20 + 2.98(I_{\text{3x3 Mean}}) + 4.57(I_{\text{Bilateral}}) - 0.85(I_{\text{Non-local}}) + 3.08(I_{\text{Adobe 50\%}}) + \\
1.24*t - 0.85(I_{\text{3x3 Mean}}*\text{t}) - 1.16(I_{\text{Bilateral}}*\text{t}) - 0.09(I_{\text{Non-local}}*\text{t}) - 0.72(I_{\text{Adobe 50\%}}*\text{t})
\end{array}
\end{equation}
\noindent{where $I_{i}$ represents an indicator random variable for a filter and $t$ indicates the slope of model that fits the mean training score and the predicted image quality score, $\Hat{y}$. The model is adjusted around the linear equation for the Adobe 100\% Filter, which is why there is no indicator random variable for that filter.}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{equal.png}
\caption{The equal slopes ANCOVA lines fitted to scatterplots by image filter.}
\label{fig:eq_ancova}
\end{figure}
Since there was marginally insignificant evidence of unequal slopes at the alpha level of 0.05 (p = 0.057), the equal slopes model was also fitted. This model is depicted in \autoref{fig:eq_ancova} and provides a common estimate of the slope associated with mean training score (Estimate = 0.6519, 95\% CI = [0.3739, 0.9299]), which is significantly positive. The equal slopes model equation to estimate the training image effect on the predicted final image scores across all filter methods is given by:
\begin{equation}
\Hat{y}= 0.65(t) + 1.61 - 1.02(I_{\text{3x3 Mean}}) - 1.04(I_{\text{Bilateral}}) - 1.12(I_{\text{Non-local}}) - 0.362(I_{\text{Adobe 50\%}})
\end{equation}
The results of the equal slopes model can be interpreted as an increase of 0.6519 units of perceived image quality score for every additional 1 unit increase in mean training score regardless of the filter method implemented on a photo. The positive slope signifies that it is likely that the mean training score had an additive effect on image quality score given by respondents.
% add more on ANCOVA and change to APA formatting
\section{Discussion of Results}
%discussion of benchmark results
There was a lack of variation in types of methods used, which may explain the small disparities in benchmark scores between the filter methods. This could be explained by the types of algorithms implemented. These algorithms are linear filters, which are considered to be somewhat primitive in comparison to new-age neural networks or modern Transform Domain methods \citep{Mukesh}. For that reason, the $R^2$ benchmark score fluctuated no more than 0.05 units between filter methods. With the exception of SSIM, no benchmark result appeared to be lower due to image blurring. In fact, arguably the most washed photo, image $f$, received the second best marks from PSNR, MSE and $R^2$. These benchmark formulas are more likely focused on punishing random fluctuations in pixel values rather than a general skew in filtered pixel values away from their true noiseless values.
It is not too surprising then that PSNR followed the same trends as MSE and $R^2$ in ranking the filters, especially given that its formula, equation 1.3, is not independent of either of these quantities. However, it is interesting that MSE and $R^2$ shared trends. This is likely due to equation 1.1 and equation 1.2 relying on the residual error between the true and filtered states. SSIM had the most fascinating results. While SSIM scored the Three-by-three Mean, Bilateral, and Non-local Means filters in a similar pattern to PSNR, MSE and $R^2$, the SSIM benchmark test also scored the Adobe 50\% and Adobe 100\% filter groups shockingly low. Initial interpretation suggests that Adobe may be sacrificing pixel information to increase run time performance and structural information is being lost as a result of that drop in spatial information. However, without more public information on the algorithm, the reason may remain unknown. Studying SSIM in evaluating Adobe denoising methods was beyond the scope of this paper, but may be interesting to study in the future. Still, it is odd to see a preferred benchmark test rank an image so poorly given that the other benchmarks did not. It is hard to say which is correct without more information.
%discussion of survey results
With the exception of the Three-by-three Mean Filter, all of the filter method groups in \autoref{table:desc_survey} had image quality score means that were greater than the median. This led to concern about a scale misunderstanding from respondents. Approximately ten percent of respondents left scores indicating that they felt the filter method had an image quality greater than the noiseless image. This result is interesting though because it signifies that respondents could not detect a significant difference between methods based on the experiment design. This could be due to the size of the image used. Perhaps a larger blown-up region or smaller original image may make individual pixel differences more apparent. Most likely, these respondents made the mistake of interpreting the maximum score as equal rather than as having superior quality to the noiseless photo. In general, it would not be very logical for one respondent to score the Adobe 100\% filter with a 2 and another to score the same filter with a score of 10. The variance previously mentioned is unplanned variation within each filter group and, ultimately, led to an inability to test for true differences between filter method image scores in the analysis of variance test. The difference in sample sizes between groups also causes the analysis of variance test to be more problematic as an assumption of that test is now violated. It would have been surprising to find sufficient evidence of population differences due to these errors in the experiment's design.
%discussion of ANCOVA
The analysis of covariance led to an interesting finding about the unexplained variation found within image filter groups. Significantly positive slopes found in \autoref{table:un_ancova_ci} signify that there was a linear relationship between mean training score and image quality score for the Non-local Means and Adobe 100\%. This is especially interesting because these two filters were said to be the two that caused the most blurring, which can be seen in images $c$ and $f$ of \autoref{fig:images}. The only filtered training image used was a Non-local Means filter so it is possible that, in training, people who preferred blurring in image reconstitution would also rank a blurry filtered image positively later in the survey. To avoid a potential bias, future studies should randomize filter methods on training images presented to respondents as well. Regardless of the differences in slopes, there were still marginally insignificant results to justify the use of the equal slopes model. Generally speaking, there was a positive relationship between the mean training score and the final image quality score. The model shows how the Adobe filter groups are preferred based on the disparity between intercepts.
\section{Conclusion and Future Work}
In this paper, some rudimentary image denoising filters were used to process grayscale low-light photographs. When benchmark results ranked filter effectiveness differently to what the authors visually perceived, a survey was created in order to capture how others perceived the effectiveness of image denoising filters in image reconstitution. In order to compare benchmark results to survey results, descriptive statistics and an analysis of variance test were conducted.
Descriptive statistics from \autoref{table:desc_survey} differed from the benchmark scores found in \autoref{table:benchmark}. Specifically, the benchmark tests' best ranked algorithm (the Bilateral Filter) had the lowest median score from the respondents. Additionally, there were odd results in the scoring of the two Adobe filter groups for SSIM. The analysis of variance test yielded insufficient evidence to reject the null (p=0.289), but there was reason to believe that the image quality score scale was misinterpreted by some because of the oddly high number of scores greater than 5 (superior quality to the noiseless photo) collected from respondents. As a result, an analysis of covariance test was performed to quantify how the mean training score from the survey may explain some of the variation in image quality scores received from respondents in the same filter method groups. In the unequal slopes model, two significantly positive slopes were found for the mean training score's relationship with image quality scores. The Non-local Means and Adobe 100\% showed significantly positive slopes in \autoref{table:un_ancova_ci}, which gave reason to believe that the training section of the survey might have had more of an effect on the final image quality score left by respondents for these two filter methods. Marginally insignificant evidence (p=0.057) of an interaction effect between filter groups and mean training scores also warranted the use of the equal slopes model in this test. A significantly positive slope was found that quantified the mean training score's relationship with image quality scores. For every 1 unit increase in mean training score, a respondent was expected to score an image, regardless of filter method, 0.6519 units higher.
Future work would entail implementing modern approaches, such as a Convolutional Neural Network or a Markov Random Field, in order to produce more disparity in image reconstitution results. In addition, scale reformation should be considered so that there is less likelihood of a misinterpretation leading to bad data collection and a more informed survey population should be considered so that image quality is properly understood when rating photographs. More responses may also help discover outliers and decrease variation. Testing more than one subject matter could shed light on variation in visual perception and benchmark scores that is otherwise unnoticed. More work should be conducted in explaining why SSIM might have ranked the Adobe filters lower than even the original noisy image. Another analysis of variance test should be conducted with a dataset that has an equal number of responses per filter group. Finally, it may be interesting to complete another analysis of covariance test with the Bilateral Filter group withheld. The Bilateral Filter had the most significant difference in slope to the rest of the filter groups and was most likely the reason for the marginally insignificant p-value for interaction effect in the unequal slopes model.
\bibliographystyle{apa}
\bibliography{image_bib}
\end{document}
% Part: normal-modal-logic
% Chapter: axioms-systems
\documentclass[../../../include/open-logic-chapter]{subfiles}
\begin{document}
\chapter{Axioms, \usetoken{P}{derivation}, and Modal Systems}
\olimport{modal-logics}
\olimport{normal-logics}
\olimport{modal-systems}
\olimport{logics-proofs}
\olimport{proofs-in-K}
\olimport{duals}
\olimport{proofs-modal-systems}
\olimport{soundness}
\olimport{systems-distinct}
\olimport{provability-from-set}
\olimport{provability-properties}
\olimport{consistency}
\end{document}
\documentclass[]{deedy-resume-openfont}
\usepackage{fontawesome}
\pagenumbering{gobble}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% LAST UPDATED DATE
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% TITLE NAME
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\namesection{Daniel}{Kent}
{ \urlstyle{same} 100 North St. Apt 88, Columbus, Ohio 43202 \\
\href{mailto:[email protected]}{\faEnvelope \hspace{0.1em} [email protected]} |
\faMobilePhone \hspace{0.1em} 925.818.9866 |
\href{http://dnkent.github.io}{\faGlobe \hspace{0.1em} http://dnkent.github.io}
| \href{https://github.com/dnkent}{\faGithub \hspace{0.1em} dnkent} |
\href{https://www.linkedin.com/in/dnkent/}{\faLinkedinSquare \hspace{0.1em} dnkent}
}
% Summary Statement
\vspace{1em}
\centering{\large{Ph.D. Candidate -- graduating in August 2020 -- specializing in data science and computational social science, with over 5 years of experience in leading collaborative research teams and projects on machine learning, quantitative research and analysis, statistical software development, and data visualization. }}
%}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EXPERIENCE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\vspace{-0.35em}
\begin{flushleft}
\section{Skills}
\runsubsection{Software:} \size{11}{R (5+ years) \textbullet{} Python (3+ years) \textbullet{} SQL (MySQL) \textbullet{} Git \textbullet{} Unix \textbullet{} \LaTeX\ \textbullet{}Julia \textbullet{} JavaScript (D3.JS)}
\sectionsep
\vspace{-0.2em}
\runsubsection{Quantitative:} \size{11}{Machine Learning \textbullet{} Social Network Analysis \textbullet{} Causal Inference \textbullet{} Time Series Analysis \textbullet{} Bayesian Inference \textbullet{} Natural Language Processing \textbullet{} Generalized Linear Models \textbullet{} Probability Theory} %\textbullet{} Experimental Design
\sectionsep
\vspace{-0.2em}
\runsubsection{Platforms:} \size{11}{Google Cloud Platform \textbullet{} Amazon Web Services \textbullet{} Cluster Computing \textbullet{} Linux}
\sectionsep
\vspace{-0.5em}
\section{Experience}
\vspace{-0.2em}
\runsubsection{The Ohio State University} \hspace{4.05in} \location{Columbus, OH}
\vspace{0.35em}
\descript{PH.D. Candidate, Political Science} \hspace{3.05in} \location{August 2015 -- Present}
\vspace{-1.1em}
\begin{tightemize}
\item Dissertation: \italicfont{Supervised and Unsupervised Models of International Conflict and Revision}
\item Built machine learning ensembles in R with 800,000+ observations to forecast international conflict onset
\item Developed a novel dataset with 15,974 estimates of each country's international standing from 1816-2012
\item Programmed computational model in Python, simulating exclusion across complex social systems
%\item Organized speaker series.
\end{tightemize}
\sectionsep
\descript{Graduate Research Associate} \hspace{3.375in} \location{August 2018 -- Present}
\vspace{-1.1em}
\begin{tightemize}
\item Estimated latent social networks of over 400,000 Cold War bureaucrats through text-as-data methods
%\item Inferred network ties by applying text-as-data methods to German archival records
\item Designed forecasting approach for civil wars from 1945-2012, monitoring XGBoost features for anomalies
\item Wrote R package, \texttt{\href{http://github.com/dnkent/dynamr}{dynamr}}, providing over 20\% higher accuracy than other changepoint software %detects changepoints in varying-coefficient generalized linear models
%\item Collaborated with research teams spanning multiple countries and states
\end{tightemize}
\sectionsep
\descript{Instructor, \href{https://dnkent.github.io/talk/math-workshop-2019/}{Math Workshop for Political Scientists}} \hspace{2.325in} \location{July 2019}
\begin{tightemize}
\item Taught graduate-level course with 15 Ph.D. students on mathematical foundations of applied statistics
\item Lectured on calculus (derivatives, integrals, and multivariate), linear algebra, and set notation
\item Mentored cohort of first-year Ph.D. students during their transition into graduate school
\end{tightemize}
\sectionsep
\descript{Junior Fellow, Program in Statistical Methodology}
\hspace{1.33in} \location{July 2017 -- August 2018}
%\vspace{\topsep} % Hacky fix for awkward extra vertical space
%\vspace{-1.1em}
\begin{tightemize}
\item Served as a research consultant, helping faculty and students tackle computational and modeling problems
\item Hosted training workshops and organized a speaker series on applied and theoretical data science topics
%\item Organized speaker series.
\end{tightemize}
\sectionsep
%\runsubsection{Southern Illinois University}
%\descript{| Undergraduate Research Assistant}
%\location{August 2012 -- May 2014 | Carbondale, IL}
%\begin{tightemize}
%\item Prepared network dataset of state supreme court citations.
%\item Assisted in data cleaning for network analysis.
%\item Data entry and cleaning produced experience using Excel.
%\end{tightemize}
%\sectionsep
\vspace{-0.5em}
\section{Research} % Add website urls
\renewcommand\refname{\vskip -0.75cm}
\bibliographystyle{abbrv}
\bibliography{publications}
\nocite{*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EDUCATION
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\vspace{-0.5em}
\section{Education}
\runsubsection{The Ohio State University} \hspace{4.05in} \location{Columbus, OH}
\descript{PH.D., Political Science} \location{(Focus on Quantitative Research Methods)}
\hspace{0.85in} \location{(Expected) August 2020}
\descript{M.A., Political Science} \hspace{4.9in} \location{May 2017}
\sectionsep
\vspace{-1em}
\runsubsection{University of California, Davis} \hspace{3.95in} \location{Davis, CA}
\descript{B.A., International Relations} \location{(Highest Honors)} \hspace{3.1in} \location{June 2013}
\end{flushleft}
\end{document}
"alphanum_fraction": 0.728581104,
"avg_line_length": 44.4060150376,
"ext": "tex",
"hexsha": "7fd898872bfb425c2944dfded7ae7be1f98c3b7c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "69bf01b9ee9fb91d894b62065bcb270efc398cf5",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "dnkent/resume_ds",
"max_forks_repo_path": "kent.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "69bf01b9ee9fb91d894b62065bcb270efc398cf5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "dnkent/resume_ds",
"max_issues_repo_path": "kent.tex",
"max_line_length": 347,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "69bf01b9ee9fb91d894b62065bcb270efc398cf5",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "dnkent/resume_ds",
"max_stars_repo_path": "kent.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1572,
"size": 5906
} |
\documentclass[11pt]{scrartcl}
\usepackage[sexy]{evan}
\usepackage{braket}
\usepackage{color} %May be necessary if you want to color links
\usepackage{hyperref}
\usepackage{tikz}
\usepackage[compat=1.1.0]{tikz-feynman}
\usepackage{comment}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\hypersetup{
colorlinks=true, %set true if you want colored links
linktoc=all, %set to all if you want both sections and subsections linked
linkcolor=blue, %choose some color if you want links to stand out
}
\usepackage{geometry}
%\usepackage{showframe} %This line can be used to clearly show the new margins
\newgeometry{vmargin={30mm}, hmargin={20mm,20mm}} % set the margins
\begin{document}
\title{Notes 2018, Part I} % Beginner
\date{December 2017}
\maketitle
\begin{abstract}
\sffamily\small
Here are collected notes on physics readings.
\end{abstract}
\vspace{1em}
\tableofcontents
\newpage
\section{Resources}
\begin{itemize}
\item David Skinner Advanced QFT notes: has interesting material on renormalization, very formal for gauge theory using differential geometry throughout. Very good link from 0+1d QFT to 3+1d QFT, and uses path integral. \url{https://dec41.user.srcf.net/h/III_L/advanced_quantum_field_theory}
\item Joseph Minahan QFT notes: best for path integral of harmonic oscillator and feynman rules, as well as explicit QED and $\phi^4$ calculations in dimensional regularization. \url{https://www.physics.uu.se/digitalAssets/405/c_405910-l_3-k_wholeshebang_hyper.pdf}
\item Timothy Hollywood \url{https://arxiv.org/abs/0909.0859}
\item Jorge Crispim Romero: Very detailed loop calculation and appendix for feynman tricks and integrals useful reference. \url{http://porthos.ist.utl.pt/ftp/textos/tca.pdf}
\item Kardar Statistical Physics of Fields. Clearest treatment of wilsonian RG for $O(N)$ model.
\item Michio Kaku: Very concise, to the point but easy to read and not too dense. Lots of material covered, but page layout is clean and very pleasant to read. Highlights: group theory chapter (at the start!), non-abelian gauge theory and standard model.
\item Peskin and Schroeder: nonlinear sigma model is done pretty well, and all derivations are very explicit (albeit very brute force). Some highlights: chapter on path integral, QED rules and basic calculations, non-abelian gauge symmetry, classical fields. The problems are very good but some are quite hard/long.
\item Fradkin: Renormalization group, OPE, Conformal Field Theory. Many applications peskin leaves to the exercises, fradkin actually carries it out making it very encyclopedic. However due to the breadth of material shown, it is quite dilute.
\item Weinberg: Quantum Theory of Fields Vol. II. A bit too much detail/sophisticated/nuanced reasoning on pretty much every topic. However the authoritative reference, everything is carefully reasoned, no carpet is left un-turned. I especially liked discussion on spontaneous symmetry breaking, the effective potential and topology.
\item Matthew Schwartz: Especially done well is gauge invariance, the lorentz group, derivation of spinor representation and derivation of feynman diagrams. formulas are very explicit make easier to use for solving problems.
\item Srednicki: $\phi^3$ theory renormalization, gamma function and dim-reg tricks, path integrals and feynman rules derivation. Uses the weird (relativist) minkowski signature unfortunately. Sometimes the calculations are too brute force with little intuition.
\item Zee QFT in a nutshell: very good for differential form and gauge theory. Sketch notation sometimes. Many article sized chapters on condensed matter applications. The problems are quite good and not too hard.
\item Xi yin, harvard 253b notes: super dense, not much intuition. Covers Weinberg vol. 2 in half the number of pages.
\item Preskill les houches notes on vortices and monopoles: \url{http://theory.caltech.edu/~preskill/pubs/preskill-1987-vortices.pdf}
\item Nair, Quantum field theory, a modern perspective: good material on differential geometry ch. 14. Uses forms throughout making it quite clean. Selection of topics and math used is very nice for high energy, but no problems :(
\end{itemize}
\section{Breakdown of Mean-Field}
\subsection{Landau's Symmetry Breaking Theory}
\subsection{The Free energy functional (Goldenfeld 5.6)}
Landau postulates a free energy functional of the form:
\begin{align}
L = \int d^d x \mathcal{L}[m, m^2, (\nabla m)^2, ...](x)
\end{align}
While it has units of energy, it is not the free energy, nor is it the Gibbs free energy. Both of those
thermodynamic functions are convex, while $L$ can clearly be non-convex.
Consider a coarse grained theory with microscopic degrees of freedom $m'(x)$, which are averaged over boxes of size ${1 \over \Lambda}$.
\begin{align}
e^{-L[m]} = \sum_{m'} e^{-\beta H(m')} \Big|_{m(x)}
\end{align}
where the sum is over all configurations of $m'$ (microscopic order parameter) such that the coarse grained order parameter m is fixed.
It then automatically follows from this definition that the partition function is just a functional integral:
\begin{align}
\boxed{Z = \int \mathcal{D}m e^{- \int d^d x \mathcal{L}[m]}}
\end{align}
\subsection{Mean Field Solution}
We consider a Landau Ginzburg statistical field $m(x)$ to model O(N) symmetric systems, for example magnetic spins etc...:
\begin{align}
Z \propto \int \mathcal{D}m \exp\left( -\int dx \left[ K (\nabla m)^2 + {t \over 2} m^2 + u m^4 \right] \right)
\end{align}
Note that $t \equiv {T- T_c \over T_c}$.
Consider a mean field ansatz where $m(x) = \bar{m}$. The free energy is minimized at $\bar{m} = 0$
if $ t > 0 $ (disordered phase), and at $\bar{m} = \sqrt{-t \over 4 u}$ for $t < 0$ (ordered phase).
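As a quick check, minimizing the uniform free energy density over $\bar{m}$ reproduces these two cases:
\begin{align}
f(\bar{m}) &= {t \over 2} \bar{m}^2 + u \bar{m}^4, \qquad
{\partial f \over \partial \bar{m}} = t \bar{m} + 4 u \bar{m}^3 = 0 \\
&\rightarrow \bar{m} = 0 \; (t > 0), \qquad \bar{m}^2 = {-t \over 4u} \; (t < 0).
\end{align}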
\subsection{Goldstone Theorem}
In the ordered phase, we may expand the action as:
\begin{align}
m(x) = \left(\bar{m} + \phi^l(x)\right) \hat{e}_l + \sum_{i=1}^{n-1} \phi^t_i(x) \hat{e}_{i}
\end{align}
where $\phi^l$ and $\phi^t$ model the longitudinal and transverse fluctuations.
It is straightforward to show that the transverse modes are massless, while the longitudinal mode
is massive:
\begin{align}
\braket{\phi^l(q) \phi^l(q')} & \propto {\delta^d (q + q') \over K |q|^2 + {1 \over \xi_l^2}} \\
\braket{\phi^t_\beta(q) \phi^t_\alpha(q')} & \propto {\delta_{\alpha \beta} \delta^d (q + q') \over K |q|^2 }
\end{align}
Those massless transverse modes in the ordered phase are what people refer to a "Goldstone Bosons".
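A quick way to see where these masses come from is to expand the potential $V(m) = {t \over 2} m^2 + u\, m^4$ about the ordered minimum, writing $|m|^2 = (\bar{m} + \phi^l)^2 + \sum_i (\phi^t_i)^2$:
\begin{align}
{\partial^2 V \over \partial (\phi^l)^2}\bigg|_{\bar{m}} &= t + 12\, u\, \bar{m}^2 = -2t \equiv {1 \over \xi_l^2} > 0 \quad (t < 0), \\
{\partial^2 V \over \partial (\phi^t_i)^2}\bigg|_{\bar{m}} &= t + 4\, u\, \bar{m}^2 = 0,
\end{align}
so the longitudinal fluctuation is massive while the transverse fluctuations cost no potential energy, only gradient energy.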
\subsection{Mermin-Wagner-Coleman Theorem}
\begin{theorem}
There is no spontaneous symmetry breaking of a continuous symmetry in dimension 2 or less
\end{theorem}
\begin{proof}
Consider the fluctuations of the goldstone modes, and their effect on the long range correlation.
The spin-spin correlation is:
\begin{align}
\braket{m(x) m(x')} \equiv G(x-x')
\end{align}
We can freeze the massive longitudinal mode for the low energy effective theory. In that case, for a two-component order parameter (the argument generalizes), we can parametrize the spin wave via the angle field $\theta$:
\begin{align}
m(x) \approx |\bar{m}| e^{i \theta(x)}
\end{align}
The low energy action is then governed by a gaussian action:
\begin{align}
P(\theta(x)) \propto \exp \left( -\int d^d x \frac12 K (\nabla \theta)^2 \right)
\end{align}
We can compute the fluctuation of the angle:
\begin{align}
\braket{\theta(q) \theta(q')} \propto {\delta(q + q') \over K |q|^2}
\end{align}
In real space, we have:
\begin{align}
\braket{\theta(x) \theta(0)} \equiv G(x) \propto x^{2-d}
\end{align}
for $d > 2$. For $d = 2$ we instead find $G(x) \propto \ln(x)$, so the phase fluctuations grow without bound at large separations.
This means that the long range order is destroyed, and hence there is no ordered phase
in dimensions 2 or less.
Dimension 2 is called the \vocab{lower critical dimension}
\end{proof}
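To connect this back to the spin-spin correlation itself, note that for the Gaussian angle field one can use the standard identity $\braket{e^{iX}} = e^{-\frac12 \braket{X^2}}$ for a Gaussian variable $X$:
\begin{align}
\braket{m(x) \cdot m(0)} = |\bar{m}|^2\, \braket{e^{i(\theta(x) - \theta(0))}} = |\bar{m}|^2 \exp\left( -\frac12 \braket{(\theta(x) - \theta(0))^2} \right),
\end{align}
so in $d = 2$ the logarithmic growth of the angle fluctuations turns the would-be constant long range piece into a power law decay, consistent with the absence of true long range order.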
\subsection{Upper Critical Dimension}
Let us now compute the singular part of the heat capacity in dimensions larger than 2. We are interested in its behavior close to the critical point, where $t \rightarrow 0$ and $\xi^2 \rightarrow \infty$.
Recall the saddle point approximation is:
\begin{align}
\int dx e^{-f(x)} \approx e^{-f_{min}} \int dx' e^{- \frac12 f''(x_0) x'^2} = e^{- f_{min}} \sqrt{2 \pi \over f''}
\end{align}
In higher dimensions, where $f''$ is the Hessian operator $G^{-1}$, we have
\begin{align}
\int d\phi e^{-S[\phi]} \approx e^{- S_{min}} e^{-\frac12 \text{Tr} \ln (G^{-1})} e^{\frac12 \ln(2 \pi) }
\end{align}
The free energy is just $-\ln(Z)$ (up to a factor of the temperature).
Let's compute the free energy density for the O(N) model:
\begin{align}
f \equiv {F \over V} = f_0 + \int {d^d q \over (2 \pi)^d} \ln(Kq^2 + {1 \over \xi^2})
\end{align}
where ${1 \over \xi^2} = t$ for $t > 0$ and ${1 \over \xi^2} = -2t$ for $ t< 0$.
The heat capacity, $C$ can be computed as:
\begin{align}
C = \partial_{t}^2 {\beta f} \propto 0 + \int {d^d q \over (2 \pi)^d} {1 \over (K q^2 + {1 \over \xi^2})^2}
\end{align}
For dimensions $ d> 4$, the integral is dominated by the UV cutoff $\Lambda \equiv {1 \over a}$ where $a$ is the lattice spacing. In other words:
\begin{align}
\int { d^d q \over (2 \pi)^d} {1 \over (K q^2 + {1 \over \xi^2})^2} \propto \Lambda^{d-4} \text{ (d $>$ 4)}
\end{align}
This quantity contributes a non-universal constant correction to the heat capacity discontinuity around $t = 0$.
For dimensions $d < 4$, the integral is dominated by the IR divergence. In fact, we can rescale and
shift the integration variable:
\begin{align}
x & \rightarrow \sqrt{K} q \xi \\
C &\propto \xi^{4-d} \int {d^d x} {1 \over (x^2 + 1)^2}\\
C &\propto \xi^{4-d}
\end{align}
This contribution diverges for dimensions $< 4$ as we approach criticality, since the correlation length $\xi$ diverges. We thus showed the following:
\begin{itemize}
\ii In dimensions $ \geq 4$, the divergent part of the heat capacity is well captured by mean field theory. The fluctuation corrections to the saddle point approximation only contributes a non-universal constant correction to the heat capacity, but does not alter the critical exponent.
\ii In dimensions of $< 4$, the fluctuations around the saddle point solution contributes singular corrections to the divergence of the heat capacity (and other response functions like susceptibility). This means that the results of mean field theory, are unreliable. For this reason, $d=4$ is called the \vocab{upper critical dimension} of the theory.
\end{itemize}
The way to handle dimensions below 4 is through a program first started by Kadanoff and later completed by Wilson and Fisher. The main idea is that around the critical point, the correlation length diverges, which should wash out all UV related scales. The system exhibits scale invariance, and analysis around scale invariant fixed points of statistical fields should be enough to predict the critical exponents.
\section{Invitation to Renormalization}
Before we treat RG, let's build some technology
\subsection{The quantum effective action}
\subsubsection{A list of analogies}
Here are the QFT $\rightarrow$ stat mech analogies:
\begin{itemize}
\item \[ \underbrace{\phi(x)}_{\text{field}} \rightarrow \underbrace{m(x)}_{\text{local mesoscopic magnetisation}} \]
\item \[ \underbrace{\mathcal{L}_{E} (x)}_{\text{euclidean lagrangian}} \rightarrow \underbrace{\mathcal{H}(x)}_{\text{landau energy functional}} \]
\item \[ \underbrace{Z(J)}_{\text{generating functional}} \rightarrow \underbrace{Z(H)}_{\text{partition function}} \]
\item \[ \underbrace{W(J)}_{\text{connected generating functional}} \rightarrow \underbrace{F(J)}_{\text{free energy}} \]
\item \[ \underbrace{\Gamma(\braket{\phi})}_{\text{1PI generating functional or "effective action"}} \rightarrow \underbrace{\mathcal{G}(M)}_{\text{Gibbs Free Energy}} \]
\end{itemize}
\subsubsection{Properties and definition (Xi Yin 253b notes)}
The generator of connected correlations is denoted as $W(J)$. It is the analog of the free energy of a statistical system. Differentiating it generates connected diagrams
\begin{align}
W(J) &= \ln \braket{ e^{\int dx J \phi}} \equiv \ln \left[ \int [d \phi] e^{- S[\phi] + \int dx J(x) \phi(x) } \right] \\
\prod_{i = 1}^N {\delta \over \delta J(x_i)} W(J) \Big|_{J = 0} & = \braket{\prod_{i = 1}^N \phi(x_i)}_{c} \text{ ("c" denotes connected correlation)}
\end{align}
The legendre transform of the free energy with respect to J is the Gibbs free energy, denoted by $\Gamma$. It is also called the "Quantum Action". The legendre transform is involutive (2 legendre transforms yields the identity), and bijective
when the function is convex (failure to be bijective is related to the appearance of multiple phases).
\begin{align}
\phi_{cl}(x) \equiv {\delta \over \delta J(x)} W(J) & \equiv \braket{\phi}_{J} \text{ (Note we let the J be anything for now)} \\
\Gamma(\phi_{cl} (x)) & = W(J) - \int dy J(y) \phi_{cl} (y) \\
\rightarrow {\delta \over \delta \phi(y)} \Gamma & = -J (y)
\end{align}
Note both W(J) and $\Gamma(\phi)$ are functionals: they map functions ($J$ or $\phi$) to numbers.
One useful property of the quantum action is that its tree level (classical) expansion gives the full connected correlations in the quantum theory:
\begin{align}
\lim_{\hbar \rightarrow 0} \hbar \ln \int [d\phi] e^{{1 \over \hbar} \left(\Gamma[\phi] + \int dx J \phi \right)} &= \Gamma(\phi_0) + \int dx J \phi_0 \\
&= W(J) \\
\text{ where } {\delta \over \delta \phi} \Gamma(\phi) \Big|_{\phi_0} &= -J
\end{align}
Furthermore, we note the stationary point of $\Gamma$ is the expectation of the quantum field:
\begin{align}
{\delta \over \delta J} W(J) & = \braket{\phi} \\
\phi_0 & = \braket{\phi}
\end{align}
This is why it is called the "quantum" effective action. Its classical solutions incorporate all quantum fluctuations.
\subsubsection{Generation function for 1PI vertices (Peskin 11.5)}
The quantum action is also the generating function for 1PI vertices. Its hessian is the inverse of the propagator.
\begin{proof}
Consider the following tautology:
\begin{align}
{\delta \over \delta J(y)}{\delta \over \delta \phi (x)} \Gamma[\phi] & = -\delta(x-y)
\end{align}
We can apply the (calculus of variation version) of the chain rule:
\begin{align}
-\delta(x-y) & = \int dz {\delta \phi(z) \over \delta J(y)} {\delta^2 \over \delta \phi(z) \delta \phi(x)} \Gamma(\phi) \\
&= \int dz \underbrace{{\delta^2 \over \delta J(y) \delta J(z)} W(J) }_{G(y,z)} \underbrace{{\delta^2 \over \delta \phi(z) \delta \phi(x)} \Gamma(\phi) }_{D(z, x)} \\
& \equiv \int dz G_{yz} D_{zx}
\end{align}
The last line shows that the hessian of $\Gamma$ is, up to a sign, the inverse of the propagator (which is the hessian of $W$).
To obtain the relation between the quantum action and the W at higher orders, first note the following identities:
\begin{align}
{\delta \over \delta J(z)} & = \int dw {\delta \phi(w) \over \delta J(z)} {\delta \over \delta \phi(w)} = \int dw G(z, w) {\delta \over \delta \phi(w)} \\
{\partial \over \partial \alpha } M^{-1} (\alpha) & = -M^{-1} {\partial M \over \partial \alpha} M^{-1}
\end{align}
Applying it to the 3rd order connected function gives:
\begin{align}
{\delta^3 W \over \delta J_x \delta J_y \delta J_z} & = \int dw G_{zw} {\delta \over \delta \phi(w)} \left( {\delta^2 \Gamma \over \delta \phi_x \delta \phi_y}\right)^{-1} \\
& = \int dw \int du \int dv G_{zw} G_{xu} {\delta^3 \Gamma \over \delta \phi_u \delta \phi_v \delta \phi_w} G_{vy} \\
{\delta^3 W \over \delta J_x \delta J_y \delta J_z} & = \int du dv dw G_{xu} G_{yv} G_{wz} {\delta^3 \Gamma \over \delta \phi_u \delta \phi_v \delta \phi_w}
\end{align}
The left hand side is the 3 point function. The right hand side is 3 propagators hooking up to a blob, which is generated by $\Gamma$. This means $\Gamma$ at 3rd order generates 3-point 'blobs' with external propagators chopped off ("amputated"). We can iterate to higher orders, but basically, we have that $\Gamma$ generates 1 Particle Irreducible diagrams. In equation form it reads:
\begin{align}
\boxed{{\delta^n \Gamma[\phi] \over \delta \phi(x_1) ... \delta \phi(x_n)} = \braket{\phi(x_1)... \phi(x_n)}_{1PI}}
\end{align}
\end{proof}
\subsubsection{Alternate perspective (Xi yin 253b)}
Let's evaluate $\Gamma$ directly from the definition and see what comes out:
\begin{align}
e^{i \Gamma[\phi_0]} & = \int [d \phi] \exp \left({i \int dx \mathcal{L} + J \phi - J \phi_0} \right)_{{\delta W \over \delta J} = \phi_0} \text{ (Just using the definition of the quantum action )}
\end{align}
Note that J is a background source, which is set by $\phi_0$.
J(x) is a source configuration that makes $\braket{ \phi } = \phi_0$ in the quantum theory.
The expression above suggests
we should shift variable to $\hat{\phi} \equiv \phi - \phi_0$:
\begin{align}
e^{i \Gamma[\phi_0]}& = \int [d \hat{\phi}] e^{i \int dx \mathcal{L}(\hat{\phi} + \phi_0) + J \hat{\phi}} \\
\end{align}
To compute the effective action for a field configuration $\phi_0$
\begin{itemize}
\item Expand the action in a background field $\phi = \phi_0 + \hat{\phi}$
\item Add a term $J \hat{\phi}$ in the action to make $\braket{\hat{\phi}} = 0$. In QFT language, add a source configuration J(x) such that all tadpoles are cancelled.
\end{itemize}
This is the so called \vocab{background field} technique.
\subsubsection{Effective action for $\phi^4$ symmetry breaking}
We'll compute $\Gamma$ in 2 steps, first computing W(J). For this calculation, we will do it in minkowski time and re-introduce $\hbar$ to make clear the expansion around a classical solution.
From the definition,
\begin{align}
e^{{i \over \hbar}W(J) } = \int d \phi \exp\left( {i \over \hbar} \left( S[\phi] + \int dx J \phi \right) \right)
\end{align}
We first expand via the saddle point (classical solution) called $\phi_0$, which satisfies the classical equations of motion ${\delta S \over \delta \phi}|_{\phi_0} = -J$. Shifting the field variables as a fluctuation around the classical solution, we define $\tilde{\phi} = \phi- \phi_0$:
\begin{align}
e^{{i \over \hbar} W(J)} \approx e^{{i \over \hbar} \left(S[\phi_0] + \int dx J \phi_0 \right)} \int d \tilde{\phi} e^{\tilde{\phi} D^{-1} \tilde{\phi} } \\
\end{align}
Taking the log of both sides:
\begin{align}
W(J) \approx S[\phi_0] + \int dx J \phi_0 + {\hbar \over 2 i} \text{Tr} \ln(D^{-1})
\end{align}
$\Gamma$ just removes the extraneous current (tadpoles!), giving
\begin{align}
\Gamma = \underbrace{S[\phi_0]}_{\text{Classical Action}} + { \hbar \over 2 i} \underbrace{\text{Tr} \ln(D^{-1})}_{\text{Quantum correction}} + ...
\end{align}
We now make the dramatic assumption that the minimum energy configuration of field $\phi(x)$ at fixed average field value is translation invariant (\textbf{this argument fails miserably when multiple phases appear, see convexity proof})
\begin{align}
\Gamma[\phi] \equiv VT \times V_{eff}(\phi)
\end{align}
One then defines the effective potential $V_{eff}$ as the intensive value of $\Gamma$ normalized in spacetime.
\begin{example}
\emph{Compute to first order the correction to the $V_{eff}$ for the following lagrangian (Zee section IV.3)}
$$ \mathcal{L} = \frac12 (\partial \phi)^2 + \frac12 m^2 \phi^2 - {g \over 4!} \phi^4$$
To 0th order, the quantum action is just the classical action. For constant field, the effective potential is just the classical potential:
$$V_{eff} = -\frac12 m^2 \phi^2 + {g \over 4!} \phi^4$$
To first order in, we need to compute the quantum correction. First, the potential energy density can be rewritten as:
$$ \int dx \mathcal{L} = \int dx dy \phi(x) \underbrace{[-\partial^2 - V''(\phi)] \delta(x-y)}_{D^{-1}} \phi(y)$$
The operator D is just the propagator for a free theory with a different mass, and is diagonal in k space:
\begin{align}
\text{Tr} \ln(D^{-1}) = \int d^4 x \int \dkk \log({-k^2 + m'^2})
= V T I(d)
\end{align}
This integral is divergent, but we can evaluate up to a cutoff $\Lambda$:
\begin{align}
I(d) = {\Lambda^2 \over 32 \pi^2} V''(\phi) - {V''(\phi)^2 \over 64 \pi^2} \log \left({e^{\frac12} \Lambda^2 \over V''(\phi)} \right)
\end{align}
The cutoff dependent terms will be absorbed by renormalization conditions. However, the part that is not cutoff dependent, shows
as a log correction to the potential:
\begin{align}
V_{eff} (\phi) = A \phi^2 + B \phi^4 (D + E \log({\phi^2 \over \Lambda^2}))
\end{align}
\end{example}
\emph{Comments:}
We see the effective potential will be deeper near the classical minima (see Peskin 378). Note peskin does the higher dimensional
version of this problem, which is why it looks a bit obscure.
The way the coefficients A, B, D, E are set are via renormalization conditions (lab measurements).
There are 2 renormalization conditions (2 coupling constants in the bare lagrangian, the mass $m$ and the interaction $g$)
\begin{itemize}
\item We measure in lab the mass of the particle to be $m_P$ where P stands for physical. This sets: $${d^2 V_{eff} \over d \phi^2}_{\phi=0} = m_P^2$$
\item We measure the strength of the interaction at a given scale $M$ to be some coupling $g_P$. This sets:
$$ {d^4 V_{eff} \over d \phi^4}_{\phi = M} = g_P$$
Question: Why can't we measure it at $\phi = 0$? What is the meaning of this scale $M$ we picked?
\end{itemize}
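As a hint for the question above (a short side calculation, not taken from the references): the $\phi^4 \log(\phi^2 / \Lambda^2)$ piece of $V_{eff}$ makes the fourth derivative ill-defined at the origin,
\begin{align}
{d^4 \over d \phi^4} \left[ \phi^4 \log \phi^2 \right] = 24 \log \phi^2 + \text{const} \;\xrightarrow{\phi \to 0}\; -\infty,
\end{align}
so for a massless theory the coupling has to be defined at some nonzero scale $M$. The choice of $M$ is arbitrary; changing it is compensated by a change in $g_P$, which is precisely the running discussed in the renormalization group sections below.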
\subsubsection{Convexity}
\emph{Zinn Justin, Peskin and Schroeder, and Weinberg}
The effective potential is convex. The one computed from perturbation theory may not be because we \textbf{incorrectly} assume the field configuration that minimizes the energy is $\phi(x) = \phi$ constant.
\begin{proof}
Suppose we have $V_1 = V_{eff}(\phi_1)$ and $V_2 = V_{eff} (\phi_2)$ values.
One can always construct a state with some intermediate value
\[ \braket{\phi} = (1-x) \phi_1 + x \phi_2 \]
Do this by mixing little islands of $\phi_1$ field values with $\phi_2$ field values in the desired ratio. Because the total energy is extensive in the volume occupied by each type of island, one gets an effective potential value:
\[ V(\braket{\phi}) = (1-x) V_1 + x V_2 \]
Since the effective potential minimizes the energy at fixed average field, this value is an upper bound on the true effective potential. Hence the effective potential is bounded above by the convex hull of this mixing construction, and is therefore convex.
\end{proof}
\subsection{Diagrammatic interpretation}
\emph{Francois David PIRSA or Zinn Justin}
We can interpret easiest the quantum action by studying the case of $\phi^4$ theory with the following Lagrangian:
$$ \mathcal{L} = \frac12 \phi( \Box + m^2) \phi + {\lambda \over 4} \phi^4$$
Note we are working with Euclidean field theory hence the weird sign.
If we evaluate the path integral via saddle point, we can show (this requires a few line of math) that:
$$\boxed{\Gamma[\varphi] = S[\varphi] + \frac{ \hbar}{2} \text{Tr} \ln(S''[\varphi]) + ...}$$
This formula is extremely useful and \textbf{needs to be committed to memory}. It re-appears often in statistical physics,
for example when it is used to compute linear responses like the heat capacity (which relates to fluctuations of the order parameter).
What it shows is that the first order quantum effects ($\hbar$ term)
in the effective action has an elegant trace formula.
To write this as a perturbation expansion we factor the action into the free and the interacting part:
\begin{align}
G^{-1} &\equiv {\frac12 (\Box + m^2)} \\
\underbrace{S''}_{\text{Hessian Operator}} &\equiv {\delta^2 \over \delta \phi(x) \delta \phi(y)} S(\phi) \\
&= (G^{-1}(x-y) + {\lambda \over 2} \underbrace{V(x, y)}_{\text{interaction kernel}}) \\
&= {\bf G}^{-1} ( 1 + {\bf G} {\lambda \over 2} {\bf V} )
\end{align}
In the previous expressions, we denote operators in bold for clarity.
We plug this expression into the trace expansion.
\begin{align}
\text{Tr} \ln((S'')) & = \text{Tr} \ln ({\bf G^{-1}}) + \text{Tr} \ln(1 + {\lambda \over 2}{\bf G V})\\
\end{align}
It is straightforward to identify the interaction kernel V by seeing how it acts on sample functions $\phi_1, \phi_2$:
$$ {\bf \phi_1 \cdot V \cdot \phi_2} = \int dx dy \phi_1(x) V(x, y) \phi_2(y) = \int dx' \phi^2(x') \phi_1(x') \phi_2(x')$$
$$ \rightarrow V(x, y) = \delta(x -y) \phi^2(y)$$
Using the operator from of the interaction above, we can carry the first order term expansion:
\begin{align}
\text{Tr} \ln\left(1 + {\bf G} {\lambda \over 2} \phi^2\right) & = {\lambda \over 2} \int dx\, G(0) \phi^2(x) \\
& \quad - \frac12 \left({\lambda \over 2}\right)^2 \int dx \int dy\, G(x-y) \phi^2(y)\, G(y-x) \phi^2(x) + ... \\
\end{align}
This expansion can be re-interpreted as a sum over 1-loop 1P-I diagrams (these just look like bigger and bigger loops for higher order of
the interaction strength $\lambda$).
\textbf{ The expansion of the effective action at each order of $\hbar$ is just the expansion in the number of loops of the quantum theory}. A good proof is presented in Zinn Justin's Quantum Field Theory and Critical Phenomena (what isn't proved there!), however we will outline a brief argument.
Consider some diagram contributing to the effective action. This diagram will have vertices (interactions), internal lines (propagators) and external lines which are \emph{amputated}:
\begin{itemize}
\item Each propagator contributes a factor of ${\hbar}$ (L propagator)
\item Each vertex interaction contributes ${1 \over \hbar}$ (V vertices)
\item Because the weight in the path integral is $e^{-S \over \hbar}$, the effective action is defined with one overall factor of $\hbar$ pulled out (it appears as $\Gamma / \hbar$ in the exponent); this accounts for the $+1$ in the counting below
\end{itemize}
The contribution of that diagram will be at order $\hbar^{L - V +1 }$. This number $L-V+1$ is called the \vocab{Betti Number} of the graph and is a topological invariant denoting the number of loops of the diagram.
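As a small example of this counting, take the one-loop correction to the 4-point vertex in $\phi^4$ theory: it has $V = 2$ vertices and $L = 2$ internal propagators, so
\begin{align}
\hbar^{L - V + 1} = \hbar^{2 - 2 + 1} = \hbar^{1},
\end{align}
and it indeed enters the effective action at first order in $\hbar$, i.e. at one loop.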
\subsection{Renormalization (Peskin 12.1)}
Under a coarse-graining step in which lengths are rescaled by a factor $b > 1$ (equivalently, momenta are rescaled as $k \rightarrow k/b$), the dimensionless coupling of an operator built from $N$ powers of $\phi$ and $M$ derivatives rescales as
\begin{align}
g(b) &= g(1)\, b^{\alpha} \\
\alpha &\equiv d - M - N\, {d-2 \over 2}
\end{align}
that is, $\alpha$ is the mass dimension of the coupling (a worked example follows the list below).
The possible cases are:
\begin{itemize}
\ii \vocab{$\alpha > 0$}. This implies that the coefficient grows under RG. In other words, this coefficient becomes more important to the IR physics. This is called \vocab{relevant, or super-renormalizable} etc...
\ii \vocab{$ \alpha = 0$}. This is a \vocab{marginal} case and higher quantum corrections should determine the growth of the operator in the IR physics.
\ii \vocab{$\alpha < 0 $}. This operator grows weaker at low energy under RG. It is called \vocab{irrelevant}, as in does not affect as much the IR physics. In particle physics, we
call it \vocab{non-renormalizable} because we think about it the opposite way. For a given interaction strength measured for this interaction in the IR, it will blow up in the UV!
\end{itemize}
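Worked example in $d = 4$, just plugging into the formula above (so that ${d-2 \over 2} = 1$):
\begin{align}
\phi^2 &: \; N = 2, \; M = 0, \; \alpha = 4 - 2 = 2 > 0 \quad \text{(relevant: the mass term)} \\
\phi^4 &: \; N = 4, \; M = 0, \; \alpha = 4 - 4 = 0 \quad \text{(classically marginal)} \\
\phi^6 &: \; N = 6, \; M = 0, \; \alpha = 4 - 6 = -2 < 0 \quad \text{(irrelevant / non-renormalizable)}
\end{align}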
\subsection{Renormalization of $\phi^4$ theory}
Consider a massless $\phi^4$ theory. We would like to study its renormalization procedure. We start with a "model" lagrangian with parameters in the UV called \vocab{bare parameters} $A, B, C$ we'd like to fit to create a model of the theory.
\begin{align}
\mathcal{L} = \frac{1}{2} A (\partial \phi)^2 + \frac{1}{2} B \phi^2 + {C \over 4!} \phi^4
\end{align}
These parameters are a function of the cut off of the theory $\Lambda$. We would like to fit this theory to a lab scale energy scale $\mu$.
By that we mean the 2 following experimental fit conditions:
\begin{itemize}
\item colliding 2 particles with momentum $\mu^2 = s, t, u$ \vocab{Mandelstam} parameters gives us the measured
coupling $g_R$
\item we observe at that scale a massless theory.
\end{itemize}
As we will see these 2 input from experiment will allow us to self consistently extrapolate the theory to calculate correlation functions of any momentum input order by order in perturbation theory. The process of fitting this data is called \vocab{Renormalization} in the high energy context (which we will
distinguish from \vocab{Wilsonian renormalization})
First let's consider the effective action at first order in $\hbar$, which gives us the radiative correction to the propagator $$\Gamma^{(2)}(p_1, p_2) = \underbrace{p^2}_{p = p_1 \text{ or } p_2} + B + \underbrace{T(B)}_{\text{self energy}}$$
T(B) is called the \vocab{self-energy} and causes the mass to increase as we flow to the IR.
It consists of 1 particle irreducible diagrams (other texts often use the symbol $i \Sigma(p)$ to denote it).
For $\phi^4$ theory, it's just:
$$T(B) = \int {d^4 k \over (2 \pi)^4} {1 \over k^2 + B}$$
This integral is quadratically divergent, and has an expansion in the cutoff $\Lambda$:
\begin{align}
T(B) & = \int_{|k| < \Lambda} {d^4 k \over (2 \pi)^4} {1 \over k^2 + B} \\
& \approx \int_{|k| < \Lambda} {d^4 k \over (2 \pi)^4 k^2} \left(1 - {B \over k^2} + ...\right) \\
& = {\Lambda^2 \over 8 \pi^2} - {B \over 8 \pi^2} \log\left({\Lambda^2 \over B}\right)
\end{align}
To fit the theory to a massless theory at some lab scale, we need to make
$$B = -{\Lambda^2 \over 8 \pi^2}$$
This is our first bare parameter fix (in general, B could be momentum dependent like in QED).
The renormalization condition for the 4 point function is computed using the 4 point effective action term, which is conveniently expressed
in terms of mandelstam variables s, t, u:
\begin{align}
\Gamma^{(4)}(p_1, p_2, p_3, p_4) & = \Gamma^{(4)}(s, t, u) \\
& = C - {C^2 \over 2 (4 \pi)^2} \left(\sum_{x = s, t, u} \log\left({\Lambda^2 \over x}\right)\right)
\end{align}
By imposing the experimental condition that $\Gamma^{(4)}(\mu, \mu, \mu) = g_R$, we obtain $C(\Lambda)$.
Now we can compute any 4 point functions to order $\hbar$ and $g_R^2$ based on our 2 parameter fit:
\begin{align}
\Gamma(p_1, p_2, p_3, p_4) = g_R - {1 \over 2} {g_R^2 \over (4 \pi)^2} \left(\sum_{x = s, t, u} \log\left({\mu^2 \over x}\right)\right)
\end{align}
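Filling in the intermediate step (a quick sketch): imposing $\Gamma^{(4)}(s = t = u = \mu^2) = g_R$ on the expression for $\Gamma^{(4)}$ above fixes the bare coupling,
\begin{align}
C(\Lambda) = g_R + {3 \over 2} {g_R^2 \over (4 \pi)^2} \log\left({\Lambda^2 \over \mu^2}\right) + \mathcal{O}(g_R^3),
\end{align}
and substituting this back shows that the explicit cutoff dependence cancels, leaving only the logarithms of $\mu^2 / x$ quoted above.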
\subsection{Lehmann-Kallen representation of the Propagator}
\begin{theorem}
The exact propagator in momentum space has the form:
\begin{align}
i G(k) = {1 \over k^2 + m^2 + i \epsilon} + \int_{s = 4 m^2}^\infty ds {\rho(s) \over k^2 + s + i \epsilon}
\end{align}
\end{theorem}
\begin{proof}
See Srednicki.
\end{proof}
\emph{Comments}:
This theorem says that the propagator has a pole at the mass of the fundamental excitation, and possibly a continuous spectrum starting at a mass scale of $2m$ (i.e. $s = 4m^2$).
My intuition is basically that the fundamental quanta have a mass, and interactions can alter the propagator only when 2 quanta propagate together, which is why the continuum starts at $2m$.
\section{The Renormalization Group}
Let's evaluate the scale dependence of the coupling constants. There are 2 main methods to do so, one I'll call \vocab{counterterm RG} and the other is \vocab{wilsonian/polchinski RG}, also called \vocab{exact RG}.
\subsection{Counterterm RG}
The n-point scattering amplitude $\Gamma^{(n)} (Z_\phi, g_R, \mu^2)$ cannot depend on renormalization scale $\mu^2$.
\begin{align}
{\partial \over \partial \ln \mu} \Gamma^{(n)} = 0
\end{align}
This implies an evolution equation between couplings and scale $\mu$, which is called the \vocab{Callan Symanzik} equation. The evolution rate is defined by a \vocab{beta} function:
\begin{align}
\beta_{g_R} \equiv {\partial \over \partial \ln \mu} g_R
\end{align}
Because the only dependence of those amplitudes is in the counterterms, we can take a derivative of the counterterms directly.
\subsection{Wilsonian RG}
\emph{Peskin and Schroeder ch. 12, or Subir Sachdev Quantum Phase Transitions}.
The other way to obtain the beta functions is to integrate the high energy theory down to a given low energy scale. As you integrate out each momentum shell, the effective action of the low energy theory changes. The rate of change is the beta function. In practice, counterterm RG is easier to implement.
\subsubsection{The setup}
Start with an (euclidean) theory with explicit (UV or bare) cut-off $\Lambda_0$.
\begin{align}
\mathcal{Z} &= \int_{|k| < \Lambda_0} \mathcal{D}[\phi] \exp\left(- S_{\Lambda_0} \right) \\
S_{\Lambda_0} & = \int d^D x\, \frac12 (\partial \phi)^2 + \sum_{i} g_i \Lambda_0^{D-d_i} \mathcal{O}_i
\end{align}
Obtain a theory (an \vocab{effective action}) at scale $\Lambda$, which will be our RG sliding energy scale by integrating high energy modes
\begin{align}
\phi = \underbrace{\phis}_{\text{slow}} + \underbrace{\phif}_{\text{fast}}
\end{align}
If we integrate out down to some value $\Lambda < \Lambda_0$, we get a correction $S_{int}$ to the effective action:
\begin{align}
S_{int} = \ln \left[ \int_{C^\infty (\Lambda, \Lambda_0]} \mathcal{D} \phif \exp\left( -S_0[\phif] -S[\phis, \phif ]\right) \right]
\end{align}
The wilsonian approach is to integrate 1 slice $[\underbrace{\Lambda - \delta \Lambda}_{\Lambda \over s}, \Lambda]$ at a time, and obtain the change in the couplings.
\subsubsection{$\phi^4$ wilson style}
For a $\phi^4$ theory we get:
\begin{align}
S_{int} &= \sum \text{connected diagrams} \\
S_{int} &= {-\lambda \over 4!} \braket{\phis \phis \phif \phif}_{\phif}^c + {\lambda^2 \over 4! 4! 2!} \braket{\phis \phis \phis \phis \phif \phif \phif \phif}_{\phif}^c + \mathcal{O} (\lambda^3)
\end{align}
where $\braket{\cdot}_{\phif}^c$ denotes gaussian averaging over $\phif$ modes living in a momentum shell, and "c" denotes connected correlation functions.
The first term corrects the mass of $\phis$, the second corrects the $\lambda$ for $\phis$:
\begin{align}
\Delta m^2 &= -{\lambda \over 2} {1 \over (2 \pi)^d} \int_{\delta \Lambda} {1 \over k^2 + m^2} k^{d-1} dk \\
&\approx -{\lambda \over 2 (2 \pi)^d} {S_{d} \Lambda^{d-1} \over \Lambda^2} \delta \Lambda \\
\Delta \lambda &\approx -{3 \lambda^2 \over 2} {1 \over (2 \pi)^d} {S_d \Lambda^{d-1} \delta \Lambda \over \Lambda^4} \\
&= -{3 \lambda^2 \over 2} {1 \over (2 \pi)^d} S_d \Lambda^{d-4} \, d \ln \Lambda \\
\end{align}
We recover the RG equations:
\begin{align}
{d \over d s} \lambda = - {3 \over 16 \pi^2} \lambda^2
\end{align}
Here note the sign inversion. The flow is towards the IR.
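Integrating this flow (a quick sketch, with $s$ the logarithmic RG ``time'' accumulated while flowing toward the IR) shows the marginally irrelevant behaviour explicitly:
\begin{align}
{d \lambda \over d s} = -{3 \over 16 \pi^2} \lambda^2
\quad \Rightarrow \quad
\lambda(s) = {\lambda_0 \over 1 + {3 \lambda_0 \over 16 \pi^2} s},
\end{align}
so the coupling dies off only logarithmically in the IR; run backwards toward the UV it grows, which is the Landau pole phenomenon discussed later.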
\subsubsection{Wilsonian effective potential, Local Potential Approximation}
\emph{David Skinner, Timothy Hollywood ch 2, lec10 Francois David }
We just reproduce the calculation above, but in more generality. Note that all the couplings $g_{2n}$ are dimensionless (if you make them dimensionful like francois david, you have to rescale the momentum shell from ${\Lambda \over s} \rightarrow \Lambda$ after integrating out fast modes).
\begin{align}
S[\phis + \phif] & = S[ {\phis} ] + \int d^d x \frac12 ( \partial \phif )^2 + \sum_{n} {1 \over (2n)!} {d^{2n} \over d {\phis} ^{2n}} V( {\phis} ) ({\phif} )^{2n} \\
V(\phi) &\equiv \sum_n {1 \over (2n)!} \Lambda^{d-n(d-2)}g_{2n} \phi^{2n} \\
\end{align}
At {\bf one loop order} (or in the saddle point approximation) we get
\begin{align}
S = S[ {\phis} ] + \int d^d x \frac12 ( \partial \phif )^2 + \frac12 V'' \phif^2
\end{align}
This is a gaussian integral which gives:
\begin{align}
\ln \braket{e^{-\delta S}}_{\phif} = \frac12 \text{Tr} \left[ \ln( -\Delta + V'') \right]_\phif
\end{align}
Note another way to arrive at the same result is to compute the use the trace formula for the 1 loop effective action where we integrate out $\phif$:
\begin{align}
\Gamma(\phi) &= S[\phi] + \frac12 \text{Tr} \left[ \ln( -\Delta + V'') \right] \\
\rightarrow \text{ Integrate fast modes : }
\delta \Gamma(\phis) &= \frac12 \text{Tr} \left[ \ln( -\Delta + V'') \right]_{\phif}
\end{align}
$V''$ is a functional operator which is not translation invariant. In the \vocab{local potential} approximation, we assume it is just the identity operator times a number, which allows the trace to be evaluated in k-space.
\begin{align}
\underbrace{S(\Lambda-\delta \Lambda) - S(\Lambda)}_{-\delta S} &= \frac12 \int d^d x \ln(-\Delta + \underbrace{V''}_{\text{operator}})_{\phif} \\
&= \frac12 \int d^d x \int_{k \in (\Lambda - \delta \Lambda, \Lambda)} {d^d k \over (2 \pi)^d} \ln (k^2 + \underbrace{V''}_{\text{number}}) \text{ (local potential approx)} \\
& = \delta \Lambda \underbrace{C}_{\equiv {{(4 \pi)^{-{d \over 2}} \over \Gamma(d/2)}}} \Lambda^{d-1} \int d^d x \ln(\Lambda^2 + V'' ) \label{eq:2}
\end{align}
The flow of the dimensionless coupling determines how the theory "looks" at different scales, and gives the beta function.
\begin{align}
{d g_{2n} \over d \ln(\Lambda)} &= (n(d-2)-d) g_{2n} - C \Lambda^{n(d-2)} {d^{2n} \over d \phi^{2n}}|_{\phi=0} \ln (\Lambda^2 + V'') \label{eq:1}
\end{align}
\begin{example}
To obtain equation (\ref{eq:1}), expand the LHS and RHS of (\ref{eq:2}) separately
\begin{align}
\delta S &= \delta \int d^d x \frac12 (\partial \phi^2) + V(\phi, g_{2n}, \Lambda) \\
& = \int d^d x \sum_{n} {1 \over (2n)!} g_{2n}(\Lambda^{d-n(d-2)}) (d - n(d-2)) \phi^{2n} \delta \ln \Lambda \\
-\text{RHS} &= \delta S = - C \delta( \ln \Lambda) \Lambda^{d-2} \int d^d x \ln(\Lambda^2 + V'')
\end{align}
A few tricks:
\begin{itemize}
\item The RHS can be expanded as power series in $\phi$. To obtain the $\phi^{2n}$ term, we just need the 2nd derivative to the 2n.
\item Use $\ln(1 + x) \approx x - {x^2 \over 2} + {x^3 \over 3} - ...$ to expand the log.
\item Divide all the powers of $\Lambda$ out. (the other way to see this is to set $\Lambda$ arbitrarily to 1 since the beta function is only a function of the dimensionless coupling, not the cutoff)
\end{itemize}
Expand:
\begin{align}
\ln(1 + V'') \approx {V''} - {V''^2 \over 2} + ...
\end{align}
\end{example}
\subsubsection{Polchinski's ERGE}
Actually implementing this calculation is hard, but conceptually it leads to \vocab{ERG} (Exact RG), also called Polchinski's equation.
\subsection{$\phi^4$ RG}
Tree level action:
\begin{align}
S_{tree} = \int d^D x \frac12 \left( \partial \phi \right)^2 - \frac12 m^2 \phi^2 - {\lambda \over 4!} \phi^4
\end{align}
1 Loop action (with counter terms):
\begin{align}
S_{1 -loop} = \int d^D x \frac12 \left( \partial \phi \right)^2 - \frac12 \mu^{2 \epsilon} \left(m^2 + {\lambda\over (4 \pi)^2 \epsilon} \right) \phi^2 - {1\over 4!} \underbrace{ \mu^{4-D} \left(\lambda + {3 \lambda^2 \over (4 \pi)^2 } \left({1 \over \epsilon} + ... \right) \right)}_{\lambda_0} \phi^4
\end{align}
Note that we only explicitly wrote the terms that depend on the renormalization scale $\mu$ in the counterterms. $\lambda_0$ denote the \vocab{bare} (UV scale) theory that is fixed.
Requiring that the bare parameter is independent of $\mu$ implies the beta function for $\lambda$, the \vocab{renormalized} coupling (we've ignored finite non-divergent contributions):
\begin{align}
{\partial \over \partial \ln(\mu)} \lambda_0 = 0 \\
\rightarrow \beta_\lambda \equiv {\partial \over \partial \ln(\mu)} \lambda = -\epsilon \lambda + {3 \lambda^2 \over (4 \pi)^2}
\end{align}
\begin{itemize}
\item for $D = 4$, $\epsilon = 0$ and the coupling grows in the UV. Quantum correction made the coupling \vocab{marginally irrelevant}.
\item For $D < 4$, there is a competing effect in the beta function. There is a new fixed point at
$\lambda^* = {(4 \pi)^2 \over 3} \epsilon$. This is the \vocab{wilson fisher fixed point}.
(one needs to check stability of the fixed point in the $m^2, \lambda$ submanifold)
\end{itemize}
Expanding in $D = 4- \epsilon$ is also called the \vocab{$\epsilon$ expansion}
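As a quick check of the statements above, set the one-loop beta function to zero and linearize around the nontrivial zero:
\begin{align}
\beta_\lambda = -\epsilon \lambda + {3 \lambda^2 \over (4 \pi)^2} = 0
\;\Rightarrow\;
\lambda^* = {(4 \pi)^2 \over 3} \epsilon,
\qquad
{\partial \beta_\lambda \over \partial \lambda}\bigg|_{\lambda^*} = -\epsilon + {6 \lambda^* \over (4 \pi)^2} = +\epsilon,
\end{align}
so for small $\epsilon = 4 - D$ the wilson fisher fixed point sits at weak coupling and the quartic direction is attracted to it in the IR (the mass direction still has to be tuned, as noted above).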
\subsubsection{Momentum Cutoff}
The divergent integrals for the 2 point and 4 point functions at one loop look as follow:
\begin{align}
\int_0^\Lambda dp p^3 {1 \over p^2 + m^2} = \frac12 \Lambda^2 - {m^2 \over 2} \log \left( 1+{\Lambda^2 \over m^2}\right) \\
\approx {\Lambda^2 \over 2} - m^2 \log \left( {\Lambda \over m}\right)
\end{align}
\begin{align}
\int_0^{\Lambda} dp p^3 {1 \over p^2 + m^2} {1 \over (k-p)^2 + m^2} \approx \log \left({\Lambda^2 \over k^2 + m^2}\right)
\end{align}
Call the scale at which we measured $\lambda$ to be $\mu$ (you scattered particles at centre-of-mass energy $\mu$ and extracted the coupling $\lambda(\mu)$ in the lab).
Re-expressing the result obtained when doing the effective potential in terms of this measured coupling, the amplitude at a general symmetric kinematic point $s = t = u$ reads:
\begin{align}
\Gamma(p_1 = p_2 = -p_3 = -p_4) = \lambda(\mu) - {3 \over 2} {\lambda^2 \over (4 \pi)^2} \log \left({\mu^2 \over s} \right)
\end{align}
Since this relation is physical, it implies that there is a fixed relation between $\lambda$ (measured coupling) and $\mu$ (energy scale).
\begin{align}
{d \over d \ln (\mu)} \Gamma & = 0 \\
\rightarrow \beta_{\lambda} & \equiv {\partial \over \partial \ln \mu} \lambda(\mu)= {3 \over (4 \pi)^2} \lambda^2
\end{align}
This is the same beta function obtaind with dimensional regularization.
\subsection{Anatomy of signs}
We're the most interested in the sign of the beta function, and that means getting the signs right for the diagrams.
Recall the feynman rules for $\phi^4$:
\begin{itemize}
\item propagator: ${i \over k^2 - m^2}$
\item interaction: $ - i \lambda$
\end{itemize}
This can be easily seen in the path integral:
\begin{align}
\mathcal{Z} &= \int \Dphi \exp \left(i \int \mathcal{L} d^d x \right) \\
& = \int \Dphi \exp \left( {i \phi M \phi} - i V(\phi) \right)
\end{align}
Propagators come from the inverse of the quadratic operator $ i M$:
$$G \approx {i \over M} = {i \over k^2 - m^2}$$
$$ V \approx - i \lambda$$
Using those rules, the loop integral has 2 vertices and 2 propagators, giving a 1PI amplitude $\propto + \lambda^2 (\overbrace{...}^{\text{divergent}})$
This means that the 1PI vertex $\Gamma^{(4)}$ grows with $\mu$, and the interaction is marginally irrelevant.
\subsection{Nonlinear $\sigma$ model}
We derive the RG flow for the nonlinear $\sigma$ model, first the traditional way with the callan-symanzik equation, and then the wilsonian way.
\subsubsection{The Setup}
\begin{example}
Notational addendum. We write short hand
$(\partial_{\mu} \vec{n})^2$. This is understood to mean:
\[\underbrace{\sum_{\mu = 0}^{D}}_{\text{spacetime}} \underbrace{\sum_{i = 1}^{N}}_{\text{spin components}} (\partial_{\mu} n^i)^2 \]
\end{example}
It describes an $N$-component vector field $\vec{n} = n_i$ constrained to move on a hypersphere.
The constraint is $\sum_{i} n_i^2 = 1$. This is the infinite mass (frozen radial mode) limit of the O(N) symmetry breaking model.
\[ \LL = {1 \over 2 g^2} (\partial_\mu \vec{n})^2\]
\[n^i = ( \underbrace{\pi^1, \pi^2, ..., \pi^{N-1}}_{\text{Goldstone bosons}}, \sigma)\]
The spherical constraint implies
\[\sigma = \sqrt{1 - \pib^2}\]
Expanding the lagrangian gives:
\[\LL = {1 \over 2 g^2} (\partial_\mu \pib)^2 + {(\pib \cdot \partial_\mu \pib)^2 \over 2 g^2 (1 - \pib^2)} \approx {1 \over 2 g^2} (\partial_\mu \pib)^2 + {1 \over 2 g^2} (\pib \cdot \partial_\mu \pib)^2 \]
One obtains the propagator and interaction vertex:
\[ {ig^2 \over p^2} \delta_{ij} \]
\[ {-i \over g^2} \left( \sum_{\text{3 pairings}} \delta^{ij} \delta^{kl} \left(p_{1} + p_{2} \right) \left(p_3+ p_4 \right) \right) \]
\subsubsection{Traditional Way}
Callan Symanzik for n-point function order by order in perturbation theory:
\[\left(\pder{\ln(M)} + \beta(g) \pder{g} + n \gamma(g) \right) G^{(n)} (M, g, p) = 0 \]
To get $\gamma$, pick $G^{(1)} = \braket{\sigma(0)}$
\begin{align}
\braket{\sigma(0)} &= 1 - \frac12 \braket{\pi(0)^2} + ... \\
&= 1 - \frac12 \sum_{kl} \delta^{kl} \int \dkk {ig^2 \over k^2 - \underbrace{\mu^2}_{\text{IR regulator}}} \\
&=_{d \rightarrow 2} 1 - {g^2 (N-1) \over 2}{\Gamma\left(1 - {d \over 2}\right) \over (4 \pi)^{d \over 2}} \left(\mu^2\right)^{{d \over 2} - 1}
\end{align}
\textbf{Impose} renormalization at scale $M$:
\begin{align}
\braket{\sigma} = 1 - {g^2 (N-1) \over 8 \pi} \ln{M^2 \over \mu^2}
\end{align}
Plug into the callan symanzik equation to order $g^2$ (note $\beta = \underbrace{0}_{\text{classical}} + \mathcal{O}(g^3)$, so it has no effect for this diagram)
\[\gamma(g) = {g^2 (N-1) \over 4 \pi} + \MO(g^4)\]
Similarly one can compute the callan symanzik equation
for the propagator which will give $\beta$:
\begin{align}
\braket{\pi^k(p) \pi^l(-p)} ={i g^2 \delta^{kl} \over p^2} \left(\underbrace{1}_{\text{tree}} - \underbrace{{g^2 \over 4 \pi} \ln \left({M^2 \over \mu^2} \right) + \MO(g^4)}_{\text{loops}} \right)
\end{align}
Chug the callan symanzik equation again, using $\gamma$ obtained previously:
\[ \left( \pder{\ln(M)} + \beta(g) \pder{g} + 2 \gamma(g) \right) \braket{\pi^k(p) \pi^l(-p)} = 0 \]
\[= {i \delta^{kl} \over p^2} \left(- {g^4 \over 2 \pi} + 2g \beta(g) + 2 g^2 {g^2 (N-1) \over 4 \pi} \right)\]
\[\rightarrow \beta(g) = - (N-2) {g^3 \over 4 \pi} + \MO(g^5)\]
\begin{itemize}
\item The model is \textbf{free} with N=2. This is because for $N=2$ in $d=2$, this is the model of a 2-d angle field, with effective action ${1 \over 2 g^2} (\partial \theta)^2$. This is a gaussian theory and the coupling does not flow.
\item For $N>2$ in 2 dimension, the theory is \vocab{asymptotically free}.
\end{itemize}
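Integrating the one-loop beta function above makes the asymptotic freedom statement quantitative (a quick sketch, keeping only the leading term):
\begin{align}
{d g \over d \ln \mu} = -(N-2){g^3 \over 4 \pi}
\quad \Rightarrow \quad
{1 \over g^2(\mu)} = {1 \over g^2(M)} + {N-2 \over 2 \pi} \ln\left({\mu \over M}\right),
\end{align}
so the coupling shrinks in the UV and blows up at an IR scale $\mu \sim M e^{-2\pi / ((N-2) g^2(M))}$, which at large $N$ matches the dynamically generated mass found in the large N treatment below.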
\subsubsection{Wilson Style}
\emph{Kardar, Beyond Spin Waves}
\subsubsection{Large N}
\emph{Peskin, Schroeder ch. 13}
We will now analyze this theory using a new trick, the large N limit. This trick consists of:
\begin{itemize}
\item Use lagrange multipliers to enforce the constraint
\item Interpret the lagrange multiplier as a gauge field
\end{itemize}
\begin{align}
Z&= \int \D \vec{n} \prod_{x} \underbrace{\delta(n^2 - 1)}_{\text{constraint}} \exp \left[ -{i \over 2 g_0^2} \int \dd^d x (\partial_\mu \vec{n})^2 \right] \\
& = \int \D \vec{n} \D \alpha \exp \left[ -{i \over 2 g_0^2} \int \dd^d x (\partial_\mu \vec{n})^2 - {i \over 2 g_0^2} \underbrace{\alpha \int \dd^d x (n^2 -1)}_{\text{Lagrange Multiplier } \alpha(x)} \right] \\
& = \int \D \alpha \exp \left[ \underbrace{-i {N \over 2} \ln \mathrm{det} \left(-\partial^2 + i \alpha \right) +{i \over 2 g_0^2} \int \dd^d x \alpha(x)}_{S} \right]
\end{align}
In the large N limit, the integral can be evaluated via saddle point. The saddle point is determined by ${\delta \over \delta \alpha} S = 0$, which we solve for $\alpha(x)$:
\[ -{N \over 2} {1 \over -\partial^2 + i \alpha} = {1 \over 2 g_0 ^ 2} \]
\textbf{key}: because the RHS is a constant we need $\alpha(x) = \alpha$ to be constant. Because the RHS is real, we write $-i \alpha = m^2$. Furthermore, the operator is diagonal in frequency space:
\[ N \int \dkk {1 \over k^2 + m^2} = {1 \over g_0^2} \]
At this step, we start our renormalization program. Obviously the integral is UV divergent and one needs to do some matching (ahem, measurement) at scale $M$, with measured coupling $g(M)$ and fixed cut-off $\Lambda$. Evaluating at d=2:
\[{N \over 2 \pi} \ln \left({\Lambda \over m} \right) = {1 \over g_0^2}\]
\[ {1 \over g_0^2} = {1 \over g^2} + {N \over 2 \pi} \ln \left({\Lambda \over M} \right) \text{ (this defines the renormalized coupling $g$ at scale $M$)} \]
These two equations cannot both hold for arbitrary $m$, since $g_0$ and $\Lambda$ are fixed (the bare theory is the full truth!). Consistency forces the mass parameter to depend on the renormalization scale:
\[ \underbrace{m}_{\text{bare mass}} = M \exp \left[-{2 \pi \over g^2 N} \right]\]
Similarly, one can plug the bare mass into the callan symanzik equation to get the flow:
\[ \left[\pder{\ln (M)} + \beta(g) \pder{g} \right]m(M, g) = 0 \]
\[\beta(g) = -{g^3 N \over 4 \pi} \]
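Filling in that last step explicitly: with $m = M \exp\left[-{2\pi \over g^2 N}\right]$,
\begin{align}
0 = {d m \over d \ln M} = m \left[ 1 + {4 \pi \over N g^3}\, \beta(g) \right]
\quad \Rightarrow \quad
\beta(g) = -{g^3 N \over 4 \pi}.
\end{align}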
\subsection{Covariance of beta function}
\emph{Zinn Justin, section 10.11, MIT 8.324 pset 6}.
For classically scale invariant theories, the beta function only starts at second order
in the coupling. This is because only loop diagrams lead to a broken scale invariance.
\begin{align}
\beta(\lambda) = b_2 \lambda^2 + b_3 \lambda^3 + ....
\end{align}
We can show that under a smooth change of variables $\lambda' = \lambda + a_2 \lambda^2 + ...$, the 2nd and 3rd order coefficients of the beta function are universal.
In general under coupling rescaling:
\begin{align}
\beta'_j(\lambda') &= \beta_i {\partial \lambda_j' \over \partial \lambda_i} \text{ (general reparametrization)} \\
\lambda(\lambda') & \approx \lambda' - a_2 \lambda'^2 + \mathcal{O}(\lambda'^3) \\
\beta'(\lambda') &= \left(b_2 (\lambda' - a_2 \lambda'^2 + ...)^2 + b_3 (\lambda' - a_2 \lambda'^2 + ...)^3 \right) (1 + 2 a_2 \lambda' + ... ) \\
& = b_2 \lambda'^2 + \lambda'^3 (- 2 a_2 b_2 + b_3 + 2 a_2 b_2) + \mathcal{O}(\lambda'^4) \\
&= b_2 \lambda'^2 + b_3 \lambda'^3 + \mathcal{O}(\lambda'^4)
\end{align}
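A quick symbolic check of this universality statement (our own sketch, using sympy; the variable names are ours):
\begin{verbatim}
import sympy as sp

lam, lamp, a2, b2, b3 = sp.symbols('lambda lambdap a2 b2 b3')

# beta function in the original coupling, to third order
beta = b2 * lam**2 + b3 * lam**3

# reparametrization lambda' = lambda + a2 lambda^2, inverted to the order we need
lam_of_lamp = lamp - a2 * lamp**2 + 2 * a2**2 * lamp**3

# beta'(lambda') = beta(lambda) * d(lambda')/d(lambda), re-expanded in lambda'
dlamp_dlam = sp.diff(lam + a2 * lam**2, lam)
beta_prime = (beta * dlamp_dlam).subs(lam, lam_of_lamp)
print(sp.series(beta_prime, lamp, 0, 4).removeO().expand())
# -> b2*lambdap**2 + b3*lambdap**3 : the first two coefficients are unchanged
\end{verbatim}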
For a marginally relevant coupling like that of QCD, we have
\begin{align}
\beta(g) &= - b_2 g^2 - b_3 g^3 + ... \\
\rightarrow \ln \left({\mu \over \Lambda} \right) &= {1 \over b_2\, g(\mu)} + {b_3 \over b_2^2} \ln \left( g(\mu)\right) + \mathcal{O}(g) \\
\end{align}
This is an example of \vocab{dimensional transmutation}, where a new scale $\Lambda$ appears in a classically scale invariant theory.
For QED, it's the Landau pole scale where the theory blows up. For QCD, it's a low energy scale where perturbation theory breaks down in the IR.
\subsubsection{Dimensional Transmutation}
Massless $\phi^4 $ in $d=4$ is a classically scale invariant theory. When we introduce quantum fluctuations, we find that it is no longer scale invariant, because loops make the $\beta$ function non-zero. One finds there is a \vocab{Landau pole}: given some fixed lab measurement $\lambda_R$ at scale $\mu_0$, one integrates the Callan-Symanzik equation and finds
$\lambda(\mu) = {\lambda_R \over 1 - k \lambda_R \ln \left({\mu \over \mu_0}\right)}$
for a positive constant $k$, so $\lambda$ blows up at some finite $\mu^*$. The statement of dimensional transmutation is that there is a bijection $\mu^* \leftrightarrow \lambda_R$, thanks to the reversibility of the flow. The lab measurement measured a dimensionless coupling, but one can also specify the theory using the scale where the theory blows up. That's it, really. The same thing happens in QED, where I can either give you the dimensionless $\alpha = {1 \over 137}$ parameter at some specified lab energy scale, or give you the Landau pole energy, to fully specify all scattering experiment measurements.
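A minimal numerical sketch of this bijection (ours; $k$ is a generic positive one-loop coefficient, not a measured number):
\begin{verbatim}
import numpy as np

lam_R, mu0 = 0.5, 1.0          # coupling measured at a lab scale mu0 (illustrative)
k = 3.0 / (16 * np.pi**2)      # assumed one-loop coefficient for phi^4

def lam(mu):
    # solution of d lambda / d ln(mu) = k lambda^2
    return lam_R / (1.0 - k * lam_R * np.log(mu / mu0))

mu_star = mu0 * np.exp(1.0 / (k * lam_R))         # Landau pole
print("Landau pole mu* =", mu_star)
print(lam(0.99 * mu_star), lam(0.999 * mu_star))  # coupling blows up as mu -> mu*

# the map lambda_R <-> mu* is invertible: recover lambda_R from mu* alone
print("recovered lambda_R =", 1.0 / (k * np.log(mu_star / mu0)))
\end{verbatim}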
\subsection{Conformal Fixed Points}
\subsubsection{Critical behavior}
\emph{(Eduardo Fradkin, Quantum Field Theory an integrated approach)}
We saw that the coupling can run. We saw that the $\beta$ function could make a coupling asymptotically free or infrared slaved (marginally relevant or irrelevant). Now we will examine the physical consequences near a critical point where the $\beta$ function vanishes.
Near an IR fixed point at $t = t^*$, where $t$ is some coupling, the beta function can be expanded as:
$$\beta(t) = \beta' (t - t^*) \equiv \beta' \tilde{t}$$
We can integrate the Callan-Symanzik equation:
$$ {\mu_2 \over \mu_1} = \exp \left(\int_{\tilde{t}_1}^{\tilde{t}_2} {1 \over \beta' \tilde{t} } d \tilde{t} \right) = \left( {\tilde{t}_2 \over \tilde{t}_1} \right)^{1 \over \beta'}$$
Since energy scale is inversely related to distance, we have:
$$ {x_1 \over x_2} \propto \left( {\tilde{t}_2 \over \tilde{t}_1} \right)^{1 \over \beta'}$$
This shows that the critical exponent for the divergence of the correlation length with respect to a coupling (tuning parameter, mass, temperature) is set by the slope of the beta function at the critical point.
In general at a fixed point, there could be many relevant operators one has to tune to achieve criticality.
In that case, the $\beta$ function can be expanded (linearized) and diagonalized. The eigenvalues of the linearized $\beta$ matrix gives the critical exponent for each relevant tuning parameter.
One can similarly construct Callan-Symanzik equations for macroscopic observables like the Gibbs free energy.
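A toy numerical version of the multi-coupling statement (ours; the stability matrix below is made up for illustration):
\begin{verbatim}
import numpy as np

# Linearized flow near a fixed point: d t_i / d ln(mu) = B_ij t_j
# for two hypothetical tuning parameters t_1, t_2.
B = np.array([[0.5, 0.2],
              [0.1, 0.3]])

eigvals, eigvecs = np.linalg.eig(B)
print("RG eigenvalues:", eigvals)

# Each eigen-coupling scales as u_i(mu) ~ mu^{y_i}; a positive eigenvalue is a
# relevant direction that must be tuned, and the correlation-length exponent
# associated with the most relevant direction is nu = 1 / max(y_i).
print("nu =", 1.0 / eigvals.real.max())
\end{verbatim}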
\subsubsection{Conformal Field Theories}
It turns out that scale invariant fixed points are also \vocab{conformal field theories}: the theory is invariant under changes of coordinates $x \rightarrow f(x)$ that preserve angles.
Consider an observable $\MO$. Due to conformal invariance:
\[\braket{\MO(x_1) \MO(x_2)} \propto {1 \over |x_1 - x_2|^{2 \Delta_\MO}} \]
The value $\Delta_\MO$ is the \vocab{scaling dimension} of operator $\MO$.
In general a conformal field theory has
\begin{itemize}
\item the ground state is conformally invariant.
\item It has a (minimal) set of observables called \vocab{quasi-primaries} $\phi_{i}$ which transform irreducibly under conformal transformations:
\[\phi_i'(x') = | \underbrace{\pder[x']{x}}_{\text{jacobian}}|^{-{\Delta_{i}\over D}} \phi_i(x) \]
$D$ is the spacetime dimension while $\Delta_i$ are the scaling dimensions of the operators.
\item All other observables are built from the quasi-primaries and their derivatives, which are called \vocab{descendants}
\item The quasi primaries form an orthogonal set in the sense:
\begin{align*}
\braket{\phi_1(x_1) \phi_2(x_2) } =
\begin{cases} {C_{12} \over |x_2 - x_1|^{2 \Delta}} \text{ if } \Delta_1 = \Delta_2 \equiv \Delta \\
0 \text{ else}
\end{cases}
\end{align*}
One can pick a basis of $\{ \phi_i \}$ such that $C_{ij} = \delta_{ij}$.
\item Similarly one writes a formula for the 3 point function:
\[ \braket{\phi_1(x_1) \phi_2(x_2) \phi_3(x_3)} = {C_{ijk} \over |x_{12}|^{\Delta_{12}} |x_{23}|^{\Delta_{23}} |x_{13}|^{\Delta_{13}}} \]
where $x_{ij} \equiv x_i - x_j$ and $\Delta_{ij} = \Delta_{i} + \Delta_j - \Delta_{k}$.
\end{itemize}
\subsubsection{Operator Product Expansion}
Because the primaries and their descendants form a complete set, they form a closed algebra. This implies the \vocab{Operator Product Expansion}:
\begin{theorem}
The operator product expansion formula states that for $\phi_i(x)$ being a complete set:
\begin{align}
\lim_{x \rightarrow y} \phi_i(x) \phi_j(y) = \sum_{k} \tilde{C}_{ijk} {1 \over |x - y|^{\Delta_i + \Delta_j - \Delta_k}} \phi_k \left({x + y \over 2} \right)
\end{align}
\end{theorem}
Using the OPE, one can show that the $\tilde{C}_{ijk}$ are the 3 point function coefficients.
\[\lim_{y \rightarrow x} \braket{\phi_i(x) \phi_j(y) \phi_l(z)}
= \lim_{y \rightarrow x} \sum_{k} \tilde{C}_{ijk} {1 \over |x - y|^{\Delta_i + \Delta_j - \Delta_k}} \braket{\phi_k \left({x + y \over 2} \right) \phi_l(z)} \]
Given that the $\{ \phi_k \}$ form an orthogonal complete set, $\braket{\phi_k(x) \phi_l(z)} = {\delta_{kl} \over |x-z|^{2 \Delta_l}}$, so
\[\lim_{y \rightarrow x} \braket{\phi_i(x) \phi_j(y) \phi_l(z)}
= \tilde{C}_{ijl} {1 \over |x - y|^{\Delta_i + \Delta_j - \Delta_l}} {1 \over |x - z|^{2 \Delta_l}} \]
This implies that $\tilde{C}_{ijk} = C_{ijk}$, the 3 point function coefficients. In the limit where the operators coincide in space, they form a \vocab{fusion algebra} of the form:
\[ [\phi_i][\phi_j] = \sum_k C_{ijk}[\phi_k]\]
\section{Symmetries}
\subsection{Spontaneous Symmetry Breaking}
\subsubsection{Generalities}
\begin{itemize}
\item The usual picture about spontaneous symmetry breaking is \emph{classical}. You start with a lagrangian with a potential that has a set of minima at $\braket{\phi} \neq 0$:
\[\ML = {1 \over 2} (\partial \phi)^2 + \frac12 m^2 \phi^2- {\lambda \over 4!} \phi^4 \rightarrow \phi_{cl} = \pm \sqrt{6m^2 \over \lambda} \]
The correct minima needs to account for quantum fluctuations. In that case, one minimizes the quantum effective action rather than the classical action.
\item The spontaneous breaking of a \vocab{global continuous symmetry} implies the existence of massless particles called \vocab{Goldstone bosons}. The proof is quite simple.
Classically the Noether charge associated with a continuous symmetry is:
\[Q = \int d^d x \sum_m \pder[\ML]{\dot{\phi}_m} {\delta \phi_m \over \delta \alpha} \]
We identify the canonical momentum $ \pi_m = \pder[\ML]{\dot{\phi}}$ which has commutation relation
\[ [\pi_m(x), \phi_n(y)] = -i \delta^d (x-y) \delta_{mn} \]
The charge acts as a symmetry generator:
\[ [Q, \phi_n(y)] = \int d^d x \sum_m [\pi_m(x), \phi_n(y)] {\delta \phi_m \over \delta \alpha} = - i {\delta \phi_n \over \delta \alpha} \]
Furthermore since the charge is conserved:
\[ [Q, H] = 0 \]
and the spontaneously broken vacuum is charged:
\[ e^{i \alpha Q} \ket{0} \neq \ket{0} \]
This implies that there is a degenerate manifold of vacua generated by $\alpha$.
\item \emph{Alternate Proof, xi yin, weinberg}: We carry the same derivation in the path integral form
\end{itemize}
\subsubsection{Coleman Weinberg}
\subsection{Ward Identity}
\emph{(David Skinner) }
The Ward identity is the generalization of Noether's theorem in the quantum world.
Recall that Noether's theorem says that if we have a continuous symmetry parameterized by $\epsilon$ of the form $\phi \rightarrow \phi' = U(\epsilon) \phi$ that preserves the action, then we can derive a conserved current.
In the quantum world there is an additional subtlety: we need to preserve the product $\D \phi e^{- S[\phi]}$. Usually we only look for transformations that preserve $e^{-S[\phi]}$; when one cannot also preserve $\D \phi$, we have an \vocab{anomaly} (a classical symmetry that cannot be realized in the quantum theory).
The derivation is straightforward:
\[ \int \D \phi e^{- S[\phi]} = \int \D \phi' e^{- S[\phi']} \]
\[ = \int \D \phi e^{-S[\phi]} \left( 1 - \int \dd^4 x\, j^\mu(x) \partial_\mu \epsilon(x) \right) \]
\[\rightarrow \braket{\partial_\mu j^\mu(x)} = 0 \]
This can be used to show in QED that gauge symmetry implies any amplitude $\mathcal{M}^\mu \underbrace{\epsilon_\mu(p)}_{\text{external photon}}$ satisfies:
\[ p_\mu \mathcal{M}^\mu = 0 \text{ (no longitudinal polarization) } \]
This for example constrains vacuum polarization to look like
\[ \pi^{\mu \nu} \propto e^2 (g^{\mu \nu}p^2 - p^\mu p^\nu ) \]
\section{Gauge Theories}
\subsection{QED review}
\begin{align}
\sigma^{\mu} &\equiv (\mathbb{I}, \vec{\sigma}), \qquad \bar{\sigma}^{\mu} \equiv (\mathbb{I}, -\vec{\sigma}) \\
\gamma^\mu &\equiv
\left( \begin{array}{cc}
\mathbf{0}_{2 \times 2} & \sigma^\mu \\
\bar{\sigma}^\mu & \mathbf{0}_{2 \times 2}
\end{array} \right)_{4 \times 4} \\
\psi &= (\psi_L, \psi_R)_{4 \times 1} \\
\psib &\equiv \psi^\dagger \gamma^0 = (\psi_R^\dagger, \psi_L^\dagger)_{1 \times 4} \\
\D_\mu &\equiv \partial_\mu - i e A_\mu \\
\Dslash &= \gamma^\mu \D_\mu \\
\ML &= \psib \left(i \Dslash - m \mathbb{I}_{4 \times 4} \right) \psi - \frac14 F_{\mu \nu} F^{\mu \nu} \\
\{\hat{\psi}_i(x), \hat{\psi}_j^\dagger (y) \} & = \delta^3(x -y) \delta_{ij}
\end{align}
Feynman rules:
\begin{itemize}
\item The photon propagator is
\[ {-i \over p^2 + i \epsilon} \left[g_{\mu \nu} - (1 - \xi){p_\mu p_\nu \over p^2} \right] \]
Contract this with any external photon polarization $ \epsilon^\mu(p)$. Note per ward identity, $p^\mu$ will kill this (photons don't have longitudinal polarization)
\item The fermion propagator is
\[{i (\slashp + m) \over p^2 - m^2 + i \epsilon} \]
\item For any fermion loop, trace out all internal spin indices and you get a minus sign.
\end{itemize}
\subsection{Schrodinger-Pauli Equation}
Consider the equation:
\[ \left(i \gamma^\mu \partial_\mu - e \gamma^\mu A_\mu - m\right) \psi = 0 \]
We would like to obtain the Schrodinger equation from it.
Multiply by $\left(i \gamma^\mu \partial_\mu - e \gamma^\mu A_\mu - m\right)$ on the left, giving:
\begin{align}
\left(i \gamma^\mu \partial_\mu - e \gamma^\mu A_\mu - m\right) \left(i \gamma^\nu \partial_\nu - e \gamma^\nu A_\nu - m\right) \psi = 0 \\
= \left((i\partial_\mu - e A_\mu) (i\partial_\nu - e A_\nu) \gamma^\mu \gamma^\nu - m^2 \right) \psi \\
= \left( \frac14 \underbrace{ \{ i \partial_\mu - e A_\mu, i\partial_\nu - e A_\nu \} }_{2 (i\partial - eA)^2 \text{ when contracted}} \underbrace{\{\gamma^\mu, \gamma^\nu \}}_{2 g^{\mu \nu}} + \frac14 \underbrace{[ i\partial_\mu - e A_\mu, i \partial_\nu - e A_\nu]}_{- ie F_{\mu \nu}} \underbrace{[\gamma^\mu, \gamma^\nu]}_{-2i \sigma^{\mu \nu}} - m^2\right)\psi \\
\left((i \partial_\mu - e A_\mu)^2 - {e \over 2} F_{\mu \nu} \sigma^{\mu \nu} - m^2\right)\psi = 0 \\
\left\{ (\partial_\mu + i e A_\mu)^2 + m^2 - e
\begin{pmatrix}
(\mathbf{B + i E}) \cdot \vec{\sigma} & 0 \\
0 & (\mathbf{B - i E}) \cdot \vec{\sigma}
\end{pmatrix} \right\} \psi = 0
\end{align}
In short hand, the equation above is a useful identity:
\[ \cancel{\D}^2 = \D^2 + {e \over 2} F_{\mu \nu} \sigma^{\mu \nu} \]
This equation is the \vocab{Schrodinger-Pauli} equation. One can extract from it the fact that Dirac spinors carry a spin-$\frac12$ magnetic moment.
\subsection{Non-Abelian Gauge Theory}
\subsubsection{Building the theory}
\emph{Zee, section IV.5, Srednicki ch. 69}
Let's construct a non-abelian gauge theory:
\begin{itemize}
\item Promote the global symmetry of the field vector $\{\phi_i \}$ under $SU(N)$ to a local gauge invariance:
\[\phi \rightarrow U(x) \phi \]
\item The derivative operator is no longer covariant:
\[ \partial_\mu \phi \rightarrow U( \partial_\mu + U^\dagger \partial_\mu U) \phi \]
\item Introduce the covariant derivative
\[ \D_\mu \equiv \partial_\mu - i A_\mu(x) \]
where $A_\mu$ is called the \vocab{gauge field}. It can be matrix valued. To obtain the transformation of the gauge field, demand $\D_\mu'(U\phi) = U \D_\mu \phi$:
\begin{align}
(\partial_\mu - i {A_\mu}') U \phi & = U( \partial_\mu + \underbrace{U^\dagger \partial_\mu U - i U^{\dagger} {A_\mu}' U}_{\text{must equal } -i A_\mu}) \phi
\end{align}
\item Make $A_\mu$ transform to cancel the $U^\dagger \partial_\mu U$ term:
\[{A_\mu}' = U A_\mu U^\dagger - i (\partial_\mu U) U^\dagger \]
Using $\partial_\mu (U U^\dagger) = 0$
\[ \rightarrow
(\partial_\mu U) U^\dagger = - U \partial_\mu U^\dagger\]
we can write the transformation law for $A$ as:
$A_\mu \rightarrow U A_\mu U^{\dagger} + i U \partial_\mu U^{\dagger}$
\item Let's pick a basis of generators $T^a$ for the gauge group. Consider an infinitesimal gauge transformation $U = 1 + i \theta^a T^a$, and decompose $A_\mu =A_\mu^c T^c$. We can use this form to show that the gauge field transforms in the \vocab{adjoint representation} of the gauge group:
\begin{align*}
{A_\mu}' &= A_\mu + i \theta^a [T^a, A_\mu] + T^a \partial_\mu \theta^a \\
\text{recall } [T^a, T^b] &= i f^{abc} T^c \\
{A_\mu}'^c &= A_\mu^c - f^{abc} \theta^a A_\mu^b + \partial_\mu \theta^c
\end{align*}
\item One can immediately write down gauge invariant kinetic and potential terms and lagrangians:
\[ \ML = (\D_\mu \phi) (\D^\mu \phi) - V(\phi^\dagger \phi) \]
\end{itemize}
\begin{example}
The shorthand notation $\partial_\mu$ on vector valued fields implicitly means
$\mathbf{\mathbb{I}} \partial_\mu$
\end{example}
\subsubsection{Adding kinetic term}
We will use differential form notation to add a kinetic term, and show it is the curvature of the gauge field.
\begin{itemize}
\item To clean the notation, redefine $i A_\mu \rightarrow A dx^{\mu}$ to be a matrix valued 1-form (note we absorbed the i)
\item The covariant derivative is just:
$d + A$
\item There are 2 matrix valued 2 forms we can construct:
\[A \wedge A \equiv A^2 \text{ (in spacetime comps) } \rightarrow \frac12 [A_\mu, A_\nu] \dx^\mu \wedge \dx^\nu \]
\[ dA \]
$A^2$ obviously vanishes for abelian gauge groups
\item This combination is gauge covariant (transforms homogeneously):
\[ F = dA + A^2 \]
\[F \rightarrow U F U^\dagger \]
F is the \vocab{curvature tensor}
\item Another slick derivation
\[ D = d + A \]
\[D^2 = dA + A^2 \]
\item In component form and switching back the i's, we have
\[F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu - i[A_\mu, A_\nu] \]
\[F^{a}_{\mu \nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + f^{abc} A^{b}_\mu A^c_\nu \]
\item Since F transforms covariantly, to obtain a gauge invariant scalar, one traces the matrix:
\[ - {1 \over 4} \mathrm{Tr} F_{\mu \nu} F^{\mu \nu} \]
Another way to write this term elegantly is:
\[ -{1 \over 4} \mathrm{Tr} \left( ^\star F \wedge F \right) \]
This has interactions!
\[f^{abc} A^{b \mu} A^{c \nu} (\partial_\mu A^a_\nu - \partial_\nu A^a_\mu) \text{ cubic term} \]
\[(f^{abc} A^b_\mu A^c_\nu)^2 \text{ quartic term}\]
\end{itemize}
\begin{example}
Since the structure constants for SU(2) are
$f^{abc} = \epsilon^{abc}$, the field strength is just:
\[ \vec{F}_{\mu \nu} = \partial_\mu \vec{A}_{\nu} - \partial_\nu \vec{A}_{\mu} + \vec{A}_\mu \times \vec{A}_\nu \]
(Note SU(2) has 3 generators, so one can represent it in 3-d space as vectors)
\end{example}
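A quick numerical check of this cross-product form (our own sketch, with generators $T^a = \sigma^a/2$ and constant fields so the derivative terms drop out):
\begin{verbatim}
import numpy as np

# SU(2) generators T^a = sigma^a / 2, normalized so Tr(T^a T^b) = delta^{ab}/2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

rng = np.random.default_rng(0)
A_mu_vec = rng.normal(size=3)   # components A_mu^a (constants: dA terms vanish)
A_nu_vec = rng.normal(size=3)
A_mu = sum(a * t for a, t in zip(A_mu_vec, T))
A_nu = sum(a * t for a, t in zip(A_nu_vec, T))

# commutator piece of F_{mu nu}: -i [A_mu, A_nu]
comm = -1j * (A_mu @ A_nu - A_nu @ A_mu)

# project back onto the generators: F^a = 2 Tr(F T^a)
F_comp = np.array([2 * np.trace(comm @ t) for t in T]).real
print(F_comp)
print(np.cross(A_mu_vec, A_nu_vec))   # identical: f^{abc} = epsilon^{abc}
\end{verbatim}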
\begin{example}{Bianchi's Identity}
One useful identity is the following:
\[ \sum_{\text{cyclic } \mu, \nu, \lambda} [\D_\mu, [\D_\nu, \D_\lambda]] = 0 \]
This is Jacobi's identity, which holds for commutators of any associative operators.
Also note that \[ [\D_{\nu}, \D_{\rho}] = F_{\nu \rho}, \] because:
\begin{align*}
[\D_{\nu}, \D_{\rho}] \dx^\nu \wedge \dx^\rho &= (d + A) \wedge (d + A) \\
& = d A + A^2 \\
& = F
\end{align*}
It then follows that:
\[\sum_{\text{cyclic}} [\D_\mu, F_{\nu \lambda}] = 0 \]
We can compute the commutator using the Leibniz property of $\D$:
\[ [\D_\mu, F_{\nu \lambda}] \phi = (\D_\mu F_{\nu \lambda}) \phi + F_{\nu \lambda} \D_\mu \phi - F_{\nu \lambda} \D_\mu \phi \]
\[ = (\D_\mu F_{\nu \lambda}) \phi \]
In short this means that:
\[ [\D_\mu, F_{\nu \lambda}] = (\D_\mu F_{\nu \lambda}) \]
The result is the so-called \vocab{Bianchi's Identity}:
\[ \sum_{\text{cyclic}} \D_\mu F_{\nu \lambda} = 0 \]
Another way to show Bianchi's identity is to compute using forms:
\begin{align*}
\D F & \equiv d F + [A, F] \\
F &\equiv dA + A^2 \\
\rightarrow \D F & = \cancel{d^2 A} + (dA) A - A \,dA + A (dA + A^2) - (dA + A^2) A \\
&= 0
\end{align*}
\end{example}
\subsubsection{Nonlinearity}
As we have seen, the theory is self-interacting through its cubic and quartic terms. Another way to see this is as follows:
\begin{itemize}
\item The photon is a U(1) gauge field. It does not transform under U(1) gauge transformation due to its abelian nature:
$e^{i \theta(x)} A e^{-i \theta(x)} \rightarrow A$.
This means the photon is \textbf{neutral}
\item The gluons are SU(3) gauge fields. They transform non-trivially under gauge transformation:
$U(x) A U^{-1} \neq A$.
This means the gluons are \textbf{charged}
\end{itemize}
One way to think about charged operators is they create states that have non-zero charge. Here we elaborate a bit on the terminology often used:
\begin{itemize}
\item The generator of global gauge invariance (a real symmetry) is some unitary operator $e^{i \alpha \hat{Q}}$ where $\hat{Q}$ is hermitian.
\item Using Noether's theorem, we know $\braket{\hat{Q}}$ is conserved under time evolution and call it the \vocab{charge} associated with the gauge symmetry.
\[ \hat{Q} \underbrace{\ket{q}}_{\text{charged state}} = \underbrace{q}_{\text{charge}} \ket{q}\]
\item An operator $\MO$ that doesn't commute with $\hat{Q}$ (equivalently, is not gauge invariant) is called a \vocab{charged operator}: it creates charge!
\[ [\hat{Q}, \MO] \neq 0 \rightarrow \MO \ket{q} = \sum_{q' \neq q} \underbrace{C_{q'} \ket{q'}}_{\text{new charge created!}} \]
\end{itemize}
\subsubsection{Adding Matter}
To couple a gauge field to matter, replace $\partial_\mu \rightarrow \D_\mu$
\begin{example}
QED.
The free electron lagrangian is:
\[ \ML = \psib (i \pslash - m \mathbb{I}) \psi \]
where $\psi$ is a 4-component spinor and
\[ \pslash \equiv \gamma^\mu \partial_\mu \]
Replace $\pslash \rightarrow \Dslash \equiv \pslash - i e \gamma^\mu A_\mu$
and one obtains the QED lagrangian:
\[ \ML = \psib (i \Dslash - m) \psi - \frac14 F_{\mu \nu} F^{\mu \nu} \]
\end{example}
\begin{example}
QCD. In QCD, one has quarks (fermions) interacting with gluons (gauge bosons), with gauge group SU(3).
\begin{itemize}
\item The quarks have 3 \vocab{colors} corresponding to the gauge group index in the fundamental representation.
\item They further carry 6 \vocab{flavor} indices, with a global flavor symmetry.
\item One combines both into $\psi_{i J}$ where i= color and J = flavor index
\end{itemize}
The full lagrangian is then:
\[ \ML = \psib_{iI} \left( i\Dslash_{ij} - m_I \delta_{ij} \right) \psi_{jI} - \frac14 \mathrm{Tr} F_{\mu \nu} F^{\mu \nu} \]
\end{example}
\begin{example}
One often hears that the fermions transform under the \textbf{fundamental} representation while the gauge fields transform under the \textbf{adjoint} representation of the gauge group. Let's spell out what this means.
Consider some gauge group G with generators $T^a$.
The fields $\psi$ in general have Dirac indices and color indices (gauge group).
The color indices mix under a gauge transformation. The way they mix is as follows:
\[\psi (x) \rightarrow U(x) \psi = \exp \left(i \sum_k \epsilon_k T^{k} \right) \psi \]
If we write this infinitesimally and use color indices:
\[\psi_b \rightarrow (\mathbb{I} + i \epsilon_a T^{a}_{bc}) \psi_c \]
The fermion fields mix via the generators themselves: this is the fundamental representation.
The gauge fields also transform but differently:
\[F(x) \rightarrow U F U^\dagger \]
Decompose F using the generator basis.
Writing this infinitesimally, we see:
\[F_a(x) \rightarrow (\delta_{ac} - f^{abc} \epsilon_b) F_c\]
This is called the adjoint representation, since the generators of the transformation are the structure constants themselves.
For an SU(N) gauge group, the fermion field has N color components, while the gauge field has $N^2-1$ components, one for each generator.
\end{example}
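To make the last statement concrete, here is a small numerical sketch (ours) that builds the SU(2) adjoint generators from the structure constants, $(T^a_{\mathrm{adj}})_{bc} = -i f^{abc} = -i \epsilon^{abc}$, and checks that they close under the same algebra:
\begin{verbatim}
import numpy as np

# Levi-Civita symbol = structure constants of SU(2)
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# adjoint generators: (T^a_adj)_{bc} = -i f^{abc}
T_adj = [-1j * eps[a] for a in range(3)]

# check [T^a, T^b] = i f^{abc} T^c in the adjoint representation
for a in range(3):
    for b in range(3):
        comm = T_adj[a] @ T_adj[b] - T_adj[b] @ T_adj[a]
        rhs = 1j * sum(eps[a, b, c] * T_adj[c] for c in range(3))
        assert np.allclose(comm, rhs)
print("adjoint generators close under the SU(2) algebra")
\end{verbatim}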
\subsubsection{Theta term}
Note in the previous discussion we ignored another renormalizable term which is a total derivative
\begin{align}
\ML = \theta \epsilon^{\alpha \beta \mu \nu} F_{\mu \nu} F_{\alpha \beta} = 2 \theta \partial_\mu \left( \epsilon^{\mu \nu \alpha \beta} A_{\nu} F_{\alpha \beta} \right)
\end{align}
This can be more compactly written as $\theta F \wedge F$ and is a total derivative:
\[d(A \wedge F) = F \wedge F \]
It is topological in nature (doesn't depend on the metric!). Since $\epsilon$ exchanges $E \leftrightarrow B$, the theta term is just $\mathbf{E} \cdot \mathbf{B}$ for EM.
\subsubsection{Classical considerations}
From Noether's theorem, compute the conserved charge:
\[J_\mu^a = - \psib_i \gamma^\mu T^{a}_{ij} \psi_j + f^{abc} A^{b}_\nu F^c_{\mu \nu}\]
The current is \textbf{not} gauge covariant. Therefore there's no physical charge one can measure.
If you define the matter current as
\[j^{a}_\mu = - \psib_i \gamma^\mu T^{a}_{ij} \psi_j\]
This matter current is not conserved but only covariantly conserved.
\begin{theorem}
\vocab{Weinberg-Witten} theorem for spin 1: a theory with a global non-abelian symmetry under which massless spin-1 particles are charged does not admit a Lorentz-covariant conserved current.
\end{theorem}
\begin{theorem}
\vocab{Weinberg-Witten} theorem for spin 2: a theory with a conserved and Lorentz-covariant energy-momentum tensor can never have a massless particle of spin 2.
\end{theorem}
\subsection{Quantization}
\subsubsection{Faddeev-Popov}
\emph{(Kaku)}
The Faddeev-Popov strategy is to factor out the gauge orbit integration from the physical part of the path integral, by fixing the gauge with a delta function and then integrating over gauge orbits.
\[ Z = \int \D A^\mu \underbrace{\Delta_{FP}(A_\Omega)}_{\mathrm{det}[{\delta F \over \delta \Omega}]} \underbrace{\int \D \Omega}_{\text{gauge orbit integral}} \underbrace{\delta(F(A_\Omega))}_{\text{gauge constraint}} \exp \left( i S[A^\mu ]\right) \]
$\D \Omega$ is an invariant group measure:
\begin{align}
U &\approx 1 + i \theta_a T^a \rightarrow \D \Omega \propto \prod_{x, a} d \theta_a(x) \\
\D \Omega &= \D (\Omega \Omega')
\end{align}
Due to the invariance across gauge orbit, the gauge orbit integration factors out:
\[Z = \underbrace{\int \D \Omega}_{\infty} \times \int \D A^\mu \Delta_{FP}(A) \delta(F(A)) \exp(i S[A]) \]
We now just need to evaluate the Faddeev-Popov determinant. This will introduce the so called \vocab{ghosts}:
\begin{itemize}
\item Use the key identity for fermion (grassman) integration:
\[ \mathrm{det} (M) = \int \prod \dd c \dd \bar{c} \exp(-{\bar{c} M c}) \]
\[ \Delta_{FP} = \int \D c \D \bar{c} \exp \left( i \int \dd^4 x \int \dd^4 y \bar{c} (x) M(x, y) c(y) \right) \]
\item To obtain the kernel M, let's compute an example for U(1) gauge theory in a particular gauge
\[F(A) = \partial_\mu A^\mu = 0 \]
Under gauge transformation, $A \rightarrow A + \partial_\mu \theta$
and $F(A) \rightarrow F(A) + \partial^\mu \partial_\mu \theta$
\[M(x, y) = {\delta F \over \delta \Omega} = [\partial^\mu \partial_\mu]_{x, y} \]
\item For non-abelian gauge theory, one has additional color indices.
\[\Delta_{FP} = M_{x, y, a, b} \]
\item Nonabelian FP determinant can also be obtained from the gauge transformation of the field
\[A^{a}_\mu \rightarrow A^{a}_\mu + {1 \over g} \left( \partial_\mu \theta^{a} - g f^{abc} \theta^{b} A^{c}_\mu \right)\]
\[ \rightarrow M_{x, y, a, b} = {1 \over g} \left(\delta^{ab} \partial^\mu \partial_\mu - g f^{abc} \partial^\mu A^{c}_\mu \right) \delta(x-y) \]
\item This contributes an extra term in the action: the \vocab{Faddeev-Popov ghosts}
\[ \int \dd^4 x \bar{c}_a \left(\delta^{ab} \partial^2 - g f^{abc} \partial^\mu A^c_\mu \right) c_b \]
\item Finally we sum over a gaussian weighted version of the constraint
giving the gauge fixing term (1 per gauge field)
\[ {1 \over 2 \xi} \int d^4 x \sum_a (\partial_\mu A^a_\mu)^2\]
\end{itemize}
We obtain the full lagrangian below (where we couple to N fermions transforming in the fundamental representation of SU(N)):
\begin{align*}
\ML = \underbrace{-\frac14 \sum_a (F^a_{\mu \nu})^2}_{\text{gauge kinetic}} - \underbrace{{1 \over 2 \xi} \sum_a (\partial_\mu A_\mu^a)^2}_{\text{gauge constraint}} - \underbrace{\bar{c}^a(\delta^{ac} \partial^2 - g f^{abc} A^{b}_\mu \partial^\mu) c^c}_{\text{ghosts}} \\
+ \underbrace{\psib^{i} \left(\delta_{ij} i \pslash + g \gamma^{\mu} A_{\mu}^a T^{a}_{ij} - m \delta_{ij} \right) \psi_j}_{\text{fermions}}
\end{align*}
\subsubsection{BRST}
\subsubsection{Feynman Rules}
The Feynman rules are identical to QED except for a few things:
\begin{itemize}
\item There are a bunch of structure constants for the gluons and traces of gauge group generators for the fermions. Fermions sit in the fundamental while gluons sit in the adjoint.
\item There are 3-point and 4-point gluon vertices.
\item The ghosts do not factor out. The ghosts interact with the gluon via the structure constants.
\end{itemize}
\begin{itemize}
\item gluon propagator
\[\underbrace{i {-g^{\mu \nu} + (1 - \xi) {p^\mu p^\nu \over p^2} \over p^2 + i \epsilon}}_{\text{same as QED photon}} \underbrace{\delta^{ab}}_{\text{gluon type}} \]
\item Similarly, colored fermions have the exact same propagator
with just a (fundamental color) fermion index matrix on top:
\[{i \over \slashp - m + i \epsilon} \underbrace{\delta^{ij}}_{\text{fermion color index}} \]
\item The 3 point gluon interaction is a bit messy but still has some structure:
\[ g f^{abc} \left( g^{\mu \nu} (k - p)^\rho + g^{\nu \rho} (p-q)^\mu + g^{\rho \mu} (q -k)^\nu \right)\]
\item The 4 point gluon interaction has a bunch of structure factors (this is where you'll get the casimir of adjoint representation factors from)
\item Fermion gluon interaction term:
\[i g \gamma^\mu T^{a}_{ij}\]
where $a$ is the gluon type, i, j are fermion color indices.
\end{itemize}
To give a taste of a calculation, consider the vacuum polarization (gluon - fermion pair creation bubble - gluon).
\[i \mathcal{M}^{ab \mu \nu} = \underbrace{\mathrm{tr}[T^a T^b]}_{\text{color averaging}} \times \underbrace{\left(-(ig)^2 \int \dkk {i \over k^2 - m^2}{i \over (p-k)^2 - m^2} \mathrm{Tr}[\gamma^\mu (\slashk - \slashp + m) \gamma^\nu (\slashk + m)] \right)}_{\text{same as QED}} \]
The colors just introduce a factor of $T_F \delta^{ab}$ here. As in QED, there's no symmetry factor because the fermions in the loop can't be interchanged.
\subsubsection{Renormalization}
One then computes the beta function for the fermion coupling vertex in $d = 4 - \epsilon$ dimensions:
\[\beta(g_R) = - {\epsilon \over 2} g_R - {g_R^3 \over 16 \pi^2} \left[ {11 \over 3} C_A - {4 \over 3} \underbrace{n_f}_{ \text{num fermions}} T_F \right] \]
We see that the theory is \vocab{asymptotically free} for low enough fermion content. Note that the sign of the bracket only depends on $n_f {T_F \over C_A}$, which does not depend on the normalization of the generators.
For QCD, $N = 3$, $C_A = 3$, $T_F = \frac12$, so if there are fewer than 17 flavors of quarks the theory is free in the UV (there are 6 known quark flavors).
The flow of the coupling vs. energy scale gives another example of dimensional transmutation. The theory has no mass scale, but at some finite energy scale $\Lambda_{QCD}$, $g_R \approx 1$. $\Lambda_{QCD}$ can then be used to completely parametrize the theory.
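A small numerical sketch of these statements (ours; the value of $\Lambda_{QCD}$ below is an illustrative placeholder, not a fit):
\begin{verbatim}
import numpy as np

def b0(nf, CA=3.0, TF=0.5):
    # one-loop coefficient quoted above: beta(g) ~ -g^3/(16 pi^2) * b0
    return 11.0 / 3.0 * CA - 4.0 / 3.0 * nf * TF

# asymptotic freedom requires b0 > 0: for SU(3) this fails only at 17+ flavors
print([(nf, b0(nf) > 0) for nf in (6, 16, 17)])

# one-loop running written in terms of a transmuted scale Lambda:
#   alpha_s(mu) = 4 pi / (2 b0 ln(mu / Lambda))
Lambda = 0.2   # GeV, placeholder for the scale where the coupling becomes strong
def alpha_s(mu, nf=5):
    return 4 * np.pi / (2 * b0(nf) * np.log(mu / Lambda))

for mu in (1.0, 10.0, 91.2):   # GeV
    print(mu, alpha_s(mu))     # the coupling shrinks logarithmically in the UV
\end{verbatim}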
\subsubsection{Banks Zaks Fixed Point}
\subsection{Electroweak unification}
We observe the following process of neutron decay:
\[n \rightarrow e^- + p + \underbrace{\bar{\nu}_e}_{\text{antineutrino}} \]
Basically, there must be some form of interaction of 4 fermions, which is modelled by Fermi's 4-fermi coupling:
\[g \psib \psi \psib \psi \equiv {g_F \over \Lambda^2} (\psib \psi)^2 \]
Obviously, this theory is non-renormalizable since the coupling has negative mass dimension (mass dimension $-2$).
Weinberg-Salam's model called \vocab{Electro-weak} theory provides the UV completion of this model, while unifying both electromagnetism and weak interactions under the gauge group $SU(2) \otimes U(1)$.
We will describe the theory:
\begin{itemize}
\item The theory is described by a 2 component complex scalar field called the \vocab{Higgs} "H", a doublet under a gauged $SU(2) \otimes U(1)$, coupled to the W and B bosons, which are respectively the $SU(2)$ and $U(1)$ gauge fields.
\[ \ML = - \frac14 (W_{\mu \nu})^2 - \frac14 (B_{\mu \nu})^2 + (\D_\mu H)^\dagger (\D^\mu H) + m^2 H^\dagger H - \lambda (H^\dagger H)^2 + \underbrace{\sum_{i} \psib_i (\Dslash_{ij}) \psi_j}_{\text{fermion sector}}\]
Where \[ \D_\mu H = \left(\partial_\mu - i g W_{\mu}^a \underbrace{\tau^a}_{\text{SU(2) generator}} - i {g' \over 2} B_\mu \right) H \]
\item Upon spontaneous symmetry breaking via the Mexican hat potential, the Higgs acquires a vacuum expectation value. Rotate your gauge phase along the direction of that breaking: this is called the \vocab{Unitary Gauge}. A generic direction will leave 1 massless direction and gap the other 3 directions. The massless direction is the photon: it is a linear combination of the $B$ and the $W$.
\item Without loss of generality, align your W direction so that $W^1$ and $W^2$ are gapped cleanly. In math, the term $(\D_\mu H)^2$ will have 3 massive modes:
\[g^2 {v^2 \over 2} \left[ \left(W_\mu^{1} \right)^2 + \left(W_\mu^{2} \right)^2+ \left(\underbrace{{g' \over g} B_\mu - W_\mu^3}_{\propto Z_\mu } \right)^2 \right] \]
\item Identify the last term as the massive Z boson.
The mode orthogonal to the last term is the photon and massless:
\[A^\mu = \left({g' \over g} B_\mu - W_\mu^3 \right)\perp \]
\[ = \sin (\theta_W) W_\mu^3 + \cos(\theta_W) B_\mu \]
with $\theta_W \equiv \arctan({g' \over g})$
\item The electromagnetic coupling strength is identified to be $e = g \sin(\theta_W) = g' \cos(\theta_W)$.
Furthermore, from this model the masses of the W and the Z are related (and the Z is heavier):
$m_W = {g v \over 2}$, $ m_Z = {m_W \over \cos(\theta_W)}$.
\item We started with 4 couplings: $m, \lambda, g, g'$. In the low energy theory, we trade them for 4 different values:
$e, \theta_W, m_h, m_W$. Numerically, $\sin(\theta_W)^2 = 0.223$, $g = {e \over \sin(\theta_W)} = 0.64$, $g' = {e \over \cos(\theta_W)} = 0.34$, $e = 0.303$ (see the consistency check after this list).
\end{itemize}
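A short consistency check of these numbers (our sketch; the W mass below is an assumed input used only to illustrate the mass relation):
\begin{verbatim}
import numpy as np

sin2_thetaW = 0.223          # weak mixing angle as quoted above
e = 0.303                    # electromagnetic coupling

sW, cW = np.sqrt(sin2_thetaW), np.sqrt(1 - sin2_thetaW)
g, gp = e / sW, e / cW       # SU(2) and U(1) couplings
print(g, gp)                 # ~0.64 and ~0.34, matching the values in the text

m_W = 80.4                   # GeV, assumed input
print(m_W / cW)              # tree-level m_Z = m_W / cos(theta_W) ~ 91 GeV (heavier)
\end{verbatim}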
One interesting thing is that we can use the broken gauge theory to compute perturbation theory and see unitarity violated in $W^+ Z \rightarrow W^+ Z$ scattering. This means the Higgs must enter the effective theory to fix it, giving a bound on the Higgs mass called the \vocab{Lee-Quigg-Thacker bound}: $m_h \lesssim \sqrt{16 \pi \over 3}\, v \approx \mathrm{1\ TeV}$
Let's now describe the fermion sector:
\begin{itemize}
\item The fermion sector is chiral (couples differently to left and right handed) and maximally parity violating:
$SU(2)$ gauge bosons only couple to left-handed fermions.
\item Denote 3 \vocab{generations} of left handed SU(2) doublets of leptons and quarks:
\[L^i = \left( \begin{pmatrix} \nu_{eL} \\ e_L \end{pmatrix}, \begin{pmatrix} \nu_{\mu L} \\ \mu_L \end{pmatrix}, \begin{pmatrix} \nu_{\tau L} \\ \tau_L \end{pmatrix} \right)\]
\[Q^i = \left( \begin{pmatrix} u_L \\ d_L \end{pmatrix}, \begin{pmatrix} c_L \\ s_L \end{pmatrix}, \begin{pmatrix} t_L \\ b_L \end{pmatrix} \right) \]
\item Denote the remaining right handed fermions, which are SU(2) singlets:
\[e^{i}_R = (e_R, \mu_R, \tau_R)\]
\[u^i_R = (u_R, c_R, t_R) \]
\[\nu^i_R = (\nu_{eR}, \nu_{\mu R}, \nu_{\tau R}) \]
\[d^{i}_R = (d_R, s_R, b_R) \]
\item The lagrangian consists of a bunch of terms
\begin{align*}
\ML &= i \bar{L}_i (\pslash - i g \slashW^a \tau^a - i g' Y_L \slashB) L_i + i \bar{Q}_i (\pslash - ig \slashW^a \tau^a- ig' Y_Q \slashB) Q_i + \\
& ...
\end{align*}
\item The values of the hypercharge couplings are nice rational numbers, with very interesting cancellations required by anomaly cancellation.
\item Given those hypercharges, the Higgs Yukawa couplings make the flavor and mass bases not identical after symmetry breaking. The mixing effect is encoded in the \vocab{Cabibbo-Kobayashi-Maskawa (CKM) matrix}
\item We can also extract CP violation from the phase of the CKM matrix. The weak interaction violates CP by a measured amount.
\item Similarly, the strong interaction can also violate CP with a topological term, the \vocab{theta} term:
\[{i \theta} \int {g^2 \over 32 \pi^2} \epsilon^{\mu \nu \alpha \beta} F_{\mu \nu}^a F_{\alpha \beta}^a \]
or in short hand notation $ \propto F \wedge F$. The fact that the weak sector has such a large CP violating phase, while no theta term is measured (the neutron has no measurable electric dipole moment), is called the \vocab{strong CP problem}
\end{itemize}
\section{Non-perturbative effects}
\subsection{Phases of Gauge Theories}
\begin{itemize}
\item Lattice gauge theory attempts to discretize quantum gauge theories. Consider a 4-d hypercubic lattice with lattice points $x$ and link $(x, x+ \hat{n})$. On each link lives a unitary matrix U such that:
\[U(x, x+ \hat{n}) = U^{\dagger} (x+ \hat{n}, x) = e^{i \sum_a t^a A_a} \]
The non-abelian lattice gauge action is:
\[ S = \sum_{p}-{1 \over 2g^2} \mathrm{tr}(U_p) \]
where p denotes plaquettes and $U_p$ is the wilson loop around a plaquette (bounded by 2 unit vectors $\hat{n}, \hat{m}$)
\[U_p (\hat{n}, \hat{m}) = U(x, x+\hat{n}) U(x+\hat{n}, x+\hat{n}+\hat{m}) U(x+ \hat{m}, x+\hat{m} + \hat{n})^\dagger U(x, x+ \hat{m})^\dagger \]
Using the BCH formula, one can show this reproduces the usual $F_{\mu \nu}^2$ action in the continuum.
\begin{align}
S &=_{\text{use BCH}} {-1 \over 2 g^2} \sum_p \mathrm{tr} \exp(i a^2 F_{\mu \nu}) \\
&= -{1 \over 2 g^2} \sum_p \mathrm{tr}\left(1 + i a^2 F_{\mu \nu} - {a^4 \over 2} F_{\mu \nu} F^{\mu \nu} + ...\right) \\
&\propto \int d^4 x\, \mathrm{tr}\, F_{\mu \nu} F^{\mu \nu} + \text{const}
\end{align}
\item One key step in computing the path integral is to define the measure on which to integrate the $U's$. This is the \vocab{Haar Measure} of the group. It follows from a few key requirements:
\[ \int_{SU(N)} dU = 1 \]
\[ d(U' U) = dU \text{ with } U' \in SU(N) \]
In particular we will only need a few key formulae:
\[ \int U_{ij} dU = 0 \]
\[ \int dU U_{ij} {U^{\dagger}_{kl}}= {1 \over N} \delta_{il} \delta_{jk} \]
\[\int dU\, U_{i_1 j_1} U_{i_2 j_2} = {1 \over 2} \epsilon_{i_1 i_2} \epsilon_{j_1 j_2} \quad \text{(for } SU(2)\text{)} \]
\item The partition function sum
\[ Z(g) = \sum_{U} e^{-S(U)} = \sum_U e^{{1 \over 2 g^2} \sum_p \mathrm{tr}U_p} \]
is strongly reminiscent of the statistical mechanics partition function with inverse temperature $\beta = {1 \over 2 g^2}$. In particular, in the high temperature limit, $\beta \rightarrow 0$ and $g \rightarrow \infty$: we have a strong coupling expansion with:
\[ Z(g) = \sum_U \prod_p \left(1 + \beta\, \mathrm{tr} U_p + ... \right) \]
\item Consider computing the expectation of the wilson loop over some spacetime codimension-2 surface bounded by a curve C:
\[ \braket{W(C)} \equiv \braket{\mathrm{tr} \underbrace{ \mathcal{P}}_{\text{path ordered}} \exp(i \oint_C A_\mu dx^\mu) } \]
In the lattice gauge formulation it is:
\[ \braket{W(C)} = {1 \over Z}\sum_{U}\left[ \underbrace{\mathrm{tr} \left(\prod_{C} U_{ij} \right)}_{\text{wilson loop}} \exp(- S) \right] \]
\textbf{Key observation}: For every link $U_{ij} \in C$, for the trace to not vanish it needs to pair with a $U^\dagger_{ij}$ in $e^{-S}$. Furthermore, $e^{-S}$ decomposes in the strong coupling expansion as:
\[e^{-S} = \prod_{p} e^{- \beta \mathrm{tr}U_p} \approx_{\text{strong coupling}} \prod_{p} (1 - \beta \mathrm{tr} U_p) \]
Therefore in the strong coupling approximation, only the plaquettes that "tile" the curve C contribute to the sum. The sum of course includes all such tiling surfaces:
\[ \braket{W(C)} \propto \sum_{\partial \Sigma = C}(...)^{A(\Sigma)} \]
The leading term comes from the minimal surface. Picking C to be an R by T rectangle, you get the potential $V(R)$ between a quark-anti-quark pair:
\[ \braket{W(C)} \propto (...)^{A} = \exp(- V(R)T) \]
or the so called \vocab{area law} for confined interactions. Note that in this calculation we didn't use anything about the gauge group being non-abelian. Yet U(1) QED is deconfined. This is because compact U(1) has a phase transition between strong and weak coupling, so the strong-coupling (confining) result is not analytically continued to the weak-coupling Coulomb phase.
\item Another example of a gauge theory is $\mathbb{Z}_2$ gauge theory. Here the link variables are just $\tau = \pm 1$ which are elements of the abelian 2 element group and the action is:
\[ S = - {1 \over 2 g^2} \sum_p \mathrm{tr} U_p = - {1 \over 2 g^2} \sum_p \mathrm{tr} \underbrace{\tau_1 \tau_2 \tau_3 \tau_4}_{\text{link 1-4 of plaquette p}} \]
We can carry the strong coupling approximation similar to above:
\[ \braket{W(C)} = {1 \over Z} \sum_{\{\tau\}} W(C) \exp (-S) \propto \sum_{\{\tau\}} W(C) \prod_{p} \left(1 + \tau_1 \tau_2 \tau_3 \tau_4 \tanh \beta \right) \]
Here once again you need to pair the $\tau$'s in the W(C) computation with the $\tau$'s in the exp, giving the strong coupling tiling and the area law for confinement.
\item Similarly, one can consider the \vocab{weak-coupling limit} $g\rightarrow 0$ or the \emph{low temperature} expansion $\beta \rightarrow \infty$.
\item A practical way to simulate lattice gauge theories (and equilibrium statistical mechanics) is the \vocab{metropolis algorithm}. Basically, we would like to compute ensemble averages by generating a bunch of samples from the equilibrium distribution using a Markov chain (a minimal sketch is given after this list).
Given a current configuration $\Sigma$ and a candidate next configuration $\Sigma'$ (flip one of the link variables, for example), we choose to update to $\Sigma'$ with the following probabilities:
\[ P(\Sigma \rightarrow \Sigma') = 1 \text{ if } \Delta S \equiv S(\Sigma') - S(\Sigma) < 0 \]
\[P(\Sigma \rightarrow \Sigma') = e^{-\Delta S} \text{ if } \Delta S > 0 \]
The reason why this algorithm equilibrates is that the Boltzmann distribution $P(\Sigma_{eq}) \propto e^{- S(\Sigma)}$ is a right eigenvector of the Markov transition matrix:
\[M_{ij} \equiv P(\Sigma_j \rightarrow \Sigma_i) \]
\[ M P(\Sigma_{eq}) = P(\Sigma_{eq}) \]
The long time evolution is dominated by the eigenvector with the largest eigenvalue, which is this equilibrium distribution.
\end{itemize}
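Below is a minimal, self-contained sketch of the Metropolis update for $\mathbb{Z}_2$ lattice gauge theory (ours; a 2-d lattice with illustrative couplings), measuring the average plaquette:
\begin{verbatim}
import numpy as np

L, beta, n_sweeps = 16, 0.3, 200   # lattice size, beta = 1/(2 g^2), sweeps
rng = np.random.default_rng(1)
# link[mu, x, y] = +-1 lives on the link leaving site (x, y) in direction mu
link = rng.choice([-1, 1], size=(2, L, L))

def plaquette(x, y):
    # U_p = U_x(x,y) U_y(x+1,y) U_x(x,y+1) U_y(x,y)
    return (link[0, x, y] * link[1, (x + 1) % L, y]
            * link[0, x, (y + 1) % L] * link[1, x, y])

def delta_S(mu, x, y):
    # S = -beta sum_p U_p; flipping a link flips the two plaquettes containing it
    if mu == 0:
        ps = plaquette(x, y) + plaquette(x, (y - 1) % L)
    else:
        ps = plaquette(x, y) + plaquette((x - 1) % L, y)
    return 2 * beta * ps

for _ in range(n_sweeps):
    for _ in range(2 * L * L):
        mu, x, y = rng.integers(2), rng.integers(L), rng.integers(L)
        dS = delta_S(mu, x, y)
        if dS < 0 or rng.random() < np.exp(-dS):
            link[mu, x, y] *= -1   # accept the flip

avg = np.mean([plaquette(x, y) for x in range(L) for y in range(L)])
print("average plaquette:", avg)  # close to tanh(beta) at strong coupling
\end{verbatim}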
\subsection{Global Field Configuration}
\subsubsection{Instanton Method in Quantum mechanics}
The instanton subject comes very naturally from a study of saddle point integration. Essentially, in the path integral formulation we are interested in evaluating an integral of the form:
\[ Z = \int dx\, g(x) e^{- {1 \over \hbar} f(x)} \approx \sum_{x_0} \sqrt{2 \pi \hbar \over f''(x_0)} g(x_0) \exp({- 1 \over \hbar} f(x_0)) \]
where $x_0$ are the saddles in the complex plane satisfying $f'(x_0) = 0$.
So far we have only studied expanding this integral around the ground state. We will now study accounting for all the saddles, which are finite action ($f(x_0)$ finite) solutions to the euclidean equations of motion.
To do this we first study the problem of a mexican hat potential in single particle quantum mechanics:
\[H = \frac12 {p^2 \over m} + {\lambda} (x^2 - a^2)^2 \]
This potential has 2 almost degenerate ground states. In the limit $a \rightarrow \infty$, each ground state is just a harmonic oscillator centered at $\pm a$, with ground state energy ${\hbar \omega \over 2}$. The presence of instanton solutions shows that those 2 ground states split.
First note that the partition function can be used to compute the ground state energy:
\begin{align}
Z(T) &= \mathrm{Tr}\, e^{-{H T \over \hbar}} \\
& \equiv \sum_{n} \braket{n | e^{-{HT \over \hbar}} | n} \\
&= \sum_{n} e^{-{E_n T \over \hbar}} \\
\rightarrow E_0 &= \lim_{T \rightarrow \infty} - {\hbar \over T} \ln(Z)
\end{align}
To compute Z we use the saddle point expansion trick.
The saddles with finite euclidean action satisfy the euclidean equation of motion in an inverted potential $V(x) = - \lambda(x^2 - a^2)^2$ (below we set $\lambda = 1$).
The solution can be found with energy conservation:
\begin{align}
\frac12 \dot{x}^2 + V(x) &= 0 \rightarrow \int {dx \over \sqrt{-2 V(x)}} = \int d\tau \\
\int {dx \over \sqrt{2} (a^2 - x^2)} &= \tau_f - \tau_i \\
x(\tau) & = a \tanh\left( \sqrt{2}\, a (\tau - \tau_0) \right)
\end{align}
The explicit form of the solution doesn't matter much. What matters is that there are 2 such solutions, one going from $-a$ to $a$ and one going backward: the \vocab{instanton} and \vocab{anti-instanton}. In the \vocab{dilute gas} limit, one can just superpose such solutions without worrying about instanton-anti-instanton interactions. Furthermore, there is one such solution for each center location, and one has to integrate over those locations.
The partition function is the sum over all instanton combos with periodic boundary conditions:
\[Z = \sum_{n = \text{even}} \int_{-{T \over 2}}^{T \over 2} d \tau_1
\int_{-{T \over 2}}^{\tau_1} d \tau_2
\int_{-{T \over 2}}^{\tau_2} d \tau_3 ... d \tau_n \underbrace{C}_{\text{Normalization}} \underbrace{K^n}_{K \equiv \sqrt{1 \over \mathrm{det} S''}} e^{- {nS_0 \over \hbar}} \]
\[Z = \sum_{n \text{ even}} {1 \over n!} C \left(KT e^{-S_0 \over \hbar}\right)^n = {C \over 2} \left(\exp(KT e^{-S_0 \over \hbar} ) + \exp(-KT e^{-S_0 \over \hbar} ) \right) \]
Taking the normalization $C$ to carry the harmonic oscillator factor $e^{-\omega T / 2}$ and using the nonrelativistic QM estimate $K \approx {\omega \over 2}$, this gives:
\[E_0 = \lim_{T \rightarrow \infty} {-\hbar \over T} \ln(Z) = {\hbar \omega \over 2} \left( 1 - \exp(-{S_0 \over \hbar}) \right) \]
The quantity $\exp(- {S_0 \over \hbar})$ is \vocab{non-perturbative}: all of its derivatives vanish as $\hbar \rightarrow 0$, so you will not be able to obtain this contribution from perturbation theory.
There are of course more examples of potentials that have instanton solutions. One example is the \vocab{Sine-Gordon} potential:
$V(x) = (1- \cos(x)) $. This potential has many degenerate classical vacua labelled by $n$, which sit at $x = 2 \pi n$. The instanton sum for a transition from $k \rightarrow l$ is:
\[ Z = \braket{k | U | l} = \sum_{n, n'} \delta_{n - n',\, k-l}\, e^{- (n+n')S_0 \over \hbar} C K^{n+n'} \]
where $n$ labels instantons and $n'$ anti-instantons respectively. Unlike the $x^4$ anharmonic oscillator, the ground state energy splits into a continuous band due to the infinite number of vacua.
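One can see the non-perturbative splitting of the double well directly by diagonalizing the Hamiltonian on a grid (our sketch; illustrative parameters, units $\hbar = m = 1$):
\begin{verbatim}
import numpy as np

# H = p^2/2 + lam (x^2 - a^2)^2, finite-difference discretization
lam, a = 1.0, 1.5
N, xmax = 2000, 5.0
x = np.linspace(-xmax, xmax, N)
dx = x[1] - x[0]

V = lam * (x**2 - a**2) ** 2
main = 1.0 / dx**2 + V                   # diagonal of -(1/2) d^2/dx^2 + V
off = -0.5 / dx**2 * np.ones(N - 1)      # off-diagonal of the Laplacian
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]
print("lowest levels:", E)
print("splitting E1 - E0 =", E[1] - E[0])
# The two lowest levels are nearly degenerate: their splitting is exponentially
# small compared to the harmonic spacing omega = sqrt(8 lam a^2), in line with
# the instanton estimate ~ exp(-S_0 / hbar).
\end{verbatim}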
\subsection{Tunneling in Quantum Mechanics}
\emph{Semiclassical vacua (coleman 1980)}
Besides allowing a calculation of grounds state energy, instanton solutions can be used to calculate tunneling amplitudes.
Consider a classical particle in a potential V(x) with 2 minima at location $x_{\pm}$ where $x_-$ is the true global minimum while $x_+$ is a local minimum with $V(x_+) \geq V(x_-)$. Without loss of generality one can shift $V(x_+) = 0$. Classically, a particle that starts at $x_+$ with 0 velocity could stay stuck there. Quantum mechanically, it can tunnel to the global minimum.
In the semiclassical limit, the tunneling amplitude $\Gamma$ is:
\[ \Gamma = A e^{-B \over \hbar} \left(1 + \mathcal{O} (\hbar) \right) \]
The particle emerges at the other side of the barrier where $V(x)$ crosses 0, at $x^*$, with 0 velocity, and continues to classically move towards the global minimum.
Suppose one wants to compute A and B. In the WKB approximation, for 1 dimensional motion:
\begin{align}
B = \int_{x_+}^{x^*} \sqrt{2 V(x)} dx
\end{align}
Suppose the particle is moving in $N$ dimensions, the lagrangian would be:
\begin{align}
\mathcal{L} = \frac12 \dot{\mathbf{q}}^2 - V(\mathbf{q})
\end{align}
In this case, the intersection of the potential $V(\vec{q}) = 0$ forms a hyper surface $ \Sigma$. The particle emerges at the point $\sigma \in \Sigma$ such that the value $\int_{x_+}^{\sigma} \sqrt{2 V(\vec{q})} ds$ is minimized. The value of B is computed via the variational problem:
\begin{align}
B = \int_{\gamma} \sqrt{2 V(\mathbf{q})} ds \\
\delta \int_{\gamma} \sqrt{2 V} ds = \delta \int_{\gamma} \mathbf{p} \cdot \mathbf{dq} = 0
\end{align}
The trajectory $\gamma$ to this variational problem is the classical trajectory of a particle moving in an inverted potential:
\begin{align}
{d^2 \over dt^2} \bf{q} - \partial_{\bf{q}} V(\bf{q}) = 0
\end{align}
Intuitively, in the path integral language, these paths are the saddle point contributions to the integrals. In general, you must sum over all saddle contributions.
Computing the tunneling amplitude has been reduced to an exercise in classical mechanics! In particular, one is interested in finite action solutions to the classical equations of motion with boundary condition:
\begin{align}
{d \over dt} \bf{q}\Big|_{\bf{q} = \bf{q_+}} &= 0 \text{ (the particle is at rest when it emerges out of the barrier) }
\end{align}
Such a solution looks like a classical "bounce" off the inverted potential and is an instanton solution. This underlies statements like "instantons mediate tunneling from a metastable vacuum to the true vacuum."
\subsection{Tunneling in Quantum Field Theory}
In Quantum field theory, the tunneling amplitude proceeds similarly with minor modifications. Consider a classical field $\phi(x)$ with lagrangian:
\begin{align}
\mathcal{L} = \frac12 \partial_\mu \phi \partial^\mu \phi - V(\phi)
\end{align}
where $V(\phi)$ has 2 local minimum, $V(\phi_{\pm}) = 0, -\epsilon$.
We replace $\bf{q} \rightarrow \phi(x)$, and one is interested in finite action solutions to the euclidean equations of motion ($\tau \equiv it$):
\begin{align}
\partial_\tau^2 \phi + \nabla^2 \phi - {\partial V \over \partial \phi} = 0
\end{align}
with boundary condition $\partial_\tau \phi |_{\phi_{+}} = 0$. This involves solving a partial differential equation; however, one can exploit symmetries to simplify. In particular, it makes sense that the stationary $\phi(x)$ is O(4) symmetric, so that one can write it as $\phi(\rho)$ with $\rho \equiv \sqrt{\tau^2 + |\bf{x}|^2}$. Furthermore, since the motion is translation invariant, one can always shift the solution so that the particle starts at $\rho = 0$.
Using the euclidean radial coordinate, the equations of motion is just an ordinary differential equation:
\begin{align}
{d^2 \over d \rho^2} \phi(\rho) + {3 \over \rho} {d \over d \rho} \phi &= V'(\phi) \\
{d \over d \rho} \phi|_{\rho = 0} & = 0 \text{ (boundary condition)} \\
\end{align}
This equation looks like 1-D motion of a classical particle with time dependent damping ${3 \over \rho}$.
Coleman makes beautiful observations about the existence of the bounce:
\begin{itemize}
\item If the particle starts at $\phi = \phi^*$ it will not reach $\phi = \phi_+$ since $V(\phi^*) = V(\phi_+) = 0$ and the damping term will cause energy loss.
\item If the particle starts close to (but not exactly) $\phi = \phi^-$ it will overshoot the point $\phi_+$ because it can stay an exponentially long time before rolling towards $\phi_+$ by which time the damping force is gone and the frictionless motion ensures it overshoots.
\item Therefore, there is some starting point $\phi(0) \in [\phi^*, \phi_-]$ that gives the bounce solution, with ${d \over d \rho} \phi|_{\rho = 0} = 0$ and $\phi(\rho \rightarrow \infty) = \phi_+$.
\end{itemize}
Furthermore, one can solve this problem exactly in the \vocab{thin wall limit} where $\epsilon = V(\phi_+)- V(\phi_-)$ is small. In that limit, the solution that ends at rest at $\phi_+$ stays a long time close to $\phi_-$ before quickly moving to $\phi_+$. This quick transition forms a thin domain wall, hence the "thin wall". Inside the wall the field sits at $\phi_-$, contributing action density $-\epsilon$; outside it sits at $\phi_+$, contributing nothing. Parametrize such a solution by $R_0$, the value of $\rho$ at which the transition from $\phi_- \rightarrow \phi_+$ happens. The total action of this instanton is:
\begin{align}
S & = \int_{\rho < R_0} \mathcal{L}_E (\phi) + \int_{S_d} \underbrace{\sigma}_{\text{surface tension of domain wall}} \\
S &= -{\frac12 \pi^2 R_0^4 \epsilon} + 2 \pi^2 R_0^3 \sigma
\end{align}
There exists a critical radius $R_0^*$ that extremizes this action:
\begin{align}
{d \over d R_0} S(R_0) = 0 \rightarrow R_0^* = {3 \sigma \over \epsilon} \\
S^* \equiv S(R_0^*) = {27 \pi^2 \sigma^4 \over 2 \epsilon^3}
\end{align}
\end{align}
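These two lines follow from elementary calculus; a quick sympy check (ours):
\begin{verbatim}
import sympy as sp

R, sigma, eps = sp.symbols('R sigma epsilon', positive=True)

S = -sp.pi**2 * R**4 * eps / 2 + 2 * sp.pi**2 * R**3 * sigma   # thin-wall action
print(sp.solve(sp.Eq(sp.diff(S, R), 0), R))       # [0, 3*sigma/epsilon]
print(sp.simplify(S.subs(R, 3 * sigma / eps)))    # 27*pi**2*sigma**4/(2*epsilon**3)
\end{verbatim}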
The instanton solution is then:
\begin{align}
\phi(\rho) = \phi_- \theta(R_0^* - \rho) + \phi_+ \theta(\rho - R_0^*)
\end{align}
Let's now discuss what happens after the bubble of real vacuum has formed at time t = 0. Because the instanton solution solves the euclidean equation of motion, we just need to analytically continue it to obtain the time evolution for $t > 0$:
\begin{align}
\tau \rightarrow it \\
\phi(t) = \phi(\sqrt{|\mathbf{x}|^2 - t^2}) \\
\end{align}
The domain wall at $\sqrt{|\mathbf{x}|^2 - t^2} = R_0^*$ then traces a hyperboloid through spacetime. Because we expect $R_0^*$ to be a microphysical quantity on the order of a few fermi, the bubble expands almost instantaneously at the speed of light.
\subsection{Yang Mills Instanton}
Discovering Yang-Mills instantons comes from asking a question: what are the classical vacua of non-abelian gauge theory?
In general, $A_\mu = 0$ is one vacuum. However, any \vocab{pure gauge} configuration obtained from it by a gauge transformation is also a vacuum:
\[A_\mu = i U \partial_\mu U^{-1} \]
Suppose 2 gauge field configurations that are pure gauge are not deformable into one another: then there is an energy barrier to tunnel from one to the other! (one has to physically break pure gauge into some field strength configuration to go from one to another).
Consider a time independent pure gauge field configuration (with $U \rightarrow 1$ at spatial infinity). This is a map from $S_{d-1} \rightarrow SU(N)$. For $d = 4$ there's a math theorem that says such maps are labelled by integers, i.e. the \vocab{homotopy group} is $\pi_3(SU(N)) = \mathbb{Z}$
\begin{theorem}
All maps $S^3 \rightarrow G$ where G is a simple non-abelian group, can be deformed to $S^3 \rightarrow SU(2)$ or $S^3 \rightarrow S^3$.
\end{theorem}
This implies that pure gauge configurations are labelled by $n$. \vocab{Yang-Mills instantons} are finite action configurations that mediate the tunnelling between 2 different pure gauge configurations, at different times, with different $n$.
One property of these instanton solutions is that they are \vocab{self dual}:
$F_{\mu \nu} = \tilde{F}_{\mu \nu}$. To show this, one first shows that the action of a gauge field configuration is bounded below by a topological term, with the bound saturated by self dual solutions.
\begin{align}
\mathrm{Tr} (F_{\mu \nu} - \tilde{F}_{\mu \nu} )^2 & \geq 0 \\
\epsilon_{\mu \nu \alpha \beta} \epsilon_{\mu \nu \sigma \rho} &= 2 \left( \delta_{\alpha \sigma} \delta_{\beta \rho} - \delta_{\alpha \rho } \delta_{\beta \sigma} \right) \\
\rightarrow \mathrm{Tr} F_{\mu \nu}^2 & \geq \mathrm{Tr} F_{\mu \nu} \tilde{F}^{\mu \nu}
\end{align}
This shows that a self dual solution where $F = \tilde{F}$ necessarily minimizes the classical action. Furthermore, the integral of $F \wedge F$ is a topological invariant. The easiest way to see this is to use form notation, recall:
\begin{align}
\underbrace{J_{CS}}_{\text{Chern Simons Form}} &\equiv \mathrm{Tr}AdA + \frac23 \mathrm{Tr}A^3 \\
d J_{CS} &= \mathrm{Tr} d(A dA + \frac23 A^3) = \mathrm{Tr}dA \wedge dA + 2 \mathrm{Tr} A^2dA \\
F \wedge F & = (dA + A^2) \wedge (dA + A^2) = \mathrm{Tr} dA \wedge dA + 2 \mathrm{Tr} A ^2 dA + \underbrace{\mathrm{Tr} A^4}_{= 0} \\
\rightarrow F \wedge F & = d J_{CS}
\end{align}
Since the RHS is a differential form, its integral over a manifold is a topological invariant (doesn't depend on smooth deformation of the metric).
We can also show this explicitly in index notation. In general, consider a compact gauge group element $U(\theta_i)$ defined on a manifold $S_d$ parametrized by $d$ coordinates $( \theta_1, ... \theta_d)$. The topological invariant called the \vocab{winding number} is given by the integral of the \vocab{Cartan-Maurer Form}:
\begin{align}
n & = {- 1 \over 24 \pi^2} \int d \theta_1 d \theta_2 ... d \theta_d \epsilon^{i_1...i_d} \mathrm{Tr} (U \partial_{i_1} U^\dagger) (U \partial_{i_2} U^\dagger) ... (U \partial_{i_d} U^\dagger)
\end{align}
We can express this as a surface integral over a d dimensional surface.
For the case of d = 3 it reduces to:
\begin{align}
n & = {1 \over 24 \pi^2} \int d S^\mu \epsilon^{\mu \nu \lambda \rho} \mathrm{Tr} (U \partial_\nu U^\dagger) (U \partial_\lambda U^\dagger) (U \partial_\rho U^\dagger)
\end{align}
Note that at infinity, the gauge field configuration for an instanton is pure gauge, so $A_\mu = U \partial_\mu U^{\dagger}$. This implies that the winding number can be written as:
\begin{align}
n & = {1 \over 24 \pi^2} \int d S^\mu \epsilon^{\mu \nu \lambda \rho} \mathrm{Tr} A_\nu A_\lambda A_\rho \\
\end{align}
We would like to express this surface integral as a volume integral. This current is the \vocab{Chern Simons current}:
\begin{align}
n &= {1 \over 24 \pi^2} \int d^4 x \partial_\mu J_{CS}^\mu \\
J_{CS}^\mu & = \epsilon^{\mu \nu \lambda \rho} \mathrm{Tr} \left(A_\nu \partial_\lambda A_\rho - {2 ig \over 3} A_\nu A_\lambda A_\rho \right)
\end{align}
To show this we explicitly compute $F \wedge F$:
\begin{align}
F_{\mu \nu} &= \partial_\mu A_\nu - \partial_\nu A_\mu - {i g} [A_\mu, A_\nu], \qquad \tilde{F}^{\mu \nu} \equiv \frac12 \epsilon^{\mu \nu \alpha \beta} F_{\alpha \beta} \\
\epsilon^{\mu \nu \alpha \beta} \mathrm{Tr} F_{\alpha \beta} F_{\mu \nu} & = \epsilon^{\mu \nu \alpha \beta} \mathrm{Tr} \left(\partial_\mu A_\nu + A_\mu A_\nu \right) \left(\partial_\alpha A_\beta + A_\alpha A_\beta \right) \\
&= \epsilon^{\mu \nu \alpha \beta} \left[ \partial_\mu \mathrm{Tr} \left( A_\nu \partial_\alpha A_\beta + A_\nu A_\alpha A_\beta \right) + \mathrm{Tr} A_\mu A_\nu A_\alpha A_\beta \right]
\end{align}
The last term vanishes because the trace is cyclic while a cyclic shift of the four indices is an odd permutation of the epsilon tensor, so the term equals minus itself. This gives:
\begin{align}
\partial_\mu J_{CS}^\mu \propto \mathrm{Tr}\, F_{\mu \nu} \tilde{F}^{\mu \nu}
\end{align}
We have thus established, in two different notations, how the winding number of a gauge field configuration can be computed as an integral over the 4-dimensional spacetime of the local quantity $F \wedge F$.
From this information alone, one can extract a lot about the vacuum structure without even solving for the instanton configuration. Denote the vacua $\ket{n}$ for winding number $n$. Suppose a (self dual) instanton that mediates $n \rightarrow n+1$ has finite action $S_1$. By superposing instantons, one can mediate between any $n$ and $m$, with tunnelling amplitude:
\[ \braket{n | H | m} \approx A e^{-|n-m| S_1} \]
The translation invariance of the hamiltonian with respect to winding number implies it can be diagonalized in Fourier modes:
\[ \underbrace{\ket{ \theta }}_{\text{energy eigenstate}} = \sum_{n} e^{i n \theta} \ket{n} \]
We see that we have a continuous set of energy eigenstates labelled by the angle $\theta \in [0, 2 \pi)$, also called the \vocab{vacuum angle}. Let us now compute the amplitude from one theta eigenstate to another, from $\theta \rightarrow \theta'$:
\begin{align}
\braket{\theta' |e^{-iHt}| \theta} & = \sum_{n, m} e^{-im \theta'} e^{i n \theta} \braket{n | e^{-i H t}| m} \\
& = \sum_{n, m} e^{i (n \theta - m \theta')} \underbrace{\int \D A_{\nu = n-m}}_{\text{instanton configurations}} e^{i S} \\
& = \delta(\theta - \theta') \sum_{\nu} e^{-i \nu \theta} \int \D A_\nu e^{i S}
\end{align}
Note that the amplitude $\braket{n|e^{-iHt}|m}$ is expressed on the RHS as a path integral over field configurations with boundary conditions being in different instanton sectors. Those field configurations are labelled by $\nu$ the topological charge difference, which can be expressed as the winding number:
\begin{align}
\nu & = {1 \over 32 \pi^2} \int d^4 x F_{\mu \nu} \tilde{F}^{\mu \nu} \\
\braket{ \theta' |e^{-iHt} | \theta} & = \delta(\theta - \theta') \int \D A
\exp \left( -{i} \int d^4 x \left[ \mathcal{L} + \underbrace{\color{red} {\theta \over 16 g^2 \pi^2} \mathrm{Tr} F_{\mu \nu} \tilde{F}^{\mu \nu}}_{\text{topological term}}\right] \right)
\end{align}
The $\delta(\theta - \theta')$ shows that the different theta vacua are \vocab{superselection sectors}, which means that no local operator can mix them: they are in effect isolated. However, one can further see that for a given $\theta$ vacuum, the instanton contribution to the path integral gives a topological term that violates parity and time reversal (it has an $\epsilon$ symbol). The fact that this term is so small for the strong force, $\theta \ll \mathcal{O}(1)$, is called the \vocab{strong CP} problem: CP is already violated in the weak sector, so there is no symmetry reason for $\theta$ to be tiny, yet the neutron electric dipole moment bounds it to be.
\subsection{Anomaly}
An anomaly is when a symmetry of the classical lagrangian is not respected at the quantum level. When a global symmetry is broken by quantum fluctuations, this is OK: it just means the quantum theory does not exhibit that symmetry. However, when a gauge symmetry is broken in the quantum theory, this is a problem. Recall that gauge symmetry is not a symmetry but a redundancy (another way to view it is that a gauge theory is a constrained physical theory). A gauge anomaly means the gauged theory has no quantum counterpart: the constrained theory is not consistent at the quantum level.
\section{Q and A}
\subsection{Nomenclature}
\begin{myquestion}
What is a phase transition? What is an order parameter?
\end{myquestion}
Unfortunately, the definition of a phase transition is not extremely clear. The original definition of a phase transition is a point where a system's thermodynamic function undergoes a non-analytic change as a smooth tuning parameter is varied. The reason why this is a surprising phenomenon is that the Boltzmann weights are analytic in the tuning parameters (temperature, etc.)\footnote{See Goldenfeld for more discussion}:
\[Z = \sum_{n=1}^N e^{-{\beta E_n}} \]
In fact, phase transitions are only possible in the $N \rightarrow \infty$ or \emph{thermodynamic limit}.
For a \vocab{2nd order phase transition} (continuous) in particular, there is no discontinuity in the free energy. What Landau showed is that, phenomenologically, it can be modelled by a free energy functional that is similar to the functional integrals of quantum field theory:
\[ Z = \int \D \phi e^{- \int \dd^d x \ML [\phi]} \]
where $\ML$ is a local function of the \vocab{order parameter} $\phi$. The non-analytic behavior is explained by \vocab{symmetry breaking}: the order parameter $\phi$ acquires a non-zero value in the ground state in the thermodynamic limit (this requires an assumption about ergodicity breaking, called ``cluster decomposition'' in high energy physics). From this tradition, one then \emph{defines} an order parameter to be any mesoscopic function of the system that acquires a non-zero expectation value across a phase transition. By extension, however, some authors do not require this to be a symmetry-breaking phase transition (since Landau's paradigm covers only a subset of phase transitions). This is why it is confusing. For example, one sometimes talks about Wilson loop expectations as order parameters (not a local quantity), or Chern numbers (not even a continuous variable!).
\newline
For gapped systems, a definition of zero-temperature phases is: equivalence classes of Hamiltonians which are separated by level crossings between the ground state(s) and excited state(s). Two Hamiltonians are in the same ``phase'' if one can be smoothly deformed into the other with local operators without closing the energy gap (``adiabatically'' connecting the ground states).
\begin{myquestion}
What is a \vocab{superselection sector}?
\end{myquestion}
Superselection sectors are sectors of the Hilbert space that are completely separate due to special symmetries of the dynamics. Either there is an infinite energy barrier preventing a state starting in one sector from evolving into another sector, or there is some symmetry/conserved charge that prevents them from changing into each other.
Some examples:
\begin{itemize}
\item Vacuum states of infinite volume related by a continuous global symmetry are superselection sectors. No local operator $\hat{O} \ket{\text{vac}}$ can change them into each other.
\item Vacua with different charge values: dynamics conserve charge!
\end{itemize}
\subsection{Fundamentals}
\begin{myquestion}
Where does time-ordering come into the picture in the path integral?
\end{myquestion}
\begin{myquestion}
In the path integral formulation, how is $\braket{x(t_1) x(t_2)}$ equivalent to fixing the boundary condition on the path integral sum?
\end{myquestion}
\begin{myquestion}
What is the equivalent of the wavefunction in quantum field theories?
\end{myquestion}
\begin{myquestion}
What is the relation between wavefunction renormalization and perturbation theory in non-relativistic QM?
\end{myquestion}
\begin{myquestion}
When a gauge theory is spontaneously broken, what is broken specifically?
\end{myquestion}
\begin{myquestion}
Why can there be no local order parameter in gauge theories to describe confining transition?
\end{myquestion}
The reason is that non-abelian gauge theories are asymptotically free. This implies that local observables, which probe the deep UV, all look similar in the confining and Coulomb phases.
\begin{myquestion}
What is spontaneous symmetry breaking in a quantum system? What is the relationship between Coleman-Mermin-Wagner theorem and cluster decomposition?
\end{myquestion}
\subsection{Gauge Theories}
\begin{myquestion}
Do all observables need to be gauge-invariant? If so, how can an observable be ``charged'' under a gauge field?
\end{myquestion}
Indeed, local gauge transformations (gauge transformations whose parameters go to zero at the boundary) are redundancies in the description of a physical state. Therefore, no observable can change under such gauge transformations.
\newline
In contrast, \emph{global} gauge transformations are physical transformations. The charge is the Noether current associated with this global symmetry, and is conserved. Therefore observables can be "charged" under a global gauge symmetry.
\begin{example}
Consider a Wilson line operator $W_{g, \mathcal{P}}(x_1, x_2) = \mathcal{P} \exp( i g \int_{x_1}^{x_2} A_\mu dx^\mu)$, and a locally charged operator $\MO$ with charge $g$.
The transformation rules under a gauge transformation $g(x) = e^{i \alpha(x)}$ are:
\[ \MO(x) \rightarrow e^{i g \alpha(x)} \MO(x) \]
and
\[ W_{g}(x_1, x_2) \rightarrow e^{i g \alpha(x_1)} W(x_1, x_2) e^{- i g \alpha(x_2)}\]
The object $\MO$ is not an observable; however, the object
$ W(-\infty, x) \MO(x)$ is. It is charged under the global gauge symmetry but invariant with respect to the local, redundant gauge transformations.
\end{example}
\begin{myquestion}
What is the link between the $\mathbb{CP}^n$ model with $n=1$ and the non-linear sigma model in 2 dimensions (2-d XY model) and $U(1)$ gauge theory?
\end{myquestion}
\begin{myquestion}
What's the relationship between $\phi^4$ theory and magnetic transitions which are technically $O(N)$ models?
\end{myquestion}
\begin{myquestion}
What is the relation between \[ \D F = dF + A \wedge F \] and \[ \D F = dF - i [A, F] \]
\end{myquestion}
The $F$'s in the two equations are in \textbf{different representations} of the gauge group. Therefore the two equations say the same thing, but about different representations of the object.
\newline
In the first equation, $F$ is a column vector whose components are the projections onto the Lie-algebra generator basis. In the second equation, $F$ is in the matrix representation.
\newline
We define the \vocab{covariant derivative} to be the operator $\D_\mu$. This operator acts differently depending on the representation of the object it acts on. For example, in gravity, the covariant derivative of a scalar field $\phi$ is just:
\[ \nabla_\mu \phi = \partial_\mu \phi \]
However, on a covector (one-form) field $V_\nu$, it is:
\[ \nabla_\mu V_\nu = \partial_\mu V_\nu - \Gamma^{\lambda}_{\mu \nu} V_\lambda \]
\newline
Consider a field $F^i$ that transforms under some representation $R_{k}$ of the gauge group. The covariant derivative is defined as:
\[ \D_\mu F^i = \partial_\mu F^i - i A_\mu^a(t_k^a)_{ij} F^j \]
where $(t_k^a)_{ij}$ are the Lie algebra generators in the representation $R_k$.
\newline
It turns out that the gauge fields live in the \vocab{adjoint} representation of the gauge group.
For an adjoint-valued vector $F^a$, this equation then becomes:
\[ \D_\mu F^a = \partial_\mu F^a - i A_\mu^b(t_{\mathrm{adj}}^b)_{ac} F^c \]
The generators in the adjoint representation are given by the structure constants, $(t^b_{\mathrm{adj}})_{ac} = - i f^{bac}$.
Therefore the equation becomes:
\[ \D_\mu F^a = \partial_\mu F^a - f^{bac} A_\mu^b F^c = \partial_\mu F^a + f^{bca} A_\mu^b F^c \]
The equation above computes the covariant derivative in the basis of generators. However, we can also write it in matrix form:
\[ (\D_\mu F)_{ij} = \partial_\mu F^a (t^a)_{ij} + f^{bca} A_\mu^b F^c (t^{a})_{ij} \]
Using the fact that $-i [t^b, t^c] = f^{bca} t^a$, we therefore have:
\[ (\D_\mu F)_{ij} = \partial_\mu F_{ij} - i [t^b A_\mu^b, t^c F^c]_{ij} \]
This implies the equation in the matrix representation:
\[ \D F = dF - i [A, F] \]
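As a quick numerical sanity check of the adjoint-representation statement above (a sketch added for convenience, not part of the original notes), one can verify for SU(2), where $f^{abc} = \epsilon^{abc}$, that the generators $(T^b)_{ac} = -i f^{bac}$ close the algebra $[T^a, T^b] = i f^{abc} T^c$:
\begin{verbatim}
import numpy as np

# Spot check for SU(2): structure constants f^{abc} = eps^{abc},
# adjoint generators (T^b)_{ac} = -i f^{bac}.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0

f = eps                    # su(2) structure constants
T = -1j * f                # T[b][a, c] = -i f^{bac}

for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * np.einsum('c,cij->ij', f[a, b], T)   # i f^{abc} T^c
        assert np.allclose(comm, rhs)
print("adjoint generators close the algebra")
\end{verbatim}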
\section{Unclassified Equations}
Find me a home
\textbf{Gauge theory}
\[ [D_\mu, D_\nu] = i g F_{\mu \nu} \]
\[ \sum_{\text{cyclic } \mu \nu \lambda} D_{\mu} F_{\nu \lambda} = 0 \rightarrow \epsilon^{\alpha \beta \mu \nu} D_{\beta} F_{\mu \nu} = 0 \text{ Bianchi's identity} \]
\[ \sum_a T^a_{ij} T^a_{kl} = \frac12 \left(\delta_{il} \delta_{jk} - {1 \over N} \delta_{ij} \delta_{kl} \right) \text{ Fierz identity for SU(N)} \]
\[ \mathrm{tr} \left( \underbrace{ A \wedge A \wedge ... }_{\text{even}} \right) = 0 \]
where $ A = t^a A^a_\mu \dx^\mu$
\[A^g \rightarrow g A g^{-1} - dg g^{-1} \]
\[ \D X = d X + A X - (-1)^n X A \]
\section{Appendix}
\subsection{Useful QFT Integrals}
The subject of renormalization gets hairy because we often get tangled in the math (big ugly integrals) and lose track of the physics. We collect those integral tricks in one section for reference.
\[ \underbrace{S_d}_{\text{surface area of the unit sphere in $d$ dimensions}} = {2 \pi^{d \over 2} \over \Gamma({d \over 2})} \]
\[ \int_0^\infty dx\, e^{-b x^2} x^n = { \Gamma \left({n+1 \over 2}\right) \over 2\, b^{n+1 \over 2}} \]
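A quick numerical spot check of this moment formula (a sketch added here, not in the original notes; it just compares the closed form against direct quadrature with SciPy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check: int_0^inf x^n exp(-b x^2) dx = Gamma((n+1)/2) / (2 * b**((n+1)/2))
for b in (0.7, 2.0):
    for n in range(5):
        numeric, _ = quad(lambda x, n=n, b=b: x**n * np.exp(-b * x**2), 0.0, np.inf)
        closed = gamma((n + 1) / 2) / (2 * b ** ((n + 1) / 2))
        assert np.isclose(numeric, closed)
print("Gaussian moment formula verified numerically")
\end{verbatim}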
\textbf{Bessel function}:
\url{https://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html}
\textbf{Jacobi-Anger Expansion}
\[ e^{iz \cos(\theta)} = \sum_{n = -\infty}^\infty (i)^n J_n(z) \cos(n \theta) \]
This implies
\[J_n(z) = {1 \over 2 \pi i^n} \int_{0}^{2 \pi} d \theta e^{i(z \cos(\theta) + n \theta)}\]
\url{https://www.researchgate.net/publication/333159155_Integral_Involving_Bessel_Functions_Arising_in_Propagation_Phenomena/fulltext/5cde147792851c4eaba6923d/Integral-Involving-Bessel-Functions-Arising-in-Propagation-Phenomena.pdf}
\[K_n(x) = \int_{0}^\infty e^{-ix \cos(t) -int} dt\]
\[\int {d^d k \over (2 \pi)^d } {e^{i \mathbf{k} \cdot \mathbf{x}} \over k^2 + m^2} = {m^{d - 2} \over (2 \pi)^{d \over 2}} \left(m |\mathbf{x}|\right)^{1 - {d \over 2}} K_{1- {d \over 2}} (m |\mathbf{x}|) \]
note $K_{-1} = K_1$
For example in d=4:
\[G_4(x) = {1 \over 4 \pi^2 } \left({m \over |x|}\right) K_1(m|x|)\]
\subsubsection{Schwinger's trick}
\begin{problem}
Evaluate $$I(m^2) = \int {d^D l \over (2 \pi)^D} {1 \over l^2 + m^2}$$
\end{problem}
(trick in Zinn Justin)
\begin{align}
I(m^2) & = {S_D \over (2 \pi)^D} \int_0^\infty dl \int_{0}^\infty ds l^{D-1} e^{- s(l^2 + m^2)} \\
& = {1 \over (4 \pi)^{D \over 2}} \mu^{4-D} m^{D-2} \Gamma(1 - D/2)
\end{align}
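For reference, one reading of the intermediate steps in this two-line computation (the $\mu^{4-D}$ factor is the dimensional-regularization scale inserted by hand; the rest follows from the Gaussian moment formula above):
\begin{align}
\int_0^\infty dl\, l^{D-1} e^{-s l^2} &= {\Gamma\left({D \over 2}\right) \over 2\, s^{D \over 2}} \\
\int_0^\infty ds\, s^{-{D \over 2}} e^{-s m^2} &= \Gamma\left(1 - {D \over 2}\right) m^{D-2} \\
{S_D \over (2\pi)^D} \cdot {\Gamma\left({D \over 2}\right) \over 2} &= {1 \over (4\pi)^{D \over 2}}
\end{align}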
\subsubsection{Feynman's trick}
\begin{problem}
Evaluate $$I(m^2, p^2) = \int {d^D l \over (2 \pi)^D} {1 \over l^2 + m^2} {1 \over (p-l)^2 + m^2}$$
\end{problem}
\begin{lemma}
Feynman's trick:
$${1 \over AB} = \int_0^1 dx\, {1 \over \left[ x A + (1-x) B \right]^2}$$
\end{lemma}
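As a quick check of the lemma (an elementary verification added here, not from the original notes), the $x$-integral can be done in closed form:
\begin{align}
\int_0^1 {dx \over \left[ x A + (1-x) B \right]^2}
= \left[ {-1 \over (A-B)\left[ x A + (1-x) B \right]} \right]_{0}^{1}
= {1 \over A-B}\left( {1 \over B} - {1 \over A} \right)
= {1 \over AB}
\end{align}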
\begin{align}
I &= \int_0^1 dx \int {d^D l \over (2 \pi)^D}\, {1 \over \left[ \underbrace{(1 - x)(l^2 + m^2) + x ((p-l)^2 + m^2)}_{\equiv \mathcal{A}} \right]^2}
\end{align}
Complete the square of $\mathcal{A}$:
\begin{align}
\mathcal{A} &= l^2 + m^2 - 2 p \cdot l x + x p^2 \\
&= (l- xp)^2 + m^2 + x (1-x) p^2
\end{align}
Since we are integrating over all momenta, we can shift the integration variable
$l \rightarrow l - xp$.
With this, the integral over $u \equiv l^2$ takes the form $$I \propto \int du\, {u^{{D \over 2} - 1} \over \left(u + m^2 + x(1-x)p^2\right)^2}$$
which once again gives a bunch of gamma functions (using Schwinger's trick).
\begin{align}
I = {\mu^{4 -D} \over (4 \pi)^{D \over 2}} \Gamma(2 - D/2) \int_0^1 dx\, \left[ m^2 + x(1 -x) p^2 \right]^{{D \over 2} - 2}
\end{align}
Note that this integral is logarithmically divergent, so the pole is exactly at $D = 4$, while for the quadratically divergent integral the poles were at $D = 2$ and $D = 4$.
\subsubsection{More integration}
\begin{align}
\int dk_E {k_E^a \over (k_E^2 + m^2)^b} = {\left(m^2 \right)^{{a + 1 \over 2} - b} { \Gamma \left({a + 1 \over 2} \right)} \Gamma \left( {b - {a + 1 \over 2}} \right) \over 2 \Gamma(b)}
\end{align}
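A hedged numerical spot check of this master formula (added here for convenience, not in the original notes):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check: int_0^inf k^a / (k^2 + m^2)^b dk
#   = (m^2)**((a+1)/2 - b) * Gamma((a+1)/2) * Gamma(b-(a+1)/2) / (2*Gamma(b))
m = 1.3
for a, b in [(0, 1), (1, 2), (2, 2), (3, 3)]:
    numeric, _ = quad(lambda k, a=a, b=b: k**a / (k**2 + m**2)**b, 0.0, np.inf)
    closed = ((m**2)**((a + 1) / 2 - b) * gamma((a + 1) / 2)
              * gamma(b - (a + 1) / 2) / (2 * gamma(b)))
    assert np.isclose(numeric, closed)
print("Euclidean radial integral formula verified numerically")
\end{verbatim}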
\subsection{Gamma Function and Dimensional Regularization}
\subsubsection{Gamma Function}
For positive integers n, $\Gamma(n+1) = n!$.
The gamma function is analytic except for simple poles at $0, -1, -2, \ldots$ with residues
$$\text{Res}[ \Gamma(-m)] = {(-1)^m \over m!}$$
Using this, we get:
\begin{align}
\Gamma(\epsilon) = {1 \over \epsilon} - \gamma_E + \mathcal{O}(\epsilon)
\end{align}
Good to remember (Srednicki, section 14):
\begin{align}
\Gamma(n+1) &= n! \\
\Gamma(n+\frac12) &= {(2n)! \over n! 2^{2n}} \sqrt{\pi} \\
\Gamma(-n + x) & = {\left( -1 \right)^n \over n!} \left[ {1 \over x} - \gamma_E + \sum_{k=1}^n {1 \over k} + \mathcal{O}(x) \right]
\end{align}
$\gamma_E \approx 0.5772\ldots$ is the \vocab{Euler--Mascheroni constant}.
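For instance, the $n = 1$ case of the last expansion (plugging $n=1$, $x=\epsilon$ into the formula above), which is the one needed for the quadratically divergent integral below, reads:
\begin{align}
\Gamma(-1 + \epsilon) = -\left[ {1 \over \epsilon} - \gamma_E + 1 \right] + \mathcal{O}(\epsilon)
\end{align}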
\subsubsection{Dimensional Regularization}
\emph{Minahan lecture notes }
Given the information of the previous section, dimensional regularization just expands the gamma functions of the integrals around their poles.
Here there are 2 conventions, expanding with $D = 4 - 2 \epsilon$ and
$D = 4 - \epsilon$. That's where some factors of 2 can be missed!
We show the $\phi^4$ quadratically divergent mass renormalization:
\begin{align}
\mu^{4 - D}\int {d^D l \over (2 \pi)^D} {1 \over l^2 + m^2} & = {1 \over (4 \pi)^{D \over 2}} \mu^{4-D} m^{D-2} \Gamma(1 - D/2) \\
& \approx -{m^2 \over (4 \pi)^2} \left({1 \over \epsilon} + 1 - \gamma_E + \log(4\pi) - \log \left({m^2 \over \mu^2} \right) \right)
\end{align}
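One way to organize this expansion (my bookkeeping of the step above, using the $D = 4 - 2\epsilon$ convention): expand each factor to first order in $\epsilon$,
\begin{align}
\mu^{4-D} m^{D-2} &= m^2\left[1 + \epsilon \log{\mu^2 \over m^2}\right] + \mathcal{O}(\epsilon^2) \\
(4\pi)^{-{D \over 2}} &= {1 \over (4\pi)^2}\left[1 + \epsilon \log(4\pi)\right] + \mathcal{O}(\epsilon^2) \\
\Gamma\left(1 - {D \over 2}\right) &= -\left[{1 \over \epsilon} + 1 - \gamma_E\right] + \mathcal{O}(\epsilon)
\end{align}
and keep terms up to $\mathcal{O}(\epsilon^0)$ in the product; this reproduces the expression above.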
Here's the $\phi^4$ divergent interaction vertex correction:
\begin{align}
I & = {\mu^{4 -D} \over (4 \pi)^{D \over 2}} \Gamma(2 - D/2) \int_0^1 dx\, \left[ \Delta - x(1 -x) p^2 \right]^{{D \over 2} - 2} \\
& \approx {1 \over (4 \pi)^2} \left({1 \over \epsilon} - \gamma_E + \log(4 \pi) - \log \left({m^2 \over \mu^2} \right) + Q\left({p^2 \over m^2}\right) \right) \\
Q(p^2) & \equiv - \int_0^1 dx \log \left(1 - x(1-x) p^2 \right)
\end{align}
\end{document}
| {
"alphanum_fraction": 0.6903533714,
"avg_line_length": 61.14,
"ext": "tex",
"hexsha": "5e440161023e327ad46038897aa19a6010e59d58",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "556f1fdfb769aa884fc0a144977687f141e3454d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tranphysics/math-physics",
"max_forks_repo_path": "qft RG.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "556f1fdfb769aa884fc0a144977687f141e3454d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tranphysics/math-physics",
"max_issues_repo_path": "qft RG.tex",
"max_line_length": 887,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "556f1fdfb769aa884fc0a144977687f141e3454d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tranphysics/math-physics",
"max_stars_repo_path": "qft RG.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 38582,
"size": 119223
} |
\documentclass{article}
\usepackage{fullpage}
\usepackage{nopageno}
\usepackage{amsmath}
\usepackage{amssymb}
\allowdisplaybreaks
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
\title{Notes}
\date{April 26, 2014}
\maketitle
homework questions
Catalan problems (1, 2, 36): one could prove the formula directly, but that can be hard; it is easier to write a bijection from the objects to objects counted by the Catalan numbers.
1)b to d, 2)a to d, 36)c to d
\subsubsection*{worksheet}
prove that the \# of length-$2n$ ballot sequences $=\frac{1}{n+1}\binom{2n}{n}$
a)$\binom{2n}{n}$
b)$A_n+U_n=\binom{2n}{n}$
c)$a_k=-1$, $a_1+a_2+\dots+a_{k-1}=0$
d)is $k$ odd or even? odd
e) one more one, one less negative one. this process gives all possible sequences of $n+1$ 1's and $n-1$ $-1$'s. it is really a bijection from $U_n$ to sequences of $n+1$ 1's and $n-1$ $-1$'s. so $\abs{U_n}=\binom{2n}{n+1}$, $A_n=\binom{2n}{n}-\binom{2n}{n+1}$.
this leads to a Catalan number.
f)
\section*{Stirling numbers}
\subsubsection*{warm up}
how many ways are there to partition $\{1,2,\dots,p\}$ into $k$ indistinguishable boxes?
$k$ choices for each object in the set: $k^p$. but the boxes here are distinguishable. divide by the number of permutations to eliminate distinguishability: $\frac{k^p}{k!}$.
\subsubsection*{question}
what if no box may be left empty? (choose k elements, then distribute)
\subsubsection*{answer}
use inclusion-exclusion. let $A_i=$ the set of distributions where box $i$ is empty.
\begin{align*}
\abs{A_{i_1}\cap\dots\cap A_{i_j}}&=(k-j)^p\\
\#\text{ distributions with no empty box}&=\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k-j)^p
\end{align*}
dividing by $k!$ gives the number of partitions into $k$ nonempty indistinguishable boxes (the Stirling number $S(p,k)$).
the Bell number $B_p$ counts the \# of partitions of $\{1,\dots,p\}$ into nonempty indistinguishable boxes. This is a sum of Stirling numbers, the sum of the $p$th row of the triangle.
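A quick computational check (not part of the lecture, just an illustration of the formulas above): compute Stirling numbers of the second kind via inclusion-exclusion and sum a row to get a Bell number.
\begin{verbatim}
from math import comb, factorial

def stirling2(p, k):
    # (number of surjections from a p-set onto k boxes) / k!
    surjections = sum((-1) ** j * comb(k, j) * (k - j) ** p for j in range(k + 1))
    return surjections // factorial(k)

p = 4
row = [stirling2(p, k) for k in range(1, p + 1)]
print(row, sum(row))   # [1, 7, 6, 1] 15  -> B_4 = 15
\end{verbatim}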
\end{document}
| {
"alphanum_fraction": 0.718902439,
"avg_line_length": 31.5384615385,
"ext": "tex",
"hexsha": "2aab3fd38665941b2c3af31e9e7d07106de8a8cd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "ylixir/school",
"max_forks_repo_path": "combinatorics/combinatorics-notes-04-23.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "ylixir/school",
"max_issues_repo_path": "combinatorics/combinatorics-notes-04-23.tex",
"max_line_length": 237,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "ylixir/school",
"max_stars_repo_path": "combinatorics/combinatorics-notes-04-23.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 559,
"size": 1640
} |
%-------------------------------------------------------
% DOCUMENT CONFIGURATIONS
%-------------------------------------------------------
%-------------------------------------------------------
% START OF PROTECTION STATE OF THE ART
%-------------------------------------------------------
\subsubsection{Protections}
\paragraph{Honeypot}
Description from the official website \cite{Honeypot}
\blockquote{In computer terminology, a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of data (for example, in a network site) that appears to be a legitimate part of the site but is actually isolated and monitored, and that seems to contain information or a resource of value to attackers, which are then blocked.}
\paragraph{Penetration Testing Lab}
Description from the official website \cite{PenetrationLab}
\blockquote{Use of multiple operating systems to run targeted penetration tests on a network or computer.}
\paragraph{Two-factor authentication}
Description from the official website \cite{Two-factorAuthentication}
\blockquote{That provides identification of users by means of the combination of two different components.}
\paragraph{Let's Encrypt}
Description from the official website \cite{LetsEncrypt}
\blockquote{A new Certificate Authority: It’s free, automated, and open.}
%-------------------------------------------------------
% END OF PROTECTION STATE OF THE ART
%------------------------------------------------------- | {
"alphanum_fraction": 0.6332691073,
"avg_line_length": 55.6071428571,
"ext": "tex",
"hexsha": "97b303848c2626c47cb5a64a1c07fdaac84df9d5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0a7e241df00d81702bdf1105e9c35c4fa642da2f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Rocla/OverClouds",
"max_forks_repo_path": "report-bs/16dlm-tb210-overclouds/state-of-art/protections.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0a7e241df00d81702bdf1105e9c35c4fa642da2f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Rocla/OverClouds",
"max_issues_repo_path": "report-bs/16dlm-tb210-overclouds/state-of-art/protections.tex",
"max_line_length": 449,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0a7e241df00d81702bdf1105e9c35c4fa642da2f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Rocla/OverClouds",
"max_stars_repo_path": "report-bs/16dlm-tb210-overclouds/state-of-art/protections.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 281,
"size": 1557
} |
\documentclass[9pt,twocolumn,twoside]{../../styles/osajnl}
\usepackage{fancyvrb}
\journal{i524}
\title{Optical Character Recognition}
\author[1]{Saber Sheybani}
\author[1]{Sushmita Sivaprasad}
\affil[1]{School of Informatics and Computing, Bloomington, IN 47408, U.S.A.}
\affil[*]{Corresponding authors: [email protected],[email protected]}
\dates{Project: S17-IR-P012, \today}
\ociscodes{OCR, Ansible, K-Nearest Neighbor}
% replace this with your url in github/gitlab
\doi{ Report: \url{https://github.com/cloudmesh/sp17-i524/blob/master/project/S17-IR-P012/report/report.pdf}\\
Code: \url{https://github.com/cloudmesh/cloudmesh.ocr/tree/master/code}}
\begin{abstract}
Optical Character Recognition is a technology for converting images of texts,
into machine encoded text format. In this project, the input data is
in standard image formats and our goal is to recognize the words/letters in the
image as accurately as possible and convert the dataset into TXT
format. The heart of OCR as a pattern recognition system is a classifier.
The images will go through a pre-processing phase which will convert them
into a form that is compatible for feeding to the classifier. Then the classifier
will determine the class of each glyph in the image. The whole recognition system
is installed and operated on the cloud.
\newline
\end{abstract}
\begin{document}
\maketitle
\section{Introduction}
The age of digitalization has made it essential to store documents
in a digital form so that valuable information can be
stored, edited and searched among billions of
records. OCR technology is used quite frequently by every industry that
has handwritten, scanned or photographic data. This project report
gives a detailed view of how OCR technology works, the preprocessing done to remove noise, and the
algorithms used to recognize the images. We have delved into some of the basic
concepts used in the implementation process. We have also discussed some
important applications of this technology in the real world and created a benchmark.
\section{Execution Plan}
This execution plan shows the distribution of work over the time
period of the course and the steps followed to achieve the final results.
\begin{enumerate}
\item {\bfseries 6 March - 12 March 2017} To implement the OCR, we
first looked for a dataset to train the algorithm and test it. The
desired library to use to write the codes and the complexity of the
final goal of the project was discussed.
\item{\bfseries 13 March -19 March 2017}
Looking for preprocessing steps in order to cleanse noise from the
image and convert the image into a standard form which makes it adequate
for later processing.
\item {\bfseries 20 March - 26
March 2017} Starting to work with Ansible and Cloudmesh Client for running
tasks on virtual nodes.
\item{\bfseries 27 March- 02 April 2017} starting to deploy individual modules
for preprocessing, to the virtual machines on Chameleon Cloud service,
using Ansible. Cloudmesh Client was used to reserve the virtual machines
on the cloud services.
\item {\bfseries 03 April- 09
April 2017} Executing a preliminary form of character recognition using
K Nearest Neighbour classification and a large dataset.
\item {\bfseries 10 April - 16 April 2017}
Implementing segmentation of image into lines, words and letters.
\item {\bfseries 17 April - 23 April 2017} Executing benchmark and
finalizing the project report.
\end{enumerate}
\section{BACKGROUND}
\subsection{OCR Technology}
Optical Character Recognition is a technology used to convert
different types of documents, such as scanned documents of
handwritten or machine-written text, into an editable and searchable
form \cite{www-ocr}.
The images can be either basic black \& white or colored. The
technology analyzes the structure of the document and divides it
into small refined segments, so that each segment contains one character.
Finally, individual characters are singled out
one by one and fed to a classification algorithm, which returns the
closest letter that the individual character could possibly be
identified with.
\section {SYSTEM CONFIGURATION AND TECHNOLOGIES USED}
\subsection{System Configuration}
The Python code and the libraries are run on Ubuntu 16.04 LTS.
\subsection{Technologies Used}
\begin{enumerate}
\item {\bfseries Python Programming Language} : An object-oriented
  programming language with an open-source license. We chose Python for
  this project as there are many libraries available in Python that
  make writing code for OCR easier. Packages such as PIL,
  Pillow and OpenCV are a few of the libraries that help with image
  processing.
\item{\bfseries OpenCV } : OpenCV is an open source computer vision
    library. It is used for building computer vision applications. It
    consists of more than 2500 algorithms, including both machine
    learning and computer vision algorithms \cite{opencv-about}. These algorithms are
    devised for facial and image recognition, tracking moving objects,
    etc. We have used OpenCV 2 for creating a KNN classifier and a few
    of our preprocessing techniques.
\item{\bfseries Ansible} : Ansible is an IT automation tool. It uses
    YAML in order to describe the state of the server
    \cite{www-ocr}. Ansible implements the internal commands that are
    required to reach that state, which depend on the operating
    system. The Ansible playbook, which consists of these internal
    commands, can be applied to any server or service. There is no
    requirement to install additional software on the target system,
    as the commands are run over an SSH session.
\item{\bfseries Cloudmesh Client} : It is an open source client
    interface tool that provides us with an easy-to-use interface for
    accessing cloud services, creating single and multiple VMs,
    clusters and workstations. We can manage the resources we would
    like to use and customize them as per our requirements to run the
    projects.\\ It provides an interface to execute jobs on High
    Performance Computing clusters \cite{cloudmesh}. Users can use
    just one platform to manage all of their cloud resources. Cloudmesh
    Client creates a local copy of the data, which allows clouds
    with similar configurations to be created as well. The default
    features of the Cloudmesh Client allow easy control of the cloud
    as well. Cloudmesh includes an API, a command-line client and a
    command-line shell.
\begin{figure}
\centering
\fbox{\includegraphics[width=\linewidth]{images/cloudmesh.png}}
\caption{Architecture of Cloudmesh Client \cite{cloudmesh}}
\label{Architecture of Cloudmesh Client}
\end{figure}
\item{\bfseries Chameleon Cloud} :Chameleon cloud provides
OpenStack Cloud (kilo) using the KVM virtualization
technology \cite{www-chameleon-openstack}. It is an Infrastructure as
a Service(IaaS) platform that allows us to create as well as manage
the virtual environment. The virtual machine that we use here is
compatible with KVM. Chameleon Cloud also gives access to the
bare-metal computing resources, which allows administrative rights to
use cloud for computing experiments \cite{www-chameleon-baremetal}.
\end{enumerate}
\section{Method Survey}
Optical Character Recognition has already been implemented in numerous
ways, focusing on different goals. We did a survey of the possible
approaches to character recognition.
The main components of every OCR system are the feature
extraction component and the classifier.
Feature extraction methods can be separated into two groups:
Template matching, and Structural classification \cite{borovikov2014survey}.
In \textbf{Template matching}, the individual pixels of the image
are directly used as features. For each possible character, there is
one class and one template feature set associated with it. In classification,
a similarity metric evaluates the distance of the input character to each
of the templates, and the input is thus assigned the class of
the most similar template.
In \textbf{structural classification}, structural features of every
character, such as loops and curves, are used as features.
There are various types of classification methods. A number of methods that
have been used for OCR are Nearest-Neighbor, Artificial Neural Networks,
and Support Vector Machines. K-Nearest Neighbor and Artificial Neural Networks
are discussed further as the candidates for our job.
\subsection {K Nearest Neighbour}
It is a non-parametric algorithm where each training datum is a vector with a class label associated with it. The training set carries the labels for the classes; given the test set, the algorithm calculates the distance between each test point and the training points and finds the nearest ones. A single value of `K' is given, which decides how many neighbours influence the classification. Figure \ref{fig:knn} displays a schema of KNN classification.
\begin{figure}
\centering
\fbox{\includegraphics[width=\linewidth]{images/knn.png}}
\caption{Example of k-NN classification \cite{steinbach-book}}
\label{fig:knn}
\end{figure}
\subsection{Feed Forward Neural Networks}
Artificial Neural Network is a paradigm in computing, inspired by the
structure of biological nervous systems. It consists of a network of
processing units, where the output of each unit is a nonlinear
function of its weighted inputs that come from other units. Such
network can be trained to solve different kinds of problems, including
classification and clustering. A feed forward neural networks is one
in which the neurons are organized in a number of layers and each
layer only feeds to the next one, but not to the previous one (no
feedback). However, in a back-propagation process, the errors from one
iteration of classification will be fed back from the output to the
network, in order to modify and improve the network for next
iterations.
Due to simplicity and relatively good performance of KNN, we choose it
as the classifier for this project. There is an easy to use implementation
of it in the OpenCV library.
Our KNN uses Euclidean distance as the distance metric to calculate the nearest neighbours for the `K' value: \\
Euclidean Distance\\$ d(p,q) = \sqrt{ \sum_i (p_i - q_i)^2}$\\
For the value of K, 5 is chosen so that it is neither too small, and hence sensitive to noise, nor too large, which might result in including points from other classes.
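To illustrate how such a classifier looks in code, here is a minimal sketch with made-up toy data using the modern \texttt{cv2.ml} API (the report used OpenCV 2, whose interface differs slightly; array sizes and labels below are placeholders, not the project's actual training setup):
\begin{verbatim}
import cv2
import numpy as np

# Toy KNN sketch: each row is a flattened glyph image, labels are class ids.
rng = np.random.default_rng(0)
train = rng.random((100, 400), dtype=np.float32)        # 100 samples, 20x20 pixels
labels = rng.integers(0, 62, (100, 1)).astype(np.float32)  # 62 classes: 0-9, A-Z, a-z

knn = cv2.ml.KNearest_create()
knn.train(train, cv2.ml.ROW_SAMPLE, labels)

test = rng.random((10, 400), dtype=np.float32)
ret, results, neighbours, dist = knn.findNearest(test, 5)  # K = 5, as discussed above
print(results.ravel())
\end{verbatim}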
\section {Preprocessing Techniques}
The steps in a full OCR session are as follows:
\begin{itemize}
\item Preprocessing: The input images need to be segmented into units
  that each keep only one glyph (symbol). Also, the colored or
  grayscale images will be binarized.
\item Feature extraction: The glyphs will be decomposed into features
  like lines, closed loops, line direction, and line intersections.
\item Character recognition: The image features will be fed to the
  classifier, compared with stored glyph features, and the nearest
  match will be chosen.
\end{itemize}
Preprocessing is required on the raw images in order to filter out the
required subject and distinguish it from any other unwanted objects in
the image, such as watermarks, background subjects, etc. We have
applied different preprocessing techniques in order to remove noise
and convert the image into a grayscale format, as color images require
more complex methods of processing.
\subsection{ Noise Reduction Techniques}
Noise reduction is done to extract out any unwanted bit-patterns;
there are linear as well as non-linear techniques for this. Linear:
this method is used to remove isolated pixel noise from the image;
the filter output is taken as a linear combination of the neighborhood
pixels. Non-linear: these kinds of filters replace the value of a
particular pixel in order to remove impulse noise.
\subsection{Histogram Based Method}
This method assigns a value to the intensity of each pixel and plots
it on a histogram: the darker the image, the more data points lie on
the left and center of the histogram; the lighter the image, the more
data points lie on the right side of the histogram. Using histogram
equalization, the contrast of the image can be improved in this
case. In the histogram equalization method, an image is divided into
blocks of pixels and a histogram equalization is done on each block.
This allows us to distinguish the content we actually require from the
other background elements, and enhances the visibility of the
characters present in the image.
\subsection{Median Filter}
It is a non-linear noise reduction technique and a low-pass
filter. In this case the pixel values are taken over a neighborhood of
the image and the median of these values is assigned to the
center pixel of that neighborhood. Figure \ref{fig:median-schema} displays a schema of
how the median filter works. It is an effective means of removing
salt-and-pepper noise, the random specks occurring on the image
due to poor quality of the picture or a bad
scan \cite{medianfilterpreprocessing}. Figure \ref{fig:median} shows the result of
applying a median filter on a scanned image; we can see the reduction
in dots and other marks on the image, making it smoother and more
usable.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{images/pasted.png}}
\caption{Averaging of a pixel in median filter \cite{medianfilterpreprocessing}}
\label{fig:median-schema}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=8 cm]{images/medianfilter}}
\caption{Result of applying median filter on a scanned image}
\label{fig:median}
\end{figure}
\subsection{Gaussian Blur}
The Gaussian blur filter is a low-pass filter which is used to eliminate
isolated pixel noise. Image smoothing is done here using Gaussian
filters, where the weighted average of the pixel values is computed
with the Gaussian coefficients as weights. The filter provides a smooth
texture to the noisy image.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=0.25\textwidth]{images/gaussianfilter.png}}
\caption{After applying the Gaussian filter on the image preprocessed with the median filter}
\end{figure}
\subsection{ Binarization}
Otsu’s method \cite{otsu1975threshold}: Otsu’s method consists of finding
the best intensity threshold to separate two classes, often background
vs foreground but not always. The algorithm tries to find a separation
point that has the minimum weighted within-class variance. If the
input images are grayscale, the algorithm will simply find a threshold
such that any intensity below it is considered background and the
intensity of the corresponding pixels is rounded to
zero. Similarly, the intensities above the threshold are rounded
to 1. The resulting image (array) will be binary.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=8 cm]{images/binarization.png}}
\caption{Applying binarization after the Gaussian filter}
\end{figure}
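A condensed sketch of this preprocessing chain in OpenCV (illustrative only; the file name and kernel sizes are placeholders, not the project's exact settings):
\begin{verbatim}
import cv2

# grayscale -> median filter -> Gaussian blur -> Otsu binarization
img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
img = cv2.medianBlur(img, 3)                 # remove salt-and-pepper noise
img = cv2.GaussianBlur(img, (5, 5), 0)       # smooth isolated pixel noise
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("binarized.png", binary)
\end{verbatim}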
\subsection{ Segmentation}
Segmentation of the image happens at multiple levels, namely lines, words
and letters. Application of all three enables the conversion
of whole pages of scanned documents into text format.
In this project, segmentation at all levels is implemented using
projection profile approach. Projection of the intensities on the vertical
axis will differentiate the rows that contain some text, from the ones that
only include the background.
\newline
For word segmentation, if the number of background columns is more than a
threshold, the two letters are considered as belonging to different words.
The background columns are detected using projection on the horizontal axis, but only
for the rows that are included in the current line. Thresholds are defined using
expert views in typography \cite{dowding1995finer}.
\newline
The letter segmentation is simply done after one background column is found.
The figure \ref{fig:segment} displays the application of our segmentation algorithm
on a sample image.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=5 cm]{images/sample3segmented.pdf}}
\caption{Applying segmentation}
\label{fig:segment}
\end{figure}
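For concreteness, here is a short sketch of projection-profile line segmentation (my reading of the approach described above, not the project's exact code):
\begin{verbatim}
import numpy as np

# `binary` is a 2-D array with text pixels = 1 and background = 0.
def segment_lines(binary):
    row_profile = binary.sum(axis=1)       # project onto the vertical axis
    has_text = row_profile > 0
    lines, start = [], None
    for i, flag in enumerate(has_text):
        if flag and start is None:
            start = i                       # a text line begins
        elif not flag and start is not None:
            lines.append((start, i))        # a background row closes the line
            start = None
    if start is not None:
        lines.append((start, len(has_text)))
    return lines                            # list of (top_row, bottom_row) pairs
\end{verbatim}
Word and letter segmentation work the same way on the horizontal projection restricted to each detected line, with the thresholds discussed above.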
\section{Application}
OCR converts images to machine-readable text. That makes it the
initial tool that needs to be used for processing any document, or
simply any written material in a digital image
captured by a camera \cite{www-ocr-wiki}. Its output can be stored significantly more
compactly than scanned images. But beyond that, it enables us to process
the output information for numerous applications. Examples of these
applications include creating a narrator machine to help the visually
impaired read non-digital documents and signs, or automatic recognition
of automobile number plates.
\section{License}
The project is developed under the open source Apache License 2.0. The license
file is included in the Git repository of the project.
The packages and softwares used in developing the project include OpenCV,
Python 2.7, Ansible, Cloudmesh. All of these packages are open-source.
The license file for OpenCV is included in the project repository
(CV\_LICENSE.txt).
\section{Cloud Deployment}
Automatic deployment of the program on the virtual environment was done using
Ansible \cite{www-ansible}. The jobs were collected, organized in
Playbooks \cite{www-ansible-playbook} and run on virtual clusters provided
by Chameleon Cloud \cite{www-chameleoncloud}.
The tasks include installing the essential libraries on the remote machine
and running the program.
The VM reservations were done using Cloudmesh Client. Three Ansible playbooks are used, one for
deploying the software stack, one for running the OCR codes on a standard dataset
and another one for running it on an arbitrary image. The software stack
required for running the program includes the following components:
\begin{enumerate}
\item {\bfseries Git} The Git module is needed for cloning the OCR Python codes.
It is installed using apt module.
\item{\bfseries OpenCV} OpenCV, as discussed in the previous sections, has become
the standard library of computer vision. It is used in this project for multiple
purposes. As this library is extensive, its installation may take a significant time.
Hence, we tried to examine all the possible ways to install it. Building from the source,
which is the recommended way for installing it (and any other package) took more than
20 minutes. Even after that, connecting it to the existing Python installation on the
VMs was problematic. The alternative ways include using PyPi, Conda (Anaconda package
management system), and apt. After trial and error, Apt appeared to be the best solution
with installation time of less than 5 minutes and hassle-free python integration. It can
be easily installed on VMs using Ansible apt module. So we chose this method.
\item {\bfseries OCR Code} The codes of the program were checked in the project
repository. An Ansible role cloned these codes into each node of the cluster.
\item{\bfseries Dataset} An Ansible role was used to download and extract the
dataset. Unarchive module was used for this purpose.
\end{enumerate}
\section{Dataset}
The dataset used in our project for training the KNN is Chars74k \cite{chars74k-dataset}.
This dataset provides an extensive collection of English letters and digits,
in handwritten form or in various computer fonts \cite{de2009character}. In this project, a subset
of the data, including characters from computer fonts with 4 variations (combinations
of italic, bold and normal) was used. The glyphs in this subset include 0-9, A-Z, and a-z.
For each glyph, there are 1016 different files, each with one font. However, some of the fonts
are similar to each other.
\newline
Before this dataset, the MNIST handwritten digits database was used for preliminary
purposes \cite{mnist-dataset}.
\section{Benchmarking}
After wrapping all the necessary material, the code was run in a role and the result
was saved in text files, using another role. The accuracy of classification is printed
in the console of the local machine.
The program was first deployed on single VMs that were reserved manually using
Cloudmesh Client. Later on, clusters of 2-4 nodes were used to measure the execution
time of deployment. Note that for running the benchmark, there is a trade-off between the runtime
and accuracy of classification: the more samples used for training, the higher the
accuracy achieved and, of course, the longer the execution time needed.
\subsection{Sample size analysis}
Running a dataset of 50 images as the test and train data
with K=5 takes around 20 seconds and yields an accuracy of 65\%. The time complexity
of the algorithm is $O(n^2)$; thus, using 100 samples will multiply the runtime by 4.
However, as the dataset used contains 74k images, there is still significant diversity
among 100 images. As a result, using 100 images only improves the classification
accuracy up to 75\%. Changing the number of nodes does not affect the runtime, unless parallelization
is exploited.
Each VM on Chameleon Cloud has only 1 core, 2050076 kB (2 GB) of RAM, and 20 GB of storage.
Due to the small size of the RAM, we had to keep the number of training samples under 200.
\subsection{Cluster size analysis}
In this section, the sample size is around 50 different fonts for training the classifier and another 50 for testing it.
For the cluster of \textbf{2 nodes}:
It took 67 seconds to deploy (allocate) the cluster, 153 seconds to install the software stack
and dataset on the nodes, and 169 seconds to run the benchmark on them. The accuracy of classification on one of the nodes was 72.23 and on the other, it was 70.84.
For the cluster of \textbf{3 nodes}:
It took 98 seconds to deploy (allocate) the cluster, 170 seconds to install the stack and
180 seconds to run the benchmark.
For the cluster of \textbf{4 nodes}:
It took 131 seconds to deploy (allocate) the cluster, 175 seconds to install the stack and
187 seconds to run the benchmark.
As we can see, the higher number of nodes will only add to the time of cluster creation. As the nodes are
quite similar, the execution time of the tasks which run in parallel is similar.
\subsection{OCR on arbitrary images}
As arbitrary images can be significantly different from
the standard dataset, the sample size needs to be much higher. However, due to the small size of the RAM, for running OCR on arbitrary images we only used a sample size of 200. That causes the accuracy of OCR to be very low, around 20\%.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=8 cm]{images/time_complexity.eps}}
\caption{Time complexity of running OCR on Chameleon Cloud}
\end{figure}
\section{Reproducibility}
Instructions on reproduction of the project are provided in the project
repository \cite{git-self-code}.
\section{Future Work}
For improving the current OCR system, various kinds of sophisticated structural features
can be added to feature extraction. It can also be extended to operate on languages other
than English. In order to improve the performance, the power of multiple cores/VMs can be
used for parallelization, so that after segmentation, each VM will operate on one line
of the text. Thus, multiple lines can be processed at the same time. For improving the
recognition accuracy, a lexicon can be used to evaluate the validity of the words that
have been formed by OCR, and then possibly correct them to the closest word in the lexicon.
\section{ACKNOWLEDGEMENT}
A very special thanks to Professor Gregor von Laszewski and the
teaching assistants Miao Zhang and Dimitar Nikolov for all the support
and guidance. This project proposal is written during the spring 2017
semester course {I524: Big Data and Open Source Software Projects} at
Indiana University Bloomington.
\section*{AUTHOR BIOGRAPHIES}
\begingroup
\setlength\intextsep{0pt}
\begin{minipage}[t][3.2cm][t]{1.0\columnwidth} % Adjust height [3.2cm] as required for separation of bio photos.
\begin{wrapfigure}{L}{0.25\columnwidth}
\includegraphics[width=0.25\columnwidth]{images/saber.jpg}
\end{wrapfigure}
\noindent
{\bfseries Saber Sheybani} received his B.S. (Electrical Engineering -
Minor in Control Engineering) from University of Tehran. He is currently
a PhD student of Intelligent Systems Engineering - Neuroengineering at
Indiana University Bloomington.
\end{minipage}
\begin{minipage}[t][3.2cm][t]{1.0\columnwidth} % Adjust height [3.2cm] as required for separation of bio photos.
\begin{wrapfigure}{L}{0.25\columnwidth}
\includegraphics[width=0.25\columnwidth]{images/sushmita.png}
\end{wrapfigure}
\noindent
{\bfseries Sushmita Sivaprasad} is a graduate student in Data Science at
Indiana University under the department of Informatics and
Computing. She had completed her bachelors in Electronics and
Communication from SRM University , India and her master's in
International Business from Hult International Business School, UAE.
\end{minipage}
\endgroup
\section{WORK BREAKDOWN}
The following was the work distribution followed for the project,
\begin{itemize}
\item {\bfseries Sushmita Sivaprasad} : Researched on the background of implementing
the ocr technology, briefed on the system configurations and the
technologies that can be used. She implemented the preprocessing steps
to remove the noise and inaccuracies in the image file before
implementing the K means algorithm. Contributed to the writing of the
report.
\item {\bfseries Saber Sheybani} : Implemented the cloud deployment on
chameleon cloud and has done the benchmarking of the on the chameleon
cloud. Contributed to the writing of the report.
\end{itemize}
\bibliography{references}
\end{document}
| {
"alphanum_fraction": 0.7960892782,
"avg_line_length": 49.1150943396,
"ext": "tex",
"hexsha": "bb408ad28fdbbd5a84d241c3b1685b089b6ffda5",
"lang": "TeX",
"max_forks_count": 294,
"max_forks_repo_forks_event_max_datetime": "2018-07-13T01:32:24.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-01-09T13:18:39.000Z",
"max_forks_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "cloudmesh/sp17-i524",
"max_forks_repo_path": "project/S17-IR-P012/report/report.tex",
"max_issues_count": 98,
"max_issues_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412",
"max_issues_repo_issues_event_max_datetime": "2017-10-27T11:30:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-01-19T04:24:02.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "cloudmesh/sp17-i524",
"max_issues_repo_path": "project/S17-IR-P012/report/report.tex",
"max_line_length": 498,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "suunni/sp17-i524",
"max_stars_repo_path": "project/S17-IR-P012/report/report.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-14T19:13:18.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-10-30T09:54:25.000Z",
"num_tokens": 5894,
"size": 26031
} |
\documentclass[12pt,twoside]{article}
\newcommand{\reporttitle}{Title of course}
\newcommand{\reportauthor}{Your Name}
\newcommand{\reporttype}{Coursework}
\newcommand{\cid}{your college-id number}
% include files that load packages and define macros
\input{includes} % various packages needed for maths etc.
\input{notation} % short-hand notation and macros
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
% front page
\input{titlepage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%% Main document
\section{Introduction}
This is a template for coursework submission. Many macros and definitions can be found in \texttt{notation.tex}. This document is not an introduction to LaTeX. General advice if you get stuck: use your favorite search engine. A great source is also \mbox{\url{https://en.wikibooks.org/wiki/LaTeX}}.
\section{Basics}
\subsection{Figures}
A figure can be included as follows:
\begin{figure}[tb]
\centering % this centers the figure
\includegraphics[width = 0.7\hsize]{./figures/usc} % this includes the figure and specifies that it should span 0.7 times the horizontal size of the page
\caption{This is a figure.} % caption of the figure
\label{fig:usc} % a label. When we refer to this label from the text, the figure number is included automatically
\end{figure}
Fig.~\ref{fig:usc} shows the USC logo.
Some guidelines:
\begin{itemize}
\item Always use vector graphics (scale free)
\item In graphs, label the axes
\item Make sure the font size (labels, axes) is sufficiently large
\item When using colors, avoid red and green together (color blindness)
\item Use different line styles (solid, dashed, dotted etc.) and different markers to make it easier to distinguish between lines
\end{itemize}
\subsection{Notation}
\begin{table}[tb]
\caption{Notation}
\label{tab:notation}
\centering
\begin{tabular}{ll}
Scalars & $x$\\
Vectors & $\vec x$\\
Matrices & $\mat X$\\
Transpose & $\T$\\
Inverse & $\inv$\\
Real numbers & $\R$\\
Expected values & $\E$\\
\end{tabular}
\end{table}
Table~\ref{tab:notation} lists some notation with some useful shortcuts (see latex source code).
\subsubsection{Equations}
Here are a few guidelines regarding equations
\begin{itemize}
\item Please use the \texttt{align} environment for equations (\texttt{eqnarray} is buggy)
\item Please number all equations: It will make things easier when we need to refer to equation numbers. If you always use the \texttt{align} environment, equations are numbered by default.
\item Vectors are by default column vectors, and we write
\begin{align}
\vec x &= \colvec{1,2}
\end{align}
\item Note that the same macro (\texttt{$\backslash$colvec}) can produce vectors of variable lengths, as
\begin{align}
\vec y &= \colvec{1,2,3,4}
\end{align}
\item Matrices can be created with the same command. The \& switches to the next column:
\begin{align}
\mat A = \begin{bmatrix}
1 & 2 & 3\\
3 & 4 & 5
\end{bmatrix}
\end{align}
\item Determinants. We provide a simple macro (\texttt{$\backslash$matdet}) whose argument is just a matrix array:
\begin{align}
\matdet{
1 & 2 & 3\\
3 & 4 & 5\\
2 & 2 & 2
}
\end{align}
\item If you do longer manipulations, please explain what you are doing: Try to avoid long sequences of equations without any text breaking them up. Here is an example:
We consider
\begin{align}
U_1 = [\colvec{1,1,0,0},\, \colvec{0,1,1,0},\, \colvec{0,0,1,1}]
\subset\R^4, \quad
U_2 = [\colvec{-1,1,2,0},\, \colvec{0,1,0,0}]
\subset\R^4\,.
\end{align}
To find a basis of $U_1\cap U_2$, we need to find all $\vec x \in V$ that can be represented as linear combinations of the basis vectors of $U_1$ and $U_2$, i.e.,
\begin{align}
\sum_{i=1}^3 \lambda_i \vec b_i = \vec x = \sum_{j=1}^2 \psi_j \vec c_j\,,
\end{align}
where $\vec b_i$ and $\vec c_j$ are the basis vectors of $U_1$ and $U_2$, respectively.
%
The matrix $\mat A = [\vec b_1|\vec b_2|\vec b_3| -\vec c_1|-\vec
c_2]$ is given as
\begin{align}
\mat A =
\begin{bmatrix}
1 & 0 & 0 & 1 & 0\\
1 & 1 & 0 & -1 & -1\\
0 & 1 & 1 & -2 & 0\\
0 & 0 & 1 & 0 & 0
\end{bmatrix}\,.
\end{align}
By using Gaussian elimination, we determine the corresponding reduced row echelon form
\begin{align}
\begin{bmatrix}
1 & 0 & 0 & 1& 0\\
0 & 1 & 0 & -2 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\,.
\end{align}
We keep in mind that we are interested in finding $\lambda_1,\lambda_2,\lambda_3\in\R$ and/or $\psi_1,\psi_2\in\R$ with
\begin{align}
\begin{bmatrix}
1 & 0 & 0 & 1& 0\\
0 & 1 & 0 & -2 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\colvec{\lambda_1, \lambda_2, \lambda_3, \psi_1, \psi_2}
=\vec 0\,.
\end{align}
From here, we can immediately see that $\psi_2=0$ and $\psi_1\in\R$ is
a free variable since it corresponds to a non-pivot column, and our solution is
\begin{align}
U_1\cap U_2 = \psi_1\vec c_1 = [ \colvec{-1,1,2,0} ]
\,, \quad \psi_1\in\R\,.
\end{align}
\end{itemize}
\subsection{Gaussian elimination}
We provide a template for Gaussian elimination. It is not perfect, but it may be useful:
\begin{elimination}[6]{5}{8mm}{1}
\eliminationstep
{
1 & - 2 & 1 & -1 & 1 & 0\\
0 & 0 & -1 & 1 & -3 & 2\\
0 & 0 & 0 & -3 & 6 & -3\\
0 & 0 & -1 & -2 & 3 & a
}
{
\\
\\
\\
-R_2
}
\\
\eliminationstep
{
1 & - 2 & 1 & -1 & 1 & 0\\
0 & 0 & -1 & 1 & -3 & 2\\
0 & 0 & 0 & -3 & 6 & -3\\
0 & 0 & 0 & -3 & 6 & a-2
}
{
\\
\\
\\
-R_3
}\\
\eliminationstep{
1 & - 2 & 1 & -1 & 1 & 0\\
0 & 0 & -1 & 1 & -3 & 2\\
0 & 0 & 0 & -3 & 6 & -3\\
0 & 0 & 0 & 0 & 0 & a+1
}
{
\\
\cdot (-1)\\
\cdot (-\tfrac{1}{3})\\
\\}
\\
\eliminationstep{
1 & - 2 & 1 & -1 & 1 & 0\\
0 & 0 & 1 & -1 & 3 & -2\\
0 & 0 & 0 & 1 & -2 & 1\\
0 & 0 & 0 & 0 & 0 & a+1
}{}
\end{elimination}
The arguments of this environment are:
\begin{enumerate}
\item Number of columns (in the augmented matrix)
\item Number of free variables (equals the number of columns after which the vertical line is drawn)
\item Column width
\item Stretch factor, which can stretch the rows further apart.
\end{enumerate}
\newpage
\section{Answer Template}
\begin{enumerate}[1)]
\item Discrete models
\begin{enumerate}[a)]
\addtocounter{enumii}{2} % change to enumi if you use sections rather than enumerate for question numbers
\item
\item
\item
\end{enumerate}
\item Differentiation
\begin{enumerate}[a)]
\item
\item
\addtocounter{enumii}{1}
\item
\item
\end{enumerate}
\item Continuous Models
\begin{enumerate}[a)]
\item
\item
\item
\item
\item
\item
\item
\end{enumerate}
\item Linear Regression
\begin{enumerate}[a)]
\item
\item
\item
\item
\end{enumerate}
\item Ridge Regression
\begin{enumerate}[a)]
\item
\item
\item
\begin{enumerate}[i)]
\item
\item
\end{enumerate}
\end{enumerate}
\item Bayesian Linear Regression
\begin{enumerate}[a)]
\addtocounter{enumii}{1}
\item
\item
\item
\item (bonus)
\end{enumerate}
\end{enumerate}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
| {
"alphanum_fraction": 0.6595775463,
"avg_line_length": 23.1946308725,
"ext": "tex",
"hexsha": "df486807fdf39bbd09ddeaa1768962a65c7413d8",
"lang": "TeX",
"max_forks_count": 9,
"max_forks_repo_forks_event_max_datetime": "2022-01-31T00:32:31.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-08-29T14:33:37.000Z",
"max_forks_repo_head_hexsha": "82f06614d9c2e2441e46b2bd0a14067cb35abade",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "pooyanjamshidi/pmls",
"max_forks_repo_path": "resources/coursework-template.gz/coursework.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "82f06614d9c2e2441e46b2bd0a14067cb35abade",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "pooyanjamshidi/pmls",
"max_issues_repo_path": "resources/coursework-template.gz/coursework.tex",
"max_line_length": 294,
"max_stars_count": 34,
"max_stars_repo_head_hexsha": "82f06614d9c2e2441e46b2bd0a14067cb35abade",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "pooyanjamshidi/pmls",
"max_stars_repo_path": "resources/coursework-template.gz/coursework.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T16:11:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-08-19T17:55:19.000Z",
"num_tokens": 2413,
"size": 6912
} |
\section{Extensions of the Base Logic}
In this section we discuss some additional constructions that we define within and on top of the base logic.
These are not ``extensions'' in the sense that they change the proof power of the logic, they just form useful derived principles.
\subsection{Derived Rules about Base Connectives}
We collect here some important and frequently used derived proof rules.
\begin{mathparpagebreakable}
\infer{}
{\prop \Ra \propB \proves \prop \wand \propB}
\infer{}
{\prop * \Exists\var.\propB \provesIff \Exists\var. \prop * \propB}
\infer{}
{\prop * \All\var.\propB \proves \All\var. \prop * \propB}
\end{mathparpagebreakable}
Verifying that existential quantifiers commute with separating conjunction requires an intermediate step using a magic wand: the entailment $P * \Exists x. Q \vdash \Exists x. P * Q$ is, by the adjunction between $*$ and $\wand$, equivalent to $\Exists x. Q \vdash P \wand \Exists x. P * Q$, which can then be proved via $\exists$-elimination.
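Spelled out (a sketch using only standard rules for $\exists$, $*$ and $\wand$): for each $x$, $\exists$-introduction gives the first entailment below; the adjunction between $*$ and $\wand$ turns it into the second; $\exists$-elimination yields the third; and one more application of the adjunction gives the desired entailment.
\begin{mathpar}
P * Q \vdash \Exists x. P * Q
\and
Q \vdash P \wand \Exists x. P * Q
\and
\Exists x. Q \vdash P \wand \Exists x. P * Q
\and
P * \Exists x. Q \vdash \Exists x. P * Q
\end{mathpar}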
\subsection{Derived Rules about Modalities}
Iris comes with 4 built-in modalities ($\always$, $\plainly$, $\upd$ and $\later$) and, as we will see, plenty of derived modalities.
However, almost all of them fall into one of two categories (except for $\later$, as we will see): they are either \emph{always-style} modalities (``something holds in all/many (future) worlds'') or \emph{eventually-style} modalities (``something holds in a possible (future) world'').
\emph{Eventually-style modalities} are characterized by being easy to ``add''/introduce, but hard to ``remove''/eliminate.
Consider, for example, the basic update modality $\upd$:
we have $\prop \proves \upd\prop$ (\ruleref{upd-intro}), but the inverse direction does not hold.
Instead, from \ruleref{upd-mono} and \ruleref{upd-trans}, we can derive the following elimination principle:
\begin{mathpar}
\infer[upd-E]
{\prop \proves \upd\propB}
{\upd\prop \proves \upd\propB}
\end{mathpar}
In other words, we can remove an $\upd$ in front of an assumption \emph{if} the goal is itself wrapped in $\upd$.
Another way to view this rule is to think of it as a \emph{bind rule}.
Indeed, together with \ruleref{upd-intro}, this rule shows that $\upd$ forms a monad.
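Concretely, a sketch of that derivation: given the premise $\prop \proves \upd\propB$, we chain
\[ \upd\prop \proves \upd\upd\propB \proves \upd\propB, \]
where the first step is \ruleref{upd-mono} applied to the premise and the second step is \ruleref{upd-trans}.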
\emph{Always-style modalities}, on the other hand, are easy to ``remove''/eliminate, but hard to ``add''/introduce.
The most widely used example of that in Iris is the persistence modality $\always$:
we have $\always\prop \proves \prop$ (\ruleref{pers-elim}), but the inverse direction does not hold.
Instead, from \ruleref{pers-mono} and $\always{\prop} \proves \always\always\prop$, we can derive the following introduction principle:
\begin{mathpar}
\infer[$\always$-I]
{\always\prop \proves \propB}
{\always\prop \proves \always\propB}
\end{mathpar}
In other words, we can remove an $\always$ from the goal \emph{if} all our assumptions are wrapped in $\always$.
This matches the algebraic structure of a comonad.
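Dually, a sketch of the derivation of this introduction principle: \ruleref{pers-mono} applied to the premise $\always\prop \proves \propB$ yields $\always\always\prop \proves \always\propB$, and precomposing with $\always\prop \proves \always\always\prop$ gives
\[ \always\prop \proves \always\always\prop \proves \always\propB. \]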
In particular, both eventually-style and always-style modalities are \emph{idempotent}: we have $\upd\upd\prop \provesIff \upd\prop$ and $\always\always\prop \provesIff \always\prop$.
Beyond this, all modalities come with plenty of rules that show how they commute around other connectives and modalities.
And, of course, they come with a few ``defining rules'' that give the modalities their individual meaning, \ie for the update modality, that would be \ruleref{upd-update}.
In the following, we briefly discuss each of the modalities.
\paragraph{Update modality.}
As already mentioned, the update modality is an eventually-style modality:
\begin{mathpar}
\inferhref{upd-E}{upd-elim}
{\prop \proves \upd\propB}
{\upd\prop \proves \upd\propB}
\inferH{upd-idemp}
{}{\upd\upd\prop \provesIff \upd\prop}
\end{mathpar}
Beyond this (and the obvious variant of \ruleref{upd-frame} that exploits commutativity of separating conjunction), there are no outstandingly interesting derived rules.
\paragraph{Persistence modality.}
As already mentioned, the persistence modality is an always-style modality:
\begin{mathpar}
\inferhref{$\always$-I}{pers-intro}
{\always\prop \proves \propB}
{\always\prop \proves \always\propB}
\inferhref{$\always$-idemp}{pers-idemp}
{}{\always\always\prop \provesIff \always\prop}
\end{mathpar}
Some further interesting derived rules include:
\begin{mathparpagebreakable}
\infer{}
{\always(\prop\land\propB) \provesIff \always\prop \land \always\propB}
\infer{}
{\always(\prop\lor\propB) \provesIff \always\prop \lor \always\propB}
\infer{}
{\always\TRUE \provesIff \TRUE}
\infer{}
{\always\FALSE \provesIff \FALSE}
\\
\infer{}
{\always(\prop*\propB) \provesIff \always\prop * \always\propB}
\infer{}
{\always\prop*\propB \provesIff \always\prop \land \propB}
\infer{}
{\always(\prop \wand \propB) \provesIff \always(\prop \Ra \propB)}
\\
\infer{}
{\always(\prop \Ra \propB) \proves \always\prop \Ra \always\propB}
\infer{}
{\always(\prop \wand \propB) \proves \always\prop \wand \always\propB}
\end{mathparpagebreakable}
In particular, the persistence modality commutes around conjunction, disjunction, separating conjunction as well as universal and existential quantification.
Commuting around conjunction can be derived from the primitive rule that says it commutes around universal quantification (since conjunction is equivalent to a universal quantification over a Boolean), and similarly for disjunction.
$\TRUE \provesIff \always\TRUE$ (which is basically persistence ``commuting around'' the nullary operator $\TRUE$) can be derived via $\always$ commuting with universal quantification ranging over the empty type.
A similar rule holds for $\FALSE$.
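As an illustration of the empty-type argument, here is a sketch (writing $\emptyset$ for the empty type and using that $\TRUE \provesIff \All\var:\emptyset.\FALSE$ holds vacuously):
\[ \TRUE \proves \All\var:\emptyset. \always\FALSE \proves \always\All\var:\emptyset. \FALSE \provesIff \always\TRUE, \]
where the middle step is the primitive rule commuting $\always$ around universal quantification; the converse direction $\always\TRUE \proves \TRUE$ is just \ruleref{pers-elim}.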
Moreover, if (at least) one conjunct is below the persistence modality, then conjunction and separating conjunction coincide.
\paragraph{Plainness modality.}
The plainness modality is very similar to the persistence modality (in fact, we have $\plainly\prop \proves \always\prop$, but the inverse does not hold).
It is always-style:
\begin{mathpar}
\infer[$\plainly$-I]
{\plainly\prop \proves \propB}
{\plainly\prop \proves \plainly\propB}
\infer{}{\plainly\plainly\prop \provesIff \plainly\prop}
\end{mathpar}
It also commutes around separating conjunction, conjunction, disjunction, universal and existential quantification (and $\TRUE$ and $\FALSE$).
The key difference to the persistence modality $\always$ is that $\plainly$ provides a \emph{propositional extensionality} principle:
\[ \plainly ((\prop \Ra \propB) \land (\propB \Ra \prop)) \proves \prop =_{\Prop} \propB \]
In contrast, $\always$ permits using some forms of ghost state ($\ownM\melt \proves \always{\ownM{\mcore\melt}}$).
Having both of these principles for the same modality would lead to a contradiction:
imagine we have an RA with elements $\melt$, $\meltB$ such that $\mcore\melt$ is incompatible with $\meltB$ (\ie $\neg\mvalFull(\mcore\melt \mtimes \meltB)$).
Then we can prove:
\[
\ownM{\mcore\melt} \proves
\always\ownM{\mcore\melt} \proves
\always ( ( \FALSE \Ra \ownM\meltB ) \land ( \ownM\meltB \Ra \FALSE ) )
\]
The first implication is trivial; the second follows because $\always\ownM{\mcore\melt} \land \ownM\meltB \proves \ownM{\mcore\melt} * \ownM\meltB \proves \mval(\mcore\melt \mtimes \meltB)$.
But now, if we had propositional extensionality for $\always$ the way we do for $\plainly$, we could deduce $\FALSE =_{\Prop} \ownM\meltB$, and that is clearly wrong.
This issue arises because $\always$, as we have seen, still lets us use some resources from the context, while propositional equality has to hold completely disregarding current resources.
\paragraph{Later modality.}
The later modality is the ``odd one out'' in the sense that it is neither eventually-style nor always-style, because it is not idempotent:%
\footnote{This means $\later$ is neither a monad nor a comonad---it does form an applicative functor, though.}
with $\later$, the number of times the modality is applied matters, and we can get rid of \emph{exactly one} layer of $\later$ in the assumptions only by doing the same in the conclusion (\ruleref{later-mono}).
Some derived rules:
\begin{mathparpagebreakable}
\inferhref{L{\"o}b}{Loeb}
{}
{(\later\prop\Ra\prop) \proves \prop}
\infer{}
{\later(\prop \Ra \propB) \proves \later\prop \Ra \later\propB}
\infer{}
{\later(\prop \wand \propB) \proves \later\prop \wand \later\propB}
\\
\infer{}
{\later(\prop\land\propB) \provesIff \later\prop \land \later\propB}
\infer{}
{\later(\prop\lor\propB) \provesIff \later\prop \lor \later\propB}
\infer{\text{$\type$ is inhabited}}
{\later(\Exists x:\type. \prop) \provesIff \Exists x:\type. \later\prop}
\infer{}
{\later\TRUE \provesIff \TRUE}
\infer{}
{\later(\prop*\propB) \provesIff \later\prop * \later\propB}
\infer{}
{\later\always\prop \provesIff \always\later\prop}
\infer{}
{\later\plainly\prop \provesIff \plainly\later\prop}
\end{mathparpagebreakable}
Noteworthy here is the fact that Löb induction (\ruleref{Loeb}) can be derived from $\later$-introduction and the fact that we can take fixed-points of functions where the recursive occurrences are below $\later$~\cite{Loeb}.%
\footnote{Also see \url{https://en.wikipedia.org/wiki/L\%C3\%B6b\%27s_theorem}.}
Also, $\later$ commutes over separating conjunction, conjunction, disjunction, universal quantification and \emph{non-empty} existential quantification, as well as both the persistence and the plainness modality.
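For the interested reader, here is a condensed sketch of that derivation of \ruleref{Loeb}; it assumes guarded fixed points $\mu\var.\,\cdot$ (recursive occurrences of $\var$ under $\later$) with the usual unfolding law, plus the $\later$ rules listed above.
Let $\propB \eqdef \mu\var. (\later\var \Ra \prop)$, so that $\propB \provesIff (\later\propB \Ra \prop)$.
First, $\later\propB \proves \later(\later\propB \Ra \prop) \land \later\later\propB \proves \later\prop$, using \ruleref{later-mono} on the unfolding, $\later$-introduction under \ruleref{later-mono}, and the rule commuting $\later$ with $\Ra$.
Consequently, $(\later\prop \Ra \prop) \land \later\propB \proves (\later\prop \Ra \prop) \land \later\prop \proves \prop$, so by $\Ra$-introduction and folding, $(\later\prop \Ra \prop) \proves \propB$.
Finally, using $\later$-introduction once more,
\[ (\later\prop \Ra \prop) \proves \propB \land \later\propB \proves (\later\propB \Ra \prop) \land \later\propB \proves \prop. \]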
\subsection{Persistent Propositions}
We call a proposition $\prop$ \emph{persistent} if $\prop \proves \always\prop$.
These are propositions that ``do not own anything'', so we can (and will) treat them like ``normal'' intuitionistic propositions.
Of course, $\always\prop$ is persistent for any $\prop$.
Furthermore, by the proof rules given in \Sref{sec:proof-rules}, $\TRUE$, $\FALSE$, $t = t'$ as well as $\ownGhost\gname{\mcore\melt}$ and $\mval(\melt)$ are persistent.
Persistence is preserved by conjunction, disjunction, separating conjunction as well as universal and existential quantification and $\later$.
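As a small worked instance, persistence of $\ownM{\mcore\melt}$ (and hence of $\ownGhost\gname{\mcore\melt}$) can be sketched using the rule $\ownM\melt \proves \always\ownM{\mcore\melt}$ mentioned above together with idempotence of the core, $\mcore{\mcore\melt} = \mcore\melt$:
\[ \ownM{\mcore\melt} \proves \always\ownM{\mcore{\mcore\melt}} \proves \always\ownM{\mcore\melt}. \]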
\subsection{Timeless Propositions and Except-0}
One of the troubles of working in a step-indexed logic is the ``later'' modality $\later$.
It turns out that we can somewhat mitigate this trouble by working below the following \emph{except-0} modality:
\[ \diamond \prop \eqdef \later\FALSE \lor \prop \]
Except-0 satisfies the usual laws of a ``monadic'' modality (similar to, \eg the update modalities):
\begin{mathpar}
\inferH{ex0-mono}
{\prop \proves \propB}
{\diamond\prop \proves \diamond\propB}
\axiomH{ex0-intro}
{\prop \proves \diamond\prop}
\axiomH{ex0-idem}
{\diamond\diamond\prop \proves \diamond\prop}
\begin{array}[c]{rMcMl}
\diamond{(\prop * \propB)} &\provesIff& \diamond\prop * \diamond\propB \\
\diamond{(\prop \land \propB)} &\provesIff& \diamond\prop \land \diamond\propB \\
\diamond{(\prop \lor \propB)} &\provesIff& \diamond\prop \lor \diamond\propB
\end{array}
\begin{array}[c]{rMcMl}
\diamond{\All x. \prop} &\provesIff& \All x. \diamond{\prop} \\
\diamond{\Exists x. \prop} &\provesIff& \Exists x. \diamond{\prop} \\
\diamond\always{\prop} &\provesIff& \always\diamond{\prop} \\
\diamond\later\prop &\proves& \later{\prop}
\end{array}
\end{mathpar}
In particular, from \ruleref{ex0-mono} and \ruleref{ex0-idem} we can derive a ``bind''-like elimination rule:
\begin{mathpar}
\inferH{ex0-elim}
{\prop \proves \diamond\propB}
{\diamond\prop \proves \diamond\propB}
\end{mathpar}
This modality is useful because there is a class of propositions which we call \emph{timeless} propositions, for which we have
\[ \timeless{\prop} \eqdef \later\prop \proves \diamond\prop \]
In other words, when working below the except-0 modality, we can \emph{strip
away} the later from timeless propositions (using \ruleref{ex0-elim}):
\begin{mathpar}
\inferH{ex0-timeless-strip}{\timeless{\prop} \and \prop \proves \diamond\propB}
{\later\prop \proves \diamond\propB}
\end{mathpar}
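A sketch of the derivation: \ruleref{ex0-elim} applied to the premise $\prop \proves \diamond\propB$ gives $\diamond\prop \proves \diamond\propB$, and composing with the timelessness assumption $\later\prop \proves \diamond\prop$ yields
\[ \later\prop \proves \diamond\prop \proves \diamond\propB. \]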
In fact, it turns out that we can strip away later from timeless propositions even when working under the later modality:
\begin{mathpar}
\inferH{later-timeless-strip}{\timeless{\prop} \and \prop \proves \later \propB}
{\later\prop \proves \later\propB}
\end{mathpar}
This follows from $\later \prop \proves \later\FALSE \lor \prop$, and then by straightforward disjunction elimination.
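Spelled out as a sketch:
\[ \later\prop \proves \later\FALSE \lor \prop \proves \later\propB \lor \later\propB \proves \later\propB, \]
where the first step is the timelessness assumption, the $\later\FALSE$ disjunct is handled using $\FALSE \proves \propB$ and \ruleref{later-mono}, and the other disjunct uses the premise $\prop \proves \later\propB$.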
The following rules identify the class of timeless propositions:
\begin{mathparpagebreakable}
\infer
{\vctx \proves \timeless{\prop} \and \vctx \proves \timeless{\propB}}
{\vctx \proves \timeless{\prop \land \propB}}
\infer
{\vctx \proves \timeless{\prop} \and \vctx \proves \timeless{\propB}}
{\vctx \proves \timeless{\prop \lor \propB}}
\infer
{\vctx \proves \timeless{\prop} \and \vctx \proves \timeless{\propB}}
{\vctx \proves \timeless{\prop * \propB}}
\infer
{\vctx \proves \timeless{\prop}}
{\vctx \proves \timeless{\always\prop}}
\infer
{\vctx \proves \timeless{\propB}}
{\vctx \proves \timeless{\prop \Ra \propB}}
\infer
{\vctx \proves \timeless{\propB}}
{\vctx \proves \timeless{\prop \wand \propB}}
\infer
{\vctx,\var:\type \proves \timeless{\prop}}
{\vctx \proves \timeless{\All\var:\type.\prop}}
\infer
{\vctx,\var:\type \proves \timeless{\prop}}
{\vctx \proves \timeless{\Exists\var:\type.\prop}}
\axiom{\timeless{\TRUE}}
\axiom{\timeless{\FALSE}}
\infer
{\text{$\term$ or $\term'$ is a discrete OFE element}}
{\timeless{\term =_\type \term'}}
\infer
{\text{$\melt$ is a discrete OFE element}}
{\timeless{\ownM\melt}}
\infer
{\text{$\melt$ is an element of a discrete camera}}
{\timeless{\mval(\melt)}}
\end{mathparpagebreakable}
\subsection{Dynamic Composable Higher-Order Resources}
\label{sec:composeable-resources}
The base logic described in \Sref{sec:base-logic} works over an arbitrary camera $\monoid$ defining the structure of the resources.
It turns out that we can generalize this further and permit picking cameras ``$\iFunc(\Prop)$'' that depend on the structure of propositions themselves.
Of course, $\Prop$ is just the syntactic type of propositions; for this to make sense we have to look at the semantics.
Furthermore, there is a composability problem with the given logic: if we have one proof performed with camera $\monoid_1$, and another proof carried out with a \emph{different} camera $\monoid_2$, then the two proofs are actually carried out in two \emph{entirely separate logics} and hence cannot be combined.
Finally, in many cases just having a single ``instance'' of a camera available for reasoning is not enough.
For example, when reasoning about a dynamically allocated data structure, every time a new instance of that data structure is created, we will want a fresh resource governing the state of this particular instance.
While it would be possible to handle this problem whenever it comes up, it turns out to be useful to provide a general solution.
The purpose of this section is to describe how we solve these issues.
\paragraph{Picking the resources.}
The key ingredient that we will employ on top of the base logic is to give some more fixed structure to the resources.
To instantiate the logic with dynamic higher-order ghost state, the user picks a family of locally contractive bifunctors $(\iFunc_i : \COFEs^\op \times \COFEs \to \CMRAs)_{i \in \mathcal{I}}$.
(This is in contrast to the base logic, where the user picks a single, fixed camera that has a unit.)
From this, we construct the bifunctor defining the overall resources as follows:
\begin{align*}
\GName \eqdef{}& \nat \\
\textdom{ResF}(\ofe^\op, \ofe) \eqdef{}& \prod_{i \in \mathcal I} \GName \fpfn \iFunc_i(\ofe^\op, \ofe)
\end{align*}
We will motivate both the use of a product and the finite partial function below.
$\textdom{ResF}(\ofe^\op, \ofe)$ is a camera by lifting the individual cameras pointwise, and it has a unit (using the empty finite partial function).
Furthermore, since the $\iFunc_i$ are locally contractive, so is $\textdom{ResF}$.
Now we can write down the recursive domain equation:
\[ \iPreProp \cong \UPred(\textdom{ResF}(\iPreProp, \iPreProp)) \]
Here, $\iPreProp$ is a COFE defined as the fixed point of a locally contractive bifunctor; such a fixed point exists and is unique up to isomorphism by \thmref{thm:america_rutten}, so we obtain an object $\iPreProp$ together with maps such that:
\begin{align*}
\Res &\eqdef \textdom{ResF}(\iPreProp, \iPreProp) \\
\iProp &\eqdef \UPred(\Res) \\
\wIso &: \iProp \nfn \iPreProp \\
\wIso^{-1} &: \iPreProp \nfn \iProp \\
\wIso(\wIso^{-1}(x)) &\eqdef x \\
\wIso^{-1}(\wIso(x)) &\eqdef x
\end{align*}
Now we can instantiate the base logic described in \Sref{sec:base-logic} with $\Res$ as the chosen camera:
\[ \Sem{\Prop} \eqdef \UPred(\Res) \]
We obtain that $\Sem{\Prop} = \iProp$.
Effectively, we just defined a way to instantiate the base logic with $\Res$ as the camera of resources, while providing a way for $\Res$ to depend on $\iPreProp$, which is isomorphic to $\Sem\Prop$.
We thus obtain all the rules of \Sref{sec:base-logic}.
Furthermore, we can use the maps $\wIso$ and $\wIso^{-1}$ \emph{in the logic} to convert between logical propositions $\Sem\Prop$ and the domain $\iPreProp$ used in the construction of $\Res$: from elements of $\iPreProp$, we can construct elements of $\Sem{\textlog M}$, which are the elements that can be owned in our logic.
\paragraph{Proof composability.}
To make our proofs composable, we \emph{generalize} them over the family of functors.
This is possible because we made $\Res$ a \emph{product} of all the cameras picked by the user, and because we can actually work with that product ``pointwise''.
So instead of picking a \emph{concrete} family, proofs assume they are given an \emph{arbitrary} family of functors, plus a proof that this family \emph{contains the functors they need}.
Composing two proofs is then merely a matter of conjoining the assumptions they make about the functors.
Since the logic is entirely parametric in the choice of functors, there is no trouble reasoning without full knowledge of the family of functors.
Only when the top-level proof is completed do we ``close'' it by picking a concrete family that contains exactly those functors the proof needs.
\paragraph{Dynamic resources.}
Finally, the use of finite partial functions lets us have as many instances of any camera as we could wish for:
Because there can only ever be finitely many instances already allocated, it is always possible to create a fresh instance with any desired (valid) starting state.
This is best demonstrated by giving some proof rules.
So let us first define the notion of ghost ownership that we use in this logic.
Assuming that the family of functors contains the functor $\Sigma_i$ at index $i$, and furthermore assuming that $\monoid_i = \Sigma_i(\iPreProp, \iPreProp)$, given some $\melt \in \monoid_i$ we define:
\[ \ownGhost\gname{\melt:\monoid_i} \eqdef \ownM{(\ldots, \emptyset, i:\mapsingleton \gname \melt, \emptyset, \ldots)} \]
This is ownership of the pair (element of the product over all the functors) that has the empty finite partial function in all components \emph{except for} the component corresponding to index $i$, where we own the element $\melt$ at index $\gname$ in the finite partial function.
We can show the following properties for this form of ownership:
\begin{mathparpagebreakable}
\inferH{res-alloc}{\text{$G$ infinite} \and \melt \in \mval_{M_i}}
{ \TRUE \proves \upd \Exists\gname\in G. \ownGhost\gname{\melt : M_i}
}
\and
\inferH{res-update}
{\melt \mupd_{M_i} B}
{\ownGhost\gname{\melt : M_i} \proves \upd \Exists \meltB\in B. \ownGhost\gname{\meltB : M_i}}
\inferH{res-empty}
{\text{$\munit$ is a unit of $M_i$}}
{\TRUE \proves \upd \ownGhost\gname\munit}
\axiomH{res-op}
{\ownGhost\gname{\melt : M_i} * \ownGhost\gname{\meltB : M_i} \provesIff \ownGhost\gname{\melt\mtimes\meltB : M_i}}
\axiomH{res-valid}
{\ownGhost\gname{\melt : M_i} \Ra \mval_{M_i}(\melt)}
\inferH{res-timeless}
{\text{$\melt$ is a discrete OFE element}}
{\timeless{\ownGhost\gname{\melt : M_i}}}
\end{mathparpagebreakable}
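As a sketch of why, for example, \ruleref{res-op} holds: composition in $\Res$ is pointwise, singleton maps at the same ghost name compose as $\mapsingleton \gname \melt \mtimes \mapsingleton \gname \meltB = \mapsingleton \gname {(\melt \mtimes \meltB)}$, and the empty map is a unit, so the rule boils down to the base-logic rule relating $\ownM{\melt \mtimes \meltB}$ and $\ownM\melt * \ownM\meltB$, applied to the tuples from the definition of ghost ownership above:
\[ \ownGhost\gname{\melt : M_i} * \ownGhost\gname{\meltB : M_i} \provesIff \ownM{(\ldots, \emptyset, i:\mapsingleton \gname {(\melt\mtimes\meltB)}, \emptyset, \ldots)} \provesIff \ownGhost\gname{\melt\mtimes\meltB : M_i}. \]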
Below, we will always work within (an instance of) the logic as described here.
Whenever a camera is used in a proof, we implicitly assume it to be available in the global family of functors.
We will typically leave the $M_i$ implicit when asserting ghost ownership, as the type of $\melt$ will be clear from the context.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "iris"
%%% End:
| {
"alphanum_fraction": 0.733997086,
"avg_line_length": 50.7142857143,
"ext": "tex",
"hexsha": "3d1ae27b4ead54cc55bf9239b6c34b392ca5f20f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5",
"max_forks_repo_licenses": [
"CC-BY-4.0",
"BSD-3-Clause"
],
"max_forks_repo_name": "SkySkimmer/iris",
"max_forks_repo_path": "tex/extended-logic.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0",
"BSD-3-Clause"
],
"max_issues_repo_name": "SkySkimmer/iris",
"max_issues_repo_path": "tex/extended-logic.tex",
"max_line_length": 399,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "186d9ece07e210e92be28eb0e1a42f5d5fe6f1b5",
"max_stars_repo_licenses": [
"CC-BY-4.0",
"BSD-3-Clause"
],
"max_stars_repo_name": "SkySkimmer/iris",
"max_stars_repo_path": "tex/extended-logic.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6020,
"size": 20590
} |
\section{Experience}
\runsubsection{Cengage Learning}
\descript{| \hspace{0.10cm} Software Engineering Intern}
\location{June 2019 - August 2019}
\vspace{10pt}
\begin{tightemize}
\item One of two software engineering summer interns working at the Boston HQ
\item Developed an internal account-management tool for the Platform Analytics team that ingested daily permission changes across platforms such as Google Analytics and Looker
\item Full-stack development: a React front end written in TypeScript, Snowflake, and Python, with Jenkins pipelines to both ingest data and build the application, all running on an EC2 instance
\item Development followed a strict agile process, using Jira and Confluence to manage and document stories
\end{tightemize}
\sectionsep
\runsubsection{University Of Mass. Amherst}
\descript{| \hspace{0.10cm} Undergraduate Course Assistant}
\location{September 2019 - Present}
\begin{tightemize}
\item Undergraduate course assistant for the introduction to algorithms course at UMass Amherst (locally known as CS311)
\item Responsible for leading discussion sections, answering questions on Piazza, and grading assignments on Gradescope
\end{tightemize}
\sectionsep
\section{Projects}
\runsubsection{Hackathon-Dashboard}
\descript{| \hspace{0.10cm} Open-source Hackathon tool}
\location{March 2019 - Present}
\begin{tightemize}
\item Worked on the tech team of HackUMass and HackHer to publish the hackathon management tool used by our two local hackathons as an open-source project, which is currently forked by numerous hackathons around the country
\item In charge of development operations, handling the tool's infrastructure in development, staging, and production
\item Written in Ruby on Rails; I have implemented Dockerization, proposed new database designs, and done full-stack work on the project
\end{tightemize}
\sectionsep
\runsubsection{Community Upliftment Program}
\descript{| \hspace{0.10cm} Non-profit Website Redesign}
\location{April 2019 - Present}
\begin{tightemize}
\item Non-profit in Springfield, MA that helps Somali refugees; it contacted us needing an online presence and technical solutions
\item As the project manager of a BuildUMass team, I kept in contact with the organization's administrators over the summer in order to lead a team in the fall to fully develop this project
\item Technologies currently planned include APIs such as Donorbox and a simple React/TypeScript website hosted on Heroku
\end{tightemize}
\sectionsep
\runsubsection{\href{https://Sltran.com}{Sltran.com}}
\descript{| \hspace{0.10cm} Personal Website to test new tech and tools}
\location{December 2018 - Present}
\begin{tightemize}
\item Work in progress
\item Uses Namecheap, GCP, React, TypeScript, and Go as the web development stack
\item Fun stuff: automated resume builds and uploads from resume changes using Jenkins
\item A blog is planned once I can set it up
\end{tightemize}
| {
"alphanum_fraction": 0.7895244216,
"avg_line_length": 55.5714285714,
"ext": "tex",
"hexsha": "7f0d50b15e0f5931e723809a4b3f8737487b2c40",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "bc10455e9c710368dabc1e4bde5170a25da21ac9",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Tran981/steen-resume",
"max_forks_repo_path": "resume/sections/mainbar.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "bc10455e9c710368dabc1e4bde5170a25da21ac9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Tran981/steen-resume",
"max_issues_repo_path": "resume/sections/mainbar.tex",
"max_line_length": 219,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "bc10455e9c710368dabc1e4bde5170a25da21ac9",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Tran981/steen-resume",
"max_stars_repo_path": "resume/sections/mainbar.tex",
"max_stars_repo_stars_event_max_datetime": "2020-09-06T01:40:27.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-10-11T02:36:04.000Z",
"num_tokens": 750,
"size": 3112
} |