\section{Discussion} \label{sect:prob-discussion}
In this section, we discuss potential applications, extensions, and connections of probabilistic property evaluation.

\subsection{Probabilistic equivalence checking}
\begin{figure}[t]
\centering
\includegraphics{fig/build/pec-miter.pdf}
\caption{A miter SPBN for probabilistic equivalence checking}
\label{fig:prob-PEC}
\end{figure}
Given two SPBNs, their equivalence checking can be naturally formulated under the PPE and MPPE frameworks, as depicted in~\cref{fig:prob-PEC}. The property network corresponds to a miter circuit that tests the difference between corresponding outputs of the two SPBNs, just as in the equivalence checking of deterministic designs. With the proposed framework, we can analyze the average (resp. maximum) probability that the two SPBNs are functionally different through PPE (resp. MPPE). We refer to the equivalence checking problem as \textit{probabilistic equivalence checking} (PEC) for the average-case analysis and \textit{maximum probabilistic equivalence checking} (MPEC) for the worst-case analysis. Since equivalence checking is widely encountered, our experiments will focus on PEC and MPEC to compare the strengths and weaknesses of the different solutions that we proposed in~\cref{sect:prob-solutions}.

\subsection{Prioritized output requirement}
For some applications, we may want to impose different criticality requirements on different output signals. Given an SPBN $G$ over primary inputs $X$, internal vertices $Y$, and auxiliary inputs $Z$, this output-prioritized version of MPPE is naturally expressible in terms of stochastic integer linear programming (SILP)~\cite{Schultz2003} as follows:
\begin{align*}
\max_X \enskip \mathbb{E}[\sum_{i=1}^n w_i o_i(X,Y,Z)] \enskip s.t. \enskip \pf,
\end{align*}
where $o_i \in V_O$ is an output of $G$, $|V_O|=n$, $w_i$ is the weight of $o_i$, $\mathbb{E}[\cdot]$ denotes the expected value, and $\pf$ is a set of linear inequalities derived from the CNF formula of $G$ through the standard translation from clauses to linear constraints. To illustrate, a clause $(x \lor \lnot y \lor z)$ in a CNF formula is transformed into the linear inequality $(x+1-y+z)\geq 1$, or equivalently $(x-y+z)\geq 0$, in $\pf$. Note that the worst-case formulation in~\cref{thm:prob-mppe-ssat} is a special case of the SILP formulation in which the expected value of the miter's output is maximized.

\subsection{Connection to approximate design analysis}
Approximate design analysis assesses the deviation between an approximate design and its exact counterpart in two scenarios: the worst case and the average case. For the worst-case analysis, integer linear programming (ILP) can be applied to find an input assignment that maximizes the number of deviating outputs. For the average-case analysis, model counting can be used to compute the number of input assignments that make the two designs produce different output responses. In both cases, our PPE framework can be applied to analyze an approximate design, which can be seen as a probabilistic design without random behavior. Conversely, if all random variables in a probabilistic design become deterministic, probabilistic design analysis degenerates to approximate design analysis (from SILP to ILP under the worst-case analysis, and from weighted model counting to model counting under the average-case analysis).
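To make this last degeneration explicit (the instantiation below is our own illustration, obtained directly from the SILP formulation of the previous subsection): if every auxiliary input in $Z$ is fixed to a constant, the expectation operator disappears and the SILP reduces to the ILP
\begin{align*}
\max_X \enskip \sum_{i=1}^n w_i o_i(X,Y) \enskip s.t. \enskip \pf,
\end{align*}
which, with unit weights $w_i = 1$, maximizes the number of deviating outputs and thus coincides with the ILP-based worst-case analysis of approximate designs described above.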
{ "alphanum_fraction": 0.7924246943, "avg_line_length": 51.5846153846, "ext": "tex", "hexsha": "4abaaaf681a3007dd0cdfb1c8c89cd87fcb1b725", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "nianzelee/PhD-Dissertation", "max_forks_repo_path": "paper/prob-design-eval/discussion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "nianzelee/PhD-Dissertation", "max_issues_repo_path": "paper/prob-design-eval/discussion.tex", "max_line_length": 117, "max_stars_count": 1, "max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "nianzelee/PhD-Dissertation", "max_stars_repo_path": "paper/prob-design-eval/discussion.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z", "num_tokens": 800, "size": 3353 }
% !TEX root = omar-thesis.tex \chapter{Simple Expression TLMs (seTLMs)}\label{chap:uetsms} In the remainder of this work, we will develop a system of \emph{typed literal macros (TLMs)}. Briefly, TLMs offer substantially greater syntactic flexibility as compared to typed term rewriting macros \emph{a la} Scala, while 1) guaranteeing that a segmentation can always be produced; 2) enforcing a prohibition on capture; 3) enforcing a strong form of context independence and 4) maintaining the ability to reason abstractly about types. We will establish these reasoning principles formally, ultimately in a system with an ML-style module system in Chapter \ref{chap:ptsms}. We will begin, however, in this chapter with a simpler calculus of expressions and types. The TLMs available in this calculus are called \emph{simple expression TLMs} (seTLMs). \section{Simple Expression TLMs By Example}\label{sec:tsms-by-example} We begin in this section with a ``tutorial-style'' introduction to seTLMs in VerseML. %In particular, we will define an seTLM for constructing values of the recursive labeled sum type \li{rx} that was defined in Figure \ref{fig:datatype-rx}. Sec. \ref{sec:tsms-minimal-formalism} then formally defines a reduced dialect of VerseML called $\miniVerseUE$. This will serve as a ``conceptually minimal'' core calculus of TLMs, in the style of the simply typed lambda calculus. %We conclude in Sec. \ref{sec:uetsms-discussion} \subsection{TLM Application}\label{sec:uetsms-usage} The following VerseML expression, drawn textually, is of \emph{TLM application} form. Here, a TLM named \li{#\dolla#rx} is applied to the \emph{generalized literal form} \li{/SURLA|T|G|CEURL/}: \begin{lstlisting}[numbers=none,mathescape=|] $rx /SURLA|T|G|CEURL/ \end{lstlisting} Generalized literal forms are left unparsed according to the context-free syntax of VerseML. Several other outer delimiters are also available, as summarized in Figure \ref{fig:literal-forms}. The client is free to choose any of these for use with any TLM, as long as the \emph{literal body} (shown in green above) satisfies the requirements stated in Figure \ref{fig:literal-forms}. For example, we could have equivalently written the example above as \li{#\dolla#rx `SURLA|T|G|CEURL`}. (In fact, this would have been convenient if we had wanted to express a regex containing forward slashes but not backticks.) It is only during the subsequent \emph{typed expansion} phase that the applied TLM parses the {body} of the literal form to generate a \emph{proto-expansion}. The language then \emph{validates} this proto-expansion according to criteria that we will describe in Sec. \ref{sec:uetsms-validation}. If proto-expansion validation succeeds, the language generates the \emph{final expansion} (or more concisely, simply the \emph{expansion}) of the TLM application. The behavior of the program is determined by its expansion. For example, the expansion of the TLM application above is equivalent to the following expression when the regex value constructors \li{Or} and \li{Str} are in scope: \begin{lstlisting}[numbers=none] Or(Str "SSTRAESTR", Or(Str "SSTRTESTR", Or(Str "SSTRGESTR", Str "SSTRCESTR"))) \end{lstlisting} To avoid the assumption that the variables \li{Or} and \li{Str} are in scope at the TLM application site, the expansion actually uses the explicit \li{fold} and \li{inj} operators, as described in Sec. \ref{sec:lists}. 
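For instance, for a smaller literal body like \li{G|C}, the constructor form \li{Or(Str "G", Str "C")} would desugar to something like the following sketch (our illustration, assuming that \li{rx} is a recursive labeled sum type, so that each constructor application \li{C(e)} abbreviates \li{fold(inj[C](e))}, and that pairs are written \li{(e1, e2)}; the precise operators are those described in Sec. \ref{sec:lists}):
\begin{lstlisting}[numbers=none]
(* sketch of the desugared form of Or(Str "G", Str "C") under the assumptions above *)
fold(inj[Or]((fold(inj[Str]("G")), fold(inj[Str]("C")))))
\end{lstlisting}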
In fact, the proto-expansion validation process enforces this notion of context independence -- we will return to proto-expansion validation below. (We will show how TLM parameters can reduce the awkwardness of this requirement in Chapter \ref{chap:ptsms}.) %The constructors above are those of the type \li{Rx} that was defined in Figure \ref{fig:datatype-rx}. % A number of literal forms, , are available in VerseML's concrete syntax. Any literal form can be used with any TLM, TLMs have access only to the literal bodies. Because TLMs do not extend the concrete syntax of the language directly, there cannot be syntactic conflicts between TLMs. %The form does not directly determine the expansion. \begin{figure} \begin{lstlisting} 'SURLbody cannot contain an apostropheEURL' `SURLbody cannot contain a backtickEURL` [SURLbody cannot contain unmatched square bracketsEURL] {|SURLbody cannot contain unmatched barred curly bracesEURL|} /SURLbody cannot contain a forward slashEURL/ \SURLbody cannot contain a backslashEURL\ \end{lstlisting} %SURL<tag>body includes enclosing tags</tag>EURL \caption[Generalized literal forms available in VerseML]{Generalized literal forms available for use in VerseML's textual syntax. The characters in green indicate the literal bodies and describe how the literal body is constrained by the form shown on that line. The Wyvern language defines additional forms, including whitespace-delimited forms \cite{TSLs} and multipart forms \cite{sac15}, but for simplicity we leave these out of VerseML.} \label{fig:literal-forms} \end{figure} \subsection{TLM Definitions}\label{sec:uetsms-definition} %The original expression, above, is statically rewritten to this expression. The definition of \lstinline{#\dolla#rx} takes the following form: \begin{lstlisting}[numbers=none,mathescape=|] syntax $rx at rx by static fn(b : body) -> parse_result(proto_expr) => (* regex literal parser here *) end \end{lstlisting} Every seTLM definition consists of a TLM name, here \li{#\dolla#rx}, a \emph{type annotation}, here \lstinline{at rx}, and a \emph{parse function} between \li{by} and \li{end}. TLM definitions follow standard scoping rules -- unless an \li{in} clause is provided, the definition is in scope until the end of the enclosing declaration (e.g. the enclosing function or module.) We will consider how TLM definitions are packaged into libraries in Chapter \ref{chap:static-eval}. All TLM names must begin with the dollar symbol (\li{#\dolla#}), which distinguishes them from variables. This is inspired by the Rust macro system, which uses post-fix exclamation points (\li{!}) to distinguish macro identifiers \cite{Rust/Macros}. The {parse function} is a \emph{static function} delegated responsibility over parsing the literal bodies of the literal forms to which the TLM is applied. Static functions, marked by the \li{static} keyword, are applied during the typed expansion process, so they cannot refer to the surrounding variable bindings (because those variables stand for dynamic values.) For now, we will simply assume that static functions are closed and do not themselves make use of TLMs (we will eliminate these impractical limitations in Chapter \ref{chap:static-eval}.) Every seTLM parse function must have type \li{body -> parse_result(proto_expr)}. The input type, \lstinline{body}, classifies encodings of literal {bodies}. 
In VerseML, literal bodies are sequences of characters, so it suffices to define \li{body} as an abbreviation for the \li{string} type, as shown in Figure \ref{fig:indexrange-and-parseresult}.\footnote{In languages where the surface syntax is not textual, \li{body} would have a different definition, but we leave explicit consideration of such languages as future work (see Sec. \ref{sec:future-work}.)} The return type is a labeled sum type, defined by applying the type function \li{parse_result} defined in Figure \ref{fig:indexrange-and-parseresult}, that distinguishes between parse errors and successful parses.\footnote{\li{parse_result} is defined as a type function because in Chapter \ref{chap:uptsms}, we will introduce pattern TLMs, which generate patterns rather than expressions.} Let us consider these two possibilities in turn. \begin{figure} \begin{lstlisting}[numbers=none] type body = string type segment = {startIdx : int, endIdx : int} (* inclusive *) type parse_result('a) = ParseError of { msg : string, loc : segment } + Success of 'a \end{lstlisting} \caption[Definitions of \li{body}, \li{segment} and \li{parse_result} in VerseML]{Definitions of \li{body}, \li{segment} and \li{parse_result}. These type definitions are given in the VerseML \emph{prelude}, which is a small collection of definitions available ambiently.} \label{fig:indexrange-and-parseresult} \end{figure} \paragraph{Parse Errors} If the parse function determines that the literal body is not well-formed (according to whatever syntax definition that it implements), it returns: \begin{lstlisting}[numbers=none] inj[ParseError]({msg=#$e_\text{msg}$#, loc=#$e_\text{loc}$#}) \end{lstlisting} where $e_\text{msg}$ is an error message and $e_\text{loc}$ is a value of type \li{segment}, defined in Figure \ref{fig:indexrange-and-parseresult}, that designates a segment of the literal body as the location of the error \cite{DBLP:journals/jsc/DeursenKT93}. This information is for use by VerseML compilers when reporting the error to the programmer. % In languages where the surface syntax is not textual, the definition of \li{loc} would need to designate \paragraph{Successful Parses} If parsing succeeds, the parse function returns \begin{lstlisting}[numbers=none] inj[Success](#$\ecand$#) \end{lstlisting} where $\ecand$ is called the \emph{encoding of the proto-expansion}. For expression TLMs, proto-expansions are \emph{proto-expressions}, which are encoded as VerseML values of the type \lstinline{proto_expr} defined in Figure \ref{fig:candidate-exp-verseml}. Most of the variants defined by \li{proto_expr} are individually uninteresting -- they encode VerseML's various expression forms (just as in a compiler, c.f. SML/NJ's Visible Compiler library \cite{SML/VisibleCompiler}.) Expressions can mention types, so we also need to define a type \li{proto_typ} in Figure \ref{fig:candidate-exp-verseml}. As we enrich our language in later chapters, we will need to define more encodings like these, for other sorts of trees. The only non-standard classes are \li{SplicedT} and \li{SplicedE} -- these are \emph{references to spliced unexpanded types and expressions}, which we will return to when we consider splicing in Sec. \ref{sec:splicing-and-hygiene} below. The definitions of \li{proto_typ} and \li{proto_expr} are recursive labeled sum types to simplify our exposition, but we could have chosen alternative encodings, e.g. based on abstract binding trees \cite{pfpl}, with only minor modifications to our semantics. 
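To make the overall shape of a parse function concrete, here is a minimal sketch of a hypothetical TLM \li{#\dolla#reject} whose parse function rejects every literal body. It exercises only the \li{ParseError} case and commits only to the types of Figure \ref{fig:indexrange-and-parseresult} (we assume record values are written with the \li{field=value} syntax used above), not to any particular encoding of proto-expressions:
\begin{lstlisting}[numbers=none]
syntax $reject at rx by
  static fn(b : body) -> parse_result(proto_expr) =>
    (* a realistic parser would instead return inj[Success] applied to an
       encoded proto-expression; this one always reports an error located
       at the first character of the literal body *)
    inj[ParseError]({msg="this TLM accepts no literal bodies",
                     loc={startIdx=0, endIdx=0}})
end
\end{lstlisting}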
Indeed, when we formally define seTLMs in Sec. \ref{sec:miniVerseU}, we abstract over the particular encoding. % We will show a complete encoding when we describe our reduced formal system $\miniVerseUE$ in Sec. \ref{sec:tsms-minimal-formalism}. % ; the remaining constructors (some of which are elided for concision) encode the abstract syntax of VerseML expressions and types. % It is extended with one additional form used to handled spliced subexpressions, %Notice that the types just described are those that one would expect to find in a typical parser. %One would find types analagous to those just described in any parser, so for concision, we elide the details of \li{#\dolla#rx}'s parse function. %The parse function must treat the TLM parameters parametrically, i.e. it does not have access to any values in the supplied module parameter. Only the expansion the parse function generates can refer to module parameters. %For example, the following definition is ill-sorted: %\begin{lstlisting}[numbers=none] %syntax pattern_bad[Q : PATTERN] at Q.t { % static fn (body : Body) : Exp => % if Q.flag then (* ... *) else (* ... *) %} %\end{lstlisting}%So the parse function parses the body of the delimited form to generate an encoding of the elaboration. \begin{figure} \begin{lstlisting}[numbers=none] type proto_typ = rec(proto_typ => TyVar of var_t + Arrow of proto_typ * proto_typ + (* ... *) + SplicedT of segment) type proto_expr = rec(proto_expr => Var of var_t + Fn of var_t * proto_typ * proto_expr + Ap of proto_expr * proto_expr + (* ... *) + SplicedE of segment * proto_typ) \end{lstlisting} \caption[Abbreviated definitions of \li{proto_typ} and \li{proto_expr} in VerseML]{Abbreviated definitions \li{proto_typ} and \li{proto_expr} in the VerseML prelude. We assume some suitable type \li{var_t} exists, not shown.} \label{fig:candidate-exp-verseml} % \vspace{-5px} \end{figure} \subsection{Splicing}\label{sec:splicing-and-hygiene} As described thusfar, TLMs operate just like term-rewriting macros over string literals. TLMs therefore do not cause difficulties related to reasoning about \textbf{Conflict}, \textbf{Responsibility} or \textbf{Localization}, for exactly the reasons discussed in Sec. \ref{sec:macro-systems}. TLMs differ from term-rewriting macros in that they support \emph{splicing out arbitrary types and expressions} (including those that may themselves involve TLM applications) from within literal bodies in a reasonable manner. For example, the program fragment from Figure \ref{fig:derived-spliced-subexpressions} can be expressed using the \li{#\dolla#rx} TLM as follows: \begin{lstlisting}[numbers=none] val ssn = $rx /SURL\d\d\d-\d\d-\d\d\d\dEURL/ fun lookup_rx(name: string) => $rx /SURL@name: %ssnEURL/ \end{lstlisting} The expressions \lstinline{name} and \lstinline{ssn} on the second line appear spliced within the literal body, so we call them \emph{spliced expressions}. When \li{#\dolla#rx}'s parse function determines that a subsequence of the literal body should be taken as a spliced expression (here, by recognizing the characters \lstinline{@} or \lstinline{%} followed by a variable or parenthesized expression), it does not directly insert the syntax tree of that expression into the encoding of the expansion. Instead, the TLM must refer to the spliced expression \emph{by its relative location} within the literal body using the \li{SplicedE} variant of \li{proto_expr}. 
In particular, the \li{SplicedE} variant requires a value of type \li{segment}, which indicates the zero-indexed location of the spliced expression relative to the start of the literal body provided to the parse function. The \li{SplicedE} variant also requires a value of type \li{proto_typ}, which indicates the type that the spliced expression is expected to have. For example, the proto-expansion generated by \li{#\dolla#rx} for the literal body on the second line above, if written in a textual syntax for proto-expressions where references to spliced expressions are \li{spliced<startIdx; endIdx; ty>}, is:
\begin{lstlisting}[numbers=none]
Seq(Str(spliced<1; 4; string>), Seq(Str "SSTR: ESTR", spliced<8; 10; rx>))
\end{lstlisting}
Here, \li{spliced<1; 4; string>} refers to the spliced string expression \li{name} by location and \li{spliced<8; 10; rx>} refers to the spliced regex expression \li{ssn} by location. (For clarity of exposition, we again use the regex value constructors to abbreviate applications of the \li{fold} and \li{inj} operators and use the type abbreviation \li{rx}. In fact, given only the mechanisms introduced in this chapter, these abbreviations would need to be explicitly included in each proto-expansion.) Proto-types can make reference to spliced types by using the \li{SplicedT} variant of \li{proto_typ} analogously.
Requiring that the TLM refer to spliced terms indirectly in this manner prevents it from ``forging'' a spliced expression (i.e. claiming that an expression is a spliced expression when it does not appear in the literal body.) This will be formally critical to being able to reason abstractly about segmentation, capture and context-independence, as we will detail below.
% The parse function can similarly extract \emph{spliced types} from a literal body using the \li{SplicedT} variant of \li{proto_typ}.
%In particular, the parse function must provide the index range of spliced subexpressions to the \li{Spliced} constructor of the type \li{MarkedExp}.
%Only subexpressions that actually appear in the body of the literal form can be marked as spliced subexpressions.
%For example, had the would not be a valid expansion, because the that are not inside spliced subexpressions:
%\begin{lstlisting}[numbers=none]
%Q.Seq(Q.Str(name), Q.Seq(Q.Str ": ", ssn))
%\end{lstlisting}
\subsection{Segmentations}
The \emph{segmentation} of a proto-expression is the finite set of references to spliced terms within the proto-expression. For example, the segmentation of the proto-expression above is the finite set containing only \li{spliced<1; 4; string>} and \li{spliced<8; 10; rx>}.% Notice that no information about the spliced terms appear is communicated by the splice summary.
The semantics checks that all of the locations in the segmentation are 1) in bounds relative to the literal body; 2) non-overlapping; and 3) used at a consistent sort and type. This resolves the problem of \textbf{Segmentation} described in Secs. \ref{sec:non-local-term-rewriting}-\ref{sec:macro-systems}, i.e. every literal body in a well-typed program has a well-defined segmentation. A program editor or pretty-printer can communicate the segmentation information to the programmer, e.g.
by coloring non-spliced segments green as is our convention in this document: \begin{lstlisting}[numbers=none] val ssn = $rx /SURL\d\d\d-\d\d-\d\d\d\dEURL/ fun lookup_rx(name: string) => $rx /SURL@EURLnameSURL: %EURLssn/ \end{lstlisting} A program editor or pretty-printer can also communicate the type of each spliced term, as indicated in the segmentation, to the programmer upon request (for example, the Emacs and Vim packages for working with OCaml defer to the Merlin tool when the programmer requests the type of an expression \cite{Merlin}.) \subsection{Proto-Expansion Validation}\label{sec:uetsms-validation} Three potential problems described in Secs. \ref{sec:non-local-term-rewriting}-\ref{sec:macro-systems} remain: those related to reasoning abstractly about \textbf{Capture}, \textbf{Context Dependence} and \textbf{Typing}. Addressing these problems is the purpose of the \emph{proto-expansion validation} process. \subsubsection{Capture} Proto-expansion validation ensures that spliced terms have access \emph{only} to the bindings that appear at the application site -- spliced terms cannot capture the bindings that appear in the proto-expansion. For example, suppose that \li{#\dolla#rx} generated a proto-expansion of the following form (drawn as above): \begin{lstlisting}[numbers=none] let tmp = (* ... expansion-internal tmp ... *) in Seq(tmp, spliced<1; 3; rx>) \end{lstlisting} Na\"ively, the binding of the variable \li{tmp} here could shadow bindings of \li{tmp} that appear at the application site within the indicated spliced expression. For example, consider the following application site: \begin{lstlisting}[numbers=none] let tmp = (* ... application site tmp ... *) in $rx /SURL%EURLtmp/ \end{lstlisting} Here, the application site binding of \li{tmp} would be shadowed by the ``invisible'' binding of \li{tmp} in the expansion of the TLM application. % The possibility that this might occur makes it impossible to reason abstractly about binding within a spliced term -- i.e. it is unclear whether \li{tmp}. To address this problem, proto-expansion validation enforces a prohibition on capture. This prohibition on capture can be silently enforced by implicitly alpha-varying the bindings in the proto-expansion as needed, as in hygienic term-rewriting macro systems (cf. Sec. \ref{sec:macro-systems}.) For example, the expansion of the example above might take the following form: \begin{lstlisting}[numbers=none] let tmp = (* ... application site tmp ... *) in let tmp' = (* ... expansion-internal tmp ... *) in Seq(tmp', tmp) \end{lstlisting} Notice that the expansion-internal binding of \li{tmp} has been alpha-varied to \li{tmp'} to avoid shadowing the application site binding of \li{tmp}. As such, the reference to \li{tmp} in the spliced expression refers, as intended, to the application site binding of \li{tmp}. For TLM providers, the benefit of this mechanism is that they can name the variables used internally within expansions freely, without worrying about whether their chosen identifiers might shadow those that a client might have used at the application site. There is no need for a user-facing mechanism that generates ``fresh variables''. TLM clients can, in turn, reliably reason about binding within every spliced expression without examining the expansion that the spliced expression appears within. The trade-off is that this prevents library providers from defining alternative binding forms. For example, Haskell's derived form for monadic commands (i.e. 
\li{do}-notation) supports binding the result of executing a command to a variable that is then available in the subsequent commands in a command sequence. In VerseML, this cannot be expressed in the same way. Values can be communicated from the expansion to a spliced expression only via function arguments. %We will show an alternative formulation of Haskell's syntax for monadic commands that uses VerseML's anonymous function syntax to bind variables in Sec. \ref{sec:application-monadic-commands}. We will return to this example when we consider other possible points in this design space in Sec. \ref{sec:controlled-binding}. \subsubsection{Context Dependence} %The prohibition on shadowing ensures only that variables that appear in spliced terms do not refer to bindings that appear in the surrounding expansion. The proto-expansion validation process also ensures that variables that appear in the proto-expansion do not refer to bindings that appear at the TLM definition or application site. In other words, expansions must be completely \emph{context independent} -- a TLM definition can make no assumptions about the application site context. A minimal example of a ``broken'' TLM that does not generate context-independent proto-expansions is given below: \begin{lstlisting}[numbers=none] syntax $bad1 at rx by static fn(_) => Success (Var "SSTRxESTR") end \end{lstlisting} The proto-expansion that this TLM generates (for every literal body) refers to a variable \li{x} that is not bound within the expansion. If proto-expansion validation permitted such a proto-expansion, it would be well-typed only under those application site typing contexts where \li{x} is bound. This ``hidden assumption'' makes reasoning about binding and renaming difficult, so this proto-expansion is deemed invalid (even when \li{#\dolla#bad1} is applied where \li{x} is coincidentally bound.) Of course, this prohibition does not extend into the spliced terms in a proto-expansion -- spliced terms appear at the application site, so they can justifiably refer to application site bindings. The client's ability to hold the expansion abstract is retained. We saw examples of spliced terms that referred to variables bound at the application site -- \li{name} and \li{ssn} -- in Sec. \ref{sec:splicing-and-hygiene}. Because proto-expansions refer to spliced terms indirectly, and forging is impossible, enforcing context independence is straightforward -- we need only that the proto-expansion itself be closed, without considering the spliced terms.% In the next section, we will formalize this intuition. % The TLM provider can only refer to them opaquely. This prohibition on context dependence explains why the expansion generated by the TLM application in Sec. \ref{sec:uetsms-usage} cannot make use of the regex value constructors, e.g. \li{Str} and \li{Or}, directly. (In Chapter \ref{chap:ptsms}, we will relax this restriction to allow proto-expansions to access explicit parameters.) Collectively, we refer to the prohibition on capture and the prohibition on context dependence as \emph{hygiene properties}, by conceptual analogy to corresponding properties in term-rewriting macro systems (see Sec. \ref{sec:macro-systems}.) The novelty here comes from the fact that spliced terms are being extracted from an initially unparsed sequence of characters. % In the examples in Sec. \ref{sec:uetsms-usage} and Sec. \ref{sec:splicing-and-hygiene}, the expansion used constructors associated with the \li{Rx} type, e.g. \li{Seq} and \li{Str}. 
The use of constructor labels such as \li{Str} and \li{Seq} in the expansions above might appear to violate our prohibition on context-dependent expansions. It does not, but only because in VerseML, constructor labels are not variables or scoped symbols. Syntactically, they must begin with a capital letter (like Haskell's datatype constructors). Different labeled sum types can use common constructor labels without conflict because the type the term is being checked against -- e.g. \li{rx}, due to the type annotation on \li{#\dolla#rx} -- determines which type of value will be constructed. For dialects of ML where datatype definitions do introduce new variables or scoped symbols, we need parameterized TLMs. We will return to this topic in Chapter \ref{chap:ptsms}.
% Indeed, we used the label \li{Spliced} for two different recursive labeled sum types in Figure \ref{fig:candidate-exp-verseml}.
\vspace{-8px}
\subsubsection{Typing}\vspace{-4px}
Finally, proto-expansion validation maintains a reasonable \emph{typing discipline} by: \begin{enumerate} \item checking each spliced expression against the type indicated in the segmentation; and \item checking to ensure that the generated expansion is of the type specified in the TLM's type annotation. For example, the type annotation on \li{#\dolla#rx} is \li{at rx}, so proto-expansion validation ensures that the final expansion is of type \li{rx}. \end{enumerate} This addresses the problem of reasoning abstractly about \textbf{Typing} described in Secs. \ref{sec:non-local-term-rewriting}-\ref{sec:macro-systems}, i.e.: \begin{enumerate} \item determining the type that a spliced expression must have requires only the information in the segmentation of the proto-expansion (rather than complete knowledge of the proto-expansion); and \item determining the type of an expansion requires examining only the type annotation on the TLM definition (much as determining the type of a function application requires examining only the type of the function.) \end{enumerate}
% The language \emph{validates} proto-expansions before a final expansion is generated. One aspect of proto-expansion validation is checking the proto-expansion against the type annotation specified by the TLM, e.g. the type \li{Rx} in the example above. This maintains a \emph{type discipline}: if a programmer sees a TLM being applied when examining a well-typed program, they need only look up the TLM's type annotation to determine the type of the generated expansion. Determining the type does not require examining the expansion directly.
% \subsection{Hygiene}
% The spliced subexpressions that the proto-expansion refers to (by their position within the literal body, cf. above) must be parsed, typed and expanded during the proto-expansion validation process (otherwise, the language would not be able to check the type of the proto-expansion). To maintain a useful \emph{binding discipline}, i.e. to allow programmers to reason also about variable binding without examining expansions directly, the validation process maintains two additional properties related to spliced subexpressions: \textbf{context independent expansion} and \textbf{expansion independent splicing}. These are collectively referred to as the \emph{hygiene properties} (because they are conceptually related to the concept of hygiene in term rewriting macro systems, cf. Sec. \ref{sec:term-rewriting}.)
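To make this typing discipline concrete, the following sketch revisits the example from Sec. \ref{sec:splicing-and-hygiene}; the comments record only facts that a client can derive from the \li{at rx} annotation on \li{#\dolla#rx} and from the segmentation, without inspecting the parse function or the expansion:
\begin{lstlisting}[numbers=none]
val ssn = $rx /\d\d\d-\d\d-\d\d\d\d/     (* ssn : rx, by the annotation on $rx *)
fun lookup_rx(name: string) => $rx /@name: %ssn/
(* the application on the previous line has type rx, again by the annotation;
   its segmentation demands name : string and ssn : rx *)
\end{lstlisting}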
% \paragraph{Context Independent Expansion} % \paragraph{Expansion Independent Splicing} % %These properties suffice to ensure that programmers and tools can freely rename a variable without changing the meaning of the program. The only information that is necessary to perform such a \emph{rename refactoring} is the locations of spliced subexpressions within all the literal forms for which the variable being renamed is in scope; the expansions need not otherwise be examined. It would be straightforward to develop a tool and/or editor plugin to indicate the locations of spliced subexpressions to the user, like we do in this document (by coloring spliced subexpressions black). We discuss tool support as future work in Sec. \ref{sec:interaction-with-tools}. \subsection{Final Expansion} The result of proto-expansion validation is the \emph{final expansion}, which is simply the proto-expansion with references to spliced terms replaced with their own final expansions. For example, the final expansion of the body of \li{lookup_rx} is equivalent to the following, assuming that the regex value constructors were defined (not shown): \begin{lstlisting}[numbers=none] Seq(Str(name), Seq(Str "SSTR: ESTR", ssn)) \end{lstlisting} \subsection{Comparison to the Dialect-Oriented Approach} Let us compare the VerseML TLM \li{#\dolla#rx} to $\mathcal{V}_\texttt{rx}$, the hypothetical syntactic dialect of VerseML with support for derived forms for values of type \li{rx} described in Sec. \ref{sec:examples-of-syntax-dialects}. Both $\mathcal{V}_\texttt{rx}$ and \li{#\dolla#rx} give programmers the ability to use the same standard POSIX syntax for constructing regexes, extended with the same syntax for splicing in strings and other regexes. Using \li{#\dolla#rx}, however, we incur the additional syntactic cost of explicitly applying the \li{#\dolla#rx} TLM each time we wish to use regex syntax. This cost does not grow with the size of the regex, so it would only be significant in programs that involve a large number of small regexes (which do, of course, exist.) In Chapter \ref{chap:tsls} we will consider a design where even this syntactic cost can be eliminated in positions where the type is known to be \li{rx}. The benefit of the TLM-based approach is that we can easily define other TLMs to use alongside the \li{#\dolla#rx} TLM without needing to consider the possibility of syntactic conflict. Furthermore, programmers can rely on the binding discipline and the typing discipline enforced by proto-expansion validation to reason about programs, including those that contain unfamiliar forms. Put pithily, VerseML helps programmers avoid ``conflict and confusion''. % \begin{figure} % \begin{lstlisting} % val a = get_a() % val w = get_w() % val x = read_data(a) % val y = $k {|SURL(!R)@&{&/EURLxSURL!/:2_!EURLxSURL}'!R}EURL|} % \end{lstlisting} % \caption{The example from Figure \ref{fig:K-dialect}, written using a TLM.} % \label{fig:K-tsms} % \end{figure} % To underline this point, consider the program fragment in Figure \ref{fig:K-tsms}, which is based on the example involving the K query language from Sec. \ref{fig:K-dialect}. The programmer need not be familiar with the syntax of K, or examine the expansion itself, to answer questions corresponding to those posed in Sec. \ref{fig:K-dialect}. In particular, the programmer knows that: % \begin{enumerate} % \item The TLM named \li{#\dolla#k} is responsible for parsing the body of the literal on Line 4. 
% \item The character \li{x} inside the literal body is parsed as a ``spliced'' expression, \li{x}, as indicated by our visualization of the segmentation. The other characters, e.g. \li{R}, are definitively not spliced expressions. % \item The spliced expression \li{x} definitively refers to the binding of \li{x} on the previous line. No other binding of \li{x} could have shadowed this binding, due to the prohibition on capture. % \item The TLM application on Line 4 must be context-independent, so it cannot have referred to \li{w}. % \item We need only look at the type annotation on \li{#\dolla#k} to determine the type of \li{y}. For example, if that declaration takes the following form, we know definitively that \li{y} has type \li{kquery} (without examining the elided parse function): % \begin{lstlisting}[numbers=none] % syntax $k at kquery by (* ... *) end % \end{lstlisting} % The summary gives the types of the spliced expressions. % \end{enumerate} \vspace{-3px} \section{\texorpdfstring{$\miniVerseUE$}{miniVerseSE}}\label{sec:tsms-minimal-formalism}\label{sec:miniVerseU} \vspace{-3px} % \begin{figure}[p!] % $\begin{array}{lllllll} % \textbf{variables} & \textbf{type variables} & \textbf{labels} & \textbf{label sets} & \textbf{TLM variables} & \textbf{literal bodies} & \textbf{nats}\\ % x & t & \ell & \labelset & \tsmv & b & n\\~\end{array}$\\ % $\begin{array}{ll} % \textbf{type formation contexts} & \textbf{typing contexts}\\ % \Delta ::= \emptyset ~\vert~ \Delta, t & \Gamma ::= \emptyset ~\vert~ \Gamma, x : \tau\\ % ~ % \end{array}$\\ % ~\\ % $\begin{array}{lcl} % \gheading{types}\\ % \tau & ::= & t ~\vert~ \parr{\tau}{\tau} ~\vert~ \forallt{t}{\tau} ~\vert~ \rect{t}{\tau} ~\vert~ \prodt{\mapschema{\tau}{i}{\labelset}} ~\vert~ \sumt{\mapschema{\tau}{i}{\labelset}}\\ % ~\\ % \gheading{expanded expressions}\\ % e & ::= & x ~\vert~ \lam{x}{\tau}{e} ~\vert~ \app{e}{e} ~\vert~ \Lam{t}{e} ~\vert~ \App{e}{\tau} ~\vert~ \fold{t}{\tau}{e} ~\vert~ \unfold{e} ~\vert~ \tpl{\mapschema{e}{i}{\labelset}} ~\vert~ \prj{e}{\ell} \\ % & \vert & \inj{\ell}{e} ~\vert~ \caseof{e}{\mapschemab{x}{e}{i}{\labelset}}\\ % ~\\ % \gheading{TLM expressions}\\ % \tsme & ::= & \tsmv ~\vert~ \utsmdef{\tau}{\ue}\\ % ~\\ % \gheading{unexpanded expressions}\\ % \ue & ::= & {x} ~\vert~ \lam{x}{\tau}{\ue} ~\vert~ \ue(\ue) ~\vert~ \Lam{t}{\ue} ~\vert~ \App{\ue}{\tau} ~\vert~ \fold{t}{\tau}{\ue} ~\vert~ \unfold{\ue} ~\vert~ \tpl{\mapschema{\ue}{i}{\labelset}} ~\vert~ \prj{\ue}{\ell} \\ % & \vert & \inj{\ell}{\ue} ~\vert~ \caseof{\ue}{\mapschemab{x}{\ue}{i}{\labelset}}\\ % & \vert & \uesyntax{\tsmv}{\tsme}{\ue} ~\vert~ \utsmapp{\eta}{b}\\ % ~\\ % \gheading{proto-types}\\ % \mtau & ::= & t ~\vert~ \parr{\mtau}{\mtau} ~\vert~ \forallt{t}{\mtau} ~\vert~ \rect{t}{\mtau} ~\vert~ \prodt{\mapschema{\tau}{i}{\labelset}} ~\vert~ \sumt{\mapschema{\mtau}{i}{\labelset}} \\ % & \vert & \mtspliced{\tau}\\ % ~\\ % \gheading{proto-expressions}\\ % \me & ::= & x ~\vert~ \lam{x}{\mtau}{\me} ~\vert~ \app{\me}{\me} ~\vert~ \Lam{t}{\me} ~|~ \App{\me}{\mtau} ~\vert~ \fold{t}{\mtau}{\me} ~\vert~ \unfold{\me} ~\vert~ \tpl{\mapschema{\me}{i}{\labelset}} ~\vert~ \prj{\me}{\ell} \\ % & \vert & \inj{\ell}{\me} ~\vert~ \caseof{\me}{\mapschemab{x}{\me}{i}{\labelset}}\\ % & \vert & \mspliced{e} % % \\~ % \end{array}$ % \todo{finish breaking this up into syntax tables} % \caption[Syntax of $\miniVerseUE$]{Syntax of $\miniVerseUE$. 
The forms $\mapschema{V}{i}{\labelset}$ and $\mapschemab{x}{V}{i}{\labelset}$ where $V$ is a metavariable indicate finite functions from each label $i \in \labelset$ to a term, $V_i$, or binder, $x_i.V_i$, respectively.} % \label{fig:lambda-tsm-syntax} % \end{figure} To make the intuitions developed in the previous section precise, we will now introduce a reduced dialect of VerseML called $\miniVerseUE$ that supports seTLMs. The full definition of $\miniVerseUE$ is given in Appendix \ref{appendix:miniVerseSES} for reference. In the exposition below, we will reproduce only particularly noteworthy rules and proof cases. Rule and theorem numbers below refer to corresponding rules and theorems in the appendix. %For reference, the syntax of $\miniVerseUE$ is specified in Figure \ref{fig:lambda-tsm-syntax}. We will reproduce relevant portions of this specification inline (in tabular form) as we continue. %We specify all formal systems in this document within the metatheoretic framework detailed in \emph{PFPL} \cite{pfpl}, and assume familiarity of fundamental background concepts (e.g. abstract binding trees, substitution, implicit identification of terms up to $\alpha$-equivalence, structural induction and rule induction) covered therein. %Familiarity with other accounts of typed lambda calculi should also suffice to understand the formal systems in this document. \subsection{Overview} $\miniVerseUE$ consists of a language of \emph{unexpanded expressions} (the \emph{unexpanded language}, or \emph{UL}) defined by typed expansion to a language of \emph{expanded expressions} (the \emph{expanded language}, or \emph{XL}.) We will begin with a brief overview of the standard XL before turning our attention to the UL in the remainder of this chapter. \subsection{Syntax of the Expanded Language}\label{sec:U-expanded-terms} \begin{figure} %\hspace{-5px} \[\begin{array}{lllllll} \textbf{Sort} & & & \textbf{Operational Form} %& \textbf{Stylized Form} & \textbf{Description}\\ \mathsf{Typ} & \tau & ::= & t %& t & \text{variable}\\ &&& \aparr{\tau}{\tau} %& \parr{\tau}{\tau} & \text{partial function}\\ &&& \aall{t}{\tau} %& \forallt{t}{\tau} & \text{polymorphic}\\ &&& \arec{t}{\tau} %& \rect{t}{\tau} & \text{recursive}\\ &&& \aprod{\labelset}{\mapschema{\tau}{i}{\labelset}} %& \prodt{\mapschema{\tau}{i}{\labelset}} & \text{labeled product}\\ &&& \asum{\labelset}{\mapschema{\tau}{i}{\labelset}} %& \sumt{\mapschema{\tau}{i}{\labelset}} & \text{labeled sum}\\ \mathsf{Exp} & e & ::= & x %& x & \text{variable}\\ &&& \aelam{\tau}{x}{e} %& \lam{x}{\tau}{e} & \text{abstraction}\\ &&& \aeap{e}{e} %& \ap{e}{e} & \text{application}\\ &&& \aetlam{t}{e} %& \Lam{t}{e} & \text{type abstraction}\\ &&& \aetap{e}{\tau} %& \App{e}{\tau} & \text{type application}\\ &&& \aefold{e} %& \fold{e} : \tau & \text{fold}\\ &&& \aeunfold{e} %& \unfold{e} & \text{unfold}\\ &&& \aetpl{\labelset}{\mapschema{e}{i}{\labelset}} %& \tpl{\mapschema{e}{i}{\labelset}} & \text{labeled tuple}\\ &&& \aepr{\ell}{e} %& \prj{e}{\ell} & \text{projection}\\ &&& \aein{\ell}{e} %& \inj{\ell}{e} & \text{injection}\\ &&& \aecase{\labelset}{e}{\mapschemab{x}{e}{i}{\labelset}} %& \caseof{e}{\mapschemab{x}{e}{i}{\labelset}} & \text{case analysis} \end{array}\] \caption[Syntax of the $\miniVerseUE$ expanded language (XL)]{Syntax of the $\miniVerseUE$ expanded language (XL)%When using stylized forms, the label set is omitted when it can be inferred, e.g. 
the labeled product type $\prodt{\finmap{\mapitem{\ell_1}{e_1}, \mapitem{\ell_2}{e_2}}}$ leaves the label set $\{\ell_1, \ell_2\}$ implicit. % When we use the stylized forms, we assume that the reader can infer suppressed indices and arguments from the surrounding context. } \label{fig:U-expanded-terms} \end{figure} \noindent The syntax chart in Figure \ref{fig:U-expanded-terms} defines the syntax of \emph{types}, $\tau$, and \emph{(expanded) expressions}, $e$. Metavariables $x$ range over expression variables, $t$ over type variables, $\ell$ over labels and $\labelset$ over finite sets of labels. Types and expanded expressions are ABTs identified up to $\alpha$-equivalence in the usual manner (our typographic conventions are adapted from \emph{PFPL}, and summarized in Appendix \ref{appendix:typographic-conventions}.) To emphasize that programmers never draw expanded terms directly, and to clearly distinguish expanded terms from unexpanded terms, we do not define a stylized or textual syntax for expanded terms. The {XL} forms a standard pure functional language with support for partial functions, quantification over types, recursive types, labeled product types and labeled sum types. The reader is directed to \emph{PFPL} \cite{pfpl} (or another text on type systems, e.g. \emph{TAPL} \cite{tapl}) for a detailed introductory account of these standard constructs. We will tersely summarize the statics and dynamics of the XL in the next two subsections, respectively. \subsection{Statics of the Expanded Language} The \emph{statics of the XL} is defined by hypothetical judgements of the following form: \[\begin{array}{ll} \textbf{Judgement Form} & \textbf{Description}\\ \istypeU{\Delta}{\tau} & \text{$\tau$ is a type}\\ %\isctxU{\Delta}{\Gamma} & \text{$\Gamma$ is a well-formed typing context assuming $\Delta$}\\ \hastypeU{\Delta}{\Gamma}{e}{\tau} & \text{$e$ is assigned type $\tau$} \end{array}\] The \emph{type formation judgement}, $\istypeU{\Delta}{\tau}$, is inductively defined by Rules (\ref{rules:istypeU}). The \emph{typing judgement}, $\hastypeU{\Delta}{\Gamma}{e}{\tau}$, is inductively defined by Rules (\ref{rules:hastypeU}). \emph{Type formation contexts}, $\Delta$, are finite sets of hypotheses of the form $\Dhyp{t}$. Empty finite sets are written $\emptyset$, or omitted entirely within judgements, and non-empty finite sets are written as comma-separated finite sequences identified up to exchange and contraction. We write $\Delta, \Dhyp{t}$ when $\Dhyp{t} \notin \Delta$ for $\Delta$ extended with the hypothesis $\Dhyp{t}$. %Finite sets are written as finite sequences identified up to exchange.% We write $\Dcons{\Delta}{\Delta'}$ for the union of $\Delta$ and $\Delta'$. 
% \begin{subequations}\label{rules:istypeU} % \begin{equation*}\label{rule:istypeU-var} % \inferrule{ }{\istypeU{\Delta, \Dhyp{t}}{t}} % \end{equation*} % \begin{equation*}\label{rule:istypeU-parr} % \inferrule{ % \istypeU{\Delta}{\tau_1}\\ % \istypeU{\Delta}{\tau_2} % }{\istypeU{\Delta}{\aparr{\tau_1}{\tau_2}}} % \end{equation*} % \begin{equation*}\label{rule:istypeU-all} % \inferrule{ % \istypeU{\Delta, \Dhyp{t}}{\tau} % }{ % \istypeU{\Delta}{\aall{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:istypeU-rec} % \inferrule{ % \istypeU{\Delta, \Dhyp{t}}{\tau} % }{ % \istypeU{\Delta}{\arec{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:istypeU-prod} % \inferrule{ % \{\istypeU{\Delta}{\tau_i}\}_{i \in \labelset} % }{ % \istypeU{\Delta}{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \begin{equation*}\label{rule:istypeU-sum} % \inferrule{ % \{\istypeU{\Delta}{\tau_i}\}_{i \in \labelset} % }{ % \istypeU{\Delta}{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \end{subequations} % Premises of the form $\{{J}_i\}_{i \in \labelset}$ mean that for each $i \in \labelset$, the judgement ${J}_i$ must hold. \emph{Typing contexts}, $\Gamma$, are finite functions that map each variable $x \in \domof{\Gamma}$, where $\domof{\Gamma}$ is a finite set of variables, to the hypothesis $\Ghyp{x}{\tau}$, for some $\tau$. Empty typing contexts are written $\emptyset$, or omitted entirely within judgements, and non-empty typing contexts are written as finite sequences of hypotheses identified up to exchange and contraction. We write $\Gamma, \Ghyp{x}{\tau}$, when $x \notin \domof{\Gamma}$, for the extension of $\Gamma$ with a mapping from $x$ to $\Ghyp{x}{\tau}$, and $\Gcons{\Gamma}{\Gamma'}$ when $\domof{\Gamma} \cap \domof{\Gamma'} = \emptyset$ for the typing context mapping each $x \in \domof{\Gamma} \cup \domof{\Gamma'}$ to $x : \tau$ if $x : \tau \in \Gamma$ or $x : \tau \in \Gamma'$. % We write $\isctxU{\Delta}{\Gamma}$ if every type in $\Gamma$ is well-formed relative to $\Delta$. % \begin{definition}[Typing Context Formation] \label{def:isctxU} % $\isctxU{\Delta}{\Gamma}$ iff for each hypothesis $x : \tau \in \Gamma$, we have $\istypeU{\Delta}{\tau}$. 
% \end{definition} % \begin{subequations}\label{rules:hastypeU} % \begin{equation*}\label{rule:hastypeU-var} % \inferrule{ }{ % \hastypeU{\Delta}{\Gamma, \Ghyp{x}{\tau}}{x}{\tau} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-lam} % \inferrule{ % \istypeU{\Delta}{\tau}\\ % \hastypeU{\Delta}{\Gamma, \Ghyp{x}{\tau}}{e}{\tau'} % }{ % \hastypeU{\Delta}{\Gamma}{\aelam{\tau}{x}{e}}{\aparr{\tau}{\tau'}} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-ap} % \inferrule{ % \hastypeU{\Delta}{\Gamma}{e_1}{\aparr{\tau}{\tau'}}\\ % \hastypeU{\Delta}{\Gamma}{e_2}{\tau} % }{ % \hastypeU{\Delta}{\Gamma}{\aeap{e_1}{e_2}}{\tau'} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-tlam} % \inferrule{ % \hastypeU{\Delta, \Dhyp{t}}{\Gamma}{e}{\tau} % }{ % \hastypeU{\Delta}{\Gamma}{\aetlam{t}{e}}{\aall{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-tap} % \inferrule{ % \hastypeU{\Delta}{\Gamma}{e}{\aall{t}{\tau}}\\ % \istypeU{\Delta}{\tau'} % }{ % \hastypeU{\Delta}{\Gamma}{\aetap{e}{\tau'}}{[\tau'/t]\tau} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-fold} % \inferrule{\ % \istypeU{\Delta, \Dhyp{t}}{\tau}\\ % \hastypeU{\Delta}{\Gamma}{e}{[\arec{t}{\tau}/t]\tau} % }{ % \hastypeU{\Delta}{\Gamma}{\aefold{e}}{\arec{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-unfold} % \inferrule{ % \hastypeU{\Delta}{\Gamma}{e}{\arec{t}{\tau}} % }{ % \hastypeU{\Delta}{\Gamma}{\aeunfold{e}}{[\arec{t}{\tau}/t]\tau} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-tpl} % \inferrule{ % \{\hastypeU{\Delta}{\Gamma}{e_i}{\tau_i}\}_{i \in \labelset} % }{ % \hastypeU{\Delta}{\Gamma}{\aetpl{\labelset}{\mapschema{e}{i}{\labelset}}}{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-pr} % \inferrule{ % \hastypeU{\Delta}{\Gamma}{e}{\aprod{\labelset, \ell}{\mapschema{\tau}{i}{\labelset}; \ell \hookrightarrow \tau}} % }{ % \hastypeU{\Delta}{\Gamma}{\aepr{\ell}{e}}{\tau} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-in} % \inferrule{ % \{\istypeU{\Delta}{\tau_i}\}_{i \in \labelset}\\ % \istypeU{\Delta}{\tau}\\ % \hastypeU{\Delta}{\Gamma}{e}{\tau} % }{ % \hastypeU{\Delta}{\Gamma}{\aein{\labelset, \ell}{\ell}{\mapschema{\tau}{i}{\labelset}; \ell \hookrightarrow \tau}{e}}{\asum{\labelset, \ell}{\mapschema{\tau}{i}{\labelset}; \ell \hookrightarrow \tau}} % } % \end{equation*} % \begin{equation*}\label{rule:hastypeU-case} % \inferrule{ % \hastypeU{\Delta}{\Gamma}{e}{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}}\\ % \istypeU{\Delta}{\tau}\\ % \{\hastypeU{\Delta}{\Gamma, x_i : \tau_i}{e_i}{\tau}\}_{i \in \labelset} % }{ % \hastypeU{\Delta}{\Gamma}{\aecase{\labelset}{e}{\mapschemab{x}{e}{i}{\labelset}}}{\tau} % } % \end{equation*} % \end{subequations} %Rules (\ref{rules:istypeU}) and (\ref{rules:hastypeU}) are syntax-directed, so we assume an inversion lemma for each rule as needed without stating it separately. % The following standard lemmas also hold. These judgements validate standard lemmas, defined in Appendix \ref{appendix:SES-XL}: Weakening, Substitution and Decomposition. \subsection{Structural Dynamics}\label{sec:dynamics-U} The \emph{structural dynamics} (a.k.a. 
the \emph{structural operational semantics} \cite{DBLP:journals/jlp/Plotkin04a}) of $\miniVerseUE$ is defined as a transition system by judgements of the following form:
\[\begin{array}{ll} \textbf{Judgement Form} & \textbf{Description}\\ \stepsU{e}{e'} & \text{$e$ transitions to $e'$}\\ \isvalU{e} & \text{$e$ is a value} \end{array}\]
We also define auxiliary judgements for \emph{iterated transition}, $\multistepU{e}{e'}$, and \emph{evaluation}, $\evalU{e}{e'}$. \begingroup \def\thetheorem{\ref{defn:iterated-transition-UP}} \begin{definition}[Iterated Transition] Iterated transition, $\multistepU{e}{e'}$, is the reflexive, transitive closure of the transition judgement, $\stepsU{e}{e'}$.\end{definition}
% \addtocounter{theorem}{-1}
\endgroup \begingroup \def\thetheorem{\ref{defn:evaluation-UP}} \begin{definition}[Evaluation] $\evalU{e}{e'}$ iff $\multistepU{e}{e'}$ and $\isvalU{e'}$. \end{definition}
% \addtocounter{theorem}{-1}
\endgroup Our subsequent developments do not require making reference to particular rules in the structural dynamics (because TLMs operate statically), so we do not reproduce the rules here. Instead, it suffices to state the following conditions. The Canonical Forms condition characterizes well-typed values. Satisfying this condition requires an \emph{eager} (a.k.a. \emph{by-value}) formulation of the dynamics. \begingroup \def\thetheorem{\ref{condition:canonical-forms-UP}} \begin{condition}[Canonical Forms] If $\hastypeUC{e}{\tau}$ and $\isvalU{e}$ then: \begin{enumerate} \item If $\tau=\aparr{\tau_1}{\tau_2}$ then $e=\aelam{\tau_1}{x}{e'}$ and $\hastypeUCO{\Ghyp{x}{\tau_1}}{e'}{\tau_2}$. \item If $\tau=\aall{t}{\tau'}$ then $e=\aetlam{t}{e'}$ and $\hastypeUCO{\Dhyp{t}}{e'}{\tau'}$. \item If $\tau=\arec{t}{\tau'}$ then $e=\aefold{e'}$ and $\hastypeUC{e'}{[\abop{rec}{t.\tau'}/t]\tau'}$ and $\isvalU{e'}$. \item If $\tau=\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}$ then $e=\aetpl{\labelset}{\mapschema{e}{i}{\labelset}}$ and $\hastypeUC{e_i}{\tau_i}$ and $\isvalU{e_i}$ for each $i \in \labelset$. \item If $\tau=\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}$ then for some label set $\labelset'$ and label $\ell$ and type $\tau'$, we have that $\labelset=\labelset', \ell$ and $\tau=\asum{\labelset', \ell}{\mapschema{\tau}{i}{\labelset'}; \mapitem{\ell}{\tau'}}$ and $e=\aein{\ell}{e'}$ and $\hastypeUC{e'}{\tau'}$ and $\isvalU{e'}$. \end{enumerate}\end{condition} \endgroup The Preservation condition ensures that evaluation preserves typing. \begingroup \def\thetheorem{\ref{condition:preservation-UP}} \begin{condition}[Preservation] If $\hastypeUC{e}{\tau}$ and $\multistepU{e}{e'}$ then $\hastypeUC{e'}{\tau}$. \end{condition} \endgroup The Progress condition ensures that evaluating a well-typed expanded expression cannot ``get stuck'': \begingroup \def\thetheorem{\ref{condition:progress-UP}} \begin{condition}[Progress] If $\hastypeUC{e}{\tau}$ then either $\isvalU{e}$ or there exists an $e'$ such that $\stepsU{e}{e'}$. \end{condition} \endgroup Together, the Preservation and Progress conditions constitute the Type Safety Condition. \vspace{-8px}
\subsection{Syntax of the Unexpanded Language}\label{sec:syntax-U}
\begin{figure}[t!] 
\[\begin{array}{lllllll} \textbf{Sort} & & %&\textbf{Operational Form} & \textbf{Stylized Form} & \textbf{Description}\\ \mathsf{UTyp} & \utau & ::= % &\ut & \ut & \text{identifier}\\ && %& \auparr{\utau}{\utau} & \parr{\utau}{\utau} & \text{partial function}\\ && %& \auall{\ut}{\utau} & \forallt{\ut}{\utau} & \text{polymorphic}\\ && %& \aurec{\ut}{\utau} & \rect{\ut}{\utau} & \text{recursive}\\ && %& \auprod{\labelset}{\mapschema{\utau}{i}{\labelset}} & \prodt{\mapschema{\utau}{i}{\labelset}} & \text{labeled product}\\ && %& \ausum{\labelset}{\mapschema{\utau}{i}{\labelset}} & \sumt{\mapschema{\utau}{i}{\labelset}} & \text{labeled sum}\\ \mathsf{UExp} & \ue & ::= %& \ux & \ux & \text{identifier}\\ && % & \asc{\ue}{\utau} & \text{ascription}\\ && % & \letsyn{\ux}{\ue}{\ue} & \text{value binding}\\ && %& \aulam{\utau}{\ux}{\ue} & \lam{\ux}{\utau}{\ue} & \text{abstraction}\\ && %& \auap{\ue}{\ue} & \ap{\ue}{\ue} & \text{application}\\ && %& \autlam{\ut}{\ue} & \Lam{\ut}{\ue} & \text{type abstraction}\\ && %& \autap{\ue}{\utau} & \App{\ue}{\utau} & \text{type application}\\ && %& \aufold{\ut}{\utau}{\ue} & \fold{\ue} & \text{fold}\\ && %& \auunfold{\ue} & \unfold{\ue} & \text{unfold}\\ && %& \autpl{\labelset}{\mapschema{\ue}{i}{\labelset}} & \tpl{\mapschema{\ue}{i}{\labelset}} & \text{labeled tuple}\\ && %& \aupr{\ell}{\ue} & \prj{\ue}{\ell} & \text{projection}\\ && %& \auin{\labelset}{\ell}{\mapschema{\utau}{i}{\labelset}}{\ue} & \inj{\ell}{\ue} & \text{injection}\\ && %& \aucase{\labelset}{\utau}{\ue}{\mapschemab{\ux}{\ue}{i}{\labelset}} & \caseof{\ue}{\mapschemab{\ux}{\ue}{i}{\labelset}} & \text{case analysis}\\ \LCC & & %& \lightgray & \color{Yellow} & \color{Yellow} \\ && %& \audefuetsm{\utau}{e}{\tsmv}{\ue} & \uesyntax{\tsmv}{\utau}{e}{\ue} & \text{seTLM definition}\\ && %& \autsmap{b}{\tsmv} & \utsmap{\tsmv}{b} & \text{seTLM application}\ECC \end{array}\]\vspace{-10px} \caption[Syntax of the $\miniVerseUE$ unexpanded language (UL)]{Syntax of the $\miniVerseUE$ unexpanded language (UL).% Metavariable $\ut$ ranges over type identifiers, $\ux$ ranges over expression identifiers, $\tsmv$ over TLM names and $b$ over character sequences, which, when they appear in an unexpanded expression, are called literal bodies. } \label{fig:U-unexpanded-terms} \end{figure} A $\miniVerseUE$ program ultimately evaluates as a well-typed expanded expression. However, the programmer does not construct this expanded expression directly. Instead, the programmer constructs an \emph{unexpanded expression}, $\ue$, which might contain \emph{unexpanded types}, $\utau$. Figure \ref{fig:U-unexpanded-terms} defines the relevant forms. Unexpanded types and expressions are \textbf{not} abstract binding trees -- we do \textbf{not} define notions of renaming, alpha-equivalence or substitution for unexpanded terms. This is because unexpanded expressions remain ``partially parsed'' due to the presence of literal bodies, $b$, from which spliced terms might be extracted during typed expansion. In fact, unexpanded types and expressions do not involve variables at all, but rather \emph{type identifiers}, $\ut$, and \emph{expression identifiers}, $\ux$. Identifiers are given meaning by expansion to variables during typed expansion, as we will see below. This distinction between identifiers and variables will be technically crucial. 
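To see concretely why the usual notion of renaming is unavailable here (this is our own illustration of the point just made), consider an unexpanded expression of the form
\[\lam{\ux}{\utau}{\utsmap{\tsmv}{b}}\]
where the characters of the identifier $\ux$ happen to occur somewhere within the literal body $b$. Whether those characters constitute a spliced occurrence of $\ux$, or are merely uninterpreted text, is determined only when the TLM named $\tsmv$ parses $b$ during typed expansion, so no renaming of $\ux$ can be defined on the unexpanded term itself.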
%We \textbf{cannot} adopt the usual definitions of $\alpha$-renaming of identifiers, because unexpanded types and expressions are still in a ``partially parsed'' state -- the literal bodies, $b$, within an unexpanded expression might contain spliced subterms that are ``surfaced'' by a TLM only during typed expansion, as we will detail below. %identifiers are given meaning by expansion to variables. %In other words, unexpanded expressions are not abstract binding trees, nor sequences of characters, but a ``transitional'' structure with some characteristics of each of these. %For this reason, we will need to handle generating fresh variables explicitly at binding sites in our semantics. %To do so, we distinguish \emph{type identifiers}, $\ut$, and \emph{expression identifiers}, $\ux$, from type variables, $t$, and expression variables, $x$. identifiers will be given meaning by expansion to variables (which, in turn, are given meaning by substitution, as described above). Most of the unexpanded forms in Figure \ref{fig:U-unexpanded-terms} mirror the expanded forms. We refer to these as the \emph{common forms}. The mapping from expanded forms to common unexpanded forms is defined explicitly in Appendix \ref{appendix:SES-shared-forms}. % There are only two unexpanded expression forms, highlighted in gray in Figure \ref{fig:U-unexpanded-terms}, that do not correspond to expanded expression forms -- the seTLM definition form and the seTLM application form. %These are the ``interesting'' forms. % These are the ``interesting'' forms. % Let us define this correspondence by the metafunction $\Uof{e}$: %\[ %\begin{split} %\Uof{x} & = x\\ %\Uof{\aelam{\tau}{x}{e}} & = \aulam{\tau}{x}{\Uof{e}}\\ %\Uof{\aeap{e_1}{e_2}} & = \auap{\Uof{e_1}}{\Uof{e_2}} %\end{split} %\] and so on for the remaining expanded expression forms. In addition to the stylized syntax given in Figure \ref{fig:U-unexpanded-terms}, there is also a context-free textual syntax for the UL. Giving a complete definition of the context-free textual syntax as, e.g., a context-free grammar, risks digression into details that are not critical to our purposes here. The paper on Wyvern defines a textual syntax for a similar system \cite{TSLs}. Instead, we need only posit the existence of partial metafunctions $\parseUTypF{b}$ and $\parseUExpF{b}$ that go from character sequences, $b$, to unexpanded types and expressions, respectively. \begingroup \def\thetheorem{\ref{condition:textual-representability-SES}} \begin{condition}[Textual Representability] ~ \begin{enumerate} \item For each $\utau$, there exists $b$ such that $\parseUTyp{b}{\utau}$. \item For each $\ue$, there exists $b$ such that $\parseUExp{b}{\ue}$. \end{enumerate} \end{condition} \endgroup \subsection{Typed Expansion}\label{sec:typed-expansion-U} Unexpanded expressions, and the unexpanded types therein, are checked and expanded simultaneously according to the \emph{typed expansion judgements}: \[\begin{array}{ll} \textbf{Judgement Form} & \textbf{Description}\\ \expandsTU{\uDelta}{\utau}{\tau} & \text{$\utau$ has well-formed expansion $\tau$}\\ \expandsUX{\ue}{e}{\tau} & \text{$\ue$ has expansion $e$ of type $\tau$}\end{array}\] %\newcommand{\gray}[1]{{\color{gray} #1}} \subsubsection{Type Expansion} \emph{Unexpanded type formation contexts}, $\uDelta$, are of the form $\uDD{\uD}{\Delta}$, i.e. they consist of a \emph{type identifier expansion context}, $\uD$, paired with a standard type formation context, $\Delta$. 
A \emph{type identifier expansion context}, $\uD$, is a finite function that maps each type identifier $\ut \in \domof{\uD}$ to the hypothesis $\vExpands{\ut}{t}$, for some type variable $t$. We write $\ctxUpdate{\uD}{\ut}{t}$ for the type identifier expansion context that maps $\ut$ to $\vExpands{\ut}{t}$ and defers to $\uD$ for all other type identifiers (i.e. the previous mapping is \emph{updated}.) We define $\uDelta, \uDhyp{\ut}{t}$ when $\uDelta=\uDD{\uD}{\Delta}$ as an abbreviation of \[\uDD{\ctxUpdate{\uD}{\ut}{t}}{\Delta, \Dhyp{t}}\]%type identifier expansion context is always extended/updated together with The \emph{type expansion judgement}, $\expandsTU{\uDelta}{\utau}{\tau}$, is inductively defined by Rules (\ref{rules:expandsTU}). The first three of these rules are reproduced below: % \begin{subequations}%\label{rules:expandsTU} \begin{equation*}\tag{\ref{rule:expandsTU-var}} \inferrule{ }{\expandsTU{\uDelta, \uDhyp{\ut}{t}}{\ut}{t}} \end{equation*} \begin{equation*}\tag{\ref{rule:expandsTU-parr}} \inferrule{ \expandsTU{\uDelta}{\utau_1}{\tau_1}\\ \expandsTU{\uDelta}{\utau_2}{\tau_2} }{\expandsTU{\uDelta}{\parr{\utau_1}{\utau_2}}{\aparr{\tau_1}{\tau_2}}} \end{equation*} \begin{equation*}\tag{\ref{rule:expandsTU-all}} \inferrule{ \expandsTU{\uDelta, \uDhyp{\ut}{t}}{\utau}{\tau} }{ \expandsTU{\uDelta}{\forallt{\ut}{\utau}}{\aall{t}{\tau}} } \end{equation*} % \begin{equation*}\label{rule:expandsTU-rec} % \inferrule{ % \expandsTU{\uDelta, \uDhyp{\ut}{t}}{\utau}{\tau} % }{ % \expandsTU{\uDelta}{\aurec{\ut}{\utau}}{\arec{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:expandsTU-prod} % \inferrule{ % \{\expandsTU{\uDelta}{\utau_i}{\tau_i}\}_{i \in \labelset} % }{ % \expandsTU{\uDelta}{\auprod{\labelset}{\mapschema{\utau}{i}{\labelset}}}{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \begin{equation*}\label{rule:expandsTU-sum} % \inferrule{ % \{\expandsTU{\uDelta}{\utau_i}{\tau_i}\}_{i \in \labelset} % }{ % \expandsTU{\uDelta}{\ausum{\labelset}{\mapschema{\utau}{i}{\labelset}}}{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \end{subequations} %We write $\uDeltaOK{\uDelta}$ when $\uDelta=\uDD{\uD}{\Delta}$ and each type variable in $\uD$ also appears in $\Delta$. %\begin{definition}\label{def:uDeltaOK} $\uDeltaOK{\uDD{\uD}{\Delta}}$ iff for each $\vExpands{\ut}{t} \in \uD$, we have $\Dhyp{t} \in \Delta$.\end{definition} To develop an intuition for how type identifier expansion operates, it is instructive to inspect the derivation of the expansion of the unexpanded type $\forallt{\ut}{\forallt{\ut}{\ut}}$: \begin{mathpar} \inferrule{ \inferrule{ \inferrule{ }{ \expandsTU{\uDD{\vExpands{\ut}{t_2}}{{\Dhyp{t_1}}, {\Dhyp{t_2}}}}{\ut}{t_2} }~\text{(\ref*{rule:expandsTU-var})} }{ \expandsTU{\uDD{\vExpands{\ut}{t_1}}{\Dhyp{t_1}}}{\forallt{\ut}{\ut}}{\aall{t_2}{t_2}} }~\text{(\ref*{rule:expandsTU-all})} }{ \expandsTU{\uDD{\emptyset}{\emptyset}}{\forallt{\ut}{\forallt{\ut}{\ut}}}{\aall{t_1}{\aall{t_2}{t_2}}} }~\text{(\ref*{rule:expandsTU-all})} \end{mathpar} Notice that when Rule (\ref{rule:expandsTU-all}) is applied, the type identifier expansion context is extended (when the outermost binding is encountered) or updated (at all inner bindings) and the type formation context is simultaneously extended with a (necessarily fresh) hypothesis. 
Without this mechanism, expansions for unexpanded types with shadowing, like this minimal example, would not exist, because by definition we cannot extend a type formation context with a variable it already mentions, nor implicitly $\alpha$-vary the unexpanded type to sidestep this problem in the usual manner. The Type Expansion Lemma establishes that the expansion of an unexpanded type is a well-formed type. \begingroup \def\thetheorem{\ref{lemma:type-expansion-U}} \begin{lemma}[Type Expansion] If $\expandsTU{\uDD{\uD}{\Delta}}{\utau}{\tau}$ then $\istypeU{\Delta}{\tau}$.\end{lemma} \begin{proof} By rule induction over Rules (\ref{rules:expandsTU}). In each case, we apply the IH to or over each premise, then apply the corresponding type formation rule in Rules (\ref{rules:istypeU}). \end{proof} \endgroup \subsubsection{Typed Expression Expansion} % \begin{subequations}\label{rules:expandsU} \emph{Unexpanded typing contexts}, $\uGamma$, are, similarly, of the form $\uGG{\uG}{\Gamma}$, where $\uG$ is an \emph{expression identifier expansion context}, and $\Gamma$ is a typing context. An expression identifier expansion context, $\uG$, is a finite function that maps each expression identifier $\ux \in \domof{\uG}$ to the hypothesis $\vExpands{\ux}{x}$, for some expression variable, $x$. We write $\ctxUpdate{\uG}{\ux}{x}$ for the expression identifier expansion context that maps $\ux$ to $\vExpands{\ux}{x}$ and defers to $\uG$ for all other expression identifiers (i.e. the previous mapping is updated.) %We write $\uGammaOK{\uGamma}$ when $\uGamma=\uGG{\uG}{\Gamma}$ and each expression variable in $\uG$ is assigned a type by $\Gamma$. %\begin{definition} $\uGammaOK{\uGG{\uG}{\Gamma}}$ iff for each $\vExpands{\ux}{x} \in \uG$, we have $\Ghyp{x}{\tau} \in \Gamma$ for some $\tau$.\end{definition} %\noindent We define $\uGamma, \uGhyp{\ux}{x}{\tau}$ when $\uGamma = \uGG{\uG}{\Gamma}$ as an abbreviation of \[\uGG{\ctxUpdate{\uG}{\ux}{x}}{\Gamma, \Ghyp{x}{\tau}}\] The \emph{typed expression expansion judgement}, $\expandsUX{\ue}{e}{\tau}$, is inductively defined by Rules (\ref{rules:expandsU}). Before covering these rules, let us state the main theorem of interest: that typed expansion results in a well-typed expanded expression. \begingroup \def\thetheorem{\ref{thm:typed-expansion-short-U}} \begin{theorem}[Typed Expression Expansion] \hspace{-3px}If $\expandsU{\uDD{\uD}{\Delta}\hspace{-3px}}{\uGG{\uG}{\Gamma}\hspace{-3px}}{\uPsi}{\ue}{e}{\tau}$ then $\hastypeU{\Delta}{\Gamma}{e}{\tau}$. \end{theorem} \endgroup %These rules validate the following theorem, which establishes that typed expansion produces an expansion of the assigned type. %\begin{theorem}[Typed Expression Expansion] If $\expandsU{\uDD{\uD}{\Delta}}{\uGG{\uG}{\Gamma}}{\uPsi}{\ue}{e}{\tau}$ and $\uetsmenv{\Delta}{\uPsi}$ then $\hastypeU{\Delta}{\Gamma}{e}{\tau}$.\end{theorem} %\begin{proof} This is the first part of Theorem \ref{thm:typed-expansion-U}, defined and proven below.\end{proof} \paragraph{Common Forms} Rules (\ref{rule:expandsU-var}) through (\ref{rule:expandsU-case}) handle unexpanded expressions of common form, as well as ascriptions and let binding. The first five of these rules are reproduced below: %Each of these rules is based on the corresponding typing rule, i.e. Rules (\ref{rule:hastypeU-var}) through (\ref{rule:hastypeU-case}), respectively. 
Among these, the variable, abstraction and application rules below are based on the corresponding typing rules, (\ref{rule:hastypeU-var}), (\ref{rule:hastypeU-lam}) and (\ref{rule:hastypeU-ap}), respectively; the ascription and value binding forms have no expanded counterparts and are instead eliminated during expansion:
% for unexpanded expressions of variable, function and application form, respectively:
\begin{equation*}\tag{\ref{rule:expandsU-var}}
\inferrule{ }{\expandsU{\uDelta}{\uGamma, \uGhyp{\ux}{x}{\tau}}{\uPsi}{\ux}{x}{\tau}}
\end{equation*}
\begin{equation*}\tag{\ref{rule:expandsU-asc}}
\inferrule{
\expandsTU{\uDelta}{\utau}{\tau}\\
\expandsU{\uDelta}{\uGamma}{\uPsi}{\ue}{e}{\tau}
}{
\expandsU{\uDelta}{\uGamma}{\uPsi}{\asc{\ue}{\utau}}{e}{\tau}
}
\end{equation*}
\begin{equation*}\tag{\ref{rule:expandsU-letsyn}}
\inferrule{
\expandsU{\uDelta}{\uGamma}{\uPsi}{\ue_1}{e_1}{\tau_1}\\
\expandsU{\uDelta}{\uGamma, \uGhyp{\ux}{x}{\tau_1}}{\uPsi}{\ue_2}{e_2}{\tau_2}
}{
\expandsU{\uDelta}{\uGamma}{\uPsi}{\letsyn{\ux}{\ue_1}{\ue_2}}{ \aeap{\aelam{\tau_1}{x}{e_2}}{e_1} }{\tau_2}
}
\end{equation*}
\begin{equation*}\tag{\ref{rule:expandsU-lam}}
\inferrule{
\expandsTU{\uDelta}{\utau}{\tau}\\
\expandsU{\uDelta}{\uGamma, \uGhyp{\ux}{x}{\tau}}{\uPsi}{\ue}{e}{\tau'}
}{\expandsUX{\lam{\ux}{\utau}{\ue}}{\aelam{\tau}{x}{e}}{\aparr{\tau}{\tau'}}}
\end{equation*}
\begin{equation*}\tag{\ref{rule:expandsU-ap}}
\inferrule{
\expandsUX{\ue_1}{e_1}{\aparr{\tau}{\tau'}}\\
\expandsUX{\ue_2}{e_2}{\tau}
}{
\expandsUX{\ap{\ue_1}{\ue_2}}{\aeap{e_1}{e_2}}{\tau'}
}
\end{equation*}
The rules for the remaining expressions of common form are entirely straightforward, mirroring the corresponding typing rules, i.e. Rules (\ref{rules:hastypeU}).
%In particular, observe that, in each of these rules, the unexpanded and expanded expression forms in the conclusion correspond, and each premise corresponds to a premise of the corresponding typing rule.
%Type formation premises in the typing rule give rise to type expansion premises in the corresponding typed expansion rule, and each typed expression expansion premise in each rule above corresponds to a typing premise in the corresponding typing rule.
The type assigned in the conclusion of each rule is identical to the type assigned in the conclusion of the corresponding typing rule. The seTLM context, $\uPsi$, passes opaquely through these rules (we will define seTLM contexts below.) As such, the corresponding cases in the proof of Theorem \ref{thm:typed-expansion-short-U} are by application of the induction hypothesis and the corresponding typing rule.
%Rules (\ref{rules:expandsTU}) could similarly have been generated by mechanically transforming Rules (\ref{rules:istypeU}).
% We can express this scheme more precisely with the rule transformation given in Appendix \ref{appendix:SES-uexps}.
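To illustrate how expression identifiers are expanded, consider, as a small worked instance of the rules above, an unexpanded expression in which $\ux$ is shadowed. Assuming $\expandsU{\uDelta}{\uGamma}{\uPsi}{\ue_1}{e_1}{\tau_1}$ and $\expandsU{\uDelta}{\uGamma, \uGhyp{\ux}{x_1}{\tau_1}}{\uPsi}{\ue_2}{e_2}{\tau_2}$, Rules (\ref{rule:expandsU-letsyn}) and (\ref{rule:expandsU-var}) derive
\[
\expandsU{\uDelta}{\uGamma}{\uPsi}{\letsyn{\ux}{\ue_1}{\letsyn{\ux}{\ue_2}{\ux}}}{\aeap{\aelam{\tau_1}{x_1}{\aeap{\aelam{\tau_2}{x_2}{x_2}}{e_2}}}{e_1}}{\tau_2}
\]
i.e. the two bindings of $\ux$ expand to the distinct variables $x_1$ and $x_2$, with the inner binding updating the expansion of $\ux$ exactly as in the type-level derivation above.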
For each remaining rule in Rules (\ref{rules:istypeU}) and Rules (\ref{rules:hastypeU}), the corresponding type expansion or typed expression expansion rule is obtained in the same mechanical fashion.
% \begin{mathpar}
% \refstepcounter{equation}
% % \label{rule:expandsU-tlam}
% % \refstepcounter{equation}
% % \label{rule:expandsU-tap}
% % \refstepcounter{equation}
% \label{rule:expandsU-fold}
% \refstepcounter{equation}
% \label{rule:expandsU-unfold}
% \refstepcounter{equation}
% \label{rule:expandsU-tpl}
% \refstepcounter{equation}
% \label{rule:expandsU-pr}
% \refstepcounter{equation}
% \label{rule:expandsU-in}
% \refstepcounter{equation}
% \label{rule:expandsU-case}
% \inferrule{J_1\\ \cdots \\ J_k}{J}
% \end{mathpar}
% the corresponding typed expansion rule is
% \begin{mathpar}
% \inferrule{
% \Uof{J_1} \\
% \cdots\\
% \Uof{J_k}
% }{
% \Uof{J}
% }
% \end{mathpar}
% where
% \[\begin{split}
% \Uof{\istypeU{\Delta}{\tau}} & = \expandsTU{\Uof{\Delta}}{\Uof{\tau}}{\tau} \\
% \Uof{\hastypeU{\Gamma}{\Delta}{e}{\tau}} & = \expandsU{\Uof{\Gamma}}{\Uof{\Delta}}{\uPsi}{\Uof{e}}{e}{\tau}\\
% \Uof{\{J_i\}_{i \in \labelset}} & = \{\Uof{J_i}\}_{i \in \labelset}
% \end{split}\]
% and where:
% \begin{itemize}
% \item $\Uof{\tau}$ is defined as follows:
% \begin{itemize}
% \item When $\tau$ is of definite form, $\Uof{\tau}$ is defined as in Sec. \ref{sec:syntax-U}.
% \item When $\tau$ is of indefinite form, $\Uof{\tau}$ is a uniquely corresponding metavariable of sort $\mathsf{UTyp}$ also of indefinite form. For example, in Rule (\ref{rule:istypeU-parr}), $\tau_1$ and $\tau_2$ are of indefinite form, i.e. they match arbitrary types. The rule transformation simply ``hats'' them, i.e. $\Uof{\tau_1}=\utau_1$ and $\Uof{\tau_2}=\utau_2$.
% \end{itemize}
% \item $\Uof{e}$ is defined as follows
% \begin{itemize}
% \item When $e$ is of definite form, $\Uof{e}$ is defined as in Sec. \ref{sec:syntax-U}.
% \item When $e$ is of indefinite form, $\Uof{e}$ is a uniquely corresponding metavariable of sort $\mathsf{UExp}$ also of indefinite form. For example, $\Uof{e_1}=\ue_1$ and $\Uof{e_2}=\ue_2$.
% \end{itemize}
% \item $\Uof{\Delta}$ is defined as follows:
% \begin{itemize}
% \item When $\Delta$ is of definite form, $\Uof{\Delta}$ is defined as above.
% \item When $\Delta$ is of indefinite form, $\Uof{\Delta}$ is a uniquely corresponding metavariable ranging over unexpanded type formation contexts. For example, $\Uof{\Delta} = \uDelta$.
% \end{itemize}
% \item $\Uof{\Gamma}$ is defined as follows:
% \begin{itemize}
% \item When $\Gamma$ is of definite form, $\Uof{\Gamma}$ produces the corresponding unexpanded typing context as follows:
% \begin{align*}
% \Uof{\emptyset} & = \uGG{\emptyset}{\emptyset}\\
% \Uof{\Gamma, \Ghyp{x}{\tau}} & = \Uof{\Gamma}, \uGhyp{\identifierof{x}}{x}{\tau}
% \end{align*}
% \item When $\Gamma$ is of indefinite form, $\Uof{\Gamma}$ is a uniquely corresponding metavariable ranging over unexpanded typing contexts. For example, $\Uof{\Gamma} = \uGamma$.
% \end{itemize}
% \end{itemize}
% It is instructive to use this rule transformation to generate Rules (\ref{rules:expandsTU}) and Rules (\ref{rule:expandsU-var}) through (\ref{rule:expandsU-tap}) above.
We omit the remaining rules, i.e. Rules (\ref*{rule:expandsU-fold}) through (\ref*{rule:expandsU-case}); defining them by this mechanical correspondence, rather than writing them out explicitly, avoids a number of rules that are of limited marginal interest.
Moreover, this demonstrates the general technique for generating typed expansion rules for unexpanded types and expressions of common form, so our exposition is somewhat ``robust'' to changes to the inner core. %o that when the inner core changes, typed expansion rules our exposition somewhat robust to changes to the inner core (though not to changes to the judgement forms in the statics of the inner core).% Even if changes to the judgement forms in the statics of the inner core are needed (e.g. the addition of a symbol context), it is easy to see would correspond to changes in the generic specification above. % \begin{subequations}\label{rules:expandsU} % \begin{equation*}\label{rule:expandsU-var} % \inferrule{ }{\expandsU{\Delta}{\Gamma, x : \tau}{\uPsi}{x}{x}{\tau}} % \end{equation*} % \begin{equation*}\label{rule:expandsU-lam} % \inferrule{ % \istypeU{\Delta}{\tau}\\ % \expandsU{\Delta}{\Gamma, x : \tau}{\uPsi}{\ue}{e}{\tau'} % }{\expandsUX{\aulam{\tau}{x}{\ue}}{\aelam{\tau}{x}{e}}{\aparr{\tau}{\tau'}}} % \end{equation*} % \begin{equation*}\label{rule:expandsU-ap} % \inferrule{ % \expandsUX{\ue_1}{e_1}{\aparr{\tau}{\tau'}}\\ % \expandsUX{\ue_2}{e_2}{\tau} % }{ % \expandsUX{\auap{\ue_1}{\ue_2}}{\aeap{e_1}{e_2}}{\tau'} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-tlam} % \inferrule{ % \expandsU{\Delta, \Dhyp{t}}{\Gamma}{\uPsi}{\ue}{e}{\tau} % }{ % \expandsUX{\autlam{t}{\ue}}{\aetlam{t}{e}}{\aall{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-tap} % \inferrule{ % \expandsUX{\ue}{e}{\aall{t}{\tau}}\\ % \istypeU{\Delta}{\tau'} % }{ % \expandsUX{\autap{\ue}{\tau'}}{\aetap{e}{\tau'}}{[\tau'/t]\tau} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-fold} % \inferrule{ % \istypeU{\Delta, \Dhyp{t}}{\tau}\\ % \expandsUX{\ue}{e}{[\arec{t}{\tau}/t]\tau} % }{ % \expandsUX{\aufold{t}{\tau}{\ue}}{\aefold{e}}{\arec{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-unfold} % \inferrule{ % \expandsUX{\ue}{e}{\arec{t}{\tau}} % }{ % \expandsUX{\auunfold{\ue}}{\aeunfold{e}}{[\arec{t}{\tau}/t]\tau} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-tpl} % \inferrule{ % \{\expandsUX{\ue_i}{e_i}{\tau_i}\}_{i \in \labelset} % }{ % \expandsUX{\autpl{\labelset}{\mapschema{\ue}{i}{\labelset}}}{\aetpl{\labelset}{\mapschema{e}{i}{\labelset}}}{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-pr} % \inferrule{ % \expandsUX{\ue}{e}{\aprod{\labelset, \ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}} % }{ % \expandsUX{\aupr{\ell}{\ue}}{\aepr{\ell}{e}}{\tau} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-in} % \inferrule{ % \{\istypeU{\Delta}{\tau_i}\}_{i \in \labelset}\\ % \istypeU{\Delta}{\tau}\\ % \expandsUX{\ue}{e}{\tau} % }{ % \left\{\shortstack{$\Delta~\Gamma \vdash_\uPsi \auin{\labelset, \ell}{\ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}{\ue}$\\$\leadsto$\\$\aein{\labelset, \ell}{\ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}{e} : \asum{\labelset, \ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}$\vspace{-1.2em}}\right\} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-case} % \inferrule{ % \expandsUX{\ue}{e}{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}}\\ % \{\expandsU{\Delta}{\Gamma, \Ghyp{x_i}{\tau_i}}{\uPsi}{\ue_i}{e_i}{\tau}\}_{i \in \labelset} % }{ % \expandsUX{\aucase{\labelset}{\ue}{\mapschemab{x}{\ue}{i}{\labelset}}}{\aecase{\labelset}{e}{\mapschemab{x}{e}{i}{\labelset}}}{\tau} 
% } % \end{equation*} % \end{subequations} \paragraph{seTLM Definition and Application} The two remaining typed expansion rules, Rules (\ref{rule:expandsU-syntax}) and (\ref{rule:expandsU-tsmap}), govern the seTLM definition and application forms. They are defined in the next two subsections, respectively. % \begin{equation*}\label{rule:expandsU-syntax} % \inferrule{ % \istypeU{\Delta}{\tau}\\ % \expandsU{\emptyset}{\emptyset}{\emptyset}{\ueparse}{\eparse}{\aparr{\tBody}{\tParseResultExp}}\\\\ % a \notin \domof{\uPsi}\\ % \expandsU{\Delta}{\Gamma}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}{\ue}{e}{\tau'} % }{ % \expandsUX{\audefuetsm{\tau}{\ueparse}{\tsmv}{\ue}}{e}{\tau'} % } % \end{equation*} % \begin{equation*}\label{rule:expandsU-tsmap} % \inferrule{ % \encodeBody{b}{\ebody}\\ % \evalU{\ap{\eparse}{\ebody}}{\inj{\lbltxt{SuccessE}}{\ecand}}\\ % \decodeCondE{\ecand}{\ce}\\\\ % \cvalidE{\emptyset}{\emptyset}{\esceneU{\Delta}{\Gamma}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}{b}}{\ce}{e}{\tau} % }{ % \expandsU{\Delta}{\Gamma}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}{\autsmap{b}{\tsmv}}{e}{\tau} % } % \end{equation*} %\end{subequations} %Notice that each form of expanded expression (Figure \ref{fig:U-expanded-terms}) corresponds to a form of unexpanded expression (Figure \ref{fig:U-unexpanded-terms}). For each typing rule in Rules (\ref{rules:hastypeU}), there is a corresponding typed expansion rule -- Rules (\ref{rule:expandsU-var}) through (\ref{rule:expandsU-case}) -- where the unexpanded and expanded forms correspond. The premises also correspond -- if a typing judgement appears as a premise of a typing rule, then the corresponding premise in the corresponding typed expansion rule is the corresponding typed expansion judgement. The seTLM context is not extended or inspected by these rules (it is only ``threaded through'' them opaquely). %There are two unexpanded expression forms that do not correspond to an expanded expression form: the seTLM definition form, and the seTLM application form. The rules governing these two forms interact with the seTLM context, and are the topics of the next two subsections, respectively. \subsection{seTLM Definitions}\label{sec:U-uetsm-definition} The seTLM definition form is \[\uesyntax{\tsmv}{\utau}{\eparse}{\ue}\] %The operational form corresponding to this stylized form is \[\audefuetsm{\utau}{\eparse}{\tsmv}{\ue}\] An unexpanded expression of this form defines an {seTLM} identified as $\tsmv$ with \emph{unexpanded type annotation} $\utau$ and \emph{parse function} $\eparse$ for use within $\ue$. 
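For instance, a hypothetical seTLM for regular expression patterns, identified as $\tsmv_{\mathtt{rx}}$, might be introduced by
\[
\uesyntax{\tsmv_{\mathtt{rx}}}{\utau_{\mathtt{rx}}}{\eparse_{\mathtt{rx}}}{\ue}
\]
where $\utau_{\mathtt{rx}}$ is the unexpanded type of pattern values and $\eparse_{\mathtt{rx}}$ is a closed parse function; the TLM can then be applied only within $\ue$. The rule below makes the requirements on the type annotation and the parse function precise.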
Rule (\ref*{rule:expandsU-syntax}) defines typed expansion of this form: % \begin{subequations}[resume] % \begin{equation*}\label{rule:expandsU-syntax} % \inferrule{ % \istypeU{\Delta}{\tau}\\ % \expandsU{\emptyset}{\emptyset}{\emptyset}{\ueparse}{\eparse}{\aparr{\tBody}{\tParseResultExp}}\\\\ % \expandsU{\Delta}{\Gamma}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}{\ue}{e}{\tau'} % }{ % \expandsUX{\audefuetsm{\tau}{\ueparse}{\tsmv}{\ue}}{e}{\tau'} % } % \end{equation*} \begin{equation*}\tag{\ref{rule:expandsU-syntax}} \inferrule{ \expandsTU{\uDelta}{\utau}{\tau}\\ \hastypeU{\emptyset}{\emptyset}{\eparse}{\aparr{\tBody}{\tParseResultExp}}\\\\ \evalU{\eparse}{\eparse'}\\ \expandsU{\uDelta}{\uGamma}{\uPsi, \uShyp{\tsmv}{a}{\tau}{\eparse'}}{\ue}{e}{\tau'} }{ \expandsUX{\uesyntax{\tsmv}{\utau}{\eparse}{\ue}}{e}{\tau'} } \end{equation*} % \end{subequations} The premises of this rule can be understood as follows, in order: \begin{enumerate} \item The first premise expands the unexpanded type annotation. \item The second premise checks that the parse function, $\eparse$, is a closed expanded function\footnote{In Chapter \ref{chap:static-eval}, we add the machinery necessary for parse functions that are neither closed nor yet expanded.} of the following type: \[\aparr{\tBody}{\tParseResultExp}\] %$\miniVerseUE$.%to generate the \emph{expanded parse function}, $\eparse$. %Notice that this occurs under empty contexts, i.e. parse functions cannot refer to the surrounding bindings. %The parse function must be of type $\aparr{\tBody}{\tParseResultExp}$ where the type abbreviations $\tBody$ and $\tParseResultExp$ are defined as follows. The type abbreviated $\tBody$ classifies encodings of literal bodies, $b$. The mapping from literal bodies to values of type $\tBody$ is defined by the \emph{body encoding judgement} $\encodeBody{b}{\ebody}$. An inverse mapping is defined by the \emph{body decoding judgement} $\decodeBody{\ebody}{b}$. \[\begin{array}{ll} \textbf{Judgement Form} & \textbf{Description}\\ \encodeBody{b}{e} & \text{$b$ has encoding $e$}\\ \decodeBody{e}{b} & \text{$e$ has decoding $b$} \end{array}\] Rather than defining $\tBody$ explicitly, and these judgements inductively against that definition (which would be tedious and uninteresting), it suffices to define the following condition, which establishes an isomorphism between literal bodies and values of type $\tBody$ mediated by the judgements above. \begingroup \def\thetheorem{\ref{condition:body-isomorphism}} \begin{condition}[Body Isomorphism] ~ \begin{enumerate} \item For every literal body $b$, we have that $\encodeBody{b}{\ebody}$ for some $\ebody$ such that $\hastypeUC{\ebody}{\tBody}$ and $\isvalU{\ebody}$. \item If $\hastypeUC{\ebody}{\tBody}$ and $\isvalU{\ebody}$ then $\decodeBody{\ebody}{b}$ for some $b$. \item If $\encodeBody{b}{\ebody}$ then $\decodeBody{\ebody}{b}$. \item If $\hastypeUC{\ebody}{\tBody}$ and $\isvalU{\ebody}$ and $\decodeBody{\ebody}{b}$ then $\encodeBody{b}{\ebody}$. \item If $\encodeBody{b}{\ebody}$ and $\encodeBody{b}{\ebody'}$ then $\ebody = \ebody'$. \item If $\hastypeUC{\ebody}{\tBody}$ and $\isvalU{\ebody}$ and $\decodeBody{\ebody}{b}$ and $\decodeBody{\ebody}{b'}$ then $b=b'$. 
\end{enumerate} \end{condition} \endgroup The return type of the parse function, $\tParseResultExp$, abbreviates a labeled sum type that distinguishes parse errors from successful parses:\footnote{In VerseML, the \li{ParseError} constructor of \li{parse_result} required an error message and an error location, but we omit these in our formalization for simplicity.} \begin{align*} L_\mathtt{SE} & \defeq \lbltxt{ParseError}, \lbltxt{SuccessE}\\ \tParseResultExp & \defeq \asum{\labelset_\mathtt{SE}}{ \mapitem{\lbltxt{ParseError}}{\prodt{}}, \mapitem{\lbltxt{SuccessE}}{\tCEExp} } \end{align*} %[\mapitem{\lbltxt{ParseError}}{\prodt{}}, \mapitem{\lbltxt{SuccessE}}{\tCEExp}] % \] The type abbreviated $\tCEExp$ classifies encodings of \emph{proto-expressions}, $\ce$ (pronounced ``grave $e$''.) The syntax of proto-expressions, defined in Figure \ref{fig:U-candidate-terms}, will be described when we describe proto-expansion validation in Sec. \ref{sec:ce-syntax-U}. The mapping from proto-expressions to values of type $\tCEExp$ is defined by the \emph{proto-expression encoding judgement}, $\encodeCondE{\ce}{e}$. An inverse mapping is defined by the \emph{proto-expression decoding judgement}, $\decodeCondE{e}{\ce}$. \[\begin{array}{ll} \textbf{Judgement Form} & \textbf{Description}\\ \encodeCondE{\ce}{e} & \text{$\ce$ has encoding $e$}\\ \decodeCondE{e}{\ce} & \text{$e$ has decoding $\ce$} \end{array}\] Again, rather than picking a particular definition of $\tCEExp$ and defining the judgements above inductively against it, we only state the following condition, which establishes an isomorphism between values of type $\tCEExp$ and proto-expressions. \begingroup \def\thetheorem{\ref{condition:proto-expression-isomorphism}} \begin{condition}[Proto-Expression Isomorphism] ~ \begin{enumerate} \item For every $\ce$, we have $\encodeCondE{\ce}{\ecand}$ for some $\ecand$ such that $\hastypeUC{\ecand}{\tCEExp}$ and $\isvalU{\ecand}$. \item If $\hastypeUC{\ecand}{\tCEExp}$ and $\isvalU{\ecand}$ then $\decodeCondE{\ecand}{\ce}$ for some $\ce$. \item If $\encodeCondE{\ce}{\ecand}$ then $\decodeCondE{\ecand}{\ce}$. \item If $\hastypeUC{\ecand}{\tCEExp}$ and $\isvalU{\ecand}$ and $\decodeCondE{\ecand}{\ce}$ then $\encodeCondE{\ce}{\ecand}$. \item If $\encodeCondE{\ce}{\ecand}$ and $\encodeCondE{\ce}{\ecand'}$ then $\ecand=\ecand'$. \item If $\hastypeUC{\ecand}{\tCEExp}$ and $\isvalU{\ecand}$ and $\decodeCondE{\ecand}{\ce}$ and $\decodeCondE{\ecand}{\ce'}$ then $\ce=\ce'$. \end{enumerate} \end{condition} \endgroup \item The third premise of Rule (\ref{rule:expandsU-syntax}) evaluates the parse function to a value. \item The final premise of Rule (\ref{rule:expandsU-syntax}) extends the seTLM context, $\uPsi$, with the newly determined {seTLM definition}, and proceeds to assign a type, $\tau'$, and expansion, $e$, to $\ue$. The conclusion of Rule (\ref{rule:expandsU-syntax}) assigns this type and expansion to the seTLM definition as a whole.% i.e. TLMs define behavior that is relevant during typed expansion, but not during evaluation. \emph{seTLM contexts}, $\uPsi$, are of the form $\uAS{\uA}{\Psi}$, where $\uA$ is a \emph{TLM identifier expansion context} and $\Psi$ is a \emph{seTLM definition context}. A \emph{TLM identifier expansion context}, $\uA$, is a finite function mapping each TLM identifier $\tsmv \in \domof{\uA}$ to the \emph{TLM identifier expansion}, $\vExpands{\tsmv}{a}$, for some \emph{TLM name}, $a$. 
We write $\ctxUpdate{\uA}{\tsmv}{a}$ for the TLM identifier expansion context that maps $\tsmv$ to $\vExpands{\tsmv}{a}$, and defers to $\uA$ for all other TLM identifiers (i.e. the previous mapping is \emph{updated}.)
An \emph{seTLM definition context}, $\Psi$, is a finite function mapping each TLM name $a \in \domof{\Psi}$ to an \emph{expanded seTLM definition}, $\xuetsmbnd{a}{\tau}{\eparse}$, where $\tau$ is the seTLM's type annotation, and $\eparse$ is its parse function. We write $\Psi, \xuetsmbnd{a}{\tau}{\eparse}$ when $a \notin \domof{\Psi}$ for the extension of $\Psi$ that maps $a$ to $\xuetsmbnd{a}{\tau}{\eparse}$.
% We write $\uetsmenv{\Delta}{\Psi}$ when all the type annotations in $\Psi$ are well-formed assuming $\Delta$, and the parse functions in $\Psi$ are closed and of type $\parr{\tBody}{\tParseResultExp}$.
We define $\uPsi, \uShyp{\tsmv}{a}{\tau}{\eparse}$, when $\uPsi=\uAS{\uA}{\Psi}$, as an abbreviation of \[\uAS{\ctxUpdate{\uA}{\tsmv}{a}}{\Psi, \xuetsmbnd{a}{\tau}{\eparse}}\]
We distinguish TLM identifiers, $\tsmv$, from TLM names, $a$, for much the same reason that we distinguish type and expression identifiers from type and expression variables: in order to support TLM definitions that reuse the identifier of a previously defined TLM, without an implicit renaming convention.
%Moreover, this distinction will be crucial in the semantics of TLM abbreviations in Chapter \ref{chap:ptsms}.
\end{enumerate}
% \[\begin{array}{ll}
% \textbf{Judgement Form} & \textbf{Description}\\
% \uetsmenv{\Delta}{\uPsi} & \text{$\uPsi$ is well-formed assuming $\Delta$}\end{array}\]
% This judgement is inductively defined by the following rules:
% \begin{subequations}[intermezzo]\label{rules:uetsmenv-U}
% \begin{equation*}\label{rule:uetsmenv-empty}
% \inferrule{ }{\uetsmenv{\Delta}{\emptyset}}
% \end{equation*}
% \begin{equation*}\label{rule:uetsmenv-ext}
% \inferrule{
% \uetsmenv{\Delta}{\uPsi}\\
% \istypeU{\Delta}{\tau}\\
% \hastypeU{\emptyset}{\emptyset}{\eparse}{\aparr{\tBody}{\tParseResultExp}}
% }{
% \uetsmenv{\Delta}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}
% }
% \end{equation*}
% \end{subequations}
\subsection{seTLM Application}\label{sec:U-uetsm-application}
The unexpanded expression form for applying an seTLM identified by $\tsmv$ to a literal form with literal body $b$ is:
\[ \utsmap{\tsmv}{b} \]
This stylized form uses backticks to delimit the literal body, but other generalized literal forms, like those described in Figure \ref{fig:literal-forms}, could also be included as derived forms in the textual syntax.
% (we omit them for simplicity).
%The corresponding operational form is $\autsmap{b}{\tsmv}$.
%i.e. for each literal body $b$, the operator $\texttt{uapuetsm}[b]$ is indexed by the TLM name $\tsmv$ and takes no arguments.
%\footnote{This is in following the conventions in \emph{PFPL} \cite{pfpl}, where operators parameters allow for the use of metatheoretic objects that are not syntax trees or binding trees, e.g. $\mathsf{str}[s]$ and $\mathsf{num}[n]$.}
Note that the literal body, $b$, indexes this form rather than appearing as a subterm; it is carried through unparsed until typed expansion.
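To summarize the bookkeeping just introduced, the following OCaml sketch -- a meta-level illustration only, not part of the formal system, with TLM identifiers and names modeled as strings and with type annotations and parse functions left abstract -- records the two finite maps that make up an seTLM context, together with the update-versus-extend discipline described above.
\begin{verbatim}
(* Meta-level sketch: TLM identifiers and names as strings. *)
module StringMap = Map.Make (String)

(* An expanded seTLM definition pairs a type annotation with a parse
   function; both are left abstract here. *)
type ('typ, 'exp) setlm_def = { annot : 'typ; parse : 'exp }

type ('typ, 'exp) setlm_ctx = {
  (* uA: each TLM identifier expands to a TLM name; rebinding an
     identifier *updates* this map. *)
  id_expansion : string StringMap.t;
  (* Psi: each TLM name maps to its expanded definition; names are
     chosen fresh, so this map only ever *extends*. *)
  defs : ('typ, 'exp) setlm_def StringMap.t;
}

(* Extend an seTLM context with a new definition, as in the final
   premise of the seTLM definition rule. *)
let bind ctx ~ident ~name ~annot ~parse =
  { id_expansion = StringMap.add ident name ctx.id_expansion;
    defs = StringMap.add name { annot; parse } ctx.defs }

(* Look a TLM up by identifier, as in the first premise of the seTLM
   application rule below. *)
let lookup ctx ident =
  match StringMap.find_opt ident ctx.id_expansion with
  | None -> None
  | Some name -> StringMap.find_opt name ctx.defs
\end{verbatim}
The separation into two maps is what allows a later definition to reuse the identifier of an earlier one without disturbing the earlier definition itself.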
The typed expansion rule governing seTLM application is below:
% \begin{subequations}[resume]
% \begin{equation*}\label{rule:expandsU-tsmap}
% \inferrule{
% \encodeBody{b}{\ebody}\\
% \evalU{\ap{\eparse}{\ebody}}{\inj{\lbltxt{SuccessE}}{\ecand}}\\
% \decodeCondE{\ecand}{\ce}\\\\
% \cvalidE{\emptyset}{\emptyset}{\esceneU{\Delta}{\Gamma}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}{b}}{\ce}{e}{\tau}
% }{
% \expandsU{\Delta}{\Gamma}{\uPsi, \xuetsmbnd{\tsmv}{\tau}{\eparse}}{\autsmap{b}{\tsmv}}{e}{\tau}
% }
% \end{equation*}
\begin{equation*}\tag{\ref{rule:expandsU-tsmap}}
\inferrule{
\uPsi = \uPsi', \uShyp{\tsmv}{a}{\tau}{\eparse}\\\\
\encodeBody{b}{\ebody}\\
\evalU{\ap{\eparse}{\ebody}}{\aein{\mathtt{SuccessE}}{\ecand}}\\
\decodeCondE{\ecand}{\ce}\\\\
\segOK{\segof{\ce}}{b}\\
\cvalidE{\emptyset}{\emptyset}{\esceneU{\uDelta}{\uGamma}{\uPsi}{b}}{\ce}{e}{\tau}
}{
\expandsU{\uDelta}{\uGamma}{\uPsi}{\utsmap{\tsmv}{b}}{e}{\tau}
}
\end{equation*}
The premises of Rule (\ref{rule:expandsU-tsmap}) can be understood as follows, in order:
\begin{enumerate}
\item The first premise ensures that $\tsmv$ has been defined and extracts the type annotation and parse function.
\item The second premise determines the encoding of the literal body, $\ebody$. This term is closed per Condition \ref{condition:body-isomorphism}.
\item The third premise applies the parse function $\eparse$ to the encoding of the literal body. The parse function is closed by well-formedness of $\uPsi$ (which, in turn, is maintained by the TLM definition rule, Rule (\ref{rule:expandsU-syntax}), described above).
If parsing succeeds, i.e. a value of the form $\aein{\mathtt{SuccessE}}{\ecand}$ results from evaluation, then $\ecand$ will be a value of type $\tCEExp$ (assuming a well-formed seTLM context, by application of the Preservation condition, Condition \ref{condition:preservation-UP}.) We call $\ecand$ the \emph{encoding of the proto-expansion}.
If the parse function produces a value labeled $\lbltxt{ParseError}$, then typed expansion fails. No rule is necessary to handle this case.
\item The fourth premise decodes the encoding of the proto-expansion to produce the \emph{proto-expansion}, $\ce$, itself.
\item The fifth premise determines a segmentation, $\segof{\ce}$, and ensures that it is valid with respect to $b$. In particular, the predicate $\segOK{\psi}{b}$ checks that each segment in $\psi$ has non-negative length and is within bounds of $b$, and that the segments in $\psi$ do not overlap and operate at a consistent sort and type. The definition of this predicate is given in Appendix \ref{appendix:segmentations-U}.
\item The final premise of Rule (\ref{rule:expandsU-tsmap}) \emph{validates} the proto-expansion and simultaneously generates the \emph{final expansion}, $e$, which appears in the conclusion of the rule. The proto-expression validation judgement is discussed next.
\end{enumerate} \subsection{Syntax of Proto-Expansions}\label{sec:ce-syntax-U} \begin{figure} \hspace{-5px}$\arraycolsep=3.5pt\begin{array}{lllllll} \textbf{Sort} & & & \textbf{Operational Form} & \textbf{Stylized Form} & \textbf{Description}\\ \mathsf{PrTyp} & \ctau & ::= & t & t & \text{variable}\\ &&& \aceparr{\ctau}{\ctau} & \parr{\ctau}{\ctau} & \text{partial function}\\ &&& \aceall{t}{\ctau} & \forallt{t}{\ctau} & \text{polymorphic}\\ &&& \acerec{t}{\ctau} & \rect{t}{\ctau} & \text{recursive}\\ &&& \aceprod{\labelset}{\mapschema{\ctau}{i}{\labelset}} & \prodt{\mapschema{\ctau}{i}{\labelset}} & \text{labeled product}\\ &&& \acesum{\labelset}{\mapschema{\ctau}{i}{\labelset}} & \sumt{\mapschema{\ctau}{i}{\labelset}} & \text{labeled sum}\\ \LCC &&& \color{Yellow} & \color{Yellow} & \color{Yellow}\\ &&& \acesplicedt{m}{n} & \splicedt{m}{n} & \text{spliced type ref.}\\\ECC \mathsf{PrExp} & \ce & ::= & x & x & \text{variable}\\ &&& \aceasc{\ctau}{\ce} & \asc{\ce}{\ctau} & \text{ascription}\\ &&& \aceletsyn{x}{\ce}{\ce} & \letsyn{x}{\ce}{\ce} & \text{value binding}\\ &&& \acelam{\ctau}{x}{\ce} & \lam{x}{\ctau}{\ce} & \text{abstraction}\\ &&& \aceap{\ce}{\ce} & \ap{\ce}{\ce} & \text{application}\\ &&& \acetlam{t}{\ce} & \Lam{t}{\ce} & \text{type abstraction}\\ &&& \acetap{\ce}{\ctau} & \App{\ce}{\ctau} & \text{type application}\\ &&& \acefold{\ce} & \fold{\ce} & \text{fold}\\ &&& \aceunfold{\ce} & \unfold{\ce} & \text{unfold}\\ &&& \acetpl{\labelset}{\mapschema{\ce}{i}{\labelset}} & \tpl{\mapschema{\ce}{i}{\labelset}} & \text{labeled tuple}\\ &&& \acepr{\ell}{\ce} & \prj{\ce}{\ell} & \text{projection}\\ &&& \acein{\ell}{\ce} & \inj{\ell}{\ce} & \text{injection}\\ &&& \acecase{\labelset}{\ce}{\mapschemab{x}{\ce}{i}{\labelset}} & \caseof{\ce}{\mapschemab{x}{\ce}{i}{\labelset}} & \text{case analysis}\\ \LCC &&& \color{Yellow} & \color{Yellow} & \color{Yellow}\\ &&& \acesplicede{m}{n}{\ctau} & \splicede{m}{n}{\ctau} & \text{spliced expr. ref.}\ECC \end{array}$ \caption[Syntax of $\miniVerseUE$ proto-types and proto-expressions]{Syntax of $\miniVerseUE$ proto-types and proto-expressions.} \label{fig:U-candidate-terms} \end{figure} Figure \ref{fig:U-candidate-terms} defines the syntax of proto-types, $\ctau$, and proto-expressions, $\ce$. Proto-types and -expressions are ABTs identified up to $\alpha$-equivalence in the usual manner. Each expanded form maps onto a proto-expansion form. We refer to these as the \emph{common proto-expansion forms}. The mapping is given explicitly in Appendix \ref{appendix:proto-expansions-SES}. There are two ``interesting'' proto-expansion forms, highlighted in yellow in Figure \ref{fig:U-candidate-terms}: a proto-type form for \emph{references to spliced unexpanded types}, $\acesplicedt{m}{n}$, and a proto-expression form for \emph{references to spliced unexpanded expressions}, $\acesplicede{m}{n}{\ctau}$, where $m$ and $n$ are natural numbers.%TLM utilize these to splice types and unexpanded expressions out of literal bodies. \subsection{Proto-Expansion Validation}\label{sec:ce-validation-U} The \emph{proto-expansion validation judgements} validate proto-types and proto-expressions and simultaneously generate their final expansions.% are types and expanded expressions, respectively. 
\[\begin{array}{ll} \textbf{Judgement Form} & \textbf{Description}\\ \cvalidT{\Delta}{\tscenev}{\ctau}{\tau} & \text{$\ctau$ has well-formed expansion $\tau$}\\ \cvalidE{\Delta}{\Gamma}{\escenev}{\ce}{e}{\tau} & \text{$\ce$ has expansion $e$ of type $\tau$} \end{array}\] \emph{Type splicing scenes}, $\tscenev$, are of the form $\tsceneU{\uDelta}{b}$ and \emph{expression splicing scenes}, $\escenev$, are of the form $\esceneU{\uDelta}{\uGamma}{\uPsi}{b}$. We write $\tsfrom{\escenev}$ for the type splicing scene constructed by dropping the unexpanded typing context and seTLM context from $\escenev$: \[\tsfrom{\esceneU{\uDelta}{\uGamma}{\uPsi}{b}} = \tsceneU{\uDelta}{b}\] The purpose of splicing scenes is to ``remember'', during the proto-expansion validation process, the unexpanded type formation context, $\uDelta$, unexpanded typing context, $\uGamma$, seTLM context, $\uPsi$, and the literal body, $b$, from the seTLM application site (cf. Rule (\ref{rule:expandsU-tsmap}) above.) These structures will be necessary to validate the references to spliced unexpanded types and expressions that appear within the proto-expansion. \subsubsection{Proto-Type Validation}\label{sec:SE-proto-type-validation} The \emph{proto-type validation judgement}, $\cvalidT{\Delta}{\tscenev}{\ctau}{\tau}$, is inductively defined by Rules (\ref{rules:cvalidT-U}). \paragraph{Common Forms} Rules (\ref{rule:cvalidT-U-tvar}) through (\ref{rule:cvalidT-U-sum}) validate proto-types of common form. These rules, like the rules for common unexpanded type forms, mirror the corresponding type formation rules, i.e. Rules (\ref{rules:istypeU}). The type splicing scene, $\tscenev$, passes opaquely through these rules. The first three of these are reproduced below. %Each of these rules is defined based on the corresponding type formation rule, i.e. Rules (\ref{rule:istypeU-var}) through (\ref{rule:istypeU-sum}), respectively. 
For example, the following proto-types validation rules are based on type formation rules (\ref{rule:istypeU-var}), (\ref{rule:istypeU-parr}) and (\ref{rule:istypeU-all}), respectively: % \begin{subequations}%\label{rules:cvalidT-U} \begin{equation*}\tag{\ref{rule:cvalidT-U-tvar}} \inferrule{ }{ \cvalidT{\Delta, \Dhyp{t}}{\tscenev}{t}{t} } \end{equation*} \begin{equation*}\tag{\ref{rule:cvalidT-U-parr}} \inferrule{ \cvalidT{\Delta}{\tscenev}{\ctau_1}{\tau_1}\\ \cvalidT{\Delta}{\tscenev}{\ctau_2}{\tau_2} }{ \cvalidT{\Delta}{\tscenev}{\aceparr{\ctau_1}{\ctau_2}}{\aparr{\tau_1}{\tau_2}} } \end{equation*} \begin{equation*}\tag{\ref{rule:cvalidT-U-all}} \inferrule { \cvalidT{\Delta, \Dhyp{t}}{\tscenev}{\ctau}{\tau} }{ \cvalidT{\Delta}{\tscenev}{\aceall{t}{\ctau}}{\aall{t}{\tau}} } \end{equation*} % \begin{equation*}\label{rule:cvalidT-U-rec} % \inferrule{ % \cvalidT{\Delta, \Dhyp{t}}{\tscenev}{\ctau}{\tau} % }{ % \cvalidT{\Delta}{\tscenev}{\acerec{t}{\ctau}}{\arec{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidT-U-prod} % \inferrule{ % \{\cvalidT{\Delta}{\tscenev}{\ctau_i}{\tau_i}\}_{i \in \labelset} % }{ % \cvalidT{\Delta}{\tscenev}{\aceprod{\labelset}{\mapschema{\ctau}{i}{\labelset}}}{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidT-U-sum} % \inferrule{ % \{\cvalidT{\Delta}{\tscenev}{\ctau_i}{\tau_i}\}_{i \in \labelset} % }{ % \cvalidT{\Delta}{\tscenev}{\acesum{\labelset}{\mapschema{\ctau}{i}{\labelset}}}{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % We can express this scheme more precisely with the following rule transformation. For each rule in Rules (\ref{rules:istypeU}), % \begin{mathpar} % % \refstepcounter{equation} % % \label{rule:cvalidT-U-rec} % % \refstepcounter{equation} % % \label{rule:cvalidT-U-prod} % % \refstepcounter{equation} % % \label{rule:cvalidT-U-sum} % % \inferrule{J_1\\\cdots\\J_k}{J} % \end{mathpar} % the corresponding proto-types validation rule is % \begin{mathpar} % \inferrule{ % \VTypof{J_1}\\ % \cdots\\ % \VTypof{J_k} % }{ % \VTypof{J} % } % \end{mathpar} % where % \[\begin{split} % \VTypof{\istypeU{\Delta}{\tau}} & = \cvalidT{\Delta}{\tscenev}{\VTypof{\tau}}{\tau}\\ % \VTypof{\{J_i\}_{i \in \labelset}} & = \{\VTypof{J_i}\}_{i \in \labelset} % \end{split}\] % and where $\VTypof{\tau}$, when $\tau$ is a metapattern of sort $\mathsf{Typ}$, is a metapattern of sort $\mathsf{CETyp}$ defined as follows: % \begin{itemize} % \item When $\tau$ is of definite form, $\VTypof{\tau}$ is defined as follows: % \begin{align*} % \VTypof{t} & = t\\ % \VTypof{\aparr{\tau_1}{\tau_2}} & = \aceparr{\VTypof{\tau_1}}{\VTypof{\tau_2}}\\ % \VTypof{\aall{t}{\tau}} & = \aceall{t}{\VTypof{\tau}}\\ % \VTypof{\arec{t}{\tau}} & = \acerec{t}{\VTypof{\tau}}\\ % \VTypof{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} & = \aceprod{\labelset}{\mapschemax{\VTypofv}{\tau}{i}{\labelset}}\\ % \VTypof{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}} & = \acesum{\labelset}{\mapschemax{\VTypofv}{\tau}{i}{\labelset}} % \end{align*} % \item When $\tau$ is of indefinite form, $\VTypof{\tau}$ is a uniquely corresponding metapattern also of indefinite form. For example, $\VTypof{\tau_1}=\ctau_1$ and $\VTypof{\tau_2}=\ctau_2$. % \end{itemize} % It is instructive to use this rule transformation to generate Rules (\ref{rule:cvalidT-U-tvar}) through (\ref{rule:cvalidT-U-all}) above. We omit the remaining rules, i.e. Rules (\ref*{rule:cvalidT-U-rec}) through (\ref*{rule:cvalidT-U-sum}). 
Notice that in Rule (\ref{rule:cvalidT-U-tvar}), only type variables tracked by $\Delta$, the expansion's local type validation context, are well-formed. Type variables tracked by the application site unexpanded type formation context, which is a component of the type splicing scene, $\tscenev$, are not validated. %Indeed, $\tscenev$ passes opaquely through the rules above. %This achieves \emph{context-independent expansion} as described in Sec. \ref{sec:splicing-and-hygiene} for type variables -- seTLMs cannot impose ``hidden constraints'' on the application site unexpanded type formation context, because the type variables bound at the application site are simply not directly available to proto-types. \paragraph{References to Spliced Types} The only proto-type form that does not correspond to a type form is $\acesplicedt{m}{n}$, which is a \emph{reference to a spliced unexpanded type}, i.e. it indicates that an unexpanded type should be parsed out from the literal body, which appears in the type splicing scene $\tscenev$, beginning at position $m$ and ending at position $n$, where $m$ and $n$ are natural numbers. Rule (\ref{rule:cvalidT-U-splicedt}) governs this form: \begin{equation*}\tag{\ref{rule:cvalidT-U-splicedt}} \inferrule{ \parseUTyp{\bsubseq{b}{m}{n}}{\utau}\\ \expandsTU{\uDD{\uD}{\Delta_\text{app}}}{\utau}{\tau}\\ \Delta \cap \Delta_\text{app} = \emptyset }{ \cvalidT{\Delta}{\tsceneU{\uDD{\uD}{\Delta_\text{app}}}{b}}{\acesplicedt{m}{n}}{\tau} } \end{equation*} The first premise of this rule extracts the indicated subsequence of $b$ using the partial metafunction $\bsubseq{b}{m}{n}$ and parses it using the partial metafunction $\mathsf{parseUTyp}(b)$, which was characterized in Sec. \ref{sec:syntax-U}, to produce the spliced unexpanded type itself, $\utau$. The second premise of Rule (\ref{rule:cvalidT-U-splicedt}) performs type expansion of $\utau$ under the application site unexpanded type formation context, $\uDD{\uD}{\Delta_\text{app}}$, which is a component of the type splicing scene. The hypotheses in the expansion's local type formation context, $\Delta$, are not made available to $\tau$. %This enforces the injunction on shadowing as described in Sec. \ref{sec:splicing-and-hygiene} for type variables that appear in proto-types. The third premise of Rule (\ref{rule:cvalidT-U-splicedt}) imposes the constraint that the proto-expansion's type formation context, $\Delta$, be disjoint from the application site type formation context, $\Delta_\text{app}$. This premise can always be discharged by $\alpha$-varying the proto-expansion that the reference to the spliced type appears within. Together, these two premises enforce the injunction on type variable capture as described in Sec. \ref{sec:uetsms-validation} -- the TLM provider can choose type variable names freely within a proto-expansion. We will consider this formally in Sec. \ref{sec:SE-metatheory} below. %, because the language prevents them from shadowing type variables at the application site (by $\alpha$-varying the proto-expansion as needed.)%Such a change in bound variable names is possible again because variables bound by the seTLM provider in a proto-expansion cannot ``leak into'' spliced terms because the hypotheses in $\Delta$ are not made available to the spliced type, $\tau$. Rules (\ref{rules:cvalidT-U}) validate the following lemma, which establishes that the final expansion of a valid proto-type is a well-formed type under the combined type formation context. 
\begingroup
\def\thetheorem{\ref{lemma:candidate-expansion-type-validation}}
\begin{lemma}[Proto-Expansion Type Validation] If $\cvalidT{\Delta}{\tsceneU{\uDD{\uD}{\Delta_\text{app}}}{b}}{\ctau}{\tau}$ and $\Delta \cap \Delta_\text{app}=\emptyset$ then $\istypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\tau}$.
\end{lemma}
\endgroup
\subsubsection{Proto-Expression Validation}
The \emph{proto-expression validation judgement}, $\cvalidE{\Delta}{\Gamma}{\escenev}{\ce}{e}{\tau}$, is defined mutually inductively with the typed expansion judgement by Rules (\ref{rules:cvalidE-U}) as follows.% This is necessary because a typed expansion judgement appears as a premise in Rule (\ref{rule:cvalidE-U-splicede}) below, and a proto-expression validation judgement appears as a premise in Rule (\ref{rule:expandsU-tsmap}) above.
\paragraph{Common Forms}
Rules (\ref{rule:cvalidE-U-var}) through (\ref{rule:cvalidE-U-case}) validate proto-expressions of common form, as well as ascriptions and let binding. Once again, the rules for common forms mirror the typing rules, i.e. Rules (\ref{rules:hastypeU}). The expression splicing scene, $\escenev$, passes opaquely through these rules. The first five of these rules are reproduced below:
%For each expanded expression form defined in Figure \ref{fig:U-expanded-terms}, Figure \ref{fig:U-candidate-terms} defines a corresponding proto-expression form. The validation rules for proto-expressions of these forms are each based on the corresponding typing rule in Rules (\ref{rules:hastypeU}). For example, the validation rules for proto-expressions of variable, function and function application form are based on Rules (\ref{rule:hastypeU-var}) through (\ref{rule:hastypeU-ap}), respectively:
%\begin{subequations}%\label{rules:cvalidE-U}
\begin{equation*}\tag{\ref{rule:cvalidE-U-var}}
\inferrule{ }{
\cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau}}{\escenev}{x}{x}{\tau}
}
\end{equation*}
\begin{equation*}\tag{\ref{rule:cvalidE-U-asc}}
\inferrule{
\cvalidT{\Delta}{\tsfrom{\escenev}}{\ctau}{\tau}\\
\cvalidE{\Delta}{\Gamma}{\escenev}{\ce}{e}{\tau}
}{
\cvalidE{\Delta}{\Gamma}{\escenev}{\aceasc{\ctau}{\ce}}{e}{\tau}
}
\end{equation*}
\begin{equation*}\tag{\ref{rule:cvalidE-U-letsyn}}
\inferrule{
\cvalidE{\Delta}{\Gamma}{\escenev}{\ce_1}{e_1}{\tau_1}\\
\cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau_1}}{\escenev}{\ce_2}{e_2}{\tau_2}
}{
\cvalidE{\Delta}{\Gamma}{\escenev}{\aceletsyn{x}{\ce_1}{\ce_2}}{ \aeap{\aelam{\tau_1}{x}{e_2}}{e_1} }{\tau_2}
}
\end{equation*}
\begin{equation*}\tag{\ref{rule:cvalidE-U-lam}}
\inferrule{
\cvalidT{\Delta}{\tsfrom{\escenev}}{\ctau}{\tau}\\
\cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau}}{\escenev}{\ce}{e}{\tau'}
}{
\cvalidE{\Delta}{\Gamma}{\escenev}{\acelam{\ctau}{x}{\ce}}{\aelam{\tau}{x}{e}}{\aparr{\tau}{\tau'}}
}
\end{equation*}
\begin{equation*}\tag{\ref{rule:cvalidE-U-ap}}
\inferrule{
\cvalidE{\Delta}{\Gamma}{\escenev}{\ce_1}{e_1}{\aparr{\tau}{\tau'}}\\
\cvalidE{\Delta}{\Gamma}{\escenev}{\ce_2}{e_2}{\tau}
}{
\cvalidE{\Delta}{\Gamma}{\escenev}{\aceap{\ce_1}{\ce_2}}{\aeap{e_1}{e_2}}{\tau'}
}
\end{equation*}
Notice that in Rule (\ref{rule:cvalidE-U-var}), only variables tracked by the proto-expansion typing context, $\Gamma$, are validated. Variables in the application site unexpanded typing context, which appears within the expression splicing scene $\escenev$, are not validated. This achieves \emph{context independence} as described in Sec.
\ref{sec:uetsms-validation} -- seTLMs cannot impose ``hidden constraints'' on the application site unexpanded typing context, because the variable bindings at the application site are not directly available to proto-expansions. We will consider this formally in Sec. \ref{sec:SE-metatheory} below. \paragraph{References to Spliced Unexpanded Expressions} The only proto-expression form that does not correspond to an expanded expression form is $\acesplicede{m}{n}{\ctau}$, which is a \emph{reference to a spliced unexpanded expression}, i.e. it indicates that an unexpanded expression should be parsed out from the literal body beginning at position $m$ and ending at position $n$. Rule (\ref{rule:cvalidE-U-splicede}) governs this form: \begin{equation*}\tag{\ref{rule:cvalidE-U-splicede}} \inferrule{ \cvalidT{\emptyset}{\tsfrom{\escenev}}{\ctau}{\tau}\\ \escenev=\esceneU{\uDD{\uD}{\Delta_\text{app}}}{\uGG{\uG}{\Gamma_\text{app}}}{\uPsi}{b}\\ \parseUExp{\bsubseq{b}{m}{n}}{\ue}\\ \expandsU{\uDD{\uD}{\Delta_\text{app}}}{\uGG{\uG}{\Gamma_\text{app}}}{\uPsi}{\ue}{e}{\tau}\\\\ \Delta \cap \Delta_\text{app} = \emptyset\\ \domof{\Gamma} \cap \domof{\Gamma_\text{app}} = \emptyset }{ \cvalidE{\Delta}{\Gamma}{\escenev}{\acesplicede{m}{n}{\ctau}}{e}{\tau} } \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-splicede} % \inferrule{ % \parseUExp{\bsubseq{b}{m}{n}}{\ue}\\\\ % \expandsU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{\ue}{e}{\tau}\\ % \Delta \cap \Delta_\text{app} = \emptyset\\ % \domof{\Gamma} \cap \domof{\Gamma_\text{app}} = \emptyset % }{ % \cvalidE{\Delta}{\Gamma}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\splicede{m}{n}}{e}{\tau} % } % \end{equation*} The premises of this rule can be understood as follows: \begin{enumerate} \item The first premise of this rule validates and expands the type annotation. This type must be context independent. \item The second premise of this rule serves simply to reveal the components of the expression splicing scene. \item The third premise of this rule extracts the indicated subsequence of $b$ using the partial metafunction $\bsubseq{b}{m}{n}$ and parses it using the partial metafunction $\mathsf{parseUExp}(b)$, characterized in Sec. \ref{sec:syntax-U}, to produce the referenced spliced unexpanded expression, $\ue$. \item The fourth premise of Rule (\ref{rule:cvalidE-U-splicede}) performs typed expansion of $\ue$ assuming the application site contexts that appear in the expression splicing scene. Notice that the hypotheses in $\Delta$ and $\Gamma$ are not made available to $\ue$. \item The fifth premise of Rule (\ref{rule:cvalidE-U-splicede}) imposes the constraint that the proto-expansion's type formation context, $\Delta$, be disjoint from the application site type formation context, $\Delta_\text{app}$. Similarly, the sixth premise requires that the proto-expansion's typing context, $\Gamma$, be disjoint from the application site typing context, $\Gamma_\text{app}$. These two premises can always be discharged by $\alpha$-varying the proto-expression that the reference to the spliced unexpanded expression appears within. Together, these premises enforce the prohibition on capture as described in Sec. \ref{sec:uetsms-validation} -- the TLM provider can choose variable names freely within a proto-expansion, because the language prevents them from shadowing those at the application site. Again, we will consider this formally in Sec. \ref{sec:SE-metatheory} below. 
\end{enumerate} %\end{subequations} % \begin{subequations}\label{rules:cvalidE-U} % \begin{equation*}\label{rule:cvalidE-U-var} % \inferrule{ }{ % \cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau}}{\escenev}{x}{x}{\tau} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-lam} % \inferrule{ % \cvalidT{\Delta}{\tsfrom{\escenev}}{\ctau}{\tau}\\ % \cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau}}{\escenev}{\ce}{e}{\tau'} % }{ % \cvalidE{\Delta}{\Gamma}{\escenev}{\acelam{\ctau}{x}{\ce}}{\aelam{\tau}{x}{e}}{\aparr{\tau}{\tau'}} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-ap} % \inferrule{ % \cvalidE{\Delta}{\Gamma}{\escenev}{\ce_1}{e_1}{\aparr{\tau}{\tau'}}\\ % \cvalidE{\Delta}{\Gamma}{\escenev}{\ce_2}{e_2}{\tau} % }{ % \cvalidE{\Delta}{\Gamma}{\escenev}{\aceap{\ce_1}{\ce_2}}{\aeap{e_1}{e_2}}{\tau'} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-tlam} % \inferrule{ % \cvalidE{\Delta, \Dhyp{t}}{\Gamma}{\escenev}{\ce}{e}{\tau} % }{ % \cvalidEX{\acetlam{t}{\ce}}{\aetlam{t}{e}}{\aall{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-tap} % \inferrule{ % \cvalidEX{\ce}{e}{\aall{t}{\tau}}\\ % \cvalidT{\Delta}{\tsfrom{\escenev}}{\ctau'}{\tau'} % }{ % \cvalidEX{\acetap{\ce}{\ctau'}}{\aetap{e}{\tau'}}{[\tau'/t]\tau} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-fold} % \inferrule{ % \cvalidT{\Delta, \Dhyp{t}}{\escenev}{\ctau}{\tau}\\ % \cvalidEX{\ce}{e}{[\arec{t}{\tau}/t]\tau} % }{ % \cvalidEX{\acefold{t}{\ctau}{\ce}}{\aefold{e}}{\arec{t}{\tau}} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-unfold} % \inferrule{ % \cvalidEX{\ce}{e}{\arec{t}{\tau}} % }{ % \cvalidEX{\aceunfold{\ce}}{\aeunfold{e}}{[\arec{t}{\tau}/t]\tau} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-tpl} % \inferrule{ % \{\cvalidEX{\ce_i}{e_i}{\tau_i}\}_{i \in \labelset} % }{ % \cvalidEX{\acetpl{\labelset}{\mapschema{\ce}{i}{\labelset}}}{\aetpl{\labelset}{\mapschema{e}{i}{\labelset}}}{\aprod{\labelset}{\mapschema{\tau}{i}{\labelset}}} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-pr} % \inferrule{ % \cvalidEX{\ce}{e}{\aprod{\labelset, \ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}} % }{ % \cvalidEX{\acepr{\ell}{\ce}}{\aepr{\ell}{e}}{\tau} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-in} % \inferrule{ % \{\cvalidT{\Delta}{\tsfrom{\escenev}}{\ctau_i}{\tau_i}\}_{i \in \labelset}\\ % \cvalidT{\Delta}{\tsfrom{\escenev}}{\ctau}{\tau}\\ % \cvalidEX{\ce}{e}{\tau} % }{ % \left\{\shortstack{$\Delta~\Gamma \vdash_\uPsi \acein{\labelset, \ell}{\ell}{\mapschema{\ctau}{i}{\labelset}; \mapitem{\ell}{\ctau}}{\ce}$\\$\leadsto$\\$\aein{\labelset, \ell}{\ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}{e} : \asum{\labelset, \ell}{\mapschema{\tau}{i}{\labelset}; \mapitem{\ell}{\tau}}$\vspace{-1.2em}}\right\} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-case} % \inferrule{ % \cvalidEX{\ce}{e}{\asum{\labelset}{\mapschema{\tau}{i}{\labelset}}}\\ % \{\cvalidE{\Delta}{\Gamma, \Ghyp{x_i}{\tau_i}}{\escenev}{\ue_i}{e_i}{\tau}\}_{i \in \labelset} % }{ % \cvalidEX{\acecase{\labelset}{\ce}{\mapschemab{x}{\ce}{i}{\labelset}}}{\aecase{\labelset}{e}{\mapschemab{x}{e}{i}{\labelset}}}{\tau} % } % \end{equation*} % \begin{equation*}\label{rule:cvalidE-U-splicede} % \inferrule{ % \parseUExp{\bsubseq{b}{m}{n}}{\ue}\\\\ % \Delta \cap \Delta_\text{app} = \emptyset\\ % \domof{\Gamma} \cap \domof{\Gamma_\text{app}} = \emptyset\\ % \expandsU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{\ue}{e}{\tau} % }{ % 
\cvalidE{\Delta}{\Gamma}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\acesplicede{m}{n}}{e}{\tau} % } % \end{equation*} % \end{subequations} % Each form of expanded expression, $e$, corresponds to a form of proto-expression, $\ce$ (compare Figure \ref{fig:U-expanded-terms} and Figure \ref{fig:U-candidate-terms}). For each typing rule in Rules \ref{rules:hastypeU}, there is a corresponding proto-expression validation rule -- Rules (\ref{rule:cvalidE-U-var}) to (\ref{rule:cvalidE-U-case}) -- where the proto-expression and expanded expression correspond. The premises also correspond. %Candidate expansions cannot themselves define or apply TLMs. This simplifies our metatheory, though it can be inconvenient at times for TLM providers. We discuss adding the ability to use TLMs within proto-expansions in Sec. \ref{sec:tsms-in-expansions}. \subsection{Metatheory}\label{sec:SE-metatheory} \subsubsection{Typed Expansion} Let us now consider Theorem \ref{thm:typed-expansion-short-U}, which was mentioned at the beginnning of Sec. \ref{sec:typed-expansion-U} and is reproduced below: \begingroup \def\thetheorem{\ref{thm:typed-expansion-short-U}} \begin{theorem}[Typed Expression Expansion] \hspace{-3px}If $\expandsU{\uDD{\uD}{\Delta}\hspace{-3px}}{\uGG{\uG}{\Gamma}\hspace{-3px}}{\uPsi}{\ue}{e}{\tau}$ then $\hastypeU{\Delta}{\Gamma}{e}{\tau}$. \end{theorem} \endgroup To prove this theorem, we must prove the following stronger theorem, because the proto-expression validation judgement is defined mutually inductively with the typed expansion judgement: \begingroup \def\thetheorem{\ref{thm:typed-expansion-full-U}} \begin{theorem}[Typed Expansion (Full)] ~ \begin{enumerate} \item If $\expandsU{\uDD{\uD}{\Delta}}{\uGG{\uG}{\Gamma}}{\uAS{\uA}{\Psi}}{\ue}{e}{\tau}$ then $\hastypeU{\Delta}{\Gamma}{e}{\tau}$. \item If $\cvalidE{\Delta}{\Gamma}{\esceneU{\uDD{\uD}{\Delta_\text{app}}}{\uGG{\uG}{\Gamma_\text{app}}}{\uAS{\uA}{\Psi}}{b}}{\ce}{e}{\tau}$ and $\Delta \cap \Delta_\text{app} = \emptyset$ and $\domof{\Gamma} \cap \domof{\Gamma_\text{app}} = \emptyset$ then $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e}{\tau}$. \end{enumerate} \end{theorem} \endgroup \begin{proof} By mutual rule induction over Rules (\ref{rules:expandsU}) and Rules (\ref{rules:cvalidE-U}). The full proof is given in Appendix \ref{appendix:SES-typed-expression-expansion-metatheory}. We will reproduce the interesting cases below. The proof of part 1 proceeds by inducting over the typed expansion assumption. The only interesting cases are those related to seTLM definition and application, reproduced below. In the following cases, let $\uDelta=\uDD{\uD}{\Delta}$ and $\uGamma=\uGG{\uG}{\Gamma}$ and $\uPsi=\uAS{\uA}{\Psi}$. 
\begin{byCases} \item[\text{(\ref{rule:expandsU-syntax})}] We have \begin{pfsteps} \item \ue=\uesyntax{\tsmv}{\utau'}{\eparse}{\ue'} \BY{assumption} \item \expandsTU{\uDelta}{\utau'}{\tau'} \BY{assumption} \pflabel{expandsTU} \item \hastypeU{\emptyset}{\emptyset}{\eparse}{\aparr{\tBody}{\tParseResultExp}} \BY{assumption}\pflabel{eparse} \item \expandsU{\uDelta}{\uGamma}{\uPsi, \uShyp{\tsmv}{a}{\tau'}{\eparse}}{\ue'}{e}{\tau} \BY{assumption}\pflabel{expandsU} % \item \uetsmenv{\Delta}{\Psi} \BY{assumption}\pflabel{uetsmenv1} \item \istypeU{\Delta}{\tau'} \BY{Lemma \ref{lemma:type-expansion-U} to \pfref{expandsTU}} \pflabel{istype} % \item \uetsmenv{\Delta}{\Psi, \xuetsmbnd{\tsmv}{\tau'}{\eparse}} \BY{Definition \ref{def:seTLM-def-ctx-formation} on \pfref{uetsmenv1}, \pfref{istype} and \pfref{eparse}}\pflabel{uetsmenv3} \item \hastypeU{\Delta}{\Gamma}{e}{\tau} \BY{IH, part 1(a) on \pfref{expandsU}} \end{pfsteps} \resetpfcounter \item[\text{(\ref{rule:expandsU-tsmap})}] We have \begin{pfsteps} \item \ue=\utsmap{\tsmv}{b} \BY{assumption} \item \uA = \uA', \vExpands{\tsmv}{a} \BY{assumption} \item \Psi=\Psi', \xuetsmbnd{a}{\tau}{\eparse} \BY{assumption} \item \encodeBody{b}{\ebody} \BY{assumption} \item \evalU{\eparse(\ebody)}{\aein{\lbltxt{SuccessE}}{\ecand}} \BY{assumption} \item \decodeCondE{\ecand}{\ce} \BY{assumption} \item \cvalidE{\emptyset}{\emptyset}{\esceneU{\uDelta}{\uGamma}{\uPsi}{b}}{\ce}{e}{\tau} \BY{assumption}\pflabel{cvalidE} % \item \uetsmenv{\Delta}{\Psi} \BY{assumption} \pflabel{uetsmenv} \item \emptyset \cap \Delta = \emptyset \BY{finite set intersection} \pflabel{delta-cap} \item {\emptyset} \cap \domof{\Gamma} = \emptyset \BY{finite set intersection} \pflabel{gamma-cap} \item \hastypeU{\emptyset \cup \Delta}{\emptyset \cup \Gamma}{e}{\tau} \BY{IH, part 2 on \pfref{cvalidE}, \pfref{delta-cap}, and \pfref{gamma-cap}} \pflabel{penultimate} \item \hastypeU{\Delta}{\Gamma}{e}{\tau} \BY{finite set and finite function identity over \pfref{penultimate}} \end{pfsteps} \resetpfcounter \end{byCases} The proof of part 2 proceeds by induction over the proto-expression validation assumption. The only interesting case governs references to spliced expressions. In the following cases, let $\uDelta_\text{app}=\uDD{\uD}{\Delta_\text{app}}$ and $\uGamma_\text{app}=\uGG{\uG}{\Gamma_\text{app}}$ and $\uPsi = \uAS{\uA}{\Psi}$. 
\begin{byCases} % \item[\text{(\ref{rule:cvalidE-U-var})}] ~ % \begin{pfsteps*} % \item $\ce=x$ \BY{assumption} % \item $e=x$ \BY{assumption} % \item $\Gamma=\Gamma', \Ghyp{x}{\tau}$ \BY{assumption} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gamma', \Ghyp{x}{\tau}}{x}{\tau}$ \BY{Rule (\ref{rule:hastypeU-var})} \pflabel{hastypeU} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma', \Ghyp{x}{\tau}}{\Gamma_\text{app}}}{x}{\tau}$ \BY{Lemma \ref{lemma:weakening-U} over $\Gamma_\text{app}$ to \pfref{hastypeU}} % \end{pfsteps*} % \resetpfcounter % \item[\text{(\ref{rule:cvalidE-U-lam})}] ~ % \begin{pfsteps*} % \item $\ce=\acelam{\ctau_1}{x}{\ce'}$ \BY{assumption} % \item $e=\aelam{\tau_1}{x}{e'}$ \BY{assumption} % \item $\tau=\aparr{\tau_1}{\tau_2}$ \BY{assumption} % \item $\cvalidT{\Delta}{\tsceneU{\uDelta_\text{app}}{b}}{\ctau_1}{\tau_1}$ \BY{assumption} \pflabel{cvalidT} % \item $\cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau_1}}{\esceneU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{b}}{\ce'}{e'}{\tau_2}$ \BY{assumption} \pflabel{cvalidE} % % \item $\uetsmenv{\Delta_\text{app}}{\Psi}$ \BY{assumption} \pflabel{uetsmenv} % \item $\Delta \cap \Delta_\text{app}=\emptyset$ \BY{assumption} \pflabel{delta-disjoint} % \item $\domof{\Gamma} \cap \domof{\Gamma_\text{app}}=\emptyset$ \BY{assumption} \pflabel{gamma-disjoint} % \item $x \notin \domof{\Gamma_\text{app}}$ \BY{identification convention} \pflabel{x-fresh} % \item $\domof{\Gamma, x : \tau_1} \cap \domof{\Gamma_\text{app}}=\emptyset$ \BY{\pfref{gamma-disjoint} and \pfref{x-fresh}} \pflabel{gamma-disjoint2} % \item $\istypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\tau_1}$ \BY{Lemma \ref{lemma:candidate-expansion-type-validation} on \pfref{cvalidT} and \pfref{delta-disjoint}} \pflabel{istype} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma, \Ghyp{x}{\tau_1}}{\Gamma_\text{app}}}{e'}{\tau_2}$ \BY{IH, part 2 on \pfref{cvalidE}, \pfref{delta-disjoint} and \pfref{gamma-disjoint2}} \pflabel{hastype1} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}, \Ghyp{x}{\tau_1}}{e'}{\tau_2}$ \BY{exchange over $\Gamma_\text{app}$ on \pfref{hastype1}} \pflabel{hastype2} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{\aelam{\tau_1}{x}{e'}}{\aparr{\tau_1}{\tau_2}}$ \BY{Rule (\ref{rule:hastypeU-lam}) on \pfref{istype} and \pfref{hastype2}} % \end{pfsteps*} % \resetpfcounter % \item[\text{(\ref{rule:cvalidE-U-ap})}] ~ % \begin{pfsteps*} % \item $\ce=\aceap{\ce_1}{\ce_2}$ \BY{assumption} % \item $e=\aeap{e_1}{e_2}$ \BY{assumption} % \item $\cvalidE{\Delta}{\Gamma}{\esceneU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{b}}{\ce_1}{e_1}{\aparr{\tau_2}{\tau}}$ \BY{assumption} \pflabel{cvalidE1} % \item $\cvalidE{\Delta}{\Gamma}{\esceneU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{b}}{\ce_2}{e_2}{\tau_2}$ \BY{assumption} \pflabel{cvalidE2} % % \item $\uetsmenv{\Delta_\text{app}}{\Psi}$ \BY{assumption} \pflabel{uetsmenv} % \item $\Delta \cap \Delta_\text{app}=\emptyset$ \BY{assumption} \pflabel{delta-disjoint} % \item $\domof{\Gamma} \cap \domof{\Gamma_\text{app}}=\emptyset$ \BY{assumption} \pflabel{gamma-disjoint} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e_1}{\aparr{\tau_2}{\tau}}$ \BY{IH, part 2 on \pfref{cvalidE1}, \pfref{delta-disjoint} and \pfref{gamma-disjoint}} \pflabel{hastypeU1} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e_2}{\tau_2}$ \BY{IH, part 2 on 
\pfref{cvalidE2}, \pfref{delta-disjoint} and \pfref{gamma-disjoint}} \pflabel{hastypeU2} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{\aeap{e_1}{e_2}}{\tau}$ \BY{Rule (\ref{rule:hastypeU-ap}) on \pfref{hastypeU1} and \pfref{hastypeU2}} % \end{pfsteps*} % \resetpfcounter % \item[\text{(\ref{rule:cvalidE-U-tlam})}] ~ % \begin{pfsteps} % \item \ce=\acetlam{t}{\ce'} \BY{assumption} % \item e = \aetlam{t}{e'} \BY{assumption} % \item \tau = \aall{t}{\tau'}\BY{assumption} % \item \cvalidE{\Delta, \Dhyp{t}}{\Gamma}{\esceneU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{b}}{\ce'}{e'}{\tau'} \BY{assumption} \pflabel{cvalidE} % % \item \uetsmenv{\Delta_\text{app}}{\Psi} \BY{assumption} \pflabel{uetsmenv} % \item \Delta \cap \Delta_\text{app}=\emptyset \BY{assumption} \pflabel{delta-disjoint} % \item \domof{\Gamma} \cap \domof{\Gamma_\text{app}}=\emptyset \BY{assumption} \pflabel{gamma-disjoint} % \item \Dhyp{t} \notin \Delta_\text{app} \BY{identification convention}\pflabel{t-fresh} % \item \Delta, \Dhyp{t} \cap \Delta_\text{app} = \emptyset \BY{\pfref{delta-disjoint} and \pfref{t-fresh}}\pflabel{delta-disjoint2} % \item \hastypeU{\Dcons{\Delta, \Dhyp{t}}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e'}{\tau'} \BY{IH, part 2 on \pfref{cvalidE}, \pfref{delta-disjoint2} and \pfref{gamma-disjoint}}\pflabel{hastype1} % \item \hastypeU{\Dcons{\Delta}{\Delta_\text{app}, \Dhyp{t}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e'}{\tau'} \BY{exchange over $\Delta_\text{app}$ on \pfref{hastype1}}\pflabel{hastype2} % \item \hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{\aetlam{t}{e'}}{\aall{t}{\tau'}} \BY{Rule (\ref{rule:hastypeU-tlam}) on \pfref{hastype2}} % \end{pfsteps} % \resetpfcounter % \item[{\text{(\ref{rule:cvalidE-U-tap})}}~\textbf{through}~{\text{(\ref{rule:cvalidE-U-case})}}] These cases follow analagously, i.e. we apply the IH, part 2 to all proto-expression validation judgements, Lemma \ref{lemma:candidate-expansion-type-validation} to all proto-type validation judgements, the identification convention to ensure that extended contexts remain disjoint, weakening and exchange as needed, and the corresponding typing rule in Rules (\ref{rule:hastypeU-tap}) through (\ref{rule:hastypeU-case}). 
% \\ \item[\text{(\ref{rule:cvalidE-U-splicede})}] ~ \begin{pfsteps*} \item $\ce=\acesplicede{m}{n}{\ctau}$ \BY{assumption} \item $ \escenev=\esceneU{\uDD{\uD}{\Delta_\text{app}}}{\uGG{\uG}{\Gamma_\text{app}}}{\uPsi}{b}$ \BY{assumption} \item $\cvalidT{\emptyset}{\tsfrom{\escenev}}{\ctau}{\tau}$ \BY{assumption} \item $\parseUExp{\bsubseq{b}{m}{n}}{\ue}$ \BY{assumption} \item $\expandsU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{\ue}{e}{\tau}$ \BY{assumption} \pflabel{expands} % \item $\uetsmenv{\Delta_\text{app}}{\Psi}$ \BY{assumption} \pflabel{uetsmenv} \item $\Delta \cap \Delta_\text{app}=\emptyset$ \BY{assumption} \pflabel{delta-disjoint} \item $\domof{\Gamma} \cap \domof{\Gamma_\text{app}}=\emptyset$ \BY{assumption} \pflabel{gamma-disjoint} \item $\hastypeU{\Delta_\text{app}}{\Gamma_\text{app}}{e}{\tau}$ \BY{IH, part 1 on \pfref{expands}} \pflabel{hastype} \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e}{\tau}$ \BY{Lemma \ref{lemma:weakening-U} over $\Delta$ and $\Gamma$ and exchange on \pfref{hastype}} \end{pfsteps*} \resetpfcounter \end{byCases} The mutual induction can be shown to be well-founded by showing that the following numeric metric on the judgements that we induct over is decreasing: \begin{align*} \sizeof{\expandsU{\uDelta}{\uGamma}{\uPsi}{\ue}{e}{\tau}} & = \sizeof{\ue}\\ \sizeof{\cvalidE{\Delta}{\Gamma}{\esceneU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{b}}{\ce}{e}{\tau}} & = \sizeof{b} \end{align*} where $\sizeof{b}$ is the length of $b$ and $\sizeof{\ue}$ is the sum of the lengths of the literal bodies in $\ue$ (see Appendix \ref{appendix:SES-body-lengths}.) The only case in the proof of part 1 that invokes part 2 is Case (\ref{rule:expandsU-tsmap}). There, we have that the metric remains stable: \begin{align*} & \sizeof{\expandsU{\uDelta}{\uGamma}{\uPsi}{\utsmap{\tsmv}{b}}{e}{\tau}}\\ =& \sizeof{\cvalidE{\emptyset}{\emptyset}{\esceneU{\uDelta}{\uGamma}{\uPsi}{b}}{\ce}{e}{\tau}}\\ =&\sizeof{b}\end{align*} The only case in the proof of part 2 that invokes part 1 is Case (\ref{rule:cvalidE-U-splicede}). There, we have that $\parseUExp{\bsubseq{b}{m}{n}}{\ue}$ and the IH is applied to the judgement $\expandsU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{\ue}{e}{\tau}$ where $\uDelta_\text{app}=\uDD{\uD}{\Delta_\text{app}}$ and $\uGamma_\text{app}=\uGG{\uG}{\Gamma_\text{app}}$ and $\uPsi=\uAS{\uA}{\Psi}$. Because the metric is stable when passing from part 1 to part 2, we must have that it is strictly decreasing in the other direction: \[\sizeof{\expandsU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{\ue}{e}{\tau}} < \sizeof{\cvalidE{\Delta}{\Gamma}{\esceneU{\uDelta_\text{app}}{\uGamma_\text{app}}{\uPsi}{b}}{\acesplicede{m}{n}{\ctau}}{e}{\tau}}\] i.e. by the definitions above, \[\sizeof{\ue} < \sizeof{b}\] This is established by appeal to the following two conditions. The first condition states that an unexpanded expression constructed by parsing a textual sequence $b$ is strictly smaller, as measured by the metric defined above, than the length of $b$, because some characters must necessarily be used to invoke a TLM and delimit each literal body. \begingroup \def\thetheorem{\ref{condition:body-parsing}} \begin{condition}[Expression Parsing Monotonicity] If $\parseUExp{b}{\ue}$ then $\sizeof{\ue} < \sizeof{b}$.\end{condition} \endgroup The second condition simply states that subsequences of $b$ are no longer than $b$. 
\begingroup \def\thetheorem{\ref{condition:body-subsequences}} \begin{condition}[Body Subsequencing] If $\bsubseq{b}{m}{n}=b'$ then $\sizeof{b'} \leq \sizeof{b}$. \end{condition} \endgroup Combining these two conditions, we have that $\sizeof{\ue} < \sizeof{b}$ as needed. \end{proof} % We need to define the following theorem about proto-expression validation mutually with Theorem \ref{thm:typed-expansion-U}. % \begin{theorem}[Proto-Expansion Expression Validation]\label{thm:candidate-expansion-validation-U} % If $\cvalidE{\Delta}{\Gamma}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\ce}{e}{\tau}$ and $\uetsmenv{\Delta_\text{app}}{\uPsi}$ then $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e}{\tau}$. % \end{theorem} % \begin{proof} By rule induction over Rules (\ref{rules:cvalidE-U}). % \begin{byCases} % \item[\text{(\ref{rule:cvalidE-U-var})}] ~ % \begin{pfsteps*} % \item $\ce=x$ \BY{assumption} % \item $e=x$ \BY{assumption} % \item $\Gamma=\Gamma', \Ghyp{x}{\tau}$ \BY{assumption} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gamma', \Ghyp{x}{\tau}}{x}{\tau}$ \BY{Rule (\ref{rule:hastypeU-var})} \pflabel{hastypeU} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma', \Ghyp{x}{\tau}}{\Gamma_\text{app}}}{x}{\tau}$ \BY{Lemma \ref{lemma:weakening-U} over $\Gamma_\text{app}$ to \pfref{hastypeU}} % \end{pfsteps*} % \resetpfcounter % \item[\text{(\ref{rule:cvalidE-U-lam})}] ~ % \begin{pfsteps*} % \item $\ce=\acelam{\ctau_1}{x}{\ce'}$ \BY{assumption} % \item $e=\aelam{\tau_1}{x}{e'}$ \BY{assumption} % \item $\tau=\aparr{\tau_1}{\tau_2}$ \BY{assumption} % \item $\cvalidT{\Delta}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\ctau_1}{\tau_1}$ \BY{assumption} \pflabel{cvalidT} % \item $\cvalidE{\Delta}{\Gamma, \Ghyp{x}{\tau_1}}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\ce'}{e'}{\tau_2}$ \BY{assumption} \pflabel{cvalidE} % \item $\uetsmenv{\Delta_\text{app}}{\uPsi}$ \BY{assumption} \pflabel{uetsmenv} % \item $\istypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\tau_1}$ \BY{Lemma \ref{lemma:candidate-expansion-type-validation} on \pfref{cvalidT}} \pflabel{istype} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma, \Ghyp{x}{\tau_1}}{\Gamma_\text{app}}}{e'}{\tau_2}$ \BY{IH on \pfref{cvalidE} and \pfref{uetsmenv}} \pflabel{hastype1} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}, \Ghyp{x}{\tau_1}}{e'}{\tau_2}$ \BY{exchange over $\Gamma_\text{app}$ on \pfref{hastype1}} \pflabel{hastype2} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{\aelam{\tau_1}{x}{e'}}{\aparr{\tau_1}{\tau_2}}$ \BY{Rule (\ref{rule:hastypeU-lam}) on \pfref{istype} and \pfref{hastype2}} % \end{pfsteps*} % \resetpfcounter % \item[\text{(\ref{rule:cvalidE-U-ap})}] ~ % \begin{pfsteps*} % \item $\ce=\aceap{\ce_1}{\ce_2}$ \BY{assumption} % \item $e=\aeap{e_1}{e_2}$ \BY{assumption} % \item $\cvalidE{\Delta}{\Gamma}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\ce_1}{e_1}{\aparr{\tau_1}{\tau}}$ \BY{assumption} \pflabel{cvalidE1} % \item $\cvalidE{\Delta}{\Gamma}{\esceneU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{b}}{\ce_2}{e_2}{\tau_1}$ \BY{assumption} \pflabel{cvalidE2} % \item $\uetsmenv{\Delta_\text{app}}{\uPsi}$ \BY{assumption} \pflabel{uetsmenv} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e_1}{\aparr{\tau_1}{\tau}}$ \BY{IH on \pfref{cvalidE1} and \pfref{uetsmenv}} \pflabel{hastypeU1} % \item 
$\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e_2}{\tau_1}$ \BY{IH on \pfref{cvalidE2} and \pfref{uetsmenv}} \pflabel{hastypeU2} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{\aeap{e_1}{e_2}}{\tau}$ \BY{Rule (\ref{rule:hastypeU-ap}) on \pfref{hastypeU1} and \pfref{hastypeU2}} % \end{pfsteps*} % \resetpfcounter % \item[\VExpof{\text{\ref{rule:hastypeU-tlam}}}~\text{through}~\VExpof{\text{\ref{rule:hastypeU-case}}}] These cases follow analagously, i.e. we apply the IH to all proto-expression validation premises, Lemma \ref{lemma:candidate-expansion-type-validation} to all proto-types validation premises, weakening and exchange as needed, and then apply the corresponding typing rule. % \\ % \item[\text{(\ref{rule:cvalidE-U-splicede})}] ~ % \begin{pfsteps*} % \item $\ce=\acesplicede{m}{n}$ \BY{assumption} % \item $\parseUExp{\bsubseq{b}{m}{n}}{\ue}$ \BY{assumption} % \item $\expandsU{\Delta_\text{app}}{\Gamma_\text{app}}{\uPsi}{\ue}{e}{\tau}$ \BY{assumption} \pflabel{expands} % \item $\uetsmenv{\Delta_\text{app}}{\uPsi}$ \BY{assumption} \pflabel{uetsmenv} % \item $\hastypeU{\Delta_\text{app}}{\Gamma_\text{app}}{e}{\tau}$ \BY{Theorem \ref{thm:typed-expansion-U} on \pfref{expands} and \pfref{uetsmenv}} \pflabel{hastype} % \item $\hastypeU{\Dcons{\Delta}{\Delta_\text{app}}}{\Gcons{\Gamma}{\Gamma_\text{app}}}{e}{\tau}$ \BY{Lemma \ref{lemma:weakening-U} on \pfref{hastype}} % \end{pfsteps*} % \resetpfcounter % \end{byCases} % \end{proof} %\qed \subsubsection{Abstract Reasoning Principles}\label{sec:uetsms-reasoning-principles} The following theorem summarizes the abstract reasoning principles that programmers can rely on when applying an seTLM. A descripition of each named clause is given in-line below. \begingroup \def\thetheorem{\ref{thm:tsc-SES}} \begin{theorem}[seTLM Abstract Reasoning Principles] If $\expandsU{\uDD{\uD}{\Delta}}{\uGG{\uG}{\Gamma}}{\uPsi}{\utsmap{\tsmv}{b}}{e}{\tau}$ then: \begin{enumerate} \item (\textbf{Typing 1}) $\uPsi = \uPsi', \uShyp{\tsmv}{a}{\tau}{\eparse}$ and $\hastypeU{\Delta}{\Gamma}{e}{\tau}$ \begin{quote} The type of the expansion is consistent with the type annotation on the seTLM definition. \end{quote} \item $\encodeBody{b}{\ebody}$ \item $\evalU{\ap{\eparse}{\ebody}}{\aein{\lbltxt{SuccessE}}{\ecand}}$ \item $\decodeCondE{\ecand}{\ce}$ \item (\textbf{Segmentation}) $\segOK{\segof{\ce}}{b}$ \begin{quote} The segmentation determined by the proto-expansion actually segments the literal body (i.e. each segment is in-bounds and the segments are non-overlapping.) \end{quote} \item $\segof{\ce} = \sseq{\acesplicedt{m'_i}{n'_i}}{\nty} \cup \sseq{\acesplicede{m_i}{n_i}{\ctau_i}}{\nexp}$ \item \textbf{(Typing 2)} $\sseq{ \expandsTU{\uDD{\uD}{\Delta}} { \parseUTypF{\bsubseq{b}{m'_i}{n'_i}} }{\tau'_i} }{\nty}$ and $\sseq{\istypeU{\Delta}{\tau'_i}}{\nty}$ \begin{quote} Each spliced type has a well-formed expansion at the application site. \end{quote} \item \textbf{(Typing 3)} $\sseq{ \cvalidT{\emptyset}{ \tsceneUP {\uDD {\uD}{\Delta} }{b} }{ \ctau_i }{\tau_i} }{\nexp}$ and $\sseq{\istypeU{\Delta}{\tau_i}}{\nexp}$ \begin{quote} Each type annotation on a reference to a spliced expression has a well-formed expansion at the application site. 
\end{quote} \item \textbf{(Typing 4)} $\sseq{ \expandsU {\uDD{\uD}{\Delta}} {\uGG{\uG}{\Gamma}} {\uPsi} {\parseUExpF{\bsubseq{b}{m_i}{n_i}}} {e_i} {\tau_i} }{\nexp}$ and $\sseq{\hastypeU{\Delta}{\Gamma}{e_i}{\tau_i}}{\nexp}$ \begin{quote} Each spliced expression has a well-typed expansion consistent with its type annotation. \end{quote} \item (\textbf{Capture Avoidance}) $e = [\sseq{\tau'_i/t_i}{\nty}, \sseq{e_i/x_i}{\nexp}]e'$ for some $\sseq{t_i}{\nty}$ and $\sseq{x_i}{\nexp}$ and $e'$ \begin{quote} The final expansion can be decomposed into a term with variables in place of each spliced type or expression. The expansions of these spliced types and expressions can be substituted into this term in the standard capture avoiding manner. \end{quote} \item (\textbf{Context Independence}) $\mathsf{fv}(e') \subset \sseq{t_i}{\nty} \cup \sseq{x_i}{\nexp}$ \begin{quote} The aforementioned decomposed term makes no mention of bindings in the application site context. \end{quote} % $\hastypeU % {\sseq{\Dhyp{t_i}}{\nty}} % {\sseq{x_i : \tau_i}{\nexp}} % {e'}{\tau}$ \end{enumerate} \end{theorem} \begin{proof} The proof, which involves auxiliary lemmas about the decomposition of proto-types and proto-expressions, is given in Appendix \ref{appendix:SES-reasoning-principles}. \end{proof} \endgroup This style of specifying the hygiene properties builds directly on the standard notion of capture-avoiding substitution for general ABTs. Prior work on hygiene for macro systems has instead explicitly specified how fresh variables are generated during expansion (e.g. \cite{DBLP:conf/esop/HermanW08}.) Our formal approach appears therefore to be more elegant in this regard. % The following theorem establishes that every valid proto-type generates a final expansion that can be decomposed into a context-independent term and context-dependent sub-terms, all of which arise from references to spliced types as summarized by the splice summary. % \begingroup % \def\thetheorem{\ref{thm:proto-type-expansion-decomposition-SES}} % \begin{theorem}[Proto-Type Expansion Decomposition] % If $\cvalidT{\Delta}{\tsceneU{\uDD{\uD}{\Delta_\text{app}}}{b}}{\ctau}{\tau}$ and $\segof{\ctau} = \sseq{\acesplicedt{m_i}{n_i}}{n}$ then all of the following hold: % \begin{enumerate} % \item $\sseq{\expandsTU{\uDD{\uD}{\Delta_\text{app}}}{ % \parseUTypF{\bsubseq{b}{m_i}{n_i}} % }{\tau_i}}{n}$ % % \item $\sseq{\istypeU{\Delta_\text{app}}{\tau_i}}{n}$ % \item $\tau = [\sseq{\tau_i/t_i}{n}]\tau'$ for some $\sseq{t_i}{n}$ and $\tau'$ % \item $\istypeU{\Delta \cup \sseq{\Dhyp{t_i}}{n}}{\tau'}$ % \end{enumerate} % \end{theorem} % \begin{proof} % By rule induction over Rules (\ref{rules:cvalidT-U}). % \begin{byCases} % \item[\text{(\ref{rule:cvalidT-U-tvar}) \textbf{through} (\ref{rule:cvalidT-U-sum})}] These cases follow by straightforward inductive argument (see appendix.) % \item[\text{(\ref{rule:cvalidT-U-splicedt})}] ~ % \begin{pfsteps} % \item \ctau = \acesplicedt{m}{n} \BY{assumption} % \item \segof{\acesplicedt{m}{n}} = \{ \acesplicedt{m}{n} \} \BY{definition} % \item \parseUTyp{\bsubseq{b}{m}{n}}{\utau} \BY{assumption} \pflabel{parseUTyp} % \item \expandsTU{\uDD{\uD}{\Delta_\text{app}}}{\utau}{\tau} \BY{assumption} \pflabel{expandsTU} % \item \istypeU{\Delta, \Dhyp{t}}{t} \BY{Rule (\ref{rule:istypeU-var})} \pflabel{istype} % \end{pfsteps} % The conclusions hold as follows: % \begin{enumerate} % \item \pfref{parseUTyp} and \pfref{expandsTU} % \item Choose $t$ and $t$. Then $\tau = [\tau/t]t$ by definition. 
% \item \pfref{istype} % \end{enumerate} % \end{byCases} % \end{proof} % \endgroup % The following theorem, together with Theorem \ref{thm:typed-expansion-short-U}, establishes \textbf{Typing}, \textbf{Segmentation} and \textbf{Context Independence} as discussed in Sec. \ref{sec:uetsms-validation}. % \begingroup % \def\thetheorem{\ref{thm:tsc-SES}} % \begin{theorem}[seTLM Typing and Context Independence] % If $\expandsU{\uDelta}{\uGamma}{\uPsi}{\utsmap{\tsmv}{b}}{e}{\tau}$ then: % \begin{enumerate} % \item (\textbf{Typing}) $\uPsi = \uPsi', \uShyp{\tsmv}{a}{\tau}{\eparse}$ % \item $\encodeBody{b}{\ebody}$ % \item $\evalU{\ap{\eparse}{\ebody}}{\lbltxt{SuccessE}\cdot\ecand}$ % \item $\decodeCondE{\ecand}{\ce}$ % \item (\textbf{Segmentation}) $\segOK{\segof{\ce}}{b}$ % \item (\textbf{Context Independence}) $\cvalidE{\emptyset}{\emptyset}{\esceneU{\uDelta}{\uGamma}{\uPsi}{b}}{\ce}{e}{\tau}$ % \end{enumerate} % \end{theorem} % \begin{proof} By rule induction over Rules (\ref{rules:expandsU}). The only rule that applies is Rule (\ref{rule:expandsU-tsmap}). The conclusions of the theorem are the premises of this rule. % \end{proof} % \endgroup % The following theorem establishes a prohibition on \textbf{Shadowing} as discussed in Sec. \ref{sec:uetsms-validation}. % \begingroup % \def\thetheorem{\ref{thm:shadowing-prohibition-SES}} % \begin{theorem}[Shadowing Prohibition] ~ % \begin{enumerate} % \item If $\cvalidT{\Delta}{\tsceneU{\uDD{\uD}{\Delta_\text{app}}}{b}}{\acesplicedt{m}{n}}{\tau}$ then:\begin{enumerate} % \item $\parseUTyp{\bsubseq{b}{m}{n}}{\utau}$ % \item $\expandsTU{\uDD{\uD}{\Delta_\text{app}}}{\utau}{\tau}$ % \item $\Delta \cap \Delta_\text{app} = \emptyset$ % \end{enumerate} % \item If $\cvalidE{\Delta}{\Gamma}{\escenev}{\acesplicede{m}{n}{\ctau}}{e}{\tau}$ then: % \begin{enumerate} % \item $\cvalidT{\emptyset}{\tsfrom{\escenev}}{\ctau}{\tau}$ % \item $ \escenev=\esceneU{\uDD{\uD}{\Delta_\text{app}}}{\uGG{\uG}{\Gamma_\text{app}}}{\uPsi}{b}$ % \item $\parseUExp{\bsubseq{b}{m}{n}}{\ue}$ % \item $\expandsU{\uDD{\uD}{\Delta_\text{app}}}{\uGG{\uG}{\Gamma_\text{app}}}{\uPsi}{\ue}{e}{\tau}$ % \item $\Delta \cap \Delta_\text{app} = \emptyset$ % \item $\domof{\Gamma} \cap \domof{\Gamma_\text{app}} = \emptyset$ % \end{enumerate} % \end{enumerate} % \end{theorem} % \begin{proof} ~ % \begin{enumerate} % \item By rule induction over Rules (\ref{rules:cvalidT-U}). The only rule that applies is Rule (\ref{rule:cvalidT-U-splicedt}). The conclusions are the premises of tihs rule. % \item By rule induction over Rules (\ref{rules:cvalidE-U}). The only rule that applies is Rule (\ref{rule:cvalidE-U-splicede}). The conclusions are the premises of tihs rule. % \end{enumerate} % \end{proof} % \endgroup
{ "alphanum_fraction": 0.7065919215, "avg_line_length": 71.0274250889, "ext": "tex", "hexsha": "b6f068b76e1f06bdb0e92feaa05f4fda1c51812b", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-04-19T22:24:32.000Z", "max_forks_repo_forks_event_min_datetime": "2016-04-19T22:24:32.000Z", "max_forks_repo_head_hexsha": "18df98bb9eea243f361558102d6331e8caab306d", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "cyrus-/thesis", "max_forks_repo_path": "uetsms.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "18df98bb9eea243f361558102d6331e8caab306d", "max_issues_repo_issues_event_max_datetime": "2016-04-20T19:45:26.000Z", "max_issues_repo_issues_event_min_datetime": "2016-04-19T22:34:20.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "cyrus-/thesis", "max_issues_repo_path": "uetsms.tex", "max_line_length": 1280, "max_stars_count": 15, "max_stars_repo_head_hexsha": "18df98bb9eea243f361558102d6331e8caab306d", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "cyrus-/thesis", "max_stars_repo_path": "uetsms.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-17T14:57:05.000Z", "max_stars_repo_stars_event_min_datetime": "2016-02-08T10:04:44.000Z", "num_tokens": 44891, "size": 139853 }
\documentclass{article} \usepackage{amsmath} \usepackage[margin=1.0in]{geometry} \usepackage{xcolor} \begin{document} \noindent Does $\displaystyle \sum_{n=1}^\infty \frac{n}{n!}$ diverge, converge absolutely, or converge conditionally? \subsection*{Solution} \begin{align*} L&=\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right|\\ &= \lim_{n \to \infty} \left| \frac{n+1}{(n+1)!} \cdot\frac{n!}{n} \right|\\ &= \lim_{n \to \infty} \left| \frac{n+1}{(n+1) \cdot n!} \cdot\frac{n!}{n} \right|\\ &= \lim_{n \to \infty} \left| \frac1n \right|\\ &= \lim_{n \to \infty} \frac1n \\ &= 0 \end{align*} Since $L < 1$, by the Ratio Test, the series $\displaystyle \sum_{n=1}^\infty \frac{n}{n!}$ converges absolutely. \end{document}%%%%%%%%%%%%%%%%% \begin{align*} L&=\lim_{n \to \infty} \sqrt[n]{|a_n|}\\ &= \lim_{n \to \infty} \sqrt[n]{\left| \right|}\\ \end{align*} \begin{align*} L&=\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right|\\ &= \lim_{n \to \infty} \left| \right|\\ \end{align*} \begin{align*} \lim_{n \to \infty} a_n &= \lim_{n \to \infty} \\ \end{align*} Since $\sum |a_n| = \sum a_n$, the series $\displaystyle \sum_{n=1}^\infty AAAAAAAAAAAAAA$ converges absolutely. Since $|r| < 1$, the series ... converges by the Geometric Series Test. Since $|r| \geq 1$, the series ... diverges by the Geometric Series Test. The function $f(x)=\frac{}{}$ is continuous, positive, and decreasing on $[1,\infty)$. \subsection*{Solution}
{ "alphanum_fraction": 0.6050531915, "avg_line_length": 28.3773584906, "ext": "tex", "hexsha": "0d8d8caa3d93bb40cdb4a4eb7232458af601e764", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2017-06-25T22:14:59.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-25T18:51:52.000Z", "max_forks_repo_head_hexsha": "db677132d89eb95dc5749dceeb9544c77b6b4a05", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "edward-kim-math/edward-d-kim.github.io", "max_forks_repo_path": "key/series/m5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "db677132d89eb95dc5749dceeb9544c77b6b4a05", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "edward-kim-math/edward-d-kim.github.io", "max_issues_repo_path": "key/series/m5.tex", "max_line_length": 114, "max_stars_count": null, "max_stars_repo_head_hexsha": "db677132d89eb95dc5749dceeb9544c77b6b4a05", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "edward-kim-math/edward-d-kim.github.io", "max_stars_repo_path": "key/series/m5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 583, "size": 1504 }
\documentclass[]{article} \usepackage{fullpage} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amsfonts} \usepackage[colorlinks=true, allcolors=blue]{hyperref} %Use Charter font \renewcommand{\familydefault}{bch} %opening \title{Bullet-Fluids Reference (draft version)} \author{} \date{02 January 2014} \begin{document} \maketitle \begin{abstract} Originally based on Fluids v.2 \cite{RH:2008}, the Bullet-Fluids library is a Zlib licensed extension to the Bullet Physics engine \cite{EC:2012} that is targeted at interactive or real-time simulation of fluids using the meshless particle method known as SPH.\\ The library is being developed as a draft for the production version, which is targeted at Bullet 3.x.\\ This document provides an introduction to the Bullet-Fluids library, and also includes some explanation of parameters and other details. \end{abstract} \tableofcontents \pagebreak \section{Introduction to Bullet-Fluids} \label{s_bulletFluidsIntro} \subsection{Important notes (please read this section before using the library)} \subsubsection{Simulation Scale} \label{s_simulationScale} The fluid simulation defines two scales, \textit{world scale} and \textit{fluid simulation scale} (hereafter, \textit{simulation scale}). \begin{itemize} \item \textit{simulation scale} is the scale at which the SPH density and forces are calculated. \item \textit{world scale} is the scale at which rendering, the rigid body simulation and all other functions take place. \end{itemize} In general, the world scale is much larger than the simulation scale. Multiplying a value by the simulation scale can be seen as scaling down the value from world or rigid body units to fluid units, while dividing a value by the simulation scale is equivalent to scaling up the value from the fluid simulation scale to world units. \[ \mathbf{simulation\_scale\_length = world\_scale\_length * simulation\_scale} \] \[ \mathbf{world\_scale\_length = simulation\_scale\_length / simulation\_scale} \] To give a concrete example, the default simulation scale is 0.004.\\ This means that a length of 1 metre at world scale is shrunk to 0.004 metres at simulation scale.\\ Alternatively, a length of 1 metre at simulation scale is expanded to 250 metres at world scale.\\ In either case, the ratio of world scale to simulation scale is (1 / simulation\_scale), or 1 : 250 by default.\\ When changing the size of the particles, it is recommended to change m\_simulationScale and parameters marked [world scale]. Reducing the simulation scale will make particles larger, and increasing it will make the particles smaller. For example, doubling the default simulation scale of 0.004 will halve the size of the particles. (default: 1 / 0.004 = 250; halved: 1 / 0.008 = 125).\\ Although in theory it should only be necessary to change the simulation scale to resize the fluid, in practice floating point rounding can cause the fluid to behave differently when the simulation scale is changed. (Even if there is no difference mathematically.) However, this should not matter unless the simulation scale is changed by a factor of 100-1000x or greater.\\ The motivation for including the simulation scale parameter is to provide an easy way to change the scale of the fluid without affecting its behavior.
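To make the conversion concrete, the following small C++ fragment (illustrative only; it is not part of the library API) applies the two formulas above using the default simulation scale:
\begin{verbatim}
// Illustrative fragment only -- not part of the library API.
// Converting a length between world scale and simulation scale
// using the default simulation scale of 0.004 (a 1 : 250 ratio).
float simulationScale = 0.004f;

float worldLength = 1.0f;                              // 1 metre at world scale
float simLength   = worldLength * simulationScale;     // 0.004 metres at simulation scale
float backToWorld = simLength / simulationScale;       // 1 metre at world scale again
\end{verbatim}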
The Navier-Stokes equations are sensitive to scale, and so the SPH approximation of their terms is also scale sensitive.\\ Additionally, SPH is highly sensitive to the parameters used; if a single parameter is even slightly incorrect, it can cause the entire simulation to explode. Abstracting the size of the fluid using a `simulation scale' term allows us to shrink or expand the fluid without spending excessive effort on tuning the parameters. Were the simulation scale not implemented, it would be a difficult task to get all parameters correct on the first try. Using two scales allows us to make slight modifications from a working set of parameters to get the desired behavior.\\ Some more details on the simulation scale are provided at the Fluids v.2 website \cite{RH:2008}. \subsubsection{Changing Particle Indices} The particle data is resorted every frame. For instance, the particle at index 0 could in the next frame be at index 3, or some other location. This means that the index of each particle cannot be used to track it. However, each particle has a user pointer that can be used to associate a unique id or struct with it. The per-particle user pointer can be accessed through btFluidSph::getParticleUserPointer() and btFluidSph::setParticleUserPointer(). This user pointer is set to 0 when the particle is created, and is sorted along with the particle data every frame. It is not modified otherwise.\\ Alternatively, the fluid's grid(accessible through btFluidSph::getGrid()) stores the previous indices of the particles. It can be accessed with btFluidSortingGrid::getValueIndexPairs().\\ The main reason for rearranging the particle data is performance; sorting the particles increases the cache hit rate. \subsubsection{Explosions} A common issue that may be encountered when implementing a SPH fluid simulation is the `explosion', where the particles cease to behave as a fluid and begin to fly around everywhere.\\ This issue is usually caused by applying very high forces to the fluid particles. The sources of such forces are many; it could occur if any of the following is set too high: time step, stiffness, gravity, viscosity or other user applied forces. The most sensitive of these parameters are the time step and stiffness. \subsection{Class Hierarchy} There are 2 main classes, and those 2 classes contain 5 important subclasses. \subsubsection{btFluidRigidDynamicsWorld} Contains all objects(rigid bodies, collision objects, and fluids) in the simulation. \begin{itemize} \item btFluidSphParametersGlobal - Properties shared by all fluids. \item btFluidSphSolver - Determines how the fluid particles interact with each other, and contains solver-specific data. \end{itemize} \subsubsection{btFluidSph} A single btFluidSph corresponds to a group of particles with the same characteristics. \begin{itemize} \item btFluidParticles - Contains particle state data: position, velocity, accumulated force, etc. \item btFluidSphParametersLocal - Defines the fluid material type; contains the fluid's viscosity, particle mass, and other parameters related to SPH and rigid body interaction. \item btFluidSortingGrid - A uniform grid used to accelerate the SPH density and force calculation. \end{itemize} \pagebreak \section{A SPH Fluid Primer} This covers some of the theory behind SPH; see \nameref{s_bulletFluidsIntro} for library-specific details.\\ Smoothed Particle Hydrodynamics(SPH) is a method that can be used to simulate fluids(liquids and gases) using particles.
Before explaining SPH, however, it is necessary to go over some basic details of particle systems.\\ Prerequisites and notation: Familiarity with differential calculus and time stepping is very useful, but not absolutely necessary to understand the main points. Nonbold letters are scalars, while bold letters are vectors. For example, a 3-dimensional point using the Cartesian Coordinate System is written as: \( \mathbf{r} = (r_x, r_y, r_z) \). \( \nabla \) denotes the gradient. \(\nabla ^2\) is used to represent the Laplacian or the divergence of the gradient; it also appears as \(\nabla \cdot \nabla\). The partial derivative of a variable with respect to time is written as \( \frac{\partial}{\partial t} \). \(\rho\)(not to be confused with the English letter \(p\)) and \(\mu\)(not the letter \(u\)) are the Greek letters rho and mu.\\ \subsection{A simple particle simulation} Consider a simple simulation of rigid body spheres without rotation. The rigid spheres are represented as particles with the properties: mass, radius, position, and velocity. To perform the simulation, we will use a technique known as \textit{time stepping}. We discretize time into finite amounts, called \textit{frames}, and also store some information about each particle, the \textit{state}. The state includes values that change over time; in particular, the position and velocity of each particle. To begin, we initialize each particle with a state. Every frame, we advance the simulation by a fixed amount of time, the \textit{time step}, and update each particle's position and velocity. In order to prevent the spheres from intersecting, we modify the velocities of the particles by applying forces so that they will not penetrate in the next frame.\\ \begin{figure}[ht] \centering \includegraphics[width=6.0in]{images/TimeStepping} \caption{Example of time stepping for a falling sphere. Time is split into discrete chunks, called \textit{frames}, and each frame advances the simulation by a \textit{time step} of 16 milliseconds(ms).} \end{figure} The update loop for such a particle system might be written as: \begin{itemize} \item Detect collisions \item Determine collision response forces \item Integrate velocity(that is, apply collision response forces and other forces such as gravity) \item Integrate position \end{itemize} Let's go over that loop in a bit more detail: \subparagraph{Detect collisions} The main purpose of this stage is to generate information needed for the collision response stage. For each pair of particles, we calculate a normal vector and distance. The normal vector, which is a vector of length 1 pointing from one particle to another, determines the direction of the repulsive force used to separate the particles. The distance determines whether a force needs to be applied and, if below 0 (which means that the particles are penetrating), affects the magnitude of the force that is applied. \subparagraph{Determine collision response forces} In this stage, we generate forces that will change the velocities of the particles so that, ideally, they will not interpenetrate in the next step. The force is called the \textit{normal force} (as it is applied along the normal vector), which scales up as large as needed to prevent the particles from penetrating. It is also possible to generate a friction force, so that the particles will not slide across each other.
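To make these stages concrete, the following C++ sketch shows one frame of such an update loop for spheres of equal footing (no rotation, no friction). It is purely illustrative and is not code from the library; the Particle struct, the penalty-style response force and the constants are assumptions made for this example. The final loop uses the semi-implicit Euler scheme discussed below.
\begin{verbatim}
// Illustrative sketch only -- not Bullet-Fluids code.
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  sub(Vec3 a, Vec3 b)    { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return Vec3{a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

struct Particle { Vec3 position, velocity, force; float mass, radius; };

// One frame of the update loop described above (detect, respond, integrate).
void stepParticles(std::vector<Particle>& particles, float timeStep, Vec3 gravity)
{
    // Detect collisions and accumulate collision response (penalty) forces.
    for (std::size_t i = 0; i < particles.size(); ++i)
        for (std::size_t j = i + 1; j < particles.size(); ++j)
        {
            Vec3  delta    = sub(particles[i].position, particles[j].position);
            float distance = length(delta) - (particles[i].radius + particles[j].radius);
            if (distance < 0.0f && length(delta) > 0.0f)            // Penetrating
            {
                Vec3 normal = scale(delta, 1.0f / length(delta));   // Points from j towards i
                const float STIFFNESS = 1000.0f;                    // Arbitrary penalty constant
                Vec3 response = scale(normal, -distance * STIFFNESS);
                particles[i].force = add(particles[i].force, response);
                particles[j].force = sub(particles[j].force, response);
            }
        }

    // Integrate velocity, then position using the new velocity (semi-implicit Euler).
    for (Particle& p : particles)
    {
        Vec3 acceleration = add(scale(p.force, 1.0f / p.mass), gravity);
        p.velocity = add(p.velocity, scale(acceleration, timeStep));
        p.position = add(p.position, scale(p.velocity, timeStep));
        p.force    = Vec3{0.0f, 0.0f, 0.0f};   // Clear accumulated force for the next frame
    }
}
\end{verbatim}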
\begin{figure}[ht] \centering \includegraphics[width=3.0in]{images/SphereCollision} \caption{The collision detection stage generates distance and a normal vector for each pair of colliding spheres. If the spheres are able to rotate, then it is also necessary to detect the point of contact where the spheres are touching. Whether the normal vector points towards the left or right sphere is a matter of convention.} \end{figure} \subparagraph{Integrate velocity and position} In these stages, we advance the state of the simulation, updating the velocity and position of each particle. \[ \mathbf{velocity_{next} = velocity + (force/mass) * timestep} \] \[ \mathbf{position_{next} = position + velocity_{next} * timestep} \] It is important to note that we use \( velocity_{next} \) to compute the next position as opposed to the current \(velocity\). This means that the velocity and position are integrated using semi-implicit Euler as opposed to explicit Euler. The difference is subtle but very important, as explicit Euler is unconditionally unstable; that is, it ensures that the simulation will explode if damping(artificially reducing the velocity) is not applied.\\ \subsection{Extending the simple particle simulation to fluids} \subparagraph{\textit{Smoothed Particle Hydrodynamics}} may sound intimidating, but it is actually not very different from this particle simulation. In particular, the only difference between a SPH fluid simulation and a rigid body particle simulation is that the \textit{determine collision response forces} stage is different.\\ To elaborate, the rigid body simulation uses a repulsive force between colliding particles, while an SPH fluid uses pressure and viscosity forces based on the \textit{Navier-Stokes equations}. This has the effect of changing some characteristics of the particles. Instead of considering the particles as solid spheres of uniform density, with the mass evenly distributed across the particle, the particles are seen as points with a radius of interaction. Furthermore, the boundary between SPH particles is soft. A common rule of thumb for SPH simulations is that each particle should have 20--40 neighbors; that is, each particle is expected to be interpenetrating with 20 to 40 other particles. Finally, the mass of a SPH particle is unevenly distributed throughout the volume of the particle, with more mass concentrated at the center and less mass farther out.\\ Another difference is that while rigid body particles are seen as distinct, separate entities, SPH particles are viewed as being part of a fluid, with each SPH particle contributing to a greater whole. To be precise, each SPH particle contributes some mass to a density scalar field. That is, it is possible to use particles to associate every point in space with a density. Just as it is possible to use linear interpolation to define a line from 2 points, it is possible to use SPH to define a scalar field from many points.\\ To reiterate, in addition to the properties of mass, radius, position, and velocity, SPH particles also gain the implicit property of density. This property is not stored with each particle, but interpolated by summing the contributions from all particles.\\ \begin{figure}[ht] \centering \includegraphics[width=6.0in]{images/RigidSPH} \caption{On the left are uniform density rigid particles with hard boundaries. On the right are distributed density SPH particles with soft boundaries. 
The darkness indicates the amount of mass at that point.} \end{figure} The equation we use to define the density, \(\rho\), at a position, \( \mathbf{r} \), is: \begin{equation} \label{eq_density} \rho (\mathbf{r}) = \sum_{j}^{} m_j W_{poly6}(\mathbf{r} - \mathbf{r}_j, h) \end{equation} Where the sum loops over all nearby particles with position \( \mathbf{r}_j \) within the radius of interaction, \( h \), and the term \( m_j \) is the mass of a particle. \( W \) is used to denote a \textit{smoothing kernel}; it is a function that determines how a property is distributed(or smoothed) throughout the volume of a fluid particle. In this case, we are using the kernel named the \textit{poly6} kernel. In the context of our simulation, \( W_{poly6} \) is a function that determines how the mass is distributed throughout a fluid particle. The amount of mass contributed is highest at the particle's center, and gradually falls to 0 as the distance from the center increases to the radius of interaction, \( h\).\\ \begin{figure}[ht] \centering \includegraphics[width=6.0in]{images/SPHSum} \caption{Example of equation \ref{eq_density} in 1D. SPH interpolates a global density(blue) by summing up local densities(black) from particles.} \end{figure} Fluid motion is described by the Navier-Stokes equations. Although the Navier-Stokes equations are a set of equations, we are only interested in a single equation due to various assumptions and other simplifications: \begin{equation} \rho \frac{\partial \mathbf{v}}{\partial t} = - \nabla p + \mu \nabla ^2 \mathbf{v} + \mathbf{f} \end{equation} With density \( \rho \), acceleration \( \partial \mathbf{v} / \partial t \), pressure \( p \), and velocity \( \mathbf{v} \). This equation may appear complex, but it is actually quite simple; it is just \(F = ma\) for fluids. \begin{itemize} \item \(- \nabla p\), the negated gradient of pressure, means that fluids are accelerated from places of higher pressure towards areas of lower pressure, \item \(\mu \nabla ^2 \mathbf{v}\), the Laplacian of velocity, means that fluid particles experience friction against each other, with \(\mu\) being a constant that determines the amount of friction, and \item \(\mathbf{f}\) includes external forces such as gravity, surface tension and collision response forces (for instance, forces from collisions with rigid bodies). \end{itemize} The main term in this equation that accounts for the motion of fluids such as water is the pressure term, \(- \nabla p\). While the normal force that is applied between rigid body particles acts as a hard boundary that removes all penetration, the pressure force acts to eliminate variations in pressure. As we shall see next, eliminating variations in pressure also has the effect of constraining the particles' density to a single value.\\ Minor note: although the Laplacian operates on a scalar field, applying it to a vector simply means to apply it to each of the components and create another vector from the result. \(\nabla ^2 \mathbf{v} = (\nabla ^2 v_x, \nabla ^2 v_y, \nabla ^2 v_z) \), where \(\mathbf{v} = (v_x, v_y, v_z)\). To be clear, there are 3 scalar fields(\(v_x\), \(v_y\), and \(v_z\)) involved in this expression.\\ The density scalar field can be converted into a pressure field by using an equation of state.
For liquids, the equation we use is: \begin{equation} \label{eq_densityToPressure} p = k(\rho - \rho_0) \end{equation} Where \( k \), the stiffness, is a positive constant that determines the strength of the pressure force and \( \rho_0 \) is the fluid's rest density. The \textit{rest density} is exactly what it sounds like---the density of the fluid when it is \textit{at rest}; that is, when it has settled and is not moving.\\ The effect of this equation is that particles below \(\rho_0\), the rest density, will have a negative pressure, while particles above the rest density will have a positive pressure. As the gradient operator points in the direction of greatest increase, this results in a force that pushes the particles apart when above rest density, and together when below rest density.\\ Now that we have a way of defining a pressure field, we have all the variables needed to approximate a solution for the acceleration using SPH. \(\mathbf{v}\) is carried by each particle and \(\rho\) can be calculated by equation \ref{eq_density}. \begin{equation} acceleration = \frac{\partial \mathbf{v}}{\partial t} = \frac{- \nabla p + \mu \nabla ^2 \mathbf{v} + \mathbf{f}}{\rho} \end{equation} Using SPH, the term \(- \nabla p\) is approximated at a particle i as: \begin{equation} \mathbf{f}_{i}^{pressure} = - \sum_{j}^{} m_j \frac{p_i + p_j}{2 \rho_j} \nabla W_{spiky}(\mathbf{r}_i - \mathbf{r}_j, h) \end{equation} and the viscosity term \(\mu \nabla ^2 \mathbf{v} \) as: \begin{equation} \mathbf{f}_{i}^{viscosity} = \mu \sum_{j}^{} m_j \frac{ \mathbf{v}_j - \mathbf{v}_i}{\rho_j} \nabla ^ 2 W_{viscosity}(\mathbf{r}_i - \mathbf{r}_j, h) \end{equation} Where \( \nabla W_{spiky} \) is the gradient of the spiky kernel, and \( \nabla ^ 2 W_{viscosity} \) is the Laplacian of the viscosity kernel. The spiky and viscosity kernels are functions that have been specially designed to improve stability when using SPH to approximate pressure and viscosity forces. Similiar to the \(W_{poly6}\) kernel, they scale the force between pairs of particles, with the force gradually dropping to 0 as the distance between particles increases to the radius of interaction.\\ Substituting terms from the kernels \(W_{poly6}\), \(\nabla W_{spiky}\), and \( \nabla ^ 2 W_{viscosity}\), removing constant terms from the sum, and then solving for acceleration(\( \mathbf{f}_i = \rho_i \mathbf{a}_i \)) changes the equations into this form: \begin{equation} \label{eq_densityFinal} \rho_i (\mathbf{r}_i) = \frac{315m}{64 \pi h^9 } \sum_{j}^{} (h^2 - (|\mathbf{r}_i - \mathbf{r}_j|)^2 )^3 \end{equation} \begin{equation} \label{eq_pressureAccel} \mathbf{a}_{i}^{pressure} = \frac{45 m}{\pi h^6} \sum_{j}^{} \frac{p_i + p_j}{2 \rho_i \rho_j} \frac{\mathbf{r}_i - \mathbf{r}_j}{|\mathbf{r}_i - \mathbf{r}_j|} (h - |\mathbf{r}_i - \mathbf{r}_j|)^2 \end{equation} \begin{equation} \label{eq_viscosityAccel} \mathbf{a}_{i}^{viscosity} = \frac{45 \mu m}{\pi h^6} \sum_{j}^{} \frac{ \mathbf{v}_j - \mathbf{v}_i}{ \rho_i \rho_j} (h - |\mathbf{r}_i - \mathbf{r}_j|) \end{equation} Going over the terms again, \(m\) is the mass of a particle(assuming all particles have the same mass), \(h\) is the radius of interaction, \(p\) is pressure, \(\rho\) is density, \(\mu\) is the strength of viscosity, \(\mathbf{r}\) is position, and \(\mathbf{v}\) is velocity. The subscripts i and j are used to indicate the indices of the particles that the properties are associated with. 
\(|\mathbf{r}_i - \mathbf{r}_j|\) denotes the distance between points \(\mathbf{r}_i\) and \(\mathbf{r}_j\). \\ Every frame, we iterate over all particles twice. The first pass to calculate the density and pressure (equations \ref{eq_densityFinal} and \ref{eq_densityToPressure}), and the second pass to calculate pressure and viscosity forces (equations \ref{eq_pressureAccel} and \ref{eq_viscosityAccel}). Finally, we apply these forces to each particle, and integrate their positions.\\ To conclude, there is only a minor conceptual difference between a rigid body particle simulation and a SPH fluid simulation. In order to convert a rigid body particle simulation into a SPH fluid simulation, we need only to replace the repulsive force with pressure and viscosity forces.\\ The important things to remember are that: \begin{itemize} \item Fluid motion is described by the Navier-Stokes equations, which define a pressure and viscosity force. \item SPH is a method that combines the local contributions of particles to represent a global scalar field, and can be used to approximate solutions to the Navier-Stokes equations. \item The SPH pressure force is both attractive and repulsive, and accelerates particles towards the fluid's rest density. \item The SPH viscosity force accelerates particles towards having the same velocity. This has a similar effect to friction, dissipating energy and gradually reducing the velocity of the particles. (2 particles with the same velocity experience no acceleration from the viscosity force, while particles moving in opposite directions experience a strong viscosity force.) \end{itemize} The derivation presented in this article was originally introduced by M\"{u}ller, Charypar, and Gross in \cite{Muller:2003:PFS:846276.846298}. It is recommended to see their paper for a more in depth explanation. Additionally, a good tutorial on SPH is provided in Kelager's Master's thesis \cite{MK:2006}.\\ For completeness, the definitions of the kernels are presented here:\\ Poly6 kernel: \begin{equation} W_{poly6}(\mathbf{r} - \mathbf{r}_j, h) = \left \{ \begin{array}{ll} \dfrac{315}{64 \pi h^9 } (h^2 - (|\mathbf{r} - \mathbf{r}_j|)^2)^3 & \textrm{if \(0 \leq |\mathbf{r} - \mathbf{r}_j| \leq h\) } \\[1em] 0 & \textrm{otherwise} \end{array} \right. \end{equation} Gradient of spiky kernel: \begin{equation} \nabla W_{spiky}(\mathbf{r} - \mathbf{r}_j, h) = \left \{ \begin{array}{ll} - \dfrac{45}{\pi h^6} \dfrac{\mathbf{r} - \mathbf{r}_j}{|\mathbf{r} - \mathbf{r}_j|} (h - |\mathbf{r} - \mathbf{r}_j|)^2 & \textrm{if \(0 \leq |\mathbf{r} - \mathbf{r}_j| \leq h\) } \\[1em] 0 & \textrm{otherwise} \end{array} \right. \end{equation} Laplacian of viscosity kernel: \begin{equation} \nabla ^ 2 W_{viscosity}(\mathbf{r} - \mathbf{r}_j, h) = \left \{ \begin{array}{ll} \dfrac{45}{\pi h^6} (h - |\mathbf{r} - \mathbf{r}_j|) & \textrm{if \(0 \leq |\mathbf{r} - \mathbf{r}_j| \leq h\) } \\[1em] 0 & \textrm{otherwise} \end{array} \right. \end{equation} \pagebreak \section{Parameters} In the context of real time simulations, factors such as the time step and low stiffness mean that the relation to physical units becomes largely academic, so using actual physical values is unlikely to give the desired result. Parameters at world scale are marked as [world scale] and parameters at simulation scale are marked as [simulation scale]. [X, Y] means that the range of the parameter should be between X and Y. For instance, [0.0, 1.0] denotes a range from 0 to 1. 
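Many of the parameters below correspond directly to symbols in the equations of the preceding primer. As a point of reference, the following deliberately naive C++ sketch (purely illustrative; it is not code from the library, and the struct and function names are invented for the example) shows a two-pass SPH update in which the smoothing radius, particle mass, rest density, stiffness and viscosity appear exactly where those parameters are used:
\begin{verbatim}
// Illustrative sketch only -- not Bullet-Fluids code.
#include <vector>
#include <cmath>

struct FluidParticle { float x[3], v[3], a[3], density, pressure; };

// Naive O(n^2) two-pass SPH update following the primer's equations.
void computeSphAccelerations(std::vector<FluidParticle>& p, float h, float particleMass,
                             float restDensity, float stiffness, float viscosity)
{
    const float PI = 3.14159265f;
    const float poly6Coeff = 315.0f * particleMass / (64.0f * PI * std::pow(h, 9.0f));
    const float spikyCoeff = 45.0f * particleMass / (PI * std::pow(h, 6.0f));

    // First pass: SPH density (poly6 kernel) and pressure from the equation of state.
    for (std::size_t i = 0; i < p.size(); ++i)
    {
        float density = 0.0f;
        for (std::size_t j = 0; j < p.size(); ++j)
        {
            float dx = p[i].x[0]-p[j].x[0], dy = p[i].x[1]-p[j].x[1], dz = p[i].x[2]-p[j].x[2];
            float r2 = dx*dx + dy*dy + dz*dz;
            if (r2 < h*h) density += poly6Coeff * std::pow(h*h - r2, 3.0f);
        }
        p[i].density  = density;
        p[i].pressure = stiffness * (density - restDensity);
    }

    // Second pass: pressure (spiky gradient) and viscosity (Laplacian) accelerations.
    for (std::size_t i = 0; i < p.size(); ++i)
    {
        p[i].a[0] = p[i].a[1] = p[i].a[2] = 0.0f;
        for (std::size_t j = 0; j < p.size(); ++j)
        {
            if (i == j) continue;
            float d[3] = { p[i].x[0]-p[j].x[0], p[i].x[1]-p[j].x[1], p[i].x[2]-p[j].x[2] };
            float r = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            if (r >= h || r <= 0.0f) continue;
            float invRhoRho     = 1.0f / (p[i].density * p[j].density);
            float pressureTerm  = spikyCoeff * 0.5f * (p[i].pressure + p[j].pressure)
                                  * (h - r) * (h - r) * invRhoRho / r;
            float viscosityTerm = spikyCoeff * viscosity * (h - r) * invRhoRho;
            for (int k = 0; k < 3; ++k)
                p[i].a[k] += pressureTerm * d[k] + viscosityTerm * (p[j].v[k] - p[i].v[k]);
        }
    }
}
\end{verbatim}
An actual implementation would only loop over neighboring particles found through the grid described in the Optimizations section, rather than over all pairs.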
\subsection{Global Parameters} Properties assigned to all btFluidSph in a btFluidRigidDynamicsWorld. \subparagraph{m\_timeStep} For performance and stability reasons, it is not possible to run the fluid simulation at the same time step as the rigid body simulation. Setting this value too high will cause the fluid to explode. In general, a higher time step requires lower stiffness to prevent the fluid from exploding. As a rule of thumb, doubling the time step means that the stiffness needs to be halved. By default, the fluid is stepped 5 times slower(3ms) than the rigid body simulation(16ms). \subparagraph{m\_simulationScale} See \ref{s_simulationScale}. \subparagraph{m\_gravity [simulation scale] } Acceleration applied to all particles during each frame; by default the Y-axis is assumed to point downwards. Note that, as this is a simulation scale parameter, the default gravity is 250x larger compared to rigid bodies. This default value can be reduced considerably. (Part of the reason for the high default gravity is the ratio of time steps between fluid/rigid). \subparagraph{m\_sphSmoothRadius [simulation scale]} Commonly denoted as \(r\) or \(h\) in the literature. Defines the radius in which SPH particles will interact with each other. This should be set so that each particle has about 20--40 neighbors. To be precise, a particle influences another if the distance to its center is less than or equal to \(r\)---that is, it should be thought of as sphere v. point collision. Considering it as sphere-sphere collision means that both spheres would have a radius of \(r/2\). \subsection{Local Parameters} Properties assigned to a single btFluidSph. \subsubsection{Fluid Simulation} \subparagraph{m\_viscosity} Affects the magnitude of the viscosity force, which accelerates particles so that they have the same velocity---it defines the resistance of the fluid to flow. For instance, water has low viscosity while honey has high viscosity. The viscosity force also works to improve the numerical stability of the simulation by reducing particle velocities(setting this to 0 can cause the fluid to explode). However, too high a viscosity value will instead introduce energy into the simulation.\\ This is the parameter \(\mu\) in equation \ref{eq_viscosityAccel}. \subparagraph{m\_restDensity} Defines the density of the fluid when it is at rest, or not moving. Water has a rest density of \(1000 kg / m^3\). The pressure force is applied to constrain particles to this density. \subparagraph{m\_sphParticleMass} Contribution of a single particle to the SPH density scalar field. \subparagraph{m\_stiffness [simulation scale]} The fluid stiffness determines how violently the fluid reacts to variations in density. In other words, it defines the magnitude of the SPH pressure force. For mostly incompressible fluids such as water, it is best to keep this value as high as possible. Another reason to use a higher stiffness value is performance; a particle that is part of a compressible(low stiffness) fluid tends to have more neighbor particles. The maximum stiffness that can be used is dependent on the time step.\\ This is the parameter \(k\) in equation \ref{eq_densityToPressure}. \subparagraph{m\_initialSum (0.0, 1.0]} Determines the particle's self contribution when calculating its density. It should always be greater than 0, but generally be kept as low as possible.
Using a value of 1 is equivalent to adding a virtual SPH particle at zero distance when calculating each particle's density; the virtual particle is not included when calculating forces. Higher values improve stability, especially for particles that are isolated or have incomplete neighborhoods. As it has the effect of increasing the density of all particles, however, it also tends to make particles repulse each other more strongly.
\subparagraph{m\_particleDist [simulation scale]} This is an automatically calculated value that defines a default spacing for particles. It has no effect on the simulation itself, but should not be changed. Consider spawning a group of fluid particles in the shape of a box. Placing the fluid particles too far apart will cause them to attract (box implodes), while spacing them too closely will make them repulse (box explodes). Depending on how m\_initialSum is set, this may need to be scaled by a factor of 0.75--1.25.
\subsubsection{Rigid Body Interaction}
\subparagraph{m\_particleRadius [world scale]} Radius of a particle when colliding with btRigidBodies/btCollisionObjects. The particle is treated as a rigid body sphere. Has no effect on the SPH simulation. Note that this is separate from m\_sphSmoothRadius, which defines the radius for interactions between fluid particles.
\subparagraph{m\_particleMargin [0.0, m\_particleRadius) [world scale]} Meters of allowed penetration when colliding with rigid bodies. This value should be a small fraction of m\_particleRadius. It should not be set to 0, as that can cause fluid particles to oscillate when colliding with rigid bodies (also known as jitter).
\subparagraph{m\_particleMass} Mass of a particle when interacting with rigid bodies and applying forces.
\subparagraph{m\_particleRadiusExpansion [world scale]} Expands the collision detection radius of a particle. Can be used to find objects that are near but not colliding with the fluid particles. Setting this too high will reduce performance and degrade the accuracy of fluid-rigid contacts.
\subparagraph{m\_boundaryFriction [0.0, 1.0]} Fraction of tangential velocity to remove per frame when a fluid particle collides with a rigid body. For example, using an m\_boundaryFriction of 0.2 would cause a tangential velocity with magnitude 1 to be reduced to 0.8 after 1 frame. In practice, a value of 0.2 is already very high. Setting this too high will cause instability.
\subparagraph{m\_boundaryRestitution [0.0, 1.0]} Generally set to 0 for fluids. It controls the fraction of normal velocity that is reflected when a fluid particle collides. Using a nonzero value will cause particles to bounce off rigid bodies when colliding. Note that this introduces energy into the system, so it will make the fluid less stable.
\subparagraph{m\_boundaryErp [0.0, 1.0]} The boundary ERP (Error Reduction Parameter) controls the fraction of penetration removed per frame from fluid-rigid contacts. Higher values reduce penetration more quickly, but at the cost of lower stability.
\subsection{Other Parameters}
\subparagraph{setCcdMotionThreshold} The function btCollisionObject::setCcdMotionThreshold(), inherited by btFluidSph, can be used to prevent fluid particles from tunneling through rigid bodies. There is no feature to prevent rigid bodies from tunneling through fluids. In general, this should be set to btFluidSphParametersLocal.m\_particleRadius.
\pagebreak
\section{Optimizations}
Implementing a basic SPH algorithm is actually quite simple.
Without any optimizations, it would simply take the form of a double nested loop. Much of the complexity comes from the grid algorithm.
\subsection{Grid Algorithm}
For spheres of the same radius, the most efficient algorithm is a uniform grid. That is, we divide space into uniformly sized cubes (or cells) that are axis-aligned. We add particles into these cells by checking which cell the particle's center is in. If the edge length of the cells is equal to the SPH interaction radius, then the particles that are able to interact with a particle are in the directly adjacent cells. There are several ways of implementing the uniform grid; some are presented here.\\
\subsubsection{Static Noncolliding Uniform Grid (implemented by Bullet-Fluids) }
This is a fixed-size, sparse, implicit grid, where only the contents of grid cells containing particles are stored. The main advantage of this grid is that it has no collisions, and can be used in a fairly large world. The main disadvantage is that it requires a binary search to query a cell.\\
The information maintained by this algorithm includes:
\begin{itemize}
\item Cell size: A scalar. This is identical to the SPH interaction radius.
\item value[]: An array containing \textit{hash values} that identify grid cells. Each value is a 32-bit or 64-bit int. 10 or 21 bits are used for each dimension (x, y, z), so that worlds of \(1024^3\) and \(2097152^3\) cells can be used.
\item index\_range[]: An array containing the range of particle indices that corresponds to each grid cell. Each element contains two 32-bit ints that store the first and last index of the particles in each grid cell.
\item value\_index[]: An array containing value-index pairs, where the \textit{index} identifies a particle and the \textit{value} identifies the grid cell it is inside. There is one entry in this array for each particle.
\end{itemize}
value[] and index\_range[] are parallel arrays, so value[i] identifies the grid cell with index i and contents index\_range[i]. The length of the value[] and index\_range[] arrays is equal to the number of grid cells with particles in them.\\
Grid Update Algorithm:
\begin{enumerate}
\item Quantize\\
First, a particle's position is converted into integer grid cell coordinates (\( \mathbb{R}^{3} \to \mathbb{Z}^{3}\)). This is achieved by dividing each coordinate of the particle's position by the grid cell width. Since a simple division would result in the cell (0,0,0) having a size of \(2r\), we map the coordinate in range [0, r) to 0, and [-r, 0) to -1. This converts the continuous coordinates \((x, y, z)\) to the integer coordinates \((x_{grid}, y_{grid}, z_{grid})\).
\item Assign value with hash function\\
Next, we combine these coordinates into a single value that can be used for sorting.
\begin{equation}
\label{eq_hash}
value = x_{grid} + y_{grid} * C + z_{grid} * C * C
\end{equation}
where C = 1024 or 2097152 defines the maximum extent of the grid in each dimension.
\item Create value-index pairs\\
We associate each particle with its value for sorting, creating an array of structs, each struct a {value, index} pair.
\item Sort value-index pairs by value\\
On the CPU, quick sort is used, while radix sort is used on the GPU. After sorting, the particles are grouped by grid cell. The result of this step is an array of value-index pairs, where index i is a particle's position after sorting and value\_index[i].index is its original position before sorting.
\item Rearrange particles\\
In order to avoid consuming a large amount of memory bandwidth during the sort, we only sort value-index pairs and then rearrange other particle attributes (position, velocity, user-pointer, etc.) after the sort.
\item Generate index ranges\\
Look through the particles to get the range of particle indices corresponding to each grid cell. The result of this stage is a set of [value, index-range] pairs; the value is used to identify the grid cell, and the index-range indicates its contents.
\end{enumerate}
Grid Query Algorithm:
\begin{enumerate}
\item Quantize
\item Assign value\\
In order to find neighbors, the first two steps are the same as in the grid update.
\item Binary Search\\
Perform a binary search for the cell with the given value. Once we have the index of the cell, it can be used to access the particle index range. Although a binary search may sound expensive, it only needs to be performed once for each grid cell.
\end{enumerate}
%\subsubsection{Modulo Colliding Uniform Grid}
% By using the hash function X % C + (Y % C) * C + (Z % C) * C * C, a finite grid can be extended into an infinite domain.\\
% (e.g. C = 64)
%
% A drawback of this method is collisions. Another is that the '3-cell bar' optimization cannot be used without
% adding an additional branch to distinguish between outer and inner cells.\\
%
% More information on this grid may be found in:
% NVIDIA particles presentation[REFERENCE]
% PhysX presentation reference[REFERENCE]
\subsection{Grid Cell Size}
A cell edge length of \(r\), with \(3^3\) cells accessed per query, is optimal for a CPU/GPU implementation. Compared to a cell size of \(2r\) with \(2^3\) cells, it covers about half the volume, so it is a closer fit to the actual neighbors. On the CPU, it also simplifies the symmetry optimization; since each particle in the same cell has the same 26 neighbor cells, it becomes possible to do processing on each grid cell instead of each particle. (It is actually possible to use the symmetry optimization with 2r cells, but it is much more complicated. The method requires making a copy of the grid and removing particles from the cell as the SPH density and SPH force are calculated. Additionally, since each 2r grid cell can still access 26 neighbors overall, it is still necessary to split the cells into 27 groups for multithreading, which reduces the amount of parallelism, as each 2r cell has 8 times the volume of an r cell.)
\subsection{GPU Optimizations}
On the GPU, we found that it is more efficient to store density and recompute the pressure instead of storing the inverted density and pressure. Using the ternary operator and moving the (i != n) test into the distance check also helps reduce divergence.\\
\subsubsection{3-cell bar}
If the hash function is defined as in the Bullet-Fluids grid (equation \ref{eq_hash}), it becomes possible to group 3 cells into a 3-cell bar, which reduces the amount of divergence. Instead of processing 27 cells, nine 3-cell bars are processed.
\begin{figure}[ht]
\centering
\includegraphics[width=6.0in]{images/3CellBar}
\caption{2D example of the 3-cell bar optimization. In this case, the constant C in equation \ref{eq_hash} is equal to 8. Instead of using a for loop with 9 iterations (27 in 3D), it is possible to use a for loop with 3 iterations (9 in 3D), with each iteration processing 3 cells. In this example, the cells with values 19, 20, and 21 have particles with indices [22, 32], [33, 41], and [42, 54], respectively.
Instead of running 3 iterations for each of these particle index ranges, we can perform a single iteration through the range [22, 54].}
\end{figure}
\subsubsection{Effect of CPU optimizations on the GPU}
\subparagraph{Neighbor table} Although using neighbor tables improves performance on the CPU, it decreases overall performance and increases memory consumption on the GPU. To be exact, it reduces the performance of the density computation stage and improves the performance of the force calculation stage. There may be a performance benefit for methods that iterate over all particles several times, however.
\subparagraph{Symmetry} Implementing the symmetry optimization as on the CPU was found to result in a \(\sim\)50x performance decrease. This is likely because:
\begin{itemize}
\item Processing by grid cell requires more registers.
\item Higher memory latency on the GPU means that the symmetric optimization is less effective.
\item There are fewer grid cells than particles, which reduces the maximum number of active threads. Additionally, grouping the cells into 27 groups further reduces the occupancy. A GPU typically needs 1000--10000 or more threads for efficient processing. On average, a grid cell with size \(r\) has 4 particles and 27 groups are needed to avoid thread collisions, so the overall number of threads that can be used is reduced by an estimated factor of \(4 * 27 = 108\).
\end{itemize}
\subsection{CPU Optimizations}
\subsubsection{Symmetry and Multithreading}
In order to allow multithreading without using atomics or other synchronization, the grid cells are divided into 27 groups (9 groups in 2D) so that processing all the cells in a single group will not cause thread collisions. Since the pressure and viscosity forces are antisymmetric, that is, \(\mathbf{f}_{ij} = - \mathbf{f}_{ji}\), it is only necessary to compute forces once for each pair of particles. The density contribution between particles is also symmetric, so the same optimization can be applied during the density computation stage. This additionally reduces the number of neighbor cells that need to be accessed by each cell from 26 to 13 cells in 3D and from 8 to 4 cells in 2D.
\begin{figure}[ht]
\centering
\includegraphics[width=6.0in]{images/CellGrouping}
\caption{Multithreading optimization. A 2D example is presented for 2 iterations. The cells are grouped into clusters of 9 cells, and due to symmetry each cell only needs to access 4 neighbor cells during the computation. During the first iteration, cells marked 1 can be processed by multiple threads. During the second iteration, cells marked 2 are processed, and so on. Dark blue indicates the current cell, while light blue indicates neighbor cells that are accessed and written to. }
\end{figure}
\subsubsection{Neighbor Tables}
The index and distance of neighbor particles are stored during the SPH density calculation to avoid recomputing them during the SPH force calculation. Due to symmetry, each entry represents a pair. This also allows tuning the quality of the simulation by reducing the maximum number of neighbors.
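\medskip \noindent
To make the grid algorithm of this section concrete, the following is a minimal sketch of the quantize, hash, and binary-search query steps. It is not the Bullet-Fluids implementation: the types, container choices, and the coordinate offset used here to keep hash values non-negative are illustrative assumptions only.
\begin{verbatim}
// Sketch of the sorted uniform grid: quantize, hash (the 'value' of a cell),
// and binary-search query. Illustrative only; not the Bullet-Fluids code.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct CellRange { int firstIndex, lastIndex; };  // inclusive particle index range

const std::int64_t C = 2097152;                   // cells per axis (21 bits each)

// Quantize: map a continuous coordinate to an integer cell coordinate so that
// [0, cellSize) -> 0 and [-cellSize, 0) -> -1.
std::int64_t quantize(float x, float cellSize)
{
    return static_cast<std::int64_t>(std::floor(x / cellSize));
}

// Hash: value = x_grid + y_grid*C + z_grid*C*C.  Shifting by C/2 to keep the
// grid coordinates non-negative is an assumption of this sketch.
std::uint64_t cellValue(std::int64_t gx, std::int64_t gy, std::int64_t gz)
{
    const std::uint64_t x = std::uint64_t(gx + C / 2);
    const std::uint64_t y = std::uint64_t(gy + C / 2);
    const std::uint64_t z = std::uint64_t(gz + C / 2);
    return x + y * std::uint64_t(C) + z * std::uint64_t(C) * std::uint64_t(C);
}

// Query: binary search the sorted array of cell values for a single cell and
// return the particle index range stored for it ({-1, -1} if the cell is empty).
CellRange queryCell(const std::vector<std::uint64_t>& value,
                    const std::vector<CellRange>& indexRange,
                    std::uint64_t v)
{
    std::vector<std::uint64_t>::const_iterator it =
        std::lower_bound(value.begin(), value.end(), v);
    if (it == value.end() || *it != v) return CellRange{-1, -1};
    return indexRange[it - value.begin()];
}
\end{verbatim}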
\pagebreak
\section{Other Internal Details}
\subsection{Update Loop}
\begin{enumerate}
\item Update grid
\item Calculate SPH sum (density and pressure)
\item Calculate SPH pressure and viscosity force
\item Integrate velocity (apply SPH forces)
\item Handle collision detection and response with rigid bodies (correct velocities to avoid penetrating into rigid bodies)
\item Integrate position
\end{enumerate}
\subsection{Rigid Body Interaction}
Fluid-rigid interaction is implemented by treating particles as rigid body spheres. CCD for fluid particles moving against rigid bodies is implemented by casting a ray along the path of a fluid particle's motion. If the ray intersects a rigid body, then the particle is moved to intersect slightly with the rigid body. The collision response step then prevents the particle from tunneling by applying an impulse that is as large as needed.\\
To describe the collision resolution method, the force has two components. The first force (or \textit{impulsive force}) prevents the particle from penetrating further, by scaling as large as needed to remove the component of the velocity in the direction of penetration. The second force, scaled by m\_boundaryErp, removes penetration.\\
A drawback of this method is that fluid particles can stack while colliding with rigid bodies. That is, since the fluid is not completely incompressible, many fluid particles can occupy the same space and, as a result, cause a colliding rigid body to experience very strong collision forces. This issue is reduced somewhat if a less compressible, but more expensive, fluid simulation method is used.\\
Another limitation is that the SPH density is inaccurate at boundaries, as SPH assumes that each particle has a full neighborhood. This causes fluids to cluster at boundaries in order to make up for the reduction in density.\\
Dynamic rigid bodies, especially triangle meshes, are able to tunnel through the fluid if moving too fast. Currently, there is no solution for this issue aside from reducing the time step.
\subsection{Other notes}
\subparagraph{Clustering} When implementing a new SPH solver, a common issue is that the particles only seem to attract each other and form into clumps. This may be caused by using the incorrect sign for the pressure force (or a negative stiffness).
\subparagraph{Split masses} The mass of a particle is different when calculating the SPH force and when interacting with rigid bodies. This allows the user to separately tune the interaction with rigid bodies. Another motivation is that large mass ratios between rigid bodies can cause the simulation to become unstable.
\subparagraph{Local and Global parameters split} In order to efficiently perform fluid-fluid interaction, it is necessary to use a common smoothing radius and simulation scale. Otherwise, the grids would be out of alignment and a more complex approach that is also slower would be needed.
\subparagraph{Denormalized floating point} The variables used by SPH can become very close to 0, causing the floating point values to become denormalized. This has the potential to cause extreme degradations in performance, although the author has not encountered such issues even with 32-bit floats.
%\pagebreak
%\section{Recommended Reading}
% \subsection{Grid Algorithm}
% \subsection{Incompressibility}
% \subsection{Rendering}
\pagebreak
\bibliographystyle{plain}
\bibliography{BulletFluids_reference}
\end{document}
{ "alphanum_fraction": 0.7477590612, "avg_line_length": 60.0598179454, "ext": "tex", "hexsha": "444db66b3f75d706801b555f9678d0393f91b23b", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2021-07-22T08:01:54.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-09T08:03:04.000Z", "max_forks_repo_head_hexsha": "b58dbc5108512e5a4bb0a532284a98128fd8f8ce", "max_forks_repo_licenses": [ "Zlib" ], "max_forks_repo_name": "rtrius/Bullet-FLUIDS", "max_forks_repo_path": "Documentation/BulletFluids_reference.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b58dbc5108512e5a4bb0a532284a98128fd8f8ce", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Zlib" ], "max_issues_repo_name": "rtrius/Bullet-FLUIDS", "max_issues_repo_path": "Documentation/BulletFluids_reference.tex", "max_line_length": 155, "max_stars_count": 19, "max_stars_repo_head_hexsha": "b58dbc5108512e5a4bb0a532284a98128fd8f8ce", "max_stars_repo_licenses": [ "Zlib" ], "max_stars_repo_name": "rtrius/Bullet-FLUIDS", "max_stars_repo_path": "Documentation/BulletFluids_reference.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-22T08:00:17.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-18T09:34:48.000Z", "num_tokens": 11856, "size": 46186 }
\documentclass[11pt, oneside]{article}   % use "amsart" instead of "article" for AMSLaTeX format
% \usepackage{draftwatermark}
% \SetWatermarkText{Draft}
% \SetWatermarkScale{5}
% \SetWatermarkLightness {0.9}
% \SetWatermarkColor[rgb]{0.7,0,0}
\usepackage{geometry}   % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper}  % ... or a4paper or a5paper or ...
%\geometry{landscape}   % Activate for rotated page geometry
%\usepackage[parfill]{parskip}   % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}   % Use pdf, png, jpg, or eps with pdflatex; use eps in DVI mode
                        % TeX will automatically convert eps --> pdf in pdflatex
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{mathrsfs}
\usepackage[hyphens,spaces,obeyspaces]{url}
\usepackage{hyperref}
\usepackage{subcaption}
\usepackage{authblk}
\usepackage{mathtools}
\usepackage[export]{adjustbox}
\usepackage{fixltx2e}
\usepackage{alltt}
\usepackage{color}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{float}
\usepackage{bigints}
\usepackage{braket}
\usepackage{siunitx}
\usepackage[hyphenbreaks]{breakurl}
\newtheorem{thm}{Theorem}[section]
% \newtheorem{defn}[thm]{Definition}
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{example}{Example}[section]
\newcommand{\veq}{\mathrel{\rotatebox{90}{$=$}}}
\DeclareMathOperator{\bda}{\Big \downarrow}
\DeclareMathOperator{\E}{\mathbb{E}}
\newcommand{\argmax}{\operatornamewithlimits{argmax}}
\newcommand{\argmin}{\operatornamewithlimits{argmin}}
\title{A Few Notes on Projective Geometry (WIP)}
\author{David Meyer \\ dmm@\{1-4-5.net,uoregon.edu\}}
\date{Last update: \today}   % Activate to display a given date or no date
\begin{document}
\maketitle
\section{Introduction}
These notes began life as a bit of research on elliptic curves. However, it turns out that an elliptic curve is a smooth, projective, algebraic curve of genus one, on which there is a specified point $\mathcal{O}$. An elliptic curve is also abelian in the sense that if it has a multiplication that is defined algebraically, then it is an abelian group with respect to this multiplication. In this case $\mathcal{O}$ serves as the identity element. Often the curve itself, without $\mathcal{O}$ specified, is called an elliptic curve; the point $\mathcal{O}$ is often taken to be the curve's ``point at infinity'' in the \emph{projective plane}.
\bigskip
\noindent
So what is a projective plane? As we will see in Section \ref{sec:projective_plane}, a projective plane is a kind of (projective) geometry.
% \newpage
\section{Geometries and Projective Planes}
\label{sec:projective_plane}
In this section we will start by defining what a geometry is and look at a few examples.
\begin{definition}
\textbf{(Geometry):} A geometry $S = (P, L)$ is a non-empty set $P$ whose elements are called \emph{points}, together with a set $L$ of non-empty subsets of $P$ called \emph{lines}, which satisfy:
\begin{itemize}
\item G1: For any two distinct points $p_1,p_2 \in P$, there exists exactly one line $l \in L$ such that both $p_1 \in l$ and $p_2 \in l$.
\item G2: There exists a set of four points, such that given any set of three of these points, no line exists that contains all three points.
\end{itemize}
\noindent Note that $P$ and $L$ may be either finite or infinite. We say that a point $p$ is on, or \emph{incident with}, a line $l$ if $p \in l$. In the same way, a line $l$ is on, or \emph{incident with}, a point $p$ if $p \in l$.
\noindent A set of points is called \emph{collinear} if there exists a line such that all points are on the line. If $p$ and $q$ are two points, then $pq$ denotes the unique line on both $p$ and $q$. Clearly $qp = pq$. If $l_1$ and $l_2$ are lines that intersect in a point, $l_1l_2$ denotes their point of intersection.
\noindent Using this notation, we can write G1 and G2 in a more concise way:
\begin{itemize}
\item G1: Two distinct points are on exactly one line.
\item G2: There exists a set of four points, no three of which are collinear.
\end{itemize}
\noindent A set of four points, no three of which are collinear, is called a \emph{quadrangle}. A line through two points of a quadrangle is called a \emph{diagonal} of the quadrangle.
\end{definition}
\begin{example}
The Euclidean plane is a geometry. Why? Well, in the Euclidean plane the set $P$ consists of $(x,y)$ points and the set $L$ consists of lines of the form $y = mx + b$. It is well known that two distinct points are on a unique line in the Euclidean plane, so G1 holds. In addition, the set $\{(0,0), (1,0), (0,1), (1,1)\}$ forms a quadrangle, and so G2 holds. So the Euclidean plane is a geometry.
\end{example}
\begin{lemma}
Two distinct lines in a geometry intersect in at most one point.
\end{lemma}
\noindent \textbf{Proof:} Assume that there are two lines that intersect in at least two points, and let two such points be $p_1$ and $p_2$. But by G1 there is exactly one line on both $p_1$ and $p_2$, and so our assumption is contradicted. So two distinct lines in a geometry intersect in at most one point. $\square$
\begin{definition}
\textbf{(Affine Plane):} An affine plane is a geometry that satisfies the following condition:
\begin{itemize}
\item AP: For any line $l$ and any point $p$ not on $l$, there exists a unique line $l^\prime$ on $p$ that does not intersect $l$.
\end{itemize}
This is the famous parallel axiom of Euclidean geometry. Clearly, the Euclidean plane is an affine plane. The obvious connection to the parallel axiom justifies the following definition.
\end{definition}
\begin{definition}
\textbf{(Parallel Lines):} Two lines in an affine plane are said to be parallel if they do not intersect. Any line is also said to be parallel to itself. If $l_1$ and $l_2$ are two parallel lines we write $l_1 \parallel l_2$.
\end{definition}
\begin{lemma}
The relation ``is parallel'' on the lines of an affine plane $A$ is an equivalence relation.
\end{lemma}
\noindent \textbf{Proof:} Let $l_1,l_2$ and $l_3$ be three distinct lines of $A$. Then
\begin{itemize}
\item By definition $l_1 \parallel l_1$, so $\parallel$ is reflexive.
\item Assume that $l_1 \parallel l_2$. Then $l_2$ does not intersect $l_1$ either, so $l_2 \parallel l_1$, and so $\parallel$ is symmetric.
\item Finally, to show transitivity assume that $l_1 \parallel l_2$ and $l_2 \parallel l_3$, but $l_1 \nparallel l_3$. Then $l_1$ and $l_3$ intersect in a point $p$. But then $p$ is on two lines ($l_1$ and $l_3$) but not on $l_2$. This contradicts AP since AP says there is a \emph{unique} parallel line and here we have two. Hence $l_1 \parallel l_3$, and so $\parallel$ is transitive. $\square$ \end{itemize} \bigskip \noindent The equivalence relation "is parallel" is kind of interesting in that it partitions the set of lines in an affine plane into parallel classes. So for a line $l$ in an affine plane, we denote its parallel class by $[l]$. $[l]$ consists of all lines parallel to $l$. \begin{lemma} Let $p$ be a point of an affine plane. For each parallel class of lines, there is exactly one line on $p$ that belongs to the class. \end{lemma} \noindent \textbf{Proof: } Let [l] be any parallel class for $l \in L$. If $l$ is not on $p$ then by AP there exists a unique line on $p$ parallel to $l$ so done. If $l$ is on $p$, we must show that no other line on $p$ is also in $[l]$. But any other line on $p$ intersects $l$ at $p$, so the lines are not parallel. $\square$ \bigskip \noindent Ok, but what is a projective plane? % \newpage \subsection{Projective Planes} \begin{definition} \textbf{(Projective plane): } A projective plane is a geometry that satisfies the following condition: \begin{itemize} \item PP: Any two lines intersect in exactly one point. \end{itemize} \end{definition} \noindent So what is the difference between affine and projective planes? In an affine plane parallel lines exists whereas in projective planes two unique lines always intersect (at one point). \subsection{The Fano Plane} \label{subsec:fano_plane} The Fano Plane, named for Italian mathematician Gino Fano \cite{wiki:gino_fano}, is the finite projective plane of order 2. It is the finite projective plane with the \emph{smallest possible number of points and lines}: 7 points and 7 lines, with 3 points on every line and 3 lines through every point. The standard notation for this plane, as a member of a family of projective spaces, is PG(2, 2) where PG stands for "projective geometry", the first parameter is the geometric dimension and the second parameter is the order. \bigskip \noindent More precisely, let $\Pi = (P,L)$ be a geometry where $P = \{1,2,3,4,5,6,7\}$ and $L = \{l_1,l_2,l_3,l_4,l_5,l_6,l_7\}$ and let $l_1 = \{1,2,3\}, \: l_2 = \{1,4,5\}, \: l_3 = \{1,6,7\}, \: l_4 = \{2,4,6\}, \: l_5 = \{2,5,7\}, \: l_6 = \{3,4,7\}$ and $l_7 = \{3,5,6\}$. This labelling is shown in Figure \ref{fig:fano_plane}. Here $\Pi$ is a projective plane as it easily satisfies G1 and PP and $\{1, 2, 4, 7\}$ is an example of a quadrangle on $\Pi$. \bigskip \noindent Legend has it that the Fano Plane was invented as a solution to a game posed by Fano. The challenge was to create a projective plane with the smallest possible number of points and lines, 7 each, with 3 points on every line and 3 lines through every point. The problem remained unsolved until someone observed that not all lines need be straight. A circle was then added to the projective plane to create what we know as the Fano Plane today. The Fano Plane shown in Figure \ref{fig:fano_plane}. 
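\bigskip
\noindent
As a quick concrete check of G1 and PP with this labelling, we can intersect a few of the lines directly.
\begin{example}
Intersecting lines of $\Pi$ as sets gives, for instance, $l_1 \cap l_4 = \{1,2,3\} \cap \{2,4,6\} = \{2\}$, $l_2 \cap l_7 = \{1,4,5\} \cap \{3,5,6\} = \{5\}$, and $l_3 \cap l_6 = \{1,6,7\} \cap \{3,4,7\} = \{7\}$, so that, in the notation above, $l_1l_4 = 2$, $l_2l_7 = 5$, and $l_3l_6 = 7$. Checking all $\binom{7}{2} = 21$ pairs of lines in the same way shows that any two distinct lines meet in exactly one point (PP), and dually any two distinct points are on exactly one line (G1); for example, the points $2$ and $5$ are both on $l_5$ and on no other line.
\end{example}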
\bigskip \begin{figure}[H] \center{\includegraphics[scale=0.4] {images/fanoplane.jpg}} \caption{The Fano Plane} \label{fig:fano_plane} \end{figure} \section {Homogeneous Coordinates} Homogeneous Coordinates are a convenient way to represent translations and rotations which allows them both to be expressed as a matrix multiplication. This has certain computational and notational conveniences, and eventually leads us to a wider and interesting class of transformations. \bigskip \noindent Suppose we wish to translate all points $(X, Y, Z)$ by adding some constant vector $(t_x, t_y, t_z)$ to all coordinates. One way to accomplish this is to embed the points $(X, Y, Z) \in \mathbb{R}^3$ in $\mathbb{R}^4$ by tacking on a fourth coordinate with value 1. So the embedding of $\mathbb{R}^3$ in $\mathbb{R}^4$ looks like \bigskip \begin{equation*} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \hookrightarrow \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \end{equation*} \bigskip \noindent So we want the translated point \begin{equation*} \begin{bmatrix} X + t_x \\ Y + t_y \\ Z + t_z \\ 1 \end{bmatrix} \end{equation*} \bigskip \noindent We can get this translation by multiplying by a "translation matrix" \begin{equation*} \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{equation*} \bigskip \noindent Then the translation, represented as matrix multiplication, looks like \begin{equation*} \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix*}[l] 1 \cdot X &+& 0 \cdot Y &+& 0 \cdot Z &+& t_x \cdot 1 \\ 0 \cdot X &+& 1 \cdot Y &+& 0 \cdot Z &+& t_y \cdot 1\\ 0 \cdot X &+& 0 \cdot Y &+& 1 \cdot Z &+& t_z \cdot 1\\ 0 \cdot X &+& 0 \cdot Y &+& 0 \cdot Z &+& 1 \cdot 1 \end{bmatrix*} = \begin{bmatrix} X + t_x \\ Y + t_y \\ Z + t_z \\ 1 \end{bmatrix} \end{equation*} \bigskip \noindent We can perform rotations using a $4 \times 4$ matrix as well. Rotations matrices go into the upper-left $3 \times 3$ corner of the $4 \times 4$ matrix \begin{equation*} \begin{bmatrix} * & * & * & 0 \\ * & * & * & 0 \\ * & * & * & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{equation*} \bigskip \noindent We can also scale the three coordinates with a scaling matrix \begin{equation*} \begin{bmatrix} \sigma_x & 0 & 0 & 0 \\ 0 & \sigma_y& 0 & 0 \\ 0 & 0 & \sigma_z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{equation*} \bigskip \noindent so that \begin{equation*} \begin{bmatrix} \sigma_x & 0 & 0 & 0 \\ 0 & \sigma_y& 0 & 0 \\ 0 & 0 & \sigma_z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix*}[l] \sigma_x \cdot X &+& 0 \cdot Y &+& 0 \cdot Z &+& 0 \cdot 1 \\ 0 \cdot X &+& \sigma_y \cdot Y &+& 0 \cdot Z &+& 0 \cdot 1 \\ 0 \cdot X &+& 0 \cdot Y &+& \sigma_z \cdot Z &+& 0 \cdot 1 \\ 0 \cdot X &+& 0 \cdot Y &+& 0 \cdot Z &+& 1 \cdot 1 \end{bmatrix*} = \begin{bmatrix} \sigma_x X \\ \sigma_y Y \\ \sigma_z Z \\ 1 \end{bmatrix} \end{equation*} \bigskip \noindent So far we have represented (embedded) points $(X, Y, Z) \in \mathbb{R}^3$ as points $(X, Y, Z, 1) \in \mathbb{R}^4$. We can generalize this by representings the points $(X, Y, Z) \in \mathbb{R}^3$ as \emph{any} vector of the form $(wX, wY, wZ, w) \in \mathbb{R}^4$, where $w \neq 0$. 
Note that the set of points
\bigskip
\begin{equation*}
\big \{(wX,wY,wZ,w) \mid w \neq 0\big \}
\end{equation*}
\bigskip
\noindent
is a line in $\mathbb{R}^4$ that passes through the origin and the point $(X,Y,Z,1)$.
\bigskip
\noindent
Here we are associating each point $(X,Y,Z) \in \mathbb{R}^3$ with a line in $\mathbb{R}^4$ that passes through the origin. This representation is called \emph{homogeneous coordinates}.
\bigskip
\noindent
BTW, is this generalization consistent with the $4 \times 4$ rotation, translation, and scaling matrices above? It turns out that it is. This is because for any $\mathbf{X} = (X, Y, Z, 1)$ and \emph{any} $4 \times 4$ matrix $\mathbf{M}$ we have
\begin{equation*}
w(\mathbf{MX}) \equiv \mathbf{M}(w\mathbf{X})
\end{equation*}
\bigskip
\noindent
That is, multiplying each component of the vector $\mathbf{X}$ by $w$ and then transforming by $\mathbf{M}$ yields the same vector as transforming $\mathbf{X}$ itself by $\mathbf{M}$ and then multiplying each component of $\mathbf{MX}$ by $w$. But multiplying each component of $\mathbf{X}$ by $w$ doesn't change how that vector is transformed.
\bigskip
\noindent
There is another way to define homogeneous coordinates, in terms of the \emph{Real Projective Plane}. Before being able to do this, however, we need a few definitions.
\begin{definition}
{\bf (Real Projective Plane):} The real projective plane is the space of equivalence classes
\begin{equation}
\mathbb{R}P^2 = \{\mathbb{R}^3 \backslash (0,0,0) \} / \sim
\label{eqn:rpp}
\end{equation}
\bigskip
\noindent
where $\sim$ is the equivalence relation such that $(x_1,y_1,z_1) \sim (x_2,y_2,z_2)$ if $\exists t \neq 0$ such that $x_2 = t \cdot x_1$, $y_2 = t \cdot y_1$, and $z_2 = t \cdot z_1$.
\end{definition}
\bigskip
\noindent
Then the alternate definition of homogeneous coordinates is
\begin{definition}
{\bf Homogeneous Coordinates:} The points of $\mathbb{R}P^2$ are called homogeneous coordinates: $[x \mathord{:} y \mathord{:} z]$, where the colons indicate that the \emph{ratios} between the coordinates are the only things that are significant (recall that the notation $[x]$ refers to the equivalence class of $x$).
\end{definition}
\bigskip
\noindent
We can generalize this definition as follows.
\begin{definition}
{\bf Projective Spaces:} The projective spaces
\begin{equation*}
\mathbb{R}P^n = \{\mathbb{R}^{n+1} \backslash \text{Origin}\} / \sim
\end{equation*}
and
\begin{equation*}
\mathbb{C}P^n = \{\mathbb{C}^{n+1} \backslash \text{Origin}\} / \sim
\end{equation*}
\bigskip
\noindent
are sets of equivalence classes via an $n + 1$-dimensional version of Equation \ref{eqn:rpp}, where $t$ is in $\mathbb{R}$ and $\mathbb{C}$, respectively.
\end{definition}
\bigskip
\noindent
These projective spaces can be broken down into unions of ordinary Euclidean space and other projective spaces.
For example: \begin{proposition} {\bf Inclusions:} We have the following inclusions \begin{equation*} (x_1, \cdots, x_n) \mapsto [x_1 \mathord{:} \cdots \mathord{:} x_n \mathord{:} 1] \end{equation*} \begin{equation*} \mathbb{R}^n \hookrightarrow \mathbb{R}P^n \end{equation*} \begin{equation*} \mathbb{C}^n \hookrightarrow \mathbb{C}P^n \end{equation*} \bigskip \noindent This means that a point $[x_1:\hdots: x_{n+1}] \in \mathbb{C}P^n$ with $x_{n+1} \neq 0$ corresponds to the point \bigskip \begin{equation*} \Bigg ( \frac{x_1}{x_{n+1}}, \hdots, \frac{x_n}{x_{n+1}} \Bigg ) \in \mathbb{C}^n \end{equation*} \bigskip \noindent We also have \begin{equation*} [x_1 \mathord{:} \cdots \mathord{:} x_n] \mapsto [x_1 \mathord{:} \cdots \mathord{:} x_n \mathord{:} 0] \end{equation*} \begin{equation*} \mathbb{R}P^{n-1} \hookrightarrow \mathbb{R}P^n \end{equation*} \begin{equation*} \mathbb{C}P^{n-1} \hookrightarrow \mathbb{C}P^n \end{equation*} \bigskip \noindent and \begin{equation*} \mathbb{R}P^n = \mathbb{R}^n \cup \mathbb{R}P^{n-1} \end{equation*} \begin{equation*} \mathbb{C}P^n = \mathbb{C}^n \cup \mathbb{C}P^{n-1} \end{equation*} \bigskip \noindent where the embedded copies of $\mathbb{R}P^{n - 1}$ and $\mathbb{C}P^{n - 1}$ are called the "spaces at infinity" and every point, $[x_1 \mathord{:} \cdots \mathord{:} x_{n+1}] \in \mathbb{C}P^n$ is either in the image of $\mathbb{C}^n$ (if $x_{n+1} \neq 0$) or in the image of $\mathbb{C}P^{n - 1}$ (if $x_{n + 1} = 0$). \end{proposition} \bigskip \noindent It looks like reason for the term "space at infinity" is that the point $[x_1:\hdots:x_{n+1}] \in \mathbb{C}P^n$ with $x_{n+1} \neq 0$ corresponds to the point \bigskip \begin{equation*} \Bigg ( \frac{x_1}{x_{n+1}}, \hdots, \frac{x_n}{x_{n+1}} \Bigg ) \in \mathbb{C}^n \end{equation*} \bigskip \noindent and this point moves out to infinity as $x_{n+1} \to 0$. \bigskip \noindent Another way to think about this is that so far we have considered points like $(wX,wY,wZ,w)$ under the condition that $w \neq 0$ and where this point is associated with $(X,Y,Z) \in \mathbb{R}^3$. So what happens if we let $w = 0$? To understand this consider the point $(X, Y, Z, \epsilon) \in \mathbb{R}^4$ where $\epsilon > 0$ and so \begin{equation*} (X,Y,Z,\epsilon) \equiv \bigg ( \frac{X}{\epsilon}, \frac{Y}{\epsilon}, \frac{Z}{\epsilon}, 1 \bigg ) \end{equation*} \bigskip \noindent Now, if we let $\epsilon \to 0$ then the corresponding point $(\frac{X}{\epsilon} , \frac{Y}{\epsilon} , \frac{Z}{\epsilon} )$ goes to infinity while staying along the line from the origin through the point $(X, Y, Z, 1)$. 
As a result we identify the limit $(X, Y, Z, 0)$ with a point at infinity, namely \bigskip \begin{equation*} (X, Y, Z, 0) \coloneqq \lim_{\epsilon \to 0} \bigg (\frac{X}{\epsilon} , \frac{Y}{\epsilon} , \frac{Z}{\epsilon} \bigg ) \end{equation*} \begin{figure} \center{\includegraphics[scale=0.45,frame] {images/parallel_lines.png}} \caption{Why do Parallel Lines Appear to Intersect?} \label{fig:parallel_lines} \end{figure} \bigskip \noindent One other perhaps reassuring way to think about this: consider that parallel lines appear to intersect in the far distance as shown in Figure \ref{fig:parallel_lines} and that the equation for a single line is \begin{equation} y = ax + b \label{eqn:line} \end{equation} \bigskip \noindent We can rewrite Equation \ref{eqn:line} as \begin{equation*} x_2 = a_1 x_1 + b_1 \end{equation*} \bigskip \noindent In this case we have points $(x_1,x_2) \in \mathbb{C}^2$ gives rise to homogeneous coordinates $[x_1 \mathord{:} x_2 \mathord{:} x_3]$ in $\mathbb{C}P^2$ where \bigskip \begin{equation*} (x_1,x_2) \mapsto [x_1 \mathord{:} x_2 \mathord{:} x_3] \implies \\ (x_1,x_2) \mapsto \bigg [ \frac{x_1}{x_3} \mathord{:} \frac{x_2}{x_3} \mathord{:} 1 \bigg ] \end{equation*} \bigskip \noindent and so \bigskip \begin{equation*} \frac{x_2}{x_3} = a_1 \frac{x_1}{x_3} + b_1 \end{equation*} \bigskip \noindent or \begin{equation} x_2 = a_1 x_1 + b_1 x_3 \label{eqn:projective_line} \end{equation} \bigskip \noindent So we can see that lines in $\mathbb{C}^2$ map uniquely to lines in $\mathbb{C}P^2$ by Equation \ref{eqn:projective_line}. So here's a proposition: \bigskip \begin{proposition} Two \emph{distinct} lines \begin{flalign*} x_2 &= a_1 x_1 + b_1 x_3 \\ x_2 &= a_2 x_1 + b_2 x_3 \end{flalign*} \bigskip \noindent in $\mathbb{C}P^2$ intersect in one point. If they are parallel (that is, $a_1 = a_2 = a$) then they intersect at the point $[x_1 \mathord{:} ax_1 \mathord{:} 0]$. That is, at infinity. \bigskip \noindent {\bf Proof: } If the lines are not parallel (i.e., $a_1 \neq a_2$) then they intersect in $\mathbb{C}^2 \subset \mathbb{C}P^2$ in the usual way. If $a_1 = a_2 = a$, then \begin{equation*} \begin{array}{llll} & x_2 &=& a x_1 + b_1 x_3 \\ - &x_2 &=& a x_1 + b_2 x_3 \\ \hline & 0 &=& 0 + (b_1 - b_2) x_3 \end{array} \end{equation*} \bigskip \noindent So $0 = 0 + (b_1 - b_2) x_3 \implies 0 = (b_1 - b_2) x_3$. Since the two lines were distinct but parallel we know that $b_1 \neq b_2 \implies b_1 - b_2 \neq 0$, So $x_3$ must be zero which implies that $x_2 = a x_1 + b \cdot 0$ or $x_2 = ax_1$. Then the original two lines intersect at the point $[x_1 \mathord{:} ax_1 \mathord{:} 0] \in \mathbb{C}P^1 \subset \mathbb{C}P^2$. That is, at infinity. $\square$ \end{proposition} \bigskip \noindent A question we might ask is what happens to a point at infinity when we perform a rotation, translation, or scaling? Since the bottom row of each of these $4 \times 4$ matrices is $(0,0,0,1)$, it is easy to see that these transformations map points at infinity to points at infinity. In particular, \begin{itemize} \item A translation matrix does not affect a point at infinity; i.e. it behaves the same as the identity matrix. \item A rotation matrix maps a point at infinity in exactly the same way it maps a finite point, namely, $(X,Y,Z,1)$ rotates to $(X^\prime,Y^\prime,Z^\prime,1)$ if and only if $(X,Y,Z,0)$ rotates to $(X^\prime,Y^\prime,Z^\prime,0)$. 
\item A scale matrix maps a point at infinity in exactly the same way it maps a finite point, namely, $(X, Y, Z, 1)$ maps to $(\sigma_x X, \sigma_y Y, \sigma_z Z, 1)$ if and only if $(X, Y, Z, 0)$ scales to $(\sigma_x X, \sigma_y Y, \sigma_z Z, 0)$.
\end{itemize}
\noindent
We sometimes interpret points at infinity as \emph{direction} vectors, that is, they have a direction but no position. One must be careful in referring to them in this way, though, since vectors have a length whereas points at infinity $(X, Y, Z, 0)$ do not have a length.
\bigskip
\section{Acknowledgements}
\bibliographystyle{plain}
\bibliography{/Users/dmm/papers/bib/qc}
\end{document}
{ "alphanum_fraction": 0.6821122939, "avg_line_length": 39.2062193126, "ext": "tex", "hexsha": "151614d49f4a4c3557b42f71872a2c36cc28a82c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "14f01e0a50b9c643b5176a10c840f270b9da7bc1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "davidmeyer/davidmeyer.github.io", "max_forks_repo_path": "_my_stuff/papers/qc/projective_geometry/projective_geometry.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "14f01e0a50b9c643b5176a10c840f270b9da7bc1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "davidmeyer/davidmeyer.github.io", "max_issues_repo_path": "_my_stuff/papers/qc/projective_geometry/projective_geometry.tex", "max_line_length": 369, "max_stars_count": null, "max_stars_repo_head_hexsha": "14f01e0a50b9c643b5176a10c840f270b9da7bc1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "davidmeyer/davidmeyer.github.io", "max_stars_repo_path": "_my_stuff/papers/qc/projective_geometry/projective_geometry.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8110, "size": 23955 }
\section{The \filli algorithm}
\Label{sec:filli}
The \filli algorithm in the \cxx Standard Library \cite[\S 28.6.6]{cxx-17-draft} initializes general sequences with a particular value. The signature of our modified variant reads:
\begin{lstlisting}[style=acsl-block]
void fill(value_type* a, size_type n, value_type v);
\end{lstlisting}
\subsection{Formal specification of \filli}
The following listing shows the formal specification of \specref{filli}. We can express the postcondition of \filli simply by using the overloaded predicate \logicref{AllEqual}.
\input{Listings/fill.h.tex}
The \inl{assigns}-clauses formalize that \filli modifies only the entries of the range \inl{a[0..n-1]}. In general, when more than one \emph{assigns clause} appears in a function's specification, the function is permitted to modify any of the referenced memory locations. However, if no \emph{assigns clause} appears at all, the function is free to modify any memory location; see \cite[\S 2.3.2]{ACSLSpec}. To forbid a function from making any modifications outside its scope, a clause \inl{assigns \\nothing;} must be used, as we practised in the example specifications in Chapter~\ref{cha:non-mutating}.
\subsection{Implementation of \filli}
The implementation of \implref{filli} comes with the loop invariant \inl{constant}, which expresses that at each iteration the array is \emph{filled} with the value of \inl{v} up to the current loop index \inl{i}. Note that here we again use the predicate \logicref{AllEqual}.
\input{Listings/fill.c.tex}
%\clearpage
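\medskip
\noindent
For readers who do not have the listing files at hand, the following sketch shows the general shape of such an implementation together with loop annotations of the kind described above. It is \emph{not} the verified listing itself: the typedefs are assumed here only to make the sketch self-contained, and the invariant \inl{constant} is written out as an explicit quantification rather than via the \logicref{AllEqual} predicate.
\begin{lstlisting}[style=acsl-block]
typedef int          value_type;  /* placeholder element type */
typedef unsigned int size_type;   /* placeholder size type    */

void fill(value_type* a, size_type n, value_type v)
{
  /*@
    loop invariant bound:    0 <= i <= n;
    loop invariant constant: \forall integer k; 0 <= k < i ==> a[k] == v;
    loop assigns i, a[0..n-1];
    loop variant n - i;
  */
  for (size_type i = 0u; i < n; ++i) {
    a[i] = v;
  }
}
\end{lstlisting}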
{ "alphanum_fraction": 0.7726098191, "avg_line_length": 32.9361702128, "ext": "tex", "hexsha": "448b79806e2262a0906d9b06b41c1bb2786bed8f", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_path": "Informal/mutating/fill.tex", "max_issues_count": 22, "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_path": "Informal/mutating/fill.tex", "max_line_length": 82, "max_stars_count": 90, "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_path": "Informal/mutating/fill.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "num_tokens": 427, "size": 1548 }
\section{Smoothing and Penalized Least Squares}
In Section 4.4.1 we saw that the smoothing spline solution to a penalized least squares problem is a linear smoother. Using the notation of Section 4.4.1, we can write the penalized criterion as
\[
(\by - \bB\bg{\theta})'(\by - \bB\bg{\theta}) + \lambda\bg{\theta}'\bg{\Omega}\bg{\theta}
\]
Setting the derivative with respect to $\bg{\theta}$ equal to 0, that is, $-2\bB'(\by - \bB\bg{\theta}) + 2\lambda\bg{\Omega}\bg{\theta} = 0$, gives the estimating equation:
\[
(\bB'\bB + \lambda\bg{\Omega})\bg{\theta} = \bB'\by
\]
The $\hat{\bg{\theta}}$ that solves this equation gives us the estimate $\hat{\g} = \bB \hat{\bg{\theta}}$. Write:
\[
\hat{\g} = \bB \hat{\bg{\theta}} = \bB(\bB'\bB + \lambda \bg{\Omega})^{-1} \bB'\by = ({\mathbf I} + \lambda {\mathbf K})^{-1}\by
\]
where ${\mathbf K} = \bB'^{-} \bg{\Omega} \bB^{-}$. Notice that we can write the penalized criterion as
\[
(\by - \g)'(\by - \g) + \lambda \g' {\mathbf K} \g
\]
If we plot the rows of this linear smoother we will see that it acts like a kernel smoother.
\begin{figure}[htb]
\caption{Kernels of a smoothing spline.}
\begin{center}
\epsfig{figure=Plots/plot-06-03.ps,angle=270,width=\textwidth}
\end{center}
\end{figure}
Notice that for any linear smoother with a symmetric and nonnegative definite $\bS$, i.e., such that $\bS^{-}$ exists, we can argue in reverse: $\hat{\f}=\bS\by$ is the value that minimizes a penalized least squares criterion of the form
\[
(\by - \f)'(\by - \f) + \f'(\bS^{-} - I)\f.
\]
Some of the smoothers presented in this class are not symmetric, but are close. In fact, for many of them one can show that asymptotically they are symmetric.
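To verify this reverse claim, note that if $\bS$ is symmetric then so is $\bS^{-}$, so differentiating the criterion above with respect to $\f$ and setting the result to 0 gives
\[
-2(\by - \f) + 2(\bS^{-} - I)\f = 0
\quad \Longrightarrow \quad
\by = \f + (\bS^{-} - I)\f = \bS^{-}\f
\quad \Longrightarrow \quad
\hat{\f} = \bS\by,
\]
where in the last step we read $\bS^{-}$ as the inverse of $\bS$ (or, more carefully, as a generalized inverse acting on the appropriate subspace).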
{ "alphanum_fraction": 0.6652173913, "avg_line_length": 26.393442623, "ext": "tex", "hexsha": "a8ffda97187ccc2ac91b182a46c1432e31f838d0", "lang": "TeX", "max_forks_count": 38, "max_forks_repo_forks_event_max_datetime": "2021-11-20T12:17:08.000Z", "max_forks_repo_forks_event_min_datetime": "2016-08-17T22:17:30.000Z", "max_forks_repo_head_hexsha": "2f27ea0d9e0b8a2342bb851ae7415ba3268fd00f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "igrabski/rafalab.github.io", "max_forks_repo_path": "pages/754/section-06-02.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "2f27ea0d9e0b8a2342bb851ae7415ba3268fd00f", "max_issues_repo_issues_event_max_datetime": "2021-01-21T22:35:40.000Z", "max_issues_repo_issues_event_min_datetime": "2016-08-18T00:41:36.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "igrabski/rafalab.github.io", "max_issues_repo_path": "pages/754/section-06-02.tex", "max_line_length": 74, "max_stars_count": 50, "max_stars_repo_head_hexsha": "2f27ea0d9e0b8a2342bb851ae7415ba3268fd00f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "igrabski/rafalab.github.io", "max_stars_repo_path": "pages/754/section-06-02.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-31T19:21:02.000Z", "max_stars_repo_stars_event_min_datetime": "2016-08-17T23:04:04.000Z", "num_tokens": 541, "size": 1610 }
\documentclass[10pt,letterpaper]{article} \usepackage[margin=1in]{geometry} \usepackage{setspace} \usepackage{fancyhdr} \usepackage{lastpage} \usepackage{tcolorbox} \pagestyle{fancyplain} % Put watermark on % \usepackage{draftwatermark} % \SetWatermarkText{Draft} % \SetWatermarkScale{7} \lhead{} \chead{Central Massachusetts Amateur Radio Association} \rhead{} \lfoot{\texttt{https://github.com/mide/cmara-meeting-minutes/}} \cfoot{} \rfoot{Page \thepage\ of \pageref{LastPage}} \begin{document} \begin{center} {\huge March 2017 Business Meeting}\\ \emph{of the}\\ {\Large Central Massachusetts Amateur Radio Association}\\ \emph{Submitted by Mark Ide \texttt{W1IDE}, Secretary} \end{center} \section{Meeting Called to Order} The CMARA March 2017 business meeting was called to order on March 16, 2017 by CMARA president Bob Peloquin (\texttt{KB1VUA}) at 7:17 PM. \section{Attendance} \noindent Below is the list of members and guests that attended the meeting. If you are not caught up on your dues, you'll be listed as a guest. If you feel that's in error, please reach out the club treasurer. \subsection{Officers Present} \begin{tabular}{|l|l|l|c|} \hline \textbf{Position} & \textbf{Name} & \textbf{Callsign} & \textbf{Present} \\ \hline President & Bob Peloquin & \texttt{KB1VUA} & Yes \\ Vice President & Brian Loverro & \texttt{K1BML} & Yes \\ Secretary & Mark Ide & \texttt{W1IDE} & Yes \\ Treasurer & Jim Singer & \texttt{N1EKO} & No \\ Webmaster & Lyn Glagowski & \texttt{WB1CCL} & No \\ \hline \end{tabular} \subsection{Board of Directors Present} \begin{tabular}{|l|l|c|} \hline \textbf{Name} & \textbf{Callsign} & \textbf{Present} \\ \hline Adrian Zeffert & \texttt{AB2IX} & Yes \\ Harold Carlson & \texttt{N1ZC} & No \\ Greg Algieri & \texttt{WA1JXR} & Yes \\ Terry Glagowski & \texttt{W1TR} & Yes \\ Randy Dore & \texttt{W4FEB} & Yes \\ Johnathan Sherman & \texttt{WW2JS} & Yes \\ \hline \end{tabular} \subsection{Members Present} \texttt{K1ALN}, \texttt{K1RAU}, \texttt{K1YYT}, \texttt{KA1SPY}, \texttt{KB1VXH}, \texttt{KB1VXY}, \texttt{KC1ACX}, \texttt{KC1BHD}, \texttt{KC1ETB}, \texttt{KC1FPI}, \texttt{KC1GIB}, \texttt{KM1D}, \texttt{N1FSK}, \texttt{NE1R}, \texttt{W1AHM}, \texttt{W1BFG}, \texttt{W1CAB}, \texttt{W1GJM}, \texttt{W1IDE}, \texttt{W1REJ}, \texttt{W1WER}, \texttt{WA1MDD}, \texttt{WA1RCQ}, \texttt{WI1Y} \subsection{Guests \& Visitors} \texttt{AB1HT}, \texttt{AE1DX}, \texttt{KA1AQP}, \texttt{KA3RLZ}, \texttt{KB2SAE}, \texttt{KC1BCY}, \texttt{KC1SGT}, \texttt{KW2T}, \texttt{N1FF}, \texttt{N1JEL}, \texttt{W1AAM}, Christopher Wentworth % \noindent % \textasteriskcentered{} Entered as a guest. Voted in as new member. See \S{} \ref{new-cmara-members} for details. \section{Reports} \subsection{Secretary's Report} The secretary's report for February 2017 was voted in by voice vote. \newpage \subsection{Treasurer's Report} The treasurers report for February will be tabled until the April meeting. \subsection{Committee Reports} \subsubsection{Membership Committee Report} Nothing to report. We did have several new visitors present at the February meeting. \subsubsection{Repeater Trustees Report} Nothing to report. No communications have been made with Kurt, as the trustees are waiting for the better weather. \subsubsection{Field Day Committee Report} \begin{enumerate} \item Nothing to report, as the head of the committee was not present. \item The Astronomy Club (Aldrich Astronomical Society, Inc.) has given the okay for us to use their field this year. 
Just like last year, the field will only be open to CMARA members in good standing, and not to the public. Bob (\texttt{KB1VUA}) is on the board of the Aldrich Astronomical Society, and the club members would be guests of his. \item There have been many improvements to the field, such as the completion of the building and there will be hot water this year to assist with proper cooking crew hand and utensil washing during the event. \item We will likely still have a liability waiver that people will need to sign upon entry. This will be enforced better than last year, and we will likely have a table at the entrance to ensure everyone signs. \item Brian (\texttt{K1BML}) will be the safety officer again this year. \end{enumerate} \section{Old Business} There was no business tabled from February. \section{New Business} \subsection{New CMARA Members} \label{new-cmara-members} Without the treasurer present, no new members were voted in. All business pertaining to membership will be tabled until April. \subsection{Other New Business} \begin{enumerate} \item Last month, Mark (\texttt{W1IDE}) shared with the club that he open-sourced many documents and templates on GitHub. Upon discussions with the other officers, it was decided that there is value in having all club resources under the \texttt{cmara.org} domain. For this reason, the GitHub page and all associated assets have been taken offline. \newpage \item According to the by-laws, we must publish the list of paid-up members. \begin{enumerate} \item By-Laws Excerpt: \begin{tcolorbox} Section 4 - In the March bulletin and/or meeting minutes a roster of paid-up members shall be published. At the April Board of Directors meeting, the Treasurer shall submit a list of all members in arrears on dues. The names of all members in arrears shall be removed from all rosters and mailing lists and their membership privileges suspended. \end{tcolorbox} \item At the start of the March meeting, we had 63 paid-up members, and they are below. \\ \begin{tabular}{|ll||ll|} \hline \textbf{Callsign} & \textbf{Name} & \textbf{Callsign} & \textbf{Name} \\ \hline \texttt{AA2S} & Bill Michalson & \texttt{N1FSK} & Sandy Lancraft \\ \texttt{AB1MT} & George Szarka & \texttt{N1HTB} & Dimitris Paliyannis \\ \texttt{AB1ZW} & Marc Abretti & \texttt{N1KWX} & Hugh Bouchard \\ \texttt{K1ALN} & Alan Ruuska & \texttt{N1TYH} & Steven Olivieri \\ \texttt{K1BML} & Brian Loverro & \texttt{N2LND} & John Barbato \\ \texttt{K1QJM} & Achille A. Levesque & \texttt{N9PVF} & David Lipson \\ \texttt{K1RAU} & Dan Rau & \texttt{NE1R} & Thomas C. Carrigan \\ \texttt{K1YYT} & Gerard Bachand & \texttt{W1ABP} & Alison Peters \\ \texttt{KA1SPY} & John Kenney & \texttt{W1AHM} & Alan H. Martin \\ \texttt{KA2CNN} & Dwight Ernest & \texttt{W1BFG} & John Boyd \\ \texttt{KB1EZF} & Scott Olsen & \texttt{W1BGL} & Nicholas Gatzios \\ \texttt{KB1LOU} & Ron DiProfio & \texttt{W1BNC} & Mike Keller \\ \texttt{KB1NIQ} & Michelle Martin & \texttt{W1CAB} & Craig Bolen \\ \texttt{KB1OQA} & Thomas Turner & \texttt{W1DYQ} & Dean Huggins \\ \texttt{KB1QZD} & Clovis Padilha & \texttt{W1GD} & Gerald Kersus \\ \texttt{KB1VUA} & Robert Peloquin Jr & \texttt{W1GJM} & Gary McMeekin \\ \texttt{KB1VXH} & George Saari & \texttt{W1IDE} & Mark Ide Jr \\ \texttt{KB1VXY} & John J. 
Iwuc & \texttt{W1IKE} & Roger Ikonan \\ \texttt{KC1ACX} & Vernie Grady & \texttt{W1PA} & Bill Acito \\ \texttt{KC1BHD} & James Garner & \texttt{W1REJ} & Richard Jubinville \\ \texttt{KC1BKW} & Michael DesChenes & \texttt{W1RJZ} & Ralph Zottola \\ \texttt{KC1DQT} & Robert Diehl & \texttt{W1RVY} & Eric Wilhelm \\ \texttt{KC1DTT} & Michael Latour & \texttt{W1WER} & James A. Young \\ \texttt{KC1DYA} & Kevin Romines & \texttt{W3DEC} & Don Carlton \\ \texttt{KC1ETB} & Craig Brauckmiller & \texttt{W3SJP} & Jeremy Peters \\ \texttt{KC1FPI} & Maurice Quirke & \texttt{W4FEB} & Randolph Dore \\ \texttt{KC1GBK} & Jamie Giampietro & \texttt{WA1JXR} & L. Greg Algieri \\ \texttt{KC1GIB} & Herbert Gilbert & \texttt{WA1MDD} & John Spencer \\ \texttt{KC1SGT} & Joe Rich & \texttt{WA1RCQ} & Arthur B. Kass \\ \texttt{KM1D} & Ray Donadt & \texttt{WI1Y} & Mickey Westover \\ \texttt{N1AVP} & Morris Beverly & \texttt{WW2JS} & Jonathan Sherman \\ \texttt{N1EKO} & James Singer & & \\ \hline \end{tabular} \item During the April board meeting, discussions will be had about members who have not paid their 2017 dues. It's important to note that no action has been taken yet, so if you're not on the list it's not to late. Reach out to the club ASAP. \item Only the president and vice president will be in contact with members who are not caught up. \item Some folks paid (or tried to pay) during the March meeting. These renewals have not yet been processed by the treasurer. Please check in with him for more information. \end{enumerate} \newpage \item Greg (\texttt{WA1JXR}) provided the club with an update for the MCW (Modulated CW) project. If you're interested in taking part of the project, email Greg at \texttt{[email protected]}. Greg shared a prototype of the board, with all soldering and wires attached. The cost for the project will be \$50.00 for the complete kit, but you will still need a CW key and various cables depending on your setup. If all goes well, the kits could be ready for the April meeting. \end{enumerate} \section{For the Good of the Club} \subsection{New Hams \& Upgrades} No new hams present nor upgrades this month. \subsection{Other News} \begin{enumerate} \item There are many upcoming hamfests. Some are listed below, taken from \texttt{W1GSL}'s list which is usually up to date. Visit \texttt{http://web.mit.edu/w1gsl/Public/ne-fleas} for more info. \begin{tcolorbox} \begin{verbatim} New England Area Ham - Electronic Flea Market *** DATES *** 2017 P 1 of 2 ******************************************************************************* 2017 Contact Source ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 19 Mar Southington CT SARA @HS $5@8 $20/T@6:15 John WA1JKR 860 621 8791 W+ 19 Mar Henniker NH CVRC @CommSch Jeffrey KB1WTI 603 831 9352 + 31 Mar - 1 Ap Lewiston ME AARC ME Conv @Ramada @8 Ivan N1OXA 207 784 0350 W 8 Ap Newton MA PHSNE Photographica Sat Only @AmLegion @9A John 781 592 2553 8 Ap Hampton NH PCARC @Masonic $5@8 $10/T@7 Mark K1RX 603 775 0220 F 8 Apr LaSalle PQ MARC @RC Legion $5@9 $10/T@8:15 Jim VE2VE 514-990-1965 R 8 Apr Windsor CT VR+C Mus 115 Pierson LN @8AM Indoor John 860 673 0518 + 15 April S Portland ME PAWA @AmLegion Bryce K1GAX 207 415 0498 A 16 Apr Cambridge MA Flea at MIT Mitch 617 253 3776 F Third Sunday April thru October ... [SNIP] ... 
******************************************************************************* LAST UPDATE 3-14-17 de W1GSL http://swapfest.us P 1 List is normally updated twice a month - look for the latest version ******************************************************************************* Additions/ Corrections via e-Mail [email protected] <- SUBSCRIBE US Mail W1GSL POB 397082 MIT Br Cambridge MA 02139 (c)2017 W1GSL unlimited reproduction permitted in entirety \end{verbatim} \end{tcolorbox} \item James Young (\texttt{W1WER}) attended the meeting and was recognized for being a past president of CMARA. Thank you for your continued support, Jim! \end{enumerate} \section{Next Meeting} The next meeting will be Thursday, April 20, 2017 at 7:00 PM. It will be located at the Oakdale United Methodist Church, 15 North Main Street, West Boylston, MA 01583.\\ \noindent The meeting will begin promptly at 7:00 PM. Social time will start around 6:30 PM, so be sure to come early if you would like to mingle.\\ \noindent The topic of the next meeting will be decided during the board meeting following the conclusion of the business meeting. \section{Meeting Adjourned} The meeting was adjourned March 16, 2017 at 7:50 PM by CMARA president Bob Peloquin (\texttt{KB1VUA}). \section{Post-Meeting Presentation} We had a swap meet during the March meeting and it went very well. The results and metrics of the swap will be discussed during the board meeting tonight. \end{document}
{ "alphanum_fraction": 0.6724014337, "avg_line_length": 46.5, "ext": "tex", "hexsha": "18f9f8fc4587da4398e3747f2888c20ec7919e02", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-03-17T09:20:26.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-17T09:20:26.000Z", "max_forks_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cmara/meeting-minutes", "max_forks_repo_path": "minutes/2017-03-16-business-meeting.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cmara/meeting-minutes", "max_issues_repo_path": "minutes/2017-03-16-business-meeting.tex", "max_line_length": 476, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cmara/meeting-minutes", "max_stars_repo_path": "minutes/2017-03-16-business-meeting.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-27T17:33:16.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-27T17:33:16.000Z", "num_tokens": 3845, "size": 12555 }
\documentclass{article}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{subcaption}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath} % or simply amstext
\usepackage{siunitx}
\usepackage{booktabs}
\usepackage[export]{adjustbox}
\newcommand{\angstrom}{\textup{\AA}}
\newcommand{\colormap}{jet} % colorbar to use
\usepackage{cleveref}
\usepackage{booktabs}
\usepackage{gensymb}
\usepackage{float}

\title{Intuitive explanations of X-ray diffraction patterns}
\author{Benjamin J. Coscia \and Michael R. Shirts}

\begin{document}

\bibliographystyle{ieeetr}
\graphicspath{{./figures/}}

\maketitle

\section{Topics to explore}

Topics here are trivial to people who have long been immersed in crystallography or a related field. This stuff is far from intuitive.

Reference Atlas of optical transforms
\begin{itemize}
	\item 2D transforms of 2D arrays explored in detail
	\item Here we look at slices of 3D transforms created from 3D coordinates
\end{itemize}

\begin{itemize}
	\item Narrow to structures with periodic order (eliminate proteins, amorphous materials)
	\item Will not talk about instrumentation
	\item Common crystal lattices
	\begin{itemize}
		\item 1D and 2D patterns (potentially 3D)
		\item Perfect lattices
		\item Sample orientation
		\item Add noise in each dimension
	\end{itemize}
	\item Common liquid crystal phases
	\begin{itemize}
		\item show 1D and 2D patterns
		\item Nematic
		\item Columnar
		\item Bicontinuous Cubic
		\item Smectic
	\end{itemize}
	\item Other shapes
	\begin{itemize}
		\item helices + screw axes
	\end{itemize}
	\item Effect of tortuosity
	\item Effect of misaligned domains - angle averaging
	\item Intensity of subharmonics
\end{itemize}

\end{document}
{ "alphanum_fraction": 0.7661056297, "avg_line_length": 28.7166666667, "ext": "tex", "hexsha": "d745ca7678372f8f4123195b0e5e4e46f2a2c18f", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2021-01-27T17:59:13.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-06T15:41:53.000Z", "max_forks_repo_head_hexsha": "e94694f298909352d7e9d912625314a1e46aa5b6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "shirtsgroup/LLC_Membranes", "max_forks_repo_path": "Ben_Manuscripts/fourier_transforms/Outline.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "e94694f298909352d7e9d912625314a1e46aa5b6", "max_issues_repo_issues_event_max_datetime": "2019-08-22T22:35:17.000Z", "max_issues_repo_issues_event_min_datetime": "2019-08-22T20:11:46.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "shirtsgroup/LLC_Membranes", "max_issues_repo_path": "Ben_Manuscripts/fourier_transforms/Outline.tex", "max_line_length": 136, "max_stars_count": 4, "max_stars_repo_head_hexsha": "e94694f298909352d7e9d912625314a1e46aa5b6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "shirtsgroup/LLC_Membranes", "max_stars_repo_path": "Ben_Manuscripts/fourier_transforms/Outline.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-11T18:57:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-18T15:26:49.000Z", "num_tokens": 534, "size": 1723 }
\chapter{Preface} This thesis is submitted in partial fulfillment of the requirements for the degree of \emph{Philosophiae Doctor} at the University of Oslo. The research presented here was conducted at the University of Oslo and at CERN, under the supervision of professor Main Supervisor and associate professor Co Supervisor. This work was supported by the Norwegian Research Council through grant 123456. The thesis is a collection of three papers, presented in chronological order of writing. The common theme to them is a \LaTeX\ thesis template. The papers are preceded by an introductory chapter that relates them to each other and provides background information and motivation for the work. Two of the papers are joint work with Second Author. I am the sole author of the remaining paper. \section*{Acknowledgements} Thanks for all the fish! \vskip\onelineskip \begin{flushleft} \sffamily % \uiocolon\textbf{\theauthor} \textbf{\theauthor} \\ Risø,\MONTH\the\year \end{flushleft}
{ "alphanum_fraction": 0.7876106195, "avg_line_length": 32.8064516129, "ext": "tex", "hexsha": "88c5ec04db1bcc0f997a3502da1ff78ca1f57a66", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e215e60ff8e385f04ebedeb10cfdbddcdb2603b3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fsn1995/phdau-article-based", "max_forks_repo_path": "sections/preface.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e215e60ff8e385f04ebedeb10cfdbddcdb2603b3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fsn1995/phdau-article-based", "max_issues_repo_path": "sections/preface.tex", "max_line_length": 89, "max_stars_count": null, "max_stars_repo_head_hexsha": "e215e60ff8e385f04ebedeb10cfdbddcdb2603b3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fsn1995/phdau-article-based", "max_stars_repo_path": "sections/preface.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 228, "size": 1017 }
\chapter{Reference cells} \label{sec:referencecells} \input{chapters/referencecells_common}
{ "alphanum_fraction": 0.8279569892, "avg_line_length": 18.6, "ext": "tex", "hexsha": "a87eab723ab7c05794116a1afe58979d31fe1789", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "083251420eb934d860c99dcf1eb07ae5b8ba7e8c", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "szmurlor/fiver", "max_forks_repo_path": "src/ufc-2.0.5/doc/manual/chapters/referencecells.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "083251420eb934d860c99dcf1eb07ae5b8ba7e8c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "szmurlor/fiver", "max_issues_repo_path": "src/ufc-2.0.5/doc/manual/chapters/referencecells.tex", "max_line_length": 38, "max_stars_count": null, "max_stars_repo_head_hexsha": "083251420eb934d860c99dcf1eb07ae5b8ba7e8c", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "szmurlor/fiver", "max_stars_repo_path": "src/ufc-2.0.5/doc/manual/chapters/referencecells.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 23, "size": 93 }
\chapter{Apache Configuration Directives\label{directives}}

\section{Request Handlers\label{dir-handlers}}

\subsection{Python*Handler Directive Syntax\label{dir-handlers-syn}}
\index{Python*Handler Syntax}

All request handler directives have the following syntax:

\code{Python*Handler \emph{handler [handler ...] [ | .ext [.ext ...] ] } }

Where \var{handler} is a callable object that accepts a single argument, the
request object, and \var{.ext} is a file extension.

Multiple handlers can be specified on a single line, in which case they will
be called sequentially, from left to right. The same handler directive can
also be specified multiple times, with the same result: all handlers listed
will be executed sequentially, from first to last.

If any handler in the sequence returns a value other than \code{apache.OK} or
\code{apache.DECLINED}, then execution of all subsequent handlers for that
phase is aborted. What happens when either \code{apache.OK} or
\code{apache.DECLINED} is returned is dependent on which phase is executing.

Note that prior to mod_python 3.3, if any handler in the sequence, no matter
which phase was executing, returned a value other than \code{apache.OK}, then
execution of all subsequent handlers for that phase was aborted.

The list of handlers can optionally be followed by a \code{|} followed by one
or more file extensions. This would restrict the execution of the handler to
those file extensions only. This feature only works for handlers executed
after the trans phase.

A \emph{handler} has the following syntax:

\code{module[::object]}

Where \var{module} can be a full module name (package dot notation is
accepted) or an actual path to a module code file. The module is loaded using
the mod_python module importer as implemented by the
\code{apache.import_module()} function. Reference should be made to the
documentation of that function for further details of how module importing is
managed.

The optional \var{object} is the name of an object inside the module. Object
can also contain dots, in which case it will be resolved from left to right.
During resolution, if mod_python encounters an object of type \code{<class>},
it will try instantiating it passing it a single argument, a request object.

If no object is specified, then it will default to the name of the directive,
all lower case, with the word \samp{python} removed. E.g. the default object
for PythonAuthenHandler would be authenhandler.

Example:

\begin{verbatim}
PythonAuthzHandler mypackage.mymodule::checkallowed
\end{verbatim}

For more information on handlers, see Overview of a Handler.

Side note: The \samp{::} was chosen for performance reasons. In order for
Python to use objects inside modules, the modules first need to be imported.
Having the separator as simply a \samp{.} would considerably complicate the
process of sequentially evaluating every word to determine whether it is a
package, module, class etc. Using the (admittedly un-Python-like) \samp{::}
takes the time consuming work of figuring out where the module part ends and
the object inside of it begins away from mod_python, resulting in a modest
performance gain.
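To make the resolution concrete, the \code{mypackage.mymodule::checkallowed}
example above could be backed by a module along the following lines (the
package, module, function and user names are placeholders used only for this
sketch):

\begin{verbatim}
# mypackage/mymodule.py -- hypothetical module for the directive above
from mod_python import apache

def checkallowed(req):
    # any callable that accepts the request object can serve as a handler
    if req.user == "admin":
        return apache.OK
    return apache.HTTP_FORBIDDEN
\end{verbatim}

Because the function name differs from the default object name for that
directive (\samp{authzhandler}), it has to be given explicitly after the
\samp{::} separator.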
\subsection{PythonPostReadRequestHandler\label{dir-handlers-prrh}}
\index{PythonPostReadRequestHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This handler is called after the request has been read but before any other
phases have been processed. This is useful to make decisions based upon the
input header fields.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.OK} or \code{apache.DECLINED}, then execution
of all subsequent handlers for this phase is aborted.

\begin{notice}
  When this phase of the request is processed, the URI has not yet been
  translated into a path name, therefore this directive could never be
  executed by Apache if it were specified within \code{<Directory>},
  \code{<Location>}, \code{<Files>} directives or in an \file{.htaccess}
  file. The only place this directive is allowed is the main configuration
  file, and the code for it will execute in the main interpreter. And because
  this phase happens before any identification of the type of content being
  requested is done (i.e. is this a Python program or a gif?), the Python
  routine specified with this handler will be called for \emph{ALL} requests
  on this server (not just Python programs), which is an important
  consideration if performance is a priority.
\end{notice}

\indexii{phase}{order}

The handlers below are documented in the order in which the phases are
processed by Apache.

\subsection{PythonTransHandler\label{dir-handlers-th}}
\index{PythonTransHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This handler provides an opportunity to translate the URI into an actual
filename, before the server's default rules (Alias directives and the like)
are followed.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.DECLINED}, then execution of all subsequent
handlers for this phase is aborted.

\begin{notice}
  At the time when this phase of the request is being processed, the URI has
  not been translated into a path name, therefore this directive will never
  be executed by Apache if specified within \code{<Directory>},
  \code{<Location>}, \code{<Files>} directives or in an \file{.htaccess}
  file. The only place this can be specified is the main configuration file,
  and the code for it will execute in the main interpreter.
\end{notice}

\subsection{PythonHeaderParserHandler\label{dir-handlers-hph}}
\index{PythonHeaderParserHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This handler is called to give the module a chance to look at the request
headers and take any appropriate specific actions early in the processing
sequence.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.OK} or \code{apache.DECLINED}, then execution
of all subsequent handlers for this phase is aborted.

\subsection{PythonInitHandler\label{dir-handlers-pih}}
\index{PythonInitHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This handler is the first handler called in the request processing phases
that is allowed both inside and outside \file{.htaccess} files and
\code{<Directory>} sections.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.OK} or \code{apache.DECLINED}, then execution
of all subsequent handlers for this phase is aborted.

This handler is actually an alias to two different handlers. When specified
in the main config file outside any directory tags, it is an alias to
\code{PostReadRequestHandler}. When specified inside a directory context
(where \code{PostReadRequestHandler} is not allowed), it aliases to
\code{PythonHeaderParserHandler}.

\emph{(This idea was borrowed from mod_perl)}

\subsection{PythonAccessHandler\label{dir-handlers-ach}}
\index{PythonAccessHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This routine is called to check for any module-specific restrictions placed
upon the requested resource.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.OK} or \code{apache.DECLINED}, then execution
of all subsequent handlers for this phase is aborted.

For example, this can be used to restrict access by IP number. To do so, you
would return \code{HTTP_FORBIDDEN} or a similar error status to indicate that
access is not allowed.
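As a sketch of the IP-based restriction mentioned above (the network prefix is
an arbitrary example), an access handler might look like this:

\begin{verbatim}
from mod_python import apache

def accesshandler(req):
    # reject any request that does not originate from the local network
    if not req.connection.remote_ip.startswith("192.168.1."):
        return apache.HTTP_FORBIDDEN
    return apache.OK
\end{verbatim}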
\subsection{PythonAuthenHandler\label{dir-handlers-auh}}
\index{PythonAuthenHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This routine is called to check the authentication information sent with the
request (such as looking up the user in a database and verifying that the
[encrypted] password sent matches the one in the database).

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.DECLINED}, then execution of all subsequent
handlers for this phase is aborted.

To obtain the username, use \code{req.user}. To obtain the password entered
by the user, use the \code{req.get_basic_auth_pw()} function.

A return of \code{apache.OK} means the authentication succeeded. A return of
\code{apache.HTTP_UNAUTHORIZED} with most browsers will bring up the password
dialog box again. A return of \code{apache.HTTP_FORBIDDEN} will usually show
the error on the browser and not bring up the password dialog again.
\code{HTTP_FORBIDDEN} should be used when authentication succeeded, but the
user is not permitted to access a particular URL.

An example authentication handler might look like this:

\begin{verbatim}
def authenhandler(req):

    pw = req.get_basic_auth_pw()
    user = req.user
    if user == "spam" and pw == "eggs":
        return apache.OK
    else:
        return apache.HTTP_UNAUTHORIZED
\end{verbatim}

\begin{notice}
  \code{req.get_basic_auth_pw()} must be called prior to using the
  \code{req.user} value. Apache makes no attempt to decode the authentication
  information unless \code{req.get_basic_auth_pw()} is called.
\end{notice}

\subsection{PythonAuthzHandler\label{dir-handlers-auzh}}
\index{PythonAuthzHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This handler runs after AuthenHandler and is intended for checking whether a
user is allowed to access a particular resource. But more often than not it
is done right in the AuthenHandler. Where multiple handlers are specified, if
any handler in the sequence returns a value other than \code{apache.DECLINED},
then execution of all subsequent handlers for this phase is aborted.
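A minimal authorization handler might look like the sketch below; it assumes
an AuthenHandler has already populated \code{req.user} and simply checks the
name against a hard-coded list (the user names are placeholders for this
example):

\begin{verbatim}
from mod_python import apache

def authzhandler(req):
    # req.user was set during the authentication phase
    if req.user in ("spam", "eggs"):
        return apache.OK
    return apache.HTTP_FORBIDDEN
\end{verbatim}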
\subsection{PythonTypeHandler\label{dir-handlers-tph}}
\index{PythonTypeHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This routine is called to determine and/or set the various document type
information bits, like Content-type (via \code{r->content_type}), language,
et cetera.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.DECLINED}, then execution of all subsequent
handlers for this phase is aborted.

\subsection{PythonFixupHandler\label{dir-handlers-fuh}}
\index{PythonFixupHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This routine is called to perform any module-specific fixing of header
fields, et cetera. It is invoked just before any content-handler.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.OK} or \code{apache.DECLINED}, then execution
of all subsequent handlers for this phase is aborted.

\subsection{PythonHandler\label{dir-handlers-ph}}
\index{PythonHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This is the main request handler. Many applications will only provide this
one handler.

Where multiple handlers are specified, if any handler in the sequence returns
a status value other than \code{apache.OK} or \code{apache.DECLINED}, then
execution of subsequent handlers for the phase is skipped and the return
status becomes that for the whole content handler phase. If all handlers are
run, the return status of the final handler becomes the return status of the
whole content handler phase. Where that final status is
\code{apache.DECLINED}, Apache will fall back to using the
\code{default-handler} and attempt to serve up the target as a static file.
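A minimal content handler, referenced for example with \code{PythonHandler
mymodule} (where \code{mymodule} is a placeholder module name), might look
like this:

\begin{verbatim}
from mod_python import apache

def handler(req):
    # send a short plain-text response and report success to Apache
    req.content_type = "text/plain"
    req.write("Hello World!")
    return apache.OK
\end{verbatim}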
\subsection{PythonLogHandler\label{dir-handlers-plh}}
\index{PythonLogHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This routine is called to perform any module-specific logging activities.

Where multiple handlers are specified, if any handler in the sequence returns
a value other than \code{apache.OK} or \code{apache.DECLINED}, then execution
of all subsequent handlers for this phase is aborted.

\subsection{PythonCleanupHandler\label{dir-handlers-pch}}
\index{PythonCleanupHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
\emph{Python*Handler Syntax}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

This is the very last handler, called just before the request object is
destroyed by Apache.

Unlike all the other handlers, the return value of this handler is ignored.
Any errors will be logged to the error log, but will not be sent to the
client, even if PythonDebug is On.

This handler is not a valid argument to the \code{req.add_handler()}
function. For dynamic cleanup registration, use
\code{req.register_cleanup()}.

Once cleanups have started, it is not possible to register more of them.
Therefore, \code{req.register_cleanup()} has no effect within this handler.

Cleanups registered with this directive will execute \emph{after} cleanups
registered with \code{req.register_cleanup()}.

\section{Filters\label{dir-filter}}

\subsection{PythonInputFilter\label{dir-filter-if}}
\index{PythonInputFilter}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonInputFilter handler name\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Registers an input filter \var{handler} under name \var{name}. \var{Handler}
is a module name optionally followed by \code{::} and a callable object name.
If the callable object name is omitted, it will default to
\samp{inputfilter}. \var{Name} is the name under which the filter is
registered; by convention, filter names are usually in all caps.

The \var{module} referred to by the handler can be a full module name
(package dot notation is accepted) or an actual path to a module code file.
The module is loaded using the mod_python module importer as implemented by
the \code{apache.import_module()} function. Reference should be made to the
documentation of that function for further details of how module importing is
managed.

To activate the filter, use the \code{AddInputFilter} directive.
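As an illustration, a filter registered with \code{PythonInputFilter myfilter
UPPERCASE} and activated with \code{AddInputFilter UPPERCASE .txt} (the
module, filter and extension names are examples only) might be implemented
along these lines; it reads whatever data is available, transforms it, passes
it on, and closes the filter once the end of the stream is reached:

\begin{verbatim}
from mod_python import apache

def inputfilter(filter):
    # pass the request body through, upper-cased
    s = filter.read()
    while s:
        filter.write(s.upper())
        s = filter.read()
    if s is None:
        # end of stream reached
        filter.close()
\end{verbatim}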
\subsection{PythonOutputFilter\label{dir-filter-of}}
\index{PythonOutputFilter}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonOutputFilter handler name\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Registers an output filter \var{handler} under name \var{name}. \var{Handler}
is a module name optionally followed by \code{::} and a callable object name.
If the callable object name is omitted, it will default to
\samp{outputfilter}. \var{Name} is the name under which the filter is
registered; by convention, filter names are usually in all caps.

The \var{module} referred to by the handler can be a full module name
(package dot notation is accepted) or an actual path to a module code file.
The module is loaded using the mod_python module importer as implemented by
the \code{apache.import_module()} function. Reference should be made to the
documentation of that function for further details of how module importing is
managed.

To activate the filter, use the \code{AddOutputFilter} directive.

\section{Connection Handler\label{dir-conn}}

\subsection{PythonConnectionHandler\label{dir-conn-ch}}
\index{PythonConnectionHandler}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonConnectionHandler handler\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Specifies that the connection should be handled by the \var{handler}
connection handler. \var{Handler} will be passed a single argument - the
connection object.

\var{Handler} is a module name optionally followed by \code{::} and a
callable object name. If the callable object name is omitted, it will default
to \samp{connectionhandler}.

The \var{module} can be a full module name (package dot notation is accepted)
or an absolute path to a module code file. The module is loaded using the
mod_python module importer as implemented by the
\code{apache.import_module()} function. Reference should be made to the
documentation of that function for further details of how module importing is
managed.

\section{Other Directives\label{dir-other}}

\subsection{PythonEnablePdb\label{dir-other-epd}}
\index{PythonEnablePdb}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonEnablePdb \{On, Off\} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Default]{Default:}
PythonEnablePdb Off\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

When On, mod_python will execute the handler functions within the Python
debugger pdb using the \code{pdb.runcall()} function.

Because pdb is an interactive tool, start httpd from the command line with
the -DONE_PROCESS option when using this directive. As soon as your handler
code is entered, you will see a Pdb prompt allowing you to step through the
code and examine variables.
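A typical development setup might therefore be (shown only as a sketch; paths
and startup options vary between installations):

\begin{verbatim}
# httpd.conf -- development only
PythonEnablePdb On
\end{verbatim}

with the server then started in the foreground as a single process, e.g.
\code{httpd -DONE_PROCESS}, so that the pdb prompt is attached to your
terminal.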
\subsection{PythonDebug\label{dir-other-pd}}
\index{PythonDebug}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonDebug \{On, Off\} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Default]{Default:}
PythonDebug Off\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Normally, the traceback output resulting from uncaught Python errors is sent
to the error log. With the PythonDebug On directive specified, the output
will be sent to the client (as well as the log), except when the error is
\exception{IOError} while writing, in which case it will go to the error log.

This directive is very useful during the development process. It is
recommended that you do not use it in a production environment as it may
reveal to the client unintended, possibly sensitive security information.

\subsection{PythonImport\label{dir-other-pimp}}
\index{PythonImport}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonImport \emph{module} \emph{interpreter_name}\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Tells the server to import the Python module \emph{module} at process startup
under the specified interpreter name. The import takes place at child process
initialization, so the module will actually be imported once for every child
process spawned.

The \var{module} can be a full module name (package dot notation is accepted)
or an absolute path to a module code file. The module is loaded using the
mod_python module importer as implemented by the
\code{apache.import_module()} function. Reference should be made to the
documentation of that function for further details of how module importing is
managed.

The \code{PythonImport} directive is useful for initialization tasks that
could be time consuming and should not be done at the time of processing a
request, e.g. initializing a database connection. Where such initialization
code could fail and cause the importing of the module to fail, it should be
placed in its own function and the alternate syntax used:

\code{PythonImport \emph{module::function} \emph{interpreter_name}}

The named function will be called only after the module has been imported
successfully. The function will be called with no arguments.

\begin{notice}
  At the time when the import takes place, the configuration is not
  completely read yet, so all other directives, including PythonInterpreter,
  have no effect on the behavior of modules imported by this directive.
  Because of this limitation, the interpreter must be specified explicitly,
  and must match the name under which subsequent requests relying on this
  operation will execute. If you are not sure under what interpreter name a
  request is running, examine the \member{interpreter} member of the request
  object.
\end{notice}

See also Multiple Interpreters.
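As an illustration of the \code{module::function} form described above,
per-child initialization could be arranged with something along these lines
(the module, function and interpreter names are placeholders chosen for this
sketch):

\begin{verbatim}
PythonImport myapp.startup::init main_interpreter
\end{verbatim}

\begin{verbatim}
# myapp/startup.py -- hypothetical startup module
import time

started_at = None

def init():
    # called once per child process, with no arguments; any expensive
    # setup (e.g. opening a database connection) would go here
    global started_at
    started_at = time.time()
\end{verbatim}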
\subsection{PythonInterpPerDirectory\label{dir-other-ipd}}
\index{PythonInterpPerDirectory}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonInterpPerDirectory \{On, Off\} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Default]{Default:}
PythonInterpPerDirectory Off\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Instructs mod_python to name subinterpreters using the directory of the file
in the request (\code{req.filename}) rather than the server name. This means
that scripts in different directories will execute in different
subinterpreters as opposed to the default policy where scripts in the same
virtual server execute in the same subinterpreter, even if they are in
different directories.

For example, assume there is a \file{/directory/subdirectory}.
\file{/directory} has an \file{.htaccess} file with a PythonHandler
directive. \file{/directory/subdirectory} doesn't have an \file{.htaccess}.
By default, scripts in \file{/directory} and \file{/directory/subdirectory}
would execute in the same interpreter assuming both directories are accessed
via the same virtual server. With PythonInterpPerDirectory, there would be
two different interpreters, one for each directory.

\begin{notice}
  In early phases of the request prior to the URI translation
  (PostReadRequestHandler and TransHandler) the path is not yet known because
  the URI has not been translated. During those phases and with
  PythonInterpPerDirectory on, all Python code gets executed in the main
  interpreter. This may not be exactly what you want, but unfortunately there
  is no way around this.
\end{notice}

\begin{seealso}
  \seetitle[pyapi-interps.html]{Section \ref{pyapi-interps} Multiple
    Interpreters} {for more information}
\end{seealso}

\subsection{PythonInterpPerDirective\label{dir-other-ipdv}}
\index{PythonInterpPerDirective}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonInterpPerDirective \{On, Off\} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Default]{Default:}
PythonInterpPerDirective Off\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Instructs mod_python to name subinterpreters using the directory in which the
Python*Handler directive currently in effect was encountered.

For example, assume there is a \file{/directory/subdirectory}.
\file{/directory} has an \file{.htaccess} file with a PythonHandler
directive. \file{/directory/subdirectory} has another \file{.htaccess} file
with another PythonHandler.

By default, scripts in \file{/directory} and \file{/directory/subdirectory}
would execute in the same interpreter assuming both directories are in the
same virtual server. With PythonInterpPerDirective, there would be two
different interpreters, one for each directive.
\begin{seealso}
  \seetitle[pyapi-interps.html]{Section \ref{pyapi-interps} Multiple
    Interpreters} {for more information}
\end{seealso}

\subsection{PythonInterpreter\label{dir-other-pi}}
\index{PythonInterpreter}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonInterpreter name \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Forces mod_python to use the interpreter named \emph{name}, overriding the
default behaviour or the behaviour dictated by the
\citetitle[dir-other-ipd.html]{\code{PythonInterpPerDirectory}} or
\citetitle[dir-other-ipdv.html]{\code{PythonInterpPerDirective}} directive.

This directive can be used to force execution that would normally occur in
different subinterpreters to run in the same one. When specified in the
DocumentRoot, it forces the whole server to run in one subinterpreter.

\begin{seealso}
  \seetitle[pyapi-interps.html]{Section \ref{pyapi-interps} Multiple
    Interpreters} {for more information}
\end{seealso}

\subsection{PythonHandlerModule\label{dir-other-phm}}
\index{PythonHandlerModule}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonHandlerModule module \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

PythonHandlerModule can be used as an alternative to Python*Handler
directives. The module specified in this directive will be searched for
functions matching the default handler function names, and if a function is
found, it will be executed.

For example, instead of:

\begin{verbatim}
PythonAuthenHandler mymodule
PythonHandler mymodule
PythonLogHandler mymodule
\end{verbatim}

one can simply say

\begin{verbatim}
PythonHandlerModule mymodule
\end{verbatim}

\subsection{PythonAutoReload\label{dir-other-par}}
\index{PythonAutoReload}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonAutoReload \{On, Off\} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Default]{Default:}
PythonAutoReload On\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

If set to Off, instructs mod_python not to check the modification date of the
module file.

By default, mod_python checks the time-stamp of the file and reloads the
module if the module's file modification date is later than the last import
or reload. This way changed modules get automatically reimported, eliminating
the need to restart the server for every change.

Disabling autoreload is useful in a production environment where the modules
do not change; it will save some processing time and give a small performance
gain.
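In a production configuration where the modules are known not to change, one
would therefore typically set (shown only as an example):

\begin{verbatim}
PythonAutoReload Off
\end{verbatim}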
\subsection{PythonOptimize\label{dir-other-pomz}}
\index{PythonOptimize}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonOptimize \{On, Off\} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Default]{Default:}
PythonOptimize Off\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Enables Python optimization. Same as the Python \programopt{-O} option.

\subsection{PythonOption\label{dir-other-po}}
\index{PythonOption}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonOption key [value] \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

Assigns a key/value pair to a table that can later be retrieved by the
\code{req.get_options()} function. This is useful to pass information between
the Apache configuration files (\file{httpd.conf}, \file{.htaccess}, etc) and
the Python programs. If the value is omitted or empty (\code{""}), then the
key is removed from the local configuration.

\strong{Reserved PythonOption Keywords}

Some PythonOption keywords are used for configuring various aspects of
mod_python. Any keyword starting with mod_python.* should be considered
reserved for internal mod_python use. Users are encouraged to use their own
namespace qualifiers when creating add-on modules, and not pollute the global
namespace.

The following PythonOption keys are currently used by mod_python.

% Note - Make sure you put a space character in any empty table cells.
% Otherwise the formatting will be messed up.
\begin{tableiii}{l|c|l}{textrm}{Key}{Required Value}{Notes}
  \lineiii{mod_python.legacy.importer}{*}{Enables the obsolete importer.}
  \lineiii{mod_python.mutex_directory}{ }{ }
  \lineiii{mod_python.mutex_locks}{ }{ }
  \lineiii{mod_python.psp.cache_database_filename}{ }{ }
  \lineiii{mod_python.session.session_type}{ }{ }
  \lineiii{mod_python.session.cookie_name}{ }{ }
  \lineiii{mod_python.session.application_domain}{ }{ }
  \lineiii{mod_python.session.application_path}{ }{ }
  \lineiii{mod_python.session.database_directory}{ }{ }
  \lineiii{mod_python.dbm_session.database_filename}{ }{ }
  \lineiii{mod_python.dbm_session.database_directory}{ }{ }
  \lineiii{mod_python.file_session.enable_fast_cleanup}{ }{ }
  \lineiii{mod_python.file_session.verify_session_timeout}{ }{ }
  \lineiii{mod_python.file_session.cleanup_grace_period}{ }{ }
  \lineiii{mod_python.file_session.cleanup_time_limit}{ }{ }
  \lineiii{mod_python.file_session.database_directory}{ }{ }
  \lineiii{session}{ }{Deprecated in 3.3, use mod_python.session.session_type}
  \lineiii{ApplicationPath}{ }{Deprecated in 3.3, use mod_python.session.application_path}
  \lineiii{session_cookie_name}{ }{Deprecated in 3.3, use mod_python.session.cookie_name}
  \lineiii{session_directory}{ }{Deprecated in 3.3, use mod_python.session.database_directory}
  \lineiii{session_dbm}{ }{Deprecated in 3.3, use mod_python.dbm_session.database_filename}
  \lineiii{session_cleanup_time_limit}{ }{Deprecated in 3.3, use mod_python.file_session.cleanup_time_limit}
  \lineiii{session_fast_cleanup}{ }{Deprecated in 3.3, use mod_python.file_session.enable_fast_cleanup}
  \lineiii{session_grace_period}{ }{Deprecated in 3.3, use mod_python.file_session.cleanup_grace_period}
  \lineiii{session_verify_cleanup}{ }{Deprecated in 3.3, use mod_python.file_session.verify_session_timeout}
  \lineiii{PSPDbmCache}{ }{Deprecated in 3.3, use mod_python.psp.cache_database_filename}
\end{tableiii}

\subsection{PythonPath\label{dir-other-pp}}
\index{PythonPath}

\strong{\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Syntax]{Syntax:}}
PythonPath \emph{path} \\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Context]{Context:}
server config, virtual host, directory, htaccess\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Override]{Override:}
not None\\
\citetitle[http://httpd.apache.org/docs-2.0/mod/directive-dict.html#Module]{Module:}
mod_python.c

The PythonPath directive sets the Python module search path
(\code{sys.path}). The path must be specified in Python list notation, e.g.

\begin{verbatim}
PythonPath "['/usr/local/lib/python2.0', '/usr/local/lib/site_python', '/some/other/place']"
\end{verbatim}

The path specified in this directive will replace the path, not add to it.
However, because the value of the directive is evaled, to append a directory
to the path, one can specify something like

\begin{verbatim}
PythonPath "sys.path+['/mydir']"
\end{verbatim}

Mod_python tries to minimize the number of evals associated with the
PythonPath directive because evals are slow and can negatively impact
performance, especially when the directive is specified in an
\file{.htaccess} file, which gets parsed at every hit. Mod_python will
remember the arguments to the PythonPath directive in the un-evaled form, and
before evaling the value it will compare it to the remembered value. If the
value is the same, no action is taken. Because of this, you should not rely
on the directive as a way to restore the Python path to some value if your
code changes it.
\begin{notice} This directive should not be used as a security measure since the Python path is easily manipulated from within the scripts. \end{notice}
{ "alphanum_fraction": 0.7851826705, "avg_line_length": 44.9448356808, "ext": "tex", "hexsha": "622a733693c9749b6c8555fe3fe7f6e9f8da8848", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-11-06T11:02:20.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-04T06:31:32.000Z", "max_forks_repo_head_hexsha": "77a69659b476f4b18399187fe0b1e77d0b069107", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "yinsen/apache-text", "max_forks_repo_path": "modules/mod_python-3.3.1/Doc/modpython5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "77a69659b476f4b18399187fe0b1e77d0b069107", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "yinsen/apache-text", "max_issues_repo_path": "modules/mod_python-3.3.1/Doc/modpython5.tex", "max_line_length": 109, "max_stars_count": null, "max_stars_repo_head_hexsha": "77a69659b476f4b18399187fe0b1e77d0b069107", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "yinsen/apache-text", "max_stars_repo_path": "modules/mod_python-3.3.1/Doc/modpython5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9916, "size": 38293 }
\pagestyle{myheadings} \setcounter{page}{1} \setcounter{footnote}{0} \section{~Configuration of Input Files} \label{app:config} \newcounters This section contains the input files in the traditional *.inp format or new namelist *.nml format for each program. These can also be found in the {\file model/inp} or {\file model/nml} folders respectively. \vssub \subsection{~ww3\_grid} \vsssub \subsubsection{~ww3\_grid.inp} \label{sec:config011} \inpfile{ww3_grid.tex} \vsssub \vsssub \subsubsection{~ww3\_grid.nml} \label{sec:config012} \nmlfile{ww3_grid.tel} \vsssub \vssub \vssub \subsection{~ww3\_strt} \vsssub \subsubsection{~ww3\_strt.inp} \label{sec:config021} \inpfile{ww3_strt.tex} \vsssub \vssub \vssub \subsection{~ww3\_bound} \vsssub \subsubsection{~ww3\_bound.inp} \label{sec:config031} \inpfile{ww3_bound.tex} \vsssub \vssub \vssub \subsection{~ww3\_bounc} \vsssub \subsubsection{~ww3\_bounc.inp} \label{sec:config041} \inpfile{ww3_bounc.tex} \vsssub \vsssub \subsubsection{~ww3\_bounc.nml} \label{sec:config042} \nmlfile{ww3_bounc.tel} \vsssub \vssub \vssub \subsection{~ww3\_prep} \vsssub \subsubsection{~ww3\_prep.inp} \label{sec:config051} \inpfile{ww3_prep.tex} \vsssub \vssub \vssub \subsection{~ww3\_prnc} \vsssub \subsubsection{~ww3\_prnc.inp} \label{sec:config061} \inpfile{ww3_prnc.tex} \vsssub \vsssub \subsubsection{~ww3\_prnc.nml} \label{sec:config062} \nmlfile{ww3_prnc.tel} \vsssub \vssub \vssub \subsection{~ww3\_prtide} \vsssub \subsubsection{~ww3\_prtide.inp} \label{sec:config071} \inpfile{ww3_prtide.tex} \vsssub \vssub \vssub \subsection{~ww3\_shel} \vsssub \subsubsection{~ww3\_shel.inp} \label{sec:config081} \inpfile{ww3_shel.tex} \vsssub \vsssub \subsubsection{~ww3\_shel.nml} \label{sec:config082} \nmlfile{ww3_shel.tel} \vsssub \vssub \vssub \subsection{~ww3\_gspl} \vsssub \subsubsection{~ww3\_gspl.inp} \label{sec:config091} \inpfile{ww3_gspl.tex} \vsssub \vssub \vssub \subsection{~ww3\_multi} \vsssub \subsubsection{~ww3\_multi.inp} \label{sec:config101} \inpfile{ww3_multi.tex} \vsssub \vsssub \subsubsection{~ww3\_multi.nml} \label{sec:config102} \nmlfile{ww3_multi.tel} \vsssub \vssub \vssub \subsection{~ww3\_gint} \vsssub \subsubsection{~ww3\_gint.inp} \label{sec:config111} \inpfile{ww3_gint.tex} \vsssub \vssub \vssub \subsection{~ww3\_outf} \vsssub \subsubsection{~ww3\_outf.inp} \label{sec:config121} \inpfile{ww3_outf.tex} \vsssub \vssub \vssub \subsection{~ww3\_ounf} \vsssub \subsubsection{~ww3\_ounf.inp} \label{sec:config131} \inpfile{ww3_ounf.tex} \vsssub \vsssub \subsubsection{~ww3\_ounf.nml} \label{sec:config132} \nmlfile{ww3_ounf.tel} \vsssub \vssub \vssub \subsection{~gx\_outf} \vsssub \subsubsection{~gx\_outf.inp} \label{sec:config141} \inpfile{gx_outf.tex} \vsssub \vssub \vssub \subsection{~ww3\_grib} \vsssub \subsubsection{~ww3\_grib.inp} \label{sec:config151} \inpfile{ww3_grib.tex} \vsssub \vssub \vssub \subsection{~ww3\_outp} \vsssub \subsubsection{~ww3\_outp.inp} \label{sec:config161} \inpfile{ww3_outp.tex} \vsssub \vssub \vssub \subsection{~ww3\_ounp} \vsssub \subsubsection{~ww3\_ounp.inp} \label{sec:config171} \inpfile{ww3_ounp.tex} \vsssub \vsssub \subsubsection{~ww3\_ounp.nml} \label{sec:config172} \nmlfile{ww3_ounp.tel} \vsssub \vssub \vssub \subsection{~gx\_outp} \vsssub \subsubsection{~gx\_outp.inp} \label{sec:config181} \inpfile{gx_outp.tex} \vsssub \vssub \vssub \subsection{~ww3\_trck} \vsssub \subsubsection{~ww3\_trck.inp} \label{sec:config191} \inpfile{ww3_trck.tex} \vsssub \vssub \vssub \subsection{~ww3\_trnc} \vsssub \subsubsection{~ww3\_trnc.inp} 
\label{sec:config201} \inpfile{ww3_trnc.tex} \vsssub \vsssub \subsubsection{~ww3\_trnc.nml} \label{sec:config202} \nmlfile{ww3_trnc.tel} \vsssub \vssub \vssub \subsection{~ww3\_systrk} \vsssub \subsubsection{~ww3\_systrk.inp} \label{sec:config211} \inpfile{ww3_systrk.tex} \vsssub \vssub \vssub \subsection{~ww3\_uprstr} \vsssub \subsubsection{~ww3\_uprstr.inp} \label{sec:config221} \inpfile{ww3_uprstr.tex} \vsssub \vssub \bpage \pagestyle{empty}
{ "alphanum_fraction": 0.7560493827, "avg_line_length": 16.5983606557, "ext": "tex", "hexsha": "71cbf6a8a4987db40ca1cf6748201f58e3f56ff5", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-01T09:29:46.000Z", "max_forks_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_forks_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_forks_repo_name": "minsukji/ci-debug", "max_forks_repo_path": "WW3/manual/app/config.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_issues_repo_issues_event_max_datetime": "2021-06-04T14:17:45.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-31T15:49:26.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_issues_repo_name": "minsukji/ci-debug", "max_issues_repo_path": "WW3/manual/app/config.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e8bbbe6652b702b61d2896612f6aa8e4aa6c803", "max_stars_repo_licenses": [ "Apache-2.0", "CC0-1.0" ], "max_stars_repo_name": "minsukji/ci-debug", "max_stars_repo_path": "WW3/manual/app/config.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1657, "size": 4050 }
% Header, overrides base % Make sure that the sphinx doc style knows who it inherits from. \def\sphinxdocclass{article} % Declare the document class \documentclass[letterpaper,10pt,english]{/usr/share/sphinx/texinputs/sphinxhowto} % Imports \usepackage[utf8]{inputenc} \DeclareUnicodeCharacter{00A0}{\\nobreakspace} \usepackage[T1]{fontenc} \usepackage{babel} \usepackage{times} \usepackage{import} \usepackage[Bjarne]{/usr/share/sphinx/texinputs/fncychap} \usepackage{longtable} \usepackage{/usr/share/sphinx/texinputs/sphinx} \usepackage{multirow} \usepackage{amsmath} \usepackage{amssymb} \usepackage{ucs} \usepackage{enumerate} % Used to make the Input/Output rules follow around the contents. \usepackage{needspace} % Pygments requirements \usepackage{fancyvrb} \usepackage{color} % ansi colors additions \definecolor{darkgreen}{rgb}{.12,.54,.11} \definecolor{lightgray}{gray}{.95} \definecolor{brown}{rgb}{0.54,0.27,0.07} \definecolor{purple}{rgb}{0.5,0.0,0.5} \definecolor{darkgray}{gray}{0.25} \definecolor{lightred}{rgb}{1.0,0.39,0.28} \definecolor{lightgreen}{rgb}{0.48,0.99,0.0} \definecolor{lightblue}{rgb}{0.53,0.81,0.92} \definecolor{lightpurple}{rgb}{0.87,0.63,0.87} \definecolor{lightcyan}{rgb}{0.5,1.0,0.83} % Needed to box output/input \usepackage{tikz} \usetikzlibrary{calc,arrows,shadows} \usepackage[framemethod=tikz]{mdframed} \usepackage{alltt} % Used to load and display graphics \usepackage{graphicx} \graphicspath{ {figs/} } \usepackage[Export]{adjustbox} % To resize % used so that images for notebooks which have spaces in the name can still be included \usepackage{grffile} % For formatting output while also word wrapping. \usepackage{listings} \lstset{breaklines=true} \lstset{basicstyle=\small\ttfamily} \def\smaller{\fontsize{9.5pt}{9.5pt}\selectfont} %Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname 
PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother %Set pygments styles if needed... \definecolor{nbframe-border}{rgb}{0.867,0.867,0.867} \definecolor{nbframe-bg}{rgb}{0.969,0.969,0.969} \definecolor{nbframe-in-prompt}{rgb}{0.0,0.0,0.502} \definecolor{nbframe-out-prompt}{rgb}{0.545,0.0,0.0} \newenvironment{ColorVerbatim} {\begin{mdframed}[% roundcorner=1.0pt, % backgroundcolor=nbframe-bg, % userdefinedwidth=1\linewidth, % leftmargin=0.1\linewidth, % innerleftmargin=0pt, % innerrightmargin=0pt, % linecolor=nbframe-border, % linewidth=1pt, % usetwoside=false, % everyline=true, % innerlinewidth=3pt, % innerlinecolor=nbframe-bg, % middlelinewidth=1pt, % middlelinecolor=nbframe-bg, % outerlinewidth=0.5pt, % outerlinecolor=nbframe-border, % needspace=0pt ]} {\end{mdframed}} \newenvironment{InvisibleVerbatim} {\begin{mdframed}[leftmargin=0.1\linewidth,innerleftmargin=3pt,innerrightmargin=3pt, userdefinedwidth=1\linewidth, linewidth=0pt, linecolor=white, usetwoside=false]} {\end{mdframed}} \renewenvironment{Verbatim}[1][\unskip] {\begin{alltt}\smaller} {\end{alltt}} % Help prevent overflowing lines due to urls and other hard-to-break % entities. This doesn't catch everything... \sloppy % Document level variables \title{Computational Methods in Biology} \date{January 26, 2015} \release{} \author{Nathan Crock} \renewcommand{\releasename}{} % TODO: Add option for the user to specify a logo for his/her export. \newcommand{\sphinxlogo}{} % Make the index page of the document. \makeindex % Import sphinx document type specifics. 
% Body
% Start of the document
\begin{document}
\maketitle
\section{Assignment 1}\label{computational-methods-in-biology---assignment-1}
\subsection{Problem 1.}\label{probelm-1.}
\subsubsection{(a)}\label{a}
The Nernst Potential is given by:
\[V_S = \frac{RT}{zF}\ln\bigg(\frac{[S]_o}{[S]_i}\bigg)\]
Where $R=8.314$ is the gas constant, $T$ is the temperature in Kelvin, $z$ is the valence charge of the ion, and $F=9.648\times10^4$ is Faraday's constant. For simplicity, I will write a \emph{nernst} function to calculate the potentials for an ion based on its intracellular and extracellular concentrations, its valence charge z, and the temperature of the solution in Celsius.
% Make sure that atleast 4 lines are below the HR
\needspace{4\baselineskip}
\vspace{6pt}
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-in-prompt}In\hspace{4pt}{[}6{]}:\hspace{4pt}}\\*
\vspace{-2.65\baselineskip}
\begin{ColorVerbatim}
\vspace{-0.7\baselineskip}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np} \PY{c}{\PYZsh{} Use the numerics library for math}
\PY{k}{def} \PY{n+nf}{nernst}\PY{p}{(}\PY{n}{intra}\PY{p}{,} \PY{n}{extra}\PY{p}{,} \PY{n}{C}\PY{p}{,} \PY{n}{z}\PY{p}{)}\PY{p}{:}
    \PY{n}{R} \PY{o}{=} \PY{l+m+mf}{8.314462} \PY{c}{\PYZsh{} Gas Constant}
    \PY{n}{F} \PY{o}{=} \PY{l+m+mf}{9.64853399e4} \PY{c}{\PYZsh{} Faraday\PYZsq{}s Constant}
    \PY{n}{T} \PY{o}{=} \PY{l+m+mf}{273.15} \PY{o}{+} \PY{n}{C} \PY{c}{\PYZsh{} Kelvin = 273.15 + Celsius}
    \PY{k}{return} \PY{p}{(}\PY{n}{R}\PY{o}{*}\PY{n}{T}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n}{z}\PY{o}{*}\PY{n}{F}\PY{p}{)}\PY{o}{*}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n+nb}{float}\PY{p}{(}\PY{n}{extra}\PY{p}{)}\PY{o}{/}\PY{n}{intra}\PY{p}{)}\PY{p}{)}
\end{Verbatim}
\vspace{-0.2\baselineskip}
\end{ColorVerbatim}

For Potassium we are given $[K]_i=430$ and $[K]_o=20$. The temperature is $20^o$ Celsius and Potassium has a valence charge of $+1$. Therefore\ldots{}
\hspace{1pt}
% Make sure that atleast 4 lines are below the HR
\needspace{4\baselineskip}
\vspace{6pt}
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-in-prompt}In\hspace{4pt}{[}7{]}:\hspace{4pt}}\\*
\vspace{-2.65\baselineskip}
\begin{ColorVerbatim}
\vspace{-0.7\baselineskip}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{intra} \PY{o}{=} \PY{l+m+mi}{430}
\PY{n}{extra} \PY{o}{=} \PY{l+m+mi}{20}
\PY{n}{C} \PY{o}{=} \PY{l+m+mi}{20}
\PY{n}{z} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{Vk} \PY{o}{=} \PY{n}{nernst}\PY{p}{(}\PY{n}{intra}\PY{p}{,}\PY{n}{extra}\PY{p}{,}\PY{n}{C}\PY{p}{,}\PY{n}{z}\PY{p}{)}\PY{p}{;} \PY{n}{Vk}
\end{Verbatim}
\vspace{-0.2\baselineskip}
\end{ColorVerbatim}

% If the first block is an image, minipage the image. Else
% request a certain amount of space for the input text.
\needspace{4\baselineskip}
% Add document contents.
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-out-prompt}Out\hspace{4pt}{[}7{]}:\hspace{4pt}}\\*
\vspace{-2.55\baselineskip}\begin{InvisibleVerbatim}
\vspace{-0.5\baselineskip}
\hspace{1pt}
\begin{alltt}-0.077504259044191351\end{alltt}
\end{InvisibleVerbatim}

Which is approximately $-77.5$ mV.

For Sodium we are given $[Na]_i=50$ and $[Na]_o=440$. The temperature is $20^o$ Celsius and Sodium has a valence charge of $+1$.
Therefore\ldots{}
\hspace{1pt}
% Make sure that atleast 4 lines are below the HR
\needspace{4\baselineskip}
\vspace{6pt}
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-in-prompt}In\hspace{4pt}{[}8{]}:\hspace{4pt}}\\*
\vspace{-2.65\baselineskip}
\begin{ColorVerbatim}
\vspace{-0.7\baselineskip}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{intra} \PY{o}{=} \PY{l+m+mi}{50}
\PY{n}{extra} \PY{o}{=} \PY{l+m+mi}{440}
\PY{n}{C} \PY{o}{=} \PY{l+m+mi}{20}
\PY{n}{z} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{Vna} \PY{o}{=} \PY{n}{nernst}\PY{p}{(}\PY{n}{intra}\PY{p}{,}\PY{n}{extra}\PY{p}{,}\PY{n}{C}\PY{p}{,}\PY{n}{z}\PY{p}{)}\PY{p}{;} \PY{n}{Vna}
\end{Verbatim}
\vspace{-0.2\baselineskip}
\end{ColorVerbatim}

% If the first block is an image, minipage the image. Else
% request a certain amount of space for the input text.
\needspace{4\baselineskip}
% Add document contents.
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-out-prompt}Out\hspace{4pt}{[}8{]}:\hspace{4pt}}\\*
\vspace{-2.55\baselineskip}\begin{InvisibleVerbatim}
\vspace{-0.5\baselineskip}
\hspace{1pt}
\begin{alltt}0.054937944143186326\end{alltt}
\end{InvisibleVerbatim}

Which is approximately $54.9$ mV.

For Chloride we are given $[Cl]_i=65$ and $[Cl]_o=560$. The temperature is $20^o$ Celsius and Chloride has a valence charge of $-1$. Therefore\ldots{}
\hspace{1pt}
% Make sure that atleast 4 lines are below the HR
\needspace{4\baselineskip}
\vspace{6pt}
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-in-prompt}In\hspace{4pt}{[}9{]}:\hspace{4pt}}\\*
\vspace{-2.65\baselineskip}
\begin{ColorVerbatim}
\vspace{-0.7\baselineskip}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{intra} \PY{o}{=} \PY{l+m+mi}{65}
\PY{n}{extra} \PY{o}{=} \PY{l+m+mi}{560}
\PY{n}{C} \PY{o}{=} \PY{l+m+mi}{20}
\PY{n}{z} \PY{o}{=} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}
\PY{n}{Vcl} \PY{o}{=} \PY{n}{nernst}\PY{p}{(}\PY{n}{intra}\PY{p}{,}\PY{n}{extra}\PY{p}{,}\PY{n}{C}\PY{p}{,}\PY{n}{z}\PY{p}{)}\PY{p}{;} \PY{n}{Vcl}
\end{Verbatim}
\vspace{-0.2\baselineskip}
\end{ColorVerbatim}

% If the first block is an image, minipage the image. Else
% request a certain amount of space for the input text.
\needspace{4\baselineskip}
% Add document contents.
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-out-prompt}Out\hspace{4pt}{[}9{]}:\hspace{4pt}}\\*
\vspace{-2.55\baselineskip}\begin{InvisibleVerbatim}
\vspace{-0.5\baselineskip}
\hspace{1pt}
\begin{alltt}-0.054402340153032386\end{alltt}
\end{InvisibleVerbatim}

Which is approximately $-54.4$ mV.

\subsubsection{(b)}\label{b}
The resting membrane potential is given by the weighted average of Nernst Potentials
\[ V_{rest} = \frac{g_KV_K + g_{Na}V_{Na} + g_{Cl}V_{Cl}}{g_K + g_{Na} + g_{Cl}} \]
Given $g_{Na} = 1$, $g_K = 10$, $g_{Cl} = 3$, and the Nernst Potentials found above, we can calculate the resting potential\ldots{}
\hspace{1pt}
% Make sure that atleast 4 lines are below the HR
\needspace{4\baselineskip}
\vspace{6pt}
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-in-prompt}In\hspace{4pt}{[}10{]}:\hspace{4pt}}\\*
\vspace{-2.65\baselineskip}
\begin{ColorVerbatim}
\vspace{-0.7\baselineskip}
\begin{Verbatim}[commandchars=\\\{\}]
\PY{n}{gna} \PY{o}{=} \PY{l+m+mi}{1}
\PY{n}{gk} \PY{o}{=} \PY{l+m+mi}{10}
\PY{n}{gcl} \PY{o}{=} \PY{l+m+mi}{3}
\PY{n}{Vrest} \PY{o}{=} \PY{p}{(}\PY{n}{gk}\PY{o}{*}\PY{n}{Vk} \PY{o}{+} \PY{n}{gna}\PY{o}{*}\PY{n}{Vna} \PY{o}{+} \PY{n}{gcl}\PY{o}{*}\PY{n}{Vcl}\PY{p}{)} \PY{o}{/} \PY{p}{(}\PY{n}{gk} \PY{o}{+} \PY{n}{gna} \PY{o}{+} \PY{n}{gcl}\PY{p}{)}\PY{p}{;} \PY{n}{Vrest}
\end{Verbatim}
\vspace{-0.2\baselineskip}
\end{ColorVerbatim}

% If the first block is an image, minipage the image. Else
% request a certain amount of space for the input text.
\needspace{4\baselineskip}
% Add document contents.
\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-out-prompt}Out\hspace{4pt}{[}10{]}:\hspace{4pt}}\\*
\vspace{-2.55\baselineskip}\begin{InvisibleVerbatim}
\vspace{-0.5\baselineskip}
\hspace{1pt}
\begin{alltt}-0.063093690482701734\end{alltt}
\end{InvisibleVerbatim}

Which is approximately $-63.1$ mV.

\subsection{Problem 2}\label{problem-2}
\subsubsection{(a)}\label{a}
We will solve the passive membrane equation
\[C\frac{dV}{dt}=-g(V-V_{rev})+I_{app}\]
using the integrating factor technique. First let us divide the equation through by C and rewrite it as follows\ldots{}
\[\frac{dV}{dt}+\frac{1}{\tau}V=Q\]
where we used the fact that the conductance $g=\frac{1}{R}$ and the membrane time constant $\tau=RC$, and to simplify the algebra we let
\[\frac{1}{\tau}V_{rev}+\frac{1}{C}I_{app}=Q\]
We now have the linear ODE in standard form, and by letting $\frac{dV}{dt}=0$ we can easily see the steady state of the system, $V_{\infty}= \tau Q$. Now we apply the integrating factor technique. Multiply both sides through by exp($\int_0^t\frac{1}{\tau}dt$) = $e^{\frac{t}{\tau}}$.
\[e^{\frac{t}{\tau}}\Big(\frac{dV}{dt}+\frac{1}{\tau}V\Big)=e^{\frac{t}{\tau}}Q\]
After distributing the exponential on the left side it can be rewritten as a product rule
\[e^{\frac{t}{\tau}}\frac{dV}{dt}+\frac{1}{\tau}e^{\frac{t}{\tau}}V \Rightarrow \frac{d}{dt}(e^{\frac{t}{\tau}}V)\]
Now if we integrate both sides\ldots{}
\[\int_0^t \frac{d}{ds}(e^{\frac{s}{\tau}}V) ds = Q \int_0^t e^{\frac{s}{\tau}}ds\]
\[e^{\frac{t}{\tau}}V(t)-V_0 = \tau Q (e^{\frac{t}{\tau}}-1)\]
\[V(t)= e^{-\frac{t}{\tau}}V_0+\tau Q(1-e^{-\frac{t}{\tau}})\]
The solution confirms the behavior we would expect. At $t=0$ we see that $V(t)=V_0$. Now as time passes, exp($-\frac{t}{\tau}$) $\rightarrow 0$, which slowly shifts $V(t)$ from the first term $V_0$ (the initial condition) to the second term $\tau Q$, which we already established was the system's steady state.
So with the substitution $\tau Q=V_{\infty}$ and some simple algebra we can rewrite our solution as
\[V(t) = (V_0-V_{\infty})e^{-\frac{t}{\tau}}+V_{\infty}\]
\subsubsection{(b)}\label{b}
The $\tau$ parameter is the time constant for the differential equation. It controls the speed of the approach to steady state. It is the product of the resistance and capacitance of the membrane, $\tau=RC$. If one wanted to vary the growth/decay rate of the voltage, one could do so by changing the resistance or the capacitance.
\subsection{Problem 3}\label{problem-3}
Given the Nernst-Planck equation
\[J = -D\Big(\frac{dC}{dx}+\frac{zCF}{RT}\frac{d\Phi}{dx}\Big)\]
we can derive the Nernst equation by finding the potential when the flux is zero. We set $J=0$ and divide through by $-D$
\[0=\frac{dC}{dx}+\frac{zCF}{RT}\frac{d\Phi}{dx}\]
To put this into a more integral-friendly form we isolate the $\frac{d\Phi}{dx}$ term and group the concentration variable $C$.
\[\frac{d\Phi}{dx}=-\frac{RT}{zF}\frac{\frac{dC}{dx}}{C}\]
Next we integrate through\ldots{}
\[\int_0^x \frac{d\Phi}{ds} ds = -\frac{RT}{zF} \int_0^x \frac{\frac{dC}{ds}}{C} ds \hspace{10pt} \Rightarrow \hspace{10pt} \Phi(x)-\Phi(0) = -\frac{RT}{zF}(\ln(C(x))-\ln(C(0)))\]
Where is $x=0$? The convention is to define $x=0$ to be extracellular. Also, because voltage is defined as the potential of a charge relative to some source of electric field, and the source is not explicitly defined here, we can define $\Phi(0)=0$. The source is irrelevant; we are only interested in the difference of potentials across the membrane. Therefore $\Phi(x)-\Phi(0) = V$.
\[V = -\frac{RT}{zF}(\ln(C(x))-\ln(C(0)))\]
Lastly we use logarithmic properties to combine the natural logs. We move the $-1$ to the exponent of the natural log, flipping the inner fraction, and redefine the concentrations $C(x)=[C_{in}]$ and $C(0)=[C_{out}]$, giving us the Nernst equation\ldots{}
\[V = \frac{RT}{zF}\ln\Big(\frac{[C_{out}]}{[C_{in}]}\Big)\]
\subsection{Problem 4}\label{problem-4}
\subsubsection{(a)}\label{a}
The driving force $V-V_{rev}$ is simply a translation of the voltage up or down. Below we have the original voltage plot provided.
\includegraphics[scale=.5]{vRest.png}
Given $V_K = -70$mV, the plot for $V-V_K$ is a vertical translation upwards by $70$mV.
\includegraphics[scale=.5]{vKforce.png}
This simply means that when the membrane voltage is at rest, $V=-60$mV, the potassium reversal potential wants to drive the voltage down to $-70$mV with a ``driving force'' of $10$mV. Similarly, when the voltage jumps to $V=40$mV the potassium reversal potential wants to drive the voltage way down to $-70$mV with a much stronger ``driving force'' of $110$mV.
Given $V_{Na} = 50$mV, the plot for $V-V_{Na}$ is a vertical translation downwards by $50$mV. We interpret this plot the same as above.
\includegraphics[scale=.5]{vNaforce.png}
\subsubsection{(b)}\label{b}
The three differential equations governing the probability of the ion channels being opened or closed (activated or inactivated) are given by
\[\frac{dn}{dt} = \frac{n_{\infty}(V)-n}{\tau_n(V)} \hspace{25pt} \frac{dm}{dt} = \frac{m_{\infty}(V)-m}{\tau_m(V)} \hspace{25pt} \frac{dh}{dt} = \frac{h_{\infty}(V)-h}{\tau_h(V)}\]
We notice that each equation is a function of voltage and, as evidenced by our initial voltage plot, voltage is held constant for $t \in (0,10)\cup (10,20) \cup (20,30)$. Therefore the steady states and the (variable) time constants each become constant.
This means the nonlinear ODEs become linear on those intervals and we can solve them analytically for piecewise solutions. Because all of the equations have a similar form, we will solve a general expression giving us the solution for all of the equations on all the intervals. Let $x$ be the particular gating variable, then
\[\frac{dx}{dt} = \frac{x_{\infty}(V)-x}{\tau_x(V)}\]
Now we allow our functions of voltage to become constants, $x_{\infty}(V)=x_{\infty}$ and $\tau_x(V)=\tau_x$
\[\frac{dx}{dt} = \frac{x_{\infty}-x}{\tau_x} \hspace{10pt} \Rightarrow \hspace{10pt} \frac{dx}{dt} +\frac{1}{\tau_x}x = Q\]
Where $Q = \frac{x_{\infty}}{\tau_x}$. This is the same standard form used in problem 2a. Using the same technique we arrive at the same solution.
\[x(t) = (x_0-x^*_{\infty})e^{-\frac{t}{\tau_x}}+x^*_{\infty}\]
Where $x^*_{\infty} = \tau_xQ = \tau_x\big(\frac{x_{\infty}}{\tau_x}\big) = x_{\infty}$. This gives us our general solution for all of the gating variables.
\[x(t) = (x_0-x_{\infty})e^{-\frac{t}{\tau_x}}+x_{\infty}\]
For each gating variable, the time constants were provided: $\tau_m=0.5$msec, $\tau_n=5$msec and $\tau_h=5$msec. What remains to be determined are the initial conditions and the steady states for each gating variable. These will be crude approximations to the behavior because, to approximate the continuous behavior over the discontinuous jumps, we'll be using the final values of the previous interval as the initial conditions for its neighboring interval. These change depending on which interval we are solving over.

We will start with $t \in [0,10)$. In this range the voltage is at rest, so the sodium and potassium activation gates are closed, and the sodium inactivation gate is opened, $n_0=0$, $m_0=0$, $h_0=1$. Also, at $-60$mV, the steady states for each gating variable are approximately $n_{\infty}=0$, $m_{\infty}=0$, $h_{\infty}=1$. Substituting these values into each gating variable's equation gives us the following expressions
\[n(t) = (n_0-n_{\infty})e^{-\frac{t}{\tau_n}}+n_{\infty} \hspace{23pt} \Rightarrow \hspace{10pt} n(t) = (0-0)e^{-\frac{t}{5}}+0 \hspace{10pt} = 0\]
\[m(t) = (m_0-m_{\infty})e^{-\frac{t}{\tau_m}}+m_{\infty} \hspace{10pt} \Rightarrow \hspace{10pt} m(t) = (0-0)e^{-\frac{t}{0.5}}+0 \hspace{4pt} = 0\]
\[h(t) = (h_0-h_{\infty})e^{-\frac{t}{\tau_h}}+h_{\infty} \hspace{24pt} \Rightarrow \hspace{10pt} h(t) = (1-1)e^{-\frac{t}{5}}+1 \hspace{11pt} = 1\]
Now we will determine the constants for $t \in [10,20]$. We take the values of the gating variables before the jump to be the initial conditions in this interval. They are $n_0=0$, $m_0=0$, $h_0=1$. At $40$mV, the steady states for each gating variable are approximately $n_{\infty}=1$, $m_{\infty}=1$, $h_{\infty}=0$. Substituting these values into each gating variable's equation gives us the following expressions
\[n(t) = (n_0-n_{\infty})e^{-\frac{t}{\tau_n}}+n_{\infty} \hspace{23pt} \Rightarrow \hspace{10pt} n(t) = (0-1)e^{-\frac{t}{5}}+1 \hspace{10pt} = 1-e^{-\frac{t}{5}}\]
\[m(t) = (m_0-m_{\infty})e^{-\frac{t}{\tau_m}}+m_{\infty} \hspace{10pt} \Rightarrow \hspace{10pt} m(t) = (0-1)e^{-\frac{t}{0.5}}+1 \hspace{4pt} = 1-e^{-\frac{t}{0.5}}\]
\[h(t) = (h_0-h_{\infty})e^{-\frac{t}{\tau_h}}+h_{\infty} \hspace{24pt} \Rightarrow \hspace{10pt} h(t) = (1-0)e^{-\frac{t}{5}}+0 \hspace{11pt} = e^{-\frac{t}{5}}\]
On the last interval, $t \in (20,30]$, again the initial conditions will come from the previous interval and the steady states will depend on the voltage.
The initial conditions are the previous interval's gating variables evaluated at $t=10$. It is at $10$ because when solving the differential equation over the middle interval, the ion channels are just opening at $t=10$. So their behavior from $t=10$ to $t=20$ is what the functions would do from $t=0$ to $t=10$. The initial conditions are, $n_0=1-e^{-2}$, $m_0=1-e^{-20}$, $h_0=e^{-2}$. The steady states are the same as the first interval, $n_{\infty}=0$, $m_{\infty}=0$, $h_{\infty}=1$. Plugging these into our equations gives \[n(t) = (n_0-n_{\infty})e^{-\frac{t}{\tau_n}}+n_{\infty} \hspace{23pt} \Rightarrow \hspace{10pt} n(t) = (1-e^{-4}-0)e^{-\frac{t}{5}}+0 \hspace{14pt} = (1-e^{-4})e^{-\frac{t}{5}}\] \[m(t) = (m_0-m_{\infty})e^{-\frac{t}{\tau_m}}+m_{\infty} \hspace{10pt} \Rightarrow \hspace{10pt} m(t) = (1-e^{-40}-0)e^{-\frac{t}{0.5}}+0 \hspace{4pt} = (1-e^{-40})e^{-\frac{t}{0.5}}\] \[h(t) = (h_0-h_{\infty})e^{-\frac{t}{\tau_h}}+h_{\infty} \hspace{24pt} \Rightarrow \hspace{10pt} h(t) = (e^{-4}-1)e^{-\frac{t}{5}}+1 \hspace{32pt} = (e^{-4}-1)e^{-\frac{t}{5}}+1\] We will visualize this approximate behavior by plotting the pieces of each gating variable's function contiguously. For the gating variable probability we have \includegraphics[scale=0.5]{gv.png} For the conductances, I used the maximal sodium and potassium conductance determined by Hodgkin and Huxley, $\bar g_{Na} = 120$, and $\bar g_K = 36$. This gave the following functions \[g_{Na}(t) = \bar g_{Na}m^3h = 120m^3h\] \[g_K(t) = \bar g_Kn^4 = 36n^4\] The plots for the above functions are shown below. \includegraphics[scale=0.5]{con.png} \subsubsection{(c)}\label{c} Finally we combine all of the above information to define the ionic currents. \[I_{ion}(t) = g_{ion}(t)(V-V_{ion})\] Over each interval, $(V-V_{ion})$ is different and the gating variable equations change through the three equations outlined above. Instead of defining all 6 equations here we will just show the plot of the behavior over each interval. \includegraphics[scale=0.5]{cur.png} % Make sure that atleast 4 lines are below the HR \needspace{4\baselineskip} % %\vspace{6pt} %\makebox[0.1\linewidth]{\smaller\hfill\tt\color{nbframe-in-prompt}In\hspace{4pt}{[}{]}:\hspace{4pt}}\\* %\vspace{-2.65\baselineskip} %\begin{ColorVerbatim} %\vspace{-0.7\baselineskip} %\begin{Verbatim}[commandchars=\\\{\}] % %\end{Verbatim} % % %\vspace{0.3\baselineskip} % %\end{ColorVerbatim} \renewcommand{\indexname}{Index} \printindex % End of document \end{document}
{ "alphanum_fraction": 0.6392776524, "avg_line_length": 41.8488529015, "ext": "tex", "hexsha": "ee75fe2cce5a5fe26bd8e8c3c20ba33e188ba064", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mathnathan/notebooks", "max_forks_repo_path": "cmb_phase_plane/CMB/hw1/hw1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mathnathan/notebooks", "max_issues_repo_path": "cmb_phase_plane/CMB/hw1/hw1.tex", "max_line_length": 262, "max_stars_count": 1, "max_stars_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mathnathan/notebooks", "max_stars_repo_path": "cmb_phase_plane/CMB/hw1/hw1.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-04T11:04:45.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-04T11:04:45.000Z", "num_tokens": 11779, "size": 31010 }
\documentclass[12pt]{article} \usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry} \usepackage{setspace} \onehalfspacing \usepackage{amssymb} %% The amsthm package provides extended theorem environments \usepackage{amsthm} \usepackage{epsfig} \usepackage{times} \renewcommand{\ttdefault}{cmtt} \usepackage{amsmath} \usepackage{graphicx} % for graphics files \usepackage{tabu} % Draw figures yourself \usepackage{tikz} % writing elements %\usepackage{mhchem} \usepackage{paralist} % The float package HAS to load before hyperref \usepackage{float} % for psuedocode formatting \usepackage{xspace} % from Denovo Methods Manual \usepackage{mathrsfs} \usepackage[mathcal]{euscript} \usepackage{color} \usepackage{array} \usepackage[pdftex]{hyperref} \usepackage[parfill]{parskip} % math syntax \newcommand{\nth}{n\ensuremath{^{\text{th}}} } \newcommand{\ve}[1]{\ensuremath{\mathbf{#1}}} \newcommand{\Macro}{\ensuremath{\Sigma}} \newcommand{\rvec}{\ensuremath{\vec{r}}} \newcommand{\vecr}{\ensuremath{\vec{r}}} \newcommand{\omvec}{\ensuremath{\hat{\Omega}}} \newcommand{\vOmega}{\ensuremath{\hat{\Omega}}} \newcommand{\even}{\ensuremath{\phi^g}} \newcommand{\odd}{\ensuremath{\vartheta^g}} \newcommand{\evenp}{\ensuremath{\phi^{g'}}} \newcommand{\oddp}{\ensuremath{\vartheta^{g'}}} \newcommand{\Sn}{\ensuremath{S_N} } \newcommand{\Ye}[2]{\ensuremath{Y^e_{#1}(\vOmega_#2)}} \newcommand{\Yo}[2]{\ensuremath{Y^o_{#1}(\vOmega_#2)}} \newcommand{\sigg}[1]{\ensuremath{\Macro^{gg'}_{s\,#1}}} \newcommand{\psig}{\ensuremath{\psi^g}} \newcommand{\Di}{\ensuremath{\Delta_i}} \newcommand{\Dj}{\ensuremath{\Delta_j}} \newcommand{\Dk}{\ensuremath{\Delta_k}} %--------------------------------------------------------------------------- %--------------------------------------------------------------------------- \begin{document} \begin{center} {\bf NE 255, Fa16 \\ Eigenvalue Formulation and Solutions\\ October 27; November 1, 2016} \end{center} \setlength{\unitlength}{1in} \begin{picture}(6,.1) \put(0,0) {\line(1,0){6.25}} \end{picture} We're going to start talking about how to find the criticality state of a reactor, or the dominant eigenvalue-eigenvector pair. To turn the steady state Boltzmann transport equation with fission into an eigenvalue problem, we can use a mathematical ``knob". There are two ways we do this: \begin{enumerate} \item Alter the effective cross section by adding an $\alpha$ term or \item Alter the effective fission yield, $\nu$, by scaling it with $k$. \end{enumerate} \textbf{The $\alpha$ version}, which is used more frequently at LLNL and LANL, looks like this: % \begin{align*} \bigl[\vOmega \cdot \nabla + \bigl(\Sigma_t + \underbrace{\frac{\alpha_0}{v}}_{\text{new}}\bigr)\bigr]\psi(\vec{r}, E, \vOmega) &= \int_{4 \pi} d\vOmega' \int_0^{\infty} dE' \: \Sigma_s(E', \vOmega' \rightarrow E, \vOmega) \psi(\vec{r}, E', \vOmega')\\ +& \frac{\chi(E)}{4 \pi}\int_0^{\infty} dE' \: \nu(E') \Sigma_f(E') \int_{4 \pi} d\vOmega' \:\psi(\vec{r}, E', \vOmega') \end{align*} + homogeneous boundary conditions.\\ % Some important notes about this \begin{itemize} \item In the $\alpha$-evaluation problem, the total cross section is modified by a 1/v absorber. \item There is numerical difficulty if $\alpha_0 < 0$.\\ If $\frac{-\alpha_0}{v} > \Sigma_t$ then total interaction becomes a ``source" rather than a loss! \item If $\alpha_0 > 0$, absorption of \textbf{slow} neutrons is enhanced by an $\alpha_0$/v absorber $\rightarrow$ harder spectrum. 
\item If $\alpha_0 < 0$, absorption of \textbf{fast} neutrons is enhanced by an $\alpha_0$/v absorber $\rightarrow$ softer spectrum.
\item These spectral effects are important because reaction rate is $\int_0^{\infty} dE \:\Sigma_j(E) \phi(E)$.
\end{itemize}

\textbf{The $k$-eigenvalue problem} is a little bit different:
%
\begin{align*}
\bigl[\vOmega \cdot \nabla + \Sigma_t \bigr]\psi(\vec{r}, E, \vOmega) &= \int_{4 \pi} d\vOmega' \int_0^{\infty} dE' \: \Sigma_s(E', \vOmega' \rightarrow E, \vOmega) \psi(\vec{r}, E', \vOmega')\\
+& \underbrace{\frac{1}{k}}_{\text{new}}\frac{\chi(E)}{4 \pi}\int_0^{\infty} dE' \: \nu(E') \Sigma_f(E') \int_{4 \pi} d\vOmega' \:\psi(\vec{r}, E', \vOmega')
\end{align*}
%
+ homogeneous boundary conditions.\\

Important notes about this version:
\begin{itemize}
\item we can see that $k=1$ means $\alpha_0 = 0$ and vice versa.
\item $k$ as a multiplication factor: the ratio of neutron production in one generation to the neutron production in the previous generation.
\item We often use the $k$ version because we don't have the mathematical difficulties associated with the $\alpha$ form, and the physical interpretation is helpful.
\end{itemize}

Most material derived from my thesis, with support from Wolfram MathWorld and other sources (noted in tex comments).

\subsection*{Background}
A right eigenvector is defined as a column vector $x_R$ satisfying
\[\ve{A}x_R=\lambda_R x_R\:.\]
In many common applications, only right eigenvectors (and not left eigenvectors) need be considered. Hence the unqualified term ``eigenvector'' can be understood to refer to a right eigenvector.
%http://mathworld.wolfram.com/RightEigenvector.html
A left eigenvector is defined as a row vector $x_L$ satisfying
\[ x_L\ve{A}=\lambda_L x_L\:.\]
%http://mathworld.wolfram.com/LeftEigenvector.html
The \textbf{generalized eigenvalue problem} takes the form $\ve{B}x = \mu \ve{C}x$ and can be transformed into an \textbf{ordinary eigenvalue problem}, $\ve{A}x = \lambda x$. Both forms have the same right eigenvectors. If $\ve{C}$ is non-singular then $\ve{A} = \ve{C}^{-1}\ve{B}$ and the product $v = \ve{A}x$ can be computed in two steps:
%\cite{Stewart2001}
%
\begin{enumerate}
\item $w = \ve{B}x$
\item Solve the system $\ve{C}v = w$.
\end{enumerate}
%
Because the generalized form can be converted to the ordinary form, we will focus on the more common ordinary form without loss of applicability.

Recall this basic notation: let $\sigma(\ve{A}) \equiv \{\lambda \in \mathbb{C} : rank(\ve{A} - \lambda \ve{I}) < n\}$ be the spectrum of $\ve{A}$, where the elements in the set are the eigenvalues and $\mathbb{C}$ is the set of complex numbers. The eigenvalues can be characterized as the $n$ roots of the polynomial $p_{\ve{A}}(\lambda) \equiv det(\lambda \ve{I} - \ve{A})$. Each distinct eigenvalue in $\sigma(\ve{A})$ has a corresponding nonzero vector $x_{i}$ such that $\ve{A}x_{i} = \lambda_{i} x_{i}$ for $i = 1,...,n$.
% \cite{Sorensen1996}.
It will be assumed that the eigenvalues are ordered as $|\lambda_{1}| > |\lambda_{2}| \ge \dots \ge |\lambda_{n}| \ge 0$.

Eigenvalue problems in the nuclear transport community are typically solved with iterative rather than direct methods. A variety of iterative solvers have been used to solve eigenvalue problems. The following are the ones that have been most widely used.

%-----------------------------------------------------------------------------------------
\subsection*{Power Iteration}
Power iteration (PI) is an old and straightforward algorithm for finding an eigenvalue/vector pair.
The basic idea is that any nonzero vector can be written as a linear combination of the eigenvectors of $\ve{A}$ because the eigenvectors are linearly independent, namely $v_0 = \gamma_1 x_1 + \gamma_2 x_2 + \cdots + \gamma_n x_n$ where $x_{j}$ is the $j$th eigenvector and $\gamma_{j}$ is some constant. This specific expression assumes a non-defective $\ve{A}$, though this assumption is not necessary for the method to work. Another fact that is used to understand power iteration is that $\ve{A}^k x_i = \lambda_i^k x_i$. Thus % \begin{equation} \ve{A}^k v_{0} = \gamma_1 \lambda_1^k x_1 + \gamma_2 \lambda_2^k x_2 + \cdots + \gamma_n \lambda_n^k x_n \:. \label{eq:Ak} \end{equation} % Since $|\lambda_1| > |\lambda_i|, i \ne 1$, the first term in the expansion will dominate as $k \to \infty$ and $\ve{A}^k v_{0}$ therefore becomes an increasingly accurate approximation to $x_1$. In practice, it is desirable to avoid exponentiating a matrix and it is helpful to normalize $v$ in order to avoid possible over or underflow. This leads to the power iteration algorithm:%\ref{algo:PI} \cite{Stewart2001}, \cite{Trefethen1997}. % \begin{list}{}{} \item Given $\ve{A}$ and $v_0$, $v = \frac{v_{0}}{||v_{0}||}$. \item Until convergence do: \begin{list}{}{\hspace{2em}} \item $w = \ve{A}v$ \item $v = \frac{w}{||w||}$ \item $\lambda = v^{T}\ve{A}v$ \end{list} %\caption{Power Iteration} %\label{algo:PI} \end{list} Using Equation \eqref{eq:Ak} and the PI algorithm, PI's convergence behavior can be understood. After $k$ steps, the iteration vector will be: % \begin{equation} v_{k} = \bigl( \frac{\lambda_{1}^{k}}{e_{1}^{T}\ve{A}^{k}v_{0}} \bigr) \bigl(\frac{1}{\lambda_{1}^{k}}\ve{A}^{k}v_{0} \bigr) \:. \end{equation} % If $\ve{A}$ has eigenpairs $\{(x_{j}, \lambda_{j}), 1 \le j \le n \}$ and $v_{0}$ has the expansion $v_{0} = \sum_{j=1}^{n} x_{j}\gamma_{j}$ then % \begin{equation} \frac{1}{\lambda_{1}^{k}}\ve{A}^{k}v_{0} = \frac{1}{\lambda_{1}^{k}} \sum_{j=1}^{n} \ve{A}^{k}x_{j}\gamma_{j} = \sum_{j=1}^{n} x_{j} \bigl(\frac{\lambda_{j}}{\lambda_{1}} \bigr) \gamma_{j} \:. \label{eq:PIexpand} \end{equation} % From equation \eqref{eq:PIexpand} it can be determined that the error is reduced in each iteration by a factor of $|\frac{\lambda_{2}}{\lambda_{1}}|$, which is called the dominance ratio. If $\lambda_2$ is close to $\lambda_1$, then this ratio will be close to unity and the method will converge very slowly. If $\lambda_2$ is far from $\lambda_1$, then convergence will happen much more quickly. Put simply, PI is better suited for problems where $\ve{A}$ has eigenvalues that are well separated.% \cite{Sorensen1996}. Power iteration is very attractive because it only requires matrix-vector products and two vectors of storage space. Because of its simplicity and low storage cost, PI has been widely used in the transport community for criticality problems for quite some time. %\cite{Lewis1993}, \cite{Warsa2004a}. However, because of the slow convergence for many problems of interest, many current codes use an acceleration method with PI or have moved away from it altogether. As an aside, it is interesting to point out the connection between Krylov methods and power iteration. The power method is a Krylov method that uses a subspace of dimension one. A Krylov subspace is built by storing the vectors created during each iteration of the power method. Krylov methods with subspaces larger than one take advantage of the information generated during each iteration that the power method discards. 
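To make the algorithm concrete, a minimal NumPy sketch of power iteration is given below. It assumes a dense, non-defective matrix with a well-separated dominant eigenvalue; the function name, tolerance, and stopping test are illustrative choices rather than anything prescribed by the transport codes discussed here.
\begin{verbatim}
import numpy as np

def power_iteration(A, v0, tol=1e-10, max_iters=10000):
    # Dominant eigenpair of A: repeatedly apply A and normalize.
    v = v0 / np.linalg.norm(v0)
    lam = v @ (A @ v)                  # Rayleigh-quotient estimate of lambda_1
    for _ in range(max_iters):
        w = A @ v                      # w = A v
        v = w / np.linalg.norm(w)      # v = w / ||w||
        lam_new = v @ (A @ v)          # updated eigenvalue estimate
        if abs(lam_new - lam) < tol:   # stop when the estimate stagnates
            return lam_new, v
        lam = lam_new
    return lam, v
\end{verbatim}
For example, \texttt{power\_iteration(A, np.random.rand(A.shape[0]))} returns approximations to $(\lambda_1, x_1)$, converging quickly only when the dominance ratio is small.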
%----------------------------------------------------------------------------------------- \subsection*{Power Iteration in the TE} Recall the operator form of the steady state, eigenvalue transport equation: \[ \mathbf{L} \psi = \mathbf{MS}\phi +\frac{1}{k}\mathbf{MF}\phi \:. \] This equation can be combined with $\phi = \mathbf{D} \psi$ and manipulated in the following ways: % \begin{equation} \phi = \ve{DL}^{-1}\ve{MS}\phi + \ve{DL}^{-1}\ve{M}\frac{1}{k}\ve{\chi} \ve{f}^{T} \phi \:. \end{equation} % Let $\ve{T} = \ve{DL}^{-1}$, rearrange, and multiply both sides by $f^{T}$: % \begin{align} f^{T}\bigl(\ve{I} - \ve{TMS}\bigr)\phi &= \frac{f^{T}}{k} \ve{TM}\chi f^{T} \phi \:, \label{eq:OperatorEvalForm} \\ % \text{define } \Gamma &= f^{T}\phi \text{ and rearrange} \:, \nonumber \\ % k \Gamma &= \underbrace{f^{T}(\ve{I} - \ve{TMS})^{-1} \ve{TM} \chi}_{\ve{A}} \Gamma \:. \label{eq:OrdinaryEigenvalue} \end{align} % All of that can be used to make the traditional form of power iteration where the eigenvalue iteration index is $k$: % \begin{equation} \Gamma^{k+1} = \frac{1}{k^{k}}\ve{A} \Gamma^{k} \:. \label{eq:PowerItForm} \end{equation} Note that Equation~\eqref{eq:OrdinaryEigenvalue} is the ordinary form of the eigenvalue transport equation. In this case the eigenvalue-vector pair are $(\Gamma, k)$. The application of $\ve{A}$ to $\Gamma^{k}$ involves a multigroup solve that looks like a fixed source problem: % \begin{equation} \bigl(\ve{I} - \ve{TMS}\bigr)y^{k} = \ve{TM}\chi \Gamma^{k} \:. \end{equation} % Thus, inside of one eigenvalue iteration we will have done inner, space-angle solves for each group and an outer iteration over all groups. %This means outer iterations update the eigenvalue and there are two sets of inner iterations: energy, and space-angle, just like other fixed source solves. Originally, a Krylov solver was used for the space-angle inner iterations and Gauss Seidel was used for the energy iterations. %\cite{Evans2009a}, \cite{Evans2011}. Denovo uses Trilinos \cite{Heroux2003} to provide the Krylov solver, with a choice of either GMRES or BiCGSTAB \cite{Evans2009}. %----------------------------------------------------------------------------------------- \subsection*{Shifted Inverse Iteration} One way to improve power iteration is to shift the matrix $\ve{A}$ and then, in a power iteration-type scheme, apply the inverse of the shifted matrix rather than the regular unshifted matrix. The method is called inverse iteration or shifted inverse iteration and the goal is to provide better convergence properties. Understanding why shifted inverse iteration is an improvement requires understanding spectral transformation. The fundamental notion is that $\ve{A}$ can be shifted by a scalar without changing the eigenvectors. That is, for some shift $\mu$, $(\ve{A} - \mu \ve{I})$ will have the same eigenvectors as $\ve{A}$. %\cite{Sorensen1996}. If $\mu \notin \sigma(\ve{A})$, then $(\ve{A} - \mu \ve{I})$ is invertible and $\sigma([\ve{A} - \mu \ve{I}]^{-1}) = \{1/(\lambda - \mu):\lambda \in \sigma(\ve{A})\}$. Eigenvalues of $\ve{A}$ that are near the shift will be transformed to extremal eigenvalues that are well separated from the others. Such a spectral transformation can be added to the power method. Given an estimate, $\mu \approx \lambda_1$, the shifted inverse power method will usually converge more quickly than PI. 
To see why this works consider:
%
\begin{equation}
\tau_1 = \frac{1}{\lambda_1 - \mu}\text{ , }\tau_2 = \frac{1}{\lambda_2 - \mu}\text{ , }\dots\text{ , }\tau_n = \frac{1}{\lambda_n - \mu} \:.
\end{equation}
%
As $\mu \to \lambda_1$, $\tau_1 \to \infty$ and all the other eigenvalues go to finite quantities. If the $\tau$s are inserted into the convergence analysis done for PI, it can be shown that the shifted inverse power method will reduce error in every iteration by a factor of $\frac{|\lambda_{1} - \mu|}{|\lambda_{2} - \mu|}$. This is typically much faster than $|\frac{\lambda_{2}}{\lambda_{1}}|$, though the ultimate success of the method is dependent upon the quality of the shift.% \cite{Sorensen1996}, \cite{Trefethen1997}.

The algorithm for shifted inverse iteration is much like that for power iteration, requiring only one change. Step 1 becomes ``solve $(\ve{A}-\mu \ve{I})w = v$;'' all other steps are the same. After convergence, the actual eigenvalue of interest is backed out using the eigenvector. %\cite{Sorensen1996}.
Shifted inverse iteration is effectively the same as performing power iteration on $(\ve{A}-\mu \ve{I})^{-1}$.

--------------------------------------------\\
\textbf{Aside:}\\
Conditioning is one way to express the perturbation behavior of the mathematical problem. A \emph{well-conditioned} problem is one in which all small perturbations of $x$ lead to only small changes in $f(x)$. An \emph{ill-conditioned} problem is one in which some small perturbation of $x$ leads to a large change in $f(x)$. The condition number is a quantity used to express how well-conditioned a matrix or problem is. A small condition number corresponds to a well-conditioned problem, and vice versa.

The relative condition number is used to measure the effect of relative perturbations. This is particularly useful in numerical analysis because computers introduce relative errors. Let $\delta x$ be a small perturbation of $x$ and $\delta f = f(x + \delta x) - f(x)$. With these terms, the relative condition number for some norm $p$ is defined as
%
\begin{equation}
\kappa(x)_{p} = \lim_{\delta \rightarrow 0} \sup_{||\delta x||_{p} \le \delta} \biggl(\frac{||\delta f||_{p}}{||f(x)||_{p}} / \frac{||\delta x||_{p}}{||x||_{p}} \biggr) \:,
\label{eq:cond}
\end{equation}
%
where $\sup$ is the supremum, the smallest real number that is greater than or equal to every number in the set in question.
%\cite{Wikipedia2011}.
This definition holds for any norm. The norm subscript will be excluded unless specifying a certain norm is pertinent.

The condition number of a matrix $\mathbf{A}$ is defined as
%
\begin{equation}
\kappa(\mathbf{A}) = ||\mathbf{A}|| \text{ }||\mathbf{A}^{-1}|| \:.
\label{eq:condA}
\end{equation}
%
For an $n \times m$ matrix $\ve{A}$, the singular values (a generalization of eigenvalues for non-square matrices) are ordered such that $\sigma_{1} \ge \sigma_{2} \ge \dots \ge \sigma_{n} > 0$. If the two-norm is used, then $||\mathbf{A}||_{2} = \sigma_{1}$, $||\mathbf{A}^{-1}||_{2} = \frac{1}{\sigma_{m}}$, and $\kappa_{2}(\mathbf{A}) = \frac{\sigma_{1}}{\sigma_{m}}$; $\sigma_{m}$ is the $m$th singular value of $\ve{A}$. If $\mathbf{A}$ is singular, its condition number is infinity.
%The ratio $\frac{\sigma_{1}}{\sigma_{m}}$ can be interpreted as the eccentricity of the hyper-ellipse that is the image of an $m$-dimensional unit sphere under $\mathbf{A}$.
When $\ve{A}$ has a large condition number the largest principal semiaxis is much longer than the smallest principal semiaxis.
--------------------------------------------

On initial consideration it would seem shifted inverse methods might not work well when the shift is very good because the matrix becomes very ill-conditioned. If the shift is exact, i.e.\ when $\mu = \lambda_{1}$, the matrix is singular. It turns out this concern is unfounded. Peters and Wilkinson proved that ill-conditioning is not a problem for inverse iteration methods. %\cite{Peters1979}.
Trefethen and Bau also assert that this is the case as long as the $(\ve{A}-\mu \ve{I})w = v$ portion is solved with a backwards stable algorithm.% \cite{Trefethen1997}.

Wielandt's method is a flavor of shifted inverse iteration that has been used widely in the neutron transport community, though it was originally developed in 1944 in an unrelated context. %\cite{Zinzani2008}, \cite{Itagaki1996}, \cite{Itagaki2002}, \cite{Ipsen}.
In the transport equation formulation, Wielandt's method changes the generalized form of the eigenvalue problem to $(\ve{I} - \ve{DL}^{-1}\ve{M}(\ve{S} +\gamma_e \ve{F}))\phi = (\delta \gamma) \ve{DL}^{-1}\ve{MF} \phi$ where $\gamma_e$ is an estimate for the eigenvalue $\gamma_1 = \frac{1}{k}$, $\phi$ is the corresponding eigenvector, and $(\delta \gamma) = \gamma_1 - \gamma_e$. The power method is applied to this, giving% \cite{Nakamura1977}
%
\begin{equation}
\phi^{i+1} = (\delta \gamma)^{i}(\ve{I} - \ve{DL}^{-1}\ve{M}(\ve{S} + \gamma_e \ve{F}))^{-1}\ve{DL}^{-1}\ve{MF}\phi^{i} \:.
\label{eq:Wielandt}
\end{equation}

The estimate, $\gamma_e$, can be improved on each iteration by starting with an initial guess of $0$ and setting $\gamma_e^i = (\delta \gamma)^{i-1} + \gamma_e^{i-1} + \alpha$. Here $\alpha$ is an optional small positive constant that is designed to keep roundoff error from dominating when $\gamma_e$ becomes very close to $\gamma_1$.% \cite{Nakamura1977}.

Studies of reactor systems have found shifted inverse iteration to be faster than power iteration. %\cite{Allen2002}.
In the general computational community, shifted inverse iteration has largely taken the place of power iteration when looking for eigenvectors associated with eigenvalues that are relatively well known at the outset of the calculation since this information allows for the selection of a good shift.% \cite{Ipsen}.

%-----------------------------------------------------------------------------------------
\subsection*{Rayleigh Quotient Iteration}
Rayleigh quotient iteration is a variation of shifted inverse iteration that has a changing shift, as Wielandt's method sometimes does. The key difference is that this method uses a specific formula, the Rayleigh quotient (RQ), that is designed to find the optimal shift at every iteration.

The Rayleigh quotient for the ordinary eigenvalue problem, originally proposed by the third Baron Rayleigh in the 1870s, is defined as:% \cite{Parlett1974}:
%
\begin{equation}
\rho(x, \ve{A}) = \rho(x) = \frac{x^{T}\ve{A}x}{x^{T}x} \:.
\label{RQ}
\end{equation}
%
If $x$ is an eigenvector of $\ve{A}$, then the RQ is the corresponding eigenvalue. If $x$ is close to an eigenvector, then the RQ will approximate the eigenvalue. %\cite{Stewart2001}.
The derivation of the RQ comes from asking the question ``what $\alpha$ will minimize $||\ve{A}x - \alpha x||_2$?'' Solving this using least squares will give $\alpha = \rho(x)$.% \cite{Trefethen1997}.

The RQ has a similar form for the generalized eigenvalue problem, mentioned here because it is the form that was implemented in Denovo.
For the problem $\ve{A}x = \lambda \ve{B}x$, there is a right eigenpair $(\lambda, x)$ for $x \ne 0$ and a left eigenpair $(\lambda, y): y^{T}\ve{A} = \lambda y^{T}\ve{B}$ for $y \ne 0$. Let $\langle \alpha, \beta \rangle$ be a simple eigenvalue of pencil $(\ve{A}, \ve{B})$. If $x$ and $y$ are right and left eigenvectors corresponding to $\langle \alpha, \beta \rangle$, respectively, then $\langle \alpha, \beta \rangle = \langle y^{T} \ve{A} x, y^{T} \ve{B} x \rangle$. This means the ordinary form of the eigenvalue is: \begin{equation} \lambda = \frac{y^{T} \ve{A} x}{y^{T} \ve{B} x} \:, \end{equation} which is the generalization of the RQ.% \cite{Stewart2001}. Recall from the previous section that choosing a shift close to the eigenvalue of interest controls the dominance ratio of shifted inverse iteration, and hence convergence behavior. The RQI algorithm uses a strategically selected shift, the Rayleigh quotient, as seen here.% \cite{Trefethen1997}, \cite{Parlett1974}. \begin{list}{}{} \item Given $\ve{A}$ and $v_{0}$, $v = \frac{v_{0}}{||v_{0}||}$, and $\rho_{0} = \rho(v) = v^{T}\ve{A}v$ \\ \item Until convergence do: \begin{list}{}{\hspace{2.5em}} \item Solve $(\ve{A} - \rho\ve{I})w = v$ \item normalize $v = \frac{w}{||w||}$ \item form $\rho = v^{T}\ve{A}v$ \end{list} %\caption{Rayleigh Quotient Iteration} %\label{algo:RQI} \end{list} % This process generates a sequence, $\{\rho_{k}, v_{k}\}$, called the Rayleigh sequence generated by $v_{0}$ on $\ve{A}$. To more deeply understand why this method is optimal and useful for the purposes of this work, more properties of the Rayleigh quotient are highlighted:% \cite{Parlett1974}: % \begin{itemize} \item For $\alpha \ne 0$, $\rho(\alpha x, \ve{A}) = \rho(x, \ve{A})$, so using any multiple of $x$ will produce the same sequence as $x$. \item The RQ has translational invariance, $\rho(x, \ve{A} - \alpha \ve{I}) = \rho(x,\ve{A}) - \alpha$, meaning the matrix $(\ve{A} - \alpha \ve{I})$ produces the same sequence as $\ve{A}$. %\cite{Parlett1974}. This relationship can be used to relate eigenvalues and applied shifts. In fact, this is one of the ways to find the eigenvalue of interest for the standard shifted inverse iteration method.% \cite{Sorensen1996}. \item When $x$ is an eigenvector of $\ve{A}$, $\rho(x)$ is stationary at $x$. \item When $x \ne 0$ the RQ gives the minimal residual, with equivalence holding only when $\beta = \rho(x)$: % \begin{equation} ||(\ve{A} - \beta\ve{I})x||^{2} \ge ||\ve{A}x||^{2} - ||\rho(x)x||^{2} \:. \end{equation} % \item $x$ is orthogonal to $(\ve{A} - \rho(x))x$. \end{itemize} RQI has very good convergence properties for normal matrices. The minimal residual property of the RQ causes the global convergence behavior. The sequence of residuals $\{ ||(\ve{A} - \rho_{k})v_{k}|| = ||r_{k}||, k = 0, 1, ... \}$ is monotonically decreasing for all starting $v_{0}$. When RQI is applied to normal matrices, the following has been proven as $k \to \infty$: % \begin{enumerate} \item $\rho_{k} = \rho(v_{k})$ converges, and either \item $(\rho_{k}, v_{k})$ converges cubically to an eigenpair $(\lambda, k)$, or \item $\rho_{k}$ converges linearly to a point equidistant from $s \ge 2$ eigenvalues of $\ve{A}$, and the sequence $\{v_{k}\}$ cannot converge. \end{enumerate} % This means that when RQI converges to the correct eigenpair it does so rapidly. However, there is a risk that if a bad starting vector is selected it will not converge at all.% \cite{Parlett1974}. 
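To illustrate the iteration just described, a NumPy sketch for a symmetric (normal) $\ve{A}$ is given below. The dense solve, tolerance, and stopping test are illustrative choices only; in a transport code the dense solve would be replaced by the corresponding fixed-source solve.
\begin{verbatim}
import numpy as np

def rayleigh_quotient_iteration(A, v0, tol=1e-12, max_iters=100):
    # RQI sketch for a symmetric (normal) matrix A.
    n = A.shape[0]
    v = v0 / np.linalg.norm(v0)
    rho = v @ (A @ v)                      # initial Rayleigh quotient
    for _ in range(max_iters):
        # Solve (A - rho I) w = v; the system becomes nearly singular as
        # rho approaches an eigenvalue, which a backward-stable solver tolerates.
        w = np.linalg.solve(A - rho * np.eye(n), v)
        v = w / np.linalg.norm(w)
        rho_new = v @ (A @ v)              # new shift for the next pass
        if abs(rho_new - rho) < tol:
            return rho_new, v
        rho = rho_new
    return rho, v
\end{verbatim}
Because the shift changes every pass, each iteration requires a new factorization of $\ve{A} - \rho\ve{I}$; that extra cost is what buys the rapid convergence described above.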
The monotone sequence $\{||r_{k}||\}$ is bounded from below by 0. If the limit of the sequence is 0 as $k \to \infty$ then case 2 will be observed and convergence will be cubic. If the limit is greater than 0, case 3 is found. Unfortunately, it does not seem that this can be known \emph{a priori}. In practice, however, users found that it was difficult to make the method fail for normal matrices.% \cite{Parlett1974}.

For non-normal matrices the stationary property does not hold, which means the convergence is quadratic at best. The residual sequence is also not guaranteed to be monotonically decreasing and thus no global convergence properties can be proven. It has been found in practice that RQI will still converge for non-normal systems, just at a slower rate than for normal matrices. However, convergence can be neither guaranteed nor predicted in advance.% \cite{Parlett1974}.

%To deal with this, two adapted RQI methods for non-normal matrices were developed that attempt rectify this. One developed Ostrowski regains the stationary property at eigenvectors. In the non-defective case this gives cubic convergence, but gives no guarantees of global convergence. The other was developed by Parlett and it generates monotonically decreasing residuals, thus guaranteeing global convergence but at a rate that is merely linear for non-defective matrices \cite{Parlett1974}.

\end{document}
{ "alphanum_fraction": 0.6982484382, "avg_line_length": 74.974137931, "ext": "tex", "hexsha": "799e69c638b93f6122b80d05a538c457e0baf055", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2021-07-14T23:53:06.000Z", "max_forks_repo_forks_event_min_datetime": "2016-08-23T00:47:55.000Z", "max_forks_repo_head_hexsha": "235cf55dc28e9caef2fab7ad5407f4fb84c247b6", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "rachelslaybaugh/NE255", "max_forks_repo_path": "06-eigenvalue/06-eigenvalue.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "235cf55dc28e9caef2fab7ad5407f4fb84c247b6", "max_issues_repo_issues_event_max_datetime": "2016-10-12T03:01:20.000Z", "max_issues_repo_issues_event_min_datetime": "2016-09-01T22:07:42.000Z", "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "rachelslaybaugh/NE255", "max_issues_repo_path": "06-eigenvalue/06-eigenvalue.tex", "max_line_length": 660, "max_stars_count": 5, "max_stars_repo_head_hexsha": "235cf55dc28e9caef2fab7ad5407f4fb84c247b6", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "rachelslaybaugh/NE255", "max_stars_repo_path": "06-eigenvalue/06-eigenvalue.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-23T14:44:17.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-21T19:21:45.000Z", "num_tokens": 7812, "size": 26091 }
%---------------------------------------------------------------------------------------- % CREDITS AND SOURCES %---------------------------------------------------------------------------------------- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Vertical Line Title Page % LaTeX Template % Version 1.0 (27/12/12) % % This template has been downloaded from: % http://www.LaTeXTemplates.com % % Original author: % Peter Wilson ([email protected]) % % License: % CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Aardvark non-copyright image reference: % ClkerFreeVectorImages, (2012), Aardvark [ONLINE]. Available at: https://pixabay.com/static/uploads/photo/2012/04/28/20/56/animal-44528_960_720.png [Accessed 26 February 16]. %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass[11pt, a4paper, parskip=full]{article} % Package for mathematics \usepackage{amsmath} % Packages for inserting code (Python) \usepackage[procnames]{listings} \usepackage{color} \definecolor{keywords}{RGB}{255,0,90} \definecolor{comments}{RGB}{0,0,113} \definecolor{red}{RGB}{160,0,0} \definecolor{green}{RGB}{0,150,0} \lstset{language=Python, breaklines=true, breakautoindent=false, % postbreak=\raisebox{0ex}[0ex][0ex]{\ensuremath{\color{red}\hookrightarrow\space}}, basicstyle=\ttfamily\small, keywordstyle=\color{keywords}, commentstyle=\color{comments}, stringstyle=\color{red}, showstringspaces=false, identifierstyle=\color{green}, procnamekeys={def,class}} % Package for inline code \definecolor{codegray}{gray}{0.9} \newcommand{\code}[1]{\colorbox{codegray}{\texttt{#1}}} % Package for adding space between paragraphs \usepackage{parskip} % Package for links \usepackage{hyperref} % Package for images \usepackage{graphicx} % Package for headers, footers, etc \usepackage{fancyhdr} \pagestyle{fancy} \lhead{Othman Alikhan} \rhead{COMP2444} % Package for controlling title spacing \usepackage{titlesec} \usepackage{lipsum}% just to generate text for the example %\titlespacing*{\section} %{0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex} %\titlespacing*{\subsection} %{0pt}{5.5ex plus 1ex minus .2ex}{4.3ex plus .2ex} \newcommand*{\plogo}{\fbox{$\mathcal{PL}$}} % Generic publisher logo %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \newcommand*{\titleGM}{\begingroup % Create the command for including the title page in the document \hbox{ % Horizontal box \hspace*{0.15\textwidth} % Whitespace to the left of the title page \rule{1pt}{\textheight} % Vertical line \hspace*{0.05\textwidth} % Whitespace between the vertical line and title page text \parbox[b]{0.75\textwidth}{ % Paragraph box which restricts text to less than the width of the page {\noindent\Huge \textbf{Red and Black}}\\[0.5\baselineskip] % Title {\noindent\Huge \textbf{Gauss-Seidel In MPI}}\\[2\baselineskip] % Title {\Large \textsc{COMP3920: Parallel Computing}}\\[0.5\baselineskip] % Module code and name {\Large \textsc{University Of Leeds}}\\[7\baselineskip] % University name {\large \textit{"Why did the functions stop calling each other? 
\\[0.5\baselineskip]
Because they had constant arguments..."}} % Horrific pun

\vspace{0.3\textheight} % Whitespace between the title block and the publisher

\includegraphics[width=0.15\textwidth]{logo.png} \\[\baselineskip] % Team logo

{\noindent \today }\\[\baselineskip] % Today
}}\endgroup}

%----------------------------------------------------------------------------------------
%	MAIN DOCUMENT
%----------------------------------------------------------------------------------------

\begin{document}

\titleGM
\newpage

\newpage
\section*{Question 1}

For N=128, the table of convergence tolerance against number of iterations:

\begin{center}
\begin{tabular}{ c | c }
convTol & iterations \\ \hline
$10^{-1}$ & 3713 \\
$10^{-2}$ & 7595 \\
$10^{-3}$ & 11477 \\
$10^{-4}$ & 15359 \\
$10^{-5}$ & 19241 \\
$10^{-6}$ & 23123 \\
$10^{-7}$ & 27005
\end{tabular}
\end{center}

As the tolerance becomes more stringent by powers of 10, the number of iterations increases by a constant amount. In other words, the number of iterations is logarithmically related to the convergence tolerance. Namely, the relationship is captured by:

$$\mathit{iterations} = 3882 \cdot \left(-\log_{10} \mathit{convTol}\right) - 169$$

which, surprisingly, fits all of the data points exactly.

As for explaining the relation: as the tolerance becomes more stringent, more iterations are needed to reduce the error sufficiently, hence the iteration count grows steadily as the tolerance shrinks.

\section*{Question 2}

The table of the matrix size N against the final error of vector x:

\begin{center}
\begin{tabular}{ c | c }
N & error \\ \hline
64 & 6.45149e-11 \\
128 & 0.00240055 \\
256 & 0.202453 \\
512 & 0.619469 \\
1024 & 0.830251
\end{tabular}
\end{center}

For a fixed number of iterations, the final error grows roughly linearly with the problem size. One explanation for this is that the number of error terms increases as N increases (since the error is a least-squares style sum over all components of x).

\section*{Question 3}

In chronological order of the code, the routines are:

MPI\_Scatter, to efficiently strip-partition the matrix A across all ranks, since only a slice of the matrix is needed per rank for the subsequent calculations.

MPI\_Bcast, to send vectors x and b to all ranks, since all elements of both vectors are needed during the matrix multiplication and other calculations.

MPI\_Gather, used to gather on the root rank all the updated values of x from all other ranks. This is needed since the fully updated vector x needs to be broadcast to all ranks (synchronization).

MPI\_Bcast, to send the synchronized vector x on the root rank to all other ranks before proceeding to the next iteration. This is necessary since each Gauss-Seidel iteration requires the newly updated values of x.

MPI\_Allreduce, to gather the errors from each rank and sum them up. This is needed later on to define the while loop in terms of a convergence tolerance.

As for the conditions for the code to work:

$N\ mod\ numprocs == 0$, that is, the number of rows per process must be equal. Otherwise the matrix-multiplication sums would include extra rows for the last process, which would hold a different number of matrix rows.

$matrixSize > p > 1$, that is, each process must have at least one row, otherwise the initial calculation of $rowsPerProc = N / numprocs$ would yield $0$ and hence no rows would be sent out to any rank (a bigger concern, though, is that if there are more processes than matrix rows, some of the processes will be idle with the current implementation of the matrix multiplication).
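For illustration only, the communication pattern described above can be sketched as follows. This is a schematic \code{mpi4py} (Python) rendering of the same sequence of MPI calls, not the actual C coursework code; the variable names, the test system, and the convergence test are merely suggestive.

\begin{lstlisting}
# Schematic mpi4py sketch of the Gauss-Seidel communication pattern
# described above; names and sizes are illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, numprocs = comm.Get_rank(), comm.Get_size()

N = 128                          # assumes N mod numprocs == 0
rowsPerProc = N // numprocs

if rank == 0:
    A = np.random.rand(N, N) + N * np.eye(N)   # diagonally dominant test system
    b = np.random.rand(N)
else:
    A, b = None, np.empty(N)

localA = np.empty((rowsPerProc, N))
comm.Scatter(A, localA, root=0)  # strip-partition A across the ranks
comm.Bcast(b, root=0)            # every rank needs the whole of b

x = np.zeros(N)
comm.Bcast(x, root=0)            # initial guess, identical on every rank

for iteration in range(2000):
    localX = np.empty(rowsPerProc)
    for i in range(rowsPerProc):
        g = rank * rowsPerProc + i              # global row index
        s = localA[i] @ x - localA[i, g] * x[g]
        localX[i] = (b[g] - s) / localA[i, g]
        x[g] = localX[i]                        # Gauss-Seidel: reuse the new value
    comm.Gather(localX, x if rank == 0 else None, root=0)  # collect x on root
    comm.Bcast(x, root=0)        # synchronise x before the next iteration

    rows = slice(rank * rowsPerProc, (rank + 1) * rowsPerProc)
    localErr = float(np.sum((b[rows] - localA @ x) ** 2))
    err = comm.allreduce(localErr, op=MPI.SUM)  # total squared error
    if np.sqrt(err) < 1e-6:
        break
\end{lstlisting}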
\newpage
\section*{Question 4}

For a fixed $N=1536$, $iters=2000$, the tables below show the time taken for various numbers of processes $p$ and various numbers of machines:

\vspace{20pt}

\begin{center}
\begin{tabular}{ c | c | c}
p & parallel t (seconds) & ideal serial t (seconds) \\ \hline
1 & 5.242228 & 2.48321 \\
1 & 5.231823 & 2.48321 \\
1 & 5.202714 & 2.48321 \\
2 & 2.847799 & 4.96642 \\
2 & 2.835561 & 4.96642 \\
2 & 2.884595 & 4.96642 \\
4 & 2.413028 & 9.93284 \\
4 & 2.400443 & 9.93284 \\
4 & 2.426597 & 9.93284
\end{tabular}

Table 1: One Machine.
\end{center}

\vspace{20pt}

\begin{center}
\begin{tabular}{ c | c | c}
p & parallel t (seconds) & ideal serial t (seconds) \\ \hline
4 & 7.834163 & 20.968912 \\
4 & 7.810804 & 20.968912 \\
4 & 7.811297 & 20.968912 \\
8 & 9.365273 & 41.937824 \\
8 & 9.362385 & 41.937824 \\
8 & 9.363021 & 41.937824
\end{tabular}

Table 2: Two Machines.
\end{center}

\vspace{20pt}

\begin{center}
\begin{tabular}{ c | c | c}
p & parallel t (seconds) & ideal serial t (seconds) \\ \hline
12 & 11.139044 & 29.79852 \\
12 & 11.110144 & 29.79852 \\
12 & 10.989798 & 29.79852
\end{tabular}

Table 3: Three Machines.
\end{center}

\vspace{20pt}

For a single machine, the parallel time decreases as the number of processes increases, whereas the opposite holds true for the ideal serial time. This is because, as the number of processes on the same machine increases, the time spent computing increases in proportion to the time spent communicating between processes.

We can note an increase in the time taken with four processes when using two machines rather than a single machine. This is due to the fact that the communication overhead now includes the time taken to send information across the network, as opposed to solely within the same machine, and is hence slower.

Additionally, as the number of processes on two machines increases from four to eight, the time taken by the parallel algorithm also increases. One explanation is that the time to communicate between processes (even internally on the same machine) scales much worse than the time spent computing on a single process for a fixed matrix size (i.e. the communication overhead increases faster than the per-rank computation time as p increases for a fixed matrix size).

Lastly, we can see that, on the whole, the parallel algorithm performs vastly better than the ideal serial implementation. However, as p increases so does the parallel time taken, which is unfavourable for a fixed matrix size. If the matrix size were allowed to grow as well, the parallel time would outperform the ideal serial time by an even larger margin.

\end{document}
{ "alphanum_fraction": 0.6628640048, "avg_line_length": 38.7325581395, "ext": "tex", "hexsha": "42d3a181c39d8a60dbf859c8f99e26d0727c0578", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3405e1463e82ca2e6f7deef05c3b1ba0ab9c1278", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "OthmanEmpire/university_code", "max_forks_repo_path": "year3/c/mpi/comp3920_coursework3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3405e1463e82ca2e6f7deef05c3b1ba0ab9c1278", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "OthmanEmpire/university_code", "max_issues_repo_path": "year3/c/mpi/comp3920_coursework3.tex", "max_line_length": 479, "max_stars_count": 1, "max_stars_repo_head_hexsha": "3405e1463e82ca2e6f7deef05c3b1ba0ab9c1278", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "OthmanEmpire/university", "max_stars_repo_path": "year3/c/mpi/comp3920_coursework3.tex", "max_stars_repo_stars_event_max_datetime": "2016-05-21T17:23:50.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-21T17:23:50.000Z", "num_tokens": 2722, "size": 9993 }
\section{XAFS Fourier transforms} \begin{frame} \frametitle{XAFS Fourier Transforms} \begin{cenpage}{130mm} Fourier Transforms are an important part of XAFS Analysis: \begin{postitbox}{75mm} $\displaystyle{ \chi(R) = {\rm FT}[\chi(k)] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} { dk \, e^{i2kR} \, k^w \, \chi(k) \, \Omega(k) }} $ \end{postitbox} \begin{itemize} \item $\Omega(k)$ is the Window Function \item $w$ is the $k$-weighting \end{itemize} \pause \vmm We really use a discrete version and Fast Fourier Transform \[ \chi(R_m) = \frac{i \delta k}{\sqrt{\pi N_{\rm fft}}} \sum_{n=1}^{N_{\rm fft}} e^{2\pi i n m/N_{\rm fft}} \, k_n^w \, \chi(k_n)\, \Omega(k_n) \] \pause \begin{itemize} \item {\chik} is put on a uniform $k$-grid with spacing of $\delta k=0.05 {\rm\, \AA}^{-1}$. \item {\chik} is filled with zeros past the real data range. \item $N_{\rm fft} = 2048$: {\chik} can go to $102.4 {\rm\, \AA}^{-1}$ ($\sim 40\rm \, keV$) past the edge. \item {\chir} is on a $R$-grid with spacing $\sim0.031\,\rm\AA$, and can go to $31.4 \,\rm\AA$. \end{itemize} \end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transforms: Basic Properties} \begin{cenpage}{130mm} Fourier Transform of a sine wave: \begin{tabular}{ccc} \begin{minipage}{55mm} \includegraphics[width=55mm]{figs/reduction/sine_k0} \end{minipage} & $\Rightarrow $ & \begin{minipage}{55mm} \includegraphics[width=55mm]{figs/reduction/sine_r0} \end{minipage}\\ \begin{minipage}{55mm} \includegraphics<2->[width=55mm]{figs/reduction/sine_k} \end{minipage} & {\onslide+<2-> $\Rightarrow $} {\onslide<1> {\vspace{40mm}}} & \begin{minipage}{55mm} \includegraphics<2->[width=55mm]{figs/reduction/sine_r} \end{minipage}\\ \end{tabular} \end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transforms: Basic Properties(2)} \begin{cenpage}{130mm} Fourier Transforms are complex: \begin{tabular}{ccc} \begin{minipage}{55mm} \includegraphics[width=55mm]{figs/reduction/sine_k} \end{minipage} & $\Rightarrow $ & \begin{minipage}{55mm} \includegraphics[width=55mm]{figs/reduction/sine_r2} \end{minipage}\\ \noalign{\smallskip} \multicolumn{3}{l}{\onslide+<2->Waves with slightly different frequencies can cancel each other out, causing ``beats'' } \\ \noalign{\smallskip} \begin{minipage}{55mm} \includegraphics<2->[width=55mm]{figs/reduction/beat_k} \end{minipage} & {\onslide+<2-> $\Rightarrow $} {\onslide<1> {\vspace{40mm}}} & \begin{minipage}{55mm} \includegraphics<2->[width=55mm]{figs/reduction/beat_r} \end{minipage}\\ \end{tabular} \end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transform Window Types} \begin{cenpage}{130mm} \begin{tabular}{ll} \begin{minipage}{80mm} \includegraphics[width=80mm]{figs/reduction/ftwin_zoo} \end{minipage} & \begin{minipage}{50mm} \setlength{\baselineskip}{10pt} \hspace{-3mm}{\Red{Typical Window Functions}} \vspace{0.5mm} A Window Function: \begin{itemize} \item goes from 0 to 1 and back to 0 \item $dk$ gives the width of the Window ``sill'' \end{itemize} \vmm {\Red{Most important rule:}} \vmm Pick a window type and stick with it. \vmm Kaiser-Bessel and Hanning are the most commonly used, and recommended. \vmm \end{minipage} \end{tabular} \vmm Personal Recommendation: \vmm Kaiser-Bessel Window, $dk=4$. 
\end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transform Window Types} \begin{cenpage}{130mm} \begin{columns} \begin{column}{70mm} \includegraphics[width=60mm]{figs/reduction/ftwin_anat} \vmm {\onslide+<2-> { \includegraphics[width=60mm]{figs/reduction/ftwin_sills} }} \end{column} \begin{column}{55mm} \hspace{-3mm}{\Red{Fourier Window Function}} \vspace{0.5mm} The meaning of $k_{\rm min}$, $k_{\rm max}$, and $dk$. \vspace{10mm} {\onslide+<2-> { \hspace{-3mm}{\Red{Parzen, Hanning, Welch}} \vspace{0.5mm} Details of the different Window ``sills''. all with $k_{\rm min}=2\rm\,\AA^{-1}$ and $dk=3\rm\,\AA^{-1} $. \vspace{3mm} }} \end{column} \end{columns} \end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transform Window and real data} \begin{cenpage}{130mm} The effect of $dk$ (for Hanning Window) and different Window Function: \begin{tabular}{ll} \begin{minipage}{60mm} \includegraphics[width=55mm]{figs/reduction/ftwin_kdk} \end{minipage} & \begin{minipage}{60mm} \includegraphics[width=55mm]{figs/reduction/ftwin_rdk} \end{minipage}\\ \begin{minipage}{60mm} \includegraphics<2->[width=55mm]{figs/reduction/ftwin_wins} \end{minipage} & {\onslide<1> {\vspace{40mm}}} {\onslide+<2-> \begin{minipage}{60mm} Changing $dk$ and Window functions gives relatively small changes to {\chir}, most noticeable in the ``ringing'' of small peaks. \end{minipage} } \end{tabular} \end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transform Window and $k$-weight } \begin{cenpage}{130mm} $\displaystyle{ \chi(R) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} { dk \, e^{i2kR} \, k^{w} \, \chi(k) \, \Omega(k) }} $ \vmm Changing $w$, the $k$-weighting has a significant impact: \vmm \begin{tabular}{ll} \begin{minipage}{65mm} \includegraphics[width=65mm]{figs/reduction/ftwin_kw} \end{minipage} & \begin{minipage}{42mm} Fe-Fe scattering dominates with higher $w$. \vmm \vmm low $w$ emphasizes low-$k$, and low-Z scatterers. \vmm high $w$ emphasizes high-$k$, and high-Z scatterers. \vmm \end{minipage} \end{tabular} \vmm This is important when trying to determine the $Z$ of a scatterer. \vmm Again, $w=2$ and $w=3$ are most common, and recommended. \end{cenpage} \end{frame} \begin{frame} \frametitle{Fourier Transform Window and $k_{\rm min}$ } \begin{cenpage}{130mm} $k_{\rm min}$ and $k_{\rm max}$ are important too. \begin{itemize} \item $k_{\rm max}$ should be the end of useful data. \item With $k$-weight = 2, 3, it is not too important to avoid ``very low $k$''. \end{itemize} \vmm \begin{tabular}{ll} \begin{minipage}{65mm} \includegraphics[width=65mm]{figs/reduction/ftwin_kmin} \end{minipage} & \begin{minipage}{42mm} Conventional wisdom: keep $k_{\rm min} > 2\rm\,\AA^{-1}$ \vmm But: don't make it too big. \vmm \end{minipage} \end{tabular} \begin{center} \begin{postitbox}{85mm} Use Kaiser-Bessel with $dk=4,k_{\rm min}=2 \rm\,\AA^{-1}$ \vmm Use $k$-weight=2, or 3. \hspace{10mm} Don't obsess too much. \end{postitbox} \end{center} \end{cenpage} \end{frame}
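\begin{frame}[fragile]
  \frametitle{A minimal numerical sketch of {\chir}}
  \begin{cenpage}{130mm}
    For illustration only: a rough \texttt{numpy} transcription of the
    discrete transform defined earlier, with a synthetic {\chik} and a
    Hanning-style window (array names are not from any particular
    analysis code).
\begin{verbatim}
import numpy as np

dk, nfft, w = 0.05, 2048, 2       # k spacing (1/Ang), FFT size, k-weight
k = np.arange(nfft) * dk          # uniform k grid, zero-padded past data

# toy chi(k): one damped frequency, "data" ending at kmax = 15 1/Ang
chi = np.sin(2 * 2.5 * k) * np.exp(-0.01 * k**2)
chi[k > 15.0] = 0.0

# Hanning-style window, kmin = 2, kmax = 15, sill width dkw = 1
kmin, kmax, dkw = 2.0, 15.0, 1.0
ramp = np.minimum((k - (kmin - dkw)) / dkw, (kmax + dkw - k) / dkw)
win = np.sin(0.5 * np.pi * np.clip(ramp, 0.0, 1.0)) ** 2

# discrete FT with the e^{+2 pi i n m / N} convention used earlier
amp = 1j * dk * nfft / np.sqrt(np.pi * nfft)
chir = amp * np.fft.ifft(chi * k**w * win)
r = np.pi * np.arange(nfft // 2) / (nfft * dk)  # R grid, ~0.031 Ang steps
chir = chir[:nfft // 2]           # plot |chir| vs r as usual
\end{verbatim}
  \end{cenpage}
\end{frame}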
{ "alphanum_fraction": 0.6294176563, "avg_line_length": 24.3519163763, "ext": "tex", "hexsha": "d98aa5af1efd420607ad115be07a84b4e86ab36f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "newville/xafsfun", "max_forks_repo_path": "slides/fourier.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "newville/xafsfun", "max_issues_repo_path": "slides/fourier.tex", "max_line_length": 88, "max_stars_count": null, "max_stars_repo_head_hexsha": "525b0b8fb6ec61396dc7dd2950a3e2a3ab6c17d1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "newville/xafsfun", "max_stars_repo_path": "slides/fourier.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2510, "size": 6989 }
\documentclass[11pt,letter]{article} \usepackage{latexsym, color, graphicx, comment} \usepackage[top=1in,bottom=1in,left=1in,right=1in]{geometry} \usepackage{amssymb} \usepackage{amsmath} \usepackage{hyperref} \usepackage{empheq} \hypersetup{colorlinks=true,linkcolor=blue} \newcommand{\vect}[1]{\mbox{\boldmath $#1$}} \newcommand{\gyrophase}{\varphi} \newcommand{\energy}{\varepsilon} \renewcommand{\Re}{\mathrm{Re}} \renewcommand{\Im}{\mathrm{Im}} \newcommand{\todo}[1]{\textcolor{red}{#1}} %\newcommand{\kxfac}{\kappa_x} \newcommand{\kxfac}{\mathtt{kxfac}} \newcommand{\bmag}{\mathtt{bmag}} \newcommand{\smz}{\mathtt{smz}} \newcommand{\gdstwo}{\mathtt{gds2}} \newcommand{\gdstwoone}{\mathtt{gds21}} \newcommand{\gdstwotwo}{\mathtt{gds22}} \newcommand{\gbdrift}{\mathtt{gbdrift}} \newcommand{\gbdriftO}{\mathtt{gbdrift0}} \newcommand{\cvdrift}{\mathtt{cvdrift}} \newcommand{\cvdriftO}{\mathtt{cvdrift0}} \newcommand{\fprim}{\mathtt{fprim}} \newcommand{\tprim}{\mathtt{tprim}} \newcommand{\jtwist}{\mathtt{jtwist}} \newcommand{\gradpar}{\mathtt{gradpar}} \newcommand{\delthet}{\mathtt{delthet}} \newcommand{\codedt}{\mathtt{code\_dt}} \title{Definitions for GS2 full-flux-surface stellarator geometry} \author{Matt Landreman} \begin{document} \maketitle In this note, we state the definitions used for geometric quantities in the full-flux-surface version of GS2, and we detail how these quantities are computed from the information in a VMEC output file. We also derive how the twist-and-shift parallel boundary condition and box size quantization conditions, familiar from flux tube simulations, are modified in a full-surface calculation. \section{Review of GS2 geometry definitions} \subsection{Definitions valid for any GS2 geometry} First, we review the definitions used in the flux tube version of GS2, as discussed in the note {\ttfamily gs2\_geometry\_definitions.pdf}. The Clebsch representation of the magnetic field is \begin{equation} \vect{B} = \nabla\psi\times\nabla\alpha. \label{eq:Clebsch} \end{equation} We take $\psi$ to represent a flux surface label, but do not (yet) assume that it is necessarily the poloidal or toroidal flux. The $\alpha$ coordinate is a field line label on the flux surface. New coordinates $x=x(\psi)$ and $y=y(\alpha)$ are introduced that are scaled versions of $\psi$ and $\alpha$ with dimensions of length. In a flux tube simulation where $x$ and $y$ only vary on a length scale comparable to the gyroradius, then \begin{align} &x = \frac{dx}{d\psi} \left[ \psi - \psi_0 \right], \\ &y = \frac{dy}{d\alpha} \left[ \alpha - \alpha_0 \right] \nonumber, \end{align} where $dx/d\psi$ and $dy/d\alpha$ are constant within the flux tube, and $\psi_0$ and $\alpha_0$ are the coordinates around which the flux tube is centered. A reference magnetic field strength $B_{ref}$ and reference length $L_{ref}$ are introduced for normalization. Then the quantities needed to specify the geometry in GS2 are the following: \begin{align} \bmag & = \frac{B}{B_{ref}}, \label{eq:bmag} \\ \gradpar &= L_{ref} \nabla_{||} z, \\ \gdstwo &= |\nabla y |^2 = \left( \frac{dy}{d\alpha} \right)^2 |\nabla\alpha|^2 , \\ \gdstwoone &= \hat{s} \nabla x \cdot \nabla y = \hat{s} \frac{dx}{d\psi} \frac{dy}{d\alpha} \nabla\psi\cdot\nabla\alpha , \\ \gdstwotwo &= \hat{s}^2|\nabla x |^2 = \hat{s}^2 \left( \frac{dx}{d\psi} \right)^2 |\nabla\psi |^2. 
\\ \gbdrift &= \frac{2 B_{ref} L_{ref}}{B^3} \vect{B}\times\nabla B\cdot \nabla y, \\ \gbdriftO &= \hat{s} \frac{2 B_{ref} L_{ref}}{B^3} \vect{B}\times\nabla B\cdot \nabla x, \\ \cvdrift &= \frac{2 B_{ref} L_{ref}}{B^2} \vect{B}\times\vect{\kappa} \cdot \nabla y, \\ \cvdriftO &= \hat{s} \frac{2 B_{ref} L_{ref}}{B^2} \vect{B}\times\vect{\kappa} \cdot \nabla x, \label{eq:cvdrift0} \\ \kxfac & = B_{ref} \frac{dx}{d\psi} \frac{dy}{d\alpha}, \label{eq:kxfac} \\ \fprim &= B_{ref} L_{ref} \frac{dy}{d\alpha} \frac{dx}{d\psi} \frac{1}{n_s} \frac{d n_s}{d x}, \\ \tprim &= B_{ref} L_{ref} \frac{dy}{d\alpha} \frac{dx}{d\psi} \frac{1}{T_s} \frac{d T_s}{d x}. \label{eq:tprim} \end{align} These quantities are all dimensionless. In the definition of $\gradpar$, $z$ is any parallel coordinate; the parallel coordinate is called {\ttfamily theta} in GS2, although it need not be the poloidal angle. Also $\hat{s}$ is whichever number is used to define $\theta_0=${\ttfamily theta0} in the relation \begin{equation} \theta_0 = \frac{k_x}{\hat{s} k_y}. \end{equation} The above quantities (\ref{eq:bmag})-(\ref{eq:cvdrift0}) are functions of the parallel coordinate $z$. The quantity $\kxfac$ is a single number, and the quantities $\fprim$ and $\tprim$ are each single numbers for each particle species. Since $\vect{B}\times\vect{\kappa}\cdot\nabla\psi = \vect{b}\times\nabla B\cdot\nabla\psi$ for any static ideal MHD equilibrium (at any $\beta$), then $\cvdriftO=\gbdriftO$. \subsection{Definitions used previously for stellarator geometry in GS2} In stellarator calculations that have been performed using the standard flux tube version of GS2 and the GIST geometry interface, the following additional definitions have been made. The flux surface label $\psi$ has been taken to be the toroidal flux divided by $2\pi$. Therefore for consistency with (\ref{eq:Clebsch}), $\alpha = \theta - \iota \zeta$ where $\iota = 1/q$ is the rotational transform, $q$ is the safety factor, and $\theta$ and $\zeta$ are straight-field-line poloidal and toroidal angles. While the GIST GS2 interface takes $\theta$ and $\zeta$ to be Boozer angles, all the expressions below are equally valid for other straight-field-line angles such as PEST or Hamada coordinates. In GIST, the parallel coordinate $z$ is presently taken to be the Boozer poloidal angle. However none of the expressions in this note are altered if a different choice is desired. In GIST, the reference length $L_{ref}$ is taken to be the effective minor radius computed by VMEC, named Aminor\_p in the VMEC wout*.nc file. The reference magnetic field is taken to be \begin{equation} B_{ref} = \frac{2 \psi_{LCFS}}{L_{ref}^2} \label{eq:Bref} \end{equation} where $\psi_{LCFS}$ is the value of $\psi$ at the outermost VMEC flux surface. The choice (\ref{eq:Bref}) is motivated by the cylindrical limit: the toroidal flux enclosed by the outermost VMEC surface is equivalent to the flux of a field $B_{ref}$ through a circle of radius $L_{ref}$. The radial coordinate $x$ is then chosen to be \begin{equation} x = L_{ref} \sqrt{\frac{\psi}{\psi_{LCFS}}} = L_{ref} \sqrt{s}, \label{eq:gist_x} \end{equation} where $s = \psi / \psi_{LCFS} \in [0,1]$ is the flux surface label coordinate used in VMEC. The choice (\ref{eq:gist_x}) is natural since $x$ then reduces to the usual minor radius in the cylindrical limit. %the toroidal flux $2\pi\psi$ scales like the square of the radius in the cylindrical limit. 
It follows that \begin{equation} \frac{dx}{d\psi} = \frac{L_{ref}}{2 \sqrt{ \psi \psi_{LCFS}}} = \frac{1}{L_{ref} B_{ref}} \sqrt{ \frac{\psi_{LCFS}}{\psi}}. \label{eq:psi_x_conversion} \end{equation} Another choice made in GIST is $\kxfac=1$. According to (\ref{eq:kxfac}), we are then required to take \begin{equation} \frac{d y}{d\alpha} = L_{ref} \sqrt{\frac{\psi}{\psi_{LCFS}}} . \label{eq:dy_dalpha} \end{equation} GIST computes the global shear parameter $\hat{s}$ using \begin{equation} \hat{s} = \frac{x}{q} \frac{dq}{dx}. \end{equation} Now that $dx/d\psi$ and $dy/d\alpha$ are specified, equations (\ref{eq:bmag})-(\ref{eq:tprim}) can be evaluated to obtain explicit expressions for the geometry arrays computed by GIST for GS2: \begin{eqnarray} \bmag &=& B/B_{ref}, \label{eq:gist_Bref}\\ \gradpar &=& L_{ref} \nabla_{||} z, \\ \gdstwo &=& |\nabla y|^2 = |\nabla\alpha|^2 L_{ref}^2 \frac{\psi}{\psi_{LCFS}}, \\ \gdstwoone &=& \hat{s} \nabla x \cdot \nabla y = \frac{\hat{s}}{B_{ref}} \nabla\psi\cdot\nabla\alpha , \\ \gdstwotwo &=& \hat{s}^2|\nabla x |^2 = \left( \frac{\hat{s}}{L_{ref} B_{ref}}\right)^2 \frac{\psi_{LCFS}}{\psi} |\nabla\psi |^2, \\ \gbdrift &=& \frac{2 B_{ref} L_{ref}^2}{B^3} \sqrt{\frac{\psi}{\psi_{LCFS}}} \vect{B}\times\nabla B\cdot \nabla \alpha, \\ \gbdriftO &=& \hat{s} \frac{2 }{B^3} \sqrt{\frac{\psi_{LCFS}}{\psi}} \vect{B}\times\nabla B\cdot \nabla \psi, \\ \cvdrift &=& \frac{2 B_{ref} L_{ref}^2}{B^2} \sqrt{\frac{\psi}{\psi_{LCFS}}} \vect{B}\times\vect{\kappa} \cdot \nabla \alpha, \\ \cvdriftO &=& \hat{s} \frac{2 }{B^2} \sqrt{\frac{\psi_{LCFS}}{\psi}} \vect{B}\times\vect{\kappa} \cdot \nabla \psi, \label{eq:gist_cvdrift0} \\ \fprim &=& \frac{L_{ref}}{n_s} \frac{d n_s}{d x}, \\ \tprim &=& \frac{L_{ref}}{T_s} \frac{d T_s}{d x}. \end{eqnarray} \section{VMEC coordinates} The VMEC code uses a toroidal coordinate $\zeta$ which is the conventional azimuthal angle of cylindrical coordinates. We let $\theta_v$ denote the poloidal angle used in VMEC, which is \emph{not} a straight-field-line coordinate. We also let $\theta_p$ denote the PEST poloidal angle, i.e. the straight-field-line angle which results when the toroidal angle is chosen to be the conventional azimuthal angle of cylindrical coordinates, as in VMEC. The conversion between the two coordinates is \begin{equation} \theta_p = \theta_v + \Lambda, \end{equation} where $\Lambda$ is the quantity given by the {\ttfamily lmns} and {\ttfamily lmnc} arrays in VMEC. Thus, the field line label we need for GS2 geometry quantities is \begin{equation} \alpha = \theta_v + \Lambda - \iota \zeta. \label{eq:alpha_vmec} \end{equation} VMEC provides many quantities as functions of the coordinates $(s, \theta_v, \zeta)$, where again $s = \psi / \psi_{LCFS}$. Specifically, it provides the Fourier amplitudes for expansions in $\theta_v$ and $\zeta$, on grid points equally spaced in $s$. 
The quantities that are available include $\Lambda$, $B$, the cylindrical coordinates $(R,Z)$, the components \begin{align} B^\theta &= \vect{B}\cdot\nabla\theta_v, \nonumber \\ B^\zeta &= \vect{B}\cdot\nabla\zeta, \nonumber \\ B_s &= \vect{B}\cdot\frac{\partial\vect{r}}{\partial s}, \nonumber \\ B_\theta &= \vect{B}\cdot\frac{\partial\vect{r}}{\partial \theta_v}, \nonumber \\ B_\zeta &= \vect{B}\cdot\frac{\partial\vect{r}}{\partial \zeta}, \nonumber \end{align} and the Jacobian \begin{equation} \sqrt{g} = \frac{\partial \vect{r}}{\partial s} \cdot \frac{\partial \vect{r}}{\partial\theta_v} \times \frac{\partial\vect{r}}{\partial\zeta} = \frac{1}{\nabla s \cdot \nabla \theta_v \times\nabla\zeta}. \label{eq:Jacobian} \end{equation} Here $\vect{r}(s,\theta_v,\zeta)$ is the position vector. Throughout this note, we will use $\sqrt{g}$ to denote the Jacobian of the non-straight-field-line VMEC coordinates, as in (\ref{eq:Jacobian}). %\begin{align} %\vect{B} &= \nabla\psi \times \nabla\theta_p + \iota \nabla\zeta \times \nabla\psi \nonumber \\ %& = \nabla\psi \times \nabla\theta_v + \nabla\psi \times \nabla \Lambda + \iota \nabla\zeta \times \nabla\psi . %\end{align} \section{Computation of GS2 geometry quantities from VMEC data} Given the desired central flux surface $\psi_0$, and given a desired set of grid points in $\alpha$ and $\zeta$, a 1D nonlinear root-finding algorithm is applied to solve (\ref{eq:alpha_vmec}) for the value of $\theta_v$ at each grid point. The $\bmag$ array is then obtained by evaluating $B$ at the $(\theta_v, \zeta)$ grid points, using VMEC's Fourier arrays {\ttfamily bmnc} and {\ttfamily bmns}. For the full-flux-surface calculation, we will take the parallel coordinate $z$ to be the toroidal angle $\zeta$. Then to evaluate $\gradpar$, we use \begin{equation} \gradpar = L_{ref} \frac{\vect{B}\cdot\nabla\zeta}{B} = L_{ref} \frac{B^\zeta}{B}, \end{equation} %\begin{align} %\gradpar %&=\frac{\vect{B}\cdot\nabla\zeta}{B} %=\frac{\nabla\psi\times\nabla\alpha\cdot\nabla\zeta}{B} %=\frac{\nabla\psi\times\nabla(\theta_v +\Lambda - \iota\zeta)\cdot\nabla\zeta}{B} %=\frac{\nabla\psi\times\nabla(\theta_v +\Lambda)\cdot\nabla\zeta}{B} \nonumber \\ %&=\frac{1}{B \sqrt{g}} \left( 1 + \frac{\partial\Lambda}{\partial\theta_v}\right). %\end{align} where $B^\zeta$ is available in the VMEC output through the variables {\ttfamily bsupvmnc} and {\ttfamily bsupvmns}. To evaluate the quantities {\ttfamily gds*}, we must obtain the Cartesian components of $\nabla\psi$ and $\nabla \alpha$. To this end, we use the dual relations: \begin{align} \nabla s &= \frac{1}{\sqrt{g}} \frac{\partial\vect{r}}{\partial\theta_v} \times \frac{\partial\vect{r}}{\partial\zeta}, \label{eq:nabla_s}\\ \nabla\theta_v &= \frac{1}{\sqrt{g}} \frac{\partial\vect{r}}{\partial\zeta} \times \frac{\partial\vect{r}}{\partial s} \label{eq:nabla_theta}. \end{align} The right hand sides of these two expressions can be evaluated in terms of Cartesian components using the VMEC outputs {\ttfamily rmnc}, {\ttfamily rmns}, {\ttfamily zmnc}, and {\ttfamily zmns}, yielding Cartesian components for $\nabla s$ and $\nabla\theta_v$. Also the Cartesian components of $\nabla\zeta$ are known since $\zeta$ is the standard toroidal angle. 
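As a purely illustrative aside (this is not the actual geometry module), the following minimal Python/\texttt{numpy} sketch evaluates the quantities just described at a single $(s,\theta_v,\zeta)$ point for a stellarator-symmetric equilibrium. The Fourier arrays below describe a made-up circular-cross-section torus rather than real VMEC output, and the radial derivatives of the Fourier coefficients, which in practice would come from finite differences between neighbouring VMEC surfaces, are simply invented here.

\begin{verbatim}
# Illustrative sketch of the dual relations above: evaluate grad(s) and
# grad(theta_v) at one (s, theta_v, zeta) point.  All arrays are made up.
import numpy as np

xm   = np.array([0.0, 1.0])      # poloidal mode numbers m
xn   = np.array([0.0, 0.0])      # toroidal mode numbers (VMEC xn = n*nfp)
rmnc = np.array([10.0, 1.0])     # R ~ R0 + a*cos(theta_v)
zmns = np.array([0.0, 1.0])      # Z ~ a*sin(theta_v)
drmnc_ds = np.array([0.0, 0.5])  # d(rmnc)/ds, invented for illustration
dzmns_ds = np.array([0.0, 0.5])  # d(zmns)/ds, invented for illustration

theta, zeta = 0.7, 0.3
ang = xm*theta - xn*zeta
R,  Z  = rmnc @ np.cos(ang), zmns @ np.sin(ang)
Rt, Zt = -(rmnc*xm) @ np.sin(ang), (zmns*xm) @ np.cos(ang)   # d/dtheta_v
Rz, Zz = (rmnc*xn) @ np.sin(ang), -(zmns*xn) @ np.cos(ang)   # d/dzeta
Rs, Zs = drmnc_ds @ np.cos(ang), dzmns_ds @ np.sin(ang)      # d/ds

cosz, sinz = np.cos(zeta), np.sin(zeta)
dr_dth = np.array([Rt*cosz, Rt*sinz, Zt])       # Cartesian components
dr_dze = np.array([Rz*cosz - R*sinz, Rz*sinz + R*cosz, Zz])
dr_ds  = np.array([Rs*cosz, Rs*sinz, Zs])

sqrt_g   = dr_ds @ np.cross(dr_dth, dr_dze)     # Jacobian
grad_s   = np.cross(dr_dth, dr_dze) / sqrt_g    # first dual relation
grad_th  = np.cross(dr_dze, dr_ds) / sqrt_g     # second dual relation
grad_zet = np.array([-sinz, cosz, 0.0]) / R     # standard toroidal angle
\end{verbatim}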
We can then compute \begin{equation} \nabla\psi = \frac{d\psi}{ds} \nabla s = \psi_{LCFS} \frac{1}{\sqrt{g}} \frac{\partial\vect{r}}{\partial\theta_v} \times \frac{\partial\vect{r}}{\partial\zeta} \end{equation} and \begin{align} \nabla\alpha &= \nabla(\theta_v + \Lambda - \iota \zeta) \nonumber \\ &= \left(\frac{\partial\Lambda}{\partial s} - \zeta \frac{d\iota}{ds}\right)\nabla s + \left( 1 + \frac{\partial\Lambda}{\partial\theta_v}\right) \nabla\theta_v + \left( -\iota + \frac{\partial\Lambda}{\partial\zeta}\right) \nabla\zeta. \label{eq:grad_alpha} \end{align} \todo{Do we want to subtract {\ttfamily zeta\_center} from $\zeta$ here so the secular $\zeta$ term in $\nabla\alpha$ vanishes at the center of the domain? Or might we want it to vanish somehwhere other than the center of the domain?} Now that Cartesian components of $\nabla\psi$ and $\nabla \alpha$ are known, $\gdstwo$, $\gdstwoone$, and $\gdstwotwo$ can be computed. To evaluate $\gbdriftO = \cvdriftO$, we can use \begin{align} \vect{B}\times\nabla B\cdot\nabla\psi &= \vect{B}\times\nabla \zeta \cdot \nabla \psi \frac{\partial B}{\partial \zeta} + \vect{B}\times\nabla \theta_v \cdot \nabla \psi \frac{\partial B}{\partial \theta_v} \nonumber \\ &= \left( B_\theta \nabla\theta_v \times\nabla\zeta\cdot\nabla s \frac{\partial B}{\partial\zeta} + B_\zeta \nabla\zeta\times\nabla\theta_v\cdot\nabla s \frac{\partial B}{\partial\theta_v} \right) \frac{d\psi}{ds} \nonumber \\ &= \left( B_\theta \frac{\partial B}{\partial\zeta} - B_\zeta \frac{\partial B}{\partial\theta_v} \right) \frac{\psi_{LCFS} }{\sqrt{g} }. \end{align} To obtain this result we have used \begin{equation} \nabla B = \frac{\partial B}{\partial s} \nabla s + \frac{\partial B}{\partial\theta_v} \nabla\theta_v + \frac{\partial B}{\partial\zeta} \nabla\zeta \label{eq:gradB} \end{equation} and \begin{equation} \vect{B} = B_s \nabla s + B_\theta \nabla \theta_v + B_\zeta \nabla\zeta. \label{eq:Bsub} \end{equation} The quantity $B_\theta$ is available as the VMEC outputs {\ttfamily bsubumnc} and {\ttfamily bsubumns}, and the quantity $B_\zeta$ is available as the VMEC outputs {\ttfamily bsubvmnc} and {\ttfamily bsubvmns}. A couple of options are possible for computing $\gbdrift$. One method is to compute the Cartesian components of $\nabla B$ using (\ref{eq:gradB}) together with (\ref{eq:nabla_s})-(\ref{eq:nabla_theta}). Furthermore, the Cartesian components of $\vect{B}$ can be computed from \begin{align} \vect{B} &= \nabla\psi \times \nabla (\theta_v + \Lambda) + \iota \nabla\zeta\times\nabla\psi \nonumber \\ &= \frac{d\psi}{ds} \left[ \left( 1 + \frac{\partial\Lambda}{\partial\theta_v}\right) \nabla s \times\nabla\theta_v + \left(\iota - \frac{\partial\Lambda}{\partial\zeta}\right) \nabla\zeta\times\nabla s \right] \nonumber \\ &= \frac{\psi_{LCFS}}{\sqrt{g}} \left[ \left(1 + \frac{\partial\Lambda}{\partial\theta_v}\right) \frac{\partial\vect{r}}{\partial\zeta} + \left( \iota - \frac{\partial\Lambda}{\partial\zeta}\right) \frac{\partial\vect{r}}{\partial\theta_v}\right]. \end{align} Now that Cartesian components of $\vect{B}$, $\nabla B$, and $\nabla \alpha$ are all known, their cross product needed for $\gbdrift$ is straightforward. 
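Continuing the illustrative sketch above: once Cartesian arrays for $\vect{B}$, $\nabla B$, and $\nabla\alpha$ are in hand at a grid point, the normalized quantity is assembled directly from its definition. The function below uses hypothetical argument names and is not part of any actual interface.

\begin{verbatim}
def gbdrift(B_vec, grad_B, grad_alpha, B, B_ref, L_ref, s):
    # gbdrift = (2 B_ref L_ref^2 / B^3) sqrt(psi/psi_LCFS)
    #           * B x grad(B) . grad(alpha),  with s = psi/psi_LCFS.
    return (2.0 * B_ref * L_ref**2 * np.sqrt(s) / B**3) \
           * np.dot(np.cross(B_vec, grad_B), grad_alpha)
\end{verbatim}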
Alternatively, $\gbdrift$ can be computed by substituting (\ref{eq:gradB})-(\ref{eq:Bsub}) and (\ref{eq:grad_alpha}) into $\vect{B}\times\nabla B \cdot\nabla\alpha$, yielding \begin{align} \vect{B}\times\nabla B \cdot\nabla\alpha = \frac{1}{\sqrt{g}} &\left[ B_s \frac{\partial B}{\partial \theta_v} \left(\frac{\partial \Lambda}{\partial\zeta} - \iota \right) + B_\theta \frac{\partial B}{\partial\zeta} \left( \frac{\partial\Lambda}{\partial s} - \zeta \frac{d\iota}{ds}\right) + B_\zeta \frac{\partial B}{\partial s} \left( 1 + \frac{\partial \Lambda}{\partial \theta_v}\right) \right. \nonumber \\ &\left.-B_\zeta \frac{\partial B}{\partial\theta_v} \left( \frac{\partial\Lambda}{\partial s} - \zeta \frac{d\iota}{ds}\right) -B_\theta \frac{\partial B}{\partial s} \left( \frac{\partial \Lambda}{\partial\zeta}-\iota\right) -B_s \frac{\partial B}{\partial\zeta} \left( 1 + \frac{\partial\Lambda}{\partial\theta_v}\right) \right]. \end{align} The last quantity we need to evaluate is $\cvdrift$. Using \begin{align} \vect{B}\times\vect{\kappa} &= \vect{B}\times(\vect{b}\cdot\nabla \vect{b}) = \vect{B}\times \left[ (\nabla\times\vect{b}) \times\vect{b}\right] \nonumber \\ &= \vect{B} \times \left[ -\frac{1}{B^2} \left( \nabla B \times\vect{B}\right)\times\vect{b} + \frac{\mu_0}{B^2} \vect{j}\times\vect{B}\right] \nonumber \\ &= \frac{1}{B} \vect{B}\times\nabla B + \frac{\mu_0}{B^2} \frac{dp}{ds} \vect{B}\times\nabla s, \end{align} (where we have used the MHD equilibrium equation $\vect{j}\times\vect{B} = \nabla p$), we find \begin{equation} \cvdrift = \gbdrift + \frac{2 B_{ref} L_{ref}^2}{B^2} \sqrt{\frac{\psi}{\psi_{LCFS}}} \frac{\mu_0}{B^2} \frac{dp}{ds} \vect{B}\times\nabla s \cdot \nabla \alpha. \end{equation} In the last term, $\vect{B}\times\nabla s \cdot \nabla \alpha$ can either be evaluated using the Cartesian components of $\vect{B}$, $\nabla s$, and $\nabla \alpha$, the calculation of which has already been described, or by combining (\ref{eq:grad_alpha}) and (\ref{eq:Bsub}) to obtain \begin{equation} \vect{B}\times\nabla s \cdot \nabla \alpha = \frac{1}{\sqrt{g}} \left[ B_\zeta \left( 1 + \frac{\partial\Lambda}{\partial\theta_v}\right) -B_\theta \left( \frac{\partial\Lambda}{\partial\zeta} - \iota \right) \right]. \end{equation} \section{Parallel boundary condition and wavenumber quantization for a full surface calculation} Let us now consider a full-flux-surface calculations, using field-aligned coordinates. We continue to use $\psi$ and $\alpha$ as perpendicular coordinates, and $\zeta$ as the third (parallel) coordinate. In a full-flux-surface calculation, the range of $\alpha$ is $[0,\; 2\pi)$, which can be seen from the fact at fixed $\psi$ and $\zeta$, the $2\pi$-periodicity of quantities in $\theta_p$ implies $2\pi$-periodicity in $\alpha$. Just as in a flux tube code, fluctuating quantities such as the electrostatic potential $\phi$ are represented as \begin{equation} \phi(\psi,\alpha,\zeta) =\sum_{k_\psi, k_\alpha} \bar{\phi}_{k_\psi,k_\alpha}(\zeta) \exp\left( i k_\alpha \alpha + i k_\psi \left[ \psi - \psi_0\right] \right). \label{eq:fluctuations} \end{equation} Again, $\psi_0$ indicates the flux surface about which the numerical domain is centered. The wavenumbers with respect to $(\psi,\alpha)$ are related to wavenumbers with respect to $(x,y)$ through \begin{align} \label{eq:k_conversion} k_x &= k_\psi \frac{d\psi}{dx}, \\ k_y &= k_\alpha \frac{d\alpha}{dy} \nonumber. 
\end{align} Note that $k_\alpha$ ranges over the integers, due to the $2\pi$-periodicity in $\alpha$ discussed above. The choice of $y$ (\ref{eq:dy_dalpha}) then implies the wavenumber grid in $y$ must be \begin{equation} k_y \rho_{ref} = \frac{\rho_{ref}}{L_{ref} \sqrt{s(\psi_0)}} \times (\mathrm{integers}). \end{equation} We take fluctuating quantities to be periodic in $\psi$, just as in a flux tube code. The allowed values of $k_\psi$ and associated `box size' in $\psi$ will be derived below. We take fluctuating quantities to be periodic in $\zeta$ with period $2 \pi P$ at fixed $\psi$ and $\theta_p$. Here, $P$ is a rational number, typically the inverse of the number of field periods (e.g. 5 for W7-X). One could also choose $P=1$ to simulate the entire toroidal domain, or choose $P=$ integer / (number of field periods) for an intermediate domain size. When $P=1$, this periodicity condition is the true periodicity of the torus. When $P=$ integer / (number of field periods), the periodicity imposed in the code is effectively a statement of statistical periodicity of the turbulence at geometrically equivalent points in the domain. To see the implications of imposing periodicity in $\zeta$ at fixed $\theta_p$, we substitute $\alpha = \theta_p - \iota \zeta$ and the Taylor expansion \begin{equation} \iota \approx \iota(\psi_0) + \frac{d\iota}{d\psi} \left[ \psi - \psi_0\right] \label{eq:Taylor_iota} \end{equation} into (\ref{eq:fluctuations}). (We take $d\iota/d\psi$ to be evaluated at $\psi_0$, and hence constant over the domain.) The result is \begin{equation} \phi =\sum_{k_\psi, k_\alpha} \bar{\phi}_{k_\psi,k_\alpha}(\zeta) \exp \left( i k_\alpha \theta_p -i k_\alpha \iota(\psi_0) \zeta +i \left[- k_\alpha \frac{d\iota}{d\psi} \zeta + k_\psi\right] \left[ \psi - \psi_0\right] \right). \end{equation} When a particular $(k_\psi,k_\alpha)$ Fourier mode gets to the end of the $\zeta$ domain ($\zeta = 2\pi P$), we want the mode to connect exactly to another Fourier mode in the simulation $(k'_\psi,k'_\alpha)$, with the latter evaluated at $\zeta=0$. Mathematically, this condition is \begin{align} &\bar{\phi}_{k_\psi,k_\alpha}(2\pi P) \exp \left( i k_\alpha \theta_p -i k_\alpha \iota(\psi_0) 2\pi P +i \left[- k_\alpha \frac{d\iota}{d\psi} 2 \pi P + k_\psi\right] \left[ \psi - \psi_0\right] \right) \nonumber \\ &= \bar{\phi}_{k'_\psi,k'_\alpha}(0) \exp \left( i k'_\alpha \theta_p +i k'_\psi\left[ \psi - \psi_0\right] \right) . \label{eq:parallelBC} \end{align} For this equation to hold for all $\theta_p$ at fixed $\psi$, we must have $k'_\alpha = k_\alpha$. For equality to hold for all $\psi$ at fixed $\theta_p$, we must have \begin{equation} k'_\psi = k_\psi - k_\alpha \frac{d\iota}{d\psi} 2 \pi P. \label{eq:twist_and_shift} \end{equation} This last result is analogous to the twist-and-shift condition used in flux-tube GS2. If $k'_\psi$ is to be included in the wavenumber grid, and assuming the $k_\psi$ grid consists of integer multiples of some $k_{\psi,min}$, then the difference $k'_\psi - k_\psi$ should be an integer multiple of $k_{\psi,min}$. In particular this must be true for the smallest nonzero $k_\alpha$, which is 1. Thus, we conclude \begin{equation} \frac{d\iota}{d\psi} 2\pi P = (\jtwist) k_{\psi,min} \end{equation} where $\jtwist$ is an integer. It follows that the `box size' in $\psi$, denoted $L_\psi$, is \begin{equation} L_\psi = \frac{2\pi}{k_{\psi,min}} = \frac{(\jtwist)}{P} \left( \frac{d\iota}{d\psi} \right)^{-1}. 
\end{equation} Using (\ref{eq:k_conversion}) and (\ref{eq:psi_x_conversion}), the equivalent box size in $x$ is \begin{equation} L_x = \frac{dx}{d\psi} L_\psi = \frac{2\pi}{k_{x,min}} = \frac{(\jtwist)}{ P L_{ref} B_{ref} \sqrt{s}} \left( \frac{d\iota}{d\psi} \right)^{-1}. \label{eq:Lx} \end{equation} Unfortunately, in low-shear stellarators like W7-X and HSX, $d \iota/d\psi$ in (\ref{eq:Lx}) can be quite small, meaning $L_x$ must be quite large. Equation (\ref{eq:parallelBC}) also indicates there should be a phase shift when two Fourier modes are connected: \begin{equation} \bar{\phi}_{k_\psi,k_\alpha}(2\pi P) \exp \left( -i k_\alpha \iota(\psi_0) 2\pi P \right) = \bar{\phi}_{k'_\psi,k_\alpha}(0). \label{eq:phase} \end{equation} \subsection{Continuity of geometric quantities} %It is interesting to check whether We can verify that various terms in the gyrokinetic equation are continuous % when a Fourier mode is followed across the parallel boundary condition. First, let us consider the magnetic drift term: \begin{align} \vect{v}_{m}\cdot \nabla h &= \vect{v}_{m} \cdot \nabla \left[ \sum_{k_\psi,k_\alpha} \bar{h}_{k_\psi,k_\alpha}(\zeta) \exp \left( i k_\alpha \alpha + i k_\psi [\psi - \psi_0] \right) \right] \nonumber \\ &\approx \sum_{k_\psi,k_\alpha} \bar{h}_{k_\psi,k_\alpha}(\zeta) \exp \left( i k_\alpha \alpha + i k_\psi [\psi - \psi_0] \right) \left[ i k_\alpha \vect{v}_{m}\cdot \nabla \alpha + i k_\psi \vect{v}_{m}\cdot\nabla\psi \right] \nonumber \\ & = \sum_{k_\psi,k_\alpha} \bar{h}_{k_\psi,k_\alpha}(\zeta) \exp \left( i k_\alpha [\theta_p - \iota \zeta] + i k_\psi [\psi - \psi_0] \right) \nonumber \\ & \hspace{1in}\times \left[ i k_\alpha \vect{v}_{m}\cdot \left(\nabla \theta_p - \iota \nabla\zeta - \zeta \frac{d\iota}{d\psi} \nabla\psi \right) + i k_\psi \vect{v}_{m}\cdot\nabla\psi \right], \label{eq:grad_B_drift_term} \end{align} where $h$ is the nonadiabatic distribution function. (The $\approx$ above comes from dropping the slow $\zeta$ dependence of $\bar{h}_{k_\psi,\zeta}$ in the gradient.) As $\zeta$ is increased, just before the boundary $\zeta = 2\pi P$ we have \begin{align} \left( \vect{v}_{m}\cdot \nabla h \right)_- & = \sum_{k_\psi,k_\alpha} \bar{h}_{k_\psi,k_\alpha}(2\pi P) \exp \left( i k_\alpha [\theta_p - \iota 2\pi P] + i k_\psi [\psi - \psi_0] \right) \nonumber \\ & \hspace{1in}\times \left[ i k_\alpha \vect{v}_{m}\cdot \left(\nabla \theta_p - \iota \nabla\zeta - 2\pi P \frac{d\iota}{d\psi} \nabla\psi \right) + i k_\psi \vect{v}_{m}\cdot\nabla\psi \right]. \label{eq:v_B_term_minus} \end{align} The subscript on the left hand side indicates that we have evaluated the result just to the left of the boundary. On the other side of the boundary ($\zeta = 0$), (\ref{eq:grad_B_drift_term}) evaluates to \begin{equation} \left(\vect{v}_{m}\cdot \nabla h \right)_+ = \sum_{k_\psi,k_\alpha} \bar{h}_{k'_\psi,k_\alpha}(0) \exp \left( i k_\alpha \theta_p + i k'_\psi [\psi - \psi_0] \right) \left[ i k_\alpha \vect{v}_{m}\cdot \left(\nabla \theta_p - \iota \nabla\zeta \right) + i k'_\psi \vect{v}_{m}\cdot\nabla\psi \right]. \end{equation} In this last equation, we were free to put primes on $k_\psi$ because it is summed over. 
Applying (\ref{eq:twist_and_shift}) and (\ref{eq:phase}), \begin{align} \left(\vect{v}_{m}\cdot \nabla h \right)_+ =& \sum_{k_\psi,k_\alpha} \bar{h}_{k_\psi,k_\alpha}(2\pi P) \exp \left( i k_\alpha \theta_p -i k_\alpha \iota(\psi_0) 2\pi P+ i \left[ k_\psi - k_\alpha \frac{d\iota}{d\psi} 2 \pi P\right] [\psi - \psi_0] \right) \nonumber \\ & \hspace{1in} \times \left[ i k_\alpha \vect{v}_{m}\cdot \left(\nabla \theta_p - \iota \nabla\zeta \right) + i \left[ k_\psi - k_\alpha \frac{d\iota}{d\psi} 2 \pi P\right] \vect{v}_{m}\cdot\nabla\psi \right]. \label{eq:v_B_term_plus} \end{align} Using (\ref{eq:Taylor_iota}), it can be seen that (\ref{eq:v_B_term_minus}) is identical to (\ref{eq:v_B_term_plus}), so the magnetic drift term is indeed continuous across the boundary. \todo{Also show continuity for $\langle \phi \rangle_R$...} \begin{comment} perpendicular wavenumber appearing inside the Bessel functions: \begin{equation} k_{\perp}^2 = k_\alpha^2 |\nabla \alpha|^2 + 2 k_\alpha k_\psi \nabla\alpha\times\nabla\psi + k_\psi^2 |\nabla\psi|^2. \label{eq:kperp2} \end{equation} Using \begin{equation} \nabla\alpha = \nabla\theta_p - \iota \nabla\zeta - \zeta \frac{d\iota}{d\psi} \nabla\psi, \end{equation} we find that (\ref{eq:kperp2}) \end{comment} \end{document}
{ "alphanum_fraction": 0.7003576202, "avg_line_length": 53.4420849421, "ext": "tex", "hexsha": "c5d5e0af62cd5fc6bbf4df5c27e19b5b8c5f9074", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2022-03-09T09:23:42.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-05T15:35:55.000Z", "max_forks_repo_head_hexsha": "104556a07b9736e7c28e6f1bf2f799384732f38b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SStroteich/stella-1", "max_forks_repo_path": "src/vmec_interface/doc/gs2_full_surface_stellarator_geometry.tex", "max_issues_count": 37, "max_issues_repo_head_hexsha": "104556a07b9736e7c28e6f1bf2f799384732f38b", "max_issues_repo_issues_event_max_datetime": "2022-03-21T15:58:05.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-05T16:41:33.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SStroteich/stella-1", "max_issues_repo_path": "src/vmec_interface/doc/gs2_full_surface_stellarator_geometry.tex", "max_line_length": 234, "max_stars_count": 4, "max_stars_repo_head_hexsha": "104556a07b9736e7c28e6f1bf2f799384732f38b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "SStroteich/stella-1", "max_stars_repo_path": "src/vmec_interface/doc/gs2_full_surface_stellarator_geometry.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-18T15:14:42.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-15T08:23:45.000Z", "num_tokens": 9746, "size": 27683 }
\chapter{Conclusions}

\section*{General considerations}
\addcontentsline{toc}{section}{General considerations}

\noindent To sum up the results of our study project we must start from the motivations which have driven it and then follow the logical steps:

\begin{itemize}
\item The project grew from the fact that helicopter crew and passenger comfort has recently gained increased emphasis, and so vibration requirements have become more and more stringent.

\item Consequently, treating the helicopter vibration problem only after the manufacturing phase, by adding suppression devices (absorbers) to compensate for inadequate vibration prediction capability, no longer makes sense and is not cost-efficient; \\
\underline{The new challenge is to design helicopters with an intrinsically low vibration level}.

\item To reach this goal, a \textbf{precise FEM model of the helicopter has to be implemented} so that it can provide broad insight into its characteristic dynamical behaviour from the very beginning. Precise modelling of the basic frame or basic structure, including the material properties and the cross sections, is needed because it contributes to the \underline{helicopter stiffness characteristics}, which must be modelled accurately in order to investigate vibration problems.

\item Structural parts of the fuselage can be easily and accurately modelled with the most common Ansys structural elements; in fact, dynamic analyses yield satisfactory results with simplified models. \\
It is therefore not advisable to build solid models or surfaces with many elements to represent the basic frame parts of the airframe. \\
\underline{Comparison with literature results showed that simple} \textbf{truss, beam, and shell element} \underline{models accurately predict the real system's behaviour}. In our case, in both models, longerons, cross members, stringers, stiffeners, bulkheads and outer skin have all been modelled with these simple types of elements.

\item Then, \textbf{it is essential to add the secondary structural components} that have a critical role in the vibration characteristics of the model. So, when considering the whole model of the helicopter, the fuselage skin, the cabin floor, the windscreen glass and the doors, with their assigned properties, must be considered and added to the model. Furthermore, \textbf{the non-structural components also} have to be considered (for example the tailrotor head, gearbox, aerodynamical stabilizers and, eventually, for the full model, also engines, fuel tank, landing skid and swash plate) and added to certain points on the model as lumped masses. \\
In fact, as can be appreciated in the results table, adding those components to the basic structure has a dramatic effect on the results: the structure's resonant frequencies are considerably lowered.

\item Unfortunately, \underline{the complete helicopter vibration problem is still really difficult to predict} \underline{accurately}, because of its structural, as well as aerodynamic, complexity. \\
To make the problem more tractable, \emph{simplifying assumptions} must be imposed at the beginning. In our case, \textbf{we neglected aerodynamic loads}, which typically introduce non-linearities into the problem. However, in future studies, to extend our analysis, those loads must be considered.

\item Nowadays, rotordynamics tools are not advanced enough to treat problems with asymmetric structures, both because they have only recently been introduced in ANSYS and, above all, because of their computational cost.
As we have seen before, the coupled rotor-fuselage analysis suffers from the geometrical asymmetry of the problem. To partially overcome this issue, one possibility is to simplify the rotor as a concentrated mass and inertia with torsional springs, in order to simulate the gyroscopic effects. \\
Results obtained from the analysis of the rotor alone (without coupling) are not reliable enough to give an overview of the tailboom dynamic behaviour. In this project, considerations about the rotor were made purely as a matter of investigation.\\
The dynamic studies of the tailboom-rotor coupling should therefore be developed further in the future using, for example, software capable of treating multibody dynamic simulations.
\end{itemize}

\section*{Review of the main results}
\addcontentsline{toc}{section}{Review of the main results}

\noindent Once each model was defined, a free vibration analysis was carried out to extract the natural frequencies of the basic frames up to the 20th mode, which is located around 61.6 Hz for the LAMA SA-315b helicopter model and 94.2 Hz for the Ecureuil AS-350.\\

\noindent The computed natural frequencies of the full structures reasonably match the literature (especially for the lower modes), giving confidence in our simple models. However, in order for our models to become a comprehensive design tool for analysing the real behaviour of the two helicopters considered, it \textbf{is essential to add the non-linear contribution of aerodynamic loads and the rotor-fuselage dynamic coupling effects}.

%\clearpage
%
\begin{table}[h]
\centering
\pgfplotstableset{
column type=l,
every head row/.style={
before row={
\toprule
& \multicolumn{2}{c}{TRUSS} & \multicolumn{2}{c}{MONOCOQUE}\\
& \multicolumn{1}{c}{simple} & \multicolumn{1}{c}{complete} & \multicolumn{1}{c}{simple} &\multicolumn{1}{c}{complete}\\
},
after row=\midrule,
},
every last row/.style={after row=\bottomrule},
% global config, for example in the preamble
% these columns/<colname>/.style={<options>} things define a style
% which applies to <colname> only.
%every head row/.style={before row=\hline, after row=\hline},
%every last row/.style={after row=\hline},
display columns/0/.style={column name =Mode, int detect,column type=r},
display columns/1/.style={column name =Frequency [Hz], column type=r, fixed,fixed zerofill,precision=5,set thousands separator={\,}},
display columns/2/.style={column name =Frequency [Hz], column type=r,fixed,fixed zerofill,precision=5,set thousands separator={\,}},
display columns/3/.style={column name =Frequency [Hz], column type=r,fixed,fixed zerofill,precision=5,set thousands separator={\,}},
display columns/4/.style={column name =Frequency [Hz], column type=r,fixed,fixed zerofill,precision=5,set thousands separator={\,}},
%other style option
}
%TRUSS MODEL - RESULT
\pgfplotstableread{ModalFreq-Helicopter_tail.txt}{\dataA}
\pgfplotstableread{ModalFreq-TrussTailLumped.txt}{\dataB}
%SHELL MODEL - RESULT
\pgfplotstableread{ModalFreq-Shellmodel.txt}{\dataC}
\pgfplotstableread{ModalFreq-ShellmodelShaftLumped.txt}{\dataD}
% concatenate table
\pgfplotstablecreatecol[copy column from table={\dataB}{[index] 1}] {par2} {\dataA}
\pgfplotstablecreatecol[copy column from table={\dataC}{[index] 1}] {par3} {\dataA}
\pgfplotstablecreatecol[copy column from table={\dataD}{[index] 1}] {par4} {\dataA}
%generate full table
\pgfplotstabletypeset{\dataA}
\caption{Summary of all computed natural frequencies}
\label{tab:ResultRecap}
\end{table}
%
\smallskip

\noindent FURTHER IMPORTANT NOTES:
\noindent
\begin{itemize}
\item Since the exciting frequency of the tail rotor head is about 33.35 Hz (2001 rpm), the first harmonic of the tail rotor head (for the three-bladed tail rotor) will be 66.70 Hz. So, \underline{the higher modes (from number 10 on) will be critical modes} and have to be considered carefully in the design stage in order to prevent resonance with the tail rotor excitation forces.
\item \noindent The tail rotor speed obviously has an influence on the tailboom natural frequencies; in fact, they are expected to increase with the rotational velocity.\\
\end{itemize}

\medskip

\section*{Giving confidence to the FEM model by testing}
\addcontentsline{toc}{section}{Giving confidence to the FEM model by testing}

\noindent As previously noted, the FE modelling technique has found wide application in the investigation of helicopter vibration problems and has proved to be a very useful and powerful tool. However, in order to be applicable for real purposes, an FE model should always be validated against experimental modal analysis results. \\

\noindent The literature research also led to the conclusion that helicopter vibration problems can be solved effectively with analytical and FEM models if their accuracy is improved by updating them to the experimental results. \\

\noindent Hence, the models can be experimentally verified by \textbf{ground} or \textbf{in-flight tests}.\\

\noindent With ground tests, the effects of the rotating systems (e.g. rotors, engines, \dots) and of the aerodynamic environment on the fuselage dynamic behaviour \underline{cannot be investigated}. This is the reason why \textbf{it is preferable to perform in-flight tests} for helicopter structural dynamic investigation, despite their higher cost and greater experimental complexity. A ground modal test provides only an approximation of the structural dynamics description for the in-flight conditions, and in-flight modal testing is always considered superior.

%
{ "alphanum_fraction": 0.7850092201, "avg_line_length": 76.825, "ext": "tex", "hexsha": "b42f7346e3babb33d86d0ecb104767615fb1a3e7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "48c3bbc21f16c18537925db985f91c30aa87a8aa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "frank1789/FEM-Analysis---Helicopter-s-Tail", "max_forks_repo_path": "Report/CONCLUSIONS.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "48c3bbc21f16c18537925db985f91c30aa87a8aa", "max_issues_repo_issues_event_max_datetime": "2017-07-06T08:06:09.000Z", "max_issues_repo_issues_event_min_datetime": "2017-07-06T08:06:09.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "frank1789/FEM-Analysis---Helicopter-s-Tail", "max_issues_repo_path": "Report/CONCLUSIONS.tex", "max_line_length": 483, "max_stars_count": 1, "max_stars_repo_head_hexsha": "48c3bbc21f16c18537925db985f91c30aa87a8aa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "frank1789/FEM-Analysis---Helicopter-s-Tail", "max_stars_repo_path": "Report/CONCLUSIONS.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-02T12:50:01.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-02T12:50:01.000Z", "num_tokens": 2134, "size": 9219 }
\documentclass [12pt, executivepaper]{article} \usepackage{mathtools} \usepackage{outlines} \usepackage{amsfonts} \usepackage{booktabs} \usepackage{graphicx} \everymath{\displaystyle} \usepackage{hyperref} \usepackage{graphics} \usepackage{commath} \usepackage{adjustbox} \usepackage{listings} \begin{document} \title {Technical Interview Prep Guide} \author{Brendan Busey\\Wake Forest University\\ [email protected]} \maketitle \pagebreak \vspace*{-40mm} \section*{Complexity Analysis} \begin{enumerate} \item Big Oh Definitions: \begin{enumerate} \item $f(n)=\mathcal{O}(g(n))$ means $c \cdot g(n)$ is an upper bound on $f(n)$. Thus, there exists some constant $c$ such that $f(n)$ is always $\leq$ $c \cdot g(n)$, for large enough $n$ \item $f(n)=\Omega(g(n))$ means $c \cdot g(n)$ is a lower bound on $f(n)$. Thus, there exists some constant $c$ such that $f(n)$ is always $\geq$ $c \cdot g(n)$, for all $n$ $\geq$ $n_{0}$ \item $f(n)=\Theta(g(n))$ means $c_{1} \cdot g(n)$ is an upper bound on $f(n)$ and $c_{2} \cdot g(n)$ is a lower bound on $f(n)$, for all $n$ $\geq$ $n_{0}$. Thus, there exists constants $c_{1}$ and $c_{2}$ such that $f(n) \leq c_{1} \cdot g(n)$ and $f(n) \geq c_{2} \cdot g(n)$. This means that $g(n)$ provides a nice, tight bound on $f(n)$. \end{enumerate} \item Tips on analysis: \begin{enumerate} \item If your algorithm is in the form ``do this, then, when you're all done, do that," then you add the runtimes \item If your algorithm is in the form ``do this for each time you do that," then you multiply the runtimes \item Any algorithm that repeatedly doubles or halves, you should be thinking logarithmic complexity aka $\mathcal{O}(\log{} n)$ \item Amortized time allows us to describe that, yes, this worst case happens every once in a while. But, once this happens, it won't happen again for so long that the cost is ``amortized" \item When you have a recursive function that makes multiple calls, the runtime will often (but not always) look like $\mathcal{O}(branches^{depth})$, where branches is the number of times each recursive call branches aka how many times the recursive function is called \item Memoization is a very common technique used to optimize exponential time recursive algorithms \end{enumerate} \end{enumerate} \pagebreak \vspace*{-40mm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF SORTING SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Sorting} %%%%%%%%%%START OF QUICKSORT%%%%%%%% \begin{enumerate} \item QuickSort \begin{enumerate} \item Complexity: $\mathcal{O}(n \log{} n)$ \item How it works: \begin{enumerate} \item Choose the pivot value: we take the value of the middle element as the pivot value, but it can be any value, which is in range of sorted values, even if it doesn't appear in the array. \item Partition: Rearrange elements in a such a way that all elements which are less than the pivot go the left part of the array and all elements greater than the pivot go to the right part of the array. Values equal to pivot can stay in any part of the array. Note, that the array may be divided into non-equal parts. \item Sort both parts: apply the Quicksort algorithm recursively to the left and right parts. 
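\smallskip
To make the three steps above concrete, here is a minimal stand-alone Python sketch (purely illustrative; it is \emph{not} the screenshot implementation included further below):
\begin{lstlisting}
# Minimal sketch of quicksort: middle-element pivot, in-place
# partition, then recurse on both parts.
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[(lo + hi) // 2]      # 1. choose the pivot value
    i, j = lo, hi
    while i <= j:                  # 2. partition around the pivot
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    quicksort(a, lo, j)            # 3. sort both parts recursively
    quicksort(a, i, hi)

data = [5, 2, 9, 1, 5, 6]
quicksort(data)                    # data -> [1, 2, 5, 5, 6, 9]
\end{lstlisting}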
\end{enumerate} \item Advantage(s): \begin{enumerate} \item Fast and efficient for large data sets \item Can carry out sequential traversal through array elements which results in good locality of reference and cache behaviour for arrays \end{enumerate} \item Disadvantage(s): \begin{enumerate} \item Not efficient if the elements are already sorted and if each element in the array is equal (gives worst case time complexity of $\mathcal{O}(n^2)$) \item Might be space expensive for large data sets due to the fact that it uses $\mathcal{O}(\log{} n)$ auxiliary space for recursive function calls \end{enumerate} \pagebreak \vspace*{-40mm} \item Implementation \includegraphics[scale=0.5]{QuickSort} \end{enumerate} %%%%%%%%%END OF QUICKSORT%%%%%% %%%%%%%%%START OF MERGE SORT%%%% \item Merge Sort \begin{enumerate} \item Complexity: $\mathcal{O}(n \log{} n)$ \item How it works: \begin{enumerate} \item We partition the elements into two groups, sorting each of the smaller problems recursively, and then interleaving the two sorted lists to totally order the elements \end{enumerate} \item Advantage(s): \begin{enumerate} \item It is a good choice when the data is stored in a linked list, because merging does not require random access to the list elements \end{enumerate} \item Disadvantage(s): \begin{enumerate} \item Requires roughly twice the memory of an in-place sorting algorithm and likewise is not recommended for smaller arrays, for the reason that it works recursively and it requires $\mathcal{O}(n)$ auxiliary space for merging \item It is difficult to implement the merge operation in place \end{enumerate} \pagebreak \vspace*{-40mm} \item Implementation \includegraphics[scale=0.5]{MergeSortPart1} \vspace{3mm} \includegraphics[scale=0.5]{MergeSortPart2} \end{enumerate} %%%%%%%%%END OF MERGE SORT%%%%% \pagebreak \vspace*{-40mm} %%%%%%%%%START OF HEAP SORT%%%%% \item Heap Sort \begin{enumerate} \item Complexity: $\mathcal{O}(n \log{} n)$ \item How it works: \begin{enumerate} \item It is similar to selection sort where we first find the maximum element, exchange it with the last element, and then re-make the heap. We then repeat the process for the remaining elements. \end{enumerate} \item Advantage(s): \begin{enumerate} \item Used often for large data sets because it does not work recursively all the time \end{enumerate} \item Disadvantage(s): \begin{enumerate} \item Works slower than other sorting methods with the same computational complexity \item Not efficient for parallelization \item Not recommended for sorting data that is stored in a linked list because it is difficult to convert a linked list into a heap-like structure \end{enumerate} \end{enumerate} %%%%%%%%%END OF HEAP SORT%%%%%% \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF SORTING SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagebreak \vspace*{-40mm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF HASH TABLE SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Hash Table(s)} \begin{enumerate} \item How do they work? \begin{enumerate} \item Let's assume you want to fill up a library of books and not just stuff them in there, but you want to be able to easily find them again when you need them.\\ So, you decide that if the person that wants to read a book knows the title of the book and the exact title to boot, then that's all it should take. With the title, the person, with the aid of the librarian, should be able to find the book easily and quickly. \\ So, how can you do that? 
Well, obviously you can keep some kind of list of where you put each book, but then you have the same problem as searching the library: you need to search the list. Granted, the list would be smaller and easier to search, but still you don't want to search sequentially from one end of the library (or list) to the other.\\ You want something that, with the title of the book, can give you the right spot at once, so all you have to do is just stroll over to the right shelf, and pick up the book.\\ But how can that be done? Well, with a bit of forethought when you set up the library and a lot of work as you fill it up.\\ Instead of just starting to fill up the library from one end to the other, you devise a clever little method. You take the title of the book, run it through a small computer program, which spits out a shelf number and a slot number on that shelf. This is where you place the book.\\ \pagebreak \vspace*{-40mm} The beauty of this program is that later on, when a person comes back in to read the book, you feed the title through the program once more, and get back the same shelf number and slot number that you were originally given, and this is where the book is located.\\ The program is called a hash algorithm or hash computation and usually works by taking the data fed into it (the title of the book in this case) and calculating a number from it.\\ For simplicity, let's say that it just converts each letter and symbol into a number and sums them all up. In reality, it's a lot more complicated than that, but let's leave it at that for now.\\ The beauty of such an algorithm is that if you feed the same input into it again and again, it will keep spitting out the same number each time.\\ Ok, so that's basically how a hash table works. \end{enumerate} \item (One possible) Implementation (please note that, for brevity, I have only provided the core functions of the Hash Table class. 
If the reader wants to run the code, you will have to complete the template class implementation): \begin{adjustbox}{max height=2in, max width=2in} \lstinputlisting[language=C++]{Node.h} \end{adjustbox} \pagebreak \vspace*{-40mm} \includegraphics[scale=0.5]{HashTableHeaderfilePart1} \includegraphics[scale=0.5]{HashTableHeaderfilePart2} \pagebreak \vspace*{-40mm} \includegraphics[scale=0.5]{HashTableHeaderfilePart3} \includegraphics[scale=0.5]{HashTableHeaderfilePart4} \item Complexities \begin{enumerate} \item Insertion: $\mathcal{O}(1)$ (average) \item Deletion: $\mathcal{O}(1)$ (average) \item Search: $\mathcal{O}(1)$ (average) \end{enumerate} \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF HASH TABLE SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagebreak \vspace*{-40mm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF TREE SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Tree(s)} \begin{enumerate} %%%%%%%%%START OF BINARY TREE SECTION%%%%%%%%%% \item Binary Trees \begin{enumerate} \item A data structure in which each node has at most two children, a left child and a right child \item Major Algorithms \begin{enumerate} \item Pre-order traversal (Type of depth first traversal/search) \begin{enumerate} \item Visit order: root, left child, right child \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes in your tree \end{enumerate} \item In-order traversal (type of depth first search/traversal) \begin{enumerate} \item Visit order: left child, root, right child \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes in your tree \end{enumerate} \item Post-order traversal (type of depth first search/traversal) \begin{enumerate} \item Visit order: left child, right child, root \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes in your tree \end{enumerate} \item Breadth First Traversal/Search \begin{enumerate} \item How it works: traverses the tree one level at a time, left to right within a level \item Time complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes since you have to visit all the nodes in the tree \item Space complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes since, in the worst case, the queue has to hold an entire level of the tree \item Implementation: \vspace{1mm} \includegraphics[scale=0.5]{BreadthFirstTraversalForTree} \end{enumerate} \end{enumerate} \end{enumerate} %%%%%%%END OF BINARY TREE SECTION%%%%%%%% \pagebreak \vspace*{-40mm} %%%%%%%START OF N-ARY TREE SECTION%%%%%%% \item $N$-ary Trees \begin{enumerate} \item Is a data structure that consists of a root node and then $N$ subtrees \item Major Algorithms \begin{enumerate} \item Pre-order traversal \begin{enumerate} \item Print out the current node, then loop through the children of the current node and recursively call the pre-order traversal function on each of them \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes since you have to visit all the nodes in the tree \end{enumerate} \item Post-order traversal \begin{enumerate} \item Loop through the children of the current node and recursively call the post-order traversal function on each of them, then print out the current node \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes since you have to visit all the nodes in the tree \end{enumerate} \item Breadth First Search \vspace{1mm} \includegraphics[scale=0.5]{BreadthFirstSearchN-aryTree} \begin{enumerate} \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes since you have to visit all the nodes in the 
tree \end{enumerate} \item Depth First Search \vspace{1mm} \includegraphics[scale=0.5]{DepthFirstSearchN-aryTree} \pagebreak \vspace*{-40mm} \begin{enumerate} \item Complexity: $\mathcal{O}(n)$, where $n$ is the number of nodes since you have to visit all the nodes in the tree \end{enumerate} \item Implementation: each node will have data associated with it as well as a vector to store its $n$ children \end{enumerate} \end{enumerate} %%%%%%%END OF N-ARY TREE SECTION%%%%%%%% %%%%%%%START OF TRIE SECTION%%%%%%%%%%% \item Trie(s) \begin{enumerate} \item A tree where every vertex represents either a word or a prefix \item Major Algorithms: \begin{enumerate} \item Similar to those for $N$-ary trees \end{enumerate} \item Implementation: identical to that of an $N$-ary tree except that you will be storing characters in the vector \end{enumerate} %%%%%%%END OF TRIE SECTION%%%%%%%%%%%% %%%%%%%START OF AVL TREE SECTION%%%% \item AVL Tree(s) \begin{enumerate} \item Is a binary search tree with the following properties: \begin{enumerate} \item The heights of the two sub-trees of every node differ by at most one \item Every sub-tree is also an AVL tree \end{enumerate} \item Why do we care about AVL tree(s)? \begin{enumerate} \item Most of the BST operations (e.g., search, max, min, insert, delete, etc.) take $\mathcal{O}(h)$ time where $h$ is the height of the BST. The cost of these operations may become $\mathcal{O}(n)$ for a skewed binary tree. If we make sure that the height of the tree remains $\mathcal{O}(\log{} n)$ after every insertion and deletion, then we can guarantee an upper bound of $\mathcal{O}(\log{} n)$ for all these operations. \end{enumerate} \item Insertion process for a node $n$ \begin{enumerate} \item Perform the standard Binary Search Tree insertion for $n$ \item Starting from $n$, travel up in the tree until we find the first unbalanced node, call it $z$ \item Re-balance the tree by performing the appropriate rotations for the sub-tree with root $z$ \pagebreak \vspace*{-40mm} \item There are four possible cases to consider for re-balancing: \begin{enumerate} \item Case $1$: Left Left \vspace{2mm} \includegraphics[scale=0.5]{RotationCase1AVLTree} \item Case $2$: Left Right \vspace{2mm} \includegraphics[scale=0.5]{RotationCase2AVLTree} \item Case $3$: Right Right \vspace{2mm} \includegraphics[scale=0.5]{RotationCase3AVLTree} \item Case $4$: Right Left \vspace{2mm} \includegraphics[scale=0.5]{RotationCase4AVLTree} \end{enumerate} \item Implementation (in words) for insertion: \begin{enumerate} \item Perform the normal Binary Search Tree insertion \item The current node must be one of the ancestors of the newly inserted node. Update the height of the current node. \item Get the balance factor (left subtree height - right subtree height) \item If the balance factor is greater than $1$, then the current node is unbalanced and we are either in the Left Left case or the Left Right case. We are in the Left Left case if the newly inserted key is less than the key of its left child. We are in the Left Right case if the newly inserted key is greater than the key of its left child. \pagebreak \vspace*{-40mm} \item If the balance factor is less than $-1$, then the current node is unbalanced and we are either in the Right Right case or the Right Left case. We are in the Right Right case if the newly inserted key is greater than the key of its right child. We are in the Right Left case if the newly inserted key is less than the key of its right child. 
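To make the four cases concrete, here is a minimal, illustrative C++ sketch of the two rotations and the re-balancing dispatch (the helper names, such as \texttt{rebalance}, are assumptions of this sketch; the dispatch is driven by balance factors, so it matches both the insertion procedure above and the deletion procedure below):
\begin{lstlisting}[language=C++]
#include <algorithm>

struct Node {
    int key;
    int height = 1;        // height of the subtree rooted at this node
    Node* left = nullptr;
    Node* right = nullptr;
};

int height(Node* n)        { return n ? n->height : 0; }
int balanceFactor(Node* n) { return n ? height(n->left) - height(n->right) : 0; }
void update(Node* n)       { n->height = 1 + std::max(height(n->left), height(n->right)); }

Node* rotateRight(Node* y) {          // used for the Left Left case
    Node* x = y->left;
    y->left = x->right;
    x->right = y;
    update(y); update(x);
    return x;                         // x becomes the new subtree root
}

Node* rotateLeft(Node* x) {           // used for the Right Right case
    Node* y = x->right;
    x->right = y->left;
    y->left = x;
    update(x); update(y);
    return y;
}

Node* rebalance(Node* z) {            // called on each ancestor after an insert or delete
    update(z);
    int bf = balanceFactor(z);
    if (bf > 1) {                     // left-heavy
        if (balanceFactor(z->left) < 0)       // Left Right: reduce it to Left Left first
            z->left = rotateLeft(z->left);
        return rotateRight(z);                // Left Left
    }
    if (bf < -1) {                    // right-heavy
        if (balanceFactor(z->right) > 0)      // Right Left: reduce it to Right Right first
            z->right = rotateRight(z->right);
        return rotateLeft(z);                 // Right Right
    }
    return z;                         // already balanced
}
\end{lstlisting}
After an insertion, a single re-balance at the lowest unbalanced ancestor is enough; after a deletion, the check may have to be repeated on every ancestor up to the root.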
\end{enumerate} \end{enumerate} \item Deletion process for a node $n$ \begin{enumerate} \item Perform the normal deletion for Binary Search Tree \item Starting from $n$, travel up in the tree until we find the first unbalanced node, call it $z$ \item Re-balance the tree by performing the appropriate rotations for the sub-tree with root $z$ \item There are four cases to consider for re-balancing: \begin{enumerate} \item Case $1$: Left Left \vspace{2mm} \includegraphics[scale=0.5]{RotationCase1AVLTree} \item Case $2$: Left Right \vspace{2mm} \includegraphics[scale=0.5]{RotationCase2AVLTree} \item Case $3$: Right Right \vspace{2mm} \includegraphics[scale=0.5]{RotationCase3AVLTree} \item Case $4$: Right Left \vspace{2mm} \includegraphics[scale=0.5]{RotationCase4AVLTree} \end{enumerate} \pagebreak \vspace*{-40mm} \item Implementation for deletion: \begin{enumerate} \item Perform the normal Binary Search Tree deletion \item The current node must be one of the ancestors of the deleted node; update the height of the current node \item Get the balance factor (left subtree height - right subtree height) of the current node \item If the balance factor is greater than $1$, then the current node is unbalanced and we are either in Left Left case or Left Right case. To check whether it is the Left Left case or the Left Right case, get the balance factor of the left subtree. If the balance factor of the left subtree is greater than or equal to $0$, then it is the Left Left case, else it is the Left Right case. \item If the balance factor is less than $-1$, then the current node is unbalanced and we are either in the Right Right case or the Right Left case. To check whether it is the Right Right case or the Right Left case, get the balance factor of right subtree. If the balance factor of the right subtree is less than or equal to $0$, then it is the Right Right case, else it is the Right Left case. 
\end{enumerate} \end{enumerate} \end{enumerate} %%%%%%%END OF AVL TREE SECTION%%%%% %%%%%%%START OF BREADTH FIRST AND DEPTH FIRST COMPARISON SECTION%%%%% \item Breadth First and Depth First Comparison \begin{enumerate} \item When to use Breadth First \begin{enumerate} \item If you know what you want to search for is not far from the root \item If the tree is very deep and solutions are not very common \end{enumerate} \item When to use Depth First \begin{enumerate} \item If the tree is wide, since breadth first might take up too much memory \item If solutions are frequent and located deep in the tree, breadth first might take too long \end{enumerate} \item Complexities \begin{enumerate} \item Breadth First: $\mathcal{O}(n)$ \item Depth First: $\mathcal{O}(n)$ \end{enumerate} \end{enumerate} %%%%%%%END OF BREADTH FIRST AND DEPTH FIRST COMPARISON SECTION%%%%%% \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF TREE SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagebreak \vspace*{-40mm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF GRAPH SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Graphs} \begin{enumerate} \item Adjacency Matrix Representation \begin{enumerate} \item is a 2D array of size $V$ x $V$ where $V$ is the number of vertices in a graph \item Complexity: \begin{enumerate} \item Adding an edge $\rightarrow \mathcal{O}(1)$ \item Deleting an edge $\rightarrow \mathcal{O}(1)$ \item Answering the question ``is there an edge between two vertices" $\rightarrow \mathcal{O}(1)$ \item Finding the successors of a given vertex $\rightarrow \mathcal{O}(n)$ \item Determining if a path exists between two vertices $\rightarrow \mathcal{O}(n^2)$ \end{enumerate} \item Advantage(s): \begin{enumerate} \item Representation is easier to implement and follow \item Removing an edge takes constant time \item Answering the question ``is there an edge between two vertices" can be answered in constant time \end{enumerate} \item Disadvantage(s): \begin{enumerate} \item Space complexity is $\mathcal{O}(V^2)$, where $V$ is the number of vertices \item Even if the graph is sparse, still takes up $\mathcal{O}(V^2)$ space \item Adding a vertex is $\mathcal{O}(V^2)$ time \end{enumerate} \end{enumerate} \item Adjacency List Representation \begin{enumerate} \item We keep a list of all vertices and each vertex within that list has its own list that contains their adjacent vertices \item Complexity: \begin{enumerate} \item Adding an edge $\rightarrow \mathcal{O}(\log{} V)$ \item Deleting an edge $\rightarrow \mathcal{O}(\log{} V)$ \item Answer the questions "is there an edge between two vertices" $\rightarrow \mathcal{O}(\log{} V)$ \item Finding the successor of a given vertex $\rightarrow \mathcal{O}(\log{} n)$, where $n$ is the length of the lists containing the successors of a given vertex \pagebreak \vspace*{-40mm} \item Determining if a path exists between two vertices $\rightarrow \mathcal{O}(V+E)$, where $V$ is number of vertices and $E$ is the number of edges \end{enumerate} \item Advantage(s): \begin{enumerate} \item Saves space; average space complexity is $\mathcal{O}(V+E)$; worst case is $\mathcal{O}(V^2)$ \item Adding a vertex is easier \end{enumerate} \item Disadvantage(s): \begin{enumerate} \item Queries like whether there is an edge from vertex $u$ to vertex $v$ are not efficient and can be done $\mathcal{O}(V)$, where $V$ is the number of vertices \end{enumerate} \end{enumerate} \item Major Algorithms \begin{enumerate} \item Breadth 
First Traversal \begin{enumerate} \item Complexity: $\mathcal{O}(V+E)$, where $V$ is the number of vertices and $E$ is the number of edges \item Implementation: \vspace{1mm} \includegraphics[scale=0.5]{BreadthFirstSearchForGraph} \end{enumerate} \pagebreak \vspace*{-40mm} \item Depth First Search \begin{enumerate} \item Complexity: $\mathcal{O}(V+E)$, where $V$ is the number of vertices and $E$ is the number of edges \item Implementation: \vspace{1mm} \includegraphics[scale=0.5]{DepthFirstSearchGraph} \vspace{3mm} \includegraphics[scale=0.5]{DepthFirstSearchGraphIterative} \end{enumerate} \item Dijkstra's Algorithm \begin{enumerate} \item Description: \includegraphics[scale=0.4]{DijkstraAlgorithmDescription} \pagebreak \vspace*{-40mm} \item Implementation (in pseudo code): \vspace{1mm} \includegraphics[scale=0.4]{DijkstraPseudoCode} \end{enumerate} \item A* Algorithm \begin{enumerate} \item A* is like Dijkstra's algorithm in that it can be used to find a shortest path \item A* is like Greedy Best-First-Search in that it can use a heuristic to guide itself \item The secret to its success is that it combines the pieces of information that Dijkstra's algorithm uses (favoring vertices that are close to the starting point) and information that Greedy Best-First-Search uses (favoring vertices that are close to the goal) \item When talking about the algorithm, $g(n)$ represents the exact cost of the path from the starting point to any vertex $n$ \item When talking about the algorithm, $h(n)$ represents the heuristic estimated cost from vertex $n$ to the goal \item At each step, A* expands the vertex with the smallest $f(n) = g(n) + h(n)$, the estimated total cost of a path from the start to the goal that passes through $n$ \end{enumerate} \end{enumerate} \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF GRAPH SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF NP-COMPLETE PROBLEM(S) SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{NP-Complete Problem(s)} \begin{enumerate} \item NP stands for Non-deterministic Polynomial time \item A yes-or-no problem is said to be NP if a yes can be verified in polynomial time \item NP-complete is a family of NP problems for which you know that if one of them had a polynomial time solution, then every one of them would have a polynomial time solution \item Some famous examples of NP problems: \pagebreak \vspace*{-40mm} \begin{enumerate} \item Traveling Salesman problem: finding the shortest path (on a graph) that allows you to visit each city exactly once \item Bin packing problem: there are a number of fixed (integer) size bins and objects of varying sizes. Minimize the number of bins required to hold all of the objects \item Knapsack problem: given objects of various sizes and values and a knapsack with a fixed integer size, choose the objects that can fit inside with the most value \item Minimal Vertex Cover: finding the smallest set of vertices such that every edge contains at least one chosen vertex \item Clique: finding the largest group of people who all know each other \item Subgraph Isomorphism: does one graph contain a subgraph isomorphic to another? \item Set packing: given a number of sets, what is the maximum number of disjoint sets that can be selected? This is related to set cover, where we are trying to choose sets so that every element is within at least one set \item Subset sum: Given a set of integers, does some subset sum to 0? 
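Although no polynomial time algorithm is known for any of these problems, small instances still come up in interviews. As one illustration, here is a hedged C++ sketch of the classic dynamic-programming check for subset sum (the name \texttt{subsetSum} and the restriction to non-negative integers and a non-negative target are assumptions of the sketch, not part of the problem statement above); it runs in $\mathcal{O}(n \cdot target)$ time, which is pseudo-polynomial (exponential in the number of bits of the target) and therefore does not contradict NP-completeness:
\begin{lstlisting}[language=C++]
#include <vector>

// Returns true if some subset of nums sums exactly to target.
// Assumes non-negative integers and a non-negative target.
bool subsetSum(const std::vector<int>& nums, int target) {
    std::vector<char> reachable(target + 1, false);
    reachable[0] = true;                    // the empty subset sums to 0
    for (int x : nums) {
        for (int s = target; s >= x; --s) { // iterate downwards so each item is used at most once
            if (reachable[s - x]) reachable[s] = true;
        }
    }
    return reachable[target];
}
\end{lstlisting}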
\end{enumerate} \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF NP-COMPLETE PROBLEM(S) SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagebreak \vspace*{-40mm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF OPERATING SYSTEM SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Operating Systems} \begin{enumerate} \item Process: an instance of computer program in execution \item Thread: a basic unit of CPU utilization, often called a ``lightweight" process \item Concurrency issues \begin{enumerate} \item Race condition(s): behaviour of software or system where the output is dependent on the sequence or timing of other uncontrollable events \item Deadlock: a specific condition when two or more processes are each waiting for another to release a resource, or more than two processes are waiting for resources in a circular chain \item Livelock: similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing \end{enumerate} \item Lock(s): something that programmers annotate source code with, especially critical sections, to ensure that any such critical section executes as if it was a single atomic instruction \item Mutex(es): used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread \begin{enumerate} \item Real world example: Imagine a mutex as a key to a toilet. One person can have the key - occupy the toilet - at a time. When they are finished, the person gives (frees) the key to the next person in the queue \end{enumerate} \item Semaphore(s): something that restricts the number of simultaneous users (threads) of a shared resource up to a maximum number \begin{enumerate} \item Real world example: Going back to our earlier toilet example, a semaphore is the number of free identical toilet keys. Say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at beginning (all four toilets are free), then the count value is decremented as people are coming in. If all toilets are full, ie. there are no free keys left, the semaphore count is 0. \pagebreak \vspace*{-40mm} Now, when one person leaves the toilet, the semaphore count is increased to 1 (one free key), and given to the next person in the queue \end{enumerate} \item Monitor(s): A monitor is a set of multiple routines which are protected by a mutual exclusion lock \begin{enumerate} \item Four main components: \begin{enumerate} \item Initialization: contains the code that is used exactly once when the monitor is created \item Private data: private data, including private procedures, that can only be used within the monitor \item Monitor procedure(s): procedures that can be called from outside of the monitor \item Entry queue: contains all threads that called monitor procedures but have not been granted permissions \end{enumerate} \end{enumerate} \item Deadlock \begin{enumerate} \item Technical definition: a specific condition when two or more processes are each waiting for another to release a resource, or more than two processes are waiting for resources in a circular chain \item Real world example: imagine two children are rummaging through a toy box because they want to play with a drum. However, one child finds the drumstick while the other finds the actual drum. 
Now, both children want to play with the drum but in order for this to happen, either the child with the drumstick has to give up the drumstick or the child with the drum has to give up the drum. Of course, since both children want to play with the drum, neither is going to do this, so both will be waiting forever for the other to give up what they have. \end{enumerate} \item Livelock \begin{enumerate} \item Technical definition: similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing \item Real world example: two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time \end{enumerate} \pagebreak \vspace*{-40mm} \item What resources does a thread need? \begin{enumerate} \item Thread ID \item Program counter \item Function stack \item Set of registers \end{enumerate} \item What resources does a process need? \begin{enumerate} \item Process Control Block \begin{enumerate} \item Process state: new, ready, running, waiting, terminating \item Process ID and parent process ID \item CPU registers and program counter \item CPU scheduling information such as priority information and pointers to scheduling queues \item Memory management information such as page tables or segment tables \item Accounting information such as user and kernel CPU time consumed, account numbers, limits, etc \item I/O status information such as devices allocated, open file tables, etc \end{enumerate} \end{enumerate} \item Context Switching \begin{enumerate} \item What is it? \begin{enumerate} \item Context switching is the process of switching a process from working on one task to working on another even before the former task is completed \end{enumerate} \item What does it involve? \begin{enumerate} \item This involves saving the state of all volatile data like registers, program counter, memory, etc. (in other words the ``context" of the process) to persistent storage and then loading up the context of a new process \end{enumerate} \item Before we get to process switching, we have to talk about the steps in thread context switching: \pagebreak \vspace*{-40mm} \begin{enumerate} \item All context switches are initiated by an 'interrupt'. This could be an actual hardware interrupt that runs a driver, (eg. from a network card, keyboard, memory-management or timer hardware), or a software call, (system call), that performs a hardware-interrupt-like call sequence to enter the OS \item Non-trivial systems will have to initiate a hardware-protection-level change to enter a kernel-state so that the kernel code/data etc. can be accessed \item Core state for the interrupted thread has to be saved. On a simple embedded system, this might just be pushing all registers onto the thread stack and saving the stack pointer in its Thread Control Block \item Many systems switch to an OS-dedicated stack at this stage so that the bulk of OS-internal stack requirements are not inflicted on the stack of every thread \item It may be necessary to mark the thread stack position where the change to interrupt-state occurred to allow for nested interrupts \item The driver/system call runs and may change the set of ready threads by adding/removing TCB's from internal queues for the different thread priorities, eg. 
network card driver may have set an event or signaled a semaphore that another thread was waiting on, so that thread will be added to the ready set, or a running thread may have called sleep() and so elected to remove itself from the ready set \item The OS scheduler algorithm is run to decide which thread to run next, typically the highest-priority ready thread that is at the front of the queue for that priority \item The saved stack pointer from the TCB for that thread is retrieved and loaded into the hardware stack pointer \item The core state for the selected thread is restored. On a simple system, the registers would be popped from the stack of the selected thread. More complex systems will have to handle a return to user-level protection \item An interrupt-return is performed, so transferring execution to the selected thread \end{enumerate} \item Okay, now that we have talked about thread context switching, we can now talk about the steps in process context switching: \pagebreak \vspace*{-40mm} \begin{enumerate} \item Process context switches are initiated by a thread-context switch, so all of the above, 1-9, is going to need to happen \item At step 5 in the thread context switching process, the scheduler decides to run a thread belonging to a different process from the one that owned the previously-running thread \item The memory-management hardware has to be loaded with the address-space for the new process, i.e. whatever selectors/segments/flags/whatever that allow the thread(s) of the new process to access its memory \item The context of any Floating Point Unit hardware needs to be saved/restored from the Process Control Block \item There may be other process-dedicated hardware that needs to be saved/restored \end{enumerate} \end{enumerate} \end{enumerate} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF OPERATING SYSTEM SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF COMPLEXITIES SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Complexities} Data structures \vspace{3mm} \begin{adjustbox}{max height=6in, max width=6in} \begin{tabular}{l*{6}{c}r} Data Structure & Access (average) & Search (average) & Insertion (average) & Deletion (average) & Space \\ \hline Singly-Linked List & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(1)$ & $\mathcal{O}(1)$ & $\mathcal{O}(n)$ \\ Doubly-Linked List & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(1)$ & $\mathcal{O}(1)$ & $\mathcal{O}(n)$ \\ Array & $\mathcal{O}(1)$ & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ \\ Stack & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ & $\mathcal{O}(1)$ & $\mathcal{O}(1)$ & $\mathcal{O}(n)$ \\ Queue & $\mathcal{O}(1)$ & $\mathcal{O}(n)$ & $\mathcal{O}(1)$ & $\mathcal{O}(1)$ & $\mathcal{O}(n)$ \\ Binary Search Tree & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(n)$ \\ AVL Tree & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(\log{} n)$ & $\mathcal{O}(n)$ \end{tabular} \end{adjustbox} \vspace{3mm} \hspace{-8mm} Algorithms \vspace{3mm} \begin{adjustbox}{max height=4in, max width=4in} \begin{tabular}{l*{6}{c}r} Algorithm & Time & Space \\ \hline Quicksort & $\mathcal{O}(n \log{} n)$ & $\mathcal{O}(\log{} n)$ \\ Mergesort & $\mathcal{O}(n \log{} n)$ & $\mathcal{O}(n)$ \\ Heapsort & $\mathcal{O}(n \log{} n)$ & $\mathcal{O}(1)$ \\ Depth First Search (Trees) & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ \\ 
Breadth First Search (Trees) & $\mathcal{O}(n)$ & $\mathcal{O}(n)$ \\ Depth First Search (Graph(s)) & $\mathcal{O}(\abs{V} + \abs{E})$ & $\mathcal{O}(\abs{V})$ \\ Breadth First Search (Graph(s)) & $\mathcal{O}(\abs{V} + \abs{E})$ & $\mathcal{O}(\abs{V})$ \end{tabular} \end{adjustbox} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF COMPLEXITIES SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagebreak \vspace*{-40mm} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%START OF REFERENCE SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{center} \begin{thebibliography}{9} \bibitem{AlgorithmDesignManual} Steven S. Skienna \\ \textit{The Algorithm Design Manual, Second Edition}. \bibitem{CrackingTheCodingInterview} Gayle McDowell \\ \textit{Cracking The Code Interview, 6th Edition} \bibitem{GeeksForGeeks} Geeks for Geeks \\ \url{http://www.geeksforgeeks.org/} \bibitem{QuicksortImplementation} Quicksort Implementation \\ \url{http://www.algolist.net/Algorithms/Sorting/Quicksort} \bibitem{MergeSortImplementation} Mergesort Implementation \\ \url{https://en.wikibooks.org/wiki/Algorithm_Implementation/Sorting/Merge_sort#C.2B.2B} \bibitem{HashTable} Hash Table Description \\ \url{http://stackoverflow.com/questions/730620/how-does-a-hash-table-work} \bibitem{Trees} Trees \\ \url{http://stackoverflow.com/questions/5262308/how-do-implement-a-breadth-first-traversal} \vspace{1mm} \url{http://www.cs.cmu.edu/~pattis/15-1XX/15-200/lectures/specialtrees/} \vspace{1mm} \url{http://www.brpreiss.com/books/opus5/html/page257.html} \vspace{1mm} \url{http://stackoverflow.com/questions/5987867/traversing-a-n-ary-tree-without-using-recurrsion} \bibitem{AVLTrees} AVL Trees \\ \url{http://www.geeksforgeeks.org/avl-tree-set-1-insertion/} \vspace{1mm} \url{http://www.geeksforgeeks.org/avl-tree-set-2-deletion/} \bibitem{BFSvsDFS} Breadth First Search vs Depth First Search \\ \url{http://stackoverflow.com/questions/3332947/when-is-it-practical-to-use-dfs-vs-bfs} \pagebreak \vspace*{-40mm} \bibitem{Graphs} Graphs \\ \url{http://stackoverflow.com/questions/3287003/three-ways-to-store-a-graph-in-memory-advantages-and-disadvantages} \vspace{1mm} \url{https://www.khanacademy.org/computing/computer-science/algorithms/graph-representation/a/representing-graphs} \vspace{1mm} \url{http://www.algorithmist.com/index.php/Graph_data_structures} \vspace{1mm} \url{http://www.geeksforgeeks.org/graph-and-its-representations/} \vspace{1mm} \url{http://www.geeksforgeeks.org/breadth-first-traversal-for-a-graph/} \vspace{1mm} \url{http://www.geeksforgeeks.org/depth-first-traversal-for-a-graph/} \vspace{1mm} \url{https://en.wikipedia.org/wiki/Depth-first_search} \vspace{1mm} \url{https://en.wikipedia.org/wiki/Dijkstra\%27s_algorithm} \vspace{1mm} \url{http://theory.stanford.edu/~amitp/GameProgramming/AStarComparison.html} \bibitem{NPCompleteProblems} NP Complete Stuff \\ \url{http://stackoverflow.com/questions/111307/whats-p-np-and-why-is-it-such-a-famous-question} \vspace{1mm} \url{http://math.stackexchange.com/questions/726/what-are-np-complete-problems-and-why-are-they-so-important} \bibitem{OperatingSystems} Operating Systems \\ \url{http://niclasw.mbnet.fi/MutexSemaphore.html} \vspace{1mm} \url{https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/3_Processes.html} \vspace{1mm} \url{https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html} \vspace{1mm} \url{http://www.programmerinterview.com/index.php/operating-systems/monitors-vs-semaphores} \vspace{1mm} 
\url{https://www.cs.mtu.edu/~shene/NSF-3/e-Book/MONITOR/basics.html} \bibitem{ContextSwitching} Context switching \\ \url{http://stackoverflow.com/questions/7439608/steps-in-context-switching} \pagebreak \vspace*{-40mm} \url{http://stackoverflow.com/questions/5440128/thread-context-switch-vs-process-context-switch?rq=1} \bibitem{Complexity} Complexity \\ \url{http://stackoverflow.com/questions/7294634/what-are-the-time-complexities-of-various-data-structures} \vspace{1mm} \url{http://bigocheatsheet.com/} \end{thebibliography} \end{center} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%END OF REFERENCE SECTION%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{document}
{ "alphanum_fraction": 0.715758211, "avg_line_length": 30.4006069803, "ext": "tex", "hexsha": "36a72f3b5c0121d55cef30d64a02006aa0ee3508", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e68c41f16f7790e44b10a229548186e13edb5998", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "busebd12/InterviewPreparation", "max_forks_repo_path": "InterviewStudyGuide/TechnicalInterviewPreparationGuide.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e68c41f16f7790e44b10a229548186e13edb5998", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "busebd12/InterviewPreparation", "max_issues_repo_path": "InterviewStudyGuide/TechnicalInterviewPreparationGuide.tex", "max_line_length": 552, "max_stars_count": null, "max_stars_repo_head_hexsha": "e68c41f16f7790e44b10a229548186e13edb5998", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "busebd12/InterviewPreparation", "max_stars_repo_path": "InterviewStudyGuide/TechnicalInterviewPreparationGuide.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10130, "size": 40068 }
\chapter{Introduction} \wmii\ is a simple but powerful window manager for the X Window System. It provides both the classic (“floating”) and tiling (“managed”) window management paradigms, which is to say, it does the job of managing your windows, so you don't have to. It also provides programmability by means of a simple file-like interface, which allows the user to program in virtually any language he chooses. These basic features have become indispensable to the many users of \wmii\ and other similar window managers, but they come at a cost. Though our penchant for simplicity makes \wmii's learning curve significantly shorter than those of most of its competitors, there's still a lot to learn. The rest of this guide will be devoted to familiarizing new users with \wmii's novel features and eccentricities, as well as providing advanced users with an in-depth look at our customization facilities. \section{Concepts} As noted, \wmii\ provides two management styles: \begin{description} \item[Managed] This is the primary style of window management in \wmii. Windows managed in this style are automatically arranged by \wmii\ into columns. Columns are created and destroyed on demand. Individual windows in the column may be moved or resized, and are often collapsed or hidden entirely. Ad-hoc stacks of collapsed and uncollapsed windows allow the user to efficiently manage their tasks. When switching from an active to a collapsed window, the active window collapses and the collapsed one effectively takes its place. Managed windows have an unadorned titlebar: \titlebar{managed} \item[Floating] Since some programs aren't designed in ways conducive to the managed work flow, \wmii\ also provides the classic “floating” window management model. Windows managed in this model float above the managed windows and may be moved freely about. Other than automatic placement of new windows and snapping of edges, \wmii\ doesn't manage floating windows at all. Floating windows are indicated by a decorated titlebar: \titlebar{floating} \item[Fullscreen] Fullscreen mode is actually a subset of the floating style. Windows may be toggled to and from fullscreen mode at will. When fullscreen, windows reside in the floating layer, above the managed windows. They have no borders or titlebars, and occupy the full area of the screen. Other than that, however, they're not special in any way. Other floating windows may appear above them and the user can still select, open, and close other windows at will. \end{description} \subsection{The Filesystem} All of \wmii's customization is done via a virtual filesystem. Since the filesystem is implemented in the standardized \ninep\ protocol, it can be accessed in many ways. \wmii\ provides a simple command-line client, \wmiir, but many alternatives exist, including libraries for Python, Perl, Ruby, PHP, and C. It can even be mounted, either by Linux's 9p.ko kernel module or indirectly via FUSE. The filesystem that \wmii\ provides is “virtual”, which is to say that it doesn't reside on disk anywhere. In a sense, it's a figment of \wmii's imagination. Files, when read, represent \wmii's current configuration or state. When written, they perform actions, update the UI, etc. For instance, the directory |/client/| contains a directory for each window that \wmii\ is currently managing. 
Each of those directories, in turn, contains files describing the client's properties (its title, its views\footnote{Views in \wmii\ are akin to workspaces or virtual desktops in other window managers, but with some subtle differences.}, its state). Most files can be written to update the state they describe. For instance, |/client/sel/ctl| describes the state of the selected client. If a client is fullscreen, it contains the line: \begin{code} fullscreen on \end{code} \noindent To change this, you'd update the file with the line % XXX: Line broken at /ctl cmd. |fullscreen off| or even |fullscreen| |toggle| to toggle the client's fullscreen state. The concept of controlling a program via a filesystem derives from \plannine, where such interfaces are extensive and well proven\footnote{The concept has also taken hold on most Unixes in the form of \texttt{/proc} and \texttt{/sys} virtual filesystems, but tends to be very kernel-centric. On \plannine, where the model is more pervasive, there are more virtual filesystems for user-level applications than for the kernel.}. The metaphor has shown itself to be quite intuitive to Unix users, once the shock of a “virtual” filesystem wears off. The flexibility of being able to control \wmii\ from myriad programming languages, including the standard Unix shell and even from the command line, is well worth the shock. \subsection{Views and Tags} Like most X11 window managers, \wmii\ provides virtual workspaces. Unlike other window managers though, \wmii's workspaces are created and destroyed on demand. Instead of being sent to a workspace, windows in \wmii\ are tagged with any number of names. Views are created dynamically from these tags, and automatically if the user tries to access them. For instance, if a window is given the tags ‘foo’ and ‘bar’, the two views ‘foo’ and ‘bar’ are created, if they don't already exist. The window is now visible on both of them. Moreover, tags can be specified as regular expressions. So, a client tagged with {\tt \verb+/^foo/+} will appear on any view named ‘foo’, ‘foo:bar’, and so forth. Any time a client is tagged with a matching tag, or the user opens a matching view, the window is automatically added to it. \subsection{The Bar} \wmii\ provides a general purpose information bar at the top or bottom of the screen. The bar is divided into a left and a right section. Each section is made up of buttons, with a single button spanning the gap between the two sides. Buttons can be individually styled and can hold any text content the user wishes. By convention, the buttons to the left show view names, and those to the right display status information. \subsection{The Menus} \wmii\ includes two simple, external menu programs. The first, \wimenu, is keyboard-based, and is used to launch programs and generally prompt the user for input. It provides a list of completions which are automatically filtered as you type. The second, \wiIXmenu, is mouse-based, and is generally used to provide context menus for titlebars and view buttons. Both menus can be easily launched from shell scripts or the command line, as well as from more complex scripting languages. \subsection{The Keyboard} \wmii\ is a very keyboard friendly window manager. Most actions can be performed without touching the mouse, including launching, closing, moving, resizing, and selecting programs. New keybindings of any complexity can easily be added to handle any missing functionality, or to simplify any repetitive tasks. 
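In practice, such keybindings usually reduce to short shell commands that read or write the filesystem described earlier. As a purely illustrative sketch (not an excerpt from the stock configuration, and the exact command names may differ between versions), a handler that inspects and then toggles the selected client's fullscreen state with the \wmiir\ client mentioned above might look like this:

\begin{code}
wmiir read /client/sel/ctl                      # show the selected client's state
wmiir xwrite /client/sel/ctl fullscreen toggle  # flip between fullscreen and normal
\end{code}

\noindent The same pattern extends to anything the filesystem exposes, which is what makes new bindings so cheap to add.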
\subsection{The Mouse} Despite being highly keyboard-accessible, \wmii\ strives to be highly mouse accessible as well. Windows can be moved or resized by dragging their window borders. When combined with a key press, they can be moved, resized, or raised by dragging any visible portion of the window. Mouse menus are accessed with a single click and drag. View buttons in the bar and client titlebars respond to the mouse wheel; view buttons can be activated by dragging any draggable object (e.g., a file from a file manager) over them.
{ "alphanum_fraction": 0.7787963332, "avg_line_length": 45.0718562874, "ext": "tex", "hexsha": "411b1a701b54cbf78d8c51b6adf6f0e77316e5b7", "lang": "TeX", "max_forks_count": 16, "max_forks_repo_forks_event_max_datetime": "2022-03-15T22:11:57.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-09T06:01:19.000Z", "max_forks_repo_head_hexsha": "56b3a14f57e8150d74c2c5e6012e7a8eda3c17d9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "aksr/wmii", "max_forks_repo_path": "doc/introduction.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "56b3a14f57e8150d74c2c5e6012e7a8eda3c17d9", "max_issues_repo_issues_event_max_datetime": "2020-07-15T05:27:56.000Z", "max_issues_repo_issues_event_min_datetime": "2015-06-28T19:20:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "aksr/wmii", "max_issues_repo_path": "doc/introduction.tex", "max_line_length": 66, "max_stars_count": 78, "max_stars_repo_head_hexsha": "56b3a14f57e8150d74c2c5e6012e7a8eda3c17d9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "aksr/wmii", "max_stars_repo_path": "doc/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-19T21:00:52.000Z", "max_stars_repo_stars_event_min_datetime": "2016-11-23T07:45:27.000Z", "num_tokens": 1751, "size": 7527 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={EDLD652 Lab 2}, pdfauthor={Hyeonjin Cha; Rachael Latimer; Tess Sameshima}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[margin=1in]{geometry} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} 
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering \title{EDLD652 Lab 2} \author{Hyeonjin Cha \and Rachael Latimer \and Tess Sameshima} \date{2/3/2021} \begin{document} \maketitle \hypertarget{google-trends-data}{% \subsection{1. Google Trends Data}\label{google-trends-data}} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#dataset} \NormalTok{google_trends_longer <-}\StringTok{ }\NormalTok{google_trends }\OperatorTok{%>%} \StringTok{ }\KeywordTok{pivot_longer}\NormalTok{(}\DataTypeTok{cols =} \KeywordTok{starts_with}\NormalTok{(}\StringTok{"hurricane"}\NormalTok{),} \DataTypeTok{names_to =} \StringTok{"hurricane"}\NormalTok{,} \DataTypeTok{names_prefix =} \StringTok{"hurricane_"}\NormalTok{,} \DataTypeTok{values_to =} \StringTok{"mentions"}\NormalTok{)} \CommentTok{#part 1} \NormalTok{plot1 <-}\StringTok{ }\NormalTok{google_trends_longer }\OperatorTok{%>%} \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(date, mentions, }\DataTypeTok{color =}\NormalTok{ hurricane)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_line}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{facet_wrap}\NormalTok{(}\OperatorTok{~}\NormalTok{hurricane) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()} \NormalTok{plot1} \end{Highlighting} \end{Shaded} \includegraphics{script_files/figure-latex/googletrends-1.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#part2} \NormalTok{plot2 <-}\StringTok{ }\NormalTok{google_trends_longer }\OperatorTok{%>%} \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(date, mentions, }\DataTypeTok{fill =}\NormalTok{ hurricane)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_area}\NormalTok{(}\DataTypeTok{position =} \StringTok{"dodge"}\NormalTok{, }\DataTypeTok{alpha =} \FloatTok{0.8}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()} \NormalTok{plot2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning: Width not defined. 
Set with `position_dodge(width = ?)` \end{verbatim} \includegraphics{script_files/figure-latex/googletrends-2.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#part3} \CommentTok{# plot3 <- google_trends_longer %>%} \CommentTok{# ggplot(aes(date, mentions)) +} \CommentTok{# geom_area(aes(fill = hurricane)) +} \CommentTok{# scico:: scale_fill_scico(palette = "tokyo") +} \CommentTok{# theme_minimal} \CommentTok{# #Error: Discrete value supplied to continuous scale} \CommentTok{# How do I change the scales so color is mapped to a continuous scale? Hurricane is a categorical variable...} \CommentTok{# Maybe change the variable from categorical to numerical?} \CommentTok{# google_trends_longer$hurricane = as.numeric(levels(google_trends_longer$hurricane))[google_trends_longer$hurricane]} \CommentTok{# this didn't seem to work either... it all turned NA} \CommentTok{#I asked Daniel for help; feel like an idiot not figuring out this simple but oh well} \NormalTok{plot3 <-}\StringTok{ }\NormalTok{google_trends_longer }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{group_by}\NormalTok{(hurricane, date) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{summarize}\NormalTok{(}\DataTypeTok{mentions =} \KeywordTok{mean}\NormalTok{(mentions)) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(date, hurricane)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_tile}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{fill =}\NormalTok{ mentions),} \DataTypeTok{color =} \StringTok{"white"}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{coord_fixed}\NormalTok{() }\OperatorTok{+}\StringTok{ } \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## `summarise()` regrouping output by 'hurricane' (override with `.groups` argument) \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{plot3} \end{Highlighting} \end{Shaded} \includegraphics{script_files/figure-latex/googletrends-3.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#part 4} \CommentTok{# google_trends_longer_landfall <- google_trends_longer %>%} \CommentTok{# mutate(landfall = 0) %>%} \CommentTok{# mutate(landfall = ifelse(} \CommentTok{# hurricane == "harvey_us" & date == "2017-08-25" |} \CommentTok{# hurricane == "irma_us" & date == "2017-09-10" |} \CommentTok{# hurricane == "maria_us" & date == "2017-09-20",} \CommentTok{# 1, 0))} \CommentTok{#probably not the way Daniel intended....} \NormalTok{landfall <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{hurricane =} \KeywordTok{c}\NormalTok{(}\StringTok{"harvey_us"}\NormalTok{, }\StringTok{"irma_us"}\NormalTok{, }\StringTok{"maria_us"}\NormalTok{),} \DataTypeTok{date =} \KeywordTok{as.Date}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"2017-08-25"}\NormalTok{, }\StringTok{"2017-09-10"}\NormalTok{, }\StringTok{"2017-09-20"}\NormalTok{)),} \DataTypeTok{Ref =} \KeywordTok{c}\NormalTok{(}\StringTok{"Harvey landfall"}\NormalTok{, }\StringTok{"Irma landfall"}\NormalTok{, }\StringTok{"Maria landfall"}\NormalTok{),} \DataTypeTok{stringsAsFactors =} \OtherTok{FALSE}\NormalTok{)} \NormalTok{plot4 <-}\StringTok{ }\NormalTok{google_trends_longer }\OperatorTok{%>%} \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(date, mentions, }\DataTypeTok{fill =}\NormalTok{ hurricane)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_area}\NormalTok{(}\DataTypeTok{position =} \StringTok{"dodge"}\NormalTok{, }\DataTypeTok{alpha =} 
\FloatTok{0.8}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_vline}\NormalTok{(}\DataTypeTok{data =}\NormalTok{ landfall, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{xintercept=}\KeywordTok{as.numeric}\NormalTok{(date[}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{)]))) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()} \NormalTok{plot4} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning: Width not defined. Set with `position_dodge(width = ?)` \end{verbatim} \includegraphics{script_files/figure-latex/googletrends-4.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#part 5} \NormalTok{plot5 <-}\StringTok{ }\NormalTok{google_trends_longer }\OperatorTok{%>%} \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(date, mentions, }\DataTypeTok{fill =}\NormalTok{ hurricane)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_area}\NormalTok{(}\DataTypeTok{position =} \StringTok{"dodge"}\NormalTok{, }\DataTypeTok{alpha =} \FloatTok{0.8}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_vline}\NormalTok{(}\DataTypeTok{data =}\NormalTok{ landfall, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{xintercept=}\KeywordTok{as.numeric}\NormalTok{(date[}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{)])), }\DataTypeTok{linetype=}\DecValTok{4}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_text}\NormalTok{(}\DataTypeTok{mapping =} \KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ date,} \DataTypeTok{y =} \DecValTok{100}\NormalTok{,} \DataTypeTok{label =}\NormalTok{ Ref,} \DataTypeTok{hjust =} \DecValTok{0}\NormalTok{,} \DataTypeTok{vjust =} \DecValTok{0}\NormalTok{),} \DataTypeTok{data =}\NormalTok{ landfall) }\OperatorTok{+} \StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{title =} \StringTok{"US Google Search Interest on Hurricanes"}\NormalTok{,} \DataTypeTok{x =} \StringTok{"Date"}\NormalTok{,} \DataTypeTok{y =} \StringTok{"Search Interest"}\NormalTok{,} \DataTypeTok{label =} \StringTok{"Hurricane"}\NormalTok{,} \DataTypeTok{caption =} \StringTok{"Search interest measured in search term popularity relative to peak popularity} \StringTok{ in the given region and time period (with 100 as peak popularity)"}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{scale_fill_discrete}\NormalTok{(}\DataTypeTok{name =}\StringTok{"Hurricane"}\NormalTok{,} \DataTypeTok{breaks=}\KeywordTok{c}\NormalTok{(}\StringTok{"harvey_us"}\NormalTok{, }\StringTok{"irma_us"}\NormalTok{, }\StringTok{"jose_us"}\NormalTok{, }\StringTok{"maria_us"}\NormalTok{),} \DataTypeTok{labels=}\KeywordTok{c}\NormalTok{(}\StringTok{"Harvey"}\NormalTok{, }\StringTok{"Irma"}\NormalTok{, }\StringTok{"Jose"}\NormalTok{, }\StringTok{"Maria"}\NormalTok{)) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()} \NormalTok{plot5} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning: Width not defined. Set with `position_dodge(width = ?)` \end{verbatim} \includegraphics{script_files/figure-latex/googletrends-5.pdf} \hypertarget{replicating-national-cable-news-networks-plot}{% \subsection{2. 
Replicating ``National cable news networks'' Plot}\label{replicating-national-cable-news-networks-plot}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{tv_states_longer <-}\StringTok{ }\NormalTok{tv_states }\OperatorTok{%>%} \StringTok{ }\KeywordTok{pivot_longer}\NormalTok{(}\DataTypeTok{cols =}\NormalTok{ florida}\OperatorTok{:}\NormalTok{puerto_rico,} \DataTypeTok{names_to =} \StringTok{"state"}\NormalTok{,} \DataTypeTok{values_to =} \StringTok{"percent"}\NormalTok{)} \NormalTok{TVlines<-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{state =} \KeywordTok{c}\NormalTok{(}\StringTok{"florida"}\NormalTok{, }\StringTok{"texas"}\NormalTok{, }\StringTok{"puerto_rico"}\NormalTok{,}\StringTok{"florida"}\NormalTok{),} \DataTypeTok{date =} \KeywordTok{as.Date}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"2017-08-25"}\NormalTok{, }\StringTok{"2017-09-10"}\NormalTok{, }\StringTok{"2017-09-20"}\NormalTok{, }\StringTok{"2017-10-01"}\NormalTok{)),} \DataTypeTok{RefTV =} \KeywordTok{c}\NormalTok{(}\StringTok{"Harvey landfall"}\NormalTok{, }\StringTok{"Irma landfall"}\NormalTok{, }\StringTok{"Maria landfall"}\NormalTok{, }\StringTok{"Las Vegas shooting"}\NormalTok{),} \DataTypeTok{stringsAsFactors =} \OtherTok{FALSE}\NormalTok{)} \NormalTok{d <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{state =} \KeywordTok{c}\NormalTok{(}\StringTok{"texas"}\NormalTok{, }\StringTok{"florida"}\NormalTok{, }\StringTok{"puerto_rico"}\NormalTok{),} \DataTypeTok{date=}\KeywordTok{as.Date}\NormalTok{(}\KeywordTok{c}\NormalTok{(}\StringTok{"2017-08-28"}\NormalTok{,}\StringTok{"2017-09-10"}\NormalTok{,}\StringTok{"2017-10-01"}\NormalTok{)), } \DataTypeTok{percent=}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\FloatTok{1.5}\NormalTok{,}\FloatTok{0.40}\NormalTok{), } \DataTypeTok{name =} \KeywordTok{c}\NormalTok{(}\StringTok{"Texas"}\NormalTok{, }\StringTok{"Florida"}\NormalTok{, }\StringTok{"Puerto Rico"}\NormalTok{),} \DataTypeTok{stringsAsFactors =} \OtherTok{FALSE}\NormalTok{)} \NormalTok{TVplot <-}\StringTok{ }\NormalTok{tv_states_longer }\OperatorTok{%>%} \StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{state =} \KeywordTok{fct_relevel}\NormalTok{(state, }\StringTok{"florida"}\NormalTok{, }\StringTok{"texas"}\NormalTok{, }\StringTok{"puerto_rico"}\NormalTok{)) }\OperatorTok{%>%} \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(date, percent, }\DataTypeTok{fill =}\NormalTok{ state)) }\OperatorTok{+} \StringTok{ }\KeywordTok{guides}\NormalTok{(}\DataTypeTok{fill =} \OtherTok{FALSE}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_area}\NormalTok{(}\DataTypeTok{position =} \StringTok{"dodge"}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_vline}\NormalTok{(}\DataTypeTok{data =}\NormalTok{ TVlines, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{xintercept=}\KeywordTok{as.numeric}\NormalTok{(date[}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{)])), }\DataTypeTok{linetype=}\DecValTok{4}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_text}\NormalTok{(}\DataTypeTok{mapping =} \KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ date,} \DataTypeTok{y =} \DecValTok{4}\NormalTok{,} \DataTypeTok{label =}\NormalTok{ RefTV,} \DataTypeTok{hjust =} \StringTok{"center"}\NormalTok{,} \DataTypeTok{vjust =} \DecValTok{0}\NormalTok{),} \DataTypeTok{data =}\NormalTok{ TVlines) }\OperatorTok{+} \StringTok{ 
}\KeywordTok{scale_fill_manual}\NormalTok{(}\DataTypeTok{values =} \KeywordTok{c}\NormalTok{(}\StringTok{"#ff007b"}\NormalTok{,}\StringTok{"#ff6e00"}\NormalTok{,}\StringTok{"#00d9ff"}\NormalTok{))}\OperatorTok{+} \StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{title =} \StringTok{"National cable news networks"}\NormalTok{,} \DataTypeTok{x =} \OtherTok{NULL}\NormalTok{,} \DataTypeTok{y =} \StringTok{"Share of sentences"}\NormalTok{,} \DataTypeTok{caption =} \StringTok{"Includes Bloomberg, CNBC, CNN, Fox Business, Fox News and MSNBC."}\NormalTok{) }\OperatorTok{+} \StringTok{ } \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_text}\NormalTok{(}\DataTypeTok{data=}\NormalTok{d, }\DataTypeTok{mapping=}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{date, }\DataTypeTok{y=}\NormalTok{percent, }\DataTypeTok{label=}\NormalTok{name))} \NormalTok{TVplot} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning: Width not defined. Set with `position_dodge(width = ?)` \end{verbatim} \includegraphics{script_files/figure-latex/tv_states-1.pdf} \hypertarget{using-comic_characters-dataset}{% \subsection{2. Using Comic\_characters Dataset}\label{using-comic_characters-dataset}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(janitor)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Attaching package: 'janitor' \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:stats': ## ## chisq.test, fisher.test \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{comic_characters <-}\StringTok{ }\NormalTok{comic_characters} \NormalTok{N <-}\StringTok{ }\DecValTok{23272} \CommentTok{#pie chart of appearance count by first appearnce date} \NormalTok{comic_characters_gender <-}\StringTok{ }\NormalTok{comic_characters }\OperatorTok{%>%} \StringTok{ }\KeywordTok{count}\NormalTok{(}\DataTypeTok{gender_type =}\NormalTok{ sex) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{gender_ratio =}\NormalTok{ n}\OperatorTok{/}\NormalTok{N)}\CommentTok{#devide each sex count value by total } \NormalTok{comic_characters_gender}\OperatorTok{$}\NormalTok{gender_type <-}\StringTok{ }\KeywordTok{gsub}\NormalTok{(}\StringTok{" Characters"}\NormalTok{, }\StringTok{""}\NormalTok{, comic_characters_gender}\OperatorTok{$}\NormalTok{gender_type)} \NormalTok{comic_plot_}\DecValTok{1}\NormalTok{ <-}\StringTok{ }\NormalTok{comic_characters_gender }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(}\StringTok{""}\NormalTok{, gender_ratio, }\DataTypeTok{fill =}\NormalTok{ gender_type)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_bar}\NormalTok{(}\DataTypeTok{stat =} \StringTok{"identity"}\NormalTok{, }\DataTypeTok{width =} \DecValTok{5}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{coord_polar}\NormalTok{(}\StringTok{"y"}\NormalTok{, }\DataTypeTok{start =} \DecValTok{0}\NormalTok{) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_void}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{title =} \StringTok{"Ratio of Gender Types of Comic Book Characters"}\NormalTok{,} \DataTypeTok{x =} \StringTok{"Gender Ratio"}\NormalTok{,} \DataTypeTok{y =} \OtherTok{NULL}\NormalTok{,} \DataTypeTok{caption =} \StringTok{"Includes DC & Marvel Characters from 1938 to 2013"}\NormalTok{) } \CommentTok{#scale_color_discrete(name = "Gender Types")} \NormalTok{comic_plot_}\DecValTok{1} 
\end{Highlighting} \end{Shaded} \includegraphics{script_files/figure-latex/comic_characters-1.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#Bar plots for type of gender ratio by alignment} \NormalTok{comic_plot_}\DecValTok{2}\NormalTok{ <-}\StringTok{ }\NormalTok{comic_characters }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{count}\NormalTok{(}\DataTypeTok{gender_type =}\NormalTok{ sex, align) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\KeywordTok{mutate}\NormalTok{(}\DataTypeTok{gender_ratio =}\NormalTok{ n}\OperatorTok{/}\NormalTok{N) }\OperatorTok{%>%}\StringTok{ } \StringTok{ }\CommentTok{#tabyl(gender_type, align) %>% #returns dataframe with counts with sex as row and align as column, } \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(gender_ratio, align, }\DataTypeTok{fill =}\NormalTok{ gender_type)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_col}\NormalTok{() }\OperatorTok{+} \StringTok{ }\KeywordTok{facet_wrap}\NormalTok{(}\OperatorTok{~}\NormalTok{gender_type) }\OperatorTok{+} \StringTok{ }\KeywordTok{theme_minimal}\NormalTok{()} \NormalTok{comic_plot_}\DecValTok{2} \end{Highlighting} \end{Shaded} \includegraphics{script_files/figure-latex/comic_characters-2.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{comic_plot_}\DecValTok{3}\NormalTok{ <-}\StringTok{ }\NormalTok{comic_characters_gender }\OperatorTok{%>%} \StringTok{ }\KeywordTok{ggplot}\NormalTok{(}\KeywordTok{aes}\NormalTok{(gender_type, gender_ratio, }\DataTypeTok{fill =}\NormalTok{ gender_type)) }\OperatorTok{+} \StringTok{ }\KeywordTok{geom_bar}\NormalTok{(}\DataTypeTok{stat=}\StringTok{"identity"}\NormalTok{)} \NormalTok{comic_plot_}\DecValTok{3} \OperatorTok{+}\StringTok{ } \StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{title =} \StringTok{"Ratio of Gender Types of Comic Book Characters"}\NormalTok{,} \DataTypeTok{subtitle =} \StringTok{"Includes DC & Marvel Characters from 1938 to 2013"}\NormalTok{,} \DataTypeTok{x =} \StringTok{"Type"}\NormalTok{,} \DataTypeTok{y =} \StringTok{"Ratio"}\NormalTok{,} \DataTypeTok{label =} \StringTok{"Type"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{script_files/figure-latex/comic_characters-3.pdf} \end{document}
{ "alphanum_fraction": 0.7124094395, "avg_line_length": 59.0075, "ext": "tex", "hexsha": "118c4e6bd44c97c319e9cb156e3b63260595bcbb", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-02-08T20:43:17.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-08T20:43:17.000Z", "max_forks_repo_head_hexsha": "a58b4243378c16c2ffc6584e31867944c1875035", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rlatimer/Lab2", "max_forks_repo_path": "scripts/script.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a58b4243378c16c2ffc6584e31867944c1875035", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rlatimer/Lab2", "max_issues_repo_path": "scripts/script.tex", "max_line_length": 388, "max_stars_count": null, "max_stars_repo_head_hexsha": "a58b4243378c16c2ffc6584e31867944c1875035", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rlatimer/Lab2", "max_stars_repo_path": "scripts/script.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7611, "size": 23603 }
\SetAPI{C} \section{IRoutedEventHandlerExtendable} \label{extendable:IRoutedEventHandlerExtendable} \ClearAPI \javadoc{De.Osthus.Minerva.Extendable.IRoutedEventHandlerExtendable}{IRoutedEventHandlerExtendable} \javadoc{System.Windows.RoutedEventHandler}{RoutedEventHandler} \TODO %% GENERATED LISTINGS - DO NOT EDIT \inputcsharp{Extension point for instances of \type{RoutedEventHandler}} {Minerva.Core/minerva/extendable/IRoutedEventHandlerExtendable.cs} \begin{lstlisting}[style=Csharp,caption={Example to register to the extension point (C\#)}] IBeanContextFactory bcf = ... IBeanConfiguration myExtension = bcf.RegisterAnonymousBean... bcf.Link(myExtension).To<IRoutedEventHandlerExtendable>(); \end{lstlisting} %% GENERATED LISTINGS END
{ "alphanum_fraction": 0.8317631225, "avg_line_length": 41.2777777778, "ext": "tex", "hexsha": "9ddb5fbc81246ab9d5229b8852b9f824c2a55a96", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/extendable/IRoutedEventHandlerExtendable.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/extendable/IRoutedEventHandlerExtendable.tex", "max_line_length": 99, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/extendable/IRoutedEventHandlerExtendable.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 188, "size": 743 }
\documentclass[10pt, tikz,border=2mm, xcolor=dvipsnames]{beamer} \usetheme[progressbar=frametitle]{metropolis} \usepackage{appendixnumberbeamer} \usepackage{url} \usepackage{graphicx} \usepackage{hyperref} \usepackage{multirow} \usepackage{listings} \lstset{ escapeinside={(*@}{@*)}, language=C, basicstyle=\ttfamily, keywordstyle=\color{blue}\ttfamily, stringstyle=\color{red}\ttfamily, commentstyle=\color{green}\ttfamily, morecomment=[l][\color{magenta}]{\#} } \usepackage{color} \usepackage[font=small,labelfont=bf]{caption} \usepackage{tikzsymbols} \usepackage{pifont} \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \usepackage[utf8]{inputenc} \usepackage[absolute,overlay]{textpos} \usetikzlibrary{matrix,backgrounds, shapes, arrows.meta} \tikzstyle{every node}=[ellipse, thick] \usepackage{booktabs} \usepackage[scale=2]{ccicons} \usepackage{pgfplots} \usepgfplotslibrary{dateplot} \usepackage{xspace} \newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace} \title{IACA Clone} \subtitle{Bachelor proposal talk} \date{March 29, 2018} \author{Hendrik Meerkamp} \institute{} \titlegraphic{\hfill\includegraphics[height=1.5cm]{eule}} \begin{document} \maketitle \begin{frame}{Content} \setbeamertemplate{section in toc}[sections numbered] \tableofcontents[hideallsubsections] \end{frame} \section{The Nehalem architecture} \begin{frame}{The Nehalem architecture} \begin{center} \includegraphics[clip, scale=.26, trim=0 0 0 1.7cm]{Intel_Nehalem_arch} \end{center} \end{frame} \begin{frame}{The Nehalem architecture} \begin{tikzpicture}[remember picture,overlay] \node[xshift=-0.4cm,yshift=-0.3cm] at (current page.center) {\includegraphics[clip, trim=0.5cm 2cm 9.78cm 20cm, scale=.65]{Intel_Nehalem_arch}}; \end{tikzpicture} \end{frame} \section{Introduction to IACA} \begin{frame}{What is IACA?} \begin{itemize}[<+- | alert@+>] \item Intel® Architecture Code Analyzer \item Static analysing tool for assembly code (usually a loop) on Intel processors \item Computes port pressure and critical path \item Allows you to analyze code perfomance on a specific machine! \item Provides an estimate of performance\only<6>{, \alert{i.e. 
no absolute results}} \end{itemize} \end{frame} \begin{frame}[fragile]{How to use IACA?} \begin{lstlisting} #include "iacaMarks.h" int main(void) { int a = 3; int b = 4; (*@\textcolor{Green}{IACA\_START} @*) a++; b = a * 4; (*@\textcolor{Green}{IACA\_END} @*) return b; } \end{lstlisting} \end{frame} \begin{frame}[fragile]{IACAs Output - Reduced} \begin{itemize} \item DV - Divider Pipe \item D\phantom{V} - Data fetch pipe \end{itemize} \begin{center} \begin{tabular}{|c|c c|c|c c|c c|c|c|c|c|} \hline \textbf{Port} & 0 & DV & 1 & 2 & D & 3 & D & 4 & 5 & 6 & 7 \\ \hline \textbf{Cycles} & $0.5$ & $0.0$ & $0.5$ & $1.4$ & $1.0$ & $1.4$ & $1.0$ & $2.0$ & $0.5$ & $0.5$ & $1.3$ \\ \hline \end{tabular} \end{center} \end{frame} \begin{frame}[fragile]{IACAs Output - Detailled} \hspace*{-0.6cm} \resizebox{1.1\textwidth}{!}{ \begin{tabular}{|r|c|c c|c|c c|c c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Instruction}}}&\textbf{Num Of} & \multicolumn{11}{|c|}{Ports pressure in cycles} &\multirow{2}{*}{\textbf{CP}}\\ \cline{3-13} &$\boldsymbol{\mu}$\textbf{ops} & 0 & DV & 1 & 2 & D & 3 & D & 4 & 5 & 6 & 7 & \\ \cline{1-14} add dword ptr [rbp-0x4], 0x1& 4 & & & $0.5$ & $1.0$ & $1.0$ & $0.3$ & & $1.0$ & $0.5$ & & $0.6$ & $\surd$ \\ % \cline{1-13} mov eax, dword ptr [rbp-0x4] & 1 & & & & & & $1.0$ & $1.0$ & & & & & \\ % \cline{1-13} shl eax, 0x2&1 & $0.5$ & & & & & & & & & $0.5$ & & \\ %\cline{1-13} mov dword ptr [rbp-0x8], eax & 2 & & & & $0.3$ & & & & $1.0$ & & &$0.6$ & $\surd$ \\ \hline \end{tabular} } \end{frame} \section{Disadvantages of IACA} \begin{frame}[fragile]{Disadvantages of IACA} \begin{itemize}[<+- | alert@+>] \item It's closed source \item It's not up to date \begin{itemize}[<+- | alert@+>] \item IACA $2.3$ supports $1^{\text{st}}$ to $6^{\text{th}}$ generation of of Intel Core processors \item IACA $3.0$ only supports $4^{\text{th}}$ to $6^{\text{th}}$ generation \item The $8^{\text{th}}$ generation was released in 2017 \end{itemize} \item It completely ignores branches \end{itemize} \end{frame} \begin{frame}[fragile]{A branching example} \begin{verbatim} 1: mov rax, 1 2: cmp rcx, 0 3: jne else 4: add rbx, rax 5: jmp end else: 6: add rbx, rax end: 7: add rbx, rbx \end{verbatim} \end{frame} \begin{frame}[fragile]{The dependency graph} \begin{minipage}{.35\textwidth} \begin{verbatim} 1: mov rax, 1 2: cmp rcx, 0 3: jne else 4: add rbx, rax 5: jmp end else: 6: add rbx, rax end: 7: add rbx, rbx \end{verbatim} \end{minipage}% This must go next to `\end{minipage}` \begin{minipage}{.5\textwidth} \begin{tikzpicture} \node [draw] (A) {1: mov rax, 0x1}; \node [] (W1) [below of=A, below of=A] {}; \node [draw] (B)[left of=W1, left of=W1] {4: add rbx, rax}; \node [draw] (C) [right of=W1, right of=W1] {6: add rbx, rax}; \node [draw] (D) [below of=W1, below of=W1] {7: add rbx, rbx}; \draw[-{Latex[length=3mm]}, thick] (A)--(B); \draw[-{Latex[length=3mm]}, thick] (A)--(C); \draw[-{Latex[length=3mm]}, thick] (B)--(D); \draw[-{Latex[length=3mm]}, thick] (C)--(D); \end{tikzpicture} \end{minipage} \end{frame} \begin{frame}[fragile]{IACAs version of the graph} \begin{minipage}{.35\textwidth} \begin{verbatim} 1: mov rax, 1 2: cmp rcx, 0 3: jne else 4: add rbx, rax 5: jmp end else: 6: add rbx, rax end: 7: add rbx, rbx \end{verbatim} \end{minipage}% This must go next to `\end{minipage}` \begin{minipage}{.5\textwidth} \begin{tikzpicture} \node [draw] (A) {1: mov rax, 0x1}; \node [] (W1) [below of=A, below of=A] {}; \node [draw] (B)[left of=W1, left of=W1] {4: add rbx, rax}; \node [draw] (C) [right 
of=W1, right of=W1] {6: add rbx, rax}; \node [draw] (D) [below of=W1, below of=W1] {7: add rbx, rbx}; \draw[-{Latex[length=3mm]}, thick] (A)--(B); \draw[-{Latex[length=3mm]}, thick] (A)--(C); \draw[-{Latex[length=3mm]}, thick] (D)--(B); \draw[-{Latex[length=3mm]}, thick] (B)--(C); \draw[-{Latex[length=3mm]}, thick] (C) to [bend right=15](D); \draw[-{Latex[length=3mm]}, thick] (C) to [bend left=15](D); \end{tikzpicture} \end{minipage} \end{frame} \section{My take on it} \begin{frame}{My take on it} \textbf{What exactly will I do?} \begin{itemize}[<+- | alert@+>] \item Reimplement the core functionality of IACA \item Branch dependencies will be taken into account \item My implementation will take measurements of microarchitectures as an input \item I.e. you can analyze new Intel Core processors basically on release \end{itemize} \end{frame} \begin{frame}[fragile]{The XML measurement file} \begin{lstlisting}[language=XML, basicstyle=\ttfamily\scriptsize, breaklines=true] <instruction ... iform="ADD_LOCK_MEMv_GPRv" ...> <operand idx="2" type="reg" ...>RAX,RCX,RDX,RBX,...</operand> <operand idx="3" type="flag" ...>OF</operand> ... <architecture name="NHM"> <measurement ... port15="2" port2="1" port3="1" port4="1" total_uops="5"> <latency ... cycles="19" startOp="2" targetOp="3"/> ... <\measurement> </architecture> <\instruction> \end{lstlisting} \end{frame} \section{Problems I'll be facing} \begin{frame}{Inaccuracies} \textbf{I can't know\dots} \begin{itemize}[<+- | alert@+>] \item which algorithm the ``reservation station'' uses \item about the dependencies between $\mu$ops \end{itemize} \only<3->{\textbf{Also\dots}} \begin{itemize}[<+- | alert@+>] \item it is very hard to compute latency under non-optimal conditions \item some instruction's "iforms" are incomplete in the XML file \end{itemize} \end{frame} {\setbeamercolor{palette primary}{fg=white, bg=black} \begin{frame}[standout] Thank you for your attention!\\ Questions? \end{frame} } \begin{frame}{References} \bibliography{zitate} \bibliographystyle{siam} \end{frame} \end{document}
{ "alphanum_fraction": 0.6353664417, "avg_line_length": 27.2, "ext": "tex", "hexsha": "3139d674486d3ea6710c9f977b3351fb6b5fe778", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2f9561e69a285415053333bc5b92b45d6fd91aed", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Henni16/SUACA", "max_forks_repo_path": "Vortrag/vortrag.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "2f9561e69a285415053333bc5b92b45d6fd91aed", "max_issues_repo_issues_event_max_datetime": "2020-05-25T13:52:37.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-25T13:47:23.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Henni16/SUACA", "max_issues_repo_path": "Vortrag/vortrag.tex", "max_line_length": 159, "max_stars_count": null, "max_stars_repo_head_hexsha": "2f9561e69a285415053333bc5b92b45d6fd91aed", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Henni16/SUACA", "max_stars_repo_path": "Vortrag/vortrag.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3160, "size": 8296 }
\newcommand{\issue}[3]{\begin{minipage}{\textwidth}\textbf{Problem:}\\ #1 \\ \textbf{Solution:}\\ #2 \\ \textbf{Change:} \\ #3 \end{minipage}\vskip.2\baselineskip}
\newcommand{\keyword}[1]{\textbf{#1}}
\newcommand{\class}[1]{\textcolor{blue}{#1}}
\newcommand{\package}[1]{\textbf{#1}}
\newcommand{\member}[1]{\textit{#1}}
\chapter{Issues}
During the implementation phase, various unforeseen issues were encountered, which made adjustments to the original design necessary. This chapter briefly describes the challenges and their respective resolutions.
\RaggedRight
\input{sections/Issues/Core.tex}
\input{sections/Issues/Common.tex}
\input{sections/Issues/CommandLineInterface.tex}
\input{sections/Issues/Modules.tex}
\input{sections/Issues/BrowserExtension.tex}
{ "alphanum_fraction": 0.7630890052, "avg_line_length": 44.9411764706, "ext": "tex", "hexsha": "c4df3259d890bc4617de72afb0b4e769e02d6f52", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-07-24T06:05:52.000Z", "max_forks_repo_forks_event_min_datetime": "2020-07-24T06:05:52.000Z", "max_forks_repo_head_hexsha": "0830f2155fb3b32dc127587e07cbd780deb0e118", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "prtest01/MORR", "max_forks_repo_path": "documents/implementation/sections/Issues.tex", "max_issues_count": 110, "max_issues_repo_head_hexsha": "0830f2155fb3b32dc127587e07cbd780deb0e118", "max_issues_repo_issues_event_max_datetime": "2020-04-05T20:55:05.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-28T16:49:24.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "prtest01/MORR", "max_issues_repo_path": "documents/implementation/sections/Issues.tex", "max_line_length": 163, "max_stars_count": 5, "max_stars_repo_head_hexsha": "0830f2155fb3b32dc127587e07cbd780deb0e118", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "insightmind/MORR", "max_stars_repo_path": "documents/implementation/sections/Issues.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-26T20:21:13.000Z", "max_stars_repo_stars_event_min_datetime": "2020-02-03T14:52:47.000Z", "num_tokens": 219, "size": 764 }
\documentclass[12pt]{article} \usepackage[margin=1in]{geometry} \usepackage{enumerate} \usepackage{color} \begin{document} \title{Project Plan\\ Interactive Visualizations in Mathematica \\ Spring 2021} \author{Faculty Mentor: A.J. Hildebrand \\ Project Leader: Efstathios Konstantinos Chrontsios Garitsis \\ IGL Scholars: Xiaojun Jia, Adithya Swaminathan, \\ Dimitrios Tambakos, Troy Yang, Sarah Zimmerman} \date{February 12, 2021} \maketitle \section{Project Goals} \begin{itemize} \item Create interactive Mathematica-based visualizations of fractal-like phenomena arising in number theory and analysis for use in instruction and outreach activities. \item Learn more about the mathematics behind these topics. \item Make the visualizations available to a broader audience through publication at the Wolfram Demonstrations website. \end{itemize} \section{Responsibilities} \begin{itemize} \item All team members are expected to develop the necessary Mathematica skills for this project. \item As the semester progresses, we will split into subgroups, with each focusing on a different aspect of the project. \end{itemize} \section{Meeting Times} \begin{itemize} \item Tuesdays/Fridays, 6:00 PM - 7:00 PM \item Backup: Mondays, 7:00 PM - 8:00 PM \end{itemize} \end{document}
{ "alphanum_fraction": 0.787037037, "avg_line_length": 32.4, "ext": "tex", "hexsha": "054266d695b8b2f029169cbd01a1cf11b8ae81b6", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-21T23:05:43.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-21T23:05:43.000Z", "max_forks_repo_head_hexsha": "1691d4664ea4dd1392a244847763d26942c621b3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ajhildebrand/sum-of-digit-fractals", "max_forks_repo_path": "IGL_Interactive_Visualizations_Spring2021_ProjectPlan.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1691d4664ea4dd1392a244847763d26942c621b3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ajhildebrand/sum-of-digit-fractals", "max_issues_repo_path": "IGL_Interactive_Visualizations_Spring2021_ProjectPlan.tex", "max_line_length": 120, "max_stars_count": 1, "max_stars_repo_head_hexsha": "fcc0340409afdd71f16fec4e93cd9280f13af33e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adiswami14/exponential-random-walks", "max_stars_repo_path": "IGL_Interactive_Visualizations_Spring2021_ProjectPlan.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-22T00:51:37.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-22T00:51:37.000Z", "num_tokens": 345, "size": 1296 }
\section{Results and Discussion}
\label{sec:result}
In this section we discuss the implementation and performance comparison of the different NoC implementations. All designs are modeled using Verilog HDL and extensively simulated for their functional correctness. We use the CONNECT open-source NoC platform as the reference for the fat tree, mesh, and torus implementations~\cite{papa_connect_fpga2012}. The designs are simulated as well as implemented with Vivado 2017.3 targeting a Xilinx xc7vx690t FPGA (VC709 evaluation board).
\input Tables/SystemResource
Table~\ref{table:systemResourceConsumption} compares the resource utilization and the maximum frequency of operation for the binary tree, AsyncBTree, and fat tree for different network sizes (number of PEs). For all implementations, the interface between PEs and the switches is kept 32 bits wide. As expected, binary trees are the least resource intensive due to their simple switch architecture. AsyncBTree consumes 45\% to 65\% more LUTs (look-up tables) and about 165\% more flip-flops compared to the binary tree implementation. The multiple frequencies in the AsyncBTree rows represent the maximum clock frequency supported at different tree levels. At the lowest level (switches connected to PEs), the clock performance is better than that of binary trees but deteriorates at higher levels of the hierarchy. This could be because of the additional pipelining present inside the asynchronous FIFOs. This also means that if AsyncBTree is used as a synchronous NoC (all tree levels clocked by a single clock source), its resource consumption and performance will be worse than those of a binary tree. Compared to AsyncBTree, fat trees consume $\sim$3.7$\times$ the LUTs but less than half the number of flip-flops. AsyncBTree requires more flip-flops due to the presence of asynchronous FIFOs. For fat trees, the impact of the complex switches can be clearly seen in the clock performance, where they are not even able to achieve 200 MHz on a high-end FPGA. Due to the low clock performance of the NoC, the PEs also have to be under-clocked in most scenarios for overall synchronous operation.
%Otherwise synchronization FIFOs have to be inserted between PEs and the NoC switches, which will further increase the resource utilization.
Considering that the number of LUTs available in 7-series Xilinx FPGAs is half the number of flip-flops, AsyncBTree is much lighter than fat trees while providing more than 4$\times$ the clock performance.
\input Tables/ResourceDiffConf
Table~\ref{table:NocResourceUtilisation} lists the resource utilization and clock performance of the most popular FPGA NoC topologies, namely mesh and torus. The data show that these topologies are quite resource intensive compared to all binary tree configurations, especially in the number of LUTs. In terms of clock performance, for larger NoC configurations they perform better than fat trees but are inferior to traditional binary trees and the proposed AsyncBTrees.

Fig.~\ref{fig:tput} and~\ref{fig:latency} compare the throughput and latency of the three implementations for different NoC sizes under different traffic patterns such as random, tornado, and reverse~\cite{Bahn2008}. The different patterns differ in how the destination addresses are generated for each data packet. In each case, the PE-to-switch interface is clocked at the lowest frequency supported among the three implementations.
For AsyncBTree, upper levels are clocked at double the frequency of the lower levels, limited by the maximum supported frequency given in Table~\ref{table:systemResourceConsumption}. It can be seen that for NoC sizes up to 8 PEs, AsyncBTree performance is better than or comparable to that of fat trees. For larger tree sizes, fat trees perform better since the clock frequencies cannot be scaled beyond a limit. If PEs run at lower clock frequencies ($\sim$50 MHz), AsyncBTree can provide better performance for NoCs with up to 16 PEs.

Fig.~\ref{fig:tputmax} and~\ref{fig:latmax} compare the throughput and latency when each implementation is running at its maximum supported frequency. Again, for smaller NoC sizes the binary tree and AsyncBTree outperform the fat tree for random traffic patterns, but for larger NoC sizes fat trees are clearly advantageous. Fig.~\ref{fig:tputmax}(a) also shows the performance of the mesh topology, which is the most popular NoC topology, compared to the different tree topologies. Again, for smaller NoC sizes (8 PEs or fewer) AsyncBTree performance is better. Several FPGA-based multi-processor systems have 8 cores or fewer; thus, AsyncBTree could be well suited for their implementation.

Fig.~\ref{fig:tputPerf} compares the performance of each NoC against its resource utilization. The total number of resources is calculated by adding the number of LUTs to the scaled number of flip-flops. The number of flip-flops is multiplied by a factor of 0.5 since, in Xilinx 7-series FPGAs, there are twice as many flip-flops as LUTs in every logic slice. Synchronous binary trees clearly have the upper hand in this regard. AsyncBTree gives moderately high throughput while consuming relatively few resources, but for larger NoC sizes the mesh topology is still the more suitable candidate.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Data/tputVsCost.pdf}
\vspace{-5mm}
\caption{Throughput vs cost}
\vspace{-5mm}
\label{fig:tputPerf}
\end{figure}
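Written out, the cost metric used in Fig.~\ref{fig:tputPerf} is therefore
\[
\mathrm{Cost} = N_{\mathrm{LUT}} + 0.5 \times N_{\mathrm{FF}},
\]
where $N_{\mathrm{LUT}}$ and $N_{\mathrm{FF}}$ denote the LUT and flip-flop counts of an implementation; these symbols are introduced here only as shorthand for the weighting described above.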
{ "alphanum_fraction": 0.8107760206, "avg_line_length": 87.7096774194, "ext": "tex", "hexsha": "fe96e8e6dac0499568e0e372db2de6eb64cf13b3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5066c0f2b53b1fef3b2ed6132bcf010190de149c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vipinkmenon/HNoC", "max_forks_repo_path": "paper/Sections/4_results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5066c0f2b53b1fef3b2ed6132bcf010190de149c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vipinkmenon/HNoC", "max_issues_repo_path": "paper/Sections/4_results.tex", "max_line_length": 226, "max_stars_count": null, "max_stars_repo_head_hexsha": "5066c0f2b53b1fef3b2ed6132bcf010190de149c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vipinkmenon/HNoC", "max_stars_repo_path": "paper/Sections/4_results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1207, "size": 5438 }
{% load restructuredtext_tags %} \documentclass{article} {% include "core/print_front_matter.tex" %} \pagestyle{headings} \begin{document} \setcounter{page}{1} \begin{tikzpicture}[remember picture,overlay] \node [xshift=-1in,yshift=-1in] at (current page.north east) [below left] {Name: \underline{\makebox[2in]{}}}; \end{tikzpicture} \begin{center} \LARGE{{ '{' }}{{ page.title }}{{ '}' }} \\[2mm] \small{{ '{' }}\sf {{ page.date|date:"b d, Y"|title }}{{ '}' }} \end{center} \thispagestyle{empty} \section*{Word Problems} Show all your work and circle your final answer. (Ten points each.) \par {{ page.content|rst2latex }} \end{document}
{ "alphanum_fraction": 0.6733436055, "avg_line_length": 23.1785714286, "ext": "tex", "hexsha": "2db4be570065e02dc3417449af6ec0d3066ae9ae", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5fa57dbb9c0c9a010b4dc153f832b2d130bc8f73", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dulrich15/spot", "max_forks_repo_path": "apps/core/templates/core/print_exam.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5fa57dbb9c0c9a010b4dc153f832b2d130bc8f73", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dulrich15/spot", "max_issues_repo_path": "apps/core/templates/core/print_exam.tex", "max_line_length": 110, "max_stars_count": null, "max_stars_repo_head_hexsha": "5fa57dbb9c0c9a010b4dc153f832b2d130bc8f73", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dulrich15/spot", "max_stars_repo_path": "apps/core/templates/core/print_exam.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 205, "size": 649 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ english, man, fleqn, noextraspace,floatsintext]{apa6} \title{Race-Related Discrimination and the Behavioral Drive for Muscularity in Asian/Asian American Men} \author{Claire Guidinger\textsuperscript{1} \& Yijun Cheng\textsuperscript{1}} \date{} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{iftex} \ifPDFTeX \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Race-Related Discrimination and the Behavioral Drive for Muscularity in Asian/Asian American Men}, pdfauthor={Claire Guidinger1 \& Yijun Cheng1}, pdflang={en-EN}, pdfkeywords={Asian/Asian American male, racism, muscularity}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering % Make \paragraph and \subparagraph free-standing \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi \newlength{\cslhangindent} \setlength{\cslhangindent}{1.5em} \newlength{\csllabelwidth} \setlength{\csllabelwidth}{3em} \newlength{\cslentryspacingunit} % times entry-spacing \setlength{\cslentryspacingunit}{\parskip} \newenvironment{CSLReferences}[2] % #1 hanging-ident, #2 entry spacing {% don't indent paragraphs \setlength{\parindent}{0pt} % turn on hanging indent if param 1 is 1 \ifodd #1 \let\oldpar\par \def\par{\hangindent=\cslhangindent\oldpar} \fi % set entry spacing \setlength{\parskip}{#2\cslentryspacingunit} }% {} \usepackage{calc} \newcommand{\CSLBlock}[1]{#1\hfill\break} \newcommand{\CSLLeftMargin}[1]{\parbox[t]{\csllabelwidth}{#1}} 
\newcommand{\CSLRightInline}[1]{\parbox[t]{\linewidth - \csllabelwidth}{#1}\break} \newcommand{\CSLIndent}[1]{\hspace{\cslhangindent}#1} % Manuscript styling \usepackage{upgreek} \captionsetup{font=singlespacing,justification=justified} % Table formatting \usepackage{longtable} \usepackage{lscape} % \usepackage[counterclockwise]{rotating} % Landscape page setup for large tables \usepackage{multirow} % Table styling \usepackage{tabularx} % Control Column width \usepackage[flushleft]{threeparttable} % Allows for three part tables with a specified notes section \usepackage{threeparttablex} % Lets threeparttable work with longtable % Create new environments so endfloat can handle them % \newenvironment{ltable} % {\begin{landscape}\centering\begin{threeparttable}} % {\end{threeparttable}\end{landscape}} \newenvironment{lltable}{\begin{landscape}\centering\begin{ThreePartTable}}{\end{ThreePartTable}\end{landscape}} % Enables adjusting longtable caption width to table width % Solution found at http://golatex.de/longtable-mit-caption-so-breit-wie-die-tabelle-t15767.html \makeatletter \newcommand\LastLTentrywidth{1em} \newlength\longtablewidth \setlength{\longtablewidth}{1in} \newcommand{\getlongtablewidth}{\begingroup \ifcsname LT@\roman{LT@tables}\endcsname \global\longtablewidth=0pt \renewcommand{\LT@entry}[2]{\global\advance\longtablewidth by ##2\relax\gdef\LastLTentrywidth{##2}}\@nameuse{LT@\roman{LT@tables}} \fi \endgroup} % \setlength{\parindent}{0.5in} % \setlength{\parskip}{0pt plus 0pt minus 0pt} % \usepackage{etoolbox} \makeatletter \patchcmd{\HyOrg@maketitle} {\section{\normalfont\normalsize\abstractname}} {\section*{\normalfont\normalsize\abstractname}} {}{\typeout{Failed to patch abstract.}} \patchcmd{\HyOrg@maketitle} {\section{\protect\normalfont{\@title}}} {\section*{\protect\normalfont{\@title}}} {}{\typeout{Failed to patch title.}} \makeatother \shorttitle{EDLD 651 Final Project} \keywords{Asian/Asian American male, racism, muscularity} \usepackage{csquotes} \raggedbottom \setlength{\parskip}{0pt} \ifXeTeX % Load polyglossia as late as possible: uses bidi with RTL langages (e.g. Hebrew, Arabic) \usepackage{polyglossia} \setmainlanguage[]{english} \else \usepackage[main=english]{babel} % get rid of language-specific shorthands (see #6817): \let\LanguageShortHands\languageshorthands \def\languageshorthands#1{} \fi \ifLuaTeX \usepackage{selnolig} % disable illegal ligatures \fi \affiliation{\vspace{0.5cm}\textsuperscript{1} University of Oregon} \abstract{ Although currently understudied in eating disorder literature, Asian and Asian/American men report among the highest rates of disordered eating behaviors, including excessive and compulsive exercise. Historically, Asian/Asian American men have been stereotyped to be smaller, more feminine, and less masculine than their non-Asian peers. These harmful stereotypes may result in Asian/Asian American men engaging in extreme behaviors to achieve the increasingly mesomorphic (lean and muscular) Western male body ideal. There is a robust body of literature implicating instances of race-related discrimination as being highly correlated with negative mental health outcomes, including depression and anxiety. Yet, no studies to date have examined the link between race-related discrimination and Asian/Asian American men's disordered exercise behaviors, including behaviors aimed at increasing muscularity (e.g., excessive weightlifting, anabolic steroid use, supplement consumption). 
This study seeks to address limitations in the current disordered eating literature by investigating the link between Asian/Asian American men's experience with race-related discrimination, including overt racism and microaggressions, with the behavioral drive for muscularity. Data for the current study included a nationally representative sample of 266 Asian/Asian American men (Mage = 24.4 ± 3.6y; MBMI = 24.2 ± 5.6 kg/\(m^2\)) who completed an online Qualtrics survey. After adjusting for income, education, and presence of a psychiatric diagnoses, linear regression models indicated that both experiences with overt racism and microaggressions were significantly and positively associated with the behavioral drive for muscularity in Asian/Asian American men. These finding shed further light on the numerous, adverse effects of race-related discrimination on minority mental health, such as the behavioral drive for muscularity in Asian/Asian American men. } \begin{document} \maketitle \hypertarget{introduction}{% \section{Introduction}\label{introduction}} Historically, men have been understudied and underrepresented in disordered eating research (Braun, Sunday, Huang, \& Halmi, 1999; Lavender, Brown, \& Murray, 2017). Yet, increasing and compelling data indicate that young men between the ages of 18-30, in particular, report high rates of disordered eating symptoms (Braun et al., 1999; Strother, Lemberg, Stanford, \& Turberville, 2012). Excessive exercise and muscularity-enhancing behaviors may be especially applicable to young men, given the current sociocultural pressures for young men to embody the mesomorphic body ideal (e.g., a lean and muscular physique) (Lavender et al., 2017). Indeed, many men report being dissatisfied with their bodies and a desire to reduce their fat mass and increase their muscle mass (Baghurst, Hollander, Nardella, \& Haff, 2006; Pope et al., 2005). Excessive exercise aimed at enhancing muscularity may function to reduce body dissatisfaction while also simultaneously working towards achieving the mesomorphic body ideal. Although excessive exercise and muscularity-enhancing behaviors are rampant in young men (Spann \& Pritchard, 2008), little is known about sociocultural risk factors that precipitate and maintain these behaviors. Extant data suggest that Asian/Asian American men report the most severe disordered eating symptoms, such as muscularity-enhancing behaviors, across racial/ethnic groups (Kelly, Cotter, Tanofsky-Kraff, \& Mazzeo, 2015; Strother et al., 2012). Indeed, Asian/Asian American men often rate their bodies as smaller than their ideal physique (Barnett, Keel, \& Conoscenti, 2001). Potential romantic partners also rate Asian/Asian American men as less masculine and more feminine than their non-Asian counterparts (Wilkins, Chan, \& Kaiser, 2011). These harmful stereotypes may render Asian/Asian American men especially susceptible to engaging in muscularity-enhancing behaviors in an effort to achieve the mesomorphic body ideal. Evidently, harmful stereotypes have a profound effect on Asian/Asian American men's body image and associated disordered eating behaviors. Racial discrimination, in the forms of both overt racism and microaggressions, may be particularly relevant to Asian/Asian American men's behavioral drive for muscularity (Nadal, Griffin, Wong, Hamit, \& Rasmus, 2014). 
Preliminary data suggest that overt racism (e.g., ``Asian Americans were historically targets of racism'') and microaggressions (e.g., ``a student you do not know asks you for help in math'') are positively associated with disinhibited eating in young, Asian/Asian American men (e.g., binge eating and loss of control eating) (Kelly et al., 2018). However, no studies to date have identified if experiences with overt racism and microaggressions are linked to muscularity enhancing behaviors, specifically (e.g., body building, metabolic steroid use, excessive weightlifting) in young Asian/Asian American men. \hypertarget{study-aims-and-hypotheses}{% \subsection{Study Aims and Hypotheses}\label{study-aims-and-hypotheses}} This study seeks to examine the link between experiences with racial discrimination, both in the forms of overt racism and microaggressions, in young Asian/Asian American men. It is hypothesized that experiences with both overt racism and microaggressions will be significantly and positively associated with the behavioral drive for muscularity (e.g., body building, supplement consumption, metabolic steroid use, excessive weightlifting, etc.). The study hypotheses are as follows: \emph{Hypothesis 1}: Experiences with overt racism will be significantly and positively associated with the behavioral drive for muscularity in young, Asian/Asian American men. \emph{Hypothesis 2}: Experiences with microaggressions will be significantly and positively associated with the behavioral drive for muscularity in young, Asian/Asian American men. \hypertarget{methods}{% \section{Methods}\label{methods}} This study was approved by the University of Oregon Institutional Review Board (IRB). Data were collected between January-February 2017. Participants were recruited through Qualtrics Panels, which utilize social media outlets to recruit a diverse sample of survey respondents. Eligibility criteria included being 18-to-30-years-old; self-identifying as male and Asian/Asian American; and English fluency. Participants were asked to complete an online survey. All study responses were anonymous and considered invalid if less than 80\% of questions were answered (Dong \& Peng, 2013), the survey was completed in \textless{} 2 minutes (n = 9), or if participants failed to answer ``yes'' to an embedded validity item (n = 52). \hypertarget{measures}{% \subsection{Measures}\label{measures}} \emph{Demographics.} Participants self-reported their age; height (\emph{ft, in}) and weight (\emph{lbs.}), from which body mass index (BMI) in kg/\(m^2\) was calculated; ethnicity; generation status; geographic region; highest education; employment status; income; geographic region; and presence of a psychiatric diagnosis.\\ \emph{Experiences with racism.} Participants completed the 13-item Asian American Racism-Related Stress Inventory (Miller, Kim, Chen, \& Alvarez, 2012). Items were rated on a 5-point scale from 1 (This has never happened to me or someone I know) to 5 (This event happened, and I was extremely upset). Two subscale composite scores were created to measure experiences with overt racism (e.g., ``You see a TV commercial in which an Asian character speaks bad English and acts subservient to non-Asian characters'') and microaggressions (e.g., ``Someone asks you if you can teach him or her karate''). 
The Asian American Racism-Related Stress Inventory (Miller et al., 2012) has been found to have strong psychometric properties (\(\alpha\) = 0.81-0.95).\\ \emph{Behavioral Drive for Muscularity.} The 15-item Drive for Muscularity Scale {[}DMS; McCreary and Sasse (2000){]} will be used to assess the behavioral drive for muscularity. The DMS measures drive for muscularity across both cognitive and behavioral dimensions; the construct of interest in the present study is the behavioral dimension (e.g., ``I lift weights to build up muscle''). Participants rated the frequency to which they engage in behaviors with the intention to increase muscularity on a 6-point Likert scale from 1 (never) to 6 (always). A mean score of the behavioral items was calculated, with higher scores indicating a greater behavioral drive for muscularity. The DMS has demonstrated good internal consistency among ethnically diverse adult men (e.g., Swami, Barron, Weis, \& Furnham, 2016). \hypertarget{results}{% \section{Results}\label{results}} RStudio Statistical Software was used for all analyses. We used R {[}Version 4.0.3; R Core Team (2020){]} and the R-packages \emph{dplyr} {[}Version 1.0.4; Wickham, François, Henry, and Müller (2021){]}, \emph{forcats} {[}Version 0.5.1; Wickham (2021){]}, \emph{ggplot2} {[}Version 3.3.5; Wickham (2016){]}, \emph{ggResidpanel} {[}Version 0.3.0; Goode and Rey (2019){]}, \emph{here} {[}Version 1.0.1; Müller (2020){]}, \emph{htmlwidgets} {[}Version 1.5.4; Vaidyanathan et al. (2021){]}, \emph{janitor} {[}Version 2.1.0; Firke (2021){]}, \emph{kableExtra} {[}Version 1.3.4; Zhu (2021){]}, \emph{ltm} {[}Version 1.1.1; Rizopoulos (2006){]}, \emph{MASS} {[}Version 7.3.53; Venables and Ripley (2002){]}, \emph{msm} {[}Version 1.6.9; Jackson (2011){]}, \emph{papaja} {[}Version 0.1.0.9997; Aust and Barth (2020){]}, \emph{patchwork} {[}Version 1.1.1; Pedersen (2020){]}, \emph{performance} {[}Version 0.8.0; Lüdecke, Ben-Shachar, Patil, Waggoner, and Makowski (2021){]}, \emph{polycor} {[}Version 0.7.10; Fox (2019){]}, \emph{purrr} {[}Version 0.3.4; Henry and Wickham (2020){]}, \emph{readr} {[}Version 1.4.0; Wickham and Hester (2020){]}, \emph{rio} {[}Version 0.5.27; Chan, Chan, Leeper, and Becker (2021){]}, \emph{see} {[}Version 0.6.8; Lüdecke et al. (2021){]}, \emph{stargazer} {[}Version 5.2.2; Hlavac (2018){]}, \emph{stringr} {[}Version 1.4.0; Wickham (2019){]}, \emph{tibble} {[}Version 3.0.6; Müller and Wickham (2021){]}, \emph{tidyr} {[}Version 1.1.2; Wickham (2020){]}, \emph{tidyverse} {[}Version 1.3.0; Wickham et al. (2019){]}, and \emph{tinytex} {[}Version 0.35; Xie (2019){]} for all our analyses. To ensure data met model assumptions, data were first screened using the ``Performance'' and ``ggResidpanel'' packages to assess indices of model quality, goodness of fit, and data missingness. Data fulfilled all model assumptions and missing data were minimal (\textless2\%), and thus listwise deletion was employed (Buhi, Goodson, \& Neilands, 2008). All analyses adjusted for BMI, education, income, and presence of psychiatric diagnosis given a robust body of prior literature identifying significant, positive associations with disordered eating symptoms (McLean et al., 2014; Striegel, Bedrosian, Wang, \& Schwartz, 2012)(Table 1). 
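To make the analytic workflow concrete, a minimal sketch of the composite scoring and adjusted models described above is shown below. The data frame and variable names used here (e.g., \texttt{dat}, \texttt{racism\_mean}, \texttt{psych\_dx}) are illustrative placeholders rather than the study's actual column names, and the code is intended only to illustrate the model specification.

\begin{verbatim}
# Behavioral drive for muscularity: mean of the DMS behavioral items
# (placeholder item names; the real item columns differ).
dat$dms_behavioral <- rowMeans(dat[, grep("^dms_b[0-9]", names(dat))],
                               na.rm = TRUE)

# Adjusted linear models: one per discrimination subscale, each
# controlling for income, education, BMI, and psychiatric diagnosis.
fit_racism    <- lm(dms_behavioral ~ income + education + bmi + psych_dx +
                      racism_mean,    data = dat)
fit_microaggr <- lm(dms_behavioral ~ income + education + bmi + psych_dx +
                      microaggr_mean, data = dat)

summary(fit_racism)      # overall F-test, R^2, and coefficient estimates
summary(fit_microaggr)
confint(fit_racism)      # 95% confidence intervals for the coefficients
\end{verbatim}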
\begin{table}[tbp] \begin{center} \begin{threeparttable} \caption{\label{tab:table1}Means and Standard Deviations for Study Variables} \begin{tabular}{lrrr} \toprule Psychiatric Diagnosis Group & \multicolumn{1}{c}{Category} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{SD}\\ \midrule No Diagnosis & bmi & 23.99 & 5.44\\ & dms & 3.19 & 1.12\\ & racism & 2.58 & 0.89\\ & microaggr & 2.17 & 0.90\\ With a Diagnosis & bmi & 24.87 & 6.03\\ & dms & 3.06 & 1.03\\ & racism & 2.67 & 1.01\\ & microaggr & 2.06 & 0.86\\ \bottomrule \addlinespace \end{tabular} \begin{tablenotes}[para] \normalsize{\textit{Note.} Means and Standard Deviations for body mass index, drive for muscularity, racism, and microaggressions for Asian/Asian American men with and without a cormorbid psychiatric diagnosis} \end{tablenotes} \end{threeparttable} \end{center} \end{table} A linear regression was conducted to examine the link between experiences with overt racism and the behavioral drive for muscularity. Experiences with overt racism were significantly and positively associated with the behavioral drive for muscularity in Asian/Asian American men, F(5, 250) = 4.06, \emph{p} \textless{} .01, \(R^2\) = 0.08 (Table 2). Experiences with microaggressions were also significantly and positively associated with the behavioral drive for muscularity in Asian/Asian American men, F(5, 250) = 6.48, \emph{p} \textless{} .001, \(R^2\) = 0.12 (Table 3). \begin{table}[tbp] \begin{center} \begin{threeparttable} \caption{\label{tab:regression-table}Dependent Variable: Behavioral drive for muscularity} \begin{tabular}{lrcrr} \toprule predictor & \multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(250)$} & \multicolumn{1}{c}{$p$}\\ \midrule (Intercept) & 2.21 & $[1.42$, $2.99]$ & 5.53 & < .001\\ Income & 0.09 & $[-0.02$, $0.21]$ & 1.67 & .096\\ Education & -0.04 & $[-0.22$, $0.14]$ & -0.45 & .652\\ BMI & 0.00 & $[-0.02$, $0.03]$ & 0.33 & .742\\ Psychiatric Diagnosis & -0.22 & $[-0.56$, $0.12]$ & -1.29 & .199\\ Overt Racism (Mean) & 0.27 & $[0.13$, $0.42]$ & 3.65 & < .001\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} \begin{table}[tbp] \begin{center} \begin{threeparttable} \caption{\label{tab:regression-table}Dependent Variable: Behavioral drive for muscularity} \begin{tabular}{lrcrr} \toprule predictor & \multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{$t(250)$} & \multicolumn{1}{c}{$p$}\\ \midrule (Intercept) & 2.10 & $[1.34$, $2.86]$ & 5.45 & < .001\\ Income & 0.11 & $[0.00$, $0.22]$ & 1.95 & .052\\ Education & -0.04 & $[-0.22$, $0.13]$ & -0.50 & .615\\ BMI & 0.00 & $[-0.02$, $0.03]$ & 0.20 & .841\\ Psychiatric Diagnosis & -0.16 & $[-0.49$, $0.17]$ & -0.96 & .339\\ Microaggression (Mean) & 0.38 & $[0.23$, $0.53]$ & 5.01 & < .001\\ \bottomrule \end{tabular} \end{threeparttable} \end{center} \end{table} Findings indicate that as Asian/Asian American men report greater incidences of both overt racism and microaggressions, they engage in significantly more muscularity-enhancing behaviors (e.g., excessive weightlifting, anabolic steroid use, supplement consumption, etc.) (Figure 1 \& 2). 
\includegraphics{final_project_files/figure-latex/ggplot-1.pdf} \includegraphics{final_project_files/figure-latex/ggplot-2.pdf} \hypertarget{discussion}{% \section{Discussion}\label{discussion}} This was the first known study to examine the link between Asian/Asian American men's experiences with race-related discrimination, both in the forms of overt racism and microaggressions, and the behavioral drive for muscularity. As hypothesized, both experiences with overt racism (e.g., ``You see a TV commercial in which an Asian character speaks bad English and acts subservient to non-Asian characters'') and microaggressions (e.g., ``Someone asks you if you can teach him or her karate'') were significantly and positively associated with the behavioral drive for muscularity (e.g., engaging in behaviors aimed at increasing muscle mass). The current study sheds further light on the harmful effects of racism on Asian/Asian American's mental health, including body image and disordered eating behaviors. This is particularly pervasive given the increasingly muscular, mesomorphic male body ideal perpetuated throughout Western media (Edwards, Tod, Molnar, \& Markland, 2016). Extant data suggest that Asian/Asian American men are often stereotyped to be smaller, more feminine, less masculine, and less sexually attractive than their non-Asian peers (Wilkins et al., 2011). As such, it is theorized that when Asian/Asian American men experience race-related discrimination (e.g., overt racism and/or microaggressions), their Asian identity becomes particularly salient, therefore perpetuating internalized feelings of perceived inadequacy with regards to embodying the mesomorphic, Western male body ideal. This, in turn, may result in Asian/Asian American men going to extreme lengths to achieve the ideal body physique, including excessive and compulsive muscularity-enhancing behaviors. It is important to consider limitations to the current study, including the cross-sectional study design. The findings are correlational, rather than causal, and experimental and prospective data are needed to determine if experiences with racism prompt muscularity-enhancing behaviors. While the current study includes a large, nationally represented sample of Asian/Asian American men, we were underpowered to examine whether the link between experiences with race-related discrimination and muscularity-enhancing behaviors vary by Asian ethnic identity (e.g., Chinese, Japanese, Korean, Asian Indian, Filipino, and other Asian subgroups). Future research should seek to clarify whether there are intra- and inter-ethnic variations in these associations. Although prospective and mechanistic studies are needed, these findings indicate that experiences with race-related discrimination negatively impact Asian/Asian American men's body image, thus prompting engagement in potentially harmful and compulsive muscularity-enhancing behaviors. The current study adds to a small, but growing body of research implicating experiences with race-related discrimination as a significant contributor to health disparities among racial/ethnic minority men living in the United States. These data may help to inform clinical programming and preventative interventions aimed at addressing the harmful effects of race-related discrimination on men's body image and disordered eating behaviors. 
The current study may also help to inform the development and implementation of interventions aimed at helping Asian/Asian American men adopt healthy coping strategies in response to discriminatory experiences. Overall, this study sheds light on the numerous, adverse effects of race-related discrimination on minority mental health. \newpage \hypertarget{references}{% \section{References}\label{references}} \begingroup \setlength{\parindent}{-0.5in} \hypertarget{refs}{} \begin{CSLReferences}{1}{0} \leavevmode\vadjust pre{\hypertarget{ref-R-papaja}{}}% Aust, F., \& Barth, M. (2020). \emph{{papaja}: {Create} {APA} manuscripts with {R Markdown}}. Retrieved from \url{https://github.com/crsh/papaja} \leavevmode\vadjust pre{\hypertarget{ref-baghurst_change_2006}{}}% Baghurst, T., Hollander, D. B., Nardella, B., \& Haff, G. G. (2006). Change in sociocultural ideal male physique: {An} examination of past and present action figures. \emph{Body Image}, \emph{3}(1), 87--91. \url{https://doi.org/10.1016/j.bodyim.2005.11.001} \leavevmode\vadjust pre{\hypertarget{ref-barnett_body_2001}{}}% Barnett, H. L., Keel, P. K., \& Conoscenti, L. M. (2001). Body type preferences in {Asian} and {Caucasian} college students.{[}{No} title found{]}. \emph{Sex Roles}, \emph{45}(11/12), 867--878. \url{https://doi.org/10.1023/A:1015600705749} \leavevmode\vadjust pre{\hypertarget{ref-braun_more_1999}{}}% Braun, D. L., Sunday, S. R., Huang, A., \& Halmi, K. A. (1999). More males seek treatment for eating disorders. \emph{International Journal of Eating Disorders}, \emph{25}(4), 415--424. \url{https://doi.org/10.1002/(SICI)1098-108X(199905)25:4\%3C415::AID-EAT6\%3E3.0.CO;2-B} \leavevmode\vadjust pre{\hypertarget{ref-buhi_out_2008}{}}% Buhi, E. R., Goodson, P., \& Neilands, T. B. (2008). Out of sight, not out of mind: Strategies for handling missing data. \emph{American Journal of Health Behavior}, \emph{32}(1), 83--92. \url{https://doi.org/10.5555/ajhb.2008.32.1.83} \leavevmode\vadjust pre{\hypertarget{ref-R-rio}{}}% Chan, C., Chan, G. C., Leeper, T. J., \& Becker, J. (2021). \emph{Rio: A swiss-army knife for data file i/o}. \leavevmode\vadjust pre{\hypertarget{ref-dong_principled_2013}{}}% Dong, Y., \& Peng, C.-Y. J. (2013). Principled missing data methods for researchers. \emph{SpringerPlus}, \emph{2}(1), 222. \url{https://doi.org/10.1186/2193-1801-2-222} \leavevmode\vadjust pre{\hypertarget{ref-edwards_perceived_2016}{}}% Edwards, C., Tod, D., Molnar, G., \& Markland, D. (2016). Perceived social pressures and the internalization of the mesomorphic ideal: {The} role of drive for muscularity and autonomy in physically active men. \emph{Body Image}, \emph{16}, 63--69. \url{https://doi.org/10.1016/j.bodyim.2015.11.003} \leavevmode\vadjust pre{\hypertarget{ref-R-janitor}{}}% Firke, S. (2021). \emph{Janitor: Simple tools for examining and cleaning dirty data}. Retrieved from \url{https://CRAN.R-project.org/package=janitor} \leavevmode\vadjust pre{\hypertarget{ref-R-polycor}{}}% Fox, J. (2019). \emph{Polycor: Polychoric and polyserial correlations}. Retrieved from \url{https://CRAN.R-project.org/package=polycor} \leavevmode\vadjust pre{\hypertarget{ref-R-ggResidpanel}{}}% Goode, K., \& Rey, K. (2019). \emph{ggResidpanel: Panels and interactive versions of diagnostic plots using 'ggplot2'}. Retrieved from \url{https://CRAN.R-project.org/package=ggResidpanel} \leavevmode\vadjust pre{\hypertarget{ref-R-purrr}{}}% Henry, L., \& Wickham, H. (2020). \emph{Purrr: Functional programming tools}. 
Retrieved from \url{https://CRAN.R-project.org/package=purrr} \leavevmode\vadjust pre{\hypertarget{ref-R-stargazer}{}}% Hlavac, M. (2018). \emph{Stargazer: Well-formatted regression and summary statistics tables}. Bratislava, Slovakia: Central European Labour Studies Institute (CELSI). Retrieved from \url{https://CRAN.R-project.org/package=stargazer} \leavevmode\vadjust pre{\hypertarget{ref-R-msm}{}}% Jackson, C. H. (2011). Multi-state models for panel data: The {msm} package for {R}. \emph{Journal of Statistical Software}, \emph{38}(8), 1--29. Retrieved from \url{https://www.jstatsoft.org/v38/i08/} \leavevmode\vadjust pre{\hypertarget{ref-kelly_racial_2015}{}}% Kelly, N. R., Cotter, E. W., Tanofsky-Kraff, M., \& Mazzeo, S. E. (2015). Racial variations in binge eating, body image concerns, and compulsive exercise among men. \emph{Psychology of Men \& Masculinity}, \emph{16}(3), 326--336. \url{https://doi.org/10.1037/a0037585} \leavevmode\vadjust pre{\hypertarget{ref-kelly_perceptions_2018}{}}% Kelly, N. R., Smith, T. M., Hall, G. C. N., Guidinger, C., Williamson, G., Budd, E. L., \& Giuliani, N. R. (2018). Perceptions of general and postpresidential election discrimination are associated with loss of control eating among racially/ethnically diverse young men. \emph{International Journal of Eating Disorders}, \emph{51}(1), 28--38. \url{https://doi.org/10.1002/eat.22803} \leavevmode\vadjust pre{\hypertarget{ref-lavender_men_2017}{}}% Lavender, J. M., Brown, T. A., \& Murray, S. B. (2017). Men, {Muscles}, and {Eating} {Disorders}: An {Overview} of {Traditional} and {Muscularity}-{Oriented} {Disordered} {Eating}. \emph{Current Psychiatry Reports}, \emph{19}(6), 32. \url{https://doi.org/10.1007/s11920-017-0787-5} \leavevmode\vadjust pre{\hypertarget{ref-R-performance}{}}% Lüdecke, D., Ben-Shachar, M. S., Patil, I., Waggoner, P., \& Makowski, D. (2021). {performance}: An {R} package for assessment, comparison and testing of statistical models. \emph{Journal of Open Source Software}, \emph{6}(60), 3139. \url{https://doi.org/10.21105/joss.03139} \leavevmode\vadjust pre{\hypertarget{ref-R-see}{}}% Lüdecke, D., Patil, I., Ben-Shachar, M. S., Wiernik, B. M., Waggoner, P., \& Makowski, D. (2021). {see}: An {R} package for visualizing statistical models. \emph{Journal of Open Source Software}, \emph{6}(64), 3393. \url{https://doi.org/10.21105/joss.03393} \leavevmode\vadjust pre{\hypertarget{ref-mccreary_exploration_2000}{}}% McCreary, D. R., \& Sasse, D. K. (2000). An exploration of the drive for muscularity in adolescent boys and girls. \emph{Journal of American College Health: J of ACH}, \emph{48}(6), 297--304. \url{https://doi.org/10.1080/07448480009596271} \leavevmode\vadjust pre{\hypertarget{ref-mclean_stigmatizing_2014}{}}% McLean, S. A., Paxton, S. J., Massey, R., Hay, P. J., Mond, J. M., \& Rodgers, B. (2014). Stigmatizing attitudes and beliefs about bulimia nervosa: Gender, age, education and income variability in a community sample. \emph{The International Journal of Eating Disorders}, \emph{47}(4), 353--361. \url{https://doi.org/10.1002/eat.22227} \leavevmode\vadjust pre{\hypertarget{ref-miller_exploratory_2012}{}}% Miller, M. J., Kim, J., Chen, G. A., \& Alvarez, A. N. (2012). Exploratory and {Confirmatory} {Factor} {Analyses} of the {Asian} {American} {Racism}-{Related} {Stress} {Inventory}. \emph{Assessment}, \emph{19}(1), 53--64. \url{https://doi.org/10.1177/1073191110392497} \leavevmode\vadjust pre{\hypertarget{ref-R-here}{}}% Müller, K. (2020). 
\emph{Here: A simpler way to find your files}. Retrieved from \url{https://CRAN.R-project.org/package=here} \leavevmode\vadjust pre{\hypertarget{ref-R-tibble}{}}% Müller, K., \& Wickham, H. (2021). \emph{Tibble: Simple data frames}. Retrieved from \url{https://CRAN.R-project.org/package=tibble} \leavevmode\vadjust pre{\hypertarget{ref-nadal_impact_2014}{}}% Nadal, K. L., Griffin, K. E., Wong, Y., Hamit, S., \& Rasmus, M. (2014). The {Impact} of {Racial} {Microaggressions} on {Mental} {Health}: {Counseling} {Implications} for {Clients} of {Color}. \emph{Journal of Counseling \& Development}, \emph{92}(1), 57--66. \url{https://doi.org/10.1002/j.1556-6676.2014.00130.x} \leavevmode\vadjust pre{\hypertarget{ref-R-patchwork}{}}% Pedersen, T. L. (2020). \emph{Patchwork: The composer of plots}. Retrieved from \url{https://CRAN.R-project.org/package=patchwork} \leavevmode\vadjust pre{\hypertarget{ref-pope_clinical_2005}{}}% Pope, C. G., Pope, H. G., Menard, W., Fay, C., Olivardia, R., \& Phillips, K. A. (2005). Clinical features of muscle dysmorphia among males with body dysmorphic disorder. \emph{Body Image}, \emph{2}(4), 395--400. https://doi.org/\url{https://doi.org/10.1016/j.bodyim.2005.09.001} \leavevmode\vadjust pre{\hypertarget{ref-R-base}{}}% R Core Team. (2020). \emph{R: A language and environment for statistical computing}. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from \url{https://www.R-project.org/} \leavevmode\vadjust pre{\hypertarget{ref-R-ltm}{}}% Rizopoulos, D. (2006). Ltm: An r package for latent variable modelling and item response theory analyses. \emph{Journal of Statistical Software}, \emph{17}(5), 1--25. Retrieved from \url{http://www.jstatsoft.org/v17/i05/} \leavevmode\vadjust pre{\hypertarget{ref-spann_disordered_2008}{}}% Spann, N., \& Pritchard, M. (2008). \emph{Disordered {Eating} in {Men}: {A} {Look} at {Perceived} {Stress} and {Excessive} {Exercise}: (626972012-153)}. American Psychological Association. \url{https://doi.org/10.1037/e626972012-153} \leavevmode\vadjust pre{\hypertarget{ref-striegel_why_2012}{}}% Striegel, R. H., Bedrosian, R., Wang, C., \& Schwartz, S. (2012). Why men should be included in research on binge eating: Results from a comparison of psychosocial impairment in men and women. \emph{The International Journal of Eating Disorders}, \emph{45}(2), 233--240. \url{https://doi.org/10.1002/eat.20962} \leavevmode\vadjust pre{\hypertarget{ref-strother_eating_2012}{}}% Strother, E., Lemberg, R., Stanford, S. C., \& Turberville, D. (2012). Eating {Disorders} in {Men}: {Underdiagnosed}, {Undertreated}, and {Misunderstood}. \emph{Eating Disorders}, \emph{20}(5), 346--355. \url{https://doi.org/10.1080/10640266.2012.715512} \leavevmode\vadjust pre{\hypertarget{ref-swami_bodies_2016}{}}% Swami, V., Barron, D., Weis, L., \& Furnham, A. (2016). Bodies in nature: {Associations} between exposure to nature, connectedness to nature, and body image in {U}.{S}. adults. \emph{Body Image}, \emph{18}, 153--161. \url{https://doi.org/10.1016/j.bodyim.2016.07.002} \leavevmode\vadjust pre{\hypertarget{ref-R-htmlwidgets}{}}% Vaidyanathan, R., Xie, Y., Allaire, J., Cheng, J., Sievert, C., \& Russell, K. (2021). \emph{Htmlwidgets: HTML widgets for r}. Retrieved from \url{https://CRAN.R-project.org/package=htmlwidgets} \leavevmode\vadjust pre{\hypertarget{ref-R-MASS}{}}% Venables, W. N., \& Ripley, B. D. (2002). \emph{Modern applied statistics with s} (Fourth). New York: Springer. 
Retrieved from \url{http://www.stats.ox.ac.uk/pub/MASS4/} \leavevmode\vadjust pre{\hypertarget{ref-R-ggplot2}{}}% Wickham, H. (2016). \emph{ggplot2: Elegant graphics for data analysis}. Springer-Verlag New York. Retrieved from \url{https://ggplot2.tidyverse.org} \leavevmode\vadjust pre{\hypertarget{ref-R-stringr}{}}% Wickham, H. (2019). \emph{Stringr: Simple, consistent wrappers for common string operations}. Retrieved from \url{https://CRAN.R-project.org/package=stringr} \leavevmode\vadjust pre{\hypertarget{ref-R-tidyr}{}}% Wickham, H. (2020). \emph{Tidyr: Tidy messy data}. Retrieved from \url{https://CRAN.R-project.org/package=tidyr} \leavevmode\vadjust pre{\hypertarget{ref-R-forcats}{}}% Wickham, H. (2021). \emph{Forcats: Tools for working with categorical variables (factors)}. Retrieved from \url{https://CRAN.R-project.org/package=forcats} \leavevmode\vadjust pre{\hypertarget{ref-R-tidyverse}{}}% Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., \ldots{} Yutani, H. (2019). Welcome to the {tidyverse}. \emph{Journal of Open Source Software}, \emph{4}(43), 1686. \url{https://doi.org/10.21105/joss.01686} \leavevmode\vadjust pre{\hypertarget{ref-R-dplyr}{}}% Wickham, H., François, R., Henry, L., \& Müller, K. (2021). \emph{Dplyr: A grammar of data manipulation}. Retrieved from \url{https://CRAN.R-project.org/package=dplyr} \leavevmode\vadjust pre{\hypertarget{ref-R-readr}{}}% Wickham, H., \& Hester, J. (2020). \emph{Readr: Read rectangular text data}. Retrieved from \url{https://CRAN.R-project.org/package=readr} \leavevmode\vadjust pre{\hypertarget{ref-wilkins_racial_2011}{}}% Wilkins, C. L., Chan, J. F., \& Kaiser, C. R. (2011). Racial stereotypes and interracial attraction: {Phenotypic} prototypicality and perceived attractiveness of {Asians}. \emph{Cultural Diversity and Ethnic Minority Psychology}, \emph{17}(4), 427--431. \url{https://doi.org/10.1037/a0024733} \leavevmode\vadjust pre{\hypertarget{ref-R-tinytex}{}}% Xie, Y. (2019). TinyTeX: A lightweight, cross-platform, and easy-to-maintain LaTeX distribution based on TeX live. \emph{TUGboat}, (1), 30--32. Retrieved from \url{https://tug.org/TUGboat/Contents/contents40-1.html} \leavevmode\vadjust pre{\hypertarget{ref-R-kableExtra}{}}% Zhu, H. (2021). \emph{kableExtra: Construct complex table with 'kable' and pipe syntax}. Retrieved from \url{https://CRAN.R-project.org/package=kableExtra} \end{CSLReferences} \endgroup \end{document}
\subsection{Microbenchmarks}
\label{micro}

%%%% topology %%%
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/dumbbell_topology.pdf}
\caption{Dumbbell topology.}
\label{dumbbell_topology}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/parkinglot_topology.pdf}
\caption{Multi-hop, multi-bottleneck (parking lot) topology.}
\label{parkinglot_topology}
\end{subfigure}
\caption{Experiment topologies in microbenchmarks (\cref{micro}).}
\label{microbenchmarks_topology}
\end{figure}

%%%coexistence %%%
\begin{figure}[th]
\centering
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/micro2flows/coexitence/cubic_dctcp_coexistence_official.pdf}
\caption{Default}
\label{coexistence_tput_ovs}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/micro2flows/coexitence/cubic_dctcp_coexistence_acdctcp.pdf}
\caption{\acdc{}}
\label{coexistence_tput_ovsdctcp}
\end{subfigure}
\caption{(a) CUBIC gets little throughput when competing with DCTCP. (b) With \acdc{}, CUBIC and DCTCP flows get a fair share.}
\label{coexistence_tput}
\end{figure}

\begin{figure}[th]
\centering
\includegraphics[width=0.45\textwidth]{figures/micro2flows/coexitence/sockperf_and_droprate/coexistence_sockperf.pdf}
\caption{CUBIC traffic experiences very high TCP RTT when competing with DCTCP (packet drop rate is 0.18\%). \acdc{} fixes this problem (packet drop rate is 0\%).}
\label{coexistence_sockperf_droprate}
\end{figure}

%%%%%%%%%%%%%%% text for coexistence %%%%%%%%%%
%%%%%%%%%start text in microbenchmark section
We first evaluate \acdc{}'s performance using a set of microbenchmarks. The microbenchmarks are conducted on the topologies shown in Figure~\ref{microbenchmarks_topology}.

\tightparagraph{ECT and non-ECT coexistence}
\cite{judd2015nsdi,wu2012tuning} observed that ECT and non-ECT traffic do not coexist well---switches drop non-ECT packets when the queue length is (even slightly) larger than the predefined marking threshold. We conduct an experiment to validate this and to demonstrate how \acdc{} supports ECT and non-ECT coexistence. Two competing flows (one uses CUBIC, the other uses DCTCP) traverse the same bottleneck link in Figure~\ref{dumbbell_topology}. Figure~\ref{coexistence_tput_ovs} shows that CUBIC's throughput is very poor when it is mixed with DCTCP traffic under the Default scheme. With \acdc{}, CUBIC and DCTCP flows get a fair share (Figure~\ref{coexistence_tput_ovsdctcp}).

Figure~\ref{coexistence_sockperf_droprate} shows that CUBIC's TCP RTT is extremely high because switches drop non-ECT packets (the drop rate is 0.18\%) even when congestion is light, so CUBIC may need to retransmit the dropped packets multiple times. In contrast, \acdc{} encodes every non-ECT packet as an ECT packet and undoes the ECT marking when the packet arrives at the receiver host, so the switches do not discriminate against CUBIC packets. Furthermore, \acdc{} enforces DCTCP-like congestion control in the virtualization layer by modifying the RWND field in the TCP header, so all TCP variants react to the extent of network congestion and the queue occupancy in the switch stays small (the drop rate is 0\%).

Another method to handle the ECT and non-ECT coexistence issue is to put ECT and non-ECT flows into different queues and apply different AQM schemes~\cite{judd2015nsdi}.
However, two limitations restrict the applicability of such an approach. First, in multi-tenant datacenters, it is not always easy (and sometimes impossible) to know which congestion control algorithm the tenant stack is using. Second, non-ECT traffic still suffers from high queueing latency.

In summary, \acdc{} enforces a unified low-latency congestion control algorithm for all TCP variants. It gives low latency and throughput fairness to all kinds of TCP traffic. \emph{\acdc{} makes low latency possible in production data center networks where incremental deployment is the norm and transport diversity must be supported}.

%%compare with CUBIC defalt and DCTCP test %%%
%% two flow case
\iffalse
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/micro2flows/cubic_ours_dctcp_sockperf.pdf}
\caption{Compare OVS-DCTCP with CUBIC default and DCTCP. Two flows. CUBIC default's throughput is 4.9Gbps. DCTCP 4.9Gbps, OVS-DCTCP 4.8Gbps.}
\label{compare_cubic_dctcp_twoflows}
\end{figure}
\fi

%%how close our RWND is to DCTCP's CWND
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd/newpara_refine/mtu1500_5flows_1/measure_cwnd_rwnd_gap_15k_5flows_0sec_100msec.pdf}
\caption{MTU1500: first 100 msec starting from second 0.}
\label{cwnd_rwnd_1500_0sec}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd/newpara_refine/mtu1500_5flows_1/measure_cwnd_rwnd_gap_15k_5flows_2sec_100msec.pdf}
\caption{MTU1500: first 100 msec starting from second 2.}
\label{cwnd_rwnd_1500_2sec}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd/newpara_refine/mtu9000_5flows_1/measure_cwnd_rwnd_gap_9k_5flows_0sec_100msec.pdf}
\caption{MTU9000: first 100 msec starting from second 0.}
\label{cwnd_rwnd_9000_0sec}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd/newpara_refine/mtu9000_5flows_1/measure_cwnd_rwnd_gap_9k_5flows_2sec_100msec.pdf}
\caption{MTU9000: first 100 msec starting from second 2.}
\label{cwnd_rwnd_9000_2sec}
\end{subfigure}
\caption{Our RWND closely tracks the CWND computed by DCTCP. We run DCTCP and log our running RWND value without enforcing it in TCP ACKs. Dumbbell topology, micro view. The CWND trace is captured by ``tcpprobe'' and the RWND is logged by OVS's datapath kernel module; the two traces are then aligned by timestamp and sequence number.}
\label{compare_cwnd_rwnd}
\end{figure}

%%who limits TCP throughput, CWND or RWND?
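To make the re-marking mechanism described above more concrete, the sketch below outlines the sender- and receiver-side handling of the IP ECN codepoints in Python-style pseudocode. This is our own minimal illustration, not the actual \acdc{}/OVS datapath code; the packet representation and function names are assumptions.

\begin{verbatim}
# Minimal sketch of the ECT encode/undo idea (illustration only, not AC/DC code).
NOT_ECT, ECT0, CE = 0b00, 0b10, 0b11    # IP ECN codepoints

flow_table = {}   # per-flow state; in practice each vSwitch learns this from the handshake

def on_egress(flow, pkt):
    """Sender-side vSwitch: remember the original codepoint, then mark ECN-capable."""
    flow_table.setdefault(flow, {})["was_ect"] = (pkt["ecn"] != NOT_ECT)
    if pkt["ecn"] == NOT_ECT:
        pkt["ecn"] = ECT0               # switches now ECN-mark instead of dropping

def on_ingress(flow, pkt):
    """Receiver-side vSwitch: record congestion, then restore the original codepoint."""
    congested = (pkt["ecn"] == CE)      # CE feedback feeds the window computation
    if not flow_table.get(flow, {"was_ect": True})["was_ect"]:
        pkt["ecn"] = NOT_ECT            # the guest stack never sees the ECT marking
    return congested
\end{verbatim}

In the real system this per-flow state would live in the hash tables discussed in the CPU-overhead microbenchmark, and the congestion feedback would drive the DCTCP-style window computation sketched at the end of this section.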
%% CUBIC (RWND enforced)
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd2/mtu1500_cubic/cubic_measure_cwnd_rwnd_gap_15k_5flows_0sec_100msec.pdf}
\caption{MTU1500: first 100 msec starting from second 0.}
\label{who_limits_cwnd_rwnd_1500_0sec}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd2/mtu1500_cubic/cubic_measure_cwnd_rwnd_gap_15k_5flows_2sec_100msec.pdf}
\caption{MTU1500: first 100 msec starting from second 2.}
\label{who_limits_cwnd_rwnd_1500_2sec}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd2/mtu9000_cubic/cubic_measure_cwnd_rwnd_gap_9k_5flows_0sec_100msec.pdf}
\caption{MTU9000: first 100 msec starting from second 0.}
\label{who_limits_cwnd_rwnd_9000_0sec}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd2/mtu9000_cubic/cubic_measure_cwnd_rwnd_gap_9k_5flows_2sec_100msec.pdf}
\caption{MTU9000: first 100 msec starting from second 2.}
\label{who_limits_cwnd_rwnd_9000_2sec}
\end{subfigure}
\caption{Who limits TCP throughput, CWND or RWND? We run CUBIC on top and log our running RWND value while enforcing it in TCP ACKs. Dumbbell topology, micro view. The CWND trace is captured by ``tcpprobe'' and the RWND is logged by OVS's datapath kernel module; the two traces are then aligned by timestamp and sequence number.}
\label{who_limits_compare_cwnd_rwnd}
\end{figure}

%%how close our RWND is to DCTCP's CWND
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd/moving-ave/measure_cwnd_rwnd_gap_15k_5flows_ave100.pdf}
\caption{MTU1500: Moving average over 100 ms window.}
\label{cwnd_rwnd_ave_1500}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/cwnd_rwnd/moving-ave/measure_cwnd_rwnd_gap_9k_5flows_ave100.pdf}
\caption{MTU9000: Moving average over 100 ms window.}
\label{cwnd_rwnd_ave_9000}
\end{subfigure}
\caption{Our RWND closely tracks the CWND computed by DCTCP. We run DCTCP and log our running RWND value without enforcing it in TCP ACKs. Dumbbell topology. The CWND trace is captured by ``tcpprobe'' and the RWND is logged by OVS's datapath kernel module; the two traces are then aligned by timestamp and sequence number. Moving averages over 100 ms windows are presented.}
\label{compare_cwnd_rwnd_ave}
\end{figure}

%%% other TCP variants %%%
\begin{table*}[!htb]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
CC variants & 50$^{th}$ percentile TCP RTT ($\mu$s) & 99.9$^{th}$ percentile TCP RTT ($\mu$s) & Throughput (Gbps) \\
\hline
CUBIC* & 3283 & 3714 & 1.89 \\
DCTCP* & 136 & 342 & 1.88 \\
\hline
\hline
%%INIT 5, STEP 0.75
%CUBIC & 142 & 362 & 1.86 \\
%New Reno & 136 & 348 & 1.87 \\
%DCTCP (clear ECE) & 139 & 337 & 1.86 \\
%DCTCP (keep ECE) & 141 & 367 & 1.85\\
%CUBIC+ECN (clear ECE) & 138 & 372 & 1.86\\
%CUBIC+ECN (keep ECE) & 112 & 338 & 1.70 \\
%% INIT 10, STEP 1.0
CUBIC & 139 & 359 & 1.87 \\
New Reno & 145 & 346 & 1.87 \\
DCTCP (clear ECE) & 137 & 343 & 1.87 \\
DCTCP (keep ECE) & 144 & 360 & 1.86 \\
CUBIC+ECN (clear ECE) & 135 & 343 & 1.87\\
CUBIC+ECN (keep ECE) & 121 & 316 & 1.50 \\
\hline
\end{tabular}
\caption{OVS-DCTCP works with various kinds of congestion control variants. CUBIC*: TCP CUBIC + official OVS, switch ECN/WRED marking off.
DCTCP*: DCTCP + official OVS, switch ECN/WRED marking on. MTU = 1500B} \label{other_cc_variants_1500} \end{center} \end{table*} \begin{table*}[!htb] \begin{center} \begin{tabular}{ |c|c|c|c|c|c| } \hline CC variants & 50$^{th}$ percentile TCP RTT ($\mu$s) & 99.9$^{th}$ percentile TCP RTT ($\mu$s) & Throughput (Gbps) \\ \hline %CUBIC & 148 & 299 & 4.8 \\ %New Reno & 142 & 336 & 4.8 \\ %DCTCP (clear ECE) & 142 & 318 & 4.8 \\ %DCTCP (keep ECE) & 144 & 403 & 4.7 \\ %CUBIC+ECN (clear ECE) & 151 & 301 & 4.8\\ %CUBIC+ECN (keep ECE) & 119 & 307 & 4.0 \\ CUBIC* & 3408 & 3976 & 1.98 \\ DCTCP* & 158 & 362 & 1.97 \\ \hline \hline CUBIC & 154 & 343 & 1.94 \\ New Reno & 156 & 324 & 1.94 \\ DCTCP (clear ECE) & 155 & 355 & 1.94 \\ DCTCP (keep ECE) & 155 & 368 & 1.93 \\ CUBIC+ECN (clear ECE) & 158 & 359 & 1.94\\ CUBIC+ECN (keep ECE) & 153 & 309 & 1.92 \\ \hline \end{tabular} \caption{OVS-DCTCP works with various kinds of congestion control variants. CUBIC*: TCP CUBIC + official OVS, switch ECN/WRED marking off. DCTCP*: DCTCP + official OVS, switch ECN/WRED marking on. MTU = 9000B} \label{other_cc_variants} \end{center} \end{table*} %%%convergence %%% \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{figures/convergence/flowcontrolOFF_sockperf/convergence_test_sockperf.pdf} \caption{Compare OVS-DCTCP with CUBIC default and DCTCP. TCP RTT (sockperf). RTO\_{min} is 10 ms in all of our experiments. Dumbbell topology.} \label{sockperf_convergence} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{figures/convergence/flowcontrolOFF/tcp_flowcontrolOFF_convergence.pdf} \caption{TCP CUBIC convergence test.} \label{cubic_convergence} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{figures/convergence/flowcontrolOFF/dctcp_flowcontrolOFF_convergence.pdf} \caption{DCTCP convergence test.} \label{dctcp_convergence} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{figures/convergence/flowcontrolOFF/ovsdctcp_flowcontrolOFF_convergence.pdf} \caption{OVS-DCTCP convergence test.} \label{ovsdctcp_convergence} \end{subfigure} \caption{Convergence tests. TCP CUBIC's drop rate is 0.17\% while DCTCP and OVS-DCTCP is 0\%. A note on CUBIC: as explained by~\cite{judd2015nsdi}, its stability, convergence and fairness issue is caused by receive buffer auto-tuning of the OS and it can be fixed by manually setting the TCP socket buffer size. We confirmed this finding on our testbed. We share the same view that ``achieving proper receive buffer sizing, however, is much more difficult under TCP due to the massive dynamic range of latencies''---another motivation for unified low latency transport enforcement for all TCP variants. 
}
\label{convergence_test}
\end{figure*}

\begin{figure*}[!htb]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/tput_fairness/default_all_cubic_tput.pdf}
\caption{All CUBIC.}
\label{fairness_all_cubic}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/tput_fairness/default_5CC_tput.pdf}
\caption{5 different CCs.}
\label{fairness_5CC}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/tput_fairness/liquid_5CC_tput.pdf}
\caption{5 different CCs with our logic.}
\label{fairness_5CC_with_ours}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/tput_fairness/ecn_all_dctcp_tput.pdf}
\caption{All DCTCP.}
\label{fairness_all_dctcp}
\end{subfigure}
\caption{Our logic helps improve the throughput fairness of TCP variants. Receive buffer auto-tuning is on. The 5 different CCs are TCP CUBIC~\cite{ha2008cubic}, TCP Illinois~\cite{liu2008tcp}, TCP HighSpeed~\cite{RFC3649}, TCP New Reno~\cite{RFC3782} and TCP Vegas~\cite{Brakmo1994}. The 50$^{th}$ and 99.9$^{th}$ percentile TCP RTTs for all CUBIC, 5 different CCs, 5 different CCs with our logic, and all DCTCP are: 3.5 msec, 3.9 msec; 3.4 msec, 4.0 msec; 146 $\mu$s, 306 $\mu$s; 147 $\mu$s, 317 $\mu$s. Both ours and DCTCP offer a Jain's fairness index greater than 0.99.}
\label{tput_fairness_coexistence}
\end{figure*}

\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/overhead/sender_15k_compare_cpu_witherrbar.pdf}
\caption{CPU overhead: sender side (1.5K MTU). The 10G NIC is saturated when there are more than 1K TCP connections. CPU usage refers to the CPU usage of the whole server (12 Intel(R) Xeon(R) processors), measured by ``sar (sysstat)''.}
\label{cpu_overhead_sender_15k}
\end{figure}

\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/overhead/receiver_15k_compare_cpu_witherrbar.pdf}
\caption{CPU overhead: receiver side (1.5K MTU). The 10G NIC is saturated when there are more than 1K TCP connections. CPU usage refers to the CPU usage of the whole server (12 Intel(R) Xeon(R) processors), measured by ``sar (sysstat)''.}
\label{cpu_overhead_receiver_15k}
\end{figure}

\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/overhead/sender_9k_compare_cpu_witherrbar.pdf}
\caption{CPU overhead: sender side (9K MTU). The 10G NIC is saturated when there are more than 1K TCP connections. CPU usage refers to the CPU usage of the whole server (12 Intel(R) Xeon(R) processors), measured by ``sar (sysstat)''.}
\label{cpu_overhead_sender_9k}
\end{figure}

\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/overhead/receiver_9k_compare_cpu_witherrbar.pdf}
\caption{CPU overhead: receiver side (9K MTU). The 10G NIC is saturated when there are more than 1K TCP connections. CPU usage refers to the CPU usage of the whole server (12 Intel(R) Xeon(R) processors), measured by ``sar (sysstat)''.}
\label{cpu_overhead_receiver_9k}
\end{figure}

%%%%%%%%%start text in microbenchmark section
\tightparagraph{Compare with TCP (default) and DCTCP}
\acdc{} achieves throughput similar to TCP CUBIC and DCTCP. When 5 flows compete for a 10Gbps bottleneck, each flow gets around 1.94Gbps with \acdc{}, while TCP CUBIC and DCTCP achieve 1.98Gbps and 1.97Gbps, respectively.
\acdc{} achieves TCP RTT comparable to DCTCP and significantly outperforms TCP CUBIC (Figure~\ref{sockperf_convergence}). How close is \acdc{} to DCTCP? Figure~\ref{compare_cwnd_rwnd} shows that the RWND value computed by \acdc{} closely tracks the CWND of native DCTCP.

\tightparagraph{Support for various congestion control (CC) variants}
Tables~\ref{other_cc_variants_1500} and \ref{other_cc_variants} show that \acdc{} can work with various congestion control variants defined by tenants. It can also help over-conservative congestion control schemes (\eg{}, CUBIC with ECN support) obtain better throughput by clearing the ECE bit in TCP ACKs, while still keeping latency low.

\tightparagraph{Convergence and fairness}
Figure~\ref{convergence_test} shows that \acdc{} converges quickly. Both \acdc{} and DCTCP have a Jain's fairness index greater than 0.99.
%Figure~\ref{sockperf_convergence} shows that \acdc{}
%has very good TCP RTT performance.

\tightparagraph{Different MTU sizes}
We set the MTU size to 1500 bytes and run the tests on the dumbbell topology (Figure~\ref{dumbbell_topology}) with 5 flows competing for a 10G bottleneck link. \acdc{} gets 1.87Gbps average flow throughput and DCTCP gets 1.88Gbps; both have a Jain's fairness index greater than 0.99. TCP CUBIC gets 1.89Gbps average flow throughput and a fairness index of 0.85. The 50$^{th}$ and 99.9$^{th}$ percentile TCP RTTs for \acdc{} (DCTCP, CUBIC) are 139$\mu$s (136$\mu$s, 3.2ms) and 359$\mu$s (342$\mu$s, 3.7ms), respectively.

\tightparagraph{Multi-hop, multi-bottleneck topology}
We test \acdc{}'s performance on a parking lot topology, as shown in Figure~\ref{parkinglot_topology}. We start background elephant flows from four senders to one receiver; the elephant flows traverse different numbers of bottleneck links. We measure each flow's throughput and TCP RTT. Our results show that for both DCTCP and \acdc{}, each flow gets around 2.45Gbps throughput, with a Jain's fairness index greater than 0.99. CUBIC's average throughput is 2.48Gbps, and its fairness index is 0.94. The 50$^{th}$ and 99.9$^{th}$ percentile TCP RTTs for \acdc{} (DCTCP, default CUBIC) are 124$\mu$s (136$\mu$s, 3.3msec) and 279$\mu$s (301$\mu$s, 3.9msec), respectively.

TCP-Bolt~\cite{stephens2014practical} and DCQCN~\cite{zhu2015congestion} found that Data Center Bridging (DCB) can suffer from throughput unfairness and increased queueing (bufferbloat), and that such issues can be solved by utilizing ECN (as DCTCP does). We anticipate that our scheme can also work with DCB.

\tightparagraph{Little CPU and memory overhead}
Our implementation keeps overhead low in three ways. First, we leverage Read-Copy-Update (RCU)~\cite{guniguntala2008read} enabled hash tables to keep per-flow state (such as ``snd\_una'' and ``snd\_nxt''). The RCU technique is also employed by Open vSwitch's kernel datapath; it helps improve processing speed for ``read-heavy'' workloads (\eg{}, inserting new flows is less frequent than looking up existing flows) on shared-memory multiprocessor systems. Second, \acdc{} operates at the ``segment'' level instead of the ``packet'' level thanks to NIC offloading features (TSO at the sender side and GRO/LRO at the receiver side). Third, we leverage the NIC checksum offloading feature so that we do not need to recompute checksums after we change TCP/IP header fields. Our microbenchmarks show that \acdc{} incurs very little additional CPU overhead (less than 2\%) to support 10Gbps line rate, even though it is fully implemented in software.
We are currently implementing \acdc{} on Cavium's programmable NICs~\cite{cavium-nic}, where we can entirely offload the computational overhead to hardware. Therefore, we believe \acdc{} can support even higher line rates (\eg{}, 40Gbps). In our implementation, each TCP connection takes 320 bytes in the hash tables, so the memory footprint is only around 3.2MB even when there are 10K connections. Figure~\ref{cpu_overhead_sender_15k} and Figure~\ref{cpu_overhead_receiver_15k} report the sender-side and receiver-side CPU usage with a 1.5K MTU, and Figure~\ref{cpu_overhead_sender_9k} and Figure~\ref{cpu_overhead_receiver_9k} report the corresponding results with a 9K MTU.
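To illustrate the DCTCP-style congestion control that \acdc{} enforces through the RWND field of returning ACKs, the following sketch shows one way a per-flow congestion estimate could be maintained from ECN feedback and turned into an advertised window. It is a simplified illustration written for this text, not the \acdc{} implementation; the gain, the window constants and the variable names are assumptions.

\begin{verbatim}
# Simplified sketch of a DCTCP-style window computation (illustration, not AC/DC code).
G = 1.0 / 16              # EWMA gain for the marked-byte fraction (assumed)
MSS = 1448                # payload bytes per segment (assumed)

class FlowState:
    def __init__(self):
        self.alpha = 0.0      # smoothed fraction of ECN-marked bytes
        self.wnd = 10 * MSS   # window to be written into the TCP RWND field
        self.acked = 0        # bytes acknowledged in the current window
        self.marked = 0       # acknowledged bytes that carried congestion feedback

    def on_ack(self, acked_bytes, ece_seen):
        self.acked += acked_bytes
        if ece_seen:
            self.marked += acked_bytes

    def end_of_window(self):
        """Called roughly once per RTT: update alpha and the enforced window."""
        frac = self.marked / max(self.acked, 1)
        self.alpha = (1 - G) * self.alpha + G * frac
        if self.marked > 0:
            self.wnd = max(int(self.wnd * (1 - self.alpha / 2)), 2 * MSS)
        else:
            self.wnd += MSS   # additive increase when no congestion was observed
        self.acked = self.marked = 0
        return self.wnd       # value to stamp into outgoing ACKs' receive window
\end{verbatim}

A hypervisor datapath would call \texttt{on\_ack()} for every ACK it forwards and overwrite that ACK's receive window with the value returned by \texttt{end\_of\_window()}, which is how the guest's unmodified TCP stack is throttled without changing its congestion control code.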
\section{Introduction}

\IEEEPARstart{O}{n} December 17th, 2010, Mohamed Bouazizi immolated himself in Sidi Bouzid, Tunisia in response to harassment from a local policewoman and local municipality officers. Though Bouazizi was not the first to engage in this form of protest, for some reason his case resonated with other Tunisians, who took to the streets in protest of constant persecution and victimization by a corrupt government. Although early protests were relatively small and were met with violence by government forces, social media sites like Twitter, Facebook and YouTube recorded the events and displayed them to the broader public. These events are widely considered to be the beginning of what has come to be known, for better or worse \citep{gelvin_arab_2015}, as the Arab Spring.

There is little doubt that social media, and new media \cite{baym_personal_2010} more generally, played an important role in the Arab Spring. However, the conventional wisdom that emphasized social media as the cause of \emph{the} revolutions has been proven to be overblown \citep{bruns_arab_2013,goldstone_bringing_2013,comunello_will_2012}. Recent research has focused instead on how social media may have aided certain aspects of the revolutions in important ways for different people \citep{galle_who_2013,starbird_how_2012,lotan_revolutions_2011,tufekci_social_2012}, and whether social processes that were carried out via new media are reflective of those that occurred ``offline'' \citep{comunello_will_2012}. Additionally, data from newspaper articles written during the time of the Arab Spring may also be of use in better understanding these processes (REMOVED FOR BLIND REVIEW)%\citep{joseph_arab_2014,pfeffer_rapid_2012}
. Thus, social media and news media coverage should be appreciated as both pieces and reflections of a complex system of causal structures at play.

Prior work on this subject has largely considered how the news media or Twitter are useful in understanding the social processes at play during particular events across a small set of countries \citep[e.g.][]{lotan_revolutions_2011,borge-holthoefer_content_2014} or a series of events in a particular country, most often Egypt \citep[e.g.][]{tufekci_social_2012}. Further, few studies have considered traditional news sources and Twitter data side-by-side, leaving questions as to the similarities and differences in how media responded to, or possibly influenced, long- and short-term social processes during the Arab Spring.

In the present work, we use corpora of approximately 70M tweets and around 700K newspaper articles to provide an initial overview of the change in topical focus over time in sixteen countries relevant to the Arab Spring in both news and Twitter. We take a breadth-over-depth approach, attempting to reconcile patterns in Twitter usage and news media coverage over a wide range of countries and multiple time periods. Additionally, we employ the same methodology for both datasets, allowing us to compare results across social and print media. Specifically, we have focused on the following two research questions:
\begin{itemize}
\item {\bf RQ1:} How did the topical foci of our news and Twitter data differ over time and across different countries and categories?
\item {\bf RQ2:} Did significant changes in focus on different topics reflect on-the-ground processes?
\end{itemize} In addition to these research questions, we provide a case study that explores how changes in the topical foci of news and Twitter users may be useful in understanding how topical discussions clustered around particular countries in interesting ways, especially with respect to discussions of protests. In order to perform our analysis, we begin by developing a set of human-curated topical themes of interest based on a review of the literature. For each theme, we determined a set of terms that, when mentioned, were relevant to these themes. We searched for these terms across all of our Twitter and newspaper data. Where we found a term used in a particular tweet or news article, we determined the time at which the content was produced and the particular country(ies) in the Arab World to which the content was relevant. We were left with a set of counts, over time, of the discussion of our different themes in different countries in the Arab world. In theory, this count data, or rates directly calculated from it, could be used to address our research questions. As detailed in recent work by \cite{eisenstein_diffusion_2014}, however, the direct utilization of count data, or rate data based on these counts, is a methodologically unsound decision. As \cite{eisenstein_diffusion_2014} discuss with regards to Twitter, term counts may be biased by unknown irregularities in the way Twitter provides tweets through its API \citep{morstatter_is_2013} or via unique properties of the keyword or spatial queries researchers construct to obtain data from the API (REMOVED FOR BLIND REVIEW)%\citep{joseph_approach_2014 . Similarly, superfluous coverage by news media on specific countries may lead to artificial increases in counts or, if focused on themes not of interest for a study lead to superfluous decreases in rates. In the present work, we adapt a slightly modified version of the statistical model developed and employed by \cite{eisenstein_diffusion_2014}. This model controls for spatial and temporal patterns in the rate at which data is obtained, thereby removing many of the important biases associated with term count data. While a plethora of issues must be considered when analyzing, in particular, social media data \citep{tufekci_big_2014}, \citeapos{eisenstein_diffusion_2014} model allows us to move beyond the statistical irregularities in the data and gives us more freedom to draw inferences about the relationship between rates of change in thematic content and actual events occurring during the Arab Spring. After explaining this model in greater detail and how it was adapted for the present work, we consider analyses that address our two primary research questions, and then discuss results from the case study. With respect to RQ1, we find that Twitter and traditional news media were more highly correlated on certain topics and in certain countries than others. More precisely, topics relevant to social change and in countries in which massive social change occurred showed high levels of correlation between the social and print media, while others showed significantly less cohesion in topical focus across the two media. With respect to RQ2, we find that outlier data, as determined by our model, matched quite well with important time periods during the Arab Spring, providing further evidence that news media and Twitter data are important tools for the study of social change. 
Finally, with respect to our case study, we find evidence that countries clustered in thematic discussions along dimensions of social change, i.e. that countries with more similar levels of social change appear to have more similar levels of discussion across the themes we studied. We also see evidence that temporal patterns in discussions of protest provide initial support for qualitative claims made in the political science and communications literature on news media, Twitter and protests during the Arab Spring. %From Ghita: if you talk about arab you don’t mean iran %These concerns include the fact that ``data-driven'' analysis often suffers from post-hoc conclusions which substantially increase the number of ``researcher degrees of freedom'', even if unintentionally, in the analysis \cite{}. %\item {\bf H3}: (term network) if media representation shows protestors as representative of the whole society, rather than as one particular group seeking partisan advantages for itself, revolution will be more successful %\item Attempt to quantify how Mohammad Bouazizi’s setting himself on fire spreads across normally disconnected factions within the Twitter data in Tunisia from Dec 2010 – Feb 2011. %\item I think we could use our religiosity terms to measure factionalism/sectarianism at the onset of Egypt’s protests. Both counts and term networks could be useful. % %\subsection{My list of covariates} % %\begin{enumerate} % \item {\bf Regime-based} % \begin{enumerate} % \item religious freedom indices % \item personalist/not \cite{goldstone_bringing_2013} % \item wealth (particularly oil) \cite{goldstone_bringing_2013} % \item Twitter/News mentions of regime (of some kind) - sentiment? % \item Twitter/News revolution/insurgent/violence or adaptation terms % \end{enumerate} % \item {\bf International relations} % \begin{enumerate} % \item international network position % \item international Twitter/News network position % \end{enumerate} % \item {\bf Population indicators } % \begin{enumerate} % \item Twitter/news geospatial spread % \item Religious indices % \item Ethnic diversity / group indicators % \item Twitter/news mentions of ethnic group % \item Twitter/news attention / network position % \item Twitter/news ``class'' cohesiveness \cite{goldstone_cross-class_2011} % \item Twitter/news ``class'' spread \cite{goldstone_cross-class_2011} % \end{enumerate} %\end{enumerate} % %
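As a rough illustration of the kind of rate adjustment motivated above, the sketch below scores a theme's salience in a given country and week against the overall volume of content produced for that country and week, rather than against raw counts. This is a simplified stand-in written for exposition, not the model of \cite{eisenstein_diffusion_2014} that we adapt; the variable names and the independence baseline are our own assumptions.

\begin{verbatim}
# Simplified stand-in (not the adapted Eisenstein et al. model): judge a theme's
# salience in a country-week against how much content that country-week produces.
from collections import defaultdict

def adjusted_rates(counts):
    """counts: dict mapping (country, week, theme) -> number of matching documents."""
    by_cell, by_theme, total = defaultdict(int), defaultdict(int), 0
    for (country, week, theme), n in counts.items():
        by_cell[(country, week)] += n
        by_theme[theme] += n
        total += n
    rates = {}
    for (country, week, theme), n in counts.items():
        expected = by_cell[(country, week)] * by_theme[theme] / total
        rates[(country, week, theme)] = n / expected if expected > 0 else float("nan")
    return rates  # values > 1 mean the theme is over-represented in that country-week

# Hypothetical example with two countries and two themes in one week
counts = {("Tunisia", "2010-51", "protest"): 120, ("Tunisia", "2010-51", "economy"): 80,
          ("Egypt", "2010-51", "protest"): 30, ("Egypt", "2010-51", "economy"): 170}
print(adjusted_rates(counts))
\end{verbatim}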
\documentclass{article}

\usepackage{arxiv}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc}    % use 8-bit T1 fonts
\usepackage{hyperref}       % hyperlinks
\usepackage{url}            % simple URL typesetting
\usepackage{booktabs}       % professional-quality tables
\usepackage{amsfonts}       % blackboard math symbols
\usepackage{nicefrac}       % compact symbols for 1/2, etc.
\usepackage{microtype}      % microtypography
\usepackage{multirow}
\usepackage{multicol}
\usepackage{graphicx}
%\usepackage{natbib}
\usepackage[numbers,sort&compress]{natbib}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mathtools, nccmath}
%\usepackage{subcaption}
\usepackage{float}
\DeclarePairedDelimiter{\nint}\lfloor\rceil
\usepackage[linesnumbered,ruled,vlined]{algorithm2e}

%%%%%%%%%%% Defining Enunciations %%%%%%%%%%%
\newtheorem{theorem}{\bf Theorem}[section]
\newtheorem{condition}{\bf Condition}[section]
\newtheorem{corollary}{\bf Corollary}[section]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{Dealing with uncertainty in agent-based models for short-term predictions}

\author{
  Le-Minh Kieu\thanks{Corresponding author} \\
  School of Geography \& Leeds Institute of Data Analytics\\
  University of Leeds\\
  United Kingdom \\
  \texttt{[email protected]} \\
  %% examples of more authors
  \And
  Nicolas Malleson \\
  School of Geography\\
  University of Leeds and Alan Turing Institute\\
  United Kingdom \\
  \texttt{[email protected]} \\
  \And
  Alison Heppenstall \\
  School of Geography\\
  University of Leeds and Alan Turing Institute\\
  United Kingdom \\
  \texttt{[email protected]} \\
}

\begin{document}
\maketitle

\begin{abstract}
Agent-based models (ABM) are gaining traction as one of the most powerful modelling tools within the social sciences. They are particularly suited to simulating complex systems. Despite many methodological advances within ABM, one of the major drawbacks is their inability to incorporate real-time data to make accurate short-term predictions. This paper presents an approach that allows ABMs to be dynamically optimised. Through a combination of parameter calibration and data assimilation (DA), the accuracy of model-based predictions using ABM in real time is increased. We use the exemplar of a bus route system to explore these methods.
%Within this paper we construct an ABM that simulates the main interactions and key processes within this system.
We develop a numerical experiment to quantify the impacts of calibration and DA in dealing with the stochastic and dynamic nature of the system under study. The bus route ABMs developed in this research are examples of ABMs that can be dynamically optimised by a combination of parameter calibration and DA. The proposed model and framework can also be used in a passenger information system, or in an Intelligent Transport System, to provide forecasts of bus locations and arrival times.
\end{abstract}

% keywords can be removed
\keywords{First keyword \and Second keyword \and More}

\section{Introduction}
\label{s:Intro}

Agent-based modelling (ABM) \citep{bonabeau_agent_2002} is a field that excels in its ability to simulate complex systems. Instead of deriving aggregated equations of system dynamics, ABM encapsulates system-wide characteristics from the behaviours and interactions of individual agents, e.g. humans, animals or vehicles.
ABM has emerged as an important tool for many applications, ranging from urban traffic simulation \citep{balmer2009matsim} and humanitarian assistance \citep{crooks_gis_2013} to emergency evacuations \citep{schoenharl_design_2011}. Despite the many advances and applications of ABM, the field suffers from a serious drawback: models are currently unable to incorporate up-to-date data to make accurate real-time predictions \citep{lloyd_exploring_2016, wang_data_2015, ward_dynamic_2016}. Models are typically calibrated once, using historical data, then projected forward in time to make a prediction. Here, calibration is ideal for one point in time, but as the simulation progresses, the prediction rapidly diverges from reality due to underlying uncertainties \citep{ward_dynamic_2016}. These uncertainties come from \textit{dynamic} (changing over space and time), \textit{stochastic} (containing inherent randomness) and \textit{unobserved} (unseen from the data) conditions of the real system under study. An example of such a system can be found in bus routes. Each time a bus reaches a bus stop, the number of alighting passengers is uncertain and the number of waiting passengers downstream is unobserved. The bus route's conditions also change over time, e.g. traffic varies over the route and between off-peak and peak periods.
%\end{fmtext}

There are methods to incorporate streaming data into models, such as \textit{data assimilation} (DA) routines \citep{lewis_dynamic_2006, wang2000data}. Broadly, DA refers to a suite of techniques that allow observational data to be incorporated into models \citep{wang2000data} to provide an optimal estimate of the evolving state of the system. Performing DA increases the probability of having an accurate representation of the current state of the system, thereby reducing the uncertainty of future predictions. This is a technique that has been widely applied in fields such as meteorology, hydrology and oceanography \citep{kalnay_atmospheric_2003}.

There are, however, two methodological challenges that must be overcome to apply DA in ABM. First, DA methods are often intrinsic to their underlying models, which are typically systems of partial differential equations with functions that can be linearised mathematically. Hence DA methods typically rely on linearising the underlying model \citep{harvey1990forecasting}. One of the most appealing aspects of agent-based models is that they are inherently non-linear, so it is not clear whether the assumptions of traditional DA methods will hold. Second, it is still unknown how much uncertainty DA can effectively deal with when implemented within ABM. Assimilation of real-time data into ABMs has only been attempted a few times and these examples are limited by their simplicity \citep{lloyd_exploring_2016, wang_data_2015, ward_dynamic_2016}.

This paper is part of a wider programme of work\footnote{\url{http://dust.leeds.ac.uk/}} that is focused on developing DA methods to be readily used in ABM. This paper focuses on one particular model that aims to make predictions of bus locations in real time. Bus route operation has been chosen due to its inherent uncertainties -- for example a model will need to account for uncertain factors affecting how buses travel on the roads \citep{khosravi2011prediction} -- but also for its tractability -- there are many fewer interactions than present in, say, a model of a crowd. We also focus on one particular DA algorithm -- the Particle Filter (PF).
This method is chosen due to its ability to incorporate data into non-linear models such as ABMs \citep{carpenter1999improved}. The objectives of this paper are to: (1) perform dynamic state estimation to reduce the uncertainty in the model's estimate of the \textit{current} system state; (2) improve the accuracy of short-term forecasts.

All the numerical experiments in this paper will be tightly controlled, following an `identical twin' experimental framework \citep[for example see][]{wang_data_2015}. We will first develop a complex ABM of a bus route to generate fine-grained synthetic GPS data of buses that are reasonably similar to real GPS data, for use as synthetic `ground truth' data. We call this model the `BusSim-truth' model. The next step is to develop companion ABMs that are simpler than BusSim-truth: they do not know the parameters of BusSim-truth and do not have its dynamic and stochastic features. We will calibrate and evaluate these companion ABMs against the data generated from BusSim-truth. This experiment is designed to mimic the real-time monitoring and prediction of bus locations, where models are often simpler versions of reality that are calibrated to match it as closely as possible. The prediction of bus locations and arrival times is essential for bus operators and a topical research challenge \citep{bin2006bus}. The methods developed here can easily be applied to simulation and forecasting for \textit{real} bus systems and could, therefore, offer considerable potential impact. This is particularly pertinent in rapidly developing cities where daily bus schedules can be extremely erratic. In these cases accurate, up-to-date estimates of current waiting times will be highly beneficial to citizens who use (or would like to use) public transport.

The contributions of this paper are threefold. First, several ABMs of bus routes are constructed that account for the interactions between the bus and passengers, between the bus and the surrounding traffic, and between multiple buses. While model development is not the sole focus of this paper, these bus route ABMs are novel and have utility for other studies. Second, this paper introduces a combination of parameter calibration and DA techniques that can dynamically optimise an ABM to enable accurate estimation of the bus system in real time. Third, this paper shows and quantifies the impacts of calibration and DA in dealing with the stochastic and dynamic nature of the system under study.

This paper is structured as follows. Section~\ref{s:problem} describes the research problem and the related work in the literature. Section~\ref{s:method} outlines the methodology. Section~\ref{s:experiments} describes the numerical experiments that are conducted and discusses their results. Finally, Section~\ref{s:conclusion} concludes the study and considers the opportunities for future work.

% Other sections are in their own file
\input{research_approach}
\input{method}
\input{experiments}
\input{implications}
\input{conclusion}

\section*{Data Availability}
This paper does not use any real data. Synthetic data have been generated from one of its models (the BusSim-truth model). The source code for all the models, together with the synthetic data used, is available from \url{https://github.com/leminhkieu/Bus-Simulation-model}.
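For readers who want a concrete picture of the assimilation step used in these experiments, the sketch below shows a minimal bootstrap particle filter wrapped around a generic agent-based simulator. It is an illustration written for this text rather than the code in the repository above; the model interface, the Gaussian observation noise and the variable names are assumptions.

\begin{verbatim}
import copy
import numpy as np

def particle_filter(step_model, observe, observations, x0,
                    n_particles=100, obs_noise=10.0):
    """Bootstrap particle filter around a generic agent-based simulator.

    step_model(state, rng) -> new state (one stochastic model iteration)
    observe(state)         -> np.array of observable quantities (e.g. bus positions)
    observations           -> iterable of np.array observations, one per DA window
    x0                     -> initial model state, copied for every particle
    """
    rng = np.random.default_rng(0)
    particles = [copy.deepcopy(x0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # 1. Predict: advance every particle with the stochastic model
        particles = [step_model(p, rng) for p in particles]
        # 2. Weight: Gaussian likelihood of the observation given each particle
        dist = np.array([np.linalg.norm(observe(p) - y) for p in particles])
        weights = np.exp(-0.5 * (dist / obs_noise) ** 2) + 1e-12
        weights /= weights.sum()
        # 3. Resample: duplicate good particles, drop poor ones
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = [copy.deepcopy(particles[i]) for i in idx]
        # 4. Estimate: ensemble mean of the observable state
        estimates.append(np.mean([observe(p) for p in particles], axis=0))
    return estimates
\end{verbatim}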
\section*{Competing interests}
We declare we have no competing interests.

\section*{Acknowledgements}
This project has received funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant agreement No. 757455), a UK Economic and Social Research Council (ESRC) Future Research Leaders grant (ES/L009900/1) and an ESRC/Alan Turing Joint Fellowship (ES/R007918/1).

\appendix
\input{AppendixA}
\input{AppendixB}

%\section*{References}
\bibliographystyle{plain}
\bibliography{2018-pf-bussim}

\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Copyright (c) 2003-2018 by The University of Queensland
% http://www.uq.edu.au
%
% Primary Business: Queensland, Australia
% Licensed under the Apache License, version 2.0
% http://www.apache.org/licenses/LICENSE-2.0
%
% Development until 2012 by Earth Systems Science Computational Center (ESSCC)
% Development 2012-2013 by School of Earth Sciences
% Development from 2014 by Centre for Geoscience Computing (GeoComp)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Introduction}

This document describes how to install \emph{esys-Escript}\footnote{For the rest of the document we will drop the \emph{esys-}.} onto your computer. To learn how to use \esfinley, please see the Cookbook, the User's Guide or the API documentation.

\esfinley is primarily developed on Linux desktop, SGI ICE and \macosx systems. It can be installed in several ways:
\begin{enumerate}
 \item Binary packages -- ready to run with no compilation required. These are available in the Debian and Ubuntu repositories, so just use your normal package manager (in which case you do not need this guide). They are also available for Anaconda Python 3.
 \item Using Flatpak.
 \item From source -- that is, it must be compiled for your machine. This is the topic of this guide.
\end{enumerate}
See the site \url{https://answers.launchpad.net/escript-finley} for online help.

Chapter~\ref{chap:source} covers installing from source. Appendix~\ref{app:cxxfeatures} lists the C++ features which your compiler must support in order to compile escript. This version of escript has the option of using \texttt{Trilinos} in addition to our regular solvers. Appendix~\ref{app:trilinos} covers the features of \texttt{Trilinos} which escript needs.
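Once an installation is in place, a quick smoke test can save time before moving on to the Cookbook examples. The following is a minimal sketch only: it assumes the standard module layout (\texttt{esys.escript} and \texttt{esys.finley}) is on your Python path, and the particular domain and function calls are chosen purely for illustration.
\begin{verbatim}
# Minimal smoke test for an escript installation (illustrative only).
# Assumes the standard module layout (esys.escript, esys.finley); adjust
# the domain module if your build provides a different one.
from esys.escript import *
from esys.finley import Rectangle

domain = Rectangle(n0=10, n1=10)   # small 2D test domain
x = domain.getX()                  # coordinates as escript Data objects
u = sin(x[0]) * cos(x[1])          # escript's overloaded math functions
print("escript appears to be working; Lsup(u) =", Lsup(u))
\end{verbatim}
If this script runs without an import error, the Python bindings have been installed correctly.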
\section{Matrix Calculus}

\subsection{Layout}

There are two different layouts:
\begin{itemize}
    \item \cindex{numerator layout}:
    \begin{equation}
        \left [ \begin{matrix} \nabla f \\ \nabla g \end{matrix} \right ]
    \end{equation}
    \item \cindex{denominator layout}:
    \begin{equation}
        \left [ \nabla f , \nabla g \right ]
    \end{equation}
\end{itemize}

The numerator layout is preferred.

\subsection{Jacobian Matrix}

For $\mathbf{y}_{1 \times m} = \mathbf{f}(\mathbf{x}_{1 \times n})$, the \cindex{Jacobian matrix} is:
\begin{equation}
    \nabla{}_{\mathbf{x}} \mathbf{y} = \frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \left [ \begin{matrix}
        \nabla f_1 (\mathbf{x}) \\
        \nabla f_2 (\mathbf{x}) \\
        \vdots \\
        \nabla f_m (\mathbf{x})
    \end{matrix} \right ]
    = \begin{pmatrix}
        \pd{f_1}{x} \\
        \pd{f_2}{x} \\
        \vdots \\
        \pd{f_m}{x}
    \end{pmatrix}
    = \left [ \begin{matrix}
        \frac{\partial f_1(\mathbf{x})}{\partial x_1} & \frac{\partial f_1(\mathbf{x})}{\partial x_2} & \dots & \frac{\partial f_1(\mathbf{x})}{\partial x_n} \\
        \frac{\partial f_2(\mathbf{x})}{\partial x_1} & \frac{\partial f_2(\mathbf{x})}{\partial x_2} & \dots & \frac{\partial f_2(\mathbf{x})}{\partial x_n} \\
        \vdots & \vdots & \ddots & \vdots \\
        \frac{\partial f_m(\mathbf{x})}{\partial x_1} & \frac{\partial f_m(\mathbf{x})}{\partial x_2} & \dots & \frac{\partial f_m(\mathbf{x})}{\partial x_n}
    \end{matrix} \right ]
\end{equation}

\subsection{Element-wise binary operator}

For an element-wise binary operator
\begin{equation}
    \mathbf{y}= \mathbf{f}(\mathbf{w}) \bigcirc \mathbf{g}(\mathbf{x})
\end{equation}
$\bigcirc$ could be $+$, $-$, $\times$\footnote{Called the \emph{Hadamard product}.}, $\div$, or $\max$. To compute the gradient, first expand $\mathbf{y}$ element-wise:
\begin{equation}
    \mathbf{y} = \left [ \begin{matrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{matrix} \right ] = \left [ \begin{matrix}
        f_1 (\mathbf{w}) \bigcirc g_1 (\mathbf{x}) \\
        f_2 (\mathbf{w}) \bigcirc g_2 (\mathbf{x}) \\
        \vdots \\
        f_n (\mathbf{w}) \bigcirc g_n (\mathbf{x})
    \end{matrix} \right ]
\end{equation}
The expanded vector can then be differentiated using the Jacobian matrix.

\subsection{Vector Sum}

The vector sum operation $\text{sum}$ can be expressed as
\begin{equation}
    y = \text{sum}\Big (\mathbf{f}(\mathbf{x}) \Big ) = \sum_{i=1}^{n} f_i (\mathbf{x})
\end{equation}
$\nabla{}_{\mathbf{x}} y$ can then be calculated as usual.

\subsection{Chain Rules}

In machine learning there are two ways of applying the \cindex{chain rule}:
\begin{itemize}
    \item forward differentiation: $\frac{dy}{dx} =\frac{du}{dx} \times \frac{dy}{du}$
    \item backward differentiation: $\frac{dy}{dx}=\frac{dy}{du} \times \frac{du}{dx}$
\end{itemize}
Backward differentiation is preferred for matrix operations.
The full expression of $\mathbf{y}=\mathbf{f}(\mathbf{g}(\mathbf{x}))$ is:
\begin{equation}
    \begin{aligned}
        \nabla{}_{\mathbf{x}} \mathbf{f} &= \pd{\mathbf{f}(\mathbf{g}(\mathbf{x}))}{\mathbf{x}} \\
        &= \pd{\mathbf{f}}{\mathbf{g}} \times \pd{\mathbf{g}}{\mathbf{x}} \\
        &= \left [ \begin{matrix}
            \frac{\partial f_1}{\partial g_1} & \frac{\partial f_1}{\partial g_2} & \dots & \frac{\partial f_1}{\partial g_n} \\
            \frac{\partial f_2}{\partial g_1} & \frac{\partial f_2}{\partial g_2} & \dots & \frac{\partial f_2}{\partial g_n} \\
            \vdots & \vdots & \ddots & \vdots \\
            \frac{\partial f_m}{\partial g_1} & \frac{\partial f_m}{\partial g_2} & \dots & \frac{\partial f_m}{\partial g_n}
        \end{matrix} \right ]_{m \times n}
        \times
        % multiply by the Jacobian of g
        \left [ \begin{matrix}
            \frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \dots & \frac{\partial g_1}{\partial x_r} \\
            \frac{\partial g_2}{\partial x_1} & \frac{\partial g_2}{\partial x_2} & \dots & \frac{\partial g_2}{\partial x_r} \\
            \vdots & \vdots & \ddots & \vdots \\
            \frac{\partial g_n}{\partial x_1} & \frac{\partial g_n}{\partial x_2} & \dots & \frac{\partial g_n}{\partial x_r}
        \end{matrix} \right ]_{n \times r}
    \end{aligned}
\end{equation}
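To make the backward chain rule concrete, the following sketch builds both Jacobians for two small maps and checks their product against a finite-difference Jacobian of the composition. The particular functions $\mathbf{f}$ and $\mathbf{g}$ (and the test point) are chosen only for illustration.
\begin{verbatim}
import numpy as np

# Illustrative maps: g: R^2 -> R^3 and f: R^3 -> R^2.
def g(x):
    return np.array([x[0]**2, x[0]*x[1], np.sin(x[1])])

def f(u):
    return np.array([u[0] + 2.0*u[1], u[1]*u[2]])

def jac_g(x):                       # 3x2 Jacobian of g
    return np.array([[2*x[0], 0.0],
                     [x[1],   x[0]],
                     [0.0,    np.cos(x[1])]])

def jac_f(u):                       # 2x3 Jacobian of f
    return np.array([[1.0, 2.0, 0.0],
                     [0.0, u[2], u[1]]])

def jac_fd(h, x, eps=1e-6):         # finite-difference Jacobian of h at x
    hx = h(x)
    J = np.zeros((hx.size, x.size))
    for j in range(x.size):
        xp = x.copy(); xp[j] += eps
        J[:, j] = (h(xp) - hx) / eps
    return J

x = np.array([0.7, -1.3])
J_chain = jac_f(g(x)) @ jac_g(x)                 # backward order: (df/dg)(dg/dx)
J_num   = jac_fd(lambda z: f(g(z)), x)
print(np.allclose(J_chain, J_num, atol=1e-4))    # True
\end{verbatim}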
%!TEX root = ../report.tex
\chapter{Evaluation}
Implementation and measurements.
%!TEX TS-program = xelatex
% !TEX root = ../thesis.tex   % Do not delete; used to set build system

\chapter{Absolute protein quantitation: Inference with non-ignorable missing data in high throughput proteomics}
\label{proteomics:ch:proteomics}

\section{Introduction}
\label{proteomics:sec:intro}

Proteins are the leading actors in cellular processes, making them a prime target for biological investigation. Understanding their absolute concentrations within a population of cells can elucidate the molecular dynamics that regulate their functions \citep{Ishihama:2005ir} and open a broad range of biological processes to deeper investigation. However, measuring the concentration of many proteins in a single experiment has been difficult \citep{Ghaemmagham:2003tu}. As a result, many investigations have relied upon gene expression as a proxy for the concentration of these proteins \citep{Franks:2013}.

In recent years a new field of high-throughput proteomics has emerged, in which biologists and biochemists have begun exploring the next level of biological complexity with Fourier transform mass spectrometers \citep{Scigelova:2006p10560,Scigelova:2011dt}. High-throughput mass spectrometry can deliver a fine-grained view of cellular activity at an unprecedented scale and precision. In principle, this new technology enables the direct estimation of absolute protein concentrations from the relative intensities of protein fragments in a biological sample. In practice, the measurement process implemented by mass spectrometers introduces complex systematic biases that must be accounted for to obtain valid estimates of absolute protein concentrations.

Proteins are large macromolecules, consisting of intricately-folded sequences of amino acids. They are enzymatically digested into fragments, which are amenable to analysis by mass spectrometry \citep{Thakur:2011kz}.
% Introduce distinction between fragments (unidentified peptides) and peptides (identified up to a modification or charge state).
Protein fragments are generally referred to as {\em peptides}; throughout this paper, fragments with the same sequence but distinct characteristics (e.g., charge states and chemical modifications) are considered {\em distinct} peptides.
%A typical biological sample contains hundreds of thousands of peptides.
These peptides are analyzed by two separate mass spectrometers to produce quantitative summaries \citep{Steen:2004tk}. Because of sample complexity and instrument limitations, however, not all of the peptides can be analyzed in the second stage of mass spectrometry to have their sequence identified. At any given time, the instrument is programmed to select the most abundant peptides from the first stage of mass spectrometry for sequencing. This multi-stage measurement process results in a systematic bias towards observing peptides from the sample's most abundant proteins. In this paper, we present and evaluate a statistical technique to correct these biases, providing reliable estimates of absolute protein abundances from mass spectrometry experiments.

Figure \ref{proteomics:fig:Mass_Spec_Overview} illustrates the experimental process in detail.
%
\ifx\nofigures\undefined
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/proteomics/mass-spec-errors-awb}
\caption{Overview of the measurement process in LC-MS/MS proteomics. Clockwise from top left: Cells are lysed to extract proteins, which are broken down into fragments by a protease. These fragments are then fractionated by charge and separated by hydrophobicity in the liquid chromatography (LC) stage before being sent through tandem mass spectrometry (MS/MS). This yields the spectrographic intensity (shown by shaded dots) and mass/charge ratio for each fragment retention time. A subset of these fragments are selected for further processing at each time, allowing for identification.
\label{proteomics:fig:Mass_Spec_Overview}}
\end{figure}
\fi
%
Starting from the top left, a culture of approximately $10^{8}$ cells is opened (lysed) to extract its proteins, which are then digested with an enzyme (protease) into fragments (peptides). These peptides are then separated based on their hydrophobicity (via high-performance liquid chromatography). As peptides gradually reach the spray tip, they are given an electrical charge and fly into the mass spectrometer. All of these steps simply transform the long, complex mass of proteins from our cell culture into a well-separated set of simpler molecules that the mass spectrometers can process.
%
Fragments that are ionized within the same short time window are analyzed together by the first of two mass spectrometers (MS1). At this stage, fragments with the same sequences of amino acids are present in a number of different states as a result of ion charges, \emph{in vivo} post-translational modifications and \emph{in vitro} chemical modifications from the sample preparation \citep{Michalski:2011gm}. This step results in raw measurements $\xi(t, r)$, each of which quantifies the intensity\footnote{This intensity corresponds to the magnitude of a Fourier coefficient associated with peptides of mass-to-charge ratio $r$ \citep{Scigelova:2006p10560}. Modern mass spectrometers generally measure the amplitudes at which ionized protein fragments oscillate along an electrode over time. The Fourier transformation $\xi(f)$ of the amplitude time series from a mixture of ionized fragments provides the power associated with each frequency $f$. Each frequency $f$ is associated with fragments of a particular mass-to-charge ratio $r$ according to $f=C/r$, where $C$ encodes instrument geometry and settings, yielding $\xi(t, r(f)) = \xi(t, r)$.}
%
corresponding to ionized fragments with mass-to-charge ratio $r$, analyzed within a time window indexed by $t$.
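To fix ideas, the short sketch below illustrates, with entirely synthetic numbers, how an intensity surface $\xi(t, r)$ over retention time and mass-to-charge ratio is reduced to a single integrated log-intensity of the kind defined formally later in this section. The Gaussian peak shapes and the integration window are assumptions made only for illustration; this is not the preprocessing pipeline used in our experiments.
\begin{verbatim}
import numpy as np

# Synthetic stand-in for xi(t, r): one peptide state producing a roughly
# Gaussian peak in retention time and in mass-to-charge ratio.
t = np.linspace(30.0, 32.0, 201)        # retention time (illustrative units)
r = np.linspace(500.0, 502.0, 401)      # mass-to-charge ratio (illustrative)
T, R = np.meshgrid(t, r, indexing="ij")
xi = 1e6 * np.exp(-0.5 * ((T - 31.0) / 0.1)**2) \
         * np.exp(-0.5 * ((R - 501.0) / 0.05)**2)

# Integration window around the peak (an assumption for this sketch).
in_t = (t > 30.5) & (t < 31.5)
in_r = (r > 500.8) & (r < 501.2)
window = xi[np.ix_(in_t, in_r)]

# Integrate over the window (trapezoidal rule), then take log10.
integral = np.trapz(np.trapz(window, r[in_r], axis=1), t[in_t])
y_obs = np.log10(integral)
print("integrated log10 intensity:", round(y_obs, 2))
\end{verbatim}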
%
The rightmost sections of Figure \ref{proteomics:fig:Mass_Spec_Overview} illustrate the output of this process, with each fraction yielding a set of intensities $\xi(t, r)$ for each (retention) time $t$ and mass-to-charge ratio $r$. The lower right panel of this figure shows one slice through this dataset corresponding to a particular time $t$.
%
A subset of the fragments analyzed by the first mass spectrometer is selected, based on their intensities, to be broken down into yet smaller fragment components by collision with a gas. The products of these collisions are then analyzed by a second mass spectrometer (MS2). This step results in an additional set of intensity measurements for these components, each of which has a distinct mass-to-charge ratio and is associated with its original fragment's mass-to-charge ratio by the instrument. The final (bottom left) panel of Figure \ref{proteomics:fig:Mass_Spec_Overview} shows the intensity spectrum obtained by MS2 for the components of the fragments with mass-to-charge ratio $r$ selected from MS1. The amino acid sequence of these fragments can often be reliably determined by analyzing differences in the mass-to-charge ratio between adjacent peaks in the MS2 spectrum.
%Peptides with exactly the same mass-to-charge ratio and sequence bits cannot be distinguished, but this occurs quite rarely give the accuracy of the instrument (1-2 parts-per-million) and the combinatorial explosion in the number of peptides---possible fragments times possible ion charge states times possible post-translational and chemical modifications.

% two stat problems: identification/matching, absolute abundance estimation.
The statistical problem of interest is to estimate the absolute concentrations of proteins in a sample from the observed peptide-level intensities $\{\xi(t, r):t\in T,r\in R\}$. However, to tackle this problem, we must first associate these raw intensities with peptides and condense them into a more manageable set of summaries. This preprocessing falls under the heading of identification, a well-studied problem in MS/MS proteomics.
%
In the identification task, the intensity and mass-to-charge data from MS2 are used to detect which peptides are present in the sample, in terms of their most likely sequence, charge, and possible chemical and post-translational modifications. In a typical run, hundreds of thousands of unique peptides are simultaneously detected. The observed spectrum of each detected peptide is then compared to the theoretical spectra of all the peptides that would be generated by digesting the proteins in the sample with a specific enzyme. For example, the enzyme trypsin digests the amino acids arginine (R) and lysine (K), so each protein is expected to be fragmented into peptides by removing all arginines and lysines from its amino acid sequence. Currently, several well-established methods for peptide/protein identification exist in the literature \citep{Cox:2008uu,Perkins:1999ed,Eng:1994fj}; we generally use the MaxQuant software of \citet{Cox:2008uu}.

Identification yields a set of integrated log-intensities $y_{ikl}^{obs}$, each of which is associated with peptide $k$ from protein $i$, where $l$ indexes distinct modifications and charge states. These integrated log-intensities are known to be approximately linearly related to the log of the number of molecules of that peptide present in the mass spectrometer, making them an excellent basis for absolute quantitation \citep{Old:2005jf,Scigelova:2011dt}.
Formally, each $y_{ikl}$ is the result of two operations: a mapping between an identified fragment and a set of values $(t, r) \in \Delta_{ikl}$, and the integration of $\xi(t,r)$ over these values.
%
The mapping serves to associate each observed $\xi(t, r)$ with a given peptide state $ikl$. The raw intensities $\xi(t, r)$ are then integrated over a small window in both retention time and mass-to-charge ratio. The former accounts for the fact that any given peptide $ikl$ is typically observed across adjacent time windows, while the latter accounts for minor variation in $r$ for each fragment originating from the presence of multiple isotopes and other factors. Thus, all of our inferences are based on
%
\begin{equation}
y_{ikl}^{obs} = \log_{10}\left( \int \! \int_{(t, r) \in \Delta_{ikl}} \xi(t, r) \dd r \dd t \right) .
\end{equation}

With these quantities in hand, we can refine the statement of our statistical problem: using the collection of integrated log-intensities $y_{ikl}^{obs}$ and known properties of the observed proteins, estimate the log-absolute concentrations of proteins in our sample, $\zeta_i$. We present a probabilistic model for these data in Section \ref{proteomics:sec:model}, with particular details of the core estimand $\zeta_i$ provided in Section \ref{proteomics:sec:estimand}.

\subsection{Related work}

There are two threads of literature on protein quantitation: relative and absolute quantitation. In relative quantitation, the estimand of interest is the ratio of a given protein's concentration in two distinct samples (generally different experimental conditions). In contrast, absolute quantitation requires the ability to estimate the concentration of all proteins in a single sample, relative to one another---these quantities, together with the total amount of protein measured in the sample, lead to the estimand of interest in this work, $\{\zeta_i:i\in I\}$, as we discuss in Section \ref{proteomics:sec:estimand}.

While there has been much work on the problem of peptide identification \citep[e.g.,][]{Cox:2008uu,Perkins:1999ed,Eng:1994fj}, less progress has been made on the quantitation problem \citep{Makawita:2010eg}. Despite the relatively strong correlation between the amount of each protein in a sample, the observed intensities, and the number of its peptides identified by the mass spectrometer \citep{Old:2005jf}, heterogeneity in this relationship between labs, samples, and even peptides from a common protein has hampered the development of robust quantitation methods \citep{Tabb:2010bc,Bell:2009il}. To control for these sources of variation, the popular methods for relative quantitation rely on physically labeling each sample being compared, analyzing them simultaneously, and estimating the relative abundance protein-by-protein based on summaries of peptide intensity ratios \citep{Ong:2002tf}.
%
Existing experimental methods for estimating absolute abundance are quite intricate. For each protein of interest, the experimenter must synthesize a set of standard peptides. They then compare the observed intensities of peptides from the sample to the intensities of the corresponding standard peptides, which are introduced at known concentrations \citep{Gerber:2003kq}. These approaches have limited throughput, are experimentally complex, and are limited to quantitation of a preselected set of proteins.
Motivated by the limitations of experimental quantitation methodologies, there has been a growing interest in computational methods for relative and absolute quantitation. The simplest of these are based on spectral counting, either in the form of ratio-based estimates for relative quantitation \citep{Liu:2004hv}, or using rescaled counts for absolute quantitation \citep{Ishihama:2005ir}. More recently, a semi-supervised absolute quantitation method called APEX has been developed, which uses a large training dataset to learn how differences in physicochemical properties affect the probability of peptide identification, independent of abundance \citep{Lu:2006p10143}. It then uses these estimated probabilities of identification to construct a weighted estimator of protein abundance.
%
Another class of methods uses peptide intensities, which have a wider dynamic range than spectral counts, for quantitation; the most common of these are based on simple summaries of observed intensities, such as their median \citep{deGodoy:2008jk, Silva:2005cn}. Most recently, a few attempts have been made to combine intensity and identification information, for example using principal component analysis to combine these features across samples \citep[e.g.,][]{Dicker:2010ea}. Finally, the most sophisticated techniques, which motivated our own work, are model-based methods for relative quantitation that account for missing data at the peptide level \citep{Karpievitch:2009wb,Luo:2009ff}. However, by focusing on peptide-level missingness, these methods fail to account for the true amount of missing data, since each peptide is generally found in a number of different states, due to biological and chemical modifications \citep{Michalski:2011gm}.
More recently, a similar approach has been suggested that relies on principal component analysis and a kernel ... \citep{Dicker:2010ea}.
%
An alternative approach relies on a different experimental protocol and technology, but ... \citep{Luo:2009ff}.
%
Perhaps closest to our approach, \citet{Karpievitch:2009wb} develop a model that posits a non-ignorable observation mechanism.

\subsection{Contributions of this article}

In this paper, we introduce a statistical model that combines a hierarchical intensity model with an observation model for intensity-based censoring, in Section \ref{proteomics:sec:model}. This combination accounts for key aspects of the biology and of the data acquisition process, including the fact that peptides are observed in multiple charge and modification states, and that there are at least two missing data mechanisms that compound throughout any LC-MS/MS instrument. Our approach is novel in a number of ways: (1) the focus is on absolute, rather than on relative, quantitation; (2) while most existing approaches to quantitation involve simple summary statistics after identification, we posit and estimate a realistic censoring mechanism that results in non-ignorable missing data; (3) the model explains the variability in the intensities of peptides observed in multiple charge and modification states, rather than the aggregated intensities for individual peptides.
%
We find that accounting for non-ignorable missing data helps reduce the selection bias induced by the measurement process. In addition, modeling intensity-based censoring at the level of peptide states helps transfer information from abundant to rare proteins, since the number of states we expect to observe any given peptide in is independent of abundance. As we show in Sections \ref{proteomics:sec:simulations} and \ref{proteomics:sec:empirical}, these aspects of the model allow robust, accurate estimation of absolute protein abundances in complex samples with concentrations spanning many orders of magnitude.

We provide efficient inference for this model, in Section \ref{proteomics:sec:estimation}, by leveraging a combination of Halley's method, adaptive Gauss-Hermite quadrature, and explicit envelopes for sampling the number of censored peptide states and the censored intensities. In Section \ref{proteomics:sec:simulations}, we explore the frequentist coverage of interval estimates, and we compare the performance of the proposed method with existing methods for estimating absolute protein concentrations, as we systematically vary the abundance and the length of the target proteins.
In Section \ref{proteomics:sec:empirical}, we explore the extent to which key assumptions of our model hold in practice, and we analyze estimates based on three biological samples, processed with different LC-MS/MS settings, in which a set of proteins with known concentrations was introduced for validation purposes.

%%% %%% %%% %%% %%% %%%

\section{Model}
\label{proteomics:sec:model}

We develop a probabilistic model for the measurements $y_{ikl}^{obs}$, the output of an LC-MS/MS experiment combined with identification analysis using standard software.
%This includes both the non-ignorable missing data mechanism induced by these experimental techniques and the hierarchical structure of our observations.
The observed data consist of the observed state-level intensities $y_{ikl}^{obs}$, indexed by protein $i$, peptide $k$, and charge state $l$, and the observed counts of observed states per peptide, $s_{ik}^{obs}$. We also know the number of possible peptides per protein, $m_i$, for a given enzyme used for digestion, which is independent of the sample in theory but requires careful preprocessing in practice.

We are interested in inferring the abundance of each protein $i$ within the given sample. This abundance is a monotone function of the parameter $\mu_i$, as detailed in Section \ref{proteomics:sec:estimand}. We posit the following model,
%
\begin{align}
\gamma_{ik}\mid\mu_{i}, \tau_{i}^2 &\sim \hbox{Normal}(\mu_{i},\tau_{i}^{2}), \, i=1,\ldots,n, \, k=1,\ldots,m_i \\
s_{ik} \mid \lambda, r &\sim 1+\hbox{ Negative-Binomial }(1-\lambda,r), \, k=1,\ldots,m_i \\
y_{ikl} \vert \gamma_{ik}, \sigma_{i}^2 &\sim \hbox{Normal}(\gamma_{ik},\sigma_{i}^{2}), \, l=1,\ldots,s_{ik}\\
R_{ikl} \vert \pi^{rnd}, s_{ik} &\sim \hbox{Bernoulli}(1 - \pi^{rnd}), \, l=1,\ldots,s_{ik} \\
I_{ikl} \vert y_{ikl}, \bm \eta, s_{ik}, R_{ikl}=1 &\sim \hbox{Bernoulli}(1 - g(y_{ikl}; \bm \eta)), \, l=1,\ldots,s_{ik} \\
O_{ikl} &= R_{ikl} \cdot I_{ikl} \\
Y_{obs} &= \{y_{ikl} : O_{ikl} = 1\} \\
Y_{mis} &= \{y_{ikl} : O_{ikl} = 0\}
\end{align}
%
This model consists of two pieces: the distribution of the complete data $Y_{com} = (Y_{obs}, Y_{mis})$ given the parameters, and the distribution of the observed data given the complete data and parameters. Starting with the complete data for protein $i$, this model specifies that each peptide gets a mean intensity $\gamma_{ik}$ distributed around the protein-level mean $\mu_i$. No prior distribution on $\mu_i$ is assumed. Similarly, each state-level intensity $y_{ikl}$ is distributed around the peptide-level intensity. The variances of these distributions, $\tau^2_i$ and $\sigma^2_i$, are drawn from inverse-Gamma distributions,
%
\begin{align}
1 / \tau_i^2 &\sim \text{Gamma}(\alpha_\tau, \beta_\tau) \\
1 / \sigma_i^2 &\sim \text{Gamma}(\alpha_\sigma, \beta_\sigma)
\end{align}

This is a straightforward Normal hierarchical model except for one complication.
%
The number of states per peptide $s_{ik}$ is drawn from a shifted negative binomial distribution. This distribution is fixed across proteins and peptides, reflecting the physical independence between the number of possible charge states and a protein's abundance. This invariance plays a crucial role in our inference, as we discuss in Sections \ref{proteomics:sec:estimation} and \ref{proteomics:sec:remarks}.
Hence, we have
\begin{align}
\label{proteomics:eq:complete_data_likelihood}
P(Y_{com} | \bm \mu, \bm \sigma^2, \bm \tau^2, r, \lambda) = \prod_{ik} \Bigg[& \frac{1}{\tau_i} \phi\left( \frac{\gamma_{ik} - \mu_i}{\tau_i} \right) \begin{pmatrix} s_{ik} + r - 2 \\ s_{ik} - 1 \end{pmatrix} \lambda^{r} (1 - \lambda)^{s_{ik} - 1} \cdot \\ \nonumber
& \prod_{l=1}^{s_{ik}} \left[ \frac{1}{\sigma_i} \phi\left( \frac{y_{ikl} - \gamma_{ik}}{\sigma_i} \right) \right] \Bigg],
\end{align}
where $\phi(z)$ is defined as $\frac{1}{\sqrt{2 \pi}} \exp\left( -\frac{1}{2} z^2 \right)$.

%%% %%% %%%

\subsection{Missing data mechanism}
\label{proteomics:sec:mdm}

The missing data mechanism operates at the state level and is characterized using two random variables. The first is a random censoring indicator, $R_{ikl} \sim \text{Bernoulli}(1 - \pi^{rnd})$, which accounts for censoring due to factors other than abundance. The second is an intensity censoring indicator, $I_{ikl} \sim \text{Bernoulli}(1 - g(y_{ikl}; \bm \eta))$, which accounts for the intensity-dependent censoring. We assume $I_{ikl}$ is drawn only if $R_{ikl} = 1$, as shown in Figure \ref{proteomics:fig:Missing_Data_Indicators_Tree}.
%
\ifx\nofigures\undefined
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.75\textwidth]{figures/proteomics/fig_tree_awb.pdf}
\end{center}
\caption{Possible missingness indicator values.\label{proteomics:fig:Missing_Data_Indicators_Tree}}
\end{figure}
\fi
%
Hence, we have
\begin{equation}
P(\bm I, \bm R | Y_{com}, \eta, \pi^{rnd}) = \prod_{ikl} (\pi^{rnd})^{1 - R_{ikl}} (1 - \pi^{rnd})^{R_{ikl}} (1 - g(y_{ikl}; \eta))^{I_{ikl} R_{ikl}} g(y_{ikl}; \eta)^{(1 - I_{ikl}) R_{ikl}},
\end{equation}
%
which implies a corresponding distribution on the (redundant) variables $\bm O \equiv \bm I \circ \bm R$, where $\circ$ denotes the element-wise product; these variables indicate whether each intensity $ikl$ is observed. In particular, the marginal probability of observing a given peptide state given $y_{ikl}$ and all other parameters is then
%
\begin{equation}
P(O_{ikl} = 1 | y_{ikl}, \bm \Theta) = (1 - \pi^{rnd}) (1 - g(y_{ikl}; \bm \eta)).
\end{equation}
%
In combination with (\ref{proteomics:eq:complete_data_likelihood}), these assumptions imply that
\begin{equation}
s_{ik}^{obs} | \bm \gamma, \bm \sigma^2, \bm s \sim \text{Binomial}(s_{ik}, (1-\pi^{rnd})(1 - \pi^{int}_{ik}))
\end{equation}
are conditionally independent across peptides, where $\pi^{int}_{ik} = \int_{\mathbb{R}} g(t; \eta) \frac{1}{\sigma_i} \phi\left( \frac{t - \gamma_{ik}}{\sigma_i} \right) \text{d}t$.

From this, we see how the division of the missing data mechanism into random and intensity-based censoring adds flexibility to our model, allowing the probability that a state is observed to asymptote to a value lower than 1 as intensity increases. However, it is important to note that this separation is largely conceptual, not physical. The random and intensity-based censoring mechanisms correspond only roughly to the early and late stages of the LC-MS/MS process, respectively.
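To make the observation model concrete, the following minimal sketch simulates complete data for a single protein from the hierarchy above and then applies the two-stage censoring. The probit form chosen for $g(\cdot; \bm \eta)$ and all parameter values are illustrative assumptions made only for this sketch (the model itself leaves $g$ generic); the upward shift of the observed-state mean previews the dominance result discussed next.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Illustrative parameter values (assumptions for this sketch only).
mu_i, tau_i, sigma_i = 6.0, 0.8, 1.2   # protein mean, peptide and state s.d.
lam, r_nb            = 0.8, 2.0        # shifted negative binomial parameters
pi_rnd               = 0.2             # random-censoring probability
eta0, eta1           = 16.0, -3.0      # probit censoring curve g(y) = Phi(eta0 + eta1*y)
m_i                  = 30              # number of possible peptides

# Complete data: peptide means, numbers of states, state-level intensities.
gamma = rng.normal(mu_i, tau_i, size=m_i)
s     = 1 + rng.negative_binomial(r_nb, lam, size=m_i)        # s_ik >= 1
y     = [rng.normal(g_k, sigma_i, size=s_k) for g_k, s_k in zip(gamma, s)]

# Two-stage censoring: random, then intensity-based.  I is drawn for every
# state for simplicity; it only matters when R = 1, so the result is the same.
y_obs, s_obs = [], np.zeros(m_i, dtype=int)
for k in range(m_i):
    R = rng.random(s[k]) > pi_rnd                             # survives random censoring
    I = rng.random(s[k]) > norm.cdf(eta0 + eta1 * y[k])       # survives intensity censoring
    keep = R & I
    y_obs.append(y[k][keep])
    s_obs[k] = keep.sum()

print("mean complete intensity:", np.concatenate(y).mean().round(2))
print("mean observed intensity:", np.concatenate(y_obs).mean().round(2))  # biased upward
\end{verbatim}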
From a theoretical perspective, this missing data mechanism induces a stochastic dominance relationship between the distributions of $Y_{obs}$ and $Y_{com}$, as Theorem \ref{proteomics:thm:dominance} establishes:
%
\begin{theorem}\label{proteomics:thm:dominance}
Suppose $X \sim F_X(x)$; that is, $X$ has a cumulative distribution function that can be represented as a Riemann-Stieltjes integral over $\mathbb{R}$. Let $Z | X=x \sim \Bernoulli( g(x) )$, where $g(x)$ is strictly increasing on $\mathbb{R}$ from $0$ to $1$. Then, the posterior distribution of $X$ given $Z=1$ stochastically dominates the original distribution of $X$.
\end{theorem}
%
This result establishes that observed intensities will be biased upwards relative to the complete ones. We use this result for model checking in Section \ref{proteomics:sec:checkassumptions}. A proof is provided in Section \ref{proteomics:sec:proof1}.

%%% %%% %%%

\subsection{Estimands}
\label{proteomics:sec:estimand}

The $\mu_i$ parameters are the primary target of the inference; however, they are not directly interpretable as absolute measures of protein abundance. To provide absolute measures, we must convert $\mu_i$ from the log-intensity scale to the scale of protein abundances. We define an estimand $\zeta_i$ for this purpose,
%
\begin{equation}
\zeta_i = \log_{10} \left( \frac{T \times 10^{\mu_i}}{ \sum_{i=1}^{n} 10^{\mu_i}} \right) = \mu_i + \log_{10}(T) - \log_{10}\left(\sum_{i=1}^{n} 10^{\mu_i}\right),
\label{proteomics:eq:abs_abund_estimand}
\end{equation}
%
where $T$ is the total amount of protein in the sample of interest. The key feature of this estimand is the normalization by $\sum_{i=1}^n 10^{\mu_i}$, which provides the core conversion from log-intensities to the log-abundance scale (up to an additive constant). The total protein amount $T$ serves primarily as a rescaling factor for interpretability, converting estimates from log-proportions of protein to log-molecules per cell or log-femtomoles. We assume that $T$ is known and fixed. While this assumption is often challenged in practice, it is crucial to neither the validity nor the utility of our estimates. Modeling $T$ is also a possibility.
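As a concrete illustration of (\ref{proteomics:eq:abs_abund_estimand}), the short sketch below converts a vector of log-intensity means into absolute log-abundances. The values of $\mu_i$ and $T$ are arbitrary, and the log-sum-exp style shift is only a numerical convenience, not part of the estimand.
\begin{verbatim}
import numpy as np

# Illustrative log10-intensity means and total protein amount (assumed known).
mu = np.array([3.2, 5.8, 6.1, 7.4, 8.0])
T  = 1.0e6

# zeta_i = mu_i + log10(T) - log10(sum_i 10^mu_i), with a shift for stability.
shift = mu.max()
log10_total = shift + np.log10(np.sum(10.0 ** (mu - shift)))
zeta = mu + np.log10(T) - log10_total
print(zeta.round(3))   # absolute log10 abundances; each differs from mu_i by a constant
\end{verbatim}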
%%% %%% %%% %%% %%% %%%

\section{Inference and estimation}
\label{proteomics:sec:estimation}

\ifx\nofigures\undefined
\begin{table}
\caption{Prior Distributions. \label{proteomics:table:priors}}
\begin{eqnarray*}
\log(\alpha_\sigma) &\sim& N(\mu_{0 \alpha_\sigma}, v_{0 \alpha_\sigma}) \\
\beta_\sigma &\sim& Gamma(\alpha_{0 \sigma}, \beta_{0 \sigma}) \\
\log(\alpha_\tau) &\sim& N(\mu_{0 \alpha_\tau}, v_{0 \alpha_\tau}) \\
\beta_\tau &\sim& Gamma(\alpha_{0 \tau}, \beta_{0 \tau}) \\
\pi^{rnd} & \sim & Beta(\alpha_{0\pi}, \beta_{0\pi})\\
\lambda & \sim & Beta(\alpha_{0 \lambda}, \beta_{0 \lambda})\\
\log(r) & \sim & N(\mu_{0r}, v_{0r})
\end{eqnarray*}
\end{table}
\fi

We develop an efficient Markov chain Monte Carlo algorithm to perform inference with the proposed model on proteome-wide data sets with hundreds of thousands of peptide states. We design a Metropolis-within-Gibbs algorithm, alternating between updates for the missing data and the parameters. The algorithm consists of the following steps within each iteration:
%
\begin{enumerate}
\item Draw the censored peptide latent variables, $\mathbf{M}\vert\mathbf{Y^{obs}}, \bm \Theta$.
\begin{enumerate}
\item Draw the number of peptide states, $\bm s \vert \mathbf{Y^{obs}}, \bm \Theta$, using rejection sampling.
\item Draw the random censoring indicators, $\bm{R} \vert \bm s, \mathbf{Y^{obs}}, \bm \Theta$.
\item Draw the censored intensities, $\mathbf{Y^{mis}} \vert \bm{R},\bm{s}, \bm{Y^{obs}}, \bm{\Theta}$, using rejection sampling.
\end{enumerate}
\item Update the parameters $\bm \Theta$ given the complete data $(\bm Y^\mathrm{obs}, \bm M)$.
\end{enumerate}
%
The updates of $\bm{\Theta} \vert \bm{Y^{obs}}, \bm M$ are of a standard form. In Section \ref{proteomics:sec:missingDataDraw}, we provide the details of the exact update of $\bm M$ given $\bm \Theta$. Further details of our inference strategy are contained in the Appendix, which includes the complete specification of the updates for $\bm \Theta$ given $\bm M$. Table \ref{proteomics:table:priors} provides the prior distributions used in our inference; we provide the specific parameter values used in Appendix \ref{ch:supp:proteomics}.

\subsection{Draw the censored peptide latent variables, $\mathbf{M} \vert \bm \Theta$.}
\label{proteomics:sec:missingDataDraw}

Drawing the missing data $\mathbf{M}=\{\mathbf{Y}^{mis},\mathbf{s},\mathbf{R}\}$ is the most challenging component of the inference. The dimensionality of the missing data $\mathbf{M}$ is not fixed across iterations, so standard Metropolis-Hastings techniques are not enough. Reversible Jump Metropolis-Hastings (RJMH) methods provide one option, which we originally explored, but they proved too inefficient and fragile. Instead, we develop a partially marginalized update that draws from the exact conditional distribution of $\bm M$ given $(\bm Y^{obs}, \bm \Theta)$. We implement this exact draw using a ``triangular'' dependence structure, starting with the easiest draws to marginalize:
%
\begin{align}
p( \bm M \mid \bm{Y}^{obs}, \bm{\Theta}) \propto & p(\bm s \mid \bm{s}^{obs}, \bm {Y}^{obs}, \bm{\Theta}) \\ \nonumber
\times & p(\bm{R} \mid \bm s, \bm{Y}^{obs}, \bm{\Theta}) \\ \nonumber
\times & p(\bm{Y}^{mis} \mid \bm{R}, \bm{s}, \bm{Y}^{obs}, \bm{\Theta})
\end{align}
%
Using efficient numerical integration techniques (such as Gauss-Hermite quadrature) and exact sampling methods (involving explicit envelopes for rejection samplers), we generate exact draws from the joint posterior of the missing data using the above sequence of conditional distributions. Details of each of these draws are given in the following sections. Section \ref{proteomics:sc:draw_s_ik_mis} details the steps required to compute and sample $\bm s$ from $p(\bm s \mid \bm{s}^{obs}, \bm {Y}^{obs}, \bm{\Theta})$. Section \ref{proteomics:sec:r_ik_mis_post} covers $p(\bm{R} \mid \bm s, \bm{Y}^{obs}, \bm{\Theta})$. Section \ref{proteomics:sec:y_ikl_mis_draw} details the exact sampling strategy for $p(\bm{Y}^{mis} \mid \bm{R}, \bm{s}, \bm{Y}^{obs}, \bm{\Theta})$.

\subsubsection{Drawing from $p(\bm s \mid \bm {Y}^{obs}, \bm{\Theta})$}
\label{proteomics:sc:draw_s_ik_mis}

We derive the posterior of $s_{ik}$ given $(\bm {Y}^{obs}, \bm{\Theta})$ by iteratively marginalizing over the remaining components of $\bm M$. For the derivations in this section, we define the number of unobserved states for peptide $k$ of protein $i$ as $s_{ik}^{mis} \equiv s_{ik} - s_{ik}^{obs}$; drawing $s_{ik} | s_{ik}^{obs}, Y^{obs}, \bm \Theta$ is then equivalent to drawing $s_{ik}^{mis} | s_{ik}^{obs}, Y^{obs}, \bm \Theta$. First, we note that the conditional posterior distribution of $\bm M$ factors by both protein and peptide, so it suffices to consider a single $s_{ik}$.
This yields
%
\begin{align}
p( s_{ik} \mid \bm{y}_{ik}^{obs}, s_{ik}^{obs}, \bm{\Theta} ) \propto& p( \bm{y}_{ik}^{obs} \mid s_{ik}, \bm{\Theta} ) \cdot p(s_{ik} \mid \bm{\Theta} ) \\
=& \left[ \int_{\Reals^{s_{ik}^{mis}}} p(\bm{y}_{ik}^{obs}, \bm{y}_{ik}^{mis} \mid s_{ik}, \bm{\Theta}) \, \mathrm{d}\bm{y}_{ik}^{mis} \right] \cdot p(s_{ik} \mid \bm{\Theta}) \\ \nonumber
\propto& \left(\begin{array}{c}
s_{ik}^{obs}+s_{ik}^{mis}\\
s_{ik}^{obs}
\end{array}\right) \cdot
\left\{ \prod_{l=s_{ik}^{obs}+1}^{s_{ik}^{obs}+s_{ik}^{mis}} \int_\Reals \left[ \sum_{R_{ikl},I_{ikl}} p(y_{ikl}^{mis}, R_{ikl},I_{ikl} \mid s_{ik}, \bm{\Theta}) \right] \, \mathrm{d}y_{ikl}^{mis} \right\} \\
&\cdot p(s_{ik} \mid \bm{\Theta}) ,
\end{align}
where the last relationship follows by conditioning on $\bm{Y}^{obs}$ and expansion over the missingness indicators $I_{ikl}$ and $R_{ikl}$. The combinatorial term enters the above expression due to the varying dimensionality of our missing data. We then focus on $p(y_{ikl}^{mis}, R_{ikl},I_{ikl} \mid s_{ik}, \bm{\Theta})$, obtaining
\begin{align} \nonumber
\sum_{R_{ikl},I_{ikl}} p(y_{ikl}^{mis}, R_{ikl},I_{ikl} \mid s_{ik}, \bm{\Theta}) =& p(y_{ikl}^{mis} \mid s_{ik}, \bm{\Theta}) \cdot \Big( p(R_{ikl} = 0 \mid y_{ikl}^{mis}, s_{ik}, \bm{\Theta}) + \\
& p(R_{ikl} = 1 \mid y_{ikl}^{mis}, s_{ik}, \bm{\Theta})\cdot p(I_{ikl} = 0 \mid R_{ikl}=1, y_{ikl}^{mis}, s_{ik}, \bm{\Theta}) \Big) \\
=& \frac{1}{\sigma_i} \phi\left( \frac{y_{ikl}^{mis} - \gamma_{ik}}{\sigma_{i}} \right) \cdot \Big( \pi^{rnd} + (1-\pi^{rnd}) g(y_{ikl}^{mis}, \bm \eta) \Big).
\end{align}
%
Integrating this expression over $y_{ikl}^{mis}$ yields
%
\begin{align}
\int_\Reals p(y_{ikl}^{mis}, R_{ikl},I_{ikl} \mid s_{ik}, \bm{\Theta}) \, \mathrm{d}y_{ikl}^{mis} &= \pi^{rnd} + (1-\pi^{rnd}) \int_\Reals \frac{1}{\sigma_i} \phi\left( \frac{y_{ikl}^{mis} - \gamma_{ik}}{\sigma_{i}} \right) g(y_{ikl}^{mis}, \bm \eta) \, \mathrm{d}y_{ikl}^{mis} \label{proteomics:eqn:normal_logit_integral} \\
&= \pi^{rnd} + (1 - \pi^{rnd}) \pi^{int}_{ik},
\end{align}
%
where we define
\begin{equation}
\pi^{int}_{ik} = \int_\Reals \frac{1}{\sigma_i} \phi\left( \frac{y_{ikl}^{mis} - \gamma_{ik}}{\sigma_{i}} \right) g(y_{ikl}^{mis}, \bm \eta) \, \mathrm{d}y_{ikl}^{mis} .
\label{proteomics:eqn:pi_IC}
\end{equation}
Substituting $\pi^{int}_{ik}$ from (\ref{proteomics:eqn:pi_IC}) into (\ref{proteomics:eqn:normal_logit_integral}) yields a simple form for the conditional posterior PMF of $s_{ik}$:
\begin{eqnarray}
\nonumber p( s_{ik} \mid \bm{y}_{ik}^{obs}, \bm{\Theta} ) & \propto & \left(\begin{array}{c}
s_{ik}^{obs}+s_{ik}^{mis}\\
s_{ik}^{obs}
\end{array}\right) \cdot \Big( \pi^{rnd} + (1-\pi^{rnd}) \cdot \pi^{int}_{ik} \Big)^{s_{ik}^{mis}} \cdot \, p(s_{ik} \mid \bm{\Theta}) \\
& \propto & (s_{ik}^{obs} + s_{ik}^{mis}) \cdot NegativeBinomial(s_{ik}^{mis} \mid 1-p_{ik}^*, \, s_{ik}^{obs} + r - 1) \label{proteomics:eqn:sik_final_posterior}
\end{eqnarray}
where $p_{ik}^* = (1-\lambda)\big(\pi^{rnd} + \pi_{ik}^{int}(1-\pi^{rnd})\big)$.
%
The conditional posterior given by (\ref{proteomics:eqn:sik_final_posterior}) deviates from a Negative Binomial PMF only due to the constraint that $s_{ik} \geq 1$.

% End of posterior derivation, start of algorithm
In order to draw from the posterior of $s_{ik}$ exactly, we develop a rejection sampler using a $NegativeBinomial(1 - p_{ik}^*, s_{ik}^{obs} + r)$ as the proposal distribution. We structure this as a draw from the conditional posterior of $s_{ik}^{mis}$ for computational convenience and clarity of notation. If $s_{ik}^{obs}=0$, then $s_{ik}^{mis} - 1 \sim NegativeBinomial(1-p_{ik}^*, \, r)$ exactly, so we consider only the $s_{ik}^{obs} > 0$ case. To construct the sampler, we obtain a target-proposal ratio of
\begin{eqnarray*}
\frac{p(s_{ik}^{mis}\vert \bm{Y^{obs}}, \bm{\Theta})}{NegativeBinomial(1-p_{ik}^{*},s_{ik}^{obs}+r)} & = & \frac{c^{*}(s_{ik}^{obs}+s_{ik}^{mis})}{s_{ik}^{obs}+s_{ik}^{mis}+r-1},
\end{eqnarray*}
where $c^{*}$ is a constant that is not a function of $s_{ik}^{mis}$. This ratio is bounded by
\begin{eqnarray*}
\frac{c^{*}(s_{ik}^{obs}+s_{ik}^{mis})}{s_{ik}^{obs}+s_{ik}^{mis}+r-1} & \leq & \begin{cases}
c^{*} & \text{if }r\geq1\\
\frac{c^{*}s_{ik}^{obs}}{s_{ik}^{obs}+r-1} & \text{if }0<r<1
\end{cases}.
\end{eqnarray*}
Using the tightest such bound yields the following acceptance probabilities:
\begin{eqnarray*}
p(accept \vert X) & = & \begin{cases}
\frac{(s_{ik}^{obs}+s_{ik}^{mis})}{(s_{ik}^{obs}+s_{ik}^{mis}+r-1)} & \text{if }r\geq1\\
\frac{(s_{ik}^{obs}+s_{ik}^{mis})}{s_{ik}^{obs}+s_{ik}^{mis}+r-1} \times \frac{s_{ik}^{obs}+r-1}{s_{ik}^{obs}} & \text{if }0<r<1
\end{cases}.
\end{eqnarray*}
However, the integral required to compute $\pi_{ik}^{int}$ is not generally available in closed form.
We develop a simple, accurate numerical integration strategy based on adaptive Gauss-Hermite quadrature \citep{Liu1994}. We first find the mode of the integrand, $\hat{y}^{mis}_{ik}$, and the negative second derivative of its logarithm at the mode, $\hat{v}_{ik}^{mis} \equiv \frac{1}{\sigma_i^2} - \frac{\partial^2}{\partial y_{ikl}^{mis\,2}}\log\left[g(y_{ikl}^{mis}, \bm \eta) \right] \Big|_{y_{ikl}^{mis} = \hat{y}^{mis}_{ik}}$. Since even this mode is not available in closed form, we use a vectorized version of Halley's method to efficiently approximate it. Using this information, we then approximate $\pi_{ik}^{int}$ using Gauss-Hermite quadrature, with the nodes shifted and scaled based on $\hat{y}^{mis}_{ik}$ and $\hat{v}_{ik}^{mis}$, yielding $\hat{\pi}_{ik}^{int}$. For moderate values of $\eta_1$, only a small number of nodes (10 or fewer) are typically required for accuracy to machine precision. We summarize this strategy in Figure \ref{proteomics:fig:Laplace_Halleys_Outline}.% and details of these computations are available in the Appendix.

The algorithm for drawing $s_{ik}^{mis}$ is given in Algorithm \ref{proteomics:alg:N-Censored-Rejection-Sampler-Algorithm}. See Figure \ref{proteomics:fig:s_ik^mis-Rejection-Sampler} for sample draws compared to the true density and the proposal negative binomial density.

\ifx\nofigures\undefined
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{figures/proteomics/Fig2_Observed_Censored_Intensity_Distributions}
\caption{Comparisons of original, censored, and observed intensity distributions.
\label{proteomics:fig:Laplace_Halleys_Outline}}
\end{figure}
\fi

\begin{algorithm}[H]
\caption{$s_{ik}$ Rejection Sampler\label{proteomics:alg:N-Censored-Rejection-Sampler-Algorithm}}
\begin{enumerate}
\item Draw $X\sim NegativeBinomial(1-\hat{p}_{ik}^{*},s_{ik}^{obs}+r)$, with $\hat{p}_{ik}^{*}=(1-\lambda)\left[\pi^{rnd}+(1-\pi^{rnd})\hat{\pi}_{ik}^{int}\right]$, and $U\sim Uniform(0,1)$.
\item Accept $s_{ik}^{mis}=X$ if $U\leq\begin{cases}
\frac{(s_{ik}^{obs}+X)}{(s_{ik}^{obs}+X+r-1)} & \text{if }r\geq1\\
\frac{(s_{ik}^{obs}+X)}{s_{ik}^{obs}+X+r-1}\times\frac{s_{ik}^{obs}+r-1}{s_{ik}^{obs}} & \text{if }0<r<1
\end{cases}$.
\item Return to $1$ otherwise.
\end{enumerate}
\end{algorithm}

\ifx\nofigures\undefined
\begin{figure}
\begin{center}
\includegraphics[width=.75\textwidth]{figures/proteomics/figure_nstates_sampler}
\end{center}
\caption{Expected iterations per accepted draw for the $s_{ik}^{mis}$ rejection sampler with $\lambda=0.1$, $\pi^{rnd}=0.1$, and $\pi_{ik}^{int}=0.5$, with $r$ ranging from 0.5 to 5 and $s_{ik}^{obs}$ ranging from 1 to 10.
\label{proteomics:fig:s_ik^mis-Rejection-Sampler}}
\end{figure}
\fi
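The sketch below illustrates the two numerical pieces just described: an adaptive Gauss-Hermite approximation to $\pi_{ik}^{int}$ and the rejection sampler of Algorithm \ref{proteomics:alg:N-Censored-Rejection-Sampler-Algorithm}. The probit form of $g$ and the parameter values are illustrative assumptions, and a grid search stands in for the vectorized Halley iteration; it is a sketch of the idea, not our implementation.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, nbinom

rng = np.random.default_rng(2)

# Illustrative values (assumptions for this sketch).
gamma_ik, sigma_i = 5.0, 1.2         # peptide mean and state-level s.d.
eta0, eta1        = 16.0, -3.0       # probit censoring curve g(y) = Phi(eta0 + eta1*y)
pi_rnd, lam, r    = 0.2, 0.8, 1.9    # censoring and state-count parameters
s_obs             = 2                # observed states for this peptide

# --- pi^int by adaptive Gauss-Hermite quadrature --------------------------
def log_h(y):                        # log of the integrand g(y) * N(y; gamma, sigma^2)
    return norm.logcdf(eta0 + eta1 * y) + norm.logpdf(y, gamma_ik, sigma_i)

grid = np.linspace(gamma_ik - 8, gamma_ik + 8, 4001)
y_hat = grid[np.argmax(log_h(grid))]           # mode (grid search replaces Halley's method)
h = 1e-4
v_hat = -(log_h(y_hat + h) - 2*log_h(y_hat) + log_h(y_hat - h)) / h**2

nodes, weights = np.polynomial.hermite.hermgauss(10)
scale = np.sqrt(2.0 / v_hat)
pi_int = scale * np.sum(weights * np.exp(nodes**2 + log_h(y_hat + scale * nodes)))

# For a probit g there is a closed form, used here only as a sanity check.
exact = norm.cdf((eta0 + eta1 * gamma_ik) / np.sqrt(1 + (eta1 * sigma_i)**2))
print("pi_int (quadrature, closed form):", round(pi_int, 6), round(exact, 6))

# --- Algorithm: rejection sampler for s_ik^mis -----------------------------
p_star = (1 - lam) * (pi_rnd + (1 - pi_rnd) * pi_int)
def draw_s_mis():
    while True:
        x = nbinom.rvs(s_obs + r, 1 - p_star, random_state=rng)
        accept = (s_obs + x) / (s_obs + x + r - 1)
        if r < 1:
            accept *= (s_obs + r - 1) / s_obs
        if rng.random() <= accept:
            return x

draws = np.array([draw_s_mis() for _ in range(2000)])
print("posterior mean of s_ik^mis:", draws.mean().round(2))
\end{verbatim}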
\subsubsection{Drawing from $p(\bm{R} \mid \bm s, \bm{Y}^{obs}, \bm{\Theta})$}
\label{proteomics:sec:r_ik_mis_post}

After drawing the number of states per peptide, $s_{ik}$, we draw the latent random censoring indicators for each censored peptide state, $R_{ikl}$. Because random censoring occurs before intensity-based censoring, if a state was randomly censored ($R_{ikl}=0$), then $p(I_{ikl}=1\vert R_{ikl}=0,\bm{\Theta})=0$ and $p(O_{ikl}=1\vert R_{ikl}=0,\bm{\Theta})=0$, as outlined in Figure \ref{proteomics:fig:Missing_Data_Indicators_Tree}. We then obtain the posterior probability that a peptide state is randomly censored given that it is missing via a straightforward application of Bayes' Theorem:
\begin{eqnarray}
p(R_{ikl}=1\vert O_{ikl}=0,\bm{\Theta}, \bm{Y}^{obs}) & = & 1 - \frac{p(O_{ikl}=0\vert R_{ikl}=0,\bm{\Theta}, \bm{Y}^{obs})\, p(R_{ikl}=0\vert\bm{\Theta}, \bm{Y}^{obs})}{\sum_{r_{ikl}=0}^{1}p(O_{ikl}=0\vert R_{ikl}=r_{ikl},\bm{\Theta})\, p(R_{ikl}=r_{ikl}\vert\bm{\Theta}, \bm{Y}^{obs})}\nonumber \\
& = & 1 - \frac{1\times\pi^{rnd}}{\pi^{rnd}\times1+(1-\pi^{rnd})\, p(O_{ikl}=0\vert R_{ikl}=1,\bm{\Theta})}\label{proteomics:eq:w-posterior-in-text-1}\\
& = & 1 - \frac{\pi^{rnd}}{\pi^{rnd}+(1-\pi^{rnd})\int_{-\infty}^{\infty}p(I_{ikl}=0\vert y_{ikl},R_{ikl}=1,\bm{\Theta})\, p(y_{ikl}\vert R_{ikl}=1,\bm{\Theta})\,\mathrm{d}y_{ikl}} \label{proteomics:r-posterior}\\
& = & 1 - \frac{\pi^{rnd}}{\pi^{rnd}+(1-\pi^{rnd})\pi_{ik}^{int}},
\end{eqnarray}
where $\pi_{ik}^{int}$ is defined as in (\ref{proteomics:eqn:pi_IC}). Plugging in our numerical approximation $\hat{\pi}_{ik}^{int}$, we draw
\begin{equation}
R_{ikl} \mid \bm{Y}^{obs}, \bm{\Theta}, s_{ik} \sim Bernoulli\left(1 - \frac{\pi^{rnd}}{\pi^{rnd}+(1-\pi^{rnd})\hat{\pi}_{ik}^{int}}\right) .
\end{equation}

\subsubsection{Drawing from $p(\bm{Y}^{mis} \mid \bm{R}, \bm{s}, \bm{Y}^{obs}, \bm{\Theta})$}
\label{proteomics:sec:y_ikl_mis_draw}

The final step in drawing the missing data is to impute the unobserved intensities by drawing each intensity from its full conditional posterior distribution. The full conditional posterior is given by
\begin{eqnarray*}
p(y_{ikl}^{mis}\vert s_{ik}, R_{ikl},\bm{\Theta}) & \propto & \begin{cases}
\phi\left(\frac{y_{ikl} - \gamma_{ik}}{\sigma_{i}}\right) & \text{if }R_{ikl}=0 \text{ and }O_{ikl}=0\\
\phi\left(\frac{y_{ikl} - \gamma_{ik}}{\sigma_{i}}\right) g(y_{ikl}, \bm \eta) & \text{if }R_{ikl}=1 \text{ and }O_{ikl}=0
\end{cases} .
\end{eqnarray*}
For randomly censored states, where $R_{ikl}=0$, the missingness mechanism is ignorable given $R_{ikl}$. Hence, we simply draw $y_{ikl}^{mis}$ from its unconditional distribution,
\begin{eqnarray*}
y_{ikl}^{mis}\vert s_{ik}, I_{ikl}=0, R_{ikl}=0, \bm{\Theta} & \sim & Normal(\gamma_{ik},\sigma_{i}^{2}).
\end{eqnarray*}
%
The posterior distribution for intensity-censored states ($R_{ikl}=1$, $I_{ikl}=0$) is given by
\begin{eqnarray}
p(y_{ikl}^{mis}\vert O_{ikl}=0,R_{ikl}=1,\bm{\Theta}) & = & (\pi_{ik}^{int})^{-1}\frac{1}{\sigma_{i}} \phi\left(\frac{y_{ikl}^{mis}-\gamma_{ik}}{\sigma_{i}}\right) g(y_{ikl}^{mis}, \bm \eta) .
\label{proteomics:eq:intensity-censored-posterior-text}
\end{eqnarray}
As this is not a standard distribution, we draw from it using a rejection sampler. We use information from the adaptive quadrature of Section \ref{proteomics:sc:draw_s_ik_mis} to construct an efficient proposal distribution with little additional computation. Specifically, we propose from a location-scale transformation of a $t_{\nu}$-distribution,
\begin{eqnarray}
Y_{ikl}^{*} & \sim & \hat{y}_{ik}^{mis}+\hat{\sigma}_{ik}^{mis} \sqrt{\frac{\nu-2}{\nu}}\, t_{\nu},
\end{eqnarray}
which has expectation $\hat{y}_{ik}^{mis}$ and variance $(\hat{\sigma}_{ik}^{mis})^{2}$ for $\nu>2$. The rejection sampling algorithm requires bounding the ratio of the target density to the proposal density. For this sampler, this ratio has two local maxima, as shown in panels B and C of Figure \ref{proteomics:fig:Censored-Intensity-Rejection-Sampler-Outline-TEXT}.
The global maximum of the ratio can be either the first or the third root of the log-ratio's derivative, depending upon the particular relationship between $\bm \eta$, $\sigma^2_i$, and $\gamma_{ik}$. To reliably find the global maximum of the target-proposal density ratio, we use a pair of vectorized bisection searches to find the roots of the derivative of the log-ratio in two ranges: $(-\infty,\hat{y}_{ik}^{mis})$ and $(\hat{y}_{ik}^{mis},\infty)$. Once these roots are obtained, we simply select the one corresponding to the larger local maximum to compute acceptance probabilities. A graphical example of the rejection sampler is shown in panel D of Figure \ref{proteomics:fig:Censored-Intensity-Rejection-Sampler-Outline-TEXT}, and the sampling algorithm is detailed in Algorithm \ref{proteomics:alg:Censored-Intensity-Accept-Reject-Algorithm-TEXT}.

\ifx\nofigures\undefined
\begin{figure}
\includegraphics[width=1\textwidth]{figures/proteomics/figure_ycen_rejection_sampler.pdf}
\caption{Censored intensity rejection sampler example.\label{proteomics:fig:Censored-Intensity-Rejection-Sampler-Outline-TEXT}}
\end{figure}
\fi

\begin{algorithm}
\caption{$y_{ikl}^{mis}$ rejection sampler algorithm\label{proteomics:alg:Censored-Intensity-Accept-Reject-Algorithm-TEXT}}
Target density: $f^{mis}(y_{ikl}\vert\eta_{0},\eta_{1},\gamma_{ik},\sigma_{i}^{2})=(\pi_{ik}^{int})^{-1} \frac{1}{\sigma_{i}}\phi\left(\frac{y_{ikl}-\gamma_{ik}}{\sigma_{i}}\right) g(y_{ikl}, \bm \eta)$.

Proposal density: $f^*(y_{ikl}^{*}\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})=\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)\tilde{\sigma}_{ikl}}\left(1+\frac{(y_{ikl}^{*}-\tilde{\mu}_{ikl})^{2}}{\nu\tilde{\sigma}_{ikl}^{2}}\right)^{-\left(\frac{\nu+1}{2}\right)}$.
\begin{enumerate}
\item Compute the bound $c^*$:
\begin{enumerate}
\item Using bisection, find the roots of the first derivative of the log density ratio,\\
$z_{ikl}^{(1)}\equiv\underset{y_{ikl}\in(-\infty,\tilde{\mu}_{ikl})}{\arg\max}\log\left[\frac{f^{mis}(y_{ikl}\vert\eta_{0},\eta_{1},\gamma_{ik},\sigma_{i}^{2})}{f^*(y_{ikl}\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})}\right]$ and $z_{ikl}^{(2)}\equiv\underset{y_{ikl}\in(\tilde{\mu}_{ikl},\infty)}{\arg\max}\log\left[\frac{f^{mis}(y_{ikl}\vert\eta_{0},\eta_{1},\gamma_{ik},\sigma_{i}^{2})}{f^*(y_{ikl}\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})}\right]$.
\item Determine the maximum of the density ratios,\\
$c^* \equiv \max\left(\frac{f^{mis}(z_{ikl}^{(1)}\vert\eta_{0},\eta_{1},\gamma_{ik},\sigma_{i}^{2})}{f^*(z_{ikl}^{(1)}\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})},\frac{f^{mis}(z_{ikl}^{(2)}\vert\eta_{0},\eta_{1},\gamma_{ik},\sigma_{i}^{2})}{f^*(z_{ikl}^{(2)}\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})}\right)$.
\end{enumerate}
\item Generate $X\sim f^*(y_{ikl}^{*}\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})$ and $U\sim Unif(0,1)$.
\item Accept $y_{ikl}=X$ if $U\leq\frac{f^{mis}(X\vert\eta_{0},\eta_{1},\gamma_{ik},\sigma_{i}^{2})}{c^* f^*(X\vert\nu,\tilde{\mu}_{ikl},\tilde{\sigma}_{ikl})}$.
\item Otherwise, return to $2$.
\end{enumerate}
\end{algorithm}
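The sketch below mirrors this censored-intensity sampler. As before, the probit form of $g$ and the parameter values are assumptions made for illustration, and a dense grid replaces both the Halley mode search and the two bisection searches used to bound the density ratio.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, t as t_dist

rng = np.random.default_rng(3)

# Illustrative values (assumptions for this sketch); g is taken to be probit.
gamma_ik, sigma_i = 5.0, 1.2
eta0, eta1        = 16.0, -3.0
nu                = 5.0                        # degrees of freedom of the t proposal

def log_target_unnorm(y):                      # log of phi((y-gamma)/sigma)/sigma * g(y)
    return norm.logpdf(y, gamma_ik, sigma_i) + norm.logcdf(eta0 + eta1 * y)

# Mode and curvature of the integrand (grid search stands in for Halley's method).
grid = np.linspace(gamma_ik - 10, gamma_ik + 10, 8001)
y_hat = grid[np.argmax(log_target_unnorm(grid))]
h = 1e-4
v_hat = -(log_target_unnorm(y_hat + h) - 2*log_target_unnorm(y_hat)
          + log_target_unnorm(y_hat - h)) / h**2
sd_hat = 1.0 / np.sqrt(v_hat)

scale = sd_hat * np.sqrt((nu - 2.0) / nu)      # t proposal matching the Laplace variance
def log_proposal(y):
    return t_dist.logpdf(y, df=nu, loc=y_hat, scale=scale)

# Bound the (unnormalized) target/proposal ratio on the grid.
log_c = np.max(log_target_unnorm(grid) - log_proposal(grid))

def draw_y_mis():
    while True:
        y = t_dist.rvs(df=nu, loc=y_hat, scale=scale, random_state=rng)
        if np.log(rng.random()) <= log_target_unnorm(y) - log_proposal(y) - log_c:
            return y

draws = np.array([draw_y_mis() for _ in range(2000)])
print("mean censored intensity:", draws.mean().round(2), "(peptide mean:", gamma_ik, ")")
\end{verbatim}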
We cover the computational performance and validation of our MCMC sampler in Section \ref{proteomics:sec:mcmc_performance}. In Section \ref{proteomics:sec:frequentist_evaluations}, we evaluate the frequentist properties of our Bayesian estimates, including the coverage of our posterior intervals (\ref{proteomics:sec:coverage}) and the performance of the proposed method relative to standard estimators used for this class of experiments (\ref{proteomics:sec:compperf}).

%%% %%% %%% %%% %%% %%% %%% %%% %%%
\subsection{Design of experiments}
\label{proteomics:sec:doe}

We simulated a set of complex biological samples, spanning a broad range of abundances and protein-specific properties. For each such sample we simulated 54 proteins with abundances spanning 6 orders of magnitude, $\mu_i= 3, \ldots, 8$, yielding 9 proteins per abundance level. Within each abundance level, each simulated protein consists of $m_{ik}=20, 25, \ldots, 60$ peptides. With these properties fixed across replicates, we simulated 1,200 experiments from the model described in Section \ref{proteomics:sec:model}, using a probit link function for $g(\cdot, \bm \eta)$. These consisted of 400 replicates for each of the parameter settings provided in Table \ref{proteomics:table:sim_parameters}, each of which was based on estimates from an experiment with the given gradient length using the Sigma UPS2 standard.

\ifx\nofigures\undefined
\begin{table}
\begin{center}
\caption{Parameter settings for simulation studies. \label{proteomics:table:sim_parameters}}
\begin{tabular}{r|ccccccccc}
\hline
Gradient & $\alpha_\tau$ & $\beta_\tau$ & $\alpha_\sigma$ & $\beta_\sigma$ & $\pi^{rnd}$ & $\eta_0$ & $\eta_1$ & $\lambda$ & $r$ \\
\hline
90m & 6.3 & 2.1 & 8.5 & 3.5 & 0.20 & 16 & -3.0 & 0.84 & 1.9 \\
180m & 5.3 & 1.3 & 7.0 & 4.1 & 0.21 & 14 & -2.7 & 0.79 & 1.9 \\
360m & 6.2 & 1.8 & 7.0 & 2.5 & 0.30 & 20 & -4.3 & 0.62 & 1.6 \\
\hline
\end{tabular}
\end{center}
\end{table}
\fi

%Across these experiments, we fixed our top-level parameters to the following values: $\alpha_\tau=4$, $\beta_\tau=5$, $\alpha_\sigma=8$, $\beta_\sigma=4$, $\pi^{rnd}=0.1$, $\eta_0=-6.25$, $\eta_1=1.25$, $\lambda=0.1$, and $r=3$.
%% why these values? add a paragraph with some comments on the values; what observable properties they imply (e.g., 50% missing at .. and 10% missing at ..). how many state-level intensities (data points) on average in each data set?
%These values were selected based upon initial estimates from experiments using the Sigma UPS2 standard.
%They imply 50\% missingness at an intensity of $5.18$, 90\% missingness at an intensity of 3.34, and 15\% missingness at an intensity of 7.27.
%These also imply a mean of $1.33$ possible states per peptide with 99\% having between 1 and 4 possible states.
%The variance parameters yield $E[\sigma_i] = 1.39$ and $E[\tau_i] = 0.87$, with 95\% of their values within $(0.93, 1.90)$ and $(0.47, 1.32)$, respectively.
%These reflect a typical regime for both the missingness mechanism and the degree of variation observed in MS-MS data.

\subsection{MCMC performance and validation}
\label{proteomics:sec:mcmc_performance}

Our MCMC sampler produced high-quality draws from the target posterior at a low computational cost. Running 3,000 iterations for each replicate required an average of 267 seconds (0.088 seconds per iteration) with a standard deviation of 17.1 seconds. Of these 3,000 iterations, we discarded the first 1,000 as burn-in.
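We summarize the information content of the retained draws below using effective sample sizes, which convert autocorrelated MCMC output into an approximately equivalent number of independent draws. For reference, one common autocorrelation-based estimator of this quantity can be sketched as follows; the sketch is an illustration only (the array \texttt{draws} is a placeholder for the retained draws of a single parameter, and this is not necessarily the routine used to produce the reported values):
\begin{verbatim}
import numpy as np

def effective_sample_size(draws):
    # Estimate the ESS of a 1-D array of MCMC draws for one parameter,
    # using N / (1 + 2 * sum of autocorrelations), truncating the sum
    # at the first non-positive autocorrelation estimate.
    x = np.asarray(draws, dtype=float)
    n = len(x)
    x = x - x.mean()
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n)])
    rho = acov / acov[0]
    tail = 0.0
    for k in range(1, n):
        if rho[k] <= 0.0:
            break
        tail += rho[k]
    return n / (1.0 + 2.0 * tail)
\end{verbatim}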
The mean effective sample sizes for each top-level parameter based on the remaining 2,000 draws are provided in Table \ref{proteomics:tab:effective_sample_sizes}. For $\mu_i$ and $\zeta_i$, we also include the mean standard deviation of effective sample sizes across proteins.
%
\ifx\nofigures\undefined
\begin{table}
\centering
\caption{Average effective sample sizes by parameter. $\pm$ for $\mu_{i}$ and $\zeta_{i}$ indicates the average standard deviation of effective sample sizes across proteins. \label{proteomics:tab:effective_sample_sizes}}
\begin{tabular}{r|cccccccccc}
\hline
Gradient & $\zeta_{i}$ & $\eta$ & $\lambda$ & $\mu_i$ & $\pi^{rnd}$ & $r$ & $\beta_{\sigma}$ & $\beta_{\tau}$ & $\alpha_{\sigma}$ & $\alpha_{\tau}$\tabularnewline
\hline
90m & 341$\pm$188 & 287 & 773 & 331$\pm$191 & 191 & 1160 & 272 & 217 & 303 & 242 \tabularnewline
180m & 256$\pm$143 & 285 & 823 & 250$\pm$148 & 175 & 1197 & 337 & 176 & 379 & 191 \tabularnewline
360m & 421$\pm$208 & 179 & 1013 & 411$\pm$215 & 275 & 1338 & 346 & 222 & 392 & 250 \tabularnewline
\hline
\end{tabular}
\end{table}
\fi
%
We note that all mean effective sample sizes are greater than 100, indicating that even 2,000 iterations are sufficient to obtain Monte Carlo standard errors an order of magnitude below the posterior standard deviation of each parameter.

\subsection{Frequentist evaluations}
\label{proteomics:sec:frequentist_evaluations}

\subsubsection{Coverage analysis}
\label{proteomics:sec:coverage}

We first used our simulated replicates to evaluate the frequency coverage of our posterior intervals, focusing on those for $\zeta_i$. The results of these evaluations are summarized in Figure \ref{proteomics:fig:coverage_mu_hpd}; corresponding results for coverage vs. $m_i$ are similar and provided in Appendix \ref{ch:supp:proteomics}.
%
\ifx\nofigures\undefined
\begin{figure}
\centering
\includegraphics[width=\textwidth, page=1]{figures/proteomics/figures_coverage_sim}
\caption{Coverage of HPD intervals vs. $\mu_i$ by simulated gradient length and Bayesian level.
\label{proteomics:fig:coverage_mu_hpd}}
\end{figure}
\fi
%
We find that our posterior intervals are well-calibrated for the regimes of interest. Across all three simulated gradient lengths, we obtain 67\% coverage for our 68\% posterior intervals, 89\% mean coverage for our 90\% posterior intervals, 94\% mean coverage for our 95\% posterior intervals, and 98\% mean coverage for our 99\% posterior intervals. These demonstrate that, despite the complexity of our Bayesian method, it can provide inferences with frequentist guarantees.

%\ifx\nofigures\undefined
%\begin{figure}
%\centering
%\includegraphics[width=\textwidth, page=3]{figures/proteomics/figures_coverage_sim}
%\caption{Coverage of HPD intervals vs. $m_i$ by simulated gradient length and Bayesian level.
%\label{proteomics:fig:coverage_mi_hpd}}
%\end{figure}
%\fi

\subsubsection{Comparison with existing methods}
\label{proteomics:sec:compperf}

We compared the performance of our model-based abundance estimates with several standard methods for absolute protein quantitation. Many of these methods were originally developed for relative quantitation and are converted to absolute measures by appropriate rescaling. All methods used in this comparison are of the form
\begin{equation}
\hat{\zeta}_i = \log_{10}\left( \hat{T} \frac{10^{\hat{\mu}_i}}{\sum_i 10^{\hat{\mu}_i}} \right) .
\end{equation}
%
We compare the performance of our method to two classes of common methods: intensity-based estimators and count-based estimators.
We set $\hat{T} = \sum_i 10^{\mu_i}$ for all of these estimators. Among intensity-based estimators, we consider simple summed intensities (\ref{proteomics:eq:summed_intensity}) and median observed intensity (\ref{proteomics:eq:median_intensity}) \citep[e.g.,][]{Cox:2008uu}. We also evaluate two variants based on estimators from the spectral counting literature: mean observed intensity (\ref{proteomics:eq:mean_intensity}) and adjusted mean observed intensity (\ref{proteomics:eq:adj_mean_obs_intensity}):
\begin{eqnarray}
\hat \mu_{i}^{si} &=& \log_{10} \left( \sum_{kl} 10^{y^{obs}_{ikl}} \right), \label{proteomics:eq:summed_intensity} \\
\hat \mu_{i}^{med} &=& \text{Med}_{kl}(y_{ikl}), \label{proteomics:eq:median_intensity} \\
\hat \mu_{i}^{mi} &=& \log_{10}\left( \sum_{kl} 10^{y^{obs}_{ikl}}/m_i^{obs} \right), \label{proteomics:eq:mean_intensity} \\
\hat \mu_{i}^{ami} &=& \log_{10}\left( \sum_{kl} 10^{y^{obs}_{ikl}}/\sum_{k} s^{obs}_{ik} \right) . \label{proteomics:eq:adj_mean_obs_intensity}
\end{eqnarray}
%
In the above, $m_i$ is the (known) number of possible peptides that are generated by digesting protein $i$ using a particular enzyme. Another common approach, known as spectral counting, disregards the intensity measurements and uses counts of the observed states associated with each protein. A range of such methods has been developed for relative quantitation, including basic spectral counting \citep[e.g.,][]{Liu:2004hv}, average spectral counts \citep[e.g.,][]{Weiss:2010ik}, and the proportion of peptides identified (PPI) \citep{Rappsilber2002}. However, these are not directly suitable for absolute quantitation due to a lack of normalization. \citet{Ishihama:2005ir} defined an exponentially-modified protein abundance index (emPAI) to address these issues:
%
\begin{equation}
\hat \mu_{i}^{emPAI} = \log_{10}\left( 10^{\sum_k \mathbb{I}(s_{ik}^{obs} > 0) / m_i } - 1 \right) . \label{proteomics:eq:empai}
\end{equation}
This provides a commonly-used representative of the count-based class of methods.

We summarize the results of these comparisons in Figures \ref{proteomics:fig:compare_relative_error} and \ref{proteomics:fig:compare_bias}.
%
\ifx\nofigures\undefined
\begin{figure}
\centering
\includegraphics[width=\textwidth, page=3]{figures/proteomics/figures_relative_error_sim}
\caption{Relative efficiency of standard methods ($MSE_{Method} / MSE_{Model-based}$) vs. $\mu_i$ and $m_i$ by block (left and right columns, respectively). Relative MSE axis is logarithmic. Solid black line at 1 corresponds to the model-based method's efficiency, by definition.
\label{proteomics:fig:compare_relative_error}}
\end{figure}
\fi
%
Complete tabular results are available in Appendix \ref{ch:supp:proteomics}. From Figure \ref{proteomics:fig:compare_relative_error}, we see that our method reduces the overall MSE of abundance estimates by a factor of 5--10 relative to intensity-based estimates and by a factor of 6--150 relative to emPAI. Most intensity-based estimators exhibit consistent efficiency relative to ours, although the median intensity estimator's efficiency improves to nearly 1 as abundance increases. However, this estimator's efficiency is quite poor for low-abundance proteins, with approximately 20 times the model-based estimator's MSE. The efficiency of emPAI increases by an order of magnitude over the range of simulated abundances and decreases slightly with protein length $m_i$, demonstrating the value of intensity information in low-abundance regimes.
By carefully combining count and intensity information, we obtain consistent reductions in MSE relative to both classes of methods for all values of $\mu_i$ and $m_i$.

\ifx\nofigures\undefined
\begin{figure}
\centering
\includegraphics[width=\textwidth, page=3]{figures/proteomics/figures_bias_sim}
\caption{Bias of $\hat{\zeta}_i$ for standard methods and posterior mean from model vs. $\mu_i$ and $m_i$ by gradient length (left and right columns, respectively).
\label{proteomics:fig:compare_bias}}
\end{figure}
\fi
%
Turning to bias, we see that the model-based estimator provides lower bias than both classes of standard methods. Summed intensity shows a slight negative bias (between -0.7 and 0), while mean and adjusted mean observed intensity show a positive bias of similar magnitude. The median intensity estimator shows a far larger positive bias (nearly 2 orders of magnitude) at low abundances. The biases of all four intensity-based estimators diminish as abundance increases. The emPAI estimator shows a large positive bias that also decreases in magnitude with abundance. The bias of the summed intensity estimator diminishes from approximately -0.7 to 0 as protein length increases; the biases of our other estimators exhibit little sensitivity to protein length. Again, we see that our model-based approach enables us to combine count and intensity information into estimators that outperform both classes of standard methods.

%%% %%% %%% %%% %%% %%%
\section{Empirical results}
\label{proteomics:sec:empirical}

Having established our method's properties on simulated data, we turn to actual experimental data. We use a quantitative protein standard, described in Section \ref{proteomics:sec:data}, for these experiments. This standard provides known abundances for evaluation with a realistic set of proteins. In Section \ref{proteomics:sec:checkassumptions}, we use these experiments to check our model assumptions and demonstrate their implications for real data. In Section \ref{proteomics:sec:dataanalysis}, we assess the comparative performance of the proposed methods with respect to the standard methods presented in Section \ref{proteomics:sec:compperf}.

%%% %%% %%% %%% %%% %%%
\subsection{Data}
\label{proteomics:sec:data}

We conducted a set of LC/MS-MS experiments using the Universal Proteome Standard 2 (UPS2), a constructed biological sample that contains 48 human proteins. These proteins are present in six concentrations, ranging from 0.5 fM to 50,000 fM, with eight known proteins at each concentration. These proteins are selected to span a broad range of physical properties (size and hydrophobicity) within each concentration, removing a potential confounding factor while reflecting the actual variation among proteins in typical samples. The concentration specified for each protein in the standard was confirmed by the manufacturer using multiple methods, so we are comfortable taking the concentrations reported by the manufacturer as ground truth for subsequent analyses. With its high dynamic range, the standard provides a realistic and stringent test of our method.

We ran five LC/MS-MS experiments using this standard at three gradient lengths, 90, 180, and 360 minutes, with two, two, and one replicates per gradient length, respectively. Each gradient length implies a different set of parameters for the missing data mechanism. Longer gradients allow for the observation of a greater variety of MS2 spectra and are expected to decrease the degree of intensity-based missingness.
However, longer gradients can also reduce the dynamic range of observed intensities and require substantially greater effort from the experimenter. Our experiments allow us to explore these tradeoffs as we evaluate our method. The 90 minute gradients reflect the degree of censoring typically observed in analyses of more complex mixtures, while the 180 and 360 minute gradients provide a ``sanity check'' of our method's behavior in a setting with less missing data.

%%% %%% %%% %%% %%% %%%
\subsection{Exploratory analysis and model checking}
\label{proteomics:sec:checkassumptions}

Using the experimental data, we examined the distributions of intensities, identifications, and states to check the assumptions of our model. Figure \ref{proteomics:fig:EDA-Figure} summarizes these results.
%
\ifx\nofigures\undefined
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/proteomics/New_EDA_Multipanel_Figure}
\caption{Exploratory data analysis using the UPS2 quantitative protein standard.
\label{proteomics:fig:EDA-Figure}}
\end{figure}
\fi
%
Panel A of this figure shows the observed distribution of intensities by concentration. It is immediately clear from these histograms that the number of intensities observed and the median observed intensity increase strongly with the concentration. The distributions of observed intensities are also right-skewed for each concentration level, which is consistent with the combination of intensity-dependent censoring and an underlying log-normal distribution of intensities. Panels B, C, and D show the relationship between concentration and identification, median intensity, and $s_{ik}^{obs}$ in greater detail. The median observed intensity by protein increases with the protein's concentration, although the former has far less dynamic range than the latter (2 orders of magnitude vs. 5). Intensity-based censoring at the state level explains this discrepancy, as the lowest-intensity ions are preferentially removed from the sample.
%
We also compared the number of states that peptides were detected in across protein concentrations. The physical process that generates distinct peptide states is known to be independent of protein or peptide abundance, so the number of \emph{possible} peptide states must be invariant to the peptide's concentration. Since it is not possible to determine the number of possible states per peptide \emph{a priori}, we treat this value as an IID random variable in our model. However, we observe a strong relationship between the number of observed peptide states and concentration in the experimental data. This supports the assumption of intensity-dependent state-level censoring in our model.

The proportion of peptides identified per protein ranges from nearly 1 for the most abundant proteins to approximately 5\% for the least abundant. This is mostly attributable to selection for abundant ions within the MS-MS equipment, as panel E demonstrates. By design, the subset of peptides selected for fragmentation is determined by the instrument's software in real time as the sample is analyzed. While we focus on the subset of peptides that are both detected by the mass spectrometer and successfully identified, nearly all of the peptides in the sample can be quantified in terms of their total intensity\footnote{The software is able to quantify peptides that were not identified due to both the mass accuracy of the mass spectrometer and the effective separation of the sample by chromatography.
The software used to process the experimental data \citep{Cox:2008uu} is able to identify and integrate over all unique profiles in the m/z $\times$ intensity $\times$ retention time space, as shown in Figure \ref{proteomics:fig:Mass_Spec_Overview} C. This allows quantitation of the majority of peptides; however, since the majority of these are not identified, there is no way to map them to a specific protein. Hence, we exclude them from our model-based inference.}. This allows us to investigate the intensity-dependence of the peptide identification process in detail and validate our model assumptions. Comparing the distributions of identified and unidentified state-level intensities in Panel E shows that (1) the distribution of identified intensities stochastically dominates that of the unidentified intensities and (2) there is substantial overlap between these distributions. (1) is consistent with Theorem \ref{proteomics:thm:dominance}, while (2) is consistent with the assumption of a stochastic censoring mechanism.

%%% %%% %%% %%% %%% %%%
\subsection{Comparison of empirical results}
\label{proteomics:sec:dataanalysis}

We used the experiments described in Section \ref{proteomics:sec:data} to evaluate our method's performance against the standard methods described in Section \ref{proteomics:sec:compperf}. We summarize these results in Figure \ref{proteomics:fig:UPS2 results} and Table \ref{proteomics:tab:UPS2 results} and provide complete tables of results in Appendix \ref{ch:supp:proteomics}.
%
\ifx\nofigures\undefined
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/proteomics/figures_draft_ups2}
\caption{Inferred protein abundances from experiments with UPS2 standard.
\label{proteomics:fig:UPS2 results}}
\end{figure}
\fi
%
\ifx\nofigures\undefined
\begin{table}
\centering
\caption{Mean squared errors $\pm$ standard errors from experiments with UPS2 standard.
\label{proteomics:tab:UPS2 results}}
%\begin{tabular}{l|ccccc}
% \hline
%Gradient & Posterior mean & Summed intensity & Mean obs. intensity & Median intensity & emPAI \\
% \hline
%90m & $0.41 \pm 0.11$ & $1.06 \pm 0.12$ & $2.45 \pm 0.16$ & $3.18 \pm 0.13$ & $2.81 \pm 0.10$ \\
% 180m & $0.34 \pm 0.07$ & $0.76 \pm 0.14$ & $1.82 \pm 0.21$ & $2.26 \pm 0.21$ & $3.03 \pm 0.12$ \\
% 360m & $0.37 \pm 0.06$ & $0.65 \pm 0.07$ & $1.26 \pm 0.08$ & $1.87 \pm 0.06$ & $3.55 \pm 0.13$ \\
% \hline
%\end{tabular}
\begin{tabular}{l|ccc}
\hline
& \multicolumn{3}{c}{Gradient length} \\
\hline
Method & 90m & 180m & 360m \\
\hline
Posterior mean & $0.41 \pm 0.11$ & $0.34 \pm 0.07$ & $0.37 \pm 0.06$ \\
Summed intensity & $1.06 \pm 0.12$ & $0.76 \pm 0.14$ & $0.65 \pm 0.07$ \\
Mean obs. intensity & $2.45 \pm 0.16$ & $1.82 \pm 0.21$ & $1.26 \pm 0.08$ \\
Median intensity & $3.18 \pm 0.13$ & $2.26 \pm 0.21$ & $1.87 \pm 0.06$ \\
emPAI & $2.81 \pm 0.10$ & $3.03 \pm 0.12$ & $3.55 \pm 0.13$ \\
\hline
\end{tabular}
\end{table}
\fi
%
We restrict our attention here to the best-performing subset of the estimators presented in Section \ref{proteomics:sec:compperf}. We see from Table \ref{proteomics:tab:UPS2 results} that our method's performance is particularly strong on short gradients, providing a factor of 2.6 reduction in MSE relative to the best standard estimator (summed intensity) for the 90 minute gradients and a factor of 2.2 reduction for the 180 minute gradients. On the longest gradient (360 minutes), our method's MSE is a factor of 1.8 lower than that of the best standard estimator.
This behavior matches our expectations based on the relevant properties of the underlying experimental processes. Detailed modeling of the missing data mechanism yields the largest gains when the sample is least separated and there are fewer possible spectra---in the shortest gradients. In the longest gradient, intensity-based missingness diminishes in importance relative to other factors.

From Figure \ref{proteomics:fig:UPS2 results}, we note that our method's improvements are greatest for low-abundance proteins. All estimators considered provide reasonably good estimates of $\zeta_i$ for the highest-abundance proteins. However, most of the standard estimators exhibit a substantial positive bias (1-2 orders of magnitude) for the lowest-abundance proteins, particularly on short gradients. In contrast, our method provides consistently accurate estimates for low-abundance proteins across all gradient lengths. We discuss the implications of this capability for biological applications in Section \ref{proteomics:sec:remarks}.
%However, the mean intensity estimator appears to perform better on this measure.
%This estimator implicitly imputes zeros for all unobserved peptide intensities by using the number of predicted peptides $m_i$ as a denominator, correcting for some of the positive bias in observed intensities.
%This implicit correction is not sufficient to yield accurate results on the shorter gradients, though.

%%% %%% %%% %%% %%% %%%
\section{Concluding remarks}
\label{proteomics:sec:remarks}

We have presented a model-based approach to label-free absolute quantitation in LC/MS-MS proteomics experiments. As the results of Sections \ref{proteomics:sec:simulations} and \ref{proteomics:sec:empirical} show, accounting for non-ignorable missing data in the proteomics context improves estimates of absolute abundances. Our method also provides well-calibrated measures of uncertainty, as the results of Sections \ref{proteomics:sec:coverage} and \ref{proteomics:sec:dataanalysis} show. Below, we provide some broader context and particular insights on this class of methods and problems, focusing on the role of non-ignorable missing data in complex experiments, efficient computation with missing data of variable dimension, and the role of absolute quantitation in biological analyses. We also discuss extensions of the methods presented to the semi-supervised setting and to distributed computational environments.

\subsection{Modeling and inference}

The core of our model for LC-MS/MS proteomics data is the structure of our non-ignorable missing data mechanism. We explicitly account for the dependence of censoring probabilities on unobserved peptide intensities at the level of individual peptide states. This creates an additional complication, as the number of possible states per peptide is unknown. Thus, we must account for truncation at the state level in addition to censoring at the peptide level. However, we do know that the distribution of the number of possible peptide states is invariant to the peptide's concentration. Incorporating this information into our model allows us to extract information from the number of states observed per peptide ($s_{ik}^{obs}$). Working with state-level data provides fine-grained measures of missingness, which provide a great deal of information on low-abundance proteins when used properly.
For high-abundance proteins, the vast majority of peptide states are observed, providing the majority of the information on the underlying distribution of the number of states per peptide. Our model uses this information to improve inferences on the abundances of low-abundance proteins, leveraging all of the available intensity and state information.

Extracting all of this information requires a sophisticated approach to inference and computation, however. The presence of state-level censoring with an unknown number of states per peptide makes inference under our missing data model particularly challenging. The dimension of our missing data $\bm M$ varies across draws, requiring more specialized computational strategies than standard Metropolis-Hastings updates. We implemented an exact marginal draw from the conditional distribution of $\bm M$ given $\bm \Theta$ using a combination of unidimensional numerical integration and efficient rejection sampling, as detailed in Section \ref{proteomics:sec:missingDataDraw}. The reductions in autocorrelation provided by this marginal update more than offset the computational costs of rejection sampling and numerical approximation. This allows for efficient inference in the presence of missing data of unknown dimension, as the results of Section \ref{proteomics:sec:simulations} demonstrate. We believe that this technique can provide effective computational strategies for a range of non-ignorable missing data problems originating from complex experimental techniques.

\subsection{Applications}

Absolute quantitation provides a different view on biological processes than relative quantitation, enabling the study of new phenomena. Whereas relative quantitation can only measure the protein-by-protein differences in abundances across experiments, absolute quantitation provides estimates of the abundances of different proteins within a single population of cells. Such information is somewhat interesting for high-abundance proteins, but it promises the greatest insights from the study of low-abundance ones.

Reliable, accurate quantitation of low-abundance proteins also makes LC/MS-MS methods applicable to new classes of biological investigations. Two of the most promising areas are the study of mistranslation and transcription factors. Both involve proteins which appear at low concentrations but can have an outsized biological impact. In the case of mistranslation, the cell must spend a great deal of energy handling defective proteins; the rate of mistranslation represents the largest determinant of these costs. Transcription factors regulate the conversion of DNA into mRNA, providing one of the primary mechanisms of control and feedback for cellular functions. Our method allows for reliable estimates of these proteins' concentrations within large populations of cells, opening new areas for high-throughput research.

\subsection{Extensions}

We are developing two extensions of the proposed methodology with a focus on larger, shallower datasets. In relatively shallow datasets, few proteins are observed and, due to intensity-based censoring, the dynamic range of observed intensities is compressed. This makes estimation of the censoring parameters ($\eta$ and $\pi^{rnd}$) from observed data difficult, reducing the precision of our estimates. The quantitative protein standard described in Section \ref{proteomics:sec:data} provides a potential solution to this problem.
Such a standard can be spiked into the biological sample of interest and used as a calibration source. A semi-supervised version of the model presented in Section \ref{proteomics:sec:model} can use the known properties of this standard to enable reliable inferences about the censoring mechanism in shallow samples. As an additional benefit, this supervision could simplify the conversion of our estimates between the intensity and abundance scales.

To tackle larger datasets, including full-proteome quantitation for complex organisms, we are also developing a distributed variant of the MCMC algorithm presented here. The goal is to scale the sampler across computational clusters with minimal communication within iterations. The conditional independence structure of the model presented in Section \ref{proteomics:sec:model} makes such distribution both appealing and feasible.

We see a rich range of opportunities for statistical research in this area. The field of MS-MS proteomics is young, and it can provide a unique view on the details of biological processes. Extracting this technology's full value requires care, deep subject-matter knowledge, and sophisticated statistical techniques, but the rewards are great. We hope that more statisticians will enter this field and continue to build strong methodological foundations to advance biological research.

%%% %%% %%% %%% %%% %%% %%% %%% %%%
\section{Acknowledgments}

This work was partially supported by the National Science Foundation under grants no.\ DMS-0907009 and no.\ IIS-1017967, and by the National Institutes of Health under grant no.\ R01 GM-096193, all to Harvard University. Additional funding was provided by the Harvard Medical School's Milton Fund.
%%
%The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Institute of Health, the Army Research Office, the National Science Foundation, or the U.S. government.

%%% %%% %%% %%% %%% %%% %%% %%% %%%
%\appendix
\section{Proof of Theorem \ref{proteomics:thm:dominance}}
\label{proteomics:sec:proof1}

\begin{proof}
Define $Y$ as a random variable with a distribution given by
%
\begin{equation} \nonumber
\dd F_Y(y) = \frac{1}{w} g(y) \dd F_X(y)
\end{equation}
%
where $w \equiv \int_{\Reals} g(t) \dd F_X(t) \equiv P(Z=1)$. We assume we are in a non-trivial case, so $w \in (0,1)$.
%
Then, define $h \equiv g^{-1}(w)$. Consider the difference $D(t) \equiv F_X(t) - F_Y(t)$. This difference is equivalent to
%
\begin{equation} \nonumber
D(t) = \int_{-\infty}^{t} \left(1 - \frac{1}{w}g(x) \right) \dd F_X(x)
\end{equation}
%
We now consider two cases:
\begin{enumerate}
\item[(1)] If $t \leq h$, then we have $g(x) \leq w$ for all $x \in (-\infty, h]$ by monotonicity. Thus, over the same range, $\left(1 - \frac{1}{w}g(x) \right) \geq 0$, so $D(t) \geq 0$.
\item[(2)] If $t > h$, then
%
\begin{equation} \nonumber
D(t) = \int_{-\infty}^h \left(1 - \frac{1}{w}g(x) \right) \dd F_X(x) + \int_{h}^t \left(1 - \frac{1}{w}g(x) \right) \dd F_X(x)
\end{equation}
%
Now, we have $D(h) \geq 0$ from case (1) and we know $D(t) \rightarrow 0$ as $t \rightarrow \infty$, as it is the difference of CDFs. By monotonicity, for $x \geq h$, $g(x) \geq w$, so $1 - \frac{1}{w}g(x) \leq 0$ for $x \in [h,t]$. Thus, as $\dd F_X(x) \geq 0$, we know that $\int_{h}^t \left(1 - \frac{1}{w}g(x)\right) \dd F_X(x)$ is (weakly) monotone decreasing in $t$ and $\leq 0$.
Thus, $D(t)$ increases monotonically to $D(h)$, then decreases monotonically to zero from $h$ as $t \rightarrow \infty$. \end{enumerate} % Therefore, for any value of $t$, we have shown that $D(t) \geq 0$. Thus, $Y$ stochastically dominates $X$. \end{proof}
{ "alphanum_fraction": 0.7552266194, "avg_line_length": 77.3368022706, "ext": "tex", "hexsha": "f524dc13d2819f3b77741b9d740f86f8b5f63e74", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ae2e603d8109fa42b841cff632bf9350d20ff83a", "max_forks_repo_licenses": [ "Unlicense", "MIT" ], "max_forks_repo_name": "awblocker/HarvardDissertation", "max_forks_repo_path": "chapters/chapter-proteomics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ae2e603d8109fa42b841cff632bf9350d20ff83a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense", "MIT" ], "max_issues_repo_name": "awblocker/HarvardDissertation", "max_issues_repo_path": "chapters/chapter-proteomics.tex", "max_line_length": 739, "max_stars_count": null, "max_stars_repo_head_hexsha": "ae2e603d8109fa42b841cff632bf9350d20ff83a", "max_stars_repo_licenses": [ "Unlicense", "MIT" ], "max_stars_repo_name": "awblocker/HarvardDissertation", "max_stars_repo_path": "chapters/chapter-proteomics.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 22474, "size": 81745 }
\section{Recursion}

% \begin{frame}
% \frametitle{Housekeeping}
% \begin{itemize}[<+->]
% \item Explain: ``if $\tyjudge{}{e}{\tau}$ and $\jtrans{e}{e'}$, then $\tyjudge{}{e'}{\tau}$''.
% \begin{itemize}
% \item If $e$ is well-typed and $e$ reduces to $e'$, then $e'$ is
% also well-typed (and both $e$ and $e'$ have the same type).
% \item This statement is also called \emph{type preservation} (cf.\ type safety, slide~\ref{fr:ty-safety}).
% \end{itemize}
% \item Whiteboard pictures on Moodle $\implies$ ask on Piazza.
% \item \LaTeX\ in the lectures $\implies$ too risky!
% \end{itemize}
% \end{frame}

\begin{frame}
\frametitle{Introduction}
\begin{center}
{\large
What is the \emph{expressive power} of the languages we have studied so far?
}
\end{center}
\pause
\bigskip
\begin{itemize}[<+->]
\item We can specify the expressivity of a programming language by considering the set of computable functions it can represent.
%
\item Most programming languages are \emph{universal}, i.e., Turing complete (meaning that they can be used to simulate a Turing machine).
%
\item Hence, expressivity of programming languages is generally concerned with questions such as: can construct $C$ in language $L$ be simulated in language $L'$?
%
\item For more on expressivity of programming languages beyond computational power, see, e.g., \emph{``\href{https://www.sciencedirect.com/science/article/pii/016764239190036W}{On the expressive power of programming languages}''}, by Matthias Felleisen.
\end{itemize}
\end{frame}

% \note{In particular, what sort of recursive programs (or programs that
% ``loop'' infinitely) can we write in the languages we have seen thus far?}

\begin{frame}
\frametitle{Language \lggeT}
%
\[
\begin{array}{llclll}
\TYPES & \tau & \Coloneqq & \tynat & \tynat & \text{natural} \\
&&& \tyarr{\tau_1}{\tau_2} & \tau_1 \rightarrow \tau_2 & \text{function} \\
\\ \pause
\EXPS & e & \Coloneqq & \var{x} & \var{x} & \text{variable} \\
&&& \ez & \ez & \text{zero} \\
&&& \esucc{e} & \esucc{e} & \text{successor} \\
&&& \elam{\tau}{x}{e} & \clam{\tau}{x}{e} & \text{abstraction} \\
&&& \eapp{e_1}{e_2} & \capp{e_1}{e_2} & \text{application} \\
&&& \eiter{e_0}{x}{e_1}{e} & \multicolumn{2}{c}{\citer{e_0}{x}{e_1}{e}}
\end{array}
\]
\pause
We write $\bnum{3}$ for $\esucc{\esucc{\esucc{\ez}}}$.
\end{frame}

\begin{frame}
\frametitle{Statics} % (same as $\lggeEF$)
\[
\tyrule
{\ttyrulename{var}}
{\,}
{\tyjudge{\Gamma_1, \var{x}: \tau, \Gamma_2}{\var{x}}{\tau} }
\]
\pause
\bigskip
\[
\tyrule
{\ttyrulename{nat}}
{\,}
{\tyjudge{\Gamma}{\ez}{\tynat}}
\qquad\qquad\qquad
\tyrule
{\ttyrulename{num}}
{\tyjudge{\Gamma}{{e}}{\tynat{}}}
{\tyjudge{\Gamma}{\esucc{e}}{\tynat{}}}
\]
\end{frame}

\begin{frame}
\frametitle{Statics}
\[
\tyrule
{\ttyrulename{lam}}
{\tyjudge{\Gamma, \var{x} : \tau_1}{e}{\tau_2}}
{\tyjudge{\Gamma}{\elam{\tau_1}{x}{e}}{\tyarr{\tau_1}{\tau_2}}}
\]
\pause
\[
\tyrule
{\ttyrulename{ap}}
{
\tyjudge{\Gamma}{e_1}{\tyarr{\tau_1}{\tau}}
\and
\tyjudge{\Gamma}{e_2}{\tau_1}
}
{
\tyjudge{\Gamma}{\eapp{e_1}{e_2}}{\tau}
}
\]
\pause
\bigskip
\bigskip
NB: these rules are the same as for Language $\lggeEF$.
\end{frame}

\begin{frame}
\frametitle{Statics}
\[
\tyrule
{\ttyrulename{ite}}
{
\tyjudge{\Gamma}{e}{\tynat}
\and
\tyjudge{\Gamma}{e_0}{\tau}
\and
\tyjudge{\Gamma, \var{x} : \tau}{e_1}{\tau}
}
{
\tyjudge{\Gamma}{\eiter{e_0}{x}{e_1}{e}}{\tau}
}
\]
\bigskip
\pause
Why do $e_0$ and $e_1$ need to have the same type?
\end{frame} \begin{frame} \frametitle{Dynamics} \emph{Eager} semantics: \[ \semrule{\tsemrulename{z}} {\,} {\valjudge{\ez}} \qquad \qquad \qquad \semrule{\tsemrulename{s}} {\valjudge{e}} {\valjudge{\esucc{e}}} \] \[ \semrule {\tsemrulename{lam}} {\,} {\valjudge{\elam{\tau}{x}{e}}} \] \end{frame} \begin{frame} \frametitle{Dynamics} % (same as $\lggeEF$) \[ \semrule {\tsemrulename{ss}} {\jtrans{e}{e'}} { \jtrans{\esucc{e}}{\esucc{e'}} } \] \[ \semrule {\tsemrulename{ap}} {\jtrans{e_1}{e'_1}} { \jtrans{\eapp{e_1}{e_2}}{\eapp{e'_1}{e_2}} } \] \end{frame} \begin{frame} \frametitle{Dynamics} % (same as $\lggeEF$) \[ \semrule {\tsemrulename{lan}} {\valjudge{e_1} \and \jtrans{e_2}{e'_2} } {\jtrans{\eapp{e_1}{e_2}}{\eapp{e_1}{e'_2}}} \] \[ \semrule {\tsemrulename{lav}} {\valjudge{e_2}} {\jtrans{\eapp{\elam{\tau}{x}{e_1}}{e_2}}{\subs{e_1}{e_2}{x}}} \] \end{frame} \begin{frame} \frametitle{Dynamics (iterator)} \begin{itemize}[<+->] \item Evaluate the parameter \[ \semrule {\tsemrulename{rin}} {\jtrans{e}{e'}} { \jtrans{\eiter{e_0}{x}{e_1}{e}}{\eiter{e_0}{x}{e_1}{e'}} } \] % \item Case if zero \[ \semrule {\tsemrulename{r0}} {\,} { \jtrans{\eiter{e_0}{x}{e_1}{\ez}}{e_0} } \] \item Case if strictly positive \[ \semrule {\tsemrulename{rs}} {\valjudge{{e}}} { \jtrans{\eiter{e_0}{x}{e_1}{\esucc{e}}}{\subs{e_1}{{\eiter{e_0}{x}{e_1}{{e}}}}{x}} } \] \end{itemize} NB: $\eiter{e_0}{x}{e_1}{e}$ stands for ${\citer{e_0}{x}{e_1}{e}}$ \end{frame} \begin{frame} \frametitle{Alpha equivalence ($\aeq$)} \begin{itemize}[<+->] \item Two programs are \emph{$\alpha$-equivalent} if they are identical up to the choice of \emph{bound} variables. % \pause \[ \elam{\tynat}{x}{\esucc{\var{x}}} \aeq \elam{\tynat}{y}{\esucc{\var{y}}} \] \pause \[ \esucc{\var{x}} \naeq \esucc{\var{y}} \] \end{itemize} \end{frame} \begin{frame} \frametitle{Alpha equivalence ($\aeq$)} \begin{itemize} \item \emph{Bound} variables can be renamed \emph{consistently} without changing the meaning of a program. \[ \begin{array}{c} \elam{\tyarr{\tynat}{\tynat}}{x_1}{ \elam{\tynat}{x_2}{ \eapp {\var{x_1}} {\var{x_2}} } } \\ \pause \qquad \aeq \elam{\tyarr{\tynat}{\tynat}}{y_1}{ \elam{\tynat}{y_2}{ \eapp {\var{y_1}} {\var{y_2}} } } \\ \pause \qquad \naeq \elam{\tyarr{\tynat}{\tynat}}{y_1}{ \elam{\tynat}{y_2}{ \eapp {\underline{\var{y_2}}} {\underline{\var{y_1}}} } } \end{array} \] \pause \item It's helpful to rename bound variables so that they are all \emph{distinct}, e.g., \[ \begin{array}{c} \eapp {\elam{\tau}{x}{{\var{x}}}} {\elam{\tau'}{x}{\esucc{\var{x}}}} \\ \qquad \pause \aeq \eapp {\elam{\tau}{x}{{\var{x}}}} {\elam{\tau'}{y}{\esucc{\var{y}}}} \end{array} \] \end{itemize} \end{frame} \begin{frame} \frametitle{Substitution} {\Large \[ \subs{e_1}{e_2}{x} \] } \pause \begin{itemize}[<+->] \item Substitution is the process of ``plugging in'' an object ($e_2$) for the variable ($\var{x}$) in a program ($e_1$). % \item Only the \emph{free} occurrences of variable $\var{x}$ should be replaced! 
\item Example: if
\[
e_1 = \eapp{\elam{\tynat}{x}{\esucc{\var{x}}}}{\underline{\var{x}}}
\]
\pause
then
\[
\subs{e_1}{\ez}{x} = \pause \eapp{\elam{\tynat}{x}{\esucc{\var{x}}}}{\ez}
\]
\end{itemize}
%
\pause
NB: $ \eapp{\elam{\tynat}{x}{\esucc{\var{x}}}}{\underline{\var{x}}} \aeq \eapp{\elam{\tynat}{y}{\esucc{\var{y}}}}{\underline{\var{x}}} $
\end{frame}

\begin{frame}{Exercises}
\label{fr:define-t}
%
In $\lggeT$, implement the following functions:
\begin{itemize}[<+->]
\item Successor ($\tynat{} \rightarrow \tynat$): $\text{succ} \defi \clam{\tynat{}}{x}{{\color{red} ??}}$
\item Doubling ($\tynat{} \rightarrow \tynat$): {\color{red} ??}
\item Addition ($\tynat{} \rightarrow \tynat \rightarrow \tynat$): $\text{add} \defi \clam{\tynat{}}{x}{\clam{\tynat{}}{y}{ {\color{red} ??} }}$
\item Multiplication ($\tynat{} \rightarrow \tynat \rightarrow \tynat$): {\color{red} ??}
%
%
\end{itemize}
% \pause
% \bigskip
% Using complex data types (see further chapters):
% \begin{itemize}
% \item Predecessor ($\tynat{} \rightarrow \tynat$): {\color{red} ??} (add product types to $\lggeT$)
% \item Subtraction ($\tynat{} \rightarrow \tynat \rightarrow \tynat$): {\color{red} ??}
% \end{itemize}
\end{frame}

% Add: \x \y . iter x { z -> y | v -> s(v)}
% Doubling: \x . iter x {z -> z | v -> s(s(v))}
% Multiply: \x \y . iter x {z -> z | v -> add(y,v)}
% Predecessor: \x . pl(iter x {z -> <z,z> | v -> <pr . v, s(pr . v)>})
% Subtraction: \x \y . iter x {z -> y | v -> pred(v)}

% \begin{frame}
% \frametitle{Housekeeping}
% \begin{itemize}[<+->]
% \item What does the following term evaluate to
% \[
% \eapp{\elam{\tynat}{x}{\var{x}}}{\esucc{{\var{x}}}}
% \]
% \item Is language $\lggeT$ using Peano numbers? Yes, $\lggeT$ is
% based on Godel's system T (1958), which includes Peano
% axioms/numbers.
% \end{itemize}
% \end{frame}

\begin{frame}
\frametitle{Expressivity of $\lggeT$}
\begin{itemize}
\item Language $\lggeT$ provides a mechanism called \emph{primitive recursion} (which we used to define arithmetic operations).
%
\item Primitive recursion can characterise the \emph{inductive} nature of the natural numbers.
%
\item In $\lggeT$, we can only define \emph{total recursive functions} on the natural numbers (functions that \emph{always} return something).
%
\item This inductive nature implies that every program comes with a proof of its \emph{termination} (we always ``bottom out'' using the induction principle).
%
\item To obtain Turing completeness, we need a \emph{partial recursive} language (where some programs may not terminate).
\end{itemize}
\end{frame}

% \separator{Examinable material for 2018 \emph{ends} here.}

\begin{frame}
\frametitle{Y-Combinator}
An option to express \emph{general recursion}\footnote{i.e., allowing for partial functions.} is to introduce the $Y$ combinator.
\bigskip
Its type is
\[
Y : (\tau \rightarrow \tau) \rightarrow \tau
\]
\pause
Additional semantic rule:
\[
\semrule
{\tsemrulename{y}}
{\,}
{ \jtrans{Y f}{ f(Y f)} }
\]
\pause
\bigskip
NB: in the \emph{untyped} lambda-calculus, it can be expressed as
\[
Y \defi \lambda f .\ (\lambda x . f (x \ x)) \ (\lambda x . f (x \ x))
\]
Exercise: try to find a type for that term!
\end{frame}

\separator{Examinable material for 2020 ends here.}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End:

% LocalWords:  computable expressivity Examinable
{ "alphanum_fraction": 0.5651747504, "avg_line_length": 21.7786407767, "ext": "tex", "hexsha": "8a0605930c130c0f6105d00b1a43f46417df602a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "40bc888c7389ae9554bfddc90d22b078c6e5d5d6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "julien-lange/CO663-slides", "max_forks_repo_path": "slides/language-t.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "40bc888c7389ae9554bfddc90d22b078c6e5d5d6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "julien-lange/CO663-slides", "max_issues_repo_path": "slides/language-t.tex", "max_line_length": 120, "max_stars_count": null, "max_stars_repo_head_hexsha": "40bc888c7389ae9554bfddc90d22b078c6e5d5d6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "julien-lange/CO663-slides", "max_stars_repo_path": "slides/language-t.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4254, "size": 11216 }
\tolerance=10000 % magical packages \documentclass[12pt, openany]{report} % defines the type of document this is \usepackage{geometry} % needed for some reason \usepackage[utf8]{inputenc} % dont care \usepackage[english]{babel} % multilingual support? not sure why its here % less magical packages \usepackage{ODUthesis} % custom template specific to ODU thesis. \usepackage{afterpage} \usepackage{graphicx} % allows images to be included in latex \usepackage{amsmath} % makes big math fractions display less shitilly \usepackage{enumitem} % list formating doohickey \usepackage{tabto} % allows the use of tabing by distance \usepackage{url} % allows embedded URLs to link \usepackage{indentfirst} % indents first line of a section (style guide) \usepackage{hyperref} % allows use of hyperlinks \usepackage{gensymb} % allows use of degree symbol with \degree \usepackage{longtable} % formats tables of multiple pages \usepackage{float} % allows use of figure{}[H] \usepackage{subcaption} % allows mutliple figures for one caption \usepackage{mathtools} % mathtools including multi lined equations \usepackage[all]{nowidow} % prevents orphaned headers \graphicspath{{figs/}} % defines reference path for images and figures \usepackage[font=small, labelfont=bf]{caption} % formats figure captions \usepackage[scientific-notation=true]{siunitx} \begin{document} %========================================================================= \title{Experimental Investigation of Turbulent Structures and Non-Equilibrium Effects in Axial Wake Vortices Via Particle Image Velocimetry} \author{Jeffry William Ely} \principaladviser{Robert Ash} \member{Drew Landman} \member{Colin Britcher} \degrees{B.S.M.E. 2011, Old Dominion University} \dept{Aerospace Engineering} \submitdate{May 2016} \vita{Bachelors of Science, Mechanical Engineering, Old Dominion University, December 2011} \phdfalse %produces language on title page for Masters Thesis %\copyrightfalse %suppresses copyright notice %\figurespagefalse %suppresses List of Figures %\tablespagefalse %suppresses List of Tables %========================================================================= \iftrue % set to \iftrue to print to the next \fi, \iffalse not to \abstract{ Vortices are a common phenomenon in fluid flows that arise as kinetic energy dissipates into heat via viscous interaction. They arise naturally at large scales in the form of dust devils, tornadoes, and as a counter-rotating vortex pair in the wake of aircraft. It is important to understand the conditions leading to their formation, their duration, and their dissipation in order to forecast or prevent undesirable effects. Among these deleterious effects is a decrease in safety of aircraft operations in the wake of other aircraft, an extremely common situation at airports around the world. A large number of mathematical models and experimental data sets exists to help explain various aspects of axial wake vortex behavior, but current models fail to explain why many vortices remain tightly wound with slowly decaying azimuthal velocities about their cores the length of time for which they have been observed. The current study builds upon the theoretical work of Ash, Zardadkhan and Zuckerwar \cite{ash2011}, and tests specific attributes of a turbulent axial vortex for agreement with non-equilibrium pressure relaxation theory. This theory provides an exact solution to a modified version of the Navier-Stokes equations for an axial vortex, with a resulting velocity model that agrees with leading empirical models. 
In the present investigation, axial wake vortices were created with a bi-wing vortex generator in a low speed wind tunnel, at free stream velocities between 15 and 33 $m/s$. Stereo particle image velocimetry was employed to map three dimensional velocity vectors at positions between 5.4 and 10 chord lengths downstream of the vortex generator, and at a sampling rate of 1Hz for 200 seconds. A Reynolds time averaging approach was employed to express instantaneous velocity measurements as localized mean and fluctuating components and to study turbulent structures within the vortices. Periodicity in turbulent energy and Reynolds stress structures was observed by comparing vortex velocity fields normalized by age, based on free stream velocity and downstream distance. The cores of these vortices appeared to periodically ingest turbulent energy and compress it into approximately one half of local core radii. The cyclical ingestion of turbulence was shown to have the effect of tightening the core radius in the wake of the vortex generator center body. If this phenomenon persists for the life of the vortex, it could provide an explanation for the longevity of the azimuthal velocity component, as observed in natural wake vortices. } \beforepreface \prefacesection{Acknowledgments} For my lovely bride-to-be, who planned our wedding by herself while I remained occupied with this thing. Special thanks to Dr. Robert Ash, who I ignored for two years while off adventuring, but was enthusiastic to pick things right back up where they were left upon my return. Additional thanks to my committee members, Dr. Drew Landman and Dr. Colin Britcher, for their support. %\prefacesection{Notes} %Good practice for preserving the code used to perform this research was %attempted by use of git version control software and a git repository. All %%%code %used to perform data analysis and visualization, along with the raw three %dimensional vector dataset can be downloaded from %\url{https://github.com/Jwely/pivpr}. 
%\prefacesection{Nomenclature} %\input{docs/nomenclature} \fi \afterpreface %========================================================================= \iftrue % set to \iftrue to print to the next \fi, \iffalse not to \chapter{Introduction} \input{docs/intro/intro_intro} \input{docs/intro/vortices} \input{docs/intro/piv_fundamentals} \fi %========================================================================= \iftrue % set to \iftrue to print to the next \fi, \iffalse not to \chapter{Experimental Setup} \input{docs/experiment_setup/setup_intro} \input{tables/test_matrix_table} \input{docs/experiment_setup/lswt} \input{docs/experiment_setup/vortex_generator} \input{docs/experiment_setup/piv_overview} \input{docs/experiment_setup/piv_acquisition} \input{docs/experiment_setup/piv_processing} \fi %========================================================================= \iftrue % set to \iftrue to print to the next \fi, \iffalse not to \chapter{Experimental Results} \label{chapter:results} \input{docs/experiment_results/discussion} \input{docs/experiment_results/piv_uncertainty} \fi %========================================================================= \iftrue % set to \iftrue to print to the next \fi, \iffalse not to \chapter{Conclusions} \input{docs/conclusions} \fi %========================================================================= % constructs the bibliography \bibliographystyle{apalike} \bibliography{bib/vortices,bib/other,bib/particles,bib/piv} %This command adds some extra space in the table of contents \addtocontents{toc}{\vspace*{12pt}} % adds an entry for the bibliography in the table of contents \addcontentsline{toc}{chapter}{BIBLIOGRAPHY} %========================================================================= \appendix \include{docs/appendices/uncertainty_appendices} %\include{docs/appendices/piv_data_appendices} % starts blank page \newpage % initiates printing of the vita page \vitapage \end{document}
{ "alphanum_fraction": 0.7306191467, "avg_line_length": 45.7619047619, "ext": "tex", "hexsha": "71b011aad217b15d757870af2485050475dfa369", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Jwely/thesis-pivpr", "max_forks_repo_path": "texdocs/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Jwely/thesis-pivpr", "max_issues_repo_path": "texdocs/main.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "f07a95610cec2a275f9edb2c15cf0f2dfb99a967", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Jwely/thesis-pivpr", "max_stars_repo_path": "texdocs/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1830, "size": 7688 }
\subsection{Web Connection}

Our application's interface with the internet needs to serve three purposes:

\begin{enumerate}
\item Retrieve pseudo-random images to use for embedding a message.
\item Upload images to a public online forum anonymously without altering the image.
\item Retrieve specific images from a public online forum anonymously without altering the image.
\end{enumerate}

After some research, we discovered a service that could satisfy the first purpose. The Cat API provides random cat images retrieved from Tumblr. The API is reliable and returns content-neutral images big enough to store information. A downside to this approach is that statistical steganalysis can be done to compare the uploaded images to the originals. The possibility of using local unique images is out of the scope of our project.

The next two requirements are fulfilled by Sendspace.com. This website provides an external API that allows our application to interface with it easily and to upload and download full-size images anonymously.
%The only significant issue that we encountered while using these online resources was with Sendspace.
% At some point, Sendspace updated their API to require a key. We were not aware of this update and our application was completely broken until we found the bug and fixed it.

The online resources work together as follows (a simplified sketch of this round trip appears after the list):

\begin{enumerate}
\item The Cat API retrieves a random cat image for a message to be embedded into.
\item Embed the data into the image.
\item Upload the encoded image to Sendspace. Sendspace returns a download URL and a delete URL.
\item Download the encoded image using the download URL.
\item Decode the image to restore the original data.
%This allows for the retrieval of the original message.
\end{enumerate}
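To make the round trip concrete, the sketch below shows how the retrieval, upload, and download steps could be scripted with the Python \texttt{requests} library. It is illustrative only: the endpoint URLs, form-field names, and JSON keys are placeholders rather than the actual Cat API or Sendspace interfaces (the real Sendspace API additionally requires an API key and session setup), and the embedding and decoding steps are performed elsewhere in the application.
\begin{verbatim}
import requests

# Illustrative placeholder endpoints: the real Cat API and Sendspace API
# use different URLs, parameters, and (for Sendspace) an API key/session.
CAT_API_URL = "https://example.org/cat-api/random-image"
UPLOAD_URL = "https://example.org/filehost/upload"

def fetch_cover_image():
    # Step 1: retrieve a pseudo-random cover image as raw bytes.
    return requests.get(CAT_API_URL, timeout=30).content

def upload_image(image_bytes):
    # Step 3: upload the encoded image anonymously; the host's JSON reply
    # is assumed to contain a download URL for later retrieval.
    files = {"file": ("cover.png", image_bytes)}
    response = requests.post(UPLOAD_URL, files=files, timeout=60)
    return response.json()["download_url"]

def download_image(download_url):
    # Step 4: retrieve the encoded image byte-for-byte for decoding.
    return requests.get(download_url, timeout=60).content
\end{verbatim}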
{ "alphanum_fraction": 0.8109028961, "avg_line_length": 76.5652173913, "ext": "tex", "hexsha": "eef8bd336bc33a25fa2d51454e601297289442f4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "83dfc899aad8a6687e827e102ba19fce6a207e7f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gorhack/webStegFS_project", "max_forks_repo_path": "paper/webConnectionPart.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "83dfc899aad8a6687e827e102ba19fce6a207e7f", "max_issues_repo_issues_event_max_datetime": "2022-03-11T23:12:57.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-22T17:13:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gorhack/webStegFS_project", "max_issues_repo_path": "paper/webConnectionPart.tex", "max_line_length": 639, "max_stars_count": null, "max_stars_repo_head_hexsha": "83dfc899aad8a6687e827e102ba19fce6a207e7f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gorhack/webStegFS_project", "max_stars_repo_path": "paper/webConnectionPart.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 341, "size": 1761 }
\section{Results}

The dependent variables were selection time and the number of clutches.

\subsection{Selection Time}

Selection time is the time between each subsequent target selection. This means that the first target the participant selects for each Transfer Function is not included in the results. Outliers for trials over 20 seconds were removed from the data.

\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{selection-time}
\caption{Selection Time, Transfer Function x Distance Interaction}
\label{fig:selection-time}
\end{figure}

Repeated measures analysis of variance, using Mauchly's Test for Sphericity, showed that the order of Transfer Functions presented to the participant had no significant effect on movement time, so there was no significant asymmetric skill transfer; this indicated that the within-subjects design was appropriate.

There was a significant main effect for the Transfer Function on selection time (F$_{2,12}$ = 10.3, p < 0.005), with the Constant and Acceleration outperforming the RubberEdge Transfer Function by 21.8\%; this can be seen across all distances in Figure \ref{fig:selection-time}. There was also a significant main effect for Distance (F$_{2,12}$ = 30.8, p < 0.0001) on selection time. The interaction between Transfer Function x Distance was not significant (F$_{2,24}$ = 0.70, p = 0.59). Therefore we cannot accept \textbf{H2}.

Pair-wise comparisons using Bonferroni found that there was a significant difference between RubberEdge and the Constant and Acceleration Transfer Functions. Another pair-wise test also found a significant difference between each distance (D$_S$, D$_M$, D$_L$).

\subsection{Fitts' Law Analysis}

\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{fitts}
\caption{A comparison between the Index of difficulty and mean time, between different transfer functions.}
\label{fig:fitts}
\end{figure}

A significant difference for Fitts' law between distances was not found (Figure \ref{fig:fitts}), whereas the previous study\cite{Casiez2007RubberEdge} found that distance had a significant effect for indexes of the same difficulty. The results in this study likely differ because the distances used in the previous study were much larger (172, 344, and 688 mm) compared to 79, 118, and 157 mm in this study. The reason that smaller distances were chosen is that they align more closely with distances that users are likely to encounter on a laptop device. Specifically, 157 mm was around the limit for the distance from the centre point of the laptop screen to the upper or lower edge; see section \ref{section:apparatus} for details about the laptop used.
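The regression values reported in Table \ref{table:fitts} are of the standard Fitts' Law form $MT = a + b \cdot ID$. As a point of reference, such a fit can be reproduced from per-condition data as in the sketch below; the sketch assumes the Shannon formulation of the index of difficulty, $ID = \log_2(D/W + 1)$, and hypothetical arrays of target distances, widths, and mean selection times, and is not the analysis script used for this study.
\begin{verbatim}
import numpy as np

def fitts_regression(distances_mm, widths_mm, mean_times_s):
    # Fit MT = a + b * ID by least squares and report the goodness of fit.
    D = np.asarray(distances_mm, dtype=float)
    W = np.asarray(widths_mm, dtype=float)
    MT = np.asarray(mean_times_s, dtype=float)
    ID = np.log2(D / W + 1.0)          # Shannon index of difficulty (bits)
    b, a = np.polyfit(ID, MT, deg=1)   # slope b, intercept a
    predicted = a + b * ID
    ss_res = np.sum((MT - predicted) ** 2)
    ss_tot = np.sum((MT - MT.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return a, b, r_squared
\end{verbatim}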
\begin{table}[h]
\centering
{\rowcolors{2}{gray!30}{lightgray!30}
\begin{tabular}{ l r r r }
Function & $a$ & $b$ & $r^2$ \\ \hline
Constant & 0.20 & 0.52 & 0.99 \\
Acceleration & -0.08 & 0.59 & 0.99 \\
RubberEdge & 0.64 & 0.56 & 0.95 \\ \hline
\end{tabular}
}
\vspace{0.5cm}
\caption{Fitts' Law regression values for different transfer functions, where $a$ is the intercept of the regression line, $b$ is the slope, and $r^2$ is the goodness of fit.}
\label{table:fitts}
\end{table}

\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{fitts-grouped}
\caption{A comparison between the Index of difficulty and mean time, between different transfer functions, where each distance is separated into its own group.}
\label{fig:fitts-grouped}
\end{figure}

Table \ref{table:fitts} also shows that the $r^2$ values for each Transfer Function are within acceptable values. For interest, a Figure showing the different distances is also supplied (Figure \ref{fig:fitts-grouped}). This shows that for each Transfer Function the distances are neatly grouped together.

\subsection{Clutches}
Clutches are the number of times a participant removes their finger from the touchpad to continue the task of moving the pointer to a target. This is recorded on a per-trial basis.

\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{clutches}
\caption{Transfer Function x Distance Interaction on clutch invocations.}
\label{fig:clutches}
\end{figure}

The order of the Transfer Functions presented to the participants had no significant effect on clutch invocations, as found using Mauchly's Test for Sphericity. There was a significant main effect for the Transfer Function on clutch invocations (F$_{2,12}$ = 23.4, p < 0.0001), with the RubberEdge Function outperforming the Constant and Acceleration Functions at all Distances. A significant main effect was also present for Distance on clutch invocations (F$_{2,12}$ = 114.2, p < 0.0001). More importantly, there was a significant Transfer Function x Distance interaction (F$_{2,24}$ = 32.2, p < 0.0001), as shown in Figure \ref{fig:clutches}. RubberEdge outperforms both the Constant and Acceleration functions at all Distances (D$_S$, D$_M$, D$_L$) by approximately 13.6\%, 31.1\%, and 41.6\% respectively. A pair-wise comparison confirms this finding, with both Acceleration vs RubberEdge and Constant vs RubberEdge having p < 0.0001. We therefore accept \textbf{H1}.

\subsection{Participant Feedback}
Participants were asked the following questions for each Transfer Function at the end of the experiment: 'Was the interface frustrating to use?', 'Was the interface easy to use?' and 'The interface felt accurate?'. Only the question about accuracy was found to be significant ($\chi^2_r = 6.42$, df = 2, $p < 0.05$). Five out of 7 participants agreed that the Constant Transfer Function was accurate, 4 out of 7 agreed that Acceleration was accurate, and 4 out of 7 found RubberEdge not to be accurate. Participants commented on the noticeable lag in the pointer; see section \ref{section:interface_problems}, which details the issue.
{ "alphanum_fraction": 0.770396866, "avg_line_length": 81.5416666667, "ext": "tex", "hexsha": "86144938b5e1b504d5453035d98a2075f26e4013", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-10-08T21:40:26.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-08T21:40:26.000Z", "max_forks_repo_head_hexsha": "dc8bb2b0b7e16a9f596343e89fc5d1751f08f512", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Awarua-/RubberEdge2", "max_forks_repo_path": "report/pages/results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dc8bb2b0b7e16a9f596343e89fc5d1751f08f512", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Awarua-/RubberEdge2", "max_issues_repo_path": "report/pages/results.tex", "max_line_length": 953, "max_stars_count": null, "max_stars_repo_head_hexsha": "dc8bb2b0b7e16a9f596343e89fc5d1751f08f512", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Awarua-/RubberEdge2", "max_stars_repo_path": "report/pages/results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1482, "size": 5871 }
\section{Introduction}
\label{sec_opportunistic_intro}

%\ac{MU-MIMO} is a transmission technique that enables a multi-antenna transmitter to transmit multiple, parallel data streams to distinct user nodes. %By pre-coding the data streams concurrently through a coherent antenna array, a transmitter can increase its spectral efficiency and overall downlink system capacity. %Closed-loop \ac{MU-MIMO} transmissions first require a transmitter to measure the channel between itself and its receivers (a process known as channel sounding) before transmitting concurrent data streams to the receivers. %This direct measurement of \ac{CSI} adds considerable protocol overhead and must occur more often in time-varying channel environments since the beam-formed transmission is sensitive to channel variation. %A more temporally-correlated channel would allow a \ac{MU-MIMO} system to reduce \ac{CSI}-estimation frequency and improve the accuracy of this estimate for longer lag times.

\ac{MU-MIMO} is a wireless channel coding technique that enables an \ac{AP} equipped with multiple antennas to transmit \textit{simultaneous} data streams to separate \acp{STA}, leveraging spatial diversity to scale data rates with the number of transmit antennas. For an \ac{AP} to use this technique, it must first estimate the \ac{CSIT} between each of its transmit antennas and each receiving antenna through a method termed \textit{channel sounding}. The estimated \ac{CSIT} is then used to compute precoding weights for the multi-stream transmitter. \ac{CSIT} can also be used for resource allocation, such as user grouping \cite{mao2012sus} and inter-cell interference mitigation \cite{rahman2010intercell}. IEEE 802.11af is a standard amendment for Wi-Fi to operate in unused UHF \ac{TVWS} channels \cite{flores2013af}. The standard can also employ MU-MIMO features of IEEE 802.11ac \cite{std11ac}: here, explicit \ac{CSIT} is obtained at the \ac{AP} by first transmitting a sounding packet from the \ac{AP} to the \ac{STA}, then having each \ac{STA} transmit the measured \ac{CSI} to the \ac{AP} as a control frame \cite{std11af}. Unfortunately, the transmission overhead required for \ac{CSIT} estimation increases with the number of transmit antennas at the \ac{AP}, $M$, and the number of aggregate \ac{STA} antennas, $K$, and recent results have shown that this overhead can severely decrease the achievable throughput gains \cite{xie2013adaptive, bejarano2014mute}.

%In this final chapter, we explore elimination of explicit channel sounding altogether via purely \textit{opportunistic} channel sounding in which \ac{CSIT} is implicitly estimated from each received \textit{uplink} transmission, whether a data or control frame.
In this final chapter, we leverage our results to design a new random-access technique called Opportunistic Implicit Channel Sounding in order to support throughput scaling of 802.11 multi-user random access as the number of base station antennas continues to grow.

%#######################################################
\subsection{Implicit vs. Explicit Channel Sounding}
\label{sec_im_vs_exp}

Implicit beamforming relies on the assumption that the physical channel between the transmitter and receiver is reciprocal in nature so that estimating \ac{CSI} in the downlink direction is equivalent to estimating \ac{CSI} in the uplink direction and vice versa. 
Accurate array reciprocity calibration has been demonstrated \cite{shepard2012argos}, and \cite{guillaud2013reciprocity} has experimentally demonstrated equivalent mutual information between downlink and uplink channel estimates utilizing transceiver hardware similar to our own. Therefore, we assume that uplink channel estimation is sufficient to estimate the downlink channel for our purposes, and we assume that all new \acp{AP} will have the capability to perform reciprocity calibration and provide implicit channel estimation.

The benefits of implicit channel sounding vary based on node/environment mobility as well as the protocol and radio configuration utilized. For example, if the wireless channel varies rapidly due to high mobility, frequent channel sounding, whether implicit or explicit, will be required to obtain accurate \ac{CSIT}. Per-packet channel sounding mechanisms that incur protocol overhead, such as the multi-user implicit sounding mechanism analyzed in \cite{lou2013comparison}, may be required to ensure that channel estimates are accurate in such environments. However, when the wireless channel remains coherent for long periods of time, for example due to limited or no mobility, it becomes possible to rely on previously collected \ac{CSI} for current \ac{MU-MIMO} transmissions \cite{bejarano2014mute, xie2013adaptive}. Practically, such environments exist in wireless networks utilizing sub-GHz carrier frequencies, for instance \ac{TVWS} networks \cite{anand2014case}, as well as certain fixed Wi-Fi networks.

%
\section{Opportunistic Channel Estimation for Implicit MU-MIMO}
\label{sec_opportunistic}

%\rgnote{This is where the setup and simulation work for [ITC 2016] goes. I've copied the paper intro--it needs to be made into a section intro.}

This approach exploits a key property of UHF bands: they can be highly stable on the order of 100~ms while maintaining high multi-user diversity \cite{anand2014case}. Thus, the opportunistic policy eliminates \ac{CSIT} sounding overhead if the channel remains sufficiently unchanged between uplink transmissions. Otherwise, the AP can fall back to active channel sounding modes in the case of rapid channel variation or insufficient uplink traffic. We show that opportunistic sounding is beneficial in four operating regimes in which: \emph{(i)} channel conditions are sufficiently stable such that beamforming error due to obtaining \ac{CSIT} from a prior uplink transmission is negligible; \emph{(ii)} legacy 802.11 \acp{STA} cannot respond to beamforming requests and otherwise could not leverage full spatial diversity; \emph{(iii)} the number of spatial streams grows such that even implicit channel estimation generates significant overhead; and \emph{(iv)} the \ac{MCS} is sufficiently high that any wasted airtime due to channel sounding overhead imposes a high relative cost. Scenario \textit{(ii)} is of particular interest because it enables new 802.11 \acp{AP} with multi-user capabilities to operate in spectrally efficient multi-user modes with legacy 802.11 equipment that does not otherwise support multi-user modes.

To explore the key performance factors of opportunistic sounding, we design and manufacture a custom MIMO \ac{SDR} front-end for the WARPv3 \ac{SDR} platform \cite{warpProject}. This platform enables the first characterization of mobile multi-user UHF channels, enabling evaluation of opportunistic sounding even in the presence of \ac{STA} or environmental mobility. 
The design simplifies high-power UHF-band 8x8 MIMO experiments and improves synchronous clocking over previous \ac{SDR} testbeds. In addition to implementation of custom \ac{SDR} hardware, we modify a novel \ac{SDR} channel sounding framework designed for high-speed mobile implicit multi-user channel measurements \cite{shepard2015faros} and port the framework to operate on our UHF equipment. We find that fixed wireless nodes utilizing UHF spectrum exhibit long-term stable \ac{CSI} under environmental and static mobility scenarios. Consequently, we find that with a low number of spatial streams, performance of both active and opportunistic implicit sounding policies significantly exceeds that of the current 802.11af protocol due to the reduced overhead of collecting \ac{CSI}, even when taking into account the measured beamforming inefficiency of using delayed \ac{CSIT}. We further extend our analysis to show that opportunistic implicit sounding with more spatial streams yields increasing benefits, enabling future systems with many more antennas than the current maximum of eight in commodity \acp{AP}.

%####################################################################################
\subsection{Opportunistic \ac{CSIT} Collection}
\label{sec:wurcProtocol}

\begin{figure*}[t] % Channel Sounding Types
\centering
\includegraphics[width=1\textwidth]{figs/protocol/opportunistic_diagram.pdf}
\caption{Time diagram of the considered types of channel sounding. 802.11af polling packets are not included in the explicit sounding diagram.}
\label{fig:soundingDiagrams}
\end{figure*}

In this section, we propose a new approach to collecting \ac{CSIT} in 802.11af networks that avoids the overhead of \ac{MU-MIMO} channel sounding altogether by relying on the opportunistic reception of implicit \ac{CSI} from regular network traffic.

Fig.~\ref{fig:soundingDiagrams}a diagrams explicit channel sounding, where first the downlink channel is estimated and then the channel estimates are fed back as data packets before each multi-user downlink transmission. This method, with additional polling and channel reservation overhead, is the version currently used in 802.11af \cite{std11af}. Explicit beamforming overhead scales as $\mathcal{O}(M\cdot K)$.

A previously proposed implicit sounding method \cite{lou2013comparison} transmits staggered \acp{NDP} in the uplink direction, allowing implicit downlink channel estimation following a multi-user trigger from the \ac{AP}. Its timeline is similar to that of Fig.~\ref{fig:soundingDiagrams}a, but instead the uplink packets are short \acp{NDP} rather than \ac{CBFR} packets, and polling is avoided. Implicit channel estimation overhead is significantly reduced from the explicit case since no \ac{CBFR} polling or uplink payload is required before the downlink transmission. Implicit beamforming overhead scales as $\mathcal{O}(K)$ since all \ac{AP} antennas are sounded simultaneously, which is key for scaling $M$, the number of \ac{AP} antennas.

Fig.~\ref{fig:soundingDiagrams}b displays our proposed \textit{opportunistic} implicit sounding method that estimates the downlink channel implicitly from uplink data transmissions and utilizes that channel estimate for \ac{MUBF} so long as it remains ``fresh.'' \ac{ACK} packets from a successful downlink transmission can also refresh \ac{CSIT} implicitly. Opportunistic implicit beamforming overhead scales as $\mathcal{O}(1)$ since no sounding overhead is injected to sound active \acp{STA}. 
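To make the relative scaling of the three policies concrete, the following minimal Python sketch (added purely for illustration; the timing constants are assumed placeholder values rather than measured 802.11af figures) tallies the sounding airtime injected before a single multi-user downlink transmission under each policy:
\begin{verbatim}
# Illustrative sketch of sounding-overhead scaling for the three policies.
# All durations are placeholder values in microseconds, not 802.11af timings.
T_NDPA = 50          # NDP announcement / trigger frame (assumed)
T_NDP = 40           # one sounding NDP (assumed)
T_POLL = 40          # per-STA CBFR poll (assumed)
T_CBFR_PER_ANT = 30  # CBFR airtime per sounded AP antenna (assumed)

def explicit_overhead(m, k):
    # NDPA + NDP, then a poll and a CBFR from each STA antenna; the CBFR
    # payload grows with the number of AP antennas m, hence O(m * k).
    return T_NDPA + T_NDP + k * (T_POLL + m * T_CBFR_PER_ANT)

def implicit_overhead(m, k):
    # Trigger + one staggered uplink NDP per STA antenna; all m AP antennas
    # estimate the channel from each NDP simultaneously, hence O(k).
    return T_NDPA + k * T_NDP

def opportunistic_overhead(m, k):
    # CSIT is reused from overheard uplink traffic, so no sounding airtime
    # is injected: O(1) (here, zero).
    return 0

for m, k in [(4, 4), (8, 4), (32, 16)]:
    print(f"{m}x{k}: explicit={explicit_overhead(m, k)} us, "
          f"implicit={implicit_overhead(m, k)} us, "
          f"opportunistic={opportunistic_overhead(m, k)} us")
\end{verbatim}
Even with these rough placeholders, the $\mathcal{O}(M \cdot K)$ growth of the explicit policy quickly dominates as the array size increases, which is the effect quantified in the protocol analysis below.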
Given that the UHF channels for 802.11af networks can remain stable for relatively long periods of time, we aim to avoid channel sounding altogether and rely on standard PLCP preambles \cite{std11af} in overheard uplink transmissions to estimate the downlink channel since that estimate will remain valid over multiple packet timescales. The key strategy for opportunistic sounding is as follows: when historical implicit \ac{CSI} is available and ``fresh,'' the \ac{AP} forms user groups and calculates precoding weights for the optimal multi-user transmission group determined by the MAC scheduler. We utilize two methods when implicit \ac{CSI} is unavailable or stale for a particular \ac{STA}: 1) a single downlink frame for the stale \ac{STA} is de-queued and transmitted by the AP using MISO omni-directional transmission; the subsequent \ac{ACK} will then provide an update of implicit \ac{CSI} for that \ac{STA}; or 2) alternatively, the \ac{AP} could fall back to legacy implicit sounding methods, e.g., \cite{lou2013comparison}, if no traffic is available. In order to determine the feasibility of such an opportunistic sounding policy in 802.11af systems and explore the possible throughput gains, we measure a series of indoor and outdoor multi-user channel traces and perform protocol analysis to understand policy tradeoffs for opportunistic \ac{CSIT}.

\subsection{Performance of Proposed 802.11af Sounding Alternatives}
\label{sec:protocolSimulation}
%\eknote{the subsection title shold focus on the issue vs. the tool (sim vs. OTA} %\eknote{are these trace-driven simulations?}

In this section, we investigate the protocol gains available from an opportunistic \ac{CSIT} system with regard to various MAC-layer parameters. We also compare performance against an implicit sounding policy adapted from 802.11n standard proposals for multi-user operation \cite{lou2013comparison}.

%Although UHF radio spectrum is highly valuable due to its licensing structure and superior propagation characteristics, the fragmentation of UHF spectrum into many small 6~MHz channels and the relatively large antenna apertures of UHF bands means that scaling capacity with \ac{MU-MIMO} is critical \cite{anand2014case} and therefore reducing \ac{CSIT} overhead is very important.
The 802.11af standard attempts to amortize explicit sounding overhead by transmitting aggregated data frames; however, the efficiency of this approach depends on the number of frames actually available to aggregate. We analyze the protocol performance of a \ac{MU-MIMO} system with various channel sounding policies and with varying packet aggregation values in order to emulate both best and worst case scenarios.

%Since our proposed opportunistic system will discard historical \ac{CSIT} when it is determined to be stale, the stale channel must be resounded either explicitly or via a unicast transmission to that \ac{STA} resulting in an ACK that can then be used to refresh implicit \ac{CSIT}. %In the worst-case transmission scenario, the system would have no fresh \ac{CSIT} and performance would revert to the implicit/explicit curves or the opportunistic \ac{AP} would send unicast transmissions to all \acp{AP} before sending multi-user data.
We set the single frame size to 1500~bytes, the largest regular Ethernet frame size and the best case for \ac{CSIT} overhead amortization before aggregation.
%\eknote{bytes or kB? 
%frame aggregation would be the best case so should rephrase}

We compare three different channel sounding policies:

\textbf{Explicit 802.11af.} This is the current standard operation of 802.11af \ac{MU-MIMO}. \ac{CSIT} overhead in this case is caused by the \ac{NDP} Announcement, the sounding \ac{NDP}, and the sequence of polls and \ac{CBFR} responses from all 802.11af \acp{STA} before each downlink transmission \cite{std11af}. The upper and lower bounds on explicit performance are calculated with minimum and maximum feedback compression of the \ac{CBFR} payload, a highly vendor-specific implementation parameter. We assume no impairment on performance from feedback compression, and plot the median performance while indicating the bounds with a shaded red region. Although the 802.11af standard only supports up to 8 concurrent spatial streams, we assume that timing and protocol performance scales with the number of streams in order to provide a point of reference for scaling to large numbers of antennas. We label this policy \textit{``Explicit 802.11af''} in the following plots.
%\eknote{I'm not in favor of calling anything 802.11af unless it IS 802.11af. there is no such thing as implicit %11af and the reader might be confused (and might reject the paper and explain to you that you've misunderstood %the standard and don't know how it works.}

\textbf{Implicit Proposal for 802.11af.} In \cite{lou2013comparison}, the authors proposed an alternative multi-user \ac{CSI} sounding protocol that avoids the lengthy \ac{CBFR} by estimating the channel implicitly with short \acp{NDP}. \ac{CSIT} overhead in this case comes from the \ac{NDP} Announcement and a staggered sequence of uplink \acp{NDP} that are used for implicit channel estimation before each multi-user transmission as proposed in \cite{lou2013comparison}. Since the channel is estimated implicitly, there are no levels of feedback compression to display. We label this policy \textit{``Implicit''} in the following plots.

\textbf{Opportunistic Proposal for 802.11af.} In this case, there is no \ac{CSIT} overhead for multi-user transmissions. We explore three regions of operation for an opportunistic \ac{AP}:
\begin{enumerate}
\item \textit{``Opportunistic.''} The best-case performance assuming all \ac{CSIT} is available opportunistically and there is no beamforming penalty for using stale \ac{CSIT}. This is an upper bound on opportunistic performance since practically all \ac{CSIT} will have some error.
\item \textit{``Opportunistic with Bootstrap.''} An alternative fallback mode where at most one \ac{STA} has stale \ac{CSIT} and the \ac{AP} sends a single %\eknote{do not use unicast here as it seems you are contrasting to multicast but you are actually contrasting to multi-user}
packet to that \ac{STA} before each multi-user transmission in order to implicitly refresh its \ac{CSIT}. This can be viewed as a way of quickly bootstrapping opportunistic \ac{CSIT} to a \ac{STA} that previously was inactive.
\item \textit{``Opportunistic with Stale CSIT.''} A trace-driven lower bound on opportunistic performance based on our environmental measurement traces. We assume that \ac{CSIT} is refreshed opportunistically every second. According to our empirical results in Figure~\ref{fig_power_allocation}, this would result in less than 10\% reduction in achievable sum-rate in an environment with static \acp{STA}. 
Thus, we reduce the throughput of the best-case opportunistic scenario by the requisite amount, presenting a fairer approximation of how an implemented opportunistic system might perform.
\end{enumerate}

%If \textit{more} than one channel is stale, it is generally better to fall back on implicit channel sounding rather than to expend overhead in a unicast transmission to refresh opportunistic \ac{CSI}. %We ignore the increased per-\ac{STA} \ac{SINR} that would be achieved with a MISO transmission compared to a corresponding multi-user transmission and assume the underlying PHY rate remains the same.
All \acp{ACK} are staggered as per the 802.11af specification. For tractability, transmissions are assumed to be successful, requiring no retransmissions, and only downlink data flows are considered.
%\rgnote{Question for reviewer: is the ``Opportunistic w/ Stale CSIT'' curve confusing?}

% Sounding 4x4 Policy
\subsection{Sounding Policy Performance: 4x4}
\label{sec:4x4_policy}

In Fig.~\ref{fig:protosim_4x4}, we vary the multi-user frame aggregation number from $1$ to $64$ for the lowest (top) and highest (bottom) 802.11af \ac{MCS} in a $4\times 4$ system where all \acp{STA} have only a single antenna.

\begin{figure}[t] % FROZEN Simulation - 4x4 6 Mhz TVWS
\centering
\includegraphics[width=0.7\linewidth]{./figs/protocol/tput_vs_agg_4x4_6mhz_im_mcs-0_crop}
\caption{Network throughput for a 4x4 802.11af system on a 6~MHz UHF channel. Base rate (top) and maximum \ac{MCS} (bottom).}
\label{fig:protosim_4x4}
\end{figure}

\begin{figure}[t] % FROZEN Simulation - 8x4 6 Mhz TVWS
\centering
\includegraphics[width=0.7\linewidth]{./figs/protocol/tput_vs_agg_8x4_6mhz_im_mcs-0}
\caption{Network throughput for an 8x4 802.11af system on a 6~MHz UHF channel. Base rate (top) and maximum \ac{MCS} (bottom).}
\label{fig_protosim_8x8}
\end{figure}

\subsubsection{Effect of Frame Aggregation}
%\eknote{rough org. this is the basic X axis so it is awkward to first discuss and conclude about grphas without %mentioning what the x axis is and then come back to the x axis. treat graphs completely with 4 steps one by one}

Frame aggregation allows the cost of channel sounding to be amortized over large payloads. While we expect that increased aggregation will generally decrease the efficiency of channel sounding reduction protocols, it also determines crossover points in terms of protocol performance. At the lowest \ac{MCS} in the top plot of Fig.~\ref{fig:protosim_4x4} with no frame aggregation, there is a moderate performance gap between implicit channel sounding methods (opportunistic, implicit) and the current explicit 802.11af policy. An opportunistic sounding policy would increase throughput at best by 31\%, while an implicit sounding policy would increase throughput by 21\% over explicit 802.11af. However, as the aggregation rate increases, these alternatives rapidly converge.
%around $b=10$, or 15~kB %\eknote{check this} %of aggregated payload data per-user.
%We observe that the proposed opportunistic bootstrapping mechanism performs poorly when operating at low \ac{MCS} since there is little benefit to transmitting a payload while sounding the stale \ac{STA} and it is always better to transmit a short explicit sounding handshake.\csnote{todo:improve readability}
%When operating at low \ac{MCS}, there is negligible benefit to using opportunistic sounding in the best case and significant penalties compared to alternatives on the order of 10\% when considering stale \ac{CSIT}. 
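The amortization effect can be summarized with a simple illustrative expression (not part of the simulation itself): if a given policy injects a fixed sounding overhead $T_{\mathrm{snd}}$ before a burst of $b$ aggregated frames of airtime $T_{\mathrm{data}}$ each, the resulting airtime efficiency is
\[
\eta(b) \;=\; \frac{b\, T_{\mathrm{data}}}{T_{\mathrm{snd}} + b\, T_{\mathrm{data}}},
\]
which approaches $1$ as $b$ grows regardless of $T_{\mathrm{snd}}$, so the relative advantage of any low-overhead sounding policy shrinks with increasing aggregation, consistent with the convergence visible in Fig.~\ref{fig:protosim_4x4}.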
\subsubsection{Effect of Modulation and Coding Scheme}

In general, the lower the \ac{MCS} rate, the lower the relative overhead of sounding; thus the use of any particular sounding mechanism matters much less at low rates than at high rates. At base rate, shown in Fig.~\ref{fig:protosim_4x4} (top), there is little advantage in opportunistic sounding, and our proposed bootstrapping method under-performs even explicit sounding. However, as the \ac{MCS} of the system increases, the relative cost of sounding overhead also increases since airtime becomes more valuable, potentially compensating for the stale \ac{CSIT} penalty. In Fig.~\ref{fig:protosim_4x4} (bottom), we show the same results for the maximum supported \ac{MCS}. 802.11af sounding overhead is much more costly when the system could otherwise be operating at high \ac{MCS}, since \acp{CBFR}, polling packets, and \acp{ACK} are all sent at base rate for robustness. The large range in explicit 802.11af sounding performance (red region) stems from the fact that uncompressed \ac{CBFR} packets take a significant amount of airtime, resulting in very high overhead. At high \ac{MCS}, opportunistic sounding can improve throughput by 186\% and implicit sounding can improve by 94\% without frame aggregation. While opportunistic sounding with stale \ac{CSIT} is strictly better than explicit 802.11af up to 35 aggregated frames, it barely out-performs implicit sounding at low aggregation with fewer than $10$ frames and then performs significantly worse with higher frame aggregation.
%\eknote{again a conclusion without %going through the figure. no description or analysis} in either implicit sounding policy since overhead incurs a relatively small penalty compared to the low \ac{MCS}.
%At the lowest \ac{MCS} in the top of Fig.~\ref{fig:protosim_4x4}, there is a severe difference in performance when only a few packets are available to be aggregated.
%\eknote{no discussion of the VERY wide bounds at lower vs. higher aggregation rates and why they narrow}

Therefore, we conclude that for a low number of spatial streams, opportunistic channel sounding has approximately equivalent performance compared to implicit channel sounding and potentially worse performance when considering beamforming error from stale \ac{CSIT}. However, both opportunistic and implicit channel sounding offer significant throughput gains over the current explicit 802.11af standard. The best usage scenario for opportunistic sounding in this regime is when implicit \ac{STA} cooperation is not possible, such as with current 802.11 devices. A system design that leverages this observation would utilize \textit{opportunistic} \ac{CSIT} when per-user downlink traffic queues are below 3-52~MB, depending on the current \ac{MCS}, and then revert to explicit sounding when queues exceed that size and sounding overhead can be sufficiently amortized. For legacy 802.11a/b/g/n devices that do not report any \ac{CSIT}, only opportunistic \ac{CSIT} would be available and the decision is made between multi-user and single-user transmission modes only.
%We also conclude that even with low numbers of spatial streams, an implicit sounding policy can drastically increase the capacity of 802.11af systems, with negligible improvement for operation at low \ac{MCS}.
%In high \ac{SINR} regimes with many short-lived flows, implicit channel sounding could significantly increase protocol throughput for 802.11af systems.
%\eknote{interesting subsection. 
%needs the above structure/analysis changes}
%\eknote{this seems to be one of the larger punchlines of the entire paper and it's buried at the very end. %since implicit seems to do pretty well in Fig 10, the big gain seems to be when the entire system %scales up. Given that, the paper sould really be ``Opportunistic Channel Sounding for Scaling %to Massive MU-MIMO'' if this is a design paper. Otherwise, you're more agnostic to exp/implicit %and opportunistic/not and the title would be more like %``Implementation and Experimental Performance Evaluation of CSI Management Strategies on MU-MIMO %Performance in UHF Bands''}

% Sounding 32x16 Policy
%\subsubsection{Sounding Policy Performance: 32x16}
\subsection{Scaling to 32x16}
\label{sec:scaling}

At all \ac{MCS}, the challenge of efficiently using narrow bands of UHF radio spectrum is clear: system throughput is no more than 50~Mbps even with full 8x4 spatial diversity at the maximum \ac{MCS} (Figure~\ref{fig_protosim_8x8}). For this reason, we explore the possibility of leveraging additional spatial streams for UHF-band communications as a means of increasing spectral efficiency. Given the potential for large-scale 802.11af system installations to establish long-range point-to-multi-point networks and the need to support high throughput over narrow UHF channels, we extend our beamforming protocol analysis to a $32\times16$ system in Figure~\ref{fig:protosim_32x16}. Previous work on many-antenna \ac{MU-MIMO} systems has proposed implicit channel sounding as a means to avoid protocol collapse as the number of antennas at the \ac{AP} grows \cite{shepard2012argos}. Our results in Figure~\ref{fig:meas_corr} and Figure~\ref{fig_power_allocation} indicate that the \ac{CSI} of stationary \acp{STA} in both indoor and outdoor environments remains constant for long periods of time, which supports the possibility of using opportunistic sounding policies to increase system throughput even further. In all cases with a large number of spatial streams, explicit channel sounding suffers severely from protocol congestion due to the high number of spatial streams and the amount of explicit data that is transmitted to the \ac{AP} to report \ac{CSI}.

\begin{figure}[th!] % FROZEN Simulation - 32x16 6 Mhz TVWS
\centering
\includegraphics[width=0.7\linewidth]{./figs/protocol/tput_vs_agg_32x16_6mhz_im_mcs-0_crop}
%\vspace{-6mm}
\caption{Network throughput for a 32x16 802.11af system on a 6~MHz UHF channel.}
\label{fig:protosim_32x16}
\end{figure}

In Figure~\ref{fig:protosim_32x16} (top), we see that for low \ac{MCS} rates and frame aggregation below $18$ frames, opportunistic sounding with stale \ac{CSIT} out-performs even implicit sounding, given the number of \acp{STA} involved in each transmission. In Figure~\ref{fig:protosim_32x16} (bottom) at the maximum supported \ac{MCS}, strict relationships emerge between the sounding policies, since \ac{CSIT} overhead dominates any other effects at this scale. When channel sounding becomes extremely expensive, the use of opportunistic \ac{CSIT} is able to offer significant throughput gains over implicit sounding, ranging from 112\% with no frame aggregation to 18\% at maximum aggregation, even when considering the penalty from stale \ac{CSIT}. Explicit sounding should be avoided altogether. It is clear from Figure~\ref{fig:protosim_32x16} that network throughput for a 6~MHz channel can be significant in high-\ac{SNR} environments where high modulation rates can be achieved. 
When bonding multiple 6~MHz \ac{TVWS} channels, it is clear that multi-gigabit network throughput is achievable with large-scale multi-user beamforming systems using implicit beamforming, and further increased in fixed deployments by using opportunistic beamforming.

%% Sounding 8x8 Wi-Fi Policy
%\subsubsection{Comparison with 802.11ac: 8x8} %\label{sec:wifi_policy} % %Until now, we have focused on the performance of opportunistic resounding % %The analysis in \S~\ref{sec:4x4_policy} and \S~\ref{sec:scaling} indicate that the efficiency of opportunistic channel sounding increases as the relative rate of the payload increases relative to the channel overhead. %This would suggest that high-rate 802.11ac systems with high channel bandwidth would benefit greatly from implicit modes of channel sounding. %On the other hand, the increase channel coherence time of 2.4 and 5.8~GHz channels could potentially prevent the use of opportunistic channel sounding. %In order to % %\begin{figure}[h] % FROZEN Simulation -8x8 40 MHz Wi-Fi %\centering %\includegraphics[width=1\linewidth]{./figs/tput_vs_agg_8x8_40mhz_mcs-0.pdf} %\caption{Network throughput for a 8x8 802.11ac system on a 40~MHz Wi-Fi channel.} %\label{fig:protosim_8x8} %\end{figure}

% Opportunistic Sounding Conclusion
\section{Discussion and Conclusion}
\label{sec_protocol_conclusion}

In order to scale the capacity of \ac{MU-MIMO} beamforming systems, it is important to address the problem of \ac{CSIT} overhead, particularly in 802.11af systems with potentially limited system bandwidth. In this work, we developed a new \ac{SDR} system specifically for UHF-band \ac{MU-MIMO} that allowed us to gather the first mobile multi-user channel traces in the UHF band, which can be found in reference \cite{shepard2016understanding}. Based on our analysis of beamforming capacity with stale \ac{CSIT} in Section~\ref{sec:uhf_outdoor_results}, we showed that large S-T intervals can be tolerated in UHF frequency bands, enabling the gathering of \ac{CSIT} purely opportunistically and enabling multi-user transmissions with legacy 802.11 equipment that cannot provide \ac{CBFR} reports. We compared three different channel sounding policies and showed that for a small number of spatial streams, significant throughput gains are available with either of the implicit sounding policies, though the penalty of using stale \ac{CSIT} would encourage the use of implicit sounding rather than opportunistic sounding, if available. However, as the number of spatial streams increases, the overhead of even implicit beamforming begins to become a bottleneck on 802.11af performance, and opportunistic channel sounding becomes much more beneficial when the nodes are stationary.
%Given the limitations of our hardware platform, we were not able to experimentally validate the performance of %\csnote{removed `as APs and STAs grow in complexity' since it was very awkward and confusing. perhaps we should say something about next-generation/emerging/many-antenna though} \rgnote{no room! :(} %\csnote{I would try to make the conclusion sound more impressive 'up to XX fold gains'}
{ "alphanum_fraction": 0.7985632369, "avg_line_length": 105.5850340136, "ext": "tex", "hexsha": "6d77ab5fa693b2626009dcc4cbf523864db02b39", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "acf1ebafee00a8e4375008e60e35da8affc97d9b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "RyanEGuerra/ryan_guerra_phd_thesis", "max_forks_repo_path": "sec/opportunistic.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "acf1ebafee00a8e4375008e60e35da8affc97d9b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "RyanEGuerra/ryan_guerra_phd_thesis", "max_issues_repo_path": "sec/opportunistic.tex", "max_line_length": 635, "max_stars_count": null, "max_stars_repo_head_hexsha": "acf1ebafee00a8e4375008e60e35da8affc97d9b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "RyanEGuerra/ryan_guerra_phd_thesis", "max_stars_repo_path": "sec/opportunistic.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7215, "size": 31042 }
\chapter{Implementation of Methods on Actual Satellite}
% put these two lines after every \chapter{} command
\vspace{-2em}
\minitoc

\section{Isolation Forests}
A trained model will be developed from simulation data or from the data generated during the first few orbits of a satellite. Afterwards, the anomaly score will be calculated for each data point and, based on a given threshold, the data point will be flagged as an anomaly or not.
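As a minimal illustration of this workflow (a sketch only: the feature dimensionality, parameter values and threshold below are assumptions rather than mission parameters), an isolation forest can be fitted to nominal telemetry and then used to score and flag incoming samples:
\begin{verbatim}
# Minimal sketch: fit an isolation forest on nominal (simulated or early-orbit)
# telemetry, then flag new samples whose anomaly score exceeds a threshold.
# Feature dimensionality, parameters and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
nominal = rng.normal(size=(1000, 6))      # stand-in for nominal sensor vectors
incoming = rng.normal(size=(10, 6))       # stand-in for newly received samples
incoming[0] += 8.0                        # inject one obvious outlier

forest = IsolationForest(n_estimators=100, contamination="auto", random_state=0)
forest.fit(nominal)

scores = -forest.score_samples(incoming)  # larger value = more anomalous
threshold = 0.6                           # assumed operating threshold
flags = scores > threshold                # True = flagged as an anomaly
print(list(zip(np.round(scores, 3), flags)))
\end{verbatim}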
{ "alphanum_fraction": 0.8013856813, "avg_line_length": 61.8571428571, "ext": "tex", "hexsha": "7805517873e18662b873763bb4efbf2cf702e5ef", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "92d341948a6c9dea47d987c9b9e7f55421960694", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "UlrichLouw/Masters_Latex", "max_forks_repo_path": "chapter/Implementation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "92d341948a6c9dea47d987c9b9e7f55421960694", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "UlrichLouw/Masters_Latex", "max_issues_repo_path": "chapter/Implementation.tex", "max_line_length": 272, "max_stars_count": null, "max_stars_repo_head_hexsha": "92d341948a6c9dea47d987c9b9e7f55421960694", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "UlrichLouw/Masters_Latex", "max_stars_repo_path": "chapter/Implementation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 93, "size": 433 }
\documentclass[output=paper]{langscibook}
\ChapterDOI{10.5281/zenodo.5082450}

% \author{Authors\affiliation{Institution}}
\author{Marcin Wągiel\affiliation{Masaryk University} and Mojmír Dočekal\affiliation{Masaryk University}}
\title{Number in natural language from a formal perspective}
\abstract{In this introduction, we provide a general overview of a variety of phenomena related to the encoding of the cognitive category of \textsc{number} in natural language, e.g., number-marking, collective nouns, conjunctions, numerals and other quantifiers, as well as classifiers, and show how Slavic data can contribute to our understanding of these phenomena. We also examine the main strands of the study of number in language developed within formal linguistics, linguistic typology, and psycholinguistics. Finally, we introduce the content of this collective monograph and discuss its relevance to current research.
\keywords{number, plurality, numerals, quantifiers, formal linguistics}}

\begin{document}
\SetupAffiliations{mark style=none}
\maketitle

\section{Introduction}

The goal of this monograph is to explore the relationship between the cognitive notion of \textsc{number} and various grammatical devices expressing this concept in natural language. The book aims at investigating different morphosyntactic and semantic categories including plurality and number-marking, individuation and countability, cumulativity, distributivity and collectivity, numerals, numeral modifiers and classifiers, as well as other quantifiers. It gathers contributions tackling the main themes from different theoretical and methodological perspectives in order to contribute to our understanding of cross-linguistic patterns both in Slavic and non-Slavic languages.

In this chapter, we will provide a brief introduction to various approaches to the study of the concept of number in natural language. We will mainly focus on the issues to whose better understanding this book directly contributes. First, in \sectref{doc-wag:sec:number-in-language}, we will discuss a variety of phenomena related to the expression of number in language. Then, in \sectref{doc-wag:sec:approaches-to-number}, we will review the major strands in linguistic research dedicated to explaining these phenomena. Finally, in \sectref{doc-wag:sec:the-contribution-of-this-book}, we will introduce the content of this book and briefly explain its contribution.

\section{Number in language}\label{doc-wag:sec:number-in-language}
% \section{The research so far}\label{doc-wag:sec:the-research-so-far}

The nature of the relationship between number as a cognitive category and language %in question
is highly complex, and thus the literature on the topic is vast. In this section, we will introduce a number of topics that are of relevance for the linguistic phenomena explored in this book and briefly discuss why they are important for a better understanding of how humans conceive of quantity and number.

\subsection{Number sense}\label{doc-wag:sec:number-sense}

It is well-documented that humans possess what is often called \textsc{number sense}, i.e., an intuitive understanding of numbers and their magnitude as well as various numerical relations and operations (see, e.g., \citealt{dehaene1997number} for an overview). 
The human number sense involves two distinct cognitive systems, namely the object tracking system, which enables an immediate enumeration of small sets, and the approximate number system, which supports the estimation of the magnitude of a collection of objects without relying on symbolic representations (see, e.g., \citealt{hyde2011two} for an overview). This mental ability is argued to provide an endowed predisposition for developing the concept of exact number and simple arithmetic and to facilitate the acquisition of lexical categories related to quantity, such as numerals (e.g., \citealt{gelman_gallistel1978child, wynn1990children}). Therefore, it seems that already in early childhood the language faculty interacts with that part of the human mind that generates number sense.

\subsection{Linguistic expression of the cognitive notion of number}\label{doc-wag:sec:lingusitic-expression-of-the-cognitive-notion-of-number}

Most languages of the world have formal means to express the conceptual distinction between `one' and `more than one'. A cross-linguistically widespread morphosyntactic device dedicated to that purpose is the category of \textsc{grammatical number} (e.g., \citealt{corbett2000number}). This category is typically expressed by an affix on the noun and/or by the agreement it triggers on other lexical items. The overall range of its values includes singular, dual (for two), trial (for three), paucal (for few, as opposed to many), plural and greater plural (for an excessive number). Though languages typically encode only two or three of those values, there are also languages with more complex number systems as well as ones that do not mark those distinctions morphologically at all. An example of a language with a rich number system is Bayso, see \REF{doc-wag:ex:number-bayso}, which distinguishes between number-neutral, singular, paucal and plural forms of the noun.

\ea\label{doc-wag:ex:number-bayso}
\ea \gll lúban foofe\\ lion.\textsc{gnrl} watched\textsc{.1.sg}\\
\glt `I watched a lion/lions.'
\ex \gll lubán-titi foofe\\ lion-\textsc{sg} watched\textsc{.1.sg}\\
\glt `I watched a lion.'
\ex \gll luban-jaa foofe\\ lion-\textsc{pau} watched\textsc{.1.sg}\\
\glt `I watched a few lions.'
\ex \gll luban-jool foofe\\ lion-\textsc{pl} watched\textsc{.1.sg}\\
\glt `I watched (a lot of) lions.' \hfill \citep[Bayso, Cushitic;][11, adapted]{corbett2000number}
\z
\z

\noindent In Slavic, a complex number system including singular, dual and plural is attested in certain dialects of Slovenian as well as in Lower and Upper Sorbian, see \REF{doc-wag:ex:number-sorbian}.

\ea\label{doc-wag:ex:number-sorbian}
\ea \gll hród\\ palace.\textsc{sg}\\
\glt `palace/castle'
\ex \gll hrod-aj\\ palace-\textsc{du}\\
\glt `two palaces/castles'
\ex \gll hrod-y\\ palace-\textsc{pl}\\
\glt `palaces/castles' \hfill \citep[Upper Sorbian;][20, adapted]{corbett2000number}
\z
\z

\noindent In these languages, dual is a morphosyntactic category that triggers obligatory agreement with determiners, adjectives and verbs, as demonstrated in \REF{doc-wag:ex:number-slovenian}. Its semantic relationship with the singular and plural as well as its interplay with the meaning of numerals have been subject to important theoretical considerations (e.g., \citealt{dvorak_sauerland2006semantics, marti2020dual}). 
\ea\label{doc-wag:ex:number-slovenian}
\gll T-a dv-a stol-a st-a polomljen-a.\\ these-\textsc{du.m.nom} two-\textsc{du.m.nom} chair-\textsc{du.m.nom} be-\textsc{3.du.prs} broken-\textsc{du.m.nom}\\
\glt `These two chairs are broken.' \hfill \citep[Slovenian;][168, adapted]{derganc2003dual}
\z

\noindent Though in Slavic and other Indo-European languages grammatical number is typically marked through suffixation and inflection, other cross-linguistically common means include apophony, i.e., a word-internal sound change, as in the English pair \textit{man} $\sim$ \textit{men}, and suppletion, e.g., \textit{čelovek} `man' $\sim$ \textit{ljudi} `men' in Russian. Yet another frequent grammatical device employed for number marking across languages is reduplication (e.g., \citealt{moravcsik1978reduplicative, corbett2000number}). For instance, the repeated initial syllable in \REF{doc-wag:ex:reduplication} functions as a morphological plural marker.

\ea\label{doc-wag:ex:reduplication}
\ea \gll kuna\\ husband\\
\glt `husband'
\ex \gll kuu-kuna\\ \textsc{red}-husband\\
\glt `husbands' \hfill \citep[Papago, Uto-Aztecan;][308, adapted]{moravcsik1978reduplicative}
\z
\z

\noindent A related phenomenon attested cross-linguistically is known as syntactic reduplication (e.g., \citealt{travis2001syntax}, \citetv{chapters/04}), where the repeated material preceding and following the preposition gives rise to a plural interpretation, as illustrated in \REF{doc-wag:ex:syntactic-reduplication}.

\ea Jon washed plate after plate for hours after the party. \hfill \citep[457]{travis2001syntax}\label{doc-wag:ex:syntactic-reduplication}
\z

\noindent Though grammatical number often expresses the semantic concepts of \textsc{singularity} and \textsc{plurality}, there are many well-studied mismatches between the two notions.
% notions of \textsc{singularity} and \textsc{plurality}, there are many well-studied mismatches between form and meaning.
First, the plural does not always mean `more than one' (e.g., \citealt{sauerland2003new, spector2007aspects, zweig2009number}). For instance, \REF{doc-wag:ex:mismatch-guns} does not mean that only carrying multiple guns is illegal in Illinois. Similarly, \REF{doc-wag:ex:mismatch-aliens} cannot be true in a scenario where a single alien has walked the earth.

\ea
\ea Carrying guns is illegal in Illinois.\label{doc-wag:ex:mismatch-guns}
\ex No aliens have ever walked the earth.\hfill \citep[267]{nouwen2016plurality}\label{doc-wag:ex:mismatch-aliens}
\z
\z

\noindent Furthermore, there is an intriguing relationship between bare singular nominals and \textsc{number neutrality} (e.g., \citealt{rullmann_you2006general, dayal2011hindi}, \citetv{chapters/06}). For instance, the bare direct object in \REF{doc-wag:ex:bare} is not specified with respect to whether it refers to a single individual or to a plurality of individuals.

\ea \gll anu bacca sambhaaltii hai\\ Anu child look-after-\textsc{ipfv} be-\textsc{prs}\\
\glt `Anu looks after (one or more) children.' \hfill \citep[Hindi;][127, adapted]{dayal2011hindi}\label{doc-wag:ex:bare}
\z

\noindent Furthermore, a question arises whether the semantics of bare noun phrases in languages with articles like English and German is the same as in articleless languages such as most Slavic languages (e.g., \citealt{geist2010bare, heim2011definiteness}). 
Though it has been proposed that articleless languages employ other morphological or syntactic devices in order to express definiteness, e.g., word order, aspect and number marking, novel evidence suggests the meaning of bare nouns in Slavic is different from what is expected under standard theories of uniqueness and maximality (e.g., \citetv{chapters/07}).

The grammatical category of plural marking is closely related to \textsc{countability}, also known as the mass/count distinction, illustrated by the contrast in \REF{doc-wag:ex:countability}. While standard theories of mass and count tend to model this distinction in binary terms (e.g., \citealt{link1983logical, chierchia1998plurality, chierchia2010mass}), there is convincing evidence that nouns can be countable to various degrees, forming a scale of the mass/count spectrum (e.g., \citealt{allan1980nouns}, \citetv{chapters/03}).

\ea\label{doc-wag:ex:countability}
\ea[]{Thirty three \{tables/stars/pieces of that pizza\}.}
\ex[*]{Thirty three \{bloods/waters/golds\}. \hfill \citep[104, adapted]{chierchia2010mass}}
\z
\z

\noindent Naturally, what counts as `one' and what counts as `many' relates to a deep philosophical problem of individuation, i.e., a criterion of numerically distinguishing the members of a kind (e.g., \citealt{grimm2012number, wagiel2018subatomic}). The problem of individuation becomes even more perplexing if we consider the class of abstract entities, e.g., \textit{fact} and \textit{information} (e.g., \citealt{grimm2014individuating, sutton2020informational}), and belief objects, e.g., imaginary individuals such as monsters (e.g., \citealt{geach1967intentional}, \citetv{chapters/11}).

Across languages, there is also a distinct class of nominal expressions known as \textsc{collective nouns}, e.g., \textit{committee} and \textit{pile}.\footnote{Sometimes they are also referred to as group or bunch nouns.} Though such nouns are singular in terms of their morphosyntax, they denote a plurality of objects (e.g., \citealt{landman1989groupsi, barker1992group, pearson2011new, henderson2017swarms}). This is evidenced by the fact that similar to plurals, but unlike singulars, collectives are compatible with predicates calling for plural arguments such as \textit{meet}, see \REF{doc-wag:ex:collectives}.

\ea\label{doc-wag:ex:collectives}
\ea The \{men/\#man\} met on Tuesday.
\ex The committee met on Tuesday. \hfill \citep[80, adapted]{barker1992group}
\z
\z

\noindent Interestingly, Slavic languages with their rich nominal systems have many types of derived collectives, e.g., Czech \textit{list} `leaf' $\rightarrow$ \textit{listí} `foliage'.\footnote{Note that the form \textit{listí} `foliage' is not the plural of \textit{list} `leaf', which is \textit{listy} `leaves'.} This fact makes them an especially valuable source of data regarding the ways in which the semantic notion of plurality can be encoded in derivational morphology (e.g., \citetv{chapters/08}).

Another class of expressions designating number consists of \textsc{quantifiers} such as \textit{some}, \textit{most} and \textit{all}. The nature of the lexical representations of their meanings as well as the psychological mechanisms involved in the interpretation of those meanings have been puzzling questions not only in linguistics but also in cognitive science (e.g., \citealt{pietroski2009meaning, lidz_et-al2011interface}, \citetv{chapters/17}). 
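To give a sense of what such lexical representations may look like, a textbook generalized-quantifier treatment (presented here only as a schematic illustration) models these determiners as relations between the restrictor set $A$ and the nuclear scope set $B$:
\[
\textit{some}(A)(B) = 1 \iff |A \cap B| > 0
\]
\[
\textit{most}(A)(B) = 1 \iff |A \cap B| > |A \setminus B|
\]
\[
\textit{all}(A)(B) = 1 \iff A \subseteq B
\]
On such a view, verifying \textit{most} requires comparing two cardinalities, which is precisely the kind of verification procedure investigated in the psycholinguistic work cited above.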
A well-known property of quantifiers is that they give rise to scalar implicatures, i.e., implicit inferences suggesting that the speaker had a reason for not using a stronger, i.e., more informative, term on the same scale (e.g., \citealt{horn1984toward}). For instance, uttering \REF{doc-wag:ex:scalar-implicature} implies that the addressee did not eat all of the cookies.

\ea You ate some of the cookies. \hfill \citep[14]{horn1984toward}\label{doc-wag:ex:scalar-implicature}
\z

\noindent In this context, what is of particular interest is children's understanding of quantifiers and their computation of scalar implicatures, which seem to differ from what we find in adults (e.g., \citealt{noveck2001children, papafragou2004children}, \citetv{chapters/18}). Yet another intriguing feature of quantifiers is that some of them enter into non-trivial interactions with other phenomena such as negative polarity (e.g., \citealt{isreal1996polarity, solt2015q-adjectives}, \citetv{chapters/19}). For instance, items like \textit{much} can only appear in specific environments, such as negation, and are incompatible with affirmative contexts, as demonstrated by the contrast in \REF{doc-wag:ex:much}.

\ea\label{doc-wag:ex:much}
\ea[]{Albert didn't get much sleep.}
\ex[*]{Albert got much sleep. \hfill \citep[620]{isreal1996polarity}}
\z
\z

\noindent A unique subset of lexical items dedicated to expressing quantity are \textsc{cardinal numerals}. Though traditionally they were assumed to form a natural class with quantifiers such as \textit{some} and \textit{all}, there are good reasons to believe that in fact numerals are linguistic objects of a different type (e.g., \citealt[Ch.~2]{landman2004indefinites}, \citealt[Ch.~2]{rothstein2017semantics}). As witnessed in \REF{doc-wag:ex:numerals}, nominals modified by numerals can appear in predicate position while nominals involving other quantifiers cannot (on a non-partitive reading). Furthermore, numerals can also co-occur with the definite article and \textit{every}, e.g., \textit{the four cats} and \textit{every two students}, respectively.

\ea\label{doc-wag:ex:numerals}
\ea[]{The inhabitants of the barn are four cats.}
\ex[\#]{The guests are \{some/most\} students. \hfill \citep[18, adapted]{rothstein2017semantics}}
\z
\z

\noindent The internal syntax and semantics of cardinal numerals as well as relationships between basic and complex numerals have been an important topic in the study of these expressions (e.g., \citealt{rothstein2013fregean, ionin_matushansky2018cardinals, wagiel_caha2020universal}, \citetv{chapters/13}, \citetv{chapters/14}). One of the questions is whether the meaning and syntactic status of \textit{six} is the same also in \textit{sixty} and \textit{six hundred}. Though for a long time mainstream research has been mostly focused on cardinals, like the ones described above, in recent years some attention has also been dedicated to puzzling semantic properties of numerals referring to numbers that are not positive integers, such as \textit{zero} (e.g., \citealt{bylinina_nouwen2018zero}) as well as fractions such as \textit{one third} (⅓) and decimals like \textit{two point five} ($2.5$) (e.g., \citealt{salmon1997}, \citetv{chapters/12}). A deeper understanding of the mechanism responsible for quantification over parts of entities might also shed light on more general issues of individuation discussed above. 
Furthermore, numerals can be modified by various modifiers including comparative modifiers such as \textit{more than} as well as superlative modifiers such as \textit{at least}. Though at first sight these two seem entirely synonymous, only the latter give rise to ignorance inferences (e.g., \citealt{krifka1999least, nouwen2010two}, \citetv{chapters/15}). To illustrate, consider the contrast in \REF{doc-wag:ex:ignorance} in a scenario where the speaker knows that a hexagon has exactly six sides.

\ea\label{doc-wag:ex:ignorance}
\ea[]{A hexagon has more than three sides.}
\ex[\#]{A hexagon has at least three sides. \hfill \citep[4, adapted]{nouwen2010two}}
\z
\z
% \ea\label{doc-wag:ex:ignorance} % \ea I have more than 2 children % \ex I have at least 3 children. \hfill \citep[]{} % \z % \z

\noindent Interestingly, in many languages across the world numerals cannot combine with nouns directly. For this purpose, a special category of \textsc{classifiers} is required, see \REF{doc-wag:ex:classifier} (e.g., \citealt{aikhenvald2000classifiers, bale_coon2014classifiers}). Classifiers sort nouns based on the type of their referents and provide means of the individuation thereof.

\ea \gll liǎng \minsp{*(} zhāng) zhuōzi\\ two {} \textsc{cl} table\\
\glt `two tables' \hfill \citep[Mandarin Chinese;][695]{bale_coon2014classifiers}\label{doc-wag:ex:classifier}
\z

\noindent A puzzling property of some classifier systems is their optionality (e.g., \citetv{chapters/16}). For instance, the classifier in \REF{doc-wag:ex:optional-classifier} can but need not be used, which raises questions with respect to its semantic contribution.

\ea \gll sa(-tangkai) bungo\\ one-\textsc{cl} flower\\
\glt `one flower' \hfill \citep[Minangkabau, Malayic;][190, adapted]{aikhenvald2000classifiers}\label{doc-wag:ex:optional-classifier}
\z

\noindent Though classifiers are a rather marginal category in Slavic, there is a small number thereof in languages such as Bulgarian and Russian (e.g., \citealt{cinque_krapova2007note, khrizman2016functional}). For instance, the Russian classifier \textit{čelovek} for counting persons can appear optionally in constructions like \REF{doc-wag:ex:classifier-russian}.
% the classifiers are a , e.g., Russian \textit{štuka} `item', \textit{čelovek} `person' and \textit{golova} `head' (e.g., \citealt{khrizman2016functional}) as well as Bulgarian \textit{duši} `persons' and \textit{broj} `item' (e.g., \citealt{cinque_krapova2007note}).

\ea \gll pjat' \minsp{(} čelovek) stroitelej\\ five {} \textsc{cl} builders\textsc{.gen}\\
\glt `five builders' \hfill \citep[Russian;][4, adapted]{khrizman2016functional}\label{doc-wag:ex:classifier-russian}
\z

\noindent Another grammatical device dedicated to encoding plurality is \textsc{conjunction}. Interestingly, coordinated phrases as well as other plurality-denoting expressions give rise to an ambiguity between the collective, the distributive and the cumulative interpretation (e.g., \citealt{scha1981distributive, link1983logical, beck_sauerland2000cumulation, landman2000events}, \citetv{chapters/10}, \citetv{chapters/09}). For instance, \REF{doc-wag:ex:conjunction} on the collective reading is true if John and Bill together gave one flower to Mary, Sue, Ann and Jane as a group. On the distributive reading, John gave a flower to the girls and so did Bill. Finally, the cumulative scenario could look like this: John gave a flower to Mary and Ann, whereas Bill gave a flower to Sue and Jane.

\ea John and Bill gave a flower to Mary, Sue, Ann and Jane. 
\\ \hfill \citep[362]{beck_sauerland2000cumulation}\label{doc-wag:ex:conjunction} \z \noindent In this respect Slavic languages have proved to be a valuable source of data since they grammaticalized a special category of collective numerals, which rule out the distributive reading (e.g., \citealt{docekal2012atoms, wagiel2015sums}). For instance, while \REF{doc-wag:ex:czech-numeral} receives both the collective and the distributive interpretation, \REF{doc-wag:ex:czech-collective} allows only for the collective reading, i.e., the total of written letters is one. \ea \ea \gll Tři chlapci napsali dopis.\\ three boys wrote\textsc{.pl} letter\textsc{.acc}\\ \glt `Three boys wrote a letter.'\label{doc-wag:ex:czech-numeral} % \hfill $\checkmark$\textsc{coll}, $\checkmark$\textsc{distr} \ex \gll Troj-ice chlapců napsala dopis.\\ three-\textsc{coll.f} boys\textsc{.gen} wrote\textsc{.sg.f} letter\textsc{.acc}\\ %\hfill $\checkmark$\textsc{coll}, \#\textsc{distr} \glt `A group of three boys wrote a letter.' \\ \hfill \citep[Czech;][113, adapted]{docekal2012atoms}\label{doc-wag:ex:czech-collective} \z \z \noindent So far, we have discussed various ways in which the cognitive distinction between `one' and `more than one' is expressed by nouns and their modifiers. However, the expression of number is by no means restricted to the nominal domain. Many languages display the category of verbal number often termed \textsc{pluractionality} (e.g., \citealt[Ch.~13]{lasersohn1995plurality}). This grammatical device indicates that the action designated by the verb was performed more than once or that there is more than one participant involved in that action. For instance, the contrast in \REF{doc-wag:ex:pluractionality} shows that the semantic contribution of the pluractional marker, realized here as \textit{tu}, is that the agent and the theme were involved in a plurality of pushing events. \ea\label{doc-wag:ex:pluractionality} \ea \gll ʔiʃa-ʔ ʔinanta-siʔ ʔi=tuʛʛuur-ay\\ he-\textsc{nom} girl-\textsc{def} \textsc{3}=push-\textsc{pfv}\\ \glt `He pushed the girl.' \ex \gll ʔiʃa-ʔ ʔinanta-siʔ ʔi=tu-tuʛʛuur-ay\\ he-\textsc{nom} girl-\textsc{def} \textsc{3}=\textsc{plu}-push-\textsc{pfv}\\ \glt `He pushed the girl more than once.' \\ \hfill \citep[Konso, Cushitic;][adapted]{orkaydo2013category} \z \z \noindent Verbal number is also related to \textsc{aspect}, which expresses how an event or a state denoted by the verb extends over time. Since Slavic languages are renowned for their rich aspectual systems, they have attracted a lot of attention in this area (e.g., \citealt{filip1999aspect, borik2006aspect}). For instance, morphologically marked iterative forms of verbs in West Slavic express repetitive events, as illustrated in \REF{doc-wag:ex:iterative}. \ea \gll Irenka \minsp{(} często) chadz-a-ła do biblioteki.\\ Irenka {} often walk-\textsc{iter}-\textsc{pst} to library\textsc{.gen}\\ \glt `Irenka often walked to the library.' \hfill \citep[Polish;][469, adapted]{pinon1997verbs}\label{doc-wag:ex:iterative} \z \noindent Moreover, it is known that the grammatical number of the noun phrase interacts non-trivially with the telicity of the entire verb phrase (e.g., \citealt{verkuyl1972compositional, krifka1998origins, de-swart2006aspectual}, \citetv{chapters/01}). 
While in sentences with a singular indefinite object the predicate gets a telic interpretation, see \REF{doc-wag:ex:telicity-singular}, its counterpart with a plural indefinite object is atelic, see \REF{doc-wag:ex:telicity-plural}.\footnote{Notice, however, that not all predicates behave like this, e.g., \textit{find} and \textit{kill} do not.} \ea\label{doc-wag:ex:telicity} \ea[\#]{Koos and Robby ate a sandwich for hours.}\label{doc-wag:ex:telicity-singular} \ex[]{Koos and Robby ate sandwiches for hours. \hfill \citep[49--50]{verkuyl1972compositional}}\label{doc-wag:ex:telicity-plural} \z \z \noindent The discussion of various grammatical and lexical devices dedicated to expressing the cognitive notion of number presented above by no means exhausts the potential of natural language. There are also various complex numerical expressions such as \textit{two-fold} and \textit{double} (e.g., \citealt{wagiel2018subatomic}), frequency adjectives such as \textit{occasional} and \textit{frequent} (e.g., \citealt{gehrke_mcnally2015distributional}), quantificational adverbials such as \textit{two times} (e.g., \citealt[Ch.~11]{landman2004indefinites}, \citealt{docekal_wagiel2018event}) and \textit{often} (e.g., \citealt{doetjes2007adverbs}) and many more. Nonetheless, we believe that this short presentation gives an overall idea of how elusive and multi-layered the relationship between number sense and grammar is. In the next section, we will briefly discuss various linguistic approaches that attempt to shed more light on the relationship in question. \section{Approaches to number}\label{doc-wag:sec:approaches-to-number} The phenomena described above have puzzled linguists, philosophers and psychologists for a long time. In this section, we briefly introduce three main research traditions that attempt at explaining the relationship between number and grammar. In the last thirty years, formal linguistics has been heavily influenced by studies addressing the vexing questions concerning the proper treatment of grammatical number, conjunction, numerals, the mass/count distinction and a number of other related topics that can be vaguely summarized under the label \textsc{theories of plurality}. The usual starting point is referenced as \citet{link1983logical}, but of course, there are many influential pre-runners such as \citet{bennett1979mass}, \citet{ter_meulen1980substances}, and \citet{scha1981distributive}. If we focus on the last three decades of the research on pluralities, we can identify several central frameworks which address the issues in question and offer heuristically intriguing paths to follow. At the end of the previous century, there appeared first proposals of the formalization of various interpretations of plurality-denoting noun phrases. Since then the study of number and plurality has become one of the central topics in linguistics. The theories of plurality proposed so far differ in many respects. While some are more semantically oriented and develop models grounded in lattice-theory (e.g., \citealt{krifka1989nominal, landman1989groupsi, landman2000events, champollion2017parts}), others take a more pragmatic stance and base their formalizations on sets (e.g., \citealt{schwarzschild1996pluralities, winter2001flexibility}). 
Furthermore, after the seminal work of \citet{link1983logical} the mainstream research has agreed upon a more parsimonious approach to ontological domains, though authors diverge in the way they formalize the cognitive distinction between objects and substances (see, e.g., \citealt{krifka1989nominal, chierchia1998plurality, chierchia2010mass, rothstein2010counting, landman2011count, landman2016iceberg}). Moreover, already in the early years of semantic research the notion of plurality was extended to the domain of eventualities (e.g., \citealt{bach1986algebra}) and then expanded to even more abstract categories. Another significant strand of the research pursued in formal theories of plurality focuses on the proper treatment of numerals and classifiers (e.g., \citealt{krifka1995common,krifka1999least, landman2004indefinites, ionin_matushansky2006composition, ionin_matushansky2018cardinals, bale_gagnon_khanjian2011crosslinguistic, bale_coon2014classifiers, rothstein2017semantics}). Finally, a growing body of literature concerns bounded and unbounded interpretations of numerals and the semantic contribution of numeral modifiers (e.g., \citealt{geurts2006take, nouwen2010two, kennedy2015fregean}). Independently of the research pursued in formal linguistics, the distribution and grammar of number and numerals have received a lot of attention in the typological literature (e.g., \citealt{corbett1978universals, corbett2000number, greenberg1978generalizations, hurford1987language, hurford1998interaction}). Similarly, significant work has been carried out in the domain of classifiers (e.g., \citealt{dixon1982noun, aikhenvald2000classifiers}). What these broad cross-linguistic inquiries have revealed is that across languages there is a surprisingly rich diversity in meaning-form correspondences related to number and plurality. Yet, the exact nature of these correspondences remains unclear and the discovered variation often poses a challenge for the theoretical work described above. Finally, for a couple of decades the way in which plurality and numerosity are linguistically expressed and cognitively processed has been a topic of interest for psycholinguists and cognitive scientists. This strand of research investigates experimentally different ways in which speakers refer to quantities in natural language. The key issues relate to countability, pluralization, quantity comparison and the mental representation of number magnitude (see, e.g., \citealt{henik1982three, shipley_shepperson1990countable, dehaene1993mental, barner_snedeker2005quantity, melgoza_pogue_barner2008broken}). Another important topic concerns the nature of the lexical representations of quantifiers alongside the psychological mechanisms involved in their interpretation (e.g., \citealt{pietroski2009meaning, lidz_et-al2011interface}). Finally, acquisition studies have sought to understand how children acquire the capacity to perceive, comprehend and use those parts of language that are dedicated to expressing quantity (e.g., \citealt{noveck2001children, papafragou2004children}). Despite intriguing experimental results, it is often still unclear how to account for the psycholinguistic findings in formal models. Though all of these traditions are very insightful and have produced significant results, so far to a great extent they seem to be developing independently, and thus many important more general issues related to number and plurality remain elusive. 
We feel it is time to attempt to shed more light on the topic by proposing a monograph whose aim is to combine different empirical, methodological and theoretical perspectives. We hope that as a result the field will gain a better understanding of the relationship between the cognitive notion of number and different ways it is reflected in grammar. The research pursued in the course of the last decade proves that focusing on Slavic is a good place to start (see, e.g., \citealt{docekal2012atoms}, \citealt{wagiel2015sums}, \citealt{matushansky2015}, \citealt{khrizman2016functional}, \citealt{arsenijevic2017gender}). \section{The contribution of this book}\label{doc-wag:sec:the-contribution-of-this-book} This monograph consists of four parts covering coherent topics within the study of number in natural language: (I)~\textit{Plurality, number and countability}, (II)~\textit{Collectivity, distributivity and cumulativity}, (III)~\textit{Numerals and classifiers} and (IV)~\textit{Other quantifiers}. Each part includes 3--6 chapters investigating different aspects of the main subject. In sum, the book consists of 19 chapters (including this introduction) related to each other by virtue of the general topic as well as formal linguistic frameworks adopted as their background. While being part of a broader whole, each chapter focuses on a particular problem from a different perspective, be it formal morphology, syntax or semantics, linguistic typology, experimental investigation or a combination of these. Concerning the empirical coverage, 11 out of the total of 19 chapters focus on Slavic data, often in comparison with other languages. The remaining 8 contributions either explore more general theoretical issues or investigate relevant linguistic phenomena in non-Slavic languages, which could also shed new light on the research on number and plurality in Slavic. The first part, \textit{Plurality, number and countability}, is dedicated to the study of grammatical number and its correspondence to the semantic notion of plurality including the mass/count distinction. Empirically, it covers Slavic as well as Germanic, Turkic, Afro-Asiatic and Niger-Congo languages. The contribution by Piotr Gulgowski \& Joanna Błaszczak opens the volume by investigating experimentally the conceptual representation of grammatical and lexical number. This is pursued from the perspective of the perceptual processing of singular, plural and collective nouns in Polish. Subsequently, Scott Grimm, Ellise Moon and Adam Richman argue for a more fine-grained theory of countability by investigating strongly non-countable nouns in English such as \textit{fatherhood} and \textit{eyesight}. Based on the evidence from an extensive corpus search carried out on the COCA, they present a challenge for current approaches to the mass/count distinction, pointing to the need for a more general theory. Wiktor Pskit investigates (primarily) syntactic properties of English and Polish reduplicated constructions such as \textit{goal after goal}. A Slavic perspective is insightful since it allows the correlation of grammatical aspect with the pluractional interpretation of the expressions in question. Dorota Klimek-Jankowska \& Joanna Błaszczak relate plurality in the domain of objects and events. The experiment discussed in their chapter brings evidence in favor of the underspecification approach to the imperfective morphological aspect in Slavic. Suzana Fong explores the syntax of plural marking by examining bare nouns in Wolof. 
Her results suggest that the number interpretation of such nominals arises as a result of syntactic structures of a different size. Finally, Radek Šimík \& Christoph Demian examine the correlation in Polish and German between uniqueness and maximality on the one hand, and grammatical number on the other. Based on a production experiment, they argue that Polish word order alternations are not semantic correlates of German articles. The second part, \textit{Collectivity, distributivity and cumulativity}, brings together contributions investigating distributive and non-distributive, i.e., cumulative and collective, interpretations of different types of nominals from a broad cross-lin\-guistic perspective. Marcin Wągiel investigates the morpho-semantics of two different types of Slavic collective nouns arguing that the manner in which parts are related to the whole is often grammaticalized. The discussed data call for a mereoto\-pological approach under which spatial collectives are interpreted as properties of spatial clusters, whereas social collectives are treated as properties of social clusters. Magdalena Roszkowski provides novel evidence from Polish concerning non-distributive interpretations of (allegedly) obligatorily distributive conjunction particles. The data are challenging for current theories of distributivity and demonstrate how careful exploration of Slavic data can help us to fine-tune the theories of plurality. Nina Haslinger, Eva Rosina, Magdalena Rosz\-kow\-ski, Viola Schmitt \& Valerie Wurm test the cross-linguistic predictions of different theories of cumulativity with respect to morphological marking. Based on a typological sample covering 22 languages from 7 language families (including Slavic), they conclude that no obligatory markers for cumulative readings were attested. Finally, Nina Haslinger \& Viola Schmitt explore contextual restrictions on intentional identity. Their research tackles an intriguing question, namely when two intensions are treated as distinct in natural language, by examining evidence from cumulative belief sentences. The third part, \textit{Numerals and classifiers}, explores theoretical challenges related to the categories in question and discusses data from a wide variety of languages including Slavic and Germanic as well as Hungarian and obligatory classifier languages such as Mandarin Chinese and Japanese. Andreas Haida \& Tue Trinh open this part of the book by convincingly showing that traditional theories of numeral denotations break down once we move beyond the usual examples including cardinals. They propose a more inclusive theory of numerals that could also account for decimals like \textit{two point five} (2.5) by postulating a mereological subpart counting component. Heidi Klockmann investigates the syntactic status of base numerals in Polish and English. Her analysis provides an account for different types of numeral bases as well as insights concerning language change in the domain of numerals. On the other hand, Yuta Tatsumi provides a syntactic analysis of complex cardinals by building on parallels between multiplicands and numeral classifiers in a number of languages (including Slavic). The data discussed pose a challenge for mainstream theories of complex numerals while the developed analysis proposes a unified account for numeral constructions in both classifier and non-classifier languages. Flóra Lili Donáti \& Yasutada Sudo explore the problem of defining alternatives for modified numerals from a theoretical perspective. 
Their account for the unacceptability of sentences with superlative numeral modifiers accompanied with scalar particles such as \textit{even} brings a novel piece of evidence concerning the nature of such alternatives and provides insight into the strength of the additivity presupposition. Finally, Brigitta R. Schvarcz \& Borbála Nemes investigate sortal individuating classifiers in Hungarian and their relationship with plurality and kind denotation. Their findings support analyses postulating that nouns are born as kind-denoting expressions and then can undergo a shift to predicates. As already indicated by the title \textit{Other quantifiers}, the last part of the book focuses on other types of quantifying expressions. Barbara Tomaszewicz-Özakın discusses how the verification procedure of an agent parsing sentences containing quantifiers is directly determined by the particular formal properties of the respective quantifiers. The findings of an eye-tracking experiment on four Polish quantifiers extend the results of previous behavioral studies on the topic. Katalin É. Kiss, Lilla Pintér \& Tamás Zétényi present new evidence stemming from an acquisition study on Hungarian children's grasp of an existential plural determiner corresponding to English \textit{some}. The reported results of their experiments seem to corroborate previous studies suggesting that at least some pragmatic interpretative resources are acquired later in the course of language acquisition. Finally, Mina Giannoula brings some intriguing data concerning a previously observed fact that in some languages \textit{much} behaves in certain contexts as a weak negative polarity item. Based on a grammaticalized distinction in Greek, she argues that one of the two Greek equivalents of \textit{much} behaves like a strong negative polarity item in the sense of veridicality-based approaches. % The book concludes with the summary of novel observations and generalizations presented in the monograph. We will comment on the puzzles tackled as well as point out new ones, which arise as a result of the reported investigations. We believe that the broad multi-dimensional empirical and methodological perspective of this collective monograph will be of interest to researchers focusing on how certain cognitive distinctions concerning number and related issues are represented in grammar, be it linguists, philosophers or cognitive psychologists. The reader will find data not only from Slavic languages, which constitute the main empirical focus of the book, but also from a number of typologically and genetically diverse languages including, e.g., English, German, Spanish, Greek, Japanese, Mandarin Chinese, Hungarian, Turkish as well as Wolof. Thus, we believe the book will be valuable not only to linguists working on Slavic, but also to those interested in broader cross-linguistic research and typology. 
\section*{Abbreviations} \begin{tabularx}{.5\textwidth}{@{}lX@{}} \textsc{1}&{first person}\\ \textsc{3}&{third person}\\ \textsc{acc}&{accusative case}\\ \textsc{cl}&{classifier}\\ \textsc{coll}&{collective marker}\\ \textsc{def}&{definite marker}\\ \textsc{du}&{dual number}\\ \textsc{f}&{feminine gender}\\ \textsc{gen}&{genitive case}\\ \textsc{gnrl}&{general number}\\ \textsc{ipfv}&{imperfective aspect}\\ \end{tabularx}% \begin{tabularx}{.5\textwidth}{@{}lX@{}} \textsc{iter}&{iterative aspect}\\ \textsc{m}&{masculine gender}\\ \textsc{nom}&{nominative case}\\ \textsc{pau}&{paucal number}\\ \textsc{pfv}&{perfective aspect}\\ \textsc{pl}&{plural number}\\ \textsc{plu}&{pluractional marker}\\ \textsc{prs}&{present tense}\\ \textsc{pst}&{past tense}\\ \textsc{red}&{reduplication}\\ \textsc{sg}&{singular number}\\ % \textsc{}&{}\\ \end{tabularx} \section*{Acknowledgements} We would like to sincerely thank Berit Gehrke and Radek Šimík for their help and support in the process of editing this book as well as for their comments on the form and content of this introduction (though of course the standard disclaimer applies). We gratefully acknowledge that the research was supported by a Czech Science Foundation (GAČR) grant to the Department of Linguistics and Baltic Languages at the Masaryk University in Brno (GA17-16111S). {\sloppy\printbibliography[heading=subbibliography,notkeyword=this]} \end{document}
{ "alphanum_fraction": 0.8007388869, "avg_line_length": 138.4653465347, "ext": "tex", "hexsha": "770473c9cddfab55dbf28334fc9c2385b080c531", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a2a67da0a567ae9fb05308ee952d943b8772f872", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "langsci/316", "max_forks_repo_path": "chapters/01.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a2a67da0a567ae9fb05308ee952d943b8772f872", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "langsci/316", "max_issues_repo_path": "chapters/01.tex", "max_line_length": 2085, "max_stars_count": null, "max_stars_repo_head_hexsha": "a2a67da0a567ae9fb05308ee952d943b8772f872", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "langsci/316", "max_stars_repo_path": "chapters/01.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10595, "size": 41955 }
\documentclass{article} \input{include} \usepackage{bytefield} \usepackage{color} \usepackage{fullpage} % \usepackage[margin=12mm]{geometry} \usepackage{hyperref} \usepackage[underline=true]{pgf-umlsd} \usetikzlibrary{calc} \newcommand{\entid}{{\rm Ent}_{\rm ID}} \begin{document} \title{Entanglement Generation Protocol: Notes on Link+Physical Layer} \author{Stephanie, Axel, Matthew, Leon, Erwin, Ronald} \maketitle The objective of this document is to define the link layer in quantum networks connecting quantum processing nodes, and to propose two concrete link layer protocols based on an existing implementation of the physical layer with certain properties. In analogy to classical networks, the objective of the link layer will be to enable communication between two nodes $A$ and $B$ connected by a \emph{link} on the same network. Here, enabling communication corresponds to producing entanglement between $A$ and $B$, and we will hence refer to such protocols as Entanglement Generation Protocols (EGP). We propose the desired service, interface to the higher layer, as well as two possible EGPs that are closely related design alternatives. We first discuss an EGP between two nodes $A$ and $B$, and discuss extensions to a proposed architecture connecting many nodes at the end. To fit the link layer EGP into the future envisioned network stack we briefly sketch the stack framework here, going from higher to lower layer: \begin{description} \item[QTP - Qubit transport protocol] (Transport Layer) Responsible for the end to end transmission of qubits. \item[EMP - Entanglement Management Protocol] (Network Layer) Responsible for the generation of entanglement between two nodes that are not directly connected by a link, i.e. not on the same local network. \item[EGP - Entanglement Generation Protocol] (Link Layer) Responsible for the generation of entanglement between two nodes connected by a direct link. \end{description} \section{Entanglement Generation Protocols} Let us first describe the interface, service, as well as performance criteria of entanglement generation protocols. \subsection{Higher layer to EGP} An EGP supports a single command from the higher layer, namely a request to produce entanglement, which we call a CREATE command. This command includes some desired properties of the entanglement such as for example a minimum fidelity, and a maximum waiting time. In an actual physical implementation, there is a tradeoff between these parameters. More time, for example, may allow the underlying implementation to use entanglement distillation to produce higher quality pairs. \begin{description} \item[CREATE] Produce entanglement with a node on the same network (i.e. connected by a link). Arguments supplied are:\\ \noindent \begin{tabular}{ll} Partner ID & ID of the node to generate entanglement with. \\ Number $k$ & Number of pairs we want to create.\\ $F_{\min}$ & Minimum acceptable fidelity (with high confidence). \\ $t_{\max}$ & Maximum acceptable waiting time before request is completed. \\ Purpose ID & Identifying the purpose or application at this node (optional, default 0). \\ Priority & Manual setting of a priority for entanglement production (optional).\\ create ID & Sequence number identifying this CREATE command. \end{tabular} \end{description} \subsection{EGP to higher layer} Following the reception of the CREATE command, several actions of the EGP are possible. Let us start with the positive outcome, and then consider possible errors. 
\begin{description} \item[OK] Entangled pair has successfully been produced deterministically (heralded). One message per pair created, delivered immediately (best effort) following pair creation. With high confidence, the minimum acceptable fidelity $F_{\min}$ has been met, and the entanglement has been generated within the specified time frame $t_{\max}$. Information about the entanglement generation is provided, including an entanglement identifier. This identifier is required to be globally unique, and agreed upon by $A$ and $B$. That is, $A$ and $B$ can locally use this entanglement identifier to determine which of their qubits is entangled with the remote node, and also which qubit belongs to which entangled pair. Entanglement identifiers are meant to be shared in the network by higher layer protocols and carry meaning beyond the nodes $A$ and $B$. An entanglement identifier ($\entid$) consists of:\\ \noindent \begin{tabular}{ll} (Node $A$ ID, Node $B$ ID) & IDs of the two nodes between which this entanglement is shared.\\ seqID & Sequence number. Unique (up to wrap around) between $A$ and $B$, and \\ & globally unique when combined with the node IDs.\\ Goodness & Heuristic estimate for the fidelity of the generated pair.\\ $t_{Goodness}$ & Time when this goodness was established (in EGP, usually \\ & the same as generation time).\\ $t_{Create}$ & Time the pair was produced.\\ \end{tabular}\\ \smallskip \noindent In addition, the OK message also includes the following local information. We remark that Qubit IDs are exclusively local information (akin to the memory address in a computer) and not in general shared between network nodes.\\ \noindent \begin{tabular}{ll} Qubit ID & Logical Qubit ID where the entangled pair can locally be found. \end{tabular} \end{description} Entanglement generation may fail for a wide range of reasons, some of which form an immediate error. It may also be that the entanglement later expires or is discarded, of which the EGP will inform the higher layer. Let us start by listing the immediate failure modes, where in all such cases the create ID will be included, allowing the higher layer to identify which request has failed.\\ \begin{description} \item[ERR\_UNSUPP] Operation not supported. For example, creation of entanglement with the specified minimum fidelity is unattainable, or unattainable within the given time frame, even if the node is not loaded. \item[ERR\_NOTIME] Cannot meet the desired fidelity demand within the given time frame due to high load. \item[ERR\_NORES] No resources (such as qubits to store the entanglement) available. \item[ERR\_TIMEOUT] Failure to produce entanglement within the specified time frame. \item[ERR\_OTHER] Failure for unspecified reasons, such as hardware failures. \end{description} In addition, the following failure mode can occur later when an entangled pair is expired. The primary use case of this will be to deal with extremely improbable failures in which recognition of the failure only becomes available after the higher layer has already received an OK message. This allows for a tradeoff between speed and certainty in recognizing failure modes. Since entanglement is very short lived, increased certainty can, if desired, be sacrificed for speed. \begin{description} \item[EXPIRE] Expire Qubit ID. Any entanglement associated with Qubit ID has become unavailable. 
\end{description} \subsubsection{Questions} \begin{itemize} \item The term ``High confidence'' is not defined and we need to decide what we mean by that, and also if this is some parameter where/by whom it is determined. \end{itemize} \subsection{Performance metrics} Apart from correctly fulfilling requests, a variety of performance metrics can be considered for EGPs. Not all of these can be simultaneously optimized, but occasionally impose tradeoffs. We hereby also draw a distinction between performance metrics of interest to a specific ``user'' requesting entanglement from the EGP, and the overall performance of the network. Evidently, for all metrics below, average, variance, and worst case behaviour are of interest. Once more data is available on how quantum networks are used in practice, one may also consider ``typical'' values for these metrics. Let us first consider ``user'' centric metrics, measuring the experience of one individual user rather than a behaviour of the network as a whole. We remark that nevertheless these metrics are a consequence of the total usage of the network. \begin{description} \item[Fidelity] Quality of the entanglement produced. By design the fidelity has to exceed the minimum requested fidelity $F_{\min}$. \item[Latency] Time between submission of a CREATE request, and an OK response when successful. By design this time may not exceed $t_{\max}$. \end{description} In addition, we can consider measures defined by the behaviour of the network when dealing with a large number of requests. \begin{description} \item[Throughput] Number of pairs/s. Refined variants of throughput to be measured include: instantaneous throughput and sustained throughput. \item[Fairness] Difference in performance metrics between requests originating at $A$ and $B$. \item[Availability] Availability is a concern here if a network node requires resetting and two nodes require resynchronization at certain time intervals. \end{description} We remark that measured values like throughput evidently depend on the request behaviour, including what we will call the \emph{request ratio}, i.e. the number of pairs requested/number of requests total. \section{EGPs based on midpoint heralding protocols (MHPs)} Before proposing a specific EGP, let us first consider a general class of EGPs that are built on top of a system supporting heralded entanglement generation at the physical layer. More precisely, we will consider physical layer protocols that produce entanglement between two nodes $A$ and $B$, by means of a heralding station $M$ between them, in the following configuration: \smallskip \begin{sequencediagram} \newinst{a}{Node $A$} \newinst[3]{mid}{Heralding Station $M$} \newinst[3]{b}{Node $B$} \end{sequencediagram} Several variations of such schemes exist, such as the single-click, the Barrett-Kok (BK) protocol or even a memory assisted protocol in which entanglement distillation is performed. For simplicity, we will assume a single-click/BK type scheme below, but the following also applies to memory assisted schemes with minor modifications. Evidently, the choice of the physical layer entanglement generation scheme affects the overall performance metrics stated above, as well as the possibility to service requests for entanglement above a specific fidelity $F_{\min}$. For example, certain fidelities may only be attainable by performing entanglement distillation. 
Nevertheless, we may cast such physical layer generation schemes in the following abstract form, upon which our EGPs will be built. We remark that MHPs in our network stack make no high level decisions on when to produce entanglement, scheduling, etc. Also the decision on which MHP to use - e.g. whether to perform single click, or distill - or even what parameters to set in the single click protocol (such as the bright state population $\alpha$) is not part of the MHP, but such decisions are left to the link layer EGP, which will use the appropriate MHP (parameters) to obtain the desired service from the physical layer. Let us thus first give a very abstract description of such protocols, before specifying some assumptions and design alternatives, and stating desired requirements. Here, we have subdivided each node into the different units, EGP and MHP. \smallskip \begin{sequencediagram} \newinst{mema}{:EGP $A$} \newinst[1]{a}{MHP $A$} \newinst[3]{mid}{Heralding Station $M$} \newinst[3]{b}{MHP $B$} \newinst[1]{memb}{:EGP $B$} \begin{call}{a}{generate?}{mema}{\shortstack{yes/no\\info}} \end{call} \prelevel \prelevel \begin{call}{b}{generate?}{memb}{\shortstack{yes/no\\info}} \end{call} \mess[1]{a}{{$m_{AM} = (req_{AM}, pass_{AM})$, $q$}}{mid} \prelevel \prelevel \mess[1]{b}{{$m_{BM} = (req_{BM}, pass_{BM})$, $q$}}{mid} \mess[1]{mid}{{$r_A = (resp_{MA}, pass_{BM})$}}{a} \prelevel \prelevel \mess[1]{mid}{{$r_B = (resp_{MB}, pass_{AM})$}}{b} \prelevel \mess{a}{{$f(r_A)$}}{mema} \prelevel \mess{b}{{$f(r_B)$}}{memb} \end{sequencediagram} As a simple example, consider the single click protocol with some fixed parameters. Here, $m_{AM},m_{BM}$ are empty, $resp_{MA}, resp_{MB} \in \{OK, FAIL\}$. Some assumptions and choices were made in the above description: \begin{description} \item[Assumptions] \begin{enumerate} \item An (essentially) instantaneous association between the classical control messages $m$ below, and the entanglement generation. For this reason, we will write a classical message transmission as simply $m$, and $q$ for arbitrary quantum signal $q$, and assume simultaneous arrival. This could be realized by forming a timing association between the classical and quantum signals. To make clear how we will use this abstract description later, we will always take $m = (req, pass)$ where $req$ is request information only for the midpoint and later protocol specific, and $pass$ is something that will by default always be passed onto the other side (also protocol specific). \item The midpoint will provide a response $resp$, which includes at least the following information to both $A$ and $B$: (1) Success or failure of entanglement generation. (2) In case different types of states can be produced, the identity of the state made. (3) A sequence number or equivalent information that allows $A$ and $B$ to determine the ordering of generated pairs. \end{enumerate} \end{description} \subsection{Design considerations} Before considering possible protocols, let us abstract some design considerations resulting from the implementation for the non-quantum reader: \begin{description} \item[Basic facts] Nodes $A$ and $B$ in the considered implementation are few-qubit processors capable of storing and manipulating qubits. Entanglement can be produced probabilistically between one of these qubits, and a travelling photon ($q$ in the above cartoon). When photons from both sides arrive simultaneously at the heralding station, a measurement is performed. 
This measurement will produce $2$ possible entangled states with a certain probability $p_{\rm succ}$, which we will call states $1$ and $2$ below. This constitutes successful generation. The two types of states can be converted into each other by applying local gates at either $A$ (or $B$) and thus both states can be considered equally valuable. The measurement at the heralding station can also fail, in which case another attempt must be made. Typical values are $p_{\rm succ} = 1/2$. The information about success - incl. whether we have state $1$ or $2$ - or failure is only available at the heralding station, until it is transmitted to $A$ and $B$. Not all qubits are created equal, in the sense that not all of them can be directly entangled with a traveling photon. We will refer to those that can as communication qubits, and the others as storage qubits. One can transfer the state of a communication qubit to a storage qubit at any time (where we remark that this operation evidently also costs time and introduces noise). In NV in diamond, there is one communication qubit (electron spin) and several storage qubits (called nuclear spins or carbon spins). \item[Triggering generation] Generation of entanglement requires a time synchronized trigger at both $A$ and $B$ that will result in the probabilistic creation of entanglement between the electron spin, and the photon ($q$ in the cartoon above) traveling to the midpoint. If the trigger only occurs at one node, no entanglement can be made. Agreement to produce entanglement at specific time steps thus already has to be realized, requiring some communication ahead of time. Here, we will assume that all low level timing and other synchronization is implemented in the MHP, which is then able to produce entanglement at certain ``pre-agreed'', i.e. synchronized, time instances without additional communication between $A$ and $B$. As such, the EGP only performs higher level processing. This motivates the choice above that the MHP will poll the EGP, e.g. by reading a specific state variable, on whether entanglement generation is required at a specific synchronized time step. This is in contrast to the EGP sending a signal to the MHP to produce a pair. Since the EGP does not deal with timing synchronization, it cannot know when the actual physical trigger should be produced and hence the MHP could then only save the request until pair production is timely. We remark that this would mean that the MHP would need to keep state of how many outstanding triggers there are, which is not desirable from a design point of view where if $t_{\max}$ has elapsed the EGP may no longer want generation by the MHP. Consequently, we here choose for the MHP to poll the EGP, which does have state on desired generation. \item[Noise due to generation] One may wonder why entanglement generation is not enabled continuously. That is, attempts to produce entanglement are made all the time, and then the entanglement is either discarded or directly available to the EGP. Two reasons motivated by the physical implementation considered (NV in diamond) make this undesirable: First, triggering an attempt to produce entanglement causes additional noise on the storage qubits. This means that the storage is significantly degraded by trying to produce further entanglement. As such, it is desirable that triggers to attempt generation are only made whenever entanglement is indeed wanted. Second, there are only a small number of storage qubits (presently, $4$). 
If we produce entanglement quickly, by for example triggering an attempt and then immediately transferring the state of the communication qubit to the storage qubit and then proceeding with the next attempt before having heard back from the heralding station, then several storage qubits are needed to support this, making the memory unavailable for other purposes. For these reasons, the MHP will inquire with the EGP whether more entanglement is desired, and only then commence production. \item[Scheduling and flow control] The EGP will be responsible for all higher level logic, including scheduling requests. A form of scheduling is flow control, which controls the speed of entanglement generation, which hence also falls under higher level logic (see questions in EGP section below). \item[Memory allocation] Decisions on which qubits to use for what purpose lie in the domain of higher level logic, where more information is available on - for example - the number of outstanding requests, allowing scheduling decisions including the scheduling of how memory is best used. The MHP will hence also not perform memory allocation, i.e., determine which communication qubits or storage qubits to use. This impacts the types of information included in ``info'' in the protocol above, which we later take to be of the form (Physical ID Communication Qubit, Physical ID Storage Qubit, Parameters). \end{description} \subsection{Sending classical messages}\label{sec:classicalMessages} Above we assumed that there exists a means to transmit classical data between $A$, $B$ and $M$. How this is realized is not the objective of this document, and it could be achieved either by a dedicated fiber (possibly using two wavelengths for bidirectional communication), or interspersed with quantum signals on the same fiber. Here, only standard quantities are of interest - and the system will need to be implemented to ensure a quality that yields good performance in our link layer protocol. We hence for now consider merely standard numbers: \begin{itemize} \item Classical channels are bidirectional, meaning data could in principle be sent in both directions at the same time (and, as a relevant consequence, messages can cross and are not ordered in both directions) \item Likelihood of losses: $p_{\rm loss}$ probability of loss (e.g. following standard fiber loss plus electronics if applicable). \item Likelihood of errors: $p_{\rm err}$ probability of error - where we remark that as in other classical communication burst errors are probably dominant. \item Standard delays of interest: propagation delay (over the fiber), transmission delay (incl. delays of the electronics in putting the packets on the fiber), and processing delay, if known. We will assume that given the highly sophisticated electronics and the fact that the rate of classical communication is low due to the relatively low repetition rate of entanglement generation attempts, the transmission and processing delays are essentially negligible. \end{itemize} \subsubsection{Enhanced situation} Two standard methods exist to enhance this situation to the following, whose exact form and choice depend on the parameters above. We may also consider running an existing link layer protocol, such as Ethernet over fiber - we do however note that authentication is highly desirable due to control messages leading to local spin photon entanglement generation, and hence otherwise allow unauthenticated messages to manipulate the state of the quantum processing node.
\begin{itemize} \item Error detection: This can be achieved using standard methods, where probably a simple CRC depending on the length of the headers is most appropriate. This will add a number of bits to the message headers below if employed. For example, for a standard CRC-32 as used in Ethernet, the CRC is computed over the message and stored in a $32$ bit header field. \item Message authentication: In this case, this refers to the fact that $A$ knows the messages originate with $B$ (and vice versa). Similarly, $M$ can authenticate messages if desired. Such authentication can be realized using a message authentication code (MAC) (see e.g.~\cite{UMAC}). These can be realized with varying levels of security. If $A$ and $B$ share a consumable key (such as for example generated by QKD), they can afford to use a one-time MAC, which - similar to a one-time pad - offers information-theoretic security. Such MACs, for example based on two-universal hashing, can in principle be fast (see e.g.~\cite{UMAC} needing however various levels of key), although it is a question whether they are fast enough to be useful in this context. We remark that also weaker forms of authentication are acceptable as they merely form a means of protection and would need to be broken in a very short time frame to have a practical impact. \end{itemize} \subsubsection{Enhanced situation} Based on the general shape of such protocols above, one can now consider a slight ``enhancement'' of a protocol of this form - like single-click - that makes explicit some (probably obvious) failure modes, and produces a total ordering of pairs that $A$ and $B$ agree upon, even if some messages may go missing. \bigskip \begin{bytefield}[bitwidth=1.1em]{32} \bitheader{0-31} \\ \begin{rightwordgroup}{To be filled later} \bitbox{32}{Rest of header} \end{rightwordgroup} \\ \bitbox{32}{Error detection CRC}\\ \bitbox{32}{Message authentication (MAC)} \end{bytefield} \section{Candidate EGPs} We will now consider three classes of link layer EGPs built on top of MHPs. Strictly speaking, we will consider two non-trivial implementations, and one rather ad hoc one with trivial features for comparison. The present objective is to implement these in simulation to: \begin{enumerate} \item Assess their relative performance - also with respect to each other. Performance metrics have been specified above. \item Assess the effect of specific design choices made in each of these protocols. \item Study the relevance of parameter demands, specifically also the required quality of the classical communication and its timing so it can be implemented. \end{enumerate} The main difference between the more sophisticated EGPs is where queuing and scheduling are done, demanding more or less power at the heralding station. \subsection{Node Centric: Distributed Queue} In the first scenario, the heralding station is essentially passive and performs no higher level functions. Before sketching the protocol, let us first outline its ingredients next to the MHP. \subsubsection{Priority Queues (PQ)} In principle, we may want to give priorities to certain requests. This will be accomplished by adding them to different types of queues. For simplicity, we will for now assume there is only one. \subsubsection{Distributed Queue Protocol (DQP)} The objective of the DQP is to obtain a shared - i.e. agreed upon - queue at both nodes. That is, both A and B hold a local queue, which is synchronized using the DQP. 
Specifically, any request to the EGP at node A or B to perform entanglement is placed into the local queues such that the following are satisfied: \begin{itemize} \item (Unique queue numbers) If a request is placed into the queue by either A or B, then it is assigned a unique queue number (modulo the maximum queue number). \item (Total agreed ordering) A and B agree on the ordering of their queue: If a request with queue number $j$ exists in the queue of $A$, then eventually (whp within the usual propagation delay between A and B) the item also exists in the queue of $B$ with number $j$ (and vice versa). \item (Fairness) If A (or B) is issuing requests continuously, then also the other node B (or A) will get to add items to the queue after A (or B) has added a maximum of queue\_windowsize many items. \item (Scheduling information) When items are added to the queue, they are marked ready if acks have been received (i.e. we can reasonably assume they are in both queues). In this case, the item receives an earliest execution time, corresponding to the expected propagation delay between A and B, decreasing the likelihood that A or B wants to produce before the other node is ready as well. No penalty other than reduced performance due to increased decoherence results if one of them goes too early. \end{itemize} Given we have only two nodes, the above can easily be realized by one node being the master controller of the queue marshalling access to the queue, and the other the slave controller. (see DistributedQueue for implementation) \subsubsection{Scheduler} The scheduler fulfills a number of important arbitrating functions: \begin{itemize} \item Determines deterministically which queue to serve next. In the case there is only one queue, the topmost item is returned if it is ready to be scheduled. \item Determines which parameters to use in the MHP depending on the number and type of outstanding requests. \item In cooperation with the QMM, determines storage qubits. \item Includes the flow controller below. \end{itemize} \subsubsection{Quantum Memory Management (QMM)} While not part of the actual protocol, the QMM can be asked to (smartly) allocate a fresh qubit and to convert logical qubit IDs to physical Qubit IDs and vice versa. \subsubsection{Flow Control} We will perform a form of flow control, that is we will speed up or slow down the production of entanglement. This will be implemented in the simulation for now as a stub for first testing, to be filled in later. Speeding up or slowing down is relevant if the local quantum memory is full, i.e., we can no longer produce entanglement since insufficiently many free qubits are available. To do this we remark that by a standard Hoeffding bound, A (or B) can likely produce entanglement except with probability $\eps$, if the number of qubits A (or B) has free $m_A$ (or $m_B$) satisfies \begin{align} n \leq \frac{m_A}{p_{\rm succ} + \sqrt{\frac{1}{2n} \ln\left(\frac{1}{\eps}\right)}}, \end{align} where $p_{\rm succ}$ is the probability of successful entanglement generation and $n$ is the number of qubits currently in flight (i.e. we do not yet know the outcome of entanglement generation). Below we will say that A (or analogously B) can likely produce entanglement if the above expression is satisfied.
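For concreteness, this check could be implemented along the following lines. This is only a minimal sketch in Python; the function name, parameter names and the default values are ours and do not refer to any existing implementation. It tests the equivalent condition $n\,(p_{\rm succ} + \sqrt{\ln(1/\eps)/(2n)}) \leq m_A$, obtained by multiplying both sides of the inequality above by the (positive) denominator.
\begin{verbatim}
import math

def can_likely_produce(n_in_flight, free_qubits, p_succ=0.5, eps=1e-3):
    # Returns True if, except with probability eps, the number of successes
    # among the n_in_flight outstanding attempts fits into free_qubits
    # storage qubits (the Hoeffding-bound condition stated above).
    if n_in_flight == 0:
        # Nothing is in flight yet; a single free qubit suffices to start.
        return free_qubits > 0
    deviation = math.sqrt(math.log(1.0 / eps) / (2 * n_in_flight))
    return n_in_flight * (p_succ + deviation) <= free_qubits
\end{verbatim}
The flow controller would evaluate such a check with its own free memory and, when asking whether the remote node is likely to receive, with the last memory size reported by the other side through the MHP messages.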
\subsubsection{Fidelity estimation unit} We are planning to perform error detection by including later an online fidelity estimation protocol that, together with already available information from the experiment, allows us to make a guess for the fidelity communicated to higher layers via the Goodness in the Entanglement ID. It may intersperse CREATE requests of its own to update its online fidelity estimate of the EGP. \subsubsection{Protocol Sketch} Let us now describe the local protocol running at each node, once a CREATE request is received. For clarity we will describe it for node $A$, but it applies to both analogously: \begin{enumerate} \item Ask the scheduler which queue the request should be added to. Call that $Q$. \item Add the request to the Distributed Queue $Q$ (i.e. run the DQP to get it added to the consistent local queue) \item Ask the scheduler whether there are ready items for which entanglement should be made? \item Ask the flow controller whether B is likely to receive? \item If the answer to both of these questions is yes: \begin{enumerate} \item Ask the scheduler to fetch a storage ID from the QMM \item Turn on the entanglement generation flag, with physical ID electron spin, storage ID the one just obtained from QMM, and parameters in accordance with the desired fidelity TBD by scheduler. \end{enumerate} \end{enumerate} Let us now describe how we will use the MHP; recall that it will be automatically executed at specific time steps and poll the EGP for the entanglement generation flag (or more accurately list of outstanding ``flags''). Here, we will assume that in addition to the flag y/n whether to produce entanglement, we will also supply the MHP with (1) the physical ID of the communication qubit (2) the physical ID of the storage qubit (3) any parameters such as the bright state population for entanglement generation (4) the current free memory size $m_A$. We specify the MHP by filling in the default messages given above. Again we will for simplicity do this only for Alice but the same holds for Bob. It is unlikely we want the acking here, but I'll include it anyway. \begin{enumerate} \item Initialize sequence numbers to 0. \item Initialize list to be acked LA. \item Use $req_{AM} = (Produce)$ and $pass_{AM} = (m_A, optional ack generation sequence numbers on LA)$. If no entanglement production is desired, set $req_{AM} = (Info)$ and use the same pass to update the remote node with the new memory size if any. \item Use $resp_{MA} = (r, MHP seq, info)$ with $r \in \{0,1,2\}$, where 0 denotes failure and 1 and 2 denote the creation of states one and two respectively. If $r \in \{1,2\}$, a unique and increasing sequence number MHP seq is chosen by the heralding station and sent to both A and B. The heralding station may send additional information (e.g. the moment of detection) to allow a more accurate fidelity guess. \item (Optional) Resend from previous round \item Choose $f$ to pass the following information up to the EGP: Current free memory of the remote node, $r$, and MHP seq. \item (optional) Save seq onto LA to be acknowledged as part of the next generation round. For those outstanding too long, also include that information in the message back to the EGP. \end{enumerate} Let us now specify what the EGP does once an event of generation is communicated back by the MHP: \begin{enumerate} \item If the sequence number is in the expected order, and $r \in \{1,2\}$, ask the scheduler what request to serve. 
Ask the fidelity estimation unit for an estimated fidelity based also on the info provided by the heralding station. If the request asks for many pairs, fulfill at least one request immediately. Send OK up and update the request and queue accordingly. \item If the sequence number is not in the expected order, we lost a heralding signal. EXPIRE the request that this would have belonged to. \item If we do acks in the MHP, then check for too long outstanding acks - retroactively EXPIRE those requests. \item Update our knowledge of the remote node's memory size, to make an assessment of whether it is likely to receive in the future. \end{enumerate} \subsection{Heralding Station Centric: Arbitrage by heralding station} \subsection{Happy go lucky} \end{document}
{ "alphanum_fraction": 0.777288407, "avg_line_length": 79.773955774, "ext": "tex", "hexsha": "eef5283b88f09419f0aca7fca96177597f0bfcff", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-07-26T15:54:12.000Z", "max_forks_repo_forks_event_min_datetime": "2019-07-26T15:54:12.000Z", "max_forks_repo_head_hexsha": "552f4b59d4deb5e838b21d569b5c4fd835fa1494", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "SoftwareQuTech/QLinkLayerSimulations", "max_forks_repo_path": "notes/linkLayer/linkLayerNotes-V2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "552f4b59d4deb5e838b21d569b5c4fd835fa1494", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "SoftwareQuTech/QLinkLayerSimulations", "max_issues_repo_path": "notes/linkLayer/linkLayerNotes-V2.tex", "max_line_length": 813, "max_stars_count": 8, "max_stars_repo_head_hexsha": "552f4b59d4deb5e838b21d569b5c4fd835fa1494", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "SoftwareQuTech/QLinkLayerSimulations", "max_stars_repo_path": "notes/linkLayer/linkLayerNotes-V2.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-17T11:20:44.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-26T15:22:17.000Z", "num_tokens": 7330, "size": 32468 }
\subsection{The Quadratic Formula and Completing the Square} The technique of \dfont{completing the square} allows us to solve quadratic equations and also to determine the center of a circle/ellipse or the vertex of a parabola. The main idea behind completing the square is to turn: $$ ax^2 + bx + c$$ into $$a(x - h)^2 + k.$$ One way to complete the square is to use the following formula: $$ax^2+bx+c=a\left(x+\frac{b}{2a}\right)^2-\frac{b^2}{4a}+c.$$ But this formula is a bit complicated, so some students prefer following the steps outlined in the next example. \\ \begin{example}{Completing the Square}{CompletingSquare} Solve $2x^2+12x-32=0$ by completing the square. \end{example} \begin{solution} In this instance, we will \ifont{not} divide by $2$ first (usually you would) in order to demonstrate what you should do when the `$a$' value is not $1$. \bigskip \begin{tabular}{rl} $2x^2+12x-32=0$ & Start with original equation.\\ \\ $2x^2+12x=32$ & Move the number over to the other side.\\ \\ $2(x^2+6x)=32$ & Factor out the $a$ from the $ax^2+bx$ expression.\\ \\ $6~~\to~~\frac{6}{2}=3~~\to~~3^2=\dfont{9}$ & Take the number in front of $x$, \\ & \dfont{divide by $2$}, \\ & then \dfont{square} it.\\ \\ $\ifont{2}(x^2+6x+\dfont{9})=32+\ifont{2}\cdot\dfont{9}$ & Add the result to both sides, \\ & taking $a=2$ into account.\\ \\ $2(x+3)^2=50$ & Factor the resulting perfect square trinomial.\\ \\ ~ & \ifont{You have now completed the square!}\\ \\ $(x+3)^2=25~~\to~~x=2 \mbox{ or } x=-8$ & To solve for $x$, simply divide by $a=2$ \\ & and take square roots.\\ \end{tabular} \end{solution} Suppose we want to solve for $x$ in the quadratic equation $ax^2+bx+c=0$, where $a\neq 0$. The solution(s) to this equation are given by the \dfont{quadratic formula}.\\ \begin{theorem}{The Quadratic Formula}{Quadratic Formula} \label{QuadForm} The solutions to $ax^2+bx+c=0$ (with $a\neq 0$) are $\ds{x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}}$. \end{theorem} \begin{proof} To prove the Quadratic Formula (Theorem \ref{QuadForm}) we use the technique of \ifont{completing the square}. The general technique involves taking an expression of the form $x^2+rx$ and trying to find a number we can add so that we end up with a perfect square (that is, $(x+n)^2$). It turns out if you add $(r/2)^2$ then you can factor it as a perfect square. For example, suppose we want to solve for $x$ in the equation $ax^2+bx+c=0$, where $a\neq 0$. Then we can move $c$ to the other side and divide by $a$ (remember, $a\neq 0$ so we can divide by it) to get $$x^2+\frac{b}{a}x=-\frac{c}{a}.$$ To write the left side as a perfect square we use what was mentioned previously. We have $r=(b/a)$ in this case, so we must add $(r/2)^2=(b/2a)^2$ to both sides $$x^2+\frac{b}{a}x+\left(\frac{b}{2a}\right)^2=-\frac{c}{a}+\left(\frac{b}{2a}\right)^2.$$ We know that the left side can be factored as a perfect square $$\left(x+\frac{b}{2a}\right)^2=-\frac{c}{a}+\left(\frac{b}{2a}\right)^2.$$ The right side simplifies by using the exponent rules and finding a common denominator $$\left(x+\frac{b}{2a}\right)^2=\frac{-4ac+b^2}{4a^2}.$$ Taking the square root we get $$x+\frac{b}{2a}=\pm\sqrt{\frac{-4ac+b^2}{4a^2}},$$ which can be rearranged as $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$ In essence, the quadratic formula is just completing the square. \end{proof}
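Indeed, as a quick check, applying the quadratic formula to the equation solved earlier by completing the square gives the same answers. For $2x^2+12x-32=0$ we have $a=2$, $b=12$ and $c=-32$, so $$x=\frac{-12\pm\sqrt{12^2-4(2)(-32)}}{2(2)}=\frac{-12\pm\sqrt{400}}{4}=\frac{-12\pm 20}{4},$$ that is, $x=2$ or $x=-8$, exactly the solutions obtained before.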
\section{Discussion}
\label{sec:discussion}

As remarked in the introduction, this thesis arises from a series of problems identified since 2010, when Imhotep was developed in response to the observed demand for adaptive user interface solutions for people with disabilities. Taking into account the technology available at the time and the approaches analysed, a solution was designed in which developers were given several tools for developing adaptive interfaces. Based on preprocessor primitives, developers were able to include pieces of source code in which the user interface configuration was completed with a user profile and a device profile. Those profiles were configured by the prospective users through a separate application.

Imhotep was well received by the scientific community: several publications were produced (as shown in Section~\ref{sec:publications}) and an award was given for the AssistedCity use case developed with it. However, in spite of the success of Imhotep, improvements in mobile device technology have brought new challenges and opportunities, and the weaknesses identified in Imhotep opened new lines of work. These challenges motivated and guided this dissertation.

First, a further analysis of the literature and the state of the art in user interface adaptation solutions was needed. A lack of mobile-based adaptive user interface solutions was identified. Besides, the ongoing software and hardware improvements of mobile devices supported the decision to design a mobile-centred platform.

After deciding on a mobile-centred platform, the challenge of \textit{what} and \textit{how} to model interface adaptations arose. Based on our previous experience and on the reviewed literature, a semantic model built around three main entities was designed. This model combines the user's interaction characteristics, a description of the current context situation, and the device's characteristics. As one of the problems identified in the user adaptation platforms analysed in the literature is their lack of independence from the considered domain, a semantic design was adopted. It provides an easy way to represent the knowledge of the domain, extend it, share it, and adapt it to any desired sub-domain by simply combining different ontologies with the provided AdaptUIOnt.

The combination of the two previously mentioned decisions brought a significant problem: the lack of readily available mobile semantic reasoning engines. Therefore, as a technical contribution, AdaptUI provides a mobile reasoning engine based on Pellet and compatible with Android. It is called \textit{Pellet4Android}, it is open source, and it provides support for \ac{owl} 2 and \ac{swrl} rules in Android.

Once the semantic models and the mobile semantic reasoning infrastructure were operational, the required architecture was designed, as shown in Figure~\ref{fig:architecture_discussion}.

\begin{figure}[H]
 \centering
 \includegraphics[width=0.30\textwidth]{architecture.pdf}
 \caption{AdaptUI's three-layered global architecture.}
 \label{fig:architecture_discussion}
\end{figure}

Finally, in order to make AdaptUI accessible to developers, two main \acp{api} are provided. These \acp{api} allow developers not only to adapt their application's user interface, but also to adapt the whole AdaptUI platform to the domain they work with.

% In this thesis several contributions have been provided: First, in
% Chapter~\ref{cha:ontology_model}, the AdaptUIOnt ontology has been presented.
% AdaptUIOnt is an ontology that models user interaction capabilities, context % current situation and device characteristics with a design that allows a dynamic % update of the represented knowledge. Second, \textit{Pellet4android} has been % introduced in Chapter~\ref{cha:architecture}. \textit{Pellet4android} is a % semantic reasoning engine based on Pellet but compatible with Android devices. % Finally, AdaptUI, a whole dynamic user interface adaptation platform, which % allows developers to design adaptive user interfaces for their applications, has % been described. % As AdaptUIOnt has been first highlighted, in the following lines we summarize % several conclusions and benefits of the cited designed ontology: % In the following lines we summarize several conclusions and benefits of AdaptUIOnt: % % \begin{itemize} % \item AdaptUIOnt arises from the identified weaknesses of the models and solutions % reviewed in Chapter~\ref{cha:state_of_the_art}. Centred on the user needs, it % models several characteristics that define users, context and devices, % regarding the current needs for the interaction with the current user % interface, and allowing their semantic representation. % % \item AdaptUIOnt allows the modelling of non physiological interaction % capabilities of the user. As mentioned in Chapter~\ref{cha:state_of_the_art}, % several solutions aiming the adaptation of user interfaces (or services) % take into account user physiological capabilities. However, these solutions % assume that these capabilities are properly provided by the user or by % external services. But the truth is that, analysing these solutions, no % experts are consulted or included in the researches. Thus, to us these % solutions, although they are interesting, do not cover the reality of the % users and their daily limitations when dealing with interaction activities. % Hence, the AdaptUIOnt ontology has been designed taking into account this % issue. By avoiding the inclusion of physiological knowledge about user's % capabilities we allow users to directly manipulate their applications without % considering specific expertise or medical background. Besides, developers do % not have to consult any expert in the area, as the translation of the user % interactions are represented in the ontology as preferences and needs, not as % capabilities. % % \item It allows the addition of external ontologies to complete the knowledge % of the main entities. One of the benefits of semantic representation is the % ability to join external ontologies which can enrich the knowledge base. Thus, % developers are allowed to design their models or use those ontologies they % prefer to better fit AdaptUI in their domains. For example, \textit{Context} % has not been designed from scratch. Several extra ontologies, fully supported % and used by the community, have been used to build the corresponding knowledge % about the context. % \end{itemize} % % As the AdaptUI platform is based on semantics and reasoning, a mobile reasoning % engine is required. The ported \textit{Pellet4Android} reasoning engine provides % several benefits, all inherited from Pellet for Java. Although, as is remarked % in Chapter~\ref{cha:evaluation}, more tests are needed to assure a similar % performance in mobile devices. Nevertheless, this port provides: % % \begin{itemize} % \item Representation and reasoning about information using \ac{owl} in Android % based mobile devices. % % \item Support for \ac{owl} 2 for Android devices. 
% % % \item % \end{itemize} % % Finally, regarding the whole AdaptUI platform: % % \begin{itemize} % \item It allows users to configure the best suitable adaptation regarding their % capabilities, temporary disabilities, context current characteristics and % their devices' dynamic and static set of characteristics. Through a simple % application they are able to configure the best user interface combination % of components for each case. These configurations are stored in the ontology, % indexed by the context disabilities extracted from the reasoning process. % Thus, each time AdaptUI detects a known context situation and reasons over % it. If the same disabilities are sensed, they corresponding user interface % is adapted. % % \item It also allows developers to forget about the aspect of their applications, regarding % inclusive design and adaptation, as the platform automatically manages it. % Besides, they are allowed to modify the knowledge through the provided % \acp{api}. Classes, properties, individuals and rules are fully modifiable, % which gives total liberty to developers to adapt the whole platform to their % needs. % % \item It provides a fully 100\% mobile adaptation platform. It does not require external % processing aid, nor even Internet connectivity. Due to the % \textit{Pellet4Android} reasoning engine every reasoning and semantics % management process runs in the mobile. This fact avoids possible failures due % to connectivity or network losses. % \end{itemize}

Although AdaptUI provides the benefits discussed above and fulfils the stated goals, the presented platform also has several constraints:

\begin{itemize}
  \item Depending on the hardware of the devices used, performance penalties may appear if those devices lack the necessary computing capabilities. However, the evaluation presented in Chapter~\ref{cha:evaluation} showed promising results with popular devices.

  \item Out of the box, although we have demonstrated that several temporary, context-related disabilities and limitations are considerably reduced, people with disabilities might still experience interaction problems. Thus, further efforts are needed to improve and refine the whole system. This issue is detailed in Section~\ref{sec:future_work}.

  \item Another limitation is that AdaptUI mostly considers disabilities related to visual and hearing constraints. Although others (e.g., motor disabilities) have been studied by analysing the \ac{icf}, the corresponding experiments are difficult to carry out. This is also detailed in Section~\ref{sec:future_work}, as we aim to cover more limitations and experiment with such contexts.
\end{itemize}
\documentclass{memoir}
\usepackage{notestemplate}

% \begin{figure}[ht]
% \centering
% \incfig{riemmans-theorem}
% \caption{Riemmans theorem}
% \label{fig:riemmans-theorem}
% \end{figure}

\begin{document}

\section{Subrings, Ideals, Quotient rings, Ring homomorphisms}

\begin{defn}[Subring]
A subring is a subset \(S\) of a ring \(R\) which is itself a ring under the restriction of the operations in \(R\). We denote this by \(S\leq R\).
\end{defn}

\begin{rmrk}
A nonempty subset \(S\) of \(R\) is a subring if and only if \(a,b \in S \implies a+b, ab, -a \in S\). This is equivalent to \(a,b \in S \implies a-b, ab \in S\).
\end{rmrk}

\begin{defn}[Ideal]
An ideal is a subring \(I \leq R\) which is closed under multiplication with elements of \(R\). Notationally, we say that \(I \triangleleft R\).
\end{defn}

\begin{rmrk}
A nonempty subset \(I\) of \(R\) is an ideal if and only if \(a,b \in I \implies a-b \in I\) and \(a \in I, r \in R \implies ar, ra \in I\).
\end{rmrk}

\begin{hw}
Show that a field has only trivial ideals.
\end{hw}

\begin{defn}[Principal Ideal]
If \(R\) is commutative and has an identity, then the principal ideal generated by \(c\) is the ideal \((c) = \left\{rc \mid r \in R \right\} \).
\end{defn}

This is the smallest ideal containing \(c\).

\end{document}
\lab{Object-Oriented Programming}{Object Oriented Programming}
\label{lab:OOP}

\objective{Python is a class-based language. A \emph{class} is a blueprint for an object that binds together specified variables and routines. Creating and using custom classes is often a good way to clean up and speed up a program. In this lab we learn how to define and use Python classes. In subsequent labs, we will create customized classes often for use in algorithms.}

\section*{Python Classes} % ===================================================

A Python \emph{class} is a code block that defines a custom object and determines its behavior. The \li{class} keyword defines and names a new class. Other statements follow, indented below the class name, to determine the behavior of objects instantiated by the class.

A class needs a method called a \emph{constructor} that is called whenever the class instantiates a new object. The constructor specifies the initial state of the object. In Python, a class's constructor is always named \li{__init__()}. For example, the following code defines a class for storing information about backpacks.

\begin{lstlisting}
class Backpack(object):
    """A Backpack object class. Has a name and a list of contents.

    Attributes:
        name (str): the name of the backpack's owner.
        contents (list): the contents of the backpack.
    """
    def __init__(self, name):          # This function is the constructor.
        """Set the name and initialize an empty contents list.

        Inputs:
            name (str): the name of the backpack's owner.

        Returns:
            A Backpack object with no contents.
        """
        self.name = name               # Initialize some attributes.
        self.contents = []
\end{lstlisting}

This \li{Backpack} class has two \emph{attributes}: \li{name} and \li{contents}. Attributes are variables stored within the class object. In the body of the class definition, attributes are accessed via the name \li{self}. This name refers to the object internally once it has been created.

\subsection*{Instantiation} % -------------------------------------------------

The \li{class} code block above only defines a blueprint for backpack objects. To create a backpack object, we ``call'' the class like a function. An object created by a class is called an \emph{instance} of the class. It is also said that a class \emph{instantiates} objects.

Classes may be imported in the same way as modules. In the following code, we import the \li{Backpack} class and instantiate a \li{Backpack} object.

\begin{lstlisting}
# Import the Backpack class and instantiate an object called 'my_backpack'.
>>> from oop import Backpack
>>> my_backpack = Backpack("Fred")

# Access the object's attributes with a period and the attribute name.
>>> my_backpack.name
<<'Fred'>>
>>> my_backpack.contents
[]

# The object's attributes can be modified dynamically.
>>> my_backpack.name = "George"
>>> print(my_backpack.name)
George
\end{lstlisting}

\begin{info}
Many programming languages distinguish between \emph{public} and \emph{private} attributes. In Python, all attributes are automatically public. However, an attribute can be hidden from the user in IPython by starting the name with an underscore. % Example?
\end{info}

\subsection*{Methods} % -------------------------------------------------------

In addition to storing variables as attributes, classes can have functions attached to them. A function that belongs to a specific class is called a \emph{method}. Below we define two simple methods in the \li{Backpack} class.

\begin{lstlisting}
class Backpack(object):
    # ...
    def put(self, item):
        """Add 'item' to the backpack's list of contents."""
        self.contents.append(item)

    def take(self, item):
        """Remove 'item' from the backpack's list of contents."""
        self.contents.remove(item)
\end{lstlisting}

The first argument of each method must be \li{self}, to give the method access to the attributes and other methods of the class. The \li{self} argument is only included in the declaration of the class methods, \textbf{not} when calling the methods.

\begin{lstlisting}
# Add some items to the backpack object.
>>> my_backpack.put("notebook")
>>> my_backpack.put("pencils")
>>> my_backpack.contents
<<['notebook', 'pencils']>>

# Remove an item from the backpack.
>>> my_backpack.take("pencils")
>>> my_backpack.contents
<<['notebook']>>
\end{lstlisting}

\begin{comment} % This doesn't quite fit here, but it's covered later.
IPython's object introspection feature reveals details on the object.
\begin{lstlisting}
In [1]: import Packs

In [2]: b = Packs.Backpack("Bill")

In [3]: b.      # press 'tab' to see the attributes and methods.
b.color     b.contents  b.put      b.take

In [4]: b.put?
<<Signature: b.put(item)
Docstring: Add 'item' to the backpack's content list.
File:      ~/Downloads/Packs.py
Type:      instancemethod>>
\end{lstlisting}
\end{comment}

% Problem 1: Expand the Backpack class.
% Should we specify what the attributes are called?
\begin{problem}
Expand the \li{Backpack} class to match the following specifications.
\begin{enumerate}
\item Modify the constructor so that it accepts a name, a color, and a maximum size (in that order). Make \li{max_size} a default argument with default value $5$. Store each input as an attribute.
\item Modify the \li{put()} method to ensure that the backpack does not go over capacity. If the user tries to put in more than \li{max\_size} items, print ``No Room!'' and do not add the item to the contents list.
\item Add a new method called \li{dump()} that resets the contents of the backpack to an empty list. This method should not receive any arguments (except \li{self}).
\end{enumerate}

To ensure that your class works properly, consider writing a test function outside of the \li{Backpack} class that instantiates and analyzes a \li{Backpack} object. Your function may look something like this:

\begin{lstlisting}
def test_backpack():
    testpack = Backpack("Barry", "black")       # Instantiate the object.
    if testpack.max_size != 5:                  # Test an attribute.
        print("Wrong default max_size!")
    for item in ["pencil", "pen", "paper", "computer"]:
        testpack.put(item)                      # Test a method.
    print(testpack.contents)
\end{lstlisting}
\end{problem}

\section*{Inheritance} % ======================================================

\emph{Inheritance} is an object-oriented programming tool for code reuse and organization. To create a new class that is similar to one that already exists, it is often better to \emph{inherit} the already existing methods and attributes rather than create a new class from scratch. This is done by including the existing class as an argument in the class definition (where the word \li{object} is in the definition of the \li{Backpack} class). This creates a \emph{class hierarchy}: a class that inherits from another class is called a \emph{subclass}, and the class that a subclass inherits from is called a \emph{superclass}.

For example, since a knapsack is a kind of backpack (but not all backpacks are knapsacks), we create a special \li{Knapsack} subclass that inherits the structure and behaviors of the \li{Backpack} class, and adds some extra functionality.
\begin{lstlisting} # Inherit from the Backpack class in the class definition. class Knapsack(Backpack): """A Knapsack object class. Inherits from the Backpack class. A knapsack is smaller than a backpack and can be tied closed. Attributes: name (str): the name of the knapsack's owner. color (str): the color of the knapsack. max_size (int): the maximum number of items that can fit in the knapsack. contents (list): the contents of the backpack. closed (bool): whether or not the knapsack is tied shut. """ def __init__(self, name, color, max_size=3): """Use the Backpack constructor to initialize the name, color, and max_size attributes. A knapsack only holds 3 item by default instead of 5. Inputs: name (str): the name of the knapsack's owner. color (str): the color of the knapsack. max_size (int): the maximum number of items that can fit in the knapsack. Defaults to 3. Returns: A Knapsack object with no contents. """ Backpack.__init__(self, name, color, max_size) self.closed = True \end{lstlisting} A subclass may have new attributes and methods that are unavailable to the superclass, such as the \li{closed} attribute in the \li{Knapsack} class. If methods in the new class need to be changed, they are overwritten as is the case of the constructor in the \li{Knapsack} class. New methods can be included normally. As an example, we modify the \li{put()} and \li{take()} methods in \li{Knapsack} to check if the knapsack is shut. \begin{lstlisting} class Knapsack(Backpack): # ... def put(self, item): """If the knapsack is untied, use the Backpack.put() method.""" if self.closed: print("I'm closed!") else: Backpack.put(self, item) def take(self, item): """If the knapsack is untied, use the Backpack.take() method.""" if self.closed: print("I'm closed!") else: Backpack.take(self, item) \end{lstlisting} Since \li{Knapsack} inherits from \li{Backpack}, a knapsack object is a backpack object. All methods defined in the \li{Backpack} class are available to instances of the \li{Knapsack} class. For example, the \li{dump()} method is available even though it is not defined explicitly in the \li{Knapsack} class. \begin{lstlisting} >>> from oop import Knapsack >>> my_knapsack = Knapsack("Brady", "brown") >>> isinstance(my_knapsack, Backpack) # A Knapsack is a Backpack. True # The put() and take() method now require the knapsack to be open. >>> my_knapsack.put('compass') <<I'm closed!>> # Open the knapsack and put in some items. >>> my_knapsack.closed = False >>> my_knapsack.put("compass") >>> my_knapsack.put("pocket knife") >>> my_knapsack.contents <<['compass', 'pocket knife']>> # The dump method is inherited from the Backpack class, and # can be used even though it is not defined in the Knapsack class. >>> my_knapsack.dump() >>> my_knapsack.contents [] \end{lstlisting} \begin{problem} % Problem 2: Create an inheritance class. Create a \li{Jetpack} class that inherits from the \li{Backpack} class. \begin{enumerate} \item Overwrite the constructor so that in addition to a name, color, and maximum size, it also accepts an amount of fuel. Change the default value of \li{max_size} to $2$, and set the default value of fuel to $10$. Store the fuel as an attribute. \item Add a \li{fly()} method that accepts an amount of fuel to be burned and decrements the fuel attribute by that amount. If the user tries to burn more fuel than remains, print ``Not enough fuel!" and do not decrement the fuel. \item Overwrite the \li{dump()} method so that both the contents and the fuel tank are emptied. 
\end{enumerate}
\end{problem}

\section*{Magic Methods} % ====================================================

In Python, a \emph{magic method} is a special method used to make an object behave like a built-in data type. The name of a magic method begins and ends with two underscores, like the constructor \li{__init__()}. Every Python object is automatically endowed with several magic methods, but they are normally hidden from IPython's object introspection because they begin with an underscore. To see an object's magic methods, type an underscore before pressing tab.

\begin{lstlisting}
In [1]: run oop.py      # Load the names from oop.py.

In [2]: b = Backpack("Oscar", "green")

In [3]: b.      # Press 'tab' to see standard methods and attributes.
b.name      b.contents  b.put      b.take

In [4]: b.put?
<<Signature: b.put(item)
Docstring: Add 'item' to the backpack's content list.
File:      ~/Downloads/Packs.py
Type:      instancemethod>>

In [5]: b.__    # Now press 'tab' to see magic methods.
b.__add__           b.__getattribute__  b.__reduce_ex__
b.__class__         b.__hash__          b.__repr__
b.__delattr__       b.__init__          b.__setattr__
b.__dict__          b.__lt__            b.__sizeof__
b.__doc__           b.__module__        b.__str__
b.__eq__            b.__new__           b.__subclasshook__
b.__format__        b.__reduce__        b.__weakref__

In [6]: b?
<<Type:           Backpack
File:             ~/Downloads/Packs.py
Docstring:
A Backpack object class. Has a name and a list of contents.

Attributes:
    name (str): the name of the backpack's owner.
    contents (list): the contents of the backpack.

Init docstring:
Set the name and initialize an empty contents list.

Inputs:
    name (str): the name of the backpack's owner.

Returns:
    A backpack object with no contents.>>
\end{lstlisting}

\begin{info}
In all of the preceding examples, the comments enclosed by sets of three double quotes are the object's \emph{docstring}, stored as the \li{__doc__} attribute. A good docstring typically includes a summary of the class or function, information about the inputs and returns, and other notes. Modules also have a \li{\_\_doc\_\_} attribute for describing the purpose of the file. Writing detailed docstrings is critical so that others can utilize your code correctly (and so that you don't forget how to use your own code!).
\end{info}

Now, suppose we wanted to add two backpacks together. How should addition be defined for backpacks? A simple option is to add the number of contents. Then if backpack $A$ has $3$ items and backpack $B$ has $5$ items, $A + B$ should return $8$.

\begin{lstlisting}
class Backpack(object):
    # ...
    def __add__(self, other):
        """Add the number of contents of each Backpack."""
        return len(self.contents) + len(other.contents)
\end{lstlisting}

Using the $+$ binary operator on two \li{Backpack} objects calls the class's \li{__add__()} method. The object on the left side of the $+$ is passed in to \li{__add__()} as \li{self} and the object on the right side of the $+$ is passed in as \li{other}.

\begin{lstlisting}
>>> pack1 = Backpack("Rose", "red")
>>> pack2 = Backpack("Carly", "cyan")

# Put some items in the backpacks.
>>> pack1.put("textbook")
>>> pack2.put("water bottle")
>>> pack2.put("snacks")

# Now add the backpacks like numbers
>>> pack1 + pack2           # Equivalent to pack1.__add__(pack2).
3
\end{lstlisting}

Subtraction, multiplication, division, and other standard operations may be similarly defined with their corresponding magic methods (see Table \ref{table:magic}).

\subsection*{Comparisons} % ---------------------------------------------------

% DO NOT USE __cmp__ ()! It has been removed as of Python 3.0.
Magic methods also facilitate object comparisons.
For example, the \li{__lt__()} method corresponds to the $<$ operator. Suppose one backpack is considered ``less'' than another if it has fewer items in its list of contents. \begin{lstlisting} class Backpack(object) # ... def __lt__(self, other): """Compare two backpacks. If 'self' has fewer contents than 'other', return True. Otherwise, return False. """ return len(self.contents) < len(other.contents) \end{lstlisting} Now using the $<$ binary operator on two \li{Backpack} objects calls \li{__lt__()}. As with addition, the object on the left side of the $<$ operator is passed to \li{__lt__()} as \li{self}, and the object on the right is passed in as \li{other}. \begin{lstlisting} >>> pack1 = Backpack("Maggy", "magenta") >>> pack2 = Backpack("Yolanda", "yellow") >>> pack1.put('book') >>> pack2.put('water bottle') >>> pack1 < pack2 False >>> pack2.put('pencils') >>> pack1 < pack2 # Equivalent to pack1.__lt__(pack2). True \end{lstlisting} Other standard comparison operators also have corresponding magic methods and should be implemented similarly (see Table \ref{table:magic}). Note that comparison methods should return either \li{True} or \li{False}, while arithmetic methods like \li{__add__()} might return a numerical value or another kind of object. % For example, we could have defined addition for backpacks as $A + B = C$, where $C$ is a new backpack with the name of backpack $A$, the color of backpack $B$, and the contents of both. % Then \li{__add__()} would return a \li{Backpack} object instead of a number. \begin{table}[H] % Common Magic Methods. \begin{tabular}{r|c|c} Method & Operation & Operator \\ \hline \li{__add__()} & Addition & \li{+}\\ \li{__sub__()} & Subtraction & \li{-}\\ \li{__mul__()} & Multiplication & \li{*}\\ \li{__div__()} & Division & \li{/}\\ \li{__lt__()} & Less than & \li{<}\\ \li{__le__()} & Less than or equal to & \li{<=}\\ \li{__gt__()} & Greater than & \li{>}\\ \li{__ge__()} & Greater than or equal to & \li{>=}\\ \li{__eq__()} & Equal & \li{==}\\ \li{__ne__()} & Not equal & \li{\!=} \end{tabular} \caption{Common magic methods for arithmetic and comparisons. What each of these operations do, or should do, is up to the programmer and should be carefully documented. See \url{https://docs.python.org/2/reference/datamodel.html\#special-method-names} for more methods and details.} \label{table:magic} \end{table} \begin{problem} % Problem 3: __eq__() and __str__() for Backpack. Endow the \li{Backpack} class with two additional magic methods: \begin{enumerate} \item The \li{__eq__()} magic method is used to determine if two objects are equal, and is invoked by the \li{==} operator. Implement the \li{__eq__()} magic method for the \li{Backpack} class so that two \li{Backpack} objects are equal if and only if they have the same name, color, and number of contents. % The two contents lists do not have to have their items in the same order to be considered equal. % TODO: for Python 3, call the 'print statement' the 'print() function'. \item The \li{__str__()} magic method is used to produce the string representation of an object. This method is invoked when an object is cast as a string with the \li{str()} function, or when using the \li{print} statement. 
Implement the \li{__str__()} method in the \li{Backpack} class so that printing a \li{Backpack} object yields the following output: \begin{lstlisting} <<Owner: <name> Color: <color> Size: <number of items in contents> Max Size: <max_size> Contents: [<item1>, <item2>, ...]>> \end{lstlisting} (Hint: Use the tab and newline characters \li{'\\t'} and \li{'\\n'} to help align output nicely.) \end{enumerate} \end{problem} \begin{warn} Comparison operators are not automatically related. For example, for two backpacks \li{A} and \li{B}, if \li{A==B} is \li{True}, it does not automatically imply that \li{A\!=B} is \li{False}. Accordingly, when defining \li{__eq__()}, one should also define \li{__ne__()} so that the operators will behave as expected, perhaps by calling \li{__eq__()} itself: \begin{lstlisting} def __ne__(self, other): return not self == other \end{lstlisting} \end{warn} \begin{problem} % Problem 5: ComplexNumber class Create your own \li{ComplexNumber} class that supports the basic operations of complex numbers. \begin{enumerate} \item Complex numbers have a real and an imaginary part. The constructor should therefore accept two numbers. Store the first as \li{self.real} and the second as \li{self.imag}. \item Implement a \li{conjugate()} method that returns the object's complex conjugate (as a new \li{ComplexNumber} object). Recall that $\overline{a + bi} = a - bi$. \item Add the following magic methods: \begin{enumerate} \item The \li{__abs__()} magic method determines the output of the built-in \li{abs()} function (absolute value). Implement \li{__abs__()} so that it returns the magnitude of the complex number. Recall that $|a+bi| = \sqrt{a^2+b^2}$. \item Implement \li{__lt__()} and \li{__gt__()} so that \li{ComplexNumber} objects can be compared by their magnitudes. That is, $(a+bi) < (c+di)$ if and only if $|a+bi| < |c+di|$, and so on. \item Implement \li{__eq__()} and \li{__ne__()} so that two \li{ComplexNumber} objects are equal if and only if they have the same real and imaginary parts. \item Implement \li{__add__()}, \li{__sub__()}, \li{__mul__()}, and \li{__div__()} appropriately. Each of these should return a new \li{ComplexNumber} object. (Hint: use the complex conjugate to implement division). \end{enumerate} \end{enumerate} Compare your class to Python's built-in \li{complex} type. How can your class be improved to act more like true complex numbers? \end{problem} \newpage \section*{Additional Material} % ============================================== \subsection*{Static Attributes} % --------------------------------------------- Attributes that are accessed through \li{self} are called \emph{instance} attributes because they are bound to a particular instance of the class. In contrast, a \emph{static} attribute is one that is shared between all instances of the class. To make an attribute static, declare it inside of the \li{class} block but outside of any of the class's functions, and do not use \li{self}. Since the attribute is not tied to a specific instance of the class, it should be accessed or changed via the class name. For example, suppose our Backpacks all have the same brand name. \begin{lstlisting} class Backpack(object): # ... brand = "Adidas" \end{lstlisting} Changing the brand name changes it on every backpack instance. \begin{lstlisting} >>> pack1 = Backpack("Bill", "blue") >>> pack2 = Backpack("William", "white") >>> print pack1.brand, pack2.brand Adidas Adidas # Change the brand name for the class to change it for all class instances. 
>>> print Backpack.brand Adidas >>> Backpack.brand = "Nike" >>> print pack1.brand, pack2.brand Nike Nike \end{lstlisting} \subsection*{Static Methods} % ------------------------------------------------ Individual class methods can also be static. A static method can have no dependence on the attributes of individual instances of the class, so there can be no references to \li{self} inside the body of the method and \li{self} is \textbf{not} listed as an argument in the function definition. Thus only static attributes and other static methods are available within the body of a static method. Include the tag \li{@staticmethod} above the function definition to designate a method as static. \begin{lstlisting} class Backpack(object): # ... @staticmethod def origin(): print "Manufactured by " + Backpack.brand + ", inc." \end{lstlisting} \begin{lstlisting} # We can call static methods before instantiating the class. Backpack.origin() Manufactured by Nike, inc. # The method can also be accessed through class instances. >>> pack = Backpack("Larry", "lime") >>> pack.origin() Manufactured by Nike, inc. \end{lstlisting} % Add a static method / attribute to the Backpack class. To practice these principles, consider adding a static attribute to the \li{Backpack} class to serve as a counter for a unique ID. In the constructor for the \li{Backpack} class, add an instance variable called \li{self.ID}. Set this ID based on the static ID variable, then increment the static ID so that the next \li{Backpack} object will have a new ID. \subsection*{More Magic Methods} % ------------------------------------------- Explore the following magic methods and consider how they could be implemented for the \li{Backpack} class. \begin{table}[H] % More Magic Methods. \begin{tabular}{r|c|l} Method & Operation & Trigger Function\\ \hline \li{__repr__()} & Object representation & \li{repr()}\\ \li{__nonzero__()} & Truth value & \li{bool()}\\ \li{__len__()} & Object length or size & \li{len()}\\ \li{__getitem__()} & Indexing and slicing & \li{self[index]}\\ \li{__setitem__()} & Assignment via indexing & \li{self[index] = x}\\ \li{__iter__()} & Iteration over the object & \li{iter()}\\ \li{__reversed__()} & Reverse iteration over the object & \li{reversed()}\\ \li{__contains__()} & Membership testing & \li{in} \end{tabular} \end{table} See \url{https://docs.python.org/2/reference/datamodel.html} for more details and documentation on all magic methods. \subsection*{Hashing} % ------------------------------------------------------- A \emph{hash value} is an integer that identifies an object. The built-in \li{hash()} function calculates an object's hash value by calling its \li{__hash__()} magic method. In Python, the built-in \li{set} and \li{dict} structures use hash values to store and retrieve objects in memory quickly. Thus if an object is unhashable, it cannot be put in a set or be used as a key in a dictionary. \begin{lstlisting} # Get the hash value of a hashable object. >>> hash("math") -8321016616855971138 # Create a dictionary and attempt to get its hash value. >>> example = {"math": 320} >>> hash(example) <<Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'dict'>> \end{lstlisting} If the \li{__hash__()} method is not defined, the default hash value is the object's memory address (accessible via the built-in function \li{id()}) divided by $16$, rounded down to the nearest integer. 
However, two objects that compare as equal via the \li{__eq__()} magic method must have the same hash value. A simple \li{__hash__()} method for the \li{Backpack} class that conforms to this rule and returns an integer might be the following. \begin{lstlisting} class Backpack(object): # ... def __hash__(self): return hash(self.name) ^ hash(self.color) ^ hash(len(self.contents)) \end{lstlisting} The caret operator \texttt{\^} is a bitwise XOR (exclusive or). The bitwise AND operator \li{&} and the bitwise OR operator \li{|} are also good choices to use. \begin{warn} If a class does not have an \li{__eq__()} method it should \textbf{not} have a \li{__hash__()} method either. Furthermore, if a class defines \li{__eq__()} but not \li{__hash__()}, its instances may be usable in hashed collections like sets and dictionaries (because of the default hash value), but two instances that compare as equal may or may not be recognized as distinct in the collection. \end{warn}
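For example, in the following small sketch (an illustrative class, separate from the \li{Backpack} examples above), \li{__eq__()} is defined but \li{__hash__()} is not, so the default id-based hash is still used. Two objects that compare as equal are then stored as two distinct entries in a set.

\begin{lstlisting}
class Marble(object):
    """A small example class: __eq__() is defined, but __hash__() is not."""
    def __init__(self, color):
        self.color = color
    def __eq__(self, other):
        return self.color == other.color
\end{lstlisting}

\begin{lstlisting}
>>> a = Marble("red")
>>> b = Marble("red")
>>> a == b                      # The two marbles compare as equal...
True
>>> len(set([a, b]))            # ...but the set sees two distinct items.
2
\end{lstlisting}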
%!TEX root = ../User Guide.tex

\chapter{Output description}

By default, CACTUS will output rotor-integrated loads to a text file. CACTUS is capable of saving much more information, such as blade-integrated loads, blade element loads, wall panel data, field velocities, and the complete state of vortex filaments describing the wake. These outputs can be enabled through the appropriate input file flags, described in \Cref{tbl:configoutputs}.

This chapter describes the format of the various output files.

\section{Revolution-averaged rotor-integrated loads}
CACTUS writes revolution-averaged rotor-integrated loads for each revolution to the comma delimited file \path{[run name]_RevData.csv}. The output columns of this file are described in \Cref{tbl:output_vars_rev}.

\begin{table}[!htbp]
	\centering
	\caption{Revolution-averaged output data.}
	\label{tbl:output_vars_rev}
	\begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}}
		\toprule
		Variable name & Description \\
		\midrule
		\path{Rev}                  & Revolution number \\
		\path{Power Coeff. (-)}     & Revolution average machine power coefficient \\
		\path{Tip Power Coeff. (-)} & Revolution average machine power coefficient normalized with $U_\textrm{tip}$ instead of $U_\infty$ \\
		\path{Torque Coeff. (-)}    & Revolution average torque coefficient \\
		\path{Fx Coeff. (-)}        & Revolution average $x$-component of force coefficient \\
		\path{Fy Coeff. (-)}        & Revolution average $y$-component of force coefficient \\
		\path{Fz Coeff. (-)}        & Revolution average $z$-component of force coefficient \\
		\path{Power (kW)}           & Revolution average machine power \\
		\path{Torque (ft-lbs)}      & Revolution average machine torque \\
		\bottomrule
	\end{tabular}
\end{table}

\section{Blade-integrated loads}
If \path{BladeElemOutFlag} = 1, blade-integrated loads at each time step are written to the comma delimited file \path{[run name]_TimeData.csv}. The output columns of this file are described in \Cref{tbl:output_vars_time}.

\begin{table}[!htbp]
	\centering
	\caption{Blade integrated loads.}
	\label{tbl:output_vars_time}
	\begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}}
		\toprule
		Variable name & Description \\
		\midrule
		\path{Normalized Time (-)}    & Normalized simulation time, $t_N=t U_\infty/R$ \\
		\path{Theta (rad)}            & Turbine rotational phase angle \\
		\path{Rev}                    & Revolution number \\
		\path{Torque Coeff (-)}       & Torque coefficient \\
		\path{Power Coeff (-)}        & Power coefficient \\
		\path{Fx Coeff. (-)}          & $x$-component of force coefficient \\
		\path{Fy Coeff. (-)}          & $y$-component of force coefficient \\
		\path{Fz Coeff. (-)}          & $z$-component of force coefficient \\
		\path{Blade Fx Coeff (-)}     & Contribution to $x$-component of force coefficient from blade \\
		\path{Blade Fy Coeff (-)}     & Contribution to $y$-component of force coefficient from blade \\
		\path{Blade Fz Coeff (-)}     & Contribution to $z$-component of force coefficient from blade \\
		\path{Blade Torque Coeff (-)} & Contribution to torque coefficient from blade \\
		\path{Strut Fx Coeff (-)}     & Contribution to $x$-component of force coefficient from strut \\
		\path{Strut Fy Coeff (-)}     & Contribution to $y$-component of force coefficient from strut \\
		\path{Strut Fz Coeff (-)}     & Contribution to $z$-component of force coefficient from strut \\
		\path{Strut Torque Coeff (-)} & Contribution to torque coefficient from strut \\
		\bottomrule
	\end{tabular}
\end{table}

\section{Blade element loads}
If \path{BladeElemOutFlag} = 1, loads on the individual blade elements at each time step are written to the comma delimited file \path{[run name]_ElementData.csv}.
The output columns of this file are described in \Cref{tbl:output_vars_blade}. \begin{table}[!htbp] \centering \caption{Temporal blade element loads.} \label{tbl:output_vars_blade} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{Normalized Time (-)} & Normalized simulation time, $t_N=t U_\infty/R$ \\ \path{Theta (rad)} & Turbine rotational phase angle \\ \path{Blade} & Blade number \\ \path{Element} & Element number \\ \path{Rev} & Revolution number \\ \path{AOA25 (deg)} & Local flow angle of attack, defined at element quarter-chord location. \\ \path{AOA50 (deg)} & Reference 50\% chord flow angle of attack. Different from AOA25 when element is rotating in the local spanwise direction. \\ \path{AOA75 (deg)} & Reference 75\% chord flow angle of attack. Different from AOA25 when element is rotating in the local spanwise direction. \\ \path{AdotNorm (-)} & Normalized AOA rate $\dot{\alpha}_\textrm{norm} = \dot{\alpha} c / 2 U_\textrm{loc}$ \\ \path{Re (-)} & Element Reynolds number based on element chord \\ \path{Mach (-)} & Element Mach number \\ \path{Ur (-)} & Local flow speed ratio with freestream, $U_r = U_\textrm{loc}/U_\infty$ \\ \path{CL (-)} & Element lift coefficient, $C_L=L/{\frac{1}{2} \rho U_\textrm{loc}^2 A_E}$ \\ \path{CD (-)} & Element drag coefficient, $C_D=D/{\frac{1}{2} \rho U_\textrm{loc}^2 A_E}$ \\ \path{CM25 (-)} & Element pitching moment coefficient about the quarter-chord location, $C_{M,25}=L/{\frac{1}{2} \rho U_\textrm{loc}^2 A_E c}$ \\ \path{CLCirc (-)} & Circulatory component of element lift coefficient, $C_{L,\textrm{circ}}={L_\textrm{circ}}/{\frac{1}{2} \rho U_\textrm{loc}^2}$ \\ \path{CN (-)} & Element normal force coefficient, $C_N = {N}/{\frac{1}{2} \rho U_\textrm{loc}^2 A_E}$ \\ \path{CT (-)} & Element tangential force coefficient, $C_T = {T}/{\frac{1}{2} \rho U_\textrm{loc}^2 A_E}$ \\ \path{Fx (-)} & Contribution to $x$-component of force coefficient from element \\ \path{Fy (-)} & Contribution to $y$-component of force coefficient from element \\ \path{Fz (-)} & Contribution to $z$-component of force coefficient from element \\ \path{te (-)} & Contribution to torque coefficient from element \\ \bottomrule \end{tabular} \end{table} \section{Ground plane, wall, and free surface output} \subsection{Ground plane and wall data} If either a ground plane calculation or wall calculation is being performed (\path{GPFlag} = 1 or \path{WPFlag} = 1), information about the wall may optionally be written to file (\path{[run name]_WPData_[timestep number].tp}). The files use a TecPlot file format (finite-element structured mesh, cell-centered field data). The output fields are described in \Cref{tbl:output_wall,tbl:output_free_surface}. These files may be easily read in to ParaView for post-processing. The verbosity of this output file is specified with the \path{WallOutFlag} namelist variable. If \path{WallOutFlag} = 1, the source panel strength is written to file. If \path{WallOutFlag} = 2, the induced velocities at each panel center are also computed and written to file. The interval of ground plane/wall data output may be controlled via the variables described in \Cref{tbl:configoutputs}. 
\begin{itemize} \item \path{WallOutIntervalTimesteps}, \item \path{WallOutStartTimestep}, and \item \path{WallOutEndTimestep} \end{itemize} \begin{table}[!htbp] \centering \caption{Wall outputs.} \label{tbl:output_wall} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{X} & $x$-coordinate of panel node normalized by $R$ \\ \path{Y} & $y$-coordinate of panel node normalized by $R$ \\ \path{Z} & $z$-coordinate of panel node normalized by $R$ \\ \path{sigma (-)} & Source density of the panel normalized by $U_\infty$ \\ \path{u} & $x$-velocity at the panel center normalized by $U_\infty$ \\ \path{v} & $y$-velocity at the panel center normalized by $U_\infty$ \\ \path{w} & $z$-velocity at the panel center normalized by $U_\infty$ \\ \bottomrule \end{tabular} \end{table} \subsection{Free surface data} If a free surface calculation is being performed (\path{FSFlag} = 1), information about the free surface is automatically written to file. A single comma delimited file (\path{[run name]_FSData.csv}) is written which contains the information described in \Cref{tbl:output_free_surface} \emph{averaged over the final revolution}. \begin{table}[!htbp] \centering \caption{Free surface outputs.} \label{tbl:output_free_surface} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{X/R (-)} & $x$-coordinate of the panel center normalized by $R$ \\ \path{Y/R (-)} & $y$-coordinate of the panel center normalized by $R$ \\ \path{Z/R (-)} & $z$-coordinate of the panel center normalized by $R$ \\ \path{U/Uinf (-)} & Wall tangential velocity (nominal freestream direction) normalized by $U_\infty$ \\ \path{dH/R (-)} & Free surface height (above un-deflected height) normalized by $R$ \\ \bottomrule \end{tabular} \end{table} \section{Field velocities} If \path{FieldOutFlag = 1}, the induced velocity field on a Cartesian grid is computed and written to file (\path{[run name]_FieldData_[timestep number].csv}). Each file contains the field data at a single timestep. The columns are described in \Cref{tbl:output_cartesian_velocity}. Enabling this output can add considerable time to the simulation, since the induced velocity due to every wake node must be calculated at every point in the specified Cartesian grid. \begin{table}[!htbp] \centering \caption{Field velocity output data.} \label{tbl:output_cartesian_velocity} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{Normalized Time (-)} & Normalized simulation time, $t_N=t U_\infty/R$ \\ \path{x/R (-)} & $x$-coordinate of the data point normalized by $R$ \\ \path{y/R (-)} & $y$-coordinate of the data point normalized by $R$ \\ \path{z/R (-)} & $z$-coordinate of the data point normalized by $R$ \\ \path{U/Uinf (-)} & $x$-component of induced velocity \\ \path{V/Uinf (-)} & $y$-component of induced velocity \\ \path{W/Uinf (-)} & $z$-component of induced velocity \\ \path{Ufs/Uinf (-)} & $x$-component of free-stream velocity \\ \path{Vfs/Uinf (-)} & $y$-component of free-stream velocity \\ \path{Wfs/Uinf (-)} & $z$-component of free-stream velocity \\ \bottomrule \end{tabular} \end{table} \section{Wake element (vortex filament) data} If \path{WakeElemOutFlag} = 1, information about the vortex filament comprising the wake is written to file (\path{[run name]_WakeData_[timestep number].csv}). Each file contains the wake element data at a single timestep. 
Output data is defined at the endpoints of the vortex filaments. The columns are described in \Cref{tbl:output_vortex_filaments}. \begin{table} \centering \caption{Vortex filament output data.} \label{tbl:output_vortex_filaments} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{Normalized Time (-)} & Normalized simulation time, $t_N=t U_\infty/R$ \\ \path{Node ID} & A unique ID given to each distinct filament node (useful for tracing a particle's path in time) \\ \path{Origin Node} & ID of the element node from which this filament was generated \\ \path{x/R (-)} & $x$-coordinate of the data point normalized by $R$ \\ \path{y/R (-)} & $y$-coordinate of the data point normalized by $R$ \\ \path{z/R (-)} & $z$-coordinate of the data point normalized by $R$ \\ \path{U/Uinf (-)} & $x$-component of induced velocity \\ \path{V/Uinf (-)} & $y$-component of induced velocity \\ \path{W/Uinf (-)} & $z$-component of induced velocity \\ \bottomrule \end{tabular} \end{table} \section{Probe Output} If \path{ProbeFlag} = 1, the velocities at the probes specified in the file at \path{ProbeSpecPath} will be computed and written to file(\path{probe_[probe number].csv}). Each file contains the time-series velocity data at an individual probe location. Each probe file has a header, described in \Cref{tbl:output_probe_file_header}, containing the location of the probe. This header is followed by the probe time series data, whose columns are described in \Cref{tbl:output_probe_file}. \begin{table} \centering \caption{Probe output data header.} \label{tbl:output_probe_file_header} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{x/R (-)} & $x$-coordinate of the data point normalized by $R$ \\ \path{y/R (-)} & $y$-coordinate of the data point normalized by $R$ \\ \path{z/R (-)} & $z$-coordinate of the data point normalized by $R$ \\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{Probe output data.} \label{tbl:output_probe_file} \begin{tabular}{p{0.3\textwidth}p{0.6\textwidth}} \toprule Variable name & Description \\ \midrule \path{Normalized Time (-)} & Normalized simulation time, $t_N=t U_\infty/R$ \\ \path{U/Uinf (-)} & $x$-component of induced velocity \\ \path{V/Uinf (-)} & $y$-component of induced velocity \\ \path{W/Uinf (-)} & $z$-component of induced velocity \\ \path{Ufs/Uinf (-)} & $x$-component of free-stream velocity \\ \path{Vfs/Uinf (-)} & $y$-component of free-stream velocity \\ \path{Wfs/Uinf (-)} & $z$-component of free-stream velocity \\ \bottomrule \end{tabular} \end{table}
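Since all of the files described above are plain comma delimited text, they can be post-processed with standard tools. As a minimal sketch (assuming Python with the pandas package is available, the run was named \path{TestRun}, and the column headers match the tables above exactly), the revolution-averaged and blade element outputs could be loaded as follows:

\begin{verbatim}
# Minimal post-processing sketch; file and column names follow the tables above.
import pandas as pd

# Revolution-averaged rotor loads: one row per revolution.
rev = pd.read_csv("TestRun_RevData.csv")
print(rev[["Rev", "Power Coeff. (-)", "Torque Coeff. (-)"]])

# Blade element loads (written when BladeElemOutFlag = 1).
elem = pd.read_csv("TestRun_ElementData.csv")
blade1 = elem[elem["Blade"] == 1]        # keep only the first blade
print(blade1[["Theta (rad)", "CL (-)", "CD (-)"]].head())
\end{verbatim}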
\chapter{Dealing with extreme low-density chains} \label{low-density} \section{Introduction} As we saw in \cref{genesis}, chain density is our principal means for distinguishing chains forged by malicious nodes from the honest chain: due to the fundamental assumption that there is an honest majority, the chain forged by the honest nodes will be denser than any chain forged by an adversary. If therefore the honest nodes in the system stop producing blocks due to some network-wide problem---pervasive node misconfiguration, bug in the ledger, etc.---the security of the system is at risk. Even when the nodes start producing blocks again, the low-density region of the chain left behind by the problem will remain to be an issue for security. If an adversary forks off their own chain at the point where the honest majority stopped producing blocks, then new nodes joining the network (using the genesis rule, \cref{genesis}) will adopt the adversarial chain instead of the honest chain: % \begin{center} \begin{tikzpicture} \draw (0,0) -- (3,0) node{$\bullet$} coordinate(i); \draw (i) -- ++(3,0) node[pos=0.5,above]{$\overbrace{\hspace{3cm}}^\text{no blocks produced}$} -- ++(4,0) node[pos=0.5,above]{$\overbrace{\hspace{4cm}}^\text{chain resumes}$} node[right]{honest chain}; \draw (i) -- ++(0, -1) -- ++(4, 0) node[pos=0.5,below]{$\underbrace{\hspace{4cm}}_\text{$s$ slots}$} -- ++(2, 0) node[right]{adversarial chain}; \draw [dashed] (7,-2) -- (7,1); \end{tikzpicture} \end{center} % The \emph{Disaster Recovery Plan}\footnote{Currently not available as a public document.} sketches how we might ``patch the chain back up'' when a major problem like this occurs. This must happen out-of-band with the cooperation of the major stake holders, and is (mostly) outside the scope of the Consensus Layer report. That said, the options for disaster recovery are currently limited due to a technical limitation in the consensus layer, which we will discuss now. From a chain security point of view, it makes little difference if the honest chain has a region of $s$ slots containing \emph{one} block or \emph{zero} blocks; both are equally terrible. However, this makes a big difference to the implementation as it currently stands: as long as there is at least one block in every $s$ slots, the system can continue; but when there is a gap of more than $s$ slots anywhere on the chain, the system will grind to a halt. As we will see in this chapter, this is a consequence of the fact that we validate headers independent from blocks, the so-called header/body split (see also \cref{nonfunctional:network:headerbody}). The main goal of this chapter is to discuss how we can address this, allowing the system to continue irrespective of any gaps on the chain. This is important for a number of reasons: \begin{enumerate} \item It makes disaster recovery less immediately urgent: if the honest nodes stop producing blocks for whatever reason, the problem can be resolved, the system restarted, and blocks can be produced again. Disaster recovery, and patching the chain back up, can then be considered as the system is running again, and put into motion when the various stake holders are ready. \item It also opens up more avenues for disaster recovery. If the consensus layer can't skip past large gaps on the chain, then the chain \emph{must} be patched. However, if we lift this restriction, then there are other ways in which we might address the problem. 
For example, we could (even if just temporarily) simply record the low-density area of the chain within the code itself and hardcode a preference for (this part of the) ``honest but sparse'' chain in chain selection. \item Chain regions with extreme low density are difficult to avoid in our consensus tests (\cref{testing:consensus}). \end{enumerate} Even \emph{if} it is desirable that the system stops when the chain density falls below a certain threshold, it does not make sense to set that threshold at the ``less than 1 block per $s$ slots'' boundary. This should be defined and implemented as an external policy, not dictated by implementation details. Moreover, even with an explicit stop, we might like the ability to mark the known-to-be-low-density chain and restart the system (point 2, above). It is also far from clear how to avoid adversarial nodes from taking advantage of such ``automatic'' stops (how do we prevent adversaries from producing blocks?). Either way, such concerns are well outside the scope of this chapter. Here we address just one question: how can we allow the system to continue when there are larger-than-$s$-slots gaps on the chain. \section{Background} \subsection{Recap: ledger state, ledger view and forecasting} Blocks are validated against the state of the ledger (\cref{ledger:api:ApplyBlock}). For example, we check that inputs spent by transactions in the block are available in the UTxO in the ledger state. Depending on the choice of consensus protocol, we may also need part of the ledger state to be able to validate the block \emph{header}. For example, in Optimum and Genesis we need to know the active stake distribution in order to be able to verify that whoever produced the block had a right to do so. We call the part of the ledger state that we need to validate block headers the \emph{ledger view} (\cref{consensus:class:ledgerview}). We call it a ledger \emph{view} because it is a projection out of the full ledger state. Unfortunately, we cannot compute the \emph{next} ledger view based only on the header; there is nothing that corresponds to the dotted arrow in this diagram: % \begin{center} \begin{tikzpicture}[block/.style={rectangle}] \node at (0, 2) (state1) [block] {ledger state}; \node at (7, 2) (state2) [block] {ledger state}; \node at (0, 0) (view1) [block] {ledger view}; \node at (7, 0) (view2) [block] {ledger view}; \draw [->] (state1.south) -- (view1.north) node[pos=0.5,left]{project}; \draw [->] (state2.south) -- (view2.north) node[pos=0.5,right]{project}; \draw [->] (state1.east) -- (state2.west) node[pos=0.5,above]{apply block}; \draw [->, dotted] (view1.east) -- (view2.west) node[pos=0.5,below]{(cannot apply header)}; \end{tikzpicture} \end{center} % Let's recall the Optimum example again: we can compute the active stake distribution from the ledger state, but in order to understand how the active stake distribution evolves, we need to know how the full UTxO evolves, and for that we need the full blocks. (We discussed this also in \cref{hfc:failed:forecasting}.) Let's stay with Optimum a little longer. The active stake distribution changes only at epoch boundaries. Therefore we will know the active stake distribution at least until the end of the epoch. Moreover, once we get close enough to the epoch boundary, we also know the stake distribution for the \emph{next} epoch. 
The range over which we know the active stake distribution therefore evolves as follows: % \begin{center} \begin{tikzpicture}[yscale=0.75] % \draw (0, 0) -- (2, 0) node{$\bullet$} node[above left]{tip}; \path (2, 0) -- (6, 0) node[pos=0.5,above]{$\overbrace{\hspace{4cm}}^\text{known}$}; % \draw (0, -1) -- (3, -1) node{$\bullet$} node[above left]{tip}; \path (3, -1) -- (6, -1) node[pos=0.5,above]{$\overbrace{\hspace{3cm}}^\text{known}$}; % \draw (0, -2) -- (4, -2) node{$\bullet$} node[above left]{tip}; \path (4, -2) -- (6, -2) node[pos=0.5,above]{$\overbrace{\hspace{2cm}}^\text{known}$}; % \draw (0, -3) -- (5, -3) node{$\bullet$} node[above left]{tip}; \path (9, -3) -- (6, -3) node[pos=0.5,above]{$\overbrace{\hspace{5cm}}^\text{known}$}; % \draw [dashed] ( 2, -3.2) -- ( 2, 0.7) node[above]{epoch}; \draw [dashed] ( 6, -3.2) -- ( 6, 0.7) node[above]{epoch}; \draw [dashed] (10, -3.2) -- (10, 0.7) node[above]{epoch}; \end{tikzpicture} \end{center} \pagebreak The range over which we know the active stake distribution shrinks and then grows again, but never falls below a certain minimum size. We abstract from this process in the consensus layer, and say we can \emph{forecast} the ledger view from a particular ledger state over a certain \emph{forecast range} (\cref{ledger:api:LedgerSupportsProtocol}). This does not necessarily mean the ledger view is constant during that range, but merely that any changes are \emph{known} (for example, see the last line in the diagram above). If we change our perspective slightly, we can say that blocks on the chain cannot influence the ledger view (active stake distribution) until a certain period of time (in slots) has passed. We call this the \emph{stability window} of the ledger, and will study it in more detail in the next section. \subsection{Recap: stability windows} \label{low-density:recap-stability-window} Blocks are validated against ledger states; each block is validated against the ledger state as it was after applying the previous block. This means that when we validate block $B$ in the example below, we use the ledger state after applying block $A$; for block $C$, we use the ledger state after applying block $B$: % \begin{center} \begin{tikzpicture} [block/.style={rectangle,draw=black,minimum size=5mm} ,baseline=0pt] \node at (0,0) (A) [block] {A}; \node at (2,0) (B) [block] {B}; \node at (5,0) (C) [block] {C}; \draw (-2,0) -- (A.west); \draw (A.east) -- (B.west) node[pos=0.5,above=5mm]{\small validated against}; \draw (B.east) -- (C.west) node[pos=0.5,above=5mm]{\small validated against}; \draw (C.east) -- ++(2,0); % \draw [->, dotted] (B.west) to [out=135,in=90] (A.east); \draw [->, dotted] (C.west) to [out=135,in=90] (B.east); \end{tikzpicture} \qquad \begin{minipage}{0.25\textwidth} \emph{Horizontal axis represents time (in slots)} \end{minipage} \end{center} % In the chain sync client (\cref{chainsyncclient}) we are however not validating blocks, but block \emph{headers}. As we saw, in order to validate a header we only need part of the ledger state, known as the ledger \emph{view}. We also saw that despite the fact that we only need part of the ledger state, we cannot \emph{update} the ledger view using only headers: we still need the full block. 
This means that if we have block $A$, but only block \emph{headers} $B$ and $C$, we have a problem: % \begin{center} \begin{tikzpicture} [block/.style={rectangle,draw=black,minimum size=5mm}] \path (-2,0) -- (11,0); % adjust bounding box \node at (0,0) (A) [block] {A}; \node at (2,0) (B) [block, dashed] {B}; \node at (5,0) (C) [block, dashed] {C}; \draw (-2,0) -- (A.west); \draw (A.east) -- (B.west) node[pos=0.5,above=5mm]{\small validated against}; \draw (B.east) -- (C.west) node[pos=0.5,above=5mm]{\small validated against}; \draw (C.east) -- ++(2,0); % \draw [->, dotted] (B.west) to [out=135,in=90] (A.east); \draw [->, dotted] (C.west) to [out=135,in=90] (B.east); \end{tikzpicture} \end{center} % Validating header $B$ is unproblematic, since we have the ledger state available after applying block $A$. However, since we don't have block $B$, we can't compute the ledger state after block $B$ to validate header $C$. We are saved by the fact that we can \emph{forecast} the ledger view required to validate header $B$ from the ledger state after $A$: % \begin{center} \begin{tikzpicture} [block/.style={rectangle,draw=black,minimum size=5mm}] \path (-2,0) -- (11,0); % adjust bounding box \node at (0,0) (A) [block] {A}; \node at (2,0) (B) [block, dashed] {B}; \node at (5,0) (C) [block, dashed] {C}; \draw (-2,0) -- (A.west); \draw (A.east) -- (B.west) node[pos=0.55,below=5mm]{\small forecast}; \draw (B.east) -- (C.west); \draw (C.east) -- ++(2,0); % \draw [->, dotted] (B.west) to [out=135,in=90] (A.east); \draw [->, dotted] (C.west) to [out=135,in=90] (B.east); % \draw [->, dotted] (A.east) to [out=270,in=270] (B.east); \end{tikzpicture} \end{center} % We can do this because of a restriction on the ledger: blocks cannot affect the ledger view until a \emph{stability window} has passed: % \begin{center} \begin{tikzpicture} [block/.style={rectangle,draw=black,minimum size=5mm}] \path (-2,0) -- (11,0); % adjust bounding box \node at (0,0) (A) [block] {A}; \node at (2,0) (B) [block, dashed] {B}; \node at (5,0) (C) [block, dashed] {C}; \node at (8,0) (D) [block, dashed] {D}; \draw (-2,0) -- (A.west); \draw (A.east) -- (B.west) node[pos=0.55,below=5mm]{\small forecast}; \draw (B.east) -- (C.west); \draw (C.east) -- (D.west); \draw (D.east) -- ++(2,0); % \draw [->, dotted] (B.west) to [out=135,in=90] (A.east); \draw [->, dotted] (C.west) to [out=135,in=90] (B.east); % \draw [->, dotted] (A.east) to [out=270,in=270] (B.east); \node at (B.east) [below=0.6, right] {$\underbrace{\hspace{4cm}}_\text{stability window}$}; \node at (7,0) {$\times$}; \end{tikzpicture} \end{center} % We can use the ledger state after applying block $A$ (which we have complete knowledge of) to validate any header up to the end of $B$'s stability window: any changes that $A$ (or any block before $A$) initiates we know about, and any changes that $B$ initiates cannot take effect until that stability window ends. Therefore we can validate header $C$, but not header $D$: block $B$ might have scheduled some changes to take effect at the slot marked as $(\times)$ in the diagram, and we do not know what those effects are.\footnote{It might be tempting to think that we can validate $D$ because if we did have blocks $B$ and $C$, block $D$ would be evaluated against the ledger state as it was after applying $C$, which is still within $B$'s stability window. 
However, the slot number of $D$ (its location on the $x$-axis in the diagram) matters, because changes are scheduled for slots.} In chain sync we do not currently take advantage of the knowledge of the location of header $B$.\footnote{\label{footnote:anchor-after-first-header}We should change this. By anchoring the stability window at the last known block, we only have a guarantee that we can validate $k$ headers, but we should really be able to validate $k + 1$ headers in order to get a chain that is longer than our own (\cref{low-density:tension}). If we anchored the stability window after the first unknown header, where it \emph{should} be anchored, we can validate $k$ headers \emph{after} the first unknown header, and hence $k + 1$ in total. Concretely, we would have to extend the \lstinline!LedgerSupportsProtocol! class with a function that forecasts the ledger view given a \emph{ticked} ledger state. Taking advantage of this would then just be a minor additional complication in the chain sync client.} This means we have to be conservative: all we know is that there could be \emph{some} block in between $A$ and $C$ that might schedule some changes that are relevant for validating header $C$. In this case we therefore assume that the stability window extends from $A$ instead: % \begin{center} \begin{tikzpicture} [block/.style={rectangle,draw=black,minimum size=5mm}] \path (-2,0) -- (11,0); % adjust bounding box \node at (0,0) (A) [block] {A}; \node at (2,0) (B) [block, dashed] {B}; \node at (5,0) (C) [block, dashed] {C}; \node at (8,0) (D) [block, dashed] {D}; \draw (-2,0) -- (A.west); \draw (A.east) -- (B.west); \draw (B.east) -- (C.west); \draw (C.east) -- (D.west); \draw (D.east) -- ++(2,0); % \node at (A.east) [below=0.6, right] {$\underbrace{\hspace{4cm}}_\text{stability window}$}; \end{tikzpicture} \end{center} % In this example, that means we can validate $B$, but not $C$ (nor $D$).\footnote{We could in principle shift this up by 1 slot: after all, the very first next block after $A$ cannot be in the same slot as $A$. While EBBs are an exception to that rule (\cref{ebbs}), we do not need to validate EBBs so this is a rare example where EBBs do not cause a problem.} \subsection{Tension with chain selection} \label{low-density:tension} Changes that affect the ledger view are scheduled for slots (often for epoch boundaries, which happen at particular slots); the stability window must therefore be defined in terms of slots as well. This means that the number of \emph{headers} we can validate within a given stability window depends on the density of that chain; if the chain we considered at the end of the previous section looks like this instead % \begin{center} \begin{tikzpicture} [block/.style={rectangle,draw=black,minimum size=5mm}] \path (-2,0) -- (11,0); % adjust bounding box \node at (0,0) (A) [block] {A}; \node at (1.5,0) (B) [block, dashed] {B}; \node at (3,0) (C) [block, dashed] {C}; \node at (8,0) (D) [block, dashed] {D}; \draw (-2,0) -- (A.west); \draw (A.east) -- (B.west); \draw (B.east) -- (C.west); \draw (C.east) -- (D.west); \draw (D.east) -- ++(2,0); % \node at (A.east) [below=0.6, right] {$\underbrace{\hspace{4cm}}_\text{stability window}$}; \end{tikzpicture} \end{center} % we can validate headers $B$ and $C$ (but still not $D$). There is a fundamental tension between the stability window defined in \emph{slots}, and chain selection preferring longer chains: chains that have more \emph{blocks}. 
In order to be able to do a meaningful comparison between our chain and the candidate chain, we must be able to verify enough of that candidate chain that the length of that verified prefix exceeds the length of our own chain. Since the maximum rollback we support is $k$ (\cref{consensus:overview:k}), that means we must be able to validate at least $k + 1$ headers.

The tension is resolved by a theoretical result that says that within $3k/f$ slots we \emph{will} see more than $k$ blocks (more precisely, the probability that we see fewer than $k$ blocks in $3k/f$ slots is negligibly small; \cite{cryptoeprint:2017:573}). This therefore provides us with a suitable choice for a stability window.

Unfortunately, while in theory there is no difference between theory and practice, there is in practice. Currently, when all nodes in the system are unable to produce blocks for an extended period of time, the system grinds to a halt. Even if the underlying problem is resolved, nodes will refuse to create a block if the distance between that block and the previous block exceeds the stability window; after all, if they did produce a block, other nodes would be unable to validate it. The former problem is easily resolved, as it is merely a check in the block production code; resolving the latter is the topic of this chapter.

It would be preferable to avoid the tension altogether, and schedule changes that affect the ledger view for particular \emph{blocks} instead (and consequently, have epoch boundaries also happen at certain blocks). This however requires backing from theoretical research; we will come back to this in \cref{future:block-vs-slot}.

\pagebreak

\subsection{Single-gap case}

It is tempting to think that when there is only a \emph{single} large gap on the chain, there is no problem:
%
\begin{center}
\begin{tikzpicture}
[block/.style={rectangle,draw=black,minimum size=5mm}]
\path (-2,0) -- (11,0); % adjust bounding box
\node at (0,0) (A) [block] {A};
\node at (6,0) (B) [block, dashed] {B};
\node at (7,0) (C) [block, dashed] {C};
\node at (8,0) (D) [block, dashed] {D};
\draw (-2,0) -- (A.west);
\draw (A.east) -- (B.west);
\draw (B.east) -- (C.west);
\draw (C.east) -- (D.west);
\draw (D.east) -- ++(2,0);
%
\node at (B.east) [below=0.6, right] {$\underbrace{\hspace{4cm}}_\text{stability window}$};
\end{tikzpicture}
\end{center}
%
The gap between $A$ and $B$ exceeds the stability window, but this should not matter: it's not the stability window after $A$ that matters, but the stability window after $B$. This seems to be a useful special case: if a problem \emph{does} arise that prevents nodes from producing blocks for an extended period of time, one might hope that this problem does not immediately arise again after the nodes resume producing blocks.

As we saw, the consensus layer always conservatively anchors the stability window at the last known block rather than the first header after the tip. We could change this (and probably should; see \cref{footnote:anchor-after-first-header}), but it turns out this does not actually help very much for this particular problem.
To see this, suppose there is another node in the system which is currently on a fork that intersects with this chain after some block $I$ before the gap: % \begin{center} \begin{tikzpicture}[yscale=0.5,block/.style={rectangle,draw=black,minimum size=5mm}] \node at (-2,-1) (I) [block] {I}; \node at (0,0) (A) [block, dashed] {A}; \node at (6,0) (B) [block, dashed] {B}; \node at (7,0) (C) [block, dashed] {C}; \node at (8,0) (D) [block, dashed] {D}; \draw (-4,-1) -- (I.west); \draw (I.east) -- (A.west); \draw (A.east) -- (B.west); \draw (B.east) -- (C.west); \draw (C.east) -- (D.west); \draw (D.east) -- ++(2,0); % \node at (A.east) [below=0.6, right] {$\underbrace{\hspace{4cm}}_\text{stability window}$}; % \node at (0, -2) (A') [block] {A$'$}; \draw (I.east) -- (A'.west); \end{tikzpicture} \end{center} % The second node must execute a rollback to $I$ in order to be able to adopt the new chain, but from \emph{its} perspective the first unknown block is $A$, not $B$: hence the stability window \emph{must} be anchored at $A$, and the node will be unable to bridge the gap. \section{Pre-genesis} \label{low-density:pre-genesis} In this section we will consider how we might allow nodes to recover from a low density chain, prior to the implementation of the genesis rule. An obvious solution suggests itself: we could just allow chain sync to download blocks when it needs to validate a header which is beyond its forecast range. \subsection{Damage mitigation} The reason the chain sync client doesn't normally download blocks is to limit the amount of unnecessary work an attacker can make it do (prevent DoS attacks, \cref{nonfunctional:network:headerbody}). We might therefore consider if we can restrict \emph{when} we allow the chain sync client to download blocks. Ideally we would do this only ``when necessary'': to bridge the gap on the honest chain. Unfortunately, it is difficult to come up with a criterion that approximates this ideal. 
Consider how the situation evolves from the point of view of a single node: \begin{center} \begin{tikzpicture}[yscale=0.25] % \path (0, 0) coordinate(imm1) node{$\bullet$} node[above]{imm}; \draw (imm1) -- ++(-1,0); \draw (imm1) -- ++(0.5,0); \path (imm1) -- ++(0.5,0) -- ++(1,0) node[pos=0.5]{$\cdots$} -- ++(0.5,0) coordinate(cp1); \draw (cp1) -- ++(-0.5,0); \draw (cp1) -- ++(1, 1.5) -- ++(1, 0) node{$\bullet$}; \draw (cp1) -- ++(1, 0.5) -- ++(1.5, 0) node{$\bullet$}; \draw (cp1) -- ++(1,-0.5) -- ++(0.5, 0) node{$\bullet$}; \draw (cp1) -- ++(1,-1.5) -- ++(1, 0) node{$\bullet$}; \draw [very thick] (4.5,-2) -- (4.5,2) node[above]{now}; % \path (0, -7) coordinate(imm2) node{$\bullet$} node[above]{imm}; \draw (imm2) -- ++(-1,0); \draw (imm2) -- ++(0.5,0); \path (imm2) -- ++(0.5,0) -- ++(1,0) node[pos=0.5]{$\cdots$} -- ++(0.5,0) coordinate(cp2); \draw (cp2) -- ++(-0.5,0); \draw (cp2) -- ++(1, 1.5) -- ++(1, 0) node{$\bullet$}; \draw (cp2) -- ++(1, 0.5) -- ++(1.5, 0) node{$\bullet$}; \draw (cp2) -- ++(1,-0.5) -- ++(0.5, 0) node{$\bullet$}; \draw (cp2) -- ++(1,-1.5) -- ++(1, 0) node{$\bullet$}; \draw [very thick] (5.5,-9) -- (5.5,-5) node[above]{now}; % \path (0, -14) coordinate(imm3) node{$\bullet$} node[above]{imm}; \draw (imm3) -- ++(-1,0); \draw (imm3) -- ++(0.5,0); \path (imm3) -- ++(0.5,0) -- ++(1,0) node[pos=0.5]{$\cdots$} -- ++(0.5,0) coordinate(cp3); \draw (cp3) -- ++(-0.5,0); \draw (cp3) -- ++(1, 1.5) -- ++(1, 0) node{$\bullet$}; \draw (cp3) -- ++(1, 0.5) -- ++(1.5, 0) node{$\bullet$}; \draw (cp3) -- ++(1,-0.5) -- ++(0.5, 0) node{$\bullet$}; \draw (cp3) -- ++(1,-1.5) -- ++(1, 0) node{$\bullet$}; \path (4.5,-12.5) -- (7.5,-12.5) node[pos=0.5,above=-0.1]{$\overbrace{\hspace{3cm}}^\text{$> s$ slots}$}; \draw [very thick] (7.5,-16) -- (7.5,-12) node[above]{now}; % \path (0, -21) coordinate(imm4) node{$\bullet$} node[above]{imm}; \draw (imm4) -- ++(-1,0); \draw (imm4) -- ++(0.5,0); \path (imm4) -- ++(0.5,0) -- ++(1,0) node[pos=0.5]{$\cdots$} -- ++(0.5,0) coordinate(cp4); \draw (cp4) -- ++(-0.5,0); \draw (cp4) -- ++(1, 1.5) -- ++(1, 0) node{$\bullet$} coordinate(before1); \draw (cp4) -- ++(1, 0.5) -- ++(1.5, 0) node{$\bullet$}; \draw (cp4) -- ++(1,-0.5) -- ++(0.5, 0) node{$\bullet$} coordinate(before2); \draw (cp4) -- ++(1,-1.5) -- ++(1, 0) node{$\bullet$}; \path (4.5,-19.5) -- (7.5,-19.5) node[pos=0.5,above=-0.1]{$\overbrace{\hspace{3cm}}^\text{$> s$ slots}$}; \draw [very thick] (10,-23) -- (10,-19) node[above]{now}; \path (before1) -- ++(5,0) coordinate(after1); \path (before2) -- ++(6,0) coordinate(after2); \draw (after1) node{$\bullet$} -- ++(2,0); \draw (after2) node{$\bullet$} -- ++(2,0); \end{tikzpicture} \end{center} \pagebreak The node is tracking the chains of a number of upstream peers. These chains will share some common prefix, which must at least include the tip of our own immutable database (that is, the block $k$ blocks away from our tip), marked ``imm''. When block production is halted due to some problem, the gap between the tips of the chains and the wallclock will start to increase; at some point this gap will exceed the stability window. Finally, when the problem is resolved the nodes will start producing blocks again. \begin{assumption} \label{never-only-malicious} In the period where the honest nodes cannot produce any blocks, malicious nodes cannot either. If that is not the case, we are in trouble anyway; that is a problem which is well outside the scope of this chapter. \end{assumption} \Cref{never-only-malicious} seems to give some hope. 
We may not be able to decide for any \emph{particular} chain if that chain happens to be the honest chain. However, if \emph{none} of the chains contain any blocks in the gap, then eventually it will be true for \emph{all} upstream peers that the gap from the tip of that peer's chain to the wallclock exceeds the stability window. This might suggest the following rule: \begin{failedattempt} Only allow the chain sync client to download blocks if this would be required for \emph{all} peers. \end{failedattempt} Unfortunately, this rule does not work because as soon as we bridge the gap for \emph{one} of our peers, that condition no longer holds: % \begin{center} \begin{tikzpicture}[yscale=0.25] \path (0, -21) coordinate(imm4) node{$\bullet$} node[above]{imm}; \draw (imm4) -- ++(-1,0); \draw (imm4) -- ++(0.5,0); \path (imm4) -- ++(0.5,0) -- ++(1,0) node[pos=0.5]{$\cdots$} -- ++(0.5,0) coordinate(cp4); \draw (cp4) -- ++(-0.5,0); \draw (cp4) -- ++(1, 1.5) -- ++(1, 0) node{$\bullet$} coordinate(before1); \draw (cp4) -- ++(1, 0.5) -- ++(1.5, 0) node{$\bullet$}; \draw (cp4) -- ++(1,-0.5) -- ++(0.5, 0) node{$\bullet$} coordinate(before2); \draw (cp4) -- ++(1,-1.5) -- ++(1, 0) node{$\bullet$}; \path (4.5,-19.5) -- (7.5,-19.5) node[pos=0.5,above=-0.1]{$\overbrace{\hspace{3cm}}^\text{$> s$ slots}$}; \draw [very thick] (10,-23) -- (10,-19) node[above]{now}; \path (before1) -- ++(5,0) coordinate(after1); \draw (before2) -- ++(6,0) coordinate(after2); \draw (after1) node{$\bullet$} -- ++(2,0); \draw (after2) node{$\bullet$} -- ++(2,0); \end{tikzpicture} \end{center} % Now one of our chains has a tip which is near the wallclock, and so the condition no longer holds. Okay, you might say, but it was true at \emph{some} point, and when it was true, it would have allowed the chain sync client to download blocks for \emph{any} peer. Thus, we could try the following rule: \begin{failedattempt} When we detect that the tips of all upstream peers are more than the stability window away from the wallclock, give the chain sync client a chance to download blocks for \emph{all} peers. \end{failedattempt} This \emph{might} work, but it's very stateful. What does ``all peers'' mean exactly? All peers we are currently connected to? What if we connect to another peer later? What if the node has restarted in the meantime, do we need to persist this state? Will we need some notion of peer identity? Perhaps all of these questions have answers, but this does not seem like a clean solution. As a final attempt, we might try to ensure that there is only a \emph{single} chain after we resolve the problem that was preventing block production. Suppose this could somehow be guaranteed (out of band communication to agree on a block in the common prefix, use a BFT-like leadership selection for a while, etc.). Then we could try the following rule: \begin{failedattempt} When we detect that the tips of all upstream peers are more than the stability window away from the wallclock, allow the chain sync client to download enough blocks to bridge the gap for \emph{one} peer. Allow the other peers to bridge the gap only if they contain the \emph{same} header after the gap. \end{failedattempt} Unfortunately, this still cannot work. Even if the honest nodes agree to only produce a single chain after the gap, we cannot prevent an adversary from constructing another chain. If the node then happens to pick the adversary's chain as the one-and-only allowed header to jump the gap, it would be unable to then switch to the honest chain later. 
\pagebreak

\subsection{Damage analysis}

If we cannot limit when the chain sync client is allowed to download and validate blocks, then let's analyse exactly what the potential for denial of service attacks really is.

\begin{lemma}
When the node is up to date, the chain sync client will never have to download any blocks.
\end{lemma}

\begin{proof}
The Optimum analysis \cite{cryptoeprint:2017:573} tells us that the honest chains will not diverge by more than $k$ blocks, and that this means that their intersection cannot be more than $3k/f$ slots away from the wallclock (provided block production is not halted, of course). This means that any header that would be more than the stability window away from the intersection point would have a slot number past the wallclock, and would therefore be invalid.\footnote{Though we allow for some minimal clock skew, headers past the wallclock should be considered invalid if this exceeds $s$ slots from the immutable tip, even if they would still fall within the permissible clock skew. This is an edge case that was important for implementation of genesis as well; see \cref{genesis:becoming-alert:DoS}.}
\end{proof}

This means that we only have to worry about DoS attacks while a node is syncing. As a first observation, node performance is less critical here. The node is anyway not producing blocks while syncing, so causing the node to slow down temporarily is not a huge deal (\emph{cf.} also \cref{genesis:optimisations} where we argue it's less essential during syncing to make the worst case performance and the normal case performance the same). It will therefore suffice to simply \emph{bound} the amount of work a malicious node can make us do.

We have to make sure that we can see at least $k+1$ headers from each peer (we want to support a rollback of $k$ blocks, and chain selection is based on length, so if we can validate $k+1$ headers, we have seen enough to do a length comparison and decide we want to switch to the other chain). This means we would need to download at most $k$ blocks. This bounds the amount of \emph{memory} we might need to dedicate to any chain,\footnote{Currently the length of the fragments we keep in memory for each upstream peer is bounded by the forecast range, but that natural bound would of course no longer work if we allow the chain sync client to download blocks.} but does not limit how much \emph{work} they can make us do: an attacker with even a small amount of stake could construct lots of chains that fork off the main chain, and so we'd end up downloading and validating lots of blocks.

We can limit the impact of this by rate limiting rollback messages, which would be useful for other purposes as well.\footnote{For example, it can help avoid a DoS attack where an attacker attempts to flood our volatile DB with lots of useless blocks.} Moreover, there is no real asymmetry here between the attacker and the defender: the cost of downloading and validating a block on our side is not too dissimilar from the cost of producing and providing that block on the side of the attacker, and all the attacker would gain in doing so is slowing down a node's syncing speed. (Admittedly, if we adopt more than $k$ blocks from the adversarial chain we'd be in trouble, but that is a problem solved by the Genesis chain selection rule.)
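To make this bound a little more concrete, the following sketch shows the shape of the decision the chain sync client could make for each peer while syncing. It is only an illustration, not the actual implementation in the consensus layer; all names in the sketch (\lstinline!SlotNo!, \lstinline!mayDownloadBlockFor!, the constant \lstinline!K!) are made up for this example.
\begin{lstlisting}
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t SlotNo;   /* hypothetical slot number type              */
#define K 2160             /* security parameter k (example value only)  */

/* Decide whether we may fetch the block for a header that we cannot
   validate through forecasting alone.  anchorSlot is the slot of the last
   block we have, stabilityWindow is the forecast range, and blocksFetched
   is the number of blocks already downloaded for this peer's current
   candidate fragment. */
bool mayDownloadBlockFor(SlotNo headerSlot,
                         SlotNo anchorSlot,
                         SlotNo stabilityWindow,
                         unsigned blocksFetched)
{
  if (headerSlot <= anchorSlot + stabilityWindow)
    return false;              /* forecasting suffices; no download needed */
  return blocksFetched < K;    /* at most k blocks for this candidate      */
}
\end{lstlisting}
This only bounds the memory needed for a single candidate fragment; as discussed above, an attacker can still cause repeated work by rolling back and forth, which is why rate limiting rollback messages matters.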
\pagebreak

\section{Post-genesis}
\label{low-density:post-genesis}

With the implementation of the genesis rule, discussed in detail in \cref{genesis}, some things get easier, but unfortunately some things get more difficult.

\subsection{Pre-disaster genesis window}

Suppose the chain is operating as normal until disaster strikes and the nodes stop producing blocks:
%
\begin{center}
\begin{tikzpicture}[yscale=0.5]
\draw (0,0) -- (3,0) node{$\bullet$} coordinate(i);
\draw (i) -- ++(1, 1) -- ++(1, 0);
\draw (i) -- ++(1, 0) -- ++(1.5, 0);
\draw (i) -- ++(1, -1) -- ++(0.5, 0);
\path (i) -- ++(2.5, 0) node[pos=0.5,above=0.5cm]{$\overbrace{\hspace{2.5cm}}^\text{$\le k$ blocks}$};
\draw [very thick] (6,-1.5) -- (6,2) node[above]{disaster};
\end{tikzpicture}
\end{center}
%
While the Genesis analysis \cite{cryptoeprint:2018:378} tells us that that common intersection point is \emph{at most} $k$ blocks away, in practice it will actually be much less than $k$ most of the time, a handful of blocks in typical cases. This means that when the nodes start producing blocks again, chain selection will be looking at a window of $s$ slots where all chains have very low density:\footnote{Prefix selection does a length comparison when we can see all chains to their tip, meaning all chains terminate within the $s$ window. It is important that we don't reinterpret that as ``all chains are less than $k$ \emph{blocks} away from the intersection point''. If we did, we would conclude in this case that we can still do a length comparison when the chains continue after the end of the disaster period; that is not correct: it would mean that while the chains are still growing we would come to one conclusion, but then once the chains grow past the window of $k$ blocks, we would switch to comparing density and might come to a \emph{different} conclusion.}
%
\begin{center}
\begin{tikzpicture}[yscale=0.5]
\draw (0,0) -- (3,0) node{$\bullet$} coordinate(i);
\draw (i) -- ++(0.25, 1) -- ++(0.5, 0);
\draw (i) -- ++(0.25, 0) -- ++(0.75, 0);
\draw (i) -- ++(0.25, -1) -- ++(0.25, 0);
\draw [very thick] (4.5,-1.5) -- (4.5,2.5) node[above]{disaster\vphantom{y}};
\draw [very thick] (6.5,-1.5) -- (6.5,2.5) node[above]{recovery};
\draw [dashed] (i) -- ++( 0, 2) -- ++( 3, 0) -- ++( 0, -4) -- ++(-3, 0) node[pos=0.5,below]{$\underbrace{\hspace{3cm}}_\text{$s$ slots}$} -- cycle;
%
\draw (6.5, 1) -- (8.5, 1);
\draw (6.5, 0) -- (7.5, 0);
\draw (6.5, -1) -- (8, -1);
\end{tikzpicture}
\end{center}
%
In effect we are doing a density comparison over very short fragments. In general this is not meaningful; in the extreme case, where that fragment contains only a single slot, density will either be 100\% or 0\%. It is tempting to think that we could just \emph{grow} the genesis window to include part of the post-disaster chain. Growing the genesis window is however not sound: once we get more than $s$ slots away from the intersection point, an adversary can start to influence the leadership schedule and so density comparisons are no longer meaningful.

Essentially what this means is that after disaster recovery we arbitrarily pick any of the chains from before the disaster to continue. This probably does not matter too much; at worst more blocks are lost than strictly necessary, but those transactions can be resubmitted and we're anyway talking about disaster recovery; some loss is acceptable.\todo{Verify} It might \emph{even} be okay if the chain we happened to pick was constructed by an adversarial node.
After all, at most they can have constructed $k$ blocks, and all they can do is selectively \emph{omit} transactions; if we continue the chain based on such an adversarial chain, the damage they can do is very limited.\todo{Verify}

\emph{However.} Suppose we do make an arbitrary choice and the chain resumes. Nothing is preventing an adversary from forking off a new chain just prior to the disaster region \emph{after the fact}. If they do, and new nodes joining the system end up choosing that chain, they are in serious trouble; now they are following a chain that is basically under the control of the adversary.

\pagebreak

This ability of adversaries to construct new forks before areas of low density on the chain means that these areas are a serious risk to security. Indeed, somewhat ironically this risk is made \emph{worse} by the genesis rule. If we look at chain length only, the honest chain will probably be longer than whatever chain an attacker forges; but if we look at density, an attacker that can produce even a single block in $s$ slots might already have a sufficient advantage. This means that some kind of disaster recovery becomes even more important after we implement the genesis rule. Ideally we would patch the chain up, but there is an easier option which can work (at least as a temporary solution): it suffices to hardcode a pre-disaster block as the agreed-on pre-disaster tip.

\subsection{Post-disaster genesis window}

So far we've been talking about the genesis window as we approach the disaster. Suppose we choose \emph{some} block as our pre-disaster tip; either by randomly selecting one of the chains (or if by luck all chains happen to converge pre-disaster) or by hardcoding a preference for a certain block:
%
\begin{center}
\begin{tikzpicture}[yscale=0.5]
\draw (0,0) -- (3,0) coordinate(i) node{$\bullet$};
\draw [dotted] (i) -- ++(0.25, 1) -- ++(0.5, 0);
\draw (i) -- ++(0.25, 0) -- ++(0.75, 0) node{$\bullet$} coordinate(pre-disaster-tip);
\draw [dotted] (i) -- ++(0.25, -1) -- ++(0.25, 0);
\draw [very thick] (4.5,-1.5) -- (4.5,2.5) node[above]{disaster\vphantom{y}};
\draw [very thick] (6.5,-1.5) -- (6.5,2.5) node[above]{recovery};
\draw [dashed] (pre-disaster-tip) -- ++( 0, 2) -- ++( 3, 0) -- ++( 0, -4) -- ++(-3, 0) node[pos=0.5,below]{$\underbrace{\hspace{3cm}}_\text{$s$ slots}$} -- cycle;
%
\draw (6.5, 1) -- (8.5, 1);
\draw (6.5, 0) -- (7.5, 0);
\draw (6.5, -1) -- (8, -1);
\end{tikzpicture}
\end{center}
%
Having made this choice, we are \emph{again} faced with a comparison between chains which all have very low density within the window (in the extreme case, even zero). This means that here we effectively have a \emph{second} arbitrary choice between chains, with all the same dangers (in particular the danger of an attacker forking off a new chain after the fact).
However, in this case we have a way out: % \begin{lemma} \label{lemma:shift-genesis-window} Suppose we have decided on a particular pre-disaster tip, and the chains we see look like this: % \begin{center} \begin{tikzpicture}[yscale=0.5] \draw (0,0) -- (3,0) node{$\bullet$} node[above left]{tip} coordinate(tip); \draw (tip) -- ++(4,1.5) node{$\bullet$} -- ++(2.5,0) node[right]{$\cdots$}; \draw (tip) -- ++(6,0.5) node{$\bullet$} -- ++(0.5,0) node[right]{$\cdots$}; \draw (tip) -- ++(5,-0.5) node{$\bullet$} -- ++(1.5,0) node[right]{$\cdots$}; \draw (tip) -- ++(3,-1.5) node{$\bullet$} -- ++(3.5,0) node[right]{$\cdots$}; % \draw [dashed] (tip) -- ++(0,2) -- ++(4.5,0) node[pos=0.5, above]{$\overbrace{\hspace{4.5cm}}^\text{$s$ slots}$} -- ++(0,-4) -- ++(-4.5,0) -- cycle; \end{tikzpicture} \end{center} % Then we can shift up the genesis lookahead window until it starts at the first block after the tip: % \begin{center} \begin{tikzpicture}[yscale=0.5] \draw (0,0) -- (3,0) node{$\bullet$} node[above left]{tip} coordinate(tip); \draw (tip) -- ++(4,1.5) node{$\bullet$} -- ++(2.5,0) node[right]{$\cdots$}; \draw (tip) -- ++(6,0.5) node{$\bullet$} -- ++(0.5,0) node[right]{$\cdots$}; \draw (tip) -- ++(5,-0.5) node{$\bullet$} -- ++(1.5,0) node[right]{$\cdots$}; \draw (tip) -- ++(3,-1.5) node{$\bullet$} -- ++(3.5,0) node[right]{$\cdots$}; % \draw [dashed] (tip) ++(3,0) -- ++(0,2) -- ++(4.5,0) node[pos=0.5, above]{$\overbrace{\hspace{4.5cm}}^\text{$s$ slots}$} -- ++(0,-4) -- ++(-4.5,0) -- cycle; \end{tikzpicture} \end{center} \end{lemma} \begin{proof} The first block that could be produced by an adversary is the first block after the tip. This adversarial block cannot influence the leadership schedule until at least $3k/f$ slots later, which is also the size of the lookahead window ($s$). Therefore a density comparison within the shifted window will still favour the honest chains. \end{proof} % \Cref{lemma:shift-genesis-window} means that we can shift the genesis window until after the disaster, and avoid the second arbitrary choice between chains. In particular, it means we can definitely make it across the gap safely if we \emph{mark} the before-disaster block (to avoid picking an adversary's block). \pagebreak \subsection{(No) need for gap jumping} In \cref{low-density:pre-genesis} we discuss that prior to the implementation of the genesis rule, we sometimes need to allow the chain sync client to download blocks. Since chain selection was based on length, we need to be able to validate a sufficient number of headers to get a fragment that is longer than our current chain; in the case of a disaster, that might mean validating a header that is more than $s$ slots away from our latest usable ledger state to validate that header, and hence we may need to download some blocks. The genesis rule, in principle, \emph{never needs to look past $s$ slots}. It makes all of its decisions based on a window of $s$ slots; if a node reports a header past the end of that window, that just tells us we have seen everything we need to see about that chain within the window. There is no need to validate this header: any headers \emph{within} the window contribute to the density of the chain and are validated, any headers \emph{past} the window just cap that density; nodes cannot increase their chain's density with an invalid header past the window, and so nothing can go wrong if we do not validate that header. This changes however if we want to make use of \cref{lemma:shift-genesis-window}. 
It is of course necessary that we validate the headers \emph{within} the window; if we shift the window, we are no longer guaranteed that ``within the window'' is synonymous with ``at most $s$ slots away from the ledger state we have available''. Whether or not this opens us up to denial of service attacks depends on when exactly we shift the window. However, if we do this only when we have some kind of explicit disaster recovery (where we mark the pre-disaster block), or when the density in the window we see drops below a certain threshold, then the scope for a denial of service attack is very limited indeed.

\subsection{In the absence of adversaries}

In the consensus tests (\cref{testing:consensus}) periods where no blocks are being produced are hard to avoid. However, we do not (currently) model adversarial behaviour. This means that any kind of explicit disaster recovery is not needed: if pre-disaster and post-disaster we end up picking an ``arbitrary'' chain, consensus is still guaranteed. After all, the choice is not ``arbitrary'' in the sense that different nodes may pick different chains; it is only ``arbitrary'' in the sense that we are doing a density comparison on a fragment that is too short (it may be necessary to add a deterministic tie-breaker in case there are multiple fragments with equal density).
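To illustrate the kind of comparison meant here, the sketch below compares two candidate fragments by their block count within the current window and falls back to a deterministic tie-breaker. It is not the actual implementation; the structure, field names and the particular tie-breaker (smallest tip hash) are invented purely for the purpose of the example.
\begin{lstlisting}
#include <stdint.h>

typedef struct {
  unsigned blocksInWindow;  /* number of blocks inside the s-slot window */
  uint64_t tipHash;         /* stand-in for a real block hash            */
} Candidate;

/* Negative result: prefer a; positive: prefer b; zero: equal. */
int compareCandidates(const Candidate *a, const Candidate *b)
{
  if (a->blocksInWindow != b->blocksInWindow)          /* density first  */
    return (int) b->blocksInWindow - (int) a->blocksInWindow;
  if (a->tipHash != b->tipHash)                        /* deterministic  */
    return a->tipHash < b->tipHash ? -1 : 1;           /* tie-breaker    */
  return 0;
}
\end{lstlisting}
The essential property is that the tie-breaker is a pure function of the candidates, so that all nodes faced with the same fragments make the same choice.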
{ "alphanum_fraction": 0.7085341457, "avg_line_length": 46.1090146751, "ext": "tex", "hexsha": "413ff5339e17f978d3722c2fbc3b6d6a7dc982a6", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-02-21T16:37:52.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-13T21:23:06.000Z", "max_forks_repo_head_hexsha": "556083a6d5e0fb94c912b561a5f1f7afd1113dc0", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Quantum-One-DLT/shardagnostic-network", "max_forks_repo_path": "shardagnostic-consensus/docs/report/chapters/future/lowdensity.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "556083a6d5e0fb94c912b561a5f1f7afd1113dc0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Quantum-One-DLT/shardagnostic-network", "max_issues_repo_path": "shardagnostic-consensus/docs/report/chapters/future/lowdensity.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "556083a6d5e0fb94c912b561a5f1f7afd1113dc0", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Quantum-One-DLT/shardagnostic-network", "max_stars_repo_path": "shardagnostic-consensus/docs/report/chapters/future/lowdensity.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13383, "size": 43988 }
\section{The \findii algorithm---reuse of specification elements}
\Label{sec:findii}

In this section we specify \find in a slightly different way. Our approach is motivated by a considerable number of closely related \acsl formulas in the contract \specref{find} and the implementation \implref{find}.

\begin{lstlisting}[style=acsl-block]
\exists integer i; 0 <= i < n && a[i] == v;
\forall integer i; 0 <= i < \result ==> a[i] != v;
\forall integer i; 0 <= i < n ==> a[i] != v;
\forall integer k; 0 <= k < i ==> a[k] != v;
\end{lstlisting}

Note that the first formula is the negation of the third one.

\subsection{The predicates \SomeEqual and \NoneEqual}

In order to be more explicit about the commonalities of these formulas we define a predicate, called \logicref{SomeEqual}, which describes the situation that there is a valid index \inl{i} where~\inl{a[i]} equals~\inl{v}.

\input{Listings/SomeNone.acsl.tex}

We first remark that the predicate \SomeEqual, its negation \NoneEqual and the lemmas \NotSomeEqualNoneEqual and \NoneEqualNotSomeEqual are encapsulated in the \emph{axiomatic block} \logicref{SomeNone}. This is a \emph{feeble} attempt to establish some modularization for the various predicates, logic functions and lemmas. We say \emph{feeble} because axiomatic blocks are, in contrast to \acsl \inl{module}s, \emph{not} name spaces. \acsl modules, however, are not yet implemented by \framac.

We also remark that both predicates come in overloaded versions. The first of these versions is a definition for array sections while the second definition is for the case of complete arrays.

Note that we have provided a label, viz.\ \inl{A}, to the predicate \SomeEqual. Its purpose is to express that the evaluation of the predicate depends on a memory state, viz.\ the contents of \inl{a[0..n-1]}. In general, we have to write
\begin{lstlisting}[style=acsl-block]
\exists integer i; 0 <= i < n && \at(a[i],A) == v;
\end{lstlisting}
in order to express that we refer to the value \inl{a[i]} in the program state~\inl{A}. However, \acsl allows us to abbreviate \inl{\\at(a[i],A)} by \inl{a[i]} if, as in \SomeEqual or \NoneEqual, the label~\inl{A} is the only available label. In particular, we have omitted the label in the overloaded versions for complete arrays.

%\clearpage
\subsection{Formal specification of \findii}

With the predicates \logicref{SomeEqual} and \logicref{NoneEqual} we are able to encapsulate all uses of the universal and existential quantifiers in both the specification and implementation of \findii. As a result, the revised contract \specref{findii} is more concise than that of \specref{find}.
%
In particular, it can be seen immediately that the conditions in the assumes clauses of the two behaviors \inl{some} and \inl{none} are mutually exclusive since one is the literal negation of the other. Moreover, the requirement that \find returns the smallest index can also be expressed using the \logicref{NoneEqual} predicate, as depicted with the last postcondition of behavior \inl{some}.

\input{Listings/find2.h.tex}

We also enriched the specification of \find by user-defined names (sometimes called \emph{labels}, too, the distinction from program state identifiers being obvious) to refer to the \inl{requires} and \inl{ensures} clauses. We highly recommend this practice in particular for more complex annotations. For example, \framac can be instructed to verify only clauses with a given name.
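To illustrate what such user-defined names look like, the following schematic clauses carry the names \inl{valid} and \inl{bound} (the conditions shown here are only an example and are not taken from the actual contract above):
\begin{lstlisting}[style=acsl-block]
requires valid: \valid_read(a + (0..n-1));
ensures  bound: 0 <= \result <= n;
\end{lstlisting}
A tool can then be pointed at the clause \inl{bound}, say, without having to mention the other properties of the contract.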
\clearpage \subsection{Implementation of \findii} The predicate \NoneEqual is also used in the loop annotation inside the implementation of \implref{findii}. Note that, as in the case of the specification, we use labels to name individual annotations. \input{Listings/find2.c.tex} %\clearpage
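For readers without the listing at hand, the loop annotation has roughly the following shape (a schematic reconstruction; the exact types and clause names in the actual implementation may differ):
\begin{lstlisting}[style=acsl-block]
loop invariant bound:     0 <= i <= n;
loop invariant not_found: NoneEqual(a, i, v);
loop assigns i;
loop variant n-i;
\end{lstlisting}
The invariant \inl{not_found} is exactly the \NoneEqual predicate applied to the already inspected array section \inl{a[0..i-1]}: as long as the loop is still running, no element equal to \inl{v} has been found.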
{ "alphanum_fraction": 0.7524335701, "avg_line_length": 35.523364486, "ext": "tex", "hexsha": "7a824ab98bae4de9f3f3165c30a5edc8bc444f59", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z", "max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z", "max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fraunhoferfokus/acsl-by-example", "max_forks_repo_path": "Informal/nonmutating/find2.tex", "max_issues_count": 22, "max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z", "max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fraunhoferfokus/acsl-by-example", "max_issues_repo_path": "Informal/nonmutating/find2.tex", "max_line_length": 93, "max_stars_count": 90, "max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fraunhoferfokus/acsl-by-example", "max_stars_repo_path": "Informal/nonmutating/find2.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z", "max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z", "num_tokens": 1011, "size": 3801 }
%...................................................................... \subsection{Syntax}\label{sec:syntax} %...................................................................... \begin{syntaxdiagram}{Program} \node[junction,right=of Program] (startl1) {}; \node[junction,right=of startl1] (bl) {}; \node[junction,right=of bl] (l) {}; \node[junction,right=of l] (al) {}; \node[junction,right=of al] (bcd) {}; \node[junction,right=of bcd,xshift=3mm] (cd) {}; \node[junction,right=of cd,xshift=3mm] (acd) {}; \node[junction,right=of acd] (bvd) {}; \node[junction,right=of bvd,xshift=7mm] (vd) {}; \node[junction,right=of vd,xshift=7mm] (avd) {}; \node[junction,right=of avd] (endl1) {$\cdots$}; \node[junction,below=of startl1,yshift=-10mm] (startl2) {}; \node[junction,right=of startl2] (brl) {}; \node[junction,right=of brl] (rl) {}; \node[junction,right=of brl] (rl) {}; \node[junction,right=of rl] (arl) {}; \node[junction,right=of arl] (bass) {}; \node[junction,right=of bass,xshift=3mm] (ass) {}; \node[junction,right=of ass,xshift=3mm] (aass) {}; \node[junction,right=of aass] (endl2) {$\diamond$}; \node[nonterminal,below=of l] (Lexicon) {Lexicon}; \node[nonterminal,below=of cd] (ClassDecls) {ClassDecls}; \node[nonterminal,below=of vd] (GlobalVarDecls) {GlobalVarDecls}; \node[nonterminal,below=of rl] (Rules) {Rules}; \node[nonterminal,below=of ass] (Assertions) {Assertions}; \graph [use existing nodes] { startl1 -- bl -- l -- al -- bcd -- cd -- acd -- bvd -- vd -- avd -> endl1 }; \graph [use existing nodes] { startl2 -- brl -- rl -- arl -- bass -- ass -- aass -> endl2 }; \syndiagAlternative{bl}{Lexicon}{Lexicon}{al} \syndiagAlternative{bcd}{ClassDecls}{ClassDecls}{acd} \syndiagAlternative{bvd}{GlobalVarDecls}{GlobalVarDecls}{avd} \syndiagAlternative{brl}{Rules}{Rules}{arl} \syndiagAlternative{bass}{Assertions}{Assertions}{aass} \end{syntaxdiagram} \begin{syntaxdiagram}{Lexicon} \node[junction,right=of Lexicon] (startl1) {}; \node[terminal,right=of startl1] (lexicon) {\texttt{lexicon}}; \node[junction,right=of lexicon] (bvar) {}; \node[terminal,right=of bvar] (var) {\emph{VAR}}; \node[terminal,right=of var] (arrow) {\texttt{->}}; \node[terminal,right=of arrow] (string) {\emph{string}}; \node[junction,right=of string] (astring) {}; \node[junction,right=of astring] (end) {}; \node[junction,above=of string] (abs) {}; \graph [use existing nodes] { startl1 -- lexicon -- bvar -- var -- arrow -- string -- astring -> end }; \syndiagBridge{astring}{abs}{bvar} \end{syntaxdiagram} %...................................................................... %...................................................................... 
\begin{syntaxdiagram}{ClassDecls} \node[junction,right=of ClassDecls] (start) {}; \node[junction,right=of start,xshift=-3mm] (bdecl) {}; \node[terminal,right=of bdecl,xshift=-3mm] (class) {\texttt{class}}; \node[terminal,right=of class,xshift=-3mm] (classname) {\emph{VAR}}; \node[junction,right=of classname,xshift=-4mm] (bextends) {}; \node[terminal,right=of bextends,xshift=-4mm] (extends) {\texttt{extends}}; \node[terminal,right=of extends,xshift=-3mm] (extendsname) {\emph{VAR}}; \node[junction,right=of extendsname,xshift=-4mm] (bFields) {}; \node[nonterminal,right=of bFields,xshift=-4mm] (Fields) {Fields}; \node[junction,right=of Fields,xshift=-3mm] (aFields) {}; \node[junction,right=of aFields,xshift=-3mm] (end) {}; \node[junction,above=of extends] (abextends) {}; \node[junction,below=of extends] (blextends) {}; \graph [use existing nodes] { start -- bdecl -- class -- classname -- bextends -- extends -- extendsname -- bFields -- Fields -- aFields -> end }; \syndiagBridge{aFields}{abextends}{bdecl} \syndiagBridge{bextends}{blextends}{bFields} \end{syntaxdiagram} %...................................................................... %...................................................................... \begin{syntaxdiagram}{Fields} \node[junction,right=of Fields] (start) {}; \node[junction,right=of start] (bleftbrace) {}; \node[terminal,right=of bleftbrace] (leftbrace) {\verb|{|}; \node[junction,right=of leftbrace] (aleftbrace) {}; \node[nonterminal,right=of aleftbrace] (VarDecl) {VarDecl}; \node[junction,right=of VarDecl] (brightbrace) {}; \node[terminal,right=of brightbrace] (rightbrace) {\verb|}|}; \node[junction,right=of rightbrace] (arightbrace) {}; \node[junction,right=of arightbrace] (end) {}; \node[junction,above=of VarDecl] (abVarDecl) {}; \node[junction,below=of VarDecl] (blVarDecl) {}; \node[junction,below=of blVarDecl] (blblVarDecl) {}; \graph [use existing nodes] { start -- bleftbrace -- leftbrace -- aleftbrace -- VarDecl -- brightbrace -- rightbrace -- arightbrace -> end }; \syndiagBridge{brightbrace}{abVarDecl}{aleftbrace} \syndiagBridge{aleftbrace}{blVarDecl}{brightbrace} \syndiagBridge{bleftbrace}{blblVarDecl}{arightbrace} \end{syntaxdiagram} %...................................................................... %...................................................................... \begin{syntaxdiagram}{GlobalVarDecls} \node[junction,right=of GlobalVarDecls] (start) {}; \node[junction,right=of start] (bdecl) {}; \node[terminal,right=of bdecl] (decl) {\texttt{decl}}; \node[nonterminal,right=of decl] (VarDecl) {VarDecl}; \node[junction,right=of VarDecl] (aVarDecl) {}; \node[junction,right=of aVarDecl] (end) {}; \node[junction,above=of VarDecl] (abv) {}; \graph [use existing nodes] { start -- bdecl -- decl -- VarDecl -- aVarDecl -> end }; \syndiagBridge{aVarDecl}{abv}{bdecl} \end{syntaxdiagram} %...................................................................... %...................................................................... \begin{syntaxdiagram}{VarDecl} \node[junction,right=of VarDecl] (start) {}; \node[junction,right=of start] (bvar) {}; \node[terminal,right=of bvar] (var) {\emph{VAR}}; \node[terminal,right=of var] (colon) {\texttt{:}}; \node[nonterminal,right=of colon] (Tp) {Tp}; \node[junction,right=of Tp] (aTp) {}; \node[junction,right=of aTp] (end) {}; \graph [use existing nodes] { start -- bvar -- var -- colon -- Tp -- aTp -> end }; \end{syntaxdiagram} %...................................................................... 
%......................................................................
\begin{syntaxdiagram}{Rules}
\node[junction,right=of Rules] (start) {};
\node[junction,right=of start] (brule) {};
\node[nonterminal,right=of brule] (Fact) {Fact};
\node[junction,right=of Fact] (arule) {};
\node[junction,right=of arule] (end) {};
\node[junction,above=of Fact] (abfact) {};
\node[nonterminal,below=of Fact] (Rule) {Rule};
\graph [use existing nodes] {
start -- brule -- Fact -- arule -> end
};
\syndiagBridge{arule}{abfact}{brule}
\syndiagAlternative{brule}{Rule}{Rule}{arule}
\end{syntaxdiagram}
%......................................................................
%......................................................................
\begin{syntaxdiagram}{Rule}
\node[junction,right=of Rule] (startl1) {};
\node[terminal,right=of startl1] (rule) {\texttt{rule}};
\node[terminal,right=of rule] (rname) {\texttt{<} \emph{VAR} \texttt{>}};
\node[junction,right=of rname,xshift=-4mm] (bfor) {};
\node[terminal,right=of bfor,xshift=-4mm] (for) {\texttt{for}};
\node[junction,right=of for,xshift=-3mm] (bVarDecl) {};
\node[nonterminal,right=of bVarDecl,xshift=-3mm] (VarDecl) {VarDecl};
\node[junction,right=of VarDecl,xshift=-4mm] (aVarDecl) {};
\node[junction,right=of aVarDecl,xshift=-4mm] (endl1) {$\cdots$};
\node[junction,below=of startl1,yshift=-10mm] (startl2) {};
\node[terminal,right=of startl2,xshift=-4mm] (if) {\texttt{if}};
\node[nonterminal,right=of if,xshift=-3mm] (Expr1) {Expr};
\node[terminal,right=of Expr1,xshift=-4mm] (then) {\texttt{then}};
\node[nonterminal,right=of then,xshift=-3mm] (Expr2) {Expr};
\node[junction,right=of Expr2,xshift=-3mm] (endl2) {};
\node[junction,above=of VarDecl] (abVarDecl) {};
\node[terminal,below=of VarDecl] (blVarDecl) {\texttt{,}};
\graph [use existing nodes] {
startl1 -- rule -- rname -- bfor -- for -- bVarDecl -- VarDecl -- aVarDecl -> endl1
};
\graph [use existing nodes] {
startl2 -- if -- Expr1 -- then -- Expr2 -> endl2
};
\syndiagBridge{bfor}{abVarDecl}{aVarDecl}
\syndiagBridge{aVarDecl}{blVarDecl}{bVarDecl}
\end{syntaxdiagram}
%......................................................................
%......................................................................
\begin{syntaxdiagram}{Fact}
\node[junction,right=of Fact] (startl1) {};
\node[terminal,right=of startl1] (fact) {\texttt{fact}};
\node[terminal,right=of fact] (rname) {\texttt{<} \emph{VAR} \texttt{>}};
\node[junction,right=of rname,xshift=-4mm] (bfor) {};
\node[terminal,right=of bfor,xshift=-4mm] (for) {\texttt{for}};
\node[junction,right=of for,xshift=-3mm] (bVarDecl) {};
\node[nonterminal,right=of bVarDecl,xshift=-3mm] (VarDecl) {VarDecl};
\node[junction,right=of VarDecl,xshift=-4mm] (aVarDecl) {};
\node[nonterminal,right=of aVarDecl,xshift=-3mm] (Expr) {Expr};
\node[junction,right=of Expr,xshift=-4mm] (endl1) {};
\node[junction,above=of VarDecl] (abVarDecl) {};
\node[terminal,below=of VarDecl] (blVarDecl) {\texttt{,}};
\graph [use existing nodes] {
startl1 -- fact -- rname -- bfor -- for -- bVarDecl -- VarDecl -- aVarDecl -- Expr -> endl1
};
\syndiagBridge{bfor}{abVarDecl}{aVarDecl}
\syndiagBridge{aVarDecl}{blVarDecl}{bVarDecl}
\end{syntaxdiagram}
%......................................................................
%......................................................................
\begin{syntaxdiagram}{Assertions} \node[junction,right=of Assertions] (start) {}; \node[junction,right=of start] (bassert) {}; \node[terminal,right=of bassert] (assert) {\texttt{assert}}; \node[nonterminal,right=of assert] (expr) {Expr}; \node[junction,right=of expr] (aexpr) {}; \node[junction,right=of aexpr] (end) {}; \node[junction,below=of expr] (blexpr) {}; \node[junction,above=of expr] (abexpr) {}; \graph [use existing nodes] { start -- bassert -- assert -- expr -- aexpr -> end }; \syndiagBridge{aexpr}{abexpr}{bassert} \syndiagBridge{bassert}{blexpr}{aexpr} \end{syntaxdiagram} %...................................................................... A \nonterminalref{Program} is the main processing unit. It consists of a lexicon, a list of class declarations, global variable declarations, rules and assertions, all of which can be missing. A \nonterminalref{Lexicon} maps identifiers of the program to GF strings\remms{Inari: more details}. Class declarations (\nonterminalref{ClassDecls}) come in several shapes. In its simplest form, a class declaration introduces a new class and relates it to a superclass, as in \begin{lstlisting} class Lawyer extends Person \end{lstlisting} The \texttt{extends} clause can also be omitted, in which case the superclass is implicitly assumed to be \texttt{Class} (see \secref{sec:typing} for the built-in class hierarchy). Thus, \begin{lstlisting} class Person \end{lstlisting} is equivalent to: \begin{lstlisting} class Person extends Class \end{lstlisting} New fields can be added to a class with field declarations (\nonterminalref{Fields}). These are variable declarations enclosed in braces; they can also be missing altogether (equivalent to empty field declarations \texttt{\{\}}). For example, \begin{lstlisting} class Person { name : String age : Integer } class Lawyer extends Person { companyName : String } \end{lstlisting} introduces two fields \texttt{name} and \texttt{age} to class \texttt{Person}, whereas \texttt{Lawyer} inherits \texttt{name} and \texttt{age} from \texttt{Person} and in addition defines a third field \texttt{companyName}. Global variable declarations (\nonterminalref{GlobalVarDecls}) introduce names that are meant to have global visibility in the program. In the \nonterminalref{Rules} section, rules and facts can be mixed arbitrarily. A \nonterminalref{Rule} introduces local variables in a \texttt{for} clause (which may be omitted if there are no local variables), followed by a precondition (\texttt{if}) and a postcondition (\texttt{then}), both assumed to be Boolean expressions. If there are several preconditions, these have to be conjoined by \emph{and} to form a single formula. A \nonterminalref{Fact} is a notational variant of a \nonterminalref{Rule} that does not have a precondition (more technically, a fact is mapped to a rule whose precondition is \texttt{True}). \nonterminalref{Assertions} are Boolean expressions introduced by the keyword \texttt{assert}. %...................................................................... \subsection{Typing}\label{sec:typing} %...................................................................... \subsection{Pragmatics}\label{sec:pragmatics} %%% Local Variables: %%% mode: latex %%% TeX-master: "main" %%% End:
{ "alphanum_fraction": 0.6199236641, "avg_line_length": 38.9880952381, "ext": "tex", "hexsha": "2e36fda60ff64b3beec2f4c3c013205d2e2be97b", "lang": "TeX", "max_forks_count": 11, "max_forks_repo_forks_event_max_datetime": "2021-07-26T05:22:58.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-15T02:52:55.000Z", "max_forks_repo_head_hexsha": "3b42b0a2b815aa452981029a8f33150ea7c8b2f4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "smucclaw/complaw", "max_forks_repo_path": "Publications/Documentation/L4Language/language.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "3b42b0a2b815aa452981029a8f33150ea7c8b2f4", "max_issues_repo_issues_event_max_datetime": "2021-06-30T04:48:53.000Z", "max_issues_repo_issues_event_min_datetime": "2020-06-03T17:32:48.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "smucclaw/complaw", "max_issues_repo_path": "Publications/Documentation/L4Language/language.tex", "max_line_length": 94, "max_stars_count": 19, "max_stars_repo_head_hexsha": "3b42b0a2b815aa452981029a8f33150ea7c8b2f4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "smucclaw/complaw", "max_stars_repo_path": "Publications/Documentation/L4Language/language.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-22T08:31:20.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-29T22:29:12.000Z", "num_tokens": 3858, "size": 13100 }
\documentclass[../main.tex]{subfiles} \begin{document} \subsection{One-to-All} \label{q:onetoall} \begin{question} Given are $P$ processors. Processor $p(0)$ sends an element to all processors (one-to-all). \begin{enumerate} \item Given the BSP program. \item What is the $h$-relation. \item How can this be improved, given that $l << g$. \end{enumerate} \end{question} \begin{solution} The solution to the subquestions are given below. \begin{enumerate} \item The BSP program for one-to-all communication is given below. \begin{lstlisting} void oneToAll(){ bsp_begin(P); bsp_push_reg(element); for(pid=1; pid<P; pid++){ bsp_put(pid, data, element, 0, sizeof(data)); } bsp_sync(); bsp_end(); } \end{lstlisting} \item The $h$-relation is given by $h = max\{h_s,h_r\}$. In the previous case, $p(0)$ sends a single element to $(p-1)$ processors. \begin{equation} h = (p - 1) \end{equation} The BPS cost is given by $T_\text{comm}(h) = hg + l$, making the cost of this broadcast $(p-1)g + l$. \item If the cost of $l$ is significantly less than $g$, splitting the broadcast over multiple communication supersteps can reduce cost. Am example is a multi-level broadcast where every processor sends the element to two others. In $\log_2(P)$ supersteps, every processor has received the element. \begin{equation} T_\text{comm} = \lceil \log_2(P) \rceil (2g + l) \end{equation} \end{enumerate} \end{solution} \subsection{LU Factorization} \label{q:lu} \begin{question} LU factorization with 2-phase broadcast. \begin{enumerate} \item Give the total sequential cost and its order. \item Give the total BSP cost and its order when the data is distributed over $\sqrt{P} \times \sqrt{P}$ processors. \item Given the order of the total cost when distributed over $1 \times P$ processors. \item How is the parallel overhead influenced by the different distribution when the problem size grows. \end{enumerate} \end{question} \begin{solution} The solution to the subquestions are given below. \begin{enumerate} \item The number of flops in the LU decomposition with row permutations is given in \autoref{eq:tseqlu}. \begin{equation}\label{eq:tseqlu} \begin{array}{ll} T_{seq} & = \displaystyle\sum_{k=0}^{n-1} (2(n - k - 1)^2 + n - k -1) \\ & = \displaystyle\sum_{k=0}^{n-1} (2k^2 + k) \\ & = \dfrac{(n-1)n(2n-1)}{3} + \dfrac{(n-1)n)}{2} \\ & = \dfrac{2n^3}{3} - \dfrac{n^2}{2} - \dfrac{n}{6} \\ \end{array} \end{equation} The sequential LU-decomposition algorithm is $\mathcal{O}(n^3)$. \item The number of flops for the parallel LU-decomposition with two-phase broadcast and an \emph{optimal} 2D distribution ($N = \sqrt{P}, M = \sqrt{P}$) is given in \autoref{eq:tparlu}. \begin{equation}\label{eq:tparlu} T_{LU} = \dfrac{2n^3}{3p} + \dfrac{3n^2}{2\sqrt{p}} + \dfrac{3n^2g}{\sqrt{p}} + 8nl \end{equation} \item The distribution of processors impacts superstep (1), (3), (6) and (7) as we can find in \cite[p.~70]{bisseling04}. Their new cost is given in \autoref{tbl:lucost}. \begin{table}[H] \centering \begin{tabular}{ll} \toprule Superstep & Cost \\ \midrule (1) & $l$ \\ (3) & $(P-1)g+l$ \\ (6) & $(C_{k+1} - \lceil C_{k_1}/P \rceil )g + l$ \\ (7) & $((P-1) \lceil C_{k+1}/P \rceil )g + l$\\ \bottomrule \end{tabular} \caption{New superstep costs.} \label{tbl:lucost} \end{table} Superstep (1) can be removes since no data is sent, making its cost zero. The dominant cost is caused by the matrix update. \begin{equation} R_{k+1} + C_{k+1} = (n-k-1) \frac{M+N}{p} + 2 \end{equation} For $M = N = \sqrt{P}$, the value is minimal. 
With the current distribution, this cost is $p (n-k-1) + 2$, or a factor $p$ larger than with the previous one.
\item
\end{enumerate}
\end{solution}

\subsection{Parallel Efficiency}
\label{q:efficiency}
\begin{question}
Given a multicore program for calculating the average value of a matrix $A$, what is the efficiency and how can we do better?
\end{question}
\begin{solution}
The \emph{Parallel Efficiency} is defined in \autoref{eq:pareff}. The efficiency correlates the increase in speedup with the number of processors, or can also be seen as the fraction of the total computing power that is usefully employed \cite[p.~141]{bisseling04}. The efficiency of a given parallel algorithm can be increased by either decreasing the number of processors (Amdahl's Law) or increasing the problem size $n$.
\begin{equation}\label{eq:pareff}
E_p(n) = \frac{T_\text{seq}(n)}{p \cdot T_p(n)} = \frac{S_p(n)}{p}
\end{equation}
\end{solution}

\subsection{BSP Quicksort}
\label{q:qsort1}
\begin{question}
Give a high-level description of a BSP quicksort algorithm. For each step, calculate the BSP cost ($a + bg + cl$). Calculate the total cost and give the overhead and efficiency. Discuss your results.
\end{question}
\begin{solution}
The algorithm is described below.
\begin{enumerate}
\item \textbf{Superstep 1:} Determine and broadcast the pivot. This translates to a regular broadcast, with BSP cost $(p-1)g + l$. Optionally, this can be achieved with a recursion tree, resulting in a cost of $(g+l) \log (p)$.
\item \textbf{Superstep 2:} Locally rearrange the array assigned to each process. It takes $(n/p)$ FLOPS to compare each value with the pivot. This costs $(n/p) + l$.
\item \textbf{Superstep 3:} Determine the locations in the globally rearranged array that the local elements will go to. This can be achieved by either using \emph{prefix sums} or \emph{partial sums}. Regardless, it translates to all-to-all communication which costs $(p-1)g + l$.
\item \textbf{Superstep 4:} In this step the global rearrangement is performed. This is again a case of all-to-all communication, costing $(p-1)g + l$.
\end{enumerate}
The total cost is then the sum of the cost of each superstep. These are repeated $\log(n)$ times.
\begin{equation}
\frac{n \cdot \log(n)}{p} + 3\log(n)(p-1)g + 4\log(n)l
\end{equation}
\end{solution}
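The cost expressions above are easy to sanity-check numerically. The following Python sketch (the machine parameters $p$, $g$ and $l$ are made up for illustration, and the logarithms are taken base 2) compares the single-superstep broadcast with the multi-level broadcast and evaluates the quicksort cost:
\begin{lstlisting}
from math import ceil, log2

def broadcast_flat(p, g, l):
    # One-to-all in a single superstep: (p-1)g + l
    return (p - 1) * g + l

def broadcast_tree(p, g, l):
    # Multi-level broadcast: ceil(log2 p) supersteps of cost 2g + l
    return ceil(log2(p)) * (2 * g + l)

def quicksort_cost(n, p, g, l):
    # n log(n)/p + 3 log(n) (p-1) g + 4 log(n) l
    return n * log2(n) / p + 3 * log2(n) * (p - 1) * g + 4 * log2(n) * l

p, g, l = 64, 10, 5                  # assumed parameters with l << g
print(broadcast_flat(p, g, l))       # 635
print(broadcast_tree(p, g, l))       # 150: splitting the broadcast pays off
print(quicksort_cost(2**20, p, g, l))
\end{lstlisting}

\end{document}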
{ "alphanum_fraction": 0.7054783046, "avg_line_length": 43.8897058824, "ext": "tex", "hexsha": "eff6c4855f30a502d79250475cd38fa94a29737f", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2018-12-29T14:10:47.000Z", "max_forks_repo_forks_event_min_datetime": "2015-12-15T20:37:02.000Z", "max_forks_repo_head_hexsha": "598897266e687904c575e8f65f874f4100e0e7bf", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "KULeuven-CS/Parallel-Computing", "max_forks_repo_path": "subfiles/20140124.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "598897266e687904c575e8f65f874f4100e0e7bf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "KULeuven-CS/Parallel-Computing", "max_issues_repo_path": "subfiles/20140124.tex", "max_line_length": 281, "max_stars_count": 10, "max_stars_repo_head_hexsha": "598897266e687904c575e8f65f874f4100e0e7bf", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "KULeuven-CS/Parallel-Computing", "max_stars_repo_path": "subfiles/20140124.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-15T19:12:42.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-12T14:38:13.000Z", "num_tokens": 1927, "size": 5969 }
\documentclass[10pt]{article} \usepackage{a4wide} \usepackage{listings} \usepackage{listings} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\OP}[1]{{\bf\widehat{#1}}} \begin{document} \section*{Project 1, Variational Monte Carlo studies of helium and beryllium} The final aim of this project is to develop a variational Monte Carlo program which can be used to obtain ground state properties of atoms like He, Be, O, Ne, Si etc as well as diatomic molecules. for important molecules The aim of the first project is to use the Variational Monte Carlo (VMC) method and evaluate the ground state energy of the helium and beryllium atoms. The variational Monte Carlo part will include the basic ingredients for adding additional features in projects 2 and 3, allowing you to build a professional code that can be used to perform first principle calculations of atoms and molecules. If you however object orient properly your code, you should be able to make your code flexible enough to run for two-dimensional systems like electrons confined in so-called quantum dots or other fermionic systems in one, two and three dimensions. Object orientation is a mandatory feature of your code. {\bf The deadline for project 1 is February 27, at noon}. See below for delivery format. \section*{Variational Monte Carlo calculations of the helium atom} We will start with the simplest possible system beyond hydrogen, namely the helium atom. We label $r_1$ the distance from electron 1 to the nucleus and similarly $r_2$ the distance between electron 2 and the nucleus. The contribution to the potential energy from the interactions between the electrons and the nucleus is \be -\frac{2}{r_1}-\frac{2}{r_2}, \ee and if we add the electron-electron repulsion with $r_{12}=|{\bf r}_1-{\bf r}_2|$, the total potential energy $V(r_1, r_2)$ is \be V(r_1, r_2)=-\frac{2}{r_1}-\frac{2}{r_2}+ \frac{1}{r_{12}}, \ee yielding the total Hamiltonian \be \OP{H}=-\frac{\nabla_1^2}{2}-\frac{\nabla_2^2}{2} -\frac{2}{r_1}-\frac{2}{r_2}+ \frac{1}{r_{12}}, \ee and Schr\"odinger's equation reads \be \OP{H}\psi=E\psi. \ee All equations are in so-called atomic units. The distances $r_i$ and $r_{12}$ are dimensionless. To have energies in electronvolt you need to multiply all results with $2\times E_0$, where $E_0=13.6$ eV. The ``experimental'' binding energy for helium in atomic units a.u. is $E_{\mathrm{He}}=-2.9037$ a.u.. \begin{enumerate} \item[a)] We want to perform a Variational Monte Carlo calculation of the ground state of the helium atom. In our first attempt we will use a brute force Metropolis sampling with two trial wave functions which have the following forms \begin{equation} \psi_{T1}({\bf r_1},{\bf r_2}, {\bf r_{12}}) = \exp{\left(-\alpha(r_1+r_2)\right)} \label{eq:trial1} \end{equation} and \begin{equation} \psi_{T2}({\bf r_1},{\bf r_2}, {\bf r_{12}}) = \exp{\left(-\alpha(r_1+r_2)\right)} \exp{\left(\frac{r_{12}}{2(1+\beta r_{12})}\right)}, \label{eq:trial2} \end{equation} with $\alpha$ and $\beta$ as variational parameters. Your task is to perform a Variational Monte Carlo calculation using the Metropolis algorithm to compute the integral \begin{equation} \langle E \rangle = \frac{\int d{\bf r_1}d{\bf r_2}\psi^{\ast}_{Ti}({\bf r_1},{\bf r_2}, {\bf r_{12}})\OP{H}({\bf r_1},{\bf r_2}, {\bf r_{12}})\psi_{T1}({\bf r_1},{\bf r_2}, {\bf r_{12}})} {\int d{\bf r_1}d{\bf r_2}\psi^{\ast}_{Ti}({\bf r_1},{\bf r_2}, {\bf r_{12}})\psi_{Ti}({\bf r_1},{\bf r_2}, {\bf r_{12}})}. 
\end{equation} Find the energy minimum and compute also the mean distance $r_{12}$ between the two electrons for the optimal set of the variational parameters. A code for doing a VMC calculation for the helium atom can be found on the webpage of the course. Your Monte Carlo moves are determined by \begin{equation} {\bf R}' = {\bf R} +\delta \times r, \end{equation} where $r$ is a random number from the uniform distribution and $\delta$ a chosen step length. In solving this exercise you need to devise an algorithm which finds an optimal value of $\delta$ for the variational parameters $\alpha$ and $\beta$, resulting in roughly $50\%$ accepted moves. Give a physical interpretation of the best value of $\alpha$. Make a plot of the variance as a function of the number of Monte Carlo cycles. \item[b)] Find closed form expressions for the local energy (see below) for the above trial wave functiona and explain shortly how they satisfy or not the cusp condition when $r_1\rightarrow 0$ or $r_2\rightarrow 0$ or $r_{12}\rightarrow 0$. Show that closed-form expression for the second trial wave function is \[ E_{L2} = E_{L1}+\frac{1}{2(1+\beta r_{12})^2}\left\{\frac{\alpha(r_1+r_2)}{r_{12}}(1-\frac{\mathbf{r}_1\mathbf{r}_2}{r_1r_2})-\frac{1}{2(1+\beta r_{12})^2}-\frac{2}{r_{12}}+\frac{2\beta}{1+\beta r_{12}}\right\}, \] where \[ E_{L1} = \left(\alpha-Z\right)\left(\frac{1}{r_1}+\frac{1}{r_2}\right)+\frac{1}{r_{12}}-\alpha^2, \] is the local energy for the first trial function. Compare and discuss the results obtained with and without the closed-form expressions (in terms of CPU time). \item[c)] Introduce now importance sampling and study the dependence of the results as a function of the time step $\delta t$. Compare the results with those obtained under a) and comment eventual differences. \item[d)] In performing the Monte Carlo analysis you should use blocking as a technique to make the statistical analysis of the numerical data. Present your results with a proper evaluation of the statistical errors. \item[e)] With the optimal parameters for the ground state wave function, compute the onebody density and the charge density. Discuss your results and compare the results with those obtained with pure hydrogenic wave functions. Run a Monte Carlo calculations without the Jastrow factor as well and compute the same quantities. How important are the correlations induced by the Jastrow factor? \end{enumerate} \section*{Variational Monte Carlo calculations of the Beryllium atom} The previous exercise has prepared you for extending your calculational machinery to other systems. Here we will focus on the beryllium atoms. It is convenient to make modules or classes of trial wave functions, both many-body wave functions and single-particle wave functions and the quantum numbers involved, such as spin, orbital momentum and principal quantum numbers. The new item you need to pay attention to is the calculation of the Slater Determinant. This is an additional complication to your VMC calculations. For beryllium however, these calculations can be considerably simplified. The second project will include parallelization and a proper treatment of the Slater determinant. 
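As an illustration of such a modular structure, the following minimal Python sketch (Python is one of the allowed languages for this project; the class and method names are only suggestions) wraps a trial wave function and its closed-form local energy behind a small interface, using the simplest case, the helium trial wave function $\psi_{T1}$ and its local energy $E_{L1}$ from exercise b):
\begin{lstlisting}
import numpy as np

class TrialWaveFunction:
    """Interface implemented by every trial wave function module."""
    def value(self, r1, r2):
        raise NotImplementedError
    def local_energy(self, r1, r2):
        raise NotImplementedError

class HeliumT1(TrialWaveFunction):
    """psi_T1 = exp(-alpha (r1 + r2)) for helium (Z = 2)."""
    def __init__(self, alpha, Z=2):
        self.alpha, self.Z = alpha, Z

    def value(self, r1, r2):
        return np.exp(-self.alpha * (np.linalg.norm(r1) + np.linalg.norm(r2)))

    def local_energy(self, r1, r2):
        # Closed form: E_L1 = (alpha - Z)(1/r1 + 1/r2) + 1/r12 - alpha^2
        R1, R2 = np.linalg.norm(r1), np.linalg.norm(r2)
        r12 = np.linalg.norm(np.asarray(r1) - np.asarray(r2))
        return (self.alpha - self.Z) * (1.0 / R1 + 1.0 / R2) + 1.0 / r12 - self.alpha**2
\end{lstlisting}
A Metropolis sampler, and later a Slater-determinant-based wave function for beryllium, can then be added as further classes behind the same interface.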
If we stick to hydrogen-like wave functions, the trial wave function for Beryllium can be written as \begin{equation} \psi_{T}({\bf r_1},{\bf r_2}, {\bf r_3}, {\bf r_4}) = Det\left(\phi_{1}({\bf r_1}),\phi_{2}({\bf r_2}), \phi_{3}({\bf r_3}),\phi_{4}({\bf r_4})\right) \prod_{i<j}^{4}\exp{\left(\frac{r_{ij}}{2(1+\beta r_{ij})}\right)}, \end{equation} where $Det$ is a Slater determinant and the single-particle wave functions are the hydrogen wave functions for the $1s$ and $2s$ orbitals. Their form within the variational ansatz are given by \begin{equation} \phi_{1s}({\bf r_i}) = e^{-\alpha r_i}, \end{equation} and \begin{equation} \phi_{2s}({\bf r_i}) = \left(1-\alpha r_i/2\right)e^{-\alpha r_i/2}. \end{equation} You can approximate the Slater determinant for the ground state of the Beryllium atom by writing it out as \begin{equation} \psi_{T}({\bf r_1},{\bf r_2}, {\bf r_3}, {\bf r_4}) \propto \left(\phi_{1s}({\bf r_1})\phi_{2s}({\bf r_2})-\phi_{1s}({\bf r_2})\phi_{2s}({\bf r_1})\right) \left(\phi_{1s}({\bf r_3})\phi_{2s}({\bf r_4})-\phi_{1s}({\bf r_4})\phi_{2s}({\bf r_3})\right). \end{equation} Here you can see a simple code example which implements the above expression \begin{lstlisting} for (i = 0; i < number_particles; i++) { argument[i] = 0.0; r_single_particle = 0; for (j = 0; j < dimension; j++) { r_single_particle += r[i][j]*r[i][j]; } argument[i] = sqrt(r_single_particle); } // Slater determinant, no factors as they vanish in Metropolis ratio wf = (psi1s(argument[0])*psi2s(argument[1]) -psi1s(argument[1])*psi2s(argument[0]))* (psi1s(argument[2])*psi2s(argument[3]) -psi1s(argument[3])*psi2s(argument[2])); \end{lstlisting} For beryllium we can easily implement the explicit evaluation of the Slater determinant. The above will serve as a useful check for your function which computes the Slater determinat in project 2. The derivatives of the single-particle wave functions can be computed analytically and you should use the closed form expression for the local energy. For the correlation part \[ \Psi_C=\prod_{i< j}g(r_{ij})= \exp{\left\{\sum_{i<j}\frac{ar_{ij}}{1+\beta r_{ij}}\right\}}, \] we need to take into account whether electrons have equal or opposite spins since we have to obey the electron-electron cusp condition as well. For Beryllium, as an example, you can fix electrons 1 and 2 to have spin up while electrons 3 and 4 have spin down. When the electrons have equal spins \[ a= 1/4, \] while for opposite spins (as for the ground state of helium) \[ a= 1/2. \] \begin{enumerate} \item[(f)] Write a function which sets up the Slater determinant for beryllium. Compute the ground state energies beryllium as you did for the helium atom in exercise d. The calculations should include blocking and importance sampling using the closed form expression for the local energy. \item[g)] With the optimal parameters for the ground state wave function, compute again the onebody density and the charge density. Discuss your results and compare the results with those obtained with a pure hydrogenic wave functions. Run a Monte Carlo calculations without the Jastrow factor as well and compute the same quantities. How important are the correlations induced by the Jastrow factor? \end{enumerate} \section*{How to write the report} What should the report contain and how can I structure it? A typical structure follows here. \begin{itemize} \item An abstract with the main findings. \item An introduction where you explain the aims and rationale for the physics case and what you have done. 
At the end of the introduction you should give a brief summary of the structure of the report \item Theoretical models and technicalities. This sections ends often being the methods section. \item Results and discussion \item Conclusions and perspectives \item Appendix with extra material \item Bibliography \end{itemize} Keep always a good log of what you do. \subsection*{What should I focus on? Introduction.} You don't need to answer all questions in a chronological order. When you write the introduction you could focus on the following aspects \begin{itemize} \item Motivate the reader, the first part of the introduction gives always a motivation and tries to give the overarching ideas \item What I have done \item The structure of the report, how it is organized etc \end{itemize} \subsection*{What should I focus on? Methods sections.} \begin{itemize} \item Describe the methods and algorithms \item You need to explain how you implemented the methods and also say something about the structure of your algorithm and present some parts of your code \item You should plug in some calculations to demonstrate your code, such as selected runs used to validate and verify your results. The latter is extremely important!! A reader needs to understand that your code reproduces selected benchmarks and reproduces previous results, either numerical and/or well-known closed form expressions. \end{itemize} \subsection*{What should I focus on? Results sections.} \begin{itemize} \item Present your results \item Give a critical discussion of your work and place it in the correct context. \item Relate your work to other calculations/studies \item An eventual reader should be able to reproduce your calculations if she/he wants to do so. All input variables should be properly explained. \item Make sure that figures and tables contain enough information in their captions, axis labels etc so that an eventual reader can gain a first impression of your work by studying figures and tables only. \end{itemize} \subsection*{What should I focus on? Conclusions sections.} \begin{itemize} \item State your main findings and interpretations \item Try as far as possible to present perspectives for future work \item Try to discuss the pros and cons of the methods and possible improvements \end{itemize} \subsection*{What should I focus on? Additional material, appendices.} \begin{itemize} \item Additional calculations used to validate the codes \item Selected calculations, these can be listed with few comments \item Listing of the code if you feel this is necessary \item You can consider moving parts of the material from the methods section to the appendix. You can also place additional material on your webpage. \end{itemize} \subsection*{What should I focus on? References.} \begin{itemize} \item Give always references to material you base your work on, either scientific articles/reports or books. \item Refer to articles as: name(s) of author(s), journal, volume (boldfaced), page and year in parenthesis. \item Refer to books as: name(s) of author(s), title of book, publisher, place and year, eventual page numbers \end{itemize} \section*{Format for electronic delivery of report and programs} % Your are free to choose your format for handing in. The simplest way is that you send us your github link that contains the report in your chosen format(pdf, ps, docx, ipython notebook etc) and the programs. As programming language you have to choose either C++ or Fortran or Python. We recommend C++ or Fortran. 
Finally, we recommend that you work together. Optimal working groups consist of 2-3 students, but more people can collaborate. You can then hand in a common report. \section*{Literature} \begin{enumerate} \item B.~L.~Hammond, W.~A.~Lester and P.~J.~Reynolds, Monte Carlo methods in Ab Inition Quantum Chemistry, World Scientific, Singapore, 1994, chapters 2-5 and appendix B. \item B.H.~Bransden and C.J.~Joachain, Physics of Atoms and molecules, Longman, 1986. Chapters 6, 7 and 9. \item S.A.~Alexander and R.L.~Coldwell, Int.~Journal of Quantum Chemistry, {\bf 63} (1997) 1001. This article is available at the webpage of the course as the file jastrow.pdf under the project 1 link. \item C.J.~Umrigar, K.G.~Wilson and J.W.~Wilkins, Phys.~Rev.~Lett.~{\bf 60} (1988) 1719. \end{enumerate} \section*{Unit tests, how and why?} Unit Testing is the practice of testing the smallest testable parts, called units, of an application individually and independently to determine if they behave exactly as expected. Unit tests (short code fragments) are usually written such that they can be preformed at any time during the development to continually verify the behavior of the code. In this way, possible bugs will be identified early in the development cycle, making the debugging at later stage much easier. There are many benefits associated with Unit Testing, such as \begin{itemize} \item It increases confidence in changing and maintaining code. Big changes can be made to the code quickly, since the tests will ensure that everything still is working properly. \item Since the code needs to be modular to make Unit Testing possible, the code will be easier to reuse. This improves the code design. \item Debugging is easier, since when a test fails, only the latest changes need to be debugged. \item Different parts of a project can be tested without the need to wait for the other parts to be available. \item A unit test can serve as a documentation on the functionality of a unit of the code. \end{itemize} Here follows a simple example, see the website of the course for more information on how to install unit test libraries. \begin{verbatim} #include <unittest++/UnitTest++.h> class MyMultiplyClass{ public: double multiply(double x, double y) { return x * y; } }; TEST(MyMath) { MyMultiplyClass my; CHECK_EQUAL(56, my.multiply(7,8)); } int main() { return UnitTest::RunAllTests(); } \end{verbatim} For Fortran users, the link at \url{http://sourceforge.net/projects/fortranxunit/} contains a similar software for unit testing. \end{document}
{ "alphanum_fraction": 0.7495934959, "avg_line_length": 48.982300885, "ext": "tex", "hexsha": "7e7149d9905723fa4e132fc796bc8921701e383a", "lang": "TeX", "max_forks_count": 54, "max_forks_repo_forks_event_max_datetime": "2022-03-07T10:44:14.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-09T10:02:00.000Z", "max_forks_repo_head_hexsha": "a840b97b651085090f99bf6a11abab57100c2e85", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "GabrielSCabrera/ComputationalPhysics2", "max_forks_repo_path": "doc/Projects/2015/project1_2015.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "a840b97b651085090f99bf6a11abab57100c2e85", "max_issues_repo_issues_event_max_datetime": "2020-02-08T13:15:42.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-18T10:43:38.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "GabrielSCabrera/ComputationalPhysics2", "max_issues_repo_path": "doc/Projects/2015/project1_2015.tex", "max_line_length": 538, "max_stars_count": 87, "max_stars_repo_head_hexsha": "a840b97b651085090f99bf6a11abab57100c2e85", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "GabrielSCabrera/ComputationalPhysics2", "max_stars_repo_path": "doc/Projects/2015/project1_2015.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-28T07:11:53.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-21T08:29:56.000Z", "num_tokens": 4494, "size": 16605 }
\section{Fault Tree Model} \label{sec:FTModel} This model is designed to read the structure of the fault tree (FT) from the file and to import such Boolean logic structures as a RAVEN model. The FT must be specified in the OpenPSA format (\href{<url>}{https://github.com/open-psa}). As an example, the FT of Figure~\ref{fig:FT} is translated in the OpenPSA as shown below: \begin{lstlisting}[style=XML,morekeywords={anAttribute},caption=FT in OpenPSA format., label=lst:FTModel] <opsa-mef> <define-fault-tree name="FT"> <define-gate name="TOP"> <or> <gate name="G1"/> <gate name="G2"/> <gate name="G3"/> </or> </define-gate> <define-component name="A"> <define-gate name="G1"> <and> <basic-event name="BE1"/> <basic-event name="BE2"/> </and> </define-gate> <define-gate name="G2"> <and> <basic-event name="BE1"/> <basic-event name="BE3"/> </and> </define-gate> <define-basic-event name="BE1"> <float value="1.2e-3"/> </define-basic-event> <define-component name="B"> <define-basic-event name="BE2"> <float value="2.4e-3"/> </define-basic-event> <define-basic-event name="BE3"> <float value="5.2e-3"/> </define-basic-event> </define-component> </define-component> <define-component name="C"> <define-gate name="G3"> <and> <basic-event name="BE1"/> <basic-event name="BE4"/> </and> </define-gate> <define-basic-event name="BE4"> <float value="1.6e-3"/> </define-basic-event> </define-component> </define-fault-tree> </opsa-mef> \end{lstlisting} \begin{lstlisting}[style=XML,morekeywords={anAttribute},caption=FT model input example., label=lst:FT_InputExample] <Models> ... <ExternalModel name="FT" subType="FTModel"> <variables> statusBE1,statusBE2,statusBE3,statusBE4,TOP </variables> <topEvents>TOP</topEvents> <map var="statusBE1">BE1</map> <map var="statusBE2">BE2</map> <map var="statusBE3">BE3</map> <map var="statusBE4">BE4</map> </ExternalModel> ... </Models> \end{lstlisting} \begin{figure} \centering \centerline{\includegraphics[scale=0.5]{FT.pdf}} \caption{Example of FT.} \label{fig:FT} \end{figure} The FT of Figure~\ref{fig:FT} described in Listing~\ref{lst:FTModel} can be defined in the RAVEN input file as shown in Listing~\ref{lst:FT_InputExample} All the specifications of the FT model are given in the \xmlNode{ExternalModel} block. Inside the \xmlNode{ExternalModel} block, the XML nodes that belong to this models are: \begin{itemize} \item \xmlNode{variables}, \xmlDesc{string, required parameter}, a list containing the names of both the input and output variables of the model \item \xmlNode{topEvents}, \xmlDesc{string, required parameter}, the name of the alias Top Event \item \xmlNode{map}, \xmlDesc{string, required parameter}, the name ID of the FT basic events \begin{itemize} \item \xmlAttr{var}, \xmlDesc{required string attribute}, the ALIAS name ID of the FT basic events \end{itemize} \end{itemize} Provided this definition and the FT model of Figure~\ref{fig:FT} described in Listing~\ref{lst:FT_InputExample}, the resulting model in RAVEN is characterized by these variables: \begin{itemize} \item Input variables: statusBE1, statusBE2, statusBE3, statusBE4 \item Output variable: TOP \end{itemize} \subsection{FT Model Reference Tests} \begin{itemize} \item SR2ML/tests/test\_FTmodel.xml \item SR2ML/tests/test\_FTmodel\_TD.xml \end{itemize}
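For readers who want to check the Boolean logic of the example by hand, the following short Python sketch (it is not part of SR2ML or RAVEN; the argument names follow the input variables of Listing~\ref{lst:FT_InputExample}) evaluates the Top Event of the FT in Listing~\ref{lst:FTModel} from the statuses of its basic events:
\begin{lstlisting}
def top_event(statusBE1, statusBE2, statusBE3, statusBE4):
    """Evaluate TOP for the example FT (statuses are 0 or 1)."""
    G1 = statusBE1 and statusBE2
    G2 = statusBE1 and statusBE3
    G3 = statusBE1 and statusBE4
    return int(G1 or G2 or G3)

# TOP occurs whenever BE1 fails together with any of BE2, BE3 or BE4
assert top_event(1, 0, 1, 0) == 1
assert top_event(0, 1, 1, 1) == 0
\end{lstlisting}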
{ "alphanum_fraction": 0.6031312127, "avg_line_length": 36.5818181818, "ext": "tex", "hexsha": "cc9cce67320fb073e6d09110e7ced3adf242ba07", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2aa5e0be02786523cdeaf898d42411a7068d30b7", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "idaholab/SR2ML", "max_forks_repo_path": "doc/user_manual/include/FTmodel.tex", "max_issues_count": 32, "max_issues_repo_head_hexsha": "2aa5e0be02786523cdeaf898d42411a7068d30b7", "max_issues_repo_issues_event_max_datetime": "2022-02-17T19:45:27.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-12T18:43:29.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "idaholab/SR2ML", "max_issues_repo_path": "doc/user_manual/include/FTmodel.tex", "max_line_length": 147, "max_stars_count": 5, "max_stars_repo_head_hexsha": "2aa5e0be02786523cdeaf898d42411a7068d30b7", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "idaholab/SR2ML", "max_stars_repo_path": "doc/user_manual/include/FTmodel.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-27T03:14:49.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-25T02:01:22.000Z", "num_tokens": 1070, "size": 4024 }
\documentclass{pset_template}
\title{Bit-String Flicking}
\date{January 17, 2019}
\editorOne{Alexander Sun}
\editorTwo{Sanjit Bhat}
\lectureNum{2}
\contestMonth{January}

\begin{document}
\maketitle

\section{Introduction/Lecture}
Bit string flicking is a general term for operations that can be done to bit strings (binary strings). They can involve multiple strings or just a single string. There are 3 general types of operations: the basic truth operators, shifts, and circulates.

\subsection{Truth Operations}
Bit string flicking only involves 4 main operations (they should seem familiar): AND, OR, NOT, XOR, and all of their inverses. In contests, AND $\rightarrow$ $\&$, OR $\rightarrow$ $\mid$, NOT $\rightarrow$ $\sim$, XOR $\rightarrow$ $\oplus$.

Ex. 101 | 010 = 111

\subsection{Shift}
Shifting is very straightforward, involving 2 operations: you can shift right or left. A shift literally shifts the digits the specified amount in the specified direction, while maintaining the same number of digits and filling in 0's for digits shifted out of the number. \\ \\
Ex. LSHIFT-3 100010 = 010000, RSHIFT-3 100010 = 000100 \\ \\
The trick to solving these problems is to cover up the specified number of digits on the right or left side, then just fill in 0's for the rest of the digits.

\subsection{Circulate}
Circulating is extremely similar to shifting, except no digits are deleted by getting shifted out of the number. Instead, they are added back (or circulated) onto the other end of the number, in either the right or left direction, a specified amount. \\ \\
Ex. RCIRC-3 10111 = 11110, LCIRC-3 10111 = 11101 \\ \\
The method for solving these problems is to take the specified number of digits on the specified side and move them directly behind the remaining digits on the other side.

\subsection{Miscellaneous/General Tips}
Order of Operations: NOT, SHIFT / CIRC, AND, XOR, OR \\
Problems with variables often have more than one solution. Some questions will ask you to list out all values, and others will ask for the number of possible values. Generally, work from the outside in, and if necessary assign variables (a, b, c, d, etc.) to the digits the expression is equivalent to, and use those to help you.

\section{Exercises}
Some borrowed from \href{http://minich.com/education/wyo/acsl/bitstringflicking/bitstringwksht1.htm}{here}.
\begin{enumerate}
\item 10010 OR 11101
\item 10001 AND 11011 OR 11001
\item 11100 OR 10101 AND 10111
\item NOT 101 AND 100
\item 101011 XOR 110101
\item 101011 OR 100111 XOR 100010
\item NOT (11101 XNOR 10100)
\item 1011 OR 1101 XOR 1000 AND 1010 OR 1110 AND NOT 1000
\item Find all possible values of x: (RCIRC-2 (LSHIFT-1 (NOT x))) = 00101
\item List all possible unique values of x that solve the following equation: \\ (LSHIFT-1 (10110 XOR (RCIRC-3 x) AND 11011)) = 01100
\item (RSHIFT-1 (LCIRC-4 (RCIRC-2 01101)))
\item ((RCIRC-14 (LCIRC-23 01101)) $\mid$ (LSHIFT-1 10011) $\&$ (RSHIFT-2 10111)) \\ Hint: order of operations; generally, parentheses dictate order
\end{enumerate}
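You can check your answers to the exercises with a short program. The following Python sketch (the helper names are our own) implements the shift and circulate operations on fixed-width bit strings:

\begin{verbatim}
def lshift(s, n):
    """LSHIFT-n: drop n digits on the left, pad with 0's on the right."""
    n = min(n, len(s))
    return s[n:] + "0" * n

def rshift(s, n):
    """RSHIFT-n: drop n digits on the right, pad with 0's on the left."""
    n = min(n, len(s))
    return "0" * n + s[:len(s) - n]

def lcirc(s, n):
    """LCIRC-n: rotate the string left by n places."""
    n %= len(s)
    return s[n:] + s[:n]

def rcirc(s, n):
    """RCIRC-n: rotate the string right by n places."""
    return lcirc(s, len(s) - n % len(s))

print(lshift("100010", 3))  # 010000
print(rshift("100010", 3))  # 000100
print(rcirc("10111", 3))    # 11110
print(lcirc("10111", 3))    # 11101
\end{verbatim}

\end{document}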
{ "alphanum_fraction": 0.7611454605, "avg_line_length": 52.9827586207, "ext": "tex", "hexsha": "69c1805e32b2ddd3088b9adcb496f698f26a6388", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sanjit-bhat/AB-ACSL", "max_forks_repo_path": "bit-string-flicking.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sanjit-bhat/AB-ACSL", "max_issues_repo_path": "bit-string-flicking.tex", "max_line_length": 326, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sanjit-bhat/AB-ACSL", "max_stars_repo_path": "bit-string-flicking.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-12T03:01:29.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-12T03:01:29.000Z", "num_tokens": 872, "size": 3073 }
\subsection{Fuel salt reprocessing system} \begin{frame} \frametitle{Fuel salt reprocessing system overview: gas separation} Gaseous fission products (e.g., Xe, Kr) must be removed from the fuel salt to avoid reactor poisoning. \begin{columns} \column[t]{4.7cm} \begin{block}{Noble gas removal} \begin{enumerate} \item bubble generator injects He bubbles in the salt stream \item noble gas migrate to the He bubbles \item gas separator discharges the poison-rich bubbles \end{enumerate} \end{block} \column[t]{8cm} \begin{figure}[t] \centering \vspace{-8mm} \includegraphics[width=\textwidth]{../figures/gas_separation.pdf} \caption{Schematic flow diagram of the \gls{MSBR} gas separation system (figure reproduced from Robertson \emph{et al.} \cite{robertson_conceptual_1971}).} \end{figure} \end{columns} \end{frame} \begin{frame} \frametitle{Mathematical model for gas separation efficiency} \vspace{-1mm} Xenon removal efficiency ($\epsilon_{Xe}$) in a gas separation system is \cite{peebles_removal_1968}: \begin{align} & \qquad\qquad \epsilon_{Xe} = \frac{1-e^{-\beta}}{1+\alpha} \nonumber \\ \alpha &= \frac{RTQ_{L}}{HQ_{G}} \nonumber \\ \beta &= \frac{K_L a A_C L (1+\alpha)}{Q_{L}} \nonumber \\ Q_{L}&= \mbox{volumetric salt flow rate} \nonumber \\ Q_{G}&= \mbox{volumetric helium flow rate} \nonumber \\ H &= \mbox{Henry's law constant} \nonumber \\ a &= \mbox{gas-liquid interfacial area} \nonumber \\ K_L &= \mbox{liquid phase mass transfer coefficient.} \nonumber \end{align} \vspace{-5mm} \begin{figure}[t] \includegraphics[width=0.66\textwidth]{./images/pipeline_contactor.png} \vspace{-2mm} \caption{Flow diagram for gas separator (figure reproduced from Peebles \emph{et al.} \cite{peebles_removal_1968}).} \end{figure} \end{frame} \begin{frame} \frametitle{Fuel processing system overview: rare earths and Pa removal} \begin{figure}[htp!] % replace 't' with 'b' to \centering \includegraphics[width=0.57\textwidth]{../figures/flowsheet.pdf} \vspace{-2mm} \caption{Liquid metal (Bi) extraction system for the \gls{MSBR} (reproduced from Sorensen \cite{sorensen_one-fluid_2006}).} \end{figure} \end{frame} \begin{frame} \frametitle{Fuel processing system overview: TAP concept} \vspace{-2mm} \begin{figure}[htp!] % replace 't' with 'b' to \centering \includegraphics[width=0.75\textwidth]{../figures/tap_primary_loop.png} \caption{Simplified \gls{TAP} primary loop design including off-gas system (blue), nickel filter (orange) and liquid metal extraction system (green) \cite{transatomic_power_transatomic_2019}.} \end{figure} \end{frame} \begin{frame} \frametitle{SaltProc demonstration for TAP concept input data} \begin{textblock*}{12.5cm}(0.5cm,1.5cm) % {block width} (coords) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{table}[htbp!] 
\fontsize{6}{9}\selectfont \centering \caption{The effective cycle times for fission products removal from the \gls{TAP} reactor \cite{betzler_implementation_2017}.} \begin{tabular}{p{0.14\textwidth} p{0.3\textwidth} p{0.11\textwidth} p{0.11\textwidth}} \hline \textbf{Processing group} & \qquad\qquad\qquad \textbf{Nuclides} & \textbf{Removal Rate (s$^{-1}$)} & \textbf{Cycle time (at full power)} \\ \hline \multicolumn{3}{c}{\textit{Elements removed in \gls{MSBR} concept and adopted for the \gls{TAP}} \cite{robertson_conceptual_1971}} \\ Noble gases & Xe, Kr & 5.00E-2 & 20 sec \\ Noble metals & Se, Nb, Mo, Tc, Ru, Rh, Pd, Ag, Sb, Te & 5.00E-2 & 20 sec \\ Seminoble metals & Zr, Cd, In, Sn & 5.79E-8 & 200 days\\ Volatile fluorides & Br, I & 1.93E-7 & 60 days\\ Rare earths & Y, La, Ce, Pr, Nd, Pm, Sm, Gd & 2.31E-7 & 50 days\\ \qquad & Eu & 2.32E-8 & 500 days \\ Discard & Rb, Sr, Cs, Ba & 3.37E-9 & 3435 days \\ \hline \multicolumn{3}{c}{\textit{Additional elements removed} \cite{betzler_implementation_2017, transatomic_power_corporation_neutronics_2016}} \\ Noble gases & H & 5.00E-2 & 20 sec \\ Noble metals & Ti, V, Cr, Cu & 3.37E-9 & 3435 days \\ Seminoble metals & Mn, Fe, Co, Ni, Zn, Ga, Ge, As & 3.37E-9 & 3435 days \\ Rare earths & Sc & 3.37E-9 & 3435 days \\ Discard & Ca & 3.37E-9 & 3435 days \\ \hline \end{tabular} \label{tab:reprocessing_list} \end{table} \begin{itemize} \item Noble gas removal efficiency: variable, described using mathematical correlation \item Other FP removal efficiency: fixed, non-ideal, based on Table~\ref{tab:reprocessing_list} \end{itemize} \end{textblock*} \end{frame} \subsection{Proposed tool design} \begin{frame} \frametitle{SaltProc class architecture} \begin{itemize} \item \textit{Simulation} class \begin{itemize} \item Manages simulation process \item Stores data into the HDF5 database \item Tracks time \end{itemize} \item \textit{Depcode} class \begin{itemize} \item Contains attributes and methods for reading user's input \item Creates input files for depletion code \item Parses depletion code output \end{itemize} \item \textit{Process} class \begin{itemize} \item Represents fuel processing system component \item Contains attributes of the component ($\epsilon_e$, throughput rate, capacity) \item Tracks waste stream \end{itemize} \item \textit{MaterialFlow} class \begin{itemize} \item Instances of that class represents the material flowing between processes \end{itemize} \end{itemize} \vspace{3mm} \begin{figure}[ht!] % replace 't' with 'b' to \centering \includegraphics[width=0.6\textwidth]{../figures/materialflow.pdf} \vspace{-0.1in} \caption{Schematic for passing material data between fuel processing system components.} \end{figure} \end{frame} \begin{frame} \frametitle{SaltProc flowchart} \vspace{-2mm} \begin{figure}[ht!] % replace 't' with 'b' to \centering \centering \includegraphics[width=1.05\textwidth]{./images/saltproc_flowchart.pdf} \caption{Tentative generic flowchart for SaltProc v1.0 python package.} \end{figure} \end{frame}
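\begin{frame}[fragile]
  \frametitle{Worked example: Xe removal efficiency correlation}
  A minimal Python sketch of the \cite{peebles_removal_1968} correlation shown earlier (illustrative code, not SaltProc itself; the argument names mirror the symbols on the gas separation slide):
\begin{verbatim}
import numpy as np

def xe_removal_efficiency(Q_L, Q_G, H, K_L, a, A_C, L, T, R):
    """epsilon_Xe = (1 - exp(-beta)) / (1 + alpha)."""
    alpha = R * T * Q_L / (H * Q_G)
    beta = K_L * a * A_C * L * (1 + alpha) / Q_L
    return (1.0 - np.exp(-beta)) / (1.0 + alpha)
\end{verbatim}
  The \textit{Process} instance representing the gas separator would evaluate such a correlation to obtain its variable noble gas removal efficiency.
\end{frame}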
{ "alphanum_fraction": 0.6824659728, "avg_line_length": 32.3575129534, "ext": "tex", "hexsha": "466c946639543137e4ecd6b49e4b7d5ce2c6930c", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-02-21T14:58:10.000Z", "max_forks_repo_forks_event_min_datetime": "2019-02-13T18:59:46.000Z", "max_forks_repo_head_hexsha": "85ef424c3405c14b5321c03e7e41de482d3a4c1b", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mehmeturkmen/2019-rykhl-prelim-saltproc", "max_forks_repo_path": "pres/demo.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "85ef424c3405c14b5321c03e7e41de482d3a4c1b", "max_issues_repo_issues_event_max_datetime": "2019-02-22T16:41:47.000Z", "max_issues_repo_issues_event_min_datetime": "2019-02-22T14:15:32.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mehmeturkmen/2019-rykhl-prelim-saltproc", "max_issues_repo_path": "pres/demo.tex", "max_line_length": 88, "max_stars_count": 1, "max_stars_repo_head_hexsha": "85ef424c3405c14b5321c03e7e41de482d3a4c1b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "arfc/2019-rykhl-prelim-saltproc", "max_stars_repo_path": "pres/demo.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-23T16:37:15.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-23T16:37:15.000Z", "num_tokens": 2156, "size": 6245 }
%TEX root = ../dissertation.tex

\chapter{Limitations and Future Work}
\label{chapter:limitations-future-work}

During the development of this solution, and mainly due to compatibility issues, some limitations were raised. The most flagrant is the possibility of compromising a network node and sniffing the contents of the propagated packets. Although we are using \gls{LLSec}, which assures confidentiality, integrity and authenticity of the propagated packets while also providing authentication of the network nodes, the messages are ciphered and deciphered on every device, so compromising an already deployed node exposes the payload to attackers. In the future, a solution that protects the packets from the source all the way to the destination needs to be addressed so that the solution can be totally secure against privacy attacks.\\
Additionally, and as discussed in Section \ref{sec:attack_analysis}, an attacker could try to introduce himself to the network by stealing the network credentials from a deployed device. The mitigation strategies for this attack can be either software based, assuring that secure memory areas cannot be copied to external locations, or hardware based, using integrated circuits that assure the stored information cannot be read from them \cite{Lesjak2014}. Although these strategies have not been addressed in this system, as we believe they are out of the scope of this project and more related to other computer science fields, they are still important and need to be addressed to increase the security of the system globally.\\
Furthermore, the authors attempted to publish their work by writing two articles and sending them to two separate conferences. From the comments of the reviewers, additional limitations and improvement opportunities arose.\\
Regarding the article that was sent to the international conference (Appendix \ref{appendix:acm_acsac}), the reviewers identified the bootstrapping process as not adequate for massive deployment, due to the fact that a new device needs to be physically connected to a central station and does not come shipped with the necessary credentials to start operating immediately. Although our goal was never a massive deployment, but rather scenarios with the dimension of a Smart Campus, this question loses some relevance. However, it is still important to think about this system as possibly being used in a large-scale enterprise solution, and to that extent this issue needs to be addressed. In order to avoid the necessity of physical connections, one could attempt, in the future, to resort to the always required 802.15.4 radio as the way of sending the custom firmware to the new device. This would need to be done in a secure environment, so that the firmware could not be sniffed by an attacker who could then steal the network credentials from it, but it would reduce the time and hardware connections required for flashing a new device.\\
Regarding the article that was sent to the national conference (Appendix \ref{appendix:inforum_paper}), the reviewers were mostly concerned about scalability issues. Due to budget limitations, it was not possible to create a network larger than four nodes, and so the capabilities of the system under heavy usage were not tested. This is important and needs to be studied before thinking about deploying this system on a large scale.
To achieve that, one could try, in the future, to integrate this system with the IoT-LAB\footnote{https://www.iot-lab.info/} large-scale infrastructure facility, which is suitable for testing small wireless sensor devices and heterogeneous communicating objects. If such integration could be achieved, the system could be tested with dozens of nodes on real hardware objects.\\
The submitted article was accepted as an Extended Abstract (Appendix \ref{appendix:inforum_abstract}), and presented both during the conference track and during the break hours as a poster (Appendix \ref{appendix:inforum_poster}).
{ "alphanum_fraction": 0.818617558, "avg_line_length": 360.3636363636, "ext": "tex", "hexsha": "10c63469ec037bac1f878e636de6b2b76936d5ef", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a9fc338a350c2f76cd292b45367dbd113853325f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tiagodiogo/thesis-dissertation", "max_forks_repo_path": "chapters/limitations-future-work.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a9fc338a350c2f76cd292b45367dbd113853325f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tiagodiogo/thesis-dissertation", "max_issues_repo_path": "chapters/limitations-future-work.tex", "max_line_length": 1134, "max_stars_count": null, "max_stars_repo_head_hexsha": "a9fc338a350c2f76cd292b45367dbd113853325f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tiagodiogo/thesis-dissertation", "max_stars_repo_path": "chapters/limitations-future-work.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 769, "size": 3964 }
\subsection{mimetypes -- Map filenames to MIME types}
The \texttt{mimetypes} module converts between filenames (or URLs) and MIME types. \texttt{guess\_type()} returns a \texttt{(type, encoding)} pair for a filename, \texttt{guess\_extension()} maps a MIME type back to a file extension, and \texttt{add\_type()} registers additional mappings at runtime. The default mappings combine the module's built-in table with the system-wide type maps (such as \texttt{/etc/mime.types}), loaded lazily on first use.
%
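A minimal usage sketch (the exact values returned can vary with the platform's type maps):

\begin{verbatim}
import mimetypes

print(mimetypes.guess_type("archive.tar.gz"))  # ('application/x-tar', 'gzip')
print(mimetypes.guess_type("photo.jpeg"))      # ('image/jpeg', None)
print(mimetypes.guess_extension("text/html"))  # '.html'

# Register a mapping that is not in the default tables
mimetypes.add_type("application/x-example", ".example")
print(mimetypes.guess_type("data.example"))    # ('application/x-example', None)
\end{verbatim}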
{ "alphanum_fraction": 0.6849315068, "avg_line_length": 14.6, "ext": "tex", "hexsha": "9aaa340ca2d7b0a643bb6d0425cde24b05203b23", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_forks_event_min_datetime": "2016-11-24T19:55:47.000Z", "max_forks_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "remigiusz-suwalski/programming-notes", "max_forks_repo_path": "src/python3/sections/mimetypes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "remigiusz-suwalski/programming-notes", "max_issues_repo_path": "src/python3/sections/mimetypes.tex", "max_line_length": 53, "max_stars_count": 1, "max_stars_repo_head_hexsha": "dd7d6f30d945733f7ed792fcccd33875b59d240f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "remigiusz-suwalski/programming-notes", "max_stars_repo_path": "src/python3/sections/mimetypes.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-28T05:03:18.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-28T05:03:18.000Z", "num_tokens": 18, "size": 73 }
%!TeX root=thesis.tex \chapter{Active Learning} \label{cha:active-learning} In this chapter we discuss active learning and look at its application to both the galaxy classification task and to the Radio Galaxy Zoo project. In Section \ref{sec:intro-active-learning}, we will briefly describe the key concepts of active learning, moving to discuss querying strategies for active learning for binary classification in \ref{sec:query-strategies}. We apply these methods in a simple experiment in Section \ref{sec:rgz-qbc}, showing that active learning may be useful in an astronomical context. In Section \ref{sec:active-learning-on-crowds} we extend the active learning discussion to a crowdsourcing context, and discuss how crowdsourcing complicates active learning. We propose an experiment for applying active learning to the Radio Galaxy Zoo in Section \ref{sec:al-rgz-ideal-experiment}, highlighting some difficulties in doing so. Finally, in Section \ref{sec:al-citizen-science} we look at the differences between the standard crowdsourcing context and citizen science, pointing out how they differ and how this breaks assumptions of existing methods. \section{Introduction} \label{sec:intro-active-learning} In supervised learning we deal with a set of data points and their associated labels. This dataset may be expensive to obtain, but the main costs may come from collecting labels, rather than from collecting the data points themselves. Examples of such data include text samples \citep{lewis94, mccallum98}, and images \citep{loy11, lintott08}, both of which are now widely and cheaply available through the internet. A more abstract example is scientific hypotheses \citep{king04}. Labelling text and images is hard, error-prone, and requires humans; and performing a scientific experiment to test a hypothesis is considerably more expensive than coming up with the hypothesis. It may even be the case that we simply cannot label all the data because there is too much, such as in the Galaxy Zoo \citep{lintott08} and Radio Galaxy Zoo \citep{banfield15} projects. \emph{Active learning} (or \emph{query learning} \citep{settles09, seung92, angluin86}) allows a machine learning algorithm to select specific, unlabelled examples to be labelled by an expert. The algorithm effectively chooses its own training set \citep{settles09}. The hope is that the algorithm chooses to label only the most useful examples \citep{mccallum98}, and the expensive process of labelling redundant or useless examples is avoided \citep{engelson99}. Intelligently selecting the training set as in active learning can result in massively reduced labelling costs \citep{lewis94, king04} or even make intractable labelling problems tractable. While there are many variations of active learning scenarios, we focus on \emph{pool-based} active learning in this thesis. In pool-based active learning, we already have a large pool of unlabelled data points accessible to our algorithms, and our algorithms can choose to present any of these data points to the expert. The pool-based scenario commonly arises when we are able to obtain a lot of unlabelled data at once, such as in astronomical surveys \citep{pelleg04, richards12, marshall15}. Active learning has already been successfully applied in astronomy. \citet{pelleg04} applied active learning to the Sloan Digital Sky Survey to find anomalies in the survey. \citet{richards12} applied active learning to classify variable stars from the All Sky Automated Survey. 
Both papers showed that active learning resulted in a great reduction in the number of labels needed to achieve their respective tasks. \section{Query Strategies} \label{sec:query-strategies} A \emph{query strategy} is the approach an active learning algorithm takes to selecting a new data point to label. There are many different query strategies, but here we focus on uncertainty sampling and query-by-committee. All pool-based query strategies take the same form. We are given some pool of data $\mathcal X$ and a set of labelled data $\mathcal D \subseteq \mathcal X \times \mathcal Y$. We want to select $\tilde x \in \mathcal X$ such that labelling $\tilde x$ maximises our information gain. \subsection{Uncertainty Sampling} \label{sec:uncertainty-sampling} \emph{Uncertainty sampling} \citep{lewis94} is perhaps the most common query strategy. Given a classification model $y(\vec x) = p(z \mid \vec{x})$ with the ability to output a probability (including probabilistic classifiers like logistic regression, nearest-neighbour classifiers \citep{lewis94}, and combinations of probabilistic and non-probabilistic classifiers \citep{lewis94b}), the queried point $\tilde x$ is the data point for which the model is least certain of the classification. This is not well-defined and an uncertainty sampling algorithm must choose what ``least certain'' means. There are three common measures of uncertainty --- confidence-, entropy-, and margin-based --- but in the case of binary classification, they all reduce to one strategy~\citep{settles09}: \[ \tilde x = \underset{\vec x}{\mbox{argmax}}\ \left(0.5 - \abs{y(\vec x) - 0.5}\right). \] The intuition is that the further a data point is from the decision boundary, the more certain the classifier is of the assigned label, so choosing the closest data point to the decision boundary is equivalent to choosing the most uncertain data point. Another interpretation is that $0.5 - \abs{y(\vec x) - 0.5}$ is the expected probability of mislabelling $\vec x$ \citep{settles09}. % In the confidence-based approach, $\tilde x$ is the data point that is % closest to the decision boundary, i.e. % % In the \subsection{Query-by-Committee} \label{sec:qbc} \emph{Query-by-committee} (QBC) is an ensemble-based query strategy first proposed by \citet{seung92}. A committee of classifiers is trained on the known labels, with different subsets of the labelled data to ensure variety in the committee. The committee then labels the unlabelled pool of data: Each classifier votes on each data point and our prediction is the label with the most votes. The information gain associated with each data point is estimated by the disagreement of the committee on the label, and the data point with the most disagreement is queried. Disagreement can be measured in multiple ways. The most obvious way is to simply count the number of classifiers that disagree with the majority label \citep{seung92}. Other methods include computing the entropy of the committee vote \citep{mccallum98, dagan95}, and using Kullback-Leibler divergence \citep{mccallum98}. \section{Query-by-Committee on the Galaxy Classification Task} \label{sec:rgz-qbc} \begin{figure} \centering \includegraphics[width=0.8\textwidth] {images/experiments/rgz_qbc.pdf} \caption{Logistic regression trained on the \citeauthor{norris06} labels with different amounts of training data and two different query strategies. 
The filled areas represent standard deviation across multiple test sets.} \label{fig:rgz-qbc} \end{figure} We tested QBC active learning on the galaxy classification task described in Chapter \ref{cha:cross-identification}, comparing QBC to passive (i.e. random) selection as a query strategy. We used a committee of 20 logistic regression classifiers for the QBC test. Each was presented with 75\% of the known labels at random, stratified by the labels. For the passive test, we sampled 100 galaxies at random (stratified by the labels) and trained a logistic regression classifier on these. We then drew a batch of new labels, added these to the existing label set, and retrained the classifier. This was repeated until the classifier had seen the entire training set ($10^4$ labels). The process for testing QBC was identical, except that instead of drawing new labels at random, the new labels were drawn in order of highest to lowest disagreement of the committee. Disagreement was measured by counting the number of committee labels that disagreed with the majority vote of the labels. After running the experiment, we observed that QBC outperformed passive selection. We hypothesised that this was because querying at random ignores the fact that there are far more negative examples than positive examples in the galaxy classification task. By this hypothesis, QBC would perform comparably to sampling from the set of positive examples and the set of negative examples at equal rates. To test this, we ran a third test with a random sampler that accounted for class imbalance. We found that this third test performed similarly to QBC. All three tested querying strategies are plotted in Figure \ref{fig:rgz-qbc}. \section{Active Learning on Crowds} \label{sec:active-learning-on-crowds} Traditional active learning assumes that we have access to one expert, who always issues correct labels. When labels are sourced from a crowd, these assumptions no longer hold: The crowd are non-experts and can give incorrect labels \citep{mozafari12,yan11}, and there are multiple labellers with different accuracies \citep{yan11}. We can now ask questions deeper than simply ``which label should I request?'' --- we can, for example, ask ``which labeller should I ask?'', or ``do I need to re-request this label?''. \citet{yan11} apply the \citet{yan10} model (Section \ref{sec:yan}) to the problem of active learning from crowds. We remind the reader that this model consists of a label model $p(z \mid \vec x)$ and a data-dependent labeller model $p(y_t \mid \vec x, z)$, where $\vec x$ is an instance, $y_t$ is the label assigned by labeller $t$, and $z$ is the groundtruth label. To extend this model into active learning, \citeauthor{yan11} introduce a query strategy where not only an instance is requested, but also a specific labeller. First, uncertainty sampling is used with the label model to choose a set of ideal points to query. With logistic regression (Equation \ref{eq:raykar-logreg}), the decision boundary between positive and negative labels is a hyperplane $\vec w^T \vec x = 0$; uncertainty sampling would choose to query the points nearest (or on) this hyperplane. The labeller and instance to query are then chosen as solutions to the following optimisation problem: \begin{align*} \underset{\vec x, t}{\text{minimise}}\ & \eta_t(\vec x)\\ \text{s.t. } & \vec w^T \vec x = 0. \end{align*} Intuitively, we query the instance on the decision hyperplane with the least noisy labeller. 
While this instance may not actually exist in our pool, we simply choose the instance closest to it (i.e. using Euclidean distance). This method has similar drawbacks to the \citeauthor{yan10} passive learning method described in Chapter \ref{cha:ml}: The number of parameters grows large with large numbers of annotators, and the expectation-maximisation algorithm only converges to a local optimum. In our implementation, training was also quite slow, meaning online active learning may be impractical. It also does not account for the possibility of relabelling instances. \citet{mozafari12} suggest an approach similar to uncertainty sampling, with uncertainty computed using a bootstrap method. A full description of this method is beyond the scope of this thesis. Instead, we look at the approach they take to handle noise. Noting that crowds may perform worse on some subsets of the data than other subsets, \citeauthor{mozafari12} solve an integer linear program to compute the redundancy required for different subsets of the instance pool. First, they partition the data, then estimate the probability of obtaining a correct crowd label for each partition. This estimation is accomplished by querying the crowd on a sample of instances from each partition. The estimated probability is then used to compute the redundancy required. For full details, we refer the reader to the paper \citep{mozafari12}. \section{Active learning for Radio Galaxy Zoo} \label{sec:al-rgz-ideal-experiment} Radio Galaxy Zoo \citep{banfield15} is a domain where active learning could be very useful. There are many more radio galaxies to classify than there are volunteers, so making the best use we can of their labels is particularly important. Applying active learning to the task that Radio Galaxy Zoo presents to its volunteers is non-trivial. Volunteers first choose which radio components are associated with the same AGN, and then decide where this AGN is located on an infrared image. Even ignoring the radio components problem, the position is a real-valued vector label, rather than a binary label. In this thesis, we have cast the cross-identification task as a binary classification problem, converting these real-valued positions into binary labels by assigning a $1$ to the closest galaxy and $0$ to all other galaxies. This should still work for active learning, but there is no obvious way to develop a query strategy. Volunteers are presented with an image of a radio object to label, so a query strategy must choose radio objects to present. Methods like uncertainty sampling have no clear application here: How do we aggregate uncertainties in our classifications of neighbouring galaxies into an uncertainty for a radio object? We may be able to perform this aggregation by a number of methods, such as summing, averaging, or maximising the uncertainties. We could even aggregate using something like entropy, looking at the distribution of uncertainties of galaxies in the image. The choice is not obvious and an experiment is required. While we do not perform such an experiment in this thesis, we will suggest one at the end of this section. Query-by-committee may generalise to Radio Galaxy Zoo. We could label radio objects by considering nearby galaxies and classifying them using our methods from Chapter \ref{cha:cross-identification}, then selecting the galaxy with the most certain classification as the host galaxy of the radio object.
Multiple classifiers could be used to perform this selection, and the percentage disagreement on the location of the host galaxy would then indicate the uncertainty associated with the radio object. However, this method would likely find radio objects that are intrinsically hard to cross-identify rather than radio objects where labelling would give high information gain. An example of this would be a compact radio object with multiple potential host galaxies very close to its centre. Cross-identifying such an object is usually very easy --- the galaxy closest to the centre is most likely to be the host galaxy --- but if there are multiple galaxies equidistant from the centre then a committee of classifiers will likely choose equally between them, resulting in high disagreement. In preliminary experiments with using QBC for this task, we found exactly this behaviour. An ideal experiment for active learning for Radio Galaxy Zoo would compare different generalisations of uncertainty sampling to QBC and passive sampling. In contrast to our approach to the cross-identification problem, instances would be radio objects rather than galaxies. Radio objects would be drawn using the different query strategies. The results of queries would come from an expert catalogue such as the \citeauthor{norris06} catalogue. The resulting labels would then be used to train a classifier. Training is possible using our methods by assigning all nearby galaxies a positive or negative label based on the result of the query, in a very similar way to how we converted the \citeauthor{norris06} and \citeauthor{fan15} cross-identifications into label sets. There is a clear problem of scale --- how large a radius around the radio object do we consider ``nearby''? --- and there is no clear solution to this problem. After such an experiment was used to determine good query strategies, the approach would need to be extended to work with the crowd. Queries would now be sent to a simulated crowd with realistic (i.e. Radio Galaxy Zoo volunteer-like) noise. This is a much harder problem to solve: One must now consider the problem of relabelling, label noise, and so on. Inspiration for such methods could likely be drawn from \citet{mozafari12}, as their partitioning approach is similar to that chosen by \citet{banfield15} for the Radio Galaxy Zoo (where the partitions are compact and complex radio objects). Reviewing these suggested experiments, it is clear that active learning for Radio Galaxy Zoo is a very hard problem to solve! As such, these experiments were beyond the scope of this thesis, but we hope that our classification methods may be applied in an aggregate way to radio objects in future work. \section{Active Learning for Citizen Science} \label{sec:al-citizen-science} Thus far, we have taken the word ``crowdsourcing'' to mean any scenario where there are multiple labellers. This conflates a number of arguably different scenarios: We may have multiple experts, multiple non-experts with domain knowledge, multiple non-experts without domain knowledge, and so on. The literature often does not disambiguate. \citet{raykar10} consider the problem of multiple experts who disagree; \citet{yan10} and \citet{mozafari12} consider crowdsourcing using a platform such as Amazon Mechanical Turk, where non-experts are paid to label data on a per-label basis. \citet{nguyen15} consider a hybrid scenario where we have access to both non-experts \emph{and} experts. 
The specific scenario we are interested in is citizen science, where volunteers interested in science contribute to labelling scientific data, typically through web interfaces like Galaxy Zoo \citep{lintott11}. With the rise of internet usage and the amount of available data throughout scientific disciplines, citizen science is steadily increasing in popularity and impact \citep{marshall15}. We believe that citizen science is a crowdsourcing scenario unlike those presented in existing literature, and as such, breaks assumptions often made by active learning methods that operate on crowds. There is a key difference between citizen science and paid crowdsourcing: In citizen science, we cannot choose our labellers. Using a paid crowdsourcing platform, we have some degree of freedom over who we query; this is (for example) the assumption made in \citet{yan11} and the crux of their methods. We can choose the best labeller and the best instance to query. In citizen science, volunteers request labels from us. Another factor is that citizen science tends to involve a very large number of labellers: Galaxy Zoo has involved several hundred thousand volunteers \citep{marshall15}; Radio Galaxy Zoo has over 2000 volunteers involved with just the ATLAS--CDFS observations that we have looked at in this thesis. For contrast, \citet{raykar10}, on whose methods we based our own crowd experiments, used 5 labellers for most of their experiments (with one experiment with 164 labellers); \citet{mozafari12} and \citet{yan11} both tested their methods with 3--5 labellers. The large number of labellers involved in practical citizen science raises many challenges. In particular, estimating labeller accuracies is very difficult for large numbers of labellers as the number of model parameters is often tied to the number of labellers. % Additionally, the label matrix $y_{t, i}$ is sparse: Most labellers have % not labelled most instances. This can make computing bulk properties of % the labels or labellers difficult. It may also be difficult to estimate the accuracies of individual volunteers, for a number of reasons. These volunteers may be anonymous, and identifying which labels were assigned by the same individual may be impossible. Indeed, Radio Galaxy Zoo reports that 27\% of their labels come from anonymous volunteers. Finally, we note that volunteers in citizen science vary greatly in how many instances they label. A large number of labels come from volunteers who only label a few instances. This means that trialling volunteers on a ``gold standard'' of instances with known groundtruth is impractical: If we tested every volunteer on a few known instances, then for many volunteers, the known instances would be all that they label! Yet we cannot treat all volunteers as if they only label a few instances --- large contributions are also made by small numbers of volunteers labelling many instances. \citet{marshall15} highlight this variety with respect to Galaxy Zoo 2. They emphasise the importance of designing projects for both recurring and new volunteers, citing the significant contributions of both groups. This can be visualised by Figure \ref{fig:galaxy-zoo-2-volunteer-distribution}, which depicts Galaxy Zoo 2 volunteer contributions. We would like to extend their statement: Just as we must design citizen science projects for both recurring and new volunteers, we must design citizen science \emph{algorithms} for both recurring and new volunteers. 
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{images/galaxyzoovolunteers} \caption{Label contributions from volunteers in Galaxy Zoo 2. Each square represents a single user, with the area of the square proportional to the number of instances labelled by that user. Colours are arbitrary. \emph{Image: K. Willett. Reproduced from \citet{marshall15}.}} \label{fig:galaxy-zoo-2-volunteer-distribution} \end{figure}
{ "alphanum_fraction": 0.7515881709, "avg_line_length": 59.5953002611, "ext": "tex", "hexsha": "8de3cf3f6895410f1208addc3f9af3c95b2f96df", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2018-10-03T13:37:15.000Z", "max_forks_repo_forks_event_min_datetime": "2015-11-07T00:20:09.000Z", "max_forks_repo_head_hexsha": "ce14432c36de0574b73d813304365b74446a61f8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "chengsoonong/crowdastro", "max_forks_repo_path": "tex/thesis/5_active.tex", "max_issues_count": 234, "max_issues_repo_head_hexsha": "ce14432c36de0574b73d813304365b74446a61f8", "max_issues_repo_issues_event_max_datetime": "2019-06-27T00:26:08.000Z", "max_issues_repo_issues_event_min_datetime": "2016-02-21T23:53:16.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "chengsoonong/crowdastro", "max_issues_repo_path": "tex/thesis/5_active.tex", "max_line_length": 98, "max_stars_count": 13, "max_stars_repo_head_hexsha": "ce14432c36de0574b73d813304365b74446a61f8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "chengsoonong/crowdastro", "max_stars_repo_path": "tex/thesis/5_active.tex", "max_stars_repo_stars_event_max_datetime": "2020-04-20T05:29:58.000Z", "max_stars_repo_stars_event_min_datetime": "2015-11-07T15:24:44.000Z", "num_tokens": 5142, "size": 22825 }
%% %% forked from https://gits-15.sys.kth.se/giampi/kthlatex kthlatex-0.2rc4 on 2020-02-13 %% expanded upon by Gerald Q. Maguire Jr. %% This template has been adapted by Anders Sjögren to the University %% Engineering Program in Computer Science at KTH ICT. Adaptation is the %% translation of English headings into Swedish as the addition of Swedish %% text. Original body text is deliberately left in English. %% set the default lanage to english or swedish by passing an option to the documentclass - this handles the inside tile page \documentclass[english, master]{tex/kththesis} %\documentclass[swedish]{kththesis} % \usepackage[style=numeric,sorting=none,backend=biber]{biblatex} \setlength {\marginparwidth }{2cm} %leave some extra space for todo notes \usepackage{todonotes} \usepackage[perpage,para,symbol]{footmisc} %% use symbols to ``number'' footnotes and reset which symbol is used first on each page %% Reduce hyphenation as much as possible % \hyphenpenalty=15000 % \tolerance=1000 % include a variety of packages that are useful \include{tex/includes} %% Acronyms % note that nonumberlist - removes the cross references to the pages where the acronym appears % note that nomain - does not produce a main gloassay, this only acronyms will be in the glossary % note that nopostdot - will present there being a period at the end of each entry \usepackage[acronym, section=section, nonumberlist, nomain, nopostdot]{glossaries} %\glsdisablehyper \makeglossaries \input{tex/acronyms} %load the acronyms file %% definition of new command for bytefield package \newcommand{\colorbitbox}[3]{% \rlap{\bitbox{#2}{\color{#1}\rule{\width}{\height}}}% \bitbox{#2}{#3}} \newenvironment{swedishnotes}% {\begin{center} \selectlanguage{swedish} \color{blue}}% {\end{center}\selectlanguage{english} } \begin{document} \ifinswedish \selectlanguage{swedish} \else \selectlanguage{english} \fi %% Information for inside title page \title{Detecting Echo Chambers in social media; a graph-based approach} % \subtitle{Detecting Polarization in social medias} % give the alternative title - i.e., if the thesis is in English, then give a Swedish title % \alttitle{Detta är den svenska översättningen av titeln} % \altsubtitle{Detta är den svenska översättningen av undertiteln} % alternative, if the thesis is in Swedish, then give an English title %\alttitle{This is the English translation of the title} %\altsubtitle{This is the English translation of the subtitle} \authorsLastname{Zappia} \authorsFirstname{Francesco} \email{[email protected]} \kthid{u104906} % If the student has an ORCiD - add it here % \orcid{0000-0002-00001-1234} \authorsSchool{\schoolAcronym{EECS}} % If there is a second author - add them here: % \secondAuthorsLastname{Student} % \secondAuthorsFirstname{Fake B.} % \secondemail{[email protected]} % \secondkthid{u100002} % % If the student has an ORCiD - add it here % \secondorcid{0000-0002-00001-5678} % \secondAuthorsSchool{\schoolAcronym{ABE}} \supervisorAsLastname{Neumann} \supervisorAsFirstname{Stefan} \supervisorAsEmail{[email protected]} % If the supervisor is from within KTH add their KTHID, School and Department info \supervisorAsKTHID{u112750} \supervisorAsSchool{\schoolAcronym{EECS}} \supervisorAsDepartment{Division of Theoretical Computer Science} % other for a supervisor outside of KTH add their organization info %\supervisorAsOrganization{Timbuktu University, Department of Pseudoscience} %If there is a second supervisor add them here: \supervisorBsLastname{Anagnostopoulos} \supervisorBsFirstname{Aris} 
\supervisorBsEmail{[email protected]} % % If the supervisor is from within KTH add their KTHID, School and Department info % \supervisorBsKTHID{u100003} % \supervisorBsSchool{\schoolAcronym{ABE}} % \supervisorBsDepartment{Public Buildings} % other for a supervisor outside of KTH add their organization info \supervisorBsOrganization{Sapienza University of Rome} \examinersLastname{Gionis} \examinersFirstname{Aristides} \examinersEmail{[email protected]} % If the examiner is from within KTH add their KTHID, School and Department info \examinersKTHID{u93105} \examinersSchool{\schoolAcronym{EECS}} \examinersDepartment{Division of Theoretical Computer Science} % other for a examiner outside of KTH add their organization info %\examinersOrganization{Timbuktu University, Department of Pseudoscience} % \hostcompany{Företaget AB} % Remove this line if the project was not done at a host company %\hostorganization{CERN} % if there was a host organization \date{\today} \programcode{TCSCM} %% Alternatively, you can say \programme{Civilingenjör Datateknik} to directly set the programme string \titlepage % document/book information page \bookinfopage % Frontmatter includes the abstracts and table-of-contents \frontmatter \setcounter{page}{1} \input{tex/abstract.tex} \input{tex/acknowledgements.tex} \fancypagestyle{plain}{} \renewcommand{\chaptermark}[1]{ \markboth{#1}{}} \tableofcontents \markboth{\contentsname}{} \cleardoublepage \listoffigures \cleardoublepage \listoftables \cleardoublepage % \listoflistings % \cleardoublepage \printglossary[type=\acronymtype, title={List of acronyms and abbreviations}] \label{pg:lastPageofPreface} % Mainmatter is where the actual contents of the thesis goes \mainmatter \renewcommand{\chaptermark}[1]{\markboth{#1}{}} \input{tex/introduction.tex} \cleardoublepage \input{tex/background.tex} \cleardoublepage \input{tex/complexity.tex} \cleardoublepage \input{tex/methods.tex} \cleardoublepage % \input{tex/whatyoudid.tex} % \cleardoublepage \input{tex/results.tex} \cleardoublepage \input{tex/conclusions.tex} \cleardoublepage % \noindent\rule{\textwidth}{0.4mm} % \todo[inline]{In the references, let Zotero or other tool fill this % in for you. I suggest an extended version of the IEEE style, to include % URLs, DOIs, ISBNs, etc., to make it easier for your reader to find % them. This will make life easier for your opponents and examiner. \\ % % IEEE Editorial Style Manual: \url{https://www.ieee.org/content/dam/ieee-org/ieee/web/org/conferences/style_references_manual.pdf} % } % Print the bibliography (and make it appear in the table of contents) %\printbibliography[heading=bibintoc] % The lines below are for BibTeX \bibliographystyle{tex/myIEEEtran} \renewcommand{\bibname}{References} \addcontentsline{toc}{chapter}{References} \bibliography{tex/references} % \cleardoublepage % \appendix % \renewcommand{\chaptermark}[1]{\markboth{Appendix \thechapter\relax:\thinspace\relax#1}{}} % \chapter{Something Extra} % \todo[inline, backgroundcolor=aqua]{svensk: Extra Material som Bilaga} \label{pg:lastPageofMainmatter} \clearpage \section*{For DIVA} \divainfo{pg:lastPageofPreface}{pg:lastPageofMainmatter} \end{document}
{ "alphanum_fraction": 0.7796734814, "avg_line_length": 32.845410628, "ext": "tex", "hexsha": "dfe82527818e4e654a27b02fa837ee9340a365c7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2ab4c0509a119d7b5f332b842a4101470a884351", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "morpheusthewhite/master-thesis", "max_forks_repo_path": "thesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2ab4c0509a119d7b5f332b842a4101470a884351", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "morpheusthewhite/master-thesis", "max_issues_repo_path": "thesis.tex", "max_line_length": 135, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2ab4c0509a119d7b5f332b842a4101470a884351", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "morpheusthewhite/master-thesis", "max_stars_repo_path": "thesis.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-15T14:01:29.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-15T14:01:29.000Z", "num_tokens": 1959, "size": 6799 }
\documentclass[english]{article} % import packages \usepackage{substitutefont} % must be above babel \usepackage{babel} % babel must be above all other packages \usepackage[fleqn]{amsmath} \usepackage[iso]{isodate} \usepackage[labelfont=bf]{caption} \usepackage[utf8]{inputenc} \usepackage{algorithmic} \usepackage{algorithm} \usepackage{alltt} \usepackage{amsthm, amssymb, bm, bbm} \usepackage{array} \usepackage{color} \usepackage{comment} \usepackage{graphicx} \usepackage{hyperref} \usepackage{minted} \usepackage{multicol} \usepackage{multirow} \usepackage{natbib} \usepackage{parskip} \usepackage{placeins} \usepackage{ragged2e} \usepackage{subfigure} \usepackage{subfiles} \usepackage{times} \usepackage{csquotes} % must appear after some package above % configure image file extensions \DeclareGraphicsExtensions{.pdf, .png, .jpg} \graphicspath{{images/}{../images/}} % fix hyperref and algorithmic packages \newcommand{\theHalgorithm}{\arabic{algorithm}} \renewcommand{\vec}{\mathbf} % use conference package \usepackage[accepted]{icml2017} % set running title \icmltitlerunning{Automatic Bootstrapping on a Vast Predictive Network} \begin{document} \twocolumn[ \icmltitle{Automatic Bootstrapping on a Vast Predictive Network of Reinforcement Learning Sub-agents} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Dylan R. Ashley}{ua} \end{icmlauthorlist} \icmlaffiliation{ua}{University of Alberta, Edmonton, Alberta, Canada} \icmlcorrespondingauthor{Dylan R. Ashley}{[email protected]} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} We experiment with using the {$\lambda$}-greedy algorithm to simplify the task of tuning the trace-decay parameter on a vast network of temporal difference learning sub-agents, a problem that currently limits the utility of such networks. We find that {$\lambda$}-greedy can achieve good performance on our network. We also extend {$\lambda$}-greedy by using a recent, robust method of estimating the variance of the return. We find that this new variant has more stability in the $\lambda$ values selected from timestep to timestep but at the cost of decreased performance. \end{abstract} \section{Introduction} \label{sec:introduction} \subfile{sections/introduction} \section{Algorithms} \label{sec:algorithms} \subfile{sections/algorithms} \section{Robot} \label{sec:robot} \subfile{sections/robot} \section{Domain} \label{sec:domain} \subfile{sections/domain} \section{Value Functions} \label{sec:value_functions} \subfile{sections/value_functions} \section{Results} \label{sec:results} \subfile{sections/results} \section{Conclusion} \label{sec:conclusion} \subfile{sections/conclusion} \section{Future Work} \label{sec:future_work} \subfile{sections/future_work} \FloatBarrier \pagebreak \bibliographystyle{apa} \bibliography{main.bib} \end{document}
{ "alphanum_fraction": 0.7890324834, "avg_line_length": 26.2660550459, "ext": "tex", "hexsha": "77a4245d4ab9cf190d99ecd004cb17e4190756de", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4df5e32943eb1b11a9167afa1ca19c8387291980", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dylanashley/direct-lambda-greedy", "max_forks_repo_path": "report/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4df5e32943eb1b11a9167afa1ca19c8387291980", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dylanashley/direct-lambda-greedy", "max_issues_repo_path": "report/main.tex", "max_line_length": 578, "max_stars_count": null, "max_stars_repo_head_hexsha": "4df5e32943eb1b11a9167afa1ca19c8387291980", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dylanashley/direct-lambda-greedy", "max_stars_repo_path": "report/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 800, "size": 2863 }
% ============================================================================== % % P R E A M B L E % % ============================================================================== \chapter{Storage Media} % --------------------------------------------------------- % \label{ch:app:media} % ---------------------------------------------------------------------------- % If you have a physical copy of this report, this appendix may contain a storage medium with a copy of the project repository. A current copy can always be obtained by either cloning the Github repository from \href{https://github.com/alpenwasser/pitaya/}{\nolinkurl{https://github.com/alpenwasser/pitaya/}}, or by downloading a release archive from \href{https://github.com/alpenwasser/pitaya/releases}{\nolinkurl{https://github.com/alpenwasser/pitaya/releases}}.
{ "alphanum_fraction": 0.4681818182, "avg_line_length": 55, "ext": "tex", "hexsha": "7e316f08b841a61476a62922318730ca2d39a330", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a6ced99408171ffcd96c9444adfe30d2ba699f48", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alpenwasser/pitaya", "max_forks_repo_path": "doc/report/chunks/media.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6ced99408171ffcd96c9444adfe30d2ba699f48", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alpenwasser/pitaya", "max_issues_repo_path": "doc/report/chunks/media.tex", "max_line_length": 114, "max_stars_count": 4, "max_stars_repo_head_hexsha": "a6ced99408171ffcd96c9444adfe30d2ba699f48", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alpenwasser/pitaya", "max_stars_repo_path": "doc/report/chunks/media.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-15T20:19:03.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-22T15:26:34.000Z", "num_tokens": 174, "size": 880 }
\documentclass[8pt]{article}
\usepackage{geometry}
\geometry{landscape,margin=.2in}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[hidelinks]{hyperref}
% \hypersetup{
%     colorlinks=false,
%     linkcolor=white,
%     filecolor=magenta,
%     urlcolor=white,
% }
\bibliographystyle{abbrv}
\usepackage{multicol}
\usepackage{parskip}
\DeclareMathOperator*{\argmin}{argmin}
% ------------------------------------------------------------------------
\begin{document}
% ------------------------------------------------------------------------
% Overall intro
\section*{The math of reinforcement learning}
All the math for learning and choosing, in a common place and notation.
% ------------------------------------------------------------------------
% Start the sheet...
\begin{multicols}{4}
The aim of all reinforcement learning is to maximize the reward received. Call this $\rho$.
\subsection*{Some formality}
In all reinforcement learning problems there is a set $\textbf{S}$ of states, a set $\textbf{A}$ of actions, and a set $\textbf{R}$ of rewards. A model always starts off in some state $s_0 \in \textbf{S}$. Using policy $\pi$ it takes action $a \in \textbf{A}$. Each action leads to a new state $s' \in \textbf{S}$, and sometimes a reward $r \in \textbf{R}$. Eventually, a terminal state is found.
We don't know how to find $\rho$ directly, so it is estimated incrementally by an \emph{expected value} $V$. Said another way, $V \approx \rho$. As we will see there are many ways to define $V$. Many of the proofs in reinforcement learning aim to show that for some way of calculating $V$, $V \rightarrow \rho$ as $t \rightarrow \infty$.
Here the expected value at $s$ is $V$. The expected value at $s'$ is $V'$. The initial value $V_0$ is arbitrary, but is important by Bellman's optimality principle \footnote{\url{https://en.wikipedia.org/wiki/Bellman_equation}}.
The size of each set is denoted by $k = |\textbf{A}|$, $m = |\textbf{S}|$, and $o = |\textbf{R}|$.
\subsection*{A recursive notion of time}
To keep the notation simple, we don't mention time explicitly. Learning happens recursively, with changes denoted by the `$\leftarrow$'. For example, $V \leftarrow V + r$ is equivalent to $V(t+\delta t) = V(t) + r(t)$.
\subsection*{Other simplifications}
Commonly $V$ is denoted as a function of $s$, $a$, and $t$. That is as $V(s,a,t)$, or even as $V(s,a, s', a', t)$. To keep the notation compact--at the cost of precision--this kind of thing is left implicit. Only when really needed is the complete notation used.
% ------------------------------------------------------------------------
% Begin denoting all the models
% \vfill\null
% \columnbreak
\section*{The models}
\subsection*{The minimal}
The simplest possible reinforcement learning rule is:
\begin{eqnarray}
V \leftarrow V + r \\
\pi \leftarrow \frac{V}{\sum_\textbf{A}{V}}
\end{eqnarray}
Where the initial value is free, $V_0 \in \mathbb{R}$, and $\sum_\textbf{A}{V}$ is the sum of the values over all actions in $\textbf{A}$. The difference in size between $V$ and $r$ determines the learning rate. This is equivalent to:
\begin{eqnarray}
V \leftarrow V + \alpha r
\end{eqnarray}
Where the learning rate is explicit, with $0 < \alpha \leq 1$ and $V_0 = 0$.
\subsubsection*{Other policies}
Different policies can be calculated with the same values. The linear version above is not common.
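As a quick illustration with hypothetical numbers: if two actions have accumulated values $V = (1, 3)$, the linear rule above gives
\begin{eqnarray}
\pi \leftarrow \Big(\frac{1}{1 + 3}, \frac{3}{1 + 3}\Big) = (0.25, 0.75)
\end{eqnarray}
so better-valued actions are proportionally more likely to be chosen.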
Two common alternatives are softmax,
\begin{eqnarray}
\pi \leftarrow \frac{e^{V}}{\sum_\textbf{A}{e^{V}}}
\end{eqnarray}
and $\epsilon$-greedy
\begin{eqnarray}
TODO
\end{eqnarray}
\textbf{Note:} these, or any other policy, can be mixed and matched arbitrarily with any learning rule.
\subsection*{The minimal, discounted}
The balance between past values and new rewards can be modulated with:
\begin{eqnarray}
V \leftarrow \gamma V + \alpha r
\end{eqnarray}
Where $(0 < \gamma \leq 1)$, $(0 < \alpha \leq 1)$, and $V_0 = 0$. Once the ratio $\alpha/\gamma > 0.5$, current rewards begin to matter more than past values.
This general method for controlling the influence of past values carries over to more complex models, and longer time horizons. Use it as needed.
\subsection*{The discount}
Often values or summed rewards will be discounted as time passes. A common form is:
\begin{eqnarray}
\gamma \leftarrow \gamma ^ t
\end{eqnarray}
Where $0 < \gamma \leq 1$ and $t \in \mathbb{N}_1$ is an integer count of the elapsed discrete time.
Another form is exponential decay \cite{Francois-Lavet2015}:
\begin{eqnarray}
\gamma \leftarrow \gamma - \gamma \tau
\end{eqnarray}
Where $\tau \in \mathbb{R}_+$. Typically $\tau \approx 0.02$.
\textbf{Note}: anywhere $\gamma$ is used here, assume a discounted form can be--and often is--substituted in.
% \vfill\null
% \columnbreak
\subsection*{The temporal difference}
If the reward is delayed, the minimal model can't give credit to past states. One solution is to take the difference between the current $V$ and the next, $V'$, as in the \emph{SARSA} rule:
\begin{eqnarray}
V \leftarrow V + \alpha (r + \gamma V' - V)
\end{eqnarray}
Where $(0 < \gamma \leq 1)$, $(0 < \alpha \leq 1)$, and $V_0 = 0$. Unlike the minimal model, here $\gamma$ weighs the influence of the future, not the past.
\subsection*{The maximum difference}
\emph{Q learning} takes the difference between the current value $V$ and the maximum value, $\max_\textbf{A} \textbf{V}$.
\begin{eqnarray}
V \leftarrow V + \alpha (r + \gamma \max_\textbf{A} \textbf{V} - V)
\end{eqnarray}
Where $(0 < \gamma \leq 1)$, $(0 < \alpha \leq 1)$, and $V_0 = 0$.
\textbf{Note:} as policies are defined by their target action $a$, \emph{Q learning} is often called an \emph{off-policy} learning rule, while the temporal difference is \emph{on-policy}.
\subsection*{The average difference}
\emph{Advantage learning} works with relative changes in value, comparing $V$ to its average $\bar{V}$.
\begin{eqnarray}
V \leftarrow V + \alpha (r + \gamma \bar{V} - V) \\
\bar{V} = \frac{1}{k} \sum_\textbf{A}{V}
\end{eqnarray}
In practice $\bar{V}$ is approximated online, often with a discounting scheme.
\textbf{Note}: advantages play an important role in policy gradients.
\subsection*{The regrettable difference}
TODO....
\vfill\null
\columnbreak
\subsection*{The policy gradient}
Policy gradients learn to map states directly to actions. They are most useful when problems are continuous in action space and time, or when guarantees of convergence to local optima are needed. Policy gradients tend, however, to be sample inefficient, and by definition they are also \emph{on-policy} methods, which can also slow learning down.
A policy is the probability of taking an action $a$, given a state $s$ and some generic parameters $\theta$.
\begin{eqnarray}
\pi = P(a, s, \theta)
\end{eqnarray}
To learn a good policy $\pi$, the parameter gradient should follow the reward gradient.
\begin{eqnarray}
\theta \leftarrow \theta + \alpha \frac{\partial \rho}{\partial \theta}
\end{eqnarray}
As with other reinforcement learning methods, finding a suitable way to estimate $\rho$ is a major concern, and our update becomes:
\begin{eqnarray}
\theta \leftarrow \theta + \alpha \frac{\partial V}{\partial \theta}
\label{eq:grad}
\end{eqnarray}
\subsection*{The average (again)}
In a gradient setting, under \emph{advantage learning}, increases in $V$ are by definition better than the average, $\bar{V}$.
\begin{eqnarray}
V \leftarrow V + \alpha (r + \gamma \bar{V} - V) \\
\bar{V} = \frac{1}{k} \sum_\textbf{A}{V}
\end{eqnarray}
Gradients driven by advantage updates ensure learning is always better than average \cite{Sutton}. By eq. \ref{eq:grad}, perturbations to $\theta \leftarrow \theta + \delta \theta$ which increase $V$ must, by definition, be better than average.
\subsection*{The generalized advantage}
Discounting values can introduce a bias to the final estimate. The generalized advantage scheme takes this into account \cite{Schulman2015a}.
\subsection*{Actor-critic}
Value learning and policy learning can be joined into a single \emph{actor-critic architecture}. TODO\ldots
% ------------------------------------------------------------------------
\vfill\null
\columnbreak
\section*{The planning models}
Planning means the learner has, or creates, an \emph{explicit map} of the state-space. This map can come from outside the system, from some useful oracle, or can be learned.
\subsection*{The DYNA}
\subsection*{The prioritized sweep}
\subsection*{The successor}
\end{multicols}
\newpage
\bibliography{library}
\end{document}
{ "alphanum_fraction": 0.6879449177, "avg_line_length": 35.5560165975, "ext": "tex", "hexsha": "5d23ac3a0adf35fa8831137ed919b3b63b82079c", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2018-10-29T15:45:54.000Z", "max_forks_repo_forks_event_min_datetime": "2018-09-12T00:40:52.000Z", "max_forks_repo_head_hexsha": "d1498069dd8856e93ae077b34dd7c9f1c7ce80e6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CoAxLab/azad", "max_forks_repo_path": "notes/rl.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d1498069dd8856e93ae077b34dd7c9f1c7ce80e6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CoAxLab/azad", "max_issues_repo_path": "notes/rl.tex", "max_line_length": 251, "max_stars_count": 6, "max_stars_repo_head_hexsha": "d1498069dd8856e93ae077b34dd7c9f1c7ce80e6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CoAxLab/azad", "max_stars_repo_path": "notes/rl.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-28T17:36:52.000Z", "max_stars_repo_stars_event_min_datetime": "2018-09-11T21:06:12.000Z", "num_tokens": 2416, "size": 8569 }
\subsection*{\href{https://source-academy.github.io/sicp/chapters/2.3.1.html}{Strings}} Strings are of the form \texttt{"}$ \textit{double-quote-characters} $\texttt{"}, where $\textit{double-quote-characters}$ is a possibly empty sequence of characters without the character \texttt{"}, and of the form \texttt{'}$ \textit{single-quote-characters} $\texttt{'}, where $\textit{single-quote-characters}$ is a possibly empty sequence of characters without the character~\texttt{'},
{ "alphanum_fraction": 0.7474120083, "avg_line_length": 48.3, "ext": "tex", "hexsha": "e1fd6689280dc9bd3c038bf49e91e5f808084c55", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "06285558a7a91bd350a49cc8d582cc49b3090791", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "dtlay/js-slang", "max_forks_repo_path": "docs/specs/source_strings.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "06285558a7a91bd350a49cc8d582cc49b3090791", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "dtlay/js-slang", "max_issues_repo_path": "docs/specs/source_strings.tex", "max_line_length": 91, "max_stars_count": null, "max_stars_repo_head_hexsha": "06285558a7a91bd350a49cc8d582cc49b3090791", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dtlay/js-slang", "max_stars_repo_path": "docs/specs/source_strings.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 133, "size": 483 }
\section{Linear Support Vector Classifier}
\label{section:svc}
Given $n$ training points, where each input $x_i$ has $m$ attributes, i.e., is of dimensionality $m$, and is in one of two classes $y_i=\pm1$, i.e., our training data is of the form:
\begin{equation}
\{(x_i,y_i), x_i\in\Re^m, y_i=\pm1, i=1, \dots, n\}
\label{eq:svc_data}
\end{equation}
For simplicity, we first assume that the data are (possibly not fully) linearly separable in the input space $x$, meaning that we can draw a line separating the two classes when $m=2$, a plane for $m=3$ and, more generally, a hyperplane for an arbitrary $m$. Support vectors are the examples closest to the separating hyperplane, and the aim of support vector machines is to orient this hyperplane so that it is as far as possible from the closest members of both classes, i.e., we need to maximize this margin. This hyperplane is represented by the equation $w^T x + b=0$. So, we need to find $w$ and $b$ so that our training data can be described by:
\begin{equation}
\label{eq:svc_consts}
\begin{aligned}
& w^T x_i + b \geq +1 - \xi_i, \forall y_i=+1 \\
& w^T x_i + b \leq -1 + \xi_i, \forall y_i=-1 \\
& \xi_i \geq 0 \ \forall_i
\end{aligned}
\end{equation}
where the positive slack variables $\xi_i$ are introduced to allow misclassified points. In this way data points on the incorrect side of the margin boundary will have a penalty that increases with the distance from it. These two equations can be combined into:
\begin{equation}
\label{eq:svc_const}
\begin{aligned}
& y_i (w^T x_i + b) \geq 1 - \xi_i \ \forall_i \\
& \xi_i\geq 0 \ \forall_i
\end{aligned}
\end{equation}
The margin is equal to $\displaystyle \frac{1}{\| w \|}$ and maximizing it subject to the constraint in~\eqref{eq:svc_const}, while trying to reduce the number of misclassifications, is equivalent to finding:
\begin{equation}
\label{eq:svc_obj}
\begin{aligned}
\min_{w,b,\xi} \quad & \| w \| + C \sum_{i=1}^n \xi_i \\
\text{subject to} \quad & y_i (w^T x_i + b) \geq 1 - \xi_i \ \forall_i \\
& \xi_i \geq 0 \ \forall_i
\end{aligned}
\end{equation}
Minimizing $\| w \|$ is equivalent to minimizing $\displaystyle \frac{1}{2} \| w \|^2$, but in this form we will deal with a 1-strongly convex regularization term that has more desirable convergence properties. So we need to find:
\begin{equation}
\label{eq:quad_svc_obj}
\begin{aligned}
\min_{w,b,\xi} \quad & \frac{1}{2} \| w \|^2 + C \sum_{i=1}^n \xi_i \\
\text{subject to} \quad & y_i (w^T x_i + b) \geq 1 - \xi_i \ \forall_i \\
& \xi_i \geq 0 \ \forall_i
\end{aligned}
\end{equation}
where the parameter $C$ controls the trade-off between the slack variable penalty and the size of the margin.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{img/linear_dual_l1_svc_hyperplane}
\caption{Linear SVC hyperplane}
\label{fig:linear_dual_l1_svc_hyperplane}
\end{figure}
\pagebreak
\subsection{Hinge loss}
The \emph{hinge} loss is defined as:
\begin{equation}
\label{eq:hinge_loss1}
\mathcal{L}_1 = \max(0, 1 - y (w^T x + b))
\end{equation}
or, equivalently:
\begin{equation}
\label{eq:hinge_loss2}
\mathcal{L}_1 =
\begin{cases}
0 & \text{if} \ y (w^T x + b) \geq 1 \\
1 - y (w^T x + b) & \text{otherwise} \\
\end{cases}
\end{equation}
and it is a nondifferentiable convex function due to its nonsmoothness at 1, but has a subgradient that is given by:
\begin{equation}
\label{eq:hinge_loss_der}
\partial_w \mathcal{L}_1=
\begin{cases}
-y x & \text{if} \ y (w^T x + b) < 1 \\
0 & \text{otherwise} \\
\end{cases}
\end{equation}
\subsubsection{Primal formulation}
The general primal unconstrained formulation takes the form:
\begin{equation}
\label{eq:primal_svm}
\min_{w,b} \frac{1}{2} \| w \|^2 + C \sum_{i=1}^n \mathcal{L}(w,b;x_i,y_i)
\end{equation}
where $\displaystyle \frac{1}{2} \| w \|^2$ is the \emph{regularization term} and $\mathcal{L}(w,b;x_i,y_i)$ is the \emph{loss function} associated with the observation $(x_i,y_i)$~\cite{piccialli2018nonlinear}. The quadratic optimization problem~\eqref{eq:quad_svc_obj} can be equivalently formulated as:
\begin{equation}
\label{eq:primal_l1_svc}
\min_{w,b} \frac{1}{2} \| w \|^2 + C \sum_{i=1}^n \max(0, 1 - y_i (w^T x_i + b))
\end{equation}
where we make use of the \emph{hinge} loss~\eqref{eq:hinge_loss1} or~\eqref{eq:hinge_loss2}. The above formulation penalizes slacks $\xi$ linearly and is called $\mathcal{L}_1$-SVC.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{img/l1_svc_loss}
\caption{Hinge loss with different optimization steps}
\label{fig:l1_svc_loss}
\end{figure}
To simplify the notation, and hence also the design of the algorithms, the simplest approach to learn the bias term $b$ is to include it in the \emph{regularization term}; so we can rewrite~\eqref{eq:primal_svm} as follows:
\begin{equation}
\label{eq:reg_bias_primal_svm1}
\min_{w,b} \frac{1}{2} (\| w \|^2 + b^2) + C \sum_{i=1}^n \mathcal{L}(w,b;x_i,y_i)
\end{equation}
or, equivalently, by augmenting the weight vector $w$ with the bias term $b$ and each instance $x_i$ with an additional dimension, i.e., with constant value equal to 1:
\begin{equation}
\label{eq:reg_bias_primal_svm2}
\begin{aligned}
\min_{w} \quad & \frac{1}{2} \| \hat{w} \|^2 + C \sum_{i=1}^n \mathcal{L}(\hat{w};\hat{x}_i,y_i) \\
\text{where} \quad & \hat{w}^T = [w^T, b] \\
& \hat{x}_i^T = [x_i^T, 1]
\end{aligned}
\end{equation}
with the advantage that the objective function retains the convexity properties useful for convergence analysis, and that algorithms designed for models without the bias term can be applied directly. In the specific case of the $\mathcal{L}_1$-SVC the objective~\eqref{eq:primal_l1_svc} becomes:
\begin{equation}
\label{eq:reg_bias_primal_l1_svc}
\min_{w,b} \frac{1}{2} (\| w \|^2 + b^2) + C \sum_{i=1}^n \max(0, 1 - y_i (w^T x_i + b))
\end{equation}
Note that in terms of numerical optimization the formulation~\eqref{eq:primal_l1_svc} is not equivalent to~\eqref{eq:reg_bias_primal_l1_svc}, since in the first one the bias term $b$ does not contribute to the \emph{regularization term}, so the SVM formulation is based on an unregularized bias term $b$, as highlighted by \emph{statistical learning theory}.
But, from a machine learning point of view, numerical experiments in~\cite{hsu2002simple} show that the accuracy does not vary much when the bias term $b$ is embedded into the weight vector $w$.
\subsubsection{Wolfe dual formulation}
To reformulate~\eqref{eq:quad_svc_obj} as a \emph{Wolfe dual}, we need to allocate the Lagrange multipliers $\alpha_i, \mu_i \geq 0 \ \forall_i$:
\begin{equation}
\label{eq:svc_wolfe_dual}
\max_{\alpha,\mu} \min_{w,b,\xi} \mathcal{W}(w,b,\xi,\alpha,\mu) = \frac{1}{2} \| w \|^2 + C \sum_{i=1}^n \xi_i-\sum_{i=1}^n \alpha_i(y_i(w^T x_i + b)-1+\xi_i)-\sum_{i=1}^n\mu_i\xi_i
\end{equation}
We wish to find the $w$, $b$ and $\xi_i$ which minimize, and the $\alpha$ and $\mu$ which maximize $\mathcal{W}$, provided $\alpha_i\geq 0, \mu_i \geq 0 \ \forall_i$. We can do this by differentiating $\mathcal{W}$ wrt $w$, $b$ and $\xi_i$ and setting the derivatives to 0:
\begin{equation}
\label{eq:svc_wolfe_der_w}
\frac{\partial \mathcal{W}}{\partial w}=w-\sum_{i=1}^n \alpha_i y_i x_i \Rightarrow w=\sum_{i=1}^n \alpha_i y_i x_i
\end{equation}
\begin{equation}
\label{eq:svc_wolfe_der_b}
\frac{\partial \mathcal{W}}{\partial b}=-\sum_{i=1}^n \alpha_i y_i\Rightarrow\sum_{i=1}^n \alpha_i y_i=0
\end{equation}
\begin{equation}
\label{eq:svc_wolfe_der_xi}
\frac{\partial \mathcal{W}}{\partial\xi_i}=0\Rightarrow C=\alpha_i+\mu_i
\end{equation}
Substituting~\eqref{eq:svc_wolfe_der_w} and~\eqref{eq:svc_wolfe_der_b} into~\eqref{eq:svc_wolfe_dual}, together with~\eqref{eq:svc_wolfe_der_xi} and $\mu_i\geq 0 \ \forall_i$, which implies that $\alpha_i\leq C$, gives a new formulation that depends only on $\alpha$. We therefore need to find:
\begin{equation}
\label{eq:svc_max_wolfe_dual}
\begin{aligned}
\max_{\alpha} \mathcal{W}(\alpha) &= \sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j \langle x_i, x_j \rangle \\
&= \sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i Q_{ij}\alpha_j \ \text{where} \ Q_{ij} = y_i y_j \langle x_i, x_j \rangle \\
&= \sum_{i=1}^n \alpha_i - \frac{1}{2}\alpha^T Q\alpha \ \text{subject to} \ 0\leq\alpha_i\leq C \ \forall_i, \sum_{i=1}^n \alpha_i y_i=0
\end{aligned}
\end{equation}
or, equivalently:
\begin{equation}
\label{eq:svc_min_wolfe_dual}
\begin{aligned}
\min_{\alpha} \quad & \frac{1}{2}\alpha^T Q\alpha+q^T\alpha \\
\text{subject to} \quad & 0\leq\alpha_i\leq C \ \forall_i \\
& y^T\alpha=0
\end{aligned}
\end{equation}
where $q^T = [-1, \dots, -1]$. By solving~\eqref{eq:svc_min_wolfe_dual} we will know $\alpha$ and, from~\eqref{eq:svc_wolfe_der_w}, we will get $w$, so we only need to calculate $b$. We know that any data point satisfying~\eqref{eq:svc_wolfe_der_b} which is a support vector $x_s$ will have the form:
\begin{equation}
\label{eq:svc_sv_const1}
y_s(w^T x_s + b)=1
\end{equation}
and, by substituting in~\eqref{eq:svc_wolfe_der_w}, we get:
\begin{equation}
\label{eq:svc_sv_const2}
y_s\big(\sum_{m\in S}\alpha_m y_m \langle x_m, x_s \rangle +b\big)=1
\end{equation}
where $S$ denotes the set of indices of the support vectors and is determined by finding the indices $i$ where $\alpha_i>0$, i.e., nonzero Lagrange multipliers.
Multiplying through by $y_s$ and then using $y_s^2=1$ from~\eqref{eq:svc_consts}:
\begin{equation}
\label{eq:svc_sv_squared_const2}
y_s^2\big(\sum_{m\in S}\alpha_m y_m \langle x_m, x_s \rangle +b\big)=y_s
\end{equation}
\begin{equation}
\label{eq:svc_b}
b=y_s-\sum_{m\in S}\alpha_m y_m \langle x_m, x_s \rangle
\end{equation}
Instead of using an arbitrary support vector $x_s$, it is better to take an average over all of the support vectors in $S$:
\begin{equation}
\label{eq:svc_b_avg}
b=\frac{1}{N_s}\sum_{s\in S}\Big(y_s-\sum_{m\in S}\alpha_m y_m \langle x_m, x_s \rangle\Big)
\end{equation}
We now have the variables $w$ and $b$ that define our separating hyperplane's optimal orientation and hence our support vector machine. Each new point $x'$ is classified by evaluating:
\begin{equation}
\label{eq:svc_pred}
y'=\operatorname{sign}\big(\sum_{i=1}^n\alpha_i y_i\langle x_i, x' \rangle+b\big)
\end{equation}
From~\eqref{eq:svc_min_wolfe_dual} we can notice that the equality constraint $y^T \alpha = 0$ arises from the stationarity condition $\partial_{{b}} \mathcal{W}=0$. So, again, for simplicity, we can consider the bias term $b$ embedded into the weight vector. We report below the box-constrained dual formulation~\cite{hsu2002simple} that arises from the primal~\eqref{eq:reg_bias_primal_svm1} or~\eqref{eq:reg_bias_primal_svm2} where the bias term $b$ is embedded into the weight vector $w$:
\begin{equation}
\label{eq:svc_min_bcqp_wolf_dual}
\begin{aligned}
\min_{\alpha} \quad & \frac{1}{2} \alpha^T (Q + yy^T)\alpha+q^T\alpha \\
\text{subject to} \quad & 0\leq\alpha_i\leq C \ \forall_i
\end{aligned}
\end{equation}
\subsubsection{Lagrangian dual formulation}
In order to relax the constraints in the \emph{Wolfe dual} formulation~\eqref{eq:svc_min_wolfe_dual} we define the problem as a \emph{Lagrangian dual} relaxation by embedding them into the objective function, so we need to allocate the Lagrange multipliers $\mu$ and $\lambda_+, \lambda_- \geq 0$:
\begin{equation}
\label{eq:l1_svc_lagrangian_dual}
\begin{aligned}
\max_{\mu,\lambda_+,\lambda_-} \min_{\alpha} \mathcal{L}(\alpha,\mu,\lambda_+,\lambda_-) &= \frac{1}{2} \alpha^T Q\alpha+q^T\alpha + \mu^T (y^T \alpha) + \lambda_+^T (ub - \alpha) - \lambda_-^T \alpha \\
&= \frac{1}{2} \alpha^T Q\alpha + (q + \mu y^T + \lambda_+ - \lambda_-)^T \alpha + \lambda_+^T ub \\
\text{subject to} \quad & \,\, \lambda_+, \lambda_- \geq 0
\end{aligned}
\end{equation}
where the upper bound $ub^T = [C, \dots, C]$. Taking the derivative of the Lagrangian $\mathcal{L}$ wrt $\alpha$ and setting it to 0 gives:
\begin{equation}
\label{eq:svc_lagrangian_der_a}
\frac{\partial \mathcal{L}}{\partial \alpha}=0\Rightarrow Q \alpha + (q + \mu y^T + \lambda_+ - \lambda_-) = 0
\end{equation}
With $\alpha$ the optimal solution of the linear system:
\begin{equation}
\label{eq:l1_svc_lagrangian_sol}
Q \alpha = - (q + \mu y^T + \lambda_+ - \lambda_-)
\end{equation}
the gradients wrt $\mu$, $\lambda_+$ and $\lambda_-$ are:
\begin{equation}
\label{eq:svc_lagrangian_der_mu}
\frac{\partial \mathcal{L}}{\partial \mu}=-y \alpha
\end{equation}
\begin{equation}
\label{eq:svc_lagrangian_der_lp}
\frac{\partial \mathcal{L}}{\partial \lambda_+}=\alpha - ub
\end{equation}
\begin{equation}
\label{eq:svc_lagrangian_der_lm}
\frac{\partial \mathcal{L}}{\partial \lambda_-}=-\alpha
\end{equation}
From~\eqref{eq:svc_min_wolfe_dual} we can notice that the equality constraint $y^T \alpha = 0$ arises from the stationarity condition $\partial_{{b}} \mathcal{W}=0$.
So, again, for simplicity, we can consider the bias term $b$ embedded into the weight vector. In this way the dimensionality of~\eqref{eq:l1_svc_lagrangian_dual} is reduced by removing the multiplier $\mu$ which was allocated to control the equality constraint $y^T \alpha=0$, so we will end up solving exactly the problem~\eqref{eq:svc_min_bcqp_wolf_dual}.
\begin{equation}
\label{eq:l1_svc_bcqp_lagrangian_dual}
\begin{aligned}
\max_{\lambda_+,\lambda_-} \min_{\alpha} \mathcal{L}(\alpha,\lambda_+,\lambda_-) &= \frac{1}{2} \alpha^T (Q + yy^T)\alpha+q^T\alpha + \lambda_+^T (ub - \alpha) - \lambda_-^T \alpha \\
&= \frac{1}{2} \alpha^T (Q + yy^T)\alpha + (q + \lambda_+ - \lambda_-)^T \alpha + \lambda_+^T ub \\
\text{subject to} \quad & \,\, \lambda_+, \lambda_- \geq 0
\end{aligned}
\end{equation}
where, again, the upper bound $ub^T = [C, \dots, C]$. Now, taking the derivative of the Lagrangian $\mathcal{L}$ wrt $\alpha$ and setting it to 0 gives:
\begin{equation}
\label{eq:l1_svc_bcqp_lagrangian_der_a}
\frac{\partial \mathcal{L}}{\partial \alpha}=0\Rightarrow (Q + yy^T) \alpha + (q + \lambda_+ - \lambda_-) = 0
\end{equation}
With $\alpha$ the optimal solution of the linear system:
\begin{equation}
\label{eq:l1_svc_bcqp_lagrangian_sol}
(Q + yy^T) \alpha = - (q + \lambda_+ - \lambda_-)
\end{equation}
the gradients wrt $\lambda_+$ and $\lambda_-$ are:
\begin{equation}
\label{eq:l1_svc_bcqp_lagrangian_der_lp}
\frac{\partial \mathcal{L}}{\partial \lambda_+}=\alpha - ub
\end{equation}
\begin{equation}
\label{eq:l1_svc_bcqp_lagrangian_der_lm}
\frac{\partial \mathcal{L}}{\partial \lambda_-}=-\alpha
\end{equation}
\bigskip
Note that since the Hessian matrix $Q$ of the $\mathcal{L}_1$-SVC is not strictly positive definite, the Lagrangian function is not strictly convex: it is linear along the eigenvectors corresponding to the null eigenvalues and so it is unbounded below. The Lagrangian dual relaxation, i.e.,~\ref{eq:l1_svc_lagrangian_sol} and~\ref{eq:l1_svc_bcqp_lagrangian_sol}, will therefore be nondifferentiable, so it will have infinitely many solutions, and each of them yields a different subgradient.
In order to compute an approximation of the gradient, we will choose the $\alpha$ that minimizes the 2-norm of the residual, since the resulting subgradient behaves almost like the gradient:
\begin{equation}
\label{eq:svc_lagrangian_krylov_sol}
\min_{\alpha_n \in K_n(Q, b)} \| Q \alpha_n - b \|
\end{equation}
Since we are dealing with a symmetric system, we will choose a well-known Krylov method that performs the Lanczos iteration, i.e., the symmetric Arnoldi iteration, called \emph{minres}, i.e., the symmetric counterpart of \emph{gmres}, to compute the vector $\alpha_n$ that minimizes the norm of the residual $r_n = Q \alpha_n - b$ among all vectors in $K_n(Q, b) = span(b, Qb, Q^2b, \dots, Q^{n-1}b)$.
\bigskip
Since linear algebra methods are crucial in the ML context, and in order to keep the per-iteration cost equal to that of the other algorithms described later so as to provide a coherent comparison at the end, we will solve it with a primal-dual optimization method and modify its definition by adding a strictly convex augmentation term, i.e., a penalty term, in order to improve the actual convergence of the algorithms.
So, if we consider a general quadratic optimization problem subject to linear constraints, i.e., equality and inequality constraints, defined as:
\begin{equation}
\begin{aligned}
\min_{\alpha} \quad & \frac{1}{2} \alpha^T Q \alpha + q^T \alpha \\
\textrm{subject to} \quad & A \alpha = b \\
& G \alpha \leq h \\
& lb \leq \alpha \leq ub
\end{aligned}
\end{equation}
or, equivalently:
\begin{equation}
\begin{aligned}
\min_{\alpha} \quad & \frac{1}{2} \alpha^T Q \alpha + q^T \alpha \\
\textrm{subject to} \quad & A \alpha = b \\
& \hat{G} \alpha \leq \hat{h}
\end{aligned}
\end{equation}
where $\hat{G} = \begin{bmatrix} G \\ -I \\ I \end{bmatrix}$ and $\hat{h} = \begin{bmatrix} h & -lb & ub \end{bmatrix}$; we give the following \emph{augmented Lagrangian dual}:
\begin{equation}
\label{eq:svc_gen_aug_lagrangian_dual}
\begin{aligned}
\max_{\mu,\lambda} \min_{\alpha} \quad & \frac{1}{2} \alpha^T Q \alpha + q^T \alpha + \mu^T (A \alpha - b) + \lambda^T (\hat{G} \alpha - \hat{h}) + \frac{\rho}{2} \| A \alpha - b \|^2 + \frac{\rho}{2} \| \hat{G} \alpha - \hat{h} \|^2 \\
\text{subject to} \quad & \lambda \geq 0
\end{aligned}
\end{equation}
with $\rho > 0$.
\bigskip
According to this definition, we change the formulation~\ref{eq:l1_svc_lagrangian_dual} to:
\begin{equation}
\label{eq:l1_svc_aug_lagrangian_dual}
\begin{aligned}
\max_{\mu,\lambda} \min_{\alpha} \mathcal{L}(\alpha,\mu,\lambda) &= \frac{1}{2} \alpha^T Q\alpha+q^T\alpha + \mu^T (y^T \alpha) + \lambda^T (\hat{G} \alpha - \hat{h}) + \frac{\rho}{2} \| y^T \alpha \|^2 + \frac{\rho}{2} \| \hat{G} \alpha - \hat{h} \|^2 \\
\text{subject to} \quad & \,\, \lambda \geq 0
\end{aligned}
\end{equation}
and the formulation~\ref{eq:l1_svc_bcqp_lagrangian_dual} to:
\begin{equation}
\label{eq:l1_svc_bcqp_aug_lagrangian_dual}
\begin{aligned}
\max_{\lambda} \min_{\alpha} \mathcal{L}(\alpha,\lambda) &= \frac{1}{2} \alpha^T (Q + yy^T) \alpha + q^T \alpha + \lambda^T (\hat{G} \alpha - \hat{h}) + \frac{\rho}{2} \| \hat{G} \alpha - \hat{h} \|^2 \\
\text{subject to} \quad & \,\, \lambda \geq 0
\end{aligned}
\end{equation}
where $\hat{G} = \begin{bmatrix} -I \\ I \end{bmatrix}$ and $\hat{h} = \begin{bmatrix} -lb & ub \end{bmatrix}$ with $lb^T = [0, \dots, 0]$, $ub^T = [C, \dots, C]$ and $\rho > 0$.
\pagebreak
\subsection{Squared hinge loss}
The \emph{squared hinge} loss is defined as:
\begin{equation}
\label{eq:squared_hinge_loss2}
\mathcal{L}_2 = \max(0, 1 - y (w^T x + b))^2
\end{equation}
or, equivalently:
\begin{equation}
\label{eq:squared_hinge_loss1}
\mathcal{L}_2 =
\begin{cases}
0 & \text{if} \ y (w^T x + b) \geq 1 \\
(1 - y (w^T x + b))^2 & \text{otherwise} \\
\end{cases}
\end{equation}
It is a convex and differentiable function and its gradient is given by:
\begin{equation}
\label{eq:squared_hinge_loss_der}
\nabla_w \mathcal{L}_2=
\begin{cases}
- 2 \max(0, 1 - y (w^T x + b)) y x & \text{if} \ y (w^T x + b) < 1 \\
0 & \text{otherwise} \\
\end{cases}
\end{equation}
\subsubsection{Primal formulation}
Since smoothed versions of objective functions may be preferred for optimization, we can reformulate~\eqref{eq:primal_l1_svc} as:
\begin{equation}
\label{eq:primal_l2_svc}
\min_{w,b} \frac{1}{2} \| w \|^2 + C \sum_{i=1}^n \max(0, 1 - y_i (w^T x_i + b))^2
\end{equation}
where we make use of the \emph{squared hinge} loss that quadratically penalizes slacks $\xi$ and is called $\mathcal{L}_2$-SVC.
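To make the difference with the linear penalty concrete, consider a hypothetical point with $y = +1$: for a small margin violation, $w^T x + b = 0.5$, the two losses give
\begin{equation*}
\mathcal{L}_1 = \max(0, 1 - 0.5) = 0.5, \qquad \mathcal{L}_2 = \max(0, 1 - 0.5)^2 = 0.25,
\end{equation*}
while for a misclassified point with $w^T x + b = -1$ they give $\mathcal{L}_1 = 2$ and $\mathcal{L}_2 = 4$, so the squared hinge loss penalizes small violations less and large violations more than the hinge loss.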
The $\mathcal{L}_2$-SVC objective~\eqref{eq:primal_l2_svc} can be rewritten in form~\eqref{eq:reg_bias_primal_svm1} or~\eqref{eq:reg_bias_primal_svm2} as:

\begin{equation} \label{eq:reg_bias_primal_l2_svc}
\min_{w,b} \frac{1}{2} (\| w \|^2 + b^2) + C \sum_{i=1}^n \max(0, 1 - y_i (w^T x_i + b))^2
\end{equation}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{img/l2_svc_loss}
\caption{Squared hinge loss with different optimization steps}
\label{fig:l2_svc_loss}
\end{figure}

\subsubsection{Wolfe dual formulation}

As done for the $\mathcal{L}_1$-SVC, we can derive the \emph{Wolfe dual} formulation of the $\mathcal{L}_2$-SVC, obtaining:

\begin{equation} \label{eq:wolfe_dual_l2_svc}
\begin{aligned}
	\min_{\alpha} \quad & \frac{1}{2}\alpha^T (Q + D)\alpha+q^T\alpha \\
	\text{subject to} \quad & \alpha_i\geq 0 \ \forall_i \\
	& y^T\alpha=0
\end{aligned}
\end{equation}

or, alternatively, with the regularized bias term:

\begin{equation} \label{eq:reg_bias_wolfe_dual_l2_svc}
\begin{aligned}
	\min_{\alpha} \quad & \frac{1}{2}\alpha^T (Q + yy^T + D) \alpha + q^T \alpha \\
	\text{subject to} \quad & \alpha_i \geq 0 \ \forall_i
\end{aligned}
\end{equation}

where the diagonal matrix $\displaystyle D_{ii} = \frac{1}{2C} \ \forall_i$.

\subsubsection{Lagrangian dual formulation}

In order to relax the constraints in the $\mathcal{L}_2$-SVC \emph{Wolfe dual} formulation~\eqref{eq:wolfe_dual_l2_svc}, we define the problem as a \emph{Lagrangian dual} relaxation by embedding them into the objective function, so we need to allocate the Lagrange multipliers $\mu$ and $\lambda \geq 0$:

\begin{equation} \label{eq:l2_svc_lagrangian_dual}
\begin{aligned}
	\max_{\mu,\lambda} \min_{\alpha} \mathcal{L}(\alpha,\mu,\lambda) &= \frac{1}{2} \alpha^T (Q+D)\alpha+q^T\alpha + \mu^T (y^T \alpha) - \lambda^T \alpha \\
	&= \frac{1}{2} \alpha^T (Q+D)\alpha + (q + \mu y^T - \lambda)^T \alpha \\
	\text{subject to} \quad & \,\, \lambda \geq 0
\end{aligned}
\end{equation}

Taking the derivative of the Lagrangian $\mathcal{L}$ wrt $\alpha$ and setting it to 0 gives:

\begin{equation} \label{eq:l2_svc_lagrangian_der_a}
\frac{\partial \mathcal{L}}{\partial \alpha}=0\Rightarrow (Q+D) \alpha + (q + \mu y^T - \lambda) = 0
\end{equation}

With $\alpha$ the optimal solution of the linear system:

\begin{equation} \label{eq:l2_svc_lagrangian_sol}
(Q+D) \alpha = - (q + \mu y^T - \lambda)
\end{equation}

the gradients wrt $\mu$ and $\lambda$ are:

\begin{equation} \label{eq:l2_svc_lagrangian_der_mu}
\frac{\partial \mathcal{L}}{\partial \mu}=-y \alpha
\end{equation}

\begin{equation} \label{eq:l2_svc_lagrangian_der_lambda}
\frac{\partial \mathcal{L}}{\partial \lambda}=-\alpha
\end{equation}

From~\eqref{eq:svc_min_wolfe_dual} we can notice that the equality constraint $y^T \alpha = 0$ arises from the stationarity condition $\partial_{{b}} \mathcal{W}=0$. Again, for simplicity, we can consider the bias term $b$ embedded into the weight vector. In this way the dimensionality of~\eqref{eq:l2_svc_lagrangian_dual} is reduced by removing the multiplier $\mu$ that was allocated to control the equality constraint $y^T \alpha=0$, so we will end up solving exactly the problem~\eqref{eq:reg_bias_wolfe_dual_l2_svc}.
\begin{equation} \label{eq:l2_svc_lb_lagrangian_dual}
\begin{aligned}
	\max_{\lambda} \min_{\alpha} \mathcal{L}(\alpha,\lambda) &= \frac{1}{2} \alpha^T (Q + yy^T + D) \alpha+q^T\alpha - \lambda^T \alpha \\
	&= \frac{1}{2} \alpha^T (Q + yy^T + D) \alpha + (q - \lambda)^T \alpha \\
	\text{subject to} \quad & \,\, \lambda \geq 0
\end{aligned}
\end{equation}

Now, taking the derivative of the Lagrangian $\mathcal{L}$ wrt $\alpha$ and setting it to 0 gives:

\begin{equation} \label{eq:l2_svc_lb_lagrangian_der_a}
\frac{\partial \mathcal{L}}{\partial \alpha}=0\Rightarrow (Q + yy^T + D) \alpha + (q - \lambda) = 0
\end{equation}

With $\alpha$ the optimal solution of the linear system:

\begin{equation} \label{eq:l2_svc_lb_lagrangian_sol}
(Q + yy^T + D) \alpha = - (q - \lambda)
\end{equation}

the gradient wrt $\lambda$ is:

\begin{equation} \label{eq:l2_svc_lb_lagrangian_der_l}
\frac{\partial \mathcal{L}}{\partial \lambda}=-\alpha
\end{equation}

\bigskip

Note that the Hessian matrix $Q + D$ (resp. $Q + yy^T + D$) of the $\mathcal{L}_2$-SVC dual is symmetric and strictly positive definite, so we can find the unique solution of the Lagrangian dual relaxation, i.e.,~\ref{eq:l2_svc_lagrangian_sol} and~\ref{eq:l2_svc_lb_lagrangian_sol}, by solving the system with the Cholesky factorization.

\bigskip

Since linear algebra methods are crucial in the ML context, and in order to keep the per-iteration cost equal to that of the other algorithms described later so as to provide a coherent comparison of all of them at the end, we will also solve this problem with a primal-dual optimization method, modifying its definition by adding a strictly convex augmentation term, i.e., a penalty term, in order to improve the actual convergence of the algorithms. So, if we consider a general quadratic optimization problem subject to linear constraints, i.e., equality and inequality constraints, defined as:

\begin{equation}
\begin{aligned}
	\min_{\alpha} \quad & \frac{1}{2} \alpha^T Q \alpha + q^T \alpha \\
	\textrm{subject to} \quad & A \alpha = b \\
	& G \alpha \leq h \\
	& lb \leq \alpha \leq ub
\end{aligned}
\end{equation}

or, equivalently:

\begin{equation}
\begin{aligned}
	\min_{\alpha} \quad & \frac{1}{2} \alpha^T Q \alpha + q^T \alpha \\
	\textrm{subject to} \quad & A \alpha = b \\
	& \hat{G} \alpha \leq \hat{h}
\end{aligned}
\end{equation}

where $\hat{G} = \begin{bmatrix} G \\ -I \\ I \end{bmatrix}$ and $\hat{h} = \begin{bmatrix} h \\ -lb \\ ub \end{bmatrix}$; we give the following \emph{augmented Lagrangian dual}:

\begin{equation} \label{eq:l2_svc_gen_aug_lagrangian_dual}
\begin{aligned}
	\max_{\mu,\lambda} \min_{\alpha} \quad & \frac{1}{2} \alpha^T Q \alpha + q^T \alpha + \mu^T (A \alpha - b) + \lambda^T (\hat{G} \alpha - \hat{h}) + \frac{\rho}{2} \| A \alpha - b \|^2 + \frac{\rho}{2} \| \hat{G} \alpha - \hat{h} \|^2 \\
	\text{subject to} \quad & \lambda \geq 0
\end{aligned}
\end{equation}

with $\rho > 0$.
\bigskip According to this definition, we change the formulation~\ref{eq:l2_svc_lagrangian_dual} as: \begin{equation} \label{eq:l2_svc_aug_lagrangian_dual} \begin{aligned} \max_{\mu,\lambda} \min_{\alpha} \mathcal{L}(\alpha,\mu,\lambda) &= \frac{1}{2} \alpha^T (Q + D) \alpha+q^T\alpha + \mu^T (y^T \alpha) + \lambda^T (\hat{G} \alpha - \hat{h}) + \frac{\rho}{2} \| y^T \alpha \|^2 + \frac{\rho}{2} \| \hat{G} \alpha - \hat{h} \|^2 \\ \text{subject to} \quad & \,\, \lambda \geq 0 \end{aligned} \end{equation} and the formulation~\ref{eq:l2_svc_lb_lagrangian_dual} as: \begin{equation} \label{eq:l2_svc_lb_aug_lagrangian_dual} \begin{aligned} \max_{\lambda} \min_{\alpha} \mathcal{L}(\alpha,\lambda) &= \frac{1}{2} \alpha^T (Q + yy^T + D) \alpha + q^T \alpha + \lambda^T (\hat{G} \alpha - \hat{h}) + \frac{\rho}{2} \| \hat{G} \alpha - \hat{h} \|^2 \\ \text{subject to} \quad & \,\, \lambda \geq 0 \end{aligned} \end{equation} where $\hat{G} = \begin{bmatrix} -I \end{bmatrix}$ and $\hat{h} = \begin{bmatrix} -lb \end{bmatrix}$ with $lb^T = [0, \dots, 0]$ and $\rho > 0$.
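A minimal sketch of one step of such a scheme for the reduced formulation~\eqref{eq:l2_svc_lb_lagrangian_dual} is shown below, exploiting the Cholesky factorization mentioned above (the step size, the helper name and the plain projected-ascent update are illustrative assumptions, not the exact algorithm developed later).

\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def l2_svc_dual_ascent_step(Q, q, y, D, lam, step_size=1e-2):
    """One projected gradient-ascent step on lambda for the reduced L2-SVC dual:
    solve (Q + y y^T + D) alpha = -(q - lambda), then move lambda along
    dL/dlambda = -alpha and project back onto lambda >= 0."""
    H = Q + np.outer(y, y) + D            # symmetric positive definite dual Hessian
    factor = cho_factor(H)                # in a real solver, factorize once and reuse
    alpha = cho_solve(factor, -(q - lam))
    lam_new = np.maximum(0.0, lam + step_size * (-alpha))
    return alpha, lam_new
\end{verbatim}

Since the matrix $Q + yy^T + D$ does not depend on $\lambda$, the factorization can of course be computed once and reused at every iteration.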
\documentclass{article}

\title{Complexity Measures}
\author{Vladimir Feinberg}

\input{../defs}

\begin{document}

\maketitle

Complexity measures evaluate the expressiveness of a hypothesis class; they are useful to the extent to which they relate sample and generalization error.

\section{Setup}

We suppose that our data comes in the form of ordered pairs from $\mcX\times \mcY$. Samples follow a particular distribution $(x, y)\sim D$. A hypothesis class $\mcH$ is a set of functions $\mcX\rightarrow\mcY$. A common approach to supervised learning is ERM, where $m$ iid samples from $D$, $S$, are used to find the $h\in\mcH$ minimizing a specified loss $\ell:\mcY^2\rightarrow\R$ over this set. Complexity measures then let us quantify exactly how much loss we can expect when sampling from $D$ again.

We seek to quantify the generalization gap with the help of our notions of complexity. For a fixed $h\in \mcH$:
$$
\varepsilon= \E\left[\ell\pa{h(x), y}|(x,y)\sim D\right]-\E\left[\ell\pa{h(x), y}|(x,y)\sim \Uniform(S)\right]
$$
Analysis of Rademacher complexity is agnostic to $h,\ell$; the hypothesis class might as well consist of functions $g:\mcX\times\mcY\rightarrow\R$ yielding their composition. VC dimension analysis, however, requires $\mcY=\{0, 1\}$ and $\ell(a, b)=\indicator\{a=b\}$. VC dimension is still useful for regression problems, by thresholding hypotheses $h\mapsto \indicator{h>\beta}$ for fixed $\beta$.\footnote{\url{https://stats.stackexchange.com/questions/140430}}

Thus, it is useful to find bounds on $\varepsilon$: the gap between the generalization loss $\E\left[\ell\pa{h(x), y}\right]$, where $(x,y)\sim D$, and the sample loss, in which the same expectation is instead taken with $(x,y)$ uniform over $S$.

\section{Complexity Measures}

The empirical Rademacher complexity $\hat{R}_S$ assumes a fixed sample $S$ from $D^m$. It measures the complexity of a function class $\mcG$, whose members $g\in \mcG$ take elements $z_i=(x_i, y_i)$ of $S$ and return costs, through the correlation of $\mcG$ with random noise. Let $\vsigma\sim \Uniform\pa{\pm 1}^m$. Rademacher complexity is then the average empirical one.
$$
\hat{R}_S(\mcG)=\E_\vsigma \sup_g \frac{1}{m}\sum_{i=1}^m{g(z_i) \sigma_i},\;\;\; R_m(\mcG)=\E_S\hat{R}_S(\mcG)
$$

VC dimension accomplishes a similar task for binary classification by rating the complexity of a hypothesis class $\mcH$. Let hypotheses $\mcH\ni h:\mcX\rightarrow \mcY=\ca{\pm 1}$ be applied elementwise over a vector of inputs $\vx$. First we define the growth function $\Pi_\mcH:\N\rightarrow\N$, which gives the maximum number of distinct labelings a hypothesis class can produce over all sets of $m$ points in the input space:
$$
\Pi_\mcH(m)=\max_{\vx\in\mcX^m}\card{\set{h(\vx)}{h\in\mcH}}
$$
The VC dimension of $\mcH$ is then $\max\set{m\in\N}{\Pi_\mcH(m)=2^m}$.

\section{Overview of Results}

Proofs can be found in a \nurl{http://ittc.ku.edu/~beckage/ml800/VC_dim.pdf}{cogent write-up} by Prof. Beckage from the University of Kansas.

\subsection{VC Generalization Bounds}

Upper bound. If $d$ is the VC-dimension of $\mcH$, then for any $D$ wp $1-\delta$:
$$
\varepsilon\le \tilde{O}\pa{\sqrt{\frac{d-\log \delta}{m}}}
$$
The above inequality is random since it depends on $S$, the $D^m$-valued rv. TODO: find source removing tilde?
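Before turning to the lower bound, here is a small Monte Carlo sketch of the empirical Rademacher complexity $\hat{R}_S$ defined in the previous section (purely illustrative; the finite threshold-classifier class and all names are made up for the example):

\begin{verbatim}
import numpy as np

def empirical_rademacher(sample, hypotheses, trials=1000, seed=0):
    """Monte Carlo estimate of R_S(G) = E_sigma sup_g (1/m) sum_i g(z_i) sigma_i."""
    rng = np.random.default_rng(seed)
    m = len(sample)
    # Rows: g(z_1), ..., g(z_m) for each hypothesis g in the (finite) class.
    values = np.array([[g(z) for z in sample] for g in hypotheses])
    estimates = []
    for _ in range(trials):
        sigma = rng.choice([-1.0, 1.0], size=m)       # Rademacher noise
        estimates.append(np.max(values @ sigma) / m)  # sup over the class
    return float(np.mean(estimates))

# Example: sign thresholds on 1-d points as a crude stand-in for a hypothesis class.
points = np.linspace(-1.0, 1.0, 20)
thresholds = [lambda x, t=t: 1.0 if x > t else -1.0 for t in np.linspace(-1, 1, 50)]
print(empirical_rademacher(points, thresholds))
\end{verbatim}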
Agnostic lower bound. We may find a $D$ such that with a fixed nonzero probability (a non-negligible set of candidate samples $S$), the following holds:
$$
\varepsilon\ge \Omega\pa{\sqrt{\frac{d}{m}}}
$$
The above implies that in the common case of agnostic hypothesis learning, where we do not know distribution $D$, VC-dimension is, \textit{up to logarithmic factors, asymptotically efficient} in quantifying the generalization gap.

Realizability. Suppose $D$ is realizable wrt $\mcH$, so that there exists an $f\in\mcH$ such that for almost any $(x,y)$ sampled from $D$, $f(x)=y$. Then all statements above hold but with $\sqrt{\varepsilon}$ instead of $\varepsilon$.

\subsection{Growth Function Bounds}

Sauer's Lemma implies that VC dimension $d$ bounds the growth function: in a graph of the logarithm of the growth function vs $m$, growth is linear since $\Pi_\mcH(n)=2^n$ for $n\le d$. Then for $n>d$, growth is at most logarithmic, i.e., $\log\Pi_\mcH=O(\log m)$.

With Massart's Lemma we have wp $1-\delta$:
$$
\varepsilon\le O\pa{\sqrt{\frac{\log\Pi_\mcH(m)-\log\delta}{m}}}
$$
Since the above would be large if $\log\Pi_\mcH(m)\simeq m$, it is clear why Sauer's Lemma enables the essential relationship between learnability and complexity.

\subsection{Rademacher bounds}

With $R_m$ either the empirical or expected Rademacher complexity over the sample for a given $h,\ell$ we have again wp $1-\delta$:
$$
\varepsilon\le 2R_m+O\pa{\sqrt{\frac{\log\nicefrac{1}{\delta}}{m}}}
$$
$R_m$ may be NP-hard to compute, depending on $\mcH$. This tells us Rademacher complexity could only be a useful improvement over VC-bounds, asymptotically, if we have an efficient approximation for the empirical Rademacher complexity or some knowledge of $D$ as required to compute the true Rademacher complexity.

\section{Hardness of Learning}

See \emph{Rademacher and Gaussian Complexities: Risk Bounds and Structural Results} by Bartlett and Mendelson.

\end{document}
\documentclass[inequalities.tex]{subfile} \begin{document} \section{Probability in Inequality}\label{sec:prob} \end{document}
\newpage
\subsection{Recurrent Sequences}

\begin{myitemize}
	\item \href{https://ufile.io/ywbniil2}{WOOT 2010-11 Recursion}
\end{myitemize}

\begin{BoxedTheorem}{Sum of Geometric Sequences}{}\label{theorem:Sum of Geometric Sequences}
	Every recurrent sequence can be written as a sum of some geometric sequences. Given a recurrent sequence,
	\[x_n = a_1x_{n-1} + a_2x_{n-2} +\dots + a_kx_{n-k}\]
	the term $ x_n $ can be written as
	\[x_n = c_1r_1^n + c_2r_2^n +\dots + c_lr_l^n\]
	for some constants $ c_i $, where the $ r_i $ are the roots of the \emph{characteristic polynomial} of the recursion, which is:
	\[\tcboxmath[colback=white, colframe=white]{x^k - a_1x^{k-1} - a_2x^{k-2} \dots - a_k = 0}\]
	If there are repeated roots, say $ r_1 = r_2 = r_3 $, then we instead have
	\[x_n = \tcboxmath[colback=white, colframe=white]{c_1r_1^n + c_2n\ r_2^n + c_3n^2\ r_3^n} \dots + c_lr_l^n\]
	Conversely, any sequence defined as a sum of geometric sequences satisfies such a recurrence.
\end{BoxedTheorem}

\lem{}{Let $ F_n $ be the $ n $th Fibonacci number. Then the following holds:
	\[F_{n}^2 + F_{n+1}^2 = F_{2n+1}\]}

\proof{Expand the general form of the terms, and show that $ a_n = F_{n}^2 + F_{n+1}^2 - F_{2n+1} $ is itself a recurrent sequence by \autoref{theorem:Sum of Geometric Sequences}.}

\begin{BoxedTheorem}[title=Repertoire Method]{}{}
	Given a recurrence whose solution can be written as
	\[f(n) = A(n)\a + B(n)\beta + C(n)\gamma\]
	we plug in different values for $ f(n) $, for example $ f(n) = 1, n, 2n $, etc., for which the parameter values are known from the recursion, and then solve for $ A, B, C $.
\end{BoxedTheorem}
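As a quick numerical illustration of the characteristic-polynomial theorem above (a throwaway sketch, not part of the question bank), the Fibonacci recurrence $F_n = F_{n-1} + F_{n-2}$ has characteristic polynomial $x^2 - x - 1$, and the constants $c_i$ can be recovered from the initial terms:

\begin{verbatim}
import numpy as np

# Characteristic polynomial of F(n) = F(n-1) + F(n-2) is x^2 - x - 1 = 0.
roots = np.roots([1, -1, -1])               # golden ratio and its conjugate

# Solve c1 + c2 = F(0) = 0 and c1*r1 + c2*r2 = F(1) = 1 for c1, c2.
V = np.vander(roots, 2, increasing=True).T
c = np.linalg.solve(V, np.array([0.0, 1.0]))

def fib(n):
    """Closed form F(n) = c1*r1^n + c2*r2^n, rounded back to an integer."""
    return round(float(np.real(c @ roots**n)))

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
\end{verbatim}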
%\chapter{PROLOGUE TO FOURTH ARTICLE} \section*{Prologue} \addcontentsline{toc}{section}{Prologue} % \begin{tabular}{p{0.16\linewidth}p{0.78\linewidth}} % \textit{Title:} & Parameter Prediction for Unseen Deep Architectures \\ % \textit{Authors:} & Boris Knyazev, Michal Drozdzal, Graham Taylor, Adriana Romero \\ % \textit{Published at:} & \venue{Neural Information Processing Systems (NeurIPS 2021)} \\ % \textit{Code release:} & \url{https://github.com/facebookresearch/ppuda} \\ % \textit{Personal contributions:} & developed the key components of algorithms and models; developed the code; designed % and ran all experiments; wrote most of the article. % \end{tabular} \vspace{5pt} \densepar{Context.} Before deep learning it was often necessary to manually design features (\eg SIFT~\citep{lowe2004distinctive}). Now, we can simply learn features using gradient descent algorithms like SGD. However, SGD itself is manually designed and thus has its own limitations similar to manual feature engineering. In particular, SGD is computationally expensive and requires expertise to tune it. In addition, SGD does not accumulate the knowledge of previous optimizations, rather it starts from scratch. While replacing manually designed features with the learnable ones is extremely successful~\citep{krizhevsky2012imagenet}, replacing SGD appears to be more challenging. So far, learnable methods based on recurrent neural networks have been explored to tackle this task~\citep{andrychowicz2016learning}. However, these methods produce optimizers that are inefficient as SGD, motivating research into alternative approaches. \densepar{Contributions.} We build on Graph HyperNetworks (GHNs)~\citep{zhang2018graph} that take a neural network architecture and output its trained parameters in a single forward pass. To train and evaluate GHNs, we release the DeepNets-1M dataset of neural architectures for vision tasks: CIFAR-10 and ImageNet. We significantly improve on GHNs in terms of design and generalization ability. For example, we can take a common ResNet-50 neural network and predict all its parameters using our GHN-2 in less than a second even on a CPU. This ResNet-50 achieves 58.6\% on CIFAR-10 without any training, a remarkable performance since our GHN-2 has never observed models at the same scale and connectivity. %\vspace{-3pt} \densepar{Recent works.} %Predicting performant parameters of arbitrary large-scale networks is a challenging task, so little progress has been made in this direction. \citet{cai2019once} provide a method to obtain performant ImageNet parameters for diverse networks. However, all their networks are derived from a shared ``supernet'' based on MobileNet-v3~\citep{howard2019searching}. As a result, this method cannot be applied to arbitrary networks. Other works are based on meta-optimizers~\citep{ravi2016optimization,metz2020tasks,wichrowska2017learned} and, as inherent to iterative optimizers, are computationally inefficient. HyperTransformers can predict parameters for small-scale networks and can generalize across tasks~\citep{anonymous2021HyperTransformer}. Scaling up HyperTransformers and connecting them with GHNs may enable parameter prediction across tasks.\looseness-1
\documentclass[onecolumn, draftclsnofoot,10pt, compsoc]{IEEEtran} \usepackage{graphicx} \usepackage{url} \usepackage{setspace} \usepackage[margin=0.75in]{geometry} \setlength{\parindent}{0pt} \usepackage[hidelinks]{hyperref} \usepackage{listings} \usepackage{float} \renewcommand\thesection{\Roman{section}} \renewcommand\thesubsection{\Alph{subsection}} \renewcommand\thesubsubsection{\arabic{subsubsection}} \geometry{textheight=9.5in, textwidth=7in} % 1. Fill in these details \def \CapstoneTeamName{ Aerolyzer} \def \CapstoneTeamNumber{ 22} \def \GroupMemberOne{ E. Reilly Collins} \def \GroupMemberTwo{ Sophia Liu} \def \GroupMemberThree{ Jesse Hanson} \def \CapstoneProjectName{ Aerosol Analyzer Mobile Web Application} \def \CapstoneSponsorCompany{ NASA JPL} \def \CapstoneSponsorPerson{ Kim Whitehall, Lewis McGibbney} % 2. Uncomment the appropriate line below so that the document type works \def \DocType{ %Problem Statement %Requirements Document %Technology Review %Design Document Progress Report } \newcommand{\NameSigPair}[1]{\par \makebox[2.75in][r]{#1} \hfil \makebox[3.25in]{\makebox[2.25in]{\hrulefill} \hfill \makebox[.75in]{\hrulefill}} \par\vspace{-12pt} \textit{\tiny\noindent \makebox[2.75in]{} \hfil \makebox[3.25in]{\makebox[2.25in][r]{Signature} \hfill \makebox[.75in][r]{Date}}}} % 3. If the document is not to be signed, uncomment the RENEWcommand below \renewcommand{\NameSigPair}[1]{#1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \begin{titlepage} \pagenumbering{gobble} \begin{singlespace} %\includegraphics[height=4cm]{coe_v_spot1} \hfill % 4. If you have a logo, use this includegraphics command to put it on the coversheet. %\includegraphics[height=4cm]{CompanyLogo} \par\vspace{.2in} \centering \scshape{ \huge CS Capstone \DocType \par {\large\today}\par \vspace{.5in} \textbf{\Huge\CapstoneProjectName}\par \vfill {\large Prepared for}\par \Huge \CapstoneSponsorCompany\par \vspace{5pt} {\Large\NameSigPair{\CapstoneSponsorPerson}\par} {\large Prepared by }\par Group\CapstoneTeamNumber\par % 5. comment out the line below this one if you do not wish to name your team \CapstoneTeamName\par \vspace{5pt} {\Large \NameSigPair{\GroupMemberOne}\par \NameSigPair{\GroupMemberTwo}\par \NameSigPair{\GroupMemberThree}\par } \vspace{18pt} } \begin{abstract} % 6. Fill in your abstract \noindent Over the past 10 weeks, the Aerolyzer team has worked with our client to make progress towards our finished software product. The purpose of this document is to illustrate that progress and provide a retrospective of our work done so far. A summary the purpose and goals of our project, our current status, and our weekly progress is offered. Additionally, we explain the positives, things that will change, and specific actions to be taken for these changes to occur. We are on track to have our aerosol-analyzing mobile application ready in time for the Expo. \medskip In summary, we have now completed several documents that aided us in figuring out how our project will come together. Furthermore, we have worked on several tasks assigned by our client and added these to our code repository. Lastly, we finish this term with a better understanding of the software design and development process. \end{abstract} \end{singlespace} \end{titlepage} %s\newpage \pagenumbering{arabic} \tableofcontents % 7. uncomment this (if applicable). Consider adding a page break. %\listoffigures \clearpage \listoftables \clearpage \begin{flushleft} % 8. now you write! 
\section{Introduction}
Aerolyzer is an application that our group is currently in the process of developing in collaboration with our clients at NASA JPL. Throughout the past 10 weeks, we have met weekly with our client for positive discussion, as well as for learning purposes and to formulate a plan for the development of our web application. This document serves to highlight the progress we have made over the past few months, as well as provide a more detailed retrospective of the work we have accomplished thus far. More specifically, we will elaborate on the purpose and overall goals of our project, in addition to focusing on our current status and explaining the progress we have made week-by-week. Lastly, we will provide a thorough discussion on the positives we have encountered this term, what will need to be changed going forward, and the individual actions that we will need to implement in order for these changes to be made.

\section{Project Purposes and Goals}
The purpose of Aerolyzer is to develop an application capable of analyzing an image and inferring the corresponding aerosol content. To accomplish this, our image detection will uniquely utilize color distribution within the image to identify features necessary for analyzing said aerosol content. Our goal is to create a web application that employs an open source algorithm that uses the location's meteorological conditions, the colors from the image, and analyzed geo-related data to accomplish our desired functionality of providing our user with an accurate representation of the aerosol content in the image they provided.

\section{Current Project Status}
Presently, our team has established a problem statement, developed an in-depth requirements document, researched and chosen the technologies we will be utilizing throughout the development of our web application, and created a design document which outlines explicitly how we will be using each technology. Moreover, we have created a Github Organization repository for our Aerolyzer team which currently contains PyLint, an integrated version of TravisCI and Coveralls, and a Sphinx Documentation skeleton which will be used for both code and usage documentation.

\section{Weekly Summaries}
The following is a detailed week-by-week summary of our activities, problems, and solutions. The three team member weekly progress reports are condensed into each week. The weeks correspond with our term; weeks 1 and 2 are omitted, as teams had not been assigned or begun working until week 3.

\subsection{Week 3}
This week was a busy one, as we had to get started on the project as a whole. We had our first meeting with our clients: Kim Whitehall and Lewis McGibbney from NASA JPL. After our meeting, Kim sent us some resources via links to look through to help us get started. We looked through these and got a better understanding of our project, which was really helpful for working on the problem statement document. We also determined what the goals of our project would be. Since Reilly mentioned her interest in UX, she was given the tentative team role of looking into the UX component of our app and explored some options for displaying photos by looking at other apps with similar functionality. Sophia was assigned research on REST APIs and completed this throughout the week. Jesse has a physics background and therefore was asked to conduct some research for the algorithm implementation, specifically optics and color analysis.
Additionally, we completed our problems statement assignment by creating the LaTeX file and Makefile to compile a PDF for Kim and Lewis. This document summarized the expectations for our senior capstone project. They gave us feedback on our draft and approved the final version, which was signed and sent back to us to turn in. Our Github repo was created this week; our problem statement files were pushed into a new directory. Weekly updates could now be added to the Wiki. \medskip As a group, we were not sure whether we are able to make the Github repo public and use an organization or if we are restricted to just private repos. Kim and Lewis contacted the instructor and determined we could use a public repo under an organization. Another issue we encountered was having difficulty limiting our project to certain main goals. We had a great discussion with our clients to better figure out the scope of our roles. Lastly, we had trouble researching articles on the topic at hand (aerosol and atmospheric conditions). The papers regarding the aerosol content were tough to understand at first. \subsection{Week 4} We met with our TA Vee this week for the first time and discussed what to anticipate for our project. He went over some of the expectations for the class and gave us advice on working as a team, interacting with our client, and completing work for the capstone class. The team continued reviewing various documents that the client has been sending to our mailing list regarding aerosol research and sunset image repositories. Each member also continued working on their specific tasks: Reilly looked at UX examples (such as typical photo-centric app layouts) for taking and displaying photos; Sophia completed a mini project with REST; and Jesse became familiar with \texttt{MatPlotLib} and looked up professors at OSU with a background in optics to contact. Additionally, we revised our problem statement based on feedback from lecture and started working on the requirements document with help from our client. \medskip Our client helped us figure out how to make our Github profiles public. No outstanding issues were encountered at this time. \subsection{Week 5} This week, the team updated and finished our problem statement using feedback from our professors, as well as suggestions from our client. This was a very useful assignment, as definitely have a much better understanding of our project now; everyone on the team is on the same page. A final hard copy was turned in, and the final unsigned version was committed to our repo. We also continued working on our requirements document this week. This was challenging at first, as we were given very little direction in class and found it difficult to find help. However, it was certainly helpful to figure out what exactly we would be doing for the rest of our project. Going through requirements during our meeting with the client was helpful for completing the doc as well. \medskip Our client asked us to look at some coding tasks that would be completed at some point before beginning our project. These tasks consisted of working with APIs that allowed us to gather weather data and sunset images from various websites. Additionally, we continued looking at research documents suggested by our client and reviewed our Github workflow document as a team. \subsection{Week 6} For this week, we spent the majority of our time finishing up our requirements document. 
There were some sections that we could not fully complete, as they depended on completing further research regarding the color-analyzing aerosol algorithm, but we just made note of that in the document and will add to it as needed. The Gantt chart took us quite a bit of time to complete, mainly because we were unsure how to create one. However, we got help from our client during our weekly meeting and were able to complete it based on that information. Together, we wrote out tasks to be completed until Expo based on our functional requirements and converted those into a working Gantt chart. Reilly worked on the first coding task by writing a small Python program that used the Weather Underground API to retrieve meteorological data. The Wunderground API seems pretty useful for our needs, especially considering it is the free version. Our client also provided a script that looked at color channels in a given image that we could use in our software. It was very exciting to see our project form from ideas and now look like the start of an actual application. \medskip Finally, we spoke with one of our professors this week to better clarify what problem we are trying to solve with Aerolyzer. She helped us work through finding the best way to convey the goals of our project to others. \subsection{Week 7} We made quite a bit of progress this week by finishing up our requirements document and beginning our tech document. Our client gave us several suggestions for improvement on our requirements document and Gantt chart the day that it was due, so we spoke with our TA and were able to turn it in a day later than discussed and committed the doc to Github as well. We did receive some feedback from our TA and will need to make the suggested changes for the final document we put together at the end of the year. \medskip Since there was no school Friday, we met with our client on Wednesday and went over how we would complete our tech document. We came up with 9 different pieces of the Aerolyzer application that we could research to find alternative technologies. These included: uploading photos, getting geo-related data, and documentation for Jesse; the coding environment, image and metadata storage, and extracting EXIF metadata for Reilly; and the web interface, testing, and getting meteorological data for Sophia. Furthermore, our client assigned us each a technology to begin implementing and commit to the repo. Each team member had an issue under the Github repo: Reilly was to add the Pylint config file and supporting Wiki; Jesse, the Sphinx documentation; and TravisCI and Coveralls support for Sophia. \medskip One issue we had this week involved our tech review. Since we had already decided on a few of the technologies we would be using as a team and had begun implementing a few of them, our client asked the professor about what was required of the tech review assignment. She did not think we should spend time searching for alternatives when we had already decided to use and begun moving forward with specific technologies. She told us that he said we only had to compare technologies where possible, meaning we did not have to research alternatives for technologies we had already selected. \subsection{Week 8} This week, our tech review was committed to github. The chosen technologies for various pieces of our system were Pylint, ExifRead, MySQL, TravisCI and Coveralls, Django, Weather Underground API, MISR, Sphinx, and DropzoneJS. 
Reilly committed a Pylint file for our project (\texttt{aerolyzer\_lint\_file}) and will be in charge of maintaining this file going forward; she also added our Pylint config information to the wiki. Sophia added our custom .travis.yml file to integrate TravisCI into our code base. Jesse continued working on our Sphinx documentation. Our design doc tex template was committed to Github in lieu of going to class on Thursday.

\medskip

We ran into a problem checking in our tech review files: somehow \texttt{git push} did not actually push any files and gave a “nothing to commit” message. We ended up adding files manually via the online Github interface in order to meet the midnight deadline. Over the next few days we tried to figure out what went wrong, but were not able to find a root issue. However, the problem seems to be fixed now (as we have committed files since then), so hopefully we do not run into this again. We also ran into a problem when implementing the \texttt{.travis.yml} file from Sophia; we were confused about what to add to the content of it. Our client helped Sophia during our team meeting, offering an example .yml file from another project he had worked on. We also ran into the issue of adding Coveralls to the main repo, and not just a personal repo. She fixed this by going into the settings for the main repo to allow Coveralls to be integrated. Finally, Reilly had an issue with the Pylint file in her first commit and had added the entire source code directory instead of just the config file. Our client helped Reilly remedy this, then closed the issue and merged the pull request.

\subsection{Week 9}
This past week, the team just worked on the design document and completed close to half of it. We also got started on a document for our progress report. Additionally, Sophia worked on fixing the TravisCI .yml file by changing the content based on feedback from our client. We did not meet with our client due to the Thanksgiving holiday.

\medskip

We are confused about our feedback on the tech review. Our client told us that one of our professors said we did not have to compare different technologies for pieces of the system that we had already moved forward with implementing, as that would be an inefficient use of our time. However, a different professor graded our assignment, so we believe there was a miscommunication and we received a poorer grade as a result.

\subsection{Week 10}
This week, we continued working on the design document. We had the design document finished by Thursday, then received the approval and signature from our client on Thursday night. Only one of our clients signed, though we left a signature space on the document for our other client - our main project manager was on vacation and was not sure whether she would be able to provide a signature before the deadline. The document was turned in Friday morning with our signatures added. We received some feedback from our clients regarding the design doc and will probably end up changing our DBMS. We will discuss this and other changes to the design doc at our last meeting, then make changes to the tech review and design doc accordingly next term. We also met up to discuss dividing up work for the progress report and will now begin working on our assigned sections. We will not have a meeting this Friday due to our client being on vacation, and our last meeting of the term will be this upcoming Friday.
\medskip

Over the weekend and into finals week, we will be writing our progress report and working on the presentation slides. Then we will meet up to practice and record our presentation. This will be our last assignment of the term; everything will be turned in before noon on Wednesday.

\clearpage

\section{Retrospective}

\begin{table}[h!]
\caption{Retrospective of the past 10 weeks}\label{table:1}
\centering
\begin{tabular}{| p{0.3\linewidth} | p{0.3\linewidth} | p{0.3\linewidth} |}
\hline
Positives & Deltas & Actions \\ [0.5ex]
\hline
Our team works well together & We need to do a better job of allocating work ahead of time and completing our tasks sooner & From now on we will allocate tasks the day an assignment is posted and aim to complete all assignments no later than three days before they are due \\ [0.5ex]
\hline
Our team gets along well with our clients, Dr. Whitehall and Dr. McGibbney & We need to do a better job of communicating with our clients consistently throughout the week & To do this we will make better use of our Slack channel as questions arise \\ [0.5ex]
\hline
Each member of the group brings a unique skill set to the table & We need to ensure that we are utilizing our strengths while also developing our weaknesses & Next term we will discuss in our weekly client meeting how we can implement both of these changes, and proceed from there \\ [0.5ex]
\hline
After lengthy discussion with our client, we have determined all the technologies necessary for our web application & We may need to change some of the technologies we selected. For example, we will likely no longer be using MySQL & We will need to further discuss our technologies with our clients, and determine which technologies may be better suited for our uses \\ [0.5ex]
\hline
We have the majority of our initial project development documents completed & According to our TA, we will need to make changes to our Requirements Document and Tech Review next term & We will use the feedback from our TA to flesh out our Requirements Document and Tech Review, as well as provide additional available technologies in the Tech Review for the sections in which we only included one. \\ [0.5ex]
\hline
\end{tabular}
\end{table}

\section{Conclusion}
In summary, over the past 10 weeks our team has made significant progress in establishing a well-thought-out development plan for our project. Through the creation of our problem statement, requirements document, tech review, and design document, we have managed to create a solid set of guidelines which we can work according to and continually update throughout our development of the web application. Moreover, we not only have a better understanding of the software design and development process, but we have also improved our background knowledge of the science behind our application through continual discussion with our client. To conclude, we are satisfied with the accomplishments we have made over the course of this term, and we are excited for the opportunity to begin developing code in the coming months.
\end{flushleft}
\end{document}
% \documentstyle[11pt,psfig]{article} \documentstyle[11pt]{article} \hoffset=-.7in \voffset=-.6in \textwidth=6.5in \textheight=8.5in \begin{document} \vspace*{-1in} \thispagestyle{empty} \begin{center} ARGONNE NATIONAL LABORATORY \\ 9700 South Cass Avenue \\ Argonne, IL 60439 \end{center} \vskip .5 in \begin{center} \rule{1.75in}{.01in} \\ \vspace{.1in} ANL/MCS-TM-234 \\ \rule{1.75in}{.01in} \\ \vskip 1.3in {\Large\bf Users Guide for ROMIO: A High-Performance, \\ [1ex] Portable MPI-IO Implementation} \\ [4ex] by \\ [2ex] {\large\it Rajeev Thakur, Robert Ross, Ewing Lusk, William Gropp, Robert Latham} \vspace{1in} Mathematics and Computer Science Division \bigskip Technical Memorandum No.\ 234 \vspace{1.4in} Revised May 2004, November 2007, April 2010 \end{center} \vfill {\small \noindent This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38; and by the Scalable I/O Initiative, a multiagency project funded by the Defense Advanced Research Projects Agency (Contract DABT63-94-C-0049), the Department of Energy, the National Aeronautics and Space Administration, and the National Science Foundation.} \newpage %% Line Spacing (e.g., \ls{1} for single, \ls{2} for double, even \ls{1.5}) %% \newcommand{\ls}[1] {\dimen0=\fontdimen6\the\font \lineskip=#1\dimen0 \advance\lineskip.5\fontdimen5\the\font \advance\lineskip-\dimen0 \lineskiplimit=.9\lineskip \baselineskip=\lineskip \advance\baselineskip\dimen0 \normallineskip\lineskip \normallineskiplimit\lineskiplimit \normalbaselineskip\baselineskip \ignorespaces } \renewcommand{\baselinestretch}{1} \newcommand {\ix} {\hspace*{2em}} \newcommand {\mc} {\multicolumn} \tableofcontents \thispagestyle{empty} \newpage \pagenumbering{arabic} \setcounter{page}{1} \begin{center} {\bf Users Guide for ROMIO: A High-Performance,\\[1ex] Portable MPI-IO Implementation} \\ [2ex] by \\ [2ex] {\it Rajeev Thakur, Robert Ross, Ewing Lusk, and William Gropp} \end{center} \addcontentsline{toc}{section}{Abstract} \begin{abstract} \noindent ROMIO is a high-performance, portable implementation of MPI-IO (the I/O chapter in the \mbox{MPI Standard}). This document describes how to install and use ROMIO version~1.2.4 on various machines. \end{abstract} \section{Introduction} ROMIO\footnote{\tt http://www.mcs.anl.gov/romio} is a high-performance, portable implementation of MPI-IO (the I/O chapter in MPI~\cite{mpi97a}). This document describes how to install and use ROMIO version~1.2.4 on various machines. % % MAJOR CHANGES IN THIS VERSION % \section{Major Changes in This Version} \begin{itemize} \item Added section describing ROMIO \texttt{MPI\_FILE\_SYNC} and \texttt{MPI\_FILE\_CLOSE} behavior to User's Guide \item Bug removed from PVFS ADIO implementation regarding resize operations \item Added support for PVFS listio operations (see Section \ref{sec:hints}) \item Added the following working hints: \texttt{romio\_pvfs\_listio\_read}, \texttt{romio\_pvfs\_listio\_write} \end{itemize} % % GENERAL INFORMATION % \section{General Information} This version of ROMIO includes everything defined in the MPI I/O chapter except support for file interoperability and user-defined error handlers for files (\S~4.13.3). The subarray and distributed array datatype constructor functions from Chapter 4 (\S~4.14.4 \& \S~4.14.5) have been implemented. They are useful for accessing arrays stored in files. 
The functions {\tt MPI\_File\_f2c} and {\tt MPI\_File\_c2f} (\S~4.12.4) are also implemented. C, Fortran, and profiling interfaces are provided for all functions that have been implemented. This version of ROMIO runs on at least the following machines: IBM SP; Intel Paragon; HP Exemplar; SGI Origin2000; Cray T3E; NEC SX-4; other symmetric multiprocessors from HP, SGI, DEC, Sun, and IBM; and networks of workstations (Sun, SGI, HP, IBM, DEC, Linux, and FreeBSD). Supported file systems are IBM PIOFS, Intel PFS, HP/Convex HFS, SGI XFS, NEC SFS, PVFS, NFS, NTFS, and any Unix file system (UFS). This version of ROMIO is included in MPICH 1.2.4; an earlier version is included in at least the following MPI implementations: LAM, HP MPI, SGI MPI, and NEC MPI. Note that proper I/O error codes and classes are returned and the status variable is filled only when used with MPICH revision 1.2.1 or later. You can open files on multiple file systems in the same program. The only restriction is that the directory where the file is to be opened must be accessible from the process opening the file. For example, a process running on one workstation may not be able to access a directory on the local disk of another workstation, and therefore ROMIO will not be able to open a file in such a directory. NFS-mounted files can be accessed. An MPI-IO file created by ROMIO is no different from any other file created by the underlying file system. Therefore, you may use any of the commands provided by the file system to access the file, for example, {\tt ls}, {\tt mv}, {\tt cp}, {\tt rm}, {\tt ftp}. Please read the limitations of this version of ROMIO that are listed in Section~\ref{sec:limit} of this document (e.g., restriction to homogeneous environments). \subsection{ROMIO Optimizations} \label{sec:opt} ROMIO implements two I/O optimization techniques that in general result in improved performance for applications. The first of these is \emph{data sieving}~\cite{choudhary:passion}. Data sieving is a technique for efficiently accessing noncontiguous regions of data in files when noncontiguous accesses are not provided as a file system primitive. The naive approach to accessing noncontiguous regions is to use a separate I/O call for each contiguous region in the file. This results in a large number of I/O operations, each of which is often for a very small amount of data. The added network cost of performing an I/O operation across the network, as in parallel I/O systems, is often high because of latency. Thus, this naive approach typically performs very poorly because of the overhead of multiple operations. % In the data sieving technique, a number of noncontiguous regions are accessed by reading a block of data containing all of the regions, including the unwanted data between them (called ``holes''). The regions of interest are then extracted from this large block by the client. This technique has the advantage of a single I/O call, but additional data is read from the disk and passed across the network. There are four hints that can be used to control the application of data sieving in ROMIO: \texttt{ind\_rd\_buffer\_size}, \texttt{ind\_wr\_buffer\_size}, \texttt{romio\_ds\_read}, and \texttt{romio\_ds\_write}. These are discussed in Section~\ref{sec:hints}. The second optimization is \emph{two-phase I/O}~\cite{bordawekar:primitives}. Two-phase I/O, also called collective buffering, is an optimization that only applies to collective I/O operations. 
In two-phase I/O, the collection of independent I/O operations that make up the collective operation are analyzed to determine what data regions must be transferred (read or written). These regions are then split up amongst a set of aggregator processes that will actually interact with the file system. In the case of a read, these aggregators first read their regions from disk and redistribute the data to the final locations, while in the case of a write, data is first collected from the processes before being written to disk by the aggregators. There are five hints that can be used to control the application of two-phase I/O: \texttt{cb\_config\_list}, \texttt{cb\_nodes}, \texttt{cb\_buffer\_size}, \texttt{romio\_cb\_read}, and \texttt{romio\_cb\_write}. These are discussed in Subsection~\ref{sec:hints}. \subsection{Hints} \label{sec:hints} If ROMIO doesn't understand a hint, or if the value is invalid, the hint will be ignored. The values of hints being used by ROMIO for a file can be obtained at any time via {\tt MPI\_File\_get\_info}. The following hints control the data sieving optimization and are applicable to all file system types: \begin{itemize} \item \texttt{ind\_rd\_buffer\_size} -- Controls the size (in bytes) of the intermediate buffer used by ROMIO when performing data sieving during read operations. Default is \texttt{4194304} (4~Mbytes). \item \texttt{ind\_wr\_buffer\_size} -- Controls the size (in bytes) of the intermediate buffer used by ROMIO when performing data sieving during write operations. Default is \texttt{524288} (512~Kbytes). \item \texttt{romio\_ds\_read} -- Determines when ROMIO will choose to perform data sieving. Valid values are \texttt{enable}, \texttt{disable}, or \texttt{automatic}. Default value is \texttt{automatic}. In \texttt{automatic} mode ROMIO may choose to enable or disable data sieving based on heuristics. \item \texttt{romio\_ds\_write} -- Same as above, only for writes. \end{itemize} The following hints control the two-phase (collective buffering) optimization and are applicable to all file system types: \begin{itemize} \item \texttt{cb\_buffer\_size} -- Controls the size (in bytes) of the intermediate buffer used in two-phase collective I/O. If the amount of data that an aggregator will transfer is larger than this value, then multiple operations are used. The default is \texttt{4194304} (4~Mbytes). \item \texttt{cb\_nodes} -- Controls the maximum number of aggregators to be used. By default this is set to the number of unique hosts in the communicator used when opening the file. \item \texttt{romio\_cb\_read} -- Controls when collective buffering is applied to collective read operations. Valid values are \texttt{enable}, \texttt{disable}, and \texttt{automatic}. Default is \texttt{automatic}. When enabled, all collective reads will use collective buffering. When disabled, all collective reads will be serviced with individual operations by each process. When set to \texttt{automatic}, ROMIO will use heuristics to determine when to enable the optimization. \item \texttt{romio\_cb\_write} -- Controls when collective buffering is applied to collective write operations. Valid values are \texttt{enable}, \texttt{disable}, and \texttt{automatic}. Default is \texttt{automatic}. See the description of \texttt{romio\_cb\_read} for an explanation of the values. \item \texttt{romio\_no\_indep\_rw} -- This hint controls when ``deferred open'' is used. 
When set to \texttt{true}, ROMIO will make an effort to avoid performing any file operation on non-aggregator nodes. The application is expected to use only collective operations. This is discussed in further detail below. \item \texttt{cb\_config\_list} -- Provides explicit control over aggregators. This is discussed in further detail below. \end{itemize} For some systems configurations, more control is needed to specify which hardware resources (processors or nodes in an SMP) are preferred for collective I/O, either for performance reasons or because only certain resources have access to storage. The additional MPI\_Info key name \texttt{cb\_config\_list} specifies a comma-separated list of strings, each string specifying a particular node and an optional limit on the number of processes to be used for collective buffering on this node. This refers to the same processes that \texttt{cb\_nodes} refers to, but specifies the available nodes more precisely. The format of the value of \texttt{cb\_config\_list} is given by the following BNF: \begin{verbatim} cb_config_list => hostspec [ ',' cb_config_list ] hostspec => hostname [ ':' maxprocesses ] hostname => <alphanumeric string> | '*' maxprocesses => <digits> | '*' \end{verbatim} The value \texttt{hostname} identifies a processor. This name must match the name returned by \texttt{MPI\_Get\_processor\_name}~\footnote{The MPI standard requires that the output from this routine identify a particular piece of hardware; some MPI implementations may not conform to this requirement. MPICH does conform to the MPI standard.} % for the specified hardware. The value \texttt{*} as a hostname matches all processors. The value of maxprocesses may be any nonnegative integer (zero is allowed). The value \texttt{maxprocesses} specifies the maximum number of processes that may be used for collective buffering on the specified host. If no value is specified, the value one is assumed. If \texttt{*} is specified for the number of processes, then all MPI processes with this same hostname will be used.. Leftmost components of the info value take precedence. Note: Matching of processor names to \texttt{cb\_config\_list} entries is performed with string matching functions and is independent of the listing of machines that the user provides to mpirun/mpiexec. In other words, listing the same machine multiple times in the list of hosts to run on will not cause a \texttt{*:1} to assign the same host four aggregators, because the matching code will see that the processor name is the same for all four and will assign exactly one aggregator to the processor. The value of this info key must be the same for all processes (i.e., the call is collective and each process must receive the same hint value for these collective buffering hints). Further, in the ROMIO implementation the hint is only recognized at \texttt{MPI\_File\_open} time. The set of hints used with a file is available through the routine \texttt{MPI\_File\_get\_info}, as documented in the MPI standard. As an additional feature in the ROMIO implementation, wildcards will be expanded to indicate the precise configuration used with the file, with the hostnames in the rank order used for the collective buffering algorithm (\emph{this is not implemented at this time}). 
Here are some examples of how this hint might be used:
\begin{itemize}
\item \texttt{*:1} One process per hostname (i.e., one process per node)
\item \texttt{box12:30,*:0} Thirty processes on one machine, namely \texttt{box12}, and none anywhere else.
\item \texttt{n01,n11,n21,n31,n41} One process on each of these specific nodes only.
\end{itemize}

When the values specified by \texttt{cb\_config\_list} conflict with other hints (e.g., the number of collective buffering nodes specified by \texttt{cb\_nodes}), the implementation is encouraged to take the minimum of the two values. In other words, if \texttt{cb\_config\_list} specifies ten processors on which I/O should be performed, but \texttt{cb\_nodes} specifies a smaller number, then an implementation is encouraged to use only \texttt{cb\_nodes} total aggregators. If \texttt{cb\_config\_list} specifies fewer processes than \texttt{cb\_nodes}, no more than the number in \texttt{cb\_config\_list} should be used. The implementation is also encouraged to assign processes in the order that they are listed in \texttt{cb\_config\_list}.

The following hint controls the deferred open feature of ROMIO and is also applicable to all file system types:
\begin{itemize}
\item \texttt{romio\_no\_indep\_rw} -- If the application plans on performing only collective operations and this hint is set to ``true'', then ROMIO can have just the aggregators open a file. The \texttt{cb\_config\_list} and \texttt{cb\_nodes} hints can be given to further control which nodes are aggregators.
\end{itemize}

For PVFS, PIOFS, and PFS:
\begin{itemize}
\item \texttt{striping\_factor} -- Controls the number of I/O devices to stripe across. The default is file system dependent, but for PVFS it is \texttt{-1}, indicating that the file should be striped across all I/O devices.
\item \texttt{striping\_unit} -- Controls the striping unit (in bytes). For PVFS the default will be the PVFS file system default strip size.
\item \texttt{start\_iodevice} -- Determines what I/O device data will first be written to. This is a number in the range of 0 ... striping\_factor - 1.
\end{itemize}

\subsubsection{Hints for PFS}
\label{sec:hints_pfs}
\begin{itemize}
\item \texttt{pfs\_svr\_buf} -- Turns on PFS server buffering. Valid values are \texttt{true} and \texttt{false}. Default is \texttt{false}.
\end{itemize}

\subsubsection{Hints for XFS}
\label{sec:hints_xfs}
For XFS, control is provided for the direct I/O optimization:
\begin{itemize}
\item \texttt{direct\_read} -- Controls direct I/O for reads. Valid values are \texttt{true} and \texttt{false}. Default is \texttt{false}.
\item \texttt{direct\_write} -- Controls direct I/O for writes. Valid values are \texttt{true} and \texttt{false}. Default is \texttt{false}.
\end{itemize}

\subsubsection{Hints for PVFS (v1)}
\label{sec:hints_oldpvfs}
For PVFS, control is provided for the use of the listio interface. This interface to PVFS allows a collection of noncontiguous regions to be requested (for reading or writing) with a single operation. This can result in substantially higher performance when accessing noncontiguous regions. Support for these operations in PVFS exists after version 1.5.4, but has not been heavily tested, so use of the interface is disabled in ROMIO by default at this time.

The hints to control listio use are:
\begin{itemize}
\item \texttt{romio\_pvfs\_listio\_read} -- Controls use of listio for reads. Valid values are \texttt{enable}, \texttt{disable}, and \texttt{automatic}. Default is \texttt{disable}.
\item \texttt{romio\_pvfs\_listio\_write} -- Controls use of listio for writes. Valid values are \texttt{enable}, \texttt{disable}, and \texttt{automatic}. Default is \texttt{disable}.
\end{itemize}

\subsubsection{Hints for PVFS (v2)}
\label{sec:hints_pvfs}
The PVFS v2 file system has many tuning parameters.
\begin{itemize}
\item dtype i/o
\end{itemize}

\subsubsection{Hints for Lustre}
\begin{itemize}
\item \texttt{romio\_lustre\_co\_ratio} -- In the stripe-contiguous I/O pattern, each OST will be accessed by a group of I/O clients. CO is the Client/OST ratio, i.e., the maximum number of I/O clients for each OST. CO=1 by default.
\item \texttt{romio\_lustre\_coll\_threshold} -- If this hint is set and the I/O request size is bigger than this value, ROMIO will not perform collective I/O. The reason is that when the request size is big, the collective communication overhead increases and the benefits from collective I/O become limited. A value of 0 means collective I/O is always performed.
\item \texttt{romio\_lustre\_cb\_ds\_threshold} -- ROMIO can optimize collective I/O with a version of data sieving. If the I/O request is smaller than this hint's value, though, ROMIO will not try to apply the data sieving optimization.
\item \texttt{romio\_lustre\_ds\_in\_coll} -- Collective I/O will apply read-modify-write to deal with noncontiguous data by default. However, this introduces some overhead (I/O operations and locking). The Lustre developers have run tests where data sieving showed bad collective write performance for some kinds of workloads. To avoid this, the \texttt{romio\_lustre\_ds\_in\_coll} hint can be used to disable the read-modify-write step in collective I/O. This optimization is distinct from the one in independent I/O (controlled by \texttt{romio\_ds\_read} and \texttt{romio\_ds\_write}).
\end{itemize}

\subsubsection{Hints for PANFS (Panasas)}
PanFS allows users to specify the layout of a file at file-creation time. Layout information includes the number of StorageBlades (SB) across which the data is stored, the number of SBs across which a parity stripe is written, and the number of consecutive stripes that are placed on the same set of SBs. The \texttt{panfs\_layout\_*} hints are only used if supplied at file-creation time.
\begin{itemize}
\item \texttt{panfs\_layout\_type} -- Specifies the layout of a file: 2 = RAID0, 3 = RAID5 Parity Stripes.
\item \texttt{panfs\_layout\_stripe\_unit} -- The size of the stripe unit in bytes.
\item \texttt{panfs\_layout\_total\_num\_comps} -- The total number of StorageBlades a file is striped across.
\item \texttt{panfs\_layout\_parity\_stripe\_width} -- If the layout type is RAID5 Parity Stripes, this hint specifies the number of StorageBlades in a parity stripe.
\item \texttt{panfs\_layout\_parity\_stripe\_depth} -- If the layout type is RAID5 Parity Stripes, this hint specifies the number of contiguous parity stripes written across the same set of SBs.
\item \texttt{panfs\_layout\_visit\_policy} -- If the layout type is RAID5 Parity Stripes, the policy used to determine the parity stripe a given file offset is written to: 1 = Round Robin.
\end{itemize}

PanFS supports the ``concurrent write'' (CW) mode, where groups of cooperating clients can disable the PanFS consistency mechanisms and use their own consistency protocol. Clients participating in concurrent write mode use application-specific information to improve performance while maintaining file consistency. All clients accessing the file(s) must enable concurrent write mode.
If any client does not enable concurrent write mode, then the PanFS consistency protocol will be invoked. Once a file is opened in CW mode on a machine, attempts to open a file in non-CW mode will fail with EACCES. If a file is already opened in non-CW mode, attempts to open the file in CW mode will fail with EACCES. The following hint is used to enable concurrent write mode. \begin{itemize} \item \texttt{panfs\_concurrent\_write} If set to 1 at file open time, the file is opened using the PanFS concurrent write mode flag. Concurrent write mode is not a persistent attribute of the file. \end{itemize} Below is an example PanFS layout using the following parameters: \begin{verbatim} - panfs_layout_type = 3 - panfs_layout_total_num_comps = 100 - panfs_layout_parity_stripe_width = 10 - panfs_layout_parity_stripe_depth = 8 - panfs_layout_visit_policy = 1 Parity Stripe Group 1 Parity Stripe Group 2 . . . Parity Stripe Group 10 ---------------------- ---------------------- -------------------- SB1 SB2 ... SB10 SB11 SB12 ... SB20 ... SB91 SB92 ... SB100 ----------------------- ----------------------- --------------------- D1 D2 ... D10 D91 D92 ... D100 D181 D182 ... D190 D11 D12 D20 D101 D102 D110 D191 D192 D193 D21 D22 D30 . . . . . . D31 D32 D40 D41 D42 D50 D51 D52 D60 D61 D62 D70 D71 D72 D80 D81 D82 D90 D171 D172 D180 D261 D262 D270 D271 D272 D273 . . . . . . ... \end{verbatim} \subsubsection{Systemwide Hints} \label{sec:system_hints} A site administrator with knowledge of the storage and networking capabilities of a machine might be able to come up with a set of hint values that work better for that machine than the ROMIO default values. As an extention to the standard, ROMIO will consult a ``hints file''. This file provides an additional mechanism for setting MPI-IO hints, albeit in a ROMIO-specific manner. The hints file contains a list of hints and their values. ROMIO will use these initial hint settings, though programs are free to override any of them. The format of the hints file is a list of hints and their values, one per line. A \# character in the first column indicates a comment, and ROMIO will ignore the entire line. Here's an example: \begin{verbatim} # this is a comment describing the following setting cb_nodes 32 # these nodes happen to have the best connection to storage cb_config_list n01,n11,n21,n31,n41 \end{verbatim} ROMIO will look for these hints in the file \texttt{/etc/romio-hints}. A user can set the environment variable \texttt{ROMIO\_HINTS} to the name of a file which ROMIO will use instead. \subsection{Using ROMIO on NFS} It is worth first mentioning that in no way do we encourage the use of ROMIO on NFS volumes. NFS is not a high-performance protocol, nor are NFS servers typically very good at handling the types of concurrent access seen from MPI-IO applications. Nevertheless, NFS is a very popular mechanism for providing access to a shared space, and ROMIO does support MPI-IO to NFS volumes, provided that they are configured properly. To use ROMIO on NFS, file locking with {\tt fcntl} must work correctly on the NFS installation. On some installations, fcntl locks don't work. To get them to work, you need to use Version~3 of NFS, ensure that the lockd daemon is running on all the machines, and have the system administrator mount the NFS file system with the ``{\tt noac}'' option (no attribute caching). Turning off attribute caching may reduce performance, but it is necessary for correct behavior. 
The following are some instructions we received from Ian Wells of HP for setting the {\tt noac} option on NFS. We have not tried them ourselves. We are including them here because you may find them useful. Note that some of the steps may be specific to HP systems, and you may need root permission to execute some of the commands. \begin{verbatim} >1. first confirm you are running nfs version 3 > >rpcnfo -p `hostname` | grep nfs > >ie > goedel >rpcinfo -p goedel | grep nfs > 100003 2 udp 2049 nfs > 100003 3 udp 2049 nfs > > >2. then edit /etc/fstab for each nfs directory read/written by MPIO > on each machine used for multihost MPIO. > > Here is an example of a correct fstab entry for /epm1: > > ie grep epm1 /etc/fstab > > ROOOOT 11>grep epm1 /etc/fstab > gershwin:/epm1 /rmt/gershwin/epm1 nfs bg,intr,noac 0 0 > > if the noac option is not present, add it > and then remount this directory > on each of the machines that will be used to share MPIO files > >ie > >ROOOOT >umount /rmt/gershwin/epm1 >ROOOOT >mount /rmt/gershwin/epm1 > >3. Confirm that the directory is mounted noac: > >ROOOOT >grep gershwin /etc/mnttab >gershwin:/epm1 /rmt/gershwin/epm1 nfs >noac,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0 0 0 899911504 \end{verbatim} \subsubsection{ROMIO, NFS, and Synchronization} NFS has a ``sync'' option that specifies that the server should put data on the disk before replying that an operation is complete. This means that the actual I/O cost on the server side cannot be hidden with caching, etc. when this option is selected. In the ``async'' mode the server can get the data into a buffer (and perhaps put it in the write queue; this depends on the implementation) and reply right away. Obviously if the server were to go down after the reply was sent but before the data was written, the system would be in a strange state, which is why so many articles suggest the "sync" option. Some systems default to ``sync'', while others default to ``async'', and the default can change from version to version of the NFS software. If you find that access to an NFS volume through MPI-IO is particularly slow, this is one thing to check out. \subsection{Using testfs} The testfs ADIO implementation provides a harness for testing components of ROMIO or discovering the underlying I/O access patterns of an application. When testfs is specified as the file system type, no actual files will be opened. Instead debugging information will be displayed on the processes opening the file. Subsequent I/O operations on this testfs file will provide additional debugging information. The intention of the testfs implementation is that it serve as a starting point for further instrumentation when debugging new features or applications. As such it is expected that users will want to modify the ADIO implementation in order to get the specific output they desire. \subsection{ROMIO and {\tt MPI\_FILE\_SYNC}} The MPI specification notes that a call to {\tt MPI\_FILE\_SYNC} ``causes all previous writes to {\tt fh} by the calling process to be transferred to the storage device.'' Likewise, calls to {\tt MPI\_FILE\_CLOSE} have this same semantic. Further, ``if all processes have made updates to the storage device, then all such updates become visible to subsequent reads of {\tt fh} by the calling process.'' The intended use of {\tt MPI\_FILE\_SYNC} is to allow all processes in the communicator used to open the file to see changes made to the file by each other (the second part of the specification). 
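As an illustration, the following C sketch shows the usual sync--barrier--sync sequence used to make one process's writes visible to another process in the same communicator; the file name, offset, and data are placeholders only, and error checking is omitted.
\begin{verbatim}
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_File fh;
    MPI_Status status;
    int rank, buf = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    if (rank == 0)   /* process 0 writes a value */
        MPI_File_write_at(fh, 0, &buf, 1, MPI_INT, &status);

    MPI_File_sync(fh);             /* complete the writes          */
    MPI_Barrier(MPI_COMM_WORLD);   /* order the accesses           */
    MPI_File_sync(fh);             /* make them visible to readers */

    if (rank == 1)   /* process 1 now sees what process 0 wrote */
        MPI_File_read_at(fh, 0, &buf, 1, MPI_INT, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
\end{verbatim}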
The definition of ``storage device'' in the specification is vague, and it isn't necessarily the case that calling {\tt MPI\_FILE\_SYNC} will force data out to permanent storage. Since users often use {\tt MPI\_FILE\_SYNC} to attempt to force data out to permanent storage (i.e. disk), the ROMIO implementation of this call enforces stronger semantics for most underlying file systems by calling the appropriate file sync operation when {\tt MPI\_FILE\_SYNC} is called (e.g. {\tt fsync}). However, it is still unwise to assume that the data has all made it to disk because some file systems (e.g. NFS) may not force data to disk when a client system makes a sync call. For performance reasons we do \emph{not} make this same file system call at {\tt MPI\_FILE\_CLOSE} time. At close time ROMIO ensures any data has been written out to the ``storage device'' (file system) as defined in the standard, but does not try to push the data beyond this and into physical storage. Users should call {\tt MPI\_FILE\_SYNC} before the close if they wish to encourage the underlying file system to push data to permanent storage. \subsection{ROMIO and {\tt MPI\_FILE\_SET\_SIZE}} {\tt MPI\_FILE\_SET\_SIZE} is a collective routine used to resize a file. It is important to remember that a MPI-IO routine being collective does not imply that the routine synchronizes the calling processes in any way (unless this is specified explicitly). As of 1.2.4, ROMIO implements {\tt MPI\_FILE\_SET\_SIZE} by calling {\tt ftruncate} from all processes. Since different processes may call the function at different times, it means that unless external synchronization is used, a resize operation mixed in with writes or reads could have unexpected results. In short, if synchronization after a set size is needed, the user should add a barrier or similar operation to ensure the set size has completed. % % INSTALLATION INSTRUCTIONS % \section{Installation Instructions} Since ROMIO is included in MPICH, LAM, HP MPI, SGI MPI, and NEC MPI, you don't need to install it separately if you are using any of these MPI implementations. If you are using some other MPI, you can configure and build ROMIO as follows: Untar the tar file as \begin{verbatim} gunzip -c romio.tar.gz | tar xvf - \end{verbatim} {\noindent or} \begin{verbatim} zcat romio.tar.Z | tar xvf - \end{verbatim} {\noindent then} \begin{verbatim} cd romio ./configure make \end{verbatim} Some example programs and a Makefile are provided in the {\tt romio/test} directory. Run the examples as you would run any MPI program. Each program takes the filename as a command-line argument ``{\tt -fname filename}''. The {\tt configure} script by default configures ROMIO for the file systems most likely to be used on the given machine. If you wish, you can explicitly specify the file systems by using the ``{\tt -file\_system}'' option to configure. 
Multiple file systems can be specified by using `+' as a separator, e.g., \\
\hspace*{.4in} {\tt ./configure -file\_system=xfs+nfs} \\
For the entire list of options to configure, do\\
\hspace*{.4in} {\tt ./configure -h | more} \\
After building a specific version, you can install it in a particular directory with \\
\hspace*{.4in} {\tt make install PREFIX=/usr/local/romio (or whatever directory you like)} \\
or just\\
\hspace*{.4in} {\tt make install (if you used -prefix at configure time)}

If you intend to leave ROMIO where you built it, you should {\it not} install it; {\tt make install} is used only to move the necessary parts of a built ROMIO to another location. The installed copy will have the include files, libraries, man pages, and a few other odds and ends, but not the whole source tree. It will have a {\tt test} directory for testing the installation and a location-independent Makefile built during installation, which users can copy and modify to compile and link against the installed copy.

To rebuild ROMIO with a different set of configure options, do\\
\hspace*{.4in} {\tt make distclean}\\
to clean everything, including the Makefiles created by {\tt configure}. Then run {\tt configure} again with the new options, followed by {\tt make}.

\subsection{Configuring for Linux and Large Files}
32-bit systems running Linux kernel version 2.4.0 or newer and glibc version 2.2.0 or newer can support files greater than 2 GBytes in size. This support is currently automatically detected and enabled. We document the manual steps here in case the automatic detection does not work for some reason.

The two macros {\tt\_FILE\_OFFSET\_BITS=64} and {\tt\_LARGEFILE64\_SOURCE} tell GNU libc that it is OK to support large files on 32-bit platforms. The former changes the size of {\tt off\_t} (no source changes are needed, but it might affect interoperability with libraries compiled with a different size of {\tt off\_t}). The latter exposes the GNU libc functions open64(), write64(), read64(), etc. ROMIO does not make use of the 64-bit system calls directly at this time, but we add this flag for good measure.

If your Linux system is relatively new, there is an excellent chance it is running kernel 2.4.0 or newer and glibc-2.2.0 or newer. Add the string
\begin{verbatim}
"-D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE"
\end{verbatim}
to your CFLAGS environment variable before running {\tt ./configure}.

%
% TESTING ROMIO
%
\section{Testing ROMIO}
To test if the installation works, do\\
\hspace*{.4in} {\tt make testing}\\
in the {\tt romio/test} directory. This calls a script that runs the test programs and compares the results with what they should be. By default, {\tt make testing} causes the test programs to create files in the current directory and use whatever file system that corresponds to. To test with other file systems, you need to specify a filename in a directory corresponding to that file system as follows:\\
\hspace*{.4in} {\tt make testing TESTARGS="-fname=/foo/piofs/test"}

%
% COMPILING AND RUNNING MPI-IO PROGRAMS
%
\section{Compiling and Running MPI-IO Programs}
If ROMIO is not already included in the MPI implementation, you need to include the file {\tt mpio.h} for C or {\tt mpiof.h} for Fortran in your MPI-IO program. Note that on HP machines running HPUX and on NEC SX-4, you need to compile Fortran programs with {\tt mpifort}.
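To make the structure of such a program concrete, here is a minimal, hypothetical C example in which every process writes its own block of a shared file; the file name, buffer size, and access pattern are arbitrary choices, not requirements.
\begin{verbatim}
#include "mpi.h"
/* #include "mpio.h"   only needed if ROMIO is not part of your MPI */

#define COUNT 1000

int main(int argc, char *argv[])
{
    MPI_File fh;
    MPI_Status status;
    MPI_Offset offset;
    int rank, i, buf[COUNT];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* fill the buffer with some rank-dependent data */
    for (i = 0; i < COUNT; i++)
        buf[i] = rank * COUNT + i;

    /* each process writes a contiguous block at its own offset */
    offset = (MPI_Offset) rank * COUNT * sizeof(int);
    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, &status);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
\end{verbatim}
Such a program is compiled and run as described below.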
With MPICH, HP MPI, or NEC MPI, you can compile MPI-IO programs as \\ \hspace*{.4in} {\tt mpicc foo.c}\\ or \\ \hspace*{.4in} {\tt mpifort foo.f}\\ With SGI MPI, you can compile MPI-IO programs as \\ \hspace*{.4in} {\tt cc foo.c -lmpi}\\ or \\ \hspace*{.4in} {\tt f77 foo.f -lmpi}\\ or \\ \hspace*{.4in} {\tt f90 foo.f -lmpi}\\ With LAM, you can compile MPI-IO programs as \\ \hspace*{.4in} {\tt hcc foo.c -lmpi}\\ or \\ \hspace*{.4in} {\tt hf77 foo.f -lmpi}\\ If you have built ROMIO with some other MPI implementation, you can compile MPI-IO programs by explicitly giving the path to the include file mpio.h or mpiof.h and explicitly specifying the path to the library libmpio.a, which is located in {\tt \$(ROMIO\_HOME)/lib/\$(ARCH)/libmpio.a}. Run the program as you would run any MPI program on the machine. If you use {\tt mpirun}, make sure you use the correct {\tt mpirun} for the MPI implementation you are using. For example, if you are using MPICH on an SGI machine, make sure that you use MPICH's {\tt mpirun} and not SGI's {\tt mpirun}. % % LIMITATIONS % \section{Limitations of This Version of ROMIO \label{sec:limit}} \begin{itemize} \item When used with any MPI implementation other than MPICH revision 1.2.1 or later, the {\tt status} argument is not filled in any MPI-IO function. Consequently, {\tt MPI\_Get\_count} and\linebreak {\tt MPI\_Get\_elements} will not work when passed the {\tt status} object from an MPI-IO operation. \item Additionally, when used with any MPI implementation other than MPICH revision 1.2.1 or later, all MPI-IO functions return only two possible error codes---{\tt MPI\_SUCCESS} on success and {\tt MPI\_ERR\_UNKNOWN} on failure. \item This version works only on a homogeneous cluster of machines, and only the ``native'' file data representation is supported. \item Shared file pointers are not supported on PVFS and IBM PIOFS file systems because they don't support {\tt fcntl} file locks, and ROMIO uses that feature to implement shared file pointers. \item On HP machines running HPUX and on NEC SX-4, you need to compile Fortran programs with {\tt mpifort}. \item The file-open mode {\tt MPI\_MODE\_EXCL} does not work on Intel PFS file system, due to a bug in PFS. \end{itemize} % % USAGE TIPS % \section{Usage Tips} \begin{itemize} \item When using ROMIO with SGI MPI, you may sometimes get an error message from SGI MPI: ``MPI has run out of internal datatype entries. Please set the environment variable {\tt MPI\_TYPE\_MAX} for additional space.'' If you get this error message, add the following line to your {\tt .cshrc} file:\\ \hspace*{.4in} {\tt setenv MPI\_TYPE\_MAX 65536}\\ Use a larger number if you still get the error message. \item If a Fortran program uses a file handle created using ROMIO's C interface, or vice versa, you must use the functions {\tt MPI\_File\_c2f} or {\tt MPI\_File\_f2c} (see \S~4.12.4 in~\cite{mpi97a}). Such a situation occurs, for example, if a Fortran program uses an I/O library written in C with MPI-IO calls. Similar functions {\tt MPIO\_Request\_f2c} and {\tt MPIO\_Request\_c2f} are also provided. \item For Fortran programs on the Intel Paragon, you may need to provide the complete path to {\tt mpif.h} in the {\tt include} statement, e.g., \\ \hspace*{.4in} {\tt include '/usr/local/mpich/include/mpif.h'}\\ instead of \\ \hspace*{.4in} {\tt include 'mpif.h'}\\ This is because the {\tt -I} option to the Paragon Fortran compiler {\tt if77} doesn't work correctly. 
It always looks in the default directories first and, therefore, picks up Intel's {\tt mpif.h}, which is actually the {\tt mpif.h} of an older version of MPICH. \end{itemize} % % MAILING LIST % % this mailing list has been dead for a while % % REPORTING BUGS % \section{Reporting Bugs} If you have trouble, first check the users guide. Then check if there is a list of known bugs and patches on the ROMIO web page at {\tt http://www.mcs.anl.gov/romio}. Finally, if you still have problems, send a detailed message containing:\\ \hspace*{.2in}$\bullet$ the type of system (often {\tt uname -a}),\\ \hspace*{.2in}$\bullet$ the output of {\tt configure},\\ \hspace*{.2in}$\bullet$ the output of {\tt make}, and \\ \hspace*{.2in}$\bullet$ any programs or tests\\ to {\tt [email protected]}. % % ROMIO INTERNALS % \section{ROMIO Internals} A key component of ROMIO that enables such a portable MPI-IO implementation is an internal abstract I/O device layer called ADIO~\cite{thak96e}. Most users of ROMIO will not need to deal with the ADIO layer at all. However, ADIO is useful to those who want to port ROMIO to some other file system. The ROMIO source code and the ADIO paper~\cite{thak96e} will help you get started. MPI-IO implementation issues are discussed in~\cite{thak99b}. All ROMIO-related papers are available online at {\tt http://www.mcs.anl.gov/romio}. \section{Learning MPI-IO} The book {\em Using MPI-2: Advanced Features of the Message-Passing Interface}~\cite{grop99a}, published by MIT Press, provides a tutorial introduction to all aspects of MPI-2, including parallel I/O. It has lots of example programs. See {\tt http://www.mcs.anl.gov/mpi/usingmpi2} for further information about the book. % % MAJOR CHANGES IN PREVIOUS RELEASES % \section{Major Changes in Previous Releases} \subsection{Major Changes in Version 1.2.3} \begin{itemize} \item Added explicit control over aggregators for collective operations (see description of \texttt{cb\_config\_list}). \item Added the following working hints: \texttt{cb\_config\_list}, \texttt{romio\_cb\_read}, \texttt{romio\_cb\_write},\newline \texttt{romio\_ds\_read}. These additional hints have been added but are currently ignored by the implementation: \texttt{romio\_ds\_write}, \texttt{romio\_no\_indep\_rw}. \item Added NTFS ADIO implementation. \item Added testfs ADIO implementation for use in debugging. \item Added delete function to ADIO interface so that file systems that need to use their own delete function may do so (e.g. PVFS). \item Changed version numbering to match version number of MPICH release. \end{itemize} \subsection{Major Changes in Version 1.0.3} \begin{itemize} \item When used with MPICH 1.2.1, the MPI-IO functions return proper error codes and classes, and the status object is filled in. \item On SGI's XFS file system, ROMIO can use direct I/O even if the user's request does not meet the various restrictions needed to use direct I/O. ROMIO does this by doing part of the request with buffered I/O (until all the restrictions are met) and doing the rest with direct I/O. (This feature hasn't been tested rigorously. Please check for errors.) By default, ROMIO will use only buffered I/O. Direct I/O can be enabled either by setting the environment variables {\tt MPIO\_DIRECT\_READ} and/or {\tt MPIO\_DIRECT\_WRITE} to {\tt TRUE}, or on a per-file basis by using the info keys {\tt direct\_read} and {\tt direct\_write}. Direct I/O will result in higher performance only if you are accessing a high-bandwidth disk system. 
Otherwise, buffered I/O is better and is therefore used as the default.
\item Miscellaneous bug fixes.
\end{itemize}

\subsection{Major Changes in Version 1.0.2}
\begin{itemize}
\item Implemented the shared file pointer functions and split collective I/O functions. Therefore, the main components of the MPI I/O chapter not yet implemented are file interoperability and error handling.
\item Added support for using ``direct I/O'' on SGI's XFS file system. Direct I/O is an optional feature of XFS in which data is moved directly between the user's buffer and the storage devices, bypassing the file-system cache. This can improve performance significantly on systems with high disk bandwidth. Without high disk bandwidth, regular I/O (that uses the file-system cache) performs better. ROMIO, therefore, does not use direct I/O by default. The user can turn on direct I/O (separately for reading and writing) either by using environment variables or by using MPI's hints mechanism (info). To use the environment-variables method, do
\begin{verbatim}
setenv MPIO_DIRECT_READ TRUE
setenv MPIO_DIRECT_WRITE TRUE
\end{verbatim}
To use the hints method, the two keys are {\tt direct\_read} and {\tt direct\_write}. By default their values are {\tt false}. To turn on direct I/O, set the values to {\tt true}. The environment variables have priority over the info keys. In other words, if the environment variables are set to {\tt TRUE}, direct I/O will be used even if the info keys say {\tt false}, and vice versa. Note that direct I/O must be turned on separately for reading and writing. The environment-variables method assumes that the environment variables can be read by each process in the MPI job. This is not guaranteed by the MPI Standard, but it works with SGI's MPI and the {\tt ch\_shmem} device of MPICH.
\item Added support (new ADIO device, {\tt ad\_pvfs}) for the PVFS parallel file system for Linux clusters, developed at Clemson University (see {\tt http://www.parl.clemson.edu/pvfs}). To use it, you must first install PVFS and then when configuring ROMIO, specify {\tt -file\_system=pvfs} in addition to any other options to {\tt configure}. (As usual, you can configure for multiple file systems by using ``{\tt +}''; for example, {\tt -file\_system=pvfs+ufs+nfs}.) You will need to specify the path to the PVFS include files via the {\tt -cflags} option to {\tt configure}, for example, \newline {\tt configure -cflags=-I/usr/pvfs/include}. You will also need to specify the full path name of the PVFS library. The best way to do this is via the {\tt -lib} option to MPICH's {\tt configure} script (assuming you are using ROMIO from within MPICH).
\item Uses weak symbols (where available) for building the profiling version, i.e., the PMPI routines. As a result, the size of the library is reduced considerably.
\item The Makefiles use {\em virtual paths} if supported by the make utility. GNU {\tt make} supports it, for example. This feature allows you to untar the distribution in some directory, say a slow NFS directory, and compile the library (create the .o files) in another directory, say on a faster local disk. For example, if the tar file has been untarred in an NFS directory called {\tt /home/thakur/romio}, one can compile it in a different directory, say {\tt /tmp/thakur}, as follows:
\begin{verbatim}
cd /tmp/thakur
/home/thakur/romio/configure
make
\end{verbatim}
The .o files will be created in {\tt /tmp/thakur}; the library will be created in\newline {\tt /home/thakur/romio/lib/\$ARCH/libmpio.a}.
This method works only if the {\tt make} utility supports {\em virtual paths}. If the default {\tt make} utility does not, you can install GNU {\tt make} which does, and specify it to {\tt configure} as \begin{verbatim} /home/thakur/romio/configure -make=/usr/gnu/bin/gmake (or whatever) \end{verbatim} \item Lots of miscellaneous bug fixes and other enhancements. \item This version is included in MPICH 1.2.0. If you are using MPICH, you need not download ROMIO separately; it gets built as part of MPICH. The previous version of ROMIO is included in LAM, HP MPI, SGI MPI, and NEC MPI. NEC has also implemented the MPI-IO functions missing in ROMIO, and therefore NEC MPI has a complete implementation of MPI-IO. \end{itemize} \subsection{Major Changes in Version 1.0.1} \begin{itemize} \item This version is included in MPICH 1.1.1 and HP MPI 1.4. \item Added support for NEC SX-4 and created a new device {\tt ad\_sfs} for NEC SFS file system. \item New devices {\tt ad\_hfs} for HP HFS file system and {\tt ad\_xfs} for SGI XFS file system. \item Users no longer need to prefix the filename with the type of file system; ROMIO determines the file-system type on its own. \item Added support for 64-bit file sizes on IBM PIOFS, SGI XFS, HP HFS, and NEC SFS file systems. \item {\tt MPI\_Offset} is an 8-byte integer on machines that support 8-byte integers. It is of type {\tt long long} in C and {\tt integer*8} in Fortran. With a Fortran 90 compiler, you can use either {\tt integer*8} or {\tt integer(kind=MPI\_OFFSET\_KIND)}. If you {\tt printf} an {\tt MPI\_Offset} in C, remember to use {\tt \%lld} or {\tt \%ld} as required by your compiler. (See what is used in the test program {\tt romio/test/misc.c}). On some machines, ROMIO detects at configure time that {\tt long long} is either not supported by the C compiler or it doesn't work properly. In such cases, configure sets {\tt MPI\_Offset} to {\tt long} in C and {\tt integer} in Fortran. This happens on Intel Paragon, Sun4, and FreeBSD. \item Added support for passing hints to the implementation via the {\tt MPI\_Info} parameter. ROMIO understands the following hints (keys in {\tt MPI\_Info} object): \texttt{cb\_buffer\_size}, \texttt{cb\_nodes},\newline \texttt{ind\_rd\_buffer\_size}, \texttt{ind\_wr\_buffer\_size} (on all but IBM PIOFS), \texttt{striping\_factor} (on PFS and PIOFS), \texttt{striping\_unit} (on PFS and PIOFS), \texttt{start\_iodevice} (on PFS and PIOFS), and \texttt{pfs\_svr\_buf} (on PFS only). \end{itemize} \newpage \addcontentsline{toc}{section}{References} \bibliographystyle{plain} %% these are the "full" bibliography databases %\bibliography{/homes/thakur/tex/bib/papers,/homes/robl/projects/papers/pario} % this is the pared-down one containing only those references used in % users-guide.tex % to regenerate, uncomment the full databases above, then run % ~gropp/bin/citetags users-guide.tex | sort | uniq | \ % ~gropp/bin/citefind - /homes/thakur/tex/bib/papers.bib \ % /homes/robl/projects/papers/pario \bibliography{romio} \end{document}
{ "alphanum_fraction": 0.7511971514, "avg_line_length": 42.9780123131, "ext": "tex", "hexsha": "b33d4830790fe1f1a46d3ffdbaea319c2c90d96c", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-06-13T07:23:35.000Z", "max_forks_repo_forks_event_min_datetime": "2015-12-29T22:14:56.000Z", "max_forks_repo_head_hexsha": "cc5f4d3fd0f8c9f2774d10deaebdced77985d839", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "OpenCMISS-Dependencies/mpich2", "max_forks_repo_path": "src/mpi/romio/doc/users-guide.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "cc5f4d3fd0f8c9f2774d10deaebdced77985d839", "max_issues_repo_issues_event_max_datetime": "2017-05-16T19:17:42.000Z", "max_issues_repo_issues_event_min_datetime": "2015-12-30T22:28:15.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "OpenCMISS-Dependencies/mpich2", "max_issues_repo_path": "src/mpi/romio/doc/users-guide.tex", "max_line_length": 96, "max_stars_count": 7, "max_stars_repo_head_hexsha": "cc5f4d3fd0f8c9f2774d10deaebdced77985d839", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "OpenCMISS-Dependencies/mpich2", "max_stars_repo_path": "src/mpi/romio/doc/users-guide.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-15T00:54:47.000Z", "max_stars_repo_stars_event_min_datetime": "2015-12-31T03:15:50.000Z", "num_tokens": 13103, "size": 48866 }
\newcommand{\D}{.} %provide master folder for all documents %Need to change year to current year in the following 4 lines for figure locations \newcommand{\fish}{C:/Rsaves/fishery/2019/} \newcommand{\figs}{C:/bio.data/bio.snowcrab/assessments/2019/figures/} \newcommand{\g}{C:/bio.data/bio.snowcrab/assessments/2019/timeseries/survey/} \newcommand{\maps}{C:/bio.data/bio.snowcrab/output/maps/survey/snowcrab/annual/} %---------------------------------------------------------------------------------------- % PACKAGES AND THEMES %---------------------------------------------------------------------------------------- \documentclass{beamer} \mode<presentation> { \usetheme{Boadilla} \usecolortheme{dolphin} } \usepackage{graphicx} % Allows including images \usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables \graphicspath{C:/Rsaves/fishery/} %provide master folder for all documents %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \title[Science / Industry Pre-Rap]{N-ENS Pre-Rap} % The short title appears at the bottom of every slide, the full title is only on the title page \author{Snow Crab Unit} % Your name \institute[DFO] % Your institution as it will appear on the bottom of every slide, may be shorthand to save space { Department of Fisheries and Oceans \\ % Your institution for the title page \medskip \textit{} % Your email address } \date{\the\year} % Date, can be changed to a custom date \begin{document} \begin{frame} \titlepage % Print the title page as the first slide \end{frame} \begin{frame} \frametitle{Overview} % Table of contents slide, comment this block out to remove it \tableofcontents%[part=1] % Throughout your presentation, if you choose to use \section{} and \subsection{} commands, these will automatically be printed on this slide as an overview of your presentation \end{frame} %\begin{frame} %\frametitle{Overview} % Table of contents slide, comment this block out to remove it %\tableofcontents[part=2] % Throughout your presentation, if you choose to use \section{} and \subsection{} commands, these will be printed on this slide as an overview of your presentation %\end{frame} %---------------------------------------------------------------------------------------- % PRESENTATION SLIDES %---------------------------------------------------------------------------------------- %\part{1} \section{Commercial Fishery} %\subsection{Landings} \begin{frame} \frametitle{Landings} \begin{table}[ht] \centering \begin{tabular}{rlccclrr} \hline Area & Year & TAC & Landings & Catch Rate (lbs/trap) \\ \hline N-ENS & & & & \\ & 2017 & 825 & 813 & 198 \\ & 2018 & 784 & 742 & 136 \\ & 2019 & 631 & 629 & 191 \\ \hline CFA 23 & & & & \\ & 2017 & 3640 & 3636 & 217 \\ & 2018 & 3276 & 3280 & 265 \\ & 2019 & 3604 & 3590 & 240 \\ \hline CFA 24 & & & & \\ & 2017 & 3090 & 3083 & 198 \\ & 2018 & 2781 & 2784 & 242 \\ & 2019 & 3059 & 3042 & 221 \\ \hline \hline \end{tabular} \end{table} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Weekly Landings} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=0.75\textwidth]{\fish NENS_weekly_landing.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} %\frametitle{Spring Landings} \begin{figure} \vspace*{-0.25cm} \centerline{\includegraphics[width=0.8\textwidth]{\fish 
percent_spring_landings.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} %\frametitle{Active Vessels} \begin{figure} % \vspace*{1cm} \centerline{\includegraphics[width=0.80\textwidth]{\fish vessels_per_year.pdf}} \end{figure} \end{frame} %------------------------------------------------ %\subsection{Catch Rates} \begin{frame} %\frametitle{Catch Rates} \begin{figure} \vspace*{-.25cm} \centerline{\includegraphics[width=0.75\textwidth]{\fish annual_cpue_kg.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} %\frametitle{Weekly Catch Rates} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.90\textwidth]{\fish weekly_cpue_smoothed.pdf}} \end{figure} \end{frame} %------------------------------------------------ % Need to change the date reference in the below figure name \begin{frame} \frametitle{Catch Rate Locations} \begin{figure} % \vspace*{-.5cm} \centerline{\includegraphics[width=0.90\textwidth]{C:/bio.data/bio.snowcrab/output/maps/logbook/snowcrab/annual/cpue/cpue\D 2019.png}} \end{figure} \end{frame} %------------------------------------------------ %\subsection{Size of Catch} \begin{frame} \frametitle{Crab Size} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.70\textwidth]{\fish mean_cw_observed.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Crab Size} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.70\textwidth]{\fish cw_vs_mass.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Crab Size} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.90\textwidth]{\fish mean_weight_observed.pdf}} \end{figure} \end{frame} %------------------------------------------------ %\subsection{Fishing Positions} \begin{frame} \begin{figure} \vspace*{-.3cm} \centerline{\includegraphics[width=0.75\textwidth]{\fish nens_past_two_years_fishing_positions.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Seasonal Fishing Patterns} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.15\textwidth]{\fish nens_spring_fishing_positions.pdf}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=1.15\textwidth]{\fish nens_summer_fishing_positions.pdf}} \end{figure} \end{column} \end{columns} \end{frame} %------------------------------------------------ \section{At-Sea Observer} %\subsection{Observer Coverage} \begin{frame} \frametitle{At-Sea Observer Coverage} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.90\textwidth]{\fish observersummary.pdf}} \end{figure} \end{frame} %------------------------------------------------ % Need to change years in figure names to current and past year %\subsection{Catch Composition} \begin{frame} \frametitle{Catch Composition} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.05\textwidth]{\fish 2018_N-ENS_size_freq.pdf}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=1.05\textwidth]{\fish 2019_N-ENS_size_freq.pdf}} \end{figure} \end{column} \end{columns} \end{frame} %------------------------------------------------ % Need to change years in figure names to 
current and past year \begin{frame} \frametitle{Catch Composition} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.05\textwidth]{\fish N-ENS_Spring_size_freq.pdf}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.05\textwidth]{\fish N-ENS_Summer_size_freq.pdf}} \end{figure} \end{column} \end{columns} \end{frame} %------------------------------------------------ %\subsection{Soft Crab} %------------------------------------------------ \begin{frame} \frametitle{Softshell Catches} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.90\textwidth]{\fish softsummary.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Softshell Catches} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.70\textwidth]{\fish soft_crab_by_month.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Softshell Catches} \begin{figure} \vspace*{-.5cm} \centerline{\includegraphics[width=0.70\textwidth]{\fish nens_soft_crab_positions_68.pdf}} \end{figure} \end{frame} %------------------------------------------------ %------------------------------------------------ \section{Survey} %\subsection{Snow Crab Catches} \begin{frame} \begin{figure} \vspace*{-0.9cm} \centerline{\includegraphics[width=0.9\textwidth]{2019_All_Survey_Stations.pdf}} \end{figure} \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Female Crab} \begin{figure} \vspace*{-0.4cm} \centerline{\includegraphics[width=0.55\textwidth]{\figs size\D freq/survey/female.pdf}} \end{figure} \end{frame} %------------------------------------------------ % Need to change years in figure names to current and past year \begin{frame} \frametitle{Mature Female Distribution} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{\maps totmass\D female\D mat/totmass\D female\D mat\D 2018.png}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=1.0\textwidth]{\maps totmass\D female\D mat/totmass\D female\D mat\D 2019.png}} \end{figure} \end{column} \end{columns} Units: mt/km2 \end{frame} %------------------------------------------------ \begin{frame} \frametitle{Male Crab} \begin{figure} \vspace*{-0.4cm} \centerline{\includegraphics[width=0.55\textwidth]{\figs size\D freq/survey/male.pdf}} \end{figure} \end{frame} %------------------------------------------------ % Need to change years in figure names to current and past year \begin{frame} \frametitle{Commercial Male Distribution} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{\maps R0\D mass/R0\D mass\D 2018.png}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=1.0\textwidth]{\maps R0\D mass/R0\D mass\D 2019.png}} \end{figure} \end{column} \end{columns} Units: mt/km2 \end{frame} %---------------------------------------------------------------------------------------- % Need to change years in figure names to current and past year %\subsection{Ecosystem- Biological and Environmental} \begin{frame} \frametitle{Predation} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} 
\centerline{\includegraphics[width=1.0\textwidth]{\g ms\D mass\D 10.pdf}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=0.85\textwidth]{\maps bycatch/ms\D mass\D 10/ms\D mass\D 10\D 2018.png}} \end{figure} \begin{figure} \vspace*{-1.5cm} \centerline{\includegraphics[width=0.85\textwidth]{\maps bycatch/ms\D mass\D 10/ms\D mass\D 10\D 2019.png}} \end{figure} \end{column} \end{columns} \end{frame} %---------------------------------------------------------------------------------------- \begin{frame} \frametitle{Predation} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{\g ms\D mass\D 201.pdf}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=0.85\textwidth]{\maps bycatch/ms\D mass\D 201/ms\D mass\D 201\D 2018.png}} \end{figure} \begin{figure} \vspace*{-1.5cm} \centerline{\includegraphics[width=0.85\textwidth]{\maps bycatch/ms\D mass\D 201/ms\D mass\D 201\D 2019.png}} \end{figure} \end{column} \end{columns} \end{frame} %---------------------------------------------------------------------------------------- \begin{frame} \frametitle{Coexistent Species} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{\g ms\D mass\D 2211.pdf}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=0.85\textwidth]{\maps bycatch/ms\D mass\D 2211/ms\D mass\D 2211\D 2018.png}} \end{figure} \begin{figure} \vspace*{-1.5cm} \centerline{\includegraphics[width=0.85\textwidth]{\maps bycatch/ms\D mass\D 2211/ms\D mass\D 2211\D 2019.png}} \end{figure} \end{column} \end{columns} \end{frame} %------------------------------------------------ %\subsection{Temperature} \begin{frame} \frametitle{Temperatures} \begin{figure} %\vspace*{-0.4cm} \centerline{\includegraphics[width=0.65\textwidth]{\g t.pdf}} \end{figure} \end{frame} %---------------------------------------------------------------------------------------- \begin{frame} \frametitle{Temperatures} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{figure} \centerline{\includegraphics[width=1.0\textwidth]{\maps /t/t\D 2018.png}} \end{figure} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=1.0\textwidth]{\maps /t/t\D 2019.png}} \end{figure} \end{column} \end{columns} \end{frame} %---------------------------------------------------------------------------------------- \section{St Anns Bank Acoustic Receivers} \begin{frame} \frametitle{Acoustic Receivers} \begin{columns} \begin{column}<+->{0.5\textwidth} \vspace*{-0.5cm} \begin{itemize} \item Collaborative project since 2015 \item Two lines of 23 acoustic receivers within the MPA \item Detect animals we tag (snow crab, cod, etc) as well as other projects from along the eastern seaboard \end{itemize} \end{column} \begin{column}{0.5\textwidth} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=1.0\textwidth]{V2LSCF.receiver.line.locations.pdf}} \end{figure} \end{column} \end{columns} \end{frame} %---------------------------------------------------------------------------------------- \begin{frame} \frametitle{Detections} \begin{figure} \vspace*{-0.5cm} \centerline{\includegraphics[width=0.8\textwidth]{V2LSCF.detected.projects.map.pdf}} \end{figure} 
\end{frame}
%----------------------------------------------------------------------------------------
\begin{frame}
\frametitle{Detections}
\begin{figure}
\vspace*{-0.5cm}
\centerline{\includegraphics[width=0.9\textwidth]{V2LSCF.species.pie.chart.pdf}}
\end{figure}
\end{frame}
%----------------------------------------------------------------------------------------
\begin{frame}
\frametitle{Detections}
\begin{figure}
\vspace*{-4.5cm}
\centerline{\includegraphics[width=1.1\textwidth]{V2LSCF.species.by.month.pdf}}
\end{figure}
\end{frame}
%----------------------------------------------------------------------------------------
\begin{frame}
\frametitle{Temperatures}
\begin{figure}
\vspace*{-0.5cm}
\centerline{\includegraphics[width=0.95\textwidth]{V2LSCF.temperature.pdf}}
\end{figure}
\end{frame}
%---------------------------------------------------------------------------------------
\section{DFO Business}
%\subsection{Collaborative Agreement}
\begin{frame}
\frametitle{Current Collaborative Agreement (CA)}
\vspace*{-0.5cm}
\begin{block}{}
\begin{itemize}
\item 2019/20 is the final year of a five-year CA
\item Funds remaining at the end of the year (April 1, 2020) will be returned to participants
\item \% of each license recalculates annually based on previous season's quota
\item Working on a new 5-year CA.
\item Will still be a ``Use of Fish'' approach with science quota
\item Please DO NOT send payment until an invoice is received, not just the CA for signature
\end{itemize}
\end{block}
\end{frame}
%------------------------------------------------------------------------------------
\begin{frame}
\frametitle{Collaborative Agreement (CA)}
\begin{figure}
\vspace*{-0.5cm}
\centerline{\includegraphics[width=1.0\textwidth]{Spending.Breakdown.pdf}}
\end{figure}
\end{frame}
%----------------------------------------------------------------------------------------
%\subsection{Assessment Cycle}
\begin{frame}
\frametitle{Meeting and Document Schedule}
\begin{block}{}
\begin{itemize}
\item Current:
\begin{itemize}
\item Industry Meetings (Jan), RAP (late Feb), AC Meetings (early March)
\item Framework preceding RAP this year.
\item Preliminary report, Res Doc ($\sim$150 pages), SAR (30 pages)
\end{itemize}
\item Future:
\begin{itemize}
\item In-house review of assessment (mid Feb), Industry Meetings (late Feb), AC Meetings (March)
\item Preliminary report, Stock update (30 pages)
\item Full RAP, Res Doc every 3 years
\end{itemize}
\end{itemize}
\end{block}
\end{frame}
%----------------------------------------------------------------------------------------
\begin{frame}
The End
\end{frame}
%------------------------------------------------
% Document End
\end{document}
{ "alphanum_fraction": 0.5916098344, "avg_line_length": 21.976070529, "ext": "tex", "hexsha": "7357a97ce2fac47a988ce1cf65df4a048ad16d49", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c791c98d5ad327359a9fab67b4ba3296fa99f933", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "brent0/SCReports", "max_forks_repo_path": "inst/src/NENSPreRap.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c791c98d5ad327359a9fab67b4ba3296fa99f933", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "brent0/SCReports", "max_issues_repo_path": "inst/src/NENSPreRap.tex", "max_line_length": 203, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c791c98d5ad327359a9fab67b4ba3296fa99f933", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "brent0/SCReports", "max_stars_repo_path": "inst/src/NENSPreRap.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-14T14:41:01.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-14T14:41:01.000Z", "num_tokens": 5003, "size": 17449 }
\subsection{Histograms}
{ "alphanum_fraction": 0.7692307692, "avg_line_length": 6.5, "ext": "tex", "hexsha": "4c5168db56a1d0313344a97fdf2036c8c06dd549", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/nonParametric/01-01-histograms.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/nonParametric/01-01-histograms.tex", "max_line_length": 23, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/nonParametric/01-01-histograms.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 26 }
\chapter{Plan of Action} \label{cpt:planofaction} This plan of action is part of the bachelor project on anonymous video streaming on tablets. In this project we attempt to get Tribler, a peer-to-peer file sharing application working anonymously on Android using an already implemented Tor-like protocol. In this appendix we outline the details of the assignment. We will describe the approach we will use and how we will structure this project. Finally, we will describe how we maintain the quality of our work. \section{Assignment} In this section we will describe our assignment, client, contacts, the problem we will address and sketch out what product we will eventually deliver. This section will also contain some of the critical requirements that we will have to meet along with the risk involved. \subsection{Assignment} Our assignment is to implement a new feature of Tribler into a mobile android application. This new feature allows the creation and usage of so called anontunnels. These tunnels allow anonymous download within a peer-to-peer network between devices, in our case Android smartphones. As these tunnels run on Python code, we will have to be able to run Python on an Android smartphone, along with all the libraries it depends on. \subsection{The client} Our client is Dr. Ir. Johan Pouwelse, the head of the Tribler group and Assistant Professor at the Parallel and Distributed Systems Group of the Faculty of EEMCS, Delft University of Technology. Pouwelse has measured and researched peer-to-peer networks for years and has been working on Tribler for nine years. \subsection{Contacts} \emph{The Client:}\\\\ \textbf{Dr. Ir. Johan Pouwelse}\\ [email protected]\\ +31 (0)15 27 82539\\ Room: HB 07.290\\ Mekelweg 4\\ 2628 CD Delft\\\\ \noindent\emph{TU Delft coach:}\\\\ \textbf{Ir. Egbert Bouman}\\ [email protected]\\ Room: HB 07.290\\ Mekelweg 4\\ 2628 CD Delft\\\\ \noindent\emph{Bachelor Project Coordinator:}\\\\ \textbf{Dr. Martha A. Larson}\\ [email protected]\\ +31 (0)15 27 87357\\ Room: HB 11.040\\ Mekelweg 4\\ 2628 CD Delft\\ \subsection{The final product} \label{ssec:final-product} At the end of the bachelor project, we will deliver an Android application that allows users to find and download content anonymously. This application makes use of the code from the Tribler project, which is written in Python. Therefore, we will make sure that the Android application will be able to run Python code.\\ The downloads that run through the anontunnels will be anonymous, just like the current Tor protocol. \subsection{Requirements and risks} The final product will offer the features that are described in Subsection \ref{ssec:final-product} and should be considered a prototype. The prototype will offer anonymous downloading using the anontunnels mentioned earlier. The development of the prototype will be targeted to the Android platform.\\\\ The following requirements are set: \begin{itemize} \item Weekly Scrum evaluations. At the end of each Scrum iteration, we evaluate what we have done and set the target for next week. This keeps the deadlines SMART\footnote{\href{http://www.techrepublic.com/article/use-smart-goals-to-launch-management-by-objectives-plan/}{www.techrepublic.com/article/use-smart-goals-to-launch-management-by-objectives-plan/}} and manageable. \item Weekly meeting with the client. Every two weeks we will implement a feature, but we will show and discuss our progress each week with the supervisor. 
\item The members of the team will complete a prototype at the end of this project and will also demonstrate this during a 30 minute presentation given in the last week of Q4. \end{itemize} The risks involved with this project include: \begin{itemize} \item Run Python code on an Android application. As the Tor-tunnel functionality is written in Python code, we need to be able to run Python code on an Android device as well. A library exist where you can write Python for Android, but we will still face a challenge when we will try to combine other libraries. \item As we are dependent on third party code, we might lose time to understand or read certain parts of code or documentation as well as link pieces of code that belong to different parties together. \end{itemize} \section{Approach} In this section, we will describe the approach we are taking for this project. First, we will discuss the methodology we are using (Scrum). After that, we will discuss the MoSCoW technique we are using to classify requirements. Afterwards, we will give an overview of the tools we will be using during this project. Finally, we will give our planning and milestones. \subsection{Scrum methodology} For the project, we will be using the Scrum methodology. We have used Scrum in various projects already during our studies and it has proven to be a very efficient way of working. In this section, we will discuss how we plan to use Scrum and what our Scrum iterations will look like. First we will look at the different roles involved in the Scrum process. After that, we will describe how the Scrum process is organized. \subsubsection{Scrum roles} There are three primary roles involved in Scrum: \begin{itemize} \item Product owner: the product owner represents the stakeholders and is the voice of the customer. He is responsible for the success of the product. Johan Pouwelse is the product owner. \item Development team: the development team consists of Rolf, Laurens and Martijn. We are responsible for delivering the final product to the product owner. \item Scrum master: he guides the team by assuring the right choices are being made. He is responsible for arranging the meetings. Rolf is our Scrum master. \end{itemize} \subsubsection{The Scrum process} There are several steps involved in the Scrum process. First, we will create a product backlog. This is a list with the functional demands the product owner has and it contains the items we still have to do. In total, we have 5 sprints. At the end of each sprint, we will deliver a part of the final product. The duration of each sprint is two weeks. At the beginning of each sprint, we will create a sprint backlog. This backlog describes the functional demands, divided in each subtask for this sprint.\\\\ Each morning, we will start the day with the daily Scrum. This is a short meeting where every member of the development team answers the following question: \begin{itemize} \item What have you done yesterday? \item What are you going to do today? \item Are there any problems you ran into? \end{itemize} At the beginning of each sprint, we start with a meeting. This meeting is attended by all members of the team and the supervisor. The purpose of this meeting is to evaluate the last sprint and decide on the tasks that have to be done during the next sprint. \subsection{MoSCoW} MoSCoW is a technique that can be used to place importance on the delivery of each requirement. During each sprint, we will classify the features in one category. 
The MoSCoW model has the following categories:
\begin{itemize}
\item Must have: the requirement must be part of the final product to be considered a success.
\item Should have: the requirement has a high priority and should be in the final product.
\item Could have: the requirement is desirable but not necessary.
\item Would have: the requirement is not implemented in a given release but is considered as a requirement in the future.
\end{itemize}
At the start of each sprint, we evaluate the goals we want to achieve during that sprint. After that, we classify each goal into one of the categories above. Since we do not have everything clear at the start of the project, it could happen that we prioritize the goals differently during each sprint.

\subsection{Tools}
For this project, we will use various tools, both hardware and software. First of all, we will be using our own laptops for the development of the software. We will develop our software on the Ubuntu platform. We are also using Android phones that we can rent from the Tribler team.\\\\
If we look at the software, we will make use of the Android SDK and NDK. The Android SDK will allow us to build .apk files. The NDK allows us to implement parts of an Android application in C or C++. To be able to run Python code on an Android device, we will use the Python for Android library.

\subsection{Planning}
Our planning can be found on GitHub. As of now, we have five milestones:
\begin{itemize}
\item 02-05-14: the literature research should be done and a report about it should have been written.
\item 09-05-14: we should be able to compile the TGS for Android project and run it on an Android device.
\item 30-05-14: we should be able to send a packet between two Android devices over anonymous tunnels.
\item 13-06-14: a GUI for testing purposes should be designed and created.
\item 27-06-14: the end presentation of our project.
\end{itemize}

\section{Project structure}
In this section, the administrative aspects of the project are described.

\subsection{Members}
The members of the project are Rolf Jagerman, Laurens Versluis and Martijn de Vos. All members will work 40 hours per week to ensure the mandatory 15 EC per student are utilized. The division of labor is evenly distributed across all activities (analysis, documentation, implementation, etc.).\\
\noindent\textbf{Contact Information}\\
Rolf Jagerman - [email protected]\\
Laurens Versluis - [email protected]\\
Martijn de Vos - [email protected]

\subsection{Reporting}
Weekly meetings with the client will be held in person. All documentation of the project will be written in {\LaTeX} and will be provided as a PDF. The project material, including source code and documentation, will be available throughout the project on the AT3 GitHub repository\footnote{\href{http://www.github.com/rjagerman/AT3/}{www.github.com/rjagerman/AT3}}.

\section{Quality assurance}
To assure the quality of the delivered product, several agreements have been made about which methods should be used. In particular, we look at testing, code review and version control.

\subsection{Testing}
All written software will be tested using Python unit tests. Additionally, test code coverage is provided to ensure a majority of the code has been thoroughly tested.

\subsection{Code review}
All written code will be reviewed by at least one team member before pull requests are accepted. This will ensure the code is comprehensible, working and correct.
Additionally, our code will undergo a complete source review by SIG (Software Improvement Group).

\subsection{Version control}
To maintain a good overview and history of the code we write, we will use a version control system. All our code will be stored on GitHub, and we will therefore use the Git version control system.
{ "alphanum_fraction": 0.7924493554, "avg_line_length": 77.0212765957, "ext": "tex", "hexsha": "66af2ad466342fa57d1165a278119a95c07a58f2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8731e796acb1d6846708c631b78016df1ad8731c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "brussee/AT3", "max_forks_repo_path": "Reports/appendices/action-plan.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8731e796acb1d6846708c631b78016df1ad8731c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "brussee/AT3", "max_issues_repo_path": "Reports/appendices/action-plan.tex", "max_line_length": 508, "max_stars_count": null, "max_stars_repo_head_hexsha": "8731e796acb1d6846708c631b78016df1ad8731c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "brussee/AT3", "max_stars_repo_path": "Reports/appendices/action-plan.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2471, "size": 10860 }
\documentclass{tufte-book}

\title{Acoustic Propagation Modelling}
\author{Aaron Kaw}

\input{C:/Users/aaron/OneDrive/Science/Programming/LaTeX/AaronLibrary.tex}

\newcommand{\bty}{\text{bty}}
\newcommand{\ati}{\text{ati}}
\newcommand{\bnd}{\text{bnd}}
\newcommand{\inc}{\text{inc}}
\newcommand{\rfl}{\text{rfl}}
\renewcommand{\max}{\text{max}}
\renewcommand{\min}{\text{min}}
\newcommand{\ray}{\text{ray}}
\newcommand{\spn}{\text{spn}}
\newcommand{\nse}{\text{nse}}
\newcommand{\erf}{\text{erf}}
\newcommand{\dtc}{\text{dtc}}
\newcommand{\fal}{\text{fal}}
\newcommand{\erfc}{\text{erfc}}
\newcommand{\DT}{\text{DT}}
\newcommand{\NL}{\text{NL}}
\newcommand{\SL}{\text{SL}}
\newcommand{\TL}{\text{TL}}

\begin{document}

\maketitle

\tableofcontents

\part{Helmholtz Equation}
Taken from Computational Ocean Acoustics\cite{jensen2011computational}.
\A{
  O(\omega^2): && \Abs{\nabla\tau}^2 &= \frac{1}{c^2(\vec{x})} \\
  O(\omega^1): && 2\nabla\tau \cdot \nabla A_0 + \Par{\nabla^2\tau}A_0 &= 0 \\
  O(\omega^{1 - j}): && 2\nabla\tau \cdot \nabla A_j + \Par{\nabla^2\tau}A_j &= -\nabla^2 A_{j-1}
}

\chapter{Eikonal Equation}
The eikonal equation
\A{
  \Abs{\nabla\tau}^2 &= \frac{1}{c^2(\vec{x})}
}
is a first-order nonlinear PDE for modelling the path taken by a ray.

\section{First-Order System}
In cylindrical coordinates,
\A{
  \Dif{r}{s} &= c\xi(s) & \Dif{\xi}{s} &= \frac{-1}{c(r, z)^2} \Part{c}{r} \\
  \Dif{z}{s} &= c\zeta(s) & \Dif{\zeta}{s} &= \frac{-1}{c(r, z)^2} \Part{c}{z} \\
  \Dif{\tau}{s} &= \frac{1}{c(r, z)}
}
with initial conditions
\A{
  r &= r_0 & \xi &= \frac{\cos(\theta_0)}{c(r_0, z_0)} \\
  z &= z_0 & \zeta &= \frac{\sin(\theta_0)}{c(r_0, z_0)} \\
  \tau &= 0
}
and boundary conditions defined as reflection off the bathymetry $z_\bty(r)$ and altimetry $z_\ati(r)$. Since $\xi = \cos(\theta)/c$ and $\zeta = \sin(\theta)/c$, the incident and reflected ray angles satisfy
\A{
  \theta_i &= \cos^{-1}\Par{c(r, z) \, \xi_i} \\
  &= \sin^{-1}\Par{c(r, z) \, \zeta_i} \\
  \theta_r &= 2\theta_\bnd - \theta_i \\
  \xi_r &= \frac{\cos(\theta_r)}{c(r, z)} \\
  \zeta_r &= \frac{\sin(\theta_r)}{c(r, z)}
}

\chapter{Boundary Reflection}
\A{
  \vec{t}_\rfl &= \vec{t}_\inc - 2\Par{\vec{t}_\inc \cdot \vec{n}_\bnd} \vec{n}_\bnd
}

\chapter{Sonar Equations}

\section{Detection Threshold}

\subsection{Detection Index}
The detection index is expressed as
\A{
  d &= \Par{\frac{\mu_\spn - \mu_\nse}{\sigma_\nse}}^2
}
For Gaussian noise and signal-plus-noise with a non-fluctuating signal, note that $\sigma_\nse = \sigma_\spn$, so
\A{
  f_\nse(x) &= \frac{1}{\sqrt{2\pi\sigma_\nse^2}}\exp\Brace{\frac{-1}{2}\Par{\frac{x - \mu_\nse}{\sigma_\nse}}^2} \\
  f_\spn(x) &= \frac{1}{\sqrt{2\pi\sigma_\nse^2}}\exp\Brace{\frac{-1}{2}\Par{\frac{x - \mu_\spn}{\sigma_\nse}}^2}
}
Their respective cumulative distribution functions are
\A{
  F_\nse(x) &= \frac{1}{2}\Brack{1 + \erf\Par{\frac{x - \mu_\nse}{\sqrt{2}\sigma_\nse}}} \\
  F_\spn(x) &= \frac{1}{2}\Brack{1 + \erf\Par{\frac{x - \mu_\spn}{\sqrt{2}\sigma_\nse}}}
}
The probability of detection and the probability of false alarm are defined via a common detection threshold $x$, integrating the respective densities above it:
\A{
  p_\dtc &= 1 - \frac{1}{2}\Brack{1 + \erf\Par{\frac{x - \mu_\spn}{\sqrt{2}\sigma_\nse}}}, & p_\fal &= 1 - \frac{1}{2}\Brack{1 + \erf\Par{\frac{x - \mu_\nse}{\sqrt{2}\sigma_\nse}}} \\
  \Rightarrow p_\dtc &= \frac{1}{2}\erfc\Par{\frac{x - \mu_\spn}{\sqrt{2}\sigma_\nse}}, & p_\fal &= \frac{1}{2}\erfc\Par{\frac{x - \mu_\nse}{\sqrt{2}\sigma_\nse}} \\
  \Rightarrow x &= \mu_\spn + \sqrt{2}\sigma_\nse\erfc^{-1}\Par{2p_\dtc}, & x &= \mu_\nse + \sqrt{2}\sigma_\nse\erfc^{-1}\Par{2p_\fal}
}
So equating these expressions yields
\A{
  \mu_\spn + \sqrt{2}\sigma_\nse\erfc^{-1}\Par{2p_\dtc} &= \mu_\nse + \sqrt{2}\sigma_\nse\erfc^{-1}\Par{2p_\fal} \\
  \Rightarrow \Par{\frac{\mu_\spn - \mu_\nse}{\sigma_\nse}}^2 &= 2\Brack{\erfc^{-1}\Par{2p_\fal} - \erfc^{-1}\Par{2p_\dtc}}^2
}
where the left-hand side is the definition of the detection index. Thus,
\A{
  d &= 2\Brack{\erfc^{-1}\Par{2p_\fal} - \erfc^{-1}\Par{2p_\dtc}}^2
}
Rearranged,
\A{
  p_\dtc &= \frac{1}{2}\erfc\Par{\erfc^{-1}\Par{2p_\fal} - \sqrt{\frac{d}{2}}}
}

\bibliographystyle{apalike}
\bibliography{Prop}

\part{Appendix}

\chapter{Concocting Equations}

\section{Celerity}
\A{
  c_\max &= 1600 \\
  c_\min &= 1500 \\
  c(r, z) &= c_0 + c_1 z + c_2 z^2 \\
  c_0 + c_1 z_\ati + c_2 z_\ati^2 &= c(r, z_\ati) = c_\max \\
  c_0 + c_1 \frac{z_\ati + z_\bty}{2} + c_2 \Par{\frac{z_\ati + z_\bty}{2}}^2 &= c(r, \frac{z_\ati + z_\bty}{2}) = c_\min \\
  c_0 + c_1 z_\bty + c_2 z_\bty^2 &= c(r, z_\bty) = c_\max \\
  \Matrix{ccc}{
    1 & z_\ati & z_\ati^2 \\
    1 & \frac{z_\ati + z_\bty}{2} & \Par{\frac{z_\ati + z_\bty}{2}}^2 \\
    1 & z_\bty & z_\bty^2
  }
  \Matrix{c}{
    c_0 \\ c_1 \\ c_2
  }
  &=
  \Matrix{c}{
    c_\max \\ c_\min \\ c_\max
  }
}

\section{Bathymetry}
\A{
  z_\min &= 700 \\
  z_\max &= 1000 \\
  r_0 &= 300 \\
  z_\bty(r) &= z_\max - \Par{z_\max - z_\min}\exp\Par{-\frac{\Par{r - r_0}^2}{A_r}} \\
  z_\bty\Par{\frac{r_0}{3}} &= z_\min + \frac{z_\min + z_\max}{10} = z_\max - \Par{z_\max - z_\min}\exp\Par{-\frac{\Par{r - r_0}^2}{A_r}} \\
  z_\min + \frac{z_\min + z_\max}{10} &= z_\max - \Par{z_\max - z_\min}\exp\Par{-\frac{\Par{r - r_0}^2}{A_r}} \\
  \Par{z_\max - z_\min}\exp\Par{-\frac{\Par{r - r_0}^2}{A_r}} &= z_\max - z_\min - \frac{z_\min + z_\max}{10} \\
  \exp\Par{-\frac{\Par{r - r_0}^2}{A_r}} &= \frac{\dfrac{9}{10}z_\max - \dfrac{11}{10}z_\min}{z_\max - z_\min} \\
  -\frac{\Par{r - r_0}^2}{A_r} &= \ln\Par{\dfrac{9z_\max - 11z_\min}{10\Par{z_\max - z_\min}}} \\
  A_r &= \frac{-4r_0^2/9}{\ln\Par{\dfrac{9z_\max - 11z_\min}{10\Par{z_\max - z_\min}}}}
}

\chapter{Sonar Equation Manipulation}

\section{Calculating Probability of Detection}
\A{
  d &= Bt\Par{\frac{\SL - \TL}{B\NL}}^2 \\
  p_\dtc &= \frac{1}{2}\erfc\Par{\erfc^{-1}\Par{2p_\fal} - \sqrt{\frac{d}{2}}}
}

\end{document}
{ "alphanum_fraction": 0.6061079299, "avg_line_length": 36.2452830189, "ext": "tex", "hexsha": "8b0845fedd58fac9991504f21d4af8c9679ac790", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-08-05T02:54:21.000Z", "max_forks_repo_forks_event_min_datetime": "2021-08-05T02:54:21.000Z", "max_forks_repo_head_hexsha": "9fdc1cefa51894661543428022d4fe45cd952e4f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kapple19/OceanAcousticsModelling", "max_forks_repo_path": "doc/AcousticPropagation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9fdc1cefa51894661543428022d4fe45cd952e4f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kapple19/OceanAcousticsModelling", "max_issues_repo_path": "doc/AcousticPropagation.tex", "max_line_length": 164, "max_stars_count": 1, "max_stars_repo_head_hexsha": "9fdc1cefa51894661543428022d4fe45cd952e4f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kapple19/OceanAcousticsModelling", "max_stars_repo_path": "doc/AcousticPropagation.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-21T11:08:53.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-21T11:08:53.000Z", "num_tokens": 2534, "size": 5763 }
% Activate the following line by filling in the right side. If for example the name of the root file is Main.tex, write
% "...root = Main.tex" if the chapter file is in the same directory, and "...root = ../Main.tex" if the chapter is in a subdirectory.

%!TEX root =

\chapter[Defining And Initializing Data]{Defining And Initializing Data}
This chapter has four major sections:
\begin{itemize}
\item An overview of assembler labels, variables, and data. This section explains:
\begin{itemize}
\item Assembler label and variable types
\item The relationship between assembler variable types and the values associated with variables: the processor or floating-point coprocessor data types
\item How to specify data values in assembler programs
\end{itemize}
\item Assembler variables. This section explains:
\begin{itemize}
\item Storage allocations for variables
\item Variable attributes
\item Defining and initializing simple-type variables with the DBIT, DB, DW, DD, DP, DQ, and DT directives
\item Defining compound types with the RECORD and STRUC directives; defining and initializing variables of these types (records and structures)
\item Defining and initializing variables with DUP clause(s)
\end{itemize}
\item Assembler labels. This section explains:
\begin{itemize}
\item Label attributes
\item The location counter and the ORG and EVEN directives
\item The LABEL directive
\item Defining implicit NEAR labels
\item The PROC directive
\end{itemize}
\item Using symbolic data, including named variables and labels, with the EQU and PURGE directives
\end{itemize}

\subsection*{Specifying Assembler Data Values}
Assembler data can be expressed in binary, hexadecimal, octal, decimal, or ASCII form. Decimal values that represent integers or reals can be specified with a minus sign; a plus sign is redundant but accepted. Real numbers can also be expressed in floating-point decimal or in hexadecimal notations. Table 4-2 summarizes the valid ways of specifying data values in assembler programs.

\begin{center}
Table 4-2. Assembler Data Value Specification Rules

\begin{tabular}{| l l l p{6.5cm} |}
\hline
\textbf{Value in} & \textbf{Examples} & & \textbf{Rules of Formulation} \\
\hline
Binary & 1100011B & 110B & A sequence of 0's and 1's followed by the letter B. \\
 & & & \\
Octal & 7777O & 4567Q & A sequence of digits in the range 0..7 followed by the letter O or the letter Q.\\
 & & & \\
Decimal & 3309 & 3309D & A sequence of digits in the range 0..9 followed by an optional letter D.\\
 & & & \\
Hexadecimal & 55H & 4BEACH & A sequence of digits in the range 0..9 and/or letters A..F followed by the letter H. A digit must begin the sequence.\\
 & & & \\
ASCII & 'AB' & 'UPDATE.EXT' & Any ASCII string enclosed in single quotes.\\
 & & & \\
Decimal real & -1. & 1E-32 3.14159 & A rational number that may be preceded by a sign and followed by an optional exponent. A decimal point is required if no exponent is present but is optional otherwise. The exponent begins with the letter E followed by an optional sign and a sequence of digits in the range 0..9. \\
 & & & \\
Hexadecimal real & 40490FR & 0C0000R & A sequence of digits in the range 0..9 and/or letters A..F followed by the letter R. The sequence must begin with a digit, and the total number of digits and letters must be (8/16/20) or (9/17/21 with the first digit 0). \\
\hline
\end{tabular}
\end{center}

A real hexadecimal specification must be the exact sequence of hex digits to fill the internal floating-point coprocessor representation of the floating-point number.
For this reason, such values must have exactly 8, 16, or 20 hexadecimal digits, corresponding to the single, double, and extended precision reals that the floating- point coprocessor and the floating-point instructions handle. Such values can have 9, 17, or 21 hexadecimal digits only if the initial digit must be a zero because the value begins with a letter. Data values can be specified in an assembler program in a variety of formats, as shown in Table 4-2. The way the processor or floating-point coprocessor represents such data internally is called its storage format. See also: Processor storage formats, Appendix A floating-point coprocessor storage formats, Chapter 7 \subsection*{Initializing Variables} Assembler variables can be initialized by: \begin{itemize} \item Variable or segment names that represent logical addresses \item Constants (see Table 4-2) \item Constant expressions \end{itemize} A series of operands and operators is called an expression. An expression that yields a constant value is called a constant expression. See also: Assembler expressions, Chapter 5 The assembler evaluates constant expressions in programs. \subsection*{How the Assembler Evaluates Constant Expressions} The assembler can perform arithmetic operations on 8-, and 16-bit numbers. The assembler interprets these numbers as integer or ordinal data types. An integer value specified with a sign is a constant expression. The assembler evaluates integer or ordinal operands and expressions using 32-bit two's complement integer arithmetic. By using this arithmetic, the assembler can evaluate expressions whose operands' sizes might extend beyond the storage type of the result. As long as the expression's value fits in the storage type of the destination, the assembler does not generate an error when intermediate results are too large. The assembler does generate an error if the final result is too large to fit in the destination. \subsection*{Variables} A variable defines a logical address for the storage of value(s). An assembler variable is not required to have a name as long as its associated value(s) are accessible. But, every variable has a type; records and structures have a compound type. Assembler variables must be defined with a storage allocation statement. A storage allocation specifies a type (storage size in bytes) and defines a logical address for a variable that gives access to the variable's value(s). A storage allocation statement may also specify initial value(s) for a variable. Use the DB, DW, DD, DP, DQ, or DT directive to allocate storage for simple-type variables of the following sizes: \begin{tabular}{l l} DB & 8-bits (byte)\\ DW & 16-bits (word)\\ DD & 32-bits (dword)\\ DP & 48-bits (pword)\\ DQ & 64-bits (qword)\\ DT & 80-bits (10 bytes)\\ \end{tabular} Use a DUP clause within any assembler data allocation statement to allocate and optionally initialize a sequence of storage units of a single variable type. DUP defines an array-like variable whose element values are accessed by an offset from the variable name or from the initially specified storage unit. \subsection*{Syntax} \begin{tabular}{p{2cm} p{12.5cm}} & \begin{verbatim}[name] dtyp init [,...]\end{verbatim}\\ Where: & \\ & \\ name & is the name of the variable. Within the module, it must be a unique identifier.\\ & \\ dtyp & is DB, DW, DD, DP, DQ, or DT.\\ & \\ init & is the initial value to be stored in the allocated space. 
init can be a numeric constant (expressed in binary, hexadecimal, decimal, or octal), an ASCII string, or the question mark character (?), which specifies storage with undefined value(s). dtyp restricts the values that may be specified for init.\\
\end{tabular}

\subsection*{Defining and Initializing Variables of a Simple Type}
All assembler variable definitions use the DB, DW, DD, DQ, DP, or DT directives. The template components of compound variable types are simple types defined with these directives.

\subsection*{DB Directive}

\subsection*{Syntax}
\begin{tabular}{p{2cm} p{12.5cm}}
 & \begin{verbatim} [name] DB init [,...] \end{verbatim} \\
Where: & \\
 & \\
name & is the name of the variable. Within the module, it must be a unique identifier. \\
 & \\
init & is a question mark (?), a constant expression, or a string of up to 255 ASCII characters enclosed in single quotes (').\\
\end{tabular}

\subsection*{Discussion}
DB reserves storage for and optionally initializes a variable of type BYTE. ? reserves storage with an undefined value.

Numeric initial values can be specified in binary, octal, decimal, or hexadecimal (see Table 4-2). The specified constant or constant expression must evaluate to a number in the range 0..255 (processor ordinal) or -128..127 (processor integer).

The components of character string values must be ASCII characters and the whole string must be enclosed in single quotes. To include a single quote character within such a string, specify two single quotes ('').

Each ASCII character requires a byte of storage. In BYTE strings, successive characters occupy successive bytes. The name of the variable represents the logical address of the first character in such a string.

\subsection*{Examples}
\begin{enumerate}
\item This example initializes the variable ABYTE to the constant value 100 (decimal). It reserves storage for another byte with an undefined value.
\begin{verbatim}
ABYTE   DB 100
        DB ?
\end{verbatim}
\item This example initializes three successive bytes to the values 4, 10, and 200, respectively.
\begin{verbatim}
BYTES3  DB 4,10,200
\end{verbatim}
\item This example initializes seven bytes containing the ASCII values of the characters A, B, C, ', D, E, and F, respectively.
\begin{verbatim}
STRGWQUOT  DB 'ABC''DEF'
\end{verbatim}
\end{enumerate}

\subsection*{DW Directive}

\subsection*{Syntax}
\begin{tabular}{p{2cm} p{12.5cm}}
 & \begin{verbatim} [name] DW init [,...] \end{verbatim} \\
Where: & \\
 & \\
name & is the name of the variable. Within the module, it must be a unique identifier. \\
 & \\
init & is a question mark (?), a constant expression, a variable, label, or segment name, or a string of up to 2 characters enclosed in single quotes (').\\
\end{tabular}

\subsection*{Discussion}
DW defines storage for and optionally initializes a 16-bit variable of type WORD. ? reserves storage with an undefined value.

Numeric initial values can be specified in binary, octal, decimal, or hexadecimal (see Table 4-2). The specified constant or constant expression must evaluate to a number in the range 0..65535 (processor ordinal) or -32768..32767 (processor integer).

A variable or label name yields an initial value that is the offset of the variable or label. It is an error to initialize a WORD variable with the name of a variable or label that has been defined in a USE32 segment; its offset is too large (32 bits). A segment name yields an initial value that is the segment selector.

A 1- or 2-character string yields an initial value that is interpreted and stored as a number.
The assembler stores a 2-byte value even if the specified string has only one character:
\begin{itemize}
\item It stores the specified initial value in the least significant byte.
\item It zeros the remaining byte.
\end{itemize}

\subsection*{Examples}
\begin{enumerate}
\item This example tells the assembler to reserve storage for two uninitialized words.
\begin{verbatim}
UNINIT  DW ?,?
\end{verbatim}
\item This example initializes WORD variables with numeric values.
\begin{verbatim}
CONST   DW 5000       ; decimal constant
HEXEXP  DW 0FFFH -10  ; expression
\end{verbatim}
\item This example initializes VAR1OFF to the offset of VAR1 (both variables are within a USE16 segment) and CODESEL to the selector of a segment named CODE.
\begin{verbatim}
VAR1OFF  DW VAR1
CODESEL  DW CODE
\end{verbatim}
\item This example initializes NUMB to the ASCII value (interpreted as a number) of the letters AB.
\begin{verbatim}
NUMB  DW 'AB'  ; equivalent to NUMB DW 4142H
\end{verbatim}
\end{enumerate}

\subsection*{DD Directive}

\subsection*{Syntax}
\begin{tabular}{p{2cm} p{12.5cm}}
 & \begin{verbatim} [name] DD init [,...] \end{verbatim} \\
Where: & \\
 & \\
name & is the name of the variable. Within the module, it must be a unique identifier. \\
 & \\
init & is a question mark (?), a constant expression, the name of a variable or label, or a string of up to 4 characters enclosed in single quotes (').\\
\end{tabular}

\subsection*{Discussion}
DD defines storage for and optionally initializes a 32-bit variable of type DWORD. ? reserves storage with an undefined value.

Integer initial values can be specified in binary, octal, decimal, or hexadecimal (see Table 4-2). The specified constant or constant expression must evaluate to a number in the range:

$-2^{31}$..$2^{31} - 1$ (processor integer or floating-point coprocessor short integer)

Or, 0..$2^{32} - 1$ (processor ordinal)

Real initial values can be specified in floating-point decimal or in hexadecimal (see Table 4-2). A decimal constant must evaluate to a real in the ranges:

-3.4E38..-1.2E-38, 0.0, 1.2E-38..3.4E38 (floating-point coprocessor single precision real)

A constant expressed as a hexadecimal real must be the exact sequence of hex digits to fill the internal floating-point coprocessor representation of a single precision real (8 hexadecimal digits or 9 hexadecimal digits, including an initial 0).

A USE16 variable or label name yields an initial value that fills the dword. Its high-order word contains the segment selector and its low-order word contains the offset of the USE16 variable or label. A USE32 variable or label name yields an initial value that is the offset (from the segment base) of the variable or label.

A string (up to four characters) yields an initial value that is interpreted and stored as a number. The assembler stores a 4-byte value even if the specified string has fewer than four characters:
\begin{itemize}
\item It stores the specified initial values in the least significant bytes.
\item It zeros the remaining bytes.
\end{itemize}

\subsection*{Examples}
\begin{enumerate}
\item This example defines two variables, a floating-point coprocessor short integer and a single precision real.
\begin{verbatim}
INTVAR   DD 1234567
REALVAR  DD 1.6E25
\end{verbatim}
\item In this example, LAB1 was defined in a USE16 segment and LAB2 was defined in a USE32 segment.
\begin{verbatim}
LAB1_ADD DD LAB1  ; LAB1_ADD contains offset and
                  ; segment selector of LAB1
LAB2_ADD DD LAB2  ; LAB2_ADD contains offset of LAB2
\end{verbatim}
\item This example initializes three unnamed dwords.
The first contains an undefined value. The second contains the ASCII numeric value of the letter A. The third contains the integer 450 (decimal). \begin{verbatim} DD ?, 'A', 450 \end{verbatim} \end{enumerate}
{ "alphanum_fraction": 0.7686290942, "avg_line_length": 55.93359375, "ext": "tex", "hexsha": "d8e53a9ae4fa4d9800f3333c38c1d25fc71f130c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2648bf9db3d746de4145699db1a1161182fd4be0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pclwillmott/asm286", "max_forks_repo_path": "doc/Chapter4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2648bf9db3d746de4145699db1a1161182fd4be0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pclwillmott/asm286", "max_issues_repo_path": "doc/Chapter4.tex", "max_line_length": 527, "max_stars_count": null, "max_stars_repo_head_hexsha": "2648bf9db3d746de4145699db1a1161182fd4be0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pclwillmott/asm286", "max_stars_repo_path": "doc/Chapter4.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3460, "size": 14319 }
% Created 2021-02-20 sam. 14:43
% Intended LaTeX compiler: pdflatex
\documentclass[a4paper, 10pt, DIV=12, parskip=full]{scrreprt}

\input{preamble.tex}
\addbibresource{ref.bib}
\author{Thomas Dehaeze}
\date{\today}
\title{Active Damping of Rotating Platforms using Integral Force Feedback - Matlab Computation}
\hypersetup{
 pdfauthor={Thomas Dehaeze},
 pdftitle={Active Damping of Rotating Platforms using Integral Force Feedback - Matlab Computation},
 pdfkeywords={},
 pdfsubject={},
 pdfcreator={Emacs 27.1 (Org mode 9.5)},
 pdflang={English}}

\begin{document}

\maketitle
\tableofcontents

This document gathers the Matlab code used for the conference paper \cite{dehaeze20_activ_dampin_rotat_platf_integ_force_feedb} and the journal paper \cite{dehaeze21_activ_dampin_rotat_platf_using}.

It is structured in several sections:
\begin{itemize}
\item Section \ref{sec:system_description}: presents a simple model of a rotating suspended platform that will be used throughout this study.
\item Section \ref{sec:iff_pure_int}: explains how the unconditional stability of IFF is lost due to gyroscopic effects induced by the rotation.
\item Section \ref{sec:iff_pseudo_int}: suggests a simple modification of the control law such that damping can be added to the suspension modes in a robust way.
\item Section \ref{sec:iff_parallel_stiffness}: proposes to add springs in parallel with the force sensors to regain the unconditional stability of IFF.
\item Section \ref{sec:comparison}: compares both proposed modifications to the classical IFF in terms of damping authority and closed-loop system behavior.
\item Section \ref{sec:notations}: contains the notations used for both the Matlab code and the paper.
\end{itemize}

The Matlab code is accessible on \href{https://zenodo.org/record/3894343}{Zenodo} and \href{https://github.com/tdehaeze/dehaeze20\_contr\_stewa\_platf}{Github} \cite{dehaeze20_activ_dampin_rotat_posit_platf}. It can also be downloaded as a \texttt{.zip} file \href{matlab.zip}{here}.

To run the Matlab code, go to the \texttt{matlab} directory and run the Matlab file corresponding to each section.

\begin{table}[htbp]
\caption{Paper's sections and corresponding Matlab files}
\centering
\begin{tabular}{ll}
Sections & Matlab File\\
\hline
Section \ref{sec:system_description} & \texttt{s1\_system\_description.m}\\
Section \ref{sec:iff_pure_int} & \texttt{s2\_iff\_pure\_int.m}\\
Section \ref{sec:iff_pseudo_int} & \texttt{s3\_iff\_hpf.m}\\
Section \ref{sec:iff_parallel_stiffness} & \texttt{s4\_iff\_kp.m}\\
Section \ref{sec:comparison} & \texttt{s5\_act\_damp\_comparison.m}\\
\end{tabular}
\end{table}

\chapter{System Description and Analysis}
\label{sec:org866deeb}
\label{sec:system_description}

\section{System description}
\label{sec:org597d9d5}
The system consists of a two degrees-of-freedom translation stage on top of a spindle (Figure \ref{fig:system}).

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs-paper/system.png}
\caption{\label{fig:system}Schematic of the studied system}
\end{figure}

The control inputs are the forces applied by the actuators of the translation stage (\(F_u\) and \(F_v\)). As the translation stage is rotating around the Z axis due to the spindle, the forces are applied along \(\vec{i}_u\) and \(\vec{i}_v\).
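Although the full derivation is not repeated in this document, it is useful to keep in mind the time-domain equations of motion from which the transfer functions of the next section follow. Assuming a payload of mass \(m\) suspended by the two actuators (stiffness \(k\), damping \(c\)) and writing the dynamics directly in the rotating frame \((\vec{i}_u, \vec{i}_v)\), the centrifugal and Coriolis terms give (this is a sketch consistent with the transfer functions below, not taken from the original scripts):
\begin{align}
  m \ddot{d}_u + c \dot{d}_u + (k - m \Omega^2) d_u &= F_u + 2 m \Omega \dot{d}_v \\
  m \ddot{d}_v + c \dot{d}_v + (k - m \Omega^2) d_v &= F_v - 2 m \Omega \dot{d}_u
\end{align}
The negative stiffness term \(- m \Omega^2\) and the gyroscopic coupling terms \(\pm 2 m \Omega \dot{d}_{u,v}\) are what produce the \(- \Omega^2/{\omega_0}^2\) and \(2 (\Omega/\omega_0)(s/\omega_0)\) terms in the transfer function matrix of the next section.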
\section{Equations}
\label{sec:orge2ee0c6}
Based on Figure \ref{fig:system}, the equations of motion are:
\begin{important}
\begin{equation}
\begin{bmatrix} d_u \\ d_v \end{bmatrix} = \bm{G}_d \begin{bmatrix} F_u \\ F_v \end{bmatrix}
\end{equation}

where \(\bm{G}_d\) is a \(2 \times 2\) transfer function matrix.
\begin{equation}
\bm{G}_d = \frac{1}{k} \frac{1}{G_{dp}} \begin{bmatrix} G_{dz} & G_{dc} \\ -G_{dc} & G_{dz} \end{bmatrix}
\end{equation}

With:
\begin{align}
G_{dp} &= \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2 \\
G_{dz} &= \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \\
G_{dc} &= 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0}
\end{align}
\end{important}

\section{Numerical Values}
\label{sec:orgafc7947}
Let's define initial values for the model.

\begin{minted}[]{matlab}
k = 1;    % Actuator Stiffness [N/m]
c = 0.05; % Actuator Damping [N/(m/s)]
m = 1;    % Payload mass [kg]
\end{minted}

\begin{minted}[]{matlab}
xi = c/(2*sqrt(k*m));
w0 = sqrt(k/m); % [rad/s]
\end{minted}

\section{Campbell Diagram}
\label{sec:org008e1a4}
The Campbell Diagram displays the evolution of the real and imaginary parts of the system poles as a function of the rotating speed.

It is shown in Figures \ref{fig:campbell_diagram_real} and \ref{fig:campbell_diagram_imag}, and one can see that the system becomes unstable for \(\Omega > \omega_0\) (the real part of one of the poles becomes positive).

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/campbell_diagram_real.png}
\caption{\label{fig:campbell_diagram_real}Campbell Diagram - Real Part}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/campbell_diagram_imag.png}
\caption{\label{fig:campbell_diagram_imag}Campbell Diagram - Imaginary Part}
\end{figure}

\section{Simscape Model}
\label{sec:org50f1e50}
In order to validate all the equations of motion, a Simscape model of the same system has been developed. The dynamics of the system can be identified from the Simscape model and compared with the analytical model.

The rotating speed for the Simscape Model is defined.
\begin{minted}[]{matlab}
W = 0.1; % Rotation Speed [rad/s]
\end{minted}

\begin{minted}[]{matlab}
open('rotating_frame.slx');
\end{minted}

The transfer function from \([F_u, F_v]\) to \([d_u, d_v]\) is identified from the Simscape model.

\begin{minted}[]{matlab}
%% Name of the Simulink File
mdl = 'rotating_frame';

%% Input/Output definition
clear io; io_i = 1;
io(io_i) = linio([mdl, '/K'], 1, 'openinput');  io_i = io_i + 1;
io(io_i) = linio([mdl, '/G'], 2, 'openoutput'); io_i = io_i + 1;
\end{minted}

\begin{minted}[]{matlab}
G = linearize(mdl, io, 0);

%% Input/Output definition
G.InputName  = {'Fu', 'Fv'};
G.OutputName = {'du', 'dv'};
\end{minted}

The same transfer function from \([F_u, F_v]\) to \([d_u, d_v]\) is written down from the analytical model.

\begin{minted}[]{matlab}
Gth = (1/k)/(((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))^2 + (2*W*s/(w0^2))^2) * ...
      [(s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2), 2*W*s/(w0^2) ; ...
       -2*W*s/(w0^2), (s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2)];
\end{minted}

Both transfer functions are compared in Figure \ref{fig:plant_simscape_analytical} and are found to perfectly match.
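As a quick numerical sanity check (a minimal sketch, not part of the original script), the mismatch between the two models can also be evaluated directly on a frequency grid. This assumes that \texttt{G}, \texttt{Gth} and the Laplace variable \texttt{s = tf('s')} are defined as above.

\begin{minted}[]{matlab}
%% Quick numerical check that the two models match (sketch)
% Assumes G (Simscape identification) and Gth (analytical model) are defined as above
f = logspace(-2, 1, 200);        % Frequency vector [rad/s]
err = abs(freqresp(G - Gth, f)); % 2x2xN array of mismatches
fprintf('Maximum mismatch over the grid: %.2e\n', max(err(:)));
\end{minted}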
\begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/plant_simscape_analytical.png} \caption{\label{fig:plant_simscape_analytical}Bode plot of the transfer function from \([F_u, F_v]\) to \([d_u, d_v]\) as identified from the Simscape model and from an analytical model} \end{figure} \section{Effect of the rotation speed} \label{sec:org6d643e3} The transfer functions from \([F_u, F_v]\) to \([d_u, d_v]\) are identified for the following rotating speeds. \begin{minted}[]{matlab} Ws = [0, 0.2, 0.7, 1.1]*w0; % Rotating Speeds [rad/s] \end{minted} \begin{minted}[]{matlab} Gs = {zeros(2, 2, length(Ws))}; for W_i = 1:length(Ws) W = Ws(W_i); Gs(:, :, W_i) = {(1/k)/(((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))^2 + (2*W*s/(w0^2))^2) * ... [(s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2), 2*W*s/(w0^2) ; ... -2*W*s/(w0^2), (s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2)]}; end \end{minted} They are compared in Figures \ref{fig:plant_compare_rotating_speed_direct} and \ref{fig:plant_compare_rotating_speed_coupling}. \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/plant_compare_rotating_speed_direct.png} \caption{\label{fig:plant_compare_rotating_speed_direct}Comparison of the transfer functions from \([F_u, F_v]\) to \([d_u, d_v]\) for several rotating speed - Direct Terms} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/plant_compare_rotating_speed_coupling.png} \caption{\label{fig:plant_compare_rotating_speed_coupling}Comparison of the transfer functions from \([F_u, F_v]\) to \([d_u, d_v]\) for several rotating speed - Coupling Terms} \end{figure} \chapter{Problem with pure Integral Force Feedback} \label{sec:org02f3cde} \label{sec:iff_pure_int} Force sensors are added in series with the two actuators (Figure \ref{fig:system_iff}). Two identical controllers \(K_F\) are used to feedback each of the sensed force to its associated actuator. \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs-paper/system_iff.png} \caption{\label{fig:system_iff}System with added Force Sensor in series with the actuators} \end{figure} \section{Plant Parameters} \label{sec:orgd2d5c32} Let's define initial values for the model. 
\begin{minted}[]{matlab} k = 1; % Actuator Stiffness [N/m] c = 0.05; % Actuator Damping [N/(m/s)] m = 1; % Payload mass [kg] \end{minted} \begin{minted}[]{matlab} xi = c/(2*sqrt(k*m)); w0 = sqrt(k/m); % [rad/s] \end{minted} \section{Equations} \label{sec:orgad8546b} The sensed forces are equal to: \begin{equation} \begin{bmatrix} f_{u} \\ f_{v} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} F_u \\ F_v \end{bmatrix} - (c s + k) \begin{bmatrix} d_u \\ d_v \end{bmatrix} \end{equation} Which then gives: \begin{important} \begin{equation} \begin{bmatrix} f_{u} \\ f_{v} \end{bmatrix} = \bm{G}_{f} \begin{bmatrix} F_u \\ F_v \end{bmatrix} \end{equation} \begin{equation} \begin{bmatrix} f_{u} \\ f_{v} \end{bmatrix} = \frac{1}{G_{fp}} \begin{bmatrix} G_{fz} & -G_{fc} \\ G_{fc} & G_{fz} \end{bmatrix} \begin{bmatrix} F_u \\ F_v \end{bmatrix} \end{equation} \begin{align} G_{fp} &= \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2 \\ G_{fz} &= \left( \frac{s^2}{{\omega_0}^2} - \frac{\Omega^2}{{\omega_0}^2} \right) \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right) + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2 \\ G_{fc} &= \left( 2 \xi \frac{s}{\omega_0} + 1 \right) \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right) \end{align} \end{important} \section{Comparison of the Analytical Model and the Simscape Model} \label{sec:org5d37f2f} The rotation speed is set to \(\Omega = 0.1 \omega_0\). \begin{minted}[]{matlab} W = 0.1*w0; % [rad/s] \end{minted} \begin{minted}[]{matlab} open('rotating_frame.slx'); \end{minted} And the transfer function from \([F_u, F_v]\) to \([f_u, f_v]\) is identified using the Simscape model. \begin{minted}[]{matlab} %% Name of the Simulink File mdl = 'rotating_frame'; %% Input/Output definition clear io; io_i = 1; io(io_i) = linio([mdl, '/K'], 1, 'openinput'); io_i = io_i + 1; io(io_i) = linio([mdl, '/G'], 1, 'openoutput'); io_i = io_i + 1; \end{minted} \begin{minted}[]{matlab} Giff = linearize(mdl, io, 0); %% Input/Output definition Giff.InputName = {'Fu', 'Fv'}; Giff.OutputName = {'fu', 'fv'}; \end{minted} The same transfer function from \([F_u, F_v]\) to \([f_u, f_v]\) is written down from the analytical model. \begin{minted}[]{matlab} Giff_th = 1/(((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))^2 + (2*W*s/(w0^2))^2) * ... [(s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2)) + (2*W*s/(w0^2))^2, - (2*xi*s/w0 + 1)*2*W*s/(w0^2) ; ... (2*xi*s/w0 + 1)*2*W*s/(w0^2), (s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))+ (2*W*s/(w0^2))^2]; \end{minted} The two are compared in Figure \ref{fig:plant_iff_comp_simscape_analytical} and found to perfectly match. \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/plant_iff_comp_simscape_analytical.png} \caption{\label{fig:plant_iff_comp_simscape_analytical}Comparison of the transfer functions from \([F_u, F_v]\) to \([f_u, f_v]\) between the Simscape model and the analytical one} \end{figure} \section{Effect of the rotation speed} \label{sec:org17fee48} The transfer functions from \([F_u, F_v]\) to \([f_u, f_v]\) are identified for the following rotating speeds. 
\begin{minted}[]{matlab}
Ws = [0, 0.2, 0.7]*w0; % Rotating Speeds [rad/s]
\end{minted}

\begin{minted}[]{matlab}
Gsiff = {zeros(2, 2, length(Ws))};

for W_i = 1:length(Ws)
    W = Ws(W_i);

    Gsiff(:, :, W_i) = {1/(((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))^2 + (2*W*s/(w0^2))^2) * ...
        [(s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2)) + (2*W*s/(w0^2))^2, - (2*xi*s/w0 + 1)*2*W*s/(w0^2) ; ...
         (2*xi*s/w0 + 1)*2*W*s/(w0^2), (s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))+ (2*W*s/(w0^2))^2]};
end
\end{minted}

The obtained transfer functions are shown in Figure \ref{fig:plant_iff_compare_rotating_speed}.

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/plant_iff_compare_rotating_speed.png}
\caption{\label{fig:plant_iff_compare_rotating_speed}Comparison of the transfer functions from \([F_u, F_v]\) to \([f_u, f_v]\) for several rotating speeds}
\end{figure}

\section{Decentralized Integral Force Feedback}
\label{sec:org2d5427a}
The decentralized IFF controller consists of pure integrators:
\begin{equation}
\bm{K}_{\text{IFF}}(s) = \frac{g}{s} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\end{equation}

The Root Locus (evolution of the poles of the closed-loop system in the complex plane as a function of \(g\)) is shown in Figure \ref{fig:root_locus_pure_iff}. It is shown that for a non-zero rotating speed, one pole is bound to stay in the right-half plane, and thus the closed-loop system is unstable.

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_pure_iff.png}
\caption{\label{fig:root_locus_pure_iff}Root Locus for the Decentralized Integral Force Feedback controller. Several rotating speeds are shown.}
\end{figure}

\chapter{Integral Force Feedback with a High Pass Filter}
\label{sec:orgf9854e9}
\label{sec:iff_pseudo_int}

\section{Plant Parameters}
\label{sec:org11b9a15}
Let's define initial values for the model.

\begin{minted}[]{matlab}
k = 1;    % Actuator Stiffness [N/m]
c = 0.05; % Actuator Damping [N/(m/s)]
m = 1;    % Payload mass [kg]
\end{minted}

\begin{minted}[]{matlab}
xi = c/(2*sqrt(k*m));
w0 = sqrt(k/m); % [rad/s]
\end{minted}

\section{Modified Integral Force Feedback Controller}
\label{sec:orgf920a8a}
Let's modify the initial Integral Force Feedback Controller; instead of using pure integrators, pseudo integrators (i.e., low-pass filters) are used:
\begin{equation}
K_{\text{IFF}}(s) = g\frac{1}{\omega_i + s} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\end{equation}
where \(\omega_i\) characterizes down to which frequency the signal is integrated.

Let's arbitrarily choose the following control parameters:
\begin{minted}[]{matlab}
g = 2;
wi = 0.1*w0;
\end{minted}

And the following rotating speed.
\begin{minted}[]{matlab}
Giff = 1/(((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))^2 + (2*W*s/(w0^2))^2) * ...
       [(s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2)) + (2*W*s/(w0^2))^2, - (2*xi*s/w0 + 1)*2*W*s/(w0^2) ; ...
        (2*xi*s/w0 + 1)*2*W*s/(w0^2), (s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))+ (2*W*s/(w0^2))^2];
\end{minted}

The obtained Loop Gain is shown in Figure \ref{fig:loop_gain_modified_iff}.

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/loop_gain_modified_iff.png}
\caption{\label{fig:loop_gain_modified_iff}Loop Gain for the modified IFF controller}
\end{figure}

\section{Root Locus}
\label{sec:org911f45b}
As shown in the Root Locus plot (Figure \ref{fig:root_locus_modified_iff}), for some values of the gain, the system remains stable.
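This can be checked numerically by computing the closed-loop poles for a few gains. The following is a minimal sketch (not part of the original script): it assumes that \texttt{Giff}, \texttt{wi} and \texttt{s = tf('s')} are defined as above, and it uses a standard negative feedback convention for the loop.

\begin{minted}[]{matlab}
%% Closed-loop poles of the modified IFF loop for a few gains (sketch)
% Assumes Giff, wi and s = tf('s') are defined as above
Kiff = 1/(wi + s)*eye(2); % Pseudo-integrator with unit gain
for g = [0.1, 1, 2, 5]
    p = pole(feedback(g*Giff*Kiff, eye(2))); % Closed-loop poles
    fprintf('g = %4.1f: max(real(p)) = %8.4f\n', g, max(real(p)));
end
\end{minted}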
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_modified_iff.png}
\caption{\label{fig:root_locus_modified_iff}Root Locus for the modified IFF controller}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_modified_iff_zoom.png}
\caption{\label{fig:root_locus_modified_iff_zoom}Root Locus for the modified IFF controller - Zoom}
\end{figure}

\section{What is the optimal \(\omega_i\) and \(g\)?}
\label{sec:org028e747}
In order to visualize the effect of \(\omega_i\) on the attainable damping, the Root Locus is displayed in Figure \ref{fig:root_locus_wi_modified_iff} for the following \(\omega_i\):
\begin{minted}[]{matlab}
wis = [0.01, 0.1, 0.5, 1]*w0; % [rad/s]
\end{minted}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_wi_modified_iff.png}
\caption{\label{fig:root_locus_wi_modified_iff}Root Locus for the modified IFF controller for several \(\omega_i\)}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_wi_modified_iff_zoom.png}
\caption{\label{fig:root_locus_wi_modified_iff_zoom}Root Locus for the modified IFF controller for several \(\omega_i\) - Zoom}
\end{figure}

For the controller
\begin{equation}
K_{\text{IFF}}(s) = g\frac{1}{\omega_i + s} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\end{equation}
the gain at which the system becomes unstable is
\begin{equation}
g_\text{max} = \omega_i \left( \frac{{\omega_0}^2}{\Omega^2} - 1 \right) \label{eq:iff_gmax}
\end{equation}

While it seems that small \(\omega_i\) do allow more damping to be added to the system (Figure \ref{fig:root_locus_wi_modified_iff}), the control gains may be limited to small values due to \eqref{eq:iff_gmax}, thus reducing the attainable damping.

There must be an optimum for \(\omega_i\). To find the optimum, the gain that maximizes the simultaneous damping of the modes is identified for a wide range of \(\omega_i\) (Figure \ref{fig:mod_iff_damping_wi}).

\begin{minted}[]{matlab}
wis = logspace(-2, 1, 100)*w0; % [rad/s]

opt_xi   = zeros(1, length(wis)); % Optimal simultaneous damping
opt_gain = zeros(1, length(wis)); % Corresponding optimal gain

for wi_i = 1:length(wis)
    wi = wis(wi_i);
    Kiff = 1/(s + wi)*eye(2);

    fun = @(g)computeSimultaneousDamping(g, Giff, Kiff);

    [g_opt, xi_opt] = fminsearch(fun, 0.5*wi*((w0/W)^2 - 1));
    opt_xi(wi_i)   = 1/xi_opt;
    opt_gain(wi_i) = g_opt;
end
\end{minted}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/mod_iff_damping_wi.png}
\caption{\label{fig:mod_iff_damping_wi}Simultaneous attainable damping of the closed loop poles as a function of \(\omega_i\)}
\end{figure}

\chapter{IFF with a stiffness in parallel with the force sensor}
\label{sec:orgc5ba04a}
\label{sec:iff_parallel_stiffness}

\section{Schematic}
\label{sec:orgc647af5}
In this section, additional springs in parallel with the force sensors are added to counteract the negative stiffness induced by the rotation.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs-paper/system_parallel_springs.png}
\caption{\label{fig:system_parallel_springs}Studied system with additional springs in parallel with the actuators and force sensors}
\end{figure}

In order to keep the overall stiffness \(k = k_a + k_p\) constant, a scalar parameter \(\alpha\) (\(0 \le \alpha < 1\)) is defined to describe the fraction of the total stiffness in parallel with the actuator and force sensor:
\begin{equation}
k_p = \alpha k, \quad k_a = (1 - \alpha) k
\end{equation}

\section{Equations}
\label{sec:org0de17db}
\begin{important}
\begin{equation}
\begin{bmatrix} f_u \\ f_v \end{bmatrix} = \bm{G}_k \begin{bmatrix} F_u \\ F_v \end{bmatrix}
\end{equation}

\begin{equation}
\begin{bmatrix} f_u \\ f_v \end{bmatrix} = \frac{1}{G_{kp}} \begin{bmatrix} G_{kz} & -G_{kc} \\ G_{kc} & G_{kz} \end{bmatrix} \begin{bmatrix} F_u \\ F_v \end{bmatrix}
\end{equation}

With:
\begin{align}
G_{kp} &= \left( \frac{s^2}{{\omega_0}^2} + 2\xi \frac{s}{\omega_0} + 1 - \frac{\Omega^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0}\frac{s}{\omega_0} \right)^2 \\
G_{kz} &= \left( \frac{s^2}{{\omega_0}^2} - \frac{\Omega^2}{{\omega_0}^2} + \alpha \right) \left( \frac{s^2}{{\omega_0}^2} + 2\xi \frac{s}{\omega_0} + 1 - \frac{\Omega^2}{{\omega_0}^2} \right) + \left( 2 \frac{\Omega}{\omega_0}\frac{s}{\omega_0} \right)^2 \\
G_{kc} &= \left( 2 \xi \frac{s}{\omega_0} + 1 - \alpha \right) \left( 2 \frac{\Omega}{\omega_0}\frac{s}{\omega_0} \right)
\end{align}
\end{important}

If we compare \(G_{kz}\) and \(G_{fz}\), we see that the spring in parallel adds a term \(\alpha\). In order to have two complex conjugate zeros (instead of real zeros):
\begin{equation}
\alpha > \frac{\Omega^2}{{\omega_0}^2} \quad \Leftrightarrow \quad k_p > m \Omega^2
\end{equation}

\section{Plant Parameters}
\label{sec:orga1e1958}
Let's define initial values for the model.

\begin{minted}[]{matlab}
k = 1;    % Actuator Stiffness [N/m]
c = 0.05; % Actuator Damping [N/(m/s)]
m = 1;    % Payload mass [kg]
\end{minted}

\begin{minted}[]{matlab}
xi = c/(2*sqrt(k*m));
w0 = sqrt(k/m); % [rad/s]
\end{minted}

\section{Comparison of the Analytical Model and the Simscape Model}
\label{sec:orgbea84d1}
The transfer function from \([F_u, F_v]\) to \([f_u, f_v]\) is identified from the Simscape model and compared with the analytical model derived above.

\begin{minted}[]{matlab}
W = 0.1*w0; % [rad/s]

kp = 1.5*m*W^2;
cp = 0;
\end{minted}

\begin{minted}[]{matlab}
open('rotating_frame.slx');
\end{minted}

\begin{minted}[]{matlab}
%% Name of the Simulink File
mdl = 'rotating_frame';

%% Input/Output definition
clear io; io_i = 1;
io(io_i) = linio([mdl, '/K'], 1, 'openinput');  io_i = io_i + 1;
io(io_i) = linio([mdl, '/G'], 1, 'openoutput'); io_i = io_i + 1;

Giff = linearize(mdl, io, 0);

%% Input/Output definition
Giff.InputName  = {'Fu', 'Fv'};
Giff.OutputName = {'fu', 'fv'};
\end{minted}

\begin{minted}[]{matlab}
w0p = sqrt((k + kp)/m);
xip = c/(2*sqrt((k+kp)*m));

Giff_th = 1/( (s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2)^2 + (2*(s/w0p)*(W/w0p))^2 ) * [ ...
    (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2, -(2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p));
    (2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)), (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2 ];

Giff_th.InputName  = {'Fu', 'Fv'};
Giff_th.OutputName = {'fu', 'fv'};
\end{minted}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/plant_iff_kp_comp_simscape_analytical.png}
\caption{\label{fig:plant_iff_kp_comp_simscape_analytical}Comparison of the transfer functions from \([F_u, F_v]\) to \([f_u, f_v]\) between the Simscape model and the analytical one}
\end{figure}

\section{Effect of the parallel stiffness on the IFF plant}
\label{sec:orge9068cb}
The rotation speed is set to \(\Omega = 0.1 \omega_0\).
\begin{minted}[]{matlab}
W = 0.1*w0; % [rad/s]
\end{minted}

And the IFF plant (transfer function from \([F_u, F_v]\) to \([f_u, f_v]\)) is identified in three different cases:
\begin{itemize}
\item without parallel stiffness
\item with a small parallel stiffness \(k_p < m \Omega^2\)
\item with a large parallel stiffness \(k_p > m \Omega^2\)
\end{itemize}

The results are shown in Figure \ref{fig:plant_iff_kp}. One can see that for \(k_p > m \Omega^2\), the system shows alternating complex conjugate poles and zeros.

\begin{minted}[]{matlab}
kp = 0;

w0p = sqrt((k + kp)/m);
xip = c/(2*sqrt((k+kp)*m));

Giff = 1/( (s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2)^2 + (2*(s/w0p)*(W/w0p))^2 ) * [ ...
    (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2, -(2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p));
    (2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)), (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2];
\end{minted}

\begin{minted}[]{matlab}
kp = 0.5*m*W^2;
k = 1 - kp;

w0p = sqrt((k + kp)/m);
xip = c/(2*sqrt((k+kp)*m));

Giff_s = 1/( (s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2)^2 + (2*(s/w0p)*(W/w0p))^2 ) * [ ...
    (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2, -(2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p));
    (2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)), (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2];
\end{minted}

\begin{minted}[]{matlab}
kp = 1.5*m*W^2;
k = 1 - kp;

w0p = sqrt((k + kp)/m);
xip = c/(2*sqrt((k+kp)*m));

Giff_l = 1/( (s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2)^2 + (2*(s/w0p)*(W/w0p))^2 ) * [ ...
    (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2, -(2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p));
    (2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)), (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2];
\end{minted}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/plant_iff_kp.png}
\caption{\label{fig:plant_iff_kp}Transfer function from \([F_u, F_v]\) to \([f_u, f_v]\) for \(k_p = 0\), \(k_p < m \Omega^2\) and \(k_p > m \Omega^2\)}
\end{figure}

\section{IFF when adding a spring in parallel}
\label{sec:org9f1e3df}
In Figure \ref{fig:root_locus_iff_kp} is displayed the Root Locus in the three considered cases with
\begin{equation}
K_{\text{IFF}} = \frac{g}{s} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\end{equation}

One can see that for \(k_p > m \Omega^2\), the root locus stays in the left half of the complex plane and thus the control system is unconditionally stable.

Thus, a decentralized IFF controller with pure integrators can be used if:
\begin{equation}
k_{p} > m \Omega^2
\end{equation}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_iff_kp.png}
\caption{\label{fig:root_locus_iff_kp}Root Locus}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_iff_kp_zoom.png}
\caption{\label{fig:root_locus_iff_kp_zoom}Root Locus}
\end{figure}

\section{Effect of \(k_p\) on the attainable damping}
\label{sec:orgcb7905c}
However, having large values of \(k_p\) may decrease the attainable damping. To study this effect, Root Locus plots for the following values of \(k_p\) are shown in Figure \ref{fig:root_locus_iff_kps}.
\begin{minted}[]{matlab}
kps = [2, 20, 40]*m*W^2;
\end{minted}

It is shown that large values of \(k_p\) decrease the attainable damping.

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_iff_kps.png}
\caption{\label{fig:root_locus_iff_kps}Root Locus plot}
\end{figure}

The attainable simultaneous damping and the corresponding optimal gain are now computed for a wide range of stiffness ratios \(\alpha\) (Figure \ref{fig:opt_damp_alpha}).

\begin{minted}[]{matlab}
alphas = logspace(-2, 0, 100);

opt_xi   = zeros(1, length(alphas)); % Optimal simultaneous damping
opt_gain = zeros(1, length(alphas)); % Corresponding optimal gain

Kiff = 1/s*eye(2);

for alpha_i = 1:length(alphas)
    kp = alphas(alpha_i);
    k  = 1 - alphas(alpha_i);

    w0p = sqrt((k + kp)/m);
    xip = c/(2*sqrt((k+kp)*m));

    Giff = 1/( (s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2)^2 + (2*(s/w0p)*(W/w0p))^2 ) * [ ...
        (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2, -(2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p));
        (2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)), (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2];

    fun = @(g)computeSimultaneousDamping(g, Giff, Kiff);

    [g_opt, xi_opt] = fminsearch(fun, 2);
    opt_xi(alpha_i)   = 1/xi_opt;
    opt_gain(alpha_i) = g_opt;
end
\end{minted}

\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/opt_damp_alpha.png}
\caption{\label{fig:opt_damp_alpha}Attainable damping ratio and corresponding controller gain for different parameter \(\alpha\)}
\end{figure}

\chapter{Comparison}
\label{sec:org4714bd6}
\label{sec:comparison}
Two modifications to adapt the IFF control strategy to rotating platforms have been proposed. These two methods are now compared in terms of added damping, closed-loop compliance and transmissibility.

\section{Plant Parameters}
\label{sec:org90a54af}
Let's define initial values for the model.
\begin{minted}[]{matlab} k = 1; % Actuator Stiffness [N/m] c = 0.05; % Actuator Damping [N/(m/s)] m = 1; % Payload mass [kg] \end{minted} \begin{minted}[]{matlab} xi = c/(2*sqrt(k*m)); w0 = sqrt(k/m); % [rad/s] \end{minted} The rotation speed is set to \(\Omega = 0.1 \omega_0\). \begin{minted}[]{matlab} W = 0.1*w0; \end{minted} \section{Root Locus} \label{sec:orgf922463} IFF with High Pass Filter \begin{minted}[]{matlab} wi = 0.1*w0; % [rad/s] Giff = 1/(((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))^2 + (2*W*s/(w0^2))^2) * ... [(s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2)) + (2*W*s/(w0^2))^2, - (2*xi*s/w0 + 1)*2*W*s/(w0^2) ; ... (2*xi*s/w0 + 1)*2*W*s/(w0^2), (s^2/w0^2 - W^2/w0^2)*((s^2)/(w0^2) + 2*xi*s/w0 + 1 - (W^2)/(w0^2))+ (2*W*s/(w0^2))^2]; \end{minted} IFF with Parallel Stiffness \begin{minted}[]{matlab} kp = 5*m*W^2; k = k - kp; w0p = sqrt((k + kp)/m); xip = c/(2*sqrt((k+kp)*m)); Giff_kp = 1/( (s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2)^2 + (2*(s/w0p)*(W/w0p))^2 ) * [ ... (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2, -(2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)); (2*xip*s/w0p + k/(k + kp))*(2*(s/w0p)*(W/w0p)), (s^2/w0p^2 + kp/(k + kp) - W^2/w0p^2)*(s^2/w0p^2 + 2*xip*s/w0p + 1 - W^2/w0p^2) + (2*(s/w0p)*(W/w0p))^2 ]; k = k + kp; \end{minted} \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/comp_root_locus.png} \caption{\label{fig:comp_root_locus}Root Locus plot - Comparison of IFF with additional high pass filter and IFF with additional parallel stiffness} \end{figure} \section{Controllers - Optimal Gains} \label{sec:org1889cfc} In order to compare the two considered Active Damping techniques, gains that yield maximum damping of all the modes are computed for each case. The obtained damping ratios and control gains are shown below. 
\begin{center} \begin{tabular}{lrr} & Obtained \(\xi\) & Control Gain\\ \hline Modified IFF & 0.83 & 1.99\\ IFF with \(k_p\) & 0.83 & 2.02\\ \end{tabular} \end{center} \section{Passive Damping - Critical Damping} \label{sec:org81b0306} \begin{equation} \xi = \frac{c}{2 \sqrt{km}} \end{equation} Critical damping corresponds to \(\xi = 1\), and thus: \begin{equation} c_{\text{crit}} = 2 \sqrt{km} \end{equation} \begin{minted}[]{matlab} c_opt = 2*sqrt(k*m); \end{minted} \section{Transmissibility And Compliance} \label{sec:orge56633c} \label{sec:comp_transmissibilty} \begin{minted}[]{matlab} open('rotating_frame.slx'); \end{minted} \begin{minted}[]{matlab} %% Name of the Simulink File mdl = 'rotating_frame'; %% Input/Output definition clear io; io_i = 1; io(io_i) = linio([mdl, '/dw'], 1, 'input'); io_i = io_i + 1; io(io_i) = linio([mdl, '/fd'], 1, 'input'); io_i = io_i + 1; io(io_i) = linio([mdl, '/Meas'], 1, 'output'); io_i = io_i + 1; \end{minted} \begin{minted}[]{matlab} G_ol = linearize(mdl, io, 0); %% Input/Output definition G_ol.InputName = {'Dwx', 'Dwy', 'Fdx', 'Fdy'}; G_ol.OutputName = {'Dx', 'Dy'}; \end{minted} \subsection{Passive Damping} \label{sec:orgc0de759} \begin{minted}[]{matlab} kp = 0; cp = 0; \end{minted} \begin{minted}[]{matlab} c_old = c; c = c_opt; \end{minted} \begin{minted}[]{matlab} G_pas = linearize(mdl, io, 0); %% Input/Output definition G_pas.InputName = {'Dwx', 'Dwy', 'Fdx', 'Fdy'}; G_pas.OutputName = {'Dx', 'Dy'}; \end{minted} \begin{minted}[]{matlab} c = c_old; \end{minted} \begin{minted}[]{matlab} Kiff = opt_gain_iff/(wi + s)*tf(eye(2)); \end{minted} \begin{minted}[]{matlab} G_iff = linearize(mdl, io, 0); %% Input/Output definition G_iff.InputName = {'Dwx', 'Dwy', 'Fdx', 'Fdy'}; G_iff.OutputName = {'Dx', 'Dy'}; \end{minted} \begin{minted}[]{matlab} kp = 5*m*W^2; cp = 0.01; \end{minted} \begin{minted}[]{matlab} Kiff = opt_gain_kp/s*tf(eye(2)); \end{minted} \begin{minted}[]{matlab} G_kp = linearize(mdl, io, 0); %% Input/Output definition G_kp.InputName = {'Dwx', 'Dwy', 'Fdx', 'Fdy'}; G_kp.OutputName = {'Dx', 'Dy'}; \end{minted} \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/comp_transmissibility.png} \caption{\label{fig:comp_transmissibility}Comparison of the transmissibility} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/comp_compliance.png} \caption{\label{fig:comp_compliance}Comparison of the obtained Compliance} \end{figure} \chapter{Notations} \label{sec:org61801fb} \label{sec:notations} \begin{center} \begin{tabular}{llll} & Mathematical Notation & Matlab & Unit\\ \hline Actuator Stiffness & \(k\) & \texttt{k} & N/m\\ Actuator Damping & \(c\) & \texttt{c} & N/(m/s)\\ Payload Mass & \(m\) & \texttt{m} & kg\\ Damping Ratio & \(\xi = \frac{c}{2\sqrt{km}}\) & \texttt{xi} & \\ Actuator Force & \(\bm{F}, F_u, F_v\) & \texttt{F} \texttt{Fu} \texttt{Fv} & N\\ Force Sensor signal & \(\bm{f}, f_u, f_v\) & \texttt{f} \texttt{fu} \texttt{fv} & N\\ Relative Displacement & \(\bm{d}, d_u, d_v\) & \texttt{d} \texttt{du} \texttt{dv} & m\\ Resonance freq. 
when \(\Omega = 0\) & \(\omega_0\) & \texttt{w0} & rad/s\\ Rotation Speed & \(\Omega = \dot{\theta}\) & \texttt{W} & rad/s\\ Low Pass Filter corner frequency & \(\omega_i\) & \texttt{wi} & rad/s\\ \end{tabular} \end{center} \begin{center} \begin{tabular}{llll} & Mathematical Notation & Matlab & Unit\\ \hline Laplace variable & \(s\) & \texttt{s} & \\ Complex number & \(j\) & \texttt{j} & \\ Frequency & \(\omega\) & \texttt{w} & [rad/s]\\ \end{tabular} \end{center} \printbibliography \bibliography{ref} \end{document}
{ "alphanum_fraction": 0.658254469, "avg_line_length": 36, "ext": "tex", "hexsha": "1db34e2d98a998851293947d196de1e999fb9d1e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "301c13d132319389384d1b2783da7765037a0199", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tdehaeze/dehaeze20_contr_stewa_platf", "max_forks_repo_path": "matlab/index.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "301c13d132319389384d1b2783da7765037a0199", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tdehaeze/dehaeze20_contr_stewa_platf", "max_issues_repo_path": "matlab/index.tex", "max_line_length": 281, "max_stars_count": null, "max_stars_repo_head_hexsha": "301c13d132319389384d1b2783da7765037a0199", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tdehaeze/dehaeze20_contr_stewa_platf", "max_stars_repo_path": "matlab/index.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13028, "size": 34236 }
\documentclass{article} \usepackage[left=1in,right=1in,top=1in,bottom=1in]{geometry} \usepackage{amsmath, amsthm, verbatim, enumerate, xcolor, mathpartir} \input{macros} \title{IIT CS595: Topics \& Applications in Programming Languages\\ {\large Homework 0: Rule induction, expressions, type safety}} \author{Dustin Van Tate Testa} \begin{document} \maketitle \section{Logistics} \paragraph{Writing up Solutions.} If you're looking at this file, you're probably (at least thinking about) writing up your solutions in LaTeX. Nice! If you need help, see the writeup PDF for a link to resources and other options. \paragraph{Collaboration.} Please see the collaboration policy on the course website. \paragraph{Submission.} Submit your answers as a single .pdf or .doc file (see above) on Blackboard under the correct assignment. \section{Evaluation} \begin{task} Evaluate the following expression using the dynamics semantics from class, showing each step: (Put your answers in the ... and copy and paste the line to add more lines as necessary). \[ \begin{array}{l l} & \kwlet{x}{\kwn{2}}{x+(\kwlet{x}{\kwn{1}}{\kwlet{y}{x+\kwn{1}}{x+y}})}\\ \step & \kwn{2}+(\kwlet{x}{\kwn{1}}{\kwlet{y}{x+\kwn{1}}{x+y}})\\ \step & \kwn{2}+(\kwlet{y}{\kwn{1}+\kwn{1}}{\kwn{1}+y}\\ \step & \kwn{2}+(\kwlet{y}{\kwn{2}}{\kwn{1}+y}\\ \step & \kwn{2}+(\kwn{1}+\kwn{2})\\ \step & \kwn{2}+\kwn{3}\\ \step & \kwn{5}\\ \end{array} \] \end{task} \section{Red-Black Trees} In data structures, a red-black tree is a binary tree whose nodes are colored either red or black. % Red-black trees must obey the following rules: \begin{itemize} \item Leaf nodes must be black. \item No red node has red children. \item Any path from the root to a leaf contains the same number of black nodes. \end{itemize} We can express red-black trees using the following small language (note that this syntax doesn't enforce the rules above; we'll do that later). \[ \begin{array}{r l l l} Trees & \tree & \bnfdef & \leaf \bnfalt \bnode{\tree}{\tree} \bnfalt \rnode{\tree}{\tree}\\ Colors & \colr & \bnfdef & \bc \bnfalt \rc\\ \end{array} \] where~$\leaf$ is a leaf and~$\bnode{\tree_1}{\tree_2}$ and~$\rnode{\tree_1}{\tree_2}$ represent black and red nodes, respectively, with children~$\tree_1$ and~$\tree_2$. We will define a judgment~$\tree\istree{\colr}{n}$, meaning that~$\tree$ is a valid red-black tree whose root node has the color~$\colr$ (which is either~$\bc$ or~$\rc$) and all of whose paths from root to leaf have exactly~$n$ black nodes. { \centering \def \MathparLineskip {\lineskip=0.43cm} \begin{mathpar} \Rule{RBT-1} {\strut} {\leaf \istree{\bc}{1}} \and \Rule{RBT-2} {\tree_1 \istree{\colr_1}{n}\\ \tree_2 \istree{\colr_2}{n}} {\bnode{\tree_1}{\tree_2} \istree{\bc}{n+1}} \and \Rule{RBT-3} {\tree_1 \istree{\bc}{n}\\ \tree_2 \istree{\bc}{n}} {\rnode{\tree_1}{\tree_2} \istree{\rc}{n}} \end{mathpar} } Rule~(RBT-1) says that leaves are valid trees as long as they are black: a leaf by definition has~1 black node on any path from root to leaf. % Rule~(RBT-2) allows a black node with two children as long as both children are valid red-black trees with the same number of black nodes on any path (regardless of what color their roots are); the result is a tree with a black root and~$n+1$ black nodes on any path. % Rule~(RBT-3) enforces that red nodes do not have red children by requiring in the premises that both children have black roots. 
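As an illustrative aside (not part of the assignment), the judgment~$\tree\istree{\colr}{n}$ can also be read as a recursive check on trees. The short Python sketch below mirrors rules~(RBT-1)--(RBT-3); the tree encoding is made up for illustration only.

\begin{verbatim}
# Illustrative sketch only: a direct reading of the judgment t |- (c, n).
# A tree is the string "leaf" or a tuple (color, left, right), color "B"/"R".
def check(tree):
    """Return (color, n) if tree is a valid red-black tree, else None."""
    if tree == "leaf":
        return ("B", 1)            # (RBT-1): leaves are black, one black node per path
    color, left, right = tree
    lres, rres = check(left), check(right)
    if lres is None or rres is None:
        return None
    (lc, ln), (rc, rn) = lres, rres
    if ln != rn:                   # both premises require the same n
        return None
    if color == "B":
        return ("B", ln + 1)       # (RBT-2): children may have either color
    if color == "R" and lc == "B" and rc == "B":
        return ("R", ln)           # (RBT-3): a red node needs black children
    return None

print(check(("B", ("R", "leaf", "leaf"), "leaf")))  # ("B", 2): valid
print(check(("R", ("R", "leaf", "leaf"), "leaf")))  # None: red node with a red child
\end{verbatim}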
Define~$\nodes{\tree}$ to be the number of nodes, including leaves, in a tree: \[ \begin{array}{l l l} \nodes{\leaf} & = & 1\\ \nodes{\bnode{\tree_1}{\tree_2}} & = & 1 + \nodes{\tree_1} + \nodes{\tree_2}\\ \nodes{\rnode{\tree_1}{\tree_2}} & = & 1 + \nodes{\tree_1} + \nodes{\tree_2} \end{array} \] \begin{task} Prove by rule induction that for any tree~$\tree$, color~$\colr \in \{\rc, \bc\}$ and~$n \geq 1$, if~$\tree \istree{\colr}{n}$, then~$\nodes{\tree} \geq 2^{n-1}$. {\em You must} use rule induction, not any other proof technique. \textbf{Note:} This, together with one or two other properties of red-black trees, proves that a valid red-black tree is approximately balanced. \end{task} \textbf{Answer:} - Notice use of shorthand $\kwlen{\tree}$ to indicate the value of $n$ (black height) for tree \tree \\ Base Case: \begin{itemize} \item all cases when n = 1: \begin{itemize} \item {\tree = \leaf}: Rule~(RBT-1) confirms that when T is a single black node, n = 1.\\ The proposition holds for this case as $\nodes{\leaf} = 1 \geq 2^{n-1} = 2^{1-1} = 1$. \item {\tree = \rnode{\leaf}{\leaf}}: rule~(RBT-3) confirms that when $\tree$ is a red node, n is the same as it's children, which when they're both leaves makes n = 1.\\ The proposition holds for this case as follows:\\ LHS: $\nodes{\rnode{\leaf}{\leaf}} = 1 + \nodes{\leaf} + \nodes{\leaf} = 1 + 1 + 1 = 3$\\ RHS: $2^{n-1} = 2 ^ {1 - 1} = 2^0 = 1$\\ giving $3 \geq 1$ \end{itemize} \end{itemize} Inductive case assuming $\nodes{\tree} \geq 2^{n-1}$: \begin{itemize} \item Case when $\tree = \bnode{...}{...}$ -- tree's top element is a black node \[ \begin{array}{l l l l} \nodes{\bnode{\tree_1}{\tree_2}} & \geq & 2^{\kwlen{\tree}-1} & \text{the proposition for this case}\\ 1 + \nodes{\tree_1} + \nodes{\tree_2} & \geq & 1 + 2 ^ {\kwlen{\tree_1} - 1} + 2 ^ {\kwlen{\tree_2} - 1} & \text{apply rule (RBT-2)}\\ 1 & \geq & 1 & \text{apply inductive hypothesis}\\ \end{array} \] \item Case when $\tree = \rnode{...}{...}$ -- tree's top element is a red node \[ \begin{array}{l l l l} \nodes{\rnode{\tree_1}{\tree_2}} & \geq & 2^{\kwlen{\tree}-1} & \text{the proposition for this case}\\ 1 + \nodes{\tree_1} + \nodes{\tree_2} & \geq & 2 ^ {\kwlen{\tree_1} - 1} + 2 ^ {\kwlen{\tree_2} - 1} & \text{apply rule (RBT-3)}\\ 1 & \geq & 0 & \text{apply inductive hypothesis}\\ \end{array} \] \end{itemize} \section{Booleans} In this task, you'll extend the {\Elang} language from class with Booleans: \[ \begin{array}{r l l l} \mathit{Types} & \tau & \bnfdef & \kwint \bnfalt \kwstring \bnfalt \kwbool\\ \mathit{Expressions} & e & \bnfdef & x \bnfalt \kwn{n} \bnfalt \kws{s} \bnfalt \kwtrue \bnfalt \kwfalse \bnfalt e = e \bnfalt e + e \bnfalt e \cat e \bnfalt \kwlen{e} \bnfalt \kwlet{x}{e}{e} \bnfalt \kwif{e}{e}{e} \end{array} \] The expressions~$\kwtrue$ and~$\kwfalse$ are values and have their usual meanings. % The expression~$\kwif{e_1}{e_2}{e_3}$ should evaluate~$e_1$. If it evaluates to~$\kwtrue$, it should continue evaluating~$e_2$, otherwise it should continue evaluating~$e_3$. % The other branch {\em should not} be evaluated (this isn't possible to express in {\Elang}, but consider the code $\kwif{x = \kwn{0}}{\kwn{0}}{\kwn{42} / x}$: we certainly don't want to evaluate the else branch when the condition is true!). % We've also added an integer equality test~$e_1 = e_2$ to have an interesting way of producing Booleans (this is an operation on integers only, not on Booleans or strings). % Recall the dynamic semantics from class, now extended with the rules for~$e_1 = e_2$. 
\textbf{Note:} There's a lot here, but don't panic: the only new rules here are (S-11) through (S-14). The rest are just there as a reminder. Your job is only to add Booleans to the language. { \centering \def \MathparLineskip {\lineskip=0.43cm} \begin{mathpar} \Rule{V-1} {\strut} % This leaves the proper spacing above the line of an axiom {\kwn{n}\val} \and \Rule{V-2} {\strut} {\kws{s}\val} \and \Rule{S-1} {\strut} {\kwn{n_1} + \kwn{n_2} \step \kwn{n_1 + n_2}} \and \Rule{S-2} {\strut} {\kws{s_1} \cat \kws{s_2} \step \kws{s_1s_2}} \and \Rule{S-3} {\strut} {\kwlen{\kws{s}} \step \kwn{\kwlen{s}}} \and \Rule{S-4} {e_1 \step e_1'} {e_1 + e_2 \step e_1' + e_2} \and \Rule{S-5} {e_2 \step e_2'} {\kwn{n_1} + e_2 \step \kwn{n_1} + e_2'} \and \Rule{S-6} {e_1 \step e_1'} {e_1 \cat e_2 \step e_1' \cat e_2} \and \Rule{S-7} {e_2 \step e_2'} {\kws{s_1} \cat e_2 \step \kws{s_1} \cat e_2'} \and \Rule{S-8} {e \step e'} {\kwlen{e} \step \kwlen{e'}} \and \Rule{S-9} {e_1 \step e_1'} {\kwlet{x}{e_1}{e_2} \step \kwlet{x}{e_1'}{e_2}} \and \Rule{S-10} {v \val} {\kwlet{x}{v}{e_2} \step \sub{v}{x}{e_2}} \and \Rule{S-11} {e_1 \step e_1'} {e_1 = e_2 \step e_1' = e_2} \and \Rule{S-12} {e_2 \step e_2'} {\kwn{n_1} = e_2 \step \kwn{n_1} = e_2'} \and \Rule{S-13} {\strut} {\kwn{n} = \kwn{n} \step \kwtrue} \and \Rule{S-14} {n_1 \neq n_2} {\kwn{n_1} = \kwn{n_2} \step \kwfalse} \end{mathpar} } \begin{task}\label{task:dyn} Write the new inference rules for the dynamic semantics of Booleans and the if-else construct. You should have 2 new rules for the judgment~$e\val$ and 3 new rules for the judgment~$e \step e$.\\ (\textbf{Hint:} only one of these will be a search rule.) \end{task} \textbf{Answer:} { \centering \def \MathparLineskip{\lineskip=0.43cm} \begin{mathpar} \Rule{V-3} {\strut} {\kwfalse \val} \and \Rule{V-4} {\strut} {\kwtrue \val} \and \Rule{S-15} {e_0 \step e_0'} {\kwif{e_0}{e_1}{e_2} \step \kwif{e_0'}{e_1}{e_2}} \and \Rule{S-16} {\strut} {\kwif{\kwtrue}{e_1}{e_2} \step e_1} \and \Rule{S-17} {\strut} {\kwif{\kwfalse}{e_1}{e_2} \step e_2} \end{mathpar} } We also extend the typing rules with the new rule for equality testing and Booleans. { \centering \def \MathparLineskip {\lineskip=0.43cm} \begin{mathpar} \Rule{T-8} {\typed{\ctx}{e_1}{\kwint}\\ % Use \\ to separate multiple premises \typed{\ctx}{e_2}{\kwint}} {\typed{\ctx}{e_1 = e_2}{\kwbool}} \and \Rule{T-9} {\strut} {\typed{\ctx}{\kwtrue}{\kwbool}} \and \Rule{T-10} {\strut} {\typed{\ctx}{\kwfalse}{\kwbool}} \and \Rule{T-11} {\typed{\ctx}{e_1}{\kwbool}\\ \typed{\ctx}{e_2}{\tau}\\ \typed{\ctx}{e_3}{\tau}} {\typed{\ctx}{\kwif{e_1}{e_2}{e_3}}{\tau}} \end{mathpar} } Note that rule (T-11) doesn't require that the two branches of the conditional have a particular type (e.g.~$\kwint$). % The use of the same metavariable~$\tau$ {\em does}, however, mean that the two branches~$e_2$ and~$e_3$ must have the {\em same} type, which is then the type of the whole expression (this makes sense: how could we possibly give a type to the expression $\kwif{n = \kwn{0}}{\kwn{42}}{\kws{\text{Oops}}}$?). \begin{task} Prove the cases of the Preservation theorem for the new rules you added in Task~\ref{task:dyn}. 
\end{task} \textbf{Answer:} Proposition: if $\typed{\ctx}{e}{\tau}$ and $e \step e'$ then $\typed{\ctx}{e'}{\tau}$ \begin{itemize} \item S-16: When $e = \kwif{\kwtrue}{e_1}{e_2}$\\ \step $e' = e_1$\\ by inversion on rule (T-11), $\typed{\ctx}{\kwif{\kwtrue}{e_1}{e_2}}{\tau}$ gives $\typed{\ctx}{e_1}{\tau}$,\\ which is exactly $\typed{\ctx}{e'}{\tau}$, so the type of the stepped expression is preserved. \item S-17: When $e = \kwif{\kwfalse}{e_1}{e_2}$\\ \step $e' = e_2$\\ by inversion on rule (T-11), $\typed{\ctx}{\kwif{\kwfalse}{e_1}{e_2}}{\tau}$ gives $\typed{\ctx}{e_2}{\tau}$,\\ which is exactly $\typed{\ctx}{e'}{\tau}$, so the type of the stepped expression is preserved. \item S-15: When $e = \kwif{e_0}{e_1}{e_2}$\\ \step $e' = \kwif{e_0'}{e_1}{e_2}$\\ by inversion on rule (T-11), $\typed{\ctx}{e_0}{\kwbool}$, $\typed{\ctx}{e_1}{\tau}$, and $\typed{\ctx}{e_2}{\tau}$\\ by the induction hypothesis applied to the premise $e_0 \step e_0'$, we get $\typed{\ctx}{e_0'}{\kwbool}$\\ thus we can apply (T-11) to conclude $\typed{\ctx}{\kwif{e_0'}{e_1}{e_2}}{\tau}$, i.e. the type is preserved. \end{itemize} \begin{task}\label{task-cf} State (you do not have to prove it) the new case of the Canonical Forms lemma for Booleans. \end{task} \textbf{Answer:} if $e \val$ and $\typed{}{e}{\kwbool}$ then $e$ is either $\kwtrue$ or $\kwfalse$. \begin{task} Prove the cases of the Progress theorem for the new rules (T-9) through (T-11). You may (and should) use the new case of Canonical Forms from Task~\ref{task-cf}. \end{task} \textbf{Answer:} Proposition: if $\typed{}{e}{\tau}$ then $e \val$ or $e \step e'$ \begin{itemize} \item T-9: $e = \kwtrue$, $\tau = \kwbool$\\ by V-4 $e \val$ \item T-10: $e = \kwfalse$, $\tau = \kwbool$\\ by V-3 $e \val$ \item T-11: $e = \kwif{e_1}{e_2}{e_3}$\\ by inversion on rule T-11, $\typed{}{e_1}{\kwbool}$\\ by induction, either $e_1 \val$ or $e_1 \step e_1'$\\ if $e_1 \step e_1'$, then $e \step \kwif{e_1'}{e_2}{e_3}$ by S-15\\ if $e_1 \val$, then by the Canonical Forms lemma from Task~\ref{task-cf}, $e_1$ is either $\kwtrue$ or $\kwfalse$ \begin{itemize} \item $\kwtrue$: S-16 confirms that $\kwif{\kwtrue}{e_2}{e_3} \step e_2$ and thus $e' = e_2$ \item $\kwfalse$: S-17 confirms that $\kwif{\kwfalse}{e_2}{e_3} \step e_3$ and thus $e' = e_3$ \end{itemize} \end{itemize} \end{document}
{ "alphanum_fraction": 0.5914374413, "avg_line_length": 32.3621495327, "ext": "tex", "hexsha": "2d53b459814a118fd4b8f49cd6d2c3a25f770adf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0f7ced45b1ecfc205db258a49d628aa9929fec3f", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "dvtate/CS595-PLT", "max_forks_repo_path": "HW0/hw0.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0f7ced45b1ecfc205db258a49d628aa9929fec3f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "dvtate/CS595-PLT", "max_issues_repo_path": "HW0/hw0.tex", "max_line_length": 146, "max_stars_count": null, "max_stars_repo_head_hexsha": "0f7ced45b1ecfc205db258a49d628aa9929fec3f", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "dvtate/CS595-PLT", "max_stars_repo_path": "HW0/hw0.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5029, "size": 13851 }
%------------------------- % Resume in Latex % Author : Ibrahim Eren Tilla % License : MIT %------------------------ \documentclass[letterpaper,11pt]{article} \usepackage{latexsym} \usepackage[empty]{fullpage} \usepackage{titlesec} \usepackage{marvosym} \usepackage[usenames,dvipsnames]{color} \usepackage{verbatim} \usepackage{enumitem} \usepackage[hidelinks]{hyperref} \usepackage{fancyhdr} \usepackage[english]{babel} \pagestyle{fancy} \fancyhf{} % clear all header and footer fields \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % Adjust margins \addtolength{\oddsidemargin}{-0.5in} \addtolength{\evensidemargin}{-0.5in} \addtolength{\textwidth}{1in} \addtolength{\topmargin}{-.5in} \addtolength{\textheight}{1.0in} \urlstyle{same} \raggedbottom \raggedright \setlength{\tabcolsep}{0in} % Sections formatting \titleformat{\section}{ \vspace{-4pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{-5pt}] %------------------------- % Custom commands \newcommand{\resumeItem}[2]{ \item\small{ \textbf{#1}{: #2 \vspace{-2pt}} } } \newcommand{\resumeItemm}[1]{ \item\small{ { #1 \vspace{-2pt}} } } \newcommand{\resumeSubheading}[4]{ \vspace{-1pt}\item \begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-5pt} } \newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}} \renewcommand{\labelitemii}{$\circ$} \newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]} \newcommand{\resumeSubHeadingListEnd}{\end{itemize}} \newcommand{\resumeItemListStart}{\begin{itemize}} \newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}} %------------------------------------------- %%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} %----------HEADING----------------- \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \textbf{\href{linkedin.com/in/}{{\Large Ibrahim Eren Tilla} \hspace{8cm}}} & Email: \href{mailto:[email protected]}{\textcolor{blue}{ibrahim\[email protected]}}\\ \href{https://www.linkedin.com/in/erentilla/}{LinkedIn: \textcolor{blue}{linkedin.com/in/erentilla}} & Mobile: +90 532 690 44 69\\ \href{https://github.com/erentilla}{GitHub: \textcolor{blue}{github.com/erentilla}} & Istanbul, Turkey \\ \end{tabular*} %-----------EDUCATION----------------- \section{Education} \resumeSubHeadingListStart \resumeSubheading {Bilkent University \hspace{0 cm}}{Ankara, Turkey} {Bachelor's Degree in Computer Engineering: Top \%25 of class}{August 2017 -- June 2021} %\\~\\ %{\underline{Relevant Courses:}{ Data Structures, Object-Oriented Software Engineering, Algorithms, Operating Systems, Database Systems, Programming Languages, Computer Organization, Digital Design, Discrete Mathematics, Probability and Statistics, Linear Algebra, Artificial Intelligence} } \resumeSubHeadingListEnd %-----------EXPERIENCE----------------- \section{Experience} \resumeSubHeadingListStart \resumeSubheading {Trendyol Group}{Istanbul, Turkey} {Associate Software Developer \hspace{0 cm}}{June 2021 -- Present} \resumeItemListStart \resumeItemm {Designed and implemented a quality assessment microservice using \textbf{Spring, React, and Couchbase}. 
It is \textbf{used by 3500+} customer service agents \textbf{every month}.} \resumeItemm {Contributed to the new notification center project \textbf{increasing the Kafka throughput 4 times.} } \resumeItemm {Constructed a new gateway project, creating RESTful APIs using \textbf{Kotlin}.} \resumeItemm {Currently supporting the Trendyol infrastructure which has \textbf{15+ million visitors per day}. } \resumeItemListEnd \resumeSubheading {Vadi Enterprise Information Systems}{Istanbul, Turkey} {Software Engineer Intern \hspace{0 cm}}{July 2020 -- October 2020} \resumeItemListStart \resumeItemm {Implemented a web-based application that ingests and analyzes the data from various gas sensors over Istanbul using IoT. \textbf{Used AWS IoT Core, IoT Analytics, and DynamoDB.}} \resumeItemm {Established and fine-tuned communication through UART protocol over STM32 MCUs.} \resumeItemm {The application is currently being \textbf{used by 7 district municipalities} in Istanbul.} \resumeItemListEnd \resumeSubheading {Turkish Airlines}{Istanbul, Turkey} {Software Engineer Intern \hspace{0 cm}}{July 2019 -- September 2019} \resumeItemListStart \resumeItemm {Designed, built, and maintained a user agreement program which is \textbf{used by 200+ people} in the company on a monthly basis.} \resumeItemm {Used \textbf{Python} for the structure and connected the system to the \textbf{MS SQL} database to store the daily field data of arriving and departing planes.} \resumeItemListEnd \resumeSubHeadingListEnd %-----------PROJECTS----------------- \section{Projects} \resumeSubHeadingListStart \resumeSubheading {The Third Eye}{} {\vspace{-25pt}}{} \resumeItemListStart \resumeItemm {Implemented a social distancing and face mask detection program in \textbf{Python}, using \textbf{OpenCV and YOLOv4} libraries.} \resumeItemm {Trained the face mask detection algorithm using \href{https://www.kaggle.com/omkargurav/face-mask-dataset}{\textcolor{blue}{a dataset}} from Kaggle, securing an \textbf{average precision of 0.88}.} \resumeItemm {Created \textbf{analysis \& design} reports, concentrating on the \textbf{application lifecycle management}, project reports can be examined on \href{https://thirdeyeproject.github.io/}{\textcolor{blue}{project website}}.} \resumeItemListEnd \resumeSubheading {CodeGiant}{} {\vspace{-25pt}}{} \resumeItemListStart \resumeItemm {A web-application of a coding practice \& interview system (e.g., HackerRank, LeetCode).} \resumeItemm {Worked on the \textbf{back-end} of the website and created \textbf{RESTful APIs} using \textbf{Node.js} and \textbf{Express.js}.} \resumeItemm {An integrated system that can support \textbf{thousands of users}, code be found on \href{https://github.com/erentilla/CodeGiant}{\textcolor{blue}{GitHub}}.} \resumeItemListEnd \resumeSubheading {Crossword Solver}{} {\vspace{-25pt}}{} \resumeItemListStart \resumeItemm {An open-source \textbf{ML program} to solve The Mini Crossword from NYTimes.} \resumeItemm {Downloads the clues and solves them through heuristically searching online dictionaries, code can be found on \href{https://github.com/erentilla/Artificial-Intelligence}{\textcolor{blue}{GitHub}}.} \resumeItemListEnd \resumeSubHeadingListEnd % %--------PROGRAMMING SKILLS------------ \section{Programming Skills} \resumeSubHeadingListStart \item{ \textbf{Languages:}{ Java, Javascript, C++, HTML5, CSS, Python, Go} %\hfill %\textbf{Technologies}{: Angular2+, React, Nodejs} } \item{ \textbf{Tools/Technologies:}{ Git, Spring, Kafka, Couchbase, Amazon Web Services, NodeJS, 
ExpressJS, React} } \resumeSubHeadingListEnd %------------------------------------------- %--------ACHIEVEMENTS------------ \section{Achievements} \resumeSubHeadingListStart \item{ \textbf{}{Placed \textbf{984th} on the national admission exam in Turkey, which roughly \textbf{1.2 million people} take, ranking within the \textbf{top 0.08\%} of contestants.} } \resumeSubHeadingListEnd %------------------------------------------- \end{document}
{ "alphanum_fraction": 0.6704488149, "avg_line_length": 37.7714285714, "ext": "tex", "hexsha": "60da26dbbcbda7494e4233d84846d90307485c65", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-13T19:37:00.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-13T19:37:00.000Z", "max_forks_repo_head_hexsha": "49104f3b00ee6db53b79f0b9092afe4193e783db", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "erentilla/resume", "max_forks_repo_path": "eren_tilla_resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "49104f3b00ee6db53b79f0b9092afe4193e783db", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "erentilla/resume", "max_issues_repo_path": "eren_tilla_resume.tex", "max_line_length": 297, "max_stars_count": null, "max_stars_repo_head_hexsha": "49104f3b00ee6db53b79f0b9092afe4193e783db", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "erentilla/resume", "max_stars_repo_path": "eren_tilla_resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2188, "size": 7932 }
% !TEX root =main.tex \input{CR-TLP-cost} %\subsection{Cost Analysis}\label{TLP-cost-compare} % We summarize C-TLP's cost analysis in Table \ref{table::puzzle-com}. It considers a generic setting where the protocol deals with $z$ puzzles. We refer readers to Appendix \ref{TLP-cost-compare} for a detailed analysis. % % \begin{table*}[!htbp] %\begin{footnotesize} % %\begin{center} %\caption{ \small C-TLP's Detailed Cost Breakdown}\label{table::puzzle-com} % %\renewcommand{\arraystretch}{.85} %\scalebox{0.94}{ %\begin{subtable}{.64\linewidth}%xxxx %\begin{minipage}{.88\linewidth} %\caption{\small Computation Cost} %\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} % \hline %\cellcolor[gray]{0.9}&\cellcolor[gray]{0.9} & % \multicolumn{3}{c|}{\cellcolor[gray]{0.9}\scriptsize \underline{ \ \ \ \ \ \ \ \ \ Protocol Function \ \ \ \ \ \ \ \ \ }}&\cellcolor[gray]{0.9}\\ % %\cline{3-5} %\cellcolor[gray]{0.9} \multirow{-2}{*}{\scriptsize Protocol}&\cellcolor[gray]{0.9} \multirow{-2}{*} {\scriptsize Operation}&\cellcolor[gray]{0.9}\scriptsize$\mathtt{GenPuz}$&\cellcolor[gray]{0.9}\scriptsize$\mathtt{SolvPuz}$&\cellcolor[gray]{0.9}\scriptsize$\mathtt{Verify}$&\multirow{-2}{*} {\cellcolor[gray]{0.9}\scriptsize Complexity} \\ %\hline %\cellcolor[gray]{0.9} &\multirow{3}{*}{\rotatebox[origin=c]{0}{\scriptsize }} \cellcolor[gray]{0.9}\scriptsize Exp.&\scriptsize$z+1$&\scriptsize$T z$ &$-$&\multirow{4}{*}{\rotatebox[origin=c]{0}{\scriptsize $O(T z)$}}\\ % \cline{2-5} % \cellcolor[gray]{0.9} &\cellcolor[gray]{0.9}\scriptsize Add. or Mul.&\scriptsize$z$ &\scriptsize$z$&$-$ & \\ % \cline{2-5} % \cellcolor[gray]{0.9} &\cellcolor[gray]{0.9}\scriptsize Commitment&\scriptsize$z$&$-$ &\scriptsize$z$&\\ % \cline{2-5} %\cellcolor[gray]{0.9} \multirow{-4}{*}{\rotatebox[origin=c]{0}{\scriptsize C-TLP }} &\cellcolor[gray]{0.9}\scriptsize Sym. Enc&\scriptsize$z$&\scriptsize$z$ &$-$&\\ % % \hline %\end{tabular} %\end{minipage}%****** %\end{subtable}%xxxx % %\begin{subtable}{.46\linewidth}%xxxx %\renewcommand{\arraystretch}{1.68} %\begin{minipage}{.9\linewidth} %\caption{\small Communication Cost (in bit)} %\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} % \hline % {\cellcolor[gray]{0.9}\scriptsize Protocol}&{\cellcolor[gray]{0.9}\scriptsize Model}& %{\cellcolor[gray]{0.9}\scriptsize Client}&{\cellcolor[gray]{0.9}\scriptsize Server}&{\cellcolor[gray]{0.9}\scriptsize Complexity}\\ % \cline{3-4} % %\hline % \cellcolor[gray]{0.9} &\cellcolor[gray]{0.9} \multirow{2}{*}{\rotatebox[origin=c]{0}}\scriptsize Standard&\scriptsize$3200 z$&\scriptsize$1524 z$ &\multirow{2}{*}{\rotatebox[origin=c]{0}{\scriptsize $O(z)$ }}\\ % \cline{2-4} % \multirow{-2}{*}{\rotatebox[origin=c]{0}{\cellcolor[gray]{0.9} \scriptsize C-TLP }}&\cellcolor[gray]{0.9}\scriptsize R.O.&\scriptsize$2432 z$ &\scriptsize$628 z$& \\ % % \hline %\end{tabular} %\end{minipage}%****** %\end{subtable}%xxxx %} %\end{center} %\end{footnotesize} %\end{table*} % % \noindent\textbf{\textit{Computation Complexity}}. For a client to generate $z$ puzzles, it performs $z$ symmetric key-based encryption, $z$ modular exponentiations and $z$ modular additions. Also, to commit to values, it invokes a commitment scheme $z$ times; if a hash-based commitment is used then it would involve $z$ invocations of a hash function, and if Pedersen commitment is used then it would involve $2 z$ exponentiations and $z$ multiplications. Thus, the overall computation complexity of the client is $O(z)$. 
For the server to solve $z$ puzzles, it calls $\mathtt{TLP.SolvPuz}(.)$ $z$ times, which leads to $O(Tz)$ computation complexity. Also, the verification cost only involves $z$ invocations of the commitment scheme; if the hash-based commitment is used, then it would involve $z$ invocations of a hash function, if Pedersen commitment is utilised, then it would involve $2 z$ exponentiations and $z$ multiplications. Thus, the verification's complexity is $O(z)$. % % % % % % % \noindent\textbf{\textit{Communication Complexity}}. In step \ref{Generate-Puzzle}, the client publishes two vectors: $\vv{\bm{o}}$ and $\vv{\bm{h}}$, with $2 z$ and $z$ elements respectively. Each element of $\vv{\bm{o}}$ is a pair $(o_{\scriptscriptstyle j,1},o_{\scriptscriptstyle j,2})$, where $o_{\scriptscriptstyle j,1}$ is an output of symmetric key encryption, e.g. $|o_{\scriptscriptstyle j,1}|=128$-bit, and $o_{\scriptscriptstyle j,2}$ is an element of $\mathbb{Z}_{\scriptscriptstyle N}$, e.g. $|o_{\scriptscriptstyle j,2}|=2048$-bit. Also, each element $h_{\scriptscriptstyle j}$ of $\vv{\bm{h}}$ is either an output of a hash function, when a hash-based commitment is used, e.g. $|h_{\scriptscriptstyle j}|=256$-bit, or an element of $\mathbb{F}_{\scriptscriptstyle q}$ when Pedersen commitment is used, e.g. $|h_{\scriptscriptstyle j}|=1024$-bit. Thus, its bandwidth is about $2432 z$ bits when the former, or $3200 z$ bits when the latter commitment scheme is utilised. Also, its complexity is $O(z)$. For the server to prove, in step \ref{prove-}, it sends $z$ pairs $(m_{\scriptscriptstyle j},d_{\scriptscriptstyle j})$ to the verifier, where $m_{\scriptscriptstyle j}$ is an arbitrary message, e.g. $|m_{\scriptscriptstyle j}|=500$-bit, and $d_{\scriptscriptstyle j}$ is either a long enough random value, e.g. $|d_{\scriptscriptstyle j}|=128$-bit, when the hash-based commitment is used, or an element of $\mathbb{F}_{\scriptscriptstyle q}$ when Pedersen scheme is used, e.g. $|d_{\scriptscriptstyle j}|=1024$-bit. So, its bandwidth is about either $628 z$ or $1524 z$ bits when the former or latter commitment scheme is used respectively. The solver's communication complexity is $O(z)$.
{ "alphanum_fraction": 0.6870003476, "avg_line_length": 69.3253012048, "ext": "tex", "hexsha": "a89831e097b85256ce000c4e730d228c2943bde6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b2139df715f441a48eeae0b88e038fb6acc5d6e2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AydinAbadi/CR-LP", "max_forks_repo_path": "Paper/eprint-version/CR-TLP-cost-summary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b2139df715f441a48eeae0b88e038fb6acc5d6e2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AydinAbadi/CR-LP", "max_issues_repo_path": "Paper/eprint-version/CR-TLP-cost-summary.tex", "max_line_length": 1729, "max_stars_count": null, "max_stars_repo_head_hexsha": "b2139df715f441a48eeae0b88e038fb6acc5d6e2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AydinAbadi/CR-LP", "max_stars_repo_path": "Paper/eprint-version/CR-TLP-cost-summary.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1991, "size": 5754 }
\newpage \thispagestyle{plain} { \selectlanguage{catalan} \paragraph{} % Hack to ensure that TOC link works \begin{abstract} \addcontentsline{toc}{section}{\abstractname} \lipsum[1][1-10] % replace with content \end{abstract} } \vspace{50pt} { \selectlanguage{spanish} \paragraph{} % Hack to ensure that TOC link works \begin{abstract} \addcontentsline{toc}{section}{\abstractname} \lipsum[2][1-10] % replace with content \end{abstract} } \vspace{50pt} { \selectlanguage{english} \paragraph{} % Hack to ensure that TOC link works \begin{abstract} \addcontentsline{toc}{section}{\abstractname} \lipsum[3][1-10] % replace with content \end{abstract} } \newpage
{ "alphanum_fraction": 0.6452879581, "avg_line_length": 19.5897435897, "ext": "tex", "hexsha": "b7c145d870189eb7c7c4e687eb9e9e9e7542df8f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dd0ef8054700bf6bbf52439286b0b75a6ff6c2dc", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "liamato/URV-TFG-Template", "max_forks_repo_path": "sections/abstract.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dd0ef8054700bf6bbf52439286b0b75a6ff6c2dc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "liamato/URV-TFG-Template", "max_issues_repo_path": "sections/abstract.tex", "max_line_length": 53, "max_stars_count": null, "max_stars_repo_head_hexsha": "dd0ef8054700bf6bbf52439286b0b75a6ff6c2dc", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "liamato/URV-TFG-Template", "max_stars_repo_path": "sections/abstract.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 225, "size": 764 }
\subsection{The Harrod-Domar model} \subsubsection{Introduction to growth models} We have output as a function of capital. \(Y=f(K)\) We also have capital dynamics. \(\dot K=I-\delta K\) \(I=S=sY\) This gives us: \(\dot K = sY-\delta K\) \subsubsection{Introduction} The production function is: \(Y=cK\) This gives us: \(\dot K=(sc-\delta )K\) \subsubsection{Growth} \(\dot Y=c\dot K \) \(\dfrac{\dot Y}{Y}=c\dfrac{\dot K}{Y}\) \(\dfrac{\dot Y}{Y}=c\dfrac{(sc-\delta )K}{cK}\) \(\dfrac{\dot Y}{Y}=sc-\delta \) \subsubsection{Per-capita growth} Per capita income is: \(y=\dfrac{Y}{L}\) \(k=\dfrac{K}{L}\)
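As a quick numerical check of the growth rate \(\dfrac{\dot Y}{Y}=sc-\delta\) derived in the growth section above, the following short Python sketch integrates the capital dynamics over one period. The parameter values are made up purely for illustration.

\begin{verbatim}
# Numerical sketch of the Harrod-Domar dynamics; s, c, delta, K0 are
# illustrative values, not taken from the text.
s, c, delta = 0.2, 0.4, 0.05   # saving rate, output-capital ratio, depreciation
K0, K, dt = 100.0, 100.0, 1e-3
for _ in range(int(1 / dt)):   # integrate dK/dt = s*c*K - delta*K over one period
    K += (s * c * K - delta * K) * dt
print((c * K - c * K0) / (c * K0))   # ~ 0.03, matching s*c - delta
\end{verbatim}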
{ "alphanum_fraction": 0.6257961783, "avg_line_length": 13.0833333333, "ext": "tex", "hexsha": "f7b768f924b983c0928701585f6e78a01d948ebe", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/economics/neoClassical/03-01-harrodDomar.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/economics/neoClassical/03-01-harrodDomar.tex", "max_line_length": 48, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/economics/neoClassical/03-01-harrodDomar.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 221, "size": 628 }
\subsection{k-Nearest Neighbour Search}
{ "alphanum_fraction": 0.7674418605, "avg_line_length": 8.6, "ext": "tex", "hexsha": "f9d1cf2e171b5ad31ac2d32a9cd8189369c50e53", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/distance/04-02-kNNS.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/distance/04-02-kNNS.tex", "max_line_length": 39, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/distance/04-02-kNNS.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11, "size": 43 }
\section{Conclusion} Pig and Jaql both succeed in the task of abstracting from the Map/Reduce pattern towards the actual data processing tasks. Pig Latin is limited in expressiveness and a fallback to UDFs is often needed, whereas Jaql provides a richer feature set, being extensible with pure Jaql. The time-consuming part of developing in Pig is writing UDFs. If a library of commonly used UDFs is present (e.g. Pig's Open Source UDF Repository PiggyBank~\cite{pigPiggyBank} or company-specific in-house libraries) a Pig Latin script is composed in a very short time. For a lot of common tasks, native Pig Latin statements are already powerful enough and UDFs aren't needed. Furthermore, we would prefer Pig for ``easy'' querying tasks, since basic functionality is quickly accessible and well documented. But we think that prototyping of more complex problems is done faster in Jaql than in Pig. Since Pig and Jaql are still in development, improvements may be seen, especially in performance, where both aim at competing with native Java. While Pig seems to be production-ready, Jaql currently appears to be more of a research project. In conclusion, Pig and Jaql both ease programming for Hadoop by allowing rapid development, compared to Hadoop's Java interface, but they are both in an early stage of development and cannot yet fully compete with pure Java Hadoop performance.
{ "alphanum_fraction": 0.8072202166, "avg_line_length": 72.8947368421, "ext": "tex", "hexsha": "9298fffb6ef5acf546c5dc68f67af07b7d46df40", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-05-02T22:18:22.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-07T14:02:25.000Z", "max_forks_repo_head_hexsha": "33fb52b3f9722b850dc8cc5fcf720189bee70185", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "rkh/hadoop-scripting", "max_forks_repo_path": "paper/conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33fb52b3f9722b850dc8cc5fcf720189bee70185", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "rkh/hadoop-scripting", "max_issues_repo_path": "paper/conclusion.tex", "max_line_length": 373, "max_stars_count": 3, "max_stars_repo_head_hexsha": "33fb52b3f9722b850dc8cc5fcf720189bee70185", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "rkh/hadoop-scripting", "max_stars_repo_path": "paper/conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-09T17:42:38.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-15T00:29:12.000Z", "num_tokens": 301, "size": 1385 }
\chapter{Introduction} LPS is a Language Prototyping System based on Modular Monadic Semantics. For more information, see: http://lsi.uniovi.es/~labra/LPS/LPS.html Some papers describing the system are \cite{Labra98, Labra99}. Modular Monadic Semantics (\cite{LiangHudakJones95, LiangHudak96}) separates values from computations. A computation can be described by means of a monad \cite{Moggi89}. Using monad transformers, a monad can be transformed into a different monad with more computational features. The system can be seen as a Domain Specific Language embedded in Haskell \cite{Haskell98}. In order to derive a programming language specification it is necessary to provide: \begin{itemize} \item A \emph{Parser} for the language \item A description of the domain values \item A characterization of the monad that models the computation \end{itemize}
{ "alphanum_fraction": 0.7618002195, "avg_line_length": 26.7941176471, "ext": "tex", "hexsha": "88475cf7e76918ce21471b887867f4c5332af5e9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-12-09T10:16:44.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-09T10:16:44.000Z", "max_forks_repo_head_hexsha": "c5ff4e644fce1e97f1823321710dd082aa79aa72", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "palexand/interpreters", "max_forks_repo_path": "LPS/Introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c5ff4e644fce1e97f1823321710dd082aa79aa72", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "palexand/interpreters", "max_issues_repo_path": "LPS/Introduction.tex", "max_line_length": 80, "max_stars_count": 10, "max_stars_repo_head_hexsha": "c5ff4e644fce1e97f1823321710dd082aa79aa72", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "palexand/interpreters", "max_stars_repo_path": "LPS/Introduction.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-18T18:39:05.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-05T13:24:43.000Z", "num_tokens": 227, "size": 911 }
\chapter{System Description}\label{ch:system-description} Using the knowledge from the previous chapter, this chapter first introduces the architecture of the developed neural network. Afterward, it explains the training process together with the preparation of the training data and the overall concept to generate the final abstractive summaries. Then it continues with a short introduction of the Texar library \cite{hu2019texar} which was used to develop the neural network. Finally, the exact hyperparameters that were used are described. % =============== % MODEL ARCHITECTURE % =============== \section{Model Architecture} The architecture of the model used in this work is rather straightforward. For the encoding part, BERT is used, and for decoding, a normal Transformer decoder is used. As proposed in \cite{1608.05859}, the input embeddings are tied to the output layer. As described in \cref{sec:bert}, BERT allows either one or two sentence\footnote{It must again be noted that "sentence" is not referring to an actual linguistic sentence.} inputs. For this work, only one sentence is used as there is no need (and logical way) to split the input into two parts. The Transformer decoder performs attentions over the full output representations of the encoder. The full architecture is shown in \cref{fig:summarization-architecture}. \begin{figure}[h] \centering \includegraphics[width=0.7\paperwidth]{figures/summarization-architecture} \caption{Visualization of the model architecture} \label{fig:summarization-architecture} \end{figure} % ================ % TRAINING AND CONCEPT % ================ \section{Training and Concept}\label{sec:system-description-training} To circumvent the high memory usage of BERT at long sequence lengths, the whole meeting transcript is not used as input for the network at once. As described in \cref{sec:ami-meeting-corpus}, for every sentence in a meeting's abstractive summary, there exists a link to one or more dialogue acts. These are the dialogue acts that influence the content of the sentence the most. An example of such a link is shown in \cref{fig:dialogue-arc-summary-link-example}. For training, all of these dialogue acts that belong to the same summary sentence are concatenated and then used as the source for the summary. The summary sentence itself is used as the target summary. % TODO I'm pretty sure that there are better examples, but for now this is fine \begin{figure}[h] \begin{lstlisting}[numbers=none] DA1: Thank you very much indeed, DA2: Cool, thank you, DA3: So I call the meeting closed. SUM: They close the meeting by thanking one another. \end{lstlisting} \caption[Three dialogue acts that are linked to one sentence of the summary]{Three dialogue acts (DA1--3) that are linked to one sentence of the summary (SUM)} \label{fig:dialogue-arc-summary-link-example} \end{figure} During training, the network was evaluated on the development data set every 75 steps. If the network improved its score, this model was saved. To get a summary of a whole meeting, the topic segmentation described in \cref{ssec:ami-annotations} is used. The meeting's transcript is segmented by topic, and then these segments are used as the input for the same network that was already trained to summarize dialogue acts. Afterward, the output for all transcript segments can be concatenated. These concatenated outputs together form the full summary of a meeting. 
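To make the data flow described above concrete, the following minimal sketch illustrates how the training pairs and the final meeting summary could be assembled. All function and variable names as well as the toy data are purely illustrative; they are not taken from the actual implementation.

\begin{lstlisting}[numbers=none,language=Python]
from typing import Callable, Dict, List, Tuple

def build_training_pairs(
        summary_sentences: Dict[str, str],   # summary sentence id -> text
        links: Dict[str, List[str]],         # summary sentence id -> linked dialogue act ids
        dialogue_acts: Dict[str, str],       # dialogue act id -> transcribed text
) -> List[Tuple[str, str]]:
    # One (source, target) pair per abstractive summary sentence.
    pairs = []
    for sent_id, target in summary_sentences.items():
        source = " ".join(dialogue_acts[da_id] for da_id in links[sent_id])
        pairs.append((source, target))
    return pairs

def summarize_meeting(topic_segments: List[str],
                      summarize: Callable[[str], str]) -> str:
    # Summarize every topic segment with the trained model and concatenate.
    return " ".join(summarize(segment) for segment in topic_segments)

# Toy example corresponding to the dialogue acts DA1-DA3 shown above.
acts = {"DA1": "Thank you very much indeed,",
        "DA2": "Cool, thank you,",
        "DA3": "So I call the meeting closed."}
links = {"SUM1": ["DA1", "DA2", "DA3"]}
summary = {"SUM1": "They close the meeting by thanking one another."}
print(build_training_pairs(summary, links, acts))
\end{lstlisting}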
\section{Preparation of the Data}\label{sec:preparation-of-the-data} This section describes how the data of the two corpora introduced in \cref{ch:data} is processed. \paragraph{Format of the Corpus Data} The corpus data for both the AMI Meeting Corpus and the ICSI Meeting Corpus is available in the NITE XML Toolkit (NXT) format \cite{Carletta2003}. For automatic parsing, a Java library is available that makes it possible to write a Java program that traverses through the corpora and collects the necessary data. For this work, such a program was developed. It parses the data of the corpora and creates files that are easier to process than the complex NXT format. For training, three tab-separated values (.tsv) files are created, one for each dataset (training, development, test). For generating the final summary, it creates a text file for each meeting. The text file contains the transcripts of the meeting split by topic, with each topic-transcript starting on a new line. To evaluate the results, for each meeting a text file is generated that contains the summary of the meeting. \paragraph{Data Cleaning} In the AMI and ICSI Meeting Corpora, the transcriptions try to include every spoken word. This includes speech disfluencies like "hmm," "huh," etc. As these do not add any meaningful value to a sentence, they are filtered out while processing the data. Words like "yeah," "nope," and "nah" are kept because they carry meaning, like agreement or disagreement. \paragraph{Data Split} For the AMI Meeting Corpus, the recommended segmentation described in \cref{ssec:ami-segmentation-of-the-corpus} is used. As the ICSI Meeting Corpus has no recommended segmentation, the traditional test set and a randomly generated development set are used, as described in \cref{sec:icsi-corpus}. % =================== % IMPLEMENTATION WITH TEXAR % =================== \section{Implementation with Texar} For the actual implementation of the network, the Texar toolkit \cite{hu2019texar} is used. Texar is an open-source modular toolkit that provides plenty of useful abstractions for fast prototyping of neural network architectures with a special focus on natural language processing. It comes in two nearly identical versions, one for TensorFlow \cite{tensorflow2015-whitepaper} and one for PyTorch \cite{NEURIPS2019_9015}. For this work, the TensorFlow version is used. Texar has modules for both BERT and regular Transformers and allows for an easy connection of BERT and a Transformer decoder. The complete architecture takes only a few hundred lines of code. Nearly every hyperparameter, like the number of decoder layers, the number of attention heads, or simple settings like the learning rate, can easily be modified. As an example, the hyperparameter configuration for the Transformer decoder is shown in \cref{lst:hyperparameters-decoder}. 
\begin{lstlisting}[numbers=none,language=Python,caption={Hyperparameters for Transformer decoder},captionpos=b,label=lst:hyperparameters-decoder,float] hidden_dim = 768 decoder = { 'dim': hidden_dim, 'num_blocks': 6, 'multihead_attention': { 'num_heads': 8, 'output_dim': hidden_dim }, 'initializer': { 'type': 'variance_scaling_initializer', 'kwargs': { 'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', }, }, 'poswise_feedforward': tx.modules.default_transformer_poswise_net_hparams( output_dim=hidden_dim) } \end{lstlisting} % ==================== % EXPLORING HYPERPARAMETERS % ==================== \section{Exploring Hyperparameters} This section explores the hyperparameters used. The parameters were determined empirically by testing multiple variations. After multiple tests, the following hyperparameters yielded the best results. \paragraph{Optimizer and Learning Rate} The Adam optimizer \cite{article} is used with $\beta_1=0.9$, $\beta_2=0.997$, and $\epsilon = 10^{-9}$. The learning rate is computed by the following formula: \[ learning\_rate = 0.2 \cdot d_{model}^{-0.5} \cdot min(step\_num^{-0.5}, step\_num \cdot warmup\_steps^{-1.5}) \] which is equivalent to the learning rate used to train the original Transformer \cite[p.~7]{1706.03762} but multiplied by a factor of $0.2$ because this yields better results for this training data, as shown by empirical testing. Results for tests with a constant linear learning rate have been slightly worse. The warmup steps are set to $warmup\_steps = 4000$. The formula can be interpreted as linearly increasing the learning rate until $warmup\_steps < steps$ and then steadily decreasing it again. The function's graph is visualized in \cref{fig:learning-rate}. \begin{figure}[h] \centering \includegraphics[width=0.6\paperwidth]{figures/learning-rate} \caption{Visualization of the learning rate with $warmup\_steps = 4000$} \label{fig:learning-rate} \end{figure} \paragraph{Batch Size and Maximum Sequence Length} Due to memory constraints,\footnote{Training was performed on a single GeForce RTX 2080 Ti GPU with 11 GB of memory.} it was necessary to find a good balance between the batch size and maximum input sequence length. For the experiments performed in this thesis, the following values proved to be a good compromise: \begin{itemize} \item A maximum input sequence length of $96$ % TODO Maybe provide some stats: What is the average sequence length and how many inputs were truncated with a sequence length of 96? \item A batch size of $28$ \end{itemize} Smaller batch sizes harm the network's performance, and smaller maximum sequence lengths mean lost information as some of the inputs are quite long and would be truncated. \paragraph{BERT} Due to the same memory constraints mentioned in the previous paragraph, the smaller BERT model, BERT\textsubscript{BASE}, is used as the larger model, BERT\textsubscript{LARGE}, requires significantly more memory. \paragraph{Transformer Decoder} For the Transformer decoder, $N=6$ stacked decoder layers are used. The number of attention heads is set to eight, and the hidden dimension has a size of $768$, as this is the same dimension used by the BERT\textsubscript{BASE} model \cite[p.~3]{devlin2018bert}. % TODO Concluding statement that leads to the following chapter
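For reference, the warm-up schedule described above can be written down directly. The following minimal sketch assumes that $d_{model}$ equals the hidden dimension of $768$ and uses the multiplicative factor $0.2$ and $warmup\_steps = 4000$ given in the text; the function itself is ours, not part of the training code.

\begin{lstlisting}[numbers=none,language=Python]
# Minimal sketch of the learning-rate schedule above; only the formula and
# the constants (factor 0.2, warmup_steps = 4000, d_model = 768) come from
# the text, the function itself is illustrative.
def learning_rate(step: int, d_model: int = 768, warmup_steps: int = 4000) -> float:
    step = max(step, 1)  # avoid division by zero at step 0
    return 0.2 * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

print(learning_rate(4000))    # peak of the schedule (reached at warmup_steps)
print(learning_rate(20000))   # decays with step^-0.5 afterwards
\end{lstlisting}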
{ "alphanum_fraction": 0.7713146596, "avg_line_length": 54.3444444444, "ext": "tex", "hexsha": "4737b7f3acabf0ff7c6551ec80ea3e048dc53845", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-03-25T15:03:24.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-25T15:03:24.000Z", "max_forks_repo_head_hexsha": "431c9547da191e16bd66a3f1aeeec40403c46b61", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Bastian/bachelors-thesis", "max_forks_repo_path": "content/4_system_description.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "431c9547da191e16bd66a3f1aeeec40403c46b61", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Bastian/bachelors-thesis", "max_issues_repo_path": "content/4_system_description.tex", "max_line_length": 229, "max_stars_count": 1, "max_stars_repo_head_hexsha": "431c9547da191e16bd66a3f1aeeec40403c46b61", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Bastian/bachelors-thesis", "max_stars_repo_path": "content/4_system_description.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-20T13:37:57.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-20T13:37:57.000Z", "num_tokens": 2223, "size": 9782 }
\documentclass[table, 12pt]{article} \usepackage{graphicx} \usepackage[T1]{fontenc} \usepackage{tocloft} \usepackage{todonotes} \usepackage{caption} \usepackage{hyperref} \usepackage{booktabs} \usepackage{listings} \usepackage{pdfpages} \usepackage{pdflscape} \usepackage{textpos} \usepackage{scrhack} \usepackage{xcolor} \usepackage{float} \usepackage{longtable} \usepackage{enumerate} \usepackage{tasks} \usepackage{tabularx} \usepackage{titlesec} \usepackage{listing} \usepackage{graphicx} \usepackage{subcaption} \titleformat{\paragraph} {\normalfont\normalsize\bfseries}{\theparagraph}{1em}{} \titlespacing*{\paragraph} {0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex} \begin{document} \begin{titlepage} \centering {\scshape\large AY 2020/2021 \par} \vfill \includegraphics[width=100pt]{assets/logo-polimi-new}\par\vspace{1cm} {\scshape\LARGE Politecnico di Milano \par} \vspace{1.5cm} {\huge\bfseries Acceptance Testing Document \par} \vspace{2cm} {\Large {Luca Pirovano\quad Nicolò Sonnino}\par} \vfill {\large Professor\par Matteo \textsc{Rossi}} \vfill {\large \textbf{Version 1.0} \\ \today \par} \end{titlepage} \hypersetup{% pdfborder = {0 0 0} } \thispagestyle{plain} \pagenumbering{gobble} \mbox{} \newpage \pagenumbering{roman} \tableofcontents \newpage \pagenumbering{arabic} \section{Tested Project} \begin{itemize} \item \textbf{Authors}\\Robert Medvedec\\Toma Sikora \item \textbf{Repository URL}\\ \underline{\href{https://github.com/robertodavinci/Software_Engineering_2_Project_Medvedec_Sikora}{Link}} \item \textbf{Documents considered}\begin{itemize} \item RASD: Requirements Analysis Specification Document; \item DD: Design Document; \item ITD: Implementation and Testing Document; \end{itemize} \end{itemize} \newpage \section{Installation} For the installation phase, we followed the \textit{Installation Instructions} section in the ITD (6.0). We have installed the two application artifacts (CLup and CLupSM) for Android operating system (.apk file) provided by the developers in the following environments: \begin{itemize} \item Google Pixel 3A: \begin{itemize} \item \textbf{Platform:} emulated on the Android Virtual Device (AVD) environment, with SDK Version: 29. \item \textbf{O.S.:} Android 10 (Q). \end{itemize} \item OnePlus 6T: \begin{itemize} \item \textbf{Platform:} physical device with SDK 29. \item \textbf{O.S.:} Android 10 (Q). \end{itemize} \item NVIDIA Shield Tablet: \begin{itemize} \item \textbf{Platform:} physical device with SDK 26. \item \textbf{O.S.:} Lineage O.S. 15.0 (Android Oreo). \end{itemize} \end{itemize} Since there were neither iOS release and web GUI we only tested the Android artifact directly on the devices listed above and also debugged the project through the Android Studio Suite. \newpage \section{Acceptance Test Cases} Since the database was deployed on the Firebase platform, we could not access to it and consequently we could not perform tests on it. At the same time, we could not have any idea about how data was stored in the DBMS and how errors were handled. \subsection{Test Scenarios} \subsubsection{Store Selection} \label{store_selection} \begin{enumerate}[i] \item \textbf{Goal:} choose a specific store located in Pavia. \item \textbf{Steps to reproduce:} \begin{itemize} \item[-] Open the app. \item[-] Tap \textit{SELECT A STORE} button. \item[-] Click the first text box and select the city \textit{Pavia, Italy}. \item[-] Click on the second text box and select the grocery shop. \item[-] Select one of the addresses printed in the last text area. 
\item[-] Tap \textit{SELECT STORE}. \end{itemize} \item \textbf{Issues:} \begin{itemize} \item[-] Selecting a field in the form produced no UI changes visible to the user. After a code inspection, we realized that the issue was related to the system theme. In fact, we had the dark mode enabled on each device. This produced a different UI color palette, with the consequence of white text on white background. An issue screenshot is shown in figure \ref{white_label_issue}. \item[-] Tapping on \textit{SELECT STORE} without confirming the address leads to an application (or sometimes page) crash, reporting a runtime exception. \end{itemize} \end{enumerate} \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{assets/screenshots/white_label_fix_jpg.jpg} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{assets/screenshots/white_label_issue.jpg} \end{minipage} \caption{Store selection (expected vs. actual)} \label{white_label_issue} \end{figure} \subsubsection{New Ticket Request} \begin{enumerate}[i] \item \textbf{Goal:} retrieve a ticket at Minimarket located in Pavia. \item \textbf{Steps to reproduce:} \begin{itemize} \item[-] Select a store as described in paragraph \ref{store_selection} \item[-] Click on \textit{REQUEST TICKET} button. \item[-] Visualize the QR Code of the fresh new ticket. \end{itemize} \item \textbf{Test cases:} \begin{enumerate} \item Use of \textit{CHECK TICKET} button. \item Retrieve a ticket for a closed store. \end{enumerate} \item \textbf{Test results:} \begin{enumerate} \item If the ticket had been validated, the app returned a confirmation message. Otherwise, no actions were performed. \item The app correctly threw an error stating that the store was closed. \end{enumerate} \item \textbf{Issues:} \begin{itemize} \item[-] Once a ticket has been retrieved, if the user closes the application or returns to previous page, the ticket is definitely lost and there is no way of recovering it.\\The main consequence of this behavior is that the ticket keeps its queue position, blocking all following tickets in the same slot. This issue makes the entire application useless for the affected slot. \end{itemize} \end{enumerate} \subsubsection{Login as a Store Manager} \label{login_store_manager} \begin{enumerate}[i] \item \textbf{Goal:} login using an existing Store Manager account. \item \textbf{Steps to reproduce:} \begin{itemize} \item[-] Open the application. \item[-] Tap on \textit{STORE MANAGER LOGIN} button. \item[-] Insert credentials. \item[-] Tap on \textit{LOGIN} button. \end{itemize} \item \textbf{Test Cases:} \begin{enumerate} \item Insertion of wrong credentials. \item Insertion of unconfirmed account credentials. \item Password reset (if forgotten). \item Tap the login button without credentials insertion. \item Account switching. \end{enumerate} \item \textbf{Test results:} \begin{enumerate} \item Correct throwing of a login error. \item Correct sending of confirmation email. \item Correct password reset flow. \item Correct throwing of a missing fields error. \item Correct logout (and successive login) of users. \end{enumerate} \end{enumerate} \subsubsection{Manage a store as Store Manager} \begin{enumerate}[i] \item \textbf{Goal:} open or close an existent shop managed by our profile. 
\item \textbf{Steps to reproduce:}
      \begin{itemize}
          \item[-] Login as a store manager, as described in section \ref{login_store_manager}.
          \item[-] Click on \textit{OPEN STORE} or \textit{CLOSE STORE}.
      \end{itemize}
      \item \textbf{Test results:} the store correctly reacted to our requests, closing and opening itself. In case of closure, slots in that store became unavailable.
\end{enumerate}
\subsubsection{Customer Control}
\begin{enumerate}[i]
    \item \textbf{Goal:} scan tickets and monitor accesses.
    \item \textbf{Steps to reproduce:}
    \begin{itemize}
        \item[-] Login as a store manager, as described in section \ref{login_store_manager}.
        \item[-] Tap on \textit{CUSTOMER CONTROL} button.
        \item[-] Tap on \textit{SCAN A TICKET} button.
        \item[-] Read the QR Code using the device's integrated camera and validate the ticket.
        \item[-] Tap on \textit{STORE EXIT} button.
    \end{itemize}
    \item \textbf{Test Cases:}
    \begin{enumerate}
        \item Scan a valid ticket.
        \item Scan an expired or lost ticket.
        \item Scan a ticket after reaching the maximum slot capacity.
        \item Scan a ticket retrieved immediately after a lost one.
    \end{enumerate}
    \item \textbf{Test results:}
    \begin{enumerate}
        \item The application correctly validated the ticket, also giving a confirmation message.
        \item In the case of an expired ticket, the application correctly displayed an error message; in the case of a lost ticket, however, the application validated it.
        \item The ticket was validated and the available slot counter went below zero, as shown in figure \ref{counter_going_brr}.
        \item The ticket was not validated.
    \end{enumerate}
    \item \textbf{Issues}
    \begin{itemize}
        \item[-] Losing a ticket resulted in blocking the whole queue. Since the ticket was still valid but not scannable, every following booking was considered invalid, making the slot unavailable.
        \item[-] Scanning more tickets than the available store slots produced more accesses than allowed, resulting in a negative value of the slot counter.
    \end{itemize}
\end{enumerate}
\begin{figure}[H]
    \centering
    \includegraphics[width=\textwidth/3]{assets/screenshots/oh_no.jpg}
    \caption{The counter reached a negative value.}
    \label{counter_going_brr}
\end{figure}
\section{Project Inspection}
\subsection{Documentation Quality}
\subsubsection{RASD}
Inside the Overall Description section (2.1.1), the figure used to describe the internal structure of the system is too technical. Since the audience of this document is a potential CLup customer without any IT skills, concepts like databases, APIs, mobile operating systems and so on are meaningless and may confuse the reader.

The choice of manually inserting each store manager into the database is a waste of resources for the CLup system administrators. Consider, for example, an entire medium-sized city using CLup: for its stores (about 50) there is a corresponding number of managers (realistically at least 50 people); the system administrators would then need to manually create and maintain all these accounts, resulting in an enormous amount of work.

Finally, the use case diagrams lack detail and do not cover all the use cases described earlier.
\subsubsection{DD}
Inside the Architectural Design Overview section (2.1), the application server is designed to communicate with the Google Maps API (in order to provide a map representation). In our opinion, linking these two components is unrealistic and resource consuming, because the map needs to be rendered directly on the client machine.
In a real-world scenario, the client would directly contact the maps API without passing through the server.

In the deployment diagram, we could not understand why \textit{Spring Boot - MySQL} is deployed on the Database Server. Spring is a complete Java Enterprise backend framework, which interfaces with the database server through several data access interfaces (JPA, MongoDB and so on).

Finally, the Implementation, Integration and Test Plan section is well done and very detailed. The order in which components are implemented follows a well-defined logic, which we think is required in this kind of document.
\subsubsection{ITD}
The Adopted Development Frameworks and Languages section (3.1) is, in our opinion, very poor in information. The technology choices are not motivated; furthermore, there is a digression on the IDE used, which is not relevant to the scope of the section. The structure of the technologies used, such as Kotlin and Firebase, is not explained: for example, we do not know the whole Firebase interaction flow, the Kotlin integration with Android, and so on. On the other hand, meaningless details, such as the QR Code AES encryption (completely useless in our opinion), are explained and commented on at length (even with random code snippets at certain points).

Section 3.3 contains a tree description of an existing set of data in the Firebase DBMS. In our opinion this whole section belongs in the Design Document rather than in the implementation one: if the project is outsourced, the assigned developer needs to know the exact organization of the database; otherwise, the resulting product could be completely different from the designed one (in the Design Document the logical description of the data is omitted).

The code structure section contains only a list of components/packages without any explanation of their purpose and role. We expected at least a description of the application packages and their functionalities.
\subsection{Architecture Quality}
First of all, the architecture constraints and guidelines stated in the Design Document have not been followed. The original three-tiered system became a two-tiered one, because of the total lack of an application server or any backend platform. Furthermore, the application and presentation layers both reside on the client machine.

The application interfaces directly with the Firebase DBMS, which leads to a potentially dangerous security breach: decompiling the APK (a fairly easy job) lets anybody obtain Firebase credential keys, Google Developer keys and other sensitive data. In the current implementation it is even possible for a malicious user to edit, insert or delete data from the database simply by making an API request.
\subsection{Code Quality}
During our testing phase, we faced several critical implementation issues, such as exceptions being thrown (resulting in application crashes) instead of error messages being shown, and a lack of feedback on user actions.

The adoption of JUnit is completely pointless because it is never used as intended. Unit testing means automatic consistency tests of the model, which implies mocking objects, runtime assertions and so on. The only tests made were simply some \texttt{System.out.println()} statements on several objects. The tests are then explained using code snippets that are completely out of context (and without any form of description or comment).
Finally, code commenting is not consistent and, together with the total absence of code documentation, makes the code very difficult to read.
\newpage
\section{Conclusions}
After a complete analysis of the previous sections, we can safely conclude that this implementation project was carried out very hastily, often overlooking major considerations (some of them stated in the Design Document) that would have led to a solid and secure system.
\section{Effort Spent}
\begin{center}
    \begin{tabular}{ | c || c | }
        \hline
        Student & Time for Acceptance Testing \\ \hline
        Luca Pirovano & 10 hours \\ \hline
        Nicolò Sonnino & 10 hours \\ \hline
    \end{tabular}
\end{center}
\end{document}
\chapter{From LMS to Deep Learning}
\section{LMS of zero-mean time-series}
The time-series signal is non-stationary and can be analysed with the LMS algorithm. The algorithm is used to predict the value one step ahead based on the previous four values $y[n-4]$, $y[n-3]$, $y[n-2]$ and $y[n-1]$. As shown in Fig.\ref{fig:4_1_a}(a), the time-series is non-linear and zero mean. The performance of the basic LMS algorithm is illustrated in Fig.\ref{fig:4_1_a}(b). The predicted time-series is zero mean as well. However, the predicted series does not capture the original perfectly at the beginning. After time index 400, the predicted series converges, with only a small difference from the true series. In order to evaluate the performance appropriately, the metrics of mean squared error (MSE) and prediction gain ($R_p$) are measured. The MSE should be close to zero while the gain should be as large as possible. For the LMS algorithm, the MSE is 16.032$dB$ with $R_p=5.196$, which is a rather unimpressive performance.
\begin{figure}[htb]
	\centering
	\hspace{0.4cm}
	\begin{subfigure}[b]{0.4\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/41a1.eps}
	\end{subfigure}
	\hspace{0.4cm}
	\begin{subfigure}[b]{0.4\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/41a2.eps}
	\end{subfigure}
	\caption{LMS: zero mean time-series one-step prediction}
	\label{fig:4_1_a}
\end{figure}
\section{Activation function of predicted series}
\begin{figure}[htb]
	\centering
	\includegraphics[width=0.4\textwidth]{fig/4/42.eps}
	\caption{Dynamical perceptron: zero mean time-series one-step prediction}
	\label{fig:4_2}
\end{figure}
\noindent Due to the non-linearity of most real-life data, the activation function \texttt{tanh} is applied to each step of the AR(4) process, which can be expressed as
\begin{align}
	\hat y[n]&=\tanh(\mathbf{w}^T\mathbf{y})\label{eq:act}\\
	\text{where}\quad \mathbf{y}&=[y[n-4], y[n-3], y[n-2], y[n-1]]\notag
\end{align}
Fig.\ref{fig:4_2} depicts the output of the activation function against the original time-series. It is obvious that using \texttt{tanh} directly is inappropriate for predicting this time-series. The reason is that the range of \texttt{tanh} lies in $(-1,1)$, whereas the zero-mean time-series is bounded in $(-40,40)$. Therefore, in order to predict the series appropriately, the activation function needs to be scaled.
\section{Scaled activation function}
As analysed above, the activation function expressed in Eq.\ref{eq:act} is scaled by a factor $a$. Fig.\ref{fig:4_3_a} illustrates the performance for varying $a$ with zero-mean data. For a small value of $a=20$, the predicted $\hat y[n]$ is still below the range of the desired data, so the MSE is much larger than for the standard LMS algorithm. However, the maximum range of the prediction is restricted by the activation function, leading to a small error variance; thus, the prediction gain $R_p$ is larger than for the LMS. As $a$ increases up to 80, the MSE decreases, corresponding to an increasing prediction gain. However, if the value of $a$ exceeds 80, the prediction overshoots the true data, resulting in a decreasing $R_p$ and an increasing MSE. In conclusion, the optimal range of $a$ for predicting zero-mean data is $70\sim80$.
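To make the update rule concrete, the following minimal NumPy sketch implements a scaled-\texttt{tanh} LMS predictor of the form $\hat y[n]=a\tanh(\mathbf{w}^T\mathbf{y})$. It is only an illustrative sketch: the learning rate \texttt{mu} and the default scale \texttt{a} are assumptions chosen for readability, not the exact values used in the experiments reported here.
\begin{verbatim}
import numpy as np

def scaled_tanh_lms(y, a=80.0, mu=1e-5, order=4):
    """One-step-ahead prediction y_hat[n] = a * tanh(w^T y[n-4:n]).

    Setting a = 1 and replacing tanh with the identity recovers the
    plain LMS predictor; mu and a are illustrative assumptions."""
    w = np.zeros(order)
    y_hat = np.zeros(len(y))
    for n in range(order, len(y)):
        x = y[n - order:n]                   # the last four samples
        u = w @ x
        y_hat[n] = a * np.tanh(u)
        e = y[n] - y_hat[n]                  # prediction error
        # gradient of the squared error w.r.t. w contains tanh'(u)
        w += mu * e * a * (1.0 - np.tanh(u) ** 2) * x
    return y_hat, w
\end{verbatim}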
\begin{figure}[htb]
	\centering
	\hspace{-0.4cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43a1.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43a2.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43a3.eps}
	\end{subfigure}
	\\
	\hspace{-0.4cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43a4.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43a5.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43a6.eps}
	\end{subfigure}
	\caption{Scaled \texttt{tanh}: Prediction of zero-mean data}
	\label{fig:4_3_a}
\end{figure}\\
In addition, it is harder to predict the non-zero-mean data, as shown in Fig.\ref{fig:4_3_b}, which is reflected in a larger MSE and a smaller $R_p$. The optimal range of $a$ for non-zero-mean prediction is $40\sim 50$.
\begin{figure}[htb]
	\centering
	\hspace{-0.4cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43b1.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43b2.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43b3.eps}
	\end{subfigure}
	\\
	\hspace{-0.4cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43b4.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43b5.eps}
	\end{subfigure}
	\hspace{-0.2cm}
	\begin{subfigure}[b]{0.33\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/43b6.eps}
	\end{subfigure}
	\caption{Scaled \texttt{tanh}: Prediction of non-zero-mean data}
	\label{fig:4_3_b}
\end{figure}
\section{Non-linear prediction with bias}
The previous sections used the zero-mean time-series for one-step-ahead prediction. By adding a bias $b$ to the activation function, expressed as $\tanh(\mathbf{w}^T\mathbf{x}+b)$, the model can account for the mean automatically. Due to the small learning rate, the performance with bias is similar to that without bias if only a single epoch is run; thus, 100 epochs are used in order to continuously update the weights. Fig.\ref{fig.bia} plots the MSE curves with and without bias over 100 epochs of learning. Similar performance is obtained at the beginning of training: both curves plummet during the first epochs and then decrease slightly until convergence. Overall, the model with bias outperforms the one without bias and converges after about 60 epochs. Fig.\ref{fig.biasplot} shows the prediction from the last epoch of the model with bias against the original non-zero-mean series. Compared with the plots in Fig.\ref{fig:4_3_b} with amplitude $a=50$, the MSE is roughly halved and the prediction gain increases as well.
\begin{figure}[htb]
	\centering
	\hspace{0.4cm}
	\begin{subfigure}[b]{0.4\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/44a2.eps}
		\caption{Prediction MSE over 100 epochs}
		\label{fig.bia}
	\end{subfigure}
	\hspace{0.4cm}
	\begin{subfigure}[b]{0.4\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/44a1.eps}
		\caption{The last epoch prediction}
		\label{fig.biasplot}
	\end{subfigure}
	\caption{\texttt{tanh} with bias: Non-zero mean time-series one-step prediction}
	\label{fig:4_4}
\end{figure}
\section{Prediction with initialized weights}
With the LMS algorithm, the model cannot capture the original time-series at the beginning, which leads to a long convergence time. Fig.\ref{fig:4_5}(a) illustrates the standard LMS prediction for the non-zero-mean series, which performs unsatisfactorily, with a large error and a negligible prediction gain. The reason is that the initial weights are set to zero, which makes it difficult to predict non-zero-mean data. Fig.\ref{fig:4_5} also depicts the performance when the initial weights are pre-trained. After training on the first 20 data samples for 100 epochs, initial weights are obtained that speed up the learning process. With the pre-trained initial weights, the model performs better than the model trained for 100 epochs, with a smaller MSE of 5.162 and a slightly larger $R_p=16.342$.
\begin{figure}[htb]
	\centering
	\hspace{0.4cm}
	\begin{subfigure}[b]{0.4\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/45a1.eps}
	\end{subfigure}
	\hspace{0.4cm}
	\begin{subfigure}[b]{0.4\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/45a2.eps}
	\end{subfigure}
	\caption{Standard LMS and Dynamical perceptron: zero mean time-series prediction}
	\label{fig:4_5}
\end{figure}
\section{Back-propagation of Deep Network}
In a deep network, the neurons in each layer are fully connected to all outputs of the previous layer and all inputs of the next layer. The weights can be expressed as:
\begin{align}
	\mathbf{w}^{l}\Rightarrow w_{ij}^{(l)}\left\{
	\begin{array}{lr}
	1\leqslant l \leqslant L\text{ layers;}\\
	1\leqslant i \leqslant d^{(l-1)}\text{ inputs;} \\
	1\leqslant j \leqslant d^{(l)}\text{ outputs;} \\
	\end{array} \right.\\
	d^{(l)}\text{ is the number of neurons in layer } l\notag
\end{align}
Thus, the output of each neuron is calculated by summing all weighted inputs and applying the activation function:
\begin{align}
	z_{j}^l &=\sigma\Big(\sum_i w^l_{ij}a^{l-1}_{i}+w^l_{0j}\Big)\\
	\mathbf z^l &=\sigma(\mathbf w^l \mathbf a^{l-1}+\mathbf w^l_{0})
\end{align}
The weights are initialised randomly, which completes the forward-propagation pass of the first iteration. Back-propagation of the weights and biases requires a loss function, generally the mean squared error, to calculate the gradients:
\begin{align}
	E=\mathbb {E}\{\|x[n]-\hat x[n]\|^2 \}
\end{align}
The error at the output of a neuron, $\delta^l_j$, can then be expressed as
\begin{align}
	\delta^l_j&=\frac{\partial E}{\partial z^l_j}\notag\\
	&=\sum_i\frac{\partial z_i^{l+1}}{\partial z^l_j}\delta_i^{l+1}\notag\\
	&=\sum_i w_{ij}^{l+1}\delta^{l+1}_i\sigma'(z^l_j)
\end{align}
In addition, this error depends on the preceding weights, so the gradient with respect to a weight is
\begin{align}
	\frac{\partial E}{\partial w^l_{ij}}=a_{i}^{l-1}\delta_j^l
\end{align}
Therefore, each weight can be updated by
\begin{align}
	w^l_{ij}=w^l_{ij}-\eta\frac{\partial E}{\partial w^l_{ij}}=w^l_{ij}-\eta a_{i}^{l-1}\delta_j^l
\end{align}
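To connect these formulas to an implementation, the following minimal NumPy sketch trains a single-hidden-layer network with the update rule derived above. It is only an illustrative sketch: the layer sizes, learning rate and number of epochs are assumptions, not the settings used for the experiments in the next section.
\begin{verbatim}
import numpy as np

def train_mlp(X, y, hidden=16, eta=1e-2, epochs=100, seed=0):
    """Back-propagation for a 1-hidden-layer MLP with tanh units and MSE loss.

    X: (N, d) inputs, y: (N,) targets.  Sizes and hyper-parameters are
    illustrative assumptions."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass: a^1 = tanh(W1^T a^0 + b1), linear output layer
        a1 = np.tanh(X @ W1 + b1)
        out = a1 @ W2 + b2
        err = out - y[:, None]              # dE/d(out) for MSE (up to a constant)
        # backward pass: delta^l = (W^{l+1} delta^{l+1}) * sigma'(z^l)
        d2 = err / len(X)
        d1 = (d2 @ W2.T) * (1.0 - a1 ** 2)  # tanh'(z) = 1 - tanh(z)^2
        # gradient descent step: w <- w - eta * a^{l-1} delta^l
        W2 -= eta * a1.T @ d2;  b2 -= eta * d2.sum(axis=0)
        W1 -= eta * X.T @ d1;   b1 -= eta * d1.sum(axis=0)
    return W1, b1, W2, b2
\end{verbatim}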
\section{Deep Network}
The experiment uses 10 sinusoidal waves with different frequencies and amplitudes as the linear inputs. The output $y[n]$ obtained after applying the activation function to $\mathbf x[n]$ is highly non-linear, as shown in Fig.\ref{fig:4_7_1}. With the default parameters, three models (a single neuron with a linear activation, a single neuron with \texttt{tanh}, and a deep network with \texttt{relu}) are trained and tested to evaluate their performance.
\begin{figure}[htb]
	\centering
	\includegraphics[width=0.7\textwidth]{fig/4/47a1.png}
	\caption{Harmonic sine waves and non-linear data with noise}
	\label{fig:4_7_1}
\end{figure}\\
Fig.\ref{fig:4_7_2} shows the regression curves of the three models. The single-neuron models have similar performance, and their predictions are still linear. The deep network, in contrast, can predict a non-linear curve and generally captures the trend of the data. However, its performance is still unsatisfactory, which may be caused by the noise power.
\begin{figure}[htb]
	\centering
	\includegraphics[width=0.7\textwidth]{fig/4/47a2.eps}
	\caption{Performance of three models}
	\label{fig:4_7_2}
\end{figure}\\
Fig.\ref{fig:4_7_3} shows the training and test errors for each model. The single-neuron learning curves decrease rapidly and then stay flat, while the test error of the deep network starts at a lower point and then converges to a slightly lower value. However, the single-neuron models converge much more rapidly than the deep network, since their architectures are simple and easy to train.
\begin{figure}[htb]
	\centering
	\includegraphics[width=0.8\textwidth]{fig/4/47a3.eps}
	\caption{Learning curves of three models}
	\label{fig:4_7_3}
\end{figure}
\newpage
\section{Noise power and drawbacks of Deep Network}
The experiments are repeated with different noise powers, $\sigma^2=$ 0, 0.01 and 0.2. Fig.\ref{fig:4_8_b} shows the predictions and learning curves with $\sigma^2=0$. The deep network model outperforms the others during both training and testing: the series is accurately captured in training, with an acceptable error in testing. However, the single-neuron models still cannot predict the non-linear series.
\begin{figure}[htb]
	\centering
	\begin{subfigure}[b]{0.8\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/48b1.eps}
	\end{subfigure}
	\begin{subfigure}[b]{0.8\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/48b2.eps}
	\end{subfigure}
	\caption{Performance of three models, $\sigma^2=0$}
	\label{fig:4_8_b}
\end{figure}\\
When a small amount of noise, $\sigma^2=0.01$, is added, the performance of the deep network model degrades. Parts of the trend are captured inaccurately, resulting in a slightly increased error. Nevertheless, the deep network model still performs well compared with the single-neuron models.
\begin{figure}[htb]
	\centering
	\begin{subfigure}[b]{0.6\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/48a1.eps}
	\end{subfigure}
	\begin{subfigure}[b]{0.7\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/48a2.eps}
	\end{subfigure}
	\caption{Performance of three models, $\sigma^2=0.01$}
	\label{fig:4_8_a}
\end{figure}\\
If the noise power is $\sigma^2=0.2$, the deep network model suffers from over-fitting: it fits the noise during training, and consequently the test error keeps increasing.
\begin{figure}[H]
	\centering
	\begin{subfigure}[b]{0.6\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/48c1.eps}
	\end{subfigure}
	\begin{subfigure}[b]{0.7\textwidth}
		\centering
		\includegraphics[width=\textwidth]{fig/4/48c2.eps}
	\end{subfigure}
	\caption{Performance of three models, $\sigma^2=0.2$}
	\label{fig:4_8_c}
\end{figure}
\noindent In summary, across the different noise powers the single-neuron models are more robust and stable and also have a lower computational cost, even though their performance still needs to be improved. The deep network model, on the other hand, is easily affected by the noise power, leading to unstable performance. Moreover, because every neuron is fully connected to all outputs of the previous layer and all inputs of the next layer, the computational cost of training the deep network is quite large.
% 9.5.07 % This is a sample documentation for Compass in the tex format. % We restrict the use of tex to the following subset of commands: % % \section, \subsection, \subsubsection, \paragraph % \begin{enumerate} (no-nesting), \begin{quote}, \item % {\tt ... }, {\bf ...}, {\it ... } % \htmladdnormallink{}{} % \begin{verbatim}...\end{verbatim} is reserved for code segments % ...'' % \section{Empty Instead Of Size} \label{EmptyInsteadOfSize::overview} While comparing the result of the size() member function on STL containers against 0 is functionally equivalent to calling the empty() member function, empty() is to be preferred as it is always a constant-time operation, while size() on std::list may take linear time. This checker detects cases where the result of size() is compared against the constant 0. \subsection{Parameter Requirements} This checker does not require any parameters. \subsection{Non-Compliant Code Example} \begin{verbatim} #include <vector> bool f(const std::vector<int> &v) { if (v.size() > 0) // not OK: use !v.empty() instead return true; if (0 == v.size()) // not OK: use v.empty() instead return false; return false; } \end{verbatim} \subsection{Compliant Solution} \begin{verbatim} #include <vector> bool f2(const std::vector<int> &v) { if (!v.empty()) return true; if (v.empty()) return false; return false; } \end{verbatim} \subsection{Mitigation Strategies} \subsubsection{Static Analysis} Compliance with this rule can be checked using structural static analysis checkers using the following algorithm: \begin{enumerate} \item For each member function call, see if the called member function is named `size' and if the call is embedded in an expression that compares its return value against the constant 0. \item If the above check evaluates to true, emit a diagnostic. \end{enumerate} There are numerous ways to defeat this simple analysis, for instance by assigning the return value from size() to a variable, by comparing the return value against a variable that is always 0, or by calling size() through a member function pointer. Further, the analysis only looks for member functions named `size' but does not try to ascertain that it belongs to a `container' (as that is not something that can be checked reliably). \subsection{References} % Write some references % ex. \htmladdnormallink{ISO/IEC 9899-1999:TC2}{https://www.securecoding.cert.org/confluence/display/seccode/AA.+C+References} Forward, Section 6.9.1, Function definitions'' The reference for this checker is: S.~Meyers: ``Effective STL'', Item~3: ``Call \verb+empty+ instead of checking \verb+size()+ against zero.''
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{biblatex}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{amsmath}
\usepackage{upgreek}
\usepackage{amssymb}
\addbibresource{main.bib}
\setlength{\parskip}{1em}
\setlength{\parindent}{0em}

\title{Methods for particle tracking in zebrafish}
\author{Yngve Mardal Moe}
\date{November 2021}

\begin{document}

\maketitle

\section{Introduction}
This document describes the workflow for tracking particles in zebrafish embryos. The workflow is separated into three parts:
\begin{itemize}
    \item Preprocessing and particle tracking
    \item Estimating vasculature geometry
    \item Estimating velocities and pressures
\end{itemize}
We disregarded the results for all files where the vessel background image had a different shape from the particle videos.

\section{Preprocessing and particle tracking}
Tracking particles is done using \texttt{trackpy}, a Python library for PTV. TrackPy uses the method described in \cite{crocker1996methods}, which, roughly speaking, works as follows:
\begin{enumerate}
    \item Preprocess each frame using a band-pass filter. This removes high-frequency noise and low-frequency sensor differences. The high-pass part of the band-pass filter is chosen based on the particles' estimated size.
    \item Find possible particles. This will find a large number of false positives that need to be filtered away.
    \item Filter away false positive particles. We require that particles are separated by at least 6 pixels and that the particles have total mass (i.e. total integrated brightness) $\geq 50$.
    \item Link particles between frames by searching the next frame in a 16 pixel radius around the particle's position in the current frame. Our settings allow particles to be present in only every second frame.
    \item Exclude particles that are not present in at least two frames.
\end{enumerate}
Steps 1--3 are performed by the \texttt{trackpy.batch} function, step 4 is performed by the \texttt{trackpy.link} function and step 5 is performed by the \texttt{trackpy.filter\_stubs} function.

We noticed, however, that this default preprocessing is insufficient. We therefore perform the following preprocessing instead:
\begin{enumerate}
    \item For each pixel, compute its average value across all frames. This gives us a background-signal estimate.
    \item For each pixel, subtract the background signal and set all negative pixels equal to 0.
    \item Perform a morphological (greyscale) opening with a $3\times3$ structuring element.
    \item Perform a morphological (greyscale) closing with a $5\times5$ structuring element.
    \item Modify the dynamic range to be between 2 and 50 (clipping pixel values between 2 and 50).
    \item Transform each pixel value using the transform $T(x) = 255 (x - 2)/(50 - 2)$.
\end{enumerate}
By disabling the default preprocessing in \texttt{trackpy.batch} and using the above preprocessing instead, we get a better estimate of the particle positions; a sketch of this pipeline is shown below. To track particles with this pipeline, use the \texttt{scripts/track\_particles.py} script.
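The following minimal Python sketch illustrates the preprocessing and tracking pipeline described above. It is only a sketch: the feature \texttt{diameter}, the array handling and the exact way the built-in band-pass filter is disabled are assumptions chosen to mirror the description, not a copy of \texttt{scripts/track\_particles.py}.
\begin{verbatim}
import numpy as np
import scipy.ndimage as ndi
import trackpy as tp

def preprocess(frames):
    """Background subtraction + morphology + rescaling, as described above.

    frames: float array of shape (n_frames, height, width)."""
    background = frames.mean(axis=0)               # per-pixel average over time
    cleaned = np.clip(frames - background, 0, None)
    cleaned = ndi.grey_opening(cleaned, size=(1, 3, 3))
    cleaned = ndi.grey_closing(cleaned, size=(1, 5, 5))
    cleaned = np.clip(cleaned, 2, 50)               # restrict the dynamic range
    return 255 * (cleaned - 2) / (50 - 2)           # T(x) = 255 (x - 2)/(50 - 2)

def track(frames, diameter=5):
    """Locate, link and filter particle trajectories with trackpy."""
    features = tp.batch(preprocess(frames), diameter,
                        minmass=50, separation=6, preprocess=False)
    tracks = tp.link(features, search_range=16, memory=1)
    return tp.filter_stubs(tracks, 2)               # keep tracks seen in >= 2 frames
\end{verbatim}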
\section{Estimating vasculature geometry}
The next step is to estimate the vasculature geometry, specifically its shape, centerline and radius. To estimate the vasculature shape and centerline, we used a manual segmentation procedure (with the \texttt{scripts/roi\_generator.py} script).

\subsection{Estimating shape and centerline}
The manual segmentation process worked as follows:
\begin{enumerate}
    \item Add the first point on the vessel boundary.
    \item Add a new point on the vessel boundary. The outline will be linearly interpolated between this point and the previous point.
    \item Repeat the above step until the full outline is drawn. Once finished, the first and last points will be connected.
\end{enumerate}
After creating a segmentation mask (or region of interest, ROI), we need to parametrise the centerline. To accomplish this, we use the following process:
\begin{enumerate}
    \item Skeletonise the ROI (Lee's algorithm \cite{lee1994building}).
    \item Find the indices of all skeleton pixels.
    \item Compute the 2-nearest-neighbours (2NN) graph using the nonzero-pixel coordinates.
    \item Find the endpoints of the skeletonised ROI. This is done by convolving the skeletonised ROI with a $3\times3$-kernel consisting of only ones. The endpoints are then the pixels with a value equal to 2.
    \item Find the shortest path in the 2NN-graph between the two endpoints. This gives an ordered list of coordinates, which we can interpolate between to compute the centerline. We use nearest-neighbour interpolation for this, since that makes future steps easier to implement, but higher-order splines are also possible.
    \item Estimate the direction of the centerline at its two ends. We use a finite-difference approximation, but ``looking'' two pixels back instead of one to reduce the chance of a $45^\circ$-angle.
    \item Cut the ROI so the centerline endpoints touch the boundary of the ROI.
\end{enumerate}
There are two important notes to be aware of when creating the ROI. Firstly, the skeletonisation must return a single line with no bifurcations; otherwise, the centerline parametrisation will not work correctly. As a consequence, the ROI must also be drawn with no bifurcations. Secondly, you should not use a straight line when you ``cut'' the ROI. Instead, the cut should have an ``arrow''-like shape that points out of the ROI (see Figure \ref{fig:roi}). This is important for the skeletonisation procedure; otherwise, we may end up with a skeleton with slight bifurcations towards the corners of the ROI.
\begin{figure}
    \centering
    \begin{subfigure}{0.8 \linewidth}
        \centering
        \includegraphics[width=\linewidth]{figures/outline_before_clip.png} \\
        (1)
    \end{subfigure}
    \begin{subfigure}{0.8 \linewidth}
        \centering
        \includegraphics[width=\linewidth]{figures/Polygon_mask.png} \\
        (2)
    \end{subfigure}
    \begin{subfigure}{0.8 \linewidth}
        \centering
        \includegraphics[width=\linewidth]{figures/skeleton_with_outline.png} \\
        (3)
    \end{subfigure}
    \begin{subfigure}{0.8 \linewidth}
        \centering
        \begin{subfigure}{0.3 \linewidth}
            \centering
            \includegraphics[width=\linewidth]{figures/clipping_full_poly.png}
            (4-a)
        \end{subfigure}
        \begin{subfigure}{0.3 \linewidth}
            \centering
            \includegraphics[width=\linewidth]{figures/clipping_clipped_poly.png}
            (4-b)
        \end{subfigure}
    \end{subfigure}
    \begin{subfigure}{0.8 \linewidth}
        \centering
        \includegraphics[width=\linewidth]{figures/centerline_direction.png} \\
        (5)
    \end{subfigure}
    \caption{From top to bottom: (1) The manually marked ROI; each dot represents a vertex manually added by the user. (2) The polygonal mask generated from the outline. (3) The skeleton of the polygonal mask, with the ROI superimposed. (4-a) The polygonal mask before clipping; in pink, the estimated direction, and in yellow, the centerline normal used to clip the ROI. (4-b) The polygonal mask after clipping; in pink, the estimated direction, and in yellow, the centerline normal used to clip the ROI.
    (5) The directional components of the nearest centerline pixel.}
    \label{fig:roi}
\end{figure}

\subsection{Estimating centerline distances and vessel radius}
To estimate the distance to the centerline, we transform the skeletonised ROI with the Euclidean distance transform and (bi-)linearly interpolate subpixel differences in the particle positions. To estimate the radius of the blood vessel, we use the maximum distance to the centerline over all particles within the ROI.

\section{Estimating velocities}
To estimate the velocities, we need three things:
\begin{enumerate}
    \item The particle tracks
    \item The spatial pixel dimensions
    \item The time between each frame
\end{enumerate}
The particle tracks were already estimated with the \texttt{scripts/track\_particles.py} script. However, that script did not filter tracks by their length. Here, we filter those tracks further by removing all tracks where the particle was present for fewer than 5 frames. We also remove all tracks outside the ROI.

To compute the spatial pixel dimensions and the time between each frame, we use the image metadata, obtained with the \texttt{confocal\_microscopy.files.ims.load\_ims\_metadata} function. We then use the following code to get the image size, pixel size and timestep. The output of this code was compared with many of the \texttt{legend.docx} files provided by Federico, and the numbers always coincided.
\begin{verbatim}
import numpy as np
from datetime import datetime
from confocal_microscopy.files import ims

# Assumes the metadata dictionary has been loaded beforehand, e.g. with
# metadata = ims.load_ims_metadata(...) on the .ims file.

# Physical image size and pixel grid -> micrometres per pixel
image_size = ims.find_physical_image_size(metadata)[1:]
image_shape = [
    int(metadata["CustomData"]["Height"]),
    int(metadata["CustomData"]["Width"])
]
pixel_size = np.round(np.array(image_size) / image_shape, 3)

# Per-frame timestamps -> seconds between consecutive frames
timestamps = [
    datetime.fromisoformat(metadata["TimeInfo"][f"TimePoint{i+1}"])
    for i in range(7000)
]
timestamps = [t - timestamps[0] for t in timestamps]
timestamps = [(t.seconds + t.microseconds*1e-6) for t in timestamps]
timestamps = np.linspace(0, timestamps[-1], len(timestamps))
timestep = timestamps[-1]/len(timestamps)
\end{verbatim}
Based on this, we can transform the particle velocities from pixels per frame to $\upmu \text{m}/\text{s}$.

\subsection{Validating the results}
To validate the results from the particle tracking, we manually tracked particles in a variety of fishes and vessels. The automatic velocity estimates were generally satisfactory, with a high sensitivity\footnote{Also known as recall.} (most manually tracked particles were found) and a low-to-medium positive predictive value\footnote{Also known as precision.} (the automatic tracking found approximately twice as many tracks as the manual tracking). Some of the tracks found by the automatic algorithm and not manually were false positives, while others were actual particles not found during the manual tracking. The results from the manual tracking are shown in the \texttt{scripts/Early exploratory analysis.ipynb} notebook.
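To make the unit conversion explicit, the following small sketch turns a linked track table into per-track speeds in $\upmu\text{m}/\text{s}$ using the pixel size and timestep computed above. The column names (\texttt{x}, \texttt{y}, \texttt{frame}, \texttt{particle}) follow trackpy's conventions; everything else is an illustrative assumption rather than a copy of the analysis scripts.
\begin{verbatim}
import numpy as np

def track_velocities(tracks, pixel_size, timestep):
    """Mean speed per track in micrometres per second.

    tracks: DataFrame with columns x, y (pixels), frame and particle.
    pixel_size: (dy, dx) micrometres per pixel; timestep: seconds per frame.
    Illustrative sketch only."""
    speeds = {}
    for pid, group in tracks.sort_values("frame").groupby("particle"):
        dy = np.diff(group["y"].to_numpy()) * pixel_size[0]   # micrometres
        dx = np.diff(group["x"].to_numpy()) * pixel_size[1]
        dt = np.diff(group["frame"].to_numpy()) * timestep    # seconds
        speeds[pid] = np.mean(np.hypot(dx, dy) / dt)
    return speeds
\end{verbatim}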
\section{Estimating pressures}
There are several ways to estimate the pressure. If we first assume that we know the viscosity (more about that later), then we can estimate the pressure by
\begin{enumerate}
    \item using Poiseuille's law to estimate the pressure for each particle independently, and then computing the average pressure gradient, or
    \item assuming a velocity profile of the form $v(r) = b r^2 + a$ and computing the pressure gradient analytically.
\end{enumerate}
With option one, we estimate the pressure gradient that drives the motion of the $i$-th particle, $|\nabla p_i|$, with the following formula
\begin{equation}
    |\nabla p_i| = 3 v_i \mu / (R - r_i),
\end{equation}
where $v_i$ and $r_i$ are the velocity and the distance to the centerline of the $i$-th particle, $R$ is the radius of the vessel and $\mu$ is the blood viscosity. To compute the total pressure gradient, $|\nabla p|$, we compute the average of $|\nabla p_i|$.

If we instead assume a monomial-plus-intercept velocity profile, then we estimate the total pressure gradient, $|\nabla p|$, by
\begin{equation}
    |\nabla p| = 4 a \mu / R^2,
\end{equation}
where $a$ is the amplitude of the velocity profile. To estimate the parameters of the velocity profile, we use the fact that $v(R) = 0$ to remove one degree of freedom (the $b$ in $v(r) = b r^2 + a$). Then, we use Brent's method \cite{brent1973algorithms,virtanen2020scipy} to estimate the $a$ that minimises the squared error $\sum_i (v(r_i) - v_i)^2$.

To estimate the velocities and pressures for a single blood vessel, run the \texttt{scripts/Summary analysis.ipynb} notebook. To compute them for all blood vessels, run the \texttt{scripts/compute\_summaries.py} script.

\subsection{Estimating the viscosity}
There are two ways to estimate the viscosity. The first is the method of Lee et al. \cite{lee2017rapid}, who measured the viscosity as a function of tube hematocrit ($ht$), the fraction of red blood cells in the blood, in a rectangular channel of size $60\upmu \text{m} \times 240 \upmu \text{m}$. They found that the blood was Newtonian in such large channels, and fitted a quadratic model to the viscosity for $0 \leq ht \lessapprox 0.35$. Moreover, Lee et al. found that zebrafish blood plasma ($ht=0$) has approximately 1.5 times the viscosity of water.

The other way of estimating the viscosity of zebrafish blood is with the method of Pries et al. \cite{pries1992blood}, who combined many estimates of human blood viscosity and found that the viscosity varies greatly with the vessel radius and the discharge hematocrit (the fraction of red blood cells in the blood that is released through an open blood vessel). They fitted a complicated heuristic model based on a variety of properties of human blood, one of which was that human blood plasma has the same viscosity as water. This model is therefore not fully compatible with the findings of \cite{lee2017rapid}.

To conclude, there is no accurate measurement of zebrafish blood viscosity. The measurements by Lee et al. do not account for small vessels, where the red blood cells interact with the vessel walls, whereas the model by Pries et al. is based on human blood, which has different properties compared to the blood of zebrafish.

\printbibliography
\end{document}
\author{Anthony Odenthal - KE7OSN} \title{W7PRA Meeting Monday August 19th 2019 19:00} \documentclass[letter,11pt]{extarticle} \usepackage[margin=0.5in]{geometry} \usepackage{enumitem} \setlist{noitemsep} \usepackage{color} \usepackage{array} \usepackage{hyperref} \usepackage[compact]{titlesec} \usepackage{listings} \titlespacing*{\section}{0pt}{6pt}{6pt} \titlespacing*{\subsection}{0pt}{6pt}{6pt} \begin{document} \thispagestyle{empty} \begin{center} \textbf{W7PRA\\Meeting Agenda} \vspace{0.33cm} \end{center} \begin{center} \begin{tabular}{| m{3.0cm} | m{7.5cm} |} \hline \textbf{Date and Time} & Monday 19\textsuperscript{th} August 2019 at 19:00 \\ \hline \textbf{Venue} & Tommy's 4\textsuperscript{th} Street Bar \& Grill \\ \hline \end{tabular} \end{center} \begin{center} \textbf{Please pay your bill by 8pm} \end{center} \subsection*{Agenda} \begin{enumerate} \item Call to Order \item Introductions \item Secretary's Report - Ted \item Treasurer's Report - Andy \\ Balances \begin{tabular}{|l|l|} \hline OSUCU Checking & \$396.70 \\ \hline OSUCU Savings & \$580.38 \\ \hline Paypal & \$184.00 \\ \hline Square & \$0.00 \\ \hline Cash & \$3.65 \\ \hline \textbf{Total} & \textbf{\$1164.73} \\ \hline \end{tabular} \\ \\ % Expenses % \begin{description} % \item[Liability insurance] \$200.00 % \end{description} \item Trustee's Report - Mike \begin{itemize} \item Cline Butte - Now DCS 026, new build and allstar link \item Walker Mtn - 147.14 CTCSS 141.3 on the air with temp antenna \item King now has a backup receiver CTCSS 88.5, primary DCS 023 \end{itemize} % \item Bunny Hunt on hiatus - Mike \item Previous Events/Old Business: \begin{itemize} \item Field Day June 22\textsuperscript{nd}-23\textsuperscript{rd} \end{itemize} \item New Business / Upcoming Events \begin{itemize} \item FEMA test 5330.5KHz 1200-1300 July 15, 16, 17, \& 18 Whidbey Island near Oak Harbor \item da Vinci Days \& Graand Kinetic Challenge July 20-21 \item BCARES Tech class \& exam Oct 11-13 \end{itemize} \item Recap of the last month in Radio \begin{itemize} \item Solar Activity \item DX \item Achievements? \end{itemize} \item Presentation - 7QP Recap - Mike \end{enumerate} %\newpage Current Officers \\ \begin{tabular}{|l|l|} \hline Chair & Anthony Odenthal KE7OSN \\ \hline Secretary & Ted Mitchell W7RTM \\ \hline Treasurer & Andy Prentice W7AAU \\ \hline Trustee & Mike Shelby W7RIS \\ \hline Board & Bill Powell KE7ZKH \\ \hline Board & Matt Cawrse KF7DVN \\ \hline Board & Drew Terrill W3HES\\ \hline \end{tabular} \subsection*{\color{red}{Next Meeting: Monday 19\textsuperscript{th} August 2019 at 19:00.}} \end{document}
\documentclass{article} \usepackage{MinionPro} \usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry} \usepackage{lipsum} \usepackage{url} \usepackage{hyperref} \def\FormatName#1{% \def\myname{KC Sivaramakrishnan}% \def\name{#1}% \ifx\name\myname \textbf{#1}% \else #1% \fi } \newcommand{\loud}[1]{\textbf{\textit{#1}}} \newcommand{\R}[1]{~\\[-2mm] \noindent \textbf{#1.~~}} \begin{document} \noindent \Large \textsc{KC Sivaramakrishnan} \hfill \textsc{Research Statement} \normalsize \noindent \hrulefill In recent years, there has been a widespread adoption of both multicore and cloud computing. Multicore processors have become the norm in mobile, desktop and enterprise computing, with an increasing number of cores being fitted on a chip with every successive generation. Cloud computing has paved the way for companies to rent farms of such multicore processors on a pay-per-use basis, with the ability to dynamically scale on demand. Indeed, many real-world services for communication, governance, commerce, education, entertainment, etc., are routinely exposed as a web-service that runs in third-party cloud compute platforms such as Windows Azure and Amazon's EC2. These services tend to be concurrently accessed by millions of users, increasingly through multicore-capable mobile and desktop devices. As a result, there are a growing proportion of developers who must tackle the complexity of programming for a cloud of multicore processors. In particular, they must ensure correct application behavior in the face of asynchrony, concurrency, and partial failures, all the while providing good scalability as well as minimizing the user's perception of latency. Reasoning about concurrent programs is inherently a difficult endeavor. For each new concurrent thread of control added to a program, there is an exponential growth in the number of potential schedules. This greatly increases the chance of subtle concurrency bugs evading the programmer during software development and testing, only to appear in production environments with devastating consequences. As we head into multicore compute platforms with 1000+ cores, the existing ad hoc programming models and systems are simply not equipped for safe and scalable concurrent programming, where the emerging concerns such as heterogeneity in hardware, partial failures of cores, and secure computation with third-party code and infrastructure warrant a holistic approach to software development. The goal of my current research is to improve the state of the art in developing correct and scalable programs for loosely coupled massively scalable systems through rigorous and declarative programming language abstractions. While I am broadly interested in concurrency and parallelism, I am particularly drawn to architecture-induced complexities such as absence of cache coherence and eventual consistency, which often lead to programs where the guarantee of correctness and concision take a back seat to performance. I aim to address several key challenges crosscutting disparate domains, such as (i) developing concurrent programming platforms to enable building applications that seamlessly scale on heterogeneous compute clouds, (ii) rigorously verifying the correctness of parallel and distributed software, to eliminate concurrency and security bugs, which are notoriously difficult to detect in large-scale deployments, and (iii) incorporating emerging concerns such as security and energy into the programming model. 
\section*{Previous Work} Towards this goal, as a part of my PhD thesis, I have developed the MultiMLton platform, a parallel and distributed extension of MLton Standard ML compiler and runtime, which has been part of several successful research projects~\cite{mmpar, mmgc, KC_MARC12, Ziarek11, Ziarek_ICFP09, isolates, MMJFP}. My work on MultiMLton has been recognized by Purdue University with the 2014 Maurice H. Halstead award for outstanding research in Software Engineering and a best paper award at the 2012 Many-core Architecture Research Community (MARC) symposium at RWTH Aachen. The goal of MultiMLton is to allow the programmer to declaratively develop concurrent programs, without worrying about the issues of architectural nuances and achieving good scalability. There are numerous challenges to realizing these goals, some of which I have addressed as a part of my PhD research. \R{Programming model} The programming model of MultiMLton is a mostly functional language combined with lightweight threading, where threads primarily interact over first-class message-passing channels. On manycore systems, the cost of creating and managing interactive asynchronous computations can be substantial due to the scheduler and memory management overheads. My work in JFP'14~\cite{MMJFP} and DAMP'10~\cite{mmpar} describes a novel \emph{parasitic} threading system for short-lived interactive asynchronous computation. The key feature of parasitic threads is that they execute only when attached to a host lightweight thread, sharing the stack space, and hence amortizing the overheads. In MultiMLton, parasitic threads play a key role in the efficient realization of composable asynchronous and heterogeneous communication protocols~\cite{Ziarek11}. \R{Programming next-generation many-core processors} With ever increasing core count on a chip, processors are soon to hit a "coherence wall", where the performance and design complexity of cache coherence hardware limits multicore scalability. Architectures, such as Intel's 48-core Single-chip Cloud Computer (SCC), completely shun hardware cache coherence, and instead, provide fast hardware message-passing interconnect and the ability to manage coherence in software. Although a shared memory system, SCC is typically programmed as a cluster of machines on a chip, due to the lack of suitable abstractions to deal with absence of cache coherence. The question is whether we can program this \emph{cloud} of cores on a chip in the way we program shared-memory multicore processors. Such a system will greatly simplify developing programs for the massively scalable SCC processor. My ISMM'12~\cite{mmgc} work describes a novel runtime system for MultiMLton, which utilizes a core-local partitioned heap scheme to circumvent the absence of cache coherence. This runtime system not only enables shared memory programming of non-cache-coherent systems, but by enabling concurrent core-local heap collections, exhibits immense scalability benefits even on cache-coherent architectures such as 48-core AMD magny-cours and 864-core Azul Vega3 architectures. My MARC'12~\cite{KC_MARC12} work describes an extension of this design to exploit software support for cache coherence and hardware interconnect for inter-thread message passing. Across our benchmarks, we measured that this new runtime system design enables more than 99\% of the memory access to be potentially cached. This work won the \loud{Best Paper Award} at MARC'12. 
\R{Migrating to the cloud} The MultiMLton programming model combines the benefits of functional programming with synchronous message passing, and offers an attractive model for expressing concurrency. However, synchrony is at odds with the high latencies of environments like the cloud, whereas switching to an explicit asynchronous programming model complicates reasoning. The question is whether we can express programs for high-latency environments \emph{synchronously}, but speculatively discharge them asynchronously, and ensure that the observable behavior mirrors that of the original synchronous program. My PADL'14~\cite{RxCML} work identified that the necessary and sufficient condition for divergent behavior (mis-speculation) is the presence of a happens-before cycle in the dependence relation between communication actions. Utilizing this idea, I have built an optimistic concurrency control mechanism for concurrent ML programs, on top of MultiMLton, capable of running in compute clouds. Our extensive experiments on Amazon EC2 validate our thesis that this technique is quite useful in practice.

Apart from my PhD research, I have had the opportunity to explore questions relating to concurrent programming for multicore systems during my internships with industrial research labs. At Microsoft Research Cambridge, I developed a new concurrency substrate for the Glasgow Haskell Compiler (GHC) that allows Haskell programmers to safely and composably describe schedulers for Haskell threads in Haskell on scalable multi-core architectures~\cite{CompSA}. At Samsung Research, I prototyped the compiler analysis and the runtime system for a new clean-slate manycore programming language~\cite{SPARTA}. These experiences have provided me with insights into the issues faced by developers in today's parallel compute systems, and also the necessary expertise to realize my ideas in complex, real-world software platforms.

\section*{Current Work}

After finishing my PhD~\cite{thesis} at Purdue University, I joined the University of Cambridge as a Research Associate under the OCaml Labs initiative, where I continue to explore the development of concurrent and parallel functional programming language systems. I lead the Multicore OCaml project, which aims to add concurrent and parallel programming support to the industrial-strength OCaml programming language. A particularly distinguishing feature of this effort is that instead of baking the concurrency support into the compiler and exposing libraries for concurrent programming, we extend OCaml with support for linear algebraic effects and handlers~\cite{effects}. Algebraic effects and handlers provide a modular abstraction for expressing effectful computation, allowing the programmer to separate the expression of an effectful computation from its implementation. In this system, concurrency is just another effect, modularly expressed through algebraic effects and their handlers. This design allows the programmer to describe thread schedulers and concurrent data structures as OCaml libraries, keeping the compiler lean, while providing the flexibility of swapping in alternate but equivalent implementations. Given that OCaml has an excellent JavaScript backend~\cite{js_of_ocaml}, the addition of concurrency support to OCaml has the potential to greatly simplify the implementation of highly-concurrent real-world web services.
\section*{Future Work}

In my future work, I wish to build on the MultiMLton programming model to address the emerging and future challenges in programming manycore and large-scale distributed clusters. Nowadays, an increasing number of companies deploy their web services on geo-distributed third-party cloud platforms such as Amazon EC2, Windows Azure, Heroku, Google App Engine, etc. These cloud platforms are becoming more heterogeneous, incorporating many-core general-purpose processors (potentially with multiple \emph{coherence domains}) along with GPGPUs and FPGAs, providing the ability to offload expensive computation to specialized hardware.

Traditionally, developers have relied on the infrastructure providing strong consistency guarantees such as sequential consistency, linearizability and distributed transactions in order to build correct applications for scalable platforms. Strong consistency provides a semblance of a single consistent view of memory, where the operations appear to be performed sequentially in some linear order. Unfortunately, providing such strong guarantees is unviable in the face of modern multicore processors and geo-distributed massively scalable services. Modern multicore processors and concurrent programming languages expose to the developer subtle relaxed memory behaviors that arise from hardware and compiler optimizations. State-of-the-art distributed stores also resort to weak consistency guarantees, where the different geo-distributed replicas eventually converge to a uniform state at some arbitrary point in the future. In particular, no assumptions can be made about the global state of the system at any particular time. Such weakly consistent systems make a well-known trade-off: developers avoid the latency, contention and availability costs associated with achieving strong consistency, in exchange for weaker consistency guarantees about the data; the semblance of a uniform consistent view of memory is lost.

Additionally, since the web applications run on third-party compute platforms, the issue of data privacy and security has also become a concern. In the cloud, an application typically shares resources such as disk, network and processor time with applications from other customers running on the same server. Since critical services such as health-care and banking services are increasingly offered online, any vulnerability in the software in such an environment could lead to a potentially serious security breach. Existing programming models for these platforms are ill-equipped to address these consistency and security challenges.

To address these issues, I have developed Quelea~\cite{quelea}, a rigorous programming model that elegantly combines shared-memory parallelism with datacenter-wide distribution. The Quelea programming model is exposed as a domain-specific language for describing the semantics of replicated datatypes in the same vein as abstract datatypes, along with a novel contract language for declaratively expressing consistency and security obligations. This clean separation of consistency and security properties from the datatype semantics permits the same program to be compiled for a variety of backends including multicore processors and distributed stores. A key aim of Quelea is to remain portable and agnostic to any particular implementation. The developers describe what minimal consistency guarantees are required for the application, but not how to achieve them, which is the concern of individual backend implementations.
The separation of consistency concerns and datatype semantics allows Quelea to be realized as a shim layer on top of a wide variety of eventually consistent data stores. The stores provide availability, durability and distribution guarantees, while our system simply upgrades the safety properties. This layered approach is geared towards interoperability and broad applicability, providing clearer semantics and stronger safety properties for existing software systems. The declarative description of safety properties in Quelea lends itself to rigorous formal verification of distributed software, eliminating bugs and vulnerabilities prior to deployment and preventing the privacy and monetary losses associated with software bugs in the wild.

I intend to build Quelea on top of the highly successful Mirage unikernel, which is part of XenServer and the Linux Foundation, with millions of deployments in the cloud already. The Mirage project is led from the Cambridge Computer Lab, where I have chosen to pursue my post-doctoral research. A symbiotic relationship with Mirage will help the rapid adoption of Quelea, and greatly impact the way in which cloud services are built and deployed.

\bibliographystyle{my_ieeetr}
\small
\bibliography{all}

\end{document}
{ "alphanum_fraction": 0.8291609173, "avg_line_length": 59.9173228346, "ext": "tex", "hexsha": "2cd21eb50cf09cc6e1a16d5ace08fc16a428ded6", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2020-06-16T11:40:01.000Z", "max_forks_repo_forks_event_min_datetime": "2016-09-21T14:55:19.000Z", "max_forks_repo_head_hexsha": "6435101f054f9911186ac965a39067eb1b289f9d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SaiVK/kayceesrk.github.io", "max_forks_repo_path": "research/research.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "6435101f054f9911186ac965a39067eb1b289f9d", "max_issues_repo_issues_event_max_datetime": "2019-09-16T08:49:29.000Z", "max_issues_repo_issues_event_min_datetime": "2015-03-26T00:11:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SaiVK/kayceesrk.github.io", "max_issues_repo_path": "research/research.tex", "max_line_length": 92, "max_stars_count": 3, "max_stars_repo_head_hexsha": "01dfa9a2b14650f9a45072a56aa91cc942e44c28", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kayceesrk/kayceesrk.github.io", "max_stars_repo_path": "research/research.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-16T11:39:36.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-30T02:38:32.000Z", "num_tokens": 3128, "size": 15219 }
% -*- mode: latex; coding: utf-8; TeX-master: ../thesis -*-
% !TEX TS-program = pdflatexmk
% !TEX encoding = UTF-8 Unicode
% !TEX root = ../thesis.tex

\section{Learning Functional Programming}

Within the last decade, concepts from functional programming have been brought into the daily life of almost every programmer. Many events contributed to this gain in popularity: In 2007, C\# 3.0 was released, which introduced lambda expressions and laid the foundations for turning C\# into a hybrid Object-Oriented / Functional language\autocite{csharp-functional}. Two years later, Ryan Dahl published the initial version of Node.js, eliminating JavaScript's ties to the browser and introducing it as a server-side programming language, increasing the adoption of JavaScript further. In 2013, Java 8 was released and brought support for lambda expressions and streams. Within the same time frame, Python has been rapidly growing in popularity\autocite{python-popularity}. Further, many new multi-paradigm programming languages have been introduced, including Rust, Kotlin, Go and Dart. They have all had functions as first-class citizens since their initial release.

With these developments, it can be said that functional programming has emerged from niche use-cases and academia to truly arrive in the wider programming community. For example Rust, the `most loved programming language' for 5 years in a row (2016--2020) according to the Stack Overflow Developer Survey\autocite{rust-loved}, has been significantly influenced by functional programming languages\autocite{rust-functional}. Further, in idiomatic Rust code, a functional style can be clearly observed\footnote{A simple example for this may be that variables are immutable by default.}.

Learning a purely functional programming language increases fluency with these concepts and teaches a different way to think about and approach problems when programming. Due to this, many people recommend learning a functional programming language\autocite{blog1-funcprog}\autocite{blog2-funcprog}\autocite{blog3-funcprog}\autocite{blog4-funcprog}, even if one may not end up using that language at all\autocite{quora-funcprog}.

Most literature about functional programming, including academic publications and online resources like blogs, contains code examples written in Haskell. Further, according to the Tiobe Index\autocite{tiobe-index}, Haskell is also the most popular purely functional programming language\autocite{comparison-functional-languages}.

\section{Haskell}

Haskell, the \textit{lingua franca} amongst functional programmers, is a lazily-evaluated, purely functional programming language. While Haskell's strengths stem from features like its advanced type system, pattern matching and more, these features are also what make Haskell famously hard to learn\autocite{haskell-hard-one}\autocite{haskell-hard-two}\autocite{haskell-hard-three}\autocite{haskell-hard-four}.

Beginner Haskell programmers face a very distinctive challenge in contrast to learning a new, non-functional programming language: Not only do they need to learn a new language with an unusual syntax (compared to imperative or object-oriented languages), they also need to change their way of thinking and reasoning about problems.
For example, consider the renowned quicksort implementation from the Haskell Introduction Page\autocite{haskell-quicksort}:

\begin{listing}
\begin{haskellcode}
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where
        lesser  = filter (< p) xs
        greater = filter (>= p) xs
\end{haskellcode}
\caption{Quicksort implementation in Haskell}\label{code:haskell-quicksort}
\end{listing}

While this is only a very short and clean piece of code, these 6 lines already pose many challenges to inexperienced Haskellers:

\begin{itemize}
  \item The function's signature with no `fn' or `func' keyword as it often appears in imperative languages
  \item The pattern matching, which would be a `switch' statement or a chain of `if / else' conditions
  \item The deconstruction of the list within the pattern matching
  \item The functional nature of the program, passing `(< p)' (a function returning a function) to another function
  \item The function call to `filter' without parenthesised arguments and no clear indication of which arguments it takes and which types are returned
\end{itemize}

Although some of these constructs also exist in imperative or object-oriented languages, the cumulative difference is not to be underestimated and adds to Haskell's steep learning curve.

\section{Goals}

As demonstrated in the example above, learning a new paradigm and syntax at the same time can be daunting and discouraging for novices. The entry barrier for functional programming should be lowered by using a modern, multi-paradigm language with a clear and familiar syntax. The functional programming beginner should be able to focus on the paradigm first, and then change to a language like Haskell to fully get into functional programming.

To achieve this goal, this thesis will consist of two parts. In the first part, writing functional code will be made as easy as possible. This means that a programming language with an easy and familiar syntax should be chosen. Ideally, this language should already support functions as first-class citizens. Additionally, it should be statically typed, as a static type system makes it easier to reason about a program and can support the programmer while writing code.

In the second part, a linter will be created to check code for non-functional statements. To achieve this, a definition of what functional purity means has to be selected and a ruleset has to be worked out and implemented in a static analysis tool.

\section{Why Go}\label{sec:why-go}

The language of choice for this task is Go, a statically typed, garbage-collected programming language designed at Google in 2009\autocite{golang-publish}. With its strong syntactic similarity to C, it should be familiar to most programmers.

Go strives for simplicity, and its syntax is extremely small and easy to learn. For example, the language consists of only 25 keywords and purposefully omits constructs like the ternary operator (<bool> ? <then> : <else>) as a replacement for the longer `if <bool> \{ <then> \} else \{ <else> \}' for clarity. `A language needs only one conditional control flow construct'\autocite{go-ternary}, and this also holds true for many other constructs. In Go, there is usually only one way to express something, improving the clarity of code. Due to this clarity and unambiguity, the language is a perfect fit to grasp the concepts and trace the inner workings of functional programming.
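To make the contrast with a familiar syntax concrete, the following sketch shows how the same algorithm might be written in Go. The listing is illustrative only (it is not taken from an existing library), and it has to fix the element type to \mintinline{go}|int|, since the polymorphic signature of the Haskell version cannot be expressed without generics.

\begin{code}
\begin{gocode}
package main

import "fmt"

// filter returns the elements of xs for which keep returns true.
func filter(xs []int, keep func(int) bool) []int {
    var out []int
    for _, x := range xs {
        if keep(x) {
            out = append(out, x)
        }
    }
    return out
}

// quicksort mirrors the structure of the Haskell version, but only
// works on int slices.
func quicksort(xs []int) []int {
    if len(xs) == 0 {
        return nil
    }
    p, rest := xs[0], xs[1:]
    lesser := filter(rest, func(x int) bool { return x < p })
    greater := filter(rest, func(x int) bool { return x >= p })
    return append(append(quicksort(lesser), p), quicksort(greater)...)
}

func main() {
    fmt.Println(quicksort([]int{3, 1, 2})) // prints [1 2 3]
}
\end{gocode}
\caption{A possible Go counterpart of the Haskell quicksort (illustrative sketch)}
\end{code}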
It should be easy to read code and understand what it does without a lot of experience with the language.

There are, however, a few downsides to using Go. Currently, Go does not have polymorphism, which means that functions always have to be written for specific types. Due to this, Go also does not include common list processing functions like `map', `filter', `reduce' and more\footnote{Although Go does have some polymorphic functions like `append', these are specified as built-in functions in the language and not user-defined.}. Further, Go does not have a built-in `list' datatype. However, Go's `slices' cover a lot of use cases for lists already. Section~\ref{sec:go-slices} covers this topic in more detail.

\section{Existing Work}

With Go's support of some functional aspects, patterns and best practices have emerged that make use of functional programming constructs. For example, in the \textit{net/http} package of the standard library, the function

\begin{gocode}
func HandleFunc(pattern string, handler func(ResponseWriter, *Request))
\end{gocode}

is used to register functions for HTTP server handling\autocite{go-http-doc}:

\begin{code}
\begin{gocode}
func myHandler(w http.ResponseWriter, r *http.Request) {
    // Handle the given HTTP request
}

func main() {
    // register myHandler in the default ServeMux
    http.HandleFunc("/", myHandler)
    http.ListenAndServe(":8080", nil)
}
\end{gocode}
\caption{Go web server handler function}
\end{code}

Using functions as function parameters or return types is a commonly used feature in Go, not just within the standard library. Furthermore, design patterns have emerged within the community that use functional concepts. An example of this is `functional options'.

\subsection{Functional Options}

The `functional options' pattern has been outlined in Dave Cheney's blog post `Functional options for friendly APIs'\autocite{functional-options} and is a great example of how to use the support for multiple paradigms.

The basic idea with functional options is that a type constructor receives an arbitrary (0--n) number of options:

\begin{code}
\begin{gocode}
func New(requiredSetting string, opts ...option) *MyType {
    t := &MyType{
        setting:  requiredSetting,
        featureX: false,
    }

    for _, opt := range opts {
        opt(t)
    }

    return t
}

type option func(t *MyType)
\end{gocode}
\caption{Constructor with functional options}
\end{code}

These options can then access the instance of \mintinline{go}|MyType| to modify it accordingly, for example:

\begin{code}
\begin{gocode}
func EnableFeatureX() option {
    return func(t *MyType) {
        t.featureX = true
    }
}
\end{gocode}
\caption{Example for a functional option}
\end{code}

To enable feature X, `New' can be called with that option:

\begin{gocode}
t := New("required", EnableFeatureX())
\end{gocode}

With this pattern, it is easy to introduce new options without breaking old usages of the API. Furthermore, the typical `config struct' pattern can be avoided and meaningful zero values can be set. A more extensive example of how functional options are implemented and used can be found in Appendix~\ref{appendix:funcopts}.

\begin{quote}
In summary

\begin{itemize}
  \item Functional options let you write APIs that can grow over time.
  \item They enable the default use case to be the simplest.
  \item They provide meaningful configuration parameters.
  \item Finally they give you access to the entire power of the language to initialize complex values.
\end{itemize}\autocite{functional-options}
\end{quote}

While this is a great example of what can be done with support for functional concepts, a purely functional approach to Go has so far been discouraged by the core Go team, which is understandable for a multi-paradigm programming language. However, multiple developers have already researched and tested Go's ability to do functional programming.

\subsection{Functional Go?}

In his talk `Functional Go'\autocite{func-go-talk}, Francesc Campoy Flores analysed some commonly used functional language features in Haskell and how they can be copied to Go. Ignoring speed and stack overflows due to non-existent tail call optimisation\autocite{go-tco}, the main issue is with the type system and the missing polymorphism.

\subsection{go-functional}

In July 2017, Aaron Schlesinger, a Go programmer for Microsoft Azure, gave a talk on functional programming with Go. He released a repository\autocite{go-functional} that contains `core utilities for functional Programming in Go'. The project is currently unmaintained, but showcases functional programming concepts like currying, functors and monoids in Go. In the `README' file of the repository, he also states that:

\begin{quote}
Note that the types herein are hard-coded for specific types, but you could use code generation to produce these FP constructs for any type you please!
\autocite{go-functional-readme}
\end{quote}

\section{Verdict}

The aforementioned projects showcase the main issue with functional programming in Go: the helper functions that are prevalent in functional languages are missing, and they currently cannot be implemented in a generic way, as the short sketch below illustrates. To make functional programming more accessible in Go, this thesis will research which higher-order functions are used most and implement them with a focus on usability. Furthermore, to learn purely functional programming, a list of rules for pure functional code should be curated and implemented in a static code analysis tool. This tool can then be used to check existing code and report constructs that are not functional.
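The following small sketch (hypothetical code, not taken from the projects discussed above) illustrates the duplication that the missing polymorphism forces: the same `map' logic has to be repeated for every concrete element type.

\begin{code}
\begin{gocode}
package main

import (
    "fmt"
    "strings"
)

// mapInts applies f to every element of xs.
func mapInts(xs []int, f func(int) int) []int {
    out := make([]int, 0, len(xs))
    for _, x := range xs {
        out = append(out, f(x))
    }
    return out
}

// mapStrings is an identical copy of mapInts, specialised to strings,
// because the element type cannot be abstracted over.
func mapStrings(xs []string, f func(string) string) []string {
    out := make([]string, 0, len(xs))
    for _, x := range xs {
        out = append(out, f(x))
    }
    return out
}

func main() {
    fmt.Println(mapInts([]int{1, 2, 3}, func(x int) int { return x * x }))
    fmt.Println(mapStrings([]string{"go", "fp"}, strings.ToUpper))
}
\end{gocode}
\caption{Type-specific `map' helpers (illustrative sketch)}
\end{code}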
{ "alphanum_fraction": 0.7879955063, "avg_line_length": 50.4534412955, "ext": "tex", "hexsha": "d41607990f0ebd6113856e3250f43775ccf3d2f3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a8d61cce7291e6cb908ae1632dba2c246cbf19d2", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "tommyknows/bachelor-thesis", "max_forks_repo_path": "thesis/chapters/20_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a8d61cce7291e6cb908ae1632dba2c246cbf19d2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "tommyknows/bachelor-thesis", "max_issues_repo_path": "thesis/chapters/20_introduction.tex", "max_line_length": 177, "max_stars_count": null, "max_stars_repo_head_hexsha": "a8d61cce7291e6cb908ae1632dba2c246cbf19d2", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "tommyknows/bachelor-thesis", "max_stars_repo_path": "thesis/chapters/20_introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2793, "size": 12462 }
\documentclass[a4paper]{article} \input{temp} \setcounter{section}{-1} \begin{document} \title{Mathematical Biology} \maketitle \newpage \tableofcontents \newpage \section{Miscellaneous} Course notes online: Julia Gog(www.damtp.cam.ac.uk/research/dd/teaeching, 2013-2017), Peter Haynes(www.damtp.cam.ac.uk/user/phh/mathbio.html) Moodle page: Handwritten notes by lecture; Matlab/Python programming examples; solved exercises. This course involves 3 models: Deterministic temporal models (11 lectures), Stochastic temporal models (5 lectures), Deterministic spatio-temporal models (8 lectures). The focus of this course is biochemical reactions and population processes. (some introductory speech) \begin{eg} (1, Transient population) If we use $n(t)$ to denote the size of a population, we may want to model $\frac{dn}{dt} = f(n)$ by an ODE, or maybe if we have several components $\mathbf{n}(t)$ then we may want to model $\frac{d\mathbf{n}}{dt} = \mathbf{f}(\mathbf{n})$ which is a system of ODEs. Note that although $n$ should be an integer (discrete), when $n >> 1$ we may model it with continuous equations. \end{eg} \begin{eg} (2) $n \to \partial_t P(n,t) = W \cdot P(n,t)$, Markov processes. Here $P(n,t)$ is a probablity(?), $n$ being a state, and $W$ being the transition matrix. \end{eg} \begin{eg} (3)\\ If we include spatial aspect, we may have $n(t)$ becoming $n(x,t)$. Now there might be 'diffusion': $\partial_t n(x,t) = f(n(x,t)) + D \nabla^2 (x,t)$ where $\nabla^2 = \frac{\partial^2}{\partial x^2}$; this is the reaction-diffusion equation. \end{eg} \newpage \section{Birth-death models} The general idea is that we have a population of size $n(t)$; per capita per unit time, we have births of rate $b$ and deaths of rate $d$. Then we can write $$n(t+\Delta t) = n(t) + bn\Delta t - dn \Delta t$$ So we have an ODE $$\frac{dn}{dt} = (b-d)n = rn$$ where $r = b-d$. This has an easy solution $n(t) = n_0 e^{rt}$, assuming $r$ is a constant. We see that if $r$ is positive then the population grows exponentially, and if $r$ is negative then the population decreases to 0 asymptotically. Now probably $b$ and $d$ are related to $n$ by $b(n) = bn$ and $d(n) = dn^2$ due to competition. Then we have $$\frac{dn}{dt} = bn-dn^2$$ which we can definitely rewrite as $$\frac{dn}{dt}=\alpha n(1-n)$$ by some change of variable on $n$. Now \begin{equation*} \begin{aligned} \frac{dn}{n(1-n)}&=\alpha dt\\ \implies \frac{dn}{n} + \frac{dn}{1-n} &= \alpha dt\\ \implies \ln n - \ln(1-n) &= \alpha t + c\\ \implies n &= \frac{n_0 e^{\alpha t}}{(1-n_0) + n_0 e^{\alpha t}} \end{aligned} \end{equation*} where we are given that $t=0$, $n=n_0$. If $t \gg \frac{1}{\alpha}$, when $t \to \infty$ we have $n(t) \to 1$. Now we can investigate if the population size is stable, and if it has any fixed points. Let's now define $\mathbf{n} = (n_1,...,n_p)$, i.e. $p$ populations, and $\frac{d\mathbf{n}}{dt} = \mathbf{f}(\mathbf{n})$. If $\mathbf{n}=\mathbf{n}^*$ is a fixed point, then $\frac{d\mathbf{n}}{dt} = 0$, i.e. $\mathbf{f}(\mathbf{n}) = 0$. Now if we apply a small perturbation $\mathbf{n} = \delta\mathbf{n}^* + \delta \mathbf{n}$, i.e. 
\begin{equation*}
\begin{aligned}
\frac{d}{dt} \delta \mathbf{n} &= \mathbf{f} (\mathbf{n}^* + \delta \mathbf{n})\\
&= \mathbf{f} (\mathbf{n}^*) + \frac{\partial f_i}{\partial n_j} \delta_{n_j} + \frac{1}{2} \frac{\partial^2 f_i}{\partial n_j \partial n_k} \delta_{n_j} \delta_{n_k}
\end{aligned}
\end{equation*}

So $\frac{d}{dt} \delta \mathbf{n} = J \cdot \delta \mathbf{n}$ to leading order, so $\delta n(t) = e^{J t} \cdot \delta n(0)$. If the $\lambda_i$ are the eigenvalues of $J$, we consider the real parts of the $\lambda_i$: the fixed point is stable if $\mathrm{Re}(\lambda_i)<0$ for all $i$. For the eigenvalues, if $p\geq 5$ we only have numerical solutions, if $3 \leq p < 5$ we have analytic solutions, and $p=2$ is an easy case (recall $p$ is the number of populations):

$\bullet$ If $p=2$, $\mathbf{n} = (n_1,n_2)$, then
\begin{equation*}
\begin{aligned}
\frac{d}{dt}
\begin{pmatrix}
\delta_{n_1} \\ \delta_{n_2}
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial f_1}{\partial n_1} & \frac{\partial f_1}{\partial n_2}\\
\frac{\partial f_2}{\partial n_1} & \frac{\partial f_2}{\partial n_2}
\end{pmatrix}
\cdot
\begin{pmatrix}
\delta_{n_1}\\ \delta_{n_2}
\end{pmatrix}
\end{aligned}
\end{equation*}
where the matrix is $J$. Now we have $\lambda_1\lambda_2 =\det J$ and $\lambda_1 + \lambda_2 = \tr J$. Determined by the signs of those two, we have different possible behaviours:

\includegraphics[scale=0.5]{image/Bio_01.png}

Now let's consider the spread of Dengue. There are several processes going on at the same time:

(1) Mosquitos carry dengue;\\
(2) Wolbachia infect mosquitos;\\
(3) Infected mosquitos do not transmit dengue;\\
(4) Wolbachia transmission only across generations.

Question: will an initially infected population of mosquitos eventually spread over the entire population as $t \to \infty$?

\includegraphics[scale=0.7]{image/Bio_02.png}

\includegraphics[scale=0.5]{image/Bio_03.png}

We always assume that there are enough males to fertilise the female eggs. Now consider $\frac{d}{dt}$ of $n_U$ and $n_I$ (uninfected and infected females). From the above tables we should be able to get (hopefully)
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_U &= r n_U \frac{n_U}{n_U + n_I} - dn_U - \varepsilon (n_U+n_I) n_U\\
\frac{d}{dt} n_I &= \lambda r n_I \frac{n_U}{n_U + n_I} + \lambda r n_I \frac{n_I}{n_U + n_I} - \mu d n_I - \varepsilon (n_U+n_I) n_I \ (*)
\end{aligned}
\end{equation*}
This is our model when $p=2$. The term with $\varepsilon$ is the death rate associated with competition. We'll try to simplify these equations. By rescaling the time as $t \to rt$, and rescaling the population as $n \to \frac{\varepsilon}{r} n$, we get (?)
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_U &= n_U \frac{n_U}{n_U + n_I} - \frac{d}{r} n_U - (n_U+n_I) n_U\\
&= n_U\left[\frac{n_U}{n_U+n_I} - \frac{d}{r} - (n_U+n_I)\right] \ (1)
\end{aligned}
\end{equation*}
and the second equation becomes (???)
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_I = n_I \left[\lambda - \mu \frac{d}{r} - (n_U+n_I)\right] \ (2)
\end{aligned}
\end{equation*}
We'd like to see that the model has at least the fixed points $\mathbf{n}^* = (n^*_U,0)$ and $(0,n^*_I)$.
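A quick check (added here for completeness; it follows directly from (1) and (2)): setting one population to zero shows where these boundary fixed points must lie,
\begin{equation*}
\begin{aligned}
n_I = 0:&\quad \frac{d}{dt} n_U = n_U\left[1 - \frac{d}{r} - n_U\right] = 0 \implies n_U = 0 \ \text{or} \ n_U = 1-\frac{d}{r},\\
n_U = 0:&\quad \frac{d}{dt} n_I = n_I\left[\lambda - \mu \frac{d}{r} - n_I\right] = 0 \implies n_I = 0 \ \text{or} \ n_I = \lambda - \mu \frac{d}{r}.
\end{aligned}
\end{equation*}
This motivates the definitions used next.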
The lecture then defines
\begin{equation*}
\begin{aligned}
n^*_I &= \lambda - \mu \frac{d}{r}\\
n^*_U &= 1-\frac{d}{r}
\end{aligned}
\end{equation*}
so that our differential equations become
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_U &= n_U \left[n^*_U - \frac{n_I}{n_U+n_I} - (n_U+n_I)\right]\\
\frac{d}{dt} n_I &= n_I \left[n^*_I - (n_U+n_I)\right]
\end{aligned}
\end{equation*}
which has fixed points $(0,0),(n^*_U,0),(0,n^*_I),(n^*_I(1-n^*_I-n^*_U),n^*_I(n^*_U-n^*_I))$. The first is unstable, the second two are stable and the last fixed point is a saddle. This is disappointing because we want a small infection to spread out to the whole population, but in that case the second fixed point would need to be unstable.

Global analysis: We can plot the flow of the ODE system
\begin{equation*}
\begin{aligned}
\frac{d}{dt} {n_U \choose n_I} = {{f_U(n_U,n_I)} \choose {f_I(n_U,n_I)}}
\end{aligned}
\end{equation*}
This is usually done by programming. We'll try to sketch the flow of this model:

\includegraphics[scale=0.5]{image/Bio_04.png}

Qualitative behaviour: we need a finite (large) $n_I(0)$ in order to converge to $(0,n^*_I)$. For the quantitative part we can only do numerical integration.

Now we consider an epidemic model, where each individual may pass through three phases: susceptible, infective, recovered (compartment models -- the same individual moves through the different phases). We use $S(t),I(t),R(t)$ to denote the population of each of them.

\includegraphics[scale=0.5]{image/Bio_05.png}

Now we want to know the rate of their change. We use biological data, from which we know there is a per capita infection rate $\beta$ and a recovery rate $\nu$, so we have
\begin{equation*}
\begin{aligned}
\frac{d}{dt} S &= -\beta SI\\
\frac{d}{dt} I &= \beta SI - \nu I\\
\frac{d}{dt} R &= \nu I
\end{aligned}
\end{equation*}
Also we expect the population to be closed, i.e. the total population should not change over time (unlike the previous model), so
\begin{equation*}
\begin{aligned}
\frac{d}{dt}(S+I+R) = 0
\end{aligned}
\end{equation*}
which is indeed true. So it is sufficient to look at just two equations,
\begin{equation*}
\begin{aligned}
\frac{dS}{dt} &= -\beta SI\\
\frac{dI}{dt} &= \beta SI - \nu I
\end{aligned}
\end{equation*}

What are the questions we want to ask? We may want to know:\\
$\bullet$ When can you have an \emph{outbreak}, i.e. $\frac{dI}{dt}>0$?\\
$\bullet$ What is the final size of the outbreak?\\
$\bullet$ What vaccination strategy would work best?\\
$\bullet$ Endemic $\implies I^*>0$, a finite number of $I$ in steady state.

For obvious reasons we call this the ``SIR model''.

(Q1)\\
At $t=0$, $\frac{d}{dt}I = [\beta S(0) - \nu] \cdot I(0)$, so an outbreak requires $\frac{\beta}{\nu}S(0) > 1$; with $S(0) \approx N$ this gives $\mathcal{R}_0 = \frac{\beta N}{\nu} > 1$, where $\mathcal{R}_0$ is the reproduction ratio, the mean number of susceptibles infected per infective.

(Q2)\\
\includegraphics[scale=0.5]{image/Bio_06.png}

\includegraphics[scale=0.5]{image/Bio_07.png}

We had $dS = -\beta SI dt$, $dI = (\beta SI - \nu I) dt$, so
\begin{equation*}
\begin{aligned}
\frac{dI}{dS} &= \frac{(\beta SI - \nu I)}{-\beta SI} = -1 + \frac{\nu}{\beta} \cdot \frac{1}{S}\\
&= \frac{N}{\mathcal{R}_0} \cdot \frac{1}{S} - 1
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
I=\frac{N}{\mathcal{R}_0} \ln S - S + \text{const}
\end{aligned}
\end{equation*}
Now $I(t)-I(0) = \frac{N}{\mathcal{R}_0} \ln \frac{S(t)}{S(0)} - (S(t)-S(0))$. As $t \to \infty$, $S(t) \to \sigma N$ ($\sigma < 1$) and $I(t) \to 0$.
So
\begin{equation*}
\begin{aligned}
0 &= \frac{N}{\mathcal{R}_0} \ln \sigma - (\sigma N-N)\\
\sigma - \frac{1}{\mathcal{R}_0} \ln \sigma &= 1
\end{aligned}
\end{equation*}
We can rewrite the second equation as $\mathcal{R}_0 (\sigma-1) = \ln \sigma$. Clearly $\sigma = 1$ is a solution, but we are not interested in that. For $\mathcal{R}_0 \gg 1$, $\sigma \approx e^{-\mathcal{R}_0}$: indeed, for $\sigma \ll 1$ we have $\mathcal{R}_0(\sigma-1) \approx -\mathcal{R}_0$, so $\ln \sigma \approx -\mathcal{R}_0$ (this can also be seen by sketching the two curves below and looking at their intersections).

\includegraphics[scale=0.5]{image/Bio_08.png}

Insight: in any epidemic, only a fraction of the population is infected. Size of the epidemic = $(1-\sigma)N$.

(Q3)\\
Vaccination? We can use pre-vaccination, i.e. vaccinate in anticipation of an outbreak. Let $p$ be the fraction of the population being vaccinated. At $t=0$ we have
\begin{equation*}
\begin{aligned}
S(0) \approx (1-p)N\\
\frac{\beta}{\nu} S(0) > 1 \implies \frac{\beta}{\nu} (1-p)N = (1-p)\mathcal{R}_0 > 1
\end{aligned}
\end{equation*}
so an outbreak is prevented once $p>1-\frac{1}{\mathcal{R}_0}$.

\includegraphics[scale=0.5]{image/Bio_09.png}

(Q4)\\
Endemic: $I(t) \to I^* > 0$ as $t \to \infty$.\\
Biological data: suppose we have a finite rate of birth of $S$, and of death of $S,I,R$. Now
\begin{equation*}
\begin{aligned}
\frac{d}{dt} S &= -\beta SI - \mu S + \mu N\\
\frac{d}{dt} I &= \beta SI - \nu I - \mu I\\
\frac{d}{dt} R &= \nu I - \mu R
\end{aligned}
\end{equation*}
At $t=0$, $\mathcal{R}_0 = \frac{\beta}{\nu+\mu}N$. So
\begin{equation*}
\begin{aligned}
S^* &= \frac{\mu+\nu}{\beta} = \frac{N}{\mathcal{R}_0}\\
I^* &= \frac{\mu(N-S^*)}{\beta S^*} = \frac{\mu}{\beta}(\mathcal{R}_0-1)
\end{aligned}
\end{equation*}
We have the Jacobian
\begin{equation*}
\begin{aligned}
J=\begin{pmatrix}
-\beta I - \mu & -\beta S\\
\beta I & \beta S - \nu - \mu
\end{pmatrix}
\end{aligned}
\end{equation*}
so
\begin{equation*}
\begin{aligned}
J^*=\begin{pmatrix}
-\mu \mathcal{R}_0 & -(\mu+\nu)\\
\mu(\mathcal{R}_0-1) & 0
\end{pmatrix}
\end{aligned}
\end{equation*}
and $\lambda = -\frac{1}{2} \mu \mathcal{R}_0 \pm i\sqrt{\mu\nu(\mathcal{R}_0 - 1)}$ (approximately). If we assume $\nu \gg \mu$, then the rate of recovery $\gg$ the rate of death, so we have a short-lived disease.

\includegraphics[scale=0.5]{image/Bio_10.png}

\includegraphics[scale=0.5]{image/Bio_11.png}

The time period is $T = \frac{2\pi}{\omega} = \frac{2\pi}{\sqrt{\mu\nu(\mathcal{R}_0-1)}}$.

If $\nu \gg \mu$, for example for measles, we have $\mu \sim \frac{1}{70}$ per year, $\nu \sim \frac{1}{10}$ per day, $\mathcal{R}_0 \sim 20$, so the period of oscillation is $T \approx 2.18$ years. Then we have the following pattern.

\includegraphics[scale=0.5]{image/Bio_12.png}

\newpage

\section{Enzyme Kinetics}

An enzyme is a biological catalyst, which speeds up a reaction (substrate turning into product) without being consumed in the process.

\includegraphics[scale=0.5]{image/Bio_13.png}

Suppose we have the model
\begin{equation*}
\begin{aligned}
\frac{dS}{dt} = -\bar{k}S & \text{ no enzyme}\\
\frac{dS}{dt} = -kS & \text{ enzyme present}
\end{aligned}
\end{equation*}
where $k \gg \bar{k}$. Hypothesis (Michaelis-Menten, $\sim$1915):
\begin{equation*}
\begin{aligned}
S + E \xrightarrow{k_1} C \text{ (complex)}\\
C \xrightarrow{k_2} S+E\\
C \xrightarrow{k_3} E+P
\end{aligned}
\end{equation*}
Under this model, we have
\begin{equation*}
\begin{aligned}
\frac{d}{dt} S &= -k_1 ES + k_2C\\
\frac{d}{dt} E &= -k_1 ES + k_2C + k_3 C\\
\frac{d}{dt} C &= k_1 ES - k_2C - k_3 C\\
\frac{d}{dt} P &= k_3 C
\end{aligned}
\end{equation*}
We see that $\frac{d}{dt} (E+C)=0$, and $\frac{d}{dt} (S+C+P)=0$.
Therefore, $E+C=e_0$ is constant and $S+C+P=s_0$ is constant. There are 4 equations and 2 constraints, so we can reduce the dimension of the system. We will choose
\begin{equation*}
\begin{aligned}
\frac{d}{dt} S = -k_1(e_0-C) S + k_2 C = -k_1e_0 S + (k_1S+k_2) C\\
\frac{d}{dt} C = k_1(e_0-C)S - k_2C - k_3C = k_1 e_0 S - (k_1S+k_2+k_3) C
\end{aligned}
\end{equation*}
We use our old trick of rescaling variables by $n_S = \frac{S}{s_0}$, $n_C = \frac{C}{e_0}$, so that $n_S=1$ and $n_C=0$ at $t=0$. The equations become
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_S = -n_S + (n_S + \mu-\lambda) n_C\\
\varepsilon \frac{d}{dt} n_C = n_S - (n_S+\mu) n_C
\end{aligned}
\end{equation*}
where $\mu = \frac{k_2+k_3}{k_1s_0}$, $\lambda = \frac{k_3}{k_1s_0}$, $\varepsilon = \frac{e_0}{s_0} \ll 1$.

The steady-state value of $n_C$, assuming $n_S$ is fixed, is\\
$n_C = \frac{n_S}{n_S+\mu}$ as $t \to \infty$.

Plugging this back into the $n_S$ equation, we get
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_S = -n_S + (n_S+\mu-\lambda) \frac{n_S}{n_S+\mu}\\
\frac{d}{dt} n_S = -\frac{\lambda n_S}{n_S + \mu} \ (*)
\end{aligned}
\end{equation*}
This is called the Michaelis-Menten equation.

\section{Prey-predator system}

Suppose we have rabbits with number $n_R$, and foxes with number $n_F$. The biological input is that if there is no predation then rabbits grow unboundedly, and foxes die (as they have nothing to eat). With predation, rabbits die at a per capita rate $b n_F$, and foxes reproduce at a per capita rate $c n_R$. We have
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_R = a \cdot n_R - b n_R n_F\\
\frac{d}{dt} n_F = c n_R n_F - dn_F
\end{aligned}
\end{equation*}
which we rescale and write as
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_R = n_R (1-n_F)\\
\frac{d}{dt} n_F = -\alpha n_F (1-n_R)
\end{aligned}
\end{equation*}
This has fixed points $(0,0)$ and $(1,1)$, and has Jacobian
\begin{equation*}
\begin{aligned}
\begin{pmatrix}
1-n_F & -n_R\\
\alpha n_F & -\alpha(1-n_R)
\end{pmatrix}
\end{aligned}
\end{equation*}
Evaluating at the two fixed points, we see that $(0,0)$ is unstable (actually a saddle), and the $(1,1)$ fixed point has eigenvalues (of the Jacobian) $\pm i\sqrt{\alpha}$. Since the real part is 0 we don't know (yet) whether it is stable or not. We have to solve the equations of motion:
\begin{equation*}
\begin{aligned}
d n_R = n_R (1-n_F) dt\\
d n_F = -\alpha n_F (1-n_R) dt
\end{aligned}
\end{equation*}
so $\frac{d n_R}{dn_F} = \frac{n_R (1-n_F)}{-\alpha n_F(1-n_R)}$, i.e. $\alpha \frac{1-n_R}{n_R} dn_R = -\frac{1-n_F}{n_F} dn_F$, so $\alpha(\log n_R - n_R) = -(\log n_F - n_F) + c$.

We get $H(n_R,n_F) = \alpha(\log n_R - n_R) + \log n_F - n_F = \text{constant}$, where $H(n_R,n_F)$ is the integral of motion. So we should observe that the numbers $n_R$ and $n_F$ oscillate with some period $T$, which we wish to find.

\includegraphics[scale=0.5]{image/Bio_14.png}

Integrating $\frac{d}{dt}\ln n_R = 1-n_F$ over one period gives $\oint d(\ln n_R ) = 0 = \int_0^T (1-n_F)\,dt$, so the time-average of $n_F$ over a period equals $1$ (and similarly for $n_R$).

What if both species suffer additional deaths (say from hunting or a disease)? Let the extra death rates of rabbits and foxes be $\lambda_R$ and $\lambda_F$ respectively. Then $\frac{d}{dt} n_R = an_R - bn_Rn_F - \lambda_R n_R$, $\frac{d}{dt} n_F = cn_Rn_F - dn_F-\lambda_F n_F$.

This amounts to the shift $a \to a-\lambda_R$, $d \to d+\lambda_F$, and we find that there is a shift in the fixed point: the fixed point in the original variables is $(n^*_R,n^*_F) = (d/c,a/b)$, and the new fixed point becomes $(n^*_R, n^*_F) = ((d+\lambda_F)/c,(a-\lambda_R)/b)$.
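A short remark (not spelled out in the notes, but immediate from the two expressions above): the harvested equilibrium satisfies
\begin{equation*}
\begin{aligned}
n^*_R = \frac{d+\lambda_F}{c} > \frac{d}{c}, \qquad n^*_F = \frac{a-\lambda_R}{b} < \frac{a}{b},
\end{aligned}
\end{equation*}
i.e. killing both species raises the prey equilibrium and lowers the predator equilibrium.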
Fragility of ecosystems: if we add competition:
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_R &= n_R(1-n_F) - \varepsilon_R n_R^2\\
\frac{d}{dt} n_F &= -\alpha n_F(1-n_R) - \alpha \varepsilon_F n_F^2
\end{aligned}
\end{equation*}
After doing the same calculation we get
\begin{equation*}
\begin{aligned}
n_R^* &= \frac{1+\varepsilon_F}{1+\varepsilon_R\varepsilon_F}\\
n_F^* &= \frac{1-\varepsilon_R}{1+\varepsilon_R\varepsilon_F}\\
J^* &= \begin{pmatrix}
-\varepsilon_R n_R^* & -n_R^*\\
\alpha n_F^* & -\alpha\varepsilon_F n_F^*
\end{pmatrix}
\end{aligned}
\end{equation*}
and only the (2,1) entry is positive, so $\tr J <0$, $\det J >0$. So this fixed point now becomes stable.

Let's consider another model, the FitzHugh-Nagumo model:
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_1 = c(n_2+n_1 - \frac{1}{3} n_1^3 + \mathcal{L}(t))\\
\frac{d}{dt} n_2 = -\frac{1}{c} (n_1 - a+bn_2)
\end{aligned}
\end{equation*}
where $\mathcal{L}(t)$ is an external signal, which can depend on time.

Nullclines (the curves on which one of the derivatives vanishes), taking $\mathcal{L}(t) = 0$:
\begin{equation*}
\begin{aligned}
\frac{d}{dt} n_1 = 0 \implies n_2 = -n_1 + \frac{1}{3} n_1^3\\
\frac{d}{dt} n_2 = 0 \implies n_2 = \frac{a-n_1}{b}
\end{aligned}
\end{equation*}
The parameter bounds are $0<b\leq 1$, $1-\frac{2}{3} b < a < 1$, $c \gg 1$. If we plot the two curves we get a fixed point at $n_1>0$, $n_2<0$, a stable focus.

\includegraphics[scale=0.5]{image/Bio_15.png}

If we now add $\mathcal{L}(t)$, we are essentially moving the cubic curve upwards. There are three cases in terms of the magnitude added:\\
(1) small: nothing interesting happens, the fixed point just moves by a bit;\\
(2) medium: $n_2$ spikes up first, then falls back to the fixed point;\\
(3) large: the fixed point loses stability; $n_2$ oscillates as in the diagram above, with spikes at regular intervals.

\iffalse
\begin{equation*}
\begin{aligned}
\end{aligned}
\end{equation*}
\fi

\end{document}
{ "alphanum_fraction": 0.6706045126, "avg_line_length": 42.612244898, "ext": "tex", "hexsha": "b2471a0b7e178c36fa8d33e4f626ca9b508d25ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_forks_repo_path": "Notes/Biology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_issues_repo_path": "Notes/Biology.tex", "max_line_length": 528, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d20c23a64794b500f2e0356fd01017ee31830fa2", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "raoxiaojia/raoxiaojia.github.io", "max_stars_repo_path": "Notes/Biology.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-25T17:34:25.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-25T17:34:25.000Z", "num_tokens": 6909, "size": 18792 }
\documentclass[]{article} \usepackage{setspace} \usepackage[margin = 1in]{geometry} \usepackage{todonotes} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={Chapter 1}, pdfauthor={Frederick Boehm}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} \usepackage[ backend=biber, style=authoryear, natbib=true, url=true, doi=true, eprint=true ]{biblatex} \addbibresource{ch1.bib} \addbibresource{research.bib} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{Chapter 1} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{Frederick Boehm} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{\today} \begin{document} \doublespacing \maketitle \listoftodos \listoffigures \listoftables \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Complex traits \& QTL mapping \end{enumerate} \begin{itemize} \tightlist \item look at Karl \& Saunak's chapter 1 \item goal is to motivate study of complex traits with QTL mapping \end{itemize} Identification of genes that affect measurable phenotypes has a long and successful history in model organisms. Complex traits include classical clinical phenotypes such as systolic blood pressure and body weight as well as newly measurable biomolecular phenotypes like gene expression levels, protein concentrations, and lipid levels. 
Understanding the genetic underpinnings of such traits may inform many areas of biology, medicine, and public health.

The first reported QTL study is from 1923, 30 years before the discovery of the structure of DNA \citep{watson1953molecular}. \citet{sax1923association} examined seed weight for the common bean (\emph{Phaseolus vulgaris}) in an F\(_2\) intercross. He assigned each F\(_2\) subject to a gene class by examining its seed color patterns.

\citet{lander1989mapping} kickstarted modern QTL mapping methods research with their seminal report in the late 1980s. Their article introduced interval mapping, which uses genotypes at pairs of flanking markers to evaluate evidence for a QTL at positions between the typed markers.

Goals of a QTL study depend on its scientific context. Often a researcher seeks to identify genomewide positions of QTL for each trait of interest. In some studies, the total number of QTL for a trait may be more interesting than the QTL positions.

A QTL study begins with a scientific question and the choice of a study design. Elements to consider include the mating design, phenotyping plan, genotyping plan, and statistical analysis methods. For most of the last century, obtaining clinical phenotypes, such as body weight, was much less costly than genotyping. This setting sometimes led researchers to selectively genotype only those organisms with extreme phenotype values. With diminishing costs for both genotyping and phenotyping, many recent studies genotype and phenotype all subjects.

Since the 1980s, researchers have written and shared computer software for QTL studies. Early efforts included MAPMAKER and QTL Cartographer. Since the early 2000s, the ``qtl'' R package has been a state-of-the-art resource for QTL mapping studies. It is open-source, free to download, well documented, and well supported.

\textbf{Mamm Genome. 1999 Apr;10(4):327-34. Overview of QTL mapping software and introduction to map manager QT. Manly KF1, Olson JM.}

\subsection{Mating designs for two-parent crosses}

One widely used mating design is the backcross. Although variations are possible, one typically begins with two inbred lines. Let's designate the two lines as ``A'' and ``B''. Mating of lines A and B leads to offspring, which we designate as F$_1$ (to denote the first filial generation). The F$_1$ offspring, then, mate with the A line (i.e., the parental line) to produce N$_2$ subjects, where the letter ``N'' denotes the offspring from a backcross and the subscript 2 reflects the generation number.

We assume that we're working with diploid organisms, so that every subject has two copies - which need not be identical - of each chromosome. Let's assume, further, that we're working with mice, so that every organism has 20 chromosome pairs. Inbred lines, by definition, are homozygous at all markers. Let's designate, for a given marker, the A line to have two copies of allele A and the B line to have two copies of allele B.

Let's consider the genetic makeup of the F$_1$ subjects. We first must examine gametogenesis, the process of producing gametes, or reproductive cells, in the parents. Gametogenesis results in the production of haploid cells, \emph{i.e.}, cells with only one copy of every chromosome. Two gametes, one from each parent, unite to form a diploid zygote. All F$_1$ subjects are genetically identical. For every chromosome pair, they inherited one copy of the A chromosome (from the A parent) and one copy of the B chromosome (from the B parent). In other words, every F$_1$ subject has genotype AB at every marker, where A is the allele from the A parent and B is the allele from the B parent.
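For reference, a standard Mendelian expectation (not tied to any particular dataset): at a given autosomal marker, the AB $\times$ AA backcross mating produces the two offspring genotype classes in equal proportions,
\begin{equation*}
\Pr(\text{AA}) = \Pr(\text{AB}) = \tfrac{1}{2}.
\end{equation*}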
The N$_2$ generation, then, has subjects with either AA or AB genotypes at a given marker. To understand whether a given subject has AA or AB genotype at a given marker, we need to consider the fact that gametogenesis involves crossover events before the diploid cells divide into haploid cells. A crossover event results in a swapping of DNA between the two copies of a chromosome. In inbred lines, this swapping of DNA between the two copies of a chromosome is undetectable with marker genotyping, because both chromosomes have the same allele (either two As or two Bs in our example). However, the F$_1$ subjects, with AB marker genotypes, have distinct alleles at every marker. Thus, marker genotyping has the potential to detect crossover events that occur during gametogenesis in the F$_1$ by examining marker genotypes in the N$_2$ subjects. A picture helps to clarify this idea (Figure \ref{}). \todo{Add figure here AND add text explaining the figure here} Another widely used two-parent cross is the "intercross". In it, two inbred lines again mate to produce F$_1$ subjects. Then, however, two F$_1$ subjects mate to produce F$_2$ subjects. The crossover events that occur during gametogenesis in the F$_1$ subjects gives rise to the three marker genotypes observed in F$_2$ subjects: AA, AB, and BB. We present a diagram for the intercross in Figure \ref{}. \subsection{Quantitative traits} Quantitative traits typically are those that take continuous values over a (possibly infinite) interval. Distinct analysis methods - typically with statistical tools called "generalized linear models" - are needed for traits that take binary values or one of only a few discrete values or only whole numbers. Examples of quantitative traits include both clinical traits like body weight, height, systolic blood pressure, and fasting blood glucose level and newly measurable biomolecular traits like gene expression levels, protein concentrations, and lipid levels. \subsection{Statistical challenges in QTL studies} \citet{broman2009guide} articulate two statistical challenges: 1. the missing data problem and 2. the model selection problem. The missing data problem arises because subjects in QTL experiments typically are genotyped at a set of discrete markers across the genome. They are not genotyped at every nucleotide base. The "missing" genotypes belong to those bases that are not genotyped. Statistical tools for probabilistically handling missing data, often with hidden Markov models, \todo{what did Lander \& Botstein do for missing data problem? How do they infer genotypes between markers?} \subsection{QTL mapping in a backcross} \subsection{QTL mapping in an intercross} Lander \& Botstein 1989 Haley \& Knott 1992 Martinez \& Curnow 1992 Soller et al 1976 \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{2} \tightlist \item Multivariate QTL scan in two-parent crosses \end{enumerate} Jiang \& Zeng 1995 Knott \& Haley 2000 Jiang and Zeng developed multivariate methods for QTL mapping in two-parent crosses. They devised a multivariate analog of Zeng's composite interval mapping (Zeng 1993, 1994). This strategy treats phenotypes as arising from a mixture of normal distributions in which each genotype class has distinct distribution parameters. Within this multivariate mapping framework, Jiang and Zeng developed the first test of pleiotropy vs.~separate QTL. Jiang and Zeng first developed a test for pleiotropy vs.~separate QTL in two-parent crosses. 
They developed it in the context of their work on multivariate QTL studies with composite interval mapping. They framed the scientific question of whether two traits are affected by a single, shared locus or by two distinct loci in terms of two statistical hypotheses. The null hypothesis states that there is a single pleiotropic locus that affects both traits, while the alternative hypothesis is that there are two distinct but nearby loci, with each locus affecting exactly one trait.

Knott and Haley subsequently reported methods for testing pleiotropy vs.~separate QTL in two-parent crosses. They combined their earlier work on univariate QTL mapping by marker regression with Jiang and Zeng's multivariate methods to develop a test of pleiotropy vs.~separate QTL based on multivariate marker regression.

Jiang and Zeng (1995)

One disadvantage of multivariate composite interval mapping is the computational cost of the iterative expectation-maximization procedures needed for parameter estimation. This prompted Knott and Haley (2000) to develop a marker regression-based approximation to multivariate composite interval mapping.

Knott and Haley (2000) used a multivariate linear model for simultaneous mapping of multiple traits. They also presented a multivariate marker regression-based test of pleiotropy vs.~separate QTL. This test is suitable for subjects that are equally related to each other, like the collection of offspring in an F\(_2\) intercross of two inbred lines.
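As a schematic of the hypotheses (written here in generic notation; the cited papers differ in details such as the use of likelihood-ratio statistics versus LOD scores), let $\ell(\lambda_1, \lambda_2)$ denote the maximized log$_{10}$ likelihood when trait 1 is affected by a QTL at position $\lambda_1$ and trait 2 by a QTL at position $\lambda_2$ on the same chromosome. The test compares the best pleiotropic fit against the best two-locus fit,
\begin{equation*}
\text{LOD} = \max_{\lambda_1, \lambda_2} \ell(\lambda_1, \lambda_2) - \max_{\lambda} \ell(\lambda, \lambda),
\end{equation*}
so that large values favor separate QTL over pleiotropy.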
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item Multiparental populations
\end{enumerate}

\begin{itemize}
\tightlist
\item what are they? Breeding design for CC \& DO. Why use them?
\end{itemize}

While QTL mapping studies in the 1990s contributed to many advances in genetics and biology, complex trait researchers recognized the mapping resolution limitations of crosses involving two inbred lines. Seeking greater precision for QTL positions, scientific communities collectively decided to pool their expertise and resources into community-supported and community-maintained model organism mapping populations. These new mapping populations would incorporate genetic material from more than two inbred founder lines. The meiotic recombination events accumulated over many generations would enhance mapping resolution over previously available populations. Products of these community-based efforts include the Collaborative Cross and Diversity Outbred populations from mouse researchers and the Drosophila Synthetic Population Resource \citep{king2012genetic} from the fruit fly scientific community. In subsequent years, scientists created multiparental populations in many other organisms, including tomato, rice, maize \citep{lehermeier2014usefulness}, wheat \citep{mackay2014eight, huang2012multiparent, milner2016multiparental}, Arabidopsis \citep{}, and apple \citep{allard2016detecting}.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\tightlist
\item Univariate QTL mapping in MPP
\end{enumerate}

\begin{itemize}
\tightlist
\item contrast with univariate QTL mapping in two-parent crosses
\end{itemize}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{5}
\tightlist
\item Multivariate mapping in MPP
\end{enumerate}

6A.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{6}
\tightlist
\item Testing pleiotropy vs separate QTL in MPP
\end{enumerate}

Testing pleiotropy vs.~separate QTL

7A. allele effects plots to discern pleiotropy vs.~separate QTL

King et al 2012
Macdonald \& Long 2007

\emph{maybe do a citation search on these 2 articles to see who has used their ideas}

CAPE software package: what exactly is the CAPE method???

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{7}
\tightlist
\item Testing pleiotropy vs separate QTL to dissect an expression trait hotspot Tian et al.~2016. ?Schadt et al.~2005?
\end{enumerate}

\end{document}
{ "alphanum_fraction": 0.7899720689, "avg_line_length": 44.0810810811, "ext": "tex", "hexsha": "92cbfdabaeaad81d668a5f4e4f7f336f1a02081b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c47c1926c17432f72faef71e36ad2eb94abe5b79", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fboehm/chapter1", "max_forks_repo_path": "main-old.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c47c1926c17432f72faef71e36ad2eb94abe5b79", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fboehm/chapter1", "max_issues_repo_path": "main-old.tex", "max_line_length": 811, "max_stars_count": null, "max_stars_repo_head_hexsha": "c47c1926c17432f72faef71e36ad2eb94abe5b79", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fboehm/chapter1", "max_stars_repo_path": "main-old.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3759, "size": 14679 }
\section{Why \sofa ?}

% The creation of geometric models can require complex algorithms for surface extraction, mesh simplification or refinement and volumetric meshing.

Programming interactive physical simulation of rigid and deformable objects requires multiple skills in geometric modeling, computational mechanics, numerical analysis, collision detection, rendering, user interfaces and haptic feedback, among others. It is also challenging from a software engineering standpoint, with the need for computationally efficient algorithms, multi-threading, or the deployment of applications on modern hardware architectures such as the GPU. The development of medical simulations has thus become an increasingly complex task, involving more domains of expertise than a typical research and development team can provide. The goal of \sofa{} is to address these issues within a highly modular yet efficient framework, allowing researchers and developers to focus on their own domain of expertise while re-using other experts' contributions.

\section{The ``philosophy'' of \sofa}

\sofa{} introduces the concept of multi-model representation to easily build simulations composed of complex anatomical structures. The pool of simulated objects and algorithms used in a simulation (also called a scene) is described using a hierarchical data structure similar to the scene graphs used in graphics libraries. Simulated objects are decomposed into collections of independent components, each of them describing one feature of the model, such as state vectors, mass, forces, constraints, topology, integration scheme, and solving process. As a result, switching from internal forces based on springs to a finite element approach can be done by simply replacing one component with another, all the rest (mass, collision models, time integration, ...) remaining unchanged. Similarly, it is possible to keep the same force model and modify the solver and state vectors in order to compute the model on the GPU instead of the CPU. Moreover, the simulation algorithms, embedded in components, can be customized with the same flexibility as the physical models.

In addition to this first level of modularity, it is possible to go one step further and decompose simulated objects into multiple partial models, each optimized for a given type of computation. Typically, a physical object in \sofa{} is described using three partial models: a mechanical model with mass and constitutive laws, a collision model with contact geometry, and a visual model with detailed geometry and rendering parameters. Each model can be designed independently of the others, and more complex combinations are possible, for instance for coupling two different physical objects. During run-time, the models are synchronized using a generic mechanism called \textit{mapping} to propagate forces and displacements; a schematic sketch of this idea is given below.

The user can interact in real-time with the mechanical models simulated in SOFA, using the mouse but also other types of input devices. Haptic rendering is also supported.

% \SC{This mapping can simply update the position of vertices on the visual model, or transmit forces and velocities between the collision model and mechanical model. In this case the mapping follows the physical principle of the \textit{virtual works}.}
\ff{(The mapping discussion is too detailed here, in my opinion)}
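To give a rough feel for the mapping mechanism described above, the fragment below sketches a coarse mechanical model driving a finer mapped model (visual or collision) through a fixed linear operator, with forces propagated back through its transpose. This is deliberately simplified illustrative code, not the actual \sofa{} API: all class and method names are invented for the purpose of the example.

\begin{verbatim}
import numpy as np

class PartialModel:
    """One representation of a simulated object (mechanical, collision, or visual)."""
    def __init__(self, positions):
        self.positions = np.asarray(positions, dtype=float)
        self.forces = np.zeros_like(self.positions)

class LinearMapping:
    """Illustrative mapping: mapped positions are fixed linear combinations
    (weights W) of the mechanical positions; forces collected on the mapped
    model are accumulated back onto the mechanical model with W transposed."""
    def __init__(self, mechanical, mapped, weights):
        self.mechanical, self.mapped = mechanical, mapped
        self.W = np.asarray(weights, dtype=float)  # shape: (n_mapped, n_mechanical)

    def apply(self):               # mechanical -> mapped (displacements)
        self.mapped.positions = self.W @ self.mechanical.positions

    def apply_transpose(self):     # mapped -> mechanical (forces)
        self.mechanical.forces += self.W.T @ self.mapped.forces

# Two partial models of the same object: a coarse mechanical model
# and a finer visual model interpolated between its degrees of freedom.
mechanical = PartialModel([[0.0, 0.0], [1.0, 0.0]])
visual = PartialModel([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
mapping = LinearMapping(mechanical, visual,
                        [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])

mechanical.positions += [[0.0, 0.1], [0.0, 0.3]]  # pretend the solver moved the DOFs
mapping.apply()                                   # the visual model follows automatically
\end{verbatim}

In \sofa{} itself, mappings of this kind are what keep the mechanical, collision, and visual representations consistent at every time step, so that each partial model can be authored and optimized independently.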
\section{Why should I contribute to \sofa ?}

\sofa{} was first released in 2007~\cite{ACFBPDDG07}. Since then, it has evolved into a comprehensive, high-performance library used by an increasing number of academics and commercial companies. The code is open-source and the license is LGPL. You can use this code to build your own medical simulations or for other applications. You can also include this code in a commercial product. The only requirement is that if you modify the code for a commercial product, you must share those modifications with your client. Moreover, you can build upon \sofa{} using the plug-in system, and your plug-in can have a license other than LGPL. Consequently, you have considerable freedom to use \sofa{} for your research, your developments, or your products!

Finally, \sofa{} is also intended for the research community, to support the development and sharing of new algorithms and models. So do not hesitate to share your experience of \sofa{}, your code, and your results with the \sofa{} community!
{ "alphanum_fraction": 0.8128681468, "avg_line_length": 129.8235294118, "ext": "tex", "hexsha": "fccf0ae94c941973c6e6472d035fb7f238be9a65", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_forks_repo_licenses": [ "OML" ], "max_forks_repo_name": "sofa-framework/issofa", "max_forks_repo_path": "doc/introduction/introduction_body.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "OML" ], "max_issues_repo_name": "sofa-framework/issofa", "max_issues_repo_path": "doc/introduction/introduction_body.tex", "max_line_length": 387, "max_stars_count": null, "max_stars_repo_head_hexsha": "94855f488465bc3ed41223cbde987581dfca5389", "max_stars_repo_licenses": [ "OML" ], "max_stars_repo_name": "sofa-framework/issofa", "max_stars_repo_path": "doc/introduction/introduction_body.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 852, "size": 4414 }
\SetAPI{J-C}
\section{database.connection.factory}
\label{configuration:DatabaseConnectionFactory}
\ClearAPI
Specifies whether Ambeth should use its integrated connection factory or rely on connections provided by data sources. If true, Ambeth creates its own database connections; otherwise, they are managed by the data source. Valid values are ``true'' and ``false''.
%% GENERATED USAGE REFERENCE - DO NOT EDIT
\begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \\ \endhead
\hline \type{com.koch.ambeth.informationbus.persistence.setup.AmbethPersistenceSchemaModule} & \\
\hline \type{com.koch.ambeth.informationbus.persistence.setup.AmbethPersistenceSchemaModule} & \\
\hline \type{com.koch.ambeth.persistence.jdbc.ioc.PersistenceJdbcModule} & \prettyref{module:Persistence} \\
\hline \type{com.koch.ambeth.persistence.jdbc.ioc.PersistenceJdbcModule} & \prettyref{module:Persistence} \\
\hline
\end{longtable}
%% GENERATED USAGE REFERENCE END
\type{com.koch.ambeth.persistence.jdbc.config.PersistenceJdbcConfigurationConstants.IntegratedConnectionFactory}
\begin{lstlisting}[style=Props,caption={Usage example for \textit{database.connection.factory}}]
database.connection.factory=true
\end{lstlisting}
{ "alphanum_fraction": 0.7931873479, "avg_line_length": 45.6666666667, "ext": "tex", "hexsha": "2999661d8b4afe473543501062da78557d458dd9", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/configuration/DatabaseConnectionFactory.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/configuration/DatabaseConnectionFactory.tex", "max_line_length": 244, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/configuration/DatabaseConnectionFactory.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 327, "size": 1233 }
\section{User stories} % no preamble
Table \ref{table:userStories} presents all the user stories used for the development of the system's artifacts, managed according to agile methodologies.

\begin{table}[H]
\caption{\ac{SCM-BP} User Stories}
\label{table:userStories}
\begin{tabular}{|l|p{13.5cm}|}
\hline
US-1 & As an administrator, clicking “new” on the supply chain list page takes you to the chain configuration page, with empty settings (no phases, sub-phases, or fields).\\ \hline
US-2 & As an administrator, clicking “edit” on the supply chain list page takes you to the chain configuration page, with the settings filled in (with phases, sub-phases, and fields already registered). \\ \hline
US-3 & As an administrator, when clicking “delete” on the supply chain list page, a modal should appear requesting deletion confirmation.\\ \hline
US-4 & As an administrator, when confirming deletion on the supply chain list page, an alert should appear stating the deletion result: alert-success or alert-danger.\\ \hline
US-5 & As an administrator, on the creation or editing screens of a chain, the administrator must specify, for each section, which user types can enter information in that section.\\ \hline
US-6 & As an administrator, clicking “new” on the User List page takes you to the user creation page, with empty settings.\\ \hline
US-7 & As an administrator, on the user creation page, the admin has to enter the type of user (Admin, Producer, Manufacturer, Distributor, Wholesaler, Retailer, End User).\\ \hline
US-8 & As an administrator, clicking “edit” on the User List page takes you to the user creation page, with the settings filled in (with the previously entered data).\\ \hline
US-9 & As an administrator, when clicking “delete” on the User List page, a modal should appear asking for deletion confirmation.\\ \hline
US-10 & As an administrator, when confirming deletion on the User List page, an alert should appear stating the deletion result: alert-success or alert-danger.\\ \hline
US-11 & As a “Member” user (Admin, Producer, Manufacturer, Distributor, Wholesaler, Retailer), clicking “Move Asset” redirects to the information entry page in the chain.\\ \hline
US-12 & As a “Member”, each user can only enter information regarding the phase allowed by the access rules (e.g., a distributor cannot enter exploration information), as defined in use case 5.\\ \hline
US-13 & As any user (Admin, Member, or End User), clicking “Track Asset” will take you to a page with a list of all assets, paged and filtered by date in descending order (most current to oldest).\\ \hline
US-14 & As any user (Admin, Member, or End User), after clicking on “Track Asset”, the user can enter an Id in the search input to search.\\ \hline
US-15 & As any user (Admin, Member, or End User), by clicking on “Track”, the user will go to a page with all information of the respective asset, from its conception to the current state.\\ \hline
\end{tabular}
\end{table}
{ "alphanum_fraction": 0.7309172547, "avg_line_length": 76.0487804878, "ext": "tex", "hexsha": "9bdb2550197b70343abd737d20b0dd5265b15ac2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c7e7e1da620ea717619c058c170db46cee8e9970", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "juniorug/Master-thesis", "max_forks_repo_path": "apendices/A2_userStories.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c7e7e1da620ea717619c058c170db46cee8e9970", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "juniorug/Master-thesis", "max_issues_repo_path": "apendices/A2_userStories.tex", "max_line_length": 210, "max_stars_count": null, "max_stars_repo_head_hexsha": "c7e7e1da620ea717619c058c170db46cee8e9970", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "juniorug/Master-thesis", "max_stars_repo_path": "apendices/A2_userStories.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 785, "size": 3118 }
\section{Conclusion}
\p{Without reducing linguistic \i{performance} to language qua field of propositional expression, and without collapsing linguistic meaning to a computable/propositional fragment, we can still allow interpretive-Phenomenological and formal/mathematical perspectives to co-exist. In the theory I have sketched, Cognitive Schema summarize lived, situated judgments and intentions that (in concrete form) are not \q{computable} (although I make no metaphysical claims about the \q{abstract} computability of mental processes merely by virtue of their neurophysical materiality). However, our propensity to call up certain construals rather than others is triggered by linguistic formations, and in broad outline the catalog of these triggers, and their compositional structure, can be formalized (and even used to improve formal systems, like programming languages). The challenge is to advocate for this co-existence without implying that formal systems, and mathematically provable system-properties, are the only kinds of research tools which have scientific merit.
}
\p{Subjective assessments are intrinsic to most linguists' argumentation \mdash{} warranting claims not with empirical data or logico-mathematical proof but by appealing to speakers' intuitions, so that reading linguistic texts is also collaborating on an ongoing research project (partly because language evolves, so word-meanings change, and formations which are ungrammatical for one generation may be experienced differently by others). Nevertheless, linguistics, like economics, seems broadly accepted as a human \i{science}, not just an interpretive discipline. The claim that an economist's equation or a linguist's meta-grammar is an accurate explanation, a useful explanatory framework, seems generally to be evaluated in terms of whether the framework captures emergent higher-order structure, and offers an explanatory potential that does not merely reiterate lower-scale paradigms. A theory expressed in the language of linguistics (not, say, neural networks), if it meets general criteria of testability and refutability (not necessarily empiricist/quantitative), arguably carries even more weight than lower-level neurophysical explanation \mdash{} precisely because the higher-scale \q{theory language} carries the burden of explaining emergent properties, which as \i{emergent} bear some descriptive/behavioral (if not causal) autonomy. Likewise, a subjectively plausible and theoretically motivated equation which fits economic data probably carries more weight than a mere statistical analysis. An explanatory focus on the higher scale in terms of its own distinct (emergent) structures and theorized entities (like words and morphemes, in the case of linguistics, or markets and commodities, in the case of economics) reflects the linguist's or economist's charge to connect human phenomena with mental (and therefore, ultimately physical) law. Nonetheless, even with liberal use of subjective judgments, economics and linguistics (and some other human sciences as well, potentially) are attached to the overall sphere of natural science, by virtue of causal links in principle even if not in practice. Scientific rigor in this humanistic setting is neither reducible to the techniques of natural science, nor dualistically separate from them.
Natural science and humanities are certainly not mutually irrelevant, but nor is the proper vehicle for scientific literacy to find a forum in the humanities merely to emulate numeric methods, as with statistics in sociology, or a retreat to narrow and behavioristic reductionism, in place of localized interpretation and situational particularism. } \p{Subjective impressions (conscious experiences, emotions, intuitions, qualia, qualitative universals and particulars \mdash{} the qualitative characteristic in itself, and the hyletic-spatial trace, the site in experiential space as the quale becomes a moment of consciousness) \mdash{} these are not scientifically tractable and do not have obvious physical location or measurability, which makes them controversial as objects of scientific method. Yet, even so, we do have conscious experiences, we do subconsciously (and when needed consciously, or with deliberate conscious attention) make judgments about classifications, or how parts aggregate into wholes, or are individuated apart from a larger whole in context; we can reflect on patterns in these judgments, not \i{introspectively} examining thoughts as they occur, but marshalling an overall familiarity with mental processes. Consciousness is not only a kind of mentality, shared by humans and some animals; it is also a metacognitive tool, something we deploy to focus attention on a certain object or topic. We \q{practice} how to \i{be} conscious, how best to distribute attention, in each setting (like an athlete maintaining a meditative state of ambient awareness, poised to latch conscious attention onto playing technique which is optimally instinctive, but \q{feels} different when degraded by fatigue or distraction). Our faculty for these modulations, switching among sub- and passive consciousness, attentive consciousness, \q{ambient} awareness, and back again, reveals that consciousness is not only an aspect of mind but a tool; it has a meta-cognitive and epistemic dimension, an awareness of what is known or not-yet-known and a technique of directing attention to the latter. } \p{A case-study: in a motel I unexpectedly find a newspaper outside the door. Next morning I look outside curious whether a paper is there; after several days I come to expect the paper. So I open the door not preoccupied with confirming this, but with (maybe rather distractedly) fetching it. Initially I do not expect the paper, but, generally poised to notice both expected and unexpected circumstances, I make a mental adjustment and interpret the situation quickly; by the third day the paper has become expected, like other things I anticipate finding in a motel hallway, and the thrust of my attention, during the brief episode of my picking it up, is kinaesthetic and motor-intentional more than visual and inquisitive. Only on the second morning is the question of a paper's presence intended in an epistemic mode; but, while it is so thematized, I direct attention to optimize my ability to resolve the question. How we engage attention is a deliberate choice, reflecting and responding to our metacognitive attitudes, what we think we know and do not know. } \p{Because consciousness is in some ways a mental tool, we have an intimate familiarity with it, a familiarity which extends beyond our own minds: we can make reasonable guesses about what others do or do not know and perceive. 
Our ability to anticipate others' epistemic states is an intrinsic feature of social interaction, of intersubjectivity; we therefore understand consciousness not only via our own use and possession/experience of it, but as a general feature of the human mind. We can accordingly make structured claims about conscious processes, not in the sense of introspective reports but of retrospective suggestions \mdash{} by analogy, a pianist on reflection may have a lot to say about playing technique, but she does not acquire this wisdom from introspective study of her own playing while it happens; rather, it comes with accrued experience and reflection. In terms of phenomenological method, our study of thought and consciousness is analogous: it is reflective examination of what it means to be consciously intelligent beings, not introspective psychology, or meditative meta-experience. The methodological implications of this retrospection (as opposed to \i{intro}spection), how phenomenological writing seeks reflective consensus on claims about consciousness \mdash{} this fashion of constructing a research community, a discursive-methodological field, does not conform to empirical scientific method, but is arguably a quite valid and defensible means of meeting the criteriological goals \mdash{} the discourse ethics, the democratization of scientific participation \mdash{} which physical science achieves via empiricist Ontology. For all its limitations, Positivism has the one virtue of disputational inclusiveness, demanding potential observability (not some special revelation or insight) for theoretic ur-entities. The civic norms of Phenomenology are more complex, because both \q{transcendental} analysis of consciousness \mdash{} as a kind of philosophical ground zero, a neo-Cartesian fortress against skepticism and empiricism \mdash{} and also a more pluralistic, enculturated, embodied, social Phenomenology, are well-represented (and interpenetrate in complex ways) in the continuing post-Husserl tradition. That being said, even in its most neo-Idealist form, reifying consciousness as a primordial frame on any cognitive-scientific reasoning, as human sciences' condition of possibility, Phenomenology cannot help but textually acknowledge pluralism, and philosophical collaboration \mdash{} precisely because its claims are not descriptive of empirically locatable/observable objects.
}
\p{Interestingly, the phenomenological tradition reveals substantial interest in both the socio-political and the formal-mathematical: this is not so noteworthy in itself, because Analytic philosophy also connects (say) language with (say) logic, but the phenomenological tradition is distinct in that it joins the humanistic and the formal/mathematical without the same tendency to home in on an overlapping, logico-semantic core. In writings where Analytic philosophers appear to address both social and mathematical concerns, usually their underlying motivation, or so it seems to me, is to find some logical underpinnings to linguistic or cognitive structure (say, \i{implicatures}) \mdash{} logic, subject to formal treatment, also manifesting itself in the organization of thoughts and expressions.
Amongst phenomenologists, however, for example Husserl, Merleau-Ponty (in his science-oriented writings; \cite{DavidMorris}), and Anglo-American writers in the \q{Naturalizing Phenomenology} tradition, there is evident interest in mathematics \i{apart from} logic: topology, differential geometry, mereotopology, multi-granularity.\footnote{Not that logic is wholly unrelated to these subjects: consider topological and type/Category-theoretic embeddings of logical systems within certain categories, or technical domains, like toposes, sheaves, granules; but logic in this sense, mathematically founded within spaces otherwise discussed at least as metaphoric guides within Phenomenology, does not appear to be the dominant understanding of logic in the Analytic philosophical tradition. To be fair, style may dictate that argumentation should be trimmed to its essential elements, and mathematical deductions are rarely if ever essential for defending phenomenological claims. In Jean Petitot, for example, mathematics is sometimes intrinsic to empirical backing for phenomenological ideas, but other times (say, sheaf mereology), the formal theories, while useful analogies, do not clearly pair up with logico-deductive justifications. But, I would reply, there is so much unexplained about consciousness, and cognition as it occurs in conscious minds --- the controversial \q{Explanatory Gap} between mind and matter --- that much of the important argumentation does not yet have deductive signposts; we need an effective methodology which is not so linear. As we approach beyond a simplifying, logico-functionalist vantage, which we eventually must transcend, both functionalization and empiricism fall by the wayside as reasonable methods for \q{Naturalizing} consciousness. We have to accept when the formal/mathematical stands as more intuitive than rhetorical, on pain of \q{Naturalization} being quarantined from a humanistic core entirely. } Phenomenology therefore uncovers an arguably deeper and truer bridge between human and \q{eidetic} sciences, in Petitot's phrase, one which is not pre-loaded with logico-reductive presuppositions. If this is accurate, Phenomenology can provide a deeper methodology for the humanities in their interactions with natural science. Even insofar as we stay committed to the idea that social/cultural/mental phenomena emerge from (neuro-)physical ones, we need to curate methods for these \q{emergent} sciences which have the requisite theoretical autonomy to actually extend the explanatory reach of the natural sciences on which they causally rest. Cognitive Linguistics, I would argue, is a good example of this notion of autonomy, and its methodology, I would also argue, bears an important resemblance to phenomenological research. } \p{Another brief case-study (revisiting footnote \ref{footnoteVision}): our environing world mostly discloses itself through objects' visible exterior: as much as we have on occasion a palpable sense of volume as well (as when looking through a fog) \mdash{} and as much as what we see is inextricable from our embodied interactions with objects, adding tactile and kinaesthetic dimensions, a canonical sense of perception is still the vision of distant objects, usually through their surface geometry. 
A canonical example of perceptual cognition is therefore reconstructing geometry from visual appearances, especially color gradations \mdash{} mathematically, converting \q{color} vector fields to curvature vector fields (it's worth noting that color is an almost primordial example of a \Gardenfors{} Conceptual Space). This kind of transformation, described (say) via differential geometry, is \i{qua} theoretical device an example of semiotic morphism, a mapping between representation disciplines. The point is not, however, that there are precise correlates in the brain which \q{implement} this procedure; that the semiotic morphism takes a domain and codomain that quantify over empirically locatable, neurophysical entities. We can study how software reconstructs color data from geometry as an approximation to a \i{process}, a model-building whose semiotics of approximation is coarse-grained and holistic.\footnote{The experiential verisimilitude of computer graphics is a phenomenological data point, but so is their obvious unreality \mdash{} the mathematics reveals something about, but is not an all-encompassing model for, shape and color \i{qua} material phenomenon, still less the neuroscience of color experience. Morphisms between structures may model \i{processes} more correctly than the structures themselves approximate their substrata \mdash{} but this is no longer a semiotics of causal/physical reductionism, a use of mathematics (like differential geometry) to iconify empirical givens, the way that (say) the Navier-Stokes equations are understood to refer explicitly to (even while idealizing and abstracting from) fluid-mechanical dynamics. Our theory-semiotics has to locate the site of designation at a more oblique scale, a different Ontological register, of processes and transformations \mdash{} seeing in phenomena the image of a theoretical model because of its global structure, as a sign in its own right, rather than a collage of symbols and numbers to which are reduced spatializations and trajectories of causation and physical influence. } Formal devices like vectors or vector fields need not mold symbolic systems by mapping individual symbols to spacetime objects, or processes, but rather afford representation-mappings that capture cognition indirectly and patternwise.
}
\p{I make this point using visual consciousness as an example, but it applies also to cognitive grammar, where the color-to-curvature-vector morphism has an analogue in the mapping of word-sequences to semantic graphs. I do not intend to claim that there are specific, individuated neurophysical analogues to theoretical posits in the symbolic regime I sketched earlier, in terms of \POS{} and lexical annotations, inter-word and inter-phrase connections, phrase-aggregate hypernodes or frames, and the rest. There are not, necessarily, for example, little brain regions whose role is to represent different types of phrase structures (e.g., different flavors of pluralization). Our explanatory ambition, instead, should be cognitive-linguistic models of a global process-structure, agnostic about one-to-one correspondences between the posits of the theory and the empirical stuff whose behavior it wants to explain.
Cognitive triggers bridge formal/empirical sciences with the phenomenological/humanistic: their causal engenderings are physical and structural phenomena, but their manifestation in the world is not fully tractable without an interpersonal deliberation accounting for both the privateness of consciousness and the sociality of mind, and, so, something akin to Phenomenology. } \p{It may appear that I am describing a weak-functional theory (or metatheory) which uses functional description in lieu of precise micro-physical explanation \mdash{} in other words, that in lieu of explaining precisely how the brain achieves vision or language, we describe functional capabilities that are prerequisite for these competences, and refactor the goal of scientific explanation as to describe the system of intermediate functionality as correctly as possible, rather than describe how this functionality is physically realized. In a strong form, this re-orientation yields functionalism in theories/philosophies of Mind, that try to refrain from Ontological commitments to mental states or properties \i{apart from} descriptions of their functional roles. In other words, according to the parameters of the field of study and its institutions, even if not deep metaphysical beliefs, mental states are reducible to functional states, and cognitive systems are scientifically equivalent if they reveal similar functional organization, whether they belong to human or animal minds or computers or extra-terrestrials. A more modest functionalism would reject the implied reductionistic (maybe eliminative) Ontological stance, and maintain that mental things are not wholly, metaphysically subsumed by their functional organization, while still practicing a kind of theory whereby this functional organization is the proper object of study; the specific aspect of the mental realm which is scientifically tractable. } \p{I do not believe I am making even such weak-functionalist claims: either branch of functionalism can misattribute the methodological association between theoretical structures and explanatory goals. We may be led toward the stronger or weaker functionalist viewpoints if we understand that a cognitive theory should task itself with making symbolic icons for scientifically grounded referents, grounded in an abstract space of functional organization if not in empirical space-time. Of course, most scientific explanation does construct a specialized, technical semiotics whose signs refer into either formal spaces or accounts of empirical space-bound things, however abstracted or idealized. But, conversely, insofar as I propose to focus on functional structures, and particularly cross-representation-framework transformations, my intent is to \q{functionalize} the discursive norms of the theory, not the phenomena it investigates. In order to negotiate between the competing demands of scientific rigor and formalization \mdash{} on the one hand \mdash{} with the immediacy and etheriality and subjectivity of consciousness, on the other, we need to \q{attach} theoretical structures to mental phenomena without getting bogged down in questions of the scientific or Ontological status of mental things, how they are \q{scientific} individually and collectively (collectively as in the Ontology of \q{Mind} overall). 
}
\p{This suggests adopting functional attitudes not in the theory but in the metatheory: to use functionalism as an organizing principle on the theoretical \i{discourse}, on the attitudes of the scientists and scholars who want to straddle the divide between natural and mathematical sciences and humanism and consciousness. The \q{semiotic morphisms} of color-to-curvature vector fields, or of word-sequences to typed semantic graphs, are recommendations for guidelines on how researchers should write and communicate about cognitive processes in their global structure. I have tried to outline a metadiscourse more than a metalanguage \mdash{} not a template for building theory-languages whose signs refer into a realm of posited empirical or abstract entities, but a template for using certain formal-mathematical constructions (in domains like typed lambda calculus, type theory, or differential geometry) as a textual prelude, a way to position the norms of writing to be receptive to both scientific-mathematical and phenomenological concerns. If semiotic morphisms like color-to-curvature or word-sequence-to-semantic-graph have explanatory merit as ways to picture cognitive processes, this merit is intended to be judged according to how it affects discursive norms on this scientific borderland between mathematics and humanities, rather than how it reduces empirical phenomena to mathematizable abstractions. If there is \i{something} in cognition analogous to these morphisms, even if \q{analogous} means merely that holding the morphisms as formally defined in our minds while thinking about cognition can show us philosophical ways forward, then we should be interested in refining these formalizations as part of the overall Cognitive-Phenomenological project.
}
{ "alphanum_fraction": 0.8121753395, "avg_line_length": 72.1907894737, "ext": "tex", "hexsha": "4e0f3e91b2276a69699da40cf6f9f2930fd5eed3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8fd304d7df709d32367e49a98fb99f16162c5477", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "ScignScape-RZ/phcg", "max_forks_repo_path": "css/section3.ngml.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8fd304d7df709d32367e49a98fb99f16162c5477", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "ScignScape-RZ/phcg", "max_issues_repo_path": "css/section3.ngml.tex", "max_line_length": 129, "max_stars_count": null, "max_stars_repo_head_hexsha": "8fd304d7df709d32367e49a98fb99f16162c5477", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "ScignScape-RZ/phcg", "max_stars_repo_path": "css/section3.ngml.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4662, "size": 21946 }