\section{Predefined global functions}
\label{sec:appendix:primops}

% \begin{table}[h]
% \caption{Predefined primitive operations of \langname}
% \label{table:primops}
% \footnotesize
% \tiny
\tiny
\begin{longtable}[h]{|l |l | p{.25\linewidth} | p{.5\linewidth} |}
\hline
% \rowfont{\bfseries}
Code & Mnemonic & Signature & Description \\ \hline
\input{generated/predeffunc_rows.tex}
% SelectField & & $((\tau_1,\dots,\tau_n), i: Byte) \to \tau_i$ & $\SelectField{(e_1,\dots,e_n)}{i}$ & \\
% \hline
% SomeValue & & $[T](x: T) \to Option[T]$ & $\Some{e}$ & injects value into non-empty optional value \\
% \hline
% NoneValue & & $[T]()\to Option[T]$ & $\None{\tau}$ & constructs empty optional value of type $\tau$ \\
% \hline
% Collection & & $[T](T, \dots, T)\to Coll[T]$ & $\Coll{e_1,\dots,e_n}$ & constructor of collection with $n$ items \\
% \hline
\end{longtable}
\normalsize
% \end{table}

\mnote{This table is autogenerated from sigma operation descriptors. See SigmaPredef.scala}

\input{generated/predeffunc_sections.tex}
{ "alphanum_fraction": 0.6371115174, "avg_line_length": 32.1764705882, "ext": "tex", "hexsha": "326f3dc097d8c8af7220c0565dbcd0bb497aea09", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2022-01-30T02:12:08.000Z", "max_forks_repo_forks_event_min_datetime": "2017-12-28T11:19:17.000Z", "max_forks_repo_head_hexsha": "251784a9f7c1b325c4859fe256c9fe3862fffe4e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jozanek/sigmastate-interpreter", "max_forks_repo_path": "docs/spec/appendix_primops.tex", "max_issues_count": 486, "max_issues_repo_head_hexsha": "251784a9f7c1b325c4859fe256c9fe3862fffe4e", "max_issues_repo_issues_event_max_datetime": "2022-03-30T11:02:28.000Z", "max_issues_repo_issues_event_min_datetime": "2017-12-08T13:07:23.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jozanek/sigmastate-interpreter", "max_issues_repo_path": "docs/spec/appendix_primops.tex", "max_line_length": 117, "max_stars_count": 41, "max_stars_repo_head_hexsha": "251784a9f7c1b325c4859fe256c9fe3862fffe4e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jozanek/sigmastate-interpreter", "max_stars_repo_path": "docs/spec/appendix_primops.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-23T19:27:50.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-21T13:18:44.000Z", "num_tokens": 391, "size": 1094 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Methodology}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Dataset}

The researchers will use a custom dataset mined from public GitHub repositories that contain Python, JavaScript, and Java source code, such as repositories that implement algorithms and data structures. Labels will be extracted using a number of heuristics, such as the method name or, in the case of algorithm implementation repositories, the repository folder. Since CodeBERT only accepts inputs of up to 512 tokens, it is important to preprocess the dataset to ensure that each code snippet fits within this limit\cite{devlin2018bert, liu2019roberta}.

\textbf{RQ1.} Data augmentation\cite{feng2021survey} will be used to make the dataset synthetically larger. To counter the challenge of overfitting, that is, to make sure that the model does not simply memorize variable names, the researchers will apply synonym replacement to identifiers and swap consecutive variable-declaration lines, since these transformations do not change the behavior of the code. Custom code extraction programs will be written for each programming language using available parsers.

\textbf{RQ2.} Code snippets will preserve coding style, except for comments, which will not be considered.

\section{Training}

CodeBERT will be used for transfer learning on the custom dataset as a multiclass classification task, with each one-hot encoded label corresponding to one algorithm. Different hyperparameters for the output layer will be explored during model selection. The researchers will use the PyTorch\cite{paszke2019pytorch} library for implementation and PyTorch Lightning for training\cite{Falcon_PyTorch_Lightning_2019}.

\section{Model Selection}

The best performing model will be selected based on its F1 score. Precision and recall are calculated from the numbers of true positives ($TP$), false positives ($FP$), and false negatives ($FN$): \( \mathit{precision} = \frac{TP}{TP + FP}\) and \( \mathit{recall} = \frac{TP}{TP + FN}\). The F1 score balances these metrics using the formula \( F1 = \frac{2 \cdot \mathit{precision} \cdot \mathit{recall}}{\mathit{precision} + \mathit{recall}}\). Once the best model has been selected, a prototype of Project CodeC’s task-constrained feedback feature will be implemented and tested on actual programming competition datasets.

\textbf{RQ1.} The model with the highest F1 score will be selected as the best performing model. Other performance metrics may also be collected, such as running loss and classification accuracy.

\textbf{RQ2.} By recording the examples that result in false positives and false negatives, manual analysis will be performed to identify the features that the transformer was not able to fully capture.
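The metric formulae above can be made concrete with a short numerical sketch. The example below is purely illustrative (the counts are made up, and R is used only for demonstration; it is not part of the planned PyTorch implementation):

\begin{verbatim}
# Hypothetical confusion-matrix counts for a single class (illustrative values)
tp <- 40   # true positives
fp <- 10   # false positives
fn <- 20   # false negatives

precision <- tp / (tp + fp)                                 # 0.8
recall    <- tp / (tp + fn)                                 # about 0.667
f1        <- 2 * precision * recall / (precision + recall)  # about 0.727

print(c(precision = precision, recall = recall, f1 = f1))
\end{verbatim}

In the multiclass setting described above, the same computation would be repeated per class and the per-class F1 scores averaged to obtain a single selection criterion.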
{ "alphanum_fraction": 0.7657718121, "avg_line_length": 55.1851851852, "ext": "tex", "hexsha": "fbd443929ce10f8b7117b059598ec6c8acb69ea3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0d9b862442780cd39731d3e7b6416fe9cc4a08aa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gerdiedoo/test-thesis", "max_forks_repo_path": "chapters/methodology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0d9b862442780cd39731d3e7b6416fe9cc4a08aa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gerdiedoo/test-thesis", "max_issues_repo_path": "chapters/methodology.tex", "max_line_length": 122, "max_stars_count": null, "max_stars_repo_head_hexsha": "0d9b862442780cd39731d3e7b6416fe9cc4a08aa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gerdiedoo/test-thesis", "max_stars_repo_path": "chapters/methodology.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 615, "size": 2980 }
\section{Value of Different Policies [35 pts]}

In many situations such as healthcare or education, we cannot run arbitrary policies and collect data from running those policies for evaluation. In these cases, we may need to take data collected from following one policy and use it to evaluate the value of a different policy. The equality proved in the following exercise can be an important tool for achieving this.

The purpose of this exercise is to get familiar with how to compare the value of different policies, $\pi_1$ and $\pi_2$, on a fixed-horizon MDP. A fixed-horizon MDP is an MDP where the agent's state is reset after $H$ timesteps; $H$ is called the \emph{horizon} of the MDP. There is no discount (i.e., $\gamma=1$) and policies are allowed to be non-stationary, i.e., the action identified by a policy depends on the timestep in addition to the state.

Let $x_t\sim \pi$ denote the distribution over states at timestep $t$ (for $1\leq t \leq H$) upon following policy $\pi$, let $V^{\pi}_t(x_t)$ denote the value function of policy $\pi$ in state $x_t$ at timestep $t$, and let $Q_t^{\pi}(x_t,a)$ denote the corresponding $Q$ value associated with action $a$. As a clarifying example, we write $\E_{x_t \sim \pi_1} V(x_t)$ to represent the average value of the value function $V(\cdot)$ over the states at timestep $t$ encountered upon following policy $\pi_1$. Please show the following:
\begin{equation}
\label{eq:1}
V_1^{\pi_1}(x_1) - V_1^{\pi_2}(x_1) = \sum_{t=1}^H \E_{x_t \sim \pi_2} \Big( Q_t^{\pi_1}(x_t,\pi_1(x_t,t)) - Q_t^{\pi_1}(x_t,\pi_2(x_t,t)) \Big)
\end{equation}

\textbf{Intuition:} The above expression can be interpreted in the following way. For concreteness, assume that $\pi_1$ is the better policy, i.e., the one achieving $V_1^{\pi_1}(x_1) \geq V_1^{\pi_2}(x_1)$. Suppose you are following policy $\pi_2$ and you are at timestep $t$ in state $x_t$. You have the option to follow $\pi_1$ (the better policy) until the end of the episode, for a total return of $Q_t^{\pi_1}(x_t,\pi_1(x_t,t))$ from the current state and timestep; or you have the option to follow $\pi_2$ for one timestep and then follow $\pi_1$ until the end of the episode (you could, of course, follow many other policies). This would give you a ``loss'' of $Q_t^{\pi_1}(x_t,\pi_1(x_t,t)) - Q_t^{\pi_1}(x_t,\pi_2(x_t,t))$ that originates from following the worse policy $\pi_2$ instead of $\pi_1$ at that timestep.
% Then equation \ref{eq:1}
Then the equation above means that the value difference of the two policies is the sum, over timesteps, of the losses induced by following the suboptimal policy at each timestep, weighted by the state distribution of the policy you are following.

\textbf{Answer:}

For any timestep $t$ and state $x_t$,
\begin{equation}
\begin{split}
V^{\pi_1}_{t}(x_t) - V^{\pi_2}_{t}(x_t) & = Q^{\pi_1}_{t}(x_t,\pi_1(x_t,t)) - Q^{\pi_2}_{t}(x_t, \pi_2(x_t,t)) \\
 & = Q^{\pi_1}_{t}(x_t,\pi_1(x_t,t)) - Q^{\pi_1}_{t}(x_t, \pi_2(x_t,t)) + Q^{\pi_1}_{t}(x_t, \pi_2(x_t,t)) - Q^{\pi_2}_{t}(x_t, \pi_2(x_t,t))
\end{split}
\end{equation}
The last two terms differ only in which policy is followed from timestep $t+1$ onwards:
\begin{equation}
\begin{split}
Q^{\pi_1}_{t}(x_t, \pi_2(x_t,t)) - Q^{\pi_2}_{t}(x_t, \pi_2(x_t,t)) & = r_t(x_t, \pi_2(x_t, t)) + {\mathbb E}_{s' \sim p(x_t, \pi_2(x_t, t))}V^{\pi_1}_{t+1}(s') \\
 & \quad - r_t(x_t, \pi_2(x_t, t)) - {\mathbb E}_{s' \sim p(x_t, \pi_2(x_t, t))}V^{\pi_2}_{t+1}(s') \\
 & = {\mathbb E}_{s' \sim p(x_t, \pi_2(x_t, t))}(V^{\pi_1}_{t+1}(s') - V^{\pi_2}_{t+1}(s'))
\end{split}
\end{equation}
Plugging this back into the previous formula:
\begin{equation}
\begin{split}
V^{\pi_1}_{t}(x_t) - V^{\pi_2}_{t}(x_t) & = Q^{\pi_1}_{t}(x_t,\pi_1(x_t,t)) - Q^{\pi_1}_{t}(x_t, \pi_2(x_t,t)) + {\mathbb E}_{s' \sim p(x_t, \pi_2(x_t, t))}(V^{\pi_1}_{t+1}(s') - V^{\pi_2}_{t+1}(s'))
\end{split}
\end{equation}
Taking the expectation over $x_t \sim \pi_2$ and applying the (backward) inductive hypothesis that the claim already holds from timestep $t+1$ onwards (at the base case $t = H$ the second term is zero, since $V^{\pi}_{H+1} \equiv 0$):
\begin{equation}
\begin{split}
{\mathbb E}_{x_t \sim \pi_2}(V^{\pi_1}_{t}(x_t) - V^{\pi_2}_{t}(x_t)) & = {\mathbb E}_{x_t \sim \pi_2}(Q^{\pi_1}_{t}(x_t,\pi_1(x_t,t)) - Q^{\pi_1}_{t}(x_t, \pi_2(x_t,t))) + {\mathbb E}_{x_{t+1} \sim \pi_2}(V^{\pi_1}_{t+1}(x_{t+1}) - V^{\pi_2}_{t+1}(x_{t+1})) \\
 & = {\mathbb E}_{x_t \sim \pi_2}(Q^{\pi_1}_{t}(x_t,\pi_1(x_t,t)) - Q^{\pi_1}_{t}(x_t, \pi_2(x_t,t))) \\
 & \quad + \sum_{\tau=t+1}^H({\mathbb E}_{x_\tau \sim \pi_2}(Q_\tau^{\pi_1}(x_\tau, \pi_1(x_\tau, \tau)) - Q_\tau^{\pi_1}(x_\tau, \pi_2(x_\tau, \tau)))) \\
 & = \sum_{\tau=t}^H({\mathbb E}_{x_\tau \sim \pi_2}(Q_\tau^{\pi_1}(x_\tau, \pi_1(x_\tau, \tau)) - Q_\tau^{\pi_1}(x_\tau, \pi_2(x_\tau, \tau))))
\end{split}
\end{equation}
By backward induction, the expression above therefore holds for every $1 \leq t \leq H$; taking $t = 1$, where $x_1$ is the fixed initial state, gives the claim.
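As a numerical sanity check (not part of the original assignment), the identity in Equation \ref{eq:1} can be verified on a small, randomly generated fixed-horizon MDP. The sketch below uses R with made-up sizes ($2$ states, $2$ actions, $H = 3$) and time-independent rewards; it computes both sides of the identity by backward induction and compares them.

\begin{verbatim}
set.seed(1)
S <- 2; A <- 2; H <- 3

# Transition probabilities P[s, a, s'] and rewards r[s, a].
P <- array(runif(S * A * S), dim = c(S, A, S))
for (s in 1:S) for (a in 1:A) P[s, a, ] <- P[s, a, ] / sum(P[s, a, ])
r <- matrix(runif(S * A), nrow = S)

# Two deterministic non-stationary policies: pol[s, t] gives the action.
pi1 <- matrix(sample(1:A, S * H, replace = TRUE), nrow = S)
pi2 <- matrix(sample(1:A, S * H, replace = TRUE), nrow = S)

# Backward induction for Q^pi_t(s, a) and V^pi_t(s) of a given policy.
values <- function(pol) {
  Q <- array(0, dim = c(S, A, H + 1))   # Q[, , H + 1] stays 0 (terminal)
  V <- matrix(0, nrow = S, ncol = H + 1)
  for (t in H:1) {
    for (s in 1:S) for (a in 1:A)
      Q[s, a, t] <- r[s, a] + sum(P[s, a, ] * V[, t + 1])
    for (s in 1:S) V[s, t] <- Q[s, pol[s, t], t]
  }
  list(Q = Q, V = V)
}
v1 <- values(pi1); v2 <- values(pi2)

# State distribution at each timestep when following pi2, starting in state 1.
d <- matrix(0, nrow = S, ncol = H); d[1, 1] <- 1
for (t in 1:(H - 1))
  for (s in 1:S) d[, t + 1] <- d[, t + 1] + d[s, t] * P[s, pi2[s, t], ]

lhs <- v1$V[1, 1] - v2$V[1, 1]
rhs <- 0
for (t in 1:H) for (s in 1:S)
  rhs <- rhs + d[s, t] * (v1$Q[s, pi1[s, t], t] - v1$Q[s, pi2[s, t], t])
print(c(lhs = lhs, rhs = rhs))   # the two numbers agree
\end{verbatim}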
{ "alphanum_fraction": 0.6650463986, "avg_line_length": 87.0384615385, "ext": "tex", "hexsha": "3b1ccdb06812706ee00e9ab9b9d6f94443cb2003", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-02T01:34:47.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-02T01:34:47.000Z", "max_forks_repo_head_hexsha": "dc9a2238c7e28db7ae5eaebde6d776a2e5a59ebc", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ksang/cs234-assignments", "max_forks_repo_path": "assignment1_written/tex/Q_policies.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "dc9a2238c7e28db7ae5eaebde6d776a2e5a59ebc", "max_issues_repo_issues_event_max_datetime": "2022-02-10T02:04:12.000Z", "max_issues_repo_issues_event_min_datetime": "2020-11-13T17:43:49.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ksang/cs234-assignments", "max_issues_repo_path": "assignment1_written/tex/Q_policies.tex", "max_line_length": 531, "max_stars_count": 8, "max_stars_repo_head_hexsha": "dc9a2238c7e28db7ae5eaebde6d776a2e5a59ebc", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ksang/cs234-assignments", "max_stars_repo_path": "assignment1_written/tex/Q_policies.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-27T12:53:13.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-25T12:29:52.000Z", "num_tokens": 1749, "size": 4526 }
\section{Sustainability}

\begin{frame}{\insertsec}
  \begin{table}[H]
    \centering
    \begin{tabular}{|c|c|c|c|}
      \cline{2-4}
      \multicolumn{1}{c|}{} & \textbf{PPP} & \textbf{Useful life} & \textbf{Risks} \\
      \hhline{-===}
      \multirow[c]{2}{*}{\textbf{Environmental}} & \makecell{Design \\ consumption} & \makecell{Ecological \\ footprint} & \makecell{Environmental \\ risks} \\
      \cline{2-4}
      & 2/10 & 19/20 & -2/-20 \\
      \hline
      \multirow{2}{*}{\textbf{Economical}} & Bill & \makecell{Viability \\ plan} & \makecell{Economical \\ risks} \\
      \cline{2-4}
      & 7/10 & 18/20 & -15/-20 \\
      \hline
      \multirow{2}{*}{\textbf{Social}} & \makecell{Personal \\ impact} & \makecell{Social \\ Impact} & \makecell{Social \\ risks} \\
      \cline{2-4}
      & 9/10 & 18/20 & -5/-20 \\
      \hline
      \hline\hline
      \multirow{2}{*}{\parbox[c]{3cm}{\centering\textbf{Sustainability \\ range}}} & 18/30 & 55/60 & -22/-60 \\
      \cline{2-4}
      & \multicolumn{3}{c|}{51/90} \\
      \hline
    \end{tabular}
    \caption{Sustainability matrix of the project \label{tab:sustainability}}
  \end{table}
\end{frame}
{ "alphanum_fraction": 0.5576271186, "avg_line_length": 35.7575757576, "ext": "tex", "hexsha": "29030816bf58275a60a21e1c6a8f778d8a8e4dc1", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-23T08:11:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-23T08:11:28.000Z", "max_forks_repo_head_hexsha": "7551a3c13a985ee7eecf7a4f38a6ee4803b05ff1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jmigual/FIB-TFG", "max_forks_repo_path": "LATEX/GEP_presentation/sections/05_sustainability.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7551a3c13a985ee7eecf7a4f38a6ee4803b05ff1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jmigual/FIB-TFG", "max_issues_repo_path": "LATEX/GEP_presentation/sections/05_sustainability.tex", "max_line_length": 89, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7551a3c13a985ee7eecf7a4f38a6ee4803b05ff1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jmigual/FIB-TFG", "max_stars_repo_path": "LATEX/GEP_presentation/sections/05_sustainability.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-02T15:17:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-02T15:17:51.000Z", "num_tokens": 439, "size": 1180 }
\documentclass[letterpaper,11pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[hmargin=1.25in,vmargin=1in]{geometry}
\usepackage{booktabs}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{lmodern}
\usepackage{microtype}

\title{Coursework 1: STAT 570}
\author{Philip Pham}
\date{\today}

\begin{document}
\maketitle

\begin{enumerate}
\item The data we analyze are from a 1970s study that investigated insurance redlining in $n = 47$ zipcodes. Information on who was being refused homeowners insurance is not available, so instead we take as response the number of FAIR plan policies written and renewed in Chicago by zip code over the period December 1977 to May 1978. The FAIR plan was offered by the city of Chicago as a default policy to homeowners who had been rejected by the voluntary market. The data we will analyze are named \texttt{chredlin} and are in the \texttt{faraway} package. The variable \texttt{involact} is the number of new FAIR plan policies and renewals per 100 housing units. We will consider five covariates for modeling the response: racial composition in percent minority (\texttt{race} $x_{i1}$), fires per 100 housing units (\texttt{fire} $x_{i2}$), theft per 1000 population (\texttt{theft} $x_{i3}$), percent of housing units built before 1939 (\texttt{age} $x_{i4}$), and log median family income in thousands of dollars (\texttt{lincome} $x_{i5}$), $i = 1,\ldots,47$.

  We will examine the model with the main effects due to race, fire, theft, age and $\log(\mathrm{income})$. We let $Y_i$ represent \texttt{involact}, and $x_i = \left(x_{i1}, x_{i2}, \ldots, x_{i5}\right)$, the covariates, for individual $i$, $i = 1,2,\ldots,47$. We fit the model
  \begin{equation}
    y_i = \beta_0 + \sum_{j=1}^5x_{ij}\beta_j + \epsilon_i
    \label{eqn:p1_model}
  \end{equation}
  for $i=1,\ldots,n$ using least squares.
  \begin{enumerate}
  \item Provide informative plots to illustrate what we might expect to learn from the model in Equation \ref{eqn:p1_model}. \label{part:p1a}
    \begin{description}
    \item[Solution:] See Figure \ref{fig:p1_pair_plots} and the corresponding code in \href{https://nbviewer.jupyter.org/github/ppham27/stat570/blob/master/hw1/chredlin\_explore.ipynb}{\texttt{chredlin\_explore.ipynb}}. \texttt{fire}, \texttt{race}, and \texttt{age} appear to be positively correlated with \texttt{involact}. \texttt{income} appears to be negatively correlated. Zipcodes in the northern \texttt{side} of Chicago have a lower minority population and higher income. \texttt{involact} is smaller in these northern zipcodes, too.
    \end{description}
  \item Give interpretations of the parameters $\beta_j$, $j = 1,\ldots,5$. \label{part:p1b}
    \begin{table}
      \centering
      \input{p1_model_parameters.tex}
      \caption{The result of fitting the model described in Equation \ref{eqn:p1_model}. The procedure for obtaining the estimates and test statistics is described in Part \ref{part:p1_model}.}
      \label{tab:p1_model_parameters}
    \end{table}
    \begin{description}
    \item[Solution:] Fitting such a model, we get the estimates in Table \ref{tab:p1_model_parameters} for $\beta_j$. The percent of minorities (\texttt{race}) and frequency of fires (\texttt{fire}) are positively correlated with the number of FAIR plan policies. \texttt{involact} is the number of FAIR plans per 100 housing units. Thus, every percent increase in racial minorities means about 1 FAIR plan, and for every fire per 100 housing units, there are 3 FAIR plans. \texttt{age} seems to have a positive effect on \texttt{involact}, while \texttt{theft} has a negative effect.

      \texttt{log\_income} doesn't seem to tell us anything new: it's correlated with other covariates, and its effect is mainly due to chance.
    \end{description}
  \item Reproduce every number in the handout using matrix and arithmetic operations. \label{part:p1_model}
    \begin{description}
    \item[Solution:] Let us assume that $\epsilon_i \sim \mathcal{N}\left(0, \sigma^2\right)$. The log-likelihood of this model is
      \begin{align}
        \sum_{i=1}^n \log\mathbb{P}\left(y_i \mid x_i, \beta, \sigma^2\right)
        &= -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^n \left(y_i - x_i^\intercal\beta\right)^2 \nonumber \\
        &= -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\left\lVert y - X\beta \right\rVert_2^2, \label{eqn:p1_log_likelihood}
      \end{align}
      where we $0$-index $\beta$ and the columns of $X$, so each row of $X$ is $x_i = \left(1, x_{i1}, x_{i2},\ldots,x_{i5}\right)$.

      \subsection*{Estimating $\hat{\beta}$}

      To maximize Equation \ref{eqn:p1_log_likelihood}, we choose $\hat{\beta}$ such that $X\hat{\beta}$ is the projection of $y$ onto the subspace spanned by the columns of $X$. Thus, we must have that $X^\intercal\left(y - X\hat{\beta}\right) = 0$ since the residuals will be orthogonal to the columns of $X$ if $X\hat{\beta}$ is the projection that minimizes the squared error. Solving for $\hat{\beta}$, we have that
      \begin{equation}
        \hat{\beta} = \left(X^\intercal X\right)^{-1} X^\intercal y.
        \label{eqn:p1_beta_hat}
      \end{equation}
      The results of applying Equation \ref{eqn:p1_beta_hat} can be seen in the first column of Table \ref{tab:p1_model_parameters}.

      \subsection*{Estimating $\hat{\sigma}^2$}

      Let us derive an unbiased estimator for the residual variance. Consider the residual random vector
      \begin{equation}
        R = y - X\hat{\beta}.
      \end{equation}
      As stated earlier, the residuals are orthogonal to the subspace spanned by the columns of $X$, so they must lie in its orthogonal complement, a subspace of dimension $n - p$, where $p = \dim(\beta)$. Thus, the residuals are $y$ projected onto this subspace. Let $w_1,\ldots,w_{n-p}$ be an orthonormal basis of this subspace, and let $W$ be the matrix with these basis vectors as its columns. We have that
      \begin{align}
        R &= y - X\hat{\beta} \nonumber\\
        &= W\left(W^\intercal y\right) \nonumber\\
        &= W\left(W^\intercal\left(X\beta + \sigma\epsilon\right)\right) \nonumber\\
        &= W\left(W^\intercal X\right)\beta + \sigma W\left(W^\intercal\epsilon\right) \nonumber\\
        &= \sigma W\left(W^\intercal\epsilon\right).
      \end{align}
      Now, $W^\intercal\epsilon \sim \mathcal{N}\left(0, I_{n-p}\right)$. To see this, note that the $i$th entry is $\sum_{j=1}^n w_{ij}\epsilon_j \sim \mathcal{N}\left(0, 1\right)$, and for $i \neq i^\prime$,
      \begin{align*}
        \operatorname{Cov}\left( \left(W^\intercal\epsilon\right)_i, \left(W^\intercal\epsilon\right)_{i^\prime}\right)
        &= \mathbb{E}\left[ \left(\sum_{j=1}w_{ij}\epsilon_j\right)\left(\sum_{k=1}w_{i^\prime k}\epsilon_k\right) \right] \\
        &= \sum_{j=1}^n\mathbb{E}\left[w_{ij}w_{i^\prime j} \epsilon_j^2\right] + 2\sum_{j=1}^{n-1}\sum_{k=j+1}^n \mathbb{E}\left[w_{ij}w_{i^\prime k} \epsilon_j\epsilon_k\right] \\
        &= w_i^\intercal w_{i^\prime} + 2\sum_{j=1}^{n-1}\sum_{k=j+1}^n w_{ij}w_{i^\prime k} \mathbb{E}\left[\epsilon_j\epsilon_k\right] \\
        &= 0,
      \end{align*}
      where the first term vanishes since the two basis vectors are orthogonal, and the second term vanishes because of the independence of the errors.
      Thus, we have that
      \begin{equation}
        R^\intercal R = \sigma^2 \left(W^\intercal\epsilon\right)^\intercal W^\intercal W \left(W^\intercal\epsilon\right) = \sigma^2 \left(W^\intercal\epsilon\right)^\intercal\left(W^\intercal\epsilon\right) \sim \sigma^2 \chi^2_{n-p}.
        \label{eqn:p1_residual_distribution}
      \end{equation}
      Finally, we have that
      \begin{equation*}
        \mathbb{E}\left[R^\intercal R\right] = \sigma^2\left(n - p\right) \Rightarrow \mathbb{E}\left[\frac{\sum_{i=1}^n \left(y - X\hat{\beta}\right)^2}{n-p}\right] = \sigma^2.
      \end{equation*}
      Our unbiased estimator is therefore
      \begin{equation}
        \hat{\sigma}^2 = \frac{\sum_{i=1}^n \left(y - X\hat{\beta}\right)^2}{n-p}.
        \label{eqn:p1_sample_variance}
      \end{equation}
      Applying Equation \ref{eqn:p1_sample_variance}, we obtain \boxed{\hat{\sigma} = \input{p1_residual_standard_error.txt}\unskip.}

      \subsection*{Hypothesis Testing}

      We can rewrite $y$ as $y = X\beta + \sigma \epsilon$, where each element of $\epsilon$ is drawn from $\mathcal{N}\left(0, 1\right)$. Substituting, we have that
      \begin{align}
        \hat{\beta} &= \left(X^\intercal X\right)^{-1}X^\intercal\left(X\beta + \sigma\epsilon\right) \nonumber\\
        &= \beta + \sigma\left(X^\intercal X\right)^{-1}X^\intercal \epsilon.
        \label{eqn:p1_beta_hat_distribution}
      \end{align}
      Thus, $\hat{\beta}_j \sim \mathcal{N}\left(\beta_j, \sigma^2\left(X^\intercal X\right)^{-1}_{jj}\right)$. This gives us that
      \begin{equation*}
        \frac{\hat{\beta}_j - \beta_j}{\sqrt{\sigma^2\left(X^\intercal X\right)^{-1}_{jj}}} \sim \mathcal{N}\left(0, 1\right).
      \end{equation*}
      From Equations \ref{eqn:p1_residual_distribution} and \ref{eqn:p1_sample_variance},
      \begin{equation}
        (n - p)\frac{\hat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-p}.
      \end{equation}
      $\hat{\beta}$ and $\hat{\sigma}^2$ are independent by \href{https://en.wikipedia.org/wiki/Basu\%27s_theorem}{Basu's theorem}: $\hat{\sigma}^2$ is an ancillary statistic whose distribution does not depend on the model parameters $\beta$. Thus, we have that
      \begin{equation}
        \left. \frac{\hat{\beta}_j - \beta_j}{\sqrt{\sigma^2\left(X^\intercal X\right)^{-1}_{jj}}} \middle/ \sqrt{\frac{(n - p)\frac{\hat{\sigma}^2}{\sigma^2}}{n-p}} \right. = \frac{\hat{\beta}_j - \beta_j}{\sqrt{\hat{\sigma}^2\left(X^\intercal X\right)^{-1}_{jj}}} \sim t_{n-p}.
        \label{eqn:p1_beta_hat_j_distribution}
      \end{equation}
      That is, we have a $t$-distribution with $n - p$ degrees of freedom. The denominator of Equation \ref{eqn:p1_beta_hat_j_distribution} gives the second column of Table \ref{tab:p1_model_parameters}.

      For each $\beta_j$, our null hypothesis is $H_0: \beta_j = 0$. Thus, our $t$-test statistic is obtained by setting $\beta_j = 0$ in Equation \ref{eqn:p1_beta_hat_j_distribution},
      \begin{equation*}
        \hat{t}_j = \frac{\hat{\beta}_j}{\sqrt{\hat{\sigma}^2\left(X^\intercal X\right)^{-1}_{jj}}},
      \end{equation*}
      which gives us the third column of Table \ref{tab:p1_model_parameters}. The fourth column is the probability, under the null hypothesis, of obtaining a test statistic at least as extreme as the one observed. Let $F_{t_{n-p}}$ be the cumulative distribution function of the $t_{n-p}$ distribution. The $p$-value is
      \begin{equation*}
        \mathbb{P}\left( \left\lvert T_{n - p}\right\rvert \geq \left\lvert \hat{t}_j\right\rvert \mid \hat{t}_j \right) = 2\left(1 - F_{t_{n - p}}\left(\left\lvert\hat{t}_j\right\rvert\right)\right).
      \end{equation*}
      These calculations are carried out in \href{https://nbviewer.jupyter.org/github/ppham27/stat570/blob/master/hw1/chredlin\_model.ipynb}{\texttt{chredlin\_model.ipynb}}.
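
      In outline, the matrix arithmetic above can be carried out as follows. This is an illustrative R sketch rather than the notebook code itself, assuming the \texttt{chredlin} data from the \texttt{faraway} package described in the problem statement:

\begin{verbatim}
library(faraway)
data(chredlin)

y <- chredlin$involact
X <- cbind(1, chredlin$race, chredlin$fire, chredlin$theft,
           chredlin$age, log(chredlin$income))
n <- nrow(X); p <- ncol(X)

XtX_inv  <- solve(t(X) %*% X)
beta_hat <- XtX_inv %*% t(X) %*% y                 # coefficient estimates
resid    <- y - X %*% beta_hat
sigma2   <- sum(resid^2) / (n - p)                 # unbiased variance estimate
se       <- sqrt(sigma2 * diag(XtX_inv))           # standard errors
t_stat   <- as.vector(beta_hat) / se               # t statistics
p_value  <- 2 * (1 - pt(abs(t_stat), df = n - p))  # two-sided p-values
\end{verbatim}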
    \end{description}
  \item What assumptions are valid for:
    \begin{enumerate}
    \item An unbiased estimate of $\beta_j$, $j = 1,\ldots,5$. \label{part:p1di}
      \begin{description}
      \item[Solution:] From Equation \ref{eqn:p1_beta_hat_distribution}, we have that
        \begin{equation}
          \mathbb{E}\left[\hat{\beta}\right] = \beta + \left(X^\intercal X\right)^{-1}X^\intercal \mathbb{E}\left[\epsilon\right]
        \end{equation}
        since expectation is a linear operator. In our previous calculations, we assumed that the $\epsilon_i$ were independent and normally distributed. It's sufficient, however, that $\boxed{\mathbb{E}\left[\epsilon\right] = \mathbf{0}.}$ Then, we'll have
        \begin{equation*}
          \operatorname{bias}\left(\hat{\beta}\right) = \mathbb{E}\left[\hat{\beta}\right] - \beta = \beta - \beta = 0.
        \end{equation*}
      \end{description}
    \item An accurate estimate of the standard error of $\hat{\beta}_j$, $j = 1,\ldots,5$.
      \begin{description}
      \item[Solution:] From Equation \ref{eqn:p1_beta_hat_distribution}, we can estimate the standard error exactly if $\sigma^2$ is known. For $\hat{\beta}_j$, we get $\sigma\sqrt{\left(X^\intercal X\right)_{jj}^{-1}}$.

        When $\sigma^2$ is unknown, but our errors are still independent and normally distributed, we apply Equation \ref{eqn:p1_beta_hat_j_distribution}. Since the studentized quantity there has Student's $t$-distribution, we can estimate the standard error for $\hat{\beta}_j$ with $\sqrt{\hat{\sigma}^2\left(X^\intercal X\right)_{jj}^{-1}}$.

        If our errors are not normally distributed, our estimate is only accurate if the number of observations is large enough that the distribution of $\hat{\beta}_j$ is well-approximated by a normal distribution.
      \end{description}
    \item Accurate coverage probabilities for $100\left(1 - \alpha\right)\%$ confidence intervals of the form
      \begin{equation}
        \hat{\beta}_j \pm \sqrt{\hat{\sigma}_j^2}z_{1-\alpha/2},
        \label{eqn:p1_confidence_interval_normal}
      \end{equation}
      where $z_{1-\alpha/2}$ represents the $\left(1-\alpha/2\right)$ quantile of an $\mathcal{N}\left(0, 1\right)$ random variable, and $\hat{\sigma}_j^2 = \hat{\sigma}^2\left(X^\intercal X\right)_{jj}^{-1}$.
      \begin{description}
      \item[Solution:] Firstly, the assumptions from the previous part must hold for $\hat{\sigma}_j^2$ to be meaningful. From Equation \ref{eqn:p1_beta_hat_j_distribution}, $(\hat{\beta}_j - \beta_j)/\sqrt{\hat{\sigma}_j^2}$ has Student's $t$-distribution, so the normal approximation for the confidence interval (Equation \ref{eqn:p1_confidence_interval_normal}) is only accurate when $n$ is large.
      \end{description}
    \item Accurate coverage probabilities for $100\left(1 - \alpha\right)\%$ confidence intervals of the form
      \begin{equation}
        \hat{\beta}_j \pm \sqrt{\hat{\sigma}_j^2}t_{n-p}\left(1-\alpha/2\right),
        \label{eqn:p1_confidence_interval_t}
      \end{equation}
      where $p = \dim\left(\beta\right)$ and $t_{n-p}\left(1-\alpha/2\right)$ represents the $\left(1-\alpha/2\right)$ quantile of a standard Student's $t$ random variable with $n - p$ degrees of freedom.
      \begin{description}
      \item[Solution:] Equation \ref{eqn:p1_beta_hat_j_distribution} shows that this is exactly the correct distribution when the $\epsilon_i$ are independent and identically distributed as normal random variables with mean zero. It may still prove to be an accurate confidence interval if the errors have distributions that are well-approximated by the normal distribution and the number of observations is large.
      \end{description}
    \item An accurate prediction for an \emph{observed} outcome at $x_0$.
      \begin{description}
      \item[Solution:] Suppose we were to observe $\left(x_0, y_0 = x_0^\intercal\beta + \epsilon_0\right)$. Let our prediction be $\hat{y}_0 = x_0^\intercal\hat{\beta}$. If the conditions in Part \ref{part:p1di} are satisfied, the error has mean zero, and our estimate for $\hat{\beta}$ is unbiased, so
        \begin{equation*}
          \mathbb{E}\left[y_0\right] = x_0^\intercal\beta = \mathbb{E}\left[\hat{y}_0\right].
        \end{equation*}
        We want to compare our prediction $\hat{y}_0$ with some hypothetical observed response $y_0$. We'll call our prediction accurate within $\delta > 0$ if
        \begin{equation*}
          \hat{y}_0 - \delta \leq y_0 \leq \hat{y}_0 + \delta.
        \end{equation*}
        We want the probability of this event to be high, so we'll say the prediction is accurate within $\delta$ at confidence level $1 - \alpha$ if
        \begin{equation*}
          \mathbb{P}\left(\hat{y}_0 - \delta \leq y_0 \leq \hat{y}_0 + \delta\right) = \mathbb{P}\left(-\delta \leq y_0 - \hat{y}_0 \leq \delta\right) \geq 1 - \alpha.
        \end{equation*}
        Our prediction is accurate if for small $\alpha$, we have small $\delta$. If we assume normality, we can calculate the minimum $\delta$ for a specific $\alpha$, which we'll denote $\delta_\alpha$.

        Since $\hat{\beta}$ satisfies $\left(X^\intercal X\right)\hat{\beta} = X^\intercal y$, we have that the intercept estimate is
        \begin{equation}
          \hat{\beta}_0 = \bar{y} - \sum_{j=1}^p \hat{\beta}_j \bar{X}_{:,j}.
        \end{equation}
        Consider trying to predict $\hat{y} = x^\intercal\hat{\beta}$ for some $x$. We have that
        \begin{equation}
          \hat{y} = \bar{y} + \sum_{j=1}^p \left(x_j - \bar{X}_{:,j}\right)\hat{\beta}_j,
        \end{equation}
        so the variance of the prediction increases with values far from the data. Let $\bar{X}$ be the vector of column-wise means of $X$. Since $\bar{\epsilon}$ is an ancillary statistic, this can also be written as
        \begin{equation}
          \hat{y} \mid x \sim \mathcal{N}\left( x^\intercal \beta, \sigma^2 \left( \frac{1}{n} + \left(x - \bar{X}\right)^\intercal \left(X^\intercal X\right)^{-1} \left(x - \bar{X}\right) \right) \right).
        \end{equation}
        Using the same method as in deriving Equation \ref{eqn:p1_beta_hat_j_distribution}, standardizing and replacing $\sigma^2$ with $\hat{\sigma}^2$, we have
        \begin{equation}
          \frac{\hat{y} - x^\intercal\beta}{ \sqrt{\hat{\sigma}^2\left(\frac{1}{n} + \left(x - \bar{X}\right)^\intercal \left(X^\intercal X\right)^{-1} \left(x - \bar{X}\right)\right)}} \sim t_{n-p}.
          \label{eqn:p1_response_confidence_interval}
        \end{equation}
        Noting that $y_0 \sim \mathcal{N}\left(x_0^\intercal\beta, \sigma^2\right)$, we can apply Equation \ref{eqn:p1_response_confidence_interval} to $\left(x_0, y_0\right)$, which gives us
        \begin{equation*}
          \boxed{
            \delta_\alpha = t_{n-p}\left(1 - \alpha/2\right)
            \sqrt{\hat{\sigma}^2\left(
                1 + \frac{1}{n} + \left(x_0 - \bar{X}\right)^\intercal
                \left(X^\intercal X\right)^{-1}
                \left(x_0 - \bar{X}\right)
              \right)}.
          }
        \end{equation*}
        Thus, our predictions will always have standard error of at least $\sigma$, but they will be more accurate when $x_0$ is close to $\bar{X}$.
      \end{description}
    \end{enumerate}
  \item Summarize the relationship between $y$ and $x_1$, $x_2$, $x_3$, $x_4$, $x_5$, fitting any other models that you see fit to.
    \begin{description}
    \item[Solution:] The relationship between $y$ and the covariates was described in Parts \ref{part:p1a} and \ref{part:p1b}. In particular, we see \texttt{income} does not explain much about \texttt{involact} due to multicollinearity: it is correlated with \texttt{race} and \texttt{fire}.

      Removing \texttt{log\_income} from the model gives us the model parameters in Table \ref{tab:p1_model_parameters_custom}. The residual standard error for this model was \input{p1_residual_standard_error_custom.txt}\unskip, which is actually slightly smaller than that of the model that includes income.
      \begin{table}
        \centering
        \input{p1_model_parameters_custom.tex}
        \caption{The result of fitting a model without considering income.}
        \label{tab:p1_model_parameters_custom}
      \end{table}

      I tried adding an indicator for \texttt{side} but it suffers from the same issue as \texttt{income}: its effect is already explained by the other covariates.
    \end{description}
  \end{enumerate}
\pagebreak
\item Consider the following distributions:
  \begin{description}
  \item[Poisson:]
    \begin{equation}
      p\left(y \mid \mu\right) = \frac{\exp\left(-\mu\right)\mu^y}{y!},
      \label{eqn:p2_poisson}
    \end{equation}
    for $y = 0,1,2,\ldots$.
  \item[Gamma:]
    \begin{equation}
      p\left(y \mid \alpha, \beta\right) = \frac{\beta^\alpha}{\Gamma(\alpha)}y^{\alpha - 1}\exp(-\beta y)
      \label{eqn:p2_gamma}
    \end{equation}
    for $y > 0$ and with $\alpha$ known.
  \item[Inverse Gaussian:]
    \begin{equation}
      p\left(y \mid \mu, \delta\right) = \left(\frac{\delta}{2\pi y^3}\right)^{1/2} \exp\left[ \frac{-\delta\left(y-\mu\right)^2}{2\mu^2y} \right]
      \label{eqn:p2_inverse_gaussian}
    \end{equation}
    for $y > 0$ and $\delta$ known.
  \end{description}
  A distribution is said to be a member of the one parameter exponential family of distributions if it can be written as
  \begin{equation}
    p\left(y \mid \eta_1,\eta_2\right) = h(y)\exp\left[ \eta_1y + \eta_2T_2(y) - A\left(\eta_1,\eta_2\right) \right],
    \label{eqn:p2_exponential}
  \end{equation}
  where $\eta_2$ is known.
  \begin{enumerate}
  \item Show that each of the above distributions is a member of the exponential family and identify $\eta_1$, $\eta_2$, $T_2(y)$, $A\left(\eta_1,\eta_2\right)$, and $h(y)$. \label{part:p2a}
    \begin{description}
    \item[Solution:] For each distribution, we can do some algebra.
      \begin{description}
      \item[Poisson:] We can rewrite Equation \ref{eqn:p2_poisson} as Equation \ref{eqn:p2_exponential}, where
        \begin{align*}
          \eta_1 &= \log\mu \\
          \eta_2 &= 0 \\
          T_2(y) &= 0 \\
          A\left(\eta_1,\eta_2\right) &= \exp(\eta_1) \\
          h(y) &= \frac{1}{y!}.
        \end{align*}
      \item[Gamma:] We can rewrite Equation \ref{eqn:p2_gamma} as Equation \ref{eqn:p2_exponential}, where
        \begin{align*}
          \eta_1 &= -\beta \\
          \eta_2 &= \alpha - 1 \\
          T_2(y) &= \log(y) \\
          A\left(\eta_1,\eta_2\right) &=-\left(\eta_2 + 1\right)\log\left(-\eta_1\right) + \log\Gamma\left(\eta_2 + 1\right)\\
          h(y) &= 1.
        \end{align*}
      \item[Inverse Gaussian:] We can rewrite Equation \ref{eqn:p2_inverse_gaussian} as Equation \ref{eqn:p2_exponential}, where
        \begin{align*}
          \eta_1 &= -\frac{\delta}{2\mu^2} \\
          \eta_2 &= -\frac{\delta}{2} \\
          T_2(y) &= \frac{1}{y} \\
          A\left(\eta_1,\eta_2\right) &= -2\sqrt{\eta_1\eta_2} - \frac{1}{2}\log\left(-2\eta_2\right) \\
          h(y) &= \frac{1}{\sqrt{2\pi y^3}}.
        \end{align*}
      \end{description}
    \end{description}
  \item Identify $\mathbb{E}\left[Y \mid \theta\right]$ and $\operatorname{Var}\left(Y \mid \theta\right)$.
    \begin{description}
    \item[Solution:] We can derive a general formula for computing the mean and variance from Equation \ref{eqn:p2_exponential}. The log-likelihood function is
      \begin{equation}
        l\left( \eta_1, \eta_2 \right) = \log h(y) + \eta_1y + \eta_2 T_2(y) - A\left(\eta_1,\eta_2\right).
      \end{equation}
      If $\eta_2$ is known, the score function is
      \begin{equation}
        S\left(\eta_1, \eta_2\right) = \frac{\partial l\left(\eta_1,\eta_2\right)}{\partial \eta_1} = y - \frac{\partial A\left(\eta_1,\eta_2\right)}{\partial \eta_1}.
      \end{equation}
      The expectation of the score is $0$, so
      \begin{equation}
        \boxed{\mathbb{E}\left[y \mid \eta_1, \eta_2\right] = \frac{\partial A\left(\eta_1,\eta_2\right)}{\partial \eta_1}.}
        \label{eqn:p2_score_mean}
      \end{equation}
      The variance of the score is the Fisher information, so
      \begin{align}
        \mathcal{I}\left(\eta_1, \eta_2\right) &= \operatorname{Var}\left( S\left(\eta_1, \eta_2\right) \right) \nonumber\\
        &= \mathbb{E}\left[ \left(y - \frac{\partial A\left(\eta_1,\eta_2\right)}{\partial \eta_1} \right)^2 \mid \eta_1, \eta_2 \right]\nonumber\\
        &= \operatorname{Var}\left(y \mid \eta_1, \eta_2\right)
        \label{eqn:p2_fisher_variance}
      \end{align}
      by Equation \ref{eqn:p2_score_mean} and using that the mean of the score function is $0$. An alternative definition of the Fisher information is the expected value of the observed information:
      \begin{equation}
        \mathcal{I}\left(\eta_1, \eta_2\right) = -\mathbb{E}\left[ \frac{\partial^2 l\left(\eta_1,\eta_2\right)}{\partial\eta_1^2} \right] = \frac{\partial^2 A\left(\eta_1,\eta_2\right)}{\partial\eta_1^2}.
        \label{eqn:p2_fisher_expectation}
      \end{equation}
      Combining Equations \ref{eqn:p2_fisher_variance} and \ref{eqn:p2_fisher_expectation}, we obtain
      \begin{equation}
        \boxed{
          \operatorname{Var}\left(y \mid \eta_1, \eta_2\right) = \frac{\partial^2 A\left(\eta_1,\eta_2\right)}{\partial\eta_1^2}.
        }
        \label{eqn:p2_score_variance}
      \end{equation}
      We can now apply Equations \ref{eqn:p2_score_mean} and \ref{eqn:p2_score_variance} to the results from Part \ref{part:p2a}.
      \begin{description}
      \item[Poisson:]
        \begin{align*}
          \mathbb{E}\left[y \mid \eta_1, \eta_2\right] &= \exp\left(\eta_1\right) = \mu \\
          \operatorname{Var}\left(y \mid \eta_1, \eta_2\right) &= \exp\left(\eta_1\right) = \mu.
        \end{align*}
      \item[Gamma:]
        \begin{align*}
          \mathbb{E}\left[y \mid \eta_1, \eta_2\right] &= -\frac{\eta_2 + 1}{\eta_1} = \frac{\alpha}{\beta} \\
          \operatorname{Var}\left(y \mid \eta_1, \eta_2\right) &= \frac{\eta_2 + 1}{\eta_1^2} = \frac{\alpha}{\beta^2}.
        \end{align*}
      \item[Inverse Gaussian:]
        \begin{align*}
          \mathbb{E}\left[y \mid \eta_1, \eta_2\right] &= \sqrt{\frac{\eta_2}{\eta_1}} = \mu \\
          \operatorname{Var}\left(y \mid \eta_1, \eta_2\right) &= \frac{1}{2}\sqrt{\frac{\eta_2}{\eta_1^3}} = \frac{\mu^3}{\delta}.
        \end{align*}
      \end{description}
    \end{description}
  \item The canonical link function is such that $g(\mu) = \eta_1$. Determine the canonical link for each distribution.
    \begin{description}
    \item[Solution:] We can use the results from Part \ref{part:p2a}.
      \begin{description}
      \item[Poisson:] $\displaystyle g(\mu) = \log(\mu)$.
      \item[Gamma:] $\displaystyle g(\mu) = -\frac{\alpha}{\mu} \propto \mu^{-1}$, where $\alpha$ is known.
      \item[Inverse Gaussian:] $\displaystyle g(\mu) = -\frac{\delta}{2\mu^2} \propto \mu^{-2}$, where $\delta$ is known.
      \end{description}
    \end{description}
  \end{enumerate}
\end{enumerate}

\begin{figure}
  \centering
  \includegraphics[width=\textwidth]{p1_pair_plots.pdf}
  \caption{The empirical univariate and joint distributions for the \texttt{chredlin} dataset.}
  \label{fig:p1_pair_plots}
\end{figure}

\end{document}
{ "alphanum_fraction": 0.6131021602, "avg_line_length": 41.1260869565, "ext": "tex", "hexsha": "7589770e4739a981ebefd2fca41aa1255856afde", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "859832aed3ae172abc8b6fbcd2221eb552291a00", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ppham27/stat570", "max_forks_repo_path": "hw1/solutions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "859832aed3ae172abc8b6fbcd2221eb552291a00", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ppham27/stat570", "max_issues_repo_path": "hw1/solutions.tex", "max_line_length": 139, "max_stars_count": 2, "max_stars_repo_head_hexsha": "859832aed3ae172abc8b6fbcd2221eb552291a00", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ppham27/stat570", "max_stars_repo_path": "hw1/solutions.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-25T22:46:11.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-22T11:05:54.000Z", "num_tokens": 8870, "size": 28377 }
\chapter{Parameter Estimation I: Bayes' Box} One of the most important times to use Bayes' rule is when you want to do {\it parameter estimation}. Parameter estimation is a fairly common situation in statistics. In fact, it is possible to interpret almost any problem in statistics as a parameter estimation problem and approach it in this way! Firstly, what is a parameter? One way to think of a parameter is that it is just a fancy term for a quantity or a number that is unknown\footnote{Another use for the term parameter is any quantity that something else depends on. For example, a normal distribution has a mean $\mu$ and a standard deviation $\sigma$ that defines which normal distribution we are talking about. $\mu$ and $\sigma$ are then said to be parameters of the normal distribution.}. For example, how many people are currently in New Zealand? Well, a Google search suggests 4.405 million. But that does not mean there are {\bf exactly} 4,405,000 people. It could be a bit more or a bit less. Maybe it is 4,405,323, or maybe it is 4,403,886. We don't really know. We could call the true number of people in New Zealand right now $\theta$, or we could use some other letter or symbol if we want. When talking about parameter estimation in general we often call the unknown parameter(s) $\theta$, but in specific applications we will call the parameter(s) something else more appropriate for that application. The key is to realise that we can use the Bayes' Box, like in previous chapters. But now, our list of possible hypotheses is a list of possible values for the unknown parameter. For example, a Bayes' Box for the precise number of people in New Zealand might look something like the one in Table~\ref{tab:nz_pop}. \begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline {\bf Possible Hypotheses} & {\tt prior} & {\tt likelihood} & {\tt prior $\times$ likelihood} & {\tt posterior}\\ \hline \ldots & \ldots & \ldots & \ldots & \ldots\\ $\theta = 4404999$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405000$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405001$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405002$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405003$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405004$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405005$ & 0.000001 & \ldots & \ldots & \ldots\\ $\theta = 4405006$ & 0.000001 & \ldots & \ldots & \ldots\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ \hline Totals: & 1 & & \ldots & 1\\ \hline \end{tabular} \caption{\it An example of how a Bayes' Box may be used in a parameter estimation situation.\label{tab:nz_pop}} \end{center} \end{table} There are a few things to note about this Bayes' box. Firstly, it is big, which is why I just put a bunch of ``\ldots''s in there instead of making up numbers. There are lots of possible hypotheses, each one corresponding to a possible value for $\theta$. The prior probabilities I have put in the second column were for illustrative purposes. They needn't necessarily all be equal (although that is often a convenient assumption). All the stuff we've seen in smaller examples of Bayes' rule and/or use of a Bayes' Box still applies here. The likelihoods will still be calculated by seeing how the probability of the data depends on the value of the unknown parameter. You still go through all the same steps, multiplying prior times likelihood and then normalising that to get the posterior probabilities for all of the possibilities listed. 
Note that a set of possible values together with the probabilities is what is commonly termed a {\it probability distribution}. In basic Bayesian problems, like in the introductory chapters, we start with some prior probabilities and update them to get posterior probabilities. In parameter estimation, we start with a prior {\it distribution} for the unknown parameter(s) and update that to get a posterior {\it distribution} for the unknown parameter(s). \begin{framed} {\bf A quantity which has a probability associated with each possible value is traditionally called a ``random variable''. Random variables have probability distributions associated with them. In Bayesian stats, an unknown parameter looks mathematically like a ``random variable'', but I try to avoid the word random itself because it usually has connotations about something that fluctuates or varies. In Bayesian statistics, the prior distribution and posterior distribution only describe our uncertainty. The actual parameter is a single fixed number.} \end{framed} \section{Parameter Estimation: Bus Example} This is a beginning example of parameter estimation from a Bayesian point of view. It shows the various features that are always present in a Bayesian parameter estimation problem. There will be a prior distribution, the likelihood, and the posterior distribution. We will spend a lot of time on this problem but keep in mind that this is just a single example, and certain things about this example (such as the choice of the prior and the likelihood) are specific to this example only, while other things about it are very general and will apply in all parameter estimation problems. You will see and gain experience with different problems in lectures, labs, and assignments. After moving to Auckland, I decided that I would take the bus to work each day. However, I wasn't very confident with the bus system in my new city, so for the first week I just took the first bus that came along and was heading in the right direction, towards the city. In the first week, I caught 5 morning buses. Of these 5 buses, two of them took me to the right place, while three of them took me far from work, leaving me with an extra 20 minute walk. Given this information, I would like to try to infer the proportion of the buses that are ``good'', that would take me right to campus. Let us call this fraction $\theta$ and we will infer $\theta$ using the Bayesian framework. We will start with a prior distribution that describes initial uncertainty about $\theta$ and update this to get the posterior distribution, using the data that 2/5 buses I took were ``good''. First we must think about the meaning of the parameter $\theta$ in our particular problem so we can choose a sensible prior distribution. Since $\theta$ is, in this example, a proportion, we know it cannot be less than 0 or greater than 1. In principle, $\theta$ could be any real value between 0 and 1. To keep things simple {\it to begin with}, we shall make an approximation and assume that the set of possible values for $\theta$ is: \begin{center} $\{0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1\}$. \end{center} This discrete approximation means that we can use a Bayes' Box. The first things to fill out in the Bayes' Box are the possible values and the prior probabilities (the prior distribution). 
For starters, let us assume that before we got the data (two successes out of 5 trials), we were very uncertain about the value of $\theta$, and this can be modelled by using a uniform prior distribution. There are 11 possible values for $\theta$ that are being considered with our discrete approximation, so the probability of each is $1/11 = 0.0909$. The partially complete Bayes' Box is given in Table~\ref{tab:bus_bayes_box1}. Note the new notation that I have put in the column titles. We will use this notation in all of our parameter estimation examples (although the parameter(s) and data may have different symbols when $\theta$ and $x$ respectively are not appropriate). \begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \tt{possible values} & \tt{prior} & \tt{likelihood} & \tt{prior} $\times$ \tt{likelihood} & \tt{posterior}\\ $\theta$ & $p(\theta)$ & $p(x|\theta)$ & $p(\theta)p(x|\theta)$ & $p(\theta|x)$\\ \hline 0 & 0.0909 & & & \\ 0.1 & 0.0909 & & & \\ 0.2 & 0.0909 & & & \\ 0.3 & 0.0909 & & & \\ 0.4 & 0.0909 & & & \\ 0.5 & 0.0909 & & & \\ 0.6 & 0.0909 & & & \\ 0.7 & 0.0909 & & & \\ 0.8 & 0.0909 & & & \\ 0.9 & 0.0909 & & & \\ 1 & 0.0909 & & & \\ \hline Totals & 1 & & & 1\\ \hline \end{tabular} \caption{\it Starting to make a Bayes' Box for the bus problem. This one just has the possible parameter values and the prior distribution. \label{tab:bus_bayes_box1}} \end{center} \end{table} To get the likelihoods, we need to think about the properties of our experiment. In particular, we should imagine that we knew the value of $\theta$ and were trying to predict what experimental outcome (data) would occur. Ultimately, we want to find the probability of our actual data set (2 out of the 5 buses were ``good''), for all of our possible $\theta$ values. Recall that, if there are $N$ repetitions of a ``random experiment'' and the ``success'' probability is $\theta$ at each repetition, then the number of ``successes'' $x$ has a binomial distribution: \begin{eqnarray} p(x|\theta) &=& \left(\begin{array}{c}N \\ x\end{array}\right) \theta^x\left(1-\theta\right)^{N - x}.\label{eq:binomial_likelihood2} \end{eqnarray} where $\left(\begin{array}{c}N \\ x\end{array}\right) = \frac{N!}{x!(N-x)!}$. This is the probability mass function for $x$ (if we imagine $\theta$ to be known), hence the notation $p(x|\theta)$, read as ``the probability distribution for $x$ given $\theta$''. Since there are five trials ($N=5$) in the bus problem, the number of successes $x$ must be one of 0, 1, 2, 3, 4, or 5. If $\theta$ is a high number close to 1, then we would expect the resulting value of the data (number of successes) $x$ to be something high like 4 or 5. Low values for $x$ would still be possible but they would have a small probability. If $\theta$ is a small number, we would expect the data to be 0, 1, or 2, with less probability for more than 2 successes. This is just saying in words what is written precisely in Equation~\ref{eq:binomial_likelihood}. The probability distribution for the data $x$ is plotted in Figure~\ref{fig:binomial} for three illustrative values of the parameter $\theta$. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.6]{Figures/binomial.pdf} \caption{\it The binomial probability distribution for the data $x$, for three different values of the parameter $\theta$. If $\theta$ is low then we would expect to see lower values for the data. If $\theta$ is high then high values are more probable (but all values from 0 to 5 inclusive are still possible). 
The actual observed value of the data was $x=2$. If we focus only on the values of the curves at $x=2$, then the heights of the curves give the likelihood values for these three illustrative values of $\theta$.
\label{fig:binomial}}
\end{center}
\end{figure}

To obtain the actual likelihood values that go into the Bayes' Box, we can simply substitute in the known values $N=5$ and $x=2$:
\begin{eqnarray}
P(x=2|\theta) &=& \left(\begin{array}{c}5 \\ 2\end{array}\right)
\theta^2\left(1-\theta\right)^{5 - 2}\\
&=& 10\times\theta^2\left(1-\theta\right)^3.\label{eq:binomial_likelihood3}
\end{eqnarray}
The resulting equation depends on $\theta$ only! We can go through the list of $\theta$ values and get a numerical answer for the likelihood $P(x=2|\theta)$, which is what we need for the Bayes' Box. The final steps are, as usual, to multiply the prior by the likelihood and then normalise that to get the posterior distribution. The completed Bayes' Box is given in Table~\ref{tab:bus_bayes_box2}.

\begin{table}[!ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\tt{possible values} & \tt{prior} & \tt{likelihood} & \tt{prior} $\times$ \tt{likelihood} & \tt{posterior}\\
$\theta$ & $p(\theta)$ & $p(x|\theta)$ & $p(\theta)p(x|\theta)$ & $p(\theta|x)$\\
\hline
0 & 0.0909 & 0 & 0 & 0\\
0.1 & 0.0909 & 0.0729 & 0.0066 & 0.0437\\
0.2 & 0.0909 & 0.2048 & 0.0186 & 0.1229\\
0.3 & 0.0909 & 0.3087 & 0.0281 & 0.1852\\
0.4 & 0.0909 & 0.3456 & 0.0314 & 0.2074\\
0.5 & 0.0909 & 0.3125 & 0.0284 & 0.1875\\
0.6 & 0.0909 & 0.2304 & 0.0209 & 0.1383\\
0.7 & 0.0909 & 0.1323 & 0.0120 & 0.0794\\
0.8 & 0.0909 & 0.0512 & 0.0047 & 0.0307\\
0.9 & 0.0909 & 0.0081 & 0.0007 & 0.0049\\
1 & 0.0909 & 0 & 0 & 0\\
\hline
Totals & 1 & & 0.1515 & 1\\
\hline
\end{tabular}
\caption{\it The completed Bayes' Box for the bus problem (using a binomial distribution to obtain the likelihood).
\label{tab:bus_bayes_box2}}
\end{center}
\end{table}

There are a few interesting values in the likelihood column that should help you to understand the concept of likelihood a bit better. Look at the likelihood for $\theta = 0$: it is zero. What does this mean? It means that if we imagine $\theta=0$ is the true solution, the probability of obtaining the data that we got ($x=2$ successes) would be zero. That makes sense! If $\theta=0$, it means none of the buses are the ``good'' buses, so how could I have caught a good bus twice? The probability of that is zero. The likelihood for $\theta=1$ is also zero for similar reasons. If all of the buses are good, then having 2/5 successes is impossible. You would get 5/5 with 100\% certainty. So $P(x=2|\theta=1) = 0$.

The likelihood is highest for $\theta = 0.4$, which just so happens to equal 2/5. This $\theta=0.4$ predicted the data best. It does not necessarily mean that $\theta=0.4$ is the most probable value. That depends on the prior as well (but with a uniform prior, it does end up being that way. As you can see in the posterior distribution column, $\theta=0.4$ has the highest probability in this case).

\subsection{Sampling Distribution and Likelihood}
As we study more examples of parameter estimation, you might notice that we always find the likelihood by specifying a probability distribution for the data given the parameters $p(x|\theta)$, and then substituting in the actual observed data (Equations~\ref{eq:binomial_likelihood2} and~\ref{eq:binomial_likelihood3}). Technically, only the version with the actual data set substituted in is called the likelihood.
The probability distribution $p(x|\theta)$, which gives the probability of other data sets that did not occur (as well as the one that did), is sometimes called the {\it sampling distribution}. At times, I will distinguish between the sampling distribution and the likelihood, and at other times I might just use the word likelihood for both concepts. \subsection{What is the ``Data''?} Even though this example is meant to be introductory, there is a subtlety that has been swept under the rug. Notice that our data consisted of the fact that we got 2/5 successes in the experiment. When we worked out the likelihood, we were considering the probability of getting $x=2$, but we didn't have a probability for $N=5$. In principle, we could treat $x$ and $N$ as two separate data sets. We could first update from the prior to the posterior given $N=5$, and then update again to take into account $x$ as well as $N$. However, the first update would be a bit weird. Why would knowing the number of trials tell you anything about the success probability? Effectively, what we have done in our analysis is assume that $N=5$ is prior information that lurks in the background the whole time. Therefore our uniform prior for $\theta$ already ``knows'' that $N=5$, so we didn't have to consider $P(N=5|\theta)$ in the likelihood. This subtlety usually doesn't matter much. \section{Prediction in the Bus Problem}\label{sec:prediction_bus_problem} We have now seen how to use information (data) to update from a prior distribution to a posterior distribution when the set of possible parameter values is discrete. The posterior distribution is the complete answer to the problem. It tells us exactly how strongly we should believe in the various possible solutions (possible values for the unknown parameter). However, there are other things we might want to do with this information. Predicting the future is one! It's fun, but risky. Here we will look at how prediction is done using the Bayesian framework, continuing with the bus example. To be concrete, we are interested in the following question: {\it what is the probability that I will catch the right bus tomorrow?}. This is like trying to predict the result of a future experiment. In the Bayesian framework, our predictions are always in the form of probabilities or (later) probability distributions. They are usually calculated in three stages. First, you pretend you {\it actually know} the true value of the parameters, and calculate the probability based on that assumption. Then, you do this for all possible values of the parameter $\theta$ (alternatively, you can calculate the probability as a function of $\theta$). Finally, you combine all of these probabilities in a particular way to get one final probability which tells you how confident you are of your prediction. Suppose we knew the true value of $\theta$ was 0.3. Then, we would know the probability of catching the right bus tomorrow is 0.3. If we knew the true value of $\theta$ was 0.4, we would say the probability of catching the right bus tomorrow is 0.4. The problem is, we don't know what the true value is. We only have the posterior distribution. Luckily, the sum rule of probability (combined with the product rule) can help us out. We are interested in whether I will get the good bus tomorrow. There are 11 different ways that can happen. Either $\theta=0$ and I get the good bus, or $\theta=0.1$ and I get the good bus, or $\theta=0.2$ and I get the good bus, and so on. These 11 ways are all mutually exclusive. 
That is, only one of them can be true (since $\theta$ is actually just a single number). Mathematically, we can obtain the posterior probability of catching the good bus tomorrow using the sum rule:
\begin{eqnarray}
P(\textnormal{good bus tomorrow} | x) &=& \sum_{\theta} p(\theta | x)P(\textnormal{good bus tomorrow} | \theta, x)\\
&=& \sum_{\theta} p(\theta | x)\theta
\end{eqnarray}
This says that the total probability for a good bus tomorrow (given the data, i.e. {\it using the posterior distribution} and not the prior distribution) is given by going through each possible $\theta$ value, working out the probability {\it assuming the $\theta$ value you are considering is true}, multiplying by the probability (given the data) this $\theta$ value is actually true, and summing. In this particular problem, because $P(\textnormal{good bus tomorrow} | \theta, x) = \theta$, it just so happens that the probability for tomorrow is the expectation value of $\theta$ using the posterior distribution. To three decimal places, the result for the probability tomorrow is 0.429. Interestingly, this is not equal to $2/5 = 0.4$.

In practice, these kinds of calculations are usually done in a computer. The R code for computing the Bayes' Box and the probability for tomorrow is given below. This is very much like many of the problems we will work on in labs.
\begin{minted}[mathescape, numbersep=5pt, gobble=0, frame=single, framesep=2mm, fontsize=\small]{r}
# Make a vector of possibilities (first column of the Bayes' Box)
theta = seq(0, 1, by=0.1)

# Corresponding vector of prior probabilities
# (second column of the Bayes' Box)
prior = rep(1/11,11)

# Likelihood. Notice use of dbinom() rather than formula
# because R conveniently knows a lot of
# standard probability distributions already
lik = dbinom(2,5,theta)

# Prior times likelihood, then normalise to get posterior
h = prior*lik
post = h/sum(h)

# Probability for good bus tomorrow (prediction!)
# This happens to be the same as the posterior expectation of theta
# *in this particular problem* because the probability of a
# good bus tomorrow GIVEN theta is just theta.
prob_tomorrow = sum(theta*post)
\end{minted}

\section{Bayes' Rule, Parameter Estimation Version}
Mathematically, what we did to calculate the posterior distribution was to take the prior distribution as a whole (the whole second column) and multiply it by the likelihood (the whole third column) to get the unnormalised posterior, then normalise to get the final posterior distribution. This can be written as follows, which we will call the ``parameter estimation'' version of Bayes' rule. There are three ways to write it:
\begin{eqnarray}
p(\theta | x) &=& \frac{p(\theta)p(x|\theta)}{p(x)}\\
p(\theta | x) &\propto& p(\theta)p(x|\theta)\\
{\tt posterior} &\propto& {\tt prior}\times{\tt likelihood}.\label{eq:bayes_pe}
\end{eqnarray}
Writing the equations in these ways is most useful when you can write the prior $p(\theta)$ and the likelihood $p(x|\theta)$ as formulas (telling you how the values depend on $\theta$ as you go through the rows). Then you can get the equation for the posterior distribution (whether it is a discrete distribution, or a continuous one, in which case $p(\theta)$ and $p(\theta|x)$ are probability densities). We will do this in the next chapter. The notation in Equation~\ref{eq:bayes_pe} is very simplified and concise, but is a popular kind of notation in Bayesian work.
For an explanation of the relationship between this notation and other common choices (such as $P(X=x)$ for a discrete distribution or $f(x)$ for a density), see Appendix~\ref{sec:probability}. \chapter{Parameter Estimation: Analytical Methods} Analytical methods are those which can be carried out with a pen and paper, or the ``old school'' way before we all started using computers. There are some problems in Bayesian statistics that can be solved in this way, and we will see a few of them in this course. For an analytical solution to be possible, the maths usually has to work out nicely, and that doesn't always happen, so the techniques shown here don't {\it always} work. When they do -- great! When they don't, that's what MCMC (and JAGS) is for! Let's look at the {\it binomial likelihood} problem again, with the familiar bus example. Out of $N=5$ attempts at a ``repeatable'' experiment, there were $x=2$ successes. From this, we want to infer the value of $\theta$, the success probability that applied on each trial, or the overall fraction of buses that are good. Because of its meaning, we know with 100\% certainty that $\theta$ must be between 0 and 1 (inclusive). Recall that, if we knew the value of $\theta$ and wanted to predict the data $x$ (regarding $N$ as being known in advance), then we would use the binomial distribution: \begin{eqnarray} p(x|\theta) &=& \left(\begin{array}{c}N \\ x\end{array}\right) \theta^x\left(1-\theta\right)^{N - x}.\label{eq:binomial_likelihood} \end{eqnarray} Let's use a uniform prior for $\theta$, but instead of making the discrete approximation and using a Bayes' Box, let's keep the continuous set of possibilities, that $\theta$ can be any real number between 0 and 1. Because the set of possibilities is continuous, the prior and the posterior for $\theta$ will both be probability {\it densities}. If we tried to do a Bayes' Box now, it would have infinitely many rows! The equation for our prior, a uniform probability density between 0 and 1, is: \begin{eqnarray} p(\theta) &=& \left\{ \begin{array}{lr} 1, & 0 \leq \theta \leq 1\\ 0, & \textnormal{otherwise} \end{array} \right.\label{eq:uniform} \end{eqnarray} If we keep in mind that $\theta$ is between 0 and 1, and therefore remember at all times that we are restricting our attention to $\theta \in [0, 1]$, we can write the uniform prior much more simply as: \begin{eqnarray} p(\theta) &=& 1. \end{eqnarray} If you find the Bayes' Box way of thinking easier to follow than the mathematics here, you can imagine we are making a Bayes' Box like in Table~\ref{tab:bus_bayes_box1}, but with an ``infinite'' number of rows, and the equation for the prior tells us how the prior probability varies as a function of $\theta$ as we go down through the rows (since the prior is uniform, the probabilities don't vary at all). To find the posterior probability density for $\theta$, we use the ``parameter estimation'' form of Bayes' rule: \begin{eqnarray} \textnormal{posterior} \propto \textnormal{prior} \times \textnormal{likelihood}\\ p(\theta|x) \propto p(\theta)p(x|\theta). \end{eqnarray} We already wrote down the equations for the prior and the likelihood, so we just need to multiply them. \begin{eqnarray} p(\theta|x) &\propto& p(\theta)p(x|\theta)\\ &\propto& 1 \times \left(\begin{array}{c}N \\ x\end{array}\right) \theta^x\left(1-\theta\right)^{N - x} \end{eqnarray} Since we are using the abbreviated form of the prior, we must remember this equation only applies for $\theta \in [0, 1]$. 
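If you would like a numerical check of this continuous version, the following R snippet (an added illustration, not part of the original analysis) approximates the continuum of $\theta$ values with a very fine grid, which is just a Bayes' Box with a large number of rows, and normalises the prior times the likelihood so that the result integrates to 1:
\begin{minted}[mathescape, numbersep=5pt, gobble=0, frame=single, framesep=2mm, fontsize=\small]{r}
# Fine-grid approximation to the continuous Bayes' rule (illustration only)
theta = seq(0, 1, length.out=1001)
d_theta = theta[2] - theta[1]      # grid spacing

prior = rep(1, length(theta))      # uniform prior density, p(theta) = 1
lik = dbinom(2, 5, theta)          # binomial likelihood with x=2, N=5

# Prior times likelihood, then normalise so the density integrates to 1
h = prior*lik
post = h/(sum(h)*d_theta)

# Plot the posterior density as a function of theta
plot(theta, post, type="l", xlab="theta", ylab="Posterior Density")
\end{minted}
Up to the grid approximation, the resulting curve is the posterior density we are about to derive analytically.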
To simplify the maths, there are some useful tricks you can use a lot of the time when working things out analytically. Notice that the ``parameter estimation'' form of Bayes' rule has a proportional sign in it, not an equals sign. That's because the prior times the likelihood can't actually be the posterior distribution because it is not normalised. The sum or integral is not 1. However, the equation still gives the correct shape of the posterior probability density function (the way it varies as a function of $\theta$). This is helpful because you can save ink. If there are some constant factors in your expression for the posterior that don't involve the parameter (in this case, $\theta$), you can ignore them. The proportional sign will take care of them. In this case, it means we can forget about the pesky ``$N$ choose $x$'' term, and just write: \begin{eqnarray} p(\theta|x) &\propto& \theta^x\left(1-\theta\right)^{N - x}\\ &\propto& \theta^2\left(1-\theta\right)^3. \end{eqnarray} The final step I included was to substitute in the actual values of $N$ and $x$ instead of leaving the symbols there. That's it! We have the correct shape of the posterior distribution. We can use this to plot the posterior, as you can see in Figure~\ref{fig:bus_inference}. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.6]{Figures/bus_inference.pdf} \caption{\it The prior and the posterior for $\theta$ in the bus problem, given that we had 2/5 successes. The prior is just a uniform density and this is plotted as a flat line, describing the fact that $\theta$ can be anywhere between 0 and 1 and we don't have much of an idea. After getting the data, the distribution changes to the posterior which is peaked at 0.4, although there is still a pretty wide range of uncertainty.\label{fig:bus_inference}} \end{center} \end{figure} \section{``$\sim$'' Notation} While it is very helpful to know the full equations for different kinds of probability distributions (both discrete and continuous), it is useful to be able to communicate about probability distributions in an easier manner. There is a good notation for this which we will sometimes use in STATS 331. If we want to communicate about our above analysis, and someone wanted to know what prior distribution we used, we could do several things. We could say ``the prior for $\theta$ was uniform between 0 and 1'', or we could give the formula for the prior distribution (Equation~\ref{eq:uniform}). However, a convenient shorthand in common use is to simply write: \begin{eqnarray} \theta \sim \textnormal{Uniform}(0, 1) \end{eqnarray} or, even more concisely: \begin{eqnarray} \theta \sim U(0, 1). \end{eqnarray} This notation conserves ink, and is good for quick communication. It is also very similar to the notation used in JAGS, which will be introduced in later chapters. We can also write the binomial likelihood (which we used for the bus problem) in this notation, instead of writing out the full equation (Equation~\ref{eq:binomial_likelihood}). We can write: \begin{eqnarray} x | \theta \sim \textnormal{Binomial}(N, \theta) \end{eqnarray} This says that if we knew the value of $\theta$, $x$ would have a binomial distribution with $N$ trials and success probability $\theta$. We can also make this one more concise: \begin{eqnarray} x \sim \textnormal{Bin}(N, \theta) \end{eqnarray} The differences here are that ``Binomial'' has been shortened to ``Bin'' and the ``given $\theta$'' part has been left out. 
However, we see that there is a $\theta$ present on the right hand side, so the ``given $\theta$'' must be understood implicitly.

\section{The Effect of Different Priors}
We decided to do this problem with a uniform prior, because it is the obvious first choice to describe ``prior ignorance''. However, in principle, the prior could be different. This will change the posterior distribution, and hence the conclusions. This isn't a problem of Bayesian analysis, but a feature. Data on its own doesn't tell us exactly what we should believe. We must combine the data with all our other prior knowledge (i.e. put the data in context) to arrive at reasoned conclusions. In this section we will look at the effect of different priors on the results, again focusing on the bus problem for continuity. Specifically, we will look at three different priors: the uniform one that we already used, and two other priors discussed below.

\subsection{Prior 2: Emphasising the Extremes}
One possible criticism of the uniform prior is that there is not much probability given to extreme solutions. For example, according to the Uniform(0, 1) prior, the prior probability that $\theta$ is between 0 and 0.1 is only $\int_0^{0.1} 1 \, d\theta = 0.1$. But, depending on the situation, we might think values near zero should be more plausible\footnote{Here's another parameter that is between 0 and 1: the proportion of households in New Zealand that keep a Macaw as a pet (call that $\phi$). I hope this number is low (it is very difficult to take responsible care of such a smart bird). I also think it probably is low. I would definitely object to a prior that implied $P(\phi < 0.1) = 0.1$. I would want a prior that implied something like $P(\phi < 0.1) = 0.999999$.}. One possible choice of prior distribution that assigns more probability to the extreme values (close to 0 or 1) is:
\begin{eqnarray}
p(\theta) \propto \theta^{-\frac{1}{2}}(1 - \theta)^{-\frac{1}{2}}.\label{eq:prior2}
\end{eqnarray}

\subsection{Prior 3: Already Being Well Informed}
Here's another scenario that we might want to describe in our prior. Suppose that, before getting this data, you weren't ignorant at all, but already had a lot of information about the value of the parameter. Say that we already had a lot of information which suggested the value of $\theta$ was probably close to 0.5. This could be modelled by the following choice of prior:
\begin{eqnarray}
p(\theta) \propto \theta^{100}(1 - \theta)^{100}.\label{eq:prior3}
\end{eqnarray}
The three priors are plotted in Figure~\ref{fig:three_priors} as dotted lines. The three corresponding posterior distributions are plotted as solid lines. The posteriors were computed by multiplying the three priors by the likelihood and normalising. The blue curves correspond to the uniform prior we used before, the red curves use the ``emphasising the extremes'' prior, and the green curves use the ``informative'' prior which assumes that $\theta$ is known to be close to 0.5.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.6]{Figures/three_priors.pdf}
\caption{\it Three different priors (dotted lines) and the corresponding three posteriors (solid lines) given the bus example data. See the text for discussion of these results.\label{fig:three_priors}}
\end{center}
\end{figure}
There are a few interesting things to notice about this plot. Firstly, the posterior distributions are basically the same for the red and blue priors (the uniform prior and the ``emphasising the extremes'' prior).
The main difference in the posterior is, as you would expect, that the extremes are emphasised a little more. If something is more plausible before you get the data, it's more plausible afterwards as well. The big difference is with the informative prior. Here, we were already pretty confident that $\theta$ was close to 0.5, and the data (since it's not very much data) hasn't given us any reason to doubt that, so we still think $\theta$ is close to 0.5. Since we already knew that $\theta$ was close to 0.5, the data are acting only to increase the precision of our estimate (i.e. make the posterior distribution narrower). But since we had so much prior information, the data aren't providing much ``extra'' information, and the posterior looks basically the same as the prior.

\subsection{The Beta Distribution}
The three priors we have used are all examples of {\it beta} distributions. The beta distributions are a family of probability distributions (like the normal, Poisson, binomial, and so on) which can be applied to continuous random variables known to be between 0 and 1. The general form of a beta distribution (here written for a variable $x$) is:
\begin{eqnarray}
p(x|\alpha, \beta) \propto x^{\alpha - 1}(1 - x)^{\beta - 1}\label{eq:beta}.
\end{eqnarray}
The quantities $\alpha$ and $\beta$ are two parameters that control the shape of the beta distribution. Since we know the variable $x$ is between 0 and 1 with probability 1, the normalisation constant could be found by doing an integral. Then you could write the probability distribution with an equals sign instead of a proportional sign,
\begin{eqnarray}
p(x|\alpha, \beta) &=& \frac{x^{\alpha - 1}(1 - x)^{\beta - 1}} {\int_0^1 x^{\alpha - 1}(1 - x)^{\beta - 1} \, dx}\\
&=& \frac{x^{\alpha - 1}(1 - x)^{\beta - 1}} {B(\alpha, \beta)}.
\end{eqnarray}
where $B(\alpha, \beta)$ (called the ``beta function'') is defined (usefully\ldots) as the result of doing that very integral (it can be related to factorials too, if you're interested). Thankfully, we can get away with the ``proportional'' version most of the time. In ``$\sim$'' notation the beta distribution is written as:
\begin{eqnarray}
x|\alpha, \beta &\sim& \textnormal{Beta}(\alpha, \beta).
\end{eqnarray}
Again, the ``given $\alpha$ and $\beta$'' can be dropped. It is implicit because they appear on the right hand side. By identifying the terms of Equation~\ref{eq:beta} with the form of our three priors (Equations~\ref{eq:uniform},~\ref{eq:prior2} and~\ref{eq:prior3}), we see that our three priors can be written in ``$\sim$'' notation like this:
\begin{table}[!ht]\begin{center}
\begin{tabular}{ll}
Prior 1: & $\theta \sim \textnormal{Beta}(1, 1)$\\
Prior 2: & $\theta \sim \textnormal{Beta}\left(\frac{1}{2}, \frac{1}{2}\right)$\\
Prior 3: & $\theta \sim \textnormal{Beta}(101, 101)$\\
\end{tabular}\end{center}
\end{table}
When you work out the posterior distributions analytically and then compare them to the formula for the beta distribution, you can see that the three posteriors are also beta distributions!
Specifically, you get: \begin{table}[!ht]\begin{center} \begin{tabular}{ll} Posterior 1: & $\theta \sim \textnormal{Beta}(3, 4)$\\ Posterior 2: & $\theta \sim \textnormal{Beta}(2.5, 3.5)$\\ Posterior 3: & $\theta \sim \textnormal{Beta}(103, 104)$ \end{tabular}\end{center} \end{table} This is ``magic'' that is made possible by the mathematical form of the beta prior and the binomial likelihood\footnote{The technical term for this magic is that the beta distribution is a {\it conjugate prior} for the binomial likelihood.}. It is not always possible to do this. We can also derive the general solution for the posterior for $\theta$ when the prior is a Beta$(\alpha, \beta)$ distribution, the likelihood is a binomial distribution, and $x$ successes were observed out of $N$ trials. The posterior is: \begin{eqnarray} p(\theta | x) &\propto& p(\theta)p(x|\theta)\\ &\propto& \theta^{\alpha - 1}(1-\theta)^{\beta-1} \times \theta^x (1-\theta)^{N-x}\\ &=& \theta^{\alpha + x - 1}(1 - \theta)^{\beta + N - x - 1} \end{eqnarray} which can be recognised as a Beta$(\alpha+x, \beta+N-x)$ distribution. Remember that in this particular problem, the probability of a success tomorrow is simply the expectation value (mean) of the posterior distribution for $\theta$. We can look up (or derive) the formula for the mean of a beta distribution and find that if $x \sim \textnormal{Beta}(\alpha, \beta)$ then $\mathds{E}(x) = \alpha/(\alpha + \beta)$. Applying this to the three posterior distributions gives: \begin{table}[!ht]\begin{center} \begin{tabular}{lll} $P(\textnormal{good bus tomorrow}|x) = 3/7 $ & $\approx 0.429$ & (using prior 1)\\ $P(\textnormal{good bus tomorrow}|x) = 2.5/6$ & $\approx 0.417$ & (using prior 2)\\ $P(\textnormal{good bus tomorrow}|x) = 103/207$ & $\approx 0.498$ & (using prior 3) \end{tabular}\end{center} \end{table} The result for Prior 1 is Laplace's infamous ``rule of succession'' which I will discuss a little bit in lectures. \subsection{A Lot of Data} As shown above, the choice of prior distribution has an impact on the conclusions. Sometimes it has a big impact (the results using prior 3 were pretty different to the results from priors 1 and 2), and sometimes not much impact (e.g. the results from priors 1 and 2 were pretty similar). There is a common phenomenon that happens when there is a lot of data: the prior tends not to matter so much. Imagine we did a much bigger version of the bus experiment with $N=1000$ trials, which resulted in $x=500$ successes. Then the posterior distributions corresponding to the three different priors are all very similar (Figure~\ref{fig:lots_of_data}). \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.6]{Figures/lots_of_data.pdf} \caption{\it When you have a lot of data, the results are less sensitive to the choice of prior distribution. Note that we have zoomed in and are only looking around $\theta=0.5$: these posterior distributions are quite narrow because there is now a lot more information about $\theta$. The red and blue posteriors (based on priors 1 and 2) are so similar that they overlap and look like one purple curve.\label{fig:lots_of_data}} \end{center} \end{figure} This is reassuring. Note, however, that this only occurs because the three analyses used the same likelihood. If three people have different prior distributions for something {\it and} they can't agree on what the experiment even means, there is no guarantee they will end up agreeing, even if there's a large amount of data! 
Remember though, that when the results {\it are} sensitive to the choice of prior, that is not a problem with the Bayesian approach, but rather an important warning message: the data aren't very informative! Then, the options are: i) think really hard about your prior distribution and be careful when deciding what it should be, and ii) get more or better data! %\section{Poisson Example} %We have just seen an example of how to calculate the posterior distribution for %a single parameter, given some data. Now let's consider another example %involving different probability distributions. The Poisson distribution is %commonly used for so-called ``rare events''. For example, how many ... %The probability mass function for the Poisson distribution is: %\begin{eqnarray} %p(x | \lambda) &=& \frac{\lambda^x e^{-\lambda}}{x!} %\end{eqnarray}
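To verify the posterior beta distributions and predictive probabilities quoted above, the conjugate update can be done in a few lines of R; this snippet is an added illustration rather than part of the original text:
\begin{minted}[mathescape, numbersep=5pt, gobble=0, frame=single, framesep=2mm, fontsize=\small]{r}
# Conjugate beta-binomial updating for the three priors (illustration)
x = 2
N = 5
priors = list(c(1, 1), c(0.5, 0.5), c(101, 101))  # Beta(alpha, beta) parameters

for(p in priors)
{
    alpha_post = p[1] + x
    beta_post  = p[2] + N - x
    # Posterior mean = probability of a good bus tomorrow
    cat("Posterior: Beta(", alpha_post, ",", beta_post, "),",
        "P(good bus tomorrow) =",
        round(alpha_post/(alpha_post + beta_post), 3), "\n")
}
\end{minted}
Changing {\tt x} and {\tt N} to 500 and 1000 shows how the posteriors (and the predictive probabilities) from the three priors become very similar when there is a lot of data.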
\documentclass[11pt,a4paper,english]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{changepage}
\usepackage{pdflscape}
\usepackage{natbib}
\setlength{\bibsep}{0.0pt}
\usepackage{hyperref}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{multirow}
\usepackage[doublespacing]{setspace}
\usepackage[english]{babel}
\usepackage{microtype}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{longtable}
\usepackage{soul,color}
\usepackage{authblk}
\usepackage{array}
\usepackage{subcaption}
\usepackage{afterpage}
\usepackage[bottom]{footmisc}
\newtheorem{prop}{Proposition}
\newtheorem{cor}{Corollary}
\setlength{\abovedisplayskip}{2.5pt}
\setlength{\belowdisplayskip}{2.5pt}
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\begin{document}

\hypertarget{introduction}{%
\subsection{Introduction}\label{introduction}}

The basic Solow model assumed the following:

\begin{itemize}
\tightlist
\item Constant returns to scale production function
\item Constant population growth \(n\)
\item Constant rate of technological progress \(g\)
\item Fixed saving rate \(s\)
\end{itemize}

Although it is a useful model, the choice of \(s\) is arbitrary and \textbf{is not derived from any microeconomic foundation}. In particular, microeconomic theory relates present and future consumption decisions through the interest rate. The Ramsey model amends this shortcoming: the decisions of all agents are microeconomically founded. In particular, we assume:

\textbf{Ramsey assumptions}

\begin{itemize}
\tightlist
\item Large number of identical firms operating the same technology:
\begin{itemize}
\tightlist
\item Rent capital and hire labour
\end{itemize}
\item Large number of families:
\begin{itemize}
\tightlist
\item Consume, supply labour, own and lend capital
\end{itemize}
\item Constant returns to scale production function
\end{itemize}

Agents (families) optimally decide consumption and savings, depending on the interest rate. Hence, the \textbf{saving rate} is definitely no longer exogenous and \textbf{need not be constant}.

\begin{center}\rule{0.5\linewidth}{\linethickness}\end{center}

\hypertarget{references}{%
\subsection{References}\label{references}}

\href{https://doi.org/10.2307/2224098}{Ramsey (1928)}

\href{https://doi.org/10.2307/2295827}{Cass (1965)}

Koopmans (1965)

Romer, \emph{Advanced Macroeconomics}: Chapter 2\footnote{Romer derives all the results in continuous time; we use discrete time.}

\hypertarget{households}{%
\section{Households}\label{households}}

The economy is populated by a large number of identical households. Each household is composed of one individual who lives indefinitely.
\hypertarget{utility-representation}{%
\subsection{Utility representation}\label{utility-representation}}

Households exhibit the following utility representation:

\[U = \sum_{t=0}^{\infty} \beta^{t} u(c_{t}).\]

\textbf{Assumptions on \(u( c )\)}

The utility function satisfies the following properties:

\begin{itemize}
\tightlist
\item \(u( c )\) is a continuous function defined over \([0,+\infty)\)
\item It has continuous derivatives of any required order defined over \((0,+\infty)\)
\item In particular: \(u^{\prime}( c )>0\) and \(u^{\prime \prime}( c )<0\)
\item We also impose the Inada conditions: \(\lim_{c \rightarrow 0}u^{\prime}( c ) = +\infty\) and \(\lim_{c \rightarrow +\infty}u^{\prime}( c ) = 0\)
\end{itemize}

\hypertarget{households-budget-constraint}{%
\subsection{Household's budget constraint}\label{households-budget-constraint}}

The household receives labour income \(w\), which it uses to save \(k\) and consume \(c\). For simplicity, savings take the form of capital (instead of assets). Households lend capital to firms, obtaining a real interest rate \(r\). However, capital is subject to a constant depreciation rate \(\delta\). Therefore, each period of time, households face the following budget constraint:

\[c_{t} + k_{t+1} = w_{t} + r_{t} k_{t} + (1-\delta) k_{t}.\]

The left-hand side represents expenditures during period \(t\):

\begin{itemize}
\tightlist
\item Consumption
\item Saving
\end{itemize}

The right-hand side includes all sources of income:

\begin{itemize}
\tightlist
\item Wages
\item Interests
\item Remaining capital after depreciation
\begin{itemize}
\tightlist
\item Effectively, households can \emph{eat} capital
\end{itemize}
\end{itemize}

\textbf{Note:} The budget constraint could have included \emph{dividends} \(d_{t}\) on the right-hand side. However, as we shall see, firms operate in perfect competition, and make zero profits.

\hypertarget{solving-the-households-problem}{%
\subsection{Solving the household's problem}\label{solving-the-households-problem}}

First, we assume that households have perfect foresight. This means that a household is able to perfectly forecast all the future values of the relevant variables when deciding. For instance, at time \(t\) the household is able to correctly compute \(r_{t+1}\) and \(w_{t+1}.\)

\textbf{Assumption H1:} Households have perfect foresight.

In this problem, we want to obtain the path of \(c_{t}\). Note that the problem is infinite, in the sense that we need to determine \(c_{t}\) for each and every period of time. Instead, we can try to solve for the optimal trajectory of \(c_{t}\), that is, how it evolves over time: \(c_{t+1} = G(c_{t})\).

The optimisation problem reads:

\begin{eqnarray}
\max_{c_{t}, k_{t+1}} & \sum_{t=0}^{\infty} \beta^{t} u(c_{t}) \\\
c_{t} + k_{t+1} & = w_{t} + r_{t} k_{t} + (1- \delta) k_{t}.
\end{eqnarray}

First, start off by writing the Lagrangian corresponding to the problem:

\[\mathcal{L} = \sum_{t=0}^{\infty} \beta^t u(c_{t}) + \sum_{t=0}^{\infty} \lambda_{t}(w_{t} + r_{t}k_{t} + (1-\delta)k_{t} - c_{t} - k_{t+1}).\]

At time \(t\), the household has \emph{two} decisions to make:

\begin{itemize}
\tightlist
\item How much to consume at \(t\): \(c_{t}\)
\item How much to save at \(t\): \(k_{t+1}\)
\end{itemize}

Therefore, we derive the Lagrangian function with respect to \(c_{t}\) and \(k_{t+1}\).
\[\frac{\partial \mathcal{L}}{\partial c_{t}} = \beta^{t} u^{\prime}(c_{t}) - \lambda_{t}\]

\[\frac{\partial \mathcal{L}}{\partial k_{t+1}} = - \lambda_{t} + \lambda_{t+1}(r_{t+1}+(1-\delta))\]

Set both equal to zero to obtain the maximum, and combine the equations to get:

\[u^{\prime}(c_{t}) = \beta u^{\prime}(c_{t+1})(r_{t+1}+1-\delta).\]

This condition is called the \textbf{Euler equation}. It tells us the optimal behaviour of the household. Rearranging the expression a little bit, we obtain:

\[\frac{u^{\prime}(c_{t})}{u^{\prime}(c_{t+1})} = \beta (r_{t+1} + 1 - \delta).\]

Therefore, if the interest rate were to increase, the household would optimally postpone consumption: that is, decrease \(c_{t}\) and increase \(c_{t+1}.\)

\textbf{Note:} Remember that \(u^{\prime \prime} < 0\): marginal utilities are higher the lower consumption is. Slightly more complex relationships can arise if population grows or there is technological progress, see Romer, Chapter 2.

\hypertarget{the-euler-equation}{%
\subsubsection{The Euler equation}\label{the-euler-equation}}

The Euler equation appears often in optimisation problems: both using discrete and continuous time. The discrete-time version is relatively easier to interpret and obtain. In our case, the Euler equation relates the present and future marginal utilities of consumption.

Imagine that the household decreased the amount consumed today, \(c_{t}\), by an infinitesimally small amount \(\Delta_{c_{t}}\). This causes a utility loss of \(\Delta_{c_{t}} u^{\prime}(c_{t})\). Saving \(\Delta_{c_{t}}\) until tomorrow and consuming all the proceeds generates a utility gain of \(\Delta_{c_{t}}(r_{t+1} + 1 - \delta) u^{\prime}(c_{t+1})\). This gain \emph{must} be, of course, discounted using the rate \(\beta\) to compare present-day equivalents. Since the household is optimising, the loss and the gain must be equal; otherwise, there would be an alternative consumption level that would maximise utility. Hence, putting everything together: \(\Delta_{c_{t}} u^{\prime}(c_{t}) = \beta \Delta_{c_{t}} (r_{t+1} + 1 - \delta) u^{\prime}(c_{t+1}).\) Cancel \(\Delta_{c_{t}} > 0\) on both sides to obtain the Euler equation.

Alternatively, we can interpret the Euler equation as stating that it is not optimal to slightly deviate from the optimal path, consuming slightly less for instance, and later returning to the optimal path.

\textbf{Remark:} the scheme we are putting in place here does not modify the intertemporal budget constraint because all the additional proceeds from saving are consumed.

\hypertarget{the-transversality-condition}{%
\subsubsection{The Transversality Condition}\label{the-transversality-condition}}

The Euler equation is \emph{not} enough to fully determine the optimal sequence of consumption and savings. For the moment, trajectories where capital or consumption grow without bound are feasible, but these should not be optimal.

\begin{itemize}
\tightlist
\item First, consider the case in which the economy continuously accumulates capital. In that case, consumption must decrease towards zero. Since we have assumed that \(\lim_{c \rightarrow 0}u^{\prime}( c )=+\infty\) (Inada condition), a slight increase in consumption raises utility by a large amount, that is, the marginal utility of consumption is very large. Hence, it cannot be optimal to accumulate capital and let consumption go to zero.
\item Alternatively, let savings vanish. This case typically implies that \(k_{t}\) converges to zero in finite time (that is, for some \(t<\infty\)).
The condition that capital must be positive prevents such cases.
\end{itemize}

\textbf{Note:} the preceding points are only descriptive of what the transversality condition implies. In fact, the transversality condition reads:

\[\lim_{t \rightarrow \infty} \beta^{t}u^{\prime}(c_{t})k_{t+1} = 0.\]

In general, we will obtain a steady-state solution in which the path of capital and consumption is bounded. That is, in the optimal solution consumption \(c_{t} \rightarrow c^{\star}\) and capital \(k_{t} \rightarrow k^{\star}\). In this case, the fact that \(\beta \in (0,1)\) ensures the validity of the transversality condition.

We assume that a large number of identical firms populates the economy. Firms produce a single, homogeneous good using labour and capital. The production function has the following properties (assumptions):

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item \(F(K_{t}, X_{t})\) is continuous and defined on \([0,+\infty)^{2},\)
\item \(F(K_{t}, X_{t})\) has continuous derivatives of every required order on \((0,+\infty)^2,\)
\item \(F_{i}(K_{t}, X_{t}) > 0, F_{ii}(K_{t}, X_{t}) < 0\) and \(F_{ii}(K_{t},X_{t}) F_{jj}(K_{t}, X_{t}) - F_{ij}(K_{t}, X_{t})F_{ji}(K_{t}, X_{t})>0\): the function is strictly increasing in both arguments and strictly concave.
\item \(F(K_{t}, X_{t})\) is homogeneous of degree one.
\item The Inada conditions are satisfied: \(\lim_{K \rightarrow 0} F_{1}(K_{t}, X_{t}) = \lim_{X \rightarrow 0} F_{2}(K_{t}, X_{t}) = +\infty\) and \(\lim_{K \rightarrow +\infty} F_{1}(K_{t}, X_{t}) = \lim_{X \rightarrow +\infty} F_{2}(K_{t}, X_{t}) = 0.\)
\end{enumerate}

\textbf{Note:} the last two inequalities in Assumption 3 determine that the Hessian matrix of the production function is negative definite, hence the function is strictly concave.

Firms maximise real profits. Since there are many firms competing, in equilibrium they make exactly zero profits. Moreover, also in equilibrium factors are paid their marginal productivity.

Since the production function \(F\) is homogeneous of degree one we can write it in \emph{intensive} terms:

\[f(k) \equiv F\left(\frac{K}{X},1\right),\]

where \(k \equiv \frac{K}{X}\). Because markets are competitive, capital earns its marginal product \(\partial F(K,X)/ \partial K\) or, equivalently, \(f^{\prime}(k)\) in intensive terms. Thus, the real interest rate at time \(t\) is:

\[r_{t} = f^{\prime}(k_{t}).\]

The marginal product of labour is given by \(\partial F(K,X)/\partial X.\) In intensive terms, it is equal to:

\[w_{t} = f(k) - f^{\prime}(k) k.\]

At the equilibrium, we have that factors are paid their marginal productivities, thus:

\[r_{t} = f^{\prime}(k)\] and \[w_{t} = f(k) - f^{\prime}(k)k.\]

\hypertarget{definition-of-intertemporal-equilibrium}{%
\subsection{Definition of intertemporal equilibrium}\label{definition-of-intertemporal-equilibrium}}

An intertemporal equilibrium with perfect foresight is \textbf{a sequence} \((k_{t}, c_{t}) \in \mathbb{R}_{++}^{2}, t=0,\ldots,+\infty\), such that given \(k_{0} > 0\), \textbf{households and firms optimise}, \textbf{markets clear}, and \textbf{capital accumulation follows} \(k_{t+1} = f(k_{t}) + (1-\delta)k_{t} -c_{t}.\)

Thus, all the following equations must be satisfied:

\begin{eqnarray}
& u^{\prime}(c_{t}) = \beta (r_{t+1} + 1 - \delta) u^{\prime}(c_{t+1}), \\\
& k_{t+1} = f(k_{t}) + (1-\delta)k_{t} - c_{t}, \\\
& w_{t} = f(k_{t}) - f^{\prime}(k_{t})k_{t}, \\\
& r_{t} = f^{\prime}(k_{t}), \\\
& \lim_{t \rightarrow \infty} \beta^{t}u^{\prime}(c_{t})k_{t+1} = 0.
\end{eqnarray}

The first equation is the Euler equation, which implies that households are optimising. Equations 3 and 4 denote market clearing conditions for labour and capital. The second equation is the law of motion for capital: capital next period equals production net of consumption plus the remaining capital that has not depreciated. Finally, the transversality condition allows us to pick a non-explosive path.

After some substitutions, we can express the intertemporal equilibrium ---forgetting for a moment about the transversality condition--- as:

\begin{eqnarray}
& u^{\prime}(c_{t}) = \beta (f^{\prime}(k_{t+1}) + 1 - \delta) u^{\prime}(c_{t+1}), \\\
& k_{t+1} = f(k_{t}) + (1-\delta)k_{t} - c_{t}.
\end{eqnarray}

This is a two-dimensional dynamic system with a unique predetermined state variable: \(k_{t}.\) To analyse it, we first compute its steady state and then check the dynamics.

The steady state of an N-dimensional dynamic system is a configuration of the system's variables such that the system remains unchanged over time. That is, upon reaching the steady state, the system remains there forever. Hence, if the system is given by:

\begin{eqnarray}
x^{1}_{t+1} & = f_{1}(x^{1}_{t}, \ldots, x^{N}_{t}), \\\
\vdots & \\\
x^{N}_{t+1} & = f_{N}(x^{1}_{t}, \ldots, x^{N}_{t}).
\end{eqnarray}

at the steady state we have that

\begin{eqnarray}
x^{1}_{t+1} & = x^{1}_{t}, \\\
\vdots & \\\
x^{N}_{t+1} & = x^{N}_{t}.
\end{eqnarray}

In many cases, the steady state is only reached after some periods, unless we intentionally start at the steady state. Moreover, the steady state is often approached asymptotically: it is never fully reached, but we get infinitesimally close to it. Finally, steady states can be (asymptotically) stable or unstable. In the first case, variables approach the steady-state level. In the second, the system \emph{diverges} from the steady state.

\begin{center}\rule{0.5\linewidth}{\linethickness}\end{center}

In our case, denote by \((\bar{k}, \bar{c}) \in \mathbb{R}_{+}^{2}\) the capital and consumption levels at the steady state. Then, a steady state solution satisfies the following system of equations:

\begin{eqnarray}
u^{\prime}(\bar{c}) & = \beta (f^{\prime}(\bar{k}) + 1 -\delta) u^{\prime}(\bar{c}), \\\
\bar{k} & = f(\bar{k}) + (1-\delta) \bar{k} - \bar{c}.
\end{eqnarray}

Simplifying leads to:

\begin{eqnarray}
f^{\prime}(\bar{k}) = \frac{\theta}{\beta}, \\\
\bar{c} = f(\bar{k}) - \delta \bar{k},
\end{eqnarray}

where \(\theta \equiv 1 - \beta(1-\delta) \in (0,1).\) From the Inada conditions for the production function we have that

\[0 = \lim_{k \rightarrow +\infty} f^{\prime}(k) < \frac{\theta}{\beta} < \lim_{k \rightarrow 0}f^{\prime}(k) = +\infty.\]

Moreover, \(f^{\prime}(k)\) is monotonically decreasing. Then, by the intermediate value theorem, there exists a unique level \(\bar{k} \in (0, +\infty)\) that solves the equation.\\
This level is a \emph{modified golden rule}. Consequently, \(\bar{c} = f(\bar{k}) - \delta \bar{k}\) uniquely determines the steady-state consumption level.

We can study the dynamics of the model using the phase diagram.
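As a purely numerical illustration (the functional form and all parameter values below are assumptions added here, not taken from the text), the steady state is easy to compute in R once we pick, say, a Cobb--Douglas technology \(f(k) = A k^{\alpha}\):

\begin{verbatim}
# Steady state of the Ramsey model with f(k) = A*k^alpha
# (all parameter values are purely illustrative)
A <- 1; alpha <- 0.3; beta <- 0.95; delta <- 0.05
theta <- 1 - beta*(1 - delta)

f       <- function(k) A*k^alpha
f_prime <- function(k) alpha*A*k^(alpha - 1)

# Solve f'(k_bar) = theta/beta numerically...
k_bar <- uniroot(function(k) f_prime(k) - theta/beta, c(1e-6, 1e6))$root
c_bar <- f(k_bar) - delta*k_bar

# ...or use the closed form available for Cobb-Douglas as a check
k_bar_closed <- (alpha*A*beta/theta)^(1/(1 - alpha))
\end{verbatim}

With these (arbitrary) numbers, \(\bar{k} \approx 4.6\), and \(\bar{c}\) then follows from \(\bar{c} = f(\bar{k}) - \delta\bar{k}\).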
\hypertarget{golden-rule}{%
\subsection{The Golden rule}\label{golden-rule}}

The Golden rule level of capital, \(k^{\mathcal{GR}}\), is the level of capital that maximises consumption \(c\) at the steady state, thus achieving the maximum steady-state consumption \(c^{\mathcal{GR}}.\) We already know that, at the steady state:

\[c = f(k) -\delta k.\]

Hence, the level of capital that maximises \(c\) is given by:

\[\frac{\partial c(k)}{\partial k} = f^{\prime}(k) - \delta = 0.\]

To maximise consumption at the steady state we must have:

\[f^{\prime}(k) = \delta \implies k^{\mathcal{GR}} = {f^{\prime}}^{-1}(\delta).\]

However, at the \emph{competitive} steady state we have:

\[f^{\prime}(k) = \frac{1-\beta (1 - \delta)}{\beta} \implies \bar{k} = {f^{\prime}}^{-1}\left(\frac{1-\beta (1 - \delta)}{\beta}\right). \]

Hence, \(k^{\mathcal{GR}} = \bar{k}\) is only possible if \(\frac{1-\beta (1 - \delta)}{\beta} = \delta \implies \beta = 1.\)

\hypertarget{under--or-over-accumulation-of-capital-in-the-ramsey-model}{%
\subsubsection{Under- or over-accumulation of capital in the Ramsey model}\label{under--or-over-accumulation-of-capital-in-the-ramsey-model}}

According to our result before, the steady state does not correspond, in general, to the level of capital that maximises consumption at the steady state.

\[f^{\prime}(\bar{k}) = \frac{1-\beta (1 - \delta)}{\beta}\]

\[f^{\prime}(k^{\mathcal{GR}}) = \delta.\]

So, unless \(\beta = 1,\) the economy is not at the Golden-rule level of capital. Since \(\frac{1-\beta (1 - \delta)}{\beta} > \delta\), we conclude that \(\bar{k} < k^{\mathcal{GR}}:\) the level of capital is below its Golden-rule level.

There is an important remark to be made here.

\begin{itemize}
\tightlist
\item The level of capital in the \emph{competitive equilibrium} maximises life-time utility: we obtained it solving the utility maximisation problem.
\item This level of capital does \textbf{not} maximise steady-state consumption.
\end{itemize}

In that sense, achieving the Golden Rule level of capital \(k^{\mathcal{GR}}\) is \textbf{not} desirable from the viewpoint of utility maximisation.

\hypertarget{stability}{%
\subsection{Stability of the steady state}\label{stability}}

The economy is represented by a two-dimensional system of first-order difference equations. We can analyse the stability of the steady state by analysing the eigenvalues of the Jacobian matrix evaluated at the steady state.

\textbf{Note:} the Appendix discusses this in more detail.

Our dynamic equations are:

\[ u^{\prime}(c_{t}) = \beta (f^{\prime}(k_{t+1}) + 1 - \delta) u^{\prime}(c_{t+1}), \\\
k_{t+1} = f(k_{t}) + (1-\delta) k_{t} - c_{t}.
\]

First, compute the Jacobian matrix:

\[\bar{A} = \begin{pmatrix} \frac{\partial c_{t+1}}{\partial c_{t}} & \frac{\partial c_{t+1}}{\partial k_{t}} \\\
\frac{\partial k_{t+1}}{\partial c_{t}} & \frac{\partial k_{t+1}}{\partial k_{t}} \end{pmatrix}.\]

In our case:

\[\bar{A} = \begin{pmatrix} \frac{u^{\prime \prime}(c_{t})+\beta f^{\prime \prime}(k_{t+1})u^{\prime}(c_{t+1})}{\beta \left[ f^{\prime}(k_{t+1}) + 1 - \delta \right] u^{\prime \prime}(c_{t+1})} & -\frac{\beta f^{\prime \prime}(k_{t+1}) \left[ f^{\prime}(k_{t}) + 1 - \delta \right] u^{\prime}(c_{t+1})}{\beta \left[ f^{\prime}(k_{t+1}) + 1 - \delta \right] u^{\prime \prime}(c_{t+1})} \\\
-1 & f^{\prime}(k_{t}) + 1 - \delta \end{pmatrix}.\]

Then, we evaluate the Jacobian matrix \(\bar{A}\) at the steady state, using the information we know:

\[\color{red}{k_{t+1} = k_{t} = \bar{k},} \]

\[\color{green}{c_{t+1} = c_{t} = \bar{c},} \]

\[\color{blue}{1 = \beta \left( f^{\prime}(\bar{k}) + 1 - \delta \right).} \]

Going one step at a time, we should first substitute all \(t\) and \(t+1\) variables for the steady-state levels \(\bar{k}\) and \(\bar{c}\).

\[\bar{A}\bigr\rvert_{\substack{ k_{t} = k_{t+1} = \bar{k} \\\
c_{t} = c_{t+1} = \bar{c} } } = \begin{pmatrix} \frac{ u^{\prime \prime}(\color{green}{\bar{c}})+\beta f^{\prime \prime}(\color{red}{\bar{k}})u^{\prime}(\color{green}{\bar{c}}) } { \beta \left[ f^{\prime}(\color{red}{\bar{k}}) + 1 - \delta \right] u^{\prime \prime}(\color{green}{\bar{c}}) } & -\frac{ \beta f^{\prime \prime}(\bar{k}) \left[ f^{\prime}(\bar{k}) + 1 - \delta \right] u^{\prime}(\color{green}{\bar{c}}) } { \beta \left[ f^{\prime}(\bar{k}) + 1 - \delta \right] u^{\prime \prime}(\color{green}{\bar{c}})} \\\
-1 & f^{\prime}(\color{red}{\bar{k}}) + 1 - \delta \end{pmatrix} = \]

\[\begin{pmatrix} \frac{ u^{\prime \prime}(\bar{c})+\beta f^{\prime \prime}(\bar{k})u^{\prime}(\bar{c}) } { \color{blue}{\beta \left[ f^{\prime}(\bar{k}) + 1 - \delta \right]} u^{\prime \prime}(\bar{c}) } & -\frac{ \color{blue}{\beta} f^{\prime \prime}(\bar{k}) \color{blue}{\left[ f^{\prime}(\bar{k}) + 1 - \delta \right]} u^{\prime}(\bar{c}) } { \color{blue}{\beta \left[ f^{\prime}(\bar{k}) + 1 - \delta \right]} u^{\prime \prime}(\bar{c})} \\\
-1 & \color{blue}{f^{\prime}(\bar{k}) + 1 - \delta} \end{pmatrix} = \]

\[\begin{pmatrix} \frac{ u^{\prime \prime}(\bar{c})+\beta f^{\prime \prime}(\bar{k})u^{\prime}(\bar{c}) } { u^{\prime \prime}(\bar{c}) } & -\frac{ f^{\prime \prime}(\bar{k}) u^{\prime}(\bar{c}) } { u^{\prime \prime}(\bar{c})} \\\
-1 & \frac{1}{\beta} \end{pmatrix} = \]

\[\begin{pmatrix} 1 - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} & f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} \\\
-1 & \frac{1}{\beta} \end{pmatrix}, \]

where \(\epsilon_{c} = - \frac{u^{\prime \prime}( c )}{u^{\prime}( c )} c\) represents the degree of relative risk aversion.

Finally, let's compute the roots of the characteristic equation (the eigenvalues) of this matrix.
We can use several techniques:

\hypertarget{direct-computation-of-the-eigenvalues}{%
\subsubsection{Direct computation of the eigenvalues}\label{direct-computation-of-the-eigenvalues}}

In our case, it boils down to solving \(|\bar{A} - \lambda I|=0\). Hence, we have a second-order equation in \(\lambda\):

\[\begin{vmatrix} 1-\beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}} - \lambda & f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} \\\
-1 & \frac{1}{\beta} - \lambda \end{vmatrix} = \lambda^{2} + \lambda \left( \beta f^{\prime \prime} (\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} - 1 - \frac{1}{\beta} \right) + \frac{1}{\beta} = 0.\]

The roots of this equation are:

\[\lambda_{1} = \frac{1+\frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} + \sqrt{\left( 1+\frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} \right)^2 - \frac{4}{\beta}}}{2} > 1\]

It is clear that \(\lambda_{1} > 1\): because \(f^{\prime \prime}(\cdot) < 0\), the term \(1+\frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}}\) exceeds \(1 + \frac{1}{\beta}\), and the square root is non-negative, so \(\lambda_{1} > \frac{1}{2}\left(1+\frac{1}{\beta}\right) > 1.\) For \(\lambda_{2}\) we have:

\[\lambda_{2} = \frac{1+\frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} - \sqrt{\left( 1+\frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} \right)^2 - \frac{4}{\beta}}}{2}.\]

Let \(\phi \equiv 1+\frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}}\) and \(\kappa \equiv \frac{4}{\beta}\). Assume that \(\lambda_{2} > 1\), then we must have:

\[\frac{\phi - \sqrt{\phi^{2} - \kappa}}{2} > 1 \implies \kappa > 4\phi - 4.\]

Substituting:

\[4 + \frac{4}{\beta} - 4 \beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}} - 4 < \frac{4}{\beta} \implies -4\beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}} < 0.\]

But this is impossible because \(f^{\prime \prime}(\cdot) < 0.\) Hence, \(\lambda_{2} < 1\). Moreover, we also know that \(\lambda_{2} > 0\). Hence, \(\lambda_{1} > 1, \lambda_{2}\in(0,1)\) and the \textbf{steady state is a saddle.}

\textbf{Note:} It is a saddle because the \emph{absolute value} of one eigenvalue is larger than one, and the \emph{absolute value} of the second eigenvalue is between \(0\) and \(1.\)

\hypertarget{use-the-intermediate-value-theorem}{%
\subsubsection{Use the intermediate value theorem}\label{use-the-intermediate-value-theorem}}

The characteristic equation associated with the Jacobian evaluated at the steady state is:

\[G(\lambda) = \lambda^{2} + \lambda \left( \beta f^{\prime \prime} (\bar{k}) \frac{\bar{c}}{\epsilon_{\bar{c}}} - 1 - \frac{1}{\beta} \right) + \frac{1}{\beta} = 0.\]

First, notice this is a continuous function of \(\lambda.\) We are interested in checking whether the roots (or at least one of them) lie in the interval \((-1,1)\). Then, compute the following:

\[\lim_{\lambda \rightarrow -\infty} G(\lambda) = + \infty\]

\[\lim_{\lambda \rightarrow +\infty} G(\lambda) = + \infty\]

\[G(-1) = 2 + \frac{2}{\beta} - \beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}} > 0\]

\[G(0) = \frac{1}{\beta} > 0 \]

\[G(1) = \beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}} < 0.\]

Since \(G(0) > 0\) and \(G(1) < 0\), there exists \(\lambda_{2} \in (0,1)\) such that \(G(\lambda_{2}) = 0\).
Similarly, the second root lies beyond \(1.\) Therefore, \(\lambda_{1} > 1, \lambda_{2}\in(0,1)\) and the \textbf{steady state is a saddle.}

\hypertarget{use-eigenvalues-properties}{%
\subsubsection{Use eigenvalues' properties}\label{use-eigenvalues-properties}}

For any matrix, we have the following:

\begin{itemize}
\tightlist
\item The product of eigenvalues equals the \textbf{determinant} of the matrix: \(\mathrm{det}(M) = \lambda_{1} \lambda_{2} \ldots \lambda_{N},\)
\item The sum of eigenvalues equals the \textbf{trace} of the matrix: \(\mathrm{tr}(M) = \lambda_{1} + \lambda_{2} + \ldots + \lambda_{N}.\)
\end{itemize}

In our case:

\[\lambda_{1} \lambda_{2} = \frac{1}{\beta} \]

\[\lambda_{1} + \lambda_{2} = \underbrace{1+ \frac{1}{\beta} - \beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}}}_{>1}.\]

The first equation implies that either a) \(\lambda_{1} > 0, \lambda_{2} > 0\) or b) \(\lambda_{1} < 0, \lambda_{2} < 0.\) However, since \(\lambda_{1} + \lambda_{2} > 1\) b) is impossible. Then, \(\lambda_{1} > 0, \lambda_{2} > 0.\)

Substitute \(\lambda_{1} \lambda_{2} = \frac{1}{\beta}\) in the second equation:

\[ \lambda_{1} + \lambda_{2} = 1+ \lambda_{1} \lambda_{2} - \beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}}.\]

Rearranging:

\[ \lambda_{1} + \lambda_{2} - \lambda_{1} \lambda_{2} = 1 - \beta f^{\prime \prime}(\bar{k})\frac{\bar{c}}{\epsilon_{\bar{c}}} > 1 \implies \lambda_{1} + \lambda_{2} - \lambda_{1}\lambda_{2} - 1 >0.\]

Factorisation leads to:

\[(1 - \lambda_{2})(\lambda_{1} - 1) > 0.\]

Therefore:

\[ \lambda_{2} < 1 \]

\[ \lambda_{1} > 1 \]

\[ \lambda_{1} > 0, \lambda_{2} > 0\]

and \textbf{the steady state is a saddle.}

\textbf{Note:} based on Section 2.3 of \emph{Romer, Advanced Macroeconomics}.

We can describe the dynamics of the economy using two equations:

\begin{eqnarray}
u^{\prime}(c_{t}) = \beta(f^{\prime}(k_{t+1}) + 1 - \delta)u^{\prime}(c_{t+1}), \\\
k_{t+1} = f(k_{t}) + (1-\delta)k_{t} - c_{t}.
\end{eqnarray}

\hypertarget{the-dynamics-of-c}{%
\subsection{\texorpdfstring{The dynamics of \(c\)}{The dynamics of c}}\label{the-dynamics-of-c}}

The first equation describes the dynamics of consumption.

\[u^{\prime}(c_{t}) = \beta ( f^{\prime}(k_{t+1}) + 1 - \delta) u^{\prime}(c_{t+1}).\]

Rearranging it we arrive at:

\[\frac{u^{\prime}(c_{t})}{u^{\prime}(c_{t+1})} = \beta ( f^{\prime}(k_{t+1}) + 1 - \delta).\]

If the left-hand side term equals 1 (\(\frac{u^{\prime}(c_{t})}{u^{\prime}(c_{t+1})} = 1)\), then consumption remains constant over time. That is, when

\[ 1 = \beta ( f^{\prime}(k_{t+1}) + 1 - \delta)\]

consumption is constant over time: \(c_{t+1} = c_{t}.\) This condition depends only on the level of capital. Denote this level by \(k^{\star}\). When \(k < k^{\star}\), \(f^{\prime}(k) > f^{\prime}(k^{\star})\), implying that \(u^{\prime}(c_{t}) > u^{\prime}(c_{t+1})\) and hence consumption will rise over time. Similarly, \(k > k^{\star}\) implies that consumption falls.

This information is summarised in the following Figure.

\begin{figure}
\centering
\includegraphics{/home/eric/eric-roca.github.io/static/img/ramsey/dynamics_c.png}
\caption{dynamics of c}
\end{figure}

The arrows show the direction in which consumption \(c\) evolves. As discussed before, consumption rises when capital is below \(k^{\star}\) and falls when it is above. Note the vertical line: it denotes the level of capital \(k^{\star}\) such that \(c\) is constant.
The value \(k^{\star}\) can be easily computed: \(k^{\star} = {f^{\prime}}^{-1}\left(\frac{1-\beta (1-\delta)}{\beta}\right).\)

\hypertarget{the-dynamics-of-k}{%
\subsection{\texorpdfstring{The dynamics of \(k\)}{The dynamics of k}}\label{the-dynamics-of-k}}

We can proceed similarly with the motion of capital. The relevant equation in this case is:

\[k_{t+1} = f(k_{t}) + (1- \delta)k_{t} - c_{t}.\]

As before, we are interested in the combinations of \((k,c)\) such that capital remains constant over time. In this case, though, and in contrast with the previous case, both capital and consumption affect capital levels in the future: production uses current capital, and current consumption depletes current production.

We can rearrange the previous equation to obtain the following:

\[k_{t+1} - k_{t} = f(k_{t}) - \delta k_{t} - c_{t}.\]

Setting \(k_{t+1} = k_{t} \implies k_{t+1} - k_{t} = 0\) so capital is constant yields:

\[ 0 = f(k_{t}) - \delta k_{t} - c_{t} \implies c_{t} = f(k_{t}) - \delta k_{t}.\]

In the \((k,c)\) space, the equation takes a form similar to a parabola: it combines production ---with decreasing marginal returns--- with a linear depreciation rate. Technically, it can be seen that \(\frac{\partial \left( f(k) - \delta k \right)}{\partial k} = f^{\prime}(k) - \delta\), which admits a unique maximum: \(f^{\prime}(k)\) is a decreasing function. Moreover, \(\frac{\partial^{2} \left( f(k) - \delta k \right)}{\partial k^{2}} = f^{\prime \prime}(k) < 0,\) confirming that the optimal point is a maximum.

Let's now analyse how capital evolves when consumption is above and below the level that makes it constant. From \(k_{t+1} - k_{t} = f(k_{t}) - \delta k_{t} - c_{t}\), if consumption is above the level we have that \(k_{t+1} - k_{t} < 0,\) meaning that capital decreases over time. The opposite applies when consumption is below the level that guarantees a constant level of capital.

The Figure below summarises these findings:

\begin{figure}
\centering
\includegraphics{/home/eric/eric-roca.github.io/static/img/ramsey/dynamics_g.png}
\caption{dynamics of k}
\end{figure}

\hypertarget{the-phase-diagram}{%
\subsection{The phase diagram}\label{the-phase-diagram}}

We can combine the information above to produce the \emph{phase diagram}. It indicates the motion of variables at different points of the \((k,c)\) space.

\begin{figure}
\centering
\includegraphics{/home/eric/eric-roca.github.io/static/img/ramsey/phase_diagram.png}
\caption{phase diagram}
\end{figure}

The arrows show the motion of each variable at different combinations of \((k,c).\) For instance, to the left of the \(c_{t+1} = c_{t}\) locus and above the \(k_{t+1} - k_{t} = 0\) locus consumption increases and capital decreases. On top of each curve only capital or consumption moves, the other remains constant. For example, on the \(c_{t+1} = c_{t}\) locus, consumption is constant but capital changes. Finally, the point indicating the intersection of both curves is the steady state: all variables remain constant at their values.
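To see what the two loci look like for a concrete parametrisation, the following R sketch plots them; the Cobb--Douglas technology and all parameter values are illustrative assumptions added here, not taken from the text:

\begin{verbatim}
# Phase-diagram loci for f(k) = A*k^alpha (illustrative parameter values)
A <- 1; alpha <- 0.3; beta <- 0.95; delta <- 0.05

f <- function(k) A*k^alpha
k_star <- (alpha*A*beta/(1 - beta*(1 - delta)))^(1/(1 - alpha))

k_grid <- seq(0.01, 3*k_star, length.out = 500)
plot(k_grid, f(k_grid) - delta*k_grid, type = "l",
     xlab = "k", ylab = "c")                        # k_{t+1} = k_t locus
abline(v = k_star, lty = 2)                         # c_{t+1} = c_t locus
points(k_star, f(k_star) - delta*k_star, pch = 19)  # steady state
\end{verbatim}

The hump-shaped curve is the \(k_{t+1} = k_{t}\) locus, the dashed vertical line is the \(c_{t+1} = c_{t}\) locus, and their intersection marks the steady state.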
\hypertarget{additional-material-trajectory}{%
\section{Additional material: trajectory}\label{additional-material-trajectory}}

\hypertarget{the-trajectory-around-the-steady-state}{%
\subsection{The trajectory around the steady state}\label{the-trajectory-around-the-steady-state}}

For the moment, we have established that the model converges towards a unique steady state following a saddle path.

\textbf{Note:} This means that there is only one combination of initial capital and consumption, \((k_{0}, c_{0})\), such that the economy converges. Any other initial value of consumption at \(t=0\) (capital is pre-determined and thus we \emph{cannot} change it) has a diverging trajectory.

We now compute the exact behaviour of \((k_{t}, c_{t})\) around the steady state. In general, though, the notion of \emph{around the steady state} is quite generous and we extend it quite far from the steady state. The model is highly non-linear, so we study a simplified linearised version around the steady state.

First, let's approximate the dynamics of capital and consumption around the steady state.

\hypertarget{the-approximation-around-the-steady-state}{%
\subsubsection{The approximation around the steady state}\label{the-approximation-around-the-steady-state}}

The behaviour of the dynamical system around the steady state can be approximated using a first-order Taylor expansion around it. In that sense:

\[\begin{pmatrix} c_{t+1} - \bar{c} \\\
k_{t+1} - \bar{k} \end{pmatrix} \approx \bar{A} \begin{pmatrix} c_{t} - \bar{c} \\\
k_{t} - \bar{k} \end{pmatrix}.\]

These dynamics are governed by a two-dimensional system of difference equations. To solve it, we begin by studying a simpler system.

\hypertarget{diagonalising-the-matrix-eigenvalues-and-eigenvectors}{%
\subsubsection{Diagonalising the matrix: eigenvalues and eigenvectors}\label{diagonalising-the-matrix-eigenvalues-and-eigenvectors}}

\textbf{Note:} based on the notes by \href{http://www.princeton.edu/~moll/ECO503Web/Lecture4_ECO503.pdf}{Benjamin Moll}.

To simplify notation, let \(y_{t} = \begin{pmatrix} c_{t} - \bar{c} \\\
k_{t} -\bar{k} \end{pmatrix}.\) Our problem can be written as: \(y_{t+1} = \bar{A} y_{t}\). If \(\bar{A}\) were a diagonal matrix, the solution to the system would be straightforward. In fact, if that were the case (I change variables to avoid confusion):

\[\begin{pmatrix} \phi_{t+1} \\\\ \omega_{t+1} \end{pmatrix} = \begin{pmatrix} a_{1,1} & 0 \\\
0 & a_{2,2} \end{pmatrix} \begin{pmatrix} \phi_{t} \\\
\omega_{t} \end{pmatrix}.\]

Then, it is clear that \(\phi_{t+1} = a_{1,1} \phi_{t} \implies \phi_{t} = a_{1,1}^{t} \phi_{0}.\) And we would find a similar, equivalent expression for \(\omega.\)

\(\bar{A}\) is not diagonal, but we can apply a Jordan decomposition to it and obtain an equivalent system governed by a diagonal matrix. In particular, we look for a \(2 \times 2\) invertible matrix \(X\) such that \( X^{-1} \bar{A} X = \Lambda\), where \(\Lambda\) is diagonal.
Then, we apply the following transformation to our system (pre-multiplying by \(\color{red}{X^{-1}}\) on both sides and multiplying by \(\color{green}{XX^{-1}}\) on the right-hand side):

\[ \color{red}{X^{-1}} y_{t+1} = \color{red}{X^{-1}} \bar{A} \color{green}{(X X^{-1})} y_{t} \implies X^{-1} y_{t+1} = \left(X^{-1} \bar{A} X\right) (X^{-1} y_{t}) = \Lambda X^{-1} y_{t}.\]

Denote \(z \equiv X^{-1} y\). Thus, the system becomes:

\[z_{t+1} = \Lambda z_{t}.\]

Since \(\Lambda\) is diagonal, the solutions are of the form:

\[z_{t} = \Lambda^{t} z_{0}.\]

\textbf{Note:} in general, the power of a matrix is complex to compute. However, for a \emph{diagonal matrix} it is simply the power of each component.

Once we have the solution for the transformed system we must undo the transformation: in fact, we do not care about the evolution of \(z\). This step involves using that \(X^{-1} y = z \implies y = X z\).

\hypertarget{the-matrix-lambda}{%
\paragraph{\texorpdfstring{The matrix \(\Lambda\)}{The matrix \textbackslash{}Lambda}}\label{the-matrix-lambda}}

In a Jordan decomposition, the diagonal matrix \(\Lambda\) is the matrix whose entries are the eigenvalues of \(\bar{A}\). Thus (we have a \(2 \times 2\) system),

\[\Lambda = \begin{pmatrix} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{pmatrix}.\]

\hypertarget{the-matrix-x}{%
\paragraph{\texorpdfstring{The matrix \(X\)}{The matrix X}}\label{the-matrix-x}}

The matrix \(X\) contains the eigenvectors of \(\bar{A}\) in columns. There is one eigenvector associated to each eigenvalue.

\textbf{Note:} it is important to correctly relate each eigenvector to its eigenvalue.

To find an eigenvector associated with \(\lambda_{1}\) (\emph{there are infinitely many possible eigenvectors, we only want one}) we solve the following system:

\[\bar{A} \begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix} = \lambda_{1} \begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix} \quad \mathrm{or} \quad (\bar{A} - \lambda_{1} \mathbb{I}) \begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.\]

Since eigenvectors are only defined up to a scale factor, we can normalise their second component to one. The matrix \(X\) then becomes:

\[X = \begin{pmatrix} v_{1,1} & v_{2,1} \\ 1 & 1 \end{pmatrix}.\]

\hypertarget{solving-the-system}{%
\subsubsection{Solving the system}\label{solving-the-system}}

We begin with the solution for the transformed variable \(z.\) We already know it takes the form:

\[z_{t} = \Lambda^{t} z_{0} \quad \mathrm{or} \quad \begin{pmatrix} z_{t}^{1} \\ z_{t}^{2} \end{pmatrix} = \begin{pmatrix} \lambda_{1}^{t} & 0 \\ 0 & \lambda_{2}^{t} \end{pmatrix} \begin{pmatrix} z_{0}^{1} \\ z_{0}^{2} \end{pmatrix}.\]

Next, we reverse the transformation, that is, we obtain the dynamics of \(y: y_{t} = X z_{t}.\) Therefore, we obtain:

\[y_{t} = X z_{t} = X \Lambda^{t} z_{0} = \begin{pmatrix} v_{1,1} & v_{2,1} \\ v_{1,2} & v_{2,2} \end{pmatrix} \begin{pmatrix} \lambda_{1}^{t} & 0 \\ 0 & \lambda_{2}^{t} \end{pmatrix} \begin{pmatrix} z_{0}^{1} \\ z_{0}^{2} \end{pmatrix}.\]

Alternatively, multiplying the matrices and recalling that \(y_{t}=\begin{pmatrix} c_{t} - \bar{c} \\ k_{t} - \bar{k} \end{pmatrix}\):

\[\begin{pmatrix} c_{t} - \bar{c} \\ k_{t} - \bar{k} \end{pmatrix} = z_{0}^{1} \lambda_{1}^{t} \begin{pmatrix} v_{1,1} \\ v_{1,2} \end{pmatrix} + z_{0}^{2} \lambda_{2}^{t} \begin{pmatrix} v_{2,1} \\ v_{2,2} \end{pmatrix}.\]

The eigenvalues appear clearly in the solution. \textbf{Remember} that we know that one eigenvalue is bigger than one, while the second lies within the unit circle.
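For readers who prefer to see these mechanics in code, the short NumPy sketch below reproduces the diagonalisation argument for a generic \(2 \times 2\) matrix with real, distinct eigenvalues. The matrix and the initial condition are arbitrary illustrative values, not those of the Ramsey model; the point is only that \(y_{t} = X \Lambda^{t} X^{-1} y_{0}\) coincides with iterating \(y_{t+1} = \bar{A} y_{t}\) directly.

\begin{verbatim}
import numpy as np

# Illustrative 2x2 matrix with real, distinct eigenvalues (made-up values)
A  = np.array([[1.1, -0.1],
               [-1.0, 1.2]])
y0 = np.array([0.5, -0.3])        # initial deviation from the steady state

lam, X = np.linalg.eig(A)         # columns of X are eigenvectors of A
z0 = np.linalg.solve(X, y0)       # z_0 = X^{-1} y_0

def y(t):
    """General solution y_t = X diag(lam**t) z_0."""
    return X @ (lam**t * z0)

# The eigen-decomposition solution matches direct iteration of y_{t+1} = A y_t
assert np.allclose(y(5), np.linalg.matrix_power(A, 5) @ y0)
\end{verbatim}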
Without loss of generality, assume that \(\lambda_{1} \in (-1,1)\) and \(\lambda_{2} > 1.\) According to the solution before, this implies an explosive behaviour: \(\lambda_{2} > 1 \implies \lim_{t \rightarrow +\infty} \lambda_{2}^{t} = +\infty.\) To obtain well-behaved dynamics we must impose \(z_{0}^{2} = 0\), which eliminates the explosive behaviour.

\textbf{Technical remark:} Denote by \(l\) the number of eigenvalues in the unit circle, and denote by \(m\) the number of pre-determined state variables:

\begin{itemize}
\tightlist
\item
  if \(l=m\) (standard case): saddle path, \emph{unique} optimal trajectory. The eigenvalues within the unit circle govern the speed of convergence;
\item
  if \(l < m\): unstable;
\item
  if \(l > m\): multiple optimal trajectories.
\end{itemize}

After imposing \(z_{0}^{2} = 0\) the system becomes:

\[\begin{pmatrix} c_{t} - \bar{c} \\ k_{t} - \bar{k} \end{pmatrix} = z_{0}^{1} \lambda_{1}^{t} \begin{pmatrix} v_{1,1} \\ v_{1,2} \end{pmatrix}.\]

\hypertarget{closing-the-model-with-initial-values}{%
\paragraph{Closing the model with initial values}\label{closing-the-model-with-initial-values}}

We know that the economy begins with a level of capital \(k_{t=0} = k_{0} > 0.\) Substituting \(t=0\) in the equation above, and focusing on capital, we obtain:

\[k_{0} - \bar{k} = z_{0}^{1} v_{1,2} \implies \color{red}{z_{0}^{1} = \frac{k_{0} - \bar{k}}{v_{1,2}}}.\]

Therefore, we can substitute the value for \(z_{0}^{1}\) to obtain the dynamics of capital:

\[k_{t} - \bar{k} = \color{red}{z_{0}^{1}} \lambda_{1}^{t} v_{1,2} = \left(k_{0} - \bar{k} \right) \lambda_{1}^{t}.\]

Similarly, we have that for consumption:

\[c_{t} - \bar{c} = \color{red}{z_{0}^{1}} \lambda_{1}^{t} v_{1,1} = \frac{v_{1,1}}{v_{1,2}} \lambda_{1}^{t} (k_{0} - \bar{k}).\]

Finally, the initial value of consumption \(c_{0}\) that puts the economy on the saddle path is obtained by setting \(t=0\) in the previous equation:

\[c_{0} - \bar{c} = \frac{v_{1,1}}{v_{1,2}} (k_{0} - \bar{k}).\]

We can now completely solve an example.

\hypertarget{utility-and-production-functions}{%
\subsection{Utility and production functions}\label{utility-and-production-functions}}

We assume that utility is logarithmic and we take a Cobb-Douglas production function:

\begin{eqnarray}
u(c_{t}) = \log (c_{t}), \\
F(K_{t},X_{t}) = AK_{t}^{\alpha}X_{t}^{1-\alpha}, \\
A >0, \alpha \in (0,1).
\end{eqnarray}

We can check that both functions satisfy the Inada conditions:

\[\lim_{c_{t} \rightarrow 0}u^{\prime}(c_{t}) = \lim_{c_{t} \rightarrow 0} \frac{1}{c_{t}} = +\infty, \\
\lim_{c_{t} \rightarrow +\infty}u^{\prime}(c_{t}) = \lim_{c_{t} \rightarrow +\infty} \frac{1}{c_{t}} = 0, \\
\lim_{K_{t} \rightarrow 0} F^{\prime}_{K_{t}}(K_{t}, X_{t}) = \lim_{K_{t} \rightarrow 0} A\alpha K_{t}^{\alpha - 1}X_{t}^{1-\alpha} = +\infty, \\
\lim_{K_{t} \rightarrow +\infty} F^{\prime}_{K_{t}}(K_{t}, X_{t}) = \lim_{K_{t} \rightarrow +\infty} A\alpha K_{t}^{\alpha - 1}X_{t}^{1-\alpha} = 0, \\
\lim_{X_{t} \rightarrow 0} F^{\prime}_{X_{t}}(K_{t}, X_{t}) = \lim_{X_{t} \rightarrow 0} A (1-\alpha) K_{t}^{\alpha}X_{t}^{-\alpha} = +\infty, \\
\lim_{X_{t} \rightarrow +\infty} F^{\prime}_{X_{t}}(K_{t}, X_{t}) = \lim_{X_{t} \rightarrow +\infty} A (1- \alpha) K_{t}^{\alpha}X_{t}^{-\alpha} = 0.\]
The production function, expressed in intensive terms, becomes:

\[F(K_{t}, X_{t}) = X_{t} F\left(\frac{K_{t}}{X_{t}},1\right) = X_{t} f(k_{t}) = X_{t} k_{t}^{\alpha}, \quad k_{t} \equiv \frac{K_{t}}{X_{t}},\]

where we normalise \(A=1\) from here onwards to lighten the notation, with total production \(Y_{t} = F(K_{t},X_{t})\) and production per capita equal to \(y_{t} \equiv \frac{Y_{t}}{X_{t}} = f(k_{t}).\)

\hypertarget{households-optimisation}{%
\subsection{Household's optimisation}\label{households-optimisation}}

Instead of using the results from previous sections, we develop the utility maximisation again. First, we write down the household's budget constraint:

\hypertarget{households-budget-constraint-1}{%
\subsubsection{Household's budget constraint}\label{households-budget-constraint-1}}

At each period, total received income is composed of \emph{wages} and \emph{interests}. It can be spent on \emph{consumption} or \emph{saving}. Remember that capital depreciates at a rate \(\delta \in [0,1].\) Hence, we receive back from firms \(1-\delta\) times the capital we lend. Therefore:

\[ w_{t} + r_{t}k_{t} + (1-\delta)k_{t} = c_{t} + k_{t+1}.\]

\hypertarget{intertemporal-utility-maximistion-lagrangean}{%
\subsubsection{Intertemporal utility maximisation: Lagrangean}\label{intertemporal-utility-maximistion-lagrangean}}

The intertemporal utility maximisation problem is:

\begin{eqnarray}
\max_{c_{t}, k_{t+1}} \sum_{t=0}^{\infty} \beta^{t} u(c_{t}) & \\
\mathrm{s.t.} \quad w_{t} + r_{t} k_{t} + (1-\delta)k_{t} = c_{t} + k_{t+1}, \\
c_{t}, k_{t+1} >0, k_{t=0} = k_{0} > 0.
\end{eqnarray}

The Lagrangean becomes:

\[ \mathcal{L} = \sum_{t=0}^{\infty} \beta^{t} u(c_{t}) + \sum_{t=0}^{\infty} \lambda_{t}(w_{t} + (r_{t} + 1 - \delta)k_{t} - c_{t} - k_{t+1}).\]

We use the first-order conditions with respect to \(c_{t}\) and \(k_{t+1}\) to find the optimal consumption path. In particular, in this step we shall obtain the Euler equation, which we combine later with the transversality condition.

\[ \frac{\partial \mathcal{L}}{\partial c_{t}} = \beta^{t} u^{\prime}(c_{t}) - \lambda_{t} = \beta^{t}\frac{1}{c_{t}} - \lambda_{t} = 0, \\
\frac{\partial \mathcal{L}}{\partial k_{t+1}} = -\lambda_{t} + \lambda_{t+1} (r_{t+1} + 1 - \delta) = 0.
\]

\textbf{Note:} the term \(k_{t+1}\) appears in two consecutive terms of the summation: in the period-\(t\) term and in the period-\(t+1\) term. Expanding the summation reveals this clearly:

\[ \ldots + \lambda_{t} (w_{t} + (r_{t} + 1 - \delta)k_{t} - c_{t} - k_{t+1}) + \\
\quad \quad + \lambda_{t+1}(w_{t+1} + (r_{t+1} + 1 - \delta)k_{t+1} - c_{t+1} - k_{t+2}) + \ldots\]

Combining both, we obtain the Euler equation:

\[\beta^{t}\frac{1}{c_{t}} = \beta^{t+1} (r_{t+1} + 1 - \delta) \frac{1}{c_{t+1}} \implies c_{t+1} = \beta (r_{t+1} + 1 - \delta)c_{t}.\]

Finally, we should remember to impose the transversality condition:

\[\lim_{t \rightarrow +\infty} \beta^{t}u^{\prime}(c_{t})k_{t+1} = \lim_{t \rightarrow +\infty}\beta^{t}\frac{k_{t+1}}{c_{t}} = 0.\]

\hypertarget{firms-optimisation}{%
\subsection{Firm's optimisation}\label{firms-optimisation}}

In this model, firms operate under perfect competition, making zero profits. Moreover, factors are paid their marginal productivity:

\[r_{t} = F^{\prime}_{K_{t}}(K_{t}, X_{t}) = \alpha K_{t}^{\alpha - 1}X_{t}^{1-\alpha} = \alpha \left(\frac{K_{t}}{X_{t}}\right)^{\alpha - 1} = \alpha k_{t}^{\alpha -1}, \\
w_{t} = F^{\prime}_{X_{t}}(K_{t}, X_{t}) = (1-\alpha) K_{t}^{\alpha} X_{t}^{-\alpha} = (1-\alpha)\left(\frac{K_{t}}{X_{t}}\right)^{\alpha} = (1-\alpha) k_{t}^{\alpha}.\]
Alternatively, using the information in the Appendix and working directly with the intensive-form production function:

\[r_{t} = f^{\prime}(k_{t}) = \frac{\partial f(k_{t})}{\partial k_{t}} = \alpha k_{t}^{\alpha -1}, \\
w_{t} = f(k_{t}) - f^{\prime}(k_{t})k_{t} = k_{t}^{\alpha} - \alpha k_{t}^{\alpha - 1} k_{t} = (1-\alpha)k_{t}^{\alpha}.\]

\hypertarget{the-dynamic-system}{%
\subsection{The dynamic system}\label{the-dynamic-system}}

We are now in a position to solve the model. We have the following equations:

\[ c_{t+1} = \beta (r_{t+1} + 1 - \delta) c_{t}, \\
w_{t} + r_{t}k_{t} + (1-\delta) k_{t} = c_{t} + k_{t+1}, \\
w_{t} = (1-\alpha) k^{\alpha}_{t}, \\
r_{t} = \alpha k^{\alpha-1}_{t}, \\
\lim_{t \rightarrow +\infty} \beta^{t}\frac{1}{c_{t}} k_{t+1} = 0.\]

Substituting the wage and the interest rate into the budget constraint allows us to retrieve the dynamics of capital:

\[ c_{t+1} = \beta (\alpha k_{t+1}^{\alpha-1} + 1 - \delta) c_{t}, \\
k_{t}^{\alpha} + (1-\delta) k_{t} = c_{t} + k_{t+1}, \\
\lim_{t \rightarrow +\infty} \beta^{t}\frac{1}{c_{t}} k_{t+1} = 0.\]

\hypertarget{steady-state}{%
\subsubsection{Steady state}\label{steady-state}}

Before trying to solve for the optimal trajectory, we analyse the steady states of this economy. A steady state \((\bar{k}, \bar{c})\) is such that capital and consumption remain constant over time: \(c_{t+1} = c_{t}\) and \(k_{t+1} = k_{t}.\) Applying this idea to the equations above ---here we can temporarily forget about the transversality condition--- we get:

\[ \bar{c} = \beta(\alpha \bar{k}^{\alpha-1} + 1 - \delta) \bar{c}, \\
\bar{k}^{\alpha} + (1-\delta)\bar{k} = \bar{c} + \bar{k}.\]

Hence, using the first equation we can easily get the steady-state level of capital:

\[ \bar{k} = \left(\frac{\alpha \beta}{1-\beta(1-\delta)}\right)^{\frac{1}{1-\alpha}}.\]

Using this solution in the second equation yields the steady-state level of consumption:

\[ \bar{c} = \left(\frac{\alpha \beta}{1-\beta (1-\delta)}\right)^{\frac{\alpha}{1-\alpha}} - \delta \left(\frac{\alpha \beta}{1-\beta (1- \delta)}\right)^{\frac{1}{1-\alpha}}.\]

\hypertarget{stability-1}{%
\subsection{Stability}\label{stability-1}}

First, we analyse the stability of the steady state, as we did in the general case. Since we are using log-utility, the coefficient of relative risk aversion is \(\epsilon_{c} = 1.\) Instead of plugging the corresponding values into the Jacobian matrix \(\bar{A}\), for the sake of clarity, we derive everything again:

\[\bar{A} = \begin{pmatrix} \frac{\partial c_{t+1}}{\partial c_{t}} & \frac{\partial c_{t+1}}{\partial k_{t}} \\ \frac{\partial k_{t+1}}{\partial c_{t}} & \frac{\partial k_{t+1}}{\partial k_{t}} \end{pmatrix} = \begin{pmatrix} \beta \left( \alpha k_{t+1}^{\alpha -1} + 1 - \delta \right) - \frac{\beta c_{t} \alpha (\alpha-1) }{k_{t+1}^{2-\alpha}} & \frac{\beta c_{t} \alpha (\alpha -1)}{k_{t+1}^{2-\alpha}} \left( \alpha k_{t}^{\alpha-1} + 1 - \delta \right) \\ -1 & \alpha k_{t}^{\alpha -1} + 1 - \delta \end{pmatrix}\]

Evaluating it at the steady state:

\[\bar{A}\bigr\rvert_{\substack{ k_{t} = k_{t+1} = \bar{k} \\ c_{t} = c_{t+1} = \bar{c} } } = \begin{pmatrix} 1 - \beta \bar{c} \alpha (\alpha -1) \bar{k}^{\alpha -2} & \alpha (\alpha -1 ) \bar{c} \bar{k}^{\alpha -2} \\ -1 & \frac{1}{\beta} \end{pmatrix},\]

where we have used \(1 = \beta \left( \alpha \bar{k}^{\alpha -1} + 1 - \delta \right)\) at the steady state.
We can use the fact that \(\mathrm{tr}(\bar{A}) = \lambda_{1} + \lambda_{2}\) and \(\mathrm{det}(\bar{A}) = \lambda_{1} \lambda_{2}.\) Therefore:

\[\lambda_{1} \lambda_{2} = \frac{1}{\beta}, \]
\[\lambda_{1} + \lambda_{2} = \frac{1}{\beta} + 1 - \beta \bar{c} \alpha (\alpha-1) \bar{k}^{\alpha -2}.\]

The first equation implies that either a) \(\lambda_{1} > 0, \lambda_{2} > 0\) or b) \(\lambda_{1} < 0, \lambda_{2} < 0.\) However, since \(\alpha(\alpha-1) < 0\) the second equation gives \(\lambda_{1} + \lambda_{2} > 1\), so b) is impossible. Then, \(\lambda_{1} > 0, \lambda_{2} > 0.\)

Substitute \(\lambda_1 \lambda_2 = \frac{1}{\beta} \) in the second equation:

\[ \lambda_{1} + \lambda_{2} = 1 + \lambda_{1} \lambda_{2} - \beta \alpha (\alpha - 1) \bar{k}^{\alpha -2}\frac{\bar{c}}{\epsilon_{\bar{c}}}.\]

Rearranging:

\[ \lambda_{1} + \lambda_{2} - \lambda_{1} \lambda_{2} = 1 - \beta \alpha (\alpha -1) \bar{k}^{\alpha -2}\frac{\bar{c}}{\epsilon_{\bar{c}}} > 1 \implies \lambda_{1} + \lambda_{2} - \lambda_{1}\lambda_{2} - 1 >0.\]

Factorisation leads to:

\[(1 - \lambda_{2})(\lambda_{1} - 1) > 0.\]

Therefore, one eigenvalue is strictly larger than one while the other lies strictly between zero and one, and \textbf{the steady state is a saddle point.}

\hypertarget{trajectory}{%
\subsection{Trajectory}\label{trajectory}}

Describing the trajectory around the steady state analytically would be cumbersome. Introducing an alternative production function of the \(Ak\) family alleviates this problem, see the notes by \href{https://eml.berkeley.edu/~webfac/obstfeld/e202a_f13/lecture2.pdf}{Maurice Obstfeld}.

Instead, we solve this section numerically. In particular, assume that \(\alpha = 0.3,\ \beta = 0.9,\ \delta = 0.1.\) The initial level of capital is \(k_{t=0} = 1.\) With these values, we have the following:

\[\bar{k} = 1.65202, \quad \bar{c}=0.997329, \quad \bar{A} = \begin{pmatrix} 1.08029 & -0.089214 \\ -1 & 1.11111 \end{pmatrix}.\]

The eigenvalues of the matrix \(\bar{A}\) are

\[\lambda_{1} = 0.796618 \in (-1,1),\]
\[\lambda_{2} =1.39479 > 1.\]

One possible set of eigenvectors is given by:

\[v_{1} = (0.314494 , 1), \quad v_{2} = (-0.283675, 1),\]

which are associated to \(\lambda_{1}\) and \(\lambda_{2}\), respectively. Hence, the matrix

\[X = \begin{pmatrix} 0.314494 & -0.283675 \\ 1 & 1 \end{pmatrix}.\]

\hypertarget{solving-the-system-1}{%
\subsubsection{Solving the system}\label{solving-the-system-1}}

Applying the results from before, the system is:

\[\begin{pmatrix} c_{t} - \bar{c} \\ k_{t} - \bar{k} \end{pmatrix} = z_{0}^{1}\, 0.796618^{t} \begin{pmatrix} 0.314494 \\ 1 \end{pmatrix} + z_{0}^{2}\, 1.39479^{t} \begin{pmatrix} -0.283675 \\ 1 \end{pmatrix}.\]

Since \(\lambda_{2} > 1\) we must set the value \(z_{0}^{2} = 0\) to avoid an explosive behaviour. Consequently:

\[\begin{pmatrix} c_{t} - \bar{c} \\ k_{t} - \bar{k} \end{pmatrix} = z_{0}^{1}\, 0.796618^{t} \begin{pmatrix} 0.314494 \\ 1 \end{pmatrix}.\]

We can solve the equation first by setting \(t=0\) to obtain the value \(z_{0}^{1}:\)

\[\underbrace{k_{0}}_{=1} - \underbrace{\bar{k}}_{=1.65202} = z_{0}^{1} \times 1 \implies z_{0}^{1} = -0.652017.\]

Consequently, the dynamic equation for capital becomes: \(k_{t} - \bar{k} = -0.652017 \times 0.796618^{t}.\) And for consumption: \(c_{t} - \bar{c} = -0.652017 \times 0.314494 \times 0.796618^{t} = -0.205055 \times 0.796618^{t}.\)
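The numbers reported in this section are easy to verify. The following short NumPy script is only a numerical check and is not part of the derivation itself: it recomputes the steady state, the Jacobian obtained above, its eigenvalues, and the saddle-path coefficients.

\begin{verbatim}
import numpy as np

# Parameters of the worked example: log utility, f(k) = k**alpha, A = 1
alpha, beta, delta = 0.3, 0.9, 0.1
k0 = 1.0

# Steady state from 1 = beta*(alpha*kbar**(alpha-1) + 1 - delta)
kbar = (alpha*beta / (1 - beta*(1 - delta)))**(1/(1 - alpha))
cbar = kbar**alpha - delta*kbar

# Jacobian of (c_{t+1}, k_{t+1}) w.r.t. (c_t, k_t), evaluated at the steady state
a12 = alpha*(alpha - 1)*cbar*kbar**(alpha - 2)
A = np.array([[1 - beta*a12, a12],
              [-1.0,         1/beta]])

eigvals, eigvecs = np.linalg.eig(A)
stable = np.argmin(np.abs(eigvals))            # eigenvalue inside the unit circle
lam = eigvals[stable]
v = eigvecs[:, stable] / eigvecs[1, stable]    # normalise second component to 1

# Saddle path: (c_t - cbar, k_t - kbar) = z0 * lam**t * v, with z0 = k0 - kbar
z0 = k0 - kbar
c0 = cbar + z0*v[0]

print(kbar, cbar)   # ~1.65202, ~0.997329
print(lam, v[0])    # ~0.796618, ~0.314494
print(z0, c0)       # ~-0.652017, ~0.792273
\end{verbatim}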
Finally, we solve for the initial level of consumption compatible with the saddle path. For \(t=0\) we have: \(c_{0} - \underbrace{\bar{c}}_{=0.997329} = \underbrace{z_{0}^{1}}_{= -0.652017} \times 0.314494 \implies c_{0} = 0.792273.\)

\hypertarget{optimality}{%
\subsection{Optimality}\label{optimality}}

Before, we claimed that the solution is \emph{optimal} despite the fact that it does not maximise consumption at the steady state. However, what we claimed was that the \emph{entire path} was optimal. We can prove it by solving the Ramsey allocation problem from the perspective of the social planner.

\hypertarget{social-planner}{%
\subsubsection{Social planner}\label{social-planner}}

A social planner maximises total welfare, given by:

\[\max \sum_{t=0}^{\infty} \beta^{t} u(c_{t}).\]

The social planner directly allocates resources between consumption and investment, so he is not bothered by wages or interest rates. In particular, he has to allocate total production and capital net of depreciation between \(c_{t}\) and \(k_{t+1}.\) Hence, his intertemporal budget constraint reads:

\[f(k_{t}) + (1-\delta) k_{t} = c_{t} + k_{t+1}.\]

He operates taking the initial level of capital \(k_{0} > 0\) as given.

Note how the budget constraint is equivalent to the one we found in the competitive equilibrium. Moreover, total welfare coincides, too. Hence, the solutions will be the same, proving that the competitive equilibrium was optimal.

The two main equations of the model are relevant for the planner as well. First, he must follow the Euler equation. In fact, he can reduce present consumption by \(\Delta_{c}\) at time \(t\) and save it. The present-day cost is \(u^{\prime} ( c_{t} ) \Delta_{c}\). He obtains \(f^{\prime}(k_{t+1}) \Delta_{c} + (1-\delta) \Delta_{c}\) at time \(t+1,\) which raises utility by \(\left( f^{\prime}(k_{t+1}) \Delta_{c} + (1-\delta) \Delta_{c} \right) u^{\prime}(c_{t+1}).\) At the optimum, no such deviation can raise welfare; hence, we obtain the Euler equation again.

Secondly, although the planner directly allocates consumption and investment, he must follow the technological constraint displayed above. This constraint does not represent preferences, and it coincides with the household's budget constraint after clearing the markets.

Finally, observe that the \emph{first welfare theorem} directly applies: if markets are competitive and there are no externalities, then the decentralised equilibrium is Pareto-efficient. All conditions hold in the Ramsey model.

\hypertarget{the-golden-rule}{%
\subsection{The Golden rule}\label{the-golden-rule}}

We have discussed before that, in the Ramsey model, the economy converges to a steady state with \(\bar{k}\) below the golden-rule level of capital \(k^{\mathcal{GR}}\). This contrasts with the Solow model: in the Solow model, a sufficiently high saving rate causes capital over-accumulation. This opens the possibility of finding alternative paths that yield higher consumption levels in each and every period ---meaning that such a saving rate was not Pareto-efficient.

In the Ramsey model, savings are optimally derived from the utility maximisation, and the level of utility depends on consumption. Moreover, the model features \emph{no} externalities. Consequently, the model cannot produce a situation whereby it is possible to increase consumption at each and every period: this would contradict optimisation.
However, \(\bar{k} < k^{\mathcal{GR}}\) implies that it is possible to increase the saving rate today, build up the extra capital needed to reach \(k^{\mathcal{GR}}\) and then be able to consume \(c^{\mathcal{GR}} > \bar{c}\) at the steady state ---which, remember, lasts forever.

Suppose that we had reached the steady state. Why do the households not change their saving rate to reach \(c^{\mathcal{GR}}?\) The reason is that households value present consumption more than future consumption. Therefore, the current utility cost of a reduction in consumption is larger than the future gains once these are discounted. Because of the discount rate, the utility gains from an eventual permanent increase in consumption are bounded.

All in all, the economy converges to a value \(\bar{k}\) below the golden-rule level \(k^{\mathcal{GR}}\). Because \(\bar{k}\) is the optimal level of \(k\) for the economy to converge to, it is known as the \emph{modified golden-rule} capital stock.

We had already noticed that the discount factor \(\beta\) prevents the economy from reaching the golden-rule level of capital. According to our computation, the golden-rule level of capital is only attained if \(\beta = 1.\) Indeed, letting \(\beta \rightarrow 1\) in the expression for the steady state,

\[\bar{k} = \left(\frac{\alpha \beta}{1-\beta(1-\delta)}\right)^{\frac{1}{1-\alpha}} \longrightarrow \left(\frac{\alpha}{\delta}\right)^{\frac{1}{1-\alpha}} = k^{\mathcal{GR}}.\]

\(\beta = 1\) implies that present and future consumption are valued the same, which complements the discussion above.

\end{document}
\hypertarget{group__numpp__krylov}{}\section{Krylov Subspace methods}
\label{group__numpp__krylov}\index{Krylov Subspace methods@{Krylov Subspace methods}}

Group of methods used to iteratively solve linear equations of the form $Ax = b$.

\begin{DoxyCompactItemize}
\item
{\footnotesize template$<$typename T , std\+::size\+\_\+t Size$>$ }\\C\+O\+N\+S\+T\+E\+X\+PR auto \hyperlink{group__numpp__krylov_ga5e105c0002fbd19b7c8144540040550f}{numpp\+::krylov\+::conjugate\+\_\+gradient} (const \hyperlink{classnumpp_1_1matrix_1_1dense}{matrix\+::dense}$<$ T, Size, Size $>$ \&A, const \hyperlink{classnumpp_1_1vector}{vector}$<$ T, Size $>$ \&x, const \hyperlink{classnumpp_1_1vector}{vector}$<$ T, Size $>$ \&b, const double threshold=0.\+01, const std\+::size\+\_\+t iterations=Size$>$20 ? Size : 20)
\end{DoxyCompactItemize}

\subsection{Detailed Description}
Group of methods used to iteratively solve linear equations of the form $Ax = b$.

\subsection{Function Documentation}
\mbox{\Hypertarget{group__numpp__krylov_ga5e105c0002fbd19b7c8144540040550f}\label{group__numpp__krylov_ga5e105c0002fbd19b7c8144540040550f}}
\index{Krylov Subspace methods@{Krylov Subspace methods}!conjugate\+\_\+gradient@{conjugate\+\_\+gradient}}
\index{conjugate\+\_\+gradient@{conjugate\+\_\+gradient}!Krylov Subspace methods@{Krylov Subspace methods}}
\subsubsection{\texorpdfstring{conjugate\+\_\+gradient()}{conjugate\_gradient()}}
{\footnotesize\ttfamily template$<$typename T , std\+::size\+\_\+t Size$>$ \\
C\+O\+N\+S\+T\+E\+X\+PR auto numpp\+::krylov\+::conjugate\+\_\+gradient (\begin{DoxyParamCaption}\item[{const \hyperlink{classnumpp_1_1matrix_1_1dense}{matrix\+::dense}$<$ T, Size, Size $>$ \&}]{A, }\item[{const \hyperlink{classnumpp_1_1vector}{vector}$<$ T, Size $>$ \&}]{x, }\item[{const \hyperlink{classnumpp_1_1vector}{vector}$<$ T, Size $>$ \&}]{b, }\item[{const double}]{threshold = {\ttfamily 0.01}, }\item[{const std\+::size\+\_\+t}]{iterations = {\ttfamily Size$>$20 ? Size : 20}}\end{DoxyParamCaption})}

Calculates the solution to the linear equation $Ax = b$ via the conjugate gradient method.

\begin{DoxyTemplParams}{Template Parameters}
{\em T} & Argument type of the matrix (e.\+g. double) \\
\hline
{\em Size} & size of matrix row and column (have to be equal)\\
\hline
\end{DoxyTemplParams}

\begin{DoxyParams}{Parameters}
{\em A} & matrix A of the method {\bfseries }\\
\hline
\end{DoxyParams}

\begin{DoxyWarning}{Warning}
{\bfseries Matrix A H\+AS TO be symmetric; this is not enforced in any way by this method}
\end{DoxyWarning}

\begin{DoxyParams}{Parameters}
{\em x} & Vector containing an initial guess of the solution (the solution won\textquotesingle{}t be placed here!). {\bfseries }\\
\hline
\end{DoxyParams}

\begin{DoxyWarning}{Warning}
{\bfseries If you are unsure about this parameter, go with a vector filled with 1\textquotesingle{}s.}
\end{DoxyWarning}

\begin{DoxyParams}{Parameters}
{\em b} & Right-hand-side vector of the linear equations\\
\hline
{\em threshold} & Tolerance of the algorithm. If the error (measured as the Euclidean norm of the residual)~\newline is smaller than this value, the algorithm will stop\\
\hline
{\em iterations} & Maximum number of iterations performed by the algorithm\\
\hline
\end{DoxyParams}

Conjugate gradient is one of the algorithms from the Krylov subspace family. It may allow us to solve linear equations of the form $Ax = b$ in a more efficient manner~\newline than popular direct methods like LU, Cholesky or similar decompositions. For exact use cases consult the professional literature.
\begin{DoxyReturn}{Returns}
vector of the same type as x, filled with the computed solution
\end{DoxyReturn}
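For orientation, the following short NumPy sketch illustrates the iteration that a conjugate gradient routine of this kind performs. It is not the numpp implementation and does not use the numpp API; it only mirrors the documented interface (initial guess, right-hand side, residual-norm threshold, iteration cap) and assumes the matrix is symmetric positive definite.

\begin{verbatim}
import numpy as np

def conjugate_gradient(A, x, b, threshold=0.01, iterations=20):
    """Minimal CG sketch: solves A x = b for a symmetric positive definite A.

    x is the initial guess; iteration stops when the Euclidean norm of the
    residual falls below `threshold` or after `iterations` steps.
    """
    x = x.astype(float).copy()
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs_old = r @ r
    for _ in range(iterations):
        if np.sqrt(rs_old) < threshold:
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: small symmetric positive definite system
A  = np.array([[4.0, 1.0], [1.0, 3.0]])
b  = np.array([1.0, 2.0])
x0 = np.ones(2)
print(conjugate_gradient(A, x0, b))   # approx. [0.0909, 0.6364]
\end{verbatim}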
% !TeX encoding = UTF-8 % !TeX program = lualatex % !TeX spellcheck = en_US % !TeX root = thesis.tex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %BASIC OPTIONS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[ %draft, 12pt, %oneside, twoside, paper=a4, % Seitengröße% DIV=9, % Satzspiegel: z.B. calc oder 6-15 //war ursprünglich 13 BCOR=4mm, % Bindekorrektur %listof=chapterentry, % Im Abbildungsverzeichnis und Tabellenverzeichnis auch Kapitel anzeigen toc=bibliography, % Literaturverzeichnis mit ins Inhaltsverzeichnis eintragen toc=listof, footnotes=multiple, % mehrere Fußnoten separieren %parindent, % Absatzstil hier: Einrückung; parskip=half, % Absatzstil hier: keine Einrückung; numbers=noenddot, % Keine Punkte nach Nummerierung ]{scrbook} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %TITLEPAGE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \input{author.tex} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %CUSTOM COMMANDS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\up}[2]{#1\textsuperscript{#2}} % z.B. \up I{fdfgdfs} \newcommand{\down}[2]{#1\textsubscript{#2}} \newcommand{\OF}{\mbox{OpenFOAM}} \newcommand{\CR}{\gls{CR}} \newcommand{\WT}{\gls{WT}} \newcommand{\BC}{\gls{BC}} \newcommand{\ABL}{\gls{ABL}} \newcommand{\NVP}{\gls{NVP}} \newcommand{\USED}{{\color{red}{USED}}} \newcommand{\BRAC}{BRAC University} \newcommand{\CFD}{\gls{CFD}} \newcommand{\ACR}{\gls{ACR}} \newcommand{\kEps}{$k-\varepsilon$} \newcommand{\kOmega}{$k-\omega$} \newcommand{\R}{\gls{R}} \newcommand{\COtwo}{CO$_2$} \newcommand{\yplus}{\gls{symb:yplus}} \newcommand{\yPlus}{\gls{symb:yplus}} \newcommand{\SD}{\gls{SD}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %FONTS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{textcomp} \usepackage[]{euler} \usepackage{fontspec} \usepackage{luatextra} \usepackage{csquotes} %FONT Combination 1 \setmainfont{Erewhon}[ Extension=.otf, UprightFont=*-Regular, ItalicFont=*-Italic, BoldFont=*-Bold, BoldItalicFont=*-BoldItalic, SlantedFont=*-RegularSlanted, BoldSlantedFont=*-BoldSlanted, ] \setsansfont{texgyreheros}[ Scale=MatchLowercase,% or MatchUppercase Extension=.otf, UprightFont=*-regular, ItalicFont=*-italic, BoldFont=*-bold, BoldItalicFont=*-bolditalic, ] %FONT Combination 2 \setmainfont{XCharter-Roman} \setsansfont{texgyreheros}[ Scale=MatchLowercase,% or MatchUppercase Extension=.otf, UprightFont=*-regular, ItalicFont=*-italic, BoldFont=*-bold, BoldItalicFont=*-bolditalic, ] %FONT Combination 3 (Adobe Fonts) %\setmainfont{MinionPro-Regular.otf}[ %ExternalLocation=./fonts/, %Ligatures=TeX, %Numbers=OldStyle, %ItalicFont = MinionPro-It.otf, %SlantedFont = MinionPro-It.otf, %BoldFont = MinionPro-Bold.otf, %BoldItalicFont = MinionPro-BoldIt.otf ] %\setsansfont{MyriadPro-Regular.otf}[ %ExternalLocation=./fonts/, %ItalicFont = MyriadPro-It.otf, %SlantedFont = MyriadPro-It.otf, %BoldFont = MyriadPro-Bold.otf, %BoldItalicFont = MyriadPro-BoldIt.otf ] %\setmonofont{CourierStd.otf}[ %ExternalLocation=./fonts/,] %LEAVE THIS UNTOUCHED FOR NOW %%%\setmathfont{XITS Math} %%%\setmathfont[range=\mathup/{num,latin,Latin,greek,Greek}]{Minion Pro} %%%\setmathfont[range=\mathbfup/{num,latin,Latin,greek,Greek}]{MinionPro-Bold} %%%\setmathfont[range=\mathit/{num,latin,Latin,greek,Greek}]{MinionPro-It} %%%\setmathfont[range=\mathbfit/{num,latin,Latin,greek,Greek}]{MinionPro-BoldIt} %%%\setmathfont[range=\mathscr,StylisticSet={1}]{XITS Math} %%%\setmathfont[range={"005B,"005D,"0028,"0029,"007B,"007D,"2211,"002F,"2215 } ]{Latin Modern Math} % brackets, sum, / %%%\setmathfont[range={"002B,"002D,"003A-"003E} ]{MnSymbol} % + - < = > 
%%%\setmathrm{Minion Pro} %%% %%%\usepackage{luaotfload,lualatex-math} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %FONT MISC SETTINGS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \RedeclareSectionCommands[ %Linebreak after paragraphs afterskip=1sp ]{paragraph,subparagraph} \usepackage{etex} %\reserveinserts{20} \usepackage[english]{babel} % Schriftsatzerweiterung \usepackage{alphabeta} % Use greek letters as text and in PDF bookmarks \usepackage{bm} % Enable bold math symbols \usepackage{microtype}% %\usepackage{geometry} \makeatletter \g@addto@macro\bfseries{\boldmath} % bold math in section heading \makeatother \usepackage{wrapfig} % Von Text umflossene Grafiken \usepackage{subcaption} \usepackage{ellipsis} % Weißraum bei Ellipsen optimal \usepackage[NewCommands]{ragged2e} % Flattersatz mit (!) Silbentrennung \clubpenalty = 10000 % Schusterjungen und Hurenkinder vermeiden \widowpenalty = 10000 % Schusterjungen und Hurenkinder vermeiden \displaywidowpenalty = 10000 % Schusterjungen und Hurenkinder vermeiden \usepackage{marvosym} % Euro-Symbol verfügbar machen \usepackage[official,right]{eurosym} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %GRAPHICS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{graphicx} % Grafiken einbinden \makeatletter % Der folgende neue Befehl bindet Bilder in der Originalgrösse ein, falls sie weniger breit als die Seite sind. Sonst wird das Bild auf Seitenbreite skaliert. \def\ScaleIfNeeded{% \ifdim\Gin@nat@width>\linewidth \linewidth \else \Gin@nat@width \fi } \makeatother %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %COLORS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage[dvipsnames]{xcolor} \definecolor{airforceblue}{rgb}{0.36, 0.54, 0.66} \definecolor{royalblue}{rgb}{0.0, 0.14, 0.4} \definecolor{tumblue}{RGB}{15 , 27 , 95} \definecolor{tumblue}{RGB}{15 , 27 , 95} \definecolor{lightblue}{RGB}{220 , 230 , 241} \definecolor{tum_blue}{RGB}{0, 101, 189} \definecolor{tum_white}{RGB}{255 , 255 , 255} \definecolor{tum_black}{RGB}{0,0,0} %Secondary \definecolor{tum_blue2}{RGB}{0 , 82 , 147} \definecolor{tum_blue3}{RGB}{0 , 51 , 89} \definecolor{tum_grey1}{RGB}{088 , 088 , 090} \definecolor{tum_grey2}{RGB}{156 , 157 , 159} \definecolor{tum_grey3}{RGB}{217 , 218 , 219} %Highlights \definecolor{tum_beige}{RGB}{218 , 215 , 203} \definecolor{tum_orange}{RGB}{227 , 114 , 34} \definecolor{tum_green}{RGB}{162 , 173 , 0} \definecolor{tum_lightblue}{RGB}{152 , 198 , 234} \definecolor{tum_turquoise}{RGB}{100 , 160 , 200} \definecolor{darkred}{rgb}{0.7, 0.11, 0.11} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %URLs %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\KOMAoptions{DIV=last} % Anpassung des Satzspiegels an Schriften, (nur bei DIV=calc wirksam) \PassOptionsToPackage{hyphens}{url} %\usepackage[hyphenbreaks]{breakurl} \usepackage[hyphens]{url} %\urlstyle{same} \makeatletter \g@addto@macro{\UrlBreaks}{\UrlOrds} \makeatother %URL smaller \usepackage{relsize} \renewcommand*{\UrlFont}{\ttfamily\smaller\relax} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %HYPERREF %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\usepackage[hang, flushmargin]{footmisc} \usepackage[breaklinks, hidelinks,hyperfootnotes=false, unicode]{hyperref} %Clickable hyperlinks everywhere \def\UrlBigBreaks{\do\/\do-\do:} \hypersetup{ draft=false, % bookmarks=true, bookmarksopen=true, bookmarksopenlevel=1, %hier darf nur ne Zahl stehen, ansonsten produziert TeX nen Fehler in Zusammenhang mit bookmarksopen bookmarksdepth=4, %deprecated bookmarksnumbered=true, linktocpage=true, %break links correctly in listoftables/figures unicode=true, % non-Latin characters in 
Acrobat’s bookmarks pdftoolbar=true, % show Acrobat’s toolbar? pdfmenubar=true, % show Acrobat’s menu? pdffitwindow=true, % window fit to page when opened pdfstartview={XYZ null null 1.00}, % XYZ left top zoom Sets a coordinate and a zoom factor. If any one is null, the source link value is used. null null null will give the same values as the current page. Fit Fits the page to the window. FitH top % Fits the width of the page to the window. FitV left Fits the height of the page to the window. FitR left bottom right top Fits the rectangle specified by the four coordinates to the window. FitB Fits the page bounding box to the window. % FitBH top Fits the width of the page bounding box to the window. % FitBV left Fits the height of the page bounding box to the window. pdftitle = {\titel}, % title pdfauthor = {\autor}, % author pdfsubject = {\art}, % subject of the document pdfcreator={\autor}, % creator of the document pdfproducer={\autor}, % producer of the document pdfkeywords= {} {} {} {}, % list of keywords pdfnewwindow=true, % links in new window pdfpagelayout={TwoColumnRight}, colorlinks=true, % false: boxed links; true: colored links linkcolor=black, % color of internal links citecolor=tumblue, % color of links to bibliography filecolor=black, % color of file links urlcolor=tumblue, % color of external links anchorcolor =black, linkbordercolor={blue!35!black}, % color of internal links citebordercolor={blue!35!black}, % color of links to bibliography filebordercolor={blue!35!black}, % color of file links urlbordercolor={blue!35!black}, % color of external links menucolor =red, runcolor =cyan, pdfencoding=auto, } \usepackage{bookmark} \usepackage{xspace} % Fixes usage of spaces %\usepackage{layout} % Layout überprüfen \usepackage{setspace} % Linespacing: singelspacing, onehalfspacing, doublespacing \usepackage[english, plain]{fancyref} %Cross-referencing \usepackage{upgreek} \usepackage{lscape} % Landscpae Seiten \usepackage{pdflscape} % Landscape Seiten drehen \usepackage{gensymb} % Celsius anzeigen \usepackage{pdfpages} \usepackage[footnote]{acronym} % Abkürzungsverzeichnis \usepackage{mathtools} % braces in math environment \usepackage{epigraph} %\usepackage{pgfplots} %\usepackage{pgfkeys} \usepackage{calc} \usepackage{tikz} \usepackage{setspace} %\usepackage[version=3]{mhchem} %chemische Formeln \usepackage[%format=hang ,labelfont=bf, %tableposition=top %no effefct with KOMA class ]{caption} %Bildunterschriften kleiner \renewcommand\captionfont{\footnotesize\sffamily} % Bildtitel umformatieren \usepackage[rightcaption]{sidecap} %caption zentr./seitl. \makeatletter \newenvironment{sidefigure}{\SC@float[c]{figure}}{\endSC@float} %caption zentr./seitl. \makeatother \makeatletter \newenvironment{sidetable}{\SC@float[c]{table}}{\endSC@float} %caption zentr./seitl. 
\makeatother %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %UNITS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage[ separate-uncertainty = true, uncertainty-separator = {\,}, mode = text, output-decimal-marker ={.}, multi-part-units = single, range-units = single, %range-phrase = {--}, ]{siunitx} %SI Einheit \sisetup{ %list-final-separator = { \translate{und} }, %range-phrase = { \translate{bis} }, %list-pair-separator = { \translate{und} }, %exponent-product = \cdot %detect-all, %apply document fonts for siunitx %math-rm=\mathsf, %text-rm=\sffamily, % locale = US, %input-decimal-markers = {.}, %output-decimal-marker = {.}, %input-ignore = {,}, %group-digits = true, %group-separator = {,}, %group-separator = {}, %tight-spacing = true, %input-signs = , %input-symbols = , %input-open-uncertainty = , %input-close-uncertainty = , table-align-text-pre = false, } %round-mode = figures, %places %round-precision = 3, %round-integer-to-decimal= false, %zero-decimal-to-integer= false, %add-decimal-zero = false, %add-integer-zero = false, %table-space-text-pre = (, %table-space-text-post = ), \usepackage{textpos} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %TABLES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{booktabs} % Lines betwenn tables \usepackage{longtable} % Tables that are longer than one page \usepackage{tablefootnote} % Footnote in tables \usepackage{tabularx, longtable, array} % Tabellen \usepackage{ltablex} %advanced longtables across multiple pages \usepackage{ltxtable} \usepackage{floatrow} \floatsetup[table]{capposition=top} \usepackage{etoolbox} %Change font of tables \usepackage{multirow} % merge rown in tables \usepackage{color, colortbl} %You will need the following two packages, the first to define new colors and the latter to actually color the table \usepackage[para,online,flushleft]{threeparttable} % Footnotes in tables % San serif table font %\AtBeginEnvironment{tabular}{\rmfamily} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %TABLE WITH SMALL FONT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newenvironment{tabularsmall}{% \fontsize{8}{12}\selectfont\tabular }{% \endtabular } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %CAPTIONS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{caption} \captionsetup[subfloat]{font=sf,size=footnotesize} \usepackage{sidecap} %captions on the side of figures %Captions within pdfpages \makeatletter \newcommand*{\AM@pagecommandstar}{} \define@key{pdfpages}{pagecommand*}{\def\AM@pagecommandstar{#1}} \patchcmd{\AM@output}{\begingroup\AM@pagecommand\endgroup} {\ifthenelse{\boolean{AM@firstpage}}{\begingroup\AM@pagecommandstar\endgroup}{\begingroup\AM@pagecommand\endgroup}}{}{} % Patch to use new option \patchcmd{\AM@split@optionsii}{\equal{pagecommand}{\AM@temp}\or} {\equal{pagecommand}{\AM@temp}\or\equal{pagecommand*}{\AM@temp}\or}{}{} \makeatother %Captions within pdfpages % Kommando zum Ausbügeln des Bugs "\subfloat ohne \caption" \makeatletter \providecommand\phantomcaption{\caption@refstepcounter\@captype} \makeatother \usepackage[]{hypcap} %Links to image directly, not to caption %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %MISC PACKAGES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{blindtext} \usepackage{enumitem} %Itemize indention \setlist[itemize,1]{leftmargin=\dimexpr 26pt-.22in} %\setitemize{noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt} % Change size of itemize environments \usepackage{listings} \usepackage{morewrites} % Multiple Columns \usepackage{multicol} \usepackage{emptypage} %removes headers and footers on empty pages. 
%% Custom commands \usepackage{soul} %% New itemize environment for equations \newenvironment{equationitem} { \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{1pt} \setlength{\parsep}{1pt}} { \end{itemize} } %% Appendix in toc \usepackage[ % toc ]{appendix} \usepackage[ % obeyDraft ]{todonotes} %% Counters for figures and tables \usepackage[figure,table,equation]{totalcount} \usepackage{placeins} %clear floats without new page %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %BIBLIOGRAPHY %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Numbered Bibliography %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage[compress,sort, numbers ]{natbib} %\bibliographystyle{agsm} \bibliographystyle{dinat} %\bibliographystyle{newapa} \renewcommand*{\bibfont}{\sffamily} % Change Bibliography Header \makeatletter \renewcommand\bibsection{% % \chapter*{{\sffamily\huge\bibname}\@mkboth{\sffamily\MakeUppercase{\bibname}}{\sffamily\bibname}}% \chapter*{{\sffamily\huge\bibname}\@mkboth{\sffamily\bibname}{\sffamily\bibname}}% }% \makeatother %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %APA Style %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\renewcommand{\refname}{\centering REFERENCES} % %\usepackage[%nodoi, %nosectionbib, %%numberedbib %]{apacite} %\bibliographystyle{apacite} %\AtBeginDocument{\urlstyle{APACsame}} % %\renewcommand\bibliographytypesize{\small} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Backlinks to citations %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % #1: number of distinct back references % #2: backref list with distinct entries % #3: number of back references including duplicates % #4: backref list including duplicates \RequirePackage[hyperpageref]{backref} \renewcommand{\backreflastsep}{ and~} \renewcommand{\backreftwosep}{ and~} \renewcommand{\backref}[1]{}% for backref < 1.33 necessary \renewcommand{\backrefalt}[4]{ \ifnum#1=0 %No cited. 
\else \ifnum#1=1 \footnotesize \mbox{(cited on page #2)} \else %\footnotesize \mbox{{\color{darkred}(cited on pages #2)}} \footnotesize \mbox{(cited on pages #2)} \fi \fi } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %GLOSSARIES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage[ nomain, nonumberlist, acronym, %section ] {glossaries-extra} \glsaccessname{unit} \setlength{\glsdescwidth}{15cm} \glssetcategoryattribute{acronym}{glossdescfont}{textsf} \glssetcategoryattribute{acronym}{glossnamefont}{textsf} \setabbreviationstyle[acronym]{long-short} \glssetnoexpandfield{unit} \newglossarystyle{symbunitlong}{% \vspace*{.3cm} \setglossarystyle{long3col}% base this style on the list style \renewenvironment{theglossary}{% Change the table type --> 3 columns \begin{longtable}{lp{0.6\glsdescwidth}>{\arraybackslash}p{2cm}}}% {\end{longtable}}% % \renewcommand*{\glossaryheader}{% Change the table header \sffamily\bfseries Sign & \sffamily\bfseries Description & \sffamily\bfseries Unit \\ %\hline \endhead} \renewcommand*{\glossentry}[2]{% Change the displayed items \glstarget{##1}{\sffamily\glossentryname{##1}} % & \sffamily\glossentrydesc{##1}% Description & \sffamily\glsunit{##1} \tabularnewline } } \usepackage{smartdiagram} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %CUSTOMIZE TOC %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Differrent font in TOC \usepackage[titles]{tocloft} % Change font of Chapters, LOF and LOT \setlength{\cftbeforechapskip}{2ex} \setlength{\cftbeforesecskip}{0.8ex} \setlength{\cftbeforesubsecskip}{0.8ex} \renewcommand{\cftchapfont}{% \sffamily\bfseries } \renewcommand{\cftchappagefont}{\sffamily} \renewcommand{\cftsecfont}{\sffamily} \renewcommand{\cftsecpagefont}{\sffamily} \renewcommand{\cftsubsecfont}{\sffamily} \renewcommand{\cftsubsecpagefont}{\sffamily} \renewcommand\cftloftitlefont{\sffamily} \renewcommand\cftfigfont{\sffamily} \renewcommand\cftfigpagefont{\sffamily} \renewcommand\cftlottitlefont{\sffamily} \renewcommand\cfttabfont{\sffamily} \renewcommand\cfttabpagefont{\sffamily} % TOC depth %\setcounter{secnumdepth}{4} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %CUSTOM PAGE LAYOUT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Code for Headers and Footers adapted from http://www.kfiles.de \newlength{\marginWidth} \setlength\marginWidth{\marginparwidth+\marginparsep} \newlength{\fulllinewidth} \setlength\fulllinewidth{\textwidth+\marginWidth} \usepackage{truncate} %Um zu lange Kapiteltitel abzuschneiden \footskip=1.6cm \makeatletter % = mache @ letter %Vordefinition mehrfachverwendeter Teile \def\oddfootSTANDARD{ \renewcommand{\@oddfoot}{ \hbox to\textwidth{\vbox{\hbox to\textwidth{ \hfill \strut \hspace{1pt} }}} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \strut %unsichtbares Zeichen % \large %Größe der Seitenzahl \hspace{5pt} \vrule width 1pt height 1cm \hspace{8pt} \textsf{\thepage} \hfill }}}\hss } } \def\evenfootSTANDARD{ \renewcommand{\@evenfoot}{ \hspace{-\marginWidth} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ % \large \strut %unsichtbares Zeichen \hfill \textsf{\thepage} \hspace{5pt} \vrule width 1pt height 1cm \hspace{7pt} }}}\hss } } %Standardstil für die gesamte Dissertation \newcommand{\ps@thesis}{ \renewcommand{\@oddhead}{ \hbox to\textwidth{\vbox{\hbox to\textwidth{ \textsf \hfill \rightmark \strut \hspace{1pt} }}} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \strut %unsichtbares Zeichen \hspace{5pt} \vrule width 1pt \hspace{5pt} \textsf{\thesection} %%Zuständig für Nummerierung rechts oben; thesection produziert X.0, X.1 \hfill }}}\hss } \renewcommand{\@evenhead}{ 
\hspace{-\marginWidth} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \hfill \strut %unsichtbares Zeichen \textbf{\textsf{Chapter~\thechapter}} \hspace{5pt} \vrule width 1pt \hspace{7pt} \strut }}}\hss \hbox to\textwidth{\vbox{\hbox to\textwidth{ \strut %unsichtbares Zeichen \truncate{.9\textwidth}{\leftmark} \hfill }}}\hss } \oddfootSTANDARD \evenfootSTANDARD } %Der PLAIN-Style der Chapter- und Sonderseiten muss redefiniert werden. \renewcommand{\ps@plain}{ \let\@oddhead\@empty \let\@evenhead\@empty \let\@evenfoot\@empty \oddfootSTANDARD } %Spezieller Stil für Inhaltsverzeichnis und Literaturverzeichnis (ohne Nummern wie 0.0 oder B.0) \newcommand{\ps@thesisINTRO}{ \renewcommand{\@oddhead}{ \hbox to\textwidth{\vbox{\hbox to\textwidth{ \textsf \hfill \textsf{\rightmark} \strut \hspace{1pt} }}} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \strut %unsichtbares Zeichen \hspace{5pt} \vrule width 1pt \hspace{5pt} \textsf{\thechapter} %%Zuständig für Nummerierung rechts oben; thesection produziert X.0, X.1 \hfill }}}\hss } \renewcommand{\@evenhead}{ \hspace{-\marginWidth} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \hfill \strut %unsichtbares Zeichen \textbf{\textsf{Chapter}} \hspace{5pt} \vrule width 1pt \hspace{7pt} \strut }}}\hss \hbox to\textwidth{\vbox{\hbox to\textwidth{ \strut %unsichtbares Zeichen \truncate{.9\textwidth}{\leftmark} \hfill }}}\hss } \oddfootSTANDARD \evenfootSTANDARD } \newcommand{\ps@thesisAPPENDIX}{ \renewcommand{\@oddhead}{ \hbox to\textwidth{\vbox{\hbox to\textwidth{ \textsf \hfill \rightmark \strut \hspace{1pt} }}} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \strut %unsichtbares Zeichen \hspace{5pt} \vrule width 1pt \hspace{5pt} \textsf \thechapter %\\thesection %%Zuständig für Nummerierung rechts oben; thesection produziert X.0, X.1 \hfill }}}\hss } \renewcommand{\@evenhead}{ \hspace{-\marginWidth} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \hfill \strut %unsichtbares Zeichen \textbf{\textsf{Appendix~\thechapter}} \hspace{5pt} \vrule width 1pt \hspace{7pt} \strut }}}\hss \hbox to\textwidth{\vbox{\hbox to\textwidth{ \strut %unsichtbares Zeichen \truncate{.9\textwidth}{\leftmark} \hfill }}}\hss } \oddfootSTANDARD \evenfootSTANDARD } \newcommand{\ps@thesisLISTS}{ \renewcommand{\@oddhead}{ \hbox to\textwidth{\vbox{\hbox to\textwidth{ \textsf \hfill \textsf{\rightmark} \strut \hspace{1pt} }}} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \strut %unsichtbares Zeichen \hspace{5pt} \vrule width 1pt \hspace{5pt} \textsf \thechapter %%Zuständig für Nummerierung rechts oben; thesection produziert X.0, X.1 \hfill }}}\hss } \renewcommand{\@evenhead}{ \hspace{-\marginWidth} \hbox to\marginWidth{\vbox{\hbox to\marginWidth{ \hfill \strut %unsichtbares Zeichen \textbf{\textsf{Chapter}} \hspace{5pt} \vrule width 1pt \hspace{7pt} \strut }}}\hss \hbox to\textwidth{\vbox{\hbox to\textwidth{ \strut %unsichtbares Zeichen % \truncate{.9\textwidth}{\textsf{\MakeUppercase{\leftmark}}}% Zuständig für Variable hinter | \truncate{.9\textwidth}{\textsf{\leftmark}}% Zuständig für Variable hinter | \hfill }}}\hss } \oddfootSTANDARD \evenfootSTANDARD } \newcommand{\ps@reallyempty}{ \let\@oddhead\@empty \let\@evenhead\@empty \let\@oddfoot\@empty \let\@evenfoot\@empty } \renewcommand{\chaptermark}[1]{\markboth{\textsf{#1}}{}}%markboth hat zwei argumente für die linke und rechte seite \renewcommand{\sectionmark}[1]{\markright{\textsf{#1}}} \makeatother % = mache @ wieder zu nicht-Buchstaben \pagestyle{thesis} %%Problem mit den Seitenzahlen und Headern auf leeren Seiten nach 
Kapiteln: \let\origdoublepage\cleardoublepage \newcommand{\clearemptydoublepage}{% \clearpage {\pagestyle{empty}\origdoublepage}% } \let\cleardoublepage\clearemptydoublepage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %INDEX %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{makeidx} \makeindex \usepackage[totoc,columns=2,minspace=100pt]{idxlayout} %modify layout of index %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %HELPER FOR FORMATTING GRAPHICS %https://www.queryxchange.com/q/24_86356/how-to-trim-clip-crop-graphics-without-trial-and-error/ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\newcommand{\showgrid}[2]{% % \newcommand{\gridlen}{2} % This determines the size of the squares that later appear: 2 = 0.4 cm % 5 = 1 cm % 20 = 4 cm % \resizebox{#1}{!}{% % \begin{tikzpicture}[inner sep=0] % % Bild laden % \node[anchor=south west] (image) at (0, 0) {#2}; % % Koordinaten fast oben rechts % \path (image.north east) -- ++(-\gridlen, -\gridlen) coordinate (obenrechts); % % \begin{scope}[red] % % Gitter unten links % \draw[xstep=.2, ystep=.2, very thin] (0, 0) grid (\gridlen, \gridlen); % \draw[xstep=1, ystep=1, semithick] (0, 0) grid (\gridlen, \gridlen); % % Gitter oben rechts % \draw[xstep=.2, ystep=.2, shift={(obenrechts)}, very thin] (0, 0) grid (\gridlen, \gridlen); % \draw[xstep=1, ystep=1, shift={(obenrechts)}, semithick] (0, 0) grid (\gridlen, \gridlen); % % % Rahmen % \draw (0, 0) rectangle (image.north east); % \end{scope} % \end{tikzpicture}% % } %} %Example: %\showgrid{0.8\linewidth}{\includegraphics[clip, trim=31mm 58mm 102mm 31mm]{Test.pdf}} \usepackage{tikz} % Linen über Graphiken \newcommand{\showgrid}[3][5]{% \providecommand{\griddepth}{#1} \resizebox{#2}{!}{% \begin{tikzpicture}[inner sep=0] % Bild laden \node[anchor=south west] (image) at (0, 0) {#3}; % Linien einfügen \begin{scope}[red] % Äußere Schleife für dicke Rechtecke \foreach \iThick in {0, ..., \griddepth} {% \path (image.north east) ++(-\iThick, -\iThick) coordinate(topright); \draw[semithick] (\iThick, \iThick) rectangle (topright); % Zwischen den Linien auffüllen \ifnum\iThick<\griddepth % dünne Rechtecke \foreach \iThin in {1, ..., 4} {% \path (image.north east) ++(-\iThick, -\iThick) ++(-\iThin/5, -\iThin/5) coordinate(topright); \draw[very thin] (\iThick, \iThick) ++(\iThin/5, \iThin/5) rectangle (topright); } \fi } \end{scope} \end{tikzpicture} } } %Example: %\showgrid[1]{0.8\linewidth{\includegraphics[clip, trim=20mm 34mm 8mm 16mm]{Test.pdf}} % %The thick lines have a distance of 10mm, the thin ones of 2mm. This is independent of the image if no width or height argument is passed to the image. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %LATEX OVERLAY GENERATOR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %LATEX OVERLAY GENERATOR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % LaTeX Overlay Generator - Annotated Figures v0.0.1 % Created with http://ff.cx/latex-overlay-generator/ % If this generator saves you time, consider donating 5,- EUR! 
:-) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\annotatedFigureBoxCustom{bottom-left}{top-right}{label}{label-position}{box-color}{label-color}{border-color}{text-color} \newcommand*\annotatedFigureBoxCustom[8]{\draw[#5,thick,rounded corners] (#1) rectangle (#2);\node at (#4) [fill=#6,thick,shape=circle,draw=#7,inner sep=2pt,font=\sffamily,text=#8] {\textbf{#3}};} %\annotatedFigureBox{bottom-left}{top-right}{label}{label-position} \newcommand*\annotatedFigureBox[4]{\annotatedFigureBoxCustom{#1}{#2}{#3}{#4}{white}{white}{black}{black}} \newcommand*\annotatedFigureText[4]{\node[draw=none, anchor=south west, text=#2, inner sep=0, text width=#3\linewidth,font=\sffamily] at (#1){#4};} \newcommand*\annotatedFigureBoxCustomBlack[8]{\draw[#5,thick,rounded corners] (#1) rectangle (#2);\node at (#4) [fill=#6,thick,shape=circle,draw=#7,inner sep=2pt,font=\sffamily,text=#8] {\textbf{#3}};} %\annotatedFigureBox{bottom-left}{top-right}{label}{label-position} \newcommand*\annotatedFigureBoxBlack[4]{\annotatedFigureBoxCustomBlack{#1}{#2}{#3}{#4}{black}{white}{black}{black}} \newcommand*\annotatedFigureTextBlack[4]{\node[draw=none, anchor=south west, text=#2, inner sep=0, text width=#3\linewidth,font=\sffamily] at (#1){#4};} \newcommand*\annotatedFigureBoxCustomGray[8]{\draw[#5,thick,rounded corners] (#1) rectangle (#2);\node at (#4) [fill=#6,thick,shape=circle,draw=#7,inner sep=2pt,font=\sffamily,text=#8] {\textbf{#3}};} %\annotatedFigureBox{bottom-left}{top-right}{label}{label-position} \newcommand*\annotatedFigureBoxGray[4]{\annotatedFigureBoxCustomBlack{#1}{#2}{#3}{#4}{gray}{white}{gray}{black}} \newcommand*\annotatedFigureTextGray[4]{\node[draw=none, anchor=south west, text=#2, inner sep=0, text width=#3\linewidth,font=\sffamily] at (#1){#4};} \newenvironment {annotatedFigure}[1]{\centering\begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) { #1};\begin{scope}[x={(image.south east)},y={(image.north west)}]}{\end{scope}\end{tikzpicture}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %CLEAR DOUBLE PAGE PROPERLY %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Input something on the left page, use \cleardoublepage to force on right \makeatletter \newcommand*{\cleartoleftpage}{% \clearpage \if@twoside \ifodd\c@page \hbox{}\newpage \if@twocolumn \hbox{}\newpage \fi \fi \fi } \makeatother \usepackage[percent]{overpic} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Fix warnings %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{silence}\WarningFilter{fixltx2e}{}
\ignore{ \documentstyle[11pt]{report} \textwidth 13.7cm \textheight 21.5cm \newcommand{\myimp}{\verb+ :- +} \newcommand{\ignore}[1]{} \def\definitionname{Definition} \makeindex \begin{document} } \chapter{\label{chapter:tabling}Tabling} The Picat system is a term-rewriting system. For a predicate call, Picat selects a matching rule and rewrites the call into the body of the rule. For a function call $C$, Picat rewrites the equation $C = X$ where $X$ is a variable that holds the return value of $C$. Due to the existence of recursion in programs, the term-rewriting process may never terminate. Consider, for example, the following program: \begin{verbatim} reach(X,Y) ?=> edge(X,Y). reach(X,Y) => reach(X,Z),edge(Z,Y). \end{verbatim} where the predicate \texttt{edge} defines a relation, and the predicate \texttt{reach} defines the transitive closure of the relation. For a query such as \texttt{reach(a,X)}, the program never terminates due to the existence of left-recursion in the second rule. Even if the rule is converted to right-recursion, the query may still not terminate if the graph that is represented by the relation contains cycles. Another issue with recursion is redundancy. Consider the following problem: \emph{Starting in the top left corner of a $N\times N$ grid, one can either go rightward or downward. How many routes are there through the grid to the bottom right corner?} The following gives a program in Picat for the problem: \begin{verbatim} route(N,N,_Col) = 1. route(N,_Row,N) = 1. route(N,Row,Col) = route(N,Row+1,Col)+route(N,Row,Col+1). \end{verbatim} The function call \texttt{route(20,1,1)} returns the number of routes through a 20$\times$20 grid. The function call \texttt{route(N,1,1)} takes exponential time in \texttt{N}, because the same function calls are repeatedly spawned during the execution, and are repeatedly resolved each time that they are spawned. \section{Table Declarations} Tabling\index{tabling} is a memoization technique that can prevent infinite loops and redundancy. The idea of tabling\index{tabling} is to memorize the answers to subgoals and use the answers to resolve their variant descendants. In Picat, in order to have all of the calls and answers of a predicate or function tabled\index{tabling}, users just need to add the keyword \texttt{table}\index{\texttt{table}} before the first rule. \subsection*{Example} \begin{verbatim} table reach(X,Y) ?=> edge(X,Y). reach(X,Y) => reach(X,Z),edge(Z,Y). table route(N,N,_Col) = 1. route(N,_Row,N) = 1. route(N,Row,Col) = route(N,Row+1,Col)+route(N,Row,Col+1). \end{verbatim} With tabling\index{tabling}, all queries to the \texttt{reach} predicate are guaranteed to terminate, and the function call \texttt{route(N,1,1)} takes only \texttt{N}$^2$ time. For some problems, such as planning problems, it is infeasible to table\index{tabling} all answers, because there may be an infinite number of answers. For some other problems, such as those that require the computation of aggregates, it is a waste to table\index{tabling} non-contributing answers. Picat allows users to provide table modes\index{mode-directed tabling} to instruct the system about which answers to table\index{tabling}. 
For a tabled\index{tabling} predicate, users can give a \emph{table mode declaration}\index{mode-directed tabling} in the form ($M_{1},M_{2},\ldots,M_{n}$), where each $M_{i}$ is one of the following: a plus-sign (+) indicates input, a minus-sign (-) indicates output, \texttt{max} indicates that the corresponding variable should be maximized, and \texttt{min} indicates that the corresponding variable should be minimized. The last mode $M_{n}$ can be \texttt{nt}, which indicates that the argument is not tabled. Two types of data can be passed to a tabled predicate as an \texttt{nt} argument: (1) global data that are the same to all the calls of the predicate, and (2) data that are functionally dependent on the input arguments. Input arguments are assumed to be ground. Output arguments, including \texttt{min} and \texttt{max} arguments, are assumed to be variables. An argument with the mode \texttt{min} or \texttt{max} is called an \emph{objective} argument. Only one argument can be an objective to be optimized. As an objective argument can be a compound value, this limit is not essential, and users can still specify multiple objective variables to be optimized. When a table mode\index{mode-directed tabling} declaration is provided, Picat tables\index{tabling} only one optimal answer for the same input arguments. \subsection*{Example} \begin{verbatim} table(+,+,-,min) sp(X,Y,Path,W) ?=> Path = [(X,Y)], edge(X,Y,W). sp(X,Y,Path,W) => Path = [(X,Z)|Path1], edge(X,Z,Wxz), sp(Z,Y,Path1,W1), W = Wxz+W1. \end{verbatim} The predicate \texttt{edge(X,Y,W)} specifies a weighted directed graph, where \texttt{W} is the weight of the edge between node \texttt{X} and node \texttt{Y}. The predicate \texttt{sp(X,Y,Path,W)} states that \texttt{Path} is a path from \texttt{X} to \texttt{Y} with the minimum weight \texttt{W}. Note that whenever the predicate \texttt{ sp/4} is called, the first two arguments must always be instantiated. For each pair, the system stores only one path with the minimum weight. The following program finds a shortest path among those with the minimum weight for each pair of nodes: \begin{verbatim} table (+,+,-,min). sp(X,Y,Path,WL) ?=> Path = [(X,Y)], WL = (Wxy,1), edge(X,Y,Wxy). sp(X,Y,Path,WL) => Path = [(X,Z)|Path1], edge(X,Z,Wxz), sp(Z,Y,Path1,WL1), WL1 = (Wzy,Len1), WL = (Wxz+Wzy,Len1+1). \end{verbatim} For each pair of nodes, the pair of variables \texttt{(W,Len)} is minimized, where \texttt{W} is the weight, and \texttt{Len} is the length of a path. The built-in function \texttt{compare\_terms($T_1$,$T_2$)} is used to compare answers. Note that the order is important. If the term would be \texttt{(Len,W)}, then the program would find a shortest path, breaking a tie by selecting one with the minimum weight. The tabling system is useful for offering dynamic programming solutions for planning problems. The following shows a tabled program for general planning problems: \begin{verbatim} table (+,-,min) plan(S,Plan,Len),final(S) => Plan=[],Len=0. plan(S,Plan,Len) => action(Action,S,S1), plan(S1,Plan1,Len1), Plan = [Action|Plan1], Len = Len1+1. \end{verbatim} The predicate \texttt{action(Action,S,S1)} selects an action and performs the action on state \texttt{S} to generate another state, \texttt{S1}. \subsection*{Example} The program shown in Figure \ref{fig:farmer} solves the Farmer's problem: {\em The farmer wants to get his goat, wolf, and cabbage to the other side of the river. His boat isn't very big, and it can only carry him and either his goat, his wolf, or his cabbage. 
If he leaves the goat alone with the cabbage, then the goat will gobble up the cabbage. If he leaves the wolf alone with the goat, then the wolf will gobble up the goat. When the farmer is present, the goat and cabbage are safe from being gobbled up by their predators.} \begin{figure} \begin{center} \begin{verbatim} go => S0=[s,s,s,s], plan(S0,Plan,_), writeln(Plan.reverse()). table (+,-,min) plan([n,n,n,n],Plan,Len) => Plan=[], Len=0. plan(S,Plan,Len) => Plan=[Action|Plan1], action(S,S1,Action), plan(S1,Plan1,Len1), Len=Len1+1. action([F,F,G,C],S1,Action) ?=> Action=farmer_wolf, opposite(F,F1), S1=[F1,F1,G,C], not unsafe(S1). action([F,W,F,C],S1,Action) ?=> Action=farmer_goat, opposite(F,F1), S1=[F1,W,F1,C], not unsafe(S1). action([F,W,G,F],S1,Action) ?=> Action=farmer_cabbage, opposite(F,F1), S1=[F1,W,G,F1], not unsafe(S1). action([F,W,G,C],S1,Action) ?=> Action=farmer_alone, opposite(F,F1), S1=[F1,W,G,C], not unsafe(S1). index (+,-) (-,+) opposite(n,s). opposite(s,n). unsafe([F,W,G,_C]),W==G,F!==W => true. unsafe([F,_W,G,C]),G==C,F!==G => true. \end{verbatim} \end{center} \caption{\label{fig:farmer}A program for the Farmer's problem.} \end{figure} \section{The Tabling Mechanism} The Picat tabling\index{tabling} system employs the so-called \emph{linear tabling}\index{linear tabling} mechanism, which computes fixpoints by iteratively evaluating looping subgoals. The system uses a data area, called the \emph{table area}\index{tabling}, to store tabled\index{tabling} subgoals and their answers. The tabling area can be initialized with the following built-in predicate: \begin{itemize} \item \texttt{initialize\_table}\index{\texttt{initialize\_table/0}}: This predicate initializes the table area. \end{itemize} This predicate clears up the table area. It's the user's responsibility to ensure that no data in the table area are referenced by any part of the application. Linear tabling relies on the following three primitive operations to access and update the table\index{tabling} area. \begin{description} \item[Subgoal lookup and registration:] This operation is used when a tabled\index{tabling} subgoal is encountered during execution. It looks up the subgoal table\index{tabling} to see if there is a variant of the subgoal. If not, it inserts the subgoal (termed a \emph{pioneer} or \emph{generator}) into the subgoal table\index{tabling}. It also allocates an answer table\index{tabling} for the subgoal and its variants. Initially, the answer table\index{tabling} is empty. If the lookup finds that there already is a variant of the subgoal in the table\index{tabling}, then the record that is stored in the table\index{tabling} is used for the subgoal (called a \emph{consumer}). Generators and consumers are handled differently. In linear tabling\index{linear tabling}, a generator is resolved using rules, and a consumer is resolved using answers; a generator is iterated until the fixed point is reached, and a consumer fails after it exhausts all of the existing answers. \item[Answer lookup and registration:] This operation is executed when a rule succeeds in generating an answer for a tabled\index{tabling} subgoal. If a variant of the answer already exists in the table\index{tabling}, then it does nothing; otherwise, it inserts the answer into the answer table\index{tabling} for the subgoal, or it tables\index{tabling} the answer according to the mode declaration. Picat uses the lazy consumption strategy (also called the local strategy). 
After an answer is processed, the system backtracks to produce the next answer. \item[Answer return:] When a consumer is encountered, an answer is returned immediately, if an answer exists. On backtracking, the next answer is returned. A generator starts consuming its answers after it has exhausted all of its rules. Under the lazy consumption strategy, a top-most looping generator does not return any answer until it is complete. \end{description} \ignore{ \section{Initialization of the Table Area} \begin{itemize} \item \texttt{table\_get\_all($Goal$) = $List$}\index{\texttt{table\_get\_all/1}}: This function returns a list of answers of the subgoals that are subsumed by $Goal$. For example, the \texttt{table\_get\_all(\_)}\index{\texttt{table\_get\_all/1}} fetches all of the answers in the table\index{tabling}, since any subgoal is subsumed by the anonymous variable\index{anonymous variable}. \item \texttt{table\_get\_one($Goal$)}\index{\texttt{table\_get\_one/1}}: If there is a subgoal in the subgoal table\index{tabling} that is a variant of \texttt{$Goal$}, and that has answers, then \texttt{$Goal$} is unified with the first answer. This predicate fails if there is no variant subgoal in the table\index{tabling}, or if there is no answer available. \end{itemize} } \ignore{ \end{document} }
{ "alphanum_fraction": 0.7258184832, "avg_line_length": 72.245508982, "ext": "tex", "hexsha": "4a0447bfab5b741a003497cf25a3374c713fcc91", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-08T17:56:16.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-08T17:56:16.000Z", "max_forks_repo_head_hexsha": "b02baacfd519cc1f95b42cdce0d96078910e50c0", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "ponyatov/pycat", "max_forks_repo_path": "src/doc/tabling.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "b02baacfd519cc1f95b42cdce0d96078910e50c0", "max_issues_repo_issues_event_max_datetime": "2021-08-23T20:29:31.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-24T17:55:51.000Z", "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "ponyatov/pycat", "max_issues_repo_path": "src/doc/tabling.tex", "max_line_length": 1771, "max_stars_count": 2, "max_stars_repo_head_hexsha": "b02baacfd519cc1f95b42cdce0d96078910e50c0", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "ponyatov/pycat", "max_stars_repo_path": "src/doc/tabling.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-08T18:13:13.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-02T01:10:55.000Z", "num_tokens": 3243, "size": 12065 }
% Preamble
\documentclass[11pt,a4paper]{article}
\usepackage{amsmath,amssymb,amsfonts,amsthm}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage[a4paper]{geometry}
\usepackage{caption}
\usepackage{lscape}
\usepackage{array}
\usepackage{subcaption}
\usepackage{grffile}
\usepackage{rotating}
\usepackage[pdfborder={0 0 0}]{hyperref}
\usepackage[section] {placeins}
\newcommand{\field}[1]{\mathbb{#1}} % the font for a mathematical field is blackboard
\newcommand{\R}{\field{R}} % the field of the reals
\newcommand{\N}{\field{N}} % the field of the natural numbers
\newcommand{\C}{\field{C}} % the field of complex numbers
\newcommand{\Z}{\field{Z}} % the field of integers
\theoremstyle{plain}
\newtheorem{stelling}{Stelling}
\newtheorem{lemma}[stelling]{Lemma}
\newtheorem{gevolg}[stelling]{Gevolg}
\theoremstyle{definition}
\newtheorem{definitie}[stelling]{Definitie}
\newtheorem{voorbeeld}[stelling]{Voorbeeld}
\theoremstyle{remark}
\newtheorem*{noot}{Noot}
\newcommand{\drsh}{\reflectbox{\rotatebox[origin=c]{180}{\Huge $\Rsh$ \hspace{3pt}}}}
\textheight 230truemm
\begin{document}
\begin{center}
{\huge {\tt CRADLE++} Documentation}

\vspace{20pt}
{\large Customisable RAdioactive Decay for Low Energy Particle Physics: \\ A C++ event generator}

\vspace{15pt}
{\small \href{mailto:[email protected]}{[email protected]}}

\vspace{10pt}
\today
\end{center}

\section{Introduction}
This C++ package attempts to provide a general event generator for low energy particle physics, specifically nuclear spectroscopy studies. Its main focus lies on weak interaction studies performed using nuclear beta decay. Its goals are accessibility and extensibility. \\
\\
The general structure of the CRADLE++ program is as follows
\begin{tabbing}
DecayManager: \= Performs the main loop, keeps the particle stack, the particle factory \\ \> and the registered DecayModes.
\end{tabbing}
\begin{tabbing}
\hspace{10pt} \drsh Particle: \= Contains physical information (mass, charge, \ldots), kinematical \\ \> information, and an array of DecayChannels.
\end{tabbing}
\begin{tabbing}
\hspace{20pt} \drsh DecayChannel: \= Contains intensity, parent and daughter excitation energy,\\ \> Q value, lifetime and DecayMode.
\end{tabbing}
\begin{tabbing}
\hspace{30pt} \drsh DecayMode: \= Dynamics of the specific decay. Inherited by daughter\\ \> classes BetaMinus, Alpha, etc. The parent class performs\\ \> TwoBodyDecay() and ThreeBodyDecay(), while the daughter\\ \> classes perform particle creation and initialisation.
\end{tabbing}

\section{Prerequisites}
We attempted to use as few external libraries as possible without slowing down the coding progress. Two libraries were used. Both are extensive and widespread C++ libraries, typically installed by default on Linux systems.
\begin{enumerate}
	\item BOOST: The linear algebra routines of uBLAS and the program\_options library are extensively used. (\url{http://www.boost.org})
	\item GNU Scientific Library (GSL): Used for the calculation of the complex Gamma function in the Fermi function. (\url{http://www.gnu.org/software/gsl/})
\end{enumerate}
When using the precompiled binary, the libraries are statically included, and only a 64-bit Linux system is required to run it. Building requires the boost libraries to be installed, if not done already. This can be done easily using the package manager: {\tt sudo apt-get install libboost-dev} (replace {\tt apt-get} with the package manager on your system).
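\vspace{10pt}
Building from source likewise assumes that the GSL development headers are available. On Debian-based systems both dependencies can usually be installed in one step, for example

\vspace{10pt}
{\tt sudo apt-get install libboost-dev libgsl-dev}

\vspace{10pt}
where the exact GSL package name ({\tt libgsl0-dev} on some older releases) may vary between distributions.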
\vspace{10pt} CRADLE++ uses the data files from NNDC and ENSDF databases, as compiled for the Geant4 distribution. These can be downloaded at \url{http://geant4.web.cern.ch/geant4/support/download.shtml}. CRADLE++ uses the G4RadioactiveDecayX.X and G4PhotonEvaporationX.X data files. \vspace{10pt} CRADLE++ was tested on Ubuntu 14.04 LTS, and Ubuntu 15.10 in a fresh virtual machine install. \section{Configuration} All options can be seen by running the program with the -h [--help] flags. \vspace{10pt} CRADLE++ requires a configuration file on start-up to set several parameters, such as weak interaction coupling constants, general verbosity or lifetime cuts. Options are written in typical INI format. By default {\tt config.txt} in the local folder is loaded. This behaviour can be overridden using command line options. \vspace{10pt} The location of the data files can be set using the environment variables CRADLE\_GammaData and CRADLE\_RadiationData. In case these are not defined, CRADLE++ looks for them in the GammaData and RadiationData local folders. \section{Usage} CRADLE++ performs the full decay chain until it arrives at a state with a lifetime longer than the value {\tt Cut.Lifetime} in seconds specified in the configuration file. It starts from an initial state with a name, Z value, A value and an excitation energy. The initial state is at rest. Beta decay is currently the only DecayMode where angular correlations have been implemented. Each decay is performed in the rest frame of the mother and afterwards Lorentz boosted into the lab frame. In the calculation natural units are used, i.e. $\hbar=c=m_e=1$. The standard energy scale is keV. Output is by default written to {\tt output.txt}. This behaviour can be overridden with the command line options. \vspace{10pt} The structure of the output file is as follows: \begin{itemize} \item Column 1: Event ID \item Column 2: Time of particle creation in seconds. The decay of the parent is included. \item Column 3: Particle name \item Column 4: Excitation energy \item Column 5: Kinetic energy (keV) \item Column 6-9: Four momentum of emitted particle in lab frame \end{itemize} Particles are emitted in a $4\pi$ solid angle and momentum conservation is guaranteed. \section{Example} We study the proton spectrum after emission from $^{32}$Cl($E_{X}=5046.3$\,keV) after $\beta^+$ decay from $^{32}$Ar for both scalar and vector interactions. Setting {\tt Couplings.CS} and {\tt Couplings.CV} to 0 and 1 in {\tt config.txt}, respectively, we run CRADLE++ as follows \vspace{10pt} {\tt ./Cradle++ -n 32Ar -z 18 -a 32 -l 500000 -f 32Ar\_vector.txt} \vspace{10pt} Doing the same thing, but reversing the two coupling constants we get the scalar recoil spectrum. Both are plotted in figure \ref{fig:32Ar_p}. The gamma rays from the decay are shown in figure \ref{fig:32Cl_gamma}. \vspace{10pt} \textbf{Note}: This is a special case, as nucleon emission is not included in the Geant4 codebase, and so it is not present in its data files. It can however be added manually without any issue. See the README file of the G4RadioactiveDecayX.X for the structure of the data files. Proton decay can then be trivially included by adding \begin{verbatim} # Denotes metastable state; Excitation Energy; Lifetime P 5046.3 0.0000e+00 # Decay modes from this state; Name; Daughter Excitation; Intensity; Q Proton 0.0000e+00 1.0000e+02 3.4580e+03 \end{verbatim} to the data file {\tt z17.a32}. 
The intensity (here set to $J_p = 100.0$) is used to choose the decay mode via the probability \begin{equation} P(p) = \frac{J_p}{\sum_i J_i} \end{equation} where $J_i$ denotes the intensity of the $i$-th decay channel from the current state. This means it competes with $\gamma$ decay from the same level. Isolating the proton emission can be done simply by increasing $J_p$. \begin{figure}[h!] \vspace{-20pt} \centering \includegraphics[width=\textwidth]{32Ar_proton.pdf} \caption{Comparison of the proton peak after emission from $^{32}$Cl for a vector and scalar interaction.} \label{fig:32Ar_p} \end{figure} \begin{figure}[h!] \vspace{-30pt} \centering \includegraphics[width=0.95\textwidth]{32Ar_gamma.pdf} \caption{Gamma spectrum after $\beta^+$ decay of $^{32}$Ar. The highest $\gamma$ energies above 10\,MeV come from very weakly populated levels in $^{32}$P after $\beta^+$ from $^{32}$Cl.} \label{fig:32Cl_gamma} \end{figure} \end{document}
{ "alphanum_fraction": 0.7633626098, "avg_line_length": 54.9655172414, "ext": "tex", "hexsha": "0254759c4bca413421c8721665727b38273723c3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-06-13T19:03:55.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-13T19:03:55.000Z", "max_forks_repo_head_hexsha": "8b7979a201c6d95abbc00cf44159b8fa577a17f5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "leenderthayen/CRADLE", "max_forks_repo_path": "doc/Generator_Documentation.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "8b7979a201c6d95abbc00cf44159b8fa577a17f5", "max_issues_repo_issues_event_max_datetime": "2021-04-13T04:49:09.000Z", "max_issues_repo_issues_event_min_datetime": "2019-04-22T13:07:52.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "leenderthayen/CRADLE", "max_issues_repo_path": "doc/Generator_Documentation.tex", "max_line_length": 702, "max_stars_count": null, "max_stars_repo_head_hexsha": "8b7979a201c6d95abbc00cf44159b8fa577a17f5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "leenderthayen/CRADLE", "max_stars_repo_path": "doc/Generator_Documentation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2173, "size": 7970 }
\subsection{Lagrange polynomials}\label{subsec:lagrange_polynomials}

\begin{definition}\label{def:omega_polynomial}
  Given distinct elements \( x_0, \ldots, x_n \) of the field \( \BbbK \), we form the polynomial
  \begin{equation*}
    \omega(X) \coloneqq \prod_{j=0}^n (X - x_j).
  \end{equation*}
\end{definition}

\begin{proposition}\label{thm:omega_polynomial_derivative}
  For the polynomial \( \omega \) from \fullref{def:omega_polynomial}, for \( k = 0, \ldots, n \) we have
  \begin{equation*}
    \omega'(x_k) = \prod_{\substack{j = 0 \\ j \neq k}}^n (x_k - x_j),
  \end{equation*}
  where \( \omega' \) is the algebraic \hyperref[def:algebraic_derivative]{derivative} of \( \omega \).
\end{proposition}
\begin{proof}
  Fix \( k \in \{ 0, \ldots, n \} \) and denote
  \begin{equation*}
    q(X) \coloneqq \prod_{\substack{j = 0 \\ j \neq k}}^n (X - x_j).
  \end{equation*}

  Then
  \begin{equation*}
    \omega(X) = (X - x_k) q(X),
  \end{equation*}
  so
  \begin{equation*}
    \omega'(X) = [q(X) + X q'(X)] - x_k q'(X) = q(X) + (X - x_k) q'(X).
  \end{equation*}

  So for \( x_k \) we have
  \begin{equation*}
    \omega'(x_k) = q(x_k) = \prod_{\substack{j = 0 \\ j \neq k}}^n (x_k - x_j).
  \end{equation*}
\end{proof}

\begin{theorem}[Lagrange interpolation]\label{thm:lagrange_interpolation}
  Let \( x_0, x_1, \ldots, x_n \) be pairwise distinct elements of \( \BbbK \) and \( y_0, y_1, \ldots, y_n \) be arbitrary elements of \( \BbbK \). Then there exists a unique \hyperref[def:polynomial]{polynomial} \( L_n(X) \in \pi_n \) (where \( \pi_n \) is the \hyperref[def:polynomial_free_module]{\( n \)-dimensional polynomial vector space}) such that
  \begin{equation}\label{eq:thm:lagrange_interpolation/condition}
    L_n(x_k) = y_k, \quad k = 0, 1, \ldots, n.
  \end{equation}
\end{theorem}
\begin{proof}
  We will first show uniqueness. Let \( p, q \in \pi_n \) both satisfy \fullref{eq:thm:lagrange_interpolation/condition}. Their difference \( p - q \) is a polynomial of degree at most \( n \) that has \( n + 1 \) distinct roots. By \fullref{thm:integral_domain_polynomial_root_limit}, \( p - q = 0 \). This proves uniqueness.

  We construct the polynomial explicitly. Define the Lagrange polynomial
  \begin{equation*}
    L_n(X) = \sum_{m=0}^n y_m \prod_{\substack{j = 0 \\ j \neq m}}^n \frac {(X - x_j)} {(x_m - x_j)}.
  \end{equation*}

  For \( k = 0, 1, \ldots, n \) we have
  \begin{equation*}
    L_n(x_k) = y_k \underbrace{\prod_{\substack{j = 0 \\ j \neq k}}^n \frac {(x_k - x_j)} {(x_k - x_j)}}_{=1} + \sum_{\substack{m = 0 \\ m \neq k}}^n y_m \overbrace{\frac{(x_k - x_k)}{(x_m - x_k)}}^{=0} \prod_{\substack{j = 0 \\ j \neq k \\ j \neq m}}^n \frac {(x_k - x_j)} {(x_m - x_j)} = y_k.
  \end{equation*}

  Therefore, \( L_n \) satisfies \eqref{eq:thm:lagrange_interpolation/condition}, which proves existence.
\end{proof}
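As a simple illustration, for two nodes (\( n = 1 \)) the construction reduces to the familiar linear interpolant
\begin{equation*}
  L_1(X) = y_0 \frac{X - x_1}{x_0 - x_1} + y_1 \frac{X - x_0}{x_1 - x_0},
\end{equation*}
which indeed satisfies \( L_1(x_0) = y_0 \) and \( L_1(x_1) = y_1 \).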
{ "alphanum_fraction": 0.6343178622, "avg_line_length": 48.2033898305, "ext": "tex", "hexsha": "80f31d1fc0cf1e1a645c0fef342022661a2290fb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "v--/notebook", "max_forks_repo_path": "src/lagrange_polynomials.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "v--/notebook", "max_issues_repo_path": "src/lagrange_polynomials.tex", "max_line_length": 356, "max_stars_count": null, "max_stars_repo_head_hexsha": "d9bdfbab9f35095db2721f991a3418f58f997a56", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "v--/notebook", "max_stars_repo_path": "src/lagrange_polynomials.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1084, "size": 2844 }
% !TEX root = main.tex
%-------------------------------------------------
\section{Correlation}\label{sec:correlation}
%-----------------------------
\subsection{Product moments}
%Let $X$ and $Y$ be random variables on the same probability space.
\begin{definition}\label{def:prod_rvs}
The \emph{product} of $X$ and $Y$ is the random variable
\[
\begin{array}{rlcl}
XY : 	& \Omega 	& \to 		& \R \\
		& \omega 	& \mapsto	& X(\omega)Y(\omega).
\end{array}
\]
\end{definition}

\begin{definition}
\ben
\it % << discrete
If $X$ and $Y$ are jointly discrete, the \emph{product moment} of $X$ and $Y$ is
\[
\expe(XY) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} x_iy_j\, f_{X,Y}(x_i,y_j)
\]
whenever this sum exists.
\it % << cts
If $X$ and $Y$ are jointly continuous, the \emph{product moment} of $X$ and $Y$ is
\[
\expe(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{X,Y}(x,y)\,dx\,dy
\]
whenever this integral exists.
\een
\end{definition}
%
%-----------------------------
\subsection{Covariance}
The covariance of $X$ and $Y$ is the product moment of the \emph{centred} variables $X-\expe(X)$ and $Y-\expe(Y)$.
\begin{definition}\label{def:covariance}
The \emph{covariance} of $X$ and $Y$ is
\[
\cov(X,Y) = \expe\big(\big[X-\expe(X)\big]\big[Y-\expe(Y)\big]\big).
\]
\end{definition}
\begin{remark}
Note that $\cov(X,Y)=\cov(Y,X)$ and $\cov(X,X)=\var(X)$.
\end{remark}
In the same way that $\var(X)=\expe(X^2)-\expe(X)^2$, we have the following convenient expression for $\cov(X,Y)$.
\begin{lemma}\label{lem:covariance-formula}
%\[
$\cov(X,Y) = \expe(XY) - \expe(X)\expe(Y)$.
%\]
\end{lemma}
\begin{proof}
Expand the product in Definition~\ref{def:covariance}, then apply the linearity of expectation.
\end{proof}

\begin{lemma}\label{lem:var_of_sum}
For random variables $X_1,X_2,\ldots,X_n$,
\[
\var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n\sum_{j=1}^n \cov(X_i,X_j)
%= \sum_{i=1}^n \var(X_i) + 2\sum_{i=1}^{n}\sum_{j=i+1}^n \cov(X_i,X_j).
\]
\end{lemma}
\begin{proof}
Let $Y=\sum_{i=1}^n X_i$. Then
\begin{align*}
\var(Y) = \expe(Y^2)-\expe(Y)^2
	& = \expe\left[\left(\sum_{i=1}^n X_i\right)^2\right] - \left[\expe\left(\sum_{i=1}^n X_i\right)\right]^2 \\
	& = \expe\left[\sum_{i=1}^n \sum_{j=1}^n X_iX_j\right] - \left[\sum_{i=1}^n \expe(X_i)\right]^2 \\
	& = \sum_{i=1}^n\sum_{j=1}^n \expe(X_iX_j) - \sum_{i=1}^n\sum_{j=1}^n \expe(X_i)\expe(X_j) \\
	& = \sum_{i=1}^n\sum_{j=1}^n \cov(X_i,X_j).
\end{align*}
\end{proof}

\begin{exercise}\label{exe:covar_bilinear}
Show that covariance is a \emph{bilinear} operator, in the sense that
\[
\cov(aX_1+bX_2,cY_1+dY_2) = ac\cov(X_1,Y_1) + ad\cov(X_1,Y_2) + bc\cov(X_2,Y_1) + bd\cov(X_2,Y_2).
\]
\end{exercise}

%-----------------------------
\subsection{Correlation}
\begin{definition}
$X$ and $Y$ are said to be \emph{correlated} if $\cov(X,Y)\neq 0$ or equivalently if
\[
\expe(XY)\neq\expe(X)\expe(Y),
\]
otherwise they are said to be \emph{uncorrelated}.
\end{definition}

% lem: independent => uncorrelated
\begin{lemma}\label{lem:indept_implies_uncorrelated}
If $X$ and $Y$ are independent, they are uncorrelated.
\end{lemma}
\begin{proof}
Let $X$ and $Y$ be jointly continuous (the discrete case is similar).
\par
Because $X$ and $Y$ are independent, $f_{X,Y}(x,y)=f_X(x)f_Y(y)$ for all $x,y\in\R$.
Hence \begin{align*} \expe(XY) & = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{X,Y}(x,y)\,dx\,dy \\ & = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_X(x)f_Y(y)\,dx\,dy \text{\quad(by independence),}\\ & = \left(\int_{-\infty}^{\infty}x\,f_X(x)\,dx\right)\left(\int_{-\infty}^{\infty}y\,f_Y(y)\,dy\right) \\ & = \expe(X)\expe(Y). \end{align*} \end{proof} % variance of sum = sum of variances \begin{lemma}\label{lem:var_of_indept_sum} If $X_1,X_2,\ldots,X_n$ are pairwise uncorrelated, \[ \var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n\var(X_i). \] \end{lemma} \begin{proof} Because the $X_i$ are pairwise uncorrelated, $\cov(X_i,X_j)=0$ whenever $i\neq j$, so by Lemma~\ref{lem:var_of_sum}, \[ \var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n\cov(X_i,X_i) = \sum_{i=1}^n\var(X_i). \] \end{proof} % example: negative binomial \begin{example} Let $X_1,\ldots,X_r$ be independent with each $X_i\sim\text{Geometric}(p)$, the distribution of the number of failures before the first success in a sequence of independent Bernoulli trials where the probability of success is $p$. Find the mean and variance of $Y = \sum_{i=1}^r X_i$. \begin{solution} Since $X_i\sim\text{Geometric}(p)$, we know that \[ \expe(X_i)=\frac{1-p}{p} \quad\text{and}\quad \var(X_i)=\frac{1-p}{p^2}. \] By the linearity of expectation, \[ \expe(Y) = \expe(X_1)+\expe(X_2)+\ldots+\expe(X_r) = \frac{r(1-p)}{p} \] and because the $X_i$ are independent, \[ \var(Y) = \var(X_1)+\var(X_2)+\ldots+\var(X_r) = \frac{r(1-p)}{p^2} \] In this case $Y$ has the so-called \emph{negative binomial} distribution with parameters $r$ and $p$. This is the distribution of the number of failures before the $r$th success in a sequence of independent Bernoulli trials in which the probability of success is $p$. \end{solution} \end{example} %----------------------------- \subsection{Correlation coefficient} The correlation coefficient is the product moment of the \emph{standardized} variables $\displaystyle\frac{X-\expe(X)}{\sqrt{\var(X)}}$ and $\displaystyle\frac{Y-\expe(Y)}{\sqrt{\var(Y)}}$. \begin{definition}\label{def:correlation_coefficient} The \emph{correlation coefficient} of $X$ and $Y$ is \[ \rho(X,Y) = \expe\left[\left(\frac{X-\expe(X)}{\sqrt{\var(X)}}\right)\left(\frac{Y-\expe(Y)}{\sqrt{\var(Y)}}\right)\right] \] \end{definition} By the linearity of expectation, we have the following convenient expression for $\rho(X,Y)$. \begin{lemma} The correlation coefficient of $X$ and $Y$ can be written as \[ \rho(X,Y) = \frac{\cov(X,Y)}{\sqrt{\var(X)\cdot\var(Y)}} \] \end{lemma} \begin{proof} Follows easily from Lemma~\ref{lem:covariance-formula}. \end{proof} Note that $\rho(X,Y)=0$ whenever $X$ and $Y$ are uncorrelated. In fact the correlation coefficient satisfies the inequality $|\rho(X,Y)|\leq 1$ and thus provides a \emph{standardized} measure of the (linear) dependence between $X$ and $Y$. To prove this we need the following result from mathematical analysis.% called the \emph{Cauchy-Schwarz inequality}. %First we need the following technical result (which we shall not prove). %% lemma %\begin{lemma}\label{lem:pos_rv_expe_zero} %If $X\geq 0$ and $\expe(X)=0$ then $\prob(X=0)=1$. %\end{lemma} %\begin{proof} %Proof by contradiction: let $X\geq 0$ with $\expe(X)=0$, and suppose that $\prob(X>0)>0$. %\bit %\it Because the CDF $\prob(X\leq x)$ is right-continuous, there exists $\epsilon>0$ such that $\prob(X>\epsilon)>0$. %\it This implies that $X\geq \epsilon I(X>\epsilon)$. 
%\it Taking the expected value of both sides, $\expe(X)\geq \epsilon\,\prob(X>\epsilon) > 0$ (by monotonicity).
%\eit
%This is a contradiction, so we conclude that $\prob(X>0)=0$.
%\end{proof}

% thm: cauchy-schwarz
\begin{theorem}[Cauchy-Schwarz inequality for random variables]
For any two random variables $X$ and $Y$,
\[
\expe(XY)^2 \leq \expe(X^2)\expe(Y^2)
\]
with equality if and only if $\prob(Y=aX)=1$ for some $a\in\R$.
\end{theorem}

\begin{theorem}\label{thm:bounds_on_rho}
The correlation coefficient satisfies the inequality
\[
|\rho(X,Y)|\leq 1,
\]
with equality if and only if $\prob(Y=aX+b)=1$ for some $a,b\in\R$.
\end{theorem}
\begin{proof}
Apply the Cauchy-Schwarz inequality to $X-\expe X$ and $Y-\expe Y$:
\begin{align*}
\cov(X,Y)^2
	& = \expe\big((X-\expe X)(Y-\expe Y)\big)^2 \\
	& \leq \expe\big((X-\expe X)^2\big)\expe\big((Y-\expe Y)^2\big) \\
	& = \var(X)\var(Y),
\end{align*}
with equality if and only if there exists $a\in\R$ such that
\[
\prob\big[Y-\expe Y = a(X-\expe X)\big] = 1.
\]
Hence,
\[
|\rho(X,Y)| = \left|\frac{\cov(X,Y)}{\sqrt{\var(X)\var(Y)}}\right| \leq 1
\]
with equality if and only if $\prob(Y = aX + b) = 1$, where $b = \expe Y - a\expe X$.
\end{proof}

%----------------------------------------------------------------------
\begin{exercise}
\begin{questions}
%----------------------------------------
%==========================================================================
\question
Let $X$ and $Y$ be two random variables having the same distribution but which are not necessarily independent.
Show that
$
\cov(X+Y,X-Y)=0
$
provided that their distribution has finite mean and variance.
\begin{answer}
Perhaps the simplest method is the following: let $U=X+Y$ and $V=X-Y$. Then
\begin{align*}
\cov(X+Y,X-Y)
	& = \expe(UV) - \expe(U)\expe(V) \\
	& = \expe\big[(X+Y)(X-Y)\big] - \expe(X+Y)\expe(X-Y) \\
	& = \expe(X^2 - Y^2) - \big[\expe(X)+\expe(Y)\big]\big[\expe(X)-\expe(Y)\big] \qquad\text{(by the linearity of expectation)} \\
	& = \expe(X^2) - \expe(Y^2) - \expe(X)^2 +\expe(X)\expe(Y) - \expe(Y)\expe(X) + \expe(Y)^2 \quad\text{(by linearity again)} \\
	& = \big[\expe(X^2) - \expe(X)^2\big] - \big[\expe(Y^2) - \expe(Y)^2\big] \\
	& = \var(X) - \var(Y).
\end{align*}
Since $X$ and $Y$ have the same distribution, their variances must be equal, so $\cov(X+Y,X-Y)=0$.
\end{answer}
%==========================================================================
\question
Consider a fair six-sided die whose faces show the numbers $-2,0,0,1,3,4$. The die is independently rolled four times. Let $X$ be the average of the four numbers that appear, and let $Y$ be the product of these four numbers. Compute $\expe(X)$, $\expe(X^2)$, $\expe(Y)$ and $\cov(X,Y)$.
\begin{answer}
Let $X_1,X_2,X_3,X_4$ be independent discrete random variables recording the outcomes of the four rolls.
Each $X_i$ is identically distributed according to the following PMF:
\[\begin{array}{|c|ccccc|}\hline
k				& -2 & 0 & 1 & 3 & 4 \\ \hline
\prob(X_i=k)	& 1/6 & 1/3 & 1/6 & 1/6 & 1/6 \\ \hline
\end{array}\]
Hence for $i=1,2,3,4$,
\begin{align*}
\expe(X_i) & = \frac{1}{6}(-2+0+0+1+3+4) = 1, \\
\expe(X_i^2) & = \frac{1}{6}(4+0+0+1+9+16) = \frac{30}{6} = 5, \\
\var(X_i) & = \expe(X_i^2)-\expe(X_i)^2 = 4.
\end{align*}
Let $X=\frac{1}{4}(X_1+X_2+X_3+X_4)$. Then
\[
\expe(X) = \frac{1}{4}\big(\expe(X_1)+\ldots+\expe(X_4)\big) = 1
\]
and by independence,
\[
\var(X) = \frac{1}{16}\big(\var(X_1)+\ldots+\var(X_4)\big) = 1,
\]
so $\expe(X^2) = \var(X) + \expe(X)^2 = 2$. Now let $Y=X_1 X_2 X_3 X_4$.
By independence,
\begin{align*}
\expe(Y)
	& = \expe(X_1 X_2 X_3 X_4) \\
	& = \expe(X_1)\expe(X_2)\expe(X_3)\expe(X_4) \\
	& = 1
\end{align*}
and because the $X_i$ are independent and identically distributed,
\begin{align*}
\expe(XY)
	& = \frac{1}{4}\expe\big((X_1+X_2+X_3+X_4)X_1X_2X_3X_4\big) \\
	& = \frac{1}{4}\sum_{i=1}^{4}\expe(X_i^2)\prod_{j\neq i}\expe(X_j) \\
	& = \expe(X_1^2)\expe(X_2)\expe(X_3)\expe(X_4) \\
	& = \expe(X_1^2) = 5,
\end{align*}
so $\cov(X,Y) = \expe(XY) - \expe(X)\expe(Y) = 4$.
\end{answer}
%==========================================================================
\question
A fair die is rolled twice. Let $U$ denote the number obtained on the first roll, let $V$ denote the number obtained on the second roll, let $X=U+V$ denote their sum and let $Y=U-V$ denote their difference.
Compute the mean and variance of $X$ and $Y$, and compute $\expe(XY)$.
Check whether $X$ and $Y$ are uncorrelated.
Check whether $X$ and $Y$ are independent.
\begin{answer}
Let $U,V\sim\text{Uniform}\{1,2,3,4,5,6\}$ be independent (and identically distributed) random variables, and define $X=U+V$ and $Y=U-V$.
\begin{align*}
\expe(X) & = \expe(U) + \expe(V) = 7 \\
\expe(Y) & = \expe(U) - \expe(V) = 0 \\
\intertext{By independence,}
\var(X) & = \var(U) + \var(V) = 35/6 \\
\var(Y) & = \var(U) + \var(V) = 35/6 \\
\intertext{Because $U$ and $V$ are identically distributed, and}
XY & = (U+V)(U-V) = U^2 - V^2, \\
\intertext{it follows that}
\expe(XY) & = \expe(U^2) - \expe(V^2) = 0. \\
\intertext{$X$ and $Y$ are uncorrelated, since}
\cov(X,Y) & = \expe(XY) - \expe(X)\expe(Y) = 0. \\
\intertext{However, $X$ and $Y$ are not independent, because (for example)}
\prob(Y=0) & \neq \prob(Y=0|X=12)=1
\end{align*}
\end{answer}
%----------------------------------------
\end{questions}
\end{exercise}
%----------------------------------------------------------------------
{ "alphanum_fraction": 0.6128446426, "avg_line_length": 37.5046439628, "ext": "tex", "hexsha": "303aae6a5d3cebb08577c3f94009ae425b510ca8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:13:05.000Z", "max_forks_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gillardjw/notes", "max_forks_repo_path": "L5/MA2500/05B_correlation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gillardjw/notes", "max_issues_repo_path": "L5/MA2500/05B_correlation.tex", "max_line_length": 364, "max_stars_count": null, "max_stars_repo_head_hexsha": "58b3f7e8e2c289a88905bda689c95483bee04490", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gillardjw/notes", "max_stars_repo_path": "L5/MA2500/05B_correlation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4639, "size": 12114 }
\chapter{Dynamic Programming}
\section{Summary}
Dynamic programming is a \textbf{collection of algorithms} that can be used to find the \textbf{optimal policy}. It assumes a perfect model of the system (an MDP) and requires a lot of computational power.
\begin{equation}
	v_*(s) = \max_a \sum_{s',r} p(s', r | s, a)[r + \gamma v_*(s')]
	\label{eq:bellman optimality equation state-value function}
\end{equation}
\begin{equation}
	q_*(s, a) = \sum_{s', r} p(s', r | s, a) [r + \gamma \max_{a'} q_*(s', a')]
	\label{eq:bellman optimality equation action-value function}
\end{equation}

\subsection{Policy evaluation}
The Bellman equation from \ref{eq:bellman equation value function derivation} can be converted into an iterative method called \textbf{iterative policy evaluation} to find the value function. Each update takes the expected value over all possible next states. All updates in dynamic programming are called \textbf{expected updates}, because they are based on an expectation over all possible next states rather than on sampled next states.
\begin{equation}
	\begin{split}
		v_{k+1}(s) & = \EX_\pi\left[ R_{t+1} + \gamma v_k(S_{t+1}) | S_t = s \right] \\
		& = \sum_a \pi(a | s) \sum_{s', r} p(s', r | s, a) \left[r + \gamma v_k(s')\right]
	\end{split}
	\label{eq:iterative policy evaluation update rule}
\end{equation}

\subsection{Policy improvement}
\begin{equation}
	\begin{split}
		q_\pi(s, a) & = \EX\left[R_{t+1} + \gamma v_\pi(S_{t+1}) | S_t = s, A_t = a \right]\\
		& = \sum_{s', r} p(s', r | s, a)\big[r + \gamma v_\pi(s')\big]
	\end{split}
	\label{eq:policy improvement, select the next action}
\end{equation}
Given a policy $\pi$ and its value function $v_\pi(s)$, one action $a$ can be selected that maximizes equation~\ref{eq:policy improvement, select the next action}, while all subsequent actions follow the policy $\pi$.
The \textbf{policy improvement theorem} says that if a new policy $\pi'$ satisfies equation~\ref{eq:policy improvement theorem condition}, then the new policy satisfies equation~\ref{eq:policy improvement theorem result}, and is therefore as good as or better than the original policy (proof on pages 78--79 of the book).
\begin{equation}
	q_\pi(s, \pi'(s)) \geq v_\pi(s)
	\label{eq:policy improvement theorem condition}
\end{equation}
\begin{equation}
	v_{\pi}(s) \leq v_{\pi'}(s)
	\label{eq:policy improvement theorem result}
\end{equation}
The new improved policy $\pi'$ is formally written down in equation~\ref{eq:greedy policy action-value}. The corresponding value function is written down in equation~\ref{eq:greedy policy value function to bellman equation}, where we \textbf{end up with the Bellman optimality equation}, indicating that the policy can keep improving until it is the optimal policy.
\begin{equation}
	\begin{split}
		\pi'(s) & = \argmax_a q_\pi (s, a) \\
		& = \argmax_a \EX\left[ R_{t+1} + \gamma v_\pi(S_{t+1}) | S_t = s, A_t = a \right] \\
		& = \argmax_a \sum_{s',r} p(s', r|s, a)\left[r + \gamma v_{\pi}(s')\right]
	\end{split}
	\label{eq:greedy policy action-value}
\end{equation}
\begin{equation}
	\begin{split}
		v_{\pi'}(s) & = \max_a \EX\left[ R_{t+1} + \gamma v_{\pi'}(S_{t+1}) | S_t = s, A_t=a \right] \\
		& = \max_a \sum_{s', r}p(s', r|s, a)\left[r + \gamma v_{\pi'}(s')\right]
	\end{split}
	\label{eq:greedy policy value function to bellman equation}
\end{equation}

\subsection{Policy iteration}
The iterative process of evaluating a policy, and then creating a new policy that is greedy with respect to the old one, is called \textbf{policy iteration}.
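As a brief illustration (following the presentation in the book), policy iteration alternates evaluation steps (E) and improvement steps (I):
\begin{equation*}
	\pi_0 \xrightarrow{E} v_{\pi_0} \xrightarrow{I} \pi_1 \xrightarrow{E} v_{\pi_1} \xrightarrow{I} \pi_2 \xrightarrow{E} \dots \xrightarrow{I} \pi_* \xrightarrow{E} v_*
\end{equation*}
Because a finite MDP has only a finite number of deterministic policies, this sequence reaches an optimal policy and optimal value function in a finite number of iterations.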
\subsection{Value iteration}
Instead of fully evaluating a policy first and only then improving it, the policy can be improved after every sweep of state evaluations, effectively \textbf{turning the Bellman optimality equation into the iterative update} of equation~\ref{eq:value iteration}.
\begin{equation}
	v_{k+1}(s) = \max_a \sum_{s',r} p(s', r| s, a)\big[ r + \gamma v_k(s')\big]
	\label{eq:value iteration}
\end{equation}

\subsection{Generalized policy iteration}
The iterative process of repeatedly evaluating a policy and using it to create an improved version of that policy is referred to as \textbf{generalized policy iteration}, or GPI for short. Both policy iteration and value iteration are instances of GPI, as are many stochastic methods.

\section{Exercises}
\subsection{Exercise 4.8}
The reward is only obtained when the capital reaches 100. When the capital is at 50, there is a 50\% chance to win the game in a single flip by betting everything, so this is obviously the optimal policy there. When you reach 51, it would be rather odd to bet the entire capital, as you don't need to risk it all to reach 100: the downside is bigger, but the upside is the same. So the best course of action is to bet 1 and see whether you can grow the capital above 50. If you lose that bet, you are back at 50 and still have a 50\% chance to win by betting it all.
{ "alphanum_fraction": 0.7243928194, "avg_line_length": 53.8068181818, "ext": "tex", "hexsha": "454587d2e3f80f1fc5293012190a71c24bff9a9c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Zilleplus/HML", "max_forks_repo_path": "RL/notes/TeX_files/chapter04.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Zilleplus/HML", "max_issues_repo_path": "RL/notes/TeX_files/chapter04.tex", "max_line_length": 520, "max_stars_count": null, "max_stars_repo_head_hexsha": "ab9510e27103bb7c14e801606bb25b7c4e17e8ea", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Zilleplus/HML", "max_stars_repo_path": "RL/notes/TeX_files/chapter04.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1413, "size": 4735 }
\documentclass[11pt, letterpaper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{bm} \title{Lecture 3: Dipole interactions and introduction to Cartesian tensors} \begin{document} \maketitle The main aim of this lecture is to understand the origin of $-1/r^6$ attractive potential. This potential arises due to dipole-dipole interactions and is responsible for cohesion amongst liquid molecules. \section{A general idea} We start with the derivation of potential due to a dipole and use it to obtain the energy of interaction between two dipoles for a polar liquid. We then briefly derive the same result for non-polar liquids and see that dipole interactions have identical dependence in the two cases, namely, $-1/r^6$. Finally we write down the general case where molecules may have some permanent and some instantaneous electronic polarization and define the Van der Waal's forces of attraction. \section{Dipole-dipole interactions} \subsection{Polar liquids} The Coulomb potential due to point charge is given by $$ V(r)=\frac{q}{4\pi\epsilon_0 r} $$ Based on this, we can derive the potential due to a dipole at any spatial location $\bm{X}$. The result is $$ V(\bm{X})\equiv V({r,\theta}) =\frac{D \cos(\theta)}{4\pi\epsilon_0 r^2} \rightarrow\text{ \footnotesize{Faster decay}} $$ Where $\bm{D}=D\bm{p}$ is the dipole moment and $\theta$ is the angle between $\bm{X}$ and $\bm{p}$. Then, the inner product of $\bm{D}$ and $\bm{X}$ becomes $D r \cos(\theta)$. Hence we can write $$ V({r,\theta}) =\frac{D r \cos(\theta)}{4\pi\epsilon_0 r^3}\equiv\frac{D \bm{p}\cdot\bm{X}}{4\pi\epsilon_0 r^3} $$ Since potential goes like $1/r^2$ the electric field $\bm{E}$ decays as $1/r^3$. Once we have the potential due to a single dipole, we can find out the energy of a second dipole placed in the field of the first. Assuming that the field is uniform over the length of the second dipole (it is placed far enough), we can obtain this energy to be $$ E_{12} = D\bm{p}\cdot\bm{E} $$ Now, the dipole moment of the other dipole is not yet known. Although its intrinsic moment might be $\bm{D}$, due to thermal fluctuations, it would sample all possible orientations and lead to a net zero dipole moment over time. However, the electric field of the first dipole $\bm{E}$ will cause the second dipole to have a preferential orientation. We can hence deduce that the dipole moment of the second dipole will be proportional to $E$ and of course also to $D$. Then we must construct a dimensionless variable containing $E$. This is given by the Boltzmann factor $DE/kT$. Also, since $\bm{E}$ induces the dipole moment in the second dipole, it is reasonable to assume that first effects of $\bm{E}$ will be linear in $E$. Hence, the dipole moment of the second dipole becomes $D(DE/kT)$, giving a small moment if thermal energy is large and a large moment if electric field is strong. Finally we can obtain the force between two dipoles using expression for the electric field of a dipole. $$ \bm{F} = \underbrace{-q\bm{E}(\bm{X} - \bm{p}\delta l /2)}_{\text{force due to field at the negative charge}} + \underbrace{q\bm{E}(\bm{X} + \bm{p}\delta l /2)}_{\text{force due to field at the positive charge}} $$ Using Taylor expansion, we see that leading order terms cancel and we get a force dependent on the gradient of $E$. 
$$ \bm{F} = q\delta l \bm{p}\cdot\nabla\bm{E} $$ The dipole moment is now given by $D(DE/kT)$ and hence $$ \bm{F} = \frac{D^2E}{kT} \bm{p}\cdot\nabla\bm{E} $$ $$ \Rightarrow \bm{F} \sim \nabla\frac{D^2\bm{E^2}}{kT} \propto \frac{1}{r^7} \text{(\hfill since $E\propto\frac{1}{r^3})$} $$ and hence potential between two permanent dipoles varies as $r^{-6}$. Next, we look at non-polar liquids. \section{Non-polar liquids} In non-polar liquids molecules do not have a permanent dipole moment but the electron cloud around the nucleus has a fluctuating distribution. This causes instantaneous polarization of the molecule. The field induced by this dipole then causes polarization of the nearby constituents. Yet, we will see that the force and the potential in this case have dependence identical to the case of polar liquids. Let the induced dipole moment be $\bm{p_{ind}}$. Then $$ \bm{p_{ind}} = \alpha \bm{E} $$ where $\alpha$ is the polarizability of the molecule and $\bm{E}$ is the electric field of the other dipole. The force is then given by $$ \bm{F} = \bm{p_{ind}}\cdot \nabla\bm{E} $$ $$ \Rightarrow \bm{F} = \alpha \bm{E} \cdot \nabla\bm{E} \propto \nabla\bm{E}^2 $$ Since $E\propto r^{-3}$, we again have the $1/r^{7}$ dependence of force and $1/r^{6}$ dependence of potential for interaction of two induced dipoles. In general, a molecule may have some permanent polarization and some electronic polarization due to fluctuations in the electron cloud. This will result in a permanent electric field superposed with a fluctuating field. Therefore, in general, we can write $$ \bm{F} = \bm{p}\cdot \nabla\bm{E} $$ $$ \Rightarrow \bm{F} = (\bm{p}_{permanent}+\bm{p}_{eletronic})\cdot \nabla(\bm{E}_{permanent}+\bm{E}_{fluctuating}) $$ This gives rise to three kinds of interactions \begin{itemize} \item permanent dipoles with permanent dipoles - Keesom interactions \item permanent dipoles with instantaneous electronic dipoles - Debye interactions \item instantaneous dipoles with instantaneous dipoles - London interactions \end{itemize} Together, these three are known as Van der Waal's forces. \end{document}
{ "alphanum_fraction": 0.7384643443, "avg_line_length": 62.3068181818, "ext": "tex", "hexsha": "e7c30d8e6b15d0040e43ef48ca79b68beae8ac9a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_forks_repo_path": "tex_files/lecture03.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_issues_repo_path": "tex_files/lecture03.tex", "max_line_length": 894, "max_stars_count": 1, "max_stars_repo_head_hexsha": "f4ffd25fa16fa08c2c2a5d465bb8a19a1d02d850", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pulkitkd/Fluid_Dynamics_notes", "max_stars_repo_path": "tex_files/lecture03.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-16T04:19:07.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-16T04:19:07.000Z", "num_tokens": 1588, "size": 5483 }
% This is a test file for TeX2CNL. % It is an edited version of a chapter from Dense Sphere Packings. %publications-of-thomas-hales/geometry/dense-sphere-packings/packing.tex % TeX2CNL version started July 20 2019 % Thomas Hales % The character set used in this file is rather small % cat test_packing.tex | tr -d '[a-zA-Z0-9%$<>^{}()_+-&@,.:;=~/!]' | tr -d '\r\n\\ `\*|#' % Issues function calls f(a,b) vs curry f a b \app{f}{(a,b)} % \vspace*{250px} how to get at it? - For now, documents should use alternatives, such as \null ..\vspace{...}. % \hspace*{} % \left. (with period, makes an invisible delimiter) % \linebreak[number] % \predisplaypenalty=0 and assignments. % What about \hbox{}? We might want two variants, \input{cnl-style} \begin{cnl} %% Customize for this file. \CnlCustom[1]\newterm{ #1 } \CnlDelete[2]\indy \CnlDelete[2]\formaldef \CnlDelete[1]\formalauthor \CnlDelete[1]\guid \CnlDelete[1]\chapter \CnlDelete\figDEQCVQL \CnlDelete\figXOHAZWO \CnlDelete\figKFETCJS \CnlDelete\figHFFTUNW \CnlDelete\figBUGZBTW \CnlDelete\figELMXAFH \CnlDelete\figNOCHOTB \CnlDelete\figYAJOTSL \CnlDelete\figKVIVUOT \CnlDelete\figBWEYURN \CnlDelete\figFIFJALK \CnlDelete\figTULIGLY \CnlDelete\figBJLIEKB \CnlDelete\figPQFEXQN \CnlDelete\figJXEHXQY \CnlDelete\figYAHDBVO \CnlDelete\figZXEVDCA \CnlCustom[1]\nsqrt{ \sqrt{#1} } \CnlNoExpand\vecR \def\vecR#1{\ring{R}^{#1}} \CnlNoExpand\Real \def\Real{\ring{R}} \def\Nat{\ring{N}} \def\vecZ#1{\ring{Z}^{#1}} \CnlNoExpand\pi \CnlNoExpand\ups \CnlNoExpand\openBall % B(p,r) in TeX \def\openBall#1#2{B(#1,#2)} \CnlNoExpand\floor \def\floor#1{\lfloor #1 \rfloor} \CnlNoExpand\AA \def\AA#1#2{A(#1,#2)} \def\Aplus{A_+} \def\Aminus{A_-} \def\Vee#1{#1} \def\CapV#1{#1} \def\bVo{\bV(#1)} \def\braced#1{\{#1\}} \CnlCustom[1]\braced{\setenum{\list{#1}}} \def\setcomp#1#2{\{#1\mid #2\}} \CnlNoExpand\setcomp \CnlNoExpand\mid \def\gammaX{\gamma} \def\preamble#1{#1} \CnlDelete\preamble %SUM \def\sumfinitepred#1#2#3{{\sum_{#2} {#3}}} %#1 free \CnlCustom[3]\sumfinitepred{\sumfinite{\setcomp{#1}{#2}}{\LAMBDA #1, #3}} \def\sumupto#1#2#3#4{{\sum_{#1 = #2}^{#3} {#4}}} \CnlCustom[4]\sumupto{\sumfinite{((#2) .. (#3))}{\LAMBDA #1, #4}} \def\sumvar#1{#1\mid} \CnlDelete[1]\sumvar \def\andcomma{,} \CnlCustom\andcomma{and} %ELLIPSIS \def\setenumdots#1#2#3{\{{#1}_{#2},\ldots,{#1}_{#3}\}} \CnlCustom[3]\setenumdots{\image{#1}{((#2) .. (#3))}} \CnlNoExpand\image \def\setenumrange#1#2{\{#1,\ldots,#2\}} \CnlCustom[2]\setenumrange{((#1) .. (#2))} \def\listdots#1#2#3{[{#1}_{#2};\ldots;{#1}_{#3}]} \CnlCustom[2]\listdots{\listmap{#1}{\listrange{#2}{#3}}} %for $i=1,...,j$ \def\fordots#1#2#3{#1=#2,\ldots,#3} \CnlCustom[3]\fordots{#1\in ((#2) .. (#3))} %% BEGIN TEXT \begin{summary} This chapter comprises much of the core material of the book. At last we take up the topic of dense sphere packings. Associated with a sphere packing $V$ in $\vecR{3}$ are various subsidiary decompositions of space. This chapter focuses on three such decompositions: the Voronoi decomposition into polyhedra, the Rogers decomposition into simplices, and the Marchal decomposition into cells. Each of these decompositions leads to a bound on the density of sphere packings. The bounds in the first two cases are not sharp. The third decomposition leads to a sharp bound $\pi/\nsqrt{18}$ on the density of sphere packing in three dimensions. The final sections of this chapter undertake a detailed study of the properties of the Marchal cell decomposition. 
\end{summary} \section{The Primitive State of Our Subject Revealed}\label{primitive state} \subsection{definition}\label{definition} Informally, a \newterm{packing} is an arrangement of congruent balls in Euclidean three space that are nonoverlapping in the sense that the interiors of the balls are pairwise disjoint. By convention, we take the radius of the congruent balls to be $1$. %the scale %invariance of density, without loss of generality, units can be chosen %so that each ball has radius $1$. Let $ V$ be the set of centers of the balls in a packing. The choice of unit radius for the balls implies that any two points in $ V$ have distance at least $2$ from each other. Formally, the packing is identified with the set of centers $V$. \indy{Notation}{V@$V$ (packing)}% A packing in which no further balls can be added is said to be {\it saturated} (Figure~\ref{fig:saturated}). \figDEQCVQL % fig:saturated \begin{definition}[saturated,~packing] \label{def:packing,saturated} \preamble{ \guid{XASMJUK} \formaldef{packing}{packing} \formaldef{saturated}{saturated} \indy{Index}{saturated}% \indy{Index}{packing}% } A \newterm{packing} $ V\subset \vecR{3}$ is a set such that \[ \forall \u,~\v\in V.~ \norm{ \u}{\v} < 2 \Rightarrow ( \u=\v). \] A set $V$ is \newterm{saturated} if for every $\p\in\vecR{3}$ there exists some $ \u\in V$ such that $\norm{ \u}{\p}< 2$. \end{definition} Let $\openBall{\p}{r}$ denote the open ball in Euclidean three-space at center $\p$ and radius $r$. The open ball is measurable with measure $4\pi r^3/3$. Set $ \Vee{V}(\p,r) = V \cap \openBall{\p}{r}$. \formaldef{ball}{ball} \formaldef{$\Vee{V}(\p,r)$}{V INTER ball(p,r)} \indy{Index}{measure}% \indy{Notation}{B@$\openBall{\v}{r}$ (open ball)}% \indy{Notation}{V1@$\Vee{V}(\p,r)=V\cap \openBall{\p}{r}$}% \begin{lemma} \label{lemma:V-finite} \preamble{ \guid{KIUMVTC} \formalauthor{Nguyen Tat Thang} } Let $ V$ be a packing and let $\p\in\vecR{3}$. Then the set $ \Vee{V}(\p,r)$ is finite. \end{lemma} \begin{proof} Let $\p = (p_1,p_2,p_3)$. The floor function gives the map \[ (v_1,v_2,v_3)\mapsto (\floor{2(v_1-p_1)} , \floor{2(v_2-p_2)}, \floor{2(v_3-p_3)}). \] It is a one-to-one map from $ \Vee{V}(\p,r)$ into the set $\vecZ{3}\cap \openBall{\orz}{2r + 1}$. By Lemma~\ref{lemma:Zcount} the range of this one-to-one map is finite. Hence, the domain $ \Vee{V}(\p,r)$ of the map is also finite.% \footnote{An alternative proof uses the open cover of the compact ball $\bar \openBall{\p}{r}$ by the sets $\bar \openBall{\p}{r}\setminus V$ and $\openBall{\v}{1}$ for $\v\in V$. By compactness, the cover is necessarily finite.} \end{proof} \subsection{Voronoi cell}\label{Voronoi cells} Geometric decompositions of space give a way to estimate the density of sphere packings. A popular decomposition of space is the Voronoi cell decomposition (Figure~\ref{fig:voronoi}). \begin{definition}[Voronoi~cell,~$\Omega$] \label{def:voronoi-cell-omega} \preamble{ \guid{YGFWXEH} \formaldef{$\Omega$}{voronoi\_closed} %\indy{Index}{Voronoi cell}% } Let $V\subset\vecR{3}$ and $\v\in V$. The \fullterm{Voronoi cell}{decomposition!Voronoi} $\Omega(V,\v)$ is the set of points at least as close to $\v$ as to any other point in $V$. \end{definition} \indy{Notation}{zzZ@$\Omega$ (Voronoi cell)} % \figXOHAZWO % fig:voronoi \begin{lemma}[Voronoi partition]\label{lemma:Voronoi-partition} \preamble{ \guid{TIWWFYQ} } If $V$ is a saturated packing, then \begin{equation}\label{eqn:vor-rn} \vecR{3} = \bigcup \setcomp{\Omega(V,\v)}{ \v \in V}. 
\end{equation} \end{lemma} \begin{proof} If $V$ is a saturated packing, then every point $\p$ has distance less than $2$ from some point of $V$. The set $\Vee{V}(\p,2)$ is finite by Lemma~\ref{lemma:V-finite}. Hence, $\p$ is at least as close to some $\v\in V$ as it is to any other $\w\in V$. This means that $\p\in\Omega(V,\v)$. \end{proof} We use half-spaces to separate one Voronoi cell from another. \begin{definition}[half-space]\label{def:half-space} \preamble{ \guid{BGXHPKY} \formaldef{$A$}{bis} \formaldef{$\Aplus$}{bis\_le} } \begin{align*} \AA{\u}{\v} &= \setcomp{\p\in\vecR{3} }{ 2(\v-\u)\cdot \p = \normo{\v}^2 - \normo{\u}^2 },\\ \Aplus(\u,\v) &= \setcomp{\p\in\vecR{3} }{ 2(\v-\u)\cdot \p \le \normo{\v}^2 - \normo{\u}^2 }, \end{align*} when $\u,\v\in\vecR{3}$. The plane $\AA{\u}{\v}$ is the \newterm{bisector} of $\braced{\u,\v}$ and $\Aplus(\u,\v)$ is the \newterm{half-space} of points at least as close to $\u$ as to $\v$. \end{definition} \indy{Notation}{A1@$\AA{\u}{\v}$ (bisector)}% \indy{Notation}{A2@$\Aplus(\u,\v)$ (half-space)}% Each Voronoi cell is a bounded polyhedron. \begin{lemma}[Voronoi polyhedron] \label{lemma:Voronoi-polyhedron} \preamble{ \guid{RHWVGNP} } Let $V\subset\vecR{3}$ be a saturated packing. Then $\Omega(V,\v)\subset \openBall{\v}{2}$. Also, $\Omega(V,\v)$ is a polyhedron defined by the intersection of the finitely many half-spaces $\Aplus(\v,\u)$ for $\u\in \Vee{V}(\v,4)\setminus\braced{\v}$. \end{lemma} \begin{proof} The Voronoi cell $\Omega(V,\v)$ is the intersection of the half-spaces $\Aplus(\v,\u)$ as $\u$ runs over $V\setminus \braced{\v}$. Let $\p\not\in \openBall{\v}{2}$. By saturation, there exists $\u\in V$ such that $\norm{\p}{\u}<2$. Then \[ \norm{\p}{\u} < 2 \le \norm{\p}{\v}. \] Hence, $\p\not\in\Omega(V,\v)$. This proves the first conclusion. Let $\Omega'$ be the intersection of the half-spaces $\Aplus(\v,\u)$ as $\u$ runs over $\Vee{V}(\v,4)$. Clearly, $\Omega(V,\v)\subset \Omega'$. Assume for a contradiction that $\p\in \Omega'\setminus\Omega(V,\v)$. The intersection of the ray $\op{aff}_+\braced{\v,\{\p\}}$ with $\Omega(V,\v)$ is a closed and bounded convex subset of the line. By general principles of convex sets, this intersection is an interval $\op{conv}\braced{\v,\p'}$ for some $\p'\in\Omega(V,\v)\subset \openBall{\v}{2}$. For some small $t>0$, the point lies beyond the interval but remains within the ball: \[ \q = (1+t)\p' -t \v\in (\openBall{\v}{2}\cap \Omega')\setminus \Omega(V,\v). \] Choose $\u\in V\setminus \Vee{V}(\v,4)$ such that $\q\in \Aplus(\u,\v)$. By the triangle inequality, \[ \norm{\u}{\v} \le \norm{\u}{\q} + \norm{\v}{\q} \le 2\norm{\v}{\q} < 4. \] This contradicts the assumption $\u\not\in \Vee{V}(\v,4)$. The number of half-spaces $\Aplus(\v,\u)$ for $\u\in \Vee{V}(\v,4)$ is finite by Lemma~\ref{lemma:V-finite}. A set defined by the intersection of a finite number of closed half-spaces is a polyhedron. \end{proof} \begin{lemma}[Voronoi compact] \label{lemma:Voronoi-compact} \preamble{ \guid{DRUQUFE} \formalauthor{Nguyen Tat Thang} } Let $ V$ be a saturated packing. For every $\v\in V$, the Voronoi cell $\Omega( V,\v)$ is compact, convex, and measurable. \end{lemma} \begin{proof} By the previous lemma, it is a bounded polyhedron. Every bounded polyhedron is compact, convex, and measurable. \end{proof} \subsection{reduction to a finite packing}\label{reduction to finite} We finally state the main result of this book, the Kepler conjecture. The proof fills most of this book. 
This section describes the outline of the proof and gives references to the sources of the details of the proof.
\begin{theorem*}[Kepler's conjecture on dense packings] \label{theorem:kepler}
\preamble{
\guid{IJEKNGA}
\formaldef{Kepler conjecture}{kepler\_conjecture}%
}
No packing of congruent balls in Euclidean three space has density greater than that of the face-centered cubic (FCC) packing.
\end{theorem*}
\indy{Index}{FCC}%
\indy{Index}{face-centered cubic|see{FCC}}%
\indy{Index}{HCP}%
\indy{Index}{hexagonal-close packing|see{HCP}}%
\indy{Index}{Kepler conjecture}%
\begin{remark}\guid{LLFORJR}
This density is $\pi/\nsqrt{18}\approx 0.74$. There are other packings, such as the HCP or the FCC packing with finitely many balls removed, that attain this same density.
\end{remark}
The Kepler conjecture is a statement about space-filling packings. A space-filling packing is specified by a countable number of real coordinates -- three for the position of each of countably many balls. The first task in resolving the conjecture is to reduce the problem to one involving only a finite number of balls. This is accomplished by Lemma~\ref{lemma:reduction-finite-dimensions}. The relevant concepts are \fullterm{negligibility}{negligible} and {\it FCC-compatibility}, given as follows. FCC-compatibility means that each Voronoi cell, after a correction term is added to its volume, has volume at least that of the cells in the FCC packing. Negligibility means that the total correction is an insignificant error term.
\begin{definition}[negligible,~FCC-compatible] \label{def:negligible}
\preamble{
\guid{ZREKEVW}
\formaldef{FCC compatible}{fcc\_compatible}
\formaldef{negligible}{negligible\_fun\_0}
}
A function $G:V\to \Real$ on a set $V\subset\vecR{3}$ is \fullterm{negligible\/}{negligible} if there is a constant $c_1$ such that for all $r\ge1$,
\[ \sumfinitepred{\v}{\v\in \Vee{V}(\orz,r)}{G(\v)} \le c_1 r^2.\]
A function $G: V\to\Real$ is \fullterm{FCC-compatible\/}{FCC!compatible} if for all $\v\in V$,
\[ 4\nsqrt{2}\le \op{vol}(\Omega(V,\v)) + G(\v).\]
\indy{Notation}{G@$G$ (negligible function)}%
\end{definition}
\begin{remark}\guid{RTMZJVG}
The value $\op{vol}(\Omega(V,\v)) + G(\v)$ may be interpreted as an \newterm{adjusted\/} volume of the Voronoi cell. The constant $4\nsqrt{2}$ that appears in the definition of FCC-compatibility is the volume of the Voronoi cell in the HCP and FCC packings. (See Chapter~\ref{sec:close}.) The corrected volume is at least the volume of these Voronoi cells when the correction term $G$ is FCC-compatible.
% \indy{Index}{corrected volume}%
\end{remark}
The density $\delta( V,\p,r)$ of a packing $ V$ within a bounded region of space is defined as a ratio. The numerator is the volume of $\openBall{V}{\p,r}$, defined as the intersection with $\openBall{\p}{r}$ of the union of all balls $\openBall{\v}{1}$ in the packing $V$. The denominator is the volume of $\openBall{\p}{r}$.
\indy{Notation}{zzd@$\delta( V,\p,r)$}%
\indy{Notation}{V@$V$ (packing)}%
\begin{lemma}[reduction to finite dimensions] \label{lemma:reduction-finite-dimensions}
%
\preamble{
\guid{JGXZYGW}
\formalauthor{Nguyen Tat Thang}
}
%
If there exists a \newterm{negligible} \fullterm{FCC-compatible}{FCC!compatible} function $G: V\to\Real$ for a saturated packing $ V$, then there exists a constant $c=c(V)$ such that for all $r\ge1$,
\[ \delta( V,\orz,r) \le {\pi}\Big/{\nsqrt{18}} + c/r. \]
\end{lemma}
\begin{proof}
The volume of $\openBall{ V}{\orz,r}$ is at most the product of the volume $4\pi/3$ of each ball with the number of centers in $\openBall{\orz}{r+1}$.
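(Indeed, a ball $\openBall{\v}{1}$ of the packing can meet $\openBall{\orz}{r}$ only if $\v\in\openBall{\orz}{r+1}$.)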
Hence,
\begin{equation}
 \label{eqn:Abound}
\op{vol}\, \openBall{ V}{\orz,r} \le \card( \Vee{V}(\orz,r+1))\, 4\pi/3.
\end{equation}
Each truncated Voronoi cell is contained in a ball of radius $2$ that is concentric with the unit ball in that cell. The volume of the large ball $\openBall{\orz}{r+3}$ is at least the combined volume of all truncated Voronoi cells centered in $\openBall{\orz}{r+1}$. This observation, combined with FCC-compatibility and negligibility, gives
\begin{equation}
\begin{split}
4\nsqrt{2}\,\,\card( \Vee{V}(\orz,r+1)) &\le \sumfinitepred{\v}{\v\in \Vee{V}(\orz,r+1)}{\left(G(\v) + \op{vol}(\Omega(V,\v))\right)} \\
&\le c_1 (r+1)^2 + \op{vol}\,\openBall{\orz}{r+3} \\
&\le c_1 (r+1)^2 + (1+3/r)^3 \op{vol}\,\openBall{\orz}{r}.
\label{eqn:Bbound}
\end{split}
\end{equation}
\indy{Index}{FCC!compatible}%
Recall that $\delta( V,\orz,r)= \op{vol}\,\openBall{ V}{\orz,r}/\op{vol}\,\openBall{\orz}{r}$. Divide Inequality \ref{eqn:Abound} through by $\op{vol}\,\openBall{\orz}{r}$. Use Inequality~\ref{eqn:Bbound} to eliminate $\card( \Vee{V}(\orz,r+1))$ from the resulting inequality. This gives
\[
\delta( V,\orz,r) \le \frac{\pi}{\sqrt{18}} (1+3/r)^3 + c_1 \frac{(r+1)^2}{4\sqrt{2}\,r^3}.
\]
The result follows for an appropriately chosen constant $c$ (depending on $c_1$).
\end{proof}
\begin{remark}[Kepler conjecture in precise terms]\guid{ZHIQGGN} \label{remark:precise}
The precise meaning of the \newterm{sphere packing problem} or the \newterm{Kepler conjecture} is to prove the bound $\delta( V,\orz,r) \le \pi/\nsqrt{18} + c/r$ for every saturated packing $ V$. The error term $c/r$ comes from the boundary effects of a bounded container holding the balls. The error tends to zero as the radius $r$ of the container tends to infinity. Thus, by the preceding lemma, the existence of a negligible FCC-compatible function provides the solution to the packing problem. The strategy is to define a negligible function and then to solve an optimization problem in finitely many variables to establish that the function is also FCC-compatible.
\end{remark}
\section{Rogers Simplex}\label{sec:rogers}
\indy{Index}{Rogers|see{decomposition}}%
\indy{Index}{decomposition!Rogers}%
Rogers gave a bound on the density of sphere packings in Euclidean space of arbitrary dimension~\cite{Rogers:1958:Packing}. His bound states that the density of a packing in $n$ dimensions cannot exceed the ratio of the volume of $A \cap T$ to the volume of $T$, where $T$ is a regular tetrahedron of side length $2$ and $A$ is the union of the $n+1$ balls of unit radius placed at the extreme points of $T$. In two dimensions, Rogers's bound is sharp and gives a solution to the sphere packing problem. In three dimensions the bound is approximately $0.7797$, which differs significantly from the optimal value $\pi/\nsqrt{18}\approx 0.74$. Rogers's bound is the unattainable density that would result if regular tetrahedra could tile space.\footnote{Aristotle erroneously believed that regular tetrahedra tile space: ``It is agreed that there are only three plane figures which can fill a space, the triangle, the square, and the hexagon, and only two solids, the pyramid and the cube.''~\cite{Aristotle}.}
\indy{Notation}{T@$T$ (regular tetrahedron)}%
\indy{Index}{decomposition!Rogers}%
\indy{Index}{Aristotle}%
To prove his bound, Rogers gives a partition of Euclidean space into simplices with extreme points in a packing $V$. This section develops the basic properties of Rogers simplices.
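In three dimensions, the value of Rogers's bound quoted above can be written in closed form as
\[
\sqrt{18}\left(\arccos\tfrac{1}{3} - \frac{\pi}{3}\right) \approx 0.7797,
\]
the ratio of the volume of the four unit-ball sectors of a regular tetrahedron of side $2$ to the volume of the tetrahedron itself. The closed form is recorded here only for orientation; it is not used in what follows.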
The next section modifies the simplices to obtain a sharp bound on the density of packings.
\subsection{faces}\label{faces}
The Rogers partition is a refinement of the Voronoi cell decomposition. In preparation for this decomposition, this subsection goes into greater detail about the structure of the faces of a Voronoi cell. We parameterize various faces of the Voronoi cell by lists of points in a saturated packing $V$ (Figure~\ref{fig:vset}).
\figKFETCJS % fig:vset
\begin{definition}[$\Omega$ reprise] \label{def:Omega}
\preamble{
\guid{BBDTRGC}
\formaldef{$\Omega(V,W)$}{voronoi\_set}
\formaldef{$\Omega(V,\bu)$}{voronoi\_list}
}
%
Let $V$ be a saturated packing. The notation $\Omega(V,\wild )$ can be \newterm{overloaded} to denote intersections of Voronoi cells, when the second argument is a set or list of points. If $W\subset V$, then the intersection of the family of Voronoi cells is $\Omega(V,W)$:
\[
\Omega(V,W) = \bigcap \setcomp{\Omega(V, \u)}{ \u\in W }.
\]
Define $\Omega$ on lists to be the same as its value on point sets:
\begin{align*}
\Omega(V,\listdots{\u}{0}{k}) = \Omega(V,\setenumdots{\u}{0}{k}).
\end{align*}
\end{definition}
\indy{Notation}{zzZ@$\Omega(V,W)$ (intersection of Voronoi cells)}%
An intersection of Voronoi cells can be written in many equivalent forms:
\[
\Omega(V,\v)\cap \Omega(V,\u) =\Omega(V,\braced{\u,\v}) = \Omega(V,\v)\cap \Aplus(\u,\v) = \Omega(V,\v)\cap \AA{\u}{\v} = \cdots.
\]
\begin{definition}[$\bV$] \label{def:bV}
\preamble{
\guid{NOPZSEH}
\formaldef{$\bVo{k}$}{barV}
}
%
Let $V$ be a saturated packing. When $k=0,1,2,3$, let $ \bVo{k}$ be the set of lists $\bu=\listdots{\u}{0}{k}$ of length $k+1$ with $ \u_i\in V$ such that
\begin{equation}\label{eqn:omega-dim}
\dimaff(\Omega(V,\listdots{\u}{0}{j})) = 3-j
\end{equation}
for all $0<j\le k$. (Recall that $\dimaff(X)$ is the affine dimension of $X$ from Definition~\ref{def:affine}.) Set $\bVo{k}=\emptyset$ for $k>3$.
\end{definition}
\indy{Notation}{dimaff@$\dimaff$ (affine dimension)}%
In particular, $V$ can be identified with $\bVo{0}$ under the natural bijection $\v\mapsto[\v]$, and $\bVo{1}$ is the set of lists $[\u;\v]$ of distinct elements such that the Voronoi cells at $ \u$ and $\v$ have a common facet (Lemma~\ref{lemma:omega-facet}).
\begin{notation}[underscore]
The underscores follow a special syntax. In $\bVo{k}$, the underscore is a function
\[
\underline{\phantom V}:\setcomp{V }{ \text{$V$ saturated packing} } \times \Nat \to \wild.
\]
The syntax is somewhat different in $\bu$. Here the underscore is not a function, but part of its name, following a general typographic convention to mark lists of points. The notations are coherent because $\bu\in\bVo{k}$.
\end{notation}
\begin{notation}[$\trunc{\bu}{j}$]
\preamble{
\guid{JNRJQSM}
\indy{Notation}{d@$d_j$ (truncation of lists)}%
\formaldef{$\trunc{\bu}{j}$} {truncate\_simplex}\hspace{-3pt}
}
%
When $\bu=\listdots{\u}{0}{k}$ and $j\le k$, write $\trunc{\bu}{j} = \listdots{\u}{0}{j}$ for the truncation of the list.
\end{notation}
Truncation $\bu\mapsto\trunc{\bu}{j}$ maps $\bVo{k}$ to $\bVo{j}$ when $j\le k$. Beware of the index: $k$ is the \newterm{codimension} of $\Omega(V,\bu)$ in $\vecR{3}$, when $\bu\in \bVo{k}$; it is not the \newterm{length} of the list $\bu$ (which is $k+1$).\footnote{By convention a $k$-simplex is presented as a $k+1$-tuple.
Because of this shift by one, the notation $\trunc{\bu}{j}$ also differs by the same shift.} \begin{lemma}[Voronoi face] \label{lemma:omega-face} \preamble{ \guid{KHEJKCI} } % Let $V\subset\vecR{3}$ be a saturated packing. Let $\bu=\listdots{\u}{0}{k}\in \bVo{k}$. Then $\Omega(V,\bu)$ is a face of $\Omega(V,\u_0)$. \end{lemma} \begin{proof} This follows directly from the definition of face on page~\pageref{def:face}. The set $\Omega(V,\bu)$ is an intersection of the convex sets $\Omega(V,\u_i)$ and is therefore convex. Also, $\Omega(V,\bu)$ is the intersection of $\Omega(V,\u_0)$ with the planes $\AA{\u_0}{\u_i}$, where $i>0$. Let $\p,\q\in\Omega(V,\u_0)$ and assume \[ \p' = s \p + t \q\in\Omega(V,\bu),\quad \text{ for some } s>0,\quad t>0,~\quad s + t = 1. \] Then $\p'\in \AA{\u_0}{\u_i}$. Each plane $\AA{\u_0}{\u_i}$ is a face of the corresponding half-space $\Aplus(\u_0,\u_i)$ containing $\p$ and $\q$. By the definition of face, $\p,\q$ must also lie in $\AA{\u_0}{\u_i}$. It follows that $\p,\q$ also lie in $\Omega(V,\bu)$. By the definition of face, $\Omega(V,\bu)$ is a face of $\Omega(V,\u_0)$. \end{proof} \begin{lemma}[facets] \label{lemma:omega-facet} \preamble{ \guid{IDBEZAL} } % Let $V\subset\vecR{3}$ be a saturated packing. Let $\bu\in \bVo{k}$ for some $k<3$. Then $F$ is a facet of $\Omega(V,\bu)$ if and only if there exists $\bv\in \bVo{k+1}$ such that $F = \Omega(V,\bv)$ and $\trunc{\bv}{k} = \bu$. \end{lemma} \begin{proof} Use Lemma~\ref{lemma:Voronoi-polyhedron} to write the polyhedron $\Omega(V,\bu)$ in the form of Equation~\ref{eqn:polyrep}: \[ \Omega(V,\bu) = A \cap A_\pm(\v_1,\u_0) \cap \cdots\cap A_\pm(\v_r,\u_0), \] where $A$ is the affine hull of $\Omega(V,\bu)$, $\v_i\in V$, where $\Aminus(\v,\w) = \Aplus(\w,\v)$ with the signs $\pm$ chosen as needed, and $r$ is as small as possible. By Lemma~\ref{lemma:webster}, if $F$ is any facet of $\Omega(V,\bu)$, then there exists an $i\le r$ such that \[ F = \Omega(V,\bu) \cap \AA{\v_i}{\u_0} = \Omega(V,\bv), \] where $\bv = [\u_0;\ldots;\u_k;\v_i]$ is the list that appends $\v_i$ to $\bu$. Also, \[ \dimaff(\Omega(V,\bv)) = \dimaff(F) = \dimaff(\Omega(V,\bu)) - 1 = 3 - k - 1, \] because $F$ is a facet. It follows that $\bv \in \bVo{k+1}$. This proves the implication in the forward direction. To prove the converse, let $\bv\in \bVo{k+1}$, where $\trunc{\bv}{k} = \bu$. Elementary verifications show that $\Omega(V,\bv)\subset \Omega(V,\bu)$ and that this set is nonempty if $k<3$. By Lemma~\ref{lemma:omega-face} and Lemma~\ref{lemma:webster}, $\Omega(V,\bv)$ is a face of $\Omega(V,\bu)$. By the definition of $\bVo{\wild }$, \[ \dimaff(\Omega(V,\bv)) = 3 - (k+1) = \dimaff(\Omega(V,\bu)) -1. \] It follows that $\Omega(V,\bv)$ is a facet of $\Omega(V,\bu)$. \end{proof} \subsection{partitioning space}\label{partitioning space} Each Rogers simplex is given as the convex hull of its set of extreme points. The extreme points $\omega(d_i\bu)$ are defined by recursion. \begin{definition}[$\omega$] \label{def:omega} \preamble{ \guid{JJGTQMN} \formaldef{$\omega_k$} {omega\_list\_n} \formaldef{$\omega$}{omega\_list} } % Let $V$ be a saturated packing and let $\bu=[\u_0;\ldots]\in \bVo{k}$ for some $k$. Define points $\omega_j=\omega_j(V,\bu)\in\vecR{3}$ by recursion over $j\le k$ (Figure~\ref{fig:rogers-omega}). \begin{align*} \omega_{0\phantom{+1}} &= \u_0,\\ \omega_{j+1} &=\text{the closest point to } \omega_j \text{ on }\Omega(V,\trunc{\bu}{j+1}). \end{align*} Set $\omega(V,\bu) = \omega_{k}(V,\bu)$, when $\bu\in \bVo{k}$. 
The set $V$ is generally fixed and is dropped from the notation.
\end{definition}
\figHFFTUNW % fig:rogers-omega
\claim{The point $\omega(\bu)$ exists when $\bu\in \bVo{k}$.} Indeed, the set $\Omega(V, \bu)$ is nonempty, convex, and compact. Thus, by convex analysis, the closest point $\omega( \bu)$ exists uniquely.
The point $\omega_k$ depends on $\bu$ only through the truncation $\trunc{\bu}{k}$, so that
\[
\omega_k(\bu) = \omega_k(\trunc{\bu}{k})=\omega(\trunc{\bu}{k}).
\]
\indy{Notation}{zzz@$\omega$ (extreme points of Rogers simplex)}%
\begin{definition}[R,~Rogers simplex] \label{def:Rogers-simplex}
\preamble{
\guid{PHZVPFY}
\formaldef{$R$}{rogers}
\indy{Notation}{R@$R$ (Rogers simplex)}%
}
%
Let $V\subset\vecR{3}$ be a saturated packing. For $\bu\in \bVo{k}$, let
\[
R(\bu) = \op{conv}\{\omega( \trunc{\bu}{0}), \omega( \trunc{\bu}{1}),\ldots,\omega( \trunc{\bu}{k})\}.
\]
The set $R(\bu)$ is called the Rogers simplex of $\bu$.
\end{definition}
Each Voronoi cell can be partitioned into Rogers simplices (Figure~\ref{fig:rogers-random}).
\figBUGZBTW % fig:rogers-random
\begin{lemma}[Rogers decomposition] \label{lemma:Rogers-d}
\preamble{
\guid{GLTVHUM}
}
For any saturated packing $V\subset\vecR{3}$, and any $\u_0\in V$,
\begin{equation}
\Omega(V,\u_0) = \bigcup \setcomp{ R(\bv) }{ \bv\in \bVo{3},~\trunc{\bv}{0} =[\u_0]}.
\end{equation}
Consequently,
\[
\vecR{3} = \bigcup\, \setcomp{ R(\bv) }{ \bv\in\bVo{3}}.
\]
\end{lemma}
\begin{proof}
The proof uses standard facts about convex sets and polyhedra from Section~\ref{sec:poly}. By the covering of $\vecR{3}$ by Voronoi cells in \eqref{eqn:vor-rn}, it is enough to show that each Voronoi cell is covered by Rogers simplices.
Let $\bu\in \bVo{j}$ for $j<3$. Consider the following set:
\[
N = \left\{k\in\Nat\mid j\le k\le 3, ~~\Omega(V,\bu) = \bigcup_{\bv \in \bVo{k},~\trunc{\bv}{j}=\bu} \op{conv}(O_k \cup\Omega(V,\bv)) \right\},
\]
where $O_k = \{\omega(\trunc{\bv}{j}),\ldots,\omega(\trunc{\bv}{k-1})\}$.
\claim{We claim $N = \setenumrange{j}{3}$.} Indeed, to see that $j\in N$, we note that
\[
\Omega(V,\bu) = \op{conv}(\Omega(V,\bu)),
\]
which holds by the convexity of the polyhedron $\Omega(V,\bu)$.
We assume that $k\in N$ and consider the membership condition of $N$ for $k+1$. We may assume that $k+1\le 3$. Then
\begin{alignat*}{4}
&\phantom{=}&&\bigcup _{\bv \in \bVo{k+1},~\trunc{\bv}{j}=\bu} \op{conv}(O_{k+1} \cup\Omega(V,\bv)) \vspace{6pt}\\
&=&&\bigcup _{\bv \in \bVo{k+1},~\trunc{\bv}{j}=\bu} \op{conv}(O_{k} \cup\op{conv}(\braced{\omega(\trunc{\bv}{k})} \cup\Omega(V,\bv))) \vspace{6pt}\\
&=&&\bigcup _{\bw\in \bVo{k},~\trunc{\bw}{j}=\bu,~~} \bigcup_{\bv \in \bVo{k+1},~\trunc{\bv}{k}=\bw} \op{conv}(O_{k} \cup\op{conv}(\braced{\omega(\trunc{\bv}{k})} \cup\Omega(V,\bv))) \vspace{6pt}\\
&=&&\bigcup _{\bw\in \bVo{k},~\trunc{\bw}{j}=\bu\phantom{+1}} \op{conv}(O_{k} \cup \Omega(V,\bw) ) \vspace{6pt}\\
&=&&\quad\Omega(V,\bu).
\end{alignat*}
The induction hypothesis is used in the last step. This proves $k+1\in N$, and induction gives $N=\setenumrange{j}{3}$.
Consider the extreme case $j=0$ and $k=3$. The set $\Omega(V,\bv)$ reduces to $\braced{\omega(\bv)}$ and the convex hull becomes
\[
\op{conv}(O_{k}\cup \Omega(V,\bv)) = R(\bv)
\]
when $\bv\in \bVo{3}$. This gives
\begin{equation}
\Omega(V,\u_0) = \bigcup \setcomp{ R(\bv) }{ \bv\in \bVo{3},~\trunc{\bv}{0} =[\u_0]}.
\end{equation}
This proves the lemma.
\end{proof}
\figELMXAFH % fig:rogers-ill-defined
Although the Rogers simplex $R(\bu)$ need not determine the parameter $\bu$ (Figure~\ref{fig:rogers-ill-defined}), the intersection of two different Rogers simplices is a null set.
\begin{lemma}[Rogers disjoint] \label{lemma:R-inter}
\preamble{
\guid{DUUNHOR}
}
%
Let $V$ be a saturated packing and let $\bu,\bv\in \bVo{3}$ be lists such that $R(\bu)\ne R(\bv)$. Then the intersection
\[
R(\bu)\cap R(\bv)
\]
is contained in a plane (and hence has measure zero).
\end{lemma}
This result and the previous lemma show that the simplices $R(\bu)$ partition Euclidean three-space.
\begin{proof}
We may assume that the affine dimension of $R(\bu)$ is three, for otherwise $R(\bu)$ is contained in a plane. Similarly, we may assume that the affine dimension of $R(\bv)$ is three.
Let $\bu = [\u_0;\ldots]$ and $\bv = [\v_0;\ldots]$. Let $k$ be the first index such that
\[
\Omega(V,\listdots{\u}{0}{k}) \ne \Omega(V,\listdots{\v}{0}{k}).
\]
\claim{Such an index $k$ exists.} Indeed, the definition of points $\omega(\trunc{\bu}{i})$ depends on $\bu$ only through the sets $\Omega(V,\trunc{\bu}{j})$. Hence, $R(\bu)\ne R(\bv)$ implies that the two sequences $\Omega(V,\wild )$ must differ at some index.
We have $\omega_i(\bu) = \omega_i(\bv)$, for $i<k$. Select $\w\in R(\bu)\cap R(\bv)$. Write
\[
\w = \sumupto{j}{0}{3}{s_j \omega_j(\bu)} = \sumupto{j}{0}{3}{t_j \omega_j(\bv)}, \text{ where } \sumupto{j}{0}{3}{s_j} = \sumupto{j}{0}{3}{t_j} = 1.
\]
Set $\sigma_i = \sumupto{j}{i}{3}{s_j}$.
\claim{We claim that $s_i = t_i$, and $\sigma_{i+1} = \sumupto{j}{i+1}{3}{t_j}$, for $\fordots{i}{0}{k-1}$.} Indeed, the proof is an induction on $i$. Assume that the claim holds for all indices less than $i$ so that
\[
\sumupto{j}{i}{3}{s_j \omega_j(\bu)} = \sumupto{j}{i}{3}{t_j \omega_j(\bv)}.
\]
We apply Lemma~\ref{lemma:scale2} to the points
\[
\p_0=\omega_i(\bu)=\omega_i(\bv),\quad \p = \sumupto{j}{i+1}{3}{\frac{s_j}{\sigma_i} \omega_j(\bu)},\quad \p' = \sumupto{j}{i+1}{3}{\frac{t_j}{\sigma_i} \omega_j(\bv)}
\]
in the polyhedron $\Omega(V,\listdots{\u}{0}{i})$ to obtain the induction step $s_i=t_i$.
Let
\[
\Omega'=\Omega(V,\listdots{\u}{0}{k}) \cap \Omega(V,\listdots{\v}{0}{k})= \Omega(V,[\u_0;\ldots;\u_{k};\v_{k}]).
\]
The claim implies that
\[
\frac{1}{\sigma_k}\sumupto{j}{k}{3}{s_j \omega_j(\bu)} = \frac{1}{\sigma_k}\sumupto{j}{k}{3}{t_j \omega_j(\bv)} \in\Omega'.
\]
It follows that the intersection $R(\bu)\cap R(\bv)$ lies in the convex hull $C$ of
\[
\{\omega([\u_0]),\ldots,\omega( [\u_0;\ldots; \u_{k-1}])\}
\]
and $\Omega'$.
The set $\Omega'$ lies in a facet of $\Omega(V,\trunc{\bu}{k})$. Hence, the affine dimension of $\Omega'$ is at most $3-k-1=2-k$. In general, if a set $A$ has affine dimension $r$, then the affine dimension of $\op{conv}(\braced{\p}\cup A)$ is at most $r+1$. It follows that the affine dimension of $C$ is at most $k + (2-k) = 2$. The intersection is thus contained in a plane.
\end{proof}
\subsection{circumcenter}\label{circumcenter}
The extreme points of a Rogers simplex are closely related to the circumcenter of various subsets of $V$. This subsection develops the connection between Rogers simplices and circumcenters.
\begin{definition}[circumcenter,~circumradius] \label{def:circumcenter}
\preamble{
\guid{IFLFHKT}
\formaldef{circumcenter}{circumcenter}%
\formaldef{circumradius}{radV}%
\indy{Notation}{S@$S$ (finite subset of $\vecR{3}$)}%
}
%
Let $S\subset\vecR{n}$.
A point $\p$ is a \newterm{circumcenter} of $S$ if it is an element in the affine hull of $S$ that is equidistant from every $\v\in S$. If $S$ has circumcenter $\p$, then the common distance $\norm{\p}{\v}$ for all $\v\in S$ is the \newterm{circumradius} of $S$.
\end{definition}
The circumcenter comes as a solution to a system of linear equations. We pause to review a standard result from the theory of linear algebra, asserting the existence of a solution to a system of equations. Recall that a finite set $S$ is \fullterm{affinely~independent}{affine!independence} if $\dimaff(S) = \card(S) -1$.
% \formaldef{affinely independent}{\textasciitilde affine\_dependent s}%
\begin{lemma}[linear systems] \label{lemma:affine-system}
\preamble{
\guid{QXSKIIT}
}
%
Let $S=\setenumdots{\v}{0}{m}\subset \vecR{n}$ be an affinely independent set of cardinality $m+1$. Then every system of equations
\[
\p \cdot (\v_i - \v_0) = b_i-b_0,\quad \text{for } \fordots{i}{1}{m}
\]
has a unique solution $\p$ that lies in the affine hull of $S$.
\end{lemma}
\begin{proof}
This is a standard result from linear algebra. We sketch a proof for the sake of completeness. Let $\w_i = \v_i-\v_0$ and replace $b_i-b_0$ with $b_i$. The lemma reduces to the following claim. Let $S' = \setenumdots{\w}{1}{m}$ be a \newterm{linearly independent} set of cardinality $m$. Then every system of equations
\[
\p \cdot \w_i = b_i,\quad \text{for } \fordots{i}{1}{m}
\]
has a unique solution $\p$ that lies in the linear span of $S'$.
\claim{A solution is unique.} Indeed, the difference $\p = \p'-\p'' = \sumupto{i}{1}{m}{ s_i \w_i}$ of two solutions $\p',\p''$ satisfies
\[
\normo{\p}^2=\p\cdot\p = \sumupto{i}{1}{m}{ s_i \w_i} \cdot (\p' - \p'') = \sumupto{i}{1}{m}{ s_i (b_i-b_i)}= 0.
\]
It follows that $\p=\orz$ and $\p'=\p''$. This proves uniqueness.
Let $W$ be the linear span of $\setenumdots{\w}{1}{m}$. The image of the map $W\to\vecR{m}$, $\p\mapsto (\p\cdot\w_1,\ldots,\p\cdot\w_m)$ is a linear space and is therefore an affine set.
\claim{A solution exists; that is, the image is all of $\,\vecR{m}$.} Otherwise, by Lemma~\ref{lemma:aff-u} some equation must hold; that is, there exists $\u\ne \orz$ such that $\u\cdot \q =b$ for every point $\q$ in the image. As $\orz$ lies in the image, $b=0$. Write $\p = \sumupto{i}{1}{m}{ u_i \w_i}\in W$ and let $\q\in\vecR{m}$ be the image of $\p\in W$. Then
\[
\normo{\p}^2 = \p\cdot\p = \sumupto{i}{1}{m}{ u_i (\p \cdot \w_i)} = \u\cdot \q = 0.
\]
Thus $\p=\orz$, so that $\u=\orz$ by the linear independence of $S'$. We have reached a contradiction.
\end{proof}
\begin{lemma}[circumcenter exists] \label{lemma:circumcenter exists}
\preamble{
\guid{OAPVION}
}
%
Let $S\subset \vecR{n}$ be a nonempty affinely independent set. Then there exists a unique circumcenter of $S$.
\end{lemma}
\begin{proof}
Write $S=\setenumdots{\v}{0}{m}$. A point $\p$ is a circumcenter if and only if it is a point in the affine hull of $S$ that satisfies the system of equations:
\[
\norm{\p}{\v_i}^2 = \norm{\p}{\v_0}^2,\qquad \fordots{i}{1}{m}.
\]
Equivalently,
\[
\p\cdot (\v_i-\v_0) = b_i-b_0,\qquad \fordots{i}{1}{m},
\]
where $b_i = \normo{\v_i}^2/2$. By Lemma~\ref{lemma:affine-system}, this system of equations has a unique solution.
\end{proof}
The following lemma describes the structure of the affine hull of a face of a Voronoi cell. It describes the affine hull as an intersection of bisecting planes and shows that it meets $\op{aff}(S)$ orthogonally at the circumcenter of $S$.
\begin{lemma}[] \label{lemma:aff-center}
\preamble{
\guid{MHFTTZN}
}
%
Let $V$ be a saturated packing and let $k\le 3$.
Let $S = \setenumdots{\u}{0}{k}$, where $\bu=\listdots{\u}{0}{k}\in \bVo{k}$. Then
\begin{enumerate}
\item $\dimaff (S)= k$. (In particular, $\card\setenumdots{\u}{0}{k}=k+1$, and $S$ is affinely independent.)
\item $\aff{\Omega(V,\bu)}= \cap_{i=1}^k \AA{\u_0}{\u_i}.$
\item $\aff{\Omega(V,\bu)} \cap \aff(S) = \braced{\q}$, where $\q$ is the circumcenter of $S$.
\item $(\aff{\Omega(V,\bu)}-\q) \perp (\aff(S)-\q)$, where $X-\q$ denotes the translate of a set $X$ by $-\q$, and $(\perp)$ is the orthogonality relation.
\end{enumerate}
\end{lemma}
\indy{Notation}{7@$\perp$}%
\begin{proof}
The proof is by induction on $k$.
\claim{The lemma holds when $k=0$.} Indeed, $\Omega(V,\u_0)$ contains an open ball centered at $\u_0$, so its affine hull is $\vecR{3}$. This is the first conclusion. The other conclusions reduce to trivial facts: $\dimaff\vecR{3} = 3$, $\dimaff\braced{\u_0}=0$, $\vecR{3}\cap \braced{\u_0} = \braced{\u_0}$, and $\vecR{3}\perp \braced{\orz}$.
Assume the induction hypothesis for $k$. We may assume that $k<3$ because otherwise there is nothing further to prove. Let $\bu\in \bVo{k+1}$. Let $\bv = \trunc{\bu}{k}\in \bVo{k}$. Let $\q_k$ be the circumcenter of (the point set of) $\bv$. Write $A_j = \cap_{i=1}^j \AA{\u_0}{\u_i}$; $B_j = \aff(\Omega(V,\trunc{\bu}{j}))$; $C_j = \aff\setenumdots{\u}{0}{j}$; $S_j = \setenumdots{\u}{0}{j}$. By the induction hypothesis $A_k = B_k$.
\claim{We claim $\dimaff S_{k+1} =k+1$.} Otherwise, by general background facts about affine sets, $\u_{k+1}\in C_k$. Write $\u_{k+1}-\q_k=\sumupto{i}{0}{k}{ t_i (\u_i-\q_k)}$. If $\p\in A_k$, then by the orthogonality induction hypothesis:
\begin{align*}
(\u_{k+1}-\q_k)\cdot (\p-\q_k) &= \sumupto{i}{0}{k}{ t_i (\u_i-\q_k)\cdot (\p-\q_k)} = 0,
\intertext{ and consequently }
\norm{\u_{k+1}}{\p}^2 - \norm{\u_0}{\p}^2 &= \norm{\u_{k+1}}{\q_k}^2 - \norm{\u_0}{\q_k}^2.
\end{align*}
Thus, if $A_k$ meets $\AA{\u_0}{\u_{k+1}}$ at some point $\p$, then both sides of this equation vanish and $A_k\subset \AA{\u_0}{\u_{k+1}}$. This is contrary to $0\le \dimaff(A_{k+1}) = \dimaff(A_k) - 1$, which holds because $\bu\in \bVo{k+1}$ with $k<3$.
\claim{We claim that $B_{k+1} = A_{k+1}$.} Indeed, by definition, $B_{k+1}\subset A_{k+1}\subset A_k$. Also,
\[
\dimaff B_{k+1} = 3 - (k+1) \le \dimaff{A_{k+1}} \le \dimaff A_k = 3 - k.
\]
Hence, by general background on affine sets, if $A_{k+1}\ne A_k$, then $B_{k+1}=A_{k+1}$. Suppose for a contradiction that $A_k = A_{k+1}$. Then $\Omega(V,\bv) \subset \Omega(V,\bu) = \Omega(V,\bv)\cap \AA{\u_0}{\u_{k+1}} \subset \Omega(V,\bv)$, so that $B_k = B_{k+1}$. This contradicts the defining conditions of $\bVo{k+1}$.
\claim{We claim that $A_{k+1}\cap C_{k+1} = \braced{\q_{k+1}}$.} Indeed, by the definition of $A_{k+1}$, any point in this affine set is equidistant from every point of $S_{k+1}$. By the definition of $C_{k+1}$, the intersection lies in the affine hull of $S_{k+1}$. This uniquely characterizes the circumcenter.
\claim{Finally, $(A_{k+1} -\q_{k+1})\perp (C_{k+1}-\q_{k+1})$.} Indeed, if $\p\in A_{k+1}$, then
\begin{align*}
0 &=\norm{\p}{\u_i}^2 -\norm{\p}{\u_0}^2\\
&=\norm{(\p-\q_{k+1})}{(\u_i-\q_{k+1})}^2 -\norm{(\p-\q_{k+1})}{(\u_0-\q_{k+1})}^2\\
&=-2 (\p-\q_{k+1})\cdot (\u_i-\u_0).
\end{align*}
Since the linear span of the points $\u_i-\u_0$ is all of $C_{k+1}-\q_{k+1}$, the final claim and the proof by induction ensue.
\end{proof}
\begin{definition}[h] \label{def:h,hl}
\preamble{
\guid{CHNGQBD}
\formaldef{h}{hl}%
\indy{Notation}{h@$h$ (circumradius)}%
}
%
If $\bu=[\u_0;\u_1;\ldots;\u_k]$ is a list of points in $\vecR{n}$, then let $h(\bu)$ be the circumradius of its point set $\setenumdots{\u}{0}{k}$.
\end{definition}
\begin{remark}
The constant $r=\sqrt2$ is the smallest real number $r$ such that there exist four cocircular points in the plane with pairwise distances at least $2$ and with circumradius $r$ (Figure~\ref{fig:rogers-sqrt2}). The four points are the vertices of a square of side length $2$. Eight two-dimensional Rogers simplices meet at the circumcenter of the square, but when $r<\sqrt2$, only six Rogers simplices meet at the circumcenter. In general, at $r=\sqrt2$, certain degeneracies start to appear in $n$ dimensions that cannot occur for a smaller radius. To avoid degeneracies, many lemmas in this section assume that the circumradius is less than $\sqrt2$.
\end{remark}
\figNOCHOTB % fig:rogers-sqrt2
\begin{lemma}[nondegeneracy] \label{lemma:sqrt2-close}
\preamble{
\guid{XYOFCGX}
}
%
Let $V\subset\vecR{3}$ be a saturated packing. Let $S\subset V$ be an affinely independent set with circumcenter $\p$. Assume that the circumradius of $S$ is less than $\sqrt2$. Then $\norm{\v}{\p}>\norm{\u}{\p}$ for all $\u\in S$ and all $\v\in V\setminus S$.
\end{lemma}
\begin{proof}
Assume for a contradiction that there is a point $\w\in V\setminus S$ satisfying
\begin{equation}\label{eqn:closest}
\norm{\w}{\p}\le \norm{ \u}{\p}, \quad\text{for all } \u\in S.
\end{equation}
The angles $\arc_V(\p,\braced{\v, \u})$ are obtuse for distinct elements $\v,\u$ of $ S\cup\braced{\w}$ because of the law of cosines and
\[
\norm{\p}{\u} < \sqrt2,\quad \norm{\p}{ \v} <\sqrt2,\quad \norm{\u}{ \v} \ge 2.
\]
Let $S=\setenumdots{\u}{0}{k}$. A case-by-case argument follows for each $k\in\braced{0,1,2,3}$.
\begin{enumerate}
\setcounter{enumi}{-1}
\item The case $k=0$ is trivial.
\item In the case $k=1$, the points $\p, \u_0, \u_1$ are collinear and cannot give two obtuse angles.
\item In the case $k=2$, let $\w'$ be the projection of $\w$ to the plane containing $\p, \u_0, \u_1, \u_2$. Under orthogonal projection, the angles remain obtuse:
\[
(\u_i-\p)\cdot (\w-\p) = (\u_i-\p)\cdot (\w'-\p) <0.
\]
The four points $\w', \u_0, \u_1$, and $\u_2$ can be arranged cyclically around $\p$, according to the polar cycle, each forming an obtuse angle with the next. A circle around $\p$ cannot give four obtuse angles because the sum is $2\pi$.
\item In the case $k=3$, assume that $ \u_0,\ldots, \u_3$ are labeled according to the azimuth cycle around the line $\op{aff}\braced{\p,\w}$. Consider the dihedral angle
\[
\gamma=\gamma_i=\dih(\braced{\p,\w},\braced{ \u_i, \u_{i+1}})
\]
of the simplex $\braced{\p,\w,\u_i,\u_{i+1}}$ along the edge $\braced{\p,\w}$. By the spherical law of cosines, the angle $\gamma$ of the spherical triangle with sphere center $\p$ is given in terms of the edges as
\[
\cos c - \cos a \cos b = \sin a \sin b \cos \gamma.
\]
The angles $a,b,c$ are obtuse, so that both terms on the left-hand side are negative. Thus, $\gamma>\pi/2$. The angle $\op{azim}(\p,\w,\u_i,\u_{i+1})$ is then also greater than $\pi/2$ by Lemma~\ref{lemma:dih-azim}. This is impossible, as the sum of the four azimuth angles $\gamma$ is $2\pi$ by Lemma~\ref{lemma:2pi-sum}.
\end{enumerate} \end{proof} With nondegeneracy established, we can now give further details about the extreme points of a Rogers simplex and their relationship to the circumcenter of a subset $S$ of the packing $V$. \begin{lemma}[Rogers simplex and circumcenter] \label{lemma:v2} \preamble{ \guid{XNHPWAB} } Let $V$ be a saturated packing. Let $\bu=\listdots{\u}{0}{k}\in \bVo{k}$ for some $k\le 3$, and let $S=\setenumdots{\u}{0}{k}$ be the point set of $\bu$. Assume that $h(\bu)<\sqrt2$. Then \begin{enumerate} \item% $\omega(\bu)$ is the circumcenter of $S$. \item% $\omega(\bu)\in\op{conv}(S)$. \item% The set $\setcomp{\omega(\trunc{\bu}{j})}{ j\le k}$ has affine dimension $k$. \item The sequence $h(\trunc{\bu}{j})$ is strictly increasing in $j$. \end{enumerate} \end{lemma} \indy{Index}{convex hull}% \begin{proof} The four conclusions of the lemma are proved separately. \begin{enumerate} \item \claim{We claim that $\omega(\bu)$ is the circumcenter of $S$.} Indeed, by definition, if $\bu\in \bVo{k}$, then \[ \dimaff\Omega(V,\listdots{\u}{0}{k}) = 3-k. \] The case $k=0$ of the claim is trivially satisfied. Assume by induction the result holds for natural numbers up to $k$. Now consider the case $k+1$. Let $\bu\in \bVo{k+1}$ and let $S_{k+1}$ be the point set of $\bu$. By the induction hypothesis, $\omega(\trunc{\bu}{k})$ is the circumcenter of the point set of $\trunc{\bu}{k}$. Let $\p$ be the point in $A=\aff(\Omega(V,\bu))$ closest to $\omega(\trunc{\bu}{k})$. By Lemma~\ref{lemma:aff-center}, the circumcenter of $S_{k+1}$ is the point of intersection of orthogonal affine sets $\aff(S_{k+1})$ and $A$. Thus, the circumcenter equals the unique point $\p$ of $A$ closest to the point $\omega(\trunc{\bu}{k})$ in $\aff(S_{k+1})$. By Lemma~\ref{lemma:sqrt2-close}, $\p\in\Omega(V,\bu)$. Thus, $\p=\omega(\bu)$. The claim ensues. \item\claim{We claim $\omega(\bu)\in\op{conv}(S)$.} Otherwise, select $\v\in S$ such that $\aff(S')$ separates $\omega(\bu)$ from $ \v$, where $S'=S\setminus\braced{\v}$. Let $\p'$ (resp. $\p=\omega(\bu)$) be the circumcenter of $S'$ (resp. $S$). When $\u\in S'$, the law of cosines gives \begin{align*} \norm{ \u}{\p}^2 &= \norm{\u}{\p'}^2 + \norm{\p'}{\p}^2\\ \norm{ \v}{\p}^2 &\ge \norm{\v}{\p'}^2 + \norm{\p'}{\p}^2. \end{align*} This gives $\norm{\v}{\p'}\le \norm{\u}{\p'}$, which is contrary to Lemma~\ref{lemma:sqrt2-close}. \item\claim{The set $\setcomp{\omega(\trunc{\bu}{j})}{ j\le k}$ has affine dimension $k$.} % % Lemma~\ref{lemma:aff-center} implies that the vectors ${\omega(\trunc{\bu}{i+1})}-{\omega(\trunc{\bu}{i})}$ are mutually orthogonal. Thus, the claim about affine dimension easily follows if we show that these vectors are nonzero. Otherwise, the circumcenter $\omega(\trunc{\bu}{i})$ of $S_i=\setenumdots{\u}{0}{i}$ has an equally close point $ \u_{i+1}\in V\setminus S_i$, which is impossible by Lemma~\ref{lemma:sqrt2-close}. \item\claim{The sequence $h(\trunc{\bu}{j})$ is strictly increasing in $j$.} Indeed, by the Pythagorean theorem, \begin{equation} \norm{\omega(\trunc{\bu}{j})}{\omega(\trunc{\bu}{0})}^2 = \sumupto{i}{1}{j}{\norm{\omega(\trunc{\bu}{i})}{\omega(\trunc{\bu}{i-1})}^2}. \end{equation} So the result follows from the previous claim. \end{enumerate} \end{proof} \begin{lemma} \label{lemma:h-omega} \preamble{ \guid{WAUFCHE} } % Let $V$ be a saturated packing. Let $\bu =[\u_0;\ldots]\in \bVo{k}$ for some $k$. Then $h(\bu)\le \norm{\omega(\bu)}{\u_0}$. Moreover, if $h(\bu)<\sqrt2$, then $h(\bu)=\norm{\omega(\bu)}{\u_0}$. 
\end{lemma}
\begin{proof}
By construction, the point $\omega(\bu)$ belongs to $\Omega(V,\bu)$ and is therefore equidistant from the points in $S=\setenumdots{\u}{0}{k}$. The orthogonal projection of $\omega(\bu)$ to $\op{aff}(S)$ is the circumcenter of $S$. The orthogonal projection cannot increase distances, and the inequality ensues. If $h(\bu)<\sqrt2$, then $\omega(\bu)$ is already the circumcenter by Lemma~\ref{lemma:v2}, so that equality holds.
\end{proof}
\begin{lemma} \label{lemma:omega-uv}
%
\preamble{
\guid{NJIUTIU}
}
%
Let $V$ be a saturated packing. Let $\bu,\bv\in \bVo{3}$. Suppose that $R(\bu)=R(\bv)$ and that $\dimaff R(\bu)=3$. Then $\omega_i(\bu)=\omega_i(\bv)$, for $i=0,1,2,3$.
\end{lemma}
\begin{proof}
Let $R=R(\bu)=R(\bv)$. The set
\[
W =\{\omega_0(\bu),\ldots,\omega_3(\bu)\}
\]
is characterized as the set of extreme points of the simplex $R$. It is the same for both $\bu$ and $\bv$. Since $R\subset \Omega(V,\u_0)\cap \Omega(V,\v_0)$, and the Rogers simplex $R$ has full dimension, we must have $\u_0=\v_0$.
Inductively, we may determine $\omega_i = \omega_i(\bu)=\omega_i(\bv)$ as follows. The point $\omega_0$ is $\u_0=\v_0$. The point $\omega_{i+1}$ is the closest point of $\op{conv}(W\setminus\setenumdots{\omega}{0}{i})$ to $\omega_{i}$. Note that $\op{conv}(W\setminus\setenumdots{\omega}{0}{i})$ is a subset of the set $\Omega(V,d_{i+1}\bu)$ used to define $\omega_{i+1}(\bu)$ (Definition~\ref{def:omega}), and it contains $\omega_{i+1}$. This description of the points $\omega_i$ is independent of $\bu\in \bVo{3}$ such that $R=R(\bu)$.
\end{proof}
\begin{lemma} \label{lemma:dk-uv}
%
\preamble{
\guid{TEZFFSK}
}
%
Let $V$ be a saturated packing. Let $\bu=[\u_0;\ldots]\in \bVo{3}$ be such that $\dimaff R(\bu)=3$. Select $k\le 3$ such that $h(d_k\bu) < \sqrt2$.
%
%
Suppose that $R(\bu)=R(\bv)$, for some $\bv\in \bVo{3}$. Then
\[
d_k\bu = d_k\bv.
\]
\end{lemma}
\begin{proof}
Write $\omega_i$ for $\omega_i(\bu)$. By Lemma~\ref{lemma:omega-uv}, these points are determined by $R(\bu)$.
%
By Lemma~\ref{lemma:v2}, $h(d_i\bu)<\sqrt2$, and $\omega_i$ is the circumcenter of $\setenumdots{\u}{0}{i}$, for all $i\le k$.
Since $R(\bu)=\op{conv}\setenumdots{\omega}{0}{3}$ has affine dimension $3$, the points $\setenumdots{\omega}{0}{k}$ are affinely independent. These circumcenters are constructed as points in the affine hull of $\setenumdots{\u}{0}{k}$. Hence $\u_0,\ldots,\u_k$ are also affinely independent.
By Lemma~\ref{lemma:sqrt2-close}, we have the following recursive description of the points $\u_i$ in terms of $\omega_i$, for $i\le k$. The point $\u_0$ is $\omega_0$. The point $\u_{i+1}$ is the unique $\v\in V\setminus \setenumdots{\u}{0}{i}$ such that
\[
\norm{\v}{\omega_{i+1}} = \norm{\u_0}{\omega_{i+1}}.
\]
This description of $\listdots{\u}{0}{k}=d_k\bu$ depends on $\bu$ only through $R(\bu)$.
\end{proof}
\subsection{Delaunay simplex}\label{Delaunay simplex}
The Delaunay decomposition of space into simplices is dual to the Voronoi cell decomposition. It is presented as a collection of $k$-simplices with vertices in $V$, for $k=1,2,3$. The Delaunay $1$-simplices are defined as the edges between two points in a packing $V$ whose Voronoi cells have a common facet.\footnote{The Delaunay decomposition may be degenerate if the points of $V$ are not in general position. This book confines itself to the nondegenerate situation.} A $2$-simplex is given with vertices at three points in $V$ if their Voronoi cells have a common edge.
A $3$-simplex is given for every four points in $V$ whose Voronoi cells have a common extreme point.
A Delaunay $3$-simplex is the convex hull of four points in the packing $V$. Under a nondegeneracy condition (on the circumradius of the set of points), we may construct a Delaunay simplex as a union of Rogers simplices. To this end, we examine the set of all Rogers simplices around a common extreme point. The convex hull of a nondegenerate set $S\subset V$ of four points consists of $4!$ Rogers simplices, each facet of the convex hull consists of $3!$ pieces, and so forth (Lemma~\ref{lemma:Rconv}). In brief, the Rogers simplices give every nondegenerate Delaunay simplex an identical simplicial structure.
%
Recall that $\op{Sym}(k+1)$ is the \newterm{group} of all permutations on the set $\setenumrange{0}{k}$. Let $\bu = \listdots{\u}{0}{k}$ be a list of length $k+1$. For any \newterm{permutation} $\rho\in\op{Sym}(k+1)$, let $\rho_*(\bu)$ be the \newterm{rearrangement} given by
\[
\rho_*(\bu)_i = \u_{\rho^{-1} i}, %
\]
where $\u_i$ denotes the $i$th element of a list $\bu$.
\formaldef{permutation}{permutes}%
\formaldef{$\rho_*$}{left\_action\_list}%
\indy{Notation}{zzr@$\rho$ (permutation)}%
\indy{Notation}{Sym@$\op{Sym}$ (symmetric group)}%
The following lemma shows that a list and its rearrangements give the same extreme point of a Rogers simplex.
\begin{lemma}[extreme point rearrangement] \label{lemma:perm-Vk}
\preamble{
\guid{YIFVQDV}
}
%
Let $V$ be a saturated packing. Let $\bu\in \bVo{k}$. Assume that $h(\bu)<\sqrt2$. Let $\bv$ be any rearrangement of $\bu$ under a permutation. Then $\bv\in \bVo{k}$ and $\omega(\bu) = \omega(\bv)$.
\end{lemma}
\begin{proof}
Let $\bv = \listdots{\v}{0}{k}$. Let $S_j = \setenumdots{\v}{0}{j}$, $\Omega_j = \Omega(V,\trunc{\bv}{j})$, $A_j=\cap_{i=1}^j \AA{\v_0}{\v_i}$, and $a_j = \dimaff(A_j)$, for $0\le j\le k$. By convention, set $A_0 = \vecR{3}$, so that $a_0=\dimaff(A_0) = 3$. Also, set $a_{-1} = 4$ by convention.
The set $S_k$ is the point set of $\bu$, which is affinely independent by Lemma~\ref{lemma:aff-center}. The set $S_j$ is also affinely independent. Let $\p_j$ be the circumcenter of $S_j$. The circumradius of $S_j$ is at most the circumradius of $S_k$, which by assumption is less than $\sqrt2$.
\claim{We claim that $\dimaff \Omega_j = a_j$, when $0\le j\le k$.} By Lemma~\ref{lemma:sqrt2-close}, if $\p=\p_j$, then
\begin{equation}\label{eqn:sqrt2-close}
\norm{\v}{\p} > \norm{\u}{\p}\text{ for all }\u\in S_j \text{ and for all }\v\in V\setminus S_j.
\end{equation}
Select a small neighborhood $U$ of $\p_j$ such that \eqref{eqn:sqrt2-close} holds for all $\p\in U$. By the definition of Voronoi cell, $\Omega_j \cap U=A_j\cap U$. By background facts on affine sets, $\dimaff\Omega_j = \dimaff A_j=a_j$. This gives the claim.
To prove the lemma, we prove the following claim by simultaneous induction on $j$. For all $0\le j\le k$ we have
\begin{align*}
a_j &\ge a_{j-1} - 1\ge 3-j.\\
a_j &= 3-j \text{ if and only if } a_i=3-i \text{ for all } 0\le i\le j.
\end{align*}
The base case $j=0$ is trivial. Assume the induction hypothesis for $j$. We have $A_{j+1} = A_{j}\cap \AA{\v_0}{\v_{j+1}}$. The intersection contains $\p_{j+1}$ and is therefore nonempty. By general background facts on the intersection of an affine set with a hyperplane, $a_{j+1} \ge a_{j}-1$. By the induction hypothesis, $a_{j}-1\ge 3-(j+1)$. If $a_{j+1}=3-(j+1)$, then $a_{j}=3-j$ and by the induction hypothesis $a_{i}=3-i$ for all $0\le i\le j$. This completes the proof of the claim by induction.
We have $a_k = \dimaff A_k=\dimaff \Omega_k$. Moreover, $\Omega_k= \Omega(V,\bu)$, and since $\bu\in \bVo{k}$, it follows that $3-k=\dimaff\Omega(V,\bu)=a_k$. By the established claims, $a_i = 3-i$ for all $0\le i\le k$. This proves $\bv\in\bVo{k}$. Finally, $\omega(\bu) = \omega(\bv)$ because both equal the circumcenter of the point set $S_k$.
\end{proof}
The next lemma shows that the map from permutations to Rogers simplices is one-to-one.
\begin{lemma}[permutations one-to-one] \label{lemma:permutations-one-to-one}
\preamble{
\guid{KSOQKWL}
}
%
Let $V$ be a saturated packing and let $\bu\in \bVo{k}$. Assume that $h(\bu)<\sqrt2$. Let $\rho\in\op{Sym}(k+1)$ such that $R(\bu)= R(\rho_*\bu)$. Then $\rho= I$.
\end{lemma}
\begin{proof}
We assume that $\rho\ne I$, write $\bv = \rho_*\bu$, and prove that $R(\bu)\ne R(\bv)$. By Lemma~\ref{lemma:v2}, the sets $\setcomp{\omega(\trunc{\bu}{j})}{ j\le k}$ and $\setcomp{\omega(\trunc{\bv}{j})}{ j\le k}$ are each affinely independent sets of cardinality $k+1$. By Lemma~\ref{lemma:simplex-poly}, these are sets of extreme points of $R(\bu)$ and $R(\bv)$, respectively. Thus, it is enough to show that the sets of extreme points are unequal.
Let $j$ be the largest index such that $\trunc{\bu}{j}=\trunc{\bv}{j}$. The assumption $\rho\ne I$ implies that $j<k$. Let $\p$ be the circumcenter of $\setenumdots{\u}{0}{j+1}$. By Lemma~\ref{lemma:sqrt2-close},
\[
\norm{\u_0}{\p} = \norm{\u_{j+1}}{\p} < \norm{\v_{j+1}}{\p}.
\]
Thus, $\omega(\trunc{\bu}{j+1}) \ne \omega(\trunc{\bv}{j+1})$. The result ensues.
\end{proof}
To prepare for Lemma~\ref{lemma:Rconv}, we need a preliminary lemma that does some index shuffling for us. It gives explicit representatives of the cosets of $\op{Sym}(k+1)$ in $\op{Sym}(k+2)$.
\begin{definition} \label{def:bui}
\preamble{
\guid{TSIVSKG}
\formaldef{$\bu^i$}{DROP}
}
%
Let $\bu$ be any list. For each $i$, let $\bu^i = [\u_0;\ldots;\hat\u_i;\ldots]$ be the list that drops the $i$th entry.
\end{definition}
\begin{lemma}[coset representatives] \label{lemma:coset-bijection}
\preamble{
\guid{IVFICRK}
}
%
There is a bijection between the set
\[
\setcomp{(i,\sigma)}{ 0\le i\le k+1,\quad \sigma\in \op{Sym}(k+1)}
\]
and $\op{Sym}(k+2)$ such that for any list $\bu$ of length $k+2$
\[
(\rho_*\bu)_j =
\begin{cases}
(\sigma_*(\bu^i))_j&0\le j \le k\\
\bu_i & j=k+1.
\end{cases}
\]
\end{lemma}
\begin{proof}
The bijection sends $(i,\sigma)$ to the permutation $\rho$, where
\[
\rho^{-1} j =
\begin{cases}
\sigma^{-1} j, & \sigma^{-1} j<i\\
(\sigma^{-1}j)+1, & \sigma^{-1} j \ge i\\
i, & j=k+1.
\end{cases}
\]
This has the required properties.
\end{proof}
This lemma shows that each (nondegenerate) Delaunay simplex can be partitioned as a union of Rogers simplices, indexed by the permutation group (Figure~\ref{fig:rogers-fact}).
\figYAJOTSL % fig:rogers-fact
\begin{lemma}[Delaunay simplex] \label{lemma:Rconv}
\preamble{
\guid{WQPRRDY}
}
%
Let $V$ be a saturated packing and let $\bu = \listdots{\u}{0}{k}\in \bVo{k}$. Assume that $h(\bu)<\sqrt2$. Then
\[
\op{conv}\setenumdots{\u}{0}{k} = \bigcup \,\setcomp{ R(\rho_*\bu) }{ \rho\in \op{Sym}(k+1)}.
\]
\end{lemma}
\indy{Notation}{Sym@$\op{Sym}$ (symmetric group)}%
\begin{proof}
The proof is by induction on $k$. The base case of the induction $k=0$ reduces to the trivial assertion: $\op{conv}\braced{\u_0} = \op{conv}\braced{\u_0}$.
\claim{We claim $\bu^i\in \bVo{k}$, when $\bu=\listdots{\u}{0}{k+1}\in \bVo{k+1}$.} Indeed, some permutation $\rho\in \op{Sym}(k+2)$ carries $\bu$ to $\bv=[\u_0;\ldots;\hat\u_i;\ldots;\u_{k+1};\u_i]$. By Lemma~\ref{lemma:perm-Vk}, $\bv\in \bVo{k+1}$, so that $\bu^i = \trunc{\bv}{k}\in \bVo{k}$.
By the induction hypothesis
\begin{equation}\label{eqn:sigma}
\op{conv}(S\setminus\braced{\u_i}) = \bigcup \,\setcomp{ R(\sigma_*\bu^i) }{ \sigma\in \op{Sym}(k+1)},
\end{equation}
where $S = \setenumdots{\u}{0}{k+1}$.
By Lemma~\ref{lemma:simplex-poly}, the facets of the polyhedron $\op{conv}(S)$ are the sets $\op{conv}(S\setminus\braced{\u_i})$. Lemma~\ref{lemma:facet-partition} gives the partition
\begin{equation} \label{eqn:convS}
\op{conv}(S) = \bigcup_{i=0}^{k+1} \op{conv}(\braced{\omega(\bu)} \cup \op{conv}(S\setminus\braced{\u_i})).
\end{equation}
Substitute the formula \eqref{eqn:sigma} into \eqref{eqn:convS} and use the bijection of Lemma~\ref{lemma:coset-bijection} to replace the double union by a single union over $\rho\in \op{Sym}(k+2)$. Background facts in affine geometry then simplify the expression to the desired formula. The proof by induction ensues.
\end{proof}
In summary, by construction the Rogers simplices $R(\bu)$ are compatible with the Voronoi decomposition of space. Under mild restrictions on the circumradius, they can also, by Lemma~\ref{lemma:Rconv}, be reassembled into simplices (the Delaunay simplices) with extreme points at the centers of the packing.
\indy{Index}{Delaunay|see{decomposition}}%
\indy{Index}{decomposition!Delaunay}%
\indy{Index}{decomposition!Voronoi}
%
\section{Cells}\label{cells}
\subsection{definition}\label{cell-definition}
Marchal~\cite{marchal:2009} has proposed an approach to sphere packings that gives some improvements to the original proof in~\cite{Hales:2006:DCG}. He gives a partition of space into cells that is a variant of Rogers's partition into the simplices $R(\bu)$. The main part of the construction is the decomposition obtained by truncating Voronoi cells with a ball of radius $\sqrt2$. In a few carefully chosen situations, the simplices $R(\bu)$ are assembled into larger convex cells (Delaunay cells), as suggested by Lemma~\ref{lemma:Rconv}.
\indy{Index}{Marchal, C.}%
\indy{Index}{decomposition!Marchal}%
\begin{definition}[Marchal cells] \label{def:mcell}
\preamble{
\guid{QEEHXUB}
\formaldef{$i$-cell}{mcell}
\formaldef{$\xi$}{mxi}%
\indy{Index}{cell}%
\indy{Index}{Marchal cell}%
}
%
Let $V$ be a saturated packing. Let
\[
\bu=\listdots{\u}{0}{3}\in \bVo{3}.
\]
Define $\xi(\bu)$ as follows. If $\sqrt2\le h(\trunc{\bu}{2})$, then let $\xi(\bu)=\omega(d_2\bu)$. If $h(\trunc{\bu}{2})<\sqrt2\le h(\bu)$, define $\xi(\bu)$ to be the unique point in
\[
\op{conv}\braced{\omega(d_2\bu),\omega(\bu)}
\]
at distance $\sqrt2$ from $\u_0$. Set $a(\bu)={h(\trunc{\bu}{1})}/{\sqrt2}$.
A set $\cell(\bu,i)\subset\vecR{3}$ is associated with $\bu$ and $i=0,1,2,3,4$. \hfill\break\smallskip
\begin{enumerate}
\setcounter{enumi}{-1}
\item %
The $0$-cell of $\bu$ is defined to be empty unless $\sqrt2 \le h(\bu)$. If this inequality holds, then the $0$-cell is
\[
\cell(\bu,0) = R(\bu)\setminus \openBall{\u_0}{\sqrt2}.%
%
\]
\bigskip
\item The $1$-cell of $\bu$ is defined to be empty unless $\sqrt2 \le h(\bu)$. If this inequality holds, then the $1$-cell is
\[
\cell(\bu,1) = (R(\bu) \cap \bar \openBall{\u_0}{\sqrt2}) \setminus \op{rcone}^0(\u_0,\u_1,a(\bu)).
\]
(The set $\op{rcone}^0(\u_0,\u_1,a)$ is empty when $a>1$.)
\bigskip
\item The $2$-cell of $\bu$ is defined to be empty unless $h(\trunc{\bu}{1})<\sqrt2\le h(\bu)$. If this inequality holds, then the $2$-cell is (with $a=a(\bu)$ as above)
\begin{align*}
\cell(\bu,2) &= \op{rcone}(\u_0,\u_1,a)\cap \op{rcone}(\u_1,\u_0,a)\cap \op{aff}_+(\braced{\u_0,\u_1},\braced{\xi(\bu),\omega(\bu)}).
\end{align*}
\bigskip
\item The $3$-cell of $\bu$ is defined to be empty unless $h(\trunc{\bu}{2}) <\sqrt2 \le h(\bu)$. If this inequality holds, then $\xi(\bu)\in \op{conv} \braced{\omega(\trunc{\bu}{2}),\omega(\bu)}$ %
and the $3$-cell is
\[
\cell(\bu,3) = \op{conv}\braced{ \u_0, \u_1, \u_2,\xi(\bu)}.
\]
\bigskip
\item The $4$-cell of $\bu$ is defined to be empty unless $h(\bu) <\sqrt2$. If this inequality holds, the $4$-cell is
\[
\cell(\bu,4) = \op{conv}\braced{ \u_0, \u_1, \u_2, \u_3}.
\]
\end{enumerate}
\end{definition}
\indy{Notation}{zzo@$\xi$ (cell parameter)}%
\figKVIVUOT % fig:marchal-variety
The $0$- and $1$-cells are subsets of a Rogers simplex $R$ (Figure~\ref{fig:marchal-variety}). By contrast, the $2$-, $3$-, and $4$-cells lie in a union of Rogers simplices. The index $i$ in $\cell(\bu,i)$ indicates the number of points of $V$ that are extreme points of the cell (Figures~\ref{fig:marchal-variety} and \ref{fig:marchal-2d}).
\figBWEYURN % fig:marchal-2d
\subsection{informal discussion}\label{informal}
A \newterm{cell}, short for Marchal cell, can be described in an alternative intuitive way. If $S\subset\vecR{3}$, let
\[
\op{equi}(S,r) = \setcomp{ \p }{ \norm{\p}{\v} = r \text{ for all } \v \in S }.
\]
If $S$ is a finite set of cardinality $k$, of affine dimension $k-1$, and with circumradius less than $r$, then $\op{equi}(S,r)$ is a sphere of dimension $3-k$. In particular, if $k=3$, then $\op{equi}(S,r)$ is a set of two points.
\indy{Notation}{equi@equi (intersection of spheres)}%
\indy{Notation}{C@$C(S)$ (cell-like subset of $\vecR{3}$)}%
Let $V$ be a saturated packing. Define
\[
C(S) = \op{conv} (S\cup\op{equi}(S, \sqrt2) )
\]
for $S\subset V$. The set $C(S)$ is empty if the circumradius of $S$ is greater than $\sqrt2$. The set $C(\emptyset)$ is $\vecR{3}$. The set $C(\braced{\w})$ is a ball of radius $\sqrt2$ with center $\w$. The set $C(\braced{\v,\w})$ is a double cone, $C(\braced{\u,\v,\w})$ a bipyramid, and $C(\{{\mathbf t},\u,\v,\w\})$ is a simplex.
\begin{lemma} \label{lemma:CS-union}
\preamble{
\guid{VXIQEJC}
}
%
Let $V$ be a saturated packing. If $S\subset V$ is not empty, then $C(S)$ is contained in the union of sets
\[
C(S\setminus \braced{\v}), \quad \v\in S.
\]
\end{lemma}
\begin{lemma}
\preamble{
\guid{WHCFBMJ}
}
%
Let $V$ be a saturated packing and let $S\subset V$. Set
\[C'(S) = C(S) \setminus \bigcup_{S\subset S'\subset V} C(S'),\]
where $S'\subset V$ runs over the subsets of cardinality $k+1$ that contain $S$, and $k=\op{card}(S)$. Then $C'(S)$ equals a union of $k$-cells, up to a null set.
\end{lemma}
In other words, up to a null set, the union of $0$-cells is the set of points outside the balls $C(\braced{\v})$, for $\v\in V$. The union of the $1$-cells is the set of points inside the balls $C(\braced{\v})$ but outside the double cones $C(\braced{\u,\w})$, and so forth.
It is possible to base a construction of cells on this lemma, dispensing entirely with Voronoi cells and Rogers simplices. It is quick and intuitive. We have followed a longer path that gives more detail about the structure of cells.
\bigskip
At first, the definition of cells seems unmotivated. Some history might help.
The 1998 proof of the Kepler conjecture partitioned space into a hybrid of truncated Voronoi cells and Delaunay-like simplices (Figure~\ref{fig:ferguson-hales}). In vague terms, the Delaunay simplices are tuned for detail. The Voronoi cells are coarsely tuned, suitable for rough hewing. Delaunay simplices articulate the foreground, while Voronoi cells fill the background. The solution to the problem lies in the right balance between foreground and background. Too many Delaunay simplices and the details overwhelm. Too many Voronoi cells and the estimates become too weak. The central geometrical insights of the original proof are expressed as rules that delineate foreground against background, Delaunay against Voronoi.
\figFIFJALK % fig:ferguson-hales
Cells give a hybrid decomposition. A $4$-cell is a Delaunay simplex. The $0$- and $1$-cells are parts of a Voronoi cell. The $2$- and $3$-cells are gradations between the two.
Examples show the shortcomings of a nonhybrid approach. Recall that the density of the face-centered cubic packing is $\pi/\nsqrt{18}\approx 0.74048$. Numerical evidence shows that an approach based entirely on Delaunay simplices should give a bound of about $0.740873$, a failure that comes tantalizingly close~\cite{Hales:1992:JCAM}. The dodecahedral theorem, which asserts that the Voronoi cell of smallest volume is the regular dodecahedron, gives a bound of about $0.755$~\cite{Hales:2010:Dodec}. Thus, the pure Voronoi cell strategy fails as well. The pure approaches can be modified in ways that are conjectured to produce sharp bounds. These modifications are complex and daunting.
A common practice that started with L. Fejes T\'oth is to truncate Voronoi cells by intersecting them with a ball concentric with the cell. Different authors use different radii for the truncating sphere: $7/\nsqrt{27}\approx 1.347$ \cite{Fej53}, $\sqrt2$, $1.385$, and $1.255$ \cite{Hales:2006:DCG}, $\sqrt2$ \cite{marchal:2009}, and $\sqrt{3}\tan{\pi/5}\approx 1.258$ \cite{Hales:2010:Dodec}. A larger radius retains more information and complexity than a smaller radius. The $0$-cells are the refuse that lie outside the ball of truncation and are inconsequential to the proof.
\bigskip
\subsection{cell partition}\label{cell partition}
\begin{lemma}[] \label{lemma:M-complement4}
\preamble{
\guid{EMNWUUS}
\formalauthor{Vu Khac Ky}\oldrating{100}
}
%
Let $V$ be a saturated packing. Let $\bu\in \bVo{3}$. The following are equivalent.
\begin{enumerate}
\item $\cell(\bu,i)=\emptyset$ for $i=0,1,2,3$.
\item $\cell(\bu,4)\ne\emptyset$.
\item $h(\bu)<\sqrt2$.
\end{enumerate}
\end{lemma}
\begin{proof}
The diameter of $R(\bu)$ is easily seen to be $h(\bu)$. Hence, if $h(\bu)<\sqrt2$, then the defining inequalities fail and the cells $\cell(\bu,i)$, for $i<4$, are empty by definition. The result ensues.
\end{proof}
\begin{lemma}[] \label{lemma:M-exhaust}
\preamble{
\guid{SLTSTLO}
}
%
Let $V$ be a saturated packing and let $\bu\in \bVo{3}$. Then every point in $R(\bu)$ belongs to $\cell(\bu,i)$ for some $0\le i\le 4$. Furthermore, there is a null set $Z$ such that each point in $R(\bu)\setminus Z$ belongs to a unique $\cell(\bu,i)$.
\end{lemma}
\begin{proof}
Explicitly, the null set is the union of $R(\bu)\setminus R^0(\bu)$ (which lies in a finite union of planes), the sphere of radius $\sqrt2$ centered at $\u_0$, the difference $\op{rcone}(\u_0,\u_1,a)\setminus \op{rcone}^0(\u_0,\u_1,a)$, and the plane $\op{aff}\braced{\u_0,\u_1,\xi(\bu)}$.
Let $\p\in R(\bu)$.
To make the cases disjoint, each of the following cases assumes that the conditions of the preceding cases fail. It is convenient to reorder the cases to make the $4$-cell appear first.
\begin{enumerate}
\setcounter{enumi}{3}
\item %
If $h(\bu)<\sqrt2$, then $\p\in\cell(\bu,4)$.
\setcounter{enumi}{-1}
\item %
If $\norm{\p}{\u_0} \ge\sqrt2$, then $\p\in\cell(\bu,0)$.
\item If $\p\not\in\op{rcone}^0(\u_0,\u_1,h(\trunc{\bu}{1})/\nsqrt2)$, then $\p\in \cell(\bu,1)$.
\item If $\p\in \op{aff}_+(\braced{\u_0,\u_1},\braced{\xi(\bu),\omega(\bu)})$, then $\p\in \cell(\bu,2)$.
\item If $\p\in \op{aff}_+(\braced{\u_0,\u_1},\braced{\u_2,\xi(\bu)})$, then $\p\in \cell(\bu,3)$.
\end{enumerate}
When the corresponding strict inequalities are used, we obtain uniqueness for $R(\bu)\setminus Z$.
\end{proof}

\begin{definition}[$i$-rearrangement] \label{def:i-rearrangement}
\preamble{ \guid{BGXEVQU} \formaldef{$i$-rearrangement}{\_} } %
Let $\bu=\listdots{\u}{0}{k},\bv=\listdots{\v}{0}{k}$ be two lists of the same length. One is an $i$-\newterm{rearrangement} of the other if $\rho_*\bu = \bv$ for some $\rho\in\op{Sym}(k+1)$ such that $\rho(j) = j$ when $j \ge i$.
\end{definition}

In particular, if $\bu,\bv$ are $0$- or $1$-rearrangements of one another, then $\bu = \bv$. The constraint $\rho(j)=j$ is vacuous when $j>k$.

\begin{lemma}[] \label{lemma:i-omega} %
\preamble{ \guid{YNHYJIT} } %
Let $V$ be a saturated packing, let $\bu\in \bVo{3}$, and let $i\in\braced{2,3,4}$. Assume that $h(d_{i-1}\bu)<\sqrt2$. Let $\bv$ be an $i$-rearrangement of $\bu$. Then $\bv\in \bVo{3}$ and $\omega_j(\bu)=\omega_j(\bv)$, for $\fordots{j}{i-1}{3}$.
\end{lemma}

\begin{proof}
Let $S_j =\setenumdots{\u}{0}{j}$, for $j\ge i-1$, where $\bu=[\u_0;\ldots]$. Since $\bv=[\v_0;\ldots]$ is an $i$-rearrangement of $\bu$, we have $S_j = \setenumdots{\v}{0}{j}$ and $\Omega(V,d_j\bu) = \Omega(V,d_j\bv)$, %
for all $j\ge i-1$. By Lemma~\ref{lemma:perm-Vk}, $\omega_{i-1}(\bu) = \omega_{i-1}(\bv)$. By the recursive definition of the points $\omega_j$, we then have $\omega_j(\bu)=\omega_j(\bv)$, for $\fordots{j}{i-1}{3}$.
We show that $\bv\in \bVo{3}$ by checking the defining condition
\[
\dimaff \Omega(V,d_j\bv) = 3 - j, \text{ for } 0 < j \le 3.
\]
When $j\le i-1$, the conclusion $d_{i-1}\bv\in \bVo{i-1}$ of Lemma~\ref{lemma:perm-Vk} implies the condition. When $i-1<j$, the identity $\Omega(V,d_j\bv)=\Omega(V,d_j\bu)$ and $\bu\in \bVo{3}$ imply the condition. The result ensues.
\end{proof}

\begin{lemma}[] \label{lemma:marchal-equal}
\preamble{ \guid{RVFXZBU} } %
Let $V$ be a saturated packing, let $\bu,\bv\in \bVo{3}$, and let $i\in \braced{0,1,2,3,4}$. If $\bu$ is an $i$-rearrangement of $\bv$, then $\cell(\bu,i)=\cell(\bv,i)$.
\end{lemma}

\begin{proof}
The statement follows from the definition of cells.
\end{proof}

\begin{lemma}[] \label{lemma:cell-in-rogers} %
\preamble{ \guid{QZKSYKG} } %
Let $V$ be a saturated packing, let $\bu\in \bVo{3}$, and let $k\in \braced{0,1,2,3,4}$. Assume that $\cell(\bu,k)$ is not empty. Then each $k$-rearrangement $\bv$ of $\bu$ lies in $\bVo{3}$. Moreover, $\cell(\bu,k)$ is contained in the union of $R(\bv)$, as $\bv$ runs over all $k$-rearrangements of $\bu$.
\end{lemma}

\begin{proof}
If $k=0$ or $k=1$, then by definition, $\cell(\bu,k)\subset R(\bu)$. Assume that $2\le k\le 4$. The nonemptiness hypothesis implies $h(d_{k-1}\bu)<\sqrt2$. Lemma~\ref{lemma:i-omega} implies that $\bv\in \bVo{3}$.
The definition of the cells can be used to show directly that
\[
\cell(\bu,k)\subset \op{conv}\{\u_0,\u_1,\ldots,\u_{k-1},\omega_{k}(\bu),\ldots, \omega_3(\bu)\}.
\]
The definition of $k$-rearrangement, Lemma~\ref{lemma:Rconv}, and Lemma~\ref{lemma:i-omega} partition this convex hull according to $k$-rearrangements of $\bu$. The result ensues.
\end{proof}

\begin{lemma}[] \label{lemma:cell-disjoint} %
\preamble{ \guid{DDZUPHJ} } %
Let $V$ be a saturated packing, let $\bu,\bv\in \bVo{3}$, and let $k\in \braced{0,1,2,3,4}$. Suppose that $R(\bu)=R(\bv)$, that $R(\bu)$ has affine dimension three, and that $\cell(\bu,k)$ is not empty. Then $\cell(\bu,k)=\cell(\bv,k)$.
\end{lemma}

\begin{proof}
We break the proof into cases, according to $k$.

Assume $k=4$. By the definition of the cell, the nonemptiness condition implies that $h(\bu)<\sqrt2$. By Lemma~\ref{lemma:h-omega} and Lemma~\ref{lemma:dk-uv}, $\bu=\bv$.

Assume that $k=3$. The nonemptiness condition gives $h(d_2\bu)<\sqrt2\le h(\bu)$. By Lemma~\ref{lemma:h-omega} and Lemma~\ref{lemma:dk-uv}, $d_2\bu=d_2\bv$. The point $\omega(\bu)$ is determined by $R(\bu)$ by Lemma~\ref{lemma:omega-uv}. The point $\xi(\bu)$ is determined by $\omega(d_2\bu)$ and $\omega(\bu)$. Finally, $\cell(\bu,3)$ is determined by $d_2\bu$ and $\xi(\bu)$. The conclusion $\cell(\bu,3)=\cell(\bv,3)$ ensues.

The other cases are similar. Assume that $k=2$ and that $\cell(\bu,2)$ is not empty. If $h(d_2\bu)<\sqrt2$, then $d_2\bu=d_2\bv$, and the cell is determined by $d_2\bu$ and $\xi(\bu)$, as in the case $k=3$. If $h(d_1\bu)<\sqrt2\le h(d_2\bu)$, then $d_1\bu=d_1\bv$, and the cell is determined by $d_1\bu$ and the points $\omega_j(\bu)$, which in turn depend only on $R(\bu)$, by Lemma~\ref{lemma:omega-uv}. The cases $k=0$ and $k=1$ are similar, but even more trivial, and are left to the reader.
\end{proof}

The following lemma and Lemma~\ref{lemma:M-exhaust} show that the cells partition $\vecR{3}$.

\begin{lemma}[] \label{lemma:marchal-partition} %
\preamble{ \guid{AJRIPQN} } %
Let $V$ be a saturated packing, let $\bu,\bv\in \bVo{3}$, and let $k,k'\in \braced{0,1,2,3,4}$. Suppose that $\cell(\bu,k)\cap\cell(\bv,k')$ has positive measure. Then $k=k'$ and $\cell(\bu,k)=\cell(\bv,k)$.
\end{lemma}

\begin{proof}
Select $\bw\in \bVo{3}$ such that
\[
R(\bw)\cap \cell(\bu,k)\cap\cell(\bv,k')
\]
has positive measure. In particular, $R(\bw)$ has affine dimension three. By Lemmas~\ref{lemma:cell-in-rogers}, \ref{lemma:marchal-equal}, and~\ref{lemma:R-inter}, we may replace $\bu$ with a $k$-rearrangement and $\bv$ with a $k'$-rearrangement to assume without loss of generality that $R(\bw)=R(\bu)=R(\bv)$. Lemma~\ref{lemma:cell-disjoint} implies that $\cell(\bu,k)=\cell(\bw,k)$ and $\cell(\bv,k')=\cell(\bw,k')$. Since $R(\bw)\cap\cell(\bw,k')\cap\cell(\bw,k)$ has positive measure, Lemma~\ref{lemma:M-exhaust} implies that $k=k'$. The result ensues.
\end{proof}

\subsection{edges of cells}\label{edges-cells}

\begin{definition} \label{def:CapV}
\preamble{ \guid{LEPJBDJ} \formaldef{$\CapV{V}(X)$}{VX}
\indy{Notation}{V@$\CapV{V}(X)=V\cap X$}%
} %
Let $V$ be a saturated packing and let $\bu=[\u_0;\ldots]\in \bVo{3}$. Let $X=\cell(\bu,k)$. When $X\ne\emptyset$, define $\CapV{V}(X) = \setenumdots{\u}{0}{k-1}$. In particular, if $X$ is a $0$-cell, then $\CapV{V}(X)=\emptyset$.
\end{definition}

\begin{lemma}[] \label{lemma:VX}
\preamble{ \guid{HDTFNFZ} } %
Let $V$ be a saturated packing and let $\bu=[\u_0;\ldots]\in \bVo{3}$. Let $X=\cell(\bu,k)$. If $X\ne\emptyset$, then $\CapV{V}(X) = V\cap X$.
In particular, the set $\CapV{V}(X)$ is well-defined. \end{lemma} \begin{proof} If $i\le k-1$, then $\u_i\in V$ and $X=\cell(\bv,k)$, for some $k$-rearrangement $\bv=[\u_i;\ldots]$ of $\bu$. By the definition of $\cell(\bv,k)$, we find that $\v_0=\u_i$ belongs to $\cell(\bv,k)$, when $k\ge 1$. This implies that $\CapV{V}(X)\subset V\cap X$. Conversely, let $\v\in V\cap \cell(\bu,k)$. It can be checked from definitions that $\cell(\bu,0)\subset \Omega(V,\u_0)$ and \[ \cell(\bu,k)\subset \Omega(V,\u_0)\cup \cdots \cup\Omega(V,\u_{k-1}), \quad \text{when~} k\ge1. \] This implies $\v\in V\cap\Omega(V,\u_i)$ for some $i\le k$. This forces $\v=\u_i$. Hence $\CapV{V}(X) = V\cap X$. \end{proof} \begin{lemma} \label{lemma:cell-radial} \preamble{ \guid{URRPHBZ} } % Let $V$ be a saturated packing and let $\bu\in \bVo{3}$. Then $X=\op{cell}(\bu,k)$ is measurable and eventually radial at each $\v\in V$. Furthermore, the cell $X$ is bounded away from every $v\in V\setminus \CapV{V}(X)$, so that the solid angle of $X$ is zero, except at $\v\in \CapV{V}(X)$. \end{lemma} \begin{proof} The first claim of the lemma follows from the fact that $R(\bu)$ is a simplex, and $R(\bu)\cap V = \braced{\u_0}$. Each cell is compact, and is bounded away from every point not in the cell. Lemma~\ref{lemma:VX} implies the second claim of the lemma. \end{proof} \begin{lemma} \preamble{ \guid{QZYZMJC} } % Let $V$ be a saturated packing. For every $\v\in V$, \[ \sumfinitepred{X}{\sumvar{X}\v\in \CapV{V}(X)}{\op{sol}(X,\v)} = 4\pi, \] where the sum runs over all cells $X$ such that $\v\in \CapV{V}(X)$. \end{lemma} \begin{proof} Indeed, the cells partition $\vecR{3}$ and $\sol(\openBall{\v}{\epsilon}) = 4\pi$. \end{proof} \begin{definition}[$\op{tsol}$] \label{def:total-solid} \preamble{ \guid{LZYLTFY} \formaldef{$\op{tsol}$}{total\_solid} \indy{Index}{angle!total solid}% \indy{Index}{extreme point}% \indy{Notation}{tsol@$\op{tsol}$}% } % Define the \newterm{total solid angle} of a cell $X$ to be \[ \op{tsol}(X) = \sumfinitepred{\v}{\v \in \CapV{V}(X) }{\op{sol}(X,\v)}. \] \end{definition} \begin{definition}[edge] \label{def:edgeX} \preamble{ \guid{WYORUNK} \formaldef{$E(X)$}{edgeX} \indy{Notation}{E3a@$E(X)$}% \indy{Notation}{0@$\tbinom{n}{k}$ (binomial coefficient)}% } % Let $E(X)$ be the set of \newterm{extremal edges} of the $k$-cell $X$ in a saturated packing $V$. More precisely, let \[ E(X)=\setcomp{\braced{ \u_i, \u_j}}{ \u_i\ne \u_j\in\CapV{V}( X)}. \] \end{definition} In particular, $E(X)$ is empty for $0$ and $1$-cells and contains $\tbinom{k}{2}$ pairs when $2\le k\le 4$. \begin{definition}[$\op{dih}$] \label{def:dihX} \preamble{ \guid{RSDYMHV} \formaldef{$\dih$}{dihX} \indy{Notation}{dih}% \indy{Index}{angle!dihedral}% \indy{Notation}{h@$h$ (half-edge length)}% } % Let $V$ be a saturated packing. Let $X$ be a $k$-cell, where $2\le k\le 4$. Let $\ee\in E(X)$. We define the dihedral angle $\op{dih}(X,\ee)$ of $X$ along $\ee$ as follows. Explicitly, if $X$ is a null set, then set $\op{dih}(X,\ee)=0$. Otherwise, choose $\bu=[\u_0;\u_1;\u_2;\u_3]\in \bVo{3}$ such that $X=\cell(\bu,k)$ and $\ee=\braced{\u_0,\u_1}$. Set $\op{dih}(X,\ee)=\dih_V(\braced{\u_0,\u_1},\braced{\v,\w})$, where \[ \braced{\v,\w} = \begin{cases} \braced{\xi(\bu),\omega(\bu) } & k=2\\ \braced{\u_2,\xi(\bu)} & k=3\\ \braced{\u_2,\u_3} &k=4. \end{cases} \] This is independent of the choice $\bu$ defining $X$. \end{definition} Each edge $\ee=\braced{ \u, \v}\in E(X)$ has a half-length $h(\ee) = \norm{ \u}{ \v}/2$. 
This definition of $h$ is compatible with the previous definition of the circumradius of lists in the sense that $h([\u;\v]) = h(\ee)$. \begin{lemma} \preamble{ \guid{GRUTOTI} } % Let $V$ be a saturated packing. Assume that $\u_0,\u_1\in V$ satisfy $\norm{\u_0}{\u_1}<2\nsqrt2$. Set $\ee=\braced{\u_0,\u_1}$. Then \[ \sumfinitepred{X}{X\mid \ee\in E(X) }{\op{dih}(X,\ee)} = 2\pi. \] The sum runs over cells $X$ such that $\ee\in E(X)$. \end{lemma} \begin{proof} Consider the set $C=\openBall{\u_0}{r}\cap \op{rcone}^0(\u_0,\u_1,a)$, where $r$ and $a$ are small positive real numbers. From the definition of $k$-cells, it follows that we can choose $r$ and $a$ sufficiently small so that if $X$ is a $k$-cell that meets $C$ in a set of positive measure, then $k\ge 2$ and there exists $\bu\in \bVo{3}$ such that $X=\cell(\bu,k)$ and $d_1\bu=[\u_0;\u_1]$. Moreover, \[ C\cap X = C\cap A, \quad A=\op{aff}_+(\braced{\u_0,\u_1},\braced{\v,\w}), \] where $A$ is the lune of Definition~\ref{def:lune} and $\v$, $\w$ are chosen as in Definition~\ref{def:dihX}. By Lemma~\ref{lemma:wedge-sol} and Definition~\ref{def:dihX}, the volume of this intersection is \[ \op{vol}(C\cap A) = \op{vol}(C)\, {\op{dih}_V(\braced{\u_0,\u_1},\braced{\v,\w}) }/{(2\pi)} = \op{vol}(C)\, {\dih(X,\ee)}/{(2\pi)}. \] The set of cells meeting $C$ in a set of positive measure gives a partition of $C$ into finitely many measurable sets. This gives \[ \op{vol}(C) = \sumfinitepred{X}{\sumvar{X}\ee\in E(X)}{\op{vol}(C\cap X)} = \op{vol}(C)\sumfinitepred{X}{\sumvar{X} \ee\in E(X)}{ \dih(X,\ee)/(2\pi)}. \] The calculation of volumes in Chapter~\ref{chapter:volume} gives $\op{vol}(C)>0$. The conclusion follows by canceling $\op{vol}(C)$ from both sides of the equation. \end{proof} \subsection{A conjecture}\label{A conjecture} This section shows how the existence of a FCC-compatible negligible function is a consequence of an explicit inequality related to the distances $h(\bu)$, where $\bu\in \bVo{1}$. \begin{definition}[$\sol_0$,~$\tau_0$,~$m_1$,~$m_2$,~$h_+$,~$M$] \label{def:sol0} \preamble{ \guid{AOZUTMU} \formaldef{$\sol_0$}{sol0}% \formaldef{$\tau_0$}{tau0}% \formaldef{$m_1$}{mm1}% \formaldef{$m_2$}{mm2}% \formaldef{$h_+$}{hplus}% \formaldef{$M$}{marchal}% \indy{Notation}{h@$h_+=1.3254$}% \indy{Notation}{sol0@$\sol_0 = 3\arccos(1/3)-\pi$}% \indy{Notation}{m1@$m_1\approx 1.012$}% \indy{Notation}{m2@$m_2\approx 0.0254$}% \indy{Notation}{zzt1@$\tau_0\approx\op{tgt}$}% \indy{Notation}{M@$M$ (Marchal's quartic)}% } % Define the following constants and functions: \begin{align}\label{eqn:m-def} \sol_0 &= 3\arccos(1/3)-\pi\\ %\Delta_1 &= (3\arccos(1/3)-\pi)/\pi\\ \tau_0 &= 4\pi - 20\sol_0\\ m_1 &= \sol_0 2\nsqrt2/\tau_0 \approx 1.012 \\ %% K m_2 &= (6\sol_0- \pi)\nsqrt2/(6 \tau_0) \approx 0.0254\\ %% M h_+ &= 1.3254 \hbox{~(exact rational value).} \end{align} Let $M:\Real\to\Real$ be the following piecewise polynomial function (Figure~\ref{fig:M}): \begin{equation}\label{eqn:M} M(h) = \begin{cases} % (\sqrt2-h) (h-1.3254) (9h^2 - 17 h + 3)/(1.627 (\sqrt2-1))& h\le\sqrt2\\ \dfrac{\sqrt2-h}{\sqrt2-1}~% \dfrac{h_+-h}{h_+-1} ~\dfrac{17 h - 9 h^2 - 3}{5} & h \le \sqrt2\vspace{3pt} \\ 0 & h >\sqrt2. \end{cases} \\ \end{equation} \end{definition} \figTULIGLY % fig:M The constant $\sol_0$ is the area of a spherical triangle with sides $\pi/3$. Simple calculations based on the definitions give \begin{equation}\label{eqn:km} m_1 - 12m_2 = \sqrt{1/2} \end{equation} and \begin{equation} M(1) = 1,\quad M(h_+)=0,\quad M(\sqrt2) =0. 
\end{equation} \begin{definition}[$\gammaX$] \label{def:gammaX} \preamble{ \guid{KGFDCFM} \formaldef{$\gammaX$}{gammaX} \indy{Notation}{zzc@$\gammaX$ (packing inequality)}% } % For any cell $X$ of a saturated packing, define the functional $\gammaX(X,\wild)$ on $\{f:\Real\to\Real\}$ by \begin{equation}\label{eqn:gamma-def} \gammaX(X,f) = \op{vol}(X) -\left(\frac{2m_1}{\pi}\right) \op{tsol}(X) + \left(\frac{8m_2}{\pi}\right) \sumfinitepred{\ee}{\ee\in E(X)}{ \dih(X,\ee) f(h(\ee))}. \end{equation} \end{definition} \begin{theorem*}[] \label{lemma:MI} \preamble{ \guid{HJKDESR} } % Let $V$ be any saturated packing and let $X$ be any cell of $V$. Then \begin{equation}\label{eqn:mfe} \gammaX(X,M)\ge 0, \end{equation} where $M$ is the function defined in \eqref{eqn:M}. \end{theorem*} \begin{remark} % We do not use this inequality, and its proof is omitted. The only published proof~\cite{marchal:2009} is not satisfactory because it plots sample level curves of the function and reaches conclusions based on the visual appearance of these level curves. \end{remark} \begin{conjecture}[Marchal] \label{conj:m1} \preamble{ \guid{PHNFUXP} } % For any packing $ V$ and any $ \u\in V$, \[ \sumfinitepred{\v}{\v\in V\setminus\braced{\u}} {M(h([\u;\v]))} \le 12. \] \end{conjecture} This book proves a variant of the conjecture. \begin{theorem} \label{theorem:mk1} \preamble{ \guid{QFUXEXR} } % % formalization must follow the statements in pack_concl.hl, 250 % each estimate. The conjecture~\eqref{conj:m1} and inequality~\eqref{eqn:mfe} imply that for every saturated packing $V$, there exists a negligible FCC-compatible function $G:V\to \Real$. \end{theorem} The theorem follows from the following more general statement. \begin{lemma} \label{lemma:mk1} \preamble{ \guid{KIZHLTL} } % Let $f$ be any bounded, compactly supported function. Set \[ G( \u_0,f) = -\op{vol}(\Omega(V, \u_0)) + 8 m_1 - \sumfinitepred{\u}{\u\in V\setminus\braced{\u_0}} {8 m_2 f(h([\u_0;\u]))}. \] If \[ \sumfinitepred{\v}{\v\in V\setminus\braced{\u}}{f(h([\u;\v]))} \le 12, \] then $G(\wild,f)$ is FCC-compatible. Moreover, if there exists a constant $c_0$ such that for all $r\ge1$ \[ \sumfinitepred{X}{X\subset \openBall{\orz}{r}}{ \gammaX(X,f) \ge c_0 r^2}, \] then $G(\wild,f)$ is negligible. \end{lemma} Theorem~\ref{theorem:mk1} is the special case $f=M$. Inequality~\ref{eqn:mfe} implies that we can take $c_0=0$ in the lemma. \begin{proof} The function $G(\wild,f)$ is FCC-compatible (page \pageref{def:negligible}) directly by equation~\eqref{eqn:km} and the assumption of the lemma: \indy{Index}{negligible}% \indy{Index}{FCC!compatible}% \begin{align*} 4\nsqrt{2} &= 8 m_1 - 8 (12 m_2)\\ &\le 8 m_1 - 8 m_2 \sumfinitepred{\u}{\u\in V\setminus \braced{\u_0} } {f(h([\u_0;\u]))}\\ &= \op{vol}(\Omega(V, \u_0)) + G( \u_0,f). \end{align*} The issue is to prove that it is negligible. More explicitly, we show that there is a constant $c$ such that for all $r\ge 1$:% \begin{align}\label{eqn:neg} -\sumfinitepred{\u}{\u\in \Vee{V}(\orz,r)}{ G( \u,f)} &= \sumfinitepred{\u}{\u\in \Vee{V}(\orz,r)}{ \op{vol}(\Omega(V, \u)) } -\sumfinitepred{\u}{\u\in \Vee{V}(\orz,r)}{8m_1} + \sumfinitepred{\u}{\u\in \Vee{V}(\orz,r)}{ \sumfinitepred{\v}{\v\in V\setminus\braced{\u} } {8 m_2 f(h([\u;\v]))}}\notag \\ &\ge \sumfinitepred{X}{X\subset \openBall{\orz}{r}}{\gammaX(X,f)} + c r^2, \end{align} where all unmarked sums run over $ \u\in \Vee{V}(\orz,r)$. 
The lemma follows from this inequality, the assumption of the lemma, and the definition of negligible (Definition~\ref{def:negligible}). Lemmas~\ref{lemma:Zr2} and \ref{lemma:V-finite} show that the number of points of $ V$ near the boundary of $\openBall{\orz}{r}$ is at most $c r^2$, for some $c$. The function $\gammaX(X,f)$ is defined as a sum of three terms~\eqref{eqn:gamma-def}. The sum of $\gammaX(X,f)$ over all cells in a large ball $\openBall{\orz}{r}$ is a sum of the contributions $T_1(r) + T_2(r) + T_3(r)$ from the three separate terms defining $\gammaX$. The sum of $-G$ in equation~\eqref{eqn:neg} is a sum of three corresponding terms $T'_i(r)$. It is enough to work term by term, producing constants $c_i$ such that \[ T_i'(r) \ge T_i(r) + c_i r^2,\quad i=1,2,3. \] The sum of the volumes of the Voronoi cells $ \u\in \openBall{\orz}{r}$ is not exactly the volume of $\openBall{\orz}{r}$ because of the contribution at the boundary of $\openBall{\orz}{r}$ of Voronoi cells that are only partly contained in $\openBall{\orz}{r}$. Similarly, the sum of the various $k$-cells for $X\subset \openBall{\orz}{r}$ is not exactly the volume of $\openBall{\orz}{r}$ because of contributions from the boundary. The boundary contributions have order $r^2$. See Section~\ref{sec:finiteness} for order $r^2$ calculations. Thus, \[ T_1'= \sumfinitepred{\u}{ \u\in \Vee{V}(\orz,r)}{ \op{vol}(\Omega(V, \u)) } \ge \sumfinitepred{X}{X\subset \openBall{\orz}{r}} {\op{vol}(X)} + c_1 r^2 = T_1 + c_1 r^2. \] The estimates on the other terms are similar. The solid angles around each point sum to $4\pi$. In Landau big O notation, this gives \begin{align*} \sumfinitepred{X}{X\subset \openBall{\orz}{r}} {\op{tsol}(X)} &= \sumfinitepred{X\subset \openBall{\orz}{r}~~}{ \sumfinitepred{\u}{ \u\in \CapV{V}( X)}{ \sol(X, \u)}}\\ &=\sumfinitepred{\u}{\u\in \Vee{V}(\orz,r)~~}{ \sumfinitepred{X}{\sumvar{X} \u\in \CapV{V}( X)}{ \sol(X, \u)}} + O(r^2)\\ &=\sumfinitepred{\u}{ \u\in \Vee{V}(\orz,r)} {4\pi} + O(r^2). \end{align*} \indy{Notation}{O@$O$ (Landau's big O)}% Hence \[ T_2' = -\sum_{ \Vee{V}(\orz,r)} 8 m_1 = -\sumfinitepred{X}{X\subset \openBall{\orz}{r}}{\left(\frac{2m_1}\pi\right) \op{tsol(X)}} + O(r^2) = T_2 + O(r^2). \] Similarly, the dihedral angles around each edge sum to $2\pi$. A factor of two enters the following calculation because there are two ordered pairs for each unordered pair $\ee=\braced{ \u_0, \u_1}$: \begin{alignat*}{6} &\phantom{=}&&\sumfinitepred{X}{X\subset \openBall{\orz}{r}} &&{\sumfinitepred{\ee}{{~~\ee\in E(X)~~~~~} {\dih(X,\ee) f(h(\ee))}} \vspace{3pt}\\ &=&&\sumfinitepred{\ee}{\ee\subset \openBall{\orz}{r}} &&{\sumfinitepred{X}{~~\sumvar{X} \ee\in E(X)} {\dih(X,\ee) f(h(\ee))}} +O(r^2)\vspace{3pt}\\ &=&&\sumfinitepred{\ee}{\ee\subset \openBall{\orz}{r}} &&{2\pi f(h(\ee))} + O(r^2)\vspace{3pt} \\ &=&&\sumfinitepred{\u}{ \u_0\in \Vee{V}(\orz,r)} &&{\sumfinitepred{\u_1}{~~\u_1\in \Vee{V}(\orz,r)~~~} {\pi f(h( \u_0, \u_1))}} + O(r^2).\\ \end{alignat*} Finally, \arrayqed \begin{align*} T_3' &= \sum\sum 8 m_2 f(h(\bu)) \\ &\ge \left(\frac{8m_2}\pi\right) \sumfinitepred{X}{X\subset \openBall{\orz}{r}~~}{\sumfinitepred{\ee}{\ee\in E(X)}{\dih(X,\ee) f(h(\ee))}} + O(r^2) \\ &= T_3 + O(r^2).\arrayqedhere \end{align*} \end{proof} \section{Clusters}\label{clusters} This section introduces a variant of Conjecture~\ref{conj:m1}. In this variant, a piecewise linear function $L$ replaces the piecewise polynomial function $M$. 
More crucially, the support of the function $L$ is contained in $\leftclosed1,1.26\rightclosed$. By contrast, the function $M$ is positive on a large interval: $\leftclosed1,1.3254\rightopen$. This difference in the support of the function creates a large difference in the difficulty of the conjectures. The conjecture formulated in this section also implies the existence of FCC-compatible negligible functions. To prove this existence result, it is helpful to group cells together into new aggregates, called \newterm{clusters}. This section makes a detailed study of clusters in order to produce a negligible function. The aim of this section is to prove a variant (Theorem~\ref{theorem:mk2}) of Theorem~\ref{theorem:mk1} that uses the function $L$ rather than $M$. Recall that $M(h_+) = 0$, where $h_+ = 1.3254$. \indy{Index}{cell cluster}% \begin{definition}[$L$,~$h_0$,~$h_-$] \label{def:L} \preamble{ \guid{ULZRABY} \formaldef{$h_0$}{h0} \formaldef{$h_-$}{hminus} \formaldef{$L$}{lmfun} \indy{Notation}{L@$L$ (linear function)}% \indy{Notation}{h@$h_- \approx 1.23175$}% \indy{Notation}{h@$h_0 = 1.26$}% } % Set \[ h_0 = 1.26.%\\ %%\hm \] Let $L:\Real\to\Real$ be the piecewise linear function \[ L(h) = \begin{cases} \dfrac{h_0-h}{h_0-1}, & h \le h_0 \\ 0, & h\ge h_0. \\ \end{cases} \] It follows from the definition that \[ L(1) = 1\textand L(\hm) = 0. \] Let $h_- \approx 1.23175$ be the unique root of the quartic polynomial $M(h)-L(h)$ that lies in the interval $[1.231,1.232]$. \end{definition} The inequality $L(h)\ge M(h)$ holds except when $h\in [h_-,h_+]$. \figBJLIEKB % fig:L \begin{definition}[critical~edge,~$\op{EC}$,~$\op{wt}$] \label{def:wt} \preamble{ \guid{MZSRVBC} \indy{Notation}{E3b@$\op{EC}$ (critical edges)}% \indy{Notation}{wt@$\op{wt}$ (weight)}% } % A \newterm{critical edge} $\ee$ of a saturated packing $V$ is an unordered pair that appears as an element of $E(X)$ for some $k$-cell $X$ of the packing $V$ such that $h(\ee)\in[h_-,h_+]$. Let $\op{EC}(X)$ be the set of critical edges that belong to $E(X)$. If $X$ is any cell such that $\op{EC}(X)$ is nonempty, let the \newterm{weight} $\op{wt}(X)$ of $X$ be $1/\card(\op{EC}(X))$. \end{definition} \begin{definition}[$\beta_0$,~$\beta$] \label{def:beta} \preamble{ \guid{PQFEXQN} \formaldef{$\beta_0$}{bump} \formaldef{$\beta$}{beta\_bump} \indy{Notation}{zzb@$\beta,~\beta_0$ (bump)}% } % Set \[ \beta_0(h) = 0.005 (1 - (h-h_0)^2/(h_+-h_0)^2). \] (See Figure~\ref{fig:fg1}.) If $X$ is a $4$-cell with exactly two critical edges and if those edges are opposite, then set \[ \beta(\ee,X) = \beta_0(h(\ee)) - \beta_0(h(\ee')), \text{ where }\op{EC}(X) = \braced{\ee,\ee'} . \] Otherwise, for all other edges in all other cells, set $\beta(\ee,X) = 0$. \end{definition} \figPQFEXQN % fig:fg1 \begin{definition}[cell cluster,~$\Gamma$] \label{def:gammaL} \preamble{ \guid{YSULGYR} \formaldef{cell cluster}{cell\_cluster} \formaldef{$\Gamma$}{cluster\_gammaX} \indy{Index}{cell cluster}% \indy{Notation}{CL@$\op{CL}$ (cell cluster)}% \indy{Notation}{zzC@$\Gamma$}% } % Let $V$ be a saturated packing. Let $\ee\in \op{EC}(X)$ be a critical edge of a $k$-cell $X$ of $V$ for some $2\le k\le 4$. A \newterm{cell cluster} is the set \[ \op{CL}(\ee) = \setcomp{X}{ \ee\in \op{EC}(X)} \] \indy{Notation}{cluster}% of all cells around $\ee$. Define \[ \Gamma(\ee) = \sumfinitepred{X}{X\in \op{CL}(\ee)} {\gammaX(X,L) \op{wt}(X) +\beta(\ee,X)}. \] \end{definition} The following weak form of Theorem~\ref{lemma:MI} is sufficient for our needs. 
\begin{lemma} \label{lemma:LI}
\preamble{ \guid{TSKAJXY} } %
Let $V$ be any saturated packing and let $X$ be any cell of $V$ such that $\op{EC}(X)$ is empty. Then
\[
\gammaX(X,L)\ge 0.
\]
\end{lemma}

\begin{proof}
This is a \cc{TSKAJXY}{}.
\end{proof}

\begin{theorem}[cell cluster inequality] \label{lemma:cluster}
\preamble{ \guid{OXLZLEZ} } %
Let $\op{CL}(\ee)$ be any cell cluster of a critical edge $\ee$ in a saturated packing $V$. Then $\Gamma(\ee)\ge 0$.
\end{theorem}

\begin{proof}[Proof sketch]
The proof of this cell cluster inequality is a \cc{OXLZLEZ}{}, which is the most delicate computer estimate in the book. It reduces the cell cluster inequality to hundreds of nonlinear inequalities in at most six variables. In degenerate cells with a face of area zero, Euler's formula (Lemma~\ref{lemma:euler}) should be used to calculate solid angles, because the standard dihedral angle formula for solid angles can lead to the evaluation of $\atn$ at the branch point $(0,0)$, which is numerically unstable and is best avoided.
\end{proof}

\begin{example}\guid{JXEHXQY}
We construct an example of a cell cluster in the form of an octahedron, with four $4$-cells joined along a common critical edge $\ee$. Assume that all of the edges of the octahedron have length $2$, except for one edge of length $y$ that does not meet $\ee$. If $y\in \leftclosed 2 h_-,2 h_+\rightclosed$, then one of the four simplices has weight $1/2$ and the other three simplices have weight $1$. The parameter $y$ determines the cell cluster up to isometry. We plot the function $f(y)=\Gamma(\ee)$ as a function of $y$. We also plot the function
\[
g(y) = \sumfinitepred{X}{X\in \op{CL}(\ee)} {\gammaX(X,L) \op{wt}(X)}.
\]
As the plot shows, the function $g$ is not positive. This shows that without the small correction term $\beta$, the cell cluster inequality is false. Numerical evidence suggests that the global minimum of $\Gamma(\ee)$ occurs when the cell cluster has the form of an octahedron with parameter $y=2h_+$ and value $f(2h_+)\approx 0.0013$.
\end{example}
\indy{Notation}{g@$g$ (function name)}%
\figJXEHXQY % fig:fg

The proof of the following lemma is deferred, because it relies on many computer calculations and is extremely long and complex. The non-computer parts of the proof take up most of the remainder of the book. In this chapter the lemma is treated as an unproved assertion.

\begin{lemma*} \label{conj:L12}
\preamble{ \guid{BJERBNU} } %
For any saturated packing $V$ and any $\u_0\in V$,
\begin{equation}\label{eqn:L12}
\sumfinitepred{\u_1}{\u_1\in V\mid h(\u_0, \u_1)\le \hm}{L(h\braced{\u_0, \u_1})} \le 12.
\end{equation}
\end{lemma*}

\begin{lemma}[] \label{theorem:mk2}
\preamble{ \guid{UPFZBZM} } %
Inequality~\eqref{eqn:L12} implies that for every saturated packing $V$, there exists a negligible FCC-compatible function $G:V\to \Real$.
\end{lemma}

\begin{remark}\label{rem:L12KC}
In light of Lemma~\ref{lemma:reduction-finite-dimensions}, inequality~\ref{eqn:L12} implies the Kepler conjecture.
\end{remark}

\begin{proof}
By Lemma~\ref{lemma:mk1}, the proof reduces to showing that there exists a constant $c_0$ such that for all $r\ge1$
\[
\sumfinitepred{X}{X\subset \openBall{\orz}{r}} {\gammaX(X,L)} \ge c_0 r^2.
\]
If a cell $X$ does not belong to any cell cluster, then
\[
\gammaX(X,L)\ge 0
\]
by Lemma~\ref{lemma:LI}. Note that the function $\beta(\ee,X)$ averages to zero for any $4$-cell $X$:
\[
\sumfinitepred{\ee}{\ee\in \op{EC}(X)}{ \beta(\ee,X)} = 0.
\] Hence, the terms involving $\beta$ in sums may be disregarded in this proof. (These terms may be disregarded here, but they are needed in Lemma~\ref{lemma:cluster}.) Theorem~\ref{lemma:cluster} gives the required inequality for cell clusters. Again, using big O notation, \begin{alignat*}{8} \sumfinitepred{X}{X\subset \openBall{\orz}{r}} {\gammaX(X,L)} &=&&\sumfinitepred{X}{X\subset \openBall{\orz}{r}\mid \op{EC}(X)\ne\emptyset} \hspace{-2em}{\gammaX(X,L)} \quad + \sumfinitepred{X}{X\subset \openBall{\orz}{r}\andcomma \op{EC}(X)=\emptyset} \hspace{-2em}{\gammaX(X,L)} \vspace{6pt}\\ &\ge &&\sumfinitepred{X}{X\subset \openBall{\orz}{r}\andcomma \op{EC}(X)\ne\emptyset} \hspace{-2em}{\gammaX(X,L)}\vspace{6pt} \\ &= &&\sumfinitepred{X}{\hspace{1.8em}X\subset \openBall{\orz}{r}\hspace{1.8em}}\hspace{-2em}{\gammaX(X,L)\sumfinitepred{\ee}{\ee \in \op{EC}(X)}{\op{wt}(X)}} \vspace{6pt}\\ &= &&\sumfinitepred{\ee}{\hspace{1.85em}\ee\subset \openBall{\orz}{r}\hspace{1.85em}}\hspace{-2.5em}{\sumfinitepred{X}{~~~~\sumvar{X} \ee \in \op{EC}(X)}\hspace{-1.1em}{\gammaX(X,L)\op{wt}(X)}} &+ O(r^2)\vspace{6pt}\\ &= &&\sumfinitepred{\ee}{\hspace{1.85em}\ee\subset \openBall{\orz}{r}\hspace{1.85em}}\hspace{-1em}{\Gamma(\ee)} &+ O(r^2)\vspace{6pt}\\ &\ge&& ~~O(r^2). \end{alignat*} \end{proof} \begin{definition}[$\BB$] \label{def:BB} \preamble{ \guid{WTKURHK} \formaldef{$\BB$}{ball\_annulus} \indy{Notation}{BB@$\BB$}% } % Let $\BB$ be the \newterm{annulus} $\bar \openBall{\orz}{2h_0}\setminus \openBall{\orz}{2}$, where $\bar \openBall{\orz}{r}$ is the closed ball of radius $r$. \end{definition} \begin{corollary} \label{cor:CE} \preamble{ \guid{RDWKARC} } % If the Kepler conjecture is false, there exists a finite packing $V\subset\BB$ with the following properties. \begin{equation}\label{eqn:CE} \sumfinitepred{\u}{ \u\in V} {L(h\braced{\orz, \u})} > 12. \end{equation} \end{corollary} The proof of the Kepler conjecture proceeds by assuming that there is a counterexample to Inequality~\ref{eqn:L12} and then deriving a contradiction. This corollary formulates the potential counterexample in slightly simpler terms. \begin{proof} If the Kepler conjecture is false, Inequality~\ref{eqn:L12} is violated for some packing $ V$ and some $ \u_0\in V$. After translating $ V$ to $ V - \u_0$ and $ \u_0$ to $\orz$, it follows without loss of generality that $ \u_0=\orz\in V$. After the replacement of $ V$ with the finite subset $V\cap \BB$, it follows without loss of generality that the packing is a finite subset of $\BB$. \end{proof} \section{Counting Spheres}\label{counting spheres} This section proves two estimates about a packing $V\subset \BB$ that satisfies Inequality~\ref{eqn:CE}. The first estimate (Lemma~\ref{lemma:13-14}) shows that the cardinality of $V$ is thirteen, fourteen, or fifteen. The second estimate (See Lemma~\ref{lemma:D'}.) shows that no point $\v\in V$ can be strongly isolated from the other points of $V$. To prove these two estimates, we need a formula for the smallest possible area of a spherical polygon that contains a disk. This formula is developed in the first subsection. \subsection{solid angle}\label{solid angle} \indy{Index}{polygon}% The following lemma is analogous to the Rogers decomposition of a polyhedron into simplices. The lemma constructs $2k$ points to be used to triangulate (a subset of) a polygon (Figure~\ref{fig:reference-w}). \begin{lemma}[] \label{lemma:2D-poly} \preamble{ \guid{EUSOTYP} } % Let $P$ be a two-dimensional bounded polyhedron in $\vecR{2}$. 
Let $k$ be the number of facets of $P$. Let $r>0$. Suppose that \[ \setcomp{\v\in \vecR{2}}{ \normo{v}< r} \subset P. \] Then there exist nonzero points $\w_j\in P$ for $\fordots{j}{0}{2k-1}$ such that \begin{enumerate} \item The polar cycle on $\setcomp{\w_j}{ j}$ is given by $\sigma(\w_j) = \w_{j+1}$, with indexing mod $2k$. \item $\theta(\w_{2i},\w_{2i+1}) = \theta(\w_{2i+1},\w_{2i+2}) < \pi/2$, where $\theta$ denotes the relative polar coordinate of Lemma~\ref{lemma:polar-sum}. \item $\normo{\w_{2i}}=r$ and $\normo{\w_{2i+1}} = r\sec \theta(\w_{2i},\w_{2i+1})$, for $\fordots{i}{0}{k-1}$. \item $\w_{2i}\cdot (\w_{2i\pm1}-\w_{2i}) = 0$. \end{enumerate} \end{lemma} \figYAHDBVO % fig:reference-w \begin{proof} Enumerate the distinct facets $F_1,\ldots,F_k$ of $P$, and for each one select a defining equation \[ F_i = P\cap \setcomp{\p }{ \u_i \cdot \p = b_i}, \quad\text{where } \normo{\u_i}=1\text{ and } b_i\ge0. \] We may assume that ordering of facets by increasing subscripts is the ordering by the polar cycle on $\u_i\in\vecR{2}$. The assumption that $P$ contains an open disk of radius $r$ gives $r\le b_i$. \claim{We claim that $r\u_i\in P$ and does not lie in any facet except possibly $F_i$.} Otherwise, for some $j$, \[ r\le b_j \le (r\u_i)\cdot \u_j\le r\normo{\u_i}\normo{\u_j}=r. \] This is the case of equality in Cauchy--Schwarz, which implies that $\u_i=\u_j$. The definition of face implies that $b_i=b_j$, and $i=j$. The claim ensues. \claim{We claim that $0<\theta(\u_i,\u_{i+1})<\pi$.} Indeed, from the previous claim it follows that $0<\theta(\u_i,\u_{i+1})$. By the boundedness of $P$, for any nonzero $\v$ orthogonal to $\u_i$, there exists $s>0$ such that $r\u_i + s \v$ lies in some facet $F_j\ne F_i$. This gives $r \u_j\cdot\u_i + s \u_j\cdot \v = b_j$. The condition $r\u_i\in P\setminus F_j$ gives $r\u_j\cdot\u_i < b_j$. Hence $\u_j\cdot\v > 0$. For an appropriate choice of sign of $\v$, this gives $\theta(\u_i,\u_{i+1})\le\theta(\u_i,\u_j)<\pi$. Suppressing the subscript $i$, we write $\psi = \theta(\u_i,\u_{i+1})/2$. Let $\u_i'$ be the point in the plane given in polar coordinates by \[ \normo{\u_i'} = r\sec\psi,\quad \theta(\u_i,\u'_{i}) = \theta(\u_i',\u_{i+1})=\psi. \] \claim{We claim that $\u'_i\in P$.} Indeed, for every $j$, we have \[ \u_j \cdot \u_i' = \normo{\u_j}\normo{\u_i'} \cos\varphi = r\sec\psi\cos\varphi, \] where $\varphi = \theta(\u_i',\u_j)$. From the polar order, and the construction of $\u_i'$ along the bisector of $\u_i,\u_{i+1}$, it follows that \[ \psi\le \varphi \le 2\pi - \psi. \] Hence, $\cos\varphi \le \cos\psi$. This gives \[ \u_j \cdot \u_i' \le r \le b_j. \] This shows that $\u'_i$ satisfies all the defining conditions of $P$. Set $\w_{2i}=\u_i$ and $\w_{2i+1}=\u'_i$. It is clear from construction that the polar cycle on $\setcomp{\w_j}{ j}$ is compatible with the indexing. The enumerated properties of the lemma now follow from Lemma~\ref{lemma:polar-sum}. The lemma ensues. \end{proof} \begin{lemma}[] \label{lemma:ngon} \preamble{ \guid{GOTCJAH} } % Let $P$ be a bounded polyhedron in $\vecR{3}$ that contains $\orz$ as an interior point. Let $F$ be a facet of $P$, given by an equation \[ F = \setcomp{\p }{ \p \cdot \v = b_0} \cap P. \] Let $W_F$ be the corresponding topological component of $Y(V_P,E_P)$. Assume that $W_F$ contains the right-circular cone \begin{equation}\label{eqn:rW} \op{rcone}^0(\orz,\v,t) \subset W_F \end{equation} for some $t$ such that $0<t<1$. 
Then \[ \sol(W_F) \ge 2\pi - 2 k \,\arcsin\left(\,t\sin(\pi/k)\,\right), \] where $k$ is the number of edges of $F$. \end{lemma} \begin{proof} Project the facet $F$ to $\vecR{2}$ by projecting onto the coordinates of $\e_2$ and $\e_3$ of an orthonormal frame $(\e_1,\e_2,\e_3)$ adapted to $(\orz,\v,\ldots)$. By the Pythagorean theorem, the hypothesis \eqref{eqn:rW} implies that a disk of radius \[ b_1 \nsqrt{1-t^2}/t \] is contained in the projected face, where $b_1= b_0/\normo{\v}$ is the distance from $\op{aff}(F)$ to $\orz$. Apply Lemma~\ref{lemma:2D-poly} to the projected face, and pull the points $\w_j$ back to points on $F$ with the same names. By the additivity of measure over measurable sets that are disjoint up to a null set, we may partition into wedges: \begin{align*} \sol(W_F) &= \sum_j \sol(W_F \cap W(\orz,\v,\w_j,\w_{j+1}))\\ &\ge \sum_j \sol(\op{aff}_+^0(\orz,\braced{\v,\w_j,\w_{j+1}})). \end{align*} The solid triangles that appear in the last sum are primitive volumes, which are computed in terms of dihedral angles in Chapter~\ref{chapter:volume}. Set \[ \beta_{2j}=\beta_{2j+1}=\dih_V(\braced{\orz,\v},\braced{\w_{2j},\w_{2j\pm1}}) \text{ and } a = \arc_V(\orz,\braced{\v,\w_{2j}}) = \arccos t. \] The three vectors $\v$, $\w_{2j}-\v$, and $\w_{2j+1}-\w_{2j}$ are mutually orthogonal by the final claim of Lemma~\ref{lemma:2D-poly}. Lemma~\ref{lemma:dih-cross} gives \[ \dih_V(\braced{\orz,\w_{2j}},\braced{\v,\w_{2j+1}}) =\pi/2, \] because \begin{align*} (\w_{2j} \times \v)\cdot (\w_{2j}\times \w_{2j+1}) &= (\w_{2j} \times \v)\cdot (\w_{2j}\times (\w_{2j+1}-\w_{2j}))\\ &= (\w_{2j+1}-\w_{2j})\cdot ((\w_{2j} \times \v)\times \w_{2j})\\ &= 0. \end{align*} Consider a spherical triangle with sides $a,b,c$ and opposite angles $\alpha,\beta,\gamma$. If $\gamma=\pi/2$, then by Girard's formula, the area of the triangle is \[ \alpha+\beta-\pi/2, \] and by the spherical law of cosines (Lemma~\ref{lemma:sloc2}) \[ \cos\alpha =\cos a\sin\beta. \] This determines the area $g(a,\beta)$ of the triangle as a function of $a$ and $\beta$: \[ g(a,\beta) = \beta - \arcsin(\cos a \sin \beta). \] \indy{Index}{Girard's formula}% \indy{Notation}{g@$g$ (triangle area)}% \indy{Notation}{zza@$\alpha$ (angle)}% \indy{Notation}{zzb@$\beta$ (angle)}% \indy{Notation}{zzc@$\gamma$ (angle)}% \indy{Index}{convex}% \indy{Index}{Girard's formula}% \indy{Index}{polygon}% The solid angle of $W_F$ is at least sum of the areas of the triangles: \[ \sumupto{j}{0}{2k-1}{g(a,\beta_j)}, \] with angle sum \[ \sumupto{j}{0}{2k-1}{ \beta_j} = 2\pi. \] The second partial of $g$ with respect to $\beta$ is \[ \frac{\partial^2 g(a,\beta)}{\partial \beta^2} = \frac{\cos a\sin^2 a\sin \beta}{\sin^2\alpha} \ge 0. \] Thus, the function is convex in $\beta$. By convexity, the minimum area occurs when all angles are equal $\beta=\beta_j = \pi/k$. The solid angle bound of the lemma is equal to \[ 2 k g(a,\pi/k) \] where $\cos a=t$. \end{proof} \subsection{a polyhedral bound}\label{a polyhedral bound} \begin{definition}[weakly saturated] \label{def:weakly-saturated} \preamble{ \guid{HUCFLEB} } % Let $r$ and $r'$ be real numbers such that $2\le r\le r'$. Define a set $ V\subset\vecR{3}\setminus \openBall{\orz}{2}$ to be \newterm{weakly saturated} with parameters $(r,r')$ if for every $\p\in\vecR{3}$ \[ 2\le\normo{\p}\le r'~~~\implies~~~ \exists \u\in V.~\norm{ \u}{\p}< r. 
\] \end{definition} \begin{lemma}[] \label{lemma:poly-bounded} \preamble{ \guid{TARJJUW} \formalauthor{Dang Tat Dat} \indy{Index}{polyhedron}% } % Fix $r$ and $r'$ such that $2\le r\le r'$. Let $ V$ be a weakly saturated finite packing with parameters $(r,r')$. For any $g: V\to\Real$, let $P( V,g)$ be the polyhedron given by the intersection of half-spaces \[ \setcomp{\p }{ \u\cdot \p \le g( \u)},\quad \u\in V. \] Then $P( V,g)$ is bounded. \end{lemma} \begin{proof} Assume for a contradiction that $P=P( V,g)$ is unbounded and there exists $\p\in P$ such that $\normo{\p} > g( \u) r'/2$ for all $ \u\in V$. Let $\v = r' \p/\normo{\p}$ so that $r'=\normo{\v}$. By the weak saturation of $ V$, there exists $ \u\in V$ such that $\norm{\v}{ \u}<r$. Then, \begin{align*} \normo{\p} &> g( \u) r'/2 \ge \u\cdot (r' \p)/2 = \normo{\p} \u\cdot \v /2\\ &= \normo{\p} (\normo{ \u}^2 + \normo{\v}^2 - \norm{ \u}{\v}^2)/4\\ &> \normo{\p}(4+r'^2-r^2)/4\\ &\ge \normo{\p}. \end{align*} This contradiction shows that $P$ is bounded. \end{proof} \begin{lemma} \label{lemma:g-ineq} \preamble{ \guid{YSSKQOY} } % Let \[ g(h) = \arccos(h/2) - \pi/6. \] Then \begin{equation}\label{eqn:disks} \op{arc}(2h,2h',2)\ge g(h) + g(h') ,% \end{equation}% for all $h,h'\in\leftclosed 1,h_0\rightclosed$. \end{lemma} \begin{proof} The function $g$ can be rewritten as \[ g(h) = \op{arc}(2h,2,2) - \op{arc}(2,2,2)/2. \] It is enough to prove a more general inequality in symmetrical form \begin{equation}\label{eqn:disks-symmetrical} f(a_2,b_2) - f(a_1,b_2) - f(a_2,b_1) + f(a_1,b_1)\ge0, \end{equation} when \[ 2\le a_1 \le a_2 \le 2h_0,\text{ and } 2\le b_1\le b_2 \le 2h_0, \] where $f(a,b) = \op{arc}(a,b,2)$. A calculation gives \[ \frac {\partial^2 f(a,b)}{\partial a\,\partial b} = \frac{32 a b}{\ups{a^2}{b^2}{4}^{3/2}} > 0. \] Thus, by holding $a$ fixed, $\partial f/\partial a$ is increasing in $b$: \[ \frac {\partial f(a,b_2) } {\partial a} -\frac{\partial f(a,b_1)}{\partial a} \ge 0. \] This shows that with $b_1$ and $b_2$ fixed, $f(a,b_2)-f(a,b_1)$ is increasing in $a$. Equation~\ref{eqn:disks-symmetrical} ensues. \end{proof} Since $L(h)\le 1$ when $h\ge1$, it is clear that a finite packing $V$ that satisfies Inequality~\ref{eqn:CE} has cardinality greater than twelve. The following lemma also gives an upper bound on the cardinality of $V$. \begin{lemma}[] \label{lemma:13-14} %% \preamble{ \guid{DLWCHEM} } % If $V\subset \BB$ is a packing that satisfies Inequality~\ref{eqn:CE}, then the cardinality of $V$ is thirteen, fourteen, or fifteen. \end{lemma} \begin{proof} (Following Marchal.) Consider a finite packing $ V=\setenumdots{\u}{1}{N}\subset \BB$ satisfying Inequality~\ref{eqn:CE}. The packing $V$ contains more than twelve points because otherwise Inequality~\ref{eqn:CE} cannot hold, as $L(h)\le 1$. By adding points as necessary, the packing becomes weakly saturated in the sense of Definition~\ref{def:weakly-saturated}, with $r=2$ and $r'=2\hm$. It is enough to show that this enlarged set has cardinality less than sixteen. Let \[ % g(h) = \arccos(h/2) - \pi/6, % \] % and let $h_i = \normo{ \u_i}/2$. Then $h_i\le h_0=1.26$. Consider the spherical disks $D_i$ of radii $g(h_i)$, centered at $ \u_i/\normo{ \u_i}$ on the unit sphere. These disks do not overlap by Lemma~\ref{lemma:g-ineq}. \indy{Notation}{D@$D$ (spherical disk)}% For each $i$, the plane through the circular boundary of $D_i$ bounds a half-space containing the origin. 
The intersection of these half-spaces is a polyhedron $P$, which is bounded by Lemma~\ref{lemma:poly-bounded}. (See Figure~\ref{fig:marchal-polyhedron}.) Lemma~\ref{lemma:polyhedron} associates a fan $(V_P,E_P)$ with $P$. (The set $V_P$ is dual to $ V$; the set $V_P$ is in bijection with extreme points of $P$, whereas $ V$ is in bijection with the facets of $P$.) There are natural bijections between the following sets. \begin{enumerate}\wasitemize \item $ V = \setenumdots{\u}{1}{N}$. \item The facets of $P$. \item The set of topological components of $Y(V_P,E_P)$. \item The set of faces in the hypermap $\op{hyp}(V_P,E_P)$. \end{enumerate}\wasitemize The first conclusion of Lemma~\ref{lemma:webster} gives the bijection of the first two sets. Lemmas~\ref{lemma:WF} and ~\ref{lemma:face} give the other bijections. \figZXEVDCA % fig:marchal-polyhedron By Lemma~\ref{lemma:edge-bi}, the number of edges of the facet $i$ is $k_i$, the cardinality of the corresponding face in $\op{hyp}(V_P,E_P)$. By Lemma~\ref{lemma:ngon}, the solid angle of the topological component $W_i$ of $Y(V_P,E_P)$ is at least $\op{reg}(g(h_i),k_i)$, where \indy{Index}{half-plane}% \indy{Index}{half-space}% \indy{Notation}{reg (area of regular spherical polygon)}% \[ \op{reg}(a,k) = 2\pi - 2 k (\arcsin(\cos(a)\sin(\pi/k))). \] By a \cc{BIEFJHU}{% This is a linear lower bound on the area of a regular polygon.} \begin{equation}\label{eqn:alin} \op{reg}(g(h),k) \ge c_0 + c_1 k + c_2 L(h),\quad \text{ for all } k = 3,4,\ldots,\quad 1\le h\le \hm, \end{equation} where \[ %c_0 = 0.6327,\quad c_1 = -0.0333,\quad c_2 =0.4754. c_0=0.591,\quad c_1=-0.0331,\quad c_2 = 0.506. \] The sum $\sum_i k_i$ is the number of darts in $\op{hyp}(V_P,E_P)$ by Lemma~\ref{lemma:polyhedron}. By Lemma~\ref{lemma:dart-upper}, $\sum_i k_i \le (6N-12)$. Summing over $i$, an estimate on $N$ follows: \indy{Index}{polyhedron}% \indy{Index}{hypermap!planar}% \begin{align*} 4\pi &= \sum_i\op{sol}(W_i)\\ &\ge \sum_i \op{reg}(g(h_i),k_i) \\ &\ge c_0 N +c_1\sum_i k_i + c_2 \sum L(h_i)\\ &\ge c_0 N +c_1 (6N-12) + c_2 12. \end{align*} This gives $16 > N$. \end{proof} \begin{lemma}[] \label{lemma:D'} \preamble{ \guid{XULJEPR} } % Assume that $V\subset \BB$ is a packing that satisfies Inequality~\ref{eqn:CE}. Then for every $ \v\in V$ such that $\normo{ \v}=2$, there exists $\u\in V$ such that $0<\norm{ \v}{ \u}< 2\hm$. \end{lemma} \begin{proof} Assume for a contradiction that a packing $V$ exists that satisfies the inequality for which there exists $\v\in V$ for which \begin{equation}\label{eqn:norm-hm} 2\hm\le\norm{\v}{\u},\quad \u\ne \v. \end{equation} The assumption that \eqref{eqn:CE} holds implies that $N\ge 13$. Create one large disk $D_1'$ centered at $\v/2$ and repeat the proof of the previous lemma. Extend the packing to a weak saturation with parameters $r=r'=2\hm$. This can be done in a way that maintains the assumptions on $\v$. By Lemma~\ref{lemma:poly-bounded}, the polyhedron is bounded. By a \cc{WAZLDCD}{} % \[ a'=0.797 < \arc(2,2h,2\hm)-g(h)\text{\ \ for } 1\le h \le \hm.\] By \eqref{eqn:norm-hm}, we may take $a'$ for the arcradius of the large disk $D_1'$. By a \cc{UKBRPFE}{} % \begin{equation}\label{eqn:alin2} \op{reg}(a',k) \ge c_0 + c_1 k + c_2 L(1) + c_3,\quad k=3,4,\ldots\end{equation} where $c_3 = 1$. 
%
Then
\begin{align*}
\label{eqn:D'}
4\pi &= \sumupto{i}{1}{N}{\op{sol}(W_i)}\\
&\ge \op{reg}(a',k_1)+\sumupto{i}{2}{N}{ \op{reg}(g(h_i),k_i)} \\
&\ge c_0 N +c_1\sumupto{i}{1}{N}{k_i} + c_2 \sumupto{i}{1}{N}{L(h_i)} + c_3\\
&\ge c_0 N +c_1 (6N-12) + c_2 12 + c_3.
\end{align*}
This gives a contradiction: $13 > N \ge 13$.
\end{proof}
\end{cnl}
\documentclass[grad,lot,lof,11pt,oneside,onehalfspace]{RUthesis}

\usepackage{xspace,amsmath,amssymb}
\usepackage[squaren]{SIunits}
\usepackage{hyperref}

% Hyperref options for color/format of links
\hypersetup{
    colorlinks=true,   % false: boxed links; true: colored links
    linkcolor=black,   % color of internal links
    citecolor=black,   % color of links to bibliography
    filecolor=black,   % color of file links
    urlcolor=black     % color of external links
}

\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\ben}{\begin{equation*}}
\newcommand{\een}{\end{equation*}}
\newcommand{\bea}{\begin{eqnarray}}
\newcommand{\eea}{\end{eqnarray}}
\newcommand{\bean}{\begin{eqnarray*}}
\newcommand{\eean}{\end{eqnarray*}}
\newcommand{\dif}{\mathrm{d}}
\newcommand{\me}{\mathrm{e}}
\newcommand{\e}[1]{\ensuremath{\times10^{#1}}}

% Additional non-SI-units
% useful/specific to your field
\addunit{\dpi}{dpi}
\addunit{\tcid}{TCID\ensuremath{_{50}}}

\title{A Proposed Defense Framework Against Adversarial Machine Learning Attacks Using Honeypots}

\author{Fadi Younis}

\RUdegree{Master of Science}
\RUfield{Computer Science}
\RUsupervisor{Dr.\ Ali Miri}
\RUdepartment{Science}
\RUsubmitdate{September 14, 2012}
\RUaddress{Department of Science,\\ Ryerson University,\\ Toronto, Ontario \\ Canada \\ M5B 2K3}
\RUyear{2018}
\RUpastdegrees{B.Sc.\ Ryerson University, Toronto (ON), Canada, 2009}

\RUdedication{To my friends, my dear family, and wonderful colleagues, without whom none of my success would be possible.}

% WARNING:
% MSc abstract should be at most 150 words not including title, name, etc.
% PhD abstract should be at most 350 words not including title, name, etc.
\RUabstract{
Cloud services that offer Machine-Learning-as-a-Service (MLaaS) expose their prediction models as black boxes, and these models can be misled by maliciously crafted inputs known as Adversarial Examples. Existing defenses, such as adversarial training, do not generalize well to new adversarial inputs, particularly in the black-box setting. This thesis proposes a defense framework that acts as a secondary level of protection by deceiving the attacker rather than hardening the classifier alone. The framework lures the adversary away from the protected model towards a decoy model hosted on a distributed network of high-interaction honeypots, uses adversarial honey-tokens to misinform and track the attacker, and creates infeasible computational work of no functional use. The data collected from the attacker's interaction with the decoy can then be used to study attacks and further strengthen the target model.
}

\RUacknowledgement{
Many people have contributed to my work here at Ryerson University. First I thank my supervisor Dr.\ Ali Miri for guiding my research, as well as providing many helpful suggestions throughout my time here. I would like to acknowledge the Department of Computer Science for providing me with funding while I did my research.
}

\begin{document}
\maketitle

\chapter{Introduction}

In this chapter, we introduce the setting in which Adversarial Examples occur and maliciously influence the prediction model (\textbf{Section 1.1}). We then provide an overview of the thesis problem (\textbf{Section 1.2}), followed by the motivation for solving it (\textbf{Section 1.3}). We then outline the goals we hope to reach with our solution by the end of this thesis (\textbf{Section 1.4}). Finally, we end the chapter with an outline for the remainder of this thesis (\textbf{Section 1.5}).

\section{Setting}

Machine Learning is exploding in development and demand, with uses in critical applications, services and domains that are not confined to one area of industry but span several. With each day, more applications are harnessing the capabilities of Machine Learning.
To meet this ever increasing demand, tech giants such as Amazon, Google, Uber, Netflix, Microsoft and many others now provide their Machine Learning products and services in the form of an online cloud service, otherwise known as \textit{Machine-Learning-as-a-Service} (MLaaS)\cite{ribeiro_mlaas:_2015}. While easily accessible Machine Learning tools are becoming readily available, the desire to personally customize and build these services from the ground up is decreasing. This is because users do not wish to spend countless hours training, testing and fine-tuning their machine learning models; they simply want to use them. While some users still prefer to control how their models are constructed and deployed, companies have gone to considerable effort to hide the complex internal mechanisms from most of their users and to package them in non-transparent, easily customized services. Essentially, they provide their services in the form of a \textit{Black-Box}\cite{kurakin_adversarial_2016}\cite{papernot_practical_2016}: an opaque system container that accepts some input and produces an output, while the internal details of the model remain hidden.

However, like any deployed application, we cannot assume that it sits in a safe environment. There are security flaws and vulnerabilities in every man-made system, and the container holding a MLaaS is no exception. These weaknesses introduce a susceptibility to malicious attacks, which is expected and almost always the case. Like any regular user, an adversarial attacker does not have access to the internal components of the system. This might give a sense of security, namely that the \textit{Black-Box} system is secure. But as we will see, this sense of security is only temporary, and it is only the beginning of the dangers to follow.

\section{The Problem at a Glance}

As we learned in the introduction of this chapter, adversarial threats to Machine Learning exist and are lurking close by. The threat transpires when an attacker misleads and confuses the prediction model inside the cloud computing application offering the MLaaS, allowing malicious activities to go undetected \cite{papernot_practical_2016}. This drives up the rate of false negatives, violating model integrity. These masqueraded inputs, called Adversarial Examples\cite{kurakin_adversarial_2016}, represent one of the recent threats to cloud services providing \textit{Machine Learning as a Service} (MLaaS). They look familiar to the inputs normally accepted by the classifier, but only appear that way: they are maliciously crafted to exploit blind spots in the classifier's decision boundary, to mislead and confuse the learned decision function of the classifier, and to compromise model integrity after training. Most defenses aim at strengthening the classifier's discriminator function by training it on malicious input ahead of time to make it robust. Defense methods such as \textit{Regularization} and \textit{Adversarial Training} have proven unsuccessful. The latter method, and others like it, cannot be relied upon alone, since they do not generalize well to new adversarial inputs. This is evidently true in the case of a Black-Box (blind model) setting, where, as mentioned, the adversary has access only to inputs and output labels.
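To make the notion of a maliciously crafted input more concrete, one simple and widely studied construction, the fast gradient sign method, perturbs a correctly classified input $x$ with label $y$ in the direction that increases the model's training loss $J$:
\[
x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\left(\nabla_{x} J(\theta, x, y)\right),
\]
where $\theta$ denotes the model parameters and $\epsilon$ is a small perturbation budget. The short sketch below illustrates this idea in PyTorch; the model, inputs and $\epsilon$ are placeholders, and the snippet is only an illustration of how such inputs can be generated in a white-box setting, not the attack or defense studied in this thesis. In a true Black-Box setting the attacker cannot compute these gradients directly, which is why such attacks are typically mounted against a locally trained substitute model and then transferred to the target.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    # Illustrative sketch only: perturb x in the direction that
    # increases the classification loss of a white-box model.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
\end{verbatim}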
Our aim is to develop an adversarial defense framework that acts as a secondary level of prevention, curbing adversarial examples from corrupting the classifier by deceiving the attacker.

\section{Motivation}

Due to the exploding demand that machine learning is witnessing, the risk of adversarial threats has increased as well. For example, an attacker can maliciously fool an \textit{Artificial Neural Network} (ANN) classifier into allowing malicious activities to go undetected, without direct influence on the classifier itself (ref). These masqueraded inputs, called Adversarial Examples, represent one of the recent threats to cloud services providing Machine Learning as a Service (MLaaS). They are maliciously crafted to exploit blind spots in the classifier's decision boundary, to mislead and confuse the classifier after model training, and to compromise the integrity of the model. As a result, there has been an increased interest in defense techniques to combat them. \\
Our challenge lies in constructing an adversarial defense technique capable of deceiving the intrusive attacker and luring him away from the black-box target model. For purposes of our approach, we have decided to primarily use \textit{Adversarial Honey-Tokens}, which act as fictional digital breadcrumbs designed to lure the attacker and to be detectable by the network sniffers the attacker uses. It is possible to generate a unique token for each item to deceive the attacker and track his abuse; however, each token must be designed, generated and strategically embedded into the system to misinform and fool the adversary.\\
Previous research has aimed at strengthening the classifier's discriminator function by training it on malicious input to make it robust. Defense methods such as \textit{Regularization} and \textit{Adversarial Training} have proven unsuccessful. The latter method has been criticized because it cannot be relied upon, since such methods do not generalize well to new adversarial inputs \cite{rozsa_towards_2016}. This is evidently true in the case of a Black-Box (blind model) setting, where the adversary has access only to inputs and output labels. We believe it is necessary to develop an adversarial defense framework that acts as a secondary level of protection, preventing adversarial examples from corrupting the classifier by deceiving the attacker. The majority of our work lies in designing a distributed network of High-Interaction Honeypots as an open target for adversaries; these honeypot nodes act as sandboxes that contain the decoy neural network and collect valuable data and insight into adversarial attacks. We believe this might deter adversaries from attacking the target model. Other adversarial example defenses can also benefit from and utilize this framework as an adjunct to their techniques. Unlike other defense models proposed in the literature, our model prevents the attacker from interacting directly with the target.\\
We designed our defense framework to deceive the adversary in three steps, occurring in sequential order. The information collected from the attacker's interaction with the decoy model could then potentially be used to learn from the attacker, and to re-train and robustify the deep learning model in future training iterations. Our defense approach is motivated by trying to answer the following question: \textit{``Is there a possible way to fool the attacker and prevent him from learning the behavior of the model?''}
At its core, our intention is to devise a defense technique that both fools the attacker and prevents him from interacting with the model.
\section{Thesis Goals}
Our thesis is as follows:\newline\\
\textit{"Given a deep learning model within a Black-Box setting and an intrusive attacker with an adversarial input capable of corrupting the model and misclassifying its labels, there exists a defense framework to fool and deter the attacker's attempts. If building such a defense framework is possible, then there exists a way to prevent the attacker from learning the model's behavior and corrupting it."}\newline\\
The purpose of this work is to investigate the thesis above and build an appropriate defense framework that implements it. Due to time and resource constraints, we limit our objectives to the following:
\begin{itemize}
\item Propose an adversarial defense approach that acts as a secondary level of defense to reinforce existing adversarial defenses. It aims to: 1) prevent the attacker from correctly learning the classifier labels and from approximating the correct architecture of the ``Black-Box'', 2) lure the attacker away from the model towards a decoy model, and 3) create infeasible computational work for the adversary, with no functional use.
\item Evaluate the performance of the proposed defense method, its strengths, weaknesses, limitations and benefits.
\item Provide a detailed architecture and implementation of the \textit{Adversarial Honey-Tokens}: their design, features, usage, deployment and benefits.
\end{itemize}
\section{Overview}
This thesis has 7 chapters. It is divided as follows:
\begin{itemize}
\item\textbf{Chapter 1 -} gives a brief introduction to the problem at hand and its setting. This chapter also gives an overview of the thesis goals and a breakdown of the outline.
\item \textbf{Chapter 2 -} introduces the problem and the motivation for solving it. It also introduces relevant concepts, such as \textit{Adversarial Machine Learning}, \textit{Transferability}, \textit{Deep Neural Networks}, \textit{Black-Box Systems}, and \textit{Honeypots}.
\item \textbf{Chapter 3 -} gives a summary and critical evaluation of the related work authored by other researchers on the topic of adversarial black-box attacks and proposed defenses.
\item \textbf{Chapter 4 -} outlines the design and architecture of the defense approach. It also gives insight into the approach's setup, environment and limitations, and shows how Honeypots can be used to keep adversarial attacks from influencing the model.
\item \textbf{Chapter 5 -} details the approach set-up, adversarial environment, limitations and evaluation criteria.
\item \textbf{Chapter 6 -} details the implementation of the \textit{Adversarial Honey-Tokens}: their features, usage, deployment and benefits.
\item\textbf{Chapter 7 -} summarizes the thesis, gives an overview of the contributions made and suggests future research directions.
\end{itemize}
\chapter{Background}
The aim of this chapter is to introduce the main problems and challenges involved in defending against \textit{Adversarial Examples}. We also formulate important concepts so that they can be used to understand the upcoming chapters. We begin by thoroughly explaining concepts that will be important for the rest of this thesis.
\section{Deep Learning and Security}
\subsection{Deep Learning}
\subsection{Vulnerabilities}
\subsection{Attacks on Deep Learning Networks}
\section{Adversarial Examples}
\subsection{Definition of Adversarial Examples}
\subsection{Generation of Adversarial Examples}
\subsection{Impact of Adversarial Examples}
\section{Adversarial Defenses}
\section{Transferability}
\newpage
\section{Black-Box Learning Systems}
\subsection{Black-Box Attack Models}
\subsection{Black-Box Model vs. Blind Model}
\subsection{Attack Approach}
\subsection{Vulnerability to Black-Box Attacks}
%==========================================================================
\newpage
\section{Honeypots}
This section focuses on Honeypots. We start with an in-depth definition of what a \textit{honeypot} is. Then we dissect and explain the components comprising a honeypot and evaluate its intrinsic value. This section also offers insight into how other security researchers have proposed using Honeypots for purposes of deception, as well as for infrastructure security.
\subsection{Concept of Honeypots}
A honeypot can be thought of as a fake system that collects intelligence on an adversary by inducing him to attack it. It is meant to appear and respond like a real system in the production environment. However, the honeypot and the data inside it are falsified and spurious. A honeypot has no real value: if it should become compromised, it poses no threat to the production environment\cite{lihet_how_2015}\cite{suo_research_2014}. Honeypots can be deployed with fabricated information that is attractive to outside attackers, and can re-direct attackers towards decoy systems and away from critical infrastructure \cite{guarnizo_siphon:_2017}.
\subsection{Classification of Honeypots}
Honeypots can be classified using several different criteria. For the purposes of this thesis, we classify them based on functionality and operation.
\begin{itemize}
\item {\textbf{Research Honeypots} -} These are the honeypots deployed with the highest level of risk associated with them, in order to expose the full range of attacks initiated by the adversary. They are mainly used to collect statistical data on adversarial activities inside the honeypot \cite{lihet_how_2015}. They are more difficult to deploy, but this does not prevent organizations from using them to study attacks and develop security countermeasures against them. Research honeypots help in understanding the trends, strategies and motives behind adversarial attacks\cite{nawrocki_survey_2016}.
\item {\textbf{Production Honeypots} -} These honeypots are known for their ease of deployment and their utility in company production environments \cite{nawrocki_survey_2016}. Closely monitored and maintained, their purpose lies in their ability to be used in an organization's security infrastructure to deflect probes and security attacks. They are an attractive option because of the ease of deployment and the sheer value of the information collected on the adversary.
\item {\textbf{Physical/Virtual Honeypots} -} Physical honeypots are locally deployed honeypots that are part of the physical infrastructure; they are considered intricate and difficult to implement properly \cite{lihet_how_2015}. Virtual honeypots, on the other hand, are systems simulated (virtualized) by a host system, which forwards network traffic to the virtual honeypot \cite{nawrocki_survey_2016}.
\item {\textbf{Server/Client Honeypots} -} The main difference between server and client honeypots is that the former waits until the adversary initiates the communication, while client honeypots contact malicious entities and request an interaction \cite{nawrocki_survey_2016}. Traditional honeypots are usually server-based.
\item {\textbf{Cloud Honeypots} -} These are honeypots deployed on the cloud. This type of honeypot has many advantages as well as restrictions. They are used by companies that have at least one part of their infrastructure on the cloud. Having the system (or part of it) in the cloud makes it easy to install, update, and recover the honeypot in case of a corruption\cite{lihet_how_2015}.
\item {\textbf{Honey-tokens} -} A honey-token can be thought of as a \textit{digital} piece of information. It can take the form of a document, a database entry, an e-mail, or credentials. In essence, it could be anything considered valuable enough to lure and bait the adversary. The benefit of these tokens is that they can be used to track stolen information and the level of adversarial abuse in the system \cite{akiyama_honeycirculator:_2017}.
\end{itemize}
\subsection{Honeypot Deployment Modes}
Honeypots can be deployed in one of three deployment modes \cite{campbell_survey_2015}; they are:
\begin{itemize}
\item \textbf{Deception} mode manipulates the adversary into thinking the responses are coming from the actual system itself. This system is used as a decoy and contains security weaknesses to attract attackers. According to researchers, a honeypot is involved in deception activities if its responses can deceive an attacker into thinking that the response returned is from the real system.
\item \textbf{Intimidation} mode is when the adversary is aware of the measures in place to protect the system. A notification may inform the attacker that the system is protected and that all activity is monitored. This countermeasure may ward off or scare away any adversarial \textit{novice}, leaving only the experienced adversaries with in-depth knowledge and competent skills to attack the system.
\item \textbf{Reconnaissance} mode is used to record and capture new attacks. This information is used to implement heuristics-based rules that can be applied in intrusion detection and prevention systems. With reconnaissance, the honeypot is used to detect both internal and external adversaries of the system.
\end{itemize}
\subsection{Honeypot Role and Responsibilities}
The true value of honeypots lies in their ability to address the issue of security in production system environments. They mainly focus on:
\begin{itemize}
\item \textbf{Interaction -} the honeypot should be responsible for interacting with the adversary. This pertains to acting as the main environment where the adversary becomes active and executes his attack strategy.
\item \textbf{Deception -} the honeypot should be responsible for deceiving the adversary. This pertains to disguising itself as a normal production environment, when in fact it is a \textit{trap} or \textit{sandbox} designed to exploit the adversary.
\item \textbf{Data Collection -} the honeypot should be responsible for capturing and collecting data on the adversary. This information will potentially be useful for studying the attacker and his motivations.
\end{itemize}
\subsubsection{Advantages of Honeypots}
Honeypots alone do not enhance the security of an infrastructure. However, we can think of them as subordinate to measures already in place.
However, this subordinate role does not take away from some distinct advantages they have when compared to other security mechanisms. Here, we highlight a few \cite{nawrocki_survey_2016}:
\begin{itemize}
\item{\textbf{Valuable Data Collection} -} Honeypots collect data which is not polluted with noise from production activities and which is usually of high value. This makes data sets smaller and data analysis less complex.
\item{\textbf{Flexibility} -} Honeypots are a very flexible concept, as can be seen from the wide array of honeypot software available on the market. This indicates that a well-adjusted honeypot tool can be modified and used for different tasks, which further reduces architecture redundancy.
\item{\textbf{Independence from Workload} -} Honeypots do not need to process traffic directed at or originating from the production system. This means they are independent of the workload which the production system experiences.
\item{\textbf{Zero-Day-Exploit Detection} -} Honeypots capture any and every activity occurring within them, which can reveal previously unseen adversarial strategies, trends and zero-day exploits identified from the session data collected.
\item{\textbf{Lower False Positives and Negatives} -} Any activity that occurs inside the server honeypot is considered to be out of place and therefore an anomaly, which is by definition an attack. Honeypots verify attacks by detecting system state changes and activities that occur within the honeypot container. This helps to reduce false positives and negatives.
\end{itemize}
\subsubsection{Disadvantages of Honeypots}
Ultimately, no security system or tool is faultless. Honeypots suffer from some disadvantages, among them \cite{nawrocki_survey_2016}:
\begin{itemize}
\item{\textbf{Limited Field of View} -} A honeypot is only useful if an adversary attacks it, and worthless if no one does. If the honeypot is evaded by the adversary, who attacks the production system or target environment directly, the attack will not be detected.
\item{\textbf{Being Fingerprinted} -} Here, fingerprinting signifies the ability of the attacker to identify the presence of a honeypot. If the honeypot behaves differently than a real system, the attacker might identify and consequently detect it. If its presence is detected, the attacker can simply ignore the honeypot and attack the targeted system instead.
\item{\textbf{Risk to the Environment} -} Honeypots might introduce a vulnerability into the production infrastructure environment if exploited and compromised. Naturally, as the level of interaction (freedom) that the adversary has within the environment increases, so does the level of potential misuse and risk. The honeypot can be monitored and the risk mitigated, but not completely eliminated.
\end{itemize}
\subsection{Honeypots Level of Interaction}
A honeypot is a fake system with no real value. It is built and designed to emulate the same tasks that a real production system can accomplish.
However, these tasks are of no significance, hence compromising the honeypot poses no threat to the production environment. Honeypot system functionality can be categorized according to the level of interaction the adversary has with the honeypot system environment \cite{lihet_how_2015}:
\begin{itemize}
\item\textbf{Low-interaction Honeypot (LIHP) - } These systems emulate only simple services like \textit{Secure Shell} (SSH), \textit{Hypertext Transfer Protocol} (HTTP) or \textit{File Transfer Protocol} (FTP). They are easily discoverable by attackers and provide the lowest possible level of security. However, they have a promising advantage: they are easy to install, configure and monitor. They should not be used in production environments, but rather for education and demonstration purposes. Some examples of such systems include \textit{Honeyperl}, \textit{Honeypoint}, and \textit{mysqlpot}.
\item\textbf{Medium-interaction Honeypot (MIHP) -} This type of system is a hybrid which lies in the middle ground between low- and high-interaction honeypots. The honeypot is still an instance that runs within the operating system, but it blends so seamlessly into the environment that it becomes difficult to detect by attackers lurking within the network. Some examples of such systems are \textit{Kippo} and \textit{Honeypy}.
\item\textbf{High-interaction Honeypot (HIHP) -} The main characteristic of High-Interaction Honeypots is that they use a real, live operating system. They use more hardware resources and, when deployed, pose a major risk to the rest of the production environment and infrastructure. In order to minimize risk and prevent exploitation by an adversary, they are constantly monitored. Some examples of such systems are \textit{Pwnypot} and \textit{Capture-HPC}.
\end{itemize}
\subsection{Honeypots in Deception}
\subsection{Conclusion}
As mentioned above, honeypots have a wide array of enterprise applications and uses. Honeypot technology has been utilized in detecting \textit{Internet of Things} (IoT) cyberattack behavior, by analyzing incoming network traffic traversing IoT nodes and gathering attack intelligence \cite{dowling_zigbee_2017}. In robotics, a honeypot was built to investigate remote network attacks on robotic systems\cite{irvene_honeybot:_2017}. Evidently, there is an increasing need to put \textit{red herring} systems in place to thwart adversarial attacks before they occur and cause damage to production systems. \\
One type of honeypot technology witnessing an increase in popularity is the High-Interaction Honeypot (HIHP). This type of honeypot is preferred since it provides a real, live system for the attacker to be active in. This property is valuable, since it captures the full spectrum of attacks launched by adversaries within the system. It allows us to learn as much as possible about the attacker, the strategy involved and the tools used. Gaining this knowledge allows security experts to get insight into what future attacks might look like, and to better understand the current ones.
\section{The Problem we Aim to Solve}
%===========================================================================
\chapter{Related Work}
In this chapter we summarize and evaluate the work authored by other researchers on adversarial black-box attacks and defense techniques, as well as deception techniques and HoneyTokens.\\
The works below deal directly with the concept of defending against adversarial examples by preprocessing the data during the training phase of DNN model preparation. That typically means influencing the effect the data will have on the underlying DNN model, and filtering out malicious perturbations inserted by an adversary that may corrupt it. Other works in this section focus on the role of cyber security defense through deception, specifically on the role of decoys and fake entities in deceiving the attacker. Our challenge here is to construct a secondary level of protection and defense, designed not to replace existing defense techniques, but to supplement and reinforce the defense frameworks mentioned below through adversary deception.\\
Alternatives to the current defense techniques include defensive distillation. The following papers and works deal directly with defenses against adversarial examples, and with defense through deception using HoneyTokens:
\renewcommand{\theenumi}{\roman{enumi}}%
\begin{enumerate}
\item \textit{Efficient Defenses Against Adversarial Attacks} - this paper focuses on addressing the lack of efficient defenses against adversarial attacks undermining DNNs. This pressing need has been amplified by the fact that there is no unified understanding of how or why these adversarial attacks occur. The authors propose a solution which focuses on reinforcing the existing DNN model to make it robust to adversarial attacks attempting to fool it. The proposed solution utilizes two strategies to strengthen the model: the first is using bounded ReLU activation functions, and the second is augmenting the training data with Gaussian noise. The result of applying both strategies is a much smoother and more stable model, without losing the model's performance or accuracy.
\item \textit{Blocking Transferability of Adversarial Examples in Black-Box Learning Systems} - this paper \cite{hosseini_blocking_2017} is the closest academic paper, in terms of incentive and stimulus, to our proposed auxiliary defense technique. An adversarial training approach is presented for robustifying Black-Box learning systems against adversarial perturbations. The paper proposes the method of \textit{NULL labeling}, where adversarial examples are filtered out and discarded instead of being classified into their respective target labels. The ingenuity of this approach lies in how it is able to distinguish between clean and perturbed input. The method is shown to be capable of blocking adversarial transferability and resisting the adversarial inputs that exploit it. The latter is achieved by mapping malicious input to a NULL label while allowing clean test data to be classified into its original label, all while maintaining prediction accuracy.
\item \textit{Towards Robust Deep Neural Networks with BANG - } this paper \cite{rozsa_towards_2016} presents another training approach for combating adversarial examples and robustifying the learning model.
The authors propose this technique in response to the abnormal and mysterious nature of adversarial examples and of the reasons for their existence in Deep Neural Networks (DNNs). For this purpose, the authors present a data training approach known as \textit{Batch Adjusted Network Gradients} (\textit{BANG}). This method works by attempting to balance the influence that each input element has on the node weight updates. This achieves enhanced stability in the model by forming \textit{flatter} areas in the classification region, making it robust to input distortions that would exploit this instability. The method is designed to avoid the instability brought about by adversarial examples. It achieves good results without manipulating the training data and with low computational cost, while maintaining classification accuracy.
\item \textit{HoneyCirculator: distributing credential HoneyToken for introspection of web-based attack cycle} - in this paper \cite{akiyama_honeycirculator:_2017}, the authors distribute bait credentials (credential honeytokens) and track their subsequent use in order to gain introspection into the web-based attack cycle.
\item \textit{A Survey on Fake Entities as a Method to Detect and Monitor Malicious Activity - } this survey paper \cite{rauti_survey_2017} serves as an examination of the concept of \textit{fake entities}, on which the defense proposed in this thesis relies heavily. What makes fake entities so attractive as an asset, to the authors, is how inexpensive, lightweight and easy to deploy they are compared to other security mechanisms. Simply put, they are digital objects intended to be accessed by the attacker. Once one is in the possession of an attacker, the defender is notified and can begin monitoring the attacker's activity. The main concern for the authors is designing convincing fake data to attract and fool an adversary. Generally, the defender should design fake entities which are \textit{attractive} to the attacker while not revealing important or compromising information, and should learn as much as possible about the attacker. As the threat of adversarial attacks increases, so will the need for novel approaches to combat it.
\item \textit{Designing Adaptive Deception Strategies - } this paper \cite{faveri_designing_2016} provided the initial motivation for using Honeypots as an ad-hoc approach to curb adversarial attacks in this thesis. The authors strongly advocate the use of deception-based strategies in defense architectures to mislead and confuse attackers. Specifically, the authors suggest incorporating deception techniques into the traditional software life-cycle, to be used when designing adaptive systems that thwart adversaries, based on the attacker's goals, monitoring channels, metrics, and risks. The utility of this approach is shown through a detailed use-case in which a deception strategy is used to build a smartphone application that synchronizes erroneous data with a database to deceive the attacker.
\item \textit{Deception Planning Models for Cyber Security} - in this paper the authors focus on the role that deception plays in defense in cyber security. The authors identify 20 important features that aid in characterizing and integrating deception as a method of defense in the design of deception-based frameworks: for instance, what the scope of deception is at design time, which tools are being used to deceive the adversary, and how deception is integrated into other parts of the defense framework.
Other features include deception metrics, risks, bias exploitation, deployment and execution, timing, and the termination plan. This survey outlines important factors that should be considered when integrating deception into any defense framework. \\
There is also extensive work on utilizing adversarial transferability in other forms of adversarial attacks, on deep learning vulnerabilities in DNNs, and on black-box attacks in machine learning. Other interesting works include utilizing honeypots in defense techniques, such as the design and implementation of a honey-trap \cite{egupov_development_2017}, deception in distributed system environments \cite{soule_enabling_2016}, and using containers in deceptive honeypots \cite{kedrowitsch_first_2017}.
\end{enumerate}
\chapter{Proposed Defense Approach}
\label{chap-math}
\section{Motivation}
\section{Approach}
\section{Attacker}
\subsection{Attack Setting}
\subsection{Attack Goal}
\subsection{Attacker Knowledge}
\subsection{Attacker Capabilities}
\section{Adversarial Honeypot Network Overview}
\section{Individual Honeypot Topology}
\section{Threat Model}
\section{Target and Decoy Model}
\section{Target Model}
\subsection{Purpose}
\subsection{Architecture and Topology}
\subsection{Training, Testing and Validation}
\newpage
\section{Attracting The Adversary}
\subsection{Adversarial Tokens}
\subsection{Weak TCP/IP ports}
\subsection{Decoy Target Model}
\section{Detecting Malicious Behavior}
\section{Monitoring the Adversary}
\section{Launching The Attack}
\section{Defending Against Attack}
\section{Deployment}
\section{Scalability}
\section{Security}
\section{Mathematical models of virus dynamics}
The basic model of virus dynamics can be written as follows \cite{nowak-may, perelson02, bonhoeffer97}:
\begin{align*}
\frac{\dif T}{\dif t} &= -\beta T V \\
\frac{\dif I}{\dif t} &= \beta T V - \delta I \label{basic}\\
\frac{\dif V}{\dif t} &= p I - c V\ .\nonumber
\end{align*}
where $T$ denotes the density of uninfected target cells, $I$ the density of infected cells and $V$ the free virus concentration; $\beta$ is the infection rate, $\delta$ the death rate of infected cells, $p$ the per-cell viral production rate and $c$ the viral clearance rate.
\section{Shape of the viral titer curve}
The virus spreads and kills everything. This is well illustrated in Figure \ref{kinetics}, where the kinetics of the infection are shown for three different viral production rates of the secondary cell population, for the case where these cells are 1,000-fold harder to infect than cells of the default type.
\begin{figure*}
\begin{center}
\resizebox{0.31\textwidth}{!}{\includegraphics{myfig}}
\resizebox{0.31\textwidth}{!}{\includegraphics{myfig}}
\resizebox{0.31\textwidth}{!}{\includegraphics{myfig}}
\end{center}
\caption[Example figure file]{\textbf{An example illustrating how to include figure files.} I can not only display figures, but I can resize them so they look just right to appear side-by-side. This is how you produce multiple-panel images as a single figure. In addition, if your figure file is generated from some sort of model, you can refer to the table where the data or model parameters are listed. For example, you could say: All parameters are as in Table \ref{params}.}
\label{kinetics}
\end{figure*}
When secondary cells produce only 10-fold more virus than cells of the default type, the infection is mostly limited to the default cell population, as the amount of virus produced is not sufficient for the infection to spread to the secondary cell population. Increasing the production rate to 100-fold more than cells of the default type results in a sufficient amount of virus being produced to sustain a slowly growing infection within the secondary cell population, leading to long-lasting, high levels of viral titer.
Finally, increasing the viral production rate to 1,000-fold more than the default cell type allows the infection to infect and decimate both cell populations rapidly. From these results, we see that there appears to be a relationship between the secondary cells' susceptibility to infection and their viral production rate which leads to a severe and sustained infection. Note that all parameters used to produce our simulations can be found in Table \ref{params}.
\chapter{Evaluation and Discussion}
\chapter{Implementation and Discussion}
\section{Background}
\section{Architecture}
\section{Features}
\section{Functionality}
\section{Usage}
\section{Deployment}
\section{Integration}
\section{Benefits}
\section{Future Work}
\appendix % Optional: only if you need one
\chapter{Summary and Contributions}
\section{Summary}
\section{Discussion}
\section{Contributions}
\section{Future Work}
\section{Conclusion}
\chapter{Appendix A}
\chapter{Appendix B}
blah blah
\subsection{An appendix subsection if required}
blah blah blah.
\chapter{The second appendix chapter}
\section{A section in my second appendix chapter}
blah blah.
\subsection{Just making sure it all works}
blah blah blah.
% This typesets your bibliography
% based on the grad-thesis.bib file
\cleardoublepage
\bibliographystyle{abbrv}
\addcontentsline{toc}{chapter}{Bibliography}
\bibliography{Thesis}%UPDATE THIS!!!!!!!!!!!!!!
\end{document}
{ "alphanum_fraction": 0.798380091, "avg_line_length": 116.8298507463, "ext": "tex", "hexsha": "020cc8eb1126737d43f4f9dbd8fa470f702ba81f", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-04T05:02:50.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-04T05:02:50.000Z", "max_forks_repo_head_hexsha": "7f4984f4d213d9061d95890600b51b60ff226e86", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "FadiSYounis/Thesis", "max_forks_repo_path": "grad-thesis (fadi-MS-7996's conflicted copy 2017-12-11).tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7f4984f4d213d9061d95890600b51b60ff226e86", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "FadiSYounis/Thesis", "max_issues_repo_path": "grad-thesis (fadi-MS-7996's conflicted copy 2017-12-11).tex", "max_line_length": 1981, "max_stars_count": null, "max_stars_repo_head_hexsha": "7f4984f4d213d9061d95890600b51b60ff226e86", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "FadiSYounis/Thesis", "max_stars_repo_path": "grad-thesis (fadi-MS-7996's conflicted copy 2017-12-11).tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8524, "size": 39138 }
\chapter{LHC AND CMS EXPERIMENT}
The Large Hadron Collider~\cite{LHC_refer} is the most powerful hadron collider ever built. The circumference of this circular superconducting collider is 26.7 km and its design collision energy is 14 TeV. In 2012 the LHC operated at 8 TeV, and in 2016 the energy was raised to 13 TeV. There are four collision points on the LHC ring, which host four particle detectors: ALICE, ATLAS, CMS and LHCb. ATLAS and CMS are general purpose detectors designed for high luminosity operation. LHCb focuses on b physics, while ALICE studies lead-lead collisions.
%in the order $L=10^{34} cm^{-2}s^{-1}$%
\section{LHC accelerator}
A sketch of the proton accelerating chain is shown in Figure~\ref{fig:LHC_sketch}. The LHC is the most powerful accelerator in the chain; before the beams are injected into the LHC, a series of acceleration steps are taken. In proton-proton (p-p) operation, protons come from the Duoplasmatron source, which uses an electric field to break hydrogen gas down into protons and electrons; the protons are then accelerated by a 90 kV supply. Leaving the source, the protons are focused and accelerated to 750 keV by the radio frequency quadrupole, and then further accelerated to 50 MeV by the linear accelerator Linac2. The proton synchrotron booster accelerates the protons from 50 MeV to 1.4 GeV and injects them into the proton synchrotron (PS). The PS accelerates the protons to 25 GeV, followed by the super proton synchrotron, which boosts them to 450 GeV. The LHC is the last ring in the accelerating chain and brings the protons to their current final energy of 6.5 TeV per beam. In a normal fill, the LHC ring holds 2808 bunches with approximately $10^{11}$ protons per bunch.
There are thousands of superconducting magnets along the LHC ring to bend and focus the beam. Among them are 1232 main dipoles, which bend the beam with a magnetic field above 8 T. Other types of magnets shape the beam further: a quadrupole magnet, for example, focuses the beam either vertically or horizontally, while the sextupole, octupole and decapole magnets help fine-tune the magnetic field. The radiofrequency (RF) cavities in the LHC are used to accelerate the protons from 450 GeV to 6.5 TeV, to keep the bunches in the beam pipe compact, and to restore the energy lost to synchrotron radiation. Eight RF cavities per beam provide a 16 MV longitudinal oscillating voltage with the 400 MHz superconducting cavity system.
The machine luminosity is an important parameter of the collider. For a process under study, the number of events created per second, $N_{\textrm{event}}$, is given by Equation~\ref{event_number}, in which $L$ is the machine luminosity and $\sigma_{event}$ is the cross section of that process.
\begin{align}\label{event_number}
N_{\textrm{event}}=L\sigma_{event}
\end{align}
The machine luminosity is determined by a number of beam parameters, as shown in Equation~\ref{Lumi_express}.
\begin{align}\label{Lumi_express}
L=\frac{N_{b}^{2}n_{b}f_{rev}\gamma_{r}}{4\pi\epsilon_{n}\beta^{*}}F
\end{align}
Here $N_{b}$ and $n_{b}$ are the number of protons per bunch and the number of bunches per beam, respectively. $f_{rev}$ is the revolution frequency and $\gamma_{r}$ the relativistic gamma factor. $\epsilon_{n}$ is the normalized transverse emittance and $\beta^{*}$ is the amplitude (beta) function of the beam at the collision point, while the geometric factor $F$ describes the reduction of luminosity due to the crossing angle at the interaction point.
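As a quick illustration of Equation~\ref{event_number} (the numbers are chosen purely for illustration), consider a process with a cross section of $\sigma_{event} = 1~\textrm{pb} = 10^{-36}~\textrm{cm}^{2}$ at the design machine luminosity $L = 10^{34}~\textrm{cm}^{-2}\textrm{s}^{-1}$:
\begin{align*}
N_{\textrm{event}} = L\,\sigma_{event} = 10^{34}~\textrm{cm}^{-2}\textrm{s}^{-1} \times 10^{-36}~\textrm{cm}^{2} = 10^{-2}~\textrm{s}^{-1},
\end{align*}
i.e.\ of the order of $10^{5}$ such events in a running year of about $10^{7}$ seconds of collisions. Rare processes therefore demand the highest possible machine luminosity.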
The machine luminosity defined above is also called the instantaneous luminosity, i.e.\ the luminosity delivered per unit time. The integrated luminosity, later referred to simply as luminosity, is the instantaneous luminosity integrated over time.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{chapter3/LHC_chain.jpg}
\caption[A sketch view LHC injection chain and main experiments associated]{A sketch view of the LHC injection chain and the main associated experiments~\cite{Christiane:1260465}.}
\label{fig:LHC_sketch}
\end{figure}
\section{Compact Muon Solenoid}
The Compact Muon Solenoid (CMS) is a general purpose detector. It is a high performance detector designed to observe a broad range of possible new physics produced at the LHC. It covers a wide spectrum of physics studies, such as standard model measurements and searches for supersymmetry and dark matter candidates. The CMS detector is designed to have good muon momentum and position resolution over a large range of energies and angles, good charged particle momentum resolution and high identification efficiency in the inner tracker, good electromagnetic energy and position resolution, high photon and lepton isolation efficiency under high luminosity conditions, and good missing transverse momentum and jet energy resolution. A general view of the CMS detector is shown in Figure~\ref{fig:CMS_sketch}. The detector is composed of a set of sub-detectors arranged in concentric layers from the inside out. The main sub-detectors are the following:
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{chapter3/CMS_detecter.png}
\caption[A sketch view of CMS detector]{A sketch view of the CMS detector~\cite{CMS_experiment}}
\label{fig:CMS_sketch}
\end{figure}
\begin{itemize}
\item The inner tracker consists of two parts, the pixel detector and the silicon strip tracker, which are used to measure the momenta and tracks of charged particles.
\item The electromagnetic calorimeter mainly measures the energy and position of electrons and photons. Other particles deposit some fraction of their energy inside it while passing through.
\item The hadron calorimeter measures the energy of hadrons.
\item The muon detector measures the tracks and momenta of muons.
\end{itemize}
Another outstanding feature of CMS is the superconducting magnet system, which provides a 3.8 T magnetic field. The configuration of the magnet system drives the design and layout of the detector. Besides the sub-detectors listed above, the trigger system is also crucial for the success of the whole program. The trigger system consists of two parts, the hardware-based Level-1 trigger and the software-based high level trigger. The trigger system performs the initial selection of events from the huge flux of collisions, which makes data acquisition and recording possible. The details of the sub-detectors and the other systems mentioned are discussed below.
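A rough, standard back-of-the-envelope relation (quoted here only for illustration, not taken from the CMS design documents) indicates why the strong solenoidal field mentioned above is essential for momentum measurement: a singly charged particle with transverse momentum $p_{T}$ in a magnetic field $B$ follows a helix whose radius of curvature $R$ in the transverse plane satisfies
\begin{align*}
p_{T}~[\textrm{GeV}] \approx 0.3\, B~[\textrm{T}]\, R~[\textrm{m}].
\end{align*}
A 10 GeV track in a 3.8 T field therefore has $R \approx 8.8$ m and is only slightly bent over the roughly one meter lever arm of the inner tracker; the stronger the field, the larger the measurable curvature (sagitta) and the better the momentum resolution.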
In general, the CMS detector is 21.6 m long, 14.6 m in diameter and weighs 12500 tonnes in total. The coordinate system adopted by CMS has its origin at the nominal collision point. The x-axis points towards the center of the LHC ring, the y-axis points vertically upward, and the z-axis points along the beam direction. The azimuthal angle $\phi$ is measured from the x-axis in the x-y plane, the polar angle $\theta$ is measured from the z-axis, and $r$ is the radial coordinate. Another variable, the pseudorapidity $\eta$, defined as $\eta=-\textrm{ln}\big[\textrm{tan}(\theta/2)\big]$, is also frequently used. For a particle with $E\gg m$, the pseudorapidity approximates the rapidity $y=\frac{1}{2}\,\textrm{ln}\big(\frac{E+p_{z}}{E-p_{z}}\big)$, where $p_{z}$ is the longitudinal component of the momentum~\cite{CMS_experiment}.
\subsection{Tracker}
The inner tracking system of CMS is designed to measure the trajectories of charged particles. Efficient and precise measurement is crucial for the reconstruction and identification of particles. The LHC operates with an instantaneous luminosity of the order of $10^{34}~cm^{-2}s^{-1}$, producing on average more than 20 p-p interactions and about 1000 particles per bunch crossing. High granularity and fast response are therefore the primary requirements on the tracker system. To measure the trajectories precisely, the tracker material must interact as little as possible with the incoming particles, limiting effects such as multiple scattering, photon conversion and nuclear interactions. For the long operation period, radiation hardness of the tracker material is also needed. All of these factors drive the design of the CMS tracker system.
The tracker system of CMS surrounds the collision point, has a length of 5.8 m and a diameter of 2.5 m, and covers the range up to pseudorapidity $|\eta|<2.5$. An overview of the tracker layout is shown in Figure~\ref{fig:tracker_sketch}. Three layers of silicon pixel detector modules surround the interaction point, with two additional disks of pixel modules on each side, for a total of 66 million pixels of size $100\times150~\mu m^{2}$ each. The pixel detector is followed by the silicon strip tracker. The strip tracker has four components: the tracker inner barrel (TIB), the tracker inner disks (TID), the tracker outer barrel (TOB) and the tracker endcaps (TEC). The arrangement of the strip tracker components is shown in Figure~\ref{fig:tracker_sketch}; together they consist of 10 layers and about 10 million strips.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{chapter3/Tracker_structure.png}
\caption[The structure of tracker in CMS]{The structure of the tracker in CMS~\cite{CMS_experiment}}
\label{fig:tracker_sketch}
\end{figure}
\subsection{Electromagnetic calorimeter}
The electromagnetic calorimeter (ECAL) in CMS is a hermetic, homogeneous lead tungstate ($\textrm{PbWO}_{4}$) detector. It is composed of two parts: the central barrel (EB), covering the pseudorapidity range $|\eta|<1.479$, and the endcap disks (EE), covering the range $1.479<|\eta|<3.0$. The EB is made up of 61200 $\textrm{PbWO}_{4}$ crystals with a $22\times22~mm^{2}$ front face and a length of 23 cm (25.8 radiation lengths). The EE is made up of 7324 crystals per disk with a $29\times29~mm^{2}$ front face and a length of 22 cm (24.7 radiation lengths). The geometrical configuration of the ECAL is shown in Figure~\ref{fig:ECAL_sketch}.
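The quoted pseudorapidity ranges can be translated into polar angles by inverting the definition $\eta=-\textrm{ln}\big[\textrm{tan}(\theta/2)\big]$ (the following numbers are rounded and given only as an illustration):
\begin{align*}
|\eta| = 1.479 \;\Leftrightarrow\; \theta = 2\arctan\big(e^{-1.479}\big) \approx 25.7^{\circ}, \qquad
|\eta| = 3.0 \;\Leftrightarrow\; \theta \approx 5.7^{\circ},
\end{align*}
so the EB covers polar angles between roughly $25.7^{\circ}$ and $154.3^{\circ}$, while the EE extends the coverage down to about $5.7^{\circ}$ from the beam line.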
The $\textrm{PbWO}_{4}$ crystals used in the ECAL have a high density (8.28 $g/cm^{3}$), a short radiation length (0.89 cm) and a small Moli\`ere radius (2.2 cm). Together with the geometrical parameters quoted above, this gives the ECAL good energy resolution, fast response, fine granularity and high radiation resistance. Photodetectors are placed at the end of the crystals: in the EB avalanche photodiodes (APDs) are used, while vacuum phototriodes (VPTs) are used in the EE. Both types of photodetector perform well in the harsh radiation environment and the 4 T magnetic field. The preshower detector (ES)~\cite{CMS_TDR} sits in front of the EE in the pseudorapidity range $1.653<|\eta|<2.6$. The ES is a sampling detector with silicon strip sensors placed behind a lead radiator to measure the energy and position of incoming particles.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{chapter3/ECAL_transverse.png}
\caption[ECAL geometrical configuration]{ECAL geometrical configuration~\cite{CMS_TDR}}
\label{fig:ECAL_sketch}
\end{figure}
Each half of the EB is composed of 18 supermodules, each of which contains 1700 crystals. The relative energy resolution quoted below was measured with ECAL supermodules directly exposed to an electron beam. As a function of the electron energy it can be expressed as
%spikes
% np scattering in the protective epoxy coating of the APD, and the resulting proton directly ionizing the APD active volume.
\begin{align*}
\frac{\sigma}{E}=\frac{2.8\%}{\sqrt{E/\textrm{GeV}}}\oplus\frac{12\%}{E/\textrm{GeV}}\oplus 0.3\%
\end{align*}
The first term represents the contribution from stochastic factors, such as fluctuations in the number of secondary particles produced; the second term is the noise contribution from the electronics and digitization, while the last is a constant term that covers the remaining contributions~\cite{ECAL_EB_reso}.
\subsection{The hadron calorimeter}
The CMS hadron calorimeter (HCAL) is a hermetic sampling detector, which is important for the measurement of the energy and momentum of hadrons, as well as of the missing energy carried away by non-interacting particles. The HCAL is composed of three parts: the HCAL barrel (HB), the HCAL endcaps (HE) and the forward calorimeter (HF). The mechanical structure of the HCAL is shown in Figure~\ref{fig:HCALL_sketch}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{chapter3/HCAL_sketch.png}
\caption[A longitudinal view of CMS HCAL sub-detector]{A longitudinal view of the CMS HCAL sub-detector~\cite{CMS_experiment}}
\label{fig:HCALL_sketch}
\end{figure}
The HB is composed of several layers of brass absorber plates, between which plastic scintillator tiles are placed. The innermost and outermost absorber layers are made of stainless steel for structural strength. When hadrons hit the absorber, secondary particles are produced in showers. As the showers develop, the alternating layers of scintillator are activated and emit blue-violet light, which is collected as the signal. The HB covers the central pseudorapidity range $|\eta|<1.3$ with individual readout units of size $\Delta \eta\times\Delta\phi=0.087\times 0.087$. Because of the limited space available between the ECAL and the magnet coil, an additional outer calorimeter (HO) is installed to provide sufficient sampling depth in the central region. The HO uses the solenoid coil as an additional absorber, with plastic scintillators placed outside it.
Similar to the HB, the HE is also made of layers of brass absorber and plastic scintillator tiles, and covers the range $1.3<|\eta|<3.0$. The readout units in the HE have the geometry $\Delta \eta\times\Delta\phi=0.17\times 0.17$~\cite{CMS_experiment}. A combined ECAL and HCAL energy resolution~\cite{HCAL_reso}, measured in a test beam with pions, is
\begin{align*}
\frac{\sigma}{E}=\frac{110\%}{\sqrt{E/\textrm{GeV}}}\oplus 0.9\%
\end{align*}
The HF is situated $\pm11$ m from the interaction point and complements the HE by covering the large pseudorapidity range $3.0<|\eta|<5.0$. The HF is made of grooved steel plates with embedded quartz fibers. Charged shower particles generate Cherenkov light, which is collected by the quartz fibers as the signal. Radiation hardness is critical for the operation of the HF.
\subsection{Muon detector}
The CMS muon detector is designed to measure the momentum and charge of muons. Three types of gas detectors are used: drift tube (DT) chambers in the barrel, cathode strip chambers (CSC) in the endcaps, and resistive plate chambers (RPC) in both the barrel and endcap regions.
In the muon detector barrel (MB), DT chambers and RPCs are used, covering the pseudorapidity region $|\eta|<1.2$. The MB is composed of 250 chambers, located in 4 stations inside the magnet return yoke. The yoke is divided into 5 wheels, each of which is composed of 12 sectors. As shown in Figure~\ref{fig:muon_sketch}, the stations are named MB1 to MB4 and are composed of one DT chamber and a varying number of RPCs, depending on the exact location. DT chambers measure the position of incoming muons, which knock electrons off the gas atoms in the chambers; these electrons are collected by a large number of charged wires inside.
% The gas used in DT chambers is a mixture of Ar and $\textrm{CO}_{2}$.
In the muon detector endcaps (ME), 468 CSCs are used, covering the range $0.9<|\eta|<2.4$. The MEs are also composed of 4 stations of chambers in each of the endcap disks. The CSCs are wedge shaped and composed of 6 gas gaps; each gap contains a plane of cathode strips and anode wires running perpendicular to the strips. Incoming muons knock off electrons and create avalanches, which are collected by the positively charged wires.
Both DT chambers and CSCs measure the position of muons and trigger on their $\pt$. To better cope with the high luminosity and improve the $\pt$ resolution of the triggered muons, a dedicated trigger system, the RPCs, is added to both the MB and the ME. The RPCs are double-gap bakelite chambers which operate with the avalanches caused by traversing muons. The RPCs provide additional fast triggering with sharp $\pt$ thresholds. Four RPC layers are used in the first two MB stations (two each) and another two layers (one each) in the last two stations. In the ME, there is one RPC layer in each of the first three stations.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{chapter3/Muon_chambers.png}
\caption[A overview of muon chamber configuration in CMS]{An overview of the muon chamber configuration in CMS~\cite{Muon_chambers}}
\label{fig:muon_sketch}
\end{figure}
\subsection{Trigger}
The LHC is a high luminosity collider with an instantaneous luminosity of the order of $10^{34}~\textrm{cm}^{-2}\textrm{s}^{-1}$. In p-p operation, the bunch crossing time is 25 ns, corresponding to a frequency of 40 MHz, and each bunch crossing produces approximately 20 collisions.
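A rough estimate shows the scale of the problem (the event size used here is an illustrative order of magnitude rather than an exact CMS figure): at a bunch crossing rate of 40 MHz and a raw event size of order 1 MB, the data rate would be
\begin{align*}
40\times 10^{6}~\textrm{s}^{-1} \times 1~\textrm{MB} \approx 40~\textrm{TB/s},
\end{align*}
which is far beyond what can be transferred to permanent storage.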
Recording every collision is therefore impossible, and it is also unnecessary, since most of the events are not of interest for current physics analyses. CMS uses a two-level trigger system to select events and reduce the recording rate: the Level-1 (L1) trigger and the High Level Trigger (HLT).
The L1 trigger system is based on custom-designed, programmable electronics situated inside the detector. It utilizes information from the calorimeters and the muon system, performing simplified but effective reconstruction, corrections and selections. The L1 trigger reduces the incoming event rate from 40 MHz to 100 kHz. For LHC Run II, the L1 trigger system was upgraded to cope with the increasing luminosity and to improve its performance.
The L1 calorimeter trigger accesses the information from the whole ECAL and HCAL at the granularity of trigger towers, which correspond approximately to regions of $0.087\times0.087$ in $\eta$ and $\phi$. It reconstructs $\Pe/\gamma$, jet and $\tau$ candidates as well as energy sums with algorithms implemented in the time multiplexed trigger architecture~\cite{TMT_trigger}. The algorithms are implemented at the hardware trigger level, with dynamic clustering of trigger towers, pile-up mitigation and improved tau and jet reconstruction, using various look-up tables for calibrations and corrections~\cite{L1_Egamma}.
The L1 muon trigger system makes full use of all three muon detector types in the track reconstruction. Based on the detector geometry, the muon track reconstruction is divided into three regions: the barrel, the overlap and the endcap. Through dedicated reconstruction algorithms, tracks are built and various muon quality criteria are calculated; this information is used in the triggering process~\cite{L1muontrigger}.
The HLT further reduces the event rate from 100 kHz (after the L1 trigger selection) down to about 1 kHz. The HLT is a software-based trigger system and runs a streamlined version of the CMS software for event reconstruction on a large computer farm~\cite{CMS_trigger_RUNII}. Maximizing the trigger efficiency while keeping the CPU time acceptable is crucial for the HLT. The HLT has access to the full granularity of all CMS sub-detectors, including the tracker system. The algorithms used in the HLT are very close to those used in the offline reconstruction, apart from some parameter configurations~\cite{CMS_HLT_RunII}.
%\subsubsection{Level 1 Trigger}
%probably I have to put something there2
%\subsubsection{High Level Trigger}
{ "alphanum_fraction": 0.793025928, "avg_line_length": 119.1488095238, "ext": "tex", "hexsha": "36c6bf6b8b9a0bb86608b074c46c6ab8dec9e09f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "974c9f324f46d225e3f81962ca2b911505bbbbf1", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "fanbomeng/Thesis", "max_forks_repo_path": "chapters/chapter3/chapter3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "974c9f324f46d225e3f81962ca2b911505bbbbf1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "fanbomeng/Thesis", "max_issues_repo_path": "chapters/chapter3/chapter3.tex", "max_line_length": 1509, "max_stars_count": null, "max_stars_repo_head_hexsha": "974c9f324f46d225e3f81962ca2b911505bbbbf1", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "fanbomeng/Thesis", "max_stars_repo_path": "chapters/chapter3/chapter3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4836, "size": 20017 }
\documentclass{article}
\title{Differential Geometry Notes}
\author{Lucas Simon}
\date{\copyright\ 2015 Markus Pflaum, All Rights Reserved}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{hyperref}
\usepackage{tikz}
\usepackage{tikz-cd}
\usepackage{mathrsfs}
\usepackage[all]{xy}
\newcommand{\cat}[1]{\textbf{#1}}
\newcommand\Lie{\mathcal{L}}
%\newcommand{\det}[1]{\text{det}(#1 )}
\begin{document}
\maketitle
\section{Notice}
These notes contain errors. Please put an issue on github or just fork the repository and make the changes yourself.
\section{March 6, 2015}
Suppose $f:M \to N$ is a smooth submersion and $q \in N$. The claim was that $f^{-1}(q) \subset M$ is a submanifold. We have to find charts. Let $p \in f^{-1}(q)$. By the rank theorem there are charts $(U, x)$ around $p$ and $(V,y)$ around $q$ such that $x(p) = 0$, $y(q) = 0$, and
\[ y \circ f \circ x^{-1}(v)=(v_1, \ldots, v_n) \]
for $v$ in a neighborhood of the origin in $\mathbb{R}^m$. The chart we are looking for on $f^{-1}(q)$ is $(x_{n+1}, \ldots, x_m)$: indeed, $f^{-1}(q) \cap U = (x_{1}, \ldots, x_{n})^{-1}(0)$, and $(x_{n+1}, \ldots, x_m)$ restricts to a chart on this set.

\textbf{Theorem (Ehresmann):} A proper surjective submersion is a locally trivial fiber bundle, with typical fiber $f^{-1}(q)$ over each connected component of the base.

\textbf{Orientation:} Let $M$ be a smooth connected $m$-manifold, and let
\[ \Lambda^mT^*M \setminus 0_M := \{ \omega_p \in \Lambda^mT^*_pM \mid p \in M \text{ and } \omega_p \neq 0\} \subset \Lambda^m T^*M \]
denote the complement of the zero section; these are the forms which are nonzero in their fiber. Note that $\dim\Lambda^m T^*_pM = 1$: this is the space of determinant functions on $T_pM$.

A manifold $M$ is called \textbf{orientable} if $\Lambda^m T^*M \setminus 0_M$ has exactly two connected components. An \textbf{orientation} of an orientable smooth manifold $M$ is a choice of a component of $\Lambda^mT^*M \setminus 0_M$. Here the zero section of a vector bundle $\pi:V \to M$ is the map $M \to V$, $p \mapsto 0_p \in V_p$, and $0_M$ denotes the zero section of $\Lambda^mT^*M$, or better its image.

The tangent bundle, recap: take $TM$ of some smooth manifold $M$, and let $(U, x)$ and $(V,y)$ be smooth charts such that $U \cap V \neq \varnothing$. Then $y \circ x^{-1}|_{x(U \cap V)}:x(U \cap V) \to y(U \cap V)$ is a smooth transition map of $M$. The induced map on trivializations is
\[ x(U \cap V)\times \mathbb{R}^m \to y(U \cap V)\times \mathbb{R}^m, \qquad (p,v) \mapsto \big(y \circ x^{-1}(p),\, D(y \circ x^{-1})(p)v\big). \]

\textbf{Counterexample:} the M\"obius band is not orientable.

\textbf{Example:} Chiral molecules have a well-defined handedness; this is similar in spirit to an orientation, but not the same notion.

\textbf{Theorem:} Let $M$ be a connected smooth $m$-manifold. Then the following are equivalent:

(1) $M$ is orientable

(2) There is an atlas $\mathscr{A}$ of $M$ such that $\det\big(D(x \circ y^{-1})(y(p))\big) > 0$ for each $(U,x), (V,y) \in \mathscr{A}$ and $p \in U \cap V$

(3) There is a nowhere vanishing $m$-form $\omega \in \Omega^m(M)$.

Proof: $(1) \Rightarrow (2)$ Let $\Lambda$ be an orientation, so that $\Lambda \cap (\Lambda^mT^*_pM \setminus 0_p)$ is a component of $\Lambda^mT^*_pM \setminus 0_p$ for each $p \in M$. Define $\mathscr{A}$ to be the set of all charts $(U,x)$ of $M$ such that $dx_1 \wedge \cdots \wedge dx_m(p) \in \Lambda$ for all $p \in U$. Assume also that each $U$ is connected. It remains to check that the transition functions of $\mathscr{A}$ have positive Jacobian determinant.
Let $(V,y)$ be a second chart from $\mathscr{A}$ such that $p \in U \cap V$. Then
\[ dx_1 \wedge \cdots \wedge dx_m |_p = \det\Big(\frac{\partial x_k}{\partial y_j}\Big)(p)\, dy_1 \wedge \cdots \wedge dy_m|_p . \]
Since both forms lie in $\Lambda$, we must have $\det\big(\frac{\partial x_k}{\partial y_j}\big)(p) = \det\big(D(x \circ y^{-1})(y(p))\big) > 0$; hence the transition functions have positive Jacobian determinant.
\section{March 9, 2015}
\textbf{Theorem} Let $M$ be a connected manifold. The following are equivalent:

(1) $M$ is orientable

(2) There exists an atlas $\mathscr{A}$ of $M$ such that the determinant of $D(x \circ y^{-1})(y(p))$ is positive for all $(U,x), (V,y) \in \mathscr{A}$ and $p \in U \cap V$.

(3) There is a nowhere vanishing $\omega \in \Omega^m(M)$ with $m = \text{dim}(M)$.
\begin{proof}
$(2) \Rightarrow (3)$. Under the hypotheses of $(2)$, choose a smooth partition of unity $(\phi_i)_{i \in \mathbb{N}}$ of $M$ subordinate to $\mathscr{A}$; that is, for each $i \in \mathbb{N}$ there is $(U_i, x^{(i)}) \in \mathscr{A}$ such that $\text{supp}(\phi_i) \subset U_i$ is relatively compact, i.e.\ its closure is compact and contained in $U_i$. (Moreover $\sum_i \phi_i = 1$ and $(\text{supp}(\phi_i))_i$ is locally finite.) Put $\omega := \sum_{i \in \mathbb{N}} \phi_i \cdot dx_1^{(i)} \wedge \cdots \wedge dx_m^{(i)}$. If $p \in U^{(i)} \cap U^{(j)}$, then $dx_1^{(i)} \wedge \cdots \wedge dx_m^{(i)} = \lambda_{ij}\, dx_1^{(j)}\wedge \cdots \wedge dx_m^{(j)}$ at $p$, where $\lambda_{ij}(p) = \det\big(D(x^{(i)} \circ x^{(j)-1}) (x^{(j)}(p))\big) > 0$. Then $\omega(p) = \big(\sum_{i \in \mathbb{N}} \phi_i(p) \lambda_{ij}(p)\big)\, dx_1^{(j)}\wedge \cdots \wedge dx_m^{(j)}(p)$. Since every nonzero term of this sum is positive and the $\phi_i$ sum to $1$, the coefficient is positive, so $\omega(p) \neq 0$.

$(3) \Rightarrow (1)$. Under the hypotheses of (3), there is an $\omega \in \Omega^m(M)$ with $\omega(p)$ nonzero for all $p \in M$. Then $\Lambda^mT^*M \setminus 0_M$ is the union of $\Lambda^+ := \{ \rho \in \Lambda^m T^*M : \rho = \lambda \cdot \omega_{\pi(\rho)} \text{ for some } \lambda > 0 \}$ and $\Lambda^- := \{ \rho \in \Lambda^m T^*M : \rho = \lambda \cdot \omega_{\pi(\rho)} \text{ for some } \lambda < 0 \}$, and $\Lambda^+ \cap \Lambda^- = \varnothing$. It remains to show that $\Lambda^+$ and $\Lambda^-$ are path connected. Take $(p, \rho)$ and $(q, \tau)$ in $\Lambda^+$. Writing $\rho = \lambda\,\omega(p)$ with $\lambda > 0$, we may connect $\rho$ to $\omega(p)$ within the fiber by the path $\gamma(t) = \big(\lambda(1-t) + t\big)\,\omega(p)$, $t \in [0,1]$. Then $\omega(p)$ may be connected by a path to $\omega(q)$: since $M$ is connected, take a path $\tilde{\gamma}$ with $\tilde{\gamma}(0) = p$ and $\tilde{\gamma}(1) = q$ and put $\gamma(t) = \omega(\tilde{\gamma}(t))$. The same argument applies to $\Lambda^-$.
\end{proof}
The point of orientations is that they allow us to integrate on $M$.

Reminder (transformation formula): if $D \subset \mathbb{R}^n$ is open and bounded, $\phi: D \to \tilde{D} \subset \mathbb{R}^n$ is a diffeomorphism, $A \subset D$, and $f: \phi(D) \to \mathbb{R}$ is a continuous function, then
\[ \int_{\phi(A)}f = \int_{A}f \circ \phi\; |\det(D\phi)| . \]
(See also Tu, p.\ 264.)

Assume $\omega \in \Omega^n(\tilde{D})$. Then $\omega = f\, dx_1 \wedge \cdots \wedge dx_n$ for some $f \in C^\infty(\tilde{D})$. Put $\tilde{A} = \phi(A)$, and define
\[ \int_{\tilde{A}}\omega := \int_{\tilde{A}}f . \]
Now consider $\phi^*(\omega) \in \Omega^n(D)$. Then
\[ \phi^*(f\, dx_1 \wedge \cdots \wedge dx_n) = (f \circ \phi)\cdot \det(D\phi) \cdot dx_1 \wedge \cdots \wedge dx_n . \]
Let $M$ be an oriented manifold and $\mathscr{A}$ an oriented atlas. Choose a partition of unity $(\phi_i)_{i \in \mathbb{N}}$ subordinate to $\mathscr{A}$.
For each $\omega \in \Omega^m_c(M)$, put
\[
\int_M \omega = \sum_{i \in \mathbb{N}} \int_{\tilde{U}_i} (x_i)_*(\phi_i\,\omega).
\]
Prove this is independent of the atlas.

\section{March 11, 2015}
\textbf{Notations:}
(1) Denote by $\mathbb{H}^n$ the upper half-space, which is the set $\{ (x_1, \ldots, x_n) \in \mathbb{R}^n \mid x_1 \geq 0 \}$.
(2) The interior of a manifold with boundary is denoted $M^\circ = M \sim \partial M$.

\textbf{Definition:} A \textbf{manifold with boundary} $M$ is a topological space which is Hausdorff and second countable, subject to the following conditions:
(1) A chart of $M$ in $\mathbb{H}^n$ is a homeomorphism $x: U \subset M \to \hat{U} \subset \mathbb{H}^n$ where $U, \hat{U}$ are open.
(2) Two charts $(U, x), (V,y)$ of $M$ in $\mathbb{H}^n$ are called $C^\infty$-compatible charts if $x \circ y^{-1}|_{y(U \cap V)}: y(U \cap V) \to x(U \cap V)$ is a $C^\infty$-diffeomorphism.
(3) An atlas of $M$ in $\mathbb{H}^n$ consists of a set of $C^\infty$-compatible charts in $\mathbb{H}^n$ which cover $M$.
(4) A maximal atlas $\mathscr{A}$ of $M$ in $\mathbb{H}^n$ is an atlas which is not properly contained in any larger one; $M$ is assumed to carry such a maximal atlas.

\textbf{Definition:} Let $M$ be a manifold with boundary. Define $\partial M \subset M$ as the set of points $p \in M$ such that there is a chart $(U, x)$ around $p$ with $x(p) = (0, x_2(p), \ldots, x_n(p))$, i.e.\ $p \in x^{-1}(\{ 0 \}\times \mathbb{R}^{n-1})$.

\textbf{Observations:}
(1) $\partial M$ is a manifold of dimension $n-1$. Its atlas is given by charts $(U\cap \partial M, \bar{x}|_{U \cap \partial M})$ where $(U,x) \in \mathscr{A}$, with $\bar{x}(p) = (x_2(p), \ldots, x_n(p))$ for $p \in U \cap \partial M$ (from $(0, x_2(p), \ldots, x_n(p))$). The resulting transition functions $\bar{x} \circ \bar{y}^{-1}: \bar{y}(U \cap V \cap \partial M) \to \bar{x}(U \cap V \cap \partial M)$ are diffeomorphisms.
(2) The tangent spaces of the interior are obvious. On the boundary, using curves ends up being very technical. The space-of-derivations definition gives a more convenient definition of the tangent space on the boundary. So $T_pM = \text{Der}(C^\infty_p, \mathbb{R})$ for $p \in \partial M$. Then the tangent space is spanned by $\{ \frac{\partial}{\partial x_1}|_p, \ldots, \frac{\partial}{\partial x_n}|_p \}$.
(3) Orientation is defined in the same way. Notice that the boundary has an induced orientation; we get this from the outward-pointing-normal-first convention (see the proof of Stokes' theorem below).

\textbf{Theorem: (Stokes)} Let $M$ be a compact oriented $m$-manifold with boundary. Then for each $\omega \in \Omega^{m-1}(M)$,
\[
\int_{\partial M} \omega|_{\partial M} = \int_M d\omega,
\]
where $\partial M$ has the induced orientation.
\begin{proof}
Recall the fundamental theorem of calculus:
\begin{align*}
\int_0^A \frac{\partial}{\partial s} f(s, t_2, \ldots, t_m)\, ds & = f(A, t_2, \ldots, t_m) - f(0, t_2, \ldots, t_m) \\
& = \int_{\{ A \}} f(t, t_2, \ldots, t_m)\, dt - \int_{\{ 0 \}}f(t, t_2, \ldots, t_m)\,dt. \\
\end{align*}
Now, let $Q \subset \mathbb{H}^n$ be a cube; that is, $Q = [a_1, b_1] \times \cdots \times [a_n, b_n]$ with $a_1 \geq 0$ and $a_2,\ldots, a_n \in \mathbb{R}$, $b_i > a_i$ for all $i \in \{ 1, \ldots, n \}$. Let $\omega \in \Omega^{n-1}(Q)$ with the support of $\omega$ compactly contained in $Q$. Locally, we may represent $\omega$ as $\sum_{i=1}^n \omega_i\, dx_1 \wedge \cdots \wedge \hat{dx_i} \wedge \cdots \wedge dx_n$ for $\omega_i \in C^\infty(Q)$.
Then
\begin{align*}
\int_Q d\omega & = \sum_i (-1)^{i-1} \int_Q \frac{\partial \omega_i}{\partial x_i}\,dx_1 \wedge \cdots \wedge dx_n \\
& = \sum_i (-1)^{i-1} \int_{Q_i}\Big(\int_{a_i}^{b_i} \frac{\partial \omega_i}{\partial x_i}\, dx_i\Big)\, dx_1 \cdots \hat{dx_i} \cdots dx_n,
\end{align*}
where $Q_i$ denotes the cube with the $i$-th factor omitted; the computation is completed in the next lecture.
\end{proof}

\section{Friday, March 13}
We have $Q = [A_1, B_1] \times \cdots \times [A_n, B_n] \subset \mathbb{R}^n$, $\omega \in \Omega^{n-1}(Q)$, and
\[
\text{supp}(\omega) \subset \subset
\begin{cases}
(A_1, B_1) \times \cdots \times (A_n, B_n) & A_1 > 0, \\
[0, B_1) \times (A_2, B_2) \times \cdots \times (A_n, B_n) & A_1 = 0.
\end{cases}
\]
$\omega = \sum_i \omega_i\, dx_1 \wedge \cdots \wedge \hat{dx_i} \wedge \cdots \wedge dx_n$, $d\omega = \sum_i (-1)^{i-1}\frac{\partial \omega_i}{\partial x_i}\, dx_1 \wedge \cdots \wedge dx_n$, and
\begin{align*}
\int_Q d \omega & = \sum_i (-1)^{i-1}\int_{Q_i}\Big(\int_{A_i}^{B_i} \frac{\partial \omega_i}{\partial x_i}\, dx_i \Big)\, dx_1 \wedge \cdots \wedge \hat{dx_i} \wedge \cdots \wedge dx_n\\
& = -\int_{Q_1} \omega_1\, dx_2 \wedge \cdots \wedge dx_n,
\end{align*}
where $Q_i = [A_1, B_1] \times \cdots \times \hat{[A_i, B_i]} \times \cdots \times [A_n, B_n]$, because for $i \geq 2$
\[
\int_{A_i}^{B_i} \frac{\partial \omega_i}{\partial x_i}\,dx_i = \omega_i(B_i) - \omega_i(A_i) = 0 - 0 = 0.
\]
For the boundary, we have
\[
\int_{\partial Q} \omega = \int_{Q_1} \omega = \int_{Q_1} -\omega_1\, d\tilde{x}_2 \wedge \cdots \wedge d\tilde{x}_n.
\]
We want an outward pointing orientation. Notice $\frac{\partial}{\partial x_1}$ points inward to $M$ (with respect to $Q$), but we want to orient $Q$, resp.\ $M$, such that
\[
-\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n}
\]
is a positively oriented frame along the boundary. Notice $\partial Q = Q_1 \cup \bigcup_{l \geq 2} Q_l$ (identifying each $Q_l$ with the corresponding faces), and $\omega$ vanishes near every face except possibly $Q_1$.

Now we choose an oriented atlas $\mathscr{A}$ of $M$; after passing to a finite atlas, we can assume that $\tilde{U} \subset \mathbb{R}^n$ for $(U,x) \in \mathscr{A}$ has the form $[A_1, B_1] \times \cdots \times[A_n, B_n] \subset \mathbb{H}^n$ with $x: U \to \tilde{U} \subset \mathbb{H}^n$, and that $\mathscr{A}$ is countable. Choose a smooth partition of unity $(\phi_{(U,x)})_{(U,x) \in \mathscr{A}}$ subordinate to $\mathscr{A}$ with $\text{supp}(\phi_{(U,x)}) \subset \subset U$, so that $\text{supp}\big(x_*(\phi_{(U,x)} \omega)\big) \subset \subset \tilde{U} \subset \mathbb{H}^n$, and
\begin{align*}
\int_M d\omega &= \sum_{(U,x) \in \mathscr{A}} \int_{\tilde{U}}d\big(x_*(\phi_{(U,x)} \omega)\big) \\
&= \sum_{(U,x) \in \mathscr{A}} \int_{\partial \tilde{U}} x_*(\phi_{(U,x)}\omega)\\
&= \sum_{(U,x) \in \mathscr{A}} \int_{\partial \tilde{U}} x_*\big(\phi_{(U,x)}\omega|_{\partial M}\big) \\
&= \int_{\partial M} \omega.
\end{align*}
\textbf{Corollary:} If $M$ is a closed manifold (compact and without boundary), then
\[
\int_M d \omega = 0
\]
for $\omega \in \Omega^{\dim M - 1}(M)$.

\textbf{Integration of Vector Fields}
Let $\xi: M \to TM$ be a $C^\infty$ vector field on a smooth manifold $M$. A curve $\gamma: I \to M$, $I = (a,b) \subset \mathbb{R}$, is called an integral curve of $\xi$ if $\xi(\gamma(t)) \in T_{\gamma(t)}M$ is equal to $\dot{\gamma}(t)$ for all $t \in I$.

\textbf{Question:}

\section{March 16, 2015}
\textbf{Integral Curves:} Let $\xi: M \to TM$ be a smooth vector field on a manifold $M$. By an integral curve of $\xi$, one understands a smooth map $\gamma: I \to M$, with $I \subset \mathbb{R}$ an open interval, such that
\[
\dot{\gamma}(t) = \xi(\gamma(t)) \text{ for all } t \in I.
\]
\textbf{Observations:}
(1) For each $p \in M$, there exists an open interval $I \subset \mathbb{R}$ containing $0$, and a smooth integral curve $\gamma: I \to M$ of $\xi$ such that $\gamma(0) = p$.
\begin{proof}
Choose coordinates $(U,x)$ of $M$ around $p$, and then consider the following ordinary differential equation:
\begin{align*}
\dot{c}(t) = F(c(t)) & & c(0) = x(p)
\end{align*}
where $F := (pr_2 \circ Tx \circ \xi \circ x^{-1}): \hat{U} \to \mathbb{R}^n$. By existence and uniqueness (the Picard--Lindel\"of theorem), there exists $c:(-\varepsilon, \varepsilon) \to \hat{U}$ such that the initial value problem is satisfied. We put $\gamma := x^{-1} \circ c: (-\varepsilon, \varepsilon) \to M$; then $\gamma(0) = p$ and $\dot{\gamma}(t) = (Tx)^{-1}(c(t), \dot{c}(t)) = (Tx)^{-1}(c(t), F(c(t))) = \xi(x^{-1}(c(t))) = \xi(\gamma(t))$. (Note that this holds true in Banach manifolds.)
\end{proof}
(2) If $\gamma_1, \gamma_2$ are integral curves of $\xi$ with $\gamma_1(0) = \gamma_2(0) = p$, then $\gamma_1|_{I_1 \cap I_2} = \gamma_2|_{I_1 \cap I_2}$. (Note $I_1 \cap I_2$ is nonempty since both intervals contain $0$.)
\begin{proof}
Let $K = \{ t \in I_1 \cap I_2 : \gamma_1(t) = \gamma_2(t) \}$. We have $K = (\gamma_1, \gamma_2)^{-1}(\Delta_M)$ (note $(\gamma_1, \gamma_2): I_1 \cap I_2 \to M \times M$). By continuity of $\gamma_1$ and $\gamma_2$ and $M$ being Hausdorff, $K$ is closed in $I_1 \cap I_2$; moreover $I_1 \cap I_2$ is an open interval around the origin, hence connected. Let $t \in K$. Consider $\tilde{\gamma_1}: I_1 - t \to M$ and $\tilde{\gamma_2}: I_2 - t \to M$ where $\tilde{\gamma_i}(s) = \gamma_i(s+t)$. So $\tilde{\gamma_1}(0) = \gamma_1(t) = \gamma_2(t) = \tilde{\gamma_2}(0)$, and $\dot{\tilde{\gamma_i}}(s) = \dot{\gamma_i}(s+t) = \xi(\gamma_i(s + t)) = \xi(\tilde{\gamma_i}(s))$. By local uniqueness of the initial value problem, there exists an $\varepsilon$ such that $\tilde{\gamma_1}(s) = \tilde{\gamma_2}(s)$ for $s \in (-\varepsilon, \varepsilon)$. Hence $\gamma_1$ and $\gamma_2$ agree on an $\varepsilon$-neighborhood of $t$, so $K$ is open. Being open, closed, and nonempty in the connected set $I_1 \cap I_2$, we conclude $K = I_1 \cap I_2$.
\end{proof}
(3) For each $p \in M$, let $I_p = (t_p^-, t_p^+)$, with $t_p^- < t_p^+$ and $t_p^-, t_p^+ \in \mathbb{R} \cup \{ \pm \infty \}$, be the union of all intervals $I$ such that there exists an integral curve $\gamma : I \to M$ of $\xi$ with $\gamma(0) = p$. Define $\gamma_p: I_p \to M$ by $t \mapsto \gamma(t)$, where $t \in I$ for some such integral curve $\gamma:I \to M$. (If $M$ is compact, then $I_p = \mathbb{R}$; a counterexample otherwise is the plane with a point removed, carrying a constant vector field pointing upwards.) Now put $\mathcal{D} = \bigcup_{p \in M}I_p \times \{ p \} \subset \mathbb{R} \times M$, and $\phi:\mathcal{D} \to M$, $(t,p) \mapsto \gamma_p(t)$. Then $\phi$ is called the flow of the vector field $\xi$. It has the following nice properties:
(a) $\mathcal{D} \subset \mathbb{R} \times M$ is open.
(b) $\text{dom}(\phi_t \circ \phi_s) \subset \text{dom}(\phi_{t+s})$, where $\phi_t$ is the map $p \mapsto \phi(t,p)$ on its domain.
(c) $\phi_{t+ s}(p) = \phi_t \circ \phi_s(p)$ for $p \in \text{dom}(\phi_t \circ \phi_s)$.
(d) $\phi_t: \mathcal{D}_t \to \mathcal{D}_{-t}$ is a diffeomorphism with inverse $\phi_{-t}$.

\section{18 March, 2015 (Wednesday)}
\textbf{Banach Fixed Point Theorem:} A contraction (a Lipschitz map with constant $< 1$) of a nonempty complete metric space into itself has a unique fixed point.

\textbf{Proposition:} Let $J$ be an open interval containing $0$, $U$ an open set of a Banach space $\mathbb{E}$, and $x_0 \in \mathbb{E}$. Let $a \in (0,1)$ be such that the closed ball $\bar{B}_{3a}(x_0) \subset U$. Assume that $f: J \times U \to \mathbb{E}$ is a continuous map, bounded by a constant $L \geq 1$, and satisfying on $U$, uniformly with respect to $J$, a Lipschitz condition with Lipschitz constant $K \geq 1$.
That is, $\| f(t,x) - f(t,y) \| \leq K \| x - y \|$ for all $t \in J$ and $x,y \in U$. If $b < \frac{a}{LK}$, then there exists a unique local flow $\phi: J_b \times B_a(x_0) \to U$; that is, $\frac{d}{dt}\phi(t,x) = f(t, \phi(t,x))$ and $\phi(0,x) = x$ for each $x \in B_a(x_0)$.

Let $I_b = [-b,b]$, and let $x$ be fixed in $\bar{B}_a(x_0)$. Let $M$ be the set of continuous maps
\[
\alpha: I_b \to \bar{B}_{2a}(x_0).
\]
We have that $M$ is a complete metric space with distance given by the sup-norm. Define
\begin{align*}
S: M \to M && S\alpha(t) = x + \int_0^t f(u,\alpha(u))\,du.
\end{align*}
One checks that $S$ fulfills a Lipschitz condition with Lipschitz constant $L_x < 1$, which implies that there exists a unique fixed point by the Banach Fixed Point Theorem. Call this $\phi_x \in M$, with $S\phi_x = \phi_x$. By the fundamental theorem of calculus, $\phi_x(t) = x + \int_0^t f(u, \phi_x(u))\,du$ is differentiable; that is, $\dot{\phi_x}(t) = f(t,\phi_x(t))$ with $\phi_x(0) = x$. If $f$ is $C^k$ for $k \in \mathbb{N}^* \cup \{ +\infty \}$, then $\phi$ is $C^k$. Look at Lang, \emph{Differentiable Manifolds}, for the full proof.

Last lecture we had $\phi: \mathcal{D} \to M$, $(t,p) \mapsto \gamma_p(t)$. Then $\phi$ has the following properties:
(1) $\mathcal{D}\subset \mathbb{R} \times M$ is open.
(2) $\text{dom}(\phi_s \circ \phi_t) \subset \text{dom}(\phi_{s + t})$, where $\phi_t$ is defined on $\mathcal{D}_t := \{ p \in M : (t,p) \in \mathcal{D} \} = \text{dom}(\phi_t)$ by $p \mapsto \phi(t,p)$.
(3) We also have $\phi_{t+s}(p) = \phi_t \circ \phi_s(p)$ for $p \in \text{dom}(\phi_t \circ \phi_s)$.
(4) $\phi_t: \mathcal{D}_t \to \mathcal{D}_{-t}$ is a diffeomorphism with inverse $\phi_{-t}$.
\begin{proof}
(a) This is the local flow theorem from Lang.
(b) Let $s \in (t_-(p),t_+(p))$. Then $t \mapsto \gamma_p(s+t)$ is an integral curve of $\xi$ with value $\gamma_p(s)$ at $t = 0$, and it has maximal domain $(t_-(p) - s,t_+(p) - s) = (t_-(\gamma_p(s)), t_+(\gamma_p(s)))$. Now $p \in \text{dom}(\phi_t \circ \phi_s) \Rightarrow p \in \text{dom}(\phi_s) \Rightarrow s \in (t_-(p), t_+(p))$ and $t \in (t_-(\gamma_p(s)),t_+(\gamma_p(s))) \Rightarrow t+s \in (t_-(p), t_+(p))$.
\end{proof}

\section{March 20, 2015 (Friday)}
\textbf{Lie Derivatives:} We want to take derivatives of vector fields $\xi: M \to TM$; differentiating directly gives the tangent map $T\xi: TM \to T(TM)$. Assume $W: M \to TM$ is a second vector field. We want to define a derivative of $\xi$ with respect to $W$.

\textbf{Lie Derivative:} Looking at the flow of $W$, $\phi:\mathcal{D} \to M$, we set
\[
\Lie_W \xi(p) := \lim_{t \to 0} \frac{T\phi_{-t}(\xi_{\phi_t(p)}) - \xi_p}{t} = \frac{d}{dt}T\phi_{-t}(\xi_{\phi_t(p)})\Big|_{t=0}.
\]
Notice that the limit exists in coordinates since all the functions are smooth. The map $\Lie_W$ is called the Lie derivative.\\
\textbf{Observations:}
(1) $\Lie_W f = W(f)$
(2) $\Lie_W \xi = [W, \xi]$
(3) $\Lie_W$ is $\mathbb{R}$-linear in $W$, but not $C^{\infty}(M)$-linear (i.e.\ not tensorial in $W$).
(4) $\Lie_W: \Omega^\bullet(M) \to \Omega^\bullet(M)$ commutes with $d$.
(5) $\Lie_W(\omega \wedge \rho) = \Lie_W\omega \wedge \rho + \omega \wedge \Lie_W \rho$
(6) \textbf{Cartan's Magical Formula:} $\Lie_W\omega = i_Wd\omega + di_W \omega$ for $\omega \in \Omega^k(M)$, where $i_W\omega \in \Omega^{k-1}(M)$ is defined by $i_W\omega(Y_1, \ldots, Y_{k-1}) = \omega(W, Y_1, \ldots, Y_{k-1})$ (useful for proving the Poincar\'e lemma).
\begin{proof}
(1) $\Lie_Wf(p) = \frac{d}{dt}(\phi_t^*f)(p)\big|_{t=0} = \frac{d}{dt}(f \circ \phi_t(p))\big|_{t = 0} = W_p(f)$.
\\
(2) To see that the bracket is still a vector field, one shows that it is a derivation. \textbf{Exercise:} do this. \\
(3) Omitted. \\
(4) We have $\Lie_W d \omega = \frac{d}{dt}\phi_t^*(d \omega) |_{t=0} = \frac{d}{dt}d(\phi_t^* \omega)|_{t=0} = d(\frac{d}{dt} \phi_t^*\omega)|_{t=0} = d(\Lie_W\omega)$. \\
(5) Same argument as (4). \\
(6) We prove this by induction on $k$. For $k = 0$, $\Lie_Wf = Wf$ and $i_W df + di_Wf = i_Wdf = Wf$. Assume this holds true for $k-1$.
\end{proof}

\section{March 30, 2015 (Monday)}
\textbf{Proposition:} $\mathcal{L}_XY = [X,Y]$
\begin{proof}
Write $X_t$ for the flow of $X$ and $Y_u$ for the flow of $Y$. For $f \in C^\infty(M)$ we have
\begin{align*}
\mathcal{L}_XY(f) & = \Big(\lim_{t \to 0} \frac{TX_{-t}Y_{X_t(m)} - Y_m}{t}\Big)(f) \\
& = \frac{d}{dt}\Big|_{t=0}(TX_{-t}Y_{X_t(m)})(f) \\
& = \frac{d}{dt}\Big|_{t=0}Y_{X_t(m)}(f \circ X_{-t}).
\end{align*}
Consider the auxiliary function $H(t,u) = f(X_{-t}(Y_u(X_t(m))))$ for $(t,u) \in \mathbb{R}^2$ small enough. We have
\begin{align*}
Y_{X_t(m)}(f \circ X_{-t}) = \frac{\partial}{\partial r_2}\Big|_{(t,0)}H(t,r_2),
\end{align*}
so $\mathcal{L}_XY(f) = \frac{\partial^2 H}{\partial r_1 \partial r_2}\big|_{(0,0)}$. Consider another auxiliary function $K(t,u,s) = f(X_s(Y_u(X_t(m))))$; we have $H(t,u) = K(t,u,-t)$. Then
\begin{align*}
\mathcal{L}_XY(f) & = \frac{\partial^2 K}{\partial r_1 \partial r_2}\Big|_{(0,0,0)} - \frac{\partial^2 K}{\partial r_2 \partial r_3}\Big|_{(0,0,0)}, \\
\frac{\partial K}{\partial r_2}\Big|_{(t,0,0)}& = Y_{X_t(m)}f = (Yf)(X_t(m)), \\
\frac{\partial^2 K}{\partial r_1 \partial r_2}\Big|_{(0,0,0)} & = X_m(Yf), \\
\frac{\partial K}{\partial r_3}\Big|_{(0,u,0)} & = (Xf)(Y_u(m)), \\
\frac{\partial^2 K}{\partial r_2 \partial r_3}\Big|_{(0,0,0)} &= Y_m(Xf).
\end{align*}
Hence $\mathcal{L}_XY(f) = X_m(Yf) - Y_m(Xf) = [X,Y]_m(f)$.
\end{proof}

\textbf{Cartan's Magic Formula} $\mathcal{L}_X \omega = i_X d \omega + d i_X \omega$ for $\omega \in \Omega^k(M)$.
\begin{proof}
This proof follows by induction. For $k = 0$,
\[
\mathcal{L}_Xf = Xf = i_X df = i_X df + d i_X f.
\]
Now, for the induction step, note that locally every $k$-form is a sum of terms of the form $df \wedge \omega$ with $\omega \in \Omega^{k-1}(M)$, and compute
\begin{align*}
\mathcal{L}_X(df \wedge \omega) & = \Lie_Xdf \wedge \omega + df \wedge \Lie_X \omega, \\
(i_Xd + di_X)(df \wedge \omega) & = -(i_Xdf)\wedge d\omega + df \wedge i_Xd\omega + d\big((i_Xdf) \wedge \omega - df \wedge i_X \omega\big) \\
& = - (i_Xdf)\wedge d \omega + df \wedge i_X d\omega + d(i_Xdf) \wedge \omega + (i_Xdf)\wedge d \omega + df \wedge di_X\omega \\
& = \Lie_X df \wedge \omega + df \wedge \Lie_X\omega \qquad \text{(see Tu for details)},
\end{align*}
using the induction hypothesis on $\omega$ and $\Lie_X(df) = d(Xf) = d(i_Xdf)$.
\textbf{Exercise:} Show $i_X(\rho \wedge \omega) = i_X \rho \wedge \omega + (-1)^{\deg(\rho)} \rho \wedge i_X \omega$.
\end{proof}

\section{April 1, 2015 (Wednesday)}
\textbf{Andy:} Given a smooth $n$-manifold $M$, a \textbf{Riemannian metric} $g$ is a smooth symmetric covariant 2-tensor field on $M$ that is positive definite at each point in $M$; that is, $g \in \Gamma(T^*M \otimes T^*M)$. Locally, we may express $g$ as $g_{ij}dx^i \otimes dx^j$ for coordinates $(U, x^1, \ldots, x^n)$, where $(g_{ij})$ is a positive definite matrix of smooth functions.

A K\"ahler structure on a Riemannian manifold $(M^n, g)$ is given by a 2-form $\omega$ and a field of endomorphisms $J$ of the tangent bundle such that the following hold.

Algebraic conditions:
(1) $J$ is an almost complex structure; that is, $J^2 = -Id$ as an endomorphism of the tangent space.
(2) $g(X,Y) = g(JX, JY)$ for each $X,Y \in \Gamma(TM)$.
(3) $\omega(X,Y)= g(JX,Y)$.

Analytic conditions:
(4) The 2-form $\omega$ is closed; i.e.\ $d\omega = 0$.
(5) $J$ is integrable.

Note that $(1)$ and $(5)$ are equivalent to having a holomorphic structure. If $N(X,Y) = 2([JX, JY] - [X,Y] - J[JX, Y] - J[X,JY]) = 0$ we have the holomorphic structure.
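\textbf{Example:} (the flat model) On $\mathbb{C}^n \cong \mathbb{R}^{2n}$ with coordinates $z_\alpha = x_\alpha + iy_\alpha$, take $g$ to be the Euclidean metric, $J$ to be multiplication by $i$ (so $J\frac{\partial}{\partial x_\alpha} = \frac{\partial}{\partial y_\alpha}$ and $J\frac{\partial}{\partial y_\alpha} = -\frac{\partial}{\partial x_\alpha}$), and
\[
\omega = \sum_\alpha dx_\alpha \wedge dy_\alpha = \frac{i}{2}\sum_\alpha dz_\alpha \wedge d\bar{z}_\alpha.
\]
Then $J^2 = -Id$, $g(JX,JY) = g(X,Y)$, $\omega(X,Y) = g(JX,Y)$, $d\omega = 0$, and $J$ is integrable, so $(\mathbb{C}^n, g, J, \omega)$ satisfies conditions (1)--(5).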
Locally, we may express $\omega$ as $ih_{\alpha \bar\beta}\,dz_\alpha \wedge d\bar{z}_{\beta}$ where $h_{\alpha \bar\beta} = h(\frac{\partial}{\partial z_{\alpha}}, \frac{\partial}{\partial \bar{z}_{\beta}})$ and $h$ is Hermitian. Also, locally $h_{\alpha\bar\beta} = \frac{\partial^2 u}{\partial z_\alpha \partial \bar{z}_{\beta}}$ where $u$ is the K\"ahler potential. As a side remark, the only solutions found to the Einstein vacuum equation $R_{\alpha \beta} = 0$ are K\"ahler manifolds.

A complex manifold is a smooth manifold of dimension $2n$ which admits an atlas $\{(U_i, \phi_i) \}$ whose charts map into $\mathbb{C}^n$ and whose transition functions are biholomorphic. Remember that a function $F = f + ig$ is holomorphic if it satisfies the Cauchy--Riemann equations
\begin{align*}
\frac{\partial f}{\partial x} = \frac{\partial g}{\partial y} && \frac{\partial f}{\partial y} = -\frac{\partial g}{\partial x}
\end{align*}
\textbf{Exercise:} Show that this is equivalent to the equation $\frac{\partial F}{\partial \bar{z}} = 0$.

The canonical examples of K\"ahler manifolds are complex projective space, tori, $\mathbb{C}^n$, and Riemann surfaces. Note that every smooth projective variety, being embedded in $\mathbb{CP}^n$, is K\"ahler.

\textbf{Nicholas:} A \textbf{Calabi-Yau manifold} is a compact K\"ahler manifold whose holonomy group is $SU(d)$, where $d$ is the complex dimension.

\textbf{Definition:} Take $C^\infty(M, TM)$ as the space of vector fields on $M$. A bilinear map $\nabla: C^\infty(M, TM) \times C^\infty(M, TM) \to C^\infty(M, TM)$, $(X,Y) \mapsto \nabla_X Y$, is a connection if it satisfies
(1) $\nabla_{fX}Y = f\nabla_XY$ for each $f \in C^\infty(M)$,
(2) $\nabla_X(fY) = X(f)Y + f\nabla_XY$.

\textbf{Definition:} A vector field $X$ is parallel if $\nabla_YX = 0$ for every $Y \in C^\infty(M, TM)$.

Let $\gamma: [a,b] \to M$ be a smooth curve on $M$. A vector field $X$ along $\gamma([a,b])$ is called a parallel transport of a vector $v \in T_{\gamma(a)}M$ if $\nabla_{\dot{\gamma}(t)}X = 0$ for each $t$ and $X(a) = v$. If $X$ is a parallel transport of $v$ and $Y$ is a parallel transport of $w$ (both along $\gamma$), then $c_1X + c_2Y$ is the unique parallel transport of $c_1v + c_2w$ along $\gamma$.

Let $(e_i)$ be a basis of $T_{\gamma(a)}M$ and let $X^{e_i}$ be the parallel transport of $e_i$ along $\gamma$. Define $f_\gamma:T_{\gamma(a)}M \to T_{\gamma(b)}M$ by $v = v^ie_i \mapsto v^i X^{e_i}(b)$. Now consider all loops in $M$ based at $p \in M$: for a loop $\alpha$ based at $p$, the map $f_\alpha: T_{p}M \to T_{p}M$ is an element of $GL(n; \mathbb{R})$, and the collection of all such maps is the holonomy group of $\nabla$ at $p$.

\section{April 6, 2015 (Monday)}
Let $V$ be a finite dimensional $\mathbb{R}$-vector space and $\lambda:V \times V \to \mathbb{R}$ a \textbf{symmetric bilinear form}; that is, $\lambda$ satisfies the following properties:
(1) $\lambda(v + v',w) = \lambda(v,w) + \lambda(v',w)$
(2) $\lambda(av,w) = \lambda(v,aw) = a\lambda(v,w)$
(3) $\lambda(v,w) = \lambda(w,v)$
Moreover, we say $\lambda$ is \textbf{nondegenerate} if $\lambda(v,w) = 0$ for all $w \in V$ implies $v = 0$.

\textbf{Theorem:} (Sylvester) If $\lambda: V\times V \to \mathbb{R}$ is a symmetric bilinear form, then there is a basis $(b_i)_{i=1}^n$ of $V$ such that the matrix $(\lambda(b_i,b_j))$ is diagonal with entries $+1$ ($n_+$ times), $-1$ ($n_-$ times), and $0$.

\textbf{Observation:} $\lambda$ is non-degenerate iff $\ker(\lambda_{ij}) = 0$, i.e.\ iff the matrix $(\lambda_{ij}) = (\lambda(b_i,b_j))$ is invertible.

\textbf{Definitions:} The \textbf{signature} of $\lambda$ is $(n_+,n_-)$, where $n_+$ is the number of positive eigenvalues and $n_-$ is the number of negative eigenvalues. If $n_+$ is the dimension of $V$, then $\lambda$ is called \textbf{positive-definite}. Also, $n_-$ is called the \textbf{index} of $\lambda$.
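\textbf{Example:} On $V = \mathbb{R}^2$, the symmetric bilinear form $\lambda(v,w) = v_1w_1 - v_2w_2$ has matrix $\mathrm{diag}(1,-1)$ with respect to the standard basis, so its signature is $(1,1)$ and its index is $1$; it is nondegenerate but not positive definite. Similarly, the Minkowski metric on $\mathbb{R}^4$ appearing below has signature $(3,1)$ and index $1$.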
\textbf{Definition:} A \textbf{semi-Riemannian} $n$-manifold is a manifold $M$ together with a nondegenerate symmetric tensor $g \in \Gamma(T^*M \otimes T^*M)$ such that the index of $g_p$ is the same for all $p \in M$. If the index of $g$ is 0, then $(M,g)$ is called \textbf{Riemannian}. Locally, for a chart $(U,\phi)$ with local coordinates $x^1, \ldots, x^n$, we can express $g$ as
\[
g = g_{ij}dx^i\otimes dx^j.
\]
\textbf{Sidenote:} General relativity is the geometry of 4-dimensional semi-Riemannian manifolds with index 1. A semi-Riemannian metric with index 1 is called a \textbf{Lorentz metric}.

\textbf{Remark:} There is no Lorentz metric on $S^2$. (The obstruction is of topological nature.)

\textbf{Theorem:} Every manifold admits a Riemannian metric.
\begin{proof}
Let $\mathscr{A}$ be an atlas of $M$. For each $(U,x) \in \mathscr{A}$, put $g_U:=x^*\langle -,- \rangle$, the pullback of the standard Euclidean metric on $\mathbb{R}^n$. Choose a partition of unity $(\phi_U)_{(U,x) \in \mathscr{A}}$ subordinate to $\mathscr{A}$. Put
\[
g(v,w) = \sum_{(U,x)\in \mathscr{A}} \phi_U(p)\, g_U(v,w) \text{ for } v,w \in T_pM.
\]
Notice that at each point $g_p$ is positive definite and symmetric.
\end{proof}
\textbf{Observation:} The same argument does not work for a Lorentz metric: the terms may cancel out in the partition of unity. Observe
\[
\frac{1}{2} \begin{pmatrix}1&0\\0&-1\end{pmatrix} + \frac{1}{2}\begin{pmatrix}-1&0\\0&1\end{pmatrix} = 0.
\]
Assume $(M,g)$ is a semi-Riemannian manifold. Let $(U,x)$ be a chart and $\frac{\partial}{\partial x_i}$ the corresponding local frame of $TM$. Put $g_{ij}^{(U,x)} := g(\frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j}) \in C^\infty(U)$. If $(V,y)$ is another coordinate chart with $U\cap V \neq \varnothing$, we want to know how the local expression of $g$ transforms:
\begin{align*}
\frac{\partial}{\partial y_j}\Big|_p = \sum_{k=1}^n \frac{\partial (x_k \circ y^{-1})}{\partial y_j}(p) \frac{\partial}{\partial x_k}\Big|_p \text{ and } \\
g_{ij}^{(V,y)}(p) = \sum_{k,l=1}^n \frac{\partial (x_k \circ y^{-1})}{\partial y_i}(p) \cdot \frac{\partial (x_l \circ y^{-1})}{\partial y_j}(p)\,g_{kl}^{(U,x)}(p).
\end{align*}
Assume $N \hookrightarrow M$ is a submanifold, and that $g$ is a semi-Riemannian metric on $M$. Then one can pull back $g$ to $N$ to get a symmetric 2-tensor $i^*g \in \Gamma(T^*N \otimes T^*N)$ with
\[
i^*g(p)(v,w) = g(i(p))(Ti(v), Ti(w)).
\]
\textbf{Observations:} (1) If $g$ is positive definite, then $i^*g$ is so as well. (2) The pull-back of a semi-Riemannian metric may fail to be semi-Riemannian: the restriction of $g$ to $TN$ can be degenerate. The obstructions for this are topological.

\section{April 8, 2015 (Wednesday)}
\textbf{Exotic Spheres:} (Milnor) There is a family of smooth 7-manifolds which are homeomorphic to $S^7 \subset \mathbb{R}^8$, but not diffeomorphic to it.

\textbf{Example:} Consider
\[
\begin{tikzcd}
\tilde{\mathbb{R}} \arrow[d, "\psi"] & x \arrow[d, mapsto] & & \mathbb{R} \arrow[d, "Id"]\\
\mathbb{R} & x^3 & & \mathbb{R}
\end{tikzcd}
\]
Observe that these two smooth structures on $\mathbb{R}$ are not the same (the identity is not a diffeomorphism), but the manifolds are diffeomorphic via $\tilde{\mathbb{R}} \xrightarrow{x \mapsto x^{3}} \mathbb{R}$.

The strategy is as follows:
(1) We want $M$ to be homeomorphic to $S^7$.
(2) Construct $M^7_k$ as sphere bundles $E \to S^4$.
(3) Prove that $M^7_k$ is homeomorphic to $S^7$.
(4) (Black magic) Construct an invariant with $\lambda(M^7_k) \neq \lambda(S^7)$.

First, recall that $f:M \to \mathbb{R}$ is a Morse function if the Hessian matrix at each of its critical points is non-singular; the critical points are the $p \in M$ such that $df_p = 0$.
The Hessian matrix can be represented in local coordinates as
\[
\left( \frac{\partial^2 f}{\partial x_i \partial x_j} \right)_{i,j}.
\]
\textbf{Theorem:} If $M$ is a compact $n$-manifold admitting a Morse function $f$ with exactly 2 critical points, then $M$ is homeomorphic to $S^n$.

\textbf{Theorem:} Let $f \in C^\infty(M)$, $M^r = f^{-1}(-\infty, r)$, and $a< b \in \mathbb{R}$. If $f^{-1}([a,b])$ is compact and contains no critical points, then $M^a$ is diffeomorphic to $M^b$.
\begin{proof}
Let $g$ be a Riemannian metric, $g(X,Y) = \langle X, Y \rangle$, and let $\text{grad}(f) \in \mathcal{X}(M)$ be defined by $\langle \text{grad}(f), Y \rangle = Y(f)$ for all vector fields $Y$. Observe that $X = \phi\,\text{grad}(f)$, where $\phi \in C^\infty(M)$ equals
\[
\phi = \frac{1}{ \| \text{grad}(f) \|^2}
\]
on $f^{-1}([a,b])$ and vanishes outside a compact neighborhood of it, is a vector field of compact support. It defines a flow $\phi_t$ with $\frac{d}{dt} \phi_t(p) = X(\phi_t(p))$; consider $f(\phi_t(q))$ as a function of $t$. If $\phi_t(q) \in f^{-1}([a,b])$, then $\frac{d}{dt}f(\phi_t(q)) = \langle \frac{d \phi_t(q)}{dt}, \text{grad}(f) \rangle = X(f) = \phi \| \text{grad}(f) \|^2 = 1$. This implies that $f(\phi_t(q)) = f(q) + t$. If $f(q) \leq a$, then $f(\phi_{b-a}(q)) = f(q) + b - a \leq b$; hence $\phi_{b-a}$ carries $M^a$ diffeomorphically onto $M^b$.
\end{proof}
(2) For constructing $M_k^7$, consider a sphere bundle $S^3 \hookrightarrow M_k^7 \to S^4$. Observe that $S^4 = U^+ \cup U^-$ for $U^+ = S^4 \sim \{N\}$ and $U^- = S^4 \sim \{S\}$ (the north and south poles); each of these sets is homeomorphic to $\mathbb{R}^4$. Decompose $M^7_k$ as the union of the preimages of these sets, denoted $V^+$ and $V^-$ respectively; these are homeomorphic to $\mathbb{R}^4 \times S^3$. Define a map $V^+ \to V^-$ (on the overlap) by
\[
(u;v) \mapsto \left( \frac{u}{\| u \|^2}; \frac{u^ivu^j}{\| u \|} \right) = (u';v')
\]
with $u \in \mathbb{H}$ and $v \in S^3 \subset \mathbb{H}$. We define a Morse function $f(u;v)$ by
\[
f(u;v) = \frac{\text{Re}(v)}{(1 + \| u \|^2)^{1/2}} = \frac{\text{Re}(u'')}{(1+ \| u'' \|^2)^{1/2}}
\]
where $u'' = u'(v')^{-1}$.

\section{April 10, 2015}
\textbf{Morse Theory:} Morse theory studies smooth functions on a manifold to better understand the underlying topological structure. Let $f: M \to \mathbb{R}$ be a smooth function. The points $p \in M$ at which the differential of $f$ is the zero map are called \textbf{critical points}. In local coordinates, this may be expressed as
\[
\frac{\partial f}{\partial x_i}(p) = 0 \quad \text{for all } i.
\]
\textbf{Definition:} The Hessian matrix of $f$ at $p$ is the matrix
\[
H_f(p) = \left(\frac{\partial^2f}{\partial x_i \partial x_j}(p) \right)_{i,j}.
\]
\textbf{Definition:} A critical point $p$ of $f$ is nondegenerate if the Hessian matrix at $p$ is nonsingular.

\textbf{Proposition:} The nondegeneracy of a critical point is independent of the chart used.

\textbf{Definition:} A smooth function $f \in C^\infty(M)$ is called a \textbf{Morse function} if all its critical points are nondegenerate.

\textbf{Lemma:} (Morse Lemma) Let $M$ be a smooth $m$-manifold and let $b$ be a nondegenerate critical point of a smooth function $f$. Then there exists a chart $(x_1, \ldots, x_m)$ around $b$ such that $x_i(b) = 0$ and
\[
f = -x_1^2 - x_2^2 - \cdots - x_\alpha^2 + x_{\alpha + 1}^2 + \cdots + x_m^2 + f(b).
\]
\textbf{Corollary:} Nondegenerate critical points are isolated (there exists a neighborhood of $b$ such that $b$ is the only critical point in this neighborhood).

\textbf{Corollary:} A Morse function on a compact $m$-manifold $M$ has only finitely many critical points.
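\textbf{Example:} The height function $f(x_1, \ldots, x_{m+1}) = x_{m+1}$ restricted to the unit sphere $S^m \subset \mathbb{R}^{m+1}$ is a Morse function. Its critical points are the north and south poles: in the chart given by projection to the first $m$ coordinates, $f = \pm\sqrt{1 - x_1^2 - \cdots - x_m^2} = \pm\big(1 - \tfrac{1}{2}(x_1^2 + \cdots + x_m^2)\big) + O(|x|^4)$, so the Hessian at the north pole is $-I$ and at the south pole is $+I$, both nonsingular. Thus $f$ has exactly two nondegenerate critical points, illustrating the theorem above that a compact manifold admitting a Morse function with exactly two critical points is homeomorphic to the sphere.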
\textbf{Definition:} Two functions $f,g$ on a smooth $m$-manifold $M$ are called $(C^2, \varepsilon)$-close if the following three properties hold (for all $p$, in local coordinates):
(1) $|f(p) - g(p) | < \varepsilon$
(2) $|\frac{\partial f}{\partial x_i}(p) - \frac{\partial g}{\partial x_i}(p)| < \varepsilon$
(3) $|\frac{\partial^2 f}{\partial x_i \partial x_j}(p) - \frac{\partial^2 g}{\partial x_i \partial x_j}(p)| < \varepsilon$

\textbf{Theorem:} Let $g: M \to \mathbb{R}$ be a smooth function and $\varepsilon > 0$. Then there exists a Morse function $f$ such that $f$ and $g$ are $(C^2,\varepsilon)$-close.

\section{13 April, 2015 (Monday)}
\textbf{Definition:} The \textbf{Minkowski metric} on $\mathbb{R}^4$ is the metric $g$ such that for any vectors $v,w \in \mathbb{R}^4$, $g(v,w) = -v_1w_1 + v_2w_2 + v_3w_3 + v_4w_4$.

\textbf{Definition:} Recall that a local diffeomorphism is a smooth map such that every point of the domain has an open neighborhood which is mapped diffeomorphically onto an open subset of the target.

\textbf{Definition:} A local diffeomorphism $\phi:M \to N$ between semi-Riemannian manifolds $(M,g_M)$ and $(N,g_N)$ is a local isometry if for all $p \in M$ and $v,w \in T_pM$,
\[
g_M(v,w) = g_N(T_p\phi(v), T_p\phi(w)).
\]
\textbf{Observation:} For each semi-Riemannian manifold $(M, g)$, the set of isometries forms a group, denoted by $Isom(M,g)$.\\
\textbf{Exercise:} Check that $Isom(M,g)$ is a group.\\
\textbf{Examples:} (1) Maps from $(\mathbb{R}^n, g_{euc})$ to itself of the form $f: \mathbb{R}^n \to \mathbb{R}^n$, $v \mapsto Av + b$, where $A \in O(n, \mathbb{R})$ and $b \in \mathbb{R}^n$. This space of maps is called the space of \textbf{Euclidean transformations}, denoted $Trans_{euc}(\mathbb{R}^n)$. Notice that the composition of two such transformations is again a Euclidean transformation.\\
\textbf{Theorem:} $Trans_{euc}(\mathbb{R}^n) = Isom(\mathbb{R}^n, g_{euc})$. This is highly nontrivial to prove.

(2) Maps $f:(\mathbb{R}^{n+1}, g_{Min}) \to (\mathbb{R}^{n+1}, g_{Min})$ of the form $f(v) = Av + b$ for $A \in O(n,1) = \{ A \in GL(n+1, \mathbb{R}) : g_{Min}(Av,Aw) = g_{Min}(v,w) \}$. The set of all such transformations is a group called the Poincar\'e group; it is the isometry group.

(3) The set of isometries of the sphere $S^n$ is $O(n+1)$.

\textbf{Covariant Derivatives:} Let $\eta: M \to TM$ be a vector field. Its differential is a map $T\eta: TM \to T(TM)$. If $\xi \in T_pM$, then $T\eta\,\xi \in T_{\eta(p)}TM$, which is not a tangent space of $M$; this is a problem!

\textbf{Definition:} By a covariant derivative (or connection) on a manifold $M$ one understands a map $\nabla: \mathfrak{X}^\infty(M) \to \Omega^1(M) \otimes_{C^\infty (M)} \mathfrak{X}^\infty(M)$ such that the following holds true:
\[
\nabla(f\eta) = df \otimes \eta + f\nabla(\eta).
\]
One writes $\nabla_\xi\eta := (\nabla\eta)(\xi)$. This implies further properties of $\nabla$.

\section{April 15, 2015 (Wednesday)}

\end{document}
{ "alphanum_fraction": 0.6457764208, "avg_line_length": 68.3846153846, "ext": "tex", "hexsha": "a6e7561a29cc5d279e3f0fe76e8e7a858bdf283e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "78b06255d12defe8979fdd3a49883705c16b3592", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "lusi8559/MATH_6250_notes", "max_forks_repo_path": "notes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "78b06255d12defe8979fdd3a49883705c16b3592", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "lusi8559/MATH_6250_notes", "max_issues_repo_path": "notes.tex", "max_line_length": 970, "max_stars_count": null, "max_stars_repo_head_hexsha": "78b06255d12defe8979fdd3a49883705c16b3592", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "lusi8559/MATH_6250_notes", "max_stars_repo_path": "notes.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14164, "size": 37338 }
\documentclass[]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={HW1}, pdfauthor={Henrique Magalhaes Rio}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if 
necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \providecommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{HW1} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{Henrique Magalhaes Rio} \preauthor{\centering\large\emph} \postauthor{\par} \date{} \predate{}\postdate{} \usepackage{xcolor} \usepackage{framed} \begin{document} \maketitle \colorlet{shadecolor}{gray!10} \newcommand{\answerstart}{ \colorlet{shadecolor}{orange!20} \begin{shaded} } \newcommand{\answerend}{ \end{shaded} \colorlet{shadecolor}{gray!10}} \hypertarget{question-1}{% \section{Question 1}\label{question-1}} \hypertarget{part-1-a}{% \subsection{Part 1 (A)}\label{part-1-a}} \includegraphics{HW1_files/figure-latex/unnamed-chunk-2-1.pdf} \hypertarget{part-2-b}{% \subsection{Part 2 (B)}\label{part-2-b}} \colorlet{shadecolor}{orange!20} \begin{shaded} \(\hat{\beta_0}=578.92775\), is the estimated average cholesterol level for an athlete with 0 mg of fat intake. \(\hat{\beta_1}=0.54030\), is the estimated average difference in cholesterol level for a one unit difference in fat intake. \(\hat{\sigma}= 133.4377\), is the estimated standard deviation of the model. \end{shaded} \colorlet{shadecolor}{gray!10} \hypertarget{part-3-c}{% \subsection{Part 3 (C)}\label{part-3-c}} \colorlet{shadecolor}{orange!20} \begin{shaded} We reject the null hypothesis that there is no linear relationship between fat intake and cholesterol level. 
(p-value=0.000236) \end{shaded} \colorlet{shadecolor}{gray!10} \hypertarget{appendix}{% \section{Appendix}\label{appendix}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(knitr)} \KeywordTok{library}\NormalTok{(ggplot2)} \KeywordTok{library}\NormalTok{(dplyr)} \KeywordTok{library}\NormalTok{(tidyverse)} \KeywordTok{library}\NormalTok{(broom)} \KeywordTok{library}\NormalTok{(splines)} \KeywordTok{library}\NormalTok{(caret)} \NormalTok{knitr}\OperatorTok{::}\NormalTok{opts_chunk}\OperatorTok{$}\KeywordTok{set}\NormalTok{(}\DataTypeTok{echo =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{message =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{warning =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{fig.width =} \DecValTok{4}\NormalTok{, }\DataTypeTok{fig.height =} \DecValTok{4}\NormalTok{, }\DataTypeTok{tidy =} \OtherTok{TRUE}\NormalTok{)} \NormalTok{chol <-}\StringTok{ }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"cholDat.csv"}\NormalTok{)} \KeywordTok{ggplot}\NormalTok{(chol,}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{y=}\NormalTok{chol,}\DataTypeTok{x=}\NormalTok{fat)) }\OperatorTok{+}\KeywordTok{theme_bw}\NormalTok{()}\OperatorTok{+}\StringTok{ }\KeywordTok{geom_point}\NormalTok{() }\OperatorTok{+}\KeywordTok{ylab}\NormalTok{(}\StringTok{"Cholesterol"}\NormalTok{)}\OperatorTok{+}\StringTok{ }\KeywordTok{xlab}\NormalTok{(}\StringTok{"Fat Intake"}\NormalTok{)} \NormalTok{cholm <-}\StringTok{ }\KeywordTok{lm}\NormalTok{(chol}\OperatorTok{~}\NormalTok{fat, }\DataTypeTok{data=}\NormalTok{chol)} \KeywordTok{summary}\NormalTok{(cholm)} \end{Highlighting} \end{Shaded} \end{document}
{ "alphanum_fraction": 0.7258289925, "avg_line_length": 39.5527638191, "ext": "tex", "hexsha": "122c599562b8ce793ac20b475610d7ddbff45501", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dbad7f48db1cc4fe5a4f42c4cac03ec9464afe23", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "henriquem27/School-Projects", "max_forks_repo_path": "STAT342/HW1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dbad7f48db1cc4fe5a4f42c4cac03ec9464afe23", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "henriquem27/School-Projects", "max_issues_repo_path": "STAT342/HW1.tex", "max_line_length": 427, "max_stars_count": null, "max_stars_repo_head_hexsha": "dbad7f48db1cc4fe5a4f42c4cac03ec9464afe23", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "henriquem27/School-Projects", "max_stars_repo_path": "STAT342/HW1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2835, "size": 7871 }
\chapter{Monte Carlo Simulation}
\section{Simulation}
\begin{itemize}
\item GEANT
\item Minbias overlay
\item Alpgen and CompHEP matrix element generators
\item Pythia showering, hadronization, and underlying event
\end{itemize}
\section{Calibration and Physics Object Corrections}
\begin{itemize}
\item Muon smearing from $Z\rightarrow\mu\mu$~events. Muon correction factors.
\item Electron smearing from $Z\rightarrow e e$~events. Electron correction factors.
\item Jet energy scale and jet smearing (SSR).
\item Missing transverse energy correction.
\item $B$-jet correction factors.
\end{itemize}
{ "alphanum_fraction": 0.7996575342, "avg_line_length": 32.4444444444, "ext": "tex", "hexsha": "5933dd0b6c0abb83fb2c65f7e7949f0fb1566561", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "tgadf/thesis", "max_forks_repo_path": "Old/Simulation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "tgadf/thesis", "max_issues_repo_path": "Old/Simulation.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "19d4a6bc7f7ac8660fce582322703d50e0d6bd31", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "tgadf/thesis", "max_stars_repo_path": "Old/Simulation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 159, "size": 584 }
\documentclass{article}
\usepackage[colorlinks=true]{hyperref}
\usepackage[cmex10]{amsmath}
\usepackage{bbm}
\usepackage{graphicx}
\usepackage{subfig}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{comment}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{multirow}
\DeclareMathOperator*{\argmin}{\mathrm{argmin}}
\DeclareMathOperator*{\pro}{\mathcal P_{\Omega}}
\DeclareMathOperator*{\pron}{\mathcal P_{\bar{\Omega}}}
\DeclareMathOperator*{\proe}{\mathcal P_{H}}
\newcommand{\BigO}[1]{\ensuremath{\operatorname{O}\left(#1\right)}}
\begin{document}
\title{MC-Kit Manual}
\author{Stephen Tierney}
\maketitle
\section{Introduction}
\subsection{Classic Matrix Completion}
The problem of Matrix Completion (MC) is to recover a matrix $\mathbf A$ from only a small sample of its entries. Let $\mathbf A \in \mathbb R^{m \times n}$ be the matrix we would like to know as precisely as possible while only observing a subset of its entries $(i, j) \in \Omega$. It is assumed that the observed entries in $\Omega$ are sampled uniformly at random. Low-Rank Matrix Completion is a variant that assumes that $\mathbf A$ is low-rank. The tolerance for ``low'' is dependent upon the size of $\mathbf A$ and the number of sampled entries.

For further explanation let us define the sampling operator $\pro : \mathbb R^{m \times n} \rightarrow \mathbb R^{m \times n}$ as
\begin{align}
[ \pro ( X ) ]_{ij} = \left\{
\begin{array}{ll}
X_{ij}, &(i, j) \in \Omega,\\
0, &\text{otherwise}.
\end{array}
\right.
\end{align}
We also define the opposite operator $\pron$, which keeps entries outside $\Omega$ (i.e.\ in $\bar{\Omega}$) unchanged and sets entries inside $\Omega$ to $0$.

Since we assume that the matrix to recover is low-rank, one could recover the unknown matrix by solving
\begin{align}
\min_{\mathbf A} \; \text{rank}(\mathbf A)\\
\text{s.t.} \; \pro (\mathbf A) = \pro (\mathbf M) \nonumber
\end{align}
where $\mathbf M$ is the partially observed matrix. In practice this problem is intractable, therefore we use the closest convex relaxation, i.e.\ the nuclear norm:
\begin{align}
\min_{\mathbf A} \; \tau \| \mathbf A \|_* \\
\text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) \nonumber
\label{classic_objective}
\end{align}
We also consider the case where our observed entries may contain a limited amount of noise. Our corresponding objective is the following:
\begin{align}
\min_{\mathbf{A, E}} \; \tau \| \mathbf A \|_* + \frac{\lambda}{2} \| \pro (\mathbf E) \|_F^2\\
\text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) + \pro (\mathbf E) \nonumber
\end{align}
\section{Singular Value Shrinkage Operator}
Central to this work is the {\bf{singular value shrinkage operator}}. Consider the singular value decomposition (SVD) of a matrix $\mathbf Y \in \mathbb R^{m \times n}$ with rank $r$,
\begin{align}
\mathbf Y = \mathbf{U \Sigma V^T}, \;\; \mathbf \Sigma = \text{diag}(\{\sigma_i\}_{i=1}^r).
\end{align}
Then we define the singular value shrinkage operator for any $\tau \geq 0$ as
\begin{align}
\mathcal D_{\tau}(\mathbf Y) = \mathbf U S_{\tau}(\mathbf \Sigma) \mathbf V^T, \;\; S_{\tau}(\mathbf \Sigma) = \text{diag}(\{\max(\sigma_i - \tau, 0)\}).
\end{align}
It has been shown \cite{cai2010singular} that the operator $\mathcal D_{\tau}(\mathbf Y)$ is the solution to the proximal nuclear norm problem, i.e.
\begin{align}
\mathcal D_{\tau}(\mathbf Y) = \argmin_{\mathbf X} \; \tau \| \mathbf X \|_* + \frac{1}{2} \| \mathbf{X - Y} \|_F^2.
\end{align}
The singular value shrinkage operator is implemented by the function $[ \mathbf X, \mathbf s ] = \text{nn\_prox}( \mathbf Y, \tau )$, where $\mathbf s$ is the vector of the singular values of $\mathbf X$.
\newpage
\section{Function Listing}
\begin{table}[!h]
{\small{
\centering
\begin{tabular}{c | c | c}
\hline
Problem & Function & Section \\ \hline
$\begin{array}{c} \min_{\mathbf A} \; \tau \| \mathbf A \|_* + \frac{1}{2} \| \mathbf{ A } \|_F^2 \\ \text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) \end{array}$ & solve\_svt & 4.1.1 \\ \hline
\multirow{2}{*}{$\begin{array}{c} \min_{\mathbf A} \; \tau \| \mathbf A \|_* \\ \text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) \end{array}$} & solve\_ialm & 4.1.2 \\
& solve\_lin & 4.1.3 \\ \hline
\multirow{3}{*}{$\begin{array}{c} \min_{\mathbf A} \; \tau \| \mathbf A \|_* + \frac{\lambda}{2} \| \mathbf{ \pro (A) - \pro (M) } \|^2_F \end{array}$} & solve\_e\_lin & 4.2 \\
& solve\_e\_lin\_ext & \\
& solve\_e\_lin\_acc & \\ \hline
$\begin{array}{c} \min_{\mathbf{A,E}} \; \tau \| \mathbf A \|_* + \frac{\lambda}{2} \| \pro (\mathbf E) \|_F^2\\ \text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) + \pro (\mathbf E) \end{array}$ & solve\_e\_exact & 4.3 \\ \hline
\end{tabular}
}}
\end{table}
\newpage
\section{Classic Implementations}
\subsection{Noise Free Data}
\subsubsection{SVT}
The function
\begin{align}
[ \mathbf A, \mathbf{f\_values}, \mathbf{stop\_vals} ] = \text{solve\_svt}( \mathbf M, \Omega, \tau, \mu, iterations, tol )\notag
\end{align}
solves the following
\begin{align}
\min_{\mathbf A} \; \tau \| \mathbf A \|_* + \frac{1}{2} \| \mathbf{ A } \|_F^2 \\
\text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) \nonumber
\end{align}
as proposed by the authors of \cite{cai2010singular}.
\begin{itemize}
\item $\mathbf M$ - matrix with observed entries
\item $\Omega$ - vector of constrained matrix indices
\item $\tau$ - regularisation (optional)
\item $\mu$ - step size (optional)
\item $iterations$ - maximum number of iterations (optional)
\item $tol$ - stopping criteria tolerance (optional)
\end{itemize}
\subsubsection{Inexact ALM}
The function
\begin{align}
[ \mathbf A, \mathbf{f\_vals}, \mathbf{stop\_vals} ] = \text{solve\_ialm}( \mathbf M, \Omega, \tau, \mu, iterations, tol )\notag
\end{align}
solves the following
\begin{align}
\min_{\mathbf A} \; \tau \| \mathbf A \|_* \\
\text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) \nonumber
\end{align}
as proposed by the authors of \cite{lin2010augmented}.
\begin{itemize}
\item $\mathbf M$ - matrix with observed entries
\item $\Omega$ - vector of constrained matrix indices
\item $\tau$ - regularisation (optional)
\item $\mu$ - step size (optional)
\item $iterations$ - maximum number of iterations (optional)
\item $tol$ - stopping criteria tolerance (optional)
\end{itemize}
\subsubsection{Linearised ALM}
The function
\begin{align}
[ \mathbf A, \mathbf{f\_vals}, \mathbf{stop\_vals} ] = \text{solve\_lin}( \mathbf M, \Omega, \tau, \mu, \rho, iterations, tol ) \notag
\end{align}
solves the following
\begin{align}
\min_{\mathbf A} \; \tau \| \mathbf A \|_* \\
\text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) \nonumber
\end{align}
\begin{itemize}
\item $\mathbf M$ - matrix with observed entries
\item $\Omega$ - vector of constrained matrix indices
\item $\tau$ - regularisation (optional)
\item $\mu$ - step size (optional)
\item $\rho$ - linearisation step size (optional)
\item $iterations$ - maximum number of iterations (optional)
\item $tol$ - stopping criteria tolerance (optional)
\end{itemize}
\subsection{Noisy Data Relaxation}
The functions
\begin{align}
[ \mathbf A, \mathbf{f\_vals}, \mathbf{stop\_vals} ] = \text{solve\_e\_lin}( \mathbf M, \Omega, \tau, \lambda, \rho, iterations, tol ) \notag \\
[ \mathbf A, \mathbf{f\_vals}, \mathbf{stop\_vals} ] = \text{solve\_e\_lin\_ext}( \mathbf M, \Omega, \tau, \lambda, \rho, iterations, tol ) \notag \\
[ \mathbf A, \mathbf{f\_vals}, \mathbf{stop\_vals} ] = \text{solve\_e\_lin\_acc}( \mathbf M, \Omega, \tau, \lambda, \rho, iterations, tol )\notag
\end{align}
solve the following
\begin{align}
\min_{\mathbf A} \; \tau \| \mathbf A \|_* + \frac{\lambda}{2} \| \mathbf{ \pro (A) - \pro (M) } \|^2_F
\end{align}
with increasing convergence speed, based on \cite{ji2009accelerated}.
\begin{itemize}
\item $\mathbf M$ - matrix with observed entries
\item $\Omega$ - vector of constrained matrix indices
\item $\tau$ - regularisation (optional)
\item $\lambda$ - regularisation (optional)
\item $\rho$ - linearisation step size (optional)
\item $iterations$ - maximum number of iterations (optional)
\item $tol$ - stopping criteria tolerance (optional)
\end{itemize}
\subsection{Noisy Data Exact}
The function
\begin{align}
[ \mathbf A, \mathbf{f\_vals}, \mathbf{stop\_vals} ] = \text{solve\_e\_exact}( \mathbf M, \Omega, \tau, \lambda, \rho, iterations, tol ) \notag
\end{align}
solves the following
\begin{align}
\min_{\mathbf{A,E}} \; \tau \| \mathbf A \|_* + \frac{\lambda}{2} \| \pro (\mathbf E) \|_F^2\\
\text{s.t.} \; \pro (\mathbf M) = \pro (\mathbf A) + \pro (\mathbf E) \nonumber
\end{align}
\begin{itemize}
\item $\mathbf M$ - matrix with observed entries
\item $\Omega$ - vector of constrained matrix indices
\item $\tau$ - regularisation (optional)
\item $\lambda$ - regularisation (optional)
\item $\rho$ - linearisation step size (optional)
\item $iterations$ - maximum number of iterations (optional)
\item $tol$ - stopping criteria tolerance (optional)
\end{itemize}
\newpage
\bibliographystyle{plain}
\bibliography{references}
\end{document}
{ "alphanum_fraction": 0.6777141599, "avg_line_length": 39.4869565217, "ext": "tex", "hexsha": "089152466bea59a1580eb9539ecf9a3c784ee2d2", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-02-23T10:41:49.000Z", "max_forks_repo_forks_event_min_datetime": "2018-09-10T13:33:49.000Z", "max_forks_repo_head_hexsha": "cf1f612e6f6d676c4a4051b9d33c32b0568e7b19", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "luhairong11/MCK", "max_forks_repo_path": "manual/manual.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cf1f612e6f6d676c4a4051b9d33c32b0568e7b19", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "luhairong11/MCK", "max_issues_repo_path": "manual/manual.tex", "max_line_length": 549, "max_stars_count": null, "max_stars_repo_head_hexsha": "cf1f612e6f6d676c4a4051b9d33c32b0568e7b19", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "luhairong11/MCK", "max_stars_repo_path": "manual/manual.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3060, "size": 9082 }
\subsection{Seeds} \subsection{Period}
{ "alphanum_fraction": 0.7380952381, "avg_line_length": 7, "ext": "tex", "hexsha": "0ce4ee77115e6471a5968166c1c4c202b1cf4435", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/probability/pseudoRandom/01-01-pseudo.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/probability/pseudoRandom/01-01-pseudo.tex", "max_line_length": 19, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/probability/pseudoRandom/01-01-pseudo.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12, "size": 42 }
\section{Substitute Vectors}
\label{sec:subthr}
%%substitute vector applications
In this section we explore the usage of substitute vectors within the context clustering framework by comparing similarity metrics, dimensionality reduction techniques and clustering methods. The first subsection describes the computation of substitute vectors using a statistical language model. Section~\ref{sec:dist} gives a detailed comparison of similarity metrics in the high dimensional substitute vector space. Section~\ref{sec:dimreduce} analyzes the application of dimensionality reduction algorithms to substitute vectors. Section~\ref{sec:clustering} presents a comparison of clustering methods on the PTB.

\subsection{Computation of Substitute Vectors}
\label{sec:subcomp}
In this study, we predict the syntactic category of a word in a given context based on its substitute vector. The dimensions of the substitute vector represent words in the vocabulary, and the entries in the substitute vector represent the probability of those words being used in the given context. Note that the substitute vector is a function of the context only and is indifferent to the target word.
%This section details the choice of the data set, the vocabulary and
%the estimation of substitute vector probabilities.
%% % what is the test data
%% The Wall Street Journal Section of the Penn Treebank \cite{treebank3}
%% was used as the test corpus (1,173,766 tokens, 49,206 types).
%% % what is the tag set
%% The treebank uses 45 part of speech tags which is the set we used as
%% the gold standard for comparison in our experiments.
%% % what is the LM training data
%% %Train => 5181717 126019973 690121813
%% To compute substitute probabilities we trained a language model using
%% approximately 126 million tokens of Wall Street Journal data
%% (1987-1994) extracted from CSR-III Text \cite{csr3text} (we excluded
%% the test corpus).
%% % how is the language model trained
%% We used SRILM \cite{Stolcke2002} to build a 4-gram language model with
%% Kneser-Ney discounting.
%% % what is the vocabulary
%% Words that were observed less than 20 times in the language model
%% training data were replaced by \unk\ tags, which gave us a
%% vocabulary size of 78,498.
%% % perplexity
%% The perplexity of the 4-gram language model on the test corpus is 96.
% how are the substitutes computed
It is best to use both the left and the right context when estimating the probabilities for potential lexical substitutes. For example, in \emph{``He lived in San Francisco suburbs.''}, the token \emph{San} would be difficult to guess from the left context but it is almost certain looking at the right context. We define $c_w$ as the $2n-1$ word window centered around the target word position: $w_{-n+1} \ldots w_0 \ldots w_{n-1}$ ($n=4$ is the n-gram order). The probability of a substitute word $w$ in a given context $c_w$ can be estimated as:
\begin{eqnarray}
\label{eq:lm1}P(w_0 = w | c_w) & \propto & P(w_{-n+1}\ldots w_0\ldots w_{n-1})\\
\label{eq:lm2}& = & P(w_{-n+1})P(w_{-n+2}|w_{-n+1})\nonumber\\
&&\ldots P(w_{n-1}|w_{-n+1}^{n-2})\\
\label{eq:lm3}& \approx & P(w_0| w_{-n+1}^{-1})P(w_{1}|w_{-n+2}^0)\nonumber\\
&&\ldots P(w_{n-1}|w_0^{n-2})
\end{eqnarray}
where $w_i^j$ represents the sequence of words $w_i w_{i+1} \ldots w_{j}$. In Equation \ref{eq:lm1}, $P(w|c_w)$ is proportional to $P(w_{-n+1}\ldots w_0 \ldots w_{n-1})$ because the words of the context are fixed.
Terms without $w_0$ are identical for each substitute in Equation
\ref{eq:lm2}, therefore they have been dropped in Equation
\ref{eq:lm3}. Finally, because of the Markov property of the n-gram
language model, only the closest $n-1$ words are used in the
experiments. Near the sentence boundaries the appropriate terms were
truncated in Equation \ref{eq:lm3}. Specifically, at the beginning of
the sentence shorter n-gram contexts were used, and at the end of the
sentence terms beyond the end-of-sentence token were dropped.

%% Rest of this section details the choice of the data set, the
%% vocabulary and the estimation of substitute probabilities.

%% For computational efficiency only the top 100 substitutes and their
%% unnormalized probabilities were computed for each of the 1,173,766
%% positions in the test set\footnote{The substitutes with unnormalized
%% log probabilities can be downloaded from
%% \mbox{\url{http://goo.gl/jzKH0}}. For a description of the {\sc
%% fastsubs} algorithm used to generate the substitutes please see
%% \mbox{\url{http://arxiv.org/abs/1205.5407v1}}. {\sc fastsubs}
%% accomplishes this task in about 5 hours, a naive algorithm that
%% looks at the whole vocabulary would take more than 6 days on a
%% typical 2012 workstation.}. The probability vectors for each
%% position were normalized to add up to 1.0 giving us the final
%% substitute vectors used in the rest of this study.

% what is the LM training data
%Train => 5181717 126019973 690121813
To compute substitute probabilities we trained a language model using
approximately 126 million tokens of Wall Street Journal data
(1987-1994) extracted from CSR-III Text \cite{csr3text} (excluding
sections of the PTB).
% how is the language model trained
We used SRILM \cite{Stolcke2002} to build a 4-gram language model with
Kneser-Ney discounting.
% what is the vocabulary
Words that were observed fewer than 500 times in the LM training data
were replaced by \unk\ tags, which gave us a vocabulary size of
12,672.
% what is the test data
The first 24,020 tokens of the Penn Treebank Wall Street Journal
Section 00 (PTB24K) were used as the test corpus to be induced. The
corpus size was kept small in order to efficiently compute full
distance matrices. Substitution probabilities for the 12,672
vocabulary words were computed at each of the 24,020 positions.
% perplexity
The perplexity of the 4-gram language model on the test corpus was
55.4, which is quite low due to the small vocabulary and the use of
in-domain data.
% what is the tag set
The treebank uses 45 part-of-speech tags, which is the set we used as
the gold standard for comparison in our experiments.
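To make Equation \ref{eq:lm3} concrete, the following Python sketch
scores every vocabulary word for a single target position and
normalizes the scores into a substitute vector. It is only an
illustration: \texttt{cond\_prob} stands for a hypothetical accessor
over the trained 4-gram model, and the naive loop over the whole
vocabulary is for exposition only; the actual substitutes were
generated far more efficiently.

\begin{verbatim}
# Illustrative sketch of Eq. (3); not the code used in the experiments.
# cond_prob(word, history) is a hypothetical accessor that returns the
# 4-gram probability P(word | history) from the trained language model.

def substitute_vector(left, right, vocab, cond_prob, n=4):
    """left/right: the n-1 words before/after the target position."""
    scores = {}
    for w in vocab:
        window = list(left) + [w] + list(right)  # w_{-n+1} ... w_0 ... w_{n-1}
        p = 1.0
        # every factor of Eq. (3) conditions on the previous n-1 words
        # and involves the candidate w, which sits at index n-1 of the window
        for i in range(n - 1, 2 * n - 1):
            p *= cond_prob(window[i], tuple(window[i - n + 1:i]))
        scores[w] = p
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}  # sums to 1.0
\end{verbatim}

Boundary truncation (dropping the factors that would cross a sentence
boundary) is omitted from the sketch for brevity.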
{ "alphanum_fraction": 0.7620672843, "avg_line_length": 50.0243902439, "ext": "tex", "hexsha": "9a3b62f9be556728f1e2512876592e899b8846e6", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-06T07:56:00.000Z", "max_forks_repo_head_hexsha": "f4723cac53b4d550d2b0c613c9577eb247c7ff4a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ai-ku/upos_2014", "max_forks_repo_path": "papers/coling2014/substitute.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f4723cac53b4d550d2b0c613c9577eb247c7ff4a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ai-ku/upos_2014", "max_issues_repo_path": "papers/coling2014/substitute.tex", "max_line_length": 82, "max_stars_count": 4, "max_stars_repo_head_hexsha": "27d610318a0c777e2ca88b1ab2de5aa48f5a399f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ai-ku/upos", "max_stars_repo_path": "papers/cl2012/cl/substitute.tex", "max_stars_repo_stars_event_max_datetime": "2019-05-18T11:35:02.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-24T11:27:18.000Z", "num_tokens": 1671, "size": 6153 }
%\documentclass[a4paper,10pt]{article}
%\documentclass[a4paper,10pt,landscape]{article}
\documentclass[10pt]{article}
%\usepackage[landscape, a4paper, margin=20pt]{geometry}
\usepackage[a4paper, margin=20pt]{geometry}
%\usepackage[utf8]{inputenc}
%\usepackage[square,sort,comma,numbers]{natbib}
%\usepackage[backend=biber,autocite=inline,style=authoryear]{biblatex}
\usepackage[backend=biber,autocite=inline]{biblatex}
\addbibresource{mybib21.bib}
%\usepackage{a4wide}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{listings}
\usepackage{color}
\usepackage{enumerate}
%\usepackage{IEEEtrantools}
%\usepackage[redeflists]{IEEEtrantools}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage[section]{placeins} %no trans-sectional figures!
\usepackage{wrapfig, framed, caption}
\usepackage{slashbox}
\usepackage{booktabs} % For \toprule, \midrule and \bottomrule
\usepackage{siunitx} % Formats the units and values
\usepackage{pgfplotstable} % Generates table from .csv
% Setup siunitx:
\sisetup{
round-mode = places, % Rounds numbers
round-precision = 2, % to 2 places
}
\usepackage{hyperref}
\hypersetup{linktocpage, linktoc=all,
%colorlinks=true,
%linkcolor=blue,
}
\usepackage{lipsum}
%\usepackage[onehalfspace]{setspace}
\usepackage{setspace}

% Basic data
\newcommand{\N}{\mathbb{N}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\ASSIGNMENT}{2}
\newcommand{\B}{\{-1,1\}}
\newcommand{\E}{\mathbf{E}}
\newcommand{\F}{\mathbb{F}}
\newcommand{\Inf}{\textbf{Inf}}
\newcommand{\I}{\mathbf{I}}
\newcommand{\NS}{\textbf{NS}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\aufgabe}[1]{\item{\bf #1}}
\newcommand{\bvec}[1]{\mathbf{#1}}
\newcommand{\bv}[1]{\mathbf{#1}}
\newcommand{\ceil}[1]{\lceil{#1}\rceil}
\newcommand{\floor}[1]{\lfloor{#1}\rfloor}
\newcommand{\gt}{>}
\newcommand{\half}[1]{\frac{#1}{2}}
\newcommand{\lt}{<}
\newcommand{\tuple}[1]{\langle #1 \rangle}
\newcommand{\suftab}{\text{suftab}}

\setlength{\parskip}{1.0em}
\setlength{\parindent}{1em}

\lstset{
%basicstyle=\footnotesize,
%basicstyle=\ttfamily\footnotesize,
%basicstyle=\ttfamily\small,
%basicstyle=\ttfamily\scriptsize,
frame=single,
%numbers=none,
%numbersep=5pt,
numberstyle=\tiny,
showspaces=false,
showstringspaces=false,
tabsize=2,
breaklines=true,
%escapeinside={#*}{*#},
escapeinside={$}{$},
%escapeinside={*\&}{\&*},% if you want to add LaTeX within your code
%mathescape=true,
%language=C++
}

\theoremstyle{definition}
\newtheorem{mydef}{Definition}[section]
\theoremstyle{remark}
\newtheorem{remark}{Remark}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
%\newtheorem{thm}{Theorem}[mydef]
\newtheorem{lemma}{Lemma}[section]
%\newtheorem{lemma}{Lemma}[mydef]

\begin{document}
\renewcommand{\thesubsection}{\thesection.\alph{subsection}}

% Document title
\begin{titlepage}
\title{Stitching Chromatin Puzzle with Hi-C}
\author{Yiftach Kolb}
%\author{\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_}
\date{\today}
\end{titlepage}
\maketitle

\section{Main Assumptions and Ideas}
\begin{itemize}
\item{} Two blocks of the Hi-C matrix that should be aligned together
create a pattern that 'looks right', and when they shouldn't be
adjacent, or are adjacent in the wrong orientation/side, it 'looks
wrong'.
\item{} If we could indeed say whether every pair should(n't) be
adjacent, we can patch together the chromosome using a greedy
algorithm. This reduces the problem from a hard TSP situation to an
$O(n^2)$ one.
\item{} How do we achieve step 1?
We can come up with a good scoring scheme to serve as a distance
'metric' or something, but probably a better way is to train a
classifier.
\item{} The Classifier. Maybe any classifier can do the job, but
remember that Hi-C matrices are basically images $\Rightarrow$ NNs or
similar ML techniques are very good at image classification.
\item{} What do we train/test on? Take Hi-C data that is correct and
cut it into pieces. We know the order of the pieces, so we can train
a classifier to predict whether two pieces belong together.
\item{} Basically we rearrange blocks of the Hi-C matrix. Cut
portions (probably along the diagonal) and feed these to the NN to
train on.
\end{itemize}

\subsection{}
%\begin{figure}[htb!]
%\begin{framed}
%\includegraphics[width=\textwidth]{./images/plot00.jpg}
%\caption{Example where ranking fails but scoring works}
%\label{fig:rankvsscore}
%\end{framed}
%\end{figure}

% references
\section{References}
%\nocite{zhao2020npf}
\printbibliography
\listoffigures
\listoftables
\end{document}
{ "alphanum_fraction": 0.7341745081, "avg_line_length": 25.4130434783, "ext": "tex", "hexsha": "118fd4841cd9a0ccd1a5e85f3a794e08192e2fd6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f8392aba7deb63aa85f3d137ef81dea1bb742b41", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "zelhar/mg21", "max_forks_repo_path": "hic/stitching_chromatin_puzzle_with_hi-c.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f8392aba7deb63aa85f3d137ef81dea1bb742b41", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "zelhar/mg21", "max_issues_repo_path": "hic/stitching_chromatin_puzzle_with_hi-c.tex", "max_line_length": 120, "max_stars_count": null, "max_stars_repo_head_hexsha": "f8392aba7deb63aa85f3d137ef81dea1bb742b41", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "zelhar/mg21", "max_stars_repo_path": "hic/stitching_chromatin_puzzle_with_hi-c.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1442, "size": 4676 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % % HIGZ User Guide -- LaTeX Source % % % % Chapter: The inquiry functions % % % % Editor: Michel Goossens / CN-AS % % Last Mod.: 9 July 1993 oc % % % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{The inquiry functions} \index{inquiry functions} \section{Inquiry the current attributes values} \index{attributes!inquire values} \Shubr{IGQ}{(PNAME,*RVAL*)} \Action This routine inquires the value of attribute \Lit{PNAME} and returns in into \Lit{RVAL}. \Pdesc \begin{DLtt}{1234567} \item[PNAME] Attribute name \item[RVAL] Returned value. See the description below. \end{DLtt} \index{fill area!interior style!current value} \index{fill area!style index!current value} \index{fill area!colour index!current value} \index{polyline!type!current value} \index{polyline!width!current value} \index{polyline!colour index!current value} \index{polymarker!type!current value} \index{polymarker!scale factor!current value} \index{polymarker!colour index!current value} \index{text!colour index!current value} \index{text!alignment!current value} \index{text!character height!current value} \index{text!angle!current value} \index{text!font!current value} \index{text!precision!current value} \index{text!width!current value} \index{axis!tick marks size!current value} \index{axis!labels size!current value} \index{axis!labels offset!current value} \index{box!border!current value} \index{arc!border!current value} \clearpage \begin{Tabhere} \begin{tabularx}{\textwidth}{|c|X|} \hline \multicolumn{1}{|c|}{\tt PNAME} & \multicolumn{1}{c|}{\Lit{RVAL} description} \\ \hline '\Sind{FAIS}' & RVAL(1)=Fill Area Interior Style (0,1,2,3) \\ '\Sind{FASI}' & RVAL(1)=Fill Area Style Index \\ '\Sind{LTYP}' & RVAL(1)=Line TYPe \\ '\Sind{BASL}' & RVAL(1)=BAsic Segment Length \\ '\Sind{LWID}' & RVAL(1)=Line WIDth \\ '\Sind{MTYP}' & RVAL(1)=Marker TYPe \\ '\Sind{MSCF}' & RVAL(1)=Marker SCale Factor \\ '\Sind{PLCI}' & RVAL(1)=PolyLine Colour Index \\ '\Sind{PMCI}' & RVAL(1)=PolyMarker Colour Index \\ '\Sind{FACI}' & RVAL(1)=Fill Area Colour Index \\ '\Sind{TXCI}' & RVAL(1)=TeXt Colour Index \\ '\Sind{TXAL}' & RVAL(1)=Alignment horizontal RVAL(2)=Alignment vertical \\ '\Sind{CHHE}' & RVAL(1)=CHaracter HEight \\ '\Sind{TANG}' & RVAL(1)=Text ANGle \\ '\Sind{TXFP}' & RVAL(1)=TeXt Font RVAL(2)=TeXt Precision \\ '\Sind{TMSI}' & RVAL(1)=Tick Marks SIze (in \WC) \\ '\Sind{LASI}' & RVAL(1)=LAbels SIze (in \WC) \\ '\Sind{LAOF}' & RVAL(1)=LAbels OFfset \\ '\Sind{PASS}' & RVAL(1)=IGTEXT Width \\ '\Sind{CSHI}' & RVAL(1)=IGTEXT Shift \\ '\Sind{BORD}' & RVAL(1)=Border for IGBOX, IGFBOX and IGARC (0=No , 1=Yes) \\ '\Sind{BARO}' & RVAL(1)=IGHIST or IGRAPH BAR charts Offset (\%) \\ '\Sind{BARW}' & RVAL(1)=IGHIST or IGRAPH BAR charts Width (\%) \\ '\Sind{AWLN}' & RVAL(1)=Axis Wire LeNght \\ '\Sind{DIME}' & RVAL(1)=2D or 3D \\ '\Sind{NCOL}' & RVAL(1)=Number of entry in the COLour map. \\ '\Sind{RGB }' & RVAL(1)=Index (Input) RVAL(2)=Red RVAL(3)=Green RVAL(4)=Blue \\ \hline \end{tabularx} \caption{Description of the \protect\Rind{IGQ} parameters} \label{tab-IGQ} \end{Tabhere} \newpage \section{General inquiry function} \Shubr{IGQWK}{(IWKID,PNAME,RVAL*)} \Action This routine inquires the values of attribute \Lit{PNAME} and returns it into \Lit{RVAL}. \Pdesc \begin{DLtt}{1234567} \item[IWKID] Workstation identifier. \item[PNAME] Attribute name. \item[RVAL] Returned value. See the description below. 
\end{DLtt} \begin{Tabhere} \begin{center} \begin{tabular}{|c|l|c|} \hline \multicolumn{1}{|c|}{\tt PNAME} & \multicolumn{1}{c|}{\Lit{RVAL} description} & \multicolumn{1}{c|}{\Lit{RVAL} dimension} \\ \hline '\Sind{MXDS}' & Maximal display surface ({\tt XMAX YMAX}) & 2 \\ '\Sind{NTNB}' & Current {\tt NT} number & 1 \\ '\Sind{NTWN}' & Current window parameter & 4 \\ '\Sind{NTVP}' & Current viewport parameter & 4 \\ '\Sind{DVOL}' & Display volume in 3D & 3 \\ '\Sind{ACTI}' & 1. if IWKID is active, 0. if not & 1 \\ '\Sind{OPEN}' & 1. if IWKID is open, 0. if not & 1 \\ '\Sind{NBWK}' & Number and list of open workstations & 11 \\ '\Sind{2BUF}' & 1. if the double buffer is on, 0. if not & 11 \\ '\Sind{HWCO}' & Number of colours supported by the hardware & 11 \\ '\Sind{WIID}' & Window identifier associated to IWKID. & 1 \\ \hline \end{tabular} \end{center} \caption{Description of the \protect\Rind{IGQWK} parameters} \label{tab-IGQWK} \end{Tabhere}
{ "alphanum_fraction": 0.5800038979, "avg_line_length": 40.0859375, "ext": "tex", "hexsha": "f23c211b81b3943fbe7fa1a969b07c7cd232c0a9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_path": "higzhplot/higzreq.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_path": "higzhplot/higzreq.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_path": "higzhplot/higzreq.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "num_tokens": 1632, "size": 5131 }
\section{Software architecture}
This section explains the application's internal structure.

\subsection{Logical architecture}
The application's architecture is multilayered: it uses the
widespread three-tier architecture. This is a clear explanation of
the three-tier architecture:

\begin{quote}
Three-tier architecture is a client-server software architecture
pattern in which the user interface (presentation), functional
process logic (``business rules''), computer data storage, and data
access are developed and maintained as independent modules, most
often on separate platforms. \cite{multitier}
\end{quote}

In the case of this project, the presentation layer is the most
important one and holds the majority of the code. The app focuses a
lot on productivity, so most of GradeCalc's features live in its
interface. The domain layer contains the app's logic, and the data
layer consists of the database structure, which, thanks to Firebase,
does not require any code.

% \vfill
% \begin{figure}[H]
% \center
% \includegraphics[width=0.25\columnwidth]{media/layers.pdf}
% \caption{Three tier layers.}
% \end{figure}
% \vfill

\input{sections/patterns}

% \begin{titlebox}{TODO}
% (3 layers, drawing, explanation and the patterns) three-tier architecture
% \end{titlebox}

% -----------------------------------------------------------------------------------
% -----------------------------------------------------------------------------------
% -----------------------------------------------------------------------------------
% -----------------------------------------------------------------------------------

\newpage
\subsection{Physical architecture}

% \begin{titlebox}{TODO}
% Fill content (the 3 layers are in the front-end, Firebase is separate, also Netlify) (explain how it connects)
% \end{titlebox}

The physical architecture is heavily influenced by the choice of
using Firebase. Doug Stevenson explains how Firebase shapes your
app's architecture very well in his Medium article ``What is
Firebase? The complete story, abridged.''~\cite{firebase-article};
this is an extract from his article:

\begin{quote}
\onehalfspacing
Firebase is a toolset to “build, improve, and grow your app”, and the
tools it gives you cover a large portion of the services that
developers would normally have to build themselves, but don’t really
want to build, because they’d rather be focusing on the app
experience itself. This includes things like analytics,
authentication, databases, configuration, file storage, push
messaging, and the list goes on. The services are hosted in the
cloud, and scale with little to no effort on the part of the
developer.
\end{quote}

\begin{quote}
\onehalfspacing
This is different than traditional app development, which typically
involves writing both frontend and backend software. The frontend
code just invokes API endpoints exposed by the backend, and the
backend code actually does the work. However, with Firebase products,
the traditional backend is bypassed, putting the work into the
client.
\end{quote}

\vfill
\begin{figure}[H]
\center
\includegraphics[width=0.95\columnwidth]{media/diagrams/firebase-diagram.png}
\caption{Traditional vs Firebase architecture. \cite{firebase-article}}
\label{fig:firebase-diagram}
\end{figure}
\vfill

\clearpage\newpage\noindent
The Firebase suite offers many individual products
(Fig. \ref{fig:firebase-products}). The ones that this project uses
are authentication and database.
\begin{figure}[H] \center \includegraphics[width=0.95\columnwidth]{media/firebase-products-cropped.png} \caption{Firebase products. \cite{firebase-article}} \label{fig:firebase-products} \end{figure} % GradeCalc uses Google Firebase as the back-end, from all the features it offers, GradeCalc uses authentication and database. GradeCalc also uses Algolia, a third-party service, to perform searches. There's a cron job\footnote{A cron job is a scheduled task that runs periodically.} in Heroku that runs every day to update the searchable information in Algolia. Notice that using Firebase and Algolia allows to not have any code in the back-end, so practically speaking GradeCalc has no back-end code. \noindent The 3 application layers are in the front-end. There we use Gulp to generate the needed and optimized static HTML, CSS, and JS files. These optimized files are hosted in Netlify. \vfill \begin{figure}[H] \center \includegraphics[width=0.85\columnwidth]{media/diagrams/architecture.pdf} \caption{GradeCalc architecture} \label{fig:architecture_diagram} \end{figure} \vfill % \vfill % \begin{figure}[H] % \center % \includegraphics[width=0.5\columnwidth]{media/diagrams/MVC-Process.pdf} % \caption{The model, view, and controller pattern relative to the user.\cite{mvc-diagram}} % \label{fig:mcv-diagram} % \end{figure} % \vfill \clearpage\newpage\noindent GradeCalc is a Progressive Web App (PWA) instead of a native app. This is how Mozilla Web Docs defines PWAs \cite{pwa-mozilla}: PWAs are web apps developed using a number of specific technologies and standard patterns to allow them to take advantage of both web and native app features. There are some key principles a web app should try to observe to be identified as a PWA. It should be: \begin{itemize}[noitemsep] \item \textbf{Discoverable}, so the contents can be found through search engines. \item \textbf{Installable}, so it can be available on the device's home screen or app launcher. \item \textbf{Linkable}, so you can share it by simply sending a URL. \item \textbf{Network independent}, so it works offline or with a poor network connection. \item \textbf{Progressive}, so it's still usable on a basic level on older browsers, but fully-functional on the latest ones. \item \textbf{Re-engageable}, so it's able to send notifications whenever there's new content available. \item \textbf{Responsive}, so it's usable on any device with a screen and a browser—mobile phones, tablets, laptops, TVs, refrigerators, etc. \item \textbf{Safe}, so the connections between the user, the app, and your server are secured against any third parties trying to get access to sensitive data. \end{itemize} \noindent Offering these features and making use of all the advantages offered by web applications can create a compelling, highly flexible offering for your users and customers. % \subsubsection*{What is a Single-page application?} % A single-page application (SPA) is a web application or website that interacts with the web browser by dynamically rewriting the current web page with new data from the webserver, instead of the default method of the browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app. % In a SPA, all necessary HTML, JavaScript, and CSS code is either retrieved by the browser with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. 
The page does not reload at any point in the process, nor does it transfer control to another page, although the location hash or the HTML5 History API can be used to provide the perception and navigability of separate logical pages in the application. \cite{spa} % \subsubsection*{What is a Progressive web app?} % Progressive Web Apps are web applications that have been designed so they are capable, reliable, and installable. These three pillars transform them into an experience that feels like a native application. \cite{pwa-pillars} % Where: % \begin{itemize}[noitemsep] % \item \textbf{Capable} means that the PWA feels as powerful/capable as a native app. % \item \textbf{Reliable} means that the PWA feels fast and dependable regardless of the network. % \item \textbf{Installable} means that the PWA runs in a standalone window instead of a browser tab % \end{itemize} % \subsubsection*{Why PWA instead of a native app?} GradeCalc is PWA mainly \textbf{because it saves up a lot of development time} at the expense of some, really specific, capabilities. A PWA can run in any of the most popular operating systems (Android, iOS, Windows, macOS, and Linux), contrary to a native app that only runs in its respective OS. So, the same code will work everywhere. % None of the app requirements (\ref{chap:requirements}) are exclusive to native apps, and some are easily achieved with a PWA. % The main downside is that iOS users won't fully enjoy the web app\cite{pwa-ios} because Safari iOS doesn't implement some of the APIs that PWAs use, like Push Notifications. As Chris Love explains \cite{pwa-ios}: % \begin{displayquote} % Sure there are limitations to for Progressive Web Apps on iOS, but they are not deal breakers. Many of the most requested features have at least some form of fallback solution. It may not provide a comparable user experience the native web platform API or service offers. % \end{displayquote}
{ "alphanum_fraction": 0.7446078974, "avg_line_length": 59.8741721854, "ext": "tex", "hexsha": "6b42b1e08c5e663f808b05133bd28c49ded31556", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0fa78d5709b31024bafdfd0428c972cf0cec3ffb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mauriciabad/TFG", "max_forks_repo_path": "sections/architecture.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0fa78d5709b31024bafdfd0428c972cf0cec3ffb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mauriciabad/TFG", "max_issues_repo_path": "sections/architecture.tex", "max_line_length": 517, "max_stars_count": null, "max_stars_repo_head_hexsha": "0fa78d5709b31024bafdfd0428c972cf0cec3ffb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mauriciabad/TFG", "max_stars_repo_path": "sections/architecture.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2033, "size": 9041 }
\section{Progress} %%% % What is done % What is left to do % Mention descaling due to overly optimistic assumptions - Python bindings for Headless webkit had been phased out %%% In general, steady progress is being made. Most of the lower level modules are written and tested, and there is significant progress on the modules that are not finished yet. Integration of all the modules, along with the associated testing, will be challenging, but certainly doable in the few weeks that are left. Below is a checklist from a high level that shows the current progress. All coding activities are at the top, followed by integration and testing. Further details follow the checklist. \begin{checklist} \item Framing of data \item Reassembly of stream (part of framing) \checkeditem Generation of headers \checkeditem Sequencing \checkeditem Encoding as URL \checkeditem Encoding as Cookie \checkeditem Encoding as BMP \checkeditem Encoding as JPG \checkeditem Encoding as PNG \checkeditem Image gallery website pieces \item Integrating with Apache \item Integrating with Headless Webkit \item Unit testing of framing \item Unit testing of reassembly \checkeditem Unit testing of headers \checkeditem Unit testing of sequencing \checkeditem Unit testing of encoding schemes \item End-to-end testing with regular browser with packet capture \item End-to-end testing with Tor \item Performance testing design \item Performance testing \end{checklist} How the framing is going to work is being discussed and overall decisions have been made. There are some subtleties that need to be worked out prior to integration. Connection establishment needs to be finalized. There is discussion of details related to authentication and keeping track of multiple connections ongoing, but nearly finalized. Actual integration on the client-side and server-side needs to be done as well. Since additional encoding schemes can be added easily by writing a new encoder, further schemes are being considered. SVG in particular is being looked at as something that could be added. Overall tests need to be created as well as performed. This will provide performance information, to characterize the overhead introduced by this layer within the stack.
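As a rough illustration of what one of the image-based encoders has
to do (this is not the project's actual code), the Python sketch
below packs a length-prefixed byte stream into the pixels of a
lossless image. Saved as BMP or PNG, the bytes survive a round trip;
a JPG encoder needs a more robust mapping because of lossy
compression.

\begin{verbatim}
# Hypothetical sketch of a lossless image encoder, using Pillow.
import math
from PIL import Image

def encode(data: bytes) -> Image.Image:
    framed = len(data).to_bytes(4, "big") + data      # length prefix
    framed += b"\x00" * (-len(framed) % 3)            # whole RGB pixels
    side = math.ceil(math.sqrt(len(framed) // 3))     # square canvas
    framed += b"\x00" * (side * side * 3 - len(framed))
    return Image.frombytes("RGB", (side, side), framed)

def decode(img: Image.Image) -> bytes:
    raw = img.tobytes()
    length = int.from_bytes(raw[:4], "big")
    return raw[4:4 + length]
\end{verbatim}

Keeping the length prefix inside the pixel data makes the decoder
independent of the image dimensions, which is convenient when the
carrier image is resized to look like ordinary gallery content.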
{ "alphanum_fraction": 0.7982417582, "avg_line_length": 52.9069767442, "ext": "tex", "hexsha": "619995abead469f1966e46978222cd2f975041a3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-17T16:57:23.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-17T16:57:23.000Z", "max_forks_repo_head_hexsha": "eb9004fb3e40e760bb9add772340a5d5805a7558", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ben-jones/facade", "max_forks_repo_path": "docs/ProgressReport/Progress.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eb9004fb3e40e760bb9add772340a5d5805a7558", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ben-jones/facade", "max_issues_repo_path": "docs/ProgressReport/Progress.tex", "max_line_length": 316, "max_stars_count": null, "max_stars_repo_head_hexsha": "eb9004fb3e40e760bb9add772340a5d5805a7558", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ben-jones/facade", "max_stars_repo_path": "docs/ProgressReport/Progress.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 476, "size": 2275 }
%------------------------------------------------------------------------------- % @author: Miguel Ramos Pernas % @email: [email protected] %------------------------------------------------------------------------------- % % Description: % % Test the 'smartboa' style % %------------------------------------------------------------------------------- \input{../phys_symbols.tex} % % Test general settings % \section{Introduction}\label{sec:introduction} \begin{frame}{Introduction} This is the introduction \TwoColSlide{ \begin{block}{This is block 1} Block 1 \end{block} }{ \begin{block}{This is block 2} Block 2 \end{block} } \end{frame} % % Test the symbols % \section{Symbols}\label{sec:symbols} \begin{frame}{Some symbols} Magnitudes:\\ \energy, \DOCA, \IP, \pt Units:\\ \MeV, \MeVc, \MeVcc, \GeVcc, \TeVcc, \m, \cm, \mm, \fb, \ifb, \CL, \Hz, \MHz, \s Particles:\\ \elp, \elm, \mup, \mum, \taup, \taum, \pip, \pim, \Ks, \Kl, \Kp, \Km, \Kpm, \Sigmap, \Lz, \Dpm, \Lcpm, \Bd, \Bs, \Bcpm, \Lb, \Az Decays:\\ \decay{\Ks}{\mup\mum}, \decay{\Bs}{\mup\mum}, ... Branching fractions:\\ \brof{\decay{\Ks}{\mup\mum}}, \brof{\decay{\Bs}{\mup\mum}}, ... \end{frame} % % Test the block-related features % \section{SBlock}\label{sec:sblock} \begin{frame}{SBlock} \begin{sblock}[bgtitle=yellow,fgtitle=red,bgbody=yellow!30,fgbody=red!30,width=0.8\textwidth]{SBlock with custom width} Something written inside \end{sblock} \begin{sblock}{SBlock with default arguments} Something written inside \end{sblock} \end{frame} % % Test the itemize-related features % \section{Itemize}\label{sec:itemize} \begin{frame}{Itemize} This is an itemize environment\\ \begin{itemize} \item First item \item second item \end{itemize} This is a sitemize environment\\ \begin{sitemize}[margin=30pt,sitemsep=0.4cm] \item First item \item Second item \end{sitemize} This is a check-list\\ \begin{sitemize} \item[\todo] to do \item[\done] done \item[\wontdo] will not do \end{sitemize} \end{frame}
{ "alphanum_fraction": 0.5761336516, "avg_line_length": 22.2872340426, "ext": "tex", "hexsha": "3c54b140dee7367b5f8e778fe6608a24e5ec0f53", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d1d8b1890fba976e07b1459b497dfa356fdae9f2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mramospe/Latex", "max_forks_repo_path": "beamer/test.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d1d8b1890fba976e07b1459b497dfa356fdae9f2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mramospe/Latex", "max_issues_repo_path": "beamer/test.tex", "max_line_length": 120, "max_stars_count": null, "max_stars_repo_head_hexsha": "d1d8b1890fba976e07b1459b497dfa356fdae9f2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mramospe/Latex", "max_stars_repo_path": "beamer/test.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 685, "size": 2095 }
\chapter{Conclusions and Future Work}
\label{chpr:conclusions}

Bitcoin and similar systems make trustless timestamping possible. A
user can use his own transactions to create such proofs. This
represents the highest level of security for a timestamp, relying
only on the functioning of the decentralized network. OpenTimestamps
defines a common standard to formalize timestamps. In addition, it
provides a solution to the cost and scalability issue: its public
calendars allow anyone to timestamp for free in a trust-minimized
setting.

Elliptic curve commitments can improve OpenTimestamps by giving the
possibility to timestamp at zero marginal cost. If confined to
Bitcoin, they give rise to two practical techniques:
\textit{pay-to-contract} if the commitment is done using the payee's
public key, \textit{sign-to-contract} if the commitment is included
in the signature. The first is viable but leads the user out of a
BIP32 logic, and this may compromise his funds in case of unexpected
malfunctioning; the second does not involve this kind of risk, hence
it should be preferred. Thanks to our integration with the Bitcoin
wallet Electrum, \textit{sign-to-contract} can be tested and is
easily accessible.

To push this work further, the next step should be the inclusion of
\verb|OpSecp256k1Commitment| in the python-opentimestamp library.
However, with segwit, proofs double in size and it is harder to
retrieve the information needed to independently create the
timestamps: this is a problem that deserves further research to
assess how best to address it. Further research could then go in
different directions. One path would be the definition of a
reasonable set of rules to allow simple users to help the calendar by
performing external timestamping when signing their own transactions.
Another one could consist of improving the Electrum experience, by
adding an RPC to the Electrum server to retrieve the WTXID Merkle
tree so as to independently complete \textit{sign-to-contract}
proofs, or by embedding the possibility to cooperate with calendars
providing external timestamps. Deeper study of elliptic curve
commitments to examine applications beyond timestamping
\cite{TapRoot} is also a promising avenue for research.
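The Electrum-server idea above reduces to a standard Merkle inclusion
check: once the wallet knows the WTXID Merkle tree of the block, it
can verify that its own transaction, and hence the commitment inside
its signature, is covered by the block. The sketch below is only an
illustration of that check in Python using Bitcoin's double SHA-256;
actual OpenTimestamps proofs encode the path as a sequence of
append, prepend and hash operations rather than in this simplified
form.

\begin{verbatim}
# Illustrative sketch only: verify a Merkle inclusion path with
# Bitcoin-style double SHA-256. Not taken from python-opentimestamps.
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_from_path(leaf: bytes, siblings, index: int) -> bytes:
    """leaf: 32-byte hash; siblings: bottom-up sibling hashes;
    index: position of the leaf in the tree."""
    h = leaf
    for sibling in siblings:
        if index % 2 == 0:        # current node is a left child
            h = sha256d(h + sibling)
        else:                     # current node is a right child
            h = sha256d(sibling + h)
        index //= 2
    return h

# The resulting root can then be compared with the value committed in
# the block, completing the timestamp proof for the signed transaction.
\end{verbatim}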
{ "alphanum_fraction": 0.8202146691, "avg_line_length": 117.6842105263, "ext": "tex", "hexsha": "cc90ad1367db22e529e3dcf78ccdbbbd598da583", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-02-19T09:36:36.000Z", "max_forks_repo_forks_event_min_datetime": "2018-04-06T17:48:54.000Z", "max_forks_repo_head_hexsha": "d5754ae5c05f110e1fba115dc011f240878933f3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "LeoComandini/Thesis", "max_forks_repo_path": "Chapters/Conclusions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d5754ae5c05f110e1fba115dc011f240878933f3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "LeoComandini/Thesis", "max_issues_repo_path": "Chapters/Conclusions.tex", "max_line_length": 289, "max_stars_count": 13, "max_stars_repo_head_hexsha": "d5754ae5c05f110e1fba115dc011f240878933f3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "LeoComandini/Thesis", "max_stars_repo_path": "Chapters/Conclusions.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-20T10:25:26.000Z", "max_stars_repo_stars_event_min_datetime": "2018-04-09T03:42:55.000Z", "num_tokens": 454, "size": 2236 }
\documentclass[]{book} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={A BAE Book Example}, pdfauthor={BAE Faculty}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{natbib} \bibliographystyle{apalike} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \usepackage{longtable,booktabs} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} 
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{A BAE Book Example} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{BAE Faculty} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{2018-12-18} \usepackage{booktabs} \usepackage{amsthm} \makeatletter \def\thm@space@setup{% \thm@preskip=8pt plus 2pt minus 4pt \thm@postskip=\thm@preskip } \makeatother \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage[table]{xcolor} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{amsthm} \newtheorem{theorem}{Theorem}[chapter] \newtheorem{lemma}{Lemma}[chapter] \theoremstyle{definition} \newtheorem{definition}{Definition}[chapter] \newtheorem{corollary}{Corollary}[chapter] \newtheorem{proposition}{Proposition}[chapter] \theoremstyle{definition} \newtheorem{example}{Example}[chapter] \theoremstyle{definition} \newtheorem{exercise}{Exercise}[chapter] \theoremstyle{remark} \newtheorem*{remark}{Remark} \newtheorem*{solution}{Solution} \begin{document} \maketitle { \setcounter{tocdepth}{1} \tableofcontents } \chapter{Introduction}\label{introduction} This is a \emph{sample} book written in \textbf{Markdown}. This is a great package written by \href{https://bookdown.org/yihui/bookdown/}{Yihui Xie}. I am using his template, and added some additional info I thought was useful along the way! This is really a very succint tutorial, so take it as an introduction to your discovery of Reproducible research!! The \textbf{bookdown} package can be installed from CRAN or Github: \begin{Shaded} \begin{Highlighting}[] \KeywordTok{install.packages}\NormalTok{(}\StringTok{"bookdown"}\NormalTok{)} \CommentTok{# or the development version} \CommentTok{# devtools::install_github("rstudio/bookdown")} \end{Highlighting} \end{Shaded} Remember each Rmd file contains one and only one chapter, and a chapter is defined by the first-level heading \texttt{\#}. To compile this example to PDF, you need XeLaTeX. 
You are recommended to install TinyTeX (which includes XeLaTeX): \url{https://yihui.name/tinytex/}. \chapter{Introduction: Text features}\label{intro} The markdown syntax is very simple and this is one of the reasons why it is so attractive. You can find all the syntax with \href{https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf}{this R Markdown reference document}, but I thought I would go through them as I have learned several tricks I could not easily find in the document. \section{Paragraph Breaks and Forced Line Breaks}\label{paragraph-breaks-and-forced-line-breaks} To insert a break between paragraphs, include a single completely blank line. To force a line break, put \emph{two} blank spaces at the end of a line. \begin{verbatim} To insert a break between paragraphs, include a single completely blank line. To force a line break, put _two_ blank spaces at the end of a line. \end{verbatim} \section{Headers}\label{headers} The character \texttt{\#} at the beginning of a line means that the rest of the line is interpreted as a section header. The number of \texttt{\#}s at the beginning of the line indicates the level of the section (1,2,3, etc.). For instance, \texttt{Components\ of\ R\ Markdown} above is preceded by a single \texttt{\#}, because it is of level 1 but \texttt{Headers} at the start of this paragraph is preceded by \texttt{\#\#\#} because it is of level 3. Up to six levels are understood by Markdown. Do not interrupt these headers by line-breaks. Make sure that in your .Rmd file, you leave a blank line before a header, otherwise, \texttt{pandoc} will not render it as a header. \section{Italics and bold}\label{italics-and-bold} \begin{verbatim} *italics* and _italics_ \end{verbatim} renders \emph{italics} and \emph{italics} \begin{verbatim} **bold** and __bold__ \end{verbatim} renders \textbf{bold} and \textbf{bold} \section{Supbscripts and superscripts}\label{supbscripts-and-superscripts} To write sub- and superscripts, like in NO\textsubscript{3}\textsuperscript{-} or PO\textsubscript{4}\textsuperscript{3-} write as \begin{verbatim} NO~3~^-^ or PO~4~^3-^ \end{verbatim} PO\textsubscript{4}\textsuperscript{3-} does not look as neat as \(PO_4^{3-}\) \begin{verbatim} PO~4~^3-^ does not look as neat as $PO_4^{3-}$ \end{verbatim} but looks more seamless in a normal text because the former does not appear as an \protect\hyperlink{inline-equations}{equation} while the other does. \section{Lists}\label{lists} \begin{verbatim} * unordered list * item 2 + sub-item 1 #4 <spaces> before + + sub-item 2 #4 <spaces> before + \end{verbatim} \begin{itemize} \tightlist \item unordered list \item item 2 \begin{itemize} \tightlist \item sub-item 1 \item sub-item 2 \end{itemize} \end{itemize} \begin{verbatim} 1. ordered list 2. item 2 + sub-item 1 #4 <spaces> before + + sub-item 2 #4 <spaces> before + \end{verbatim} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item ordered list \item item 2 \begin{itemize} \tightlist \item sub-item 1 \item sub-item 2 \end{itemize} \end{enumerate} In the ordered list, there is a subtlety unrevealed in the Rmardown documentation, which is that the numbering \emph{always} increases, and that \emph{only} the number value of the first item matters. So, one cannot have a list in a decreasing order (which is too bad because when one makes a list of his/her publications, it is nice to have a decreasing order\ldots{}), and the only number that matters is the first one. So this code \begin{verbatim} 5. ordered list 7. item 2 2. 
item 2 # blank line for list to take into effect b. sub-item 1 #4 <spaces> before b a. sub-item 2
\end{verbatim}

renders this:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\item ordered list
\item item 2
\item item 2
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\setcounter{enumii}{1}
\tightlist
\item sub-item 1\\
\item sub-item 2
\end{enumerate}
\end{enumerate}

\section{Quotations}\label{quotations}

\begin{quote}
R Markdown is, in particular, both ``free as in beer'' (you will
never pay a dollar for software to use it) and ``free as in speech''
(the specification is completely open to all to inspect).
\end{quote}

This is a quotation from
\href{http://www.stat.cmu.edu/~cshalizi/rmarkdown/}{Dr Shalizi},
whose code starts with a \texttt{\textgreater{}}

\begin{verbatim}
> R Markdown is, in particular, both "free as in beer" etc.
\end{verbatim}

\section{Computer type}\label{computer-type}

This is to differentiate regular text from \texttt{code\ text} so
that both can be easily differentiated: \texttt{R} vs R.

\begin{verbatim}
This is to differentiate regular text from `code text` so that both can be easily differentiated: `R` vs R.
\end{verbatim}

An entire paragraph of code, which is rendered as a ``code box'' in
html (but not in pdf), starts with three ``back-ticks'' and ends the
same way.

\section{Symbols and Special characters}\label{symbols-and-special-characters}

The principal keys, like the alphabet, are understood univocally
across platforms such as Windows, Mac OS, or Linux. However, there
are special characters such as °, ² or µ that have different embedded
codes across the different platforms. For example, if you and your
co-worker work on the same document and one works using Windows and
the other uses Mac, the actual symbol in the code text may not
display the same way on both platforms. For example, if I write
``10 m²'' in an R markdown document and I have added the ² symbol by
typing ``Alt+0178'' on a PC, as this corresponds to the ascii code
for ² in Windows, the same document opened on a Mac will render
``10 m?'', because it cannot interpret the embedded code for
²\ldots{}

Several consequences:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item \textbf{NEVER} use special characters in variable names in
\texttt{R} code,
\item in an R markdown document, use the HTML Number or HTML Name in
the text
\end{enumerate}

\href{https://www.ascii.cl/htmlcodes.htm}{HTML Numbers and Names} can
be found on this \href{https://www.ascii.cl/htmlcodes.htm}{ascii
page} or on this page for
\href{http://www.dionysia.org/html/entities/symbols.html}{greek
letter codes and some other symbols}.

So to type T°C, m², µm, or pε you type like this

\begin{verbatim}
So to type T&deg;C, m&sup2;, &micro;m, or p&epsilon; you type like this
\end{verbatim}

Do not forget to add the ``;'' !!!

\chapter{Editing equations in R Markdown}\label{editing-equations-in-r-markdown}

We do not write nearly as much math in our field as our colleagues in
statistics or math. But we do enough to have a bit of a tutorial. I
have borrowed this entire section from the
\href{http://www.stat.cmu.edu/~cshalizi/rmarkdown/\#math-in-r-markdown}{excellent
tutorial from Dr Shalizi} again.

R Markdown gives you the syntax to render complex mathematical
formulas and derivations, and have them displayed very nicely. Like
code, the math can either be inline or set off (displays).
\hypertarget{inline-equations}{\section{Inline equations}\label{inline-equations}} Inline math is marked off witha pair of dollar signs (\texttt{\$}), as \(\pi r^2\) or \(e^{i\pi}\). \begin{verbatim} Inline math is marked off witha pair of dollar signs (`$`), as $\pi r^2$ or $e^{i\pi}$. \end{verbatim} \section{Set off equations}\label{set-off-equations} In the \href{https://francoisbirgand.github.io/tutorial-RMarkdown_instructions.html}{R markdown tutorial} I have other instructions about equations that are really related to R markdown. I now prefer the way it is done in \texttt{bookdown} much better, particularly for the referencing. This simple LaTeX syntax \begin{verbatim} \begin{equation} H_2O + H_2O \rightleftharpoons OH^- + H_3O^+ \label{eq:autoprotolysis} \end{equation} \end{verbatim} yields \begin{equation} H_2O + H_2O \rightleftharpoons OH^- + H_3O^+ \label{eq:autoprotolysis} \end{equation} Notice that the number \eqref{eq:autoprotolysis} is automatically added as you put in the text \texttt{\textbackslash{}@ref(eq:autoprotolysis)}. Of course, it is possible to write a little more complicated equations. The limit is LaTeX. I have not explored all what there is but here is an example that was useful for me: \begin{verbatim} \begin{equation} H_nA \rightleftharpoons H_{n-1}A^- + H^+ \hspace{0.6cm} K_{a1} = \frac{[H_{n-1}A^-][H^+]}{[H_nA]} \hspace{0.6cm} [H_{n-1}A^-] = [H_nA] \frac {K_{a1}}{[H^+]}\\ H_{n-1}A^- \rightleftharpoons H_{n-2}A^{2-} + H^+ \hspace{0.6cm} K_{a2} = \frac{[H_{n-2}A^{2-}][H^+]}{[H_{n-1}A]} \hspace{0.6cm} [H_{n-2}A^-] = [H_nA] \frac {K_{a1}K_{a2}}{[H^+]^2}\\ \\ \vdots \hspace{6cm} \vdots \hspace{6cm} \vdots\\ \\ HA^{(n-1)-} \rightleftharpoons A^{n-} + H^+ \hspace{0.6cm} K_{an} = \frac{[A^{n-}][H^+]}{[HA^{(n-1)-}]} \hspace{0.6cm} [A^{n-}] = [H_nA] \frac {K_{a1}K_{a2} \ldots K_{an}}{[H^+]^n}\\ \label{eq:HnApoly} \end{equation} \end{verbatim} yields \begin{equation} H_nA \rightleftharpoons H_{n-1}A^- + H^+ \hspace{0.6cm} K_{a1} = \frac{[H_{n-1}A^-][H^+]}{[H_nA]} \hspace{0.6cm} [H_{n-1}A^-] = [H_nA] \frac {K_{a1}}{[H^+]}\\ H_{n-1}A^- \rightleftharpoons H_{n-2}A^{2-} + H^+ \hspace{0.6cm} K_{a2} = \frac{[H_{n-2}A^{2-}][H^+]}{[H_{n-1}A]} \hspace{0.6cm} [H_{n-2}A^-] = [H_nA] \frac {K_{a1}K_{a2}}{[H^+]^2}\\ \\ \vdots \hspace{6cm} \vdots \hspace{6cm} \vdots\\ \\ HA^{(n-1)-} \rightleftharpoons A^{n-} + H^+ \hspace{0.6cm} K_{an} = \frac{[A^{n-}][H^+]}{[HA^{(n-1)-}]} \hspace{0.6cm} [A^{n-}] = [H_nA] \frac {K_{a1}K_{a2} \ldots K_{an}}{[H^+]^n}\\ \label{eq:HnApoly} \end{equation} Again, you see that the numbering of the equation is automatic!! \chapter{Code chunks}\label{code-chunks} So this is one of the great beauties of the \texttt{R\ markdown} platform: You insert text, equations, and code chunks! Code chunks are used to run \texttt{R} code, \texttt{python} code, and others such as \texttt{C} and \texttt{fortran} codes! In \texttt{bookdown}, one also uses them to insert pictures, videos, and tables. \section{Inserting pictures}\label{inserting-pictures} This code: yields: \begin{figure} {\centering \includegraphics[width=1\linewidth]{pictures/diagenesis-diffusion-directions} } \caption{Diffusion fluxes of electron acceptors and all other soil diagenesis processes of a theoretical layered wetland soil}\label{fig:diagenesis-diffusion-directions} \end{figure} So the first thing in the code is its name. 
It then becomes possible to reference the figure using \texttt{Figure\ \textbackslash{}@ref(fig:diagenesis-diffusion-directions)} to say: look at the cool stuff in Figure \ref{fig:diagenesis-diffusion-directions}. Then, there are other settings for the code, which are detailed below. The figure caption is announced with the \texttt{fig.cap=}. \section{Inserting several pictures}\label{inserting-several-pictures} It is also possible to insert several pictures lined up using this code: which yields: \begin{figure} {\centering \includegraphics[width=0.4\linewidth]{pictures/brickwall} \includegraphics[width=0.4\linewidth]{pictures/brick-skyscraper} } \caption{Small and large structures can be built from the addition of bricks, one at a time}\label{fig:brickwall} \end{figure} \section{Inserting videos}\label{inserting-videos} Similarly, it is possible to insert videos from the web using this code: which yields: \label{fig:ATPaseRotation}ATP synthase in action. Obtained with permission from HarvardX \section{Inserting R code}\label{inserting-r-code} And obviously, a nice thing about \texttt{R\ markdown} is to be able to insert \texttt{R} code chunks like the one below to make some pretty complicated figures (the one here is not that much, and is not particularly well written\ldots{}) which yields: \begin{figure} {\centering \includegraphics[width=0.85\linewidth]{bookdown-demo_files/figure-latex/molarfracPO4-1} } \caption{Molar fraction for the conjugate acid forms of the triprotic phosphoric acid in dilute solutions at 25°C}\label{fig:molarfracPO4} \end{figure} \section{Inserting tables}\label{inserting-tables} To me, this is where Rmarkdown is the weakest for now. Tables are not that easy to handle\ldots{} Here is a Chunk code that works with \texttt{pander}. I had to use this because otherwise, it would not render the equations well, but it does not have a caption, which is what I was trying to have anyway. \begin{longtable}[]{@{}cc@{}} \toprule Equilibrium reactions & Log K\tabularnewline \midrule \endhead H\textsuperscript{+} + OH\textsuperscript{-} ⇆ H\textsubscript{2}O & -14.00\tabularnewline H\textsuperscript{+} + e\textsuperscript{-} ⇆ H\textsubscript{2(g)} & 0.00\tabularnewline H\textsuperscript{+} + e\textsuperscript{-} + ¼O\textsubscript{2(g)} ⇆ ½H\textsubscript{2}O & 20.78\tabularnewline \bottomrule \end{longtable} There is another package which I like in many ways, but it is still imperfect: \texttt{KableExtra}. In the code below, it is possible to insert picture with text. The beauty is that I was able to get the caption here and this is very useful. I suppose that for most applications, \texttt{KableExtra} is still the best thing outthere. Again, to reference the table, use \texttt{\textbackslash{}@ref(tab:ElecAllocTab)} to say that table \ref{tab:ElecAllocTab} is very messy!! 
\label{tab:ElecAllocTab}Examples of electron allocations on the C, N, S, and P atoms generating different inorganic and organic molecules relevant to environmental and ecological engineering Nb of e\textsuperscript{-} stored on the atoms C N S P 0 carbon dioxide~\includegraphics{pictures/ElecAlloc_CO2.png} nitr\emph{\textbf{ate}}~\includegraphics{pictures/ElecAlloc_NO3-.png} sulf\emph{\textbf{ate}}~\includegraphics{pictures/ElecAlloc_SO42-.png} phosph\emph{\textbf{ate}}~\includegraphics{pictures/ElecAlloc_PO43-.png} 1 C\#1 pyruvic acid~\includegraphics{pictures/ElecAlloc_pyruvic_acid.png} 2 C\#2 pyruvic acid~\includegraphics{pictures/ElecAlloc_pyruvic_acid.png} carbon monoxide \includegraphics[width=0.70000\textwidth]{pictures/ElecAlloc_CO.png} nitr\emph{\textbf{ite}}~\includegraphics{pictures/ElecAlloc_NO2-.png} sulf\emph{\textbf{ite}}~\includegraphics{pictures/ElecAlloc_SO32-.png} sulfur dioxide ~\includegraphics{pictures/ElecAlloc_SO2.png} 3 C\#1 of glucose~\includegraphics{pictures/ElecAlloc_glucose.png} N\#2 of nitrous oxide~\includegraphics{pictures/ElecAlloc_N2O.png} 4 C\#2 to C\#5 of glucose~\includegraphics{pictures/ElecAlloc_glucose.png} 5 C\#6 of glucose~\includegraphics{pictures/ElecAlloc_glucose.png} dinitrogen~\includegraphics{pictures/ElecAlloc_N2.png} nitrogen monoxide \includegraphics{pictures/ElecAlloc_NO.png} N\#1 of nitrous oxide \includegraphics{pictures/ElecAlloc_N2O.png} 6 C of fatty acid~\includegraphics[width=0.70000\textwidth]{pictures/ElecAlloc_fatty_acid.png} 7 pyruvic acid (C\#3)~\includegraphics{pictures/ElecAlloc_pyruvic_acid.png} 8 methane~\includegraphics[width=0.70000\textwidth]{pictures/ElecAlloc_CH4.png} ammonium~\includegraphics{pictures/ElecAlloc_NH4+.png} ammonia \includegraphics[width=0.70000\textwidth]{pictures/ElecAlloc_NH3.png} amine groups in amino-acids \includegraphics{pictures/ElecAlloc_cysteine.png} dihydrogen sulfide~\includegraphics{pictures/ElecAlloc_H2S.png} hydrogen sulfide \includegraphics{pictures/ElecAlloc_HS-.png} sulfide \includegraphics{pictures/ElecAlloc_S2-.png} thiol groups in organic molecules \includegraphics{pictures/ElecAlloc_cysteine.png} \section{Other important things about code chunks}\label{other-important-things-about-code-chunks} Code chunks (but not inline code) can take a lot of \textbf{options} which modify how they are run, and how they appear in the document. These options go after the initial \texttt{r} and before the closing \texttt{\}} that announces the start of a code chunk. One of the most common options turns off printing out the code, but leaves the results alone: \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ echo=FALSE\}} Another runs the code, but includes neither the text of the code nor its output. \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ include=FALSE\}} This might seem pointless, but it can be useful for code chunks which do set-up like loading data files, or initial model estimates, etc. Another option prints the code in the document, but does not run it: \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ eval=FALSE\}} This is useful if you want to talk about the (nicely formatted) code. Another option on the results of the code is that it generate all results ``as-is'', which is very nice when your code generates mark-up text to be rendered by \texttt{Pandoc}. \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ results="asis"\}} By default, the results of a chunk with have \texttt{\#\#} as a prefix. 
You can remove this by putting \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ comment=FALSE\}} Sometimes, running of the code will generate warnings and messages. These can be turned off in the output by using \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ warning=FALSE,\ message\ =\ FALSE\}} \section{Naming Chunks}\label{naming-chunks} You can give chunks names immediately after their opening, like \texttt{\textasciigrave{}\textasciigrave{}\textasciigrave{}\{r,\ clevername\}}. This name is then used for the images (or other files) that are generated when the document is rendered. \section{Adjusting figure sizes and alignments}\label{adjusting-figure-sizes-and-alignments} These details are discussed in the accompanying written article on instantaneous vs.~interval-average flow data. \subsubsection{\texorpdfstring{``Caching'' Code Chunks (Re-Running Only When Changed)}{Caching Code Chunks (Re-Running Only When Changed)}}\label{caching-code-chunks-re-running-only-when-changed} By default, R Markdown will re-run all of your code every time you render your document. If some of your code is slow, this can add up to a lot of time. You can, however, ask R Markdown to keep track of whether a chunk of code has changed, and only re-run it if it has. This is called \textbf{caching} the chunk. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{summary}\NormalTok{(cars)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## speed dist ## Min. : 4.0 Min. : 2.00 ## 1st Qu.:12.0 1st Qu.: 26.00 ## Median :15.0 Median : 36.00 ## Mean :15.4 Mean : 42.98 ## 3rd Qu.:19.0 3rd Qu.: 56.00 ## Max. :25.0 Max. :120.00 \end{verbatim} One issue is that a chunk of code which hasn't changed itself might call on results of earlier, modified chunks, and then we \emph{would} want to re-run the downstream chunks. There are options for manually telling R Markdown ``this chunk depends on this earlier chunk'', but it's generally easier to let it take care of that, by setting the \texttt{autodep=TRUE} option. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item If you load a package with the \texttt{library()} command, R Markdown isn't smart enough to check whether the package has changed (or indeed been installed, if you were missing it). So that won't trigger an automatic re-running of a cached code chunk. \item To manually force re-running all code chunks, the easiest thing to do is to delete the directory R Markdown will create (named something like \emph{filename}\texttt{\_cache}) which it uses to store the state of all code chunks. \end{enumerate} \subsubsection{Setting Defaults for All Chunks}\label{setting-defaults-for-all-chunks} You can tell R to set some defaults to apply to all chunks where you don't specifically over-ride them. Here are the ones I generally use: This sets some additional options beyond the ones I've discussed, like not re-running a chunk if only the comments have changed (\texttt{cache.comments\ =\ FALSE}), and leaving out messages and warnings. (I'd only recommend suppressing warnings once you're sure your code is in good shape.) I would typically give this set-up chunk itself the option \texttt{include=FALSE}. You can over-ride these defaults by setting options for individual chunks. \subsubsection{More Chunk options}\label{more-chunk-options} See {[}\url{http://yihui.name/knitr/options/}{]} for a complete listing of possible chunk options. 
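The set-up chunk referred to in ``Setting Defaults for All Chunks'' does not appear in this rendering. As a rough sketch only (the chunk name \texttt{setup} and the exact option values are illustrative choices, not necessarily the author's), such a chunk could combine the options discussed in this chapter like this:

\begin{verbatim}
```{r setup, include=FALSE}
# Illustrative defaults applied to all later chunks unless over-ridden:
library(knitr)
opts_chunk$set(
  cache = TRUE,            # only re-run a chunk when it has changed
  autodep = TRUE,          # let knitr track dependencies between cached chunks
  cache.comments = FALSE,  # comment-only edits do not invalidate the cache
  message = FALSE,         # leave out messages
  warning = FALSE          # leave out warnings (once the code is in good shape)
)
```
\end{verbatim}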
\chapter{Referencing and Cited References}\label{referencing-and-cited-references}

\section{Autoreferencing}\label{autoreferencing}

The beauty of the system is that the referencing is all automated.

\begin{itemize}
\tightlist
\item
  To reference an equation, use \texttt{\textbackslash{}@ref(eq:HnApoly)} to obtain equation \eqref{eq:HnApoly}
\item
  To reference a figure, use \texttt{\textbackslash{}@ref(fig:molarfracPO4)} to obtain figure \ref{fig:molarfracPO4}
\item
  To reference a table, use \texttt{\textbackslash{}@ref(tab:ElecAllocTab)} to obtain table \ref{tab:ElecAllocTab}
\end{itemize}

\section{Cited references}\label{cited-references}

I have created and maintained for over 15 years a database written in Microsoft Access of around 2000 articles, most of them photocopied and typed in the database by hand during and after my Ph.D. (yes, I should have a medal!\ldots{}), but I have to admit that this system is past its time\ldots{} I have now switched to Paperpile. There are many other reference managers out there now (\href{https://en.wikipedia.org/wiki/Comparison_of_reference_management_software}{a list and comparison can be found here}). The beauty of them is that they will all export to recognized formats, and this is what we care about here. So no more copy/paste of references from some pdf. We are going into \emph{reproducible research}. This means that all the articles you want to work with will be entered in the reference database of your choice. Make sure you start one ASAP!

In the YAML header, you can now see \texttt{bibliography:\ {[}book.bib,\ packages.bib{]}}. This means that there are files named \texttt{book.bib} and \texttt{packages.bib}, located in the same directory as all the .Rmd files, which \href{https://pandoc.org/MANUAL.html}{Pandoc} will look for when we `knit'. You can access these \texttt{.bib} files in the GitHub directory. It is actually possible to add, within the \texttt{.Rmd} document itself, the text needed for Pandoc to read the references, but unless you only have very few references, this can become very messy quickly. See the \href{http://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html\#bibliographies}{RStudio Bibliographies documentation} for more details.

In short, this file contains a list of articles whose information is coded according to the chosen format. By the way, a lot of the details for the bibliography are available on the \href{http://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html\#bibliographies}{RStudio Bibliographies documentation}. R Markdown will recognize up to 10 different formats. The one Paperpile exports is \texttt{.bibtex}, so this is the one I usually add in the YAML header. The filename extension will determine the \emph{rendering} process, so make sure you have the right extension as well.

So, in the \texttt{.bib} file I have, the first article appears as

\begin{verbatim}
@ARTICLE{Kuhne2009-tn,
  title = "Improving the Traditional Information Management in Natural Sciences",
  author = "K{\"u}hne, Martin and Liehr, Andreas W",
  journal = "Data Sci. J.",
  volume = 8,
  pages = "18--26",
  year = 2009,
  issn = "1683-1470, 0308-9541",
  doi = "10.2481/dsj.8.18"
}
\end{verbatim}

Here is another reference \citep{Maxwell2018-ht}, or this website \citep{Wikipedia_contributors2018-ia}.

The first item after ``ARTICLE\{'' is the unique identifier for the article. The identifier of this article is \texttt{Kuhne2009-tn}.
When one wants to cite this article, one will say something like ``reproducible research has been suggested to become the norm \citep{Kuhne2009-tn}''. You would code it like this:

\begin{verbatim}
"reproducible research has been suggested to become the norm [@Kuhne2009-tn]"
\end{verbatim}

If you want to say that ``Kühne et al. \citeyearpar{Kuhne2009-tn} have shown that etc.'', you would add a ``dash'' just before the ``@'' and code it as such:

\begin{verbatim}
"Kühne et al. [-@Kuhne2009-tn] have shown that etc."
\end{verbatim}

If you want to cite several references, you would add a semicolon between the two, such as in:

\begin{verbatim}
"reproducible research has been suggested to become the norm [@Kuhne2009-tn; @Buckheit1995-ls]"
\end{verbatim}

Notice that among all the fields from the example above, there are \texttt{doi} and \texttt{issn}. \texttt{doi} stands for ``Digital Object Identifier''. \texttt{issn} stands for International Standard Serial Number (ISSN), which is an eight-digit serial number used to uniquely identify a serial publication. The \texttt{doi} value in this example is \texttt{10.2481/dsj.8.18}, which is unique in the world. These \texttt{doi} are not applied only to articles; they are also applied to data. Eventually, all data that will be used in an article will have to have its own \texttt{doi}, and all the codes that are used to analyze the data will refer to this unique \texttt{doi}. This is not quite implemented yet (in 2017), but will likely be by 2022.

\texttt{doi} or \texttt{url} (not added here; it stands for Uniform Resource Locator. Quite a mouthful, really, for what it is: a web address) are not necessarily exported from your reference software by default. Make sure you add this export option, as the \texttt{doi} is almost routinely added in the reference list in most journals.

The reference list of the citations will appear right after you place

\begin{verbatim}
# References
\end{verbatim}

at the bottom of your Rmd document if it is a single Rmd document, or as a separate chapter for a book. When rendering your document, the list will appear automatically afterwards, and if you have in-text notes, these will appear underneath.

\subsection{Link to citations}\label{link-to-citations}

In a single Rmd document (not a book), one very nice feature is to create hyperlinks from the in-text citations to the citations in the reference section. For this, just add \texttt{link-citations:\ true} in the YAML header.

\subsection{CSL and styling of citations}\label{csl-and-styling-of-citations}

CSL stands for Citation Style Language. The \texttt{csl} line is an option setting the citation style to be used in your document. You can comment it out by adding a ``\#'' in front of it, and the default .CSL file will apply without you noticing it. Each journal has its own way of handling how an article/reference should be cited in the text and in the reference section, and there are hundreds of different styles out there\ldots{} You can read lots of details about how all this works on this \href{http://docs.citationstyles.org/en/stable/primer.html}{CSL primer}.

While I was writing this tutorial, I did not specify the citation style at first, and I kept getting, using the same example as before, ``(Kühne and Liehr 2009)'', although I wanted ``(Kühne and Liehr, 2009)'', i.e., with a comma between the authors and the date, because this is the way I always did it and I think it is a lot better this way.
Then I started thinking about the journals for which the inline citations are numbers, sometimes in brackets, sometimes without brackets\ldots{} what a nightmare\ldots{}! Actually it is extremely simple: all this is done automatically when you specify the CSL corresponding to the journal style following which you wish to write. For this, you can pick the *.CSL file you are looking for at \url{https://github.com/citation-style-language/styles} (actually there are too many of them and they are not all displayed). The fastest way is to Google ``{[}journal name{]} CSL'', and you will land on the CSL file you are looking for. I recommend that you click on the \texttt{raw} icon on GitHub, and copy the whole file into a text editor. Warning! You need to make sure that your text editor does not add any weird formatting or append an extension at the end of the file. I had that problem, and I could not see the extension on my computer even though I had turned on the option to display extensions. Save the file in the same directory as your .Rmd.

So, to come back to my struggle to style the inline citation and the ``missing comma'', it turns out that the default CSL file uses a ``\href{http://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html\#citation_styles}{Chicago author-date format}'' (I am not sure what this means exactly), whose in-text styling is ``(author date)'', without the comma\ldots{}!

If you use ``journal-of-hydrology.CSL'', you will see that all of a sudden, after you \texttt{knit}, the commas just appear\ldots{}! Eureka!

If you use ``nature.CSL'' (because we all want to be ready when we publish in Nature!), you will see that there are no more in-line citations, just numbers in superscript, hyphenated when needed, and the references are all in order, with the journal names in italics, the journal volume in bold, the year in parenthesis at the end, etc.! Isn't that wonderful? All without your doing anything, other than adding the citations properly in the text following the guidelines above.

You can, if you want and have a lot of time to waste, take existing CSL files and modify them to have your own custom citation style. Make sure you take an ``independent'' CSL file and modify it. Most CSL files are dependent upon a ``source'' (independent) file, and the code in a dependent CSL file just says how it should slightly alter the independent one. Good instructions on how to do this can be found on the excellent \href{http://docs.citationstyles.org/en/stable/primer.html}{CSL primer}.

\chapter{Final words}\label{final-words}

Now, it is your turn to play!!

\bibliography{book.bib,packages.bib}
\end{document}
{ "alphanum_fraction": 0.7489111556, "avg_line_length": 36.512195122, "ext": "tex", "hexsha": "23a86a73f33ff96d9d11477d1a894d5c3227c2fe", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2f42b505d1e7e9db786f2c874db99924c4041bef", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "francoisbirgand/BAEBookExample", "max_forks_repo_path": "docs/bookdown-demo.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2f42b505d1e7e9db786f2c874db99924c4041bef", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "francoisbirgand/BAEBookExample", "max_issues_repo_path": "docs/bookdown-demo.tex", "max_line_length": 183, "max_stars_count": null, "max_stars_repo_head_hexsha": "2f42b505d1e7e9db786f2c874db99924c4041bef", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "francoisbirgand/BAEBookExample", "max_stars_repo_path": "docs/bookdown-demo.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11367, "size": 37425 }
\section{Some Appendix Section}\label{sec:appendix01}

Appendices provide only two structural levels, \viz, \texttt{\textbackslash{}section} and \texttt{\textbackslash{}subsection}. The numbering of figures, listings, tables, and footnotes is not reset; it continues as usual in the appendix.

\subsection{Some Appendix Subsection}

\lipsum[10]
{ "alphanum_fraction": 0.7651933702, "avg_line_length": 40.2222222222, "ext": "tex", "hexsha": "31967ef74ca98de6c19e0fff5bb0aabcd3cebc2c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "030722f9f817afefea5b81f4bb0bfaa1206eec3a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mxinden/bachelor_thesis", "max_forks_repo_path": "content/appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "030722f9f817afefea5b81f4bb0bfaa1206eec3a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mxinden/bachelor_thesis", "max_issues_repo_path": "content/appendix.tex", "max_line_length": 130, "max_stars_count": null, "max_stars_repo_head_hexsha": "030722f9f817afefea5b81f4bb0bfaa1206eec3a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mxinden/bachelor_thesis", "max_stars_repo_path": "content/appendix.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 88, "size": 362 }
\documentclass{article} \usepackage{fullpage} \usepackage{html} \title{Soot overview/Disassembling classfiles} \author{Raja Vall\'ee-Rai (\htmladdnormallink{[email protected])}{mailto:[email protected]}} \date{March 15, 2000} \begin{document} \maketitle \section{Goals} By the end of this lesson, the student should be able to: \begin{itemize} \item understand what Soot is, and its two main uses \item have Soot correctly installed \item have the CLASSPATH environment variable properly set up \item produce baf, grimp or jimple code for any classfile \end{itemize} \section{Testing your Installation} This is an interactive tutorial. So the first thing you must do is test your installation. This can be done by typing {\tt java soot.Main} at the shell prompt. If your installation is incorrect you should get a class "soot.Main" not found exception. Please refer to the installation instructions which came with the Soot software if this occurs. If your installation is correct you should see something like: \begin{verbatim} ~ $ java soot.Main Soot version 2.0 Copyright (C) 1997-2003 Raja Vallee-Rai and others. All rights reserved. Contributions are copyright (C) 1997-2003 by their respective contributors. See the file 'credits' for a list of contributors. See individual source files for details. Soot comes with ABSOLUTELY NO WARRANTY. Soot is free software, and you are welcome to redistribute it under certain conditions. See the accompanying file 'COPYING-LESSER.txt' for details. Visit the Soot website: http://www.sable.mcgill.ca/soot/ For a list of command line options, enter: java soot.Main --help \end{verbatim} \section{What is Soot?} Soot has two fundamental uses; it can be used as a stand-alone command line tool or as a Java compiler framework. As a command line tool, Soot can: \begin{enumerate} \item dissassemble classfiles \item assemble classfiles \item optimize classfiles \end{enumerate} As a Java compiler framework, Soot can be used as a testbed for developing new optimizations. These new optimizations can then be added to the base set of optimizations invoked by the command line Soot tool. The optimizations that can be added can either be applied to single classfiles or entire applications. Soot accomplishes these myriad tasks by being able to process classfiles in a variety of different forms. Currently Soot inputs two different intermediate representations (classfiles or Jimple code), and outputs any of its intermediate representations. By invoking Soot with the \texttt{--help} option, you can see the output formats: \begin{verbatim} > java soot.Main --help <...snip...> Output Options: -d DIR -output-dir DIR Store output files in DIR -f FORMAT -output-format FORMAT Set output format for Soot J jimple Produce .jimple Files j jimp Produce .jimp (abbreviated Jimple) files S shimple Produce .shimple files s shimp Produce .shimp (abbreviated Shimple) files B baf Produce .baf files b Produce .b (abbreviated Baf) files G grimple Produce .grimple files g grimp Produce .grimp (abbreviated Grimp) files X xml Produce .xml Files n none Produce no output jasmin Produce .jasmin files c class (default) Produce .class Files d dava Produce dava-decompiled .java files -xml-attributes Save tags to XML attributes for Eclipse <...snip...> \end{verbatim} There are six intermediate representations currently being used in Soot: baf, jimple, shimple, grimp, jasmin, and classfiles. A brief explanation of each form follows: \begin{enumerate} \item[baf] a streamlined representation of bytecode. 
Used to inspect Java bytecode as stack code, but in a much nicer form. Has two textual representations (one abbreviated ({\tt .b} files), one full ({\tt .baf} files).) \item[jimple] typed 3-address code. A very convenient representation for performing optimizations and inspecting bytecode. Has two textual representations ({\tt .jimp} files, and {\tt .jimple} files.) \item[shimple] an SSA variation of jimple. Has two textual representations ({\tt .shimp} files, and {\tt .shimple} files.) \item[grimp] aggregated (with respect to expression trees) jimple. The best intermediate representation for inspecting dissassembled code. Has two textual representations ({\tt .grimp} files, and {\tt .grimple} files.) \item[jasmin] a messy assembler format. Used mainly for debugging Soot. Jasmin files end with "{\tt .jasmin}". \item[classfiles] the original Java bytecode format. A binary (non-textual) representation. The usual {\tt .class} files. \end{enumerate} \section{Setting up your {\tt CLASSPATH} and generating a Jimple file} Soot looks for classfiles by examining your {\tt CLASSPATH} environment variable or by looking at the contents of the {\tt -soot-classpath} command line option. Included in this lesson is the {\htmladdnormallink{\tt Hello.java}{Hello.java}} program. Download this file, compile it (using javac or other compilers), and try the following command in the directory where {\tt Hello.class} is located. \begin{verbatim} > java soot.Main -f jimple Hello \end{verbatim} This may or not work. If you get the following: \begin{verbatim} Exception in thread "main" java.lang.RuntimeException: couldn't find type: java.lang.Object (is your soot-class-path set properly?) \end{verbatim} This means that a classfile is not being located. Either Soot can not find the {\tt Hello} classfile, or it can not load the Java Development Kit libraries. Soot resolves classfiles by examining the directories in your {\tt CLASSPATH} environment variable or the {\tt -soot-classpath} command line option. Potential problem \#1: Soot can not locate the Hello classfile. To make sure that it can find the classfile {\tt "Hello"}, (1) add {\tt "."} to your {\tt CLASSPATH} or (2) specify {\tt "."} on the command line. To carry out (1) on UNIX-style systems using bash, \begin{verbatim} > export CLASSPATH=$CLASSPATH:. \end{verbatim} and on Windows machines, \begin{verbatim} > SET CLASSPATH=%CLASSPATH%;. \end{verbatim} and to do (2), \begin{verbatim} > java soot.Main --soot-classpath . -f jimple Hello \end{verbatim} Potential problem \#2: Soot cannot locate the class libraries. In this case, Soot will report that the type {\tt "java.lang.Object"} could not be found. Under JDK1.2, the class libraries do not need to be explicitly specified in the {\tt CLASSPATH} for the Java Virtual Machine to operate. Soot requires them to be specified either on the {\tt CLASSPATH} or in the soot-classpath command line option. Theoretically, this means adding the path to a {\tt "rt.jar"} file to the {\tt CLASSPATH} or the soot-classpath. \subsection{Locating the {\tt rt.jar} file} It is usually in a directory of the form "\$root/jdk1.2.2/jre/lib" where \$root is "/usr/local" or some similarly named directory. If you can not find it, you can attempt a find command such as: \begin{verbatim} > cd /usr ; find . -name "rt.jar" -print \end{verbatim} which may be able to locate it for you. Otherwise your best bet is to contact your system administrator. 
\paragraph{Important note for Windows users} Note that as of release 1, Soot will treat drive letters correctly, but under Windows the path separator {\em must} be a backslash ($\backslash$), not a forward slash. Summing up, you must issue a command of the form: \begin{verbatim} > export CLASSPATH=.:/usr/local/pkgs/jdk1.2.2/jre/lib/rt.jar \end{verbatim} or if you use the soot-classpath option which is more cumbersome: \begin{verbatim} > java soot.Main -f jimple --soot-classpath .:/usr/local/pkgs/jdk1.2.2/jre/lib/rt.jar Hello \end{verbatim} Once your {\em CLASSPATH} is set up properly, you should get: \begin{verbatim} > java soot.Main -f jimple Hello Transforming Hello... \end{verbatim} The file called Hello.jimple should contain: \begin{verbatim} public class Hello extends java.lang.Object { public void <init>() { Hello r0; r0 := @this: Hello; specialinvoke r0.<java.lang.Object: void <init>()>(); return; } public static void main(java.lang.String[]) { java.lang.String[] r0; java.io.PrintStream $r1; r0 := @parameter0: java.lang.String[]; $r1 = <java.lang.System: java.io.PrintStream out>; virtualinvoke $r1.<java.io.PrintStream: void println(java.lang.String )>("Hello world!"); return; } } \end{verbatim} \section{Generating jimple, baf, grimp for java.lang.String} By simple extrapolation, you should be able to now generate {\tt .b, .baf, .jimp, .jimple, .grimp,} and {\tt .grimple} files for any of your favorite classfiles. A particularly good test is a classfile from the JDK library. So a command like: \begin{verbatim} > java soot.Main -f baf java.lang.String \end{verbatim} should yield a file java.lang.String.baf containing text of the form: \begin{verbatim} public static java.lang.String valueOf(char[], int, int) { word r0, i0, i1; r0 := @parameter0: char[]; i0 := @parameter1: int; i1 := @parameter2: int; new java.lang.String; dup1.r; load.r r0; load.i i0; load.i i1; specialinvoke <java.lang.String: void <init>(char[],int,int)>; return.r; } \end{verbatim} \section{History} \begin{itemize} \item February 8, 2000: Initial version. \item February 16, 2000: Added changes for Soot version 021400 (Soot now prints the missing type) and emitted the title at the beginning. -PL \item March 1, 2000: Added changes for Release 1 (phantom class error printed instead) and emphasized that rt.jar should not occur in CLASSPATH. -PL \item March 11, 2000: Added note for Windows users in section about the classpath. \item March 15, 2000: Final tweaks for Release 1. \item January 29, 2001: Add the note of the release 1.2.1 . \item February 3, 2001: Added a hyperlink to Hello.java. \item June 6, 2003: Update for Soot 2.0. \end{itemize} \end{document}
{ "alphanum_fraction": 0.714574509, "avg_line_length": 34.3973509934, "ext": "tex", "hexsha": "86744bbedf77ebb958fd8bf07f7716ad1e6089b8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-14T19:58:38.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-14T19:58:38.000Z", "max_forks_repo_head_hexsha": "23de49765326f09f642b7097b7334facec0e96c3", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "UCLA-SEAL/JShrink", "max_forks_repo_path": "code/jshrink/soot/tutorial/intro/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "23de49765326f09f642b7097b7334facec0e96c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "UCLA-SEAL/JShrink", "max_issues_repo_path": "code/jshrink/soot/tutorial/intro/intro.tex", "max_line_length": 131, "max_stars_count": 1, "max_stars_repo_head_hexsha": "23de49765326f09f642b7097b7334facec0e96c3", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "UCLA-SEAL/JShrink", "max_stars_repo_path": "code/jshrink/soot/tutorial/intro/intro.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-07T16:13:03.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-07T16:13:03.000Z", "num_tokens": 2700, "size": 10388 }
\documentclass[../../main.tex]{subfiles} \begin{document} % START SKRIV HER \chapter{Methodology} % SLUTT SKRIV HER \end{document}
{ "alphanum_fraction": 0.7164179104, "avg_line_length": 13.4, "ext": "tex", "hexsha": "b081d513cdb5b261275124f987ed06dc20ae4289", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "24cf0b8e1d8ce0bb7961eafa92084827eafd74e6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Marcusln/master-thesis", "max_forks_repo_path": "sections/04_methodology/04_methodology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "24cf0b8e1d8ce0bb7961eafa92084827eafd74e6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Marcusln/master-thesis", "max_issues_repo_path": "sections/04_methodology/04_methodology.tex", "max_line_length": 40, "max_stars_count": null, "max_stars_repo_head_hexsha": "24cf0b8e1d8ce0bb7961eafa92084827eafd74e6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Marcusln/master-thesis", "max_stars_repo_path": "sections/04_methodology/04_methodology.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 43, "size": 134 }
\addcontentsline{toc}{section}{Acknowledgements}
\chapter*{Acknowledgements}
This template was not created by me (Neil Cook), though I added some functionality, commands, and a lot of the comments, including this ``readme''. The template was passed on to me by Dr. Federico Marocco, was originally created by Dr. Kieran Forde (2007), and, in various forms, has been used by many of the astrophysics PhD students at the University of Hertfordshire.
{ "alphanum_fraction": 0.7552742616, "avg_line_length": 15.8, "ext": "tex", "hexsha": "3ceb955d998e951b0cb7c264c857a4ae175071b4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a93c1f9b6fcb69efffeb1116b17caccf0f476f11", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "njcuk9999/neil-thesis-template", "max_forks_repo_path": "preamble/Acknowledgements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a93c1f9b6fcb69efffeb1116b17caccf0f476f11", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "njcuk9999/neil-thesis-template", "max_issues_repo_path": "preamble/Acknowledgements.tex", "max_line_length": 370, "max_stars_count": null, "max_stars_repo_head_hexsha": "a93c1f9b6fcb69efffeb1116b17caccf0f476f11", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "njcuk9999/neil-thesis-template", "max_stars_repo_path": "preamble/Acknowledgements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 106, "size": 474 }
\documentclass{article} \setlength{\textwidth}{6.2in} \setlength{\oddsidemargin}{0in} \setlength{\evensidemargin}{0in} \setlength{\textheight}{8.7in} \setlength{\topmargin}{0in} \usepackage{graphicx} \usepackage{mathtools} \usepackage{cite} \begin{document} \begin{center} \huge{Uncertainty Quantification Plan} \end{center} \vspace{0.2in} \section{Introduction} The Plasma-Surface-Interactions (PSI) SciDAC project is focused on predictive modeling of material damage in the tungsten wall material of the plasma divertor in a Tokamak fusion reactor system. The effort includes multiscale modeling methodologies involving particle in cell (PIC) methods, molecular dynamics (MD), and continuum modeling. For the latter, a high performance simulator named Xolotl is being developed. Xolotl is able to reproduce the evolution of the surface of the material by solving the cluster dynamics formulated Advection-Diffusion-Reaction (ADR) equations with an incident flux \begin{equation} \delta_t \bar{C} = \phi \cdot \rho + D \nabla^2 \bar{C} - \nabla \bar{\nu}C - \bar{Q}(\bar{C}) \end{equation} The time evolution of the concentration of each cluster in the network ($\delta_t \bar{C}$ where $\bar{C}$ is the vector of concentrations) is thus related to the external flux ($\phi \cdot \rho$), the isotropic diffusion ($D \nabla^2 \bar{C}$), the advection ($\nabla \bar{\nu}C$), and the reactions between the clusters ($\bar{Q}(\bar{C})$). In order to compute these different quantities, one needs different input information. The diffusion coefficient is a function of the diffusion factor $D_0$ and the migration energy $E_m$ \begin{equation} D_i = D_{0,i} \cdot e^{-E_m/(k_B T)} \end{equation} with $k_B$ the Boltzmann constant and $T$ the temperature. The reactions between clusters can be separated into two categories: actual reaction $r_i$ (where the cluster $i$ is produced or is used to produce another cluster) and dissociation $d_i$ (where the cluster $i$ is dissociated in two other clusters). The former quantity is a function of all the concentrations, $\bar{C}$, and of the diffusion coefficient $D_0$, while the latter is a function of the binding energies $E_b$ in addition to the previous quantities. The details of the aforementioned functions will not be explained here, but the curious reader can have a look at this document \cite{math} for more information. The advection part is not yet taken into account here and the external flux is a constant. \newline Most of the quantities that are needed ($E_b$, $E_m$, $D_0$) are results of simulations combined with experimental observations and, as a consequence, are not perfectly determined. In order for Xolotl to give more relevant results, one has to explore the influence of these uncertainties on the final result. This will be done with the help of Uncertainty Quantification (UQ) techniques. \section{Uncertainty Quantification Techniques} Before being able to perform any uncertainty quantification, one needs to have some knowledge about the underlying concepts. This section is devoted to the description of the main concepts that will be used to quantify the uncertainties occurring in Xolotl. \subsection{Bayesian Inference} Inference refers to the process of deriving logical conclusions from premises assumed to be true. Bayesian inference is thus the same idea which uses Bayes' rule for the process. 
Three important quantities take part in that rule: \begin{equation} P(H|E) \propto P(E|H) \cdot P(H) \end{equation} \begin{itemize} \itemsep0em \item[-] the posterior $P(H|E)$ (probability of the hypothesis $H$ given the evidence $E$) that is infered; \item[-] the likelihood $P(E|H)$ (probability of the evidence $E$ given the hypothesis $H$); \item[-] and the prior $P(H)$ that gathers all the information one had before the evidence $E$ was observed. \end{itemize} This simply means that the posterior is updated from the prior for each occurence of new evidence observed, knowing the likelihood of observing these evidences given the hypothesis. \newline The important feature of Bayesian inference is that everything is considered as a random variable. For instance, if one is given a number of data points and wants to model them with a line, the probability density functions (PDF) for each coefficient of the polynomial would need to be initially specified. These PDFs are the priors and do not need to be well defined. Furthermore, this lack of knowledge can be shown by simply using a flat PDF (e.g. a uniform distribution along the interval $[-1000,1000]$ for each coefficient). The likelihood function is a direct derivative of the model that is used for the inference. \newline Most of the time, the posterior cannot be calculated analytically because of its integral form. Markov Chain Monte-Carlo (MCMC) is a method used to tackle this problem, where a Markov chain randomly wanders in the parameter space. The PDF of the posterior is then obtained gathering the steps taken by the Markov chain. \subsection{Global Sensitivity Analysis} Global sensitivity analysis is a technique that is commonly used with the intentions of reducing model dimensionality. Performing a global sensitivity analysis determines an influential hierarchy of uncertain input parameters in relation to the variation of the output. The aforementioned hierarchy, thus, enables the identification of the input parameters whose uncertainty has a negligible contribution to uncertainties in the quantities of interest. \newline Let $i \subseteq \mathcal{I} = \{ 1,\ldots,n\}$. Assume $\boldsymbol{\xi}$ to be the set of uncertain model input parameters such that $\boldsymbol{\xi} \in \mathcal{U}^n$ where $\mathcal{U}^n = \{\boldsymbol{\xi} : 0 \leq \xi_i \leq 1, i \in \mathcal{I}\}$ is the $n$-dimensional hypercube. Suppose $f$ represents the model output which can be denoted by \ref{eq:pce}. The mean value, or expectation of $f$, is defined by \[ f_0 = E(f) = \int_{\mathcal{U}^d} f(\boldsymbol{\xi})d\boldsymbol{\xi} \] Additionaly, \[ f_i(\xi_i) = E(f|\xi_i)- f_0 = \int_{\mathcal{U}^{d-1}} f(\boldsymbol{\xi})d\boldsymbol{\xi}_{\sim i} - f_0 \] defines the conditional expectation, or first order effect, where $\xi_{\sim i}$ denotes the set of all input parameters except $\xi_i$. This describes the effect on the model output that results from varying $\xi_i$. \newline Analogous to the first order definition, the second order effect, $$ f_{ij}(\xi_i,\xi_j) = E(f|\xi_i,\xi_j) - f_0 - f_i - f_j $$ defines the effect on the model output that results from simultaneously varying $\xi_i$ and $\xi_j$ along with their corresponding individual effects. \newline There are two quantities that are of particular importance when performing a global sensitivity analysis: first order sensitivity indices and total sensitivity indices. 
First order sensitivity indices (sometimes referred to as main effect sensitivity indices), $S_{i}$, describe the fraction of the uncertainty in the output, $f$, that can be attributed to the input parameter $\xi_i$ alone. This quantity compares the variance \cite{spectral} of the conditional expectation, $Var[E(f|\xi_i)]$, against the total variance, $Var(f)$, i.e. \begin{equation} S_i = \frac{Var[E(f|\xi_i)]}{Var(f)} \label{eq:si} \end{equation} % \begin{equation} % S_i = % \frac{\sum_{k \in \mathbb{I}_i} f_k^2 \| \varphi_k \|^2}{\sum_{k=1}^P f_k^2 \| \varphi_k \|^2} = \frac{\sum_{k \in \mathbb{I}_i} % f_k^2 \| \varphi_k \|^2}{Var[f({\boldsymbol \xi})]} % \end{equation} % where $\mathbb{I}_i$ are the terms involving $\xi_i$ only. \newline % AKA 1st order main effects % \item {\bf Joint sensitivity indices} (uncertainty contribution due % to terms with only $\xi_i\xi_j$) % \begin{equation} % S_{ij} = \frac{ \sum_{k \in \mathbb{I}_{ij} } f_k^2 \| % \varphi_k \|^2 }{ \sum_{k=1}^P f_k^2 \| \varphi_k \|^2 } % \end{equation} % where $\mathbb{I}_{ij}$ are the terms involving $\xi_i\xi_j$ only Total sensitivity indices, $T_i$, describe the contribution to the uncertainty in the model output resulting from the uncertain input $\xi_i$ including all corresponding interactions with other input variables, \begin{equation} T_i = \frac{E[Var(f|\boldsymbol{\xi}_{\sim i})]}{Var(f)} = \frac{Var(f) - Var[E(f|\boldsymbol{\xi}_{\sim i})]}{Var(f)} \end{equation} \subsection{Polynomial Chaos Expansion} Polynomial chaos expansion (PCE) is a method used to represent uncertain quantities of interest (QoIs) in terms of orthogonal polynomials with independent random variables (RVs) with known densities. \newline % Let $\boldsymbol \xi$ be a set of $n$ independent random variables, with known % densities, which parameterize the uncertain input quantities. Clearly, the % output $f$ is dependent upon the random inputs which, in turn, results in the % output being random. Let $\boldsymbol \xi$ be a set of $n$ independent random variables, with known densities, for which the QoI $f$ is dependent upon. Assuming $f$ is square-integrable, it can be represented by the polynomial chaos expansion \begin{equation} f(\boldsymbol{\xi}) = \sum_{k=0}^P f_k\varphi_k(\boldsymbol \xi) \label{eq:pce} \end{equation} where $\varphi_k$ are the $n$-dimensional basis functions up to order P with coefficients $f_k$. Utilizing the orthogonality of the basis functions, the deterministic coefficients $f_k$ are simply given by \begin{equation} f_k = \frac{\langle f,\varphi_k \rangle}{\| \varphi_k \|^2} \text{, } \; k=0,\ldots,P \end{equation} The choice of the polynomial chaos (PC) type to be used often depends principally on the domain of input parameters, even if the PDF of these parameters can also be mapped to any given known density (gaussian, uniform, gamma, beta, \dots). \section{Initial Investigation of Input Parameters} \subsection{Parameters Under Consideration} In the first phase of Xolotl, only the one-dimensional case of the ADR equations for tungsten is considered. For this problem, the clusters are composed of three different elementary clusters: helium (He), vacancy (V), and interstitial (I). Each cluster is defined by its composition ($n_{He}$, $n_V$, $n_I$), binding energies ($E_{b, He}$, $E_{b, V}$, $E_{b, I}$), migration energy ($E_m$), and diffusion coefficient ($D_0$). \newline A text file containing the information for each cluster in the network is currently being used as an input for Xolotl. 
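As a small worked example (not taken from the Xolotl model, and using a single uniform variable for simplicity), take $\xi$ uniform on $[-1,1]$ with the Legendre basis $\varphi_0 = 1$, $\varphi_1(\xi) = \xi$, $\varphi_2(\xi) = (3\xi^2-1)/2$, and the quantity of interest $f(\xi) = \xi^2$. Its expansion terminates at order 2:
\[
f(\xi) = \xi^2 = \tfrac{1}{3}\,\varphi_0(\xi) + \tfrac{2}{3}\,\varphi_2(\xi),
\]
so $f_0 = 1/3$, $f_2 = 2/3$, and all other coefficients vanish. The mean is $E(f) = f_0 = 1/3$ and, since $\|\varphi_2\|^2 = 1/5$ for the uniform density on $[-1,1]$, the variance is
\[
Var(f) = f_2^2 \, \|\varphi_2\|^2 = \left(\tfrac{2}{3}\right)^2 \tfrac{1}{5} = \tfrac{4}{45},
\]
which matches the direct computation $E(\xi^4) - E(\xi^2)^2 = 1/5 - 1/9 = 4/45$.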
This means that when looking at small networks considering only ten clusters, the number of input parameters is fifty. Luckily, binding energies can actually be computed from formation energies through the formula \begin{eqnarray} E_{b, He}(He_X, V_Y, I_Z) & = & E_f(He_{X-1}, V_Y, I_Z) + E_f(He_1, 0, 0) - E_f(He_X, V_Y, I_Z) \nonumber \\ E_{b, V}(He_X, V_Y, I_Z) & = & E_f(He_X, V_{Y-1}, I_Z) + E_f(0, V_1, 0) - E_f(He_X, V_Y, I_Z) \\ E_{b, I}(He_X, V_Y, I_Z) & = & E_f(He_X, V_Y, I_{Z-1}) + E_f(0, 0, I_1) - E_f(He_X, V_Y, I_Z) \nonumber \end{eqnarray} Using the formation energies as an input would then reduce the number of parameters from five to three for each cluster. \newline At the moment, formation energy data is only available for $HeV$ clusters with $V = 1, 2, 6, 14, 18, 19,$ $27, 32, 44$ (for tungsten problems, one is mainly interested in $HeV$ clusters with $V$ up to $50$). Figure \ref{fig:Ef} shows the formation energy data as a function of the helium and vacancy cluster sizes. By representing this as a function of the helium/vacancy ratio, one can see common features appearing. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.95\textwidth]{formationEnergiesUQ} \caption{Formation energies (in eV) for $V = 1, 2, 6, 14, 18, 19, 27, 32, 44$ as a function of the number of helium in the cluster (left) of the helium/vacancy ratio (right). \label{fig:Ef}} \end{center} \end{figure} Modeling the formation energies would permit extrapolation of the missing data and, additionally, would result in a drastic reduction of the number of input parameters. \subsection{Modeling the Formation Energies} The first step in the modeling of the formation energies is to find a good fit. This fit function will depend on both helium and vacancy sizes (even though the representation in figure \ref{fig:Ef} can be misleading). Also, a change of variable from the helium size to the helium/vacancy ratio seems to be more suitable in order to capture the formation energy features. This data clearly captures the existence of two different behaviours separated at helium/vacancy ratio around one. The use of a piecewise function for the fit is thus indicated. \newline Once this fit is obtained, a thorough investigation of the difference between the data and the fit will be necessary. In turn, this will help shape the existing uncertainties of the model (standard deviation and/or bias). \newline Then, the Bayesian Inference method can be used to infer the final model of the formation energies and the joint posteriors (giving information about the correlation between parameters). The coefficients of the fit function will be used as the parameters (priors), and the uncertainties of the model will be considered as hyperparameters (hyperparameter meaning parameter of the prior). This will be done with the help of the UQ Toolkit \cite{uqtk} libraries providing (among others) a Bayesian Inference method using a Metropolis-Hastings MCMC algorithm. After which, the obtained joint posterior would, nearly, be ready for use in other UQ methods. \section{Proposed Strategy} This section starts with the description of the quantities of interest under consideration, and then exhibits the different methods that are planned to be used to perform uncertainty quantification on the Xolotl software. The DAKOTA \cite{dakota} software was chosen as the provider of these algorithms. 
\subsection{Quantities of Interest}
To be determined\dots

\subsection{Global Sensitivity Analysis}
A global sensitivity analysis will be done on Xolotl in order to determine the influential hierarchy of the uncertain input parameters and to reduce the overall model dimensionality. Two independent ways to do so are considered in the following sections.

\subsubsection{Obtaining Sensitivity Indices via Monte-Carlo Sampling}
This method is a computationally expensive one. Random variations of the input parameters will be generated, and the Xolotl software will be run for each of these variations. Monte-Carlo sampling will be used first for global sensitivity analysis in order to have a rough estimate of the important input parameters. A more refined analysis will then be performed with the use of a PC surrogate (see next section).
\newline

Let $X = \{\boldsymbol{\xi}^{(1)}, \ldots, \boldsymbol{\xi}^{(M)}\}$ and $\tilde{X} = \{\boldsymbol{\tilde{\xi}}^{(1)}, \ldots, \boldsymbol{\tilde{\xi}}^{(M)}\}$ be independent sample sets drawn uniformly in $\mathcal{U}^n$. It follows that one can estimate the mean, $E(f)$, and variance, $Var(f)$, respectively as follows (where the hat denotes the estimator)
\begin{equation}
\widehat{E(f)} = \frac{1}{M} \sum_{s=1}^M f(\boldsymbol{\xi}^{(s)})
\end{equation}
\begin{equation}
\widehat{Var(f)} = \frac{1}{M-1} \sum_{s=1}^M [f(\boldsymbol{\xi}^{(s)})]^2 - \widehat{E(f)}^2
\end{equation}
Utilizing the previous results and equation (\ref{eq:si}), it can be shown that the first order sensitivity indices are estimated \cite{sobol} by
\begin{equation}
\widehat{S_i} = \frac{Var[\widehat{E(f|\xi_i)}]}{\widehat{Var(f)}}
\end{equation}
where
\[
Var[\widehat{E(f|\boldsymbol{\xi}_i)}] = \frac{1}{M} \sum_{s=1}^M f(\boldsymbol{\xi}^{(s)}_{\sim i},\boldsymbol{\xi}^{(s)}_i) f(\boldsymbol{\tilde{\xi}}^{(s)}_{\sim i},\boldsymbol{\xi}^{(s)}_i) - \widehat{E(f)}^2 .
\]
Analogously,
\begin{equation}
\widehat{T_i} = 1 - \frac{Var[\widehat{E(f|\boldsymbol{\xi}_{\sim i})}]}{\widehat{Var(f)}}
\end{equation}
estimates the total sensitivity indices with
\[
Var[\widehat{E(f|\boldsymbol{\xi}_{\sim i})}] = \frac{1}{M} \sum_{s=1}^M f(\boldsymbol{\xi}^{(s)}_{\sim i},\boldsymbol{\xi}^{(s)}_i) f(\boldsymbol{\xi}^{(s)}_{\sim i},\boldsymbol{\tilde{\xi}}^{(s)}_i) - \widehat{E(f)}^2 .
\]

\subsubsection{Obtaining Sensitivity Indices via PCE}
A perk of using PCE to represent quantities of interest is that parametric sensitivity information comes as a direct result. It is simply a matter of taking the PCE and decomposing it termwise. To illustrate this, $f \in \mathcal{L}_2(\mathcal{U}^n)$ will be represented by (\ref{eq:pce}) as follows
\[
f({\boldsymbol \xi}) = \sum_{k=0}^P f_k\varphi_k({\boldsymbol \xi}) .
\]
The mean, or expectation, of $f$ is defined to be
\[
E(f)=f_0 ,
\]
and the variance is
\[
\sigma^2 = Var[f({\boldsymbol \xi})] = \sum_{k=1}^P f_k^2 \| \varphi_k \|^2 .
\]
From the previous definition, and recalling (\ref{eq:si}), it follows that the first order sensitivity indices are determined by
\begin{equation}
S_i = \frac{\sum_{k \in \mathcal{I}_i}f_k^2 \| \varphi_k \|^2}{Var[f({\boldsymbol \xi})]}
\end{equation}
where $\mathcal{I}_i$ are the terms involving $\xi_i$ only. The joint sensitivity indices, the uncertainty contribution due to terms involving only $\xi_i\xi_j$, are similarly obtained. Hence,
\[
S_{ij} = \frac{\sum_{k \in \mathcal{I}_{ij}} f_k^2 \| \varphi_k \|^2}{Var[f({\boldsymbol \xi})]}
\]
where $\mathcal{I}_{ij}$ are the terms involving $\xi_i\xi_j$ only.
Recall that the total sensitivity indices describe the uncertainty contribution due to all terms containing $\xi_i$; thus $T_i$ can be obtained by simply summing all sensitivity indices involving $\xi_i$.

\subsection{Constructing a Xolotl Surrogate}
A surrogate model is an inexpensive approximate model intended to capture the salient features of an expensive high-fidelity model. Here, it can be built directly from data (QoIs) generated by Xolotl.
\newline

Given the supposed strong nonlinear dependence of the system on uncertain parameters, and because the solution typically exhibits the formation of sharp fronts, it is advisable to consider non-intrusive PC methods, focusing on smooth observables, and relying on sparse quadrature sampling.
\newline

Xolotl's output quantities of interest will be approximated using the theory of polynomial chaos expansions previously defined. Representing these quantities as such enables the construction of PC surrogates for the QoIs. Employing a Non-Intrusive Spectral Projection (NISP) method, a projection is applied exclusively to the QoIs, thereby computing PCEs only for these quantities. Using the NISP method allows Xolotl to be used as a black box in order to obtain function evaluations from which the PC coefficients are determined.
\newline

To illustrate this idea, let $y = f(\boldsymbol{\xi})$ be an output QoI in Xolotl and $\boldsymbol{\xi}$ be the set of uncertain parameterized inputs. The output will be represented with a PCE as shown in (\ref{eq:pce}). Recall that the deterministic coefficients are found by
\[
f_k=\frac{\langle f,\varphi_k \rangle}{\| \varphi_k \|^2} = \frac{\int f\varphi_k(\xi)w(\xi)d\xi}{\int \varphi_k^2(\xi)w(\xi)d\xi} \text{, } \; k=0,\ldots,P
\]
These integrals will be evaluated using numerical quadrature,
\[
\int f\varphi_k(\xi)w(\xi)d\xi=\sum_{q=1}^Q w_q f(\xi_q)\varphi_k(\xi_q)
\]
where $\xi_q$ and $w_q$ are the multidimensional quadrature points and weights, respectively, and $Q$ is the number of 1D quadrature points. Such standard quadrature methods use tensor products of the 1D quadrature rules, which require approximately $Q^n$ computations in order to evaluate the integrals.
\newline

The high dimensionality of Xolotl suggests that an optimal alternative to standard numerical quadrature would be adaptive sparse quadrature methods, the details of which can be found in \cite{spectral}.

\section{In a Nutshell\dots}

The Xolotl uncertainty quantification plan is as follows: the first step would be a Monte-Carlo sampling based global sensitivity analysis to establish which uncertain inputs matter; then, a PC surrogate for the quantities of interest would be constructed using adaptive sparse quadrature; finally, the input parameter uncertainties (previously determined with the help of Bayesian inference) can be propagated through Xolotl using the PC surrogate via Monte-Carlo sampling.

\bibliographystyle{plain}
\bibliography{biblio}

\end{document}
{ "alphanum_fraction": 0.748195122, "avg_line_length": 44.4685466377, "ext": "tex", "hexsha": "7313f0106865199b5f9cb2234b050a03fe4cfb90", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2022-01-04T14:54:16.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-13T20:36:03.000Z", "max_forks_repo_head_hexsha": "993434bea0d3bca439a733a12af78034c911690c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "ORNL-Fusion/xolotl", "max_forks_repo_path": "UQ/document/UQPlan.tex", "max_issues_count": 97, "max_issues_repo_head_hexsha": "993434bea0d3bca439a733a12af78034c911690c", "max_issues_repo_issues_event_max_datetime": "2022-03-29T16:29:48.000Z", "max_issues_repo_issues_event_min_datetime": "2018-02-14T15:24:08.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "ORNL-Fusion/xolotl", "max_issues_repo_path": "UQ/document/UQPlan.tex", "max_line_length": 133, "max_stars_count": 13, "max_stars_repo_head_hexsha": "993434bea0d3bca439a733a12af78034c911690c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "ORNL-Fusion/xolotl", "max_stars_repo_path": "UQ/document/UQPlan.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-30T02:39:01.000Z", "max_stars_repo_stars_event_min_datetime": "2018-06-13T18:08:57.000Z", "num_tokens": 5751, "size": 20500 }
\subsection{Representation theory for the time group}

Time evolution is a linear operator. Rather than treating it directly, we describe the time operator as a Lie group, generated by a Lie algebra element:

\(\Psi (t_b-t_a)=e^{(t_b-t_a)X}\)

\subsubsection{States are vectors}

For each dynamic system we define a set of possible states, and describe a state as a vector \(v\in V\). We can remove a degree of freedom by requiring vectors to have norm 1.

\subsubsection{Finite state spaces}

A finite state space can describe a system such as heads or tails.

\subsubsection{Infinite state spaces}

An infinite state space can describe a continuous position, or an angle.
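Returning to the time-evolution operator defined above: a short check (standard exponential-map algebra, added here for illustration) of why the exponential form gives a one-parameter group is that evolutions compose additively in time,
\[
\Psi(t_2)\,\Psi(t_1) = e^{t_2 X} e^{t_1 X} = e^{(t_1+t_2) X} = \Psi(t_1+t_2),
\]
since \(X\) commutes with itself; \(\Psi(0)\) is the identity and \(\Psi(-t)\) is the inverse of \(\Psi(t)\).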
{ "alphanum_fraction": 0.7601410935, "avg_line_length": 22.68, "ext": "tex", "hexsha": "a2cc33acfbfd09e7e6870fddb846773fe4fa4aa9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/physics/QM/03-03-representation.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/physics/QM/03-03-representation.tex", "max_line_length": 73, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/physics/QM/03-03-representation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 138, "size": 567 }
\documentclass[12pt]{article} \batchmode \usepackage{url} \urlstyle{sf} \usepackage[svgnames]{xcolor} \usepackage[colorlinks,linkcolor=Blue,citecolor=Blue,urlcolor=Blue]{hyperref} \usepackage{amssymb} \usepackage{amsmath} \usepackage{framed} \usepackage{mdframed} % \usepackage{geometry} % \usepackage{pdflscape} \newmdenv[backgroundcolor=white]{spec} \newmdenv[backgroundcolor=yellow]{alert} \hyphenation{Suite-Sparse} \hyphenation{Graph-BLAS} \hyphenation{Suite-Sparse-Graph-BLAS} \DeclareMathOperator{\sech}{sech} \DeclareMathOperator{\csch}{csch} \DeclareMathOperator{\arcsec}{arcsec} \DeclareMathOperator{\arccot}{arcCot} \DeclareMathOperator{\arccsc}{arcCsc} \DeclareMathOperator{\arccosh}{arcCosh} \DeclareMathOperator{\arcsinh}{arcsinh} \DeclareMathOperator{\arctanh}{arctanh} \DeclareMathOperator{\arcsech}{arcsech} \DeclareMathOperator{\arccsch}{arcCsch} \DeclareMathOperator{\arccoth}{arcCoth} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\erf}{erf} \DeclareMathOperator{\erfc}{erfc} \newenvironment{packed_itemize}{ \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} }{\end{itemize}} \title{User Guide for SuiteSparse:GraphBLAS} \author{Timothy A. Davis \\ \small [email protected], Texas A\&M University. \\ \small \url{http://suitesparse.com} \\ \small \url{https://people.engr.tamu.edu/davis} \\ \small \url{https://twitter.com/DocSparse} } % version and date are set by cmake \input{GraphBLAS_version.tex} %------------------------------------------------------------------------------- \begin{document} %------------------------------------------------------------------------------- \maketitle \begin{abstract} SuiteSparse:GraphBLAS is a full implementation of the GraphBLAS standard, which defines a set of sparse matrix operations on an extended algebra of semirings using an almost unlimited variety of operators and types. When applied to sparse adjacency matrices, these algebraic operations are equivalent to computations on graphs. GraphBLAS provides a powerful and expressive framework for creating high-performance graph algorithms based on the elegant mathematics of sparse matrix operations on a semiring. \vspace{1in} SuiteSparse:GraphBLAS is under the Apache-2.0 license, except for the \verb'@GrB' MATLAB interface, which is licensed under the GNU GPLv3 (or later). Refer to the SPDX license identifier in each file for details. Note that all of the compiled \verb'libgraphblas.so' is under the Apache-2.0 license. \end{abstract} \newpage {\small \tableofcontents } \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{intro} The GraphBLAS standard defines sparse matrix and vector operations on an extended algebra of semirings. The operations are useful for creating a wide range of graph algorithms. For example, consider the matrix-matrix multiplication, ${\bf C=AB}$. Suppose ${\bf A}$ and ${\bf B}$ are sparse $n$-by-$n$ Boolean adjacency matrices of two undirected graphs. If the matrix multiplication is redefined to use logical AND instead of scalar multiply, and if it uses the logical OR instead of add, then the matrix ${\bf C}$ is the sparse Boolean adjacency matrix of a graph that has an edge $(i,j)$ if node $i$ in ${\bf A}$ and node $j$ in ${\bf B}$ share any neighbor in common. 
The OR-AND pair forms an algebraic semiring, and many graph operations like this one can be succinctly represented by matrix operations with different semirings and different numerical types. GraphBLAS provides a wide range of built-in types and operators, and allows the user application to create new types and operators without needing to recompile the GraphBLAS library. For more details on SuiteSparse:GraphBLAS, and its use in LAGraph, see \cite{Davis18,Davis18b,DavisAznavehKolodziej19,Davis20,Mattson19}. A full and precise definition of the GraphBLAS specification is provided in {\em The GraphBLAS C API Specification} by {Ayd\i n Bulu\c{c}, Timothy Mattson, Scott McMillan, Jos\'e Moreira, Carl Yang, and Benjamin Brock} \cite{BulucMattsonMcMillanMoreiraYang17,spec}, based on {\em GraphBLAS Mathematics} by Jeremy Kepner \cite{Kepner2017}. The GraphBLAS C API Specification is available at \url{http://graphblas.org}. This version of SuiteSparse:GraphBLAS conforms to Version \input{GraphBLAS_API_version.tex} of {\em The GraphBLAS C API specification}. In this User Guide, aspects of the GraphBLAS specification that would be true for any GraphBLAS implementation are simply called ``GraphBLAS.'' Details unique to this particular implementation are referred to as SuiteSparse:GraphBLAS. All functions, objects, and macros with a name of the form \verb'GxB_*' are SuiteSparse-specific extensions to the spec. \begin{alert} {\bf SPEC:} Non-obvious deviations or additions to the v1.3 GraphBLAS C API Specification are highlighted in a box like this one, except for \verb'GxB*' methods. They are not highlighted since their name makes it clear that they are extensions to the v1.3 GraphBLAS C API. \end{alert} \newpage %------------------------------------------------------------------------------- \subsection{Release Notes} %------------------------------------------------------------------------------- \begin{itemize} \item Version 5.0.5 (May 17, 2021) \begin{packed_itemize} \item (26) performance bug fix: reduce-to-vector where \verb'A' is hypersparse CSR with a transposed descriptor (or CSC with no transpose), and some cases for \verb'GrB_mxm/mxv/vxm' when computing \verb'C=A*B' with A hypersparse CSC and \verb'B' bitmap/full (or \verb'A' bitmap/full and \verb'B' hypersparse CSR), the wrong internal method was being selected via the auto-selection strategy, resulting in a significant slowdown in some cases. \end{packed_itemize} \item Version 5.0.4 (May 13, 2021) \begin{packed_itemize} \item \verb'@GrB' MATLAB interface: changed license to GNU General Public License v3.0 or later. \end{packed_itemize} \item Version 5.0.3 (May 12, 2021) \begin{packed_itemize} \item (25) bug fix: disabling \verb'ANY_PAIR' semirings by editting \verb'Source/GB_control.h' would cause a segfault if those disabled semirings were used. \item demos are no longer built by default \item (24) bug fix: new functions in v5.0.2 not declared as \verb'extern' in \verb'GraphBLAS.h'. \item \verb'GrB_Matrix_reduce_BinaryOp' reinstated from v4.0.3; same limit on built-in ops that correspond to known monoids. \end{packed_itemize} \item Version 5.0.2 (May 5, 2021) \begin{packed_itemize} \item (23) bug fix: \verb'GrB_Matrix_apply_BinaryOp1st' and \verb'2nd' were using the wrong descriptors for \verb'GrB_INP0' and \verb'GrB_INP1'. Caught by Erik Welch, Anaconda. \item memory pool added for faster allocation/free of small blocks \item \verb'@GrB' MATLAB interface ported to MATLAB R2021a. 
\item \verb'GxB_PRINTF' and \verb'GxB_FLUSH' global options added. \item \verb'GxB_Matrix_diag': construct a diagonal matrix from a vector \item \verb'GxB_Vector_diag': extract a diagonal from a matrix \item \verb'concat/split': methods to concatenate and split matrices. \item \verb'import/export': size of arrays now in bytes, not entries. This change is required for better internal memory management, and it is not backward compatible with the \verb'GxB*import/export' functions in v4.0. A new parameter, \verb'is_uniform', has been added to all import/export methods, which indicates that the matrix values are all the same. \item (22) bug fix: SIMD vectorization was missing \verb'reduction(+,task_cnvals)' in \verb'GB_dense_subassign_06d_template.c'. Caught by Jeff Huang, Texas A\&M, with his software package for race-condition detection. \item \verb'GrB_Matrix_reduce_BinaryOp': removed. Use a monoid instead, with \verb'GrB_reduce' or \verb'GrB_Matrix_reduce_Monoid'. \end{packed_itemize} \item Version 4.0.3 (Jan 19, 2021) \begin{packed_itemize} \item faster min/max monoids \item MATLAB: \verb'G=GrB(G)' converts \verb'G' from v3 object to v4 \end{packed_itemize} \item Version 4.0.2 (Jan 13, 2021) \begin{packed_itemize} \item ability to load \verb'*.mat' files saved with the v3 \verb'GrB' MATLAB interface. \end{packed_itemize} \item Version 4.0.1 (Jan 4, 2021) \begin{packed_itemize} \item significant performance improvements: compared with v3.3.3, up to 5x faster in breadth-first-search (using \verb'LAGraph_bfs_parent2'), and 2x faster in Betweenness-Centrality (using \verb'LAGraph_bc_batch5'). \item \verb'GrB_wait(void)', with no inputs: removed \item \verb'GrB_wait(&object)': polymorphic function added \item \verb'GrB_*_nvals': no longer guarantees completion; use \verb'GrB_wait(&object)' or non-polymorphic \verb'GrB_*_wait (&object)' instead \item \verb'GrB_error': now has two parameters: a string (\verb'char **') and an object. \item \verb'GrB_Matrix_reduce_BinaryOp' limited to built-in operators that correspond to known monoids. \item \verb'GrB_*_extractTuples': may return indices out of order \item removed internal features: GBI iterator, slice and hyperslice matrices \item bitmap/full matrices and vectors added \item positional operators and semirings: \verb'GxB_FIRSTI_INT32' and related ops \item jumbled matrices: sort left pending, like zombies and pending tuples \item \verb'GxB_get/set': added \verb'GxB_SPARSITY_*' (hyper, sparse, bitmap, or full) and \verb'GxB_BITMAP_SWITCH'. \item \verb'GxB_HYPER': enum renamed to \verb'GxB_HYPER_SWITCH' \item \verb'GxB*import/export': API modified \item \verb'GxB_SelectOp': \verb'nrows' and \verb'ncols' removed from function signature. \item OpenMP tasking removed from mergesort and replaced with parallel for loops. Just as fast on Linux/Mac; now the performance ports to Windows. \item \verb'GxB_BURBLE' added as a supported feature. This was an undocumented feature of prior versions. \item bug fix: \verb'A({lo,hi})=scalar' in MATLAB, \verb'A(lo:hi)=scalar' was OK \end{packed_itemize} \item Version 3.3.3 (July 14, 2020). Bug fix: \verb'w<m>=A*u' with mask non-empty and u empty. \item Version 3.3.2 (July 3, 2020). Minor changes to build system. \item Version 3.3.1 (June 30, 2020). Bug fix to \verb'GrB_assign' and \verb'GxB_subassign' when the assignment is simple (\verb'C=A') but with typecasting. \item Version 3.3.0 (June 26, 2020). 
Compliant with V1.3 of the C API (except that the polymorphic \verb'GrB_wait(&object)' doesn't appear yet; it will appear in V4.0). Added complex types (\verb'GxB_FC32' and \verb'GxB_FC64'), many unary operators, binary operators, monoids, and semirings. Added bitwise operators, and their monoids and semirings. Added the predefined monoids and semirings from the v1.3 spec. MATLAB interface: added complex matrices and operators, and changed behavior of integer operations to more closely match the behavior on MATLAB integer matrices. The rules for typecasting large floating point values to integers has changed. The specific object-based \verb'GrB_Matrix_wait', \verb'GrB_Vector_wait', etc, functions have been added. The no-argument \verb'GrB_wait()' is deprecated. Added \verb'GrB_getVersion', \verb'GrB_Matrix_resize', \verb'GrB_Vector_resize', \verb'GrB_kronecker', \verb'GrB_*_wait', scalar binding with binary operators for \verb'GrB_apply', \verb'GrB_Matrix_removeElement', and \verb'GrB_Vector_removeElement'. \item Version 3.2.0 (Feb 20, 2020). Faster \verb'GrB_mxm', \verb'GrB_mxv', and \verb'GrB_vxm', and faster operations on dense matrices/vectors. Removed compile-time user objects (\verb'GxB_*_define'), since these were not compatible with the faster matrix operations. Added the \verb'ANY' and \verb'PAIR' operators. Added the predefined descriptor, \verb'GrB_DESC_*'. Added the structural mask option. Changed default chunk size to 65,536. Note that v3.2.0 is not compatible with the MS Visual Studio compiler; use v3.1.2 instead. MATLAB interface modified: \verb'GrB.init' is now optional. \item Version 3.1.2 (Dec, 2019). Changes to allow SuiteSparse:GraphBLAS to be compiled with the Microsoft Visual Studio compiler. This compiler does not support the \verb'_Generic' keyword, so the polymorphic functions are not available. Use the equivalent non-polymorphic functions instead, when compiling GraphBLAS with MS Visual Studio. In addition, variable-length arrays are not supported, so user-defined types are limited to 128 bytes in size. These changes have no effect if you have an ANSI C11 compliant compiler. MATLAB interface modified: \verb'GrB.init' is now required. \item Version 3.1.0 (Oct 1, 2019). MATLAB interface added. See the \newline \verb'GraphBLAS/GraphBLAS' folder for details and documentation, and Section~\ref{matlab}. \item Version 3.0 (July 26, 2019), with OpenMP parallelism. The version number is increased to 3.0, since this version is not backward compatible with V2.x. The \verb'GxB_select' operation changes; the \verb'Thunk' parameter was formerly a \verb'const void *' pointer, and is now a \verb'GxB_Scalar'. A new parameter is added to \verb'GxB_SelectOp_new', to define the expected type of \verb'Thunk'. A new parameter is added to \verb'GxB_init', to specify whether or not the user-provided memory management functions are thread safe. The remaining changes add new features, and are upward compatible with V2.x. The major change is the addition of OpenMP parallelism. This addition has no effect on the API, except that round-off errors can differ with the number of threads used, for floating-point types. \verb'GxB_set' can optionally define the number of threads to use (the default is \verb'omp_get_max_threads'). The number of threads can also defined globally, and/or in the \verb'GrB_Descriptor'. The \verb'RDIV' and \verb'RMINUS' operators are added, which are defined as $f(x,y)=y/x$ and $f(x,y)=y-x$, respectively. Additional options are added to \verb'GxB_get'. 
\item Version 2.3.3 (May 2019): Collected Algorithm of the ACM.
No changes from V2.3.2 other than the documentation.

\item Version 2.3 (Feb 2019) improves the performance of many GraphBLAS
operations, including an early-exit for monoids.  These changes have a
significant impact on breadth-first-search (a performance bug was also fixed
in the two BFS \verb'Demo' codes).  The matrix and vector import/export
functions were added (Section~\ref{import_export}), in support of the new
LAGraph project (\url{https://github.com/GraphBLAS/LAGraph}, see also
Section~\ref{lagraph}).  LAGraph includes a push-pull BFS in GraphBLAS that
is faster than two versions in the \verb'Demo' folder.  \verb'GxB_init' was
added to allow the memory manager functions (\verb'malloc', etc) to be
specified.

\item Version 2.2 (Nov 2018) adds user-defined objects at compile-time, via
user \verb'*.m4' files placed in \verb'GraphBLAS/User', which use the
\verb'GxB_*_define' macros (NOTE: feature removed in v3.2).  The default
matrix format is now \verb'GxB_BY_ROW'.  Also added are the \verb'GxB_*print'
methods for printing the contents of each GraphBLAS object
(Section~\ref{fprint}).  PageRank demos have been added to the \verb'Demos'
folder.

\item Version 2.1 (Oct 2018) was a major update with support for new matrix
formats (by row or column, and hypersparse matrices), and MATLAB-like colon
notation (\verb'I=begin:end' or \verb'I=begin:inc:end').  Some graph
algorithms are more naturally expressed with matrices stored by row, and this
version includes the new \verb'GxB_BY_ROW' format.  The default format in
Version 2.1 and prior versions is by column.  New extensions to GraphBLAS in
this version include \verb'GxB_get', \verb'GxB_set', and
\verb'GxB_AxB_METHOD', \verb'GxB_RANGE', \verb'GxB_STRIDE', and
\verb'GxB_BACKWARDS', and their related definitions, described in
Sections~\ref{descriptor},~\ref{options},~and~\ref{colon}.

\item Version 2.0 (March 2018) addressed changes in the GraphBLAS C API
Specification and added \verb'GxB_kron' and \verb'GxB_resize'.

\item Version 1.1 (Dec 2017) primarily improved the performance.

\item Version 1.0 was released on Nov 25, 2017.

\end{itemize}

%-------------------------------------------------------------------------------
\subsubsection{Regarding historical and deprecated functions and symbols}
%-------------------------------------------------------------------------------

When a \verb'GxB*' function or symbol is added to the C API Specification
with a \verb'GrB*' name, the new \verb'GrB*' name should be used instead, if
possible.  However, the old \verb'GxB*' name will be kept as long as possible
for historical reasons.  Historical functions and symbols will not always be
documented here in the SuiteSparse:GraphBLAS User Guide, but they will be
kept in \verb'GraphBLAS.h' and kept in good working order in the library.
Historical functions and symbols would only be removed in the very unlikely
case that they cause a serious conflict with future methods.

The only methods that have been fully deprecated and removed are the no-input
\verb'GrB_wait' and \verb'GrB_error' methods, which are incompatible with the
new 1-input \verb'GrB_wait' and 2-input \verb'GrB_error' methods defined in
the latest version of the C API Specification, and the
\verb'GrB_Matrix_reduce_BinaryOp' function, which is limited to binary
operators that correspond to built-in monoids.
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Basic Concepts} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{basic}

Since the {\em GraphBLAS C API Specification} provides a precise definition of
GraphBLAS, not every detail of every function is provided here.  For example,
some error codes returned by GraphBLAS are self-explanatory, but since a
specification must precisely define all possible error codes a function can
return, these are listed in detail in the {\em GraphBLAS C API Specification}.
However, including them here is not essential, and the additional information
on the page might detract from a clearer view of the essential features of the
GraphBLAS functions.

This User Guide also assumes the reader is familiar with MATLAB.  MATLAB
supports only the conventional plus-times semiring on sparse double and
complex matrices, but a MATLAB-like notation easily extends to the arbitrary
semirings used in GraphBLAS.  The matrix multiplication in the example in the
Introduction can be written in MATLAB notation as \verb'C=A*B', if the Boolean
\verb'OR-AND' semiring is understood.  Relying on a MATLAB-like notation
allows the description in this User Guide to be expressive, easy to
understand, and terse at the same time.  {\em The GraphBLAS C API
Specification} also makes use of some MATLAB-like language, such as the colon
notation.

MATLAB notation will always appear here in fixed-width font, such as
\verb'C=A*B(:,j)'.  In standard mathematical notation it would be written as
the matrix-vector multiplication ${\bf C = A b}_j$ where ${\bf b}_j$ is the
$j$th column of the matrix ${\bf B}$.  The GraphBLAS standard is a C API and
SuiteSparse:GraphBLAS is written in C, and so a great deal of C syntax appears
here as well, also in fixed-width font.  This User Guide alternates between
all three styles as needed.

%===============================================================================
\subsection{Graphs and sparse matrices} %=======================================
%===============================================================================
\label{sparse}

Graphs can be huge, with many nodes and edges.  A dense adjacency matrix
${\bf A}$ for a graph of $n$ nodes takes $O(n^2)$ memory, which is impossible
if $n$ is, say, a million.  Let $|{\bf A}|$ denote the number of entries in a
matrix.  Most graphs arising in practice are sparse, however, with only
$|{\bf A}|=O(n)$ edges, where $|{\bf A}|$ denotes the number of edges in the
graph, or the number of explicit entries present in the data structure for
the matrix ${\bf A}$.  Sparse graphs with millions of nodes and edges can
easily be created by representing them as sparse matrices, where only
explicit values need to be stored.  Some graphs are {\em hypersparse}, with
$|{\bf A}| \ll n$.  SuiteSparse:GraphBLAS supports two kinds of sparse matrix
formats: a regular sparse format, taking $O(n+|{\bf A}|)$ space, and a
hypersparse format taking only $O(|{\bf A}|)$ space.  As a result, creating a
sparse matrix of size $n$-by-$n$ where $n=2^{60}$ (about $10^{18}$) can be
done quite easily on a commodity laptop, limited only by $|{\bf A}|$.

A sparse matrix data structure only stores a subset of the possible $n^2$
entries, and it assumes the values of entries not stored have some implicit
value.  In conventional linear algebra, this implicit value is zero, but it
differs with different semirings.
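As an aside, the scale claimed above can be illustrated with a short C
fragment.  The following sketch (illustrative only; error checking omitted)
creates a $2^{60}$-by-$2^{60}$ Boolean matrix and adds a single entry; since
SuiteSparse:GraphBLAS can hold the matrix in hypersparse form, the memory
required depends only on the number of entries, not on the dimension.

{\footnotesize
\begin{verbatim}
    GrB_Index n = (GrB_Index) 1 << 60 ;             // n = 2^60
    GrB_Matrix A ;
    GrB_Matrix_new (&A, GrB_BOOL, n, n) ;           // no entries yet
    GrB_Matrix_setElement_BOOL (A, true, 0, n-1) ;  // A(0,n-1) = true
    GrB_Index nvals ;
    GrB_Matrix_nvals (&nvals, A) ;                  // nvals is 1
\end{verbatim}}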
Explicit values are called {\em entries} and they appear in the data
structure.  The {\em pattern} of a matrix defines where its explicit entries
appear.  It will be referenced in one of two equivalent ways.  It can be
viewed as a set of indices $(i,j)$, where $(i,j)$ is in the pattern of a
matrix ${\bf A}$ if ${\bf A}(i,j)$ is an explicit value.  It can also be
viewed as a Boolean matrix ${\bf S}$ where ${\bf S}(i,j)$ is true if $(i,j)$
is an explicit entry and false otherwise.  In MATLAB notation,
\verb'S=spones(A)' or \verb'S=(A~=0)', if the implicit value is zero.

The \verb'(i,j)' pairs, and their values, can also be extracted from the
matrix via the MATLAB expression \verb'[I,J,X]=find(A)', where the \verb'k'th
tuple \verb'(I(k),J(k),X(k))' represents the explicit entry
\verb'A(I(k),J(k))', with numerical value \verb'X(k)' equal to $a_{ij}$, with
row index $i$=\verb'I(k)' and column index $j$=\verb'J(k)'.

The entries in the pattern of ${\bf A}$ can take on any value, including the
implicit value, whatever it happens to be.  This differs slightly from
MATLAB, which always drops all explicit zeros from its sparse matrices.  This
is a minor difference but it cannot be done in GraphBLAS.  For example, in
the max-plus tropical algebra, the implicit value is negative infinity, and
zero has a different meaning.  Here, the MATLAB notation used will assume
that no explicit entries are ever dropped because their explicit value
happens to match the implicit value.

{\em Graph Algorithms in the Language of Linear Algebra}, Kepner and Gilbert,
eds., provides a framework for understanding how graph algorithms can be
expressed as matrix computations \cite{KepnerGilbert2011}.  For additional
background on sparse matrix algorithms, see also \cite{Davis06book} and
\cite{DavisRajamanickamSidLakhdar16}.

%===============================================================================
\subsection{Overview of GraphBLAS methods and operations} %=====================
%===============================================================================
\label{overview}

GraphBLAS provides a collection of {\em methods} to create, query, and free
its objects: sparse matrices, sparse vectors, scalars, types, operators,
monoids, semirings, and a descriptor object used for parameter settings.
Details are given in Section~\ref{objects}.  Once these objects are created
they can be used in mathematical {\em operations} (not to be confused with
how the term {\em operator} is used in GraphBLAS).  A short summary of these
operations and their nearest MATLAB analog is given in the table below.

% \vspace{0.1in}
\begin{tabular}{ll}
operation                       & approximate MATLAB analog \\
\hline
matrix multiplication           & \verb'C=A*B' \\
element-wise operations         & \verb'C=A+B' and \verb'C=A.*B' \\
reduction to a vector or scalar & \verb's=sum(A)' \\
apply unary operator            & \verb'C=-A' \\
transpose                       & \verb"C=A'" \\
submatrix extraction            & \verb'C=A(I,J)' \\
submatrix assignment            & \verb'C(I,J)=A' \\
select                          & \verb'C=tril(A)' \\
\hline
\end{tabular}
\vspace{0.1in}

GraphBLAS can do far more than what MATLAB can do in these rough analogs, but
the list provides a first step in describing what GraphBLAS can do.  Details
of each GraphBLAS operation are given in Section~\ref{operations}.  With this
brief overview, the full scope of GraphBLAS extensions of these operations
can now be described.

GraphBLAS has 13 built-in scalar types: Boolean, single and double precision
floating-point (real and complex), and 8, 16, 32, and 64-bit signed and
unsigned integers.
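In the C API, these built-in types are the predefined \verb'GrB_Type' objects
listed in the comment below; the two complex types are SuiteSparse
\verb'GxB*' extensions.  The fragment is only an illustrative sketch of how a
typed matrix and vector are created.

{\footnotesize
\begin{verbatim}
    // the 13 built-in types:
    //   GrB_BOOL,
    //   GrB_INT8,  GrB_INT16,  GrB_INT32,  GrB_INT64,
    //   GrB_UINT8, GrB_UINT16, GrB_UINT32, GrB_UINT64,
    //   GrB_FP32,  GrB_FP64,   GxB_FC32,   GxB_FC64
    GrB_Matrix A ;
    GrB_Vector v ;
    GrB_Matrix_new (&A, GrB_FP64, 1000, 1000) ; // 1000-by-1000 double matrix
    GrB_Vector_new (&v, GrB_INT32, 1000) ;      // int32 vector of length 1000
\end{verbatim}}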
In addition, user-defined scalar types can be created from nearly any C
\verb'typedef', as long as the entire type fits in a fixed-size contiguous
block of memory (of arbitrary size).  All of these types can be used to
create GraphBLAS sparse matrices, vectors, or scalars.

The scalar addition of conventional matrix multiplication is replaced with a
{\em monoid}.  A monoid is an associative and commutative binary operator
\verb'z=f(x,y)' where all three domains are the same (the types of \verb'x',
\verb'y', and \verb'z'), and where the operator has an identity value
\verb'id' such that \verb'f(x,id)=f(id,x)=x'.  Performing matrix
multiplication with a semiring uses a monoid in place of the ``add''
operator, scalar addition being just one of many possible monoids.  The
identity value of addition is zero, since $x+0=0+x=x$.  GraphBLAS includes
many built-in operators suitable for use as a monoid: min (with an identity
value of positive infinity), max (whose identity is negative infinity), add
(identity is zero), multiply (with an identity of one), four logical
operators: AND, OR, exclusive-OR, and Boolean equality (XNOR), four bitwise
operators (AND, OR, XOR, and XNOR), and the ANY operator.  User-created
monoids can be defined with any associative and commutative operator that has
an identity value.

Finally, a semiring can use any built-in or user-defined binary operator
\verb'z=f(x,y)' as its ``multiply'' operator, as long as the type of its
output, \verb'z', matches the type of the semiring's monoid.  The user
application can create any semiring based on any types, monoids, and multiply
operators, as long as these few rules are followed.  Just considering
built-in types and operators, GraphBLAS can perform \verb'C=A*B' in thousands
of unique semirings.  With typecasting, any of these semirings can be applied
to matrices \verb'C', \verb'A', and \verb'B' of 13 predefined types, in any
combination.  This results in millions of possible kinds of sparse matrix
multiplication supported by GraphBLAS, and this is counting just built-in
types and operators.  By contrast, MATLAB provides just two semirings for its
sparse matrix multiplication \verb'C=A*B': plus-times-double and
plus-times-complex, not counting the typecasting that MATLAB does when
multiplying a real matrix times a complex matrix.

A monoid can also be used in a reduction operation, like \verb's=sum(A)' in
MATLAB.  MATLAB provides the plus, times, min, and max reductions of a real
or complex sparse matrix as \verb's=sum(A)', \verb's=prod(A)',
\verb's=min(A)', and \verb's=max(A)', respectively.  In GraphBLAS, any monoid
can be used (min, max, plus, times, AND, OR, exclusive-OR, equality, bitwise
operators, or any user-defined monoid on any user-defined type).

Element-wise operations are also expanded from what can be done in MATLAB.
Consider matrix addition, \verb'C=A+B' in MATLAB.  The pattern of the result
is the set union of the patterns of \verb'A' and \verb'B'.  In GraphBLAS, any
binary operator can be used in this set-union ``addition.''  The operator is
applied to entries in the intersection.  Entries in \verb'A' but not
\verb'B', or vice-versa, are copied directly into \verb'C', without any
application of the binary operator.  The accumulator operation for
${\bf Z = C \odot T}$ described in Section~\ref{accummask} is one example of
this set-union application of an arbitrary binary operator.

Consider element-wise multiplication, \verb'C=A.*B' in MATLAB.
The operator (multiply in this case) is applied to entries in the set
intersection, and the pattern of \verb'C' is just this set intersection.
Entries in \verb'A' but not \verb'B', or vice-versa, do not appear in
\verb'C'.  In GraphBLAS, any binary operator can be used in this manner, not
just scalar multiplication.  The difference between element-wise ``add'' and
``multiply'' is not the operators, but whether or not the pattern of the
result is the set union or the set intersection.  In both cases, the operator
is only applied to the set intersection.

Finally, GraphBLAS includes a {\em non-blocking} mode where operations can be
left pending, and saved for later.  This is very useful for submatrix
assignment (\verb'C(I,J)=A' where \verb'I' and \verb'J' are integer vectors),
or scalar assignment (\verb'C(i,j)=x' where \verb'i' and \verb'j' are scalar
integers).  Because of how MATLAB stores its matrices, adding and deleting
individual entries is very costly.  For example, this is very slow in MATLAB,
taking $O(nz^2)$ time:

\begin{mdframed}
{\footnotesize
\begin{verbatim}
A = sparse (m,n) ;   % an empty sparse matrix
for k = 1:nz
    compute a value x, row index i, and column index j
    A (i,j) = x ;
end\end{verbatim}}\end{mdframed}

The above code is very easy to read and simple to write, but exceedingly
slow.  In MATLAB, the method below is preferred and is far faster, taking at
most $O(|{\bf A}| \log |{\bf A}| +n)$ time.  It can easily be a million times
faster than the method above.  Unfortunately the second method below is a
little harder to read and a little less natural to write:

\begin{mdframed}
{\footnotesize
\begin{verbatim}
I = zeros (nz,1) ;
J = zeros (nz,1) ;
X = zeros (nz,1) ;
for k = 1:nz
    compute a value x, row index i, and column index j
    I (k) = i ;
    J (k) = j ;
    X (k) = x ;
end
A = sparse (I,J,X,m,n) ;
\end{verbatim}} \end{mdframed}

GraphBLAS can do both methods.  SuiteSparse:GraphBLAS stores its matrices in
a format that allows for pending computations, which are done later in bulk,
and as a result it can do both methods above equally as fast as the MATLAB
\verb'sparse' function, allowing the user to write simpler code.

%===============================================================================
\subsection{The accumulator and the mask} %=====================================
%===============================================================================
\label{accummask}

Most GraphBLAS operations can be modified via transposing input matrices,
using an accumulator operator, applying a mask or its complement, and by
clearing all entries in the matrix \verb'C' after using it in the accumulator
operator but before the final results are written back into it.  All of these
steps are optional, and are controlled by a descriptor object that holds
parameter settings (see Section~\ref{descriptor}) that control the following
options:

\begin{itemize}
\item the input matrices \verb'A' and/or \verb'B' can be transposed first.

\item an accumulator operator can be used, like the plus in the statement
\verb'C=C+A*B'.  The accumulator operator can be any binary operator, and an
element-wise ``add'' (set union) is performed using the operator.

\item an optional {\em mask} can be used to selectively write the results to
the output.  The mask is a sparse Boolean matrix \verb'Mask' whose size is
the same size as the result.  If \verb'Mask(i,j)' is true, then the
corresponding entry in the output can be modified by the computation.
If \verb'Mask(i,j)' is false, then the corresponding entry in the output is
protected and cannot be modified by the computation.  The \verb'Mask' matrix
acts exactly like logical matrix indexing in MATLAB, with one minor
difference: in GraphBLAS notation, the mask operation is
${\bf C \langle M \rangle = Z}$, where the mask ${\bf M}$ appears only on the
left-hand side.  In MATLAB, it would appear on both sides as
\verb'C(Mask)=Z(Mask)'.  If no mask is provided, the \verb'Mask' matrix is
implicitly all true.  This is indicated by passing the value \verb'GrB_NULL'
in place of the \verb'Mask' argument in GraphBLAS operations.
\end{itemize}

\noindent
This process can be described in mathematical notation as:
\vspace{-0.2in}
{\small
\begin{tabbing}
\hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \\
\> ${\bf A = A}^{\sf T}$, if requested via descriptor (first input option) \\
\> ${\bf B = B}^{\sf T}$, if requested via descriptor (second input option) \\
\> ${\bf T}$ is computed according to the specific operation \\
\> ${\bf C \langle M \rangle = C \odot T}$, accumulating and writing the results back via the mask
\end{tabbing} }
\noindent
The application of the mask and the accumulator operator is written as
${\bf C \langle M \rangle = C \odot T}$ where ${\bf Z = C \odot T}$ denotes
the application of the accumulator operator, and
${\bf C \langle M \rangle = Z}$ denotes the mask operator via the Boolean
matrix ${\bf M}$.  The Accumulator Phase, ${\bf Z = C \odot T}$, is performed
as follows:
\vspace{-0.2in}
% accum: Z = C odot T
{\small
\begin{tabbing}
\hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \\
\> {\bf Accumulator Phase}: compute ${\bf Z = C \odot T}$: \\
\> \> if \verb'accum' is \verb'NULL' \\
\> \>\> ${\bf Z = T}$ \\
\> \> else \\
\> \>\> ${\bf Z = C \odot T}$
\end{tabbing}}
The accumulator operator is $\odot$ in GraphBLAS notation, or \verb'accum' in
the code.  The pattern of ${\bf C \odot T}$ is the set union of the patterns
of ${\bf C}$ and ${\bf T}$, and the operator is applied only on the set
intersection of ${\bf C}$ and ${\bf T}$.  Entries that appear in neither the
pattern of ${\bf C}$ nor that of ${\bf T}$ do not appear in the pattern of
${\bf Z}$.
That is: \newpage % \vspace{-0.2in} {\small \begin{tabbing} \hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \\ \> for all entries $(i,j)$ in ${\bf C \cap T}$ (that is, entries in both ${\bf C}$ and ${\bf T}$) \\ \> \> $z_{ij} = c_{ij} \odot t_{ij}$ \\ \> for all entries $(i,j)$ in ${\bf C \setminus T}$ (that is, entries in ${\bf C}$ but not ${\bf T}$) \\ \> \> $z_{ij} = c_{ij}$ \\ \> for all entries $(i,j)$ in ${\bf T \setminus C}$ (that is, entries in ${\bf T}$ but not ${\bf C}$) \\ \> \> $z_{ij} = t_{ij}$ \end{tabbing} } The Accumulator Phase is followed by the Mask/Replace Phase, ${\bf C \langle M \rangle = Z}$ as controlled by the \verb'GrB_REPLACE' and \verb'GrB_COMP' descriptor options: \vspace{-0.2in} % mask/replace/scmp: C<M> = Z {\small \begin{tabbing} \hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \\ \>{\bf Mask/Replace Phase}: compute ${\bf C \langle M \rangle = Z}$: \\ \> \> if (\verb'GrB_REPLACE') delete all entries in ${\bf C}$ \\ \> \> if \verb'Mask' is \verb'NULL' \\ \> \>\> if (\verb'GrB_COMP') \\ \> \>\>\> ${\bf C}$ is not modified \\ \> \>\> else \\ \> \>\>\> ${\bf C = Z}$ \\ \> \> else \\ \> \>\> if (\verb'GrB_COMP') \\ \> \>\>\> ${\bf C \langle \neg M \rangle = Z}$ \\ \> \>\> else \\ \> \>\>\> ${\bf C \langle M \rangle = Z}$ \end{tabbing} } Both phases of the accum/mask process are illustrated in MATLAB notation in Figure~\ref{fig_accummask}. A GraphBLAS operation starts with its primary computation, producing a result \verb'T'; for matrix multiply, \verb'T=A*B', or if \verb'A' is transposed first, \verb"T=A'*B", for example. Applying the accumulator, mask (or its complement) to obtain the final result matrix \verb'C' can be expressed in the MATLAB \verb'accum_mask' function shown in the figure. This function is an exact, fully functional, and nearly-complete description of the GraphBLAS accumulator/mask operation. The only aspects it does not consider are typecasting (see Section~\ref{typecasting}), and the value of the implicit identity (for those, see another version in the \verb'Test' folder). 
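In the C API, the accumulator, the mask, and the descriptor options are
selected directly in the call to each operation.  The fragment below is a
small sketch (not a complete program; \verb'C', \verb'M', \verb'A', and
\verb'B' are assumed to be \verb'GrB_FP64' matrices created elsewhere).  It
first computes ${\bf C \langle M \rangle = C + AB}$ with the conventional
plus-times semiring, and then repeats the multiplication with the mask
complemented and the replace option, using the predefined descriptor
\verb'GrB_DESC_RC'.

{\footnotesize
\begin{verbatim}
    // C<M> += A*B : mask M, accumulator GrB_PLUS_FP64, default descriptor
    GrB_mxm (C, M, GrB_PLUS_FP64, GrB_PLUS_TIMES_SEMIRING_FP64,
        A, B, GrB_NULL) ;

    // C<!M> = A*B : complemented mask, and C is cleared first (GrB_REPLACE)
    GrB_mxm (C, M, GrB_NULL, GrB_PLUS_TIMES_SEMIRING_FP64,
        A, B, GrB_DESC_RC) ;
\end{verbatim}}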
\begin{figure} \begin{mdframed}[leftmargin=-0.4in,userdefinedwidth=5.8in] {\footnotesize \begin{verbatim} function C = accum_mask (C, Mask, accum, T, C_replace, Mask_complement) [m n] = size (C.matrix) ; Z.matrix = zeros (m, n) ; Z.pattern = false (m, n) ; if (isempty (accum)) Z = T ; % no accum operator else % Z = accum (C,T), like Z=C+T but with an binary operator, accum p = C.pattern & T.pattern ; Z.matrix (p) = accum (C.matrix (p), T.matrix (p)); p = C.pattern & ~T.pattern ; Z.matrix (p) = C.matrix (p) ; p = ~C.pattern & T.pattern ; Z.matrix (p) = T.matrix (p) ; Z.pattern = C.pattern | T.pattern ; end % apply the mask to the values and pattern C.matrix = mask (C.matrix, Mask, Z.matrix, C_replace, Mask_complement) ; C.pattern = mask (C.pattern, Mask, Z.pattern, C_replace, Mask_complement) ; end function C = mask (C, Mask, Z, C_replace, Mask_complement) % replace C if requested if (C_replace) C (:,:) = 0 ; end if (isempty (Mask)) % if empty, Mask is implicit ones(m,n) % implicitly, Mask = ones (size (C)) if (~Mask_complement) C = Z ; % this is the default else C = C ; % Z need never have been computed end else % apply the mask if (~Mask_complement) C (Mask) = Z (Mask) ; else C (~Mask) = Z (~Mask) ; end end end \end{verbatim} } \end{mdframed} \caption{Applying the mask and accumulator, ${\bf C \langle M \rangle = C \odot T}$\label{fig_accummask}} \end{figure} One aspect of GraphBLAS cannot be as easily expressed in a MATLAB sparse matrix: namely, what is the implicit value of entries not in the pattern? To accommodate this difference in the \verb'accum_mask' MATLAB function, each sparse matrix \verb'A' is represented with its values \verb'A.matrix' and its pattern, \verb'A.pattern'. The latter could be expressed as the sparse matrix \verb'A.pattern=spones(A)' or \verb'A.pattern=(A~=0)' in MATLAB, if the implicit value is zero. With different semirings, entries not in the pattern can be \verb'1', \verb'+Inf', \verb'-Inf', or whatever is the identity value of the monoid. As a result, Figure~\ref{fig_accummask} performs its computations on two MATLAB matrices: the values in \verb'A.matrix' and the pattern in the logical matrix \verb'A.pattern'. Implicit values are untouched. The final computation in Figure~\ref{fig_accummask} with a complemented \verb'Mask' is easily expressed in MATLAB as \verb'C(~Mask)=Z(~Mask)' but this is costly if \verb'Mask' is very sparse (the typical case). It can be computed much faster in MATLAB without complementing the sparse \verb'Mask' via: {\footnotesize \begin{verbatim} R = Z ; R (Mask) = C (Mask) ; C = R ; \end{verbatim} } A set of MATLAB functions that precisely compute the ${\bf C \langle M \rangle = C \odot T}$ operation according to the full GraphBLAS specification is provided in SuiteSparse:GraphBLAS as \verb'GB_spec_accum.m', which computes ${\bf Z=C\odot T}$, and \verb'GB_spec_mask.m', which computes ${\bf C \langle M \rangle = Z}$. SuiteSparse:GraphBLAS includes a complete list of \verb'GB_spec_*' functions that illustrate every GraphBLAS operation. The methods in Figure~\ref{fig_accummask} rely heavily on MATLAB's logical matrix indexing. For those unfamiliar with logical indexing in MATLAB, here is short summary. Logical matrix indexing in MATLAB is written as \verb'A(Mask)' where \verb'A' is any matrix and \verb'Mask' is a logical matrix the same size as \verb'A'. The expression \verb'x=A(Mask)' produces a column vector \verb'x' consisting of the entries of \verb'A' where \verb'Mask' is true. 
On the left-hand side, logical submatrix assignment \verb'A(Mask)=x' does the opposite, copying the components of the vector \verb'x' into the places in \verb'A' where \verb'Mask' is true. For example, to negate all values greater than 10 using logical indexing in MATLAB: \begin{mdframed} {\footnotesize \begin{verbatim} >> A = magic (4) A = 16 2 3 13 5 11 10 8 9 7 6 12 4 14 15 1 >> A (A>10) = - A (A>10) A = -16 2 3 -13 5 -11 10 8 9 7 6 -12 4 -14 -15 1 \end{verbatim} } \end{mdframed} In MATLAB, logical indexing with a sparse matrix \verb'A' and sparse logical matrix \verb'Mask' is a built-in method. The Mask operator in GraphBLAS works identically as sparse logical indexing in MATLAB, but is typically far faster in SuiteSparse:GraphBLAS than the same operation using MATLAB sparse matrices. %=============================================================================== \subsection{Typecasting} %====================================================== %=============================================================================== \label{typecasting} If an operator \verb'z=f(x)' or \verb'z=f(x,y)' is used with inputs that do not match its inputs \verb'x' or \verb'y', or if its result \verb'z' does not match the type of the matrix it is being stored into, then the values are typecasted. Typecasting in GraphBLAS extends beyond just operators. Almost all GraphBLAS methods and operations are able to typecast their results, as needed. If one type can be typecasted into the other, they are said to be {\em compatible}. All built-in types are compatible with each other. GraphBLAS cannot typecast user-defined types thus any user-defined type is only compatible with itself. When GraphBLAS requires inputs of a specific type, or when one type cannot be typecast to another, the GraphBLAS function returns an error code, \verb'GrB_DOMAIN_MISMATCH' (refer to Section~\ref{error} for a complete list of error codes). Typecasting can only be done between built-in types, and it follows the rules of the ANSI C language (not MATLAB) wherever the rules of ANSI C are well-defined. However, unlike MATLAB, the ANSI C11 language specification states that the results of typecasting a \verb'float' or \verb'double' to an integer type is not always defined. In SuiteSparse:GraphBLAS, whenever C leaves the result undefined the rules used in MATLAB are followed. In particular \verb'+Inf' converts to the largest integer value, \verb'-Inf' converts to the smallest (zero for unsigned integers), and \verb'NaN' converts to zero. Positive values outside the range of the integer are converted to the largest positive integer, and negative values less than the most negative integer are converted to that most negative integer. Other than these special cases, SuiteSparse:GraphBLAS trusts the C compiler for the rest of its typecasting. Typecasting to \verb'bool' is fully defined in the C language specification, even for \verb'NaN'. The result is \verb'false' if the value compares equal to zero, and true otherwise. Thus \verb'NaN' converts to \verb'true'. This is unlike MATLAB, which does not allow a typecast of a \verb'NaN' to the MATLAB logical type. \begin{alert} {\bf SPEC:} the GraphBLAS API C Specification states that typecasting follows the rules of ANSI C. Yet C leaves some typecasting undefined. All typecasting between built-in types in SuiteSparse:GraphBLAS is precisely defined, as an extension to the spec. 
\end{alert} %=============================================================================== \subsection{Notation and list of GraphBLAS operations} %======================== %=============================================================================== \label{list} As a summary of what GraphBLAS can do, the following table lists all GraphBLAS operations. Upper case letters denote a matrix, lower case letters are vectors, and ${\bf AB}$ denote the multiplication of two matrices over a semiring. \vspace{0.05in} {\footnotesize \begin{tabular}{lll} \hline \verb'GrB_mxm' & matrix-matrix multiply & ${\bf C \langle M \rangle = C \odot AB}$ \\ \verb'GrB_vxm' & vector-matrix multiply & ${\bf w^{\sf T}\langle m^{\sf T}\rangle = w^{\sf T}\odot u^{\sf T}A}$ \\ \verb'GrB_mxv' & matrix-vector multiply & ${\bf w \langle m \rangle = w \odot Au}$ \\ \hline \verb'GrB_eWiseMult' & element-wise, & ${\bf C \langle M \rangle = C \odot (A \otimes B)}$ \\ & set intersection & ${\bf w \langle m \rangle = w \odot (u \otimes v)}$ \\ \hline \verb'GrB_eWiseAdd' & element-wise, & ${\bf C \langle M \rangle = C \odot (A \oplus B)}$ \\ & set union & ${\bf w \langle m \rangle = w \odot (u \oplus v)}$ \\ \hline \verb'GrB_extract' & extract submatrix & ${\bf C \langle M \rangle = C \odot A(I,J)}$ \\ & & ${\bf w \langle m \rangle = w \odot u(i)}$ \\ \hline \verb'GxB_subassign' & assign submatrix & ${\bf C (I,J) \langle M \rangle = C(I,J) \odot A}$ \\ & (with submask for ${\bf C(I,J)}$) & ${\bf w (i) \langle m \rangle = w(i) \odot u}$ \\ \hline \verb'GrB_assign' & assign submatrix & ${\bf C \langle M \rangle (I,J) = C(I,J) \odot A}$ \\ & (with mask for ${\bf C}$) & ${\bf w \langle m \rangle (i) = w(i) \odot u}$ \\ \hline \verb'GrB_apply' & apply unary operator & ${\bf C \langle M \rangle = C \odot} f{\bf (A)}$ \\ & & ${\bf w \langle m \rangle = w \odot} f{\bf (u)}$ \\ & apply binary operator & ${\bf C \langle M \rangle = C \odot} f({\bf A},y)$ \\ & & ${\bf C \langle M \rangle = C \odot} f(x,{\bf A})$ \\ & & ${\bf w \langle m \rangle = w \odot} f({\bf u},y)$ \\ & & ${\bf w \langle m \rangle = w \odot} f(x,{\bf u})$ \\ \hline \verb'GxB_select' & apply select operator & ${\bf C \langle M \rangle = C \odot} f({\bf A},k)$ \\ & & ${\bf w \langle m \rangle = w \odot} f({\bf u},k)$ \\ \hline \verb'GrB_reduce' & reduce to vector & ${\bf w \langle m \rangle = w \odot} [{\oplus}_j {\bf A}(:,j)]$ \\ & reduce to scalar & $s = s \odot [{\oplus}_{ij} {\bf A}(i,j)]$ \\ \hline \verb'GrB_transpose' & transpose & ${\bf C \langle M \rangle = C \odot A^{\sf T}}$ \\ \hline \verb'GrB_kronecker' & Kronecker product & ${\bf C \langle M \rangle = C \odot \mbox{kron}(A, B)}$ \\ \hline \end{tabular} } \vspace{0.15in} Each operation takes an optional \verb'GrB_Descriptor' argument that modifies the operation. The input matrices ${\bf A}$ and ${\bf B}$ can be optionally transposed, the mask ${\bf M}$ can be complemented, and ${\bf C}$ can be cleared of its entries after it is used in ${\bf Z = C \odot T}$ but before the ${\bf C \langle M \rangle = Z}$ assignment. Vectors are never transposed via the descriptor. Let ${\bf A \oplus B}$ denote the element-wise operator that produces a set union pattern (like \verb'A+B' in MATLAB). Any binary operator can be used this way in GraphBLAS, not just plus. Let ${\bf A \otimes B}$ denote the element-wise operator that produces a set intersection pattern (like \verb'A.*B' in MATLAB); any binary operator can be used this way, not just times. 
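As a concrete C illustration of the ${\bf A \oplus B}$ and ${\bf A \otimes B}$
notation, the sketch below (illustrative only; the matrices are assumed to be
\verb'GrB_FP64' and already created) applies the same binary operator,
\verb'GrB_MIN_FP64', in both the set-union and set-intersection forms; only
the pattern of the result differs.

{\footnotesize
\begin{verbatim}
    // set union: C(i,j) exists if A(i,j) or B(i,j) exists
    GrB_Matrix_eWiseAdd_BinaryOp  (C, GrB_NULL, GrB_NULL, GrB_MIN_FP64,
        A, B, GrB_NULL) ;
    // set intersection: C(i,j) exists only if both A(i,j) and B(i,j) exist
    GrB_Matrix_eWiseMult_BinaryOp (C, GrB_NULL, GrB_NULL, GrB_MIN_FP64,
        A, B, GrB_NULL) ;
\end{verbatim}}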
Reduction of a matrix ${\bf A}$ to a vector reduces the $i$th row of ${\bf A}$ to a scalar $w_i$. This is like \verb"w=sum(A')" since by default, MATLAB reduces down the columns, not across the rows. \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Interfaces to MATLAB, Python, Julia, Java} %%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The MATLAB interface to SuiteSparse:GraphBLAS is included with this distribution, described in Section~\ref{matlab}. It is fully polished, and fully tested, but does have some limitations that will be addressed in future releases. A beta version of a Python interface is now available, as is a Julia interface. These are not part of the SuiteSparse:GraphBLAS distribution. See the links below (see Sections \ref{python} and \ref{julia}). %=============================================================================== \subsection{MATLAB Interface} %=============================================================================== \label{matlab} An easy-to-use MATLAB interface for SuiteSparse:GraphBLAS is available; see the documentation in the \verb'GraphBLAS/GraphBLAS' folder for details. Start with the \verb'README.md' file in that directory. An easy-to-read output of the MATLAB demos can be found in \verb'GraphBLAS/GraphBLAS/demo/html'. The MATLAB interface adds the \verb'@GrB' class, which is an opaque MATLAB object that contains a GraphBLAS matrix, either double or single precision (real or complex), boolean, or any of the built-in integer types. MATLAB sparse and full matrices can be arbitrarily mixed with GraphBLAS matrices. The following overloaded operators and methods all work as you would expect for any matrix. The matrix multiplication \verb'A*B' uses the conventional \verb'PLUS_TIMES' semiring. {\footnotesize \begin{verbatim} A+B A-B A*B A.*B A./B A.\B A.^b A/b C=A(I,J) -A +A ~A A' A.' A&B A|B b\A C(I,J)=A A~=B A>B A==B A<=B A>=B A<B [A,B] [A;B] A(1:end,1:end) \end{verbatim}} For a list of overloaded operations and static methods, type \verb'methods GrB' in MATLAB, or \verb'help GrB' for more details. {\bf Limitations:} Some features for MATLAB sparse matrices are not yet available for GraphBLAS matrices. Some of these may be added in future releases. \begin{packed_itemize} \item If you save a GrB matrix object from MATLAB using one version of SuiteSparse:GraphBLAS, you can load it back in in that version or in a later version. You cannot go backwards, and save in v5.x and load in v4.x, for example. \item \verb'GrB' matrices with dimension larger than \verb'2^53' do not display properly in the MATLAB \verb'whos' command. MATLAB gets this information from \verb'size(A)', which returns a correct result, but MATLAB rounds it to double before displaying it. The size is displayed correctly with \verb'disp' or \verb'display'. \item Non-blocking mode is not exploited; this would require a MATLAB mexFunction to modify its inputs, which is technically possible but not permitted by the MATLAB API. This can have significant impact on performance, if a MATLAB m-file makes many repeated tiny changes to a matrix. This kind of computation can often be done with good performance in the C API, but will be very slow in MATLAB. \item Linear indexing, or \verb'A(:)' for a 2D matrix, and a single output of \verb'I=find(A)'. \item The second output for \verb'min' and \verb'max', and the \verb'includenan' option. \item Singleton expansion. 
\item Dynamically growing arrays, where \verb'C(i)=x' can increase the size of \verb'C'. \item Saturating element-wise binary and unary operators for integers. For \verb'C=A+B' with MATLAB \verb'uint8' matrices, results saturate if they exceed 255. This is not compatible with a monoid for \verb'C=A*B', and thus MATLAB does not support matrix-matrix multiplication with \verb'uint8' matrices. In GraphBLAS, \verb'uint8' addition acts in a modulo fashion. Saturating binary operators could be added in the future, so that \verb"GrB.eadd (A, '+saturate', B)" could return the MATLAB result. \item Solvers, so that \verb'x=A\b' could return a GF(2) solution, for example. \item Sparse matrices with dimension higher than 2. \end{packed_itemize} %=============================================================================== \subsection{Python Interface} %=============================================================================== \label{python} See Michel Pelletier's Python interface at \href{https://github.com/michelp/pygraphblas}{https://github.com/michelp/pygraphblas}; it also appears at \href{https://anaconda.org/conda-forge/pygraphblas}{https://anaconda.org/conda-forge/pygraphblas}. See Jim Kitchen and Erik Welch's (both from Anaconda, Inc.) Python interface at \href{https://github.com/metagraph-dev/grblas}{https://github.com/metagraph-dev/grblas}. See also \href{https://anaconda.org/conda-forge/graphblas}{https://anaconda.org/conda-forge/graphblas}. %=============================================================================== \subsection{Julia Interface} %=============================================================================== \label{julia} See Abhinav Mehndiratta's Julia interface at \\ \href{https://github.com/abhinavmehndiratta/SuiteSparseGraphBLAS.jl}{https://github.com/abhinavmehndiratta/SuiteSparseGraphBLAS.jl}. %=============================================================================== \subsection{Java Interface} %=============================================================================== \label{java} Fabian Murariu is working on a Java interface. See \newline \href{https://github.com/fabianmurariu/graphblas-java-native}{https://github.com/fabianmurariu/graphblas-java-native}. \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{GraphBLAS Context and Sequence} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{context} A user application that directly relies on GraphBLAS must include the \verb'GraphBLAS.h' header file: \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} #include "GraphBLAS.h" \end{verbatim} } \end{mdframed} The \verb'GraphBLAS.h' file defines functions, types, and macros prefixed with \verb'GrB_' and \verb'GxB_' that may be used in user applications. The prefix \verb'GrB_' denote items that appear in the official {\em GraphBLAS C API Specification}. The prefix \verb'GxB_' refers to SuiteSparse-specific extensions to the GraphBLAS API. The \verb'GraphBLAS.h' file includes all the definitions required to use GraphBLAS, including the following macros that can assist a user application in compiling and using GraphBLAS. There are two version numbers associated with SuiteSparse:GraphBLAS: the version of the {\em GraphBLAS C API Specification} it conforms to, and the version of the implementation itself. 
These can be used in the following manner in a user application: {\footnotesize \begin{verbatim} #if GxB_SPEC_VERSION >= GxB_VERSION (2,0,3) ... use features in GraphBLAS specification 2.0.3 ... #else ... only use features in early specifications #endif #if GxB_IMPLEMENTATION >= GxB_VERSION (4,0,0) ... use features from version 4.0.1 (or later) of a specific GraphBLAS implementation #endif \end{verbatim}} SuiteSparse:GraphBLAS also defines the following strings with \verb'#define'. Refer to the \verb'GraphBLAS.h' file for details. \vspace{0.2in} {\footnotesize \begin{tabular}{ll} \hline Macro & purpose \\ \hline \verb'GxB_IMPLEMENTATION_ABOUT' & this particular implementation, copyright, and URL \\ \verb'GxB_IMPLEMENTATION_DATE' & the date of this implementation \\ \verb'GxB_SPEC_ABOUT' & the GraphBLAS specification for this implementation \\ \verb'GxB_SPEC_DATE' & the date of the GraphBLAS specification \\ \verb'GxB_IMPLEMENTATION_LICENSE' & the license for this particular implementation \\ \hline \end{tabular} } \vspace{0.2in} Finally, SuiteSparse:GraphBLAS gives itself a unique name of the form \verb'GxB_SUITESPARSE_GRAPHBLAS' that the user application can use in \verb'#ifdef' tests. This is helpful in case a particular implementation provides non-standard features that extend the GraphBLAS specification, such as additional predefined built-in operators, or if a GraphBLAS implementation does not yet fully implement all of the GraphBLAS specification. The SuiteSparse:GraphBLAS name is provided in its \verb'GraphBLAS.h' file as: {\footnotesize \begin{verbatim} #define GxB_SUITESPARSE_GRAPHBLAS \end{verbatim}} For example, SuiteSparse:GraphBLAS predefines additional built-in operators not in the specification. If the user application wishes to use these in any GraphBLAS implementation, an \verb'#ifdef' can control when they are used. Refer to the examples in the \verb'GraphBLAS/Demo' folder. As another example, the GraphBLAS API states that an implementation need not define the order in which \verb'GrB_Matrix_build' assembles duplicate tuples in its \verb'[I,J,X]' input arrays. As a result, no particular ordering should be relied upon in general. However, SuiteSparse:GraphBLAS does guarantee an ordering, and this guarantee will be kept in future versions of SuiteSparse:GraphBLAS as well. Since not all implementations will ensure a particular ordering, the following can be used to exploit the ordering returned by SuiteSparse:GraphBLAS. {\footnotesize \begin{verbatim} #ifdef GxB_SUITESPARSE_GRAPHBLAS // duplicates in I, J, X assembled in a specific order; // results are well-defined even if op is not associative. GrB_Matrix_build (C, I, J, X, nvals, op) ; #else // duplicates in I, J, X assembled in no particular order; // results are undefined if op is not associative. 
GrB_Matrix_build (C, I, J, X, nvals, op) ; #endif \end{verbatim}} The remainder of this section describes GraphBLAS functions that create, modify, and destroy the GraphBLAS context, or provide utility methods for dealing with errors: \vspace{0.2in} {\footnotesize \begin{tabular}{lll} \hline GraphBLAS function & purpose & Section \\ \hline \verb'GrB_init' & start up GraphBLAS & \ref{init} \\ \verb'GrB_getVersion'& C API supported by the library & \ref{getVersion} \\ \verb'GxB_init' & start up GraphBLAS with different \verb'malloc' & \ref{xinit} \\ \verb'GrB_Info' & status code returned by GraphBLAS functions & \ref{info} \\ \verb'GrB_error' & get more details on the last error & \ref{error} \\ \verb'GrB_finalize' & finish GraphBLAS & \ref{finalize} \\ \hline \end{tabular} } \vspace{0.2in} %=============================================================================== \subsection{{\sf GrB\_Index:} the GraphBLAS integer} %========================== %=============================================================================== \label{grbindex} Matrix and vector dimensions and indexing rely on a specific integer, \verb'GrB_Index', which is defined in \verb'GraphBLAS.h' as {\footnotesize \begin{verbatim} typedef uint64_t GrB_Index ; \end{verbatim}} Row and column indices of an \verb'nrows'-by-\verb'ncols' matrix range from zero to the \verb'nrows-1' for the rows, and zero to \verb'ncols-1' for the columns. Indices are zero-based, like C, and not one-based, like MATLAB. In SuiteSparse:GraphBLAS, the largest size permitted for any integer of \verb'GrB_Index' is $2^{60}$. The largest \verb'GrB_Matrix' that SuiteSparse:GraphBLAS can construct is thus $2^{60}$-by-$2^{60}$. An $n$-by-$n$ matrix $A$ that size can easily be constructed in practice with $O(|{\bf A}|)$ memory requirements, where $|{\bf A}|$ denotes the number of entries that explicitly appear in the pattern of ${\bf A}$. The time and memory required to construct a matrix that large does not depend on $n$, since SuiteSparse:GraphBLAS can represent ${\bf A}$ in hypersparse form (see Section~\ref{hypersparse}). The largest \verb'GrB_Vector' that can be constructed is $2^{60}$-by-1. %=============================================================================== \subsection{{\sf GrB\_init:} initialize GraphBLAS} %============================ %=============================================================================== \label{init} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} typedef enum { GrB_NONBLOCKING = 0, // methods may return with pending computations GrB_BLOCKING = 1 // no computations are ever left pending } GrB_Mode ; \end{verbatim} }\end{mdframed} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_init // start up GraphBLAS ( GrB_Mode mode // blocking or non-blocking mode ) ; \end{verbatim} }\end{mdframed} \hypertarget{link:init}{\mbox{ }}% \verb'GrB_init' must be called before any other GraphBLAS operation. It defines the mode that GraphBLAS will use: blocking or non-blocking. With blocking mode, all operations finish before returning to the user application. With non-blocking mode, operations can be left pending, and are computed only when needed. Non-blocking mode can be much faster than blocking mode, by many orders of magnitude in extreme cases. Blocking mode should be used only when debugging a user application. The mode cannot be changed once it is set by \verb'GrB_init'. GraphBLAS objects are opaque. 
This allows GraphBLAS to postpone operations and then do them later in a more efficient manner by rearranging them and grouping them together. In non-blocking mode, the computations required to construct an opaque GraphBLAS object might not be finished when the GraphBLAS method or operation returns to the user. However, user-provided arrays are not opaque, and GraphBLAS methods and operations that read them (such as \verb'GrB_Matrix_build') or write to them (such as \verb'GrB_Matrix_extractTuples') always finish reading them, or creating them, when the method or operation returns to the user application. All methods and operations that extract values from a GraphBLAS object and return them into non-opaque user arrays always ensure that the user-visible arrays are fully populated when they return: \verb'GrB_*_reduce' (to scalar), \verb'GrB_*_nvals', \verb'GrB_*_extractElement', and \verb'GrB_*_extractTuples'. These functions do {\em not} guarantee that the opaque objects they depend on are finalized. To do that, use \verb'GrB_wait(&object)' instead. SuiteSparse:GraphBLAS is multithreaded internally, via OpenMP, and it is also safe to use in a multithreaded user application. See Section~\ref{sec:install} for details. User threads must not operate on the same matrices at the same time, with one exception. Multiple user threads can use the same matrices or vectors as read-only inputs to GraphBLAS operations or methods, but only if they have no pending operations (use \verb'GrB_Matrix_wait' or \verb'GrB_Vector_wait' first). User threads cannot simultaneously modify a matrix or vector via any GraphBLAS operation or method. It is safe to use the internal parallelism in SuiteSparse:GraphBLAS on matrices, vectors, and scalars that are not yet completed. The library handles this on its own. The \verb'GrB_*_wait(&object)' function is only needed when a user application makes multiple calls to GraphBLAS in parallel, from multiple user threads. With multiple user threads, exactly one user thread must call \verb'GrB_init' before any user thread may call any \verb'GrB_*' or \verb'GxB_*' function. When the user application is finished, exactly one user thread must call \verb'GrB_finalize', after which no user thread may call any \verb'GrB_*' or \verb'GxB_*' function. The mode of a GraphBLAS session can be queried with \verb'GxB_get'; see Section~\ref{options} for details. \newpage %=============================================================================== \subsection{{\sf GrB\_getVersion:} determine the C API Version} %=============== %=============================================================================== \label{getVersion} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_getVersion // runtime access to C API version number ( unsigned int *version, // returns GRB_VERSION unsigned int *subversion // returns GRB_SUBVERSION ) ; \end{verbatim} }\end{mdframed} GraphBLAS defines two compile-time constants that define the version of the C API Specification that is implemented by the library: \verb'GRB_VERSION' and \verb'GRB_SUBVERSION'. If the user program was compiled with one version of the library but linked with a different one later on, the compile-time version check with \verb'GRB_VERSION' would be stale. \verb'GrB_getVersion' thus provides a runtime access of the version of the C API Specification supported by the library. 
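For example, a run-time check along the following lines (a small sketch;
error handling omitted) can confirm that the library the application is
linked with supports the C API version the application was written against.

{\footnotesize
\begin{verbatim}
    unsigned int version, subversion ;
    GrB_getVersion (&version, &subversion) ;
    printf ("C API supported by the library: %u.%u\n", version, subversion) ;
    printf ("C API this program was compiled with: %d.%d\n",
        GRB_VERSION, GRB_SUBVERSION) ;
    if (version < GRB_VERSION) { /* the library is older than expected */ }
\end{verbatim}}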
\begin{alert} {\bf SPEC:} This version of SuiteSparse:GraphBLAS supports \input{GraphBLAS_API_version.tex} of the C API Specification, with the exception of changes to \verb'GrB_wait', \verb'GrB_error', and \verb'GrB_Matrix_reduce_BinaryOp'. \end{alert} %=============================================================================== \subsection{{\sf GxB\_init:} initialize with alternate malloc} %================ %=============================================================================== \label{xinit} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_init // start up GraphBLAS and also define malloc, etc ( GrB_Mode mode, // blocking or non-blocking mode // pointers to memory management functions. void * (* user_malloc_function ) (size_t), void * (* user_calloc_function ) (size_t, size_t), void * (* user_realloc_function ) (void *, size_t), void (* user_free_function ) (void *), bool user_malloc_is_thread_safe ) ; \end{verbatim} }\end{mdframed} \verb'GxB_init' is identical to \verb'GrB_init', except that it also redefines the memory management functions that SuiteSparse:GraphBLAS will use. Giving the user application control over this is particularly important when using the \verb'GxB_*import' and \verb'GxB_*export' functions described in Section~\ref{import_export}, since they require the user application and GraphBLAS to use the same memory manager. \verb'user_calloc_function' and \verb'user_realloc_function' are optional, and may be \verb'NULL'. If \verb'NULL', then the \verb'user_malloc_function' is relied on instead, for all memory allocations. These functions can only be set once, when GraphBLAS starts. Either \verb'GrB_init' or \verb'GxB_init' must be called before any other GraphBLAS operation, but not both. The last argument to \verb'GxB_init' informs GraphBLAS as to whether or not the functions are thread-safe. The ANSI C and Intel TBB functions are thread-safe, but the MATLAB \verb'mxMalloc' and related functions are not thread-safe. If not thread-safe, GraphBLAS calls the functions from inside an OpenMP critical section. The following usage is identical to \verb'GrB_init(mode)': {\footnotesize \begin{verbatim} GxB_init (mode, malloc, calloc, realloc, free, true) ; \end{verbatim}} SuiteSparse:GraphBLAS can be compiled as normal (outside of MATLAB) and then linked into a MATLAB \verb'mexFunction'. However, a \verb'mexFunction' should use the MATLAB memory managers. To do this, use the following instead of \verb'GrB_init(mode)' in a MATLAB \verb'mexFunction', with the flag \verb'false' since these functions are not thread-safe: {\footnotesize \begin{verbatim} #include "mex.h" #include "GraphBLAS.h" ... GxB_init (mode, mxMalloc, mxCalloc, mxRealloc, mxFree, false) ; \end{verbatim}} Passing in the last parameter as \verb'false' requires that GraphBLAS be compiled with OpenMP. Internally, SuiteSparse:GraphBLAS never calls any memory management function inside a parallel region. Results are undefined if all three of the following conditions hold: (1) the user application calls GraphBLAS in parallel from multiple user-level threads, (2) the memory functions are not thread-safe, and (3) GraphBLAS is not compiled with OpenMP. Safety is guaranteed if at least one of those conditions is false. To use the scalable Intel TBB memory manager: {\footnotesize \begin{verbatim} #include "tbb/scalable_allocator.h" #include "GraphBLAS.h" ... 
GxB_init (mode, scalable_malloc, scalable_calloc, scalable_realloc, scalable_free, true) ; \end{verbatim}} \newpage %=============================================================================== \subsection{{\sf GrB\_Info:} status code returned by GraphBLAS} %=============== %=============================================================================== \label{info} Each GraphBLAS method and operation returns its status to the caller as its return value, an enumerated type (an \verb'enum') called \verb'GrB_Info'. The first two values in the following table denote a successful status, the rest are error codes. \vspace{0.2in} \noindent {\small \begin{tabular}{llp{2.8in}} \hline \verb'GrB_SUCCESS' & 0 & the method or operation was successful \\ \verb'GrB_NO_VALUE' & 1 & the method was successful, but the entry \\ & & does not appear in the matrix or vector. \\ \hline \hline \verb'GrB_UNINITIALIZED_OBJECT' & 2 & object has not been initialized \\ \verb'GrB_INVALID_OBJECT' & 3 & object is corrupted \\ \verb'GrB_NULL_POINTER' & 4 & input pointer is \verb'NULL' \\ \verb'GrB_INVALID_VALUE' & 5 & generic error code; some value is bad \\ \verb'GrB_INVALID_INDEX' & 6 & a row or column index is out of bounds \\ \verb'GrB_DOMAIN_MISMATCH' & 7 & object domains are not compatible \\ \verb'GrB_DIMENSION_MISMATCH' & 8 & matrix dimensions do not match \\ \verb'GrB_OUTPUT_NOT_EMPTY' & 9 & output matrix already has values in it \\ \hline \verb'GrB_OUT_OF_MEMORY' & 10 & out of memory \\ \verb'GrB_INSUFFICIENT_SPACE' & 11 & output array not large enough \\ \verb'GrB_INDEX_OUT_OF_BOUNDS' & 12 & a row or column index is out of bounds \\ \hline \verb'GrB_PANIC' & 13 & unrecoverable error. \\ \hline \end{tabular} \vspace{0.2in} } Not all GraphBLAS methods or operations can return all status codes. Any GraphBLAS method or operation can return an out-of-memory condition, \verb'GrB_OUT_OF_MEMORY', or a panic, \verb'GrB_PANIC'. These two errors, and the \verb'GrB_INDEX_OUT_OF_BOUNDS' error, are called {\em execution errors}. The other errors are called {\em API} errors. An API error is detected immediately, regardless of the blocking mode. The detection of an execution error may be deferred until the pending operations complete. In the discussions of each method and operation in this User Guide, most of the obvious error code returns are not discussed. For example, if a required input is a \verb'NULL' pointer, then \verb'GrB_NULL_POINTER' is returned. Only error codes specific to the method or that require elaboration are discussed here. For a full list of the status codes that each GraphBLAS function can return, refer to {\em The GraphBLAS C API Specification} \cite{spec}. \newpage %=============================================================================== \subsection{{\sf GrB\_error:} get more details on the last error} %============= %=============================================================================== \label{error} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_error // return a string describing the last error ( const char **error, // error string <type> object // a GrB_matrix, GrB_Vector, etc. ) ; \end{verbatim} }\end{mdframed} Each GraphBLAS method and operation returns a \verb'GrB_Info' error code. The \verb'GrB_error' function returns additional information on the error for a particular object in a null-terminated string. 
The string returned by \verb'GrB_error' is never a \verb'NULL' string, but it
may have length zero (with the first entry being the \verb"'\0'"
string-termination value).  The string must not be freed or modified.

{\footnotesize
\begin{verbatim}
    info = GrB_some_method_here (C, ...) ;
    if (! (info == GrB_SUCCESS || info == GrB_NO_VALUE))
    {
        char *err ;
        GrB_error (&err, C) ;
        printf ("info: %d error: %s\n", info, err) ;
    }
\end{verbatim}}

If \verb'C' has no error status, or if the error is not recorded in the
string, an empty non-null string is returned.  In particular, out-of-memory
conditions result in an empty string from \verb'GrB_error'.

SuiteSparse:GraphBLAS reports many helpful details via \verb'GrB_error'.  For
example, if a row or column index is out of bounds, the report will state what
those bounds are.  If a matrix dimension is incorrect, the mismatching
dimensions will be provided.  \verb'GrB_BinaryOp_new', \verb'GrB_UnaryOp_new',
and \verb'GxB_SelectOp_new' record the name of the function passed to them,
and \verb'GrB_Type_new' records the name of its type parameter, and these are
printed if the user-defined types and operators are used incorrectly.  Refer
to the output of the example programs in the \verb'Demo' and \verb'Test'
folder, which intentionally generate errors to illustrate the use of
\verb'GrB_error'.

The only functions in GraphBLAS that return an error string are functions that
have a single input/output argument \verb'C', as a \verb'GrB_Matrix',
\verb'GrB_Vector', \verb'GxB_Scalar', or \verb'GrB_Descriptor'.  Methods that
create these objects (such as \verb'GrB_Matrix_new') return a \verb'NULL'
object on failure, so these methods cannot also return an error string in
\verb'C'.  Any subsequent GraphBLAS method that modifies the object \verb'C'
clears the error string.

Note that \verb'GrB_NO_VALUE' is not an error, but an informational status.
\verb'GrB_*_extractElement(&x,A,i,j)', which does \verb'x=A(i,j)', returns
this value to indicate that \verb'A(i,j)' is not present in the matrix.  That
method does not have an input/output object so it cannot return an error
string.

The \verb'GrB_error' function is a polymorphic function for the following
variants:

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Type_error       (const char **error, const GrB_Type       type) ;
GrB_Info GrB_UnaryOp_error    (const char **error, const GrB_UnaryOp    op) ;
GrB_Info GrB_BinaryOp_error   (const char **error, const GrB_BinaryOp   op) ;
GrB_Info GxB_SelectOp_error   (const char **error, const GxB_SelectOp   op) ;
GrB_Info GrB_Monoid_error     (const char **error, const GrB_Monoid     monoid) ;
GrB_Info GrB_Semiring_error   (const char **error, const GrB_Semiring   semiring) ;
GrB_Info GxB_Scalar_error     (const char **error, const GxB_Scalar     s) ;
GrB_Info GrB_Vector_error     (const char **error, const GrB_Vector     v) ;
GrB_Info GrB_Matrix_error     (const char **error, const GrB_Matrix     A) ;
GrB_Info GrB_Descriptor_error (const char **error, const GrB_Descriptor d) ;
\end{verbatim}
}\end{mdframed}

Currently, only \verb'GrB_Matrix_error', \verb'GrB_Vector_error',
\verb'GxB_Scalar_error', and \verb'GrB_Descriptor_error' are able to return
non-empty error strings.  The latter can return an error string only from
\verb'GrB_Descriptor_set' and \verb'GxB_set(d,...)'.
The only GraphBLAS methods (Section~\ref{objects}) that return an error string are \verb'*setElement', \verb'*removeElement', \verb'GxB_Matrix_Option_set(A,...)', \newline \verb'GxB_Vector_Option_set(v,...)', \verb'GrB_Descriptor_set', and \verb'GxB_Desc_set(d,...)'. All GraphBLAS operations discussed in Section~\ref{operations} can return an error string in their input/output object, except for \verb'GrB_reduce' when reducing to a scalar. \begin{alert} {\bf SPEC:} \verb'GrB_error' conforms to a draft of the v2.0 GraphBLAS C Specification. The v1.3 version of this function has the signature \newline \verb'const char *GrB_error (void)', with no inputs, which is no longer supported in SuiteSparse:GraphBLAS v4 or later. \end{alert} \newpage %=============================================================================== \subsection{{\sf GrB\_wait:} on all objects} %================================== %=============================================================================== \label{wait_all} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_wait ( ) ; // wait for all objects: NOT SUPPORTED \end{verbatim} }\end{mdframed} \begin{alert} {\bf SPEC:} The v1.3 GraphBLAS C API Specification includes \verb'GrB_wait ( )' with no inputs, which waits for all objects computed by any user thread. This has serious performance issues and thus it is no longer implemented in SuiteSparse:GraphBLAS v4 and later. SuiteSparse:GraphBLAS only provides \verb'GrB_wait (&object)', to wait on a single object. \end{alert} %=============================================================================== \subsection{{\sf GrB\_finalize:} finish GraphBLAS} %============================ %=============================================================================== \label{finalize} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_finalize ( ) ; // finish GraphBLAS \end{verbatim} }\end{mdframed} \verb'GrB_finalize' must be called as the last GraphBLAS operation, even after all calls to \verb'GrB_free'. All GraphBLAS objects created by the user application should be freed first, before calling \verb'GrB_finalize' since \verb'GrB_finalize' will not free those objects. In non-blocking mode, GraphBLAS may leave some computations as pending. These computations can be safely abandoned if the user application frees all GraphBLAS objects it has created and then calls \verb'GrB_finalize'. When the user application is finished, exactly one user thread must call \verb'GrB_finalize'. \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{GraphBLAS Objects and their Methods} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{objects} GraphBLAS defines eight different objects to represent matrices and vectors, their scalar data type (or domain), binary and unary operators on scalar types, monoids, semirings, and a {\em descriptor} object used to specify optional parameters that modify the behavior of a GraphBLAS operation. SuiteSparse:GraphBLAS adds two additional objects: a scalar (\verb'GxB_Scalar'), and an operator for selecting entries from a matrix or vector (\verb'GxB_SelectOp'). The GraphBLAS API makes a distinction between {\em methods} and {\em operations}. A method is a function that works on a GraphBLAS object, creating it, destroying it, or querying its contents. 
An operation (not to be confused with an operator) acts on matrices and/or vectors in a semiring. \vspace{0.1in} \noindent {\small \begin{tabular}{ll} \hline \verb'GrB_Type' & a scalar data type \\ \verb'GrB_UnaryOp' & a unary operator $z=f(x)$, where $z$ and $x$ are scalars\\ \verb'GrB_BinaryOp' & a binary operator $z=f(x,y)$, where $z$, $x$, and $y$ are scalars\\ \verb'GxB_SelectOp' & a select operator \\ \verb'GrB_Monoid' & an associative and commutative binary operator \\ & and its identity value \\ \verb'GrB_Semiring' & a monoid that defines the ``plus'' and a binary operator\\ & that defines the ``multiply'' for an algebraic semiring \\ \verb'GrB_Matrix' & a 2D sparse matrix of any type \\ \verb'GrB_Vector' & a 1D sparse column vector of any type \\ \verb'GxB_Scalar' & a scalar of any type \\ \verb'GrB_Descriptor'& a collection of parameters that modify an operation \\ \hline \end{tabular} } \vspace{0.1in} Each of these objects is implemented in C as an opaque handle, which is a pointer to a data structure held by GraphBLAS. User applications may not examine the content of the object directly; instead, they can pass the handle back to GraphBLAS which will do the work. Assigning one handle to another is valid but it does not make a copy of the underlying object. \newpage %=============================================================================== \subsection{The GraphBLAS type: {\sf GrB\_Type}} %============================== %=============================================================================== \label{type} A GraphBLAS \verb'GrB_Type' defines the type of scalar values that a matrix or vector contains, and the type of scalar operands for a unary or binary operator. There are 13 built-in types, and a user application can define any types of its own as well. The built-in types correspond to built-in types in C (\verb'#include <stdbool.h>' and \verb'#include <stdint.h>'), and the classes in MATLAB, as listed in the following table. MATLAB allows for \verb'double complex' sparse matrices, but the \verb'class(A)' for such a matrix is just \verb'double'. MATLAB treats the complex types as properties of a class. 
\vspace{0.2in} \noindent {\footnotesize \begin{tabular}{lllll} \hline GraphBLAS & C type & MATLAB & description & range \\ type & & class & & \\ \hline \verb'GrB_BOOL' & \verb'bool' & \verb'logical' & Boolean & true (1), false (0) \\ \hline \verb'GrB_INT8' & \verb'int8_t' & \verb'int8' & 8-bit signed integer & -128 to 127 \\ \verb'GrB_INT16' & \verb'int16_t' & \verb'int16' & 16-bit integer & $-2^{15}$ to $2^{15}-1$ \\ \verb'GrB_INT32' & \verb'int32_t' & \verb'int32' & 32-bit integer & $-2^{31}$ to $2^{31}-1$ \\ \verb'GrB_INT64' & \verb'int64_t' & \verb'int64' & 64-bit integer & $-2^{63}$ to $2^{63}-1$ \\ \hline \verb'GrB_UINT8' & \verb'uint8_t' & \verb'uint8' & 8-bit unsigned integer & 0 to 255 \\ \verb'GrB_UINT16' & \verb'uint16_t' & \verb'uint16' & 16-bit unsigned integer & 0 to $2^{16}-1$ \\ \verb'GrB_UINT32' & \verb'uint32_t' & \verb'uint32' & 32-bit unsigned integer & 0 to $2^{32}-1$ \\ \verb'GrB_UINT64' & \verb'uint64_t' & \verb'uint64' & 64-bit unsigned integer & 0 to $2^{64}-1$ \\ \hline \verb'GrB_FP32' & \verb'float' & \verb'single' & 32-bit IEEE 754 & \verb'-Inf' to \verb'+Inf'\\ \verb'GrB_FP64' & \verb'double' & \verb'double' & 64-bit IEEE 754 & \verb'-Inf' to \verb'+Inf'\\ \hline \verb'GxB_FC32' & \verb'float complex' & \verb'single' & 32-bit IEEE 754 & \verb'-Inf' to \verb'+Inf'\\ & & \verb'~isreal(.)' & complex & \\ \hline \verb'GxB_FC64' & \verb'double complex' & \verb'double' & 64-bit IEEE 754 & \verb'-Inf' to \verb'+Inf'\\ & & \verb'~isreal(.)' & complex & \\ \hline \end{tabular} } \vspace{0.2in} The ANSI C11 definitions of \verb'float complex' and \verb'double complex' are not always available. The \verb'GraphBLAS.h' header defines them as \verb'GxB_FC32_t' and \verb'GxB_FC64_t', respectively. The user application can also define new types based on any \verb'typedef' in the C language whose values are held in a contiguous region of memory. For example, a user-defined \verb'GrB_Type' could be created to hold any C \verb'struct' whose content is self-contained. A C \verb'struct' containing pointers might be problematic because GraphBLAS would not know to dereference the pointers to traverse the entire ``scalar'' entry, but this can be done if the objects referenced by these pointers are not moved. A user-defined complex type with real and imaginary types can be defined, or even a ``scalar'' type containing a fixed-sized dense matrix (see Section~\ref{type_new}). The possibilities are endless. GraphBLAS can create and operate on sparse matrices and vectors in any of these types, including any user-defined ones. For user-defined types, GraphBLAS simply moves the data around itself (via \verb'memcpy'), and then passes the values back to user-defined functions when it needs to do any computations on the type. 
The next sections describe the methods for the \verb'GrB_Type' object: \vspace{0.2in} {\footnotesize \begin{tabular}{ll} \hline \verb'GrB_Type_new' & create a user-defined type \\ \verb'GrB_Type_wait' & wait for a user-defined type \\ \verb'GxB_Type_size' & return the size of a type \\ \verb'GrB_Type_free' & free a user-defined type \\ \hline \end{tabular} } \vspace{0.2in} \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Type\_new:} create a user-defined type} %------------------------------------------------------------------------------- \label{type_new} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Type_new // create a new GraphBLAS type ( GrB_Type *type, // handle of user type to create size_t sizeof_ctype // size = sizeof (ctype) of the C type ) ; \end{verbatim} }\end{mdframed} \verb'GrB_Type_new' creates a new user-defined type. The \verb'type' is a handle, or a pointer to an opaque object. The handle itself must not be \verb'NULL' on input, but the content of the handle can be undefined. On output, the handle contains a pointer to a newly created type. The \verb'ctype' is the type in C that will be used to construct the new GraphBLAS type. It can be either a built-in C type, or defined by a \verb'typedef'. The second parameter should be passed as \verb'sizeof(ctype)'. The only requirement on the C type is that \verb'sizeof(ctype)' is valid in C, and that the type reside in a contiguous block of memory so that it can be moved with \verb'memcpy'. For example, to create a user-defined type called \verb'Complex' for double-precision complex values using the ANSI C11 \verb'double complex' type, the following can be used. A complete example can be found in the \verb'usercomplex.c' and \verb'usercomplex.h' files in the \verb'Demo' folder. {\footnotesize \begin{verbatim} #include <math.h> #include <complex.h> GrB_Type Complex ; GrB_Type_new (&Complex, sizeof (double complex)) ; \end{verbatim} } To demonstrate the flexibility of the \verb'GrB_Type', consider a ``scalar'' consisting of 4-by-4 floating-point matrix and a string. This type might be useful for the 4-by-4 translation/rotation/scaling matrices that arise in computer graphics, along with a string containing a description or even a regular expression that can be parsed and executed in a user-defined operator. All that is required is a fixed-size type, where \verb'sizeof(ctype)' is a constant. {\footnotesize \begin{verbatim} typedef struct { float stuff [4][4] ; char whatstuff [64] ; } wildtype ; GrB_Type WildType ; GrB_Type_new (&WildType, sizeof (wildtype)) ; \end{verbatim} } With this type a sparse matrix can be created in which each entry consists of a 4-by-4 dense matrix \verb'stuff' and a 64-character string \verb'whatstuff'. GraphBLAS treats this 4-by-4 as a ``scalar.'' Any GraphBLAS method or operation that simply moves data can be used with this type without any further information from the user application. For example, entries of this type can be assigned to and extracted from a matrix or vector, and matrices containing this type can be transposed. A working example (\verb'wildtype.c' in the \verb'Demo' folder) creates matrices and multiplies them with a user-defined semiring with this type. Performing arithmetic on matrices and vectors with user-defined types requires operators to be defined. Refer to Section~\ref{user} for more details on these example user-defined types. 
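
As a brief sketch of how such a type is used once it has been created, the
following (hypothetical) fragment stores one \verb'wildtype' ``scalar'' in a
matrix and reads it back, using the \verb'_UDT' variants of
\verb'setElement' and \verb'extractElement' described later in this User
Guide:

{\footnotesize
\begin{verbatim}
    GrB_Matrix A = NULL ;
    wildtype scalar1, scalar2 ;
    // ... fill in scalar1.stuff and scalar1.whatstuff here ...
    GrB_Matrix_new (&A, WildType, 10, 10) ;
    GrB_Matrix_setElement_UDT (A, &scalar1, 3, 7) ;        // A(3,7) = scalar1
    GrB_Matrix_extractElement_UDT (&scalar2, A, 3, 7) ;    // scalar2 = A(3,7)
    GrB_free (&A) ;
\end{verbatim}}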
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Type\_wait:} wait for a type}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_wait               // wait for a user-defined type
(
    GrB_Type *type              // type to wait for
) ;
\end{verbatim}
}\end{mdframed}

After creating a user-defined type, a GraphBLAS library may choose to exploit
non-blocking mode to delay its creation.  \verb'GrB_Type_wait(&type)' ensures
the \verb'type' is completed.  SuiteSparse:GraphBLAS currently does nothing
for \verb'GrB_Type_wait(&type)', except to ensure that \verb'type' is valid.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Type\_size:} return the size of a type}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Type_size          // determine the size of the type
(
    size_t *size,               // the sizeof the type
    GrB_Type type               // type to determine the sizeof
) ;
\end{verbatim}
}\end{mdframed}

This function acts just like \verb'sizeof(type)' in the C language.  For
example, \verb'GxB_Type_size (&s, GrB_INT32)' sets \verb's' to 4, the same as
\verb'sizeof(int32_t)'.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Type\_free:} free a user-defined type}
%-------------------------------------------------------------------------------
\label{type_free}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_free               // free a user-defined type
(
    GrB_Type *type              // handle of user-defined type to free
) ;
\end{verbatim}
}\end{mdframed}

\verb'GrB_Type_free' frees a user-defined type.  Either usage:

{\small
\begin{verbatim}
    GrB_Type_free (&type) ;
    GrB_free (&type) ;
\end{verbatim}}

\noindent
frees the user-defined \verb'type' and sets \verb'type' to \verb'NULL'.  It
safely does nothing if passed a \verb'NULL' handle, or if \verb'type == NULL'
on input.

It is safe to attempt to free a built-in type.  SuiteSparse:GraphBLAS silently
ignores the request and returns \verb'GrB_SUCCESS'.  A user-defined type
should not be freed until all operations using the type are completed.
SuiteSparse:GraphBLAS attempts to detect this condition but it must query a
freed object in its attempt.  This is hazardous and not recommended.
Operations on such objects, whose type has been freed, lead to undefined
behavior.

It is safe to first free a type, and then a matrix of that type, but after the
type is freed the matrix can no longer be used.  The only safe thing that can
be done with such a matrix is to free it.

The function signature of \verb'GrB_Type_free' uses the generic name
\verb'GrB_free', which can free any GraphBLAS object.  See
Section~\ref{free} for details.  GraphBLAS includes many such generic
functions.  When describing a specific variation, a function is described with
its specific name in this User Guide (such as \verb'GrB_Type_free').  When
discussing features applicable to all specific forms, the generic name is used
instead (such as \verb'GrB_free').
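
Continuing the \verb'WildType' example from the previous section, the
following sketch queries the size of the type and then frees it with the
generic \verb'GrB_free':

{\footnotesize
\begin{verbatim}
    size_t s ;
    GxB_Type_size (&s, WildType) ;      // s is now sizeof (wildtype)
    GrB_free (&WildType) ;              // WildType is freed and set to NULL
\end{verbatim}}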
\newpage %=============================================================================== \subsection{GraphBLAS unary operators: {\sf GrB\_UnaryOp}, $z=f(x)$} %========== %=============================================================================== \label{unaryop} A unary operator is a scalar function of the form $z=f(x)$. The domain (type) of $z$ and $x$ need not be the same. In the notation in the tables below, $T$ is any of the 13 built-in types and is a place-holder for \verb'BOOL', \verb'INT8', \verb'UINT8', ... \verb'FP32', \verb'FP64', \verb'FC32', or \verb'FC64'. For example, \verb'GrB_AINV_INT32' is a unary operator that computes \verb'z=-x' for two values \verb'x' and \verb'z' of type \verb'GrB_INT32'. The notation $R$ refers to any real type (all but \verb'FC32' and \verb'FC64'), $I$ refers to any integer type (\verb'INT*' and \verb'UINT*'), $F$ refers to any real or complex floating point type (\verb'FP32', \verb'FP64', \verb'FC32', or \verb'FC64'), and $Z$ refers to any complex floating point type (\verb'FC32' or \verb'FC64'). The logical negation operator \verb'GrB_LNOT' only works on Boolean types. The \verb'GxB_LNOT_'$R$ functions operate on inputs of type $R$, implicitly typecasting their input to Boolean and returning result of type $R$, with a value 1 for true and 0 for false. The operators \verb'GxB_LNOT_BOOL' and \verb'GrB_LNOT' are identical. \vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Unary operators for all types} \\ \hline GraphBLAS name & types (domains) & $z=f(x)$ & description \\ \hline \verb'GxB_ONE_'$T$ & $T \rightarrow T$ & $z = 1$ & one \\ \verb'GrB_IDENTITY_'$T$ & $T \rightarrow T$ & $z = x$ & identity \\ \verb'GrB_AINV_'$T$ & $T \rightarrow T$ & $z = -x$ & additive inverse \\ \verb'GrB_MINV_'$T$ & $T \rightarrow T$ & $z = 1/x$ & multiplicative inverse \\ \hline \end{tabular} \vspace{0.2in} \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Unary operators for real and integer types} \\ \hline GraphBLAS name & types (domains) & $z=f(x)$ & description \\ \hline \verb'GrB_ABS_'$T$ & $R \rightarrow R$ & $z = |x|$ & absolute value \\ \verb'GrB_LNOT' & \verb'bool' $\rightarrow$ \verb'bool' & $z = \lnot x$ & logical negation \\ \verb'GxB_LNOT_'$R$ & $R \rightarrow R$ & $z = \lnot (x \ne 0)$ & logical negation \\ \verb'GrB_BNOT_'$I$ & $I \rightarrow I$ & $z = \lnot x$ & bitwise negation \\ \hline \end{tabular} \vspace{0.2in} \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Positional unary operators for any type (including user-defined)} \\ \hline GraphBLAS name & types (domains) & $z=f(a_{ij})$ & description \\ \hline \verb'GxB_POSITIONI_'$T$ & $ \rightarrow T$ & $z = i$ & row index (0-based) \\ \verb'GxB_POSITIONI1_'$T$ & $ \rightarrow T$ & $z = i+1$ & row index (1-based) \\ \verb'GxB_POSITIONJ_'$T$ & $ \rightarrow T$ & $z = j$ & column index (0-based) \\ \verb'GxB_POSITIONJ1_'$T$ & $ \rightarrow T$ & $z = j+1$ & column index (1-based) \\ \hline \end{tabular} \vspace{0.2in} \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Unary operators for floating-point types (real and complex)} \\ \hline GraphBLAS name & types (domains) & $z=f(x)$ & description \\ \hline \verb'GxB_SQRT_'$F$ & $F \rightarrow F$ & $z = \sqrt(x)$ & square root \\ \verb'GxB_LOG_'$F$ & $F \rightarrow F$ & $z = \log_e(x)$ & natural logarithm \\ \verb'GxB_EXP_'$F$ & $F \rightarrow F$ & $z = e^x$ & natural exponent \\ \hline \verb'GxB_LOG10_'$F$ & $F \rightarrow F$ & $z = \log_{10}(x)$ & base-10 logarithm \\ \verb'GxB_LOG2_'$F$ & $F \rightarrow F$ & $z 
= \log_2(x)$ & base-2 logarithm \\
\verb'GxB_EXP2_'$F$    & $F \rightarrow F$ & $z = 2^x$             & base-2 exponent \\
\hline
\verb'GxB_EXPM1_'$F$   & $F \rightarrow F$ & $z = e^x - 1$         & natural exponent - 1 \\
\verb'GxB_LOG1P_'$F$   & $F \rightarrow F$ & $z = \log(x+1)$       & natural log of $x+1$ \\
\hline
\verb'GxB_SIN_'$F$     & $F \rightarrow F$ & $z = \sin(x)$         & sine \\
\verb'GxB_COS_'$F$     & $F \rightarrow F$ & $z = \cos(x)$         & cosine \\
\verb'GxB_TAN_'$F$     & $F \rightarrow F$ & $z = \tan(x)$         & tangent \\
\hline
\verb'GxB_ASIN_'$F$    & $F \rightarrow F$ & $z = \sin^{-1}(x)$    & inverse sine \\
\verb'GxB_ACOS_'$F$    & $F \rightarrow F$ & $z = \cos^{-1}(x)$    & inverse cosine \\
\verb'GxB_ATAN_'$F$    & $F \rightarrow F$ & $z = \tan^{-1}(x)$    & inverse tangent \\
\hline
\verb'GxB_SINH_'$F$    & $F \rightarrow F$ & $z = \sinh(x)$        & hyperbolic sine \\
\verb'GxB_COSH_'$F$    & $F \rightarrow F$ & $z = \cosh(x)$        & hyperbolic cosine \\
\verb'GxB_TANH_'$F$    & $F \rightarrow F$ & $z = \tanh(x)$        & hyperbolic tangent \\
\hline
\verb'GxB_ASINH_'$F$   & $F \rightarrow F$ & $z = \sinh^{-1}(x)$   & inverse hyperbolic sine \\
\verb'GxB_ACOSH_'$F$   & $F \rightarrow F$ & $z = \cosh^{-1}(x)$   & inverse hyperbolic cosine \\
\verb'GxB_ATANH_'$F$   & $F \rightarrow F$ & $z = \tanh^{-1}(x)$   & inverse hyperbolic tangent \\
\hline
\verb'GxB_SIGNUM_'$F$  & $F \rightarrow F$ & $z = \sgn(x)$         & sign, or signum function \\
\verb'GxB_CEIL_'$F$    & $F \rightarrow F$ & $z = \lceil x \rceil $  & ceiling function \\
\verb'GxB_FLOOR_'$F$   & $F \rightarrow F$ & $z = \lfloor x \rfloor $ & floor function \\
\verb'GxB_ROUND_'$F$   & $F \rightarrow F$ & $z = \mbox{round}(x)$ & round to nearest \\
\verb'GxB_TRUNC_'$F$   & $F \rightarrow F$ & $z = \mbox{trunc}(x)$ & round towards zero \\
\hline
\verb'GxB_LGAMMA_'$F$  & $F \rightarrow F$ & $z = \log(|\Gamma (x)|)$ & log of gamma function \\
\verb'GxB_TGAMMA_'$F$  & $F \rightarrow F$ & $z = \Gamma(x)$       & gamma function \\
\verb'GxB_ERF_'$F$     & $F \rightarrow F$ & $z = \erf(x)$         & error function \\
\verb'GxB_ERFC_'$F$    & $F \rightarrow F$ & $z = \erfc(x)$        & complementary error function \\
\hline
\verb'GxB_FREXPX_'$F$  & $F \rightarrow F$ & $z = \mbox{frexpx}(x)$ & normalized fraction \\
\verb'GxB_FREXPE_'$F$  & $F \rightarrow F$ & $z = \mbox{frexpe}(x)$ & normalized exponent \\
\hline
\verb'GxB_ISINF_'$F$   & $F \rightarrow $ \verb'bool' & $z = \mbox{isinf}(x)$    & true if $\pm \infty$ \\
\verb'GxB_ISNAN_'$F$   & $F \rightarrow $ \verb'bool' & $z = \mbox{isnan}(x)$    & true if \verb'NaN' \\
\verb'GxB_ISFINITE_'$F$ & $F \rightarrow $ \verb'bool' & $z = \mbox{isfinite}(x)$ & true if finite \\
\hline
\end{tabular}

\vspace{0.2in}
\begin{tabular}{|llll|}
\hline
\multicolumn{4}{|c|}{Unary operators for complex types} \\
\hline
GraphBLAS name       & types (domains)   & $z=f(x)$              & description \\
\hline
\verb'GxB_CONJ_'$Z$  & $Z \rightarrow Z$ & $z = \overline{x}$    & complex conjugate \\
\verb'GxB_ABS_'$Z$   & $Z \rightarrow F$ & $z = |x|$             & absolute value \\
\verb'GxB_CREAL_'$Z$ & $Z \rightarrow F$ & $z = \mbox{real}(x)$  & real part \\
\verb'GxB_CIMAG_'$Z$ & $Z \rightarrow F$ & $z = \mbox{imag}(x)$  & imaginary part \\
\verb'GxB_CARG_'$Z$  & $Z \rightarrow F$ & $z = \mbox{carg}(x)$  & angle \\
\hline
\end{tabular}
}
\vspace{0.2in}

A positional unary operator returns the row or column index of an entry.  For
a matrix, $z=f(a_{ij})$ returns $z = i$ or $z = j$ (or $i+1$ and $j+1$ for the
1-based variants).  The latter is useful in the MATLAB interface, where row
and column indices are 1-based.  When applied to a vector, $j$ is always zero,
and $i$ is the index in the vector.
Positional unary operators come in two types: \verb'INT32' and \verb'INT64', which is the type of the output, $z$. The functions are agnostic to the type of their inputs; they only depend on the position of the entries, not their values. User-defined positional operators cannot be defined by \verb'GrB_UnaryOp_new'. \verb'GxB_FREXPX' \verb'GxB_FREXPE' return the mantissa and exponent, respectively, from the ANSI C11 \verb'frexp' function. The exponent is returned as a floating-point value, not an integer. The operators \verb'GxB_EXPM1_FC*' and \verb'GxB_LOG1P_FC*' for complex types are currently not accurate. They will be revised in a future version. The functions \verb'casin', \verb'casinf', \verb'casinh', and \verb'casinhf' provided by Microsoft Visual Studio for computing $\sin^{-1}(x)$ and $\sinh^{-1}(x)$ when $x$ is complex do not compute the correct result. Thus, the unary operators \verb'GxB_ASIN_FC32', \verb'GxB_ASIN_FC64' \verb'GxB_ASINH_FC32', and \verb'GxB_ASINH_FC64' do not work properly if the MS Visual Studio compiler is used. These functions work properly if the gcc, icc, or clang compilers are used on Linux or MacOS. Integer division by zero normally terminates an application, but this is avoided in SuiteSparse:GraphBLAS. For details, see the binary \verb'GrB_DIV_'$T$ operators. \begin{alert} {\bf SPEC:} The definition of integer division by zero is an extension to the spec. \end{alert} The next sections define the following methods for the \verb'GrB_UnaryOp' object: \vspace{0.1in} {\footnotesize \begin{tabular}{ll} \hline \verb'GrB_UnaryOp_new' & create a user-defined unary operator \\ \verb'GrB_UnaryOp_wait' & wait for a user-defined unary operator \\ \verb'GxB_UnaryOp_ztype' & return the type of the output $z$ for $z=f(x)$\\ \verb'GxB_UnaryOp_xtype' & return the type of the input $x$ for $z=f(x)$\\ \verb'GrB_UnaryOp_free' & free a user-defined unary operator \\ \hline \end{tabular} } \vspace{0.1in} \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_UnaryOp\_new:} create a user-defined unary operator} %------------------------------------------------------------------------------- \label{unaryop_new} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_UnaryOp_new // create a new user-defined unary operator ( GrB_UnaryOp *unaryop, // handle for the new unary operator void *function, // pointer to the unary function GrB_Type ztype, // type of output z GrB_Type xtype // type of input x ) ; \end{verbatim} }\end{mdframed} \verb'GrB_UnaryOp_new' creates a new unary operator. The new operator is returned in the \verb'unaryop' handle, which must not be \verb'NULL' on input. On output, its contents contains a pointer to the new unary operator. The two types \verb'xtype' and \verb'ztype' are the GraphBLAS types of the input $x$ and output $z$ of the user-defined function $z=f(x)$. These types may be built-in types or user-defined types, in any combination. The two types need not be the same, but they must be previously defined before passing them to \verb'GrB_UnaryOp_new'. 
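
As a brief sketch (the operator and function names here are illustrative
only), the following creates a user-defined unary operator that squares a
double-precision value; the required signature of the C function passed to
\verb'GrB_UnaryOp_new' is described below.

{\footnotesize
\begin{verbatim}
    void square_fp64 (void *z, const void *x)
    {
        double a = (* ((const double *) x)) ;
        (* ((double *) z)) = a * a ;
    }

    GrB_UnaryOp Square = NULL ;
    GrB_UnaryOp_new (&Square, square_fp64, GrB_FP64, GrB_FP64) ;
\end{verbatim}}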
The \verb'function' argument to \verb'GrB_UnaryOp_new' is a pointer to a user-defined function with the following signature: {\footnotesize \begin{verbatim} void (*f) (void *z, const void *x) ; \end{verbatim} } When the function \verb'f' is called, the arguments \verb'z' and \verb'x' are passed as \verb'(void *)' pointers, but they will be pointers to values of the correct type, defined by \verb'ztype' and \verb'xtype', respectively, when the operator was created. {\bf NOTE:} The pointers may not be unique. That is, the user function may be called with multiple pointers that point to the same space, such as when \verb'z=f(z,y)' is to be computed by a binary operator, or \verb'z=f(z)' for a unary operator. Any parameters passed to the user-callable function may be aliased to each other. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_UnaryOp\_wait:} wait for a unary operator} %------------------------------------------------------------------------------- \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_wait // wait for a user-defined unary operator ( GrB_UnaryOp *unaryop // unary operator to wait for ) ; \end{verbatim} }\end{mdframed} After creating a user-defined unary operator, a GraphBLAS library may choose to exploit non-blocking mode to delay its creation. \verb'GrB_UnaryOp_wait(&unaryop)' ensures the \verb'op' is completed. SuiteSparse:GraphBLAS currently does nothing for \verb'GrB_UnaryOp_wait(&unaryop)', except to ensure that the \verb'unaryop' is valid. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_UnaryOp\_ztype:} return the type of $z$} %------------------------------------------------------------------------------- \label{unaryop_ztype} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_UnaryOp_ztype // return the type of z ( GrB_Type *ztype, // return type of output z GrB_UnaryOp unaryop // unary operator ) ; \end{verbatim} }\end{mdframed} \verb'GxB_UnaryOp_ztype' returns the \verb'ztype' of the unary operator, which is the type of $z$ in the function $z=f(x)$. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_UnaryOp\_xtype:} return the type of $x$} %------------------------------------------------------------------------------- \label{unaryop_xtype} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_UnaryOp_xtype // return the type of x ( GrB_Type *xtype, // return type of input x GrB_UnaryOp unaryop // unary operator ) ; \end{verbatim} }\end{mdframed} \verb'GxB_UnaryOp_xtype' returns the \verb'xtype' of the unary operator, which is the type of $x$ in the function $z=f(x)$. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_UnaryOp\_free:} free a user-defined unary operator} %------------------------------------------------------------------------------- \label{unaryop_free} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_free // free a user-created unary operator ( GrB_UnaryOp *unaryop // handle of unary operator to free ) ; \end{verbatim} }\end{mdframed} \verb'GrB_UnaryOp_free' frees a user-defined unary operator. Either usage: {\small \begin{verbatim} GrB_UnaryOp_free (&unaryop) ; GrB_free (&unaryop) ; \end{verbatim}} \noindent frees the \verb'unaryop' and sets \verb'unaryop' to \verb'NULL'. 
It safely does nothing if passed a \verb'NULL' handle, or if \verb'unaryop == NULL' on input. It does nothing at all if passed a built-in unary operator. \newpage %=============================================================================== \subsection{GraphBLAS binary operators: {\sf GrB\_BinaryOp}, $z=f(x,y)$} %====== %=============================================================================== \label{binaryop} A binary operator is a scalar function of the form $z=f(x,y)$. The types of $z$, $x$, and $y$ need not be the same. The built-in binary operators are listed in the tables below. The notation $T$ refers to any of the 13 built-in types, but two of those types are SuiteSparse extensions (\verb'GxB_FC32' and \verb'GxB_FC64'). For those types, the operator name always starts with \verb'GxB', not \verb'GrB'). The six \verb'GxB_IS*' comparison operators and the \verb'GxB_*' logical operators all return a result one for true and zero for false, in the same domain $T$ or $R$ as their inputs. These six comparison operators are useful as ``multiply'' operators for creating semirings with non-Boolean monoids. \vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Binary operators for all 13 types} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline % numeric TxT->T \verb'GrB_FIRST_'$T$ & $T \times T \rightarrow T$ & $z = x$ & first argument \\ \verb'GrB_SECOND_'$T$ & $T \times T \rightarrow T$ & $z = y$ & second argument \\ \verb'GxB_ANY_'$T$ & $T \times T \rightarrow T$ & $z = x$ or $y$ & pick $x$ or $y$ arbitrarily \\ \verb'GxB_PAIR_'$T$ & $T \times T \rightarrow T$ & $z = 1$ & one \\ \verb'GrB_PLUS_'$T$ & $T \times T \rightarrow T$ & $z = x+y$ & addition \\ \verb'GrB_MINUS_'$T$ & $T \times T \rightarrow T$ & $z = x-y$ & subtraction \\ \verb'GxB_RMINUS_'$T$ & $T \times T \rightarrow T$ & $z = y-x$ & reverse subtraction \\ \verb'GrB_TIMES_'$T$ & $T \times T \rightarrow T$ & $z = xy$ & multiplication \\ \verb'GrB_DIV_'$T$ & $T \times T \rightarrow T$ & $z = x/y$ & division \\ \verb'GxB_RDIV_'$T$ & $T \times T \rightarrow T$ & $z = y/x$ & reverse division \\ \verb'GxB_POW_'$T$ & $T \times T \rightarrow T$ & $z = x^y$ & power \\ \hline % TxT->T comparison \verb'GxB_ISEQ_'$T$ & $T \times T \rightarrow T$ & $z = (x == y)$ & equal \\ \verb'GxB_ISNE_'$T$ & $T \times T \rightarrow T$ & $z = (x \ne y)$ & not equal \\ \hline \end{tabular} } \vspace{0.2in} The \verb'GxB_POW_*' operators for real types do not return a complex result, and thus $z = f(x,y) = x^y$ is undefined if $x$ is negative and $y$ is not an integer. To compute a complex result, use \verb'GxB_POW_FC32' or \verb'GxB_POW_FC64'. Operators that require the domain to be ordered (\verb'MIN', \verb'MAX', and relative comparisons less-than, greater-than, and so on) are not defined for complex types. 
These are listed in the following table: \vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Binary operators for all non-complex types} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline % numeric RxR->R \verb'GrB_MIN_'$R$ & $R \times R \rightarrow R$ & $z = \min(x,y)$ & minimum \\ \verb'GrB_MAX_'$R$ & $R \times R \rightarrow R$ & $z = \max(x,y)$ & maximum \\ \hline % RxR->R comparison \verb'GxB_ISGT_'$R$ & $R \times R \rightarrow R$ & $z = (x > y)$ & greater than \\ \verb'GxB_ISLT_'$R$ & $R \times R \rightarrow R$ & $z = (x < y)$ & less than \\ \verb'GxB_ISGE_'$R$ & $R \times R \rightarrow R$ & $z = (x \ge y)$ & greater than or equal \\ \verb'GxB_ISLE_'$R$ & $R \times R \rightarrow R$ & $z = (x \le y)$ & less than or equal \\ \hline % RxR->R logical \verb'GxB_LOR_'$R$ & $R \times R \rightarrow R$ & $z = (x \ne 0) \vee (y \ne 0) $ & logical OR \\ \verb'GxB_LAND_'$R$ & $R \times R \rightarrow R$ & $z = (x \ne 0) \wedge (y \ne 0) $ & logical AND \\ \verb'GxB_LXOR_'$R$ & $R \times R \rightarrow R$ & $z = (x \ne 0) \veebar (y \ne 0) $ & logical XOR \\ \hline \end{tabular} } \vspace{0.2in} Another set of six kinds of built-in comparison operators have the form $T \times T \rightarrow $\verb'bool'. Note that when $T$ is \verb'bool', the six operators give the same results as the six \verb'GxB_IS*_BOOL' operators in the table above. These six comparison operators are useful as ``multiply'' operators for creating semirings with Boolean monoids. \vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Binary comparison operators for all 13 types} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline % 6 TxT -> bool comparison \verb'GrB_EQ_'$T$ & $T \times T \rightarrow $\verb'bool' & $z = (x == y)$ & equal \\ \verb'GrB_NE_'$T$ & $T \times T \rightarrow $\verb'bool' & $z = (x \ne y)$ & not equal \\ \hline \multicolumn{4}{ }{\mbox{ }} \\ \hline \multicolumn{4}{|c|}{Binary comparison operators for non-complex types} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline \verb'GrB_GT_'$R$ & $R \times R \rightarrow $\verb'bool' & $z = (x > y)$ & greater than \\ \verb'GrB_LT_'$R$ & $R \times R \rightarrow $\verb'bool' & $z = (x < y)$ & less than \\ \verb'GrB_GE_'$R$ & $R \times R \rightarrow $\verb'bool' & $z = (x \ge y)$ & greater than or equal \\ \verb'GrB_LE_'$R$ & $R \times R \rightarrow $\verb'bool' & $z = (x \le y)$ & less than or equal \\ \hline \end{tabular} } \vspace{0.2in} GraphBLAS has four built-in binary operators that operate purely in the Boolean domain. The first three are identical to the \verb'GxB_L*_BOOL' operators described above, just with a shorter name. The \verb'GrB_LXNOR' operator is the same as \verb'GrB_EQ_BOOL'. 
\vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Binary operators for the boolean type only} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline % 3 bool x bool -> bool \verb'GrB_LOR' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = x \vee y $ & logical OR \\ \verb'GrB_LAND' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = x \wedge y $ & logical AND \\ \verb'GrB_LXOR' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = x \veebar y $ & logical XOR \\ \verb'GrB_LXNOR' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = \lnot (x \veebar y) $ & logical XNOR \\ \hline \end{tabular} } \vspace{0.2in} The following operators are defined for real floating-point types only (\verb'GrB_FP32' and \verb'GrB_FP64'). They are identical to the ANSI C11 functions of the same name. The last one in the table constructs the corresponding complex type. {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Binary operators for the real floating-point types only} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline \verb'GxB_ATAN2_'$F$ & $F \times F \rightarrow F$ & $z = \tan^{-1}(y/x)$ & 4-quadrant arc tangent \\ \verb'GxB_HYPOT_'$F$ & $F \times F \rightarrow F$ & $z = \sqrt{x^2+y^2}$ & hypotenuse \\ \verb'GxB_FMOD_'$F$ & $F \times F \rightarrow F$ & & ANSI C11 \verb'fmod' \\ \verb'GxB_REMAINDER_'$F$ & $F \times F \rightarrow F$ & & ANSI C11 \verb'remainder' \\ \verb'GxB_LDEXP_'$F$ & $F \times F \rightarrow F$ & & ANSI C11 \verb'ldexp' \\ \verb'GxB_COPYSIGN_'$F$ & $F \times F \rightarrow F$ & & ANSI C11 \verb'copysign' \\ \hline \verb'GxB_CMPLX_'$F$ & $F \times F \rightarrow Z$ & $z = x + y \times i$ & complex from real \& imag \\ \hline \end{tabular} } \vspace{0.2in} Eight bitwise operators are predefined for signed and unsigned integers. \vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Binary operators for signed and unsigned integers} \\ \hline GraphBLAS name & types (domains) & $z=f(x,y)$ & description \\ \hline \verb'GrB_BOR_'$I$ & $I \times I \rightarrow I$ & \verb'z=x|y' & bitwise logical OR \\ \verb'GrB_BAND_'$I$ & $I \times I \rightarrow I$ & \verb'z=x&y' & bitwise logical AND \\ \verb'GrB_BXOR_'$I$ & $I \times I \rightarrow I$ & \verb'z=x^y' & bitwise logical XOR \\ \verb'GrB_BXNOR_'$I$ & $I \times I \rightarrow I$ & \verb'z=~(x^y)' & bitwise logical XNOR \\ \hline \verb'GxB_BGET_'$I$ & $I \times I \rightarrow I$ & & get bit y of x \\ \verb'GxB_BSET_'$I$ & $I \times I \rightarrow I$ & & set bit y of x \\ \verb'GxB_BCLR_'$I$ & $I \times I \rightarrow I$ & & clear bit y of x \\ \verb'GxB_BSHIFT_'$I$ & $I \times $\verb'int8'$ \rightarrow I$ & & bit shift \\ \hline \end{tabular} } \vspace{0.2in} There are two sets of built-in comparison operators in SuiteSparse:Graph\-BLAS, but they are not redundant. They are identical except for the type (domain) of their output, $z$. The \verb'GrB_EQ_'$T$ and related operators compare their inputs of type $T$ and produce a Boolean result of true or false. The \verb'GxB_ISEQ_'$T$ and related operators do the same comparison and produce a result with same type $T$ as their input operands, returning one for true or zero for false. The \verb'IS*' comparison operators are useful when combining comparisons with other non-Boolean operators. For example, a \verb'PLUS-ISEQ' semiring counts how many terms of the comparison are true. 
With this semiring, matrix multiplication ${\bf C=AB}$ for two weighted undirected graphs ${\bf A}$ and ${\bf B}$ computes $c_{ij}$ as the number of edges node $i$ and $j$ have in common that have identical edge weights. Since the output type of the ``multiplier'' operator in a semiring must match the type of its monoid, the Boolean \verb'EQ' cannot be combined with a non-Boolean \verb'PLUS' monoid to perform this operation. Likewise, SuiteSparse:GraphBLAS has two sets of logical OR, AND, and XOR operators. Without the \verb'_'$T$ suffix, the three operators \verb'GrB_LOR', \verb'GrB_LAND', and \verb'GrB_LXOR' operate purely in the Boolean domain, where all input and output types are \verb'GrB_BOOL'. The second set (\verb'GxB_LOR_'$T$ \verb'GxB_LAND_'$T$ and \verb'GxB_LXOR_'$T$) provides Boolean operators to all 11 real domains, implicitly typecasting their inputs from type $T$ to Boolean and returning a value of type $T$ that is 1 for true or zero for false. The set of \verb'GxB_L*_'$T$ operators are useful since they can be combined with non-Boolean monoids in a semiring. \begin{alert} {\bf SPEC:} The definition of integer division by zero is an extension to the spec. \end{alert} Floating-point operations follow the IEEE 754 standard. Thus, computing $x/0$ for a floating-point $x$ results in \verb'+Inf' if $x$ is positive, \verb'-Inf' if $x$ is negative, and \verb'NaN' if $x$ is zero. The application is not terminated. However, integer division by zero normally terminates an application. SuiteSparse:GraphBLAS avoids this by adopting the same rules as MATLAB, which are analogous to how the IEEE standard handles floating-point division by zero. For integers, when $x$ is positive, $x/0$ is the largest positive integer, for negative $x$ it is the minimum integer, and 0/0 results in zero. For example, for an integer $x$ of type \verb'GrB_INT32', 1/0 is $2^{31}-1$ and (-1)/0 is $-2^{31}$. Refer to Section~\ref{type} for a list of integer ranges. Eight positional operators are predefined. They differ when used in a semiring and when used in \verb'GrB_eWise*' and \verb'GrB_apply'. Positional operators cannot be used in \verb'GrB_build', nor can they be used as the \verb'accum' operator for any operation. The positional binary operators do not depend on the type or numerical value of their inputs, just their position in a matrix or vector. For a vector, $j$ is always 0, and $i$ is the index into the vector. There are two types $T$ available: \verb'INT32' and \verb'INT64', which is the type of the output $z$. User-defined positional operators cannot be defined by \verb'GrB_BinaryOp_new'. 
\vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Positional binary operators for any type (including user-defined)} \\ \multicolumn{4}{|c|}{when used as a multiplicative operator in a semiring} \\ \hline GraphBLAS name & types (domains) & $z=f(a_{ik},b_{kj})$ & description \\ \hline \verb'GxB_FIRSTI_'$T$ & $ \rightarrow T$ & $z = i$ & row index of $a_{ik}$ (0-based) \\ \verb'GxB_FIRSTI1_'$T$ & $ \rightarrow T$ & $z = i+1$ & row index of $a_{ik}$ (1-based) \\ \verb'GxB_FIRSTJ_'$T$ & $ \rightarrow T$ & $z = k$ & column index of $a_{ik}$ (0-based) \\ \verb'GxB_FIRSTJ1_'$T$ & $ \rightarrow T$ & $z = k+1$ & column index of $a_{ik}$ (1-based) \\ \verb'GxB_SECONDI_'$T$ & $ \rightarrow T$ & $z = k$ & row index of $b_{kj}$ (0-based) \\ \verb'GxB_SECONDI1_'$T$ & $ \rightarrow T$ & $z = k+1$ & row index of $b_{kj}$ (1-based) \\ \verb'GxB_SECONDJ_'$T$ & $ \rightarrow T$ & $z = j$ & column index of $b_{kj}$ (0-based) \\ \verb'GxB_SECONDJ1_'$T$ & $ \rightarrow T$ & $z = j+1$ & column index of $b_{kj}$ (1-based) \\ \hline \end{tabular} } \vspace{0.2in} {\footnotesize \begin{tabular}{|llll|} \hline \multicolumn{4}{|c|}{Positional binary operators for any type (including user-defined)} \\ \multicolumn{4}{|c|}{when used in all other methods} \\ \hline GraphBLAS name & types (domains) & $z=f(a_{ij},b_{ij})$ & description \\ \hline \verb'GxB_FIRSTI_'$T$ & $ \rightarrow T$ & $z = i$ & row index of $a_{ij}$ (0-based) \\ \verb'GxB_FIRSTI1_'$T$ & $ \rightarrow T$ & $z = i+1$ & row index of $a_{ij}$ (1-based) \\ \verb'GxB_FIRSTJ_'$T$ & $ \rightarrow T$ & $z = j$ & column index of $a_{ij}$ (0-based) \\ \verb'GxB_FIRSTJ1_'$T$ & $ \rightarrow T$ & $z = j+1$ & column index of $a_{ij}$ (1-based) \\ \verb'GxB_SECONDI_'$T$ & $ \rightarrow T$ & $z = i$ & row index of $b_{ij}$ (0-based) \\ \verb'GxB_SECONDI1_'$T$ & $ \rightarrow T$ & $z = i+1$ & row index of $b_{ij}$ (1-based) \\ \verb'GxB_SECONDJ_'$T$ & $ \rightarrow T$ & $z = j$ & column index of $b_{ij}$ (0-based) \\ \verb'GxB_SECONDJ1_'$T$ & $ \rightarrow T$ & $z = j+1$ & column index of $b_{ij}$ (1-based) \\ \hline \end{tabular} } \vspace{0.2in} The next sections define the following methods for the \verb'GrB_BinaryOp' object: \vspace{0.2in} {\footnotesize \begin{tabular}{ll} \hline \verb'GrB_BinaryOp_new' & create a user-defined binary operator \\ \verb'GrB_BinaryOp_wait' & wait for a user-defined binary operator \\ \verb'GxB_BinaryOp_ztype' & return the type of the output $z$ for $z=f(x,y)$\\ \verb'GxB_BinaryOp_xtype' & return the type of the input $x$ for $z=f(x,y)$\\ \verb'GxB_BinaryOp_ytype' & return the type of the input $y$ for $z=f(x,y)$\\ \verb'GrB_BinaryOp_free' & free a user-defined binary operator \\ \hline \end{tabular} } \vspace{0.2in} \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_BinaryOp\_new:} create a user-defined binary operator} %------------------------------------------------------------------------------- \label{binaryop_new} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_BinaryOp_new ( GrB_BinaryOp *binaryop, // handle for the new binary operator void *function, // pointer to the binary function GrB_Type ztype, // type of output z GrB_Type xtype, // type of input x GrB_Type ytype // type of input y ) ; \end{verbatim} }\end{mdframed} \verb'GrB_BinaryOp_new' creates a new binary operator. The new operator is returned in the \verb'binaryop' handle, which must not be \verb'NULL' on input. 
On output, its contents contains a pointer to the new binary operator. The three types \verb'xtype', \verb'ytype', and \verb'ztype' are the GraphBLAS types of the inputs $x$ and $y$, and output $z$ of the user-defined function $z=f(x,y)$. These types may be built-in types or user-defined types, in any combination. The three types need not be the same, but they must be previously defined before passing them to \verb'GrB_BinaryOp_new'. The final argument to \verb'GrB_BinaryOp_new' is a pointer to a user-defined function with the following signature: {\footnotesize \begin{verbatim} void (*f) (void *z, const void *x, const void *y) ; \end{verbatim} } When the function \verb'f' is called, the arguments \verb'z', \verb'x', and \verb'y' are passed as \verb'(void *)' pointers, but they will be pointers to values of the correct type, defined by \verb'ztype', \verb'xtype', and \verb'ytype', respectively, when the operator was created. {\bf NOTE:} SuiteSparse:GraphBLAS may call the function with the pointers \verb'z' and \verb'x' equal to one another, in which case \verb'z=f(z,y)' should be computed. Future versions may use additional pointer aliasing. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_BinaryOp\_wait:} wait for a binary operator} %------------------------------------------------------------------------------- \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_wait // wait for a user-defined binary operator ( GrB_BinaryOp *binaryop // binary operator to wait for ) ; \end{verbatim} }\end{mdframed} After creating a user-defined binary operator, a GraphBLAS library may choose to exploit non-blocking mode to delay its creation. \verb'GrB_BinaryOp_wait(&binaryop)' ensures the \verb'binaryop' is completed. SuiteSparse:GraphBLAS currently does nothing for \verb'GrB_BinaryOp_wait(&binaryop)', except to ensure that the \verb'binaryop' is valid. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_BinaryOp\_ztype:} return the type of $z$} %------------------------------------------------------------------------------- \label{binaryop_ztype} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_BinaryOp_ztype // return the type of z ( GrB_Type *ztype, // return type of output z GrB_BinaryOp binaryop // binary operator to query ) ; \end{verbatim} } \end{mdframed} \verb'GxB_BinaryOp_ztype' returns the \verb'ztype' of the binary operator, which is the type of $z$ in the function $z=f(x,y)$. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_BinaryOp\_xtype:} return the type of $x$} %------------------------------------------------------------------------------- \label{binaryop_xtype} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_BinaryOp_xtype // return the type of x ( GrB_Type *xtype, // return type of input x GrB_BinaryOp binaryop // binary operator to query ) ; \end{verbatim} }\end{mdframed} \verb'GxB_BinaryOp_xtype' returns the \verb'xtype' of the binary operator, which is the type of $x$ in the function $z=f(x,y)$. 
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_BinaryOp\_ytype:} return the type of $y$}
%-------------------------------------------------------------------------------
\label{binaryop_ytype}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_BinaryOp_ytype     // return the type of y
(
    GrB_Type *ytype,            // return type of input y
    GrB_BinaryOp binaryop       // binary operator to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_BinaryOp_ytype' returns the \verb'ytype' of the binary operator,
which is the type of $y$ in the function $z=f(x,y)$.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_BinaryOp\_free:} free a user-defined binary operator}
%-------------------------------------------------------------------------------
\label{binaryop_free}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_free               // free a user-created binary operator
(
    GrB_BinaryOp *binaryop      // handle of binary operator to free
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_BinaryOp_free' frees a user-defined binary operator.  Either usage:

{\small
\begin{verbatim}
    GrB_BinaryOp_free (&op) ;
    GrB_free (&op) ;
\end{verbatim}}

\noindent
frees the \verb'op' and sets \verb'op' to \verb'NULL'.  It safely does nothing
if passed a \verb'NULL' handle, or if \verb'op == NULL' on input.  It does
nothing at all if passed a built-in binary operator.

%-------------------------------------------------------------------------------
\subsubsection{{\sf ANY} and {\sf PAIR} operators}
%-------------------------------------------------------------------------------
\label{any_pair}

SuiteSparse:GraphBLAS v3.2.0 adds two new operators, \verb'ANY' and
\verb'PAIR'.

The \verb'PAIR' operator is simple to describe: just $f(x,y)=1$.  It is called
the \verb'PAIR' operator since it returns $1$ in a semiring when a pair of
entries $a_{ik}$ and $b_{kj}$ is found in the matrix multiply.  This operator
is simple yet very useful.  It allows purely symbolic computations to be
performed on matrices of any type, without having to typecast them to Boolean
with all values being true.  Typecasting need not be performed on the inputs
to the \verb'PAIR' operator, and the \verb'PAIR' operator does not have to
access the values of the matrix, so it is a very fast operator to use.

The \verb'ANY' operator is very unusual, but very powerful.  It is the
function $f_{\mbox{any}}(x,y)=x$, or $y$, where GraphBLAS has the freedom to
select either $x$ or $y$ at its own discretion.  Do not confuse the \verb'ANY'
operator with the \verb'any' function in MATLAB, which computes a reduction
using the logical OR operator.

The \verb'ANY' function is associative and commutative, and can thus serve as
an operator for a monoid.  The selection of $x$ or $y$ is not randomized.
Instead, SuiteSparse:GraphBLAS uses this freedom to compute as fast a result
as possible.  When used in a dot product,
\[
c_{ij} = \sum_k a_{ik} b_{kj}
\]
for example, the computation can terminate as soon as any matching pair of
entries is found.  When used in a parallel saxpy-style computation, the
\verb'ANY' operator allows for a relaxed form of synchronization to be used,
resulting in a fast benign race condition.

Because of this benign race condition, the result of the \verb'ANY' monoid can
be non-deterministic, unless it is coupled with the \verb'PAIR' multiplicative
operator.
In this case, the \verb'ANY_PAIR' semiring will return a deterministic result,
since $f_{\mbox{any}}(1,1)$ is always 1.

When paired with a different operator, the results are non-deterministic.
This gives a powerful method when computing results for which any value
selected by the \verb'ANY' operator is valid.  One such example is the
breadth-first-search tree.  Suppose node $j$ is at level $v$, and there are
multiple nodes $i$ at level $v-1$ for which the edge $(i,j)$ exists in the
graph.  Any of these nodes $i$ can serve as a valid parent in the BFS tree.
Using the \verb'ANY' operator, GraphBLAS can quickly compute a valid BFS tree;
if it is used again on the same inputs, it might return a different, yet still
valid, BFS tree, due to the non-deterministic nature of intra-thread
synchronization.

\newpage
%===============================================================================
\subsection{SuiteSparse:GraphBLAS select operators: {\sf GxB\_SelectOp}} %======
%===============================================================================
\label{selectop}

\begin{alert}
NOTE: the API for the select function has changed in v4.0.1 of
SuiteSparse:GraphBLAS.  The former function signature (in v3.3.3 and earlier)
included the dimensions of the matrix.  These are no longer part of the
signature, since they can be passed in as part of the \verb'thunk' if needed.
\end{alert}

A select operator is a scalar function of the form
$z=f(i,j,a_{ij},\mbox{thunk})$ that is applied to the entries $a_{ij}$ of an
$m$-by-$n$ matrix.  The domain (type) of $z$ is always boolean.  The domain
(type) of $a_{ij}$ can be any built-in or user-defined type, or it can be
\verb'GrB_NULL' if the operator is type-generic.

The \verb'GxB_SelectOp' operator is used by \verb'GxB_select' (see Section
\ref{select}) to select entries from a matrix.  Each entry \verb'A(i,j)' is
evaluated with the operator, which returns true if the entry is to be kept in
the output, or false if it is not to appear in the output.

The signature of the select function \verb'f' is as follows:

{\footnotesize
\begin{verbatim}
bool f                          // returns true if A(i,j) is kept
(
    const GrB_Index i,          // row index of A(i,j)
    const GrB_Index j,          // column index of A(i,j)
    const void *x,              // value of A(i,j), or NULL if f is type-generic
    const void *thunk           // user-defined auxiliary data
) ;
\end{verbatim}}

Operators can be used on any type, including user-defined types, except that
the comparisons \verb'GT', \verb'GE', \verb'LT', and \verb'LE' can only be
used with built-in types.  User-defined select operators can also be created.
\vspace{0.2in}
{\footnotesize
\begin{tabular}{lll}
\hline
GraphBLAS name       & MATLAB               & description \\
                     & analog               & \\
\hline
\verb'GxB_TRIL'      & \verb'C=tril(A,k)'   & true for \verb'A(i,j)' if \verb'(j-i) <= k' \\
\verb'GxB_TRIU'      & \verb'C=triu(A,k)'   & true for \verb'A(i,j)' if \verb'(j-i) >= k' \\
\verb'GxB_DIAG'      & \verb'C=diag(A,k)'   & true for \verb'A(i,j)' if \verb'(j-i) == k' \\
\verb'GxB_OFFDIAG'   & \verb'C=A-diag(A,k)' & true for \verb'A(i,j)' if \verb'(j-i) != k' \\
\hline
\verb'GxB_NONZERO'   & \verb'C=A(A~=0)'     & true if \verb'A(i,j)' is nonzero\\
\verb'GxB_EQ_ZERO'   & \verb'C=A(A==0)'     & true if \verb'A(i,j)' is zero\\
\verb'GxB_GT_ZERO'   & \verb'C=A(A>0)'      & true if \verb'A(i,j)' is greater than zero \\
\verb'GxB_GE_ZERO'   & \verb'C=A(A>=0)'     & true if \verb'A(i,j)' is greater than or equal to zero \\
\verb'GxB_LT_ZERO'   & \verb'C=A(A<0)'      & true if \verb'A(i,j)' is less than zero \\
\verb'GxB_LE_ZERO'   & \verb'C=A(A<=0)'     & true if \verb'A(i,j)' is less than or equal to zero \\
\hline
\verb'GxB_NE_THUNK'  & \verb'C=A(A~=k)'     & true if \verb'A(i,j)' is not equal to \verb'k'\\
\verb'GxB_EQ_THUNK'  & \verb'C=A(A==k)'     & true if \verb'A(i,j)' is equal to \verb'k'\\
\verb'GxB_GT_THUNK'  & \verb'C=A(A>k)'      & true if \verb'A(i,j)' is greater than \verb'k' \\
\verb'GxB_GE_THUNK'  & \verb'C=A(A>=k)'     & true if \verb'A(i,j)' is greater than or equal to \verb'k' \\
\verb'GxB_LT_THUNK'  & \verb'C=A(A<k)'      & true if \verb'A(i,j)' is less than \verb'k' \\
\verb'GxB_LE_THUNK'  & \verb'C=A(A<=k)'     & true if \verb'A(i,j)' is less than or equal to \verb'k' \\
% \hline
\end{tabular}
}
\vspace{0.2in}

The following methods operate on the \verb'GxB_SelectOp' object:

\vspace{0.1in}
{\footnotesize
\begin{tabular}{ll}
\hline
\verb'GxB_SelectOp_new'   & create a user-defined select operator \\
\verb'GxB_SelectOp_wait'  & wait for a user-defined select operator \\
\verb'GxB_SelectOp_xtype' & return the type of the input $x$ \\
\verb'GxB_SelectOp_ttype' & return the type of the input {\em thunk} \\
\verb'GxB_SelectOp_free'  & free a user-defined select operator \\
\hline
\end{tabular}
}
\vspace{0.1in}

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_SelectOp\_new:} create a user-defined select operator}
%-------------------------------------------------------------------------------
\label{selectop_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_SelectOp_new       // create a new user-defined select operator
(
    GxB_SelectOp *selectop,     // handle for the new select operator
    void *function,             // pointer to the select function
    GrB_Type xtype,             // type of input x, or NULL if type-generic
    GrB_Type ttype              // type of input thunk, or NULL if type-generic
) ;
\end{verbatim}
}\end{mdframed}

\verb'GxB_SelectOp_new' creates a new select operator.  The new operator is
returned in the \verb'selectop' handle, which must not be \verb'NULL' on
input.  On output, it contains a pointer to the new select operator.

The \verb'function' argument to \verb'GxB_SelectOp_new' is a pointer to a
user-defined function whose signature is given at the beginning of
Section~\ref{selectop}.  Given the properties of an entry $a_{ij}$ in a
matrix, the \verb'function' should return \verb'true' if the entry should be
kept in the output of \verb'GxB_select', or \verb'false' if it should not
appear in the output.

The type \verb'xtype' is the GraphBLAS type of the input $x$ of the
user-defined function $z=f(i,j,x,\mbox{thunk})$.
The type may be built-in or user-defined, or it may even be \verb'GrB_NULL'.
If the \verb'xtype' is \verb'GrB_NULL', then the \verb'selectop' is
type-generic.

The type \verb'ttype' is the GraphBLAS type of the input {\em thunk} of the
user-defined function $z=f(i,j,x,\mbox{thunk})$.  The type may be built-in or
user-defined, or it may even be \verb'GrB_NULL'.  If the \verb'ttype' is
\verb'GrB_NULL', then the \verb'selectop' does not access this parameter.  The
\verb'const void *thunk' parameter on input to the user \verb'function' will
be passed as \verb'NULL'.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_SelectOp\_wait:} wait for a select operator}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_wait               // wait for a user-defined select operator
(
    GxB_SelectOp *selectop      // select operator to wait for
) ;
\end{verbatim}
}\end{mdframed}

After creating a user-defined select operator, a GraphBLAS library may choose
to exploit non-blocking mode to delay its creation.
\verb'GxB_SelectOp_wait(&selectop)' ensures the \verb'selectop' is completed.
SuiteSparse:GraphBLAS currently does nothing for
\verb'GxB_SelectOp_wait(&selectop)', except to ensure that the \verb'selectop'
is valid.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_SelectOp\_xtype:} return the type of $x$}
%-------------------------------------------------------------------------------
\label{selectop_xtype}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_SelectOp_xtype     // return the type of x
(
    GrB_Type *xtype,            // return type of input x
    GxB_SelectOp selectop       // select operator
) ;
\end{verbatim}
}\end{mdframed}

\verb'GxB_SelectOp_xtype' returns the \verb'xtype' of the select operator,
which is the type of $x$ in the function $z=f(i,j,x,\mbox{thunk})$.  If the
select operator is type-generic, \verb'xtype' is returned as \verb'GrB_NULL'.
This is not an error condition, but simply indicates that the \verb'selectop'
is type-generic.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_SelectOp\_ttype:} return the type of the {\em thunk}}
%-------------------------------------------------------------------------------
\label{selectop_ttype}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_SelectOp_ttype     // return the type of thunk
(
    GrB_Type *ttype,            // return type of input thunk
    GxB_SelectOp selectop       // select operator
) ;
\end{verbatim}
}\end{mdframed}

\verb'GxB_SelectOp_ttype' returns the \verb'ttype' of the select operator,
which is the type of {\em thunk} in the function $z=f(i,j,x,\mbox{thunk})$.
If the select operator does not use this parameter, \verb'ttype' is returned
as \verb'GrB_NULL'.  This is not an error condition, but simply indicates that
the \verb'selectop' does not use this parameter.
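For illustration, the sketch below defines a type-generic select operator that
keeps only the strictly lower triangular entries of a matrix.  It ignores both
the value \verb'x' and the {\em thunk}, so both \verb'xtype' and \verb'ttype'
are passed as \verb'GrB_NULL'.  The names \verb'strictly_lower' and
\verb'StrictLower' are arbitrary placeholders, and the commented call to
\verb'GxB_select' only indicates where the operator would be used (see
Section~\ref{select}).

{\footnotesize
\begin{verbatim}
bool strictly_lower             // returns true if A(i,j) is kept
(
    const GrB_Index i,          // row index of A(i,j)
    const GrB_Index j,          // column index of A(i,j)
    const void *x,              // unused: the operator is type-generic
    const void *thunk           // unused: no auxiliary data is needed
)
{
    return (i > j) ;
}

GxB_SelectOp StrictLower = NULL ;
GxB_SelectOp_new (&StrictLower, (void *) strictly_lower, GrB_NULL, GrB_NULL) ;
// C = strictly lower triangular part of A:
// GxB_select (C, NULL, NULL, StrictLower, A, NULL, NULL) ;
GrB_free (&StrictLower) ;
\end{verbatim}}

Note that the built-in \verb'GxB_TRIL' operator with a {\em thunk} of $-1$
computes the same selection; the sketch above is meant only to show the
mechanics of \verb'GxB_SelectOp_new'.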
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_SelectOp\_free:} free a user-defined select operator}
%-------------------------------------------------------------------------------
\label{selectop_free}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_free               // free a user-created select operator
(
    GxB_SelectOp *selectop      // handle of select operator to free
) ;
\end{verbatim}
}\end{mdframed}

\verb'GxB_SelectOp_free' frees a user-defined select operator.  Either usage:

{\small
\begin{verbatim}
    GxB_SelectOp_free (&selectop) ;
    GrB_free (&selectop) ;
\end{verbatim}}

\noindent
frees the \verb'selectop' and sets \verb'selectop' to \verb'NULL'.  It safely
does nothing if passed a \verb'NULL' handle, or if \verb'selectop == NULL' on
input.  It does nothing at all if passed a built-in select operator.

\newpage
%===============================================================================
\subsection{GraphBLAS monoids: {\sf GrB\_Monoid}} %=============================
%===============================================================================
\label{monoid}

A {\em monoid} is defined on a single domain (that is, a single type), $T$.
It consists of an associative binary operator $z=f(x,y)$ whose three operands
$x$, $y$, and $z$ are all in this same domain $T$ (that is, $T \times T
\rightarrow T$).  The associative operator must also have an identity element,
or ``zero'' in this domain, such that $f(x,0)=f(0,x)=x$.  Recall that an
associative operator $f(x,y)$ is one for which the condition
$f(a, f(b,c)) = f(f(a,b),c)$ always holds.  That is, the pairwise applications
of the operator can be grouped in any order and the result remains the same.

Predefined binary operators that can be used to form monoids are listed in the
table below.  Most of these are the binary operators of predefined monoids,
except that the bitwise monoids are predefined only for the unsigned integer
types, not the signed integers.  Recall that $T$ denotes any built-in type
(including boolean, integer, floating point real, and complex), $R$ denotes
any non-complex type, and $I$ denotes any integer type.
\vspace{0.2in} \noindent {\footnotesize \begin{tabular}{lllll} \hline GraphBLAS & types (domains) & expression & identity & terminal \\ operator & & $z=f(x,y)$ & & \\ \hline % numeric TxT->T \verb'GrB_PLUS_'$T$ & $T \times T \rightarrow T$ & $z = x+y$ & 0 & none \\ \verb'GrB_TIMES_'$T$ & $T \times T \rightarrow T$ & $z = xy$ & 1 & 0 or none (see note) \\ \verb'GxB_ANY_'$T$ & $T \times T \rightarrow T$ & $z = x$ or $y$ & any & any \\ \hline \verb'GrB_MIN_'$R$ & $R \times R \rightarrow R$ & $z = \min(x,y)$ & $+\infty$ & $-\infty$ \\ \verb'GrB_MAX_'$R$ & $R \times R \rightarrow R$ & $z = \max(x,y)$ & $-\infty$ & $+\infty$ \\ \hline % bool x bool -> bool \verb'GrB_LOR' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = x \vee y $ & false & true \\ \verb'GrB_LAND' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = x \wedge y $ & true & false \\ \verb'GrB_LXOR' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z = x \veebar y $ & false & none \\ \verb'GrB_LXNOR' & \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool' & $z =(x == y)$ & true & none \\ \hline % bitwise \verb'GrB_BOR_'$I$ & $I$ $\times$ $I$ $\rightarrow$ $I$ & \verb'z=x|y' & all bits zero & all bits one \\ \verb'GrB_BAND_'$I$ & $I$ $\times$ $I$ $\rightarrow$ $I$ & \verb'z=x&y' & all bits one & all bits zero \\ \verb'GrB_BXOR_'$I$ & $I$ $\times$ $I$ $\rightarrow$ $I$ & \verb'z=x^y' & all bits zero & none \\ \verb'GrB_BXNOR_'$I$ & $I$ $\times$ $I$ $\rightarrow$ $I$ & \verb'z=~(x^y)' & all bits one & none \\ \hline \end{tabular} } \vspace{0.2in} The above table lists the GraphBLAS operator, its type, expression, identity value, and {\em terminal} value (if any). For these built-in operators, the terminal values are the {\em annihilators} of the function, which is the value $z$ so that $z=f(z,y)$ regardless of the value of $y$. For example $\min(-\infty,y) = -\infty$ for any $y$. For integer domains, $+\infty$ and $-\infty$ are the largest and smallest integer in their range. With unsigned integers, the smallest value is zero, and thus \verb'GrB_MIN_UINT8' has an identity of 255 and a terminal value of 0. When computing with a monoid, the computation can terminate early if the terminal value arises. No further work is needed since the result will not change. This value is called the terminal value instead of the annihilator, since a user-defined operator can be created with a terminal value that is not an annihilator. See Section~\ref{monoid_terminal_new} for an example. The \verb'GxB_ANY_*' monoid can terminate as soon as it finds any value at all. {\bf NOTE:} The \verb'GrB_TIMES_FP*' operators do not have a terminal value of zero, since they comply with the IEEE 754 standard, and \verb'0*NaN' is not zero, but \verb'NaN'. Technically, their terminal value is \verb'NaN', but this value is rare in practice and thus the terminal condition is not worth checking. % 40: (min,max,+,*) x (int8,16,32,64, uint8,16,32,64, fp32, fp64) The C API Specification includes 44 predefined monoids, with the naming convention \verb'GrB_op_MONOID_type'. Forty monoids are available for the four operators \verb'MIN', \verb'MAX', \verb'PLUS', and \verb'TIMES', each with the 10 non-boolean real types. Four boolean monoids are predefined: \verb'GrB_LOR_MONOID_BOOL', \verb'GrB_LAND_MONOID_BOOL', \verb'GrB_LXOR_MONOID_BOOL', and \verb'GrB_LXNOR_MONOID_BOOL'. 
% 13 ANY
% 4 complex (PLUS, TIMES)
% 16 bitwise
% 33 total
These all appear in SuiteSparse:GraphBLAS, which adds 33 additional predefined
\verb'GxB*' monoids, with the naming convention \verb'GxB_op_type_MONOID'.
The \verb'ANY' operator can be used for all 13 types (including complex).  The
\verb'PLUS' and \verb'TIMES' operators are provided for both complex types,
for 4 additional complex monoids.  Sixteen monoids are predefined for four
bitwise operators (\verb'BOR', \verb'BAND', \verb'BXOR', and \verb'BXNOR'),
each with four unsigned integer types (\verb'UINT8', \verb'UINT16',
\verb'UINT32', and \verb'UINT64').

The next sections define the following methods for the \verb'GrB_Monoid'
object:

\vspace{0.2in}
{\footnotesize
\begin{tabular}{ll}
\hline
\verb'GrB_Monoid_new'          & create a user-defined monoid \\
\verb'GrB_Monoid_wait'         & wait for a user-defined monoid \\
\verb'GxB_Monoid_terminal_new' & create a monoid that has a terminal value\\
\verb'GxB_Monoid_operator'     & return the monoid operator \\
\verb'GxB_Monoid_identity'     & return the monoid identity value \\
\verb'GxB_Monoid_terminal'     & return the monoid terminal value (if any) \\
\verb'GrB_Monoid_free'         & free a monoid \\
\hline
\end{tabular}
}
\vspace{0.2in}

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Monoid\_new:} create a monoid}
%-------------------------------------------------------------------------------
\label{monoid_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Monoid_new         // create a monoid
(
    GrB_Monoid *monoid,         // handle of monoid to create
    GrB_BinaryOp op,            // binary operator of the monoid
    <type> identity             // identity value of the monoid
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Monoid_new' creates a monoid.  The operator, \verb'op', must be an
associative binary operator, either built-in or user-defined.

In the definition above, \verb'<type>' is a place-holder for the specific type
of the monoid.  For built-in types, it is the C type corresponding to the
built-in type (see Section~\ref{type}), such as \verb'bool', \verb'int32_t',
\verb'float', or \verb'double'.  In this case, \verb'identity' is a scalar
value of the particular type, not a pointer.  For user-defined types,
\verb'<type>' is \verb'void *', and thus \verb'identity' is not a scalar
itself but a \verb'void *' pointer to a memory location containing the
identity value of the user-defined operator, \verb'op'.

If \verb'op' is a built-in operator with a known identity value, then the
\verb'identity' parameter is ignored, and its known identity value is used
instead.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Monoid\_wait:} wait for a monoid}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_wait               // wait for a user-defined monoid
(
    GrB_Monoid *monoid          // monoid to wait for
) ;
\end{verbatim}
}\end{mdframed}

After creating a user-defined monoid, a GraphBLAS library may choose to
exploit non-blocking mode to delay its creation.
\verb'GrB_Monoid_wait(&monoid)' ensures the \verb'monoid' is completed.
SuiteSparse:GraphBLAS currently does nothing for
\verb'GrB_Monoid_wait(&monoid)', except to ensure that the \verb'monoid' is
valid.
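As a brief sketch of \verb'GrB_Monoid_new' (Section~\ref{monoid_new}), the
following creates a \verb'PLUS' monoid over \verb'GrB_FP64'; since the type is
built-in, the identity is passed as a plain \verb'double' scalar.  The name
\verb'Plus' is an arbitrary placeholder, and an equivalent predefined monoid
(\verb'GrB_PLUS_MONOID_FP64') already exists.

{\footnotesize
\begin{verbatim}
GrB_Monoid Plus = NULL ;
GrB_Monoid_new (&Plus, GrB_PLUS_FP64, (double) 0) ;  // identity value is 0
// ... use the monoid in GrB_reduce or in a semiring, then free it:
GrB_free (&Plus) ;
\end{verbatim}}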
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Monoid\_terminal\_new:} create a monoid with terminal}
%-------------------------------------------------------------------------------
\label{monoid_terminal_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Monoid_terminal_new    // create a monoid that has a terminal value
(
    GrB_Monoid *monoid,             // handle of monoid to create
    GrB_BinaryOp op,                // binary operator of the monoid
    <type> identity,                // identity value of the monoid
    <type> terminal                 // terminal value of the monoid
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Monoid_terminal_new' is identical to \verb'GrB_Monoid_new', except
that it allows for the specification of a {\em terminal value}.  The
\verb'<type>' of the terminal value is the same as the \verb'identity'
parameter; see Section~\ref{monoid_new} for details.

The terminal value of a monoid is the value $z$ for which $z=f(z,y)$ for any
$y$, where $z=f(x,y)$ is the binary operator of the monoid.  This is also
called the {\em annihilator}, but the term {\em terminal value} is used here.
This is because all annihilators are terminal values, but a terminal value
need not be an annihilator, as described in the \verb'MIN' example below.

If the terminal value is encountered during computation, the rest of the
computations can be skipped.  This can greatly improve the performance of
\verb'GrB_reduce', and matrix multiply in specific cases (when a dot product
method is used).  For example, using \verb'GrB_reduce' to compute the sum of
all entries in a \verb'GrB_FP32' matrix with $e$ entries takes $O(e)$ time,
since a monoid based on \verb'GrB_PLUS_FP32' has no terminal value.  By
contrast, a reduction using \verb'GrB_LOR' on a \verb'GrB_BOOL' matrix can
take as little as $O(1)$ time, if a \verb'true' value is found in the matrix
very early.

Monoids based on the built-in \verb'GrB_MIN_*' and \verb'GrB_MAX_*' operators
(for any type), the boolean \verb'GrB_LOR', and the boolean \verb'GrB_LAND'
operators all have terminal values.  For example, the identity value of
\verb'GrB_LOR' is \verb'false', and its terminal value is \verb'true'.  When
computing a reduction of a set of boolean values to a single value, once a
\verb'true' is seen, the computation can exit early since the result is now
known.

If \verb'op' is a built-in operator with known identity and terminal values,
then the \verb'identity' and \verb'terminal' parameters are ignored, and its
known identity and terminal values are used instead.

There may be cases in which the user application needs to use a non-standard
terminal value for a built-in operator.  For example, suppose the matrix has
type \verb'GrB_FP32', but all values in the matrix are known to be
non-negative.  The annihilator value of \verb'MIN' is \verb'-INFINITY', but
this will never be seen.  However, the computation could terminate when
finding the value zero.  This is an example of using a terminal value that is
not actually an annihilator, but it functions like one since the monoid will
operate strictly on non-negative values.  In this case, a monoid created with
\verb'GrB_MIN_FP32' will not terminate early.  To create a monoid that can
terminate early, create a user-defined operator that computes the same thing
as \verb'GrB_MIN_FP32', and then create a monoid based on this user-defined
operator with a terminal value of zero and an identity of \verb'+INFINITY'.
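The sketch below spells out the non-negative \verb'MIN' example just
described: a user-defined operator that computes the same result as
\verb'GrB_MIN_FP32', wrapped in a monoid whose identity is \verb'+INFINITY'
and whose terminal value is zero.  The names \verb'my_min' and
\verb'NonNegMin' are arbitrary placeholders, and the terminal value of zero is
valid only under the stated assumption that all entries are non-negative.

{\footnotesize
\begin{verbatim}
#include <math.h>

void my_min (void *z, const void *x, const void *y)
{
    float a = * (const float *) x ;
    float b = * (const float *) y ;
    * (float *) z = (a < b) ? a : b ;   // same result as GrB_MIN_FP32
}

GrB_BinaryOp MinOp = NULL ;
GrB_Monoid NonNegMin = NULL ;
GrB_BinaryOp_new (&MinOp, (void *) my_min, GrB_FP32, GrB_FP32, GrB_FP32) ;
GxB_Monoid_terminal_new (&NonNegMin, MinOp,
    (float) INFINITY,                   // identity value
    (float) 0) ;                        // terminal value
\end{verbatim}}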
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Monoid\_operator:} return the monoid operator}
%-------------------------------------------------------------------------------
\label{monoid_operator}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Monoid_operator    // return the monoid operator
(
    GrB_BinaryOp *op,           // returns the binary op of the monoid
    GrB_Monoid monoid           // monoid to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Monoid_operator' returns the binary operator of the monoid.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Monoid\_identity:} return the monoid identity}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Monoid_identity    // return the monoid identity
(
    void *identity,             // returns the identity of the monoid
    GrB_Monoid monoid           // monoid to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Monoid_identity' returns the identity value of the monoid.  The
\verb'void *' pointer, \verb'identity', must be non-\verb'NULL' and must point
to a memory space of size at least equal to the size of the type of the
\verb'monoid'.  The type size can be obtained via \verb'GxB_Monoid_operator'
to return the monoid additive operator, then \verb'GxB_BinaryOp_ztype' to
obtain the \verb'ztype', followed by \verb'GxB_Type_size' to get its size.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Monoid\_terminal:} return the monoid terminal value}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Monoid_terminal    // return the monoid terminal
(
    bool *has_terminal,         // true if the monoid has a terminal value
    void *terminal,             // returns the terminal of the monoid
    GrB_Monoid monoid           // monoid to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Monoid_terminal' returns the terminal value of the monoid (if any).
The \verb'void *' pointer, \verb'terminal', must be non-\verb'NULL' and must
point to a memory space of size at least equal to the size of the type of the
\verb'monoid'.  The type size can be obtained via \verb'GxB_Monoid_operator'
to return the monoid additive operator, then \verb'GxB_BinaryOp_ztype' to
obtain the \verb'ztype', followed by \verb'GxB_Type_size' to get its size.  If
the monoid has a terminal value, then \verb'has_terminal' is \verb'true', and
its value is returned in the \verb'terminal' parameter.  If it has no terminal
value, then \verb'has_terminal' is \verb'false', and the \verb'terminal'
parameter is not modified.

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Monoid\_free:} free a monoid}
%-------------------------------------------------------------------------------
\label{monoid_free}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_free               // free a user-created monoid
(
    GrB_Monoid *monoid          // handle of monoid to free
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Monoid_free' frees a monoid.  Either usage:

{\small
\begin{verbatim}
    GrB_Monoid_free (&monoid) ;
    GrB_free (&monoid) ;
\end{verbatim}}

\noindent
frees the \verb'monoid' and sets \verb'monoid' to \verb'NULL'.
It safely does nothing if passed a \verb'NULL' handle, or if
\verb'monoid == NULL' on input.  It does nothing at all if passed a built-in
monoid.

\newpage
%===============================================================================
\subsection{GraphBLAS semirings: {\sf GrB\_Semiring}} %=========================
%===============================================================================
\label{semiring}

A {\em semiring} defines all the operators required to define the
multiplication of two sparse matrices in GraphBLAS, ${\bf C=AB}$.  The ``add''
operator is a commutative and associative monoid, and the binary ``multiply''
operator defines a function $z=f_{\mbox{mult}}(x,y)$, where the type of $z$
exactly matches the monoid type.  SuiteSparse:GraphBLAS includes 1,553
predefined built-in semirings.

The next sections define the following methods for the \verb'GrB_Semiring'
object:

\vspace{0.2in}
{\footnotesize
\begin{tabular}{ll}
\hline
\verb'GrB_Semiring_new'      & create a user-defined semiring \\
\verb'GrB_Semiring_wait'     & wait for a user-defined semiring \\
\verb'GxB_Semiring_add'      & return the additive monoid of a semiring \\
\verb'GxB_Semiring_multiply' & return the binary operator of a semiring \\
\verb'GrB_Semiring_free'     & free a semiring \\
\hline
\end{tabular}
}

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Semiring\_new:} create a semiring}
%-------------------------------------------------------------------------------
\label{semiring_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Semiring_new           // create a semiring
(
    GrB_Semiring *semiring,         // handle of semiring to create
    GrB_Monoid add,                 // add monoid of the semiring
    GrB_BinaryOp multiply           // multiply operator of the semiring
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Semiring_new' creates a new semiring, with \verb'add' being the
additive monoid and \verb'multiply' being the binary ``multiply'' operator.
In addition to the standard error cases, the function returns
\verb'GrB_DOMAIN_MISMATCH' if the output (\verb'ztype') domain of
\verb'multiply' does not match the domain of the \verb'add' monoid.

Using built-in types and operators, 2,438 semirings can be built.  This count
excludes redundant Boolean operators (for example \verb'GrB_TIMES_BOOL' and
\verb'GrB_LAND' are different operators but they are redundant since they
always return the same result).

The v1.3 C API Specification for GraphBLAS includes 124 predefined semirings,
with names of the form \verb'GrB_add_mult_SEMIRING_type', where \verb'add' is
the operator of the additive monoid, \verb'mult' is the multiply operator, and
\verb'type' is the type of the input $x$ to the multiply operator, $f(x,y)$.
The name of the domain for the additive monoid does not appear in the name,
since it always matches the type of the output of the \verb'mult' operator.
Twelve kinds of \verb'GrB*' semirings are available for all 10 real,
non-boolean types: \verb'PLUS_TIMES', \verb'PLUS_MIN', \verb'MIN_PLUS',
\verb'MIN_TIMES', \verb'MIN_FIRST', \verb'MIN_SECOND', \verb'MIN_MAX',
\verb'MAX_PLUS', \verb'MAX_TIMES', \verb'MAX_FIRST', \verb'MAX_SECOND', and
\verb'MAX_MIN'.  Four semirings are for boolean types only: \verb'LOR_LAND',
\verb'LAND_LOR', \verb'LXOR_LAND', and \verb'LXNOR_LOR'.

SuiteSparse:GraphBLAS pre-defines 1,553 semirings from built-in types and
operators, listed below.  The naming convention is \verb'GxB_add_mult_type'.
The 124 \verb'GrB*' semirings are a subset of the list below, included with two names: \verb'GrB*' and \verb'GxB*'. If the \verb'GrB*' name is provided, its use is preferred, for portability to other GraphBLAS implementations. \vspace{-0.05in} \begin{itemize} \item 1000 semirings with a multiplier $T \times T \rightarrow T$ where $T$ is any of the 10 non-Boolean, real types, from the complete cross product of: \vspace{-0.05in} \begin{itemize} \item 5 monoids (\verb'MIN', \verb'MAX', \verb'PLUS', \verb'TIMES', \verb'ANY') \item 20 multiply operators (\verb'FIRST', \verb'SECOND', \verb'PAIR', \verb'MIN', \verb'MAX', \verb'PLUS', \verb'MINUS', \verb'RMINUS', \verb'TIMES', \verb'DIV', \verb'RDIV', \verb'ISEQ', \verb'ISNE', \verb'ISGT', \verb'ISLT', \verb'ISGE', \verb'ISLE', \verb'LOR', \verb'LAND', \verb'LXOR'). \item 10 non-Boolean types, $T$ \end{itemize} \item 300 semirings with a comparison operator $T \times T \rightarrow$ \verb'bool', where $T$ is non-Boolean and real, from the complete cross product of: \vspace{-0.05in} \begin{itemize} \item 5 Boolean monoids (\verb'LAND', \verb'LOR', \verb'LXOR', \verb'EQ', \verb'ANY') \item 6 multiply operators (\verb'EQ', \verb'NE', \verb'GT', \verb'LT', \verb'GE', \verb'LE') \item 10 non-Boolean types, $T$ \end{itemize} \item 55 semirings with purely Boolean types, \verb'bool' $\times$ \verb'bool' $\rightarrow$ \verb'bool', from the complete cross product of: \vspace{-0.05in} \begin{itemize} \item 5 Boolean monoids (\verb'LAND', \verb'LOR', \verb'LXOR', \verb'EQ', \verb'ANY') \item 11 multiply operators (\verb'FIRST', \verb'SECOND', \verb'PAIR', \verb'LOR', \verb'LAND', \verb'LXOR', \verb'EQ', \verb'GT', \verb'LT', \verb'GE', \verb'LE') \end{itemize} \item 54 complex semirings, $Z \times Z \rightarrow Z$ where $Z$ is \verb'GxB_FC32' (single precision complex) or \verb'GxB_FC64' (double precision complex): \vspace{-0.05in} \begin{itemize} \item 3 complex monoids (\verb'PLUS', \verb'TIMES', \verb'ANY') \item 9 complex multiply operators (\verb'FIRST', \verb'SECOND', \verb'PAIR', \verb'PLUS', \verb'MINUS', \verb'TIMES', \verb'DIV', \verb'RDIV', \verb'RMINUS') \item 2 complex types, $Z$ \end{itemize} \item 64 bitwise semirings, $U \times U \rightarrow U$ where $U$ is an unsigned integer. \vspace{-0.05in} \begin{itemize} \item 4 bitwise monoids (\verb'BOR', \verb'BAND', \verb'BXOR', \verb'BXNOR') \item 4 bitwise multiply operators (the same list) \item 4 unsigned integer types \end{itemize} \item 80 positional semirings, $X \times X \rightarrow T$ where $T$ is \verb'INT32' or \verb'INT64': \vspace{-0.05in} \begin{itemize} \item 5 monoids (\verb'MIN', \verb'MAX', \verb'PLUS', \verb'TIMES', \verb'ANY') \item 8 positional operators (\verb'FIRSTI', \verb'FIRSTI1', \verb'FIRSTJ', \verb'FIRSTJ1', \verb'SECONDI', \verb'SECONDI1', \verb'SECONDJ', \verb'SECONDJ1') \item 2 integer types (\verb'INT32', \verb'INT64') \end{itemize} \end{itemize} %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Semiring\_wait:} wait for a semiring} %------------------------------------------------------------------------------- \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_wait // wait for a user-defined semiring ( GrB_Semiring *semiring // semiring to wait for ) ; \end{verbatim} }\end{mdframed} After creating a user-defined semiring, a GraphBLAS library may choose to exploit non-blocking mode to delay its creation. 
\verb'GrB_Semiring_wait(&semiring)' ensures the \verb'semiring' is completed. SuiteSparse:GraphBLAS currently does nothing for \verb'GrB_Semiring_wait(&semiring)', except to ensure that the \verb'semiring' is valid. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Semiring\_add:} return the additive monoid of a semiring} %------------------------------------------------------------------------------- \label{semiring_add} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Semiring_add // return the add monoid of a semiring ( GrB_Monoid *add, // returns add monoid of the semiring GrB_Semiring semiring // semiring to query ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Semiring_add' returns the additive monoid of a semiring. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Semiring\_multiply:} return multiply operator of a semiring} %------------------------------------------------------------------------------- \label{semiring_multiply} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Semiring_multiply // return multiply operator of a semiring ( GrB_BinaryOp *multiply, // returns multiply operator of the semiring GrB_Semiring semiring // semiring to query ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Semiring_multiply' returns the binary multiplicative operator of a semiring. %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Semiring\_free:} free a semiring} %------------------------------------------------------------------------------- \label{semiring_free} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_free // free a user-created semiring ( GrB_Semiring *semiring // handle of semiring to free ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Semiring_free' frees a semiring. Either usage: {\small \begin{verbatim} GrB_Semiring_free (&semiring) ; GrB_free (&semiring) ; \end{verbatim}} \noindent frees the \verb'semiring' and sets \verb'semiring' to \verb'NULL'. It safely does nothing if passed a \verb'NULL' handle, or if \verb'semiring == NULL' on input. It does nothing at all if passed a built-in semiring. 
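To close this section, the following sketch uses \verb'GrB_Semiring_new'
(Section~\ref{semiring_new}) to assemble a max-plus (``tropical'') semiring
over \verb'GrB_FP64' from a predefined monoid and multiply operator.  The name
\verb'MaxPlus' is an arbitrary placeholder; the same semiring is also
available predefined as \verb'GrB_MAX_PLUS_SEMIRING_FP64', whose use is
preferred for portability.

{\footnotesize
\begin{verbatim}
GrB_Semiring MaxPlus = NULL ;
// the ztype of GrB_PLUS_FP64 is GrB_FP64, which matches the domain of the
// GrB_MAX_MONOID_FP64 monoid, so no GrB_DOMAIN_MISMATCH error occurs
GrB_Semiring_new (&MaxPlus, GrB_MAX_MONOID_FP64, GrB_PLUS_FP64) ;
// ... use MaxPlus in GrB_mxm, GrB_vxm, or GrB_mxv, then free it:
GrB_free (&MaxPlus) ;
\end{verbatim}}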
\newpage
%===============================================================================
\subsection{GraphBLAS scalars: {\sf GxB\_Scalar}} %=============================
%===============================================================================
\label{scalar}

This section describes a set of methods that create, modify, query, and
destroy a SuiteSparse:GraphBLAS scalar, \verb'GxB_Scalar':

\vspace{0.2in}
{\footnotesize
\begin{tabular}{ll}
\hline
\verb'GxB_Scalar_new'            & create a scalar \\
\verb'GxB_Scalar_wait'           & wait for a scalar \\
\verb'GxB_Scalar_dup'            & copy a scalar \\
\verb'GxB_Scalar_clear'          & clear a scalar of its entry \\
\verb'GxB_Scalar_nvals'          & return the number of entries in a scalar (0 or 1) \\
\verb'GxB_Scalar_type'           & return the type of a scalar \\
\verb'GxB_Scalar_setElement'     & set the single entry of a scalar \\
\verb'GxB_Scalar_extractElement' & get the single entry from a scalar \\
\verb'GxB_Scalar_free'           & free a scalar \\
\hline
\end{tabular}
}

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Scalar\_new:} create a scalar}
%-------------------------------------------------------------------------------
\label{scalar_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Scalar_new     // create a new GxB_Scalar with no entry
(
    GxB_Scalar *s,          // handle of GxB_Scalar to create
    GrB_Type type           // type of GxB_Scalar to create
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Scalar_new' creates a new scalar with no entry in it, of the given
type.  This is analogous to MATLAB statement \verb's = sparse(0)', except that
GraphBLAS can create scalars of any type.  The pattern of the new scalar is
empty.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Scalar\_wait:} wait for a scalar}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_wait           // wait for a scalar
(
    GxB_Scalar *s           // scalar to wait for
) ;
\end{verbatim}
}\end{mdframed}

In non-blocking mode, the computations for a \verb'GxB_Scalar' may be delayed.
In this case, the scalar is not yet safe to use by multiple independent user
threads.  A user application may force completion of a scalar \verb's' via
\verb'GxB_Scalar_wait(&s)'.  After this call, different user threads may
safely call GraphBLAS operations that use the scalar \verb's' as an input
parameter.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Scalar\_dup:} copy a scalar}
%-------------------------------------------------------------------------------
\label{scalar_dup}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Scalar_dup     // make an exact copy of a GxB_Scalar
(
    GxB_Scalar *s,          // handle of output GxB_Scalar to create
    const GxB_Scalar t      // input GxB_Scalar to copy
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Scalar_dup' makes a deep copy of a scalar, like \verb's=t' in
MATLAB.  In GraphBLAS, it is possible, and valid, to write the following:

{\footnotesize
\begin{verbatim}
    GxB_Scalar t, s ;
    GxB_Scalar_new (&t, GrB_FP64) ;
    s = t ;                         // s is a shallow copy of t
\end{verbatim}}

Then \verb's' and \verb't' can be used interchangeably.  However, only a
pointer reference is made, and modifying one of them modifies both, and
freeing one of them leaves the other as a dangling handle that should not be
used.
If two different scalars are needed, then this should be used instead: {\footnotesize \begin{verbatim} GxB_Scalar t, s ; GxB_Scalar_new (&t, GrB_FP64) ; GxB_Scalar_dup (&s, t) ; // like s = t, but making a deep copy \end{verbatim}} Then \verb's' and \verb't' are two different scalars that currently have the same value, but they do not depend on each other. Modifying one has no effect on the other. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Scalar\_clear:} clear a scalar of its entry} %------------------------------------------------------------------------------- \label{scalar_clear} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Scalar_clear // clear a GxB_Scalar of its entry ( // type remains unchanged. GxB_Scalar s // GxB_Scalar to clear ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Scalar_clear' clears the entry from a scalar. The pattern of \verb's' is empty, just as if it were created fresh with \verb'GxB_Scalar_new'. Analogous with \verb's = sparse (0)' in MATLAB. The type of \verb's' does not change. Any pending updates to the scalar are discarded. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Scalar\_nvals:} return the number of entries in a scalar} %------------------------------------------------------------------------------- \label{scalar_nvals} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Scalar_nvals // get the number of entries in a GxB_Scalar ( GrB_Index *nvals, // GxB_Scalar has nvals entries (0 or 1) const GxB_Scalar s // GxB_Scalar to query ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Scalar_nvals' returns the number of entries in a scalar, which is either 0 or 1. Roughly analogous to \verb'nvals = nnz(s)' in MATLAB, except that the implicit value in GraphBLAS need not be zero and \verb'nnz' (short for ``number of nonzeros'') in MATLAB is better described as ``number of entries'' in GraphBLAS. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Scalar\_type:} return the type of a scalar} %------------------------------------------------------------------------------- \label{scalar_type} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Scalar_type // get the type of a GxB_Scalar ( GrB_Type *type, // returns the type of the GxB_Scalar const GxB_Scalar s // GxB_Scalar to query ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Scalar_type' returns the type of a scalar. Analogous to \verb'type = class (s)' in MATLAB. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Scalar\_setElement:} set the single entry of a scalar} %------------------------------------------------------------------------------- \label{scalar_setElement} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Scalar_setElement // s = x ( GxB_Scalar s, // GxB_Scalar to modify <type> x // user scalar to assign to s ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Scalar_setElement' sets the single entry in a scalar, like \verb's = sparse(x)' in MATLAB notation. For further details of this function, see \verb'GxB_Matrix_setElement' in Section~\ref{matrix_setElement}. If an error occurs, \verb'GrB_error(&err,s)' returns details about the error. 
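The following sketch ties together the scalar methods described so far; the
variable names are arbitrary placeholders.

{\footnotesize
\begin{verbatim}
GxB_Scalar s = NULL ;
GrB_Index nvals = 0 ;
GxB_Scalar_new (&s, GrB_FP64) ;                 // s has no entry: s = sparse(0)
GxB_Scalar_nvals (&nvals, s) ;                  // nvals is 0
GxB_Scalar_setElement (s, (double) 3.14159) ;   // s = sparse(3.14159)
GxB_Scalar_nvals (&nvals, s) ;                  // nvals is 1
GxB_Scalar_clear (s) ;                          // s has no entry again
GrB_free (&s) ;
\end{verbatim}}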
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Scalar\_extractElement:} get the single entry from a scalar}
%-------------------------------------------------------------------------------
\label{scalar_extractElement}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Scalar_extractElement  // x = s
(
    <type> *x,                      // user scalar extracted
    const GxB_Scalar s              // GxB_Scalar to extract an entry from
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Scalar_extractElement' extracts the single entry from a sparse
scalar, like \verb'x = full(s)' in MATLAB.  Further details of this method are
discussed in Section~\ref{matrix_extractElement}, which discusses
\verb'GrB_Matrix_extractElement'.

{\bf NOTE: } if no entry is present in the scalar \verb's', then \verb'x' is
not modified, and the return value of \verb'GxB_Scalar_extractElement' is
\verb'GrB_NO_VALUE'.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Scalar\_free:} free a scalar}
%-------------------------------------------------------------------------------
\label{scalar_free}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_free           // free a GxB_Scalar
(
    GxB_Scalar *s           // handle of GxB_Scalar to free
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Scalar_free' frees a scalar.  Either usage:

{\small
\begin{verbatim}
    GxB_Scalar_free (&s) ;
    GrB_free (&s) ;
\end{verbatim}}

\noindent
frees the scalar \verb's' and sets \verb's' to \verb'NULL'.  It safely does
nothing if passed a \verb'NULL' handle, or if \verb's == NULL' on input.  Any
pending updates to the scalar are abandoned.

\newpage
%===============================================================================
\subsection{GraphBLAS vectors: {\sf GrB\_Vector}} %=============================
%===============================================================================
\label{vector}

This section describes a set of methods that create, modify, query, and
destroy a GraphBLAS sparse vector, \verb'GrB_Vector':

\vspace{0.2in}
{\footnotesize
\begin{tabular}{ll}
\hline
\verb'GrB_Vector_new'            & create a vector \\
\verb'GrB_Vector_wait'           & wait for a vector \\
\verb'GrB_Vector_dup'            & copy a vector \\
\verb'GrB_Vector_clear'          & clear a vector of all entries \\
\verb'GrB_Vector_size'           & return the size of a vector \\
\verb'GrB_Vector_nvals'          & return the number of entries in a vector \\
\verb'GxB_Vector_type'           & return the type of a vector \\
\verb'GrB_Vector_build'          & build a vector from a set of tuples \\
\verb'GrB_Vector_setElement'     & add an entry to a vector \\
\verb'GrB_Vector_extractElement' & get an entry from a vector \\
\verb'GrB_Vector_removeElement'  & remove an entry from a vector \\
\verb'GrB_Vector_extractTuples'  & get all entries from a vector \\
\verb'GrB_Vector_resize'         & resize a vector \\
% new in v5:------------
\verb'GxB_Vector_diag'           & extract a diagonal from a matrix \\
%-----------------------
\verb'GrB_Vector_free'           & free a vector \\
\hline
\hline
\verb'GxB_Vector_import_CSC'     & import a vector in CSC format \\
\verb'GxB_Vector_export_CSC'     & export a vector in CSC format \\
\hline
\verb'GxB_Vector_import_Bitmap'  & import a vector in bitmap format \\
\verb'GxB_Vector_export_Bitmap'  & export a vector in bitmap format \\
\hline
\verb'GxB_Vector_import_Full'    & import a vector in full format \\
\verb'GxB_Vector_export_Full'    & export a vector in full format \\
\hline
\end{tabular}
}
\vspace{0.2in}

Refer to
Section~\ref{import_export} for a discussion of the import/export methods.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_new:} create a vector}
%-------------------------------------------------------------------------------
\label{vector_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Vector_new     // create a new vector with no entries
(
    GrB_Vector *v,          // handle of vector to create
    GrB_Type type,          // type of vector to create
    GrB_Index n             // vector dimension is n-by-1
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Vector_new' creates a new \verb'n'-by-\verb'1' sparse vector with no
entries in it, of the given type.  This is analogous to MATLAB statement
\verb'v = sparse (n,1)', except that GraphBLAS can create sparse vectors of
any type.  The pattern of the new vector is empty.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_wait:} wait for a vector}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_wait           // wait for a vector
(
    GrB_Vector *w           // vector to wait for
) ;
\end{verbatim}
}\end{mdframed}

In non-blocking mode, the computations for a \verb'GrB_Vector' may be delayed.
In this case, the vector is not yet safe to use by multiple independent user
threads.  A user application may force completion of a vector \verb'w' via
\verb'GrB_Vector_wait(&w)'.  After this call, different user threads may
safely call GraphBLAS operations that use the vector \verb'w' as an input
parameter.

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_dup:} copy a vector}
%-------------------------------------------------------------------------------
\label{vector_dup}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Vector_dup     // make an exact copy of a vector
(
    GrB_Vector *w,          // handle of output vector to create
    const GrB_Vector u      // input vector to copy
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Vector_dup' makes a deep copy of a sparse vector, like \verb'w=u' in
MATLAB.  In GraphBLAS, it is possible, and valid, to write the following:

{\footnotesize
\begin{verbatim}
    GrB_Vector u, w ;
    GrB_Vector_new (&u, GrB_FP64, n) ;
    w = u ;                         // w is a shallow copy of u
\end{verbatim}}

Then \verb'w' and \verb'u' can be used interchangeably.  However, only a
pointer reference is made, and modifying one of them modifies both, and
freeing one of them leaves the other as a dangling handle that should not be
used.  If two different vectors are needed, then this should be used instead:

{\footnotesize
\begin{verbatim}
    GrB_Vector u, w ;
    GrB_Vector_new (&u, GrB_FP64, n) ;
    GrB_Vector_dup (&w, u) ;        // like w = u, but making a deep copy
\end{verbatim}}

Then \verb'w' and \verb'u' are two different vectors that currently have the
same set of values, but they do not depend on each other.  Modifying one has
no effect on the other.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_clear:} clear a vector of all entries}
%-------------------------------------------------------------------------------
\label{vector_clear}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Vector_clear   // clear a vector of all entries;
(                           // type and dimension remain unchanged.
    GrB_Vector v            // vector to clear
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Vector_clear' clears all entries from a vector.  All values
\verb'v(i)' are now equal to the implicit value, depending on which semiring
is used to perform computations on the vector.  The pattern of \verb'v' is
empty, just as if it were created fresh with \verb'GrB_Vector_new'.  Analogous
with \verb'v (:) = sparse(0)' in MATLAB.  The type and dimension of \verb'v'
do not change.  Any pending updates to the vector are discarded.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_size:} return the size of a vector}
%-------------------------------------------------------------------------------
\label{vector_size}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Vector_size    // get the dimension of a vector
(
    GrB_Index *n,           // vector dimension is n-by-1
    const GrB_Vector v      // vector to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Vector_size' returns the size of a vector (the number of rows).
Analogous to \verb'n = length(v)' or \verb'n = size(v,1)' in MATLAB.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_nvals:} return the number of entries in a vector}
%-------------------------------------------------------------------------------
\label{vector_nvals}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Vector_nvals   // get the number of entries in a vector
(
    GrB_Index *nvals,       // vector has nvals entries
    const GrB_Vector v      // vector to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Vector_nvals' returns the number of entries in a vector.  Roughly
analogous to \verb'nvals = nnz(v)' in MATLAB, except that the implicit value
in GraphBLAS need not be zero and \verb'nnz' (short for ``number of
nonzeros'') in MATLAB is better described as ``number of entries'' in
GraphBLAS.

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Vector\_type:} return the type of a vector}
%-------------------------------------------------------------------------------
\label{vector_type}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Vector_type    // get the type of a vector
(
    GrB_Type *type,         // returns the type of the vector
    const GrB_Vector v      // vector to query
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Vector_type' returns the type of a vector.  Analogous to
\verb'type = class (v)' in MATLAB.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_build:} build a vector from a set of tuples}
%-------------------------------------------------------------------------------
\label{vector_build}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Vector_build   // build a vector from (I,X) tuples
(
    GrB_Vector w,           // vector to build
    const GrB_Index *I,     // array of row indices of tuples
    const <type> *X,        // array of values of tuples
    GrB_Index nvals,        // number of tuples
    const GrB_BinaryOp dup  // binary function to assemble duplicates
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Vector_build' constructs a sparse vector \verb'w' from a set of
tuples, \verb'I' and \verb'X', each of length \verb'nvals'.
The vector \verb'w' must have already been initialized with \verb'GrB_Vector_new', and it must have no entries in it before calling \verb'GrB_Vector_build'. This function is just like \verb'GrB_Matrix_build' (see Section~\ref{matrix_build}), except that it builds a sparse vector instead of a sparse matrix. For a description of what \verb'GrB_Vector_build' does, refer to \verb'GrB_Matrix_build'. For a vector, the list of column indices \verb'J' in \verb'GrB_Matrix_build' is implicitly a vector of length \verb'nvals' all equal to zero. Otherwise the methods are identical. \begin{alert} {\bf SPEC:} As an extension to the spec, results are defined even if \verb'dup' is non-associative. \end{alert} %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_setElement:} add an entry to a vector} %------------------------------------------------------------------------------- \label{vector_setElement} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Vector_setElement // w(i) = x ( GrB_Vector w, // vector to modify <type> x, // scalar to assign to w(i) GrB_Index i // index ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_setElement' sets a single entry in a vector, \verb'w(i) = x'. The operation is exactly like setting a single entry in an \verb'n'-by-1 matrix, \verb'A(i,0) = x', where the column index for a vector is implicitly \verb'j=0'. For further details of this function, see \verb'GrB_Matrix_setElement' in Section~\ref{matrix_setElement}. If an error occurs, \verb'GrB_error(&err,w)' returns details about the error. %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_extractElement:} get an entry from a vector} %------------------------------------------------------------------------------- \label{vector_extractElement} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Vector_extractElement // x = v(i) ( <type> *x, // scalar extracted const GrB_Vector v, // vector to extract an entry from GrB_Index i // index ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_extractElement' extracts a single entry from a vector, \verb'x = v(i)'. The method is identical to extracting a single entry \verb'x = A(i,0)' from an \verb'n'-by-1 matrix, so further details of this method are discussed in Section~\ref{matrix_extractElement}, which discusses \verb'GrB_Matrix_extractElement'. In this case, the column index is implicitly \verb'j=0'. {\bf NOTE: } if no entry is present at \verb'v(i)', then \verb'x' is not modified, and the return value of \verb'GrB_Vector_extractElement' is \verb'GrB_NO_VALUE'. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_removeElement:} remove an entry from a vector} %------------------------------------------------------------------------------- \label{vector_removeElement} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Vector_removeElement ( GrB_Vector w, // vector to remove an entry from GrB_Index i // index ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_removeElement' removes a single entry \verb'w(i)' from a vector. If no entry is present at \verb'w(i)', then the vector is not modified. If an error occurs, \verb'GrB_error(&err,w)' returns details about the error. 
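The following sketch builds a small sparse vector from a set of tuples and
then exercises the element-wise methods described above; the variable names
and dimensions are arbitrary placeholders.

{\footnotesize
\begin{verbatim}
GrB_Vector v = NULL ;
GrB_Index I [3] = { 1, 4, 7 } ;
double    X [3] = { 1.5, 2.5, 3.5 } ;
double    x = 0 ;
GrB_Vector_new (&v, GrB_FP64, 10) ;             // v = sparse (10,1)
GrB_Vector_build (v, I, X, 3, GrB_PLUS_FP64) ;  // v(1)=1.5, v(4)=2.5, v(7)=3.5
GrB_Vector_setElement (v, 9.0, 0) ;             // v(0) = 9
GrB_Vector_extractElement (&x, v, 4) ;          // x = v(4) = 2.5
GrB_Vector_removeElement (v, 7) ;               // delete v(7)
GrB_free (&v) ;
\end{verbatim}}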
%------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_extractTuples:} get all entries from a vector} %------------------------------------------------------------------------------- \label{vector_extractTuples} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Vector_extractTuples // [I,~,X] = find (v) ( GrB_Index *I, // array for returning row indices of tuples <type> *X, // array for returning values of tuples GrB_Index *nvals, // I, X size on input; # tuples on output const GrB_Vector v // vector to extract tuples from ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_extractTuples' extracts all tuples from a sparse vector, analogous to \verb'[I,~,X] = find(v)' in MATLAB. This function is identical to its \verb'GrB_Matrix_extractTuples' counterpart, except that the array of column indices \verb'J' does not appear in this function. Refer to Section~\ref{matrix_extractTuples} where further details of this function are described. %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_resize:} resize a vector} %------------------------------------------------------------------------------- \label{vector_resize} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Vector_resize // change the size of a vector ( GrB_Vector u, // vector to modify GrB_Index nrows_new // new number of rows in vector ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_resize' changes the size of a vector. If the dimension decreases, entries that fall outside the resized vector are deleted. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Vector\_diag:} extract a diagonal from a matrix} %------------------------------------------------------------------------------- \label{vector_diag} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Vector_diag // extract a diagonal from a matrix ( GrB_Vector v, // output vector const GrB_Matrix A, // input matrix int64_t k, const GrB_Descriptor desc // unused, except threading control ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Vector_diag' extracts a vector \verb'v' from an input matrix \verb'A', which may be rectangular. If \verb'k' = 0, the main diagonal of \verb'A' is extracted; \verb'k' $> 0$ denotes diagonals above the main diagonal of \verb'A', and \verb'k' $< 0$ denotes diagonals below the main diagonal of \verb'A'. Let \verb'A' have dimension $m$-by-$n$. If \verb'k' is in the range 0 to $n-1$, then \verb'v' has length $\min(m,n-k)$. If \verb'k' is negative and in the range -1 to $-m+1$, then \verb'v' has length $\min(m+k,n)$. If \verb'k' is outside these ranges, \verb'v' has length 0 (this is not an error). This function computes the same thing as the MATLAB statement \verb'v=diag(A,k)' when \verb'A' is a matrix, except that \verb'GxB_Vector_diag' can also do typecasting. The vector \verb'v' must already exist on input, and \verb'GrB_Vector_size (&len,v)' must return \verb'len' = 0 if \verb'k' $\ge n$ or \verb'k' $\le -m$, \verb'len' $=\min(m,n-k)$ if \verb'k' is in the range 0 to $n-1$, and \verb'len' $=\min(m+k,n)$ if \verb'k' is in the range -1 to $-m+1$. Any existing entries in \verb'v' are discarded. The type of \verb'v' is preserved, so that if the type of \verb'A' and \verb'v' differ, the entries are typecasted into the type of \verb'v'. 
Any settings made to \verb'v' by \verb'GxB_Vector_Option_set' (bitmap switch and sparsity control) are unchanged. %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_free:} free a vector} %------------------------------------------------------------------------------- \label{vector_free} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_free // free a vector ( GrB_Vector *v // handle of vector to free ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_free' frees a vector. Either usage: {\small \begin{verbatim} GrB_Vector_free (&v) ; GrB_free (&v) ; \end{verbatim}} \noindent frees the vector \verb'v' and sets \verb'v' to \verb'NULL'. It safely does nothing if passed a \verb'NULL' handle, or if \verb'v == NULL' on input. Any pending updates to the vector are abandoned. \newpage %=============================================================================== \subsection{GraphBLAS matrices: {\sf GrB\_Matrix}} %============================ %=============================================================================== \label{matrix} This section describes a set of methods that create, modify, query, and destroy a GraphBLAS sparse matrix, \verb'GrB_Matrix': \vspace{0.2in} {\footnotesize \begin{tabular}{ll} \hline \verb'GrB_Matrix_new' & create a matrix \\ \verb'GrB_Matrix_wait' & wait for a matrix \\ \verb'GrB_Matrix_dup' & copy a matrix \\ \verb'GrB_Matrix_clear' & clear a matrix of all entries \\ \verb'GrB_Matrix_nrows' & return the number of rows of a matrix \\ \verb'GrB_Matrix_ncols' & return the number of columns of a matrix \\ \verb'GrB_Matrix_nvals' & return the number of entries in a matrix \\ \verb'GxB_Matrix_type' & return the type of a matrix \\ \verb'GrB_Matrix_build' & build a matrix from a set of tuples \\ \verb'GrB_Matrix_setElement' & add an entry to a matrix \\ \verb'GrB_Matrix_extractElement'& get an entry from a matrix \\ \verb'GrB_Matrix_removeElement' & remove an entry from a matrix \\ \verb'GrB_Matrix_extractTuples' & get all entries from a matrix \\ \verb'GrB_Matrix_resize' & resize a matrix \\ % new in v5:------------ \verb'GxB_Matrix_concat' & concatenate many matrices into one matrix \\ \verb'GxB_Matrix_split' & split one matrix into many matrices \\ \verb'GxB_Matrix_diag' & construct a diagonal matrix from a vector \\ %----------------------- \verb'GrB_Matrix_free' & free a matrix \\ \hline \hline \verb'GxB_Matrix_import_CSR' & import a matrix in CSR form \\ \verb'GxB_Matrix_export_CSR' & export a matrix in CSR form \\ \hline \verb'GxB_Matrix_import_CSC' & import a matrix in CSC form \\ \verb'GxB_Matrix_export_CSC' & export a matrix in CSC form \\ \hline \verb'GxB_Matrix_import_HyperCSR' & import a matrix in HyperCSR form \\ \verb'GxB_Matrix_export_HyperCSR' & export a matrix in HyperCSR form \\ \hline \verb'GxB_Matrix_import_HyperCSC' & import a matrix in HyperCSC form \\ \verb'GxB_Matrix_export_HyperCSC' & export a matrix in HyperCSC form \\ \hline \verb'GxB_Matrix_import_BitmapR' & import a matrix in BitmapR form \\ \verb'GxB_Matrix_export_BitmapR' & export a matrix in BitmapR form \\ \hline \verb'GxB_Matrix_import_BitmapC' & import a matrix in BitmapC form \\ \verb'GxB_Matrix_export_BitmapC' & export a matrix in BitmapC form \\ \hline \verb'GxB_Matrix_import_FullR' & import a matrix in FullR form \\ \verb'GxB_Matrix_export_FullR' & export a matrix in FullR form \\ \hline \verb'GxB_Matrix_import_FullC' & import a matrix in FullC form \\ \verb'GxB_Matrix_export_FullC' 
                              & export a matrix in FullC form \\
\hline
\end{tabular}
}
\vspace{0.2in}

Refer to Section~\ref{import_export} for a discussion of the import/export
methods.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_new:} create a matrix}
%-------------------------------------------------------------------------------
\label{matrix_new}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_new             // create a new matrix with no entries
(
    GrB_Matrix *A,                  // handle of matrix to create
    GrB_Type type,                  // type of matrix to create
    GrB_Index nrows,                // matrix dimension is nrows-by-ncols
    GrB_Index ncols
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_new' creates a new \verb'nrows'-by-\verb'ncols' sparse matrix
with no entries in it, of the given type.  This is analogous to the MATLAB
statement \verb'A = sparse (nrows, ncols)', except that GraphBLAS can create
sparse matrices of any type.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_wait:} wait for a matrix}
%-------------------------------------------------------------------------------

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_wait                   // wait for a matrix
(
    GrB_Matrix *C                   // matrix to wait for
) ;
\end{verbatim} } \end{mdframed}

In non-blocking mode, the computations for a \verb'GrB_Matrix' may be delayed.
In this case, the matrix is not yet safe to use by multiple independent user
threads.  A user application may force completion of a matrix \verb'C' via
\verb'GrB_Matrix_wait(&C)'.  After this call, different user threads may
safely call GraphBLAS operations that use the matrix \verb'C' as an input
parameter.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_dup:} copy a matrix}
%-------------------------------------------------------------------------------
\label{matrix_dup}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_dup             // make an exact copy of a matrix
(
    GrB_Matrix *C,                  // handle of output matrix to create
    const GrB_Matrix A              // input matrix to copy
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_dup' makes a deep copy of a sparse matrix, like \verb'C=A' in
MATLAB.  In GraphBLAS, it is possible, and valid, to write the following:

{\footnotesize
\begin{verbatim}
    GrB_Matrix A, C ;
    GrB_Matrix_new (&A, GrB_FP64, n, n) ;
    C = A ;                         // C is a shallow copy of A
\end{verbatim}}

Then \verb'C' and \verb'A' can be used interchangeably.  However, only a
pointer reference is made, and modifying one of them modifies both, and
freeing one of them leaves the other as a dangling handle that should not be
used.  If two different matrices are needed, then this should be used instead:

{\footnotesize
\begin{verbatim}
    GrB_Matrix A, C ;
    GrB_Matrix_new (&A, GrB_FP64, n, n) ;
    GrB_Matrix_dup (&C, A) ;        // like C = A, but making a deep copy
\end{verbatim}}

Then \verb'C' and \verb'A' are two different matrices that currently have the
same set of values, but they do not depend on each other.  Modifying one has
no effect on the other.
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_clear:} clear a matrix of all entries}
%-------------------------------------------------------------------------------
\label{matrix_clear}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_clear           // clear a matrix of all entries;
(                                   // type and dimensions remain unchanged
    GrB_Matrix A                    // matrix to clear
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_clear' clears all entries from a matrix.  All values
\verb'A(i,j)' are now equal to the implicit value, depending on what semiring
is used to perform computations on the matrix.  The pattern of \verb'A' is
empty, just as if it were created fresh with \verb'GrB_Matrix_new'.  Analogous
to \verb'A (:,:) = 0' in MATLAB.  The type and dimensions of \verb'A' do not
change.  Any pending updates to the matrix are discarded.

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_nrows:} return the number of rows of a matrix}
%-------------------------------------------------------------------------------
\label{matrix_nrows}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_nrows           // get the number of rows of a matrix
(
    GrB_Index *nrows,               // matrix has nrows rows
    const GrB_Matrix A              // matrix to query
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_nrows' returns the number of rows of a matrix
(\verb'nrows=size(A,1)' in MATLAB).

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_ncols:} return the number of columns of a matrix}
%-------------------------------------------------------------------------------
\label{matrix_ncols}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_ncols           // get the number of columns of a matrix
(
    GrB_Index *ncols,               // matrix has ncols columns
    const GrB_Matrix A              // matrix to query
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_ncols' returns the number of columns of a matrix
(\verb'ncols=size(A,2)' in MATLAB).

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_nvals:} return the number of entries in a matrix}
%-------------------------------------------------------------------------------
\label{matrix_nvals}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_nvals           // get the number of entries in a matrix
(
    GrB_Index *nvals,               // matrix has nvals entries
    const GrB_Matrix A              // matrix to query
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_nvals' returns the number of entries in a matrix.  Roughly
analogous to \verb'nvals = nnz(A)' in MATLAB, except that the implicit value
in GraphBLAS need not be zero and \verb'nnz' (short for ``number of
nonzeros'') in MATLAB is better described as ``number of entries'' in
GraphBLAS.
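As a short illustrative sketch (assuming \verb'A' is an existing
\verb'GrB_Matrix'), the three query methods above can be combined to report
the shape and number of entries of a matrix:

{\footnotesize
\begin{verbatim}
    GrB_Index nrows, ncols, nvals ;
    GrB_Matrix_nrows (&nrows, A) ;      // number of rows of A
    GrB_Matrix_ncols (&ncols, A) ;      // number of columns of A
    GrB_Matrix_nvals (&nvals, A) ;      // number of entries present in A
    printf ("A is %g-by-%g with %g entries\n",
        (double) nrows, (double) ncols, (double) nvals) ;
\end{verbatim}}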
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Matrix\_type:} return the type of a matrix}
%-------------------------------------------------------------------------------
\label{matrix_type}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_type            // get the type of a matrix
(
    GrB_Type *type,                 // returns the type of the matrix
    const GrB_Matrix A              // matrix to query
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Matrix_type' returns the type of a matrix, like \verb'type=class(A)'
in MATLAB.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_build:} build a matrix from a set of tuples}
%-------------------------------------------------------------------------------
\label{matrix_build}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_build           // build a matrix from (I,J,X) tuples
(
    GrB_Matrix C,                   // matrix to build
    const GrB_Index *I,             // array of row indices of tuples
    const GrB_Index *J,             // array of column indices of tuples
    const <type> *X,                // array of values of tuples
    GrB_Index nvals,                // number of tuples
    const GrB_BinaryOp dup          // binary function to assemble duplicates
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_build' constructs a sparse matrix \verb'C' from a set of
tuples, \verb'I', \verb'J', and \verb'X', each of length \verb'nvals'.  The
matrix \verb'C' must have already been initialized with \verb'GrB_Matrix_new',
and it must have no entries in it before calling \verb'GrB_Matrix_build'.
Thus the dimensions and type of \verb'C' are not changed by this function, but
are inherited from the prior call to \verb'GrB_Matrix_new' or
\verb'GrB_Matrix_dup'.

An error is returned (\verb'GrB_INDEX_OUT_OF_BOUNDS') if any row index in
\verb'I' is greater than or equal to the number of rows of \verb'C', or if any
column index in \verb'J' is greater than or equal to the number of columns of
\verb'C'.

Any duplicate entries with identical indices are assembled using the binary
\verb'dup' operator provided on input.  All three types (\verb'x', \verb'y',
\verb'z' for \verb'z=dup(x,y)') must be identical.  The types of \verb'dup',
\verb'C', and \verb'X' must all be compatible.  See Section~\ref{typecasting}
regarding typecasting and compatibility.  The values in \verb'X' are
typecasted, if needed, into the type of \verb'dup'.  Duplicates are then
assembled into a matrix \verb'T' of the same type as \verb'dup', using
\verb'T(i,j) = dup (T (i,j), X (k))'.  After \verb'T' is constructed, it is
typecasted into the result \verb'C'.  That is, typecasting does not occur at
the same time as the assembly of duplicates.

\begin{alert}
{\bf SPEC:} As an extension to the spec, results are defined even if
\verb'dup' is non-associative.
\end{alert}

The GraphBLAS API requires \verb'dup' to be associative so that entries can be
assembled in any order, and states that the result is undefined if \verb'dup'
is not associative.  However, SuiteSparse:GraphBLAS guarantees a well-defined
order of assembly.  Entries in the tuples \verb'[I,J,X]' are first sorted in
increasing order of row and column index, with ties broken by the position of
the tuple in the \verb'[I,J,X]' list.  If duplicates appear, they are
assembled in the order they appear in the \verb'[I,J,X]' input.
That is, if the same indices \verb'i' and \verb'j' appear in positions
\verb'k1', \verb'k2', \verb'k3', and \verb'k4' in \verb'[I,J,X]', where
\verb'k1 < k2 < k3 < k4', then the following operations will occur in order:

{\footnotesize
\begin{verbatim}
    T (i,j) = X (k1) ;
    T (i,j) = dup (T (i,j), X (k2)) ;
    T (i,j) = dup (T (i,j), X (k3)) ;
    T (i,j) = dup (T (i,j), X (k4)) ;
\end{verbatim}}

This is a well-defined order, but the user should not depend upon it when
using other GraphBLAS implementations, since the GraphBLAS API does not
require this ordering.  However, SuiteSparse:GraphBLAS guarantees this
ordering, even when it computes the result in parallel.  With this
well-defined order, several operators become very useful.  In particular, the
\verb'SECOND' operator results in the last tuple overwriting the earlier ones.
The \verb'FIRST' operator means the value of the first tuple is used and the
others are discarded.

The abbreviation \verb'dup' is used here for the name of the binary function
used to assemble duplicates, but this should not be confused with the
\verb'_dup' suffix in the name of the function \verb'GrB_Matrix_dup'.  The
latter function does not apply any operator at all, nor any typecasting, but
simply makes a pure deep copy of a matrix.

The parameter \verb'X' is a pointer to any C equivalent built-in type, or a
\verb'void *' pointer.  The \verb'GrB_Matrix_build' function uses the
\verb'_Generic' feature of ANSI C11 to detect the type of pointer passed as
the parameter \verb'X'.  If \verb'X' is a pointer to a built-in type, then the
function can do the right typecasting.  If \verb'X' is a \verb'void *'
pointer, then it can only assume \verb'X' to be a pointer to a user-defined
type that is the same user-defined type of \verb'C' and \verb'dup'.  This
function has no way of checking that the \verb'void * X' pointer points to an
array of the correct user-defined type, so the behavior is undefined if this
condition does not hold.

The \verb'GrB_Matrix_build' method is analogous to \verb'C = sparse (I,J,X)'
in MATLAB, with several important extensions that go beyond that which MATLAB
can do.  In particular, the MATLAB \verb'sparse' function only provides one
option for assembling duplicates (summation), and it can only build double,
double complex, and logical sparse matrices.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_setElement:} add an entry to a matrix}
%-------------------------------------------------------------------------------
\label{matrix_setElement}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_setElement      // C (i,j) = x
(
    GrB_Matrix C,                   // matrix to modify
    <type> x,                       // scalar to assign to C(i,j)
    GrB_Index i,                    // row index
    GrB_Index j                     // column index
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_setElement' sets a single entry in a matrix,
\verb'C(i,j)=x'.  If the entry is already present in the pattern of \verb'C',
it is overwritten with the new value.  If the entry is not present, it is
added to \verb'C'.  In either case, no entry is ever deleted by this function.
Passing in a value of \verb'x=0' simply creates an explicit entry at position
\verb'(i,j)' whose value is zero, even if the implicit value is assumed to be
zero.

An error is returned (\verb'GrB_INVALID_INDEX') if the row index \verb'i' is
greater than or equal to the number of rows of \verb'C', or if the column
index \verb'j' is greater than or equal to the number of columns of \verb'C'.
Note that this error code differs from the same kind of condition in
\verb'GrB_Matrix_build', which returns \verb'GrB_INDEX_OUT_OF_BOUNDS'.  This
is because \verb'GrB_INVALID_INDEX' is an API error, and is caught immediately
even in non-blocking mode, whereas \verb'GrB_INDEX_OUT_OF_BOUNDS' is an
execution error whose detection may wait until the computation completes
sometime later.

The scalar \verb'x' is typecasted into the type of \verb'C'.  Any value can be
passed to this function and its type will be detected, via the \verb'_Generic'
feature of ANSI C11.  For a user-defined type, \verb'x' is a \verb'void *'
pointer that points to a memory space holding a single entry of this
user-defined type.  This user-defined type must exactly match the user-defined
type of \verb'C' since no typecasting is done between user-defined types.

\paragraph{\bf Performance considerations:} % BLOCKING: setElement, *assign
SuiteSparse:GraphBLAS exploits the non-blocking mode to greatly improve the
performance of this method.  Refer to the example shown in
Section~\ref{overview}.  If the entry exists in the pattern already, it is
updated right away and the work is not left pending.  Otherwise, it is placed
in a list of pending updates, and later on the updates are done all at once,
using the same algorithm used for \verb'GrB_Matrix_build'.  In other words,
\verb'setElement' in SuiteSparse:GraphBLAS builds its own internal list of
tuples \verb'[I,J,X]', and then calls \verb'GrB_Matrix_build' whenever the
matrix is needed in another computation, or whenever \verb'GrB_Matrix_wait' is
called.

As a result, if calls to \verb'setElement' are mixed with calls to most other
methods and operations (even \verb'extractElement'), then the pending updates
are assembled right away, which will be slow.  Performance will be good if
many \verb'setElement' updates are left pending, and performance will be poor
if the updates are assembled frequently.

A few methods and operations can be intermixed with \verb'setElement', in
particular, some forms of the \verb'GrB_assign' and \verb'GxB_subassign'
operations are compatible with the pending updates from \verb'setElement'.
Section~\ref{compare_assign} gives more details on which \verb'GxB_subassign'
and \verb'GrB_assign' operations can be interleaved with calls to
\verb'setElement' without forcing updates to be assembled.  Other methods that
do not access the existing entries may also be done without forcing the
updates to be assembled, namely \verb'GrB_Matrix_clear' (which erases all
pending updates), \verb'GrB_Matrix_free', \verb'GrB_Matrix_ncols',
\verb'GrB_Matrix_nrows', \verb'GxB_Matrix_type', and of course
\verb'GrB_Matrix_setElement' itself.  All other methods and operations cause
the updates to be assembled.  Future versions of SuiteSparse:GraphBLAS may
extend this list.

See Section~\ref{random} for an example of how to use
\verb'GrB_Matrix_setElement'.  If an error occurs, \verb'GrB_error(&err,C)'
returns details about the error.
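A minimal sketch of the usage pattern described above is shown below (the
dimension \verb'n' and the index expressions are arbitrary choices made for
illustration only): many \verb'setElement' updates are left pending and are
then assembled all at once.

{\footnotesize
\begin{verbatim}
    GrB_Matrix A = NULL ;
    GrB_Matrix_new (&A, GrB_FP64, n, n) ;
    for (GrB_Index k = 0 ; k < 1000 ; k++)
    {
        // each update below is held as a pending tuple ...
        GrB_Matrix_setElement (A, (double) k, k % n, (k * 7) % n) ;
    }
    // ... and all pending tuples are assembled here, at once, using the
    // same algorithm as GrB_Matrix_build
    GrB_Matrix_wait (&A) ;
\end{verbatim}}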
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_extractElement:} get an entry from a matrix}
%-------------------------------------------------------------------------------
\label{matrix_extractElement}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_extractElement  // x = A(i,j)
(
    <type> *x,                      // extracted scalar
    const GrB_Matrix A,             // matrix to extract a scalar from
    GrB_Index i,                    // row index
    GrB_Index j                     // column index
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_extractElement' extracts a single entry from a matrix
\verb'x=A(i,j)'.  An error is returned (\verb'GrB_INVALID_INDEX') if the row
index \verb'i' is greater than or equal to the number of rows of \verb'A', or
if the column index \verb'j' is greater than or equal to the number of columns
of \verb'A'.

{\bf NOTE: } if no entry is present at \verb'A(i,j)', then \verb'x' is not
modified, and the return value of \verb'GrB_Matrix_extractElement' is
\verb'GrB_NO_VALUE'.  If the entry is not present then GraphBLAS does not know
its value, since its value depends on the implicit value, which is the
identity value of the additive monoid of the semiring.  It is not a
characteristic of the matrix itself, but of the semiring it is used in.  A
matrix can be used in any compatible semiring, and even a mixture of
semirings, so the implicit value can change as the semiring changes.  As a
result, if the entry is present, \verb'x=A(i,j)' is performed and the scalar
\verb'x' is returned with this value.  The method returns \verb'GrB_SUCCESS'.
If the entry is not present, \verb'x' is not modified, and \verb'GrB_NO_VALUE'
is returned to the caller.  What this means is up to the caller.

The function knows the type of the pointer \verb'x', so it can do typecasting
as needed, from the type of \verb'A' into the type of \verb'x'.  User-defined
types cannot be typecasted, so if \verb'A' has a user-defined type then
\verb'x' must be a \verb'void *' pointer that points to a memory space the
same size as a single scalar of the type of \verb'A'.

Currently, this method causes all pending updates from \verb'GrB_setElement',
\verb'GrB_assign', or \verb'GxB_subassign' to be assembled, so its use can
have performance implications.  Calls to this function should not be
arbitrarily intermixed with calls to these other functions.  Everything will
work correctly and results will be predictable, but it will be slow.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_removeElement:} remove an entry from a matrix}
%-------------------------------------------------------------------------------
\label{matrix_removeElement}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_removeElement
(
    GrB_Matrix C,                   // matrix to remove an entry from
    GrB_Index i,                    // row index
    GrB_Index j                     // column index
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_removeElement' removes a single entry \verb'C(i,j)' from a
matrix.  If no entry is present at \verb'C(i,j)', then the matrix is not
modified.  If an error occurs, \verb'GrB_error(&err,C)' returns details about
the error.
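The following sketch (illustrative only; \verb'A', \verb'i', and \verb'j' are
assumed to exist) shows the typical way to distinguish a present entry from a
missing one when calling \verb'GrB_Matrix_extractElement', as described above:

{\footnotesize
\begin{verbatim}
    double aij = 0 ;
    GrB_Info info = GrB_Matrix_extractElement (&aij, A, i, j) ;
    if (info == GrB_SUCCESS)
    {
        // A(i,j) is present; its value, typecast to double, is in aij
    }
    else if (info == GrB_NO_VALUE)
    {
        // A(i,j) is not present; aij is unchanged, and what this means is
        // up to the application (often it is treated as the implicit value
        // of whatever semiring is in use)
    }
\end{verbatim}}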
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_extractTuples:} get all entries from a matrix}
%-------------------------------------------------------------------------------
\label{matrix_extractTuples}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_extractTuples   // [I,J,X] = find (A)
(
    GrB_Index *I,                   // array for returning row indices of tuples
    GrB_Index *J,                   // array for returning col indices of tuples
    <type> *X,                      // array for returning values of tuples
    GrB_Index *nvals,               // I,J,X size on input; # tuples on output
    const GrB_Matrix A              // matrix to extract tuples from
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_extractTuples' extracts all the entries from the matrix
\verb'A', returning them as a list of tuples, analogous to
\verb'[I,J,X]=find(A)' in MATLAB.  Entries in the tuples \verb'[I,J,X]' are
unique.  No pair of row and column indices \verb'(i,j)' appears more than
once.

The GraphBLAS API states the tuples can be returned in any order.  If
\verb'GrB_wait(&A)' is called first, then SuiteSparse:GraphBLAS chooses to
always return them in sorted order (sorted by row or by column, depending on
whether the matrix is stored by row or by column).  Otherwise, the indices can
be returned in any order.

The number of tuples in the matrix \verb'A' is given by
\verb'GrB_Matrix_nvals(&anvals,A)'.  If \verb'anvals' is larger than the size
of the arrays (\verb'nvals' in the parameter list), an error
\verb'GrB_INSUFFICIENT_SPACE' is returned, and no tuples are extracted.  If
\verb'nvals' is larger than \verb'anvals', then only the first \verb'anvals'
entries in the arrays \verb'I', \verb'J', and \verb'X' are modified,
containing all the tuples of \verb'A', and the rest of \verb'I', \verb'J', and
\verb'X' are left unchanged.  On output, \verb'nvals' contains the number of
tuples extracted.

\begin{alert}
{\bf SPEC:} As an extension to the spec, the arrays \verb'I', \verb'J', and/or
\verb'X' may be passed in as \verb'NULL' pointers.  In this case,
\verb'GrB_Matrix_extractTuples' does not return a component specified as
\verb'NULL'.  This is not an error condition.
\end{alert}

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_resize:} resize a matrix}
%-------------------------------------------------------------------------------
\label{matrix_resize}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Matrix_resize          // change the size of a matrix
(
    GrB_Matrix A,                   // matrix to modify
    const GrB_Index nrows_new,      // new number of rows in matrix
    const GrB_Index ncols_new       // new number of columns in matrix
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_resize' changes the size of a matrix.  If the dimensions
decrease, entries that fall outside the resized matrix are deleted.
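The following is a minimal sketch of the calling sequence for
\verb'GrB_Matrix_extractTuples' described earlier in this section, for a
matrix \verb'A' assumed to have type \verb'GrB_FP64'; error checking is
omitted.

{\footnotesize
\begin{verbatim}
    GrB_Index nvals ;
    GrB_Matrix_nvals (&nvals, A) ;          // number of tuples to expect
    GrB_Index *I = malloc (nvals * sizeof (GrB_Index)) ;
    GrB_Index *J = malloc (nvals * sizeof (GrB_Index)) ;
    double    *X = malloc (nvals * sizeof (double)) ;
    GrB_Index nvals_out = nvals ;           // size of I, J, X on input
    GrB_Matrix_extractTuples (I, J, X, &nvals_out, A) ;
    // nvals_out now holds the number of tuples actually extracted
    free (I) ; free (J) ; free (X) ;
\end{verbatim}}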
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Matrix\_concat:} concatenate matrices }
%-------------------------------------------------------------------------------
\label{matrix_concat}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_concat          // concatenate a 2D array of matrices
(
    GrB_Matrix C,                   // input/output matrix for results
    const GrB_Matrix *Tiles,        // 2D row-major array of size m-by-n
    const GrB_Index m,
    const GrB_Index n,
    const GrB_Descriptor desc       // unused, except threading control
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Matrix_concat' concatenates an array of matrices (\verb'Tiles')
into a single \verb'GrB_Matrix' \verb'C'.  \verb'Tiles' is an
\verb'm'-by-\verb'n' dense array of matrices held in row-major format, where
\verb'Tiles [i*n+j]' is the $(i,j)$th tile, and where \verb'm' $> 0$ and
\verb'n' $> 0$ must hold.  Let $A_{i,j}$ denote the $(i,j)$th tile.  The
matrix \verb'C' is constructed by concatenating these tiles together, as:

\[
C = \left[ \begin{array}{ccccc}
A_{0,0}   & A_{0,1}   & A_{0,2}   & \cdots & A_{0,n-1} \\
A_{1,0}   & A_{1,1}   & A_{1,2}   & \cdots & A_{1,n-1} \\
\cdots & \\
A_{m-1,0} & A_{m-1,1} & A_{m-1,2} & \cdots & A_{m-1,n-1}
\end{array} \right]
\]

On input, the matrix \verb'C' must already exist.  Any existing entries in
\verb'C' are discarded.  \verb'C' must have dimensions \verb'nrows' by
\verb'ncols' where \verb'nrows' is the sum of the number of rows in the
matrices $A_{i,0}$ for all $i$, and \verb'ncols' is the sum of the number of
columns in the matrices $A_{0,j}$ for all $j$.  All matrices in any given tile
row $i$ must have the same number of rows, and all matrices in any given tile
column $j$ must have the same number of columns.

The type of \verb'C' is unchanged, and all matrices $A_{i,j}$ are typecasted
into the type of \verb'C'.  Any settings made to \verb'C' by
\verb'GxB_Matrix_Option_set' (format by row or by column, bitmap switch, hyper
switch, and sparsity control) are unchanged.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Matrix\_split:} split a matrix }
%-------------------------------------------------------------------------------
\label{matrix_split}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_split           // split a matrix into 2D array of matrices
(
    GrB_Matrix *Tiles,              // 2D row-major array of size m-by-n
    const GrB_Index m,
    const GrB_Index n,
    const GrB_Index *Tile_nrows,    // array of size m
    const GrB_Index *Tile_ncols,    // array of size n
    const GrB_Matrix A,             // input matrix to split
    const GrB_Descriptor desc       // unused, except threading control
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Matrix_split' does the opposite of \verb'GxB_Matrix_concat'.  It
splits a single input matrix \verb'A' into a 2D array of tiles.  On input, the
\verb'Tiles' array must be a non-\verb'NULL' pointer to a previously allocated
array of size at least \verb'm*n' where both \verb'm' and \verb'n' must be
greater than zero.  The \verb'Tile_nrows' array has size \verb'm', and
\verb'Tile_ncols' has size \verb'n'.  The $(i,j)$th tile has dimension
\verb'Tile_nrows[i]'-by-\verb'Tile_ncols[j]'.  The sum of
\verb'Tile_nrows [0:m-1]' must equal the number of rows of \verb'A', and the
sum of \verb'Tile_ncols [0:n-1]' must equal the number of columns of \verb'A'.
The type of each tile is the same as the type of \verb'A'; no typecasting is
done.
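The following sketch illustrates both methods for a 2-by-2 tiling.  The tiles
\verb'A00', \verb'A01', \verb'A10', and \verb'A11' and the dimensions
\verb'm0', \verb'm1', \verb'n0', and \verb'n1' are hypothetical, and all tiles
are assumed to have type \verb'GrB_FP64' and conforming dimensions.

{\footnotesize
\begin{verbatim}
    // Tiles is a 2-by-2 array in row-major order:  [ A00 A01 ; A10 A11 ]
    GrB_Matrix Tiles [4] = { A00, A01, A10, A11 } ;
    GrB_Matrix C = NULL ;
    // C must be created with the right dimensions before the concatenation
    GrB_Matrix_new (&C, GrB_FP64, m0 + m1, n0 + n1) ;
    GxB_Matrix_concat (C, Tiles, 2, 2, NULL) ;

    // the opposite operation: split C back into a 2-by-2 array of tiles
    GrB_Matrix Tiles2 [4] ;
    GrB_Index Tile_nrows [2] = { m0, m1 } ;
    GrB_Index Tile_ncols [2] = { n0, n1 } ;
    GxB_Matrix_split (Tiles2, 2, 2, Tile_nrows, Tile_ncols, C, NULL) ;
\end{verbatim}}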
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_diag:} construct a diagonal matrix} %------------------------------------------------------------------------------- \label{matrix_diag} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_diag // construct a diagonal matrix from a vector ( GrB_Matrix C, // output matrix const GrB_Vector v, // input vector int64_t k, const GrB_Descriptor desc // unused, except threading control ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_diag' constructs a matrix from a vector. Let $n$ be the length of the \verb'v' vector, from \verb'GrB_Vector_size (&n, v)'. If \verb'k' = 0, then \verb'C' is an $n$-by-$n$ diagonal matrix with the entries from \verb'v' along the main diagonal of \verb'C', with \verb'C(i,i)=v(i)'. If \verb'k' is nonzero, \verb'C' is square with dimension $n+|k|$. If \verb'k' is positive, it denotes diagonals above the main diagonal, with \verb'C(i,i+k)=v(i)'. If \verb'k' is negative, it denotes diagonals below the main diagonal of \verb'C', with \verb'C(i-k,i)=v(i)'. This behavior is identical to the MATLAB statement \verb'C=diag(v,k)', where \verb'v' is a vector, except that \verb'GxB_Matrix_diag' can also do typecasting. \verb'C' must already exist on input, of the correct size. Any existing entries in \verb'C' are discarded. The type of \verb'C' is preserved, so that if the type of \verb'C' and \verb'v' differ, the entries are typecasted into the type of \verb'C'. Any settings made to \verb'C' by \verb'GxB_Matrix_Option_set' (format by row or by column, bitmap switch, hyper switch, and sparsity control) are unchanged. %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Matrix\_free:} free a matrix} %------------------------------------------------------------------------------- \label{matrix_free} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_free // free a matrix ( GrB_Matrix *A // handle of matrix to free ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_free' frees a matrix. Either usage: {\small \begin{verbatim} GrB_Matrix_free (&A) ; GrB_free (&A) ; \end{verbatim}} \noindent frees the matrix \verb'A' and sets \verb'A' to \verb'NULL'. It safely does nothing if passed a \verb'NULL' handle, or if \verb'A == NULL' on input. Any pending updates to the matrix are abandoned. \newpage %=============================================================================== \subsection{GraphBLAS matrix and vector import/export} %======================== %=============================================================================== \label{import_export} The import/export functions allow the user application to create a \verb'GrB_Matrix' or \verb'GrB_Vector' object, and to extract its contents, faster and with less memory overhead than the \verb'GrB_*_build' and \verb'GrB_*_extractTuples' functions. The semantics of import/export are the same as the {\em move constructor} in C++. On import, the user provides a set of arrays that have been previously allocated via the ANSI C \verb'malloc', \verb'calloc', or \verb'realloc' functions (by default), or by the corresponding functions passed to \verb'GxB_init'. The arrays define the content of the matrix or vector. 
Unlike \verb'GrB_*_build', the GraphBLAS library then takes ownership of the
user's input arrays and may either:
\begin{enumerate}
\item incorporate them into its internal data structure for the new
\verb'GrB_Matrix' or \verb'GrB_Vector', potentially creating the
\verb'GrB_Matrix' or \verb'GrB_Vector' in constant time with no memory copying
performed, or
\item if the library does not support the import format directly, then it may
convert the input to its internal format, and then free the user's input
arrays.
\item A GraphBLAS implementation may also choose to use a mix of the two
strategies.
\end{enumerate}

SuiteSparse:GraphBLAS takes the first approach, and so the import functions
always take $O(1)$ time, and require $O(1)$ memory space to be allocated.

Regardless of the method chosen, as listed above, the input arrays are no
longer owned by the user application.  If \verb'A' is a \verb'GrB_Matrix'
created by an import, the user input arrays are freed no later than
\verb'GrB_free(&A)', and may be freed earlier, at the discretion of the
GraphBLAS library.  The data structure of the \verb'GrB_Matrix' and
\verb'GrB_Vector' remains opaque.

The export of a \verb'GrB_Matrix' or \verb'GrB_Vector' is symmetric with the
import operation.  The export changes the ownership of the arrays, where the
\verb'GrB_Matrix' or \verb'GrB_Vector' no longer exists when the export
completes, and instead the user is returned several arrays that contain the
matrix or vector in the requested format.  Ownership of these arrays is given
to the user application, which is then responsible for freeing them via the
ANSI C \verb'free' function (by default), or by the \verb'free_function' that
was passed in to \verb'GxB_init'.  Alternatively, these arrays can be
re-imported into a \verb'GrB_Matrix' or \verb'GrB_Vector', at which point they
again become the responsibility of GraphBLAS.

For an export, if the output format matches the current internal format of the
matrix or vector then these arrays are returned to the user application in
$O(1)$ time and with no memory copying performed.  Otherwise, the
\verb'GrB_Matrix' or \verb'GrB_Vector' is first converted into the requested
format, and then exported.

Exporting a matrix or vector forces completion of any pending operations on
the matrix or vector, with one exception.  SuiteSparse:GraphBLAS supports
three kinds of pending operations: {\em zombies} (pending deletions), {\em
pending tuples} (pending insertions), and a {\em lazy sort}.  In the latter,
if the matrix or vector is left in a {\em jumbled} state, indices in any row
or column may appear out of order.  If unjumbled, the indices always appear in
ascending order.

The vector import/export methods use three formats for a \verb'GrB_Vector'.
Eight different formats are provided for the import/export of a
\verb'GrB_Matrix'.  For each format, the numerical value array (\verb'Ax' or
\verb'vx') has a C type corresponding to one of the 13 built-in types in
GraphBLAS (\verb'bool', \verb'int*_t', \verb'uint*_t', \verb'float',
\verb'double', \verb'float complex', \verb'double complex'), or a C type that
corresponds to the user-defined type.  No typecasting is done on import or
export.

\begin{alert}
% {\bf FUTURE:}
For the import methods, the numerical array must be large enough to hold all
the entries, but in a future release of SuiteSparse:GraphBLAS, it may be
specified as an array of length one.
This will indicate that all entries in the matrix or vector being constructed
have the same uniform value, given by \verb'Ax[0]' for matrices and
\verb'vx[0]' for vectors.  Likewise, on export, a future release of
SuiteSparse:GraphBLAS may return arrays of size large enough only to hold a
single entry, even though there may be many more entries than that in the
matrix or vector.  This will be indicated, on both import and export, with the
\verb'is_uniform' boolean flag.

If \verb'is_uniform' is true, then all entries present in the matrix or vector
have the same value, and the \verb'Ax' array (for matrices) or \verb'vx' array
(for vectors) only needs to be large enough to hold a single value.
Currently, uniform-valued matrices are not yet supported in this release of
SuiteSparse:GraphBLAS.  On export, \verb'is_uniform' will always be false.  On
import, an error is returned if \verb'is_uniform' is true on input.  This
feature will be added later, but the \verb'is_uniform' parameter is added now,
so that the API does not need to change once this feature is implemented.
\end{alert}

The export of a \verb'GrB_Vector' in \verb'CSC' format may return the indices
in a jumbled state, in any order.  For a \verb'GrB_Matrix' in \verb'CSR' or
\verb'HyperCSR' format, if the matrix is returned as jumbled, the column
indices in any given row may appear out of order.  For \verb'CSC' or
\verb'HyperCSC' formats, if the matrix is returned as jumbled, the row indices
in any given column may appear out of order.

On import, if the user-provided arrays contain jumbled row or column vectors,
then the input flag \verb'jumbled' must be passed in as \verb'true'.  On
export, if \verb'jumbled' is passed in as \verb'NULL', this indicates to the
export method that the user expects the exported matrix or vector to be
returned in an ordered, unjumbled state.  If \verb'jumbled' is provided
(non-\verb'NULL'), then it is returned as \verb'true' if the indices may
appear out of order, or \verb'false' if they are known to be in ascending
order.  Matrices and vectors in bitmap or full format are never jumbled.

The table below lists the methods presented in this section.
\vspace{0.2in} {\footnotesize \begin{tabular}{lll} \hline method & purpose & Section \\ \hline \verb'GxB_Vector_import_CSC' & import a vector in CSC format & \ref{vector_import_csc} \\ \verb'GxB_Vector_export_CSC' & export a vector in CSC format & \ref{vector_export_csc} \\ \hline \verb'GxB_Vector_import_Bitmap' & import a vector in bitmap format & \ref{vector_import_bitmap} \\ \verb'GxB_Vector_export_Bitmap' & export a vector in bitmap format & \ref{vector_export_bitmap} \\ \hline \verb'GxB_Vector_import_Full' & import a vector in full format & \ref{vector_import_full} \\ \verb'GxB_Vector_export_Full' & export a vector in full format & \ref{vector_export_full} \\ \hline \hline \verb'GxB_Matrix_import_CSR' & import a matrix in CSR form & \ref{matrix_import_csr} \\ \verb'GxB_Matrix_export_CSR' & export a matrix in CSR form & \ref{matrix_export_csr} \\ \hline \verb'GxB_Matrix_import_CSC' & import a matrix in CSC form & \ref{matrix_import_csc} \\ \verb'GxB_Matrix_export_CSC' & export a matrix in CSC form & \ref{matrix_export_csc} \\ \hline \verb'GxB_Matrix_import_HyperCSR' & import a matrix in HyperCSR form & \ref{matrix_import_hypercsr} \\ \verb'GxB_Matrix_export_HyperCSR' & export a matrix in HyperCSR form & \ref{matrix_export_hypercsr} \\ \hline \verb'GxB_Matrix_import_HyperCSC' & import a matrix in HyperCSC form & \ref{matrix_import_hypercsc} \\ \verb'GxB_Matrix_export_HyperCSC' & export a matrix in HyperCSC form & \ref{matrix_export_hypercsc} \\ \hline \verb'GxB_Matrix_import_BitmapR' & import a matrix in BitmapR form & \ref{matrix_import_bitmapr} \\ \verb'GxB_Matrix_export_BitmapR' & export a matrix in BitmapR form & \ref{matrix_export_bitmapr} \\ \hline \verb'GxB_Matrix_import_BitmapC' & import a matrix in BitmapC form & \ref{matrix_import_bitmapc} \\ \verb'GxB_Matrix_export_BitmapC' & export a matrix in BitmapC form & \ref{matrix_export_bitmapc} \\ \hline \verb'GxB_Matrix_import_FullR' & import a matrix in FullR form & \ref{matrix_import_fullr} \\ \verb'GxB_Matrix_export_FullR' & export a matrix in FullR form & \ref{matrix_export_fullr} \\ \hline \verb'GxB_Matrix_import_FullC' & import a matrix in FullC form & \ref{matrix_import_fullc} \\ \verb'GxB_Matrix_export_FullC' & export a matrix in FullC form & \ref{matrix_export_fullc} \\ \hline \end{tabular} } \vspace{0.2in} \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Vector\_import\_CSC} import a vector in CSC form} %------------------------------------------------------------------------------- \label{vector_import_csc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Vector_import_CSC // import a vector in CSC format ( GrB_Vector *v, // handle of vector to create GrB_Type type, // type of vector to create GrB_Index n, // vector length GrB_Index **vi, // indices, vi_size >= nvals(v)*sizeof(int64_t) void **vx, // values, vx_size >= nvals(v)*(type size) GrB_Index vi_size, // size of vi in bytes GrB_Index vx_size, // size of vx in bytes bool is_uniform, // if true, v has uniform values (not yet supported) GrB_Index nvals, // # of entries in vector bool jumbled, // if true, indices may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \noindent \verb'GxB_Vector_import_CSC' is analogous to \verb'GxB_Matrix_import_CSC'. Refer to the description of \verb'GxB_Matrix_import_CSC' for details (Section~\ref{matrix_import_csc}). If successful, \verb'v' is created as a \verb'n'-by-1 \verb'GrB_Vector'. 
Its entries are the row indices given by \verb'vi', with the corresponding
values in \verb'vx'.  The two pointers \verb'vi' and \verb'vx' are returned as
\verb'NULL', which denotes that they are no longer owned by the user
application.  They have instead been moved into the new \verb'GrB_Vector'
\verb'v'.  If \verb'jumbled' is false, the row indices in \verb'vi' must
appear in sorted order.  If \verb'jumbled' is true, they may appear in any
order.  No duplicates can appear.  These conditions are not checked, so
results are undefined if they are not met exactly.  The user application can
check the resulting vector \verb'v' with \verb'GxB_print', if desired, which
will determine if these conditions hold.

If not successful, \verb'v' is returned as \verb'NULL' and \verb'vi' and
\verb'vx' are not modified.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Vector\_export\_CSC:} export a vector in CSC form}
%-------------------------------------------------------------------------------
\label{vector_export_csc}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Vector_export_CSC  // export and free a CSC vector
(
    GrB_Vector *v,          // handle of vector to export and free
    GrB_Type *type,         // type of vector exported
    GrB_Index *n,           // length of the vector
    GrB_Index **vi,         // indices, vi_size >= nvals(v)*sizeof(int64_t)
    void **vx,              // values, vx_size >= nvals(v)*(type size)
    GrB_Index *vi_size,     // size of vi in bytes
    GrB_Index *vx_size,     // size of vx in bytes
    bool *is_uniform,       // if true, v has uniform values (not yet supported)
    GrB_Index *nvals,       // # of entries in vector
    bool *jumbled,          // if true, indices may be unsorted
    const GrB_Descriptor desc
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Vector_export_CSC' is analogous to \verb'GxB_Matrix_export_CSC'.
Refer to the description of \verb'GxB_Matrix_export_CSC' for details
(Section~\ref{matrix_export_csc}).

Exporting a vector forces completion of any pending operations on the vector,
except that indices may be exported out of order (\verb'jumbled' is
\verb'true' if they may be out of order, \verb'false' if sorted in ascending
order).  If \verb'jumbled' is \verb'NULL' on input, then the indices are
always returned in sorted order.

If successful, \verb'v' is returned as \verb'NULL', and its contents are
returned to the user, with its \verb'type', dimension \verb'n', and number of
entries \verb'nvals'.  A list of row indices of entries that were in \verb'v'
is returned in \verb'vi', and the corresponding numerical values are returned
in \verb'vx'.  If \verb'nvals' is zero, the \verb'vi' and \verb'vx' arrays are
returned as \verb'NULL'; this is not an error condition.

If not successful, \verb'v' is unmodified and \verb'vi' and \verb'vx' are not
modified.
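A sketch of a round trip with these two methods is shown below, assuming
\verb'v' is an existing \verb'GrB_Vector' and with error checking omitted.
The export hands ownership of the arrays to the user application; the
subsequent import hands it back.

{\footnotesize
\begin{verbatim}
    GrB_Type type ;
    GrB_Index n, nvals, vi_size, vx_size ;
    GrB_Index *vi = NULL ;
    void      *vx = NULL ;
    bool is_uniform, jumbled ;

    // export: v is freed and set to NULL; the user now owns vi and vx
    GxB_Vector_export_CSC (&v, &type, &n, &vi, &vx,
        &vi_size, &vx_size, &is_uniform, &nvals, &jumbled, NULL) ;

    // ... vi and vx can be inspected or modified here ...

    // re-import: ownership of vi and vx moves back into the new vector v
    GxB_Vector_import_CSC (&v, type, n, &vi, &vx,
        vi_size, vx_size, is_uniform, nvals, jumbled, NULL) ;
\end{verbatim}}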
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Vector\_import\_Bitmap} import a vector in bitmap form} %------------------------------------------------------------------------------- \label{vector_import_bitmap} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Vector_import_Bitmap // import a bitmap vector ( GrB_Vector *v, // handle of vector to create GrB_Type type, // type of vector to create GrB_Index n, // vector length int8_t **vb, // bitmap, vb_size >= n void **vx, // values, vx_size >= n * (type size) GrB_Index vb_size, // size of vb in bytes GrB_Index vx_size, // size of vx in bytes bool is_uniform, // if true, v has uniform values (not yet supported) GrB_Index nvals, // # of entries in bitmap const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \noindent \verb'GxB_Vector_import_Bitmap' is analogous to \verb'GxB_Matrix_import_BitmapC'. Refer to the description of \verb'GxB_Matrix_import_BitmapC' for details (Section~\ref{matrix_import_bitmapc}). If successful, \verb'v' is created as a \verb'n'-by-1 \verb'GrB_Vector'. Its entries are determined by \verb'vb', where \verb'vb[i]=1' denotes that the entry $v(i)$ is present with value given by \verb'vx[i]', and \verb'vb[i]=0' denotes that the entry $v(i)$ is not present (\verb'vx[i]' is ignored in this case). The two pointers \verb'vb' and \verb'vx' are returned as \verb'NULL', which denotes that they are no longer owned by the user application. They have instead been moved into the new \verb'GrB_Vector' \verb'v'. The \verb'vb' array must not hold any values other than 0 and 1. The value \verb'nvals' must exactly match the number of 1s in the \verb'vb' array. These conditions are not checked, so results are undefined if they are not met exactly. The user application can check the resulting vector \verb'v' with \verb'GxB_print', if desired, which will determine if these conditions hold. If not successful, \verb'v' is returned as \verb'NULL' and \verb'vb' and \verb'vx' are not modified. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Vector\_export\_Bitmap:} export a vector in bitmap form} %------------------------------------------------------------------------------- \label{vector_export_bitmap} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Vector_export_Bitmap // export and free a bitmap vector ( GrB_Vector *v, // handle of vector to export and free GrB_Type *type, // type of vector exported GrB_Index *n, // length of the vector int8_t **vb, // bitmap, vb_size >= n void **vx, // values, vx_size >= n * (type size) GrB_Index *vb_size, // size of vb in bytes GrB_Index *vx_size, // size of vx in bytes bool *is_uniform, // if true, v has uniform values (not yet supported) GrB_Index *nvals, // # of entries in bitmap const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Vector_export_Bitmap' is analogous to \verb'GxB_Matrix_export_BitmapC'. Refer to the description of \verb'GxB_Matrix_export_BitmapC' for details (Section~\ref{matrix_export_bitmapc}). Exporting a vector forces completion of any pending operations on the vector. If successful, \verb'v' is returned as \verb'NULL', and its contents are returned to the user, with its \verb'type', dimension \verb'n', and number of entries \verb'nvals'. 
The entries that were in \verb'v' are returned in \verb'vb', where
\verb'vb[i]=1' means $v(i)$ is present with value \verb'vx[i]', and
\verb'vb[i]=0' means $v(i)$ is not present (\verb'vx[i]' is undefined in this
case).  The corresponding numerical values are returned in \verb'vx'.

If not successful, \verb'v' is unmodified and \verb'vb' and \verb'vx' are not
modified.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Vector\_import\_Full:} import a vector in full form}
%-------------------------------------------------------------------------------
\label{vector_import_full}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Vector_import_Full // import a full vector
(
    GrB_Vector *v,          // handle of vector to create
    GrB_Type type,          // type of vector to create
    GrB_Index n,            // vector length
    void **vx,              // values, vx_size >= nvals(v) * (type size)
    GrB_Index vx_size,      // size of vx in bytes
    bool is_uniform,        // if true, v has uniform values (not yet supported)
    const GrB_Descriptor desc
) ;
\end{verbatim} } \end{mdframed}

\noindent
\verb'GxB_Vector_import_Full' is analogous to \verb'GxB_Matrix_import_FullC'.
Refer to the description of \verb'GxB_Matrix_import_FullC' for details
(Section~\ref{matrix_import_fullc}).

If successful, \verb'v' is created as a \verb'n'-by-1 \verb'GrB_Vector'.  All
entries are present, and the value of $v(i)$ is given by \verb'vx[i]'.  The
pointer \verb'vx' is returned as \verb'NULL', which denotes that it is no
longer owned by the user application.  It has instead been moved into the new
\verb'GrB_Vector' \verb'v'.

If not successful, \verb'v' is returned as \verb'NULL' and \verb'vx' is not
modified.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Vector\_export\_Full:} export a vector in full form}
%-------------------------------------------------------------------------------
\label{vector_export_full}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Vector_export_Full // export and free a full vector
(
    GrB_Vector *v,          // handle of vector to export and free
    GrB_Type *type,         // type of vector exported
    GrB_Index *n,           // length of the vector
    void **vx,              // values, vx_size >= nvals(v) * (type size)
    GrB_Index *vx_size,     // size of vx in bytes
    bool *is_uniform,       // if true, v has uniform values (not yet supported)
    const GrB_Descriptor desc
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Vector_export_Full' is analogous to \verb'GxB_Matrix_export_FullC'.
Refer to the description of \verb'GxB_Matrix_export_FullC' for details
(Section~\ref{matrix_export_fullc}).

Exporting a vector forces completion of any pending operations on the vector.
All entries in \verb'v' must be present.  In other words, prior to the export,
\verb'GrB_Vector_nvals' for a vector of length \verb'n' must report that the
vector contains \verb'n' entries; \verb'GrB_INVALID_VALUE' is returned if this
condition does not hold.

If successful, \verb'v' is returned as \verb'NULL', and its contents are
returned to the user, with its \verb'type' and dimension \verb'n'.  The
entries that were in \verb'v' are returned in the array \verb'vx', where the
value of $v(i)$ is \verb'vx[i]'.

If not successful, \verb'v' is unmodified and \verb'vx' is not modified.
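The following sketch imports a dense array of four values as a full
\verb'GrB_FP64' vector, assuming \verb'GrB_init' was used so that the default
ANSI C \verb'malloc' and \verb'free' are in effect; error checking is omitted.

{\footnotesize
\begin{verbatim}
    GrB_Index n = 4 ;
    double *vx = malloc (n * sizeof (double)) ;
    for (GrB_Index i = 0 ; i < n ; i++) vx [i] = (double) i ;

    // import: the array vx is moved into v and is now owned by GraphBLAS
    GrB_Vector v = NULL ;
    GxB_Vector_import_Full (&v, GrB_FP64, n, (void **) &vx,
        n * sizeof (double), false, NULL) ;
    // vx is now NULL; the array is freed no later than GrB_free (&v)
\end{verbatim}}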
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Matrix\_import\_CSR:} import a CSR matrix}
%-------------------------------------------------------------------------------
\label{matrix_import_csr}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_import_CSR      // import a CSR matrix
(
    GrB_Matrix *A,          // handle of matrix to create
    GrB_Type type,          // type of matrix to create
    GrB_Index nrows,        // number of rows of the matrix
    GrB_Index ncols,        // number of columns of the matrix
    GrB_Index **Ap,         // row "pointers", Ap_size >= (nrows+1)*sizeof(int64_t)
    GrB_Index **Aj,         // column indices, Aj_size >= nvals(A)*sizeof(int64_t)
    void **Ax,              // values, Ax_size >= nvals(A) * (type size)
    GrB_Index Ap_size,      // size of Ap in bytes
    GrB_Index Aj_size,      // size of Aj in bytes
    GrB_Index Ax_size,      // size of Ax in bytes
    bool is_uniform,        // if true, A has uniform values (not yet supported)
    bool jumbled,           // if true, indices in each row may be unsorted
    const GrB_Descriptor desc
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Matrix_import_CSR' imports a matrix from 3 user arrays in CSR
format.  In the resulting \verb'GrB_Matrix A', the \verb'CSR' format is a
sparse matrix with a format (\verb'GxB_FORMAT') of \verb'GxB_BY_ROW'.

The first four arguments of \verb'GxB_Matrix_import_CSR' are the same as all
four arguments of \verb'GrB_Matrix_new', because this function is similar.  It
creates a new \verb'GrB_Matrix A', with the given type and dimensions.  The
\verb'GrB_Matrix A' does not exist on input.  Unlike \verb'GrB_Matrix_new',
this function also populates the new matrix \verb'A' with the three arrays
\verb'Ap', \verb'Aj' and \verb'Ax', provided by the user, all of which must
have been created with the ANSI C \verb'malloc', \verb'calloc', or
\verb'realloc' functions (by default), or by the corresponding
\verb'malloc_function', \verb'calloc_function', or \verb'realloc_function'
provided to \verb'GxB_init'.  These arrays define the pattern and values of
the new matrix \verb'A':

\begin{itemize}
\item \verb'GrB_Index Ap [nrows+1] ;'  The \verb'Ap' array is the row
``pointer'' array.  It does not actually contain pointers.  More precisely, it
is an integer array that defines where the column indices and values appear in
\verb'Aj' and \verb'Ax', for each row.  The number of entries in row \verb'i'
is given by the expression \verb'Ap [i+1] - Ap [i]'.

\item \verb'GrB_Index Aj [nvals] ;'  The \verb'Aj' array defines the column
indices of entries in each row.

\item \verb'ctype Ax [nvals] ;'  The \verb'Ax' array defines the values of
entries in each row.  It is passed in as a \verb'(void *)' pointer, but it
must point to an array of size \verb'nvals' values, each of size
\verb'sizeof(ctype)', where \verb'ctype' is the exact type in C that
corresponds to the \verb'GrB_Type type' parameter.  That is, if \verb'type' is
\verb'GrB_INT32', then \verb'ctype' is \verb'int32_t'.  User types may be
used, just the same as built-in types.
\end{itemize}

The content of the three arrays \verb'Ap', \verb'Aj', and \verb'Ax' is very
specific.  This content is not checked, since this function takes only $O(1)$
time.  Results are undefined if the following specification is not followed
exactly.

The column indices of entries in the ith row of the matrix are held in
\verb'Aj [Ap [i] ... Ap [i+1]-1]', and the corresponding values are held in
the same positions in \verb'Ax'.  Column indices must be in the range 0 to
\verb'ncols'-1.
If \verb'jumbled' is \verb'false', column indices must appear in ascending
order within each row.  If \verb'jumbled' is \verb'true', column indices may
appear in any order within each row.  No duplicate column indices may appear
in any row.  \verb'Ap [0]' must equal zero, and \verb'Ap [nrows]' must equal
\verb'nvals'.  The \verb'Ap' array must be of size \verb'nrows'+1 (or larger),
and the \verb'Aj' and \verb'Ax' arrays must have size at least \verb'nvals'.
If \verb'nvals' is zero, then the content of the \verb'Aj' and \verb'Ax'
arrays is not accessed and they may be \verb'NULL' on input (if not
\verb'NULL', they are still freed and returned as \verb'NULL', if the method
is successful).

An example of the CSR format is shown below.  Consider the following matrix
with 10 nonzero entries, and suppose the zeros are not stored.

\begin{equation}
\label{eqn:Aexample}
A = \left[
    \begin{array}{cccc}
    4.5 & 0   & 3.2 & 0   \\
    3.1 & 2.9 & 0   & 0.9 \\
    0   & 1.7 & 3.0 & 0   \\
    3.5 & 0.4 & 0   & 1.0 \\
    \end{array}
    \right]
\end{equation}

The \verb'Ap' array has length 5, since the matrix is 4-by-4.  The first
entry must always be zero, and \verb'Ap [4] = 10' is the number of entries.
The content of the arrays is shown below:

{\footnotesize
\begin{verbatim}
    int64_t Ap [ ] = { 0, 2, 5, 7, 10 } ;
    int64_t Aj [ ] = { 0, 2, 0, 1, 3, 1, 2, 0, 1, 3 } ;
    double  Ax [ ] = { 4.5, 3.2, 3.1, 2.9, 0.9, 1.7, 3.0, 3.5, 0.4, 1.0 } ;
\end{verbatim} }

Spaces have been added to the \verb'Ap' array, just for illustration.  Row
zero is in \verb'Aj [0..1]' (column indices) and \verb'Ax [0..1]' (values),
starting at \verb'Ap [0] = 0' and ending at \verb'Ap [0+1]-1 = 1'.  The list
of column indices of row one is at \verb'Aj [2..4]' and row two is in
\verb'Aj [5..6]'.  The last row (three) appears in \verb'Aj [7..9]', because
\verb'Ap [3] = 7' and \verb'Ap [4]-1 = 10-1 = 9'.  The corresponding numerical
values appear in the same positions in \verb'Ax'.

To iterate over the rows and entries of this matrix, the following code can
be used (assuming it has type \verb'GrB_FP64'):

{\footnotesize
\begin{verbatim}
    int64_t nvals = Ap [nrows] ;
    for (int64_t i = 0 ; i < nrows ; i++)
    {
        // get A(i,:)
        for (int64_t p = Ap [i] ; p < Ap [i+1] ; p++)
        {
            // get A(i,j)
            int64_t j = Aj [p] ;            // column index
            double aij = Ax [is_uniform ? 0 : p] ;      // numerical value
        }
    }
\end{verbatim}}

On successful creation of \verb'A', the three pointers \verb'Ap', \verb'Aj',
and \verb'Ax' are set to \verb'NULL' on output.  This denotes to the user
application that it is no longer responsible for freeing these arrays.
Internally, GraphBLAS has moved these arrays into its internal data structure.
They will eventually be freed no later than when the user does
\verb'GrB_free(&A)', but they may be freed or resized later, if the matrix
changes.  If an export is performed, the freeing of these three arrays again
becomes the responsibility of the user application.

The \verb'GxB_Matrix_import_CSR' function will rarely fail, since it
allocates just $O(1)$ space.  If it does fail, it returns
\verb'GrB_OUT_OF_MEMORY', and it leaves the three user arrays unmodified.
They are still owned by the user application, which is eventually responsible
for freeing them with \verb'free(Ap)', etc.
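To make the calling sequence concrete, the following sketch imports the
example matrix above.  The allocation sizes and variable names are
illustrative only; in a real application the arrays would be filled by the
user's own code, and the return value should be checked.

{\footnotesize
\begin{verbatim}
    // hypothetical sketch: import the 4-by-4 example above in CSR form
    GrB_Index *Ap = malloc (5  * sizeof (GrB_Index)) ;
    GrB_Index *Aj = malloc (10 * sizeof (GrB_Index)) ;
    double    *Ax = malloc (10 * sizeof (double)) ;
    // ... fill Ap, Aj, and Ax with the values listed above ...
    GrB_Matrix A = NULL ;
    GrB_Info info = GxB_Matrix_import_CSR (&A, GrB_FP64, 4, 4,
        &Ap, &Aj, (void **) &Ax,
        5 * sizeof (GrB_Index), 10 * sizeof (GrB_Index), 10 * sizeof (double),
        false,      // is_uniform
        false,      // jumbled: column indices are sorted in each row
        NULL) ;
    // on success, A is a 4-by-4 CSR matrix and Ap, Aj, Ax are returned as NULL
\end{verbatim}}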
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_export\_CSR:} export a CSR matrix} %------------------------------------------------------------------------------- \label{matrix_export_csr} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_export_CSR // export and free a CSR matrix ( GrB_Matrix *A, // handle of matrix to export and free GrB_Type *type, // type of matrix exported GrB_Index *nrows, // number of rows of the matrix GrB_Index *ncols, // number of columns of the matrix GrB_Index **Ap, // row "pointers", Ap_size >= (nrows+1)*sizeof(int64_t) GrB_Index **Aj, // column indices, Aj_size >= nvals(A)*sizeof(int64_t) void **Ax, // values, Ax_size >= nvals(A)*(type size) GrB_Index *Ap_size, // size of Ap in bytes GrB_Index *Aj_size, // size of Aj in bytes GrB_Index *Ax_size, // size of Ax in bytes bool *is_uniform, // if true, A has uniform values (not yet supported) bool *jumbled, // if true, indices in each row may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_export_CSR' exports a matrix in CSR form. If successful, the \verb'GrB_Matrix A' is freed, and \verb'A' is returned as \verb'NULL'. Its type is returned in the \verb'type' parameter, its dimensions in \verb'nrows' and \verb'ncols', and the CSR format is in the three arrays \verb'Ap', \verb'Aj', and \verb'Ax'. If the matrix has no entries, the \verb'Aj' and \verb'Ax' arrays may be returned as \verb'NULL'; this is not an error, and \verb'GxB_Matrix_import_CSR' also allows these two arrays to be \verb'NULL' on input when the matrix has no entries. After a successful export, the user application is responsible for freeing these three arrays via \verb'free' (or the \verb'free' function passed to \verb'GxB_init'). The CSR format is described in Section~\ref{matrix_import_csr}. If \verb'jumbled' is returned as \verb'false', column indices will appear in ascending order within each row. If \verb'jumbled' is returned as \verb'true', column indices may appear in any order within each row. If \verb'jumbled' is passed in as \verb'NULL', then column indices will be returned in ascending order in each row. No duplicate column indices will appear in any row. \verb'Ap [0]' is zero, and \verb'Ap [nrows]' is equal to the number of entries in the matrix (\verb'nvals'). The \verb'Ap' array will be of size \verb'nrows'+1 (or larger), and the \verb'Aj' and \verb'Ax' arrays will have size at least \verb'nvals'. This method takes $O(1)$ time if the matrix is already in CSR format internally. Otherwise, the matrix is converted to CSR format and then exported. 
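The following sketch shows a typical export of a \verb'GrB_FP64' matrix
\verb'A' (assumed to already exist) followed by freeing the returned arrays.
The variable names are illustrative and error checking is omitted; note that
\verb'free(NULL)' is safe if \verb'Aj' and \verb'Ax' are returned as
\verb'NULL' for a matrix with no entries.

{\footnotesize
\begin{verbatim}
    // hypothetical sketch: export a GrB_FP64 matrix A in CSR form
    GrB_Type type ;
    GrB_Index nrows, ncols, Ap_size, Aj_size, Ax_size ;
    GrB_Index *Ap = NULL, *Aj = NULL ;
    double *Ax = NULL ;
    bool is_uniform, jumbled ;
    GrB_Info info = GxB_Matrix_export_CSR (&A, &type, &nrows, &ncols,
        &Ap, &Aj, (void **) &Ax, &Ap_size, &Aj_size, &Ax_size,
        &is_uniform, &jumbled, NULL) ;
    // ... use Ap, Aj, and Ax as described in the previous section ...
    free (Ap) ; free (Aj) ; free (Ax) ;     // now owned by the application
\end{verbatim}}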
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_CSC:} import a CSC matrix} %------------------------------------------------------------------------------- \label{matrix_import_csc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_CSC // import a CSC matrix ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix GrB_Index **Ap, // col "pointers", Ap_size >= (ncols+1)*sizeof(int64_t) GrB_Index **Ai, // row indices, Ai_size >= nvals(A)*sizeof(int64_t) void **Ax, // values, Ax_size >= nvals(A) * (type size) GrB_Index Ap_size, // size of Ap in bytes GrB_Index Ai_size, // size of Ai in bytes GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) bool jumbled, // if true, indices in each column may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_CSC' imports a matrix from 3 user arrays in CSC format. The \verb'GrB_Matrix A' is created in the \verb'CSC' format, which is a \verb'GxB_FORMAT' of \verb'GxB_BY_COL'. The arguments are identical to \verb'GxB_Matrix_import_CSR', except for how the 3 user arrays are interpreted. The column ``pointer'' array has size \verb'ncols+1'. The row indices of the columns are in \verb'Ai', and must appear in ascending order in each column. The corresponding numerical values are held in \verb'Ax'. The row indices of column \verb'j' are held in \verb'Ai [Ap [j]...Ap [j+1]-1]', and the corresponding numerical values are in the same locations in \verb'Ax'. The same matrix from Equation~\ref{eqn:Aexample}in the last section (repeated here): \begin{equation} A = \left[ \begin{array}{cccc} 4.5 & 0 & 3.2 & 0 \\ 3.1 & 2.9 & 0 & 0.9 \\ 0 & 1.7 & 3.0 & 0 \\ 3.5 & 0.4 & 0 & 1.0 \\ \end{array} \right] \end{equation} is held in CSC form as follows: {\footnotesize \begin{verbatim} int64_t Ap [ ] = { 0, 3, 6, 8, 10 } ; int64_t Ai [ ] = { 0, 1, 3, 1, 2, 3, 0, 2, 1, 3 } ; double Ax [ ] = { 4.5, 3.1, 3.5, 2.9, 1.7, 0.4, 3.2, 3.0, 0.9, 1.0 } ; \end{verbatim} } That is, the row indices of column 1 (the second column) are in \verb'Ai [3..5]', and the values in the same place in \verb'Ax', since \verb'Ap [1] = 3' and \verb'Ap [2]-1 = 5'. To iterate over the columns and entries of this matrix, the following code can be used (assuming it has type \verb'GrB_FP64'): {\footnotesize \begin{verbatim} int64_t nvals = Ap [ncols] ; for (int64_t j = 0 ; j < ncols ; j++) { // get A(:,j) for (int64_t p = Ap [j] ; p < Ap [j+1] ; p++) { // get A(i,j) int64_t i = Ai [p] ; // row index double aij = Ax [is_uniform ? 0 : p] ; // numerical value } } \end{verbatim}} The method is identical to \verb'GxB_Matrix_import_CSR'; just the format is transposed. If \verb'Ap [ncols]' is zero, then the content of the \verb'Ai' and \verb'Ax' arrays is not accessed and they may be \verb'NULL' on input (if not \verb'NULL', they are still freed and returned as \verb'NULL', if the method is successful). 
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_export\_CSC:} export a CSC matrix} %------------------------------------------------------------------------------- \label{matrix_export_csc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_export_CSC // export and free a CSC matrix ( GrB_Matrix *A, // handle of matrix to export and free GrB_Type *type, // type of matrix exported GrB_Index *nrows, // number of rows of the matrix GrB_Index *ncols, // number of columns of the matrix GrB_Index **Ap, // column "pointers", Ap_size >= ncols+1 GrB_Index **Ai, // row indices, Ai_size >= nvals(A)*sizeof(int64_t) void **Ax, // values, Ax_size >= nvals(A)*(type size) GrB_Index *Ap_size, // size of Ap in bytes GrB_Index *Ai_size, // size of Ai in bytes GrB_Index *Ax_size, // size of Ax in bytes bool *is_uniform, // if true, A has uniform values (not yet supported) bool *jumbled, // if true, indices in each column may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_export_CSC' exports a matrix in CSC form. If successful, the \verb'GrB_Matrix A' is freed, and \verb'A' is returned as \verb'NULL'. Its type is returned in the \verb'type' parameter, its dimensions in \verb'nrows' and \verb'ncols', and the CSC format is in the three arrays \verb'Ap', \verb'Ai', and \verb'Ax'. If the matrix has no entries, \verb'Ai' and \verb'Ax' arrays are returned as \verb'NULL'; this is not an error, and \verb'GxB_Matrix_import_CSC' also allows these two arrays to be \verb'NULL' on input when the matrix has no entries. After a successful export, the user application is responsible for freeing these three arrays via \verb'free' (or the \verb'free' function passed to \verb'GxB_init'). The CSC format is described in Section~\ref{matrix_import_csc}. This method takes $O(1)$ time if the matrix is already in CSC format internally. Otherwise, the matrix is converted to CSC format and then exported. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_HyperCSR:} import a HyperCSR matrix} %------------------------------------------------------------------------------- \label{matrix_import_hypercsr} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_HyperCSR // import a hypersparse CSR matrix ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix GrB_Index **Ap, // row "pointers", Ap_size >= (nvec+1)*sizeof(int64_t) GrB_Index **Ah, // row indices, Ah_size >= (nvec)*sizeof(int64_t) GrB_Index **Aj, // column indices, Aj_size >= nvals(A)*sizeof(int64_t) void **Ax, // values, Ax_size >= nvals(A) * (type size) GrB_Index Ap_size, // size of Ap in bytes GrB_Index Ah_size, // size of Ah in bytes GrB_Index Aj_size, // size of Aj in bytes GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) GrB_Index nvec, // number of rows that appear in Ah bool jumbled, // if true, indices in each row may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_HyperCSR' imports a matrix in hypersparse CSR format. 
The hypersparse CSR (HyperCSR) format is identical to the CSR format, except
that the \verb'Ap' array itself becomes sparse, if the matrix has rows that
are completely empty.  An array \verb'Ah' of size \verb'nvec' provides a list
of rows that appear in the data structure.  For example, consider
Equation~\ref{eqn:Ahyper}, which is a sparser version of the matrix in
Equation~\ref{eqn:Aexample}.  Row 2 and column 1 of this matrix are all zero.

\begin{equation}
\label{eqn:Ahyper}
A = \left[
    \begin{array}{cccc}
    4.5 & 0 & 3.2 & 0   \\
    3.1 & 0 & 0   & 0.9 \\
    0   & 0 & 0   & 0   \\
    3.5 & 0 & 0   & 1.0 \\
    \end{array}
    \right]
\end{equation}

The conventional CSR format would appear as follows.  Since the third row
(row 2) is all zero, accessing \verb'Aj [Ap [2] ... Ap [3]-1]' gives an empty
set (\verb'[4..3]'), and the number of entries in this row is
\verb'Ap [i+1] - Ap [i]' \verb'= Ap [3] - Ap [2] = 0'.

{\footnotesize
\begin{verbatim}
    int64_t Ap [ ] = { 0, 2, 4, 4, 6 } ;
    int64_t Aj [ ] = { 0, 2, 0, 3, 0, 3 } ;
    double  Ax [ ] = { 4.5, 3.2, 3.1, 0.9, 3.5, 1.0 } ;
\end{verbatim} }

A hypersparse CSR format for this same matrix would discard these duplicate
integers in \verb'Ap'.  Doing so requires another array, \verb'Ah', that
keeps track of the rows that appear in the data structure.

{\footnotesize
\begin{verbatim}
    int64_t nvec = 3 ;
    int64_t Ah [ ] = { 0, 1, 3 } ;
    int64_t Ap [ ] = { 0, 2, 4, 6 } ;
    int64_t Aj [ ] = { 0, 2, 0, 3, 0, 3 } ;
    double  Ax [ ] = { 4.5, 3.2, 3.1, 0.9, 3.5, 1.0 } ;
\end{verbatim} }

Note that the \verb'Aj' and \verb'Ax' arrays are the same in the CSR and
HyperCSR formats.  The row indices in \verb'Ah' must appear in ascending
order, and no duplicates can appear.  To iterate over this data structure
(assuming it has type \verb'GrB_FP64'):

{\footnotesize
\begin{verbatim}
    int64_t nvals = Ap [nvec] ;
    for (int64_t k = 0 ; k < nvec ; k++)
    {
        int64_t i = Ah [k] ;                // row index
        // get A(i,:)
        for (int64_t p = Ap [k] ; p < Ap [k+1] ; p++)
        {
            // get A(i,j)
            int64_t j = Aj [p] ;            // column index
            double aij = Ax [is_uniform ? 0 : p] ;      // numerical value
        }
    }
\end{verbatim}}

\vspace{-0.05in}
This is more complex than the CSR format, but it requires at most $O(e)$
space, where $A$ is $m$-by-$n$ with $e$ = \verb'nvals' entries.  The CSR
format requires $O(m+e)$ space.  If $e \ll m$, then the size $m+1$ of
\verb'Ap' can dominate the memory required.  In the hypersparse form,
\verb'Ap' takes on size \verb'nvec+1', and \verb'Ah' has size \verb'nvec',
where \verb'nvec' is the number of rows that appear in the data structure.
The CSR format can be viewed as a dense array (of size \verb'nrows') of
sparse row vectors.  By contrast, the hypersparse CSR format is a sparse
array (of size \verb'nvec') of sparse row vectors.

The import takes $O(1)$ time.  If successful, the four arrays \verb'Ah',
\verb'Ap', \verb'Aj', and \verb'Ax' are returned as \verb'NULL', and the
hypersparse \verb'GrB_Matrix A' is created.

If the matrix has no entries, then the content of the \verb'Aj' and \verb'Ax'
arrays is not accessed and they may be \verb'NULL' on input (if not
\verb'NULL', they are still freed and returned as \verb'NULL', if the method
is successful).
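For illustration, the hypersparse example above could be imported as in the
following sketch.  The allocation sizes and variable names are illustrative
only, and error checking is omitted.

{\footnotesize
\begin{verbatim}
    // hypothetical sketch: import the 4-by-4 hypersparse example
    // (6 entries, 3 non-empty rows)
    GrB_Index *Ah = malloc (3 * sizeof (GrB_Index)) ;
    GrB_Index *Ap = malloc (4 * sizeof (GrB_Index)) ;
    GrB_Index *Aj = malloc (6 * sizeof (GrB_Index)) ;
    double    *Ax = malloc (6 * sizeof (double)) ;
    // ... fill Ah, Ap, Aj, and Ax with the values listed above ...
    GrB_Matrix A = NULL ;
    GrB_Info info = GxB_Matrix_import_HyperCSR (&A, GrB_FP64, 4, 4,
        &Ap, &Ah, &Aj, (void **) &Ax,
        4 * sizeof (GrB_Index), 3 * sizeof (GrB_Index),
        6 * sizeof (GrB_Index), 6 * sizeof (double),
        false,      // is_uniform
        3,          // nvec: number of rows that appear in Ah
        false,      // jumbled
        NULL) ;
    // on success, A is hypersparse and Ah, Ap, Aj, Ax are returned as NULL
\end{verbatim}}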
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Matrix\_export\_HyperCSR:} export a HyperCSR matrix}
%-------------------------------------------------------------------------------
\label{matrix_export_hypercsr}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_export_HyperCSR  // export and free a hypersparse CSR matrix
(
    GrB_Matrix *A,      // handle of matrix to export and free
    GrB_Type *type,     // type of matrix exported
    GrB_Index *nrows,   // number of rows of the matrix
    GrB_Index *ncols,   // number of columns of the matrix
    GrB_Index **Ap,     // row "pointers", Ap_size >= (nvec+1)*sizeof(int64_t)
    GrB_Index **Ah,     // row indices, Ah_size >= nvec*sizeof(int64_t)
    GrB_Index **Aj,     // column indices, Aj_size >= nvals(A)*sizeof(int64_t)
    void **Ax,          // values, Ax_size >= nvals(A)*(type size)
    GrB_Index *Ap_size, // size of Ap in bytes
    GrB_Index *Ah_size, // size of Ah in bytes
    GrB_Index *Aj_size, // size of Aj in bytes
    GrB_Index *Ax_size, // size of Ax in bytes
    bool *is_uniform,   // if true, A has uniform values (not yet supported)
    GrB_Index *nvec,    // number of rows that appear in Ah
    bool *jumbled,      // if true, indices in each row may be unsorted
    const GrB_Descriptor desc
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Matrix_export_HyperCSR' exports a matrix in HyperCSR format.

If successful, the \verb'GrB_Matrix A' is freed, and \verb'A' is returned as
\verb'NULL'.  Its type is returned in the \verb'type' parameter, its
dimensions in \verb'nrows' and \verb'ncols', and the number of non-empty rows
in \verb'nvec'.  The hypersparse CSR format is in the four arrays \verb'Ah',
\verb'Ap', \verb'Aj', and \verb'Ax'.  If the matrix has no entries, the
\verb'Aj' and \verb'Ax' arrays are returned as \verb'NULL'; this is not an
error.  After a successful export, the user application is responsible for
freeing these four arrays via \verb'free' (or the \verb'free' function passed
to \verb'GxB_init').

The hypersparse CSR format is described in
Section~\ref{matrix_import_hypercsr}.

This method takes $O(1)$ time if the matrix is already in HyperCSR format
internally.  Otherwise, the matrix is converted to HyperCSR format and then
exported.
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_HyperCSC:} import a HyperCSC matrix} %------------------------------------------------------------------------------- \label{matrix_import_hypercsc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_HyperCSC // import a hypersparse CSC matrix ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix GrB_Index **Ap, // col "pointers", Ap_size >= (nvec+1)*sizeof(int64_t) GrB_Index **Ah, // column indices, Ah_size >= nvec*sizeof(int64_t) GrB_Index **Ai, // row indices, Ai_size >= nvals(A)*sizeof(int64_t) void **Ax, // values, Ax_size >= nvals(A)*(type size) GrB_Index Ap_size, // size of Ap in bytes GrB_Index Ah_size, // size of Ah in bytes GrB_Index Ai_size, // size of Ai in bytes GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) GrB_Index nvec, // number of columns that appear in Ah bool jumbled, // if true, indices in each column may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_HyperCSC' imports a matrix in hypersparse CSC format. It is identical to \verb'GxB_Matrix_import_HyperCSR', except the data structure defined by the four arrays \verb'Ah', \verb'Ap', \verb'Ai', and \verb'Ax' holds the matrix as a sparse array of \verb'nvec' sparse column vectors. The following code iterates over the matrix, assuming it has type \verb'GrB_FP64': \vspace{-0.10in} {\footnotesize \begin{verbatim} int64_t nvals = Ap [nvec] ; for (int64_t k = 0 ; k < nvec ; k++) { int64_t j = Ah [k] ; // column index // get A(:,j) for (int64_t p = Ap [k] ; p < Ap [k+1] ; p++) { // get A(i,j) int64_t i = Ai [p] ; // row index double aij = Ax [is_uniform ? 0 : p] ; // numerical value } } \end{verbatim}} \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_export\_HyperCSC:} export a HyperCSC matrix} %------------------------------------------------------------------------------- \label{matrix_export_hypercsc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_export_HyperCSC // export and free a hypersparse CSC matrix ( GrB_Matrix *A, // handle of matrix to export and free GrB_Type *type, // type of matrix exported GrB_Index *nrows, // number of rows of the matrix GrB_Index *ncols, // number of columns of the matrix GrB_Index **Ap, // column "pointers", Ap_size >= nvec+1 GrB_Index **Ah, // column indices, Ah_size >= nvec GrB_Index **Ai, // row indices, Ai_size >= nvals(A) void **Ax, // values, Ax_size >= nvals(A) GrB_Index *Ap_size, // size of Ap GrB_Index *Ah_size, // size of Ah GrB_Index *Ai_size, // size of Ai GrB_Index *Ax_size, // size of Ax bool *is_uniform, // if true, A has uniform values (not yet supported) GrB_Index *nvec, // number of columns that appear in Ah bool *jumbled, // if true, indices in each column may be unsorted const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_export_HyperCSC' exports a matrix in HyperCSC form. If successful, the \verb'GrB_Matrix A' is freed, and \verb'A' is returned as \verb'NULL'. Its type is returned in the \verb'type' parameter, its dimensions in \verb'nrows' and \verb'ncols', and the number of non-empty rows in \verb'nvec'. 
The hypersparse CSC format is in the four arrays \verb'Ah', \verb'Ap', \verb'Ai', and \verb'Ax'. If the matrix has no entries, the \verb'Ai' and \verb'Ax' arrays are returned as \verb'NULL'; this is not an error. After a successful export, the user application is responsible for freeing these three arrays via \verb'free' (or the \verb'free' function passed to \verb'GxB_init'). The hypersparse CSC format is described in Section~\ref{matrix_import_hypercsc}. This method takes $O(1)$ time if the matrix is already in HyperCSC format internally. Otherwise, the matrix is converted to HyperCSC format and then exported. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_BitmapR:} import a BitmapR matrix} %------------------------------------------------------------------------------- \label{matrix_import_bitmapr} { \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_BitmapR // import a bitmap matrix, held by row ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix int8_t **Ab, // bitmap, Ab_size >= nrows*ncols void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index Ab_size, // size of Ab in bytes GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) GrB_Index nvals, // # of entries in bitmap const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_BitmapR' imports a matrix from 2 user arrays in BitmapR format. The first four arguments of \verb'GxB_Matrix_import_BitmapR' are the same as all four arguments of \verb'GrB_Matrix_new', because this function is similar. It creates a new \verb'GrB_Matrix A', with the given type and dimensions. The \verb'GrB_Matrix A' does not exist on input. The \verb'GrB_Matrix' \verb'A' is created from the arrays \verb'Ab' and \verb'Ax', each of which are size \verb'nrows*ncols'. Both arrays must have been created with the ANSI C \verb'malloc', \verb'calloc', or \verb'realloc' functions (by default), or by the corresponding \verb'malloc_function', \verb'calloc_function', or \verb'realloc_function' provided to \verb'GxB_init'. These arrays define the pattern and values of the new matrix \verb'A': \begin{itemize} \item \verb'int8_t Ab [nrows*ncols] ;' The \verb'Ab' array defines which entries of \verb'A' are present. If \verb'Ab[i*ncols+j]=1', then the entry $A(i,j)$ is present, with value \verb'Ax[i*ncols+j]'. If \verb'Ab[i*ncols+j]=0', then the entry $A(i,j)$ is not present. The \verb'Ab' array must contain only 0s and 1s. The \verb'nvals' input must exactly match the number of 1s in the \verb'Ab' array. \item \verb'ctype Ax [nrows*ncols] ;' The \verb'Ax' array defines the values of entries in the matrix. It is passed in as a \verb'(void *)' pointer, but it must point to an array of size \verb'nrows*ncols' values, each of size \verb'sizeof(ctype)', where \verb'ctype' is the exact type in C that corresponds to the \verb'GrB_Type type' parameter. That is, if \verb'type' is \verb'GrB_INT32', then \verb'ctype' is \verb'int32_t'. User types may be used, just the same as built-in types. If \verb'Ab[p]' is zero, the value of \verb'Ax[p]' is ignored. 
\end{itemize}

To iterate over the rows and entries of this matrix, the following code can
be used (assuming it has type \verb'GrB_FP64'):

{\footnotesize
\begin{verbatim}
    for (int64_t i = 0 ; i < nrows ; i++)
    {
        // get A(i,:)
        for (int64_t j = 0 ; j < ncols ; j++)
        {
            // get A(i,j)
            int64_t p = i*ncols + j ;
            if (Ab [p])
            {
                double aij = Ax [is_uniform ? 0 : p] ;  // numerical value
            }
        }
    }
\end{verbatim}}

On successful creation of \verb'A', the two pointers \verb'Ab' and \verb'Ax'
are set to \verb'NULL' on output.  This denotes to the user application that
it is no longer responsible for freeing these arrays.  Internally, GraphBLAS
has moved these arrays into its internal data structure.  They will
eventually be freed no later than when the user does \verb'GrB_free(&A)', but
they may be freed or resized later, if the matrix changes.  If an export is
performed, the freeing of these two arrays again becomes the responsibility
of the user application.

The \verb'GxB_Matrix_import_BitmapR' function will rarely fail, since it
allocates just $O(1)$ space.  If it does fail, it returns
\verb'GrB_OUT_OF_MEMORY', and it leaves the two user arrays unmodified.  They
are still owned by the user application, which is eventually responsible for
freeing them with \verb'free(Ab)', etc.
}

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Matrix\_export\_BitmapR:} export a BitmapR matrix}
%-------------------------------------------------------------------------------
\label{matrix_export_bitmapr}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_export_BitmapR  // export and free a bitmap matrix, by row
(
    GrB_Matrix *A,      // handle of matrix to export and free
    GrB_Type *type,     // type of matrix exported
    GrB_Index *nrows,   // number of rows of the matrix
    GrB_Index *ncols,   // number of columns of the matrix
    int8_t **Ab,        // bitmap, Ab_size >= nrows*ncols
    void **Ax,          // values, Ax_size >= nrows*ncols * (type size)
    GrB_Index *Ab_size, // size of Ab in bytes
    GrB_Index *Ax_size, // size of Ax in bytes
    bool *is_uniform,   // if true, A has uniform values (not yet supported)
    GrB_Index *nvals,   // # of entries in bitmap
    const GrB_Descriptor desc
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Matrix_export_BitmapR' exports a matrix in BitmapR form.  If
successful, the \verb'GrB_Matrix A' is freed, and \verb'A' is returned as
\verb'NULL'.  Its type is returned in the \verb'type' parameter, its
dimensions in \verb'nrows' and \verb'ncols', and the number of entries in
\verb'nvals'.  The BitmapR format is in the two arrays \verb'Ab' and
\verb'Ax'.  After a successful export, the user application is responsible
for freeing these two arrays via \verb'free' (or the \verb'free' function
passed to \verb'GxB_init').  The BitmapR format is described in
Section~\ref{matrix_import_bitmapr}.  If \verb'Ab[p]' is zero, the value of
\verb'Ax[p]' is undefined.

This method takes $O(1)$ time if the matrix is already in BitmapR format
internally.  Otherwise, the matrix is converted to BitmapR format and then
exported.
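As a concrete illustration of the BitmapR layout, the following sketch builds
the two arrays for a small 2-by-2 matrix with two entries and imports them
with \verb'GxB_Matrix_import_BitmapR' (Section~\ref{matrix_import_bitmapr}).
The values and variable names are illustrative only, and error checking is
omitted.

{\footnotesize
\begin{verbatim}
    // hypothetical sketch: A(0,0)=1.5 and A(1,1)=2.5, all other entries absent
    int8_t *Ab = malloc (4 * sizeof (int8_t)) ;
    double *Ax = malloc (4 * sizeof (double)) ;
    Ab [0] = 1 ; Ax [0] = 1.5 ;     // A(0,0) present (p = 0*2 + 0)
    Ab [1] = 0 ;                    // A(0,1) not present (Ax [1] ignored)
    Ab [2] = 0 ;                    // A(1,0) not present
    Ab [3] = 1 ; Ax [3] = 2.5 ;     // A(1,1) present (p = 1*2 + 1)
    GrB_Matrix A = NULL ;
    GrB_Info info = GxB_Matrix_import_BitmapR (&A, GrB_FP64, 2, 2,
        &Ab, (void **) &Ax, 4 * sizeof (int8_t), 4 * sizeof (double),
        false,      // is_uniform
        2,          // nvals: number of 1s in Ab
        NULL) ;
\end{verbatim}}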
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_BitmapC:} import a BitmapC matrix} %------------------------------------------------------------------------------- \label{matrix_import_bitmapc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_BitmapC // import a bitmap matrix, held by column ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix int8_t **Ab, // bitmap, Ab_size >= nrows*ncols void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index Ab_size, // size of Ab in bytes GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) GrB_Index nvals, // # of entries in bitmap const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_BitmapC' imports a matrix from 2 user arrays in BitmapC format. It is identical to \verb'GxB_Matrix_import_BitmapR', except that the entry $A(i,j)$ is held in \verb'Ab[i+j*nrows]' and \verb'Ax[i+j*nrows]', in column-major format. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_export\_BitmapC:} export a BitmapC matrix} %------------------------------------------------------------------------------- \label{matrix_export_bitmapc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_export_BitmapC // export and free a bitmap matrix, by col ( GrB_Matrix *A, // handle of matrix to export and free GrB_Type *type, // type of matrix exported GrB_Index *nrows, // number of rows of the matrix GrB_Index *ncols, // number of columns of the matrix int8_t **Ab, // bitmap, Ab_size >= nrows*ncols void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index *Ab_size, // size of Ab in bytes GrB_Index *Ax_size, // size of Ax in bytes bool *is_uniform, // if true, A has uniform values (not yet supported) GrB_Index *nvals, // # of entries in bitmap const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_export_BitmapC' exports a matrix in BitmapC form. It is identical to \verb'GxB_Matrix_export_BitmapR', except that the entry $A(i,j)$ is held in \verb'Ab[i+j*nrows]' and \verb'Ax[i+j*nrows]', in column-major format. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_FullR:} import a FullR matrix} %------------------------------------------------------------------------------- \label{matrix_import_fullr} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_FullR // import a full matrix, held by row ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_FullR' imports a matrix from a user arrays in FullR format. The \verb'FullR' format is identical to \verb'BitmapR', except that all entries are present. The value of $A(i,j)$ is \verb'Ax[i*ncols+j]'. 
To iterate over the rows and entries of this matrix, the following code can be used (assuming it has type \verb'GrB_FP64'): {\footnotesize \begin{verbatim} for (int64_t i = 0 ; i < nrows ; i++) { // get A(i,:) for (int64_t j = 0 ; j < ncols ; j++) { // get A(i,j) int64_t p = i*ncols + j ; double aij = Ax [is_uniform ? 0 : p] ; // numerical value } } \end{verbatim}} \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_export\_FullR:} export a FullR matrix} %------------------------------------------------------------------------------- \label{matrix_export_fullr} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_export_FullR // export and free a full matrix, by row ( GrB_Matrix *A, // handle of matrix to export and free GrB_Type *type, // type of matrix exported GrB_Index *nrows, // number of rows of the matrix GrB_Index *ncols, // number of columns of the matrix void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index *Ax_size, // size of Ax in bytes bool *is_uniform, // if true, A has uniform values (not yet supported) const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_export_FullR' exports a matrix in FullR form. It is identical to \verb'GxB_Matrix_export_BitmapR', except that all entries must be present. That is, prior to export, \verb'GrB_Matrix_nvals (&nvals, A)' must return \verb'nvals' equal to \verb'nrows*ncols'. Otherwise, if the \verb'A' is exported with \newline \verb'GxB_Matrix_export_FullR', an error is returned (\verb'GrB_INVALID_VALUE') and the matrix is not exported. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_import\_FullC:} import a FullC matrix} %------------------------------------------------------------------------------- \label{matrix_import_fullc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_import_FullC // import a full matrix, held by column ( GrB_Matrix *A, // handle of matrix to create GrB_Type type, // type of matrix to create GrB_Index nrows, // number of rows of the matrix GrB_Index ncols, // number of columns of the matrix void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index Ax_size, // size of Ax in bytes bool is_uniform, // if true, A has uniform values (not yet supported) const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_import_FullC' imports a matrix from a user arrays in FullC format. The \verb'FullC' format is identical to \verb'BitmapC', except that all entries are present. The value of $A(i,j)$ is \verb'Ax[i+j*nrows]'. To iterate over the rows and entries of this matrix, the following code can be used (assuming it has type \verb'GrB_FP64'): {\footnotesize \begin{verbatim} for (int64_t i = 0 ; i < nrows ; i++) { // get A(i,:) for (int64_t j = 0 ; j < ncols ; j++) { // get A(i,j) int64_t p = i + j*nrows ; double aij = Ax [is_uniform ? 0 : p] ; // numerical value } } \end{verbatim}} Note that the \verb'is_uniform' feature is not yet implemented. Once it is added to SuiteSparse:GraphBLAS, a full matrix with uniform values will take only $O(1)$ time and memory to create and hold in memory. Some operations need not traverse over all entries of the matrix, and these operations will be very fast with full matrices with uniform values. 
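For example, a dense column-major array produced by the user application
could be moved into a full matrix as in the following sketch (illustrative
sizes and names, no error checking):

{\footnotesize
\begin{verbatim}
    // hypothetical sketch: import a dense 2-by-3 column-major array
    GrB_Index nrows = 2, ncols = 3 ;
    double *Ax = malloc (nrows * ncols * sizeof (double)) ;
    for (GrB_Index j = 0 ; j < ncols ; j++)
    {
        for (GrB_Index i = 0 ; i < nrows ; i++)
        {
            Ax [i + j*nrows] = (double) (i + j) ;   // A(i,j) = i + j
        }
    }
    GrB_Matrix A = NULL ;
    GrB_Info info = GxB_Matrix_import_FullC (&A, GrB_FP64, nrows, ncols,
        (void **) &Ax, nrows * ncols * sizeof (double), false, NULL) ;
    // on success, A is a full 2-by-3 matrix and Ax is returned as NULL
\end{verbatim}}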
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_export\_FullC:} export a FullC matrix} %------------------------------------------------------------------------------- \label{matrix_export_fullc} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Matrix_export_FullC // export and free a full matrix, by column ( GrB_Matrix *A, // handle of matrix to export and free GrB_Type *type, // type of matrix exported GrB_Index *nrows, // number of rows of the matrix GrB_Index *ncols, // number of columns of the matrix void **Ax, // values, Ax_size >= nrows*ncols * (type size) GrB_Index *Ax_size, // size of Ax in bytes bool *is_uniform, // if true, A has uniform values (not yet supported) const GrB_Descriptor desc ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_export_FullC' exports a matrix in FullC form. It is identical to \verb'GxB_Matrix_export_BitmapC', except that all entries must be present. That is, prior to export, \verb'GrB_Matrix_nvals (&nvals, A)' must return \verb'nvals' equal to \verb'nrows*ncols'. Otherwise, if the \verb'A' is exported with \newline \verb'GxB_Matrix_export_FullC', an error is returned (\verb'GrB_INVALID_VALUE') and the matrix is not exported. \newpage %=============================================================================== \subsection{GraphBLAS descriptors: {\sf GrB\_Descriptor}} %===================== %=============================================================================== \label{descriptor} A GraphBLAS {\em descriptor} modifies the behavior of a GraphBLAS operation. If the descriptor is \verb'GrB_NULL', defaults are used. The access to these parameters and their values is governed by two \verb'enum' types, \verb'GrB_Desc_Field' and \verb'GrB_Desc_Value': \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} #define GxB_NTHREADS 5 // for both GrB_Desc_field and GxB_Option_field #define GxB_CHUNK 7 typedef enum { GrB_OUTP = 0, // descriptor for output of a method GrB_MASK = 1, // descriptor for the mask input of a method GrB_INP0 = 2, // descriptor for the first input of a method GrB_INP1 = 3, // descriptor for the second input of a method GxB_DESCRIPTOR_NTHREADS = GxB_NTHREADS, // number of threads to use GxB_DESCRIPTOR_CHUNK = GxB_CHUNK, // chunk size for small problems GxB_AxB_METHOD = 1000, // descriptor for selecting C=A*B algorithm GxB_SORT = 35 // control sort in GrB_mxm } GrB_Desc_Field ; typedef enum { // for all GrB_Descriptor fields: GxB_DEFAULT = 0, // default behavior of the method // for GrB_OUTP only: GrB_REPLACE = 1, // clear the output before assigning new values to it // for GrB_MASK only: GrB_COMP = 2, // use the complement of the mask GrB_STRUCTURE = 4, // use the structure of the mask // for GrB_INP0 and GrB_INP1 only: GrB_TRAN = 3, // use the transpose of the input // for GxB_AxB_METHOD only: GxB_AxB_GUSTAVSON = 1001, // gather-scatter saxpy method GxB_AxB_DOT = 1003, // dot product GxB_AxB_HASH = 1004, // hash-based saxpy method GxB_AxB_SAXPY = 1005 // saxpy method (any kind) } GrB_Desc_Value ; \end{verbatim} } \end{mdframed} \newpage \begin{itemize} \item \verb'GrB_OUTP' is a parameter that modifies the output of a GraphBLAS operation. In the default case, the output is not cleared, and ${\bf C \langle M \rangle = Z = C \odot T}$ is computed as-is, where ${\bf T}$ is the results of the particular GraphBLAS operation. 
In the non-default case, ${\bf Z = C \odot T}$ is first computed, using the results of ${\bf T}$ and the accumulator $\odot$. After this is done, if the \verb'GrB_OUTP' descriptor field is set to \verb'GrB_REPLACE', then the output is cleared of its entries. Next, the assignment ${\bf C \langle M \rangle = Z}$ is performed. \item \verb'GrB_MASK' is a parameter that modifies the \verb'Mask', even if the mask is not present. If this parameter is set to its default value, and if the mask is not present (\verb'Mask==NULL') then implicitly \verb'Mask(i,j)=1' for all \verb'i' and \verb'j'. If the mask is present then \verb'Mask(i,j)=1' means that \verb'C(i,j)' is to be modified by the ${\bf C \langle M \rangle = Z}$ update. Otherwise, if \verb'Mask(i,j)=0', then \verb'C(i,j)' is not modified, even if \verb'Z(i,j)' is an entry with a different value; that value is simply discarded. If the \verb'GrB_MASK' parameter is set to \verb'GrB_COMP', then the use of the mask is complemented. In this case, if the mask is not present (\verb'Mask==NULL') then implicitly \verb'Mask(i,j)=0' for all \verb'i' and \verb'j'. This means that none of ${\bf C}$ is modified and the entire computation of ${\bf Z}$ might as well have been skipped. That is, a complemented empty mask means no modifications are made to the output object at all, except perhaps to clear it in accordance with the \verb'GrB_OUTP' descriptor. With a complemented mask, if the mask is present then \verb'Mask(i,j)=0' means that \verb'C(i,j)' is to be modified by the ${\bf C \langle M \rangle = Z}$ update. Otherwise, if \verb'Mask(i,j)=1', then \verb'C(i,j)' is not modified, even if \verb'Z(i,j)' is an entry with a different value; that value is simply discarded. If the \verb'GrB_MASK' parameter is set to \verb'GrB_STRUCTURE', then the values of the mask are ignored, and just the pattern of the entries is used. Any entry \verb'M(i,j)' in the pattern is treated as if it were true. The \verb'GrB_COMP' and \verb'GrB_STRUCTURE' settings can be combined, either by setting the mask option twice (once with each value), or by setting the mask option to \verb'GrB_COMP+GrB_STRUCTURE' (the latter is an extension to the spec). Using a parameter to complement the \verb'Mask' is very useful because constructing the actual complement of a very sparse mask is impossible since it has too many entries. If the number of places in \verb'C' that should be modified is very small, then use a sparse mask without complementing it. If the number of places in \verb'C' that should be protected from modification is very small, then use a sparse mask to indicate those places, and use a descriptor \verb'GrB_MASK' that complements the use of the mask. \item \verb'GrB_INP0' and \verb'GrB_INP1' modify the use of the first and second input matrices \verb'A' and \verb'B' of the GraphBLAS operation. If the \verb'GrB_INP0' is set to \verb'GrB_TRAN', then \verb'A' is transposed before using it in the operation. Likewise, if \verb'GrB_INP1' is set to \verb'GrB_TRAN', then the second input, typically called \verb'B', is transposed. Vectors and scalars are never transposed via the descriptor. If a method's first parameter is a matrix and the second a vector or scalar, then \verb'GrB_INP0' modifies the matrix parameter and \verb'GrB_INP1' is ignored. If a method's first parameter is a vector or scalar and the second a matrix, then \verb'GrB_INP1' modifies the matrix parameter and \verb'GrB_INP0' is ignored. 
To clarify this in each function, the inputs are labeled as
\verb'first input:' and \verb'second input:' in the function signatures.

\item \verb'GxB_AxB_METHOD' suggests the method that should be used to
compute \verb'C=A*B'.  All the methods compute the same result, except they
may have different floating-point roundoff errors.  This descriptor should be
considered as a hint; SuiteSparse:GraphBLAS is free to ignore it.

    \begin{itemize}

    \item \verb'GxB_DEFAULT' means that a method is selected automatically.

    \item \verb'GxB_AxB_SAXPY':  select any saxpy-based method:
    \verb'GxB_AxB_GUSTAVSON', and/or \verb'GxB_AxB_HASH', or any mix of the
    two, in contrast to the dot-product method.

    \item \verb'GxB_AxB_GUSTAVSON':  an extended version of Gustavson's
    method \cite{Gustavson78}, which is a very good general-purpose method,
    but sometimes the workspace can be too large.  Assuming all matrices are
    stored by column, it computes \verb'C(:,j)=A*B(:,j)' with a sequence of
    {\em saxpy} operations (\verb'C(:,j)+=A(:,k)*B(k,j)' for each nonzero
    \verb'B(k,j)').  In the {\em coarse Gustavson} method, each internal
    thread requires workspace of size $m$, equal to the number of rows of
    \verb'C', which is not suitable if the matrices are extremely sparse or
    if there are many threads.  For the {\em fine Gustavson} method, threads
    can share workspace and update it via atomic operations.  If all matrices
    are stored by row, then it computes \verb'C(i,:)=A(i,:)*B' in a sequence
    of sparse {\em saxpy} operations, and using workspace of size $n$ per
    thread, or group of threads, corresponding to the number of columns of
    \verb'C'.

    \item \verb'GxB_AxB_HASH': a hash-based method, based on
    \cite{10.1145/3229710.3229720}.  It is very efficient for hypersparse
    matrices, matrix-vector-multiply, and when $|{\bf B}|$ is small.
    SuiteSparse:GraphBLAS includes a {\em coarse hash} method, in which each
    thread has its own hash workspace, and a {\em fine hash} method, in which
    groups of threads share a single hash workspace, as a concurrent data
    structure, using atomics.

%   [2] Yusuke Nagasaka, Satoshi Matsuoka, Ariful Azad, and Aydın Buluç. 2018.
%   High-Performance Sparse Matrix-Matrix Products on Intel KNL and Multicore
%   Architectures. In Proc. 47th Intl. Conf. on Parallel Processing (ICPP '18).
%   Association for Computing Machinery, New York, NY, USA, Article 34, 1–10.
%   DOI:https://doi.org/10.1145/3229710.3229720

    \item \verb'GxB_AxB_DOT': computes \verb"C(i,j)=A(i,:)*B(j,:)'", for each
    entry \verb'C(i,j)'.  If the mask is present and not complemented, only
    entries for which \verb'M(i,j)=1' are computed.  This is a very
    specialized method that works well only if the mask is present, very
    sparse, and not complemented, when \verb'C' is small, or when \verb'C' is
    bitmap or full.  For example, it works very well when \verb'A' and
    \verb'B' are tall and thin, and \verb"C<M>=A*B'" or \verb"C=A*B'" are
    computed.  These expressions assume all matrices are in CSR format.  If
    in CSC format, then the dot-product method is used for \verb"A'*B".  The
    method is impossibly slow if \verb'C' is large and the mask is not
    present, since it takes $\Omega(mn)$ time if \verb'C' is $m$-by-$n$ in
    that case.  It does not use any workspace at all.  Since it uses no
    workspace, it can work very well for extremely sparse or hypersparse
    matrices, when the mask is present and not complemented.

    \end{itemize}

\item \verb'GxB_NTHREADS' controls how many threads a method uses.
By default (if set to zero, or \verb'GxB_DEFAULT'), all available threads
are used.
The maximum available threads is controlled by the global setting, which is \verb'omp_get_max_threads ( )' by default. If set to some positive integer \verb'nthreads' less than this maximum, at most \verb'nthreads' threads will be used. See Section~\ref{omp_parallelism} for details. \item \verb'GxB_CHUNK' is a \verb'double' value that controls how many threads a method uses for small problems. See Section~\ref{omp_parallelism} for details. \item \verb'GxB_SORT' provides a hint to \verb'GrB_mxm', \verb'GrB_mxv', \verb'GrB_vxm', and \verb'GrB_reduce' (to vector). These methods can leave the output matrix or vector in a jumbled state, where the final sort is left as pending work. This is typically fastest, since some algorithms can tolerate jumbled matrices on input, and sometimes the sort can be skipped entirely. However, if the matrix or vector will be immediately exported in unjumbled form, or provided as input to a method that requires it to not be jumbled, then sorting it during the matrix multiplication is faster. By default, these methods leave the result in jumbled form (a {\em lazy sort}), if \verb'GxB_SORT' is set to zero (\verb'GxB_DEFAULT'). A nonzero value will inform the matrix multiplication to sort its result, instead. \end{itemize} %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Descriptor\_new:} create a new descriptor} %------------------------------------------------------------------------------- \label{descriptor_new} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_Descriptor_new // create a new descriptor ( GrB_Descriptor *descriptor // handle of descriptor to create ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Descriptor_new' creates a new descriptor, with all fields set to their defaults (output is not replaced, the mask is not complemented, the mask is valued not structural, neither input matrix is transposed, the method used in \verb'C=A*B' is selected automatically, and \verb'GrB_mxm' leaves the final sort as pending work). %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Descriptor\_wait:} wait for a descriptor} %------------------------------------------------------------------------------- \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_wait // wait for a descriptor ( GrB_Descriptor *descriptor // descriptor to wait for ) ; \end{verbatim} }\end{mdframed} After creating a user-defined descriptor, a GraphBLAS library may choose to exploit non-blocking mode to delay its creation. \verb'GrB_Descriptor_wait(&d)' ensures the descriptor \verb'd' is completed. SuiteSparse:GraphBLAS currently does nothing for \verb'GrB_Descriptor_wait(&d)', except to ensure that \verb'd' is valid. 
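As a simple illustration, the following sketch creates a descriptor, sets two
of its fields with \verb'GrB_Descriptor_set' (described in the next section),
uses it in \verb'GrB_mxm', and then frees it.  The matrices \verb'C',
\verb'M', \verb'A', and \verb'B' are assumed to already exist and have type
\verb'GrB_FP64'; error checking is omitted.

{\footnotesize
\begin{verbatim}
    // hypothetical sketch: C<M> = A'*B, clearing C first
    GrB_Descriptor desc = NULL ;
    GrB_Descriptor_new (&desc) ;
    GrB_Descriptor_set (desc, GrB_INP0, GrB_TRAN) ;     // use A'
    GrB_Descriptor_set (desc, GrB_OUTP, GrB_REPLACE) ;  // clear C first
    GrB_mxm (C, M, NULL, GxB_PLUS_TIMES_FP64, A, B, desc) ;
    GrB_free (&desc) ;
\end{verbatim}}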
\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Descriptor\_set:} set a parameter in a descriptor}
%-------------------------------------------------------------------------------
\label{descriptor_set}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_Descriptor_set     // set a parameter in a descriptor
(
    GrB_Descriptor desc,        // descriptor to modify
    GrB_Desc_Field field,       // parameter to change
    GrB_Desc_Value val          // value to change it to
) ;
\end{verbatim}
} \end{mdframed}

\verb'GrB_Descriptor_set' sets a descriptor field (\verb'GrB_OUTP',
\verb'GrB_MASK', \verb'GrB_INP0', \verb'GrB_INP1', or \verb'GxB_AxB_METHOD')
to a particular value.  Use \verb'GxB_Desc_set' to set the value of
\verb'GxB_NTHREADS', \verb'GxB_CHUNK', and \verb'GxB_SORT'.  If an error
occurs, \verb'GrB_error(&err,desc)' returns details about the error.

\vspace{0.2in}
\noindent
{\footnotesize
\begin{tabular}{|l|p{2.4in}|p{2.2in}|}
\hline
Descriptor & Default & Non-default \\
field      &         & \\
\hline
\verb'GrB_OUTP'
& \verb'GxB_DEFAULT': The output matrix is not cleared.  The operation
computes ${\bf C \langle M \rangle = C \odot T}$.
& \verb'GrB_REPLACE': After computing ${\bf Z=C\odot T}$, the output {\bf C}
is cleared of all entries.  Then ${\bf C \langle M \rangle = Z}$ is performed.
\\
\hline
\verb'GrB_MASK'
& \verb'GxB_DEFAULT': The Mask is not complemented.  \verb'Mask(i,j)=1' means
the value $C_{ij}$ can be modified by the operation, while
\verb'Mask(i,j)=0' means the value $C_{ij}$ shall not be modified by the
operation.
& \verb'GrB_COMP': The Mask is complemented.  \verb'Mask(i,j)=0' means the
value $C_{ij}$ can be modified by the operation, while \verb'Mask(i,j)=1'
means the value $C_{ij}$ shall not be modified by the operation.
\\
&
& \verb'GrB_STRUCTURE': The values of the Mask are ignored.  If
\verb'Mask(i,j)' is an entry in the \verb'Mask' matrix, it is treated as if
\verb'Mask(i,j)=1'.  The two options \verb'GrB_COMP' and \verb'GrB_STRUCTURE'
can be combined, with two subsequent calls, or with a single call with the
setting \verb'GrB_COMP+GrB_STRUCTURE'.
\\
\hline
\verb'GrB_INP0'
& \verb'GxB_DEFAULT': The first input is not transposed prior to using it in
the operation.
& \verb'GrB_TRAN': The first input is transposed prior to using it in the
operation.  Only matrices are transposed, never vectors.
\\
\hline
\verb'GrB_INP1'
& \verb'GxB_DEFAULT': The second input is not transposed prior to using it in
the operation.
& \verb'GrB_TRAN': The second input is transposed prior to using it in the
operation.  Only matrices are transposed, never vectors.
\\
\hline
\verb'GxB_AxB_METHOD'
& \verb'GxB_DEFAULT': The method for \verb'C=A*B' is selected automatically.
& \verb'GxB_AxB_'{\em method}: The selected method is used to compute
\verb'C=A*B'.
\\
\hline
\end{tabular}
}

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Desc\_set:} set a parameter in a descriptor}
%-------------------------------------------------------------------------------
\label{desc_set}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Desc_set           // set a parameter in a descriptor
(
    GrB_Descriptor desc,        // descriptor to modify
    GrB_Desc_Field field,       // parameter to change
    ...                         // value to change it to
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_Desc_set' is like \verb'GrB_Descriptor_set', except that the type
of the third parameter can vary with the field.
This function can modify all descriptor settings, including those that do not have the type \verb'GrB_Desc_Value'. See also \verb'GxB_set' described in Section~\ref{options}. If an error occurs, \verb'GrB_error(&err,desc)' returns details about the error. %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Desc\_get:} get a parameter from a descriptor} %------------------------------------------------------------------------------- \label{desc_get} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Desc_get // get a parameter from a descriptor ( GrB_Descriptor desc, // descriptor to query; NULL means defaults GrB_Desc_Field field, // parameter to query ... // value of the parameter ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Desc_get' returns the value of a single field in a descriptor. The type of the third parameter is a pointer to a variable type, whose type depends on the field. See also \verb'GxB_get' described in Section~\ref{options}. %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Descriptor\_free:} free a descriptor} %------------------------------------------------------------------------------- \label{descriptor_free} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_free // free a descriptor ( GrB_Descriptor *descriptor // handle of descriptor to free ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Descriptor_free' frees a descriptor. Either usage: {\small \begin{verbatim} GrB_Descriptor_free (&descriptor) ; GrB_free (&descriptor) ; \end{verbatim}} \noindent frees the \verb'descriptor' and sets \verb'descriptor' to \verb'NULL'. It safely does nothing if passed a \verb'NULL' handle, or if \verb'descriptor == NULL' on input. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_DESC\_*:} predefined descriptors} %------------------------------------------------------------------------------- \label{descriptor_predefined} Version 1.3 of the GraphBLAS C API Specification adds predefined descriptors, and these have been added as of v3.2.0 of SuiteSparse:GraphBLAS. They are listed in the table below. A dash in the table indicates the default. These descriptors may not be modified or freed. Attempts to modify them result in an error (\verb'GrB_INVALID_VALUE'); attempts to free them are silently ignored. % \verb'GrB_NULL' is the default descriptor, with all settings at their defaults: % \verb'OUTP': do not replace the output, % \verb'MASK': mask is valued and not complemented, % \verb'INP0': first input not transposed, and % \verb'INP1': second input not transposed. % For these pre-defined descriptors, the % \verb'GxB_NTHREADS', % \verb'GxB_CHUNK', and % \verb'GxB_SORT' settings are at their default values. 
\vspace{0.2in} \noindent {\footnotesize \begin{tabular}{|l|lllll|} \hline Descriptor & \verb'OUTP' & \verb'MASK' & \verb'MASK' & \verb'INP0' & \verb'INP1' \\ & & structural & complement & & \\ \hline \verb'GrB_NULL' & - & - & - & - & - \\ \verb'GrB_DESC_T1' & - & - & - & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_T0' & - & - & - & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_T0T1' & - & - & - & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_C' & - & - & \verb'GrB_COMP' & - & - \\ \verb'GrB_DESC_CT1' & - & - & \verb'GrB_COMP' & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_CT0' & - & - & \verb'GrB_COMP' & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_CT0T1' & - & - & \verb'GrB_COMP' & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_S' & - & \verb'GrB_STRUCTURE' & - & - & - \\ \verb'GrB_DESC_ST1' & - & \verb'GrB_STRUCTURE' & - & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_ST0' & - & \verb'GrB_STRUCTURE' & - & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_ST0T1' & - & \verb'GrB_STRUCTURE' & - & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_SC' & - & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & - & - \\ \verb'GrB_DESC_SCT1' & - & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_SCT0' & - & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_SCT0T1' & - & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_R' & \verb'GrB_REPLACE' & - & - & - & - \\ \verb'GrB_DESC_RT1' & \verb'GrB_REPLACE' & - & - & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_RT0' & \verb'GrB_REPLACE' & - & - & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_RT0T1' & \verb'GrB_REPLACE' & - & - & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_RC' & \verb'GrB_REPLACE' & - & \verb'GrB_COMP' & - & - \\ \verb'GrB_DESC_RCT1' & \verb'GrB_REPLACE' & - & \verb'GrB_COMP' & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_RCT0' & \verb'GrB_REPLACE' & - & \verb'GrB_COMP' & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_RCT0T1' & \verb'GrB_REPLACE' & - & \verb'GrB_COMP' & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_RS' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & - & - & - \\ \verb'GrB_DESC_RST1' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & - & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_RST0' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & - & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_RST0T1' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & - & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \verb'GrB_DESC_RSC' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & - & - \\ \verb'GrB_DESC_RSCT1' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & - & \verb'GrB_TRAN' \\ \verb'GrB_DESC_RSCT0' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & \verb'GrB_TRAN' & - \\ \verb'GrB_DESC_RSCT0T1' & \verb'GrB_REPLACE' & \verb'GrB_STRUCTURE' & \verb'GrB_COMP' & \verb'GrB_TRAN' & \verb'GrB_TRAN' \\ \hline \end{tabular}} \newpage %=============================================================================== \subsection{{\sf GrB\_free:} free any GraphBLAS object} %======================= %=============================================================================== \label{free} Each of the ten objects has \verb'GrB_*_new' and \verb'GrB_*_free' methods that are specific to each object. They can also be accessed by a generic function, \verb'GrB_free', that works for all ten objects. 
If \verb'G' is any of the ten objects, the statement

{\footnotesize
\begin{verbatim}
GrB_free (&G) ;
\end{verbatim} }

\noindent
frees the object and sets the variable \verb'G' to \verb'NULL'.  It is safe
to pass in a \verb'NULL' handle, or to free an object twice:

{\footnotesize
\begin{verbatim}
GrB_free (NULL) ;       // SuiteSparse:GraphBLAS safely does nothing
GrB_free (&G) ;         // the object G is freed and G set to NULL
GrB_free (&G) ;         // SuiteSparse:GraphBLAS safely does nothing
\end{verbatim} }

\noindent
However, the following sequence of operations is not safe.  The first two
are valid but the last statement will lead to undefined behavior.

{\footnotesize
\begin{verbatim}
H = G ;                 // valid; creates a 2nd handle of the same object
GrB_free (&G) ;         // valid; G is freed and set to NULL; H now undefined
GrB_some_method (H) ;   // not valid; H is undefined
\end{verbatim}}

Some objects are predefined, such as the built-in types.  If a user
application attempts to free a built-in object, SuiteSparse:GraphBLAS will
safely do nothing.  The \verb'GrB_free' function in SuiteSparse:GraphBLAS
always returns \verb'GrB_SUCCESS'.

\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The mask, accumulator, and replace option} %%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{sec:maskaccum}

After a GraphBLAS operation computes a result ${\bf T}$ (for example,
${\bf T=AB}$ for \verb'GrB_mxm'), the results are assigned to an output
matrix ${\bf C}$ via the mask/accumulator phase, written as
${\bf C \langle M \rangle = C \odot T}$.  This phase is affected by the
\verb'GrB_REPLACE' option in the descriptor, the presence of an optional
binary accumulator operator ($\odot$), the presence of the optional mask
matrix ${\bf M}$, and the status of the mask descriptor.  The interplay of
these options is summarized in Table~\ref{tab:maskaccum}.

The mask ${\bf M}$ may be present, or not.  It may be structural or valued,
and it may be complemented, or not.  These options may be combined, for a
total of 8 cases, although the structural/valued option has no effect if
${\bf M}$ is not present.  If ${\bf M}$ is not present and not complemented,
then $m_{ij}$ is implicitly true.  If not present yet complemented, then all
$m_{ij}$ entries are implicitly zero; in this case, ${\bf T}$ need not be
computed at all.  Either ${\bf C}$ is not modified, or all its entries are
cleared if the replace option is enabled.  If ${\bf M}$ is present, and the
structural option is used, then $m_{ij}$ is treated as true if it is an
entry in the matrix (its value is ignored).  Otherwise, the value of
$m_{ij}$ is used.  In both cases, entries not present are implicitly zero.
These values are negated if the mask is complemented.  All of these various
cases are combined to give a single effective value of the mask at position
${ij}$.

The combination of all these options is presented in
Table~\ref{tab:maskaccum}.  The first column is the \verb'GrB_REPLACE'
option.  The second column lists whether or not the accumulator operator is
present.  The third column lists whether or not $c_{ij}$ exists on input to
the mask/accumulator phase (a dash means that it does not exist).  The
fourth column lists whether or not the entry $t_{ij}$ is present in the
result matrix ${\bf T}$.  The mask column is the final effective value of
$m_{ij}$, after accounting for the presence of ${\bf M}$ and the mask
options.
Finally, the last column states the result of the mask/accum step; if no action is listed in this column, then $c_{ij}$ is not modified. Several important observations can be made from this table. First, if no mask is present (and the mask-complement descriptor option is not used), then only the first half of the table is used. In this case, the \verb'GrB_REPLACE' option has no effect. The entire matrix ${\bf C}$ is modified. Consider the cases when $c_{ij}$ is present but $t_{ij}$ is not, and there is no mask or the effective value of the mask is true for this ${ij}$ position. With no accumulator operator, $c_{ij}$ is deleted. If the accumulator operator is present and the replace option is not used, $c_{ij}$ remains unchanged. \begin{table} {\small \begin{tabular}{lllll|l} \hline repl & accum & ${\bf C}$ & ${\bf T}$ & mask & action taken by ${\bf C \langle M \rangle = C \odot T}$ \\ \hline - &- & $c_{ij}$ & $t_{ij}$ & 1 & $c_{ij} = t_{ij}$, update \\ - &- & - & $t_{ij}$ & 1 & $c_{ij} = t_{ij}$, insert \\ - &- & $c_{ij}$ & - & 1 & delete $c_{ij}$ because $t_{ij}$ not present \\ - &- & - & - & 1 & \\ - &- & $c_{ij}$ & $t_{ij}$ & 0 & \\ - &- & - & $t_{ij}$ & 0 & \\ - &- & $c_{ij}$ & - & 0 & \\ - &- & - & - & 0 & \\ \hline yes&- & $c_{ij}$ & $t_{ij}$ & 1 & $c_{ij} = t_{ij}$, update \\ yes&- & - & $t_{ij}$ & 1 & $c_{ij} = t_{ij}$, insert \\ yes&- & $c_{ij}$ & - & 1 & delete $c_{ij}$ because $t_{ij}$ not present \\ yes&- & - & - & 1 & \\ yes&- & $c_{ij}$ & $t_{ij}$ & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&- & - & $t_{ij}$ & 0 & \\ yes&- & $c_{ij}$ & - & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&- & - & - & 0 & \\ \hline - &yes & $c_{ij}$ & $t_{ij}$ & 1 & $c_{ij} = c_{ij} \odot t_{ij}$, apply accumulator \\ - &yes & - & $t_{ij}$ & 1 & $c_{ij} = t_{ij}$, insert \\ - &yes & $c_{ij}$ & - & 1 & \\ - &yes & - & - & 1 & \\ - &yes & $c_{ij}$ & $t_{ij}$ & 0 & \\ - &yes & - & $t_{ij}$ & 0 & \\ - &yes & $c_{ij}$ & - & 0 & \\ - &yes & - & - & 0 & \\ \hline yes&yes & $c_{ij}$ & $t_{ij}$ & 1 & $c_{ij} = c_{ij} \odot t_{ij}$, apply accumulator \\ yes&yes & - & $t_{ij}$ & 1 & $c_{ij} = t_{ij}$, insert \\ yes&yes & $c_{ij}$ & - & 1 & \\ yes&yes & - & - & 1 & \\ yes&yes & $c_{ij}$ & $t_{ij}$ & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&yes & - & $t_{ij}$ & 0 & \\ yes&yes & $c_{ij}$ & - & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&yes & - & - & 0 & \\ \hline \end{tabular} } \caption{Results of the mask/accumulator phase \label{tab:maskaccum}} \end{table} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{SuiteSparse:GraphBLAS Options} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{options} SuiteSparse:GraphBLAS includes two type-generic methods, \verb'GxB_set' and \verb'GxB_get', that set and query various options and parameters settings, including a generic way to set values in the \verb'GrB_Descriptor' object. Using these methods, the user application can provide hints to SuiteSparse:GraphBLAS on how it should store and operate on its matrices. These hints have no effect on the results of any GraphBLAS operation (except perhaps floating-point roundoff differences), but they can have a great impact on the amount of time or memory taken. 
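As a brief illustration (a minimal sketch; the full list of fields appears
in the list and subsections below), a global option can be set and then
queried back:

{\footnotesize
\begin{verbatim}
// store all subsequently-created matrices by column, then confirm the setting
GxB_set (GxB_FORMAT, GxB_BY_COL) ;
GxB_Format_Value fmt ;
GxB_get (GxB_FORMAT, &fmt) ;      // fmt is now GxB_BY_COL
\end{verbatim}}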
\begin{itemize} \item \verb'GxB_set (field, value)' provides hints to SuiteSparse:GraphBLAS on how it should store all matrices created after calling this function: by row, by column. It provides hints as to when to use {\em hypersparse} \cite{BulucGilbert08,BulucGilbert12} or {\em bitmap} formats. These are global options that modify all matrices created after calling this method. The global settings also control the number of threads used, and the heuristic for selecting the number of threads for small problems (the ``chunk''). {\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GxB_HYPER_SWITCH' & \verb'double' & hypersparsity control (0 to 1) \\ \verb'GxB_BITMAP_SWITCH' & \verb'double [8]' & bitmap control \\ \verb'GxB_FORMAT' & \verb'int' & \verb'GxB_BY_ROW' or \verb'GxB_BY_COL' \\ \verb'GxB_GLOBAL_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_GLOBAL_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_BURBLE' & \verb'int' & diagnostic output \\ \verb'GxB_PRINTF' & see below & diagnostic output \\ \verb'GxB_FLUSH' & see below & diagnostic output \\ \verb'GxB_MEMORY_POOL' & \verb'int64_t [64]' & memory pool control \\ \end{tabular} } \item \verb'GxB_set (GrB_Matrix A, field, value)' provides hints to SuiteSparse: GraphBLAS on how to store a particular matrix. {\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GxB_HYPER_SWITCH' & \verb'double' & hypersparsity control (0 to 1) \\ \verb'GxB_BITMAP_SWITCH' & \verb'double' & bitmap control (0 to 1) \\ \verb'GxB_FORMAT' & \verb'int' & \verb'GxB_BY_ROW' or \verb'GxB_BY_COL' \\ \verb'GxB_SPARSITY_CONTROL' & \verb'int' & 0 to 15 \\ \end{tabular} } \item \verb'GxB_set (GrB_Vector v, field, value)' provides hints to SuiteSparse: GraphBLAS on how to store a particular vector. {\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GxB_BITMAP_SWITCH' & \verb'double' & bitmap control (0 to 1) \\ \verb'GxB_SPARSITY_CONTROL' & \verb'int' & 0 to 15 \\ \end{tabular} } \item \verb'GxB_set (GrB_Descriptor desc, field, value)' sets the value of a field in a \verb'GrB_Descriptor'. {\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GrB_OUTP' & \verb'GrB_Desc_field' & replace option \\ \verb'GrB_MASK' & \verb'GrB_Desc_field' & mask option \\ \verb'GrB_INP0' & \verb'GrB_Desc_field' & transpose input 0 \\ \verb'GrB_INP1' & \verb'GrB_Desc_field' & transpose input 1 \\ \verb'GxB_DESCRIPTOR_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_DESCRIPTOR_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_AxB_METHOD' & \verb'int' & method for matrix multiply \\ \verb'GxB_SORT' & \verb'int' & lazy vs aggressive sort \\ \end{tabular} } \end{itemize} The \verb'GxB_get' method queries a \verb'GrB_Descriptor', a \verb'GrB_Matrix', a \verb'GrB_Vector', or the global options. \begin{itemize} \item \verb'GxB_get (field, &value)' retrieves the value of a global option. 
{\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GxB_HYPER_SWITCH' & \verb'double' & hypersparsity control (0 to 1) \\ \verb'GxB_BITMAP_SWITCH' & \verb'double [8]' & bitmap control \\ \verb'GxB_FORMAT' & \verb'int' & \verb'GxB_BY_ROW' or \verb'GxB_BY_COL' \\ \verb'GxB_GLOBAL_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_GLOBAL_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_BURBLE' & \verb'int' & diagnostic output \\ \verb'GxB_PRINTF' & see below & diagnostic output \\ \verb'GxB_FLUSH' & see below & diagnostic output \\ \verb'GxB_MEMORY_POOL' & \verb'int64_t [64]' & memory pool control \\ \hline \verb'GxB_MODE' & \verb'int' & blocking/non-blocking \\ \verb'GxB_LIBRARY_NAME' & \verb'char *' & name of library \\ \verb'GxB_LIBRARY_VERSION' & \verb'int [3]' & library version \\ \verb'GxB_LIBRARY_DATE' & \verb'char *' & release date \\ \verb'GxB_LIBRARY_ABOUT' & \verb'char *' & about the library \\ \verb'GxB_LIBRARY_LICENSE' & \verb'char *' & license \\ \verb'GxB_LIBRARY_COMPILE_DATE' & \verb'char *' & date of compilation \\ \verb'GxB_LIBRARY_COMPILE_TIME' & \verb'char *' & time of compilation \\ \verb'GxB_LIBRARY_URL' & \verb'char *' & url of library \\ \verb'GxB_API_VERSION' & \verb'int [3]' & C API version \\ \verb'GxB_API_DATE' & \verb'char *' & C API date \\ \verb'GxB_API_ABOUT' & \verb'char *' & about the C API \\ \verb'GxB_API_URL' & \verb'char *' & \verb'http://graphblas.org' \\ \end{tabular} } \item \verb'GxB_get (GrB_Matrix A, field, &value)' retrieves the current value of an option from a particular matrix \verb'A'. {\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GxB_HYPER_SWITCH' & \verb'double' & hypersparsity control (0 to 1) \\ \verb'GxB_BITMAP_SWITCH' & \verb'double' & bitmap control (0 to 1) \\ \verb'GxB_FORMAT' & \verb'int' & \verb'GxB_BY_ROW' or \verb'GxB_BY_COL' \\ \verb'GxB_SPARSITY_CONTROL' & \verb'int' & 0 to 15 \\ \verb'GxB_SPARSITY_STATUS' & \verb'int' & 1, 2, 4, or 8 \\ \end{tabular} } \item \verb'GxB_get (GrB_Vector A, field, &value)' retrieves the current value of an option from a particular vector \verb'v'. {\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GxB_BITMAP_SWITCH' & \verb'double' & bitmap control (0 to 1) \\ \verb'GxB_FORMAT' & \verb'int' & \verb'GxB_BY_ROW' or \verb'GxB_BY_COL' \\ \verb'GxB_SPARSITY_CONTROL' & \verb'int' & 0 to 15 \\ \verb'GxB_SPARSITY_STATUS' & \verb'int' & 1, 2, 4, or 8 \\ \end{tabular} } \item \verb'GxB_get (GrB_Descriptor desc, field, &value)' retrieves the value of a field in a descriptor. 
{\footnotesize \begin{tabular}{lll} field & value & description \\ \hline \verb'GrB_OUTP' & \verb'GrB_Desc_field' & replace option \\ \verb'GrB_MASK' & \verb'GrB_Desc_field' & mask option \\ \verb'GrB_INP0' & \verb'GrB_Desc_field' & transpose input 0 \\ \verb'GrB_INP1' & \verb'GrB_Desc_field' & transpose input 1 \\ \verb'GxB_DESCRIPTOR_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_NTHREADS' & \verb'int' & number of threads to use \\ \verb'GxB_DESCRIPTOR_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_CHUNK' & \verb'double' & chunk size \\ \verb'GxB_AxB_METHOD' & \verb'int' & method for matrix multiply \\ \verb'GxB_SORT' & \verb'int' & lazy vs aggressive sort \\ \end{tabular} } \end{itemize} %------------------------------------------------------------------------------- \subsection{OpenMP parallelism} %------------------------------------------------------------------------------- \label{omp_parallelism} SuiteSparse:GraphBLAS is a parallel library, based on OpenMP. By default, all GraphBLAS operations will use up to the maximum number of threads specified by the \verb'omp_get_max_threads' OpenMP function. For small problems, GraphBLAS may choose to use fewer threads, using two parameters: the maximum number of threads to use (which may differ from the \verb'omp_get_max_threads' value), and a parameter called the \verb'chunk'. Suppose \verb'work' is a measure of the work an operation needs to perform (say the number of entries in the two input matrices for \verb'GrB_eWiseAdd'). No more than \verb'floor(work/chunk)' threads will be used (or one thread if the ratio is less than 1). The default \verb'chunk' value is 65,536, but this may change in future versions, or it may be modified when GraphBLAS is installed on a particular machine. Both parameters can be set in two ways: \begin{itemize} \item Globally: If the following methods are used, then all subsequent GraphBLAS operations will use these settings. Note the typecast, \verb'(double)' \verb'chunk'. This is necessary if a literal constant such as \verb'20000' is passed as this argument. The type of the constant must be \verb'double'. {\footnotesize \begin{verbatim} int nthreads_max = 40 ; GxB_set (GxB_NTHREADS, nthreads_max) ; GxB_set (GxB_CHUNK, (double) 20000) ; \end{verbatim} } \item Per operation: Most GraphBLAS operations take a \verb'GrB_Descriptor' input, and this can be modified to set the number of threads and chunk size for the operation that uses this descriptor. Note that \verb'chunk' is a \verb'double'. {\footnotesize \begin{verbatim} GrB_Descriptor desc ; GrB_Descriptor_new (&desc) int nthreads_max = 40 ; GxB_set (desc, GxB_NTHREADS, nthreads_max) ; double chunk = 20000 ; GxB_set (desc, GxB_CHUNK, chunk) ; \end{verbatim} } \end{itemize} The smaller of \verb'nthreads_max' and \verb'floor(work/chunk)' is used for any given GraphBLAS operation, except that a single thread is used if this value is zero or less. If either parameter is set to \verb'GxB_DEFAULT', then default values are used. The default for \verb'nthreads_max' is the return value from \verb'omp_get_max_threads', and the default chunk size is currently 65,536. If a descriptor value for either parameter is left at its default, or set to \verb'GxB_DEFAULT', then the global setting is used. This global setting may have been modified from its default, and this modified value will be used. For example, suppose \verb'omp_get_max_threads' reports 8 threads. 
If \newline \verb'GxB_set (GxB_NTHREADS, 4)' is used, then the global setting is four threads, not eight. If a descriptor is used but its \verb'GxB_NTHREADS' is not set, or set to \verb'GxB_DEFAULT', then any operation that uses this descriptor will use 4 threads. %------------------------------------------------------------------------------- \subsection{Storing a matrix by row or by column} %------------------------------------------------------------------------------- The GraphBLAS \verb'GrB_Matrix' is entirely opaque to the user application, and the GraphBLAS API does not specify how the matrix should be stored. However, choices made in how the matrix is represented in a particular implementation, such as SuiteSparse:GraphBLAS, can have a large impact on performance. Many graph algorithms are just as fast in any format, but some algorithms are much faster in one format or the other. For example, suppose the user application stores a directed graph as a matrix \verb'A', with the edge $(i,j)$ represented as the value \verb'A(i,j)', and the application makes many accesses to the $i$th row of the matrix, with \verb'GrB_Col_extract' \verb'(w,...,A,GrB_ALL,...,i,desc)' with the transposed descriptor (\verb'GrB_INP0' set to \verb'GrB_TRAN'). If the matrix is stored by column this can be extremely slow, just like the expression \verb'w=A(i,:)' in MATLAB, where \verb'i' is a scalar. Since this is a typical use-case in graph algorithms, the default format in SuiteSparse:GraphBLAS is to store its matrices by row, in Compressed Sparse Row format (CSR). MATLAB stores its sparse matrices by column, in ``non-hypersparse'' format, in what is called the Compressed Sparse Column format, or CSC for short. An \verb'm'-by-\verb'n' matrix in MATLAB is represented as a set of \verb'n' column vectors, each with a sorted list of row indices and values of the nonzero entries in that column. As a result, \verb'w=A(:,j)' is very fast in MATLAB, since the result is already held in the data structure a single list, the $j$th column vector. However, \verb'w=A(i,:)' is very slow in MATLAB, since every column in the matrix has to be searched to see if it contains row \verb'i'. In MATLAB, if many such accesses are made, it is much better to transpose the matrix (say \verb"AT=A'") and then use \verb"w=AT(:,i)" instead. This can have a dramatic impact on the performance of MATLAB. Likewise, if \verb'u' is a very sparse column vector and \verb'A' is stored by column, then \verb"w=u'*A" (via \verb'GrB_vxm') is slower than \verb'w=A*u' (via \verb'GrB_mxv'). The opposite is true if the matrix is stored by row. An example of this can be found in Section B.1 of Version 1.2 of the GraphBLAS API Specification, where the breadth-first search \verb'BFS' uses \verb'GrB_vxm' to compute \verb"q'=q'*A". This method is not fast if the matrix \verb'A' is stored by column. The \verb'bfs5' and \verb'bfs6' examples in the \verb'Demo/' folder of SuiteSparse:GraphBLAS use \verb'GrB_vxm', which is fast since the matrices are assumed to be stored in their default format, by row. SuiteSparse:GraphBLAS stores its sparse matrices by row, by default. In Versions 2.1 and earlier, the matrices were stored by column, by default. However, it can also be instructed to store any selected matrices, or all matrices, by column instead (just like MATLAB), so that \verb'w=A(:,j)' (via \verb'GrB_Col_extract') is very fast. The change in data format has no effect on the result, just the time and memory usage. 
To use a column-oriented format by default, the following can be done in a
user application that tends to access its matrices by column.

{\footnotesize
\begin{verbatim}
GrB_init (...) ;
// just after GrB_init: do the following:
#ifdef GxB_SUITESPARSE_GRAPHBLAS
GxB_set (GxB_FORMAT, GxB_BY_COL) ;
#endif
\end{verbatim} }

If this is done, and no other \verb'GxB_set' calls are made with
\verb'GxB_FORMAT', all matrices will be stored by column.  Alternatively,
SuiteSparse:GraphBLAS can be compiled with \verb'-DBYCOL', which changes the
default format to \verb'GxB_BY_COL', with no calls to any \verb'GxB_*'
function.  Otherwise, the default format is \verb'GxB_BY_ROW'.

%-------------------------------------------------------------------------------
\subsection{Hypersparse matrices}
\label{hypersparse}
%-------------------------------------------------------------------------------

MATLAB can store an \verb'm'-by-\verb'n' matrix with a very large value of
\verb'm', since a CSC data structure takes $O(n+|{\bf A}|)$ memory,
independent of \verb'm', where $|{\bf A}|$ is the number of nonzeros in the
matrix.  It cannot store a matrix with a huge \verb'n', and this structure
is also inefficient when $|{\bf A}|$ is much smaller than \verb'n'.  In
contrast, SuiteSparse:GraphBLAS can store its matrices in {\em hypersparse}
format, taking only $O(|{\bf A}|)$ memory, independent of how it is stored
(by row or by column) and independent of both \verb'm' and \verb'n'
\cite{BulucGilbert08,BulucGilbert12}.

In both the CSR and CSC formats, the matrix is held as a set of sparse
vectors.  In non-hypersparse format, the set of sparse vectors is itself
dense; all vectors are present, even if they are empty.  For example, an
\verb'm'-by-\verb'n' matrix in non-hypersparse CSC format contains \verb'n'
sparse vectors.  Each column vector takes at least one integer to represent,
even for a column with no entries.  This allows for quick lookup of a
particular vector, but the memory required is $O(n+|{\bf A}|)$.  With a
hypersparse CSC format, the set of vectors itself is sparse, and columns
with no entries take no memory at all.  The drawback of the hypersparse
format is that finding an arbitrary column vector \verb'j', such as for the
computation \verb'C=A(:,j)', takes $O(\log k)$ time if there are $k \le n$
vectors in the data structure.  One advantage of the hypersparse structure
is that the memory required for an \verb'm'-by-\verb'n' hypersparse CSC
matrix is only $O(|{\bf A}|)$, independent of \verb'm' and \verb'n'.
Algorithms that must visit all non-empty columns of a matrix are much faster
when working with hypersparse matrices, since empty columns can be skipped.

The \verb'hyper_switch' parameter controls the hypersparsity of the internal
data structure for a matrix.  The parameter is typically in the range 0 to
1.  The default is \verb'hyper_switch' = \verb'GxB_HYPER_DEFAULT', which is
an \verb'extern' \verb'const' \verb'double' value, currently set to 0.0625,
or 1/16.  This default ratio may change in the future.

The \verb'hyper_switch' determines how the matrix is converted between the
hypersparse and non-hypersparse formats.  Let $n$ be the number of columns
of a CSC matrix, or the number of rows of a CSR matrix.  The matrix can have
at most $n$ non-empty vectors.  Let $k$ be the actual number of non-empty
vectors.  That is, for the CSC format, $k \le n$ is the number of columns
that have at least one entry.  Let $h$ be the value of \verb'hyper_switch'.
If a matrix is currently hypersparse, it can be converted to non-hypersparse
if either condition $n \le 1$ or $k > 2nh$ holds, or both.  Otherwise, it
stays hypersparse.  Note that if $n \le 1$ the matrix is always stored as
non-hypersparse.

If currently non-hypersparse, it can be converted to hypersparse if both
conditions $n > 1$ and $k \le nh$ hold.  Otherwise, it stays non-hypersparse.
Note that if $n \le 1$ the matrix always remains non-hypersparse.

The default value of \verb'hyper_switch' is assigned at startup by
\verb'GrB_init', and can then be modified globally with \verb'GxB_set'.  All
new matrices are created with the same \verb'hyper_switch', determined by
the global value.  Once a particular matrix \verb'A' has been constructed,
its hypersparsity ratio can be modified from the default with:

{\footnotesize
\begin{verbatim}
double hyper_switch = 0.2 ;
GxB_set (A, GxB_HYPER_SWITCH, hyper_switch) ;
\end{verbatim}}

To force a matrix to always be non-hypersparse, use \verb'hyper_switch'
equal to \verb'GxB_NEVER_HYPER'.  To force a matrix to always stay
hypersparse, set \verb'hyper_switch' to \verb'GxB_ALWAYS_HYPER'.

A \verb'GrB_Matrix' can thus be held in one of four formats: any combination
of hyper/non-hyper and CSR/CSC.  All \verb'GrB_Vector' objects are always
stored in non-hypersparse CSC format.

A new matrix created via \verb'GrB_Matrix_new' starts with $k=0$ and is
created in hypersparse form by default unless $n \le 1$ or $h<0$, where $h$
is the global \verb'hyper_switch' value.  The matrix is created in either
\verb'GxB_BY_ROW' or \verb'GxB_BY_COL' format, as determined by the last
call to \verb'GxB_set(GxB_FORMAT,...)' or \verb'GrB_init'.

A new matrix \verb'C' created via \verb'GrB_dup (&C,A)' inherits the CSR/CSC
format, hypersparsity format, and \verb'hyper_switch' from \verb'A'.

%-------------------------------------------------------------------------------
\subsection{Bitmap matrices}
\label{bitmap_switch}
%-------------------------------------------------------------------------------

By default, SuiteSparse:GraphBLAS switches between all four formats
(hypersparse, sparse, bitmap, and full) automatically.  Let $d = |{\bf A}|/mn$
for an $m$-by-$n$ matrix $\bf A$ with $|{\bf A}|$ entries.  If the matrix is
currently in sparse or hypersparse format, and is modified so that $d$
exceeds a given threshold, it is converted into bitmap format.  The default
threshold is controlled by the \verb'GxB_BITMAP_SWITCH' setting, which can
be set globally, or for a particular matrix or vector.

The default value of the switch to bitmap format depends on $\min(m,n)$, for
a matrix of size $m$-by-$n$.  For the global setting, the bitmap switch is a
\verb'double' array of size \verb'GxB_NBITMAP_SWITCH'.  The defaults are
given below:

\vspace{0.2in}
{\small
\begin{tabular}{lll}
parameter & default & matrix sizes \\
\hline
\verb'bitmap_switch [0]' & 0.04 & $\min(m,n) = 1$ (and all vectors) \\
\verb'bitmap_switch [1]' & 0.05 & $\min(m,n) = 2$ \\
\verb'bitmap_switch [2]' & 0.06 & $\min(m,n) = 3$ to 4 \\
\verb'bitmap_switch [3]' & 0.08 & $\min(m,n) = 5$ to 8 \\
\verb'bitmap_switch [4]' & 0.10 & $\min(m,n) = 9$ to 16\\
\verb'bitmap_switch [5]' & 0.20 & $\min(m,n) = 17$ to 32\\
\verb'bitmap_switch [6]' & 0.30 & $\min(m,n) = 33$ to 64 \\
\verb'bitmap_switch [7]' & 0.40 & $\min(m,n) > 64$ \\
\end{tabular} }
\vspace{0.2in}

That is, by default a \verb'GrB_Vector' is held in bitmap format if its
density exceeds 4\%.
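For example, under these defaults, a hypothetical 1000-by-1000 matrix has
$\min(m,n) > 64$, so its bitmap switch is 0.40: once more than 40\% of its
$10^6$ entries are present (more than 400{,}000 entries), it is converted to
bitmap format.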
To change the global settings, do the following:

{\footnotesize
\begin{verbatim}
double bswitch [GxB_NBITMAP_SWITCH] =
    { 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 } ;
GxB_set (GxB_BITMAP_SWITCH, bswitch) ;
\end{verbatim} }

If the matrix is currently in bitmap format, it is converted to full if all
entries are present, or to sparse/hypersparse if $d$ drops below $b/2$,
where $b$ is its bitmap switch.  A matrix or vector with $d$ between $b/2$
and $b$ remains in its current format.

%-------------------------------------------------------------------------------
\subsection{Parameter types}
%-------------------------------------------------------------------------------

The \verb'GxB_Option_Field' enumerated type gives the type of the
\verb'field' parameter of \verb'GxB_set' and \verb'GxB_get', for setting
global options or matrix options.

{\footnotesize
\begin{verbatim}
typedef enum
{
    // for matrix/vector get/set and global get/set:
    GxB_HYPER_SWITCH = 0,       // defines switch to hypersparse (double value)
    GxB_BITMAP_SWITCH = 34,     // defines switch to bitmap (double value)
    GxB_FORMAT = 1,             // defines CSR/CSC format: GxB_BY_ROW or GxB_BY_COL
    GxB_SPARSITY_CONTROL = 32,  // control the sparsity of a matrix or vector

    // for global get/set only:
    GxB_GLOBAL_NTHREADS = GxB_NTHREADS,  // max number of threads to use
    GxB_GLOBAL_CHUNK = GxB_CHUNK,        // chunk size for small problems
    GxB_BURBLE = 99,            // diagnostic output
    GxB_PRINTF = 101,           // printf function for diagnostic output
    GxB_FLUSH = 102,            // flush function for diagnostic output

    // for matrix/vector get only:
    GxB_SPARSITY_STATUS = 33,   // query the sparsity of a matrix or vector

    // for global get only:
    GxB_MODE = 2,               // mode passed to GrB_init (blocking or non-blocking)
    GxB_LIBRARY_NAME = 8,       // name of the library (char *)
    GxB_LIBRARY_VERSION = 9,    // library version (3 int's)
    GxB_LIBRARY_DATE = 10,      // date of the library (char *)
    GxB_LIBRARY_ABOUT = 11,     // about the library (char *)
    GxB_LIBRARY_URL = 12,       // URL for the library (char *)
    GxB_LIBRARY_LICENSE = 13,   // license of the library (char *)
    GxB_LIBRARY_COMPILE_DATE = 14,   // date library was compiled (char *)
    GxB_LIBRARY_COMPILE_TIME = 15,   // time library was compiled (char *)
    GxB_API_VERSION = 16,       // API version (3 int's)
    GxB_API_DATE = 17,          // date of the API (char *)
    GxB_API_ABOUT = 18,         // about the API (char *)
    GxB_API_URL = 19,           // URL for the API (char *)
}
GxB_Option_Field ;
\end{verbatim} }

The \verb'GxB_FORMAT' field can be by row or by column, set to a value with
the type \verb'GxB_Format_Value':

{\footnotesize
\begin{verbatim}
typedef enum
{
    GxB_BY_ROW = 0,     // CSR: compressed sparse row format
    GxB_BY_COL = 1      // CSC: compressed sparse column format
}
GxB_Format_Value ;
\end{verbatim} }

The default format (in SuiteSparse:GraphBLAS Version 2.2 and later) is by
row.  The format in SuiteSparse:GraphBLAS Version 2.1 and earlier was by
column, just like MATLAB.  The default format is given by the predefined
value \verb'GxB_FORMAT_DEFAULT', which is equal to \verb'GxB_BY_ROW' if
default compile-time options are used.  To change the default at compile
time to \verb'GxB_BY_COL', compile the SuiteSparse:GraphBLAS library with
\verb'-DBYCOL'.  This changes \verb'GxB_FORMAT_DEFAULT' to \verb'GxB_BY_COL'.

The default hypersparsity ratio is 0.0625 (1/16), but this value may change
in the future.  Setting the \verb'GxB_HYPER_SWITCH' field to
\verb'GxB_ALWAYS_HYPER' ensures a matrix always stays hypersparse.  If set
to \verb'GxB_NEVER_HYPER', it always stays non-hypersparse.
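As a minimal sketch (assuming \verb'A' is an existing \verb'GrB_Matrix'),
the hypersparsity of a single matrix can be pinned as follows:

{\footnotesize
\begin{verbatim}
// keep A hypersparse, regardless of how many non-empty vectors it has:
GxB_set (A, GxB_HYPER_SWITCH, GxB_ALWAYS_HYPER) ;
// or, alternatively, prevent A from ever becoming hypersparse:
GxB_set (A, GxB_HYPER_SWITCH, GxB_NEVER_HYPER) ;
\end{verbatim}}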
At startup, \verb'GrB_init' defines the following initial settings:

{\footnotesize
\begin{verbatim}
GxB_set (GxB_HYPER_SWITCH, GxB_HYPER_DEFAULT) ;
GxB_set (GxB_FORMAT, GxB_FORMAT_DEFAULT) ;
\end{verbatim} }

That is, by default, all new matrices are held by row in CSR format, unless
\verb'-DBYCOL' is used at compile time, in which case the default is to
store all new matrices by column in CSC format.  If a matrix has fewer than
$n/16$ non-empty columns, it can be converted to hypersparse format.  If it
has more than $n/8$ non-empty columns, it can be converted to
non-hypersparse format.

These options can be changed for all future matrices with \verb'GxB_set'.
For example, to change all future matrices to be in non-hypersparse CSC when
created, use:

{\footnotesize
\begin{verbatim}
GxB_set (GxB_HYPER_SWITCH, GxB_NEVER_HYPER) ;
GxB_set (GxB_FORMAT, GxB_BY_COL) ;
\end{verbatim} }

If a particular matrix needs a different format, then (as an example):

{\footnotesize
\begin{verbatim}
GxB_set (A, GxB_HYPER_SWITCH, 0.1) ;
GxB_set (A, GxB_FORMAT, GxB_BY_ROW) ;
\end{verbatim} }

This changes the matrix \verb'A' so that it is stored by row, and it is
converted from non-hypersparse to hypersparse format if it has fewer than
10\% non-empty columns.  If it is hypersparse, it is a candidate for
conversion to non-hypersparse if it has 20\% or more non-empty columns.  If
it has between 10\% and 20\% non-empty columns, it remains in its current
format.

MATLAB only supports a non-hypersparse CSC format.  The format in
SuiteSparse:GraphBLAS that is equivalent to the MATLAB format is:

{\footnotesize
\begin{verbatim}
GrB_init (...) ;
GxB_set (GxB_HYPER_SWITCH, GxB_NEVER_HYPER) ;
GxB_set (GxB_FORMAT, GxB_BY_COL) ;
// no subsequent use of GxB_HYPER_SWITCH or GxB_FORMAT
\end{verbatim} }

The \verb'GxB_HYPER_SWITCH' and \verb'GxB_FORMAT' options should be
considered as suggestions from the user application as to how
SuiteSparse:GraphBLAS can obtain the best performance for a particular
application.  SuiteSparse:GraphBLAS is free to ignore any of these
suggestions, both now and in the future, and the available options and
formats may be augmented in the future.  Any prior options no longer needed
in future versions of SuiteSparse:GraphBLAS will be silently ignored, so the
use of these options is safe for future updates.

The sparsity status of a matrix can be queried with the following, which
returns a value of \verb'GxB_HYPERSPARSE', \verb'GxB_SPARSE',
\verb'GxB_BITMAP', or \verb'GxB_FULL'.

{\footnotesize
\begin{verbatim}
int sparsity ;
GxB_get (A, GxB_SPARSITY_STATUS, &sparsity) ;
\end{verbatim}}

The sparsity format of a matrix can be controlled with \verb'GxB_set'; the
control value can be any mix (a sum or bitwise OR) of
\verb'GxB_HYPERSPARSE', \verb'GxB_SPARSE', \verb'GxB_BITMAP', and
\verb'GxB_FULL'.  By default, a matrix or vector can be held in any format,
with the default setting \verb'GxB_AUTO_SPARSITY', which is equal to
\verb'GxB_HYPERSPARSE' + \verb'GxB_SPARSE' + \verb'GxB_BITMAP' +
\verb'GxB_FULL'.  To enable a matrix to take on just \verb'GxB_SPARSE' or
\verb'GxB_FULL' formats, but not \verb'GxB_HYPERSPARSE' or \verb'GxB_BITMAP',
for example, use the following:

{\footnotesize
\begin{verbatim}
GxB_set (A, GxB_SPARSITY_CONTROL, GxB_SPARSE + GxB_FULL) ;
\end{verbatim}}

In this case, SuiteSparse:GraphBLAS will hold the matrix in sparse format
(CSR or CSC, depending on its \verb'GxB_FORMAT'), unless all entries are
present, in which case it will be converted to full format.
Only the least 4 bits of the sparsity control are considered, so the formats
can be bitwise negated.  For example, to allow for any format except full:

{\footnotesize
\begin{verbatim}
GxB_set (A, GxB_SPARSITY_CONTROL, ~GxB_FULL) ;
\end{verbatim}}

%-------------------------------------------------------------------------------
\subsection{{\sf GxB\_BURBLE}, {\sf GxB\_PRINTF}, {\sf GxB\_FLUSH}: diagnostics}
%-------------------------------------------------------------------------------

\verb'GxB_set (GxB_BURBLE, ...)' controls the burble setting.  It can also
be controlled via \verb'GrB.burble(b)' in the MATLAB interface.

{\footnotesize
\begin{verbatim}
GxB_set (GxB_BURBLE, true) ;    // enable burble
GxB_set (GxB_BURBLE, false) ;   // disable burble
\end{verbatim}}

If enabled, SuiteSparse:GraphBLAS reports which internal kernels it uses,
and how much time is spent.  If you see the word \verb'generic', it means
that SuiteSparse:GraphBLAS was unable to use its faster kernels in
\verb'Source/Generated', but used a generic kernel that relies on function
pointers.  This is done for user-defined types and operators, and when
typecasting is performed, and it is typically slower than the kernels in
\verb'Source/Generated'.  If you see a lot of \verb'wait' statements, it may
mean that a lot of time is spent finishing a matrix or vector.  This may be
the result of an inefficient use of the \verb'setElement' and \verb'assign'
methods.  If this occurs, you might try changing the sparsity format of a
vector or matrix to \verb'GxB_BITMAP', assuming there's enough space for it.

\verb'GxB_set (GxB_PRINTF, printf)' allows the user application to change
the function used to print diagnostic output.  This also controls the output
of the \verb'GxB_*print' functions.  By default this parameter is
\verb'NULL', in which case the ANSI C11 \verb'printf' function is used.  The
parameter is a function pointer with the same signature as the ANSI C11
\verb'printf' function.  The MATLAB interface to GraphBLAS uses the
following so that GraphBLAS can print to the MATLAB Command Window:

{\footnotesize
\begin{verbatim}
GxB_set (GxB_PRINTF, mexPrintf) ;
\end{verbatim}}

After each call to the \verb'printf' function, an optional \verb'flush'
function is called, which is \verb'NULL' by default.  If \verb'NULL', the
function is not used.  This can be changed with
\verb'GxB_set (GxB_FLUSH, flush)'.  The \verb'flush' function takes no
arguments, and returns an \verb'int' which is 0 if successful, or any
nonzero value on failure (the same output as the ANSI C11 \verb'fflush'
function, except that \verb'flush' has no inputs).

%-------------------------------------------------------------------------------
\subsection{Other global options}
%-------------------------------------------------------------------------------

\verb'GxB_MODE' can only be queried by \verb'GxB_get'; it cannot be modified
by \verb'GxB_set'.  The mode is the value passed to \verb'GrB_init'
(blocking or non-blocking).

All threads in the same user application share the same global options,
including hypersparsity, bitmap options, and CSR/CSC format determined by
\verb'GxB_set', and the blocking mode determined by \verb'GrB_init'.  The
format and hypersparsity parameters of each matrix are specific to that
matrix and can be independently changed.

The \verb'GxB_LIBRARY_*' options can be used with \verb'GxB_get' to query
the current implementation.
For all of these, \verb'GxB_get' returns a string (\verb'char *'), except for \verb'GxB_LIBRARY_VERSION', which takes as input an \verb'int' array of size three. The \verb'GxB_API_*' options can be used with \verb'GxB_get' to query the current GraphBLAS C API Specification. For all of these, \verb'GxB_get' returns a string (\verb'char *'), except for \verb'GxB_API_VERSION', which takes as input an \verb'int' array of size three. \newpage %=============================================================================== \subsection{{\sf GxB\_Global\_Option\_set:} set a global option} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_set // set a global default option ( const GxB_Option_Field field, // option to change ... // value to change it to ) ; \end{verbatim} } \end{mdframed} This usage of \verb'GxB_set' sets the value of a global option. The \verb'field' parameter can be \verb'GxB_HYPER_SWITCH', \verb'GxB_BITMAP_SWITCH', \verb'GxB_FORMAT', \verb'GxB_NTHREADS', \verb'GxB_CHUNK', \verb'GxB_BURBLE', \verb'GxB_PRINTF', \verb'GxB_FLUSH', or \verb'GxB_MEMORY_POOL'. For example, the following usage sets the global hypersparsity ratio to 0.2, the format of future matrices to \verb'GxB_BY_COL', the maximum number of threads to 4, the chunk size to 10000, and enables the burble. No existing matrices are changed. {\footnotesize \begin{verbatim} GxB_set (GxB_HYPER_SWITCH, 0.2) ; GxB_set (GxB_FORMAT, GxB_BY_COL) ; GxB_set (GxB_NTHREADS, 4) ; GxB_set (GxB_CHUNK, (double) 10000) ; GxB_set (GxB_BURBLE, true) ; GxB_set (GxB_PRINTF, mexPrintf) ; \end{verbatim} } The memory pool parameter sets an upper bound on the number of freed blocks of memory that SuiteSparse:GraphBLAS keeps in its internal memory pool for future allocations. \verb'free_pool_limit' is an \verb'int64_t' array of size 64, and \verb'free_pool_limit [k]' is the upper bound on the number of blocks of size $2^k$ that are kept in the pool. Passing in a \verb'NULL' pointer sets the defaults. Passing in an array of size 64 whose entries are all zero disables the memory pool entirely. %=============================================================================== \subsection{{\sf GxB\_Matrix\_Option\_set:} set a matrix option} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_set // set an option in a matrix ( GrB_Matrix A, // matrix to modify const GxB_Option_Field field, // option to change ... // value to change it to ) ; \end{verbatim} } \end{mdframed} This usage of \verb'GxB_set' sets the value of a matrix option, for a particular matrix. The \verb'field' parameter can be \verb'GxB_HYPER_SWITCH', \verb'GxB_BITMAP_SWITCH', \verb'GxB_SPARSITY_CONTROL', or \verb'GxB_FORMAT'. For example, the following usage sets the hypersparsity ratio to 0.2, and the format of \verb'GxB_BY_COL', for a particular matrix \verb'A', and sets the sparsity control to \verb'GxB_SPARSE+GxB_FULL' (allowing the matrix to be held in CSC or FullC formats, but not BitmapC or HyperCSC). SuiteSparse:GraphBLAS currently applies these changes immediately, but since they are simply hints, future versions of SuiteSparse:GraphBLAS may delay the change in format if it can obtain better performance. If the setting is just \verb'GxB_FULL' and some entries are missing, then the matrix is held in bitmap format. 
{\footnotesize \begin{verbatim} GxB_set (A, GxB_HYPER_SWITCH, 0.2) ; GxB_set (A, GxB_FORMAT, GxB_BY_COL) ; GxB_set (A, GxB_SPARSITY_CONTROL, GxB_SPARSE + GxB_FULL) ; \end{verbatim} } For performance, the matrix option should be set as soon as it is created with \verb'GrB_Matrix_new', so the internal transformation takes less time. If an error occurs, \verb'GrB_error(&err,A)' returns details about the error. %=============================================================================== \subsection{{\sf GxB\_Desc\_set:} set a {\sf GrB\_Descriptor} value} %=============================================================================== \label{gxbset} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_set // set a parameter in a descriptor ( GrB_Descriptor desc, // descriptor to modify const GrB_Desc_Field field, // parameter to change ... // value to change it to ) ; \end{verbatim} } \end{mdframed} This usage is similar to \verb'GrB_Descriptor_set', just with a name that is consistent with the other usages of this generic function. Unlike \verb'GrB_Descriptor_set', the \verb'field' may also be \verb'GxB_NTHREADS', \verb'GxB_CHUNK', or \verb'GxB_SORT'. Refer to Sections~\ref{descriptor_set}~and~\ref{desc_set} for details. If an error occurs, \verb'GrB_error(&err,desc)' returns details about the error. \newpage %=============================================================================== \subsection{{\sf GxB\_Global\_Option\_get:} retrieve a global option} %=============================================================================== \label{gxbget} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_get // gets the current global default option ( const GxB_Option_Field field, // option to query ... // return value of the global option ) ; \end{verbatim} } \end{mdframed} This usage of \verb'GxB_get' retrieves the value of a global option. The \verb'field' parameter can be one of the following: \vspace{0.2in} {\footnotesize \begin{tabular}{ll} \hline \verb'GxB_HYPER_SWITCH' & sparse/hyper setting \\ \verb'GxB_BITMAP_SWITCH' & bitmap/sparse setting \\ \verb'GxB_FORMAT' & by row/col setting \\ \verb'GxB_MODE' & blocking / non-blocking \\ \verb'GxB_NTHREADS' & default number of threads \\ \verb'GxB_CHUNK' & default chunk size \\ \verb'GxB_BURBLE' & burble setting \\ \verb'GxB_PRINTF' & printf function \\ \verb'GxB_FLUSH' & flush function \\ \verb'GxB_MEMORY_POOL' & memory pool control \\ \hline \verb'GxB_LIBRARY_NAME' & the string \verb'"SuiteSparse:GraphBLAS"' \\ \verb'GxB_LIBRARY_VERSION' & \verb'int' array of size 3 \\ \verb'GxB_LIBRARY_DATE' & date of release \\ \verb'GxB_LIBRARY_ABOUT' & author, copyright \\ \verb'GxB_LIBRARY_LICENSE' & license for the library \\ \verb'GxB_LIBRARY_COMPILE_DATE' & date of compilation \\ \verb'GxB_LIBRARY_COMPILE_TIME' & time of compilation \\ \verb'GxB_LIBRARY_URL' & URL of the library \\ \hline \verb'GxB_API_VERSION' & GraphBLAS C API Specification Version \\ \verb'GxB_API_DATE' & date of the C API Spec. \\ \verb'GxB_API_ABOUT' & about of the C API Spec. 
\\
\verb'GxB_API_URL'              & URL of the spec \\
\hline
\end{tabular}
}
\vspace{0.2in}

For example:

{\footnotesize
\begin{verbatim}
double h ;
GxB_get (GxB_HYPER_SWITCH, &h) ;
printf ("hyper_switch = %g for all new matrices\n", h) ;

double b [GxB_NBITMAP_SWITCH] ;
GxB_get (GxB_BITMAP_SWITCH, b) ;
for (int k = 0 ; k < GxB_NBITMAP_SWITCH ; k++)
{
    printf ("bitmap_switch [%d] = %g ", k, b [k]) ;
    if (k == 0)
    {
        printf ("for vectors and matrices with 1 row or column\n") ;
    }
    else if (k == GxB_NBITMAP_SWITCH - 1)
    {
        printf ("for matrices with min dimension > %d\n", 1 << (k-1)) ;
    }
    else
    {
        printf ("for matrices with min dimension %d to %d\n",
            (1 << (k-1)) + 1, 1 << k) ;
    }
}

GxB_Format_Value s ;
GxB_get (GxB_FORMAT, &s) ;
if (s == GxB_BY_COL) printf ("all new matrices are stored by column\n") ;
else printf ("all new matrices are stored by row\n") ;

GrB_Mode mode ;
GxB_get (GxB_MODE, &mode) ;
if (mode == GrB_BLOCKING) printf ("GrB_init(GrB_BLOCKING) was called.\n") ;
else printf ("GrB_init(GrB_NONBLOCKING) was called.\n") ;

int nthreads_max ;
GxB_get (GxB_NTHREADS, &nthreads_max) ;
printf ("max # of threads to use: %d\n", nthreads_max) ;

double chunk ;
GxB_get (GxB_CHUNK, &chunk) ;
printf ("chunk size: %g\n", chunk) ;

int64_t free_pool_limit [64] ;
GxB_get (GxB_MEMORY_POOL, free_pool_limit) ;
for (int k = 0 ; k < 64 ; k++)
    printf ("pool %d: limit %ld\n", k, free_pool_limit [k]) ;

char *name ;
int ver [3] ;
GxB_get (GxB_LIBRARY_NAME, &name) ;
GxB_get (GxB_LIBRARY_VERSION, ver) ;
printf ("Library %s, version %d.%d.%d\n", name, ver [0], ver [1], ver [2]) ;
\end{verbatim} }

\newpage
%===============================================================================
\subsection{{\sf GxB\_Matrix\_Option\_get:} retrieve a matrix option}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_get                    // gets the current option of a matrix
(
    GrB_Matrix A,                   // matrix to query
    GxB_Option_Field field,         // option to query
    ...                             // return value of the matrix option
) ;
\end{verbatim}
} \end{mdframed}

This usage of \verb'GxB_get' retrieves the value of a matrix option.  The
\verb'field' parameter can be \verb'GxB_HYPER_SWITCH',
\verb'GxB_BITMAP_SWITCH', \verb'GxB_SPARSITY_CONTROL',
\verb'GxB_SPARSITY_STATUS', or \verb'GxB_FORMAT'.  For example:

\vspace{-0.1in}
{\footnotesize
\begin{verbatim}
double h, b ;
int sparsity, scontrol ;
GxB_get (A, GxB_SPARSITY_STATUS, &sparsity) ;
GxB_get (A, GxB_HYPER_SWITCH, &h) ;
printf ("matrix A has hyper_switch = %g\n", h) ;
GxB_get (A, GxB_BITMAP_SWITCH, &b) ;
printf ("matrix A has bitmap_switch = %g\n", b) ;
switch (sparsity)
{
    case GxB_HYPERSPARSE: printf ("matrix A is hypersparse\n") ; break ;
    case GxB_SPARSE:      printf ("matrix A is sparse\n" ) ; break ;
    case GxB_BITMAP:      printf ("matrix A is bitmap\n" ) ; break ;
    case GxB_FULL:        printf ("matrix A is full\n" ) ; break ;
}
GxB_Format_Value s ;
GxB_get (A, GxB_FORMAT, &s) ;
printf ("matrix A is stored by %s\n", (s == GxB_BY_COL) ?
"col" : "row") ; GxB_get (A, GxB_SPARSITY_CONTROL, &scontrol) ; if (scontrol & GxB_HYPERSPARSE) printf ("A may become hypersparse\n") ; if (scontrol & GxB_SPARSE ) printf ("A may become sparse\n") ; if (scontrol & GxB_BITMAP ) printf ("A may become bitmap\n") ; if (scontrol & GxB_FULL ) printf ("A may become full\n") ; \end{verbatim} } \newpage %=============================================================================== \subsection{{\sf GxB\_Desc\_get:} retrieve a {\sf GrB\_Descriptor} value} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_get // get a parameter from a descriptor ( GrB_Descriptor desc, // descriptor to query; NULL means defaults GrB_Desc_Field field, // parameter to query ... // value of the parameter ) ; \end{verbatim} } \end{mdframed} This usage is the same as \verb'GxB_Desc_get'. The \verb'field' parameter can be \verb'GrB_OUTP', \verb'GrB_MASK', \verb'GrB_INP0', \verb'GrB_INP1', \verb'GxB_AxB_METHOD', \verb'GxB_NTHREADS', \verb'GxB_CHUNK', or \verb'GxB_SORT'. Refer to Section~\ref{desc_get} for details. %=============================================================================== \subsection{Summary of usage of {\sf GxB\_set} and {\sf GxB\_get}} %=============================================================================== The different usages of \verb'GxB_set' and \verb'GxB_get' are summarized below. \noindent To set/get the global options: {\footnotesize \begin{verbatim} GxB_set (GxB_HYPER_SWITCH, double h) ; GxB_set (GxB_HYPER_SWITCH, GxB_ALWAYS_HYPER) ; GxB_set (GxB_HYPER_SWITCH, GxB_NEVER_HYPER) ; GxB_get (GxB_HYPER_SWITCH, double *h) ; double b [GxB_NBITMAP_SWITCH] ; GxB_set (GxB_BITMAP_SWITCH, b) ; GxB_set (GxB_BITMAP_SWITCH, NULL) ; // set defaults GxB_get (GxB_BITMAP_SWITCH, b) ; GxB_set (GxB_FORMAT, GxB_BY_ROW) ; GxB_set (GxB_FORMAT, GxB_BY_COL) ; GxB_get (GxB_FORMAT, GxB_Format_Value *s) ; GxB_set (GxB_NTHREADS, int nthreads_max) ; GxB_get (GxB_NTHREADS, int *nthreads_max) ; GxB_set (GxB_CHUNK, double chunk) ; GxB_get (GxB_CHUNK, double *chunk) ; GxB_set (GxB_BURBLE, bool burble) ; GxB_get (GxB_BURBLE, bool *burble) ; GxB_set (GxB_PRINTF, void *printf_function) ; GxB_get (GxB_PRINTF, void **printf_function) ; GxB_set (GxB_FLUSH, void *flush_function) ; GxB_get (GxB_FLUSH, void **flush_function) ; int64_t free_pool_limit [64] ; GxB_set (GxB_MEMORY_POOL, free_pool_limit) ; GxB_set (GxB_MEMORY_POOL, NULL) ; // set defaults GxB_get (GxB_MEMORY_POOL, free_pool_limit) ; \end{verbatim} } \noindent To get global options that can be queried but not modified: {\footnotesize \begin{verbatim} GxB_get (GxB_MODE, GrB_Mode *mode) ; GxB_get (GxB_LIBRARY_NAME, char **) ; GxB_get (GxB_LIBRARY_VERSION, int *) ; GxB_get (GxB_LIBRARY_DATE, char **) ; GxB_get (GxB_LIBRARY_ABOUT, char **) ; GxB_get (GxB_LIBRARY_LICENSE, char **) ; GxB_get (GxB_LIBRARY_COMPILE_DATE, char **) ; GxB_get (GxB_LIBRARY_COMPILE_TIME, char **) ; GxB_get (GxB_LIBRARY_URL, char **) ; GxB_get (GxB_API_VERSION, int *) ; GxB_get (GxB_API_DATE, char **) ; GxB_get (GxB_API_ABOUT, char **) ; GxB_get (GxB_API_URL, char **) ; \end{verbatim} } \noindent To set/get a matrix option or status {\footnotesize \begin{verbatim} GxB_set (GrB_Matrix A, GxB_HYPER_SWITCH, double h) ; GxB_set (GrB_Matrix A, GxB_HYPER_SWITCH, GxB_ALWAYS_HYPER) ; GxB_set (GrB_Matrix A, GxB_HYPER_SWITCH, GxB_NEVER_HYPER) ; GxB_get (GrB_Matrix A, GxB_HYPER_SWITCH, double *h) ; GxB_set (GrB_Matrix A, GxB_BITMAP_SWITCH, double 
b) ; GxB_get (GrB_Matrix A, GxB_BITMAP_SWITCH, double *b) ; GxB_set (GrB_Matrix A, GxB_FORMAT, GxB_BY_ROW) ; GxB_set (GrB_Matrix A, GxB_FORMAT, GxB_BY_COL) ; GxB_get (GrB_Matrix A, GxB_FORMAT, GxB_Format_Value *s) ; GxB_set (GrB_Matrix A, GxB_SPARSITY_CONTROL, GxB_AUTO_SPARSITY) ; GxB_set (GrB_Matrix A, GxB_SPARSITY_CONTROL, scontrol) ; GxB_get (GrB_Matrix A, GxB_SPARSITY_CONTROL, int *scontrol) ; GxB_get (GrB_Matrix A, GxB_SPARSITY_STATUS, int *sparsity) ; \end{verbatim} } \noindent To set/get a vector option or status: {\footnotesize \begin{verbatim} GxB_set (GrB_Vector v, GxB_BITMAP_SWITCH, double b) ; GxB_get (GrB_Vector v, GxB_BITMAP_SWITCH, double *b) ; GxB_set (GrB_Vector v, GxB_FORMAT, GxB_BY_ROW) ; GxB_set (GrB_Vector v, GxB_FORMAT, GxB_BY_COL) ; GxB_get (GrB_Vector v, GxB_FORMAT, GxB_Format_Value *s) ; GxB_set (GrB_Vector v, GxB_SPARSITY_CONTROL, GxB_AUTO_SPARSITY) ; GxB_set (GrB_Vector v, GxB_SPARSITY_CONTROL, scontrol) ; GxB_get (GrB_Vector v, GxB_SPARSITY_CONTROL, int *scontrol) ; GxB_get (GrB_Vector v, GxB_SPARSITY_STATUS, int *sparsity) ; \end{verbatim} } \noindent To set/get a descriptor field: {\footnotesize \begin{verbatim} GxB_set (GrB_Descriptor d, GrB_OUTP, GxB_DEFAULT) ; GxB_set (GrB_Descriptor d, GrB_OUTP, GrB_REPLACE) ; GxB_get (GrB_Descriptor d, GrB_OUTP, GrB_Desc_Value *v) ; GxB_set (GrB_Descriptor d, GrB_MASK, GxB_DEFAULT) ; GxB_set (GrB_Descriptor d, GrB_MASK, GrB_COMP) ; GxB_set (GrB_Descriptor d, GrB_MASK, GrB_STRUCTURE) ; GxB_set (GrB_Descriptor d, GrB_MASK, GrB_COMP+GrB_STRUCTURE) ; GxB_get (GrB_Descriptor d, GrB_MASK, GrB_Desc_Value *v) ; GxB_set (GrB_Descriptor d, GrB_INP0, GxB_DEFAULT) ; GxB_set (GrB_Descriptor d, GrB_INP0, GrB_TRAN) ; GxB_get (GrB_Descriptor d, GrB_INP0, GrB_Desc_Value *v) ; GxB_set (GrB_Descriptor d, GrB_INP1, GxB_DEFAULT) ; GxB_set (GrB_Descriptor d, GrB_INP1, GrB_TRAN) ; GxB_get (GrB_Descriptor d, GrB_INP1, GrB_Desc_Value *v) ; GxB_set (GrB_Descriptor d, GxB_AxB_METHOD, GxB_DEFAULT) ; GxB_set (GrB_Descriptor d, GxB_AxB_METHOD, GxB_AxB_GUSTAVSON) ; GxB_set (GrB_Descriptor d, GxB_AxB_METHOD, GxB_AxB_HASH) ; GxB_set (GrB_Descriptor d, GxB_AxB_METHOD, GxB_AxB_SAXPY) ; GxB_set (GrB_Descriptor d, GxB_AxB_METHOD, GxB_AxB_DOT) ; GxB_get (GrB_Descriptor d, GrB_AxB_METHOD, GrB_Desc_Value *v) ; GxB_set (GrB_Descriptor d, GxB_NTHREADS, int nthreads) ; GxB_get (GrB_Descriptor d, GxB_NTHREADS, int *nthreads) ; GxB_set (GrB_Descriptor d, GxB_CHUNK, double chunk) ; GxB_get (GrB_Descriptor d, GxB_CHUNK, double *chunk) ; GxB_set (GrB_Descriptor d, GxB_SORT, sort) ; GxB_get (GrB_Descriptor d, GxB_SORT, int *sort) ; \end{verbatim} } \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{SuiteSparse:GraphBLAS Colon and Index Notation} %%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{colon} MATLAB uses a colon notation to index into matrices, such as \verb'C=A(2:4,3:8)', which extracts \verb'C' as 3-by-6 submatrix from \verb'A', from rows 2 through 4 and columns 3 to 8 of the matrix \verb'A'. A single colon is used to denote all rows, \verb'C=A(:,9)', or all columns, \verb'C=A(12,:)', which refers to the 9th column and 12th row of \verb'A', respectively. 
An arbitrary integer list can be given as well, such as the MATLAB statements: {\footnotesize \begin{verbatim} I = [2 1 4] ; J = [3 5] ; C = A (I,J) ; \end{verbatim} } \noindent which creates the 3-by-2 matrix \verb'C' as follows: \[ C = \left[ \begin{array}{cc} a_{2,3} & a_{2,5} \\ a_{1,3} & a_{1,5} \\ a_{4,3} & a_{4,5} \\ \end{array} \right] \] The GraphBLAS API can do the equivalent of \verb'C=A(I,J)', \verb'C=A(:,J)', \verb'C=A(I,:)', and \verb'C=A(:,:)', by passing a parameter \verb'const GrB_Index *I' as either an array of size \verb'ni', or as the special value \verb'GrB_ALL', which corresponds to the stand-alone colon \verb'C=A(:,J)', and the same can be done for \verb'J'.. To compute \verb'C=A(2:4,3:8)' in GraphBLAS requires the user application to create two explicit integer arrays \verb'I' and \verb'J' of size 3 and 5, respectively, and then fill them with the explicit values \verb'[2,3,4]' and \verb'[3,4,5,6,7,8]'. This works well if the lists are small, or if the matrix has more entries than rows or columns. However, particularly with hypersparse matrices, the size of the explicit arrays \verb'I' and \verb'J' can vastly exceed the number of entries in the matrix. When using its hypersparse format, SuiteSparse:GraphBLAS allows the user application to create a \verb'GrB_Matrix' with dimensions up to $2^{60}$, with no memory constraints. The only constraint on memory usage in a hypersparse matrix is the number of entries in the matrix. For example, creating a $n$-by-$n$ matrix \verb'A' of type \verb'GrB_FP64' with $n=2^{60}$ and one million entries is trivial to do in Version 2.1 (and later) of SuiteSparse:GraphBLAS, taking at most 24MB of space. SuiteSparse:GraphBLAS Version 2.1 (or later) could do this on an old smartphone. However, using just the pure GraphBLAS API, constructing \verb'C=A(0:(n/2),0:(n/2))' in SuiteSparse Version 2.0 would require the creation of an integer array \verb'I' of size $2^{59}$, containing the sequence 0, 1, 2, 3, ...., requiring about 4 ExaBytes of memory (4 million terabytes). This is roughly 1000 times larger than the memory size of the world's largest computer in 2018. SuiteSparse:GraphBLAS Version 2.1 and later extends the GraphBLAS API with a full implementation of the MATLAB colon notation for integers, \verb'I=begin:inc:end'. This extension allows the construction of the matrix \verb'C=A(0:(n/2),0:(n/2))' in this example, with dimension $2^{59}$, probably taking just milliseconds on an old smartphone. The \verb'GrB_extract', \verb'GrB_assign', and \verb'GxB_subassign' operations (described in the Section~\ref{operations}) each have parameters that define a list of integer indices, using two parameters: \vspace{-0.05in} {\footnotesize \begin{verbatim} const GrB_Index *I ; // an array, or a special value GrB_ALL GrB_Index ni ; // the size of I, or a special value \end{verbatim}} \vspace{-0.05in} These two parameters define five kinds of index lists, which can be used to specify either an explicit or implicit list of row indices and/or column indices. The length of the list of indices is denoted \verb'|I|'. This discussion applies equally to the row indices \verb'I' and the column indices \verb'J'. The five kinds are listed below. \begin{enumerate} \item An explicit list of indices, such as \verb'I = [2 1 4 7 2]' in MATLAB notation, is handled by passing in \verb'I' as a pointer to an array of size 5, and passing \verb'ni=5' as the size of the list. The length of the explicit list is \verb'ni=|I|'. 
Duplicates may appear, except that for some uses of \verb'GrB_assign' and \verb'GxB_subassign', duplicates lead to undefined behavior according to the GraphBLAS C API Specification. SuiteSparse:GraphBLAS specifies how duplicates are handled in all cases, as an addition to the specification. See Section~\ref{duplicates} for details. \item To specify all rows of a matrix, use \verb'I = GrB_ALL'. The parameter \verb'ni' is ignored. This is equivalent to \verb'C=A(:,J)' in MATLAB. In GraphBLAS, this is the sequence \verb'0:(m-1)' if \verb'A' has \verb'm' rows, with length \verb'|I|=m'. If \verb'J' is used the columns of an \verb'm'-by-\verb'n' matrix, then \verb'J=GrB_ALL' refers to all columns, and is the sequence \verb'0:(n-1)', of length \verb'|J|=n'. \item To specify a contiguous range of indices, such as \verb'I=10:20' in MATLAB, the array \verb'I' has size 2, and \verb'ni' is passed to SuiteSparse:GraphBLAS as the special value \verb'ni = GxB_RANGE'. The beginning index is \verb'I[GxB_BEGIN]' and the ending index is \verb'I[GxB_END]'. Both values must be non-negative since \verb'GrB_Index' is an unsigned integer (\verb'uint64_t'). The value of \verb'I[GxB_INC]' is ignored. \vspace{-0.05in} {\footnotesize \begin{verbatim} // to specify I = 10:20 GrB_Index I [2], ni = GxB_RANGE ; I [GxB_BEGIN] = 10 ; // the start of the sequence I [GxB_END ] = 20 ; // the end of the sequence \end{verbatim}} \vspace{-0.05in} Let $b$ = \verb'I[GxB_BEGIN]', let $e$ = \verb'I[GxB_END]', The sequence has length zero if $b > e$; otherwise the length is $|I| = (e-b) + 1$. \item To specify a strided range of indices with a non-negative stride, such as \verb'I=3:2:10', the array \verb'I' has size 3, and \verb'ni' has the special value \verb'GxB_STRIDE'. This is the sequence 3, 5, 7, 9, of length 4. Note that 10 does not appear in the list. The end point need not appear if the increment goes past it. \vspace{-0.05in} {\footnotesize \begin{verbatim} // to specify I = 3:2:10 GrB_Index I [3], ni = GxB_STRIDE ; I [GxB_BEGIN ] = 3 ; // the start of the sequence I [GxB_INC ] = 2 ; // the increment I [GxB_END ] = 10 ; // the end of the sequence \end{verbatim}} \vspace{-0.05in} The \verb'GxB_STRIDE' sequence is the same as the \verb'List' generated by the following for loop: \vspace{-0.05in} {\footnotesize \begin{verbatim} int64_t k = 0 ; GrB_Index *List = (a pointer to an array of large enough size) for (int64_t i = I [GxB_BEGIN] ; i <= I [GxB_END] ; i += I [GxB_INC]) { // i is the kth entry in the sequence List [k++] = i ; } \end{verbatim}} \vspace{-0.05in} Then passing the explicit array \verb'List' and its length \verb'ni=k' has the same effect as passing in the array \verb'I' of size 3, with \verb'ni=GxB_STRIDE'. The latter is simply much faster to produce, and much more efficient for SuiteSparse:GraphBLAS to process. Let $b$ = \verb'I[GxB_BEGIN]', let $e$ = \verb'I[GxB_END]', and let $\Delta$ = \verb'I[GxB_INC]'. The sequence has length zero if $b > e$ or $\Delta=0$. Otherwise, the length of the sequence is \[ |I| = \Bigl\lfloor\dfrac{e-b}{\Delta}\Bigr\rfloor + 1 \] \item In MATLAB notation, if the stride is negative, the sequence is decreasing. For example, \verb'10:-2:1' is the sequence 10, 8, 6, 4, 2, in that order. In SuiteSparse:GraphBLAS, use \verb'ni = GxB_BACKWARDS', with an array \verb'I' of size 3. 
The following example defines the equivalent of the MATLAB expression
\verb'10:-2:1' in SuiteSparse:GraphBLAS:

\vspace{-0.1in}
{\footnotesize
\begin{verbatim}
// to specify I = 10:-2:1
GrB_Index I [3], ni = GxB_BACKWARDS ;
I [GxB_BEGIN ] = 10 ;   // the start of the sequence
I [GxB_INC   ] = 2 ;    // the magnitude of the increment
I [GxB_END   ] = 1 ;    // the end of the sequence
\end{verbatim}}
\vspace{-0.1in}

The value -2 cannot be assigned to the \verb'GrB_Index' array \verb'I', since
that is an unsigned type.  The signed increment is represented instead with
the special value \verb'ni = GxB_BACKWARDS'.
The \verb'GxB_BACKWARDS' sequence is the same as generated by the following
for loop:

\vspace{-0.1in}
{\footnotesize
\begin{verbatim}
int64_t k = 0 ;
GrB_Index *List = (a pointer to an array of large enough size)
for (int64_t i = I [GxB_BEGIN] ; i >= I [GxB_END] ; i -= I [GxB_INC])
{
    // i is the kth entry in the sequence
    List [k++] = i ;
}
\end{verbatim}}
\vspace{-0.1in}

Let $b$ = \verb'I[GxB_BEGIN]', let $e$ = \verb'I[GxB_END]', and let $\Delta$
= \verb'I[GxB_INC]' (note that $\Delta$ is not negative).  The sequence has
length zero if $b < e$ or $\Delta=0$.  Otherwise, the length of the sequence
is
\[
|I| = \Bigl\lfloor\dfrac{b-e}{\Delta}\Bigr\rfloor + 1
\]

\end{enumerate}

Since \verb'GrB_Index' is an unsigned integer, all three values
\verb'I[GxB_BEGIN]', \verb'I[GxB_INC]', and \verb'I[GxB_END]' must be
non-negative.

Just as in MATLAB, it is valid to specify an empty sequence of length zero.
For example, \verb'I = 5:3' has length zero in MATLAB and the same is true
for a \verb'GxB_RANGE' sequence in SuiteSparse:GraphBLAS, with
\verb'I[GxB_BEGIN]=5' and \verb'I[GxB_END]=3'.  This has the same effect as
array \verb'I' with \verb'ni=0'.

\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{GraphBLAS Operations} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{operations}

The next sections define each of the GraphBLAS operations, also listed in the
table below.  SuiteSparse:GraphBLAS extensions (\verb'GxB_subassign',
\verb'GxB_select') are included in the table.
\vspace{0.2in}
{\small
\begin{tabular}{lll}
\hline
\verb'GrB_mxm'       & matrix-matrix multiply  & ${\bf C \langle M \rangle = C \odot AB}$ \\
\verb'GrB_vxm'       & vector-matrix multiply  & ${\bf w^{\sf T}\langle m^{\sf T}\rangle = w^{\sf T}\odot u^{\sf T}A}$ \\
\verb'GrB_mxv'       & matrix-vector multiply  & ${\bf w \langle m \rangle = w \odot Au}$ \\
\hline
\verb'GrB_eWiseMult' & element-wise,           & ${\bf C \langle M \rangle = C \odot (A \otimes B)}$ \\
                     & set intersection        & ${\bf w \langle m \rangle = w \odot (u \otimes v)}$ \\
\hline
\verb'GrB_eWiseAdd'  & element-wise,           & ${\bf C \langle M \rangle = C \odot (A \oplus B)}$ \\
                     & set union               & ${\bf w \langle m \rangle = w \odot (u \oplus v)}$ \\
\hline
\verb'GrB_extract'   & extract submatrix       & ${\bf C \langle M \rangle = C \odot A(I,J)}$ \\
                     &                         & ${\bf w \langle m \rangle = w \odot u(i)}$ \\
\hline
\verb'GxB_subassign' & assign submatrix,       & ${\bf C (I,J) \langle M \rangle = C(I,J) \odot A}$ \\
                     & with submask for ${\bf C(I,J)}$ & ${\bf w (i) \langle m \rangle = w(i) \odot u}$ \\
\hline
\verb'GrB_assign'    & assign submatrix        & ${\bf C \langle M \rangle (I,J) = C(I,J) \odot A}$ \\
                     & with submask for ${\bf C}$ & ${\bf w \langle m \rangle (i) = w(i) \odot u}$ \\
\hline
\verb'GrB_apply'     & apply unary operator    & ${\bf C \langle M \rangle = C \odot} f{\bf (A)}$ \\
                     &                         & ${\bf w \langle m \rangle = w \odot} f{\bf (u)}$ \\
                     & apply binary operator   & ${\bf C \langle M \rangle = C \odot} f(x,{\bf A})$ \\
                     &                         & ${\bf C \langle M \rangle = C \odot} f({\bf A},y)$ \\
                     &                         & ${\bf w \langle m \rangle = w \odot} f(x,{\bf u})$ \\
                     &                         & ${\bf w \langle m \rangle = w \odot} f({\bf u},y)$ \\
\hline
\verb'GxB_select'    & apply select operator   & ${\bf C \langle M \rangle = C \odot} f{\bf (A,k)}$ \\
                     &                         & ${\bf w \langle m \rangle = w \odot} f{\bf (u,k)}$ \\
\hline
\verb'GrB_reduce'    & reduce to vector        & ${\bf w \langle m \rangle = w \odot} [{\oplus}_j {\bf A}(:,j)]$ \\
                     & reduce to scalar        & $s = s \odot [{\oplus}_{ij} {\bf A}(I,J)]$ \\
\hline
\verb'GrB_transpose' & transpose               & ${\bf C \langle M \rangle = C \odot A^{\sf T}}$ \\
\hline
\verb'GrB_kronecker' & Kronecker product       & ${\bf C \langle M \rangle = C \odot \mbox{kron}(A, B)}$ \\
\hline
\end{tabular}
}
\vspace{0.2in}

If an error occurs, \verb'GrB_error(&err,C)' or \verb'GrB_error(&err,w)'
returns details about the error, for operations that return a modified matrix
\verb'C' or vector \verb'w'.  The only operation that cannot return an error
string is reduction to a scalar with \verb'GrB_reduce'.

\newpage
%===============================================================================
\subsection{{\sf GrB\_mxm:} matrix-matrix multiply} %===========================
%===============================================================================
\label{mxm}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_mxm                    // C<Mask> = accum (C, A*B)
(
    GrB_Matrix C,                   // input/output matrix for results
    const GrB_Matrix Mask,          // optional mask for C, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for Z=accum(C,T)
    const GrB_Semiring semiring,    // defines '+' and '*' for A*B
    const GrB_Matrix A,             // first input:  matrix A
    const GrB_Matrix B,             // second input: matrix B
    const GrB_Descriptor desc       // descriptor for C, Mask, A, and B
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_mxm' multiplies two sparse matrices \verb'A' and \verb'B' using the
\verb'semiring'.
The input matrices \verb'A' and \verb'B' may be transposed according to the descriptor, \verb'desc' (which may be \verb'NULL') and then typecasted to match the multiply operator of the \verb'semiring'. Next, \verb'T=A*B' is computed on the \verb'semiring', precisely defined in the \verb'GB_spec_mxm.m' script in \verb'GraphBLAS/Test'. The actual algorithm exploits sparsity and does not take $O(n^3)$ time, but it computes the following: {\footnotesize \begin{verbatim} [m s] = size (A.matrix) ; [s n] = size (B.matrix) ; T.matrix = zeros (m, n, multiply.ztype) ; T.pattern = zeros (m, n, 'logical') ; T.matrix (:,:) = identity ; % the identity of the semiring's monoid T.class = multiply.ztype ; % the ztype of the semiring's multiply op A = cast (A.matrix, multiply.xtype) ; % the xtype of the semiring's multiply op B = cast (B.matrix, multiply.ytype) ; % the ytype of the semiring's multiply op for j = 1:n for i = 1:m for k = 1:s % T (i,j) += A (i,k) * B (k,j), using the semiring if (A.pattern (i,k) && B.pattern (k,j)) z = multiply (A (i,k), B (k,j)) ; T.matrix (i,j) = add (T.matrix (i,j), z) ; T.pattern (i,j) = true ; end end end end \end{verbatim}} Finally, \verb'T' is typecasted into the type of \verb'C', and the results are written back into \verb'C' via the \verb'accum' and \verb'Mask', ${\bf C \langle M \rangle = C \odot T}$. The latter step is reflected in the MATLAB function \verb'GB_spec_accum_mask.m', discussed in Section~\ref{accummask}. \paragraph{\bf Performance considerations:} Suppose all matrices are in \verb'GxB_BY_COL' format, and \verb'B' is extremely sparse but \verb'A' is not as sparse. Then computing \verb'C=A*B' is very fast, and much faster than when \verb'A' is extremely sparse. For example, if \verb'A' is square and \verb'B' is a column vector that is all nonzero except for one entry \verb'B(j,0)=1', then \verb'C=A*B' is the same as extracting column \verb'A(:,j)'. This is very fast if \verb'A' is stored by column but slow if \verb'A' is stored by row. If \verb'A' is a sparse row with a single entry \verb'A(0,i)=1', then \verb'C=A*B' is the same as extracting row \verb'B(i,:)'. This is fast if \verb'B' is stored by row but slow if \verb'B' is stored by column. If the user application needs to repeatedly extract rows and columns from a matrix, whether by matrix multiplication or by \verb'GrB_extract', then keep two copies: one stored by row, and other by column, and use the copy that results in the fastest computation. By default, \verb'GrB_mxm', \verb'GrB_mxv', \verb'GrB_vxm', and \verb'GrB_reduce' (to vector) can return their result in a jumbled state, with the sort left pending. It can sometimes be faster for these methods to do the sort as they compute their result. Use the \verb'GxB_SORT' descriptor setting to select this option. Refer to Section~\ref{descriptor} for details. 
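For illustration only (this example is not part of the definition above, and
the matrix names, dimensions, and use of \verb'M' as a mask are arbitrary), a
typical use of \verb'GrB_mxm' with the predefined \verb'GxB_PLUS_TIMES_FP64'
semiring is sketched below:

{\footnotesize
\begin{verbatim}
// sketch only: assumes GrB_init has already been called
GrB_Matrix A, B, C, M ;
GrB_Index n = 1000 ;
GrB_Matrix_new (&A, GrB_FP64, n, n) ;
GrB_Matrix_new (&B, GrB_FP64, n, n) ;
GrB_Matrix_new (&C, GrB_FP64, n, n) ;
GrB_Matrix_new (&M, GrB_BOOL, n, n) ;
// ... fill A, B, and M (GrB_Matrix_build or GrB_Matrix_setElement)

// no mask, no accumulator:  C = A*B on the plus-times semiring over double
GrB_mxm (C, NULL, NULL, GxB_PLUS_TIMES_FP64, A, B, NULL) ;

// masked and accumulated:  C<M> = C + A*B
GrB_mxm (C, M, GrB_PLUS_FP64, GxB_PLUS_TIMES_FP64, A, B, NULL) ;

// with A transposed:  C = A'*B, by setting GrB_INP0 in a descriptor
GrB_Descriptor desc ;
GrB_Descriptor_new (&desc) ;
GxB_set (desc, GrB_INP0, GrB_TRAN) ;
GrB_mxm (C, NULL, NULL, GxB_PLUS_TIMES_FP64, A, B, desc) ;
\end{verbatim} }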
\newpage
%===============================================================================
\subsection{{\sf GrB\_vxm:} vector-matrix multiply} %===========================
%===============================================================================
\label{vxm}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_vxm                    // w'<mask> = accum (w, u'*A)
(
    GrB_Vector w,                   // input/output vector for results
    const GrB_Vector mask,          // optional mask for w, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for z=accum(w,t)
    const GrB_Semiring semiring,    // defines '+' and '*' for u'*A
    const GrB_Vector u,             // first input:  vector u
    const GrB_Matrix A,             // second input: matrix A
    const GrB_Descriptor desc       // descriptor for w, mask, and A
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_vxm' multiplies a row vector \verb"u'" times a matrix \verb'A'.
The matrix \verb'A' may be first transposed according to \verb'desc' (as the
second input, \verb'GrB_INP1'); the column vector \verb'u' is never
transposed via the descriptor.

The inputs \verb'u' and \verb'A' are typecasted to match the \verb'xtype' and
\verb'ytype' inputs, respectively, of the multiply operator of the
\verb'semiring'.  Next, an intermediate column vector \verb"t=A'*u" is
computed on the \verb'semiring' using the same method as \verb'GrB_mxm'.
Finally, the column vector \verb't' is typecasted from the \verb'ztype' of
the multiply operator of the \verb'semiring' into the type of \verb'w', and
the results are written back into \verb'w' using the optional accumulator
\verb'accum' and \verb'mask'.

The last step is ${\bf w \langle m \rangle = w \odot t}$, as described in
Section~\ref{accummask}, except that all the terms are column vectors instead
of matrices.

\paragraph{\bf Performance considerations:} % u'=u'*A
If the \verb'GxB_FORMAT' of \verb'A' is \verb'GxB_BY_ROW', and the default
descriptor is used (\verb'A' is not transposed), then \verb'GrB_vxm' is
faster than \verb'GrB_mxv' with its default descriptor, when the vector
\verb'u' is very sparse.  However, if the \verb'GxB_FORMAT' of \verb'A' is
\verb'GxB_BY_COL', then \verb'GrB_mxv' with its default descriptor is faster
than \verb'GrB_vxm' with its default descriptor, when the vector \verb'u' is
very sparse.

Using the non-default \verb'GrB_TRAN' descriptor for \verb'A' makes the
\verb'GrB_vxm' operation equivalent to \verb'GrB_mxv' with its default
descriptor (with the operands reversed in the multiplier, as well).  The
reverse is true as well; \verb'GrB_mxv' with \verb'GrB_TRAN' is the same as
\verb'GrB_vxm' with a default descriptor.

The breadth-first search presented in Section~\ref{bfs} of this User Guide
uses \verb'GrB_vxm' instead of \verb'GrB_mxv', since the default format in
SuiteSparse:GraphBLAS is \verb'GxB_BY_ROW'.  This corresponds to the ``push''
step of a direction-optimized BFS.  If the matrix is stored by column, then
use \verb'GrB_mxv' for the ``push'' instead.
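As an informal sketch of that ``push'' step (not the full BFS of
Section~\ref{bfs}; the vector and matrix names, and the choice of the Boolean
\verb'GxB_LOR_LAND_BOOL' semiring, are assumptions made here for
illustration), one BFS level can be written with \verb'GrB_vxm' as follows:

{\footnotesize
\begin{verbatim}
// sketch only: q is the Boolean frontier vector, v the Boolean visited
// vector, A the Boolean adjacency matrix, and n the number of nodes
GrB_Descriptor desc ;
GrB_Descriptor_new (&desc) ;
GxB_set (desc, GrB_MASK, GrB_COMP) ;     // use !v as the mask
GxB_set (desc, GrB_OUTP, GrB_REPLACE) ;  // clear q before writing the result

// one level of the push step:  q'<!v> = q'*A on the lor-land semiring
GrB_vxm (q, v, NULL, GxB_LOR_LAND_BOOL, q, A, desc) ;

// mark the new frontier as visited:  v<q> = true
GrB_assign (v, q, NULL, true, GrB_ALL, n, NULL) ;
\end{verbatim} }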
\newpage %=============================================================================== \subsection{{\sf GrB\_mxv:} matrix-vector multiply} %=========================== %=============================================================================== \label{mxv} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_mxv // w<mask> = accum (w, A*u) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GrB_Semiring semiring, // defines '+' and '*' for A*B const GrB_Matrix A, // first input: matrix A const GrB_Vector u, // second input: vector u const GrB_Descriptor desc // descriptor for w, mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GrB_mxv' multiplies a matrix \verb'A' times a column vector \verb'u'. The matrix \verb'A' may be first transposed according to \verb'desc' (as the first input); the column vector \verb'u' is never transposed via the descriptor. The inputs \verb'A' and \verb'u' are typecasted to match the \verb'xtype' and \verb'ytype' inputs, respectively, of the multiply operator of the \verb'semiring'. Next, an intermediate column vector \verb't=A*u' is computed on the \verb'semiring' using the same method as \verb'GrB_mxm'. Finally, the column vector \verb't' is typecasted from the \verb'ztype' of the multiply operator of the \verb'semiring' into the type of \verb'w', and the results are written back into \verb'w' using the optional accumulator \verb'accum' and \verb'mask'. The last step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. \paragraph{\bf Performance considerations:} % u=A*u Refer to the discussion of \verb'GrB_vxm'. In SuiteSparse:GraphBLAS, \verb'GrB_mxv' is very efficient when \verb'u' is sparse or dense, when the default descriptor is used, and when the matrix is \verb'GxB_BY_COL'. When \verb'u' is very sparse and \verb'GrB_INP0' is set to its non-default \verb'GrB_TRAN', then this method is not efficient if the matrix is in \verb'GxB_BY_COL' format. If an application needs to perform \verb"A'*u" repeatedly where \verb'u' is very sparse, then use the \verb'GxB_BY_ROW' format for \verb'A' instead. \newpage %=============================================================================== \subsection{{\sf GrB\_eWiseMult:} element-wise operations, set intersection} %== %=============================================================================== \label{eWiseMult} Element-wise ``multiplication'' is shorthand for applying a binary operator element-wise on two matrices or vectors \verb'A' and \verb'B', for all entries that appear in the set intersection of the patterns of \verb'A' and \verb'B'. This is like \verb'A.*B' for two sparse matrices in MATLAB, except that in GraphBLAS any binary operator can be used, not just multiplication. The pattern of the result of the element-wise ``multiplication'' is exactly this set intersection. Entries in \verb'A' but not \verb'B', or visa versa, do not appear in the result. Let $\otimes$ denote the binary operator to be used. The computation ${\bf T = A \otimes B}$ is given below. Entries not in the intersection of ${\bf A}$ and ${\bf B}$ do not appear in the pattern of ${\bf T}$. 
That is: \vspace{-0.2in} {\small \begin{tabbing} \hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \\ \> for all entries $(i,j)$ in ${\bf A \cap B}$ \\ \> \> $t_{ij} = a_{ij} \otimes b_{ij}$ \\ \end{tabbing} } \vspace{-0.2in} Depending on what kind of operator is used and what the implicit value is assumed to be, this can give the Hadamard product. This is the case for \verb'A.*B' in MATLAB since the implicit value is zero. However, computing a Hadamard product is not necessarily the goal of the \verb'eWiseMult' operation. It simply applies any binary operator, built-in or user-defined, to the set intersection of \verb'A' and \verb'B', and discards any entry outside this intersection. Its usefulness in a user's application does not depend upon it computing a Hadamard product in all cases. The operator need not be associative, commutative, nor have any particular property except for type compatibility with \verb'A' and \verb'B', and the output matrix \verb'C'. The generic name for this operation is \verb'GrB_eWiseMult', which can be used for both matrices and vectors. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_eWiseMult\_Vector:} element-wise vector multiply} %------------------------------------------------------------------------------- \label{eWiseMult_vector} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_eWiseMult // w<mask> = accum (w, u.*v) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const <operator> multiply, // defines '.*' for t=u.*v const GrB_Vector u, // first input: vector u const GrB_Vector v, // second input: vector v const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_eWiseMult' computes the element-wise ``multiplication'' of two vectors \verb'u' and \verb'v', element-wise using any binary operator (not just times). The vectors are not transposed via the descriptor. The vectors \verb'u' and \verb'v' are first typecasted into the first and second inputs of the \verb'multiply' operator. Next, a column vector \verb't' is computed, denoted ${\bf t = u \otimes v}$. The pattern of \verb't' is the set intersection of \verb'u' and \verb'v'. The result \verb't' has the type of the output \verb'ztype' of the \verb'multiply' operator. The \verb'operator' is typically a \verb'GrB_BinaryOp', but the method is type-generic for this parameter. If given a monoid (\verb'GrB_Monoid'), the additive operator of the monoid is used as the \verb'multiply' binary operator. If given a semiring (\verb'GrB_Semiring'), the multiply operator of the semiring is used as the \verb'multiply' binary operator. The next and final step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. Note for all GraphBLAS operations, including this one, the accumulator ${\bf w \odot t}$ is always applied in a set union manner, even though ${\bf t = u \otimes v}$ for this operation is applied in a set intersection manner. 
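For illustration only (the vector names, their type, and their size \verb'n'
are arbitrary), the following sketch computes the element-wise product of two
\verb'GrB_FP64' vectors \verb'u' and \verb'v', and then their element-wise
minimum, each on the intersection of their patterns:

{\footnotesize
\begin{verbatim}
// sketch only: u and v are existing GrB_FP64 vectors of size n
GrB_Vector w ;
GrB_Vector_new (&w, GrB_FP64, n) ;

// w = u .* v, using the built-in FP64 multiply operator
GrB_eWiseMult (w, NULL, NULL, GrB_TIMES_FP64, u, v, NULL) ;

// any binary operator can be used instead; for example, the minimum:
GrB_eWiseMult (w, NULL, NULL, GrB_MIN_FP64, u, v, NULL) ;
\end{verbatim} }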
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_eWiseMult\_Matrix:} element-wise matrix multiply} %------------------------------------------------------------------------------- \label{eWiseMult_matrix} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_eWiseMult // C<Mask> = accum (C, A.*B) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const <operator> multiply, // defines '.*' for T=A.*B const GrB_Matrix A, // first input: matrix A const GrB_Matrix B, // second input: matrix B const GrB_Descriptor desc // descriptor for C, Mask, A, and B ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_eWiseMult' computes the element-wise ``multiplication'' of two matrices \verb'A' and \verb'B', element-wise using any binary operator (not just times). The input matrices may be transposed first, according to the descriptor \verb'desc'. They are then typecasted into the first and second inputs of the \verb'multiply' operator. Next, a matrix \verb'T' is computed, denoted ${\bf T = A \otimes B}$. The pattern of \verb'T' is the set intersection of \verb'A' and \verb'B'. The result \verb'T' has the type of the output \verb'ztype' of the \verb'multiply' operator. The \verb'multiply' operator is typically a \verb'GrB_BinaryOp', but the method is type-generic for this parameter. If given a monoid (\verb'GrB_Monoid'), the additive operator of the monoid is used as the \verb'multiply' binary operator. If given a semiring (\verb'GrB_Semiring'), the multiply operator of the semiring is used as the \verb'multiply' binary operator. \vspace{0.05in} The operation can be expressed in MATLAB notation as: {\footnotesize \begin{verbatim} [nrows, ncols] = size (A.matrix) ; T.matrix = zeros (nrows, ncols, multiply.ztype) ; T.class = multiply.ztype ; p = A.pattern & B.pattern ; A = cast (A.matrix (p), multiply.xtype) ; B = cast (B.matrix (p), multiply.ytype) ; T.matrix (p) = multiply (A, B) ; T.pattern = p ; \end{verbatim} } The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}. Note for all GraphBLAS operations, including this one, the accumulator ${\bf C \odot T}$ is always applied in a set union manner, even though ${\bf T = A \otimes B}$ for this operation is applied in a set intersection manner. \newpage %=============================================================================== \subsection{{\sf GrB\_eWiseAdd:} element-wise operations, set union} %========== %=============================================================================== \label{eWiseAdd} Element-wise ``addition'' is shorthand for applying a binary operator element-wise on two matrices or vectors \verb'A' and \verb'B', for all entries that appear in the set intersection of the patterns of \verb'A' and \verb'B'. This is like \verb'A+B' for two sparse matrices in MATLAB, except that in GraphBLAS any binary operator can be used, not just addition. The pattern of the result of the element-wise ``addition'' is the set union of the pattern of \verb'A' and \verb'B'. Entries in neither in \verb'A' nor in \verb'B' do not appear in the result. Let $\oplus$ denote the binary operator to be used. The computation ${\bf T = A \oplus B}$ is exactly the same as the computation with accumulator operator as described in Section~\ref{accummask}. 
It acts like a sparse matrix addition, except that any operator can be used. The pattern of ${\bf A \oplus B}$ is the set union of the patterns of ${\bf A}$ and ${\bf B}$, and the operator is applied only on the set intersection of ${\bf A}$ and ${\bf B}$. Entries not in either the pattern of ${\bf A}$ or ${\bf B}$ do not appear in the pattern of ${\bf T}$. That is: \vspace{-0.2in} {\small \begin{tabbing} \hspace{2em} \= \hspace{2em} \= \hspace{2em} \= \\ \> for all entries $(i,j)$ in ${\bf A \cap B}$ \\ \> \> $t_{ij} = a_{ij} \oplus b_{ij}$ \\ \> for all entries $(i,j)$ in ${\bf A \setminus B}$ \\ \> \> $t_{ij} = a_{ij}$ \\ \> for all entries $(i,j)$ in ${\bf B \setminus A}$ \\ \> \> $t_{ij} = b_{ij}$ \end{tabbing} } The only difference between element-wise ``multiplication'' (${\bf T =A \otimes B}$) and ``addition'' (${\bf T = A \oplus B}$) is the pattern of the result, and what happens to entries outside the intersection. With $\otimes$ the pattern of ${\bf T}$ is the intersection; with $\oplus$ it is the set union. Entries outside the set intersection are dropped for $\otimes$, and kept for $\oplus$; in both cases the operator is only applied to those (and only those) entries in the intersection. Any binary operator can be used interchangeably for either operation. Element-wise operations do not operate on the implicit values, even implicitly, since the operations make no assumption about the semiring. As a result, the results can be different from MATLAB, which can always assume the implicit value is zero. For example, \verb'C=A-B' is the conventional matrix subtraction in MATLAB. Computing \verb'A-B' in GraphBLAS with \verb'eWiseAdd' will apply the \verb'MINUS' operator to the intersection, entries in \verb'A' but not \verb'B' will be unchanged and appear in \verb'C', and entries in neither \verb'A' nor \verb'B' do not appear in \verb'C'. For these cases, the results matches the MATLAB \verb'C=A-B'. Entries in \verb'B' but not \verb'A' do appear in \verb'C' but they are not negated; they cannot be subtracted from an implicit value in \verb'A'. This is by design. If conventional matrix subtraction of two sparse matrices is required, and the implicit value is known to be zero, use \verb'GrB_apply' to negate the values in \verb'B', and then use \verb'eWiseAdd' with the \verb'PLUS' operator, to compute \verb'A+(-B)'. The generic name for this operation is \verb'GrB_eWiseAdd', which can be used for both matrices and vectors. There is another minor difference in two variants of the element-wise functions. If given a \verb'semiring', the \verb'eWiseAdd' functions use the binary operator of the semiring's monoid, while the \verb'eWiseMult' functions use the multiplicative operator of the semiring. 
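A sketch of the subtraction idiom described above is shown below, assuming
\verb'GrB_FP64' matrices with an implicit value of zero; \verb'GrB_AINV_FP64'
is the built-in additive-inverse (negation) unary operator, and the matrix
names and dimensions are arbitrary:

{\footnotesize
\begin{verbatim}
// sketch only: conventional sparse subtraction, C = A + (-B)
GrB_Matrix T ;
GrB_Matrix_new (&T, GrB_FP64, nrows, ncols) ;
GrB_apply (T, NULL, NULL, GrB_AINV_FP64, B, NULL) ;        // T = -B
GrB_eWiseAdd (C, NULL, NULL, GrB_PLUS_FP64, A, T, NULL) ;  // C = A + T
GrB_free (&T) ;

// by contrast, this applies MINUS only on the intersection; entries of B
// not in A are copied into C without being negated:
GrB_eWiseAdd (C, NULL, NULL, GrB_MINUS_FP64, A, B, NULL) ;
\end{verbatim} }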
% \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_eWiseAdd\_Vector:} element-wise vector addition} %------------------------------------------------------------------------------- \label{eWiseAdd_vector} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_eWiseAdd // w<mask> = accum (w, u+v) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const <operator> add, // defines '+' for t=u+v const GrB_Vector u, // first input: vector u const GrB_Vector v, // second input: vector v const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_eWiseAdd' computes the element-wise ``addition'' of two vectors \verb'u' and \verb'v', element-wise using any binary operator (not just plus). The vectors are not transposed via the descriptor. Entries in the intersection of \verb'u' and \verb'v' are first typecasted into the first and second inputs of the \verb'add' operator. Next, a column vector \verb't' is computed, denoted ${\bf t = u \oplus v}$. The pattern of \verb't' is the set union of \verb'u' and \verb'v'. The result \verb't' has the type of the output \verb'ztype' of the \verb'add' operator. The \verb'add' operator is typically a \verb'GrB_BinaryOp', but the method is type-generic for this parameter. If given a monoid (\verb'GrB_Monoid'), the additive operator of the monoid is used as the \verb'add' binary operator. If given a semiring (\verb'GrB_Semiring'), the additive operator of the monoid of the semiring is used as the \verb'add' binary operator. The final step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. % \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_eWiseAdd\_Matrix:} element-wise matrix addition} %------------------------------------------------------------------------------- \label{eWiseAdd_matrix} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_eWiseAdd // C<Mask> = accum (C, A+B) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const <operator> add, // defines '+' for T=A+B const GrB_Matrix A, // first input: matrix A const GrB_Matrix B, // second input: matrix B const GrB_Descriptor desc // descriptor for C, Mask, A, and B ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_eWiseAdd' computes the element-wise ``addition'' of two matrices \verb'A' and \verb'B', element-wise using any binary operator (not just plus). The input matrices may be transposed first, according to the descriptor \verb'desc'. Entries in the intersection then typecasted into the first and second inputs of the \verb'add' operator. Next, a matrix \verb'T' is computed, denoted ${\bf T = A \oplus B}$. The pattern of \verb'T' is the set union of \verb'A' and \verb'B'. The result \verb'T' has the type of the output \verb'ztype' of the \verb'add' operator. The \verb'add' operator is typically a \verb'GrB_BinaryOp', but the method is type-generic for this parameter. If given a monoid (\verb'GrB_Monoid'), the additive operator of the monoid is used as the \verb'add' binary operator. 
If given a semiring (\verb'GrB_Semiring'), the additive operator of the monoid of the semiring is used as the \verb'add' binary operator. \vspace{0.05in} The operation can be expressed in MATLAB notation as: {\footnotesize \begin{verbatim} [nrows, ncols] = size (A.matrix) ; T.matrix = zeros (nrows, ncols, add.ztype) ; p = A.pattern & B.pattern ; A = GB_mex_cast (A.matrix (p), add.xtype) ; B = GB_mex_cast (B.matrix (p), add.ytype) ; T.matrix (p) = add (A, B) ; p = A.pattern & ~B.pattern ; T.matrix (p) = cast (A.matrix (p), add.ztype) ; p = ~A.pattern & B.pattern ; T.matrix (p) = cast (B.matrix (p), add.ztype) ; T.pattern = A.pattern | B.pattern ; T.class = add.ztype ; \end{verbatim} } Except for when typecasting is performed, this is identical to how the \verb'accum' operator is applied in Figure~\ref{fig_accummask}. The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}. \newpage %=============================================================================== \subsection{{\sf GrB\_extract:} submatrix extraction } %======================== %=============================================================================== \label{extract} The \verb'GrB_extract' function is a generic name for three specific functions: \verb'GrB_Vector_extract', \verb'GrB_Col_extract', and \verb'GrB_Matrix_extract'. The generic name appears in the function signature, but the specific function name is used when describing what each variation does. % \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_extract:} extract subvector from vector} %------------------------------------------------------------------------------- \label{extract_vector} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_extract // w<mask> = accum (w, u(I)) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GrB_Vector u, // first input: vector u const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_extract' extracts a subvector from another vector, identical to \verb't = u (I)' in MATLAB where \verb'I' is an integer vector of row indices. Refer to \verb'GrB_Matrix_extract' for further details; vector extraction is the same as matrix extraction with \verb'n'-by-1 matrices. See Section~\ref{colon} for a description of \verb'I' and \verb'ni'. The final step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. 
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Matrix\_extract:} extract submatrix from matrix} %------------------------------------------------------------------------------- \label{extract_matrix} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_extract // C<Mask> = accum (C, A(I,J)) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const GrB_Matrix A, // first input: matrix A const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index *J, // column indices const GrB_Index nj, // number of column indices const GrB_Descriptor desc // descriptor for C, Mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_extract' extracts a submatrix from another matrix, identical to \verb'T = A(I,J)' in MATLAB where \verb'I' and \verb'J' are integer vectors of row and column indices, respectively, except that indices are zero-based in GraphBLAS and one-based in MATLAB. The input matrix \verb'A' may be transposed first, via the descriptor. The type of \verb'T' and \verb'A' are the same. The size of \verb'C' is \verb'|I|'-by-\verb'|J|'. Entries outside \verb'A(I,J)' are not accessed and do not take part in the computation. More precisely, assuming the matrix \verb'A' is not transposed, the matrix \verb'T' is defined as follows: \vspace{-0.1in} {\footnotesize \begin{verbatim} T.matrix = zeros (ni, nj) ; % a matrix of size ni-by-nj T.pattern = false (ni, nj) ; for i = 1:ni for j = 1:nj if (A (I(i),J(j)).pattern) T (i,j).matrix = A (I(i),J(j)).matrix ; T (i,j).pattern = true ; end end end \end{verbatim}} \vspace{-0.1in} If duplicate indices are present in \verb'I' or \verb'J', the above method defines the result in \verb'T'. Duplicates result in the same values of \verb'A' being copied into different places in \verb'T'. See Section~\ref{colon} for a description of the row indices \verb'I' and \verb'ni', and the column indices \verb'J' and \verb'nj'. The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}. \paragraph{\bf Performance considerations:} % C=A(I,J) If \verb'A' is not transposed via input descriptor: if \verb'|I|' is small, then it is fastest if \verb'A' is \verb'GxB_BY_ROW'; if \verb'|J|' is small, then it is fastest if \verb'A' is \verb'GxB_BY_COL'. The opposite is true if \verb'A' is transposed. 
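Returning to the colon notation of Section~\ref{colon}, the following sketch
(the matrix names are arbitrary, and \verb'A' is assumed to be a
\verb'GrB_FP64' matrix) extracts \verb'C=A(2:4,3:8)' without constructing
explicit index arrays, using \verb'GxB_RANGE' for both index lists:

{\footnotesize
\begin{verbatim}
// sketch only: C = A (2:4, 3:8)
GrB_Index I [2], ni = GxB_RANGE ;
GrB_Index J [2], nj = GxB_RANGE ;
I [GxB_BEGIN] = 2 ; I [GxB_END] = 4 ;   // rows 2, 3, 4
J [GxB_BEGIN] = 3 ; J [GxB_END] = 8 ;   // columns 3, 4, 5, 6, 7, 8
GrB_Matrix C ;
GrB_Matrix_new (&C, GrB_FP64, 3, 6) ;   // C is |I|-by-|J| = 3-by-6
GrB_extract (C, NULL, NULL, A, I, ni, J, nj, NULL) ;
\end{verbatim} }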
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Col\_extract:} extract column vector from matrix} %------------------------------------------------------------------------------- \label{extract_column} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_extract // w<mask> = accum (w, A(I,j)) ( GrB_Vector w, // input/output matrix for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GrB_Matrix A, // first input: matrix A const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index j, // column index const GrB_Descriptor desc // descriptor for w, mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Col_extract' extracts a subvector from a matrix, identical to \verb't = A (I,j)' in MATLAB where \verb'I' is an integer vector of row indices and where \verb'j' is a single column index. The input matrix \verb'A' may be transposed first, via the descriptor, which results in the extraction of a single row \verb'j' from the matrix \verb'A', the result of which is a column vector \verb'w'. The type of \verb't' and \verb'A' are the same. The size of \verb'w' is \verb'|I|'-by-1. See Section~\ref{colon} for a description of the row indices \verb'I' and \verb'ni'. The final step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. \paragraph{\bf Performance considerations:} % w = A(I,j) If \verb'A' is not transposed: it is fastest if the format of \verb'A' is \verb'GxB_BY_COL'. The opposite is true if \verb'A' is transposed. \newpage %=============================================================================== \subsection{{\sf GxB\_subassign:} submatrix assignment} %======================= %=============================================================================== \label{subassign} The methods described in this section are all variations of the form \verb'C(I,J)=A', which modifies a submatrix of the matrix \verb'C'. All methods can be used in their generic form with the single name \verb'GxB_subassign'. This is reflected in the prototypes. However, to avoid confusion between the different kinds of assignment, the name of the specific function is used when describing each variation. If the discussion applies to all variations, the simple name \verb'GxB_subassign' is used. See Section~\ref{colon} for a description of the row indices \verb'I' and \verb'ni', and the column indices \verb'J' and \verb'nj'. \verb'GxB_subassign' is very similar to \verb'GrB_assign', described in Section~\ref{assign}. The two operations are compared and contrasted in Section~\ref{compare_assign}. For a discussion of how duplicate indices are handled in \verb'I' and \verb'J', see Section~\ref{duplicates}. 
%------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Vector\_subassign:} assign to a subvector } %------------------------------------------------------------------------------- \label{subassign_vector} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_subassign // w(I)<mask> = accum (w(I),u) ( GrB_Vector w, // input/output matrix for results const GrB_Vector mask, // optional mask for w(I), unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w(I),t) const GrB_Vector u, // first input: vector u const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Descriptor desc // descriptor for w(I) and mask ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Vector_subassign' operates on a subvector \verb'w(I)' of \verb'w', modifying it with the vector \verb'u'. The method is identical to \verb'GxB_Matrix_subassign' described in Section~\ref{subassign_matrix}, where all matrices have a single column each. The \verb'mask' has the same size as \verb'w(I)' and \verb'u'. The only other difference is that the input \verb'u' in this method is not transposed via the \verb'GrB_INP0' descriptor. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_subassign:} assign to a submatrix } %------------------------------------------------------------------------------- \label{subassign_matrix} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_subassign // C(I,J)<Mask> = accum (C(I,J),A) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C(I,J), unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C(I,J),T) const GrB_Matrix A, // first input: matrix A const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index *J, // column indices const GrB_Index nj, // number of column indices const GrB_Descriptor desc // descriptor for C(I,J), Mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_subassign' operates only on a submatrix \verb'S' of \verb'C', modifying it with the matrix \verb'A'. For this operation, the result is not the entire matrix \verb'C', but a submatrix \verb'S=C(I,J)' of \verb'C'. The steps taken are as follows, except that ${\bf A}$ may be optionally transposed via the \verb'GrB_INP0' descriptor option. \vspace{0.1in} \begin{tabular}{lll} \hline Step & GraphBLAS & description \\ & notation & \\ \hline 1 & ${\bf S} = {\bf C(I,J)}$ & extract the ${\bf C(I,J)}$ submatrix \\ 2 & ${\bf S \langle M \rangle} = {\bf S} \odot {\bf A}$ & apply the accumulator/mask to the submatrix ${\bf S}$\\ 3 & ${\bf C(I,J)}= {\bf S}$ & put the submatrix ${\bf S}$ back into ${\bf C(I,J)}$ \\ \hline \end{tabular} \vspace{0.1in} The accumulator/mask step in Step 2 is the same as for all other GraphBLAS operations, described in Section~\ref{accummask}, except that for \verb'GxB_subassign', it is applied to just the submatrix ${\bf S} = {\bf C(I,J)}$, and thus the \verb'Mask' has the same size as ${\bf A}$, ${\bf S}$, and ${\bf C(I,J)}$. The \verb'GxB_subassign' operation is the reverse of matrix extraction: \begin{itemize} \item For submatrix extraction, \verb'GrB_Matrix_extract', the submatrix \verb'A(I,J)' appears on the right-hand side of the assignment, \verb'C=A(I,J)', and entries outside of the submatrix are not accessed and do not take part in the computation. 
\item For submatrix assignment, \verb'GxB_Matrix_subassign', the submatrix \verb'C(I,J)' appears on the left-hand-side of the assignment, \verb'C(I,J)=A', and entries outside of the submatrix are not accessed and do not take part in the computation. \end{itemize} In both methods, the accumulator and mask modify the submatrix of the assignment; they simply differ on which side of the assignment the submatrix resides on. In both cases, if the \verb'Mask' matrix is present it is the same size as the submatrix: \begin{itemize} \item For submatrix extraction, ${\bf C \langle M \rangle = C \odot A(I,J)}$ is computed, where the submatrix is on the right. The mask ${\bf M}$ has the same size as the submatrix ${\bf A(I,J)}$. \item For submatrix assignment, ${\bf C(I,J) \langle M \rangle = C(I,J) \odot A}$ is computed, where the submatrix is on the left. The mask ${\bf M}$ has the same size as the submatrix ${\bf C(I,J)}$. \end{itemize} In Step 1, the submatrix \verb'S' is first computed by the \verb'GrB_Matrix_extract' operation, \verb'S=C(I,J)'. Step 2 accumulates the results ${\bf S \langle M \rangle = S \odot T}$, exactly as described in Section~\ref{accummask}, but operating on the submatrix ${\bf S}$, not ${\bf C}$, using the optional \verb'Mask' and \verb'accum' operator. The matrix ${\bf T}$ is simply ${\bf T}={\bf A}$, or ${\bf T}={\bf A}^{\sf T}$ if ${\bf A}$ is transposed via the \verb'desc' descriptor, \verb'GrB_INP0'. The \verb'GrB_REPLACE' option in the descriptor clears ${\bf S}$ after computing ${\bf Z = T}$ or ${\bf Z = C \odot T}$, not all of ${\bf C}$ since this operation can only modify the specified submatrix of ${\bf C}$. Finally, Step 3 writes the result (which is the modified submatrix \verb'S' and not all of \verb'C') back into the \verb'C' matrix that contains it, via the assignment \verb'C(I,J)=S', using the reverse operation from the method described for matrix extraction: {\footnotesize \begin{verbatim} for i = 1:ni for j = 1:nj if (S (i,j).pattern) C (I(i),J(j)).matrix = S (i,j).matrix ; C (I(i),J(j)).pattern = true ; end end end \end{verbatim}} \paragraph{\bf Performance considerations:} % C(I,J) = A If \verb'A' is not transposed: if \verb'|I|' is small, then it is fastest if the format of \verb'C' is \verb'GxB_BY_ROW'; if \verb'|J|' is small, then it is fastest if the format of \verb'C' is \verb'GxB_BY_COL'. The opposite is true if \verb'A' is transposed. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Col\_subassign:} assign to a sub-column of a matrix} %------------------------------------------------------------------------------- \label{subassign_column} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_subassign // C(I,j)<mask> = accum (C(I,j),u) ( GrB_Matrix C, // input/output matrix for results const GrB_Vector mask, // optional mask for C(I,j), unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(C(I,j),t) const GrB_Vector u, // input vector const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index j, // column index const GrB_Descriptor desc // descriptor for C(I,j) and mask ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Col_subassign' modifies a single sub-column of a matrix \verb'C'. 
It is the same as \verb'GxB_Matrix_subassign' where the index vector
\verb'J[0]=j' is a single column index (and thus \verb'nj=1'), and where all
matrices in \verb'GxB_Matrix_subassign' (except \verb'C') consist of a single
column.  The \verb'mask' vector has the same size as \verb'u' and the
sub-column \verb'C(I,j)'.  The input descriptor \verb'GrB_INP0' is ignored;
the input vector \verb'u' is not transposed.  Refer to
\verb'GxB_Matrix_subassign' for further details.

\paragraph{\bf Performance considerations:} % C(I,j) = u
\verb'GxB_Col_subassign' is much faster than \verb'GxB_Row_subassign' if the
format of \verb'C' is \verb'GxB_BY_COL'.  \verb'GxB_Row_subassign' is much
faster than \verb'GxB_Col_subassign' if the format of \verb'C' is
\verb'GxB_BY_ROW'.

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Row\_subassign:} assign to a sub-row of a matrix}
%-------------------------------------------------------------------------------
\label{subassign_row}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_subassign              // C(i,J)<mask'> = accum (C(i,J),u')
(
    GrB_Matrix C,                   // input/output matrix for results
    const GrB_Vector mask,          // optional mask for C(i,J), unused if NULL
    const GrB_BinaryOp accum,       // optional accum for z=accum(C(i,J),t)
    const GrB_Vector u,             // input vector
    const GrB_Index i,              // row index
    const GrB_Index *J,             // column indices
    const GrB_Index nj,             // number of column indices
    const GrB_Descriptor desc       // descriptor for C(i,J) and mask
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Row_subassign' modifies a single sub-row of a matrix \verb'C'.  It
is the same as \verb'GxB_Matrix_subassign' where the index vector
\verb'I[0]=i' is a single row index (and thus \verb'ni=1'), and where all
matrices in \verb'GxB_Matrix_subassign' (except \verb'C') consist of a single
row.  The \verb'mask' vector has the same size as \verb'u' and the sub-row
\verb'C(i,J)'.  The input descriptor \verb'GrB_INP0' is ignored; the input
vector \verb'u' is not transposed.  Refer to \verb'GxB_Matrix_subassign' for
further details.

\paragraph{\bf Performance considerations:} % C(i,J) = u'
\verb'GxB_Col_subassign' is much faster than \verb'GxB_Row_subassign' if the
format of \verb'C' is \verb'GxB_BY_COL'.  \verb'GxB_Row_subassign' is much
faster than \verb'GxB_Col_subassign' if the format of \verb'C' is
\verb'GxB_BY_ROW'.

% \newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GxB\_Vector\_subassign\_$<$type$>$:} assign a scalar to a subvector}
%-------------------------------------------------------------------------------
\label{subassign_vector_scalar}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_subassign              // w(I)<mask> = accum (w(I),x)
(
    GrB_Vector w,                   // input/output vector for results
    const GrB_Vector mask,          // optional mask for w(I), unused if NULL
    const GrB_BinaryOp accum,       // optional accum for z=accum(w(I),x)
    const <type> x,                 // scalar to assign to w(I)
    const GrB_Index *I,             // row indices
    const GrB_Index ni,             // number of row indices
    const GrB_Descriptor desc       // descriptor for w(I) and mask
) ;
\end{verbatim} } \end{mdframed}

\verb'GxB_Vector_subassign_<type>' assigns a single scalar to an entire
subvector of the vector \verb'w'.  The operation is exactly like setting a
single entry in an \verb'n'-by-1 matrix, \verb'A(I,0) = x', where the column
index for a vector is implicitly \verb'j=0'.
For further details of this function, see \verb'GxB_Matrix_subassign_<type>' in Section~\ref{subassign_matrix_scalar}. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_subassign\_$<$type$>$:} assign a scalar to a submatrix} %------------------------------------------------------------------------------- \label{subassign_matrix_scalar} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_subassign // C(I,J)<Mask> = accum (C(I,J),x) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C(I,J), unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C(I,J),x) const <type> x, // scalar to assign to C(I,J) const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index *J, // column indices const GrB_Index nj, // number of column indices const GrB_Descriptor desc // descriptor for C(I,J) and Mask ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_subassign_<type>' assigns a single scalar to an entire submatrix of \verb'C', like the {\em scalar expansion} \verb'C(I,J)=x' in MATLAB. The scalar \verb'x' is implicitly expanded into a matrix \verb'A' of size \verb'ni' by \verb'nj', and then the matrix \verb'A' is assigned to \verb'C(I,J)' using the same method as in \verb'GxB_Matrix_subassign'. Refer to that function in Section~\ref{subassign_matrix} for further details. For the accumulation step, the scalar \verb'x' is typecasted directly into the type of \verb'C' when the \verb'accum' operator is not applied to it, or into the \verb'ytype' of the \verb'accum' operator, if \verb'accum' is not NULL, for entries that are already present in \verb'C'. The \verb'<type> x' notation is otherwise the same as \verb'GrB_Matrix_setElement' (see Section~\ref{matrix_setElement}). Any value can be passed to this function and its type will be detected, via the \verb'_Generic' feature of ANSI C11. For a user-defined type, \verb'x' is a \verb'void *' pointer that points to a memory space holding a single entry of a scalar that has exactly the same user-defined type as the matrix \verb'C'. This user-defined type must exactly match the user-defined type of \verb'C' since no typecasting is done between user-defined types. If a \verb'void *' pointer is passed in and the type of the underlying scalar does not exactly match the user-defined type of \verb'C', then results are undefined. No error status will be returned since GraphBLAS has no way of catching this error. \paragraph{\bf Performance considerations:} % C(I,J) = scalar If \verb'A' is not transposed: if \verb'|I|' is small, then it is fastest if the format of \verb'C' is \verb'GxB_BY_ROW'; if \verb'|J|' is small, then it is fastest if the format of \verb'C' is \verb'GxB_BY_COL'. The opposite is true if \verb'A' is transposed. \newpage %=============================================================================== \subsection{{\sf GrB\_assign:} submatrix assignment} %========================== %=============================================================================== \label{assign} The methods described in this section are all variations of the form \verb'C(I,J)=A', which modifies a submatrix of the matrix \verb'C'. All methods can be used in their generic form with the single name \verb'GrB_assign'. These methods are very similar to their \verb'GxB_subassign' counterparts in Section~\ref{subassign}. 
They differ primarily in the size of the \verb'Mask', and how the
\verb'GrB_REPLACE' option works.  Refer to Section~\ref{compare_assign} for a
complete comparison of \verb'GxB_subassign' and \verb'GrB_assign'.

See Section~\ref{colon} for a description of \verb'I', \verb'ni', \verb'J',
and \verb'nj'.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_assign:} assign to a subvector }
%-------------------------------------------------------------------------------
\label{assign_vector}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_assign                 // w<mask>(I) = accum (w(I),u)
(
    GrB_Vector w,                   // input/output matrix for results
    const GrB_Vector mask,          // optional mask for w, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for z=accum(w(I),t)
    const GrB_Vector u,             // first input:  vector u
    const GrB_Index *I,             // row indices
    const GrB_Index ni,             // number of row indices
    const GrB_Descriptor desc       // descriptor for w and mask
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Vector_assign' operates on a subvector \verb'w(I)' of \verb'w',
modifying it with the vector \verb'u'.  The \verb'mask' vector has the same
size as \verb'w'.  The method is identical to \verb'GrB_Matrix_assign'
described in Section~\ref{assign_matrix}, where all matrices have a single
column each.  The only other difference is that the input \verb'u' in this
method is not transposed via the \verb'GrB_INP0' descriptor.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_assign:} assign to a submatrix }
%-------------------------------------------------------------------------------
\label{assign_matrix}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_assign                 // C<Mask>(I,J) = accum (C(I,J),A)
(
    GrB_Matrix C,                   // input/output matrix for results
    const GrB_Matrix Mask,          // optional mask for C, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for Z=accum(C(I,J),T)
    const GrB_Matrix A,             // first input:  matrix A
    const GrB_Index *I,             // row indices
    const GrB_Index ni,             // number of row indices
    const GrB_Index *J,             // column indices
    const GrB_Index nj,             // number of column indices
    const GrB_Descriptor desc       // descriptor for C, Mask, and A
) ;
\end{verbatim} } \end{mdframed}

\verb'GrB_Matrix_assign' operates on a submatrix \verb'S' of \verb'C',
modifying it with the matrix \verb'A'.  It may also modify all of \verb'C',
depending on the input descriptor \verb'desc' and the \verb'Mask'.

\vspace{0.1in}
\begin{tabular}{lll}
\hline
Step & GraphBLAS & description \\
     & notation  & \\
\hline
1 & ${\bf S} = {\bf C(I,J)}$ & extract ${\bf C(I,J)}$ submatrix \\
2 & ${\bf S} = {\bf S} \odot {\bf A}$ & apply the accumulator (but not the mask) to ${\bf S}$\\
3 & ${\bf Z} = {\bf C}$ & make a copy of ${\bf C}$ \\
4 & ${\bf Z(I,J)} = {\bf S}$ & put the submatrix into ${\bf Z(I,J)}$ \\
5 & ${\bf C \langle M \rangle = Z}$ & apply the mask/replace phase to all of ${\bf C}$ \\
\hline
\end{tabular}
\vspace{0.1in}

In contrast to \verb'GxB_subassign', the \verb'Mask' has the same size as
\verb'C'.

Step 1 extracts the submatrix and then Step 2 applies the accumulator (or
${\bf S}={\bf A}$ if \verb'accum' is \verb'NULL').  The \verb'Mask' is not
yet applied.  Step 3 makes a copy of the ${\bf C}$ matrix, and then Step 4
writes the submatrix ${\bf S}$ into ${\bf Z}$.  This is the same as Step 3 of
\verb'GxB_subassign', except that it operates on a temporary matrix
${\bf Z}$.
Finally, Step 5 writes ${\bf Z}$ back into ${\bf C}$ via the \verb'Mask', using the Mask/Replace Phase described in Section~\ref{accummask}. If \verb'GrB_REPLACE' is enabled, then all of ${\bf C}$ is cleared prior to writing ${\bf Z}$ via the mask. As a result, the \verb'GrB_REPLACE' option can delete entries outside the ${\bf C(I,J)}$ submatrix. \paragraph{\bf Performance considerations:} % C(I,J) = A If \verb'A' is not transposed: if \verb'|I|' is small, then it is fastest if the format of \verb'C' is \verb'GxB_BY_ROW'; if \verb'|J|' is small, then it is fastest if the format of \verb'C' is \verb'GxB_BY_COL'. The opposite is true if \verb'A' is transposed. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Col\_assign:} assign to a sub-column of a matrix} %------------------------------------------------------------------------------- \label{assign_column} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_assign // C<mask>(I,j) = accum (C(I,j),u) ( GrB_Matrix C, // input/output matrix for results const GrB_Vector mask, // optional mask for C(:,j), unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(C(I,j),t) const GrB_Vector u, // input vector const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index j, // column index const GrB_Descriptor desc // descriptor for C(:,j) and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Col_assign' modifies a single sub-column of a matrix \verb'C'. It is the same as \verb'GrB_Matrix_assign' where the index vector \verb'J[0]=j' is a single column index, and where all matrices in \verb'GrB_Matrix_assign' (except \verb'C') consist of a single column. Unlike \verb'GrB_Matrix_assign', the \verb'mask' is a vector with the same size as a single column of \verb'C'. The input descriptor \verb'GrB_INP0' is ignored; the input vector \verb'u' is not transposed. Refer to \verb'GrB_Matrix_assign' for further details. \paragraph{\bf Performance considerations:} % C(I,j) = u \verb'GrB_Col_assign' is much faster than \verb'GrB_Row_assign' if the format of \verb'C' is \verb'GxB_BY_COL'. \verb'GrB_Row_assign' is much faster than \verb'GrB_Col_assign' if the format of \verb'C' is \verb'GxB_BY_ROW'. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Row\_assign:} assign to a sub-row of a matrix} %------------------------------------------------------------------------------- \label{assign_row} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_assign // C<mask'>(i,J) = accum (C(i,J),u') ( GrB_Matrix C, // input/output matrix for results const GrB_Vector mask, // optional mask for C(i,:), unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(C(i,J),t) const GrB_Vector u, // input vector const GrB_Index i, // row index const GrB_Index *J, // column indices const GrB_Index nj, // number of column indices const GrB_Descriptor desc // descriptor for C(i,:) and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Row_assign' modifies a single sub-row of a matrix \verb'C'. It is the same as \verb'GrB_Matrix_assign' where the index vector \verb'I[0]=i' is a single row index, and where all matrices in \verb'GrB_Matrix_assign' (except \verb'C') consist of a single row. Unlike \verb'GrB_Matrix_assign', the \verb'mask' is a vector with the same size as a single row of \verb'C'. 
The input descriptor \verb'GrB_INP0' is ignored; the input vector \verb'u' is not transposed. Refer to \verb'GrB_Matrix_assign' for further details. \paragraph{\bf Performance considerations:} % C(i,J) = u' \verb'GrB_Col_assign' is much faster than \verb'GrB_Row_assign' if the format of \verb'C' is \verb'GxB_BY_COL'. \verb'GrB_Row_assign' is much faster than \verb'GrB_Col_assign' if the format of \verb'C' is \verb'GxB_BY_ROW'. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_assign\_$<$type$>$:} assign a scalar to a subvector} %------------------------------------------------------------------------------- \label{assign_vector_scalar} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_assign // w<mask>(I) = accum (w(I),x) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w(I),x) const <type> x, // scalar to assign to w(I) const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_assign_<type>' assigns a single scalar to an entire subvector of the vector \verb'w'. The operation is exactly like setting a single entry in an \verb'n'-by-1 matrix, \verb'A(I,0) = x', where the column index for a vector is implicitly \verb'j=0'. The \verb'mask' vector has the same size as \verb'w'. For further details of this function, see \verb'GrB_Matrix_assign_<type>' in the next section. Following the C API Specification, results are well-defined if \verb'I' contains duplicate indices. Duplicate indices are simply ignored. See Section~\ref{duplicates} for more details. % \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Matrix\_assign\_$<$type$>$:} assign a scalar to a submatrix} %------------------------------------------------------------------------------- \label{assign_matrix_scalar} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_assign // C<Mask>(I,J) = accum (C(I,J),x) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C(I,J),x) const <type> x, // scalar to assign to C(I,J) const GrB_Index *I, // row indices const GrB_Index ni, // number of row indices const GrB_Index *J, // column indices const GrB_Index nj, // number of column indices const GrB_Descriptor desc // descriptor for C and Mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_assign_<type>' assigns a single scalar to an entire submatrix of \verb'C', like the {\em scalar expansion} \verb'C(I,J)=x' in MATLAB. The scalar \verb'x' is implicitly expanded into a matrix \verb'A' of size \verb'ni' by \verb'nj', and then the matrix \verb'A' is assigned to \verb'C(I,J)' using the same method as in \verb'GrB_Matrix_assign'. Refer to that function in Section~\ref{assign_matrix} for further details. The \verb'Mask' has the same size as \verb'C'. For the accumulation step, the scalar \verb'x' is typecasted directly into the type of \verb'C' when the \verb'accum' operator is not applied to it, or into the \verb'ytype' of the \verb'accum' operator, if \verb'accum' is not NULL, for entries that are already present in \verb'C'. 
The \verb'<type> x' notation is otherwise the same as
\verb'GrB_Matrix_setElement' (see Section~\ref{matrix_setElement}).  Any value
can be passed to this function and its type will be detected, via the
\verb'_Generic' feature of ANSI C11.  For a user-defined type, \verb'x' is a
\verb'void *' pointer that points to a memory space holding a single entry of
a scalar that has exactly the same user-defined type as the matrix \verb'C'.
This user-defined type must exactly match the user-defined type of \verb'C'
since no typecasting is done between user-defined types.

If a \verb'void *' pointer is passed in and the type of the underlying scalar
does not exactly match the user-defined type of \verb'C', then results are
undefined.  No error status will be returned since GraphBLAS has no way of
catching this error.

Following the C API Specification, results are well-defined if \verb'I' or
\verb'J' contain duplicate indices.  Duplicate indices are simply ignored.
See Section~\ref{duplicates} for more details.

\paragraph{\bf Performance considerations:} % C(I,J) = scalar
If \verb'A' is not transposed: if \verb'|I|' is small, then it is fastest if
the format of \verb'C' is \verb'GxB_BY_ROW'; if \verb'|J|' is small, then it
is fastest if the format of \verb'C' is \verb'GxB_BY_COL'.  The opposite is
true if \verb'A' is transposed.

\newpage
%===============================================================================
\subsection{Duplicate indices in {\sf GrB\_assign} and {\sf GxB\_subassign}}
%===============================================================================
\label{duplicates}

According to the GraphBLAS C API Specification, if the index vectors \verb'I'
or \verb'J' contain duplicate indices, the results are undefined for
\verb'GrB_Vector_assign', \verb'GrB_Matrix_assign', \verb'GrB_Col_assign', and
\verb'GrB_Row_assign'.  Only the scalar assignment operations
(\verb'GrB_Vector_assign_TYPE' and \verb'GrB_Matrix_assign_TYPE') are
well-defined when duplicates appear in \verb'I' and \verb'J'.  In those two
functions, duplicate indices are ignored.

As an extension to the specification, SuiteSparse:GraphBLAS provides a
definition of how duplicate indices are handled in all cases.  If \verb'I' has
duplicate indices, they are ignored and the last unique entry in the list is
used.  When no mask and no accumulator is present, the results are identical
to how MATLAB handles duplicate indices in the built-in expression
\verb'C(I,J)=A'.  Details of how this is done are shown below.

{\small
\begin{verbatim}
   function C = subassign (C, I, J, A)
   % submatrix assignment with pre-sort of I and J; and remove duplicates

   % delete duplicates from I, keeping the last one seen
   [I2 I2k] = sort (I) ;
   Idupl = [(I2 (1:end-1) == I2 (2:end)), false] ;
   I2  = I2  (~Idupl) ;
   I2k = I2k (~Idupl) ;
   assert (isequal (I2, unique (I)))

   % delete duplicates from J, keeping the last one seen
   [J2 J2k] = sort (J) ;
   Jdupl = [(J2 (1:end-1) == J2 (2:end)), false] ;
   J2  = J2  (~Jdupl) ;
   J2k = J2k (~Jdupl) ;
   assert (isequal (J2, unique (J)))

   % do the submatrix assignment, with no duplicates in I2 or J2
   C (I2,J2) = A (I2k,J2k) ;
\end{verbatim}}

If a mask is present, then it is replaced with \verb'M = M (I2k, J2k)' for
\verb'GxB_subassign', or with \verb'M = M (I2, J2)' for \verb'GrB_assign'.
If an accumulator operator is present, it is applied after the duplicates are
removed, as (for example):

{\small
\begin{verbatim}
   C (I2,J2) = C (I2,J2) + A (I2k,J2k) ;
\end{verbatim}}

These definitions allow the MATLAB interface to GraphBLAS to return the same
results for \verb'C(I,J)=A' for a \verb'GrB' object as they do for built-in
MATLAB matrices.  They also allow the assignment to be done in parallel.

Results are always well-defined in SuiteSparse:GraphBLAS, but they might not
be what you expect.  For example, suppose the \verb'MIN' operator is being
used in the following assignment to the vector \verb'x', and suppose \verb'I'
contains the entries \verb'[0 0]'.  Suppose \verb'x' is initially empty, of
length 1, and suppose \verb'y' is a vector of length 2 with the values
\verb'[5 7]'.

{\small
\begin{verbatim}
    #include "GraphBLAS.h"
    #include <stdio.h>
    int main (void)
    {
        GrB_init (GrB_NONBLOCKING) ;
        GrB_Vector x, y ;
        GrB_Vector_new (&x, GrB_INT32, 1) ;
        GrB_Vector_new (&y, GrB_INT32, 2) ;
        GrB_Index I [2] = {0, 0} ;
        GrB_Vector_setElement (y, 5, 0) ;
        GrB_Vector_setElement (y, 7, 1) ;
        GrB_Vector_wait (&y) ;
        GxB_print (x, 3) ;
        GxB_print (y, 3) ;
        GrB_assign (x, NULL, GrB_MIN_INT32, y, I, 2, NULL) ;
        GrB_Vector_wait (&x) ;
        GxB_print (x, 3) ;
        GrB_finalize ( ) ;
    }
\end{verbatim}}

You might (wrongly) expect the result to be the vector \verb'x(0)=5', since
two entries seem to be assigned, and the min operator might be expected to
take the minimum of the two.  This is not how SuiteSparse:GraphBLAS handles
duplicates.  Instead, the first duplicate index of \verb'I' is discarded
(\verb'I [0] = 0', and \verb'y(0)=5'), and only the second entry is used
(\verb'I [1] = 0', and \verb'y(1)=7').  The output of the above program is:

{\small
\begin{verbatim}
    1x1 GraphBLAS int32_t vector, sparse by col:
    x, no entries

    2x1 GraphBLAS int32_t vector, sparse by col:
    y, 2 entries

        (0,0)   5
        (1,0)   7

    1x1 GraphBLAS int32_t vector, sparse by col:
    x, 1 entry

        (0,0)   7
\end{verbatim}}

You see that the result is \verb'x(0)=7', since the \verb'y(0)=5' entry has
been ignored because of the duplicate indices in \verb'I'.

\begin{alert}
{\bf SPEC:} Providing a well-defined behavior for duplicate indices with
matrix and vector assignment is an extension to the spec.  The spec only
defines the behavior when assigning a scalar into a matrix or vector, and
states that duplicate indices otherwise lead to undefined results.
\end{alert}

\newpage
%===============================================================================
\subsection{Comparing {\sf GrB\_assign} and {\sf GxB\_subassign}}
%===============================================================================
\label{compare_assign}

The \verb'GxB_subassign' and \verb'GrB_assign' operations are very similar,
but they differ in two ways:

\begin{enumerate}
\item {\bf The Mask has a different size:}
The mask in \verb'GxB_subassign' has the same dimensions as \verb'w(I)' for
vectors and \verb'C(I,J)' for matrices.  In \verb'GrB_assign', the mask is the
same size as \verb'w' or \verb'C', respectively (except for the row/col
variants).  The two masks are related.  If \verb'M' is the mask for
\verb'GrB_assign', then \verb'M(I,J)' is the mask for \verb'GxB_subassign'.
If there is no mask, or if \verb'I' and \verb'J' are both \verb'GrB_ALL', the
two masks are the same.  For \verb'GrB_Row_assign' and \verb'GrB_Col_assign',
the \verb'mask' vector is the same size as a row or column of \verb'C',
respectively.
For the corresponding \verb'GxB_Row_subassign' and \verb'GxB_Col_subassign'
operations, the \verb'mask' is the same size as the sub-row \verb'C(i,J)' or
subcolumn \verb'C(I,j)', respectively.

\item {\bf \verb'GrB_REPLACE' is different:}
They differ in how \verb'C' is affected in areas outside the \verb'C(I,J)'
submatrix.  In \verb'GxB_subassign', the \verb'C(I,J)' submatrix is the only
part of \verb'C' that can be modified, and no part of \verb'C' outside the
submatrix is ever modified.  In \verb'GrB_assign', it is possible to delete
entries in \verb'C' outside the submatrix, but only in one specific manner.
Suppose the mask \verb'M' is present (or, suppose it is not present but
\verb'GrB_COMP' is true).  After (optionally) complementing the mask, the
value of \verb'M(i,j)' can be 0 for some entry outside the \verb'C(I,J)'
submatrix.  If the \verb'GrB_REPLACE' descriptor is true, \verb'GrB_assign'
deletes this entry.

\end{enumerate}

\verb'GxB_subassign' and \verb'GrB_assign' are identical if \verb'GrB_REPLACE'
is set to its default value of false, and if the masks happen to be the same.
The two masks can be the same in two cases:  either the \verb'Mask' input is
\verb'NULL' (and it is not complemented via \verb'GrB_COMP'), or \verb'I' and
\verb'J' are both \verb'GrB_ALL'.  If all these conditions hold, the two
algorithms are identical and have the same performance.  Otherwise,
\verb'GxB_subassign' is much faster than \verb'GrB_assign' when the latter
must examine the entire matrix \verb'C' to delete entries (when
\verb'GrB_REPLACE' is true), and if it must deal with a much larger
\verb'Mask' matrix.

However, both methods have specific uses.  Consider using \verb'C(I,J)+=F' for
many submatrices \verb'F' (for example, when assembling a finite-element
matrix).  If the \verb'Mask' is meant as a specification for which entries of
\verb'C' should appear in the final result, then use \verb'GrB_assign'.  If
instead the \verb'Mask' is meant to control which entries of the submatrix
\verb'C(I,J)' are modified by the finite-element \verb'F', then use
\verb'GxB_subassign'.  This is particularly useful if the \verb'Mask' is a
template that follows along with the finite-element \verb'F', independent of
where it is applied to \verb'C'.  Using \verb'GrB_assign' would be very
difficult in this case since a new \verb'Mask', the same size as \verb'C',
would need to be constructed for each finite-element \verb'F'.

In GraphBLAS notation, the two methods can be described as follows:

\vspace{0.05in}
\begin{tabular}{ll}
\hline
matrix and vector subassign & ${\bf C(I,J) \langle M \rangle} = {\bf C(I,J)} \odot {\bf A}$ \\
matrix and vector assign    & ${\bf C \langle M \rangle (I,J)} = {\bf C(I,J)} \odot {\bf A}$ \\
\hline
\end{tabular}
\vspace{0.05in}

This notation does not include the details of the \verb'GrB_COMP' and
\verb'GrB_REPLACE' descriptors, but it does illustrate the difference in the
\verb'Mask'.  In the subassign, \verb'Mask' is the same size as \verb'C(I,J)'
and \verb'A'.  If \verb'I[0]=i' and \verb'J[0]=j', then \verb'Mask(0,0)'
controls how \verb'C(i,j)' is modified by the subassign, from the value
\verb'A(0,0)'.  In the assign, \verb'Mask' is the same size as \verb'C', and
\verb'Mask(i,j)' controls how \verb'C(i,j)' is modified.
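For illustration, the following sketch (not part of the API description above;
the matrix and mask names are placeholders) performs the same update
\verb'C(I,J) += A' with both methods.  Assuming \verb'C' is
\verb'n'-by-\verb'n' and \verb'A' is \verb'ni'-by-\verb'nj', the mask passed
to \verb'GrB_assign' must be \verb'n'-by-\verb'n', while the mask passed to
\verb'GxB_subassign' must be \verb'ni'-by-\verb'nj':

{\footnotesize
\begin{verbatim}
    // the same submatrix update, written both ways (FP64 matrices assumed)
    // Mask_assign is n-by-n (the same size as C)
    GrB_Matrix_assign    (C, Mask_assign, GrB_PLUS_FP64, A, I, ni, J, nj, NULL) ;
    // Mask_sub is ni-by-nj (the same size as C(I,J) and A)
    GxB_Matrix_subassign (C, Mask_sub,    GrB_PLUS_FP64, A, I, ni, J, nj, NULL) ;
\end{verbatim}}

With a \verb'NULL' mask and the default \verb'GrB_REPLACE' setting of false,
the two calls compute the same result.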
The \verb'GxB_subassign' and \verb'GrB_assign' functions have the same
signatures; they differ only in how they consider the \verb'Mask' and the
\verb'GrB_REPLACE' descriptor.  Details of each step of the two operations are
listed below:

\vspace{0.1in}
\begin{tabular}{lll}
\hline
Step & \verb'GrB_Matrix_assign'          & \verb'GxB_Matrix_subassign' \\
\hline
1 & ${\bf S} = {\bf C(I,J)}$             & ${\bf S} = {\bf C(I,J)}$ \\
2 & ${\bf S} = {\bf S} \odot {\bf A}$    & ${\bf S \langle M \rangle} = {\bf S} \odot {\bf A}$ \\
3 & ${\bf Z} = {\bf C}$                  & ${\bf C(I,J)}= {\bf S}$ \\
4 & ${\bf Z(I,J)} = {\bf S}$             & \\
5 & ${\bf C \langle M \rangle = Z}$      & \\
\hline
\end{tabular}
\vspace{0.1in}

Step 1 is the same.  In the Accumulator Phase (Step 2), the expression
${\bf S} \odot {\bf A}$, described in Section~\ref{accummask}, is the same in
both operations.  The result is simply ${\bf A}$ if \verb'accum' is
\verb'NULL'.  It only applies to the submatrix ${\bf S}$, not the whole
matrix.  The result ${\bf S} \odot {\bf A}$ is used differently in the
Mask/Replace phase.

The Mask/Replace Phase, described in Section~\ref{accummask}, is different:
\begin{itemize}
\item
For \verb'GrB_assign' (Step 5), the mask is applied to all of ${\bf C}$.  The
mask has the same size as ${\bf C}$.  Just prior to making the assignment via
the mask, the \verb'GrB_REPLACE' option can be used to clear all of ${\bf C}$
first.  This is the only way in which entries in ${\bf C}$ that are outside
the ${\bf C(I,J)}$ submatrix can be modified by this operation.

\item
For \verb'GxB_subassign' (Step 2b), the mask is applied to just ${\bf S}$.
The mask has the same size as ${\bf C(I,J)}$, ${\bf S}$, and ${\bf A}$.  Just
prior to making the assignment via the mask, the \verb'GrB_REPLACE' option can
be used to clear ${\bf S}$ first.  No entries in ${\bf C}$ that are outside
the ${\bf C(I,J)}$ can be modified by this operation.  Thus,
\verb'GrB_REPLACE' has no effect on entries in ${\bf C}$ outside the
${\bf C(I,J)}$ submatrix.
\end{itemize}

The differences between \verb'GrB_assign' and \verb'GxB_subassign' can be seen
in Tables~\ref{insubmatrix} and \ref{outsubmatrix}.  The first table considers
the case when the entry $c_{ij}$ is in the ${\bf C(I,J)}$ submatrix, and it
describes what is computed for both \verb'GrB_assign' and \verb'GxB_subassign'.
They perform the exact same computation; the only difference is how the value
of the mask is specified.  Compare Table~\ref{insubmatrix} with
Table~\ref{tab:maskaccum} in Section~\ref{sec:maskaccum}.

The first column of Table~\ref{insubmatrix} is {\em yes} if \verb'GrB_REPLACE'
is enabled, and a dash otherwise.  The second column is {\em yes} if an
accumulator operator is given, and a dash otherwise.  The third column is
$c_{ij}$ if the entry is present in ${\bf C}$, and a dash otherwise.  The
fourth column is $a_{i'j'}$ if the corresponding entry is present in
${\bf A}$, where $i={\bf I}(i')$ and $j={\bf J}(j')$.

The {\em mask} column is 1 if the effective value of the mask allows
${\bf C}$ to be modified, and 0 otherwise.  This is $m_{ij}$ for
\verb'GrB_assign', and $m_{i'j'}$ for \verb'GxB_subassign', to reflect the
difference in the mask, but this difference is not reflected in the table.
The value 1 or 0 is the value of the entry in the mask after it is optionally
complemented via the \verb'GrB_COMP' option.

Finally, the last column is the action taken in this case.  It is left blank
if no action is taken, in which case $c_{ij}$ is not modified if present, or
not inserted into ${\bf C}$ if not present.
\begin{table} {\small \begin{tabular}{lllll|l} \hline repl & accum & ${\bf C}$ & ${\bf A}$ & mask & action taken by \verb'GrB_assign' and \verb'GxB_subassign'\\ \hline - &- & $c_{ij}$ & $a_{i'j'}$ & 1 & $c_{ij} = a_{i'j'}$, update \\ - &- & - & $a_{i'j'}$ & 1 & $c_{ij} = a_{i'j'}$, insert \\ - &- & $c_{ij}$ & - & 1 & delete $c_{ij}$ because $a_{i'j'}$ not present \\ - &- & - & - & 1 & \\ - &- & $c_{ij}$ & $a_{i'j'}$ & 0 & \\ - &- & - & $a_{i'j'}$ & 0 & \\ - &- & $c_{ij}$ & - & 0 & \\ - &- & - & - & 0 & \\ \hline yes&- & $c_{ij}$ & $a_{i'j'}$ & 1 & $c_{ij} = a_{i'j'}$, update \\ yes&- & - & $a_{i'j'}$ & 1 & $c_{ij} = a_{i'j'}$, insert \\ yes&- & $c_{ij}$ & - & 1 & delete $c_{ij}$ because $a_{i'j'}$ not present \\ yes&- & - & - & 1 & \\ yes&- & $c_{ij}$ & $a_{i'j'}$ & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&- & - & $a_{i'j'}$ & 0 & \\ yes&- & $c_{ij}$ & - & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&- & - & - & 0 & \\ \hline - &yes & $c_{ij}$ & $a_{i'j'}$ & 1 & $c_{ij} = c_{ij} \odot a_{i'j'}$, apply accumulator \\ - &yes & - & $a_{i'j'}$ & 1 & $c_{ij} = a_{i'j'}$, insert \\ - &yes & $c_{ij}$ & - & 1 & \\ - &yes & - & - & 1 & \\ - &yes & $c_{ij}$ & $a_{i'j'}$ & 0 & \\ - &yes & - & $a_{i'j'}$ & 0 & \\ - &yes & $c_{ij}$ & - & 0 & \\ - &yes & - & - & 0 & \\ \hline yes&yes & $c_{ij}$ & $a_{i'j'}$ & 1 & $c_{ij} = c_{ij} \odot a_{i'j'}$, apply accumulator \\ yes&yes & - & $a_{i'j'}$ & 1 & $c_{ij} = a_{i'j'}$, insert \\ yes&yes & $c_{ij}$ & - & 1 & \\ yes&yes & - & - & 1 & \\ yes&yes & $c_{ij}$ & $a_{i'j'}$ & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&yes & - & $a_{i'j'}$ & 0 & \\ yes&yes & $c_{ij}$ & - & 0 & delete $c_{ij}$ (because of \verb'GrB_REPLACE') \\ yes&yes & - & - & 0 & \\ \hline \end{tabular} } \caption{Results of assign and subassign for entries in the ${\bf C(I,J)}$ submatrix \label{insubmatrix}} \end{table} \newpage Table~\ref{outsubmatrix} illustrates how \verb'GrB_assign' and \verb'GxB_subassign' differ for entries outside the submatrix. \verb'GxB_subassign' never modifies any entry outside the ${\bf C(I,J)}$ submatrix, but \verb'GrB_assign' can modify them in two cases listed in Table~\ref{outsubmatrix}. When the \verb'GrB_REPLACE' option is selected, and when the \verb'Mask(i,j)' for an entry $c_{ij}$ is false (or if the \verb'Mask(i,j)' is true and \verb'GrB_COMP' is enabled via the descriptor), then the entry is deleted by \verb'GrB_assign'. The fourth column of Table~\ref{outsubmatrix} differs from Table~\ref{insubmatrix}, since entries in ${\bf A}$ never affect these entries. Instead, for all index pairs outside the $I \times J$ submatrix, ${\bf C}$ and ${\bf Z}$ are identical (see Step 3 above). As a result, each section of the table includes just two cases: either $c_{ij}$ is present, or not. This in contrast to Table~\ref{insubmatrix}, where each section must consider four different cases. The \verb'GrB_Row_assign' and \verb'GrB_Col_assign' operations are slightly different. They only affect a single row or column of ${\bf C}$. For \verb'GrB_Row_assign', Table~\ref{outsubmatrix} only applies to entries in the single row \verb'C(i,J)' that are outside the list of indices, \verb'J'. For \verb'GrB_Col_assign', Table~\ref{outsubmatrix} only applies to entries in the single column \verb'C(I,j)' that are outside the list of indices, \verb'I'. 
\begin{table}
{\small
\begin{tabular}{lllll|l}
\hline
repl & accum & ${\bf C}$ & ${\bf C=Z}$ & mask & action taken by \verb'GrB_assign' \\
\hline
 -   &-      & $c_{ij}$  & $c_{ij}$    & 1    &  \\
 -   &-      & -         & -           & 1    &  \\
 -   &-      & $c_{ij}$  & $c_{ij}$    & 0    &  \\
 -   &-      & -         & -           & 0    &  \\
\hline
 yes & -     & $c_{ij}$  & $c_{ij}$    & 1    &  \\
 yes & -     & -         & -           & 1    &  \\
 yes & -     & $c_{ij}$  & $c_{ij}$    & 0    & delete $c_{ij}$  (because of \verb'GrB_REPLACE') \\
 yes & -     & -         & -           & 0    &  \\
\hline
 -   &yes    & $c_{ij}$  & $c_{ij}$    & 1    &  \\
 -   &yes    & -         & -           & 1    &  \\
 -   &yes    & $c_{ij}$  & $c_{ij}$    & 0    &  \\
 -   &yes    & -         & -           & 0    &  \\
\hline
 yes & yes   & $c_{ij}$  & $c_{ij}$    & 1    &  \\
 yes & yes   & -         & -           & 1    &  \\
 yes & yes   & $c_{ij}$  & $c_{ij}$    & 0    & delete $c_{ij}$  (because of \verb'GrB_REPLACE') \\
 yes & yes   & -         & -           & 0    &  \\
\hline
\end{tabular}
}
\caption{Results of assign for entries outside the ${\bf C(I,J)}$ submatrix.
Subassign has no effect on these entries.
\label{outsubmatrix}}
\end{table}

%-------------------------------------------------------------------------------
\subsubsection{Example}
%-------------------------------------------------------------------------------

The difference between \verb'GxB_subassign' and \verb'GrB_assign' is
illustrated in the following example.  Consider the 2-by-2 matrix ${\bf C}$
where all entries are present.

\[
{\bf C} = \left[
    \begin{array}{rr}
    11 & 12 \\
    21 & 22 \\
    \end{array}
    \right]
\]

Suppose \verb'GrB_REPLACE' is true, and \verb'GrB_COMP' is false.  Let the
\verb'Mask' be:

\[
{\bf M} = \left[
    \begin{array}{rr}
    1 & 1 \\
    0 & 1 \\
    \end{array}
    \right].
\]

Let ${\bf A} = 100$, and let the index sets be ${\bf I}=0$ and ${\bf J}=1$.

Consider the computation
${\bf C \langle M \rangle} (0,1) = {\bf C}(0,1) + {\bf A}$,
using the \verb'GrB_assign' operation.  The result is:
\[
{\bf C} = \left[
    \begin{array}{rr}
    11 & 112 \\
    -  & 22 \\
    \end{array}
    \right].
\]
The $(0,1)$ entry is updated and the $(1,0)$ entry is deleted because its
\verb'Mask' is zero.  The other two entries are not modified since
${\bf Z} = {\bf C}$ outside the submatrix, and those two values are written
back into ${\bf C}$ because their \verb'Mask' values are 1.  The $(1,0)$ entry
is deleted because the entry ${\bf Z}(1,0)=21$ is prevented from being written
back into ${\bf C}$ since \verb'Mask(1,0)=0'.

Now consider the analogous \verb'GxB_subassign' operation.  The \verb'Mask'
has the same size as ${\bf A}$, namely:
\[
{\bf M} = \left[
    \begin{array}{r}
    1 \\
    \end{array}
    \right].
\]

After computing
${\bf C} (0,1) {\bf \langle M \rangle} = {\bf C}(0,1) + {\bf A}$,
the result is
\[
{\bf C} = \left[
    \begin{array}{rr}
    11 & 112 \\
    21 & 22 \\
    \end{array}
    \right].
\]
Only the ${\bf C(I,J)}$ submatrix, the single entry ${\bf C}(0,1)$, is
modified by \verb'GxB_subassign'.  The entry ${\bf C}(1,0)=21$ is unaffected
by \verb'GxB_subassign', but it is deleted by \verb'GrB_assign'.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{Performance of {\sf GxB\_subassign}, {\sf GrB\_assign}
and {\sf GrB\_*\_setElement}}
%-------------------------------------------------------------------------------

When SuiteSparse:GraphBLAS uses non-blocking mode, the modifications to a
matrix by \verb'GxB_subassign', \verb'GrB_assign', and \verb'GrB_*_setElement'
can be postponed, and computed all at once later on.  This has a huge impact
on performance.

A sequence of assignments is fast if their completion can be postponed for as
long as possible, or if they do not modify the pattern at all.
Modifying the pattern can be costly, but it is fast if non-blocking mode can
be fully exploited.

Consider a sequence of $t$ submatrix assignments \verb'C(I,J)=C(I,J)+A' to an
$n$-by-$n$ matrix \verb'C' where each submatrix \verb'A' has size $a$-by-$a$
with $s$ entries, and where \verb'C' starts with $c$ entries.  Assume the
matrices are all stored in non-hypersparse form, by row (\verb'GxB_BY_ROW').

If blocking mode is enabled, or if the sequence requires the matrix to be
completed after each assignment, each of the $t$ assignments takes
$O(a + s \log n)$ time to process the \verb'A' matrix and then
$O(n + c + s \log s)$ time to complete \verb'C'.  The latter step uses
\verb'GrB_*_build' to build an update matrix and then merge it with \verb'C'.
This step does not occur if the sequence of assignments does not add new
entries to the pattern of \verb'C', however.  Assuming in the worst case that
the pattern does change, the total time is
$O (t \left[ a + s \log n + n + c + s \log s \right] )$.

If the sequence can be computed with all updates postponed until the end of
the sequence, then the total time is no worse than $O(a + s \log n)$ to
process each \verb'A' matrix, for $t$ assignments, and then a single
\verb'build' at the end, taking $O(n + c + st \log st)$ time.  The total time
is $O (t \left[ a + s \log n \right] + (n + c + st \log st))$.  If no new
entries appear in \verb'C', the time drops to
$O (t \left[ a + s \log n \right])$, and in this case, the time for both
methods is the same; both are equally efficient.

A few simplifying assumptions are useful to compare these times.  Consider a
graph of $n$ nodes with $O(n)$ edges, and with a constant bound on the degree
of each node.  The asymptotic bounds assume a worst-case scenario where
\verb'C' has at least some dense rows (thus the $\log n$ terms).  If these are
not present, if both $t$ and $c$ are $O(n)$, and if $a$ and $s$ are constants,
then the total time with blocking mode becomes $O(n^2)$, assuming the pattern
of \verb'C' changes at each assignment.  This is very high for a sparse graph
problem.  In contrast, the non-blocking time becomes $O(n \log n)$ under these
same assumptions, which is asymptotically much faster.

\newpage
The difference in practice can be very dramatic, since $n$ can be many
millions for sparse graphs with $n$ nodes and $O(n)$ edges, which can be
handled on a commodity laptop.

The following guidelines should be considered when using \verb'GxB_subassign',
\verb'GrB_assign' and \verb'GrB_*_setElement'.

\begin{enumerate}

\item A sequence of assignments that does not modify the pattern at all is
fast, taking as little as $\Omega(1)$ time per entry modified.  The worst case
time complexity is $O(\log n)$ per entry, assuming they all modify a dense row
of \verb'C' with \verb'n' entries, which can occur in practice.  It is more
common, however, that most rows of \verb'C' have a constant number of entries,
independent of \verb'n'.  No work is ever left pending when the pattern of
\verb'C' does not change.

\item A sequence of assignments that modifies the entries that already exist
in the pattern of a matrix, or adds new entries to the pattern (using the same
\verb'accum' operator), but does not delete any entries, is fast.  The matrix
is not completed until the end of the sequence.

\item Similarly, a sequence that modifies existing entries, or deletes them,
but does not add new ones, is also fast.  This sequence can also repeatedly
delete pre-existing entries and then reinstate them and still be fast.
The matrix is not completed until the end of the sequence.

\item A sequence that mixes assignments of types (2) and (3) above can be
costly, since the matrix may need to be completed after each assignment.  The
time complexity can become quadratic in the worst case.

\item However, any single assignment takes no more than
$O (a + s \log n + n + c + s \log s )$ time, even including the time for a
matrix completion, where \verb'C' is $n$-by-$n$ with $c$ entries and \verb'A'
is $a$-by-$a$ with $s$ entries.  This time is essentially linear in the size
of the matrix \verb'C', if \verb'A' is relatively small and sparse compared
with \verb'C'.  In this case, $n+c$ are the two dominant terms.

\item In general, \verb'GxB_subassign' is faster than \verb'GrB_assign'.
If \verb'GrB_REPLACE' is used with \verb'GrB_assign', the entire matrix
\verb'C' must be traversed.  This is much slower than \verb'GxB_subassign',
which only needs to examine the \verb'C(I,J)' submatrix.  Furthermore,
\verb'GrB_assign' must deal with a much larger \verb'Mask' matrix, whereas
\verb'GxB_subassign' has a smaller mask.  Since its mask is smaller,
\verb'GxB_subassign' takes less time than \verb'GrB_assign' to access the
mask.

\end{enumerate}

% see GraphBLAS/Test/test46.m

Submatrix assignment in SuiteSparse:GraphBLAS is extremely efficient, even
without considering the advantages of non-blocking mode discussed in
Section~\ref{compare_assign}.  It can be up to 500x faster than MATLAB R2019b,
or even higher depending on the kind of matrix assignment.  MATLAB logical
indexing (the mask of GraphBLAS) is much faster with GraphBLAS than in MATLAB
R2019b; differences of up to 100,000x have been observed.

All of the 28 variants (each with their own source code) are either
asymptotically optimal, or to within a log factor of being asymptotically
optimal.  The methods are also fully parallel.  For hypersparse matrices, the
term $n$ in the expressions in the above discussion is dropped, and is
replaced with $h \log h$, in the worst case, where $h \ll n$ is the number of
non-empty columns of a hypersparse matrix stored by column, or the number of
non-empty rows of a hypersparse matrix stored by row.  In many methods, $n$ is
replaced with $h$, not $h \log h$.

\newpage
%===============================================================================
\subsection{{\sf GrB\_apply:} apply a unary or binary operator}
%===============================================================================
\label{apply}

\verb'GrB_apply' is the generic name for 62 specific functions.
\verb'GrB_Vector_apply' and \verb'GrB_Matrix_apply' apply a unary operator to
the entries of a vector or matrix.  \verb'GrB_*_apply_BinaryOp1st*' applies a
binary operator where a single scalar is provided as the $x$ input to the
binary operator.  \verb'GrB_*_apply_BinaryOp2nd*' applies a binary operator
where a single scalar is provided as the $y$ input to the binary operator.
The generic name appears in the function prototypes, but the specific function
name is used when describing each variation.  When discussing features that
apply to all versions, the simple name \verb'GrB_apply' is used.
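For illustration, the following sketch (the vector names are placeholders)
uses the unary-operator variant \verb'GrB_Vector_apply', described in
Section~\ref{apply_vector}, to negate every entry of a double-precision vector
with the built-in \verb'GrB_AINV_FP64' operator:

{\footnotesize
\begin{verbatim}
    // w = -u, applied only to entries in the pattern of u
    // (u and w are assumed to be existing GrB_FP64 vectors of the same size)
    GrB_Vector_apply (w, NULL, NULL, GrB_AINV_FP64, u, NULL) ;
\end{verbatim}}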
% \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Vector\_apply:} apply a unary operator to a vector} %------------------------------------------------------------------------------- \label{apply_vector} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_apply // w<mask> = accum (w, op(u)) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GrB_UnaryOp op, // operator to apply to the entries const GrB_Vector u, // first input: vector u const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_apply' applies a unary operator to the entries of a vector, analogous to \verb't = op(u)' in MATLAB except the operator \verb'op' is only applied to entries in the pattern of \verb'u'. Implicit values outside the pattern of \verb'u' are not affected. The entries in \verb'u' are typecasted into the \verb'xtype' of the unary operator. The vector \verb't' has the same type as the \verb'ztype' of the unary operator. The final step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Matrix\_apply:} apply a unary operator to a matrix} %------------------------------------------------------------------------------- \label{apply_matrix} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_apply // C<Mask> = accum (C, op(A)) or op(A') ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const GrB_UnaryOp op, // operator to apply to the entries const GrB_Matrix A, // first input: matrix A const GrB_Descriptor desc // descriptor for C, mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_apply' applies a unary operator to the entries of a matrix, analogous to \verb'T = op(A)' in MATLAB except the operator \verb'op' is only applied to entries in the pattern of \verb'A'. Implicit values outside the pattern of \verb'A' are not affected. The input matrix \verb'A' may be transposed first. The entries in \verb'A' are typecasted into the \verb'xtype' of the unary operator. The matrix \verb'T' has the same type as the \verb'ztype' of the unary operator. The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}. The built-in \verb'GrB_IDENTITY_'$T$ operators (one for each built-in type $T$) are very useful when combined with this function, enabling it to compute ${\bf C \langle M \rangle = C \odot A}$. This makes \verb'GrB_apply' a direct interface to the accumulator/mask function for both matrices and vectors. The \verb'GrB_IDENTITY_'$T$ operators also provide the fastest stand-alone typecasting methods in SuiteSparse:GraphBLAS, with all $13 \times 13=169$ methods appearing as individual functions, to typecast between any of the 13 built-in types. To compute ${\bf C \langle M \rangle = A}$ or ${\bf C \langle M \rangle = C \odot A}$ for user-defined types, the user application would need to define an identity operator for the type. 
Since GraphBLAS cannot detect that it is an identity operator, it must call the operator to make the full copy \verb'T=A' and apply the operator to each entry of the matrix or vector. The other GraphBLAS operation that provides a direct interface to the accumulator/mask function is \verb'GrB_transpose', which does not require an operator to perform this task. As a result, \verb'GrB_transpose' can be used as an efficient and direct interface to the accumulator/mask function for both built-in and user-defined types. However, it is only available for matrices, not vectors. \newpage %=============================================================================== \subsubsection{{\sf GrB\_Vector\_apply\_BinaryOp1st:} apply a binary operator to a vector; 1st scalar binding} %=============================================================================== \label{vector_apply1st} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_apply // w<mask> = accum (w, op(x,u)) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GrB_BinaryOp op, // operator to apply to the entries <type> x, // first input: scalar x const GrB_Vector u, // second input: vector u const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_apply_BinaryOp1st_<type>' applies a binary operator $z=f(x,y)$ to a vector, where a scalar $x$ is bound to the first input of the operator. It is otherwise identical to \verb'GrB_Vector_apply'. With no suffix, \verb'GxB_Vector_apply_BinaryOp1st' takes as input a \verb'GxB_Scalar'. %=============================================================================== \subsubsection{{\sf GrB\_Vector\_apply\_BinaryOp2nd:} apply a binary operator to a vector; 2nd scalar binding} %=============================================================================== \label{vector_apply2nd} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_apply // w<mask> = accum (w, op(u,y)) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GrB_BinaryOp op, // operator to apply to the entries const GrB_Vector u, // first input: vector u <type> y, // second input: scalar y const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Vector_apply_BinaryOp2nd_<type>' applies a binary operator $z=f(x,y)$ to a vector, where a scalar $y$ is bound to the second input of the operator. It is otherwise identical to \verb'GrB_Vector_apply'. With no suffix, \verb'GxB_Vector_apply_BinaryOp2nd' takes as input a \verb'GxB_Scalar'. 
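For illustration, a minimal sketch (the vector names are placeholders) that
scales a double-precision vector by binding the scalar $y=2$ to the second
input of the built-in \verb'GrB_TIMES_FP64' operator:

{\footnotesize
\begin{verbatim}
    // w = 2*u, applied only to entries in the pattern of u
    // (u and w are assumed to be existing GrB_FP64 vectors of the same size)
    GrB_Vector_apply_BinaryOp2nd_FP64 (w, NULL, NULL, GrB_TIMES_FP64, u, 2.0, NULL) ;
\end{verbatim}}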
\newpage
%===============================================================================
\subsubsection{{\sf GrB\_Matrix\_apply\_BinaryOp1st:} apply a binary operator
to a matrix; 1st scalar binding}
%===============================================================================
\label{matrix_apply1st}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_apply                  // C<M>=accum(C,op(x,A))
(
    GrB_Matrix C,                   // input/output matrix for results
    const GrB_Matrix Mask,          // optional mask for C, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for Z=accum(C,T)
    const GrB_BinaryOp op,          // operator to apply to the entries
    <type> x,                       // first input:  scalar x
    const GrB_Matrix A,             // second input: matrix A
    const GrB_Descriptor desc       // descriptor for C, mask, and A
) ;
\end{verbatim} }
\end{mdframed}

\verb'GrB_Matrix_apply_BinaryOp1st_<type>' applies a binary operator
$z=f(x,y)$ to a matrix, where a scalar $x$ is bound to the first input of the
operator.  It is otherwise identical to \verb'GrB_Matrix_apply'.  With no
suffix, \verb'GxB_Matrix_apply_BinaryOp1st' takes as input a
\verb'GxB_Scalar'.  To transpose the input matrix, use the \verb'GrB_INP1'
descriptor setting.

%===============================================================================
\subsubsection{{\sf GrB\_Matrix\_apply\_BinaryOp2nd:} apply a binary operator
to a matrix; 2nd scalar binding}
%===============================================================================
\label{matrix_apply2nd}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_apply                  // C<M>=accum(C,op(A,y))
(
    GrB_Matrix C,                   // input/output matrix for results
    const GrB_Matrix Mask,          // optional mask for C, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for Z=accum(C,T)
    const GrB_BinaryOp op,          // operator to apply to the entries
    const GrB_Matrix A,             // first input:  matrix A
    <type> y,                       // second input: scalar y
    const GrB_Descriptor desc       // descriptor for C, mask, and A
) ;
\end{verbatim} }
\end{mdframed}

\verb'GrB_Matrix_apply_BinaryOp2nd_<type>' applies a binary operator
$z=f(x,y)$ to a matrix, where a scalar $y$ is bound to the second input of the
operator.  It is otherwise identical to \verb'GrB_Matrix_apply'.  With no
suffix, \verb'GxB_Matrix_apply_BinaryOp2nd' takes as input a
\verb'GxB_Scalar'.  To transpose the input matrix, use the \verb'GrB_INP0'
descriptor setting.

\newpage
%===============================================================================
\subsection{{\sf GxB\_select:} apply a select operator}
%===============================================================================
\label{select}

The \verb'GxB_select' function is the generic name for two specific functions: \\
\verb'GxB_Vector_select' and \verb'GxB_Matrix_select'.  The generic name
appears in the function prototypes, but the specific function name is used
when describing each variation.  When discussing features that apply to both
versions, the simple name \verb'GxB_select' is used.
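As a brief illustration of \verb'GxB_Matrix_select' (described below; the
matrix names are placeholders), the following sketch keeps only the entries on
and below the main diagonal.  With \verb'Thunk' passed as \verb'NULL', the
diagonal offset defaults to \verb'k=0':

{\footnotesize
\begin{verbatim}
    // L = tril (A): keep only entries on and below the main diagonal
    // (A and L are assumed to be existing matrices of the same size and type)
    GxB_Matrix_select (L, NULL, NULL, GxB_TRIL, A, NULL, NULL) ;
\end{verbatim}}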
% \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Vector\_select:} apply a select operator to a vector} %------------------------------------------------------------------------------- \label{select_vector} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_select // w<mask> = accum (w, op(u,k)) ( GrB_Vector w, // input/output vector for results const GrB_Vector mask, // optional mask for w, unused if NULL const GrB_BinaryOp accum, // optional accum for z=accum(w,t) const GxB_SelectOp op, // operator to apply to the entries const GrB_Vector u, // first input: vector u const GxB_Scalar Thunk, // optional input for the select operator const GrB_Descriptor desc // descriptor for w and mask ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Vector_select' applies a select operator to the entries of a vector, analogous to \verb't = u.*op(u)' in MATLAB except the operator \verb'op' is only applied to entries in the pattern of \verb'u'. Implicit values outside the pattern of \verb'u' are not affected. If the operator is not type-generic, the entries in \verb'u' are typecasted into the \verb'xtype' of the select operator. The vector \verb't' has the same type and size as \verb'u'. The final step is ${\bf w \langle m \rangle = w \odot t}$, as described in Section~\ref{accummask}, except that all the terms are column vectors instead of matrices. This operation operates on vectors just as if they were \verb'm'-by-1 matrices, except that GraphBLAS never transposes a vector via the descriptor. The \verb'op' is passed \verb'n=1' as the number of columns. Refer to the next section on \verb'GxB_Matrix_select' for more details. \newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GxB\_Matrix\_select:} apply a select operator to a matrix} %------------------------------------------------------------------------------- \label{select_matrix} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_select // C<Mask> = accum (C, op(A,k)) or op(A',k) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const GxB_SelectOp op, // operator to apply to the entries const GrB_Matrix A, // first input: matrix A const GxB_Scalar Thunk, // optional input for the select operator const GrB_Descriptor desc // descriptor for C, mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GxB_Matrix_select' applies a select operator to the entries of a matrix, analogous to \verb'T = A .* op(A)' in MATLAB except the operator \verb'op' is only applied to entries in the pattern of \verb'A'. Implicit values outside the pattern of \verb'A' are not affected. The input matrix \verb'A' may be transposed first. If the operator is not type-generic, the entries in \verb'A' are typecasted into the \verb'xtype' of the select operator. The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}. The matrix \verb'T' has the same size and type as \verb'A' (or the transpose of \verb'A' if the input is transposed via the descriptor). The entries of \verb'T' are a subset of those of \verb'A'. Each entry \verb'A(i,j)' of \verb'A' is passed to the \verb'op', as $z=f(i,j,m,n,a_{ij},\mbox{thunk})$, where \verb'A' is $m$-by-$n$. 
If \verb'A' is transposed first then the operator is applied to entries in the
transposed matrix, \verb"A'".  If $z$ is returned as true, then the entry is
copied into \verb'T', unchanged.  If it returns false, the entry does not
appear in \verb'T'.

If \verb'Thunk' is not \verb'NULL', it must be a valid \verb'GxB_Scalar'.  If
it has no entry, it is treated as if it had a single entry equal to zero, for
built-in types (not user-defined types).  For user-defined select operators,
the entry is passed to the user-defined select operator, with no typecasting.
Its type must be identical to the \verb'ttype' of the select operator.

For the \verb'GxB_TRIL', \verb'GxB_TRIU', \verb'GxB_DIAG', and
\verb'GxB_OFFDIAG' select operators, the \verb'Thunk' parameter may be
\verb'NULL', or it may be present but contain no entry.  In this case, these
operators use the value of \verb'k=0', the main diagonal.  If present, the
\verb'Thunk' can be any built-in type.  The value of this entry is typecasted:
\verb'k = (int64_t) Thunk'.  The value \verb'k=0' specifies the main diagonal
of the matrix, \verb'k=1' is the +1 diagonal (the entries just above the main
diagonal), \verb'k=-1' is the -1 diagonal, and so on.

For the \verb'GxB_*ZERO' select operators, \verb'Thunk' is ignored, and may be
\verb'NULL'.  For built-in types, with the \verb'GxB_*THUNK' operators, the
value of \verb'Thunk' is typecasted to the same type as the \verb'A' matrix.
For user-defined types, \verb'Thunk' is passed to the select operator without
typecasting.

The action of \verb'GxB_select' with the built-in select operators is
described in the table below.  The MATLAB analogs are precise for \verb'tril'
and \verb'triu', but shorthand for the other operations.  The MATLAB
\verb'diag' function returns a column with the diagonal, if \verb'A' is a
matrix, whereas the matrix \verb'T' in \verb'GxB_select' always has the same
size as \verb'A' (or its transpose if the \verb'GrB_INP0' is set to
\verb'GrB_TRAN').  In the MATLAB analog column, \verb'diag' is as if it
operates like \verb'GxB_select', where \verb'T' is a matrix.

The following operators may be used on matrices with a user-defined type:
\verb'GxB_TRIL', \verb'GxB_TRIU', \verb'GxB_DIAG', \verb'GxB_OFFDIAG',
\verb'GxB_NONZERO', \verb'GxB_EQ_ZERO', \verb'GxB_NE_THUNK', and
\verb'GxB_EQ_THUNK'.

The comparators \verb'GxB_GT_*', \verb'GxB_GE_*', \verb'GxB_LT_*', and
\verb'GxB_LE_*' only work for built-in types.  All other built-in select
operators can be used for any type, both built-in and any user-defined type.

{\bf NOTE:} For floating-point values, comparisons with \verb'NaN' always
return false.  The built-in select operators should not be used with a scalar
\verb'thunk' that is equal to \verb'NaN'.  For this case, create a
user-defined select operator that performs the test with the ANSI C
\verb'isnan' function instead.

\vspace{0.2in}
{\small
\begin{tabular}{llp{3in}}
\hline
GraphBLAS & MATLAB & \\
name      & analog & \\
\hline
\verb'GxB_TRIL'    & \verb'T=tril(A,k)'   & Entries in \verb'T' are the
entries on and below the \verb'k'th diagonal of \verb'A'. \\
\verb'GxB_TRIU'    & \verb'T=triu(A,k)'   & Entries in \verb'T' are the
entries on and above the \verb'k'th diagonal of \verb'A'. \\
\verb'GxB_DIAG'    & \verb'T=diag(A,k)'   & Entries in \verb'T' are the
entries on the \verb'k'th diagonal of \verb'A'. \\
\verb'GxB_OFFDIAG' & \verb'T=A-diag(A,k)' & Entries in \verb'T' are all
entries not on the \verb'k'th diagonal of \verb'A'.
\\
\hline
\verb'GxB_NONZERO'  & \verb'T=A(A~=0)'  & Entries in \verb'T' are all entries
in \verb'A' that have nonzero value. \\
\verb'GxB_EQ_ZERO'  & \verb'T=A(A==0)'  & Entries in \verb'T' are all entries
in \verb'A' that are equal to zero. \\
\verb'GxB_GT_ZERO'  & \verb'T=A(A>0)'   & Entries in \verb'T' are all entries
in \verb'A' that are greater than zero. \\
\verb'GxB_GE_ZERO'  & \verb'T=A(A>=0)'  & Entries in \verb'T' are all entries
in \verb'A' that are greater than or equal to zero. \\
\verb'GxB_LT_ZERO'  & \verb'T=A(A<0)'   & Entries in \verb'T' are all entries
in \verb'A' that are less than zero. \\
\verb'GxB_LE_ZERO'  & \verb'T=A(A<=0)'  & Entries in \verb'T' are all entries
in \verb'A' that are less than or equal to zero. \\
\hline
\verb'GxB_NE_THUNK' & \verb'T=A(A~=k)'  & Entries in \verb'T' are all entries
in \verb'A' that are not equal to \verb'k'. \\
\verb'GxB_EQ_THUNK' & \verb'T=A(A==k)'  & Entries in \verb'T' are all entries
in \verb'A' that are equal to \verb'k'. \\
\verb'GxB_GT_THUNK' & \verb'T=A(A>k)'   & Entries in \verb'T' are all entries
in \verb'A' that are greater than \verb'k'. \\
\verb'GxB_GE_THUNK' & \verb'T=A(A>=k)'  & Entries in \verb'T' are all entries
in \verb'A' that are greater than or equal to \verb'k'. \\
\verb'GxB_LT_THUNK' & \verb'T=A(A<k)'   & Entries in \verb'T' are all entries
in \verb'A' that are less than \verb'k'. \\
\verb'GxB_LE_THUNK' & \verb'T=A(A<=k)'  & Entries in \verb'T' are all entries
in \verb'A' that are less than or equal to \verb'k'. \\
\hline
\end{tabular}
}
\vspace{0.2in}

\newpage
%===============================================================================
\subsection{{\sf GrB\_reduce:} reduce to a vector or scalar}
%===============================================================================
\label{reduce}

The generic function name \verb'GrB_reduce' may be used for all specific
functions discussed in this section.  When the details of a specific function
are discussed, the specific name is used for clarity.

%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Matrix\_reduce\_Monoid:} reduce a matrix to a vector}
%-------------------------------------------------------------------------------
\label{reduce_to_vector}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_reduce                 // w<mask> = accum (w,reduce(A))
(
    GrB_Vector w,                   // input/output vector for results
    const GrB_Vector mask,          // optional mask for w, unused if NULL
    const GrB_BinaryOp accum,       // optional accum for z=accum(w,t)
    const GrB_Monoid monoid,        // reduce monoid for t=reduce(A)
    const GrB_Matrix A,             // first input:  matrix A
    const GrB_Descriptor desc       // descriptor for w, mask, and A
) ;
\end{verbatim} }
\end{mdframed}

\verb'GrB_Matrix_reduce_Monoid' reduces a matrix to a column vector using a
monoid, roughly analogous to \verb"t = sum (A')" in MATLAB, in the default
case, where \verb't' is a column vector.  By default, the method reduces
across the rows to obtain a column vector; use \verb'GrB_TRAN' to reduce down
the columns.

The input matrix \verb'A' may be transposed first.  Its entries are then
typecast into the type of the \verb'reduce' operator or monoid.  The reduction
is applied to all entries in \verb'A (i,:)' to produce the scalar
\verb't (i)'.  This is done without the use of the identity value of the
monoid.  If the \verb'i'th row \verb'A (i,:)' has no entries, then
\verb't (i)' is not an entry in \verb't' and its value is implicit.
If \verb'A (i,:)' has a single entry, then that is the result \verb't (i)' and
\verb'reduce' is not applied at all for the \verb'i'th row.  Otherwise,
multiple entries in row \verb'A (i,:)' are reduced via the \verb'reduce'
operator or monoid to obtain a single scalar, the result \verb't (i)'.

The final step is ${\bf w \langle m \rangle = w \odot t}$, as described in
Section~\ref{accummask}, except that all the terms are column vectors instead
of matrices.

\verb'GrB_reduce' can also be passed a \verb'GrB_BinaryOp' in place of the
monoid parameter, but the binary operator must correspond to a known built-in
monoid.  This provides a limited implementation of
\verb'GrB_Matrix_reduce_BinaryOp'.

\newpage
%-------------------------------------------------------------------------------
\subsubsection{{\sf GrB\_Vector\_reduce\_$<$type$>$:} reduce a vector to a scalar}
%-------------------------------------------------------------------------------
\label{reduce_vector_to_scalar}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GrB_reduce                 // c = accum (c, reduce_to_scalar (u))
(
    <type> *c,                      // result scalar
    const GrB_BinaryOp accum,       // optional accum for c=accum(c,t)
    const GrB_Monoid monoid,        // monoid to do the reduction
    const GrB_Vector u,             // vector to reduce
    const GrB_Descriptor desc       // descriptor (currently unused)
) ;
\end{verbatim} }
\end{mdframed}

\verb'GrB_Vector_reduce_<type>' reduces a vector to a scalar, analogous to
\verb't = sum (u)' in MATLAB, except that in GraphBLAS any commutative and
associative monoid can be used in the reduction.

If the vector \verb'u' has no entries, the identity value of the \verb'monoid'
is copied into the scalar \verb't'.  Otherwise, all of the entries in the
vector are reduced to a single scalar using the \verb'monoid'.

The scalar type is any of the built-in types, or a user-defined type.  In the
function signature it is a C type: \verb'bool', \verb'int8_t', ...
\verb'float', \verb'double', or \verb'void *' for a user-defined type.  The
user-defined type must be identical to the type of the vector \verb'u'.  This
cannot be checked by GraphBLAS and thus results are undefined if the types are
not the same.

The descriptor is unused, but it appears in case it is needed in future
versions of the GraphBLAS API.

This function has no mask, so its accumulator/mask step differs from the other
GraphBLAS operations.  It does not use the methods described in
Section~\ref{accummask}, but uses the following method instead.

If \verb'accum' is \verb'NULL', then the scalar \verb't' is typecast into the
type of \verb'c', and \verb'c = t' is the final result.  Otherwise, the scalar
\verb't' is typecast into the \verb'ytype' of the \verb'accum' operator, and
the value of \verb'c' (on input) is typecast into the \verb'xtype' of the
\verb'accum' operator.  Next, the scalar \verb'z = accum (c,t)' is computed,
of the \verb'ztype' of the \verb'accum' operator.  Finally, \verb'z' is
typecast into the final result, \verb'c'.

Since this operation does not have a GraphBLAS input/output object, it cannot
return an error string for \verb'GrB_error'.
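For illustration, a minimal sketch (the vector name is a placeholder) that
sums all entries of a double-precision vector into a C scalar, using the
built-in \verb'GxB_PLUS_FP64_MONOID':

{\footnotesize
\begin{verbatim}
    // sum = plus-reduction of all entries of u
    // (u is assumed to be an existing GrB_Vector of type GrB_FP64)
    double sum = 0 ;
    GrB_Vector_reduce_FP64 (&sum, NULL, GxB_PLUS_FP64_MONOID, u, NULL) ;
\end{verbatim}}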
\newpage %------------------------------------------------------------------------------- \subsubsection{{\sf GrB\_Matrix\_reduce\_$<$type$>$:} reduce a matrix to a scalar} %------------------------------------------------------------------------------- \label{reduce_matrix_to_scalar} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_reduce // c = accum (c, reduce_to_scalar (A)) ( <type> *c, // result scalar const GrB_BinaryOp accum, // optional accum for c=accum(c,t) const GrB_Monoid monoid, // monoid to do the reduction const GrB_Matrix A, // matrix to reduce const GrB_Descriptor desc // descriptor (currently unused) ) ; \end{verbatim} } \end{mdframed} \verb'GrB_Matrix_reduce_<type>' reduces a matrix \verb'A' to a scalar, roughly analogous to \verb't = sum (A (:))' in MATLAB. This function is identical to reducing a vector to a scalar, since the positions of the entries in a matrix or vector have no effect on the result. Refer to the reduction to scalar described in the previous Section~\ref{reduce_vector_to_scalar}. Since this operation does not have a GraphBLAS input/output object, it cannot return an error string for \verb'GrB_error'. \newpage %=============================================================================== \subsection{{\sf GrB\_transpose:} transpose a matrix} %========================= %=============================================================================== \label{transpose} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_transpose // C<Mask> = accum (C, A') ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const GrB_Matrix A, // first input: matrix A const GrB_Descriptor desc // descriptor for C, Mask, and A ) ; \end{verbatim} } \end{mdframed} \verb'GrB_transpose' transposes a matrix \verb'A', just like the array transpose \verb"T = A.'" in MATLAB. The internal result matrix \verb"T = A'" (or merely \verb"T = A" if \verb'A' is transposed via the descriptor) has the same type as \verb'A'. The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}, which typecasts \verb'T' as needed and applies the mask and accumulator. To be consistent with the rest of the GraphBLAS API regarding the descriptor, the input matrix \verb'A' may be transposed first. It may seem counter-intuitive, but this has the effect of not doing any transpose at all. As a result, \verb'GrB_transpose' is useful for more than just transposing a matrix. It can be used as a direct interface to the accumulator/mask operation, ${\bf C \langle M \rangle = C \odot A}$. This step also does any typecasting needed, so \verb'GrB_transpose' can be used to typecast a matrix \verb'A' into another matrix \verb'C'. To do this, simply use \verb'NULL' for the \verb'Mask' and \verb'accum', and provide a non-default descriptor \verb'desc' that sets the transpose option: {\footnotesize \begin{verbatim} // C = typecasted copy of A GrB_Descriptor_set (desc, GrB_INP0, GrB_TRAN) ; GrB_transpose (C, NULL, NULL, A, desc) ; \end{verbatim}} If the types of \verb'C' and \verb'A' match, then the above two lines of code are the same as \verb'GrB_Matrix_dup (&C, A)', except that for \verb'GrB_transpose' the matrix \verb'C' must already exist and be the right size. 
If \verb'C' does not exist, the work of \verb'GrB_Matrix_dup' can be
replicated with this:

{\footnotesize
\begin{verbatim}
    // C = create an exact copy of A, just like GrB_Matrix_dup
    GrB_Matrix C ;
    GrB_Type type ;
    GrB_Index nrows, ncols ;
    GrB_Descriptor desc ;
    GxB_Matrix_type (&type, A) ;
    GrB_Matrix_nrows (&nrows, A) ;
    GrB_Matrix_ncols (&ncols, A) ;
    GrB_Matrix_new (&C, type, nrows, ncols) ;
    GrB_Descriptor_new (&desc) ;
    GrB_Descriptor_set (desc, GrB_INP0, GrB_TRAN) ;
    GrB_transpose (C, NULL, NULL, A, desc) ;
\end{verbatim}}

Since the input matrix \verb'A' is transposed by the descriptor,
SuiteSparse:Graph\-BLAS does the right thing and does not transpose the matrix
at all.  Since \verb'T = A' is not typecasted, SuiteSparse:GraphBLAS can
construct \verb'T' internally in $O(1)$ time and using no memory at all.  This
makes \verb'GrB_transpose' a fast and direct interface to the accumulator/mask
function in GraphBLAS.

This example is of course overkill, since the work can all be done by a single
call to the \verb'GrB_Matrix_dup' function.  However, the \verb'GrB_Matrix_dup'
function can only create \verb'C' as an exact copy of \verb'A', whereas
variants of the code above can do many more things with these two matrices.
For example, the \verb'type' in the example can be replaced with any other
type, perhaps selected from another matrix or from an operator.

Consider the following code excerpt, which uses \verb'GrB_transpose' to remove
all diagonal entries from a square matrix.  It first creates a diagonal
\verb'Mask', which is complemented so that ${\bf C \langle \neg M \rangle =A}$
does not modify the diagonal of ${\bf C}$.  The \verb'GrB_REPLACE' option
ensures that \verb'C' is cleared first, and then
${\bf C \langle \neg M \rangle = A}$ modifies all entries in ${\bf C}$ where
the mask ${\bf M}$ is false.  These correspond to all the off-diagonal
entries.  The descriptor ensures that ${\bf A}$ is not transposed at all.  The
\verb'Mask' can have any pattern, of course, and wherever it is set true, the
corresponding entries in \verb'A' are deleted from the copy \verb'C'.

{\footnotesize
\begin{verbatim}
    // remove all diagonal entries from the matrix A

    // Mask = speye (n) ;
    GrB_Matrix_new (&Mask, GrB_BOOL, n, n) ;
    for (int64_t i = 0 ; i < n ; i++)
    {
        GrB_Matrix_setElement (Mask, (bool) true, i, i) ;
    }

    // C<~Mask> = A, clearing C first.  No transpose.
GrB_Descriptor_new (&desc) ; GrB_Descriptor_set (desc, GrB_INP0, GrB_TRAN) ; GrB_Descriptor_set (desc, GrB_MASK, GrB_COMP) ; GrB_Descriptor_set (desc, GrB_OUTP, GrB_REPLACE) ; GrB_transpose (A, Mask, NULL, A, desc) ; \end{verbatim}} \newpage %=============================================================================== \subsection{{\sf GrB\_kronecker:} Kronecker product} %========================== %=============================================================================== \label{kron} \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GrB_kronecker // C<Mask> = accum (C, kron(A,B)) ( GrB_Matrix C, // input/output matrix for results const GrB_Matrix Mask, // optional mask for C, unused if NULL const GrB_BinaryOp accum, // optional accum for Z=accum(C,T) const <operator> op, // defines '*' for T=kron(A,B) const GrB_Matrix A, // first input: matrix A const GrB_Matrix B, // second input: matrix B const GrB_Descriptor desc // descriptor for C, Mask, A, and B ) ; \end{verbatim} } \end{mdframed} \verb'GrB_kronecker' computes the Kronecker product, ${\bf C \langle M \rangle = C \odot \mbox{kron}(A,B)}$ where \[ \mbox{kron}{\bf (A,B)} = \left[ \begin{array}{ccc} a_{00} \otimes {\bf B} & \ldots & a_{0,n-1} \otimes {\bf B} \\ \vdots & \ddots & \vdots \\ a_{m-1,0} \otimes {\bf B} & \ldots & a_{m-1,n-1} \otimes {\bf B} \\ \end{array} \right] \] The $\otimes$ operator is defined by the \verb'op' parameter. It is applied in an element-wise fashion (like \verb'GrB_eWiseMult'), where the pattern of the submatrix $a_{ij} \otimes {\bf B}$ is the same as the pattern of ${\bf B}$ if $a_{ij}$ is an entry in the matrix ${\bf A}$, or empty otherwise. The input matrices \verb'A' and \verb'B' can be of any dimension, and both matrices may be transposed first via the descriptor, \verb'desc'. Entries in \verb'A' and \verb'B' are typecast into the input types of the \verb'op'. The matrix \verb'T=kron(A,B)' has the same type as the \verb'ztype' of the binary operator, \verb'op'. The final step is ${\bf C \langle M \rangle = C \odot T}$, as described in Section~\ref{accummask}. The operator \verb'op' may be a \verb'GrB_BinaryOp', a \verb'GrB_Monoid', or a \verb'GrB_Semiring'. In the latter case, the multiplicative operator of the semiring is used. \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Printing GraphBLAS objects} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{fprint} The ten different objects handled by SuiteSparse:GraphBLAS are all opaque, although nearly all of their contents can be extracted via methods such as \verb'GrB_Matrix_extractTuples', \verb'GrB_Matrix_extractElement', \verb'GxB_Matrix_type', and so on. The GraphBLAS C API has no mechanism for printing all the contents of GraphBLAS objects, but this is helpful for debugging. 
Ten type-specific methods and two type-generic methods are provided:

\vspace{0.2in}
{\footnotesize
\begin{tabular}{ll}
\hline
\verb'GxB_Type_fprint'       & print and check a \verb'GrB_Type' \\
\verb'GxB_UnaryOp_fprint'    & print and check a \verb'GrB_UnaryOp' \\
\verb'GxB_BinaryOp_fprint'   & print and check a \verb'GrB_BinaryOp' \\
\verb'GxB_SelectOp_fprint'   & print and check a \verb'GxB_SelectOp' \\
\verb'GxB_Monoid_fprint'     & print and check a \verb'GrB_Monoid' \\
\verb'GxB_Semiring_fprint'   & print and check a \verb'GrB_Semiring' \\
\verb'GxB_Descriptor_fprint' & print and check a \verb'GrB_Descriptor' \\
\verb'GxB_Matrix_fprint'     & print and check a \verb'GrB_Matrix' \\
\verb'GxB_Vector_fprint'     & print and check a \verb'GrB_Vector' \\
\verb'GxB_Scalar_fprint'     & print and check a \verb'GxB_Scalar' \\
\hline
\verb'GxB_fprint'            & print/check any object to a file \\
\verb'GxB_print'             & print/check any object to \verb'stdout' \\
\hline
\end{tabular}
}
\vspace{0.2in}

These methods do not modify the status of any object, and thus they cannot
return an error string for use by \verb'GrB_error'.

If a matrix or vector has not been completed, the pending computations are
guaranteed to {\em not} be performed.  The reason is simple.  It is possible
for a bug in the user application (such as accessing memory outside the bounds
of an array) to mangle the internal content of a GraphBLAS object, and the
\verb'GxB_*print' methods can be helpful tools to track down this bug.  If
\verb'GxB_*print' attempted to complete any computations prior to printing or
checking the contents of the matrix or vector, then further errors could
occur, including a segfault.

By contrast, GraphBLAS methods and operations that return values into
user-provided arrays or variables might finish pending operations before they
return these values, and this would change their state.  Since they do not
change the state of any object, the \verb'GxB_*print' methods provide a useful
alternative for debugging, and for a quick understanding of what GraphBLAS is
computing while developing a user application.

Each of the methods has a parameter of type \verb'GxB_Print_Level' that
specifies the amount to print:

{\footnotesize
\begin{verbatim}
    typedef enum
    {
        GxB_SILENT = 0,     // nothing is printed, just check the object
        GxB_SUMMARY = 1,    // print a terse summary
        GxB_SHORT = 2,      // short description, about 30 entries of a matrix
        GxB_COMPLETE = 3,   // print the entire contents of the object
        GxB_SHORT_VERBOSE = 4,    // GxB_SHORT but with "%.15g" for doubles
        GxB_COMPLETE_VERBOSE = 5  // GxB_COMPLETE but with "%.15g" for doubles
    }
    GxB_Print_Level ;
\end{verbatim}}

The ten type-specific functions include an additional argument, the
\verb'name' string.  The \verb'name' is printed at the beginning of the
display (assuming the print level is not \verb'GxB_SILENT') so that the object
can be more easily identified in the output.  For the type-generic methods
\verb'GxB_fprint' and \verb'GxB_print', the \verb'name' string is the variable
name of the object itself.

If the file \verb'f' is \verb'NULL', nothing is printed (\verb'pr' is
effectively \verb'GxB_SILENT').  If \verb'name' is \verb'NULL', it is treated
as the empty string.  These are not error conditions.

The methods check their input objects carefully and extensively, even when
\verb'pr' is equal to \verb'GxB_SILENT'.
The following error codes can be returned:

\begin{packed_itemize}
\item \verb'GrB_SUCCESS':  object is valid
\item \verb'GrB_UNINITIALIZED_OBJECT':  object is not initialized
\item \verb'GrB_INVALID_OBJECT':  object is not valid
\item \verb'GrB_NULL_POINTER':  object is a NULL pointer
\item \verb'GrB_INVALID_VALUE':  \verb'fprintf' returned an I/O error
\end{packed_itemize}

The content of any GraphBLAS object is opaque, and subject to change.  As a
result, the exact content and format of what is printed is
implementation-dependent, and will change from version to version of
SuiteSparse:GraphBLAS.  Do not attempt to rely on the exact content or format
by trying to parse the resulting output via another program.  The intent of
these functions is to produce a report of an object for visual inspection.  If
the user application needs to extract content from a GraphBLAS matrix or
vector, use \verb'GrB_*_extractTuples' or the import/export methods instead.

\newpage
%===============================================================================
\subsection{{\sf GxB\_fprint:} Print a GraphBLAS object to a file} %============
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_fprint                 // print and check a GraphBLAS object
(
    GrB_<objecttype> object,        // object to print and check
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

The \verb'GxB_fprint' function prints the contents of any of the ten GraphBLAS
objects to the file \verb'f'.  If \verb'f' is \verb'NULL', the results are
printed to \verb'stdout'.  For example, to print the entire contents of a
matrix \verb'A' to the file \verb'f', use
\verb'GxB_fprint (A, GxB_COMPLETE, f)'.

%===============================================================================
\subsection{{\sf GxB\_print:} Print a GraphBLAS object to {\sf stdout}} %=======
%===============================================================================
\label{gxb_print}

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_print                  // print and check a GraphBLAS object
(
    GrB_<objecttype> object,        // object to print and check
    GxB_Print_Level pr              // print level
) ;
\end{verbatim}
} \end{mdframed}

\verb'GxB_print' is the same as \verb'GxB_fprint', except that it prints the
contents of the object to \verb'stdout' instead of a file \verb'f'.  For
example, to print the entire contents of a matrix \verb'A', use
\verb'GxB_print (A, GxB_COMPLETE)'.

%===============================================================================
\subsection{{\sf GxB\_Type\_fprint:} Print a {\sf GrB\_Type}}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Type_fprint            // print and check a GrB_Type
(
    GrB_Type type,                  // object to print and check
    const char *name,               // name of the object
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

For example, \verb'GxB_Type_fprint (GrB_BOOL, "boolean type", GxB_COMPLETE, f)'
prints the contents of the \verb'GrB_BOOL' object to the file \verb'f'.
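Printing is particularly useful for user-defined types, whose contents are
otherwise hard to inspect.  The fragment below is a minimal sketch, not taken
from the \verb'Demo' folder; the \verb'xy' struct and the name string are
hypothetical and used only for illustration.

{\footnotesize
\begin{verbatim}
    // a hypothetical user-defined type, created only to illustrate printing
    typedef struct { double x ; double y ; } xy ;
    GrB_Type XY ;
    GrB_Type_new (&XY, sizeof (xy)) ;
    GxB_Type_fprint (XY, "XY", GxB_COMPLETE, stdout) ;   // report the type size
    GrB_free (&XY) ;
\end{verbatim}}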
\newpage %=============================================================================== \subsection{{\sf GxB\_UnaryOp\_fprint:} Print a {\sf GrB\_UnaryOp}} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_UnaryOp_fprint // print and check a GrB_UnaryOp ( GrB_UnaryOp unaryop, // object to print and check const char *name, // name of the object GxB_Print_Level pr, // print level FILE *f // file for output ) ; \end{verbatim} } \end{mdframed} For example, \verb'GxB_UnaryOp_fprint (GrB_LNOT, "not", GxB_COMPLETE, f)' prints the \verb'GrB_LNOT' unary operator to the file \verb'f'. %=============================================================================== \subsection{{\sf GxB\_BinaryOp\_fprint:} Print a {\sf GrB\_BinaryOp}} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_BinaryOp_fprint // print and check a GrB_BinaryOp ( GrB_BinaryOp binaryop, // object to print and check const char *name, // name of the object GxB_Print_Level pr, // print level FILE *f // file for output ) ; \end{verbatim} } \end{mdframed} For example, \verb'GxB_BinaryOp_fprint (GrB_PLUS_FP64, "plus", GxB_COMPLETE, f)' prints the \verb'GrB_PLUS_FP64' binary operator to the file \verb'f'. %=============================================================================== \subsection{{\sf GxB\_SelectOp\_fprint:} Print a {\sf GxB\_SelectOp}} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_SelectOp_fprint // print and check a GxB_SelectOp ( GxB_SelectOp selectop, // object to print and check const char *name, // name of the object GxB_Print_Level pr, // print level FILE *f // file for output ) ; \end{verbatim} } \end{mdframed} For example, \verb'GxB_SelectOp_fprint (GxB_TRIL, "tril", GxB_COMPLETE, f)' prints the \verb'GxB_TRIL' select operator to the file \verb'f'. \newpage %=============================================================================== \subsection{{\sf GxB\_Monoid\_fprint:} Print a {\sf GrB\_Monoid}} %=============================================================================== \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} GrB_Info GxB_Monoid_fprint // print and check a GrB_Monoid ( GrB_Monoid monoid, // object to print and check const char *name, // name of the object GxB_Print_Level pr, // print level FILE *f // file for output ) ; \end{verbatim} } \end{mdframed} For example, \verb'GxB_Monoid_fprint (GxB_PLUS_FP64_MONOID, "plus monoid",' \verb'GxB_COMPLETE, f)' prints the predefined \verb'GxB_PLUS_FP64_MONOID' (based on the binary operator \verb'GrB_PLUS_FP64') to the file \verb'f'. 
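The same call works for user-defined monoids.  The fragment below is a minimal
sketch, not taken from the \verb'Demo' folder, that builds a \verb'MAX' monoid
over \verb'FP64' and prints it; the variable name \verb'Max' is arbitrary.

{\footnotesize
\begin{verbatim}
    // build a user-defined MAX monoid and print it
    // (-INFINITY, the identity of MAX, requires <math.h>)
    GrB_Monoid Max = NULL ;
    GrB_Monoid_new_FP64 (&Max, GrB_MAX_FP64, (double) -INFINITY) ;
    GxB_Monoid_fprint (Max, "max monoid", GxB_COMPLETE, stdout) ;
    GrB_free (&Max) ;
\end{verbatim}}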
%===============================================================================
\subsection{{\sf GxB\_Semiring\_fprint:} Print a {\sf GrB\_Semiring}}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Semiring_fprint        // print and check a GrB_Semiring
(
    GrB_Semiring semiring,          // object to print and check
    const char *name,               // name of the object
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

For example, \verb'GxB_Semiring_fprint (GxB_PLUS_TIMES_FP64, "standard",'
\verb'GxB_COMPLETE, f)' prints the predefined \verb'GxB_PLUS_TIMES_FP64'
semiring to the file \verb'f'.

%===============================================================================
\subsection{{\sf GxB\_Descriptor\_fprint:} Print a {\sf GrB\_Descriptor}}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Descriptor_fprint      // print and check a GrB_Descriptor
(
    GrB_Descriptor descriptor,      // object to print and check
    const char *name,               // name of the object
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

For example, \verb'GxB_Descriptor_fprint (d, "descriptor", GxB_COMPLETE, f)'
prints the descriptor \verb'd' to the file \verb'f'.

\newpage
%===============================================================================
\subsection{{\sf GxB\_Matrix\_fprint:} Print a {\sf GrB\_Matrix}}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Matrix_fprint          // print and check a GrB_Matrix
(
    GrB_Matrix A,                   // object to print and check
    const char *name,               // name of the object
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

For example, \verb'GxB_Matrix_fprint (A, "my matrix", GxB_SHORT, f)' prints
about 30 entries from the matrix \verb'A' to the file \verb'f'.

%===============================================================================
\subsection{{\sf GxB\_Vector\_fprint:} Print a {\sf GrB\_Vector}}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Vector_fprint          // print and check a GrB_Vector
(
    GrB_Vector v,                   // object to print and check
    const char *name,               // name of the object
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

For example, \verb'GxB_Vector_fprint (v, "my vector", GxB_SHORT, f)' prints
about 30 entries from the vector \verb'v' to the file \verb'f'.

%===============================================================================
\subsection{{\sf GxB\_Scalar\_fprint:} Print a {\sf GxB\_Scalar}}
%===============================================================================

\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
GrB_Info GxB_Scalar_fprint          // print and check a GxB_Scalar
(
    GxB_Scalar s,                   // object to print and check
    const char *name,               // name of the object
    GxB_Print_Level pr,             // print level
    FILE *f                         // file for output
) ;
\end{verbatim}
} \end{mdframed}

For example, \verb'GxB_Scalar_fprint (s, "my scalar", GxB_SHORT, f)' prints a
short description of the scalar \verb's' to the file \verb'f'.
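Taken together, these methods work well as assertions while developing a user
application.  The fragment below is a minimal sketch, not taken from the
\verb'Demo' folder, that silently validates a matrix and prints a short report
only if something is wrong.

{\footnotesize
\begin{verbatim}
    // silently check the matrix A; print a short report only on failure
    GrB_Info info = GxB_Matrix_fprint (A, "A", GxB_SILENT, NULL) ;
    if (info != GrB_SUCCESS)
    {
        GxB_Matrix_fprint (A, "A (invalid)", GxB_SHORT, stdout) ;
    }
\end{verbatim}}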
\newpage
%===============================================================================
\subsection{Performance and portability considerations}
%===============================================================================

Even when the print level is \verb'GxB_SILENT', these methods extensively
check the contents of the objects passed to them, which can take some time.
They should be considered debugging tools only, not for final use in
production.

The return value of the \verb'GxB_*print' methods can be relied upon, but the
output to the file (or \verb'stdout') can change from version to version.  If
these methods are eventually added to the GraphBLAS C API Specification, a
conforming implementation might never print anything at all, regardless of the
\verb'pr' value.  This may be essential if the GraphBLAS library is installed
on a dedicated device, with no file output, for example.

Some implementations may wish to print nothing at all if the matrix is not yet
completed, or just an indication that the matrix has pending operations and
cannot be printed, when non-blocking mode is employed.  In this case, use
\verb'GrB_Matrix_wait', \verb'GrB_Vector_wait', or \verb'GxB_Scalar_wait' to
finish all pending computations first.

If a matrix or vector has pending operations, SuiteSparse:GraphBLAS prints a
list of the {\em pending tuples}, which are the entries not yet inserted into
the primary data structure.  It can also print out entries that remain in the
data structure but are awaiting deletion; these are called {\em zombies} in
the output report.  Most of the rest of the report is self-explanatory.

\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Examples} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{examples}

\begin{alert}
{\bf NOTE:} The programs in the \verb'Demo' folder are not always the fastest
methods.  They are simple methods for illustration only, not performance.  Do
not benchmark them.  Refer to the latest (draft) \verb'LAGraph' package for
the fastest methods.  Be sure to use the right combination of package versions
between LAGraph and SuiteSparse:GraphBLAS.  Contact the author
([email protected]) if you have any questions about how to properly benchmark
LAGraph + SuiteSparse:GraphBLAS.
\end{alert}

Several examples of how to use GraphBLAS are listed below.  They all appear in
the \verb'Demo' folder of SuiteSparse:GraphBLAS.

\begin{enumerate}
\item performing a breadth-first search,
\item finding a maximal independent set,
\item creating a random matrix,
\item creating a finite-element matrix,
\item reading a matrix from a file,
\item complex numbers as a user-defined type,
\item triangle counting,
\item PageRank, and
\item matrix import/export.
\end{enumerate}

Additional examples appear in the newly created LAGraph project, currently in
progress.

%-------------------------------------------------------------------------------
\subsection{LAGraph}
%-------------------------------------------------------------------------------
\label{lagraph}

The LAGraph project is a community-wide effort to create graph algorithms
based on GraphBLAS (any implementation of the API, not just
SuiteSparse:GraphBLAS).  Some of the algorithms and utilities in LAGraph are
listed in the table below.  Many additional algorithms are planned.
Refer to \url{https://github.com/GraphBLAS/LAGraph} for a current list of
algorithms (the one below will soon be out of date).  Most of the functions in
the \verb'Demo/' folder in SuiteSparse:GraphBLAS will eventually be translated
into algorithms or utilities for LAGraph.

To use LAGraph with SuiteSparse:GraphBLAS, place the two folders \verb'LAGraph'
and \verb'GraphBLAS' in the same parent directory.  This allows the
\verb'cmake' script in LAGraph to find the copy of GraphBLAS.  Alternatively,
the GraphBLAS source could be placed anywhere, as long as
\verb'sudo make install' is performed.  Build \verb'GraphBLAS' first, then the
\verb'LAGraph' library, and then the tests in \verb'LAGraph/Test'.

Many of these algorithms are described in \cite{Davis20}.

\vspace{0.1in}
{\small
\begin{tabular}{ll}
\hline
\hline
Algorithms                  & description \\
\hline
\hline
\verb'LAGraph_bfs_parent2'  & a direction-optimized BFS \cite{Beamer:2012:DOB,Yang:2018:IPE}, \\
                            & typically 2x faster than \verb'bfs5m' \\
\verb'LAGraph_bfs_simple'   & a simple BFS (about the same as \verb'bfs5m') \\
\verb'LAGraph_bc_batch'     & batched betweenness-centrality \\
\verb'LAGraph_bc'           & betweenness-centrality \\
\verb'LAGraph_cdlp'         & community detection via label propagation \\
\verb'LAGraph_cc'           & connected components \\
\verb'LAGraph_BF_*'         & three variants of Bellman-Ford \\
\verb'LAGraph_allktruss'    & construct all $k$-trusses \\
\verb'LAGraph_dnn'          & sparse deep neural network \cite{DavisAznavehKolodziej19} \\
\verb'LAGraph_ktruss'       & construct a single $k$-truss \\
\verb'LAGraph_lcc'          & local clustering coefficient \\
\verb'LAGraph_pagerank'     & PageRank \\
\verb'LAGraph_pagerank2'    & PageRank variant \\
\verb'LAGraph_tricount'     & triangle counting \\
\end{tabular}}

\vspace{0.1in}
{\small
\begin{tabular}{ll}
\hline
\hline
Utilities                     & description \\
\hline
\hline
\verb'LAGraph_Vector_isall'   & tests 2 vectors with a binary operator \\
\verb'LAGraph_Vector_isequal' & tests if 2 vectors are equal \\
\verb'LAGraph_Vector_to_dense' & converts a vector to dense \\
\verb'LAGraph_alloc_global'   & types, operators, monoids, and semirings \\
\verb'LAGraph_finalize'       & ends LAGraph \\
\verb'LAGraph_free'           & wrapper for \verb'free' \\
\verb'LAGraph_free_global'    & frees objects created by \verb'_alloc_global'\\
\verb'LAGraph_get_nthreads'   & get \# of threads used \\
\verb'LAGraph_grread'         & read a binary matrix in Galois format \\
\verb'LAGraph_init'           & starts LAGraph \\
\verb'LAGraph_isall'          & tests 2 matrices with a binary operator \\
\verb'LAGraph_isequal'        & tests if 2 matrices are equal \\
\verb'LAGraph_ispattern'      & tests if all entries in a matrix are 1 \\
\verb'LAGraph_malloc'         & wrapper for \verb'malloc' \\
\verb'LAGraph_mmread'         & read a Matrix Market file \\
\verb'LAGraph_mmwrite'        & write a Matrix Market file \\
\verb'LAGraph_pattern'        & extracts the pattern of a matrix \\
\verb'LAGraph_prune_diag'     & removes diagonal entries from a matrix \\
\verb'LAGraph_rand'           & simple random number generator \\
\verb'LAGraph_rand64'         & \verb'int64_t' random number generator \\
\verb'LAGraph_random'         & random matrix generator \\
\verb'LAGraph_randx'          & \verb'double' random number generator \\
\verb'LAGraph_set_nthreads'   & set \# of threads to use \\
\verb'LAGraph_tic'            & start a timer \\
\verb'LAGraph_toc'            & end a timer \\
\verb'LAGraph_tsvread'        & read a TSV file \\
\verb'LAGraph_xinit'          & starts LAGraph, with different malloc \\
\verb'LAgraph_1_to_n'         & construct the vector \verb'1:n' \\
\verb'GB_*sort*'              & sorting for \verb'LAGraph_cdlp' \\
% \verb'LAGraph_internal.h'
\end{tabular}}
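As a concrete starting point, the fragment below sketches a small driver that
reads a graph and calls \verb'LAGraph_bfs_simple' (listed in
Section~\ref{bfs}).  The exact signatures of \verb'LAGraph_init',
\verb'LAGraph_mmread', and \verb'LAGraph_finalize' are assumptions here, and
the input file name is hypothetical; refer to \verb'LAGraph.h' for the current
API, which is still in flux.

{\footnotesize
\begin{verbatim}
    // hypothetical LAGraph driver; check LAGraph.h for the exact API
    #include "LAGraph.h"
    int main (void)
    {
        GrB_Matrix A = NULL ;
        GrB_Vector level = NULL ;
        LAGraph_init ( ) ;                      // start LAGraph and GraphBLAS
        FILE *f = fopen ("graph.mtx", "r") ;    // hypothetical Matrix Market file
        LAGraph_mmread (&A, f) ;                // read the graph
        fclose (f) ;
        LAGraph_bfs_simple (&level, A, 0) ;     // BFS from source node 0
        GxB_print (level, GxB_SHORT) ;          // show about 30 entries
        GrB_free (&A) ;
        GrB_free (&level) ;
        LAGraph_finalize ( ) ;                  // end LAGraph and GraphBLAS
        return (0) ;
    }
\end{verbatim}}

\newpage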
%-------------------------------------------------------------------------------
\subsection{Breadth-first search}
%-------------------------------------------------------------------------------
\label{bfs}

The \verb'bfs' examples in the \verb'Demo' folder provide several examples of
how to compute a breadth-first search (BFS) in GraphBLAS.  Additional BFS
examples are in LAGraph, shown below.

The \verb'LAGraph_bfs_simple' function starts at a given source node \verb's'
of an undirected graph with \verb'n' nodes.  The graph is represented as an
\verb'n'-by-\verb'n' matrix, \verb'A', where \verb'A(i,j)' is the edge
$(i,j)$.  The matrix \verb'A' can have any type (even a user-defined type),
since the \verb'PAIR' operator does not access its values.  No typecasting
will be done.

The vector \verb'v' of size \verb'n' holds the level of each node in the BFS,
where \verb'v(i)=0' if the node has not yet been seen.  This particular value
makes \verb'v' useful for another role.  It can be used as a Boolean mask,
since \verb'0' is \verb'false' and nonzero is \verb'true'.  Initially the
entire \verb'v' vector is zero.  It is initialized as a dense vector, with all
entries present, to improve performance (otherwise, it will slowly grow,
incrementally, and this will take a lot of time if the number of BFS levels is
high).

The vector \verb'q' is the set of nodes just discovered at the current level,
where \verb'q(i)=true' if node \verb'i' is in the current level.  It starts
out with just a single entry set to true, \verb'q(s)', the starting node.

Each iteration of the BFS consists of three calls to GraphBLAS.  The first one
uses \verb'q' as a mask.  It modifies all positions in \verb'v' where \verb'q'
has an entry, setting them all to the current \verb'level'.

{\footnotesize
\begin{verbatim}
    // v<q> = level, using vector assign with q as the mask
    GrB_assign (v, q, NULL, level, GrB_ALL, n, GrB_DESC_S) ;
\end{verbatim}}

The next call to GraphBLAS is the heart of the algorithm:

{\footnotesize
\begin{verbatim}
    // q<!v> = q ||.&& A ; finds all the unvisited
    // successors from current q, using !v as the mask
    GrB_vxm (q, v, NULL, GxB_ANY_PAIR_BOOL, q, A, GrB_DESC_RC) ;
\end{verbatim}}

The vector \verb'q' is the set of nodes at the current level.  Suppose
\verb'q(j)' is true, and it has a neighbor \verb'i'.  Then \verb'A(i,j)=1',
and the dot product of \verb'A(i,:)*q' using the \verb'ANY_PAIR' semiring will
use the \verb'PAIR' multiplier on these two terms, \verb'f (A(i,j), q(j))',
resulting in a value \verb'1'.  The \verb'ANY' monoid will ``sum'' up all the
results in this single row \verb'i'; note that the \verb'OR' monoid would
compute the same thing.  If the result is a column vector \verb't=A*q', then
this \verb't(i)' will be true.  The vector \verb't' will be true for any node
adjacent to any node in the set \verb'q'.

Some of these neighbors of the nodes in \verb'q' have already been visited by
the BFS, either in the current level or in a prior level.  These results must
be discarded; what is desired is the set of all nodes \verb'i' for which
\verb't(i)' is true, and yet \verb'v(i)' is still zero.

Enter the mask.  The vector \verb'v' is complemented for use as a mask, via
the \verb'GrB_DESC_RC' descriptor.  This means that wherever the vector is
true, that position in the result is protected and will not be modified by the
assignment.  Only where \verb'v' is false will the result be modified.  This
is exactly the desired result, since these represent newly seen nodes for the
next level of the BFS.
A node \verb'k' already visited will have a nonzero \verb'v(k)', and thus \verb'q(k)' will not be modified by the assignment. The result \verb't' is written back into the vector \verb'q', through the mask, but to do this correctly, another descriptor parameter is used: \verb'GrB_REPLACE'. The vector \verb'q' was used to compute \verb't=A*q', and after using it to compute \verb't', the entire \verb'q' vector needs to be cleared. Only new nodes are desired, for the next level. This is exactly what the \verb'REPLACE' option does. As a result, the vector \verb'q' now contains the set of nodes at the new level of the BFS. It contains all those nodes (and only those nodes) that are neighbors of the prior set and that have not already been seen in any prior level. A single call to \verb'GrB_Vector_nvals' finds how many entries are in the current level. If this is zero, the BFS can terminate. \newpage \begin{mdframed}[userdefinedwidth=6in] {\footnotesize \begin{verbatim} #include "LAGraph_internal.h" #define LAGRAPH_FREE_ALL { GrB_free (&v) ; GrB_free (&q) ; } GrB_Info LAGraph_bfs_simple // push-only BFS ( GrB_Vector *v_output, // v(i) is the BFS level of node i in the graph GrB_Matrix A, // input graph, treated as if boolean in semiring GrB_Index source // starting node of the BFS ) { GrB_Info info ; GrB_Vector q = NULL ; // nodes visited at each level GrB_Vector v = NULL ; // result vector if (v_output == NULL) LAGRAPH_ERROR ("argument missing", GrB_NULL_POINTER) ; GrB_Index n, nvals ; GrB_Matrix_nrows (&n, A) ; // create an empty vector v, and make it dense GrB_Vector_new (&v, (n > INT32_MAX) ? GrB_INT64 : GrB_INT32, n) ; GrB_assign (v, NULL, NULL, 0, GrB_ALL, n, NULL) ; // create a boolean vector q, and set q(source) to true GrB_Vector_new (&q, GrB_BOOL, n) ; GrB_Vector_setElement (q, true, source) ; // BFS traversal and label the nodes for (int64_t level = 1 ; level <= n ; level++) { // v<q> = level GrB_assign (v, q, NULL, level, GrB_ALL, n, GrB_DESC_S) ; // break if q is empty GrB_Vector_nvals (&nvals, q) ; if (nvals == 0) break ; // q'<!v> = q'*A GrB_vxm (q, v, NULL, GxB_ANY_PAIR_BOOL, q, A, GrB_DESC_RC) ; } // free workspace and return result (*v_output) = v ; // return result v = NULL ; // set to NULL so LAGRAPH_FREE_ALL doesn't free it LAGRAPH_FREE_ALL ; // free all workspace (except for result v) return (GrB_SUCCESS) ; } \end{verbatim}} \end{mdframed} \newpage %------------------------------------------------------------------------------- \subsection{Maximal independent set} %------------------------------------------------------------------------------- \label{mis} The {\em maximal independent set} problem is to find a set of nodes $S$ such that no two nodes in $S$ are adjacent to each other (an independent set), and all nodes not in $S$ are adjacent to at least one node in $S$ (and thus $S$ is maximal since it cannot be augmented by any node while remaining an independent set). The \verb'mis' function in the \verb'Demo' folder solves this problem using Luby's method \cite{Luby86}. The key operations in the method are replicated on the next page. The gist of the algorithm is this. In each phase, all candidate nodes are given a random score. If a node has a score higher than all its neighbors, then it is added to the independent set. All new nodes added to the set cause their neighbors to be removed from the set of candidates. The process must be repeated for multiple phases until no new nodes can be added. 
This is because in one phase, a node \verb'i' might not be added because one
of its neighbors \verb'j' has a higher score, yet that neighbor \verb'j' might
not be added because one of its neighbors \verb'k' is added to the independent
set instead.  The node \verb'j' is no longer a candidate and can never be
added to the independent set, but node \verb'i' could be added to $S$ in a
subsequent phase.

The initialization step, before the \verb'while' loop, computes the degree of
each node with a \verb'PLUS' reduction.  The set of \verb'candidates' is a
Boolean vector, whose \verb'i'th component is true if node \verb'i' is a
candidate.  A node with no neighbors causes the algorithm to stall, so these
nodes are not candidates.  Instead, they are immediately added to the
independent set, represented by another Boolean vector \verb'iset'.  Both
steps are done with an \verb'assign', using the \verb'degrees' vector as a
mask, except the assignment to \verb'iset' uses the complement of the mask,
via the \verb'sr_desc' descriptor.  Finally, the \verb'GrB_Vector_nvals'
statement counts how many candidates remain.

Each phase of Luby's algorithm consists of 11 calls to GraphBLAS operations,
all of which are either parallel, or take $O(1)$ time.  Not all of them are
described here since they are commented in the code itself.  The two
matrix-vector multiplications are the important parts and also take the most
time.  They also make interesting use of semirings and masks.  The first one
computes the largest score of all the neighbors of each node in the candidate
set:

{\footnotesize
\begin{verbatim}
    // compute the max probability of all neighbors
    GrB_vxm (neighbor_max, candidates, NULL, maxFirst, prob, A, r_desc) ;
\end{verbatim}}

\newpage
\begin{mdframed}[userdefinedwidth=6in]
{\footnotesize
\begin{verbatim}
    // compute the degree of each node
    GrB_reduce (degrees, NULL, NULL, GrB_PLUS_FP64, A, NULL) ;

    // singletons are not candidates; they are added to iset first instead
    // candidates[degree != 0] = 1
    GrB_assign (candidates, degrees, NULL, true, GrB_ALL, n, NULL);

    // add all singletons to iset
    // iset[degree == 0] = 1
    GrB_assign (iset, degrees, NULL, true, GrB_ALL, n, sr_desc) ;

    // Iterate while there are candidates to check.
    GrB_Index nvals ;
    GrB_Vector_nvals (&nvals, candidates) ;

    while (nvals > 0)
    {
        // sparsify the random number seeds (just keep it for each candidate)
        GrB_assign (Seed, candidates, NULL, Seed, GrB_ALL, n, r_desc) ;

        // compute a random probability scaled by inverse of degree
        prand_xget (X, Seed) ;      // two calls to GrB_apply
        GrB_eWiseMult (prob, candidates, NULL, set_random, degrees, X, r_desc) ;

        // compute the max probability of all neighbors
        GrB_vxm (neighbor_max, candidates, NULL, maxFirst, prob, A, r_desc) ;

        // select node if its probability is > than all its active neighbors
        GrB_eWiseAdd (new_members, NULL,NULL, GrB_GT_FP64, prob, neighbor_max,0);

        // add new members to independent set.
        GrB_eWiseAdd (iset, NULL, NULL, GrB_LOR, iset, new_members, NULL) ;

        // remove new members from set of candidates c = c & !new
        GrB_apply (candidates, new_members, NULL, GrB_IDENTITY_BOOL,
            candidates, sr_desc) ;

        GrB_Vector_nvals (&nvals, candidates) ;
        if (nvals == 0) { break ; }             // early exit condition

        // Neighbors of new members can also be removed from candidates
        GrB_vxm (new_neighbors, candidates, NULL, Boolean, new_members, A,
            NULL) ;
        GrB_apply (candidates, new_neighbors, NULL, GrB_IDENTITY_BOOL,
            candidates, sr_desc) ;

        GrB_Vector_nvals (&nvals, candidates) ;
    }
\end{verbatim}}
\end{mdframed}

\verb'A' is a symmetric Boolean matrix and \verb'prob' is a sparse real vector
(of type \verb'FP32'), where \verb'prob(i)' is nonzero only if node \verb'i'
is a candidate.  The \verb'prob' vector is computed from a random vector
computed by a utility function \verb'prand_xget', in the \verb'Demo' folder.
It uses two calls to \verb'GrB_apply' to construct \verb'n' random numbers in
parallel, using a repeatable pseudo-random number generator.

The \verb'maxFirst' semiring uses \verb'z=FIRST(x,y)' as the multiplier
operator.  The column \verb'A(:,j)' is the adjacency of node \verb'j', and the
dot product \verb"prob'*A(:,j)" applies the \verb'FIRST' operator on all
entries that appear in the intersection of \verb'prob' and \verb'A(:,j)',
where \verb'z=FIRST(prob(i),A(i,j))', which is just \verb'prob(i)' if
\verb'A(i,j)' is present.  If \verb'A(i,j)' is not an explicit entry in the
matrix, then this term is not computed and does not take part in the reduction
by the \verb'MAX' monoid.  Thus, each term \verb'z=FIRST(prob(i),A(i,j))' is
the score, \verb'prob(i)', of all neighbors \verb'i' of node \verb'j' that
have a score.  Node \verb'i' does not have a score if it is not also a
candidate, so this term is skipped.  These terms are then ``summed'' up by
taking the maximum score, using \verb'MAX' as the additive monoid.  Finally,
the results of this matrix-vector multiply are written to the result,
\verb'neighbor_max'.  The \verb'r_desc' descriptor has the \verb'REPLACE'
option enabled.  Since \verb'neighbor_max' does not also take part in the
computation \verb"prob'*A", it is simply cleared first.  Next, it is modified
only in those positions \verb'i' where \verb'candidates(i)' is true, using
\verb'candidates' as a mask.  This sets \verb'neighbor_max' only for candidate
nodes, and leaves the other components of \verb'neighbor_max' as zero
(implicit values not in the pattern of the vector).

All of the above work is done in a single matrix-vector multiply, with an
elegant use of the \verb'maxFirst' semiring coupled with a mask.  The
matrix-vector multiplication is described above as if it uses dot products of
rows of \verb'A' with the column vector \verb'prob', but SuiteSparse:GraphBLAS
does not compute it that way.  Sparse dot products are much slower than the
optimal method for multiplying a sparse matrix times a sparse vector.  The
result is the same, however.

The second matrix-vector multiplication is more straightforward.  Once the set
of new members in the independent set is found, it is used to remove all
neighbors of those new members from the set of candidates.

The resulting method is very efficient.  For the \verb'Freescale2' matrix, the
algorithm finds an independent set of size 1.6 million in 1.7 seconds (on the
same MacBook Pro referred to in Section~\ref{bfs}, using a single core),
taking four iterations of the \verb'while' loop.
For comparison, removing its diagonal entries (required for the algorithm to work) takes 0.3 seconds in GraphBLAS (see Section~\ref{transpose}), and simply transposing the matrix takes 0.24 seconds in both MATLAB and GraphBLAS. \newpage %------------------------------------------------------------------------------- \subsection{Creating a random matrix} %------------------------------------------------------------------------------- \label{random} The \verb'random_matrix' function in the \verb'Demo' folder generates a random matrix with a specified dimension and number of entries, either symmetric or unsymmetric, and with or without self-edges (diagonal entries in the matrix). It relies on \verb'simple_rand*' functions in the \verb'Demo' folder to provide a portable random number generator that creates the same sequence on any computer and operating system. \verb'random_matrix' can use one of two methods: \verb'GrB_Matrix_setElement' and \verb'GrB_Matrix_build'. The former method is very simple to use: {\footnotesize \begin{verbatim} GrB_Matrix_new (&A, GrB_FP64, nrows, ncols) ; for (int64_t k = 0 ; k < ntuples ; k++) { GrB_Index i = simple_rand_i ( ) % nrows ; GrB_Index j = simple_rand_i ( ) % ncols ; if (no_self_edges && (i == j)) continue ; double x = simple_rand_x ( ) ; // A (i,j) = x GrB_Matrix_setElement (A, x, i, j) ; if (make_symmetric) { // A (j,i) = x GrB_Matrix_setElement (A, x, j, i) ; } } \end{verbatim}} The above code can generate a million-by-million sparse \verb'double' matrix with 200 million entries in 66 seconds (6 seconds of which is the time to generate the random \verb'i', \verb'j', and \verb'x'), including the time to finish all pending computations. The user application does not need to create a list of all the tuples, nor does it need to know how many entries will appear in the matrix. It just starts from an empty matrix and adds them one at a time in arbitrary order. GraphBLAS handles the rest. This method is not feasible in MATLAB. The next method uses \verb'GrB_Matrix_build'. It is more complex to use than \verb'setElement' since it requires the user application to allocate and fill the tuple lists, and it requires knowledge of how many entries will appear in the matrix, or at least a good upper bound, before the matrix is constructed. It is slightly faster, creating the same matrix in 60 seconds, 51 seconds of which is spent in \verb'GrB_Matrix_build'. \newpage {\footnotesize \begin{verbatim} GrB_Index *I, *J ; double *X ; int64_t s = ((make_symmetric) ? 2 : 1) * nedges + 1 ; I = malloc (s * sizeof (GrB_Index)) ; J = malloc (s * sizeof (GrB_Index)) ; X = malloc (s * sizeof (double )) ; if (I == NULL || J == NULL || X == NULL) { // out of memory if (I != NULL) free (I) ; if (J != NULL) free (J) ; if (X != NULL) free (X) ; return (GrB_OUT_OF_MEMORY) ; } int64_t ntuples = 0 ; for (int64_t k = 0 ; k < nedges ; k++) { GrB_Index i = simple_rand_i ( ) % nrows ; GrB_Index j = simple_rand_i ( ) % ncols ; if (no_self_edges && (i == j)) continue ; double x = simple_rand_x ( ) ; // A (i,j) = x I [ntuples] = i ; J [ntuples] = j ; X [ntuples] = x ; ntuples++ ; if (make_symmetric) { // A (j,i) = x I [ntuples] = j ; J [ntuples] = i ; X [ntuples] = x ; ntuples++ ; } } GrB_Matrix_build (A, I, J, X, ntuples, GrB_SECOND_FP64) ; \end{verbatim}} The equivalent \verb'sprandsym' function in MATLAB takes 150 seconds, but \verb'sprandsym' uses a much higher-quality random number generator to create the tuples \verb'[I,J,X]'. 
Considering just the time for \verb'sparse(I,J,X,n,n)' in \verb'sprandsym'
(equivalent to \verb'GrB_Matrix_build'), the time is 70 seconds.  That is, all
three methods (\verb'setElement' and \verb'build' in SuiteSparse:GraphBLAS,
and \verb'sparse' in MATLAB) are equally fast.

\newpage
%-------------------------------------------------------------------------------
\subsection{Creating a finite-element matrix}
%-------------------------------------------------------------------------------
\label{fem}

Suppose a finite-element matrix is being constructed, with \verb'k=40,000'
finite-element matrices, each of size \verb'8'-by-\verb'8'.  The following
operations (in pseudo-MATLAB notation) are very efficient in
SuiteSparse:GraphBLAS.

{\footnotesize
\begin{verbatim}
    A = sparse (m,n) ;  % create an empty m-by-n sparse GraphBLAS matrix
    for i = 1:k
        construct an 8-by-8 sparse or dense finite-element F
        I and J define where the matrix F is to be added:
        I = a list of 8 row indices
        J = a list of 8 column indices
        % using GrB_assign, with the 'plus' accum operator:
        A (I,J) = A (I,J) + F
    end
\end{verbatim}}

If this were done in MATLAB or in GraphBLAS with blocking mode enabled, the
computations would be extremely slow.  This example is taken from Loren
Shure's blog on MATLAB Central, {\em Loren on the Art of MATLAB} \cite{Davis07},
which discusses the built-in \verb'wathen' function.  In MATLAB, a far better
approach is to construct a list of tuples \verb'[I,J,X]' and to use
\verb'sparse(I,J,X,n,n)'.  This is identical to creating the same list of
tuples in GraphBLAS and using \verb'GrB_Matrix_build', which is equally fast.

The difference in time between using \verb'sparse' or \verb'GrB_Matrix_build',
and using submatrix assignment with blocking mode (or in MATLAB which does not
have a nonblocking mode) can be extreme.  For the example matrix discussed in
\cite{Davis07}, using \verb'sparse' instead of submatrix assignment in MATLAB
cut the run time of \verb'wathen' from 305 seconds down to 1.6 seconds.  In
SuiteSparse:GraphBLAS, the performance of both methods is essentially
identical, and roughly as fast as \verb'sparse' in MATLAB.

Inside SuiteSparse:GraphBLAS, \verb'GrB_assign' is doing the same thing.  When
performing \verb'A(I,J)=A(I,J)+F', if it finds that it cannot quickly insert
an update into the \verb'A' matrix, it creates a list of pending tuples to be
assembled later on.  When the matrix is ready for use in a subsequent
GraphBLAS operation (one that normally cannot use a matrix with pending
computations), the tuples are assembled all at once via
\verb'GrB_Matrix_build'.

GraphBLAS operations on other matrices have no effect on when the pending
updates of a matrix are completed.  Thus, any GraphBLAS method or operation
can be used to construct the \verb'F' matrix in the example above, without
affecting when the pending updates to \verb'A' are completed.

The MATLAB \verb'wathen.m' script is part of Higham's \verb'gallery' of
matrices \cite{Higham}.  It creates a finite-element matrix with random
coefficients for a 2D mesh of size \verb'nx'-by-\verb'ny', a matrix
formulation by Wathen \cite{Wathen}.  The pattern of the matrix is fixed; just
the values are randomized.  The GraphBLAS equivalent can use either
\verb'GrB_Matrix_build' or \verb'GrB_assign'.  Both methods have good
performance.  The \verb'GrB_Matrix_build' version below is about 15\% to 20\%
faster than the MATLAB \verb'wathen.m' function, regardless of the problem
size.  It uses the identical algorithm as \verb'wathen.m'.
{\footnotesize \begin{verbatim} int64_t ntriplets = nx*ny*64 ; I = malloc (ntriplets * sizeof (int64_t)) ; J = malloc (ntriplets * sizeof (int64_t)) ; X = malloc (ntriplets * sizeof (double )) ; if (I == NULL || J == NULL || X == NULL) { FREE_ALL ; return (GrB_OUT_OF_MEMORY) ; } ntriplets = 0 ; for (int j = 1 ; j <= ny ; j++) { for (int i = 1 ; i <= nx ; i++) { nn [0] = 3*j*nx + 2*i + 2*j + 1 ; nn [1] = nn [0] - 1 ; nn [2] = nn [1] - 1 ; nn [3] = (3*j-1)*nx + 2*j + i - 1 ; nn [4] = 3*(j-1)*nx + 2*i + 2*j - 3 ; nn [5] = nn [4] + 1 ; nn [6] = nn [5] + 1 ; nn [7] = nn [3] + 1 ; for (int krow = 0 ; krow < 8 ; krow++) nn [krow]-- ; for (int krow = 0 ; krow < 8 ; krow++) { for (int kcol = 0 ; kcol < 8 ; kcol++) { I [ntriplets] = nn [krow] ; J [ntriplets] = nn [kcol] ; X [ntriplets] = em (krow,kcol) ; ntriplets++ ; } } } } // A = sparse (I,J,X,n,n) ; GrB_Matrix_build (A, I, J, X, ntriplets, GrB_PLUS_FP64) ; \end{verbatim}} The \verb'GrB_assign' version has the advantage of not requiring the user application to construct the tuple list, and is almost as fast as using \verb'GrB_Matrix_build'. The code is more elegant than either the MATLAB \verb'wathen.m' function or its GraphBLAS equivalent above. Its performance is comparable with the other two methods, but slightly slower, being about 5\% slower than the MATLAB \verb'wathen', and 20\% slower than the GraphBLAS method above. {\footnotesize \begin{verbatim} GrB_Matrix_new (&F, GrB_FP64, 8, 8) ; for (int j = 1 ; j <= ny ; j++) { for (int i = 1 ; i <= nx ; i++) { nn [0] = 3*j*nx + 2*i + 2*j + 1 ; nn [1] = nn [0] - 1 ; nn [2] = nn [1] - 1 ; nn [3] = (3*j-1)*nx + 2*j + i - 1 ; nn [4] = 3*(j-1)*nx + 2*i + 2*j - 3 ; nn [5] = nn [4] + 1 ; nn [6] = nn [5] + 1 ; nn [7] = nn [3] + 1 ; for (int krow = 0 ; krow < 8 ; krow++) nn [krow]-- ; for (int krow = 0 ; krow < 8 ; krow++) { for (int kcol = 0 ; kcol < 8 ; kcol++) { // F (krow,kcol) = em (krow, kcol) GrB_Matrix_setElement (F, em (krow,kcol), krow, kcol) ; } } // A (nn,nn) += F GrB_assign (A, NULL, GrB_PLUS_FP64, F, nn, 8, nn, 8, NULL) ; } } \end{verbatim}} Since there is no \verb'Mask', and since \verb'GrB_REPLACE' is not used, the call to \verb'GrB_assign' in the example above is identical to \verb'GxB_subassign'. Either one can be used, and their performance would be identical. Refer to the \verb'wathen.c' function in the \verb'Demo' folder, which uses GraphBLAS to implement the two methods above, and two additional ones. \newpage %------------------------------------------------------------------------------- \subsection{Reading a matrix from a file} %------------------------------------------------------------------------------- \label{read} See also \verb'LAGraph_mmread' and \verb'LAGraph_mmwrite', which can read and write any matrix in Matrix Market format, and \verb'LAGraph_binread' and \verb'LAGraph_binwrite', which read/write a matrix from a binary file. The binary file I/O functions are much faster than the \verb'read_matrix' function described here, and also much faster than \verb'LAGraph_mmread' and \verb'LAGraph_mmwrite'. The \verb'read_matrix' function in the \verb'Demo' reads in a triplet matrix from a file, one line per entry, and then uses \verb'GrB_Matrix_build' to create the matrix. It creates a second copy with \verb'GrB_Matrix_setElement', just to test that method and compare the run times. A comparison of \verb'build' versus \verb'setElement' has already been discussed in Section~\ref{random}. The function can return the matrix as-is, which may be rectangular or unsymmetric. 
If an input parameter is set to make the matrix symmetric, \verb'read_matrix'
computes \verb"A=(A+A')/2" if \verb'A' is square (turning all directed edges
into undirected ones).  If \verb'A' is rectangular, it creates a bipartite
graph, which is the same as the augmented matrix, \verb"A = [0 A ; A' 0]".  If
\verb'C' is an \verb'n'-by-\verb'n' matrix, then \verb"C=(C+C')/2" can be
computed as follows in GraphBLAS (the \verb'scale2' function divides an entry
by 2):

\vspace{-0.05in}
{\footnotesize
\begin{verbatim}
    GrB_Descriptor_new (&dt2) ;
    GrB_Descriptor_set (dt2, GrB_INP1, GrB_TRAN) ;
    GrB_Matrix_new (&A, GrB_FP64, n, n) ;
    GrB_eWiseAdd (A, NULL, NULL, GrB_PLUS_FP64, C, C, dt2) ;    // A=C+C'
    GrB_free (&C) ;
    GrB_Matrix_new (&C, GrB_FP64, n, n) ;
    GrB_UnaryOp_new (&scale2_op, scale2, GrB_FP64, GrB_FP64) ;
    GrB_apply (C, NULL, NULL, scale2_op, A, NULL) ;             // C=A/2
    GrB_free (&A) ;
    GrB_free (&scale2_op) ;
\end{verbatim}}

This is of course not nearly as elegant as \verb"A=(A+A')/2" in MATLAB, but
with minor changes it can work on any type and use any built-in operators
instead of \verb'PLUS', or it can use any user-defined operators and types.
The above code in SuiteSparse:GraphBLAS takes 0.60 seconds for the
\verb'Freescale2' matrix, slightly slower than MATLAB (0.55 seconds).

Constructing the augmented system is more complicated using the GraphBLAS C
API Specification since it does not yet have a simple way of specifying a
range of row and column indices, as in \verb'A(10:20,30:50)' in MATLAB
(\verb'GxB_RANGE' is a SuiteSparse:GraphBLAS extension that is not in the
Specification).  Using the C API in the Specification, the application must
instead build a list of indices first, \verb'I=[10, 11' \verb'...' \verb'20]'.

Thus, to compute the MATLAB equivalent of \verb"A = [0 A ; A' 0]", index lists
\verb'I' and \verb'J' must first be constructed:

\vspace{-0.05in}
{\footnotesize
\begin{verbatim}
    int64_t n = nrows + ncols ;
    I = malloc (nrows * sizeof (int64_t)) ;
    J = malloc (ncols * sizeof (int64_t)) ;
    // I = 0:nrows-1
    // J = nrows:n-1
    if (I == NULL || J == NULL)
    {
        if (I != NULL) free (I) ;
        if (J != NULL) free (J) ;
        return (GrB_OUT_OF_MEMORY) ;
    }
    for (int64_t k = 0 ; k < nrows ; k++) I [k] = k ;
    for (int64_t k = 0 ; k < ncols ; k++) J [k] = k + nrows ;
\end{verbatim}}

Once the index lists are generated, however, the resulting GraphBLAS
operations are fairly straightforward, computing \verb"A=[0 C ; C' 0]".

\vspace{-0.05in}
{\footnotesize
\begin{verbatim}
    GrB_Descriptor_new (&dt1) ;
    GrB_Descriptor_set (dt1, GrB_INP0, GrB_TRAN) ;
    GrB_Matrix_new (&A, GrB_FP64, n, n) ;
    // A (nrows:n-1, 0:nrows-1) = C'
    GrB_assign (A, NULL, NULL, C, J, ncols, I, nrows, dt1) ;
    // A (0:nrows-1, nrows:n-1) = C
    GrB_assign (A, NULL, NULL, C, I, nrows, J, ncols, NULL) ;
\end{verbatim}}

This takes 1.38 seconds for the \verb'Freescale2' matrix, almost as fast as
\verb"A=[sparse(m,m) C ; C' sparse(n,n)]" in MATLAB (1.25 seconds).  Both
calls to \verb'GrB_assign' use no accumulator, so the second one causes the
partial matrix \verb"A=[0 0 ; C' 0]" to be built first, followed by the final
build of \verb"A=[0 C ; C' 0]".  A better method, but not an obvious one, is
to use the \verb'GrB_FIRST_FP64' accumulator for both assignments.  An
accumulator enables SuiteSparse:GraphBLAS to determine that the entries
created by the first assignment cannot be deleted by the second, and thus it
need not force completion of the pending updates prior to the second
assignment.
SuiteSparse:GraphBLAS also adds a \verb'GxB_RANGE' mechanism that mimics the
MATLAB colon notation.  This speeds up the method and simplifies the code the
user needs to write to compute \verb"A=[0 C ; C' 0]":

\vspace{-0.05in}
{\footnotesize
\begin{verbatim}
    int64_t n = nrows + ncols ;
    GrB_Matrix_new (&A, xtype, n, n) ;
    GrB_Index I_range [3], J_range [3] ;
    I_range [GxB_BEGIN] = 0 ;
    I_range [GxB_END  ] = nrows-1 ;
    J_range [GxB_BEGIN] = nrows ;
    J_range [GxB_END  ] = ncols+nrows-1 ;
    // A (nrows:n-1, 0:nrows-1) += C'
    GrB_assign (A, NULL, GrB_FIRST_FP64, // or NULL,
        C, J_range, GxB_RANGE, I_range, GxB_RANGE, dt1) ;
    // A (0:nrows-1, nrows:n-1) += C
    GrB_assign (A, NULL, GrB_FIRST_FP64, // or NULL,
        C, I_range, GxB_RANGE, J_range, GxB_RANGE, NULL) ;
\end{verbatim}}

Any operator will suffice because it is not actually applied.  An operator is
only applied to the set intersection, and the two assignments do not overlap.
If an \verb'accum' operator is used, only the final matrix is built, and the
time in GraphBLAS drops slightly to 1.25 seconds.  This is a very small
improvement because in this particular case, SuiteSparse:GraphBLAS is able to
detect that no sorting is required for the first build, and the second one is
a simple concatenation.  In general, however, allowing GraphBLAS to postpone
pending updates can lead to significant reductions in run time.

%-------------------------------------------------------------------------------
\subsection{PageRank}
%-------------------------------------------------------------------------------
\label{pagerank}

The \verb'Demo' folder contains three methods for computing the PageRank of
the nodes of a graph.  One uses floating-point arithmetic (\verb'GrB_FP64')
and two user-defined unary operators (\verb'dpagerank.c').  The second
(\verb'ipagerank.c') is very similar, relying on integer arithmetic instead
(\verb'GrB_UINT64').  Neither method includes a stopping condition; they
simply compute a fixed number of iterations.  The third example is more
extensive (\verb'dpagerank2.c'), and serves as an example of the power and
flexibility of user-defined types, operators, monoids, and semirings.  It
creates a semiring for the entire PageRank computation.  It terminates if the
2-norm of the change in the rank vector \verb'r' is below a threshold.

\newpage
%-------------------------------------------------------------------------------
\subsection{Triangle counting}
%-------------------------------------------------------------------------------
\label{triangle}

A triangle in an undirected graph is a clique of size three: three nodes $i$,
$j$, and $k$ that are all pairwise connected.  There are many ways of counting
the number of triangles in a graph.  Let \verb'A' be a symmetric matrix with
values 0 and 1, and no diagonal entries; this matrix is the adjacency matrix
of the graph.  Let \verb'E' be the edge incidence matrix with exactly two 1's
per column.  A column of \verb'E' with entries in rows \verb'i' and \verb'j'
represents the edge $(i,j)$ in the graph, \verb'A(i,j)=1' where \verb'i<j'.
Let \verb'L' and \verb'U' be the strictly lower and upper triangular parts of
\verb'A', respectively.

The methods are listed in the table below.  Most of them use a form of masked
matrix-matrix multiplication.  The methods are implemented in MATLAB in the
\verb'tricount.m' file, and in GraphBLAS in the \verb'tricount.c' file, both
in the \verb'GraphBLAS/Demo' folder.  Refer to the comments in those two files
for details and derivations on how these methods work.
When the matrix is stored by row, a mask is present and not complemented,
\verb'GrB_INP1' is \verb'GrB_TRAN', and \verb'GrB_INP0' is \verb'GxB_DEFAULT',
the SuiteSparse:GraphBLAS implementation of \verb'GrB_mxm' always uses a
dot-product formulation.  Thus, the
${\bf C \langle L \rangle} = {\bf L}{\bf U}^{\sf T}$ method uses dot products.
This provides a mechanism for the end-user to select a masked dot product
matrix multiplication method in SuiteSparse:GraphBLAS, which is occasionally
faster than the outer product method.  The MATLAB form assumes the matrices
are stored by column (the only option in MATLAB).  Each method is followed by
a reduction to a scalar, via \verb'GrB_reduce' in GraphBLAS or by \verb'nnz'
or \verb'sum(sum(...))' in MATLAB.

\vspace{0.05in}
\noindent
{\footnotesize
\begin{tabular}{lll}
\hline
method and & in MATLAB & in GraphBLAS \\
citation   &           &              \\
\hline
minitri \cite{WolfBerryStark15}         & \verb"nnz(A*E==2)/3"        & ${\bf C}={\bf AE}$, then \verb'GrB_apply' \\
Burkhardt \cite{Burkhardt16}            & \verb"sum(sum((A^2).*A))/6" & ${\bf C \langle A \rangle} = {\bf A}^2$ \\
Cohen \cite{AzadBulucGilbert15,Cohen09} & \verb"sum(sum((L*U).*A))/2" & ${\bf C \langle A \rangle} = {\bf LU}$ \\
Sandia \cite{WolfDeveciBerryHammondRajamanickam17} & \verb"sum(sum((L*L).*L))" & ${\bf C \langle L \rangle} = {\bf LL}$ (outer product) \\
SandiaDot & \verb"sum(sum((L*U').*L))" & ${\bf C \langle L \rangle} = {\bf L}{\bf U}^{\sf T}$ (dot product) \\
Sandia2   & \verb"sum(sum((U*U).*U))"  & ${\bf C \langle U \rangle} = {\bf UU}$ (outer product) \\
\hline
\end{tabular}
}

\vspace{0.05in}
In general, the Sandia methods are the fastest of the 6 methods when
implemented in GraphBLAS.  For full details on the triangle counting and
$k$-truss algorithms, and performance results, see \cite{Davis18b}, a copy of
which appears in the \verb'SuiteSparse/GraphBLAS/Doc' folder.  The code
appears in \verb'Extras'.  That paper uses an earlier version of
SuiteSparse:GraphBLAS in which all matrices are stored by column.

\newpage
%-------------------------------------------------------------------------------
\subsection{User-defined types and operators}
%-------------------------------------------------------------------------------
\label{user}

The \verb'Demo' folder contains two working examples of user-defined types,
first discussed in Section~\ref{type_new}: \verb'double complex', and a
user-defined \verb'typedef' called \verb'wildtype' with a \verb'struct'
containing a string and a 4-by-4 \verb'float' matrix.

{\bf Double Complex:} Prior to v3.3, GraphBLAS did not have a native complex
type.  It now appears as the \verb'GxB_FC64' predefined type, but a complex
type can also easily be added as a user-defined type.  The \verb'Complex_init'
function in the \verb'usercomplex.c' file in the \verb'Demo' folder creates
the \verb'Complex' type based on the ANSI C11 \verb'double complex' type.  It
creates a full suite of operators that correspond to every built-in GraphBLAS
operator, both binary and unary.  In addition, it creates the operators listed
in the following table, where $D$ is \verb'double' and $C$ is \verb'Complex'.

\vspace{0.1in}
{\footnotesize
\begin{tabular}{llll}
\hline
name & types & MATLAB & description \\
     &       & equivalent & \\
\hline
\verb'Complex_complex' & $D \times D \rightarrow C$ & \verb'z=complex(x,y)' & complex from real and imag.
\\
\hline
\verb'Complex_conj'  & $C \rightarrow C$ & \verb'z=conj(x)'  & complex conjugate \\
\verb'Complex_real'  & $C \rightarrow D$ & \verb'z=real(x)'  & real part \\
\verb'Complex_imag'  & $C \rightarrow D$ & \verb'z=imag(x)'  & imaginary part \\
\verb'Complex_angle' & $C \rightarrow D$ & \verb'z=angle(x)' & phase angle \\
\verb'Complex_complex_real' & $D \rightarrow C$ & \verb'z=complex(x,0)' & real to complex real \\
\verb'Complex_complex_imag' & $D \rightarrow C$ & \verb'z=complex(0,x)' & real to complex imag. \\
\hline
\end{tabular}
}

The \verb'Complex_init' function creates two monoids (\verb'Complex_add_monoid'
and \verb'Complex_times_monoid') and a semiring \verb'Complex_plus_times' that
corresponds to the conventional linear algebra for complex matrices.  The
include file \verb'usercomplex.h' in the \verb'Demo' folder is available so
that this user-defined \verb'Complex' type can easily be imported into any
other user application.  When the user application is done, the
\verb'Complex_finalize' function frees the \verb'Complex' type and its
operators, monoids, and semiring.  NOTE: the \verb'Complex' type is not
supported in this Demo in Microsoft Visual Studio.

{\bf Struct-based:} In addition, the \verb'wildtype.c' program creates a
user-defined \verb'typedef' of a \verb'struct' containing a dense 4-by-4
\verb'float' matrix, and a 64-character string.  It constructs an additive
monoid that adds two 4-by-4 dense matrices, and a multiplier operator that
multiplies two 4-by-4 matrices.  Each of these 4-by-4 matrices is treated by
GraphBLAS as a ``scalar'' value, and they can be manipulated in the same way
any other GraphBLAS type can be manipulated.  The purpose of this type is to
illustrate the endless possibilities of user-defined types and their use in
GraphBLAS.

\newpage
%-------------------------------------------------------------------------------
\subsection{User applications using OpenMP or other threading models}
%-------------------------------------------------------------------------------
\label{threads}

An example demo program (\verb'openmp_demo') is included that illustrates how
a multi-threaded user application can use GraphBLAS.

The results from the \verb'openmp_demo' program may appear out of order.  This
is by design, simply to show that the user application is running in parallel.
The output of each thread should be the same.  In particular, each thread
generates an intentional error, and later on prints it with \verb'GrB_error'.
It will print its own error, not an error from another thread.  When all the
threads finish, the leader thread prints out each matrix generated by each
thread.

GraphBLAS can also be combined with user applications that rely on MPI, the
Intel TBB threading library, POSIX pthreads, Microsoft Windows threads, or any
other threading library.  In all cases, GraphBLAS will be thread safe.

\newpage
%-------------------------------------------------------------------------------
\section{Compiling and Installing SuiteSparse:GraphBLAS}
%-------------------------------------------------------------------------------
\label{sec:install}

%----------------------------------------
\subsection{On Linux and Mac}
%----------------------------------------

GraphBLAS makes extensive use of features in the ANSI C11 standard, and thus a
C compiler supporting this version of the C standard is required to use all
features of GraphBLAS.

On the Mac (OS X), \verb'clang' 8.0.0 in \verb'Xcode' version 8.2.1 is
sufficient, although earlier versions of \verb'Xcode' may work as well.
For the GNU \verb'gcc' compiler, version 4.9 or later is required.  For the
Intel \verb'icc' compiler, version 18.0 or later is required.  Version 2.8.12
or later of \verb'cmake' is required; version 3.0.0 is preferred.

If you are using a pre-C11 ANSI C compiler, or Microsoft Visual Studio, then
the \verb'_Generic' keyword is not available.  SuiteSparse:GraphBLAS will
still compile, but you will not have access to polymorphic functions such as
\verb'GrB_assign'.  You will need to use the non-polymorphic functions
instead.

\begin{alert}
{\bf NOTE:} icc is generally an excellent compiler, but it will generate
slower code than gcc for SuiteSparse:GraphBLAS v3.2.0 and later.  This is
because of how the two compilers treat \verb'#pragma omp atomic': atomics are
slower in icc than in gcc.  The use of gcc for SuiteSparse:GraphBLAS v3.2.0
and later is therefore recommended.
\end{alert}

To compile SuiteSparse:GraphBLAS, simply type \verb'make' in the main
GraphBLAS folder, which compiles the library.  This will be a single-threaded
compilation, which will take a long time.  To compile in parallel (40 threads
for example), use:

{\small
\begin{verbatim}
    make JOBS=40
\end{verbatim} }

To use a non-default compiler with 4 threads:

{\small
\begin{verbatim}
    make CC=icc CXX=icc JOBS=4
\end{verbatim} }

After compiling the library, you can compile the demos with \verb'make all'
and then \verb'make run'.

If \verb'cmake' or \verb'make' fail, it might be that your default compiler
does not support ANSI C11.  Try another compiler.  For example, go into the
\verb'build' directory and type one of these:

{\small
\begin{verbatim}
    CC=gcc cmake ..
    CC=gcc-6 cmake ..
    CC=xlc cmake ..
    CC=icc cmake ..
\end{verbatim} }

You can also do the following in the top-level GraphBLAS folder instead:

{\small
\begin{verbatim}
    CC=gcc make
    CC=gcc-6 make
    CC=xlc make
    CC=icc make
\end{verbatim} }

For faster compilation, you can specify a parallel make.  For example, to use
32 parallel jobs and the \verb'gcc' compiler, do the following:

{\small
\begin{verbatim}
    JOBS=32 CC=gcc make
\end{verbatim} }

If you do not have \verb'cmake', refer to Section~\ref{altmake}.

%----------------------------------------
\subsection{On Microsoft Windows}
\label{sec:windows}
%----------------------------------------

SuiteSparse:GraphBLAS is now ported to Microsoft Visual Studio.  However, that
compiler is not ANSI C11 compliant.  As a result, GraphBLAS on Windows will
have a few minor limitations.

\begin{itemize}
\item The MS Visual Studio compiler does not support the \verb'_Generic'
keyword, required for the polymorphic GraphBLAS functions.  So for example,
you will need to use \verb'GrB_Matrix_free' instead of just \verb'GrB_free'.
\item Variable-length arrays are not supported, so user-defined types are
limited to 128 bytes in size.  This can be changed by editing
\verb'GB_VLA_MAXSIZE' in \verb'Source/GB_compiler.h', and recompiling
SuiteSparse:GraphBLAS.
\end{itemize}

If you use a recent \verb'gcc' or \verb'icc' compiler on Windows other than
the Microsoft Compiler (\verb'cl'), these limitations can be avoided.

The following instructions apply to Windows 10, CMake 3.16, and Visual Studio
2019, but may work for earlier versions.

\begin{enumerate}
\item Install CMake 3.16 or later, if not already installed.  See
\url{https://cmake.org/} for details.
\item Install Microsoft Visual Studio, if not already installed.  See
\url{https://visualstudio.microsoft.com/} for details.  Version 2019 is
preferred, but earlier versions may also work.
\item Open a terminal window and type this in the
\verb'SuiteSparse/GraphBLAS/build' folder:

\vspace{-0.1in}
{\small
\begin{verbatim}
    cmake ..
\end{verbatim} }
\vspace{-0.1in}

\item The \verb'cmake' command generates many files in
\verb'SuiteSparse/GraphBLAS/build', and the file \verb'graphblas.sln' in
particular.  Open the generated \verb'graphblas.sln' file in Visual Studio.

\item Optionally: right-click \verb'graphblas' in the left panel (Solution
Explorer) and select properties; then navigate to
\verb'Configuration Properties', \verb'C/C++', \verb'General' and change the
parameter \verb'Multiprocessor Compilation' to \verb'Yes (/MP)'.  Click
\verb'OK'.  This will significantly speed up the compilation of GraphBLAS.

\item Select the \verb'Build' menu item at the top of the window and select
\verb'Build Solution'.  This should create a folder called \verb'Release' and
place the compiled \verb'graphblas.dll', \verb'graphblas.lib', and
\verb'graphblas.exp' files there.  Please be patient; some files may take a
while to compile and sometimes may appear to be stalled.  Just wait.

% Alternatively, type this command in the terminal window:
% {\small
% \begin{verbatim}
% devenv graphblas.sln /build "release|x64" /project graphblas \end{verbatim}}

\item Add the \verb'GraphBLAS/build/Release' folder to the Windows System
path:

\begin{itemize}
\item Open the \verb'Start Menu' and type \verb'Control Panel'.
\item Select the \verb'Control Panel' app.
\item When the app opens, select \verb'System'.
\item From the top left side of the \verb'System' window, select
\verb'Advanced System Settings'.  You may have to authenticate at this step.
\item The \verb'System Properties' window should appear with the
\verb'Advanced' tab selected; select \verb'Environment Variables'.
\item The \verb'Environment Variables' window displays 2 sections, one for
\verb'User' variables and the other for \verb'System' variables.  Under the
\verb'System' variables section, scroll to and select \verb'Path', then select
\verb'Edit'.  An editor window appears, allowing you to add, modify, delete,
or re-order the parts of the \verb'Path'.
\item Add the full path of the \verb'GraphBLAS\build\Release' folder
(typically starting with \verb'C:\Users\you\'..., where \verb'you' is your
Windows username) to the \verb'Path'.
\item If the above steps do not work, you can instead copy the
\verb'graphblas.*' files from \verb'GraphBLAS\build\Release' into any existing
folder listed in your \verb'Path'.
\end{itemize}

\item The \verb'GraphBLAS/Include/GraphBLAS.h' file must be included in user
applications via \verb'#include "GraphBLAS.h"'.  This is already done for you
in the MATLAB interface discussed in the next section.
\end{enumerate}

%----------------------------------------
\subsection{Compiling the MATLAB interface (for MATLAB R2020a and earlier)}
%----------------------------------------
\label{gbmake}

First, compile the SuiteSparse:GraphBLAS dynamic library
(\verb'libgraphblas.so' for Linux, \verb'libgraphblas.dylib' for Mac, or
\verb'graphblas.dll' for Windows), as described in the prior two subsections.
Next:

\begin{enumerate}
\item In the MATLAB command window:

{\small
\begin{verbatim}
    cd GraphBLAS/GraphBLAS/@GrB/private
    gbmake
\end{verbatim} }

\item Follow the remaining instructions in the
\verb'GraphBLAS/GraphBLAS/README.md' file, to revise your MATLAB path and
\verb'startup.m' file.

\item As a quick test, try the MATLAB command \verb'GrB(1)', which creates and
displays a 1-by-1 GraphBLAS matrix.
For a longer test, do the following:

{\small
\begin{verbatim}
    cd GraphBLAS/GraphBLAS/test
    gbtest
\end{verbatim} }

\item In Windows, if the tests fail with an error stating that the mex file is
invalid because the module could not be found, it means that MATLAB could not
find the compiled \verb'graphblas.lib', \verb'*.dll' or \verb'*.exp' files in
the \verb'build/Release' folder.  This can happen if your Windows System path
is not set properly, or if Windows is not recognizing the
\verb'GraphBLAS/build/Release' folder (see Section~\ref{sec:windows}).  Or,
you might not have permission to change your Windows System path.  In this
case, do the following in the MATLAB Command Window:

\vspace{-0.1in}
{\small
\begin{verbatim}
    cd GraphBLAS/build/Release
    GrB(1)
\end{verbatim} }
\vspace{-0.1in}

After this step, the GraphBLAS library will be loaded into MATLAB.  You may
need to add the above lines in your \verb'Documents/MATLAB/startup.m' file, so
that they are done each time MATLAB starts.  You will also need to do this
after \verb'clear all' or \verb'clear mex', since those MATLAB commands remove
all loaded libraries from MATLAB.

You might also get an error ``the specified procedure cannot be found.''  This
can occur if you have upgraded your GraphBLAS library from a prior version,
and some of the compiled files \verb'@GrB/private/*.mex*' are stale.  Try the
command \verb'gbmake all' in the MATLAB Command Window, which forces all of
the MATLAB interface to be recompiled.  Or, try deleting all
\verb'@GrB/private/*.mex*' files and running \verb'gbmake' again.

\item On Windows, the \verb'casin', \verb'casinf', \verb'casinh', and
\verb'casinhf' functions provided by Microsoft do not return the correct
imaginary part.  As a result, \verb'GxB_ASIN_FC32', \verb'GxB_ASIN_FC64',
\verb'GxB_ASINH_FC32', and \verb'GxB_ASINH_FC64' do not work properly on
Windows.  This affects the \verb'GrB/asin', \verb'GrB/acsc', \verb'GrB/asinh',
and \verb'GrB/acsch' functions in the MATLAB interface.  See the MATLAB tests
bypassed in \verb'gbtest76.m' for details, in the \newline
\verb'GraphBLAS/GraphBLAS/test' folder.

%% FUTURE: fix asin and acsc on Windows for the complex case.

\end{enumerate}

%----------------------------------------
\subsection{Compiling the MATLAB interface (for MATLAB R2021a and later)}
%----------------------------------------
\label{R2021a}

MATLAB R2021a includes its own copy of SuiteSparse:GraphBLAS v3.3.3, as the
file \verb'libmwgraphblas.so', which is used for the built-in \verb'C=A*B'
when both \verb'A' and \verb'B' are sparse (see the Release Notes of MATLAB
R2021a, which discusses the performance gained in MATLAB by using GraphBLAS).

That's great news for the impact of GraphBLAS on MATLAB itself, and the domain
of high performance computing in general, but it causes a linking problem when
using this MATLAB interface for GraphBLAS.  The two use different versions of
the same library, and a segfault arises if the MATLAB interface for v5.x tries
to link with the older GraphBLAS v3.3.3 library.  Likewise, the built-in
\verb'C=A*B' causes a segfault if it tries to use the newer GraphBLAS v4.x or
v5.x libraries.

To resolve this issue, a second GraphBLAS library must be compiled,
\verb'libgraphblas_renamed', where the internal symbols are all renamed so
they do not conflict with the \verb'libmwgraphblas' library.  Then both
libraries can co-exist in the same instance of MATLAB.

To do this, go to the \verb'GraphBLAS/GraphBLAS' folder, containing the MATLAB
interface.
That folder contains a \verb'CMakeLists.txt' file to compile the \verb'libgraphblas_renamed' library. See the instructions for how to compile the C library \verb'libgraphblas', and repeat them but using the folder \newline \verb'SuiteSparse/GraphBLAS/GraphBLAS/build' instead of \newline \verb'SuiteSparse/GraphBLAS/build'. This will compile the renamed SuiteSparse:GraphBLAS dynamic library (\verb'libgraphblas_renamed.so' for Linux, \verb'libgraphblas_renamed.dylib' for Mac, or \verb'graphblas_renamed.dll' for Windows). These can be placed in the same system-wide location as the standard \verb'libgraphblas' libraries, such as \verb'/usr/local/lib' for Linux. The two pairs of libraries share the identical \verb'GraphBLAS.h' include file. Next, compile the MATLAB interface as described in Section~\ref{gbmake}. For any instructions in that Section that refer to the \verb'GraphBLAS/build' folder (Linux and Mac) or \verb'GraphBLAS/build/Release' (Windows), use \newline \verb'GraphBLAS/GraphBLAS/build' (Linux and Mac) or \newline \verb'GraphBLAS/GraphBLAS/build/Release' (Windows) instead. The resulting functions for your \verb'@GrB' object will now work just fine; no other changes are needed. You can even use the GraphBLAS mexFunctions compiled in MATLAB R2021a in earlier versions of MATLAB (such as R2020a). %---------------------------------------- \subsection{Default matrix format} %---------------------------------------- By default, SuiteSparse:GraphBLAS stores its matrices by row, using the \verb'GxB_BY_ROW' format. You can change the default at compile time to \verb'GxB_BY_COL' using \verb'cmake -DBYCOL=1'. For example: {\small \begin{verbatim} cmake -DBYCOL=1 .. \end{verbatim} } The user application can also use \verb'GxB_get' and \verb'GxB_set' to set and query the global option (see also Sections~\ref{gxbset} and \ref{gxbget}): {\small \begin{verbatim} GxB_Format_Value s ; GxB_get (GxB_FORMAT, &s) ; if (s == GxB_BY_COL) printf ("all new matrices are stored by column\n") ; else printf ("all new matrices are stored by row\n") ; \end{verbatim} } %---------------------------------------- \subsection{Setting the C flags and using CMake} %---------------------------------------- The above options can also be combined. For example, to use the \verb'gcc' compiler, to change the default format \verb'GxB_FORMAT_DEFAULT' to \verb'GxB_BY_COL', use the following \verb'cmake' command while in the \verb'GraphBLAS/build' directory: {\small \begin{verbatim} CC=gcc cmake -DBYCOL=1 .. \end{verbatim}} \noindent Then do \verb'make' in the \verb'build' directory. If this still fails, see the \verb'CMakeLists.txt' file. You can edit that file to pass compiler-specific options to your compiler. Locate this section in the \verb'CMakeLists.txt' file. Use the \verb'set' command in \verb'cmake', as in the example below, to set the compiler flags you need. {\small \begin{verbatim} # check which compiler is being used. If you need to make # compiler-specific modifications, here is the place to do it. if ("${CMAKE_C_COMPILER_ID}" STREQUAL "GNU") # cmake 2.8 workaround: gcc needs to be told to do ANSI C11. # cmake 3.0 doesn't have this problem. set ( CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -std=c11 -lm " ) ... elseif ("${CMAKE_C_COMPILER_ID}" STREQUAL "Intel") ... elseif ("${CMAKE_C_COMPILER_ID}" STREQUAL "Clang") ... elseif ("${CMAKE_C_COMPILER_ID}" STREQUAL "MSVC") ... 
    endif ( )
\end{verbatim} }

To compile SuiteSparse:GraphBLAS without running the demos, use \newline
\verb'make library' in the top-level directory, or \verb'make' in the
\verb'build' directory.

Several compile-time options can be selected by editing the \verb'Source/GB.h'
file, but these are meant only for code development of SuiteSparse:GraphBLAS
itself, not for end-users of SuiteSparse:GraphBLAS.

%----------------------------------------
\subsection{Using a plain makefile}
\label{altmake}
%----------------------------------------

The \verb'GraphBLAS/alternative' directory contains a simple \verb'Makefile'
that can be used to compile SuiteSparse:GraphBLAS.  This is a useful option if
you do not have the required version of \verb'cmake'.  This \verb'Makefile'
can even compile the entire library with a C++ compiler, which cannot be done
with \verb'CMake'.

This alternative \verb'Makefile' does not build the
\verb'libgraphblas_renamed.so' library required for MATLAB R2021a (see
Section~\ref{R2021a}).  This can be done by revising the \verb'Makefile',
however: add the \verb'-DGBRENAME=1' flag, and change the library name from
\verb'libgraphblas' to \verb'libgraphblas_renamed'.

%----------------------------------------
\subsection{Running the Demos}
%----------------------------------------

After compiling the library with \verb'make' in the top-level directory, type
\verb'make run' to run the demos.  You can also run the demos after compiling:

{\small
\begin{verbatim}
    cd Demo
    ./demo
\end{verbatim} }

The \verb'./demo' command is a script that runs the demos with various input
matrices in the \verb'Demo/Matrix' folder.  The output of the demos will be
compared with expected output files in \verb'Demo/Output'.

%----------------------------------------
\subsection{Installing SuiteSparse:GraphBLAS}
%----------------------------------------

To install the library (typically in \verb'/usr/local/lib' and
\verb'/usr/local/include' for Linux systems), go to the top-level GraphBLAS
folder and type:

{\small
\begin{verbatim}
    sudo make install
\end{verbatim} }

%----------------------------------------
\subsection{Running the tests}
%----------------------------------------

To run a short test, type \verb'make run' at the top-level \verb'GraphBLAS'
folder.  This will run all the demos in \verb'GraphBLAS/Demos'.  MATLAB is not
required.

To perform the extensive tests in the \verb'Test' folder, and the statement
coverage tests in \verb'Tcov', MATLAB R2017a is required.  See the
\verb'README.txt' files in those two folders for instructions on how to run
the tests.  The tests in the \verb'Test' folder have been ported to MATLAB on
Linux, MacOS, and Windows.  The \verb'Tcov' tests do not work on Windows.  The
MATLAB interface test (\verb'gbtest') works on all platforms; see the
\verb'GraphBLAS/GraphBLAS' folder for more details.

%----------------------------------------
\subsection{Cleaning up}
%----------------------------------------

To remove all compiled files, type \verb'make' \verb'distclean' in the
top-level GraphBLAS folder.

%-------------------------------------------------------------------------------
\section{About NUMA systems}
%-------------------------------------------------------------------------------

I have tested this package extensively on multicore single-socket systems, but
have not yet optimized it for multi-socket systems with a NUMA architecture.
That will be done in a future release.
If you publish benchmark comparisons with this package, please state the
SuiteSparse:GraphBLAS version, and a caveat if appropriate.  If you see
significant performance issues when going from a single-socket to multi-socket
system, I would like to hear from you so I can look into it.

% \newpage
%-------------------------------------------------------------------------------
\section{Acknowledgments}
%-------------------------------------------------------------------------------

I would like to thank Jeremy Kepner (MIT Lincoln Laboratory Supercomputing
Center), and the GraphBLAS API Committee: Ayd\i n Bulu\c{c} (Lawrence Berkeley
National Laboratory), Timothy G. Mattson (Intel Corporation), Scott McMillan
(Software Engineering Institute at Carnegie Mellon University), Jos\'e Moreira
(IBM Corporation), Carl Yang (UC Davis), and Benjamin Brock (UC Berkeley), for
creating the GraphBLAS specification and for patiently answering my many
questions while I was implementing it.

I would like to thank Tim Mattson and Henry Gabb, Intel, Inc., for their
collaboration and for the support of Intel.

I would like to thank Joe Eaton for his collaboration on the CUDA kernels
(still in progress), and for the support of NVIDIA.

I would like to thank Michel Pelletier for his collaboration and work on the
pygraphblas interface, and Jim Kitchen and Erik Welch for their work on
Anaconda's python interface.

I would like to thank John Gilbert (UC Santa Barbara) for our many discussions
on GraphBLAS, and for our decades-long conversation and collaboration on
sparse matrix computations, and sparse matrices in MATLAB in particular.

I would like to thank S\'ebastien Villemot (Debian Developer,
\url{http://sebastien.villemot.name}) for helping me with various build issues
and other code issues with GraphBLAS (and all of SuiteSparse) for its
packaging in Debian Linux.

I would like to thank Roi Lipman, Redis Labs (\url{https://redislabs.com}),
for our many discussions on GraphBLAS and its use in RedisGraph
(\url{https://redislabs.com/redis-enterprise/technology/redisgraph/}), a graph
database module for Redis.  Based on SuiteSparse:GraphBLAS, RedisGraph is up
to 600x faster than the fastest graph databases ({\footnotesize
\url{https://youtu.be/9h3Qco_x0QE} \newline
\url{https://redislabs.com/blog/new-redisgraph-1-0-achieves-600x-faster-performance-graph-databases/}}).

SuiteSparse:GraphBLAS was developed with support from NVIDIA, Intel, MIT
Lincoln Lab, Redis Labs, IBM, and the National Science Foundation (1514406,
1835499).

%-------------------------------------------------------------------------------
\section{Additional Resources}
%-------------------------------------------------------------------------------

See \url{http://graphblas.org} for the GraphBLAS community page.  See
\url{https://github.com/GraphBLAS/GraphBLAS-Pointers} for an up-to-date list
of additional resources on GraphBLAS, maintained by G{\'{a}}bor
Sz{\'{a}}rnyas.

\newpage
%-------------------------------------------------------------------------------
% References
%-------------------------------------------------------------------------------

{\small
\addcontentsline{toc}{section}{References}
\bibliographystyle{annotate}
\bibliography{GraphBLAS_UserGuide.bib}
}

\end{document}
{ "alphanum_fraction": 0.6435398146, "avg_line_length": 46.4596190176, "ext": "tex", "hexsha": "e18c08aecefa26df97226d6ed2f571bae71a4662", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-03-31T03:20:46.000Z", "max_forks_repo_forks_event_min_datetime": "2016-03-02T04:15:56.000Z", "max_forks_repo_head_hexsha": "880be9f60c2fca519117159d954f86783b252ed3", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "ByLiZhao/SuiteSparse", "max_forks_repo_path": "GraphBLAS/Doc/GraphBLAS_UserGuide.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "880be9f60c2fca519117159d954f86783b252ed3", "max_issues_repo_issues_event_max_datetime": "2021-09-09T17:56:52.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-28T18:47:50.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "ByLiZhao/SuiteSparse", "max_issues_repo_path": "GraphBLAS/Doc/GraphBLAS_UserGuide.tex", "max_line_length": 134, "max_stars_count": 27, "max_stars_repo_head_hexsha": "880be9f60c2fca519117159d954f86783b252ed3", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "ByLiZhao/SuiteSparse", "max_stars_repo_path": "GraphBLAS/Doc/GraphBLAS_UserGuide.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-06T20:20:59.000Z", "max_stars_repo_stars_event_min_datetime": "2016-10-25T08:07:32.000Z", "num_tokens": 162473, "size": 590223 }
\documentclass[11pt]{article} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{times} \usepackage{eurosym} \usepackage{inconsolata} \usepackage{amsmath} \usepackage{framed} \usepackage{graphicx} \usepackage{hyperref} \hypersetup{colorlinks=true, linkcolor=blue, urlcolor=blue} \usepackage[usenames,dvipsnames,svgnames,table]{xcolor} \definecolor{entityColor}{RGB}{0,100,200} \definecolor{attributeColor}{RGB}{0,100,50} \definecolor{relationColor}{RGB}{160,0,30} \usepackage{listings} \lstdefinestyle{reqT}{ belowcaptionskip=1\baselineskip, breaklines=true, showstringspaces=false, basicstyle=\footnotesize\ttfamily, emph={Ent,Meta,Item,Label,Section,Term,Actor,App,Component,Domain,Module,Product,Release,Resource,Risk,Service,Stakeholder,System,User,Class,Data,Input,Member,Output,Relationship,Design,Screen,MockUp,Function,Interface,Epic,Feature,Goal,Idea,Issue,Req,Ticket,WorkPackage,Breakpoint,Barrier,Quality,Target,Scenario,Task,Test,Story,UseCase,VariationPoint,Variant}, emphstyle=\bfseries\color{entityColor}, emph={[2]has,is,superOf,binds,deprecates,excludes,helps,hurts,impacts,implements,interactsWith,precedes,requires,relatesTo,verifies}, emphstyle={[2]\bfseries\color{relationColor}}, emph={[3]Attr,Code,Constraints,Comment,Deprecated,Example,Expectation,FileName,Gist,Image,Spec,Text,Title,Why,Benefit,Capacity,Cost,Damage,Frequency,Min,Max,Order,Prio,Probability,Profit,Value,Status}, emphstyle={[3]\color{attributeColor}}, } \lstset{style=reqT} \usepackage{fancyvrb} \usepackage[english]{babel} \usepackage{enumitem} \setlist[itemize]{noitemsep} \title{{\bf LAB 2:\\Requirements Prioritization \& Release Planning}\\ Preparations %and instructions } \author{Björn Regnell and Oskar Präntare} \date{\today} \begin{document} \maketitle \section{Introduction} \subsection{Purpose} This document provides instructions on how to prepare for %and run a computer lab session on requirements selection. The lab session illustrates how requirements prioritization and release planning can be supported by computer tools, and demonstrates the complexity in finding solutions to these problems. {\it The preparations in Section~\ref{section:prep} should be completed before the actual lab is run.} \subsection{Prerequisites} This lab assumes that you have installed the open source tool \href{http://reqT.org}{reqT.org} and that you are familiar with basic requirements modeling using reqT. It is also assumed that you have completed \href{https://github.com/reqT/reqT/raw/3.0.x/doc/lab1/lab1.pdf}{Lab 1 Requirements Modeling}. \subsection{Background} In this lab you will learn how to get started with requirements prioritization and release planning through the open source tool reqT, and reflect on how you could select requirements in your own project. In real-world requirements engineering you are continuously faced with different types of trade-off problems. As we have limited development resources and normally would like our most important features to be ready as soon as possible, we need to make hard decisions on what to develop next and what to postpone. If we spend some time on assessing the cost and benefit of the things we have at hand, we can hopefully find a good balance in how we spend our effort wisely in relation to the available lead time. 
Two main trade-off problems in requirements engineering are \begin{itemize} \item {\bf requirements prioritization}, where (a subset of) requirements are traded off against each other according to the opinions of the stakeholders based on some criteria such as the {\it benefit}, e.g. with respect to the strengthening of our product's brand, or the {\it cost}, e.g. of lost sales in case a requirement is not implemented. There are many prioritisation methods that can be used to elicit the stakeholders' opinions. In this lab we use the \$100 method and an ordinal scale ranking method, and \item {\bf release planning}, where requirements are scheduled in time over several releases under trade-offs with respect to constraints of the available capacities of different resources, requirements priorities and requirements inter-dependencies. \end{itemize} It is also likely that you will need to do hard choices regarding how you spend your efforts in the requirements engineering process. Probably you will have to make trade-offs such as: Is it more important to do more stakeholder analysis at this point, or should we instead focus on validation of the quality requirements that we have elicited so far? Is it more urgent to reduce critical incompleteness issues or should we first improve the verifiability of our scheduled requirements? The prioritization planning methods in this lab may also be good for prioritizing and planning the tasks of the requirements process itself, although we will exemplify the methods by using features under consideration for development. \clearpage\newpage \section{Preparations}\label{section:prep} \begin{framed} \noindent Before doing the lab session% in Section~\ref{section:instr} , complete all preparations in this document %section and bring requested items to the lab. In particular you need to make sure that you can access the text files you prepare below at your lab session computer. \end{framed} \subsection{Prioritization Preparations} \subsubsection{Definitions} Here is one way to formalize the requirements prioritization problem: \begin{description} \item [] $S$ is a set of $m$ stakeholders, $S=\{s_1, s_2 ..., s_m\}$ \item [] $Q$ is set of $n$ requirements $Q=\{q_1, q_2, ..., q_n\}$ \item [] $p(s_i, q_j)$ is a number representing the importance of requirement $q_j$ assigned by stakeholder $s_i$ \item [] $w(s_i)$ is a number representing the importance of stakeholder $s_i$ \item [] $P(q_j)$ is the total priority of requirement $q_j$ calculated by some function that maps all $p(s_i, q_j)$ and $w(s_i)$ to a single, numeric value. \item[] A {\it prioritization method} defines a procedure that assigns numeric values to $p(s_i, q_j)$ for all stakeholders $s_i$ and all requirements $q_j$, and to $w(s_i)$ for all stakeholders, according to some predefined priority criteria. \end{description} %\footnote{If you prepare your files on mac/windows and use them on linux then reqT may provide strange results for some characters. See \url{http://manpages.ubuntu.com/manpages/maverick/man1/dos2unix.1.html}} \noindent Before carrying out a prioritization method, a prioritization criteria needs to be defined. Examples of criteria are: market value, stakeholder benefit, risk of loss, cost of implementation and urgency of delivery. 
An example of a criteria applied in a pairwise comparison is: ''does requirement X have a higher {\it benefit for stakeholder~A} compared to requirement Y'', and an example of a criteria used in an estimation is: ''the {\it cost of implementing} requirement X will be \euro 3 million''. \begin{framed} \noindent {\bf Define a prioritization criteria}. Choose a prioritization criteria relevant to your project. Define your criteria so that it is desirable to maximize the priority value. \newline\newline Criteria def.: \underline{\hspace{10cm}} \end{framed} \begin{framed} \noindent {\bf Define requirements and stakeholders}. Make a reqT model with 2 stakeholders and 15 requirements from your project, analogous to this template: \begin{lstlisting} Model( Req("autoSave"), Req("exportGraph"), Req("exportTable"), Req("modelTemplates"), Req("releasePlanning"), Req("syntaxColoring"), Req("autoCompletion"), Stakeholder("modeler"), Stakeholder("tester")) \end{lstlisting} Save the model in a text file called \verb+req.scala+ \end{framed} \subsubsection{Methods}\label{section:priomethods} \noindent The {\bf \$100 method} gives each stakeholder a fictitious sum of money to ''spend'' on the requirements, where $p(s_i, q_j)$ is assigned to the amount of money ''spent'' for each requirement representing its importance according to some criteria. The combined priorities $P(q_j)$ are calculated as \begin{displaymath} P(q_j) = \sum\limits_{s_i \in S} p(s_i, q_j) w(s_i) a_i \end{displaymath} where the normalization constants $a_i$ are selected so that the sum of all $P(q_j)$ is normalized to 100 units, thus \begin{displaymath} a_i = \frac{100}{ w \sum\limits_{q_j \in Q} p(s_i, q_j)} \text{ \hspace{4mm} where \hspace{2mm}} w = \sum\limits_{s_i \in S} w(s_i) \end{displaymath} \begin{framed} \noindent {\bf Simplified \$100 method}. If all stakeholders are equally important, the formulas above can be simplified. Simplify $P(q_j)$ when $w(s_i) = a \text{ for all } s_i \in S$: \newline\newline\newline\newline \underline{\hspace{11cm}} \end{framed} \clearpage\newpage \begin{framed} \noindent {\bf Use the \$100 method}. Put yourself in the shoes of each of your 2 stakeholders and use the \$100 method to prioritize each of your 15 requirements according to your selected criteria. \vspace{1em} \begin{tabular}{| c | p{3cm} | c | c |} \hline & & Amount of dollars & Amount of dollars \\ Req & Id & Stakeholder 1 & Stakeholder 2 \\ \hline \hline 1 & & & \\ \hline 2 & & & \\ \hline 3 & & &\\ \hline 4 & & &\\ \hline 5 & & &\\ \hline 6 & & &\\ \hline 7 & & & \\ \hline 8 & & & \\ \hline 9 & & &\\ \hline 10 & & &\\ \hline 11 & & &\\ \hline 12 & & &\\ \hline 13 & & & \\ \hline 14 & & & \\ \hline 15 & & & \\ \hline \end{tabular} \vspace{2em} \noindent Transfer your priority data above into a model file of this form: \begin{lstlisting} Model( Stakeholder("modeler") has ( Prio(1), Req("autoSave") has Benefit(25), Req("exportGraph") has Benefit(10), Req("exportTable") has Benefit(8), //... Req("autoCompletion") has Benefit(28)), Stakeholder("tester") has ( Prio(2), Req("autoSave") has Benefit(3), Req("exportGraph") has Benefit(25), Req("exportTable") has Benefit(14), //... 
Req("autoCompletion") has Benefit(2))) \end{lstlisting} Save the model code in a text file called \verb+prio100.scala+ \end{framed} \clearpage\newpage \noindent The {\bf ordinal priority ranking method} assigns a positive integer number to each requirement $q_j$ for each stakeholder $s_i$ denoted $p(s_i, q_j) \in [1..n]$, where $n$ is the total number of requirements and all $p(s_i, q_j)$ are different for each stakeholder $s_i$. The priority $p(s_i, q_j)$ represents an ordinal scale estimation of the preference order of the requirement $q_j$ according to the views of stakeholder $s_i$. Estimations on an ordinal scale imply that the estimates only provide ordinal information and not ratio information, which means that it is not possible to tell if one priority is, say $33\%$ or $50\%$ of another priority, just because it has a lower ordinal value. One way to assign ordinal priority values is to use {\bf pairwise comparison} of the requirements and then by some algorithm (e.g. insertion sort\footnote{\url{http://en.wikipedia.org/wiki/Insertion_sort}}) sort the requirements in priority order, and when the sorting is ready, let $p(s_i, q_j) = n$ for the first requirement, $p(s_i, q_j) = n - 1$ for the second, etc. down to $p(s_i, q_j) = 1$ for the last requirement. \begin{framed} \noindent If there are $n$ requirements, what is the total number of possible pairwise combinations, without considering order? \newline\newline \underline{\hspace{11cm}} \newline\newline Try these lines in the reqT console to check your answer above: {\scriptsize\begin{verbatim} reqT> def allPairs(n: Int) = (1 to n).combinations(2).toVector reqT> allPairs(100).foreach(println) reqT> allPairs(100).size //size: _____ \end{verbatim}} \noindent Consider a directed graph of comparisons, where a directed edge $(a,b)$ represents a pair-wise comparison $a < b$. If there are $n$ requirements nodes, what is the minimum number of comparison edges needed to connect all requirement with each other? \newline\newline \underline{\hspace{11cm}} \newline\newline Try these lines in the reqT console to check your answer above: {\scriptsize\begin{verbatim} reqT> def minPairs(n:Int) = (1 to n-1).map(i => (i,i+1)).toVector reqT> minPairs(100).foreach(println) reqT> minPairs(100).size //size: _____ \end{verbatim}} \end{framed} \subsubsection{Compare methods}\label{section:priocmpr} { \begin{framed} \noindent {\bf Define a cost criteria}. Choose a cost criteria relevant to your project. Define your criteria so that it is desirable to minimize the priority value. A typical cost criteria is {\it ''total effort of development including unit and system testing''}. \newline\newline Criteria def.: \underline{\hspace{10cm}} \newline\newline {\bf Use two methods and compare}. Use first the ordinal-scale priority ranking method using insertion sort and then the ratio-scale \$100 method, both with the same cost criteria applied to your 15 features. When you do insertion sort to order your requirements in cost order on a ratio scale, you can e.g. put each requirement on a post-it note and then enact the algorithm physically, or you could enter your requirements in an editor and copy-paste them into the right order while building the final list. Put yourself in the shoes of those stakeholders, e.g. the developers of your project, that are knowledgeable about your costs. While you carry out the prioritization, reflect on pros and cons of each method. 
\vspace{1em}
\begin{tabular}{| c | p{25mm} | p{31mm} | p{31mm} |}
\hline
    &    & Cost (insertion sort) & Cost (\$100) \\
Req & Id & Ordinal scale & Ratio scale \\ \hline \hline
1 &  &  & \\ \hline
2 &  &  & \\ \hline
3 &  &  & \\ \hline
4 &  &  & \\ \hline
5 &  &  & \\ \hline
6 &  &  & \\ \hline
7 &  &  & \\ \hline
8 &  &  & \\ \hline
9 &  &  & \\ \hline
10 &  &  & \\ \hline
11 &  &  & \\ \hline
12 &  &  & \\ \hline
13 &  &  & \\ \hline
14 &  &  & \\ \hline
15 &  &  & \\ \hline
\end{tabular}
\end{framed}}

\begin{framed}
\noindent {\bf Reflect on pros and cons of each method}.
Write down your reflections from comparing the
ordinal-scale-priority-ranking-by-insertion-sort-method with the \$100-method.
\begin{enumerate}[noitemsep]
\item What are the pros and cons of each method?
\item How do the methods scale as the number of requirements increases?
\item Which method is easiest to carry out incrementally when a new requirement is elicited?
\item Which method may give most accurate/useful/honest estimations?
\item In which different contexts may the methods differ in usability?
\item Was the \$100-method easier because of what you learned about the costs in your first round with the insertion sort method?
\item Any other reflections?
\vspace{12cm}
\end{enumerate}
\end{framed}

\clearpage\newpage
\subsection{Release Planning Preparations}
\subsubsection{Definitions}\label{section:rpdef}

An optimal release plan can be defined as an allocation of requirements to
different releases that maximizes a benefit under some constraints, such as
cost and precedence. The definitions below represent one of many possible
models of the release planning problem. Here we use a simplified model that
does not take different types of resources and costs into account. In the
model below, we also exclude precedence constraints. In subsequent
assignments, different types of resources and precedence constraints are
introduced.

\begin{description}
\item [] $S$ is a set of $m$ stakeholders, $S=\{s_1, s_2, ..., s_m\}$
\item [] $Q$ is a set of $n$ requirements $Q=\{q_1, q_2, ..., q_n\}$
\item [] $R$ is a set of $k$ releases, $R=\{r_1, r_2, ..., r_k\}$
\item [] $r_i$ represents release $i$ containing some subset of requirements $\{q_j\} \subseteq Q$, where every requirement $q_j$ only belongs to one release
\item [] $p(s_i, q_j)$ is a number representing the priority of requirement $q_j$ assigned by stakeholder $s_i$
\item [] $w(s_i)$ is a number representing the importance of stakeholder $s_i$
\item [] $P(q_i)$ is the weighted, normalized sum of all stakeholders' priorities of requirement $q_i$, defined as $P(q_i) = \sum\limits_{s_j \in S} p(s_j, q_i)w(s_j)a_j$, where the $a_j$ are constants of normalization defined similarly to the $a_i$ constants in Section~\ref{section:priomethods}.
\item [] $c(q_i)$ is the cost of implementing requirement $q_i$
\item [] $C(r_i)$ is the capacity of release $r_i$
\end{description}

\noindent {\bf Optimization}.
The releases should be optimized by maximizing the total sum of all priorities of the requirements included in each release, for all releases: \begin{displaymath} max [ \sum\limits_{r_i \in R} \sum\limits_{q_j \in r_i}P(q_j) ] \end{displaymath} \begin{flushleft}subjected to the constraint that the sum of all requirements' costs in a release should be less than the capacity of the release:\end{flushleft} \begin{displaymath} \sum\limits_{q_i \in r_i} c(q_i) \leq C(r_i)\text{, for all }r_i \in R \end{displaymath} \subsubsection{Simple release planning} The definitions in Section~\ref{section:rpdef} represents a simplified model of the release planning problem, where there is only one type of cost and no special constraints on the ordering of requirements. Based on previous data that you have gathered about your 15 requirements, create a simplified release plan by following the steps below. We start by simplifying the model even further considering only one stakeholder. \begin{framed} \footnotesize \noindent {\bf Calculate a simple release plan manually} by allocating some of your $n=15$ requirements to two releases, denoted $r_1$ and $r_2$. In $r_1$ only a maximum of $35\%$ of the total cost can be fitted, and in $r_2$ there is only room for max $15\%$ of the total cost. \begin{enumerate}[noitemsep] \item Fill in the priority data in the Priority column in the table below by transferring the data from one of your 2 stakeholders in Section~\ref{section:priomethods}. \item Fill in the priority data in the Cost column in the table below by transferring the data from the \$100 method in Section~\ref{section:priocmpr}. \item Allocate some $q_i$ to either $r_1$ or $r_2$ and transfer costs and priorities to the selected release respectively. The cost of a requirement is not allowed to be split between releases. Try to maximize the sum of priorities of scheduled requirements. \end{enumerate} \begin{tabular}{| p{8mm} | p{12mm} | p{12mm} | p{1.4cm} | p{1.4cm} | p{1.4cm} | p{1.4cm} |} \hline Req $q_i$ & Prio $p(q_i)$ & Cost $c(q_i)$ & $p(q_i)$ if $q_i \in r_1$ else $0$ & $c(q_i)$ if $q_i \in r_1$ else $0$ & $p(q_i)$ if $q_i \in r_2$ else $0$ & $c(q_i)$ if $q_i \in r_2$ else $0$ \\ \hline \hline 1 & & & & & &\\ \hline 2 & & & & & &\\ \hline 3 & & & & & &\\ \hline 4 & & & & & &\\ \hline 5 & & & & & &\\ \hline 6 & & & & & &\\ \hline 7 & & & & & &\\ \hline 8 & & & & & &\\ \hline 9 & & & & & &\\ \hline 10 & & & & & &\\ \hline 11 & & & & & &\\ \hline 12 & & & & & &\\ \hline 13 & & & & & &\\ \hline 14 & & & & & &\\ \hline 15 & & & & & &\\ \hline \hline Sum: & & & & & & \\ \hline \end{tabular} \begin{description} \item[] \vspace{1em}Total sum of allocated priorities $\sum\limits_{q_i \in r_1} p(q_i) + \sum\limits_{q_i \in r_2} p(q_i) = $ \underline{\hspace{3cm}} \item[] \vspace{1em}Total sum of non-allocated priorities $\sum\limits_{q_i \not\in r_1, r_2} p(q_i) = $ \hspace{8.5mm}\underline{\hspace{3cm}} \item[] \vspace{1em}How long did it take to make a good manual release plan? \hspace{8.5mm}\underline{\hspace{2.3cm}} \end{description} \end{framed} \subsubsection{Advanced release planning}\label{section:advRP} By involving different priorities from more than one stakeholder, adding multiple resources that each have their own costs per requirement, and introducing constraints on the ordering of requirements, the release planning problem becomes significantly more challenging. 
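Before turning to the advanced problem, it may help to see the objective and
the capacity constraint of Section~\ref{section:rpdef} on a tiny hypothetical
instance (for illustration only, not the lab data). Suppose one release $r_1$
has capacity $C(r_1)=10$, and three requirements have priorities $P(q_1)=9$,
$P(q_2)=8$, $P(q_3)=3$ with costs $c(q_1)=6$, $c(q_2)=5$, $c(q_3)=4$. The
allocation $r_1=\{q_1,q_2\}$ is infeasible since $c(q_1)+c(q_2)=11 > C(r_1)$,
whereas $r_1=\{q_1,q_3\}$ and $r_1=\{q_2,q_3\}$ are both feasible with
objective values $9+3=12$ and $8+3=11$ respectively, so $r_1=\{q_1,q_3\}$ is
the optimal plan for this small instance.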
As a preparation for the lab, you are requested to manually try to solve an advanced release planning problem for a small set of requirements. We will stick with just one stakeholder to make it a bit easier. \footnote{Don't spend ages on trying to find an optimal solution manually; in practice we need tools for that as the release planning problem is similar to the so called \href{http://en.wikipedia.org/wiki/Knapsack_problem}{knapsack problem} which is \href{http://en.wikipedia.org/wiki/NP-hard}{NP-hard}!} {\fontsize{9}{10}\begin{framed} \noindent As an example we will use 9 features listed in Table~\ref{advPlan} below. We assume a hypothetical release planning problem with two releases ''March'' and ''July'' and two development resources ''Team A'' and ''Team B''. The teams have different skills (back-end \& client dev) and the time to do their respective part differ, while both are needed. The teams have different available capacities to spend on the different releases. For the March release Team A can max spend 20 h and Team B can spend max 15 h. For the July release Team A has max 15 h and Team B has max 15 h. \begin{enumerate}[noitemsep,nolistsep] \item Allocate some or all of the features in Table~\ref{advPlan} by filling in its last column with a release name: March or July (or -- if not allocated to any release), while abiding the releases' resource constraints. {\it Try to maximize the sum of all priorities of the allocated features.} \item Fill in the requested results in the blank cells of Table~\ref{advSum}. \item We now introduce the following precedence constraint:\\ \verb+Feature("exportHtml") precedes Feature("exportGraphViz")+ implying that exportHtml must be in \verb+Release("March")+ release while exportGraphViz must be in \verb+Release("July")+ if both are allocated. Re-allocate some or all of the features in Table~\ref{advPlan2} now taking into account the above precedence constraint, while still abiding the same resource constraints as before. {\it Try to maximize the sum of all priorities of the allocated features.} \item Fill in the requested results in the blank cells of Table~\ref{advSum2}. \item How long did it take to make a good manual release plan? \underline{\hspace{1.3cm}} \item How did you find your solution? Describe your methodology. \item What are the difficulties of release planning? \item How does the methodology to find the release plan change when precedence constraints are introduced? \item Are the suggested solutions you have found optimal? Why or why not? 
\end{enumerate} \end{framed} } \clearpage\newpage \begin{table}[ht] \caption{Advanced release planning without precedence constraints.} \centering \begin{tabular}{|c | c | c | c| p{15mm} |} \hline Feature & Priority & \parbox[t]{14mm}{Team A \\ cost\\} & \parbox[t]{14mm}{Team B \\ cost} & Release \\ \hline \hline exportHtml & 10 & 9 & 2 & \\ \hline exportGraphViz & 10 & 7 & 8 & \\ \hline exportTabular & 10 & 3 & 9 &\\ \hline exportLatex & 7 & 6 & 4 &\\ \hline exportContextDiagramSvg & 6 & 3 & 4 &\\ \hline syntaxColoring & 3 & 6 & 2 &\\ \hline autoCompletion & 4 & 3 & 3 &\\ \hline releasePlanning & 7 & 4 & 5 &\\ \hline autoSave & 9 & 6 & 7 &\\ \hline \end{tabular} \label{advPlan} \end{table} \begin{table}[ht] \caption{Sums without precedence constraints.} \centering \begin{tabular}{| c | c | c | c |} \hline Sum & Release March & Release July & \parbox[t]{14mm}{Total \\} \\ \hline \hline Team A's capacity & 20 & 15 & 35\\ \hline Team B's capacity & 15 & 15 & 30 \\ \hline Sum of hours Team A worked & & & \\ \hline Sum of hours Team B worked & & & \\ \hline Sum of priorities & & & \\ \hline \end{tabular} \label{advSum} \end{table} \begin{framed} \noindent Room for answering questions 6--7 in Section~\ref{section:advRP}. \vspace{5.5cm} \end{framed} \clearpage\newpage \begin{table}[ht] \caption{Advanced release planning {\it with} a precedence constraint.} \centering \begin{tabular}{|c | c | c | c| p{15mm} |} \hline Feature & Priority & \parbox[t]{14mm}{Team A \\ cost\\} & \parbox[t]{14mm}{Team B \\ cost} & Release \\ \hline \hline exportHtml & 10 & 9 & 2 & \\ \hline exportGraphViz & 10 & 7 & 8 & \\ \hline exportTabular & 10 & 3 & 9 &\\ \hline exportLatex & 7 & 6 & 4 &\\ \hline exportContextDiagramSvg & 6 & 3 & 4 &\\ \hline syntaxColoring & 3 & 6 & 2 &\\ \hline autoCompletion & 4 & 3 & 3 &\\ \hline releasePlanning & 7 & 4 & 5 &\\ \hline autoSave & 9 & 6 & 7 &\\ \hline \end{tabular} \label{advPlan2} \end{table} \begin{table}[ht] \caption{Sums {\it with} a precedence constraint.} \centering \begin{tabular}{| c | c | c | c |} \hline Sum & Release March & Release July & \parbox[t]{14mm}{Total \\} \\ \hline \hline Team A's capacity & 20 & 15 & 35\\ \hline Team B's capacity & 15 & 15 & 30 \\ \hline Sum of hours Team A worked & & & \\ \hline Sum of hours Team B worked & & & \\ \hline Sum of priorities & & & \\ \hline \end{tabular} \label{advSum2} \end{table} \begin{framed} \noindent Room for answering questions 8--9 in Section~\ref{section:advRP}. \vspace{5.5cm} \end{framed} \clearpage\newpage \subsubsection{Constraint solving} The reqT tool includes an efficient constraint solver called JaCoP\footnote{See \url{http://jacop.eu/} and \url{http://cs.lth.se/edan01} if you want to learn more about CSP.}, enabling the formulation of prioritization and release planning problems using constraints over integer values in requirements models. After the problem has been formulated the constraint solver in reqT may automatically find a solution (if it exists), without the need for any further algorithm implementation. \begin{framed} \noindent 1. 
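One way to see this connection is to give each requirement $q_j$ an integer
variable $x_j \in \{0, 1, ..., k\}$, where $x_j = 0$ means that $q_j$ is not
allocated and $x_j = i$ means that $q_j$ is allocated to release $r_i$. The
capacity constraints of Section~\ref{section:rpdef} then become
$\sum_{j:\, x_j = i} c(q_j) \leq C(r_i)$ for each release $r_i$, and the goal
is to maximize $\sum_{j:\, x_j \neq 0} P(q_j)$. This is only one possible
encoding, shown here to illustrate the idea; the exercise below works directly
with the \verb+Var+ and \verb+Constraints+ constructs of reqT.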
Try these lines in the reqT console: {\scriptsize\begin{verbatim} reqT> Var("a") //constructs a variable named "a" that can be used in constraints reqT> Var("a") > Var("b") //constructs a XgtY constraint reqT> Var("a") :: {1 to 10} //constructs a Bounds constraint on values of Var("a") reqT> val cs = Constraints(Var("a")>Var("b"), Var("a")::{1 to 10}, Var("b")::{5 to 15}) reqT> var m = Model(Stakeholder("s1") has cs) //Constraints may be attributes in a model reqT> cs.satisfy //Constraints can be satisfied if a solution exists reqT> cs.maximize(Var("b")) //search for a solution that maximizes Var("b") reqT> m = m + (Section("solution") has m.satisfy) //m.satisfy returns solution model reqT> cs.satisfy //By default, the solution search is initialized with random values //... thus if many solutions exists, each call to satisfy may pick a new solution reqT> Req("q1")/Benefit > Req("q2")/Benefit //A constraint of integer attribute paths reqT> m = m ++ Model(Req("q1")/Benefit > Req("q2")/Benefit).satisfy //default bounds if no tighter bounds are given: {-1000 to 1000} \end{verbatim}} \noindent 2. Construct a constraint problem with no solution: \newline\newline \newline\newline 3. Call \verb+satisfy+ on your unsatisfiable problem. What is the output? \newline \newline\newline 4. Investigate available constraints in this reqT source file at line 261: \newline {\footnotesize\url{https://github.com/reqT/reqT/blob/3.0.x/src/reqT/constraints.scala#L261}} \noindent 5. Construct and solve some constraint problem using the \verb+AllDifferent+ constraint. You can construct a sequence of variables by typing e.g. \verb+Vector(Var("x"), Var("y"))+, which is a subtype of \verb+Seq[Var]+. \end{framed} %\clearpage\newpage %\section{Lab Instructions}\label{section:instr} %\subsection{Prioritization Lab Instructions} %<TODO> %\begin{enumerate} %\item Load req.scala and make pairwise comparisons using reqT.comparisonParser.parse %\item Load prio100.scala and calculate weighted normalized priorities using template menu -> \$100 method %\end{enumerate} %\subsection{Release Planning Lab Instructions} %<TODO> %\section{Conclusion and reflection} %<TODO> \end{document}
{ "alphanum_fraction": 0.73115614, "avg_line_length": 53.1183206107, "ext": "tex", "hexsha": "d39064663d9d160cbc27afa3365b0aa8e35044e8", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2020-02-10T12:39:33.000Z", "max_forks_repo_forks_event_min_datetime": "2015-08-27T03:32:34.000Z", "max_forks_repo_head_hexsha": "8020a9e896487811870f914422c1d43fc05a838d", "max_forks_repo_licenses": [ "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause" ], "max_forks_repo_name": "reqT/reqT", "max_forks_repo_path": "doc/lab2/lab2-prep-2015.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "8020a9e896487811870f914422c1d43fc05a838d", "max_issues_repo_issues_event_max_datetime": "2020-09-27T18:45:25.000Z", "max_issues_repo_issues_event_min_datetime": "2015-02-05T10:28:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause" ], "max_issues_repo_name": "reqT/reqT", "max_issues_repo_path": "doc/lab2/lab2-prep-2015.tex", "max_line_length": 709, "max_stars_count": 12, "max_stars_repo_head_hexsha": "8020a9e896487811870f914422c1d43fc05a838d", "max_stars_repo_licenses": [ "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause" ], "max_stars_repo_name": "reqT/reqT", "max_stars_repo_path": "doc/lab2/lab2-prep-2015.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-21T20:32:08.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-07T09:32:10.000Z", "num_tokens": 7865, "size": 27834 }
With the resource manager enabled for production usage, users should now be able to run jobs. To demonstrate this, we will add a ``test'' user on the {\em master} host that can be used to run an example job. % begin_ohpc_run \begin{lstlisting}[language=bash,keywords={}] [sms](*\#*) useradd -m test \end{lstlisting} % end_ohpc_run Next, the user's credentials need to be distributed across the cluster. \xCAT{}'s \texttt{xdcp} has a merge functionality that adds new entries into credential files on {\em compute} nodes: % begin_ohpc_run \begin{lstlisting}[language=bash,keywords={}] # Create a sync file for pushing user credentials to the nodes [sms](*\#*) echo "MERGE:" > syncusers [sms](*\#*) echo "/etc/passwd -> /etc/passwd" >> syncusers [sms](*\#*) echo "/etc/group -> /etc/group" >> syncusers [sms](*\#*) echo "/etc/shadow -> /etc/shadow" >> syncusers # Use xCAT to distribute credentials to nodes [sms](*\#*) xdcp compute -F syncusers \end{lstlisting} % end_ohpc_run \nottoggle{isxCATstateful}{Alternatively, the \texttt{updatenode compute -f} command can be used. This re-synchronizes (i.e. copies) all the files defined in the \texttt{syncfile} setup in Section \ref{sec:file_import}. \\ } ~\\ \input{common/prun} \iftoggle{isSLES_ww_slurm_x86}{\clearpage} %\iftoggle{isxCAT}{\clearpage} \subsection{Interactive execution} To use the newly created ``test'' account to compile and execute the application {\em interactively} through the resource manager, execute the following (note the use of \texttt{prun} for parallel job launch which summarizes the underlying native job launch mechanism being used): \begin{lstlisting}[language=bash,keywords={}] # Switch to "test" user [sms](*\#*) su - test # Compile MPI "hello world" example [test@sms ~]$ mpicc -O3 /opt/ohpc/pub/examples/mpi/hello.c # Submit interactive job request and use prun to launch executable [test@sms ~]$ srun -n 8 -N 2 --pty /bin/bash [test@c1 ~]$ prun ./a.out [prun] Master compute host = c1 [prun] Resource manager = slurm [prun] Launch cmd = mpiexec.hydra -bootstrap slurm ./a.out Hello, world (8 procs total) --> Process # 0 of 8 is alive. -> c1 --> Process # 4 of 8 is alive. -> c2 --> Process # 1 of 8 is alive. -> c1 --> Process # 5 of 8 is alive. -> c2 --> Process # 2 of 8 is alive. -> c1 --> Process # 6 of 8 is alive. -> c2 --> Process # 3 of 8 is alive. -> c1 --> Process # 7 of 8 is alive. -> c2 \end{lstlisting} \begin{center} \begin{tcolorbox}[] The following table provides approximate command equivalences between SLURM and PBS Pro: \vspace*{0.15cm} \input common/rms_equivalence_table \end{tcolorbox} \end{center} \nottoggle{isCentOS}{\clearpage} \iftoggle{isCentOS}{\clearpage} \subsection{Batch execution} For batch execution, \OHPC{} provides a simple job script for reference (also housed in the \path{/opt/ohpc/pub/examples} directory. This example script can be used as a starting point for submitting batch jobs to the resource manager and the example below illustrates use of the script to submit a batch job for execution using the same executable referenced in the previous interactive example. \begin{lstlisting}[language=bash,keywords={}] # Copy example job script [test@sms ~]$ cp /opt/ohpc/pub/examples/slurm/job.mpi . 
# Examine contents (and edit to set desired job sizing characteristics)
[test@sms ~]$ cat job.mpi
#!/bin/bash

#SBATCH -J test              # Job name
#SBATCH -o job.%j.out        # Name of stdout output file (%j expands to %jobId)
#SBATCH -N 2                 # Total number of nodes requested
#SBATCH -n 16                # Total number of mpi tasks requested
#SBATCH -t 01:30:00          # Run time (hh:mm:ss) - 1.5 hours

# Launch MPI-based executable
prun ./a.out

# Submit job for batch execution
[test@sms ~]$ sbatch job.mpi
Submitted batch job 339
\end{lstlisting}

\begin{center}
\begin{tcolorbox}[]
\small
The use of the \texttt{\%j} option in the example batch job script shown is a
convenient way to track application output on an individual job basis. The
\texttt{\%j} token is replaced with the \SLURM{} job allocation number once
assigned (job~\#339 in this example).
\end{tcolorbox}
\end{center}
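
Although it is not part of the original recipe, a quick way to confirm that the batch job
ran to completion is to query the scheduler and inspect the captured output file. The job
id (339) and output file name below simply follow from the example above and will differ
on your system; \texttt{sacct} additionally assumes that job accounting is enabled.

\begin{lstlisting}[language=bash,keywords={}]
# Check the queue for pending/running jobs owned by "test"
[test@sms ~]$ squeue -u test

# Review accounting information for the completed job (if accounting is enabled)
[test@sms ~]$ sacct -j 339

# Examine captured stdout (file name follows from the -o directive above)
[test@sms ~]$ cat job.339.out
\end{lstlisting}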
{ "alphanum_fraction": 0.7036513545, "avg_line_length": 33.96, "ext": "tex", "hexsha": "6e21e0d42382ea8ce96dddfb369e6c01abd49b96", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-02-17T23:49:09.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-17T23:49:09.000Z", "max_forks_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "utdsimmons/ohpc", "max_forks_repo_path": "docs/recipes/install/common/xcat_slurm_test_job.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "utdsimmons/ohpc", "max_issues_repo_path": "docs/recipes/install/common/xcat_slurm_test_job.tex", "max_line_length": 88, "max_stars_count": 1, "max_stars_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "utdsimmons/ohpc", "max_stars_repo_path": "docs/recipes/install/common/xcat_slurm_test_job.tex", "max_stars_repo_stars_event_max_datetime": "2019-08-17T21:20:07.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-17T21:20:07.000Z", "num_tokens": 1235, "size": 4245 }
% !arara: clean: { files: [cv_egon_geerardyn.aux, cv_egon_geerardyn.bbl, phdthesis.acn, phdthesis.acr, cv_egon_geerardyn.out, cv_egon_geerardyn.blg, cv_egon_geerardyn.log, cv_egon_geerardyn.run.xml, cv_egon_geerardyn.bcf] } % arara: vc % arara: xelatex: { shell : yes, synctex : yes, options : "-8bit" } % arara: biber % arara: xelatex: { shell : yes, synctex : yes, options : "-8bit" } \documentclass{cv-egeerardyn} \addbibresource{biblio/ownWork.bib} \titles{dr. ir.} \firstName{Egon} \lastName{Geerardyn} \tagLine{Generalist, Systems Thinker, IT/EE aficionado} \maritalStatus{not married} \mbti{INTP-A} \address{Leuven (Belgium)} \email{[email protected]} % \website{egeerardyn.github.io} \phone{+32 472 65 61 08} \birthDate{15 April 1988} \nationality{Belgian} \birthPlace{Jette (BE)} \profileGithub{egeerardyn} \profileLinkedIn{egongeerardyn} %\profileResearchGate{Egon\_Geerardyn} \profileStackOverflow{Egon}{514071} \newlink{\VUB}{Vrije Universiteit Brussel}{https://www.vub.ac.be} \newlink{\TUe}{Technische Universiteit Eindhoven}{https://www.tue.nl} \newlink{\Stanford}{Stanford}{https://stanford.edu} \newlink{\EPFL}{EPFL}{https://epfl.ch} \newlink{\CST}{Control System Technology}{https://www.tue.nl/en/university/departments/mechanical-engineering/research/research-groups/control-systems-technology/} \newlink{\ELEC}{ELEC}{http://www.vubirelec.be} \newlink{\IMEC}{IMEC}{http://www.imec.be} \newlink{\PK}{Polytechnische Kring}{http://www.pk.be} \newlink{\lms}{Siemens Industry Software NV (LMS)}{https://www.plm.automation.siemens.com/en/products/lms/} \newlink{\softkinetic}{Sony Depthsensing Solutions}{https://www.sony-depthsensing.com/} \newcommand{\TikZ}{\mbox{Ti\emph{k}Z}\xspace} \newlink{\JohanSchoukens}{Johan Schoukens}{http://homepages.vub.ac.be/~jschouk/} \newlink{\TomOomen}{Tom Oomen (TU/Eindhoven)}{http://www.toomen.eu} \newlink{\PietWambacq}{Piet Wambacq}{https://www.linkedin.com/in/piet-wambacq-5ab852a} \newlink{\GerdVandersteen}{Gerd Vandersteen}{http://vubirelec.be/people/gerd-vandersteen} \newlink{\githubMatlabToTikz}{github:matlab2tikz}{https://github.com/matlab2tikz/matlab2tikz/} \begin{document} \maketitle \begin{experience} \addJob{2017 -- now} {System Engineer (R\&D)} {\softkinetic} {% \vspace{-3em} %TODO: actually put something here } \addJob{2016 -- 2017}% {Software development engineer}% {\lms}% {% I developed Test.Lab environmental testing and modal analysis applications in \keyword{(modern) C++} in \keyword{Visual Studio} (and some \keyword{C\#}, \keyword{VB}, and \keyword{Python} for testing and code analysis). I was responsible for improving the numerical modal estimation algorithms. I took an active role in onboarding \keyword{C++} developers, and migration of the \keyword{SVN}-based toolchain to \keyword{Hg}. } \addJob{2012 -- now}% {Major contributor and maintainer (since 2015)}% {\githubMatlabToTikz}{ I maintain the open-source program \code{matlab2tikz} that converts \keyword{MATLAB}/\keyword{Octave} figures to \keyword{TikZ} (\keyword{\LaTeX}) code. This program is one of the \hlink{most-downloaded}{http://www.mathworks.com/matlabcentral/fileexchange/?sort=downloads\_desc\&term=type\%3A\%22Function\%22} programs on FileExchange.\\ I give guidance to half a dozen volunteers and keep track of the codebase using \keyword{git} and \keyword{Jenkins}. 
} % \addJob% % {2009 -- 2011} % {Engineering summer jobs} % {\ELEC~(\VUB)}{ % For the statistics, system theory, and system identification courses, I produced course notes, lab instructions, and measurement set-ups using \keyword{LyX}, \keyword{LabView} and \keyword{NI Elvis II}.} % \addJob{2007 -- 2009}{Web master and server admin}{\PK}{ % For my engineering fraternity, I updated the website to \keyword{Joomla} and helped to maintain the \keyword{Gentoo Linux} web server running \keyword{Apache}, \keyword{MySQL} and \keyword{PHP}. % } \end{experience} \begin{education} \addEducation% {2011 -- 2016}% {PhD Electrical Engineering (dr.)}% {\VUB}% {My research on user-friendly \keyword{system identification} that can be used e.g. for \keyword{robust control}, is available on\hlink{GitHub}{https://github.com/egeerardyn/phdthesis}. Promotors: \JohanSchoukens{} and \TomOomen.} \addEducation% {2006 -- 2011}% {BSc \& MSc Engineering Science: Electronics and IT (ir.)}% {\VUB}% {For my thesis on pin-pointing nonlinear contributors in CMOS circuits, I received the \keyword{\IMEC{} award} for best MSc thesis. Promotors: \PietWambacq{} and \GerdVandersteen.} \addText% {\vspace{-6.35\baselineskip}2011 -- now}% {I continually strive to perfect my abilities by taking courses such as \keyword{LabView Core 2}, \keyword{FPGA}, and \keyword{Associate Developer} (CLAD2014). I have successfully completed top-notch MOOCs such as \hlink{Convex Optimization}{https://lagunita.stanford.edu/courses/Engineering/CVX101/Winter2014/about}, \hlink{Functional Programming with Scala}{https://www.coursera.org/course/progfun}, \hlink{Machine Learning}{https://www.coursera.org/learn/machine-learning} and \hlink{Databases}{https://lagunita.stanford.edu/courses/Home/Databases/Engineering/about}. I actively participate in \hlink{Leuven Lean Coffee}{https://www.meetup.com/Leuven-Lean-coffee} to discuss \keyword{Agile}, \keyword{Lean}, etc. In the \hlink{Leuven CoderDojo}{https://www.coderdojobelgium.be/nl}, I am one of the driving forces behind the \keyword{Arduino} track.} \end{education} \begin{languages} \addLanguage{Dutch}{Native}{} \addLanguage{English}{Advanced}{Daily use in a professional setting.} \addLanguage{French}{Intermediate}{I have lived/worked in Ixelles (Brussels) for a few years.} \addLanguage{Japanese}{Basic}{Notions learned in language course at work.} \end{languages} \section{skills} For my everyday work, I rely on \keyword{Windows}, \keyword{Visual Studio}, \keyword{MATLAB}, and \keyword{Python}. Previously, I have used \keyword{macOS}, \keyword{Linux}, \keyword{LabView}, \keyword{Sage}, \keyword{LaTeX}, \keyword{git}, and \keyword{NI hardware} extensively. I also have some experience with \keyword{SimuLink}, \keyword{R}, \keyword{ANTLR}, \keyword{Scala}, \keyword{Java}, \keyword{Delphi}, \keyword{assembly}, and \keyword{VHDL} amongst others.\\ As a teaching assistant, I taught \keyword{C} for \keyword{microcontrollers}, \keyword{PCB} and filter design, and \keyword{PID control}. \end{document}
{ "alphanum_fraction": 0.741980046, "avg_line_length": 50.1153846154, "ext": "tex", "hexsha": "6ac318012c2e44f7673c8fe041afd69f11b87c5a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a2481412be8eccd07f1e561ff5f2d88720a4e89d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "egeerardyn/cv", "max_forks_repo_path": "cv_egon_geerardyn.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a2481412be8eccd07f1e561ff5f2d88720a4e89d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "egeerardyn/cv", "max_issues_repo_path": "cv_egon_geerardyn.tex", "max_line_length": 223, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a2481412be8eccd07f1e561ff5f2d88720a4e89d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "egeerardyn/cv", "max_stars_repo_path": "cv_egon_geerardyn.tex", "max_stars_repo_stars_event_max_datetime": "2018-03-21T20:15:11.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-21T20:15:11.000Z", "num_tokens": 2055, "size": 6515 }
\chapter{Hand Gesture Recognition for Human-Robot Interaction}
\label{ch:solution}

To build an effective and easy to use hand gesture recognition system for NAO, various tools and technologies are studied during this thesis. The main challenge is to find a solution that can integrate all essential components into a robust system. However, due to the computational and compatibility limitations of NAO \cite{17}, we have faced problems in implementing a few of the contemplated solutions. Finally, the solution that succeeded in achieving the goal will be discussed in the following section.

\section{Implementation}
\label{sec:sol:impl}

After analyzing the disadvantages of the other experimental designs, the final design is chosen to build an efficient real-time hand gesture recognition system for human-robot interaction using skeletal points. Figure \ref{fg:hri:architecture} shows the architecture of the solution that is implemented during this thesis by grouping many components into four different modules which serve several purposes. Each module is implemented in a different environment, as shown in the figure, and the modules communicate with one another to complete the data flow. All these modules use a common configuration file named \textit{hri.json} that contains information such as the port number, host name and log path.

\input{chapter/figures/hri-architecture}

% Subsections of implementation
\input{chapter/content/solution/hri}
\input{chapter/content/solution/brain}
\input{chapter/content/solution/cc}
\input{chapter/content/solution/command}
\input{chapter/content/solution/head-mount}

%% Sections
\input{chapter/content/solution/gesture-recognition}
\input{chapter/content/solution/robot-interaction}

\section{Summary}
In this chapter, we have discussed the implementation details of the hand gesture recognition system for human-robot interaction using the skeletal point tracking algorithm. Furthermore, we discussed the machine learning techniques which are used to model, train, classify and predict five static hand gestures. Finally, we explained how these trained gestures are used to interact with the humanoid robot NAO.
{ "alphanum_fraction": 0.8171641791, "avg_line_length": 112.8421052632, "ext": "tex", "hexsha": "388849675ab62896db5ad93d3e005d530b2899e1", "lang": "TeX", "max_forks_count": 35, "max_forks_repo_forks_event_max_datetime": "2021-12-15T04:47:16.000Z", "max_forks_repo_forks_event_min_datetime": "2015-06-25T08:51:57.000Z", "max_forks_repo_head_hexsha": "42effa14c0f7a03f460fba5cd80dd72d5206e2a8", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "AravinthPanch/gesture-recognition-for-human-robot-interaction", "max_forks_repo_path": "document/thesis/chapter/content/solution.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "42effa14c0f7a03f460fba5cd80dd72d5206e2a8", "max_issues_repo_issues_event_max_datetime": "2017-11-22T13:52:50.000Z", "max_issues_repo_issues_event_min_datetime": "2017-11-20T13:28:59.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "AravinthPanch/gesture-recognition-for-human-robot-interaction", "max_issues_repo_path": "document/thesis/chapter/content/solution.tex", "max_line_length": 728, "max_stars_count": 68, "max_stars_repo_head_hexsha": "42effa14c0f7a03f460fba5cd80dd72d5206e2a8", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "AravinthPanch/gesture-recognition-for-human-robot-interaction", "max_stars_repo_path": "document/thesis/chapter/content/solution.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-17T12:52:09.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-26T16:19:59.000Z", "num_tokens": 416, "size": 2144 }
\documentclass[a4paper, 10pt, twoside, headings=small]{scrartcl} \input{../options.tex} \setmainlanguage[]{english} \title{07 Rest, Relationships, and Healing} \author{Ellen G.\ White} \date{2021/03 Rest in Christ} \begin{document} \maketitle \thispagestyle{empty} \pagestyle{fancy} \begin{multicols}{2} \section*{Saturday – Rest, Relationships, and Healing} For those who are convicted of sin and weighed down with a sense of their unworthiness, there are lessons of faith and encouragement in this record. The Bible faithfully presents the result of Israel’s apostasy; but it portrays also the deep humiliation and repentance, the earnest devotion and generous sacrifice, that marked their seasons of return to the Lord. Every true turning to the Lord brings abiding joy into the life. When a sinner yields to the influence of the Holy Spirit, he sees his own guilt and defilement in contrast with the holiness of the great Searcher of hearts. He sees himself condemned as a transgressor. But he is not, because of this, to give way to despair; for his pardon has already been secured. He may rejoice in the sense of sins forgiven, in the love of a pardoning heavenly Father. It is God’s glory to encircle sinful, repentant human beings in the arms of His love, to bind up their wounds, to cleanse them from sin, and to clothe them with the garments of salvation.—Prophets and Kings, pp. 667, 668. The love of God is something more than a mere negation; it is a positive and active principle, a living spring, ever flowing to bless others. If the love of Christ dwells in us, we shall not only cherish no hatred toward our fellows, but we shall seek in every way to manifest love toward them. When one who professes to serve God wrongs or injures a brother, he misrepresents the character of God to that brother, and the wrong must be confessed, he must acknowledge it to be sin, in order to be in harmony with God. Our brother may have done us a greater wrong than we have done him, but this does not lessen our responsibility. If when we come before God we remember that another has aught against us, we are to leave our gift of prayer, of thanksgiving, of freewill offering, and go to the brother with whom we are at variance, and in humility confess our own sin and ask to be forgiven.—Thoughts From the Mount of Blessing, p. 59. If we have in any manner defrauded or injured our brother, we should make restitution. If we have unwittingly borne false witness, if we have misstated his words, if we have injured his influence in any way, we should go to the ones with whom we have conversed about him, and take back all our injurious misstatements. If matters of difficulty between brethren were not laid open before others, but frankly spoken of between themselves in the spirit of Christian love, how much evil might be prevented! How many roots of bitterness whereby many are defiled would be destroyed, and how closely and tenderly might the followers of Christ be united in His love! \section*{Sunday – Facing the Past} Hearing of the abundant provision made by the king of Egypt, ten of Jacob’s sons journeyed thither to purchase grain. On their arrival they were directed to the king’s deputy, and with other applicants they came to present themselves before the ruler of the land. And they “bowed down themselves before him with their faces to the earth.” … As Joseph saw his brothers stooping and making obeisance, his dreams came to his mind, and the scenes of the past rose vividly before him. 
His keen eye, surveying the group, discovered that Benjamin was not among them. Had he also fallen a victim to the treacherous cruelty of those savage men? He determined to learn the truth. … … He wished to learn if they possessed the same haughty spirit as when he was with them, and also to draw from them some information in regard to their home; yet he well knew how deceptive their statements might be.—Patriarchs and Prophets, pp. 224, 225. The servants of Christ are not to act out the dictates of the natural heart. They need to have close communion with God, lest, under provocation, self rise up, and they pour forth a torrent of words that are unbefitting, that are not as dew or the still showers that refresh the withering plants. This is what Satan wants them to do; for these are his methods. It is the dragon that is wroth; it is the spirit of Satan that is revealed in anger and accusing. But God’s servants are to be representatives of Him. He desires them to deal only in the currency of heaven, the truth that bears His own image and superscription. The power by which they are to overcome evil is the power of Christ. The glory of Christ is their strength. They are to fix their eyes upon His loveliness. … And the spirit that is kept gentle under provocation will speak more effectively in favor of the truth than will any argument, however forcible.—The Desire of Ages, p. 353. Christ is our only hope. We may look to Him, for He is our Saviour. We may take Him at His word, and make Him our dependence. He knows just the help we need, and we can safely put our trust in Him. If we depend on merely human wisdom to guide us, we shall find ourselves on the losing side. But we may come direct to the Lord Jesus, for He has said: “Come unto Me, all ye that labor and are heavy-laden, and I will give you rest. Take My yoke upon you, and learn of Me; for I am meek and lowly in heart: and ye shall find rest unto your souls.” It is our privilege to be taught of Him. … We have a divine audience to which to present our requests. Then let nothing prevent us from offering our petitions in the name of Jesus, believing with unwavering faith that God hears us, and that He will answer us. Let us carry our difficulties to God, humbling ourselves before Him.—Testimonies to Ministers and Gospel Workers, pp. 486, 487. \section*{Monday – Setting the Stage} [Joseph’s] brothers stood motionless, dumb with fear and amazement. The ruler of Egypt their brother Joseph, whom they had envied and would have murdered, and finally sold as a slave! All their ill treatment of him passed before them. They remembered how they had despised his dreams and had labored to prevent their fulfillment. … Seeing their confusion, he said kindly, “Come near to me, I pray you;” and as they came near, he continued, “I am Joseph your brother, whom ye sold into Egypt. Now therefore be not grieved, nor angry with yourselves, that ye sold me hither: for God did send me before you to preserve life.” Feeling that they had already suffered enough for their cruelty toward him, he nobly sought to banish their fears and lessen the bitterness of their self-reproach. … … “And he fell upon his brother Benjamin’s neck, and wept; and Benjamin wept upon his neck. Moreover he kissed all his brethren, and wept upon them: and after that his brethren talked with him.” They humbly confessed their sin and entreated his forgiveness. They had long suffered anxiety and remorse, and now they rejoiced that he was still alive.—Patriarchs and Prophets, pp. 230, 231. 
Although Joseph was exalted as a ruler over all the land, yet he did not forget God. He knew that he was a stranger in a strange land, separated from his father and his brethren, which often caused him sadness, but he firmly believed that God’s hand had overruled his course, to place him in an important position. And depending on God continually, he performed all the duties of his office, as ruler over the land of Egypt with faithfulness. … [When his brothers] humbly confessed their wrongs which they had committed against Joseph, and entreated his forgiveness, [they] greatly rejoiced to find that he was alive; for they had suffered remorse, and great distress of mind, since their cruelty toward him. And now as they knew that they were not guilty of his blood, their troubled minds were relieved. Joseph gladly forgave his brethren, and sent them away abundantly provided with provisions, and carriages, and everything necessary for the removal of their father’s family and their own to Egypt. Joseph gave his brother Benjamin more valuable presents than to his other brethren. As he sent them away he charged them, “See that ye fall not out by the way.” He was afraid that they might enter into a dispute, and charge upon one another the cause of their guilt in regard to their cruel treatment of himself. With joy they returned to their father.—Spiritual Gifts, vol. 3, pp. 152, 167. Jesus knows the circumstances of every soul. You may say, I am sinful, very sinful. You may be; but the worse you are, the more you need Jesus. He turns no weeping, contrite one away. He does not tell to any all that He might reveal, but He bids every trembling soul take courage. Freely will He pardon all who come to Him for forgiveness and restoration.—The Desire of Ages, p. 568. \section*{Tuesday – Forgive and Forget?} Peter had come to Christ with the question, “How oft shall my brother sin against me, and I forgive him? till seven times?” … Christ taught that we are never to become weary of forgiving. Not “Until seven times,” He said, “but, Until seventy times seven.” Then He showed the true ground upon which forgiveness is to be granted and the danger of cherishing an unforgiving spirit. In a parable He told of a king’s dealing with the officers who administered the affairs of his government. Some of these officers were in receipt of vast sums of money belonging to the state. As the king investigated their administration of this trust, there was brought before him one man whose account showed a debt to his Lord for the immense sum of ten thousand talents. He had nothing to pay, and according to the custom, the king ordered him to be sold, with all that he had, that payment might be made. But the terrified man fell at his feet and besought him, saying, “Have patience with me, and I will pay thee all. Then the Lord of that servant was moved with compassion, and loosed him, and forgave him the debt. … The pardon granted by this king represents a divine forgiveness of all sin. Christ is represented by the king, who, moved with compassion, forgave the debt of his servant. Man was under the condemnation of the broken law. He could not save himself, and for this reason Christ came to this world, clothed His divinity with humanity, and gave His life, the just for the unjust. He gave Himself for our sins, and to every soul He freely offers the blood-bought pardon.—Christ’s Object Lessons, pp. 243, 244. If your brethren err, you are to forgive them. 
When they come to you with confession, you should not say, I do not think they are humble enough. I do not think they feel their confession. What right have you to judge them, as if you could read the heart? The word of God says, “If he repent, forgive him. And if he trespasses against thee seven times in a day, and seven times in a day turn again to thee, saying, I repent; thou shalt forgive him.” Luke 17:3, 4. … Give the erring one no occasion for discouragement. Suffer not a Pharisaical hardness to come in and hurt your brother. Let no bitter sneer rise in mind or heart. Let no tinge of scorn be manifest in the voice. If you speak a word of your own, if you take an attitude of indifference, or show suspicion or distrust, it may prove the ruin of a soul. He needs a brother with the Elder Brother’s heart of sympathy to touch his heart of humanity. Let him feel the strong clasp of a sympathizing hand, and hear the whisper, Let us pray. God will give a rich experience to you both. Prayer unites us with one another and with God.—Christ’s Object Lessons, pp. 249, 250. \section*{Wednesday – Making It Practical} The cross of Calvary appeals to us in power, affording a reason why we should love our Saviour, and why we should make Him first and last and best in everything. We should take our fitting place in humble penitence at the foot of the cross. Here, as we see our Saviour in agony, the Son of God dying, the just for the unjust, we may learn lessons of meekness and lowliness of mind. Behold Him who with one word could summon legions of angels to His assistance, a subject of jest and merriment, of reviling and hatred. He gives Himself a sacrifice for sin. When reviled, He threatens not; when falsely accused, He opens not His mouth. He prays on the cross for His murderers. He is dying for them; He is paying an infinite price for every one of them. He bears the penalty of man’s sins without a murmur.—Lift Him Up, p. 233. Heaven viewed with grief and amazement Christ hanging upon the cross, blood flowing from His wounded temples, and sweat tinged with blood standing upon His brow. From His hands and feet the blood fell, drop by drop, upon the rock drilled for the foot of the cross. The wounds made by the nails gaped as the weight of His body dragged upon His hands. His labored breath grew quick and deep, as His soul panted under the burden of the sins of the world. All heaven was filled with wonder when the prayer of Christ was offered in the midst of His terrible suffering,—“Father, forgive them; for they know not what they do.” Luke 23:34.—The Desire of Ages, p. 760. The Teacher from heaven, no less a personage than the Son of God, came to earth to reveal the character of the Father to men, that they might worship Him in spirit and in truth. Christ revealed to men the fact that the strictest adherence to ceremony and form would not save them; for the kingdom of God was spiritual in its nature. … He presented to men that which was exactly contrary to the representations of the enemy in regard to the character of God, and sought to impress upon men the paternal love of the Father, who “so loved the world, that He gave His only-begotten Son, that whosoever believeth in Him should not perish but have everlasting life.” He urged upon men the necessity of prayer, repentance, confession, and the abandonment of sin. He taught them honesty, forbearance, mercy, and compassion, enjoining upon them to love not only those who loved them, but those who hated them, who treated them despitefully. 
In this He was revealing to them the character of the Father, who is long-suffering, merciful, and gracious, slow to anger, and full of goodness and truth. Those who accepted His teaching were under the guardian care of angels, who were commissioned to strengthen, to enlighten, that the truth might renew and sanctify the soul.—Fundamentals of Christian Education, p. 177. \section*{Thursday – Finding Rest After Forgiveness} Called from a dungeon, a servant of captives, a prey of ingratitude and malice, Joseph proved true to his allegiance to the God of heaven. And all Egypt marveled at the wisdom of the man whom God instructed. Pharaoh made him lord of his house, and ruler of all his substance: to bind his princes at his pleasure; and teach his senators wisdom.” Psalm 105:21, 22. Not to the people of Egypt alone, but to all the nations connected with that powerful kingdom, God manifested Himself through Joseph. He desired to make him a light bearer to all peoples, and He placed him next the throne of the world’s greatest empire, that the heavenly illumination might extend far and near. By his wisdom and justice, by the purity and benevolence of his daily life, by his devotion to the interests of the people,—and that people a nation of idolaters,—Joseph was a representative of Christ. In their benefactor, to whom all Egypt turned with gratitude and praise, that heathen people, and through them all the nations with which they were connected, were to behold the love of their Creator and Redeemer.—Testimonies for the Church, vol. 6, p. 219. The heart where love reigns will be guided to a gentle, courteous, compassionate course of conduct toward others, whether they suit our fancy or not, whether they respect us or treat us ill. Love is an active principle; it keeps the good of others continually before us, thus restraining us from inconsiderate actions lest we fail of our object in winning souls to Christ. Love seeks not its own. It will not prompt men to seek their own ease and indulgence of self. It is the respect we render to I that so often hinders the growth of love. … Another striking point in the character of Joseph, worthy of imitation by all … is his deep filial reverence. As he meets his father with tears streaming from his eyes, he hangs upon his neck in an affectionate, loving embrace. He seems to feel that he cannot do enough for his parent’s comfort and watches over his declining years with a love as tender as a mother’s. No pains is spared to show his respect and love upon all occasions. Joseph is an example of what a [son] should be.—Testimonies for the Church, vol. 5, pp. 123, 124. [If] we come to God, feeling helpless and dependent, as we really are, and in humble, trusting faith make known our wants to Him whose knowledge is infinite, who sees everything in creation, and who governs everything by His will and word, He can and will attend to our cry, and will let light shine into our hearts. Through sincere prayer we are brought into connection with the mind of the Infinite. We may have no remarkable evidence at the time that the face of our Redeemer is bending over us in compassion and love, but this is even so. We may not feel His visible touch, but His hand is upon us in love and pitying tenderness. When we come to ask mercy and blessing from God we should have a spirit of love and forgiveness in our … hearts.—Steps to Christ, pp. 96, 97. \section*{Friday – Further Thought} \setlength{\parindent}{0pt}Gospel Workers, “How God Trains His Workers,” pp. 
269, 270; Sons and Daughters of God, “In Forgiveness,” p. 153. \end{multicols} \end{document}
{ "alphanum_fraction": 0.7835097995, "avg_line_length": 162.8990825688, "ext": "tex", "hexsha": "d1aa25ecee373df6a65fb4ae1055adef3d72f4cd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4eed4bd2ebd0fd5b33764170427c4f24a2f8f7c9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ch101112/egw_comments_scraper", "max_forks_repo_path": "output/egw_en_07.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4eed4bd2ebd0fd5b33764170427c4f24a2f8f7c9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ch101112/egw_comments_scraper", "max_issues_repo_path": "output/egw_en_07.tex", "max_line_length": 1134, "max_stars_count": 2, "max_stars_repo_head_hexsha": "4eed4bd2ebd0fd5b33764170427c4f24a2f8f7c9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ch101112/egw_comments_scraper", "max_stars_repo_path": "output/egw_en_07.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-06T20:08:34.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-11T19:01:26.000Z", "num_tokens": 4051, "size": 17756 }
\subsection{Neural networks}

\subsubsection{Fully connected neural network}
A fully connected neural network (FCNN) is a neural network in which each neuron is connected to every neuron in the previous layer, and each connection has its own weight.
\par This is a general-purpose connection pattern that makes no assumptions about the features to be recognized in the input data. This type of network is very expensive in terms of memory (weights) and computation (connections).

\begin{figure}[H]
\centering
\includegraphics[width=250px]{pictures/fcnn.png}
\caption{Fully connected neural network}
\end{figure}

\subsubsection{Convolutional neural network}
A convolutional neural network (CNN) is a type of feed-forward artificial neural network, which means that connections between the neurons do not form a cycle. Information in the network moves only forward, so a given signal passes through each neuron only once.
\par A CNN includes a convolutional layer and usually a few other hidden layers. Each neuron is connected only to a small region of the previous layer called the receptive field. The receptive fields of different neurons overlap so that together they cover the whole input.
\par CNNs are used for image processing because, by using convolution, they can recognize the edges of objects in an image.

\begin{figure}[H]
\centering
\includegraphics[width=250px]{pictures/cnn.png}
\caption{Convolutional neural network}
\end{figure}

\subsubsection{Recurrent neural network}
In a recurrent neural network (RNN), connections between the neurons form a directed cycle. This means that an RNN can use its internal memory to learn sequences of inputs. The same set of weights is applied recursively to the structure.
\par RNNs are used for text or speech recognition.

\begin{figure}[H]
\centering
\includegraphics[width=250px]{pictures/rnn.png}
\caption{Recurrent neural network}
\end{figure}

\subsection{Activation function}
The activation function defines the output of a neuron for a given input. There are many functions that can be used as an activation function in artificial neural networks. The most popular are:

\subsubsection{Linear}
\begin{center}
$ f(x)=ax + b $ \\
$ a,b \in R $ \\~\\
\begin{tikzpicture}
\begin{axis}
\addplot[cmhplot,-]{x};
\end{axis}
\end{tikzpicture}
\end{center}

\subsubsection{Rectified Linear Unit (ReLU)}
\[ f(x)= \left\{
\begin{array}{ll}
0 ,& x < 0 \\
x ,& x\geq 0 \\
\end{array}
\right. \]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xmin=-4,xmax=4,
ymin=-4,ymax=4,
]
\addplot[cmhplot,-,domain=-4:0]{0};
\addplot[cmhplot,-,domain=0:4]{x};
\end{axis}
\end{tikzpicture}
\end{center}

\subsubsection{Sigmoid}
\begin{center}
$f(x) = \frac{1}{1 + e^{-x}}$ \\
\begin{tikzpicture}
\begin{axis}[
xmin=-5,xmax=5,
ymin=-1,ymax=1,
]
\addplot[cmhplot,-]{1/(1 + e^(-x))};
\end{axis}
\end{tikzpicture}
\end{center}

\subsubsection{TanH}
\begin{center}
$ f(x)=\tanh(x)=\frac{2}{1+e^{-2x}}-1 $ \\
\begin{tikzpicture}
\begin{axis}[
xmin=-5,xmax=5,
ymin=-1,ymax=1,
]
\addplot[cmhplot,-]{2/(1 + e^(-2*x))-1};
\end{axis}
\end{tikzpicture}
\end{center}

\subsection{Optimization and regularization}

\subsubsection{Dropout}
Dropout is a regularization technique in which randomly selected neurons are ignored during training, which prevents the network from relying too heavily on any single neuron and reduces overfitting.

\subsubsection{Initial weights}
The initial values of the weights influence how quickly, and whether, training converges; they are therefore usually drawn from small random distributions rather than set to a constant.

\subsubsection{Learning rate decay}
Learning rate decay gradually reduces the learning rate during training, allowing large updates at the beginning and finer adjustments as the network approaches a minimum of the loss function.

\subsubsection{Gradient Descent}
Gradient descent is an optimization algorithm that iteratively updates the weights in the direction opposite to the gradient of the loss function with respect to those weights.
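
As a brief illustration (not part of the original text), the plain gradient descent update of a weight vector $w$ with learning rate $\eta$ and loss function $L$ can be written as
\begin{center}
$ w_{t+1} = w_t - \eta \nabla_{w} L(w_t) $
\end{center}
and a simple learning rate decay schedule replaces the constant $\eta$ with, for example, $\eta_t = \frac{\eta_0}{1 + kt}$, where $k$ is the decay factor and $t$ is the iteration number.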
{ "alphanum_fraction": 0.7208224732, "avg_line_length": 37.1290322581, "ext": "tex", "hexsha": "8571966ae86b949f042d1bbbf0a1bfab05591e15", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a7660983e79cdbff8fdbe257be135f12d39d8d61", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "SoundSourceAnalyzer/Report", "max_forks_repo_path": "theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a7660983e79cdbff8fdbe257be135f12d39d8d61", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "SoundSourceAnalyzer/Report", "max_issues_repo_path": "theory.tex", "max_line_length": 402, "max_stars_count": null, "max_stars_repo_head_hexsha": "a7660983e79cdbff8fdbe257be135f12d39d8d61", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "SoundSourceAnalyzer/Report", "max_stars_repo_path": "theory.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 940, "size": 3453 }
\subsection{Prescient Interface} \label{subsec:prescientInterface} \subsubsection{General Information} The Prescient Interface is used to run the open source Prescient production cost modeling platform available from \url{https://github.com/grid-parity-exchange/Prescient} This allows inputs to be perturbed and data to be read out. \subsubsection{Sampler} For perturbing inputs, the sampled variable needs to be placed inside of \$( )\$ like \verb'$(var)$'. The sampled variable can have a constant added or a multiplication factor like \verb'$(var+3.2)$' or \verb'$(var*2.1)$' or \verb'$(var*5.0+7.0)$' or \verb'$(a*-2.0)$' These can be placed in any of the .dat or .csv files that are listed in the \xmlNode{Files} section as \xmlAttr{type="PrescientInput"} An example line could be: \verb'Abel 1 $(var)$' \begin{lstlisting}[style=XML] <Samplers> <Grid name="grid"> <variable name="var"> <distribution>dist</distribution> <grid construction="equal" steps="1" type="CDF">0.0 1.0</grid> </variable> </Grid> </Samplers> \end{lstlisting} \subsubsection{Files} There are two types of inputs in the \xmlNode{Files} section. The \xmlAttr{type="PrescientRunnerInput"} ones are passed as an argument to the \texttt{runner.py} If multiple PrescientRunnerInput files are specified then \texttt{runner.py} will be called multiple times (which can be used to run a populate and then simulate command). The \xmlAttr{type="PrescientInput"} are just used as additional inputs that have the data in them perturbed. \begin{lstlisting}[style=XML] <Files> <Input name="simulate" type="PrescientRunnerInput" >simulate_day.txt</Input> <Input name="structure" type="PrescientInput" subDirectory="scenarios/pyspdir_twostage/2020-07-10/" >ScenarioStructure.dat</Input> <Input name="scenario_1" type="PrescientInput" subDirectory="scenarios/pyspdir_twostage/2020-07-10/" >Scenario_1.dat</Input> <Input name="actuals" type="PrescientInput" subDirectory="scenarios/pyspdir_twostage/2020-07-10/" >Scenario_actuals.dat</Input> <Input name="forcasts" type="PrescientInput" subDirectory="scenarios/pyspdir_twostage/2020-07-10/" >Scenario_forecasts.dat</Input> <Input name="scenarios" type="PrescientInput" subDirectory="scenarios/pyspdir_twostage/2020-07-10/" >scenarios.csv</Input> </Files> \end{lstlisting} \subsubsection{Models} The \xmlNode{Code} model can be used with the \xmlAttr{subType="Prescient"} to run the Prescient Code Interface. The block currently does not have any option xml nodes. \begin{lstlisting}[style=XML] <Models> <Code name="TestPrescient" subType="Prescient"> <executable> </executable> </Code> </Models> \end{lstlisting} \subsubsection{Output Files Conversion} The code interface reads in the \texttt{hourly\_summary.csv} and the \texttt{bus\_detail.csv} files. It will generate a \texttt{Date\_Hour} variable that can be used as the \xmlNode{pivotParameter} and is a string with the date and hour. It also generates an \texttt{Hour} variable that is the hour as an integer. From the hourly summary it will generate variables like TotalCosts and the other data that appears there. For each of the busses in the bus detail file it generates variables like \texttt{Clay\_LMP} that can be used. 
Exactly which variables will appear will vary depending on the Prescient input files, but typical ones include \texttt{TotalCosts}, \texttt{FixedCosts}, \texttt{VariableCosts}, \texttt{LoadShedding}, \texttt{OverGeneration}, \texttt{ReserveShortfall}, \texttt{RenewablesUsed}, \texttt{RenewablesCurtailment}, \texttt{Demand}, \texttt{Price}, and \texttt{NetDemand}. Variables that can be included for a typical bus could include ones like \texttt{Abel\_LMP}, \texttt{Abel\_LMP\_DA}, \texttt{Abel\_Shortfall}, and \texttt{Abel\_Overgeneration}.

\begin{lstlisting}[style=XML]
  <HistorySet name="samples">
    <Input>var</Input>
    <Output>Date_Hour, TotalCosts, FixedCosts,
     VariableCosts, LoadShedding, OverGeneration,
     ReserveShortfall, RenewablesUsed,
     RenewablesCurtailment, Demand,
     Price, NetDemand, Abel_LMP, Clay_LMP
    </Output>
    <options>
      <pivotParameter>Date_Hour</pivotParameter>
    </options>
  </HistorySet>
\end{lstlisting}

\subsubsection{Installation of Libraries}

Installing Prescient so that RAVEN can run it requires an environment that contains a superset of the libraries used by RAVEN and Prescient, so that both can run. One way to set this up is to install RAVEN, then source the conda load script, and perform the Prescient and Egret installs from within the conda raven libraries environment. This is shown in the following listing:
\begin{lstlisting}
#first clone raven, Egret and Prescient into a directory
git clone git@github.com:idaholab/raven.git
git clone git@github.com:grid-parity-exchange/Prescient.git
git clone git@github.com:grid-parity-exchange/Egret.git
#Switch to raven directory
cd raven
#install raven libraries
./scripts/establish_conda_env.sh --install
#switch to using raven libraries
source ./scripts/establish_conda_env.sh --load
#Switch to Prescient and install
cd ../Prescient
python setup.py develop --user
conda install -c conda-forge coincbc
#Switch to Egret and install
cd ../Egret/
pip install --user -e .
\end{lstlisting}

Note that the path to \texttt{runner.py} may need to be added to the PATH variable via a command like:
\verb'PATH="$PATH:$HOME/.local/bin"'
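
As a quick sanity check (this step is not part of the original instructions), you can confirm that \texttt{runner.py} is visible on the updated PATH before invoking it through RAVEN:
\begin{lstlisting}
#confirm that runner.py can be found on the PATH
which runner.py
\end{lstlisting}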
{ "alphanum_fraction": 0.7501362893, "avg_line_length": 38.4825174825, "ext": "tex", "hexsha": "e23597519d34a1d31e3ccc184e678f018fe9fe80", "lang": "TeX", "max_forks_count": 95, "max_forks_repo_forks_event_max_datetime": "2022-03-08T17:30:22.000Z", "max_forks_repo_forks_event_min_datetime": "2017-03-24T21:05:03.000Z", "max_forks_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "FlanFlanagan/raven", "max_forks_repo_path": "doc/user_manual/code_interfaces/prescient.tex", "max_issues_count": 1667, "max_issues_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_issues_repo_issues_event_max_datetime": "2022-03-31T19:50:06.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-27T14:41:22.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "FlanFlanagan/raven", "max_issues_repo_path": "doc/user_manual/code_interfaces/prescient.tex", "max_line_length": 205, "max_stars_count": 159, "max_stars_repo_head_hexsha": "bd7fca18af94376a28e2144ba1da72c01c8d343c", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "FlanFlanagan/raven", "max_stars_repo_path": "doc/user_manual/code_interfaces/prescient.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-20T13:44:40.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-24T21:07:06.000Z", "num_tokens": 1560, "size": 5503 }
%
% BUS 238: Introduction to Entrepreneurship and Innovation
% Section: Teamwork
%
% Author: Jeffrey Leung
%

\section{Teamwork}
\label{sec:teamwork}

\begin{easylist}

& Interdisciplinary teams:
  && Make unique contributions
  && Reduce errors or tunnel vision
  && Have greater flexibility
  && Are more united
  && Learn more broadly
  && Reduce communication gaps between industries
  && Allow people to focus on their strengths
  && Require patience and listening
  && Have goals and direction that are not easy to align
  && Are not put into practice often

\end{easylist}

\clearpage
{ "alphanum_fraction": 0.750877193, "avg_line_length": 21.9230769231, "ext": "tex", "hexsha": "354f8830a964fdb5630af11ff40d3373cf2b6850", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-12-27T21:44:56.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-18T09:17:46.000Z", "max_forks_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AmirNaghibi/notes", "max_forks_repo_path": "bus-238-introduction-to-entrepreneurship-and-innovation/tex/teamwork.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AmirNaghibi/notes", "max_issues_repo_path": "bus-238-introduction-to-entrepreneurship-and-innovation/tex/teamwork.tex", "max_line_length": 58, "max_stars_count": 25, "max_stars_repo_head_hexsha": "c4640bbcb65c94b8756ccc3e4c1bbc7d5c3f8e92", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AmirNaghibi/notes", "max_stars_repo_path": "bus-238-introduction-to-entrepreneurship-and-innovation/tex/teamwork.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:37:39.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-11T08:45:10.000Z", "num_tokens": 141, "size": 570 }
\chapter{List of Awards} \section{List of Awards} \section{Certificates}
{ "alphanum_fraction": 0.7808219178, "avg_line_length": 24.3333333333, "ext": "tex", "hexsha": "f1e84b0379af2c60d450a22474407c360298e76d", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2016-06-08T03:05:30.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-03T15:07:07.000Z", "max_forks_repo_head_hexsha": "692093762bfd855a5ad72f2b23cced34b6827baf", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "arks-api/arks-api", "max_forks_repo_path": "doc/assignments/final-report/Awards.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "692093762bfd855a5ad72f2b23cced34b6827baf", "max_issues_repo_issues_event_max_datetime": "2020-05-01T21:45:15.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-01T21:45:15.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "arks-api/arks-api", "max_issues_repo_path": "doc/assignments/final-report/Awards.tex", "max_line_length": 25, "max_stars_count": 4, "max_stars_repo_head_hexsha": "692093762bfd855a5ad72f2b23cced34b6827baf", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "arks-api/arks-api", "max_stars_repo_path": "doc/assignments/final-report/Awards.tex", "max_stars_repo_stars_event_max_datetime": "2016-06-08T03:05:13.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-03T15:06:56.000Z", "num_tokens": 20, "size": 73 }
\chapter{Testing}
\label{Testing}

\section{Algorithm}
\label{Algorithmus}

\section{Single- vs. Multi-Core}
\label{SMCore}
{ "alphanum_fraction": 0.76, "avg_line_length": 15.625, "ext": "tex", "hexsha": "dc933ce15b47c70da249f34379c6ed27ee0ce098", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b37e9c34107527c8611f285985ff818acabbe5c1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "danielvogel/Tree-Height-Balancing", "max_forks_repo_path": "Ausarbeitung/chapter/Testing.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b37e9c34107527c8611f285985ff818acabbe5c1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "danielvogel/Tree-Height-Balancing", "max_issues_repo_path": "Ausarbeitung/chapter/Testing.tex", "max_line_length": 32, "max_stars_count": null, "max_stars_repo_head_hexsha": "b37e9c34107527c8611f285985ff818acabbe5c1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "danielvogel/Tree-Height-Balancing", "max_stars_repo_path": "Ausarbeitung/chapter/Testing.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 38, "size": 125 }
%-------------------------
% Resume in Latex
% Author : Rohit Nagvenkar
% License : MIT
%------------------------

\documentclass[letterpaper,11pt]{article}

\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[pdftex]{hyperref}
\usepackage{fancyhdr}

\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}

% Adjust margins
\addtolength{\oddsidemargin}{-0.375in}
\addtolength{\evensidemargin}{-0.375in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}

\urlstyle{same}

\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}

% Sections formatting
\titleformat{\section}{
  \vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]

%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
  \item\small{
    \textbf{#1}{: #2 \vspace{-2pt}}
  }
}

\newcommand{\resumeSubheading}[4]{
  \vspace{-1pt}\item
    \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
      \textbf{#1} & #2 \\
      \textit{\small#3} & \textit{\small #4} \\
    \end{tabular*}\vspace{-5pt}
}

\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}

\renewcommand{\labelitemii}{$\circ$}

\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}

%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
  \textbf{{\huge Rohit Nagvenkar}} \\
  $\diamond \enspace {Email: [email protected]} $
  $\enspace \diamond \enspace {Phone: 817{-}821{-}1257} $
  $\enspace \diamond \enspace {Arlington, Texas} $ \\
\end{tabular*}

%-----------ABOUT ME-----------------
\section{Summary}
 {Figuring out how stuff works, Playing Soccer, Meeting new people, Politics, Hackerrank, Dota 2, StarTrek TNG}

%-----------EDUCATION-----------------
\section{Education}
  \resumeSubHeadingListStart
    \resumeSubheading
      {University of Texas at Arlington}{Arlington, TX}
      {Unmanned Vehicle Systems (CSE concentration)}{Aug. 2018 -- Current}
    \resumeSubheading
      {University of Texas at Arlington}{Arlington, TX}
      {Master of Science in Computer Science and Engineering}{Aug. 2017 -- Current}
    \resumeSubheading
      {Thakur College of Science and Commerce}{Mumbai, India}
      {Master of Science in Computer Science}{Jun. 2014 -- July
2016}
  \resumeSubHeadingListEnd

%--------RELEVANT COURSES------------
% \section{Relevant Courses}
%   \resumeSubHeadingListStart
%     \item{
%       {Robotics, Unmanned Vehicle Systems, Artificial Intelligence, Machine Learning, Computer Vision, Reinforcement Learning}
%     }
%   \resumeSubHeadingListEnd

%--------PROGRAMMING SKILLS------------
\section{Programming Skills}
 \resumeSubHeadingListStart
   \item{
     {Python, C, C++, Java, JavaScript, NodeJS, ReactJS, REST, Matlab, Linux}
   }
 \resumeSubHeadingListEnd

%-----------PROJECTS-----------------
\section{Projects}
  \resumeSubHeadingListStart
    \resumeSubItem{Computer Vision}
      {\\Teaching deep Convolutional Neural Networks to play a racing game: the goal was to make the host car run an entire lap using just the visual information captured from a camera; the network was then trained and tested on this data.}
    \resumeSubItem{Reinforcement Learning}
      {\\Trigger Strategies for Repeated Cooperative Games: implemented a multi-agent system that allowed efficient transmission of data (Nash equilibrium) with the intent of being able to run indefinitely.}
    \resumeSubItem{Robotics}
      {\\Built a holonomic 4-wheeled bot using a Lego EV3 brick as the processor. Used sensors such as sonar, light and infrared to avoid obstacles and navigate through various terrains.}
    \resumeSubItem{Autonomous Systems}
      {\\Worked with different sensors and control systems such as Pixhawk, LiDAR, GPS/compass modules, radar, camera modules, encoders, PID, etc. Worked on path planning algorithms that utilized the sensing and control aspects of the system.}
    \resumeSubItem{Currently working on}
      {\\A centipede bot similar to the "Vietnamese Centipede": figuring out the control dynamics and kinematics (kinodynamic motion planning) of the bot so that it can learn to traverse any surroundings using machine learning concepts.}
  \resumeSubHeadingListEnd

%-----------EXPERIENCE-----------------
\section{Experience}
  \resumeSubHeadingListStart
    \resumeSubheading
      {University of Texas at Arlington}{Arlington, TX}
      {Student Assistant}{Jun 2019 - Current}
      \resumeItemListStart
        \resumeItem{Communication}
          {Promote campus events through innovative PR techniques and advise students about their academic and financial profiles.}
      \resumeItemListEnd
    \resumeSubheading
      {Compass Group}{Arlington, TX}
      {Student Operations Supervisor}{Oct 2017 - May 2019}
      \resumeItemListStart
        \resumeItem{Decision Making}
          {Efficiently managed a team of more than 15 associates in daily operations while providing courteous customer service.}
        \resumeItem{Leadership}
          {Executed supervisory objectives by devising staff work schedules, planning, monitoring and enforcing compliance with health and safety regulations, and exhibited exemplary skills in conflict resolution.}
      \resumeItemListEnd
  \resumeSubHeadingListEnd

%--------INTERESTS------------
\section{Interests}
 \resumeSubHeadingListStart
  \item{
    {Figuring out how stuff works, Playing Soccer, Meeting new people, Politics, Hackerrank, Dota 2, StarTrek TNG}
  }
 \resumeSubHeadingListEnd

%-------------------------------------------
\end{document}
{ "alphanum_fraction": 0.6908710353, "avg_line_length": 35.901734104, "ext": "tex", "hexsha": "7c4ea345b7e18f6f82d72abdf0425ea1fd61c842", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3712ec9a25fa35f022824a02ef08d71b37944cb9", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rohitnagvenkar/resume-template", "max_forks_repo_path": "rohitnagvenkarResume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3712ec9a25fa35f022824a02ef08d71b37944cb9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rohitnagvenkar/resume-template", "max_issues_repo_path": "rohitnagvenkarResume.tex", "max_line_length": 242, "max_stars_count": null, "max_stars_repo_head_hexsha": "3712ec9a25fa35f022824a02ef08d71b37944cb9", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rohitnagvenkar/resume-template", "max_stars_repo_path": "rohitnagvenkarResume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1659, "size": 6211 }
\documentclass{article}
\textheight=8.5in
\textwidth=6.5in
\oddsidemargin=0.0in
\topmargin=0.0in
\renewcommand\baselinestretch{1.0} % single space
\pagestyle{empty} % no headers and page numbers
\oddsidemargin -10 true pt % Left margin on odd-numbered pages.
\evensidemargin 10 true pt % Left margin on even-numbered pages.
\marginparwidth 0.75 true in % Width of marginal notes.
\oddsidemargin 0 true in % Note that \oddsidemargin=\evensidemargin
\evensidemargin 0 true in
\topmargin -0.75 true in % Nominal distance from top of page to top of
\textheight 9.5 true in % Height of text (including footnotes and figures)
\textwidth 6.375 true in % Width of text line.
\parindent=0pt % Do not indent paragraphs
\parskip=0.15 true in
\usepackage{color} % Need the color package
\usepackage{epsfig}

\begin{document}

\begin{center}
\section*{\underline{REPOSITORY}}
Author Name: Manorama Pal \& Kishore Kumar Shukla
\input{Repository.latex}
\end{center}

\subsection*{\underline{CREATE REPOSITORY}}
\subsubsection*{\underline{USER}}
\begin{enumerate}
\item Interact with the web application.
\item User sends a request to the admin to be registered as an author.
\item A success message comes to the admin; the user is registered as an author \& can create his own repository.
\item User management area where author registration is done.
\begin{center}
\input{Create_rep.latex}
\label{figure:Create_rep.latex}
\end{center}
\end{enumerate}

\subsubsection*{\underline{AUTHOR}}
\begin{enumerate}
\item Author logs in as a content author \& goes to the author home.
\item Author creates his own repository.
\item After successful creation of the repository, the author works in his repository home.
\item Return the appropriate message.
\begin{center}
\input{createauthor_rep.latex}
\label{figure:createauthor_rep.latex}
\end{center}
\end{enumerate}

\subsection*{\underline{UPLOAD CONTENT}}
\begin{enumerate}
\item Author logs in as a content author.
\item Author sends the request to upload content into an existing topic or a new topic.
\item Return the appropriate message.
\item Author works in his repository home.
\begin{center}
\input{Uploadcon_rep.latex}
\label{figure:Uploadcon_rep.latex}
\end{center}
\end{enumerate}

\subsection*{\underline{VIEW CONTENT}}
\begin{enumerate}
\item Author logs in as a content author.
\item Send a request to view a file.
\item Show the requested file.
\item Return the appropriate message.
\item Repository home.
\begin{center}
\input{viewcon_rep.latex}
\label{figure:viewcon_rep.latex}
\end{center}
\end{enumerate}

\subsection*{\underline{DELETE DIR \& TOPIC}}
\begin{enumerate}
\item Author logs in as a content author.
\item Send a request to delete a dir \& topic.
\item Delete the dir \& topic.
\item Return the success message.
\item Repository home.
\begin{center}
\input{deletedir_rep.latex}
{DELETE DIR \& TOPIC}
\label{figure:deletedir_rep.latex}
\end{center}
\end{enumerate}

\subsection*{\underline{MOVE FILE}}
\begin{enumerate}
\item Author logs in as a content author.
\item Send a request to move a file to another directory.
\item Move the selected file to another directory.
\item Return the success message.
\item Repository home.
\begin{center}
\input{move_rep.latex}
\label{figure:move_rep.latex}
\end{center}
\end{enumerate}

\subsection*{\underline{PERMISSION GIVEN}}
\begin{enumerate}
\item Author logs in as a content author.
\item Send a request to give permission.
\item The role is checked (i.e.\ author, Instructor (Private area), or Instructor (Course area)).
\item If the role check is successful, the permission is received in the corresponding area, i.e.\ the author's phase, Instructor (Private area), or Instructor (Course area).
\item If the role check is successful, the permission-given topic can be viewed in the author's phase.
\item If the role check fails, the given topic is not received.
\item The appropriate message is returned.
\begin{center}
\input{permission_rep.latex}
\label{figure:permission_rep.latex}
\end{center}
\end{enumerate}
\subsection*{\underline{REPOSITORY BROWSER}}
\begin{enumerate}
\item The author logs in as a content author.
\item The author can view all the directories \& topics of registered authors, but cannot read the files.
\item The appropriate message is returned.
\item Repository Browser.
\begin{center}
\input{repositoryBro_rep.latex}
\label{figure:repositoryBro_rep.latex}
\end{center}
\end{enumerate}
\subsection*{\underline{VIEW \& DELETE PERMISSION}}
\begin{enumerate}
\item The author logs in as a content author.
\item The author goes to his repository home.
\item The author sends a request to delete a permission-received topic, which simultaneously deletes the permission-given topic in the author's phase.
\item Similarly, when the author deletes a permission-given topic in the author's phase, the permission-received topic is simultaneously deleted in the author's phase, Instructor (Private area), and Instructor (Course area).
\item The author, Instructor (Private area), and Instructor (Course area) request to view the received file.
\item The received file is viewed.
\end{enumerate}
\subsection*{\underline{MODIFY IN REPOSITORY}}
\begin{enumerate}
\item The user logs in as an instructor and goes into the course management area.
\item The instructor selects the course content and then selects the publish link.
\item Permitted topics are displayed in the unpublished area. The instructor views those files.
\item The instructor publishes the selected files.
\item Both the instructor \& the student can view the permitted files.
\begin{center}
\input{modify_rep.latex}
{MODIFY IN REPOSITORY}
\label{figure:modify_rep.latex}
\end{center}
\end{enumerate}
Written by: Rekha Pal
\end{document}
{ "alphanum_fraction": 0.6341499851, "avg_line_length": 41.3209876543, "ext": "tex", "hexsha": "c7ed0abcfa5d7a121d44dfa16bf3e186f9ba0663", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-05-05T08:47:47.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-05T08:47:47.000Z", "max_forks_repo_head_hexsha": "c392bb650c3f5d738bfded56b1646f25ca3a9862", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "ynsingh/brihaspati2", "max_forks_repo_path": "docs/Latex/Repository_File.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c392bb650c3f5d738bfded56b1646f25ca3a9862", "max_issues_repo_issues_event_max_datetime": "2020-05-07T07:27:40.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-07T07:27:40.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "ynsingh/brihaspati2", "max_issues_repo_path": "docs/Latex/Repository_File.tex", "max_line_length": 188, "max_stars_count": null, "max_stars_repo_head_hexsha": "c392bb650c3f5d738bfded56b1646f25ca3a9862", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "ynsingh/brihaspati2", "max_stars_repo_path": "docs/Latex/Repository_File.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1441, "size": 6694 }
\chapter{Front-end implementation: Building a working UI}
\section{User's Manual}
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\textwidth]{C5/project_workflow.png}
\caption{The iterative learning workflow}
\end{figure}
\mbox{}\\
This tool implements an iterative approach to learning. Start by entering some examples from which your desired program may be synthesised. Then, examine the returned Haskell and iteratively add more examples, fix incorrect ones and re-learn until you are happy with the result.
\subsection{Entering examples}
In order to use this tool, you first have to enter examples to learn from, which specify how the target program behaves on specific inputs.\\
\\
You can change the number of input arguments by editing the ``No. Args'' box. While there are no restrictions on the possible arguments, be aware that any increase in the number of arguments can greatly affect learning performance. Currently, the only supported argument type is Integer.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{C5/screenshot_completed.png}
\caption{Learning Factorial}
\end{figure}
\subsection{The Learning Step}
After entering your examples, you can perform learning by clicking the ``Run ASP'' button. \\
\\
After a short computation step, the learned Haskell is displayed, and there are a few options on how to proceed:
\begin{itemize}
\item If the learned program is incorrect, then you can freely add more examples which specify the missing behaviour.
\item If you are unsure about the correctness of the program, you can simply test some more complicated answers using example autocompletion.
\item If you are happy with the learned program, you can download the generated code, or save it to be re-used by other tasks.
\end{itemize}
\subsection{Example autocompletion}
If you want to test more complicated values, you can provide examples with valid input, but no output. After learning, these outputs will be completed by the tool, and by comparing the result to the expected value, you can judge the correctness of the learned program. \\
\\
If you are unsure about program correctness, you can add more examples to be autocompleted, which will then be run on the learned program without a new learning task being started.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{C1/complete_examples.png}
\caption{Example autocompletion}
\end{figure}
\subsection{Combining functions}
Once you have successfully learned a function, you may save it for reuse by pressing the save button while having the ``Add to Background Knowledge'' box checked. Then, reuse this function in another by adding a new learning task in a new tab. Any learning performed on this new task will have knowledge of the initial function and make use of it in a preferential manner.
\subsection{Restricting the Language Bias}
To help increase learning performance, you can restrict the Language Bias, limiting the operations available to be returned in Haskell. If you have some prior knowledge or suspicion that the target program does not use some available operation, then by selecting the relevant check boxes you can tailor the target language specifically to your target domain. \\
\\
It is also possible to limit the learning to only tail-recursive programs. This feature can offer a large performance increase but only learns a small subset of programs.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{C5/bias_restriction.png}
\caption{Bias Restriction}
\end{figure}
\subsection{Handling errors}
Sometimes, it may not be possible to learn a program from a set of examples. They may be contradictory, or the tool may not be robust enough. In this case, you have a few options open to you:
\begin{itemize}
\item Try to remove certain examples to find out which one might be causing a contradiction.
\item Separate your function into multiple smaller ones. For example, a Merge Sort program could be separated into smaller splitting and comparison programs.
\end{itemize}
\pagebreak
\section{The Design Process}
Designing the user interface was not an easy process. While I knew the basic features of my UI (an iterative learning tool which takes examples as input and displays a learned Haskell program as output), I did not have much of an idea of how to display this to the user. I chose to represent the input fields as a table or spreadsheet, as it seemed like an interface that is commonly known and easy to understand. The concept of adding rows and columns, and having functions that work over those columns, is common to spreadsheet tools like Excel, and I thought this would help illustrate the concept of my tool to new users unfamiliar with IFP. \\
\\
One UI approach I considered was writing the tool as a plugin for an existing spreadsheet program such as Excel or LibreOffice Calc. While this would have saved a majority of the frontend design and implementation work, it would have meant that my tool had to rely on the user's knowledge of the host program, and it would have limited what features I could add. Instead, I decided to use an approach which was spreadsheet inspired but implemented in my own web-based UI.\\
\\
Now that I had chosen this approach, I had to design how it would look. Initially using paper sketches and online formatting tools, I started building a picture of what the UI would look like. However, my ideas started looking more like a table than a spreadsheet, as I would not need all the functionality a typical spreadsheet provides. In addition, JavaScript libraries for making spreadsheets are quite hard to find and implement, while ones with table functionality are common, as the table is a built-in HTML tag.
\section{Writing the Backend}
The first part of my tool that needed a backend was the part which generates skeleton rules. Instead of finding some way in ASP to enumerate over all possible rule bodies, it seemed easier to write a Java program to perform this generation. I chose Java as it is a language I am very familiar with, reducing the number of new technologies I would have to learn to start implementation. In addition, Java can be quite easy to maintain as the size of the project grows, although it can be overly verbose at times. \\
\\
After handling skeleton rule generation, the backend also needed to handle other tasks that could not be performed using ASP:
\begin{itemize}
\item Converting from ground \lstinline{choose} atoms as produced in the answer set to full choice rules.
\item Generating Haskell from the chosen ASP rules.
\item Running the Haskell program to auto-complete any examples.
\item Managing when to just run Haskell and when to re-learn.
\end{itemize}
\mbox{} \\
While it might seem that converting from ASP to valid Haskell would be difficult, it turned out to be a surprisingly easy process due to the chosen subset of the target language and the way I have defined rule bodies. Because there are only arithmetic operations with two arguments, the tool can recursively iterate through the expression, building up the Haskell string as it runs using simple string formatting. It is equally easy to generate guard statements from the bodies of chosen \lstinline{match} rules.
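\\
\\
To make this concrete, the listing below is a minimal sketch of the kind of recursive string-building involved. It is an illustration rather than the tool's actual source: the expression representation (\lstinline{Var}, \lstinline{Const}, \lstinline{BinOp}) and the \lstinline{toHaskell} method are hypothetical names chosen for this example, but the structure mirrors the approach described above, with each two-argument arithmetic operation rendered by recursively rendering its operands and formatting them into a parenthesised Haskell expression.
\begin{lstlisting}[language=Java]
// Illustrative sketch only; class and method names are hypothetical.
public class HaskellPrinter {

    // An expression is a variable, an integer constant,
    // or a binary arithmetic operation with exactly two sub-expressions.
    interface Expr {}

    static final class Var implements Expr {
        final String name;
        Var(String name) { this.name = name; }
    }

    static final class Const implements Expr {
        final int value;
        Const(int value) { this.value = value; }
    }

    static final class BinOp implements Expr {
        final String op;       // "+", "-", "*", ...
        final Expr left, right;
        BinOp(String op, Expr left, Expr right) {
            this.op = op; this.left = left; this.right = right;
        }
    }

    // Recursively walk the expression tree, building up the Haskell
    // string with simple string formatting.
    static String toHaskell(Expr e) {
        if (e instanceof Var)   return ((Var) e).name;
        if (e instanceof Const) return Integer.toString(((Const) e).value);
        BinOp b = (BinOp) e;
        return String.format("(%s %s %s)",
                toHaskell(b.left), b.op, toHaskell(b.right));
    }

    public static void main(String[] args) {
        // Body of f x = x * (x - 1)
        Expr body = new BinOp("*", new Var("x"),
                new BinOp("-", new Var("x"), new Const(1)));
        System.out.println("f x = " + toHaskell(body));  // f x = (x * (x - 1))
    }
}
\end{lstlisting}
Because every operation takes exactly two arguments, no precedence handling is needed; wrapping each sub-expression in parentheses is always safe.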
\section{Technologies Used}
To make use of the Java backend, I needed a UI framework which could easily integrate with it, while also not being too complicated to learn. In addition, I wanted a way to implement a web-based UI instead of a traditional desktop application, as it would allow the tool to be hosted on the cloud, available to anyone with a web browser.
\subsection{Play Framework and Akka}
Because I wanted to build a web app while having a Java backend, it made most sense to use the Play Framework \cite{Play} to link the two together. The Play Framework provides a way to easily integrate an HTML and JavaScript frontend with a Java backend, through HTTP requests. Methods on the backend that need to be accessed from the frontend can be written into a Controller class, whose methods become exposed as \lstinline{GET}, \lstinline{POST} and \lstinline{DELETE} endpoints. These endpoints are then accessible in the frontend, through Ajax calls written in the JavaScript. \\
\\
Play also allows for processes to be run in the background, asynchronously. This was useful for the tool, as I did not want it to become unresponsive during a potentially long learning task. To achieve this, I made use of Akka actors \cite{Akka}. Akka is a toolkit used to build concurrent systems made up of actors, which are abstract concurrent processes. As part of my backend, I define the learning task as one of these actors, which is initiated when the frontend submits a new task. Then, the frontend repeatedly queries the backend until the learning is complete, at which point the learned Haskell is returned.
\subsection{Bootstrap}
For the user interface, I wanted a quick, simple way to design it, looking clean and structured without anything overly complicated. To achieve this, I used Bootstrap \cite{Bootstrap}, a simple CSS and JavaScript library containing a large built-in collection of common HTML elements such as toolbars, modals, tables and tabs. Having used Bootstrap in projects before, it was familiar enough that it made writing my initial UI frontend quick and painless, and allowed me to easily make changes based on user feedback.
\section{User Feedback and Evaluation}
To help with the design of the user interface, it was important to get feedback from users wherever possible. This feedback started with friends and supervisors, and was given informally as we discussed each stage of the UI. With friends, we frequently worked together, and at such an early stage it was easy to ask for feedback and iterate based on it. \\
\\
As the UI came nearer to completion, I realised that more formal feedback was necessary. By getting in touch with younger students and family members with a non-technical background, I could get more unbiased and descriptive feedback from a variety of perspectives. This feedback helped fix bugs, add extra features, gain new examples and uncover limitations in the learning.
\pagebreak
%\renewcommand\bibname{{References}}
%\bibliography{References}
%\bibliographystyle{plain}
{ "alphanum_fraction": 0.7986401166, "avg_line_length": 97.1226415094, "ext": "tex", "hexsha": "2f30c6d2421b87e06fad26f16ccd7892b76eb754", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "acca214241e9e7d7ff5c344039778dbd967a8008", "max_forks_repo_licenses": [ "Apache-2.0", "MIT" ], "max_forks_repo_name": "roddejams/program-synthesis", "max_forks_repo_path": "report/C5/chapter5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "acca214241e9e7d7ff5c344039778dbd967a8008", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0", "MIT" ], "max_issues_repo_name": "roddejams/program-synthesis", "max_issues_repo_path": "report/C5/chapter5.tex", "max_line_length": 658, "max_stars_count": null, "max_stars_repo_head_hexsha": "acca214241e9e7d7ff5c344039778dbd967a8008", "max_stars_repo_licenses": [ "Apache-2.0", "MIT" ], "max_stars_repo_name": "roddejams/program-synthesis", "max_stars_repo_path": "report/C5/chapter5.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2140, "size": 10295 }
\documentclass[color=usenames,dvipsnames]{beamer}\usepackage[]{graphicx}\usepackage[]{color} % maxwidth is the original width if it is less than linewidth % otherwise use linewidth (to make sure the graphics do not exceed the margin) \makeatletter \def\maxwidth{ % \ifdim\Gin@nat@width>\linewidth \linewidth \else \Gin@nat@width \fi } \makeatother \definecolor{fgcolor}{rgb}{0, 0, 0} \newcommand{\hlnum}[1]{\textcolor[rgb]{0.69,0.494,0}{#1}}% \newcommand{\hlstr}[1]{\textcolor[rgb]{0.749,0.012,0.012}{#1}}% \newcommand{\hlcom}[1]{\textcolor[rgb]{0.514,0.506,0.514}{\textit{#1}}}% \newcommand{\hlopt}[1]{\textcolor[rgb]{0,0,0}{#1}}% \newcommand{\hlstd}[1]{\textcolor[rgb]{0,0,0}{#1}}% \newcommand{\hlkwa}[1]{\textcolor[rgb]{0,0,0}{\textbf{#1}}}% \newcommand{\hlkwb}[1]{\textcolor[rgb]{0,0.341,0.682}{#1}}% \newcommand{\hlkwc}[1]{\textcolor[rgb]{0,0,0}{\textbf{#1}}}% \newcommand{\hlkwd}[1]{\textcolor[rgb]{0.004,0.004,0.506}{#1}}% \let\hlipl\hlkwb \usepackage{framed} \makeatletter \newenvironment{kframe}{% \def\at@end@of@kframe{}% \ifinner\ifhmode% \def\at@end@of@kframe{\end{minipage}}% \begin{minipage}{\columnwidth}% \fi\fi% \def\FrameCommand##1{\hskip\@totalleftmargin \hskip-\fboxsep \colorbox{shadecolor}{##1}\hskip-\fboxsep % There is no \\@totalrightmargin, so: \hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth}% \MakeFramed {\advance\hsize-\width \@totalleftmargin\z@ \linewidth\hsize \@setminipage}}% {\par\unskip\endMakeFramed% \at@end@of@kframe} \makeatother \definecolor{shadecolor}{rgb}{.97, .97, .97} \definecolor{messagecolor}{rgb}{0, 0, 0} \definecolor{warningcolor}{rgb}{1, 0, 1} \definecolor{errorcolor}{rgb}{1, 0, 0} \newenvironment{knitrout}{}{} % an empty environment to be redefined in TeX \usepackage{alltt} %\documentclass[color=usenames,dvipsnames,handout]{beamer} \usepackage[roman]{../lectures} %\usepackage[sans]{../lectures} \hypersetup{pdfpagemode=UseNone,pdfstartview={FitV}} % Load function to compile and open PDF % Compile and open PDF % New command for inline code that isn't to be evaluated \definecolor{inlinecolor}{rgb}{0.878, 0.918, 0.933} \newcommand{\inr}[1]{\colorbox{inlinecolor}{\texttt{#1}}} \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \begin{document} \begin{frame}[plain] \LARGE \centering { \LARGE Hierarchical distance sampling: \\ \Large simulation, fitting, and prediction %, and random effects \\ } {\color{default} \rule{\textwidth}{0.1pt} } \vfill \large WILD(FISH) 8390 \\ Estimation of Fish and Wildlife Population Parameters \\ \vfill \large Richard Chandler \\ University of Georgia \\ \end{frame} \section{Overview} \begin{frame}[plain] \frametitle{Outline} \Large \only<1>{\tableofcontents}%[hideallsubsections]} \only<2 | handout:0>{\tableofcontents[currentsection]}%,hideallsubsections]} \end{frame} \begin{frame} \frametitle{Distance sampling overview} Distance sampling is one of the oldest wildlife sampling methods. \\ \pause \vfill It's based on the simple idea that detection probability should decline with distance between an animal and a transect. \\ \pause \vfill If we can estimate the function describing how $p$ declines with distance $x$, we can estimate abundance\dots \pause if certain assumptions hold, as always. \\ \end{frame} \begin{frame} \frametitle{Distance sampling overview} The simplest estimator of abundance is \[ \hat{N} = \frac{n}{\hat{p}} \] where $n$ is the number of individuals detected, $p$ is detection probability, and $E(n)=Np$. 
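\pause
\vfill
For example (with purely illustrative numbers): if $n=30$ individuals are detected and detection probability is $p=0.6$, the estimator gives $\hat{N} = 30/0.6 = 50$.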
\\ \pause \vfill In distance sampling, detection probability is a \alert{function} of distance, rather than a constant, such that all individuals have unique detection probabilities. \\ \pause \vfill As a result, we have to replace $p$ with \alert{average} detection probability: \[ \hat{N} = \frac{n}{\hat{\bar{p}}} \] \pause \vfill How do we compute average detection probability ($\bar{p}$)? \end{frame} \begin{frame} \frametitle{Detection functions} To estimate average detection probability ($\bar{p}$), we need: \begin{itemize} \item A detection function $g(x)$ describing how $p$ declines with distance. \item A probability distribution $p(x)$ describing the distances of all animals (detected and not detected). \end{itemize} \pause \vfill \centering The most common detection functions are: \\ \vspace{6pt} \begin{tabular}{lc} % \centering \hline Detection function & $g(x)$ \\ \hline Half normal & $\exp(-x^2 / (2\sigma^2))$ \\ Negative exponential & $\exp(-x/\sigma)$ \\ Hazard rate & $1-\exp(-(x/a)^{-b})$ \\ \hline \end{tabular} \end{frame} \begin{frame}[fragile] \frametitle{Half-normal} \footnotesize \[ g(x,\sigma) = \exp(-x^2/(2\sigma^2)) \] \vspace{-12pt} \centering \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{sigma1} \hlkwb{<-} \hlnum{25}\hlstd{; sigma2} \hlkwb{<-} \hlnum{50} \hlkwd{plot}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma1}\hlopt{^}\hlnum{2}\hlstd{)),} \hlkwc{from}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{to}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{xlab}\hlstd{=}\hlstr{"Distance (x)"}\hlstd{,} \hlkwc{ylab}\hlstd{=}\hlstr{"Detection probability (p)"}\hlstd{)} \hlkwd{plot}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma2}\hlopt{^}\hlnum{2}\hlstd{)),} \hlkwc{from}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{to}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{add}\hlstd{=}\hlnum{TRUE}\hlstd{,} \hlkwc{col}\hlstd{=}\hlnum{4}\hlstd{)} \hlkwd{legend}\hlstd{(}\hlnum{70}\hlstd{,} \hlnum{1}\hlstd{,} \hlkwd{c}\hlstd{(}\hlstr{"sigma=25"}\hlstd{,} \hlstr{"sigma=50"}\hlstd{),} \hlkwc{lty}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{col}\hlstd{=}\hlkwd{c}\hlstd{(}\hlstr{"black"}\hlstd{,}\hlstr{"blue"}\hlstd{))} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.7\linewidth]{figure/hn-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Negative exponential} \footnotesize \[ g(x,\sigma) = \exp(-x/\sigma) \] \vspace{-12pt} \centering \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{sigma1} \hlkwb{<-} \hlnum{25}\hlstd{; sigma2} \hlkwb{<-} \hlnum{50} \hlkwd{plot}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{/}\hlstd{sigma1),} \hlkwc{from}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{to}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{xlab}\hlstd{=}\hlstr{"Distance (x)"}\hlstd{,} \hlkwc{ylab}\hlstd{=}\hlstr{"Detection probability (p)"}\hlstd{)} \hlkwd{plot}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{/}\hlstd{sigma2),} \hlkwc{from}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{to}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{add}\hlstd{=}\hlnum{TRUE}\hlstd{,} \hlkwc{col}\hlstd{=}\hlnum{4}\hlstd{)} \hlkwd{legend}\hlstd{(}\hlnum{70}\hlstd{,} \hlnum{1}\hlstd{,} 
\hlkwd{c}\hlstd{(}\hlstr{"sigma=25"}\hlstd{,} \hlstr{"sigma=50"}\hlstd{),} \hlkwc{lty}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{col}\hlstd{=}\hlkwd{c}\hlstd{(}\hlstr{"black"}\hlstd{,}\hlstr{"blue"}\hlstd{))} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.7\linewidth]{figure/nexp-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Hazard rate} \footnotesize \[ g(x,a,b) = 1-\exp(-(x/a)^{-b}) \] \vspace{-12pt} \centering \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{a1} \hlkwb{<-} \hlnum{25}\hlstd{; a2} \hlkwb{<-} \hlnum{50}\hlstd{; b1} \hlkwb{<-} \hlnum{2}\hlstd{; b2} \hlkwb{<-} \hlnum{10} \hlkwd{plot}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlnum{1}\hlopt{-}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{(x}\hlopt{/}\hlstd{a1)}\hlopt{^}\hlstd{(}\hlopt{-}\hlstd{b1)),} \hlkwc{from}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{to}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{xlab}\hlstd{=}\hlstr{"Distance (x)"}\hlstd{,} \hlkwc{ylab}\hlstd{=}\hlstr{"Detection probability (p)"}\hlstd{,} \hlkwc{ylim}\hlstd{=}\hlnum{0}\hlopt{:}\hlnum{1}\hlstd{)} \hlkwd{plot}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlnum{1}\hlopt{-}\hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{(x}\hlopt{/}\hlstd{a2)}\hlopt{^}\hlstd{(}\hlopt{-}\hlstd{b2)),} \hlkwc{from}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{to}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{add}\hlstd{=}\hlnum{TRUE}\hlstd{,} \hlkwc{col}\hlstd{=}\hlnum{4}\hlstd{)} \hlkwd{legend}\hlstd{(}\hlnum{70}\hlstd{,} \hlnum{1}\hlstd{,} \hlkwd{c}\hlstd{(}\hlstr{"a=25, b=2"}\hlstd{,} \hlstr{"a=50, b=10"}\hlstd{),} \hlkwc{lty}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{col}\hlstd{=}\hlkwd{c}\hlstd{(}\hlstr{"black"}\hlstd{,}\hlstr{"blue"}\hlstd{))} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.7\linewidth]{figure/haz-1} } \end{knitrout} \end{frame} \begin{frame} \frametitle{Average detection probability ($\bar{p}$)} Regardless of the chosen detection function, average detection probability is defined as: \[ % \bar{p} = \int g(x)p(x) \; \mathrm{d}x % \bar{p} = \int_{b_1}^{b_2} g(x)p(x) \; \mathrm{d}x \bar{p} = \int_{0}^{B} g(x)p(x) \; \mathrm{d}x \] % where $b_1$ and $b_2$ are the limits of the distance interval. where $B$ is the width of the transect. \pause \vfill All that remains is the specification of $p(x)$, the distribution of distances (between animals and the transect). \pause \vfill To understand why $p(x)$ is needed, think about it this way: \begin{itemize} \item If most animals are close to the transect, $\bar{p}$ would be high \item If most animals are far from the transect, $\bar{p}$ would be low \end{itemize} % \pause % \vfill % The standard assumption (for line transects) is that animals are % uniformly distributed with respect to the transect \end{frame} \begin{frame} \frametitle{What should we use for $p(x)$?} What distribution should we use for the distances between animals and transects? \pause \vfill In \alert{line-transect sampling}, it is often assumed that animals are uniformly distributed with respect to the transect. \begin{itemize} \item Consequently, $p(x) = 1/B$, where $x$ is the \alert{perpendicular} distance between animal and transect \item This is guaranteed by random transect placement \item Can also be justified if animals are neither attracted to the transects or avoid them. \end{itemize} \pause \vfill In \alert{point-transect sampling}, we make the same assumptions, but we recognize that area increases with distance from a point. 
\begin{itemize} \item Consequently, $p(x) = 2x/B^2$ (see pg. 408 in AHM) \item Here, $x$ is the \alert{radial} distance to an animal \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Computing $\bar{p}$ for line transects} Half-normal detection function and line-transect sampling. \vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{B} \hlkwb{<-} \hlnum{100} \hlcom{# Transect width} \hlstd{g} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{,} \hlkwc{sigma}\hlstd{=}\hlnum{25}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma}\hlopt{^}\hlnum{2}\hlstd{))} \hlcom{# g(x)} \hlstd{pdf} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlnum{1}\hlopt{/}\hlstd{B} \hlcom{# p(x), constant} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Do the integration \vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{gp} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{g}\hlstd{(x)}\hlopt{*}\hlkwd{pdf}\hlstd{(x)} \hlstd{(pbar} \hlkwb{<-} \hlkwd{integrate}\hlstd{(gp,} \hlkwc{lower}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{upper}\hlstd{=B)}\hlopt{$}\hlstd{value)} \end{alltt} \begin{verbatim} ## [1] 0.3133087 \end{verbatim} \end{kframe} \end{knitrout} % \pause % \vfill % Note the equivalence % \vspace{-6pt} % <<pbar-hn-int2,size='footnotesize'>>= % (pbar <- integrate(g, lower=0, upper=B)$value / B) % (pbar <- (pnorm(B,0,25) - pnorm(0,0,25)) / dnorm(0,0,25) / B) % @ \pause \vfill \centering 31.3\% chance of detecting an individual within 100 m. \\ \end{frame} \begin{frame}[fragile] \frametitle{Computing $\bar{p}$ for point transects} Half-normal detection function and point-transect sampling. \vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{B} \hlkwb{<-} \hlnum{100} \hlcom{# Transect width} \hlstd{g} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{,} \hlkwc{sigma}\hlstd{=}\hlnum{25}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma}\hlopt{^}\hlnum{2}\hlstd{))} \hlcom{# g(x)} \hlstd{pdf} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlnum{2}\hlopt{*}\hlstd{x}\hlopt{/}\hlstd{B}\hlopt{^}\hlnum{2} \hlcom{# p(x)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Do the integration \vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{gp} \hlkwb{<-} \hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{g}\hlstd{(x)}\hlopt{*}\hlkwd{pdf}\hlstd{(x)} \hlstd{(pbar} \hlkwb{<-} \hlkwd{integrate}\hlstd{(gp,} \hlkwc{lower}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{upper}\hlstd{=B)}\hlopt{$}\hlstd{value)} \end{alltt} \begin{verbatim} ## [1] 0.1249581 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill % Note the equivalence % \vspace{-6pt} % <<pbar-hn-int2-pt,size='footnotesize'>>= % sigma <- 25 % (pbar <- (sigma^2*(1-exp(-B^2/(2*sigma^2))) - % sigma^2*(1-exp(-0^2/(2*sigma^2)))) * 2*pi/(pi*B^2)) % @ % \pause % \vfill \centering 12.5\% chance of detecting an individual within 100 m. 
\\ \end{frame} \begin{frame} \frametitle{In-class exercise} Building off the previous example\dots \begin{enumerate} \item Use R code and the Shiny app below to compute $\bar{p}$ for line-transect sampling when $\sigma=50, 100, \mathrm{and}\, 200$, instead of $\sigma=25$. \item Repeat, but for point-transect sampling. \end{enumerate} \vfill \centering \href{ https://richard-chandler.shinyapps.io/distance-sampling/ }{ \Large Shiny App \\ \normalsize \color{blue} https://richard-chandler.shinyapps.io/distance-sampling/ } \end{frame} \begin{frame} \frametitle{\large Conventional vs hierarchical distance sampling} \alert{Conventional} distance sampling \begin{itemize} \item Focus is on estimation of detection function parameters and density \item No model for spatial variation in density \item Data are individual-level distances \item We'll deal with CDS in more depth as a prelude to spatial-capture recapture \end{itemize} \pause \vfill \alert{Hierarchical} distance sampling \begin{itemize} \item Focus is on estimation of detection function parameters and spatial variation in abundance/density \item Data are counts of individuals in each distance bin \item Multinomial $N$-mixture model with a unique function for computing the multinomial cell probabilities \end{itemize} \end{frame} \begin{frame} \frametitle{Hierarchical distance sampling} \small State model (with Poisson assumption) \begin{gather*} \mathrm{log}(\lambda_i) = \beta_0 + \beta_1 {\color{blue} w_{i1}} + \beta_2 {\color{blue} w_{i2}} + \cdots \\ N_i \sim \mathrm{Poisson}(\lambda_i) \end{gather*} \pause Observation model \begin{gather*} \mathrm{log}(\sigma_{i}) = \alpha_0 + \alpha_1 {\color{blue} w_{i1}} + \alpha_2 {\color{blue} w_{i3}} + \cdots \\ \{y_{i1}, \dots, y_{iK}\} \sim \mathrm{Multinomial}(N_i, % \pi(b_1, \dots, b_{J+1}, x, \sigma_i)) \pi(x, \sigma_i)) \end{gather*} \pause \small Definitions \\ $\lambda_i$ -- Expected value of abundance at site $i$ \\ $N_i$ -- Realized value of abundance at site $i$ \\ $\sigma_{i}$ -- Scale parameter of detection function $g(x)$ at site $i$ \\ $\pi(x,\sigma_i)$ -- Function computing multinomial cell probs \\ $y_{ij}$ -- count for distance bin $j$ (final count is unobserved) \\ $\color{blue} w_1$, $\color{blue} w_2$, $\color{blue} w_3$ -- site covariates %\hfill %\\ \end{frame} \section{Point transects} \subsection{Simulation} \begin{frame} \frametitle{Outline} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame} \frametitle{Multinomial cell probs for point transects} \small Definitions needed for computing \alert{bin-specific} $\bar{p}$ and multinomial cell probabilities. \begin{itemize} \small \setlength\itemsep{1pt} \item $y_{ij}$ -- number of individuals detected at site $i$ in bin $j$ \item $\sigma_i$ -- scale parameter of detection function $g(x)$ \item $b_1, \dots, b_J$ -- Distance break points defining $J$ distance intervals % \item $a_1, \dots, a_J$ -- Area of annulus $j$ % \item $\bar{p}_j = \int_{b_j}^{b_{j+1}} g(x,\sigma)p(x|b_j\le x<b_{j+1})\, \mathrm{d}x$ % \item $p(x|b_j\le x<b_{j+1}) = 2\pi x/a_j$ % \item $\psi_j=\Pr(b_j\le x<b_{j+1})=a_j/(\pi B^2)$ \item $\bar{p}_j$ -- Average detection probability in distance interval $j$. 
\item $\psi_j$ -- Probability of occuring in distance band $j$ \end{itemize} \pause \vfill \footnotesize \begin{columns} \column{0.9\paperwidth} \begin{tabular}{lc} \hline \centering Description & Multinomial cell probability \\ \hline Pr(occurs and detected in first distance bin) & $\pi_1 = \psi_1\bar{p}_1$ \\ Pr(occurs and detected in second distance bin) & $\pi_2 = \psi_2\bar{p}_2$ \\ {\centering $\cdots$} & $\cdots$ \\ Pr(occurs and detected in last distance bin) & $\pi_J = \psi_J\bar{p}_J$ \\ Pr(not detected) & $\pi_{K} = 1-\sum_{j=1}^J \pi_j$ \\ \hline \end{tabular} \end{columns} \end{frame} \begin{frame}[fragile] \frametitle{Point transects, no covariates} \small Abundance \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{nSites} \hlkwb{<-} \hlnum{100}\hlstd{; lambda1} \hlkwb{<-} \hlnum{2.6} \hlcom{## Expected value of N} \hlstd{N3} \hlkwb{<-} \hlkwd{rpois}\hlstd{(}\hlkwc{n}\hlstd{=nSites,} \hlkwc{lambda}\hlstd{=lambda1)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Multinomial cell probabilities \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{J} \hlkwb{<-} \hlnum{5} \hlcom{# distance bins} \hlstd{sigma} \hlkwb{<-} \hlnum{50} \hlcom{# scale parameter} \hlstd{B} \hlkwb{<-} \hlnum{100} \hlcom{# plot radius (B)} \hlstd{b} \hlkwb{<-} \hlkwd{seq}\hlstd{(}\hlnum{0}\hlstd{, B,} \hlkwc{length}\hlstd{=J}\hlopt{+}\hlnum{1}\hlstd{)} \hlcom{# distance break points} \hlstd{area} \hlkwb{<-} \hlstd{pi}\hlopt{*}\hlstd{b}\hlopt{^}\hlnum{2} \hlcom{# area of each circle} \hlstd{psi} \hlkwb{<-} \hlstd{(area[}\hlopt{-}\hlnum{1}\hlstd{]}\hlopt{-}\hlstd{area[}\hlopt{-}\hlstd{(J}\hlopt{+}\hlnum{1}\hlstd{)])} \hlopt{/} \hlstd{area[J}\hlopt{+}\hlnum{1}\hlstd{]} \hlstd{pbar3} \hlkwb{<-} \hlkwd{numeric}\hlstd{(J)} \hlcom{# average detection probability} \hlstd{pi3} \hlkwb{<-} \hlkwd{numeric}\hlstd{(J}\hlopt{+}\hlnum{1}\hlstd{)} \hlcom{# multinomial cell probs} \hlkwa{for}\hlstd{(j} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{J) \{} \hlstd{pbar3[j]} \hlkwb{<-} \hlkwd{integrate}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma}\hlopt{^}\hlnum{2}\hlstd{))}\hlopt{*}\hlstd{x,} \hlkwc{lower}\hlstd{=b[j],} \hlkwc{upper}\hlstd{=b[j}\hlopt{+}\hlnum{1}\hlstd{])}\hlopt{$}\hlstd{value} \hlopt{*} \hlstd{(}\hlnum{2}\hlopt{*}\hlstd{pi}\hlopt{/}\hlkwd{diff}\hlstd{(area)[j])} \hlstd{pi3[j]} \hlkwb{<-} \hlstd{pbar3[j]}\hlopt{*}\hlstd{psi[j] \}; pi3[J}\hlopt{+}\hlnum{1}\hlstd{]} \hlkwb{<-} \hlnum{1}\hlopt{-}\hlkwd{sum}\hlstd{(pi3[}\hlnum{1}\hlopt{:}\hlstd{J])} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Detections in each distance interval \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y3.all} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwc{nrow}\hlstd{=nSites,} \hlkwc{ncol}\hlstd{=J}\hlopt{+}\hlnum{1}\hlstd{)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{nSites) \{} \hlstd{y3.all[i,]} \hlkwb{<-} \hlkwd{rmultinom}\hlstd{(}\hlkwc{n}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{size}\hlstd{=N3[i],} \hlkwc{prob}\hlstd{=pi3) \}} \hlstd{y3} \hlkwb{<-} \hlstd{y3.all[,}\hlnum{1}\hlopt{:}\hlstd{J]} \hlcom{## Drop final cell} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Observed 
distances} \centering \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(b[}\hlopt{-}\hlstd{(J}\hlopt{+}\hlnum{1}\hlstd{)]}\hlopt{+}\hlnum{10}\hlstd{,} \hlkwd{colSums}\hlstd{(y3),} \hlkwc{type}\hlstd{=}\hlstr{"h"}\hlstd{,} \hlkwc{lwd}\hlstd{=}\hlnum{80}\hlstd{,} \hlkwc{lend}\hlstd{=}\hlnum{2}\hlstd{,} \hlkwc{col}\hlstd{=}\hlstr{"skyblue4"}\hlstd{,} \hlkwc{xlim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,}\hlnum{100}\hlstd{),} \hlkwc{ylim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,} \hlnum{70}\hlstd{),} \hlkwc{xlab}\hlstd{=}\hlstr{"Distance"}\hlstd{,} \hlkwc{ylab}\hlstd{=}\hlstr{"Detections"}\hlstd{)} \end{alltt} \end{kframe} \includegraphics[width=0.9\linewidth]{figure/dist-hist3-1} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Point transects, covariates} \small Abundance \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{elevation} \hlkwb{<-} \hlkwd{rnorm}\hlstd{(nSites)} \hlstd{beta0} \hlkwb{<-} \hlnum{2}\hlstd{; beta1} \hlkwb{<-} \hlnum{1} \hlstd{lambda4} \hlkwb{<-} \hlkwd{exp}\hlstd{(beta0} \hlopt{+} \hlstd{beta1}\hlopt{*}\hlstd{elevation)} \hlstd{N4} \hlkwb{<-} \hlkwd{rpois}\hlstd{(}\hlkwc{n}\hlstd{=nSites,} \hlkwc{lambda}\hlstd{=lambda4)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Multinomial cell probabilities \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{noise} \hlkwb{<-} \hlkwd{rnorm}\hlstd{(nSites)} \hlstd{alpha0} \hlkwb{<-} \hlnum{3}\hlstd{; alpha1} \hlkwb{<-} \hlopt{-}\hlnum{0.5} \hlstd{sigma4} \hlkwb{<-} \hlkwd{exp}\hlstd{(alpha0} \hlopt{+} \hlstd{alpha1}\hlopt{*}\hlstd{noise)} \hlstd{pi4} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, nSites, J}\hlopt{+}\hlnum{1}\hlstd{)} \hlcom{# multinomial cell probs} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{nSites) \{} \hlkwa{for}\hlstd{(j} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{J) \{} \hlstd{pi4[i,j]} \hlkwb{<-} \hlkwd{integrate}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma4[i]}\hlopt{^}\hlnum{2}\hlstd{))}\hlopt{*}\hlstd{x,} \hlkwc{lower}\hlstd{=b[j],} \hlkwc{upper}\hlstd{=b[j}\hlopt{+}\hlnum{1}\hlstd{])}\hlopt{$}\hlstd{value}\hlopt{*}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{pi}\hlopt{/}\hlkwd{diff}\hlstd{(area)[j])}\hlopt{*}\hlstd{psi[j] \}} \hlstd{pi4[i,J}\hlopt{+}\hlnum{1}\hlstd{]} \hlkwb{<-} \hlnum{1}\hlopt{-}\hlkwd{sum}\hlstd{(pi4[i,}\hlnum{1}\hlopt{:}\hlstd{J]) \}} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Detections in each distance interval \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y4.all} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwc{nrow}\hlstd{=nSites,} \hlkwc{ncol}\hlstd{=J}\hlopt{+}\hlnum{1}\hlstd{)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{nSites) \{} \hlstd{y4.all[i,]} \hlkwb{<-} \hlkwd{rmultinom}\hlstd{(}\hlkwc{n}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{size}\hlstd{=N4[i],} \hlkwc{prob}\hlstd{=pi4[i,]) \}} \hlstd{y4} \hlkwb{<-} \hlstd{y4.all[,}\hlnum{1}\hlopt{:}\hlstd{J]} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Simulated data} \begin{columns} \begin{column}{0.4\textwidth} \small Observations % \tiny \vspace{-6pt} 
\begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y4[}\hlnum{1}\hlopt{:}\hlnum{25}\hlstd{,]} \end{alltt} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] ## [1,] 0 1 0 0 0 ## [2,] 0 0 0 0 0 ## [3,] 0 1 0 0 0 ## [4,] 1 1 0 0 0 ## [5,] 1 0 0 0 0 ## [6,] 0 0 0 0 0 ## [7,] 0 1 0 0 0 ## [8,] 0 0 0 1 0 ## [9,] 0 0 0 0 0 ## [10,] 0 0 0 0 0 ## [11,] 1 0 0 0 0 ## [12,] 0 0 0 0 0 ## [13,] 0 0 0 0 0 ## [14,] 0 0 0 0 0 ## [15,] 1 1 0 0 0 ## [16,] 2 1 0 0 0 ## [17,] 0 0 0 1 0 ## [18,] 0 1 0 1 1 ## [19,] 1 0 0 1 0 ## [20,] 0 0 0 0 0 ## [21,] 0 0 0 0 0 ## [22,] 0 0 0 0 0 ## [23,] 0 0 0 0 0 ## [24,] 0 3 0 0 0 ## [25,] 0 0 0 0 0 \end{verbatim} \end{kframe} \end{knitrout} \end{column} \begin{column}{0.6\textwidth} \pause % \scriptsize {\centering Summary stats \\} \vspace{24pt} \small Proportion of sites known to be occupied \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlcom{# Max count at each site} \hlstd{maxCounts} \hlkwb{<-} \hlkwd{apply}\hlstd{(y4,} \hlnum{1}\hlstd{, max)} \hlstd{naiveOccupancy} \hlkwb{<-} \hlkwd{sum}\hlstd{(maxCounts}\hlopt{>}\hlnum{0}\hlstd{)}\hlopt{/}\hlstd{nSites} \hlstd{naiveOccupancy} \end{alltt} \begin{verbatim} ## [1] 0.55 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill \small Total detections in each distance interval \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{colSums}\hlstd{(y4)} \end{alltt} \begin{verbatim} ## [1] 26 54 26 16 9 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill Naive abundance \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{sum}\hlstd{(y4)} \end{alltt} \begin{verbatim} ## [1] 131 \end{verbatim} \end{kframe} \end{knitrout} \end{column} \end{columns} \end{frame} \subsection{Likelihood-based inference} \begin{frame}[fragile] \frametitle{Prepare data in `unmarked'} \small Note the new arguments. \vspace{-6pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{umf4} \hlkwb{<-} \hlkwd{unmarkedFrameDS}\hlstd{(}\hlkwc{y}\hlstd{=y4,} \hlkwc{siteCovs}\hlstd{=}\hlkwd{data.frame}\hlstd{(elevation,noise),} \hlkwc{dist.breaks}\hlstd{=b,} \hlkwc{survey}\hlstd{=}\hlstr{"point"}\hlstd{,} \hlkwc{unitsIn}\hlstd{=}\hlstr{"m"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \pause \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{summary}\hlstd{(umf4)} \end{alltt} \begin{verbatim} ## unmarkedFrameDS Object ## ## point-transect survey design ## Distance class cutpoints (m): 0 20 40 60 80 100 ## ## 100 sites ## Maximum number of distance classes per site: 5 ## Mean number of distance classes per site: 5 ## Sites with at least one detection: 55 ## ## Tabulation of y observations: ## 0 1 2 3 4 6 ## 406 70 15 7 1 1 ## ## Site-level covariates: ## elevation noise ## Min. :-2.29527 Min. :-2.80421 ## 1st Qu.:-0.71015 1st Qu.:-0.82447 ## Median :-0.22187 Median :-0.06717 ## Mean :-0.09012 Mean :-0.18718 ## 3rd Qu.: 0.57207 3rd Qu.: 0.39358 ## Max. : 2.26625 Max. 
: 1.84018 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Fit the model} \footnotesize \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlcom{## fm4 <- distsamp(~noise ~elevation, umf4, keyfun="exp") # negative exp} \hlcom{## fm4 <- distsamp(~noise ~elevation, umf4, keyfun="hazard") # hazard rate} \hlstd{fm4} \hlkwb{<-} \hlkwd{distsamp}\hlstd{(}\hlopt{~}\hlstd{noise} \hlopt{~}\hlstd{elevation, umf4,} \hlkwc{keyfun}\hlstd{=}\hlstr{"halfnorm"}\hlstd{)} \hlcom{# half-normal} \hlstd{fm4} \end{alltt} \begin{verbatim} ## ## Call: ## distsamp(formula = ~noise ~ elevation, data = umf4, keyfun = "halfnorm") ## ## Density: ## Estimate SE z P(>|z|) ## (Intercept) 0.753 0.147 5.11 3.26e-07 ## elevation 1.045 0.101 10.31 6.03e-25 ## ## Detection: ## Estimate SE z P(>|z|) ## sigma(Intercept) 3.079 0.0550 55.95 0.00e+00 ## sigmanoise -0.457 0.0494 -9.26 2.01e-20 ## ## AIC: 437.8227 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill Compare to actual parameter values: \vspace{-6pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{c}\hlstd{(}\hlkwc{beta0}\hlstd{=beta0,} \hlkwc{beta1}\hlstd{=beta1);} \hlkwd{c}\hlstd{(}\hlkwc{alpha0}\hlstd{=alpha0,} \hlkwc{alpha1}\hlstd{=alpha1)} \end{alltt} \begin{verbatim} ## beta0 beta1 ## 2 1 ## alpha0 alpha1 ## 3.0 -0.5 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Prediction in `unmarked'} \small Create \texttt{data.frame} with prediction covariates. \vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{pred.data} \hlkwb{<-} \hlkwd{data.frame}\hlstd{(}\hlkwc{noise}\hlstd{=}\hlkwd{seq}\hlstd{(}\hlopt{-}\hlnum{3}\hlstd{,} \hlnum{3}\hlstd{,} \hlkwc{by}\hlstd{=}\hlnum{0.5}\hlstd{))} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Get predictions of $\sigma$ for each row of prediction data. 
\vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{sigma.pred} \hlkwb{<-} \hlkwd{predict}\hlstd{(fm4,} \hlkwc{newdata}\hlstd{=pred.data,} \hlkwc{type}\hlstd{=}\hlstr{'det'}\hlstd{,} \hlkwc{append}\hlstd{=}\hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill View $\sigma$ predictions \vspace{-6pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{print}\hlstd{(}\hlkwd{head}\hlstd{(sigma.pred),} \hlkwc{digits}\hlstd{=}\hlnum{2}\hlstd{)} \end{alltt} \begin{verbatim} ## Predicted SE lower upper noise ## 1 86 12.5 64 114 -3.0 ## 2 68 8.4 54 87 -2.5 ## 3 54 5.5 44 66 -2.0 ## 4 43 3.5 37 51 -1.5 ## 5 34 2.2 30 39 -1.0 ## 6 27 1.5 25 30 -0.5 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Prediction in `unmarked'} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(Predicted} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{ylab}\hlstd{=}\hlstr{"Scale parameter (sigma)"}\hlstd{,} \hlkwc{ylim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,}\hlnum{100}\hlstd{),} \hlkwc{xlab}\hlstd{=}\hlstr{"Noise level"}\hlstd{,} \hlkwc{type}\hlstd{=}\hlstr{"l"}\hlstd{)} \hlkwd{lines}\hlstd{(lower} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{col}\hlstd{=}\hlstr{"grey"}\hlstd{)} \hlkwd{lines}\hlstd{(upper} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{col}\hlstd{=}\hlstr{"grey"}\hlstd{)} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.8\linewidth]{figure/pred-sigma-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Prediction in `unmarked'} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(Predicted} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{ylab}\hlstd{=}\hlstr{"Scale parameter (sigma)"}\hlstd{,} \hlkwc{ylim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,}\hlnum{100}\hlstd{),} \hlkwc{xlab}\hlstd{=}\hlstr{"Noise level"}\hlstd{,} \hlkwc{type}\hlstd{=}\hlstr{"l"}\hlstd{)} \hlkwd{lines}\hlstd{(lower} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{col}\hlstd{=}\hlstr{"grey"}\hlstd{)} \hlkwd{lines}\hlstd{(upper} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{col}\hlstd{=}\hlstr{"grey"}\hlstd{)} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.8\linewidth]{figure/pred-sigma2-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Prediction in `unmarked'} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(Predicted} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{ylab}\hlstd{=}\hlstr{"Scale parameter (sigma)"}\hlstd{,} \hlkwc{ylim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,}\hlnum{100}\hlstd{),} \hlkwc{xlab}\hlstd{=}\hlstr{"Noise level"}\hlstd{,} \hlkwc{type}\hlstd{=}\hlstr{"l"}\hlstd{)} \hlkwd{lines}\hlstd{(lower} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{col}\hlstd{=}\hlstr{"grey"}\hlstd{)} \hlkwd{lines}\hlstd{(upper} \hlopt{~} \hlstd{noise, sigma.pred,} \hlkwc{col}\hlstd{=}\hlstr{"grey"}\hlstd{)} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.8\linewidth]{figure/pred-sigma2-2-1} } \end{knitrout} \end{frame} \subsection{Bayesian inference} \begin{frame} \frametitle{Outline} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame}[fragile] 
\frametitle{\normalsize Conditional-on-$N$ and $n_i=\sum_{j=1}^{J} y_{i,j}$} \vspace{-3pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.678, 0.847, 0.902}\color{fgcolor}\begin{kframe} \begin{verbatim} model { lambda.intercept ~ dunif(0, 20) beta0 <- log(lambda.intercept) beta1 ~ dnorm(0, 0.5) alpha0 ~ dnorm(0, 0.5) alpha1 ~ dnorm(0, 0.5) for(i in 1:nSites) { log(lambda[i]) <- beta0 + beta1*elevation[i] N[i] ~ dpois(lambda[i]) # Latent local abundance log(sigma[i]) <- alpha0 + alpha1*noise[i] tau[i] <- 1/sigma[i]^2 for(j in 1:nBins) { ## Trick to do integration for *point-transects* pbar[i,j] <- (sigma[i]^2 * (1-exp(-b[j+1]^2/(2*sigma[i]^2))) - sigma[i]^2 * (1-exp(-b[j]^2/(2*sigma[i]^2)))) * 2*3.141593/area[j] pi[i,j] <- psi[j]*pbar[i,j] ## Pr(present and detected in bin j) } pi[i,nBins+1] <- 1-sum(pi[i,1:nBins]) ## Pr(not detected) n[i] ~ dbin(1-pi[i,nBins+1], N[i]) y[i,] ~ dmulti(pi[i,1:nBins]/(1-pi[i,nBins+1]), n[i]) ## If N~Pois(lam), then the above is equivalent to: # for(j in 1:nBins) { y[i,j] ~ dpois(lambda[i]*pi[i,j]) } } totalAbundance <- sum(N[1:nSites]) } \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Data, inits, and parameters} Put data in a named list \vspace{-12pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.data.pt} \hlkwb{<-} \hlkwd{list}\hlstd{(}\hlkwc{y}\hlstd{=y4,} \hlkwc{n}\hlstd{=}\hlkwd{rowSums}\hlstd{(y4),} \hlkwc{area}\hlstd{=}\hlkwd{diff}\hlstd{(area),} \hlkwc{b}\hlstd{=b,} \hlcom{# Distance break points} \hlkwc{psi}\hlstd{=psi,} \hlcom{# Pr(occuring in bin j)} \hlkwc{elevation}\hlstd{=elevation,} \hlkwc{noise}\hlstd{=noise,} \hlkwc{nSites}\hlstd{=nSites,} \hlkwc{nBins}\hlstd{=J)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Initial values \vspace{-12pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.inits.pt} \hlkwb{<-} \hlkwa{function}\hlstd{() \{} \hlkwd{list}\hlstd{(}\hlkwc{lambda.intercept}\hlstd{=}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{),} \hlkwc{alpha0}\hlstd{=}\hlkwd{rnorm}\hlstd{(}\hlnum{1}\hlstd{,} \hlnum{5}\hlstd{),} \hlkwc{N}\hlstd{=}\hlkwd{rowSums}\hlstd{(y4)}\hlopt{+}\hlkwd{rpois}\hlstd{(}\hlkwd{nrow}\hlstd{(y4),} \hlnum{2}\hlstd{))} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Parameters to monitor \vspace{-12pt} \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.pars.pt} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"beta0"}\hlstd{,} \hlstr{"beta1"}\hlstd{,} \hlstr{"alpha0"}\hlstd{,} \hlstr{"alpha1"}\hlstd{,} \hlstr{"totalAbundance"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{MCMC} \small \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.post.pt} \hlkwb{<-} \hlkwd{jags.basic}\hlstd{(}\hlkwc{data}\hlstd{=jags.data.pt,} \hlkwc{inits}\hlstd{=jags.inits.pt,} \hlkwc{parameters.to.save}\hlstd{=jags.pars.pt,} \hlkwc{model.file}\hlstd{=}\hlstr{"distsamp-point-mod.jag"}\hlstd{,} \hlkwc{n.chains}\hlstd{=}\hlnum{3}\hlstd{,} \hlkwc{n.adapt}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{n.burnin}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{n.iter}\hlstd{=}\hlnum{2000}\hlstd{,} \hlkwc{parallel}\hlstd{=}\hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \vfill \begin{knitrout}\scriptsize 
\definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{round}\hlstd{(}\hlkwd{summary}\hlstd{(jags.post.pt)}\hlopt{$}\hlstd{quantile,} \hlkwc{digits}\hlstd{=}\hlnum{3}\hlstd{)} \end{alltt} \begin{verbatim} ## 2.5% 25% 50% 75% 97.5% ## alpha0 2.965 3.028 3.065 3.101 3.169 ## alpha1 -0.563 -0.492 -0.457 -0.425 -0.366 ## beta0 1.636 1.843 1.946 2.038 2.235 ## beta1 0.829 0.960 1.031 1.103 1.252 ## deviance 406.460 414.381 418.910 423.628 433.825 ## totalAbundance 831.000 978.000 1062.000 1163.000 1354.025 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} %\section{Simulation} \section{Line transects} \subsection{Simulation} \begin{frame} \frametitle{Outline} \Large % \tableofcontents[currentsection,currentsubsection] \tableofcontents[currentsection] \end{frame} \begin{frame} \frametitle{Multinomial cell probs for line transects} \small Definitions needed for computing \alert{bin-specific} $\bar{p}$ and multinomial cell probabilities. \begin{itemize} \small \setlength\itemsep{1pt} \item $y_{ij}$ -- number of individuals detected at site $i$ in bin $j$ \item $\sigma_i$ -- scale parameter of detection function $g(x)$ \item $b_1, \dots, b_{J+1}$ -- Distance break points defining $J$ distance intervals % \item $\bar{p}_j = \int_{b_j}^{b_{j+1}} g(x,\sigma)p(x|b_j\le x<b_{j+1})\, \mathrm{d}x$ % \item $p(x|b_j\le x<b_{j+1}) = 1/(b_{j+1}-b_j)$ \item $\bar{p}_j$ -- Average detection probability in distance bin j % \item $\psi_j=\Pr(b_j\le x<b_{j+1})=(b_{j+1}-b_j)/B$ % -- Pr(occuring in distance bin $j$) \item $\psi_j$ -- Pr(occuring in distance bin $j$) \end{itemize} \pause \vfill \footnotesize \begin{columns} \column{0.9\paperwidth} \begin{tabular}{lc} \hline \centering Description & Multinomial cell probability \\ \hline Pr(occurs and detected in first distance bin) & $\pi_1 = \psi_1\bar{p}_1$ \\ Pr(occurs and detected in second distance bin) & $\pi_2 = \psi_2\bar{p}_2$ \\ {\centering $\cdots$} & $\cdots$ \\ Pr(occurs and detected in last distance bin) & $\pi_J = \psi_J\bar{p}_J$ \\ Pr(not detected) & $\pi_{K} = 1-\sum_{j=1}^J \pi_j$ \\ \hline \end{tabular} \end{columns} \end{frame} \begin{frame}[fragile] \frametitle{Line transects, no covariates} \small Abundance \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{nSites} \hlkwb{<-} \hlnum{100}\hlstd{; lambda1} \hlkwb{<-} \hlnum{2.6} \hlcom{## Expected value of N} \hlstd{N1} \hlkwb{<-} \hlkwd{rpois}\hlstd{(}\hlkwc{n}\hlstd{=nSites,} \hlkwc{lambda}\hlstd{=lambda1)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Multinomial cell probabilities \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{J} \hlkwb{<-} \hlnum{5} \hlcom{# distance bins} \hlstd{sigma} \hlkwb{<-} \hlnum{50} \hlcom{# scale parameter} \hlstd{B} \hlkwb{<-} \hlnum{100}\hlstd{; L} \hlkwb{<-} \hlnum{100} \hlcom{# transect widths (B) and lengths (L)} \hlstd{b} \hlkwb{<-} \hlkwd{seq}\hlstd{(}\hlnum{0}\hlstd{, B,} \hlkwc{length}\hlstd{=J}\hlopt{+}\hlnum{1}\hlstd{)} \hlcom{# distance break points} \hlstd{psi} \hlkwb{<-} \hlkwd{diff}\hlstd{(b)}\hlopt{/}\hlstd{B} \hlcom{# Pr(x is in bin j)} \hlstd{pbar1} \hlkwb{<-} \hlkwd{numeric}\hlstd{(J)} \hlcom{# average detection probability} \hlstd{pi1} \hlkwb{<-} \hlkwd{numeric}\hlstd{(J}\hlopt{+}\hlnum{1}\hlstd{)} \hlcom{# multinomial cell probs} \hlkwa{for}\hlstd{(j} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{J) \{} 
\hlstd{pbar1[j]} \hlkwb{<-} \hlkwd{integrate}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} \hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma}\hlopt{^}\hlnum{2}\hlstd{)),} \hlkwc{lower}\hlstd{=b[j],} \hlkwc{upper}\hlstd{=b[j}\hlopt{+}\hlnum{1}\hlstd{])}\hlopt{$}\hlstd{value} \hlopt{/} \hlkwd{diff}\hlstd{(b)[j]} \hlstd{pi1[j]} \hlkwb{<-} \hlstd{pbar1[j]}\hlopt{*}\hlstd{psi[j]} \hlstd{\}} \hlstd{pi1[J}\hlopt{+}\hlnum{1}\hlstd{]} \hlkwb{<-} \hlnum{1}\hlopt{-}\hlkwd{sum}\hlstd{(pi1[}\hlnum{1}\hlopt{:}\hlstd{J])} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Detections in each distance interval \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y1.all} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwc{nrow}\hlstd{=nSites,} \hlkwc{ncol}\hlstd{=J}\hlopt{+}\hlnum{1}\hlstd{)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{nSites) \{} \hlstd{y1.all[i,]} \hlkwb{<-} \hlkwd{rmultinom}\hlstd{(}\hlkwc{n}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{size}\hlstd{=N1[i],} \hlkwc{prob}\hlstd{=pi1) \}} \hlstd{y1} \hlkwb{<-} \hlstd{y1.all[,}\hlnum{1}\hlopt{:}\hlstd{J]} \hlcom{## Drop final cell} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Observed distances} \centering \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(b[}\hlopt{-}\hlstd{(J}\hlopt{+}\hlnum{1}\hlstd{)]}\hlopt{+}\hlnum{10}\hlstd{,} \hlkwd{colSums}\hlstd{(y1),} \hlkwc{type}\hlstd{=}\hlstr{"h"}\hlstd{,} \hlkwc{lwd}\hlstd{=}\hlnum{80}\hlstd{,} \hlkwc{lend}\hlstd{=}\hlnum{2}\hlstd{,} \hlkwc{col}\hlstd{=}\hlstr{"skyblue4"}\hlstd{,} \hlkwc{xlim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,}\hlnum{100}\hlstd{),} \hlkwc{ylim}\hlstd{=}\hlkwd{c}\hlstd{(}\hlnum{0}\hlstd{,} \hlnum{70}\hlstd{),} \hlkwc{xlab}\hlstd{=}\hlstr{"Distance"}\hlstd{,} \hlkwc{ylab}\hlstd{=}\hlstr{"Detections"}\hlstd{)} \end{alltt} \end{kframe} \includegraphics[width=0.9\linewidth]{figure/dist-hist2-1} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Line transects, covariates} \small Abundance \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{elevation} \hlkwb{<-} \hlkwd{rnorm}\hlstd{(nSites)} \hlstd{beta0} \hlkwb{<-} \hlnum{2}\hlstd{; beta1} \hlkwb{<-} \hlnum{1} \hlstd{lambda2} \hlkwb{<-} \hlkwd{exp}\hlstd{(beta0} \hlopt{+} \hlstd{beta1}\hlopt{*}\hlstd{elevation)} \hlstd{N2} \hlkwb{<-} \hlkwd{rpois}\hlstd{(}\hlkwc{n}\hlstd{=nSites,} \hlkwc{lambda}\hlstd{=lambda2)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Multinomial cell probabilities \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{noise} \hlkwb{<-} \hlkwd{rnorm}\hlstd{(nSites)} \hlstd{alpha0} \hlkwb{<-} \hlnum{3}\hlstd{; alpha1} \hlkwb{<-} \hlopt{-}\hlnum{0.5} \hlstd{sigma2} \hlkwb{<-} \hlkwd{exp}\hlstd{(alpha0} \hlopt{+} \hlstd{alpha1}\hlopt{*}\hlstd{noise)} \hlstd{pi2} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{, nSites, J}\hlopt{+}\hlnum{1}\hlstd{)} \hlcom{# multinomial cell probs} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{nSites) \{} \hlkwa{for}\hlstd{(j} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{J) \{} \hlstd{pi2[i,j]} \hlkwb{<-} \hlkwd{integrate}\hlstd{(}\hlkwa{function}\hlstd{(}\hlkwc{x}\hlstd{)} 
\hlkwd{exp}\hlstd{(}\hlopt{-}\hlstd{x}\hlopt{^}\hlnum{2}\hlopt{/}\hlstd{(}\hlnum{2}\hlopt{*}\hlstd{sigma2[i]}\hlopt{^}\hlnum{2}\hlstd{)),} \hlkwc{lower}\hlstd{=b[j],} \hlkwc{upper}\hlstd{=b[j}\hlopt{+}\hlnum{1}\hlstd{])}\hlopt{$}\hlstd{value} \hlopt{/} \hlstd{(b[j}\hlopt{+}\hlnum{1}\hlstd{]}\hlopt{-}\hlstd{b[j])} \hlopt{*} \hlstd{psi[j] \}} \hlstd{pi2[i,J}\hlopt{+}\hlnum{1}\hlstd{]} \hlkwb{<-} \hlnum{1}\hlopt{-}\hlkwd{sum}\hlstd{(pi2[i,}\hlnum{1}\hlopt{:}\hlstd{J]) \}} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Detections in each distance interval \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y2.all} \hlkwb{<-} \hlkwd{matrix}\hlstd{(}\hlnum{NA}\hlstd{,} \hlkwc{nrow}\hlstd{=nSites,} \hlkwc{ncol}\hlstd{=J}\hlopt{+}\hlnum{1}\hlstd{)} \hlkwa{for}\hlstd{(i} \hlkwa{in} \hlnum{1}\hlopt{:}\hlstd{nSites) \{} \hlstd{y2.all[i,]} \hlkwb{<-} \hlkwd{rmultinom}\hlstd{(}\hlkwc{n}\hlstd{=}\hlnum{1}\hlstd{,} \hlkwc{size}\hlstd{=N2[i],} \hlkwc{prob}\hlstd{=pi2[i,]) \}} \hlstd{y2} \hlkwb{<-} \hlstd{y2.all[,}\hlnum{1}\hlopt{:}\hlstd{J]} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Simulated data} \begin{columns} \begin{column}{0.4\textwidth} \small Observations % \tiny \vspace{-6pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{y2[}\hlnum{1}\hlopt{:}\hlnum{25}\hlstd{,]} \end{alltt} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 0 0 0 0 ## [2,] 1 1 0 0 0 ## [3,] 4 3 0 0 0 ## [4,] 0 0 0 0 0 ## [5,] 1 1 0 0 0 ## [6,] 1 1 0 1 0 ## [7,] 1 1 0 0 0 ## [8,] 0 0 0 0 0 ## [9,] 0 2 0 0 0 ## [10,] 1 2 0 0 0 ## [11,] 0 1 0 0 0 ## [12,] 1 0 0 0 0 ## [13,] 2 1 0 0 0 ## [14,] 2 0 0 0 0 ## [15,] 0 0 1 0 1 ## [16,] 0 0 0 0 0 ## [17,] 2 1 0 0 0 ## [18,] 3 1 0 1 0 ## [19,] 3 0 0 0 0 ## [20,] 1 0 0 0 0 ## [21,] 1 0 0 0 0 ## [22,] 1 0 0 0 0 ## [23,] 1 0 0 0 0 ## [24,] 1 0 0 0 0 ## [25,] 9 4 1 1 0 \end{verbatim} \end{kframe} \end{knitrout} \end{column} \begin{column}{0.6\textwidth} \pause % \scriptsize {\centering Summary stats \\} \vspace{24pt} \small Proportion of sites known to be occupied \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlcom{# Max count at each site} \hlstd{maxCounts} \hlkwb{<-} \hlkwd{apply}\hlstd{(y2,} \hlnum{1}\hlstd{, max)} \hlstd{naiveOccupancy} \hlkwb{<-} \hlkwd{sum}\hlstd{(maxCounts}\hlopt{>}\hlnum{0}\hlstd{)}\hlopt{/}\hlstd{nSites} \hlstd{naiveOccupancy} \end{alltt} \begin{verbatim} ## [1] 0.77 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill \small Total detections in each distance interval \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{colSums}\hlstd{(y2)} \end{alltt} \begin{verbatim} ## [1] 137 78 28 11 1 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill Naive abundance \vspace{-6pt} \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{sum}\hlstd{(y2)} \end{alltt} \begin{verbatim} ## [1] 255 \end{verbatim} \end{kframe} \end{knitrout} \end{column} \end{columns} \end{frame} %\section{Prediction} \subsection{Likelihood-based inference} \begin{frame} \frametitle{Outline} \Large \tableofcontents[currentsection] \end{frame} \begin{frame}[fragile] \frametitle{Prepare data in `unmarked'} \small Note the new 
arguments. \vspace{-6pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{umf} \hlkwb{<-} \hlkwd{unmarkedFrameDS}\hlstd{(}\hlkwc{y}\hlstd{=y2,} \hlkwc{siteCovs}\hlstd{=}\hlkwd{data.frame}\hlstd{(elevation,noise),} \hlkwc{dist.breaks}\hlstd{=b,} \hlkwc{tlength}\hlstd{=}\hlkwd{rep}\hlstd{(L, nSites),} \hlkwc{survey}\hlstd{=}\hlstr{"line"}\hlstd{,} \hlkwc{unitsIn}\hlstd{=}\hlstr{"m"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \pause \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{summary}\hlstd{(umf)} \end{alltt} \begin{verbatim} ## unmarkedFrameDS Object ## ## line-transect survey design ## Distance class cutpoints (m): 0 20 40 60 80 100 ## ## 100 sites ## Maximum number of distance classes per site: 5 ## Mean number of distance classes per site: 5 ## Sites with at least one detection: 77 ## ## Tabulation of y observations: ## 0 1 2 3 4 5 6 8 9 12 ## 368 77 23 16 11 1 1 1 1 1 ## ## Site-level covariates: ## elevation noise ## Min. :-2.15115 Min. :-2.10388 ## 1st Qu.:-0.67921 1st Qu.:-0.62759 ## Median :-0.20250 Median :-0.09483 ## Mean :-0.07937 Mean : 0.04132 ## 3rd Qu.: 0.60327 3rd Qu.: 0.65752 ## Max. : 2.03949 Max. : 2.83252 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Fit the model} \footnotesize \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlcom{## fm <- distsamp(~noise ~elevation, umf, keyfun="exp") # negative exp} \hlcom{## fm <- distsamp(~noise ~elevation, umf, keyfun="hazard") # hazard rate} \hlstd{fm} \hlkwb{<-} \hlkwd{distsamp}\hlstd{(}\hlopt{~}\hlstd{noise} \hlopt{~}\hlstd{elevation, umf,} \hlkwc{keyfun}\hlstd{=}\hlstr{"halfnorm"}\hlstd{)} \hlcom{# half-normal} \hlstd{fm} \end{alltt} \begin{verbatim} ## ## Call: ## distsamp(formula = ~noise ~ elevation, data = umf, keyfun = "halfnorm") ## ## Density: ## Estimate SE z P(>|z|) ## (Intercept) 1.10 0.0957 11.5 1.93e-30 ## elevation 1.05 0.0754 13.9 7.07e-44 ## ## Detection: ## Estimate SE z P(>|z|) ## sigma(Intercept) 3.137 0.0515 60.90 0.00e+00 ## sigmanoise -0.369 0.0443 -8.34 7.55e-17 ## ## AIC: 576.9817 \end{verbatim} \end{kframe} \end{knitrout} \pause \vfill Compare to actual parameter values: \vspace{-6pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{c}\hlstd{(}\hlkwc{beta0}\hlstd{=beta0,} \hlkwc{beta1}\hlstd{=beta1);} \hlkwd{c}\hlstd{(}\hlkwc{alpha0}\hlstd{=alpha0,} \hlkwc{alpha1}\hlstd{=alpha1)} \end{alltt} \begin{verbatim} ## beta0 beta1 ## 2 1 ## alpha0 alpha1 ## 3.0 -0.5 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \subsection{Bayesian inference} \begin{frame} \frametitle{Outline} \Large \tableofcontents[currentsection,currentsubsection] \end{frame} \begin{frame}[fragile] \frametitle{\normalsize Conditional-on-$N$ and $n_i=\sum_{j=1}^{J} y_{i,j}$} \vspace{-3pt} \begin{knitrout}\tiny \definecolor{shadecolor}{rgb}{0.678, 0.847, 0.902}\color{fgcolor}\begin{kframe} \begin{verbatim} model { lambda.intercept ~ dunif(0, 20) beta0 <- log(lambda.intercept) beta1 ~ dnorm(0, 0.5) alpha0 ~ dnorm(0, 0.5) alpha1 ~ dnorm(0, 0.5) for(i in 1:nSites) { log(lambda[i]) <- beta0 + beta1*elevation[i] N[i] ~ dpois(lambda[i]) # Latent local abundance log(sigma[i]) <- alpha0 + alpha1*noise[i] tau[i] <- 1/sigma[i]^2 for(j in 1:nBins) { ## Trick to do integration for *line-transects* pbar[i,j] 
<- (pnorm(b[j+1], 0, tau[i]) - pnorm(b[j], 0, tau[i])) / dnorm(0, 0, tau[i]) / (b[j+1]-b[j]) pi[i,j] <- psi[j]*pbar[i,j] ## Pr(present and detected in bin j) } pi[i,nBins+1] <- 1-sum(pi[i,1:nBins]) ## Pr(not detected) n[i] ~ dbin(1-pi[i,nBins+1], N[i]) y[i,] ~ dmulti(pi[i,1:nBins]/(1-pi[i,nBins+1]), n[i]) ## If N~Pois(lam), then the above is equivalent to: # for(j in 1:nBins) { y[i,j] ~ dpois(lambda[i]*pi[i,j]) } } totalAbundance <- sum(N[1:nSites]) } \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Data, inits, and parameters} Put data in a named list \vspace{-12pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.data.line} \hlkwb{<-} \hlkwd{list}\hlstd{(}\hlkwc{y}\hlstd{=y2,} \hlkwc{n}\hlstd{=}\hlkwd{rowSums}\hlstd{(y2),} \hlkwc{b}\hlstd{=b,} \hlcom{# Distance break points} \hlkwc{psi}\hlstd{=}\hlkwd{diff}\hlstd{(b)}\hlopt{/}\hlstd{B,} \hlcom{# Pr(occuring in bin j)} \hlkwc{elevation}\hlstd{=elevation,} \hlkwc{noise}\hlstd{=noise,} \hlkwc{nSites}\hlstd{=nSites,} \hlkwc{nBins}\hlstd{=J)} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Initial values \vspace{-12pt} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.inits.line} \hlkwb{<-} \hlkwa{function}\hlstd{() \{} \hlkwd{list}\hlstd{(}\hlkwc{lambda.intercept}\hlstd{=}\hlkwd{runif}\hlstd{(}\hlnum{1}\hlstd{),} \hlkwc{alpha0}\hlstd{=}\hlkwd{rnorm}\hlstd{(}\hlnum{1}\hlstd{,} \hlnum{5}\hlstd{),} \hlkwc{N}\hlstd{=}\hlkwd{rowSums}\hlstd{(y2)}\hlopt{+}\hlkwd{rpois}\hlstd{(}\hlkwd{nrow}\hlstd{(y2),} \hlnum{2}\hlstd{))} \hlstd{\}} \end{alltt} \end{kframe} \end{knitrout} \pause \vfill Parameters to monitor \vspace{-12pt} \begin{knitrout}\small \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlstd{jags.pars.line} \hlkwb{<-} \hlkwd{c}\hlstd{(}\hlstr{"beta0"}\hlstd{,} \hlstr{"beta1"}\hlstd{,} \hlstr{"alpha0"}\hlstd{,} \hlstr{"alpha1"}\hlstd{,} \hlstr{"totalAbundance"}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{MCMC} \small \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{library}\hlstd{(jagsUI)} \hlstd{jags.post.line} \hlkwb{<-} \hlkwd{jags.basic}\hlstd{(}\hlkwc{data}\hlstd{=jags.data.line,} \hlkwc{inits}\hlstd{=jags.inits.line,} \hlkwc{parameters.to.save}\hlstd{=jags.pars.line,} \hlkwc{model.file}\hlstd{=}\hlstr{"distsamp-line-mod.jag"}\hlstd{,} \hlkwc{n.chains}\hlstd{=}\hlnum{3}\hlstd{,} \hlkwc{n.adapt}\hlstd{=}\hlnum{100}\hlstd{,} \hlkwc{n.burnin}\hlstd{=}\hlnum{0}\hlstd{,} \hlkwc{n.iter}\hlstd{=}\hlnum{2000}\hlstd{,} \hlkwc{parallel}\hlstd{=}\hlnum{TRUE}\hlstd{)} \end{alltt} \end{kframe} \end{knitrout} \vfill \begin{knitrout}\scriptsize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{round}\hlstd{(}\hlkwd{summary}\hlstd{(jags.post.line)}\hlopt{$}\hlstd{quantile,} \hlkwc{digits}\hlstd{=}\hlnum{3}\hlstd{)} \end{alltt} \begin{verbatim} ## 2.5% 25% 50% 75% 97.5% ## alpha0 3.043 3.103 3.138 3.172 3.242 ## alpha1 -0.463 -0.403 -0.373 -0.342 -0.290 ## beta0 1.603 1.731 1.792 1.851 1.963 ## beta1 0.897 0.991 1.037 1.089 1.181 ## deviance 520.673 531.579 537.705 543.925 557.278 ## totalAbundance 718.000 791.000 831.000 868.250 947.000 \end{verbatim} \end{kframe} \end{knitrout} \end{frame} \begin{frame}[fragile] 
\frametitle{Traceplots and density plots} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(jags.post.line[,jags.pars.line[}\hlnum{1}\hlopt{:}\hlnum{3}\hlstd{]])} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.7\textwidth]{figure/bugs-plot1-rem2-1} } \end{knitrout} \end{frame} \begin{frame}[fragile] \frametitle{Traceplots and density plots} \begin{knitrout}\footnotesize \definecolor{shadecolor}{rgb}{0.878, 0.918, 0.933}\color{fgcolor}\begin{kframe} \begin{alltt} \hlkwd{plot}\hlstd{(jags.post.line[,jags.pars.line[}\hlnum{4}\hlopt{:}\hlnum{5}\hlstd{]])} \end{alltt} \end{kframe} {\centering \includegraphics[width=0.7\textwidth]{figure/bugs-plot2-rem2-1} } \end{knitrout} \end{frame} \section{Summary} \begin{frame} \frametitle{Distance sampling summary} Assumptions \begin{itemize} \small \item Animals don't move during the survey \item Animals are uniformly distributed with respect to the transects \item Detection is certain on the transect, i.e. $p=1$ when $x=0$. \item Detections are independent \end{itemize} \pause \vfill \small If these assumptions can be met, distance sampling is a powerful method allowing for inference about abundance and density using data from a single visit. \\ \end{frame} \section{Assignment} \begin{frame}[fragile] \frametitle{Assignment} % \small \footnotesize Create a self-contained R script or Rmarkdown file to do the following: \vfill \begin{enumerate} % \small \footnotesize \item Fit a distance sampling model with a half-normal detection function and the following covariates to the black-throated blue warbler data ({\tt btbw\_data\_distsamp.csv}) in `unmarked' and `JAGS': \begin{itemize} \footnotesize \item Density covariates: {\tt Elevation, UTM.N, UTM.W} \item Detection covariates: {\tt Wind, Noise} \item Response: {\scriptsize \tt btbw0\_20, btbw20\_40, btbw40\_60, btbw60\_80, btbw80\_100} \end{itemize} \item Using the model fitted in `unmarked', create two graphs of the predictions: one for density and the other for the scale parameter ($\sigma$). \item Compare the half-normal model to two other models with the same covariates, but with negative exponential and hazard rate detection functions. Which has the lowest AIC? \end{enumerate} % \pause \vfill Suggestions: \begin{itemize} \item Convert response variables to matrix with \inr{as.matrix} \item Standardize covariates \end{itemize} % \pause \vfill Upload your {\tt .R} or {\tt .Rmd} file to ELC by Monday at 5:00. \end{frame} \end{document}
{ "alphanum_fraction": 0.6291329517, "avg_line_length": 33.7542419267, "ext": "tex", "hexsha": "b4a49f6d9c14bbcc6ae5d8c6a96fc3d645a38d22", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f738381e9260e8fbd1a63f0c96402612aa44a3d4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rbchan/popdy-inference", "max_forks_repo_path": "lectures/distsamp-HDS/lecture-distsamp-HDS.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f738381e9260e8fbd1a63f0c96402612aa44a3d4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rbchan/popdy-inference", "max_issues_repo_path": "lectures/distsamp-HDS/lecture-distsamp-HDS.tex", "max_line_length": 361, "max_stars_count": null, "max_stars_repo_head_hexsha": "f738381e9260e8fbd1a63f0c96402612aa44a3d4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rbchan/popdy-inference", "max_stars_repo_path": "lectures/distsamp-HDS/lecture-distsamp-HDS.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 25916, "size": 61669 }
\section{Sensors}
{ "alphanum_fraction": 0.7, "avg_line_length": 5, "ext": "tex", "hexsha": "df95ea807323e6a6f47070ba469c0af62cecb5f6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/engineering/engineeringElectrical/06-00-Sensors.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/engineering/engineeringElectrical/06-00-Sensors.tex", "max_line_length": 17, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/engineering/engineeringElectrical/06-00-Sensors.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 20 }
\chapter{Installation}

A compiled Broadwick jar is available from the EPIC Scotland website that can be added to your project. Broadwick uses Maven as its build tool, so this manual is rather Maven-focused. You can use Broadwick with other tools or import it into your IDE settings, though this is outside the scope of this manual.

\section{Installing the Distribution Jar}
If you plan on downloading the distribution jar file, you should place it under
\begin{sourcecode}
\${HOME}/.m2/repository/broadwick/broadwick/1.2
\end{sourcecode}

\section{Installing From Source}
To download and build the Broadwick sources:
\begin{enumerate}
\item Create a directory for Broadwick and ‘cd’ into that directory.
\item Copy the Broadwick sources.
\begin{sourcecode}
git clone https://github.com/EPICScotland/Broadwick .
\end{sourcecode}
(this may take a while as there are release snapshots that are also on the site)
\item Build the jar library.
\begin{sourcecode}
mvn package install; cd archetype; mvn install
\end{sourcecode}
You should see output similar to
\begin{sourcecode}
INFO --- maven-compiler-plugin:3.1:compile (default-compile) @ broadwick ---
INFO Changes detected - recompiling the module!
INFO Compiling 157 source files to /XXXX/XXXX/XXXX/EPICScotland/Broadwick/target/classes
WARNING bootstrap class path not set in conjunction with -source 1.7
WARNING No processor claimed any of these annotations: javax.xml.bind.annotation.XmlAccessorType,javax.xml.bind.annotation.XmlAttribute,lombok.Synchronized,lombok.Setter,lombok.Getter,lombok.EqualsAndHashCode,javax.xml.bind.annotation.XmlRootElement,javax.xml.bind.annotation.XmlSeeAlso,lombok.extern.slf4j.Slf4j,javax.xml.bind.annotation.XmlType,javax.xml.bind.annotation.XmlElements,javax.xml.bind.annotation.XmlRegistry,lombok.Data,lombok.ToString
\end{sourcecode}
This will install the Broadwick jar and the archetype under .m2/repository/broadwick/broadwick/1.2
\end{enumerate}

Now you can create a new project using this archetype:
\begin{sourcecode}
mvn3 archetype:generate -DarchetypeGroupId=broadwick -DarchetypeArtifactId=broadwick-archetype -DarchetypeVersion=1.2 -DgroupId=<unique id for your project> -DartifactId=<your project name> -Dversion=0.1 -Dpackage=<your package>
\end{sourcecode}

\section{Ubuntu}
The version of Java shipped with Ubuntu 15.04 (other versions are possibly also affected) causes the following error when compiling Broadwick, due to the jaxb plugin which is used to generate Java sources.
\begin{sourcecode}
WARNING: Error injecting: org.jvnet.mjiip.v\_2.XJC2Mojo
java.lang.NoClassDefFoundError: com/sun/xml/bind/api/ErrorListener \
at java.lang.ClassLoader.defineClass1(Native Method) \
at java.lang.ClassLoader.defineClass(ClassLoader.java:800) \
at java.security.SecureClassLoader.defineClass(SecureClassLo
\end{sourcecode}
We recommend using the version of Java from Oracle. The following steps show how to download and install Java on Ubuntu (it is possible to revert back to Ubuntu's version afterwards).
\begin{enumerate}
\item Download the 32-bit or 64-bit Linux "compressed binary file" - it has a ".tar.gz" file extension.
\item Uncompress it
\begin{sourcecode}
tar -xvf jdk-8u71-linux-x64.tar.gz
\end{sourcecode}
(use the corresponding file name if you downloaded the 32-bit package). The JDK 8 package is extracted into the ./jdk1.8.0\_71 directory. N.B.: check this folder name carefully, since Oracle seems to change it with each update.
\item Move the JDK 8 directory to /usr/lib
\begin{sourcecode}
sudo mkdir -p /usr/lib/jvm
sudo mv ./jdk1.8.0\_71 /usr/lib/jvm/
\end{sourcecode}
\item Now run
\begin{sourcecode}
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0\_71/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.8.0\_71/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.8.0\_71/bin/javaws" 1
\end{sourcecode}
This will assign Oracle JDK a priority of 1, which means that installing other JDKs will replace it as the default. Be sure to use a higher priority if you want Oracle JDK to remain the default.
\item Correct the file ownership and the permissions of the executables:
\begin{sourcecode}
sudo chmod a+x /usr/bin/java
sudo chmod a+x /usr/bin/javac
sudo chmod a+x /usr/bin/javaws
sudo chown -R root:root /usr/lib/jvm/jdk1.8.0\_71
\end{sourcecode}
N.B.: the JDK has many more executables that you can install in the same way; java, javac and javaws are probably the most frequently required. The remaining executables can be found in the JDK's bin directory.
\item Run
\begin{sourcecode}
sudo update-alternatives --config java
\end{sourcecode}
You will see output similar to the one below - choose the number corresponding to jdk1.8.0\_71.
\begin{sourcecode}
There are 3 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071       auto mode
  1            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java   1071       manual mode
  2            /usr/lib/jvm/jdk1.8.0\_71/bin/java               1          manual mode
\end{sourcecode}
\end{enumerate}
{ "alphanum_fraction": 0.7411444142, "avg_line_length": 45.4958677686, "ext": "tex", "hexsha": "33c92a43e08ec8148769ab7e7973893602c5817f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b9c17e26baea943d0786b0203797fa1e0a26726b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "EPICScotland/Broadwick", "max_forks_repo_path": "doc/chap_installation.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "b9c17e26baea943d0786b0203797fa1e0a26726b", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:14:26.000Z", "max_issues_repo_issues_event_min_datetime": "2021-08-13T18:32:58.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "EPICScotland/Broadwick", "max_issues_repo_path": "doc/chap_installation.tex", "max_line_length": 487, "max_stars_count": 4, "max_stars_repo_head_hexsha": "b9c17e26baea943d0786b0203797fa1e0a26726b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "EPICScotland/Broadwick", "max_stars_repo_path": "doc/chap_installation.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-02T13:18:33.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-13T18:05:25.000Z", "num_tokens": 1413, "size": 5505 }
\chapter{Technical Report}

GitHub repository: \texttt{https://github.com/alexcosta97/web-apps-project}

For this project we used ASP.NET Core 2 and an SQLite3 database, which is accessed through the Entity Framework. We also opted for a code-first approach, as it was the most intuitive way to add user authentication.

For our user authentication system we used ASP.NET Core Identity. With this system we were able to create a unified user database and assign roles to the registered users. These roles were then used to restrict access to specific methods - for example, guests can only get the route index and details pages, while managers and admins can create, edit and delete routes. If guests try to access these pages they are redirected to the login page. Normal users receive an $Unauthorised$ error.

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{guest-unauth}
\caption{Guest accessing $/Routes/Edit/1$, redirected to login page}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{user-unauth}
\caption{User accessing $/Routes/Edit/1$ showing unauthorised error}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{manager-unauth}
\caption{Manager accessing $/Routes/Edit/1$ and getting the edit page displayed}
\end{figure}

\clearpage

Other functions of the site, such as favourites and addresses, are also restricted to those associated with the currently logged-in user.

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{addresses-admin}
\caption{User account with an address created by the user}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{addresses-manager}
\caption{Another user account with no addresses displayed}
\end{figure}

In order to allow the system administrator to manage users and roles in the application, we had to implement our own custom controllers and views. This gave us a more customized result and also reduced the amount of scaffolded code we had to review.

The rest of the views and controllers were scaffolded after we had built the models based on our initial class diagram. After scaffolding the controllers and views, we tested the components that were built to make sure that the relationships between the entities were defined as we intended and that the data we wanted to display was shown correctly.

For this project, we spent most of our time on the authorization processes, making sure that only the right users had access to the right data and functionality, and on customizing the views and controllers so that names, rather than IDs, were displayed to users when an element from a relationship had to be selected (for example, the staff member who will be the driver for a route).
{ "alphanum_fraction": 0.790368272, "avg_line_length": 42.1492537313, "ext": "tex", "hexsha": "5396eb48956dd8c0d55a7818a47760d6e34aa6a1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3a35b4252b94db8745db115866fc6ff959abbd12", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alexcosta97/web-apps-project", "max_forks_repo_path": "report/3technical.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3a35b4252b94db8745db115866fc6ff959abbd12", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alexcosta97/web-apps-project", "max_issues_repo_path": "report/3technical.tex", "max_line_length": 85, "max_stars_count": null, "max_stars_repo_head_hexsha": "3a35b4252b94db8745db115866fc6ff959abbd12", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alexcosta97/web-apps-project", "max_stars_repo_path": "report/3technical.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 637, "size": 2824 }
\chapter{Defining New Types \label{defining-new-types}} \sectionauthor{Michael Hudson}{[email protected]} \sectionauthor{Dave Kuhlman}{[email protected]} \sectionauthor{Jim Fulton}{[email protected]} As mentioned in the last chapter, Python allows the writer of an extension module to define new types that can be manipulated from Python code, much like strings and lists in core Python. This is not hard; the code for all extension types follows a pattern, but there are some details that you need to understand before you can get started. \begin{notice} The way new types are defined changed dramatically (and for the better) in Python 2.2. This document documents how to define new types for Python 2.2 and later. If you need to support older versions of Python, you will need to refer to \ulink{older versions of this documentation} {http://www.python.org/doc/versions/}. \end{notice} \section{The Basics \label{dnt-basics}} The Python runtime sees all Python objects as variables of type \ctype{PyObject*}. A \ctype{PyObject} is not a very magnificent object - it just contains the refcount and a pointer to the object's ``type object''. This is where the action is; the type object determines which (C) functions get called when, for instance, an attribute gets looked up on an object or it is multiplied by another object. These C functions are called ``type methods'' to distinguish them from things like \code{[].append} (which we call ``object methods''). So, if you want to define a new object type, you need to create a new type object. This sort of thing can only be explained by example, so here's a minimal, but complete, module that defines a new type: \verbatiminput{noddy.c} Now that's quite a bit to take in at once, but hopefully bits will seem familiar from the last chapter. The first bit that will be new is: \begin{verbatim} typedef struct { PyObject_HEAD } noddy_NoddyObject; \end{verbatim} This is what a Noddy object will contain---in this case, nothing more than every Python object contains, namely a refcount and a pointer to a type object. These are the fields the \code{PyObject_HEAD} macro brings in. The reason for the macro is to standardize the layout and to enable special debugging fields in debug builds. Note that there is no semicolon after the \code{PyObject_HEAD} macro; one is included in the macro definition. Be wary of adding one by accident; it's easy to do from habit, and your compiler might not complain, but someone else's probably will! (On Windows, MSVC is known to call this an error and refuse to compile the code.) For contrast, let's take a look at the corresponding definition for standard Python integers: \begin{verbatim} typedef struct { PyObject_HEAD long ob_ival; } PyIntObject; \end{verbatim} Moving on, we come to the crunch --- the type object. \begin{verbatim} static PyTypeObject noddy_NoddyType = { PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ "noddy.Noddy", /*tp_name*/ sizeof(noddy_NoddyObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ 0, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT, /*tp_flags*/ "Noddy objects", /* tp_doc */ }; \end{verbatim} Now if you go and look up the definition of \ctype{PyTypeObject} in \file{object.h} you'll see that it has many more fields that the definition above. 
The remaining fields will be filled with zeros by the C compiler, and it's common practice to not specify them explicitly unless you need them. This is so important that we're going to pick the top of it apart still further: \begin{verbatim} PyObject_HEAD_INIT(NULL) \end{verbatim} This line is a bit of a wart; what we'd like to write is: \begin{verbatim} PyObject_HEAD_INIT(&PyType_Type) \end{verbatim} as the type of a type object is ``type'', but this isn't strictly conforming C and some compilers complain. Fortunately, this member will be filled in for us by \cfunction{PyType_Ready()}. \begin{verbatim} 0, /* ob_size */ \end{verbatim} The \member{ob_size} field of the header is not used; its presence in the type structure is a historical artifact that is maintained for binary compatibility with extension modules compiled for older versions of Python. Always set this field to zero. \begin{verbatim} "noddy.Noddy", /* tp_name */ \end{verbatim} The name of our type. This will appear in the default textual representation of our objects and in some error messages, for example: \begin{verbatim} >>> "" + noddy.new_noddy() Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: cannot add type "noddy.Noddy" to string \end{verbatim} Note that the name is a dotted name that includes both the module name and the name of the type within the module. The module in this case is \module{noddy} and the type is \class{Noddy}, so we set the type name to \class{noddy.Noddy}. \begin{verbatim} sizeof(noddy_NoddyObject), /* tp_basicsize */ \end{verbatim} This is so that Python knows how much memory to allocate when you call \cfunction{PyObject_New()}. \note{If you want your type to be subclassable from Python, and your type has the same \member{tp_basicsize} as its base type, you may have problems with multiple inheritance. A Python subclass of your type will have to list your type first in its \member{__bases__}, or else it will not be able to call your type's \method{__new__} method without getting an error. You can avoid this problem by ensuring that your type has a larger value for \member{tp_basicsize} than its base type does. Most of the time, this will be true anyway, because either your base type will be \class{object}, or else you will be adding data members to your base type, and therefore increasing its size.} \begin{verbatim} 0, /* tp_itemsize */ \end{verbatim} This has to do with variable length objects like lists and strings. Ignore this for now. Skipping a number of type methods that we don't provide, we set the class flags to \constant{Py_TPFLAGS_DEFAULT}. \begin{verbatim} Py_TPFLAGS_DEFAULT, /*tp_flags*/ \end{verbatim} All types should include this constant in their flags. It enables all of the members defined by the current version of Python. We provide a doc string for the type in \member{tp_doc}. \begin{verbatim} "Noddy objects", /* tp_doc */ \end{verbatim} Now we get into the type methods, the things that make your objects different from the others. We aren't going to implement any of these in this version of the module. We'll expand this example later to have more interesting behavior. For now, all we want to be able to do is to create new \class{Noddy} objects. To enable object creation, we have to provide a \member{tp_new} implementation. In this case, we can just use the default implementation provided by the API function \cfunction{PyType_GenericNew()}. 
We'd like to just assign this to the \member{tp_new} slot, but we can't, for portability sake, On some platforms or compilers, we can't statically initialize a structure member with a function defined in another C module, so, instead, we'll assign the \member{tp_new} slot in the module initialization function just before calling \cfunction{PyType_Ready()}: \begin{verbatim} noddy_NoddyType.tp_new = PyType_GenericNew; if (PyType_Ready(&noddy_NoddyType) < 0) return; \end{verbatim} All the other type methods are \NULL, so we'll go over them later --- that's for a later section! Everything else in the file should be familiar, except for some code in \cfunction{initnoddy()}: \begin{verbatim} if (PyType_Ready(&noddy_NoddyType) < 0) return; \end{verbatim} This initializes the \class{Noddy} type, filing in a number of members, including \member{ob_type} that we initially set to \NULL. \begin{verbatim} PyModule_AddObject(m, "Noddy", (PyObject *)&noddy_NoddyType); \end{verbatim} This adds the type to the module dictionary. This allows us to create \class{Noddy} instances by calling the \class{Noddy} class: \begin{verbatim} >>> import noddy >>> mynoddy = noddy.Noddy() \end{verbatim} That's it! All that remains is to build it; put the above code in a file called \file{noddy.c} and \begin{verbatim} from distutils.core import setup, Extension setup(name="noddy", version="1.0", ext_modules=[Extension("noddy", ["noddy.c"])]) \end{verbatim} in a file called \file{setup.py}; then typing \begin{verbatim} $ python setup.py build \end{verbatim} %$ <-- bow to font-lock ;-( at a shell should produce a file \file{noddy.so} in a subdirectory; move to that directory and fire up Python --- you should be able to \code{import noddy} and play around with Noddy objects. That wasn't so hard, was it? Of course, the current Noddy type is pretty uninteresting. It has no data and doesn't do anything. It can't even be subclassed. \subsection{Adding data and methods to the Basic example} Let's expend the basic example to add some data and methods. Let's also make the type usable as a base class. We'll create a new module, \module{noddy2} that adds these capabilities: \verbatiminput{noddy2.c} This version of the module has a number of changes. We've added an extra include: \begin{verbatim} #include "structmember.h" \end{verbatim} This include provides declarations that we use to handle attributes, as described a bit later. The name of the \class{Noddy} object structure has been shortened to \class{Noddy}. The type object name has been shortened to \class{NoddyType}. The \class{Noddy} type now has three data attributes, \var{first}, \var{last}, and \var{number}. The \var{first} and \var{last} variables are Python strings containing first and last names. The \var{number} attribute is an integer. The object structure is updated accordingly: \begin{verbatim} typedef struct { PyObject_HEAD PyObject *first; PyObject *last; int number; } Noddy; \end{verbatim} Because we now have data to manage, we have to be more careful about object allocation and deallocation. At a minimum, we need a deallocation method: \begin{verbatim} static void Noddy_dealloc(Noddy* self) { Py_XDECREF(self->first); Py_XDECREF(self->last); self->ob_type->tp_free((PyObject*)self); } \end{verbatim} which is assigned to the \member{tp_dealloc} member: \begin{verbatim} (destructor)Noddy_dealloc, /*tp_dealloc*/ \end{verbatim} This method decrements the reference counts of the two Python attributes. 
We use \cfunction{Py_XDECREF()} here because the \member{first} and \member{last} members could be \NULL. It then calls the \member{tp_free} member of the object's type to free the object's memory. Note that the object's type might not be \class{NoddyType}, because the object may be an instance of a subclass. We want to make sure that the first and last names are initialized to empty strings, so we provide a new method: \begin{verbatim} static PyObject * Noddy_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { Noddy *self; self = (Noddy *)type->tp_alloc(type, 0); if (self != NULL) { self->first = PyString_FromString(""); if (self->first == NULL) { Py_DECREF(self); return NULL; } self->last = PyString_FromString(""); if (self->last == NULL) { Py_DECREF(self); return NULL; } self->number = 0; } return (PyObject *)self; } \end{verbatim} and install it in the \member{tp_new} member: \begin{verbatim} Noddy_new, /* tp_new */ \end{verbatim} The new member is responsible for creating (as opposed to initializing) objects of the type. It is exposed in Python as the \method{__new__()} method. See the paper titled ``Unifying types and classes in Python'' for a detailed discussion of the \method{__new__()} method. One reason to implement a new method is to assure the initial values of instance variables. In this case, we use the new method to make sure that the initial values of the members \member{first} and \member{last} are not \NULL. If we didn't care whether the initial values were \NULL, we could have used \cfunction{PyType_GenericNew()} as our new method, as we did before. \cfunction{PyType_GenericNew()} initializes all of the instance variable members to \NULL. The new method is a static method that is passed the type being instantiated and any arguments passed when the type was called, and that returns the new object created. New methods always accept positional and keyword arguments, but they often ignore the arguments, leaving the argument handling to initializer methods. Note that if the type supports subclassing, the type passed may not be the type being defined. The new method calls the tp_alloc slot to allocate memory. We don't fill the \member{tp_alloc} slot ourselves. Rather \cfunction{PyType_Ready()} fills it for us by inheriting it from our base class, which is \class{object} by default. Most types use the default allocation. \note{If you are creating a co-operative \member{tp_new} (one that calls a base type's \member{tp_new} or \method{__new__}), you must \emph{not} try to determine what method to call using method resolution order at runtime. Always statically determine what type you are going to call, and call its \member{tp_new} directly, or via \code{type->tp_base->tp_new}. If you do not do this, Python subclasses of your type that also inherit from other Python-defined classes may not work correctly. (Specifically, you may not be able to create instances of such subclasses without getting a \exception{TypeError}.)} We provide an initialization function: \begin{verbatim} static int Noddy_init(Noddy *self, PyObject *args, PyObject *kwds) { PyObject *first=NULL, *last=NULL, *tmp; static char *kwlist[] = {"first", "last", "number", NULL}; if (! 
PyArg_ParseTupleAndKeywords(args, kwds, "|OOi", kwlist, &first, &last, &self->number)) return -1; if (first) { tmp = self->first; Py_INCREF(first); self->first = first; Py_XDECREF(tmp); } if (last) { tmp = self->last; Py_INCREF(last); self->last = last; Py_XDECREF(tmp); } return 0; } \end{verbatim} by filling the \member{tp_init} slot. \begin{verbatim} (initproc)Noddy_init, /* tp_init */ \end{verbatim} The \member{tp_init} slot is exposed in Python as the \method{__init__()} method. It is used to initialize an object after it's created. Unlike the new method, we can't guarantee that the initializer is called. The initializer isn't called when unpickling objects and it can be overridden. Our initializer accepts arguments to provide initial values for our instance. Initializers always accept positional and keyword arguments. Initializers can be called multiple times. Anyone can call the \method{__init__()} method on our objects. For this reason, we have to be extra careful when assigning the new values. We might be tempted, for example to assign the \member{first} member like this: \begin{verbatim} if (first) { Py_XDECREF(self->first); Py_INCREF(first); self->first = first; } \end{verbatim} But this would be risky. Our type doesn't restrict the type of the \member{first} member, so it could be any kind of object. It could have a destructor that causes code to be executed that tries to access the \member{first} member. To be paranoid and protect ourselves against this possibility, we almost always reassign members before decrementing their reference counts. When don't we have to do this? \begin{itemize} \item when we absolutely know that the reference count is greater than 1 \item when we know that deallocation of the object\footnote{This is true when we know that the object is a basic type, like a string or a float.} will not cause any calls back into our type's code \item when decrementing a reference count in a \member{tp_dealloc} handler when garbage-collections is not supported\footnote{We relied on this in the \member{tp_dealloc} handler in this example, because our type doesn't support garbage collection. Even if a type supports garbage collection, there are calls that can be made to ``untrack'' the object from garbage collection, however, these calls are advanced and not covered here.} \item \end{itemize} We want to want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is to define member definitions: \begin{verbatim} static PyMemberDef Noddy_members[] = { {"first", T_OBJECT_EX, offsetof(Noddy, first), 0, "first name"}, {"last", T_OBJECT_EX, offsetof(Noddy, last), 0, "last name"}, {"number", T_INT, offsetof(Noddy, number), 0, "noddy number"}, {NULL} /* Sentinel */ }; \end{verbatim} and put the definitions in the \member{tp_members} slot: \begin{verbatim} Noddy_members, /* tp_members */ \end{verbatim} Each member definition has a member name, type, offset, access flags and documentation string. See the ``Generic Attribute Management'' section below for details. A disadvantage of this approach is that it doesn't provide a way to restrict the types of objects that can be assigned to the Python attributes. We expect the first and last names to be strings, but any Python objects can be assigned. Further, the attributes can be deleted, setting the C pointers to \NULL. Even though we can make sure the members are initialized to non-\NULL{} values, the members can be set to \NULL{} if the attributes are deleted. 
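If the goal were only to prevent Python code from rebinding or deleting these members, without enforcing that the values are strings, a lighter-weight option is to mark the member definitions read-only. The following sketch is not part of the \class{Noddy} example itself; it simply illustrates the \constant{READONLY} flag provided by \file{structmember.h}:

\begin{verbatim}
static PyMemberDef Noddy_members[] = {
    /* READONLY rejects assignment and deletion from Python code;
       the values can still be changed from C. */
    {"first", T_OBJECT_EX, offsetof(Noddy, first), READONLY,
     "first name"},
    {"last", T_OBJECT_EX, offsetof(Noddy, last), READONLY,
     "last name"},
    {"number", T_INT, offsetof(Noddy, number), 0,
     "noddy number"},
    {NULL}  /* Sentinel */
};
\end{verbatim}

This still places no restriction on the types of the values themselves, which is why the later section on finer-grained attribute control uses getter and setter functions instead.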
We define a single method, \method{name}, that outputs the objects name as the concatenation of the first and last names. \begin{verbatim} static PyObject * Noddy_name(Noddy* self) { static PyObject *format = NULL; PyObject *args, *result; if (format == NULL) { format = PyString_FromString("%s %s"); if (format == NULL) return NULL; } if (self->first == NULL) { PyErr_SetString(PyExc_AttributeError, "first"); return NULL; } if (self->last == NULL) { PyErr_SetString(PyExc_AttributeError, "last"); return NULL; } args = Py_BuildValue("OO", self->first, self->last); if (args == NULL) return NULL; result = PyString_Format(format, args); Py_DECREF(args); return result; } \end{verbatim} The method is implemented as a C function that takes a \class{Noddy} (or \class{Noddy} subclass) instance as the first argument. Methods always take an instance as the first argument. Methods often take positional and keyword arguments as well, but in this cased we don't take any and don't need to accept a positional argument tuple or keyword argument dictionary. This method is equivalent to the Python method: \begin{verbatim} def name(self): return "%s %s" % (self.first, self.last) \end{verbatim} Note that we have to check for the possibility that our \member{first} and \member{last} members are \NULL. This is because they can be deleted, in which case they are set to \NULL. It would be better to prevent deletion of these attributes and to restrict the attribute values to be strings. We'll see how to do that in the next section. Now that we've defined the method, we need to create an array of method definitions: \begin{verbatim} static PyMethodDef Noddy_methods[] = { {"name", (PyCFunction)Noddy_name, METH_NOARGS, "Return the name, combining the first and last name" }, {NULL} /* Sentinel */ }; \end{verbatim} and assign them to the \member{tp_methods} slot: \begin{verbatim} Noddy_methods, /* tp_methods */ \end{verbatim} Note that we used the \constant{METH_NOARGS} flag to indicate that the method is passed no arguments. Finally, we'll make our type usable as a base class. We've written our methods carefully so far so that they don't make any assumptions about the type of the object being created or used, so all we need to do is to add the \constant{Py_TPFLAGS_BASETYPE} to our class flag definition: \begin{verbatim} Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/ \end{verbatim} We rename \cfunction{initnoddy()} to \cfunction{initnoddy2()} and update the module name passed to \cfunction{Py_InitModule3()}. Finally, we update our \file{setup.py} file to build the new module: \begin{verbatim} from distutils.core import setup, Extension setup(name="noddy", version="1.0", ext_modules=[ Extension("noddy", ["noddy.c"]), Extension("noddy2", ["noddy2.c"]), ]) \end{verbatim} \subsection{Providing finer control over data attributes} In this section, we'll provide finer control over how the \member{first} and \member{last} attributes are set in the \class{Noddy} example. In the previous version of our module, the instance variables \member{first} and \member{last} could be set to non-string values or even deleted. We want to make sure that these attributes always contain strings. \verbatiminput{noddy3.c} To provide greater control, over the \member{first} and \member{last} attributes, we'll use custom getter and setter functions. 
Here are the functions for getting and setting the \member{first} attribute: \begin{verbatim} Noddy_getfirst(Noddy *self, void *closure) { Py_INCREF(self->first); return self->first; } static int Noddy_setfirst(Noddy *self, PyObject *value, void *closure) { if (value == NULL) { PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute"); return -1; } if (! PyString_Check(value)) { PyErr_SetString(PyExc_TypeError, "The first attribute value must be a string"); return -1; } Py_DECREF(self->first); Py_INCREF(value); self->first = value; return 0; } \end{verbatim} The getter function is passed a \class{Noddy} object and a ``closure'', which is void pointer. In this case, the closure is ignored. (The closure supports an advanced usage in which definition data is passed to the getter and setter. This could, for example, be used to allow a single set of getter and setter functions that decide the attribute to get or set based on data in the closure.) The setter function is passed the \class{Noddy} object, the new value, and the closure. The new value may be \NULL, in which case the attribute is being deleted. In our setter, we raise an error if the attribute is deleted or if the attribute value is not a string. We create an array of \ctype{PyGetSetDef} structures: \begin{verbatim} static PyGetSetDef Noddy_getseters[] = { {"first", (getter)Noddy_getfirst, (setter)Noddy_setfirst, "first name", NULL}, {"last", (getter)Noddy_getlast, (setter)Noddy_setlast, "last name", NULL}, {NULL} /* Sentinel */ }; \end{verbatim} and register it in the \member{tp_getset} slot: \begin{verbatim} Noddy_getseters, /* tp_getset */ \end{verbatim} to register out attribute getters and setters. The last item in a \ctype{PyGetSetDef} structure is the closure mentioned above. In this case, we aren't using the closure, so we just pass \NULL. We also remove the member definitions for these attributes: \begin{verbatim} static PyMemberDef Noddy_members[] = { {"number", T_INT, offsetof(Noddy, number), 0, "noddy number"}, {NULL} /* Sentinel */ }; \end{verbatim} We also need to update the \member{tp_init} handler to only allow strings\footnote{We now know that the first and last members are strings, so perhaps we could be less careful about decrementing their reference counts, however, we accept instances of string subclasses. Even though deallocating normal strings won't call back into our objects, we can't guarantee that deallocating an instance of a string subclass won't. call back into out objects.} to be passed: \begin{verbatim} static int Noddy_init(Noddy *self, PyObject *args, PyObject *kwds) { PyObject *first=NULL, *last=NULL, *tmp; static char *kwlist[] = {"first", "last", "number", NULL}; if (! PyArg_ParseTupleAndKeywords(args, kwds, "|SSi", kwlist, &first, &last, &self->number)) return -1; if (first) { tmp = self->first; Py_INCREF(first); self->first = first; Py_DECREF(tmp); } if (last) { tmp = self->last; Py_INCREF(last); self->last = last; Py_DECREF(tmp); } return 0; } \end{verbatim} With these changes, we can assure that the \member{first} and \member{last} members are never \NULL{} so we can remove checks for \NULL{} values in almost all cases. This means that most of the \cfunction{Py_XDECREF()} calls can be converted to \cfunction{Py_DECREF()} calls. The only place we can't change these calls is in the deallocator, where there is the possibility that the initialization of these members failed in the constructor. 
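The \cfunction{Noddy_getlast()} and \cfunction{Noddy_setlast()} functions named in the \ctype{PyGetSetDef} array are not repeated in the prose above; they appear in \file{noddy3.c} and simply mirror the first-name pair, roughly as follows:

\begin{verbatim}
static PyObject *
Noddy_getlast(Noddy *self, void *closure)
{
    Py_INCREF(self->last);
    return self->last;
}

static int
Noddy_setlast(Noddy *self, PyObject *value, void *closure)
{
    if (value == NULL) {
        PyErr_SetString(PyExc_TypeError, "Cannot delete the last attribute");
        return -1;
    }
    if (! PyString_Check(value)) {
        PyErr_SetString(PyExc_TypeError,
                        "The last attribute value must be a string");
        return -1;
    }
    Py_DECREF(self->last);
    Py_INCREF(value);
    self->last = value;
    return 0;
}
\end{verbatim}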
We also rename the module initialization function and module name in the initialization function, as we did before, and we add an extra definition to the \file{setup.py} file. \subsection{Supporting cyclic garbage collection} Python has a cyclic-garbage collector that can identify unneeded objects even when their reference counts are not zero. This can happen when objects are involved in cycles. For example, consider: \begin{verbatim} >>> l = [] >>> l.append(l) >>> del l \end{verbatim} In this example, we create a list that contains itself. When we delete it, it still has a reference from itself. Its reference count doesn't drop to zero. Fortunately, Python's cyclic-garbage collector will eventually figure out that the list is garbage and free it. In the second version of the \class{Noddy} example, we allowed any kind of object to be stored in the \member{first} or \member{last} attributes.\footnote{Even in the third version, we aren't guaranteed to avoid cycles. Instances of string subclasses are allowed and string subclasses could allow cycles even if normal strings don't.} This means that \class{Noddy} objects can participate in cycles: \begin{verbatim} >>> import noddy2 >>> n = noddy2.Noddy() >>> l = [n] >>> n.first = l \end{verbatim} This is pretty silly, but it gives us an excuse to add support for the cyclic-garbage collector to the \class{Noddy} example. To support cyclic garbage collection, types need to fill two slots and set a class flag that enables these slots: \verbatiminput{noddy4.c} The traversal method provides access to subobjects that could participate in cycles: \begin{verbatim} static int Noddy_traverse(Noddy *self, visitproc visit, void *arg) { int vret; if (self->first) { vret = visit(self->first, arg); if (vret != 0) return vret; } if (self->last) { vret = visit(self->last, arg); if (vret != 0) return vret; } return 0; } \end{verbatim} For each subobject that can participate in cycles, we need to call the \cfunction{visit()} function, which is passed to the traversal method. The \cfunction{visit()} function takes as arguments the subobject and the extra argument \var{arg} passed to the traversal method. It returns an integer value that must be returned if it is non-zero. Python 2.4 and higher provide a \cfunction{Py_VISIT()} macro that automates calling visit functions. With \cfunction{Py_VISIT()}, \cfunction{Noddy_traverse()} can be simplified: \begin{verbatim} static int Noddy_traverse(Noddy *self, visitproc visit, void *arg) { Py_VISIT(self->first); Py_VISIT(self->last); return 0; } \end{verbatim} \note{Note that the \member{tp_traverse} implementation must name its arguments exactly \var{visit} and \var{arg} in order to use \cfunction{Py_VISIT()}. This is to encourage uniformity across these boring implementations.} We also need to provide a method for clearing any subobjects that can participate in cycles. We implement the method and reimplement the deallocator to use it: \begin{verbatim} static int Noddy_clear(Noddy *self) { PyObject *tmp; tmp = self->first; self->first = NULL; Py_XDECREF(tmp); tmp = self->last; self->last = NULL; Py_XDECREF(tmp); return 0; } static void Noddy_dealloc(Noddy* self) { Noddy_clear(self); self->ob_type->tp_free((PyObject*)self); } \end{verbatim} Notice the use of a temporary variable in \cfunction{Noddy_clear()}. We use the temporary variable so that we can set each member to \NULL{} before decrementing its reference count. 
We do this because, as was discussed earlier, if the reference count drops to zero, we might cause code to run that calls back into the object. In addition, because we now support garbage collection, we also have to worry about code being run that triggers garbage collection. If garbage collection is run, our \member{tp_traverse} handler could get called. We can't take a chance of having \cfunction{Noddy_traverse()} called when a member's reference count has dropped to zero and its value hasn't been set to \NULL. Python 2.4 and higher provide a \cfunction{Py_CLEAR()} that automates the careful decrementing of reference counts. With \cfunction{Py_CLEAR()}, the \cfunction{Noddy_clear()} function can be simplified: \begin{verbatim} static int Noddy_clear(Noddy *self) { Py_CLEAR(self->first); Py_CLEAR(self->last); return 0; } \end{verbatim} Finally, we add the \constant{Py_TPFLAGS_HAVE_GC} flag to the class flags: \begin{verbatim} Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ \end{verbatim} That's pretty much it. If we had written custom \member{tp_alloc} or \member{tp_free} slots, we'd need to modify them for cyclic-garbage collection. Most extensions will use the versions automatically provided. \section{Type Methods \label{dnt-type-methods}} This section aims to give a quick fly-by on the various type methods you can implement and what they do. Here is the definition of \ctype{PyTypeObject}, with some fields only used in debug builds omitted: \verbatiminput{typestruct.h} Now that's a \emph{lot} of methods. Don't worry too much though - if you have a type you want to define, the chances are very good that you will only implement a handful of these. As you probably expect by now, we're going to go over this and give more information about the various handlers. We won't go in the order they are defined in the structure, because there is a lot of historical baggage that impacts the ordering of the fields; be sure your type initialization keeps the fields in the right order! It's often easiest to find an example that includes all the fields you need (even if they're initialized to \code{0}) and then change the values to suit your new type. \begin{verbatim} char *tp_name; /* For printing */ \end{verbatim} The name of the type - as mentioned in the last section, this will appear in various places, almost entirely for diagnostic purposes. Try to choose something that will be helpful in such a situation! \begin{verbatim} int tp_basicsize, tp_itemsize; /* For allocation */ \end{verbatim} These fields tell the runtime how much memory to allocate when new objects of this type are created. Python has some built-in support for variable length structures (think: strings, lists) which is where the \member{tp_itemsize} field comes in. This will be dealt with later. \begin{verbatim} char *tp_doc; \end{verbatim} Here you can put a string (or its address) that you want returned when the Python script references \code{obj.__doc__} to retrieve the doc string. Now we come to the basic type methods---the ones most extension types will implement. \subsection{Finalization and De-allocation} \index{object!deallocation} \index{deallocation, object} \index{object!finalization} \index{finalization, of objects} \begin{verbatim} destructor tp_dealloc; \end{verbatim} This function is called when the reference count of the instance of your type is reduced to zero and the Python interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, put it here. 
The object itself needs to be freed here as well. Here is an example of this function:

\begin{verbatim}
static void
newdatatype_dealloc(newdatatypeobject * obj)
{
    free(obj->obj_UnderlyingDatatypePtr);
    obj->ob_type->tp_free(obj);
}
\end{verbatim}

One important requirement of the deallocator function is that it leaves any pending exceptions alone. This is important since deallocators are frequently called as the interpreter unwinds the Python stack; when the stack is unwound due to an exception (rather than normal returns), nothing is done to protect the deallocators from seeing that an exception has already been set. Any actions which a deallocator performs which may cause additional Python code to be executed may detect that an exception has been set. This can lead to misleading errors from the interpreter. The proper way to protect against this is to save a pending exception before performing the unsafe action, and to restore it when done. This can be done using the \cfunction{PyErr_Fetch()}\ttindex{PyErr_Fetch()} and \cfunction{PyErr_Restore()}\ttindex{PyErr_Restore()} functions:

\begin{verbatim}
static void
my_dealloc(PyObject *obj)
{
    MyObject *self = (MyObject *) obj;
    PyObject *cbresult;

    if (self->my_callback != NULL) {
        PyObject *err_type, *err_value, *err_traceback;
        int have_error = PyErr_Occurred() ? 1 : 0;

        if (have_error)
            PyErr_Fetch(&err_type, &err_value, &err_traceback);

        cbresult = PyObject_CallObject(self->my_callback, NULL);
        if (cbresult == NULL)
            PyErr_WriteUnraisable(self->my_callback);
        else
            Py_DECREF(cbresult);

        if (have_error)
            PyErr_Restore(err_type, err_value, err_traceback);

        Py_DECREF(self->my_callback);
    }
    obj->ob_type->tp_free((PyObject*)self);
}
\end{verbatim}

\subsection{Object Presentation}

In Python, there are three ways to generate a textual representation of an object: the \function{repr()}\bifuncindex{repr} function (or equivalent back-tick syntax), the \function{str()}\bifuncindex{str} function, and the \keyword{print} statement. For most objects, the \keyword{print} statement is equivalent to the \function{str()} function, but it is possible to special-case printing to a \ctype{FILE*} if necessary; this should only be done if efficiency is identified as a problem and profiling suggests that creating a temporary string object to be written to a file is too expensive.

These handlers are all optional, and most types at most need to implement the \member{tp_str} and \member{tp_repr} handlers.

\begin{verbatim}
reprfunc tp_repr;
reprfunc tp_str;
printfunc tp_print;
\end{verbatim}

The \member{tp_repr} handler should return a string object containing a representation of the instance for which it is called. Here is a simple example:

\begin{verbatim}
static PyObject *
newdatatype_repr(newdatatypeobject * obj)
{
    return PyString_FromFormat("Repr-ified_newdatatype{{size:%d}}",
                               obj->obj_UnderlyingDatatypePtr->size);
}
\end{verbatim}

If no \member{tp_repr} handler is specified, the interpreter will supply a representation that uses the type's \member{tp_name} and a uniquely-identifying value for the object.

The \member{tp_str} handler is to \function{str()} what the \member{tp_repr} handler described above is to \function{repr()}; that is, it is called when Python code calls \function{str()} on an instance of your object. Its implementation is very similar to the \member{tp_repr} function, but the resulting string is intended for human consumption. If \member{tp_str} is not specified, the \member{tp_repr} handler is used instead.
Here is a simple example:

\begin{verbatim}
static PyObject *
newdatatype_str(newdatatypeobject * obj)
{
    return PyString_FromFormat("Stringified_newdatatype{{size:%d}}",
                               obj->obj_UnderlyingDatatypePtr->size);
}
\end{verbatim}

The print function will be called whenever Python needs to "print" an instance of the type. For example, if 'node' is an instance of type TreeNode, then the print function is called when Python code calls:

\begin{verbatim}
print node
\end{verbatim}

There is a flags argument and one flag, \constant{Py_PRINT_RAW}, and it suggests that you print without string quotes and possibly without interpreting escape sequences.

The print function receives a file object as an argument. You will likely want to write to that file object.

Here is a sample print function:

\begin{verbatim}
static int
newdatatype_print(newdatatypeobject *obj, FILE *fp, int flags)
{
    if (flags & Py_PRINT_RAW) {
        fprintf(fp, "<{newdatatype object--size: %d}>",
                obj->obj_UnderlyingDatatypePtr->size);
    }
    else {
        fprintf(fp, "\"<{newdatatype object--size: %d}>\"",
                obj->obj_UnderlyingDatatypePtr->size);
    }
    return 0;
}
\end{verbatim}

\subsection{Attribute Management}

For every object which can support attributes, the corresponding type must provide the functions that control how the attributes are resolved. There needs to be a function which can retrieve attributes (if any are defined), and another to set attributes (if setting attributes is allowed). Removing an attribute is a special case, for which the new value passed to the handler is \NULL.

Python supports two pairs of attribute handlers; a type that supports attributes only needs to implement the functions for one pair. The difference is that one pair takes the name of the attribute as a \ctype{char*}, while the other accepts a \ctype{PyObject*}. Each type can use whichever pair makes more sense for the implementation's convenience.

\begin{verbatim}
getattrfunc  tp_getattr;    /* char * version */
setattrfunc  tp_setattr;
/* ... */
getattrofunc tp_getattro;   /* PyObject * version */
setattrofunc tp_setattro;
\end{verbatim}

If accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic implementations which can be used to provide the \ctype{PyObject*} version of the attribute management functions. The actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there are many examples which have not been updated to use some of the new generic mechanism that is available.

\subsubsection{Generic Attribute Management}

\versionadded{2.2}

Most extension types only use \emph{simple} attributes. So, what makes the attributes simple? There are only a couple of conditions that must be met:

\begin{enumerate}
  \item The name of the attributes must be known when \cfunction{PyType_Ready()} is called.
  \item No special processing is needed to record that an attribute was looked up or set, nor do actions need to be taken based on the value.
\end{enumerate}

Note that this list does not place any restrictions on the values of the attributes, when the values are computed, or how relevant data is stored.

When \cfunction{PyType_Ready()} is called, it uses three tables referenced by the type object to create \emph{descriptors} which are placed in the dictionary of the type object. Each descriptor controls access to one attribute of the instance object.
Each of the tables is optional; if all three are \NULL, instances of the type will only have attributes that are inherited from their base type, and should leave the \member{tp_getattro} and \member{tp_setattro} fields \NULL{} as well, allowing the base type to handle attributes. The tables are declared as three fields of the type object: \begin{verbatim} struct PyMethodDef *tp_methods; struct PyMemberDef *tp_members; struct PyGetSetDef *tp_getset; \end{verbatim} If \member{tp_methods} is not \NULL, it must refer to an array of \ctype{PyMethodDef} structures. Each entry in the table is an instance of this structure: \begin{verbatim} typedef struct PyMethodDef { char *ml_name; /* method name */ PyCFunction ml_meth; /* implementation function */ int ml_flags; /* flags */ char *ml_doc; /* docstring */ } PyMethodDef; \end{verbatim} One entry should be defined for each method provided by the type; no entries are needed for methods inherited from a base type. One additional entry is needed at the end; it is a sentinel that marks the end of the array. The \member{ml_name} field of the sentinel must be \NULL. XXX Need to refer to some unified discussion of the structure fields, shared with the next section. The second table is used to define attributes which map directly to data stored in the instance. A variety of primitive C types are supported, and access may be read-only or read-write. The structures in the table are defined as: \begin{verbatim} typedef struct PyMemberDef { char *name; int type; int offset; int flags; char *doc; } PyMemberDef; \end{verbatim} For each entry in the table, a descriptor will be constructed and added to the type which will be able to extract a value from the instance structure. The \member{type} field should contain one of the type codes defined in the \file{structmember.h} header; the value will be used to determine how to convert Python values to and from C values. The \member{flags} field is used to store flags which control how the attribute can be accessed. XXX Need to move some of this to a shared section! The following flag constants are defined in \file{structmember.h}; they may be combined using bitwise-OR. \begin{tableii}{l|l}{constant}{Constant}{Meaning} \lineii{READONLY \ttindex{READONLY}} {Never writable.} \lineii{RO \ttindex{RO}} {Shorthand for \constant{READONLY}.} \lineii{READ_RESTRICTED \ttindex{READ_RESTRICTED}} {Not readable in restricted mode.} \lineii{WRITE_RESTRICTED \ttindex{WRITE_RESTRICTED}} {Not writable in restricted mode.} \lineii{RESTRICTED \ttindex{RESTRICTED}} {Not readable or writable in restricted mode.} \end{tableii} An interesting advantage of using the \member{tp_members} table to build descriptors that are used at runtime is that any attribute defined this way can have an associated doc string simply by providing the text in the table. An application can use the introspection API to retrieve the descriptor from the class object, and get the doc string using its \member{__doc__} attribute. As with the \member{tp_methods} table, a sentinel entry with a \member{name} value of \NULL{} is required. % XXX Descriptors need to be explained in more detail somewhere, but % not here. % % Descriptor objects have two handler functions which correspond to % the \member{tp_getattro} and \member{tp_setattro} handlers. The % \method{__get__()} handler is a function which is passed the % descriptor, instance, and type objects, and returns the value of the % attribute, or it returns \NULL{} and sets an exception. 
% The
% \method{__set__()} handler is passed the descriptor, instance, type,
% and new value;

\subsubsection{Type-specific Attribute Management}

For simplicity, only the \ctype{char*} version will be demonstrated here; the type of the name parameter is the only difference between the \ctype{char*} and \ctype{PyObject*} flavors of the interface. This example effectively does the same thing as the generic example above, but does not use the generic support added in Python 2.2. The value in showing this is two-fold: it demonstrates how basic attribute management can be done in a way that is portable to older versions of Python, and explains how the handler functions are called, so that if you do need to extend their functionality, you'll understand what needs to be done.

The \member{tp_getattr} handler is called when the object requires an attribute look-up. It is called in the same situations where the \method{__getattr__()} method of a class would be called.

A likely way to handle this is (1) to implement a set of functions (such as \cfunction{newdatatype_getSize()} and \cfunction{newdatatype_setSize()} in the example below), (2) provide a method table listing these functions, and (3) provide a getattr function that returns the result of a lookup in that table. The method table uses the same structure as the \member{tp_methods} field of the type object.

Here is an example:

\begin{verbatim}
static PyMethodDef newdatatype_methods[] = {
    {"getSize", (PyCFunction)newdatatype_getSize, METH_VARARGS,
     "Return the current size."},
    {"setSize", (PyCFunction)newdatatype_setSize, METH_VARARGS,
     "Set the size."},
    {NULL, NULL, 0, NULL}           /* sentinel */
};

static PyObject *
newdatatype_getattr(newdatatypeobject *obj, char *name)
{
    return Py_FindMethod(newdatatype_methods, (PyObject *)obj, name);
}
\end{verbatim}

The \member{tp_setattr} handler is called when the \method{__setattr__()} or \method{__delattr__()} method of a class instance would be called. When an attribute should be deleted, the third parameter will be \NULL. Here is an example that simply raises an exception; if this were really all you wanted, the \member{tp_setattr} handler should be set to \NULL.

\begin{verbatim}
static int
newdatatype_setattr(newdatatypeobject *obj, char *name, PyObject *v)
{
    (void)PyErr_Format(PyExc_RuntimeError, "Read-only attribute: %s", name);
    return -1;
}
\end{verbatim}

\subsection{Object Comparison}

\begin{verbatim}
cmpfunc tp_compare;
\end{verbatim}

The \member{tp_compare} handler is called when comparisons are needed and the object does not implement the specific rich comparison method which matches the requested comparison. (It is always used if defined and the \cfunction{PyObject_Compare()} or \cfunction{PyObject_Cmp()} functions are used, or if \function{cmp()} is used from Python.) It is analogous to the \method{__cmp__()} method. This function should return \code{-1} if \var{obj1} is less than \var{obj2}, \code{0} if they are equal, and \code{1} if \var{obj1} is greater than \var{obj2}. (It was previously allowed to return arbitrary negative or positive integers for less than and greater than, respectively; as of Python 2.2, this is no longer allowed. In the future, other return values may be assigned a different meaning.)

A \member{tp_compare} handler may raise an exception. In this case it should return a negative value. The caller has to test for the exception using \cfunction{PyErr_Occurred()}.
Here is a sample implementation: \begin{verbatim} static int newdatatype_compare(newdatatypeobject * obj1, newdatatypeobject * obj2) { long result; if (obj1->obj_UnderlyingDatatypePtr->size < obj2->obj_UnderlyingDatatypePtr->size) { result = -1; } else if (obj1->obj_UnderlyingDatatypePtr->size > obj2->obj_UnderlyingDatatypePtr->size) { result = 1; } else { result = 0; } return result; } \end{verbatim} \subsection{Abstract Protocol Support} Python supports a variety of \emph{abstract} `protocols;' the specific interfaces provided to use these interfaces are documented in the \citetitle[../api/api.html]{Python/C API Reference Manual} in the chapter ``\ulink{Abstract Objects Layer}{../api/abstract.html}.'' A number of these abstract interfaces were defined early in the development of the Python implementation. In particular, the number, mapping, and sequence protocols have been part of Python since the beginning. Other protocols have been added over time. For protocols which depend on several handler routines from the type implementation, the older protocols have been defined as optional blocks of handlers referenced by the type object. For newer protocols there are additional slots in the main type object, with a flag bit being set to indicate that the slots are present and should be checked by the interpreter. (The flag bit does not indicate that the slot values are non-\NULL. The flag may be set to indicate the presence of a slot, but a slot may still be unfilled.) \begin{verbatim} PyNumberMethods tp_as_number; PySequenceMethods tp_as_sequence; PyMappingMethods tp_as_mapping; \end{verbatim} If you wish your object to be able to act like a number, a sequence, or a mapping object, then you place the address of a structure that implements the C type \ctype{PyNumberMethods}, \ctype{PySequenceMethods}, or \ctype{PyMappingMethods}, respectively. It is up to you to fill in this structure with appropriate values. You can find examples of the use of each of these in the \file{Objects} directory of the Python source distribution. \begin{verbatim} hashfunc tp_hash; \end{verbatim} This function, if you choose to provide it, should return a hash number for an instance of your data type. Here is a moderately pointless example: \begin{verbatim} static long newdatatype_hash(newdatatypeobject *obj) { long result; result = obj->obj_UnderlyingDatatypePtr->size; result = result * 3; return result; } \end{verbatim} \begin{verbatim} ternaryfunc tp_call; \end{verbatim} This function is called when an instance of your data type is "called", for example, if \code{obj1} is an instance of your data type and the Python script contains \code{obj1('hello')}, the \member{tp_call} handler is invoked. This function takes three arguments: \begin{enumerate} \item \var{arg1} is the instance of the data type which is the subject of the call. If the call is \code{obj1('hello')}, then \var{arg1} is \code{obj1}. \item \var{arg2} is a tuple containing the arguments to the call. You can use \cfunction{PyArg_ParseTuple()} to extract the arguments. \item \var{arg3} is a dictionary of keyword arguments that were passed. If this is non-\NULL{} and you support keyword arguments, use \cfunction{PyArg_ParseTupleAndKeywords()} to extract the arguments. If you do not want to support keyword arguments and this is non-\NULL, raise a \exception{TypeError} with a message saying that keyword arguments are not supported. \end{enumerate} Here is a desultory example of the implementation of the call function. 
\begin{verbatim}
/* Implement the call function.
 *    obj is the instance receiving the call.
 *    args is a tuple containing the arguments to the call, in this
 *         case 3 strings.
 */
static PyObject *
newdatatype_call(newdatatypeobject *obj, PyObject *args, PyObject *other)
{
    PyObject *result;
    char *arg1;
    char *arg2;
    char *arg3;

    if (!PyArg_ParseTuple(args, "sss:call", &arg1, &arg2, &arg3)) {
        return NULL;
    }
    result = PyString_FromFormat(
        "Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\n",
        obj->obj_UnderlyingDatatypePtr->size, arg1, arg2, arg3);
    printf("%s", PyString_AS_STRING(result));
    return result;
}
\end{verbatim}

XXX some fields need to be added here...

\begin{verbatim}
/* Added in release 2.2 */
/* Iterators */
getiterfunc tp_iter;
iternextfunc tp_iternext;
\end{verbatim}

These functions provide support for the iterator protocol. Any object which wishes to support iteration over its contents (which may be generated during iteration) must implement the \code{tp_iter} handler. Objects which are returned by a \code{tp_iter} handler must implement both the \code{tp_iter} and \code{tp_iternext} handlers. Both handlers take exactly one parameter, the instance for which they are being called, and return a new reference. In the case of an error, they should set an exception and return \NULL.

For an object which represents an iterable collection, the \code{tp_iter} handler must return an iterator object. The iterator object is responsible for maintaining the state of the iteration. For collections which can support multiple iterators which do not interfere with each other (as lists and tuples do), a new iterator should be created and returned. Objects which can only be iterated over once (usually due to side effects of iteration) should implement this handler by returning a new reference to themselves, and should also implement the \code{tp_iternext} handler. File objects are an example of such an iterator.

Iterator objects should implement both handlers. The \code{tp_iter} handler should return a new reference to the iterator (this is the same as the \code{tp_iter} handler for objects which can only be iterated over destructively). The \code{tp_iternext} handler should return a new reference to the next object in the iteration if there is one. If the iteration has reached the end, it may return \NULL{} without setting an exception or it may set \exception{StopIteration}; avoiding the exception can yield slightly better performance. If an actual error occurs, it should set an exception and return \NULL.

\subsection{Weak Reference Support\label{weakref-support}}

One of the goals of Python's weak-reference implementation is to allow any type to participate in the weak reference mechanism without incurring the overhead on those objects which do not benefit by weak referencing (such as numbers).

For an object to be weakly referencable, the extension must include a \ctype{PyObject*} field in the instance structure for the use of the weak reference mechanism; it must be initialized to \NULL{} by the object's constructor. It must also set the \member{tp_weaklistoffset} field of the corresponding type object to the offset of the field.
For example, the instance type is defined with the following structure: \begin{verbatim} typedef struct { PyObject_HEAD PyClassObject *in_class; /* The class object */ PyObject *in_dict; /* A dictionary */ PyObject *in_weakreflist; /* List of weak references */ } PyInstanceObject; \end{verbatim} The statically-declared type object for instances is defined this way: \begin{verbatim} PyTypeObject PyInstance_Type = { PyObject_HEAD_INIT(&PyType_Type) 0, "module.instance", /* Lots of stuff omitted for brevity... */ Py_TPFLAGS_DEFAULT, /* tp_flags */ 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PyInstanceObject, in_weakreflist), /* tp_weaklistoffset */ }; \end{verbatim} The type constructor is responsible for initializing the weak reference list to \NULL: \begin{verbatim} static PyObject * instance_new() { /* Other initialization stuff omitted for brevity */ self->in_weakreflist = NULL; return (PyObject *) self; } \end{verbatim} The only further addition is that the destructor needs to call the weak reference manager to clear any weak references. This should be done before any other parts of the destruction have occurred, but is only required if the weak reference list is non-\NULL: \begin{verbatim} static void instance_dealloc(PyInstanceObject *inst) { /* Allocate temporaries if needed, but do not begin destruction just yet. */ if (inst->in_weakreflist != NULL) PyObject_ClearWeakRefs((PyObject *) inst); /* Proceed with object destruction normally. */ } \end{verbatim} \subsection{More Suggestions} Remember that you can omit most of these functions, in which case you provide \code{0} as a value. There are type definitions for each of the functions you must provide. They are in \file{object.h} in the Python include directory that comes with the source distribution of Python. In order to learn how to implement any specific method for your new data type, do the following: Download and unpack the Python source distribution. Go the \file{Objects} directory, then search the C source files for \code{tp_} plus the function you want (for example, \code{tp_print} or \code{tp_compare}). You will find examples of the function you want to implement. When you need to verify that an object is an instance of the type you are implementing, use the \cfunction{PyObject_TypeCheck} function. A sample of its use might be something like the following: \begin{verbatim} if (! PyObject_TypeCheck(some_object, &MyType)) { PyErr_SetString(PyExc_TypeError, "arg #1 not a mything"); return NULL; } \end{verbatim}
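To show how this check might fit into a larger function, here is a short, purely illustrative sketch (the function name \cfunction{mything_describe()} is invented; \ctype{MyType} and the error message are taken from the snippet above) of a module-level function that parses one argument, verifies its type, and then defers to the type's \member{tp_repr} handler:

\begin{verbatim}
static PyObject *
mything_describe(PyObject *self, PyObject *args)
{
    PyObject *arg;

    /* Parse a single positional argument as a generic object. */
    if (!PyArg_ParseTuple(args, "O:describe", &arg))
        return NULL;

    /* Reject anything that is not an instance of MyType (or a subtype). */
    if (!PyObject_TypeCheck(arg, &MyType)) {
        PyErr_SetString(PyExc_TypeError, "arg #1 not a mything");
        return NULL;
    }

    /* The argument is known to be a mything; return its repr(),
       which goes through the tp_repr handler discussed earlier. */
    return PyObject_Repr(arg);
}
\end{verbatim}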
{ "alphanum_fraction": 0.7281494458, "avg_line_length": 34.5020945542, "ext": "tex", "hexsha": "a485a151a0a676ce3887d2f0507621d7e9fbf187", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-02-15T02:43:25.000Z", "max_forks_repo_forks_event_min_datetime": "2021-02-15T02:43:25.000Z", "max_forks_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "jasonadu/Python-2.5", "max_forks_repo_path": "Doc/ext/newtypes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "jasonadu/Python-2.5", "max_issues_repo_path": "Doc/ext/newtypes.tex", "max_line_length": 79, "max_stars_count": 1, "max_stars_repo_head_hexsha": "93e24b88564de120b1296165b5c55975fdcb8a3c", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "jasonadu/Python-2.5", "max_stars_repo_path": "Doc/ext/newtypes.tex", "max_stars_repo_stars_event_max_datetime": "2018-08-21T09:19:46.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-21T09:19:46.000Z", "num_tokens": 14224, "size": 57653 }
\section{Street Address Listing} \begin{longtable}{ r c l } {\bf Street Name} & {\bf Street Number} & {\bf Last Name} \\ \midrule \endhead \input{street_data} \end{longtable}
{ "alphanum_fraction": 0.6931818182, "avg_line_length": 19.5555555556, "ext": "tex", "hexsha": "9044ef0bf137ff4c2b658d8e9b92ad8ef71f9d54", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e83f79ada466fba81d25e2bab4d7dd465994ace3", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "lueckenhoff/hoa-directory-document", "max_forks_repo_path": "street_dir.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e83f79ada466fba81d25e2bab4d7dd465994ace3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "lueckenhoff/hoa-directory-document", "max_issues_repo_path": "street_dir.tex", "max_line_length": 60, "max_stars_count": null, "max_stars_repo_head_hexsha": "e83f79ada466fba81d25e2bab4d7dd465994ace3", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "lueckenhoff/hoa-directory-document", "max_stars_repo_path": "street_dir.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 55, "size": 176 }
\documentclass{article} \begin{document} \section*{Quadratic equations} The quadratic equation \begin{equation} \label{quad} ax^2 + bx + c = 0, \end{equation} where \( a, b \) and \( c \) are constants and \( a \neq 0 \), has two solutions for the variable \( x \): \begin{equation} \label{root} x_{1,2} = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \end{equation} If the \emph{discriminant} \( \Delta \) with \[ \Delta = b^2 - 4ac \] is zero, then the equation (\ref{quad}) has a double solution: (\ref{root}) becomes \[ x = - \frac{b}{2a}. \] \end{document}
{ "alphanum_fraction": 0.6034188034, "avg_line_length": 24.375, "ext": "tex", "hexsha": "613c02e4af12c450d7154b52e47be1ea2ac70773", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2022-03-02T14:46:25.000Z", "max_forks_repo_forks_event_min_datetime": "2021-08-10T18:01:43.000Z", "max_forks_repo_head_hexsha": "26931e5c56bcfdf3d7a924ab6c2ae2f3f5c54332", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PacktPublishing/LaTeX-Beginner-s-Guide---Second-Edition", "max_forks_repo_path": "Chapter_09_-_Writing_Math_Formulas/01_equations.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "26931e5c56bcfdf3d7a924ab6c2ae2f3f5c54332", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PacktPublishing/LaTeX-Beginner-s-Guide---Second-Edition", "max_issues_repo_path": "Chapter_09_-_Writing_Math_Formulas/01_equations.tex", "max_line_length": 63, "max_stars_count": 22, "max_stars_repo_head_hexsha": "26931e5c56bcfdf3d7a924ab6c2ae2f3f5c54332", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PacktPublishing/LaTeX-Beginner-s-Guide---Second-Edition", "max_stars_repo_path": "Chapter_09_-_Writing_Math_Formulas/01_equations.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-05T10:00:53.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-31T19:46:55.000Z", "num_tokens": 212, "size": 585 }
While first and second moments give us some idea of the performance of our sampling algorithms, we ideally would like a fuller picture. In this section we compare the performance of algorithms using the total variation distance, Wasserstein distance and Kullback--Leibler divergence. Using these measures, we can compare the performance to theoretical upper bounds for \texttt{ULA}.

\subsection{Statistical Distances}

Let $\mathcal{B}(\R^d)$ denote the Borel $\sigma$-algebra on $\R^d$. Let $P$ and $Q$ be probability measures on the space $(\R^d, \mathcal{B}(\R^d))$. Then we define the total variation distance, Kullback--Leibler divergence and Wasserstein metric as follows:

\begin{defn}[Total Variation]
The total variation distance between two probability measures $P$ and $Q$ on $(\Omega, \mathcal{F})$ is defined as
$$ \norm{P - Q}_{TV} = \sup_{A \in \mathcal{F}} \abs{P(A) - Q(A)}. $$
\end{defn}

In other words, total variation measures the greatest possible difference between the probability of an event according to $P$ and $Q$.

\begin{prop}
If the set $\Omega$ is countable then this is equivalent to half the $L^1$ norm.
$$ \norm{P - Q}_{TV} = \frac{1}{2} \norm{P-Q}_1 = \frac{1}{2} \sum_{\omega \in \Omega} \abs{P(\omega) - Q(\omega)} $$
\end{prop}

\begin{proof}
Let $B = \{\omega: P(\omega) \geq Q(\omega)\}$ and let $A \in \mathcal{F}$ be any event. Then
$$ P(A) - Q(A) \leq P(A \cap B) - Q(A \cap B) \leq P(B) - Q(B). $$
The first inequality holds since $P(\omega)-Q(\omega) < 0$ for any $\omega \in A \cap B^c$, and so the difference in probability cannot be greater if these elements are excluded. For the second inequality, we observe that including further elements of $B$ cannot decrease the difference in probability. Similarly,
$$ Q(A) - P(A) \leq Q(B^c) - P(B^c) = P(B) - Q(B). $$
Thus, setting $A=B$, we have that $\abs{P(A)-Q(A)}$ is equal to the upper bound in the total variation distance. Hence,
$$ \norm{P-Q}_{TV} = \frac{1}{2} \abs{P(B)-Q(B)+Q(B^c)-P(B^c)} = \frac{1}{2} \sum_{\omega \in \Omega} \abs{P(\omega)-Q(\omega)} $$
\end{proof}

\begin{defn}[Kullback--Leibler Divergence]
Let $P$ and $Q$ be two probability measures on $(\Omega, \mathcal{F})$. If $P \ll Q$, the Kullback--Leibler divergence of $P$ with respect to $Q$ is defined as
$$ D_{\text{KL}}(P\,||\,Q) = \int_\Omega \od{P}{Q} \log \left( \od{P}{Q} \right) \dif Q. $$
\end{defn}

The Kullback--Leibler divergence from $Q$ to $P$ measures the information lost in using $Q$ to approximate $P$ \cite{anderson2004model} and is also known as the relative entropy. It is worth noting that, unlike the other two measures considered here, the Kullback--Leibler divergence is not a metric, and in particular is not symmetric.

Finally we consider the Wasserstein distance. If $P$ and $Q$ are probability measures on $(\Omega, \mathcal{F})$, we say that $\gamma$ is a transport plan between two probability measures $P$ and $Q$ if it is a probability measure on $(\Omega \times \Omega, \mathcal{F} \times \mathcal{F})$ such that for any set $A \in \mathcal{F}$, $\gamma(A \times \Omega)=P(A)$ and $\gamma(\Omega \times A) = Q(A)$. We denote the set of all such transport plans by $\Pi(P,Q)$.

In simple terms, the set of transport plans, $\Pi(P,Q)$, represents the possible ways of transporting mass distributed according to $P$ to a distribution according to $Q$, without creating or destroying mass in the process.
The `effort' to transport mass is then represented by a cost function $d:\Omega \times \Omega \to [0,\infty)$, so that $d(x,y)$ is the cost of moving unit mass from $x$ to $y$.

\begin{defn}[Wasserstein distance]
For two probability measures, $P$ and $Q$, the $p$-Wasserstein distance is given by
$$ W_p(P,Q) = \left( \inf_{\gamma \in \Pi(P,Q)} \int_{\Omega \times \Omega} d(x,y)^p d \gamma(x,y) \right)^{1/p}. $$
\end{defn}

The Wasserstein distance represents the amount of `effort' required to move mass distributed according to $P$ to $Q$. We restrict our attention to $L^1$-Wasserstein and $L^2$-Wasserstein distances, which is to say that we choose our cost function $d$ to be the Euclidean distance, and $p=1,2$.

One particular advantage of Wasserstein distance compared to total variation or Kullback--Leibler divergence is that bounds on Wasserstein distance can be used directly to bound the accuracy of the first and second moment approximations, and so for application to statistics there is some evidence to suggest it is the most appropriate of the three measures for our purposes \cite{dalalyan2019user}.

Due to the impracticality of computing higher-dimensional Wasserstein distances, we use a computationally more feasible variant, the Sliced Wasserstein distance. First proposed in \cite{rabin2011wasserstein} and further elaborated on, for example, in \cite{gswd}, the Sliced Wasserstein distance exploits the fact that the Wasserstein distance between 1-dimensional probability measures $P, Q$ can be computed with the explicit formula
$$ W_p^p(P,Q) = \int_0^1 \abs{F^{-1}(t)-G^{-1}(t)}^p \, dt, $$
where $F$ and $G$ are the CDFs of $P$ and $Q$ respectively \cite{ramdas2017wasserstein}.

\begin{defn}[Sliced Wasserstein distance]
For two probability measures, $P$ and $Q$, the $L^p$ Sliced Wasserstein distance is given by
$$ SW_p(P,Q) = \left(\int_{\mathbb S^{d-1} } W_p^p\left(\mathcal{RI}_P(\cdot, \theta), \mathcal{RI}_Q(\cdot, \theta) \right) d \theta \right)^{\frac 1 p} $$
\end{defn}

where $\mathbb S^{d-1}$ is the $(d-1)$-dimensional sphere and $\mathcal{RI}$ denotes the Inverse Radon transform. In the above references, it is also proved that $SW_p$ is indeed a metric. The main reason why we can use the Sliced Wasserstein distance as an approximation to the Wasserstein distance is that these two metrics are equivalent \cite{Santa}.

\subsubsection{Numerical Comparison}

In this section we present a numerical comparison of the algorithms considered, extending the results of \cite{Brosse18tULA} which only considered first and second moments. It should be noted that the results in the aforementioned paper are in dimension 100, while here our results are in dimension 2. This is due to the bin-filling problem when approximating the density function, which is explained in section \ref{sec:Imp}. We do not have evidence to suggest the performance of these algorithms would be similar in higher dimensions.

Due to lack of space in this report, the results here are mostly in total variation distance. The functionality for the experiments is available in our GitHub repository (see section \ref{sec:Imp}), and tests can easily be performed in Sliced 1-Wasserstein distance and Kullback--Leibler divergence.

\begin{note}
It is important to state here that the accuracy of our implementation of these measures has not been thoroughly tested. The number of bins chosen for the histogram heavily influences results, and our default choice of 100 bins for 2 dimensions is somewhat arbitrary. As such, the results presented here should be treated with some caution.
\end{note} The results for total variation in the case of the Gaussian (Figure \ref{fig:TVgauss}) support the claim of \cite{Brosse18tULA} that tamed algorithms have similar performance to untamed for small step size. As expected, they do perform slightly worse than the untamed equivalent, but this difference is relatively small. For the ill-conditioned case (Figure \ref{fig:TV_ICgauss}), the unadjusted versions of each algorithm diverge, but the tamed versions are stable. While the performance of all Langevin-based methods is poor, coordinate-wise taming, as predicted, improves performance, and we expect this effect to be more pronounced in higher dimensions. The double well presents a more interesting case. For small step size (Figure \ref{fig:TVdouble02}) there is not much difference between the algorithms, with \texttt{LM} performing the poorest of all the Langevin-based methods, for higher step sizes (Figures \ref{fig:TVdouble1}, \ref{fig:TVdouble2}) it outperforms all other algorithms, even those with Metropolis--Hastings adjustments. For step size 0.1 and higher, the matrix multiplication step in \texttt{HOLA} results in an overflow, while for step size 0.2, \texttt{ULA} diverges. It is only for step size 0.3 that \texttt{LM} finally diverges (Figure \ref{fig:TVdouble3}). The reasons for this behaviour is unclear, but these results seem to support the claims made in \cite{LM12}. To verify this, the experiment for the double well with step size 0.1 was repeated in Sliced 1-Wasserstein distance (Figure \ref{fig:SWdouble1}). The results broadly agree with those of the total variation, although Metropolised algorithms appear to be favoured by this metric, and tamed algorithms graded more harshly. The final three plots (Figures \ref{fig:TVginz01}-\ref{fig:TVrosen01}) are included mostly for completeness. It is interesting to note however, that on the Rosenbrock distribution (Figure \ref{fig:TVrosen001}) \texttt{LM} again outperforms \texttt{ULA} for step size 0.001. \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{Figures/TV_gaussian11_step0pt02.png} \caption{$10^5$ samples from standard Gaussian with step size 0.02} \label{fig:TVgauss} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{Figures/TV_ICgaussian10pt0001_step0pt02.png} \caption{$10^5$ samples from ill-conditioned Gaussian with step size 0.02} \label{fig:TV_ICgauss} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/TV_doublewell_step0pt02.png} \caption{$10^5$ samples from double well with step size 0.02} \label{fig:TVdouble02} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/TV_doublewell_step0pt1.png} \caption{$10^5$ samples from double well with step size 0.1} \label{fig:TVdouble1} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/SW_doublewell_0pt1.pdf} \caption{SW1 distance $10^5$ samples from double well with step size 0.1} \label{fig:SWdouble1} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/TV_doublewell_step0pt2.png} \caption{$10^5$ samples from double well with step size 0.2} \label{fig:TVdouble2} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/TV_doublewell_step0pt3.png} \caption{$10^5$ samples from double well with step size 0.3} \label{fig:TVdouble3} \end{figure} \begin{figure}[ht!] 
\centering \includegraphics[height=0.43\textheight]{WriteUp/TV_ginzburg_step0pt01.png} \caption{$10^5$ samples from 1D Ginzburg Landau with step size 0.01} \label{fig:TVginz01} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/TV_rosenbrock_step0pt001.png} \caption{$10^5$ samples from Rosenbrock with step size 0.001} \label{fig:TVrosen001} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.43\textheight]{WriteUp/TV_rosenbrock_step0pt01.png} \caption{$10^5$ samples from Rosenbrock with step size 0.01} \label{fig:TVrosen01} \end{figure} \newpage \subsection{Theoretical Non-asymptotic Error Bounds} While the asymptotic behaviour of \texttt{ULA} and \texttt{MALA} is well-understood \cite{RT96}, the results do not consider the effect of dimension on the complexity of the algorithms. For practical purposes, an understanding of the non-asymptotic behaviour of the algorithms would be useful and, in particular, theoretical results which could determine the step size and number of iterations required to guarantee an error of no more than some acceptable value $\epsilon$. Theoretical bounds of this nature were first provided in \cite{dalalyan2017theoretical}, which proved bounds on the total variation distance between the distribution of the $n^{\text{th}}$ iterate of the unadjusted Langevin Algorithm, under restrictive assumptions, which will be outlined shortly. In the case of a `warm start', where the distribution of the initial value is close to $\Pi$, it was shown that \texttt{ULA} had an upper bound of $\mathcal{O}(d/\epsilon)$ iterations to achieve precision level $\epsilon$. This result was improved by \cite{durmus2016high, durmus2017nonasymptotic} which extended the analysis to the Wasserstein distance and dispensed with the assumption of a warm start, showing that the upper bound on iterations could be reduced to $\mathcal O (d/\epsilon)$, provided the Hessian of the potential is Lipschitz continuous. Most recently, \cite{dalalyan2019user} has provided `user-friendly' bounds on the Wasserstein distance, further improving the constants in these bounds. It is important to note here that these results provide explicit constants for the non-asymptotic behaviour for \texttt{ULA}. This is in contrast to the theory for \texttt{MALA} \cite{bou2013nonasymptotic}, and the tamed \cite{Brosse18tULA} and higher order algorithms \cite{Sabanis18tHOLA}, for which only the existence of a constant is proven. For the Leimkuhler--Matthews method, no such guarantees are known. As such the non-asymptotic theory for the unadjusted algorithm is practically much more useful. \subsubsection{Non-asymptotic Bounds for \texttt{ULA}} We present without proof the results of \cite{durmus2017nonasymptotic, dalalyan2019user} which to our knowledge are the best known bounds in total variation and Wasserstein distance respectively. For both results we assume that the potential $U$ is continuously differentiable on $\R^d$, and there exist positive constants $m$ and $M$ such that $U$ is $m$-strongly convex and $M$-gradient Lipschitz, i.e. for all $x$, $y \in \R^d$, \begin{align} \label{m-convex} &\text{($m$-strongly convex) \ } U(x) - U(y) - \grad U(y)^\top (x - y) \geq \frac{m}{2}\norm{x-y}_2^2\\ &\text{($M$-gradient Lipschitz) \ } \norm{\grad U(x) - \grad U(y)}_2 \leq M \norm{x-y}_2. \label{M-GradLipschitz} \end{align} Denote the unique minimiser of $U$ by $y=\arg \min_{x \in \R^d} U(x)$. 
Let $\nu_N$ denote the distribution of the $N^{\text{th}}$ sample \begin{theorem}[Total Variation part (i)] Assume $h \in (0, 2/(m+M))$ and $U$ satisfies the assumptions (\ref{m-convex}, \ref{M-GradLipschitz}) above. Let $\kappa = \frac{2mM}{m+M}$ Then for any initial value $x_0 \in \R^d$ and $N \geq 1$, $$ \norm{\pi_h - \nu_N}_{TV} \leq \left\{ 4 \pi \kappa (1-(1-\kappa h)^{N/2} \right\}^{-1/2} (1-\kappa \gamma)^{N/2} \left\{ \norm{x_0 - y}_2 + (2 \kappa^{-1}d)^{1/2} \right\} $$ \end{theorem} \begin{theorem}[Total Variation part (ii)] Assume $h \in (0, 1/(m+M))$, $U$ satisfies the assumptions (\ref{m-convex}, \ref{M-GradLipschitz}) above, and further, assume that $U$ is three times continuously differentiable, and there exists $L$ such that for all $x,y \in \R^d$, $$ \norm{\grad^2 U(x) - \grad^2 U(y)} \leq L \norm{x-y}. $$ Then \begin{align*} \norm{\pi_h - \pi}_{TV} &\leq (4 \pi)^{-1/2} \left\{ h^2 E_1(h, d) + 2dh^2 E_2(h)/(\kappa m)\right\}^{1/2}\\ &+ (4 \pi)^{-1/2} \lceil \log(h^{-1}/\log(2) \rceil \left\{ h^2 E_1(h,d) + h^2 E_2(h)(2 \kappa^{-1}d + d/m) \right\}^{1/2}\\ &+ 2^{-3/2} M \left\{ 2d h^3 L^2/(3\kappa) + dh^2\right\}^{1/2} \end{align*} where $E_1(h,d)$ and $E_2(h)$ are defined as \begin{align*} E_1(h,d) &= 2 d \kappa^{-1} \left\{2L^2 + 4 \kappa^{-1} (dL^2/3 + h M^4 /4) + h^2 M^4/6 \right\}\\ E_2(h) &= L^4(4\kappa^{-1}/3 + h) \end{align*} \end{theorem} The triangle inequality gives our desired bound on $\norm{\pi - \nu_N}_{TV}$. We next present the `user-friendly' bound in Wasserstein distance. \begin{theorem}[Wasserstein distance] Assume $h \in (0, 2/M)$ and $U$ satisfies the assumptions (\ref{m-convex}, \ref{M-GradLipschitz}) above. \begin{itemize} \item If $h \leq \frac{2}{m+M}$, then $W_2(\nu_N, \pi) \leq (1-mh)^N W_2(\nu_0, \pi) + 1.65 \frac{M}{m}(hp)^{1/2}.$ \item If $h \geq \frac{2}{m+M}$, then $W_2(\nu_N, \pi) \leq (Mh-1)^N W_2(\nu_0, \pi) + \frac{1.65Mh}{2-Mh}(hp)^{1/2}.$ \end{itemize} \end{theorem} \begin{prop} If the initial value $X_0 = x_0$ is deterministic then, $$ W_2(\nu_0, \pi)^2 = \int_{\R^p} \norm{x_0-x}_2^2 \pi(dx) \leq \norm{x_0-y}_2^2 + \frac{p}{m}. $$ \end{prop} \begin{remark} If we choose $h$ and $N$ such that $$ h \leq \frac{2}{m+M}, \quad e^{-mhN}W_2(\nu_0, \pi) \leq \epsilon/2, \quad 1.65\frac{M}{m}(hp)^{1/2} \leq \epsilon/2 $$ then $W_2(\nu_N,\pi) \leq \epsilon$. Hence, for a deterministic initial value $X_0=x_0$, it is sufficient to choose $$ h \leq \frac{m^2 \epsilon^2}{11M^2p} \wedge \frac{2}{m+M}, \quad hN \geq \frac{1}{m} \log \left( \frac{2(\norm{x_0-y}_2^2+p/m)^{1/2}}{\epsilon} \right) $$ for a precision $\epsilon$ in $W_2(\nu_K, \pi)$. \end{remark} \begin{note} In practice, $\norm{x_0-y}_2$ may be difficult to calculate. An alternative bound can easily be derived from the strong convexity of $U$ and the fact that $y$ minimises $U$: \begin{align*} m W_2(\nu_0, \pi)^2 &\leq m \norm{x_0-y}_2^2 + p\\ &\leq 2(f(x_0) - f(y) - \nabla U(y)^{\top}(x_0-y) + p\\ &= 2(f(x_0)-f(y))+p. \end{align*} If $U$ is bounded below by a constant, say $U \geq 0$, this provides an easily computable upper bound on $W_2(\nu_0, \pi)$. \end{note} \subsubsection{Numerical Tests} Despite the `user-friendly' nature of the bounds in Wasserstein distance, it was not possible to verify them numerically, as computing the $W_2$ distance is computationally infeasible. This problem is explained in more detail in section \ref{sec:Imp}. We were, however, able to test the bounds in total variation. 
The only distribution we have implemented which satisfies the assumptions above was the Gaussian. For the following results, we test on a 1-dimensional $N(0,1)$ distribution. \begin{figure}[H] \centering \includegraphics[height=0.35\textheight]{WriteUp/DM_Pih.pdf} \caption{Bound in $\norm{\pi_h - \pi}_{TV}$ as function of $h$. This bound can be used to select a suitable step size to give error $\epsilon/2$} \label{fig:DM_Pih} \end{figure} \begin{figure}[H] \centering \includegraphics[height=0.35\textheight]{WriteUp/DM_EM_step0pt01.pdf} \caption{Bound in $\norm{\pi_h - \nu_N}_{TV}$ as function of $N$, for $h=0.01$. Having chosen a suitable $h$ according to the bound above, a suitable $N$ can be chosen to give error $\epsilon/2$} \label{fig:DM_Pih} \end{figure} \begin{figure}[H] \centering \includegraphics[height=0.35\textheight]{Figures/DM_plot_nth_sample.png} \caption{$\norm{\pi - \nu_N}_{TV}$ as function of $N$, plotted against theoretical bounds. The experiment was run with 600 chains, to produce a histogram with 80 bins. This number of chains is insufficient to calculate the total variation to a higher level of accuracy, which may explain the difference between our calculated result and the theoretical bound.} \label{fig:DMnth} \end{figure} \begin{figure}[H] \centering \includegraphics[height=0.35\textheight]{Figures/DM_plot_whole_chain.png} \caption{Total Variation distance between distribution taken by pooling all $N$ samples from multiple chains and true (standard normal) distribution as function of $N$, plotted against theoretical bounds. Although the error for the pooled distribution should be worse than that for the $N^{\text{th}}$ sample, a much higher accuracy is achieved. This supports our claim that the higher than expected error in the above plot is a result of too few chains being run.} \label{fig:DMwhole} \end{figure}
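To make the histogram-based approach used throughout this section concrete, the listing below sketches how such a total variation estimate could be computed in one dimension against a standard Gaussian target. The function name, the truncation of the support to $[-5,5]$ and the default of 100 bins are illustrative choices rather than an excerpt from the repository code, and, as noted above, the result is sensitive to the number of bins.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def tv_estimate(samples, bins=100, lo=-5.0, hi=5.0):
    # Histogram estimate of the total variation distance between a set of
    # 1D samples and the standard Gaussian: half the summed absolute
    # difference of the per-bin probability masses (mass outside [lo, hi]
    # is ignored).
    edges = np.linspace(lo, hi, bins + 1)
    counts, _ = np.histogram(samples, bins=edges)
    empirical = counts / counts.sum()     # empirical mass per bin
    target = np.diff(norm.cdf(edges))     # N(0,1) mass per bin
    return 0.5 * np.abs(empirical - target).sum()
\end{verbatim}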
{ "alphanum_fraction": 0.7247631659, "avg_line_length": 75.2260536398, "ext": "tex", "hexsha": "c71b44ef05422eb7dbf41abc7fba55cc68a668c2", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-01-19T17:44:19.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-19T17:44:19.000Z", "max_forks_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "swyoon/LangevinMC", "max_forks_repo_path": "WriteUp/BeyondMoments.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "swyoon/LangevinMC", "max_issues_repo_path": "WriteUp/BeyondMoments.tex", "max_line_length": 1065, "max_stars_count": 10, "max_stars_repo_head_hexsha": "ed36a17ce9b7d1e39097aeaf5b92f0fa286d5489", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Tom271/LangevinMC", "max_stars_repo_path": "WriteUp/BeyondMoments.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-04T13:35:13.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-07T12:51:19.000Z", "num_tokens": 6127, "size": 19634 }
% \iffalse meta-comment % % Copyright (C) 2020-2021 % The LaTeX Project and any individual authors listed elsewhere % in this file. % % This file is part of the LaTeX base system. % ------------------------------------------- % % It may be distributed and/or modified under the % conditions of the LaTeX Project Public License, either version 1.3c % of this license or (at your option) any later version. % The latest version of this license is in % http://www.latex-project.org/lppl.txt % and version 1.3c or later is part of all distributions of LaTeX % version 2008 or later. % % This file has the LPPL maintenance status "maintained". % % The list of all files belonging to the LaTeX base distribution is % given in the file `manifest.txt'. See also `legal.txt' for additional % information. % % The list of derived (unpacked) files belonging to the distribution % and covered by LPPL is defined by the unpacking scripts (with % extension .ins) which are part of the distribution. % % \fi % Filename: usrguide3.tex \documentclass{ltxguide} \usepackage[T1]{fontenc} % needed for \textbackslash in tt \title{New \LaTeX\ methods for authors (starting 2020)} \author{\copyright~Copyright 2020-2021, \LaTeX\ Project Team.\\ All rights reserved.} \date{2021-06-11} \NewDocumentCommand\cs{m}{\texttt{\textbackslash\detokenize{#1}}} \NewDocumentCommand\marg{m}{\arg{#1}} \NewDocumentCommand\meta{m}{\ensuremath{\langle}\textit{#1}\ensuremath{\rangle}} \NewDocumentCommand\pkg{m}{\textsf{#1}} \NewDocumentCommand\text{m}{\ifmmode\mbox{#1}\else#1\fi} % Fix a 'feature' \makeatletter \renewcommand \verbatim@font {\normalfont \ttfamily} \makeatother \begin{document} \maketitle \tableofcontents \section{Introduction} \LaTeXe{} was released in 1994 and added a number of then-new concepts to \LaTeX{}. These are described in \texttt{usrguide}, which has largely remained unchanged. Since then, the \LaTeX{} team have worked on a number of ideas, firstly a programming language for \LaTeX{} (\pkg{expl3}) and then a range of tools for document authors which build on that language. Here, we describe \emph{stable} and \emph{widely-usable} concepts that have resulted from that work. These `new' ideas have been transferred from development packages into the \LaTeXe{} kernel. As such, they are now available to \emph{all} \LaTeX{} users and have the \emph{same stability} as any other part of the kernel. The fact that `behind the scenes' they are built on \pkg{expl3} is useful for the development team, but is not directly important to users. \section{Creating document commands and environments} \subsection{Overview} Creating document commands and environments using the \LaTeX3 toolset is based around the idea that a common set of descriptions can be used to cover almost all argument types used in real documents. Thus parsing is reduced to a simple description of which arguments a command takes: this description provides the `glue' between the document syntax and the implementation of the command. First, we will describe the argument types, then move on to explain how these can be used to create both document commands and environments. Various more specialized features are then described, which allow an even richer application of a simple interface set up. The details here are intended to help users create document commands in general. More technical detail, suitable for \TeX{} programmers, is included in \texttt{interface3}. 
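As a brief, purely illustrative preview of the pattern described in this section (the command name and formatting are chosen only for illustration, and the argument types \texttt{O} and \texttt{m} are described in the next subsection), a document command taking one optional and one mandatory argument might be declared as
\begin{verbatim}
\NewDocumentCommand\keyterm{O{\textbf} m}
  {#1{#2}}
\end{verbatim}
after which |\keyterm{expansion}| prints its argument in bold, while |\keyterm[\textit]{expansion}| prints it in italics.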
\subsection{Describing argument types} In order to allow each argument to be defined independently, the parser does not simply need to know the number of arguments for a function, but also the nature of each one. This is done by constructing an \emph{argument specification}, which defines the number of arguments, the type of each argument and any additional information needed for the parser to read the user input and properly pass it through to internal functions. The basic form of the argument specifier is a list of letters, where each letter defines a type of argument. As will be described below, some of the types need additional information, such as default values. The argument types can be divided into two, those which define arguments that are mandatory (potentially raising an error if not found) and those which define optional arguments. The mandatory types \begin{itemize} \item[\texttt{m}] A standard mandatory argument, which can either be a single token alone or multiple tokens surrounded by curly braces |{}|. Regardless of the input, the argument will be passed to the internal code without the outer braces. This is the type specifier for a normal \TeX{} argument. \item[\texttt{r}] Given as \texttt{r}\meta{token1}\meta{token2}, this denotes a `required' delimited argument, where the delimiters are \meta{token1} and \meta{token2}. If the opening delimiter \meta{token1} is missing, the default marker |-NoValue-| will be inserted after a suitable error. \item[\texttt{R}] Given as \texttt{R}\meta{token1}\meta{token2}\marg{default}, this is a `required' delimited argument as for~\texttt{r}, but it has a user-definable recovery \meta{default} instead of |-NoValue-|. \item[\texttt{v}] Reads an argument `verbatim', between the following character and its next occurrence, in a way similar to the argument of the \LaTeXe{} command \cs{verb}. Thus a \texttt{v}-type argument is read between two identical characters, which cannot be any of |%|, |\|, |#|, |{|, |}| or \verb*| |. The verbatim argument can also be enclosed between braces, |{| and |}|. A command with a verbatim argument will produce an error when it appears within an argument of another function. \item[\texttt{b}] Only suitable in the argument specification of an environment, it denotes the body of the environment, between |\begin|\marg{environment} and |\end|\marg{environment}. See Section~\ref{sec:cmd:body} for details. \end{itemize} The types which define optional arguments are: \begin{itemize} \item[\texttt{o}] A standard \LaTeX{} optional argument, surrounded with square brackets, which will supply the special |-NoValue-| marker if not given (as described later). \item[\texttt{d}] Given as \texttt{d}\meta{token1}\meta{token2}, an optional argument which is delimited by \meta{token1} and \meta{token2}. As with \texttt{o}, if no value is given the special marker |-NoValue-| is returned. \item[\texttt{O}] Given as \texttt{O}\marg{default}, is like \texttt{o}, but returns \meta{default} if no value is given. \item[\texttt{D}] Given as \texttt{D}\meta{token1}\meta{token2}\marg{default}, it is as for \texttt{d}, but returns \meta{default} if no value is given. Internally, the \texttt{o}, \texttt{d} and \texttt{O} types are short-cuts to an appropriated-constructed \texttt{D} type argument. \item[\texttt{s}] An optional star, which will result in a value \cs{BooleanTrue} if a star is present and \cs{BooleanFalse} otherwise (as described later). 
\item[\texttt{t}] An optional \meta{token}, which will result in a value \cs{BooleanTrue} if \meta{token} is present and \cs{BooleanFalse} otherwise. Given as \texttt{t}\meta{token}. \item[\texttt{e}] Given as \texttt{e}\marg{tokens}, a set of optional \emph{embellishments}, each of which requires a \emph{value}. If an embellishment is not present, |-NoValue-| is returned. Each embellishment gives one argument, ordered as for the list of \meta{tokens} in the argument specification. All \meta{tokens} must be distinct. \item[\texttt{E}] As for \texttt{e} but returns one or more \meta{defaults} if values are not given: \texttt{E}\marg{tokens}\marg{defaults}. See Section~\ref{sec:cmd:embellishment} for more details. \end{itemize} \subsection{Modifying argument descriptions} In addition to the argument \emph{types} discussed above, the argument description also gives special meaning to three other characters. First, \texttt{+} is used to make an argument long (to accept paragraph tokens). In contrast to \cs{newcommand}, this applies on an argument-by-argument basis. So modifying the example to `|s o o +m O{default}|' means that the mandatory argument is now \cs{long}, whereas the optional arguments are not. Secondly, \texttt{!} is used to control whether spaces are allowed before optional arguments. There are some subtleties to this, as \TeX{} itself has some restrictions on where spaces can be `detected': more detail is given in Section~\ref{sec:cmd:opt-space}. Finally, the character \texttt{>} is used to declare so-called `argument processors', which can be used to modify the contents of an argument before it is passed to the macro definition. The use of argument processors is a somewhat advanced topic, (or at least a less commonly used feature) and is covered in Section~\ref{sec:cmd:processors}. \subsection{Creating document commands and environments} \begin{decl} |\NewDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \\ |\RenewDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \\ |\ProvideDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \\ |\DeclareDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \end{decl} This family of commands are used to create a \meta{cmd}. The argument specification for the function is given by \meta{arg spec}, and the command uses the \meta{code} with |#1|, |#2|, etc.\ replaced by the arguments found by the parser. An example: \begin{verbatim} \NewDocumentCommand\chapter{s o m} {% \IfBooleanTF{#1}% {\typesetstarchapter{#3}}% {\typesetnormalchapter{#2}{#3}}% } \end{verbatim} would be a way to define a \cs{chapter} command which would essentially behave like the current \LaTeXe{} command (except that it would accept an optional argument even when a \texttt{*} was parsed). The \cs{typesetnormalchapter} could test its first argument for being |-NoValue-| to see if an optional argument was present. (See Section~\ref{sec:cmd:special} for details of \cs{IfBooleanTF} and testing for |-NoValue-|.) The difference between the \cs{New...} \cs{Renew...}, \cs{Provide...} and \cs{Declare...} versions is the behavior if \meta{cmd} is already defined. \begin{itemize} \item \cs{NewDocumentCommand} will issue an error if \meta{cmd} has already been defined. \item \cs{RenewDocumentCommand} will issue an error if \meta{cmd} has not previously been defined. \item \cs{ProvideDocumentCommand} creates a new definition for \meta{function} only if one has not already been given. 
\item \cs{DeclareDocumentCommand} will always create the new definition, irrespective of any existing \meta{cmd} with the same name. This should be used sparingly.
\end{itemize}

\begin{decl}
  |\NewDocumentEnvironment| \arg{env} \arg{arg spec} \arg{beg-code} \arg{end-code} \\
  |\RenewDocumentEnvironment| \arg{env} \arg{arg spec} \arg{beg-code} \arg{end-code} \\
  |\ProvideDocumentEnvironment| \arg{env} \arg{arg spec} \arg{beg-code} \arg{end-code} \\
  |\DeclareDocumentEnvironment| \arg{env} \arg{arg spec} \arg{beg-code} \arg{end-code}
\end{decl}
These commands work in the same way as \cs{NewDocumentCommand}, etc.\@, but create environments (\cs{begin}\arg{env} \ldots{} \cs{end}\arg{env}). Both the \meta{beg-code} and \meta{end-code} may access the arguments as defined by \meta{arg spec}. The arguments will be given following \cs{begin}\arg{env}.

\subsection{Optional arguments}
\label{sec:cmd:opt}

In contrast to commands created using \LaTeXe{}'s \cs{newcommand}, optional arguments created using \cs{NewDocumentCommand} may safely be nested. Thus for example, following
\begin{verbatim}
\NewDocumentCommand\foo{om}{I grabbed `#1' and `#2'}
\NewDocumentCommand\baz{o}{#1-#1}
\end{verbatim}
using the command as
\begin{verbatim}
\foo[\baz[stuff]]{more stuff}
\end{verbatim}
will print
\begin{quote}
I grabbed `stuff-stuff' and `more stuff'
\end{quote}
This is particularly useful when placing a command with an optional argument \emph{inside} the optional argument of a second command.

When an optional argument is followed by a mandatory argument with the same delimiter, the parser issues a warning because the optional argument could not be omitted by the user, thus becoming in effect mandatory. This can apply to \texttt{o}, \texttt{d}, \texttt{O}, \texttt{D}, \texttt{s}, \texttt{t}, \texttt{e}, and \texttt{E} type arguments followed by \texttt{r} or \texttt{R}-type required arguments.

The default for \texttt{O}, \texttt{D} and \texttt{E} arguments can be the result of grabbing another argument. Thus for example
\begin{verbatim}
\NewDocumentCommand\foo{O{#2} m}
\end{verbatim}
would use the mandatory argument as the default for the leading optional one.

\subsection{Spacing and optional arguments}
\label{sec:cmd:opt-space}

\TeX{} will find the first argument after a function name irrespective of any intervening spaces. This is true for both mandatory and optional arguments. So |\foo[arg]| and \verb*|\foo [arg]| are equivalent. Spaces are also ignored when collecting arguments up to the last mandatory argument to be collected (as it must exist). So after
\begin{verbatim}
\NewDocumentCommand\foo{m o m}{ ... }
\end{verbatim}
the user input |\foo{arg1}[arg2]{arg3}| and \verb*|\foo{arg1} [arg2] {arg3}| will both be parsed in the same way.

The behavior of optional arguments \emph{after} any mandatory arguments is selectable. The standard settings will allow spaces here, and thus with
\begin{verbatim}
\NewDocumentCommand\foobar{m o}{ ... }
\end{verbatim}
both |\foobar{arg1}[arg2]| and \verb*|\foobar{arg1} [arg2]| will find an optional argument. This can be changed by giving the modifier |!| in the argument specification:
\begin{verbatim}
\NewDocumentCommand\foobar{m !o}{ ... }
\end{verbatim}
where \verb*|\foobar{arg1} [arg2]| will not find an optional argument. There is one subtlety here due to the difference in handling by \TeX{} of `control symbols', where the command name is made up of a single character, such as `\texttt{\textbackslash\textbackslash}'.
Spaces are not ignored by \TeX{} here, and thus it is possible to require an optional argument to directly follow such a command. The most common example is the use of \texttt{\textbackslash\textbackslash} in \pkg{amsmath} environments, which in the terms here would be defined as
\begin{verbatim}
\NewDocumentCommand\\{!s !o}{ ... }
\end{verbatim}

\subsection{`Embellishments'}
\label{sec:cmd:embellishment}

The \texttt{E}-type argument allows one default value per test token. This is achieved by giving a list of default values, one for each entry in the list of test tokens, for example:
\begin{verbatim}
E{^_}{{UP}{DOWN}}
\end{verbatim}
If the list of default values is \emph{shorter} than the list of test tokens, the special |-NoValue-| marker will be returned (as for the \texttt{e}-type argument). Thus for example
\begin{verbatim}
E{^_}{{UP}}
\end{verbatim}
has default \texttt{UP} for the |^| test character, but will return the |-NoValue-| marker as a default for |_|. This allows mixing of explicit defaults with testing for missing values.

\subsection{Testing special values}
\label{sec:cmd:special}

Optional arguments make use of dedicated variables to return information about the nature of the argument received.
\begin{decl}
  |\IfNoValueTF| \arg{arg} \arg{true code} \arg{false code} \\
  |\IfNoValueT| \arg{arg} \arg{true code} \\
  |\IfNoValueF| \arg{arg} \arg{false code}
\end{decl}
The \cs{IfNoValue(TF)} tests are used to check if \meta{argument} (|#1|, |#2|, \emph{etc.}) is the special |-NoValue-| marker. For example
\begin{verbatim}
\NewDocumentCommand\foo{o m}
  {%
    \IfNoValueTF {#1}%
      {\DoSomethingJustWithMandatoryArgument{#2}}%
      {\DoSomethingWithBothArguments{#1}{#2}}%
  }
\end{verbatim}
will use a different internal function depending on whether the optional argument is given. Note that three tests are available, depending on which outcome branches are required: \cs{IfNoValueTF}, \cs{IfNoValueT} and \cs{IfNoValueF}.

As the \cs{IfNoValue(TF)} tests are expandable, it is possible to test these values later, for example at the point of typesetting or in an expansion context.

It is important to note that |-NoValue-| is constructed such that it will \emph{not} match the simple text input |-NoValue-|, i.e.~that
\begin{verbatim}
\IfNoValueTF{-NoValue-}
\end{verbatim}
will be logically \texttt{false}.

When two optional arguments follow each other (a syntax we typically discourage), it can make sense to allow users of the command to specify only the second argument by providing an empty first argument. Rather than testing separately for emptiness and for |-NoValue-| it is then best to use the argument type~|O| with an empty default value, and simply test for emptiness using the \pkg{expl3} conditional \cs{tl_if_blank:nTF} or its \pkg{etoolbox} analogue \cs{ifblank}.
\begin{decl}
  |\IfValueTF| \arg{arg} \arg{true code} \arg{false code} \\
  |\IfValueT| \arg{arg} \arg{true code} \\
  |\IfValueF| \arg{arg} \arg{false code}
\end{decl}
The reverse form of the \cs{IfNoValue(TF)} tests is also available as \cs{IfValue(TF)}. The context will determine which logical form makes the most sense for a given code scenario.
\begin{decl}
  |\BooleanFalse| \\
  |\BooleanTrue|
\end{decl}
The \texttt{true} and \texttt{false} flags set when searching for an optional character (using \texttt{s} or \texttt{t\meta{char}}) have names which are accessible outside of code blocks.
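This makes it possible to pass the result of grabbing a star (or an optional \meta{token}) on to another command. A minimal sketch, using two made-up commands |\foocmd| and |\fooaux| purely for illustration:
\begin{verbatim}
\NewDocumentCommand\foocmd{s m}
  {\fooaux{#1}{#2}}
\NewDocumentCommand\fooaux{m m}
  {\IfBooleanTF{#1}{...}{...}}
\end{verbatim}
Here |#1| of |\fooaux| receives either \cs{BooleanTrue} or \cs{BooleanFalse}, which can then be tested with \cs{IfBooleanTF}, described next.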
\begin{decl}
  |\IfBooleanTF| \arg{arg} \arg{true code} \arg{false code} \\
  |\IfBooleanT| \arg{arg} \arg{true code} \\
  |\IfBooleanF| \arg{arg} \arg{false code}
\end{decl}
Used to test if \meta{argument} (|#1|, |#2|, \emph{etc.}) is \cs{BooleanTrue} or \cs{BooleanFalse}. For example
\begin{verbatim}
\NewDocumentCommand\foo{sm}
  {%
    \IfBooleanTF {#1}%
      {\DoSomethingWithStar{#2}}%
      {\DoSomethingWithoutStar{#2}}%
  }
\end{verbatim}
checks for a star as the first argument, then chooses the action to take based on this information.

\subsection{Argument processors}
\label{sec:cmd:processors}

Argument processors are applied to an argument \emph{after} it has been grabbed by the underlying system but before it is passed to \meta{code}. An argument processor can therefore be used to regularize input at an early stage, allowing the internal functions to be completely independent of input form. Processors are applied to user input and to default values for optional arguments, but \emph{not} to the special |-NoValue-| marker.

Each argument processor is specified by the syntax \texttt{>}\marg{processor} in the argument specification. Processors are applied from right to left, so that
\begin{verbatim}
>{\ProcessorB} >{\ProcessorA} m
\end{verbatim}
would apply \cs{ProcessorA} followed by \cs{ProcessorB} to the tokens grabbed by the \texttt{m} argument.
\begin{decl}
  |\SplitArgument| \arg{number} \arg{token(s)}
\end{decl}
This processor splits the given argument at each occurrence of the \meta{tokens} up to a maximum of \meta{number} tokens (thus dividing the input into $\text{\meta{number}} + 1$ parts). An error is given if too many \meta{tokens} are present in the input. The processed input is placed inside $\text{\meta{number}} + 1$ sets of braces for further use. If there are fewer than \meta{number} occurrences of the \meta{tokens} in the argument, then |-NoValue-| markers are added at the end of the processed argument.
\begin{verbatim}
\NewDocumentCommand \foo {>{\SplitArgument{2}{;}} m}
  {\InternalFunctionOfThreeArguments#1}
\end{verbatim}
If only a single character \meta{token} is used for the split, any category code $13$ (active) character matching the \meta{token} will be replaced before the split takes place. Spaces are trimmed at each end of each item parsed.

The \texttt{E} argument type is somewhat special, because with a single \texttt{E} in the command declaration you may end up with several arguments in a command (one formal argument per embellishment token). Therefore, when an argument processor is applied to an \texttt{E}-type argument, all the arguments pass through that processor before being fed to the \meta{code}. For example, this command
\begin{verbatim}
\NewDocumentCommand \foo { >{\TrimSpaces} e{_^} }
  { [#1](#2) }
\end{verbatim}
applies \cs{TrimSpaces} to both arguments.
\begin{decl}
  |\SplitList| \arg{token(s)}
\end{decl}
This processor splits the given argument at each occurrence of the \meta{token(s)}; here the number of items is not fixed. Each item is then wrapped in braces within |#1|. The result is that the processed argument can be further processed using a mapping function (see below).
\begin{verbatim}
\NewDocumentCommand \foo {>{\SplitList{;}} m}
  {\MappingFunction#1}
\end{verbatim}
If only a single character \meta{token} is used for the split, any category code $13$ (active) character matching the \meta{token} will be replaced before the split takes place. Spaces are trimmed at each end of each item parsed.
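To illustrate what the processed argument looks like (using the example command just defined, which is hypothetical and serves only as an illustration): an input such as |\foo{a; b; c}| would make |#1| consist of the three brace groups |{a}{b}{c}|, which can then be handed to a mapping function such as \cs{ProcessList}, described next.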
\begin{decl}
  |\ProcessList| \arg{list} \arg{function}
\end{decl}
To support \cs{SplitList}, the function \cs{ProcessList} is available to apply a \meta{function} to every entry in a \meta{list}. The \meta{function} should absorb one argument: the list entry. For example
\begin{verbatim}
\NewDocumentCommand \foo {>{\SplitList{;}} m}
  {\ProcessList{#1}{\SomeDocumentCommand}}
\end{verbatim}
\begin{decl}
  |\ReverseBoolean|
\end{decl}
This processor reverses the logic of \cs{BooleanTrue} and \cs{BooleanFalse}, so that the example from earlier would become
\begin{verbatim}
\NewDocumentCommand\foo{>{\ReverseBoolean} s m}
  {%
    \IfBooleanTF#1%
      {\DoSomethingWithoutStar{#2}}%
      {\DoSomethingWithStar{#2}}%
  }
\end{verbatim}
\begin{decl}
  |\TrimSpaces|
\end{decl}
Removes any leading and trailing spaces (tokens with character code~$32$ and category code~$10$) from the ends of the argument. Thus for example declaring a function
\begin{verbatim}
\NewDocumentCommand\foo {>{\TrimSpaces} m}
  {\showtokens{#1}}
\end{verbatim}
and using it in a document as
\begin{flushleft}
\verb= =\verb*=\foo{ hello world }=
\end{flushleft}
will show `\verb*=hello world=' at the terminal, with the space at each end removed. \cs{TrimSpaces} will remove multiple spaces from the ends of the input in cases where these have been included such that the standard \TeX{} conversion of multiple spaces to a single space does not apply.

\subsection{Body of an environment}
\label{sec:cmd:body}

While environments |\begin|\marg{environment}\ \dots{}\,|\end|\marg{environment} are typically used in cases where the code implementing the \meta{environment} does not need to access the contents of the environment (its `body'), it is sometimes useful to have the body as a standard argument. This is achieved by ending the argument specification with~\texttt{b}, which is a dedicated argument type for this situation. For instance
\begin{verbatim}
\NewDocumentEnvironment{twice} {O{\ttfamily} +b}
  {#2#1#2} {}
\end{verbatim}
\begin{verbatim}
\begin{twice}[\itshape]
Hello world!
\end{twice}
\end{verbatim}
typesets `Hello world!{\itshape Hello world!}'. The prefix |+| is used to allow multiple paragraphs in the environment's body. Argument processors can also be applied to \texttt{b}~arguments.

By default, spaces are trimmed at both ends of the body: in the example there would otherwise be spaces coming from the ends of the lines after |[\itshape]| and |world!|. Putting the prefix |!| before \texttt{b} suppresses space-trimming.

When \texttt{b} is used in the argument specification, the last argument of the environment declaration (e.g., \cs{NewDocumentEnvironment}), which consists of an \meta{end code} to insert at |\end|\marg{environment}, is redundant since one can simply put that code at the end of the \meta{start code}. Nevertheless this (empty) \meta{end code} must be provided. Environments that use this feature can be nested.

\subsection{Fully-expandable document commands}

Document commands created using \cs{NewDocumentCommand}, etc.\@, are normally created so that they do not expand unexpectedly. This is done using engine features, so it is more powerful than \LaTeXe{}'s \cs{protect} mechanism. There are \emph{very rare} occasions when it may be useful to create functions using an expansion-only grabber. This imposes a number of restrictions on the nature of the arguments accepted by a function and on the code it implements. This facility should only be used when \emph{absolutely necessary}.
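As a rough illustration of the kind of use case meant here (the command |\doubled| is made up purely for this sketch and would be declared with the commands described next), a fully-expandable command can be used in places where everything is expanded, for example inside \cs{edef}:
\begin{verbatim}
\NewExpandableDocumentCommand\doubled{m}{#1#1}
\edef\result{\doubled{abc}}  % \result now holds `abcabc'
\end{verbatim}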
\begin{decl}
  |\NewExpandableDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \\
  |\RenewExpandableDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \\
  |\ProvideExpandableDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code} \\
  |\DeclareExpandableDocumentCommand| \arg{cmd} \arg{arg spec} \arg{code}
\end{decl}
This family of commands is used to create a document-level \meta{cmd}, which will grab its arguments in a fully-expandable manner. The argument specification for the function is given by \meta{arg spec}, and the function will execute \meta{code}. In general, \meta{code} will also be fully expandable, although it is possible that this will not be the case (for example, a function for use in a table might expand so that \cs{omit} is the first non-expandable non-space token).

Parsing arguments by pure expansion imposes a number of restrictions on both the type of arguments that can be read and the error checking available:
\begin{itemize}
\item The last argument (if any are present) must be one of the mandatory types \texttt{m}, \texttt{r} or \texttt{R}.
\item The `verbatim' argument type \texttt{v} is not available.
\item Argument processors (using \texttt{>}) are not available.
\item It is not possible to differentiate between, for example, |\foo[| and |\foo{[}|: in both cases the \texttt{[} will be interpreted as the start of an optional argument. As a result, checking for optional arguments is less robust than in the standard version.
\end{itemize}

\subsection{Details about argument delimiters}

In normal (non-expandable) commands, the delimited types look for the initial delimiter by peeking ahead (using \pkg{expl3}'s |\peek_...| functions) for the delimiter token. The token has to have the same meaning and `shape' as the token defined as the delimiter. There are three possible cases of delimiters: character tokens, control sequence tokens, and active character tokens. For all practical purposes of this description, active character tokens will behave exactly like control sequence tokens.

\subsubsection{Character tokens}

A character token is characterized by its character code, and its meaning is the category code~(|\catcode|). When a command is defined, the meaning of the character token is fixed into the definition of the command and cannot change. A command will correctly see an argument delimiter if the open delimiter has the same character and category codes as at the time of the definition. For example in:
\begin{verbatim}
\NewDocumentCommand { \foobar } { D<>{default} } {(#1)}
\end{verbatim}
\begin{verbatim}
\foobar <hello> \par
\char_set_catcode_letter:N <
\foobar <hello>
\end{verbatim}
the output would be:
\begin{verbatim}
(hello)
(default)<hello>
\end{verbatim}
As the open delimiter |<| changed in meaning between the two calls to |\foobar|, the second one doesn't see the |<| as a valid delimiter. Commands assume that if a valid open-delimiter was found, a matching close-delimiter will also be there. If it is not (either by being omitted or by changing in meaning), a low-level \TeX{} error is raised and the command call is aborted.

\subsubsection{Control sequence tokens}

A control sequence (or control character) token is characterized by its name, and its meaning is its definition. A token cannot have two different meanings at the same time. When a control sequence is defined as a delimiter in a command, it will be detected as a delimiter whenever the control sequence name is found in the document, regardless of its current definition.
For example in: \begin{verbatim} \cs_set:Npn \x { abc } \NewDocumentCommand { \foobar } { D\x\y{default} } {(#1)} \foobar \x hello\y \par \cs_set:Npn \x { def } \foobar \x hello\y \end{verbatim} the output would be: \begin{verbatim} (hello) (hello) \end{verbatim} with both calls to the command seeing the delimiter |\x|. \subsection{Creating new argument processors} \begin{decl} |\ProcessedArgument| \end{decl} Argument processors allow manipulation of a grabbed argument before it is passed to the underlying code. New processor implementations may be created as functions which take one trailing argument, and which leave their result in the \cs{ProcessedArgument} variable. For example, \cs{ReverseBoolean} is defined as \begin{verbatim} \ExplSyntaxOn \cs_new_protected:Npn \ReverseBoolean #1 { \bool_if:NTF #1 { \tl_set:Nn \ProcessedArgument { \c_false_bool } } { \tl_set:Nn \ProcessedArgument { \c_true_bool } } } \ExplSyntaxOff \end{verbatim} [As an aside: the code is written in \pkg{expl3}, so we don't have to worry about spaces creeping into the definition.] \subsection{Access to the argument specification} The argument specifications for document commands and environments are available for examination and use. \begin{decl} |\GetDocumentCommandArgSpec| \arg{function} \\ |\GetDocumentEnvironmentArgSpec| \arg{environment} \end{decl} These functions transfer the current argument specification for the requested \meta{function} or \meta{environment} into the token list variable \cs{ArgumentSpecification}. If the \meta{function} or \meta{environment} has no known argument specification then an error is issued. The assignment to \cs{ArgumentSpecification} is local to the current \TeX{} group. \begin{decl} |\ShowDocumentCommandArgSpec| \arg{function} \\ |\ShowDocumentEnvironmentArgSpec| \arg{environment} \end{decl} These functions show the current argument specification for the requested \meta{function} or \meta{environment} at the terminal. If the \meta{function} or \meta{environment} has no known argument specification then an error is issued. \end{document}
{ "alphanum_fraction": 0.7467572981, "avg_line_length": 42.6512605042, "ext": "tex", "hexsha": "82c1c0686304a8352146aa91aedef0e2805ce353", "lang": "TeX", "max_forks_count": 235, "max_forks_repo_forks_event_max_datetime": "2022-03-30T06:23:57.000Z", "max_forks_repo_forks_event_min_datetime": "2017-11-16T22:02:49.000Z", "max_forks_repo_head_hexsha": "203dc2c2465e5c54f00675fd879ebf7b4117197e", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "dr-scsi/latex2e", "max_forks_repo_path": "base/doc/usrguide3.tex", "max_issues_count": 698, "max_issues_repo_head_hexsha": "203dc2c2465e5c54f00675fd879ebf7b4117197e", "max_issues_repo_issues_event_max_datetime": "2022-03-19T01:55:27.000Z", "max_issues_repo_issues_event_min_datetime": "2017-11-11T09:06:31.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "dr-scsi/latex2e", "max_issues_repo_path": "base/doc/usrguide3.tex", "max_line_length": 190, "max_stars_count": 1265, "max_stars_repo_head_hexsha": "203dc2c2465e5c54f00675fd879ebf7b4117197e", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "dr-scsi/latex2e", "max_stars_repo_path": "base/doc/usrguide3.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T08:20:59.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-02T13:11:48.000Z", "num_tokens": 8169, "size": 30453 }
\chapter{Conclusion and Future Work}\label{sec:conclusion_and_future_work}
To conclude this work, we summarize the presented models, algorithms, implementations, studies and the main findings.
Moreover, we give an outlook on future work and additional research questions that can be approached building on our work.

\section{Summary of this Work}
The overarching goal of this work was to enable simulations of the neuromuscular system using detailed, biophysical multi-scale models with high resolutions.
The simulations should compute numerically accurate results, run efficiently on various hardware and allow parallel scaling to large problem sizes, which can then be solved on supercomputers.
As a result, this work established a computational framework for multi-scale modeling of skeletal muscles, their neural activation, muscle contraction and the generation of EMG signals on the skin surface.
Our approach combined existing models for various parts of the neuromuscular system into a comprehensive multi-scale model framework.
Scalability and parallel efficiency of our software were ensured by efficient algorithms, suitable parallelized numerical schemes and our accompanying performance analyses.
% ieser Arbeit war ... biophysikalisch, hoch aufgelöst, schnell, skalierbar, genau ...

We described the following topics in this work:
After the introduction in \cref{chap:introduction}, we compared two modeling approaches to describe the movement of the upper arm in \cref{chap:comparative_study}.
Based on data from experimental trials that we conducted during a graduate school workshop, we developed a first, data-driven model using Gaussian process regression and a second model based on a biophysical simulation with two muscle models.
The parameters for the biophysical simulation were fitted to experimental training data using numerical optimization.
The comparison of the two approaches revealed a slightly better fit for the biophysical simulation model.
This approach had the additional benefit of giving biophysical insights into the functioning of the system and providing estimates for subject-specific muscle parameters.
While this study used Hill-type muscle models, which describe muscle forces on a 1D line of action, we considered more accurate multi-scale models in the remainder of this work.

\Cref{sec:generation_of_meshes_for_multiscale} dealt with the generation of structured 3D meshes and embedded 1D meshes for muscle fibers.
The approach of only using structured meshes, which allowed for a simple domain decomposition, proved to be beneficial for the parallel performance of our simulations.
We described a workflow for obtaining these meshes from biomedical imaging data.
We developed a serial algorithm and a parallel algorithm to construct the required meshes and to ensure a good mesh quality, even for meshes with high resolutions.
The algorithms were based on our novel approach of using harmonic maps to transform reference meshes to cross-sectional slices of the muscle mesh.

In \cref{sec:muscle_fibers_and_motor_units}, we described ways to associate muscle fibers with motor units (MUs) in a physiological manner.
We developed efficient algorithms for this task under different premises and employed them to associate up to \num{270000} muscle fibers with \num{100} MUs for subsequent use in our simulations.

In \cref{chap:models_and_discretization}, we first described all equations of the state-of-the-art models that we used, and how they can be combined into a multi-scale description.
Then, we described their discretization using the finite element method for the spatial derivative terms and various timestepping and operator splitting schemes for the temporal derivatives.
One original contribution is the derivation of the finite element formulation for the multidomain equation.
Further, we gave a detailed description of the nonlinear solid mechanics discretization, which we used in our implementation.

Next, we presented details on our simulation software OpenDiHu, which we used to solve various combinations of the described multi-scale model framework to simulate the neuromuscular system.
\Cref{chap:usage} gave an introduction to the design and usage of the software and demonstrated its application using various example problems.
\Cref{sec:implementation} described the implementation of OpenDiHu in more detail, motivated various design decisions, introduced the data handling and several algorithms, e.g., to construct a parallel domain decomposition or to map data between meshes, and described the implementation of various solvers for particular parts of the multi-scale model.

\Cref{sec:results} presented numerical results, which were obtained using our simulation software.
We simulated the passive mechanical behavior of muscle tissue, subcellular models given in CellML description, electrophysiology on muscle fibers, electric conduction in the muscle and the adipose tissue to obtain surface EMG signals, electrophysiology using the 3D homogenized multidomain description, and coupled scenarios of electrophysiology and muscle contraction.
We discussed effects of model and structural parameters and interpreted the obtained simulation results.

In \cref{sec:performance_analysis}, we analyzed the computational performance of our software in general and various solvers in particular.
We conducted numerical studies of universal convergence properties with the software OpenCMISS, which also helped to parameterize the numerical solvers in OpenDiHu.
Further studies on mesh widths and on the linear solvers used were carried out directly with OpenDiHu.
We evaluated various optimization options in OpenDiHu and compared the most optimized settings with the baseline solver OpenCMISS, yielding a speedup of more than two orders of magnitude.
Moreover, we investigated the computational performance of our models on the GPU, and conducted parallel strong scaling and parallel weak scaling tests on small clusters and the supercomputers at the High Performance Computing Center Stuttgart.

\section{Summary of Main Findings}
The present work simulated numerous scenarios with various model combinations, which provided different insights.
In the following, we summarize the observed findings.
We address the biophysical observations in \cref{sec:observations_biophysics} and the results of the performance measurements in \cref{sec:observations_performance}.

\subsection{Observations from the Fields of Biophysics and Biomechanics}\label{sec:observations_biophysics}
The comparison of the linear and nonlinear mechanics models in \cref{sec:solver_solid_mechanics} showed qualitatively different results and demonstrated that the behavior of deforming muscle tissue can only be described accurately by a proper nonlinear anisotropic solid mechanics model.
Initially, an open question was also how to relate the accuracy of the simulated EMG signals to the number of fibers and the mesh resolution.
Our numerical studies in \cref{sec:action_potential_velocity}, which compared the resulting action potential propagation velocity for different mesh widths of the 1D muscle fiber meshes, showed that a mesh width of \SI{100}{\micro\meter} or 100 elements per \si{\centi\meter} gives reasonably accurate results.
To evaluate the 3D mesh width and the spacing between the muscle fibers, we conducted simulations with different 3D mesh resolutions and numbers of fibers in \cref{sec:effects_of_the_mesh_width_emg}.
The number of fibers was scaled up to the realistic number of \num{270000} fibers in a biceps brachii muscle.
We concluded that the most accurate solution is obtained for a mesh width as fine as possible, as the EMG results were qualitatively different for every refinement step.
This emphasizes the need for highly resolved simulation scenarios (representing the real number of fibers in a muscle accurately) for realistic EMG computations and, as a result, the need for High Performance Computing techniques.
However, if the EMG is to be sampled by electrodes, i.e., if the EMG recording process should also be part of the simulation, coarser mesh resolutions might be sufficient, as the EMG is only captured at the locations of the electrodes.

One possible approach to reduce the computational effort for EMG simulations would be to only consider the muscle tissue down to a certain depth below the surface where the EMG electrodes are placed.
We observed in \cref{sec:simfiber_mu} that the EMG signal is highly influenced by MUs whose territories are located close to the electrodes.
However, our numerical experiments with EMG decomposition algorithms in \cref{sec:simfiber_decomposition} showed that large MUs located opposite to the EMG electrodes at the deepest muscle tissue layers are detectable in the surface EMG signals.
Thus, neglecting the deeper parts of the muscle would remove relevant information from the system and is not a valid approach to reduce the computational load.

Furthermore, the layer of adipose tissue on top of the muscle showed a smoothing effect on EMG recordings in our simulations, both with the fiber based approach in \cref{sec:simfiber_fat} and with the multidomain approach in \cref{sec:multidomain_components,sec:multidomain_simulation_emg}.
One advantage of our simulations compared to experimental studies is that the thickness of the fat layer is known exactly and can also be adjusted.

Simulations of muscle contraction with coupled electrophysiology and solid mechanics models showed a spatially inhomogeneous contraction for the biceps muscle while the muscle activation was ramped up.
The simulation in \cref{sec:fiber_based_contraction} of an isolated, contracting muscle belly without tendons showed transverse bending, alternating between the left- and right-hand sides, as a result of the successively activated MUs at the different sides of the muscle.
We also simulated the biceps brachii muscle together with its tendons and observed a ripple in the generated muscle force, which is caused by the same inhomogeneous MU activity.
The simulations of muscle contraction also showed that, if the muscle is initially in a stress-free state, the model can only achieve a maximum contraction of approximately \SI{85}{\percent}.
However, the muscles of the musculoskeletal system are known to exhibit prestresses in their relaxed states.
Accordingly, we added prestress to our simulations.
The amount of prestress is adjustable in the simulation settings, and the required amount can be determined by a comparison with experimental studies.
% linear-nonlin
% accuracy -> high mesh resolution
% fibers: effects of fat mesh, distance between fibers to surface, size of MUs (EMG decomposition)
% multidomain: more smoothed
% coupled solid mechanics: inhomogeneous contraction, prestretch required (without only 85% contraction)

\subsection{Summary of Performance Results}\label{sec:observations_performance}
A major part of the work was also concerned with improving the performance of the simulation software and thus enabling larger simulation scenarios in shorter runtimes.
Previously, the literature on biophysical multi-scale models of skeletal muscles mainly focused on modeling and interpretation of the results rather than on efficient computations.
The work of Röhrle et al. \cite{Roehrle2012} introduced the multi-scale model, which we based our work on, and simulated the tibialis anterior muscle using a 3D mechanics mesh with 12 elements.
The work of Heidlauf et al. \cite{Heidlauf2013} considered the same geometry and simulated 400 muscle fibers.
The authors parallelized their OpenCMISS-based implementation for a fixed number of four processes.
We built upon this work with the goal of pushing the limits of feasible problem sizes and, in \cref{sec:effects_of_the_mesh_width_emg}, executed our optimized simulation with \num{26912} processes, \num{273529} muscle fibers and a 3D mesh for the electrophysiology model with approximately \num{1e8} degrees of freedom.

The performance analyses in \cref{sec:performance_analysis} showed that the subcellular model contributes a large portion to the total runtime and, thus, is the most crucial part to optimize.
By using proper memory layouts, vectorization is possible.
Our approach of using explicit vector instructions outperformed the auto-vectorization capabilities of the compiler.
The approximation of the exponential function and an improved parallelization scheme for the 1D electric conduction problem additionally contributed to a high speedup.
The comparison to the baseline solver OpenCMISS Iron in a strong scaling study in \cref{sec:strong_scaling_runtimes_opencmiss_opendihu} revealed a maximum speedup of 363 for the purely implementation-specific improvements and an additional speedup of 2.5, shown in \cref{sec:opencmiss_numeric_improvements}, by using more efficient numerical methods.
In addition, the memory characteristics of the solvers were investigated in \cref{sec:strong_scaling_runtimes_opencmiss_opendihu}.
The linear increase in memory consumption of the baseline solver in a weak scaling setting was improved to a nearly constant scaling.
Our analysis using a roofline performance model showed that our solvers are compute bound and achieve a computational performance of approximately \SI{25}{\percent} of peak performance, which is a very good value.
Moreover, hybrid shared/distributed memory parallelism and computations on the GPU were investigated, but both approaches were found not to be competitive with our highly optimized distributed memory parallelization.
For the GPU, potentially more efficient approaches than our OpenMP-based approach exist, so a performance improvement could be possible in the future.
The modularity of the CellML infrastructure, where computational models can be shared among researchers and are interchangeable in multi-scale simulations, was preserved during all optimization endeavors.
Our approach was to implement a source-to-source code generator, which transformed the given CellML code into optimized code for the CPU or the GPU.
For the solution of the multidomain model, we evaluated various preconditioners and selected the most performant preconditioner-solver combination for our computations.

One previously unforeseen result is the large discrepancy in required runtime between the fiber based and the multidomain based electrophysiology models, presented in \cref{sec:solver_multidomain_model}.
We measured computation times that were longer by a factor of 1000 for the multidomain model, which results from the structure of the model.
Despite the high computational effort, the multidomain model is useful in practice as it can simulate effects that are not captured by the fiber based model.
We gave a detailed comparison between both approaches in \cref{sec:multidomain_differences}.

In summary, we provided a computationally efficient and scalable tool for applied biophysics researchers to solve problems in the domains of EMG generation and muscle contraction.
For example, the effect of different muscle fiber organizations and MU recruitment strategies can be tested with our software.
We demonstrated its use with state-of-the-art EMG decomposition algorithms, which provide the bridge to the experimental domain.
Thus, we hope to contribute one step on the pathway of complementing in vivo experiments with in silico experiments to increase the understanding of the neuromuscular system.

% provide a tool for applied biophysics researchers
% --------------------
% complement in-vivo and in-silico experiments
% test different muscle fiber organizations -> possible
% decomposition of EMG -> tested
% vc better than auto-vec, AVX-512, memory layout
% comparison to OpenCMISS
% preserve CellML modularity -> code generator
% GPU, OpenMP
% proper choice of solvers
% multidomain performance vs fibers
% performance
% --------------
% Röhrle2012: TA 12 elements, MU association
% Heidlauf2013: 400 fibers, TA, OpenCMISS, mechanics
% made new insights possible
%wir hatten ja mal vorgenommen dass
%tatsächlich ist gelungen:
%überraschenderweise, dass:
%bewerten mit den Ergebnissen
%was bewahrheitet,
%unterschiede zwischen grob und hochaufgelöster Sim
%anzahl realitäts anzahl Muskelfaser anzahl

\section{Outlook and Future Work}\label{sec:future_work}
The presented work could be extended in multiple directions, spanning performance improvements and model extensions.

First, some ideas for further performance improvements could be implemented and evaluated.
The monodomain equation could be solved with implicit-explicit (IMEX) schemes, which could potentially achieve higher precision.
The numerically stiff subcellular model is currently solved explicitly.
Implicit schemes could be developed, and the implicit iteration equations could be solved symbolically in a preprocessing step using the parsed CellML code.
To improve the performance of the multidomain model, the following algorithmic improvements are promising options.
The 3D problems of action potential propagation in the muscle volume for every compartment could be restricted to the subset of nodes where the occupancy factors are above a certain threshold, effectively reducing the problem sizes and the effect of higher MU counts on the runtime.
However, this would make it difficult to ensure a balanced parallel domain decomposition.
Instead of the current parallel partitioning of the domain, the multidomain model could also be parallelized by distributing the MUs to different processes or by a combination of both approaches.
On the numerical side, an extended error analysis could be carried out for all model parts, and the timestep widths, which are currently chosen conservatively, could potentially be increased while keeping the numerical error below a given threshold.
Error estimators could be developed, which would allow an adaptive adjustment of the timestep widths.
The solvers for the 3D electrophysiology and multidomain problems could be enhanced with geometric or algebraic multigrid preconditioners.
Since all subcellular points in a muscle are usually in similar states at any time, a hybrid approach using analytic descriptions of action potential propagation, as in \cref{sec:sim_rosenfalck}, and a fully numerical treatment could be chosen; surrogate models could then be adaptively added to the computational description.
On the technical side, computations on the GPU could be re-evaluated in the future using the existing OpenMP approach with more mature compiler versions or different accelerator-targeting programming technologies.

Second, the range of simulated models could be extended.
The simulations could be applied to further muscle geometries such as the triceps brachii or the tibialis anterior muscles.
Muscles with more complex geometries and fiber arrangements could be investigated.
A mechanically coupled problem of an agonist-antagonist pair could be considered, such as a system of biceps and triceps brachii.
Apart from the mechanical coupling, a coupling of the neural recruitment involving sensory organs in the muscles could be implemented and used to approach further biomechanical research questions.
Such a neuromuscular feedback loop could also be investigated first for a single muscle, e.g., by extending the preliminary implementation in OpenDiHu for the biceps muscle.
Pathological conditions could be simulated to understand muscular diseases, and neuromuscular electrical stimulation of the muscle for stroke rehabilitation could be considered.
By using the preCICE adapters in OpenDiHu, more advanced mechanics solvers could be coupled to an electrophysiology simulation in OpenDiHu, allowing one to study, e.g., the mechanical effects of surrounding tissue.
On a larger scale, the interplay of more organs could be taken into account.
Blood perfusion and muscle metabolism could be added, and coupled with models of the lung and the general metabolism of the organism.
Thus, a digital human model can be envisioned, which allows studying the effects of anomalies and developing new therapies, effectively utilizing simulation technology for human well-being.

%reduced to 1D problems using appropriate transformations.

% performance improvements
% ----------------------------
% more timestepping methods: CVODE (https://computing.llnl.gov/projects/sundials/cvode), imex
% different parallelisation where not all ranks have to be involved (for multidomain) -> this feature already exists for the fibers with multipleInstances
% more numeric tests on exp function? no
% multidomain: compute 3D problem as 1D problem, or adaptive computation of the parts of the fr factors
% GPU

% extensions
% -------------
% other muscles, more muscles,
% couple more models:
% neuromuscular feedback loop with sensors,
%
{ "alphanum_fraction": 0.8239463602, "avg_line_length": 129.6894409938, "ext": "tex", "hexsha": "8258cab2d423f2ba103b5c69b905f376438b8f3b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "babee64f01f15d93cb75140eb8c8424883b33c6c", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "maierbn/phd_thesis_source", "max_forks_repo_path": "document/09_conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "babee64f01f15d93cb75140eb8c8424883b33c6c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "maierbn/phd_thesis_source", "max_issues_repo_path": "document/09_conclusion.tex", "max_line_length": 1019, "max_stars_count": 1, "max_stars_repo_head_hexsha": "babee64f01f15d93cb75140eb8c8424883b33c6c", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "maierbn/phd_thesis_source", "max_stars_repo_path": "document/09_conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-05T19:00:04.000Z", "max_stars_repo_stars_event_min_datetime": "2021-09-05T19:00:04.000Z", "num_tokens": 4133, "size": 20880 }
\documentclass[aspectratio=169]{beamer} \usepackage{adjustbox} \usepackage[utf8]{inputenc} \usepackage{tikz} \usepackage{hyperref} \usepackage{multimedia} \usepackage{ulem} \usepackage{wasysym} \usetikzlibrary{mindmap,trees} \usepackage{siunitx} \usepackage{pbox} \usepackage{colortbl} \usepackage[absolute,overlay]{textpos} % \usetikzlibrary{mindmap,trees} \usepackage{smartdiagram} \usetikzlibrary{shapes.geometric,calc} \usetikzlibrary{shapes.symbols} \usetikzlibrary{shapes.symbols,positioning} \usepackage{metalogo} \usetikzlibrary{backgrounds, calc, shadows, shadows.blur} \newcommand\addcurlyshadow[2][]{ % #1: Optional aditional tikz options % #2: Name of the node to "decorate" \begin{pgfonlayer}{background} \path[blur shadow={shadow xshift=0pt, shadow yshift=0pt, shadow blur steps=6}, #1] ($(#2.north west)+(.3ex,-.5ex)$) -- ($(#2.south west)+(.5ex,-.7ex)$) .. controls ($(#2.south)!.3!(#2.south west)$) .. (#2.south) .. controls ($(#2.south)!.3!(#2.south east)$) .. ($(#2.south east)+(-.5ex,-.7ex)$) -- ($(#2.north east)+(-.3ex, -.5ex)$) -- cycle; \end{pgfonlayer} } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Style modifications %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usetheme{Berlin} %%% Fonts %%% % Change font. Fontspec requires xelatex instead of pdflatex! % Font catalog: https://www.tug.dk/FontCatalogue/ % \usepackage{fontspec} %\setsansfont{Comfortaa} %\setsansfont{DejaVu Sans} %\setsansfont{Fira Sans} % Use "Fira Sans Light" as the normal font and the "Fira Sans" for % bold fonts %\setsansfont[ %% ItalicFont={Fira Sans Light Italic}, %% BoldFont={Fira Sans}, %% BoldItalicFont={Fira Sans Italic}]{Fira Sans Light} \setbeamerfont{title}{size=\Large, series=\bfseries} \setbeamerfont{frametitle}{size=\large, series=\bfseries} \usepackage{helvet} %%% Slide template %%% \setbeamertemplate{frames}[default] % Empty headline / footline \setbeamertemplate{headline}{} \setbeamertemplate{footline}{} % Remove navigation icons \setbeamertemplate{navigation symbols}{} %%% Colors %%% \usecolortheme{crane} \usecolortheme{crane} \definecolor{lightgray}{RGB}{220,220,220} \definecolor{darkgray}{RGB}{45,45,45} %\definecolor{darkblue}{RGB}{0,86,137} %\definecolor{darkblue}{RGB}{22,90,151} \definecolor{lightblue}{RGB}{229, 245, 255} %\definecolor{darkblue}{RGB}{1,1,100} % Use the slide background in block environments \setbeamercolor{title}{fg=white,bg=darkgray} \setbeamertemplate{blocks}[default] \setbeamercolor{block title}{bg=} \setbeamercolor{block body}{bg=lightgray} \setbeamercolor{frametitle}{fg=white,bg=darkgray} \setbeamerfont{block body}{size=\large} \setbeamercolor{itemize item}{fg=black} \setbeamertemplate{itemize items}[circle] \setbeamercolor{section number projected}{bg=darkgray,fg=white} \setbeamercolor{section in toc}{fg=black} \setbeamercolor{subsection in toc}{fg=darkgray} \addtobeamertemplate{block begin}{\pgfsetfillopacity{0.8}}{\pgfsetfillopacity{1}} \addtobeamertemplate{frametitle}{\pgfsetfillopacity{1.0}}{\pgfsetfillopacity{1}} \addtobeamertemplate{title page}{\pgfsetfillopacity{1.0}}{\pgfsetfillopacity{1}} % Command to place the test (e.g. 
citation) in the center of the footer \newcommand{\setfootercentertext}[1]{ \setbeamertemplate{footline}{ \hspace*{\fill} \raisebox{3mm}[0mm][0mm]{ \tiny{#1}}\hspace*{\fill}} } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Content %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %------------------------------------------------------------------------------ \title{Supervised Machine Learning Methods} %------------------------------------------------------------------------------ \author{\small Konrad U. Förstner} \institute{ZB MED -- Information Centre for Life Science \& TH Köln} \date{\scriptsize Workshop \textit{Systems Biology: From large datasets to biological insight}\\ \ \\ 2021-05-21\\\ \\ \href{https://creativecommons.org/licenses/by/4.0/}{\includegraphics[width=0.88cm]{images/creative_commons_attribute.png}} } \logo{ \includegraphics[height=1.0cm]{images/ZBMED_2017_EN.pdf} \includegraphics[height=0.8cm]{images/logo_TH-Koeln_CMYK_22pt.eps} } \begin{document} %\setbeamertemplate{background}{ % \includegraphics[height=\paperheight]{images/flickr_normanbleventhalmapcenter_2710796662_mod.jpg} %} \begin{frame}{} \titlepage \end{frame} \logo{} \setcounter{tocdepth}{1} \begin{frame}{} \tableofcontents \end{frame} %%%%%%%%%%%%%%%%%%%%%% \section{Introduction} %%%%%%%%%%%%%%%%%%%%%% \begin{frame}{} \tableofcontents[currentsection] \end{frame} \begin{frame} % \frametitle{Basic concept of supervised machine learning} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_target-plain.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} After the lecture you should have a basic understanding of supervised machine learning approaches and potential applications in research.\\ After the practical part you should be able to implement them with Python and the package scikit-learn.\\ We will not cover the mathematical background. This is not needed at this level but recommended later. 
\end{minipage} \vspace{0.3cm} \end{block} \end{frame} \setbeamertemplate{background}{ \includegraphics[width=\paperwidth]{images/pexels-thisisengineering-3861969.jpg}} \setfootercentertext{ \href{https://www.pexels.com/photo/photo-of-code-projected-over-woman-3861969/}{https://www.pexels.com/photo/photo-of-code-projected-over-woman-3861969/} -- CC0} \begin{frame} \frametitle{} \end{frame} \setbeamertemplate{background}{} \setfootercentertext{} \setbeamertemplate{background}{ \includegraphics[width=\paperwidth]{images/Tools_by_Todd_Quackenbush.jpg}} \setfootercentertext{ \href{https://commons.wikimedia.org/wiki/File:Toolkit_and\_tools\_(Unsplash).jpg}{ https://commons.wikimedia.org/wiki/File:Toolkit\_and\_tools\_(Unsplash).jpg} -- CC0} \begin{frame} \end{frame} \setbeamertemplate{background}{} \setfootercentertext{} \begin{frame} \begin{center} \includegraphics[width=4.8cm]{images/xkcd_ai_methodology_2x.png}\\ \ \\ {\tiny \href{https://xkcd.com/2451/}{https://xkcd.com/2451/} CC-BY-NC by Randall Munroe } \end{center} \end{frame} \begin{frame} \begin{center} \includegraphics[width=13cm]{images/AI_ML_Deep_Learning.pdf} \end{center} \end{frame} \begin{frame} \begin{center} \begin{adjustbox}{max totalsize={.9\textwidth}{.7\textheight},center} \begin{tikzpicture} \path[mindmap,concept color=red,text=white] node[concept] {Machine\\learning} [clockwise from=110] child[concept color=orange] { node[concept] {Supervised learning}} child[concept color=orange] { node[concept] {Unsuper\-vised learning}} child[concept color=orange] { node[concept] {Rein\-forcement learning}} ; \end{tikzpicture} \end{adjustbox} \end{center} \end{frame} \begin{frame} \frametitle{Two types of tasks that can be solved with supervised learning} \begin{center} \includegraphics[width=13cm]{images/classification_and_regression.pdf} \end{center} \end{frame} \begin{frame} \frametitle{Classification types} \begin{center} \includegraphics[width=13cm]{images/binary_vs_multi-class_classification.pdf} \end{center} \end{frame} \begin{frame} % \frametitle{Basic concept of supervised machine learning} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_Random-Alphabet-Brain.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} Supervised learning means to generate models \\ that generalize from given examples. \end{minipage} \vspace{0.3cm} \end{block} \end{frame} \begin{frame} \frametitle{Basic concept of supervised machine learning} \begin{block}{} \begin{center} \vspace{0.5cm} The model / function maps from a given two-dimensional matrix \textit{X}\\ to an output vector \textit{y} with labels (classification)\\ or numerical values (regression).\\ \ \\ $X_{1} \rightarrow y_{1}$\\ $X_{2} \rightarrow y_{2}$\\ $X_{3} \rightarrow y_{3}$\\ \end{center} \end{block} \end{frame} \begin{frame} % \frametitle{Basic concept of supervised machine learning} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_Random-Alphabet-Brain.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} In the actual training / learning process the parameters of the model / function are estimated. 
The model is then able to project the input variable $X$ to the output variable $y$.\\ \begin{center} $y = f(X)$ \end{center} \end{minipage} \vspace{0.3cm} \end{block} \end{frame} \begin{frame} \frametitle{Example of classification} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_Random-Alphabet-Brain.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} \begin{center} Cancer classification based on single-cell\\ gene expression data. \end{center} \end{minipage} \vspace{0.3cm} \end{block} \end{frame} \begin{frame} \frametitle{Example of regression} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_Random-Alphabet-Brain.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} \begin{center} Predicting the gene expression level of a gene based on the gene expression levels of several regulators. \end{center} \end{minipage} \vspace{0.3cm} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%% \section{Concepts and terminology} %%%%%%%%%%%%%%%%%%%%%% \begin{frame}{} \tableofcontents[currentsection] \end{frame} \begin{frame} \frametitle{Entities and their features} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_ftdissociatecell.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} \textbf{Entities} (aka. samples, data points) are described by \\ \textbf{features} (aka. covariates, attributes) that have \textbf{values}.\\ E.g. for different cell lines (entities) the relative expression (values) of several genes (features). \end{minipage} \vspace{0.3cm} \end{block} \end{frame} \begin{frame} \frametitle{Entities and their features} \begin{block}{} \vspace{0.5cm} \ \ \ \ \begin{minipage}{0.10\textwidth} \begin{center} \includegraphics[width=1.6cm]{images/publicdomainvectors_ftdissociatecell.pdf} \end{center} \end{minipage} \hfill \begin{minipage}{0.80\textwidth} Features can be\\ \begin{itemize} \item categorical \begin{itemize} \item Nominal (e.g. cell line, cancer type, eye color, gender) \item Ordinal (e.g. very bad, bad, good, very good) \end{itemize} \item numerical \begin{itemize} \item Discrete (e.g. gene length in nucleotides, number cells) \item Continuous (e.g. cell length, concentration, relative expression) \end{itemize} \end{itemize} \end{minipage} \vspace{0.3cm} \end{block} \end{frame} \definecolor{LightGray}{gray}{0.95} \begin{frame} \frametitle{Feature selection} \begin{block}{} \begin{center} Choosing features with high variance.\\ \ \\ {\small \newcolumntype{g}{>{\columncolor{LightGray}}c} \begin{tabular}{|g|c|c|g|c|} \hline \textbf{Feature A} & \textbf{Feature B} & \textbf{Feature C} & \textbf{Feature D} & \textbf{...}\\ \hline 10.00 & 5.01 & 102.01 & 120 & ... \\ 20.91 & 5.01 & 102.00 & 200 & ... \\ 80.03 & 5.01 & 102.09 & 980 & ... \\ 90.19 & 5.00 & 103.00 & 700 & ... \\ 50.99 & 5.02 & 102.31 & 703 & ... \\ 80.63 & 5.01 & 102.30 & 443 & ... \\ \hline \end{tabular} } \end{center} \end{block} \end{frame} \begin{frame} \frametitle{Feature scaling} \begin{block}{} \begin{center} Normalizing the feature values to their ranges e.g. min/max normalization, mean normalisation, standard score / z-score normalization. 
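\\ \ \\
For example, min/max normalization maps every value $x$ of a feature with minimum $x_{\min}$ and maximum $x_{\max}$ to the interval $[0, 1]$:\\
$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$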
\end{center} \vspace{0.5cm} \hspace{1cm} \begin{minipage}{0.33\textwidth} {\small \begin{tabular}{|c|c|} \hline \textbf{Feature A} & \textbf{Feature B}\\ \hline 4.3 & 537\\ 5.3 & 703\\ 2.2 & 510\\ 1.5 & 200\\ 5.2 & 760\\ \hline \end{tabular} } \end{minipage} \begin{minipage}{0.08\textwidth} $\Rightarrow$ \end{minipage} \begin{minipage}{0.40\textwidth} {\small \begin{tabular}{|c|c|} \hline \textbf{Scaled Feature A} & \textbf{Scaled Feature B}\\ \hline 0.736 & 0.601\\ 1.000 & 0.898\\ 0.184 & 0.554\\ 0.000 & 0.00\\ 0.974 & 1.00\\ \hline \end{tabular} } \end{minipage} \hspace{1cm} \vspace{0.5cm} \end{block} \end{frame} \begin{frame} \frametitle{Features encoding} \begin{block}{} \begin{center} Translating categorical values into numerical values\\ (e.g. via one-hot encoding)\\ \ \\ {\small \begin{tabular}{|c|c|c|c|c|} \hline \textbf{} & \textbf{A} & \textbf{C} & \textbf{G} & \textbf{T}\\ \hline A & 1 & 0 & 0 & 0 \\ C & 0 & 1 & 0 & 0 \\ G & 0 & 0 & 1 & 0 \\ T & 0 & 0 & 0 & 1 \\ \hline \end{tabular} \ \\ \ \\ e.g. AATTGC becomes:\\ %1000 1000 0001 0001 0010 0100\\ \colorbox{white}{1, 0, 0, 0,} \colorbox{white}{1, 0, 0, 0,} \colorbox{white}{0, 0, 0, 1,} \colorbox{white}{0, 0, 0, 1,} \colorbox{white}{0, 0, 1, 0,} \colorbox{white}{0, 1, 0, 0}\\ } \end{center} \end{block} \end{frame} \begin{frame} \frametitle{How well does the model fit?} \begin{block}{} \begin{center} \textbf{Overfitting}: Good performance on the training data,\\ poor generalization to other data\\ \end{center} \end{block} \begin{block}{} \begin{center} \textbf{Underfitting}: Poor performance on the training data\\ and poor generalization to other data\\ \end{center} \end{block} \begin{block}{} \begin{center} \textbf{Regularization}: Different methods to prevent overfitting\\ \end{center} \end{block} \end{frame} \begin{frame} \begin{center} \includegraphics[width=14cm]{images/fitting_underfitting_overfitting.pdf} \end{center} \end{frame} %% \begin{frame} %% \frametitle{Curse of dimensionality} %% \begin{block}{} %% \begin{center} %% \end{center} %% \end{block} %% \end{frame} %% \begin{frame} %% \frametitle{Evaluation of classificaions} %% Evaluation of binary classification - Confusion matrix for binary classification %% \begin{itemize} %% \item Positive (P): Observation is positive (for example: is an apple) %% \item Negative (N): Observation is not positive (for example: is not an apple). %% \item True Positive (TP): Observation is positive, and is predicted to be positive. %% \item False Negative (FN): Observation is positive, but is predicted negative. %% \item True Negative (TN): Observation is negative, and is predicted to be negative. %% \item False Positive (FP): Observation is negative, but is predicted positive. %% \item Accuracy %% ACC = (TP + TN) / (P + N) %% \item Recall sensitivity, true positive rate %% TPR = TP / P %% \item Precision / positive predictive value %% PPV = TP / (TP + FP) %% \item F1 (aka F-measure, F-score) - harmonic mean of precision and recall %% F1 = 2 * (1 / (1/recall) + (1/recall)) %% \end{itemize} %% \end{frame} \begin{frame} \frametitle{Workflow for parameter fitting and evaluation} \begin{block}{} \begin{center} \begin{itemize} \item[1.)] Split into training and test/validation set (e.g. 75\%/25\%) \item[2.)] Train model by estimating the parameters with the training set \item[3.)] Evaluate the performance by using the test/validation set \\(e.g. 
scored as accuracy) \end{itemize} \end{center} \end{block} \end{frame} \begin{frame} \frametitle{Workflow with cross-validation} \begin{center} \includegraphics[width=14cm]{images/workflow_training_test_set.pdf} \end{center} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Selected supervised learning methods} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{} \tableofcontents[currentsection] \end{frame} \begin{frame} \frametitle{Overview of different methods} \begin{block}{} \begin{center} \begin{itemize} \item K-Nearest neighbor \item Naive Bayes \item Linear Regression \item Logistic Regression \item Decision trees \item Artificial Neural Network (multilayer perceptron) \item Genetic Programming \end{itemize} \end{center} \end{block} \end{frame} %------------------------------- \subsection{k-Nearest Neighbors} %------------------------------- \setcounter{tocdepth}{2} \begin{frame}{} \tableofcontents[currentsubsection] \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{block}{} \begin{center} \begin{itemize} \item For classification and regression \item Simplest case of supervised machine learning \item Can be easily applied to multi-class classification \end{itemize} \end{center} \end{block} \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{center} \includegraphics[width=13.0cm]{images/k_nearest_neighbour_classification_only_training_data.pdf} \end{center} \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{center} \includegraphics[width=13.0cm]{images/k_nearest_neighbour_classification_k_1.pdf} \end{center} \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{center} \includegraphics[width=13.0cm]{images/k_nearest_neighbour_classification_k_3.pdf} \end{center} \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{center} \includegraphics[width=13.0cm]{images/k_nearest_neighbour_regression_only_training_data.pdf} \end{center} \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{center} \includegraphics[width=13.0cm]{images/k_nearest_neighbour_regression_k_1.pdf} \end{center} \end{frame} \begin{frame} \frametitle{k-Nearest Neighbors} \begin{center} \includegraphics[width=13.0cm]{images/k_nearest_neighbour_regression_k_3.pdf} \end{center} \end{frame} %------------------------- \subsection{Linear models} %------------------------- \setcounter{tocdepth}{2} \begin{frame}{} \tableofcontents[currentsubsection] \end{frame} \begin{frame} \frametitle{Linear models} \begin{block}{} \begin{center} $\hat{y} = w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{3} + ... + w_{n}x_{n} + b$\\ \ \\ with $n$ as the number of features\\ $w$ are the different weights/coefficients\\ $b$ the intercept\\ \end{center} \end{block} \end{frame} \begin{frame} \frametitle{Different ways to estimate the parameters} \begin{block}{} \begin{center} \begin{itemize} \item Ordinary Least Squares \begin{itemize} \item no parameters - easy to use but no possibility to adapt \end{itemize} \item Ridge \begin{itemize} \item coefficients should be close to zero \item more resistant against overfitting \end{itemize} \item Least Absolute Shrinkage and Selection Operator (LASSO) \end{itemize} \end{center} \end{block} \end{frame} \begin{frame} \frametitle{Ordinary least squares (OLS)} \begin{center} \includegraphics[width=13.0cm]{images/linear_model_ordinary_least_squares.pdf} Minimize the offset between $\hat{y}$ and $y$ the\\ mean squared error (MSE) or sum of squared errors (SSE). 
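\ \\
{\small For $n$ data points:
$SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$, \quad $MSE = \frac{1}{n} SSE$}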
\end{center}
\end{frame}

\begin{frame}
\frametitle{}
\begin{block}{}
  \begin{center}
    Once the parameters ($b$ and the weights $w$) of\\
    \ \\
    $\hat{y} = w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{3} + ... + w_{n}x_{n} + b$\\
    \ \\
    are estimated, predictions can be made\\
    by putting the $x$ values of new data points into the\\
    equation to predict the $y$ value.
  \end{center}
\end{block}
\end{frame}

%-----------------------------------------
\subsection{Support Vector Machines (SVMs)}
%-----------------------------------------
\setcounter{tocdepth}{2}
\begin{frame}{}
  \tableofcontents[currentsubsection]
\end{frame}

\begin{frame}
\frametitle{Support Vector Machines (SVMs) -- Separating hyperplane}
\begin{center}
\includegraphics[width=13.0cm]{images/svm_potential_separating_hyperplanes.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Support Vector Machines (SVMs) -- Margin}
\begin{center}
\includegraphics[width=13.0cm]{images/svm_with_margin.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Support Vector Machines (SVMs) -- Soft Margin}
\begin{center}
\includegraphics[width=13.0cm]{images/svm_with_soft_margin.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Support Vector Machines (SVMs) -- Kernel trick}
\begin{center}
\includegraphics[width=13.0cm]{images/svm_kernel_trick_1.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{SVM -- Kernel trick}
\begin{center}
\includegraphics[width=13.0cm]{images/svm_kernel_trick_2.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Support Vector Machines (SVMs) -- Kernel trick}
\begin{center}
\includegraphics[width=13.0cm]{images/svm_kernel_trick_3.pdf}
\end{center}
\end{frame}

%-------------------------
\subsection{Decision Trees and Random Forest}
%-------------------------
\setcounter{tocdepth}{2}
\begin{frame}{}
  \tableofcontents[currentsubsection]
\end{frame}

\begin{frame}
\frametitle{Decision Trees}
\begin{center}
\includegraphics[width=13.0cm]{images/decision_tree_0.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Decision Trees}
\begin{center}
\includegraphics[width=13.0cm]{images/decision_tree_1.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Decision Trees}
\begin{center}
\includegraphics[width=13.0cm]{images/decision_tree_2.pdf}
\end{center}
\end{frame}

%% \begin{frame}
%%   Decision trees
%%   \begin{itemize}
%%   \item Concepts:
%%   \item node
%%   \item edge
%%   \item root
%%   \item leaf
%%   \item child
%%   \item parent
%%   \item A forest is a set of n ≥ 0 disjoint trees
%%   \item features: real-valued or categorial and also missing values
%%   \end{itemize}
%% \end{frame}

\begin{frame}
\frametitle{Random forest}
\begin{block}{}
  \begin{center}
    \begin{itemize}
    \item In the random forests approach many different decision trees are
      generated by a randomized tree-building algorithm.
    \item The training set is sampled with replacement to produce a modified
      training set of equal size to the original but with some training items
      included more than once.
    \item In addition, when choosing the question at each node, only a small,
      random subset of the features is considered.
    \item The final decision is made by presenting the data to all trees and
      then taking a vote.
    \end{itemize}
  \end{center}
\end{block}
\end{frame}

%--------------------------------------
\subsection{Artificial Neural Networks}
%--------------------------------------
\setcounter{tocdepth}{2}
\begin{frame}{}
  \tableofcontents[currentsubsection]
\end{frame}

\begin{frame}
\frametitle{Artificial Neural Networks}
\begin{block}{}
  \begin{center}
    \begin{itemize}
    \item Aka. Multilayer perceptrons or Feed-forward neural networks
    \item Inspired by natural neural networks
    \item For classification or regression
    \end{itemize}
  \end{center}
\end{block}
\end{frame}

\begin{frame}
\frametitle{Artificial Neural Networks}
\begin{center}
\includegraphics[width=13.0cm]{images/ANN_without_hidden_layer.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Artificial Neural Networks}
\begin{center}
\includegraphics[width=13.0cm]{images/ANN_with_hidden_layer.pdf}
\end{center}
\end{frame}

\begin{frame}
\frametitle{Artificial Neural Networks}
\begin{center}
\includegraphics[width=13.0cm]{images/ANN_with_serveral_hidden_layers.pdf}
\end{center}
\end{frame}

\section{Summary}
\setcounter{tocdepth}{1}
\begin{frame}{}
  \tableofcontents[currentsubsection]
\end{frame}

\begin{frame}
\frametitle{Summary}
\begin{block}{}
  \begin{itemize}
  \item Supervised machine learning can be used for classification and
    regression.
  \item The parameters of models are estimated based on training data.
  \item Features have to be selected and potentially encoded or scaled.
  \item There are numerous machine learning approaches with different
    strengths and weaknesses available.
  \end{itemize}
\end{block}
\end{frame}

\setbeamercolor{block body}{bg=lightgray}
\setbeamertemplate{background}{
  \includegraphics[width=\paperwidth]{images/flickr_nateone_3768979925.jpg}}
\setfootercentertext{
  \href{https://www.flickr.com/photos/nateone/3768979925/}{https://www.flickr.com/photos/nateone/3768979925/}
  -- CC-BY by flickr user \href{https://www.flickr.com/photos/nateone/}{nateone}}

\begin{frame}
% \frametitle{Acknowledgements}
\begin{block}{}
  \begin{center}
    \textbf{Thank you for your attention}\\
    \ \\
    \href{konrad.foerstner.org}{konrad.foerstner.org} / \href{https://twitter.com/konradfoerstner}{@konradfoerstner}\\
    \ \\
    \href{https://zbmed.de}{zbmed.de} / \href{https://twitter.com/ZB_MED}{@ZB\_MED} \\
    \ \\
    \href{https://th-koeln.de}{th-koeln.de} / \href{https://twitter.com/th\_koeln}{@th\_koeln}
    \ \\
    \ \\
    \includegraphics[height=1.8cm]{images/ZBMED_2017_EN.pdf}
    \ \ \
    \includegraphics[height=1.6cm]{images/logo_TH-Koeln_CMYK_22pt.eps} \\
  \end{center}
\end{block}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.6340707646, "avg_line_length": 27.5337301587, "ext": "tex", "hexsha": "f7a6493762ef5cc88fb8d26a89db88d35a5fd908", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-18T09:02:54.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-18T09:02:54.000Z", "max_forks_repo_head_hexsha": "6ecae7e4a20c44ebbf2ffd53fe605a902dfadae6", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "foerstner-lab/2021-06-21-Supervised_Machine_Learning_as_part_of_an_EBI_Systems_Biology_course", "max_forks_repo_path": "slides/Supervised_Machine_Learning_Methods.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ecae7e4a20c44ebbf2ffd53fe605a902dfadae6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "foerstner-lab/2021-06-21-Supervised_Machine_Learning_as_part_of_an_EBI_Systems_Biology_course", "max_issues_repo_path": "slides/Supervised_Machine_Learning_Methods.tex", "max_line_length": 163, "max_stars_count": 2, "max_stars_repo_head_hexsha": "6ecae7e4a20c44ebbf2ffd53fe605a902dfadae6", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "foerstner-lab/2021-06-21-Supervised_Machine_Learning_as_part_of_an_EBI_Systems_Biology_course", "max_stars_repo_path": "slides/Supervised_Machine_Learning_Methods.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-19T11:22:15.000Z", "max_stars_repo_stars_event_min_datetime": "2021-06-22T13:20:12.000Z", "num_tokens": 8420, "size": 27754 }
\documentclass[letterpaper,11pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage[htt]{hyphenat}
\usepackage{courier}

\begin{document}
\title{\Huge{Problem Set 1}\\
\vspace{0.125in}
\Large{MIT 6.0002}\\
\large{Introduction to Computational Thinking and Data Science\\
as Taught in Fall 2016}
}
\author{ John L. Jones IV}
\maketitle
\pagebreak

\section*{Problem A.1}
What were your results from \texttt{compare\_cow\_transport\_algorithms}?
Which algorithm runs faster? Why? \\
\\
Using \texttt{ps1\_cow\_data.txt} the results were: \\
\texttt{
greedy\_cow\_transport:\\
length = 6 trips \\
time = .000145 seconds \\
\\
brute\_force\_cow\_transport: \\
length = 5 trips \\
time = 0.48294 seconds \\
} \\
The algorithm \texttt{greedy\_cow\_transport} does not iterate through every
possible combination of trips, unlike \texttt{brute\_force\_cow\_transport}.
In my implementation of \texttt{greedy\_cow\_transport}, first the input
dictionary of cows is copied to a list of cow names \texttt{sorted} from
largest to smallest weight. The python \texttt{sorted} function utilizes a
Timsort which is $\mathcal{O}(n\log{}n)$. Then \texttt{greedy\_cow\_transport}
removes any cow which is larger than \texttt{limit}, an $\mathcal{O}(n)$
operation. The sorted list is then utilized to select the cows which can fit
on the ship. Starting at the heavy end of the list, it iterates and selects
cows that can fit onto the ship without exceeding the payload \texttt{limit}.
Since the list has been sorted, this is an $\mathcal{O}(n\log{}n)$ operation.
The python \texttt{sorted} function's Timsort and the cow-selection pass
dominate the run time, making \texttt{greedy\_cow\_transport}
$\mathcal{O}(n\log{}n)$.

In comparison, \texttt{brute\_force\_cow\_transport} must first enumerate
every possible partitioning of the cows into trips, a set that grows
exponentially with the number of cows, and then evaluate each of these
candidate solutions. This emphasizes Professor John Guttag's quote from
Lecture 1, ``many optimization problems are inherently exponential. What that
means is there is no algorithm that provides an exact solution to this problem
whose worst case running time is not exponential in the number of items.''

\section*{Problem A.2}
Does the greedy algorithm return the optimal solution? Why/why not? \\
\\
No, the greedy algorithm \emph{does not} return the optimal solution. Finding
the exact optimum of this ``knapsack''-style problem requires exponential time
in the worst case. However, a reasonable (though not necessarily optimal)
solution can be found in $\mathcal{O}(n\log n)$ with
\texttt{greedy\_cow\_transport}. With \texttt{ps1\_cow\_data.txt}
\texttt{greedy\_cow\_transport} returns a solution over $1000$ times faster
than \texttt{brute\_force\_cow\_transport}.

\section*{Problem A.3}
Does the brute force algorithm return the optimal solution? Why/why not? \\
\\
Yes, the brute force algorithm does return the optimal solution. All possible
solutions are found and then evaluated. The optimal solution is guaranteed to
be produced and returned. However, this comes at a great cost to run-time
speed. With \texttt{ps1\_cow\_data.txt} \texttt{brute\_force\_cow\_transport}
produces a solution over $1000$ times slower than the greedy algorithm.

\section*{Problem B.1}
Explain why it would be difficult to use a brute force algorithm to solve this
problem if there were $30$ different egg weights. You do not need to implement
a brute force algorithm in order to answer this. \\
\\
Brute force requires all possible solutions to be generated and then
evaluated. The width of the search tree grows linearly with the number of egg
weights, but the number of nodes grows exponentially with the number of egg
weights. Therefore, the computational complexity grows exponentially with the
number of egg weights.

Observe: (1) this problem can be separated into smaller, similar sub-problems,
so a recursive solution is possible; (2) the same sub-problems recur many
times. Since these two conditions are true, a dynamic programming approach,
which uses a memo, allows the optimal solution to be found in much less run
time than a typical brute-force approach. The same recursive solution without
a memo is so slow that I did not have the patience to see if it worked on
$n = 99$.

\section*{Problem B.2}
If you were to implement a greedy algorithm for finding the minimum number of
eggs needed, what would the objective function be? What would the constraints
be? What strategy would your greedy algorithm follow to pick which eggs to
take? You do not need to implement a greedy algorithm in order to answer
this.\\
\\
The objective function would be to minimize the number of eggs taken, and the
constraint is that the total weight of the eggs cannot exceed the available
weight. The greedy strategy: always take the largest egg available, if and
only if the largest egg does not exceed the available weight.

\section*{Problem B.3}
Will a greedy algorithm always return the optimal solution to this problem?
Explain why it is optimal or give an example of when it will not return the
optimal solution.\\
\\
No, the greedy algorithm will not always return the optimal solution. For
example, if the egg weights are (1, 5, 7) and the target weight is 10, the
optimal solution is 2 eggs ($2 \times 5 = 10$), while the greedy solution
returns 4 eggs ($1 \times 7 + 3 \times 1 = 10$).
\end{document}
{ "alphanum_fraction": 0.7672176309, "avg_line_length": 49.8235294118, "ext": "tex", "hexsha": "5fe4da6b8776985fef82f706b44d28538858f4d5", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-26T11:58:04.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-26T11:58:04.000Z", "max_forks_repo_head_hexsha": "40ca76762266f89ff1f070dff5642cd9e3f120df", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "John-L-Jones-IV/6.0002", "max_forks_repo_path": "ps1/ps1_answers.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "40ca76762266f89ff1f070dff5642cd9e3f120df", "max_issues_repo_issues_event_max_datetime": "2021-08-01T09:12:11.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-29T22:36:23.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "John-L-Jones-IV/6.0002", "max_issues_repo_path": "ps1/ps1_answers.tex", "max_line_length": 128, "max_stars_count": null, "max_stars_repo_head_hexsha": "40ca76762266f89ff1f070dff5642cd9e3f120df", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "John-L-Jones-IV/6.0002", "max_stars_repo_path": "ps1/ps1_answers.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1302, "size": 5082 }
\PassOptionsToPackage{unicode=true}{hyperref} % options for packages loaded elsewhere \PassOptionsToPackage{hyphens}{url} % \documentclass[ignorenonframetext,]{beamer} \usepackage{pgfpages} \setbeamertemplate{caption}[numbered] \setbeamertemplate{caption label separator}{: } \setbeamercolor{caption name}{fg=normal text.fg} \beamertemplatenavigationsymbolsempty % Prevent slide breaks in the middle of a paragraph: \widowpenalties 1 10000 \raggedbottom \setbeamertemplate{part page}{ \centering \begin{beamercolorbox}[sep=16pt,center]{part title} \usebeamerfont{part title}\insertpart\par \end{beamercolorbox} } \setbeamertemplate{section page}{ \centering \begin{beamercolorbox}[sep=12pt,center]{part title} \usebeamerfont{section title}\insertsection\par \end{beamercolorbox} } \setbeamertemplate{subsection page}{ \centering \begin{beamercolorbox}[sep=8pt,center]{part title} \usebeamerfont{subsection title}\insertsubsection\par \end{beamercolorbox} } \AtBeginPart{ \frame{\partpage} } %\AtBeginSection{ % \ifbibliography % \else % \frame{\sectionpage} % \fi %} \AtBeginSubsection{ \frame{\subsectionpage} } \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provides euro and other symbols \else % if luatex or xelatex \usepackage{unicode-math} \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi \usetheme[]{PaloAlto} \usecolortheme{orchid} % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \usepackage{hyperref} \hypersetup{ pdfauthor={Anthony Raborn\^{}1; Walter Leite; Katerina Marcoulides}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \newif\ifbibliography \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{caption} \title{Comparison of Automated Short Form Selection Strategies} \author{Anthony Raborn\(^1\) \and Walter Leite \and Katerina Marcoulides} \providecommand{\institute}[1]{} \institute{Research and Evaluation Methodology Department \and University of Florida \and 1: \href{mailto:[email 
protected]}{\nolinkurl{[email protected]}}} \date{November 14, 2018} \begin{document} \frame{\titlepage} \hypertarget{introduction}{% \section{Introduction}\label{introduction}} \begin{frame}{Applications of Psychometric Scales} \protect\hypertarget{applications-of-psychometric-scales}{} Applied researchers are often faced with a dilemma, both with drawbacks: \begin{enumerate}[A] \item Use a well-established but lengthy scale \\ -- Potentially longer administration time for less information \item Use a few items from a scale \\ -- Potentially greater information but weaker validity evidence \end{enumerate} In the literature, researchers attempt to use Option B with some effort spent on collecting validity evidence \end{frame} \begin{frame}{Examples of Item Selection Methods for Short Forms} \protect\hypertarget{examples-of-item-selection-methods-for-short-forms}{} \begin{enumerate} \tightlist \item Hand-Selecting Items \end{enumerate} \begin{itemize} \tightlist \item Using theoretical or practical justifications per item (e.g., Noble, Jensen, Naylor, Bhullar, \& Akeroyd, 2013) \item Retaining one of many (qualitatively) redundant items (e.g., Dennis, 2003) \end{itemize} \begin{enumerate} \setcounter{enumi}{1} \tightlist \item Statistical Criteria \end{enumerate} \begin{itemize} \tightlist \item Retaining items with high factor loadings or item correlations (e.g., Byrne \& Pachana, 2011; Wester, Vogel, O'neil, \& Danforth, 2012) \item Selecting items that improve measures of reliability and/or dimensionality (e.g., Lim \& Chapman, 2013; Veale, 2014) \end{itemize} \end{frame} \begin{frame}{Problem} \protect\hypertarget{problem}{} Creating short forms with (1) good internal structure and (2) good predictive, convergent, and/or divergent validity is difficult by hand using \emph{any} criteria. One potential solution would be to use metaheuristic optimization algorithms (Dréo, Pétrowski, Siarry, \& Taillard, 2006). 
\end{frame} \begin{frame}{Goals of this Study} \protect\hypertarget{goals-of-this-study}{} \begin{itemize} \tightlist \item Compare different automatic scale reduction strategies \begin{enumerate} \tightlist \item Model fit of final scales (better fit is better) \item Removal of specific problematic items (fewer problematic items is better) \item Reliability of final scales (higher reliability is better) \item Time to converge (faster is better) \end{enumerate} \item Determine which factors affect these comparisons \begin{enumerate} \tightlist \item Population model type (one factor, three factor) \item Severity of problematic items (none, minor, major) \item Strength of relationship to external criterion (none, moderate) \end{enumerate} \end{itemize} \end{frame} \hypertarget{theoretical-framework}{% \section{Theoretical Framework}\label{theoretical-framework}} \begin{frame}{Previous Attempts} \protect\hypertarget{previous-attempts}{} Some ``common'' algorithms in the literature: \begin{enumerate} \tightlist \item ``Maximize Main Loadings'' (not investigated) \item Ant Colony Optimization (ACO) \item Tabu Search (TS) \item Genetic Algorithm (GA) \end{enumerate} An additional method investigated in this study: \begin{enumerate} \setcounter{enumi}{4} \tightlist \item Simulated Annealing (SA) \end{enumerate} \end{frame} \hypertarget{method}{% \section{Method}\label{method}} \begin{frame}{Factors Manipulated} \protect\hypertarget{factors-manipulated}{} \begin{enumerate} \tightlist \item The dimensionality of the full form \end{enumerate} \begin{itemize} \tightlist \item One Factor \item Three Factor \end{itemize} \begin{enumerate} \setcounter{enumi}{1} \tightlist \item Full-scale model misspecification \end{enumerate} \begin{itemize} \tightlist \item No misspecification \item Minor misspecification (six items loading on a nuisance parameter with \(\lambda=.3\)) \item Major misspecification (six items loading on a nuisance parameter with \(\lambda=.6\)) \end{itemize} \begin{enumerate} \setcounter{enumi}{2} \tightlist \item Relationship to External Criterion Variable \end{enumerate} \begin{itemize} \tightlist \item No relationship \item Moderate relationship (\(\gamma = .6\)) \end{itemize} \end{frame} \begin{frame}{One Factor Model} \protect\hypertarget{one-factor-model}{} \begin{figure} \centering \includegraphics[width=2.86458in,height=\textheight]{Factor Diagrams/One Factor Diagram.png} \caption{20-item Self-Deceptive Enhancement Scale (Leite \& Beretvas, 2005)} \end{figure} \end{frame} \begin{frame}{Three Factor Model} \protect\hypertarget{three-factor-model}{} \begin{figure} \centering \includegraphics[width=3.85417in,height=\textheight]{Factor Diagrams/Three Factor Diagram.png} \caption{24-item Teacher Efficacy Scale (Tschannen-Moran \& Hoy, 2001)} \end{figure} \end{frame} \begin{frame}[fragile]{Simulation} \protect\hypertarget{simulation}{} Program: R (R Core Team, 2018) Packages: \begin{enumerate} \tightlist \item \texttt{MASS} (Venables \& Ripley, 2002) (data simulation) \item \texttt{ShortForm} (Raborn \& Leite, 2018) (ACO, SA, TS) \item \texttt{GAabbreviate} (Sahdra et al., 2016) (GA; modified) \end{enumerate} Sample Size: \(n = 500\) Iterations: 100 \end{frame} \begin{frame}{Analysis of Results} \protect\hypertarget{analysis-of-results}{} \begin{enumerate} \tightlist \item CFI, TLI, RMSEA \item Proportion of iterations including each problematic item (excluding no error condition) \item Composite reliability of each factor: \[CR_{factor} = 
\frac{(\Sigma^I_{i=1}Loading_i)^2}{(\Sigma^I_{i=1}Loading_i)^2 + \Sigma^I_{i=1}(Residual^2_i)} \] \item Run time of algorithms \end{enumerate} \end{frame} \hypertarget{results}{% \section{Results}\label{results}} \begin{frame}{One Factor Model Fit: No External Variable} \protect\hypertarget{one-factor-model-fit-no-external-variable}{} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{}p{0.7in}>{}p{0.7in}} \toprule Error Condition & Method & CFI & TLI & RMSEA\\ \midrule & ACO & \textbf{0.975} & \textbf{0.967} & \textbf{0.045}\\ & SA & \textbf{0.992} & \textbf{0.990} & \textbf{0.020}\\ & TS & \textbf{0.985} & \textbf{0.981} & \textbf{0.027}\\ \multirow{-4}{0.7in}{\raggedright\arraybackslash None} & GA & \textbf{0.975} & \textbf{0.968} & \textbf{0.043}\\ \cmidrule{1-5} & ACO & \textbf{0.966} & \textbf{0.956} & 0.052\\ & SA & \textbf{0.989} & \textbf{0.987} & \textbf{0.022}\\ & TS & \textbf{0.978} & \textbf{0.972} & \textbf{0.035}\\ \multirow{-4}{0.7in}{\raggedright\arraybackslash Minor} & GA & \textbf{0.968} & \textbf{0.959} & \textbf{0.048}\\ \cmidrule{1-5} & ACO & 0.944 & 0.928 & 0.062\\ & SA & \textbf{0.983} & \textbf{0.979} & \textbf{0.028}\\ & TS & 0.944 & 0.928 & 0.057\\ \multirow{-4}{0.7in}{\raggedright\arraybackslash Major} & GA & 0.846 & 0.802 & 0.113\\ \bottomrule \end{tabular}} \end{table} \end{frame} \begin{frame}{One Factor Model Fit: External Variable} \protect\hypertarget{one-factor-model-fit-external-variable}{} \small \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}l} \toprule Error Condition & External Relationship & Method & CFI & TLI & RMSEA\\ \midrule \addlinespace[0.3em] \multicolumn{6}{l}{\textbf{None}}\\ & None & ACO & \textbf{0.975} & \textbf{0.967} & \textbf{0.045}\\ & None & SA & \textbf{0.992} & \textbf{0.990} & \textbf{0.020}\\ & None & TS & \textbf{0.985} & \textbf{0.981} & \textbf{0.027}\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & None & GA & \textbf{0.975} & \textbf{0.968} & \textbf{0.043}\\ \cmidrule{1-6} & Moderate & ACO & \textbf{0.979} & \textbf{0.973} & \textbf{0.040}\\ & Moderate & SA & \textbf{0.991} & \textbf{0.989} & \textbf{0.021}\\ & Moderate & TS & \textbf{0.985} & \textbf{0.981} & \textbf{0.027}\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & Moderate & GA & \textbf{0.975} & \textbf{0.968} & \textbf{0.044}\\ \cmidrule{1-6} \addlinespace[0.3em] \multicolumn{6}{l}{\textbf{Major}}\\ & None & ACO & 0.944 & 0.928 & 0.062\\ & None & SA & \textbf{0.983} & \textbf{0.979} & \textbf{0.028}\\ & None & TS & 0.944 & 0.928 & 0.057\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & None & GA & 0.846 & 0.802 & 0.113\\ \cmidrule{1-6} & Moderate & ACO & 0.945 & 0.931 & 0.058\\ & Moderate & SA & \textbf{0.981} & \textbf{0.977} & \textbf{0.029}\\ & Moderate & TS & 0.930 & 0.912 & 0.063\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & Moderate & GA & 0.858 & 0.822 & 0.107\\ \bottomrule \end{tabular}} \end{table} \end{frame} \begin{frame}{One Factor Item Selection Proportions: No External Variable} \protect\hypertarget{one-factor-item-selection-proportions-no-external-variable}{} 
\begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lllllll} \toprule Error Condition & Item & Factor Loading & ACO & SA & TS & GA\\ \midrule & y3 & 0.580 & 0.84 & 0.39 & 0.48 & 0.66\\ & y5 & 0.534 & 0.81 & 0.36 & 0.43 & 0.35\\ & y4 & 0.448 & 0.70 & 0.59 & 0.53 & 0.61\\ & y2 & 0.408 & 0.66 & 0.36 & 0.37 & 0.41\\ & y6 & 0.393 & 0.51 & 0.38 & 0.39 & 0.93\\ \multirow{-6}{*}{\raggedright\arraybackslash Minor} & y1 & 0.382 & 0.34 & 0.39 & 0.48 & 0.48\\ \cmidrule{1-7} & y3 & 0.580 & 0.49 & 0.12 & 0.46 & 0.61\\ & y5 & 0.534 & 0.35 & 0.32 & 0.48 & 0.44\\ & y4 & 0.448 & 0.33 & 0.17 & 0.48 & 0.70\\ & y2 & 0.408 & 0.19 & 0.19 & 0.37 & 0.48\\ & y6 & 0.393 & 0.21 & 0.21 & 0.36 & 0.96\\ \multirow{-6}{*}{\raggedright\arraybackslash Major} & y1 & 0.382 & 0.21 & 0.21 & 0.48 & 0.44\\ \cmidrule{1-7} \textbf{Minor Error Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.643} & \textbf{0.412} & \textbf{0.447} & \textbf{0.573}\\ \cmidrule{1-7} \textbf{Major Error Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.297} & \textbf{0.203} & \textbf{0.438} & \textbf{0.605}\\ \bottomrule \end{tabular}} \end{table} \end{frame} \begin{frame}{One Factor Item Selection Proportions: External Variable} \protect\hypertarget{one-factor-item-selection-proportions-external-variable}{} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{llllllll} \toprule Error Condition & External Condition & Item & Factor Loading & ACO & SA & TS & GA\\ \midrule & None & y3 & 0.580 & 0.78 & 0.48 & 0.46 & 0.82\\ & None & y5 & 0.534 & 0.75 & 0.39 & 0.47 & 0.98\\ & None & y4 & 0.448 & 0.65 & 0.45 & 0.50 & 0.21\\ & None & y2 & 0.408 & 0.34 & 0.34 & 0.41 & 0.29\\ & None & y6 & 0.393 & 0.29 & 0.55 & 0.55 & 0.29\\ \multirow{-6}{*}{\raggedright\arraybackslash Major} & None & y1 & 0.382 & 0.16 & 0.48 & 0.51 & 0.18\\ \cmidrule{1-8} & Moderate & y3 & 0.580 & 0.49 & 0.10 & 0.44 & 0.80\\ & Moderate & y5 & 0.534 & 0.17 & 0.13 & 0.42 & 0.98\\ & Moderate & y4 & 0.448 & 0.16 & 0.13 & 0.38 & 0.26\\ & Moderate & y2 & 0.408 & 0.12 & 0.14 & 0.47 & 0.43\\ & Moderate & y6 & 0.393 & 0.16 & 0.20 & 0.40 & 0.43\\ \multirow{-6}{*}{\raggedright\arraybackslash Major} & Moderate & y1 & 0.382 & 0.15 & 0.24 & 0.53 & 0.46\\ \cmidrule{1-8} \textbf{} & \textbf{No External Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.495} & \textbf{0.448} & \textbf{0.483} & \textbf{0.462}\\ \cmidrule{1-8} \textbf{} & \textbf{Moderate External Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.208} & \textbf{0.157} & \textbf{0.440} & \textbf{0.560}\\ \bottomrule \end{tabular}} \end{table} \end{frame} \begin{frame}{Three Factor Model Fit: No External Variable} \protect\hypertarget{three-factor-model-fit-no-external-variable}{} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}>{\raggedright\arraybackslash}p{0.7in}} \toprule Error Condition & Method & CFI & TLI & RMSEA\\ \midrule & ACO & \textbf{0.980} & \textbf{0.974} & \textbf{0.042}\\ & SA & \textbf{0.992} & \textbf{0.990} & \textbf{0.023}\\ & TS & \textbf{0.989} & \textbf{0.986} & \textbf{0.027}\\ \multirow{-4}{0.7in}{\raggedright\arraybackslash None} & GA & \textbf{0.979} & \textbf{0.973} & \textbf{0.042}\\ \cmidrule{1-5} & ACO & \textbf{0.972} & \textbf{0.964} & \textbf{0.050}\\ & SA & \textbf{0.990} & \textbf{0.987} & \textbf{0.026}\\ & TS & \textbf{0.984} & \textbf{0.979} & \textbf{0.035}\\ 
\multirow{-4}{0.7in}{\raggedright\arraybackslash Minor} & GA & \textbf{0.970} & \textbf{0.961} & \textbf{0.050}\\ \cmidrule{1-5} & ACO & \textbf{0.953} & 0.939 & 0.062\\ & SA & \textbf{0.989} & \textbf{0.986} & \textbf{0.027}\\ & TS & \textbf{0.961} & \textbf{0.950} & 0.053\\ \multirow{-4}{0.7in}{\raggedright\arraybackslash Major} & GA & 0.909 & 0.882 & 0.089\\ \bottomrule \end{tabular}} \end{table} \end{frame} \begin{frame}{Three Factor Model Fit: External Variable} \protect\hypertarget{three-factor-model-fit-external-variable}{} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}>{\raggedright\arraybackslash}p{1in}l} \toprule Error Condition & External Relationship & Method & CFI & TLI & RMSEA\\ \midrule \addlinespace[0.3em] \multicolumn{6}{l}{\textbf{None}}\\ & None & ACO & \textbf{0.980} & \textbf{0.974} & \textbf{0.042}\\ & None & SA & \textbf{0.992} & \textbf{0.990} & \textbf{0.023}\\ & None & TS & \textbf{0.989} & \textbf{0.986} & \textbf{0.027}\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & None & GA & \textbf{0.979} & \textbf{0.973} & \textbf{0.042}\\ \cmidrule{1-6} & Moderate & ACO & \textbf{0.977} & \textbf{0.970} & \textbf{0.047}\\ & Moderate & SA & \textbf{0.988} & \textbf{0.984} & \textbf{0.032}\\ & Moderate & TS & \textbf{0.983} & \textbf{0.978} & \textbf{0.037}\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & Moderate & GA & \textbf{0.977} & \textbf{0.970} & \textbf{0.046}\\ \cmidrule{1-6} \addlinespace[0.3em] \multicolumn{6}{l}{\textbf{Major}}\\ & None & ACO & \textbf{0.953} & 0.939 & 0.062\\ & None & SA & \textbf{0.989} & \textbf{0.986} & \textbf{0.027}\\ & None & TS & \textbf{0.961} & \textbf{0.950} & 0.053\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & None & GA & 0.909 & 0.882 & 0.089\\ \cmidrule{1-6} & Moderate & ACO & \textbf{0.964} & \textbf{0.953} & 0.057\\ & Moderate & SA & \textbf{0.984} & \textbf{0.979} & \textbf{0.036}\\ & Moderate & TS & \textbf{0.953} & 0.939 & 0.061\\ \multirow{-4}{1in}{\raggedright\arraybackslash \hspace{1em}} & Moderate & GA & 0.907 & 0.880 & 0.091\\ \bottomrule \end{tabular}} \end{table} \end{frame} \begin{frame}{Three Factor Item Selection Proportions: No External Variable} \protect\hypertarget{three-factor-item-selection-proportions-no-external-variable}{} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{lllllll} \toprule Error Condition & Item & Factor Loading & ACO & SA & TS & GA\\ \midrule & y1 & 0.9 & 0.83 & 0.36 & 0.49 & 0.94\\ & y5 & 0.7 & 0.13 & 0.58 & 0.50 & 0.24\\ & y9 & 0.9 & 0.52 & 0.34 & 0.41 & 0.87\\ & y13 & 0.7 & 0.20 & 0.42 & 0.48 & 0.08\\ & y17 & 0.9 & 0.85 & 0.25 & 0.28 & 0.65\\ \multirow{-6}{*}{\raggedright\arraybackslash Minor} & y21 & 0.7 & 0.45 & 0.36 & 0.44 & 0.39\\ \cmidrule{1-7} & y1 & 0.9 & 0.60 & 0.32 & 0.44 & 0.90\\ & y5 & 0.7 & 0.22 & 0.22 & 0.41 & 0.12\\ & y9 & 0.9 & 0.20 & 0.12 & 0.24 & 0.97\\ & y13 & 0.7 & 0.10 & 0.11 & 0.32 & 0.03\\ & y17 & 0.9 & 0.61 & 0.10 & 0.30 & 0.60\\ \multirow{-6}{*}{\raggedright\arraybackslash Major} & y21 & 0.7 & 0.30 & 0.10 & 0.27 & 0.41\\ \cmidrule{1-7} \textbf{Minor Error Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.497} & \textbf{0.385} & \textbf{0.433} & \textbf{0.528}\\ \cmidrule{1-7} \textbf{Major Error Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.338} & \textbf{0.162} & \textbf{0.330} & \textbf{0.505}\\ \bottomrule 
\end{tabular}} \end{table} \end{frame} \begin{frame}{Three Factor Item Selection Proportions: External Variable} \protect\hypertarget{three-factor-item-selection-proportions-external-variable}{} \begin{table}[H] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{llllllll} \toprule Error Condition & External Condition & Item & Factor Loading & ACO & SA & TS & GA\\ \midrule & None & y1 & 0.9 & 0.60 & 0.32 & 0.44 & 0.90\\ & None & y5 & 0.7 & 0.22 & 0.22 & 0.41 & 0.12\\ & None & y9 & 0.9 & 0.20 & 0.12 & 0.24 & 0.97\\ & None & y13 & 0.7 & 0.10 & 0.11 & 0.32 & 0.03\\ & None & y17 & 0.9 & 0.61 & 0.10 & 0.30 & 0.60\\ \multirow{-6}{*}{\raggedright\arraybackslash Major} & None & y21 & 0.7 & 0.30 & 0.10 & 0.27 & 0.41\\ \cmidrule{1-8} & Moderate & y1 & 0.9 & 0.37 & 0.25 & 0.51 & 0.84\\ & Moderate & y5 & 0.7 & 0.15 & 0.12 & 0.37 & 0.16\\ & Moderate & y9 & 0.9 & 0.32 & 0.16 & 0.30 & 0.97\\ & Moderate & y13 & 0.7 & 0.06 & 0.16 & 0.20 & 0.03\\ & Moderate & y17 & 0.9 & 0.29 & 0.10 & 0.37 & 0.96\\ \multirow{-6}{*}{\raggedright\arraybackslash Major} & Moderate & y21 & 0.7 & 0.04 & 0.09 & 0.37 & 0.04\\ \cmidrule{1-8} \textbf{} & \textbf{No External Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.338} & \textbf{0.162} & \textbf{0.330} & \textbf{0.505}\\ \cmidrule{1-8} \textbf{} & \textbf{Moderate External Average Proportion:} & \textbf{} & \textbf{} & \textbf{0.205} & \textbf{0.147} & \textbf{0.353} & \textbf{0.500}\\ \bottomrule \end{tabular}} \end{table} \end{frame} \hypertarget{discussion}{% \section{Discussion}\label{discussion}} \begin{frame}{Best Performing Methods} \protect\hypertarget{best-performing-methods}{} \begin{enumerate} \tightlist \item Removal of specific problematic items: \textbf{SA} \item Model fit of final scales: \textbf{SA} \item Reliability of final scales: About equivalent (\emph{ACO} somewhat higher) \item Time to converge: About equivalent (\emph{TS} one factor longer) \end{enumerate} Overall: SA consistently had good\footnote<.->{CFI \textgreater{} .95, TLI \textgreater{} .95, RMSEA \textless{} .05} fit; ACO \& TS consistently had at least adequate\footnote<.->{CFI \textgreater{} .90, TLI \textgreater{} .90, RMSEA \textless{} .08} fit; GA produced poor fit in the presence of major error \end{frame} \begin{frame}{Factors Affecting Comparisons} \protect\hypertarget{factors-affecting-comparisons}{} \small \begin{enumerate} \item Population model type\\ -- model fit: one factor < three factor\\ -- Minimal effect on time to converge \item Severity of problematic items\\ -- Decreased model fit, \textbf{SA} excluded \item Strength of relationship to external criterion\\ -- Somewhat attenuates effect of error only for \textbf{ACO} \end{enumerate} \end{frame} \begin{frame}{For the Future} \protect\hypertarget{for-the-future}{} \begin{block}{Suggestions for Applied Researchers} \begin{enumerate}[A] \item Apply each algorithm to your sample--grab some coffee or tea while they each run! \item Compare the resulting short forms against one another ("face validity" comparisons). \item When possible, test against a second sample (cross-validation). \end{enumerate} \end{block} \begin{block}{Future Research Questions} \begin{enumerate} \tightlist \item How well do the short forms created by each algorithm generalize to new samples? \item How do additional manipulations (e.g., population models, types of errors) affect the algorithms? 
\end{enumerate} \end{block} \end{frame} \begin{frame}{Corresponding Author} \protect\hypertarget{corresponding-author}{} \center{\huge{[email protected]}} \end{frame} \hypertarget{references}{% \section{References}\label{references}} \begin{frame}[allowframebreaks]{References} \protect\hypertarget{references-1}{} \tiny \hypertarget{refs}{} \leavevmode\hypertarget{ref-byrne2011development}{}% Byrne, G. J., \& Pachana, N. A. (2011). Development and validation of a short form of the geriatric anxiety inventory--the gai-sf. \emph{International Psychogeriatrics}, \emph{23}(1), 125--131. \leavevmode\hypertarget{ref-dennis2003breastfeeding}{}% Dennis, C.-L. (2003). The breastfeeding self-efficacy scale: Psychometric assessment of the short form. \emph{Journal of Obstetric, Gynecologic, \& Neonatal Nursing}, \emph{32}(6), 734--744. \leavevmode\hypertarget{ref-dreo2006metaheuristics}{}% Dréo, J., Pétrowski, A., Siarry, P., \& Taillard, E. (2006). \emph{Metaheuristics for hard optimization: Methods and case studies}. Springer Science \& Business Media. \leavevmode\hypertarget{ref-leite2005validation}{}% Leite, W. L., \& Beretvas, S. N. (2005). Validation of scores on the marlowe-crowne social desirability scale and the balanced inventory of desirable responding. \emph{Educational and Psychological Measurement}, \emph{65}(1), 140--154. \leavevmode\hypertarget{ref-lim2013development}{}% Lim, S. Y., \& Chapman, E. (2013). Development of a short form of the attitudes toward mathematics inventory. \emph{Educational Studies in Mathematics}, \emph{82}(1), 145--164. \leavevmode\hypertarget{ref-noble2013short}{}% Noble, W., Jensen, N. S., Naylor, G., Bhullar, N., \& Akeroyd, M. A. (2013). A short form of the speech, spatial and qualities of hearing scale suitable for clinical use: The ssq12. \emph{International Journal of Audiology}, \emph{52}(6), 409--412. \leavevmode\hypertarget{ref-Raborn2018}{}% Raborn, A., \& Leite, W. (2018). \emph{ShortForm: Automatic short form creation}. Retrieved from \url{https://github.com/AnthonyRaborn/ShortForm} \leavevmode\hypertarget{ref-RCT2018}{}% R Core Team. (2018). \emph{R: A language and environment for statistical computing}. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from \url{https://www.R-project.org/} \leavevmode\hypertarget{ref-Sahdra2016}{}% Sahdra, K., B., Ciarrochi, J., Parker, P., \ldots{} L. (2016). Using genetic algorithms in a large nationally representative american sample to abbreviate the Multidimensional Experiential Avoidance Questionnaire. \emph{Frontiers in Psychology}, \emph{7}(189), 1--14. Retrieved from \url{http://www.frontiersin.org/quantitative_psychology_and_measurement/10.3389/fpsyg.2016.00189/abstract} \leavevmode\hypertarget{ref-tschannen2001teacher}{}% Tschannen-Moran, M., \& Hoy, A. W. (2001). Teacher efficacy: Capturing an elusive construct. \emph{Teaching and Teacher Education}, \emph{17}(7), 783--805. \leavevmode\hypertarget{ref-veale2014edinburgh}{}% Veale, J. F. (2014). Edinburgh handedness inventory--short form: A revised version based on confirmatory factor analysis. \emph{Laterality: Asymmetries of Body, Brain and Cognition}, \emph{19}(2), 164--177. \leavevmode\hypertarget{ref-Venables2002}{}% Venables, W. N., \& Ripley, B. D. (2002). \emph{Modern applied statistics with s} (Fourth). New York: Springer. Retrieved from \url{http://www.stats.ox.ac.uk/pub/MASS4} \leavevmode\hypertarget{ref-wester2012development}{}% Wester, S. R., Vogel, D. L., O'neil, J. M., \& Danforth, L. (2012). 
Development and evaluation of the gender role conflict scale short form (grcs-sf). \emph{Psychology of Men \& Masculinity}, \emph{13}(2), 199. \end{frame} \end{document}
{ "alphanum_fraction": 0.6923696544, "avg_line_length": 30.4481605351, "ext": "tex", "hexsha": "73f58a32e4cd1d6af440a877e769719656d00164", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "aae533a3871e3f04a44edf63b870e864724f8df1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AnthonyRaborn/conference-presentations", "max_forks_repo_path": "FERA 2018 Short Form/FERA_2018_Presentation_Short.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aae533a3871e3f04a44edf63b870e864724f8df1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AnthonyRaborn/conference-presentations", "max_issues_repo_path": "FERA 2018 Short Form/FERA_2018_Presentation_Short.tex", "max_line_length": 229, "max_stars_count": null, "max_stars_repo_head_hexsha": "aae533a3871e3f04a44edf63b870e864724f8df1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AnthonyRaborn/conference-presentations", "max_stars_repo_path": "FERA 2018 Short Form/FERA_2018_Presentation_Short.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10142, "size": 27312 }
\documentclass[10pt]{article} \usepackage[margin=2cm]{geometry} \usepackage{amsmath,amsfonts,amssymb} \usepackage{url} \usepackage{graphicx} \usepackage{subfigure} \usepackage{hyperref} \usepackage[round]{natbib} \renewcommand{\cite}{\citep} \setlength{\bibhang}{0pt} \usepackage{parskip} \setlength{\parindent}{0 pt} \setlength{\parskip}{5 pt} %\bibliographystyle{plos2009} \usepackage{xspace} \newcommand{\dadi}{$\partial$a$\partial$i\xspace} \newcommand{\bolddadi}{$\boldsymbol{\partial}$a$\boldsymbol{\partial}$i\xspace} \newcommand{\Nref}{\ensuremath{N_\text{ref}}\xspace} \newcommand{\ms}{\emph{ms}\xspace} \usepackage{color} \newcommand{\comment}[1]{{\color{blue}APR: #1}} \newcommand{\mold}{\texttt{mold}\xspace} \usepackage{listings} \lstset{ basicstyle=\ttfamily, language=Python, showstringspaces=False, aboveskip=0pt, captionpos=b, belowskip=0pt } \newcommand{\py}[1]{\lstinline[breaklines=true,language=Python, showstringspaces=False]@#1@} \newcommand{\ccode}[1]{\lstinline[breaklines=true,language=C, showstringspaces=False]@#1@} \newcommand{\shell}[1]{\lstinline[breaklines=true, language=csh, showstringspaces=False]@#1@} \renewcommand{\lstlistingname}{Example Code} \renewcommand{\lstlistlistingname}{List of \lstlistingname s} \newcommand{\E}{\mathbb{E}} % For calibration, lines can be 60 characters long in % lstlistings. %\begin{lstlisting} %******************************************************* %\end{lstlisting} \begin{document} \title{\texttt{Moments LD} user manual\\ \normalsize Corresponding to version 0.0.1} \author{Aaron Ragsdale \\ Contact: [email protected]} \date{\today} \maketitle \tableofcontents \clearpage \lstlistoflistings \clearpage \section{Introduction to \texttt{moments.LD}} Welcome to \texttt{moments.LD}, a program for simulating linkage disequilibrium statistics. \texttt{moments.LD}, or \mold, can compute a large set of informative LD statistics for many populations, and performs likelihood-based demographic inference using those statistics. There are three primary features of \mold to enable LD-based demographic inference: reading and parsing data, building demographic models, and inferring the parameters of those models by comparing model predictions to data. Typically, we use biallelic SNP data, along with a recombination map, to compute two-locus statistics over a range of genetic distances. We then use \mold to compute expectations for those statistics under the demographic models we want to test, which can include multiple populations with variable migration, splits and mergers, and population size changes. Using a likelihood-based inference approach, we optimize those models to find the set of parameters that best fit the data. I've tried to make parsing data and defining demographic models as painless as possible, though the complexity of the program does require some amount of script-writing and interaction. Luckily, \mold is written in Python, a friendly and powerful programming language. If you are already familiar with \dadi or \emph{moments}, or Python in general, you are in a good position to dive right in to \mold. If you have limited Python experience, this manual should provide the background and examples to get you up to speed and productive with \mold. %Finally, \mold is a living, breathing, evolving \comment{thing} \subsection{Getting help and helping us} Undoubtedly, there will be bugs. If you find a bug in \mold, or more generally if you find certain aspects of the program to be unintuitive or difficult to use, I would appreciate the feedback. 
Please submit a bug report at
\url{https://bitbucket.org/simongravel/moments/issues}, and I will try to
address the issue in a timely manner. Similarly, if you have suggestions for
improved functionality or feature requests, those can be submitted in the
issues as well or you can contact me directly.

As we do our own research, \textit{moments} and \mold are constantly
improving. Our philosophy is to include any code we develop for our own
projects that may be useful to others. If you develop \textit{moments}-related
code that you think might be useful to others, please let me know so I can
include it with the main distribution.

\section{LD statistics}

Patterns of linkage disequilibrium (LD) are informative about evolutionary
history, for example for inferring recent admixture events and population size
changes or localizing regions of the genome that have experienced recent
selective events. LD is commonly measured as the covariance (or correlation)
of alleles co-occurring on a haplotype. The covariance ($D$) is
\begin{align*}
D = \text{Cov}(A,B) & = f_{AB} - pq \\
& = f_{AB}f_{ab} - f_{Ab}f_{aB},
\end{align*}
and the correlation ($r$) is
\begin{align*}
r = \frac{D}{\sqrt{p(1-p)q(1-q)}} .
\end{align*}

We think of expectations of these quantities as though we average over many
realizations of the same evolutionary process, but in reality we have only a
single observation for any given pair of SNPs. Therefore, in practice we take
the averages of LD statistics over many independent pairs of SNPs.

$\E[D]$ is zero genome-wide, so LD is often measured by the variance of $D$
($\E[D^2]$) or the squared correlation ($r^2$), where
\begin{align*}
r^2 = \frac{D^2}{p(1-p)q(1-q)}.
\end{align*}
Because it is difficult to compute expectations for $\E[r^2]$ under even
simple evolutionary scenarios, and because it is difficult to accurately
estimate $\widehat{r^2}$ from data, we use $\E[D^2]$ and related statistics to
compare model predictions for LD to data.

\subsection{Hill-Robertson statistics}

Hill and Robertson introduced a recursion for $\E[D^2]$ that allows for
variable recombination rate between loci and population size changes over time
\cite{Hill1968}. To solve for $\E[D^2]$, this system requires additional LD
statistics, which we call $Dz = D(1-2p)(1-2q)$ and $\pi_2 = p(1-p)q(1-q)$,
where $p$ and $q$ are the allele frequencies at the left and right loci,
respectively. This system also relies on heterozygosity ($H$), so from it we
can compute the vector of statistics
$$y=\begin{pmatrix}
\E[D^2] \\ \E[Dz] \\ \E[\pi_2] \\ \E[H]
\end{pmatrix}.$$

Instead of computing $\E[r^2]$, which is an expectation of ratios, we use the
related statistic $\sigma_D^2 = \frac{\E[D^2]}{\E[\pi_2]}$
\cite{Hill1968,Ohta1971}. This statistic has the advantage that its
expectation can be computed from the Hill-Robertson recursion, and we can
accurately compute it from either phased or unphased data.

\subsection{Multi-population LD statistics}

We extended the Hill-Robertson system to consider LD statistics for multiple
populations \cite{Ragsdale2018}. Here, we can model population size changes,
splits, mergers, and migration events (pulse or continuous).
The statistics we consider take the form
$$ \mathbf{z} = \begin{pmatrix}
\E[D_i D_j] \\ \E[D_i z_{j,k}] \\ \E[{\pi_2}(i,j;k,l)] \\ \E[H_{i,j}]
\end{pmatrix}, $$
where $i,j,k,l$ index populations, and
\begin{align*}
D_iz_{j,k} & = D_i (1-2p_j)(1-2q_k), \\
{\pi_2}(i,j;k,l) & = \frac{1}{4} p_i(1-p_j) q_k(1-q_l)
+ \frac{1}{4} p_i(1-p_j) q_l(1-q_k) \\
&\hspace{15 pt} + \frac{1}{4} p_j(1-p_i) q_k(1-q_l)
+ \frac{1}{4} p_j(1-p_i) q_l(1-q_k) , \\
H_{i,j} & =\frac{1}{2}p_i(1-p_j) + \frac{1}{2}p_j(1-p_i).
\end{align*}
From these, we can compare a large number of two-locus statistics. In
practice, we work with $\sigma_D^2$-like statistics, which are normalized by
the $\pi_2$ statistic from one of the populations.

\section{Getting started}

\subsection{Downloading \py{moments} and \mold}

\mold is packaged and released with \py{moments}, a Python program for running
analyses based on the allele frequency spectrum \cite{Jouganous2017}.
\py{moments} is available at \url{https://bitbucket.org/simongravel/moments}.
Because \mold continues to be developed and improved, it currently lives on
the LD branch of the \py{moments} repository. For this reason, I recommend
cloning the \py{moments} repository and switching to the LD branch. To do
this, in the Terminal navigate to the parent directory where you want to copy
\py{moments} and run\\
\py{git clone https://bitbucket.org/simongravel/moments.git}\\
To switch to the LD branch, enter the newly created \py{moments} directory and
run \\
\py{git checkout LD}\\
Before installing \py{moments}, we need to ensure that \py{Python} and the
library dependencies listed below are installed.

\subsubsection{Dependencies}

\py{moments} and \mold depend on a number of Python libraries. I strongly
recommend using Python 3 (support for Python 2 will be dropped in the near
future).
\begin{enumerate}
\item Absolute dependencies: \py{Python}, \py{numpy}, \py{scipy}, \py{cython},
  \py{mpmath}, \py{pickle}
\item Non-essential dependencies:
  \begin{enumerate}
  \item For Plotting, we use \py{matplotlib} (version $\geq$ 0.98.1)
  \item For Parsing, we take advantage of \py{pandas}, \py{hdf5}, and
    \py{scikit-allel} (version $\geq$ 1.2.0)
  \item For Demography building, we use \py{networkx}
  \end{enumerate}
\end{enumerate}
I recommend that you install IPython as well.

The easiest way to obtain all these dependencies is to use a package manager,
such as Conda (\url{https://conda.io/en/latest/}) or Enthought
(\url{https://www.enthought.com/product/enthought-python-distribution/}). To
install all required libraries, we can use the requirements.txt file, which
lists each of these dependencies. If you are using Conda, for example,
navigate to the \py{moments} directory and run\\
\py{conda install --file requirements.txt}

\subsubsection{Installing}
Once I'm happy that I have a usable script, I can call it from the terminal for longer runs of optimization or parsing, using \py{python script.py}. Note that we will need to import \py{moments.LD} to be able to use it: \py{import moments.LD as mold}. \section{LDstats objects} \mold represents two-locus statistics using \py{mold.LDstats} objects, which stores the Hill-Robertson statistics and heterozygosity for one or more populations and for any set of recombination rates. The simplest way to create an \py{LDstats} object is by defining a set of statistics by hand. For a single population, the order of the LD statistics is $[\E[D^2], \E[Dz], \E[\pi_2]]$ along with heterozygosity $\E[H]$. If the statistics are defined using variables \py{D2}, \py{Dz}, \py{pi2}, and \py{H}, we would simply call \py{y = mold.LDstats([[D2, Dz, pi2], [H]], num_pops = 1)}. For example, if we set \py{D2, Dz, pi2, H = 1e-7, 1e-8, 2e-7, 1e-3} in this example, and then if we print \py{y}, we would see as the output: \py{Out[1]: LDstats([[1.e-07 1.e-08 2.e-07]], [0.001], num_pops=1, pop_ids=None)} To see which statistics each value corresponds to, we can call \py{y.names()}, which would output: \py{Out[2]: (['DD_1_1', 'Dz_1_1_1', 'pi2_1_1_1_1'], ['H_1_1'])} Typically, we either compute the \py{LDstats} from a demographic model, or we build the object from data. We'll walk through both in later sections. First, we'll use build in model functions to explore what information is stored in \py{LDstats} objects, and how to manipulate the objects. To obtain the expected statistics at equilibrium (steady-state demography), we can call \py{mold.Demographics1D.snm} (snm stands for standard neutral model). We can specify the per-base population size-scaled mutation rate $\theta = 4N_\text{ref}\mu$ (default set to $\theta=0.001$) and population size-scaled recombination rates separating loci $\rho=4 N_\text{ref} r$ (default set to \py{None}). \mold can handle any number of recombination rates (zero, one, or multiple), and the returned \py{LDstats} object will contain LD statistics for as many recombination rates as you gave it. To give some examples,\\ \py{In [3]: mold.Demographics1D.snm()}\\ \py{Out[3]: LDstats([], [0.001], num_pops=1, pop_ids=None)} \py{In [4]: mold.Demographics1D.snm(rho=0)}\\ \py{Out[4]: LDstats([[1.38888889e-07 1.11111111e-07 3.05555556e-07]], [0.001], num_pops=1, pop_ids=None)} \py{In [5]: mold.Demographics1D.snm(rho=[0,1,2,10], theta=0.01)}\\ \py{Out[5]:}\\ \py{LDstats([[1.38888889e-05 1.11111111e-05 3.05555556e-05]}\\ \py{ [8.59375000e-06 6.25000000e-06 2.81250000e-05]}\\ \py{ [6.25000000e-06 4.16666667e-06 2.70833333e-05]}\\ \py{ [2.01612903e-06 8.06451613e-07 2.54032258e-05]], [0.01], num_pops=1, pop_ids=None)} In this last example, the four sets of LD statistics correspond to $\rho=0,1,2,$ and $10$, respectively, while expected heterozygosity is only shown a single time ($\E[H]=0.01$). In each case, \py{num_pops} is automatically set to \py{1}, and because we didn't specify population IDs, \py{pop_ids = None}. \py{y.LD()} will return just the LD statistics, while \py{y.H()} returns just the heterozygosity statistics. \section{Parsing and importing data} \mold can import data and compute LD statistics from either phased or unphased sequencing data. Typically, data is stored in a VCF formatted file. We parse the VCF using \py{scikit-allel} to get genotype arrays, which we then iterate over to count two locus haplotype (phased data) or genotype (unphased data) frequencies for pairs of SNPs. 
We then use two-locus haplotype or genotype counts to compute statistics in the multi-population Hill-Robertson basis. If we are binning data based on recombination distance, we will also need a recombination map, and if there are multiple populations, we will need to provide a file that identifies individuals with their population (formats for each are described below). If we are interested in data from a subset of the full data (say we want to keep only intergenic variants, or want to focus on a particular region), we can provide a bed file that defines the regions or features we should parse. \subsection{Computing statistics from genotype arrays} To start simply, we might be interested in computing pairwise LD statistics for a given set of genotypes or between two sets of genotypes. A genotype array $G$ has size $L\times n$, where $L$ is the number of SNPs and $n$ is the number of sequenced diploid individuals, so that entry $(i,j)$ is the genotype state of individual $j$ at SNP $i$. Genotype states are either $0$ (homozygous reference), $1$ (heterozygous), or $2$ (homozygous alternate). \py{mold.Parsing} counts and computes statistics from genotype arrays using the approach described in \cite{Ragsdale2019}. To get pairwise statistics for each possible pair (all $L(L-1)/2$ of them), use \\\py{mold.Parsing.compute_pairwise_stats(G)}, \\ where \py{G} is the genotype matrix as described above. This will output three vectors, each of length $L(L-1)/2$, for $D^2$, $Dz$, and $\pi_2$, in that order. To convert to a (symmetric) $L\times L$ matrix of pairwise statistics, we can use \py{numpy}'s \py{triu} function. To get the average Hill-Robertson statistics over all pairwise comparisons for a block of SNPs, use \\\py{mold.Parsing.compute_average_stats(G)}, \\which returns the mean values of $D^2$, $Dz$, and $\pi_2$, in that order. To compute Hill-Robertson statistics between two sets of genotype data, stored in two genotype arrays \py{G1} and \py{G2}, we can similarly call\\ \py{mold.Parsing.compute_pairwise_stats_between(G1, G2)}\\ (which returns vectors of size $L_1L_2$, where $L_i$ is the number of SNPs in genotype array $G_i$) and \\\py{mold.Parsing.compute_average_stats_between(G1, G2)}. \subsection{Computing statistics from a VCF} In our analyses, we are interested in computing two-locus statistics from genotype data stored in a VCF for pairs of SNPs at varying genetic distances. To parse a VCF file and output Hill-Robertson statistics, we use \py{mold.Parsing.compute_ld_statistics}. The only required input for this function is the VCF filename. Otherwise, there are a number of options and arguments that can be passed to this function. \begin{itemize} \item \py{bed_file} : default set to \py{None}. To only parse variants in given regions, we specify those regions in a bed file. \item \py{rec_map_file} : default set to \py{None}. If we pass a recombination map filename, we can specify the formatting of the recombination map file (see subsection below). \item \py{pop_file} : default set to \py{None}. If \py{None}, it uses data from all individuals as a single population. \item \py{pops} : If we pass a population file, we can specify which populations to parse using \py{pops=[pop1, pop2, ...]}. \item \py{bp_bins} : default set to \py{None}. If we do not pass a recombination map, we will parse the data based on base pair distance between SNPs. Here, we pass a list of bin edges. 
For example, to parse data into bins of $(0,10\text{ kb}]$, $(10\text{ kb},20\text{ kb}]$, and $(20\text{ kb},30\text{ kb}]$, we would set \py{bp_bins = [0, 10000, 20000, 30000]}. \item \py{use_genotypes} : default set to \py{True}, which we used with unphased data. Set to \py{False} for phased data. \item \py{stats_to_compute} : default set to \py{None}. If \py{None}, computes all statistics in the multi-population H-R basis. We can also specify just a subset of these statistics, if desired. \item \py{report} : default set to \py{True}. If \py{True}, outputs progress report as it parses the VCF file. \end{itemize} In the simplest case without a recombination map, we can compute statistics from a VCF file based on physical (bp) distance. To do this, for a single population, we only need to specify the VCF filename and bin edges in base pairs:\\ \py{ld_stats = mold.Parsing.compute_ld_statistics('path/to/vcf/file.vcf.gz', bp_bins=[0, 10000, 20000, 30000])}. \subsection{Multiple populations} To parse data for multiple populations, we need to also include a file that tells us which population each individual belongs to. In the population file, each row corresponds to an individual in the VCF file (individual names must match to those labeled in the VCF header). In each row, the first column is the individual ID, and the second column is the population name. Additional columns are ignored. We could then call \\ \py{ld_stats = mold.Parsing.compute_ld_statistics('path/to/vcf/file.vcf.gz', pop_file='path/to/pop/file.txt', pops=[pop1, pop2, pop3], bp_bins=[0, 10000, 20000, 30000])}. \subsection{Using a recombination map} Because recombination rates can vary along the genome, we often want to parse two-locus data by the genetic instead of physical distance separating SNPs. To use a recombination map to parse data, we specify the file path and name using \py{rec_map_file}. In the recombination map file, the first column corresponds to physical positions, and other columns correspond to the genetic position in either Morgans or cM. If there are multiple maps in the file, we can specify the map we want to use by the map name in the header. For example, the first and last few lines of a set of recombination maps of chromosome 22 for human reference genome build hg19 (available here: \url{https://www.well.ox.ac.uk/~anjali/AAmap/}) are\\\small \py{ "Physical_Pos" "deCODE" "COMBINED_LD" "YRI_LD" "CEU_LD" "AA_Map" "African_Enriched" "Shared_Map"}\\\py{ 16051347 0 0 0 0 0 0 0}\\\py{ 16052618 0 0.0099 0.0083 0.0163 0 0 0}\\\py{ 16053624 0 0.0177 0.0148 0.0293 0 0 0}\\\py{ 16053659 0 0.0179 0.0151 0.0297 0 0 0}\\\py{ 16053758 0 0.0187 0.0157 0.031 0 0 0}\\\py{ ...}\\\py{ 51217134 55.5922 73.6005 75.6968 72.5384 68.9516 67.8206 54.8248}\\\py{ 51219006 55.5922 73.6017 75.6982 72.5398 68.9516 67.8206 54.8248}\\\py{ 51222100 55.5922 73.6037 75.7006 72.5421 68.9516 67.8206 54.8248}\\\py{ 51223637 55.5922 73.6047 75.7018 72.5433 68.9516 67.8206 54.8248}\\\py{ 51229805 55.5922 73.6088 75.7068 72.5479 68.9516 67.8206 54.8248}\\ \normalsize Cumulative distances are given in cM. 
If we want to use the \cite{Hinch2011} African American admixture map (``AA\_Map'') from this file, setting recombination bins using \py{r_bins} as, for example,\\ \py{r_bins = [0, 1e-5, 2e-5, 5e-5, 1e-4, 2e-4, 1e-3]}\\ we call:\\ \py{ld_stats = mold.Parsing.compute_ld_statistics('path/to/vcf/file.vcf.gz', rec_map_file='path/to/rec/map/file', map_name='AA_Map', map_sep=' ', cM=True, r_bins=r_bins)}.\\ \py{map_sep} by default is set to tab separation, so we'll need tell the parsing function that this map is separated by spaces. \subsection{Parsing example} \comment{Explain \py{msprime_two_pop_parsing.py}} \subsection{Creating bootstrap datasets}\label{section:bootstrap} \subsubsection{Computing sets of LD statistics} \subsubsection{Computing statistic averages and covariances} \section{Specifying a model} In \mold, we want to build and test demographic models by computing LD statistics for a specified model. \mold allows two ways to specify demographic models, either through direct manipulation of \py{LDstats} objects, or by defining a demography through a graphical structure. In this section, we'll describe and give examples for each. We'll start with the direct manipulation of \py{LDstats} objects to give intuition about how \mold computes LD statistics under different demographic scenarios. For more than just a couple populations, it is far easier to implement models using the Demography module, which builds the demography from a user-defined directed graph. \subsection{Implementation} Implementation should be familiar to \dadi and \py{moments} users. Once we have defined a \py{LDstats} object, we can perform demographic functions and manipulations on it and integrate it forward in time. We can start by specifying the initial distribution, before applying demographic events:\\ \py{y = mold.Demographics1D.snm(rho = [0,1,10], theta=0.001)}.\\ \py{y} is now the single population steady-state set of LD statistics for the specified $\rho$ and $\theta$ parameters. From here, we can integrate forward in time with a specified relative population size, or we can split the population into two daugher populations. To integrate forward in time, with a single population, we call:\\ \py{y.integrate([nu], T, rho=[0,1,10], theta=0.001)},\\ where \py{nu} is a relative population size and \py{T} is the integration time (in genetic units). To split into two, we call:\\ \py{y = y.split(1)}.\\ In the \py{split} function, we specify the population number we want to split, and a new population is appended to the end of the population list. For example, if \py{y} currently has two populations, and we want to split population 1 into 1A and 1B, we call \py{y.split(1)}, which returns a new \py{LDstats} object with populations ordered as (1A, 2, 1B). Below, I write out some examples for defining demographic models (a bottleneck model with recent growth, an isolation with migration model, and the Gutenkunst out-of-Africa model). \subsection{The \texttt{Demography} builder} Manual specification of demographic functions becomes increasingly difficult as the number of populations in the model grows. With more than two or three populations, events that occur along different branches or lineages can switch their order, so that writing a flexible demographic model requires a host of pesky, bug-prone, if-then statements. \mold introduces a more user-friendly method to define demographic models through the \py{Demography} module. 
\py{mold.Demography} uses \py{networkx} to represent demographic models as directed acyclic graphs, where nodes specify populations with attributes (such as size functions, migration rates, frozen ancient sample, etc), and edges define relationships between populations. To start, we \py{import networkx as nx} and define an empty graph as \py{G = nx.DiGraph()}, which will store nodes (populations) and edges (relationships). To add a population, we use \py{G.add_node}, and specify the population name, the population size or size function, and the time the population persists, either up to the simulation ending, splitting or transitioning to other population(s), or extinction some time before present. We can also specify (optional) migration rates \emph{from} this population \emph{to} other populations. To specify migration from other populations to this population, we add migration rates to that other population. \subsubsection{Example: a two-population IM model} For example, let's set up a model where a single population doubles in size for time $0.1$ before splitting into two populations (pop1 and pop2). These split populations have different relative population sizes (say $3.0$ and $0.5$) and they remain split for time $0.2$ with symmetric migration rate $1.0$. We first set the initial `root' population, which starts at equilibrium (set relative size $\nu=1$ and $T=0$): \\\py{G.add_node('root', nu=1.0, T=0)}\\ We name the pre-split population `pop0', and set \\ \py{G.add_node('pop0', nu=2.0, T=0.1)}\\ We add the two split populations, along with migration rates, as \\ \py{G.add_node('pop1', nu=3.0, T=0.2, m=\{'pop2': 1.0\})} \\ and \\ \py{G.add_node('pop2', nu=0.5, T=0.2, m=\{'pop1': 1.0\})} Finally, we set the graph edges, which define the topology of the demography: \\ \py{G.add_edges_from([('root', 'pop0'), ('pop0', 'pop1'), ('pop0', 'pop2')])}\\ A similar model (without the pre-split population size change) is given in Example~Code~\ref{lst:IM_demo}. To simulate LD statistics on this demography for a given set of recombination and mutation rates, we call \\ \py{y = mold.Demography.evolve(G, rho=[0,1], theta=0.001, pop_ids=['pop1', 'pop2'])}\\ where \py{pop_ids} specifies the order we would like the statistics to be output. \subsection{Units} \mold, like \texttt{moments} and \dadi, uses genetic units instead of physical units to define models. Time and rates are typically measured in or scaled by $2N_\text{ref}$: \begin{itemize} \item Time is given by units of $2N_\text{ref}$ generations. \item The mutation rate is $\theta=4N_\text{ref}u$, where $u$ is the per-base per-generation mutation rate. \item Migration rates are $2N_\text{ref}m_{ij}$, where $m_{ij}$ is the per lineage migration rate from population $i$ to population $j$. In other words, $m_{ij}$ is the probability that any given lineage in population $j$ had its parent in population $i$ in the previous generation. \item The recombination rate is $\rho=4N_\text{ref}r$, where $r$ is the probability of a recombination event occurring between two loci in a lineage in one generation. \end{itemize} \section{Example Code} In this section, we give some example demographic functions. Some are specified by initializing the equilibrium \py{LDstats} object, and then applying demographic events such as size changes and splits, and integrating forward in time. Others are specified using the \py{Demography} module. 
Here, we use \py{networkx} to define a population topology and attributes for each population, and then we call \py{mold.Demography.evolve} to compute the expected statistics. In the directory moments/examples/LD, you will find additional example code, including a demographic model for the out of Africa model augmented by Neanderthal introgression into the Eurasian population and a deep split and subsequent migration with an archaic population within Africa (similar to the demographic model inferred in \cite{Ragsdale2019}. Also in the examples directory, we use a subset of populations from the publicly available Simons Diversity Project (Mbuti, Punjabi, Dai, and Papuan) \cite{Simons} along with high coverage Neanderthal and Denisovan individuals \cite{Pruefer} to parse LD statistics. We then fit a demographic model with recent events (modern human splits, size changes, and migration rates) along with the timing of splits between modern humans, Neanderthal and Denisovans, and a hypothesized archaic African population, the split between Neanderthal and Denisovan populations, and the timing and magnitude of admixture from archaic populations into human lineages. The topology of this model is shown in Figure~\ref{fig:archaic_demography}. \clearpage \begin{lstlisting}[caption={\textbf{Bottleneck model:} At time \py{T} in the past, an equilibrium population goes through a bottleneck of depth \py{nuB}, recovering to relative size \py{nuF} through exponential growth. In all examples listed here, we need to \py{import numpy as np} and \py{import moments.LD as mold}.}, float, label={lst:bottleneck}] def bottleneck_growth(params, rho=None, theta=0.001): """ Instantaneous bottleneck to size nuB followed by exponential growth to size nuF over time T """ nuB, nuF, T = params nu_func = lambda t: [nuB * np.exp(np.log(nuF/nuB) * t / T)] y = mold.Demographics1D.snm(rho=rho, theta=theta) y.integrate(nu_func, T, rho=rho, theta=theta) return y \end{lstlisting} \begin{lstlisting}[caption={\textbf{IM model:} One population splits into two some time in the past. Each population can have a new size, with symmetric and continuous migration between populations.}, float, label={lst:IM}] def IM(params, rho=None, theta=0.001): """ Population split T generations ago with relative sizes nu1 and nu2, and symmetric migration rates m """ nu1, nu2, T, m = params y = mold.Demographics1D.snm(rho=rho, theta=theta) y = y.split(1) y.integrate([nu1, nu2], T, m=[[0,m],[m,0]], rho=rho, theta=theta) return y \end{lstlisting} \begin{lstlisting}[caption={\textbf{IM model using Demography:} The same isolation with migration model, defined using the graphical representation of the Demography method.}, float, label={lst:IM_demo}] def IM_graph(params, rho=None, theta=0.001, pop_ids=['pop1', 'pop2']): """ Population split T generations ago with relative sizes nu1 and nu2, and symmetric migration rates m """ nu1, nu2, T, m = params G = nx.DiGraph() G.add_node('root', nu=1.0, T=0) G.add_node('pop1', nu=nu1, T=T, m={'pop2': m}) G.add_node('pop2', nu=nu2, T=T, m={'pop1': m}) G.add_edges_from([('root', 'pop1'), ('root', 'pop2')]) y = mold.Demography.evolve(G, theta=theta, rho=rho, pop_ids=pop_ids) return y \end{lstlisting} \begin{lstlisting}[caption={\textbf{Out of Africa model:} The Gutenkunst Out-of-Africa model \cite{Gutenkunst2009}, with 13 parameters as originally defined. 
This model has three representative continental populations (often YRI, CEU, and CHB), with an out of Africa split between Eurasian and African populations, followed by a split between European and East Asian populations, with symmetric migration rates and size changes along each branch.}, float, label={lst:ooa}] def OutOfAfrica(params, rho=None, theta=0.001): """ The 13 parameter out of Africa model introduced in Gutenkunst et al. (2009) """ (nuA, TA, nuB, TB, nuEu0, nuEuF, nuAs0, nuAsF, TF, mAfB, mAfEu, mAfAs, mEuAs) = params y = mold.Demographics1D.snm(rho=rho, theta=theta) y.integrate([nuA], TA, rho=rho, theta=theta) y = y.split(1) mig_mat = [[0,mAfB], [mAfB,0]] y.integrate([nuA, nuB], TB, m = mig_mat, rho=rho, theta=theta) y = y.split(2) nu_func = lambda t: [nuA, nuEu0 * np.exp(np.log(nuEuF/nuEu0) * t/TF), nuAs0 * np.exp(np.log(nuAsF/nuAs0) * t/TF)] mig_mat = [[0, mAfEu, mAfAs], [mAfEu, 0, mEuAs], [mAfAs, mEuAs, 0]] y.integrate(nu_func, TF, m = mig_mat, rho=rho, theta=theta) return y \end{lstlisting} \begin{lstlisting}[caption={\textbf{Out of Africa Demography graph:} The same model as \lstlistingname~\ref{lst:ooa}, but defined using the Demography module. The Demography method takes advantage of the package \py{networkx}, which we \py{import networkx as nx}. Here, we define populations (nodes) with attributes (such as migration rates and sizes), and then define edges to relate populations.}, float, label={lst:ooa_demo}] def OutOfAfrica_graph(params, rho=None, theta=0.001, pop_ids=['YRI', 'CEU', 'CHB']): (nuA, TA, nuB, TB, nuEu0, nuEuF, nuAs0, nuAsF, TF, mAfB, mAfEu, mAfAs, mEuAs) = params G = nx.DiGraph() # add the population nodes, with sizes, times, and migrations G.add_node('root', nu=1, T=0) G.add_node('A', nu=nuA, T=TA) G.add_node('B', nu=nuB, T=TB, m={'YRI': mAfB}) G.add_node('YRI', nu=nuA, T=TB+TF, m={'B': mAfB, 'CEU': mAfEu, 'CHB': mAfAs}) nu_func_Eu = lambda t: nuEu0 * np.exp(np.log(nuEuF/nuEu0) * t/TF) G.add_node('CEU', nu=nu_func_Eu, T=TF, m={'YRI': mAfEu, 'CHB': mEuAs}) nu_func_As = lambda t: nuAs0 * np.exp(np.log(nuAsF/nuAs0) * t/TF) G.add_node('CHB', nu=nu_func_As, T=TF, m={'YRI': mAfAs, 'CEU': mEuAs}) # define topology of population graph G.add_edges_from([('root', 'A'), ('A', 'YRI'), ('A', 'B'), ('B', 'CEU'), ('B', 'CHB')]) # evolve using Demography.evolve y = mold.Demography.evolve(G, theta=theta, rho=rho, pop_ids=pop_ids) return y \end{lstlisting} %\begin{lstlisting}[caption={\textbf{Archaic Hominin Demography:}}, float, label={lst:archaic_demo}] %def archaics_demo(params, rho=None, theta=0.001, % """ % Same 13 parameters as the OOA model, augmented by Neanderthal and % Archaic African params. 
% TAA = time before African expansion AA split off % TN = time before AA split that Neanderthal splits % xAAend = fraction along (TF+TB+TA+TAA) that AA pop goes extinct % xAAmig = fraction along (1-fAAend)*(TF+TB+TA+TAA) that % AA and A start exchanging migrants % mAA = symmetric migration rate % nuN = relative size of Neanderthal % xNsplit = fraction of time between Neanderthal split % and Vindija fixed data (xx kya) (from 0 to 1) % xN_pulse = fraction along TB branch that pulse occurs % fN_pulse = pulse migration proportion from neanderthal % into Eurasia pop % Ne = reference population size (to scale archaic dates % and recombination map) % """ % pop_ids=['YRI', 'CEU', 'CHB', 'Vindija']): % (nuA, TA, nuB, TB, nuEu0, nuEuF, nuAs0, % nuAsF, TF, mAfB, mAfEu, mAfAs, mEuAs, % TAA, TN, xAAend, xAAmig, mAA, % nuN, xNsplit, xN_pulse, fN_pulse, % Ne) = params % % # Vindija is fixed at xx kya (convert to genetic unites using 29 yrs per gen and Ne) % ya = 55000. # in years. gens = ya/29 (Pruefer et al estimate its age at 50-65 kya) % TVindija = ya / 29 / 2 / Ne # time that this sample is frozen, in genetic units % % TND_pre = (TF+TB+TA+TAA+TN - TVindija)*xNsplit % TND_mig = TB*xN_pulse+TA+TAA+TN-TND_pre # migrating neanderthal ends at pulse during TB % TND_vindija = TF+TB+TA+TAA+TN-TVindija-TND_pre % % TAAtot = (TF+TB+TA+TAA)*xAAend % TAApre = TAAtot*xAAmig % TAAmig = TAAtot*(1-xAAmig) % % TB_pre = TB*xN_pulse % TB_post = TB*(1-xN_pulse) % % nu_func_CEU = lambda t : nuEu0 * np.exp(np.log(nuEuF/nuEu0) * t/TF) % nu_func_CHB = lambda t : nuAs0 * np.exp(np.log(nuAsF/nuAs0) * t/TF) % % G = nx.DiGraph() % # set up populations % G.add_node('root', nu=1, T=0) % G.add_node('ND_pre', nu=nuN, T=TND_pre) % G.add_node('ND_vindija', nu=nuN, T=TND_vindija) % G.add_node('Vindija', nu=nuN, T=TVindija, frozen=True) % G.add_node('ND_mig', nu=nuN, T=TND_mig) % G.add_node('AA_MH', nu=1, T=TN) % G.add_node('AA_nomig', nu=1, T=TAApre) % G.add_node('AA_mig', nu=1, T=TAAmig, % m={'MH_pre': mAA, 'A': mAA, 'YRI': mAA}) % G.add_node('MH_pre', nu=1, T=TAA, % m={'AA_mig': mAA}) % G.add_node('A', nu=nuA, T=TA, % m={'AA_mig': mAA}) % G.add_node('YRI', nu=nuA, T=TB+TF, % m={'AA_mig': mAA, 'B': mAfB, 'CEU': mAfEu, 'CHB': mAfAs}) % G.add_node('B_pre', nu=nuB, T=TB_pre, % m={'YRI': mAfB}) % G.add_node('B_post', nu=nuB, T=TB_post, % m={'YRI': mAfB}) % G.add_node('CEU', nu=nu_func_CEU, T=TF, % m={'YRI': mAfEu, 'CHB': mEuAs}) % G.add_node('CHB', nu=nu_func_CHB, T=TF, % m={'YRI': mAfAs, 'CEU': mAfEu}) % % # most edges have weight 1 % G.add_edges_from( [ ('root','ND_pre'), ('root','AA_MH'), % ('AA_MH', 'AA_nomig'), ('AA_MH', 'MH_pre'), % ('AA_nomig', 'AA_mig'), ('MH_pre','A'), % ('ND_pre','ND_mig'), ('ND_pre','ND_vindija'), % ('ND_vindija','Vindija'), ('A','YRI'), ('A','B_pre'), % ('B_post','CEU'), ('B_post','CHB') ], weight=1 ) % # for pulse migration, we use weighted edges % # which need to be added separately % G.add_weighted_edges_from( [ ('B_pre','B_post',1-fN_pulse) ]) % G.add_weighted_edges_from( [ ('ND_mig','B_post',fN_pulse) ]) % % y = moments.LD.Demography.evolve(G, theta=theta, rho=rho, pop_ids=pop_ids) % return y %\end{lstlisting} \clearpage \section{Simulation and Inference} \subsection{Comparing model to data} In the Section Specifying the model, we showed two approaches to defining demographic models. \mold computes the Hill-Robertson statistics ($\E[D_i^2]$, $\E[D_i D_j]$, etc), which give expectations for any pair of loci separated by a given recombination rate. 
When running inference, we prefer to use statistics normalized by the joint heterozygosity ($\pi_2$) of one of the populations, by default the first population in \py{pop_ids} (as in \cite{Rogers2014} and \cite{Ragsdale2018}). This has the advantage of removing dependence of the statistics on the underlying mutation rate. By using the ratio $\E[\text{stat.}]/\E[\pi_2]$, it also makes computing statistics from data much simpler, as we don't need to compute the total number of pairs per recombination bin. Instead, we just sum $D^2$, $Dz$, and $\pi_2$ over all pairs of SNPs within a bin, and then divide by the sum of all contributions to $\pi_2$ for the same bin. This gives $\sigma_D^2$-type statistics for all terms in the multi-population Hill-Robertson basis. As described above in Section~\ref{section:bootstrap}, \py{mold.Parsing.bootstrap_data} computes average statistics and their covariances, after parsing data over subregions of the genome using \py{mold.Parsing.compute_ld_statistics}. \comment{ To compute model expectations for these same statistics, normalized by a given population's $\E[\pi_2]$ and $\E[H]$, we use our demographic function with the wrapper function \py{mold.Inference.wrap_sigmaD2}, which takes the same arguments as our demographic function, as well as the index of the population to normalize by. If our demographic function is \py{OutOfAfrica_graph}, which takes the 13 demographic parameters, $\rho$, $\theta$, and \py{pop_ids}, we compute normalized statistics by \\ \py{y = mold.Inference.sigmaD2(OutOfAfrica_graph, ...} } \comment{\py{mold.Inference.sigmaD2} converts to normalized statistics} \comment{To get $\sigma_D^2$ statistics for bins, we can pass the demographic function, bin edges \py{rho=[rho0, rho1, ...]}, $\theta$, and \py{pop_ids} to \py{mold.Inference.bin_stats}, which computes expected statistics for a bin using the Simpson's rule.} \comment{best way to take a demographic function and return $\sigma_D^2$ type statistics, or expectations over bins using Simpson's rule, etc...? Build into Inference.} To visually compare data and model predictions, we use \py{mold.Plotting}, described in Section~\ref{section:plotting}. \subsubsection{Likelihoods} To compare model predictions to observed data, we use a likelihood approach. \mold estimates composite likelihoods using a multivariate Gaussian for each bin. For a given bin, we assume that we have computed estimates for the average statistics $\mathbf{\hat{x}}_{(\rho_0,\rho_1)}$ and the covariance matrix of those statistics from data $\Sigma_{(\rho_0,\rho_1)}$, as well as the model prediction for those statistics for the known recombination rate bin $\mathbf{y}_{(\rho_0,\rho_1)}$. Then the log-likelihood of the model parameters $\Theta$ for that bin is given by $$\mathcal{L}(\Theta | \mathbf{\hat{x}}_{(\rho_0,\rho_1)}) = \mathcal{N} \left( \mathbf{\hat{x}}_{(\rho_0,\rho_1)} | \mathbf{y}_{(\rho_0,\rho_1)}, \Sigma_{(\rho_0,\rho_1)} \right). $$ This log-likelihood is computed by calling \py{mold.Inference.ll(x, y, sigma)}, where \py{x} is the data, \py{y} is the model prediction, and \py{sigma} is the covariance matrix. To compute the composite likelihood over multiple bins, we simply approximate it as the product of likelihoods of each bin. This is computed using \py{mold.Inference.ll_over_bins(xs, mus, sigmas)}, where \py{xs}, \py{mus}, and \py{sigmas} are lists of the data, model predictions, and covariance matrices for each bin. 
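To make the composite likelihood concrete, the following is a minimal \py{numpy} sketch of the per-bin Gaussian log-likelihood and the sum over bins described above, dropping the additive constant $-\frac{k}{2}\log 2\pi$. It only illustrates the math; it is not the implementation of \py{mold.Inference.ll} or \py{mold.Inference.ll_over_bins}, which should be used in practice.
\begin{lstlisting}
# Illustrative sketch of the multivariate Gaussian (composite) log-likelihood.
# This is NOT the moments.LD source code.
import numpy as np

def gaussian_loglik(x, mu, Sigma):
    # log N(x | mu, Sigma), up to the additive constant -k/2 * log(2*pi)
    resid = np.asarray(x) - np.asarray(mu)
    alpha = np.linalg.solve(Sigma, resid)   # avoids forming Sigma^{-1} explicitly
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (resid @ alpha + logdet)

def composite_loglik(xs, mus, Sigmas):
    # sum of per-bin log-likelihoods, i.e. the log of the product of
    # per-bin likelihoods that defines the composite likelihood over bins
    return sum(gaussian_loglik(x, mu, S) for x, mu, S in zip(xs, mus, Sigmas))
\end{lstlisting}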
\subsection{Fitting} \comment{above need to discuss that we typically also infer $N_e$ based on the recombination map, since they are typically given in raw recombination rates} For observed data, the goal here is to propose a model and find the parameters of the model that best fit the data. We use functions in \py{mold.Inference} to optimize model parameters, given computed average statistics for each recombination bin and the associated covariance matrices. We take data that has been parsed over $n$ recombination bins, $\{(r_0,r_1], (r_1,r_2], \ldots, (r_{n-1},r_n]\}$, and use the optimization functions in \py{Inference} to explore the parameter space of our model (here, called \py{model_func}) to find the optimal parameter values. The most common usage of the optimization functions requires the following inputs.\\ Required inputs: \begin{itemize} \item The initial parameter guess \py{p0} : this is the list of model parameters, and it is typically augmented by the reference $N_e$, which is used to scale raw recombination rates ($r$) to get $\rho$ values. \comment{with or without $N_e$ on the end} \item \py{data} as two lists. The first list contains the mean statistics and has size $n+1$. The first $n$ entries of the list of means are statistic arrays for each bin (sorted in the order of recombination bins), and the last entry in the list is the set of heterozygosity statistics \comment{as output by \py{Parsing} - does it prefilter the normalized statistic out? or pass all statistics as well as an option for telling it which statistic you normalized by?}. The second list in \py{data} contains the corresponding covariance matrices. \item \py{model_func}, which computes (unnormalized) LD statistics \comment{in form of Example Codes} \item \py{rs} : the list of raw recombination rate bin edges, such as $r_\text{edges} = [r_0, r_1, \ldots, r_n]$. If we use \py{rs}, we set \py{Ne} to a fixed value of $N_\text{ref}$ to scale recombination rates, or we use the last entry in the list of parameters, in which case $N_\text{ref}$ is a parameter to be fit. \end{itemize} Optional inputs: \begin{itemize} \item \py{normalization} : The population used to normalize $\sigma_D^2$ statistics. The default is population 1, which uses the $\pi_2(1)$ and $H(1)$ statistics in the first population. \item \py{verbose} : Set to \py{True} if we want to output updates of function optimization (integer values tell how often to output updates) \item \py{fixed_params} : A list of the same length as \py{p0}. Default set to \py{[None]*len(p0)}. For any values to be fixed (and not optimized over), set that position to the fixed value. \item \py{upper_bounds} and \py{lower_bounds} : Parameters can sometimes diverge to unrealistic values during optimization. To constrain parameter values to a given interval, use \py{upper_bounds} and \py{lower_bounds}, in the same way as \py{fixed_params}. \end{itemize} \subsubsection{Optimization functions} \comment{log fmin and powell} Suppose I have a list of statistics means \py{ms} and covariances \py{vs} over bins defined by bin edges \py{r_bins}, and a model \py{model_func} that I wish to optimize, which takes parameters \py{[p1,p2,...,pn]}. I set my initial guess \py{p0 = [guess_p1, guess_p2, ..., guess_pn, guess_Ne]}. To run optimization, I call\\ \py{mold.Inference.optimize_log_fmin(p0, [ms, vs], [model_func], rs=r_bins, verbose=1)}. 
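Putting these pieces together, a sketch of a fitting script for the IM model of Example~Code~\ref{lst:IM} might look like the following. Here \py{ms} and \py{vs} stand in for the bootstrapped means and covariances from \py{mold.Parsing.bootstrap_data}, the starting values and bounds are arbitrary placeholders, and the keyword names simply follow the option list above, so they should be checked against the installed version.
\begin{lstlisting}
# Sketch of an optimization run (illustrative values only).
import moments.LD as mold

r_bins = [0, 1e-6, 1e-5, 1e-4, 1e-3]   # raw recombination bin edges used in parsing

# initial guesses for (nu1, nu2, T, m), with Ne appended as the last entry
p0 = [2.0, 0.5, 0.1, 1.0, 1e4]
lower_bounds = [1e-3, 1e-3, 1e-3, 0, 1e2]
upper_bounds = [1e2, 1e2, 10, 20, 1e7]

# IM is the demographic function defined in the Example Code section
opt_params = mold.Inference.optimize_log_fmin(
    p0, [ms, vs], [IM], rs=r_bins,
    fixed_params=[None] * len(p0),
    lower_bounds=lower_bounds, upper_bounds=upper_bounds,
    verbose=1)
\end{lstlisting}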
\subsection{Uncertainty analysis} \comment{to come} \cite{Coffman2016} \section{Plotting}\label{section:plotting} \comment{Under development} \subsection{Visualizing LD curves} \subsection{Residuals} \section{The full two-locus frequency spectrum} \subsection{\texttt{moments.TwoLocus}} \subsubsection{Specifying models} \subsubsection{Parameters} \subsubsection{Selection} \section{Frequently asked questions} \begin{enumerate} \item What if I'm having issues running or installing this program? Bug: issues Bigger issues or difficulties: email \item How do I cite \mold? \end{enumerate} \section{Acknowledgements} \py{moments} was originally developed by Julien Jouganous, and based off of Ryan Gutenkunst's \dadi software. \py{moments} and \mold (and indeed much of my own scientific development) are greatly indebted to Ryan Gutenkunst. Not only are these programs modeled off of \dadi's interface and functionality, but also my general approach to writing, testing, and using scientific software has been guided and influenced by Ryan. \bibliography{manualLD} \bibliographystyle{apalike} \end{document}
{ "alphanum_fraction": 0.7315968042, "avg_line_length": 59.5632333768, "ext": "tex", "hexsha": "a44fe32a6d11fe104812eae2276e1ee5e0c291cd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "54d2c58d91a231303fb361258e24b41b23f50661", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "grahamgower/moments", "max_forks_repo_path": "doc/manual/LD/manualLD.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "54d2c58d91a231303fb361258e24b41b23f50661", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "grahamgower/moments", "max_issues_repo_path": "doc/manual/LD/manualLD.tex", "max_line_length": 537, "max_stars_count": null, "max_stars_repo_head_hexsha": "54d2c58d91a231303fb361258e24b41b23f50661", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "grahamgower/moments", "max_stars_repo_path": "doc/manual/LD/manualLD.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13197, "size": 45685 }
\documentclass[aspectratio=169]{beamer} % because we need to claim weird things \newtheorem{claim}{Claim} \newtheorem{defn}{Definition} %\newtheorem{lemma}{Lemma} \newtheorem{thm}{Theorem} \newtheorem{vita}{Vit\ae} \newtheorem{qotd}{Quote of the Day} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{listings} \usepackage{color} \usepackage{graphics} \usepackage{ulem} \bibliographystyle{unsrt} % background image \usebackgroundtemplate% {% \includegraphics[width=\paperwidth,height=\paperheight]{../artifacts/stemulus.pdf}% } \setbeamertemplate{caption}[numbered] \lstset{% breaklines=true, captionpos=b, frame=single, keepspaces=true } % page numbers \addtobeamertemplate{navigation symbols}{}{% \usebeamerfont{footline}% \usebeamercolor[fg]{footline}% \hspace{1em}% \insertframenumber/\inserttotalframenumber } % presentation header \usetheme{Warsaw} \title{Programming Concepts} \author{Dylan Lane McDonald} \institute{CNM STEMulus Center\\Web Development with PHP} \date{\today} \begin{document} \lstset{language=HTML} \begin{frame} \titlepage \end{frame} \begin{frame} \frametitle{Outline} \tableofcontents \end{frame} \section{Who Are We?} \subsection{Who Are We?} \begin{frame} \frametitle{Who Are We?} What are we training to be? \begin{itemize} \item Software Engineer: one who writes software \pause \item Computer Scientist: one who uses sound mathematical principles to solve problems \pause \item Software Architect: one who designs software programs so software engineers can implement them \pause \item Web Developer: one who writes software for the web programming paradigm \end{itemize} \pause So who are we\dots? \pause \mbox{}\\ \textbf{All four of them!} \end{frame} \subsection{What Are We Doing?} \begin{frame} \frametitle{What Are We Doing?} As mentioned in the last slide, we are learning the web programming paradigm. \pause \begin{defn} A \textbf{paradigm} is how a programming language is structured and thought about. Each paradigm is specialized to the problems it aims to solve. \end{defn} \pause There are five major paradigms and dozens of minor paradigms. Each paradigm is specialized to a class of problems. Problems can be solved by multiple paradigms and there are advantages and disadvantages of each. \end{frame} \begin{frame} \frametitle{Programming Paradigms} Common paradigms include: \begin{itemize} \item Imperative: programming step-by-step using procedures (C) \pause \item Functional: programming using mathematical functions (Haskell) \pause \item Object-oriented: programming by modeling each player in the program (Java) \pause \item Logic: programming by axiomatic logical statements (Prolog) \pause \item Symbolic: programming by manipulating formulas \& symbols (LISP) \end{itemize} \end{frame} \section{How Do We Succeed?} \subsection{How Do We Use It?} \begin{frame} \frametitle{How Do We Use It?} Programming languages are used in three different ways: \begin{itemize} \item Compiled: Source code is passed to a \textit{compiler}, which takes the source code and transforms in into \textit{machine code}, which is directly executable by the CPU (C) \item Interpreted: Source code is passed to an \textit{interpreter}, which executes the source code line-by-line as the code is encountered (PHP) \item Hybrid: Source code is compiled into an intermediate, optimized format called \textit{byte code} that is saved and passed to an interpreter when it needs to execute (Java) \end{itemize} The other languages we will be covering, JavaScript \& SQL, are also interpreted. 
\end{frame} \subsection{How Do We Write Better Code?} \begin{frame} \frametitle{How Do We Write Better Code?} Code maintainability is essential. Non readable, non maintainable code precludes development and is a hindrance to the team. Code can be made easy-to-read by: \begin{itemize} \item Properly indenting the code at each logical level \item Using variable names that concisely describe what the variable does \item Sticking to one variable capitalization convention\footnote{In this class, ``camelBackVariables'' are preferred.} \item Thoroughly commenting code as you write it \item Exercising good \texttt{git} etiquette\dots \pause \begin{itemize} \item Commit early, commit often! \item Write descriptive comments in the \texttt{git commit} journal \end{itemize} \end{itemize} \end{frame} \subsection{How Should I Act?} \begin{frame} \frametitle{How Should I Act?} Attitude is everything! Computer Science is a challenging field in which you are part of a team. Separate yourself by: \begin{itemize} \item \textbf{\underline{\emph{TENACITY}}}: Things \textbf{will} go wrong at some point. Great programmers persist, keep their frustrations in check, and do what it takes to climb the inevitable brick walls. \item \textbf{Teamwork}: Cooperation and being approachable is a must. You will be working side-by-side with people and the more you help the team, the more you help yourself. \item \textbf{Empathy}: Again, this can be a frustrating field at times. Understanding and taking the point of view of an upset end-user will not only defuse a volatile situation, but also help you arrive at a solution. \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.7754985755, "avg_line_length": 35.3355704698, "ext": "tex", "hexsha": "017fa99f7198e04878164e71bc6afadf0334b534", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "27903fb390d37297293be406c1b1cd85a4c628bb", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "dylan-mcdonald/latex-slides", "max_forks_repo_path": "attitude-concepts/attitude-concepts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "27903fb390d37297293be406c1b1cd85a4c628bb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "dylan-mcdonald/latex-slides", "max_issues_repo_path": "attitude-concepts/attitude-concepts.tex", "max_line_length": 220, "max_stars_count": null, "max_stars_repo_head_hexsha": "27903fb390d37297293be406c1b1cd85a4c628bb", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dylan-mcdonald/latex-slides", "max_stars_repo_path": "attitude-concepts/attitude-concepts.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1415, "size": 5265 }
\documentclass[a4paper,11pt]{article} \usepackage{fancyhdr} \usepackage[dvips]{graphicx} % for eps and MatLab Diagrams \usepackage{amsmath} \usepackage{psfrag,color} \usepackage[framed]{/Applications/TeX/mcode} \pagestyle{fancy} \begin{document} % Begin Document % Title \title{\textsc{Minimal Bounding Shapes in 2 and 3 Dimensions}} % Authors and Contact Information \author{\textbf{John R. D'Errico}\\ Email: [email protected]} \maketitle \section{Introduction - Minimal Bounding Shapes} This document will discuss the estimation of several basic enclosing shapes around sets of points in 2 and 3 dimensions. But first, why would you wish to use these tools at all? A minimal enclosing object of a well defined basic shape may be of use to roughly characterize objects, perhaps in an image. Perhaps one needs simple estimates of an area enclosed, or of the center of a roughly circular or elliptical object. Some examples might be bacteria, crystals, granular particles, film grain, etc. The basic codes I'll discuss are: \begin{itemize} \item Rectangles - \mcode{MINBOUNDRECT} \item Parallelograms - \mcode{MINBOUNDPARALLELOGRAM} \item Triangles - \mcode{MINBOUNDTRI} \item Circles - \mcode{MINBOUNDCIRCLE} \item Spheres (3-d) - \mcode{MINBOUNDSPHERE} \item Circles - \mcode{MINBOUNDSEMICIRCLE} \item Ellipses - \mcode{MINBOUNDELLIPSE} \item Ellipsoids (3-d) - \mcode{MINBOUNDELLIPSOID} \item INCIRCLE - \mcode{INCIRCLE} \item INSPHERE - \mcode{INSPHERE} \end{itemize} One feature that many of these codes have in common is the initial use of a convex hull call (excluding the incircle and in sphere tools.) Since all of the shapes we will consider here are convex objects, no point that is inside the convex hull of the data set need be considered. Removal of those interior data points will often result in a dramatic reduction in the time required otherwise. \begin{figure} \centering \includegraphics[width=5in]{convhull.pdf} \caption{Example of a convex hull in 2-d} \end{figure} \bigskip \section{Minimal Area Rectangles} Somewhat distinct from the other tools here is rectangle estimation. It is simply done, especially when you start from the convex hull. If we assume that the minimum bounding rectangle must have one (or more) edges parallel to one of the edges of the convex hull, then the task is a simple one indeed. Merely check every edge of the convex hull, effectively spreading a pair of calipers around the object at that angle. One chooses that edge which produces the minimum area from all the possible rectangles. This scheme will generally be a quite efficient one, since most of the time a convex hull is composed of relatively few edges. Even in the rare event where every single point supplied is also found to be a part of the convex hull itself, the rectangle computation is fast enough to be quite efficient. \begin{figure} \centering \includegraphics[width=5in]{rect.jpg} \caption{Minimial bounding rectangle} \end{figure} \bigskip \section{Minimal Area parallelograms} This is very similar to a rectangle. Here we assume that one or more of the facets of the convex hull lies in a a facet of the parallelogram. Simply check each edge of the convex hull. If that edge is included in a facet of the bounding parallelogram, then the opposite edge is easily found. Now the other two edges of the parallelogram must also be parallel. They are quickly tested also. Again, the minimal bounding object is quickly found. 
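To make the edge-walking search concrete, here is a short Python/\texttt{numpy} sketch of the minimum-area rectangle computation: for each edge of the convex hull, rotate the hull so that the edge is axis aligned, take the axis-aligned bounding box, and keep the smallest area found. It is purely illustrative and is not the \mcode{MINBOUNDRECT} code distributed with this suite.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.spatial import ConvexHull

def min_bound_rect_area(xy):
    # xy is an (n, 2) array of points
    hull = xy[ConvexHull(xy).vertices]              # hull vertices, in order
    edges = np.diff(np.vstack([hull, hull[:1]]), axis=0)
    best_area = np.inf
    for dx, dy in edges:
        a = np.arctan2(dy, dx)
        c, s = np.cos(-a), np.sin(-a)
        rot = hull @ np.array([[c, -s], [s, c]]).T  # rotate this edge onto the x axis
        width = rot[:, 0].max() - rot[:, 0].min()
        height = rot[:, 1].max() - rot[:, 1].min()
        best_area = min(best_area, width * height)
    return best_area
\end{lstlisting}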
\section{Minimal Area Triangles} Again, a bounding triangle should have at least one edge that contains some facet of the convex hull. Check each edge. \begin{figure} \centering \includegraphics[width=5in]{tri.jpg} \caption{Minimial bounding triangle} \end{figure} \section{Minimal Radius Circles and Spheres} Circular regions are slightly more complex than are rectangles, but still simple enough. Again, we start with all of the points making up the convex hull. Arbitrarily pick any three of those points. Find the unique minimum radius enclosing circle that contains those three points. If every other point is inside the above circle, then we are done. Pick that single point which lies furthest outside of the circle, and find the enclosing circle of this set of four points. That larger enclosing circle will have either 2 or three of those points on its circumference. Repeat until no more points lie external to the current enclosing circle. The basic algorithm above is a simple iterative scheme which will generally terminate after only a few iterations. One aspect of this that is worth further discussion is the computation of a circum-circle. Given any three points in the (x,y) plane, $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$, there are two distinct possibilities. Either two of the three points lie on a diameter of a circle that also contains the third point, or all three of the points must lie exactly on the circumference of a circle. In the latter event, that circle with unknown radius $R$ and center $(\mu_x,\mu_y)$ must satisfy (1), (2), and (3). \begin{equation} \tag{1} (x_1 - \mu_x)^2 + (y_1 - \mu_y)^2 = R^2 \end{equation} \begin{equation} \tag{2} (x_2 - \mu_x)^2 + (y_2 - \mu_y)^2 = R^2 \end{equation} \begin{equation} \tag{3} (x_3 - \mu_x)^2 + (y_3 - \mu_y)^2 = R^2 \end{equation} We can eliminate the quadratic terms in the unknowns simply by subtracting pairs of those expressions to yield (4) and (5), linear in the unknowns $(\mu_x,\mu_y)$. \begin{equation} \tag{4} 2(x_1 - x_2)\mu_x + 2(y_1 - y_2)\mu_y = x_1^2 - x_2^2 + y_1^2 - y_2^2 \end{equation} \begin{equation} \tag{5} 2(x_1 - x_3)\mu_x + 2(y_1 - y_3)\mu_y = x_1^2 - x_3^2 + y_1^2 - y_3^2 \end{equation} Solve that linear system of equations for $(\mu_x,\mu_y)$. then use (1) to obtain $R$. This basic scheme of differencing to drop out the nonlinear terms, then solving a linear system for the center of the circle will also apply to computation of a circum-sphere in any number of dimensions. \bigskip \bigskip \section{Minimal Area Enclosing Ellipses and Minimum Volume Enclosing Ellipsoids} Ellipses and ellipsoids are yet a step higher in complexity than are circles. In a simple form, the equation (6) of an ellipse with center $(\mu_x,\mu_y)$, has axis lengths of $a_x$ and $a_y$ along the x and y axes respectively. Clearly, if $a_x = a_y$, then the ellipse is circular. \begin{equation} \tag{6} (\frac{x-\mu_x}{a_x})^2 + (\frac{y - \mu_y}{a_y})^2 = 1 \end{equation} Of course, the form in (6) does not allow for any eccentricity. We can allow for an eccentricity by writing the ellipse as the quadratic form in (7). \begin{equation} \tag{7} ([x,y] - [\mu_x,\mu_y]) \begin{bmatrix} H_{xx} & H_{xy} \\ H_{xy} & H_{yy} \end{bmatrix} ([x,y] - [\mu_x,\mu_y])^T = 1 \end{equation} Enclosing circle parameters (the center of the circle) were derivable from the linear system {(4), (5)}. That approach will fail here though, since the quadratic terms do not drop out. A partial solution arises from a technique called partitioned least squares. 
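For reference, the circle-stage construction of equations (4) and (5), where the differencing does eliminate the quadratic terms, is small enough to sketch directly. The following Python/\texttt{numpy} snippet is illustrative only and is not part of the MATLAB suite; it solves the linear system for the center and then recovers the radius from equation (1).
\begin{lstlisting}[language=Python]
import numpy as np

def circumcircle(p1, p2, p3):
    # circle through three non-collinear points (x1,y1), (x2,y2), (x3,y3)
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = 2.0 * np.array([[x1 - x2, y1 - y2],
                        [x1 - x3, y1 - y3]])
    b = np.array([x1**2 - x2**2 + y1**2 - y2**2,
                  x1**2 - x3**2 + y1**2 - y3**2])
    mux, muy = np.linalg.solve(A, b)   # center (mu_x, mu_y) from (4) and (5)
    R = np.hypot(x1 - mux, y1 - muy)   # radius from (1)
    return (mux, muy), R
\end{lstlisting}
Collinear triples make the system singular; the case where two of the three points span a diameter of the minimal enclosing circle is handled separately, as described in the circle section above.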
\section{In-circles and in-spheres of convex polygons and polyhedra} The in-circle of a convex polygon is easily found. Simply find each edge of the polygon, then compute the normal vector to that edge. If we knew the center of the in-circle, then the dot product of the normal vector to the edge and the vector created by subtracting the in-center and a point on the edge gives the distance to that edge. (Actually, we need to be careful about the sign of that dot product.) \begin{equation} \tag{8} D = N_{edge} \cdot (\text{incenter} - p_{edge}) \end{equation} Equation (8) holds for each edge. Thus all we need do is find a point (the $(x,y)$ coordinates of the in-center) that maximizes the minimum value of $D$ over all the edges of the polygon. This is a linear programming problem, which can be solved by introducing slack variables. The same scheme works for in-spheres. \end{document}
{ "alphanum_fraction": 0.753971232, "avg_line_length": 42.5265957447, "ext": "tex", "hexsha": "640de93ed0ee9fade7da72eb879f224b05088a69", "lang": "TeX", "max_forks_count": 9, "max_forks_repo_forks_event_max_datetime": "2020-06-04T11:22:56.000Z", "max_forks_repo_forks_event_min_datetime": "2015-03-25T01:32:26.000Z", "max_forks_repo_head_hexsha": "f144a8423d423fa572fc09a9a3f5490eef965f20", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wwarriner/solidification_fdm_solver", "max_forks_repo_path": "lib/pde_solver_framework/geometry_utilities/libs/MinBoundSuite/MinBound.tex", "max_issues_count": 25, "max_issues_repo_head_hexsha": "f144a8423d423fa572fc09a9a3f5490eef965f20", "max_issues_repo_issues_event_max_datetime": "2020-07-29T20:05:39.000Z", "max_issues_repo_issues_event_min_datetime": "2018-11-29T22:46:54.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wwarriner/solidification_fdm_solver", "max_issues_repo_path": "lib/pde_solver_framework/geometry_utilities/libs/MinBoundSuite/MinBound.tex", "max_line_length": 442, "max_stars_count": 9, "max_stars_repo_head_hexsha": "ad72389cda8938b6f3245e4c0b0bda69dafd9cf8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ISET/isetrtf", "max_stars_repo_path": "fitting/vignetting/MinBoundSuite/MinBoundSuite/MinBound.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-05T08:26:57.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-02T21:35:25.000Z", "num_tokens": 2195, "size": 7995 }
% % credit: MNRAS % \documentclass[fleqn, usenatbib]{mnras} \usepackage{style} % title \title[Improving Galaxy Clustering with Deep Learning]{Improving Galaxy Clustering Measurements with Deep Learning: analysis of the DECaLS DR7 data} % If you need two or more lines of authors, add an extra line using \newauthor \author[M. Rezaie et al.]{ Mehdi Rezaie$^{1}$\thanks{E-mail: [email protected]}, Hee-Jong Seo$^{1}$\thanks{E-mail: [email protected]}, Ashley J. Ross$^{2}$, and Razvan C. Bunescu$^{3}$ \\ % List of institutions $^{1}$Department of Physics and Astronomy, Ohio University, Athens, OH 45701, USA\\ $^{2}$The Center of Cosmology and Astro Particle Physics, the Ohio State University, Columbus, OH 43210, USA\\ $^{3}$School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701, USA } % \hypersetup{draft} % activate draft mode \begin{document}\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \maketitle \begin{abstract} Robust measurements of cosmological parameters from galaxy surveys rely on our understanding of systematic effects that impact the observed galaxy density field. In this paper we present, validate, and implement the idea of adopting the systematics mitigation method of Artificial Neural Networks for modeling the relationship between the target galaxy density field and various observational realities including but not limited to Galactic extinction, seeing, and stellar density. Our method by construction allows a wide class of models and alleviates over-training by performing k-fold cross-validation and dimensionality reduction via backward feature elimination. By permuting the choice of the training, validation, and test sets, we construct a selection mask for the entire footprint. We apply our method on the extended Baryon Oscillation Spectroscopic Survey (eBOSS) Emission Line Galaxies (ELGs) selection from the Dark Energy Camera Legacy Survey (DECaLS) Data Release 7 and show that the spurious large-scale contamination due to imaging systematics can be significantly reduced by up-weighting the observed galaxy density using the selection mask from the neural network and that our method is more effective than the conventional linear and quadratic polynomial functions. We perform extensive analyses on simulated mock datasets with and without systematic effects. Our analyses indicate that our methodology is more robust to overfitting compared to the conventional methods. This method can be utilized in the catalog generation of future spectroscopic galaxy surveys such as eBOSS and Dark Energy Spectroscopic Instrument (DESI) to better mitigate observational systematics. \end{abstract} % 1-6 \begin{keywords} editorials, notices --- miscellaneous --- catalogs --- surveys \end{keywords} % \tableofcontents %\clearpage %--- sections \input{sections/introduction} \input{sections/data} \input{sections/methodology} \input{sections/results} \input{sections/conclusion} \input{sections/acknowledgement} %--- references \bibliographystyle{mnras} \bibliography{refs} %--- \appendix \input{sections/window} % % residual test with mocks % \section{Robustness tests} % \begin{figure} % \centering % \includegraphics[width=0.4\textwidth]{figures/fig26-chi2cellmockspval.pdf} % \caption{Robustness test with the mitigated mean power spectrum of the 100 contaminated mocks using the Chi-square test: P-values as a function of the lowest $\ell$ bin.} % \label{fig:my_label} % \end{figure} \bsp % typesetting comment \label{lastpage} \end{document}
{ "alphanum_fraction": 0.7885586091, "avg_line_length": 50.2253521127, "ext": "tex", "hexsha": "02e5378ba392994bb9ae71de2781d284bc1ea547", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-17T18:07:47.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-17T18:07:47.000Z", "max_forks_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mehdirezaie/SYSNet", "max_forks_repo_path": "paper/ms.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mehdirezaie/SYSNet", "max_issues_repo_path": "paper/ms.tex", "max_line_length": 1694, "max_stars_count": 6, "max_stars_repo_head_hexsha": "8da75f54177e460e6e446bfc2207dd82a76ac4cc", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mehdirezaie/SYSNet", "max_stars_repo_path": "paper/ms.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-25T21:50:52.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-29T11:55:30.000Z", "num_tokens": 857, "size": 3566 }
%\documentclass[UTF8]{ctexart} % use larger type; default would be 10pt \documentclass[a4paper]{article} \usepackage{../mqc} \title{\textbf{Modern Quantum Chemistry, Szabo \& Ostlund}\\HW} \author{wsr \vspace{5pt}\\ } \date{\today} % Activate to display a given date or no date (if empty), % otherwise the current date is printed \begin{document} % \boldmath \maketitle \tableofcontents \newpage \setcounter{section}{1} \section{Many-electron Wave Functions and Operators} \subsection{The Electronic Problem} \subsubsection{Atomic Units} \subsubsection{The B-O Approximation} \subsubsection{The Antisymmetry or Pauli Exclusion Principle} \subsection{Orbitals, Slater Determinants, and Basis Functions} \subsubsection{Spin Orbitals and Spatial Orbitals} \ex{2.1} Consider $ \Braket{\chi_k | \chi_m} $. If $ k=m $, \begin{equation}\label{key} \Braket{\chi_{2i-1} | \chi_{2i-1}} = \Braket{\psi_i^\alpha | \psi_i^\alpha} \Braket{\alpha|\alpha} = 1 \end{equation} \begin{equation}\label{key} \Braket{\chi_{2i} | \chi_{2i}} = \Braket{\psi_i^\beta | \psi_i^\beta} \Braket{\beta|\beta} = 1 \end{equation} thus \begin{equation}\label{key} \Braket{\chi_k | \chi_k} = 1 \end{equation} If $ k\neq m $, three cases may occur as below \begin{equation}\label{key} \Braket{\chi_{2i-1} | \chi_{2j-1}} = \Braket{\psi_i^\alpha | \psi_j^\alpha}\Braket{\alpha|\alpha} = 0\cdot 1 = 0 \qquad (i\neq j) \end{equation} \begin{equation} \Braket{\chi_{2i-1} | \chi_{2j}} = \Braket{\psi_i^\alpha | \psi_j^\beta}\Braket{\alpha|\beta} = S_{ij}\cdot 0 = 0 \end{equation} \begin{equation}\label{key} \Braket{\chi_{2i} | \chi_{2j}} = \Braket{\psi_i^\beta | \psi_j^\beta}\Braket{\beta|\beta} = 0\cdot 1 = 0 \qquad (i\neq j) \end{equation} thus \begin{equation}\label{key} \Braket{\chi_k | \chi_m} = 0 \qquad (k\neq m) \end{equation} Overall, \begin{equation}\label{key} \Braket{\chi_k | \chi_m} = \delta_{km} \end{equation} \subsubsection{Hartree Products} %. \ex{2.2} \begin{equation}\label{key} \begin{aligned} \mathscr{H}\Psi^{HP} &= \sum_{i=1}^N h(i)\chi_i(\vb{x}_1)\chi_j(\vb{x}_2)\cdots\chi_k(\vb{x}_N)\\ &= \varepsilon_i\chi_i(\vb{x}_1)\chi_j(\vb{x}_2)\cdots\chi_k(\vb{x}_N) + \chi_i(\vb{x}_1)[\varepsilon_j\chi_j(\vb{x}_2)]\cdots\chi_k(\vb{x}_N) + \cdots + \chi_i(\vb{x}_1)\chi_j(\vb{x}_2)\cdots[\varepsilon_k\chi_k(\vb{x}_N)]\\ &= (\varepsilon_i +\varepsilon_j + \cdots + \varepsilon_k)\Psi^{HP} \end{aligned} \end{equation} \subsubsection{Slater Determinants} %. \ex{2.3} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi | \Psi} &= \dfrac{1}{2}\qty(\Braket{\chi_i|\chi_i}\Braket{\chi_j|\chi_j} - \Braket{\chi_i|\chi_j}\Braket{\chi_j|\chi_i} - \Braket{\chi_j|\chi_i}\Braket{\chi_i|\chi_j} + \Braket{\chi_j|\chi_j}\Braket{\chi_i|\chi_i})\\ &= \dfrac{1}{2}(1+0+0+1) = 1 \end{aligned} \end{equation} \ex{2.4} According to Ex. 2.2, we know that $ \chi_i(\vb{x}_1)\chi_j(\vb{x}_2) $ is an eigenfunction of $ \mathscr{H} $ with eigenvalue $ \varepsilon_i + \varepsilon_j $. Similarly, we have the same conclusion for $ \chi_i(\vb{x}_2)\chi_j(\vb{x}_1) $.\\ For the antisymmetrized wave function, \begin{equation}\label{key} \begin{aligned} \Braket{\Psi | \mathscr{H} | \Psi} &= \dfrac{1}{2}\left(\Braket{\chi_i(\vb{x}_1)\chi_j(\vb{x}_2) | \mathscr{H} | \chi_i(\vb{x}_1)\chi_j(\vb{x}_2)} - \Braket{\chi_i(\vb{x}_1)\chi_j(\vb{x}_2) | \mathscr{H} | \chi_j(\vb{x}_1)\chi_i(\vb{x}_2)}\right.\\ &\left. 
\quad - \Braket{\chi_j(\vb{x}_1)\chi_i(\vb{x}_2) | \mathscr{H} | \chi_i(\vb{x}_1)\chi_j(\vb{x}_2)} + \Braket{\chi_j(\vb{x}_1)\chi_i(\vb{x}_2) | \mathscr{H} | \chi_j(\vb{x}_1)\chi_i(\vb{x}_2)}\right)\\ &= \dfrac{1}{2}\qty(\varepsilon_i + \varepsilon_j - 0 - 0 + \varepsilon_i + \varepsilon_j)\\ &= \varepsilon_i + \varepsilon_j \end{aligned} \end{equation} \ex{2.5} \begin{equation}\label{key} \begin{aligned} \Braket{K | L} &= \dfrac{1}{2}\Braket{\chi_i(\vb{x}_1)\chi_j(\vb{x}_2) - \chi_j(\vb{x}_1)\chi_i(\vb{x}_2) | \chi_k(\vb{x}_1)\chi_l(\vb{x}_2) - \chi_l(\vb{x}_1)\chi_k(\vb{x}_2)}\\ &= \dfrac{1}{2}\qty(\Braket{\chi_i|\chi_k}\Braket{\chi_j|\chi_l} - \Braket{\chi_i|\chi_l}\Braket{\chi_j|\chi_k} - \Braket{\chi_j|\chi_k}\Braket{\chi_i|\chi_l} + \Braket{\chi_j|\chi_l}\Braket{\chi_i|\chi_k})\\ &= \dfrac{1}{2}\qty(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk} - \delta_{jk}\delta_{il} + \delta_{jl}\delta_{ik})\\ &= \delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk} \end{aligned} \end{equation} \subsubsection{The Hartree-Fock Approximation} \subsubsection{The Minimal Basis $ \ce{H_2} $ Model} \ex{2.6} \begin{equation}\label{key} \Braket{\psi_1 | \psi_1} = \dfrac{1}{2(1+S_{12})}\qty(\Braket{\phi_1|\phi_1} + 2\Braket{\phi_1|\phi_2} + \Braket{\phi_2|\phi_2}) = \dfrac{2 + 2S_{12}}{2(1+S_{12})} = 1 \end{equation} \begin{equation}\label{key} \Braket{\psi_2 | \psi_2} = \dfrac{1}{2(1-S_{12})}\qty(\Braket{\phi_1|\phi_1} - 2\Braket{\phi_1|\phi_2} + \Braket{\phi_2|\phi_2}) = \dfrac{2 - 2S_{12}}{2(1-S_{12})} = 1 \end{equation} \begin{equation}\label{key} \Braket{\psi_1 | \psi_2} = \dfrac{1}{2\sqrt{1+S_{12}}\sqrt{1-S_{12}}}\qty(\Braket{\phi_1|\phi_1} - \Braket{\phi_2|\phi_2}) = 0 \end{equation} \subsubsection{Excited Determinants} \subsubsection{Form of the Exact Wfn and CI} \ex{2.7} Size of full CI matrix \begin{equation}\label{key} C_{72}^{42} = 164307576757973059488 \approx \num{1.64e20} \end{equation} The number of singly excited determinants \begin{equation}\label{key} 42\cross 30 = 1260 \end{equation} The number of doubly excited determinants \begin{equation}\label{key} C_{42}^2 C_{30}^2 = 374535 \end{equation} \subsection{Operators and Matrix Elements} \subsubsection{Minimal Basis $ \ce{H_2} $ Matrix Elements} \ex{2.8}\label{2.8} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_{12}^{34} | h(1) | \Psi_{12}^{34}} &= \dfrac{1}{2}\Braket{\chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1) | h(1) | \chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1)}\\ &= \dfrac{1}{2}\qty(\Braket{\chi_3|h(1)|\chi_3} - 0 - 0 + \Braket{\chi_4|h(1)|\chi_4})\\ &= \dfrac{1}{2}\qty(\Braket{\chi_3|h(1)|\chi_3} + \Braket{\chi_4|h(1)|\chi_4}) \end{aligned} \end{equation} thus \begin{equation}\label{key} \Braket{\Psi_{12}^{34} | \mathcal{O}_1 | \Psi_{12}^{34}} = \Braket{3|h|3} + \Braket{4|h|4} \end{equation} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_0 | h(1) | \Psi_{12}^{34}} &= \dfrac{1}{2}\Braket{\chi_1(\vb{x}_1)\chi_2(\vb{x}_2) - \chi_2(\vb{x}_2)\chi_1(\vb{x}_1) | h(1) | \chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1)}\\ &= \dfrac{1}{2}\qty(0 - 0 - 0 + 0)\\ &= 0 \end{aligned} \end{equation} thus \begin{equation}\label{key} \Braket{\Psi_0 | \mathcal{O}_1 | \Psi_{12}^{34}} = 0 \end{equation} Similarly, we get \begin{equation}\label{key} \Braket{\Psi_{12}^{34} | \mathcal{O}_1 | \Psi_0} = 0 \end{equation} \ex{2.9} From Eq. 
(2.92) in textbook, we get \begin{equation}\label{key} \Braket{\Psi_0 | \mathscr{H} | \Psi_0} = \Braket{1|h|1} + \Braket{2|h|2} + \Braket{12|12} - \Braket{12|21} \end{equation} From Ex 2.8, we get \begin{equation}\label{key} \Braket{\Psi_0 | \mathcal{O}_1 | \Psi_{12}^{34}} = \Braket{\Psi_{12}^{34} | \mathcal{O}_1 | \Psi_0} = 0 \end{equation} thus \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_0 | \mathscr{H} | \Psi_{12}^{34}} &= \Braket{\Psi_0 | \mathcal{O}_2 | \Psi_{12}^{34}}\\ &= \dfrac{1}{2}\Braket{\chi_1(\vb{x}_1)\chi_2(\vb{x}_2) - \chi_1(\vb{x}_2)\chi_2(\vb{x}_1) | \dfrac{1}{r_{12}} | \chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1)}\\ &= \Braket{12|34} - \Braket{12|43} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_{12}^{34} | \mathscr{H} | \Psi_0} &= \Braket{\Psi_{12}^{34} | \mathcal{O}_2 | \Psi_0}\\ &= \dfrac{1}{2}\Braket{\chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1) | \dfrac{1}{r_{12}} | \chi_1(\vb{x}_1)\chi_2(\vb{x}_2) - \chi_2(\vb{x}_2)\chi_1(\vb{x}_1)}\\ &= \Braket{34|12} - \Braket{34|21} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_{12}^{34} | \mathscr{H} | \Psi_{12}^{34}} &= \Braket{\Psi_{12}^{34} | h(1) +h(2) + \dfrac{1}{r_{12}} | \Psi_{12}^{34}}\\ &= 2\cross\dfrac{1}{2}\Braket{\chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1) | h(1) | \chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1)}\\ &+ \dfrac{1}{2}\Braket{\chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1) | \dfrac{1}{r_{12}} | \chi_3(\vb{x}_1)\chi_4(\vb{x}_2) - \chi_3(\vb{x}_2)\chi_4(\vb{x}_1)}\\ &= \Braket{3|h|3} + \Braket{4|h|4} + \Braket{34|34} - \Braket{34|43} \end{aligned} \end{equation} \subsubsection{Notations for 1- and 2-Electron Integrals} \subsubsection{General Rules for Matrix Elements} %. \ex{2.10} \begin{equation}\label{key} \Braket{K | \mathscr{H} | K} = \sum_m^N [m|h|m] + \dfrac{1}{2}\sum_m^N\sum_n^N \Braket{mn||mn} = \sum_m^N [m|h|m] + \dfrac{1}{2}\sum_m^N\sum_n^N \qty([mm|nn] - [mn|nm]) \end{equation} When $ m=n $, \begin{equation}\label{key} [mm|mm] - [mm|mm] = 0 \end{equation} thus \begin{equation}\label{key} \Braket{K | \mathscr{H} | K} = \sum_m^N [m|h|m] + \dfrac{1}{2}\sum_m^N\sum_{n\neq m}^N \qty([mm|nn] - [mn|nm]) = \sum_m^N [m|h|m] + \sum_m^N\sum_{n> m}^N \qty([mm|nn] - [mn|nm]) \end{equation} %. \ex{2.11} \begin{equation}\label{key} \begin{aligned} \Braket{K | \mathscr{H} | K} &= \Braket{K | \mathcal{O}_1 + \mathcal{O}_2 | K} = \sum_m^N [m|h|m] + \sum_m^N\sum_{n>m}^N \Braket{mn||mn}\\ &= \Braket{1|h|1} + \Braket{2|h|2} + \Braket{3|h|3} + \Braket{12||12} + \Braket{13||13} + \Braket{23||23} \end{aligned} \end{equation} %. 
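Before moving on, the determinant counting quoted in Ex.~2.7 above is easy to cross-check numerically. The short sketch below is purely illustrative (Python and its standard-library \texttt{math.comb} are an assumption, not part of the text); it simply re-evaluates the three binomial-coefficient expressions for 42 electrons in 72 spin orbitals, i.e.\ 30 virtual spin orbitals.
\begin{verbatim}
# Cross-check of the counting in Ex. 2.7: 42 electrons, 72 spin orbitals.
from math import comb

n_so, n_elec = 72, 42
n_virt = n_so - n_elec                         # 30 virtual spin orbitals

full_ci = comb(n_so, n_elec)                   # size of the full CI matrix
singles = n_elec * n_virt                      # singly excited determinants
doubles = comb(n_elec, 2) * comb(n_virt, 2)    # doubly excited determinants

print(full_ci)   # ~1.64e20, the value quoted in Ex. 2.7
print(singles)   # 1260
print(doubles)   # 374535
\end{verbatim}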
\ex{2.12} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_0 | \mathscr{H} | \Psi_0} &= \Braket{1|h|1} + \Braket{2|h|2} + \Braket{12||12} \\ &= \Braket{1|h|1} + \Braket{2|h|2} + \Braket{12|12} - \Braket{12|21} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_0 | \mathscr{H} | \Psi_{12}^{34}} = \Braket{12||34} = \Braket{12|34} - \Braket{12|43} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_{12}^{34} | \mathscr{H} | \Psi_0} = \Braket{34||12} = \Braket{34|12} - \Braket{34|21} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_{12}^{34} | \mathscr{H} | \Psi_{12}^{34}} &= \Braket{3|h|3} + \Braket{4|h|4} + \Braket{34||34}\\ &= \Braket{3|h|3} + \Braket{4|h|4} + \Braket{34|34} - \Braket{34|43} \end{aligned} \end{equation} Which are exactly the same with Ex 2.9. \ex{2.13} if $ a=b $, $ r=s $ \begin{equation}\label{key} \Braket{\Psi_a^r | \mathcal{O} | \Psi_b^s} = \Braket{\Psi_a^r | \mathcal{O}_1 | \Psi_a^r} = \sum_c^N\Braket{c|h|c} - \Braket{a|h|a} + \Braket{r|h|r} \end{equation} if $ a=b $, $ r\neq s $ \begin{equation}\label{key} \Braket{\Psi_a^r | \mathcal{O} | \Psi_b^s} = \Braket{\Psi_a^r | \mathcal{O}_1 | \Psi_a^s} = \Braket{r|h|s} \end{equation} if $ a\neq b $, $ r=s $ \begin{equation}\label{key} \Braket{\Psi_a^r | \mathcal{O} | \Psi_b^s} = \Braket{\Psi_a^r | \mathcal{O}_1 | \Psi_b^r} = \Braket{\Psi_a^r | \mathcal{O}_1 | -(\Psi_a^r)_b^a} = -\Braket{b|h|a} \end{equation} if $ a\neq b $, $ r\neq s $ \begin{equation}\label{key} \Braket{\Psi_a^r | \mathcal{O} | \Psi_b^s} = \Braket{\Psi_a^r | \mathcal{O}_1 | (\Psi_a^r)_{rb}^{as}} = 0 \end{equation} \ex{2.14} \begin{equation}\label{key} ^N E_0 = \sum_m^N \Braket{m|h|m} + \sum_m^M\sum_{n>m}^M \Braket{mn||mn} \end{equation} \begin{equation}\label{key} ^{N-1}E_0 = \sum_{m\neq a}^N \Braket{m|h|m} + \sum_{m\neq a}^M\sum_{n>m, n\neq a}^M \Braket{mn||mn} \end{equation} \begin{equation}\label{key} ^N E_0 - ^{N-1}E_0 = \Braket{a|h|a} + \sum_{b\neq a}^N \Braket{ab||ab} \end{equation} \subsubsection{Derivation of the Rules for Matrix Elements} \ex{2.15} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi | \mathscr{H} | \Psi} &= \dfrac{1}{N!} \Braket{\sum_{n=1}^{N!} (-1)^{p_n}\mathscr{P}_n\{\chi_i(1)\chi_j(2)\cdots\chi_k(N)\} | \sum_{c=1}^N h(c) | \sum_{m=1}^{N!} (-1)^{p_m}\mathscr{P}_m\{\chi_i(1)\chi_j(2)\cdots\chi_k(N)\}}\\ &= \dfrac{1}{N!} \sum_{n=1}^{N!} \sum_{m=1}^{N!} (-1)^{p_n + p_m} \sum_{c=1}^N \Braket{ \mathscr{P}_n \{\chi_i(1)\chi_j(2)\cdots\chi_k(N)\} | h(c) | \mathscr{P}_m \{ \chi_i(1)\chi_j(2)\cdots\chi_k(N)\}}\\ \end{aligned} \end{equation} Since the integral inside equals $ 0 $ when $ \mathscr{P}_n \neq \mathscr{P}_m $, \begin{equation}\label{key} \Braket{\Psi | \mathscr{H} | \Psi} = \dfrac{1}{N!} \sum_{n=1}^{N!} (-1)^{p_n + p_n} (\varepsilon_i+\varepsilon_j+\cdots+\varepsilon_k) = \varepsilon_i+\varepsilon_j+\cdots+\varepsilon_k \end{equation} \ex{2.16} %\begin{equation}\label{key} %\Braket{K^{HP} | \mathscr{H} | L} = \dfrac{1}{\sqrt{N!}}\Braket{K^{HP} | \mathscr{H} | \sum_{m=1}^{N!}(-1)^{p_m}\mathscr{P}_m L^{HP}} %\end{equation} % \begin{align}\label{key} \Braket{K | \mathscr{H} | L} &= \dfrac{1}{\sqrt{N!}} \sum_{n=1}^{N!} \Braket{(-1)^{p_n} \mathscr{P}_n K^{HP} | \mathscr{H} | L} \notag\\ &= \dfrac{1}{\sqrt{N!}} \sum_{n=1}^{N!} \Braket{ K^{HP} | \mathscr{H} | L} \notag\\ &= \dfrac{1}{\sqrt{N!}} \times N! 
\Braket{ K^{HP} | \mathscr{H} | L} \notag\\ &= \sqrt{N!} \Braket{ K^{HP} | \mathscr{H} | L} \end{align} \subsubsection{Transition from Spin Orbitals to Spatial Orbitals} \ex{2.17} \begin{equation} \begin{aligned} \ket{1} = \ket{\psi_1\alpha} & \quad \ket{2} = \ket{\psi_1\beta}\\ \ket{3} = \ket{\psi_2\alpha} & \quad \ket{4} = \ket{\psi_2\beta}\ \end{aligned} \end{equation} thus \begin{equation}\label{key} \begin{aligned} \vb{H} &= \mqty(\Braket{1|h|1} + \Braket{2|h|2} + \Braket{12|12} - \Braket{12|21} & \Braket{12|34} - \Braket{12|43}\\ \Braket{34|12} - \Braket{34|21} & \Braket{3|h|3} + \Braket{4|h|4} + \Braket{34|34} - \Braket{34|43})\\ &= \mqty(2(1|h|1) + (11|11) & (12|12)\\ (21|21) & 2(2|h|2) + (22|22)) \end{aligned} \end{equation} \ex{2.18} \begin{equation}\label{key} \begin{aligned} \abs{\Braket{ab||rs}}^2 &= (\Braket{ab|rs} - \Braket{ab|sr})^*(\Braket{ab|rs} - \Braket{ab|sr})\\ &= \Braket{rs|ab}\Braket{ab|rs} - \Braket{rs|ab}\Braket{ab|sr} - \Braket{sr|ab}\Braket{ab|rs} + \Braket{sr|ab}\Braket{ab|sr}\\ &= [ra|sb][ar|bs] - [ra|sb][as|br] - [sa|rb][ar|bs] + [sa|rb][as|br]\\ &= [ar|bs]^2 - 2[ar|bs][as|br] + [as|br]^2 \end{aligned} \end{equation} Let's calculate $ E_0^{(2)} $ term by term. \begin{equation}\label{key} \begin{aligned} \qty(E_0^{(2)})_1 &= \dfrac{1}{4}\sum_{abrs} \dfrac{[ar|bs]^2}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s}\\ &= \dfrac{1}{4}\sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{[ar|bs]^2 +[\bar{a}\bar{r}|bs]^2 + [ar|\bar{b}\bar{s}]^2 +[\bar{a}\bar{r}|\bar{b}\bar{s}]^2}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s}\\ &= \sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{[ar|bs]^2}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s}\\ &= \sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{\Braket{ab|rs}\Braket{rs|ab}}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \qty(E_0^{(2)})_2 &= \dfrac{1}{4}\sum_{abrs} \dfrac{-2[ar|bs][as|br]}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s}\\ &= -\dfrac{1}{2}\sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{[ar|bs][as|br] +[\bar{a}\bar{r}|\bar{b}\bar{s}][\bar{a}\bar{s}|\bar{b}\bar{r}]}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s}\\ &= -\sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{[ar|bs][as|br]}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s}\\ &= -\sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{\Braket{ab|rs}\Braket{rs|ba}}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s} \end{aligned} \end{equation} \begin{equation}\label{key} \begin{aligned} \qty(E_0^{(2)})_3 &= \dfrac{1}{4}\sum_{abrs} \dfrac{[as|br]^2}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s} = \dfrac{1}{4}\sum_{absr}\dfrac{[ar|bs]^2}{\varepsilon_a+\varepsilon_b-\varepsilon_s-\varepsilon_r}\\ &= \sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{\Braket{ab|rs}\Braket{rs|ab}}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s} \end{aligned} \end{equation} thus, \begin{equation}\label{key} E_0^{(2)} = \sum_{a,b}^{N/2}\sum_{r,s=N/2+1}^K \dfrac{\Braket{ab|rs}(2\Braket{rs|ab} - \Braket{rs|ba})}{\varepsilon_a+\varepsilon_b-\varepsilon_r-\varepsilon_s} \end{equation} \subsubsection{Coulomb and Exchange Integrals} \ex{2.19} \begin{equation}\label{key} J_{ii} = (ii|ii) = K_{ii} \end{equation} \begin{equation}\label{key} J_{ij}^* = \Braket{ij|ij}^* = \Braket{ij|ij} = J_{ij} \end{equation} \begin{equation}\label{key} K_{ij}^* = \Braket{ij|ji}^* = \Braket{ji|ij} = \Braket{ij|ji} = K_{ij} \end{equation} \begin{equation}\label{key} J_{ij} = (ii|jj) = (jj|ii) = J_{ji} 
\end{equation} \begin{equation}\label{key} K_{ij} = (ij|ji) = (ji|ij) = K_{ji} \end{equation} \ex{2.20} For real spatial orbitals \begin{equation}\label{key} K_{ij} = (ij|ji) = (ij|ij) = (ji|ji) \end{equation} \begin{equation}\label{key} K_{ij} = \Braket{ij|ji} = \Braket{ii|jj} = \Braket{jj|ii} \end{equation} %. \ex{2.21} \begin{equation}\label{key} \vb{H} = \mqty(2(1|h|1) + (11|11) & (12|12)\\ (21|21) & 2(2|h|2) + (22|22)) = \mqty(2h_{11} + J_{11} & K_{12}\\ K_{12} & 2h_{22} + J_{22}) \end{equation} \ex{2.22} \begin{equation}\label{key} E_{\uparrow\downarrow}^{HP} = \Braket{\Psi_{\uparrow\downarrow}^{HP} | h(1) + h(2) + \dfrac{1}{r_{12}} | \Psi_{\uparrow\downarrow}^{HP}} = (1|h|1)+(2|h|2)+(11|22) = h_{11}+h_{22}+J_{12} \end{equation} \begin{equation}\label{key} E_{\downarrow\downarrow}^{HP} = \Braket{\Psi_{\downarrow\downarrow}^{HP} | h(1) + h(2) + \dfrac{1}{r_{12}} | \Psi_{\downarrow\downarrow}^{HP}} = (1|h|1)+(2|h|2)+(11|22) = h_{11}+h_{22}+J_{12} \end{equation} \subsubsection{Pseudo-Classical Interpretation of Determinantal Energies} %. \ex{2.23} a.-g. can be obtained immediately with definition. \subsection{Second Quantization} \subsubsection{Creation and Annihilation Operators and Their Anticommutation Relations} \ex{2.24} Since $ a_i^\dagger a_j^\dagger + a_j^\dagger a_i^\dagger = 0 $, we have \begin{equation}\label{key} (a_1^\dagger a_2^\dagger + a_2^\dagger a_1^\dagger)\ket{K} = 0 \end{equation} for any $ \ket{K} $. \ex{2.25} Since $ a_i a_j^\dagger + a_j^\dagger a_i = \delta_{ij} $, we have \begin{equation}\label{key} (a_1 a_2^\dagger + a_2^\dagger a_1)\ket{K} = 0 \end{equation} \begin{equation}\label{key} (a_1 a_1^\dagger + a_1^\dagger a_1)\ket{K} = \ket{K} \end{equation} for any $ \ket{K} $. \ex{2.26} \begin{equation}\label{key} \Braket{\chi_i|\chi_j} = \Braket{0 | a_i a_j^\dagger | 0} = \Braket{0 | \delta_{ij} - a_j^\dagger a_i | 0} = \delta_{ij} \end{equation} where $ \ket{0} $ is the vacuum state. \ex{2.27} First, if $ i\notin \{1,2,\cdots,N\} $ or $ j\notin \{1,2,\cdots,N\} $, \begin{equation}\label{key} \Braket{K | a_i^\dagger a_j | K} = 0 \end{equation} because inexistent electron cannot be annihilated.\\ Thus, $ i,j\in \{1,2,\cdots,N\} $, and \begin{equation}\label{key} \Braket{K | a_i^\dagger a_j | K} = \delta_{ij}\Braket{K|K} - \Braket{K | a_j a_i^\dagger | K} \end{equation} $ \Braket{K | a_j a_i^\dagger | K} $ would be $ 0 $ because $ \chi_i $ is created twice. Thus, \begin{equation}\label{key} \Braket{K | a_i^\dagger a_j | K} = \delta_{ij} \end{equation} Overall, $ \Braket{K | a_i^\dagger a_j | K} = 1$ when $ i=j $ and $ i\in \{1,2,\cdots,N\} $, but is $ 0 $ otherwise. \ex{2.28} \subex{a.} That's obvious since inexistent electron cannot be annihilated. \subex{b.} That's obvious since an electron cannot be created twice. \subex{c.} \begin{equation}\label{key} \begin{aligned} a_r^\dagger a_a\ket{\Psi_0} &= a_r^\dagger a_a (-\ket{\chi_a\cdots\chi_1\chi_b\cdots\chi_N})\\ &= -a_r^\dagger\ket{\cdots\chi_1\chi_b\cdots\chi_N}\\ &= -\ket{\chi_r\cdots\chi_1\chi_b\cdots\chi_N}\\ &= \ket{\chi_1\cdots\chi_r\chi_b\cdots\chi_N}\\ &= \ket{\Psi_a^r} \end{aligned} \end{equation} \subex{d.} That's similar to 2.28.c. 
\subex{e.} \begin{equation}\label{key} \begin{aligned} a_s^\dagger a_b a_r^\dagger a_a\ket{\Psi_0} &= a_s^\dagger a_b a_r^\dagger (-\ket{\chi_2\cdots\chi_1\chi_b\cdots\chi_N}) \\ &= -a_s^\dagger a_b \ket{\chi_r\chi_2\cdots\chi_1\chi_b\cdots\chi_N}\\ &= -a_s^\dagger (-\ket{\chi_2\cdots\chi_1\chi_r\cdots\chi_N}) \\ &= \ket{\chi_s\chi_2\cdots\chi_1\chi_r\cdots\chi_N}\\ &= \ket{\chi_1\cdots\chi_r\chi_s\cdots\chi_N}\\ &= \ket{\Psi_{ab}^{rs}} \end{aligned} \end{equation} $ \therefore $ \begin{equation}\label{key} \ket{\Psi_{ab}^{rs}} = a_s^\dagger a_b a_r^\dagger a_a\ket{\Psi_0} = a_s^\dagger (-a_r^\dagger a_b) a_a\ket{\Psi_0} = a_r^\dagger a_s^\dagger a_b a_a\ket{\Psi_0} \end{equation} \subex{f.} That's similar to 2.28.e. \subsubsection{Second-Quantized Operators and Their Matrix Elements} \ex{2.29} \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_0 | \mathcal{O}_1 | \Psi_0} &= \sum_{ij}\Braket{i|h|j}\Bra{0}a_2 a_1 a_i^\dagger a_j a_1^\dagger a_2^\dagger \ket{0}\\ &= \sum_{ij}\Braket{i|h|j}\Bra{0}a_2 a_1 (\delta_{ij} - a_j^\dagger a_i) a_1^\dagger a_2^\dagger \ket{0} \\ &= \sum_i\Braket{i|h|i} \Bra{0}a_2 a_1 a_1^\dagger a_2^\dagger \ket{0} - \sum_{ij}\Braket{i|h|j}\Bra{0}a_2 a_1 a_j a_i^\dagger a_1^\dagger a_2^\dagger \ket{0} \end{aligned} \end{equation} The second terms must be $ 0 $ since $ i\in{1,2} $.\\ Thus, \begin{equation}\label{key} \begin{aligned} \Braket{\Psi_0 | \mathcal{O}_1 | \Psi_0} &= \sum_i\Braket{i|h|i} \Bra{0}a_2 a_1 a_1^\dagger a_2^\dagger \ket{0} = \Braket{1|h|1} +\Braket{2|h|2} \end{aligned} \end{equation} \ex{2.30} %\begin{equation} \allowdisplaybreaks \begin{align} \Braket{\Psi_a^r | \mathcal{O}_1 | \Psi_0} &= \sum_{ij}\Braket{i|h|j}\Braket{\Psi_0 | a_a^\dagger a_r a_i^\dagger a_j | \Psi_0} = \sum_{ij}\Braket{i|h|j}\Braket{\Psi_0 | a_a^\dagger (\delta_{ri} - a_i^\dagger a_r) a_j | \Psi_0}\notag\\ &= \sum_j\Braket{r|h|j}\Braket{\Psi_0 | a_a^\dagger a_j|\Psi_0} - \sum_{ij}\Braket{i|h|j}\Braket{\Psi_0 | a_a^\dagger a_i^\dagger a_r a_j | \Psi_0}\notag\\ %\displaybreak &= \sum_j\Braket{r|h|j}\Braket{\Psi_0 | (\delta_{aj} - a_j a_a^\dagger)|\Psi_0}\notag\\ &= \Braket{r|h|a}\Braket{\Psi_0 | \Psi_0} - \sum_j\Braket{r|h|j}\Braket{\Psi_0 | a_j a_a^\dagger|\Psi_0}\notag\\ &= \Braket{r|h|a} \end{align} %\end{equation} \ex{2.31} \begin{align} \Braket{\Psi_a^r | \mathcal{O}_2 | \Psi_0} &= \dfrac{1}{2} \sum_{ijkl} \Braket{ij|kl} \Braket{\Psi_0 | a_a^\dagger a_r a_i^\dagger a_j^\dagger a_l a_k | \Psi_0} \end{align} while \begin{align} \Braket{\Psi_0 | a_a^\dagger a_r a_i^\dagger a_j^\dagger a_l a_k | \Psi_0} &= \Braket{\Psi_0 | a_a^\dagger \delta_{ri} a_j^\dagger a_l a_k | \Psi_0} - \Braket{\Psi_0 | a_a^\dagger a_i^\dagger a_r a_j^\dagger a_l a_k | \Psi_0} \notag\\ &= \delta_{ri} \qty(\Braket{\Psi_0 | a_j^\dagger \delta_{ak} a_l | \Psi_0} - \Braket{\Psi_0 | a_j^\dagger a_k a_a^\dagger a_l | \Psi_0} ) \notag\\ & \quad{} - \qty(\Braket{\Psi_0 | a_a^\dagger a_i^\dagger \delta_{rj} a_l a_k | \Psi_0} - \Braket{\Psi_0 | a_a^\dagger a_i^\dagger a_j^\dagger a_r a_l a_k | \Psi_0})\notag\\ &= \delta_{ri}\delta_{ak} \Braket{\Psi_0 | a_j^\dagger a_l | \Psi_0} - \delta_{ri}\delta_{al}\Braket{\Psi_0 | a_j^\dagger a_k | \Psi_0} \notag\\ & \quad{} - \delta_{rj}\qty(\Braket{\Psi_0 | a_i^\dagger \delta_{ak} a_l| \Psi_0} - \Braket{\Psi_0 | a_i^\dagger a_k a_a^\dagger a_l | \Psi_0} ) + 0 \notag\\ &= \delta_{ri}\delta_{ak} \Braket{\Psi_0 | a_j^\dagger a_l | \Psi_0} - \delta_{ri}\delta_{al}\Braket{\Psi_0 | a_j^\dagger a_k | \Psi_0} \notag\\ & \quad{} - \delta_{rj}\delta_{ak} \Braket{\Psi_0 | 
a_i^\dagger a_l| \Psi_0} + \delta_{rj}\delta_{al} \Braket{\Psi_0 | a_i^\dagger a_k | \Psi_0} \end{align} According to Ex. 2.27, we have \begin{align} \Braket{\Psi_a^r | \mathcal{O}_2 | \Psi_0} &= \dfrac{1}{2} \left( \sum_{jl}\Braket{rj|al}\Braket{\Psi_0 | a_j^\dagger a_l | \Psi_0} - \sum_{jk}\Braket{rj|ka}\Braket{\Psi_0 | a_j^\dagger a_k | \Psi_0}\right. \notag\\ & \left. \hspace{30pt} {} - \sum_{il}\Braket{ir|al}\Braket{\Psi_0 | a_i^\dagger a_l | \Psi_0} + \sum_{ik}\Braket{ir|ka}\Braket{\Psi_0 | a_i^\dagger a_k | \Psi_0}\right) \notag\\ &= \dfrac{1}{2}\qty(\sum_j^N\Braket{rj|aj} - \sum_j^N\Braket{rj|ja} - \sum_i^N\Braket{ir|ai} + \sum_i^N\Braket{ir|ia}) \notag\\ &= \sum_j^N\Braket{rj|aj} - \sum_j^N\Braket{rj|ja} \notag\\ &= \sum_j^N\Braket{rj||aj} \end{align} \subsection{Spin-Adapted Configurations} \subsubsection{Spin Operators} \ex{2.32} \subex{a)} \begin{align} \hs_+\ket{\alpha} &= (\hs_x + \I\hs_y)\ket{\alpha} = \qty(\dfrac{1}{2} + \I\dfrac{\I}{2})\ket{\beta} = 0\\ \hs_+\ket{\beta} &= (\hs_x + \I\hs_y)\ket{\beta} = \qty(\dfrac{1}{2} - \I\dfrac{\I}{2})\ket{\alpha} = \ket{\alpha}\\ \hs_-\ket{\alpha} &= (\hs_x - \I\hs_y)\ket{\alpha} = \qty(\dfrac{1}{2} - \I\dfrac{\I}{2})\ket{\beta} = \ket{\beta}\\ \hs_-\ket{\beta} &= (\hs_x - \I\hs_y)\ket{\beta} = \qty(\dfrac{1}{2} + \I\dfrac{\I}{2})\ket{\alpha} = 0 \end{align} \subex{b)} \begin{align} \hs_+\hs_- &= (\hs_x + \I\hs_y)(\hs_x - \I\hs_y) = \hs_x^2 + \hs_y^2 + \I(\hs_y\hs_x - \hs_x\hs_y) = \hs_x^2 + \hs_y^2 + \hs_z\\\ \hs_-\hs_+ &= (\hs_x - \I\hs_y)(\hs_x + \I\hs_y) = \hs_x^2 + \hs_y^2 + \I(\hs_x\hs_y - \hs_y\hs_x) = \hs_x^2 + \hs_y^2 - \hs_z \end{align} thus, \begin{align} \hs^2 &= \hs_x^2 + \hs_y^2 + \hs_z^2 = \hs_+\hs_- - \hs_z + \hs_z^2\\ &= \hs_-\hs_+ + \hs_z + \hs_z^2 \end{align} \ex{2.33} \begin{equation}\label{key} \hs^2 = \mqty(\dfrac{3}{4} & 0\\ 0 & \dfrac{3}{4}) \quad \hs_z = \mqty(\dfrac{1}{2} & 0\\ 0 & -\dfrac{1}{2}) \quad \hs_+ = \mqty(0 & 1\\ 0 & 0) \quad \hs_- = \mqty(0 & 0\\ 1 & 0) \end{equation} thus \begin{align} \hs_+\hs_- - \hs_z + \hs_z^2 &= \mqty(1 & 0\\ 0 & 0) - \mqty(\dfrac{1}{2} & 0\\ 0 & -\dfrac{1}{2}) + \mqty(\dfrac{1}{4} & 0\\ 0 & \dfrac{1}{4}) = \mqty(\dfrac{3}{4} & 0\\ 0 & \dfrac{3}{4}) = \hs^2\\ \hs_-\hs_+ + \hs_z + \hs_z^2 &= \mqty(0 & 0\\ 0 & 1) + \mqty(\dfrac{1}{2} & 0\\ 0 & -\dfrac{1}{2}) + \mqty(\dfrac{1}{4} & 0\\ 0 & \dfrac{1}{4}) = \mqty(\dfrac{3}{4} & 0\\ 0 & \dfrac{3}{4}) = \hs^2 \end{align} \ex{2.34} \begin{align} [\hs^2, \hs_z] &= [\hs_+\hs_- - \hs_z + \hs_z^2, \hs_z] \notag\\ &= \hs_+[\hs_-, \hs_z] + [\hs_+, \hs_z]\hs_- - 0 + 0 \notag\\ &= \hs_+[\hs_x - \I\hs_y, \hs_z] + [\hs_x + \I\hs_y, \hs_z]\hs_- \notag\\ &= \hs_+(-\I\hs_y -\I\cdot\I\hs_x) + (-\I\hs_y + \I\cdot\I\hs_x)\hs_- \notag\\ &= \hs_+ \hs_- - \hs_+\hs_- \notag\\ &= 0 \end{align} \ex{2.35} \begin{equation}\label{key} \mathscr{H} \mathscr{A}\ket{\Phi} = \mathscr{A}\mathscr{H} \ket{\Phi} = \mathscr{A} E \ket{\Phi} = E\mathscr{A} \ket{\Phi} \end{equation} thus $ \mathscr{A} \ket{\Phi} $ is also an eigenfunction of $ \mathscr{H} $ with eigenvalue $ E $. 
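The operator identities used in Ex.~2.32--2.34 can also be confirmed numerically. The sketch below (NumPy is assumed here, purely for illustration) writes the one-electron spin matrices in the $ \{\ket{\alpha},\ket{\beta}\} $ basis with $ \hbar = 1 $ and checks that $ \hs_+\hs_- - \hs_z + \hs_z^2 = \hs_-\hs_+ + \hs_z + \hs_z^2 = \hs^2 $ and that $ [\hs^2, \hs_z] = 0 $.
\begin{verbatim}
# Numerical check of Ex. 2.32-2.34 in the {alpha, beta} basis (hbar = 1).
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

sp = sx + 1j * sy                    # s_+  ->  [[0, 1], [0, 0]]
sm = sx - 1j * sy                    # s_-  ->  [[0, 0], [1, 0]]
s2 = sx @ sx + sy @ sy + sz @ sz     # s^2  ->  (3/4) * identity

assert np.allclose(sp @ sm - sz + sz @ sz, s2)   # Ex. 2.32b / 2.33
assert np.allclose(sm @ sp + sz + sz @ sz, s2)
assert np.allclose(s2 @ sz - sz @ s2, 0)         # [s^2, s_z] = 0, Ex. 2.34
\end{verbatim}
All three assertions should hold, in agreement with the explicit matrices written out in Ex.~2.33.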
\ex{2.36} \begin{equation}\label{key} \Braket{\Psi_1 | \mathscr{H} \mathscr{A} | \Psi_2} = a_2 \Braket{\Psi_1 | \sH | \Psi_2} \end{equation} Since $ [\sA, \sH]=0 $ and $ \sA $ is Hermitian, \begin{equation}\label{key} \Braket{\Psi_1 | \mathscr{H} \mathscr{A} | \Psi_2} = \Braket{\Psi_1 | \mathscr{A}\mathscr{H} | \Psi_2} = \Braket{\Psi_1 | \mathscr{A}^\dagger \mathscr{H} | \Psi_2} = a_1 \Braket{\Psi_1 | \mathscr{H} | \Psi_2} \end{equation} thus \begin{equation}\label{key} (a_1 - a_2) \Braket{\Psi_1 | \sH |\Psi_2} = 0 \end{equation} Since $ a_1 \neq a_2 $, \begin{equation}\label{key} \Braket{\Psi_1 | \sH |\Psi_2} = 0 \end{equation} \ex{2.37} \begin{align} \hsS_z \ket{\chi_i\chi_j\cdots\chi_k} &= \hsS_z \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\{\chi_i(1)\chi_j(2)\cdots\chi_k(N)\} \notag\\ &= \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\{\hsS_z \chi_i(1)\chi_j(2)\cdots\chi_k(N)\} \notag\\ &= \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\qty{\sum_{i=1}^{N} \hs_z(i)\chi_i(1)\chi_j(2)\cdots\chi_k(N)} \notag\\ &= \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\qty{\qty(\dfrac{1}{2}N^\alpha - \dfrac{1}{2}N^\beta)\chi_i(1)\chi_j(2)\cdots\chi_k(N)} \notag\\ &= \dfrac{1}{2}(N^\alpha - N^\beta) \ket{\chi_i\chi_j\cdots\chi_k} \end{align} \subsubsection{Restricted Determinants and Spin-Adapted Configurations} \ex{2.38} From Ex 2.37, we have \begin{equation}\label{key} \hsS_z \ket{\psi_i\bar{\psi}_i\psi_j\bar{\psi}_j\cdots} = 0 \end{equation} thus \begin{equation}\label{key} \hsS_z^2 \ket{\psi_i\bar{\psi}_i\psi_j\bar{\psi}_j\cdots} = 0 \end{equation} While \begin{align} \hsS_+ \ket{\psi_i\bar{\psi}_i\cdots\psi_k\bar{\psi}_k\cdots} &= \hsS_+ \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\{\psi_i\bar{\psi}_i\cdots\psi_k\bar{\psi}_k\cdots\} \notag\\ &= \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\{\hsS_+ \psi_i\bar{\psi}_i\cdots\psi_k\bar{\psi}_k\cdots\} \notag\\\ &= \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\qty{\sum_a^N \hs_+(a) \psi_i\bar{\psi}_i\cdots\psi_k\bar{\psi}_k\cdots} \notag\\ &= \sum_a^N \dfrac{1}{\sqrt{N!}}\sum_{n=1}^{N!}(-1)^{p_n} \hsP_n\qty{ \hs_+(a) \psi_i\bar{\psi}_i\cdots\psi_k\bar{\psi}_k\cdots} \end{align} Since \begin{equation}\label{key} \hs_+(a)\psi_k(a) = 0 \quad \hs_+(a)\bar{\psi}_k(a) = \psi_k(a) \end{equation} \begin{align} \hsS_+ \ket{\psi_i\bar{\psi}_i\cdots\psi_k\bar{\psi}_k\cdots} = \sum_a^N 0 = 0 \end{align} thus \begin{equation}\label{key} \hsS_-\hsS_+ \ket{\psi_i\bar{\psi}_i\psi_j\bar{\psi}_j\cdots} = 0 \end{equation} Therefore, \begin{equation}\label{key} \hsS^2 \ket{\psi_i\bar{\psi}_i\psi_j\bar{\psi}_j\cdots} = (\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\ket{\psi_i\bar{\psi}_i\psi_j\bar{\psi}_j\cdots} = 0 \end{equation} \ex{2.39} \subex{$ \bullet $} \begin{align} \hsS^2\ket{^1\Psi_1^2} &= (\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\dfrac{1}{2}(\psi_1(1)\psi_2(2) + \psi_2(1)\psi_1(2))(\alpha(1)\beta(2) - \beta(1)\alpha(2)) \notag\\ &= \dfrac{1}{2}(\psi_1(1)\psi_2(2) + \psi_2(1)\psi_1(2))(\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)(\alpha(1)\beta(2) - \beta(1)\alpha(2)) \end{align} $ \because $ \begin{align} \hsS_-\hsS_+ (\alpha(1)\beta(2) - \beta(1)\alpha(2)) &= \hsS_-(\alpha(1)\alpha(2) - \alpha(1)\alpha(2)) = 0\\ \hsS_z (\alpha(1)\beta(2) - \beta(1)\alpha(2)) &= [1/2+(-1/2)]\alpha(1)\beta(2) - [-1/2+1/2]\beta(1)\alpha(2) = 0 \end{align} $ \therefore $ \begin{equation}\label{key} \hsS^2\ket{^1\Psi_1^2} = 0 \end{equation} thus $ \ket{^1\Psi_1^2} $ is singlet. 
\subex{$ \bullet $} \begin{align} \hsS^2\ket{^3\Psi_1^2} &= (\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\dfrac{1}{2}(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2))(\alpha(1)\beta(2) + \beta(1)\alpha(2)) \notag\\ &= \dfrac{1}{2}(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2))(\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)(\alpha(1)\beta(2) + \beta(1)\alpha(2)) \end{align} $ \because $ \begin{align} \hsS_-\hsS_+ (\alpha(1)\beta(2) + \beta(1)\alpha(2)) &= \hsS_-(\alpha(1)\alpha(2) + \alpha(1)\alpha(2)) = 2(\alpha(1)\beta(2) + \beta(1)\alpha(2))\\ \hsS_z (\alpha(1)\beta(2) + \beta(1)\alpha(2)) &= [1/2+(-1/2)]\alpha(1)\beta(2) + [-1/2+1/2]\beta(1)\alpha(2) = 0 \end{align} $ \therefore $ \begin{equation}\label{key} \hsS^2\ket{^3\Psi_1^2} = 2\ket{^3\Psi_1^2} \end{equation} i.e. $ S=1 $,\\ thus $ \ket{^3\Psi_1^2} $ is triplet. \subex{$ \bullet $} \begin{align} \hsS^2\ket{\Psi_1^{\bar{2}}} &= (\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\dfrac{-1}{\sqrt{2}}(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2))\beta(1)\beta(2) \notag\\ &= \dfrac{-1}{\sqrt{2}}(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2))(\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\beta(1)\beta(2) \end{align} $ \because $ \begin{align} \hsS_-\hsS_+ \beta(1)\beta(2) &= \hsS_-(\alpha(1)\beta(2) + \beta(1)\alpha(2)) = 2\beta(1)\beta(2) \\ \hsS_z \beta(1)\beta(2) &= -\beta(1)\beta(2)\\ \hsS_z^2 \beta(1)\beta(2) &= \beta(1)\beta(2)\\ \end{align} $ \therefore $ \begin{align} \hsS^2\ket{\Psi_1^{\bar{2}}} &= 2\ket{\Psi_1^{\bar{2}}} \end{align} i.e. $ S=1 $,\\ thus $ \ket{\Psi_1^{\bar{2}}} $ is triplet. \subex{$ \bullet $} \begin{align} \hsS^2\ket{\Psi_{\bar{1}}^2} &= (\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\dfrac{1}{\sqrt{2}}(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2))\alpha(1)\alpha(2) \notag\\ &= \dfrac{1}{\sqrt{2}}(\psi_1(1)\psi_2(2) - \psi_2(1)\psi_1(2))(\hsS_-\hsS_+ + \hsS_z + \hsS_z^2)\alpha(1)\alpha(2) \end{align} $ \because $ \begin{align} \hsS_-\hsS_+ \alpha(1)\alpha(2) &= 0 \\ \hsS_z \alpha(1)\alpha(2) &= \alpha(1)\alpha(2)\\ \hsS_z^2 \alpha(1)\alpha(2) &= \alpha(1)\alpha(2)\\ \end{align} $ \therefore $ \begin{align} \hsS^2\ket{\Psi_{\bar{1}}^2} &= 2\ket{\Psi_{\bar{1}}^2} \end{align} i.e. $ S=1 $,\\ thus $ \ket{\Psi_{\bar{1}}^2} $ is triplet. 
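The spin assignments of Ex.~2.39 can be double-checked numerically by building the two-electron operators $ \hsS_z $ and $ \hsS_\pm $ as Kronecker products of the one-electron matrices and applying $ \hsS^2 = \hsS_-\hsS_+ + \hsS_z + \hsS_z^2 $ to the four spin functions. A minimal NumPy sketch (again an illustrative assumption, not part of the text):
\begin{verbatim}
# Check of Ex. 2.39: S^2 acting on the two-electron spin functions.
import numpy as np

I2 = np.eye(2)
sz = 0.5 * np.array([[1, 0], [0, -1]])
sp = np.array([[0, 1], [0, 0]])            # s_+
sm = np.array([[0, 0], [1, 0]])            # s_-

# Two-electron operators O(1) + O(2) via Kronecker products.
Sz = np.kron(sz, I2) + np.kron(I2, sz)
Sp = np.kron(sp, I2) + np.kron(I2, sp)
Sm = np.kron(sm, I2) + np.kron(I2, sm)
S2 = Sm @ Sp + Sz + Sz @ Sz

a = np.array([1.0, 0.0])                   # |alpha>
b = np.array([0.0, 1.0])                   # |beta>

singlet  = np.kron(a, b) - np.kron(b, a)   # alpha.beta - beta.alpha
triplet0 = np.kron(a, b) + np.kron(b, a)   # alpha.beta + beta.alpha

print(S2 @ singlet)        # 0 * singlet        ->  S = 0
print(S2 @ triplet0)       # 2 * triplet0       ->  S(S+1) = 2, S = 1
print(S2 @ np.kron(b, b))  # 2 * beta.beta      ->  S = 1
print(S2 @ np.kron(a, a))  # 2 * alpha.alpha    ->  S = 1
\end{verbatim}
The spin part of $ \ket{^1\Psi_1^2} $ is annihilated by $ \hsS^2 $, while the three remaining functions return twice themselves, i.e.\ $ S(S+1) = 2 $, exactly as found above.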
\ex{2.40} \subex{$ \bullet $} \begin{align} \Braket{^1\Psi_1^2 | \sH | ^1\Psi_1^2} &= \dfrac{1}{4} \Braket{\psi_1(1)\psi_2(2) + \psi_1(2)\psi_2(1) | \sH | \psi_1(1)\psi_2(2) + \psi_1(2)\psi_2(1)} \notag\\ &\quad{} \Braket{\alpha(1)\beta(2) - \beta(1)\alpha(2) | \alpha(1)\beta(2) - \beta(1)\alpha(2)}\notag\\ &= \dfrac{1}{4}((1|h|1) + (2|h|2) + (11|22) + (12|21) + (21|12) + (2|h|2) + (1|h|1) + (22|11) ) (1 - 0 - 0 + 1) \notag\\ &= h_{11} + h_{22} + J_{12} + K_{12} \end{align} \subex{$ \bullet $} \begin{align} \Braket{^3\Psi_1^2 | \sH | ^3\Psi_1^2} &= \dfrac{1}{4} \Braket{\psi_1(1)\psi_2(2) - \psi_1(2)\psi_2(1) | \sH | \psi_1(1)\psi_2(2) - \psi_1(2)\psi_2(1)} \notag\\ &\quad{} \Braket{\alpha(1)\beta(2) + \beta(1)\alpha(2) | \alpha(1)\beta(2) + \beta(1)\alpha(2)}\notag\\ &= \dfrac{1}{4}((1|h|1) + (2|h|2) + (11|22) - (12|21) - (21|12) + (2|h|2) + (1|h|1) + (22|11) ) (1 + 0 + 0 + 1) \notag\\ &= h_{11} + h_{22} + J_{12} - K_{12} \end{align} \subsubsection{Unrestricted Determinants} \ex{2.41} \subex{a.} \begin{align} \hsS^2\ket{K} &= \qty(\hsS_-\hsS_+ + \hsS_z + \hsS_z^2) \dfrac{1}{\sqrt{2}} \qty(\psi_1^\alpha(1)\psi_1^\beta(2)\alpha(1)\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)\beta(1)\alpha(2))\notag \\ &= \dfrac{1}{\sqrt{2}} \psi_1^\alpha(1)\psi_1^\beta(2) \qty(\hsS_-\alpha(1)\alpha(2) + 0 + 0) - \psi_1^\beta(1)\psi_1^\alpha(2) \qty(\hsS_-\alpha(1)\alpha(2) + 0 + 0) \notag\\ &= \dfrac{1}{\sqrt{2}} \qty(\psi_1^\alpha(1)\psi_1^\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)) \qty(\alpha(1)\beta(2) + \beta(1)\alpha(2)) \notag\\ &= \dfrac{1}{\sqrt{2}} \qty[\psi_1^\alpha(1)\psi_1^\beta(2)\alpha(1)\beta(2) + \psi_1^\alpha(1)\psi_1^\beta(2)\beta(1)\alpha(2) %\notag\\ %&\quad{} - \psi_1^\beta(1)\psi_1^\alpha(2)\alpha(1)\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)\beta(1)\alpha(2)] \notag\\ &= \ket{K} + \dfrac{1}{\sqrt{2}} \qty[\psi_1^\alpha(1)\psi_1^\beta(2)\beta(1)\alpha(2) - \psi_1^\beta(1)\psi_1^\alpha(2)\alpha(1)\beta(2)] \end{align} thus, $ \ket{K} $ being an eigenfunction of $ \hsS^2 $ requires \begin{align} \psi_1^\alpha(1)\psi_1^\beta(2)\beta(1)\alpha(2) - \psi_1^\beta(1)\psi_1^\alpha(2)\alpha(1)\beta(2) = k\ket{K} \end{align} which requires \begin{equation}\label{key} \psi_1^\alpha = \psi_1^\beta \end{equation} \subex{b.} \begin{align} \Braket{K | \hsS^2 | K} &= \dfrac{1}{2} \Braket{\psi_1^\alpha(1)\psi_1^\beta(2)\alpha(1)\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)\beta(1)\alpha(2) | (\psi_1^\alpha(1)\psi_1^\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)) (\alpha(1)\beta(2) + \beta(1)\alpha(2))} \notag\\ &= \dfrac{1}{2} \Braket{\psi_1^\alpha(1)\psi_1^\beta(2) | \psi_1^\alpha(1)\psi_1^\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)} - \Braket{\psi_1^\beta(1)\psi_1^\alpha(2) | \psi_1^\alpha(1)\psi_1^\beta(2) - \psi_1^\beta(1)\psi_1^\alpha(2)} \notag\\ &= \dfrac{1}{2}\qty[\qty(1 - \abs{S_{11}^{\alpha\beta}}^2) - \qty(\abs{S_{11}^{\alpha\beta}}^2 - 1)] \notag\\ &= 1 - \abs{S_{11}^{\alpha\beta}}^2 \end{align} \end{document}
{ "alphanum_fraction": 0.6169714286, "avg_line_length": 45.7516339869, "ext": "tex", "hexsha": "b4f2d7ee30b9757a1568ae1a4ce307ab6aadb28f", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-03-21T08:43:27.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-11T11:30:44.000Z", "max_forks_repo_head_hexsha": "58d79bd949d34e310e4ce8c287fe4b7ecda560da", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "hebrewsnabla/S-O-MQC-HW", "max_forks_repo_path": "chap2/chap2.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "58d79bd949d34e310e4ce8c287fe4b7ecda560da", "max_issues_repo_issues_event_max_datetime": "2022-01-26T13:00:28.000Z", "max_issues_repo_issues_event_min_datetime": "2021-04-30T15:45:12.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "hebrewsnabla/S-O-MQC-HW", "max_issues_repo_path": "chap2/chap2.tex", "max_line_length": 267, "max_stars_count": 28, "max_stars_repo_head_hexsha": "58d79bd949d34e310e4ce8c287fe4b7ecda560da", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "hebrewsnabla/S-O-MQC-HW", "max_stars_repo_path": "chap2/chap2.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T09:26:32.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-03T03:37:22.000Z", "num_tokens": 16970, "size": 35000 }
\documentclass[a4paper,11pt]{article} \usepackage{latexsym} \usepackage[empty]{fullpage} \usepackage{titlesec} \usepackage{marvosym} \usepackage[usenames,dvipsnames]{color} \usepackage{verbatim} \usepackage{enumitem} \usepackage[hidelinks]{hyperref} \usepackage{fancyhdr} \usepackage[english]{babel} \usepackage{tabularx} \input{glyphtounicode} \usepackage[a4paper]{geometry} \pagestyle{fancy} \fancyhf{} % clear all header and footer fields \fancyfoot{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0pt} % Adjust margins \geometry{left=1.9cm, top=1.8cm, right=1.9cm, bottom=1.8cm, footskip=.5cm} \urlstyle{same} \raggedbottom \raggedright \setlength{\tabcolsep}{0in} % Sections formatting \titleformat{\section}{ \vspace{-4pt}\scshape\raggedright\large }{}{0em}{}[\color{black}\titlerule \vspace{-5pt}] % Ensure that generate pdf is machine readable/ATS parsable \pdfgentounicode=1 %------------------ Custom commands ------------------- \newcommand{\resumeItem}[1]{ \item\small{ {#1 \vspace{-2pt}} } } \newcommand{\resumeSubheading}[4]{ \vspace{-2pt}\item \begin{tabular*}{\textwidth}[t]{l@{\extracolsep{\fill}}r} \textbf{#1} & #2 \\ \textit{\small#3} & \textit{\small #4} \\ \end{tabular*}\vspace{-7pt} } \newcommand{\resumeSubSubheading}[2]{ \item \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \textit{\small#1} & \textit{\small #2} \\ \end{tabular*}\vspace{-7pt} } \newcommand{\resumeProjectHeading}[4]{ \item \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r} \small#1 & #2 \\ \textit{\small#3} & \textit{\small #4}\\ \end{tabular*}\vspace{-7pt} } \newcommand{\resumeSubItem}[1]{\resumeItem{#1}\vspace{-4pt}} \renewcommand\labelitemii{$\vcenter{\hbox{\tiny$\bullet$}}$} \newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=-0pt, label={}]} \newcommand{\resumeSubHeadingListEnd}{\end{itemize}} \newcommand{\resumeItemListStart}{\begin{itemize}} \newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}} % Extras \newcommand*{\cvrank}[2]{\footnotesize{\textit{#1} \textbf{#2} $\bullet$}} \newcommand*{\cvrankend}[2]{\footnotesize{\textit{#1} \textbf{#2} \\ } } % Extras end %%%%%%%%%%%%%%%% RESUME STARTS HERE %%%%%%%%%%%%%%%%%%%%%% \begin{document} \vspace*{2.8cm} %-----------Internships----------- \section{Experience} \resumeSubHeadingListStart \resumeSubheading {Tech Leaders Summer Fellow}{Jun. 2020 - Aug. 2020} {Camp K12}{Mumbai, India} \resumeItemListStart \resumeItem{Taught coding in an intuitive manner to \textbf{100}\footnotesize{+} \small students aged 14\hspace{2pt}-\hspace{1pt}18 years using \textbf{Python} and \textbf{JavaScript}} \resumeItem{Spearheaded the redesign \& diversification of the existing curriculum leading to \textbf{15\%} increase in revenue} \resumeItem{Covered curriculum spanning from basics such as Iterators to complex algorithms like Divide \& Conquer} \resumeItem{Utilized AI Playground, Repl.it, Hatch XR, and MIT App Inventor} \resumeItemListEnd \resumeSubheading {Design Secretary}{(Apr. 
2020 - Ongoing)} {Chemical Engineering Association (ChEA), IIT Bombay}{Mumbai, India} \resumeItemListStart \resumeItem{Part of a \textbf{9} member team, selected out of \textbf{50+} students on the basis of manifesto, assignment \& interview} \resumeItem{Co-organised and coordinated \textbf{20+} departmental events, managing \textbf{1000+} unique students in total} \resumeItem{Crafted posters, curated content, designed hoodies for the graduating batch leading to higher engagement} \resumeItem{Utilised \textbf{CorelDraw} \& \textbf{Adobe Creative Cloud} (Illustrator, Photoshop, Premiere Pro \& After Effects)} \resumeItemListEnd %--------- \resumeSubheading {Institute Cultural Mentor}{(Jun. 2020 - Ongoing)} {StyleUp - The Fashion Club, Culturals, IIT Bombay}{Mumbai, India} \resumeItemListStart \resumeItem{Nominated (\textbf{4} out of \textbf{100}+) by Institute Fashion Nominee to promote fashion \& other cultural activities} \resumeItem{Conceptualized and spearheaded Style Saturday, a monthly blog aimed at promoting style, grooming and personal well being amidst the COVID lock down resulting in \textbf{2$\times$} monthly engagement} \resumeItem{Led the ideation \& execution of various genre related events, targeting \textbf{300\%} \textbf{increase} in participation by introducing mixed themes, using innovative publicity, and introducing large scale never before seen online events such as Glamour Grande, an online fashion show} \resumeItemListEnd \resumeSubHeadingListEnd %-----------PROJECTS----------- \section{Projects} \resumeSubHeadingListStart \resumeProjectHeading {\textbf{Smart Suitcase} $|$ \emph{Python, OpenCV, ROS, Gazebo, RViz, RQT}}{(Mar. 2020 - Jul. 2020)}{Institute Technical Summer Project, IIT Bombay}{Guide: Vishwajeet Bhagyawant} \resumeItemListStart \resumeItem{Designed a suitcase that follows its owner there by eliminating the trouble of carrying it around} \resumeItem{Tested its functioning through virtual simulation by deploying it on a user controlled test target} \resumeItem{Implemented image processing through \textbf{Python} \& \textbf{OpenCV} to scan the surroundings and lock down on the target through the \textbf{HSV} color range and used centroid tracking for mimicking the targets motion} \resumeItem{Accurately \& efficiently simulated the bot and the control target using \textbf{Gazebo}, while using \textbf{ROS} to facilitate low-level device control and communication between various sub-processes} \resumeItemListEnd %--------- \resumeProjectHeading {\textbf{Wireless Remote Controlled Car} $|$ \emph{IoT}}{(Aug. 2019 - Sep. 
2019)} {Electronics \& Robotics Club, IIT Bombay}{} \resumeItemListStart \resumeItem{Built a bot which decodes Bluetooth signals using an \textbf{ATtiny} IC to overcome an obstacle-filled course} \resumeItem{Installed \textbf{HC05} Bluetooth module to operate the bot remotely using an Android app} \resumeItem{Implemented differential steering mechanism via \textbf{L293D} motor driver for reducing the turning radius} \resumeItemListEnd \resumeSubHeadingListEnd %-----------PROGRAMMING SKILLS----------- \section{Achievements} \begin{itemize}[leftmargin=0.15in, label={}] \item{ \small{ \textbf{Competitive Coding :} \cvrank{4\hspace{1pt}$\star$}{CodeChef (relaxxpls)} \cvrankend{Specialist}{Codeforces (relaxxpls)} }} \vspace{-.5mm} \item{ \small{ \textbf{Engineering Entrances :} \cvrank{AIR 1448}{JEE Advanced} \cvrank{99.6\hspace{1pt}\%\hspace{1pt}ile}{JEE Mains} \cvrankend{2\hspace{1pt}nd Rank, Phy Topper}{GCET} }} \vspace{-.5mm} \item{ \small{ \textbf{Medical Entrances :} \cvrank{AIR 1064}{AIIMS} \cvrankend{98.7\hspace{1pt}\%\hspace{1pt}ile\hspace{2pt}(3rd highest in Goa)}{NEET} }} \vspace{-0mm} \end{itemize} %-----------PROGRAMMING SKILLS----------- \section{Technical Skills} \begin{itemize}[leftmargin=0.15in, label={}] \small{\item{ \textbf{Languages}{: C/C++, Python, Java, JavaScript, HTML/CSS, XML, Bash} \\ \textbf{Tools}{: Git, Matlab, Linux Terminal, Eclipse, \LaTeX, AutoCAD, SolidWorks, Bash, ROS, CMake, Adobe CC} \\ \textbf{Libraries}{: STL, NumPy, OpenCV, TKinter} }} \end{itemize} \end{document}
{ "alphanum_fraction": 0.6770820048, "avg_line_length": 46.1235294118, "ext": "tex", "hexsha": "b5b8ac418eba9634001fa71a23ffbbfa9218a700", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-07-29T15:59:17.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-03T13:55:12.000Z", "max_forks_repo_head_hexsha": "98892da40e6ce49a359befa39113c8f5de1d6769", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "relaxxpls/relaxxpls", "max_forks_repo_path": "Resume/archive/Semester 3/1PageNonTech.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "98892da40e6ce49a359befa39113c8f5de1d6769", "max_issues_repo_issues_event_max_datetime": "2021-12-13T14:08:11.000Z", "max_issues_repo_issues_event_min_datetime": "2021-12-13T14:07:32.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "relaxxpls/relaxxpls", "max_issues_repo_path": "Resume/archive/Semester 3/1PageNonTech.tex", "max_line_length": 309, "max_stars_count": 1, "max_stars_repo_head_hexsha": "98892da40e6ce49a359befa39113c8f5de1d6769", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "relaxxpls/relaxxpls", "max_stars_repo_path": "Resume/archive/Semester 3/1PageNonTech.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-29T15:58:41.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-29T15:58:41.000Z", "num_tokens": 2259, "size": 7841 }
---------------------------------------------------------------------------- % Xic Manual % (C) Copyright 2009, Whiteley Research Inc., Sunnyvale CA % $Id: sidemenu.tex,v 1.83 2016/10/10 18:29:59 stevew Exp $ % ----------------------------------------------------------------------------- % ----------------------------------------------------------------------------- % sidemenu 020715 \chapter{The Side Menu: Geometry Creation} \label{sidemenu} \index{side menu} {\Xic} has a ``side'' menu of buttons, typically displayed along the left edge of the application main window, next to the layer table. This menu contains buttons specific to editing, and is shown only when editing is enabled (meaning that it never appears in the {\Xiv} feature set). The content of the menu differs between electrical and physical modes. If the environment variable {\et XIC\_MENU\_RIGHT} is set when {\Xic} starts, the menu and layer table will be placed along the right edge of the main application window. This might be more convenient for left-handed users. If the {\et XIC\_HORIZ\_BUTTONS} environment variable is set, the ``side'' menu buttons will instead be arrayed horizontally across the top of the main application window, above the top button menu. This section describes in detail the commands available in the side menu in physical and electrical modes. These include commands for geometry creation and other frequently used operations. Again, the side menu is only visible when cell editing is possible. Side menu commands are executed by clicking with button 1 on the buttons. Typing the first few letters of the command name while pointing in a drawing window will also initiate a side menu command. The characters typed are displayed in the key press buffer area to the left of the prompt line in the main window, or in the upper-right corner of sub-window pop-ups. Commands can be exited by selecting the same or another command in most cases, or by pressing the {\kb Esc} key. \index{current transform} In the command descriptions, reference if often made to the ``current transform''. This is a rotation, reflection, and magnification specification for moved or copied objects, and for newly created subcells. The current transform is set with the pop-up produced by the {\cb xform} button in the side. Reference is also made to ``selected'' objects. Objects are selected by clicking the left mouse button (button 1) while pointing at the object, or by pressing and holding button 1 so that the object is enclosed in the rectangle formed with the press and release locations. Selecting a second time will deselect the objects, and all selected objects can be deselected with the {\cb desel} button in the top button menu. Selected objects are displayed with a blinking highlighted border. Objects can also be selected with the {\cb !select} command typed in the prompt area. Reference is made to various commands that start with an exclamation point ``{\cb !}'' such as ``{\cb !set}''. These commands can be entered from the keyboard. Since most of these commands are used infrequently, they are not assigned command buttons. The most important of these commands is probably {\cb !set}, since this allows certain variables to be set which control the behavior of some side menu commands. These ``{\cb !}'' commands are described in chapter \ref{bangcmds}. The tables below summarize the command buttons provided in the side menus in physical and electrical mode. 
Note that the side menu is different between physical and electrical modes, and that the operation of some commands which appear in both may differ slightly. These differences are noted in the descriptions. In the text, side menu commands are referenced by their internal names, since the command buttons contain an icon and not a label. The side menu is not available in the {\Xiv} feature set, and is invisible when certain modes are in effect, such as in CHD display mode, where editing is not allowed. \begin{table} \begin{center} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{\kb Physical Side Menu} \\[0.5ex] \hline \et Icon & \et Name & \et Function\\ \hline \fbox{\epsfbox{images/xform.eps}} & \rb{\vt xform} & \rb{Set current transform}\\ \hline \fbox{\epsfbox{images/place.eps}} & \rb{\vt place} & \rb{Place subcells}\\ \hline \fbox{\epsfbox{images/label.eps}} & \rb{\vt label} & \rb{Create/edit labels}\\ \hline \fbox{\epsfbox{images/logo.eps}} & \rb{\vt logo} & \rb{Create text object}\\ \hline \fbox{\epsfbox{images/box.eps}} & \rb{\vt box} & \rb{Create rectangles}\\ \hline \fbox{\epsfbox{images/polyg.eps}} & \rb{\vt polyg} & \rb{Create polygons}\\ \hline \fbox{\epsfbox{images/wire.eps}} & \rb{\vt wire} & \rb{Create wires}\\ \hline \fbox{\epsfbox{images/style.eps}} & \rb{\parbox{1cm}{\vt style\linebreak{\rm menu}}} & \rb{Set wire style}\\ \hline \fbox{\epsfbox{images/round.eps}} & \rb{\vt round} & \rb{Create disk objects}\\ \hline \fbox{\epsfbox{images/donut.eps}} & \rb{\vt donut} & \rb{Create disk with hole}\\ \hline \fbox{\epsfbox{images/arc.eps}} & \rb{\vt arc} & \rb{Create arcs}\\ \hline \fbox{\epsfbox{images/sides.eps}} & \rb{\vt sides} & \rb{Set rounded granularity}\\ \hline \fbox{\epsfbox{images/xor.eps}} & \rb{\vt xor} & \rb{Exclusive-OR objects}\\ \hline \fbox{\epsfbox{images/break.eps}} & \rb{\vt break} & \rb{Cut objects}\\ \hline \fbox{\epsfbox{images/erase.eps}} & \rb{\vt erase} & \rb{Erase geometry}\\ \hline \fbox{\epsfbox{images/put.eps}} & \rb{\vt put} & \rb{Paste from yank buffer}\\ \hline \fbox{\epsfbox{images/spin.eps}} & \rb{\vt spin} & \rb{Rotate objects}\\ \hline \end{tabular} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{\kb Electrical Side Menu} \\[0.5ex] \hline \et Icon & \et Name & \et Function\\ \hline \fbox{\epsfbox{images/xform.eps}} & \rb{\vt xform} & \rb{Set current transform}\\ \hline \fbox{\epsfbox{images/place.eps}} & \rb{\vt place} & \rb{Place subcells}\\ \hline \fbox{\epsfbox{images/devs.eps}} & \rb{\vt devs} & \rb{Show device menu}\\ \hline \fbox{\epsfbox{images/shapes.eps}} & \rb{\parbox{1cm}{\vt shapes\linebreak{\rm menu}}} & \rb{Create outline object}\\ \hline \fbox{\epsfbox{images/wire.eps}} & \rb{\vt wire} & \rb{Create wires}\\ \hline \fbox{\epsfbox{images/label.eps}} & \rb{\vt label} & \rb{Create/edit labels}\\ \hline \fbox{\epsfbox{images/erase.eps}} & \rb{\vt erase} & \rb{Erase geometry}\\ \hline \fbox{\epsfbox{images/break.eps}} & \rb{\vt break} & \rb{Cut objects}\\ \hline \fbox{\epsfbox{images/symbl.eps}} & \rb{\vt symbl} & \rb{Set symbolic mode}\\ \hline \fbox{\epsfbox{images/nodmp.eps}} & \rb{\vt nodmp} & \rb{Name wire nets}\\ \hline \fbox{\epsfbox{images/subct.eps}} & \rb{\vt subct} & \rb{Set subcircuit contacts}\\ \hline \fbox{\epsfbox{images/terms.eps}} & \rb{\vt terms} & \rb{Show terminals}\\ \hline \fbox{\epsfbox{images/spcmd.eps}} & \rb{\vt spcmd} & \rb{Execute {\WRspice} command}\\ \hline \fbox{\epsfbox{images/run.eps}} & \rb{\vt run} & \rb{Run {\WRspice}}\\ \hline \fbox{\epsfbox{images/deck.eps}} & \rb{\vt deck} & \rb{Save 
SPICE file}\\ \hline \fbox{\epsfbox{images/plot.eps}} & \rb{\vt plot} & \rb{Plot SPICE results}\\ \hline \fbox{\epsfbox{images/iplot.eps}} & \rb{\vt iplot} & \rb{Set dynamic plotting}\\ \hline \end{tabular} \end{center} \caption{\label{sidetab}Commands found in the side menu in physical and electrical modes.} \end{table} \newpage % ----------------------------------------------------------------------------- % xic:arc 120615 \section{The {\cb arc} Button: Create Arcs} \index{arc button} \index{object creation!arcs} \epsfbox{images/arc.eps} The {\cb arc} command button allows the user to create arcs on the current layer. The {\cb sides} button, or the {\cb Sides} entry in the {\cb shapes} menu in electrical mode, can be used to reset the number of segments used to represent the circle containing the arc. Press button 1 first to define the center. Subsequent presses, (or drag releases) define the inner and outer radii, the arc start angle, and the arc terminal angle. In physical mode, if the arc path width is set to zero, a round disk is created, as with the {\cb round} button. If the angle given is 360 degrees, then the created figure is identical to that produced by the {\cb donut} button. In electrical mode, the arc function is entered through the {\cb arc} entry in the menu brought up with the {\cb shapes} button. In this case, the arc path has no width, so that the inner and outer radii are equal and not separately definable. Arcs have no electrical significance, but can be used for illustrative purposes. While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within a small distance. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel. In electrical mode, an arc is actually a wire, and as such should not be used on the SCED layer. If the current layer is the SCED layer, the arc will be created using the ETC2 layer, otherwise the arc will be created on the current layer. Although there is no error, arc vertices on the SCED layer are considered in the connectivity establishment, leading to inefficiency. If the user insists on the arc being on the SCED layer, the {\cb Change Layer} command in the {\cb Modify Menu} can be used to move it to that layer. If the user presses and holds the {\kb Shift} key after the center location is defined, and before the perimeter is defined by either lifting button 1 or pressing a second time, the current radius is held for x or y. The pointer location of the {\kb Shift} press defines whether x is held (pointer closer to the center y) or y is held (pointer closer to the center x). This allows elliptical arcs to be generated. This similarly applies when defining the outer radii, so that the inner and outer surfaces can have different elliptical aspect ratios, though the outer radius must be larger than the inner radius at all angles. The {\kb Ctrl} key also provides useful constraints. Pressing and holding the {\kb Ctrl} key when defining the radii produces a radius defined by the pointer position projected on to the x or y axis (whichever is closer) defined from the center. Otherwise, off-axis snap points are allowed, which may lead to an unexpected radius on a fine grid. 
When defining the angles of arcs with the {\kb Ctrl} key pressed, the angle is constrained to multiples of 45 degrees. Ordinarily, the arc angle snaps to the nearest snap point. When the command is expecting a mouse button press to define a radius, the value as defined by the mouse pointer (in microns) is printed in the lower left corner of the drawing window, or the X and Y values are printed if different. Pressing {\kb Enter} will cause prompting for the value(s), in microns. If one number is given, a circular radius is accepted, however one can enter two numbers separated by space to set the X and Y radii separately. Similarly, the angles are displayed, and can be entered in this manner. Prompts can be obtained for the start and end angles separately. The angle should be entered in degrees. Zero degress points along the X axis, and positive angles advance clockwise. % ----------------------------------------------------------------------------- % xic:box 012715 \section{The {\cb box} Button: Create Rectangles} \index{box button} \index{object creation!boxes} \epsfbox{images/box.eps} The {\cb box} command button allows creation of boxes (rectangles) on the currently selected layer. The box can be defined by either clicking button 1 on two diagonal corners, or by pressing button 1 to define the first corner, dragging, then releasing button 1 to define the second corner. The outline of the box is ghost-drawn during creation. The new box will be merged with or clipped to existing boxes on the same layer, unless this feature has been suppressed. While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel. In physical mode, boxes can also be created from the {\cb Show/Select Devices} panel from the {\cb Device Selections} button in the {\cb Extract Menu}. The {\cb Enable Measure Box} button provides a means of creating boxes of a specific size to match electrical requirements, for example to create rectangular resistor bodies for a given resistance. Boxes can be created whether or not the electrical layer parameters are used or present. In physical mode while the {\cb box} command is active, holding down the {\kb Ctrl} key while clicking on a subcell will paint the area of the subcell with the current layer. In electrical mode, the box command is available by selecting the {\cb box} function in the {\cb shapes} menu. If the current layer is the SCED layer, the box will be created using the ETC2 layer, otherwise the box will be created on the current layer. It is best to avoid use of the SCED layer for other than active wires, for efficiency reasons, though it is not an error. The {\cb Change Layer} command in the {\cb Modify Menu} can be used to change the layer of existing objects to the SCED layer, if necessary. The outline style and fill will be those of the rendering layer. Boxes have no electrical significance, but can be used for illustrative purposes. The {\cb box}, {\cb erase}, and {\cb xor} commands participate in a protocol that is handy on occasion. 
Suppose that you want to erase an area, and you have zoomed in and clicked to define the anchor, then zoomed out or panned and clicked to finish the operation. Oops, the {\cb box} command was active, not {\cb erase}. One can press {\kb Tab} to undo the unwanted new box, then press the {\cb erase} button, and the {\cb erase} command will have the same anchor point and will be showing the ghost box, so clicking once will finish the erase operation. The anchor point is remembered, when switching directly between these three commands, and the command being exited is in the state where the anchor point is defined, and the ghost box is being displayed. One needs to press the command button in the side menu to switch commands. If {\kb Esc} is pressed, or a non-participating command is entered, the anchor point will be lost. % ----------------------------------------------------------------------------- % xic:break 012815 \section{The {\cb break} Button: Cut Objects} \index{break button} \index{object breaking} \epsfbox{images/break.eps} The {\cb break} button is used to divide objects along a horizontal or vertical line. The command operates on boxes, polygons, and wires. If one or more of those objects was previously selected, the break command will operate on those selections. Otherwise, the user is asked to select objects to break. The user is then asked to click to divide the selected objects along the break line, which is attached to the pointer and ghost-drawn. The orientation of the break line is either horizontal or vertical, which can be toggled by pressing either the {\cb /} (forward slash) or {\kb $\backslash$} (backslash) keys when the break line is visible. The {\cb break} command is useful when one wants to relocate or create a subcell from pieces of an existing design. While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel. When the {\cb break} command is at the state where objects are selected, and the next button press would initiate the break operation, if either of the {\kb Backspace} or {\kb Delete} keys is pressed, the command will revert the state back to selecting objects. Then, other objects can be selected or selected objects deselected, and the command is ready to go again. This can be repeated, to build up the set of selections needed. At any time, pressing the {\cb Deselect} button to the left of the coordinate readout will revert the command state to the level where objects may be selected to break. The undo and redo operations (the {\kb Tab} and {\kb Shift-Tab} keypreses and {\cb Undo}/{\cb Redo} in the {\cb Modify Menu}) will cycle the command state forward and backward when the command is active. Thus, the last command operation, such as initiating the break by clicking, can be undone and restarted, or redone if necessary. If all command operations are undone, additional undo operations will undo previous commands, as when the undo operation is performed outside of a command. The redo operation will reverse the effect, however when any new modifying operation is started, the redo list is cleared. 
Thus, for example, if one undoes a box creation, then starts a break operation, the ``redo'' capability of the box creation will be lost. % ----------------------------------------------------------------------------- % xic:deck 062313 \section{The {\cb deck} Button: Save SPICE File} \index{deck button} \index{SPICE deck creation} \epsfbox{images/deck.eps} The {\cb deck} command, available only in electrical mode, creates a SPICE file of the current circuit hierarchy. The file name is prompted for, as is an analysis string. If an analysis string is given, it will be included in the SPICE file after prepending a `.', unless it happens to start with ``run'', in which case it is ignored. If a plot string has been created with the {\cb plot} command, it will also be included as a {\vt .plot} line. Unless the variable {\et SpiceListAll} is set (with the {\cb !set} command), only devices and subcircuits that are ``connected'' will be included in the SPICE file. A device or subcircuit is connected if any of the following is true: \begin{itemize} \item{The subcircuit has a global node.} \item{The device or subcircuit has two or more non-ground connections.} \item{The device or subcircuit has one non-ground connection and one or more grounds.} \item{The device or subcircuit has one non-ground connection and no opens.} \item{The subcircuit has a non-ground connection.} \end{itemize} Note that it is possible for a subcircuit to have no connections on the {\vt .subckt} line, if it contains a global node. For example, the subcircuit might consist of a decoupling capacitor to ground, from a global power supply node (e.g., ``{\vt vdd!}''). Node names will be assigned according to the node name mapping (see \ref{nodmp} currently in force. After the new file is created, the user is given the option of viewing it in a {\cb File Browser} window. \index{CheckSolitary variable} If the variable {\et CheckSolitary} is set (with the {\cb !set} command) then a warning will be issued if nodes are found with only one connection. % ----------------------------------------------------------------------------- % xic:devs 020615 \section{The {\cb devs} Button: Device Menu} \label{devmenu} \index{devs button} \index{device menu} \index{PictorialDevs variable} \epsfbox{images/devs.eps} The {\cb devs} button appears only in electrical mode, and pressing this button will toggle the display of the device selection menu. There are three styles of the device menu. The default style contains a menu bar with four entries: {\cb Devices}, {\cb Sources}, {\cb Macros}, and {\cb Terminals}. Each brings up a sub-menu containing names of library ``devices'', that fall into each category. The second menu style is similar, but the menu bar contains the first letter of the device name (not the SPICE key). In either of these styles, pressing and holding button 1 while the pointer is over one of the menu bar buttons will pop up a menu of device names. Moving the pointer down the menu will highlight the entry under the pointer. A selection can be made by releasing the button. The third style is the pictorial menu, which displays the schematic symbol of each available device, in alphabetical order. Clicking on one of the device images will establish the selection. Each menu style contains a button from which the style can be cycled. After a selection is made, the device symbol will be ghost-drawn and attached to the pointer, and the device will be placed at positions where the user clicks in the drawing windows. 
The device is positioned such that the reference terminal is located at the point where the user clicked. Devices are placed according to the current transform, which is defined from the pop-up produced by the {\cb xform} button in the side menu. The devices available and other details depend upon the definitions in the device library file. By default, this file is named ``{\vt device.lib}'', and is located in the installation startup directory, but this can be superseded by a custom file of the same name which is found in the library search path ahead of the default file. The present device menu style tracks, and is tracked by, the {\et DevMenuStyle} variable. This variable can be set (with the {\cb !set} command) to an integer 0--2. If 0 or unset, the categorized layout is used. If 1, the alphabetized variation is used, and 2 specifies the pictorial menu. This variable tracks the style of the menu, and resets the style when set. The following table lists the devices found in the device library file supplied with {\Xic}. \begin{tabular}{|l|l|}\hline \bf Name & \bf Description\\ \hline \multicolumn{2}{|c|}{Contact Devices}\\ \hline \et gnd & Ground Contact\\ \hline \et gnde & Alternative Ground Contact\\ \hline \et tbar & Contact Terminal\\ \hline\hline \et tblk & Alternative Contact Terminal\\ \hline\hline \et tbus & Bus Contact Terminal\\ \hline\hline \multicolumn{2}{|c|}{SPICE Devices}\\ \hline \et res & Resistor\\ \hline \et cap & Capacitor\\ \hline \et ind & Inductor\\ \hline \et mut & Mutual Inductor\\ \hline \et isrc & Current Source\\ \hline \et vsrc & Voltage Source\\ \hline \et dio & Junction Diode\\ \hline \et jj & Josephson Junction\\ \hline \et npn & NPN Bipolar Transistor\\ \hline \et pnp & PNP Bipolar Transistor\\ \hline \et njf & N-Channel Junction FET\\ \hline \et pjf & P-Channel Junction FET\\ \hline \et nmos1 & N-Channel MOSFET, 4 Nodes\\ \hline \et pmos1 & P-Channel MOSFET, 4 Nodes\\ \hline \et nmos & N-Channel MOSFET, 3 Nodes\\ \hline \et pmos & P-Channel MOSFET, 3 Nodes\\ \hline \et nmes & N-Channel MESFET\\ \hline \et pmes & P-Channel MESFET\\ \hline \et tra & Transmission Line\\ \hline \et ltra & Transmission Line (LTRA Compatible)\\ \hline \et urc & Uniform RC Line\\ \hline \et vccs & Voltage-Controlled Current Source\\ \hline \et vcvs & Voltage-Controlled Voltage Source\\ \hline \et cccs & Current-Controlled Current Source\\ \hline \et ccvs & Current-Controlled Voltage Source\\ \hline \et sw & Voltage-Controlled Switch\\ \hline \et csw & Current-Controlled Switch\\ \hline \multicolumn{2}{|c|}{Misc.}\\ \hline \et opamp & Example Macro\\ \hline \et vp & Current Meter\\ \hline \end{tabular} The colors used in the pictorial device menu can be changed by setting the Special GUI Colors (see \ref{attrcolor}) listed below. This can be done in the technology file, or with the {\cb !setcolor} command. \begin{tabular}{|l|l|l|} \hline \bf variable & \bf purpose & \bf default\\ \hline \vt GUIcolorDvBg & background & \vt gray90\\ \hline \vt GUIcolorDvFg & foreground & \vt black\\ \hline \vt GUIcolorDvHl & highlight & \vt blue\\ \hline \vt GUIcolorDvSl & selection & \vt gray80\\ \hline \end{tabular} % ----------------------------------------------------------------------------- % not in help \subsection{Terminal Devices} The following are not ``real'' devices, though they appear in the device menu and can be placed in a drawing. Their purpose is to establish connectivity. 
% ----------------------------------------------------------------------------- % dev:gnd 042611 \subsubsection{Ground Device} \index{gnd device} The {\et gnd} device is used to connect to node 0, which is always taken as the reference (ground) node in SPICE. This can be placed in the main circuit and subcircuits. The device library may contain multiple, functionally identical ``ground'' devices, that differ only visually. In the library, any device that has no {\et name} property and exactly one {\et node} property is taken as a ground device. % ----------------------------------------------------------------------------- % dev:gnde 042611 \subsubsection{Alternative Ground Device} \index{gnde device} The {\et gnde} device is used to connect to node 0, which is always taken as the reference (ground) node in SPICE. This can be placed in the main circuit and subcircuits. This is functionally identical to the {\et gnd} device, but differs visually. % ----------------------------------------------------------------------------- % dev:tbar 062013 \subsubsection{Terminal Device} \label{devtbar} \index{tbar device} The {\et tbar}, {\et tblk}, {\et ttri}, and {\et txbox} are ``terminal devices'' from the default device library. These devices behave identically, and differ only in appearance. Each device has an associated label (with text defaulting to the device name) which can be changed by the user by selecting the label and pressing the {\cb label} button in the side menu. The label will supply a name, which will be applied to a connected net. All nets connected to a terminal device with the same name are taken as being connected together. This will not tie nets between the main circuit and subcircuits, or between subcircuits, unless the terminal name is also a global net name. If not global, the scope is within the cell only. See \ref{nodmp} for more information about net name assignments. Internally, the device will reconfigure itself as a scalar or multi-contact device according to the label. Older {\Xic} releases provided a {\et tbus} terminal, which is no longer compatible. The name applied to a net via a terminal device is handled identically to a name obtained from a wire label. % ----------------------------------------------------------------------------- % dev:tbus 062013 \subsubsection{Bus Terminal Device} \index{tbus device} The {\et tbus} terminal device was provided as a bus terminal in older {\Xic} releases. It is no longer compatible or supported, and must be replaced by a current terminal device in legacy schematics. % ----------------------------------------------------------------------------- % not in help \subsection{SPICE Devices} These devices correspond to element lines in SPICE output. In general, they reflect the generic SPICE syntax. % ----------------------------------------------------------------------------- % dev:res 062908 \subsubsection{Resistor Device} \index{res device} The {\et res} device is a two-terminal resistor. Typically, a {\et value} property is added to specify resistance. Alternatively, a {\et model} property can be added to specify a resistor model. If a {\et model} property is assigned, then a {\et param} property can be used to supply the geometric or other parameters. The `$+$' symbol in the representation accesses a {\et branch} property that returns a hypertext expression consisting of the voltage across the resistor divided by the resistance in ohms, yielding the current through the resistor. 
The `O' that follows the resistance is the `ohms' unit specifier, and {\it not} an extra zero. % ----------------------------------------------------------------------------- % dev:cap 062908 \subsubsection{Capacitor Device} \index{cap device} The {\et cap} device is a two-terminal capacitor. Typically, a {\et value} property is added to specify capacitance. Alternatively, a {\et model} property can be added to specify a capacitor model. If a {\et model} property is assigned, then a {\et param} property can be used to supply the geometric parameters. In either case, the {\et param} property can be used to provide initial conditions. The `$+$' symbol in the representation accesses a {\et branch} property that returns a hypertext expression consisting of the capacitance value times the time-derivative of the voltage across the capacitor, yielding the capacitor current. % ----------------------------------------------------------------------------- % dev:ind 062908 \subsubsection{Inductor Device} \index{ind device} The {\et ind} device is a two-terminal inductor. A {\et value} property should be added to specify inductance. A {\et param} property can be used to provide initial conditions. The `$+$' symbol in the representation accesses a {\et branch} property that returns a hypertext link to the inductor current vector. % ----------------------------------------------------------------------------- % dev:mut 062908 \subsubsection{Mutual Inductor} \index{mut device} \index{mutual inductors} The {\et mut} device provides support for mutual inductors. The {\et mut} device is never placed. When the {\et mut} device is selected in the device menu, rather than selecting a device for placement as do the other selections, a command mode is entered which allows existing inductors to be selected into mutual inductor pairs. When the {\et mut} device is selected, an existing pair of coupled inductors (if any have been defined) is shown highlighted, and the SPICE coupling factor printed. The arrow keys cycle through the internal list of coupled inductor pairs, or a pair may be selected by clicking on one of the inductors or the coefficient label with button 1. At any time, pressing the `{\kb a}' key will allow addition of a mutual inductor pair. The same effect is obtained by clicking on a non-mutual inductor with button 1. The user is asked to click on the two coupled inductors (if `{\kb a}' entered or there are no existing mutual inductors), or the second inductor (if the user clicked on an inductor), and then to enter the coupling factor. The coupling factor can be any string, so as to allow shell variable expansion in {\WRspice}, but if it parses as a number it must be in the range between -1 and 1. Pressing the `{\kb d}' key will delete the mutual inductance specification for the two inductors currently shown. Pressing the `{\kb k}' key will prompt for a new value of the coupling factor for the mutual inductors shown, as will clicking on the coefficient label in a drawing window. When entering the coefficient string, one can enter either the form {\it name\/}={\it coef\_string\/}, or simply the coefficient string. In the first case, the {\it name} will provide an alternate fixed name for the mutual inductor in SPICE output. This can be any alphanumeric name, but should start with `k' or `K' for SPICE. If no name is given, {\Xic} will assign a name consisting of {\vt K} followed by a unique index integer. 
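For illustration (the name and value here are arbitrary, not defaults), entering
\begin{quote}\vt Kxfmr=0.95
\end{quote}
sets the coupling factor to 0.95 and forces the mutual inductor to be named ``{\vt Kxfmr}'' in SPICE output, while entering just ``{\vt 0.95}'' sets the same coupling factor and leaves the name assignment to {\Xic}.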
One can also change the coefficient string and/or name with the {\cb label} button in the side menu. Again, the label text can have either of the forms described above. Pressing the {\kb Esc} key terminates this (and every) command. One can back out of the operation if necessary with {\kb Tab} (undo), as usual. % ----------------------------------------------------------------------------- % dev:isrc 062908 \subsubsection{Current Source} \index{isrc device} The {\et isrc} device is a general current source. A {\et value} and/or {\et param} property can be added to specify the value, function, or other parameters required by the source. The arrow head in the representation accesses a {\et branch} property that returns a hypertext link to the current in the form ``{\vt @}{\it name}{\vt [c]}''. A {\vt .save} line for this vector is automatically added to the SPICE output. % ----------------------------------------------------------------------------- % dev:vsrc 042611 \subsubsection{Voltage Source} \index{vsrc device} The {\et vsrc} device is a general voltage source. A {\et value} and/or {\et param} property can be added to specify the value, function, or other parameters required by the source. The `$+$' symbol in the representation accesses a {\et branch} property that returns a hypertext link to the current vector when clicked on. % ----------------------------------------------------------------------------- % dev:vp 042611 \subsubsection{Current Meter} \index{vp device} In SPICE, voltage sources are often used as ``current meters'', as the current through a voltage source is saved with the simulation result vectors, and can be plotted or printed. The {\et vp} device is actually a voltage source (identical to a {\vt vsrc} device) however the symbol size is tiny, so that it can be more easily added to an existing schematic for use as a current meter. The symbol contains a hot spot in the representation that accesses a {\et branch} property that returns a hypertext link to the current vector when clicked on. % ----------------------------------------------------------------------------- % dev:dio 062908 \subsubsection{Junction Diode} \index{dio device} The {\et dio} device is a junction diode. A {\et model} property should be added to specify a diode model. A {\et param} property can be added to specify additional parameters. The diode contains no hidden targets. % ----------------------------------------------------------------------------- % dev:jj 062908 \subsubsection{Josephson Junction} \index{jj device} The {\et jj} device is a Josephson junction. A {\et model} property should be added to specify a Josephson junction model. A {\et param} property can be added to specify additional parameters. The `$+$' symbol in the representation accesses the phase node of the Josephson junction. The ``voltage'' on this node is equal to the junction phase, in radians. % ----------------------------------------------------------------------------- % dev:npn 062908 \subsubsection{NPN Bipolar Transistor} \index{npn device} The {\et npn} device is an npn bipolar transistor. A {\et model} property should be added to specify a bipolar transistor model. A {\et param} property can be added to specify additional parameters. The bipolar transistor contains no hidden targets. % ----------------------------------------------------------------------------- % dev:pnp 062908 \subsubsection{PNP Bipolar Transistor} \index{pnp device} The {\et pnp} device is a pnp bipolar transistor. 
A {\et model} property should be added to specify a bipolar transistor model. A {\et param} property can be added to specify additional parameters. The bipolar transistor contains no hidden targets. % ----------------------------------------------------------------------------- % dev:njf 062908 \subsubsection{N-Channel Junction FET} \index{njf device} The {\et njf} device is an n-channel junction field-effect transistor. A {\et model} property should be added to specify a JFET model. A {\et param} property can be added to specify additional parameters. The JFET contains no hidden targets. % ----------------------------------------------------------------------------- % dev:pjf 062908 \subsubsection{P-Channel Junction FET} \index{pjf device} The {\et pjf} device is a p-channel junction field-effect transistor. A {\et model} property should be added to specify a JFET model. A {\et param} property can be added to specify additional parameters. The JFET contains no hidden targets. % ----------------------------------------------------------------------------- % dev:nmos1 062908 \subsubsection{N-Channel MOSFET, 4 Nodes} \index{nmos1 device} The {\et nmos1} device is a 4-terminal n-channel MOSFET (drain, gate, source, bulk). A {\et model} property should be added to specify a MOS model, suitable for 4-terminal devices. Some of the MOS models provided in {\WRspice}, for SOI devices, use more than four terminals and will not work with this device. It is left as an exercise for the user to create a modified device suitable for use with these models. A {\et param} property can be added to specify additional parameters. This device contains no hidden targets. % ----------------------------------------------------------------------------- % dev:pmos1 062908 \subsubsection{P-Channel MOSFET, 4 Nodes} \index{pmos1 device} The {\et pmos1} device is a 4-terminal p-channel MOSFET (drain, gate, source, bulk). A {\et model} property should be added to specify a MOS model, suitable for 4-terminal devices. Some of the MOS models provided in {\WRspice}, for SOI devices, use more than four terminals and will not work with this device. It is left as an exercise for the user to create a modified device suitable for use with these models. A {\et param} property can be added to specify additional parameters. This device contains no hidden targets. % ----------------------------------------------------------------------------- % dev:nmos 042611 \subsubsection{N-Channel MOSFET, 3 Nodes} \index{nmos device} The {\et nmos} device is an n-channel MOSFET variation that contains three visible nodes (drain, gate, source). The bulk node is connected to an internal global node named ``NSUB''. To use this device, the circuit should contain a voltage source tied to a terminal device with label ``NSUB'' to provide substrate bias to all devices of this type. This simplifies the schematic by hiding the substrate connection to each transistor. A {\et model} property should be added to specify a MOS model, suitable for 4-terminal devices. Some of the MOS models provided in {\WRspice}, for SOI devices, use more than four terminals and will not work with this device. It is left as an exercise for the user to create a modified device suitable for use with these models. A {\et param} property can be added to specify additional parameters. This device contains no hidden targets. 
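As a rough sketch of the substrate bias arrangement described above for the 3-node {\et nmos} device (the instance names, node names, bias value, and model name below are placeholders, not output generated by {\Xic}), the resulting SPICE connectivity is of the form
\begin{quote}\vt
Vsub NSUB 0 DC -2.0\\
M1 nd ng ns NSUB nmod
\end{quote}
where the first line comes from the voltage source tied to the terminal device labeled ``{\vt NSUB}'', and every 3-node {\et nmos} instance picks up {\vt NSUB} as its bulk connection.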
% ----------------------------------------------------------------------------- % dev:pmos 042611 \subsubsection{P-Channel MOSFET, 3 Nodes} \index{pmos device} The {\et pmos} device is a p-channel MOSFET variation that contains three visible nodes (drain, gate, source). The bulk node is connected to an internal global node named ``PSUB''. To use this device, the circuit should contain a voltage source tied to a terminal device with label ``PSUB'' to provide substrate bias to all devices of this type. This simplifies the schematic by hiding the substrate connection to each transistor. A {\et model} property should be added to specify a MOS model, suitable for 4-terminal devices. Some of the MOS models provided in {\WRspice}, for SOI devices, use more than four terminals and will not work with this device. It is left as an exercise for the user to create a modified device suitable for use with these models. A {\et param} property can be added to specify additional parameters. This device contains no hidden targets. % ----------------------------------------------------------------------------- % dev:nmes 062908 \subsubsection{N-Channel MESFET} \index{nmes device} The {\et nmes} device is an n-channel MESFET. A {\et model} property should be added to specify a MESFET model. A {\et param} property can be added to specify additional parameters. The MESFET contains no hidden targets. % ----------------------------------------------------------------------------- % dev:pmes 062908 \subsubsection{P-Channel MESFET} \index{pmes device} The {\et pmes} device is a p-channel MESFET. A {\et model} property should be added to specify a MESFET model. A {\et param} property can be added to specify additional parameters. The MESFET contains no hidden targets. % ----------------------------------------------------------------------------- % dev:tra 062908 \subsubsection{Transmission Line} \index{tra device} The {\et tra} device is a general transmission line. In {\WRspice}, this can be lossy or lossless, and may access a model. In other versions of SPICE, this is a lossless line with no model. A {\et model} property can be added to specify a transmission line model. A {\et param} property can be added to specify additional parameters. The transmission line contains no hidden targets. % ----------------------------------------------------------------------------- % dev:ltra 062908 \subsubsection{Transmission Line (LTRA compatibility)} \index{ltra device} The {\et ltra} device is a general transmission line. In {\WRspice}, this can be lossy or lossless, and is basically the same as the {\et tra} device, but defaults to a convolution approach if lossy. In other versions of SPICE, this is a lossy line that requires a model. A {\et model} property can be added to specify a transmission line model. A {\et param} property can be added to specify additional parameters. The transmission line contains no hidden targets. % ----------------------------------------------------------------------------- % dev:urc 062908 \subsubsection{Uniform RC Line} \index{urc device} The {\et urc} device is a lumped-approximation RC line. A {\et model} property should be added to specify a urc model. A {\et param} property can be added to specify additional parameters. The {\et urc} line contains no hidden targets. 
% ----------------------------------------------------------------------------- % dev:vccs 062908 \subsubsection{Voltage-Controlled Current Source} \index{vccs device} The {\et vccs} device is a voltage-controlled dependent current source. A {\et value} and/or {\et param} property can be added to specify the gain, or other parameters required by the dependent source. Since all four nodes are specified, the two-node variants supported by {\WRspice} are not supported by this device. The VCCS contains no hidden targets. % ----------------------------------------------------------------------------- % dev:vcvs 062908 \subsubsection{Voltage-Controlled Voltage Source} \index{vcvs device} The {\et vcvs} device is a voltage-controlled dependent voltage source. A {\et value} and/or {\et param} property can be added to specify the gain, or other parameters required by the dependent source. Since all four nodes are specified, the two-node variants supported by {\WRspice} are not supported by this device. The VCVS contains no hidden targets. % ----------------------------------------------------------------------------- % dev:cccs 030415 \subsubsection{Current-Controlled Current Source} \index{cccs device} The {\et cccs} device is a current-controlled dependent current source. A {\et devref} property can be used to specify the name of the controlling voltage source or inductor in the common case. A {\et value} and/or {\et param} property should be added to specify gain, or other parameters required by the dependent source. This device supports all of the variants supported in {\WRspice}. The CCCS contains no hidden targets. % ----------------------------------------------------------------------------- % dev:ccvs 030415 \subsubsection{Current-Controlled Voltage Source} \index{ccvs device} The {\et ccvs} is a current-controlled dependent voltage source. A {\et devref} property can be used to specify the name of the controlling voltage source or inductor in the common case. A {\et value} and/or {\et param} property should be added to specify the gain, or other parameters required by the dependent source. This device supports all of the variants supported in {\WRspice}. The CCVS contains no hidden targets. % ----------------------------------------------------------------------------- % dev:sw 062908 \subsubsection{Voltage-Controlled Switch} \index{sw device} The {\et sw} device is a voltage-controlled switch. A {\et model} property should be added to specify a switch model. A {\et param} property can be added to specify additional parameters. This device contains no hidden targets. % ----------------------------------------------------------------------------- % dev:csw 030415 \subsubsection{Current-Controlled Switch} \index{csw device} The {\et csw} device is a current-controlled switch. A {\et devref} property must be used to specify the name of the controlling voltage source or inductor. A {\et model} property should be added to specify the switch model. A {\et param} property can be added to specify additional parameters. This device contains no hidden targets. % ----------------------------------------------------------------------------- % dev:opamp 062908 \subsubsection{Example Opamp Macro} \index{opamp device} The {\et opamp} device is an example ``black box'' device that expands into a subcircuit. It has a predefined {\et model} parameter which gives the subcircuit name (which is resolved in the model library). No properties are required. This device contains no hidden targets. 
% -----------------------------------------------------------------------------
% xic:donut 100916
\section{The {\cb donut} Button: Create Donut Object}
\index{donut button}
\index{object creation!donut}

\epsfbox{images/donut.eps}

The {\cb donut} button appears only in physical mode.  It is used to create a ring-like polygon.  The number of segments used to approximate a circle can be altered with the {\cb sides} command.

If the user presses and holds the {\kb Shift} key after the center location is defined, and before the perimeter is defined by either lifting button 1 or pressing a second time, the current radius is held for x or y.  The location of the {\kb Shift} press defines whether x is held (pointer closer to the center y) or y is held (pointer closer to the center x).  This allows elliptical donuts to be generated.  This similarly applies when defining the outer radii, so that the inner and outer surfaces can have different elliptical aspect ratios, though the outer radius must be larger than the inner radius at all angles.

The {\kb Ctrl} key also provides useful constraints.  Pressing and holding the {\kb Ctrl} key when defining the radii produces a radius defined by the pointer position projected onto the x or y axis (whichever is closer) defined from the center.  Otherwise, off-axis snap points are allowed, which may lead to an unexpected radius on a fine grid.

When the command is expecting a mouse button press to define a radius, the value as defined by the mouse pointer (in microns) is printed in the lower left corner of the drawing window, or the X and Y values are printed if different.  Pressing {\kb Enter} will cause prompting for the value(s), in microns.  If one number is given, a circular radius is accepted; however, one can enter two numbers separated by a space to set the X and Y radii separately.

While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels.  When snapped, a small dotted highlight box is displayed.  This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling.  This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel.

\index{SpotSize variable}
If the {\et SpotSize} variable is set to a positive value, or the {\vt MfgGrid} has been given a positive value in the technology file, tiny round and donut figures are constructed somewhat differently.  Objects created with the {\cb round} and {\cb donut} buttons will be constructed so that all vertices are placed at the center of a spot, and a minimum number of vertices will be used.  The {\cb sides} number is ignored.  This applies only to figures with minimum radius 50 spots or smaller; the regular algorithm is used otherwise.  An object with this preconditioning applied should translate exactly to the e-beam grid.  See \ref{spotsize} for more information.

% -----------------------------------------------------------------------------
% xic:erase 012715
\section{The {\cb erase} Button: Erase or Yank Geometry}
\index{erase button}
\index{object erasing}

\epsfbox{images/erase.eps}

Rectangular regions of polygons, boxes, and wires can be erased or ``yanked'' with the {\cb erase} button.  The user clicks twice or presses and drags to define the diagonal of the region to be erased.  Selected objects are not erased.
Wires maintain a constant width, and are cut at the points where the midpoint crosses the boundary of the erased area.

In physical mode, if the {\kb Shift} key is held during the operation termination (click or button release), there is no erasure; however, the pieces that would have been erased are ``yanked'', i.e., added to the yank buffer.  The pieces are also added to the yank buffer when actually erased.  The yank buffer chain has a depth of five, meaning that the contents of the last five yanks/erasures are available for placement with the {\cb put} command.

Geometry in ``foreign'' windows can be yanked.  These are physical-mode sub-windows showing a different cell than the current cell being edited (as shown in the main window).  The foreign window is never erased (i.e., holding {\kb Shift} is not necessary), but the structure that would be erased is added to the yank buffer.  Thus, one can quickly copy a rectangular area of geometry from another cell into the current cell, by yanking with {\cb erase} and placing with the {\cb put} command (below {\cb erase} in the side menu).

The {\cb SpaceBar} toggles ``clip mode''.  When clip mode is active, for objects that overlap the rectangle defined with the mouse, instead of erasing the interior of the rectangle as in the normal case, the material outside of the rectangle will be erased instead.  The overlapping objects will be clipped to the rectangle.  This applies whether erasing or yanking; again, the yank buffer will acquire the pieces that would (or actually do) disappear in an erase operation.

When the {\kb Ctrl} key is held before the box is defined, clicking on a subcell will cause the subcell's bounding box to be used as the rectangle.  Thus, objects can be easily clipped to or around the subcell boundary.  This applies when yanking as well.

The standard erase is the inverse of the subcell paint operation in the {\cb box} command.

While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels.  When snapped, a small dotted highlight box is displayed.  This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling.  This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel.

The {\cb box}, {\cb erase}, and {\cb xor} commands participate in a protocol that is handy on occasion.  Suppose that you want to erase an area, and you have zoomed in and clicked to define the anchor, then zoomed out or panned and clicked to finish the operation.  Oops, the {\cb box} command was active, not {\cb erase}.  One can press {\kb Tab} to undo the unwanted new box, then press the {\cb erase} button, and the {\cb erase} command will have the same anchor point and will be showing the ghost box, so clicking once will finish the erase operation.  The anchor point is remembered when switching directly between these three commands, provided the command being exited is in the state where the anchor point is defined and the ghost box is being displayed.  One needs to press the command button in the side menu to switch commands.  If {\kb Esc} is pressed, or a non-participating command is entered, the anchor point will be lost.
% -----------------------------------------------------------------------------
% xic:iplot 101810
\section{The {\cb iplot} Button: Interactive Analysis Plotting}
\index{iplot button}
\index{interactive plotting}

\epsfbox{images/iplot.eps}

The {\cb iplot} command, available in electrical mode, is useful only if the {\WRspice} program is available.  Operation is similar to the {\cb plot} button, whereby a command string is generated through selection of nodes and branches with the pointer.  The command line can be edited in the usual way to generate, for example, functions of the plot variables.  Pressing the {\kb Enter} key saves the command.  When the {\cb iplot} button is active and a command has been saved, the plot is generated dynamically while a simulation, initiated with the {\cb run} command, is in progress.

The {\cb S} and {\cb R} buttons, to the left of the prompt area, can be used to save and restore prompt line text in a set of internal registers.

Pressing the {\cb iplot} button a second time will turn off the interactive plotting.  Pressing {\cb iplot} and then {\kb Enter} will turn the interactive plotting back on.  Of course, the trace points and plotting command can be modified before pressing {\kb Enter}.  In particular, if all prompt line text is deleted, pressing {\kb Enter} will delete the internally saved command string, and turn interactive plotting off.  Pressing the {\cb iplot} button again will take as default text the string from the {\cb plot} command, if any.

The command text and mark locations are saved with the cell data when written to disk; thus, the {\cb iplot} command is persistent.

% -----------------------------------------------------------------------------
% xic:label 022619
\section{The {\cb label} Button: Create/Edit Labels}
\label{labelbut}
\index{label button}
\index{object creation!labels}
\index{keyboard!arrow keys}

\epsfbox{images/label.eps}

The {\cb label} button is used to create or modify a text label.  Labels are abstract annotation objects which do not appear in physical output.  For physical text, use the {\cb logo} command button.

\index{label editing}
If a label is selected before pressing the {\cb label} button, then the selected label can be edited.  Multiple labels can be selected, and each will receive the new label text.  If more than one label is being changed, the command exits after the new text is entered on the prompt line, i.e., after {\kb Enter} is pressed to terminate text entry.

If only one label is being changed, on pressing {\kb Enter} the new text is ``attached'' to the mouse pointer, as for a new label.  In this state, the text size, orientation, and justification can be changed as will be described below.  The user can either click in a drawing window to place the label at the click location (effectively moving the selected label), or press {\kb Enter} to update the selected label at the existing location.

This is the recommended way to change the size of a label: select it, press the {\cb label} button, press {\kb Enter} to keep the same text, adjust the size with the arrow keys, then press {\kb Enter} again to update the label.  This keeps the label in a standard size and aspect ratio which will match other labels.  This would not be the case if the {\cb Stretch} command or operation was used instead.

If no label was initially selected, after the label text has been entered, the label will appear ghost-drawn, attached to the mouse pointer.
The text will be rotated or mirrored according to the current transform, as set from the pop-up provided by the {\cb xform} button in the side menu.  Instances of the label are placed where the user clicks in a drawing window.

Label text is entered in the prompt line.  While editing, if the user clicks on an existing label in a drawing window which is contained in the current cell, the text of that label will be inserted at the prompt line cursor.  Hypertext entries (see \ref{hypertext}) in the label will be preserved.  If the existing label is a ``long text'' label (described below), the long text attribute will be lost, unless the prompt line is empty before clicking on the label.  Particularly in electrical mode, clicking on other objects in a drawing window will insert text at the cursor position, as will be described.  Pressing {\kb Enter} terminates the label text and will allow placement of copies of the new label.

The size and justification of the label can be adjusted with the arrow keys, before it is placed.  The arrow keys have the following effect:

\begin{tabular}{ll}
\kb Up & enlarge by a factor of 2\\
\kb Right & enlarge by 10\%\\
\kb Down & reduce by a factor of 2\\
\kb Left & reduce by 10\%\\
\end{tabular}

The initial size of a label is determined by the present default label height, and the magnification of the current drawing window.  The default label height is 1.0 microns, which can be reset by setting the {\et LabelDefHeight} variable to a different value.  The default height is the smallest size available through scaling with the arrow keys.  Generally, {\Xic} functions that create new labels will use the default label height.  The default height of one micron is too large for modern semiconductor processes, so one should redefine {\et LabelDefHeight} in the technology file to a more suitable value, typically the minimum feature size.

By default, the label is anchored at the lower left corner, though this justification can be changed by holding the {\kb Shift} key while pressing the arrow keys.  The {\kb Left} and {\kb Right} arrows cycle through left, center, and right justification.  The {\kb Up} and {\kb Down} arrow keys cycle through bottom, center, and top justification.

Finally, holding the {\kb Ctrl} key while pressing the arrow keys will change the current rotation angle.  The arrow keys implicitly cycle through the angle choices, with {\kb Up} and {\kb Right} cycling in the opposite sense from {\kb Down} and {\kb Left}.

Labels are scalable, and can be stretched with the {\cb Stretch} button in the {\cb Edit Menu} or with button 1 operations.

Newlines can be embedded in the label text by pressing {\kb Shift-Enter}.  The displayed label will contain line breaks at those points.  The justification applies to the block, and line-by-line within the block.

Labels are shown in legible orientation (i.e., left to right or down to up) by default, independent of the actual transformation.  If the {\cb Label True Orient} button in the {\cb Main Window} sub-menu of the {\cb Attributes Menu} or the sub-window {\cb Attributes} menu is set accordingly, labels will be shown in their actual orientation.

Pressing the {\kb Delete} key after the label text has been entered will repeat prompting for new label text.

Labels have fixed size as compared with layout geometry.

\subsection{Device Property Labels}

Labels are created internally for device properties in electrical mode.  These labels can be moved, deleted, and edited just as user-supplied labels.
Once deleted, though, such labels cannot be recreated except by recreating the device, or by using the {\cb !regen} command.  The underlying property is not deleted; it simply is not displayed in a label.

These labels can be ``hidden'' by clicking on the label text with button 1 with the {\kb Shift} key held.  This replaces the label text with a small box icon.  Shift-clicking the icon will redisplay the text.  This can be useful when long labels obscure other features.  See \ref{labelsize} for more information.

Labels can be edited by selecting the label before pressing the {\cb label} button.  If the label was generated for a property in electrical mode, the underlying property is also changed.  This is a quick way to modify device properties, without invoking the {\cb Properties} command button in the {\cb Edit Menu}.

\index{wire label}
\index{net name label}
\subsection{Wire Net Name Labels}

Similar to the property labels, electrical wires that participate in schematic connectivity can have a bound label that provides a name for the net containing the wire.  Unlike the device labels, wire net labels are created by the user.

If the {\cb label} command is started with a single selected wire on an electrically-active layer, the label created will be bound to that wire.  Thus, to create a label for a wire, select the wire, press the {\cb label} button in the side menu, and create the label.  These labels can exist on any layer.

Once created, these labels can be edited in the same manner as property labels, i.e., select the label and enter the label command by pressing the side menu {\cb label} button.

\subsection{Ctrl-a and Ctrl-p}

In electrical mode, outside of any command, pressing {\kb Ctrl-a} will cause the associated labels of any selected device or wire to also become selected.  If labels are selected, then pressing {\kb Ctrl-a} will cause their associated device or wire to also become selected.  The associated labels can be deselected by pressing {\kb Ctrl-p}.  This is useful for determining which labels are associated with a given device or wire, and {\it vice-versa}.

\subsection{Spicetext Labels}
\label{spicetext}
\index{spicetext label}

In electrical mode, for efficiency reasons it is best not to use the SCED layer for labels.  If the current layer is the SCED layer, a new label will instead be created using the ETC1 layer.  If for some reason a label is required on the SCED layer, the {\cb Change Layer} command in the {\cb Modify Menu} can be used to move an existing label to the SCED layer.

In electrical mode, labels can be used to enter arbitrary text into the SPICE output.  There are two methods to achieve this.  In addition, the {\et SpiceInclude} variable can be used to add a file inclusion to the SPICE output.

If an electrical layer named ``SPTX'' exists, labels on this layer will be included, verbatim, as separate lines in SPICE output, unless the label is a ``spicetext'' label (below).  These labels are sorted by position, top-to-bottom and left-to-right in output, and are placed ahead of the spicetext labels.  A label on the SPTX layer in the format of a spicetext label will be output as a spicetext label.

If the first word of the label is of the form
\begin{quote}\vt spicetext{\it N}
\end{quote}
the label is a ``spicetext'' label, and the text which follows will be entered verbatim as a separate line in the SPICE output.  The spicetext labels can appear on any layer.  The integer {\it N\/}, which is optional, is a sorting parameter.
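As a hedged example (the analysis line itself is arbitrary), a label reading
\begin{quote}\vt spicetext1 .tran 0.1n 50n
\end{quote}
would place the line ``{\vt .tran 0.1n 50n}'' verbatim in the SPICE output, with a sorting value of 1.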
If there are multiple labels containing SPICE text, they will be sorted by {\it N\/} before being added to the SPICE output.  Smaller {\it N\/} will appear earlier in the listing, with omitted {\it N\/} corresponding to a value of zero.  The {\vt spicetext\/} lines are written as a contiguous block in the listing.  Any text which can be interpreted by the SPICE simulator in use can be added using these methods, but erroneous syntax will of course cause errors as the SPICE text is sourced.

\subsection{``Long Text'' Capability}
\label{longtext}
\index{long text labels}

When editing or creating unbound labels, or labels for physical or certain electrical properties ({\et value}, {\et param}, and {\et other}), there is provision for entering a block of text that will not be visible in the layout or schematic.  This avoids cluttering the screen with labels containing large blocks of text.  Rather, a symbolic form will be shown instead of the full text.  This same capability applies when adding or editing properties from the {\cb Property Editor} provided by the {\cb Properties} button in the {\cb Edit Menu}.

This capability is useful for properties which require a large block of text, such as a long PWL statement in a {\et value} property for SPICE.  It is not possible to edit a large text block in the prompt area, and, if displayed, it would obscure or clutter the screen.  The full text is added to SPICE output, however, and is available as the property value in functions that query the value.

It is also useful for the {\vt spicetext} labels, so that a block of text can be inserted into SPICE output, rather than one line.  Remember that the text entered into the window must begin with ``{\vt spicetext}'' and an optional integer, for the text to appear in SPICE output.

When entering a label where this ``long text'' capability applies, a small ``{\cb L}'' button will appear to the left of the prompt line, and this will be active when the text cursor is in the leftmost column.  Pressing this button will set the internal flag for ``long text'', and open the text editor window for the text.  Any text that was previously entered in the prompt line will be loaded into the text editor window, with any hypertext entries converted to plain text, or, if the label was already in long text mode, the existing text will be shown in the editor.  The long text blocks do not support the hypertext feature.  Pressing {\kb Ctrl-t} has the same effect as pressing the {\cb L} button when the button is visible and active.

From the editor window, one can edit the block of text, then press {\cb Save} in the editor's {\cb File} menu to complete the operation, or {\cb Quit} to abort.  The on-screen label will simply say ``{\vt [text]}'' for a normal ``long text'' property or non-associated label, or have the standard form for a script label (described below).

The long text labels can be edited with the label editor, as can normal labels, by selecting the label before pressing the {\cb label} button.  The prompt line will display ``{\vt [text]}'' as a hypertext entry.  Pressing {\kb Enter} or the {\cb L} button will bring up the text editor loaded with the text associated with the label, allowing editing.  To convert a long text label to a normal label, instead of bringing up the text editor, the hypertext ``{\vt [text]}'' entry can be deleted in the prompt line.
Deleting the entry will place as much of the text block as possible on the prompt line, and delete the text block and the association of the label or property as a long text object.

\subsection{Script Labels}
\index{script labels}

{\Xic} provides the ability to embed a script or script reference in a label, which is executed when the user clicks on the label.  These are created like any other label, but have the form
\begin{quote}\vt !!script [name={\it word\/}] [path={\it path\/}] [{\it script text}...]
\end{quote}

The leading token in the label must be ``{\vt !!script}'' to indicate that the label text is executable.  This is followed by zero or more keyword/value pairs as shown, followed by the script text that will be executed.  The keyword and value must be separated by `=' with no space.  The value is a single token, which should be double-quoted if it contains white space.  Both keyword/value pairs are optional.  The keywords have the following interpretations.

\begin{description}
\item{\vt name={\it some\_word}}\\
The script label is rendered on-screen as {\it some\_word} surrounded by a box.  If no name is given, the word ``{\vt script}'' is shown.

\item{\vt path={\it some\_path}}\\
If this is given, then the script to be executed is given by {\it some\_path} and any executable statements in the label are ignored.  The {\it some\_path} can be an absolute path to a script file, or can be the name of a script file expected to be found in the script search path.
\end{description}

Any remaining text is executed as script commands, if {\vt path} is not given.  For short scripts, semicolons can be used as command terminators in a single line.  Otherwise, a text editor can be invoked on the label string by pressing the ``L'' (long text) button when creating the label.

Clicking on a script label will execute the script; it will not select the label, as clicking would for a normal label.  To select a script label, hold {\kb Shift} while clicking on the label, or drag over the label (area select).  If a script label is selected, it will not execute when clicked on, but rather be deselected.

For example, suppose that a user has a large layout, with a small section that the user often needs to zoom into.  The user can create a script label to perform the zoom operation.  After zooming in, one can note the position and estimate the width of the drawing window.  Then, one would create a label such as
\begin{quote}
\vt !!script name=zoom Window({\it x\/}, {\it y\/}, {\it width\/}, GetWindow())
\end{quote}
and place it somewhere convenient.  The {\it x\/}, {\it y\/}, and {\it width} above of course represent the actual values (in microns).  Clicking on the label will always zoom to this area.

\subsection{Label Size Issues}
\label{labelsize}

In electrical mode, property text labels can be displayed or ``hidden''.  If a label is hidden, the text is not displayed; only a small box at the text reference point is shown.  Labels with text longer than a certain length will be shown as hidden by default.  Hidden labels can be made visible, and {\it vice-versa}, by clicking on the label or small box with the {\kb Shift} key held.  The label text can also be shrunk (with the {\cb Stretch} command in the {\cb Modify Menu} or with button 1 operations) to make it visible.

The label hidden status is persistent when the cell is saved in any format; however, changing the display status does {\bf not} change the modified state of a cell, so this can be done in IMMUTABLE cells.
It should be noted that the ``real'' bounding box of the label, which is used to set the cell bounding box, is always the bounding box of the actual text. The hidden display mode is only available for the labels that contain property strings in electrical mode. Hidden labels can be selected only over the small box, and only the small box is highlighted. However, when moving or stretching, the entire ``real'' bounding box is highlighted. \index{LabelMaxLen variable} The size threshold can be changed with the {\cb Maximum displayed property label length} entry in the {\cb Window Attributes} panel from the {\cb Set Attributes} button in the {\cb Attributes Menu}. Equivalently, the variable {\et LabelMaxLen} can be set to an integer greater than 6 with the {\cb !set} command. The units are the width of a default-size character cell. In releases prior to 2.5.66, the default length was 32 default character size cells. In this and later releases, the value is 256 character cells. The larger threshold makes the nondisplay of label text much less probable, as this feature has been confusing to users. Another way to obscure a long label is to convert it to a ``long text'' label. To ``hide'' a label using the ``long text'' feature: \begin{enumerate} \item{Select the label.} \item{Press the side menu {\cb label} button (with the black `T' icon).} \item{Press the gray {\cb L} button that appears to the left of the prompt line. This will cause the text editor to appear, loaded with the label text. If there is no {\cb L} button, then the property can't use long text, which is true for properties that are ``always'' short, such as for device and model names.} \item{In the text editor, press {\cb Save} in the {\cb File} menu. The editor will disappear, and the label displayed on-screen will have changed to ``{\vt [text]}''.} \end{enumerate} To convert back to a normal label: \begin{enumerate} \item{Select the long-text label (``{\vt [text]}'').} \item{Press the side menu {\cb label} button (with the black 'T' icon).} \item{With the cursor under ``{\vt [text]}'' on the prompt line, press the {\kb Delete} key. The full label text will appear on the prompt line.} \item{Press {\kb Enter}. The label will be shown normally.} \end{enumerate} Long property text labels can also be broken into multiple lines by adding embedded returns. These are added with {\kb Shift-Enter} while the string is being edited. Note that this generates newlines in the SPICE output, so that in most cases the extra lines should begin with the ``+'' continuation character. % ----------------------------------------------------------------------------- % xic:logo 052311 \section{The {\cb logo} Button: Create Physical Text} \index{logo button} \index{physical text} \index{MinWidth and logo text} \epsfbox{images/logo.eps} The {\cb logo} command allows the creation of physical text and images for labeling, identification, etc. Operation is similar to the {\cb label} command, where the arrow keys alter text or image size, {\kb Shift}-arrow cycles through the justification choices, and {\kb Ctrl}-arrow cycles through the rotation angles. By default, the text is implemented with rounded-end wires in the current layer, using a vector font. For rendering text, there are three font possibilities. The default font is a vector font which constructs the characters using wire objects. The Manhattan font is a built-in bitmap font from which the characters are constructed using Manhattan polygons. The Manhattan font is fixed-pitch with an 8X16 map. 
The ``pretty'' font is a system font, from which characters are similarly constructed as Manhattan polygons.  Logic is applied to extract the ``best'' rendition from anti-aliased fonts, which do not have a precisely defined shape.  Some fonts may look better than others in this application.

While in the {\cb logo} command and using the vector font, pressing the {\kb Ctrl-Shift}-arrow key combinations will adjust the path width; the {\kb Up} and {\kb Right} arrow keys increase the width, {\kb Down} and {\kb Left} arrows decrease the path width.  The {\et LogoPathWidth} variable tracks the current path width setting.  The {\et LogoEndStyle} variable tracks the current end style setting.

Instead of a text label, the {\cb logo} command can be used to place an image.  The image must be provided by a file in the XPM format.  This is a simple ASCII bitmap format, commonly used in conjunction with the X Window System on Unix machines.  Other types of bitmap files can be converted to XPM format with widely available free software, such as the ImageMagick package.  Several XPM files are supplied in the help directory for {\Xic} (located by default in {\vt /usr/local/xictools/xic/help}), which illustrate the format.  This feature is enabled in the {\cb logo} command by giving the path of an XPM file, which must have a ``{\vt .xpm}'' suffix, as the text string.  This will cause the image to be imported such that it can be scaled, transformed, and placed, just like a normal label.  The background color (the first color listed in the XPM file) is taken as transparent.  All other colors found in the XPM file are mapped to the current layer.  The image is rendered as a collection of Manhattan polygons.

Unlike in releases 3.0.11 and earlier, there is no attempt to limit feature sizes according to design rules.  The minimum size of a character is set by the internal resolution, while the maximum size is about 0.4 X 0.7 cm.  Once the text is entered, the size and other attributes can be changed with the arrow keys, and the text is placed where the user clicks in the drawing with button 1.  The text can be reentered, i.e., a new label or image file defined, if the {\kb Delete} key is pressed.

Alternatively, a fixed ``pixel'' size can be specified.  In this case, the arrow keys will pan the display window, and have no effect on the label or image size.

The default operation is to apply the text or image feature directly in the current cell, where the user clicks.  It is also possible to create a subcell containing the text, which is instantiated at the clicked-on locations.  This may be more efficient if there are many copies of the same label.

\index{NoDRC flag}
Note that use of the vector font may produce design rule violations, which are pretty much inevitable due to the presence of acute angles in some characters.  Use of the other fonts, which are rendered using Manhattan polygons, can avoid design rule violations, if the ``pixel'' size is larger than the MinWidth and MinSpace design rules for the layer.  When physical text (or an image) is placed with the {\cb logo} command, interactive design rule checking is suppressed.  The {\et NoDRC} flag can be set on the new label, or the NDRC layer can be used, to permanently suppress DRC.

It is possible to change the font used for the {\cb logo} command.
The default font is set internally by {\Xic}; however, individual characters or the whole font will be updated upon startup if a file named ``{\vt xic\_logofont}'' containing alternative character specifications is found along the library search path.

\subsection{The Logo Font Setup Panel}
\index{Logo FontSetup panel}

While the {\cb logo} command is active, the {\cb Logo Font Setup} panel is visible, though this can be dismissed without leaving the {\cb logo} command.

The top of the panel provides three ``radio'' buttons for selecting the font: {\cb Vector}, {\cb Manhattan}, and {\cb Pretty}.  The {\et LogoAltFont} variable tracks the choice in these buttons.

Below the {\cb Font} choice buttons is the {\cb Define ``pixel'' size} check box and numeric entry window.  When checked, the numeric entry area is enabled, and the value represents the size of a ``pixel'' used for rendering the label or image, in microns.  When checked, the arrow keys have no effect on label or image size; instead, they revert to their normal function of panning the display window.  This feature is tied to the {\et LogoPixelSize} variable, which when set to a real number larger than 0 and less than or equal to 100.0 will define the ``pixel'' size used in the {\cb logo} command.

There are two option menus in the {\cb Logo Font Setup} panel which set the end style and path width assumed in the wires used for constructing characters with the vector font.  The user can set these according to personal preference.  Although rounded end paths may look better, they are somewhat less efficient in terms of storage and processing, and are not handled uniformly (or at all) in some CAD environments.  For example, rounded-end wires may be converted to square ends when written as OASIS data.

The {\cb Select Pretty Font} button brings up the {\cb Font Selection} panel, allowing the user to select a system font for use as the ``pretty'' font.  In the {\cb Font Selection} panel, the user can select a font, then press the {\cb Set Pretty Font} button to actually export the choice.  This will set the {\et LogoPrettyFont} variable.

The {\cb Create cell for text} check box, when checked, sets a mode where new labels and images are instantiated as subcells rather than directly as geometrical objects.  In addition to generating a master cell in memory, a native cell file with the same name is written in the current directory.  The boolean variable {\et LogoToFile} tracks the state of this check box.

The name of the file used for the label is internally generated, and is guaranteed to be unique in the current search path.  The name consists of the first 8 characters of the label, followed by an encoding of the various parameters related to the label.  For a given label, the uniqueness of the file name prevents recreating the same label file in a subsequent session.

The {\cb Dump Vector Font} button will create a file containing the vector font (see \ref{vectorfont}) currently being used by the {\cb logo} command.  By default, the vector font uses the same character maps as the vector font used to render label text on-screen.  However, these maps can be overridden by definitions from a file.  The {\cb Dump Vector Font} button can be used to dump the current set of character maps to a file.  Character maps from this file can be modified and placed in a file named ``{\vt xic\_logofont}'' in the library search path, in which case they will override the internal definitions when producing vector-based characters in the {\cb logo} command.
% -----------------------------------------------------------------------------
% xic:nodmp 061913

\section{The {\cb nodmp} Button: Node (Net) Name Assignments}
\label{nodmp}
\index{node mapping}
\index{nodmp button}

\epsfbox{images/nodmp.eps}

The {\cb nodmp} button, which is available in the electrical mode side menu, will bring up the {\cb Node (Net) Name Mapping} panel which is used to display and alter the names used for ``nodes'' (single-conductor wire nets) in the schematic, and in SPICE and other output. This name may be internally generated, or may be derived from a terminal name, or may be assigned by the user. This panel is also brought up by the {\cb Find Terminal} button in the {\cb Setup} page of the {\cb Extraction Setup} panel, which is obtained from the {\cb Setup} button in the {\cb Extract Menu}.

First, to facilitate the discussion that follows, some terminology will be introduced. See also the section on wire net naming in \ref{netex}.

\begin{description}
\item{scalar}\\
Single-conductor wire nets, or ``nodes'' (from SPICE terminology), are referred to as ``scalar'' nets. These are the actual circuit connections, which are compared in layout vs. schematic (LVS) testing. {\Xic} also allows multi-conductor (including single-conductor) ``vector'' and ``bundle'' nets. These actually reference and organize the nodes, but do not provide actual connectivity, except through name matching. The present {\cb Node (Net) Name Mapping} panel applies only to the scalar nets.

\item{associated name}\\
A scalar wire net, or ``node'', may have ``associated names''. These names derive from named terminal devices which may be connected to the net, or from labeled wires which are connected to the net. Both the terminal device and the labeled wire derive the net name from the text of an associated label. The labels can be edited, which will change the text of the associated name. A net may have any number of associated names.

\item{cell terminal name}\\
Every electrical contact point of a cell has a name. This name was assigned when the cell terminal was created with the {\cb subct} command button in the side menu, or if no name was given a default name is used. It is also possible to name cell contact terminals from the {\cb Edit Terminals} command button in the {\cb Setup} page of the {\cb Extraction Setup} panel. This panel is brought up with the {\cb Setup} button in the {\cb Extract Menu}.

\item{global names}\\
Certain names are registered within {\Xic} as ``global names'', and are kept in an internal string table. These names are known at every level of the cell hierarchy. There is always at least one global name defined, the ground node with name ``{\vt 0}''. Global names are easily created by the user, as any node name ending with an exclamation point (`{\vt !}') is taken as a global name. For example, ``{\vt vdd!}'' would be taken as a global name. Global node names are also set with the {\et DefaultNode} global properties, in the device library file. They may be used as default nodes in some devices. In particular, the ``three terminal'' {\cb nmos} and {\cb pmos} devices included in the default library make use of this feature, defining global node names ``{\vt NSUB}'' and ``{\vt PSUB}'' that connect to the device substrate.

\item{assigned name}\\
Names that are specified from the {\cb Node (Net) Name Mapping} panel using the {\cb Map Name} button will be referred to as ``assigned names''.
\end{description}

A wire net can clearly have a number of names associated with it.
The actual name for the node will be chosen according to the priorities listed below. \begin{enumerate} \item{If a net has an associated name that matches a global name, that global name is used, and this can not be overridden by the user. If two or more global names match associated names in the net, the name chosen will be the one earliest in ASCII lexicographic order. This situation is unlikely and probably represents a topology error.} \item{If a net is given an assigned name, that name will be used.} \item{If a net contains a cell terminal, the cell terminal name will be used. It is possible that more than one cell terminal is connected to the node, in which case the name chosen will be the one earliest in ASCII lexicographic order.} \item{If the net has an associated name, that name will be used. It is possible that more than one associated name will be found, in which case the name chosen will be the one earliest in ASCII lexicographic order.} \item{The net will be given a name based on the internally-generated node number.} \end{enumerate} For names other than the internally generated node numbers, the name is predictable. The internally generated numbers will change if the circuit is modified, or possibly for other reasons. Thus, if netlist or SPICE output is to be used in another application, it may be important to assign names to nodes to be referenced by name. The {\cb Node (Net) Name Mapping} panel contains two text listing windows. The left (node listing) window lists all of the nets in the current cell schematic. An entry in the list can be selected by clicking on the text with the mouse. When a net is selected in this list, the terminals to which the net connects are listed by name in the right (terminal listing) window. Entries in the terminal listing can be selected as well by clicking on the text with the mouse. In either window, the selected entry, if any, is highlighted. There is a ``grip'' in the region separating the two text listings, which can be dragged horizontally to change the relative widths of the windows. The left column in the node listing contains the internal node numbers, which can change arbitrarily if the circuit is modified. Entries in the second column are the mapped names, i.e., the names used in SPICE and netlist files. If the second column entry is blank, no name could be found for the net, and {\Xic} will create a name from the node number for use in output. The third column will contain the letter ``{\vt Y}'' if the node has a name assigned by the user, and/or a ``{\vt G}'' if the node name is that of a global node (including ground). Both letters will appear if the user assigns a name that matches a global name, which includes any name that ends with an exclamation point. The ``{\vt G}'' nodes without {\vt Y} can not be renamed by the user. When a node is selected in the left text window, the right text window lists terminals and other features that are found in the selected net. This includes \begin{itemize} \item{Device and subcircuit instance terminals.} \item{Named terminal devices. These start with a `{\vt T}' character, followed by a space, followed by the name from the terminal label.} \item{Named wires. These start with `{\vt W}' followed by space and the name from the wire label.} \item{Cell contact terminal names.} \end{itemize} The names used for device terminals are a concatenation of the device name and the terminal names as supplied in the node properties in the device library file, if a name was given. 
If no name was given, a default name is constructed as {\it devicename\/}\_{\it contactnum\/}. That is, the device name, followed by an underscore, followed by an internal index number for contacts of that device. The device name starts with a letter which is the SPICE key for that device type. Subcircuits are similar, and the terminal names begin with `{\vt X}'.

In the electrical schematic drawing, when a net is selected in the node listing window, wire objects that are included in the selected net are highlighted. Each of the device and subcircuit instance terminals listed in the terminal listing area will have a small highlighting box drawn around its location. If one of the terminals in the terminal listing is selected, that terminal will be displayed using highlighting.

The panel will cooperate closely with the physical extraction system when the {\cb Use Extract} check box is checked. This means that extraction/association will be performed as needed so that terminal locations are correctly defined in the physical layout as well. In this case, a terminal selected in the terminal list will be shown in physical layout windows, as well as the schematic. If the check box is not checked, extraction data will be used if present when showing the terminal in layouts, but there is no attempt to maintain currency.

The {\cb Node (Net) Name Mapping} panel is also available from the {\cb Find Terminal} button in the {\cb Extraction Setup} panel in both physical and electrical modes, in addition to the side-menu button in electrical mode.

When an entry in the terminal listing window is selected, the {\cb Find} button, below the listing, is un-grayed. Pressing the {\cb Find} button will bring up a sub-window displaying the current cell, with the selected terminal at the exact center of the display. One can press the numeric keypad {\kb +} key repeatedly to zoom in to the terminal location, and the terminal will remain centered. Further, if {\cb Use Extract} is set or the extraction state is current, the terminal will also be displayed and centered if the sub-window is changed to physical mode.

When the {\cb Click-Select Mode} button is pressed, a command state is entered whereby clicking on a wire or contact point in a drawing window will select that net. This works a bit differently depending on the state of the {\cb Use Extract} check box.

If the box is checked, the button will bring up the {\cb Path Selection Control} panel from the extraction system. This allows selection of conducting paths in the layout windows by clicking on objects. The corresponding net will be selected in the node listing window, with corresponding highlighting shown in schematic windows. One can also click on wires and terminal locations in the schematic, and the clicked-on net will become selected. The corresponding conductor group will be displayed highlighted in layout windows.

With {\cb Use Extract} not checked, the {\cb Path Selection Control} panel will not appear, but clicking in schematic windows will have a similar effect. The system will once again use extraction data if available to map button presses in layout windows to a conductor group and back to the corresponding electrical net to be highlighted. However, there is no highlighting of the physical conductor group.

In either case, the clicked-on node will be shown selected in the node listing window, and scrolled into view if necessary. The terminal listing window will show the selected net details as usual.
{\cb Click-Select Mode} is exited if another command is started, or {\kb Esc} is pressed, or the {\cb Click-Select Mode} button is pressed again, or, with {\cb Use Extract} checked, the {\cb Path Selection Control} panel is retired by any means.

The {\cb Deselect} button will deselect selections in the node listing window, and the corresponding highlighting in the drawing windows. The terminal listing window becomes empty.

It is also possible to search for nets and terminals by name using the controls just above the two listing windows. The two ``radio'' buttons select whether to search for node or terminal names. One enters a ``regular expression'' into the text area. The button to the left of the text entry initiates the search. A matching net is selected, as is the matching terminal if searching for terminals. One can press the button again to move to the next and subsequent matches. If there is no initial selection, perhaps because {\cb Deselect} was pressed, the search starts at the top of the listing and extends toward the end. If a net is selected, the search starts with the next item (terminal or net) after the selection and extends toward the end.

The regular expression conforms to POSIX.1-2001 as an extended, case-ignored regular expression. On a Linux system, ``{\vt man grep}'' provides a good overview of regular expression syntax and capability. However, one probably doesn't need to know much more than
\begin{enumerate}
\item{A given string will match any name containing the string, case insensitive.}
\item{The caret (`{\vt \symbol{94}}') character matches the beginning of a name.}
\item{The dollar sign (`{\vt \$}') character matches the end of a name.}
\end{enumerate}

If the third column in the node listing window is not `{\vt G}', then an overriding name for the selected node can be assigned with the {\cb Map Name} button, but only while in electrical mode. To apply a name, select a node in the node listing area, then press the {\cb Map Name} button. A new name will be prompted for in a pop-up window. The name can be any text token (white space is not allowed), however it is up to the user to ensure that the name makes sense in the context of the output. For example, for SPICE output, the node names must adhere to the rules for valid node names in SPICE. After pressing {\cb Apply}, the second column in the listing will be updated to show the new name, and the third column will show ``{\vt Y}''. Again, this can only be done while in electrical mode; in physical mode the button remains grayed.

The node naming can actually modify circuit topology, which can be a powerful feature or a curse. If two nets share a name, they will be merged, and the left window will reflect this. Thus, it is easy to make connections using node name mapping that are not obvious when looking at the schematic. For this reason, if the user is about to apply a duplicate name, a confirmation pop-up will appear. The user is given the choice to back out of the operation, or continue.

The node name assignment works by association with a connection point in the net, equivalent to a hypertext reference. This association persists if the object is moved, and is transferred to another device or wire if the object is deleted, if possible. In some cases it may get lost, however, so an assigned name may have to be reentered after the circuit is edited.

In electrical mode, an assigned name can be deleted by first selecting the node in the node listing area, then pressing the {\cb Unmap} button.
The {\cb Unmap} button is un-grayed only if the third column of the selected node shows ``{\vt Y}'' indicating that it has an assigned name. On pressing the button, the name will revert to the default name. This may effectively change circuit topology by undoing the net merging brought about through net name assignments. Again, this operation is available only in electrical mode.

The internal data structure representing node name mapping, and the listings, will be in one of two states. Either devices and subcircuits with the {\et nophys} property will be included as normal devices and subcircuits, or these will be ignored. In the latter case, if the {\et nophys} property has the ``shorted'' option, the terminals will be effectively shorted together, which will obviously change the node numbering. The current state is as set by the last function to generate the connectivity map. Functions in the extraction system will always recognize the {\et nophys} properties, and build the map excluding these devices but taking the ``shorted'' {\et nophys} devices as shorted. Then, the schematic will correspond to the actual physical layout. Functions in the side menu which generate a SPICE listing will ignore {\et nophys} properties and include all such devices in the net list. This produces a schematic appropriate for SPICE simulation.

The {\cb Use nophys} button is used to switch between these two representations, and the state of the button will be reset if another function changes the state.

When the {\cb Use nophys} button is pressed, devices and subcircuits with the {\et nophys} property set will be {\it included} in the listings, just as ``normal'' devices. Their terminals will be listed in the terminals listing window. The {\et nophys} property is ignored in this case, as will be true when a listing is being prepared for SPICE output from functions in the side menu.

When the {\cb Use nophys} button is not pressed, devices and subcircuits with the {\et nophys} property will be ignored in the listings, and the node numbering will respect the ``shorted'' {\et nophys} properties. Terminals from these devices and subcircuits will not be listed in the terminal listing window. This mode is consistent with the usage by the extraction system.

% -----------------------------------------------------------------------------
% xic:place 100416

\section{The {\cb Place} Button: Cell Placement Control Panel}
\index{Place button}
\index{Cell Placement Control panel}
\index{cell placement}
\index{place cells}

\epsfbox{images/place.eps}

The {\cb place} button in the side menu brings up the {\cb Cell Placement Control} panel which allows instances of cells (subcells) to be added to the current editing cell. When the {\cb Place} button in the panel or the {\cb place} button in the side menu is active (the two buttons show the same status), the current master can be instantiated at locations where the user clicks (``place mode''). The bounding box of the cell is ghost-drawn and attached to the pointer. The orientation and size of the instance are set by the current transform.

If the {\cb Cell Placement Control} panel is dismissed, the place mode, if active, is exited. The place mode can be exited with the {\kb Esc} key, or by pressing the {\cb Place} button (either one) a second time. The panel is not popped down when place mode is exited.

The substructure of cell instances being placed is highlighted to the depth shown in the main window. This facilitates alignment with other objects.
One can change the display depth to reveal more or less of the substructure.

From the {\cb Open} command in the {\cb File Menu}, if one holds down {\kb Shift} while selecting one of the cells from the history list, the {\cb Cell Placement Control} panel will appear with that cell added as the current master. This applies to cell names and not the ``{\cb new}'' entry. This is a quick backdoor for instantiating cells recently edited.

In electrical mode, when a connection point of a device or subcell is near another connection point, it will snap to that location and a small dotted box will be drawn around the point. This facilitates placement of devices and subcircuits in the schematic. While the {\kb Shift} or {\kb Ctrl} keys are held, this feature is disabled.

\index{place panel!Use Array}
\index{cell arrays}

Cells can be placed individually, or as arrays in physical mode. When the {\cb Use Array} button is active, cells will be placed as arrays, governed by the currently set array parameters. The array parameters can be entered into the four text fields below, only when the {\cb Use Array} button is active. Arrays are allowed in physical mode only. If this button is not active, single cells are placed.

The array replication factors Nx and Ny can be set to any value in the range of 1 through 32767. The upper limit is set by the GDSII file format, and is enforced by {\Xic} to avoid portability problems. The spacing values Dx and Dy are edge to adjacent edge spacing, i.e., when zero the elements will abut (a worked example of the spacing convention is given below). If Dx or Dy is given the negative cell width or height, so that all elements appear at the same location, the corresponding Nx or Ny is taken as 1. Otherwise, there is no restriction on Dx or Dy except that very large (unphysical) values can cause integer overflow internally. The {\cb !array} command can be used to convert existing instances into arrays, and to modify the array parameters of existing arrays.

In physical mode, the reference point of the cell, which is the point in the cell located at the pointer, can be set to either the cell's origin, or to one of the cell's corners. A drop-down menu in the {\cb Cell Placement Control} panel indicates the present selection, and allows the user to make a new choice. The nomenclature ``Upper Left'', etc., refers to the corner of the untransformed cell array bounding box. When place mode is active, pressing the {\kb Enter} key repeatedly will cycle the reference point around the corners and back to the origin.

In electrical mode, the cell reference point is always set to the location of the reference terminal, which is usually the first terminal defined. If the cell has no terminals, the reference point can be cycled around the corners, as in physical mode, however for corners the reference point is snapped to the nearest grid location. This should prevent device terminals from being located off-grid. An electrical cell should always have terminals (assigned with the {\cb subct} command in the electrical side menu) if it is to be part of the circuit, and not some kind of decoration or annotation.

When the {\cb Smash} button is active, instances will be smashed into the parent where the user clicks, meaning that the cell content will be merged into the parent cell, rather than creating a new instance. The flattening is one-level, so that any subcells of the cell being placed become subcells in the parent.
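Since Dx and Dy above are edge-to-adjacent-edge spacings, the placement pitch of the array is the cell dimension plus the spacing. As a small worked example based on those definitions, for a master cell of width $W$ and height $H$, element $(i,j)$ of the array, with $i$ running from 0 through Nx$-$1 and $j$ from 0 through Ny$-$1, is offset from the first element by
\begin{quote}
$\Delta x = i\,(W + Dx), \qquad \Delta y = j\,(H + Dy).$
\end{quote}
Thus $Dx = Dy = 0$ gives abutting elements, while $Dx = -W$ (or $Dy = -H$) collapses the pitch to zero, which is why the corresponding Nx or Ny is then taken as 1.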
\index{place panel!Replace}
\index{replace cells}

When the {\cb Replace} button is active, existing cells are replaced with the new master when clicked on, and no cells are placed if the user clicks in the area outside of any subcells. When a cell is replaced, the placement of the new cell is determined in physical mode by the setting of the reference selection drop-down menu. For example, if this setting is ``Upper Right'', the new cell's untransformed upper-right corner will be placed at the existing cell's untransformed upper-right corner. In electrical mode, the reference terminal (the first connection point) is always placed at the same location as the reference terminal of the replaced cell. In either case, any currently active transformations are applied to the new cell in addition to the transformations of the replaced cell.

Cells can be placed or replaced only when place mode is active, i.e., when the {\cb Place} button in the {\cb Cell Placement Control} pop-up or the {\cb place} button in the side menu is active.

\index{place panel!editing array parameters}

If the {\cb Use Array} button is active when cells are being replaced, the replaced cell will take the array parameters from the {\cb Cell Placement Control} panel. Otherwise, the array parameters are unchanged during replacement. Note that it is possible to replace an instance with another instance of the same cell, but with different array parameters. This is one way that array parameters can be ``edited''.

The {\cb Dismiss} button will retire the {\cb Cell Placement Control} panel, and exit place mode.

The cell currently being placed, the ``master'', can be selected in several ways. A list of masters is kept, and can be viewed with the menu button in the {\cb Cell Placement Control} panel. Pressing and holding button 1 with the pointer on the menu button issues a drop-down menu, whose entries are highlighted as the pointer passes over them. A selection is made by releasing button 1 over one of the selections. Pressing the {\cb New} button in this menu brings up a dialog box which allows the user to enter a new master name. The pop-up list of cells will grow with each addition until a limit is reached, at which point new entries will replace the oldest one. The {\cb New} item is always at the top of the list. The list consists of the most recent masters specified, either with the {\cb New} button, or through the {\cb Place} button in the {\cb Cells Listing} or {\cb Files Listing} panels.

\index{MasterMenuLength variable}

The maximum number of masters saved in the menu can be specified with the {\cb Maximum menu length} entry area just below the menu. The default is 25, which may not be suitable for some screen resolutions or window systems. It is not very useful if the pull-down menu extends off-screen. This entry tracks the value of the {\et MasterMenuLength} variable. The variable can be set as an integer or cleared to change the value, which is equivalent to changing the integer entry in this panel.

When the {\cb New} entry is selected, a dialog pop-up appears for the new cell name. If a selection can be found in the various panels that provide file or cell selection, that selection is pre-loaded into the dialog as a default. Each of these sources is tested in order, and the first one that is visible and has a selection will yield the default cell name.
\begin{itemize}
\item{A selection in the {\cb File Selection} pop-up from the {\cb File Select} button in the {\cb File Menu}.}
\item{A selection in the {\cb Cells Listing} pop-up from the {\cb Cells List} button in the {\cb Cell Menu}.}
\item{A selection in the {\cb Files Listing} pop-up from the {\cb Files List} button in the {\cb File Menu}, or its {\cb Content List}.}
\item{A selection in the {\cb Content List} of the {\cb Libraries} pop-up from the {\cb Libraries List} button in the {\cb File Menu}.}
\item{A selection in the {\cb Cell Hierarchy} pop-up from the {\cb Show Tree} button in the {\cb Cell Menu} or from the {\cb Tree} button in the {\cb Cells Listing} pop-up.}
\item{A cell name that is selected in the {\cb Info} pop-up, from the {\cb Info} button in either the {\cb View Menu} or the {\cb Cells Listing} pop-up.}
\item{The name of a selected subcell in the drawing window, the most recently selected if there is more than one.}
\end{itemize}

The first time the {\cb Cell Placement Control} panel comes up, the user is prompted for the name of a cell, just as if the {\cb New} menu button was pressed. The name provided can be a file containing data in one of the supported archive formats, the name of an {\Xic} cell, or a library file. If the name of an archive file is given, the name of the cell to open can follow the file name separated by space. If no cell name is given, the top level cell (the one not used as a subcell by any other cells in the file) is the one opened for placement. If there is more than one top level cell, the user is presented with a pop-up choice menu and asked to make a selection. If the file is a library file, the second argument can be given, and it should be one of the reference names from the library, or the name of a cell defined in the library. If no second name is given, a pop-up listing the library contents will appear, allowing the user to select a reference or cell.

The given string can also consist of the name of a Cell Hierarchy Digest (CHD) in memory, optionally followed by the name of a cell known within the CHD hierarchy. If no cell name is provided, the cell name configured into the CHD is understood. The string can also contain the name of a saved CHD file, with an optional following cell name.

The {\cb Cell Placement Control} panel is sensitive as a drop receiver. If a file name is dragged over the panel and the mouse button released, the behavior is as if the {\cb New} button in the masters menu was pressed, and the file name will be loaded into the dialog window.

% -----------------------------------------------------------------------------
% xic:plot 022316

\section{The {\cb plot} Button: Generate SPICE Plot}
\index{plot button}
\index{SPICE plots}
\index{hypertext}

\epsfbox{images/plot.eps}

The {\cb plot} button, available only in electrical mode, gathers input for plotting via {\WRspice}. The prompt area displays the command string as it is being built. Clicking on nodes or device ``hidden'' targets (usually indicated by a `$+$' symbol in the device schematic representation) will add hypertext entries to the command string, and will add a marker to the screen at the clicked-on location. One can click anywhere on a wire, or on subcircuit and device connection points to select nodes. Clicking on a marker will delete the marker, and the corresponding entry from the string.

Some devices have ``hidden'' nodes for accessing internal variables for plotting, such as current through a voltage source or the phase of a Josephson junction.
By convention, these are indicated with a `$+$' mark in the symbol. For many devices, this will access the current through the device. The marker for a current has an orientation in the direction of positive current flow. Ordinary node markers have no orientation, and access the node voltage.

One can click on reference points to any depth in the hierarchy, though selection requires that the cell be showing as a schematic, and expanded. To make selections inside a subcircuit that is shown as a symbol, one can use proxy windows (see \ref{hyproxy}). Holding down both the {\kb Shift} and {\kb Ctrl} keys, and clicking on a subcircuit instance, will bring up a sub-window displaying the master of the clicked-on instance in schematic form. One can click on objects in this window in the normal way, and plot points will be added to the prompt line.

Holding the {\kb Shift} key while clicking on a device or subcircuit not over any node or ``hidden'' location will enter the hypertext device or subcircuit name. These are not often needed in plot command strings, and the requirement of holding down {\kb Shift} prevents unwanted returns.

Markers can be deleted by clicking on them, or by deleting the corresponding hypertext in the prompt line. Multiple markers can reference the same node, as long as they are spatially distinct. Existing marks can be dragged and dropped to a new location, which must also reference a node or reference point, as for the initial placement. If dropped on an existing plot mark, the two marks will exchange locations, both as marks in the drawing window, and as hypertext entries in the prompt line.

The prompt line text is equivalent to the string given to the {\cb plot} command in {\WRspice}. The string can be edited in the usual way. The user can add text to combine the hypertext references into expressions involving other syntax elements known to {\WRspice}. The registers available through the {\cb S} and {\cb R} buttons to the left of the prompt line can be used to save and restore plot command strings.

The {\WRspice} parser can distinguish the expressions, however in some cases the user must intervene to force an expected result. For example,
\begin{quote}
\vt v(1) -v(2)
\end{quote}
will be interpreted as {\vt (v(1)-v(2))}, i.e., the difference. To force a unary minus interpretation, one can enclose the second token in double quotes or parentheses, i.e., {\vt v(1) "-v(2)"} will plot {\vt v(1)} and negative {\vt v(2)}. Note that white space is not considered when interpreting the expression, and is required only to separate identifier names. One should refer to the {\WRspice} documentation for a complete description of the syntax accepted by the {\cb plot} command.

The command line can also contain keyword assignments which override defaults used when composing the plot. It is also possible to produce X-Y plots by using the ``{\vt vs}'' keyword. The expression following ``{\vt vs}'' will be used as the X scale for the other expressions.

The color used to render a plot reference mark in the schematic will be the same color used for the plot trace of the result to which the corresponding hypertext contributes (however, if the user has changed the plotting colors in {\WRspice} or {\Xic}, this may not be true). The number (or letter) enclosed by the plot mark represents a count of the hypertext entries found in the prompt line, left to right, starting with 1.
If {\Xic} detects a syntax error, one or more plot marks may be rendered with ``no'' color (actually the highlighting color is used). This is also true for the marks used in the X-scale of an X-Y plot.

The {\kb Enter} key terminates entry, and creates the plot if {\WRspice} is available, and the circuit has been simulated with the {\cb run} command. In the {\cb deck} command, the string will be added to the SPICE listing, when generated, as a {\vt .plot} line.

While the {\cb plot} command is active, it is possible to select text labels in the normal way; other selections are inhibited. This allows labels to be selected and modified without having to explicitly terminate the {\cb plot} command first.

The command text and mark locations are saved with the cell data when written to disk, thus the {\cb plot} command string is persistent.

% -----------------------------------------------------------------------------
% xic:polyg 092717

\section{The {\cb polyg} Button: Create/Edit Polygons}
\index{polyg button}
\index{object creation!polygons}

\epsfbox{images/polyg.eps}

The {\cb polyg} button is used to create and modify polygons. In electrical mode, this functionality is available from the {\cb poly} menu selection brought up by the {\cb shapes} button.

A polygon is created by clicking the left mouse button on each vertex location in sequence. The vertices can be undone and redone with the {\kb Tab} key and {\kb Shift-Tab} combination, which are equivalent to the {\cb Undo} and {\cb Redo} commands. Vertex entry is terminated, and a new polygon potentially created, by clicking on the initial point (marked with a cross), or double-clicking the last point, or by pressing the {\kb Enter} key. At least three distinct vertices must have been defined, and the polygon must pass some ``normality'' tests, for successful object creation.

The {\et PixelDelta} variable can be set to alter the value, in pixels, of the snap distance to the target when clicking to terminate. By default, the snap distance is 3 pixels, so clicking within this distance of the initial point will terminate entry rather than add a new vertex.

While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This is also applied to the first vertex of polygons being created, facilitating point list termination. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel.

When adding vertices during polygon creation, the angle of each segment can be constrained to a multiple of 45 degrees with the {\cb Constrain angles to 45 degree multiples} check box in the {\cb Editing Setup} panel from the {\cb Edit Menu}, in conjunction with the {\kb Shift} and {\kb Ctrl} keys. There are three modes: call them ``no45'' for no constraint, ``reg45'' for constraint to multiples of 45 degrees with automatic generation of the segment from the end of the 45 section to the actual point, and ``simp45'' that does no automatic segment generation. The ``reg45'' algorithm adds a 45 degree segment plus possibly an additional Manhattan segment to connect to the given point. The ``simp45'' adds only the 45 degree segment. The mode employed at a given time is given by the table below.
The {\et Constrain45} boolean variable tracks the state (set or not set) of the check box. \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{\kb Constrain45 not set}\\ \hline & {\kb Shift} up & {\kb Shift} pressed\\ \hline {\kb Ctrl} up & no45 & reg45\\ \hline {\kb Ctrl} pressed & simp45 & simp45\\ \hline\hline \multicolumn{3}{|c|}{\kb Constrain45 set}\\ \hline & {\kb Shift} up & {\kb Shift} pressed\\ \hline {\kb Ctrl} up & reg45 & no45\\ \hline {\kb Ctrl} pressed & simp45 & no45\\ \hline \end{tabular} In physical mode, a new polygon is tested for reentrancy and other problems, and a warning message is issued if a pathology is detected. The new polygon is {\it not} removed from the database if such an error is detected. It is up to the user to make appropriate changes. In electrical mode, if the current layer is the SCED layer, the polygon will be created using the ETC2 layer, otherwise the polygon will be created on the current layer. It is best to avoid use of the SCED layer for other than active wires, for efficiency reasons, though it is not an error. The {\cb Change Layer} command in the {\cb Modify Menu} can be used to change the layer of existing objects to the SCED layer, if necessary. The outline style and fill will be those of the rendering layer. Polygons have no electrical significance, but can be used for illustrative purposes. \subsection{Polygon Vertex Editing} \index{polygon vertex editor} On entering the {\cb polyg} command, if a polygon is selected, a vertex editing mode is active on all selected polygons. Each vertex of the selected object is marked with a small highlighting box. Clicking on the edge of a selected polygon away from an existing vertex will create a new vertex, which can subsequently be moved. In order to operate on a vertex, it must be selected. A vertex can be selected by clicking on it, or by dragging over it. Any number of vertices can be selected. After the selection operation, selected vertices are shown marked with a larger box, and unselected vertices are not marked. Additional vertices can be selected, and existing selected vertices unselected, by holding the {\kb Shift} key while clicking or dragging over vertex locations. Selecting a vertex a second time will deselect it. Selected vertices can be deleted by pressing the {\kb Delete} key. This will succeed only if after vertex removal the object does not become degenerate. In particular, one can not delete the object in this manner. The selected vertices can be moved by dragging or clicking twice. The selected vertices will be translated according to the button-down location and the button up location, or the next button-down location if the pointer did not move. While the translation is in progress, the new borders are ghost-drawn. All vertex operations can be undone and redone through use of the {\cb Undo} and {\cb Redo} commands. With vertices selected, pressing the {\kb Esc} or {\kb Backspace} keys will deselect the vertices and return to the state with all vertices marked. While in the {\cb polyg} command, with no object in the process of being created, it is possible to change the selected state of polygon objects, thus displaying the vertices and allowing vertex editing. Pressing the {\kb Enter} key will cause the next button 1 operation to select (or deselect) polygon objects. This can be repeated arbitrarily. When one of these objects is selected, the vertices are marked, and vertex editing is possible. 
If the vertex editor is active, i.e., a selected polygon is shown with the vertices marked, clicking with the {\kb Ctrl} key pressed will start a new polygon, overriding the vertex editor. This can be used to start a new polygon at a marked vertex location, for example. Without {\kb Ctrl} pressed, the vertex editor would have precedence and would select the marked vertex instead of starting a new polygon.

While moving vertices, holding the {\kb Shift} key will enable or disable constraining the translation angle to multiples of 45 degrees. If the {\cb Constrain angles to 45 degree multiples} check box in the {\cb Editing Setup} panel from the {\cb Edit Menu} is checked, {\kb Shift} will disable the constraint, otherwise the constraint will be enabled. The {\kb Shift} key must be up when the button-down occurs which starts the translation operation, and can be pressed before the operation is completed to alter the constraint. These operations are similar to operations in the {\cb Stretch} command.

\subsection{Wire to Polygon Conversion}
\index{wires!convert to polygons}

If any non-zero width wires are selected before the {\cb polyg} command is entered, the user is given the option of changing the database representation of these objects to polygons. It is not possible to convert polygons to wires (except via the {\cb Undo} command if the polygon was originally a wire).

% -----------------------------------------------------------------------------
% xic:put 062908

\section{The {\cb put} Button: Extract From Yank Buffer}
\index{put button}
\index{unerase}
\index{yank/put}

\epsfbox{images/put.eps}

The {\cb put} command allows the contents of the yank buffers to be added to the current cell. This command is available in physical mode. When parts of objects are erased with the {\cb erase} command, the erased pieces are added to a five-deep yank buffer queue. When the {\cb put} button becomes active, the most recent deletion is ghost drawn and attached to the pointer. Clicking will place the contents of the buffer at the location of the pointer. The yank buffers can be cycled through with the arrow keys.

% -----------------------------------------------------------------------------
% xic:round 100916

\section{The {\cb round} Button: Create Disk Object}
\index{round button}
\index{object creation!disks and ellipses}

\epsfbox{images/round.eps}

The {\cb round} button, only available in physical mode, will create a disk polygon object. The number of sides can be altered with the {\cb sides} command.

If the user presses and holds the {\kb Shift} key after the center location is defined, and before the perimeter is defined by either lifting button 1 or pressing a second time, the current radius is held for x or y. The location of the shift press defines whether x is held (pointer closer to the center y) or y is held (pointer closer to the center x). This allows elliptical objects to be generated.

The {\kb Ctrl} key also provides useful constraints. Pressing and holding the {\kb Ctrl} key when defining the radius produces a radius defined by the pointer position projected onto the {\et x} or {\et y} axis (whichever is closer) defined from the center. Otherwise, off-axis snap points are allowed, which may lead to an unexpected radius on a fine grid.

When the command is expecting a mouse button press to define a radius, the value as defined by the mouse pointer (in microns) is printed in the lower left corner of the drawing window, or the X and Y values are printed if different.
Pressing {\kb Enter} will cause prompting for the value(s), in microns. If one number is given, a circular radius is accepted, however one can enter two numbers separated by space to set the X and Y radii separately.

While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel.

\index{SpotSize variable}

If the {\et SpotSize} variable is set to a positive value, or the {\vt MfgGrid} has been given a positive value in the technology file, tiny round and donut figures are constructed somewhat differently. Objects created with the {\cb round} and {\cb donut} buttons will be constructed so that all vertices are placed at the center of a spot, and a minimum number of vertices will be used. The {\cb sides} number is ignored. This applies only to figures with minimum radius 50 spots or smaller; the regular algorithm is used otherwise. An object with this preconditioning applied should translate exactly to the e-beam grid. See \ref{spotsize} for more information.

% -----------------------------------------------------------------------------
% xic:run 042411

\section{The {\cb run} Button: Run SPICE Analysis}
\index{run button}
\index{running SPICE}
\index{loading rawfiles}
\index{running SPICE!output to file}

\epsfbox{images/run.eps}

The {\cb run} button, available only in electrical mode, will establish interprocess communication with the {\WRspice} program. If a link can not be established, the {\cb run} command terminates with a message. If connection is established, then a SPICE run of the circuit is performed.

The user is first prompted for the {\WRspice} analysis command string to run. This should be in a format understandable to {\WRspice} as an interactive-mode command. During prompting, the last six unique analysis commands entered are available and can be cycled through with the up and down arrow keys. The first word in the analysis string is checked, and only words from the following list will be accepted:

\begin{tabular}{llll}
\vt ac & \vt loop & \vt run & \vt tran\\
\vt check & \vt noise & \vt send & \\
\vt dc & \vt op & \vt sens & \\
\vt disto & \vt pz & \vt tf & \\
\end{tabular}

The ``{\vt send}'' keyword is not a {\WRspice} command. If given, the circuit will be sent to {\WRspice} and sourced, but no analysis is run. Other commands can be sent to {\WRspice} with the {\cb spcmd} button.

The link is established to the SPICE server ({\vt wrspiced} daemon) named in the {\et SPICE\_HOST} environment variable, or the {\et SpiceHost} ``!set'' variable (which overrides the environment). If neither is set, {\Xic} will attempt to attach to {\WRspice} on the local machine.

By default, the {\WRspice} toolbar is visible when a connection has been established. This gives the user more control over {\WRspice} by providing access to the visual tools. If the {\et NoSpiceTools} variable is set (with the {\cb !set} command), the toolbar will not be visible.

During a simulation run, a small pop-up window appears, which contains a status message, and a {\cb Pause} button. Pressing {\cb Pause} will pause the analysis.
It can be resumed by pressing the {\cb run} button again. The analysis can also be paused by pressing {\kb Ctrl-c} in the controlling terminal (xterm) window. A {\kb Ctrl-c} press over a drawing window goes to {\Xic}, where it stops redraws and other functions as usual. {\Xic} is notified when a run is paused from {\WRspice} (using the red X button in the toolbar), and will change state accordingly. However, {\Xic} is {\it not} notified when a run is restarted from {\WRspice} (with the green triangle button in the toolbar), and will continue to assume that {\WRspice} is inactive. In this case, commands from {\Xic} that communicate with {\WRspice} will pause any analysis in progress before executing. The user will have to resume the analysis manually after the operation completes, either with the {\cb run} button or from the {\WRspice} toolbar. This affects the {\cb plot}, {\cb iplot}, and {\cb run} buttons, and the {\cb !spcmd} command. When a run is started or resumed with the {\cb run} button in {\Xic}, these features are locked out, producing a ``WRspice busy'' message, and the run in progress is not affected. The node connectivity is recomputed, if necessary, before the run. If the variable {\et CheckSolitary} is set with the {\cb !set} command, then warnings are issued if nodes with only one connection are encountered. A SPICE file is generated internally, and transmitted to {\WRspice} for evaluation. Only devices and subcircuits that are ``connected'' will be included in the SPICE file. A device or subcircuit is connected if one of the following is true: \begin{itemize} \item{There are two or more non-ground connections.} \item{There is one non-ground connection and one or more grounds.} \item{There is one non-ground connection and no opens.} \item{There is one non-ground connection and the object is a subcircuit.} \end{itemize} \index{.spinclude directive} \index{.splib directive} As a final step before sending the circuit text to SPICE, {\Xic} will recursively expand all {\vt .include} and {\vt .lib} lines, replacing the {\vt .include} lines with the actual file text, and the {\vt .lib} lines with the indicated text block from the library. This is to handle the case where {\WRspice} is located on a remote machine, and the user's files are on the local machine. As in {\WRspice}, {\vt .inc} is a synonym for {\vt .include}, and the `{\vt h}' option (strip `\$' comments for HSPICE compatibility) is recognized. The {\vt .include} and {\vt .lib} lines are generally inserted into the SPICE text using the {\vt spicetext} label mechanism. There may be occasions where the expansion of these lines by {\Xic} is undesirable, such as when the included file resides on the SPICE host, or one wishes to use the {\WRspice} {\vt sourcepath} variable to resolve the file. To this end, the user can use the {\vt .spinclude} keyword rather than {\vt .include}, and {\vt .splib} rather than {\vt .lib}. The {\vt .sp} directives use the same syntax as the normal keywords, however {\Xic} will not attempt to expand these directives, rather it changes the keyword to the normal directive (``{\vt .include}'' or ``{\vt .lib}''). Thus, {\WRspice} will see and handle these inclusions. {\WRspice} release 2.2.60 and later recognize {\vt .spinclude} as a synonym for {\vt .include}. This allows {\WRspice} to be able to directly source top-level cell files, where the SPICE listing may contain {\vt .spinclude} lines, without syntax errors. 
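As a hypothetical illustration (the file name here is a placeholder), a schematic label such as
\begin{quote}
\vt spicetext .spinclude ./models/process.inc
\end{quote}
would pass through {\Xic} without local expansion; the keyword is changed to ``{\vt .include}'' in the generated deck, so the file is resolved by {\WRspice} on the SPICE host rather than by {\Xic} on the local machine.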
{\WRspice} releases 2.2.62-2 and later recognize {\vt .splib} as a synonym for {\vt .lib}, and are able to handle {\vt .lib} constructs sent from {\Xic}.

Sometimes it may be desirable to place the output of a SPICE run initiated from {\Xic} into a rawfile, rather than saving the output internally. To do this, use the {\vt spicetext} labels to add an analysis string, such as ``{\vt spicetext .tran 1p 1000p}'' (note that the `.' ahead of ``tran'' is necessary). One can also add a save command using ``{\vt spicetext *\#save v(1) ...}'' to save only a subset of the circuit variables. The ``{\vt *\#}'' means that the save is executed as a shell command; ``{\vt .save}'' is ignored since {\WRspice} is in interactive mode. Then, for the analysis string from {\Xic}, use ``{\vt run} {\it filename\/}'', where {\it filename\/} is the name for the rawfile. The run will be performed, but the output data will go to the file, so don't expect to see it with the {\cb plot} command. If the {\it filename\/} is omitted, the run will be performed with internal storage as usual.

The {\cb !spcmd} command can be used to give arbitrary commands to {\WRspice}.

% -----------------------------------------------------------------------------
% xic:shapes 111908

\section{The {\cb shapes} Button: Add Predefined Features}
\index{shapes button}
\index{shapes templates}
\index{object creation!shapes templates}

\epsfbox{images/shapes.eps}

The {\cb shapes} button appears in the electrical mode side menu. Pressing this button provides a pull-down menu of different outlines that can be applied to drawings. These outlines have no electrical significance, but can be used for illustrative purposes. In particular, in symbolic mode, this facilitates creating symbol representations. After a selection is made from the pull-down menu, the shape outline is ghost-drawn and attached to the pointer. The object is placed at locations where the user clicks.

\index{shape templates!box}
\index{shape templates!poly}
\index{shape templates!arc}
\index{shape templates!dot}
\index{shape templates!tri}
\index{shape templates!ttri}
\index{shape templates!and}
\index{shape templates!or}

The current choices in the pull-down menu are:

\begin{tabular}{|l|p{4.5in}|}
\hline
{\cb box} & Create a box, like the physical mode {\cb box} command.\\ \hline
{\cb poly} & Create a polygon, like the physical mode {\cb polyg} command.\\ \hline
{\cb arc} & Create an arc, similar to the physical mode {\cb arc} command.\\ \hline
{\cb dot} & Place a dot (an octagonal polygon).\\ \hline
{\cb tri} & Place a triangle (buffer symbol).\\ \hline
{\cb ttri} & Place a truncated triangle symbol.\\ \hline
{\cb and} & Place an AND gate symbol.\\ \hline
{\cb or} & Place an OR gate symbol.\\ \hline
{\cb Sides} & Set the number of sides used to approximate rounded geometry, similar to the {\cb sides} command in physical mode.\\ \hline
\end{tabular}

None of these shapes have significance electrically, and for efficiency it is best to avoid using the SCED layer for these objects. In particular, arcs are actually wires, and arc vertices on the SCED layer are considered in the connectivity establishment. If the current layer is SCED when one of these objects is created, the object is instead created on the ETC2 layer. If the object must be on the SCED layer, the {\cb Change Layer} command in the {\cb Modify Menu} can be used to move it to that layer.

The {\et dot}, {\et tri}, {\et ttri}, {\et and}, and {\et or} choices work a little differently from the others.
After selection, a ghost rendering of the shape is attached to the pointer, and the objects are placed where the user clicks. The object can be modified with the arrow keys:

\begin{tabular}{ll}
\kb Up & expand by 2\\
\kb Right & expand by 10\%\\
\kb Down & shrink by 2\\
\kb Left & shrink by 10\%\\
\kb Shift-Up & stretch vertically 10\%\\
\kb Shift-Right & stretch horizontally 10\%\\
\kb Shift-Down & shrink vertically 10\%\\
\kb Shift-Left & shrink horizontally 10\%\\
\kb Ctrl-Arrows & cycle through 90 degree rotations\\
\end{tabular}

% -----------------------------------------------------------------------------
% xic:sides 021615

\section{The {\cb sides} Button: Set Rounded Granularity}
\index{sides button}
\index{round figure sides}

\epsfbox{images/sides.eps}

The {\cb sides} button, available in physical mode, allows the user to set the number of sides used to approximate rounded geometries. Larger numbers give better resolution, but decrease efficiency. The number provided is the sides for a full 360 degrees; arcs will use proportionally fewer.

The setting tracks the {\et RoundFlashSides} variable. If the variable is not set, 32 sides will be used. The acceptable range is 8--256. The setting applies when new round objects are created with the {\cb round}, {\cb donut}, and {\cb arc} buttons in the physical side menu, or the equivalent script functions.

In electrical mode, the number of sides used has a separate setting using the {\et ElecRoundFlashSides} variable, which can be set from the {\cb sides} entry in the menu presented by the {\cb shapes} button in the electrical side menu.

% -----------------------------------------------------------------------------
% xic:spcmd 110213

\section{The {\cb spcmd} Button: Execute {\WRspice} Command}
\index{spcmd button}
\index{SPICE command}
\index{WRspice command}

\epsfbox{images/spcmd.eps}

Pressing this button will prompt the user, in the prompt area, for a command that will be sent to {\WRspice} for execution. If the user simply presses {\kb Enter} without entering a command, or enters the command ``{\vt setup}'', the {\cb WRspice Interface Control Panel} will appear, from which the interface to {\WRspice} can be set up. This panel is described in the next section.

Otherwise, a stream to {\WRspice} will be established, if one is not already active, providing a means for running arbitrary {\WRspice} commands. However, commands that cause {\WRspice} to prompt the user for additional input (such as {\vt setplot}) will not work properly, as the communication is one-way only and not interactive. Text output goes to the console window.

In addition to the {\WRspice} commands, the client-side directive
\begin{quote}
{\vt send} {\it filename}
\end{quote}
is available. The {\it filename} is that of a local SPICE input file. The file will have {\vt .include} and {\vt .lib} lines expanded locally, and {\vt .spinclude}, {\vt .splib} lines will be converted to ``{\vt .include}'', ``{\vt .lib}'', as is done for decks created within {\Xic}. The result will be sent to {\WRspice} and sourced. This operation is basically identical to the {\cb !spcmd} command.

% -----------------------------------------------------------------------------
% xic:spif 110213

\subsection{The {\WRspice} Interface Control Panel}
\index{WRspice interface control panel}

This panel appears when the user presses the {\cb spcmd} button in the electrical side menu, and either gives no command at the prompt, or enters ``{\vt setup}''.
It provides entry areas for setting the variables which control the interprocess communication channel to the {\WRspice} circuit simulator, and other simulation settings. Most users will probably never need to use this panel or set the associated variables, as the defaults suffice in most installations.

The {\cb WRspice Interface Control Panel} contains the following entry objects.

\begin{description}
\item{\cb List all devices and subcircuits}\\
This check box corresponds to the {\et SpiceListAll} variable. When checked, all devices and subcircuits in the schematic will be included in SPICE output. Otherwise, only devices and subcircuits that are ``connected'' will be included, as explained in the {\cb deck} and {\cb run} command descriptions.

\item{\cb Check and report solitary connections}\\
This check box corresponds to the {\et CheckSolitary} variable. If checked, warning messages will be issued when electrical netlists are generated for nodes having only one connection. This affects the {\cb run} and {\cb deck} commands, and the {\cb Dump Elec Netlist} command in the {\cb Extract Menu}.

\item{\cb Don't show WRspice Tool Control panel}\\
This check box corresponds to the {\et NoSpiceTools} variable. When running {\WRspice} from {\Xic}, by default the {\WRspice} toolbar is shown, if {\WRspice} is running on the local machine. This gives the user much greater flexibility and control over {\WRspice}. If this check box is checked, {\it before} the connection to {\WRspice} is established, the toolbar will not be visible. This check box will also control toolbar visibility if the {\vt wrspiced} daemon is used. However, this requires {\vt wrspiced} distributed with wrspice-3.0.7 or later. {\bf If this variable is set with an earlier {\vt wrspiced} release, the {\WRspice} connection will not work!}

\item{\cb Spice device prefix aliases}\\
This group consists of a check box and a text entry area. When the box is checked, the text in the entry area will be used to set the {\et SpiceAlias} variable. This can be set to a string which will modify the printing of device names in SPICE output. The aliasing operates on the first token of device lines. The format of the string is
\begin{quote}
{\it prefix1\/}{\vt =}{\it newprefix1} {\it prefix2\/}{\vt =}{\it newprefix2} ...
\end{quote}
This will cause lines beginning with {\it prefix} to have {\it prefix} replaced with {\it newprefix}. If the ``{\vt =}{\it newprefix\/}'' is omitted, that line will not be printed. For example, to map all devices that begin with `B' to `J', and to suppress all `G' devices, the string is
\begin{quote}
\vt B=J G
\end{quote}
Note that there can be no space around the `{\vt =}'. With the text entered and the box checked, the indicated mappings will be performed as SPICE text is produced.

\item{\cb Remote WRspice server host name}\\
This group consists of a check box and a text entry area. When the box is checked, the {\et SpiceHost} variable is set to the text in the text area. The text should be the name of the host which maintains a server for remote {\WRspice} runs. If set, this will override the value of the {\et SPICE\_HOST} environment variable.

The host name specified in the {\et SPICE\_HOST} environment variable and the {\et SpiceHost} {\cb !set} variable can have a suffix ``{\vt :}{\it portnum\/}'', i.e., a colon followed by a port number. The port number is the port used by the {\vt wrspiced} program on the specified server, which defaults to 6114, the IANA registered port for this service.
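For example (the host name and port number here are placeholders, shown only to illustrate the suffix syntax), a remote server listening on a non-default port could be specified with
\begin{quote}
\vt !set SpiceHost simhost.example.com:6120
\end{quote}
or by entering the same string in this text area with the check box checked.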
If the server uses a non-standard port, and the {\vt wrspice/tcp} service has not been registered (usually in the {\vt /etc/services} file) on this port, the port number must be provided.
\item{\cb Remote WRspice server host display name}\\
This group consists of a check box and a text entry area. When the box is checked, the {\et SpiceHostDisplay} variable is set to the text in the text area. This text can be set to the X display string to use on a remote host for running {\WRspice} through a {\vt wrspiced} daemon, from {\Xic} in electrical mode. This is intended to facilitate use of {\vt ssh} X forwarding to take care of setting up permission for the remote host to draw on the local display. See the description of the {\et SpiceHostDisplay} variable for complete details.
\item{\cb Path to local WRspice executable}\\
This group consists of a check box and a text entry area. When the box is checked, the {\et SpiceProg} variable is set to the text in the text area. The text is the full path name of the {\WRspice} executable. This is useful if there are multiple versions of {\WRspice} available, or the binary has been renamed, or is not located in the standard location. If given, the value supersedes the values from environment variables or other variables (and corresponding entries) which also set a path to the SPICE executable.
\item{\cb Path to local directory containing WRspice executable}\\
This group consists of a check box and a text entry area. When the box is checked, the {\et SpiceExecDir} variable is set to the text in the text area. The text is a path to the directory to search for the {\WRspice} executable. If given, the value overrides the {\et SPICE\_EXEC\_DIR} environment variable. The default search location is ``{\vt /usr/local/xictools/bin}'', or, if the {\et XT\_PREFIX} environment variable has been set, its value will replace ``{\vt /usr/local}''.
\item{\cb Assumed WRspice program executable name}\\
This group consists of a check box and a text entry area. When the box is checked, the {\et SpiceExecName} variable is set to the text in the text area. The text will give the expected name of the {\WRspice} binary. If given, the value overrides the {\et SPICE\_EXEC\_NAME} environment variable. The default name is ``{\vt wrspice}''.
\item{\cb Assumed WRspice subcircuit concatenation character}\\
This group consists of a check box and a text entry area. When the box is checked, the {\et SpiceSubcCatchar} variable is set to the text in the text area. See the description of the variable for information about this setting.
\item{\cb Assumed WRspice subcircuit expansion mode}\\
This group consists of a check box and a menu. When the box is checked, the {\et SpiceSubcCatmode} variable is set to the current menu selection. See the description of the variable for information about this setting.
\end{description}
% -----------------------------------------------------------------------------
% xic:spin 012815
\section{The {\cb spin} Button: Rotate Objects}
\index{spin button}
\index{object rotation}
\epsfbox{images/spin.eps}
The {\cb spin} button, available in physical mode, allows rotation of boxes, polygons, and wires by an arbitrary angle, and subcells and labels by multiples of 45 degrees. If no objects are selected, the user is requested to select an object. With the object selected, the user is asked to click on the origin of rotation. The selected objects are ghost-drawn, and rotated about the reference point as the pointer moves.
If the {\cb Constrain angles to 45 degree multiples} check box in the {\cb Editing Setup} panel from the {\cb Edit Menu} is checked, the angle will be constrained to multiples of 45 degrees. Pressing the {\kb Shift} key will remove the constraint. If the check box is not checked, holding the {\kb Shift} key will impose the constraint. Thus the {\kb Shift} key inverts the effect of the check box. However, if the selected objects include a subcell or label, the angle will always be constrained to multiples of 45 degrees. The {\et Constrain45} variable tracks the state (set or unset) of the check box. During rotation, the angle is displayed in the lower left corner of the drawing window. The readout defaults to degrees, pressing the `{\vt r}' key will switch to radians, and pressing the `{\vt d}' key will switch back to degrees. Pressing the spacebar will toggle between radians and degrees. At this point, one can click to define the rotation angle, or an absolute angle can be entered on the prompt line. To enter an angle, either press {\kb Enter} or click on the origin marker, then respond to the prompt with an angle in degrees. In either case, the rotated boundaries of the selected objects are attached to the pointer, and new objects can be placed by clicking. Ordinarily, the original objects will be deleted, however if the {\kb Shift} key is held while clicking, the original objects are retained. Instead of clicking, one can press the {\kb Enter} key, which will simply rotate the selected objects around the reference point. When the {\cb spin} command is at the state where objects are selected, and the next button press would establish the rotation origin, if either of the {\kb Backspace} or {\kb Delete} keys is pressed, the command will revert the state back to selecting objects. Then, other objects can be selected or selected objects deselected, and the command is ready to go again. This can be repeated, to build up the set of selections needed. At any time, pressing the {\cb Deselect} button to the left of the coordinate readout will revert the command state to the level where objects may be selected to rotate. The undo and redo operations (the {\kb Tab} and {\kb Shift-Tab} keypreses and {\cb Undo}/{\cb Redo} in the {\cb Modify Menu}) will cycle the command state forward and backward when the command is active. Thus, the last command operation, such as setting the angle by clicking, can be undone and restarted, or redone if necessary. If all command operations are undone, additional undo operations will undo previous commands, as when the undo operation is performed outside of a command. The redo operation will reverse the effect, however when any new modifying operation is started, the redo list is cleared. Thus, for example, if one undoes a box creation, then starts a rotation operation, the ``redo'' capability of the box creation will be lost. It is possible to change the layer of rotated objects. During the time that newly-rotated objects are ghost drawn and attached to the mouse pointer, if the current layer is changed, the objects that are attached can be placed on the new layer. Subcells are not affected. How this is applied depends on the setting of the {\et LayerChangeMode} variable, or equivalently the settings of the {\cb Layer Change Mode} pop-up from the {\cb Set Layer Chg Mode} button in the {\cb Modify Menu}. 
The three possible modes are to ignore the layer change, to map objects on the old current layer to the new current layer, or to place all objects on the new current layer. If the current layer is set back to the previous layer before clicking to locate the new objects, no layers will change. Note that this operation can change boxes to polygons and vice-versa. The rotation can be performed by clicking or dragging, however an angle can only be entered textually if the clicking mode is used. % ----------------------------------------------------------------------------- % xic:style 100412 \section{The {\cb style} Button: Set/Change Wire Style} \index{style button} \index{wire width setting} \index{wire end style setting} \epsfbox{images/style.eps} The {\cb style} button, available in physical mode, pops up a menu of options for the presentation style of wires. The {\cb Wire Width} choice sets the default width if no wires are selected, or changes the width of selected wires. If there are wires selected, {\Xic} prompts for a new wire width for the selected wires, and the selected wires will have their widths altered. The new width should not be less than the minimum width ({\et MinWidth} design rule) for the layers. If there are no applicable wires selected, the default wire width for the current layer is set, which is constrained to be greater or equal to the minimum width. Wires subsequently created on the present layer will have the new width. The other choices set the default end style if no applicable wire is selected, or changes selected wires to the chosen end style if wires are selected. All selections depend on layer-specific mode. In layer-specific mode, only selected wires on the current layer are changed. Otherwise, all selected wires are changed. The possible end styles are flush ends, extended rounded ends, and extended square ends. The extended styles project the length of the wire by half of the width beyond the terminating vertex. The button icon changes to indicate the present wire end style with a small dot. % ----------------------------------------------------------------------------- % xic:subct 110713 \section{The {\cb subct} Button: Set Subcircuit Connections} \label{subct} \index{subct button} \index{subcircuit terminals} \epsfbox{images/subct.eps} The {\cb subct} button, available in the electrical side menu, allows electrical connection terminals to be added to a circuit. The terminals are points at which electrical connections are defined, as in the SPICE subcircuit definition. Terminal definition is necessary if the circuit is to be used as a subcircuit in another circuit with connections to the instance (it is possible for a subcircuit to connect to global nets only (see \ref{nodmp}), in which case the master and instances would have no terminals). The terminals are also used by the extraction system and can provide an initial association of a particular schematic net and physical conductor group. Terminals can only be created in electrical mode. Once created, a terminal's flags may be edited so as to enable a corresponding terminal location in the physical layout. The extraction system will most often find suitable physical terminal locations automatically, however there are times when the user may need to place terminals manually, which can be done with the {\cb Edit Terminals} button in the {\cb Views and Operations} page of the {\cb Extraction Setup} panel from the {\cb Setup} button in the {\cb Extract Menu}, while in physical mode. 
In electrical mode, this same button is equivalent to the {\cb subct} button in the side menu. Subsequent to creation with the present command, terminals can be made visible with the {\cb terms} button in the electrical side menu. While in physical mode, the terminals will be visible in electrical windows when either the {\cb All Terminals} or {\cb Cell Terminals Only} check box in the {\cb Show} group in the {\cb Views and Operations} page of the {\cb Extraction Setup} panel is checked.
The terminals must be defined in the schematic representation of the cell, whether or not the cell will ultimately be symbolic (see \ref{symbolic}). The terminals can be created and deleted only in the schematic. Once created, they will be visible in the symbol view, but must be moved to the desired location by hand. In the symbol view (only) each terminal can have arbitrarily many copies of itself at different locations, each one of which is an equivalent connection point for the subcircuit. This facilitates, for example, tiling. If an equivalent connection point appears on either side of the instance, then placing a row of these instances side-by-side will automatically connect this node to all of the instances. This applies only to the symbolic representation. In the schematic, each cell terminal has a single connection point.
In {\Xic}, there are two types of cell contact terminals.
\begin{description}
\item{Scalar terminals}\\
These are the ``normal'', single-contact terminals. These terminals actually convey the connectivity information between the parent and subcell schematics, and are the only terminals that may have corresponding terminals in the physical layout. A scalar terminal is associated with a {\et node} property of a cell or cell instance.
\item{Multi-contact ``bus'' terminals}\\
These terminals reference the scalar terminals and provide a means for connecting a number of these terminals to a multi-conductor net in the schematic. The use of multi-conductor nets and multi-contact terminals can greatly simplify a schematic visually. Be advised that a multi-conductor terminal only references scalar terminals, which must already exist. These terminals are associated with a {\et bnode} property of a cell or cell instance.
\end{description}
In the schematic, by default ordinary scalar terminals can only be located at connection points of the underlying geometry. These are the vertices of electrically-active wires, and device or subcell connection points. Clicking on such a point, if no terminal already exists at the point, will create a new scalar terminal at the location. The {\cb Terminal Edit} panel will appear, which can be used to apply a name for the terminal and edit other terminal properties. The new terminal will be shown highlighted to indicate that it is the target of the {\cb Terminal Edit} panel.
\subsection{Virtual Terminals}
\index{virtual terminals}
If one holds the {\kb Ctrl} key while clicking anywhere except over another terminal, a scalar terminal will be placed, whether or not it is over a circuit connection point. This is useful if the {\et BYNAME} flag is to be set for the terminal, which indicates that it will not connect by location, but by name matching only. It is also useful for implementing ``virtual'' terminals which connect to nothing, but satisfy connectivity references in layout vs. schematic testing, and for other purposes. Suppose one has a subcell with physical layout only that one wishes to include in a full design hierarchy.
It may not be convenient to create a schematic for the subcell, but it is desired that the connections to the subcell be included in the LVS checking of the overall design. It is possible to assign ``virtual terminals'' to the subcell. Virtual terminals are treated like ordinary terminals in connecting to instances of the subcell, but are ignored when creating netlists for the subcell itself.
A virtual terminal is created in the {\cb subct} command by holding the {\kb Ctrl} key while clicking on locations in the electrical schematic (even if the schematic is empty). They can be placed anywhere except on top of another terminal; location is not important. Once created, they can be moved or deleted like ordinary terminals. Once placed, they will be considered in establishing the connectivity to instances of the cell, but will be ignored when establishing connections within the cell. Thus the cell looks like a ``black box'' with terminals. Virtual terminals can be used along with ordinary terminals if only part of the internal circuit is to be visible from the outside.
In SPICE netlists, virtual terminals will appear in the subcircuit connection list in {\vt .subckt} and call lines, but will not be connected in the {\vt .subckt} definition. One can use a {\vt spicetext} label to add a {\vt .include} line to bring in a circuit definition from a file, for example, to satisfy the references.
In the graphical display, virtual terminals of the current cell are shown with a beer-barrel outline for differentiation from the standard terminals, which are square. The cell bounding box is expanded to contain all virtual terminal locations. The center of a virtual terminal is a ``hot spot'' for hypertext node references, i.e., clicking on the terminal center will add the associated node to the prompt line edit string in the {\cb plot} and {\cb iplot} commands and when creating labels or properties.
\subsection{Multi-Contact Connectors}
\index{bus connectors}
\index{multi-contact connectors}
If the {\kb Shift} key is held while clicking in the schematic, a new multi-contact terminal will be created. A different version of the {\cb Terminal Edit} panel will appear, allowing the new terminal to be configured. Multi-contact terminals reference scalar terminals, and every referenced scalar terminal should exist. The pop-up provides convenience functions for creating the ``bit'' terminals. In some cases, these will be made invisible and not shown in either the schematic or symbol, yet they must exist as they provide a crucial data structure required for actual connectivity.
Named and unnamed multi-conductor terminals identify their constituent bits quite differently. If a terminal is named, the name is a net expression (see \ref{netex}) that unambiguously specifies the names of the scalar terminals. These terminals are referenced by name, so ordering is unimportant. If a multi-conductor terminal is unnamed, it will at least have a default range of {\vt [0:0]}. The terminal also has an index number that defaults to 0. The bits are the scalar terminals with indices starting with the multi-conductor terminal index value, through the width of the multi-conductor range, contiguously and increasing. In this case, terminal ordering is obviously quite important.
See the {\cb Terminal Edit} panel description in \ref{termedit} for a complete discussion of the configuration options for multi-contact terminals (and scalar terminals, too).
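As a purely hypothetical illustration (the net expression syntax itself is covered in \ref{netex}), a named multi-contact terminal such as
\begin{quote}
{\vt data[0:3]}
\end{quote}
would reference the four scalar terminals {\vt data[0]} through {\vt data[3]}, each of which must exist as a scalar terminal of the cell. Because the bits are resolved by name in this case, the index values of the referenced scalar terminals are not significant; an unnamed terminal, in contrast, relies entirely on the index ordering described above.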
\subsection{Terminal Ordering}
\index{terminal order}
By default, a newly-created scalar terminal will be given the largest index number, meaning that it will be the last terminal listed when the subcircuit is represented in SPICE or other netlisting output. However, it is possible to insert new terminals at any point in the sequence. If the user types a number while the command is active, the number will appear in the keypress buffer area for the drawing window that has the keyboard focus. If this number is within the range of existing terminal indices, then new terminals created from this window will be given this index, and existing terminals with this index or larger will have their indices incremented.
Suppose for example that the cell contains five terminals, and one needs to add a sixth, and further the new terminal should be the fourth terminal in the sequence (index number 3). While in the {\cb subct} command, one can type ``3'' and note that ``3'' appears in the keypress buffer area. One can now click on a circuit location to create the new terminal, and note that the new terminal is given index 3, the previous 3 is now 4, etc. The backspace key can be used to clear the keypress buffer, or the next new terminal added will also be inserted as number 3. Note that one can type ``0'' and leave this in place, so that all new terminals will be added to the front of the list rather than the back. The indexing and order can also be changed with the {\cb Terminal Edit} panel.
For multi-contact terminals, the index parameter provides ordering information. The terminal order assumed by {\Xic} is that a multi-contact terminal is ordered by its index, ahead of a scalar terminal with the same index. If the multi-contact terminal is named, then the index number is arbitrary, however by convention {\Xic} will set the index to the index of the first (leftmost) bit. If the terminal is unnamed, the index is also the index of the first bit, and in fact this identifies the first bit.
\subsection{Terminal Naming and Editing}
\index{terminal naming}
If no name is given to a scalar terminal, {\Xic} will use a default name, which is an underscore followed by the internal index (the number shown in the marker). Otherwise, a short descriptive name can be entered. The name must follow the rules for a scalar net expression (see \ref{netex}), that is, it must be a simple text name, with or without a single index subscript. A non-default name will be displayed next to the terminal marker (the default name is assumed if the entry is an underscore followed by one or two digits).
Clicking on an existing terminal will select it, and begin a move operation. A box will be ghost-drawn and attached to the mouse pointer. If the terminal is scalar, it can be moved to a new location by clicking on a connection point not occupied by another terminal. It can be moved to a non-contact point by holding {\kb Ctrl} while clicking, and the terminal becomes ``virtual''. Multi-contact terminals can be moved to any location not already occupied by a terminal. While a terminal is selected, pressing the {\kb Delete} key will delete the terminal. Pressing {\kb Backspace} or {\kb Esc} will deselect the terminal, aborting the move operation.
If an existing terminal is clicked on with the {\kb Shift} key held down, or double-clicked on (including being ``moved'' to the same location), the {\cb Terminal Edit} panel will appear, allowing the user to edit the parameters for the terminal.
From the {\cb Terminal Edit} panel, it is possible to make the terminal invisible. This can be applied to terminals that do not participate in the visual connections, so clutter the display needlessly. The {\kb PageUp} and {\kb PageDown} toggle the display of (otherwise) invisible terminals while the {\cb subct} command is active. Invisible terminals can also be selected for editing with the {\cb Next} and {\cb Prev} buttons in the panel, which cycle through the terminals to edit by the index value. In symbolic mode, terminals can not be added or deleted, however they can be moved to new locations consistent with the symbolic representation. Terminals can be moved by dragging, or by clicking on a terminal then clicking on the new location. Terminals can be placed anywhere in the symbolic representation. Further, if the {\kb Shift} key is held during the terminal placement, the original terminal mark is retained, i.e., a copy is made. Any number of copies can be placed. Copies can be deleted by clicking to select, then pressing the {\kb Delete} key. The last remaining instance of a terminal can not be deleted in this way, one must go to the schematic to delete the terminal. % ----------------------------------------------------------------------------- % xic:edtrm 110713 \section{The {\cb Terminal Edit} Pop-Up: Editing Terminals} \label{termedit} \index{Terminal Edit panel} The {\cb Terminal Edit} pop-up appears when using the {\cb subct} button in the electrical side menu. It also appears while in physical mode and using the {\cb Edit Terminals} button from the {\cb Setup} page of the {\cb Extraction Setup} panel, which is brought up with the {\cb Setup} button in the {\cb Extract Menu}. In either case, it provides a means for editing various properties of a terminal, including its name. When the panel is visible, one of the terminals in the display is highlighted, and the controls in the panel represent the state for the highlighted terminal. This is the ``target terminal'' which will be modified by the panel. The panel configures itself for either scalar or multi-contact terminals in electrical mode, depending on the target terminal. In physical mode, only scalar terminals exist and not all parameters are editable, and the panel configures itself accordingly. The panel will appear quite different in these three cases. The target terminal can be changed by {\kb Shift}-clicking or double-clicking over a different terminal. It can also be changed with the {\cb Prev}, {\cb Next}, and {\cb To Index} buttons found in the panel. Every scalar terminal has a unique index number. This is the number that is shown in the box which represents the terminal in the schematic. This represents the order of the terminals in calls to instances of the current cell. Bus terminals have an index number as well, which must be one of the scalar terminal indices. The ordering of the multi-contact terminal is at the index, but {\it before} the scalar terminal with the same index. The {\cb Prev} button will cycle the target terminal to the one with index value one less than the current index, wrapping at zero. The {\cb Next} button will cycle the target terminal in the opposite direction. The {\cb To Index} button and numeric entry area can be used to change the target terminal to one with the specified index, of the same type (scalar or multi-contact terminal) as the present terminal. No actual change is made unless or until the {\cb Apply} button is pressed. 
Pressing {\cb Apply} will update the target terminal according to the entries in the panel. Changes made can be undone and redone with the standard {\Xic} undo/redo operations. Pressing the {\cb Dismiss} button will retire the panel.
\subsection{Electrical Scalar Terminal Editing}
\index{editing terminals}
At the top of the panel is a {\cb Terminal Index} numeric entry area. This can be used to change the terminal's index number, and therefore its order in subcircuit references. The renumbering is a two-step process:
\begin{enumerate}
\item{The present terminal is removed, and the remaining terminals are renumbered, using unique and contiguous new index values (zero based).}
\item{The terminal is reinserted at the given index. The terminal that had that index and those larger will have their index values incremented.}
\end{enumerate}
Changing the index of a scalar terminal does {\it not} update the multi-contact terminals! The index values used in the bus terminals may require compensating changes.
Just below is the {\cb Terminal Name} text entry area. This will contain the name of the terminal, which can be edited by the user. The entry can be empty, in which case {\Xic} will generate a name.
The {\cb Has physical terminal} check box should be checked if the terminal will have a corresponding contact point in the physical layout. Setting this check box will allocate the internal data structure needed to maintain the association. In most cases, this will be required. It is not required if, for example, the user at this point is only concerned with a schematic for simulations. The terminal can be edited and this box checked at a later time, when the user is ready to add a layout. The box is never checked for terminals used in the schematic for special purposes that are perhaps related to simulation and that have no ``real'' implementation in the layout.
When the {\cb Has physical terminal} check box is checked, the {\cb Physical} group is un-grayed. There are two controls in this group.
\begin{description}
\item{\cb Layer Binding}\\
The {\cb Layer Binding} menu provides a layer name that is a hint used by the extraction system when placing the physical terminal in the layout. This is set by {\Xic} after extraction, and if correct should not be changed. It is set by the user when a terminal is manually placed, to resolve ambiguity about which layer the terminal connects to.
\index{FIXED terminal flag}
\item{\cb Location locked by user placement}\\
When a terminal is manually placed, the {\cb Location locked by user placement} check box will become checked. This indicates that the {\et FIXED} flag is set in the terminal. Terminals with this flag set will never be moved by {\Xic} during extraction/association.
\end{description}
The location and layer must be correct or association will fail. Although {\Xic} will automatically place terminals, at times this will fail and the user will have to place some terminals manually to obtain correct or complete association.
Below the {\cb Physical} group are check boxes for setting some binary options.
\begin{description}
\index{BYNAME terminal flag}
\item{\cb Set contact by name only}\\
This check box, when checked, sets the {\et BYNAME} flag in the terminal, which changes its interpretation in the schematic (it has no effect in physical mode). Ordinarily, a terminal is placed on a ``connection point'' of a wire net in the schematic (i.e., a vertex), or a device or subcircuit contact point. Association of the terminal to that wire net is by location.
If there is no underlying connection point, and the terminal has an assigned name, {\Xic} will then attempt to add the terminal to an existing net with a matching name. If this flag is set, then the initial attempt to connect the terminal by location will be skipped. This is useful if the terminal is to be made invisible, to avoid accidental connections. The scalar wire nets can be named with the {\cb Node (Net) Name Mapping} panel from the side menu (see \ref{nodmp}). \index{SCINVIS terminal flag} \item{\cb Set terminal invisible in schematic}\\ This check box, when checked, sets the {\et SCINVIS} flag in the terminal which prevents the terminal from being displayed in schematics. This is for terminals that are used only as bit connections for a multi-contact connector. Recall that every bit in a multi-contact connector is a scalar connector, that must exist if a connection is to be established. If connectivity is to be provided only via the multi-contact connector, the individual bits are visually superfluous and clutter the display. However, they can be made invisible in the schematic with this flag. They should probably also have the {\et BYNAME} flag set as well, so that they don't make an unintended connection by location. The setting has no effect in physical mode. \index{SYINVIS terminal flag} \item{\cb Set terminal invisible in symbol}\\ This check box controls the analogous {\et SYINVIS} flag, which when set causes the terminal to be invisible in the symbolic representation, if any. This flag will almost always track the state of the {\et SCINVIS} flag, but this is not an absolute requirement. It is possible for a schematic to use individual bits for connections, whereas the symbol uses a multi-contact terminal, or vice-versa. \end{description} \subsection{Physical Terminal Editing} In physical mode, the panel allows changes only within the {\cb Physical} group described above. That is, the {\cb Layer Binding} choice and the {\cb Location locked by user placement} check box are the only editable entries. These have the purpose and functionality as described above. One must return to electrical mode to change other parameters. \subsection{Multi-Contact Connector Editing} When the target terminal is a multi-contact connector, the panel reconfigures itself to provide the appropriate entry areas. At the top of the panel is a numeric {\cb Term Index} entry area. Just below this are two text entry areas with labels {\cb Terminal Name} and {\cb Net Expression}. A ``bundle'' terminal may have a separate simple text name, as well as its net expression. If given, the simple text name will be used as a name for the terminal in instance placements of the cell. The terminal in the instance will look like a pure vector terminal with the given name, and a range starting with zero and extending to the width of the bundle minus one. If the terminal does not represent a bundle, then internally there is only one name, which is the net expression. This is obtained from the two entry areas, which should not conflict or an error will result. Probably the best approach is to use the {\cb Net Expression} entry for the complete expression, and leave the {\cb Terminal Name} entry blank. Alternatively, one could put a text name in the name entry, and the subscripting, without a name or with the same name, in the expression entry. It is legitimate to not provide a name, but to provide subscripting only. 
In this case:
\begin{enumerate}
\item{The subscripting is ignored, except to determine the implied width (number of conductors).}
\item{The connector maps the scalar terminal with index value equal to the {\cb Term Index} entry and terminals with successive indices, the total number of which will be equal to the connector width. Thus, scalar terminal order and the {\cb Term Index} value are critical in this case. It is up to the user to maintain consistency while editing, as indices may change. Probably, though, there is no reason to use this approach rather than simply supplying a terminal name.}
\end{enumerate}
If the terminal has a name, or has a bundle net expression, then the name of every scalar terminal ``bit'' is well defined. These are found by name, so there is no order requirement, only an existence requirement. Furthermore, the {\cb Term Index} entry has much less significance. It is only used to assign an order for the terminal relative to other terminals. Specifically, the terminal order is just ahead of the scalar terminal with the same index (multi-conductor terminal index values are required to be unique). {\Xic} will initially assign the index as the index of the first scalar terminal referenced. This can be changed if necessary.
Below the three entry areas is a {\cb Delete} button, which will delete the terminal if pressed. This, and all other operations, can be undone/redone with the standard {\Xic} {\kb Tab}/{\kb Shift-Tab} keys and equivalent operations in the {\cb Modify Menu}.
There are two check boxes for terminal visibility in the schematic and symbol, as we saw for scalar terminals. It is unlikely that the user would go to the trouble of implementing a multi-contact terminal and not have it visible, but it is possible.
The {\cb Bus Term Bits} group provides some specialized functions for working with the scalar terminals referenced. These can be applied only if the terminal has a name or is a bundle terminal.
\begin{description}
\item{\cb Check/Create Bits}\\
This will create, at the end of the scalar terminal list, any scalar terminal referenced by the present terminal and not found. Newly created scalar terminals will have {\et BYNAME}, {\et SCINVIS}, and {\et SYINVIS} set, meaning that the terminals will be invisible and make contact by name only. The new terminals are placed at the same location as the present terminal. As they are invisible and they do not connect by location, there is no problem with this.
In one way or another, the scalar terminals referenced by a multi-conductor terminal must exist for connectivity to be established, even if they are invisible and never dealt with again after creation. The {\cb Check/Create Bits} button makes the scalar terminal creation quick and easy. Be aware, though, that it will probably still be necessary to edit these terminals to set the physical data.
\item{\cb Reorder to Index}\\
This will create missing scalar terminals as above, but in addition it will reorder the scalar terminal list so that the index values of the referenced terminals are contiguous and start with the {\cb Term Index} value. All other considerations aside, this may be a ``nice'' way to organize the terminals. It is also potentially more efficient. If the net expression does not duplicate any connection bits, an internal mapping step can be skipped as it becomes an identity, saving a little memory and time. This is the same ordering used with ``unnamed'' terminals.
\end{description} The four buttons below allow setting of the visibility flags of all of the referenced scalar terminals. It is unlikely that the flag states would vary between the bits. The remaining buttons operate as described for scalar terminal editing. % ----------------------------------------------------------------------------- % xic:symbl 021912 \section{The {\cb symbl} Button: Symbolic Representation} \label{symbolic} \index{symbl button} \index{symbolic mode} \epsfbox{images/symbl.eps} The {\cb symbl} button, available in electrical mode, allows instances of a cell to be shown as a symbol, rather than as a schematic. In the symbolic representation, the substructure of the cell is never shown, instead a simple figure representing the cell is displayed. This can simplify complex schematics. When this button is active, the current cell is in symbolic mode. It is not possible to add subcircuits or devices in this mode, but any geometry added will show as the symbolic representation. If the cell is saved with this button active, then the cell and its instances will use the symbolic representation. However, it is possible to apply a property to individual instances of the cell to force the display of that instance non-symbolically (as a schematic). This property can be applied with the {\cb Property Editor}. If the {\cb No Top Symbolic} button in the {\cb Main Window} sub-menu of the {\cb Attributes Menu}, or in the sub-window {\cb Attributes} menu, is set, the top cell will always display as a schematic in the window, whether or not the {\cb symbl} button is pressed. When a new cell is opened for editing, the {\cb symbl} button will become active and the symbolic representation shown if the cell was previously saved in symbolic mode. Pressing the button a second time will revert to normal presentation. While in symbolic mode, subcircuit terminals can not be added, however existing terminals can be moved to new locations by dragging. One should first place the terminals, with the {\cb subct} command, in normal mode. After switching to symbolic mode, the terminals can be moved to new locations, in the generally more compact symbolic representation. The actual locations of subcircuit connections is dependent upon the mode. % ----------------------------------------------------------------------------- % xic:terms 013113 \section{The {\cb terms} Button: Show Subcircuit Connections} \index{terms button} \index{show terminals} \epsfbox{images/terms.eps} When the {\cb terms} button is active, the electrical connection points of the subcircuits are shown. These points are placed with the {\cb subct} command. The {\cb terms} button is available in electrical mode only. When active, the physical terminals will be shown in physical mode windows, as if the {\cb All Terminals} check box in the {\cb Setup} page of the {\cb Extraction Setup} panel was checked. This panel is obtained from the {\cb Setup} button in the {\cb Extract Menu}. Similarly, in physical mode, when physical terminals are visible, electrical terminals will also be visible in electrical windows, as if the {\cb terms} button was active. % ----------------------------------------------------------------------------- % xic:wire 092717 \section{The {\cb wire} Button: Create/Edit Wires} \index{wire button} \index{object creation!wires} \epsfbox{images/wire.eps} The {\cb wire} button is used to create or modify wires. A wire is created by clicking the left mouse button on each vertex location in sequence. 
The vertices can be undone and redone with the {\kb Tab} key and {\kb Shift-Tab} combination, which are equivalent to the {\cb Undo} and {\cb Redo} commands. Vertex entry is terminated, and a new wire created, by clicking a second time on the last point, or by pressing the {\kb Enter} key. The {\et PixelDelta} variable can be set to alter the value, in pixels, of the snap distance to the target when clicking to terminate. By default, the snap distance is 3 pixels, so clicking within this distance of the last point will terminate entry rather than add a new vertex.
In electrical mode, wires are used to connect devices into circuits. Vertices are recognized as connecting points, and are created where the wire crosses a device or subcircuit terminal or a vertex of another wire. The {\cb Connection Dots} button in the {\cb Attributes Menu} can be used to display connections. The vertices can be edited to remove or reestablish connections.
In electrical mode, entering the {\cb wire} command will switch the current layer to the SCED (active) layer. The current layer can be changed if necessary, but without this automatic switch it was too easy to create wires on another layer, which can be difficult to distinguish visually and which will not be electrically active in the schematic, causing the circuit to not work.
While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This is also applied to the last vertex of wires being created, facilitating point list termination. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel.
When adding vertices during wire creation, the angle of each segment can be constrained to a multiple of 45 degrees with the {\cb Constrain angles to 45 degree multiples} check box in the {\cb Editing Setup} panel from the {\cb Edit Menu}, in conjunction with the {\kb Shift} and {\kb Ctrl} keys. There are three modes: call them ``no45'' for no constraint, ``reg45'' for constraint to multiples of 45 degrees with automatic generation of the segment from the end of the 45 section to the actual point, and ``simp45'' which does no automatic segment generation. The ``reg45'' algorithm adds a 45 degree segment plus possibly an additional Manhattan segment to connect to the given point. The ``simp45'' mode adds only the 45 degree segment. The mode employed at a given time is given by the table below. The {\et Constrain45} variable tracks the state (set or not set) of the check box.
\begin{tabular}{|l|l|l|} \hline
\multicolumn{3}{|c|}{\kb Constrain45 not set}\\ \hline
& {\kb Shift} up & {\kb Shift} pressed\\ \hline
{\kb Ctrl} up & no45 & reg45\\ \hline
{\kb Ctrl} pressed & simp45 & simp45\\ \hline\hline
\multicolumn{3}{|c|}{\kb Constrain45 set}\\ \hline
& {\kb Shift} up & {\kb Shift} pressed\\ \hline
{\kb Ctrl} up & reg45 & no45\\ \hline
{\kb Ctrl} pressed & simp45 & no45\\ \hline
\end{tabular}
In physical mode, three end styles are available for nonzero width wires: {\et Flush}, {\et Rounded}, and {\et Extended}. The end style and the default width are set from the menu provided by the {\cb style} button. The end style of selected wires can be changed from this menu, from within the {\cb wire} command or without.
The width of wires on a particular layer, or the widths of existing wires, can be set or changed with the {\cb Wire Width} button in the menu brought up with the {\cb style} button.
Zero-width wires are accepted into the database if they contain more than one point. In physical mode, they probably should not be used, and they will, of course, fail DRC tests. They are allowed on the off chance that the user uses them for annotation purposes. Such lines will be invisible, however, unless the layer pattern is outlined or solid. In electrical cells, zero-width wires are commonly used for the connecting lines, and there is no question of their legality in electrical cells. The width of selected wires can be changed with this menu command, from within the {\cb wire} command or without.
If the first vertex of a wire being created falls on an end vertex of an existing wire on the same layer, the new wire will use the same width and end style as the existing wire, overriding the defaults. The completed new wire will be merged with the existing wire, unless merging is disabled. Merging can be controlled from the {\cb Editing Setup} panel from the {\cb Edit Menu}, and note also that the {\et NoMerge} layer attribute will prevent merging.
Wires with a single vertex are acceptable if the width is nonzero and the end style is rounded or extended. These are rendered as an octagon or box, respectively, centered on the vertex.
Existing wires can be converted to polygons through selection and execution of the {\cb polyg} command.
\subsection{Wire Vertex Editor}
\index{wire vertex editor}
On entering the {\cb wire} command, if a wire is selected, a vertex editing mode is active on all selected wires. Each vertex of the selected object is marked with a small highlighting box. Clicking on a selected wire away from an existing vertex will create a new vertex, which can subsequently be moved.
In order to operate on a vertex, it must be selected. A vertex can be selected by clicking on it, or by dragging over it. Any number of vertices can be selected. After the selection operation, selected vertices are shown marked with a larger box, and unselected vertices are not marked. Additional vertices can be selected, and existing selected vertices unselected, by holding the {\kb Shift} key while clicking or dragging over vertex locations. Selecting a vertex a second time will deselect it.
Selected vertices can be deleted by pressing the {\kb Delete} key. This will succeed only if after vertex removal the object does not become degenerate. In particular, one can not delete the object in this manner.
The selected vertices can be moved by dragging or clicking twice. The selected vertices will be translated according to the button-down location and the button-up location, or the next button-down location if the pointer did not move. While the translation is in progress, the new borders are ghost-drawn.
All vertex operations can be undone and redone through use of the {\cb Undo} and {\cb Redo} commands. With vertices selected, pressing the {\kb Esc} or {\kb Backspace} keys will deselect the vertices and return to the state with all vertices marked.
While in the {\cb wire} command, with no object in the process of being created, it is possible to change the selected state of wire objects, thus displaying the vertices and allowing vertex editing. Pressing the {\kb Enter} key will cause the next button 1 operation to select (or deselect) wire objects. This can be repeated arbitrarily.
When one of these objects is selected, the vertices are marked, and vertex editing is possible. If the vertex editor is active, i.e., a selected wire is shown with the vertices marked, clicking with the {\cb Ctrl} button pressed will start a new wire, overriding the vertex editor. This can be used to start a new wire at a marked vertex location, for example. Without {\cb Ctrl} pressed, the vertex editor would have precedence and would select the marked vertex instead of starting a new wire. While moving vertices, holding the {\kb Shift} key will enable or disable constraining the translation angle to multiples of 45 degrees. If the {\cb Constrain angles to 45 degree multiples} check box in the {\cb Editing Setup} panel from the {\cb Edit Menu} is checked, {\kb Shift} will disable the constraint, otherwise the constraint will be enabled. The {\kb Shift} key must be up when the button-down occurs which starts the translation operation, and can be pressed before the operation is completed to alter the constraint. These operations are similar to operations in the {\cb Stretch} command. \index{wire label} \index{net name label} \subsection{Associated Net Name Label} In electrical mode, wires that participate in schematic connectivity can have an associated text label. The text provides a name for the net (node) that contains the wire, and is equivalent to the placement of a named terminal device (see \ref{devtbar}) at a vertex of the wire. To create and bind a label to a wire, first select the wire. Then, press the {\cb label} button in the side menu. Enter the text, and place the label in the normal way. The text in the label will be taken as a candidate net name (see \ref{nodmp}) for the net containing the wire. Unlike unlabeled wires, a wire with a label will never be merged with adjacent wires. Labeled wires play an important role in the connectivity of some schematics, by defining multi-conductor wire nets, and providing the ``taps'' to access the net. Complete information is provided in the Connectivity Overview in \ref{connect} and the sections that follow. % ----------------------------------------------------------------------------- % xic:xform 020815 \section{The {\cb xform} Button: Current Transform Panel} \index{xform button} \index{current transform} \label{curxform} \epsfbox{images/xform.eps} The {\cb xform} button in the side menu brings up the {\cb Current Transform} panel, which allows the current transform to be set. The current transform is applied to newly-placed subcells, and to objects which are moved or copied. The transform that is applied to an instance of a cell is saved in an irreducible form in the database representation of the instance. The irreducible form is an optional reflect-y ($y \rightarrow -y$), followed by an optional rotation, followed by the translation. This maps directly to the format used in GDSII files. However, the ``current transform'' applies rotation {\it before} the reflection, so that on screen, a reflect-x, for example, will flip the object's x coordinates independent of any rotation angle, which is what users tend to expect. The transform string printed on unexpanded instances and on the status line reflects this, i.e., forms like ``{\vt R45M}'' imply a 45 degree rotation followed by a reflect-y (``{\vt M}'' always denotes reflect-y, reflect-x is equivalent to some other rotation and reflect-y combination). 
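As a brief sketch of why the two orderings are interchangeable (assuming the usual convention that points are column vectors acted on from the left, which is an assumption about notation and not a statement about the internal implementation), write $R(\theta)$ for rotation by angle $\theta$ and $M$ for reflect-y; then $M\,R(\theta) = R(-\theta)\,M$, so a rotation followed by reflect-y is the same as reflect-y followed by rotation by the negative of the angle, i.e., 360 degrees minus the original angle.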
However, the transformation shown in an {\cb Info} window will be reflect-y followed by a 315 degree rotation (in this example), since the internal representation performs the reflection before the rotation. If the current transform is set to something other than the default identity transform, the transform code is printed on the status line. The following buttons and input fields are available in the {\cb Current Transform} panel. \begin{description} \item{\cb Angle}\\ This choice menu allows setting the rotation component of the current transform. The menu allows a choice of rotations in increments of 90 degrees in electrical mode or 45 degrees in physical mode. Pressing and holding the {\kb Ctrl} key and then pressing the left or right arrow keys cycles through the transformation angles, whether or not the {\cb Current Transform} panel is visible. The right arrow increases the angle, the left arrow decreases the angle. \item{\cb Reflect X}\\ Add a reflection of the x-axis to the current transform. The X-reflection is toggled by the {\kb Ctrl-Down Arrow} key sequence, whether or not the {\cb Current Transform} panel is visible. \item{\cb Reflect Y}\\ Add a reflection of the y-axis to the current transform. The Y-reflection is toggled by the {\kb Ctrl-Up Arrow} key sequence, whether or not the {\cb Current Transform} panel is visible. \item{\cb Magnification}\\ This entry field allows setting of the magnification component of the current transform. Any number from 0.001 through 1000.0 can be entered. \item{\cb Identity Transform}\\ This button will save the current parameters to internal storage, and reset these values to the default state (no transformation). The saved state can be restored with the {\cb Last Transform} button. When the panel first appears, this button will have the keyboard focus if the current transform is not the identity transform. The user can press {\kb Enter} to ``press'' the button. This will cause the focus to switch to the {\cb Dismiss} button, so that another {\kb Enter} press will retire the panel. Thus, one can very quickly restore the identity transform using the {\cb xform} button accelerator (``{\vt xf}'') followed by pressing the {\kb Enter} key twice. \item{\cb Last Transform}\\ This button will restore the state of the current transform last saved with the {\cb Identity Transform} button, or one of the recall buttons. If no state has been saved, the identity transform (the default) is set. Note that there is separate storage for the current transform in electrical and physical modes. When the panel first appears, if the current transform is the identity transform, this button will have the keyboard focus. In this case, the same key sequence as described above can be used to quickly restore the last transform. \item{Store and Recall}\\ There are five internal registers for storage of transformation parameters. Separate registers are used in electrical and physical modes. Pressing these buttons will either save the current parameters to a register, or set the parameters from a register. After a recall, the original parameters can be restored with the {\cb Last Transform} button. \end{description} % ----------------------------------------------------------------------------- % xic:xor 012715 \section{The {\cb xor} Button: Exclusive-OR Objects} \index{xor button} \index{inverting polarity} \index{object polarity inversion} \epsfbox{images/xor.eps} The {\cb xor} button facilitates inverting the polarity of layers, and is available only in physical mode. 
The operation is similar to the {\cb box} command, however all previously existing boxes, polygons, and wires on the same layer which overlap the created box become holes in the new box. Only boxes, polygons, and wires are inverted, other structures are covered. When a wire is partially xor'ed, the part of the wire outside of the xor region becomes a polygon. The {\cb !layer} command can also be used to invert layer polarity, and is recommended when an entire cell is to be inverted. While the command is active in physical mode, the cursor will snap to horizontal or vertical edges of existing objects in the layout if the edge is on-grid, when within two pixels. When snapped, a small dotted highlight box is displayed. This makes it much easier to create abutting objects when the grid snap spacing is very fine compared with the display scaling. This feature can be controlled from the {\cb Edge Snapping} group in the {\cb Snapping} page of the {\cb Grid Setup} panel. The {\cb box}, {\cb erase}, and {\cb xor} commands participate in a protocol that is handy on occasion. Suppose that you want to erase an area, and you have zoomed in and clicked to define the anchor, then zoomed out or panned and clicked to finish the operation. Oops, the {\cb box} command was active, not {\cb erase}. One can press {\kb Tab} to undo the unwanted new box, then press the {\cb erase} button, and the {\cb erase} command will have the same anchor point and will be showing the ghost box, so clicking once will finish the erase operation. The anchor point is remembered, when switching directly between these three commands, and the command being exited is in the state where the anchor point is defined, and the ghost box is being displayed. One needs to press the command button in the side menu to switch commands. If {\kb Esc} is pressed, or a non-participating command is entered, the anchor point will be lost.
{ "alphanum_fraction": 0.7468568653, "avg_line_length": 48.5081845238, "ext": "tex", "hexsha": "2a1fb0bbdeeed52e70b1dabf2bca86225c4f56f0", "lang": "TeX", "max_forks_count": 34, "max_forks_repo_forks_event_max_datetime": "2022-02-18T16:22:03.000Z", "max_forks_repo_forks_event_min_datetime": "2017-10-06T17:04:21.000Z", "max_forks_repo_head_hexsha": "4ea72c118679caed700dab3d49a8d36445acaec3", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "chris-ayala/xictools", "max_forks_repo_path": "xic/manual/sidemenu.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "4ea72c118679caed700dab3d49a8d36445acaec3", "max_issues_repo_issues_event_max_datetime": "2022-03-20T19:35:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-11-01T10:18:22.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "chris-ayala/xictools", "max_issues_repo_path": "xic/manual/sidemenu.tex", "max_line_length": 79, "max_stars_count": 73, "max_stars_repo_head_hexsha": "f46ba6d42801426739cc8b2940a809b74f1641e2", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "wrcad/xictools", "max_stars_repo_path": "xic/manual/sidemenu.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-02T16:59:43.000Z", "max_stars_repo_stars_event_min_datetime": "2017-10-26T12:40:24.000Z", "num_tokens": 46625, "size": 195585 }
% % extra_insight.tex % Student Loans % % Created by Ed Silkworth on 6/7/19. % Copyright © 2019-2020 Ed Silkworth. All rights reserved. % % All problems detected in this document have been resolved. % \documentclass[12pt,letterpaper,oneside]{article} % preamble \usepackage{amsthm, amsmath, mathtools, fancyhdr, cases, cleveref, amssymb, setspace, xcolor, chngcntr, alphalph, graphicx, float} \usepackage[document]{ragged2e} % left-aligns content \graphicspath{ {./images/} } \usepackage{mathptmx} % times new roman font for text and mathematics \usepackage[normalem]{ulem} % ``normalem'' to prevent underline in algorithm \usepackage[vlined]{algorithm2e} % no ``end'' \usepackage[left=1.5in, top=1in, right=1in, bottom=1in]{geometry} % left margin of dissertation is 1.5in \pagestyle{fancy} % enables custom header/footer \renewcommand{\thispagestyle}[1]{} % prevent changing style \rhead{} % do not display page number on first page \lhead{} % \setcounter{page}{236} % match page number in dissertation \renewcommand{\headrulewidth}{0pt} % no header underline \setlength{\topmargin}{-0.2in} % places header page number about 0.75in from top of page; ``\topmargin'' is specifically for header \setlength{\headheight}{14.5pt} % addresses warning about \headheight being too small \setlength{\headsep}{0.07in} % places document text about 1in from top of page; essentially a bottom margin for header \cfoot{} % no footer \newtheorem{theorem}{Theorem}[section] % [section] provides #.# for theorems \newtheorem{definition}[theorem]{Definition} % embedding [theorem] provides 1.1, 1.2, 1.3, ... for definitions \newtheorem{example}{Example}[section] \def\theexample{\arabic{section}.\arabic{subsection}\alphalph{\value{example}}} % provides #.#[letter] for select examples, e.g. 1.1a, 1.1b, and 1.1c \theoremstyle{remark} % permits non-bold italics for remark \newtheorem{remark}[theorem]{Remark} \DeclareMathSymbol{,}{\mathord}{letters}{"3B} % for spacing commas properly for inline and display math \counterwithin{equation}{section} % provides #.# for equations \allowdisplaybreaks \newcommand\yesnumberequation{\addtocounter{equation}{1}\tag{\theequation}} % provides #.#[letter] for select equations, e.g. 1.1a, 1.1b, and 1.1c, acts as do subequations \newlength{\emphasislength} \newcommand{\emphasis}[3][black]{ \settowidth{\emphasislength}{#3} % using length of ``#3,'' which refers to the emphasis text (e.g., `interest paid') \stackrel{ % stacking above a math expression \begin{minipage}{\emphasislength} \color{#1}\centering #3\\ % ``#1'' refers to the `black' color \rule{0.25pt}{10pt} % {line thickness}{line height} \end{minipage} } {\colorbox[rgb]{0.95,0.95,0.95} % boxing the math expression, each rgb color is 95% of 255 {\color{#1}$#2$ % ``#2'' refers to the math expression } } } \begin{document} % opening \title{Extra Insight into the iOS App} \author{Edward A. Silkworth\\ Teachers College, Columbia University\\ [email protected]} \date{} \maketitle \begin{abstract} This document consists of examples. It also includes derivations for the monthly compound interest rate and ten-year minimum payment. 
\end{abstract} \section{Examples} % to strike a balance between consistency, clarity and conciseness, opting to indent and line-space after each section, but not otherwise \subsection{Increment size} If $p_{\rm{max}}=\$5,000$, $p_{\rm{min}}=\$3,000$ and $N=20$, \begin{align*} \Delta N &=\frac{\mathtt{5,000}-\mathtt{3,000}}{20}\\ &=\frac{\mathtt{2,000}}{20}\\ &=\mathtt{\$100/increment} \end{align*} If $p_{\rm{max}}=\$6,500$, $p_{\rm{min}}=\$4,500$ and $N=40$, \begin{align*} \Delta N &=\frac{\mathtt{6,500}-\mathtt{4,500}}{40}\\ &=\frac{\mathtt{2,000}}{40}\\ &=\mathtt{\$50/increment} \end{align*} If $p_{\rm{max}}=\$10,000$, $p_{\rm{min}}=\$0$ and $N=50$, \begin{align*} \Delta N &=\frac{\mathtt{10,000}-\mathtt{0}}{50}\\ &=\frac{\mathtt{10,000}}{50}\\ &=\mathtt{\$200/increment} \end{align*} \newpage \rhead{\thepage} % display page number on second page onward \subsection{Annual interest rate} If $\mbox{APR}=7\%$, \begin{align*} r&=\mathtt{7}\div 100\\ &=\mathtt{0.07/year} \end{align*} If $\mbox{APR}=6.8\%$, \begin{align*} r&=\mathtt{6.8}\div 100\\ &=\mathtt{0.068/year} \end{align*} If $\mbox{APR}=1.75\%$, \begin{align*} r&=\mathtt{1.75}\div 100\\ &=\mathtt{0.0175/year} \end{align*} \subsection{Monthly interest rate} If $\mbox{APR}=7\%$, \begin{align*} i&=\frac{\mathtt{0.07}}{12}\\ &=\mathtt{0.00583.../month} \end{align*} If $\mbox{APR}=6.8\%$, \begin{align*} i&=\frac{\mathtt{0.068}}{12}\\ &=\mathtt{0.00566.../month} \end{align*} If $\mbox{APR}=3.45\%$, \begin{align*} i&=\frac{\mathtt{0.0345}}{12}\\ &=\mathtt{0.00287\underline{5}/month} \end{align*} \newpage \setlength\parindent{0pt} If $\mbox{APR}=7\%$ compounded, \begin{align*} i&\approx\left(1+\frac{\mathtt{0.07}}{365.25}\right)^{\frac{365.25}{12}}-1\\ &=\left(\mathtt{1.000191...}\right)^{\frac{365.25}{12}}-1\\ &=\mathtt{1.00584...}-1\\ &=\mathtt{0.00584.../month} \end{align*} If $\mbox{APR}=6.8\%$ compounded, \begin{align*} i&\approx\left(1+\frac{\mathtt{0.068}}{365.25}\right)^{\frac{365.25}{12}}-1\\ &=\left(\mathtt{1.000186...}\right)^{\frac{365.25}{12}}-1\\ &=\mathtt{1.00568...}-1\\ &=\mathtt{0.00568.../month} \end{align*} If $\mbox{APR}=3.45\%$ compounded, \begin{align*} i&\approx\left(1+\frac{\mathtt{0.0345}}{365.25}\right)^{\frac{365.25}{12}}-1\\ &=\left(\mathtt{1.0000944...}\right)^{\frac{365.25}{12}}-1\\ &=\mathtt{1.00287...}-1\\ &=\mathtt{0.00287\underline{9}.../month} \end{align*} \subsection{Absolute minimum monthly payment} If $p=\$2,000$, $\mbox{APR}=6.8\%$ and $\alpha=0.25$, \begin{align*} a_{\rm{min_{\mathnormal{n}}}}&=\big\lfloor{\mathtt{0.25}\left(\mathtt{2,000}\cdot\mathtt{0.00566...}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{0.25}\left(\mathtt{11.333...}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{283.3...}}\big\rceil\div 100+0.01\\ &=\mathtt{283}\div 100+0.01\\ &=\mathtt{2.83}+0.01\\ &=\mathtt{\$2.84}\\ \end{align*} \newpage If $p=\$2,700$, $\mbox{APR}=2.61\%$ and $\alpha=0.2$, \begin{align*} a_{\rm{min_{\mathnormal{n}}}}&=\big\lfloor{\mathtt{0.2}\left(\mathtt{2,700}\cdot\mathtt{0.00217\underline{5}}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{0.2}\left(\mathtt{5.872...}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{117.4...}}\big\rceil\div 100+0.01\\ &=\mathtt{117}\div 100+0.01\\ &=\mathtt{1.17}+0.01\\ &=\mathtt{\$1.18}\\ \end{align*} If $p=\$2,700$, $\mbox{APR}=2.61\%$ compounded and $\alpha=0.2$, \begin{align*} a_{\rm{min_{\mathnormal{n}}}}&=\big\lfloor{\mathtt{0.2}\left(\mathtt{2,700}\cdot\mathtt{0.00217\underline{7}...}\right)\times 
100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{0.2}\left(\mathtt{5.878...}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{117.5...}}\big\rceil\div 100+0.01\\ &=\mathtt{118}\div 100+0.01\\ &=\mathtt{1.18}+0.01\\ &=\mathtt{\$1.19}\\ \end{align*} If $p=\$35,221$, $\mbox{APR}=7\%$ and $\alpha=0$, \begin{align*} a_{\rm{min_{\mathnormal{n}}}}&=\big\lfloor{\mathtt{0}\left(\mathtt{35,221}\cdot\mathtt{0.00583...}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{0}\left(\mathtt{15.75}\right)\times 100}\big\rceil\div 100+0.01\\ &=\big\lfloor{\mathtt{0}}\big\rceil\div 100+0.01\\ &=\mathtt{0}\div 100+0.01\\ &=\mathtt{0}+0.01\\ &=\mathtt{\$0.01}\\ \end{align*} \newpage \subsection{Ten-year minimum monthly payment}\label{errorcheck} If $p=\$35,221$, $\mbox{APR}=7\%$ and $\alpha=0$, \begin{align*} i&=\mathtt{0.00583...}>0\\ \alpha&=\mathtt{0}\ \textit{(given)}\\ &\quad\;\mathtt{First\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{35,221}}{120}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{29,350.8...}}\right\rceil\div 100\\ &=\mathtt{29,351}\div 100\\ &=\mathtt{\$293.51} \end{align*} If $p=\$26,970$, $\mbox{APR}=3.45\%$ compounded and $\alpha=0$, \begin{align*} i&\approx\mathtt{0.002879...}>0\\ \alpha&=\mathtt{0}\ \textit{(given)}\\ &\quad\;\mathtt{First\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{26,970}}{120}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{22,475}}\right\rceil\div 100\\ &=\mathtt{22,475}\div 100\\ &=\mathtt{\$224.75} \end{align*} If $p=\$24,190$, $\mbox{APR}=0\%$ and $\alpha=0.5$, \begin{align*} i&=\mathtt{0}\\ \alpha&=\mathtt{0.5}\ \textit{(given)}\\ &\quad\;\mathtt{Second\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{24,190}}{120}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{20,158.3...}}\right\rceil\div 100\\ &=\mathtt{20,159}\div 100\\ &=\mathtt{\$201.59} \end{align*} \newpage \newcommand{\base}{\left(1+\mathtt{0.5}\cdot\mathtt{0.00566...}\right)} If $p=\$13,500$, $\mbox{APR}=6.8\%$ and $\alpha=0.5$, \begin{align*} i&=\mathtt{0.00566...}>0\\ \alpha&=\mathtt{0.5}\ \textit{(given)}\\ &\quad\;\mathtt{Third\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{0.5}\left(\mathtt{13,500}\cdot\mathtt{0.00566...}\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.5}\left(\mathtt{76.50}\right)\left(\mathtt{1.00283...}\right)^{120}}{\left(\mathtt{1.00283...}\right)^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.5}\left(\mathtt{76.50}\right)\left(\mathtt{1.404...}\right)}{\left(\mathtt{1.404...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{53.713...}}{\left(\mathtt{1.404...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{13,286.4...}}\right\rceil\div 100\\ &=\mathtt{13,287}\div 100\\ &=\mathtt{\$132.87} \end{align*} \renewcommand{\base}{\left(1+\mathtt{0.7}\cdot\mathtt{0.00464...}\right)} If $p=\$20,000$, $\mbox{APR}=5.56\%$ compounded and $\alpha=0.7$, \begin{align*} i&\approx\mathtt{0.00464...}>0\\ \alpha&=\mathtt{0.7}\ \textit{(given)}\\ &\quad\;\mathtt{Third\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{0.7}\left(\mathtt{20,000}\cdot\mathtt{0.00464...}\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.7}\left(\mathtt{92.874...}\right)\left(\mathtt{1.00325...}\right)^{120}}{\left(\mathtt{1.00325...}\right)^{120}-1}\times 
100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.7}\left(\mathtt{92.874...}\right)\left(\mathtt{1.476...}\right)}{\left(\mathtt{1.476...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{95.968...}}{\left(\mathtt{1.476...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{20,154.8...}}\right\rceil\div 100\\ &=\mathtt{20,155}\div 100\\ &=\mathtt{\$201.55} \end{align*} \newpage \renewcommand{\base}{\left(1+\mathtt{0.7}\cdot\mathtt{0.00463...}\right)} If $p=\$20,000$, $\mbox{APR}=5.56\%$ and $\alpha=0.7$, \begin{align*} i&=\mathtt{0.00463...}>0\\ \alpha&=\mathtt{0.7}\ \textit{(given)}\\ &\quad\;\mathtt{Third\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{0.7}\left(\mathtt{20,000}\cdot\mathtt{0.00463...}\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.7}\left(\mathtt{92.666...}\right)\left(\mathtt{1.00324...}\right)^{120}}{\left(\mathtt{1.00324...}\right)^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.7}\left(\mathtt{92.666...}\right)\left(\mathtt{1.474...}\right)}{\left(\mathtt{1.474...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{95.669...}}{\left(\mathtt{1.474...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{20,146.5...}}\right\rceil\div 100\\ &=\mathtt{20,147}\div 100\\ &=\mathtt{\$201.47} \end{align*} \renewcommand{\base}{\left(1+\mathtt{0.25}\cdot\mathtt{0.00333...}\right)} If $p=\$5,600$, $\mbox{APR}=4\%$ and $\alpha=0.25$, \begin{align*} i&=\mathtt{0.00333\underline{3}...}>0\\ \alpha&=\mathtt{0.25}\ \textit{(given)}\\ &\quad\;\mathtt{Third\ case,\ proceeding...}\\[12pt] a_{\rm{min_{120}}}&=\left\lceil{\frac{\mathtt{0.25}\left(\mathtt{5,600}\cdot\mathtt{0.00333...}\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.25}\left(\mathtt{18.666...}\right)\left(\mathtt{1.000833...}\right)^{120}}{\left(\mathtt{1.000833...}\right)^{120}-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{0.25}\left(\mathtt{18.666...}\right)\left(\mathtt{1.105...}\right)}{\left(\mathtt{1.105...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\frac{\mathtt{5.157...}}{\left(\mathtt{1.105...}\right)-1}\times 100}\right\rceil\div 100\\ &=\left\lceil{\mathtt{4,905.8...}}\right\rceil\div 100\\ &=\mathtt{4,906}\div 100\\ &=\mathtt{\$49.06} \end{align*} \newpage \subsection{Algorithm} \newcommand{\rate}{0.00583...} \newcommand{\proportion}{0.33} \newcommand{\amount}{550} \newcommand{\balance}{2,000} \newcommand{\interest}{0} \newcommand{\months}{0} \newcommand{\monthsp}{1} \newcommand{\balanceitb}{1,453.85} \newcommand{\interestitb}{7.82} \newcommand{\monthsitb}{1} \newcommand{\monthspitb}{2} \newcommand{\balanceitc}{906.65} \newcommand{\interestitc}{13.50} \newcommand{\monthsitc}{2} \newcommand{\monthspitc}{3} \newcommand{\balanceitf}{358.40} \newcommand{\interestitf}{17.04} \newcommand{\monthsitf}{3}% \newcommand{\monthspitf}{4}% \newcommand{\amountfinal}{377.53} \begin{example} If $p=\$2,000$, $\mbox{APR}=7\%$, $\alpha=0.33$ and $a=\$550$, \footnotesize \setstretch{1.25} \begin{align*} B_{0}&=\mathtt{\balance}\ ;\\ O_{0}&=\mathtt{\interest}\ ;\\ m&=\mathtt{\monthsp}\\[12pt] % ITERATION 1 &\quad\;\mathtt{<First\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] 
&\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\mathtt{3.85}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\mathtt{546.15}\overset{?}{>}0\\ &\quad\;\mathtt{1,453.85}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthsp}&=B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$1,453.85}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthsp}&=O_{\months}+\big\lfloor{\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interest}+\big\lfloor{\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interest}+\mathtt{11.67}-\mathtt{3.85}\\ &=\mathtt{\$7.82}\ ;\\[12pt] m&=\mathtt{\monthsp}+1=2\mathtt{\ months}\\[12pt] % ITERATION 2 &\quad\;\mathtt{<Second\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitb}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\Big\{\mathtt{\amount}-\mathtt{2.80}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\mathtt{547.20}\overset{?}{>}0\\ &\quad\;\mathtt{906.65}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthspitb}&=B_{\monthsitb}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$906.65}\ ;\\[36pt] % 12x3pt acts as \newpage % Outstanding interest------------------------- O_{\monthspitb}&=O_{\monthsitb}+\big\lfloor{\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitb}+\big\lfloor{\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitb}+\mathtt{8.48}-\mathtt{2.80}\\ &=\mathtt{\$13.50}\ ;\\[12pt] m&=\mathtt{\monthspitb}+1=3\mathtt{\ months}\\[12pt] % ITERATION 3 &\quad\;\mathtt{<Third\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitc}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\Big\{\mathtt{\amount}-\mathtt{1.75}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\mathtt{548.25}\overset{?}{>}0\\ &\quad\;\mathtt{358.40}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthspitc}&=B_{\monthsitc}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ 
&=\mathtt{\$358.40}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthspitc}&=O_{\monthsitc}+\big\lfloor{\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitc}+\big\lfloor{\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitc}+\mathtt{5.29}-\mathtt{1.75}\\ &=\mathtt{\$17.04}\ ;\\[12pt] m&=\mathtt{\monthspitc}+1=4\mathtt{\ months}\\[12pt] % FINAL ITERATION &\quad\;\mathtt{<Fourth\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitf}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\Big\{\mathtt{\amount}-\mathtt{0.69}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\mathtt{549.31}\overset{?}{>}0\\ &\quad\;\mathtt{-190.91}\ngtr 0\\ &\quad\;\mathtt{Halt!}\\[12pt] % Total months----------------------- n&=\mathtt{\monthspitf\ months}\\[60pt] % 12x5pt acts as \newpage % Final amount---------------------- a_{\rm{f}}&=B_{\monthsitf}+\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100+O_{\monthsitf}\\ &=\mathtt{\balanceitf}+\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100+\mathtt{\interestitf}\\ &=\mathtt{\balanceitf}+\mathtt{2.09}+\mathtt{\interestitf}\\ &=\mathtt{\$\amountfinal}\ ;\\[12pt] % Final balance------------------------- B_{\monthspitf}&=B_{\monthsitf}-\Big\{a_{\rm{f}}-\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100-O_{\monthsitf}\Big\}\\ &=\mathtt{\balanceitf}-\Big\{\mathtt{\amountfinal}-\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\mathtt{\interestitf}\Big\}\\ &=\mathtt{\balanceitf}-\Big\{\mathtt{\amountfinal}-\mathtt{2.09}-\mathtt{\interestitf}\Big\}\\ &=\mathtt{\balanceitf}-\mathtt{358.40}\\ &=\mathtt{\$0}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Monthly\ Balance:}\\ &\!\!\!\!\!\!\!\mathtt{2000.00}\quad +\qquad\mathtt{2000.00}\cdot \mathtt{\rate}\quad -\quad \mathtt{550.00}\quad =\quad \mathtt{1453.85}\\ &\!\!\!\!\!\!\!\mathtt{1453.85}\quad +\qquad\mathtt{1453.85}\cdot \mathtt{\rate}\quad -\quad \mathtt{550.00}\quad =\quad \ \ \mathtt{906.65}\\ &\!\!\!\!\!\!\!\ \ \mathtt{906.65}\quad +\qquad\ \ \mathtt{906.65}\cdot \mathtt{\rate}\quad -\quad \mathtt{550.00}\quad =\quad\ \ \mathtt{358.40}\\ % &\!\!\!\!\!\!\!\mathtt{906.65}\quad +\qquad\mathtt{906.65}\cdot \mathtt{\rate}\quad -\quad \mathtt{550.00}\quad =\quad \mathtt{358.40}\\ % &\vdots\qquad\qquad\qquad\qquad\qquad\qquad\ \ \vdots\qquad\quad\,\ \vdots\qquad\quad\ \ \ \:\vdots\\ &\!\!\!\!\!\!\!\ \ \mathtt{358.40}\quad +\qquad\ \ \mathtt{358.40}\cdot \mathtt{\rate}\quad -\quad \mathtt{377.53}\quad =\quad\ \ \ \ \ \ \mathtt{0.00}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Breakdown\ of\ Pay:}\\ &\!\!\!\!\!\!\!\mathtt{546.15\ Prin.}\quad +\quad\;\:\mathtt{3.85\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\!\!\!\!\mathtt{547.20\ Prin.}\quad +\quad\;\:\mathtt{2.80\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ 
&\!\!\!\!\!\!\!\mathtt{548.25\ Prin.}\quad +\quad\;\:\mathtt{1.75\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\!\!\!\!\mathtt{\balanceitf\ Prin.}\quad +\ \ \ \,\mathtt{19.13\ Int.}\quad =\ \ \ \,\mathtt{\amountfinal} \end{align*} \end{example} \normalsize \setstretch{1} \newpage \renewcommand{\rate}{0.002875} \renewcommand{\proportion}{0.25} \renewcommand{\amount}{200} \renewcommand{\balance}{5,000} \renewcommand{\interest}{0} \renewcommand{\months}{0} \renewcommand{\monthsp}{1} \renewcommand{\balanceitb}{4,803.59} \renewcommand{\interestitb}{10.79} \renewcommand{\monthsitb}{1} \renewcommand{\monthspitb}{2} \renewcommand{\balanceitc}{4,607.04} \renewcommand{\interestitc}{21.15} \renewcommand{\monthsitc}{2} \renewcommand{\monthspitc}{3} \renewcommand{\balanceitf}{47.26} \renewcommand{\interestitf}{141.77} \renewcommand{\monthsitf}{25}% \renewcommand{\monthspitf}{26}% \renewcommand{\amountfinal}{189.17} \begin{example} If $p=\$5,000$, $\mbox{APR}=3.45\%$, $\alpha=0.25$ and $a=\$200$, \footnotesize \setstretch{1.25} \begin{align*} B_{0}&=\mathtt{\balance}\ ;\\ O_{0}&=\mathtt{\interest}\ ;\\ m&=\mathtt{\monthsp}\\[12pt] % ITERATION 1 &\quad\;\mathtt{<First\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\mathtt{3.59}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\mathtt{196.41}\overset{?}{>}0\\ &\quad\;\mathtt{4,803.59}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthsp}&=B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$4,803.59}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthsp}&=O_{\months}+\big\lfloor{\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interest}+\big\lfloor{\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interest}+\mathtt{14.38}-\mathtt{3.59}\\ &=\mathtt{\$10.79}\ ;\\[12pt] m&=\mathtt{\monthsp}+1=2\mathtt{\ months}\\[12pt] % ITERATION 2 &\quad\;\mathtt{<Second\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitb}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\Big\{\mathtt{\amount}-\mathtt{3.45}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\mathtt{196.55}\overset{?}{>}0\\ &\quad\;\mathtt{4,607.04}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthspitb}&=B_{\monthsitb}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$4,607.04}\ ;\\[60pt] % 12x5pt acts as \newpage % Outstanding interest------------------------- 
O_{\monthspitb}&=O_{\monthsitb}+\big\lfloor{\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitb}+\big\lfloor{\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitb}+\mathtt{13.81}-\mathtt{3.45}\\ &=\mathtt{\$21.15}\ ;\\[12pt] m&=\mathtt{\monthspitb}+1=3\mathtt{\ months}\\[12pt] % ITERATION 3 &\quad\;\mathtt{<Third\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitc}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\Big\{\mathtt{\amount}-\mathtt{3.31}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\mathtt{196.69}\overset{?}{>}0\\ &\quad\;\mathtt{4,410.35}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthspitc}&=B_{\monthsitc}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$4,410.35}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthspitc}&=O_{\monthsitc}+\big\lfloor{\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitc}+\big\lfloor{\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitc}+\mathtt{13.25}-\mathtt{3.31}\\ &=\mathtt{\$31.09}\ ;\\[12pt] m&=\mathtt{\monthspitc}+1=4\mathtt{\ months}\\[12pt] &\quad\;\vdots\\[12pt] % FINAL ITERATION &\quad\;\mathtt{<Twenty}\mbox{-}\mathtt{sixth\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitf}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\Big\{\mathtt{\amount}-\mathtt{0.03}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\mathtt{199.97}\overset{?}{>}0\\ &\quad\;\mathtt{-152.71}\ngtr 0\\ &\quad\;\mathtt{Halt!}\\[12pt] % Total months----------------------- n&=\mathtt{\monthspitf\ months}\\[36pt] % 12x3pt acts as \newpage % Final amount---------------------- a_{\rm{f}}&=B_{\monthsitf}+\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100+O_{\monthsitf}\\ &=\mathtt{\balanceitf}+\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100+\mathtt{\interestitf}\\ &=\mathtt{\balanceitf}+\mathtt{0.14}+\mathtt{\interestitf}\\ &=\mathtt{\$\amountfinal}\ ;\\[12pt] % Final balance------------------------- B_{\monthspitf}&=B_{\monthsitf}-\Big\{a_{\rm{f}}-\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100-O_{\monthsitf}\Big\}\\ 
&=\mathtt{\balanceitf}-\Big\{\mathtt{\amountfinal}-\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\mathtt{\interestitf}\Big\}\\ &=\mathtt{\balanceitf}-\Big\{\mathtt{\amountfinal}-\mathtt{0.14}-\mathtt{\interestitf}\Big\}\\ &=\mathtt{\balanceitf}-\mathtt{47.26}\\ &=\mathtt{\$0}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Monthly\ Balance:}\\ &\!\!\!\!\!\!\!\mathtt{5000.00}\quad +\qquad\mathtt{5000.00}\cdot \mathtt{0.00287...}\quad -\quad \mathtt{200.00}\quad =\quad \mathtt{4803.59}\\ &\!\!\!\!\!\!\!\mathtt{4803.59}\quad +\qquad\mathtt{4803.59}\cdot \mathtt{0.00287...}\quad -\quad \mathtt{200.00}\quad =\quad \mathtt{4607.04}\\ &\!\!\!\!\!\!\!\mathtt{4607.04}\quad +\qquad\mathtt{4607.04}\cdot \mathtt{0.00287...}\quad -\quad \mathtt{200.00}\quad =\quad\mathtt{4410.35}\\ &\!\!\!\!\!\!\!\mathtt{4410.35}\quad +\qquad\mathtt{4410.35}\cdot \mathtt{0.00287...}\quad -\quad \mathtt{200.00}\quad =\quad \mathtt{4213.52}\\ &\qquad\ \:\vdots\qquad\qquad\qquad\qquad\qquad\quad\ \ \ \ \ \vdots\qquad\qquad\quad\ \ \vdots\qquad\qquad\qquad\:\ \vdots\\ &\!\!\!\!\!\!\!\ \ \ \ \mathtt{47.26}\quad +\qquad\ \ \ \ \,\mathtt{47.26}\cdot \mathtt{0.00287...}\quad -\quad \mathtt{189.17}\quad =\quad\ \ \ \ \ \ \mathtt{0.00}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Breakdown\ of\ Pay:}\\ &\!\!\!\!\!\!\!\mathtt{196.41\ Prin.}\quad +\quad\;\:\mathtt{3.59\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\!\!\!\!\mathtt{196.55\ Prin.}\quad +\quad\;\:\mathtt{3.45\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\!\!\!\!\mathtt{196.69\ Prin.}\quad +\quad\;\:\mathtt{3.31\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\!\!\!\!\mathtt{196.83\ Prin.}\quad +\quad\;\:\mathtt{3.17\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\qquad\qquad\ \ \vdots \qquad\qquad\qquad\quad\:\vdots \qquad\qquad\quad\ \ \ \vdots\\ &\!\!\!\!\,\mathtt{\balanceitf\ Prin.}\quad +\!\!\!\ \ \ \mathtt{141.91\ Int.}\quad =\ \ \ \,\mathtt{\amountfinal} \end{align*} \end{example} \normalsize \setstretch{1} \newpage \renewcommand{\rate}{0.00566...} \renewcommand{\proportion}{0.5} \renewcommand{\amount}{132.87} \renewcommand{\balance}{13,500} \renewcommand{\interest}{0} \renewcommand{\months}{0} \renewcommand{\monthsp}{1} \renewcommand{\balanceitb}{13,405.38} \renewcommand{\interestitb}{38.25} \renewcommand{\monthsitb}{1} \renewcommand{\monthspitb}{2} \renewcommand{\balanceitc}{13,310.49} \renewcommand{\interestitc}{76.23} \renewcommand{\monthsitc}{2} \renewcommand{\monthspitc}{3} \renewcommand{\balanceitf}{131.70} \renewcommand{\interestitf}{2,443.24} \renewcommand{\monthsitf}{119} \renewcommand{\monthspitf}{120} \renewcommand{\amountfinal}{2,575.69} \begin{example} If $p=\$13,500$, $\mbox{APR}=6.8\%$, $\alpha=0.5$ and $a=a_{\rm{min_{120}}}=\$132.87$, \scriptsize \setstretch{1.25} \begin{align*} B_{0}&=\mathtt{\balance}\ ;\\ O_{0}&=\mathtt{\interest}\ ;\\ m&=\mathtt{\monthsp}\\[12pt] % ITERATION 1 &\quad\;\mathtt{<First\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\mathtt{38.25}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\mathtt{94.62}\overset{?}{>}0\\ 
&\quad\;\mathtt{13,405.38}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthsp}&=B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$13,405.38}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthsp}&=O_{\months}+\big\lfloor{\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interest}+\big\lfloor{\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interest}+\mathtt{76.50}-\mathtt{38.25}\\ &=\mathtt{\$38.25}\ ;\\[12pt] m&=\mathtt{\monthsp}+1=2\mathtt{\ months}\\[12pt] % ITERATION 2 &\quad\;\mathtt{<Second\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitb}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\Big\{\mathtt{\amount}-\mathtt{37.98}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitb}-\mathtt{94.89}\overset{?}{>}0\\ &\quad\;\mathtt{13,310.49}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthspitb}&=B_{\monthsitb}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$13,310.49}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthspitb}&=O_{\monthsitb}+\big\lfloor{\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\monthsitb}\cdot i\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitb}+\big\lfloor{\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitb}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitb}+\mathtt{75.96}-\mathtt{37.98}\\ &=\mathtt{\$76.23}\ ;\\[12pt] m&=\mathtt{\monthspitb}+1=3\mathtt{\ months}\\[48pt] % 12x4pt acts as \newpage % ITERATION 3 &\quad\;\mathtt{<Third\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitc}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\Big\{\mathtt{\amount}-\mathtt{37.71}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitc}-\mathtt{95.16}\overset{?}{>}0\\ &\quad\;\mathtt{13,215.33}>0\\ &\quad\;\mathtt{Proceeding...}\\[12pt] % Principal balance---------------------- B_{\monthspitc}&=B_{\monthsitc}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\\ &=\mathtt{\$13,215.33}\ ;\\[12pt] % Outstanding interest------------------------- O_{\monthspitc}&=O_{\monthsitc}+\big\lfloor{\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100-\big\lfloor{\alpha\left(B_{\monthsitc}\cdot i\right)\times 100}\big\rceil\div 100\\ 
&=\mathtt{\interestitc}+\big\lfloor{\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitc}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\\ &=\mathtt{\interestitc}+\mathtt{75.43}-\mathtt{37.71}\\ &=\mathtt{\$113.95}\ ;\\[12pt] m&=\mathtt{\monthspitc}+1=4\mathtt{\ months}\\[12pt] &\quad\;\vdots\\[12pt] % FINAL ITERATION &\quad\;\mathtt{<120^{th}\ iteration>}\\ % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\monthsitf}-\Big\{a-\big\lfloor{\alpha\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\Big\{\mathtt{\amount}-\mathtt{0.37}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balanceitf}-\mathtt{132.50}\overset{?}{>}0\\ &\quad\;\mathtt{-0.80}\ngtr 0\\ &\quad\;\mathtt{Halt!}\\[12pt] % Total months----------------------- n&=\mathtt{\monthspitf\ months}\\[12pt] % Final amount---------------------- a_{\rm{f}}&=B_{\monthsitf}+\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100+O_{\monthsitf}\\ &=\mathtt{\balanceitf}+\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100+\mathtt{\interestitf}\\ &=\mathtt{\balanceitf}+\mathtt{0.75}+\mathtt{\interestitf}\\ &=\mathtt{\$\amountfinal}\ ;\\[12pt] % Final balance------------------------- B_{\monthspitf}&=B_{\monthsitf}-\Big\{a_{\rm{f}}-\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100-O_{\monthsitf}\Big\}\\ &=\mathtt{\balanceitf}-\Big\{\mathtt{\amountfinal}-\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100-\mathtt{\interestitf}\Big\}\\ &=\mathtt{\balanceitf}-\Big\{\mathtt{\amountfinal}-\mathtt{0.75}-\mathtt{\interestitf}\Big\}\\ &=\mathtt{\balanceitf}-\mathtt{131.70}\\ &=\mathtt{\$0}\\[48pt] % 12x4pt acts as \newpage &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Monthly\ Balance:}\\ &\!\!\!\!\!\!\!\mathtt{13500.00}\quad +\qquad\mathtt{13500.00}\cdot \mathtt{\rate}\quad -\quad \mathtt{132.87}\quad =\quad \mathtt{13405.38}\\ &\!\!\!\!\!\!\!\mathtt{13405.38}\quad +\qquad\mathtt{13405.38}\cdot \mathtt{\rate}\quad -\quad \mathtt{132.87}\quad =\quad \mathtt{13310.49}\\ &\!\!\!\!\!\!\!\mathtt{13310.49}\quad +\qquad\mathtt{13310.49}\cdot \mathtt{\rate}\quad -\quad \mathtt{132.87}\quad =\quad\mathtt{13215.33}\\ &\!\!\!\!\!\!\!\mathtt{13215.33}\quad +\qquad\mathtt{13215.33}\cdot \mathtt{\rate}\quad -\quad \mathtt{132.87}\quad =\quad \mathtt{13119.90}\\ &\qquad\ \ \ \ \vdots\qquad\qquad\qquad\qquad\qquad\quad\ \ \ \ \ \ \ \vdots\qquad\qquad\quad\ \,\ \vdots\qquad\qquad\qquad\:\ \ \ \vdots\\ &\!\!\!\!\!\!\!\ \ \ \ \mathtt{131.70}\quad +\qquad\ \ \ \ \,\mathtt{131.70}\cdot \mathtt{\rate}\quad -\ \ \mathtt{2575.69}\quad =\quad\ \ \ \ \ \ \ \ \mathtt{0.00}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Breakdown\ of\ Pay:}\\ &\!\!\!\mathtt{94.62\ Prin.}\quad +\quad\;\:\mathtt{38.25\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\mathtt{94.89\ Prin.}\quad +\quad\;\:\mathtt{37.98\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\mathtt{95.16\ Prin.}\quad +\quad\;\:\mathtt{37.71\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\!\!\!\mathtt{95.43\ Prin.}\quad +\quad\;\:\mathtt{37.44\ Int.}\quad =\quad\quad\,\,\mathtt{\amount}\\ &\qquad\quad\ \ \ 
\ \ \ \vdots \qquad\qquad\qquad\quad\ \ \; \vdots \qquad\qquad\qquad\quad\:\vdots\\ &\!\!\!\!\!\!\!\!\mathtt{\balanceitf\ Prin.}\quad +\!\!\!\!\!\!\quad\mathtt{2443.99\ Int.}\quad =\quad\ \ \; \mathtt{2575.69} \end{align*} \end{example} \normalsize \setstretch{1} \newpage \renewcommand{\rate}{0.00566...} \renewcommand{\proportion}{0.5} \renewcommand{\amount}{5,000} \renewcommand{\balance}{4,000} \renewcommand{\interest}{0} \renewcommand{\months}{0} \renewcommand{\monthsp}{1} \renewcommand{\balanceitf}{\balance} \renewcommand{\interestitf}{\interest} \renewcommand{\monthsitf}{\months}% \renewcommand{\monthspitf}{\monthsp}% \renewcommand{\amountfinal}{4,022.67} \begin{example} If $p=\$4,000$, $\mbox{APR}=6.8\%$, $\alpha=0.5$ and $a=\$5,000$, \footnotesize \setstretch{1.25} \begin{align*} B_{0}&=\mathtt{\balance}\ ;\\ O_{0}&=\mathtt{\interest}\ ;\\ m&=\mathtt{\monthsp}\\[12pt] % FINAL ITERATION % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\mathtt{11.33}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\mathtt{4,988.67}\overset{?}{>}0\\ &\quad\;\mathtt{-988.67}\ngtr 0\\ &\quad\;\mathtt{Halt!}\\[12pt] % Total months----------------------- n&=\mathtt{\monthspitf\ month}\\[12pt] % Final amount---------------------- a_{\rm{f}}&=B_{\monthsitf}+\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100+O_{\monthsitf}\\ &=\mathtt{\balanceitf}+\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100+\mathtt{\interestitf}\\ &=\mathtt{\balanceitf}+\mathtt{22.67}+\mathtt{\interestitf}\\ &=\mathtt{\$\amountfinal}\ ;\\[12pt] % Checking-------------------------------- \mathtt{Check:}&\quad\;a-a_{\rm{f}}\\ &\quad =\mathtt{\amount}-\mathtt{\amountfinal}\\ &\quad =\mathtt{977.33}>0\\[12pt] % Refund------------------------- R&=a-a_{\rm{f}}=\mathtt{977.33}\ ;\\ &\quad\;\mathtt{``Refunded\ \$977.33"}\\[12pt] % strange: should be ``'' &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Monthly\ Balance:}\\ &\!\!\!\!\!\!\!\mathtt{4000.00}\quad +\quad\mathtt{4000.00}\cdot \mathtt{\rate}\quad -\quad \mathtt{4022.67}\quad =\quad \mathtt{0.00}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Breakdown\ of\ Pay:}\\ &\!\!\!\!\!\!\!\mathtt{4000.00\ Prin.}\quad +\quad\mathtt{22.67\ Int.}\quad =\quad\mathtt{4022.67} \end{align*} \end{example} \normalsize \setstretch{1} \newpage \renewcommand{\rate}{0.00566...} \renewcommand{\proportion}{0} \renewcommand{\amount}{4,000} \renewcommand{\balance}{4,000} \renewcommand{\interest}{0} \renewcommand{\months}{0} \renewcommand{\monthsp}{1} \renewcommand{\balanceitf}{\balance} \renewcommand{\interestitf}{\interest} \renewcommand{\monthsitf}{\months}% \renewcommand{\monthspitf}{\monthsp}% \renewcommand{\amountfinal}{4,022.67} \begin{example} If $p=\$4,000$, $\mbox{APR}=6.8\%$, $\alpha=0$ and $a=\$4,000$, \footnotesize \setstretch{1.25} \begin{align*} B_{0}&=\mathtt{\balance}\ ;\\ O_{0}&=\mathtt{\interest}\ ;\\ m&=\mathtt{\monthsp}\\[12pt] % FINAL ITERATION % Checking-------------------------------- \mathtt{Check:}&\quad\;B_{\months}-\Big\{a-\big\lfloor{\alpha\left(B_{\months}\cdot i\right)\times 100}\big\rceil\div 
100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\big\lfloor{\mathtt{\proportion}\left(\mathtt{\balance}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\Big\{\mathtt{\amount}-\mathtt{0}\Big\}\overset{?}{>}0\\[-6pt] &\quad\;\mathtt{\balance}-\mathtt{4,000}\overset{?}{>}0\\ &\quad\;\mathtt{0}\ngtr 0\\ &\quad\;\mathtt{Halt!}\\[12pt] % Total months----------------------- n&=\mathtt{\monthspitf\ month}\\[12pt] % Final amount---------------------- a_{\rm{f}}&=B_{\monthsitf}+\big\lfloor{\left(B_{\monthsitf}\cdot i\right)\times 100}\big\rceil\div 100+O_{\monthsitf}\\ &=\mathtt{\balanceitf}+\big\lfloor{\left(\mathtt{\balanceitf}\cdot \mathtt{\rate}\right)\times 100}\big\rceil\div 100+\mathtt{\interestitf}\\ &=\mathtt{\balanceitf}+\mathtt{22.67}+\mathtt{\interestitf}\\ &=\mathtt{\$\amountfinal}\ ;\\[12pt] % Checking-------------------------------- \mathtt{Check:}&\quad\;a-a_{\rm{f}}\\ &\quad =\mathtt{\amount}-\mathtt{\amountfinal}\\ &\quad =\mathtt{-22.67}<0\\[12pt] % Refund------------------------- E&=\left|a-a_{\rm{f}}\right|=\mathtt{22.67}\ ;\\ &\quad\;\mathtt{``Pay\ Extra\ \$22.67"}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Monthly\ Balance:}\\ &\!\!\!\!\!\!\!\mathtt{4000.00}\quad +\quad\mathtt{4000.00}\cdot \mathtt{\rate}\quad -\quad \mathtt{4022.67}\quad =\quad \mathtt{0.00}\\[12pt] &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathtt{Breakdown\ of\ Pay:}\\ &\!\!\!\!\!\!\!\mathtt{4000.00\ Prin.}\quad +\quad\mathtt{22.67\ Int.}\quad =\quad\mathtt{4022.67} \end{align*} \end{example} \normalsize \setstretch{1} \newpage \subsection{Length of repayment} For $n=4$, \begin{align*} l_{\rm{y}}&=\bigg\lfloor{\frac{\mathtt{4}}{12}}\bigg\rfloor\\ &=\big\lfloor{\mathtt{0.333...}}\big\rfloor\\ &=\mathtt{0\ years} \end{align*} \begin{align*} l_{\rm{m}}&=\mathtt{4}-12\cdot \mathtt{0}\\ &=\mathtt{4}-\mathtt{0}\\ &=\mathtt{4\ months} \end{align*} For $n=19$, \begin{align*} l_{\rm{y}}&=\bigg\lfloor{\frac{\mathtt{19}}{12}}\bigg\rfloor\\ &=\big\lfloor{\mathtt{1.583...}}\big\rfloor\\ &=\mathtt{1\ year} \end{align*} \begin{align*} l_{\rm{m}}&=\mathtt{19}-12\cdot \mathtt{1}\\ &=\mathtt{19}-\mathtt{12}\\ &=\mathtt{7\ months} \end{align*} For $n=27$, \begin{align*} l_{\rm{y}}&=\bigg\lfloor{\frac{\mathtt{27}}{12}}\bigg\rfloor\\ &=\big\lfloor{\mathtt{2.25}}\big\rfloor\\ &=\mathtt{2\ years} \end{align*} \begin{align*} l_{\rm{m}}&=\mathtt{27}-12\cdot \mathtt{2}\\ &=\mathtt{27}-\mathtt{24}\\ &=\mathtt{3\ months} \end{align*} \newpage \subsection{Total payments, savings and change in savings} \setcounter{example}{0} % reset letter counter \begin{example} Assume $p=\$13,500$, $\mbox{APR}=6.8\%$ and $\alpha=0.5$.\\[12pt] Total payments, \begin{align*} \\\mathtt{Let\ } a&=\mathtt{300}\\[12pt] \therefore n&=\mathtt{49\ months}\\ B_{48}&=\mathtt{\$61.91}\\ O_{48}&=\mathtt{\$961.95}\\[12pt] T(a)&=(\mathtt{48}) a+B_{48}+\big\lfloor{\left(B_{48}\cdot i\right)\times 100}\big\rceil\div 100+O_{48}\\ T(\mathtt{300})&=(\mathtt{48}) \mathtt{300}+\mathtt{61.91}+\big\lfloor{\left(\mathtt{61.91}\cdot \mathtt{0.00566...}\right)\times 100}\big\rceil\div 100+\mathtt{961.95}\\ &=\mathtt{14,400}+\mathtt{61.91}+\mathtt{0.35}+\mathtt{961.95}\\ &=\mathtt{\$15,424.21} \end{align*} \label{amina} \begin{align*}\\[-36pt] \mathtt{Let\ } a&=a_{\rm{min_{120},\,\alpha=1}}=\mathtt{155.36}\\[12pt] \therefore n&=\mathtt{120\ months}\\ B_{119}&=\mathtt{\$154.16}\\ O_{119}&=\mathtt{\$0}\\[12pt] T_{\rm{max}}&=(\mathtt{119}) 
a_{\rm{min},\,\alpha=1}+B_{119}+\big\lfloor{\left(B_{119}\cdot i\right)\times 100}\big\rceil\div 100+O_{119}\\ &=(\mathtt{119}) \mathtt{155.36}+\mathtt{154.16}+\big\lfloor{\left(\mathtt{154.16}\cdot \mathtt{0.00566...}\right)\times 100}\big\rceil\div 100+\mathtt{0}\\ &=\mathtt{18,487.84}+\mathtt{154.16}+\mathtt{0.87}+\mathtt{0}\\ &=\mathtt{\$18,642.87} \end{align*} \vspace{12pt} Savings, \begin{align*} s&=T_{\rm{max}}-T(\mathtt{300})\\ &=\mathtt{18,642.87}-\mathtt{15,424.21}\\ &=\mathtt{\$3,218.66} \end{align*} \newpage Total payments, \begin{align*} \\\mathtt{Let\ } a&=\mathtt{725}\\[12pt] \therefore n&=\mathtt{20\ months}\\ B_{19}&=\mathtt{\$113.61}\\ O_{19}&=\mathtt{\$388.62}\\[12pt] T(a)&=(\mathtt{19}) a+B_{19}+\big\lfloor{\left(B_{19}\cdot i\right)\times 100}\big\rceil\div 100+O_{19}\\ T(725)&=(\mathtt{19}) \mathtt{725}+\mathtt{113.61}+\big\lfloor{\left(\mathtt{113.61}\cdot \mathtt{0.00566...}\right)\times 100}\big\rceil\div 100+\mathtt{388.62}\\ &=\mathtt{13,775}+\mathtt{113.61}+\mathtt{0.64}+\mathtt{388.62}\\ &=\mathtt{\$14,277.87} \end{align*} \begin{align*}\\[-36pt] T_{\rm{max}}&=\mathtt{\$18,642.87}\ \mbox{(page\ \pageref{amina})} \end{align*} \vspace{12pt} Savings,\label{aminb} \begin{align*} s&=T_{\rm{max}}-T(\mathtt{725})\\ &=\mathtt{18,642.87}-\mathtt{14,277.87}\\ &=\mathtt{\$4,365} \end{align*} Change in savings, \begin{align*} \mathtt{Let\ }s_{1}&=\mathtt{\$3,218.66}\ \mbox{(page\ \pageref{amina})}\\ s_{2}&=\mathtt{\$4,365}\\[12pt] \Delta s&=\left|\ \mathtt{4,365}-\mathtt{3,218.66}\ \right|\\ &=\mathtt{\$1,146.34} \end{align*} \end{example} \newpage \begin{example} Assume $p=\$5,000$, $\mbox{APR}=3.45\%$ and $\alpha=0.25$.\\[12pt] Total payments, \begin{align*} \\\mathtt{Let\ } a&=\mathtt{700}\\[12pt] \therefore n&=\mathtt{8\ months}\\ B_{7}&=\mathtt{\$114.63}\\ O_{7}&=\mathtt{\$43.90}\\[12pt] T(a)&=(\mathtt{7}) a+B_{7}+\big\lfloor{\left(B_{7}\cdot i\right)\times 100}\big\rceil\div 100+O_{7}\\ T(700)&=(\mathtt{7}) \mathtt{700}+\mathtt{114.63}+\big\lfloor{\left(\mathtt{114.63}\cdot \mathtt{0.002875}\right)\times 100}\big\rceil\div 100+\mathtt{43.90}\\ &=\mathtt{4,900}+\mathtt{114.63}+\mathtt{0.33}+\mathtt{43.90}\\ &=\mathtt{\$5,058.86} \end{align*} \label{aminc} \begin{align*}\\[-36pt] \mathtt{Let\ } a&=a_{\rm{min_{\mathnormal{n}},\,\alpha=1}}=\mathtt{14.39}\\[12pt] \therefore n&=\mathtt{2,387\ months}\\ B_{2,386}&=\mathtt{\$0.37}\\ O_{2,386}&=\mathtt{\$0}\\[12pt] T_{\rm{max}}&=(\mathtt{2,386}) a_{\rm{min},\,\alpha=1}+B_{2,386}+\big\lfloor{\left(B_{2,386}\cdot i\right)\times 100}\big\rceil\div 100+O_{2,386}\\ &=(\mathtt{2,386}) \mathtt{14.39}+\mathtt{0.37}+\big\lfloor{\left(\mathtt{0.37}\cdot \mathtt{0.002875}\right)\times 100}\big\rceil\div 100+\mathtt{0}\\ &=\mathtt{34,334.54}+\mathtt{0.37}+\mathtt{0}+\mathtt{0}\\ &=\mathtt{\$34,334.91} \end{align*} \vspace{12pt} Savings, \begin{align*} s&=T_{\rm{max}}-T(\mathtt{700})\\ &=\mathtt{34,334.91}-\mathtt{5,058.86}\\ &=\mathtt{\$29,276.05} \end{align*} Change in savings, \begin{align*} \mathtt{Let\ }s_{1}&=\mathtt{\$4,365}\ \mbox{(page\ \pageref{aminb})}\\ s_{2}&=\mathtt{\$29,276.05}\\[12pt] \Delta s&=\left|\ \mathtt{29,276.05}-\mathtt{4,365}\ \right|\\ &=\mathtt{\$24,911.05} \end{align*} \end{example} \newpage \begin{example} Assume $p=\$2,000$, $\mbox{APR}=7\%$ and $\alpha=0.33$.\\[12pt] Total payments, \begin{align*} \\\mathtt{Let\ } a&=\mathtt{550}\\[12pt] \therefore n&=\mathtt{4\ months}\\ B_{3}&=\mathtt{\$358.40}\\ O_{3}&=\mathtt{\$17.04}\\[12pt] T(a)&=(\mathtt{3}) a+B_{3}+\big\lfloor{\left(B_{3}\cdot i\right)\times 
100}\big\rceil\div 100+O_{3}\\ T(\mathtt{550})&=(\mathtt{3}) \mathtt{550}+\mathtt{358.40}+\big\lfloor{\left(\mathtt{358.40}\cdot \mathtt{0.00583...}\right)\times 100}\big\rceil\div 100+\mathtt{17.04}\\ &=\mathtt{1,650}+\mathtt{358.40}+\mathtt{2.09}+\mathtt{17.04}\\ &=\mathtt{\$2,027.53} \end{align*} \label{amind} \begin{align*}\\[-36pt] \mathtt{Let\ } a&=a_{\rm{min_{120},\,\alpha=1}}=\mathtt{23.23}\\[12pt] \therefore n&=\mathtt{120\ months}\\ B_{119}&=\mathtt{\$21.65}\\ O_{119}&=\mathtt{\$0}\\[12pt] T_{\rm{max}}&=(\mathtt{119}) a_{\rm{min},\,\alpha=1}+B_{119}+\big\lfloor{\left(B_{119}\cdot i\right)\times 100}\big\rceil\div 100+O_{119}\\ &=(\mathtt{119}) \mathtt{23.23}+\mathtt{21.65}+\big\lfloor{\left(\mathtt{21.65}\cdot \mathtt{0.00583...}\right)\times 100}\big\rceil\div 100+\mathtt{0}\\ &=\mathtt{2,764.37}+\mathtt{21.65}+\mathtt{0.13}+\mathtt{0}\\ &=\mathtt{\$2,786.15} \end{align*} \vspace{12pt} Savings, \begin{align*} s&=T_{\rm{max}}-T(\mathtt{550})\\ &=\mathtt{2,786.15}-\mathtt{2,027.53}\\ &=\mathtt{\$758.62} \end{align*} Change in savings, \begin{align*} \mathtt{Let\ }s_{1}&=\mathtt{\$29,276.05}\ \mbox{(page\ \pageref{aminc})}\\ s_{2}&=\mathtt{\$758.62}\\[12pt] \Delta s&=\left|\ \mathtt{758.62}-\mathtt{29,276.05}\ \right|\\ &=\mathtt{\$28,517.43} \end{align*} \newpage Total payments, \begin{align*} \\\mathtt{Let\ } a&=\mathtt{1,000}\\[12pt] \therefore n&=\mathtt{3\ months}\\ B_{2}&=\mathtt{\$5.78}\\ O_{2}&=\mathtt{\$11.75}\\[12pt] T(a)&=(\mathtt{2}) a+B_{2}+\big\lfloor{\left(B_{2}\cdot i\right)\times 100}\big\rceil\div 100+O_{2}\\ T(1,000)&=(\mathtt{2}) \mathtt{1,000}+\mathtt{5.78}+\big\lfloor{\left(\mathtt{5.78}\cdot \mathtt{0.00583...}\right)\times 100}\big\rceil\div 100+\mathtt{11.75}\\ &=\mathtt{2,000}+\mathtt{5.78}+\mathtt{0.03}+\mathtt{11.75}\\ &=\mathtt{\$2,017.56} \end{align*} \begin{align*}\\[-36pt] T_{\rm{max}}&=\mathtt{\$2,786.15}\ \mbox{(page\ \pageref{amind})} \end{align*} \vspace{12pt} Savings,\label{amine} \begin{align*} s&=T_{\rm{max}}-T(\mathtt{1,000})\\ &=\mathtt{2,786.15}-\mathtt{2,017.56}\\ &=\mathtt{\$768.59} \end{align*} Change in savings, \begin{align*} \mathtt{Let\ }s_{1}&=\mathtt{\$758.62}\ \mbox{(page\ \pageref{amind})}\\ s_{2}&=\mathtt{\$768.59}\\[12pt] \Delta s&=\left|\ \mathtt{768.59}-\mathtt{758.62}\ \right|\\ &=\mathtt{\$9.97} \end{align*} \newpage Total payments, \begin{align*} \\\mathtt{Let\ } a&=\mathtt{200}\\[12pt] \therefore n&=\mathtt{11\ months}\\ B_{10}&=\mathtt{\$21.42}\\ O_{10}&=\mathtt{\$43.50}\\[12pt] T(a)&=(\mathtt{10}) a+B_{10}+\big\lfloor{\left(B_{10}\cdot i\right)\times 100}\big\rceil\div 100+O_{10}\\ T(\mathtt{200})&=(\mathtt{10}) \mathtt{200}+\mathtt{21.42}+\big\lfloor{\left(\mathtt{21.42}\cdot \mathtt{0.00583...}\right)\times 100}\big\rceil\div 100+\mathtt{43.50}\\ &=\mathtt{2,000}+\mathtt{21.42}+\mathtt{0.12}+\mathtt{43.50}\\ &=\mathtt{\$2,065.04} \end{align*} \begin{align*}\\[-36pt] T_{\rm{max}}&=\mathtt{\$2,786.15}\ \mbox{(page\ \pageref{amind})} \end{align*} \vspace{12pt} Savings, \begin{align*} s&=T_{\rm{max}}-T(\mathtt{200})\\ &=\mathtt{2,786.15}-\mathtt{2,065.04}\\ &=\mathtt{\$721.11} \end{align*} Change in savings, \begin{align*} \mathtt{Let\ }s_{1}&=\mathtt{\$768.59}\ \mbox{(page\ \pageref{amine})}\\ s_{2}&=\mathtt{\$721.11}\\[12pt] \Delta s&=\left|\ \mathtt{721.11}-\mathtt{768.59}\ \right|\\ &=\mathtt{\$47.48} \end{align*} \end{example} \newpage \section{Derivation of Monthly Compound Interest Rate} \begin{definition}\label{def1} If interest is compounded, it is compounded daily. 
Let the daily payment balance be a function of each day's previous balance and the annual interest rate:
$$B_{d}=B_{d-1}+B_{d-1}\frac{r}{365.25},$$
for number of days, $d=1,\ 2,\ 3,\ \dots,\ 30,\ \frac{365.25}{12}$, where $\frac{365.25}{12}$ is the average number of days per month. Treat the last day as a partial day.
\end{definition}
\begin{theorem}
Monthly compound interest rate will be a function of the annual rate:
$$i=\left(1+\frac{r}{365.25}\right)^{\frac{365.25}{12}}-1$$
\end{theorem}
\newcommand{\bo}{p\left(1+\frac{r}{365.25}\right)} % shortens upcoming equations
\newcommand{\ad}{\frac{365.25}{12}}
\setcounter{equation}{1} % increment counter by 1
\begin{proof}
Using definition~\ref{def1}, find average monthly balance, $B_{\ad}$, and simplify.
\begin{subequations}
\begin{align*}
B_{1}&=B_{0}+B_{0}\frac{r}{365.25}\\
&=p+p\frac{r}{365.25}\\
&=p\left(1+\frac{r}{365.25}\right)\\[12pt]
B_{2}&=B_{1}+B_{1}\frac{r}{365.25}\\
&=\bo+\bo\frac{r}{365.25}\\
&=\bo\left(1+\frac{r}{365.25}\right)\\
&=\bo^{2}\\[12pt]
B_{3}&=B_{2}+B_{2}\frac{r}{365.25}\\
&=\dots=\bo^{2}\left(1+\frac{r}{365.25}\right)\\
&=\bo^{3}\\[12pt]
&\vdots\\[12pt]
B_{\ad}&=B_{\ad-1}+B_{\ad-1}\frac{r}{365.25}\\[6pt]
&=\dots=\bo^{\ad}\yesnumberequation\label{result1}
\end{align*}
Write equation~\ref{result1} as a function of $i$.
\begin{gather*}
B_{\ad}=p\left(1+i\right)\yesnumberequation\label{result2}
\end{gather*}
Combine equations~\ref{result1} and~\ref{result2} to solve for $i$.
\begin{gather*}
p\left(1+i\right)=\bo^{\ad}\\[6pt]
i=\left(1+\frac{r}{365.25}\right)^{\frac{365.25}{12}}-1 \tag*{\qedhere} % places qed in line with equation
\end{gather*}
\end{subequations}
\end{proof}
\setlength\parindent{0pt}
Numerically, do not round any terms in the equation or solution itself; otherwise, solutions to equations that depend on the interest rate may lose precision. The solution will be a decimal, not a percentage; this is because $i$ is only meant to be an interim calculation.
\vspace{12pt}
\begin{remark}
The researcher is only aware of one instance in which interest is compounded: when loans are capitalized as students enter repayment. Nevertheless, he has mentioned interest being compounded, purely for the purpose of mathematical exploration.
\end{remark}
\newpage
\section{Derivation of Ten-Year Minimum Payment}
\begin{definition}
Student loan payments are made in monthly installments. Let the monthly principal balance be a function of each month's previous balance, the interest rate, minimum payment and proportion of interest that is paid, such that:
\begin{equation}
B_{m}=B_{m-1}-\big[a_{\rm{min}}-\alpha\left(B_{m-1}\cdot i\right)\big],\label{eq}
\end{equation}
for $m=1,\ 2,\ 3,\ \dots,\ 120$ and $0\leq\alpha\leq1$, where $120$ is the number of months in ten years. We want final balance, $B_{120}=0$.
\end{definition}
\renewcommand{\base}{\left(1+\alpha\cdot i\right)}
\begin{theorem}
Minimum monthly payment within ten years will depend on $i$ and $\alpha$:
\small
\[
a_{\rm{min_{120}}}=
\left\{
\begin{array}{l l}
\\[-6pt]
\left\lceil{\dfrac{p}{120}\times 100}\right\rceil\div 100&\quad\mbox{if } i>0 \mbox{ and }\alpha=0\\[18pt]
\left\lceil{\dfrac{p}{120}\times 100}\right\rceil\div 100&\quad\mbox{if } i=0\\[12pt]
\left\lceil{\dfrac{\alpha\left(p\cdot i\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100,\mbox{ for }\alpha\cdot i\neq 0 &\quad\mbox{if } i>0\mbox{ and }0<\alpha\leq1 \\[18pt]
\end{array}
\right.
\] \end{theorem} \renewcommand{\bo}{p\left(1+\alpha\cdot i\right)-a_{\rm{min}}} \newcommand{\bt}{p\left(1+\alpha\cdot i\right)^{2}-\left(1+\alpha\cdot i\right)a_{\rm{min}}-a_{\rm{min}}} \renewcommand{\base}{\left(1+\alpha\cdot i\right)} % for B_{120} only \normalsize \begin{proof} Simplify equation~\ref{eq}, if possible, using each case of $i$ and $\alpha$. \begin{subequations} \begin{numcases}{B_{m}=} \nonumber\\[-3pt] \ B_{m-1}-a_{\rm{min}} &\quad\mbox{if $i>0$ and $\alpha=0$}\label{case1}\\[3pt] \ B_{m-1}-a_{\rm{min}} &\quad\mbox{if $i=0$}\label{case2}\\[3pt] \ B_{m-1}-\big[a_{\rm{min}}-\alpha\left(B_{m-1}\cdot i\right)\big] &\quad\mbox{if $i>0$ and $0<\alpha\leq 1$}\label{case3} \\[-9pt]\nonumber \end{numcases} \end{subequations} Using cases~\ref{case1} and~\ref{case2}, find $B_{120}$ and simplify. \begin{align*} B_{1}&=B_{0}-a_{\rm{min}}\\ &=p-a_{\rm{min}}\\[12pt] B_{2}&=B_{1}-a_{\rm{min}}\\ &=\left(p-a_{\rm{min}}\right)-a_{\rm{min}}\\ &=p-2a_{\rm{min}}\\[12pt] B_{3}&=B_{2}-a_{\rm{min}}\\ &=\dots=p-3a_{\rm{min}}\\[12pt] &\vdots\\[12pt] B_{120}&=B_{119}-a_{\rm{min}}\\ &=p-120a_{\rm{min}} \end{align*} Set $B_{120}=0$ to solve for $a_{\rm{min}}$. \begin{gather*} 0=p-120a_{\rm{min}}\\ p=120a_{\rm{min}}\\ a_{\rm{min}}=\frac{p}{120} \end{gather*} Numerically, terms in the solution may span more than two decimal places, but cents span only two. Also, we need to ensure that one repays enough money each month. So, round terms in the solution \textit{up} to the nearest two decimal places. \begin{gather*} a_{\rm{min}}=\left\lceil{\frac{p}{120}\times 100}\right\rceil\div 100 \end{gather*} Let $a_{\rm{min}}=a_{\rm{min_{120}}}$ to differentiate it from the absolute minimum payment. \begin{gather*} a_{\rm{min_{120}}}=\left\lceil{\frac{p}{120}\times 100}\right\rceil\div 100 \tag*{\qed} % cannot use \qedhere because not at \end{proof} \end{gather*} Using case~\ref{case3}, find $B_{120}$ and simplify. \begin{align*} B_{1}&=B_{0}-\big[a_{\rm{min}}-\alpha\left(B_{0}\cdot i\right)\big]\\ &=B_{0}+\alpha\left(B_{0}\cdot i\right)-a_{\rm{min}}\\ &=p+\alpha\left(p\cdot i\right)-a_{\rm{min}}\\ &=p\left(1+\alpha\cdot i\right)-a_{\rm{min}}\\[12pt] B_{2}&=B_{1}-\big[a_{\rm{min}}-\alpha\left(B_{1}\cdot i\right)\big]\\ &=B_{1}+\alpha\left(B_{1}\cdot i\right)-a_{\rm{min}}\\ &=\big[\bo\big]+\alpha\Big(\big[\bo\big]\cdot i\Big)-a_{\rm{min}}\\ &=\big[\bo\big]\big[1+\alpha\cdot i\big]-a_{\rm{min}}\\ &=p\left(1+\alpha\cdot i\right)^{2}-\left(1+\alpha\cdot i\right)a_{\rm{min}}-a_{\rm{min}}\\[12pt] B_{3}&=B_{2}-\big[a_{\rm{min}}-\alpha\left(B_{2}\cdot i\right)\big]\\ &=\dots=\big[\bt\big]\\ \begin{split} % equation is too long &\qquad\quad\;\:\,+\alpha\Big(\big[\bt\big]\cdot i\Big)-a_{\rm{min}} \end{split}\\ &=\big[\bt\big]\big[1+\alpha\cdot i\big]-a_{\rm{min}}\\ &=p\left(1+\alpha\cdot i\right)^{3}-\left(1+\alpha\cdot i\right)^{2}a_{\rm{min}} -\left(1+\alpha\cdot i\right)a_{\rm{min}}-a_{\rm{min}}\\[12pt] % equation is not as long as it looks &\vdots\\[108pt] % 12x9pt, acts as \pagebreak B_{120}&=B_{119}-\big[a_{\rm{min}}-\alpha\left(B_{119}\cdot i\right)\big]\\ &=\dots=\big[p\base^{119}-\base^{118}a_{\rm{min}}-\base^{117}a_{\rm{min}}\\ \begin{split} &\qquad\quad\;\:\,-\dots-a_{\rm{min}}\big]\big[1+\alpha\cdot i\big]-a_{\rm{min}} \end{split}\\ &=p\base^{120}-\base^{119}a_{\rm{min}}-\base^{118}a_{\rm{min}}\\ \begin{split} &\quad-\dots-\base a_{\rm{min}}-a_{\rm{min}} \end{split}\\ &=p\base^{120}-a_{\rm{min}}\sum_{m=1}^{120}\base^{m-1} \end{align*} Set $B_{120}=0$ to solve for $a_{\rm{min}}$. 
\begin{gather*}
0=p\base^{120}-a_{\rm{min}}\sum_{m=1}^{120}\base^{m-1}
\end{gather*}
\vspace{-12pt} % remove some padding above next equation
\begin{align*}
p\base^{120}&=a_{\rm{min}}\sum_{m=1}^{120}\base^{m-1}\\
p\base^{120}\times\base&=a_{\rm{min}}\sum_{m=1}^{120}\base^{m}\\
p\base^{120}-p\base^{120}\times\base&=a_{\rm{min}}\nonumber\\
\begin{split}
&\quad-\base^{120}a_{\rm{min}}
\end{split}\\[12pt]
p\base^{120}\big[1-\base\big]&=a_{\rm{min}}\big[1-\base^{120}\big]\\[12pt]
p\base^{120}\big[\alpha\cdot i\big]&=a_{\rm{min}}\big[\base^{120}-1\big]
\end{align*}
% using a mixture of styles (i.e., [12pt] and \vspace) because some equations are centered and some are aligned, and this affects spacing
\begin{gather*}
a_{\rm{min}}=\frac{\alpha\left(p\cdot i\right)\base^{120}}{\base^{120}-1},\mbox{ for }\alpha\cdot i\neq 0
\end{gather*}
Numerically, do not round any terms in the equation; round only those in the solution. However, terms in the solution may again span more than two decimal places, and we need to ensure that one repays enough money each month. So, round terms in the solution \textit{up} to the nearest two decimal places.
\begin{gather*}
a_{\rm{min}}=\left\lceil{\frac{\alpha\left(p\cdot i\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100,\mbox{ for }\alpha\cdot i\neq 0
\end{gather*}
Let $a_{\rm{min}}=a_{\rm{min_{120}}}$ to differentiate it from the absolute minimum payment.
\begin{gather*}
a_{\rm{min_{120}}}=\left\lceil{\frac{\alpha\left(p\cdot i\right)\base^{120}}{\base^{120}-1}\times 100}\right\rceil\div 100,\mbox{ for }\alpha\cdot i\neq 0 \tag*{\qedhere}
\end{gather*}
\end{proof}
\newpage
\begin{remark}
The reason for using $\alpha\left(B_{m-1}\cdot i\right)$ in equation~\ref{eq}, instead of the corresponding expression $\big\lfloor{\alpha\left(B_{m-1}\cdot i\right)\times 100}\big\rceil\div 100$ of section 2 of ``Deeper Insight into the iOS App'', is that, if $i>0$ and $0<\alpha\leq1$ and one uses the latter expression to find a simplified equation for $B_{120}$, factoring $p+\big\lfloor{\alpha\left(p\cdot i\right)\times 100}\big\rceil\div 100-a$ from the \texttt{nint} function in $B_{2}$ will not be possible. Subsequent equations (i.e., $B_{3}$, $B_{4}$, $B_{5}$, $\dots$) will continue to expand, with no way to simplify them.
\end{remark}
\vspace{12pt}
\begin{remark}
The caveat to using the former expression, though, is that each numerical solution of $a_{\rm{min_{120}}}$ for case~\ref{case3} could still deviate by one cent or more from the solution that would have been obtained using the latter expression. For example, for default parameters (i.e., $p_{\rm{min}}=\$2,000$, $p_{\rm{max}}=\$10,000$, $N=40$, $\mbox{APR}=4.53\%$ [not compounded] and $\alpha =1$) no numerical solutions deviate. If $\mbox{APR}=5\%$, all other parameters being the default, one solution deviates by a cent; the correct solution for $p=\$4,600$ should be $a_{\rm{min_{120}}}=\$48.79$, not $\$48.80$. If one broadens or alters parameters, more solutions may deviate.
Roughly 2.14\% of all solutions, for case~\ref{case3}, do in fact deviate, by at most one cent, 56\% one cent high and 44\% one cent low.\footnote{Based on over one million computations of $a_{\rm{min_{120}}}$ given $p_{\rm{min}}=\$0$, $p_{\rm{max}}=\$100,000$ and $N=100,000$, provided $\alpha =1$ for $\mbox{APR}=4.45\%$, $\mbox{APR}=5\%$ and $\mbox{APR}=7.4\%$ (all not compounded) and $\mbox{APR}=2.98\%$, $\mbox{APR}=4.45\%$ and $\mbox{APR}=5\%$ (all compounded), provided $\alpha =0.25$, $\alpha =0.71$ and $\alpha =0.2$ for $\mbox{APR}=4.45\%$ (not compounded) and provided $\alpha =0.6$ for $\mbox{APR}=4.45\%$ (compounded).} Open ``Ten-Year\_Minimum\_Errors.ipynb'' in SageMath Notebook, customize the cell, and run it to see for one self. For one's convenience, the iOS app automatically checks and corrects for such errors in precision. (None of the latter four numerical solutions in section~\ref{errorcheck} deviated.) \end{remark} \vspace{12pt} \begin{remark} Simplifying the fractional part $$\frac{\base^{120}}{\base^{120}-1},\mbox{ for }\alpha\cdot i\neq 0$$ is not possible. $\alpha$ is at most equal to $1$, and $\mbox{APR}$ is probably at most $10\%$ (i.e., $i$ is at most $0.00836...$ [compounded] ). $$\frac{\left(1+\mathtt{1}\cdot\mathtt{0.00836...}\right)^{120}}{\left(1+\mathtt{1}\cdot\mathtt{0.00836...}\right)^{120}-1}=\frac{\left(\mathtt{1.00836...}\right)^{120}}{\left(\mathtt{1.00836...}\right)^{120}-1}=\frac{\mathtt{2.717...}}{\mathtt{2.717...}-1}=\mathtt{1.582...}$$ In fact, all other quotients for values of $\alpha$ and $i$ will exceed $1.582...$ Only if all quotients were equal to $1$, could one cancel the numerator and denominator from the fractional part. \end{remark} \end{document}
{ "alphanum_fraction": 0.6280732851, "avg_line_length": 52.6069131833, "ext": "tex", "hexsha": "a9972f1c14ad3b55d7b183f5de83254f7abd011b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f452f71fbdea4aea7ca267bb5937712e82cb6d4a", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "saegl5/check_student_loans_other_resources", "max_forks_repo_path": "LaTeX/extra_insight.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f452f71fbdea4aea7ca267bb5937712e82cb6d4a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "saegl5/check_student_loans_other_resources", "max_issues_repo_path": "LaTeX/extra_insight.tex", "max_line_length": 1627, "max_stars_count": null, "max_stars_repo_head_hexsha": "f452f71fbdea4aea7ca267bb5937712e82cb6d4a", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "saegl5/check_student_loans_other_resources", "max_stars_repo_path": "LaTeX/extra_insight.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 27516, "size": 65443 }
%% This is file `askinclude-a11.tex', \chapter{Chapter A} \expandafter\let\csname filea11\endcsname=Y \endinput %% %% End of file `askinclude-a11.tex'.
{ "alphanum_fraction": 0.7254901961, "avg_line_length": 19.125, "ext": "tex", "hexsha": "35c51bc19900755a68e6212cf054878cae9e0e40", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "339e49523ed780542aa2d29d07d4156a45ffaa9f", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "ho-tex/askinclude", "max_forks_repo_path": "testfiles-noxetex/support/askinclude-a11.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "339e49523ed780542aa2d29d07d4156a45ffaa9f", "max_issues_repo_issues_event_max_datetime": "2020-04-13T12:52:53.000Z", "max_issues_repo_issues_event_min_datetime": "2020-04-11T16:51:24.000Z", "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "ho-tex/askinclude", "max_issues_repo_path": "testfiles-noxetex/support/askinclude-a11.tex", "max_line_length": 43, "max_stars_count": null, "max_stars_repo_head_hexsha": "339e49523ed780542aa2d29d07d4156a45ffaa9f", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "ho-tex/askinclude", "max_stars_repo_path": "testfiles-noxetex/support/askinclude-a11.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 51, "size": 153 }
For this project we worked in pair with a partner, my partner was Valentin Lecompte, ens18vle. We did not fully separate our work in two but my partner focused more on the electronic part of the project while I was more focused on the development part. My report will then describe the program where my partner's report will talk about the electronic part and describe also the structure of the robot. The used hardware for this project is 1 button, 2 leds, 1 potentiometer, 2 DC motors, a level shifter, a numeric to analogy converter, a level converter, a IR sensors array and a Beaglebone. The developed program uses the PyBBIO library, used as an interface between the code and the hardware. All the source code can be found here: \href{http://github.com/ThomasRanvier/line_following_robot}{link to my github}. \section*{Linking the motors to the Beaglebone} The first task that we had to do was to be able to control the motors as we want. The introductory lab helped us a lot since we now knew that we had to develop a PI controller to control the motor more easily. We connected the motors to the Beaglebone, we use two PWM pins to send the power that will make their speed vary and we get the results from their encoders through the 2 specialised pairs of EQEP pins. \subsection*{Encoders} I initially tried to implement the encoder entirely by myself. The main issue was that I used normal pins on the Beaglebone because I did not know that we had to use specialised pins for this. The result was that the encoder that I implemented worked most of the time but sometimes the pins were not able to receive the informations from the motors quick enough and so the result was not constantly good. After that, to implement the encoders I used the RotaryEncoder object from the PyBBIO library. With this object we can easily recuperate the position of the encoder. I created a class named Encoder to use more simply the RotaryEncoder object. I also scale the result of the RotaryEncoder between 0 and 255 by multiplying it by a gain defined as $\dfrac{255}{8000}$. \subsection*{PI controllers} To implement the PI controllers I created a PI\_controller class, it is used to instantiate a PI controller with the $P$ and $K$ values that we want. The role of the PI controller is to take the encoder output as entry and return the speed to send to the linked motor in order to get the closest possible to a defined speed. The speed that we want to achieve is called the $wanted\_speed$ and it can be set through the $set\_wanted\_speed$ function. We can also set the limits of the PI controller, we used $0$ and $255$ as limits since the values that we can send through the PWM pins are on one Byte. We used the step measurement that we did on the first lab to set the $P$ and $I$ values of the PI controller: \begin{align*} &P \simeq 1.00726 &I \simeq 18.38404 \end{align*} \section*{Read and analyse the Infra-Red sensors values} Once we were able to control both the motors as desired we had to read and analyse the informations from the IR sensors to automatically set the wanted speeds of the two PI controllers to make the robot follow the line. \subsection*{Recuperate the informations} The IR sensors informations can be recuperated from the SPI pins, it uses an analogy to numerical hardware converter. In order to get the informations from the IR sensors and put them into a nice usable form I created the IR\_sensors class, it is used as an interface between the program and the IR sensor array. 
This class contains a 'private' method called '\_\_adc\_read', it returns the raw informations from the 8 IR sensors, it is used in the public 'get\_activations' method which can be used by the user. This last function returns the informations in a dictionary which contains the current and also the last activation. The current activation is a list of 8 digital values, I used an arbitrary threshold value of 300 and all the raw values from the '\_\_adc\_values' that are above the threshold are converted to 1 and all the ones under are converted to 0. It makes it very easy to know what sensor is activated or not. The last activation is of the same form, a list of 8 digital values, but instead of the current activation it is the last activation, meaning the last time that we detected an activated sensor. Indeed this list is used when the current activation detects nothing, it means that the robot is out of the line, when in that case we need to know in what direction it exited the line so that we can make it turn in the right direction. \subsection*{Analyse the informations} Once we get the current and last activation in a nice form we need to analyse them. This analyse takes place in the robot class, it uses some pre-defined values: \begin{align*} &ir\_sensor\_weights = [-9, -7, -5, -1, 1, 5, 7, 9]\\ &ir\_sensor\_max\_weight = 12\\ \end{align*} We have three cases. \subsubsection*{First case} The 8 values of the current IR activation are at 1. It basically means that all the IR sensors are detecting no surface under them. It means that the robot is lifted up the ground, to move it elsewhere for example. In that case we set the wanted speed of both the PI controllers to 0, it makes the robot stop moving. \subsubsection*{Second case} The 8 values of the current IR activation are at 0. It means that the robot detects ground but no line, it is when the last activation will be useful. We have to use the last activation to figure out in what direction to turn so we can get back to the line. There are three cases: \begin{enumerate} \item The most extreme sensor on the right was previously activated, it means that the robot exited the line by going to the left, we have to make it turn on the right. To do so we slow down the right wheel by using this formula $ir\_sensor\_max\_weight \cdot scale \cdot speed$. Here the $speed$ value is the speed that we ideally want the robot to always go and the $scale$ is a defined value that will make the opposite wheel slow more or less depending on its value. The value that we usually use for the $scale$ is $\dfrac{1}{ir\_sensor\_max\_weight}$, it makes the opposite wheel stop totally in the case where the robot is out of the line. \item The most extreme sensor on the left was previously activated, it means that the robot exited the line by going to the right, we have to make it turn on the left. \item The last case is when none of the above were activated, it means that the line stopped under the robot, then the robot should continue straight forward to reach the line when it starts again further away. \end{enumerate} \subsubsection*{Last case} Else we have to analyse the current IR activation more precisely. The first thing is to compute the weight of the current activation, the used function takes into entry the current activation. The activation list is divided in two, it only takes into account the most extreme activation on the right and on the left. 
This is the pseudocode of the function so it is easier to understand how it works: \FloatBarrier \begin{algorithm} \caption{Compute the weight given an IR activation} \label{compute_ir_weight} \begin{algorithmic}[1] \Procedure{$\_\_compute\_ir\_weight$}{$activation$} \State $weight \gets 0$ \For{$i\texttt{ }in\texttt{ }range(0, 4)$} \If{$activation[i] == 1$} \State $weight \gets weight + ir\_sensor\_weights[i]$ \State \textbf{break} \EndIf \EndFor \For{$i\texttt{ }in\texttt{ }range(7, 3, -1)$} \If{$activation[i] == 1$} \State $weight \gets weight + ir\_sensor\_weights[i]$ \State \textbf{break} \EndIf \EndFor \State \textbf{return} $weight$ \EndProcedure \end{algorithmic} \end{algorithm} \FloatBarrier With this computed weight we know that if it is positive we have to make the robot turn to the right and conversely. The speed of the opposite wheel is then computed in that way: $speed - (scale \cdot abs(weight) \cdot speed)$. \section*{Results and improvements} Our robot is able to follow the line, go straight forward if the line suddenly stops and go in the right direction if it goes out of the line in a sharp turn. Under is a picture of the robot. \begin{figure}[h] \centering\includegraphics[width=0.5\textwidth]{robot.jpg} \end{figure} \subsection*{The lines in parallel} In one of the two circuits there is a part of the road where there are three lines in parallel, if the robot is not exactly on the right line it will eventually detect one of the two others. In that case since we only consider the extreme values of our sensors the robot would sometimes suddenly turn into an other direction. To fix this issue I added a function that detects when there are more than one line detected. In that case the negative weights become positive and conversely, this will make the robot turn in the other way. The result is that it will avoid going on the parasite lines. \subsection*{Various improvements} We added some functionalities that make it easier to use the robot when it is not connected to a computer. We added one green LED, one red LED, one button and one potentiometer. When the Beaglebone starts our programs is started as a daemon, when the program starts the two LEDs blink simultaneously until the button is pressed. Once the button has been pressed the main function is launched, it will create all the instances to control the robot and when everything is ready the green LED goes on, at that point the robot is in pause so nothing else will happen. When the green LED is on we can press once again the button, it will light on the red LED to signal the state of the robot and it will make the robot start reading the IR sensors datas and start following the line if there is one to follow. When we want to stop the robot we can press the button again, the red LED goes off. We also added a potentiometer, it makes the speed of the robot vary between 0 and 255. \subsection*{Possible improvements} There are a lot of possible improvements to do, we could have added ultra-sonic sensors to try and detect obstacles. We could also have added a screen to make it easier to use the robot by displaying interesting informations.
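\subsection*{Control logic sketch}
As a compact summary of the behaviour described in the previous sections, a minimal Python sketch of the steering logic is given below. The names and structure are illustrative only and simplified (the actual implementation, with the Encoder, PI\_controller and IR\_sensors classes, is in the GitHub repository linked in the introduction); the lifted-robot and out-of-line cases are omitted for brevity.
\begin{verbatim}
# Illustrative sketch only; constants are taken from the text above.
IR_SENSOR_WEIGHTS = [-9, -7, -5, -1, 1, 5, 7, 9]
IR_SENSOR_MAX_WEIGHT = 12
SCALE = 1.0 / IR_SENSOR_MAX_WEIGHT

def compute_ir_weight(activation):
    # Sum of the most extreme activated sensor on each half (see the pseudocode above).
    weight = 0
    for i in range(0, 4):           # most extreme activated sensor on one half
        if activation[i] == 1:
            weight += IR_SENSOR_WEIGHTS[i]
            break
    for i in range(7, 3, -1):       # most extreme activated sensor on the other half
        if activation[i] == 1:
            weight += IR_SENSOR_WEIGHTS[i]
            break
    return weight

def wanted_speeds(activation, speed):
    # Returns the (left, right) wanted speeds for the two PI controllers.
    weight = compute_ir_weight(activation)
    slowed = speed - SCALE * abs(weight) * speed
    if weight > 0:                  # positive weight: turn right (slow the right wheel)
        return speed, slowed
    if weight < 0:                  # negative weight: turn left (slow the left wheel)
        return slowed, speed
    return speed, speed             # weight == 0: keep both wheels at the same speed
\end{verbatim}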
{ "alphanum_fraction": 0.7565192473, "avg_line_length": 61.2222222222, "ext": "tex", "hexsha": "47403dfe16d58c6320059761bea64da645148bc5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7fd88b7d0d09f6901fbb0c79f62e483ad6cfb089", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ThomasRanvier/line_following_robot", "max_forks_repo_path": "report_thomas/sections/content.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7fd88b7d0d09f6901fbb0c79f62e483ad6cfb089", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ThomasRanvier/line_following_robot", "max_issues_repo_path": "report_thomas/sections/content.tex", "max_line_length": 240, "max_stars_count": null, "max_stars_repo_head_hexsha": "7fd88b7d0d09f6901fbb0c79f62e483ad6cfb089", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ThomasRanvier/line_following_robot", "max_stars_repo_path": "report_thomas/sections/content.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2406, "size": 10469 }
\documentclass{tufte-handout} \newcommand{\blenderVersion}{2.79} \newcommand{\programName}{Rhorix} \newcommand{\fullName}{Rhorix 1.0.0, M J L Mills, 2017, github.com/MJLMills/rhorix} \newcommand{\programWebsite}{www.mjohnmills.com/rhorix} \newcommand{\programCitation}{Mills, Sale, Simmons, Popelier, J. Comput. Chem., 2017, 38(29), 2538-2552} \newcommand{\addonPath}{File $\rightarrow$ User Preferences $\rightarrow$ Add-Ons} \newcommand{\operatorPath}{File $\rightarrow$ Import $\rightarrow$ Quantum Chemical Topology (.top)} \newcommand{\navLink}{www.blender.org/manual/editors/3dview/} \newcommand{\blenderSite}{www.blender.org} \newcommand{\manualLink}{www.blender.org/manual/} \newcommand{\downloadLink}{www.blender.org/download} \newcommand{\enterCamera}{keypad $0$} \newcommand{\flyMode}{Shift-F} \newcommand{\renderKey}{F12} \newcommand{\leaveRender}{Esc} \newcommand{\contactEmail}{[email protected]} \newcommand{\githubAddress}{github.com/MJLMills/rhorix} \usepackage{graphicx} % allow embedded images \setkeys{Gin}{width=\linewidth,totalheight=\textheight,keepaspectratio} \graphicspath{{graphics/}} % set of paths to search for images \usepackage{amsmath} % extended mathematics \usepackage{booktabs} % book-quality tables \usepackage{units} % non-stacked fractions and better unit spacing \usepackage{multicol} % multiple column layout facilities \usepackage{lipsum} % filler text \usepackage{fancyvrb} % extended verbatim environments \fvset{fontsize=\normalsize}% default font size for fancy-verbatim environments % Standardize command font styles and environments \newcommand{\doccmd}[1]{\texttt{\textbackslash#1}}% command name -- adds backslash automatically \newcommand{\docopt}[1]{\ensuremath{\langle}\textrm{\textit{#1}}\ensuremath{\rangle}}% optional command argument \newcommand{\docarg}[1]{\textrm{\textit{#1}}}% (required) command argument \newcommand{\docenv}[1]{\textsf{#1}}% environment name \newcommand{\docpkg}[1]{\texttt{#1}}% package name \newcommand{\doccls}[1]{\texttt{#1}}% document class name \newcommand{\docclsopt}[1]{\texttt{#1}}% document class option name \newenvironment{docspec}{\begin{quote}\noindent}{\end{quote}}% command specification environment \title{\programName{} - Quick Start Guide} \author{Matthew J L Mills} \begin{document} \maketitle % provide already setup default environment for rendering as a blender settings file \begin{abstract} \programName{} is an add-on (plugin) program that allows users to draw (in 3D space) and render (in 2D flatland) a representation of the quantum chemical topology (QCT) of a chemical system with the powerful Blender\sidenote{\blenderSite{}} tool. This 'Quick Start' document is intended for those users currently without interest in how the program works who simply want to get started making images. The quickest path from QCT calculation to rendered picture is discussed, explaining how to immediately start using \programName{} with Blender to generate a standard-appearance rendered image from a QCT data file. \par{} Those users requiring more detail should consult the \programName{} paper\sidenote{\programCitation{}} and Blender Manual\sidenote{\manualLink{}}. All users are urged to first look therein for answers not present in this guide before contacting the author\sidenote{\contactEmail{}}. \end{abstract} \section{Obtaining the Program} The source code for \programName{} can be downloaded from the release page on GitHub\sidenote{\githubAddress{}}. 
The supplied archive contains the Blender Add-On Python code, an XML document type definition file, filetype conversion Perl scripts, example projects and documentation. \section{Installing the Add-On} The currently supported version (\blenderVersion{}) of Blender must be installed before proceeding with the \programName{} installation. Installing Blender is not covered here; please consult their website\sidenote{\downloadLink{}} for instructions for your OS\sidenote{Windows, Mac OSX, GNU/Linux and FreeBSD are supported.}. Once a working version of Blender is in place, the script can be installed within it. \par{} Installation of the script requires the program contents be extracted into their own directory within the Blender addons directory, the location of which is OS-dependent. After placing the files in the appropriate location, open Blender, navigate to User Preferences in the File menu and choose the Add-Ons tab\sidenote{\addonPath{}}. Use the search box to find \programName{}. To activate, tick the checkbox to the left of the entry. The Blender configuration can be saved for all future documents\sidenote{Ctrl+U}, or alternatively the Add-On can be re-ticked in each new document used for QCT drawing. The script, when the checkbox is selected, will add the ability to read an XML topology file into Blender in various places. \section{Converting QCT Output Files} In order to model and render a topology, the topology must first be known. Therefore, a charge density analysis calculation is the first step in producing an image. Instructions on producing the data necessary to render an image of the topology should be found in the documentation of your particular QCT analysis program. \programName{} currently supports MORPHY (mif) and AIMAll (viz) output files (and by extension supports any \textit{ab initio} code that produces an AIM wavefunction), which must be converted to the .top filetype before they can be read into Blender. Several Perl scripts are provided for this purpose, named for the filetype that they convert from. The Readme file in the appropriate program directory gives detailed information on using these scripts. \section{Reading Topology Files} After successful conversion, the .top file can be read into Blender via \programName{} by clicking the appropriate menu item\sidenote{\operatorPath{}} and navigating to the file's location. The appearance of the topology will follow the default mapping described in the paper, i.e. standard colors and sphere radii will be used. Following successful reading of a .top file, a 3D model of the topology will appear in the View window. \section{Useful GUI Buttons} \programName{} provides a set of useful buttons for commonly needed manipulation tasks. These can be used to select and resize various types of gradient path, turn on or off all CPs of a given type and switch to stereoscopic rendering. The buttons can be found in the left-hand pane under Tools. \section{Placing the Camera \& Rendering an Image} The camera must be positioned such that the appropriate part of the topology will be rendered onto the 2D image plane. Rendering can be imagined as the process of taking a picture of the topology with the virtual camera, and so all of the camera settings affect the outcome. \programName{} automatically positions the camera such that it is outside the system and points at the origin. This is intended to provide an adequate starting point only. A simple three-point lighting system is also created. 
The user is referred to the appropriate sections of the Blender documentation for descriptions of the camera settings. The minimal demand on the user is typically to move the camera to get the desired part of the system in the final render. \par{} There are 2 important 3D viewports for the quickstart user. The program will show the 3D view window once the system is read in. The purpose of this view is to allow the manipulation of objects and materials, and provide a pre-rendered image of the scene. The second viewport is the Camera View. The camera view shows you the orientation of the objects which will be rendered to the final image. Thus you should check that your system appears correctly drawn in the 3D view first, and then enter the camera view mode to set the viewpoint of your final 3D render. \par{} To move the camera, it is recommended that you enter the camera view mode\sidenote{\enterCamera{}}, and then use the fly mode\sidenote{\flyMode{}} to position the camera using your mouse. The enter key freezes the camera when you find the viewpoint that you want. For larger systems, the z-clipping (which hides objects from the view deemed too far from the camera) will eliminate parts of your system. If this happens, select the camera, and then its object data panel. Increase the value of 'End' in the 'Clipping' section until the part of your scene you want returns to the camera view. Pressing the \renderKey{} key will invoke the Blender Render engine and produce a rendered image of your scene from the chosen viewpoint. The \leaveRender{} key leaves the render mode if you want to make further changes. The camera can be moved in other ways once in camera view mode. Please see link\sidenote{\navLink{}} for more details. \section{Manipulating Appearance} \section{Summary} The above information constitutes the minimum required to produce a \programName{} image from a QCT calculation output file. For further information on the theoretical background and function of RhoRix, please consult the Manual. For a detailed description of the implementation please consult the paper describing this work\sidenote{\programCitation{}}. \par{} It is hoped that the program website\sidenote{\programWebsite{}} will become a repository of completed images and example .blend files in order to inform and inspire future work. Please consider sharing your final Blender save file and rendered image with us, as well as the location of any publications that use this software. If you use \programName{}, please give credit by citing both the \programName{} paper and the program\sidenote{\fullName{}} where appropriate. % Import the frequently asked questions file here! \end{document}
{ "alphanum_fraction": 0.7932880379, "avg_line_length": 77.0952380952, "ext": "tex", "hexsha": "c2f6cfc1d86633bb731aade5a0f26e4f2eaaca31", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1ee0c5a4ad6112df6f37ba03f768326450a9e980", "max_forks_repo_licenses": [ "BSD-3-Clause-LBNL" ], "max_forks_repo_name": "MJLMills/QCT4Blender", "max_forks_repo_path": "documentation/QuickStart.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1ee0c5a4ad6112df6f37ba03f768326450a9e980", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause-LBNL" ], "max_issues_repo_name": "MJLMills/QCT4Blender", "max_issues_repo_path": "documentation/QuickStart.tex", "max_line_length": 470, "max_stars_count": null, "max_stars_repo_head_hexsha": "1ee0c5a4ad6112df6f37ba03f768326450a9e980", "max_stars_repo_licenses": [ "BSD-3-Clause-LBNL" ], "max_stars_repo_name": "MJLMills/QCT4Blender", "max_stars_repo_path": "documentation/QuickStart.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2238, "size": 9714 }
Nuclear forensics comprises a large part of an investigation into a nuclear incident, such as interdicted nuclear material or the detonation of a weapon containing radioactive components. The forensics portion of the investigation encompasses both the analysis of nuclear material and/or related paraphrenalia as well as the interpretation of these results to establish nuclear material provenance. The former has many technical aspects, relying on a range of nuclear science and chemistry. The latter involves intelligence and political considerations of the material analyses for attribution. This review will only consider the technical portion of the nuclear forensics workflow. First discussed are the types of forensic investigations in Section \ref{sec:types}, followed by an introduction to inverse problem theory in Section \ref{sec:inverse} as a way to evaluate the results of forensic methods. \subsection{Types of Nuclear Forensics Investigations} \label{sec:types} The technical programs researching improvements to the \gls{US}'s nuclear forensics capabilities are split between the type of material being investigated. The analysis of irradiated debris from a weapon has different collection and measurement requirements than a mass of \gls{SNM}. This separates the field into post-detonation and pre-detonation nuclear forensics. While both are discussed below in Sections \ref{sec:postdet} and \ref{sec:predet}, respectively, there is more focus on pre-detonation topics since this work is based on \gls{SNF}. \subsubsection{Post-Detonation} \label{sec:postdet} Post-detonation nuclear forensics requires a diverse set of measurements to obtain the following information: identification of nuclear material, reconstruction of the weapon device design, and reactor parameters for nuclear material provenance. This could apply to an improvised nuclear device or a nuclear bomb. In conjunction with the measurements and characterization are a large array of logistical concerns, including recovery efforts, personnel safety, and material collection cataloging and transportation. In the case of a full explosion using fissile material, the collection of materials and debris occurs as quickly as possible. It can be in the crater created by the explosion, further away from the center in the fallout, and in the atmosphere above or downwind from the detonation. These are collected by finding glass-like material near the epicenter, debris swipes in the fallout region, and advanced particle collection in the atmosphere via an airplane, respectively. While the epicenter cannot be reached for some time, the debris and atmosphere measurements of radioactive material can provide the yield of the weapon and whether it was made using uranium or plutonium. This along with other physical and chemical measurement allow device reconstruction to begin. Attribution begins to narrow to specific countries or organizations based on this information. \cite{aps_aaas_forensics} The research needs for post-detonation focus on material collection and analysis as well as nuclear device modeling for reconstruction purposes. Ideally, most material sample collection would be done using automatic instrumentation. Additionally, bolstering the existing device modeling code for reverse engineering is needed. And, as with pre-detonation, a database of standard materials must be both strengthened and centralized. 
\cite{aps_aaas_forensics} \subsubsection{Pre-Detonation} \label{sec:predet} Pre-detonation nuclear forensics investigations occur for every scenario in which non-detonated nuclear material has been found or intercepted. Although this could be an intact bomb, it is more likely that \gls{SNM} intended for a weapon would be the target of an investigation. Thus, the range of intact materials for measurement could be as small as a plutonium sample or as large as a shipment of \gls{UOC}. The goal is to determine the provenance of the \gls{SNM}, which in the case of \gls{SNF} is generally done by reconstructing the irradiation process that created the material. For \gls{SNF}, where the material was obtained is the first step of the investigation. This would be gleaned from the reactor parameters and storage history (e.g., reactor type, cooling time, burnup), which requires first measuring and calculating certain values: isotopic ratios, concentration of chemical compounds, or existence of trace elements. Both radiological methods (e.g., gamma spectroscopy) and ionization methods (e.g., mass spectrometry) measure these quantities. Although this is less of a humanitarian emergency than a post-detonation investigation, it is still important to have rapid characterization capabilities via on-site non-destructive analyses. As previously discussed in Section \ref{sec:motivation}, however, the faster measurements result in poor measurement quality. Also, there is a need for research to combat the database issues, as an insufficient forensics database can reduce the accuracy and/or certainty of a reconstructed set of reactor parameters. Another area of research is deeper study of known forensics signatures or discovering new signatures with modeling, simulation, or statistical methods. The top panel in Figure \ref{fig:nfworkflow} shows an example technical nuclear forensics workflow as it could occur in the real world for a pre-detonation scenario. After a sample is obtained, characterization begins. Next, the results of these techniques are then compared against existing standard materials databases to obtain the desired reactor parameters. These steps would be performed iteratively in a real investigation, first using non-destructive measurements, and then destructive measurements. The following steps in Figure \ref{fig:nfworkflow} are obtaining reactor history information, if available, and reporting the results to the investigators. \\ \begin{figure}[!h] \makebox[\textwidth][c]{\includegraphics[width=1.1\linewidth]{./chapters/intro/ForensicsWorkflows.png}} \caption{Example Forensics Workflows in Real-World Scenario} \label{fig:nfworkflow} \end{figure} Next, the middle and bottom panels in Figure \ref{fig:nfworkflow} are analagous physical and computational experimental workflows, respectively. Both panels would have validated measurements of \gls{SNF}; the middle panel shows this being done in the laboratory and the bottom panel shows that these are values from a simulation. The goal of an experimental laboratory study is to test or develop empirical relationships between forensics signatures and the desired reactor parameters. The goal of computational studies can be this, finding new empirical relationships, or performing forensics workflows prior to the implementation of new reactor technologies. 
For studying alternative measurement techniques or a slight difference in the overall approach, a
researcher would iterate through multiple studies using known materials to probe sensitivities or
other weaknesses in the procedure.

\subsection{Nuclear Forensics as an Inverse Problem}
\label{sec:inverse}

Nuclear forensics is a traditional inverse problem, which has been well documented mathematically
and applied to a range of scientific disciplines. Understanding inverse problem theory can help
systematically define the limitations of certain solution methods. This section provides an
introduction to the topic as well as its application to nuclear forensics.

As outlined in a textbook on the formal approach to inverse problem theory \cite{inverse_theory},
the study of a typical physical system encompasses three areas:
\begin{enumerate}
  \itemsep-0.75em
  \item \textit{Model parameterization}
  \item \textit{Forward problem:} predict measurement values given model parameters
  \item \textit{Inverse problem:} predict model parameters given measurement values
\end{enumerate}

First, this shows that it is important to consider the parameters that comprise a model; these
parameters form the \textit{model space}. The model space is not every measurable quantity; domain
knowledge is necessary to determine it. In the nuclear forensics context for \gls{SNF}, it would
include, e.g., the cooling time, because the \gls{SNF} decays and material measurements differ
depending on when the measurement is taken.

Second, understanding the physical system also requires an understanding of the forward problem.
Predicting how a certain set of values of the model parameters will affect the resulting
measurements is a problem with a unique solution. The breadth of these end measurements provides
the \textit{data space}, which is the set of all conceivable results of a given forward problem.
For \gls{SNF} this would be, perhaps, the range of isotopic ratios typical of a commercial
reactor.

Lastly, the inverse problem is predicting the model parameters given a set of measurements. It is
statistical in nature; there is a probability that the measured isotopes were caused by some value
of a model parameter. Thus, the problem is \textit{ill-posed} because a prediction is not
guaranteed to be unique. Further, including measurement uncertainties broadens the solution of the
linear model to probability densities of the parameters. The opposite is also true in the forward
case: including parameter uncertainties broadens the forward problem results to probability
densities of the potential measurement values \cite{inverse_theory}.

In this way, we can define some probability that an answer is correct, given a set of measurements
and their uncertainties, the calculated model parameters, the spread of the data space, and the
spread of the model space. Inverse problem theory connects these values to the general form of
Bayes' theorem, which is commonly expressed as follows:
\begin{equation}
 \label{eq:bayes}
 P(A|B) = \frac{P(B|A)P(A)}{P(B)}
\end{equation}
Here, $A$ and $B$ are events, and $P(A)$ and $P(B)$ are the probabilities that events $A$ and $B$
will occur, representing the model and data spaces, respectively. $P(A)$ is known as the prior and
$P(B)$ is known as the marginal likelihood. The marginal likelihood is a concept of the data space
capturing all the possible measurement values. It is ignored here because, as a homogeneous
probability, it is only useful for determining absolute probabilities, and Bayes' theorem will
only be applied in a relative context in this work.
$P(B|A)$ is the likelihood, i.e., the probability that event $B$ will occur given a known result
for $A$; it corresponds to the measurements given the model parameters (i.e., the forward
problem). $P(A|B)$ is the posterior probability that event $A$ will occur given a known result for
$B$; it corresponds to the predicted model parameters given the measurements (i.e., the inverse
problem) \cite{inverse_theory, gentle_bayes}. A discussion of how these values are obtained takes
place in Section \ref{sec:invcompare}.

%This can be mapped easily to the inverse physical system problem scenario.
%$A$ would represent an occurrence of a parameter in the model space, and $B$
%would represent the measurement of some value. Thus, $P(A)$ is the probability
%of a parameter existing without any knowledge of $B$. This is known as the prior
%probability, usually given by some theory about the system. $P(B)$ is the
%probability of some measurement existing without any knowledge of $A$. This is
%known as the marginal likelihood, which is some homogeneous concept for the
%potential measurements that could be made (this only serves to scale to absolute
%probabilities and does not affect the relative probabilities). The likelihood,
%$P(B|A)$, is the chance that a measurement is observed from a given parameter,
%representing the forward problem. Lastly, the posterior probability is the
%chance of some parameter existing given some measurement, representing the
%inverse problem solution \cite{inverse_theory, gentle_bayes}.

Given the above, it is more intuitive to consider the conceptual version of Bayes' theorem in
Equation \ref{eq:bayes_words}.
\begin{equation}
 \label{eq:bayes_words}
 Posterior = \frac{Likelihood * Prior}{Marginal \ Likelihood}
\end{equation}
This framework is helpful for an experiment that intends to compare different methods for
calculating the posterior probability of a system given some measurements \cite{bayes_compare}. In
the nuclear forensics context of pre-detonation materials, this would be a set of probabilities
for different parameters of interest, e.g., reactor type, burnup, cooling time, and enrichment of
some interdicted \gls{SNF}.
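To make the relative use of Equation \ref{eq:bayes_words} concrete, the short Python sketch below
evaluates posteriors for a toy two-class problem (e.g., deciding between two reactor types). All
numbers, class names, and variable names are purely hypothetical illustrations and are not taken
from any real measurement, simulation, or database.
\begin{verbatim}
# Toy illustration of: posterior proportional to likelihood * prior
# (all values below are invented for illustration only)
priors = {"PWR": 0.6, "BWR": 0.4}           # assumed prior over reactor types

# Assumed likelihood of the observed measurements under each reactor type,
# e.g., obtained from forward simulations of the data space
likelihoods = {"PWR": 0.02, "BWR": 0.005}

unnormalized = {k: likelihoods[k] * priors[k] for k in priors}
total = sum(unnormalized.values())          # plays the role of the marginal likelihood
posteriors = {k: v / total for k, v in unnormalized.items()}

print(posteriors)   # {'PWR': 0.857..., 'BWR': 0.142...}
\end{verbatim}
The ranking of the candidate parameters is already fixed by the unnormalized products, which is
why the marginal likelihood can be ignored when only relative comparisons are needed.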
{ "alphanum_fraction": 0.8105565301, "avg_line_length": 58.0648148148, "ext": "tex", "hexsha": "c17d75c375313fa8dfbbd7e6f327e14c515ccc54", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "100a27fb533beee1c985ad72ae70bdb646b04bab", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "opotowsky/prelim", "max_forks_repo_path": "document/chapters/litrev/nfoverview.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "100a27fb533beee1c985ad72ae70bdb646b04bab", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "opotowsky/prelim", "max_issues_repo_path": "document/chapters/litrev/nfoverview.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "100a27fb533beee1c985ad72ae70bdb646b04bab", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "opotowsky/prelim", "max_stars_repo_path": "document/chapters/litrev/nfoverview.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2759, "size": 12542 }
\documentclass{apnet17} \input{header.tex} \usepackage{times} %\usepackage{epsfig} %\usepackage[TABBOTCAP]{subfigure} %\usepackage{tabularx} \usepackage{graphicx} %\usepackage{color} %\usepackage{xspace} %\usepackage{thumbpdf} \usepackage{listings} %\usepackage{verbatim} %\usepackage{hyperref} %\usepackage{booktabs} %\usepackage{colortbl} \usepackage{amssymb}% http://ctan.org/pkg/amssymb \usepackage{pifont}% http://ctan.org/pkg/pifont \hypersetup{pdfstartview=FitH,pdfpagelayout=SinglePage} \setlength\paperheight {11in} \setlength\paperwidth {8.5in} \setlength{\textwidth}{7in} \setlength{\textheight}{9.25in} \setlength{\oddsidemargin}{-.25in} \setlength{\evensidemargin}{-.25in} %\setlength{\headsep}{0in} \pagenumbering{arabic} \begin{document} \def\draft{1} \newcommand{\note}[1]{\textcolor{red}{[note: #1]}} \ifdefined\draft \newcommand{\mylabel}[1]{\textcolor{blue}{LABEL: #1}} \newcommand{\wenfei}[1]{\textcolor{red}{[Wenfei: #1]}} \newcommand{\keqhe}[1]{\textcolor{blue}{[keqhe: #1]}} \else \newcommand{\mylabel}[1]{} \newcommand{\wenfei}[1]{} \newcommand{\keqhe}[1]{} \fi \newcommand{\name}{$C^3$\xspace} \newcommand{\dem}{DEM\xspace} \newcommand{\spring}{SPRING\xspace} \newcommand{\nameone}{DEM\xspace} \newcommand{\nametwo}{SPRING\xspace} \newcommand{\tightparagraph}[1]{\vspace{5pt}\noindent\textbf{#1}\ } \newcommand{\cmark}{\ding{51}}% \newcommand{\xmark}{\ding{55}}% \conferenceinfo{APNet 2017} {} \CopyrightYear{2017} \crdata{X} \date{} %%%%%%%%%%%% THIS IS WHERE WE PUT IN THE TITLE AND AUTHORS %%%%%%%%%%%% \title{Low Latency Software Rate Limiters for Cloud Networks} \author{\#xxx, xxx pages} \maketitle %\thispagestyle{empty} %%%%%%%%%%%%% ABSTRACT GOES HERE %%%%%%%%%%%%%% \input{sections/abstract.tex} \input{sections/introduction.tex} \input{sections/background.tex} \input{sections/measurement.tex} %\input{sections/overview.tex} \input{sections/design.tex} %\input{sections/implementation.tex} \input{sections/evaluation.tex} \input{sections/conclusion.tex} %\input{todo.tex} %\input{sections/appendix.tex} %\section*{Acknowledgments} \bibliographystyle{abbrv} %\begin{small} \bibliography{paper} %\end{small} \label{last-page} \end{document}
{ "alphanum_fraction": 0.72698268, "avg_line_length": 23.5913978495, "ext": "tex", "hexsha": "4043d5171656146a525810b6f2da983e4afbd0cb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "keqhe/phd_thesis", "max_forks_repo_path": "rate_limiter/paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "keqhe/phd_thesis", "max_issues_repo_path": "rate_limiter/paper.tex", "max_line_length": 71, "max_stars_count": 2, "max_stars_repo_head_hexsha": "770fc637f9b7d908f349bbbfa112cbc17d898be3", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "keqhe/phd_thesis", "max_stars_repo_path": "lanyue_thesis/rate_limiter/paper.tex", "max_stars_repo_stars_event_max_datetime": "2017-10-20T14:28:43.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-27T08:03:16.000Z", "num_tokens": 732, "size": 2194 }
%% sample template file for a PhD Thesis %% The default is with two sided setup: \documentclass[% % oneside % uncomment for onesided layout ]{USN-PhD} % --- Bibliography setup --- %%% default is the "ieee" style \usepackage[style=ieee, sorting=none]{biblatex} %%% If you want to use "author-year" style %%% where `\cite{Foo2011}` generates "Foo et al. (2011)" %%% and `\parentcite{Foo2011}` generates "(Foo et al. 2011)" %%% then comment the line above and use %\usepackage[style=authoryear]{biblatex} %%% or %%% if you want to use "alphabetic" style then use %%% where `cite[Foo2011]` generates "[Foo11]" %%% then comment the line above and use %\usepackage[style=alphabetic]{biblatex} %%% instead. %% load the bib file: \addbibresource{UsersGuide.bib} \usepackage{listings} \input{listings-modelica.cfg} \title{User's guide for the Open Hydropower Library (OpenHPL)} \author{Liubomyr Vytvytskyi (original version 1.0.0)\\ Porsgrunn, \nth{6} September 2019} \date{Last updated: \today\\ by TMCC/USN } \begin{document} %% Create title page with the parameters given in the preamble above \maketitle \tableofcontents %\addcontentsline{toc}{chapter}{\contentsname} \chapter{Introduction} \emph{OpenHPL} is an open-source hydropower library that consists of hydropower unit models and is encoded in Modelica. Modelica is a multi-domain as well as a component-oriented modelling language that is suitable for complex system modelling. In order to develop the library, OpenModelica has been used as an open-source Modelica-based modelling and simulation environment. This hydropower library, \emph{OpenHPL}, provides the capability for the modelling of hydropower systems of different complexity. The library includes the following units: \begin{enumerate} \item Various waterway units are modelled based on the mass and momentum balances, i.e., reservoirs, conduits, surge tank, fittings. A modern method for solving more detailed models (PDEs) is implemented in the library, and enables the modelling of the waterway with elastic walls and compressible water as well as open channel. \item A hydrology model has been implemented and makes it possible to simulate the water inflow to the reservoirs. \item Mechanistic models, as well as simple look-up table turbine models are implemented for the Francis and Pelton turbine types. The Francis turbine model also includes a turbine design algorithm that gives all of the needed parameters for the model, based on the turbine's nominal operating values. \item The capability for multiphysics connections and work with other libraries is ensured, e.g., connecting with the Open-Instance Power System Library \emph{OpenIPSL} makes it possible to model the electrical part for the hydropower system. \end{enumerate} A detailed description of each hydropower unit and their uses are presented below in this user guide. \chapter{Installation} \emph{OpenHPL} can be opened either in open-source OpenModelica\footnote{\url{https://openmodelica.org}} or commercial Dymola\footnote{\url{https://www.3ds.com/products-services/catia/products/dymola}} modelling and simulation environments, which are based on the Modelica language. Here, OpenModelica is emphasized due to free availability. To install OpenModelica, follow the instructions at \url{https://openmodelica.org/download/download-windows} for Windows users, or find the installation instruction for other operating systems at \url{https://openmodelica.org} in ``Download'' tab. 
Some tutorials exist for Modelica at \url{http://book.xogeny.com}, and for OpenModelica at \url{https://goo.gl/76274H}. The \emph{OpenHPL} can be found at \url{https://openhpl.opensimhub.org}. To install this library, follow the instructions at the project homepage. In addition, Modelica models in OpenModelica can be simulated within a scripting language (Python\footnote{\url{https://www.python.org}} via the OMPython API\footnote{\url{https://www.openmodelica.org/doc/OpenModelicaUsersGuide/latest/ompython.html}}, Julia\footnote{\url{https://julialang.org}} via the OMJulia API\footnote{\url{https://openmodelica.org/doc/OpenModelicaUsersGuide/latest/omjulia.html}}) and further analysed using the analysis tools in the scripting language. The installation instructions for both these APIs can be found in the links provided in the footnotes for each API. \chapter{OpenHPL elements} An overview of each element of the hydropower library \emph{OpenHPL} is provided in this section. A screenshot of \emph{OpenHPL} in OpenModelica is shown in Figure~\ref{fig:Library}. \begin{figure}[!ht] \centering \includegraphics[width=0.7\textwidth]{Library} \caption{Screen shot of OpenModelica with the hydropower library.} \label{fig:Library} \end{figure} The library is divided into various packages: \begin{description} \item[\texttt{Copyright}] A documentation class that provides a reference to the license for this library. \item[\texttt{UsersGuide}] A documentation class that provides the user's guide of this library. \item[\texttt{Data}] A record that determines the common data set for this library. It is possible to insert this class into models and use the common data set for the whole model. \item[\texttt{Examples}] A package that provides various examples of using the library for hydropower system as well as examples of using \emph{OpenHPL} together with power system library --- \emph{OpenIPSL}. \item[\texttt{Waterway}] A package that consists of various unit models for the waterway of the hydropower system, such as reservoirs, conduits, surge tank, pipe fittings, etc. \item[\texttt{ElectroMech}] A package that provides the electro-mechanical components of the hydropower system and consists of two main sub-packages: \begin{description} \item[\texttt{Turbines}] with various turbine unit models \item[\texttt{Generators}] with simplified models of a generator \end{description} \item[\texttt{Controllers}] A package that holds a simple model for a governor of the hydropower system. \item[\texttt{Interfaces}] A package of gives connector interfaces for the library components. \item[\texttt{Functions}] A package of functions that consists of three sub-packages: \begin{itemize} \item[\texttt{Fitting}] Functions for calculation of pressure drop for various pipe fittings. \item[\texttt{DarcyFriction}] Functions to calculate the friction term in the pipe. \item[\texttt{KP07}] Functions for PDEs using Kurganov-Petrova (KP) scheme. \end{itemize} \item[\texttt{Icons}] Package of icons used in the library. \item[\texttt{Types}] Package of types used in the library. \item[\texttt{Tests}] Package of various testing models. Not guaranteed to work and meant for development only. \end{description} Below, a detailed description of each unit model of the \emph{OpenHPL} is provided. \section{Interfaces} First, a detailed description of the interface connectors is provided here. 
In the \emph{OpenHPL}, two types of connectors are typically used.The first type is the standard Modelica real input/output connector, the other type is a set of connectors that represent the water flow and are modelled similar to the connection in an electrical circuit with voltage and current, or similar to the idea of potential and flow in Bond Graph models. The water flow connector which is called \emph{Contact} in the library, contains information about the pressure in the connector and mass flow rate that flows through the connector. An example of a Modelica code for defining the \emph{Contact} connector looks as follows: \begin{lstlisting}[language = modelica] connector Contact "Water flow connector" Modelica.SIunits.Pressure p "Contact pressure"; flow Modelica.SIunits.MassFlowRate mdot "Mass flow rate through the contact"; // Creating an icon for connector annotation (Icon(graphics={ Ellipse(extent={{-100,-100},{100,100}}, lineColor = {28, 108, 200}, fillColor = {0, 128, 255}, fillPattern = FillPattern.Solid)})); end Contact; \end{lstlisting} In addition, some extensions of this water flow connector are developed for the better use in the library. These extensions are listed hereby. \begin{itemize} \item \emph{TwoContact} is an extension from the \emph{Contact} model which provides a model of two connectors of inlet and outlet contacts. A Modelica code for this model looks as follows: \begin{lstlisting}[language = modelica] partial model TwoContact "Model of two connectors" // Specifying connectors and their placement in diagram Contact i "Inlet contact" annotation(Placement(transformation(extent={{-110,-10},{-90,10}}))); Contact o "Outlet contact" annotation(Placement(transformation(extent={{90,-10},{110,10}}))); end TwoContact; \end{lstlisting} \item \emph{ContactPort} is an extension from the \emph{TwoContact} model which also provides information about a mass flow rate between these two connectors. The mass flow rate that flows through the inlet connector is equal to the mass flow through the outlet connector. This model is used for the pipe modelling. A Modelica code for this \emph{ContactPort} model looks as follows: \begin{lstlisting}[language = modelica] partial model ContactPort "Model of two connectors with mass flow rate" Modelica.SIunits.MassFlowRate mdot "Mass flow rate"; extends TwoContact; equation 0 = i.mdot + o.mdot; mdot = i.mdot; end ContactPort; \end{lstlisting} \item \emph{ContactNode} is an extension from the \emph{TwoContact} model and provides a node pressure that is equal to the pressures from these two connectors. This model also defines the mass flow rate that is the sum of the mass flow rates through the inlet and outlet connectors. This model is used for the surge tank modelling. A Modelica code for this \emph{ContactNode} model looks as follows: \begin{lstlisting}[language = modelica] partial model ContactNode "Model of two connectors and node pressure" Modelica.SIunits.Pressure p_n "Node pressure"; Modelica.SIunits.MassFlowRate mdot "Mass flow rate"; extends TwoContact; equation p_n = i.p; i.p = o.p; mdot = i.mdot + o.mdot; end ContactNode; \end{lstlisting} \item \emph{TurbineContacts} is an extension from \emph{ContactPort} model and provides the real input and output connectors, additionally. This model is used for turbine modelling. 
A Modelica code for this \emph{TurbineContacts} model looks as follows:
\begin{lstlisting}[language = modelica]
partial model TurbineContacts "Model of turbine connectors"
  extends ContactPort;
  // Specifying additional connectors and their placement in diagram
  input Modelica.Blocks.Interfaces.RealInput u_t "[Guide vane|nozzle] opening of the turbine"
  annotation (Placement(transformation(extent = {{-20, -20}, {20, 20}}, rotation = -90, origin={0,120})));
  Modelica.Blocks.Interfaces.RealOutput P_out "Mechanical Output power"
  annotation (Placement(transformation(origin={0,-110}, extent={{-10,-10},{10,10}}, rotation = 270)));
end TurbineContacts;
\end{lstlisting}
\end{itemize}

\section{Functions}
Here, a detailed description of the functions in the library and of the algorithms they use is presented.
\subsection{Friction term}
First, the functions for defining the friction force in the waterway are described. More details can be found in Bernt Lie's lecture notes, \cite{LieL:18}.

The friction force $F_\mathrm{f}$ is directed opposite to the velocity $v$ (the linear velocity averaged across the cross-section of the pipe) of the fluid, \cite{LieL:18}. A common expression for the friction force in filled pipes is the following:
\begin{equation}\label{eq:eq1}
F_\mathrm{f}=-\frac{1}{8}\pi\rho LDf_\mathrm{D}v|v|
\end{equation}
Here, $L$ and $D$ are the pipe length and diameter, respectively. $f_\mathrm{D}$ is the Darcy friction factor, which is a function of the Reynolds number $N_\mathrm{Re}$, with the roughness ratio $\frac{\epsilon}{D}$ as a parameter, see Figure~\ref{fig:fig2}. In Figure~\ref{fig:fig2}, the turbulent region ($N_\mathrm{Re} > 2.3\cdot10^3$) is a flow regime where the velocity across the pipe has a stochastic nature, and where the velocity $v$ is relatively uniform across the pipe when we average the velocity over some short period of time. The laminar region ($N_\mathrm{Re} < 2.1\cdot10^3$) is a flow regime with a regular velocity $v$ which varies as a parabola with the radius of the pipe, with zero velocity at the pipe wall and maximal velocity at the centre of the pipe.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{fig/darcyf}
\caption{Darcy friction factor as a function of the Reynolds number.}
\label{fig:fig2}
\end{figure}

The Darcy friction factor varies with the roughness of the pipe surface, specified by the roughness height $\epsilon$. For laminar flow in a cylindrical pipe ($N_\mathrm{Re} < 2.1\cdot10^3$), the Darcy friction factor $f_\mathrm{D}$ can be found using the following expression:
\begin{equation}\label{eq:eq2}
f_\mathrm{D}=\frac{64}{N_\mathrm{Re}}
\end{equation}
Here, the Reynolds number is found as $N_\mathrm{Re}=\frac{\rho|v|D}{\mu}$, where $\mu$ is the fluid viscosity. For turbulent flow ($N_\mathrm{Re} > 2.3\cdot10^3$), it is common to rewrite the expression for the Darcy friction factor as
\begin{equation}\label{eq:eq3}
f_\mathrm{D}=\frac{1}{\left(2\log_{10} \left(\frac{\epsilon}{3.7D} + \frac{5.74}{N_\mathrm{Re}^{0.9}}\right)\right)^2}
\end{equation}
In order to define the Darcy friction factor in the region between the laminar and turbulent flow regimes, a possibility is to use an interpolation expression between the laminar value at $N_\mathrm{Re}=2100$ and the turbulent value at $N_\mathrm{Re}=2300$, e.g., a cubic polynomial fit with the same slope as the laminar friction at $N_\mathrm{Re}=2100$ and the turbulent friction at $N_\mathrm{Re}=2300$, \cite{LieL:18}.
To achieve global differentiability, a cubic polynomial $p(N_\mathrm{Re})=aN_\mathrm{Re}^3+bN_\mathrm{Re}^2+cN_\mathrm{Re}+d$ is fitted such that:
\begin{equation}
\begin{array}{c}
p(N_\mathrm{Re}=2100)=f_\mathrm{D}^\mathrm{l}(N_\mathrm{Re}=2100)\\
p(N_\mathrm{Re}=2300)=f_\mathrm{D}^\mathrm{t}(N_\mathrm{Re}=2300)\\
\left.\frac{dp}{dN_\mathrm{Re}}\right\rvert_{N_\mathrm{Re}=2100}=\left.\frac{df_\mathrm{D}^\mathrm{l}}{dN_\mathrm{Re}}\right\rvert_{N_\mathrm{Re}=2100}\\
\left.\frac{dp}{dN_\mathrm{Re}}\right\rvert_{N_\mathrm{Re}=2300}=\left.\frac{\partial f_\mathrm{D}^\mathrm{t}}{\partial N_\mathrm{Re}}\right\rvert_{\frac{\epsilon}{D},N_\mathrm{Re}=2300}
\end{array}
\end{equation}
Hence, the constants $a$, $b$, $c$ and $d$ can be found as follows:
\begin{equation}\label{eq:eq4}
\begin{bmatrix}a\\b\\c\\d \end{bmatrix}=
\begin{bmatrix}2100^3&2100^2 & 2100& 1\\ 2300^3 & 2300^2 & 2300 & 1\\ 3\cdot2100^2 & 2\cdot2100& 1 & 0\\ 3\cdot2300^2 & 2\cdot2300 &1 & 0\end{bmatrix}^{-1}
\begin{bmatrix} \frac{64}{2100} \\ \frac{1}{\left(2\log_{10}\left(\frac{\epsilon}{3.7D}+\frac{5.74}{2300^{0.9}}\right)\right)^2} \\ -\frac{64}{2100^2} \\ -0.25\frac{0.316}{2300^{1.25}} \end{bmatrix}
\end{equation}
Based on the presented equations for the calculation of the friction force in the waterway, two functions are encoded in the class \emph{DarcyFriction}. The first function defines the Darcy friction factor and is called \emph{fDarcy}. This function has the following inputs: the Reynolds number $N_\mathrm{Re}$, the pipe diameter $D$, and the pipe roughness height $\epsilon$. Then, based on Eq.~\ref{eq:eq2} for laminar flow ($N_\mathrm{Re} < 2100$), Eq.~\ref{eq:eq3} for turbulent flow ($N_\mathrm{Re} > 2300$), and Eq.~\ref{eq:eq4} for the transitional zone ($2100 < N_\mathrm{Re} < 2300$), the \emph{fDarcy} function provides a value for the Darcy friction factor $f_\mathrm{D}$.
Another function, \emph{Friction}, defines the actual friction force and is based on the response from the \emph{fDarcy} function. This function has the following inputs: the linear velocity $v$, the pipe length and diameter $L$ and $D$, the liquid density and viscosity $\rho$ and $\mu$, and the pipe roughness height $\epsilon$. As an output, this function provides a value for the friction force $F_\mathrm{f}$ based on Eq.~\ref{eq:eq1}. An example of a Modelica code for defining the \emph{Friction} function looks as follows:
\begin{lstlisting}[language = modelica]
function Friction "Friction force with Darcy friction factor"
  import Modelica.Constants.pi;
  input Modelica.SIunits.Velocity v "Flow velocity";
  input Modelica.SIunits.Diameter D "Pipe diameter";
  input Modelica.SIunits.Length L "Pipe length";
  input Modelica.SIunits.Density rho "Density";
  input Modelica.SIunits.DynamicViscosity mu "Dynamic viscosity of water";
  input Modelica.SIunits.Height eps "Pipe roughness height";
  // Function output (response) value
  output Modelica.SIunits.Force F_f "Friction force";
  // Local (protected) quantities
protected
  Modelica.SIunits.ReynoldsNumber N_Re "Reynolds number";
  Real f "friction factor";
algorithm
  N_Re := rho * abs(v) * D / mu;
  f := fDarcy(N_Re, D, eps);
  F_f := 0.5 * pi * f * rho * L * v * abs(v) * D / 4;
end Friction;
\end{lstlisting}
\subsection{KP scheme}
Here, functions for solving PDEs in Modelica are described. First, an overview of the KP scheme is presented. More details about this scheme can be found in Roshan Sharma's work, \cite{Sha:15}, and other works, \cite{Vyt:15,Vyt:17}.
This is a well-balanced second-order scheme, which is a Riemann-solver-free (central) scheme while, at the same time, it takes advantage of the upwind idea by utilizing the local, one-sided speeds of propagation (given by the eigenvalues of the Jacobian matrix) during the calculation of the flux at the cell interfaces, \cite{Sha:15}. The central-upwind numerical scheme is presented here for the one-dimensional case:
\begin{equation}
\frac{\partial U\left(x,t\right)}{\partial t}+\frac{\partial F\left(x,t,U\right)}{\partial x}=S\left(x,t,U\right)
\end{equation}
Here, $U\left(x,t\right)$ is the state vector, whose states are functions of position $x$ and time $t$. $F\left(x,t,U\right)$ is the vector of fluxes and $S\left(x,t,U\right)$ is the vector of source terms. In order to solve this PDE, it should first be discretized using the finite-volume method. With the finite-volume method, we divide the grid into small control volumes/cells and then apply the conservation laws. Such a control volume/cell, with the notation used, is shown in Figure~\ref{fig:fig2_1}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{fig/kp}
\caption{Control volume/cell, \cite{Sha:15}.}
\label{fig:fig2_1}
\end{figure}
Hence, the semi-discrete (time-dependent ODEs) central-upwind scheme can then be written in the following form:
\begin{equation}\label{eq:eq7}
\frac{d}{dt}\bar{U}_j\left(t\right)=-\frac{H_{j+\frac{1}{2}}\left(t\right)-H_{j-\frac{1}{2}}\left(t\right)}{\Delta x}+\bar{S}_j\left(t\right)
\end{equation}
Here, $\bar{U}_j$ are the cell centre average values, while $H_{j\pm\frac{1}{2}}\left(t\right)$ are the central-upwind numerical fluxes at the cell interfaces and are given by:
\begin{equation}\label{eq:eq8}
\begin{array}{c}
H_{j+\frac{1}{2}}\left(t\right)=\frac{a^+_{j+\frac{1}{2}}F\left(U^-_{j+\frac{1}{2}}\right)-a^-_{j+\frac{1}{2}}F\left(U^+_{j+\frac{1}{2}}\right)}{a^+_{j+\frac{1}{2}}-a^-_{j+\frac{1}{2}}}+\frac{a^+_{j+\frac{1}{2}}a^-_{j+\frac{1}{2}}}{a^+_{j+\frac{1}{2}}-a^-_{j+\frac{1}{2}}}\left[U^+_{j+\frac{1}{2}}-U^-_{j+\frac{1}{2}}\right]\\
H_{j-\frac{1}{2}}\left(t\right)=\frac{a^+_{j-\frac{1}{2}}F\left(U^-_{j-\frac{1}{2}}\right)-a^-_{j-\frac{1}{2}}F\left(U^+_{j-\frac{1}{2}}\right)}{a^+_{j-\frac{1}{2}}-a^-_{j-\frac{1}{2}}}+\frac{a^+_{j-\frac{1}{2}}a^-_{j-\frac{1}{2}}}{a^+_{j-\frac{1}{2}}-a^-_{j-\frac{1}{2}}}\left[U^+_{j-\frac{1}{2}}-U^-_{j-\frac{1}{2}}\right]
\end{array}
\end{equation}
Here, $a^\pm_{j\pm\frac{1}{2}}$ are the one-sided local speeds of propagation. For calculating the numerical fluxes $H_{j\pm\frac{1}{2}}\left(t\right)$, the values of the states at the cell interfaces $U^\pm_{j\pm\frac{1}{2}}$ are needed. These values can be calculated as the endpoints of a piecewise linearly reconstructed function:
\begin{equation}\label{eq:eq9}
\begin{array}{c}
U^-_{j+\frac{1}{2}}=\bar{U}_j+\frac{\Delta x}{2}s_j\\
U^+_{j+\frac{1}{2}}=\bar{U}_{j+1}-\frac{\Delta x}{2}s_{j+1}\\
U^-_{j-\frac{1}{2}}=\bar{U}_{j-1}+\frac{\Delta x}{2}s_{j-1}\\
U^+_{j-\frac{1}{2}}=\bar{U}_j-\frac{\Delta x}{2}s_j
\end{array}
\end{equation}
The slope $s_j$ of the reconstructed function in each cell is computed using a limiter function to obtain a non-oscillatory reconstruction.
The KP scheme utilizes the generalized \emph{minmod} limiter as:
\begin{equation}\label{eq:eq10}
\begin{array}{c}
s_j^-=\theta\frac{\bar{U}_j-\bar{U}_{j-1}}{\Delta x},\quad s_j^c=\frac{\bar{U}_{j+1}-\bar{U}_{j-1}}{2\Delta x},\quad s_j^+=\theta\frac{\bar{U}_{j+1}-\bar{U}_{j}}{\Delta x}\\
s_j=\mathrm{minmod}\left(s_j^-,s_j^c,s_j^+\right)=\begin{cases} \min\left(s_j^-,s_j^c,s_j^+\right), & \text{if }s_j^->0\text{ \& }s_j^c>0\text{ \& }s_j^+>0 \\ \max\left(s_j^-,s_j^c,s_j^+\right), & \text{if }s_j^-<0\text{ \& }s_j^c<0\text{ \& }s_j^+<0 \\ 0, & \text{otherwise} \end{cases}
\end{array}
\end{equation}
The parameter $\theta\in[1,2]$ is used to control or tune the amount of numerical dissipation or numerical viscosity present in the resulting scheme. The value $\theta = 1.3$ is an acceptable starting point in general.
It can be observed that for a given $j^\mathrm{th}$ cell, the information from the neighbouring cells $j-1$ and $j-2$ (to the left) and $j+1$ and $j+2$ (to the right) is required for calculating the flux integrals. This poses difficulties at the cells on the left and right boundaries. While evaluating the flux integrals near the left boundary cells ($j=1$ and $j=2$) and near the right boundary cells ($j=N-1$ and $j=N$; $N$ is the number of cells in the grid), imaginary cells that lie outside the physical boundary should also be taken into consideration, see Figure~\ref{fig:fig2_2}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{fig/ghosts}
\caption{Ghost cells at the grid boundaries, \cite{Sha:15}.}
\label{fig:fig2_2}
\end{figure}
These imaginary cells, denoted by $j=0$ and $j=-1$ on the left, and $j=N+1$ and $j=N+2$ on the right, are called the ghost cells. The average value of the conserved variables at the centre of these ghost cells depends on the nature of the physical boundary taken into account. These ghost cells can be defined in the following way:
\begin{equation}\label{eq:eq11}
\begin{array}{c}
\bar{U}_{j=0}=2\bar{U}_{j=1}-\bar{U}_{j=2}\\
\bar{U}_{j=-1}=2\bar{U}_{j=0}-\bar{U}_{j=1}\\
\bar{U}_{j=N+1}=2\bar{U}_{j=N}-\bar{U}_{j=N-1}\\
\bar{U}_{j=N+2}=2\bar{U}_{j=N+1}-\bar{U}_{j=N}
\end{array}
\end{equation}
The one-sided local speeds of propagation can be estimated from the largest and the smallest eigenvalues $\lambda_{1,2}$ of the Jacobian $\frac{\partial F}{\partial U}$ of the system as:
\begin{equation}\label{eq:eq12}
\begin{array}{c}
a^+_{j\pm\frac{1}{2}}=\max\left(\lambda^+_{1,j\pm\frac{1}{2}},\lambda^-_{1,j\pm\frac{1}{2}},0\right)\\
a^-_{j\pm\frac{1}{2}}=\min\left(\lambda^+_{2,j\pm\frac{1}{2}},\lambda^-_{2,j\pm\frac{1}{2}},0\right)
\end{array}
\end{equation}
Lastly, the source term $\bar{S}_j\left(t\right)$ has to be appropriately discretized to ensure a well-balanced method. This can be written as:
\begin{equation}
\bar{S}_j\left(t\right)=S\left(\bar{U}_j\right)
\end{equation}
Hence, separate functions for each of the elements defined in Eqs.~\ref{eq:eq8}-\ref{eq:eq12} are modelled and encoded in \emph{OpenHPL}. These functions are assembled in the \emph{KPfunctions} folder in the class \emph{KP07}, and look as follows:
\begin{itemize}
\item The \emph{GhostCells} function provides the values of the conserved variables at the centre of the ghost cells, using Eq.~\ref{eq:eq11}. As input, this function receives the number of cells $N$ and the state vector with the cell centre average values $\bar{U}_{j=1..N}$.
Then, the \emph{GhostCells} function returns a state vector with the cell centre average values for all (including ghost) cells $\bar{U}_{j=-1..N+2}$. \item \emph{SlopeVectorS} function returns the slope vector $s_{j=0..N+1}$ of the reconstructed function for each cell, using Eq.~\ref{eq:eq10}. This function has the following inputs: the number of cells $N$, parameters $\theta$ and $\Delta x$, and the state vector with the cell centre average values $\bar{U}_{j=-1..N+2}$. \item Then, function \emph{PieceWiseU} is created to define the values of states $U^\pm_{j\pm\frac{1}{2}}$ as the endpoints of a piecewise linearly reconstructed function from Eq.~\ref{eq:eq9} using the two previous functions. This function has the following inputs: the number of cells $N$, parameters $\theta$ and $\Delta x$, condition and values for the boundaries, and the state vector with the cell centre average values $\bar{U}_{j=1..N}$. \item Another function as the \emph{SpeedPropagationApipe} provides the one-sided local speeds of propagation $a^\pm_{j\pm\frac{1}{2}}$, using Eq.~\ref{eq:eq12}. As an input information this function receives the number of cells $N$ and vectors of eigenvalues $\lambda^\pm_{1,j\pm\frac{1}{2}}$ and $\lambda^\pm_{2,j\pm\frac{1}{2}}$ of the Jacobian of the system. \item The last function \emph{FluxesH} in the \emph{KPfunctions} folder, defines the central upwind numerical fluxes at the cell interfaces $H_{j\pm\frac{1}{2}}$, using Eq.~\ref{eq:eq8}. This function has the following inputs: the number of cells $N$, the values of states at the cell interfaces $U^\pm_{j\pm\frac{1}{2}}$, the one-sided local speeds of propagation $a^\pm_{j\pm\frac{1}{2}}$, and the vector of fluxes $F\left(U^\pm_{j\pm\frac{1}{2}}\right)$. \end{itemize} Then, the primary function for the KP scheme \emph{KPmethod} is created which uses the last three presented functions to define the right-hand side of Eq.~\ref{eq:eq7} (discretization solution of PDE). As an input piece of information, this function receives the number of cells $N$, parameters $\theta$ and $\Delta x$, the state vector with the cell centre average values $\bar{U}_{j=1..N}$, vectors of eigenvalues $\lambda^\pm_{1,j\pm\frac{1}{2}}$ and $\lambda^\pm_{2,j\pm\frac{1}{2}}$ of the Jacobian of the system, the vector of fluxes $F\left(U^\pm_{j\pm\frac{1}{2}}\right)$ and source terms $\bar{S}_j$, and condition and values for the boundaries. It should be noted that the \emph{KPmethod} function is encoded for the cases of systems with two states (state vector $\bar{U}_{j=1..N}$ consists of two states) that is common for the detailed model of the pipe or open channel model (see those unit models below). The boundaries are specified with the inlet and outlet state values: either inlet (or: outlet) values for both states, or inlet and outlet values for one of the states. In the case with the use of the KP scheme for the open channel model \cite{Sha:15,Vyt:15}, one of the states should be processed through the scheme with some additional vector that is ensured in this \emph{KPmethod} function ($B_{j=-1..N+2}$ vector is also input to the functions \emph{KPmethod} and \emph{PieceWiseU}). It should be noted that due to the issues of the simulation speed, all of the presented functions in class \emph{KP07} are implemented as the \emph{model} type in OpenModelica instead of the \emph{function} type. 
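For illustration, the core of Eq.~\ref{eq:eq10} can be written compactly as a scalar Modelica function. The sketch below is only illustrative; the actual \emph{SlopeVectorS} model in the library is organised differently (vectorized and implemented as a \emph{model}, as noted above), and the names used here are hypothetical:
\begin{lstlisting}[language = modelica]
function minmodSlope "Illustrative scalar minmod slope, cf. Eq. (10); not the library code"
  input Real U_left "Cell average value in cell j-1";
  input Real U_center "Cell average value in cell j";
  input Real U_right "Cell average value in cell j+1";
  input Real dx "Cell length";
  input Real theta = 1.3 "Limiter parameter, 1 <= theta <= 2";
  output Real s "Limited slope in cell j";
protected
  Real s_m, s_c, s_p;
algorithm
  s_m := theta*(U_center - U_left)/dx;
  s_c := (U_right - U_left)/(2*dx);
  s_p := theta*(U_right - U_center)/dx;
  if s_m > 0 and s_c > 0 and s_p > 0 then
    s := min(min(s_m, s_c), s_p);
  elseif s_m < 0 and s_c < 0 and s_p < 0 then
    s := max(max(s_m, s_c), s_p);
  else
    s := 0;
  end if;
end minmodSlope;
\end{lstlisting}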
An example of a Modelica code for defining the \emph{KPmethod} function looks as follows: \begin{lstlisting}[language = modelica] model KPmethod extends Icons.Method; parameter Integer N "number of segments"; input Real U[2 * N] "state vector", dx "length step", theta = 1.3 "parameter for slope limiter", S_[2 * N] "source term vector S", F_[2 * N, 4] "vector F", lam1[N, 4] "matrix of eigenvalues '+'", lam2[N, 4] "matrix of eigenvalues '-'", B[N + 4] = zeros(N + 4) "additional for open channel", boundary[2, 2] "values for boundary conditions"; input Boolean boundaryCon[2, 2] "boundary conditions consideration"; output Real diff_eq[2 * N] "right hand side for KP solution"; Real U_[8, N] "matrix with boundary state values. Can be extracted"; protected Real H_[2 * N, 2] "matrix of fluxes", A_speed[N, 4] "matrix of one-side local speeds propagation"; public KPfunctions.PieceWiseU wiseU(N = N, theta = theta, U = U, B = B, dx = dx, boun = boundary, bounCon = boundaryCon) "use function for defining the piece wise linear reconstruction of vector U"; KPfunctions.SpeedPropagationApipe speedA(N = N, lamda1 = lam1, lamda2 = lam2) "use function for defining the one-side local speeds propagation"; KPfunctions.FluxesH fluxesH(N = N, U_ = U_, A_ = A_speed, F_ = F_) "use function for defining the central upwind numerical fluxes"; equation ///// piece wise linear reconstruction of vector U U_ = wiseU.U_; ///// one-side local speeds propagation A_speed = speedA.A; ///// central upwind numerical fluxes H_ = fluxesH.H; //// right hand side of diff. equation diff_eq = (-(H_[:, 1] - H_[:, 2]) / dx) + S_; end KPmethod; \end{lstlisting} Examples of using the KP scheme for solving PDEs are also provided in the class \emph{KP07} in the \emph{TestKPpde} folder. More information about using the \emph{KPmethod} function is presented below in the waterway modelling section for the \emph{PenstockKP} and \emph{OpenChannel} units. \subsection{Fitting} The functions for defining the pressure drop in various pipe fittings are described here. More details can be found in Bernt Lie's Lecture notes, \cite{LieL:18}. Due to different constrictions in the pipes, it is of interest to define losses in these fittings. This can be done based on friction pressure drop which can be calculated as: \begin{equation}\label{eq:eq14} \Delta p_\mathrm{f}=\frac{1}{2}\phi\rho v|v| \end{equation} Here, the dimensionless factor $\phi$ is $\phi=f_\mathrm{D}\frac{L}{D}$ for a long, straight pipe. Here, $\phi$ will be the generalized friction factor. In this case, it is possible to write pressure drop for different constrictions. Some cases of various fittings are shown in Figures~\ref{fig:fig3}-\ref{fig:fig6}. Equations for the dimensionless factor $\phi$ are also demonstrated in these figures for the presented fittings. 
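As a simple numerical illustration of Eq.~\ref{eq:eq14} (the numbers are chosen only for the example): for water with $\rho\approx1000\,\mathrm{kg/m^3}$ flowing at $v=3\,\mathrm{m/s}$ through a fitting with a generalized friction factor of $\phi=0.5$, the friction pressure drop becomes $\Delta p_\mathrm{f}=\frac{1}{2}\cdot0.5\cdot1000\cdot3\cdot|3|\,\mathrm{Pa}\approx2.3\,\mathrm{kPa}$.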
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{fig/Square_fi}
\caption{Square reduction/expansion fittings, \cite{LieL:18}.}
\label{fig:fig3}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{fig/Tapered_fit}
\caption{Tapered reduction/expansion fittings, \cite{LieL:18}.}
\label{fig:fig4}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{fig/Rounded_fit}
\caption{Rounded reduction/expansion fittings, \cite{LieL:18}.}
\label{fig:fig5}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{fig/Sharp_fit}
\caption{Sharp/Thick orifice fittings, \cite{LieL:18}.}
\label{fig:fig6}
\end{figure}
Based on the presented equations and figures for the calculation of the dimensionless factor $\phi$ in the various fittings, a set of functions is encoded, one for each specific type of fitting, such as \emph{SquareReduction}, \emph{SquareExpansion}, \emph{TaperedReduction}, \emph{TaperedExpansion}, \emph{RoundedReduction}, \emph{SharpOrifice}, and \emph{ThickOrifice}. All these functions receive the Reynolds number $N_\mathrm{Re}$, the diameters of the first and second pipes $D_i$ and $D_o$, and the pipe roughness height $\epsilon$. Then, based on the equations from Figures~\ref{fig:fig3}-\ref{fig:fig6}, these functions provide a value for the dimensionless factor $\phi$. As an example, a Modelica code for defining the \emph{SquareReduction} function looks as follows:
\begin{lstlisting}[language = modelica]
function SquareReduction "Dimensionless factor phi for a square reduction fitting"
  input Modelica.SIunits.ReynoldsNumber N_Re "Reynolds number";
  input Modelica.SIunits.Height eps "Pipe roughness height";
  input Modelica.SIunits.Diameter D_i, D_o "Pipe diameters";
  output Real phi;
protected
  Real f_D "friction factor";
algorithm
  f_D := Functions.DarcyFriction.fDarcy(N_Re, D_i, eps);
  if N_Re < 2500 then
    phi := (1.2 + 160 / N_Re) * ((D_i / D_o) ^ 4 - 1);
  else
    phi := (0.6 + 0.48 * f_D) * (D_i / D_o) ^ 2 * ((D_i / D_o) ^ 2 - 1);
  end if;
end SquareReduction;
\end{lstlisting}
Another function, \emph{FittingPhi}, also provides the dimensionless factor $\phi$ as an output. This function calls the functions presented above for a specific type of fitting in order to get a value for the factor $\phi$. This function has the following inputs: the linear velocity $v$, the pipe length $L$, the diameters of the first and second pipes $D_i$ and $D_o$, the liquid density and viscosity $\rho$ and $\mu$, and the pipe roughness height $\epsilon$. The last input for this function is a variable with the specific type \emph{FittingType} that holds information about the fitting type.
\section{Waterway}
A typical structure of the waterway of the hydropower system is shown in Figure~\ref{fig:fig7}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.95\textwidth]{fig/Fig_1_scheme}
\caption{Reservoir structure, \cite{LieL:18}.}
\label{fig:fig7}
\end{figure}
\subsection{Reservoir}
Figure~\ref{fig:fig7} shows that the water level in the reservoir $H_\mathrm{r}$ is a key quantity, \cite{Val:17}.
Similarly to a water tank, a reservoir model can be described by mass and momentum balances as follows, \cite{Sha:11}:
\begin{equation}
\begin{array}{c}
H_\mathrm{r}\frac{d\dot{m}_\mathrm{r}}{dt}=\frac{\rho}{A_\mathrm{r}}\dot{V}_\mathrm{r}^2+A_\mathrm{r}\left(p_\mathrm{atm}-p_\mathrm{r}\right)+\rho gH_\mathrm{r}A_\mathrm{r}-F_\mathrm{f,r}\\
\frac{dm_\mathrm{r}}{dt}=\dot{m}_\mathrm{r}
\end{array}
\end{equation}
Here, $\dot{m}_\mathrm{r}$ is the reservoir mass flow rate that can be found from the reservoir volumetric flow rate $\dot{V}_\mathrm{r}$. $A_\mathrm{r}$ is the cross-sectional area of the reservoir. $p_\mathrm{atm}$ and $p_\mathrm{r}$ are the atmospheric and the reservoir outlet pressures, respectively. $F_\mathrm{f,r}$ is a friction term that can be found using the Darcy friction factor.
In a simple case, it can be assumed that the level of the reservoir is constant, the reservoir inlet flow equals the outlet flow, and the area of the reservoir approaches infinity. Then the reservoir can be represented simply by an equation for the pressure at the inlet/outlet of the reservoir, \cite{Sha:11,Val:17}.
\begin{equation}
p_\mathrm{r}=p_\mathrm{atm}+\rho gH_\mathrm{r}
\end{equation}
Hence, both of these cases are modelled in the \emph{Reservoir} unit in the library. This unit uses the \emph{Contact} connector and can be connected to other waterway units. The \emph{Reservoir} unit can be specified with the following options:
\begin{itemize}
\item The user can choose a simple model of the reservoir, and calculate the outlet pressure depending on the depth of the outlet from the reservoir.
\item The user can also choose a more complicated model, add the inflow to the reservoir and specify the reservoir geometry.
\item Also, it is possible to connect an input signal with the varying water level in the reservoir.
\end{itemize}
\subsection{Fitting}
There are various possibilities for fitting together pipes with different diameters, as well as for orifices in the pipe. In the \emph{Fitting} unit, the pressure drop due to these constrictions is defined using Eq.~\ref{eq:eq14} and the function \emph{FittingPhi}. The \emph{Fitting} unit uses the \emph{ContactPort} connector model in order to have inlet and outlet connectors and the possibility to define the pressure drop between them. Then, this unit can be connected to the other waterway units. When the \emph{Fitting} unit is in use, the user can select the specific type of fitting that is of interest and specify the required geometry parameters for this fitting.
\subsection{Pipe}
The simple pipe unit \emph{Pipe} gives possibilities for easy modelling of different conduits: intake race, penstock, tailrace, etc. In these waterway units, there are only small pressure variations due to the small slope angle (height difference between the inlet and outlet of the component). That is why the model for these units can be simplified by assuming incompressible water and inelastic walls, \cite{Vyt:17,Val:17,Sha:11}. A sketch of the pipe with all the terms needed for modelling is shown in Figure~\ref{fig:fig8}.
\begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth]{fig/Fig_2_scheme} \caption{Model for flow through a pipe.} \label{fig:fig8} \end{figure} In the case of incompressible water, the mass in the filled pipe is constant, and: \begin{equation}\label{eq:eq17} \frac{dm_\mathrm{c}}{dt} = \dot{m}_\mathrm{c,in} - \dot{m}_\mathrm{c,out} = 0 \end{equation} Here, the mass of the water in the pipe (conduit) is $m_\mathrm{c}=\rho V_\mathrm{c}=\rho L_\mathrm{c}\overline{A}_\mathrm{c}$, where $\rho$ is the water density, $V_c$ -- the volume of the water in the pipe, $L_\mathrm{c}$ -- the length of the pipe (conduit) and $\overline{A}_\mathrm{c}$ -- the averaged cross-section area of the pipe that are defined from averaged pipe diameter $\overline{D}_\mathrm{c}$. The inlet and outlet mass flow rates are equal with $\dot{m}_\mathrm{c,in}=\rho\dot{V}_\mathrm{c,in}$ and $\dot{m}_\mathrm{c,out}=\rho\dot{V}_\mathrm{c,out}$ respectively, where $\dot{V}_\mathrm{c,in}=\dot{V}_\mathrm{c,out}$ -- the inlet and outlet volumetric flow rates in the pipe. The momentum balance for this simplified model can be expressed as: \begin{equation}\label{eq:eq18} \frac{dM_\mathrm{c}}{dt} = \dot{M}_\mathrm{c,in} - \dot{M}_\mathrm{c,out} + F_\mathrm{p,c} + F_\mathrm{g,c} + F_\mathrm{f,c} \end{equation} Here, the momentum of the water in the pipe is $M_\mathrm{c}=m_\mathrm{c}v_\mathrm{c}$, where $v_\mathrm{c}$ is the average water velocity and can be defined as $v_\mathrm{c}=\dot{V}_\mathrm{c}/\overline{A}_\mathrm{c}$. The inlet and outlet momentum flow rates are $\dot{M}_\mathrm{c,in}=\dot{m}_\mathrm{c,in}v_\mathrm{c,in}$ and $\dot{M}_\mathrm{c,out}=\dot{m}_\mathrm{c,out}v_\mathrm{c,out}$ respectively, where $v_\mathrm{c,in}=\dot{V}_\mathrm{c,in}/A_\mathrm{c,in}$ and $v_\mathrm{c,out}=\dot{V}_\mathrm{c,out}/A_\mathrm{c,out}$ are the velocities in the inlet and outlet of the pipe, respectively; and are equal in a case with constant diameter of the pipe ($A_\mathrm{c,in}=A_\mathrm{c,out}$). $F_\mathrm{p,c}$ -- the pressure force, due to the difference between the inlet and outlet pressures $p_\mathrm{c,1}$ and $p_\mathrm{c,2}$ can be calculated as follows: $F_\mathrm{p,c}=A_\mathrm{c,in}p_\mathrm{c,1}-A_\mathrm{c,out}p_\mathrm{c,2}$. There is also gravity force that is defined as $F_\mathrm{g,c}=m_\mathrm{c}g\cos\theta_\mathrm{c}$, where $g$ -- the gravitational acceleration and $\theta_\mathrm{c}$ -- the angle of the pipe slope that can be defined from the ratio of height difference $H_\mathrm{c}$ and the length $L_\mathrm{c}$ of the pipe. The last term in the momentum balance is friction force which can be calculated as $F_\mathrm{f,c}=-\frac{1}{8}L_\mathrm{c}f_\mathrm{D,c}\pi\rho\overline{D}_\mathrm{c}v_\mathrm{c}|v_\mathrm{c}|$ using the Darcy friction factor $f_\mathrm{D,c}$ for the conduit. The main defined variable is the volumetric flow rate. In this \emph{Pipe} unit, the flow rate changes simultaneously in the whole pipe (information about the speed of wave propagation is not included here). Water pressures can be shown just in the boundaries of pipe (inlet and outlet pressure from connectors). This unit uses the \emph{ContactPort} connector model and can be connected to other waterway units. When the \emph{Pipe} unit is in use, the user can specify the required geometry parameters for this pipe: length $L_\mathrm{c}$, height difference $H_\mathrm{c}$, inlet and outlet diameters $D_\mathrm{c,i}$ and $D_\mathrm{c,o}$, and pipe roughness height $\epsilon_\mathrm{c}$. 
In order to define the friction force $F_\mathrm{f,c}$, the \emph{Friction} function is used here. It should be noted that this unit provides possibilities for the modelling of pipes with both positive and negative slopes (positive or negative height difference). This unit can be initialized by the initial value of the flow rate $\dot{V}_\mathrm{c,0}$. Otherwise, the user can choose an option where the simulation starts from steady state and OpenModelica automatically determines the initial steady-state values (this does not yet work properly in OpenModelica).
\subsection{Surge Tank}
The surge shaft/tank is presented here as a vertical open pipe with constant diameter, together with a manifold that connects the conduit, the surge volume and the penstock, \cite{Sha:11,Val:17}. The surge volume (vertical open pipe) is shown in Figure~\ref{fig:fig9}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{fig/surgepic}
\caption{Model for a vertical open pipe.}
\label{fig:fig9}
\end{figure}
The model for the surge volume can be described by mass and momentum balances as follows:
\begin{equation}
\begin{array}{c}
\frac{dm_\mathrm{s}}{dt} = \dot{m}_\mathrm{s,in} = \rho \dot{V}_\mathrm{s}\\
\frac{d\left(m_\mathrm{s}v_\mathrm{s}\right)}{dt} =\dot{m}_\mathrm{s,in}v_\mathrm{s,in}+F_\mathrm{p,s}+F_\mathrm{g,s}+F_\mathrm{f,s}
\end{array}
\end{equation}
Here, the mass of the water in the surge tank is $m_\mathrm{s}=\rho V_\mathrm{s}=\rho l_\mathrm{s}A_\mathrm{s}=\rho A_\mathrm{s}\frac{h_\mathrm{s}}{\cos\theta_\mathrm{s}}$, where $\rho$ is the water density, $V_\mathrm{s}$ is the volume of the water in the surge tank, $h_\mathrm{s}$ and $l_\mathrm{s}$ are the height and length of the surge tank filled with water, and $A_\mathrm{s}$ is the cross-sectional area of the surge tank, which is defined from the vertical pipe diameter $D_\mathrm{s}$. The water velocity $v_\mathrm{s}$ can be defined as $v_\mathrm{s}=\dot{V}_\mathrm{s}/A_\mathrm{s}$, and the inlet water velocity is $v_\mathrm{s,in}=\dot{V}_\mathrm{s}/A_\mathrm{s}$. $F_\mathrm{p,s}$ is the pressure force due to the difference between the inlet and outlet pressures $p_\mathrm{s,1}$ and $p_\mathrm{atm}$, and can be calculated as follows: $F_\mathrm{p,s}=A_\mathrm{s}\left(p_\mathrm{s,1}-p_\mathrm{atm}\right)$. There is also a gravity force that is defined as $F_\mathrm{g,s}=m_\mathrm{s}g\cos\theta_\mathrm{s}$, where $g$ is the gravitational acceleration and $\theta_\mathrm{s}$ is the slope angle of the surge tank, which can be defined from the ratio of the height difference $H_\mathrm{s}$ to the length $L_\mathrm{s}$. The last term in the momentum balance is the friction force, which can be calculated as $F_\mathrm{f,s}=-\frac{1}{8}l_\mathrm{s}f_\mathrm{D,s}\pi\rho D_\mathrm{s}v_\mathrm{s}|v_\mathrm{s}|$ using the Darcy friction factor $f_\mathrm{D,s}$ for the surge tank.
The manifold is described by the preservation of mass in steady state; the volumetric flow rate in the intake race $\dot{V}_\mathrm{i}$ equals the sum of the volumetric flow rates from the surge volume $\dot{V}_\mathrm{s}$ and the penstock $\dot{V}_\mathrm{p}$: $\dot{V}_\mathrm{i}=\dot{V}_\mathrm{p}+\dot{V}_\mathrm{s}$. In addition, the manifold pressure is equal for all three connections. This manifold is already implemented in the \emph{ContactNode} connectors model that is used in this \emph{SurgeTank} unit. Then, this unit can be connected to other waterway units.
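As an illustration of how these waterway units can be combined through their \emph{Contact}-based connectors, a minimal flow sheet with a conduit, a surge tank and a penstock could look roughly as follows. This is only a sketch: the package paths are assumed, no parameters are set, and the actual library models may require further settings:
\begin{lstlisting}[language = modelica]
model ThreeUnitWaterway "Illustrative sketch of connecting waterway units; not a complete example"
  // Package paths below are assumptions made for this illustration
  OpenHPL.Waterway.Pipe intake "Intake race (conduit)";
  OpenHPL.Waterway.SurgeTank surgeTank "Surge tank with manifold (ContactNode)";
  OpenHPL.Waterway.Pipe penstock "Simplified (incompressible) penstock";
equation
  // The inlet (i) and outlet (o) Contact connectors are joined pairwise, so the
  // pressures are equal and the mass flow rates sum to zero in each connection set
  connect(intake.o, surgeTank.i);
  connect(surgeTank.o, penstock.i);
end ThreeUnitWaterway;
\end{lstlisting}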
In the \emph{SurgeTank} unit, the user can specify the required geometry parameters for the surge tank (vertical pipe): length $L_\mathrm{s}$, height difference $H_\mathrm{s}$, diameters $D_\mathrm{s}$, pipe roughness height $\epsilon_\mathrm{s}$, and value for the atmospheric pressure $p_\mathrm{atm}$. In order to define the friction force $F_\mathrm{f,s}$ the \emph{Friction} function is used here. This unit can be initialized by the initial values of the flow rate $\dot{V}_\mathrm{s,0}$ and water height $h_\mathrm{s,0}$. Otherwise, the user can decide on an option when the simulation starts from the steady-state and the OpenModelica automatically handles the initial steady-state values (does not work properly in OpenModelica). \subsection{Pipe with compressible water and elastic walls} Unlike the conduit, the penstock has considerable pressure variation due to a considerable height drop. Thus, to make the model for the penstock more realistic, the compressible water and the elastic walls of the penstock should be taken into account. To express the compressibility/elasticity, some compressibility coefficients which show the relationship between pressure, water density and pipe inner radius, are used, \cite{Sha:11,Vyt:17}. The isothermal compressibility $\beta_T$ is defined as follows: \begin{equation} \beta_T = \frac{1}{\rho}\frac{d\rho}{dp} \end{equation} Here, $\rho$ and $p$ denote density and pressure, respectively. Assuming that the isothermal compressibility is independent of the pressure, this equation can be rewritten in a way that is convenient to calculate the fluid density at different pressures: \begin{equation} \rho = \rho^{\text{atm}}e^{\beta_T(p-p^{\text{atm}})} \end{equation} Here $p^{\text{atm}}$ is the atmospheric pressure and $\rho^{\text{atm}}$ is the water density at atmospheric pressure. The relation between density and pressure from this equation is a fairly linear dependency for the pressure in the range which is normal in hydropower plants. That is why the previous equation can be simplified as follows: \begin{equation} \rho \approx \rho^{\text{atm}}(1+{\beta_T(p-p^{\text{atm}})}) \end{equation} In the same way, the relation between pressure and pipe cross-section area can be defined using equivalent compressibility coefficient $\beta^{eq}$ due to the pipe shell elasticity; after simplification the relation looks as follows: \begin{equation} A \approx A^{\text{atm}}(1+{\beta^{eq}(p-p^{\text{atm}})}) \end{equation} Here, $A^{\text{atm}}$ is the pipe cross-section area at atmospheric pressure. It is also possible to define a linear relationship for the product of density and cross-sectional area that change with the pressure. \begin{equation} A\cdot\rho \approx A^{\text{atm}}\rho^{\text{atm}}(1+{\beta^\mathrm{tot}(p-p^{\text{atm}})}) \end{equation} Here, $\beta^\mathrm{tot}$ is the total compressibility due to water compressibility and pipe shell elasticity ($\beta^\mathrm{tot}=\beta_T+\beta^{eq}$), and is related to the speed of sound in water inside the pipe. 
Hence, using the previous equations for the relationship between the density of the water, the cross-sectional area of the pipe, and the pressure in the pipe, the ODEs (\ref{eq:eq17}) and (\ref{eq:eq18}) for the mass and momentum balances can be further developed into the PDEs, \cite{Vyt:17}:
\begin{equation}\label{eq:eq18_}
\begin{array}{c}
A^{\text{atm}}_\mathrm{p}\rho^{\text{atm}}\beta^\mathrm{tot}\frac{\partial p_\mathrm{p}}{\partial t} = -\frac{\partial\dot{m}_\mathrm{p}}{\partial x}\\
\frac{\partial\dot{m}_\mathrm{p}}{\partial t} = -\frac{\partial}{\partial x}\big(\dot{m}_\mathrm{p}v_\mathrm{p}+A_\mathrm{p}p_\mathrm{p}\big)+\rho A_\mathrm{p}g\cos\theta-\frac{1}{8}f_\mathrm{D,p}\pi\rho D_\mathrm{p}v_\mathrm{p}|v_\mathrm{p}|
\end{array}
\end{equation}
The KP scheme is chosen for the discretization of the model for the elastic penstock with compressible water. Firstly, the PDEs (\ref{eq:eq18_}) for the elastic penstock model should be presented in vector form as a standard formulation for the KP scheme, \cite{Sha:15}:
\begin{equation}
\frac{\partial U}{\partial t}+\frac{\partial F}{\partial x} = S
\end{equation}
Here, $U=\left[\begin{matrix}p_\mathrm{p} & \dot{m}_\mathrm{p}\end{matrix}\right]^T$ is the vector of conserved variables, $F=\left[\begin{matrix}\frac{\dot{m}_\mathrm{p}}{A_\mathrm{p}^{\mathrm{atm}}\rho^{\mathrm{atm}} \beta^\mathrm{tot}} & \dot{m}_\mathrm{p}v_\mathrm{p}+A_\mathrm{p}p_\mathrm{p}\end{matrix}\right]^T$ is the vector of fluxes, and $S=\left[\begin{matrix} 0 & \rho A_\mathrm{p}g\cos\theta_\mathrm{p}-\frac{1}{8}f_\mathrm{D,p}\pi\rho D_\mathrm{p}v_\mathrm{p}|v_\mathrm{p}|\end{matrix}\right]^T$ is the vector of source terms. As shown above in the description of the KP scheme, the eigenvalues $\lambda_{1,2}$ of the Jacobian $\frac{\partial F}{\partial U}$ of the system are needed and can be found as follows, \cite{Vyt:17}:
\begin{align}
\lambda_{1,2}=\frac{v_\mathrm{p}\pm\sqrt{v_\mathrm{p}^2+\frac{4A_\mathrm{p}}{A_\mathrm{p}^{\text{atm}}\rho^{\text{atm}}\beta^\mathrm{tot}}}}{2}
\end{align}
From these eigenvalues, it can be deduced that the speed of sound is given as $c=\sqrt{\frac{A_\mathrm{p}}{A_\mathrm{p}^{\text{atm}}\rho^{\text{atm}}\beta^\mathrm{tot}}}$, thus confirming that the total compressibility factor $\beta^\mathrm{tot}$ is related to the speed of sound.
Hence, the \emph{KPmethod} function for the KP scheme from the function class \emph{KP07} is used in the \emph{PenstockKP} unit in order to discretize the presented PDEs into ODEs. The \emph{KPmethod} function provides the right-hand side of Eq.~\ref{eq:eq7} (the discretized form of the PDEs), which is then used as the ODE in \emph{PenstockKP}. Moreover, the values of the states at the cell interfaces $U^\pm_{j\pm\frac{1}{2}}$ are taken from the function \emph{KPmethod} in the \emph{PenstockKP} unit in order to define the vectors of eigenvalues $\lambda^\pm_{1,j\pm\frac{1}{2}}$ and $\lambda^\pm_{2,j\pm\frac{1}{2}}$, and the vector of fluxes $F\left(U^\pm_{j\pm\frac{1}{2}}\right)$. Then, these vectors, together with the state vector of cell centre average values $\bar{U}_{j=1..N}$ and the source term vector $\bar{S}_j$, are used in the function \emph{KPmethod}. The boundary conditions are also specified for the \emph{KPmethod} function in the \emph{PenstockKP} unit and are the values for the inlet and outlet pressures $p_\mathrm{p,1}$ and $p_\mathrm{p,2}$.
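To get a feel for the magnitudes involved (the numbers below are assumed only for illustration): with $\rho^{\text{atm}}\approx1000\,\mathrm{kg/m^3}$, $A_\mathrm{p}\approx A_\mathrm{p}^{\text{atm}}$ and a total compressibility of $\beta^\mathrm{tot}\approx10^{-9}\,\mathrm{Pa^{-1}}$, the speed of sound becomes $c\approx\sqrt{1/(\rho^{\text{atm}}\beta^\mathrm{tot})}\approx1000\,\mathrm{m/s}$. With a bulk velocity of $v_\mathrm{p}\approx3\,\mathrm{m/s}$, the eigenvalues are then $\lambda_{1,2}\approx\pm1000\,\mathrm{m/s}$, i.e., the one-sided local speeds of propagation are dominated by the pressure waves rather than by the slow bulk flow.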
The \emph{PenstockKP} unit uses the \emph{TwoContact} connector model that provides information about inlet and outlet pressure and the mass flow rate of two connectors which can be connected to other waterway units. In this \emph{PenstockKP} unit, the user can specify the required geometry parameters for the: length $L_\mathrm{p}$, height difference $H_\mathrm{p}$, inlet and outlet diameters $D_\mathrm{p,1}$ and $D_\mathrm{p,2}$, pipe roughness height $\epsilon_\mathrm{p}$ and the number of cells $N$ for the discretization. In order to define the friction force $F_\mathrm{f,p}$ in the cell of the pipe, the \emph{Friction} function is used here. This unit can be initialized by the initial value of the flow rate $\dot{V}_\mathrm{p,0}$ and pressure $p_\mathrm{p,0}$ for each cell of the pipe. In order to simplify the pressure initialization, the user can simply specify the initial value for the surge tank water height $h_\mathrm{s,0}$ (then an encoded formula for the pressure initialization is used). Otherwise, the user can choose an option when the simulation starts from steady-state and the OpenModelica automatically handles the initial steady-state values (does not work properly in OpenModelica). \subsection{Open Channel} Similarly to the detailed model of the pipe, the model of the open channel is also encoded in the library. The open channel model looks as follows, \cite{Sha:15,Vyt:17}: \begin{equation} \frac{\partial U}{\partial t}+\frac{\partial F}{\partial x} = S \end{equation} where:\begin{itemize} \item[] $U=\left[\begin{matrix}q & z\end{matrix}\right]^T$ , \item[] $F=\left[\begin{matrix}q & \frac{q^2}{z-B}+\frac{g}{2}\left(z-B\right)^2\end{matrix}\right]^T$, \item[] $S=\left[\begin{matrix}0 & -g\left(z-B\right)\frac{\partial B}{\partial x}-\frac{gf_n^2q|q|\left(w+2\left(z-B\right)\right)^\frac{4}{3}}{w^\frac{4}{3}}\frac{1}{\left(z-B\right)^\frac{7}{3}}\end{matrix}\right]^T$, \end{itemize} with: $z=h+B$, and $q=\frac{\dot{V}}{w}$. Here, $h$ is water depth in the channel, $B$ is the channel bed elevation, $q$ is the discharge per unit width $w$ of the open channel. $f_n$ is the Manning's roughness coefficient. The KP scheme is described earlier, but some additional specific details for open channels should be added here. Firstly, the eigenvalues for this model are defined as follows, \cite{Sha:15}: \begin{equation} \lambda_{1,2}=u\pm\sqrt{gh} \end{equation} where, $u$ is the cross-section average water velocity. In the channel areas which are dry or almost dry (if the computational domain contains a dry bed, islands or coastal areas), the values of $h_{i\pm\frac{1}{2}}^\pm$ could be very small or even zero. In such cases when $h_{i\pm\frac{1}{2}}^\pm<\epsilon$, with $\epsilon$ being an a-priori chosen small positive number (e.g. 
$\epsilon = 10^{-5}$), the velocity at the cell centres in the entire domain is recomputed using the desingularization formula, \cite{Sha:15}:
\begin{equation}
\bar{u}_j=\frac{2\bar{h}_j\bar{q}_j}{\bar{h}_j^2+\max\left(\bar{h}_j^2,\epsilon^2\right)}
\end{equation}
Then, the point values of the velocity $u_{i\pm\frac{1}{2}}^\pm$ at the left/right cell interfaces, i.e., at $x_j = x_{j\pm\frac{1}{2}}$, are computed as, \cite{Sha:15}
\begin{equation}\label{eq:eq19}
\begin{array}{c}
u^-_{j+\frac{1}{2}}=\bar{u}_j+\frac{\Delta x}{2}s_{u_j}\\
u^+_{j+\frac{1}{2}}=\bar{u}_{j+1}-\frac{\Delta x}{2}s_{u_{j+1}}\\
u^-_{j-\frac{1}{2}}=\bar{u}_{j-1}+\frac{\Delta x}{2}s_{u_{j-1}}\\
u^+_{j-\frac{1}{2}}=\bar{u}_j-\frac{\Delta x}{2}s_{u_j}
\end{array}
\end{equation}
The slope, or numerical derivative, of the velocity $s_{u_j}$ is calculated using the same limiter function as in Eq.~\ref{eq:eq10}, however, in this case replacing $U$ by $u$ (it has not been rewritten here for the sake of brevity), \cite{Sha:15}.
Hence, similarly to the \emph{PenstockKP} unit, the \emph{KPmethod} function for the KP scheme from the function class \emph{KP07} is used in the \emph{OpenChannel} unit in order to discretize the presented PDEs into ODEs. The values of the states at the cell interfaces $U^\pm_{j\pm\frac{1}{2}}$ are taken from the function \emph{KPmethod} in the \emph{OpenChannel} unit in order to define the vectors of eigenvalues $\lambda^\pm_{1,j\pm\frac{1}{2}}$ and $\lambda^\pm_{2,j\pm\frac{1}{2}}$, the point values of the velocity $u_{i\pm\frac{1}{2}}^\pm$, and the vector of fluxes $F\left(U^\pm_{j\pm\frac{1}{2}}\right)$. Then, these vectors, together with the state vector of cell centre average values $\bar{U}_{j=1..N}$ and the source term vector $\bar{S}_j$, are used in the function \emph{KPmethod}. The boundary conditions are also specified for the \emph{KPmethod} function in the \emph{OpenChannel} unit and are the values for the inlet and outlet flows per unit width $q_\mathrm{1}$ and $q_\mathrm{2}$.
The \emph{OpenChannel} unit uses the \emph{TwoContact} connector model that provides information about the inlet and outlet pressure (water depth in the channel) and the flow rate of the two connectors, which can be connected to other waterway units. In this \emph{OpenChannel} unit, the user can specify the required geometry parameters for the channel: the length $L$ and width $w$ of the channel, the height vector $H$ of the channel bed with the heights at the left and right sides, the Manning's roughness coefficient $f_n$, and the number of cells $N$ for the discretization. This unit can be initialized by the initial values of the flow rate $\dot{V}_\mathrm{0}$ and the water depth $h_\mathrm{0}$ for each cell of the channel. The user can also change the boundary condition for the KP scheme.
\subsection{Reservoir Channel}
In order to make a more detailed model of the reservoir, the open channel model is used, where the channel bed is assumed to be flat (no slope). Here, the user also specifies the geometry parameters of the channel (reservoir), such as the length $L$ and width $w$ of the channel (reservoir), the height vector $H$ of the reservoir bed with the heights at the left and right sides (these should be equal in order to have a flat bed), and the number of cells $N$ for the discretization. This unit can be initialized by the initial value of the water depth $h_\mathrm{0}$ in the reservoir.
The \emph{ReservoirChannel} unit uses the \emph{Contact} connector that provides information about the outlet pressure and the flow rate from/to the reservoir which can be connected to other waterway units. \subsection{Runoff} Similar to many other hydrological models, the HBV model is based on the land phase of the hydrological (water) cycle, see Figure~\ref{fig:fig10}. The figure shows that the HBV model consists of four main water storage components connected in a cascade form. Using a variety of weather information, such as air temperature, precipitation and potential evapotranspiration, the dynamics and the balances of the water in the presented water storages are calculated. Hence, the runoff/inflow from some of the defined catchment areas can be found, \cite{Sha:13}. \begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth]{fig/hydrology} \caption{Structure of the HBV model.} \label{fig:fig10} \end{figure} The model is developed for each water storage component to define the dynamics and balances of the water. In addition, the catchment area is divided into elevation zones (usually not more than ten) where each zone has the same area. The air temperature and the precipitation are provided for each elevation zone. Hence, all calculations within each water storage component are performed for each elevation zone. \subsubsection{Snow routine} In the snow routine segment, the snow storage, as well as snowmelt are computed. This computation is performed for each elevation zone. Using the mass balance, the change in the dry snow storage volume $V_\mathrm{s,d}$, is found as follows: \begin{equation}\label{eq:eq20} \frac{dV_\mathrm{s,d}}{dt}=\dot{V}_\mathrm{p,s}-\dot{V}_\mathrm{d2w} \end{equation} Here, the flow of the precipitation in the form of snow is denoted as $\dot{V}_\mathrm{p,s}$. This precipitation in the form of snow is defined from the input precipitation flow, $\dot{V}_\mathrm{p}$, based on the information about the air temperature, $T$, a threshold temperature for snowmelt, $T_\mathrm{T}$, and for the area that is not covered by lakes (the fractional area covered by the lakes, $a_\mathrm{L}$, is used): \begin{equation}\label{eq:eq21} \dot{V}_\mathrm{p,s}=\begin{cases} \dot{V}_\mathrm{p}K_\mathrm{CR}K_\mathrm{CS}(1 - a_\mathrm{L}), & \mbox{if } T\leq T_\mathrm{T}\\ 0, & \mbox{if } T>T_\mathrm{T} \end{cases} \end{equation} Precipitation correction coefficients $K_\mathrm{CR}$ and $K_\mathrm{CS}$ are also used here, for the rainfall and snowfall precipitations, respectively. 
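Threshold expressions like Eq.~\ref{eq:eq21} translate directly into Modelica if-expressions; a minimal sketch (the variable names are illustrative and not necessarily those used in the \emph{RunOff\_zones} unit) could look as follows:
\begin{lstlisting}[language = modelica]
// Illustrative sketch of Eq. (21): precipitation falling as snow
Vdot_p_s = if T <= T_T then Vdot_p*K_CR*K_CS*(1 - a_L) else 0;
\end{lstlisting}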
Then, the flow of precipitation in the form of rain is defined as follows: \begin{equation}\label{eq:eq22} \dot{V}_\mathrm{p,r}=\begin{cases} \dot{V}_\mathrm{p}K_\mathrm{CR}(1 - a_\mathrm{L}), & \mbox{if } T>T_\mathrm{T}\\ 0, & \mbox{if } T\leq T_\mathrm{T} \end{cases} \end{equation} The flow of the melting snow (melting of snow from dry form to water form), $\dot{V}_\mathrm{d2w}$, can be found using the following expression based on the degree-day factor $K_\mathrm{dd}$ and the area of the elevation zone $A_\mathrm{e}$: \begin{equation}\label{eq:eq23} \dot{V}_\mathrm{d2w}=\begin{cases} A_\mathrm{e}K_\mathrm{dd}(T - T_\mathrm{T})(1 - a_\mathrm{L}), & \mbox{if }T>T_\mathrm{T}\mbox{ and }V_\mathrm{s,d}>0\\ 0, & \mbox{otherwise} \end{cases} \end{equation} Finally, the flow out of the snow routine to the next soil moisture segment, $\dot{V}_\mathrm{s2s}$, is found as a sum of flows of precipitation in the form of rain, and the melted snow: \begin{equation}\label{eq:eq24} \dot{V}_\mathrm{s2s}=\dot{V}_\mathrm{p,r}+\dot{V}_\mathrm{d2w} \end{equation} It should be noted that a simplification related to the threshold temperature, $T_\mathrm{T}$, is assumed here. This threshold temperature describes both the snow melt and the rainfall to snowfall transition temperatures in the presented model. In reality, this threshold temperature might differ for each of these processes. In addition, the storage of snow in water form is not considered here, mostly due to the simplification with the threshold temperature. \subsubsection{Soil moisture routine} In the soil moisture segment, the water storage in the ground (soil) is found together with actual evapotranspiration from the snow-free areas. The net runoff to the next segment (upper zone) is also defined here. Using the mass balance, the volume of the soil moisture storage, $V_\mathrm{s,m}$, is found as follows: \begin{equation}\label{eq:eq25} \frac{dV_\mathrm{s,m}}{dt}=\dot{V}_\mathrm{s2s}-\dot{V}_\mathrm{s2u}-\alpha_\mathrm{e}\dot{V}_\mathrm{s,e} \end{equation} Here, $\dot{V}_\mathrm{s2u}$ is the net runoff to the next segment (the upper zone). $\dot{V}_\mathrm{s,e}$ is the actual evapotranspiration from the soil, that is taken into account only for the snow-free areas (zones). To define these snow-free zones, coefficient $\alpha_\mathrm{e}$ is used and equals one for snow-free areas and zero for covered-by-snow areas. The actual evapotranspiration can be found from the potential evapotranspiration, $\dot{V}_\mathrm{e}$, the volume of the soil moisture storage, $V_\mathrm{s,m}$, the area of the elevation zone $A_\mathrm{e}$, and the field capacity --- threshold soil (ground) moisture storage, $g_\mathrm{T}$: \begin{equation}\label{eq:eq26} \dot{V}_\mathrm{s,e}=\begin{cases} \frac{V_\mathrm{s,m}}{A_\mathrm{e}g_\mathrm{T}}\dot{V}_\mathrm{e}, & \mbox{if } V_\mathrm{s,m}< A_\mathrm{e}g_\mathrm{T}\\ \dot{V}_\mathrm{e}, & \mbox{if } V_\mathrm{s,m}\geq A_\mathrm{e}g_\mathrm{T} \end{cases} \end{equation} The potential evapotranspiration, $\dot{V}_\mathrm{e}$, is defined as the input to the hydrology model, similarly to the air temperature and precipitations. 
The output of the soil moisture segment --- the net runoff to the next segment, $\dot{V}_\mathrm{s2u}$, can be found based on the field capacity, $g_\mathrm{T}$, as follows:
\begin{equation}\label{eq:eq27}
\dot{V}_\mathrm{s2u}=\begin{cases} \Big(\frac{V_\mathrm{s,m}}{A_\mathrm{e}g_\mathrm{T}}\Big)^{\beta}\dot{V}_\mathrm{s2s}, & \mbox{if } 0\leq V_\mathrm{s,m}< A_\mathrm{e}g_\mathrm{T}\\ \dot{V}_\mathrm{s2s}, & \mbox{if } V_\mathrm{s,m}\geq A_\mathrm{e}g_\mathrm{T} \end{cases}
\end{equation}
Here, $\beta$ is an empirical parameter for specifying the relationship between the flow out of the snow routine, the soil moisture storage, and the net runoff from the soil moisture. Typically, $\beta \in [2,3]$, which leads to nonlinearity in Eq.~\ref{eq:eq27}.
\subsubsection{Runoff routine}
The upper and lower zones from Figure~\ref{fig:fig10} are combined into one segment --- the runoff routine. In this segment, the runoff from the catchment area is found based on the outflow from the soil moisture. The effects of precipitation to, and evapotranspiration from, the lakes in the catchment area are also taken into account here.
The upper zone characterises components with quick runoff. The following mass balance is used for the upper zone description:
\begin{equation}\label{eq:eq28}
\frac{dV_\mathrm{u,w}}{dt}=\dot{V}_\mathrm{s2u}-\dot{V}_\mathrm{u2l}-\dot{V}_\mathrm{u2s}-\dot{V}_\mathrm{u2q}
\end{equation}
Here, $V_\mathrm{u,w}$ is the water volume in the upper zone, which depends on the saturation threshold, $s_\mathrm{T}$, that governs the surface runoff, $\dot{V}_\mathrm{u2s}$, and the quick (fast) runoff, $\dot{V}_\mathrm{u2q}$. $\dot{V}_\mathrm{u2l}$ is the runoff to the lower zone and is defined by the percolation capacity, $K_\mathrm{PC}$, for the area that is not covered by lakes:
\begin{equation}\label{eq:eq29}
\dot{V}_\mathrm{u2l}=A_\mathrm{e}(1-a_\mathrm{L})K_\mathrm{PC}
\end{equation}
The surface runoff, $\dot{V}_\mathrm{u2s}$, can be found using the saturation threshold, $s_\mathrm{T}$, and the water volume in the upper zone, $V_\mathrm{u,w}$:
\begin{equation}\label{eq:eq30}
\dot{V}_\mathrm{u2s}=\begin{cases} a_1(V_\mathrm{u,w}-A_\mathrm{e}s_\mathrm{T}), & \mbox{if } V_\mathrm{u,w}>A_\mathrm{e}s_\mathrm{T}\\ 0, & \mbox{if } V_\mathrm{u,w}\leq A_\mathrm{e}s_\mathrm{T} \end{cases}
\end{equation}
Here, $a_1$ is a parameter that represents the recession constant for the surface runoff. A similar recession constant, $a_2$, is used for the calculation of the quick runoff, $\dot{V}_\mathrm{u2q}$:
\begin{equation}\label{eq:eq31}
\dot{V}_\mathrm{u2q}=a_2\min{(V_\mathrm{u,w},A_\mathrm{e}s_\mathrm{T})}
\end{equation}
The lower zone characterises the lake and the groundwater storages and defines the base runoff from the catchment area. The following mass balance equation is used for the lower zone description:
\begin{equation}\label{eq:eq32}
\frac{dV_\mathrm{l,w}}{dt}=\dot{V}_\mathrm{u2l}+a_\mathrm{L}\dot{V}_\mathrm{p}-\dot{V}_\mathrm{l2b}-a_\mathrm{L}\dot{V}_\mathrm{e}
\end{equation}
The water volume in the lower zone is denoted as $V_\mathrm{l,w}$. As mentioned previously, $\dot{V}_\mathrm{p}$ and $\dot{V}_\mathrm{e}$ are the precipitation and the potential evapotranspiration flows, respectively. $a_\mathrm{L}$ is the fractional area covered by lakes. $\dot{V}_\mathrm{l2b}$ is the base runoff from the lower zone, which can be found as follows:
\begin{equation}\label{eq:eq33}
\dot{V}_\mathrm{l2b}=a_3V_\mathrm{l,w}
\end{equation}
Here, $a_3$ is a recession constant similar to $a_1$ and $a_2$.
The total runoff from the catchment, $\dot{V}_\mathrm{tot}$, is a sum of the base, quick, surface runoffs for each elevation zones, and is defined as follows: \begin{equation}\label{eq:eq34} \dot{V}_\mathrm{tot}=\sum\limits_{i=1}^n(\dot{V}_{\mathrm{l2b},i}+\dot{V}_{\mathrm{u2s},i}+\dot{V}_{\mathrm{u2q},i}) \end{equation} Here, the base $\dot{V}_{\mathrm{l2b},i}$, quick $\dot{V}_{\mathrm{u2q},i}$, and surface $\dot{V}_{\mathrm{u2s},i}$ runoffs are first summed up for each of the $n$ elevation zones and then these sums of the base, quick and surface runoffs are added together. Hence, this hydrology model is encoded in the \emph{OpenHPL} library as the \emph{RunOff\_zones} unit where the main defined variable is the total runoff from the catchment. This unit uses the standard Modelica connector \emph{RealOutput} connector as an output from the model that can be connected to, for example, simple reservoir model \emph{Reservoir} unit. In order to get historic information about the air temperature, precipitation, and potential evapotranspiration for each of the elevation zones, the standard Modelica \emph{CombiTimeTable} source models are used in order to read this data from the text files. When the \emph{RunOff\_zones} unit is in use, the user can specify the required geometry parameters for the catchment: the number of elevation zones, all hydrology parameters such as threshold temperatures, degree-day factor, precipitation correction coefficients, field capacity and $\beta$ parameter in soil moisture routine, threshold level for quick runoff in upper zone, percolation from upper zone to lower zone, recession constants for the surface and quick runoffs in upper zone, and recession constant for the base runoff in lower zone. Finally, the user can also specify the info about the text files where the data for the \emph{CombiTimeTable} models are stored. \section{Electro-Mechanical} \subsection{Turbine} The turbine unit can be expressed with a simple turbine model based on a look-up table (turbine efficiency vs. guide vane opening). This simple turbine model is described by Eq.~\ref{eq:eq35}, \cite{LieL:18,Vyt:19b}, where the mechanical turbine shaft power $\dot{W}_\mathrm{tr}$ is defined as: \begin{equation}\label{eq:eq35} \dot{W}_\mathrm{tr} = \eta_\mathrm{h}\Delta p_\mathrm{tr}\dot{V}_\mathrm{tr} \end{equation} Here, $\eta_\mathrm{h}$ gives the turbine hydraulic efficiency that is found from a standard turbine look-up table and depends on the turbine control signal, $u_v$. $\Delta p_\mathrm{tr}$ is the pressure drop through the turbine that is defined as the difference between inlet and outlet turbine pressures, i.e., $\Delta p_\mathrm{tr} = p_\mathrm{tr1}-p_\mathrm{tr2}$. The relationship between the turbine volumetric flow rate $\dot{V}_\mathrm{tr}$ and the pressure drop $\Delta p_\mathrm{tr}$ is described through a simple valve-like expression as follows: \begin{equation}\label{eq:eq36} \dot{V}_\mathrm{tr} = C_\mathrm{v} u_\mathrm{v} \sqrt{\frac{\Delta p_\mathrm{tr}}{p^\mathrm{a}}} \end{equation} Here, $C_\mathrm{v}$ in Eq.~\ref{eq:eq36} is some guide vane ``valve capacity'' that can be tuned by using the nominal turbine net head (nominal pressure drop) and the nominal turbine flow rate. $p^\mathrm{a}$ is the atmospheric pressure. Based on Eqs.~\ref{eq:eq35} and \ref{eq:eq36}, the simple turbine model is implemented in \emph{OpenHPL} as the \emph{Turbine} element. 
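As a hedged illustration of how Eqs.~\ref{eq:eq35} and \ref{eq:eq36} combine, a small Modelica function is sketched below. It is not the library's actual \emph{Turbine} implementation; in particular, the constant hydraulic efficiency and the default values are assumptions made only for the example:
\begin{lstlisting}[language = modelica]
function simpleTurbinePower "Illustrative sketch of Eqs. (35)-(36); not the library code"
  input Modelica.SIunits.Pressure dp "Pressure drop over the turbine";
  input Real u_v "Guide vane opening, 0..1";
  input Real C_v "Guide vane 'valve capacity'";
  input Real eta_h = 0.9 "Constant hydraulic efficiency (illustrative)";
  input Modelica.SIunits.Pressure p_a = 1.013e5 "Atmospheric pressure";
  output Modelica.SIunits.Power W_tr "Mechanical shaft power";
protected
  Modelica.SIunits.VolumeFlowRate Vdot "Turbine volumetric flow rate";
algorithm
  // Eq. (36): valve-like relation between flow rate and pressure drop
  Vdot := C_v*u_v*sqrt(dp/p_a);
  // Eq. (35): shaft power from hydraulic efficiency, pressure drop and flow rate
  W_tr := eta_h*dp*Vdot;
end simpleTurbinePower;
\end{lstlisting}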
In this \emph{Turbine} unit, the multi-physic connections are used in order to stay connected to waterway units as well as to the other electro-mechanical units. Those connections are already implemented in the \emph{TurbineContacts} connectors model that is used in this \emph{Turbine} unit. Then, this unit can be connected to other waterway and electro-mechanical units. In the \emph{Turbine} unit, the user can specify the required parameters for the simple turbine model: guide vane ``valve capacity'' $C_\mathrm{v}$, the nominal turbine net head (nominal pressure drop) and the nominal turbine flow rate, turbine guide vane nominal opening signal $u_{v,n}$ in per unit value from 0 to 1. The user can also choose either to use the constant turbine efficiency and specify it, or to use the look-up table for the turbine efficiency and also specify this table. \subsection{Francis} Our library also includes a mechanistic Francis turbine model based on the Euler turbine equations. The key quantities of the model are shown in Fig.~\ref{fig:fig11}, and the shaft power $\dot{W}_s$ produced in the Francis turbine is defined as follows, \cite{LieL:18,Vyt:18}: \begin{equation} \label{eq:eq37} {\dot{W}_s} = \dot{m}\omega \Big(R_1\frac{\dot{V}}{A_1}\cot{\alpha_1}-R_2\big(\omega R_2+\frac{\dot{V}}{A_2}\cot{\beta_2}\big)\Big). \end{equation} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{fig/Fig_2_F_turb} % The printed column width is 8.4 cm. \caption{Key quantities in the Francis turbine model, with blade angles $\beta_1$ and $\beta_2$. The water effluent comes out from the paper plane, \cite{LieL:18}.} \label{fig:fig11} \end{figure} Here, $\dot{m}$ and $\dot{V}$ are the mass and volumetric flow rate through the turbine, respectively, and $\omega$ is the angular velocity of the runner. $R_1$ and $R_2$ are the inlet and outlet radius of the runner, respectively. $A_1$ and $A_2$ are the inlet and outlet cross-sectional areas, respectively, and can be defined by using the runner dimensions: $R_1$, $R_2$, and $w_1$ which is the inlet width/height of the runner/blades. $\alpha_1$ is the inlet guide vane angle that is given by a control signal. $\beta_2$ is the outlet blade angle. The total work rate $\dot{W}_t$ removed through the turbine is: \begin{equation} \label{eq:eq38} {\dot{W}_t} = {\dot{W}_s+\dot{W}_{ft}+\Delta p_v\dot{V}}. \end{equation} Here, $\Delta p_v$ is the pressure loss across the guide vane due to friction and is often neglected. The total work rate might also be formulated based on Bernoulli's law: $\dot{W}_t=\Delta p_{tr}\dot{V} + \frac{1}{2}\dot{m}\dot{V}^2(\frac{1}{A_0^2}-\frac{1}{A_2^2})$, from where the total pressure loss across the turbine $\Delta p_{tr}$ can be defined; $A_0$ is the inlet cross section area to the spiral case. $\dot{W}_{ft}$ -- the friction term that represents various friction losses within the turbine is calculated as follows: \begin{equation} \label{eq:eq39} \begin{array}{ll} \dot{W}_{ft} = k_{ft,1}\dot{V}(\cot{\gamma_1}-\cot{\beta_1})^2 \\ +k_{ft,2}\dot{V}\cot^2{\alpha_2}+k_{ft,3}\dot{V}^2. \end{array} \end{equation} Here, $k_{ft,1}$, $k_{ft,2}$ and $k_{ft,3}$ are friction coefficients that represent shock, whirl, and pipe friction losses, respectively. These coefficients are tuning parameters for the mechanistic Francis turbine model. 
$\beta_1$ is the inlet blade angle which in the nominal operating condition should be equal to the angle of the relative velocity $\gamma_1$ in order to achieve an influent no-shock condition (the angle of the relative velocity is defined from: $\cot{\gamma_1}=\cot{\alpha_1}-\frac{\omega R_1}{\dot{V}}A_1$). To satisfy the no-whirl effluent condition, angle $\alpha_2$ should be equal to 0. This angle is defined as $\cot{\alpha_2}=\cot{\beta_2}+\frac{\omega R_2}{\dot{V}/A_2}$. We propose the following expressions for the turbine loss coefficients, \cite{Vyt:19b}: \begin{equation} \begin{array}{c} k_{ft,1} = 11.6\cdot10^3e^{8.9\cdot10^{-3}H_\mathrm{n}}\\ k_{ft,2} = 0\\ k_{ft,3} = 720e^{6.7\cdot10^{-3}H_\mathrm{n}} \end{array} \end{equation} The efficiency of the turbine can be defined as follows: \begin{equation} \label{eq:eq40} {\eta} = \frac{\dot{W}_{s}}{\dot{W}_{t}} \end{equation} \textbf{Turbine design algorithm.} Geometry parameters for the Francis turbine must be found in order to use the mechanistic turbine model as presented above. These parameters, such as blade angles or runner dimensions, can be found from design data. Typically, for real (in use) turbines, these data are unavailable due to trade confidentiality. Thus, it is of interest to develop a design algorithm that can be used to define all the geometry parameters. The structure of this algorithm is shown in Fig.~\ref{fig:fig12}, where the input and output values for the design algorithm are presented. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{fig/Fig_3_Algor_str} % The printed column width is 8.4 cm. \caption{Block diagram that describes the turbine design algorithm (inputs and outputs).} \label{fig:fig12} \end{center} \end{figure} As input data for the calculation, nominal net head $H_n$ and volumetric flow rate $\dot{V}_n$ are used. A possible turbine design algorithm is as follows, ref. Fig.~\ref{fig:fig11}, \cite{Bre:01}: \begin{enumerate} \item Choose the outlet blade angle $\beta_2$ and reference velocity $v_{\omega,2}$. These values are usually in the interval: \begin{equation} \label{eq:eq41} \begin{array}{ll} 158\mathrm{^\circ} \leq \beta_2 \leq 165\mathrm{^\circ} \\ 35\mathrm{m/s} \leq v_{\omega,2} \leq 42\mathrm{m/s} \end{array} \end{equation} Here, the outlet angle and reference velocity take higher values for higher heads. Brekke suggests that these values may be chosen as $\beta_2=162.5\mathrm{^\circ}$ and $v_{\omega,2}=41\mathrm{m/s}$. 
\item Define the outlet runner cross-section area $A_2$ (radius $R_2$) and adjust it together with reference velocity $v_{\omega,2}$ to the normal synchronous rotational speed.\\First, the meridional velocity is defined as: \begin{equation} \label{eq:eq42} {v_2^r} = -\frac{v_{\omega,2}}{\cot{\beta_2}}, \end{equation} then outlet radius can be defined from the outlet cross-sectional area ($A_2=\pi R_2^2$): \begin{equation} \label{eq:eq43} {v_2^r} = \frac{\dot{V}}{A_2} \Rightarrow R_2=\sqrt{\frac{\dot{V}}{\pi v_2^r}} \end{equation} Then, the turbine rotational speed $n$ [$\mathrm{RPM}$] can be calculated from the angular velocity ($\omega=\frac{\pi n}{30}$): \begin{equation} \label{eq:eq44} {v_{\omega,2}}={\omega R_2} \Rightarrow n=\frac{30v_{\omega,2}}{\pi R_2} \end{equation} After this the turbine speed should be reduced to the nearest synchronous speed (depends on number of pole pairs $p$ in the generator: $n=\frac{60f}{p}$, where frequency $f$ is constant $50\,\mathrm{Hz}$) and then the outlet radius with the reference velocity should be recalculated in reverse order, using~(\ref{eq:eq44}),~(\ref{eq:eq43}) and~(\ref{eq:eq42}).\\ Normally, the information about the turbine rotational speed is available, so the outlet runner radius and the reference velocity can be found directly from~(\ref{eq:eq42}),~(\ref{eq:eq43}) and~(\ref{eq:eq44}). \item Choosing the inlet runner dimension, inlet cross-section area $A_1$ (radius $R_1$ and width $w_1$).\\ The inlet radius can be defined from the reference velocity $v_{\omega,1}$ as follows: \begin{equation} \label{eq:eq45} {R_1}=\frac{v_{\omega,1}}{\omega}=\frac{30v_{\omega,1}}{\pi n} \end{equation} Here, the reference velocity can be chosen from the range of reduced value $\overline{v}_{\omega,1}\in [0.7, 0.75]$, which is dimensionless and expressed as: \begin{equation} \label{eq:eq46} {\overline{v}_{\omega,1}} = \frac{v_{\omega,1}}{\sqrt{2gH}} \end{equation} It is common to use $\overline{v}_{\omega,1}=0.725$.\\Regularly, in order to avoid backflow in the runner, an acceleration of the flow through the runner is desirable. That is why the outlet meridional velocity can be chosen approximately ten per cent higher than the inlet. \begin{equation} \label{eq:eq47} {v_2^r} = 1.1v_1^r \end{equation} Then the inlet runner width $w_1$ can be calculated from the inlet cross-sectional area ($A_1=2\pi R_1w_1$): \begin{equation} \label{eq:eq48} {v_1^r} = \frac{\dot{V}}{A_1}\Rightarrow w_1=\frac{\dot{V}}{2\pi R_1v_1^r} \end{equation} Here, it should be noted that the blade thickness could be included for improving the calculation of the inlet cross-section area, e.g., 10\% of the perimeter. \item The inlet blade angle $\beta_1$ can be found as follows: \begin{equation} \label{eq:eq49} {\tan{(180^\circ-\beta_1)}} = \frac{v_1^r}{v_{\omega,1}-v_1^t} \end{equation} Here, $v_1^t$ is the tangential velocity and can be defined from dimensionless value $\overline{v}_1^t = 0.48/\overline{v}_{\omega,1}$, using~(\ref{eq:eq46}) to convert from dimensionless value. \end{enumerate} \textbf{Guide vane actuation.} In addition, a model for the guide vane opening is also included in order to define the inlet guide vane angle $\alpha_1$, \cite{LieL:18}. The guide vane geometry is depicted in Figure~\ref{fig:fig13}. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{fig/Guide_vane} % The printed column width is 8.4 cm. 
\caption{Guide vane geometry relating actuator position $Y$ to guide vane angle $\alpha_1$, \cite{LieL:18}.}
\label{fig:fig13}
\end{center}
\end{figure}
From Figure~\ref{fig:fig13}~(a), assuming that the actuator cylinder is ``vertical'' in position ``0'', it can be found that
\begin{equation}
\begin{array}{c}
{R}_Y^2=r_Y^2+Y_0^2 \\
\cos\theta_0=\frac{r_Y}{R_Y}
\end{array}
\end{equation}
Clearly, $d_0=R_v-r_v$. Next, moving the actuator to position $Y$, Figure~\ref{fig:fig13}~(b) with the cosine law gives
\begin{equation}
Y^2=r_Y^2+R_Y^2-2r_YR_Y\cos\theta
\end{equation}
thus specifying angle $\theta$. The change in angle $\theta$ is introduced in Figure~\ref{fig:fig13}~(b), (c) as
\begin{equation}
\Delta\theta\equiv\theta-\theta_0
\end{equation}
Then, applying the cosine law to Figure~\ref{fig:fig13}~(c) gives length $d$ ($d\in[d_0,2l]$) from
\begin{equation}
d^2=r_v^2+R_v^2-2r_vR_v\cos\Delta\theta
\end{equation}
and then angle $\psi$ from
\begin{equation}
r_v^2=d^2+R_v^2-2dR_v\cos\psi
\end{equation}
Here, it is necessary to ensure that the sign of $\psi$ equals the sign of $\Delta\theta$. From Figure~\ref{fig:fig13}~(d) and applying the cosine law, we find
\begin{equation}
l^2=l^2+d^2-2ld\cos\phi\Rightarrow\cos\phi=\frac{d}{2l}
\end{equation}
Finally, the guide vane angle can be found as
\begin{equation}
\alpha_1=\phi-\psi
\end{equation}
In the above model, it has been assumed that the guide vane is perpendicular to the attached ``arm'' of length $l$, and that in position ``0'' the guide vane is at the ``9 o'clock'' position, Figure~\ref{fig:fig13}~(a), \cite{LieL:18}.

Hence, the Francis turbine model, the turbine design algorithm, and the guide vane actuation (servo position) model are together realized in the \emph{Francis} turbine element in our library. In this \emph{Francis} unit, the multi-physic \emph{TurbineContacts} connectors model is also used and ensures connection to other waterway and electro-mechanical units. In addition, this \emph{Francis} unit also has the standard Modelica \emph{RealInput} connector that provides the angular velocity as an input to the Francis turbine model. Typically, this angular velocity input is connected to, and derived from, the generator units. In the \emph{Francis} unit, the user can specify the required nominal parameters for the Francis turbine: nominal turbine net head (nominal pressure drop), nominal turbine flow rate, nominal power, and nominal rotational speed. Then, the user can either choose to use the design algorithm that automatically defines the turbine geometry parameters (radius of the turbine blade inlet and outlet, the width of the turbine/blades inlet, the turbine inlet and outlet blade angles), or specify these turbine geometries manually. Similarly, the user has the same options for the loss coefficients and the parameters of the guide vane actuation (servo position) model.

\subsection{Pelton}
Similar to the Francis turbine model, a mechanistic Pelton turbine model is developed and used. The key quantities of the model are shown in Fig.~\ref{fig:fig14}, and the shaft power $\dot{W}_s$ produced in the Pelton turbine is defined as follows, \cite{LieL:18}:
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig/Pelton_turb} % The printed column width is 8.4 cm.
\caption{Some key concepts of the Pelton turbine, \cite{LieL:18}.}
\label{fig:fig14}
\end{center}
\end{figure}
\begin{equation}
\dot{W}_s=\dot{m}v_R\left[\delta(u_\delta)\cdot v_1-v_R\right]\left(1-k\cos\beta\right)
\end{equation}
Here, $\dot{m}$ is the mass flow rate through the turbine. The reference velocity is equal to $v_R = \omega R$: here, $R$ is the radius of the rotor where the water hits the bucket and $\omega$ is the angular velocity that is normally constrained by the grid frequency. The water velocity at position ``1'' (Figure~\ref{fig:fig14}) is equal to $v_1=\frac{\dot{V}}{A_1}$, where $\dot{V}$ is the volumetric flow rate through the turbine and $A_1$ is the cross-sectional area at position ``1'' (the end of the nozzle). $\beta$ is the reflection angle with a typical value of $\beta= 165^{\circ}$, and $k<1$ is a friction factor, typically $k\in[0.8, 0.9]$, \cite{LieL:18}. In practical installations, there is a deflector mechanism that reduces the velocity $v_1\delta(u_\delta)$ to avoid over-speed. The total work rate $\dot{W}_t$ removed through the turbine is:
\begin{equation} \label{eq:eq50}
{\dot{W}_t} = {\dot{W}_s+\dot{W}_{ft}}
\end{equation}
Here, $\dot{W}_{ft}$ is the friction loss that can be found as follows:
\begin{equation}
\dot{W}_{ft}=K\left(1-k\cos\beta\right)\dot{m}v_R^2
\end{equation}
Here, the friction coefficient $K$ equals 0.25, \cite{LieL:18}. In addition, the pressure drop across the nozzle (positions ``0'' and ``1'') $\Delta p_n$ can be found as follows, \cite{LieL:18}:
\begin{equation}
\Delta p_n=\frac{1}{2}\rho\dot{V}\left[\dot{V}\left(\frac{1}{A_1^2(Y)}-\frac{1}{A_0^2}\right)+k_f\right]
\end{equation}
Here, $A_0$ is the cross-sectional area at position ``0'' (the beginning of the nozzle). $A_1(Y)$ means that the cross-sectional area at position ``1'' is a function of the needle position $Y$. $k_f$ is a coefficient of friction loss in the nozzle.

Hence, this Pelton turbine model is realized in the \emph{Pelton} turbine element in our library. In this \emph{Pelton} unit, the multi-physic \emph{TurbineContacts} connectors model is also used and ensures connection to other waterway and electro-mechanical units. In addition, this \emph{Pelton} unit also has the standard Modelica \emph{RealInput} connector that provides the angular velocity as an input to the Pelton turbine model. Typically, this angular velocity input is connected to, and derived from, the generator units. In the \emph{Pelton} unit, the user can specify the required geometry for the Pelton turbine: radius of the turbine runner, inlet diameter of the nozzle, runner bucket angle, friction factors and coefficients, and the deflector mechanism coefficient.

\subsection{Simple Generator}
Here, a simple model of an ideal generator with friction is considered. This model has the electric power available on the grid and the turbine shaft power as inputs. The model is based on the angular momentum balance, which depends on the turbine shaft power, the friction loss in the aggregate rotation, and the power taken up by the generator. The rotor angular velocity mainly depends on its inertia, internal friction and available power. The kinetic energy stored in the rotating generator is $ K_a=\frac{1}{2}J_a\omega_a^2$, where $\omega_a$ is the angular velocity of the rotor and $J_a$ is its moment of inertia.
The kinetic energy $K_a$ is changed by the power terms operating on the generator axis, e.g., the turbine shaft power $\dot{W}_s$ produced by the turbine, the friction power $\dot{W}_{f,a}$, and the power taken up by the generator, $\dot{W}_g$, \cite{LieL:18}, so the energy balance can be expressed as follows:
\begin{equation}
\frac{dK_a}{dt}=\dot{W}_s-\dot{W}_{f,a}-\dot{W}_g
\end{equation}
$\dot{W}_{f,a}$ is the frictional power loss in the rotor. This frictional power loss is mainly due to losses in the shaft supporting bearings, losses in the transmission gearboxes, and windage losses (air gap). For simplicity, it is assumed that the bearing term dominates, and $\dot{W}_{f,a}$ is expressed as
\begin{equation}
\dot{W}_{f,a}=\frac{1}{2}k_{f,b}\omega_a^2
\end{equation}
Here, $k_{f,b}$ is the bearing friction factor. The power taken up by the generator is transmitted to the grid with electric efficiency $\eta_e$. Thus the electric power available on the grid is $\dot{W}_e=\eta_e\dot{W}_g$.

Hence, this simple generator model is encoded in the \emph{OpenHPL} library as the \emph{SimpleGen} unit. This unit has the electric power available on the grid and the turbine shaft power as inputs, both of which are implemented with the standard Modelica \emph{RealInput} connector. The \emph{SimpleGen} unit also uses the standard Modelica \emph{RealOutput} connectors in order to provide output information about the angular velocity and frequency of the generator. All these connectors can be connected to turbine units and other standard Modelica blocks. In the \emph{SimpleGen} unit, the user can specify the required parameters for the generator: the moment of inertia of the generator, the generator's electrical efficiency, the friction factor in the rotor bearing box, and the number of generator poles. This unit can be initialised with an initial value of the angular velocity, $\omega_0$. Alternatively, the user can choose the option where the simulation starts from steady state and OpenModelica determines the initial steady-state values automatically (this currently does not work properly in OpenModelica).

\subsection{Synchronous Generator}
Here, a more detailed model of the synchronous generator is presented. More details can be found in the master's thesis of Behzad Sharefi, \cite{Sha:11}. This model is based on the d-q decomposition and assumes that the generator is connected to the grid, \cite{Sha:11}. The voltage-current relation is given as:
\begin{equation}
\left[\begin{matrix}R_a+R_e & x_q'+x_e\\ -x_d'-x_e & R_a+R_e\end{matrix}\right]\left[\begin{matrix}I_d \\ I_q\end{matrix}\right]=
\left[\begin{matrix}E_d'+V_s\sin\delta_e \\ E_q'-V_s\cos\delta_e\end{matrix}\right]
\end{equation}
Here, $R_a$ and $R_e$ are the phase winding and equivalent network resistances, and $x_d$, $x_q$, $x_d'$, and $x_q'$ are the d-/q-axis normal and transient reactances. $x_e$ is the equivalent network reactance. $I_d$ and $I_q$ are the d-/q-axis currents. $E_d'$ and $E_q'$ are the d-/q-axis transient voltages. $V_s$ is the network RMS (Root-Mean-Squared) voltage. $\delta_e$ is the phase shift angle that is described as follows:
\begin{equation}
\frac{d\delta_e}{dt} = (\omega - \omega_s)\frac{n_p}{2}
\end{equation}
Here, $n_p$ is the number of poles in the generator, and $\omega$ and $\omega_s$ are the generator and grid angular velocities, respectively.
The swing equation is used to describe the angular velocity dynamics and reads as follows:
\begin{equation}
\frac{d\omega}{dt}=\frac{\dot{W}_s-P_e}{J\omega}
\end{equation}
The dynamic equations for the transient operation are as follows:
\begin{equation}
\begin{array}{c}
T_{qo}'\frac{dE_d'}{dt} =-E_d' + (x_q' - x_q)I_q \\
T_{do}'\frac{dE_q'}{dt} = -E_q' + (x_d - x_d')I_d + E_f
\end{array}
\end{equation}
Here, $T_{do}'$ and $T_{qo}'$ are the d-/q-axis transient open-circuit time constants. $E_f$ is the voltage across the field winding with the following dynamic equation:
\begin{equation}
\frac{dE_f}{dt} = \frac{-E_f + K_E\left(V_{tr}-V_t-V_{stab}\right)}{T_E}
\end{equation}
Here, $K_E$ is the excitation system gain and $T_E$ is the excitation system time constant. $V_{tr}$ is the voltage reference set point for the exciter. $V_t$ is the terminal voltage and can be found as $V_t = \sqrt{\left(E_d'-R_aI_d-x_q'I_q\right)^2+\left(E_q'-R_aI_q+x_d'I_d\right)^2}$. $V_{stab}$ is the stabilisation voltage with the following dynamic equation:
\begin{equation}
\frac{dV_{stab}}{dt} = \frac{-V_{stab} + K_F\frac{dE_f}{dt}}{T_{FE}}
\end{equation}
Here, $K_F$ is the stabiliser gain, and $T_{FE}$ is the stabiliser time constant. The output active and reactive power of the generator can be found as follows:
\begin{equation}
\begin{array}{c}
P_e = 3\left(E_d'I_d+E_q'I_q\right)\\
Q_e = \sqrt{9V_t^2I_t^2-P_e^2}
\end{array}
\end{equation}
Here, the terminal current is given as $I_t=\sqrt{I_d^2+I_q^2}$.

Hence, this synchronous generator model is encoded in the \emph{OpenHPL} library as the \emph{SynchGen} unit. This unit has the turbine shaft power as input, implemented with the standard Modelica \emph{RealInput} connector. The \emph{SynchGen} unit also uses the standard Modelica \emph{RealOutput} connectors in order to provide output information about the angular velocity and frequency of the generator. All these connectors can be connected to turbine units and other standard Modelica blocks. In the \emph{SynchGen} unit, the user can specify the required nominal parameters for the generator: the active and reactive powers drawn from the generator at the steady-state operating condition, the phase winding resistance, and the number of poles. The following network parameters should also be specified by the user: equivalent network resistance and reactance, network RMS voltage, and grid angular velocity. The user also specifies the d-/q-axis normal and transient reactances, the d-/q-axis transient open-circuit time constants, the minimum and maximum field voltages, the excitation system and stabiliser gains and time constants, the moment of inertia of the generator, and the friction factor in the rotor bearing box. This unit can be initialised explicitly, or the user can choose the option for self-initialisation.

\section{Governor}
Here, a simple model of the governor that controls the guide vane opening of the turbine based on the reference power production is described. More details can be found in the master's thesis of Behzad Sharefi, \cite{Sha:11}. The block diagram of this governor model is shown in Figure~\ref{fig:fig15}.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig/Governor} % The printed column width is 8.4 cm.
\caption{Block diagram of the governor, \cite{Sha:11}.}
\label{fig:fig15}
\end{center}
\end{figure}
Using the model in Figure~\ref{fig:fig15} and the standard Modelica blocks, the governor model is encoded in our library as the \emph{Governor} unit.
This unit has the reference power production and the generator frequency as inputs, implemented with the standard Modelica \emph{RealInput} connector. The \emph{Governor} unit also uses the standard Modelica \emph{RealOutput} connector in order to provide the turbine guide vane opening as output. In the \emph{Governor} unit, the user can specify the various time constants of this model (see Figure~\ref{fig:fig15}): pilot servomotor time constant $T_p$, primary servomotor integration time $T_g$, and transient droop time constant $T_r$. The user should also provide the following parameters: droop value $\sigma$, transient droop $\delta$, and nominal values for the frequency and power generation. The information about the maximum, minimum, and initial guide vane opening should also be specified.

\section{Examples}
Here, the various models that have been assembled in the \emph{Examples} class are described.

\subsection{HPSimple}
In this model of the hydropower system, the simplified models are used for conduits and turbine modelling. The generator is not included in the model. The simple \emph{Pipe} unit is used to represent the penstock, intake and discharge races. The simple \emph{Turbine} unit is used to represent the turbine. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). Data from the Sundsbarm hydropower plant is used for this example model.

\subsection{HPSimple\_generator}
In this model of the hydropower system, the simplified models are used for conduits, turbine, and generator modelling. The simple \emph{Pipe} unit is used to represent the penstock, intake and discharge races. The simple \emph{Turbine} unit is used to represent the turbine. The \emph{SimpleGen} unit is used to represent the generator. The \emph{Reservoir} unit is used to represent the reservoir and tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). Data from the Sundsbarm hydropower plant is used for this example model.

\subsection{HPSimple\_Francis}
In this model of the hydropower system, the simplified model is used for conduits modelling. The turbine and the generator are modelled with the more detailed \emph{Francis} and \emph{SynchGen} units, respectively. The simple \emph{Pipe} unit is used to represent the penstock, intake and discharge races. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). Data from the Sundsbarm hydropower plant is used for this example model.

\subsection{HPDetailed}
In this model of the hydropower system, the simplified models are used for conduits and turbine modelling, except for the penstock that is modelled with the more detailed \emph{PenstockKP} unit. The generator is not included in the model. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The simple \emph{Turbine} unit is used to represent the turbine. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). Data from the Sundsbarm hydropower plant is used for this example model.
\subsection{HPDetailed\_generator} In this model of the hydropower system, the simplified models are used for conduits, turbine, and generator modelling, except for the penstock that is modelled with the more detailed \emph{PenstockKP} unit. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The simple \emph{Turbine} unit is used to represent the turbine. The \emph{SimpleGen} unit is used to represent the generator. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). Data from the Sundsbarm hydropower plant is used for this example model. \subsection{HPDetailed\_Francis} In this model of the hydropower system, the simplified model is used for conduits modelling, except for the penstock that is modelled with the more detailed \emph{PenstockKP} unit. The turbine and generator are modelled with more detailed \emph{Francis} and \emph{SynchGen} units, respectively. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). Data from the Sundsbarm hydropower plant is used for this example model. \subsection{HPSimple\_Francis\_IPSLGen} Here, the last example model (uses the \emph{PenstockKP} and \emph{Francis} units) is extended by synergy with the \emph{OpenIPSL} for generator and power system modelling. The \emph{Governor} unit from the \emph{OpenHPL} is also used here. The penstock is modelled with the more detailed \emph{PenstockKP} unit. The turbine is modelled with more detailed \emph{Francis} unit. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). \subsection{HPSimple\_Francis\_GridGen} Here, the \emph{HPDetailed\_Francis} example model (uses the \emph{PenstockKP} and \emph{Francis} units) is also extended by synergy with the \emph{OpenIPSL} for only generator modelling. The \emph{Governor} unit from the \emph{OpenHPL} is also used here. The penstock is modelled with the more detailed \emph{PenstockKP} unit. The turbine is modelled with the more detailed \emph{Francis} unit. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). \subsection{HPSimple\_Francis\_IPSLGenGov} Here, the \emph{HPDetailed\_Francis} example model (uses the \emph{PenstockKP} and \emph{Francis} units) is extended by synergy with the \emph{OpenIPSL} for generator, governor and power system modelling. The penstock is modelled with the more detailed \emph{PenstockKP} unit. The turbine is modelled with the more detailed \emph{Francis} unit. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). 
\subsection{HPSimple\_Francis\_IPSLGenInfBus} Here, the \emph{HPDetailed\_Francis} example model (uses the \emph{PenstockKP} and \emph{Francis} units) is extended by synergy with the \emph{OpenIPSL} for generator and power system (including infinite bus) modelling. The \emph{Governor} unit from the \emph{OpenHPL} is also used here. The penstock is modelled with the more detailed \emph{PenstockKP} unit. The turbine is modelled with the more detailed \emph{Francis} unit. The simple \emph{Pipe} unit is used to represent the intake and discharge races. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). \subsection{HPSimple\_OpenChannel} In this model of the hydropower system, the simplified models are used for conduits and turbine modelling. The generator is not included in the model. The simple \emph{Pipe} unit is used to represent the penstock and intake race. The discharge race is an open channel here, and the \emph{OpenChannel} unit is used for modelling. The simple \emph{Turbine} unit is used to represent the turbine. The \emph{Reservoir} unit is used to represent the reservoir and the tailwater (here, this unit uses a simple model of the reservoir that only depends on the water depth in the reservoir). \chapter{Basic example} Here, a basic (step-by-step) example is provided in order to show how to connect and specify elements from the \emph{OpenHPL} in a flowsheet. Furthermore, an example of how to set up the OMPython API is also presented. \section{Flowsheet} In order to create a flowsheet model for the hydropower system in OpenModelica using the \emph{OpenHPL}, the following steps should be performed: \begin{enumerate} \item Create a new Modelica class that is specified as ``Model'' and assign a name for this model. Then, open this model with the ``Diagram view''. See example in Figure~\ref{fig:fig16}. \begin{figure}[ht] \begin{center} \includegraphics[width=1\textwidth]{fig/Exam_1} % The printed column width is 8.4 cm. \caption{Creating a new model in OpenModelica.} \label{fig:fig16} \end{center} \end{figure} \item Drag and drop all of the needed elements for the hydropower structure from the \emph{OpenHPL} and provide a name for each element. Then, connect the connectors of these elements between each other. See example in Figure~\ref{fig:fig17}. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{fig/Exam_2} % The printed column width is 8.4 cm. \caption{Connecting the elements of the hydropower system.} \label{fig:fig17} \end{center} \end{figure} \item Also, insert the records model ``Constants'' from the \emph{OpenHPL} with the name ``Const'' to the model in order to have control on some constants and properties that are common for all hydropower elements. As an example in case, typical initial value of the volumetric flow rate in the system for each ``Pipe'' unit can be specified. \item Specify each of the elements with an appropriate geometry. See example for the specification of the intake race element in Figure~\ref{fig:fig18}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\textwidth]{fig/Exam_4} % The printed column width is 8.4 cm. \caption{Specification of the elements.} \label{fig:fig18} \end{center} \end{figure} \item Provide a control signal for the turbine. To make this, you can either add a source of the ramp signal from the standard Modelica library (``Modelica.Blocks.Sources. 
Ramp''), or create an input variable (or just a simple variable) for the example model ``OpenHPL example'' and equate it to the turbine control input. Both possibilities are shown in Figure~\ref{fig:fig19}.\label{example_control}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{fig/Exam_5} % The printed column width is 8.4 cm.
\caption{Creating a control signal for the turbine.}
\label{fig:fig19}
\end{center}
\end{figure}
\item Specify the simulation setup values and save them in the model. Then, the simulation can be carried out. See the example of specifying and running the simulation in Figure~\ref{fig:fig20}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\textwidth]{fig/Exam_6} % The printed column width is 8.4 cm.
\caption{Specifying and running simulation.}
\label{fig:fig20}
\end{center}
\end{figure}
\end{enumerate}

\section{OMPython API}
In order to run simulations of the developed example hydropower model from Python, the OMPython API for OpenModelica can be used. The following steps should be performed to set up the API:
\begin{enumerate}
\item Import the ``Modelica system'' environment from the OMPython package. Then, create a Python object for the OpenModelica model ``OpenHPL example''. Here, the libraries that are used in the model should also be loaded into the object, which in this case are the standard Modelica library and the \emph{OpenHPL}. See an example of the code below:
\begin{lstlisting}[language = Python]
from OMPython import ModelicaSystem
hps_s = ModelicaSystem("OpenHPL_example.mo", "OpenHPL_example", ["Modelica", "OpenHPL/package.mo"])
\end{lstlisting}
\item When the object is created, the simulation options, as well as the parameters and input variables, can be specified. In order to check and specify the simulation options, the following commands can be used:
\begin{lstlisting}[language = Python]
hps_s.setSimulationOptions(stepSize=0.1, stopTime=1000) # set simulation options
hps_s.getSimulationOptions() # get list of simulation options
\end{lstlisting}
Similar commands for parameters and input variables look as follows:
\begin{lstlisting}[language = Python]
hps_s.getParameters() # get list of model parameters
hps_s.setParameters(**{"turbine.H_n":460}) # set parameter value for the turbine nominal head
hps_s.getInputs() # get list of input variables
hps_s.setInputs(u=[(0,0.75),(100,0.75),(101,0.7),(1000,0.7)]) # set input value over time as a ramp signal
\end{lstlisting}
It should be noted that here, the model with an input variable for the turbine control signal is used (see item \#\ref{example_control} and Figure~\ref{fig:fig19} in the previous flowsheet section).
\item Run the simulation and get the results. Examples of these commands are as follows:
\begin{lstlisting}[language = Python]
hps_s.simulate() # run simulation
hps_s.getSolutions() # get list of solution variables
time, Vdot, p_tr1, p_tr2 = hps_s.getSolutions("time", "turbine.Vdot", "turbine.p_tr1", "turbine.p_tr2") # get results of simulation: time, and the turbine flow rate, inlet and outlet pressures
\end{lstlisting}
These simulation results can then be plotted using the \emph{matplotlib} package. See the plots of the turbine flow rate and the pressures in Figure~\ref{fig:fig21}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.7\textwidth]{fig/Exam_sim} % The printed column width is 8.4 cm.
\caption{Plotting of simulation results.}
\label{fig:fig21}
\end{center}
\end{figure}
\item It is also possible to linearize the model for further analysis. The linearization can be done with one command. However, more commands can also be used to check/specify the linearization options and to define the states/inputs/outputs. See the example below:
\begin{lstlisting}[language = Python]
hps_s.setLinearizationOptions(stopTime=0.1) # set a stop time for linearization (linearization is performed at this point)
hps_s.getLinearizationOptions() # get a list of the linearization options
As,Bs,Cs,Ds = hps_s.linearize() # actual linearization; defining standard A, B, C and D matrices
hps_s.getLinearStates() # get list of states
hps_s.getLinearInputs() # get list of inputs
hps_s.getLinearOutputs() # get list of outputs
\end{lstlisting}
It should be noted that the linearized model should include the input variable, which in this case is the input variable for the turbine control signal.
\end{enumerate}
Similar to the OMPython API, OpenModelica models can also be run in Julia using the OMJulia API. See the documentation of OMJulia for more information.

\cleardoublepage

% The bibliography should be displayed here...
\printbibliography[heading=bibintoc]
% You rather like to call the bibliography "References"? Then use this instead:
%\printbibliography[heading=bibintoc, title={References}]

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
{ "alphanum_fraction": 0.7474698571, "avg_line_length": 92.7504317789, "ext": "tex", "hexsha": "be92e730cadd9cb6f31b449422208958c8c9f18a", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2020-11-12T09:33:44.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-05T09:30:02.000Z", "max_forks_repo_head_hexsha": "c725b3807a871c30d4002df10a231bfef80c8e82", "max_forks_repo_licenses": [ "BSD-Source-Code" ], "max_forks_repo_name": "simulatino/OpenHPL", "max_forks_repo_path": "OpenHPL/Resources/Documents/UsersGuide_src/UsersGuide.tex", "max_issues_count": 20, "max_issues_repo_head_hexsha": "c725b3807a871c30d4002df10a231bfef80c8e82", "max_issues_repo_issues_event_max_datetime": "2021-09-03T06:35:51.000Z", "max_issues_repo_issues_event_min_datetime": "2019-09-18T16:21:43.000Z", "max_issues_repo_licenses": [ "BSD-Source-Code" ], "max_issues_repo_name": "simulatino/OpenHPL", "max_issues_repo_path": "OpenHPL/Resources/Documents/UsersGuide_src/UsersGuide.tex", "max_line_length": 1521, "max_stars_count": 11, "max_stars_repo_head_hexsha": "c725b3807a871c30d4002df10a231bfef80c8e82", "max_stars_repo_licenses": [ "BSD-Source-Code" ], "max_stars_repo_name": "simulatino/OpenHPL", "max_stars_repo_path": "OpenHPL/Resources/Documents/UsersGuide_src/UsersGuide.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-03T23:35:46.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-11T04:17:48.000Z", "num_tokens": 30846, "size": 107405 }
% $Id$

\label{componentDep}

Most of the NUOPC Layer deals with specifying the interaction between ESMF components within a running ESMF application. ESMF provides several mechanisms for how an application can be made up of individual Components. This chapter deals with reining in the many options supported by ESMF and setting up a standard way for assembling NUOPC compliant components into a working application.

ESMF supports single executable as well as some forms of multiple executable applications. Currently the NUOPC Layer only addresses the case of single executable applications. While it is generally true that executing single executable applications is easier and more widely supported than executing multiple executable applications, building a single executable from multiple components can be challenging. This is especially true when the individual components are supplied by different groups, and the assembly of the final application happens apart from the component development. The purpose of standardizing component dependencies as part of the NUOPC Layer is to provide a solution to the technical aspect of assembling applications built from NUOPC compliant components.

As with the other parts of the NUOPC Layer, the standardized component dependencies specify aspects that ESMF purposefully leaves unspecified. Having a standard way to deal with component dependencies has several advantages. It makes NUOPC compliant applications easier to read and understand. It also provides a means to promote best practices across a wide range of application systems. Ultimately the goal of standardizing the component dependencies is to support "plug \& build" between NUOPC compliant components and applications, where everything needed to use a component by an upper level software layer is supplied in a standard way, ready to be used by the software.

There is one aspect of the standardized component dependency that affects the component code itself: {\bf The name of the public set services entry point into a NUOPC compliant component must be called "SetServices"}. The only exceptions to this rule are components that are written in C/C++ and made available for static linking. In this case, because of the lack of namespace protection, the {\tt SetServices} part must be followed by a component specific suffix. This will be discussed later in this chapter. For all other cases, unique namespaces exist that allow the entry point to be called {\tt SetServices} across all components.

Having standardized the name of the single public entry point into a component solves the issue of having to communicate its name to the software layer that intends to use the component. At the same time, limiting the public entry point to a single accepted name does not remove any flexibility that is generally leveraged by ESMF applications. Within the context of the NUOPC Layer, there is great flexibility designed into the initialize steps. Removing the need to deal with alternative set services routines focuses and clarifies the NUOPC approach.

The remaining aspects of component dependency standardization all deal with build specific issues, i.e. how the software layer that uses a component compiles and links against the component code. For now the NUOPC Layer does not deal with the question of how the component itself is built. Instead the focus is on the information that a component must provide about itself, and the format of this information, in order to be usable by another piece of software.
This clear separation allows components to provide their own independent build system, which often is critical to ensure bit-for-bit reproducibility. At the same time it does not prevent build systems to be connected top-down if that is desirable. Technically the problem of passing component specific build information up the build hierarchy is solved by using GNU makefile fragments that allow every component to provide information in form of variables to the upper level build system. The NUOPC Layer standardization requires that: {\bf Every component must provide a makefile fragment that defines 6 variables: \begin{verbatim} ESMF_DEP_FRONT ESMF_DEP_INCPATH ESMF_DEP_CMPL_OBJS ESMF_DEP_LINK_OBJS ESMF_DEP_SHRD_PATH ESMF_DEP_SHRD_LIBS \end{verbatim} } The convention for makefile fragments is to provide them in files with a suffix of {\tt .mk}. The NUOPC Layer currently adds no further restriction to the name of the makefile fragment file of a component. There seems little gain in standardizing the name of the NUOPC compliant makefile fragment of a component since the location must be made available anyway, and adding the specific file name at the end of the supplied path does not appear inappropriate. The meaning of the 6 makefile variables is defined in a manner that supports many different situations, ranging from simple statically linked components to situations where components are made available in shared objects, not loaded by the application until needed during runtime. The design idea of the NUOPC Layer component makefile fragment is to have each component provide a simple makefile fragment that is self-describing. Usage of advanced options requires a more sophisticated build system on the software layer that {\em uses} the component, while at the same time the same standard format is able to keep simple situations simple. An indepth understanding of the capabilities of the NUOPC Layer build dependency standard requires looking at various common cases in detail. The remainder of this chapter is dedicated to this effort. Here a general definition of each variable is provided. \begin{itemize} \item {\tt ESMF\_DEP\_FRONT} - The name of the Fortran module to be used in a USE statement, or (if it ends in ".h") the name of the header file to be used in an \#include statement, or (if it ends in ".so") the name of the shared object to be loaded at run-time. \item {\tt ESMF\_DEP\_INCPATH} - The include path to find module or header files during compilation. Must be specified as absolute path. \item {\tt ESMF\_DEP\_CMPL\_OBJS} - Object files that need to be considered as compile dependencies. Must be specified with absolute path. \item {\tt ESMF\_DEP\_LINK\_OBJS} - Object files that need to be considered as link dependencies. Must be specified with absolute path. \item {\tt ESMF\_DEP\_SHRD\_PATH} - The path to find shared libraries during link-time (and during run-time unless overridden by LD\_LIBRARY\_PATH). Must be specified as absolute path. \item {\tt ESMF\_DEP\_SHRD\_LIBS} - Shared libraries that need to be specified during link-time, and must be available during run-time. Must be specified with absolute path. \end{itemize} The following sections discuss how the standard makefile fragment is utilized in common use cases. It shows how the {\tt .mk} file would need to look like in these cases. 
Each section further contains hints of how a compliant {\tt .mk} file can be auto-generated by the component build system (provider side), as well as hints on how it can be used by an upper level software layer (consumer side). Makefile segments provided in these hint sections are {\em not} part of the NUOPC Layer component dependency standard. They are only provided here as a convenience to the user, showing best practices of how the standard {\tt .mk} files can be used in practice. Any specific compiler and linker flags shown in the hint sections are those compliant with the GNU Compiler Collection. The NUOPC Layer standard only covers the contents of the {\tt .mk} file itself. \subsection{Fortran components that are statically built into the executable} \label{StandardCompDep:FortranStatic} Statically building a component into the executable requires that the associated files (object files, and for Fortran the associated module files) are available when the application is being built. It makes the component code part of the executable. A change in the component code requires re-compilation and re-linking of the executable. A NUOPC compliant Fortran component that defines its public entry point in a module called "ABC", where all component code is contained in a single object file called "abc.o", makes itself available by providing the following {\tt .mk} file: \begin{verbatim} ESMF_DEP_FRONT = ABC ESMF_DEP_INCPATH = <absolute path to associated ABC module file> ESMF_DEP_CMPL_OBJS = <absolute path>/abc.o ESMF_DEP_LINK_OBJS = <absolute path>/abc.o ESMF_DEP_SHRD_PATH = ESMF_DEP_SHRD_LIBS = \end{verbatim} If, however, the component implementation is spread across several object files (e.g. abc.o and xyz.o), they must all be listed in the {\tt ESMF\_DEP\_LINK\_OBJS} variable: \begin{verbatim} ESMF_DEP_FRONT = ABC ESMF_DEP_INCPATH = <absolute path to associated ABC module file> ESMF_DEP_CMPL_OBJS = <absolute path>/abc.o ESMF_DEP_LINK_OBJS = <absolute path>/abc.o <absolute path>/xyz.o ESMF_DEP_SHRD_PATH = ESMF_DEP_SHRD_LIBS = \end{verbatim} In cases that require a large number of object files to be linked into the executable it is often more convenient to provide them in an archive file, e.g. "libABC.a". Archive files are also specified in {\tt ESMF\_DEP\_LINK\_OBJS}: \begin{verbatim} ESMF_DEP_FRONT = ABC ESMF_DEP_INCPATH = <absolute path to associated ABC module file> ESMF_DEP_CMPL_OBJS = <absolute path>/abc.o ESMF_DEP_LINK_OBJS = <absolute path>/libABC.a ESMF_DEP_SHRD_PATH = ESMF_DEP_SHRD_LIBS = \end{verbatim} {\bf Hints for the provider side:} A build rule for creating a compliant self-describing {\tt .mk} file can be added to the component's makefile. For the case that component "ABC" is implemented in object files listed in variable "OBJS", a build rule that produces "abc.mk" could look like this: \begin{verbatim} .PRECIOUS: %.o %.mk : %.o @echo "# ESMF self-describing build dependency makefile fragment" > $@ @echo >> $@ @echo "ESMF_DEP_FRONT = ABC" >> $@ @echo "ESMF_DEP_INCPATH = `pwd`" >> $@ @echo "ESMF_DEP_CMPL_OBJS = `pwd`/"$< >> $@ @echo "ESMF_DEP_LINK_OBJS = "$(addprefix `pwd`/, $(OBJS)) >> $@ @echo "ESMF_DEP_SHRD_PATH = " >> $@ @echo "ESMF_DEP_SHRD_LIBS = " >> $@ abc.mk: $(OBJS) \end{verbatim} {\bf Hints for the consumer side:} The format of the NUOPC compliant {\tt .mk} files allows the consumer side to collect the information provided by multiple components into one set of internal variables. 
Notice that in the makefile code below it is critical to use the {\tt :=} style assignment instead of a simple {\tt =} in order to have the assignment be based on the {\em current} value of the right hand variables. \begin{verbatim} include abc.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_ABC=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS)) include xyz.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_XYZ=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS)) \end{verbatim} Besides the accumulation of information into the internal variables, there is a small amount of processing going on. The module name provided by the {\tt ESMF\_DEP\_FRONT} variable is assigned to a pre-processor macro. The intention of this macro is to be used in a Fortran {\tt USE} statement to access the Fortran module that contains the public access point of the component. The include paths in {\tt ESMF\_DEP\_INCPATH} are prepended with the appropriate compiler flag (here "-I"). The {\tt ESMF\_DEP\_SHRD\_PATH} and {\tt ESMF\_DEP\_SHRD\_LIBS} variables are also prepended by the respective compiler and linker flags in case a component brings in a shared library dependencies. Once the {\tt .mk} files of all component dependencies have been included and processed in this manner, the internal variables can be used in the build system of the application layer, as shown in the following example: \begin{verbatim} .SUFFIXES: .f90 .F90 .c .C %.o : %.f90 $(ESMF_F90COMPILER) -c $(DEP_FRONTS) $(DEP_INCS) \ $(ESMF_F90COMPILEOPTS) $(ESMF_F90COMPILEPATHS) $(ESMF_F90COMPILEFREENOCPP) $< %.o : %.F90 $(ESMF_F90COMPILER) -c $(DEP_FRONTS) $(DEP_INCS) \ $(ESMF_F90COMPILEOPTS) $(ESMF_F90COMPILEPATHS) $(ESMF_F90COMPILEFREECPP) \ $(ESMF_F90COMPILECPPFLAGS) $< %.o : %.c $(ESMF_CXXCOMPILER) -c $(DEP_FRONTS) $(DEP_INCS) \ $(ESMF_CXXCOMPILEOPTS) $(ESMF_CXXCOMPILEPATHSLOCAL) $(ESMF_CXXCOMPILEPATHS) \ $(ESMF_CXXCOMPILECPPFLAGS) $< %.o : %.C $(ESMF_CXXCOMPILER) -c $(DEP_FRONTS) $(DEP_INCS) \ $(ESMF_CXXCOMPILEOPTS) $(ESMF_CXXCOMPILEPATHSLOCAL) $(ESMF_CXXCOMPILEPATHS) \ $(ESMF_CXXCOMPILECPPFLAGS) $< app: app.o appSub.o $(DEP_LINK_OBJS) $(ESMF_F90LINKER) $(ESMF_F90LINKOPTS) $(ESMF_F90LINKPATHS) \ $(ESMF_F90LINKRPATHS) -o $@ $^ $(DEP_SHRD_PATH) $(DEP_SHRD_LIBS) \ $(ESMF_F90ESMFLINKLIBS) app.o: appSub.o appSub.o: $(DEP_CMPL_OBJS) \end{verbatim} \subsection{Fortran components that are provided as shared libraries} \label{StandardCompDep:FortranSharedLib} Providing a component in form of a shared library requires that the associated files (object files, and for Fortran the associated module files) are available when the application is being built. However, different from the statically linked case, the component code does {\em not} become part of the executable, instead it will be loaded separately each time the executable is loaded during start-up. This requires that the executable finds the component shared libraries, on which it depends, during start-up. 
A change in the component code typically does not require re-compilation and re-linking of the executable, instead a new version of the component shared library will be loaded automatically when it is available at execution start-up. A NUOPC compliant Fortran component that defines its public entry point in a module called "ABC", where all component code is contained in a single shared library called "libABC.so", makes itself available by providing the following {\tt .mk} file: \begin{verbatim} ESMF_DEP_FRONT = ABC ESMF_DEP_INCPATH = <absolute path to associated ABC module file> ESMF_DEP_CMPL_OBJS = ESMF_DEP_LINK_OBJS = ESMF_DEP_SHRD_PATH = <absolute path to libABC.so> ESMF_DEP_SHRD_LIBS = libABC.so \end{verbatim} {\bf Hints for the provider side:} The following build rule will create a compliant self-describing {\tt .mk} file ("abc.mk") for a component that is made available as a shared library. The case assumes that component "ABC" is implemented in object files listed in variable "OBJS". \begin{verbatim} .PRECIOUS: %.so %.mk : %.so @echo "# ESMF self-describing build dependency makefile fragment" > $@ @echo >> $@ @echo "ESMF_DEP_FRONT = ABC" >> $@ @echo "ESMF_DEP_INCPATH = `pwd`" >> $@ @echo "ESMF_DEP_CMPL_OBJS = " >> $@ @echo "ESMF_DEP_LINK_OBJS = " >> $@ @echo "ESMF_DEP_SHRD_PATH = `pwd`" >> $@ @echo "ESMF_DEP_SHRD_LIBS = "$* >> $@ abc.mk: abc.so: $(OBJS) $(ESMF_CXXLINKER) -shared -o $@ $< mv $@ lib$@ rm -f $< \end{verbatim} {\bf Hints for the consumer side:} The format of the NUOPC compliant {\tt .mk} files allows the consumer side to collect the information provided by multiple components into one set of internal variables. This is independent on whether some or all of the components are provided as shared libraries. The path specified in {\tt ESMF\_DEP\_SHRD\_PATH} is required when building the executable in order for the linker to find the shared library. Depending on the situation, it may be desirable to also encode this search path into the executable through the RPATH mechanism as shown below. However, in some cases, e.g. when the actual shared library to be used during execution is {\em not} available from the same location as during build-time, it may not be useful to encode the RPATH. In either case, having set the {\tt LD\_LIBRARY\_PATH} environment variable to the desired location of the shared library at run-time will ensure that the correct library file is found. Notice that in the makefile code below it is critical to use the {\tt :=} style assignment instead of a simple {\tt =} in order to have the assignment be based on the {\em current} value of the right hand variables. \begin{verbatim} include abc.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_ABC=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) \ $(addprefix -Wl$(COMMA)-rpath$(COMMA), $(ESMF_DEP_SHRD_PATH)) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS)) \end{verbatim} (Here {\tt COMMA} is a variable that contains a single comma which would cause syntax issues if it was written into the "addprefix" command directly.) The internal variables set by the above makefile code can then be used by exactly the same makefile rules shown for the statically linked case. 
In fact, component "ABC" that comes in through "abc.mk" could either be a statically linked component or a shared library component. The makefile code shown here for the consumer side handles both cases alike.

\subsection{Components that are loaded during run-time as shared objects}
\label{StandardCompDep:SharedObject}

Making components available in the form of shared objects allows the executable to be built in the complete absence of any information that depends on the component code. The only information required when building the executable is the name of the shared object file that will supply the component code during run-time. The shared object file of the component can be replaced at will, and it is not until run-time, when the executable actually tries to access the component, that the shared object must be available to be loaded.

A NUOPC compliant component where all component code, including its public access point, is contained in a single shared object called "abc.so", makes itself available by providing the following {\tt .mk} file:

\begin{verbatim}
ESMF_DEP_FRONT = abc.so
ESMF_DEP_INCPATH =
ESMF_DEP_CMPL_OBJS =
ESMF_DEP_LINK_OBJS =
ESMF_DEP_SHRD_PATH =
ESMF_DEP_SHRD_LIBS =
\end{verbatim}

The other parts of the {\tt .mk} file may be utilized in special cases, but typically the shared object should be self-contained. It is interesting to note that at this level of abstraction, there is no longer any difference between a component written in Fortran and a component written in C/C++. In both cases the public entry point available in the shared object must be {\tt SetServices} as required by the NUOPC Layer component dependency standard. (NUOPC does allow for customary name mangling by the Fortran compiler.)

{\bf Hints for the provider side:} The following build rule will create a compliant self-describing {\tt .mk} file ("abc.mk") for a component that is made available as a shared object. The case assumes that component "ABC" is implemented in object files listed in variable "OBJS".

\begin{verbatim}
.PRECIOUS: %.so

%.mk : %.so
  @echo "# ESMF self-describing build dependency makefile fragment" > $@
  @echo >> $@
  @echo "ESMF_DEP_FRONT = "$< >> $@
  @echo "ESMF_DEP_INCPATH = " >> $@
  @echo "ESMF_DEP_CMPL_OBJS = " >> $@
  @echo "ESMF_DEP_LINK_OBJS = " >> $@
  @echo "ESMF_DEP_SHRD_PATH = " >> $@
  @echo "ESMF_DEP_SHRD_LIBS = " >> $@

abc.mk:

abc.so: $(OBJS)
  $(ESMF_CXXLINKER) -shared -o $@ $<
  rm -f $<
\end{verbatim}

{\bf Hints for the consumer side:} The format of the NUOPC compliant {\tt .mk} files still allows the consumer side to collect the information provided by multiple components into one set of internal variables. This still holds when some or all of the components are provided as shared objects. In fact, it is very simple to make all of the component sections in the consumer makefile handle both cases. Notice that in the makefile code below it is critical to use the {\tt :=} style assignment instead of a simple {\tt =} in order to have the assignment be based on the {\em current} value of the right hand variables.
\begin{verbatim}
include abc.mk
ifneq (,$(findstring .so,$(ESMF_DEP_FRONT)))
DEP_FRONTS := $(DEP_FRONTS) -DFRONT_SO_ABC=\"$(ESMF_DEP_FRONT)\"
else
DEP_FRONTS := $(DEP_FRONTS) -DFRONT_ABC=$(ESMF_DEP_FRONT)
endif
DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH))
DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS)
DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS)
DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) \
  $(addprefix -Wl$(COMMA)-rpath$(COMMA), $(ESMF_DEP_SHRD_PATH))
DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS))
\end{verbatim}

The above makefile segment supports component "ABC" that is described in "abc.mk" to be made available as a Fortran static component, a Fortran shared library, or a shared object. The conditional around the assignment of variable {\tt DEP\_FRONTS} leads either to setting the macro {\tt FRONT\_ABC} as before, or to setting a different macro, {\tt FRONT\_SO\_ABC}. The former indicates that a Fortran module is available for the component and requires a {\tt USE} statement in the code. The latter macro indicates that the component is made available through a shared object, and the macro can be used to specify the name of the shared object in the associated call. Again the internal variables set by the above makefile code can be used by the same makefile rules shown for the statically linked case.

\subsection{Components that depend on components}
\label{StandardCompDep:CompOnComp}

The NUOPC Layer supports component hierarchies where a component can be a child of another component. This hierarchy of components translates into component build dependencies that must be dealt with in the NUOPC Layer standardization of component dependencies. A component that sits at an intermediate level of the component hierarchy depends on the components "below", while at the same time it introduces a dependency by itself for the parent further "up" in the hierarchy.

Within the NUOPC Layer component dependency standard this means that the intermediate component functions as a consumer of its child components' {\tt .mk} files, and as a provider of its own {\tt .mk} file that is then consumed by its parent. In practice this double role translates into passing link dependencies and shared library dependencies through to the parent, while the front and compile dependency is simply defined by the intermediate component itself.

Consider a NUOPC compliant component that defines its public entry point in a module called "ABC", and where all component code is contained in a single object file called "abc.o".
Further assume that component "ABC" depends on two components "XXX" and "YYY", where "XXX" provides the {\tt .mk} file: \begin{verbatim} ESMF_DEP_FRONT = XXX ESMF_DEP_INCPATH = <absolute path to the associated XXX module file> ESMF_DEP_CMPL_OBJS = <absolute path>/xxx.o ESMF_DEP_LINK_OBJS = <absolute path>/xxx.o ESMF_DEP_SHRD_PATH = ESMF_DEP_SHRD_LIBS = \end{verbatim} and "YYY" provides the following: \begin{verbatim} ESMF_DEP_FRONT = YYY ESMF_DEP_INCPATH = <absolute path to the associated XXX module file> ESMF_DEP_CMPL_OBJS = ESMF_DEP_LINK_OBJS = ESMF_DEP_SHRD_PATH = <absolute path to libYYY.so> ESMF_DEP_SHRD_LIBS = libYYY.so \end{verbatim} Then the {\tt .mk} file provided by "ABC" needs to contain the following information: \begin{verbatim} ESMF_DEP_FRONT = ABC ESMF_DEP_INCPATH = <absolute path to the associated ABC module file> ESMF_DEP_CMPL_OBJS = <absolute path>/abc.o ESMF_DEP_LINK_OBJS = <absolute path>/abc.o <absolute path>/xxx.o ESMF_DEP_SHRD_PATH = <absolute path to libYYY.so> ESMF_DEP_SHRD_LIBS = libYYY.so \end{verbatim} {\bf Hints for an intermediate component that is consumer and provider:} For the consumer side it is convenient to collect the information provided by multiple component dependencies into one set of internal variables. However, the details on how some of the imported information is processed into the internal variables depends on whether the intermediate component is going to make itself available for static or dynamic access. In the static case all link and shared library dependencies must be passed to the next higher level, and these dependencies should simply be collected and passed on to the next level: \begin{verbatim} include xxx.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_XXX=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(ESMF_DEP_SHRD_PATH) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(ESMF_DEP_SHRD_LIBS) include yyy.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_YYY=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(ESMF_DEP_SHRD_PATH) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(ESMF_DEP_SHRD_LIBS) .PRECIOUS: %.o %.mk : %.o @echo "# ESMF self-describing build dependency makefile fragment" > $@ @echo >> $@ @echo "ESMF_DEP_FRONT = ABC" >> $@ @echo "ESMF_DEP_INCPATH = `pwd`" >> $@ @echo "ESMF_DEP_CMPL_OBJS = `pwd`/"$< >> $@ @echo "ESMF_DEP_LINK_OBJS = `pwd`/"$< $(DEP_LINK_OBJS) >> $@ @echo "ESMF_DEP_SHRD_PATH = " $(DEP_SHRD_PATH) >> $@ @echo "ESMF_DEP_SHRD_LIBS = " $(DEP_SHRD_LIBS) >> $@ \end{verbatim} In the case where the intermediate component is linked into a dynamic library, or a dynamic object, all of its object and shared library dependencies can be linked in. In this case it is more useful to do some processing on the shared library dependencies, and not to include them in the produced {\tt .mk} file. 
\begin{verbatim} include xxx.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_XXX=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) \ $(addprefix -Wl$(COMMA)-rpath$(COMMA), $(ESMF_DEP_SHRD_PATH)) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS)) include yyy.mk DEP_FRONTS := $(DEP_FRONTS) -DFRONT_YYY=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) \ $(addprefix -Wl$(COMMA)-rpath$(COMMA), $(ESMF_DEP_SHRD_PATH)) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS)) .PRECIOUS: %.o %.mk : %.o @echo "# ESMF self-describing build dependency makefile fragment" > $@ @echo >> $@ @echo "ESMF_DEP_FRONT = ABC" >> $@ @echo "ESMF_DEP_INCPATH = `pwd`" >> $@ @echo "ESMF_DEP_CMPL_OBJS = `pwd`/"$< >> $@ @echo "ESMF_DEP_LINK_OBJS = `pwd`/"$< >> $@ @echo "ESMF_DEP_SHRD_PATH = " >> $@ @echo "ESMF_DEP_SHRD_LIBS = " >> $@ \end{verbatim} \subsection{Components written in C/C++} \label{StandardCompDep:C} ESMF provides a basic C API that supports writing components in C or C++. There is currently no C version of the NUOPC Layer API available, making it harder, but not impossible to write NUOPC Layer compliant ESMF components in C/C++. For the sake of completeness, the NUOPC component dependency standardization does cover the case of components being written in C/C++. The issue of whether a component is written in Fortran or C/C++ only matters when the dependent software layer has a compile dependency on the component. In other words, components that are accessed through a shared object have no compile dependency, and the language is of no effect (see \ref{StandardCompDep:SharedObject}). However, components that are statically linked or made available through shared libraries do introduce compile dependencies. These compile dependencies become language dependent: a Fortran component must be accessed via the {\tt USE} statement, while a component with a C interface must be accessed via {\tt \#include}. The decision between the three cases: compile dependency on a Fortran component, compile dependency on a C/C++ component, or no compile dependency can be made on the {\tt ESMF\_DEP\_FRONT} variable. By default it is assumed to contain the name of the Fortran module that provides the public entry point into a component written in Fortran. However, if the contents of the {\tt ESMF\_DEP\_FRONT} variable ends in {\tt .h}, it is interpreted as the header file of a component with a C interface. Finally, if it ends in {\tt .so}, there is no compile dependency, and the component is accessible through a shared object. 
A NUOPC compliant component written in C/C++ that defines its public access point in "abc.h", where all component code is contained in a single object file called "abc.o", makes itself available by providing the following {\tt .mk} file: \begin{verbatim} ESMF_DEP_FRONT = abc.h ESMF_DEP_INCPATH = <absolute path to abc.h> ESMF_DEP_CMPL_OBJS = <absolute path>/abc.o ESMF_DEP_LINK_OBJS = <absolute path>/abc.o ESMF_DEP_SHRD_PATH = ESMF_DEP_SHRD_LIBS = \end{verbatim} {\bf Hints for the implementor:} There are a few subtle complications to cover for the case where a component with C interface comes in as a compile dependency. First there is Fortran name mangling of symbols which includes underscores, but also changes to lower or upper case letters. The ESMF C interface provides a macro ({\tt FTN\_X}) that deals with the underscore issue on the C component side, but it cannot address the lower/upper case issue. The ESMF convention for using C in Fortran assumes all external symbols lower case. The NUOPC Layer follows this convention in accessing components with C interface from Fortran. Secondly, there is no namespace protection of the public entry points. For this reason, the public entry point cannot just be {\tt setservices} for all components written in C. Instead, for components with C interface, the public entry point must be {\tt setservices\_name}, where "name" is the same as the root name of the header file specified in {\tt ESMF\_DEP\_FRONT}. (The absence of namespace protection is still an issue where multiple C components with the same name are specified. This case requires that components are renamed to something more unique.) Finally there is the issue of providing an explicit Fortran interface for the public entry point. One way of handling this is to provide the explicit Fortran interface as part of the components header file. This is essentially a few lines of Fortran code that can be used by the upper software layer to implement the explicit interface. As such it must be protected from being processed by the C/C++ compiler: \begin{verbatim} #if (defined __STDC__ || defined __cplusplus) // ---------- C/C++ block ------------ #include "ESMC.h" extern "C" { void FTN_X(setservices_abc)(ESMC_GridComp gcomp, int *rc); } #else !! ---------- Fortran block ---------- interface subroutine setservices_abc(gcomp, rc) use ESMF type(ESMF_GridComp) :: gcomp integer, intent(out) :: rc end subroutine end interface #endif \end{verbatim} An upper level software layer that intends to use a component that comes with such a header file can then use it directly on the Fortran side to make the component available with an explicit interface. For example, assuming the macro {\tt FRONT\_H\_ATMF} holds the name of the associated header file: \begin{verbatim} #ifdef FRONT_H_ATMF module ABC #include FRONT_H_ATMF end module #endif \end{verbatim} This puts the explicit interface of the {\tt setservices\_abc} entry point into a module named "ABC". Except for this small block of code, the C/C++ component becomes indistinguishable from a component implemented in Fortran. {\bf Hints for the provider side:} Adding a build rule for creating a compliant self-describing {\tt .mk} file into the component's makefile is straightforward. 
For the case that the component in "abc.h" is implemented in object files listed in variable "OBJS", a build rule that produces "abc.mk" could look like this: \begin{verbatim} .PRECIOUS: %.o %.mk : %.o @echo "# ESMF self-describing build dependency makefile fragment" > $@ @echo >> $@ @echo "ESMF_DEP_FRONT = abc.h" >> $@ @echo "ESMF_DEP_INCPATH = `pwd`" >> $@ @echo "ESMF_DEP_CMPL_OBJS = `pwd`/"$< >> $@ @echo "ESMF_DEP_LINK_OBJS = `pwd`/"$< >> $@ @echo "ESMF_DEP_SHRD_PATH = " >> $@ @echo "ESMF_DEP_SHRD_LIBS = " >> $@ abc.mk: abc.o: abc.h \end{verbatim} {\bf Hints for the consumer side:} The format of the NUOPC compliant {\tt .mk} files still allows the consumer side to collect the information provided by multiple components into one set of internal variables. This still holds even when any of the provided components could come in as a Fortran component for static linking, as a C/C++ component for static linking, or as a shared object. All of the component sections in the consumer makefile can be made capable of handling all three cases. However, if it is clear that a certain component is for sure supplied as one of these flavors, it may be clearer to hard-code support for only one mechanism for this component. Notice that in the makefile code below it is critical to use the {\tt :=} style assignment instead of a simple {\tt =} in order to have the assignment be based on the {\em current} value of the right hand variables. This example shows how the section for a specific component can be made compatible with all component dependency modes: \begin{verbatim} include abc.mk ifneq (,$(findstring .h,$(ESMF_DEP_FRONT))) DEP_FRONTS := $(DEP_FRONTS) -DFRONT_H_ABC=\"$(ESMF_DEP_FRONT)\" else ifneq (,$(findstring .so,$(ESMF_DEP_FRONT))) DEP_FRONTS := $(DEP_FRONTS) -DFRONT_SO_ABC=\"$(ESMF_DEP_FRONT)\" else DEP_FRONTS := $(DEP_FRONTS) -DFRONT_ABC=$(ESMF_DEP_FRONT) endif DEP_FRONTS := $(DEP_FRONTS) -DFRONT_ABC=$(ESMF_DEP_FRONT) DEP_INCS := $(DEP_INCS) $(addprefix -I, $(ESMF_DEP_INCPATH)) DEP_CMPL_OBJS := $(DEP_CMPL_OBJS) $(ESMF_DEP_CMPL_OBJS) DEP_LINK_OBJS := $(DEP_LINK_OBJS) $(ESMF_DEP_LINK_OBJS) DEP_SHRD_PATH := $(DEP_SHRD_PATH) $(addprefix -L, $(ESMF_DEP_SHRD_PATH)) \ $(addprefix -Wl$(COMMA)-rpath$(COMMA), $(ESMF_DEP_SHRD_PATH)) DEP_SHRD_LIBS := $(DEP_SHRD_LIBS) $(addprefix -l, $(ESMF_DEP_SHRD_LIBS)) \end{verbatim} The above makefile segment will end up setting macro {\tt FRONT\_H\_ABC} to the header file name, if the component described in "abc.mk" is a C/C++ component. It will instead set macro {\tt FRONT\_SO\_ABC} to the shared object if this is how the component is made available, or set macro {\tt FRONT\_ABC} to the Fortran module name if that is the mechanism for gaining access to the component code. The calling code can use these macros to activate the corresponding code, as well as has access to the required name string in each case The internal variables set by the above makefile code can be used by the same makefile rules shown for the statically linked case. This usage implements the correct dependency rules, and passes the macros through the compiler flags.
{ "alphanum_fraction": 0.7469410907, "avg_line_length": 73.4583333333, "ext": "tex", "hexsha": "d0e400e48c557b25d976c758abe355c5343d1d85", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_forks_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_forks_repo_name": "joeylamcy/gchp", "max_forks_repo_path": "ESMF/src/addon/NUOPC/doc/NUOPC_StandardComponentDep.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_issues_repo_issues_event_max_datetime": "2022-03-04T16:12:02.000Z", "max_issues_repo_issues_event_min_datetime": "2022-03-04T16:12:02.000Z", "max_issues_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_issues_repo_name": "joeylamcy/gchp", "max_issues_repo_path": "ESMF/src/addon/NUOPC/doc/NUOPC_StandardComponentDep.tex", "max_line_length": 780, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0e1676300fc91000ecb43539cabf1f342d718fb3", "max_stars_repo_licenses": [ "NCSA", "Apache-2.0", "MIT" ], "max_stars_repo_name": "joeylamcy/gchp", "max_stars_repo_path": "ESMF/src/addon/NUOPC/doc/NUOPC_StandardComponentDep.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-05T16:48:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-05T16:48:58.000Z", "num_tokens": 9737, "size": 37023 }
% !TeX spellcheck = en_GB
% !TeX encoding = UTF-8
\chapter{Background and Theory}
\label{ch:theory}
\epigraph{"Math is like water. It has a lot of difficult theories, of course, but its basic logic is very simple."}{- Haruki Murakami, 1Q84}
This chapter is a summary of all the fundamental aspects of machine learning, more specifically of the elements that are crucial for an understanding of the prototype's functionality. First of all, the historical development of machine learning is covered briefly. Afterwards, those aspects and algorithms of machine learning which are relevant to the topic of this thesis are discussed. In the final part of this chapter, two different approaches of explainable artificial intelligence are introduced. Furthermore, a specific kind of explainable artificial intelligence is presented, such that the reader can understand how predictions can become transparent.
\section{History of Machine Learning}
\label{sec:history}
An example of machine learning is to take a lot of images and try to recognize cats (patterns) on them. The machine learning algorithm would be able to predict whether it is a cat or not. In this example, machine learning can be seen as
\begin{quote}\textit{"[...] a set of methods which can automatically detect patterns in data, and then use the uncovered patterns to predict unseen data or to perform other kinds of decision making under uncertainty" \cite[p. 1]{Murphy2012}.}
\end{quote}
But there is more than one definition which describes machine learning. Arthur Samuel was the first researcher who used this term and defined it as "Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed" \cite{Samuel1959SomeSI}. The computer scientist Mitchell provided a more formal definition of machine learning, which is quoted in hundreds of papers\footnote{998 results on Google Scholar while searching for the original definition from Mitchell (date: 24 Jan 2020)}. In his book from 1997, he says "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" \cite{Mitchell97}.\\
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=1\linewidth]{photo/04_history_of_machine_learning}}
\caption{Determining critical technologies within the domains of artificial intelligence (extended representation by Dmitri Gross according to a presentation from Michael Copeland) \cite{Gross2017} \cite{COPELAND2017}}
\label{fig:04_history_of_machine_learning}
\end{figure}
According to the Science magazine, Michael Jordan, the most influential computer scientist in the field of pattern recognition, wants to give another, more modern definition. His objective is to give a neutral statement which defines the term artificial intelligence. He says:
\begin{quote}
\textit{"It is one of today's rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science" \cite{Bohannon2016} \cite{Jordan255}}
\end{quote}
Furthermore, with this definition, Jordan wants to connect the terms of machine learning and statistics. In his opinion, these two terms are the essential components of artificial intelligence \cite{MichaelJordan2018}. Machine learning has changed over the last decades.
Machine learning became popular in the eighties, as a result of the increasing use of personal computers. Deep learning has gained popularity since around 2010, driven by the development of more powerful graphics processing units. This representation is criticized as well, mainly because of the unclearly defined terminology. It is nevertheless useful to get an overview of the field and its historical evolution over the last years (\hyperref[fig:04_history_of_machine_learning]{Figure \ref{fig:04_history_of_machine_learning}}).
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.4\linewidth]{photo/05_deep_learning_sucess_with_alpha_go.jpeg}}
\caption{The Netflix documentary Alpha Go is part of the hype around deep learning methods. Critics claim that as long as the problem of transparency is not solved, these results are of limited significance, because such models cannot be used in a real-world application \cite{Thetruec1:online}}
\label{fig:05_deep_learning_sucess_with_alpha_go}
\end{figure}
Alpha Go, which was developed by the Google research laboratory, is a good example of the popularity of deep learning methods. It shows how an enormous amount of time and hardware resources can lead to success, as the artificial player became the top-ranked player in the board game Go. But this success is criticized as well: it is not a real-world application and got pushed by a Netflix documentary, and scientists claim that it is just an overhyped topic as long as the model cannot explain its decisions (\hyperref[fig:05_deep_learning_sucess_with_alpha_go]{Figure \ref{fig:05_deep_learning_sucess_with_alpha_go}}).
\section{Supervised Machine Learning for Upload Filters}
\label{sec:supervised_learning}
By doing the following steps, a so-called cost function (\hyperref[def:cost_function]{Definition \ref{def:cost_function}}) is optimized to get a prediction on unseen data, as defined by Mitchell (\hyperref[sec:history]{Section \ref{sec:history}}). An example application is an upload filter which determines if a dog or a cat is on an image:
\begin{enumerate}
\itemsep-0.8em
\item Eliminate empty entries and statistically irrelevant data, e.g. duplicates
\item Rescale the input data into the required form
\item Split the data into a test and a training set
\item Initialize the parameters of the model
\item Learn the parameters for the model by minimizing the cost function (\hyperref[def:cost_function]{Definition \ref{def:cost_function}})
\item Use the learned parameters to make predictions (\hyperref[def:mapping_function]{Definition \ref{def:mapping_function}})
\item Test the ability to predict patterns on unseen data correctly
\end{enumerate}
In the following, these steps are described in greater detail. The example of an Instagram upload filter is a classification task which can be solved by a supervised machine learning algorithm. It is designed to learn by example \cite[p. 3 - 4]{Murphy2012}. Examples of cats and suitable counterexamples are assumed to be given before a supervised machine learning algorithm gets developed (\hyperref[fig:06_example_dog_vs_cat_dataset]{Figure \ref{fig:06_example_dog_vs_cat_dataset}}).\\\\
\begin{figure}[htp]
\fbox{\includegraphics[width=1\linewidth]{photo/06_example_dog_vs_cat_dataset}}
\caption{Examples sampled from the ImageNet data set.
The patterns on these images are similar to the patterns which the prototype shall recognise \cite{Building13:online}.}
\label{fig:06_example_dog_vs_cat_dataset}
\end{figure}
The name supervised learning comes from the idea that training this type of algorithm is like having a supervisor which observes the whole learning process, e.g. recognizing cats on images after having seen enough samples of the same distribution. At the same time, a "trainer" leads the algorithm to the correct result \cite[p. 103]{Goodfellow-et-al-2016} \cite[p. 3]{Murphy2012}. Technically speaking, this input data is called training data. First, training data consists of input data, for example, values that represent an image. Second, training data consists of output data, for example, the class which an image belongs to. During training, the algorithm will search for patterns in the input data that correlate with the desired outputs. After training, the supervised learning algorithm will be fed with new unseen examples such that its ability to predict unseen data can be tested. So, the algorithm determines which labels correspond to the unseen examples, based on the optimized parameters. Such an algorithm can be expressed as follows \cite[p. 3]{Murphy2012}:
\begin{definition}[label=def:mapping_function]{Mapping Function}
\begin{align*}
f(X) = \hat{y}
\end{align*}
where \\\\
\( f() \) = Mapping function which assigns the labels to a given input. \\
\( X \) = A matrix of input values where each column in \( X \) stands for a single example. \\
\( \hat{y}\) = The determined labels for each column in \( X \) represented by a vector.\\
\end{definition}
Here \(\hat{y}\) is the predicted output, which is determined by a mapping function \(f\) that assigns a label to a single value or a set of multiple input values denoted by \(X\). The function connects the input features to a probability of belonging to a certain class. The logic behind this function is also called a machine learning model \cite[p. 3]{Murphy2012}.\\
Before a machine learning algorithm gets trained, the training data has to be prepared. This step is called preprocessing. An example of this is a machine learning algorithm which detects cats on images, for which the training data is required to be preprocessed. This means the data has to be prepared such that it fits into the mapping function. In the example of a machine learning algorithm which detects cats and dogs, this means the images are transformed such that they are represented by a matrix. Within this matrix, every column stands for an example. Furthermore, the corresponding labels shall be in the form of a vector. Therefore, every value within this vector corresponds to a column in the input matrix. The vector represents the labels by zeros and ones. For example, a zero at the first position in the output vector means that the first column in the input matrix shows a cat \cite{brownlee2019deep}. When talking about the training of a machine learning algorithm, the optimization of a function is always meant. The function to be optimized is the mapping function which assigns an output value to a given input. Therefore the parameters of the function are optimized during the training process, such that the output values match the actual output values as closely as possible. In other words, the predicted values shall be as close as possible to the real values (\hyperref[def:cost_function]{Definition \ref{def:cost_function}}). Usually, that is an optimization problem and is solved with different techniques.
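The split into a training and a test set mentioned above can be sketched as follows. This is a minimal illustration; the array shapes, the placeholder data and the 80/20 ratio are assumptions and not taken from the prototype:
\begin{lstlisting}[captionpos=b,language=Python, caption={Illustrative sketch of splitting a prepared data set into a training and a test set.}]
import numpy as np

# Placeholder data: 209 flattened images as columns of X, one 0/1 label each.
X = np.random.rand(12288, 209)
y = np.random.randint(0, 2, size=(1, 209))

# Shuffle the examples and keep roughly 80% for training, 20% for testing.
permutation = np.random.permutation(X.shape[1])
split = int(0.8 * X.shape[1])
train_idx, test_idx = permutation[:split], permutation[split:]

X_train, y_train = X[:, train_idx], y[:, train_idx]
X_test, y_test = X[:, test_idx], y[:, test_idx]
\end{lstlisting}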
The technique also determines the required form of the input and output data, as suggested in step two. If we use logistic regression to classify cats, a structure as shown in \hyperref[def:structure_for_linear_logistic_regression]{Definition \ref{def:structure_for_linear_logistic_regression}} is necessary. Logistic regression is a model which is applied to determine the probability that a certain class or event exists, such as pass/fail, win/lose, alive/dead or healthy/sick. It can be extended to predict several classes of events, such as determining whether an image contains a cat, dog, lion, etc.
\begin{definition}[label=def:structure_for_linear_logistic_regression]{Required input format for a machine learning program with logistic regression}
\begin{align*}
& X \in \mathbb{R}^{n\times m} \\
& y \in \mathbb{R}^{m}
\end{align*}
where \\\\
\( X \) = A matrix of input values. Each column stands for a single example. \\
\( y \) = A vector with labels. Each value belongs to a column of the input matrix\\
\( n \) = Total amount of input features\\
\( m \) = Total amount of examples\\
\( \mathbb{R} \) = All values are in real number space \\
\end{definition}
A pixel image of a cat, as you can see on the left-hand side in \hyperref[fig:07_image_vector_representation.png]{Figure \ref{fig:07_image_vector_representation.png}}, is represented as three matrices (\hyperref[fig:07_image_vector_representation.png]{Figure \ref{fig:07_image_vector_representation.png}}, centre). With the help of mathematical functions from the field of linear algebra, the images are transformed from the matrix form into a vector form (\hyperref[fig:07_image_vector_representation.png]{Figure \ref{fig:07_image_vector_representation.png}}, right) \cite[p. 276]{brownlee2019deep}. Even if it is not necessary, it is recommended to normalize the values; otherwise, the calculations could become very slow and very memory intensive \cite[p. 57]{Goodfellow-et-al-2016}.
\begin{figure}[htp]
\fbox{\includegraphics[width=1\linewidth]{photo/07_image_vector_representation.png}}
\caption{An image is represented by three matrices, one for each colour channel (red, green and blue). After transforming it with linear algebra it becomes a vector.}
\label{fig:07_image_vector_representation.png}
\end{figure}
The objective is to optimize the cost function (\hyperref[def:cost_function]{Definition \ref{def:cost_function}}) for many examples. This is why the input gets fed into a matrix (\hyperref[def:structure_for_linear_logistic_regression]{Definition \ref{def:structure_for_linear_logistic_regression}}). After transforming an image, we get a vector, so it is mandatory to store every single vector as a column of a matrix. Matrix representations are one of the reasons why machine learning algorithms are much more efficient. Nowadays, memory is cheaper than computing power. Without matrix multiplication, more loops are required, and such loops require a relatively large amount of computing power, especially for extensive data. Matrix multiplication requires a lot of memory but requires fewer loops and therefore less computing power. Due to the extreme amounts of data that are processed during the calculation of machine learning algorithms, these effects are multiplied by each other, which makes an efficient computation even more important \cite{AndrewNG}.\\
The next step in designing a machine learning algorithm is to create a specific function which can be optimized.
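Before moving on to that function, the reshaping and normalization described above can be sketched in a few lines of code. The image array below is a random placeholder; only the shapes matter for the illustration:
\begin{lstlisting}[captionpos=b,language=Python, caption={Illustrative sketch of flattening images into columns of the input matrix and normalizing the values.}]
import numpy as np

# Assumed input: m RGB images of size 64 x 64, i.e. an array of shape (m, 64, 64, 3).
images = np.random.randint(0, 256, size=(209, 64, 64, 3))

# Flatten every image into one column vector of length 64*64*3 = 12288,
# so that each column of X represents one example.
m = images.shape[0]
X = images.reshape(m, -1).T     # shape (12288, m)

# Normalize the pixel values to the range [0, 1] to keep the optimization stable.
X = X / 255.0
\end{lstlisting}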
As a first step, a linear function like the following can be used:
\begin{definition}[label=lf]{A Linear Function as a vector and matrix representation to compute multiple input values at once}
\begin{align*}
z = W X + b
\end{align*}
where \\\\
\(z\) = The results of the linear function\\
\(W\) = A matrix with parameters assigned to every row of the input matrix \( X \)\\
\(X\) = A matrix of input values where each column in \( X \) stands for a single example. \\
\(b\) = Intercept added in a linear equation, called bias parameter.\\
\end{definition}
The next step is to pass the result \( z \) to a sigmoid function. Every function which is used after calculating the linear function itself is a so-called activation function. Activation functions are mathematical equations to determine the output of a neural network, and an activation function can be used for different tasks. In the present example the sigmoid function (\hyperref[def:sf]{Definition \ref{def:sf}}) maps each output to a value between 0 and 1, in such a way that this value can be interpreted as a probability (\hyperref[def:sf]{Definition \ref{def:sf}}). Finally, a label can be assigned to the input by using a rule-based approach. For example, if the predicted probability that a certain case occurs is greater than 50\%, a positive label is assigned (if the probability that a cat is on the image is greater than 50\%, then the image gets the label cat).\\
\begin{definition}[label=def:sf]{Sigmoid Function, e.g. for using it to get probabilities}
\begin{align*}
\hat{y} = sigmoid(z)
\end{align*}
where \\\\
\( \hat{y}\) = The determined labels for each column in \( X \) represented by a vector.\\
\( z \) = The results of the linear function\\
\end{definition}
The next step for the example of an Instagram upload filter is to get a criterion which can be used to optimize the parameters, in other words, to quantify the success of the training process. In order to get such a criterion, a so-called error value is formed. This is calculated from the actual label (\(y\)) and the predicted label (\( \hat{y} \)). How exactly this calculation is done is determined by the loss function. For example, the loss function could be defined by the cross-entropy function (\hyperref[def:cost_function]{Definition \ref{def:cost_function}}), which compares the given values with the calculated values. Every function in machine learning that could be used to compute such an error can be used as a loss function as well. Finally, all error values of the individual examples are summed up and divided by the total number of examples. The function which is used to do so is therefore called error function and denoted by \(\mathcal{J}\). \(\mathcal{J}\) maps all errors to a single value. Finally, the error function \(\mathcal{J}\) gets optimized during the training process: the parameters (\( W \) and \( b \)) are adjusted such that the error function \(\mathcal{J}\) reaches a global minimum. While in the first step, the so-called forward propagation, the predictions and error values are calculated, in the second step the weights of the function are optimized. This step uses the chain rule to form the gradient of the weights and adjusts them according to this gradient. In the literature, the second step is commonly called backward propagation. This step is explained in greater detail in \hyperref[sec:gd]{Section \ref{sec:gd}}.
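Before the cost function is stated formally below, the forward pass and one backward propagation (gradient) step of this single-neuron model can be sketched in a few lines of code. The sketch is only an illustration: the learning rate, the gradient-descent update and the array shapes are assumptions for this example and not taken from the prototype.
\begin{lstlisting}[captionpos=b,language=Python, caption={Illustrative sketch of one forward and backward pass for the single-neuron (logistic regression) model.}]
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def training_step(W, b, X, y, learning_rate=0.01):
    m = X.shape[1]
    # Forward propagation: linear function followed by the sigmoid activation.
    z = np.dot(W, X) + b            # z = WX + b
    y_hat = sigmoid(z)              # predicted probabilities
    # Cross-entropy cost J, averaged over all m examples.
    cost = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    # Backward propagation: gradients of J with respect to W and b (chain rule).
    dW = np.dot(y_hat - y, X.T) / m
    db = np.mean(y_hat - y)
    # One gradient descent step.
    W = W - learning_rate * dW
    b = b - learning_rate * db
    return W, b, cost
\end{lstlisting}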
\begin{definition}[label=def:cost_function]{Cost Function which can be optimized}
\begin{align*}
\mathcal{J} = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(\hat{y}^{i}, y^{i})
\end{align*}
where \\\\
\(m\) = Number of examples, e.g. number of cat and no-cat images \\
\(\mathcal{L}\) = Cross-Entropy function \\
\(\hat{y}\textsuperscript{i}\) = Calculated label for the \(i\)th example\\
\(y\textsuperscript{i}\) = Correct label for the \(i\)th example\\
\( \mathcal{J} \) = Cost Function which can be optimized s. t. it is at its global minimum
\end{definition}
The cost function (\hyperref[def:cost_function]{Definition \ref{def:cost_function}}) should not be fitted as closely as possible to the training data. A function which fits the training data perfectly is called over-fitted. Overfitting means models perform well on the training data but do not generalize well to new data. It happens when the model is too complex relative to the amount and noisiness of the training data. If the accuracy of predicting patterns within the training set is high, but the accuracy of predicting patterns on images from the test set is low, the machine learning algorithm is likely overfitted. So, it is time to take corrective measures \cite[p. 110]{Goodfellow-et-al-2016}. Anyway, the goal of supervised learning as in the Instagram upload filter example is to achieve high performance on unseen data. To do that, the data has to be divided into a test and a training set. The training set is used as described before (preprocessing the data and optimizing the cost function towards its global minimum). In contrast, the test data set is only preprocessed and then used to make predictions about unseen cat images. So it is possible to test the neural network for its ability to predict unseen data (\hyperref[sec:history]{Section \ref{sec:history}}).\\\\
In other words, every step is necessary to train a machine-learning algorithm to predict if it is a cat or not. \hyperref[fig:08_process_of_pretictions_for_a_cat_image]{Figure \ref{fig:08_process_of_pretictions_for_a_cat_image}} visualizes the process of predicting a \(64\times64\) image. The notation is explained in \hyperref[def:label]{Definition \ref{def:label}}.
\begin{definition}[label=def:label]{Notation used in Figure \ref{fig:08_process_of_pretictions_for_a_cat_image}}
\(x\textsuperscript{th} \) = one input value for each pixel of the \(i\)th example \\
\( W\textsubscript{x} \) = \(x\)th parameter assigned to each pixel of \(x\textsuperscript{th}\) \\
\( W a + b\) = the linear function which computes a first output (\hyperref[lf]{Definition \ref{lf}}) \\
\( \sigma \) = the sigmoid function to get the labels (\hyperref[def:sf]{Definition \ref{def:sf}})
\end{definition}
\begin{figure}[htp]
\fbox{\includegraphics[width=1\linewidth]{photo/08_process_of_pretictions_for_a_cat_image.png}}
\caption{The image visualizes the whole process of using an image (1) and putting it into a single neuron (4). First, the neuron calculates a linear function and uses the result as input to a sigmoid function (it becomes an interpretable value between zero and one). The network can make a prediction (5) while using a decision boundary (e.g. if the probability is higher than 0.5 it is a cat). If the forecast is not correct, the parameters of the function (2, 3) get adjusted until the function is optimized.
Finally, we get an approximation that fits all training examples as closely as possible.}
\label{fig:08_process_of_pretictions_for_a_cat_image}
\end{figure}
An algorithm that works like this can be considered a simple neural network \cite{Britz2015}. The term neural comes from the fact that at first scientists tried to recreate the functionality of neurons in the human brain. Despite these early beginnings, neural networks do not have much in common with the brain, because the brain is much more complicated than it seemed at first \cite{Kriesel2007NeuralNetworks}. However, it is called a network because usually several neurons work together. In the simple scenario from \hyperref[fig:08_process_of_pretictions_for_a_cat_image]{Figure \ref{fig:08_process_of_pretictions_for_a_cat_image}} we had just one neuron (\hyperref[fig:09_example_of_a_single_neuron]{Figure \ref{fig:09_example_of_a_single_neuron}}).
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.4\linewidth]{photo/09_example_of_a_single_neuron}}
\caption{An example of a single neuron. First, a linear output is calculated by a simple linear function with the parameters \(W\) and \(b\). Afterwards the output is normalized to get probabilities.}
\label{fig:09_example_of_a_single_neuron}
\end{figure}
For the Instagram upload filter, such a neural network\footnote{The achieved accuracy is not always precisely the same, because the optimization just reaches a local minimum rather than a global minimum. In practice, this difference is hardly relevant. Besides, the results can differ depending on the implementation. The results presented here were obtained with the programming language Python using frameworks from the field of data science.} achieves a test accuracy of 70\% (\hyperref[lst:acc]{Listing \ref{lst:acc}}).\\\\
\begin{lstlisting}[captionpos=b,label={lst:acc},language=Python, caption={Test accuracy is 70\% after 2000 iterations, using 209 examples with 12288 features (\(64\times64 \) pixels). This is not state of the art but very good considering that this is a linear classifier on a high-dimensional feature space.}]
Cost after iteration 0: 0.693147
Cost after iteration 1000: 0.214820
...
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
\end{lstlisting}
It is crucial to achieve a higher accuracy before creating a transparent neural network which is easy to understand. Within the next section, machine learning algorithms are introduced to get a higher accuracy.
\subsection{Deep Learning Neural Networks}
\label{subsec:deep_learning}
\hyperref[fig:10_process_of_pretictions_for_a_cat_image_two_layers]{Figure \ref{fig:10_process_of_pretictions_for_a_cat_image_two_layers}} shows how the same network as in \hyperref[fig:08_process_of_pretictions_for_a_cat_image]{Figure \ref{fig:08_process_of_pretictions_for_a_cat_image}} would look if we used two layers. The notation is explained in \hyperref[def:label2]{Definition \ref{def:label2}}.
\begin{definition}[label=def:label2]{Notation used in Figure \ref{fig:10_process_of_pretictions_for_a_cat_image_two_layers}}
\(x\textsubscript{th} \) = one input value for each pixel of the \(i\)th example \\
\(a\textsubscript{th} \) = a so-called neuron which receives the results from the first layer \\
\( WX+b\) = the linear function which computes a first output (\hyperref[lf]{Definition \ref{lf}}) \\
\( \sigma \) = the sigmoid function to get labels (\hyperref[def:sf]{Definition \ref{def:sf}})
\end{definition}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.8\linewidth]{photo/10_process_of_pretictions_for_a_cat_image_two_layers.png}}
\caption{As before in \hyperref[fig:08_process_of_pretictions_for_a_cat_image]{Figure \ref{fig:08_process_of_pretictions_for_a_cat_image}}, the image visualizes the whole process of using a neural network to recognise patterns of cats and dogs. Instead of just one layer, the machine learning algorithm now uses two layers.}
\label{fig:10_process_of_pretictions_for_a_cat_image_two_layers}
\end{figure}
On the one hand, more layers result in an increasing amount of parameters which the algorithm has to optimize \cite[p. 21]{Goodfellow-et-al-2016}. For example, finding the global minimum of the cost function in our Instagram upload filter example would take significantly more time and require more examples. And even then it is not certain that such a global minimum exists. On the other hand, a neural network with more layers would outperform a neural network with just one layer by far. \hyperref[lst:acc2l]{Listing \ref{lst:acc2l}} shows that the algorithm detects up to 80\% of the images correctly. So, the accuracy of the deep neural network, compared with the single-layer neural network, has increased by 10\%. The results could be better if more examples were used as input data, and the required time and computing capacity would still be comparatively low with such an implementation. That is why the neural network is just used with the same amount of examples \cite[p.167]{Goodfellow-et-al-2016} \cite[p.995 - 997]{Murphy2012}. In general, the term deep within deep neural networks is related to the number of layers. Andrew Ng suggests that neural networks with more than one layer should be called deep \cite{AndrewNG}. Furthermore, every layer which is neither the input nor the output layer is called a hidden layer. Whereas the output layer counts towards the total amount of layers, the input layer is usually not considered as a layer. There is no exact definition of what a layer exactly is; most sources are excluding the input and including the output layer. Steps in which an activation function activates the neurons do not count as independent layers. They belong to the previous layer \cite{AndrewNG} \cite{Kriesel2007NeuralNetworks} \cite{Goodfellow-et-al-2016}. A problem with this kind of fully connected network is its limited capability of training with larger images. So far, the network was built to recognize patterns on images with 4096 input features (\( 64 \times 64\) pixels). Suppose the input is a \( 300 \times 300\) RGB image and the first layer of the network has 90000 neurons, each one fully connected to the input. That means that each neuron in the previous layer is connected to each node in the following layer. The number of parameters which the training process has to optimize would be calculated with the following formula \cite{Vasudev2019}\cite{DBLP:journals/corr/TraskGR15}.
\begin{definition}[label=cn]{Calculating the total number of parameters of a fully connected layer}
\begin{align*}
& W\textsubscript{ff} = F\textsubscript{-1}\times F \\
& B\textsubscript{ff} = F \\
& P\textsubscript{ff} = W\textsubscript{ff} + B\textsubscript{ff}
\end{align*}
where \\\\
\(W\textsubscript{ff}\) = Number of weights of an FC layer which is connected to an FC layer \\
\(B\textsubscript{ff}\) = Number of biases of an FC layer which is connected to an FC layer \\
\(P\textsubscript{ff}\) = Number of parameters of an FC layer which is connected to an FC layer \\
F = Number of neurons in the FC layer\\
\(F\textsubscript{-1}\) = Number of neurons in the previous FC layer
\end{definition}
In the equation above, \( F_{-1} \times F \) is the total number of weights (connections between layers) from the neurons of the previous fully connected layer to the neurons of the current fully connected layer. The total number of biases is the same as the number of neurons (F). So, for the example given above (\( 300 \times 300\) pixels, 100 neurons in the first hidden layer, RGB and fully connected) the calculation would look like this:\\
\begin{align*}
& W\textsubscript{ff} = 270000 \times 100 = 27000000 \\
& B\textsubscript{ff} = 100 \\
& P\textsubscript{ff} = W\textsubscript{ff} + B\textsubscript{ff} = 27000000 + 100 = 27000100 \\
& F = 100 \\
& F\textsubscript{-1} = 270000 \text{ (300} \times \text{300 pixels} \times \text{3 colour channels)}\\
\end{align*}
If the image gets even bigger and the amount of layers increases, the weights can not be optimized with standard hardware, because the number of parameters is too big. As a guideline, up to about 14000000000 parameters it is still possible to train without special hardware (e.g. on your own computer). Everything above this is difficult to realize on a personal computer without any hardware adjustments. Besides, such models could not make predictions in a reasonable amount of time \cite{DBLP:journals/corr/TraskGR15}. Instead of the introduced neural network, there are particular implementations of deep neural networks that can calculate a forecast more efficiently.\\
The objective of the next section is to determine how to get excellent results in a reasonable time, even if the input images are greater than or equal to \( 300 \times 300\) pixels.
\begin{lstlisting}[captionpos=b,label={lst:acc2l}, float=tb,language=Python, caption={Test accuracy is 80\% after 2400 iterations, using 209 examples with 12288 features (\(64\times64 \) pixels). This is not state of the art but very good considering that this is an algorithm which is not specialised to recognize images.}]
...
Cost after iteration 2300: 0.100897
Cost after iteration 2400: 0.092878
train accuracy: 98.5645933014 %
test accuracy: 80.0 %
\end{lstlisting}
\subsection{Convolutional Neural Networks}
\label{subsec:cnn}
The most popular deep learning models leveraged for computer vision problems are convolutional neural networks. To present convolutional neural networks, the example given at the beginning of this chapter will be changed. Instead of cats, the network shall now recognize ten different digits which are imitated with the hands, as you can see in \hyperref[fig:signs]{Figure \ref{fig:signs}}, where:
\begin{itemize}
\itemsep-0.8em
\item \textbf{y} is a vector with the length of the possible outputs, e.g.
imitating five digits means the length of a single output vector would be five \\
\item and each element of \textbf{y} is a binary value which determines whether the input belongs to the class at the \(i\textsuperscript{th}\) position within the vector\\
\end{itemize}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.9\linewidth]{photo/11_numbers_by_hand_gestures}}
\caption{Examples of gestures which are imitating digits}
\label{fig:signs}
\end{figure}
This is a more realistic case of an upload filter because, in a similar way, the "Hitlergruß" gesture or other forbidden gestures can be identified. Even though the strength of a convolutional neural network is to recognize images with high dimensions, the dimensions will remain low as in the example before. The calculation with inputs of higher dimensions would take too long to be trained within the scope of this thesis \cite{DBLP:journals/corr/TraskGR15}.\\
Before, I used a so-called densely (fully) connected neural network (Figure \ref{fig:10_process_of_pretictions_for_a_cat_image_two_layers}). The network consists of a certain amount of neurons which are arranged in different layers (Figure \ref{fig:08_process_of_pretictions_for_a_cat_image}). Each neuron is fully connected with the neurons in the previous layer. Figure \ref{fig:fully_connected} shows how complex such a network can become. Consider that each connection is represented by a weight. This weight has to be initialized and adjusted. When it comes to understanding when and why which weight was adjusted, in order to ensure transparency, it becomes clear that such a network can be a confusing issue.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/12_example_fully_connected_network}}
\caption{Example of a fully connected neural network}
\label{fig:fully_connected}
\end{figure}
More neurons would result in a high amount of parameters if the image has too many dimensions \cite[p. 324]{Goodfellow-et-al-2016}. An alternative approach would be to use the mathematical operation of convolution\footnote{Technically I skip the kernel-flipping operation, so this would be a cross-correlation instead of a convolution. Still, as in the literature and by convention, I call this a convolutional operation anyway \cite{AndrewNG}}. The convolution makes it possible to detect patterns like edges which can be used to classify an image. If an image has edges which are similar to the kind of edges from another image, the probability is high that it is in the same class. For example, if the algorithm detects a set of edges which are typical for cat ears, the algorithm will probably consider this image as a cat. Other possibilities of extracted patterns are shown in Figure \ref{fig:edge_detec}.\\\\
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/13_example_feature_extraction}}
\caption{Patterns identified while using convolutional neural networks \cite{SimpleIn5:online}}
\label{fig:edge_detec}
\end{figure}
Convolution operations are widely spread in computer vision algorithms, so they are not unique to convolutional neural networks. A convolution is a mathematical operation where a small matrix of numbers is passed over the matrix representation of the image. This matrix is a so-called filter or kernel. Every colour channel would need its own filter. The name filter comes from the fact that it filters specific features from the input image, e.g. horizontal and vertical edges.
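The following sketch shows such a filter operation in plain NumPy (technically a cross-correlation, as noted in the footnote above). The input matrix and the edge filter are made-up example values, chosen only to illustrate the mechanics:
\begin{lstlisting}[captionpos=b,language=Python, caption={Illustrative sketch of a convolution (cross-correlation) with a vertical edge filter.}]
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over the image (stride 1, no padding) and store the
    # sum of the element-wise products in the resulting feature map.
    h, w = kernel.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            feature_map[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return feature_map

# A 6x6 example image with a vertical edge in the middle ...
image = np.array([[10, 10, 10, 0, 0, 0]] * 6)
# ... and a 3x3 filter that responds to vertical edges.
vertical_filter = np.array([[1, 0, -1],
                            [1, 0, -1],
                            [1, 0, -1]])

print(convolve2d(image, vertical_filter))  # 4x4 feature map marking the edge
\end{lstlisting}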
In Figure \ref{fig:edge_detec_example}, where
\begin{itemize}
\item \textbf{*} is the mathematical operation of convolution
\item \textbf{X} represents an image with a vertical edge in the middle of the image
\item \textbf{A} = the output matrix (feature map) which shows where the edge is\footnote{Because these images are very small, the dimensions of the edge shown in A are not accurate. With matrices of higher dimensions the proportions would be more accurate.}
\item \textbf{f} = a filter for detecting vertical edges,
\end{itemize}
you can see how a vertical edge, which is represented by a \( 6 \times 6\) matrix \(X\), is filtered and represented by a feature map while using a \( 3 \times 3\) matrix as a filter.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/14_vertical_edge_detection}}
\caption{Using convolution for edge detection}
\label{fig:edge_detec_example}
\end{figure}
For a convolution operation, the filter is placed in front of a selected pixel. Afterwards, each value from the filter is multiplied with the corresponding value from the image. Finally, the sum is placed at the corresponding position in the output matrix, as shown in Figure \ref{fig:conv}.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/15_example_convolution_operation.png}}
\caption{Convolution operation over a matrix}
\label{fig:conv}
\end{figure}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/16_example_horizontal_edges_detected.png}}
\caption{Convolution operation on hand gestures, using a filter for horizontal edges and projecting them on the original image (without changing its size)}
\label{fig:horizontal}
\end{figure}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/17_example_verticall_edges_detected.png}}
\caption{Convolution operation on hand gestures, using a filter for vertical edges and projecting them on the original image (without changing its size)}
\label{fig:vert}
\end{figure}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/18_example_verticall_horizontal_edges_detected}}
\caption{Convolution operation on hand gestures, using filters for horizontal and vertical edges and projecting them on the original image (without changing its size but using normalization to make the edges more clear)}
\label{fig:edges}
\end{figure}
The algorithms use a specific filter matrix to detect edges of every kind (e.g. vertical and horizontal edges). Two filters are used to identify first horizontal edges (Figure \ref{fig:horizontal}) and second vertical edges (Figure \ref{fig:vert}). Finally, both kinds of edges get projected onto the original image, and the colour values are normalized to show the vertical and horizontal edges more clearly (Figure \ref{fig:edges}). To figure out which filters shall be used to detect patterns is challenging because there are almost endless opportunities. That is where the neural network comes into play. Instead of setting the filter values manually, they are the parameters of a neural network. These parameters can now be optimized by using the cost function of the network (Definition \ref{def:cost_function}).
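A rough comparison of the parameter counts makes this benefit concrete. The numbers below are an illustrative sketch: the fully connected case follows the example from Definition \ref{cn}, and the convolutional layer size (eight \(3 \times 3\) filters) is an assumption chosen only for this comparison:
\begin{lstlisting}[captionpos=b,language=Python, caption={Illustrative comparison of the number of parameters of a fully connected layer and a small convolutional layer.}]
# Fully connected layer with 100 neurons on a 300x300 RGB image
# (see the parameter count formula above).
dense_params = (300 * 300 * 3) * 100 + 100   # 27,000,100 parameters

# Convolutional layer with eight 3x3 filters over the same 3-channel image:
# each filter has 3*3*3 weights plus one bias, independent of the image size.
conv_params = (3 * 3 * 3 + 1) * 8            # 224 parameters

print(dense_params, conv_params)
\end{lstlisting}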
That means the total amount of parameters no longer comes from the size of the image; it now comes from the filter size \cite{AndrewNG}.\\\\
Figure \ref{fig:archi} shows the architecture of such a network, which is used to identify filter values and finally helps to classify more efficiently.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/19_example_cnn_architecture.png}}
\caption{A typical architecture of a convolutional neural network made from different building blocks}
\label{fig:archi}
\end{figure}
As can be seen in the image, a convolutional neural network contains more than just convolution operations. One example is the ReLU, which is an activation function and is used to standardize values between the layers. The softmax activation function is used to get probabilities for each possible class. Afterwards, a decision boundary can be used to decide whether the image belongs to a class or not. The pooling layer reduces the height and width of the input. It helps minimize computations, and it helps to make feature detectors more invariant to the position of a feature in the input data. Typically a pooling layer is one of the two following types:
\begin{itemize}
\item Max-pooling uses another matrix which slides over the input and stores the maximum value of the window in the output.
\item Average-pooling uses another matrix which slides over the input and stores the average value in the output.
\end{itemize}
In Figures \ref{fig:max} and \ref{fig:avg} you can see how this would be done, where:
\begin{itemize}
\item \textbf{Stride} is the value which determines by how many pixels the filter gets shifted each time.
\end{itemize}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/20_max_pooling}}
\caption{Maximum pooling, or max pooling, is a pooling operation that calculates the maximum, or largest, value in each patch of each feature map. The results are downsampled or pooled feature maps that highlight the most present feature in the patch, in contrast to the average presence of the feature in the case of average pooling.}
\label{fig:max}
\end{figure}
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/21_average_pooling.png}}
\caption{Average pooling involves calculating the average for each patch of the feature map. This means that each 2×2 square of the feature map is downsampled to the average value in the square.}
\label{fig:avg}
\end{figure}
This kind of network and a dataset of ten different classes achieve an accuracy close to 80\%\footnote{The result could be even better if more training data and a deeper network were used}.
\begin{lstlisting}[captionpos=b,label={lst:cnn}, float=tb,language=Python, caption={Test accuracy is close to 80\% after a few epochs of training, using 209 examples with 12288 features (\(64\times64 \) pixels). That accuracy is not state of the art but very good considering that this is an algorithm which is not specialized to recognize images.}]
Cost after epoch 0 = 1.917929
Cost after epoch 5 = 1.506757
Train Accuracy = 0.940741
Test Accuracy = 0.783333
\end{lstlisting}
The state-of-the-art results in image recognition, e.g. on the cifar10\footnote{CIFAR-10 is an established computer-vision dataset used for object recognition. It is a subset of the 80 million tiny images dataset and consists of 60,000 \( 32 \times 32\) colour images containing one of 10 object classes, with 6000 images per class.
It was collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.} dataset, are impressive examples of how this technique has increased the performance of machine learning in the last years. They achieve results of 90\% and can be computed within a reasonable amount of time.\\
However, because of their non-linear structure, convolutional neural network algorithms with outstanding overall performance are usually seen as a black box. That means no information is provided about what exactly led the networks to their conclusion. Transferred to the Instagram upload filter example, this would imply that 10\% of the users are falsely blocked during the upload, and the user will not be informed about why the algorithm came to its decision.\\
The goal of the next section is to present the theory of different methods of how a convolutional neural network can behave more like a white-box model. In other words, a decision support system which can inform the user why the algorithm came to a specific decision.
\section{Explainable Artificial Intelligence}
Most deep learning models today are black-box models. That means no countermeasures are taken to make them transparent. Within the last years, techniques have arisen which help to make those models more transparent. That means they answer the question: for a given prediction, how important is each input feature value to that prediction? Before two of these techniques are explained in detail, it will be demonstrated one more time why this is so important.\\
Since deep learning has become more and more successful, science is also increasingly concerned with how to make deep learning models more transparent. For business leaders, machine learning engineers and the users who are confronted with the results, it is essential to know why a decision was made. Otherwise, we will not be able to learn from these models, and they will remain a black box \cite{MichaelJordan2018} \cite{Kuang2017}.\\
Figure \ref{fig:xai-old-vs-new} shows the difference between a traditional machine learning model and an explainable artificial intelligence model. Whereas the traditional approach focuses on the output itself (e.g. this is a cat), the explainable approach combines several features into an explanation which can be understood by humans (e.g. has fur, whiskers and claws).\\
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/22_what_is_xai}}
\caption{This is a cat, but how do you know? Explainable artificial intelligence seeks to develop systems that can translate complex algorithmic decision-making into language humans can understand. \cite{Robinson2017}}
\label{fig:xai-old-vs-new}
\end{figure}
Managers with a high responsibility for their department find it challenging to use these complex algorithms. Although they are aware that the results could increase the success of their company, they are afraid of using these models, because they are not transparent. If something goes wrong, they can hardly explain why the algorithm made the wrong decision. The results created by deep learning in image recognition could be beneficial for the healthcare sector as well, but if it is unclear why decisions were made, doctors will not give these approaches any chance. Just like the managers, doctors are responsible for their decisions, and if they get supported by an algorithm, they must understand the algorithm in the first place.
Otherwise, they could not trust them, because they would not be able to distinguish between a right and a wrong decision.\\
A lack of transparency is responsible for the rejection of deep learning models in the finance industry as well. Especially since the last big financial crisis in 2009, banks and financial companies must make their decisions transparent to different stakeholders. And as already mentioned in the introduction, small business owners could be affected by non-transparent models as well. If they run their business with the help of social media, it would harm them if their uploads were rejected automatically, without them even knowing why this decision was made.\\
More specifically, suppose a small shop owner wants to upload an image which shows him introducing a new product. The algorithm was trained to detect the ``Hitlergruß'', and it predicts that the user performs this gesture. Explainable artificial intelligence would help to explain automatically why this prediction was made. Thus, the shop owner could change the crucial factors immediately.\\\\
The following sections will focus on two techniques which could be used in an upload filter and which could therefore help the shop owner from the example given in the paragraph above. These two techniques are \cite{Kuang2017}:
\begin{enumerate}
    \item SHapley Additive exPlanations (SHAP) and
    \item Integrated Gradients (IG)
\end{enumerate}
The SHapley Additive exPlanations and the Integrated Gradients belong to two different categories of explainable artificial intelligence methods:
\begin{enumerate}
    \item Shapley-value-based algorithms and
    \item gradient-based algorithms
\end{enumerate}
These two categories can be distinguished by two factors: firstly, the assumptions and secondly, the underlying mathematical principles used in the models. Before it is explained how they differ, the fundamentals of each category will be explained.\\
The two fundamentals of the Shapley-value-based algorithms and the gradient-based algorithms are \cite{MichaelJordan2018} \cite{Kuang2017}:
\begin{enumerate}
    \item Shapley Values
    \item The Gradient
\end{enumerate}

\subsection*{Shapley Values}

Let us assume that the algorithm behind an upload filter predicts the price of a painting. For a particular painting, it predicts 300000€, and this prediction will be explained through Shapley values. The painting has an age of 50 years, is part of a private collection, is not mentioned in the literature and is in good shape (Figure \ref{fig:shapley_values_example_01}).\\
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=0.5\linewidth]{photo/23_example_shapley_values_1}}
\caption{A concrete visualization of the features of the painting for which the Shapley values will be calculated. The painting has an age of 50 years, is part of a private collection, is not mentioned in the literature and is in good shape.}
\label{fig:shapley_values_example_01}
\end{figure}
The average prediction for all paintings is 310000€. Now the question is: how much has each feature value contributed to the prediction compared to the average prediction? For that, a quantitative criterion will be used. For a linear regression model, the answer is quite simple: the attribution of each feature is the weight of the feature times the feature value. This works because linear regression is a linear model. So, to make more complex models (e.g.\ deep learning, convolutional neural networks) transparent, a different solution is required.
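For the linear case, this attribution rule can be written down in a few lines. The weights, feature values and intercept in the following sketch are made-up numbers (they are not taken from the painting example) and are only meant to illustrate that, in a linear model, the contribution of each feature is simply its weight times its value.
\begin{lstlisting}[language=Python]
import numpy as np

weights = np.array([2.0, -1.5, 0.5])   # learned coefficients (illustrative)
features = np.array([3.0, 2.0, 4.0])   # feature values of one instance
intercept = 1.0

prediction = intercept + weights @ features
attributions = weights * features      # contribution of each feature

print(prediction)    # 6.0
print(attributions)  # [ 6. -3.  2.]
\end{lstlisting}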
The Shapley value, coined by Shapley (1953), is a method for assigning payouts to players depending on their contribution to the total payout. Players cooperate in a coalition and receive a certain profit from this cooperation \cite{aas2019explaining} \cite{Lundberg} \cite{ScottM}. Because this is not a real game, the ``game'' is the prediction task for a single instance of the dataset. The gain is the actual prediction for this instance minus the average prediction for all instances. The players are the feature values of the instance that collaborate to receive the gain (to predict the price of the painting). In the example, the feature values ``not mentioned in the literature'', ``painting is in good shape'', ``painting is 50 years old'' and ``painting is part of a private collection'' worked together to achieve the prediction of 300000€. Our goal is to explain the difference between the actual prediction (300000€) and the average prediction (310000€): a difference of -10000€. An answer looks like the following:
\begin{itemize}
    \item the ``not mentioned in the literature'' feature contributed 30000€
    \item the ``50 years old'' feature contributed 10000€
    \item the ``part of a private collection'' feature contributed 0€
    \item the ``painting is in good shape'' feature contributed -50000€
\end{itemize}
The contributions add up to -10000€, the final prediction minus the average predicted painting price.
How is the Shapley value for one feature calculated? The Shapley value is the average marginal contribution of a feature value across all possible coalitions.\\
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=0.5\linewidth]{photo/24_example_shapley_values_2}}
\caption{One sample used to calculate the contribution of the ``is in good shape'' feature value to the prediction when it is added to the coalition of age, mentioned in the literature and within a private collection}
\label{fig:shapley_values_example_02}
\end{figure}
In the following, we evaluate the contribution of the ``painting is in good shape'' feature value when it is added to a coalition of ``not mentioned in the literature'' and ``50 years old''. The ``not mentioned in the literature'', ``painting is in good shape'' and ``50 years old'' feature values are taken from the instance to be explained (the painting with a predicted worth of 300000€). The feature value for whether the painting is or is not in a private collection comes from a randomly drawn painting from the same distribution; in this sample, the value ``in private collection'' was thereby replaced from yes to no. If the price of the painting is now predicted again (with this combination), the predicted price is 310000€. In a second step, the ``in good shape'' feature value is dropped from the coalition by replacing it with a random value drawn from the other possible paintings. In the example it is replaced by ``not in good shape'', but it could have been ``in good shape'' again. The prediction of the painting price for the coalition of ``not mentioned in the literature'' and ``50 years old'' is 320000€. The contribution of ``in good shape'' was:
\begin{quote}
\centering
310000€ - 320000€ = -10000€
\end{quote}
This estimate depends on the values of the randomly drawn painting that served as a ``donor'' for the ``shape'' and ``private collection'' feature values. We get better estimates if we repeat this sampling step and average the contributions. Furthermore, the same computation is repeated across all possible coalitions. Because the computation is repeated for every combination, the calculation time increases exponentially with the number of features. One solution to keep the computation time small is to calculate contributions for only a few samples of the possible coalitions.\\
If we estimate the Shapley values for all feature values, we get the complete distribution of the prediction (minus the average) among the feature values.
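The sampling procedure described above can be summarized in a short sketch. The model, the dataset and the instance are placeholders (any model exposing a prediction function would do); the code only illustrates the Monte Carlo estimate of a single Shapley value, not the optimized estimators used by the libraries discussed next.
\begin{lstlisting}[language=Python]
import numpy as np

def estimate_shapley_value(predict, X, x, j, n_samples=1000, seed=None):
    # Monte Carlo estimate of the Shapley value of feature j for instance x.
    # predict: function mapping a feature vector to a prediction
    # X:       dataset used to draw "donor" instances (2D array)
    # x:       the instance to be explained (1D array)
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = X[rng.integers(len(X))]          # random donor instance
        order = rng.permutation(n_features)  # random order of the "players"
        pos = int(np.where(order == j)[0][0])
        in_coalition = np.isin(np.arange(n_features), order[:pos])
        x_without = np.where(in_coalition, x, z)  # coalition values only
        x_with = x_without.copy()
        x_with[j] = x[j]                          # ... plus the feature of interest
        total += predict(x_with) - predict(x_without)  # marginal contribution
    return total / n_samples
\end{lstlisting}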
\subsection*{SHapley Additive exPlanations (SHAP software library)}

If Shapley values are a method which assigns a contribution to each player, a Shapley-value-based explanation method aims to estimate Shapley values which are as close as possible to the exactly calculable ones. This is achieved by randomly dropping out some of the features; potentially any combination of features can be left out. As long as each feature is considered individually, there is an almost infinite number of combinations, and the calculation would take too long. As a countermeasure, pairs of features are put together and subsequently considered as one player, and the individual contribution of such a player is then estimated. Let us look again at the example of the shop owner: if a group of pixels does not contribute to the overall result of the game, the result will not change even if the whole group is dropped out \cite{ScottM} \cite{molnar2019} \cite{Lundberg}.\\
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=0.8\linewidth]{photo/43_shap_library_example.png}}
\caption{The header of the SHAP GitHub repository shows an example of a \gls{mla} being explained through the library. It takes a model with age, sex, bp and bmi as example features and maps an attribution to these values. Each value shows the average marginal contribution of a feature value across all possible coalitions of features.}
\label{fig:shap_library_example}
\end{figure}
Figure \ref{fig:shap_library_example} shows a simple example of how the SHAP library explains the feature importance of a machine learning model. The colours help to get an intuitive understanding of which features are more important: red stands for a significant contribution to the overall prediction, and blue for a less important one.
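A typical use of the library looks roughly like the following sketch. The model and the dataset are illustrative choices (the diabetes data from scikit-learn and an XGBoost regressor, similar to the repository's introductory example), and the exact API may differ between versions of the \texttt{shap} package, so the snippet should be read as an illustration rather than as a reference.
\begin{lstlisting}[language=Python]
import shap
import xgboost
from sklearn.datasets import load_diabetes

# Train a small model on tabular data with named features (age, sex, bmi, bp, ...).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor().fit(X, y)

# TreeExplainer computes (approximate) Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot as in the figure above: the horizontal position of each dot is
# the contribution of a feature to one prediction, the colour its feature value.
shap.summary_plot(shap_values, X)
\end{lstlisting}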
\subsection*{Gradient and Gradient Descent}
\label{sec:gd}

Some theory about the gradient and gradient descent is needed before an explainable artificial intelligence method which is built on top of them can be presented.\\
Gradient descent is possibly the most used optimization algorithm in the field of deep learning and machine learning. Given a cost function as introduced in the section about machine learning, the goal is to find the optimal value for every weight in the model, so that the cost value becomes as small as possible without the model adapting too closely to a specific set of examples. Gradient descent is an algorithm that optimizes the cost value by making changes to the weights. The change is determined by the gradient, which points in the direction of the steepest descent. In other words, it is a value which determines a direction that minimizes the cost in as few steps as possible.
Whereas in the so-called forward propagation the actual cost is calculated, backpropagation uses the chain rule to first obtain the gradient and then adjusts the weights by subtracting the gradient multiplied by the learning rate \cite{Goodfellow-et-al-2016}.\\
This method is quite old, and one of the reasons why it became popular in the last years is the increase in computation power. Only since a few years, a new generation of central processing units and graphics processing units is fast enough to work on such tasks efficiently.\\
Figure \ref{fig:gd} shows the graph of a cost function in three-dimensional space. The line shows the path from a starting point down to a local minimum. It is crucial to know that gradient descent cannot distinguish between a local and a global minimum; for that, additional gradient descent runs with different starting points would be needed. In practice, this is usually not important because it works fine for most cases. For a field where very high accuracy is a must, this kind of machine learning would be the wrong approach anyway; there, a logic-based approach would lead to state-of-the-art results \cite{Russell} \cite{conf/mkm/KohlhaseKMT17}.
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.65\linewidth]{photo/25_gradident_decent}}
\caption{The path to a local minimum, computed using the gradients of the parameters \cite{Gradient47:online}}
\label{fig:gd}
\end{figure}
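One step of this procedure can be written down directly. The quadratic cost function, the starting weights and the learning rate in the following sketch are made-up values chosen only to illustrate the update rule of subtracting the gradient times the learning rate.
\begin{lstlisting}[language=Python]
import numpy as np

def cost(w):
    # Toy quadratic cost with its minimum at w = (1, -2).
    return (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2

def gradient(w):
    # Analytical gradient of the toy cost with respect to the weights.
    return np.array([2.0 * (w[0] - 1.0), 2.0 * (w[1] + 2.0)])

w = np.array([4.0, 3.0])  # arbitrary starting weights
learning_rate = 0.1

for step in range(100):
    w = w - learning_rate * gradient(w)  # the gradient descent update

print(w)        # close to [ 1. -2.]
print(cost(w))  # close to 0
\end{lstlisting}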
\subsection{Integrated Gradient}
\label{subsec:ig}

A gradient-based explanation method tries to explain a given prediction by using the gradient of the output with respect to the input features. The gradient is not only used to optimize the weights of the parameters, but also to see which features have been adjusted the most and are therefore most important.\\
The goal is to know how a decision was made, so that it can be concluded afterwards why this decision was made; for this, the gradient is used again. As already mentioned, the gradient points in the direction of the next local minimum for each parameter. If it is observable which parameters are adjusted more than others, a statement about which parameters are more important than others can be made. Since each parameter is assigned to an input value, it can be concluded how important this value is for reaching the desired minimum as fast as possible. An efficient implementation of a deep learning approach is essential to save computing power. An exact calculation costs an enormous amount of resources, which is why the gradients are not calculated exactly in this step but estimated \cite{molnar2019} \cite{TjoaGuan} \cite{Mukund}.\\
Figure \ref{fig:ig} shows three paths between a baseline ($r_1$, $r_2$) and an input ($s_1$, $s_2$). Path P2, used by integrated gradients, simultaneously moves all features from off to on. Path P1 moves along the edges, turning features on in sequence. Other paths like P1 along different edges correspond to different sequences. SHAP computes the expected attribution over all such edge paths like P1.\\
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.65\linewidth]{photo/26_integradet_gradient.png}}
\caption{Different paths between a baseline and an input, comparing the Integrated Gradient path (P2) with the edge paths used by the SHapley Additive exPlanations \cite{TjoaGuan}}
\label{fig:ig}
\end{figure}
The integrated gradient approach tries to estimate Aumann-Shapley values which are as close as possible to the exactly calculated values.
It works by assuming a straight line from the actual input (e.g.\ the image the shop owner tries to upload to Instagram) to a specific baseline input (e.g.\ a black image). The gradient of the prediction with respect to the input features is integrated along this path \cite{Mukund}.\\
As the input varies along this straight line between the baseline and the input, the prediction moves from uncertainty to certainty (the final result, e.g.\ the image shows the ``Hitlergruß'' with 98\% probability). At each point on this path, the gradient is used to attribute the change in the prediction probability back to the input features. The integrated gradient then aggregates these gradients along the path integral \cite{mudrakarta-etal-2018-model}. Figure \ref{fig:igiu} visualizes how this method looks after applying it to a few examples. The procedure can be summarized in the following steps:
\begin{enumerate}
    \item Choose an image as a baseline (e.g.\ a black image with every pixel set to 0).
    \item Brighten the image step by step until it becomes the actual input again (Figure \ref{fig:step2}).
    \item Compare the path of the output score (moving from uncertainty to certainty) with the path of the images (from the black baseline to the input, Figure \ref{fig:step3}).
    \item Identify where the slope of the score versus intensity graph does not remain stagnant (the interesting gradients).
    \item Overlay the input image such that the interesting gradients become visible in it (Figure \ref{fig:step5}).
\end{enumerate}
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/27_integradet_gradient_step_1}}
\caption{Three different steps along the path from the baseline to the input}
\label{fig:step2}
\end{figure}
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/28_integradet_gradient_step_2}}
\caption{Comparison of the two paths (the output score and the images from the baseline to the input)}
\label{fig:step3}
\end{figure}
\begin{figure}[!htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/29_integradet_gradient_step_3}}
\caption{Result of the Integrated Gradient}
\label{fig:step5}
\end{figure}
As the Integrated Gradient approximates Aumann-Shapley values, the function which produces the prediction must be a piecewise differentiable function of the input features.\\
Because the method should make sensible feature attributions, a suitable baseline is crucial. For example, if a black image is selected as the baseline, the integrated gradient will not attribute importance to a completely black pixel in an actual image. If black pixels are not important, e.g.\ because just the frame of the shop owner's image is black and the rest of the image is bright, this works perfectly fine. In general, the baseline value should both have a near-zero prediction and faithfully represent a complete absence of signal.\\
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.85\linewidth]{photo/30_integradet_gradient_step_4}}
\caption{The integrated gradient applied to a sample of images and visualized to demonstrate which pixels are important (brighter) and which pixels are less important (darker)}
\label{fig:igiu}
\end{figure}
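The path integral can be approximated with a simple Riemann sum. The following sketch uses a toy differentiable model with an analytically known gradient; the weights, the three ``pixel'' values and the number of steps are made-up, and for a real network the gradient would come from the framework's automatic differentiation instead.
\begin{lstlisting}[language=Python]
import numpy as np

def model(x, w):
    # Toy differentiable "network": a logistic score over pixel values.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def model_gradient(x, w):
    # Analytic gradient of the toy model with respect to the input x.
    p = model(x, w)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline, w, steps=50):
    # Riemann-sum approximation of the integrated gradients attribution.
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)  # walk along the path
        total += model_gradient(point, w)
    return (x - baseline) * total / steps

w = np.array([0.5, -1.0, 2.0])   # made-up model weights
x = np.array([0.8, 0.3, 0.9])    # the "input image" (three pixels)
baseline = np.zeros_like(x)      # black image as the baseline

attributions = integrated_gradients(x, baseline, w)
print(attributions)
# The attributions approximately sum to model(x) - model(baseline):
print(attributions.sum(), model(x, w) - model(baseline, w))
\end{lstlisting}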
\subsection{Expected Gradient Approach}
\label{subsec:expected_gradient_approach}

The Expected Gradient approach combines the two methods which were described above. It is useful because the two approaches above produce noisy attributions: while both of them use one single example as a reference value, this approach allows using the whole dataset as the reference, which makes the resulting values easier to interpret. It combines a multitude of ideas from Integrated Gradients and SHapley Additive exPlanations (SHAP), and libraries such as the SHAP library implement some brilliant approximations and sampling schemes. To cover all of their work would fill an entire master thesis with ease, so this part is left out. Important to understand is the result which is provided by such a software library, so that it can be interpreted later in the course of this thesis.
An intuitive way to understand the expected gradient values is the following illustration: the feature values are chefs who enter a hotel kitchen step by step in random order. All chefs (feature values) in the kitchen contribute to the lunch (i.e.\ contribute to the prediction made by the machine learning model). The expected gradient value of a chef corresponds to his share of the grade given for the meal which was prepared in the kitchen. This value is the average change in the grading, where the change is defined as follows: how the grade received by the combination of cooks which are already in the kitchen changes when the particular chef enters. More precisely, it is the average of this change over all different combinations of chefs which can be in the kitchen when the specific chef enters.
Figure \ref{fig:gradient_imagenet_plot} shows how this can be visualized as an overlay on image predictions. In particular, it can be seen which regions of the image are responsible for the predicted probability of being part of a class (red) or not (blue).
\begin{figure}[htp]
\centering
\fbox{\includegraphics[width=0.8\linewidth]{photo/31_gradient_imagenet_plot}}
\caption{SHAP applied to a sample of images and visualized to demonstrate which pixels are important (red) and which pixels are less important (blue) \cite{shaoshan91:online}}
\label{fig:gradient_imagenet_plot}
\end{figure}
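Conceptually, the expected gradient values can be sketched as integrated gradients averaged over baselines drawn from the dataset. The helper below reuses the illustrative \texttt{integrated\_gradients} function from the previous sketch; it only approximates the idea and is not the implementation used inside the SHAP library.
\begin{lstlisting}[language=Python]
import numpy as np

def expected_gradients(x, X_reference, w, n_baselines=100, seed=None):
    # Average the integrated-gradients attributions over baselines drawn
    # from a reference dataset instead of using a single fixed baseline.
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x)
    for _ in range(n_baselines):
        baseline = X_reference[rng.integers(len(X_reference))]
        total += integrated_gradients(x, baseline, w)
    return total / n_baselines
\end{lstlisting}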
\documentclass[a4paper,11pt]{article}
\usepackage{assignment_style}
\usetikzlibrary{patterns}

\newcounter{problem}
\newenvironment{problem}[1][]{%
\refstepcounter{problem}\par \medskip
\noindent \textbf{Problem~\theproblem.#1 \rmfamily}{\medskip}
}

\newenvironment{solution}{
\noindent \textbf{Solution: \medskip}}{}

% To make solutions not visible use the command \excludecomment{solution}
%
%\excludecomment{solution}

\title{Assignment 1 \\ \vspace{2em} \large WSU Economics PhD Math Bootcamp }
\date{}

\begin{document}
\maketitle

\begin{problem}
%Leon application page 23-25
Consider an economy with three sectors: farming, manufacturing and textiles. Each sector produces its goods and then trades (through barter) with the other sectors. Hence the three kinds of goods move between sectors. Let the following table describe the results of trade.
\vspace{1em}
\begin{center}
\begin{tabular}{ c | c c c }\hline\hline
 & F & M & T \\ \hline
F & $\frac{1}{2}$ & $\frac{1}{3}$ & $\frac{1}{2}$ \\
M & $\frac{1}{4}$ & $\frac{1}{3}$ & $\frac{1}{4}$ \\
T & $\frac{1}{4}$ & $\frac{1}{3}$ & $\frac{1}{4}$ \\ \hline
\end{tabular}
\end{center}
The table is interpreted as follows. Let $x_1$ be the total value of farm goods, $x_2$ the total value of manufacturing goods and $x_3$ the total value of textile goods. The first column of the table describes where the farming sector's output goes -- $1/2$ to themselves, $1/4$ to manufacturing and $1/4$ to textiles. The first row describes the value of the farming sector's inputs -- $1/2$ from farming itself, $1/3$ from manufacturing goods and $1/2$ from textile goods. Hence the total value of farm goods is $x_1 = (1/2)x_1 + (1/3)x_2 + (1/2)x_3$. Doing the same for the other sectors gives the system
\begin{align}
x_1 &= \frac{1}{2}x_1 + \frac{1}{3}x_2 + \frac{1}{2}x_3 \nonumber \\
x_2 &= \frac{1}{4}x_1 + \frac{1}{3}x_2 + \frac{1}{4}x_3 \nonumber \\
x_3 &= \frac{1}{4}x_1 + \frac{1}{3}x_2 + \frac{1}{4}x_3 \nonumber
\end{align}
Turn the above system into a homogeneous system and then solve the system to determine the total values of goods $x_1,x_2,x_3$.
\end{problem}

\insblock{Problem 1 Note}{Understand the definition of a homogeneous system of linear equations and apply the appropriate matrix manipulations to solve the system.}

\begin{solution}
The reduced row echelon form for the augmented system is
\begin{align}
\left[ \begin{matrix} 1 & 0 & -\frac{5}{3} & | 0 \nonumber \\ 0 & 1 & -1 & | 0 \nonumber \\ 0 & 0 & 0 & | 0 \nonumber \end{matrix} \right]
\end{align}
There is one free variable, $x_3$. If we let $x_3=3$ then the solution is $(5,3,3)$.
\end{solution}

\begin{problem}
%Leon section 1.2 Problem 5 part (g)
Determine whether the following system is inconsistent. If not inconsistent and no free variables, find the unique solution. If there are free variables find all solutions (describe the set).
\begin{align}
x_1 + x_2 + x_3 + x_4 &= 0 \nonumber \\
2x_1 + 3x_2 -x_3 -x_4 &= 2 \nonumber \\
3x_1 + 2x_2 + x_3 + x_4 &= 5 \nonumber \\
3x_1 + 6x_2 -x_3 - x_4 &= 4 \nonumber
\end{align}
\end{problem}

\insblock{Problem 2 Note}{This problem tests the idea that any linear system has either no solutions, 1 solution or an infinite number of solutions, and you need to apply the appropriate methods to determine the case for the above system and how to describe the solution. The following problem tests the same concepts.}

\begin{solution}
The system is inconsistent.
\end{solution}

\begin{problem}
%Leon section 1.2 Problem 5 part (i)
Determine whether the following system is inconsistent.
If not inconsistent and no free variables, find the unique solution. If there are free variables find all solutions (describe the set). \begin{align} -x_1 + 2x_2 -x_3 &= 2 \nonumber \\ -2x_1 + 2x_2 + x_3 &= 4 \nonumber \\ 3x_1 + 2x_2 + 2x_3 &= 5 \nonumber \\ -3x_1 + 8x_2 + 5x_3 &= 17 \nonumber \end{align} \end{problem} \begin{solution} The system is consistent with no free variables and the unique solution is $(0, 3/2, 1)$. \end{solution} \begin{problem} Find the inverse matrix for the following matrices: \[ (\text{a}) \; \left[\begin{matrix} -1 & 1 \\ 1 & 0 \end{matrix}\right] \qquad (\text{b}) \; \left[\begin{matrix} 2 & 5 \\ 1 & 3 \end{matrix}\right] \qquad (\text{c}) \; \left[\begin{matrix} 2 & 6 \\ 3 & 8 \end{matrix}\right] \qquad (\text{d}) \; \left[\begin{matrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{matrix}\right] \qquad \] \[ (\text{e}) \; \left[\begin{matrix} 2 & 0 & 5 \\ 0 & 3 & 0 \\ 1 & 0 & 3 \end{matrix}\right] \qquad (\text{f}) \; \left[\begin{matrix} -1 & -3 & -3 \\ 2 & 6 & 1 \\ 3 & 8 & 3 \end{matrix}\right] \qquad (\text{g}) \; \left[\begin{matrix} 1 & 0 & 1 \\ -1 & 1 & 1 \\ -1 & -2 & -3 \end{matrix}\right] \] \end{problem} \insblock{Problem 4 Note}{Tests understanding of the definition of an inverse matrix and how to calculate it for $2\times 2$ and $3\times 3$ systems.} \begin{solution} \[ (\text{a}) \; \left[\begin{matrix} 0 & 1 \\ 1 & 1 \end{matrix}\right] \qquad (\text{b}) \; \left[\begin{matrix} 3 & -5 \\ -1 & 2 \end{matrix}\right] \qquad (\text{c}) \; \left[\begin{matrix} -4 & 3 \\ 3/2 & -1 \end{matrix}\right] \qquad (\text{d}) \; \left[\begin{matrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{matrix}\right] \qquad \] \[ (\text{e}) \; \left[\begin{matrix} 3 & 0 & -5 \\ 0 & 1/3 & 0 \\ -1 & 0 & 2 \end{matrix}\right] \qquad (\text{f}) \; \left[\begin{matrix} 2 & -3 & 3 \\ -3/5 & 6/5 & -1 \\ -2/5 & -1/5 & 0 \end{matrix}\right] \qquad (\text{g}) \; \left[\begin{matrix} -1/2 & -1 & -1/2 \\ -2 & -1 & -1 \\ 3/2 & 1 & 1/2 \end{matrix}\right] \qquad \] \end{solution} \begin{problem} %Leon section 2.1 problem 3 (d),(g) Evaluate the following determinants \[ (\text{a}) \; \left|\begin{matrix} 4 & 3 & 0 \\ 3 & 1 & 2 \\ 5 & -1 & -4 \end{matrix}\right| \qquad (\text{b}) \; \left|\begin{matrix} 2 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 6 & 2 & 0 \\ 1 & 1 & -2 & 3 \end{matrix}\right| \] \end{problem} \insblock{Problem 5 Note}{Tests the ability to calculate the determinant of a linear system.} \begin{solution} (a) 58; \; (b) 8 \end{solution} \begin{problem} %Leon Section 2.3 Cramer's rule example page 107 Use Cramer's rule to solve the following system \begin{align} x_1 + 2x_2 + x_3 &=5 \nonumber \\ 2x_1 + 2x_2 + x_3 &= 6 \nonumber \\ x_1 + 2x_2 + 3x_3 &= 9 \nonumber \end{align} \end{problem} \begin{solution} \[ x_1 = \frac{-4}{-4} = 1, \qquad x_2 = \frac{-4}{-4} = 1, \qquad x_3 = \frac{-8}{-4} = 2 \] \end{solution} \begin{problem} Let $A$ and $B$ be $6\times 6$ matrices, with $\det(A) = -10$ and $\det(B) = 5$. Use the properties of determinants to compute \begin{enumerate}[(a)] \item $\det(3A)$ \item $\det(A^{T} B^{-1})$ \end{enumerate} \end{problem} \insblock{Problem 7 Note}{Tests understanding of the properties of determinants as they apply to matrix algebra.} \begin{solution} \begin{enumerate}[(a)] \item $\det(3A) = 3^6 \det(A) = 729(-10) = -7290$. 
\item \begin{align} \det(A^T B^{-1}) &= \det(A^T) \cdot \det(B^{-1}) \nonumber \\ &= \left( \det(A) \right) \frac{1}{\det(B)} \nonumber \\ &= -10 \frac{1}{5} \nonumber \\ &= -2 \nonumber \end{align} \end{enumerate} \end{solution} \begin{problem} Prove that if $A$ is invertible, then $\det(A^{-1}) = \frac{1}{\det(A)}$ \end{problem} \insblock{Problem 8 Note}{Tests understanding of properties of determinants and inverses as well as the idea of mathematical proof.} \begin{solution} By multiplicative properties of the determinant \[ \det(AA^{-1}) = \det(A) \cdot \det(A^{-1}) \] We also know that $AA^{-1} = I_n$, the $n\times n$ identity matrix, and $\det(I_n) = 1$ (this can be seen because $I_n$ is a diagonal matrix). So we have \[ \det(A)\cdot \det(A^{-1}) = 1 \] Solving for $\det(A)$, we get $\det(A^{-1}) = \frac{1}{\det(A)}$ where is it okay to divide by $\det(A)$ because we know that $\det(A) \neq 0$ because $A$ is invertible. \qedsymbol \end{solution} \begin{problem} Let $A$ be a $3\times 3$ matrix with $\det(A) = 5$. Find each of the following if possible. \begin{enumerate}[(a)] \item $\det(A^T)$ \item $\det(A + I)$ \item $\det(2A)$. \end{enumerate} \end{problem} \insblock{Problem 9 Note}{Tests understanding of matrix algebra and its effects on the value of a determinant of a linear system. Also, remember that one reason we care about determinants is because they can often tell us something about the solution space to a linear system.} \begin{solution} \begin{enumerate}[(a)] \item $\det(A^T) = \det(A) = 5$ \item There is not enough information \item $\det(2A) = 2^3 \det(A) = 40$. \end{enumerate} \end{solution} \begin{problem} What are the dimensions of the following subsets of $\mathbb{R}^3$? \begin{enumerate}[(i)] \item The origin? \item A line through the origin? \item A plane which passes through the origin? \end{enumerate} \end{problem} \insblock{Note}{Tests understanding of the idea of linear spaces and their dimension -- Hint: linear spaces are the \textit{span} of a set of \textit{basis vectors}.} \begin{solution} \begin{enumerate}[(i)] \item The origin has dimension zero. Note that $\alpha(0,0,0) = (0,0,0)$ is true for all $\alpha \in \mathbb{R}$ so its always linearly dependent. \item A line through the origin has dimension 1. \item A plane has dimension 2. \end{enumerate} \end{solution} \begin{problem} %Duke Mathcamp notes problem 1.23 and 1.24 For $\mathbf{a},\mathbf{x} \in \mathbb{R}^n$, consider the equation $\mathbf{a} \cdot \mathbf{x} = 0$ and its solution set $X(\mathbf{a}) = \{ \mathbf{x} \in \mathbb{R}^n \; | \; \mathbf{a} \cdot \mathbf{x} = 0 \}$. \begin{enumerate}[(i)] \item Show that $X(\mathbf{a})$ is a linear subspace. \item Show the dimension of $X(\mathbf{a})$. \end{enumerate} \end{problem} \insblock{Problem 11 Note}{Tests understanding of \textit{null spaces} of linear functions (matrices). Null spaces can often represent the space of solutions to an economic problem. It also tests your understanding of a linear subspace and how to show a set is one as well as identify its dimension.} \begin{solution} \begin{enumerate}[(i)] \item If $x,x' \in X(a)$ then $a\cdot x = a\cdot x' = 0$. Hence $a\cdot x + a\cdot x' = a\cdot(x + x') = 0$ and this implies that $x+x' \in X(a)$. Again let $x \in X(a)$ then $a\cdot x = 0$ and $\alpha a\cdot x = 0$ as well. Moving the constant we see that $a \cdot (\alpha x) = 0$ which means that $\alpha x \in X(a)$. \item Clearly the dimension of $\mathbb{R}^n$ is $n$. The vector $a$ represents a linear mapping with one row. 
The linear span of $a$ is contained in a subspace with dimension 1. The set $X(a)$ contains all the vectors which are orthogonal to $a$. By the rank nullity theorem we know that $\text{rank}(f) + Null(f) = dim(\mathbb{R}^n)$. Hence the dimension of the null space $X(a)$ is $dim(\mathbb{R}^n) - \text{rank}(f) = n - 1$. Another way to look at this is by using what you found in Problem 1 and recognizing that $X(a)$ is a hyperplane in $\mathbb{R}^n$ passing through the origin. In Problem 1, you found that a plane in $\mathbb{R}^3$ passing through the origin had dimension $3-1 = 2$. Well a hyperplane is just a plane, so if $X(a)$ is a hyperplane through the origin in $\mathbb{R}^n$ it must have dimension $n-1$. \end{enumerate} \end{solution} \paragraph{Some Definitions} The following definitions describe the ``greater than or equal to'' type of ordering on $\mathbb{R}^n$ that will be important for answering Problem 3. \begin{definition} Let $\mathbf{x}$ and $\mathbf{y}$ be vectors in $\mathbb{R}^n$. We define the following relations: \begin{itemize} \item $\mathbf{x} = \mathbf{y}$ iff $x_i = y_i$ for all $i=1,2,\dots, n$. \item $\mathbf{x} \geq \mathbf{y}$ iff $x_i \geq y_i$ for all $i=1,2,\dots, n$. \item $\mathbf{x} > \mathbf{y}$ iff $x_i \geq y_i$ and $x\neq y$ -- meaning there is at least one element $j$ such that $x_j > y_j$. \item $\mathbf{x} \gg \mathbf{y}$ iff $x_i > y_i$ for all $i=1,2,\dots, n$. \end{itemize} \end{definition} \begin{problem} Suppose $\mathbf{a},\mathbf{x},\mathbf{y} \in \mathbb{R}^n$ and $\mathbf{a}\cdot \mathbf{x} > \mathbf{a} \cdot \mathbf{y}$. Does it follow that $x > y$? [Hint: do not divide both sides by $\mathbf{a}$] \end{problem} \insblock{Problem 12 Notes}{Tests understanding of linear functions (functionals in particular) and how they can \textit{order} the vectors in a linear subspace. It also introduces and provides practice with vector orderings that come up relatively frequently. Also note that the linear function created by $\mathbf{a}\cdot \mathbf{x}$ for $\mathbf{x}\in X$ two \textit{half-spaces} are created that order some vectors ``above'' others.} \begin{solution}\\ Let $\mathbf{a},\mathbf{x},\mathbf{y} \in \mathbb{R}^n$ (note these are vectors) and let $\mathbf{a} \cdot \mathbf{x} > \mathbf{a} \cdot \mathbf{y}$. Note that both $\mathbf{a}\cdot \mathbf{x}$ and $\mathbf{a}\cdot \mathbf{y}$ are real numbers so the $>$ here refers to the ordering on the real line, while the comparison we want to investigate $\mathbf{x} > \mathbf{y}$ is an ordering on vectors (as defined before the problem). Lets consider the value $r_y = \mathbf{a} \cdot \mathbf{y}$. Consider the hyperplane $X(\mathbf{a},r_y) = \{ \mathbf{z} \in \mathbb{R}^n: \mathbf{a} \cdot \mathbf{z} = r_y \}$ and the ``upper'' half-space $X^+(\mathbf{a},r_y) = \{ \mathbf{z} \in \mathbb{R}^n: \mathbf{a} \cdot \mathbf{z} \geq r_y \}$. Now we know that $\mathbf{a}\cdot \mathbf{x} > r_y$ so we can be sure $\mathbf{x} \in X^+(\mathbf{a},r_y)$. Let $Y = \{ \mathbf{z} \in \mathbb{R}^n: z_i \geq y_i \; \forall i = 1, \dots, n\}$ be the set of all vectors $\mathbf{z}$ such that $\mathbf{z} \geq \mathbf{y}$. When $Y \subsetneq X^+(\mathbf{a},r_y)$ then its possible for $\mathbf{a}\cdot \mathbf{x} > \mathbf{a}\cdot \mathbf{y}$ and yet $\neg (\mathbf{x} > \mathbf{y})$. Consider the two-dimensional graph below. Each of the points $\mathbf{x},\mathbf{x}',\mathbf{x}''$ are contained in $X^+(\mathbf{a},r_y)$ yet are not in $Y$. 
So its possible for $\mathbf{a}\cdot \mathbf{x} > \mathbf{a}\cdot \mathbf{y}$ and yet $\mathbf{x} \leq \mathbf{y}$. \begin{figure}[htbp] \centering \caption{Hyperplanes, Half-spaces and Vector Ordering} \begin{tikzpicture} \draw[<->, thick] (0,4) -- (0,-2); \draw[<->, thick] (5,0) -- (-2,0); \node[left] at (0,-1.5) {$x_2$}; \node[above] at (-1.5,0) {$x_1$}; \draw (1.3,1.3) node[anchor=south east] {$\mathbf{a}$}; \draw[fill=black] (1.5,0.5) circle[radius=1.5pt]; \draw (1.5,0.5) node[above right] {$\mathbf{y}$}; \draw[fill=black] (1.3,1.3) circle[radius=1.5pt]; \draw[arrows=->,thick] (0,0) -- (1.25,1.25); \draw[fill=gray, draw=gray, opacity=0.4] (1.5,4) -- (1.5,0.5) -- (5,0.5) -- (5,4); \draw[dashed, thin] (1.5,0.5) -- (1.5,4); \draw[dashed, thin] (1.5,0.5) -- (5,0.5); \node[above] at (3.3,1.5) {$\{ \mathbf{z} \in \mathbb{R}^2: \mathbf{z} > \mathbf{y}\}$}; \draw[very thick] (-2,4) -- (1,1) -- (4,-2); \node[right] at (2.95,-0.5) {$X^+(\mathbf{a},r_y)$}; \node[left] at (2,-1.5) {$X(\mathbf{a},r_y)$}; \draw[->] (2,-1.5) -- (3,-1); \draw[pattern=north west lines, pattern color=gray, opacity=0.7,draw=none] (-2,4) -- (1,1) -- (4,-2) -- (5,-2) -- (5,4); \draw[fill=black] (-1,3.5) circle[radius=1.5pt] node[right] {$\mathbf{x}$}; \draw[fill=black] (1.3,3) circle[radius=1.5pt] node[left] {$\mathbf{x}'$}; \draw[fill=black] (4.2,-1.5) circle[radius=1.5pt] node[right] {$\mathbf{x}''$}; \draw (0.8,1.2) -- (0.6,1) -- (0.8,0.8); \end{tikzpicture} \end{figure} \end{solution} \begin{problem} Let $X$ be a vector space and consider a function $f:X \rightarrow \mathbb{R}$ defined for some $\mathbf{a} \in X$ defined as $f_a(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x}$. \begin{enumerate}[(i)] \item Prove that $f_a(\mathbf{x}) = \mathbf{a}\cdot \mathbf{x}$ is a linear function. \item Let $X^* = \{ f:X \rightarrow R \; | \; f \text{ is linear}\}$ be the set of all linear functions from $X$ into $\mathbb{R}$. Prove that for all $f \in X^*$ there exists an $\mathbf{a}\in X$ such that $f(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x}$. \item (\textbf{Optional}) Define function addition as $f+g = f(\mathbf{x})+g(\mathbf{x})$ and function scaling as $\alpha f = \alpha f(\mathbf{x})$ over the set $X^*$. Prove or disprove the following statement: $X^*$ \textit{with the defined operations is a linear vector space}. \item (\textbf{Optional}) What is dimension of $X^*$? \end{enumerate} \end{problem} \insblock{Problem 13 Note}{Tests understanding of the definition of a linear function (mapping, transformation) and how to use it. Tests understanding of linear functionals and how they can be represented by vectors and the (inner) dot product. The optional parts are good practice in applying the axioms of abstract vector spaces and identifying dimension.} \begin{solution} \textbf{and hints...} \begin{enumerate}[(i)] \item So for this problem just assume that $X = \mathbb{R}^n$. Then $a\cdot x = a_1 x_1 + \cdots + a_n x_n$. Given two vectors $x,y \in \mathbb{R}^n$ and scalars $\alpha, \beta \in \mathbb{R}$ we have \begin{align} a \cdot (\alpha x + \beta y) &= a\cdot (\alpha x) + a\cdot (\beta y) \nonumber \\ &= \alpha(a\cdot x) + \beta (a\cdot y) \nonumber \end{align} therefore its a linear map. \item For part (ii), this is basically the proof that linear transformations between real vector spaces can be represented by real valued matrices. In this case the matrices will all be $n\times 1$. Consider a function $f\in X^*$. By definition $f$ is a linear map from $X = \mathbb{R}^n$ to $\mathbb{R}$. 
Because $X$ is a linear subspace and $\mathbb{R}$ is a linear subspace and $f$ is a linear map, we know that $f(X)$ is a linear subspace in $\mathbb{R}$. Because $X$ and $f(X)$ are subspaces they have a basis. Let $(\mathbf{e}_1,\dots, \mathbf{e}_n)$ be the coordinate vectors for $\mathbb{R}^n$ (which are a basis for $X$). Then for any $x\in X$ we can write $\mathbf{x} = x_1 \mathbf{e}_1 + \cdots + x_n \mathbf{e}_n$ (a linear combination of the basis vectors). Applying our function $f$ to $x$ gives us
\begin{align}
f(\mathbf{x}) &= f( x_1 \mathbf{e}_1 + \cdots + x_n \mathbf{e}_n ) \nonumber \\
&= x_1 f(\mathbf{e}_1) + \cdots + x_n f(\mathbf{e}_n) \nonumber
\end{align}
Since $\mathbf{e}_i$, for $i=1,\dots,n$, is a vector we know $f(\mathbf{e}_i) \in \mathbb{R}$. For all $i = 1,\dots, n$ define $f(\mathbf{e}_i) = a_i$. Then
\begin{align}
f(\mathbf{x}) &= x_1 a_1 + \cdots + x_n a_n \nonumber \\
&= a_1 x_1 + \cdots + a_n x_n \nonumber \\
&= \mathbf{a} \cdot \mathbf{x} \nonumber
\end{align}
so we can define $\mathbf{a} = (f(\mathbf{e}_1), \dots, f(\mathbf{e}_n))$.
\item For part (iii), you should use the defined operations to show that the vector space axioms hold for this set so that it is a vector space. The key operations are that if $f$ and $g$ are linear functions in the set, then these will be our vectors. So $f + g$ is defined as $f(x) + g(x)$ where $x$ is a vector in $X$. If $\alpha$ is a real number, then we scale the vectors in $X^*$ as $\alpha f = \alpha f(x)$.

Let $\mathbf{u},\mathbf{v},\mathbf{w} \in V$ where $V$ is a vector space. Let $c,d \in \mathbb{R}$. Then the following are the axioms of an abstract vector space.
\begin{enumerate}[1.]
\item $\mathbf{u} + \mathbf{v} \in V$
\item $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
\item $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$
\item $\exists \mathbf{0} \in V$ such that $\forall \mathbf{u} \in V, \; \mathbf{u} + \mathbf{0} = \mathbf{u}$
\item $\forall \mathbf{u} \in V, \; \exists -\mathbf{u} \in V$ such that $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$
\item $\forall c \in \mathbb{R}$, $c\mathbf{u} \in V$.
\item $c(\mathbf{u} + \mathbf{v}) = c \mathbf{u} + c \mathbf{v}$
\item $(c+d)\mathbf{u} = c\mathbf{u} + d \mathbf{u}$
\item $(cd)\mathbf{u} = c(d\mathbf{u})$
\item $1\mathbf{u} = \mathbf{u}$
\end{enumerate}
All the axioms should be tested.
\item For part (iv) the idea is to use what we just learned in part (ii), that every linear functional $f \in X^*$ can be uniquely determined by a vector $\mathbf{a} = (f(\mathbf{e}_1),\dots,f(\mathbf{e}_n))$. We can think about finding a set of such $\mathbf{a}$ vectors. For example, suppose we have a set of $n$ vectors, say the coordinate vectors again, $(\mathbf{e}_1,\dots, \mathbf{e}_n)$; does each of these represent its own linear functional? Yes. Now that we are calling them functionals, let's represent the set as $\{f_1,\dots, f_n\}$. At this point we establish that any linear functional is just a linear combination of these $n$ linear functionals and that the set is linearly independent. Once you've done that you will have shown that a set of $n$ vectors is a basis for $X^*$ and therefore $\dim\left(X^*\right) = n$.
\end{enumerate}
\end{solution}

\begin{problem}
Prove that the set $Z$ is a subspace of $\mathbb{R}^3$.
\[ Z = \left\{ [x_1,x_2,x_3] \; | \; 4x_1 - x_2 + 5x_3 = 0 \right\} \] \end{problem} \insblock{Problem 14 Note}{Tests application of the definition of subspaces.} \begin{solution} Let $a = [4,-1,5]$ and note that the equation in the definition of $Z$ is $4x_1 - x_2 + 5x_3 = a \cdot x$. Hence we have the linear function $L(x) = a\cdot x$ and $a\cdot x = 0$ represents a homogenous system of equations. The set $Z$ represents the set of solutions $\{x \in \mathbb{R}^3 | a\cdot x = 0\}$ which is also the null space of the linear function $L(x) = a\cdot x$. The null space of any linear function is a subspace. \qedsymbol \end{solution} \begin{problem} %Leon Ch 4 Section 1 # 17 Determine the null space and range of each of the following linear operators on $\mathbb{R}^3$. \begin{enumerate}[(i)] \item $L(\mathbf{x}) = (x_3,x_2,x_1)^T$ \item $L(\mathbf{x}) = (x_1,x_1,x_1)^T$ \end{enumerate} \end{problem} \insblock{Problem 15 Note}{Tests understanding of the definition and concept of null space (kernal) and range (column space) of a linear operator (matrix).} \begin{solution} \begin{enumerate}[(i)] \item $Null(L) = \{\mathbf{0}\}$ and $L(\mathbb{R}^3) = \mathbb{R}^3$. \item $Null(L) = \text{Span}(\mathbf{e}_2,\mathbf{e}_3)$ and $L(\mathbb{R}^3) = \text{Span}(\mathbf{e}_1)$. \end{enumerate} \end{solution} \begin{definition} Let $X$ be a vector space and let $a \in X$ and $b \in \mathbb{R}$. A \textbf{hyperplane} in $X$ is a set of the form $H_a(b) = \{ x \in X: a\cdot x = b\}$ and associated with hyperplane $X$ are two \textbf{half-spaces} $H_a^{\geq}(b) = \{x\in X: a\cdot x \geq b \}$ and $H_a^{\leq}(b) = \{x\in X: a\cdot x \leq b\}$. \end{definition} \begin{problem} (\textbf{Optional}) What are the range of angles between vectors in $x \in H_a^{\leq}(0)$ and the vector $a$? What are the ranges of the angles between vectors $x\in H_a^{\geq}(0)$ and the vector $a$? \end{problem} \insblock{Problem 16 Note}{Tests knowledge and understanding of the Cauchy-Schwartz inequality, the dot product and their relationship to the angle between two vectors in a vector space. It connects this concept to that of half-spaces.} \begin{solution} For $H_a^{\geq}(0)$ the range on the angles between vectors $x$ and the vector $a$ are $\theta = [0,90]\cup [270,360]$ and the range on the angles between vectors $x$ in $H_a^{\leq}(0)$ and $a$ are $\theta = [90,270]$. \end{solution} \begin{problem} %Simon & Blume exercise 27.19 Which of the following are \emph{subspaces} of the vector space $M_{2,2}$ of $2\times 2$ matrices? Justify your answer. \begin{enumerate}[(i)] \item the set of $2\times 2$ real symmetric matrices. \item the set of $2\times 2$ real diagonal matrices. \item the set of $2\times 2$ real ``singular'' matrices (remember $M$ is singular if $det(M) = 0$). \item the zero matrix. 
\item the set of all $2\times 2$ nonsingular matrices \end{enumerate} \end{problem} \insblock{Problem 17 Note}{More practice applying definition of subspaces to spaces of specific matrices to show which are subspaces.} \begin{solution} \begin{enumerate}[(i)] \item \begin{align} \alpha \left[\begin{matrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{matrix}\right] + \beta \left[\begin{matrix} b_{11} & b_{12} \\ b_{12} & b_{22} \end{matrix}\right] = \left[\begin{matrix} \alpha a_{11} + \beta b_{11} & \alpha a_{12} + \beta b_{12} \\ \alpha a_{12} + \beta b_{12} & \alpha a_{22} + \beta b_{22} \end{matrix}\right] \nonumber \end{align} \item \begin{align} \alpha \left[\begin{matrix} a_{11} & 0 \\ 0 & a_{22} \end{matrix}\right] + \beta \left[\begin{matrix} b_{11} & 0 \\ 0 & b_{22} \end{matrix}\right] = \left[\begin{matrix} \alpha a_{11} + \beta b_{11} & 0 \\ 0 & \alpha a_{22} + \beta b_{22} \end{matrix}\right] \nonumber \end{align} \item This space is not a subspace. Consider the matrices \[ A = \left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right] \qquad B = \left[\begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix}\right] \] The $\det A = 0$ and $\det B = 0$, but $\det(A + B) = \det(I) = 1 \neq 0$. \item Yes it is a subspace as any linear combination of it is the zero matrix. \item Note that $\det I = 1 \neq 0$ and $\det(-I) = (-1)^2 \det I = 1 \neq 0$ but $I + (-I) = 0$ and the zero matrix is not invertible. \end{enumerate} \end{solution} \begin{problem} Let $f:\mathbb{R}^n \rightarrow \mathbb{R}^m$ be a linear function such that for all $y \in \mathbb{R}^m$ the set $\{x \in \mathbb{R}^n : f(x)=y\}$ is a singleton. \begin{enumerate}[(i)] \item Is the linear function $f$ invertible? \item If $f$ can be represented by matrix $A$, then show $A$ is invertible if and only if $\text{rank}(A) = m = n$. \item Suppose that an $n\times n$ matrix $A$ is invertible. Does it follow that $[Ax = 0] \implies [x = 0]$? \item Suppose that $A$ is an $n \times n$ matrix and that $[Ax=0]\implies [x=0]$. Does it follow that $A$ is invertible? \end{enumerate} \end{problem} \insblock{Problem 18 Note}{Tests understanding of the requirements for the existence of an inverse function and its relationship to matrix inverses in the case of linear functions. These concepts are then tied to some fundamental properties about the existence and uniqueness of the solution to a system of linear equations.} \begin{solution} \begin{enumerate}[(i)] \item Yes, the function is a bijection and so the inverse correspondence will be a function. \item ($\Leftarrow$) Suppose that $\text{rank}(A)=m=n$ and assume to the contrary that $A$ is not invertible. Then there must exist $y,x,x' \in \mathbb{R}^n$ with $x\neq x'$ such that $y=Ax=Ax'$. But this means that $A(x-x')=0$ with $(x-x')\neq 0$ and thus the columns of $A$ are linearly dependent. This means there can be, at most, $n-1$ linearly independent columns making the rank of $A$ less than $n$ - which is a contradiction. ($\Rightarrow$) For the other direction, we can use proof by contrapositive. So assume $\text{rank}(A) \neq n$ and attempt to show that this implies $A$ is not invertible. Since $\text{rank}(A) \leq \dim \mathbb{R}^n = n$ we know $\text{rank}(A) < n$. Which means that the number of linearly independent columns of $A$ is less than $n$. Let $\mathbf{a}_1, \dots, \mathbf{a}_n$ represent the $n$ columns of $A$. 
Then $\exists \boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_n) \neq \mathbf{0}$ such that $\alpha_1 \mathbf{a}_1 + \cdots + \alpha_n \mathbf{a}_n = \mathbf{0}$ which is equivalent $A \boldsymbol{\alpha} = \mathbf{0}$ so $Null(A)$ contains more elements than just the zero vector $\mathbf{0}$. Since multiple elements are mapped to the same vector, the function $f$ represented by $A$ is not injective and therefore not bijective and thus not invertible. \qedsymbol \item Let $A$ be invertible and for some vector $\mathbf{x}$ suppose $A\mathbf{x} = \mathbf{0}$. Then \begin{align} A\mathbf{x} &= \mathbf{0} \nonumber \\ A^{-1} A \mathbf{x} &= A^{-1} \mathbf{0} \nonumber \\ I_{n} \mathbf{x} &= \mathbf{0} \nonumber \\ \mathbf{x} &= \mathbf{0} \nonumber \end{align} which establishes the result. \qedsymbol \item Let $A$ be an $n\times n$ matrix such that $Null(A) = \{\mathbf{0}\}$. Let $\mathbf{u} \neq \mathbf{v}$ be two nonzero vectors. Suppose $A \mathbf{u} = A \mathbf{v}$ then $A \mathbf{u} - A \mathbf{v} = \mathbf{0}$ and finally $A(\mathbf{u} - \mathbf{v}) = \mathbf{0}$. But $(\mathbf{u}-\mathbf{v})\neq \mathbf{0}$ which is impossible since only $\mathbf{0}$ is in the null space of $A$. This contradiction implies that $A$ is injective. By the rank nullity theorem we have $\dim \mathbb{R}^n = \dim Null(A) + \dim \text{img}(A)$ and since $\dim Null(A) = 0$ we have $n = 0 + \dim \text{img}(A)$ which implies $\dim \text{img}(A) = n$. This means the image of $A$ ( or of $f$) fills all of $\mathbb{R}^n$ and $A$ is surjective. Because it is injective and surjective $A$ is invertible. \qedsymbol \end{enumerate} \end{solution} \begin{problem} Let $y\in \mathbb{R}^n$ be a \textbf{netput} vector where each element $y_i$ for $i=1,\dots, n$ is a commodity. If $y_i < 0$ then $y_i$ is an input into a production process. If $y_i > 0$ then $y_i$ is an output of the production process. Let $F:\mathbb{R}^n \rightarrow \mathbb{R}$ be a \emph{transformation function} and we define the set $Y =\{y\in \mathbb{R}^n : F(y) \leq 0 \}$ and we'll call $Y$ a technology. The technology $Y$ describes all the feasible production plans a firm can choose (i.e., combinations of feasible inputs and outputs). We will assume that $Y$ is a convex set. Now let $p\in \mathbb{R}^n$ where $p \gg 0$ be vector prices for the $n$ commodities in any bundle $y\in Y$. Note that for fixed $p$, we can define a function $T_p(y) = p\cdot y$ for all $y \in Y$. The function $T_p(y)$ represents the profit (revenue minus costs) of choosing production plan $y$ when input and output prices are given by $p$. Firms want to choose some feasible production plan $y^*$ that satisfies $T_p(y^*) = \max\{p\cdot y: y\in Y\}=\pi(p)$. \begin{enumerate}[(i)] \item If $\pi(p)$ is the maximum achievable profit for firms given technology $Y$ and prices $p$ the profit-maximizing production plans are $\{ y \in Y: p\cdot y = \pi(p) \}$. Now consider the set $\{y \in \mathbb{R}^n: p\cdot y = \pi(p)\}$. Is this a hyperplane? If so, show that $Y$ is contained in one of the half-spaces of $p\cdot y = \pi(p)$ and specify which one (i.e., upper or lower). \item Does the set $\{y\in \mathbb{R}^n: p\cdot y = \pi(p)\}$ ``touch'' the technology $Y$? i.e, is it the case that \[ \min_{y\in Y}\; | p\cdot y - \pi(p)| = 0 \] If so, what does this say about the value of $F(y^*)$ for any $y^* \in \{y\in Y: p\cdot y = \pi(p)\}$? 
\item \textbf{Optional} (Duality) Consider the sets of the form $A(p) = \{ y \in \mathbb{R}^n : p\cdot y \leq \pi(p)\}$ and we create a collection of sets $\{ A(p) \subset \mathbb{R}^n : p \in \mathbb{R}^n\}$. Show that the following equality is true. \[ Y = \bigcap_{p\in \mathbb{R}^n} A(p) \] \end{enumerate} \end{problem} \insblock{Problem 19 Note}{Application of key concepts you've been working on. In particular the idea of the support function (which is a linear functional) and how it relates to the profit to the firm of a particular production plan. This function creates half-spaces and the hyperplane can be tangent to a curve representing optimal points. If this problem is difficult don't worry it is some of the more difficult concepts you'll see in the fall micro class.} \begin{solution} See the hints I provided through email. \end{solution} \end{document}
\documentclass[onecolumn,unpublished]{quantumarticle} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{amsfonts} \usepackage[caption=false]{subfig} \usepackage[colorlinks]{hyperref} \usepackage[all]{hypcap} \usepackage{tikz} \usepackage{color,soul} \usepackage[utf8]{inputenc} \usepackage{capt-of} \usepackage{multirow} \usepackage{tabularx} \usepackage{multicol} \usepackage[numbers]{natbib} \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \theoremstyle{definition} \newtheorem{lemma}{Lemma}[section] \newcommand{\hide}[1]{} % Default fixed font does not support bold face \DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold \DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12} % for normal % Custom colors \usepackage{color} \definecolor{deepblue}{rgb}{0,0,0.5} \definecolor{deepred}{rgb}{0.6,0,0} \definecolor{deepgreen}{rgb}{0,0.5,0} \usepackage{listings} % Python style for highlighting \newcommand\pythonstyle{\lstset{ language=Python, basicstyle=\ttfamily, otherkeywords={self,controlled_by,with,Mod,Quint,LookupTable,assert}, keywordstyle=\bfseries\color{deepblue}, commentstyle=\color{gray}, emph={measure,__init__}, emphstyle=\bfseries\color{deepblue}, stringstyle=\color{deepgreen}, showstringspaces=false }} % Python environment \lstnewenvironment{python}[1][] { \pythonstyle \lstset{#1} } {} \renewcommand\thesection{\arabic{section}} \newcommand{\eq}[1]{\hyperref[eq:#1]{Equation~\ref*{eq:#1}}} \renewcommand{\sec}[1]{\hyperref[sec:#1]{Section~\ref*{sec:#1}}} \DeclareRobustCommand{\app}[1]{\hyperref[app:#1]{Appendix~\ref*{app:#1}}} \newcommand{\fig}[1]{\hyperref[fig:#1]{Figure~\ref*{fig:#1}}} \newcommand{\tbl}[1]{\hyperref[tbl:#1]{Table~\ref*{tbl:#1}}} \newcommand{\theoremref}[1]{\hyperref[theorem:#1]{Theorem~\ref*{theorem:#1}}} \newcommand{\definitionref}[1]{\hyperref[definition:#1]{Theorem~\ref*{definition:#1}}} \newcommand{\cgate}[1]{*+<.6em>{#1} \POS ="i","i"+UR;"i"+UL **\dir{-};"i"+DL **\dir{-};"i"+DR **\dir{-};"i"+UR **\dir{-},"i" \cw} \newcommand{\igate}[1]{*+<.6em>{#1} \POS ="i","i"+UR;"i"+UL **\dir{-};"i"+DL **\dir{-};"i"+DR **\dir{-};"i"+UR **\dir{-},"i"} \newcommand{\imultigate}[2]{*+<1em,.9em>{\hphantom{#2}} \POS [0,0]="i",[0,0].[#1,0]="e",!C *{#2},"e"+UR;"e"+UL **\dir{-};"e"+DL **\dir{-};"e"+DR **\dir{-};"e"+UR **\dir{-},"i"} \newcommand{\ighost}[1]{*+<1em,.9em>{\hphantom{#1}}} \newcommand{\lenexp}{{n_e}} \newcommand{\devoff}{{\delta_{\text{off}}}} \newcommand{\gexp}{{g_{\text{exp}}}} \newcommand{\gmul}{{g_{\text{mul}}}} \newcommand{\gsep}{{g_{\text{sep}}}} \newcommand{\gpad}{{g_{\text{pad}}}} \newcommand{\distone}{{d_1}} \newcommand{\disttwo}{{d_2}} \newcommand{\productreg}{x} \newcommand{\workreg}{y} \newcommand{\pluseq}{\mathrel{+}=} \newcommand{\minuseq}{\mathrel{-}=} \newcommand{\timeseq}{\mathrel{\ast}=} \input{qcircuit} \begin{document} \title{Windowed quantum arithmetic} \date{\today} \author{Craig Gidney} \email{[email protected]} \affiliation{Google Inc., Santa Barbara, California 93117, USA} \begin{abstract} We demonstrate a technique for optimizing quantum circuits that is analogous to classical windowing. Specifically, we show that small table lookups can allow control qubits to be iterated in groups instead of individually. 
We present various windowed quantum arithmetic circuits, including a windowed modular exponentiation with nested windowed modular multiplications, which have lower Toffoli counts than previous work at register sizes ranging from tens of qubits to thousands of qubits. \end{abstract} \maketitle \section{Introduction} \label{sec:introduction} In classical computing, a widely used technique for reducing operation counts is ``windowing"; merging operations together by using lookup tables. For example, fast software implementations of CRC parity check codes process multiple bits at a time using precomputed tables \cite{perez1983crcbyte}. In this paper, we show that windowing is also useful in quantum computing. There are situations where several controlled operations can be merged into a single operation acting on a value produced by a small QROM lookup \cite{babbush2018} (hereafter just ``table lookup"). A simple example of a quantum windowing optimization is, when starting a modular exponentiation, look up the final result of the first twenty iterations of the repeated squaring process instead of actually performing those iterations. At first glance, this may seem like a bad idea. This optimization saves twenty multiplications, but generating the lookup circuit is going to require classically computing all $2^{20}$ possible results and so will take millions of multiplications. But physical qubits are noisy \cite{schroeder2009dram,Bare13,Kim2014}, and quantum error correction is expensive \cite{fowler2012surfacecodereview, campbell2018constraintsatisfaction}. Unless huge advances are made on these problems, fault tolerant quantum computers will be at least a billion times less efficient than classical computers on an operation-by-operation basis. Given the current state of the art, trading twenty quantum multiplications for a measly million classical multiplications is a great deal. The cost of performing the quantum lookup is far more significant than the classical multiplications. In this paper, we present several other examples of using table lookups to reduce the Toffoli count of operations. The content is organized as follows. In \sec{background}, we review background information on performing table lookups over classical data addressed by quantum data. In \sec{results} we present our methods and results. We provide pseudo code of constructions using table lookups to accelerate several quantum arithmetic tasks related to multiplications involving classical constants. We compare the cost of windowed multiplication, schoolbook multiplication, and Karatsuba multiplication routines. We also show how to construct nested lookup optimizations, by performing windowed multiplications inside a windowed exponentiation. Finally, we summarize our contributions in \sec{conclusion}. \begin{figure}[h!] \centering \resizebox{\linewidth}{!}{ \includegraphics{assets/muladd-toffolis.png} } \resizebox{\linewidth}{!}{ \includegraphics{assets/muladd-space.png} } \caption{ \label{fig:product-add-costs} Log-log plots of amortized costs of performing the operation $x \pluseq ky$ where $k$ is a classical constant using various constructions. Compares windowed multiplication, multiplication via iteration over the classical constant (schoolbook multiplication), and the Karatsuba multiplication procedure from \cite{gidney2019karatsuba} which uses a word size of 32. We implemented each of the three methods in Q\# (see ancillary files), and extracted costs using Q\#'s tracing simulator. 
Windowed arithmetic uses slightly more space but has a lower Toffoli count at sizes relevant in practice. Karatsuba multiplication will eventually have a lower Toffoli count as the problem size increases, but currently the space overhead is substantial. } \end{figure} \section{Background: table lookups} \label{sec:background} A table lookup is an operation that retrieves data from a classical table addressed by a quantum register. It performs the operation $\sum_{j=0}^{L-1} |j\rangle|0\rangle \rightarrow \sum_{j=0}^{L-1} |j\rangle|T_j\rangle$, where $T$ is a classically precomputed table with $L$ entries. In \fig{lookup} we relay a construction that performs this operation with a Toffoli count of $L-1$, independent of the number of bits in each entry, from \cite{babbush2018}. It is possible to compute a table lookup in $O(WL/k + k)$ Toffolis, where $W$ is the output size of the lookup and $k$ is a freely chosen parameter \cite{low2018qrom}. Unfortunately, doing so requires $O(Wk)$ ancillae. The lookups we are performing in this paper are most beneficial when $W$ is large, and so we do not take advantage of this optimization when computing lookups. However, when uncomputing a lookup, measurement based uncomputation makes these optimizations applicable \cite{berry2019qubitization}. \begin{figure} \centering \resizebox{\textwidth}{!}{ \Qcircuit @R=0.7em @C=0.5em { &\multigate{2}{\text{Input }a} &\qw& &&&&&&\ctrlo{3} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\ctrl{3} &\qw &\\ & \ghost{\text{Input }a} &\qw& &&&&&&\qw &\ctrlo{3}&\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\ctrl{3} &\qw &\ctrlo{3} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\ctrl{3} &\qw &\qw &\\ & \ghost{\text{Input }a} &\qw& &&&&&&\qw &\qw &\ctrlo{3} &\qw &\qw &\qw &\ctrl{3} &\qw &\ctrlo{3} &\qw &\qw &\qw &\ctrl{3} &\qw &\qw &\qw &\ctrlo{3} &\qw &\qw &\qw &\ctrl{3} &\qw &\ctrlo{3} &\qw &\qw &\qw &\ctrl{3} &\qw &\qw &\qw &\\ & \ctrl{4} \qwx &\qw& &&&&&&\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\ctrl{1} &\qw &\\ & & & &&&&&& &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\ctrl{1} &\targ &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\ctrl{1} &\qw & &\\ & & & &&=&&&& & &\ctrl{1} &\qw &\ctrl{1} &\qw &\ctrl{1} &\targ &\ctrl{1} &\qw &\ctrl{1} &\qw &\ctrl{1} &\qw & & &\ctrl{1} &\qw &\ctrl{1} &\qw &\ctrl{1} &\targ &\ctrl{1} &\qw &\ctrl{1} &\qw &\ctrl{1} &\qw & & &\\ & & & &&&&&& & & &\ctrl{6} &\targ &\ctrl{6} &\qw & & &\ctrl{6} &\targ &\ctrl{6} &\qw & & & & &\ctrl{6} &\targ &\ctrl{6} &\qw & & &\ctrl{6} &\targ &\ctrl{6} &\qw & & & &\\ &\multigate{5}{\oplus\text{Lookup }T_a} &\qw& &&&&&&\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\\ & \ghost{\oplus\text{Lookup }T_a} &\qw& &&&&&&\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\\ & \ghost{\oplus\text{Lookup }T_a} &\qw& &&&&&&\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? 
&\qw &\qw &\qw &\qw &\\ & \ghost{\oplus\text{Lookup }T_a} &\qw& &&&&&&\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\\ & \ghost{\oplus\text{Lookup }T_a} &\qw& &&&&&&\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\\ & \ghost{\oplus\text{Lookup }T_a} &\qw& &&&&&&\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\targ^? &\qw &\targ^? &\qw &\qw &\qw &\qw &\\ & & & &&&&&& & & &T_0 & &T_1 & & & &T_2 & &T_3 &&&&&&T_4 &&T_5 &&&&T_6 &&T_7 &&&&&&\\ &&&&&&&&& &&& &&&&& &&&&&&& &&& && &&& &&&&&&\\ } } \caption{ \label{fig:lookup} Table lookup circuit from \cite{babbush2018}. The lines emerging from and merging into other lines are AND computations and uncomputations (notation from \cite{gidney2018addition}); they are equivalent to Toffoli gates. If the control qubit is set and the address register contains the binary value $a$, then this circuit xors the $a$'th bitstring from a precomputed lookup table $T$ into the $W$ output qubits. The diagram is showing the case where $L=2^3$ and $W=6$. The question marks beside the CNOT targets indicate that the target should be omitted or included depending on a corresponding bit in $T$. } \end{figure} To uncompute a table lookup using measurement based uncomputation, start by measuring all of the output qubits in the X basis. This will produce random measurement results and negate the phase of random computational basis states in the address register. However, measuring the qubits frees up significant workspace and the measurement results indicate which computational basis states of the address register were negated. Fixing the state negations is done using a lookup over a significantly smaller table. To be specific, the phase negation task is performed by separating the address register into a low half and a high half. A binary-to-unary conversion (see \fig{unary}) is performed on the low half, and then a table lookup addressed by the high half and targeting the low half is performed. This creates the opportunity to negate the amplitude of any combination of the computational basis states of the address register. See \fig{unlookup} for a quantum circuit showing an overview of this process. The Toffoli count of uncomputing the lookup is $2\sqrt{L}$ instead of $L$. Uncomputing the lookup has negligible cost, compared to computing the lookup. 
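As a rough sanity check of these counts, the following classical helper (ours, not taken from the paper's ancillary code) tabulates the Toffoli cost of computing a lookup versus uncomputing it with the measurement-based construction, using the $L-1$ and roughly $2\sqrt{L}$ figures quoted above.
\begin{python}
import math

def lookup_toffolis(num_entries):
    # Computing a lookup over L entries costs L - 1 Toffolis,
    # independent of the output width of each entry.
    return num_entries - 1

def unlookup_toffolis(num_entries):
    # Measurement-based uncomputation replaces the lookup with a
    # phase-fixup lookup over a table of size ~sqrt(L), plus a
    # binary-to-unary conversion, for roughly 2*sqrt(L) Toffolis.
    return 2 * math.sqrt(num_entries)

for address_bits in [8, 16, 20]:
    L = 2**address_bits
    print(L, lookup_toffolis(L), unlookup_toffolis(L))
\end{python}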
\begin{figure} \resizebox{\textwidth}{!}{ \Qcircuit @R=1em @C=0.75em { &\qw & \qw&\qw & \ctrl{1} &\qw&&& &&&&&\qw & \qw &\qw & \qw& \qw& \qw& \qw&\qw&\qw& \qw& \qw& \qw& \qw& \qw& \ctrl{1} & \qw& \qw&\qw&\\ &\qw {/}& \qw &\qw & \multigate{1}{\text{Input }a} &\qw&&& &&&&&\qw {/}& \qw &\qw & \qw& \qw& \qw&\qw &\qw&\qw& \qw&\qw & \qw & \qw& \qw& \gate{\text{Input }a_0} & \qw& \qw&\qw&\\ &\qw {/}&\ustick{l}\qw &\qw & \ghost{\text{Input }a} &\qw&&&=&&&&&\qw {/}&\ustick{l}\qw &\qw & \qw& \qw& \qw&\qw &\qw&\qw& \qw&\qw & \qw &\gate{\text{Input }a_1} & \qw& \qw\qwx& \qw& \gate{\text{Input }a_1} &\qw&\\ &\qw {/}&\ustick{\geq 2^l}\qw&\qw & \gate{\text{Unlookup }D_{a}}\qwx& \rstick{\langle0|} \qw &&& &&&&&\qw {/}&\ustick{\geq 2^l}\qw&\qw &\gate{H}&\meter& \cw & & & &\lstick{|0\rangle}&\qw {/}&\ustick{2^l} \qw&\gate{\text{Init Unary }a_1}\qwx&\gate{H}& \gate{\oplus\text{Lookup }F_{a_0}}\qwx&\gate{H}& \gate{\text{Clear Unary }a_1}\qwx& \rstick{\langle0|} \qw& \\ & & & & & &&& &&&&& & & & & &\igate{\text{Compute fixup table }F}\cwx&\cw {/}&\cw&\cw& \cw& \cw& \cw& \cw& \cw& \cw\cwx& } } \caption{ \label{fig:unlookup} Uncomputing a table lookup with address space size $L$ and an output size larger than $\sqrt{L}$. Has a Toffoli count and measurement depth of $O(\sqrt{L})$. Quadratically cheaper than computing the table lookup. } \end{figure} \begin{figure} \centering \resizebox{!}{!}{ \Qcircuit @R=1em @C=0.75em { & \multigate{2}{\text{Input }x} &\qw && &&& &\qw&\qw &\qw &\ctrl{4}&\qw &\qw &\qw &\qw &\qw &\qw &\qw &\\ & \ghost{\text{Input }x} &\qw && &&& &\qw&\qw &\qw &\qw &\ctrl{4}&\ctrl{5}&\qw &\qw &\qw &\qw &\qw &\\ & \ghost{\text{Input }x} &\qw && &&& &\qw&\qw &\qw &\qw &\qw &\qw &\ctrl{5}&\ctrl{6}&\ctrl{7}&\ctrl{8}&\qw &\\ &\imultigate{7}{\text{Init Unary }x}\qwx&\qw && &&& & &\lstick{|0\rangle} &\targ &\qswap &\qswap &\qw &\qswap &\qw &\qw &\qw &\qw \\ & \ighost{\text{Init Unary }x} &\qw && &&& & &\lstick{|0\rangle} &\qw &\qswap &\qw &\qswap &\qw &\qswap &\qw &\qw &\qw \\ & \ighost{\text{Init Unary }x} &\qw &&=&&& & &\lstick{|0\rangle} &\qw &\qw &\qswap &\qw &\qw &\qw &\qswap &\qw &\qw \\ & \ighost{\text{Init Unary }x} &\qw && &&& & &\lstick{|0\rangle} &\qw &\qw &\qw &\qswap &\qw &\qw &\qw &\qswap &\qw \\ & \ighost{\text{Init Unary }x} &\qw && &&& & &\lstick{|0\rangle} &\qw &\qw &\qw &\qw &\qswap &\qw &\qw &\qw &\qw \\ & \ighost{\text{Init Unary }x} &\qw && &&& & &\lstick{|0\rangle} &\qw &\qw &\qw &\qw &\qw &\qswap &\qw &\qw &\qw \\ & \ighost{\text{Init Unary }x} &\qw && &&& & &\lstick{|0\rangle} &\qw &\qw &\qw &\qw &\qw &\qw &\qswap &\qw &\qw \\ & \ighost{\text{Init Unary }x} &\qw && &&& & &\lstick{|0\rangle} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qswap &\qw \\ } } \caption{ \label{fig:unary} Example circuit producing a unary register from a binary register. The qubit at offset $k$ of the unary register will end up set if the binary register is storing $|k\rangle$. The general construction, that this circuit is an example of, uses $L$ Fredkin gates (equivalent to $L$ Toffoli gates) where $L$ is the length of the unary register and $n=\lg L$ is the length of the binary register. The ``Clear Unary" circuit is this circuit in reverse (and can be optimized using measuring based uncomputation if desired). } \end{figure} \section{Windowed arithmetic constructions} \label{sec:results} In this section we will be presenting our windowed arithmetic constructions using pseudo code. We also provide a few circuit diagrams, but the focus is the code. 
Because it is not common to specify quantum algorithms using pseudo code, we will quickly discuss some of the syntactical and semantic choices we've made before continuing. Although we refer to the snippets as pseudo code, they are actually executable python 3 code. We have written an experimental python 3 library, available at \href{https://github.com/strilanc/quantumpseudocode}{github.com/strilanc/quantumpseudocode}, which provides the necessary glue. The snippets are slightly modified versions of code from the \href{https://github.com/Strilanc/quantumpseudocode/tree/v0.1/examples}{examples folder of the v0.1 release} of that repository. We do not think this library is ready to be used by others, but we used it to test the code we are presenting and so have made it available. The basic idea of the pseudo code is that quantum operations are specified in the same way as classical operations, and it is the job of the interpreter to decompose high-level quantum arithmetic into low-level quantum circuit operations. For example, when $a$ and $b$ are variables holding quantum integers, the statement $a \pluseq b$ applies a quantum addition circuit to $a$ and $b$. If $b$ is instead a classical integer, then it is treated as a temporary expression (an ``rvalue") that needs to be loaded into a quantum register so that a quantum addition circuit using it can be applied. There are many other kinds of rvalues that can be temporarily loaded into a register in order to add them into a target. For example, indexing a lookup table with a quantum integer produces a lookup rvalue and so the statement $a \pluseq T[b]$ results in the following three actions: compute a table lookup with classical data $T$ and quantum address $b$ into a temporary register, then add the temporary register into $a$, then uncompute the table lookup. We use standard python features such as ranges, slices, and list comprehensions. We also use a quantum generalization of the ``if c:" block, which we write as ``with controlled\_by(c):" due to technical limitations. Operations within such a block will be controlled by the qubit ``c". We use two important custom types: ``Quint" and ``QuintMod". A Quint is a quantum integer; a register capable of storing a superposition of classical integers. Every quint has a fixed register length (accessed via ``len"), and stores integers as a sequence of qubits using 2s complement little endian format. The format is relevant because quints support slicing in order to access subsections of its qubits as a quint. For example, if ``q" is a 32-qubit quint then ``q[0:8]" is an aliased quint over the least significant byte. Quints support operations such as addition, subtraction, comparisons, and xoring. A QuintMod is a quantum integer associated with some modulus. An inline addition into a modular quint will be performed using modular arithmetic circuits, instead of using 2s complement arithmetic circuits used on quints. Quantum registers can be allocated and deallocated using ``qalloc" and ``qfree" methods. \subsection{Product addition} A product addition operation performs $x \pluseq ky$ where $x$ is a fixed width 2s complement register. We focus on the case where $k$ is a classical constant. 
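Before moving on, it may help to keep the classical meaning of this operation in mind. The following plain Python sketch (ours, not part of the quantumpseudocode library) shows what $x \pluseq ky$ computes when every value is classical: the product is added into the target and wraps modulo $2^n$, matching a fixed width 2s complement register.
\begin{python}
def plus_equal_product_classical(x, k, y, n):
    # Classical reference semantics of `x += k*y` on an n-bit register:
    # treat x as a residue modulo 2**n, so the sum wraps on overflow.
    return (x + k * y) % 2**n

assert plus_equal_product_classical(x=7, k=3, y=5, n=4) == 6  # (7 + 15) mod 16
\end{python}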
Normally an implementation of this operation would iterate over the bits of $k$, because this creates many small opportunities for optimization: \begin{python} def plus_equal_product(target: Quint, k: int, y: Quint): for i in range(k.bit_length()): if (k >> i) & 1: target[i:] += y \end{python} However, we will instead start from an implementation iterating over the qubits of $y$. This implementation performs quantumly controlled additions instead of classically controlled additions: \begin{python} def plus_equal_product(target: Quint, k: int, y: Quint): for i in range(len(y)): with controlled_by(y[i]): target[i:] += k \end{python} Adding $k$ into $x$ controlled by a qubit $q$ is equivalent to adding into $x$ the result of a table lookup with $q$ as the address, the value $0$ at address 0, and the value $k$ at address 1. So the above code is equivalent to the following code: \begin{python} def plus_equal_product(target: Quint, k: int, y: Quint): table = LookupTable([0, k]) for i in range(len(y)): target[i:] += table[y[i]] \end{python} Instead of performing a lookup over one qubit, we can perform a lookup over many qubits. That is to say, we can introduce windowing: \begin{python} def plus_equal_product(target: Quint, k: int, y: Quint, window: int): table = LookupTable([ i*k for i in range(2**window) ]) for i in range(0, len(y), window): target[i:] += table[y[i:i+window]] \end{python} This windowed implementation of product addition has an asymptotic Toffoli count of $O(\frac{n}{w} (n + 2^w))$ where $w$ is the window size. Setting the window size to $w=\lg n$, so that the table lookup is as expensive as the addition, achieves a Toffoli complexity of $O(n^2/\lg n)$. Windowing achieves a logarithmic factor improvement over the construction we started with. And we show in \fig{product-add-costs} that the advantage is not just asymptotic. Windowed multiplication has a lower Toffoli count than schoolbook multiplication at relatively small register sizes. \subsection{Multiplication} A multiplication operation performs $x \timeseq k$ where $k$ is odd and $x$ is a fixed width 2s complement register. Because qubits later in $x$ cannot affect qubits earlier in $x$, this operation can be implemented by iterating over $x$, from most significant qubit to least significant qubit, performing controlled additions targeting the rest of the register: \begin{python} def times_equal(target: Quint, k: int): assert k % 2 == 1 # Even factors aren't reversible. k %= 2**len(target) # Normalize factor. for i in range(len(target))[::-1]: with controlled_by(target[i]): target[i + 1:] += k >> 1 \end{python} As in the previous subsection, we can rewrite the controlled addition into a lookup addition, and then window the lookup. However, there is a complication introduced by the fact that qubits within a window need to operate on each other. To handle this, we split the inner-loop into two steps: adding the correct value into the rest of the register, and then recursively multiplying within the window. \begin{python} def times_equal(target: Quint, k: int, window: int): assert k % 2 == 1 # Even factors aren't reversible. k %= 2**len(target) # Normalize factor. if k == 1: return table = LookupTable([ (j * k) >> window for j in range(2**window) ]) for i in range(0, len(target), window)[::-1]: w = target[i:i + window] target[i + window:] += table[w] # Recursively fix up the window. times_equal(w, k, window=1) \end{python} The Toffoli complexity of this windowed multiplication is $O(\frac{n}{w} \cdot (n + 2^w + w^2))$. 
If we set $w = \lg n$ then the Toffoli count is $O(n^2/\lg n)$. \subsection{Modular product addition} A modular product addition operation performs $x \pluseq k y \pmod{N}$. We focus on the case where $k$ and $N$ are classical constants. We require that $x$, $y$, and $k$ are all non-negative and less than $N$. Modular product addition is identical to product addition, except that we need to use a modular addition in the inner loop, we need to fold the position factor $2^i$ into the table lookup, and we need to ensure the table lookup returns a value modulo $N$: \begin{python} def plus_equal_product_mod(target: QuintMod, k: int, y: Quint, window: int): N = target.modulus for i in range(0, len(y), window): w = y[i:i + window] table = LookupTable([ j * k * 2**i % N for j in range(2**window) ]) target += table[w] \end{python} See also \fig{multiply-add}, which generalizes this code to the case where $k$ is a function of a small number of qubits. This code achieves a Toffoli count of $O(\frac{n}{w} (n + 2^w))$ and, setting $w=\lg n$ as usual, the Toffoli complexity is $O(n^2/\lg n)$. \subsection{Modular multiplication} A modular multiplication operation performs $x \timeseq k \pmod{N}$, where $k$ has a multiplicative inverse modulo $N$. We focus on the case where $k$ and $N$ are classical constants. Modular multiplication is performed via a series of modular product additions \cite{zalka2006pure, haner2016factoring, gidney2017factoring}, and so this case reduces to the modular product addition case from the previous subsection. See \fig{multiply}. \subsection{Modular exponentiation} A modular exponentiation operation performs $x \timeseq k^e \pmod{N}$, where $k$ has a multiplicative inverse modulo $N$. We focus on the case where $k$ and $N$ are classical constants. Modular exponentiation is typically implemented using a series of controlled modular multiplications \cite{vedral1996arithmetic,zalka1998fast,haner2016factoring,gidney2017factoring}. We can reduce the number of multiplications that are needed by iterating over small windows of the exponent and looking up the corresponding factor to multiply by for each one. This also removes the need for the multiplications to be controlled, because the table lookup can evaluate to the factor 1 in cases where none of the exponent qubits are set. There is a catch here. Windowing over the exponent results in fewer modular multiplications, but the number being multiplied against is now quantum instead of classical. This could prevent us from applying windowing to the modular multiplications, because windowing isn't faster when both values being multiplied are quantum. But there is a way around this problem. Instead of using the exponent qubits to look up the number to multiply by, include the exponent qubits as address qubits in lookups within the windowed modular multiplication. This allows the intermediate values that are being retrieved to be the correct ones for the factor being multiplied by. In the following pseudo code, we have inlined the modular product additions (see \fig{multiply-add}) that are performed as part of the modular multiplications (see \fig{multiply}) that implement the modular exponentiation (see \fig{exponentiation}). This makes the code longer, but demonstrates how the exponent window and multiplication window are being used together when looking up values to add into registers.
\begin{python} def times_equal_exp_mod(target: QuintMod, k: int, e: Quint, e_window: int, m_window: int): """Performs `target *= k**e`, modulo the target's modulus.""" N = target.modulus ki = modular_multiplicative_inverse(k, N) assert ki is not None a = target b = qalloc_int_mod(modulus=N) for i in range(0, len(e), e_window): ei = e[i:i + e_window] # Exponent-indexed factors and inverse factors. kes = [pow(k, 2**i * x, N) for x in range(2**e_window)] kes_inv = [modular_multiplicative_inverse(x, N) for x in kes] # Perform b += a * k_e (mod modulus). # Maps (x, 0) into (x, x*k_e). for j in range(0, len(a), m_window): mi = a[j:j + m_window] table = LookupTable( [(ke * f * 2**j) % N for f in range(2**len(mi))] for ke in kes) b += table[ei, mi] # Perform a -= b * inv(k_e) (mod modulus). # Maps (x, x*k_e) into (0, x*k_e). for j in range(0, len(a), m_window): mi = b[j:j + m_window] table = LookupTable( [(ke_inv * f * 2**j) % N for f in range(2**len(mi))] for ke_inv in kes_inv) a -= table[ei, mi] # Relabelling swap. Maps (0, x*k_e) into (x*k_e, 0). a, b = b, a # Xor swap result into correct register if needed. if a is not target: swap(a, b) a, b = b, a qfree(b) \end{python} We have tested that the above code actually returns the correct result in randomly chosen cases. The Toffoli complexity of this code is $O(\frac{n_e n}{w_e w_m} (n + 2^{w_e + w_m}))$ where $n_e$ is the number of exponent qubits, $n$ is the register size, $w_e$ is the exponent windowing size, and $w_m$ is the multiplication windowing size. For the same reason that square fences cover more area per perimeter than rectangular fences, it is best to use roughly even window sizes over the exponentiation and the multiplications. Setting $w_e=w_m=\frac{1}{2}\lg n$ achieves a Toffoli complexity of $O(\frac{n_e n^2}{\lg^2 n})$, saving two log factors over the naive algorithm. 
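To get a feel for how the two window sizes trade off against one another, the following classical estimate (ours; it keeps only the leading terms of the complexity expression above, with all constant factors set to one) can be used to pick window sizes by brute force. It is a rough model, not an exact Toffoli count.
\begin{python}
def estimated_cost(n, n_e, w_e, w_m):
    # ~(n_e/w_e)*(n/w_m) inner steps, each doing a lookup over
    # 2**(w_e + w_m) entries plus an n-qubit (modular) addition.
    return (n_e / w_e) * (n / w_m) * (2**(w_e + w_m) + n)

def best_windows(n, n_e, max_window=12):
    # Brute-force the window pair minimizing the estimate.
    return min(
        ((estimated_cost(n, n_e, w_e, w_m), w_e, w_m)
         for w_e in range(1, max_window + 1)
         for w_m in range(1, max_window + 1)),
        key=lambda entry: entry[0])

# For example, best_windows(2048, 2048) favours roughly equal windows
# near (1/2)*lg(n), consistent with the discussion above.
\end{python}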
\begin{figure}[h] \resizebox{\textwidth}{!}{ \Qcircuit @R=1em @C=0.75em { \\ &\qw {/}&\ustick{w_e}\qw&\gate{\text{Input }e} \qw &\qw&& &&&&&&&\lstick{\text{exponent}}&\qw {/}&\ustick{w_e}\qw&\gate{\text{Input }e} \qw & \qw &\gate{\text{Input }e} \qw &\qw &\dots &&\gate{\text{Input }e} \qw & \qw &\gate{\text{Input }e} \qw &\qw\\ &\qw & \qw&\multigate{4}{\text{Input }a} \qw\qwx&\qw&& &&&&&&&\lstick{\text{in}_0} &\qw & \qw&\multigate{1}{\text{Input }a_0} \qw\qwx& \qw &\multigate{1}{\text{Input }a_0} \qw\qwx&\qw &\dots && \qw\qwx& \qw & \qw\qwx&\qw\\ &\qw & \qw& \ghost{\text{Input }a} \qw &\qw&& &&&&&&&\lstick{\text{in}_1} &\qw & \qw& \ghost{\text{Input }a_0} \qw & \qw & \ghost{\text{Input }a_0} \qw &\qw &\dots && \qw\qwx& \qw & \qw\qwx&\qw\\ & & \vdots& & && &&&&&&&\lstick{\vdots} & & & \qwx& & \qwx& &\ddots&& \qwx& & \qwx& \\ &\qw & \qw& \ghost{\text{Input }a} \qw &\qw&& &&&&&&&\lstick{\text{in}_{m-2}}&\qw & \qw& \qw\qwx& \qw & \qw\qwx&\qw &\dots &&\multigate{1}{\text{Input }a_{\lceil m/g-1\rceil}} \qw\qwx& \qw &\multigate{1}{\text{Input }a_{\lceil m/g-1 \rceil}} \qw\qwx&\qw\\ &\qw & \qw& \ghost{\text{Input }a} \qw &\qw&& &&&&&&&\lstick{\text{in}_{m-1}}&\qw & \qw& \qw\qwx& \qw & \qw\qwx&\qw &\dots && \ghost{\text{Input }a_{\lceil m/g-1\rceil}} \qw & \qw & \ghost{\text{Input }a_{\lceil m/g-1 \rceil}} \qw &\qw\\ &\qw {/}&\ustick{m} \qw& \qw\qwx&\qw&& &&&&&&&\lstick{|0\rangle} &\qw {/}&\ustick{m}\qw&\gate{\otimes \text{Lookup }a_0 k^e 2^0\pmod{N}}\qw\qwx&\gate{\text{Input }v}\qw &\gate{\text{Unlookup } a_0 k^e 2^g \pmod{N}}\qw\qwx&\qw &\dots &&\gate{\otimes \text{Lookup } a_{\lceil m/g-1\rceil} k^e 2^{g \lceil m/g-1\rceil} \pmod{N}}\qw\qwx&\gate{\text{Input }v}\qw &\gate{\text{Unlookup } a_{\lceil m/g-1\rceil} k^e 2^{g \lceil m/g - 1 \rceil} \pmod{N}}\qw\qwx&\qw\\ &\qw {/}&\ustick{m} \qw&\gate{+ak^e \pmod{N}} \qw\qwx&\qw&& &&&&&&&\lstick{\text{out}} &\qw {/}&\ustick{m}\qw& \qw &\gate{+v \pmod{N}} \qw\qwx& \qw &\qw &\dots && \qw &\gate{+v \pmod{N}} \qw\qwx& \qw &\qw\\ \\ } } \caption{ \label{fig:multiply-add} A windowed modular product addition circuit using windowed arithmetic, where the factor to multiply by is derived from a small number of input qubits from an exponent in a modular exponentiation. } \end{figure} \begin{figure}[h] \centering \resizebox{0.7\textwidth}{!}{ \Qcircuit @R=1em @C=0.75em { \\ &\qw {/}&\ustick{n}\qw&\qw &\gate{\times k \pmod{N}} &\qw&&&=&&& &\qw {/}&\ustick{n}\qw&\gate{\text{Input a}}&\gate{+b(-k^{-1}) \pmod{N}} &\qswap &\qw\\ & & & & & &&& &&&\lstick{|0\rangle} &\qw {/}&\ustick{n}\qw&\gate{+ak \pmod{N}} \qwx&\gate{\text{Input b}}\qwx&\qswap\qwx&\qw\\ \\ } } \caption{ \label{fig:multiply} Modular multiplication decomposes into modular product additions. 
} \end{figure} \begin{figure}[h] \resizebox{\textwidth}{!}{ \Qcircuit @R=1em @C=0.75em { \\ &\qw&\qw&\multigate{5}{\text{Input }e}&\qw&\qw &&&&& &\qw&\qw& \multigate{1}{\text{Input }e_{0:2}}&\qw&\qw&\qw&\\ &\qw&\qw&\ghost{\text{Input }e}&\qw&\qw &&&&& &\qw&\qw& \ghost{\text{Input }e_{0:2}}&\qw&\qw&\qw&\\ &\qw&\qw&\ghost{\text{Input }e}&\qw&\qw &&&&& &\qw&\qw&\qw\qwx& \multigate{1}{\text{Input }e_{2:4}}&\qw&\qw&\\ &\qw&\qw&\ghost{\text{Input }e}&\qw&\qw &&&=&& &\qw&\qw&\qw\qwx& \ghost{\text{Input }e_{2:4}}&\qw&\qw&\\ &\qw&\qw&\ghost{\text{Input }e}&\qw&\qw &&&&& &\qw&\qw&\qw\qwx&\qw\qwx& \multigate{1}{\text{Input }e_{4:6}}&\qw&\\ &\qw&\qw&\ghost{\text{Input }e}&\qw&\qw &&&&& &\qw&\qw&\qw\qwx&\qw\qwx& \ghost{\text{Input }e_{4:6}}&\qw&\\ &{/}\qw&\ustick{n}\qw&\gate{\times k^e \pmod{N}}\qwx&\qw&\qw &&&&& &{/}\qw&\ustick{n}\qw&\gate{\times k^{2^0 e_{0:2}} \pmod{N}}\qwx& \gate{\times k^{2^2 e_{2:4}} \pmod{N}}\qwx& \gate{\times k^{2^4 e_{4:6}} \pmod{N}}\qwx&\qw&\\ \\ } } \caption{ \label{fig:exponentiation} A six-exponent-qubit modular exponentiation circuit performed using windowed arithmetic with an exponent window size of 2. The relevant exponent qubits have to be included as address qubits in lookups within the windowed modular multiplication circuits, as seen in \fig{multiply-add}. } \end{figure} \subsection{Cost comparison} We implemented different product addition algorithms in Q\#, and used its tracing simulator to compare their costs. The results are shown in \fig{product-add-costs}. The estimation methodology is as follows. We generated random problems at various sizes. The problem input at size $n$ is an $n$-bit classical constant, an $n$-qubit offset register, and a $2n$-qubit target register all initialized into a random computational basis state. At small sizes we sampled several problems, in order to average out noise due to the ``classical iteration" algorithm having a cost that depends on the Hamming weight of the factors. At larger sizes the Hamming weight varies less (proportionally speaking), so we only sampled single problem instances. We fed the sampled problems into the three constructions while checking that they each returned the correct result. We had to omit some optimizations, in particular the table lookup uncomputation optimization, because they were incompatible with Q\#'s Toffoli simulator. \section{Conclusion} \label{sec:conclusion} In this paper we generalized the classical concept of windowing, of using table lookups to reduce operation counts, to the quantum domain. We presented constructions using this technique for several multiplication tasks involving classical constants. Although windowed constructions are not asymptotically optimal, e.g. Karatsuba multiplication has a lower asymptotic cost than windowed multiplication, we presented data indicating windowed constructions are more efficient than previous work for register sizes that would be relevant in practice (from tens of qubits to thousands of qubits). There are cases where windowing does not work. For example, if all of the registers in $x \pluseq a \cdot b$ are quantum registers, we are not aware of a way to use windowing to optimize the operation. Windowing should be useful for computing remainders and quotients when the divisor is a classical constant, but we are unsure if it can be used when computing multiplicative inverses modulo a classical constant. We leave the task of exhaustively surveying which quantum arithmetic tasks benefit from windowing, and which do not, as future work. 
Ultimately, windowing is an optimization that improves the cost of several basic quantum arithmetic tasks at practical register sizes. Because of this, we believe windowing will be a mainstay of quantum software engineering as it has proven to be in classical software engineering. \section{Acknowledgements} We thank Ryan Babbush, Adam Langley, Ilya Mironov, and Ananth Raghunathan for reading an early version of this paper and providing useful feedback which improved it. We thank Austin Fowler, Martin Ekerå, and Johan Håstad for useful feedback and discussions. We thank Hartmut Neven for creating an environment where this research was possible in the first place. \bibliographystyle{plainnat} \bibliography{refs} \end{document}
{ "alphanum_fraction": 0.5817766273, "avg_line_length": 70.5246710526, "ext": "tex", "hexsha": "1e14d6e274779cfca0ea74881029641dd95c6381", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "97d8ecf746545b480e8006caa5b6fb10694b3435", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Strilanc/windowed-quantum-arithmetic", "max_forks_repo_path": "main.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "97d8ecf746545b480e8006caa5b6fb10694b3435", "max_issues_repo_issues_event_max_datetime": "2020-03-18T11:47:06.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-24T23:44:59.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Strilanc/windowed-quantum-arithmetic", "max_issues_repo_path": "main.tex", "max_line_length": 503, "max_stars_count": null, "max_stars_repo_head_hexsha": "97d8ecf746545b480e8006caa5b6fb10694b3435", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Strilanc/windowed-quantum-arithmetic", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12410, "size": 42879 }
\documentclass [a4paper,portrait]{article} \usepackage[utf8]{inputenc} \usepackage{titlesec} \usepackage[english]{babel} \usepackage{fontspec} \usepackage{xcolor} \usepackage{fancyhdr} \usepackage [margin=0.6in]{geometry} \usepackage{graphicx} \usepackage{hyperref} \usepackage{lipsum} \usepackage{multicol} \usepackage{anyfontsize} \titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{0pt plus 8pt minus 6pt} \setmainfont[Color=003f7f]{AnglicanText} \begin{document} \setlength{\columnsep}{8mm} \setlength{\parskip}{0.1em} \pagestyle{empty} \title{\fontsize{50}{60}\selectfont{The Oath of Covenant of the Azure Concord}} \author{\large Ashe of Criamon} \date{\large 21\textsuperscript{st} December 1231} \maketitle \fontspec[Color=003366]{Precious} \large{{Being the Charter of the Covenant of The Azure Concord in the 1231\textsuperscript{st} Year of Our Lord, Jesus Christ.}} \vspace*{5mm} \noindent \normalsize{I pledge my lifelong support and loyalty to the Covenant of the Azure Concord, and declare that the trials and fortunes of this covenant are now my own. Just as I am pledged to the Oath of Hermes, so do I pledge the covenant to the Order of Hermes, and the authority of the Tribunal of Transylvania. I swear to uphold and protect this covenant regardless of personal price. Over all the years of my life, and throughout my studies and travels, I will neither betray the covenant nor give aid to its enemies. In times of need, I will aid the covenant in whatever way I am able, and I will devote myself to its service if the need is clear. I will abide by the decisions of the ruling council of this covenant, and I will treat these decisions as if they were my own. I will treat my fellows with respect and fairness, and I will not attempt to harm them in any way. Their blood is my blood. Where the covenant stands, there do I stand; how the covenant grows, so do I grow; should the covenant fall, then do I fall. This I so swear, upon the honour of my house and its Founder.} \large{Given under our hand,} \vspace{40mm} \fontspec[Color=000000]{BlackChancery} \begin{multicols}{2} \section*{\fontsize{30}{35}\selectfont{Membership}} \thispagestyle{empty} \begin{small} The covenant allows for two types of membership of its council, and recognizes a third status, which it offers to visitors to the covenant. The status of Protected Guest may be extended to any person by the formal invitation of a single full member of the covenant. Protected Guests are afforded the basic rights detailed by this charter, and are not obligated to the Council of Members, nor are they a member of this council. Protected Guests may partake in meetings of the Council of Members should they desire it, but are required to leave if asked to do so by a member of that council, and are afforded no voice nor vote unless granted such by the council’s chairman, the disceptator. The status of Protected Guest may be may be revoked by the member who granted it, or by a vote of the Council of Members. The status of Probationary Member of the Council may be extended to any magus in good standing of the Order of Hermes, who owes no allegiance nor fealty to any other covenant, and is admitted upon the unanimous approval of the current Council of Members. Provisional members assume the basic and provisional rights detailed by this charter and the duties therein attached. The status of provisional member shall last a period of seven seasons, unless abridged through censure or cancelled through expulsion. 
The status of Full Member of the Council is extended upon the completion of the duties and obligations of a probationary member, unless testimony is brought against him that proves him unfit to swear the Oath of Covenant in good conscience; in which case all rights of membership will be withdrawn. Should elevation to the role of full member take place, then all rights and duties of probationary membership are shed, to be replaced with the assumption of the basic and full rights detailed by this charter, and the duties therein attached. Full membership persists, unless abridged through censure or cancelled through expulsion. Should a magus ever come to desire release from this covenant, he must renounce his Oath of Covenant in the presence of at least two members of the council, and shall thereby be relieved of all duties and rights, and may not call upon such rights furthermore. \end{small} \end{multicols} \newpage \begin{multicols}{2} \begin{small} \section*{\fontsize{30}{35}\selectfont{Governance of this Covenant}} The members of this covenant are governed by the Council of Members, which shall consist of all probationary and full members of the covenant. This council shall not declare action except on behalf of the entire membership of the covenant; no action may be demanded of individuals by council agreement. Conversely, the rulings of the council cannot be overturned by an individual. Any member of the covenant shall have the right and duty to convene the Council of Members for consideration of matters justly grave, and all members shall be charged with attendance and diligence in the proceedings. Should it not be possible to convene the full Council of Members, any quorum consisting of more than half of its current members including the Councillor for the matter raised is considered valid, or failing that, his chosen prot\'eg\'e; else the discharge of the council’s duty must be delayed until such time as the full council may be convened, or the Councillor (or his prot\'eg\'e) may attend. The Council of Members shall convene four times each year, one day prior to each equinox and solstice, regardless of call from any member, and all members of the covenant should endeavour to make themselves present. Motions to be decided upon by the Council of Members must be introduced by a member; debated fully and justly, allowing those who wish to speak to do so; and then proposed for the vote. Proposals must be seconded by another member of the covenant, else no vote can take place. All issues shall be passed by a majority vote of the members there present; excepting that the unanimous opinion of the Council of Members is required for issues involving changes to the charter; expulsion of a member; and acceptance of a new probationary member. Each Councillor will be allowed the right to veto any motion that falls within his bounds of duty, so long as the motion has been debated fully, and the Councillor feels that the motion would create a significant risk to the Covenant, or his realm. He must, however, suggest an acceptable alternative.\footnote{Exemplar Gratis - A motion is brought to the council regarding the release of a captured magical creature. 
The Coucillor of Security may employ his Security veto if he thinks that the released creature might bring undue attention to the Covenant, but is required to suggest an alternative to releasing it freely.} The Council of Members shall confer the office of disceptator to the representative of the covenant in matters of governance and temporal concern. The title of disceptator is a duty of each and every full member of the covenant; this position is cyclical and mandatory, with the responsibility rotating in sequence of Hermetic seniority amongst the full members of the council. Each disceptator serves for a period of seven years, commencing one year following Tribunal. The duties of the disceptator are: to attend regular meetings of the council; to keep order at meetings; to break tied votes with a discretionary casting vote; to determine the yearly surplus of provision and store; and to act as a spokesman for the Council of Members. The disceptator shall not be empowered to rule on matters on the covenant's behalf, but instead is charged with ensuring the rulings of the Council of Members are enacted. \section*{\fontsize{30}{35}\selectfont{Resources Owned by this Covenant}} Resources of this covenant are held in common by the Council of Members, and it is the responsibility of this council to maintain and defend them. This covenant lays claim to all the vis originating from undisputed and unclaimed sources discovered by members of the council; save for the first harvest of a new vis source, which belongs to the finder or finders. This covenant also lays claim to any vis gifted to the Council of Members as a whole. In all other situations, undisputed and unclaimed vis belongs to the finder or finders. This covenant lays claim to all books obtained by members of the council while acting at the behest of the council, and all books scribed by members of the council where payment was received for this scribing from the covenant's resources. This covenant also lays claim to any texts gifted to the Council of Members as a whole. This council lays claim to all magical items obtained by members of the council while acting at the behest of the council; and all magical items made by the members of the council where payment was received for this manufacture from the covenant's resources. This covenant also lays claim to any magical items gifted to the Council of Members as a whole. This council lays claim to all monies generated using the resources of the covenant. This council also lays claim to all monies obtained by members of the council while acting at the behest of the council. This covenant also lays claim to any monies gifted to the Council of Members as a whole. This council lays claim to all buildings, defenses, chattels, and inhabitants of the covenant. This council also lays claims to any such buildings, defenses, chatells and inhabitants gifted to the Council of Members as a whole. Surplus resources of the covenant will be determined at the Winter meeting of the Council of Members. Resources necessary for the continued existence of the covenant and the protection of its members' rights are accounted for first; this includes payment for seasons of work performed on behalf of the covenant, and a stipend of vis for the casting of the Aegis of the Hearth. Contributions to all debts owed to the covenant are decided by the disceptator, and set aside. The remaining resources are deemed surplus, and shall be allocated to the settlement of requests from each member of the covenant. 
\section*{\fontsize{30}{35}\selectfont{Rights of the Members of this Covenant}} Each and every member of this covenant and protected guests shall be entitled to the basic rights of the covenant; to wit, full and unrestricted access to the protection and support of the covenant within the boundaries of the covenant by all the rights and benefits accorded by the Code of Hermes, the benefit of a sanctum which shall remain inviolate and the supply of materials thereof, access to the library of the covenant, and victuals appropriate to the status of a magus. These basic rights shall not be abridged except by expulsion from the Council of Members. In furtherance of and in addition to the basic rights, a full member of the covenant shall be entitled to the full rights of the covenant; to wit, the right to presence and a vote in the Council of Members, which he shall exercise dutifully with due prudence. Further, full and unrestricted access to the services and skills of the servants and covenfolk. Further, an equal right to all surplus provision and store necessary to conduct his studies, or the travel demanded by those studies; such rights to include (but be not limited to) vis, monies, and diverse magical and mundane resources claimed by the covenant. Where a conflict is evident between members of the council over the allotment of surplus resource, distribution is drawn by ballot; excepting that priority claims that have been advanced and agreed by the disceptator are taken into consideration prior to the ballot. These rights shall not be abridged except by decision of the council under conditions of grave concern. In furtherance of and in addition to the basic rights of a member of this covenant, a probationary member of the covenant is entitled to the probationary rights of the covenant; to wit, a fractional share of those rights and duties offered to a full member of the covenant, such share being equal to half that offered to full members. A probationary member's vote counts only half that of a full member, and they may only claim half the share of the surplus provision and store of the covenant's resources afforded a full member. The services and skills of the servants and covenfolk may not be halved, but the needs of a full member of the covenant take precedence over the needs of a probationary member. Further, a probationary member of the covenant who remains true to his Oath of Covenant has the right to remain at the covenant for a total of seven seasons following the conferral of this status. Further, a probationary member of the covenant has the right to be considered for full membership of this covenant after serving a total of seven seasons as a probationary member. These rights shall not be abridged except by decision of the council under conditions of grave concern. \section*{\fontsize{30}{35}\selectfont{Responsibilities of the Members of this Covenant}} Members of this covenant are obligated to obey the Oath of Hermes and the Peripheral Code, as demanded by the Oath of Covenant; failure on this account will not be tolerated by the Council of Members, and the covenant reserves the right to censure those members who are convicted in just Tribunal of an offence against the Order of Hermes. The responsibility of members of this covenant towards its lasting success is dependent on service to the covenant. The Council of Members will declare the duties that need be performed at the regular meetings of the covenant.
Such duties include (but are not limited to) the safeguarding and harvesting of the covenant's claimed vis sources, the safeguarding and harvesting of the covenant's income, the wellbeing and discipline of the covenant's employees, the maintenance of the covenant's resources, the increase of the covenant's resources, and the maintenance of good relations with the covenant's allies. Duties that will not entail more than a week of service at low personal risk will be assigned by the council to its members, with no more than one being assigned to each member in each season. Such assigned duties attract no recompense or advantage to the member who discharges them, but cannot be refused without reasonable extenuation. Duties that will entail a higher investment of time or personal risk will be offered up for service by the covenant. These services will attract a remuneration which shall be commensurate with the time, risk, and potential benefit to the covenant. The remuneration is decided by the disceptator, but maintains a minimum payment which shall be, for a single season of work at low risk with a low gain, two pawns of vis, of the flavour most prevalent in the stores at the time. The disceptator may increase the remuneration to increase the attractiveness of a particular urgent task, for the Council of Members is not empowered to force a member to accept one of these duties unless failure to perform it would be in breach of this covenant, in which case the threat of censure may be employed. All payments will be made in the Spring meeting of the Council. If there is more than one claimant for the service, and each claimant refuses to share the duty, then the disceptator will assign the duty by ballot. If there is insufficient vis to meet the demands of the council, the disceptator may withhold payment for one or more years. Covenant work may be declared such retroactively. Each probationary member of the covenant is obligated to perform no fewer than one of the tasks under remuneration currently outlined by the Council of Members during his period of probation. For this mandatory service, no payment need be offered by the Council of Members. \section*{\fontsize{30}{35}\selectfont{Censure of the Members of this Covenant}} If a member should contravene the decisions of the council, by vote or by charter, then the member may be censured or expelled by a vote of the Council of Members. Censure requires the passing of a motion at a meeting of the Council of Members. The censure of a full member revokes the rights of that status, returning him to probationary status; whereupon he assumes all the duties and rights of that status. Censure must not prejudice the application of a probationary member to the position of full member of the covenant. The censure of a probationary member shall abridge the rights of the member to remain a probationary member of the covenant, and shall confer upon him instead the status of Protected Guest. The status of a Protected Guest may be withdrawn at any time by a vote of the Council of Members without need for censure. Expulsion is enacted by a unanimous vote of the remaining Council of Members. Expulsion is the only means through which a member of the covenant shall lose his basic rights; and requires that the former member ceases to draw upon those basic rights subsequent to the first full moon after expulsion was enacted.
Should a magus be cast out of the Order, it is the duty and obligation of the covenant that he shall also and without delay be expelled from the covenant. \vfil \vskip-\prevdepth \nointerlineskip\null \penalty 200 \vfilneg \end{small} \end{multicols} \section*{\centering\fontsize{40}{40}\selectfont{The Seal of Office}} \vspace{43mm} \begin{multicols}{3} \section*{\fontsize{12}{12}\selectfont{This charter was approved by, }} \columnbreak \section*{\fontsize{12}{12}\selectfont{Quaesitor in good standing, in the}} \columnbreak \section*{\fontsize{12}{12}\selectfont{Year of Our Lord, Jesus Christ.}} \end{multicols} \end{document}
{ "alphanum_fraction": 0.7832504237, "avg_line_length": 70.0881226054, "ext": "tex", "hexsha": "1b7c07b875d63a07ce64a657d315db27f0d1c207", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "09ed53e208ad999fa7e23d6b72df81c237ad92f4", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Diserasta/Ars-Magica", "max_forks_repo_path": "Charter of Covenant/Charter of Covenant.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "09ed53e208ad999fa7e23d6b72df81c237ad92f4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Diserasta/Ars-Magica", "max_issues_repo_path": "Charter of Covenant/Charter of Covenant.tex", "max_line_length": 153, "max_stars_count": null, "max_stars_repo_head_hexsha": "09ed53e208ad999fa7e23d6b72df81c237ad92f4", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Diserasta/Ars-Magica", "max_stars_repo_path": "Charter of Covenant/Charter of Covenant.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4113, "size": 18293 }
\subsection{Single equality constraint} \subsubsection{Constrained optimisation} Rather than maximise \(f(x)\), we want to maximise \(f(x)\) subject to a constraint \(g(x)=c\). We write this, the Lagrangian, as: \(\mathcal{L}(x,\lambda )=f(x)-\sum^m_{k=1}\lambda_k [g_k(x)-c_k]\) We examine the stationary points for both the vector \(x\) and the multipliers \(\lambda \). By including the latter we ensure that these points are consistent with the constraints. \subsubsection{Solving the Lagrangian with one constraint} Our function is: \(\mathcal{L}(x, \lambda )=f(x)-\lambda [g(x)-c]\) The first-order conditions are: \(\mathcal{L}_{\lambda }= -[g(x)-c]\) \(\mathcal{L}_{x_i}=\dfrac{\delta f}{\delta x_i}-\lambda \dfrac{\delta g}{\delta x_i}\) Setting \(\mathcal{L}_{\lambda }=0\) recovers the constraint \(g(x)=c\). The solution is stationary so: \(\mathcal{L}_{x_i}=\dfrac{\delta f}{\delta x_i}-\lambda \dfrac{\delta g}{\delta x_i}=0\) \(\lambda \dfrac{\delta g}{\delta x_i}=\dfrac{\delta f}{\delta x_i}\) \(\lambda =\dfrac{\dfrac{\delta f}{\delta x_i}}{\dfrac{\delta g}{\delta x_i}}\) Finally, because \(\lambda \) is the same for every \(i\), we can use the following condition in practical applications. \(\dfrac{\dfrac{\delta f}{\delta x_i}}{\dfrac{\delta g}{\delta x_i}}=\dfrac{\dfrac{\delta f}{\delta x_j}}{\dfrac{\delta g}{\delta x_j}}\)
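\subsubsection{Worked example} As an illustration (an added example, not part of the original derivation), maximise \(f(x_1,x_2)=x_1x_2\) subject to the single constraint \(g(x_1,x_2)=x_1+x_2=c\). The Lagrangian is: \(\mathcal{L}(x,\lambda )=x_1x_2-\lambda [x_1+x_2-c]\) The first-order conditions give \(x_2-\lambda =0\), \(x_1-\lambda =0\) and \(x_1+x_2=c\), so \(x_1=x_2=\dfrac{c}{2}\) and \(\lambda =\dfrac{c}{2}\). Equivalently, the practical condition above requires \(\dfrac{x_2}{1}=\dfrac{x_1}{1}\), which again gives \(x_1=x_2\).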
{ "alphanum_fraction": 0.675, "avg_line_length": 31.5789473684, "ext": "tex", "hexsha": "5b57f226a2ac8e17af495da7941701a8d77984a5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/analysis/optimisationMulti/02-01-equality.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/analysis/optimisationMulti/02-01-equality.tex", "max_line_length": 161, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/analysis/optimisationMulti/02-01-equality.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 399, "size": 1200 }
This module tries to recognize syntactic symmetries in the input formula and to add corresponding symmetry breaking constraints. The core functionality is provided by \texttt{carl} through \texttt{carl::formula::breakSymmetries()}, which internally encodes the formula as a graph and uses \texttt{bliss} to find automorphisms of this graph. \paragraph{Efficiency} Finding automorphisms is as difficult as determining whether two graphs are isomorphic, and it is not known whether this problem can be solved in polynomial time. In practice, current tools like \texttt{bliss} perform very well on large graphs, and we therefore assume this module to be sufficiently fast.
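As a generic illustration of the idea (a toy sketch, not SMT-RAT or \texttt{carl} code), interchangeable variables can be handled by adding ordering constraints that keep one representative per orbit of symmetric solutions:
\begin{verbatim}
from itertools import product

# A constraint that is invariant under permuting its three variables,
# plus ordering constraints x1 <= x2 <= x3 acting as symmetry breakers.
def formula(assignment):
    return sum(assignment) == 3

solutions = [a for a in product(range(3), repeat=3) if formula(a)]
kept = [a for a in solutions if list(a) == sorted(a)]

# Every original solution is a permutation of a kept representative,
# so satisfiability is preserved while symmetric duplicates are pruned.
assert all(tuple(sorted(a)) in kept for a in solutions)
print(len(solutions), "solutions collapse to", len(kept), "representatives")
\end{verbatim}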
{ "alphanum_fraction": 0.8194029851, "avg_line_length": 95.7142857143, "ext": "tex", "hexsha": "c755145e216f7afc906e05367ff0b33df8fd8af9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "eaada50cdf9bbfe4dd4f6a54776387484c37b0f2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "minemebarsha/smtrat", "max_forks_repo_path": "src/smtrat-modules/SymmetryModule/SymmetryModule.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eaada50cdf9bbfe4dd4f6a54776387484c37b0f2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "minemebarsha/smtrat", "max_issues_repo_path": "src/smtrat-modules/SymmetryModule/SymmetryModule.tex", "max_line_length": 210, "max_stars_count": null, "max_stars_repo_head_hexsha": "eaada50cdf9bbfe4dd4f6a54776387484c37b0f2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "minemebarsha/smtrat", "max_stars_repo_path": "src/smtrat-modules/SymmetryModule/SymmetryModule.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 145, "size": 670 }
\subsection{\stid{1} \pmr}\label{subsect:pmr} \textbf{End State:} A cross-platform, production-ready programming environment that enables and accelerates the development of mission-critical software at both the node and full-system levels. \subsubsection{Scope and Requirements} A programming model provides the abstract design upon which developers express and coordinate the efficient parallel execution of their program. A particular model is implemented as a developer-facing interface and a supporting set of runtime layers. To successfully address the challenges of exascale computing, these software capabilities must address the challenges of programming at both the node- and full-system levels. These two targets must be coupled to support multiple complexities expected with exascale systems (e.g., locality for deep memory hierarchies, affinity for threads of execution, load balancing) and also provide a set of mechanisms for performance portability across the range of potential and final system designs. Additionally, there must be mechanisms for the interoperability and composition of multiple implementations (e.g., one at the system level and one at the node level). This must include abilities such as resource sharing for workloads that include coupled applications, supporting libraries and frameworks, and capabilities such as in situ analysis and visualization. Given the ECP’s timeline, the development of new programming languages and their supporting infrastructure is infeasible. We do, however, recognize that the augmentation or extension of the features of existing and widely used languages (e.g., C/C++ and Fortran) could provide solutions for simplifying certain software development activities. \subsubsection{Assumptions and Feasibility} The intent of the PMR L3 is to provide a set of programming abstractions and their supporting implementations that allow programmers to select from options that meet demands for expressiveness, performance, productivity, compatibility, and portability. It is important to note that, while these goals are obviously desirable, they must be balanced with an additional awareness that today’s methods and techniques may require changes in both the application and the overall programming environment and within the supporting software stack. \subsubsection{Objectives} PMR provides the software infrastructure necessary to enable and accelerate the development of HPC applications that perform well and are correct and robust, while reducing the cost both for initial development and ongoing porting and maintenance. PMR activities need to reflect the requirements of increasingly complex application scenarios, usage models, and workflows, while at the same time addressing the hardware challenges of increased levels of concurrency, data locality, power, and resilience. The software environment will support programming at multiple levels of abstraction that includes both mainstream as well as alternative approaches if feasible in ECP’s timeframe. Both of these approaches must provide a portability path such that a single application code can run well on multiple types of systems, or multiple generations of systems, with minimal changes. 
The layers of the system and programming environment implementation will therefore aim to hide the differences through compilers, runtime systems, messaging standards, shared-memory standards, and programming abstractions designed to help developers map algorithms onto the underlying hardware and schedule data motion and computation with increased automation. \subsubsection{Plan} PMR contains nine L4 projects. To ensure relevance to DOE missions, these efforts leverage and collaborate with existing activities within the broader HPC community. The PMR area supports the research and development needed to produce exascale-ready versions of the Message Passing Interface (MPI); Partitioned Global-Address Space Libraries (UPC++, GASNet); task-based programming models (Legion, PaRSEC); software for node-level performance portability (Kokkos, RAJA); and libraries for memory, power, and resource management. Initial efforts focused on identifying the core capabilities needed by the selected ECP applications and components of the software stack, identifying shortcomings of current approaches, establishing performance baselines of existing implementations on available petascale and prototype systems, and the re-implementation of the lower-level capabilities of relevant libraries and frameworks. These efforts provided demonstrations of parallel performance of algorithms on pre-exascale, leadership-class machines--at first on test problems, but eventually in actual applications (in close collaboration with the AD and HI teams). Initial efforts also informed research into exascale-specific algorithms and requirements that will be implemented across the software stack. The supported projects targeted and implemented early versions of their software on CORAL, NERSC and ACES pre-exascale systems--with an ultimate target of production-ready deployment on the exascale systems. In FY20--23, the focus is on development and tuning for the specific architectures of the selected exascale platforms, in addition to tuning specific features that are critical to ECP applications. Throughout the effort, the applications teams and other elements of the software stack evaluate and provide feedback on their functionality, performance, and robustness. Progress towards these goals is documented quarterly and evaluated annually (or more frequently if needed) based on PMR-centric milestones as well as joint milestone activities shared across associated software stack activities by Application Development and Hardware \& Integration focus areas. \subsubsection{Risks and Mitigation Strategies} The mainstream activities of ECP in the area of programming models focus on advancing the capabilities of MPI and OpenMP. Pushing them as far as possible into the exascale era is key to supporting an evolutionary path for applications. This is the primary risk mitigation approach for existing application codes. Extensions to MPI and OpenMP standards require research, and part of the efforts will focus on rolling these findings into existing standards, which takes time. To further address risks, PMR is exploring alternative approaches to mitigate the impact of potential limitations of the MPI and OpenMP programming models. Another risk is the failure of adoption of the software stack by the vendors, which is mitigated by the specific delivery focus in sub-element SW Ecosystem and Delivery. 
Past experience has shown that a combination of laboratory-supported open-source software and vendor-optimized solutions built around standard APIs that encourage innovation across multiple platforms is a viable approach and is what we are doing in PMR. We are using close interaction with the vendors early on to encourage adoption of the software stack, including well-tested practices of including support for key software products or APIs into large procurements through NRE or other contractual obligations. A mitigation strategy for this approach involves building a long-lasting open-source community around projects that are supported via laboratory and university funding. Creating a coordinated set of software requires strong management to ensure that duplication of effort is minimized. This is recognized by ECP management, and processes are in place to ensure collaboration is effective, shortcuts are avoided unless necessary, and an agile approach to development is instituted to prevent prototypes moving directly to product. \subsubsection{Future Trends} Recently announced exascale system procurements have shown that the trend in exascale compute-node hardware is toward heterogeneity: Compute nodes of future systems will have a combination of regular CPUs and accelerators (typically GPUs). Furthermore, the GPUs will not be just from NVIDIA as on existing systems: One system will have Intel GPUs and another will have AMD GPUs. In other words, there will be a diversity of GPU architectures, each with their own vendor-preferred way of programming the GPUs. An additional complication is that although the HPC community has some experience in using NVIDIA GPUs and the associated CUDA programming model, the community has relatively little experience in programming Intel or AMD GPUs. These issues lead to challenges for application and software teams in developing exascale software that is both portable and high performance. Below we outline trends in programming these complex systems that will help alleviate some of these challenges. \paragraph{Trends in Internode Programming} The presence of accelerator hardware on compute nodes has resulted in individual compute nodes becoming very powerful. As a result, millions of compute nodes are no longer needed to build an exascale system. This trend results in a lower burden on the programming system used for internode communication. It is widely expected that MPI will continue to serve the purpose of internode communication on exascale systems and is the least disruptive path for applications, most of which already use MPI. Nonetheless, improvements are needed in the MPI Standard as well as in MPI implementations in areas such as hybrid programming (integration with GPUs and GPU memory, integration with the intranode programming model), overall resilience and robustness, scalability, low-latency communication, optimized collective algorithms, optimized support for exascale interconnects and lower-level communication paradigms such as OFI and UCX, and scalable process startup and management. PGAS models, such as UPC++ and OpenSHMEM, are also available to be used by applications that rely on them and face similar challenges as MPI on exascale systems. These challenges are being tackled by the MPI and UPC++/GASNet projects in the PMR area. \paragraph{Trends in Intranode Programming} The main challenge for exascale is in achieving performance and portability for intranode programming, for which a variety of options exist. 
Vendor-supported options include CUDA and OpenACC for NVIDIA GPUs, SYCL/DPC++ for Intel GPUs, and HIP for AMD GPUs. OpenACC supports accelerator programming via compiler directives. SYCL provides a C++ abstraction on top of OpenCL, which itself is a portable, lower-level API for programming heterogeneous devices. Intel's DPC++ is similar to SYCL with some extensions. HIP from AMD is similar to CUDA; in fact, AMD provides translation tools to convert CUDA programs to HIP. Among portable, standard programming models, OpenMP has supported accelerators via the \texttt{target} directive starting with OpenMP version 4.0 released in July 2013. Subsequent releases of OpenMP (version 4.5 and 5.0) have further improved support for accelerators. OpenMP is supported by vendors on all platforms and, in theory, could serve as a portable intranode programming model for systems with accelerators. However, in practice, a lot depends on the quality of the implementation. Kokkos and RAJA provide another alternative for portable, heterogenous-node programming via C++ abstractions. They are designed to work on complex node architectures with multiple types of execution resources and multilevel memory hierarchies. Many ECP applications are successfully using Kokkos and RAJA to write portable parallel code that runs efficiently on GPUs. We believe these options (and high-quality implementations of them) will meet the needs of applications in the exascale timeframe.
{ "alphanum_fraction": 0.8311989686, "avg_line_length": 117.5252525253, "ext": "tex", "hexsha": "9ea33a01e85aece675dbeacbf6b5f9ec649d886c", "lang": "TeX", "max_forks_count": 104, "max_forks_repo_forks_event_max_datetime": "2022-02-11T19:13:59.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-20T23:14:32.000Z", "max_forks_repo_head_hexsha": "439b74be2cc7545b106ed36a1f6af42aebbe0994", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "thoasm/ECP-Report-Template", "max_forks_repo_path": "projects/2.3.1-PMR/2.3.1-PMR.tex", "max_issues_count": 24, "max_issues_repo_head_hexsha": "439b74be2cc7545b106ed36a1f6af42aebbe0994", "max_issues_repo_issues_event_max_datetime": "2022-02-11T21:51:14.000Z", "max_issues_repo_issues_event_min_datetime": "2018-12-16T00:09:45.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "thoasm/ECP-Report-Template", "max_issues_repo_path": "projects/2.3.1-PMR/2.3.1-PMR.tex", "max_line_length": 1108, "max_stars_count": 16, "max_stars_repo_head_hexsha": "439b74be2cc7545b106ed36a1f6af42aebbe0994", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "thoasm/ECP-Report-Template", "max_stars_repo_path": "projects/2.3.1-PMR/2.3.1-PMR.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-21T16:46:54.000Z", "max_stars_repo_stars_event_min_datetime": "2018-11-30T02:07:34.000Z", "num_tokens": 2195, "size": 11635 }
\documentclass[10p]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage[backend=biber,% style=numeric-comp, % how it appears in the literature overview citestyle=numeric-comp, % how it appears when citing in the text maxnames=1]{biblatex} \addbibresource{bibliography.bib} %Imports bibliography file \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathtools} \mathtoolsset{showonlyrefs=true} % only number equations that have been references in the text (set 'false' to number all) \usepackage[framemethod=TikZ]{mdframed} \usepackage{xcolor} \usepackage{booktabs} \usepackage{url} \usepackage{adjustbox} \usepackage[ruled,linesnumbered]{algorithm2e} \usepackage{pdflscape} \usepackage{hyperref} \usepackage{tabularx} \frenchspacing % No double spacing between sentences \linespread{1.2} % Set linespace \usepackage[a4paper, lmargin=0.1666\paperwidth, rmargin=0.1666\paperwidth, tmargin=0.1111\paperheight, bmargin=0.1111\paperheight]{geometry} %margins \newcommand{\st}{\text{s.t.}} \newcommand{\defi}{:=} \newcommand{\balpha}{\boldsymbol{\alpha}} \newcommand{\bones}{\mathbf{1}} \newcommand{\numexp}{n} \newcommand{\numpair}{m} \newcommand{\x}{\mathbf{x}} \newcommand{\z}{\mathbf{z}} \newcommand{\Pset}{\mathcal{P}} \newcommand{\w}{\mathbf{w}} \newcommand{\Molspace}{\mathcal{M}} \newcommand{\bPhi}{\boldsymbol{\Phi}} \newcommand{\A}{\mathbf{A}} \newcommand{\molkern}{\kappa} \newcommand{\bmolkern}{\mathbf{k}} \newcommand{\Molkern}{\mathbf{K}} \newcommand{\syskern}{\lambda} \DeclareMathOperator{\sign}{sign} \mdfdefinestyle{codeframe}{% linecolor=black, outerlinewidth=0.5pt, roundcorner=0pt, innertopmargin=\baselineskip, innerbottommargin=\baselineskip, innerrightmargin=20pt, innerleftmargin=20pt, backgroundcolor=gray!20!white} \mdfdefinestyle{openquestion}{% linecolor=black, outerlinewidth=0.5pt, roundcorner=0pt, innertopmargin=\baselineskip, innerbottommargin=\baselineskip, innerrightmargin=20pt, innerleftmargin=20pt, backgroundcolor=green!20!white} \title{ROSVM Package - Mathematical Background} \author{Eric Bach} \begin{document} \maketitle \tableofcontents \section{ToDo} \begin{itemize} \item Add derivations for the exterior product features. \end{itemize} \section{Introduction} This documents describes the mathematical background of the Ranking Support Vector Machine (RankSVM) \parencite{Joachims2002} implemented in the ROSVM package. \section{Method} \subsection{Notation} \begin{table}[t] \centering \caption{Notation table} \vspace{2pt} \label{tab:notations} \begin{tabular}{ll} \toprule {\bf Notation} & {\bf Description} \\ \midrule $\Pset$ & Set of preferences with size $\numpair=|\Pset|$ \\ $m\in\Molspace$ & Molecule from the space of molecules $\Molspace$\\ $\x\in\mathbb{R}^{d_x}$ & Molecule feature representation, e.g. fingerprint vector, with dimension $d_x$ \\ $\z\in\mathbb{R}^{d_z}$ & Chromatographic system feature representation with dimension $d_z$ \\ $\phi(\x)\in\mathbb{R}^{d_\mathcal{X}}$ & \emph{Kernel} feature representation of a molecule based on $\x$\\ $\phi_i=\phi(\x_i)$ & Shorthand for the kernel feature vector of example $i$\\ $\bPhi\in\mathbb{R}^{\numexp\times d_\mathcal{X}}$ & Kernel feature vector matrix, with $\numexp$ examples each of dimension $d_\mathcal{X}$\\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:notations} summarizes the notation used in this document. 
\subsection{Ranking Support Vector Machine (RankSVM)} The RankSVM's primal optimization problem is given as: \begin{equation} \begin{split} \underset{\mathbf{w},\boldsymbol{\xi}}{min} &\quad f(\mathbf{w},\boldsymbol{\xi}) = \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{(i,j)\in P}\xi_{ij} \\ \st &\quad y_{ij}\mathbf{w}^T(\phi_i-\phi_j)\geq 1-\xi_{ij},\quad\forall(i,j)\in\Pset\\ &\quad \xi_{ij} \geq 0,\quad\forall(i,j)\in\Pset, \label{eq:RankSVM_primal_problem} \end{split} \end{equation} where $C>0$ is the regularization parameter. We define the pairwise labels as the retention time difference of the corresponding molecules, i.e. $y_{ij}\defi\sign(t_i-t_j)$. From the primal problem in Eq.~\eqref{eq:RankSVM_primal_problem} we can derive the following dual optimization problem: \begin{equation} \begin{split} \underset{\balpha}{\max} &\quad g(\balpha) = \bones^T\balpha - \frac{1}{2} \balpha^T\left(\mathbf{y}\mathbf{y}^T\circ\mathbf{B}\mathbf{K}\mathbf{B}^T\right)\balpha \\ \st &\quad 0\leq\alpha_{ij}\leq C,\quad\forall (i,j)\in \Pset, \label{eq:RankSVM_dual_problem} \end{split} \end{equation} where $\mathbf{y}\in\mathbb{R}^\numexp$ is the vector of pairwise labels, and $\mathbf{B}\in\{-1,0,1\}^{\numpair\times\numexp}$ with row $p=(i,j)$ being $[\mathbf{B}]_{p\cdot}=(0,\ldots,0,\underbrace{1}_{i},0,\ldots,0,\underbrace{-1}_{j},0,\ldots,0)$. For further details refer to the work by \cite{Kuo2014}. Using the properties of the Hadamard product $\circ$ we can reformulate the function $g(\balpha)$ of the problem in Eq.~\eqref{eq:RankSVM_dual_problem} \parencite{Styan1973}: \begin{equation} \begin{split} g(\balpha) &= \bones^T\balpha-\frac{1}{2} \balpha^T\left(\mathbf{y}\mathbf{y}^T\circ\mathbf{B}\mathbf{K}\mathbf{B}^T\right)\balpha \\ &= \bones^T\balpha-\frac{1}{2} \balpha^T\left(\mathbf{D}_\mathbf{y}\mathbf{B}\mathbf{K}\mathbf{B}^T\mathbf{D}_\mathbf{y}\right)\balpha \\ &= \bones^T\balpha-\frac{1}{2} \balpha^T\mathbf{A}\mathbf{K}\mathbf{A}^T\balpha\\ &= \sum_{(i,j)\in\Pset}\alpha_{ij} -\frac{1}{2}\sum_{(i,j)\in\Pset}\sum_{(u,v)\in\Pset}\alpha_{ij}\alpha_{uv}y_{ij}y_{uv}\big(\ldots\\ &\quad\quad\underbrace{\molkern(\x_i,\x_u)-\molkern(\x_i,\x_v)-\molkern(\x_j,\x_u)+\molkern(\x_j,\x_v)}_{\text{Pairwise kernel between }(i,j)\text{ and }(u,v)}\big) \label{eq:dual_objective_function} \end{split} \end{equation} Here, $\mathbf{D}_\mathbf{y}\in\mathbb{R}^{\numpair\times\numpair}$ is a diagonal matrix storing the pairwise labels, and $\mathbf{A}\defi\mathbf{D}_\mathbf{y}\mathbf{B}\in\{-1,0,1\}^{\numpair\times\numexp}$. The matrix $\mathbf{A}$ now contains the pairwise labels as well by multiplying each row $p=(i,j)$ of $\mathbf{B}$ with $y_{ij}$, i.e. $[\mathbf{A}]_{p\cdot}=y_{ij}\cdot(0,\ldots,0,\underbrace{1}_{i},0,\ldots,0,\underbrace{-1}_{j},0,\ldots,0)$. \begin{mdframed}[style=codeframe] Check out '\texttt{\_build\_A\_matrix}' for the actual implementation of the $\mathbf{A}$-matrix construction from the data. 
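For illustration, a minimal NumPy sketch of this construction (the variable names and the representation of $\Pset$ as a list of index tuples are assumptions made here, not the actual interface of the package) could look as follows:
\begin{verbatim}
import numpy as np

def build_A_matrix(pairs, t, n):
    # One row per pair p = (i, j); one column per training example.
    A = np.zeros((len(pairs), n))
    for p, (i, j) in enumerate(pairs):
        y_ij = np.sign(t[i] - t[j])   # pairwise label y_ij = sign(t_i - t_j)
        A[p, i] = y_ij                # y_ij * (+1) at position i
        A[p, j] = -y_ij               # y_ij * (-1) at position j
    return A
\end{verbatim}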
\end{mdframed}
\subsubsection{Optimizing the RankSVM Model Parameters}
% \begin{figure}[t]
% \centering
\begin{algorithm}[t]
\SetAlgoLined
\caption{Conditional gradient algorithm}
\label{alg:frank-wolfe}
Let $\balpha^{(0)}\in\mathcal{A}$\tcc*{A feasible initial dual variable.}
\For{$k=0,\ldots,(K-1)$}{
$\mathbf{s}\leftarrow\underset{\mathbf{s}'\in\mathcal{A}}{\arg\max}\, \left\langle\nabla g(\balpha^{(k)}),\mathbf{s}'\right\rangle$ \tcc*{Solve sub-problem}
$\gamma^{(k)}\leftarrow\frac{2}{k+2}$\tcc*{Step-size; also line-search possible}
$\balpha^{(k+1)}\leftarrow(1-\gamma^{(k)})\balpha^{(k)}+\gamma^{(k)}\mathbf{s}$\tcc*{Update}
}
$\balpha^*\leftarrow\balpha^{(K)}$\;
\end{algorithm}
% \end{figure}
We find the optimal RankSVM model $\balpha^*$ in the dual space given a training dataset $\mathcal{D}=\{(\x_i,t_i)\}_{i=1}^\numexp$ using the conditional gradient algorithm \parencite{Jaggi2013}. The algorithm is shown in Algorithm~\ref{alg:frank-wolfe}. The feasible set is defined as $\mathcal{A}\defi\{\balpha\in\mathbb{R}^\numpair\,|\,0\leq\alpha_{ij}\leq C,\forall (i,j)\in\Pset\}$, which follows from the constraints of the dual optimization problem in Eq.~\eqref{eq:RankSVM_dual_problem}. Note that $\mathcal{A}$ is a compact convex set.
\begin{mdframed}[style=codeframe]
The function '\texttt{\_assert\_is\_feasible}' implements the feasibility check for a given $\balpha^{(k)}$ iterate.
\end{mdframed}
\paragraph{Solving the Sub-problem:} In each iteration of Algorithm~\ref{alg:frank-wolfe} we need to solve the following linear optimization problem:
\begin{align}
\mathbf{s} &=\underset{\mathbf{s}'\in\mathcal{A}}{\arg\max}\,\left\langle\nabla g(\balpha^{(k)}),\mathbf{s}'\right\rangle\\
&=\underset{\mathbf{s}'\in\mathcal{A}}{\arg\max}\,\left\langle\underbrace{\bones-\mathbf{A}\mathbf{K}\mathbf{A}^T\balpha^{(k)}}_{\defi\mathbf{d}},\mathbf{s}'\right\rangle.\label{eq:sub_problem}
\end{align}
Eq.~\eqref{eq:sub_problem} can be solved by simply evaluating $\mathbf{d}$ and subsequently setting the components of $\mathbf{s}\in\mathbb{R}^{\numpair}$ as:
\begin{equation}
s_{ij}=\begin{cases} C&\text{if }d_{ij}>0\\ 0&\text{else} \end{cases}.
\end{equation}
\begin{mdframed}[style=codeframe]
The function '\texttt{\_solve\_sub\_problem}' implements the sub-problem solver.
\end{mdframed}
\paragraph{Line-search:} The optimal step-size $\gamma^{(k)}$ can be determined by solving a univariate problem:
\begin{equation}
\gamma_{LS}^{(k)}=\underset{\gamma\in[0,1]}{\arg\max}\quad g\left(\balpha^{(k)}+\gamma\left(\mathbf{s}-\balpha^{(k)}\right)\right).
\label{eq:linesearch_problem}
\end{equation}
For that, we set the derivative of \eqref{eq:linesearch_problem} to zero:
\begin{align}
&\frac{\partial g\left(\balpha^{(k)}+\gamma\left(\mathbf{s}-\balpha^{(k)}\right)\right)}{\partial\gamma}\\
&=\bones^T\left(\mathbf{s}-\balpha^{(k)}\right)-\left(\balpha^{(k)}+\gamma\left(\mathbf{s}-\balpha^{(k)}\right)\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)\\
&=\bones^T\left(\mathbf{s}-\balpha^{(k)}\right)-\left(\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)-\gamma\left(\mathbf{s}-\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)\\
&=0
\end{align}
and solve for $\gamma$:
\begin{align}
\gamma\left(\mathbf{s}-\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right) &=\bones^T\left(\mathbf{s}-\balpha^{(k)}\right)-\left(\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)\\
\Leftrightarrow\\
\gamma&=\frac{\bones^T\left(\mathbf{s}-\balpha^{(k)}\right)-\left(\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)}{\left(\mathbf{s}-\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)}\\
\Leftrightarrow\\
\gamma_{LS}^{(k)}&=\frac{\left\langle\nabla g\left(\balpha^{(k)}\right),\left(\mathbf{s}-\balpha^{(k)}\right)\right\rangle}{\left(\mathbf{s}-\balpha^{(k)}\right)^T\mathbf{A}\mathbf{K}\mathbf{A}^T\left(\mathbf{s}-\balpha^{(k)}\right)}\label{eq:optimal_stepsize_with_linesearch}.
\end{align}
To ensure that $\gamma_{LS}^{(k)}$ is feasible, we clip its value to the range $[0,1]$. To evaluate the numerator in Eq.~\eqref{eq:optimal_stepsize_with_linesearch} we can reuse the value of $\mathbf{d}=\nabla g\left(\balpha^{(k)}\right)$ calculated when solving the sub-problem (see Eq.~\eqref{eq:sub_problem}).
\paragraph{Determine Convergence:} \textcite{Jaggi2013} propose to use the duality gap:
\begin{equation}
h\left(\balpha^{(k)}\right)\defi\left\langle\nabla g\left(\balpha^{(k)}\right),\left(\mathbf{s}-\balpha^{(k)}\right)\right\rangle
\end{equation}
as a convergence criterion by defining a threshold $\epsilon$ and iterating until $h(\balpha^{(k)})<\epsilon$. The rationale behind this idea is that the duality gap is an upper bound on the difference between the current function value at $\balpha^{(k)}$ and that of the global maximizer $\balpha^*$ of Eq.~\eqref{eq:RankSVM_dual_problem}, i.e. $h\left(\balpha^{(k)}\right)\geq g\left(\balpha^*\right)-g\left(\balpha^{(k)}\right)$\footnote{Note: Here we formulate the dual optimization as a maximization. In \cite{Jaggi2013} the authors formulate it as a minimization, which leads to a slightly changed duality gap definition.}.
\begin{mdframed}[style=openquestion]
In practice, the duality gap $h\left(\balpha^{(k)}\right)$ was observed to take very large values and never approach a reasonable threshold. Nevertheless, the model performance was very good. That might be because $\balpha^{(k)}$ has many entries, each of which can take values up to $C$. A quadratic function of the dual vector, like $g$ in Eq.~\eqref{eq:RankSVM_dual_problem}, can take on very large values. The ``un-boundedness'' or missing normalization might be an issue here. The current implementation gets around this issue by checking the following quantity for convergence:
\begin{equation}
\frac{h(\balpha^{(k)})}{h(\balpha^{(0)})}<\epsilon,
\end{equation}
where $\epsilon>0$, e.g.
$\epsilon=0.005$, is the convergence threshold. Possible scaling factors of the function $h$ are canceled out. This convergence criterion can be interpreted as the relative decrease of the duality gap given the initial model $\balpha^{(0)}$.
\end{mdframed}
\subsubsection{Prediction Step}
Given two molecules $m_u$ and $m_v$, their retention order label $\hat{y}_{uv}$ is predicted using a trained RankSVM model $\w=\bPhi^T\A^T\balpha$:
\begin{equation}
\hat{y}_{uv} =\sign(\langle\w,\phi_u-\phi_v\rangle) =\sign\left(\left\langle\bPhi^T\A^T\balpha,\phi_u-\phi_v\right\rangle\right).
\end{equation}
Sometimes it might be useful to get the decision score for a single molecule. This score can be calculated by exploiting the linearity of the inner product with the kernel feature vector \emph{difference}
\begin{equation}
\langle\w,\phi_u-\phi_v\rangle=\langle\w,\phi_u\rangle-\langle\w,\phi_v\rangle.
\end{equation}
We can now evaluate the decision score:
\begin{align}
s_u &=\langle\w,\phi_u\rangle\\
&=\left\langle\bPhi^T\A^T\balpha,\phi_u\right\rangle\\
&=\balpha^T\A\bmolkern(\x_u)\\
&=\sum_{(i,j)\in\Pset}\alpha_{ij}y_{ij}\left(\molkern(\x_i,\x_u)-\molkern(\x_j,\x_u)\right),
\end{align}
with $\bmolkern(\x_u)=(\molkern(\x_1,\x_u),\ldots,\molkern(\x_\numexp,\x_u))\in\mathbb{R}^{\numexp}$ being a vector containing the kernel similarities between the molecule $m_u$ (using its representation $\x_u$) and all molecules in the training set.
\subsection{Include Chromatographic System Descriptors}
In this section we inspect different ways to include feature descriptions of the utilized chromatographic system into the prediction. We will thereby focus on the inclusion using a joint feature-vector for molecules and chromatographic systems. In the following we assume that for all pairs in the training set, i.e. $(i,j)\in\Pset$, the molecules $m_i$ and $m_j$ have been measured with the same chromatographic system. That means no cross-system pairwise relations are explicitly measured. It furthermore motivates the use of $\z_{ij}=\z_i=\z_j$ as the notation for the chromatographic system feature descriptor corresponding to the pair $(i,j)$.
\subsubsection{Concatenating $\x$ and $\z$}
Concatenating the feature descriptors of the molecules and the chromatographic system is one option to include the descriptors into the prediction. For that, the kernel feature vector $\phi_i$ (see Eq.~\eqref{eq:RankSVM_primal_problem}) is defined as:
\begin{equation}
\phi_i = \phi\left(\left[\begin{matrix}\x_i\\\z_i\end{matrix}\right]\right).
\end{equation}
The \emph{pairwise kernel} (see Eq.~\eqref{eq:dual_objective_function}) will be given as:
\begin{equation}
\molkern\left(\left[\begin{matrix}\x_i\\\z_{ij}\end{matrix}\right],\left[\begin{matrix}\x_u\\\z_{uv}\end{matrix}\right]\right) -\molkern\left(\left[\begin{matrix}\x_i\\\z_{ij}\end{matrix}\right],\left[\begin{matrix}\x_v\\\z_{uv}\end{matrix}\right]\right) -\molkern\left(\left[\begin{matrix}\x_j\\\z_{ij}\end{matrix}\right],\left[\begin{matrix}\x_u\\\z_{uv}\end{matrix}\right]\right) +\molkern\left(\left[\begin{matrix}\x_j\\\z_{ij}\end{matrix}\right],\left[\begin{matrix}\x_v\\\z_{uv}\end{matrix}\right]\right).
\end{equation}
Here, the kernel $\molkern$ does not have the interpretation of a similarity measure between molecules, but rather between the combination of molecule and chromatographic system features.
\begin{mdframed}[style=openquestion]
In practice we could for example concatenate molecular fingerprints and eluent descriptors.
As kernel we can utilize the generalized Tanimoto kernel developed by \textcite{Szedmak2020a}. This kernel can be used on real valued features. \end{mdframed} \subsubsection{Kronecker Product of Kernel Feature vectors ($\phi(\x)\otimes\varphi(\z)$)} Another approach to include the chromatographic system features is through a separate kernel and the Kronecker product. We define the $\phi(\x_i)\otimes\varphi(\z_i)$ as the feature associated with $\x_i$ and $\z_i$. The pairwise kernel (see Eq.~\eqref{eq:dual_objective_function}) will be given as: \begin{align} &\langle\phi_i\otimes\varphi_{ij}-\phi_j\otimes\varphi_{ij},\phi_u\otimes\varphi_{uv}-\phi_v\otimes\varphi_{uv}\rangle\\ &=\langle\phi_i\otimes\varphi_{ij},\phi_u\otimes\varphi_{uv}\rangle -\langle\phi_i\otimes\varphi_{ij},\phi_v\otimes\varphi_{uv}\rangle -\langle\phi_j\otimes\varphi_{ij},\phi_u\otimes\varphi_{uv}\rangle +\langle\phi_j\otimes\varphi_{ij},\phi_v\otimes\varphi_{uv}\rangle\\ &=\molkern(\x_i,\x_u)\syskern(\z_{ij},\z_{uv}) -\molkern(\x_i,\x_v)\syskern(\z_{ij},\z_{uv}) -\molkern(\x_j,\x_u)\syskern(\z_{ij},\z_{uv}) +\molkern(\x_j,\x_v)\syskern(\z_{ij},\z_{uv})\\ &=\syskern(\z_{ij},\z_{uv})(\molkern(\x_i,\x_u)-\molkern(\x_i,\x_v)-\molkern(\x_j,\x_u)+\molkern(\x_j,\x_v)). \end{align} Here, $\syskern(\z_{ij},\z_{uv})$ expresses the similarity between the two chromatographic systems associated with $(i,j)$ and $(u,v)$. \subsubsection{Chromatographic System Descriptors} \newpage \printbibliography \end{document}
{ "alphanum_fraction": 0.6969513537, "avg_line_length": 57.872611465, "ext": "tex", "hexsha": "b70bf663d5908243ca90ba3e5d2ec2384a98a108", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b5979ec0fad660464c5bccbbd0cac074a91db5bd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bachi55/ROM", "max_forks_repo_path": "doc/documentation.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "b5979ec0fad660464c5bccbbd0cac074a91db5bd", "max_issues_repo_issues_event_max_datetime": "2022-02-23T09:23:53.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-21T08:42:13.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bachi55/ROM", "max_issues_repo_path": "doc/documentation.tex", "max_line_length": 622, "max_stars_count": 4, "max_stars_repo_head_hexsha": "b5979ec0fad660464c5bccbbd0cac074a91db5bd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bachi55/ROM", "max_stars_repo_path": "doc/documentation.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-06T17:21:28.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-13T02:52:00.000Z", "num_tokens": 6139, "size": 18172 }
\documentclass{article} \usepackage{graphicx} \usepackage[utf8]{inputenc} \usepackage{amsmath, amssymb, latexsym} \usepackage{pgfplots} \usepackage{algorithm} \usepackage[noend]{algpseudocode} \usepackage{tikz} \usepackage{nicefrac} \usepackage{placeins} \pgfplotsset{every axis legend/.append style={ at={(0,0)}, anchor=north east}} \usetikzlibrary{shapes,positioning,intersections,quotes} \usetikzlibrary{arrows.meta, bending, intersections, quotes, shapes.geometric} \definecolor{darkgreen}{rgb}{0.0, 0.6, 0.0} \definecolor{darkred}{rgb}{0.7, 0.0, 0.0} \makeatletter \def\BState{\State\hskip-\ALG@thistlm} \makeatother \title{Week 16} \begin{document} \pagenumbering{gobble} \maketitle \newpage \pagenumbering{arabic} \section*{Recommender systems} \begin{itemize} \item Many technological businesses consider recommender systems to be critical (amazon, Ebay, iTunes genius). \item Based on previous purchases, try to propose new content to you. \item It's not so much a method as it is a concept. \end{itemize} Example: predict movie ratings. \begin{itemize} \item You work at a firm that sells movies. \item You allow viewers to rate movies on a scale of 1 to 5. \item You have five films. \item You also have four users. \end{itemize} ~\\ \includegraphics[width=\textwidth]{resources/movie_recommendation} \begin{itemize} \item $n_u$ - number of users. \item $n_m$ - number of movies. \item $r(i,j)$ - 1 if user j has rated movie i. \item $y^{(i,j)}$ - rating given by user j to move i. \item If we have features like this, a feature vector may recommend each film. \item For each film, add an additional feature that is x0 = 1. \item So we have a $[3 \times 1]$ vector for each film, which for film number three ("Cute Puppies of Love") would be: \end{itemize} \begin{align*} x^{(3)} & = \begin{bmatrix} 1 \\ 0.99 \\ 0 \end{bmatrix} \end{align*} So, let's take a look at user 1 (Alice) and see what she thinks of the modern classic Cute Puppies of Love (CPOL). She is associated with the following parameter vector: \begin{align*} \theta^{(1)} & = \begin{bmatrix} 0 \\ 5 \\ 0 \end{bmatrix} \end{align*} ~\\ Our prediction: $$(\theta^1)^Tx^3 = (0\cdot1)+(5\cdot0.99)+(0\cdot0)=4.95$$ \section*{How do we learn $(\theta^j)$?} This is analogous to linear regression with least-squares error: $$min_{\theta^j} = \frac{1}{2m^{(j)}}\sum_{i:r(i,j)=1}((\theta^{(j)})^T(x^{(i)})-y^{(i,j)})^2+\frac{\lambda}{2m^{(j)}}\sum_{k=1}^{n}(\theta_k^{(j)})^2$$ We have the gradient descent algorithm to find the minimum: $$\theta_k^{(j)} := \theta_k^{(j)}-\alpha \sum_{i:r(i,j)=1}((\theta^{(j)})^T(x^{(i)})-y^{(i,j)})x_k^{(i)}\ (for\ k=0)$$ $$\theta_k^{(j)} := \theta_k^{(j)}-\alpha( \sum_{i:r(i,j)=1}((\theta^{(j)})^T(x^{(i)})-y^{(i,j)})x_k^{(i)} + \lambda \theta_k^{(j)})\ (for\ k\neq0)$$ \section*{Collaborative filtering} The collaborative filtering algorithm has a very intriguing property: it can learn what characteristics it needs to learn for itself. ~\\ If we are given user preferences ($\theta^{(1)}, ...,\theta^{(n_u)}$) we may use them to find out the film's features ($x^{(1)}, ...,x^{(n_m)}$) and vice versa: $$min_{x^{(1)}, ...,x^{(n_m)}} \frac{1}{2} \sum_{i=1}^{n_m} \sum_{i:r(i,j)=1}((\theta^{(j)})^T(x^{(i)})-y^{(i,j)})^2+\frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^{n}(x_k^{(i)})^2$$ One thing you could do is: \begin{itemize} \item Randomly initialize parameter. \item Go back and forth in time. 
\end{itemize}
\section*{Vectorization: Low rank matrix factorization}
How can we enhance the collaborative filtering algorithm now that we've looked at it?
~\\
So, in our previous example, take all ratings from all users and organize them into a matrix Y.
~\\
5 movies and 4 users give us a $[5 \times 4]$ matrix:
\begin{equation*}
Y =
\begin{pmatrix}
5 & 5 & 0 & 0 \\
5 & ? & ? & 0 \\
? & 4 & 0 & ? \\
0 & 0 & 5 & 4 \\
0 & 0 & 5 & 0
\end{pmatrix}
\end{equation*}
~\\
Given [Y] there's another way of writing out all the predicted ratings:
\begin{equation*}
X \cdot \Theta^T =
\begin{pmatrix}
(\theta^{(1)})^T(x^{(1)}) & (\theta^{(2)})^T(x^{(1)}) & ... & (\theta^{(n_u)})^T(x^{(1)}) \\
(\theta^{(1)})^T(x^{(2)}) & (\theta^{(2)})^T(x^{(2)}) & ... & (\theta^{(n_u)})^T(x^{(2)}) \\
\vdots & \vdots & \vdots & \vdots \\
(\theta^{(1)})^T(x^{(n_m)}) & (\theta^{(2)})^T(x^{(n_m)}) & ... & (\theta^{(n_u)})^T(x^{(n_m)})
\end{pmatrix}
\end{equation*}
~\\
Where matrix X is constructed by taking the feature vector of each movie and stacking them in rows:
\begin{align*}
X & = \begin{bmatrix} -(x^{(1)})^T- \\ -(x^{(2)})^T- \\ \vdots \\ -(x^{(n_m)})^T- \end{bmatrix}
\end{align*}
And matrix $\Theta$ is constructed by taking the parameter vector of each user and stacking them in rows:
\begin{align*}
\Theta & = \begin{bmatrix} -(\theta^{(1)})^T- \\ -(\theta^{(2)})^T- \\ \vdots \\ -(\theta^{(n_u)})^T- \end{bmatrix}
\end{align*}
\section*{Mean Normalization}
Consider a user who hasn't reviewed any movies.
\includegraphics[width=\textwidth]{resources/no_ratings}
\begin{itemize}
\item There are no films for which r(i,j) = 1.
\item So this term plays no role in determining $\theta^5$.
\item So we're just minimizing the final regularization term.
\end{itemize}
\includegraphics[width=\textwidth]{resources/no_ratings_formula}
\begin{itemize}
\item As previously, put all of our ratings into matrix Y.
\item We now compute the average rating for each movie and store it in an $n_m$-dimensional column vector.
\begin{align*}
\mu & = \begin{bmatrix} 2.5 \\ 2.5 \\ 2 \\ 2.25 \\ 1.25 \end{bmatrix}
\end{align*}
\item If we take a look at all of the movie ratings in [Y], we can subtract the mean rating of each movie.
\begin{equation*}
Y =
\begin{pmatrix}
2.5 & 2.5 & -2.5 & -2.5 & ? \\
2.5 & ? & ? & -2.5 & ? \\
? & 2 & -2 & ? & ? \\
-2.25 & -2.25 & 2.75 & 1.75 & ? \\
-1.25 & -1.25 & 3.75 & -1.25 & ?
\end{pmatrix}
\end{equation*}
\item That is, we normalize each film to have an average rating of 0.
\end{itemize}
\end{document}
{ "alphanum_fraction": 0.5968014762, "avg_line_length": 30.1064814815, "ext": "tex", "hexsha": "8ac97683be6de9f1ca08996f8aa5d639028de2ef", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e6ef77939b7c581aebb5e9454669ad2dbb4f98f0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "djeada/Stanford-Machine-Learning", "max_forks_repo_path": "slides/week_16.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e6ef77939b7c581aebb5e9454669ad2dbb4f98f0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "djeada/Stanford-Machine-Learning", "max_issues_repo_path": "slides/week_16.tex", "max_line_length": 179, "max_stars_count": null, "max_stars_repo_head_hexsha": "e6ef77939b7c581aebb5e9454669ad2dbb4f98f0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "djeada/Stanford-Machine-Learning", "max_stars_repo_path": "slides/week_16.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2231, "size": 6503 }
\documentclass[12pt]{article} \usepackage[pdftex]{graphicx} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorinlistoftodos]{todonotes} \usepackage{fancyhdr} \lhead{Sharing Framework} \rhead{Intermediate Report} \setlength{\headheight}{15.0pt} \pagestyle{fancy} \title{Sharing Framework: Intermediate Report} \author{Chris Spalding\\Ian Saad\\Greg Finch\\Wesley Stedman\\Bobby Kearns} \begin{document} \begin{titlepage} \maketitle \end{titlepage} \section{Design Specification} \subsection{Walkthrough of our code} We will likely use a Singleton design pattern for our database. This is mainly because we only need a single database class to keep track of all of the users' information. The code for the database will be pretty basic. The script will access the server by logging in and from there the code will instantiate a database that will hold the files. The code will then create tables that will contain different files. There will be a table that holds user account information, a table that holds saved projects created by the user, and images that are used in the project. For the project table there will be unique keys that link that specific project to the user that created so that when the user loads that file the server will know where that file is because the key is associated with the user calling it. This will work the same way when the user is trying to share the project as well. \subsection{Design Rationale} We chose to use the singleton pattern because we will have only one instance of the database operating. No other design pattern would apply because our part of the system can only operate as a single entity and will only be represented by one class. \subsection{Interfaces} Our system provides other teams with the ability to load and save to a database. Teams will have to use a loadFile command to load a file from the database. Also provided are saveFile, shareFile, and Login commands. Login is required before loading or saving can occur. This allows us to distinguish between users when saving files. This also allows us to prevent users from accessing other files. The user will not be directly interacting with us. They will being using elements in the gui that will call our provided methods to accomplish tasks with the server. \subsection{Design Details and Restrictions} We are partially restricted by other groups and their styles/implementation; need to know exactly what we will be called for. Overall our module has fairly specific operations that have rather straightforward designs. We have a database that will “guide” us on how we implement storing and retrieving information from it. \section{Project Status} We currently have the skeleton for a database. We need to implement load, save, and share methods. In order to do that we need to talk to a few groups to make sure we know what type of file to save/share. After implementing these three methods, the last step for our group will be to effectively put the other groups’ code ‘on top’ of ours. We’ll have to allow the other groups to access our database so that user’s will be able to save, share, and load their projects without seeing the inner workings of the server. \subsection{Member Duties} Ian is working on saving files. This means that he will be working on making a save method within our database. Greg's main duty is making a method to facilitate file sharing between users. 
Chris' main duty has been to write Latex code, but moving forward he will help to facilitate group communication as well as debugging and helping everyone with miscellaneous coding needs. Wesley will work on creating a file loading method that is easy to use, and Bobby's task will be to work on user authentication. \subsection{Timeline for Implementation} While every member of our team will be involved in each part of the module, we have divided the work into the methods involved with the database. Greg will be focused primarily on file sharing, Wesley on file loading, Bobby on user authentication, Ian on file saving, and Chris being the floating member and \LaTeX expert. After the server is set up, all the functions can be worked on simultaneously to some extent, so we have layed them out as such. For the first Integration Day we will have save and load completed as they are fairly similar and are the core functionality of our module. These two functions also allow the other modules to do most of what they need from us. When those are completed, the working members will move to assist on log on and share. For the second Integration Day we will have log on completed and before the final day, share will be complete. \newpage \begin{figure} \centerline{\includegraphics{sharingframework-uml.pdf}} \end{figure} \begin{figure} \centerline{\includegraphics[scale=0.75]{sharingframework-sfs.pdf}} \end{figure} \begin{figure} \centerline{\includegraphics[scale=0.75]{sharingframework-sfs2.pdf}} \end{figure} \end{document}
{ "alphanum_fraction": 0.7983310153, "avg_line_length": 69.9027777778, "ext": "tex", "hexsha": "21b8ca1133d7d9181c5977eabd428022ee33ce35", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2c674f6feb333571eae240dc1d094936b8c59c71", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kcanfieldpugetsound/edith", "max_forks_repo_path": "docs/sf-IntermediateReport/sharingframework-intermediatereport.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2c674f6feb333571eae240dc1d094936b8c59c71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kcanfieldpugetsound/edith", "max_issues_repo_path": "docs/sf-IntermediateReport/sharingframework-intermediatereport.tex", "max_line_length": 717, "max_stars_count": null, "max_stars_repo_head_hexsha": "2c674f6feb333571eae240dc1d094936b8c59c71", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kcanfieldpugetsound/edith", "max_stars_repo_path": "docs/sf-IntermediateReport/sharingframework-intermediatereport.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1094, "size": 5033 }
\section{201803-5} \input{problem/13/201803-5-p.tex}
{ "alphanum_fraction": 0.7358490566, "avg_line_length": 17.6666666667, "ext": "tex", "hexsha": "a5df5b0ba53aa329f62a452858a82c9c16d36744", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "xqy2003/CSP-Project", "max_forks_repo_path": "problem/13/201803-5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "xqy2003/CSP-Project", "max_issues_repo_path": "problem/13/201803-5.tex", "max_line_length": 33, "max_stars_count": 1, "max_stars_repo_head_hexsha": "26ef348463c1f948c7c7fb565edf900f7c041560", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "xqy2003/CSP-Project", "max_stars_repo_path": "problem/13/201803-5.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-14T01:47:19.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-14T01:47:19.000Z", "num_tokens": 22, "size": 53 }
\chapter*{Introduction}\addcontentsline{toc}{chapter}{Introduction} Amidst the countless stars and galaxies we observe in the Universe lie undetected structures of dark matter, orders of magnitude larger than the luminous objects they engulf. These vast invisible structures began in the very early Universe, as quantum fluctuations in the aftermath of the Big Bang. During the subsequent period of inflation, these primordial fluctations were amplified by the accelerated expansion of the Universe and then propagated through gravitational instability for billions of years. Despite constituting most of the matter in the Universe, dark matter has yet to be directly observed. In fact, it can only be studied through its gravitational interactions with luminous baryons, the matter of stars, galaxies, and celestial objects that emit light. In a way, the galaxies we observe in the cosmic volumes probed by our telescopes act as illuminated beacons tracing the vast dark matter terrains of the Universe. Over the past decade, spectroscopic redshift surveys like the Sloan Digital Sky Survey III Baryon Oscillation Spectroscopic Survey (BOSS; \citealt{Anderson:2012aa, Dawson:2013aa}) have exploited these galactic beacons to map out cosmic structures of the Universe. Precise measurements of distance and growth of large-scale structure (LSS) from these surveys, provide tests of cosmological models that describe the content, geometry and history of the Universe. The next leap in galaxy surveys will continue to expand the cosmic volumes probed by galaxies. These observations have the potential to constrain cosmological parameters with unprecedented precision. In the following sections, I briefly introduce how observations from galaxy redshift surveys can be used to test cosmological models, General Relativity, and particle physics beyond the Standard Model. %In this disseration, I address major methodological challenges in %analyzing LSS with galaxies %Through these galaxies, we can explore the properties of the underlying dark matter %and the growth of their structure. Measurements that quantify these properties allow us to make %precise calculations of cosmological parameters, which quantify the content, geometry, and %expansion history of the Universe. Ultimately the constraints we measure on these parameters, %enlighten us on the properties of dark energy, which remains one of the most crucial unsolved %questions in cosmology. Certainly these precise cosmological measurements require a profound %understanding of the formation and evolution of galaxies. Unfortunately there is no clear %narrative of galaxy formation and evolution due to the complex, non-linear, and stochastic nature %of the physical processes that govern them. %In fact, galaxy formation and evolution remain another central unsolved questions in %astrophysics and cosmology. However, since galaxies are enveloped in the massive gravitational %wells of their host dark matter structures, the underlying dark matter of galaxies undoubtedly %plays a crucial role in their formation and evolution. Therefore, with its implications on the most %crucial questions in both cosmology and our understanding of galaxies, the interactions between %galaxies and their host dark matter environments pose some of the most impactful questions, %questions that I seek to answer in my dissertation. 
\section{Large Scale Structure in $\Lambda$CDM} \label{sec:lss}
From the early Universe, primordial quantum fluctuations grow into the large-scale structures of the Universe we observe today through gravitational instability over different epochs of cosmic history. In this section, I briefly describe the simplified ({\em linear}) theory of this evolution and explain core concepts of LSS cosmology using galaxies.
Let us begin by defining the matter overdensity field (or density fluctuation) at comoving position $\bm{r}$:
\beq \label{eq:delta}
\delta(\bm{r}) = \frac{\rho(\bm{r}) - \bar{\rho}}{\bar{\rho}},
\eeq
where $\rho(\bm{r})$ is the density field and $\bar{\rho}$ is the mean density. Then, in Fourier space the density fluctuation becomes
\beq
\delta(\bm{k}) = \int \frac{{\rm d^3}\bm{r}}{(2\pi)^3}\; e^{-i\bm{k}\cdot\bm{r}}\;\delta(\bm{r}),
\eeq
the Fourier transform of $\delta(\bm{r})$. For describing the evolution of the overdensity field, Fourier space is often favored in the literature.
%because, as derived later in the section, the Fourier modes of $\delta$ evolve independently on large scales -- \emph{i.e.} in linear theory.
The information in the overdensity field is often quantified using its $N$-point statistics \citep{peebles80, Bernardeau:2002aa, DodelsonBook}. In fact, the two-point statistic is one of the most commonly used tools in large scale structure studies. This two-point statistic, which is also referred to as the correlation function, is defined as
\beq
\xi(\bm{r}) = \langle \delta(\bm{x})\delta(\bm{x} + \bm{r}) \rangle
\eeq
and in Fourier space as
\beq
\langle \delta(\bm{k})\delta(\bm{k'}) \rangle = (2\pi)^3 P(\bm{k})\;\delta^{D}(\bm{k}+\bm{k'}).
\eeq
$\delta^{D}$ is the Dirac delta function and $P(\bm{k})$ is the two-point statistic in Fourier space -- the {\em power spectrum}. $P(\bm{k})$ is the Fourier transform of $\xi(\bm{r})$ and, in principle, they contain the same information. In practice, however, analyzing $\xi(\bm{r})$ and $P(\bm{k})$ carries different caveats~\citep{Feldman:1994aa}. Throughout this dissertation, I will mainly focus on the power spectrum.
Now in order to determine the evolution of the matter overdensity field (on sub-horizon scales), consider pressureless dark matter, which constitutes most of the matter in the Universe. From the continuity, Euler, and Poisson equations
\beqa
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \; \bm{u}) = 0 \\
\frac{\partial \bm{u} }{\partial t} + (\bm{u} \cdot \nabla) \bm{u} + \nabla\Phi = 0 \\
\nabla^2\Phi - 4 \pi G \rho = 0
\eeqa
its equation of motion can be derived
\beq \label{eq:meszaros}
\frac{\partial^2 \delta}{\partial t^2} + 2 \frac{\dot{a}}{a} \frac{\partial \delta}{\partial t} - 4 \pi G \bar{\rho}\;\delta = 0.
\eeq
$\bm{u}$ is the velocity field, $\Phi$ is the gravitational potential, and $a$ is the scale factor. For a detailed derivation I refer readers to \cite{peebles80} and \cite{DodelsonBook}. The solution for this second order differential equation can be written as
\beq
\delta(\bm{r}, t) = D^{(+)}(t) A(\bm{r}) + D^{(-)}(t) B(\bm{r}).
\eeq
The density fluctuation has two components: a growing mode $D^{(+)}$ and a decaying mode $D^{(-)}$. The decaying mode, as its name suggests, decreases with time and its contribution becomes negligible in the late Universe, leaving only the growing mode. To quantify the evolution of the growing mode $D^{(+)}$, one commonly used quantity is the ``growth rate of structure'':
\beq \label{eq:f_growth}
f = \frac{ d\; {\rm ln}\;D^{(+)}}{d\; {\rm ln}\;a}.
\eeq This growth rate of structure is a key quantity in LSS cosmology for testing different cosmological models and theories of gravity. $f$ will be discussed further in Section~\ref{sec:rsd}. In addition to their gravitation evolution, the density fluctuations evolve through different epochs in cosmic history: inflation, radiation-dominated era, matter-radiation equality, and matter-dominated era. Each of these periods leave an imprint on the evolution of $\delta$. In Fig.~\ref{fig:lifo}, I mark the eras in the early Universe and plot how the physical scale of the Universe, represented by the Hubble radius, evolves with the scale factor $a$. During inflation, the Hubble radius remains constant. Afterwards the Universe becomes radiation dominated. Based on the Friedmann equations the Hubble radius during the radiation dominated era is approximately $\propto a^{2}$. After a period when radiation and matter have comparable energy densities, the Universe becomes matter dominated where the Hubble radius is approximately $\propto a^{3/2}$. Meanwhile, the physical scale of perturbations is $\lambda_{phys} = \lambda_{comov}\ a(t)$ and thus $\propto a(t)$. As Fig.~\ref{fig:lifo} schematically illustrates, perturbations exit the Hubble radius during inflation then reenter the Hubble radius later on. Depending on the physical scale of the perturbation, it enters either during the radiation-dominated era (smaller scale) or matter-dominated era (larger scale). The physical scale of perturbations that enter the horizon at the time of matter-radiation equality, where $a(t) = a_{eq}$, is $\lambda_{eq} \sim 500\;h^{-1}{\rm Mpc}$. Then perturbations that enter before the matter-radiation equality have physical scales $\lambda_{phys} < \lambda_{eq}$ and since they enter during the radiation dominated era, these smaller scale perturbations are effectively frozen and hence their growth is suppressed. On the other hand, the larger scale perturbations with $\lambda_{phys} > \lambda_{eq}$ enter after matter-radiation equality during the matter dominated epoch. These perturbations do {\em not} experience the suppression of growth of the radiation dominated era. The net effect on the overdensity as it goes through these epochs is the suppression of growth on scales smaller than $\lambda_{eq}$, or $k_{eq}$ in Fourier space, by a factor of $\sim k^4$. In practice, this scale dependent evolution of the density fluctuation is quantified through the ``transfer function'' $T(k)$~\citep{Eisenstein:1998aa, Eisenstein:1999aa}. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{figs/lifo.png} \caption{Schematic diagram that illustrates the evolution of the density fluctuations in the early Universe through inflation, radiation-dominated epoch, matter-radiation equality, and matter-dominated epoch. The evolution of the Hubble radius (solid line) remains flat during inflation (flat), scales by $\propto a^2$ during radiation domination, and $\propto a^{3/2}$ during matter domination. $a_{eq}$ marks matter radiation equality. The physical lengths of three constant comoving scales are marked by dashed, dotted, and dot-dashed lines. The dashed line represents physical lengths of perturbations that enter the horizon during matter-radiation equality $\lambda_{phys}=\lambda_{eq}$. The dot-dashed line mark perturbations that enter the horizon during radiation domination with $\lambda_{phys} < \lambda_{eq}$. 
The dotted line marks perturbations that enter the horizon during matter domination with $\lambda_{phys} > \lambda_{eq}$. As described in the text, the growth of perturbations with $\lambda_{phys} < \lambda_{eq}$ is suppressed because they enter the horizon during the radiation dominated era. The evolution of the density perturbation through these epochs is quantified through the transfer function $T(k)$.}
\label{fig:lifo}
\end{center}
\end{figure*}
The density fluctuations after inflation can be summarized by the power spectrum:
\beq
P_{\rm inf}(k) \propto k^{n_s}
\eeq
where $n_s$, the spectral tilt of the primordial power spectrum, is measured to be $\sim 1$ \citep{Harrison:1970aa, Peebles:1970, Zeldovich:1972, Komatsu:2011aa}. Then, the power spectrum of the density fluctuation in the late Universe can be expressed as
\beq
P(k) \propto k^{n_s} \; T^2(k) \; D^2(a),
\eeq
where $D(a) \equiv D^{(+)}$ is the growth function from earlier in this section. Through the cosmological models and parameters, which predict $T(k)$ and $D(a)$, we predict the power spectrum of the density fluctuation. These predictions can then be compared to measurements made from observations in order to produce constraints on cosmological parameters, better understand dark energy, and test theories of gravity.
Unfortunately, most of the matter in the Universe is in the form of dark matter and does not interact with radiation, so observers cannot measure the spatial/clustering statistics of dark matter directly. Instead, we measure the clustering of galaxies or quasars, which trace the underlying matter distribution. The smoothed galaxy/quasar density field can be approximated by a local function of the matter density field
\beq
\delta_g(\bm{r}) = f( \delta(\bm{r})).
\eeq
$f(\delta(\bm{r}))$ can then be expanded as a Taylor series~\citep{Fry:1993}:
\beq
\delta_g({\bf r}) = \sum\limits_{k=0}^{\infty} \frac{b_k}{k!} \delta^k
\eeq
where $b_0$ is chosen so that $\langle \delta_g \rangle = 0$ and $b_1$ is referred to as the linear bias factor. To linear order,
\beq
P_g(k) = b_1^2 P(k).
\eeq
The primary galaxy subpopulation used in LSS studies so far is luminous red galaxies~\citep{Eisenstein:2001aa, Dawson:2013aa}. These galaxies have $b_1 > 1$, which makes them {\em biased} tracers of the matter distribution~\citep{Zehavi:2005aa, Sheldon:2009aa,Gaztanaga:2009aa, Zhai:2016aa}. Luminous galaxies reside in larger potential wells. The peaks of the density fluctuation have stronger clustering properties than the overall overdensity field~\citep{Manera:2010aa}. Based on the derivation of this section, once we have the spatial distribution of galaxies or quasars, we can derive the clustering of the matter distribution and then infer cosmological constraints. In practice, however, a number of factors complicate this procedure. One major complication is redshift-space distortions, which will be discussed in the next section.
\section{Redshift-Space Distortions} \label{sec:rsd}
Spectroscopic redshift surveys, such as the 2dF Galaxy Redshift Survey~\citep{Colless:1999aa}, the Sloan Digital Sky Survey~\citep[SDSS][]{York:2000aa}, and BOSS, have mapped out millions of distant galaxies. Current surveys such as the Extended Baryon Oscillation Spectroscopic Survey (eBOSS; \citealt{Dawson:2015aa}), and future surveys such as the Dark Energy Spectroscopic Instrument (DESI; \citealt{Schlegel:2011aa, Morales:2012aa, Makarem:2014aa}) and the Subaru Prime Focus Spectrograph (PFS; \citealt{Takada:2014aa}), will continue to map out millions more.
These surveys dominate LSS studies and have been, and will continue to be, critical for inferring precise cosmological constraints. As their name suggests, however, these {\em redshift} surveys do not directly measure the actual position of galaxies. Instead they measure the angular positions (right ascension and declination) and redshifts of galaxies. These redshifts are a combined measurement of the recession velocities due to the expansion of the Universe and the peculiar velocities of the galaxies:
\beq
z_{\rm obv} = z_{\rm true} + \frac{v_{\rm pec}}{c}.
\eeq
The galaxy comoving positions derived from the angular positions and redshifts are in {\em redshift-space} and ``distorted'' compared to real-space comoving positions by
\beq
\bm{s} = \bm{x} + \frac{(\bm{v}_{\rm pec} \cdot \hat{n})}{H_0}\,\hat{n}
\eeq
where $\hat{n}$ is the unit vector along the line-of-sight. Thankfully, all hope is not lost. The peculiar velocities of galaxies are directly related to the total matter distribution, since galaxies can be thought of as test particles in a gravitational field. Using this relation, \cite{Kaiser:1984aa} derived an approximation for the distortion caused by the coherent infall of galaxies onto overdense regions in redshift space. This redshift-space distortion (RSD), often referred to as the Kaiser effect, causes observations of overdense regions to appear squashed along the line of sight in redshift-space. Galaxies around an overdense region that are closest to the observer on Earth are moving towards the center of the overdense region and away from the observer. So in redshift-space they appear farther away than their true position. Galaxies on the other side are moving towards both the overdense region and the observer, so they appear closer to us in redshift-space.
The relation between the real-space and redshift-space overdensity fields can be derived from the continuity equation and the distant observer approximation,
\beq
\delta^{(s)}(\bm{k}) = (1 + f \mu^2) \delta(\bm{k}).
\eeq
$f$ is the growth rate of structure (Eq.~\ref{eq:f_growth}) and $\mu = \bm{k} \cdot \hat{n} / k$, the cosine of the angle between $\bm{k}$ and the line-of-sight. Incorporating the Kaiser effect into the galaxy bias model from Section~\ref{sec:lss}, the galaxy/quasar overdensity field in redshift-space becomes
\beq
\delta_g^{(s)}(\bm{k}) = (b_1 + f \mu^2) \delta(\bm{k}).
\eeq
The redshift-space power spectrum of the galaxy overdensity field can then be written as
\beq
P_g^{(s)}(k, \mu) = (b_1 + f \mu^2)^2 P(\bm{k}).
\eeq
On large scales and with small overdensities, the effect of redshift-space distortions is well described by the Kaiser effect. On small scales with large overdensities things get a little more complicated. The random peculiar velocities of galaxies in gravitationally bound structures such as clusters cause their positions in redshift-space to be smeared out to larger scales along the line-of-sight. This effect can easily be identified by eye in galaxy redshift maps where the elongations of the galaxy positions along the line-of-sight resemble fingers pointing towards the observer. Aptly, this redshift-space distortion is referred to as the ``fingers-of-god''. Its impact on the power spectrum is empirically modeled and typically quantified using an overall exponential factor \citep[][]{Jackson:1972aa,Scoccimarro:2004aa,Taruya:2010aa,Beutler:2016aa}.
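To make the $\mu$ dependence introduced by the Kaiser factor concrete, the following minimal NumPy sketch (a toy example with an assumed power-law $P(k)$ and illustrative values of $b_1$ and $f$; it is not part of any analysis pipeline used in this work) evaluates the Kaiser-limit $P_g^{(s)}(k,\mu)$ on a grid and extracts its Legendre moments by Gauss--Legendre quadrature:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

b1, f = 2.0, 0.75                     # illustrative (assumed) values
k = np.logspace(-2, 0, 50)            # wavenumbers
P_lin = 2.0e4 * k**-1.5               # toy linear power spectrum

mu, w = leggauss(16)                  # nodes/weights on [-1, 1]
P_s = (b1 + f * mu[None, :]**2)**2 * P_lin[:, None]   # Kaiser P(k, mu)

def multipole(P_kmu, ell):
    # P_ell(k) = (2 ell + 1)/2 * integral_{-1}^{1} dmu P(k, mu) L_ell(mu)
    c = np.zeros(ell + 1); c[ell] = 1.0
    return (2 * ell + 1) / 2.0 * np.sum(w * legval(mu, c) * P_kmu, axis=1)

P0, P2 = multipole(P_s, 0), multipole(P_s, 2)
print(P2[0] / P0[0])   # scale-independent ratio, set by f and b1 alone
\end{verbatim}
In the Kaiser limit this quadrupole-to-monopole ratio is independent of scale and fixed by $f$ and $b_1$ alone, which is the basis for extracting $f$ as discussed below.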
Including both RSDs from the Kaiser effect and the fingers-of-god, the redshift-space power spectrum is then \beq \label{eq:pk_rsd} P_g^{(s)}(k, \mu) \approx e^{-f^2 \sigma_v^2 \mu^2 k^2} (b_1 + f \mu^2)^2 P(k) \eeq where $\sigma_v$ is a parameter quantifying the strength of the effect and is usually left as a free parameter in analyses. Eq.~\ref{eq:pk_rsd} reveals the $f$ dependence in RSDs. RSD analyses in LSS studies exploit this dependence by measuring the impact of RSDs on the power spectrum to constrain $f$. Consider the Legendre expansion of $P_g^{(s)}(k, \mu)$, \beq P_g^{(s)}(k, \mu) = \sum\limits_{\ell=0, 2, 4 ...} \mathcal{L}_\ell(\mu) P_g^\ell(k). \eeq Each power spectrum ``multipole'' of this expansion is then \beq P_g^{\ell}(k) = \frac{2 \ell + 1}{2} \int\limits_{-1}^{1} {\rm d}\mu \; P_g^{(s)}(k, \mu)\; \mathcal{L}_\ell(\mu). \eeq The RSD factors in the power spectrum multipoles for $\ell = 0$ (monopole) and $2$ (quadrupole) are \beqa P_g^0 (k) &=& (b_1^2 + \frac{2}{3} f b_1 + \frac{1}{5}f^2) P(k) \\ P_g^2 (k) &=& (\frac{4}{3} f b_1 + \frac{4}{7} f^2) P(k). \eeqa For simplicity, we neglect the fingers-of-god, which does not significantly impact larger scales. Taking the ratio of the quadrupole over the monopole, \beq \label{eq:multipole_ratio} \frac{P_g^2}{P_g^0} = \frac{\frac{4}{3} f b_1 + \frac{4}{7} f^2}{b_1^2 + \frac{2}{3} f b_1 + \frac{1}{5}f^2}, \eeq we can in principle eliminate the dependence on scale and extract information on $f$. Of course, in practice the simplified derivations of this section break down. Instead of the simple linear theory models I derived, models of $P_g^{(s)}$ are derived using perturbation theory and incorporate more sophisticated RSD and bias models~\citep[][]{Bernardeau:2002aa, Scoccimarro:2004aa, Taruya:2010aa, Nishimichi:2011aa, Taruya:2013aa, Taruya:2014aa, Beutler:2016aa}. These models are then compared to the observed $P_g$ multipoles from galaxy surveys in order to derive constraints on cosmological parameters such as $f$. %illustrates how the distortions caused by RSDs allow us to extract information of $f$ through measurements of the redshift-space galaxy power spectrum! \section{Weighting Neutrinos with Galaxies} \label{sec:mneut} Beyond inferring the growth rate of structure, which can be used to test GR and modified gravity scenarios, galaxy clustering also provides a unique window to probe fundamental physics beyond the standard model. In the derivations of Sections~\ref{sec:lss} and~\ref{sec:rsd} we focused on how the dark matter density fluctuations evolve. This is an excellent approximation because dark matter constitutes the majority of matter in the Universe. However, it neglects some of the more detailed imprints on LSS from other components of matter -- \emph{i.e.} neutrinos, which oscillation and detection experiments have \emph{very} convincingly (Nobel Prize in Physics 2015) confirmed are {\em not} massless~\citep[][]{ Hu:1998aa, Lesgourgues:2012aa, Lesgourgues:2013aa, Lesgourgues:2014aa}. %(\todo{Beringer et al. 2012,Lesgourges, 2012, 2013}) In the very early Universe, neutrinos are relativistic and coupled to the primordial plasma. Later they decouple from the plasma while they are still ultra-relativistic, and subsequently redshift. At this point, they do not contribute to the energy density of matter but to that of radiation. Eventually, during the matter domination era, neutrinos become non-relativistic and then contribute to the matter energy density, acting as ``warm/hot'' dark matter. 
After decoupling from the primordial plasma, neutrinos are effectively a collisionless fluid, where the individual particles free-stream with characteristic velocities defined by their thermal velocity. Earlier on, when they are relativistic, their free-streaming scale is simply equal to the Hubble radius. Later, when they are non-relativistic, their characteristic velocity is approximately \beq v_{\rm th} \approx 158 (1 + z) \left(\frac{1 {\rm eV}}{m} \right) \; \; {\rm km \; s^{-1}} \eeq and the free-streaming scale can be derived in a way analogous to the Jeans length derivation: \beq \lambda_{\rm FS} = 2 \pi \sqrt{\frac{2}{3}} \left( \frac{v_{\rm th}}{H} \right) \eeq or \beq \label{eq:kfs} k_{\rm FS} = \frac{2\pi a}{\lambda_{\rm FS}} \approx 0.82 \frac{\sqrt{\Omega_\Lambda + \Omega_m(1+z)^3}}{(1+z)^2} \left(\frac{m_\nu}{1\;{\rm eV}} \right)\; h\,{\rm Mpc}^{-1}, \eeq where $\Omega_\Lambda$ and $\Omega_m$ are the current cosmological constant and matter density fractions, respectively. Neutrinos leave two main imprints on LSS. In the early Universe they contribute to the radiation energy density but later, they contribute to the matter energy density. As described in Section~\ref{sec:lss}, matter-radiation equality marks the turning point in the suppression of the growth of structure, quantified by $T(k)$. The transition of neutrinos from radiation to matter impacts $a_{eq}$ and thus impacts $T(k)$ by shifting the turning point of the cold dark matter (CDM)-only power spectrum. Even after becoming non-relativistic, neutrinos still do not contribute to the clustering of matter on scales smaller than the free-streaming scale, \emph{i.e.} for $k > k_{\rm FS}$. The impact of this scale-dependent suppression of clustering can be analytically estimated for the matter power spectrum \citep{Bird:2012aa}: \beq \frac{\Delta P}{P} = \frac{P^{f_\nu \neq 0} - P^{f_\nu = 0}}{P^{f_\nu = 0}} \approx - 8 f_\nu \;\;\;\; {\rm for}\;\; k \gg k_{\rm FS}, \eeq where $f_\nu$ is the ratio of the neutrino energy density over that of matter ($\Omega_\nu / \Omega_m$). The total mass of neutrinos (\mneut) dictates the strength of these imprints and can therefore be constrained by the shape of the power spectrum. The same tools used for analyzing RSDs and measuring the growth rate of structure can also be used to measure \mneut\; from observations of galaxy surveys~\citep{Hu:1998ab, Costanzi:2013aa, Villaescusa:2015aa, Cuesta:2016ab}. Based on forecasts, upcoming galaxy surveys such as DESI\footnote{DESI \textit{Final Design Report} (FDR): \url{http://desi.lbl.gov/tdr/}} have the potential to infer the most stringent constraints on \mneut -- $\sigma_{\sum m_\nu} \sim 0.03\;{\rm eV}$. Such constraints %would trump the sensitivity of particle physics experiments~\citep[][]{Wolf_Katrin} and have the potential to distinguish between the normal and inverted neutrino mass hierarchies and reveal physics beyond the Standard Model. \section{Analyzing Galaxy Clustering} %Beyond the general description and derivation of the redshift-space galaxy power spectrum, the rest of galaxy clustering analysis in LSS studies follows the standard approach to Bayesian parameter inference. In the previous sections, I laid out the theoretical framework for LSS analysis using galaxy clustering. As I alluded to earlier, the models and predictions of this theoretical framework can be compared to observations to derive constraints on parameters of interest. In this section I describe the statistical framework for comparing the theoretical models to observations from galaxy surveys. 
The ultimate goal of galaxy clustering analyses is to derive probability distributions of the cosmological parameters (\emph{e.g.} $f$, \mneut) given the data from observations. The standard approach to deriving this {\em posterior} probability distribution is to use Bayesian parameter inference. Based on Bayes' theorem, the posterior probability distribution can be expressed as \beq P(\bm{\theta}| \bm{D}) = \frac{P(\bm{D}|\bm{\theta}) P(\bm{\theta})}{P(\bm{D})}. \eeq $\bm{D}$ and $\bm{\theta}$ refer to observations and cosmological parameters, respectively. $P(\bm{D}|\bm{\theta})$, the probability distribution function for the observation $\bm{D}$ given model parameters $\bm{\theta}$, is the {\em likelihood function} ($\mathcal{L}$). $P(\bm{\theta})$ is the {\em prior} probability distribution function. Lastly, $P(\bm{D})$ is the ``evidence'', which for our purposes is just a normalization factor independent of $\bm{\theta}$. The equation is more commonly simplified as \beqa \label{eq:bayes} P(\bm{\theta}|\bm{D}) &\propto& P(\bm{D}|\bm{\theta}) \; P(\bm{\theta}) \\ {\rm posterior} &\propto& {\rm likelihood}\; \times \; {\rm prior}. \eeqa In the context of galaxy clustering analyses and LSS cosmology in general, the likelihood function is {\em typically} assumed to have a Gaussian functional form and calculated as \beq \label{eq:likelihood} P(\bm{D}|\bm{\theta}) = \mathcal{L} = \frac{1}{(2\pi)^{N_d/2}\; {\rm det}\bm{C}^{1/2}}\; {\rm exp}\left[ -\frac{1}{2} (\bm{D} - F(\bm{\theta}))^T \bm{C}^{-1} (\bm{D} - F(\bm{\theta}))\right]. \eeq $\bm{D}$ is the data observed and measured from galaxy surveys with dimension $N_d$. $F(\bm{\theta})$ is the model prediction of the observable (\emph{e.g.} $P_g^{(s)}$) generated from cosmological parameters $\bm{\theta}$, described in earlier sections. And $\bm{C}$ is the covariance matrix. A number of different methods are used to estimate the covariance matrix. For instance, efforts to analytically estimate the covariance matrix from theory have been made in the past~\citep{Hamilton:2006aa, Pope:2008aa, dePutter:2012aa}. However, non-linear evolution, shot-noise, RSDs, and the mapping between galaxies and matter complicate accurate estimation. Jack-knife resampling~\citep{Shao:1995aa}, a commonly used method in astronomy for estimating covariances directly from the data, has also been used. However, the method requires a number of arbitrary choices and cannot account for fluctuations on the scale of the survey~\citep{Norberg:2009aa}. Instead, the latest analyses estimate $\bm{C}$ from galaxy mock catalogs generated from $N$-body simulations. For accurate estimation, on the order of $\sim 1000$ mock galaxy catalogs are required in the analysis~\citep{Scoccimarro:2002aa, McBridge:2011aa, Anderson:2012aa, Manera:2013aa, Rodriguez-Torres:2015aa, Kitaura:2016aa, Beutler:2016aa}. Developing fast and accurate galaxy mock catalogs for LSS analyses has now become a subfield of its own. As an added detail, in order to account for biases in the $\bm{C}$ estimate, standard analyses include a correction -- the Hartlap factor -- to the covariance matrix estimate~\citep{hartlap2007}. From $\bm{D}$, $F(\bm{\theta})$, and $\bm{C}$ we can evaluate an estimate of the likelihood function. From the likelihood, since the prior probability distribution is chosen {\em a priori}, the posterior probability distribution function of the cosmological parameters is essentially already determined. 
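To make this concrete, the following schematic sketch (in Python, with hypothetical array inputs standing in for the measured data vector, the model prediction, and the mock-based covariance estimate) evaluates such a Gaussian log-likelihood including the Hartlap correction mentioned above; it is intended purely as an illustration of Eq.~\ref{eq:likelihood}, not as the analysis code used in this dissertation.
\begin{verbatim}
import numpy as np

def gaussian_loglike(D, F_theta, C_hat, n_mocks):
    # D       : measured data vector of length N_d (e.g. P_g multipoles)
    # F_theta : model prediction for parameters theta (same length)
    # C_hat   : covariance matrix estimated from n_mocks mock catalogs
    n_d = len(D)
    # Hartlap (2007) factor correcting the bias of the inverted
    # mock-estimated covariance matrix
    hartlap = (n_mocks - n_d - 2.0) / (n_mocks - 1.0)
    C_inv = hartlap * np.linalg.inv(C_hat)
    resid = D - F_theta
    chi2 = resid @ C_inv @ resid
    sign, logdet = np.linalg.slogdet(C_hat)
    return -0.5 * (chi2 + logdet + n_d * np.log(2.0 * np.pi))
\end{verbatim}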
In practice, the posterior distribution is not evaluated at all points in parameter space, but rather sampled using a Markov Chain Monte Carlo sampler \citep[\emph{e.g.} $\mathtt{emcee}$;][]{emcee}. From the galaxy clustering analysis described in this chapter, the latest galaxy surveys have produced some remarkable constraints on cosmological parameters. From the SDSS and BOSS surveys, measurements of the power spectrum multipoles along with analogous configuration-space analyses have yielded a number of constraints on $f\sigma_8$~\citep{Reid:2012aa, Oka:2014aa, Beutler:2014aa, Alam:2015aa, Alam:2016aa, Beutler:2016aa}, where $\sigma_8$ is the rms linear fluctuation in density perturbations on scales of $8\;h^{-1}{\rm Mpc}$. %amplitude of the power spectrum on the scale of $8\;h^{-1}{\rm Mpc}$. Similar to the multipoles, power spectrum wedges have also been used, in both Fourier and configuration space, to infer $f\sigma_8$ constraints~\citep{Sanchez:2013aa, Sanchez:2016aa, Grieb:2016aa}. These $f \sigma_8$ constraints can then be compared to cosmological predictions from Cosmic Microwave Background (CMB) experiments such as the Wilkinson Microwave Anisotropy Probe~\citep{WMAP:2013} and {\em Planck}~\citep{Planck:2014aa} to test $\Lambda$CDM cosmology and General Relativity. The constraints from BOSS are generally consistent with $\Lambda$CDM and GR over $0.2 < z < 0.75$. For instance, \cite{Beutler:2016aa} derives $f\sigma_8 = 0.482 \pm 0.053, 0.455 \pm 0.050$, and $0.410 \pm 0.042$ from BOSS for effective redshifts $z_{\rm eff} = 0.38, 0.51$, and $0.61$. $f \sigma_8$ constraints from galaxy power spectrum analyses have also been combined with CMB data to constrain \mneut~\citep{Zhao:2013aa, Beutler:2014ab, Gil-Marin:2015aa}. \cite{Beutler:2014ab}, from combining constraints from galaxy power spectrum analyses with {\em Planck} CMB results, derives the upper bound \mneut$ < 0.51\;{\rm eV}$. %Cosmological measurements such as galaxy clustering statistics are no longer dominated by uncertainties from statistical precision, but from systematic effects of the measurements. This is a result of the millions of redshifts to distant galaxies that have been obtained through redshift surveys such as the 2dF Galaxy Redshift Survey (2dFGRS; \citealt{Colless:1999aa}) and the Sloan Digital Sky Survey III Baryon Oscillation Spectroscopic Survey Ongoing and future surveys, such as eBOSS, PFS, and DESI, will continue to collect many millions more redshifts and expand the probed cosmic volume by an order of magnitude. These observations have the potential to produce cosmological parameter constraints with unprecedented statistical precision. The main challenges for realizing their full potential are {\em methodological}. So far I have focused on LSS analyses using only the galaxy power spectrum -- the two-point statistic of the density fluctuations. Analyses restricted to just the two-point statistic, however, face a number of limitations. The constraints on the growth rate of structure, listed above, have all constrained $f \sigma_8$ rather than $f$ alone. The degeneracy between $f$ and $\sigma_8$ cannot be broken with $P(k)$ alone. Furthermore, the $P(k)$ multipoles in Eq.~\ref{eq:multipole_ratio} illustrate that $P(k)$ analyses also suffer from the degeneracy between $f$ and bias parameters. 
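To illustrate this degeneracy with the linear Kaiser factors of Section~\ref{sec:rsd}, the short sketch below (in Python, with purely illustrative parameter values) shows that two different $(b_1, f)$ pairs can produce a nearly identical monopole prefactor while their quadrupole prefactors differ substantially; this is why the quadrupole, or higher-order statistics, are needed to separate $f$ from the bias.
\begin{verbatim}
def kaiser_monopole(b1, f):
    # prefactor of P(k) in the l=0 multipole (linear Kaiser limit)
    return b1**2 + (2.0/3.0)*f*b1 + (1.0/5.0)*f**2

def kaiser_quadrupole(b1, f):
    # prefactor of P(k) in the l=2 multipole (linear Kaiser limit)
    return (4.0/3.0)*f*b1 + (4.0/7.0)*f**2

for b1, f in [(2.0, 0.70), (2.1, 0.42)]:   # illustrative values only
    print(b1, f, kaiser_monopole(b1, f), kaiser_quadrupole(b1, f))
\end{verbatim}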
\begin{figure*} \begin{center} \includegraphics[width=\textwidth]{figs/bisp.png} \caption{Amplitude of the reduced galaxy bispectrum $Q(k_1, k_2, k_3)$ plotted as a function of the ratios $k_2/k_1$ and $k_3/k_1$, which describe the triangle configurations. The $Q(k_1, k_2, k_3)$ in the left panel is calculated from a perturbation theory model while the right panel presents $Q(k_1, k_2, k_3)$ of the BOSS Data Release 12 CMASS galaxy sample using the \cite{Scoccimarro:2015aa} estimator. } \label{fig:bisp} \end{center} \end{figure*} The {\em bispectrum} $B(k_1, k_2, k_3)$, the three-point statistic of density fluctuations, can be used to break the degeneracies among $f$, $\sigma_8$, and bias parameters~\citep[][see \citealt{Bernardeau:2002aa} for a review]{Scoccimarro:1998aa, Verde:1998aa, Scoccimarro:2000aa}. The dependence on triangle configuration in $B(k_1, k_2, k_3)$ disentangles contributions from gravitational instability versus non-linear biasing of galaxies. Without going into any further detail, in Figure~\ref{fig:bisp} I present the reduced galaxy bispectrum $Q(k_1, k_2, k_3) = B(k_1, k_2, k_3)/(P(k_1)P(k_2) + P(k_2)P(k_3) + P(k_1)P(k_3))$ measurement for the BOSS Data Release 12 CMASS galaxy sample (right) and a perturbation theory model (left). The BOSS $Q(k_1, k_2, k_3)$ is measured using the \cite{Scoccimarro:2015aa} estimator. $P(k)$ and $B(k_1, k_2, k_3)$ measurements from galaxy surveys can be jointly analyzed in order to derive constraints explicitly on $f$. All LSS analyses suffer from observational systematic effects. For fiber-fed multi-object spectroscopic surveys (\emph{e.g.} SDSS, BOSS, eBOSS, DESI, and PFS) these effects include variations in target selection related to stellar density, image depth, seeing, and other factors~\citep{Ross:2012aa, Anderson:2012aa}. Fiber collisions, for instance, prevent surveys from collecting a significant fraction of redshifts due to physical constraints on the focal plane. As I detail in \chap{fc}, if not accounted for, their impact on $P(k)$ goes well beyond their angular scale and restricts analyses on small scales, which have higher signal-to-noise. In addition to diminishing the statistical power of galaxy redshift surveys, fiber collisions can also bias constraints on cosmological parameters. Many efforts have been made to tackle these challenges from observational systematics~\citep[][and \chap{fc}]{Ross:2012aa, Guo:2012aa}. In Eq.~\ref{eq:likelihood}, the likelihood function assumes a Gaussian functional form -- a standard assumption in LSS analyses. However, in detail, this assumption cannot be correct due to nonlinear gravitational evolution and biasing~\citep{Mo:1996aa, Sommerville:2001aa, Casas-Miranda:2002aa, Bernardeau:2002aa}. The likelihood also relies on the estimated covariance matrix to capture the sample variance of the data. Besides the labor and computational costs required to make them, simulated mock catalogs used for covariance matrix estimation are inaccurate on small scales~(see \citealt{cosmiccode,nifty} and references therein). Furthermore, the use of covariance matrix estimates rather than the ``true'' covariance matrix \citep{Sellentin:2016a}, along with systematics, impacts the likelihood in ways that are difficult to model. Fortunately, evaluating the explicit likelihood is {\em not} necessary for inferring cosmological parameters. Likelihood-free inference techniques such as Approximate Bayesian Computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. 
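To convey the basic idea behind ABC before the Population Monte Carlo implementation discussed next, here is a minimal rejection-sampling sketch (in Python, with a hypothetical simulator, distance function, and prior; it is an illustration of the general technique, not the algorithm of \chap{abc}).
\begin{verbatim}
import numpy as np

def abc_rejection(observed, simulator, distance, prior_draw,
                  epsilon, n_samples):
    # observed   : observed summary statistic (e.g. a P(k) measurement)
    # simulator  : function theta -> simulated summary statistic
    # distance   : function (sim, obs) -> scalar distance
    # prior_draw : function () -> one parameter draw from the prior
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_draw()
        sim = simulator(theta)
        # keep theta only if the simulation is close enough to the data;
        # no likelihood is ever evaluated
        if distance(sim, observed) < epsilon:
            accepted.append(theta)
    return np.array(accepted)
\end{verbatim}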
In \chap{abc} I combine ABC with a Population Monte Carlo sampler and apply it in the context of LSS. As described earlier, galaxies are biased tracers of the underlying matter distribution. For more than a decade, halo occupation modeling has been a popular framework for connecting galaxies to the dark matter structures underneath in galaxy formation and cosmology studies~\citep{Yang:2003aa, Tinker:2005aa, vandenBosch:2007aa, Zheng:2007aa, Conroy:2009aa, Guo:2011aa, Leauthaud:2012aa, Tinker:2013aa, Zu:2015aa}. The standard halo occupation model assumes that galaxies reside in dark matter halos and that their occupation is a function of only the mass of the halo. However, the clustering of dark matter halos depends on properties beyond their masses, such as their assembly history. If this effect, coined {\em halo assembly bias}, propagates to galaxies, it will induce {\em galaxy assembly bias} in the standard halo occupation model and significantly impact galaxy clustering analyses~\citep[][]{hearin15, Zentner:2016aa, Vakili:2016aa}. Therefore, a better understanding of the galaxy-halo connection is essential for LSS analyses. Beyond their utility as tracers for cosmology, galaxies also pose fundamental questions regarding how the early homogeneous Universe became the heterogeneous one today. Observations have now firmly established a global view of galaxy properties out to $z\sim1$~\citep[\emph{e.g.}][\chap{galenv}]{Bell:2004aa, Bundy:2006aa, Cooper:2007aa, Cassata:2008aa, Blanton:2009aa, Whitaker:2012aa, Moustakas:2013aa}. Galaxies roughly fall into two categories: star-forming galaxies and quiescent ones with little star formation. The star-forming population undergoes a significant decline in star formation rate (SFR) over cosmic time and significant fractions of them also rapidly ``quench'' their star formation and become quiescent. The underlying drivers of this evolution, however, are not directly revealed by observations. Cosmology, which precisely predicts the dark matter evolution, provides a framework for answering {\em specific} and {\em tractable} questions in galaxy evolution. In $\Lambda$CDM, structures form ``hierarchically'' --- smaller ones form earlier and subsequently merge to form larger ones. The galaxy population can be positioned in this framework with halo occupation in order to constrain key elements of their evolution and better understand the galaxy-halo connection \citep[][]{Wetzel:2013aa, Wetzel:2014aa, Tinker:2016ab, Tinker:2017aa}. In \chap{galhalo}, I use this approach to measure the timescale of star-formation quenching in central galaxies. In this dissertation, I tackle key methodological challenges in LSS analyses with galaxy clustering by developing methods to robustly treat systematics (\chap{fc}), introducing innovative approaches to inference in LSS studies (\chap{abc}), and improving our understanding of the galaxy-halo connection (\chapname s~\chapalt{galenv} and~\chapalt{galhalo}). Each Chapter contributes to unlocking the full potential of current and future galaxy redshift surveys and will be critical for testing cosmological models and General Relativity and constraining the total neutrino mass. \chapname s~\chapalt{fc} and~\chapalt{galenv} have both been refereed and published in the astronomical literature. \chapname s~\chapalt{abc} and~\chapalt{galhalo} have both been refereed and accepted to the \emph{Monthly Notices of the Royal Astronomical Society} and \emph{The Astrophysical Journal}, respectively. 
All of these \chapname s were co-authored with collaborators, but the majority of the work and writing in each \chapname\ is mine. Below, I describe my contributions to each \chapname: \begin{enumerate} {\item For \chap{fc}, I developed the idea for the project in collaboration with Roman Scoccimarro and Michael Blanton. I implemented the project with contributions from Roman Scoccimarro. The project utilized simulation data from Jeremy Tinker and Sergio Rodr\'{i}guez-Torres. I wrote the paper with additions from Roman Scoccimarro and edits by Michael Blanton. } {\item For \chap{abc}, I developed the idea for the project in collaboration with Mohammadjavad Vakili, Andrew Hearin, and David Hogg. I implemented the project with Mohammadjavad Vakili and contributions from Andrew Hearin and Kilian Walsh. The project utilized software written by Andrew Hearin and Duncan Campbell. I wrote the paper together with Mohammadjavad Vakili with additions from Andrew Hearin, David Hogg, and Kilian Walsh. } {\item For \chap{galenv}, I developed the idea for the project in collaboration with Michael Blanton. I implemented the project using catalogs constructed by John Moustakas from observations made by the PRIMUS collaboration (Scott Burles, Alison Coil, Richard Cool, Daniel Eisenstein, Ramin Skibba, Kenneth Wong, and Guangtun Zhu). I wrote the paper with additions from Michael Blanton. } {\item For \chap{galhalo}, I developed the idea for the project in collaboration with Jeremy Tinker. I implemented the project using simulation data from Andrew Wetzel. I wrote the paper with comments and edits by Jeremy Tinker and Andrew Wetzel. } \end{enumerate}
{ "alphanum_fraction": 0.7578128821, "avg_line_length": 61.7734138973, "ext": "tex", "hexsha": "0a428e7b9f3def9c4ff369be15091a492696b98f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2eaa61691d22d8a5ff36e801da6fd882528f3981", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "changhoonhahn/DisThesis", "max_forks_repo_path": "intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2eaa61691d22d8a5ff36e801da6fd882528f3981", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "changhoonhahn/DisThesis", "max_issues_repo_path": "intro.tex", "max_line_length": 449, "max_stars_count": null, "max_stars_repo_head_hexsha": "2eaa61691d22d8a5ff36e801da6fd882528f3981", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "changhoonhahn/DisThesis", "max_stars_repo_path": "intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10786, "size": 40894 }
\documentclass[12pt]{article} \usepackage[english]{babel} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb} \usepackage{mathdots} \usepackage{hyperref} \usepackage{graphicx} \usepackage{url} \usepackage[T1]{fontenc} \usepackage{euler} % for math \usepackage{beramono} % for typewriter \usepackage{newtxtext} % for text \usepackage{minted} \textwidth 16cm \textheight 23cm \topmargin -1cm \oddsidemargin 0cm %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{\texttt{unittest} and \texttt{unittest.mock}\\ Python modules: some exercises} \author{Massimo Nocentini\\ Dipartimento di Statistica, Informatica, Applicazioni \\ viale Morgagni 65, 50134, Firenze, Italia \\ {\sl [email protected]}} \date{\today} \maketitle \begin{abstract} This note collects three exercises about \textit{Test-Driven Development} and \textit{Mocking} methodologies; in particular, the \texttt{unittest} and \texttt{unittest.mock} Python modules are investigated and used in order to simulate the Java counterpart given in \cite{course}. \end{abstract} \section{Introduction} Our aim is to deepen the understanding of two Python modules, namely \texttt{unittest} and \texttt{unittest.mock}, and to use them in an \emph{agile programming} context: the former concerns \textit{TDD}, the latter concerns \textit{mocking}, as their names suggest. Both modules belong to the Python Standard Library \cite{psl}, available for both the Python 2.7 and 3.x releases; we then code two toy projects, \texttt{factorial} and \texttt{payroll} respectively, using the provided definitions in order to write clean and maintainable pieces of code. As a side project, we bootstrap a tiny testing framework from scratch, all versioned and integrated on top of \emph{GitHub} and \emph{Travis CI}. The following sections describe them in more detail. \section{Toy projects} \subsection{\texttt{factorial}} In this exercise we define and implement the factorial function $n!$ for $n\in\mathbb{N}$, driven by test cases using the \texttt{unittest} module \cite{unittest}; the following definitions read as a \emph{runnable specification}: \inputminted{python}{../factorial/factorial_test.py} Therefore a possible implementation satisfying the given requirements reads as follows: \inputminted{python}{../factorial/factorial.py} \noindent where it is interesting to observe that the implementation uses iteration, while the specification is given by recursion. Moreover, examples seen during classes have been easily ported on top of the \texttt{unittest} module. \subsection{\texttt{payroll}} In this exercise we want to \emph{mock} dependencies of \mintinline{python}{Payroll} objects to test them in isolation using definitions provided by the \texttt{unittest.mock} module \cite{unittest.mock}. Here is the system under test: \inputminted[firstline=1, lastline=19]{python}{../payroll/payroll.py} \noindent where dependencies are given in the constructor and interaction happens in the method \mintinline{python}{monthly_payment}.
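\noindent For readers who do not compile the excerpt included above, a minimal sketch of what such a class could look like is the following (an illustrative, hypothetical snippet, not the actual content of \texttt{payroll.py}):
\begin{minted}{python}
class Payroll:

    def __init__(self, employee_db, bank_service):
        # dependencies are injected through the constructor
        self.employee_db = employee_db
        self.bank_service = bank_service

    def monthly_payment(self):
        # interaction with the collaborators happens here
        for employee in self.employee_db.get_all_employees():
            self.bank_service.pay(employee)
\end{minted}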
On the other hand, in the corresponding test class, dependencies are mocked with \texttt{unittest.mock.Mock} objects in the \texttt{setUp} method and an example of \emph{stubbing} is given in the following chunk: \inputminted[firstline=20, lastline=30]{python}{../payroll/payroll_test.py} Moreover, the following methods have been implemented to show a parallel with the objects and methods provided by \texttt{Mockito}, a Java mocking framework, regarding the interception of a single method call: \inputminted[firstline=42, lastline=51]{python}{../payroll/payroll_test.py} and regarding a sequence of method calls: \inputminted[firstline=63, lastline=73]{python}{../payroll/payroll_test.py} Finally, all the code shown in Java has been ported to the test class \texttt{PayrollTest}. \subsection{\texttt{bootstrapping}} In this exercise we experience the bootstrapping of a unit test framework, driven by tests themselves; in parallel, we find it interesting to look at the version history to understand the steps that made it happen, strictly following \cite{beck}. The driver reads as follows: \inputminted{python}{../tdd/tdd_test.py} \section{GitHub and Travis CI} All code has been versioned using \emph{Git} and hosted at \cite{repo}; moreover, a link with the \emph{Travis CI} server \cite{travis} has been established in order to have automatic builds for both branches and pull requests, targeting two different releases of Python. %\inputminted{yaml}{../.travis.yml} \section{Conclusions} We have shown two ports of simple exercises using the Python language, in order to understand and translate the TDD and mocking concepts provided by Java frameworks into pure Python; additionally, a simple but working test framework has been implemented from scratch. % Bibliography {{{ \begin{thebibliography}{10} \bibitem{course} Lorenzo Bettini, \newblock {\em B026351 (B059) - Tecniche Avanzate di Programmazione, 2016/2017} \newblock \url{https://e-l.unifi.it/course/view.php?id=2215} % \bibitem{psl} Python Software Foundation, \newblock {\em Python Standard Library}, \newblock \url{https://docs.python.org/3/library/} % \bibitem{beck} Kent Beck, \newblock {\em Test Driven Development: By Example}, \newblock Addison-Wesley, 2002 % \bibitem{unittest} Python Software Foundation, \newblock {\em \texttt{unittest} -- Unit testing framework}, \newblock \url{https://docs.python.org/3/library/unittest.html} % \bibitem{unittest.mock} Python Software Foundation, \newblock {\em \texttt{unittest.mock} -- mock object library}, \newblock \url{https://docs.python.org/3/library/unittest.mock.html} % \bibitem{repo} Massimo Nocentini, \newblock \url{https://github.com/massimo-nocentini/advanced-programming-techniques-course} % \bibitem{travis} Massimo Nocentini, \newblock \url{https://travis-ci.org/massimo-nocentini/advanced-programming-techniques-course} % \end{thebibliography} % }}} \end{document}
{ "alphanum_fraction": 0.7545992116, "avg_line_length": 39.0256410256, "ext": "tex", "hexsha": "335f04e09e84373c34fecdc302ad55ebea035ed6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "11b6573893dcac38a33b6c725cddea9962e0833a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "massimo-nocentini/advanced-programming-techniques-course", "max_forks_repo_path": "tex/doc.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "11b6573893dcac38a33b6c725cddea9962e0833a", "max_issues_repo_issues_event_max_datetime": "2017-01-04T12:28:13.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-04T10:36:51.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "massimo-nocentini/apt-unifi-course", "max_issues_repo_path": "tex/doc.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "11b6573893dcac38a33b6c725cddea9962e0833a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "massimo-nocentini/apt-unifi-course", "max_stars_repo_path": "tex/doc.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1573, "size": 6088 }
\documentclass[10pt,a4paper]{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{makeidx} \usepackage{listings} \usepackage{color} \usepackage{minted} \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mygray}{rgb}{0.5,0.5,0.5} \definecolor{mymauve}{rgb}{0.58,0,0.82} % Default fixed font does not support bold face \DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold \DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12} % for normal \usepackage{fontspec} \setmonofont{Consolas} % Custom colors \usepackage{color} \definecolor{deepblue}{rgb}{0,0,0.5} \definecolor{deepred}{rgb}{0.6,0,0} \definecolor{deepgreen}{rgb}{0,0.5,0} \usepackage{listings} % Python style for highlighting \newcommand\pythonstyle{\lstset{ language=Python, basicstyle=\ttfamily, otherkeywords={self}, % Add keywords here keywordstyle=\ttfamily\color{deepblue}, emph={MyClass,__init__}, % Custom highlighting emphstyle=\ttfamily\color{deepred}, % Custom highlighting style stringstyle=\color{deepgreen}, frame=tb, % Any extra options here showstringspaces=false % }} % Python environment \lstnewenvironment{python}[1][] { \pythonstyle \lstset{#1} } {} % Python for external files \newcommand\pythonexternal[2][]{{ \pythonstyle \lstinputlisting[#1]{#2}}} % Python for inline \newcommand\pythoninline[1]{{\pythonstyle\lstinline!#1!}} \usepackage{amssymb} \usepackage{graphicx} \usepackage{float} \usepackage{hyperref} %\usepackage[Sonny]{fncychap} \floatstyle{boxed} \restylefloat{figure} \usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} \newcommand{\HRule}{\rule{\linewidth}{0.5mm}} \begin{document} \input{title.tex} %\maketitle \newpage \tableofcontents \newpage \section{Overview} \input{overview.tex} \input{tracking.tex} \newpage \input{bib.tex} \end{document}
{ "alphanum_fraction": 0.7299191375, "avg_line_length": 22.6219512195, "ext": "tex", "hexsha": "3560f90cb7cbca717d39ca19937a2653eaba04ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5791ba2f8557a278404de37fb9c13042abe3ae27", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "andyfangdz/ProjectQuad-restored", "max_forks_repo_path": "Paper/main.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "5791ba2f8557a278404de37fb9c13042abe3ae27", "max_issues_repo_issues_event_max_datetime": "2016-02-09T09:26:28.000Z", "max_issues_repo_issues_event_min_datetime": "2016-02-09T09:26:26.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "andyfangdz/ProjectQuad-restored", "max_issues_repo_path": "Paper/main.tex", "max_line_length": 66, "max_stars_count": null, "max_stars_repo_head_hexsha": "5791ba2f8557a278404de37fb9c13042abe3ae27", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "andyfangdz/ProjectQuad-restored", "max_stars_repo_path": "Paper/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 614, "size": 1855 }
\documentclass{standalone} \begin{document} \subsection{Deformable Model} Deformable Models use an artificial, closed contour/surface able to expand or contract over time and conform to a specific image feature~\cite{INP:Withey}. This approach is a physically motivated, model-based technique for the detection of region boundaries~\cite{ART:Pham}.\\ The curve/surface is placed near the desired boundary and is deformed by the action of internal and external forces that act iteratively. The external forces are usually derived from the image.\\ This approach has the capability to directly generate closed parametric curves or surfaces from images and to incorporate a smoothness constraint that provides robustness to noise and spurious edges. \\ However, this approach requires manual interaction to choose an appropriate set of parameters. \end{document}
{ "alphanum_fraction": 0.805209513, "avg_line_length": 98.1111111111, "ext": "tex", "hexsha": "5cb9e635677b8e685042e14be85e0d1f8c8bf14c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RiccardoBiondi/SCDthesis", "max_forks_repo_path": "tex/Chapter1/MainSegmentation/DeformableModel.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RiccardoBiondi/SCDthesis", "max_issues_repo_path": "tex/Chapter1/MainSegmentation/DeformableModel.tex", "max_line_length": 278, "max_stars_count": null, "max_stars_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RiccardoBiondi/SCDthesis", "max_stars_repo_path": "tex/Chapter1/MainSegmentation/DeformableModel.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 192, "size": 883 }
\section{Mutual information} We now consider the case where we have a set of images. These can be R, G, B color band images, observations of the same area but at different times, etc. We are interested in having an entropy measurement of such data sets, knowing that the correlation between two images can be relatively high. A 3-dimensional wavelet transform is not appropriate because the third dimension is generally completely different from the first two. The standard case is the data set where we have 2-dimensional spatial information versus a frequency band. \subsection{Principal component analysis} Principal Component Analysis (PCA), also often referred to as the eigenvector, Hotelling, or Karhunen-Lo\`eve transform \cite{ima:karhunen47,ima:loeve48,ima:hotelling33}, allows us to transform discrete signals into a sequence of uncorrelated coefficients. Considering a population $D(1..M,1..N)$ of $M$ signals or images of dimension $N$, the PCA method consists of expressing the dataset $D$ by: \begin{eqnarray} D = U \Lambda^{\frac{1}{2}} V^t \end{eqnarray} and \begin{eqnarray} DD^t & = & U \Lambda U^t \\ D^t D & = & V \Lambda V^t \end{eqnarray} where $\Lambda$ is the diagonal matrix of eigenvalues of the covariance matrix $C = DD^t$, the columns of $U$ are the eigenvectors of $C$, and the columns of $V$ are the eigenvectors of $D^t D$. For a signal $d(1..N)$ we have \begin{eqnarray} d = \sum_{i=1}^{M} \sqrt{\lambda_i} u_i v_i^t \end{eqnarray} where $\lambda_i$ are the eigenvalues of the covariance matrix. The $v_i$ vectors can also be calculated by \cite{ima:bijaoui79}: \begin{eqnarray} V(i,j) = v_i(j) = \frac{1}{\sqrt{\lambda_i}} \sum_k D(i,k) u_i(j) \end{eqnarray} In practice, we construct the matrix $A$ whose rows are formed from the eigenvectors of $C$ \cite{ima:gonzalez93}, ordered following the monotonically decreasing order of the eigenvalues. A vector $x(1..M)$ can then be transformed by: \begin{eqnarray} y = \Lambda^{-\frac{1}{2}}A(x-m_x) \end{eqnarray} where $m_x$ is the mean value of $x$. Because the rows of $A$ are orthonormal vectors, $A^{-1} = A^t$, and any vector $x$ can be recovered from its corresponding $y$ by: \begin{eqnarray} x = \Lambda^{\frac{1}{2}}A^t y + m_x \end{eqnarray} The $\Lambda$ matrix multiplication can be seen as a normalization. Building $A$ from the correlation matrix instead of the covariance matrix leads to another kind of normalization, and the $\Lambda$ matrix can be suppressed ($y = A(x-m_x)$ and $x = A^t y + m_x$). Then the norm of $y$ will be equal to the norm of $x$. \subsection{The WT-PCA transform} We now consider that we have $M$ observations of the same view, but at different wavelengths (or at different epochs, etc.), and denote as $D_k$ one observation, and $W(k, 1..P)$ its wavelet transform, $P$ being the number of scales (or bands) of the wavelet transform. Then for a given frequency band $j$, we can compare the information content of all observations. As a strong correlation may exist between the same frequency band of two different observations, we can apply a principal component analysis for the specific scale $j$, and repeat the same operation for all $j$. Then, for each scale $j$, we build a correlation matrix $C_j$ (and also $A_j$), and the wavelet coefficients are transformed into their principal components. As these coefficients are obtained by successively applying a wavelet transform and a principal component analysis, we will term these two transformations a WT-PCA transform, and the values obtained will be called WT-PCA coefficients. 
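As an illustrative sketch of the per-scale PCA step just described (assuming the wavelet coefficients of each observation at scale $j$ have already been computed by some wavelet transform routine, which is not specified here), one could write, e.g. in Python:
\begin{verbatim}
import numpy as np

def wt_pca_scale(coeffs_j):
    # coeffs_j : array of shape (M, N_j) holding, for scale j, the
    #            wavelet coefficients of the M observations
    centered = coeffs_j - coeffs_j.mean(axis=1, keepdims=True)
    corr = np.corrcoef(centered)            # M x M correlation matrix C_j
    eigval, eigvec = np.linalg.eigh(corr)   # eigen-decomposition
    order = np.argsort(eigval)[::-1]        # decreasing eigenvalues
    A_j = eigvec[:, order].T                # rows = eigenvectors of C_j
    return A_j @ centered, A_j              # principal components and A_j
\end{verbatim}
The same operation is then repeated independently for every scale $j$.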
The advantages of the WT-PCA transform are: \begin{itemize} \item We have separated the information not only spatially as if we had used only a wavelet transform, but also on the wavelength axis as if we had directly used a PCA on the images. \item We keep the possibility open to estimate the noise level of the WT-PCA coefficients in the same rigorous way as for a simple wavelet transform. \item We have an exact reconstruction. \end{itemize} \subsection{Entropy from the WT-PCA transform} The entropy relative to a set of observations $D(1..M)$ can be written as: \begin{eqnarray} H(D) = \sum_{j=1}^{l} \sum_{e=1}^{M} \sum_{k=1}^{N_j} h(c_{j,k,e}) \end{eqnarray} where $l$ is the number of scales used in the wavelet transform decomposition, $M$ the number of observations, $k$ a pixel position, $c$ a WT-PCA coefficient, and $e$ denotes the eigenvector number. The last scale of the wavelet transform is not used, as previously, so this entropy measurement is background independent, which is really important because the background can vary from one wavelength to another.
{ "alphanum_fraction": 0.7504814894, "avg_line_length": 42.871559633, "ext": "tex", "hexsha": "44fe8b4ecd584c0e06786ce36db7c260f7ad1b47", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_path": "src/doc/doc_mra/doc_mr2/infomut.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_path": "src/doc/doc_mra/doc_mr2/infomut.tex", "max_line_length": 81, "max_stars_count": null, "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_path": "src/doc/doc_mra/doc_mr2/infomut.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1290, "size": 4673 }
\chapter{Lepton Hadron Interactions} Please $\backslash$input$\{path/file\}$ your text here.
{ "alphanum_fraction": 0.7634408602, "avg_line_length": 31, "ext": "tex", "hexsha": "4b9a4b1492f4a45cf2fec45d012457bf0a5b7482", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "berghaus/cernlib-docs", "max_forks_repo_path": "geant4/photolepton_hadron/photolepton_hadron.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "berghaus/cernlib-docs", "max_issues_repo_path": "geant4/photolepton_hadron/photolepton_hadron.tex", "max_line_length": 55, "max_stars_count": 1, "max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "berghaus/cernlib-docs", "max_stars_repo_path": "geant4/photolepton_hadron/photolepton_hadron.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z", "num_tokens": 25, "size": 93 }
% !TEX root = ./physics_of_fluids.tex % !TEX TS-program = xelatex % !TEX encoding = UTF-8 Unicode \chapter{Viscous flows} \label{chap:viscous_flows} Whenever the viscous diffusion timescale $\rho L^2/\mu$ is short with respect to inertial $L/U$ or imposed $\omega^{-1}$ timescales, flows will be dominated by viscosity. This situation occurs of course if the viscosity of the material is high, e.g. with mud, magma or glass melt (figure~\ref{fig:viscous_flows}), but not exclusively. Glaciers were reported to flow as early as 1873 \citep{Aitken1873}. These so-called \textit{rivers of ice} indeed flow, albeit very slowly -- typically less than a metre a day -- making inertial phenomena irrelevant (figure~\ref{fig:viscous_flows}; see also the slow motion footage taken by BBC Earth Lab \url{https://youtu.be/ghC-Ut0fW4o}). At the very small scales where bacteria and algae live and move, diffusion competes with and usually overcomes inertial effects. As a result, microorganisms living at such scales have evolved non-intuitive strategies to move, as we will see next. In all these examples the ratio between the diffusive and inertial timescales -- the Reynolds number Re = $\rho U L/\mu$ -- is much smaller than one. Fluid motion can still be described with the Navier-Stokes equations in this context, but we will now see that the condition Re $\ll$ 1 implies that some terms do not have the same order of magnitude as others. Exploiting the low-Re number hypothesis will enable us to obtain the relevant equation for viscous flow motion, the \textbf{Stokes equation}. \begin{figure}[htbp] \begin{center} \includegraphics[height=4cm]{Briksdalsbreen.jpg} \includegraphics[height=4cm]{fiberglass.jpg} \includegraphics[height=4cm]{sorin.png} \includegraphics[width=15cm]{salmonella.png} \caption{\textbf{Flows dominated by viscosity.} Top left: the Briksdalsbreen glacier in Norway is slowly flowing into a lake (photograph by vicrogo, public domain). Top middle: glass fibre manufacturing (photograph Saint-Gobain). Top right: modern optical fibre drawing processes allow multilayered fibres to be produced \citep{Abouraddy2007}. Bottom: micron-sized salmonellae swim with the help of a bundle of rotating helicoidal flagella \citep{Elgeti2015}.} \label{fig:viscous_flows} \end{center} \end{figure} \section{Low-Re number flows} Whenever fluids have a high viscosity $\mu$, or flow at a small scale $L$ or with a low velocity $U$, the equations describing their motion can be greatly simplified. This can be seen by non-dimensionalising the equations of motion with the natural scales of the problem $\bU=U\bu$, $\bX=L\bx$ (here we denote dimensioned quantities with capital letters). The pressure can be made dimensionless with a natural viscous scale $P=\mu\frac{U}{L}p$ and finally if there is an imposed timescale $\omega^{-1}$ (e.g. the inverse of the swimming frequency) we may write $T=\omega^{-1}t$. \begin{equation} \mathrm{Re}_\omega\pd{\bu}{t} + \mathrm{Re}\lp\bu\cdot\nabla\rp\bu=-\nabla p+\Delta \bu. \label{eq:pre-stokes_equation} \end{equation} Here, two Reynolds numbers appear: \begin{equation} \mathrm{Re}_\omega=\frac{\rho L^2 \omega}{\mu} \quad \text{and} \quad \mathrm{Re}=\frac{\rho U L}{\mu}, \end{equation} which each may be interpreted as a ratio of timescales as seen in the introduction of this chapter. Note that for e.g. flagella-propelled microorganisms, the oscillatory Reynolds number $\mathrm{Re}_\omega$ involves the relevant velocity scale $L\omega$ for the fluid set into motion by the oscillating flagella. 
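As a quick numerical illustration (a back-of-the-envelope sketch added here, assuming water-like values $\rho \approx 10^3\;\mathrm{kg\,m^{-3}}$ and $\mu \approx 10^{-3}\;\mathrm{Pa\,s}$), both Reynolds numbers can be estimated for a swimming bacterium; the orders of magnitude agree with those reported in table~\ref{tbl:Reynolds}.
\begin{verbatim}
# Order-of-magnitude estimate of Re and Re_omega for a bacterium
rho, mu = 1e3, 1e-3                  # water: density [kg/m^3], viscosity [Pa s]
L, U, omega = 10e-6, 10e-6, 100.0    # size [m], speed [m/s], beat frequency [1/s]

Re       = rho * U * L / mu          # ~1e-4 : inertial vs viscous effects
Re_omega = rho * L**2 * omega / mu   # ~1e-2 : diffusive vs imposed timescale
print(Re, Re_omega)
\end{verbatim}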
In equation~\eqref{eq:pre-stokes_equation} each variable has been rescaled with its expected range of variation. Therefore each of the force terms (right hand side) is $\mathcal O(1)$ while the unsteady and convective parts of the momentum variation (left hand side) are respectively of order $\mathrm{Re}_\omega$ and $\mathrm{Re}$. For flows without an imposed frequency (flowing glass melts or glacier flow), these two numbers will be identical. But if the flow is produced by an oscillating object, they can significantly differ. Table~\ref{tbl:Reynolds} reports an estimation of these two Reynolds numbers for a range of organisms living in aqueous environments, where it can be seen that the double condition $\mathrm{Re}_\omega\ll 1$, $\mathrm{Re}\ll 1$ is fulfilled in the realm of microorganisms. \begin{table} \begin{center} \begin{tabular}{cccccc} Organism & length & velocity & frequency & Re & Re$_\omega$\\ \hline\hline \textbf{Bacterium} & 10 $\mu$m & 10 $\mu$m/s & 100 Hz & \textbf{10$^\text{-4}$} & \textbf{10$^\text{-2}$}\\ \textbf{Spermatozoon} & 100 $\mu$m & 100 $\mu$m/s & 10 Hz & \textbf{10$^\text{-2}$} & \textbf{10$^\text{-1}$}\\ \textbf{Ciliate} & 100 $\mu$m & 1 mm/s & 10 Hz & \textbf{10$^\text{-1}$} & \textbf{10$^\text{-1}$}\\ Tadpole & 1 cm & 10 cm/s & 10 Hz & 10$^\text{3}$ & 10$^\text{3}$\\ Small fish & 10 cm & 10 cm/s & 10 Hz & 10$^\text{4}$ & 10$^\text{5}$\\ Penguin & 1 m & 1 m/s & 1 Hz & 10$^\text{6}$ & 10$^\text{6}$\\ Sperm whale & 10 m & 1 m/s & 0.1 Hz & 10$^\text{7}$ & 10$^\text{7}$\\ \hline \end{tabular} \end{center} \caption{\textbf{Reynolds numbers for different living organisms.} Bacteria, spermatozoa and ciliates are all characterised by Reynolds numbers Re and Re$_\omega$ much smaller than unity. The flowing fluid in their vicinity is therefore accurately described with the Stokes equation. Data from \citet{Lauga2020}.} \label{tbl:Reynolds} \end{table} In this limit, the Navier-Stokes equation reduces to the much simpler \textbf{Stokes equation}\index{Stokes equation}: \begin{equation} \nabla p=\Delta \bu\quad\text{or, in its dimensioned version:}\quad\nabla P=\mu\Delta \bU. \label{eq:stokes_equation} \end{equation} \prg{Cauchy equation.} The Stokes equation may also be rewritten as the following \textit{Cauchy equation}: \begin{equation} \nabla\cdot\tensorsym\sigma=\matrixsym 0. \end{equation} With this formulation it becomes apparent that viscous flows are \textit{force free}: in the absence of significant inertia, the forces balance each other. \section{Properties of Stokes flows} \prg{Linearity.} The Stokes equation~\eqref{eq:stokes_equation} is \textbf{linear}: this means that elementary solutions can be used to construct more involved ones, either as a weighted sum of individual singular solutions or as a convolution integral between an appropriate Green's function and the boundary data. For example, the knowledge of the velocity $\bU$ at the boundary $S$ of a fluid domain allows one to express the fluid velocity $\bu$ at any point $\br$ of the domain as: \begin{equation} u_i(\br)=\int_S \mathcal G_{ij}(\br|\br_0)U_j(\br_0)\,\mathrm dS(\br_0). \label{eq:stokes_green_velocity_boundary} \end{equation} Similarly, knowledge of the \textit{forces}\footnote{\textit{Mixed boundary conditions}, consisting of velocity data on part of the boundary and force data on the rest, can be treated with the same procedure.} $\bF$ at the domain boundary would lead to the following formal expression for the solution: \begin{equation} u_i(\br)=\int_S g_{ij}(\br|\br_0)F_j(\br_0)\,\mathrm dS(\br_0). 
\end{equation} A practical consequence of these relations is the \textbf{uniqueness} of the Stokes flow solution: the boundary conditions uniquely determine the solution. This contrasts with the multitude of solutions that can be encountered for higher Reynolds numbers and originate (mathematically) from the nonlinear term of the Navier-Stokes equation. \prg{Reversibility.\index{Stokes reversibility}} Another quite surprising property of Stokes flows is their \textbf{reversibility}. \begin{figure}[htbp] \begin{center} \includegraphics[width=4cm]{taylor_reversibility_1.png} \includegraphics[width=4cm]{taylor_reversibility_2.png} \includegraphics[width=4cm]{taylor_reversibility_3.png} \caption{\textbf{Stokes flow kinematic reversibility.} Left: a drop of dye is injected into a quiescent viscous liquid filling the space between two concentric cylinders. Middle: on rotating the inner cylinder with a handle, the dye is stretched and stirred so that it becomes barely visible. Right: reversing the cylinder motion brings the drop of dye back to its initial position almost perfectly (from G.I. Taylor's \textit{Low-Reynolds-Number Flows} movie \copyright\ National Committee for Fluid Mechanics Films / Education Development Center).} \label{fig:reversibility} \end{center} \end{figure} This counter-intuitive phenomenon is illustrated in figure~\ref{fig:reversibility}. A drop of dye mixed by differential rotation in a Taylor-Couette apparatus can be ``unmixed'' by reversing the boundary velocity. From a mathematical point of view, this reversal property can be understood by noting that changing the sign of the boundary velocity in \eqref{eq:stokes_green_velocity_boundary} simply changes the sign of the solution. Alternatively, it can be noted that the transformation $(\bu,p)\to(-\bu,-p)$ also yields a solution of the Stokes equation~\eqref{eq:stokes_equation} (provided that the boundary conditions are transformed as well). \prg{A paradox?} From a physical point of view, this reversibility is more troublesome: if diffusion is associated with irreversible microscopic phenomena, how can diffusion-dominated flows be reversible? Actually, the reversal illustrated in figure~\ref{fig:reversibility} is purely \textit{kinematic}: while the velocity fields have been reversed and the drop of dye has recovered its overall initial position, molecular diffusion has acted on the microscopic scale as evidenced by the slightly smeared aspect of the final drop, heat has been produced by viscous dissipation, and the entropy of the final state is indeed higher than that of the starting state. \section{Moving in a viscous world} A paradigm for motion in viscous fluids is the settling of a sphere, first investigated by Stokes. Admittedly lengthy calculations (see tutorial) yield the expressions for the velocity and pressure fields around a sphere of radius $R$ settling steadily at velocity $-\bV^\infty$ in a quiescent viscous fluid, in its reference frame: \begin{subequations} \label{eq:stokes_sphere} \begin{empheq}[left=\empheqlbrace]{alignat=2} u_i &\,=\,&& -\frac{3R}{4}V^\infty_j\lp\frac{\delta_{ij}}{r}+\frac{r_ir_j}{r^3}\rp -\frac{3R^3}{4}V^\infty_j\lp\frac{\delta_{ij}}{3r^3}-\frac{r_ir_j}{r^5}\rp,\\ p-p_\infty&\,=\,&&-\frac{3\mu R}{2}\frac{V^\infty_jr_j}{r^3}. \end{empheq} \end{subequations} \begin{figure*} \begin{center} \includegraphics{guazzelli_fixed_sphere.pdf} \includegraphics{guazzelli_moving_sphere.pdf} \caption{\textbf{Spheres in viscous flows}. 
Left: A fixed sphere deflects the surrounding flowing fluid. Right: A moving sphere in a still environment pushes the fluid in its vicinity \citep{Guazzelli2011}.} \label{fig:viscous_spheres} \end{center} \end{figure*} These expressions allow us to evaluate the stresses at the sphere surface and to deduce the well-known \textbf{Stokes drag}\index{Stokes drag} $\bF = 6 \pi \mu R \bV^\infty$ exerted on the sphere. An alternative and very useful viewpoint is to present these results in terms of the force $\mathbf f=-\bF$ exerted by the sphere on the fluid: \begin{equation} u_i \,=\, \underbrace{\frac{1}{8 \pi\mu}f_j\lp\frac{\delta_{ij}}{r}+\frac{r_ir_j}{r^3}\rp}_\text{Stokeslet contribution} +\frac{R^2}{8\pi\mu}f_j\lp\frac{\delta_{ij}}{3r^3}-\frac{r_ir_j}{r^5}\rp. \label{eq:stokes_force_sphere} \end{equation} Interestingly, if we were to shrink the size of the sphere to 0 while keeping the force constant, the only remaining term in the flow field would be the first one. This so-called Stokeslet contribution is of fundamental importance in suspension dynamics, bacterial hydrodynamics and more generally in the modelling of viscous flows. \subsection{Point force induced flow: the Stokeslet\index{Stokeslet}} The Stokeslet is a fundamental solution for the Stokes equation, and describes the flow that would be induced by a point force $\mathbf f\, \delta(\bx-\bx_0)$ located at $\bx = \bx_0$. The corresponding velocity and pressure fields therefore satisfy the following \textit{forced Stokes equation}: \begin{equation} -\nabla p + \mu \Delta \bu + \mathbf f \,\delta(\bx-\bx_0) = \matrixsym 0. \label{eq:forced_stokes_equation} \end{equation} Note that in this expression $\mathbf f$ is a constant vector. The flow field solution is termed \textbf{Stokeslet} and is characterized by: \begin{equation} \left.\bu_{\text{stokeslet}}\right|_i(\bx)=\frac{1}{8\mathrm\pi\mu}\mathcal S_{ij}(\bx |\bx_0) f_j, \label{eq:stokeslet_velocity_component} \end{equation} with $\mathcal S_{ij}$ being the \textit{Oseen-Burgers tensor} defined as: \begin{equation} \mathcal S_{ij}=\frac{\delta_{ij}}{r}+\frac{r_i r_j}{r^3}. \end{equation} Let's now see in more detail how this solution is constructed. To do so it will prove useful to first introduce the Green's function for the Laplace equation, $g(\bx |\by)$. \prg{Green's function\index{Green's function} for the Laplace equation.} The Green's function $g(\bx|\by)$ for the Laplace equation is the function satisfying: \begin{equation} \Delta g(\bx |\by) = \delta\lp\bx-\by\rp. \end{equation} It is a harmonic function of space except at the point $\bx=\by$ where it is singular. Symmetry considerations on this function suggest that it only depends on the radius $r = \left\|\bx-\by\right\|$. On integrating over a small ball containing the singularity we get: $$ \oiint \pd{g}{r} \mathrm dS = 1, $$ so that the Green's function for the Laplace equation is: \begin{equation} g(\bx |\by) = -\frac{1}{4\mathrm{\pi}r}. \end{equation} \prg{Obtaining the Stokeslet.} With the help of the Green's function for the Laplace equation, the divergence of the forced Stokes equation~\eqref{eq:forced_stokes_equation} may be written as \begin{equation} \Delta\lp p-\mathbf f\cdot\nabla\lp-\frac{1}{4\pi r}\rp\rp = 0. \end{equation} The maximum principle for harmonic functions allows us to directly write the pressure as: \begin{equation} p=\mathbf f\cdot\nabla\lp-\frac{1}{4\pi r}\rp.
\end{equation}
Injecting this form of the pressure into equation~\eqref{eq:forced_stokes_equation}, we obtain:
\begin{equation}
\mu \Delta u_i=\frac{1}{4\mathrm\pi}\lp\underbrace{\delta_{ij}\pd{}{x_k}\pd{}{x_k}}_{\tensorsym I \Delta}\lp\frac{1}{r}\rp-\underbrace{\pd{}{x_i}\pd{}{x_j}}_{\nabla\nabla}\lp\frac{1}{r}\rp\rp f_j.
\label{eq:stokeslet_velocity_laplacian}
\end{equation}
From the structure of this relationship, the adventurous reader might be tempted to look for a solution of the form:
\begin{equation}
u_i=\frac{1}{4\mathrm\pi}\lp\delta_{ij}\Delta \mathcal H-\pd{^2\mathcal H}{x_ix_j}\rp f_j.
\label{eq:stokeslet_velocity_form}
\end{equation}
Note that this velocity field is solenoidal, as:
\begin{equation}
u_{i,i}=\frac{1}{4\mathrm\pi}\lp\Delta \mathcal H_{,j}-\Delta \mathcal H_{,j}\rp f_j \equiv 0.
\end{equation}
Injecting~\eqref{eq:stokeslet_velocity_form} into~\eqref{eq:stokeslet_velocity_laplacian} we get:
\begin{equation}
\frac{1}{4\mathrm\pi}\lp\tensorsym I\Delta-\nabla\nabla\rp\lp\mu\Delta\mathcal H-\frac{1}{r}\rp=0.
\end{equation}
This reduces to the following Poisson equation for $\mathcal H$\footnote{Note that we did not consider the integration constants because they would not appear in the velocity field expression anyway.}:
\begin{equation}
\mu\Delta\mathcal H=\frac{1}{r} \quad\text{with solution:}\quad\mathcal H=\tfrac{1}{2\mu}r.
\end{equation}
Noting that $r=\lp r_kr_k\rp^{1/2}$, we deduce:
\begin{equation}
r_{,i}=\frac{r_{k,i}r_k}{(r_mr_m)^{1/2}}\equiv\frac{r_i}{r} \quad \text{and similarly} \quad r_{,ij}=\frac{\delta_{ij}}{r}-\frac{r_ir_j}{r^3},
\end{equation}
to finally obtain the expression of the Stokeslet velocity field~\eqref{eq:stokeslet_velocity_component} we were looking for:
\begin{equation}
\bu_\text{stokeslet}=\frac{1}{8\pi\mu}\tensorsym S \mathbf f.
\end{equation}

\subsection{The motion of slender objects}

We have seen that in the limit of radius shrinking down to zero, a sphere applying a force to a viscous fluid generates a Stokeslet flow. But looking back at the full expression for the flow set into motion by a sphere of finite size~\eqref{eq:stokes_force_sphere}, it is apparent that the total contribution is actually composed of two parts: a Stokeslet and a higher-order singularity -- a dipole. Without entering into the details of the derivation, \citet{Hancock1953} and \citet{Lighthill1975} proposed to describe the flow around more general, slender, objects as line integrals of Stokeslet and dipole contributions. More precisely, it appears that the force exerted by a viscous fluid on a very long cylinder of radius $R$ and length $L$ depends on the orientation of the flow. If the flow is perpendicular to the filament axis, the force exerted on the cylinder per unit length is:
\begin{subequations}
\begin{equation}
f_\perp \simeq c_\perp u_\perp \quad \text{with}\quad c_\perp=\frac{4\pi\mu}{\ln(L/R)}.
\end{equation}
Similarly, if the flow is parallel to the fibre:
\begin{equation}
f_\parallel \simeq c_\parallel u_\parallel \quad \text{with}\quad c_\parallel=\frac{2\pi\mu}{\ln(L/R)}.
\end{equation}
\end{subequations}
Note the factor 2 between the drag coefficients $c_\perp$ and $c_\parallel$. This drag anisotropy has important consequences for the motion of slender objects, as illustrated below; the interested reader will find detailed and useful accounts in \citet{Duprat2016} or \citet{Lauga2020} for example.
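
As a minimal illustration of this anisotropy (the weight per unit length $w$ and the inclination angle $\theta$ are introduced here only for the sake of the example, and end effects are neglected), consider a fibre settling under its own weight with its axis inclined at an angle $\theta$ with respect to the vertical. Balancing, per unit length, the components of the weight along and across the fibre axis with the corresponding viscous drags yields the velocity components
\begin{equation*}
u_\parallel=\frac{w\cos\theta}{c_\parallel},
\qquad
u_\perp=\frac{w\sin\theta}{c_\perp}=\frac{w\sin\theta}{2c_\parallel}.
\end{equation*}
Because $c_\perp=2c_\parallel$, the settling velocity is not aligned with gravity (except for $\theta=0$ or $\pi/2$): projecting onto the horizontal direction, the fibre drifts sideways at the speed
\begin{equation*}
u_\parallel\sin\theta-u_\perp\cos\theta=\frac{w\sin\theta\cos\theta}{2c_\parallel},
\end{equation*}
which is maximal for a fibre inclined at $45^\circ$. In particular, a vertical fibre ($\theta=0$) settles twice as fast as a horizontal one ($\theta=\pi/2$).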
%\section{Bacterial hydrodynamics} % %Waving sheet Taylor % %Scallop theorem % %Bacteria with flagellae % %\section{Flows at the nanoscale} %Slip length
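
To close this section, let us come back to the sphere and quantify what ``moving in a viscous world'' means (the densities $\rho_s$ and $\rho$ of the sphere and of the fluid are introduced here only for the sake of this order-of-magnitude estimate). Balancing the Stokes drag $6\pi\mu R V$ with the net weight $\tfrac{4}{3}\pi R^3(\rho_s-\rho)g$ gives the terminal settling velocity
\begin{equation*}
V=\frac{2}{9}\frac{(\rho_s-\rho)g R^2}{\mu}.
\end{equation*}
For a grain of sand of radius $10\,\mu\text{m}$ settling in water ($\rho_s-\rho\approx 1.6\times10^{3}\,\text{kg}\,\text{m}^{-3}$, $\mu\approx10^{-3}\,\text{Pa}\,\text{s}$), this gives a few tenths of a millimetre per second, with a Reynolds number much smaller than unity, consistently with the Stokes flow assumption.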
{ "alphanum_fraction": 0.7635897436, "avg_line_length": 86.0294117647, "ext": "tex", "hexsha": "9dd316daa737379b0c7e7b091837900edd17cce1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "307f1c25c59345943a4bce90e031ced5dde105bb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "antko/physics-of-fluids", "max_forks_repo_path": "lecture/03_viscous_flows.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "307f1c25c59345943a4bce90e031ced5dde105bb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "antko/physics-of-fluids", "max_issues_repo_path": "lecture/03_viscous_flows.tex", "max_line_length": 898, "max_stars_count": null, "max_stars_repo_head_hexsha": "307f1c25c59345943a4bce90e031ced5dde105bb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "antko/physics-of-fluids", "max_stars_repo_path": "lecture/03_viscous_flows.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5110, "size": 17550 }
\section{Actors}

\ba provides several actor implementations, each covering a particular use case. The class \lstinline^local_actor^ is the base class for all implementations, except for (remote) proxy actors. Hence, \lstinline^local_actor^ provides a common interface for actor operations like trapping exit messages or finishing execution. The default actor implementation in \ba is event-based. Event-based actors have a very small memory footprint and are thus very lightweight and scalable. Context-switching actors are used for actors that make use of the blocking API (see Section \ref{Sec::BlockingAPI}), but do not need to run in a separate thread. Context-switching and event-based actors are scheduled cooperatively in a thread pool. Thread-mapped actors can be used to opt out of this cooperative scheduling.

\subsection{Implicit \texttt{self} Pointer}

When using a function or functor to implement an actor, the first argument \emph{can} be used to capture a pointer to the actor itself. The type of this pointer is \lstinline^event_based_actor*^ per default and \lstinline^blocking_actor*^ when using the \lstinline^blocking_api^ flag. When dealing with typed actors, the types are \lstinline^typed_event_based_actor<...>*^ and \lstinline^typed_blocking_actor<...>*^.

\clearpage
\subsection{Interface}

\begin{lstlisting}
class local_actor;
\end{lstlisting}

{\small
\begin{tabular*}{\textwidth}{m{0.45\textwidth}m{0.5\textwidth}}
\multicolumn{2}{m{\linewidth}}{\large{\textbf{Member functions}}\vspace{3pt}} \\
\\
\hline
\lstinline^quit(uint32_t reason = normal)^ & Finishes execution of this actor \\
\hline
\\
\multicolumn{2}{l}{\textbf{Observers}\vspace{3pt}} \\
\hline
\lstinline^bool trap_exit()^ & Checks whether this actor traps exit messages \\
\hline
\lstinline^any_tuple last_dequeued()^ & Returns the last message that was dequeued from the actor's mailbox\newline\textbf{Note}: Only set during callback invocation \\
\hline
\lstinline^actor_addr last_sender()^ & Returns the sender of the last dequeued message\newline\textbf{Note}: Only set during callback invocation \\
\hline
\lstinline^vector<group> joined_groups()^ & Returns all subscribed groups \\
\hline
\\
\multicolumn{2}{l}{\textbf{Modifiers}\vspace{3pt}} \\
\hline
\lstinline^void trap_exit(bool enabled)^ & Enables or disables trapping of exit messages \\
\hline
\lstinline^void join(const group& g)^ & Subscribes to group \lstinline^g^ \\
\hline
\lstinline^void leave(const group& g)^ & Unsubscribes group \lstinline^g^ \\
\hline
\lstinline^void on_sync_failure(auto fun)^ & Sets a handler, i.e., a functor taking no arguments, for unexpected synchronous response messages (default action is to kill the actor for reason \lstinline^unhandled_sync_failure^) \\
\hline
\lstinline^void on_sync_timeout(auto fun)^ & Sets a handler, i.e., a functor taking no arguments, for \lstinline^timed_sync_send^ timeout messages (default action is to kill the actor for reason \lstinline^unhandled_sync_timeout^) \\
\hline
\lstinline^void monitor(actor_ptr whom)^ & Adds a unidirectional monitor to \lstinline^whom^ (see Section \ref{Sec::Management::Monitors}) \\
\hline
\lstinline^void demonitor(actor_ptr whom)^ & Removes a monitor from \lstinline^whom^ \\
\hline
\lstinline^bool has_sync_failure_handler()^ & Checks whether this actor has a user-defined sync failure handler \\
\hline
\end{tabular*}
}
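
The implicit \texttt{self} pointer described above gives access to these member functions from within a function-based actor. The following minimal sketch only uses members listed in the table; the function name and the omitted message handling are placeholders:

\begin{lstlisting}
// function-based actor; the first argument captures the actor itself
void my_actor(event_based_actor* self) {
  self->trap_exit(true); // receive exit messages as ordinary messages
  // ... define and install the actor's behavior here ...
  self->quit();          // finish execution (reason defaults to normal)
}
\end{lstlisting}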
{ "alphanum_fraction": 0.7602779386, "avg_line_length": 53.96875, "ext": "tex", "hexsha": "4340e3a1916fadb8206bc693cc0d7dda6f374601", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58f35499bac8871b8f5b0b024246a467b63c6fb0", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "syoummer/boost.actor", "max_forks_repo_path": "manual/Actors.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58f35499bac8871b8f5b0b024246a467b63c6fb0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "syoummer/boost.actor", "max_issues_repo_path": "manual/Actors.tex", "max_line_length": 235, "max_stars_count": 2, "max_stars_repo_head_hexsha": "58f35499bac8871b8f5b0b024246a467b63c6fb0", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "syoummer/boost.actor", "max_stars_repo_path": "manual/Actors.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-20T08:05:41.000Z", "max_stars_repo_stars_event_min_datetime": "2015-03-20T21:11:16.000Z", "num_tokens": 913, "size": 3454 }
\documentclass[t]{beamer}
\usetheme{Copenhagen}
\setbeamertemplate{headline}{} % remove toc from headers
\beamertemplatenavigationsymbolsempty
\usepackage{amsmath, array, tikz, tcolorbox, bm, tkz-euclide, pgfplots, graphicx}
\pgfplotsset{compat = 1.16}
\usetkzobj{all}
\everymath{\displaystyle}
\title{Circles}
\author{}
\date{}
\AtBeginSection[]
{
\begin{frame}
\frametitle{Objectives}
\tableofcontents[currentsection]
\end{frame}
}
\begin{document}

\begin{frame}
\maketitle
\end{frame}

\begin{frame}[c]
\includegraphics[scale=0.80]{Images/conics.jpg}
\end{frame}

\section{Identify the center and radius of a circle.}

\begin{frame}{Circles}
You may remember circles from geometry class. In this chapter, we will look at equations and properties of circles.
\newline\\ \pause
\begin{tcolorbox}[colback=red!10!white, colframe=red!60!black, title=\textbf{Circles}]
The set of all points whose distance from a fixed point (the center) is the same.
\end{tcolorbox}
\vspace{11pt} \pause
The standard form of the equation of a circle is
\[ (x-h)^2+(y-k)^2 = r^2 \]
with center $(h, \, k)$ and radius $r$.
\end{frame}

\begin{frame}{Example 1}
Identify the center and exact radius of each.
\newline\\
(a) \quad $(x-4)^2 + (y+3)^2 = 49$
\newline\\
\onslide<2->{Center: $(4, -3)$}
\newline\\
\onslide<3->{Radius: $\sqrt{49}$ }
\onslide<4->{$=7$}
\end{frame}

\begin{frame}{Example 1}
(b) \quad $(x+1)^2 + (y-7)^2 = 72$
\newline\\
\onslide<2->{Center: $(-1, 7)$}
\newline\\
\onslide<3->{Radius: $\sqrt{72}$ }
\onslide<4->{$=6\sqrt{2}$}
\end{frame}

\begin{frame}{Writing the Standard Form of the Equation of a Circle}
To get the standard form, perform the following steps:
\newline\\
\begin{enumerate}
\item Bring the constant over to the other side of the equation.
\newline\\ \pause
\item Find the vertex of the parabola formed by the $x$-terms and the one formed by the $y$-terms.
\newline\\ \pause
\begin{itemize}
\item The $x$-coordinates of these vertices will represent $h$ and $k$, respectively.
\newline\\ \pause
\item The absolute values of the $y$-coordinates will be added to the constant.
\end{itemize}
\end{enumerate}
\end{frame}

\begin{frame}{Example 2}
Identify the center and exact radius of each.
\newline\\
(a) \quad $x^2-4x+y^2-6y-23=0$
\onslide<2->{\[{\color{blue}\bm{x^2 - 4x}} \qquad + {\color{red}\bm{y^2 - 6y}} \qquad = 23\]}
\onslide<3->{{\color{blue}\textbf{Vertex: }$\bm{(2,-4)}$}} \newline\\
\onslide<4->{{\color{red}\textbf{Vertex: }$\bm{(3,-9)}$}}
\onslide<5->{\[(x-2)^2 + (y-3)^2 = 23 + |-4| + |-9|\]}
\onslide<6->{\[(x-2)^2+(y-3)^2=36\]} \\
\onslide<7->{Center: $(2,3)$ \quad Radius: 6}
\end{frame}

\begin{frame}{Example 2}
(b) \quad $x^2+16x+y^2-8y-1=0$
\onslide<2->{\[{\color{blue}\bm{x^2 + 16x}} \qquad + {\color{red}\bm{y^2 - 8y}} \qquad = 1\]}
\onslide<3->{{\color{blue}\textbf{Vertex: }$\bm{(-8,-64)}$}} \newline\\
\onslide<4->{{\color{red}\textbf{Vertex: }$\bm{(4,-16)}$}}
\onslide<5->{\[(x+8)^2 + (y-4)^2 = 1 + |-64| + |-16|\]}
\onslide<6->{\[(x+8)^2+(y-4)^2=81\]} \\
\onslide<7->{Center: $(-8,4)$ \quad Radius: 9}
\end{frame}

\begin{frame}{Example 2}
(c) \quad $x^2-10x+y^2+2y+14=0$
\newline\\
\onslide<2->{$x^2-10x+y^2+2y=-14$}
\onslide<3->{\[{\color{blue}\bm{x^2 - 10x}} \qquad + {\color{red}\bm{y^2 + 2y}} \qquad = -14\]}
\onslide<4->{{\color{blue}\textbf{Vertex: }$\bm{(5,-25)}$}} \newline\\
\onslide<5->{{\color{red}\textbf{Vertex: }$\bm{(-1,-1)}$}}
\onslide<6->{\[(x-5)^2 + (y+1)^2 = -14 + |-25| + |-1|\]}
\onslide<7->{\[(x-5)^2+(y+1)^2=12\]} \\
\onslide<8->{Center: $(5,-1)$ \quad Radius: $2\sqrt{3}$}
\end{frame}

\section{Write the general form of the equation of a circle in standard form}

\begin{frame}{General Form}
Circles can also be written in general form. The general form is the standard form multiplied out and simplified.
\newline\\ \pause
General form will have all terms on one side of the equation and 0 on the other.
\end{frame}

\begin{frame}{Example 3}
Write the general form of each of the following.
\newline\\
(a) \quad $(x-3)^2+y^2=6$
\begin{align*}
\onslide<2->{(x-3)^2 + y^2 &= 6} \\[6pt]
\onslide<3->{x^2 - 6x + 9 + y^2 &= 6} \\[6pt]
\onslide<4->{x^2 - 6x + y^2 + 3 &= 0}
\end{align*}
\end{frame}

\begin{frame}{Example 3}
(b) \quad $(x+7)^2+(y-5)^2=10$
\begin{align*}
\onslide<2->{(x+7)^2+(y-5)^2 &= 10} \\[6pt]
\onslide<3->{x^2 + 14x + 49 + y^2 - 10y + 25 &= 10} \\[6pt]
\onslide<4->{x^2 + 14x + y^2 - 10y + 74 &= 10} \\[6pt]
\onslide<5->{x^2 + 14x + y^2 - 10y + 64 &= 0}
\end{align*}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.6208499336, "avg_line_length": 32.5035971223, "ext": "tex", "hexsha": "bd7e9ea045784590b135da3c00554824f711052d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_forks_event_min_datetime": "2020-08-26T15:49:45.000Z", "max_forks_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "BryanBain/HA2_BEAMER", "max_forks_repo_path": "Circles(BEAMER).tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "BryanBain/HA2_BEAMER", "max_issues_repo_path": "Circles(BEAMER).tex", "max_line_length": 133, "max_stars_count": null, "max_stars_repo_head_hexsha": "a5e021f12d3cdd0541353c9e121ff5e4df7decd1", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "BryanBain/HA2_BEAMER", "max_stars_repo_path": "Circles(BEAMER).tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1833, "size": 4518 }
\startcomponent ma-cb-en-margintexts

\product ma-cb-en

\chapter{Margin texts}

\index{margin text}

\Command{\tex{inmargin}}
\Command{\tex{inleft}}
\Command{\tex{inright}}
\Command{\tex{margintitle}}

It is very easy to put text in the margin. You just use \type{\inmargin}.

\shortsetup{inmargin}

You may remember one of the earlier examples:

\typebuffer[marginpicture]

This would result in a figure in the \pagereference [marginpicture]\getbuffer [marginpicture]margin. You can imagine that it looks quite nice in some documents. But be careful. The margin is rather small, so the figure could become very marginal.

A few other examples are shown in the text below.

\startbuffer
The Ridderstraat (Street of knights) \inmargin{Street of\\Knights} is an
obvious name. In the 14th and 15th centuries, nobles and prominent citizens
lived in this street. Some of their big houses were later turned into
poorhouses \inright{poorhouse}and old people's homes. Up until
\inleft[low]{\tfc 1940}1940 there was a synagogue in the Ridderstraat. Some
40 Jews gathered there to celebrate their sabbath. During the war all Jews
were deported to Westerbork and then to the extermination camps in Germany
and Poland. None of the Jewish families returned. The synagogue was knocked
down in 1958.
\stopbuffer

\typebuffer

The commands \type{\inmargin}, \type{\inleft} and \type{\inright} all have the same function. In a two-sided document \type{\inmargin} puts the margin text in the correct margin. The \type{\\} is used for line breaking. The example above would look like this:

\getbuffer

You can set up the margin text with:

\starttyping
\setupinmargin
\stoptyping

\stopcomponent
{ "alphanum_fraction": 0.780167264, "avg_line_length": 26.5714285714, "ext": "tex", "hexsha": "088a422326a88b51dd897a100b837b570ab7dc57", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "marcpaterno/texmf", "max_forks_repo_path": "contextman/context-beginners/en/ma-cb-en-margintexts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "marcpaterno/texmf", "max_issues_repo_path": "contextman/context-beginners/en/ma-cb-en-margintexts.tex", "max_line_length": 66, "max_stars_count": null, "max_stars_repo_head_hexsha": "aa7ad70e0102492ff89b7967b16b499cbd6c7f19", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "marcpaterno/texmf", "max_stars_repo_path": "contextman/context-beginners/en/ma-cb-en-margintexts.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 454, "size": 1674 }