\chapterimage{head3} % Chapter heading image

\chapter{Experimental Results}

Here is the Makefile for this project:

\begin{lstlisting}[title = {Makefile}]
CC = g++
NVCC = nvcc
LD_LIBRARY_PATH= /usr/local/cuda/lib64
CUDA_LIB = -L /usr/local/cuda/lib64/ -lcuda -lcudart -lcusparse -lcusolver
ARCH_CUDA = -arch=sm_30
NVCC_FLAGS = --ptxas-options=-v -Xcompiler -fopenmp -O3 -std=c++11 -D_MWAITXINTRIN_H_INCLUDED -D_FORCE_INLINES
GCC_FLAGS = -fopenmp -std=c++11 -Wall -pedantic

all: clean serial parallel remove

serial: memm.o cuThomasVBatch.o serial.cc
	$(CC) $(GCC_FLAGS) -o serial serial.cc $(CUDA_LIB) memm.o cuThomasVBatch.o

parallel: memm.o cuThomasVBatch.o parallel.cc
	$(CC) $(GCC_FLAGS) -o parallel parallel.cc $(CUDA_LIB) memm.o cuThomasVBatch.o

memm.o: memm.cu
	$(NVCC) -c $(NVCC_FLAGS) $(ARCH_CUDA) memm.cu

cuThomasVBatch.o: cuThomasVBatch.cu
	$(NVCC) -c $(NVCC_FLAGS) $(ARCH_CUDA) cuThomasVBatch.cu

clean:
	rm -rf *.o serial parallel

remove:
	rm -rf *.o
\end{lstlisting}

\vspace{5ex}

Here are the experimental results:

\begin{table}[htbp]
  \caption{Comparison of the performance of different implementations (Cases 7--12)}
  \centering
  \begin{tabular}{lllllll}
    \toprule
    Method & Case 7 & Case 8 & Case 9 & Case 10 & Case 11 & Case 12\\
    \midrule
    Serial       & 1.832603e-03 & 4.852763e-03 & 1.556245e-2 & 2.745925e-2 & 1.34865e-1 & 1.624341e-1 \\
    OpenMP       & 6.644011e-02 & 2.541431e-03 & 3.947942e-2 & 2.346893e-2 & 5.79814e-1 & 3.565425e-1 \\
    cuHinesBatch & 8.718967e-03 & 2.536453e-03 & 1.746943e-2 & 1.914897e-2 & 3.31285e-2 & 4.681494e-2 \\
    \bottomrule
  \end{tabular}
  \label{tab:table1}
\end{table}

\vspace{1ex}

From the results above, we can see that the speedup exceeds 1 for both the multi-CPU and GPU implementations on the large cases. Because of the parallel overhead, the parallel versions only outperform the serial one once the problem size $N$ grows large enough.

\vspace{10ex}

We also tested a heterogeneous parallelization, but its performance was not satisfactory.
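As a concrete reading of Table \ref{tab:table1}: if we interpret the entries as execution times (the unit is not stated in the table, so this is an assumption), the speedup of a parallel implementation on a given case is the ratio of the serial time to the parallel time. For Case 12, for example, the GPU speedup would be
\[
  S_{\mathrm{GPU}} = \frac{t_{\mathrm{Serial}}}{t_{\mathrm{cuHinesBatch}}}
                   = \frac{1.624341 \times 10^{-1}}{4.681494 \times 10^{-2}}
                   \approx 3.5 .
\]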
{ "alphanum_fraction": 0.6668161435, "avg_line_length": 30.9722222222, "ext": "tex", "hexsha": "8ff91810fc987589dd5e737c7a534a8045d1556c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d284c0c5de395c499619a3961bf1544df504c565", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CrazyIvanPro/cuHinesBatch", "max_forks_repo_path": "report/chapter3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d284c0c5de395c499619a3961bf1544df504c565", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CrazyIvanPro/cuHinesBatch", "max_issues_repo_path": "report/chapter3.tex", "max_line_length": 150, "max_stars_count": null, "max_stars_repo_head_hexsha": "d284c0c5de395c499619a3961bf1544df504c565", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CrazyIvanPro/cuHinesBatch", "max_stars_repo_path": "report/chapter3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 771, "size": 2230 }
%% Modified for NDSS 2018 on 2017/11/08
%% Based on bare_conf.tex V1.3 (2007/01/11) by Michael Shell
%% (see http://www.michaelshell.org/ for current contact information),
%% distributed under the LaTeX Project Public License (LPPL), version 1.3.

\documentclass[conference]{IEEEtran}

\pagestyle{plain}

% *** CITATION PACKAGES ***
\usepackage{cite}

% *** GRAPHICS RELATED PACKAGES ***
\ifCLASSINFOpdf
  \usepackage[pdftex]{graphicx}
\else
  % \usepackage[dvips]{graphicx}
\fi
% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

\usepackage{subfigure}
\usepackage{url}

\begin{document}

% paper title
\title{Poster: LEADER (Low-Rate Denial-of-Service Attacks Defense)}

% author names and affiliations
\author{\IEEEauthorblockN{Rajat Tandon, Haoda Wang, Nicolaas Weideman, Christophe Hauser, Jelena Mirkovic}
\IEEEauthorblockA{Information Sciences Institute, University of Southern California \\
tandon, haodawa, nweidema, hauser, sunshine @isi.edu}}

% make the title area
\maketitle

\begin{abstract}
Low-rate denial-of-service (LRD) attacks are often hard to detect at the network level because they consume little bandwidth and the attack traffic looks much like legitimate traffic. Moreover, the attack traffic often appears to comply with transport- and application-protocol semantics. It is the intricacies of the payloads and the dynamics of the attack traffic that induce denial of service on servers when processed by specific hardware and software. We introduce Leader, a hybrid approach for application-agnostic and attack-agnostic detection and mitigation of LRD attacks.
Leader operates by learning normal patterns of network-, application- and system-level resource usage when processing legitimate external requests. It relies on a novel combination of runtime, system and network monitoring and offline binary program analysis.
\end{abstract}

\section{Introduction}

Low-rate denial-of-service attacks deny service at the device level (server, router, switch) and stay well below the network bandwidth limit. These attacks use a simple basic mechanism to deny service: they deplete some limited resource at the application, the operating system or the firmware of a device, which makes the device unable to process legitimate clients' traffic. Examples of LRD attacks are (1) connection depletion attacks (e.g., TCP SYN flood \cite{manna2012review} or SIP flood \cite{luo2008cpu}), which deplete the connection-table space for a given service by keeping many half-open connections, and (2) application/system-level depletion attacks (e.g., ZIP bombs \cite{zip}), which deplete the CPU through expensive and never-ending computations.

Leader protects the deploying server against any variant of LRD attack. The novel aspect of Leader is its hybrid detection approach, combining network security mechanisms with OS- and program-level aspects. Leader operates by learning, during baseline operation, a normal pattern of an application's and the device's use of system resources when serving external requests. Leader detects attacks as cases of resource overload that impairs some service's quality. It then runs diagnostics, compares the observed and the expected resource-usage patterns, and checks for anomalies. This helps it diagnose the attack type and select the best remediation actions.
%For simplicity, we assume that Leader is deployed on a server targeted by an LRD attack.

Another novel aspect of Leader lies in the structures it uses to capture sequences of resource-use events in a temporal and relational manner for each incoming service request. These sequences are known as connection life stages. They are built from multiple, complementary observations collected at the (1) network level, (2) OS level and (3) application level. The connection life stages are then clustered into typical resource-use patterns, or profiles, for applications and for users. We use these profiles to detect anomalous use and to characterize LRD attacks. With the aid of these profiles, we also design mitigation actions that remove attack traffic or increase the system's robustness to the specific attack.
%~\cite{dyn2016}).

% no keywords
%%\IEEEpeerreviewmaketitle

\begin{figure*}
\begin{center}
\includegraphics[width=3.5in]{Overview.png}
\caption{Monitoring and system overview: the novelty of our approach lies in our connection life stages and code path abstractions, which are built from monitoring the system at the network, OS and application levels.
\label{fig:overview}}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\includegraphics[width=5in]{Slowloris.png}
\caption{Life-stage diagrams for the Slowloris attack: highlighted items show anomalies.
\label{fig:slowloris}}
\end{center}
\end{figure*}

\section{Methodology}

Figure \ref{fig:overview}(a) shows a high-level view of the abstraction levels at which Leader operates. It also illustrates the trade-offs between the accuracy of semantic reasoning on one side and the monitoring cost and delay on the other. Semantic reasoning means accurately understanding and attributing resource usage to a given application and network connection. Better accuracy implies more monitoring, which incurs higher cost and higher delay. We refer to characterizing the behavior of an application based solely on the observation of its inputs and outputs as black-box observation. OS-level instrumentation allows an observer to gain more insight into the application's semantics, and program analysis offers the highest level of semantic reasoning.

Figure \ref{fig:overview}(b) gives the system overview. LRD attacks usually involve one or several incoming service requests arriving at the server that are expensive or slow to process, leading to resource depletion. This causes a state where the service or the global system is unable to gracefully handle new requests, and denial of service occurs. The defense mechanisms in Leader comprise the following operational steps, unified into three main modules: (1) behavior profiling, (2) attack detection (and characterization), and (3) attack mitigation.

\subsection{Behavior profiling}
Leader relies on the collection of measures and statistics of system resource usage for successful attack detection. These measures are collected for each incoming service request and comprise observations at the network, system and application levels. Leader captures the connection, client, application and whole-device profiles at all times. During normal operation, Leader performs profiling to learn the legitimate behavior of the connections, applications, clients and the device where it is deployed.
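As a purely illustrative sketch (not Leader's implementation), one can picture the per-connection profiles described above as aggregate records of resource use that are compared against a learned baseline; the field names and the threshold factor below are hypothetical:

\begin{verbatim}
# Illustrative sketch only; field names and the
# threshold factor are hypothetical.
from dataclasses import dataclass

@dataclass
class ConnProfile:            # per-connection resource use
    syscall_time_s: float     # time spent in system calls
    page_faults: int
    open_fds: int
    cpu_cycles: int

def anomalous(obs, base, factor=3.0):
    """Flag a connection whose resource use greatly
    exceeds the learned baseline profile."""
    return (obs.syscall_time_s > factor * base.syscall_time_s
            or obs.page_faults > factor * base.page_faults
            or obs.open_fds > factor * base.open_fds
            or obs.cpu_cycles > factor * base.cpu_cycles)
\end{verbatim}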
\subsection{Attack detection}
Another novel aspect of Leader is that it uses a hybrid detection approach combining network security mechanisms with OS- and program-level aspects. Our attack detection module compares instantaneous profiles to the corresponding baseline profiles, with the goal of detecting evidence of troubled services that have low instantaneous percentages of successfully served incoming requests compared to the historical (baseline) percentages of successfully served requests.

\subsection{Attack mitigation}
Leader deploys a combination of attack mitigation approaches that includes (1) derivation of the attack signature from attack connections, (2) costly connection termination, (3) blacklisting of attack sources, (4) dynamic resource replication, (5) program patching and algorithm modification, and (6) blacklisting of sources with anomalous profiles.

\section{Preliminary Findings and Next Steps}

We relied on Emulab \cite{Emulab} to experiment with Slowloris \cite{slowloris}, a common type of LRD attack. We set up a static Web server and used 10 legitimate clients and one attack client. The legitimate clients continuously requested the main Web page using wget, once every 200 ms. This created 50 requests per second at the server. The attack client opened 1,000 simultaneous attack connections and kept them open for as long as possible by sending never-ending headers on each. This had a negative impact on the legitimate clients, which had trouble establishing connections with the server. Figure \ref{fig:slowloris} shows the life-stage diagram for traffic in the attack case, which includes both legitimate and attack connections, and compares it to the diagram for the baseline case. The highlighted stages differ between the two cases and help us identify anomalous connections.

Currently, we use SystemTap \cite{systemtap} to build aggregate profiles of an application's/system's resource-usage pattern for each connection at the system call level. We capture the time spent in each system call and the resources used, such as memory usage, number of page faults, open file descriptors, CPU cycles and other thread/process-level details. We do such profiling for both legitimate traffic and attack traffic, and there are notable differences between the two. We will use these profiles, along with further analysis, to detect attacks and mitigate them.

\section{Conclusion}
Leader leverages a novel combination of runtime monitoring and offline binary program analysis to protect a deploying server against LRD attacks and prevent external service requests from misusing system resources. During baseline operation, Leader learns resource-usage patterns; it then detects and mitigates attacks by following an anomaly-detection paradigm.
% references section
{\footnotesize
\bibliographystyle{acm}
\bibliography{paper.bib}}

% that's all folks
\end{document}
{ "alphanum_fraction": 0.7737265962, "avg_line_length": 49.2948073702, "ext": "tex", "hexsha": "acf421797210eb9c55ef1678cdfd34ccc9a546a6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e9b884d3645789d6171bfa85e920b5da8cab9db5", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "STEELISI/LEADER", "max_forks_repo_path": "docs/POSTER_WRITEUP/NDSS_2019bare_final.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e9b884d3645789d6171bfa85e920b5da8cab9db5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "STEELISI/LEADER", "max_issues_repo_path": "docs/POSTER_WRITEUP/NDSS_2019bare_final.tex", "max_line_length": 770, "max_stars_count": null, "max_stars_repo_head_hexsha": "e9b884d3645789d6171bfa85e920b5da8cab9db5", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "STEELISI/LEADER", "max_stars_repo_path": "docs/POSTER_WRITEUP/NDSS_2019bare_final.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7013, "size": 29429 }
\subsection{Comparison of Mechanisms}

In the previous section, we studied several different algorithmic approaches that guarantee different optimality criteria. To better evaluate the algorithms against the optimality criteria we defined in Section \ref{sec:optimality}, we will now summarize and compare the properties of the algorithms. Afterwards, we will look at practical results that were obtained in experiments on matching mechanisms with one- and two-sided preferences by Diebold and Bichler \cite{DieboldBenchmark}.

\subsubsection{Theoretical Results}

Referring back to the list of desirable properties defined in Section \ref{criteria-application}, let us now recap and compare the aforementioned algorithms to evaluate which one could be applicable to the problem of matching students to seminars. Unfortunately, none of the algorithms guarantees all of the optimality criteria at the same time, which makes the choice of an algorithm non-trivial. Table \ref{tab:algorithm-comparison} gives an overview of the presented algorithms and their properties. Each of the algorithms mentioned in this section is listed, and for each optimality criterion, a yes/no encoding indicates which properties an algorithm guarantees. It is important to note that a ``no'' in a column does not mean that the given optimality criterion cannot be fulfilled by the algorithm, but rather that the algorithm does not guarantee it. For instance, a matching computed with the greedy algorithm can still be of maximum cardinality or be popular. Only the results for strategy-proofness are a strict yes or no, since fulfilling strategy-proofness does not depend on the instance of the problem, but only on the mechanism being used.

\begin{table}[h!]
\begin{tabular}{l|llll}
\hline
                    & RSD              & Max-PaCHA                       & Assignment                & Popular-CHA \\ \hline
Maximum Cardinality & no               & yes                             & yes                       & yes         \\
Pareto-Optimal      & yes              & yes                             & yes                       & yes         \\
Popular             & no               & no                              & no                        & yes         \\
Rank Maximal        & no               & no                              & yes                       & no          \\
Always Exists       & yes              & yes                             & yes                       & no          \\
Strategy Proof      & yes              & no                              & no                        & yes         \\ \hline
Time Complexity     & $\mathcal{O}(n)$ & $\mathcal{O}(\sqrt{n} \cdot m)$ & $\approx\mathcal{O}(n^3)$ & $\mathcal{O}(\sqrt{C} \cdot n_1 + m)$ \\ \hline
\end{tabular}
\caption{Comparison of Different Algorithmic Approaches}
\label{tab:algorithm-comparison}
\end{table}

To summarize the results, we can see that all of the algorithms guarantee Pareto-optimality; however, only Popular-CHA guarantees popularity. At the same time, only RSD and Popular-CHA also guarantee strategy-proofness, which makes Popular-CHA particularly interesting for the student--seminar problem.

\paragraph{Strategy-proofness and Maximum Cardinality:} One interesting observation is that fulfilling maximum cardinality comes at the cost of either not being strategy-proof or not guaranteeing that a matching exists at all. Indeed, only the RSD and Popular-CHA algorithms guarantee strategy-proofness. However, ensuring strategy-proofness and maximum cardinality at the same time comes at the cost of not always finding a matching (Popular-CHA). If we look back at the Popular-CHA algorithm, we remember that a maximum cardinality matching $M'$ is computed on the reduced graph $G'$. We saw that a maximum popular matching does not exist iff the matching is not agent-complete, meaning that one of the agents is matched to their last-resort house.
While this mechanism ensures strategy-proofness, it is also not always possible to find such a maximum cardinality matching using the Popular-CHA algorithm. Therefore, it remains an open question whether a mechanism exists that is both strategy-proof and always produces maximum-cardinality matchings.

\paragraph{Max-PaCHA and the Assignment Problem:} Another important thing to notice is the similarity of the properties of Max-PaCHA and the assignment-problem algorithm. Except for the fact that the assignment algorithm guarantees rank maximality, the two algorithms produce matchings with very similar characteristics, which raises the question of why one should use the Max-PaCHA algorithm at all. Looking at the runtime complexity of the algorithms, however, we see that, while both algorithms run in polynomial time, the assignment problem takes longer to solve.

\subsubsection{Practical Results in the Literature}\label{sec:practical-results-lit}

Diebold et al. have published results of extensive experiments on matching mechanisms with both one- and two-sided preferences \cite{DieboldBenchmark}. They used real course registration data from TUM to investigate properties, including size, rank and popularity, of matchings produced by several mechanisms. The mechanisms are the same as described in Section \ref{chapter:algorithms}, with the one exception that the \emph{ProB-CHAT} algorithm \cite{DieboldBenchmark} is used in place of the Hungarian algorithm for finding rank-maximal matchings.

The authors found that the matchings of all algorithms achieve an average cardinality of at least 97.48\%. For one of the 9 instances, Popular-CHA failed to find a matching, but when it found one, its matchings' average rank of 1.33 was close to ProB-CHAT's 1.26. Unsurprisingly, ProB-CHAT performed best on the rank metrics and also produced more popular matchings than all other algorithms except Popular-CHA. However, its maximum runtime of 33.852 s was the worst, compared to the second-highest of 2.458 s for Popular-CHA, on a dataset with 915 students and 51 courses. Another surprising finding is that RSD, with an average rank of 1.41, performed better on the rank metrics than Max-PaCHA, with an average rank of 1.51. Generally, the average ranks of all algorithms are somewhat close, ranging from 1.26 (ProB-CHAT) to 1.51 (Max-PaCHA). Unfortunately, the authors do not disclose more detailed information about their datasets or the structure of the matchings. While these results confirm some of the theoretical observations, it will be interesting to gain more insight using differently structured datasets. Additionally, most of the data contains ties, and we do not get any meaningful insight into the distribution of preferences.
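To make the comparison more concrete, the following is a minimal sketch of Random Serial Dictatorship (RSD), the simplest of the mechanisms compared above, for the student--seminar setting. The data representation (preference lists and capacities as plain Python dictionaries) is an assumption made purely for illustration and is not taken from any of the implementations discussed above.

\begin{verbatim}
# Illustrative sketch; the data layout is an assumed simplification.
import random

def rsd(students, prefs, capacity, seed=0):
    """Random Serial Dictatorship: students pick in a random order,
    each taking their most-preferred seminar with a free seat."""
    rng = random.Random(seed)
    order = list(students)
    rng.shuffle(order)
    remaining = dict(capacity)    # seminar -> free seats
    matching = {}                 # student -> seminar
    for s in order:
        for seminar in prefs[s]:  # preference list, best first
            if remaining.get(seminar, 0) > 0:
                matching[s] = seminar
                remaining[seminar] -= 1
                break
    return matching

# Example: three students, two seminars with one seat each.
print(rsd(["a", "b", "c"],
          {"a": ["s1", "s2"], "b": ["s1"], "c": ["s2", "s1"]},
          {"s1": 1, "s2": 1}))
\end{verbatim}

As Table \ref{tab:algorithm-comparison} indicates, such a mechanism is strategy-proof and Pareto-optimal, but a student whose preferred seminars are already full simply remains unmatched, which is why RSD does not guarantee maximum cardinality in general.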
{ "alphanum_fraction": 0.7582619339, "avg_line_length": 167.5897435897, "ext": "tex", "hexsha": "9a8c518aca8d191249f9763e5875a8da8c4e858e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b267dc2eb69cc7a1c2421b76277f69517957375d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "aaronoe/bachelorarbeit", "max_forks_repo_path": "chapters/4_algorithm_comparison.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b267dc2eb69cc7a1c2421b76277f69517957375d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "aaronoe/bachelorarbeit", "max_issues_repo_path": "chapters/4_algorithm_comparison.tex", "max_line_length": 1184, "max_stars_count": null, "max_stars_repo_head_hexsha": "b267dc2eb69cc7a1c2421b76277f69517957375d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "aaronoe/bachelorarbeit", "max_stars_repo_path": "chapters/4_algorithm_comparison.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1391, "size": 6536 }
\documentclass{article}
\usepackage{graphicx}
\usepackage{geometry}
\usepackage{hyperref}
\usepackage{mathtools}
\usepackage{float}
\usepackage{minted}

\graphicspath{{./}}
\geometry{a4paper, portrait, margin = 1in}

\title{2: Linear Regression in Pytorch, the Dumb Way}
\date{\today}
\author{Aniruddh K Budhgavi \\Enigma, IIIT-B}

\begin{document}
\maketitle

\section{Introduction}
Constructing machine learning models using Numpy is a fantastic way of understanding the nitty-gritties of a particular method, but it is not an ideal approach for more complex models or in production environments. Instead, we use libraries like Pytorch, Tensorflow and Keras. These libraries ensure that you spend less time reinventing the wheel, leaving you free to fine-tune the model without getting bogged down by details. They also make your training process faster by utilizing the GPU (if you have one). \\
My personal preference is towards Pytorch (anyone who has used Tensorflow 1.X will understand why). Pytorch has enough utilities to make the process painless but is flexible when it comes to customizations.

\section{Installation}
It is worth having a separate section for this because of a few details. The primary complication is due to \textbf{CUDA}, a GPU computing API provided by NVIDIA. The GPU implementation of Pytorch uses CUDA, so \textbf{you can only use Pytorch with a CUDA-capable NVIDIA GPU} (or any CPU). You can check if your GPU is supported at \url{https://developer.nvidia.com/cuda-gpus}. \\
You can install Pytorch from \url{https://pytorch.org/get-started/locally/}. There are various package managers and builds available.
\begin{enumerate}
    \item On Windows, I recommend that you use the Anaconda package manager. Use the latest stable version of Pytorch, and be sure to check the CUDA version (the later the better).
    \item On Ubuntu, you can install using Pip without any trouble. \textbf{You may have to install CUDA manually}. To do so, visit \url{https://developer.nvidia.com/cuda-downloads}.
    \item In case your PC is underpowered, you can always run your models on Kaggle. In fact, that is what I do for Deep Learning models, even though I have a GTX 1050. Kaggle provides (as of writing) 30 hrs per week of an NVIDIA Tesla P100 GPU. You can even natively import Kaggle datasets without any trouble.
\end{enumerate}
To check if the installation was successful, open a Jupyter notebook and run:
\begin{minted}{python}
import torch
torch.cuda.is_available()
\end{minted}
Expected output:
\begin{minted}{python}
True
\end{minted}
If the output doesn't match, you may have to manually install CUDA. Further, try:
\begin{minted}{python}
torch.cuda.get_device_name()
\end{minted}
Expected output should be something like:
\begin{minted}{python}
'GeForce GTX 1050'
\end{minted}
With this, we are ready to build a simple linear regression model using Pytorch.
\begin{itemize}
    \item \textbf{Tip:} You can run Jupyter Notebooks in Visual Studio Code. This gives you all of VS Code's syntax highlighting and autocomplete features. Just be sure to install the Python plugin from the VS Code marketplace.
\end{itemize}

\section{Building the model}
\begin{enumerate}
    \item You can find the code for this model \href{https://raw.githubusercontent.com/aniruddhkb/enigmatutorials/master/intro2ml/linearRegressionPytorch/linearRegressionTheDumbWay.ipynb}{here}. It won't make much sense in the browser -- download it and open it in Jupyter.
    \item First, let us import the relevant libraries.
\begin{minted}{python}
import numpy as np
import torch
import matplotlib.pyplot as plt
\end{minted}
    \item Next, let's define a few helper functions.
\begin{minted}{python}
def forward(X, W, b):
    return W*X + b

def mse(Yhat, Y, m):
    return (1/(2*m))*torch.sum((Yhat - Y)**2)

def update(W, b, W_grad, b_grad, alpha):
    W = W - alpha*W_grad
    b = b - alpha*b_grad
    return W, b
\end{minted}
We use {\tt forward} to compute $\hat{Y}$. We use {\tt mse} to compute $J$, the cost function. We use {\tt update} to update the parameters $W$ and $b$.
    \item Next, let's define some hyperparameters.
    \begin{itemize}
        \item \emph{Hyperparameters} are those variables that you, the creator of the ML model, specify in order to tune your model. This is in contrast to the model \emph{parameters}, which are learned through training.
        \item $W$ and $b$ are parameters, while $\alpha$ and {\tt num\textunderscore iters} are hyperparameters.
    \end{itemize}
\begin{minted}{python}
m = 100             # Number of data points.
noise_qty = 0.1     # How noisy is the data to be generated?
alpha = 0.0001      # The learning rate.
num_iters = 100000  # The number of iterations.
\end{minted}
    \item Next, let's generate the data.
\begin{minted}{python}
X = torch.rand(m)*m
W_optim = torch.rand(1)
b_optim = torch.rand(1)
Y = forward(X, W_optim, b_optim) + torch.rand(m)*(m*noise_qty)
\end{minted}
    \item Our dataset is $(X, Y)$. Let's plot it.
\begin{minted}{python}
plt.scatter(X, Y)
plt.show()
\end{minted}
\begin{figure}[H]
    \begin{center}
        \includegraphics[width = 0.5\textwidth]{plot1.png}
        \caption{$Y$ as a function of $X$.}
    \end{center}
\end{figure}
    \item Let's initialize $W$ and $b$.
\begin{minted}{python}
W = torch.rand(1, requires_grad=True)
b = torch.rand(1, requires_grad=True)
\end{minted}
The argument \emph{requires\textunderscore grad = True} is needed for Pytorch's automatic differentiation package. This tells Pytorch to maintain a record of the computational graph starting from these nodes. If this doesn't make sense, hold on.
    \item Let's visualize the current parameters.
\begin{minted}{python}
Yhat = forward(X, W, b).detach()
plt.scatter(X, Y)
plt.plot(X, Yhat, color = "red")
plt.show()
\end{minted}
\begin{figure}[H]
    \begin{center}
        \includegraphics[width = 0.5\textwidth]{plot2.png}
        \caption{Our line, as expected.}
    \end{center}
\end{figure}
    \item Now, we come to the key step of training. There are four steps:
    \begin{enumerate}
        \item Compute $\hat{Y}$.
        \item Compute $J(\hat{Y}, Y)$, the cost function.
        \item Compute $\frac{\partial{J}}{\partial{W}}$ and $\frac{\partial{J}}{\partial{b}}$.
        \item Update $W$ and $b$.
    \end{enumerate}
    The code:
\begin{minted}{python}
costs = []
for i in range(num_iters):
    if(i % (num_iters//100) == 0):
        print("\r", i/(num_iters//100), "%", end="")
    W = W.clone().detach().requires_grad_(True)
    b = b.clone().detach().requires_grad_(True)
    Yhat = forward(X, W, b)
    cost = mse(Yhat, Y, m)
    cost.backward()
    costs.append(cost.item())
    W, b = update(W, b, W.grad, b.grad, alpha)
print("")
\end{minted}
    Let's go line by line.
    \begin{enumerate}
        \item The loop is the same as in Numpy.
        \item The {\tt if} block is to make a progress indicator. By using the escape sequence {\tt \textbackslash r}, we overwrite the previously printed number.
        \item I will explain {\tt W.clone} and {\tt b.clone} momentarily.
        \item The next two lines are to compute $\hat{Y}$ and $J$.
        \item This line is the deal-maker when it comes to Pytorch.
        \begin{itemize}
            \item \textbf{Pytorch includes an automatic differentiation package called Autograd.} Using this, one can automatically compute the derivatives of a tensor with respect to other tensors.
            \item There are some preconditions as to which tensors can utilize Autograd.
            \item If you wish to compute the derivative of $b$ with respect to $a$, then, in the computation graph,
            \begin{enumerate}
                \item $a$ must be a leaf node. What this means is that $a$ must not itself depend on some other tensors.
                \item $a$ must be initialized with {\tt requires\textunderscore grad = True}.
            \end{enumerate}
            \item To compute $\frac{\partial{b}}{\partial{a}}$, first compute $b$ as a function of $a$, then run {\tt b.backward()}.
        \end{itemize}
        \item The next line, {\tt update}, is used to update $W$ and $b$.
    \end{enumerate}
    \item Now, we come to why we use \emph{W.clone} and \emph{b.clone}.
    \begin{enumerate}
        \item When we use tensors with {\tt requires\textunderscore grad = True}, Pytorch makes a \emph{computational graph} of the same and stores it.
        \item When we call {\tt tensor.backward()}, this computation graph is used to compute the derivatives.
        \item Here's the computational graph when we run {\tt cost.backward()} for the first time.
        \begin{figure}[H]
            \begin{center}
                \includegraphics[width = 0.5\textwidth]{graph1.png}
            \end{center}
        \end{figure}
        \newpage
        \item Here's the computational graph when we run {\tt cost.backward()} for the second time.
        \begin{figure}[H]
            \begin{center}
                \includegraphics[width = \textwidth]{graph2.png}
            \end{center}
        \end{figure}
        Notice how, in the second graph, \textbf{$W$ and $b$ are no longer leaf nodes}. What this means is that {\tt W.grad} and {\tt b.grad} will both be {\tt None}. Further, we \emph{definitely} don't want this kind of computational graph for {\tt cost.backward()}.
        \item The solution is to ``refresh'' $W$ and $b$ at each iteration.
    \end{enumerate}
    \item With that out of the way, let's see the plot for our new function.
    \begin{figure}[H]
        \begin{center}
            \includegraphics[width = 0.5\textwidth]{plot3.png}
        \end{center}
    \end{figure}
    \item And the cost:
    \begin{figure}[H]
        \begin{center}
            \includegraphics[width = 0.5\textwidth]{plot4.png}
        \end{center}
    \end{figure}
\end{enumerate}
\newpage

\section{Why was this the ``dumb'' way?}
\begin{itemize}
    \item There are too many things that can go wrong here.
    \item First, there's the business with refreshing $W$ and $b$ at every iteration.
    \item We could mess up the forward pass, the cost function or the update rule.
    \item Most importantly, here, we're reinventing the wheel. Pytorch has many utilities which make it a very easy task to define and train models.
    \item Imagine if we had to define the forward pass for a deep neural network or a convolutional neural network!
\end{itemize}
Next time, we'll see how we \emph{really} build and train models in Pytorch.

\end{document}
{ "alphanum_fraction": 0.6216422985, "avg_line_length": 51.147826087, "ext": "tex", "hexsha": "918cfae56538250dbde2d6056bd9c833036bd8d7", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-05-31T06:52:05.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-31T06:52:05.000Z", "max_forks_repo_head_hexsha": "440d93f93cc1ca710113bfd5edb61f6db41fb274", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "aniruddhkb/enigmatutorials", "max_forks_repo_path": "intro2ml/linearRegressionPytorch/linearRegressionTheDumbWay.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "440d93f93cc1ca710113bfd5edb61f6db41fb274", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "aniruddhkb/enigmatutorials", "max_issues_repo_path": "intro2ml/linearRegressionPytorch/linearRegressionTheDumbWay.tex", "max_line_length": 198, "max_stars_count": 1, "max_stars_repo_head_hexsha": "440d93f93cc1ca710113bfd5edb61f6db41fb274", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "aniruddhkb/enigmatutorials", "max_stars_repo_path": "intro2ml/linearRegressionPytorch/linearRegressionTheDumbWay.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-30T18:40:22.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-30T18:40:22.000Z", "num_tokens": 2937, "size": 11764 }
\documentclass[letterpaper,10pt,titlepage]{article}

\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{alltt}
\usepackage{float}
\usepackage{color}
\usepackage{url}
\usepackage{balance}
%\usepackage[TABBOTCAP, tight]{subfloat}
\usepackage{enumitem}
\usepackage{pstricks, pst-node}
\usepackage{geometry}
\geometry{textheight=8.5in, textwidth=6in}

%\include{pygments.tex}

\newcommand{\cred}[1]{{\color{red}#1}}
\newcommand{\cblue}[1]{{\color{blue}#1}}

\usepackage{hyperref}

\def\name{Neale Ratzlaff}

\title{Computer Architecture Assignment 4}
\author{Neale Ratzlaff}
\date{30 September 2015}

%pull in the necessary preamble matter for pygments output
%\input{pygments.tex}

%% The following metadata will show up in the PDF properties
\hypersetup{
  colorlinks = true,
  urlcolor = black,
  pdfauthor = {\name},
  pdfkeywords = {CS472},
  pdftitle = {CS 472 Assignment 4},
  pdfpagemode = UseNone
}

\begin{document}
\maketitle
\pagebreak

\section{Memory Optimization}

The author gives a brief review of what the cache is and of the concepts needed to understand it: caches, cache lines, and associativity. They go through the cache hierarchy and what it means to have different levels of cache. The topic of the presentation is how one might avoid cache misses. Cache misses have three main causes: capacity misses, where the data simply cannot all be held in one level of cache; conflict misses, where the processor tries to map overlapping data to the same line; and unavoidable (compulsory) misses, where data is being fetched for the first time and cannot possibly already be in the cache.

There are ways to avoid most cache misses. A program's use of the cache can be optimized by taking advantage of temporal and spatial locality in code. Temporal locality means that if data is going to be processed more than once (an array or buffer), it is better to do all of the operations when the program loads the data into cache the first time, instead of scattering the operations and making the program load and unload the data repeatedly. Spatial locality refers to the speedup gained by placing the code that uses the same piece of memory close together in the program. Spatially local code is usually temporally local. There are various GCC directives made for optimizing a program's use of the cache; attribute and restrict are two of them.

The author goes through various ways to store and process data efficiently, such as trees, unrolling loops, and hot-cold splitting. The main takeaway from these is that programmers should beware of large structures that can take up more memory than they need to; also, contiguous is king. Data structures that allow rapid access by loading the whole structure into cache are very efficient.

The author goes on to warn programmers about aliasing. Aliasing occurs when an array is accessed out of bounds. Avoiding this kind of aliasing is trivial, but it can become more difficult when one structure lies directly against the wall of another. Aliasing also occurs when more than one name points to the same spot in memory: when the memory is changed through one pointer, all the other variables that point to that spot are effectively changed as well, though the programmer may forget this. Using type-based alias analysis may eliminate this, as it forces the compiler to establish the value as belonging to the correct type, so there is never an issue with data compatibility.
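To make the spatial-locality point above concrete, here is a small, self-contained timing sketch. It is not taken from either article; the use of Python with NumPy and the array size are arbitrary choices for illustration. Traversing a row-major array along its rows touches memory sequentially, while traversing it along its columns jumps by a whole row at each step and therefore misses the cache far more often.

\begin{verbatim}
# Illustrative sketch; array size chosen arbitrarily.
import time
import numpy as np

a = np.random.rand(4000, 4000)  # C (row-major) order: rows are contiguous

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))  # unit stride
t1 = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(a.shape[1]))  # large stride
t2 = time.perf_counter()

print("row-wise:    %.3f s" % (t1 - t0))
print("column-wise: %.3f s" % (t2 - t1))
\end{verbatim}

On most machines the column-wise pass is noticeably slower, even though both passes perform the same number of additions, because each column access has a stride of one full row and keeps evicting useful cache lines.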
\pagebreak

\section{What Every Programmer Should Know About Memory}

This author also goes through an introduction to the cache, explaining that it is the fastest way to process data in the CPU. The author stresses the disadvantages of loading into the cache any data that is smaller or larger than the line size, as it causes a slowdown compared to processing data that is exactly line size. Caches can be sped up by making them fully associative, but this comes with so much complexity that it becomes impossible to do with anything other than a very small cache; TLBs are, in general, the only caches that are fully associative. Cache size can be calculated as (cache line size * associativity * number of sets).

The caches can be seen by taking a large enough array and iterating through it, accessing each element in turn. This will take a certain amount of time per access for L1, and a larger amount of time per access for L2. If elements in the array are accessed randomly, the timing becomes far more unstable, as the CPU only loads sequential memory addresses into cache, which may or may not be used by the program.

The cache is always in sync with main memory. When the CPU loads data into cache, it then immediately loads the same data into main memory, assuring that there are no discrepancies in memory. If data is changed in cache, the cache line is marked as dirty, and the processor deals with the change in main memory as it can. There is multiprocessor/thread support for cache, but it makes everything far more complicated. There is one L1 cache per core, and one L2 for the whole CPU. So cache lines on multiple levels will be changed, marked, and merged into main memory. The author ends by discussing cache misses and how they apply, as in the previous section.

\pagebreak

\end{document}
{ "alphanum_fraction": 0.7442518571, "avg_line_length": 59.5157894737, "ext": "tex", "hexsha": "7e98501140a8a3fdf1aca953ad9b6aa6ef78060b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6fef9c39579143bde0ab5d1ec5fedc7210e55814", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "neale/CS-program", "max_forks_repo_path": "472-Architecture2/assignment4/template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6fef9c39579143bde0ab5d1ec5fedc7210e55814", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "neale/CS-program", "max_issues_repo_path": "472-Architecture2/assignment4/template.tex", "max_line_length": 148, "max_stars_count": 1, "max_stars_repo_head_hexsha": "6fef9c39579143bde0ab5d1ec5fedc7210e55814", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "neale/CS-program", "max_stars_repo_path": "472-Architecture2/assignment4/template.tex", "max_stars_repo_stars_event_max_datetime": "2016-10-24T13:36:23.000Z", "max_stars_repo_stars_event_min_datetime": "2016-10-24T13:36:23.000Z", "num_tokens": 1265, "size": 5654 }
% !TEX root = ./busty_transcription.tex \section{Introduction} Gene expression presides over much of the most important dynamism of living organisms. The level of expression of batteries of different genes is altered as a result of spatiotemporal cues that integrate chemical, mechanical and other types of signals. The original repressor-operator model conceived by Jacob and Monod in the context of bacterial metabolism has now been transformed into the much broader subject of gene regulatory networks in living organisms of all kinds~\cite{Jacob1961, Britten1969, Ben-TabouDe-Leon2007}. One of the remaining outstanding challenges to have emerged in the genomic era is our continued inability to predict the regulatory consequences of different regulatory architectures, i.e. the arrangement and affinity of binding sites for transcription factors and RNA polymerases on the DNA. This challenge stems first and foremost from our ignorance about what those architectures even are, with more than 60\% of the genes even in an ostensibly well understood organism such as {\it E. coli} having no regulatory insights at all~\cite{Rydenfelt2014-2, Belliveau2018, Ghatak2019, Santos_Zavaleta2019}. But even once we have established the identity of key transcription factors and their binding sites of a given promoter architecture, there remains the predictive challenge of understanding its input-output properties, an objective that can be met by a myriad of approaches using the tools of statistical physics~\cite{Ackers1982, Shea1985, Buchler2003, Vilar2003a, Vilar2003b, Bintu2005a, Bintu2005c, Gertz2009, Sherman2012, Saiz2013, Ko1991, Peccoud1995, Record1996, Kepler2001, Sanchez2008, Shahrezaei2008, Sanchez2011, Michel2010}. One route to such predictive understanding is to focus on the simplest regulatory architecture and to push the theory-experiment dialogue as far and as hard as it can be pushed~\cite{Garcia2011, Phillips2019}. If we demonstrate that we can pass that test by successfully predicting both the means and variance in gene expression at the mRNA level, then that provides a more solid foundation upon which to launch into more complex problems - for instance, some of the previously unknown architectures uncovered in~\cite{Belliveau2018} and~\cite{Ireland2020}. To that end, in this paper we examine a wide variety of distinct models for the simple repression regulatory architecture. This genetic architecture consists of a DNA promoter regulated by a transcriptional repressor that binds to a single binding site as developed in pioneering early work on the quantitative dissection of transcription \cite{Oehler1994, Oehler1990}. All of the proposed models coarse-grain away some of the important microscopic features of this architecture that have been elucidated by generations of geneticists, molecular biologists and biochemists. One goal in exploring such coarse-grainings is to build towards the future models of regulatory response that will be able to serve the powerful predictive role needed to take synthetic biology from a brilliant exercise in enlightened empiricism to a rational design framework as in any other branch of engineering. More precisely, we want phenomenology in the sense of coarse-graining away atomistic detail, but still retaining biophysical meaning. For example, we are not satisfied with the strictly phenomenological approach offered by the commonly used Hill functions. 
As argued in~\cite{Frank2013}, Hill functions are ubiquitous precisely because they coarse-grain away all biophysical details into inscrutable parameters. Studies like~\cite{Razo-Mejia2018} have demonstrated that Hill functions are clearly insufficient since each new situation requires a completely new set of parameters. Such work requires a quantitative theory of how biophysical changes at the molecular level propagate to input-output functions at the genetic circuit level. In particular a key question is: at this level of coarse-graining, what microscopic details do we need to explicitly model, and how do we figure that out? For example, do we need to worry about all or even any of the steps that individual RNA polymerases go through each time they make a transcript? Turning the question around, can we see any imprint of those processes in the available data? If the answer is no, then those processes are irrelevant for our purposes. Forward modeling and inverse (statistical inferential) modeling are necessary to tackle such questions. Figure~\ref{fig1:means_cartoons}(A) shows the qualitative picture of simple repression that is implicit in the repressor-operator model. An operator, the binding site on the DNA for a repressor protein, may be found occupied by a repressor, in which case transcription is blocked from occurring. Alternatively, that binding site may be found unoccupied, in which case RNA polymerase (RNAP) may bind and transcription can proceed. The key assumption we make in this simplest incarnation of the repressor-operator model is that binding of repressor and RNAP in the promoter region of interest is exclusive, meaning that one or the other may bind, but never may both be simultaneously bound. It is often imagined that when the repressor is bound to its operator, RNAP is sterically blocked from binding to its promoter sequence. Current evidence suggests this is sometimes, but not always the case, and it remains an interesting open question precisely how a repressor bound far upstream is able to repress transcription~\cite{Rydenfelt2014-2}. Suggestions include ``action-at-a-distance'' mediated by kinks in the DNA, formed when the repressor is bound, that prevent RNAP binding. Nevertheless, our modeling in this work is sufficiently coarse-grained that we simply assume exclusive binding and leave explicit accounting of these details out of the problem. \afterpage{\clearpage} \begin{figure}[p] \centering \includegraphics[width=\textwidth]{../figures/main/fig01.pdf} \caption{\textbf{An overview of the simple repression motif at the level of means.} (A) Schematic of the qualitative biological picture of the simple repression genetic architecture. (B) and (C) A variety of possible mathematicized cartoons of simple repression, along with the effective parameter $\rho$ which subsumes all regulatory details of the architecture that do not directly involve the repressor. (B) Simple repression models from an equilibrium perspective. (C) Equivalent models cast in chemical kinetics language. (D) The ``master curve'' to which all cartoons in (B) and (C) collapse.} \label{fig1:means_cartoons} \end{figure} The logic of the remainder of the paper is as follows. In section~\ref{section_02_means}, we show how both thermodynamic models and kinetic models based upon the chemical master equation all culminate in the same underlying functional form for the fold-change in the average level of gene expression as shown in Figure~\ref{fig1:means_cartoons}(D). 
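For orientation, we note the canonical equilibrium result for this architecture from the earlier work cited above (e.g., ref.~\cite{Garcia2011}); the notation below is purely illustrative and is not the master curve of Figure~\ref{fig1:means_cartoons}(D) itself:
\begin{equation}
\text{fold-change} \equiv
\frac{\langle \text{gene expression with repressor} \rangle}
     {\langle \text{gene expression without repressor} \rangle}
\approx \left(1 + \frac{R}{N_{NS}}\, e^{-\beta\Delta\varepsilon_R}\right)^{-1},
\end{equation}
where $R$ is the repressor copy number, $N_{NS}$ the number of non-specific binding sites for the repressor on the genomic DNA, $\Delta\varepsilon_R$ the repressor binding energy, and $\beta = 1/k_B T$.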
Section~\ref{sec:beyond_means} goes beyond an analysis of the mean gene expression by asking how the same models presented in Figure~\ref{fig1:means_cartoons}(C) can be used to explore noise in gene expression. To make contact with experiment, all of these models must make a commitment to some numerical values for the key parameters found in each such model. Therefore in Section~\ref{section_04_bayesian_inference} we explore the use of Bayesian inference to establish these parameters and to rigorously answer the question of how to discriminate between the different models. %\mmnote{Key ideas, no particular order, that I haven't written down before: %\begin{itemize} %\item Our goal is to build phenomenological models of input-output functions of %genetic circuits. More precisely, we want phenomenology in the sense of %coarse-graining away atomistic detail, but still retaining biophysical meaning, %e.g., we don't want to coarse-grain as far as Hill functions. Why not? We are %motivated by studies like~\cite{Razo-Mejia2020}, for which Hill functions are %insufficient. Such work requires a quantitative theory of how biophysical %changes at the molecular level propagate to input-output functions at the %genetic circuit level. We want concepts, not mere facts. In particular a key %question is: at this level of coarse graining, what microscopic details do I %need to explicitly model, and how do we figure that out? For example, do I need %to worry about all or even any of the steps that individual RNAPs go through %each time they make a transcript? Turning the question around, can we see any %imprint of those processes in the available data? If the answer is no, then %those processes are irrelevant for our purposes. Forward modeling and inverse %(statistical inferential) modeling can complement each other beautifully here. %\end{itemize}} %Biochemistry tells us of a progression of steps, but theory tells us %that coarse graining can be exact
{ "alphanum_fraction": 0.8164705882, "avg_line_length": 66.1111111111, "ext": "tex", "hexsha": "39745dc086e3d5181a3ec83a26777aeac80d98a1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cd3082c567168dfad12c08621976ea49d6706f89", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RPGroup-PBoC/bursty_transcription", "max_forks_repo_path": "doc/section_01_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cd3082c567168dfad12c08621976ea49d6706f89", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RPGroup-PBoC/bursty_transcription", "max_issues_repo_path": "doc/section_01_introduction.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "cd3082c567168dfad12c08621976ea49d6706f89", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RPGroup-PBoC/bursty_transcription", "max_stars_repo_path": "doc/section_01_introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2022, "size": 8925 }
\subsection{Subset relation}

\subsubsection{Subset}
If all terms which are members of term \(x\) are also members of term \(y\), then \(x\) is a subset of \(y\).

\(\forall x\forall y[(\forall z(z\in x\rightarrow z\in y))\leftrightarrow (x\subseteq y)]\)

\subsubsection{Proper subset}
If two sets are equal, then each is a subset of the other. A proper subset is a subset which is not equal to the other set.

\(\forall x\forall y[((\forall z(z\in x\rightarrow z\in y))\land(x\ne y))\leftrightarrow (x\subset y)]\)
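A brief worked example (added for illustration; the particular sets are my own): with \(x=\{1,2\}\) and \(y=\{1,2,3\}\), every member of \(x\) is also a member of \(y\) and \(x\ne y\), so both \(x\subseteq y\) and \(x\subset y\) hold. With \(x=\{1,2\}\) and \(y=\{1,2\}\), we still have \(x\subseteq y\), but \(x\subset y\) fails because \(x=y\).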
{ "alphanum_fraction": 0.6981132075, "avg_line_length": 33.125, "ext": "tex", "hexsha": "d676c082888386ede49a5bc5e65eefd56b78b3cf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/logic/sets/02-01-subset.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/logic/sets/02-01-subset.tex", "max_line_length": 132, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/logic/sets/02-01-subset.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 156, "size": 530 }
\documentclass{article} \usepackage[minionint,mathlf,textlf]{MinionPro} % To gussy up a bit \usepackage[margin=1in]{geometry} \usepackage{graphicx} % For .eps inclusion %\usepackage{indentfirst} % Controls indentation \usepackage[compact]{titlesec} % For regulating spacing before section titles \usepackage{adjustbox} % For vertically-aligned side-by-side minipages \usepackage{array, mathrsfs, mathrsfs, mhchem, amsmath} % For centering of tabulars with text-wrapping columns \usepackage{hyperref, chemfig} \usepackage{subfigure} \usepackage[autolinebreaks,framed,numbered]{mcode} \newcommand{\Lapl}{\mathscr{L}} \pagenumbering{gobble} \setlength\parindent{0 cm} \begin{document} \large \section*{Introduction to self-assembly} Self-assembly is the process by which components join to form a desired structure without the help of an external agent. For example, a protein formed from subunits which remain bound together after randomly colliding has undergone self-assembly, whereas a messenger RNA produced through joining of free nucleotides by a proteinaceous polymerase has not. If all or even most complex structures in biology required an agent dedicated to their own construction, then an enormous number of components would be necessary: it is therefore likely that many biological structures manage to construct themselves through component ``design" that exploits the laws of physics and chemistry. Self-assembly is also of particular interest for the evolution of life, a condition in which other organic agents cannot be invoked to aid in replication. Producing building blocks which self-assemble is a simple strategy for self-replication, one of the fundamental processes of life. \section*{Self-assembly of virions} Consider the dilemma faced by a virus which must encapsulate its genome and release it from a host cell as an infectious virion. Every base pair in the viral genome (assuming that it is double-stranded) weighs around 600 Da, and it takes three such base pairs to encode a single 100-Da amino acid in the viral proteome, yet many viruses produce a proteinaceous capsid that completely encloses the genome. All of this is achieved despite many negative charges on the phosphodiester backbone of DNA that tend to impede the genome's compression through electrostatic repulsion. The situation merits a detailed analysis since many gene therapy and bioengineering applications require delivering synthetic sequences via virion, and knowing the limitations on size and sequence would be helpful for developing new vectors.\\ Key strategies of which you may be aware include producing a capsid from many copies of a small number of proteins by taking advantage of symmetry; positively-charged protein domains that partially neutralize the viral genome's charge through associations with the backbone; and encoding proteins in overlapping open reading frames to minimize the genome's size. Even with these clever approaches, there are still some serious challenges. The simian vacuolating virus SV40, for example, is an icosahedral virus with no plasma membrane envelope, i.e., the protein capsid is the only structure maintaining integrity of the virion. The capsid comprises three hundred and sixty copies of the same protein, arranged into 72 pentamers. Some of these pentamers (at each vertex of the icosahedron) have five neighbors; others, six. 
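To make the arithmetic behind this dilemma explicit, here is a rough back-of-the-envelope estimate, added for illustration and using only the figures quoted above:
\[
\frac{\text{mass of coding DNA per amino acid}}{\text{mass of protein per amino acid}} \approx \frac{3 \times 600~\mathrm{Da}}{100~\mathrm{Da}} = 18,
\]
so every dalton of capsid protein costs roughly eighteen daltons of double-stranded genome to encode it. A shell of unique, non-repeating protein with total mass comparable to the genome it must enclose would therefore demand a genome many times larger than the one being packaged, which is why building the capsid from hundreds of copies of one small, symmetric subunit (360 copies in SV40) is essentially unavoidable.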
\begin{center}
\includegraphics[width=0.7\textwidth]{sv40.pdf}
\end{center}

Nonspecific interactions between host cell proteins and even a small fraction of these three hundred and sixty virion proteins could prevent the capsid from properly forming. A typical theme in viral capsid formation is that single subunits assemble into multi-subunit complexes (equivalent in SV40 to the pentamers), which in turn can form small groups or sheets that interact to complete the capsid. Each of these assembly steps creates a larger interface between interacting units that is likely stabilizing only if all units are correctly assembled. In this way properly-assembled pentamers are preferentially incorporated into groups/sheets and properly-assembled groups/sheets are incorporated into the capsid.\\
Another major issue in assembly is that a copy of the viral genome must be incorporated inside the capsid when it assembles. Some viral genomes contain packaging sequences that help the capsid assemble around the genome through specific binding, but it appears in many cases that only size (folded or otherwise) and charge determine the likelihood that a nucleic acid will be incorporated. Though capsid proteins often have positive charge on their interior surfaces as mentioned above, it is rarely enough to balance the negative charge of the nucleic acid contained within the capsid.

\begin{center}
\includegraphics[width=0.4\textwidth]{capsid_face_diagram.pdf}
\end{center}

Recently Perlmutter et al. (2013) have used molecular dynamics to simulate the movement of components of a viral capsid and their interactions with the nucleic acid they are designed to contain. The steps in these simulations will be familiar to you from simulation of Langevin SDEs: in each step of the simulation the forces acting on each atom are calculated and used to update its position and momentum with added noise (a minimal sketch of one such update step is given below). To make the problem computationally tractable, the authors examined a dodecahedron capsid in which the faces are pentagons: the whole face is modeled by five ``atoms'' arranged in a pentagon, each of which is attracted to like atoms on adjacent faces, and two ``atoms'' above and below the face which repel one another with different strengths to position adjacent faces at an angle. Onto the interior faces are positioned strands of positive charges meant to represent the positively-charged residues that interact with the nucleic acid backbone. A ``nucleic acid'' (a chain of negative charges, perhaps with secondary structure from hydrogen bonding) dropped into a solution of these capsid faces will tend to increase their local concentration enough to nucleate capsid assembly. The simulation can be repeated with nucleic acids of various sizes and secondary structures to study average time to assembly.\\

\begin{center}
\includegraphics[width=0.3\textwidth]{assembly.pdf}
\end{center}

In such simulations, capsids will spontaneously assemble around a nucleic acid even if the NA's charge significantly exceeds that of the capsid faces. A more important determinant is the typical volume of the NA, determined both by its length and its folding pattern.\\
These simulations suggest another way in which capsids can be encouraged to assemble without unwanted interactions between host proteins: the NA acts to increase the local concentration of capsid faces and facilitates their interaction. HIV-1 also increases the local concentration of its capsid components via tethering even before its viral envelope has formed.
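Returning to the simulation methodology mentioned above: here is a minimal, self-contained sketch of a single Langevin-type update step. This is my own illustrative C++, not the authors' actual simulation code; the function name, the use of one flattened coordinate array, and the precomputed force array standing in for all interaction potentials are assumptions made purely for the example.

\begin{verbatim}
#include <cmath>
#include <random>
#include <vector>

// One Euler-Maruyama step of the underdamped Langevin equation,
//   dv = (F/m - gamma*v) dt + sqrt(2*gamma*kT/m) dW,   dx = v dt,
// applied to every coordinate. The forces would come from the
// capsid-subunit and nucleic-acid interaction potentials.
void langevin_step(std::vector<double>& x, std::vector<double>& v,
                   const std::vector<double>& force, double mass,
                   double gamma, double kT, double dt, std::mt19937& rng)
{
    std::normal_distribution<double> gauss(0.0, 1.0);
    const double noise = std::sqrt(2.0 * gamma * kT * dt / mass);
    for (std::size_t i = 0; i < x.size(); ++i) {
        v[i] += (force[i] / mass - gamma * v[i]) * dt + noise * gauss(rng);
        x[i] += v[i] * dt;
    }
}
\end{verbatim}

At a cartoon level, repeating this step many times, recomputing the forces in between, and watching whether the faces eventually close into a complete capsid is what the published simulations do.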
The HIV-1 genome encodes a protein called Gag, which is made up of several domains that will later be proteolytically cleaved during maturation. The N-terminal end of Gag (called MA) binds to the plasma membrane and to the HIV Env protein, the major components of the HIV envelope. The central region of Gag (called CA) will form the capsid, and the C-terminal end (called NC) will bind to the HIV genome to facilitate its inclusion in the virion. The interaction of Gag with the plasma membrane ensures its enriched concentration there, so that nonspecific binding of CA with host proteins is less likely. (Interestingly, the binding between CA domains at this stage is different from the interaction in the mature capsid, permitting formation of a dome rather than a cone.) Co-expression of these proteins in the long polypeptide Gag also ensures a fixed stoichiometry within the virion.

\begin{center}
\includegraphics[width=0.7\textwidth]{hiv.pdf}
\end{center}

\section*{Influenza A}

Influenza A, the most virulent form of flu, is a single-stranded RNA virus which encodes its genome on eight separate RNA molecules. It lacks a tight capsid but rather encloses its genome and other proteins within a viral envelope made of viral proteins and the host's plasma membrane. To be effective, a virion must be packaged with one of each of these RNAs: there appears to be no room for an extra copy of any. Specific ``packaging sequences'' have been identified on the viral RNAs, without which most virions have an incorrect complement of RNAs.

\begin{center}
\includegraphics[width=0.5\textwidth]{noda.pdf}
\end{center}

Consider two alternative hypotheses for how these packaging sequences could work. In the first, packaging sequences help RNAs to bind to one another (either directly or via RNA-protein interactions). Carefully orchestrated interactions would bring exactly the right set of eight RNAs together under this theory. I will bet green cash that this is the approach any of us would take if asked to design a self-assembling virion. However, there is no evidence for spatial order of the RNAs within the envelope. An alternative hypothesis is that packaging sequences introduce repulsion between like RNAs. Repulsion could be thought of simply as the absence of the interactions that are possible between other sequences. (Steric hindrance and electrostatic effects based on the positioning of packaging sequences on loops are also imaginable but are not discussed below.) Unlike the first model, under this theory each RNA's self-repulsion system could evolve independently, there would be no constraints on co-evolution of e.g. the packaging sequences between RNAs, and no requirement for spatial order. On the other hand, one of the advantages of the specific positive interaction model is that it explains why host RNAs are rarely included in the virion by mistake.

\begin{center}
\includegraphics[width=0.5\textwidth]{venev1.pdf}
\end{center}

The plausibility of this model was investigated by Venev and Zeldovich (2013) using an \textit{in silico} evolution approach. They envision an interaction energy $E$ which depends on the number of each type of RNA that can be included in the virion.
They assume that the envelope does not have room for extra RNAs and that eight RNAs are always included; they also assume that only the simplest mistakes -- leaving out one RNA and including two of another -- occur, so that there are 56 possible errors and one correct combination\footnote{This assumption may seem arbitrary but is reasonably justified by experimental analysis of single virions, c.f. Noda et al. (2012) and Chou et al. (2012).}. The combination of RNAs in an envelope can be described by a packaging index: for example, $V=(1,1,1,1,1,1,1,1)$ represents correct packaging and $U_{12}=(0,2,1,1,1,1,1,1)$ one possible error. The interaction energy also depends on the spatial position $k$ in which these RNAs happen to find themselves. The probability of correct binding is given by a Boltzmann factor: \[ \Pi = \frac{\sum_{k} e^{-\beta E(V,k)}}{\sum_{k} e^{-\beta E(V,k)} + \sum_{k,i,j} e^{-\beta E(U_{ij},k)}} \] The interaction energies depend on the sequences of the packaging loops, which are allowed to evolve in the simulation and assumed to depend only on Watson-Crick base pairing. \begin{center} \includegraphics[width=0.5\textwidth]{venev2.pdf} \end{center} Venev and Zeldovich reasoned that if the first model were correct, at the end of the simulation they could expect to see strong positive interactions between certain pairs of RNAs (which must form a network connecting all eight RNAs) and relatively weak interactions between all others. However, if the second model were correct, all but the self-interactions would be relatively strong. The latter best describes what they observed in sequences that were evolved to maximize $\Pi$. The maximum value for $\Pi$ which they obtained by this method was 0.35, on par with the $\approx$50\% correct virion assembly rate estimated experimentally via FISH and electron tomography (Chou et al., 2012 and Noda et al., 2012). \begin{center} \includegraphics[width=0.5\textwidth]{venev3.pdf} \end{center} \section*{Self-assembly of genomes} Annaluru et al. (2014) recently reported the synthesis of a complete chromosome in \textit{S. cerevisiae}, the first step in a project to create the first synthetic eukaryotic genome (Sc2.0). Reconstructing a genome provides the opportunity to test many theories concerning its function. For example, it is known that roughly 5000 of \textit{S. cerevisiae}'s 6000 genes are non-essential when deleted alone, and much is also known of epistasis in gene pairs; however, it is not known what complement of genes is minimal for survival. The synthetic genome will flank open reading frames with loxP sites: recombinants which have had the opportunity to remove genes (or rearrange them) can be isolated after recombinase activity to assess which genes are essential. One stop codon will also be completely absent in open reading frames of this synthetic genome, so that synthetic tRNAs with a complementary anticodon can be used to label a protein of choice with a non-canonical amino acid. Alterations to the genome have also been used to confirm or refute experimental predictions involving the sufficiency of telomeric repeats, the role of tRNA genes in cohesin recruitment, removal of repetitive sequences that were once hotspots for genomic rearrangements, and reduction of introns (where this does not interfere with fitness). Ultimately, the construction of synthetic genomes could be used to produce species which grow optimally under defined conditions. 
\begin{center} \includegraphics[width=0.5\textwidth]{annaluru.pdf} \end{center} Construction of the synthetic chromosome begins with 750 bp blocks constructed by hybridizing overlapping 70 nt oligonucleotides and employing PCR to assemble these into a complete double-stranded DNA ``building block" (Stemmer et al., 1995). Fusion PCR is then used to assemble 3-4 building blocks into an approx. 3 kb ``mini-chunk." Multiple minichunks which overlap one another are cotransformed into yeast, where they integrate with the genome and one another by homologous recombination. (If the sequence of any mini-chunk makes the host inviable, the transformation fails and a new sequence can be attempted.) With this approach the authors were able to walk along the chromosome until it had been fully replaced with synthetic sequence. This synthesis strategy mirrors the approach used by the Venter Institute to synthesize and replace genomes in \textit{Mycoplasma} (Gibson et al., 2010). \section*{DNA origami} \begin{center} \includegraphics[width=0.5\textwidth]{rothemund1.pdf} \end{center} Computational design of nucleic acids which partially hybridize to form arbitrary shapes has been continually improving since Paul Rothemund introduced the technique in 2006. Rothemund's method used the 7kb single-stranded DNA genome of the phage M13mp18 as a ``scaffold" which was forced to bend by sequence-specific hybridization to smaller DNA ``staples" that bound to two different genomic regions to pull them physically together. Heating the combination of DNAs together and allowing them to very slowly cool and anneal produces the arbitrary shapes, which can be visualized by EM. \begin{center} \includegraphics[width=0.5\textwidth]{rothemund2.pdf} \end{center} Just as in PCR primer design, algorithmic design of hybridization sequences reduces the probability of undesired binding events. Full control over the sequence can be had by swapping the viral genome scaffold for fully-synthetic sequences; many shapes can be constructed from a small number of appropriately-chosen sequences (Wei et al., 2010). The error rate can also be tuned by choosing an appropriate stoichiometry between the nucleic acids, which does not necessarily reflect the stoichiometry in the correct structure but may instead be chosen to disfavor incorrect structures from forming (Murugan et al., 2015).\\ The shapes chosen to test these sequence-selecting algorithms are often whimsical to demonstrate the versatility of the method. Of course three dimensional shapes can be constructed by a similar approach (Ke et al., 2012), including some which execute assembly reactions in a specific order (Sadowski et al., 2014).\\ Recently DNA origami has also been employed to create ``molds" in which metal deposits can be nucleated to form specific shapes (Sun et al., 2014). Boxes made from DNA have also been caused to execute logic functions with the goal of delivering their ``contents" only under specific conditions (Amir et al., 2014). \begin{center} \includegraphics[width=0.5\textwidth]{mold.pdf} \end{center} \section*{Self-replication} In the majority of this lecture, we have treated self-assembly as a spontaneous process involving only interactions between the building blocks during assembly. Though this approach is used often in both natural and synthetic biological systems, in many cases assembly requires a more direct agent. 
If that agent is an already-assembled entity of the same type, the process is still considered a form of self-assembly but is more likely to be referred to as self-replication.\\
The ``RNA world'' hypothesis holds that nucleic acids once served as both genetic material and catalytic agents (a function they continue to perform today, e.g., at the heart of the ribosome). One possibility for the origin of life is that a single nucleic acid became capable of folding in such a way that it could replicate itself (or at least, copies of itself). Alternatively, a small number of nucleic acids may form a cooperative network which favors the replication of all components. It is theoretically possible through artificial selection or experimental evolution to produce an RNA which is capable of templated nucleotide polymerization. Since binding free nucleotides could be imagined to be an important function for self-replicating RNAs, an interesting first pass is to select just for, e.g., the ability to bind GTP, an experiment which has already been performed using an exhaustive covering of the sequence space for 24-nt RNAs (Jim\'{e}nez et al., 2014). Longer RNAs capable of extending a dsRNA overhang up to their own length have been evolved through directed evolution (see e.g. Attwater et al., 2013) and can function under imperfect conditions such as within ice. However, remaining requirements, such as priming and escape from the fully double-stranded state so that the copy can fold into its functional form, still prevent these RNAs from being fully capable of self-replication.\\
The general principle that a parent entity may serve as a template over which a new one could be assembled from its constituent parts has also been studied in model systems that permit simulation and functional analysis. A system based on a catalytic pair of structures formed from interacting spheres can be modeled by an approach similar to molecular dynamics (Zeravcic and Brenner, 2014). These toy models reveal that even a relatively simple system can be capable of self-replication.

\end{document}
{ "alphanum_fraction": 0.8095837926, "avg_line_length": 151.2142857143, "ext": "tex", "hexsha": "6cb26bbe67d1e9b710f6b844d3ba0ae33b4ae075", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-03-25T14:42:10.000Z", "max_forks_repo_forks_event_min_datetime": "2017-01-20T17:43:51.000Z", "max_forks_repo_head_hexsha": "95ad58ec50ef79d084e71f4380fbfbf5e1603836", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mewahl/intro-systems-biology", "max_forks_repo_path": "lectures/Lecture 32 - Self-assembly/lecture notes/lecture 32 notes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "95ad58ec50ef79d084e71f4380fbfbf5e1603836", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mewahl/intro-systems-biology", "max_issues_repo_path": "lectures/Lecture 32 - Self-assembly/lecture notes/lecture 32 notes.tex", "max_line_length": 1458, "max_stars_count": 3, "max_stars_repo_head_hexsha": "95ad58ec50ef79d084e71f4380fbfbf5e1603836", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mewahl/intro-systems-biology", "max_stars_repo_path": "lectures/Lecture 32 - Self-assembly/lecture notes/lecture 32 notes.tex", "max_stars_repo_stars_event_max_datetime": "2019-01-31T17:23:09.000Z", "max_stars_repo_stars_event_min_datetime": "2017-01-20T17:43:31.000Z", "num_tokens": 4146, "size": 19053 }
%-----------------------------------------------------------------------------%
% PACKAGES AND DOCUMENT CONFIGURATIONS
%-----------------------------------------------------------------------------%

% Document styles
\documentclass[letterpaper,12pt]{article}

% Packages
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[hmargin=0.5in, vmargin=1.0in]{geometry} % Document margins
\usepackage{float} % Options for floating objects
\usepackage{array} % Row and column types
\newcolumntype{L}[1]{>{\raggedright\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{C}[1]{>{\centering\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\arraybackslash\hspace{0pt}}m{#1}}

% Disable automatic paragraph indentation
\setlength\parindent{0pt}

%-----------------------------------------------------------------------------%
\begin{document}
%-----------------------------------------------------------------------------%

{\centering\textbf{\Large OpenBox Keybindings} \par}

\section*{Desktops management}
\begin{table}[H]
\begin{tabular}{|C{5cm}|L{10cm}|}
\hline
\textbf{Keybind} & \textbf{Description} \\
\hline
{\tt Win + F1} & Go to desktop 1 \\
{\tt Win + F2} & Go to desktop 2 \\
{\tt Win + F3} & Go to desktop 3 \\
\hline
\end{tabular}
\end{table}
%-----------------------------------------------------------------------------%

%-----------------------------------------------------------------------------%
\section*{Extras}
\begin{table}[H]
\begin{tabular}{|C{5cm}|L{10cm}|}
\hline
\textbf{Keybind} & \textbf{Description} \\
\hline
{\tt C-g} & Openbox chain quit key \\
\hline
{\tt PrntScr} & Take screenshot of active desktop \\
{\tt Alt + PrntScr} & Take screenshot of active window \\
{\tt Win + E} & Open file manager \\
{\tt Win + R} & LXPanel run menu \\
{\tt Ctrl + Esc} & LXPanel main menu \\
{\tt Ctrl + F7} & Turn off monitor \\
{\tt Ctrl + F10} & Decrease monitor backlight \\
{\tt Ctrl + F11} & Increase monitor backlight \\
\hline
\end{tabular}
\end{table}
%-----------------------------------------------------------------------------%

%-----------------------------------------------------------------------------%
\section*{Mouse (keyboard)}
\begin{table}[H]
\begin{tabular}{|C{5cm}|L{10cm}|}
\hline
\textbf{Keybind} & \textbf{Description} \\
\hline
{\tt Alt + $\leftarrow$} & Move mouse (left) \\
{\tt Alt + $\rightarrow$} & Move mouse (right) \\
{\tt Alt + $\uparrow$} & Move mouse (up) \\
{\tt Alt + $\downarrow$} & Move mouse (down) \\
\hline
{\tt Alt + Shft + $\leftarrow$} & Move mouse to side of monitor (left-center) \\
{\tt Alt + Shft + $\rightarrow$} & Move mouse to side of monitor (right-center) \\
{\tt Alt + Shft + $\uparrow$} & Move mouse to side of monitor (up-center) \\
{\tt Alt + Shft + $\downarrow$} & Move mouse to side of monitor (down-center) \\
{\tt Alt + Shft + Space} & Move mouse to center of monitor \\
\hline
{\tt Win + Alt + $\leftarrow$} & Mouse click (left-button) \\
{\tt Win + Alt + $\rightarrow$} & Mouse click (right-button) \\
{\tt Win + Alt + $\uparrow$} & Mouse wheel (up) \\
{\tt Win + Alt + $\downarrow$} & Mouse wheel (down) \\
\hline
\end{tabular}
\end{table}
%-----------------------------------------------------------------------------%

%-----------------------------------------------------------------------------%
\section*{Windows management}
\begin{table}[H]
\begin{tabular}{|C{5cm}|L{10cm}|}
\hline
\textbf{Keybind} & \textbf{Description} \\
\hline
{\tt Alt + Tab} & Focus next window \\
{\tt Alt + Shft + Tab} & Focus previous window \\
{\tt Alt + H} & Focus left
window \\ {\tt Alt + L} & Focus right window \\ {\tt Alt + K} & Focus up window \\ {\tt Alt + J} & Focus down window \\ \hline {\tt Win + H} & Tile active window in monitor (half-left) \\ {\tt Win + L} & Tile active window in monitor (half-right) \\ {\tt Win + K} & Tile active window in monitor (half-upper) \\ {\tt Win + J} & Tile active window in monitor (half-lower) \\ {\tt Win + Alt + H} & Tile active window in monitor (half-upper-left) \\ {\tt Win + Alt + L} & Tile active window in monitor (half-lower-right) \\ {\tt Win + Alt + J} & Tile active window in monitor (half-lower-left) \\ {\tt Win + Alt + K} & Tile active window in monitor (half-upper-right) \\ {\tt Win + Shft + H} & Tile active window in monitor (shifted, half-left) \\ {\tt Win + Shft + L} & Tile active window in monitor (shifted, half-right) \\ {\tt Win + Shft + k} & Tile active window in monitor (shifted, half-upper) \\ {\tt Win + Shft + J} & Tile active window in monitor (shifted, half-lower) \\ \hline {\tt Win + [Alt|Shft] + 0} & Three-way layout (2-horz, 1-vert) \\ {\tt Win + [Alt|Shft] + 1} & Three-way layout (1-vert, 2-horz) \\ {\tt Win + [Alt|Shft] + 2} & Two-way layout (2-vert) \\ {\tt Win + [Alt|Shft] + 3} & Three-way layout (3-vert) \\ {\tt Win + [Alt|Shft] + 4} & Four-way layout (2-horz, 2-vert) \\ {\tt Win + [Alt|Shft] + 5} & Two-way layout (2-horz) \\ {\tt Win + [Alt|Shft] + 6} & Six-way layout (2-horz, 3-vert) \\ {\tt Win + [Alt|Shft] + 7} & Three-way layout (3-horz) \\ {\tt Win + [Alt|Shft] + 8} & Eight-way layout (4-horz, 4-vert) \\ {\tt Win + [Alt|Shft] + 9} & Three-way layout (1-horz, 2-vert) \\ \hline {\tt Ctrl + Win + 0} & Change to primary monitor \\ {\tt Ctrl + Win + 1} & Extend to secondary monitor (left) \\ {\tt Ctrl + Win + 2} & Extend to secondary monitor (up) \\ {\tt Ctrl + Win + 3} & Extend to secondary monitor (right) \\ {\tt Ctrl + Win + 4} & Extend to secondary monitor (down) \\ {\tt Ctrl + Win + 5} & Mirror to secondary monitor \\ {\tt Ctrl + Win + H} & Send active window to next monitor \\ {\tt Ctrl + Win + L} & Send active window to previous monitor \\ {\tt Ctrl + Win + $\leftarrow$} & Send active window to next desktop (left) \\ {\tt Ctrl + Win + $\rightarrow$} & Send active window to next desktop (right) \\ \hline \end{tabular} \end{table} %-----------------------------------------------------------------------------% %-----------------------------------------------------------------------------% \begin{table}[H] \begin{tabular}{|C{5cm}|L{10cm}|} \hline \textbf{Keybind} & \textbf{Description} \\ \hline {\tt Win + D} & Toggle show desktop \\ {\tt Win + F} & Toggle fullscreen of active window \\ {\tt Win + M} & Toggle maximize of active window \\ {\tt Win + Shft + M} & Maximize active window (shifted) \\ {\tt Win + N} & Toggle show/hide Xpad notes \\ {\tt Win + Shft + N} & Tile on screen edge (Xpad notes) \\ {\tt Win + I} & Iconify active window \\ {\tt Win + S} & Shade/unshade active window \\ {\tt Alt + F4} & Close active window \\ {\tt Win + Esc} & Show active window client-menu \\ \hline \end{tabular} \end{table} %-----------------------------------------------------------------------------% %-----------------------------------------------------------------------------% \section*{Applications} \begin{table}[H] \begin{tabular}{|C{5cm}|L{10cm}|} \hline \textbf{Keybind} & \textbf{Description} \\ \hline {\tt Ctrl + Alt + F1-F6} & Go to terminal text mode tty1-tty6 \\ {\tt Ctrl + Alt + F7} & Go to terminal GUI mode, tty7 \\ {\tt Ctrl + Alt + Del} & Task manager \\ {\tt Ctrl + Alt + H} & 
Open Openbox keybindings file\\ {\tt Ctrl + Alt + T} & LXTerminal \\ {\tt Ctrl + Alt + X} & xterm \\ {\tt Ctrl + Alt + B} & Firefox \\ {\tt Ctrl + Alt + R} & Citrix Receiver \\ {\tt Ctrl + Alt + O} & Tor \\ {\tt Ctrl + Alt + S} & Slack \\ {\tt Ctrl + Alt + M} & MATLAB \\ {\tt Ctrl + Alt + E} & gedit \\ {\tt Ctrl + Alt + G} & Galculator \\ {\tt Ctrl + Alt + N} & Xpad \\ {\tt Ctrl + Alt + L} & Lock session \\ {\tt Ctrl + Alt + Q} & Quit session \\ {\tt Ctrl + Alt + 0} & Default wallpaper \\ {\tt Ctrl + Alt + 1} & Wallpaper style 1 \\ {\tt Ctrl + Alt + 2} & Wallpaper style 2 \\ {\tt Ctrl + Alt + 3} & Wallpaper style 3 \\ \hline \end{tabular} \end{table} %-----------------------------------------------------------------------------% %-----------------------------------------------------------------------------% \section*{Sound management} \begin{table}[H] \begin{tabular}{|C{5cm}|L{10cm}|} \hline \textbf{Keybind} & \textbf{Description} \\ \hline {\tt Win + V} & Toggle mute/unmute volume \\ {\tt Win + $\uparrow$} & Increase volume \\ {\tt Win + $\downarrow$} & Decrease volume \\ {\tt Win + $\leftarrow$} & Set sound card to analog stereo \\ {\tt Win + $\rightarrow$} & Set sound card to HDMI stereo \\ \hline \end{tabular} \end{table} %-----------------------------------------------------------------------------% \end{document}
{ "alphanum_fraction": 0.497173889, "avg_line_length": 40.4618834081, "ext": "tex", "hexsha": "98c9ecb5cd35976d74b33e50ecab1b2de84cba89", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ca2f9b1dfa612e9a269f664a1b22f441a6c978e0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "edponce/linux_config", "max_forks_repo_path": "bin/keybinds/openbox_keybinds.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "ca2f9b1dfa612e9a269f664a1b22f441a6c978e0", "max_issues_repo_issues_event_max_datetime": "2018-03-19T18:14:16.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-15T17:52:45.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "edponce/linux_config", "max_issues_repo_path": "bin/keybinds/openbox_keybinds.tex", "max_line_length": 86, "max_stars_count": null, "max_stars_repo_head_hexsha": "ca2f9b1dfa612e9a269f664a1b22f441a6c978e0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "edponce/linux_config", "max_stars_repo_path": "bin/keybinds/openbox_keybinds.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2605, "size": 9023 }
\chapter{Exodiff}\label{ch:exodiff} \section{Introduction} \exodiff{} compares the results data from two \exo{} databases. The databases should represent the same model, that is, the \exo{} meta data should be identical as should be the genesis portion of the bulk data. The only differences should be in the values of the transient bulk data. \exodiff{}'s main purpose is to detect and report these differences. \exodiff{} will compare global, nodal, element, nodeset, and sideset transient variables at each selected timestep; it will also compare element attribute variables on each element block containing attributes. If a third file is specified on the command line, it will be created with the same meta data and non-transient bulk data as the first file, and each variable in the third file will be the differences of the corresponding variables in the first two files. A command file can be specified and used to control exactly what variables are to be compared/differenced and to what tolerance. By default, element block names and variable names are compared ignoring case. \subsection{Difference Terminology} \exodiff{} supports several options for determining whether two values differ. These are called {\em difference types} and include the following: \begin{tabular}{ll} relative difference & $|val1 - val2|/\max(|val1|, |val2|)$. \\ absolute difference & $|val1 - val2|$ \\ combined difference & $|val1 - val2| / \max(tol, tol * \max(|val1|, |val2|))$ \\ eigen\_relative difference & $||val1| - |val2||/\max{(|val1|,|val2|)}$. \\ eigen\_absolute difference & $||val1| - |val2||$\\ eigen\_combined difference & $||val1| - |val2|| / \max(tol, tol * \max(|val1|, |val2|))$\\ \end{tabular} Where $tol$ is a user-specified tolerance. The difference types prefixed by {\em eigen\_} are intended to be used when the variable being differenced describes the shape of an eigenvector and the eigenvector shape is considered equal if the values on one database are equal in magnitude, but possibly of a different sign\footnote{Note that the difference type as implemented does not fully check whether the eigenvectors represented by the data are truly the same shape with a potential difference of sign since it works on an item-by-item basis and does not check whether all items in the first database are multiplied by the same 1.0 or -1.0 to match the items in the second database. However, the implementation can be improved in the future without breaking any existing scripts or command files.}. Values are considered equal if $|val1| <= floor \&\& |val2| <= floor$; where $floor$ is a user-specified value. Otherwise the difference is computed using one of the above formulas and compared to a tolerance. If the difference is greater than the tolerance, then the databases are different. At the end of execution, a summary of the differences found is output. By default: \begin{itemize} \item All results variables and attributes are compared using a {\em relative difference} of $10^{-6}$ (about 6 significant digits) and a {\em floor} of~0.0. \item Nodal locations are compared using {\em absolute difference} with a tolerance of $10^{-6}$ and a {\em floor} of~0.0. \item Time step values are compared using {\em relative difference} tolerance of $10^{-6}$ and a floor of $10^{-15}$. 
\end{itemize} \section{Invoking \exodiff} \exodiff{} can be invoked using the following command lines: To do normal comparison of two files using default tolerances producing text output summarizing the differences, enter: \begin{syntax} exodiff [{options}] [-f <cmd\_file>] file1.e file2.e \end{syntax} Where \file{cmd\_file} is an optional file containing options and tolerance values; its syntax is described below. If you want \exodiff{} to output an \exo{} file created containing the differences of the two files, the command line is similar, but also contains the name of the file where the differences should be written\footnote{Note that all variables on the third file are the difference of the values on the first two files including the displacement variables. If you visualize the file containing the differences, the visualization program may show a strange deformed shape since the displacement variables are no longer true displacements.}: \begin{syntax} exodiff [{options}] [-f <cmd\_file>] file1.e file2.e diff_file.e \end{syntax} The third invocation option reads a single file and outputs a summary of the variable data contained in the file. The summary data are the minimum and maximum values for each variable and the time step and entity id where the minimums and maximums occurred. This file can be used for preparing a command input file for use in the previous two invocations. The \param{no\_coord\_sep} option if present will inhibit the calculation and output of the minimum distance between any two nodes in the file which can take a long time for large models and is often unneeded data. \begin{syntax} exodiff -summary [no\_coord\_sep] file.e (create variable summary) \end{syntax} The remaining invocation lines will output a short usage summary, a much longer usage summary, and the last just output the version information. \begin{syntax} exodiff [-h] [-help] (short usage summary) exodiff [-H] (longer usage summary) exodiff [-v] [-version] (version info) \end{syntax} The basic behavior can be modified using several optional parameters specified on the command line. These are documented below: \subsection{Optional Parameters} \renewcommand\arraystretch{1.5} \begin{longtable}{lp{4.0in}} -t \tt<real value> & Overrides the default tolerance of $10^{-6}$ for all variables.\\ -F \tt<real value> & Overrides the default floor tolerance of 0.0 for all variables.\\ -absolute & Use absolute differences as default tolerance type \\ -relative & Use relative differences as default tolerance type \\ -combined & Use combined differences as default tolerance type \\ -eigen\_absolute & Use eigen\_absolute differences as default tolerance type (absolute value of values) \\ -eigen\_relative & Use eigen\_relative differences as default tolerance type (absolute value of values) \\ -eigen\_combined & Use eigen\_combined differences as default tolerance type (absolute value of values) \\ -T \tt<offset> & Match timestep 'x+offset' in first file with timestep 'x' in second file. \\ -TA & Automatically determine the timestep offset such that both databases end at the same step. \\ -TM & Automatically determine the timestep offset to find the closest match in file1 to the first step on file2.\\ -q & Quiet. Only errors will be sent to stdout. Comparison mode will echo: {\tt exodiff: Files are the same.} or {\tt exodiff: Files are different.}\\ -show\_all\_diffs & Show all differences for all variables, not just the maximum. 
Default behavior is that there will be a maximum of one difference output per variable per timestep. If this option is specified, then any pair of values that exceed the tolerance will be output. Use of this option Can result in lots of output on large files with lots of differences.\\ -m & Invoke a matching algorithm to create a mapping between the nodes and elements of the two files based on geometric proximity. The topology must still be the same (within tolerance), but can be ordered differently. A match must be found for all nodes and elements or \exodiff{} will output an error message and stop.\\ -p & Invoke a matching algorithm similar to the -m option. However this option ignores unmatched nodes and elements. This allows comparison of files that only partially overlap. \\ -match\_ids & Invoke a matching algorithm which matches nodes and elements using the node and element global id maps in the two files. This is the default mode of operation.\\ -match\_file\_order & Invoke a matching algorithm using the node and element position order in the two files. \\ -show\_unmatched & If the -p option is given, this prints out the elements that did not match. \\ -dumpmap & If the -m or -p switch is given, this prints out the resulting map between the nodes and elements in the two files. \\ -nsmap & Create a map between the nodeset nodes in the two files if they include the same nodes, but are in different order. This is enabled by default.\\ -ssmap & Create a map between the sideset faces in the two files if they include the same sides, but are in different order. This is enabled by default.\\ -no\_nsmap & Compare nodeset nodes based on file order only. \\ -no\_ssmap & Compare sideset faces based on file order only. \\ -s & Short block type compare. Forces element block type strings to be compared only up to the shortest string length. For example, ``HEX'' and ``HEX8'' will be considered the same. This is enabled by default. \\ -no\_short & Do not do the short block type compare. Forces element block type strings to fully match. For example, ``HEX'' and ``HEX8'' will be considered different. \\ -ignore\_case & Ignore case. Variable names are compared case in-sensitive. For example, ``Stress'' and ``STRESS'' will be considered as the same variable. This is enabled by default. \\ -case\_sensitive & Variable names are compared case sensitive. For example, ``Stress'' and ``STRESS'' will be considered as two different variables. \\ -ignore\_maps& Output node and element difference summaries using file local implicit ids instead of global ids. Note that the matching of nodes and elements will use the mapping option specified above; this option only affects the output of the node or element id where the difference occurred. \\ -ignore\_nans& Don't check data for NaNs. By default, \exodiff{} will output a warning message if any variable's value is NaN (Not a number). \\ -ignore\_dups& If two elements/nodes are in the same location in match or partial match case, return first match instead of aborting. This is used in the -m and -p matching options. Normally, \exodiff{} will output an error message if a node in one file can be matched to two or more nodes in the second file.\\ -ignore\_attributes& Don't compare element attribute values. \\ -nosymm & Turn off symmetric variable name checking. 
By default, a warning will be produced if a variable that is not to be excluded\footnote{See the command file description in Section~\ref{ed:command_file} for details on excluding variables} is contained in the second file given on the command line but not the first. This ``symmetric'' check can be turned off with this option and the extra variables in the second file will be ignored.\\ -allow\_name\_mismatch & Allow a variable name that is in the first database to be absent from the second database. The default behavior is to output an error if all variables in the first file cannot be matched to variables in the second file.\\ -x {\tt <list>} & Exclude time steps. Does not calculate any variable differences for the time steps given in the list of integers. The format is comma separated and ranged integers (with no spaces), such as ``1,5-9,28''. The first time step is the number one. \\ -steps {\tt <b:e:i>}& Specify subset of steps to consider. Syntax is begin:end:increment, Enter -1:: for just the last step. If only begin is set, then end=begin and only that step will be considered \\ -norms& Calculate the $L_2$ norm of variable differences and output if the norm is greater than 0.0. The output will also contain the $L_2$ norm of each variable. This can be used to give an idea of the relative magnitudes of the differences compared to the magnitudes of the variables. This is for informational purposes only at this time; it does not affect the determination of whether the databases compare the same or different.\\ -stat & Return an exit status of 2 if the files are different. Normally, the exit status is zero unless an error occurs. \\ -maxnames {\tt <int>}& There is a compiled limit of 1000 exodus variable names. This option allows the maximum number to be set to a larger value if either of the input databases contains more than 1000 variables. \\ -use\_old\_floor& When \exodiff{} was first released, it used an incorrect definition of the floor tolerance in which the differences were ignored if the difference itself was less than the floor value. This was fixed several years ago to the new definition which is to ignore the differences if $|a| < floor \&\& |b| < floor$. This option was provided so that users could use the old definition if desired. It should not be used. \\ -summary & Produce a summary in \exodiff{} input format. This will create an output file with max/min statistics on the data in the format of an \exodiff{} input file. The algorithm to determine the minimum separation between any two nodes can be disabled with the ``no\_coord\_sep'' switch. \\ -copyright & Output the copyright and license information. \\ -f {\tt <cmd file>} & Use the given file to specify the variables to be considered and the tolerances to be used for each variable type or each individual variable. See Section~\ref{ed:command_file} for details of the syntax in this file.\\ -H file & Show the syntax help for the command input file. This is also documented in Section~\ref{ed:command_file}.\\ \end{longtable} \subsection{\exodiff{} Command File Syntax}\label{ed:command_file} If an \exodiff{} invocation uses the \param{-f <cmd\_file>} option, then \exodiff{} will read commands from the specified file in addition to parsing the options given on the command line. The command line will be parsed first and then the commands in the input file. The primary use of the input file is to give more control over the difference types and tolerances to be used for individual variables. 
The basic syntax of the file is:
\begin{itemize}
\item Each command is given on a separate line.
\item Anything following the \param{\#} character on a line will be treated as a comment and ignored.
\item Within a ``variables'' block, lines must be indented and must begin with a ``tab'' character.
\end{itemize}

The valid command lines are shown in all uppercase in the following list. The list also describes the behavior that the command line will specify.
\begin{itemize}
\item The variable names are case insensitive (unless the \param{-case\_sensitive} option is specified or there is a \param{CASE SENSITIVE} line in the command file).
\item All keyword comparisons are case insensitive. Abbreviations can be used.
\item All variable comparisons use the default of relative $10^{-6}$ for variables and absolute $10^{-6}$ for coordinates. This is overridden with the \param{DEFAULT TOLERANCE} line. The \param{DEFAULT TOLERANCE} values are overridden by the values given on the \param{VARIABLES} line and apply only to those variables. Each variable can override all values by following its name with a value.
\item A variable name must start with a tab character. If at least one variable name of a specified type (element, nodal, global, ...) is listed, then only the listed variable(s) of that type will be differenced. The variable name can be followed by an optional difference type and tolerance, and an optional \param{floor} and floor tolerance. The NOT symbol \param{!} means do not include this variable. Mixing non-! and ! is not allowed without the \param{(all)} specifier. For example:
\begin{verbatim}
NODAL VARIABLES (all) absolute 1.E-8
<tab> DISPLX
<tab> !VELX
<tab> VELY relative 1.E-6 floor 1.e-10
\end{verbatim}
In this case, all variables that are not prepended with a ``!'' symbol are considered.
\item If a variable type (e.g. \param{NODAL VARIABLES}) is not specified, no variables of that type will be considered. Allowed variable types are: \param{GLOBAL VARIABLES}, \param{NODAL VARIABLES}, \param{ELEMENT VARIABLES}, \param{NODESET VARIABLES}, and \param{SIDESET VARIABLES}.
\item The maximum number of \exo{} names can be set with \param{MAX NAMES <int>}. Note: this option must appear before the variable blocks are read!
\item The time step exclusion option can be used in the input file with the syntax \param{EXCLUDE TIMES <list>}, where \param{<list>} has the same format as in the command line options.
\item The matching algorithm, \param{-m}, can be turned on from the input file with the \param{APPLY MATCHING} keyword on a separate line.
\item The nodeset matching algorithm, \param{-nsmap}, can be turned on from the input file with the \param{NODESET MATCH} keyword on a separate line.
\item The sideset matching algorithm, \param{-ssmap}, can be turned on from the input file with the \param{SIDESET MATCH} keyword on a separate line.
\item The short block type compare option, \param{-s}, can be turned on with the \param{SHORT BLOCKS} keyword.
\item The no short compare option, \param{-no\_short}, can be turned on with the \param{NO SHORT BLOCKS} keyword.
\item The case\_sensitive option, \param{-case\_sensitive}, can be turned on with the \param{CASE SENSITIVE} keyword.
\item The ignore case option, \param{-i}, can be turned on with the \param{IGNORE CASE} keyword. (default behavior)
\item The ignore maps option, \param{-ignore\_maps}, can be turned on with the \param{IGNORE MAPS} keyword.
\item The ignore nans option, \param{-ignore\_nans}, can be turned on with the \param{IGNORE NANS} keyword. \item The ignore dups option, \param{-ignore\_dups}, can be turned on with the \param{IGNORE DUPLICATES} keyword. \item The time step offset option, \param{-T}, can be turned on with the \param{STEP OFFSET} keyword. \item The automatic time step offset option, \param{-TA}, can be turned on with the \param{STEP OFFSET AUTOMATIC} keyword. \item The automatic time step offset option, \param{-TM}, can be turned on with the \param{STEP OFFSET MATCH} keyword. \item The calculation of the L2 norm of differences \param{-norms}, can be turned on with the \param{CALCULATE NORMS} keyword. \item The exit status return option, \param{-stat}, can be turned on with the \param{RETURN STATUS} keyword. \end{itemize} \section{Examples} The output below shows an example run of \exodiff{}. The command invocation used was: \begin{syntax} exodiff -f P_exodiff.cmd P_gold_results.e bar-P.e \end{syntax} The \file{P\_exodiff.cmd} command file contains the following: \begin{verbatim} COORDINATES absolute 1.e-6 TIME STEPS relative 1.e-6 floor 0.0 GLOBAL VARIABLES relative 1.e-6 floor 1.e-16 internal_energy kinetic_energy momentum_x NODAL VARIABLES relative 1.e-4 floor 1.e-16 displacement_x acceleration_x force_internal_x mass velocity_x ELEMENT VARIABLES relative 1.e-6 floor 1.e-16 eqps stress_xx absolute 1000 stress_yy absolute 1000 stress_zz absolute 1000 temperature absolute 1 yield_stress absolute 1000 \end{verbatim} The first section of the output shows the code version and contact information and when the output was generated; followed by some summary statistics of the two files including the file paths and the counts of nodes, elements, etc. If options are read from a command file, the path to that file is listed. \begin{verbatim} ***************************************************************** EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF Version 2.43 (2011-04-07) Authors: Richard Drake, [email protected] Greg Sjaardema, [email protected] 2011/04/28 21:11:00 MDT EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF ***************************************************************** Reading first file ... Reading second file ... FILE 1: /home/exodiff/axial_pulse_par_ns/P_gold_results.e Title: Default Database Title Dim = 3, Blocks = 1, Nodes = 816, Elements = 450, Nodesets = 6, Sidesets = 0 Vars: Global = 7, Nodal = 13, Element = 16, Nodeset = 0, Sideset = 0, Times = 23 FILE 2: /home/exodiff/axial_pulse_par_ns/bar-P.e Title: Default Database Title Dim = 3, Blocks = 1, Nodes = 816, Elements = 450, Nodesets = 6, Sidesets = 0 Vars: Global = 7, Nodal = 13, Element = 16, Nodeset = 0, Sideset = 0, Times = 23 COMMAND FILE: /home/exodiff/axial_pulse_par_ns/P_exodiff.cmd \end{verbatim} \sectionline The next output section summarizes what variables will be compared and the difference types, tolerances, and floor values that will be used. Note that the command file is specifying that only a subset of the variables on the files will be differenced since the output above shows 7 global variables, 13 nodal variables, and 16 element variables, but the list below only shows 3 global, 5 nodal, and 6 element variables. \begin{verbatim} Coordinates will be compared .. tol: 1e-06 (absolute), floor: 0 Time step values will be compared .. 
tol: 1e-06 (relative), floor: 0 Global variables to be compared: internal_energy tol: 1e-06 (relative), floor: 1e-16 kinetic_energy 1e-06 (relative), 1e-16 momentum_x 1e-06 (relative), 1e-16 Nodal variables to be compared: displacement_x tol: 1e-06 (relative), floor: 1e-16 acceleration_x 1e-06 (relative), 1e-16 force_internal_x 1e-06 (relative), 1e-16 mass 1e-06 (relative), 1e-16 velocity_x 1e-06 (relative), 1e-16 Element variables to be compared: eqps tol: 1e-06 (relative), floor: 1e-16 stress_xx 1e-06 (relative), 1e-16 stress_yy 1e-06 (relative), 1e-16 stress_zz 1e-06 (relative), 1e-16 temperature 1e-06 (relative), 1e-16 yield_stress 1e-06 (relative), 1e-16 No Element Attribute variables on either file. No Nodeset variables on either file. No Sideset variables on either file. ============================================================== NOTE: All node and element ids are reported as global ids. \end{verbatim} \sectionline The next output section shows the results of the differencing. For the first several timesteps, no differences were found. \begin{verbatim} --------- Time step 1, 0.0000000e+00 ~ 0.0000000e+00, rel diff: 0.00000e+00 --------- Global variables: Nodal variables: Element variables: --------- Time step 2, 2.2708229e-08 ~ 2.2708229e-08, rel diff: 0.00000e+00 --------- Global variables: Nodal variables: Element variables: --------- Time step 3, 8.1607527e-08 ~ 8.1607527e-08, rel diff: 0.00000e+00 --------- Global variables: Nodal variables: Element variables: --------- Time step 4, 2.3437714e-07 ~ 2.3437714e-07, rel diff: 0.00000e+00 --------- Global variables: Nodal variables: Element variables: ... deleted some output ... --------- Time step 11, 7.7253933e-06 ~ 7.7253933e-06, rel diff: 0.00000e+00 --------- Global variables: Nodal variables: Element variables: --------- Time step 12, 8.9485520e-06 ~ 8.9485520e-06, rel diff: 0.00000e+00 --------- Global variables: Nodal variables: Element variables: \end{verbatim} \sectionline At this time step, differences are detected and output. The output format is: \begin{verbatim} variable name diff type val file 1 val file 2 difference (which entity) stress_xx rel diff: -1.1444528e+04 ~ -1.1444553e+04 = 2.15241e-06 (block 1, elmt 66) \end{verbatim} Note that only the maximum difference found for each variable at each time step is output. There may be many more differences detected. 
\begin{verbatim} --------- Time step 13, 1.0171704e-05 ~ 1.0171704e-05, rel diff: 1.66547e-16 --------- Global variables: Nodal variables: acceleration_x rel diff: 1.1719403e+04 ~ 1.1719426e+04 = 1.98010e-06 (node 68) force_internal_x rel diff: -5.8141261e+02 ~ -5.8141376e+02 = 1.98010e-06 (node 68) Element variables: stress_xx rel diff: -1.1444528e+04 ~ -1.1444553e+04 = 2.15241e-06 (block 1, elmt 66) stress_yy rel diff: -4.9048081e+03 ~ -4.9048309e+03 = 4.63816e-06 (block 1, elmt 266) stress_zz rel diff: -4.9048129e+03 ~ -4.9048357e+03 = 4.64075e-06 (block 1, elmt 266) --------- Time step 14, 1.1394849e-05 ~ 1.1394849e-05, rel diff: 1.48669e-16 --------- Global variables: Nodal variables: displacement_x rel diff: 1.0981488e-11 ~ 1.0980936e-11 = 5.02741e-05 (node 740) acceleration_x rel diff: 2.0947516e+02 ~ 2.0950905e+02 = 1.61776e-04 (node 639) force_internal_x rel diff: -5.1961477e+00 ~ -5.1969885e+00 = 1.61776e-04 (node 639) velocity_x rel diff: 5.9451636e-05 ~ 5.9447122e-05 = 7.59215e-05 (node 740) Element variables: stress_xx rel diff: -1.9233572e+02 ~ -1.9236707e+02 = 1.62922e-04 (block 1, elmt 326) stress_yy rel diff: -8.2409892e+01 ~ -8.2442564e+01 = 3.96299e-04 (block 1, elmt 326) stress_zz rel diff: -8.2408238e+01 ~ -8.2440733e+01 = 3.94165e-04 (block 1, elmt 326) --------- Time step 15, 1.2617989e-05 ~ 1.2617989e-05, rel diff: 1.34258e-16 --------- Global variables: Nodal variables: displacement_x rel diff: 1.4026865e-13 ~ 1.4075811e-13 = 3.47725e-03 (node 648) acceleration_x rel diff: 2.7998765e+00 ~ 2.8278524e+00 = 9.89300e-03 (node 700) force_internal_x rel diff: -1.3890498e-01 ~ -1.4029290e-01 = 9.89300e-03 (node 700) velocity_x rel diff: 7.8395852e-07 ~ 7.8796013e-07 = 5.07844e-03 (node 648) Element variables: stress_xx rel diff: -2.4472564e+00 ~ -2.4778807e+00 = 1.23591e-02 (block 1, elmt 386) stress_yy rel diff: -1.0366468e+00 ~ -1.0631157e+00 = 2.48974e-02 (block 1, elmt 386) stress_zz rel diff: -1.0455424e+00 ~ -1.0720140e+00 = 2.46934e-02 (block 1, elmt 386) ... deleted some output ... --------- Time step 22, 2.1179859e-05 ~ 2.1179859e-05, rel diff: 7.99848e-16 --------- Global variables: Nodal variables: Element variables: --------- Time step 23, 2.2036041e-05 ~ 2.2036041e-05, rel diff: 7.68771e-16 --------- Global variables: Nodal variables: Element variables: \end{verbatim} \sectionline The final section is the status output indicating that differences were detected. This string will not change in future versions and can be searched for to determine whether the files are the same or different. The \exodiff{} exit status can also be used for this if the \param{-status} option is set. 
\begin{verbatim} exodiff: Files are different \end{verbatim} \sectionline The next example shows the summary output produced by the command line: \begin{syntax} exodiff -summary bar-P.e \end{syntax} \begin{verbatim} # ***************************************************************** # EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF # # Version 2.43 (2011-04-07) # Authors: Richard Drake, [email protected] # Greg Sjaardema, [email protected] # 2011/06/03 11:23:07 MDT # # EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF EXODIFF # ***************************************************************** # FILE 1: /scratch/user/bar-P.e # Title: An Exodiff Summary Example # Dim = 3, Blocks = 1, Nodes = 204, Elements = 50, Nodesets = 5, Sidesets = 0 # Vars: Global = 7, Nodal = 10, Element = 16, Nodeset = 0, Sideset = 0, Times = 206 # ============================================================== # NOTE: All node and element ids are reported as global ids. # NOTES: - The min/max values are reporting the min/max in absolute value. # - Time values (t) are 1-offset time step numbers. # - Element block numbers are the block ids. # - Node(n) and element(e) numbers are 1-offset. COORDINATES absolute 1.e-6 # min separation = 0.1 TIME STEPS relative 1.e-6 floor 0.0 # min: 0 @ t1 max: 2.2088109e-05 @ t206 GLOBAL VARIABLES relative 1.e-6 floor 0.0 external_energy # min: 0 @ t1 max: 0 @ t1 internal_energy # min: 0 @ t1 max: 22205882 @ t206 kinetic_energy # min: 0 @ t1 max: 20210551 @ t206 momentum_x # min: 0 @ t1 max: 42651.567 @ t206 momentum_y # min: 0 @ t1 max: 0 @ t1 momentum_z # min: 0 @ t1 max: 0 @ t1 timestep # min: 0 @ t1 max: 1.3153439e-07 @ t51 NODAL VARIABLES relative 1.e-6 floor 0.0 acceleration_x # min: 0 @ t1,n1 max: 3.7989521e+08 @ t190,n1 acceleration_y # min: 0 @ t1,n1 max: 0 @ t1,n1 acceleration_z # min: 0 @ t1,n1 max: 0 @ t1,n1 force_internal_x # min: 0 @ t1,n1 max: 82739542 @ t190,n5 force_internal_y # min: 0 @ t1,n1 max: 1.3526743e+08 @ t206,n201 force_internal_z # min: 0 @ t1,n1 max: 1.3526743e+08 @ t206,n201 mass # min: 0.111625 @ t1,n1 max: 0.22325 @ t1,n21 velocity_x # min: 0 @ t1,n1 max: 994.29394 @ t206,n201 velocity_y # min: 0 @ t1,n1 max: 0 @ t1,n1 velocity_z # min: 0 @ t1,n1 max: 0 @ t1,n1 ELEMENT VARIABLES relative 1.e-6 floor 0.0 eqps # min: 0 @ t1,b2,e1 max: 0.00083689614 @ t206,b2,e50 rate_of_deformation_xx # min: 0 @ t1,b2,e1 max: 539.94599 @ t116,b2,e50 rate_of_deformation_yy # min: 0 @ t1,b2,e1 max: 4.0126165e-32 @ t200,b2,e46 rate_of_deformation_zz # min: 0 @ t1,b2,e1 max: 1.1275579e-32 @ t185,b2,e49 rate_of_deformation_xy # min: 0 @ t1,b2,e1 max: 1.1868359e-13 @ t186,b2,e42 rate_of_deformation_yz # min: 0 @ t1,b2,e1 max: 1.1093839e-32 @ t185,b2,e49 rate_of_deformation_zx # min: 0 @ t1,b2,e1 max: 4.7474497e-14 @ t203,b2,e39 sound_speed # min: 394000 @ t1,b2,e1 max: 395751.09 @ t206,b2,e50 stress_xx # min: 0 @ t1,b2,e1 max: 3.980196e+09 @ t206,b2,e50 stress_yy # min: 0 @ t1,b2,e1 max: 2.7099141e+09 @ t206,b2,e50 stress_zz # min: 0 @ t1,b2,e1 max: 2.7099141e+09 @ t206,b2,e50 stress_xy # min: 0 @ t1,b2,e1 max: 2.7908475e-07 @ t205,b2,e46 stress_yz # min: 0 @ t1,b2,e1 max: 3.7940314e-26 @ t190,b2,e49 stress_zx # min: 0 @ t1,b2,e1 max: 1.5648498e-07 @ t188,b2,e49 temperature # min: 298 @ t78,b2,e8 max: 299.37791 @ t206,b2,e50 yield_stress # min: 7.5751705e+08 @ t142,b2,e34 max: 1.3389387e+09 @ t148,b2,e50 # No NODESET VARIABLES # No SIDESET VARIABLES \end{verbatim} \sectionline The output starts with a database summary similar to the previous example. 
It then gives a summary of the minimum and maximum values of each variable and the timestep and node or element where that minimum or maximum occurs. The format of the summary is such that it can be used as a basis for creating an \exodiff{} command input file.
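For example, one possible workflow along these lines (an illustrative sketch only; the file names are placeholders, and it is assumed here that the summary text is captured into a file, e.g.\ by redirecting standard output):
\begin{syntax}
exodiff -summary gold_results.e > my_tols.cmd
exodiff -stat -f my_tols.cmd gold_results.e new_results.e
\end{syntax}
After the first command, the generated \file{my\_tols.cmd} can be edited to loosen or tighten the tolerances of interest. Because \param{-stat} is given, the second command then returns an exit status of 2 if the two databases differ by more than those tolerances, which is convenient in regression-test scripts.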
{ "alphanum_fraction": 0.6814269677, "avg_line_length": 48.0779610195, "ext": "tex", "hexsha": "0542567c24655d19c7a6f7957a5197942ac4b78f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "54d9c3b68508ca96e3db1fd00c5d84a810fb330b", "max_forks_repo_licenses": [ "Zlib", "NetCDF", "MIT", "BSL-1.0", "X11", "BSD-3-Clause" ], "max_forks_repo_name": "tokusanya/seacas", "max_forks_repo_path": "packages/seacas/doc-source/exo_utils/exodiff.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "54d9c3b68508ca96e3db1fd00c5d84a810fb330b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Zlib", "NetCDF", "MIT", "BSL-1.0", "X11", "BSD-3-Clause" ], "max_issues_repo_name": "tokusanya/seacas", "max_issues_repo_path": "packages/seacas/doc-source/exo_utils/exodiff.tex", "max_line_length": 117, "max_stars_count": null, "max_stars_repo_head_hexsha": "54d9c3b68508ca96e3db1fd00c5d84a810fb330b", "max_stars_repo_licenses": [ "Zlib", "NetCDF", "MIT", "BSL-1.0", "X11", "BSD-3-Clause" ], "max_stars_repo_name": "tokusanya/seacas", "max_stars_repo_path": "packages/seacas/doc-source/exo_utils/exodiff.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9018, "size": 32068 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Deedy - One Page Two Column Resume % LaTeX Template % Version 1.2 (16/9/2014) % % Original author: % Debarghya Das (http://debarghyadas.com) % % Original repository: % https://github.com/deedydas/Deedy-Resume % % IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX % % This template uses several fonts not included with Windows/Linux by % default. If you get compilation errors saying a font is missing, find the line % on which the font is used and either change it to a font included with your % operating system or comment the line out to use the default font. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TODO: % 1. Integrate biber/bibtex for article citation under publications. % 2. Figure out a smoother way for the document to flow onto the next page. % 3. Add styling information for a "Projects/Hacks" section. % 4. Add location/address information % 5. Merge OpenFont and MacFonts as a single sty with options. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % CHANGELOG: % v1.1: % 1. Fixed several compilation bugs with \renewcommand % 2. Got Open-source fonts (Windows/Linux support) % 3. Added Last Updated % 4. Move Title styling into .sty % 5. Commented .sty file. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Known Issues: % 1. Overflows onto second page if any column's contents are more than the % vertical limit % 2. Hacky space on the first bullet point on the second column. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[]{deedy-resume-openfont} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % LAST UPDATED DATE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \lastupdated %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TITLE NAME % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \namesection{Tony}{Cook}{ \urlstyle{same}\href{https://github.com/tony-cook}{github.com/tony-cook }| \href{https://www.linkedin.com/in/tony-software-development/}{linkedin.com/in/tony-software-development}\\ \href{mailto:[email protected]}{[email protected]} | 022.3183073 | \href{mailto:[email protected]}{[email protected]} } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN ONE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{minipage}[t]{3.2in} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EDUCATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Education} \subsection{Mission Ready HQ} \descript{Software Development} \location{Graduated: April 2022} Curriculum: Full Stack JavaScript, Cloud services, DevOps as well as agile mindsets and practices \\ \sectionsep \subsection{AUT University} \descript{Bachelor of Business} \location{Graduated: December 2013} Major: Finance \& Economics \textbullet{}Minor: International Business \\ Elective Coursework:\\ \vspace{\topsep} % Hacky fix for awkward extra vertical space \begin{tightemize} \item Entrepreneurship \\ \item International Transport and Logistics \\ \end{tightemize} \sectionsep \subsection{University of North Florida} \descript{International Student Exchange} \location{Graduated: December 2011} \sectionsep %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SKILLS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Skills} \subsection{Technical Skills} Proficient with: \\ JavaScript (ES6) \textbullet{} React/Redux \textbullet{} Material UI \\ Node.js \textbullet{} MySQL \textbullet{} MongoDB \textbullet{} HTML5 \textbullet{} CSS3 \\ Shell \textbullet{} Git \textbullet{} Linux \textbullet{} Cloud services \textbullet JSON \\ DevOps \textbullet Agile (Scrum) \sectionsep \subsection{Soft 
Skills}
Outstanding: \\
Critical Thinking \textbullet{} Presentation skills \\
Collaboration \textbullet{} Problem Solving \\
Research \textbullet{} Business Knowledge \\
\sectionsep

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% AWARDS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Awards}
\begin{tabular}{rll}
2009 & AUT Maori Business Scholarship \\
2021 & MPTT Scholarship \\
\end{tabular}
\sectionsep

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% COLUMN TWO
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{minipage}
\hfill
\begin{minipage}[t]{4in}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% TECHNICAL PROJECTS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Technical Projects}

\runsubsection{Property Management Website}
\descript{| \href{https://github.com/tony-cook/property-management-full-stack-app} {Github}}
\location{March 2022}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Built and deployed a full stack multi-layer container with React/Redux and Node.js
\item Worked alongside UX/UI designers during the design process while also implementing agile practices
\item Implemented CI/CD pipeline with AWS cloud
\item Conceptualized and implemented a MongoDB cloud database connecting to the application
\end{tightemize}
\sectionsep

\runsubsection{Early Learning Application} \\
\descript{| \href{https://github.com/tony-cook/early-learning-application-react} {Github (frontend)}, \href{https://github.com/tony-cook/early-learning-application-nodejs} {Github (backend)}}
\location{December 2021}
\vspace{\topsep}
\begin{tightemize}
\item Built a front end single-page application with React/Redux and back end with Node.js for early learning \\
\item Extensive user interface (UI) development with Material UI and CSS3
\item Collaborated using Git with a team of full stack developers
\end{tightemize}
\sectionsep

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EXPERIENCE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experience}

\runsubsection{Dangerous Goods Management}
\descript{| Operations Coordinator}
\location{June 2017 – February 2019}
\begin{tightemize}
\item Built and maintained effective relationships with customers, suppliers and freight forwarders
\item Maintained stock management system for accuracy and future planning of storage capacity
\item Attended training courses related to dangerous goods training and certification
\end{tightemize}
\sectionsep

\runsubsection{Waipareira Trust}
\descript{| Data Assistant }
\location{June 2013 - November 2013}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Assisted with collecting, interpreting, and compiling data
\item Provided software training including one-to-one and assisted with larger group training
\item Assisted other business units with interpreting and compiling data
\end{tightemize}
\sectionsep

\runsubsection{2Degrees Mobile }
\descript{| Finance Assistant }
\location{November 2012 – February 2013}
\begin{tightemize}
\item Prepared reports for inclusion in monthly, quarterly and annual internal audit reports
\item Processed monthly bills and sent to customers for payment
\end{tightemize}
\sectionsep

\end{minipage}
\end{document}
{ "alphanum_fraction": 0.6671988389, "avg_line_length": 31.4611872146, "ext": "tex", "hexsha": "957d81eb6b5c7c2e89399e18eb9aea335e2829a8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cdadbfecc942bee49b3b186dc40276d7db160786", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "tony-cook/Deedy-Resume", "max_forks_repo_path": "OpenFonts/deedy_resume-openfont.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cdadbfecc942bee49b3b186dc40276d7db160786", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "tony-cook/Deedy-Resume", "max_issues_repo_path": "OpenFonts/deedy_resume-openfont.tex", "max_line_length": 207, "max_stars_count": null, "max_stars_repo_head_hexsha": "cdadbfecc942bee49b3b186dc40276d7db160786", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "tony-cook/Deedy-Resume", "max_stars_repo_path": "OpenFonts/deedy_resume-openfont.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1646, "size": 6890 }
% make sure you have the VPN on, so that latex can load packages on the fly
\documentclass{article}

% graphics package
\usepackage{graphicx}

% enhanced citation package
\usepackage{natbib}
\bibpunct{(}{)}{;}{a}{}{,} % to adjust punctuation in references

% adjust caption properties
\usepackage[margin=10pt, font=small, labelfont=bf]{caption}

% hyperrefs on, with nicer colors
\usepackage{color}
\usepackage{xcolor}
\usepackage[]{hyperref}
\definecolor{darkblue}{rgb}{0,0,.5}
\hypersetup{colorlinks=true, breaklinks=true, linkcolor=darkblue, menucolor=darkblue, urlcolor=darkblue, citecolor=darkblue}

% enhanced tables
\usepackage{multicol}
\usepackage{multirow}
\usepackage{booktabs}

\author{Your name(s)}
\title{Title of your paper}

\begin{document}

\maketitle

\begin{abstract}
Follow the instructions in the lecture notes concerning scientific writing.
\end{abstract}

\section{Introduction}

Use references from your BibTeX file, e.g. \citep{Farrell-Cheaptalk-1996}.

\section{Methods}

...

\section{Results}

Present your results here, using the figures you produced. Refer to your figures and explain what we see in Fig.~\ref{myplot}.

\begin{figure}
\centering
\includegraphics[width = 0.6\textwidth]{../results/myplot.pdf}
\caption{Explain what to see. \label{myplot}}
\end{figure}

\section{Discussion}

...

% this is the style file. If you need to change something, google if the file you need is already there. If not (very uncommon) google makebst.
\bibliographystyle{chicago}

% this is the bibtex library file.
\bibliography{../literature/yourbibtexfile}
% Note: all files can be anywhere, just give the full path.

\end{document}
{ "alphanum_fraction": 0.750448833, "avg_line_length": 24.2173913043, "ext": "tex", "hexsha": "fbea17e893e5a184ca7ea193d288e27e8ea9f7ca", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-04-13T09:12:35.000Z", "max_forks_repo_forks_event_min_datetime": "2015-07-02T14:23:29.000Z", "max_forks_repo_head_hexsha": "2fb606d3068bced37c8da6fb8fe2525be96a58a0", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "zoeschindler/ResearchSkills", "max_forks_repo_path": "Labs/ProjectOrganization/ExampleProject/thesis/mydocument.tex", "max_issues_count": 34, "max_issues_repo_head_hexsha": "2fb606d3068bced37c8da6fb8fe2525be96a58a0", "max_issues_repo_issues_event_max_datetime": "2020-01-24T08:34:26.000Z", "max_issues_repo_issues_event_min_datetime": "2015-06-24T09:32:33.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "zoeschindler/ResearchSkills", "max_issues_repo_path": "Labs/ProjectOrganization/ExampleProject/thesis/mydocument.tex", "max_line_length": 143, "max_stars_count": 9, "max_stars_repo_head_hexsha": "2fb606d3068bced37c8da6fb8fe2525be96a58a0", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "florianhartig/ResearchSkills", "max_stars_repo_path": "Labs/ProjectOrganization/ExampleProject/thesis/mydocument.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-08T05:59:12.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-20T22:16:38.000Z", "num_tokens": 471, "size": 1671 }
\documentclass[11pt]{article} \usepackage[utf8]{inputenc} %common math and LaTeX packages \usepackage{amsmath,amsthm,amsfonts,amssymb,amscd} \usepackage{multirow,booktabs} \usepackage[table]{xcolor} \usepackage{multirow} \usepackage{fullpage} \usepackage{lastpage} \usepackage{enumitem} \usepackage{fancyhdr} \usepackage{mathrsfs} \usepackage{wrapfig} \usepackage{setspace} \usepackage{calc} \usepackage{multicol} \usepackage{cancel} \usepackage[retainorgcmds]{IEEEtrantools} \usepackage[margin=1in]{geometry} \usepackage{amsmath} \newlength{\tabcont} \setlength{\parindent}{0.0in} \setlength{\parskip}{0.0in} \usepackage{empheq} %shaded environment for important equations/notes \usepackage{mdframed} \colorlet{shaded}{blue!15} \colorlet{shadedtext}{black} \newenvironment{shaded} { \raggedright \color{shadedtext}% }{} \surroundwithmdframed[ hidealllines=true, backgroundcolor=shaded ]{shaded} %page geometry definitions \usepackage[most]{tcolorbox} \usepackage{xcolor} \parindent 0in \parskip 6pt \geometry{margin=1in, headsep=0.25in} %custom theorem definitions \theoremstyle{definition} \newtheorem{innercustomgeneric}{\customgenericname} \providecommand{\customgenericname}{} \newcommand{\newcustomtheorem}[2]{% \newenvironment{#1}[1] {% \renewcommand\customgenericname{#2}% \renewcommand\theinnercustomgeneric{##1}% \innercustomgeneric } {\endinnercustomgeneric} } \newcustomtheorem{thm}{Theorem} \newcustomtheorem{lem}{Lemma} \newcustomtheorem{defn}{Definition} \newcustomtheorem{prop}{Proposition} \newcustomtheorem{exer}{Exercise} \newcustomtheorem{note}{Note} \renewcommand{\qedsymbol}{$\blacksquare$} \let\a\alpha \let\b\beta \let\g\gamma \let\e\varepsilon \let\t\theta \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} \newcommand{\PP}{\mathcal{P}} \newcommand{\C}{\mathcal{C}} \newcommand{\Lagr}{\mathcal{L}} \begin{document} %document header \begin{center} {\LARGE \bf ME 226 - Mechanical Measurements}\\ {Instructor: \textit{Prof. Dipanshu Bansal}}\\ Last updated \today \\~\\ {\large \bf Om Prabhu}\\ Undergraduate, Department of Mechanical Engineering\\ Indian Institute of Technology Bombay\\~\\ \textsc{Note To Reader} \end{center} \vspace{-6pt} This document is a compilation of the notes I made while taking the course ME 226 (Mechanical Measurements) in my 4$^{\text{th}}$ semester at IIT Bombay. It is not a substitute for any formal lecture or textbook on the subject, since I pretty much overlook all the theory parts. If you have any suggestions and/or spot any errors, you know where to contact me. \vspace{-3mm} \hrulefill \section{Introduction \& Basic Concepts} \textbf{\large Some definitions:} \vspace{-3.5mm} \begin{itemize} \itemsep-0.2em \item[$-$] sensitivity: slope of the output vs input curve for an instrument \item[$-$] span: difference between maximum and minimum possible measurements for an instrument \item[$-$] range: difference between maximum and minimum deflection for an instrument \item[$-$] resolution: smallest measurable change in input \item[$-$] threshold: smallest measurable input \item[$-$] hysteresis: inability of instrument to give repeatable results during loading and unloading (hysteresis loss = area under the input-output curve) \end{itemize} \vspace{-3.5mm} Error in an instrument is a combination of 2 factors - bias (correctable by calibration) and imprecision (permanent component caused due to human error). 
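For example (illustrative numbers only, not taken from any particular instrument): if the true value of a quantity is $100.0$ and five repeated readings are $100.4$, $100.6$, $100.5$, $100.5$ and $100.5$, the mean reading
$$\frac{100.4+100.6+100.5+100.5+100.5}{5}=100.5$$
indicates a bias of $+0.5$, while the small scatter of the readings about the mean (at most $\pm 0.1$) reflects the imprecision.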
\vspace{-3.5mm}
\begin{wrapfigure}[7]{L}{0.4\textwidth}
\includegraphics[width=0.35\textwidth]{error.jpeg}
\end{wrapfigure}

\hspace{0.37\textwidth} I $-$ no bias, no imprecision

II $-$ bias, no imprecision

III $-$ no bias, imprecision

IV $-$ bias, imprecision

Additionally, results should be fairly repeatable (i.e. repeating the measurements should yield similar values).

\textbf{\large Basic Statistics:}

$$\text{probability density function}=\frac{\text{(number of readings in an interval)}}{\text{(total number of readings)}\times\text{(width of interval)}}$$

Plot the pdf as a function of interval length $-$ the area under the curve is 1. On dividing the data into very small intervals, the pdf is a continuous function $f(x)$ such that $\displaystyle P(a<x<b)=\int_a^b f(x)\text{d}x$. In practice, many measurement sets are very close to the Gaussian distribution $\displaystyle f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$. For an ideal condition, $-\infty<x<\infty$, but instruments cannot have infinite range.

\begin{center}
\begin{tabular}{l|cc}
 & for a population & for a sample\\
\hline
mean value & $\displaystyle \mu=\frac{\displaystyle\sum_{i=1}^Nx_i}{N}$ & $\displaystyle \bar{X}=\frac{\displaystyle\sum_{i=1}^nx_i}{n}$\\
variance & $\displaystyle\sigma^2=\frac{\displaystyle\sum_{i=1}^N(x_i-\mu)^2}{N}$ & $\displaystyle s^2=\frac{\displaystyle\sum_{i=1}^n(x_i-\bar{X})^2}{n-1}$\\
\end{tabular}
\end{center}

A population refers to a continuous data distribution whereas a sample refers to a fixed number of discrete data points. 68\%, 95\% and 99.7\% of the readings lie in the $\pm\sigma,\pm 2\sigma,\pm 3\sigma$ range respectively.

\vspace{2mm}
\textbf{\large Method of Least Squares:}

Assume a linear fit $y=mx+c$. We define the error $\displaystyle E=\sum_{k=1}^N\left((mx_k+c)-y_k\right)^2$. In order to minimize the error,
$$\frac{\partial E}{\partial m}=0\implies \sum 2(mx_k+c-y_k)x_k=0$$
$$\frac{\partial E}{\partial c}=0\implies \sum 2(mx_k+c-y_k)=0$$
Solving this as a system of linear equations, we get
\begin{align*}
m&=\frac{1}{D}\left(N\sum x_ky_k-\sum x_k\sum y_k\right)\\
c&=\frac{1}{D}\left(\sum x_k^2\sum y_k-\sum x_k\sum x_ky_k\right)\\
D&=N\sum x_k^2-\left(\sum x_k\right)^2
\end{align*}
The variances in $y,x,m,c$ are calculated using the following formulae:
\vspace{-3.5mm}
\begin{center}
\begin{tabular}{cc}
$\displaystyle s_y^2=\frac{1}{N-2}\sum \left(mx_k+c-y_k\right)^2$ & $\displaystyle s_x^2=\frac{s_y^2}{m^2}$\\~\\
\vspace{-3.5mm}
$\displaystyle s_m^2=\frac{Ns_y^2}{N\sum x_k^2-\left(\sum x_k\right)^2}$ & $\displaystyle s_c^2=\frac{s_y^2\sum x_k^2}{N\sum x_k^2-\left(\sum x_k\right)^2}$\\
\end{tabular}
\end{center}

\textbf{\large The Error Function:}

We have the Gaussian distribution given by $\displaystyle f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$. Define $\eta=\dfrac{x-\mu}{\sigma\sqrt{2}}$. The error function is defined as $\displaystyle\text{erf}(\eta)=\frac{2}{\sqrt{\pi}}\int_0^{\eta}e^{-t^2}\text{d}t$. It follows that $\text{erf}(-\eta)=-\text{erf}(\eta)$.
$$P(X<x)=F(x)=\frac{1}{2}(1+\text{erf}(\eta))$$
$$P(x_1<X<x_2)=F(x_2)-F(x_1)=\frac{1}{2}(\text{erf}(\eta_2)-\text{erf}(\eta_1))$$

\newpage
A table for error function values is as follows:
\vspace{-8mm}
\begin{center}
\includegraphics{erf.jpg}
\end{center}
\vspace{-3.5mm}

\textbf{\large Combination of Component Errors:}

Measured quantities are often influenced by a combination of other measured quantities (for example, stored potential energy = $\rho gh$).
Let quantity $P=f(u_1,u_2,\dots,u_n)$ with individual errors $\Delta u_1,\Delta u_2,\dots,\Delta u_n$.
$$\text{absolute error}=\Delta P=\left|\frac{\partial f}{\partial u_1}\Delta u_1\right|+\left|\frac{\partial f}{\partial u_2}\Delta u_2\right|+\dots+\left|\frac{\partial f}{\partial u_n}\Delta u_n\right|$$
$$\text{root sum square error}=E_{RSS}=\sqrt{\left(\frac{\partial f}{\partial u_1}\Delta u_1\right)^2+\left(\frac{\partial f}{\partial u_2}\Delta u_2\right)^2+\dots+\left(\frac{\partial f}{\partial u_n}\Delta u_n\right)^2}$$
For $N$ measurements of each of the quantities,
$$\sigma_P^2=\left(\frac{\partial f}{\partial u_1}\right)^2\sigma_1^2+\left(\frac{\partial f}{\partial u_2}\right)^2\sigma_2^2+\dots+\left(\frac{\partial f}{\partial u_n}\right)^2\sigma_n^2$$

\textbf{\large Error Analysis of Voltmeters and Ammeters:}

For a voltmeter, we first calculate the equivalent resistance $R_{eq}$ across the points where the voltmeter is to be connected. Then,
$$\text{measured voltage }E_m=\frac{R_{m}}{R_m+R_{eq}}E_0\text{ }\text{ and }\text{ }\text{error }\epsilon=1-\frac{E_m}{E_0}=\frac{R_{eq}}{R_m+R_{eq}}$$
For an ammeter, we again calculate $R_{eq}$ (this time the meter will be in series with the rest of the circuit). Then,
$$\text{measured current }I_m=\frac{R_{eq}}{R_m+R_{eq}}I_u\text{ }\text{ and }\text{ }\text{error }\epsilon=1-\frac{I_m}{I_u}=\frac{R_m}{R_m+R_{eq}}$$

\section{Dynamic Characteristics}

The general mathematical model (input $q_i\rightarrow$ output $q_0$) of a system can be represented by:
$$a_{n}\frac{\text{d}^{n}q_{0}}{\text{d}t^{n}}+a_{n-1}\frac{\text{d}^{n-1}q_{0}}{\text{d}t^{n-1}}+\dots+a_{1}\frac{\text{d}q_{0}}{\text{d}t}+a_0q_0=b_{m}\frac{\text{d}^{m}q_{i}}{\text{d}t^{m}}+b_{m-1}\frac{\text{d}^{m-1}q_{i}}{\text{d}t^{m-1}}+\dots+b_{1}\frac{\text{d}q_{i}}{\text{d}t}+b_0q_i$$
Normally we don't specify the input derivatives, so we replace the RHS by just $q_i$. Sometimes we may also need to employ techniques like the Laplace transform to solve certain problems.
\vspace{-2mm}
\begin{center}
\includegraphics[width=0.8\textwidth]{laplace.jpg}
\end{center}

\textbf{\large Zero Order Systems:}

The general equation can be written as $a_0q_0=b_0q_i\implies q_0=\dfrac{b_0}{a_0}q_i=Kq_i$.
\vspace{-2mm}
\begin{itemize}
\itemsep-0em
\item[$-$] $K$ is the static sensitivity of the system
\item[$-$] output is instantaneous with respect to input (i.e. $\phi=0$)
\end{itemize}
\vspace{-4mm}
An example of a zero order system is a potentiometer. The emf $e_0=\dfrac{x}{L}E_b$ is a function of only one variable, i.e. the distance of the sliding contact.

\newpage
\subsection{First Order Systems}

The general equation characterizing a first order system is:
\begin{align*}
a_1\frac{\text{d}q_0}{\text{d}t}+a_0q_0&=b_0q_i\\
\therefore \frac{a_1}{a_0}\frac{\text{d}q_0}{\text{d}t}+q_0&=\frac{b_0}{a_0}q_i\\
\therefore (\tau D+1)q_0=Kq_i\implies & \frac{q_0}{q_i}(D)=\frac{K}{1+\tau D}
\end{align*}
$\tau$ is the time constant whereas $K$ is the static sensitivity of the system. With certain assumptions, we can model a thermometer as a 1$^{\text{st}}$ order system. It relies on thermal expansion of the liquid column in response to changes in the surrounding temperature.
$$\left.\begin{array}{r}
\b=\text{coefficient of volume expansion}\\
V=\text{volume of bulb}\\
A_c=\text{cross-sectional area of capillary}\\
\rho=\text{density of thermometer fluid}\\
C_p=\text{specific heat capacity of thermometer fluid}\\
h=\text{heat transfer coefficient}\\
A_s=\text{surface area of bulb}
\end{array}\right\rbrace K=\frac{\b V}{A_c}\text{ }\text{ and }\text{ }\tau=\frac{\rho C_pV}{hA_s}
$$
The differential equation obtained is:
$$\frac{\text{d}T}{\text{d}t}+\frac{hA_s}{\rho C_pV}T=\frac{hA_sT_f}{\rho C_pV}\implies \frac{\text{d}y}{\text{d}t}+p(t)y=g(t)$$
$$y(t)=\frac{\int e^{\int p(t)\text{d}t}g(t)\text{d}t+C}{e^{\int p(t)\text{d}t}}\implies T=T_f+(T_0-T_f)e^{-t/\tau}$$

\textbf{\large Step Response:}

For a step response, the input $q_i$ is constant. Hence, the governing equation is:
$$(\tau D+1)q_0=Kq_i\implies q_0=Kq_i+Ce^{-t/\tau}$$
For zero initial conditions, we have $q_0=Kq_i(1-e^{-t/\tau})$. Thus, the response time depends only on the value of $\tau$. The error for a step response can be written as
$$e_m=Kq_i-q_0=Kq_ie^{-t/\tau}$$
\vspace{-3mm}
\begin{center}
\includegraphics[width=0.49\textwidth]{firstorder_stepresponse.jpg}
\includegraphics[width=0.49\textwidth]{firstorder_steperror.jpg}
\end{center}

\newpage
\textbf{\large Ramp Response:}

The governing equation is $(\tau D+1)q_0=Kq_{iramp}t$. Applying the Laplace transform, we get
$$q_0=\frac{Kq_{iramp}t}{(1+\tau D)}\implies Q_0(s)=\frac{Kq_{iramp}}{s^2(1+\tau s)}\implies \frac{Q_0(s)}{Kq_{iramp}}=\frac{1}{s^2}-\frac{\tau}{s}+\frac{\tau}{s+\frac{1}{\tau}}$$
Inverting the Laplace transform, we finally get
$$q_0(t)=Kq_{iramp}(t-\tau)+Kq_{iramp}\tau e^{-t/\tau}\text{ }\text{ and }\text{ }e_m=Kq_{iramp}\tau(1-e^{-t/\tau})$$
The steady state error (i.e. the component of error that stays constant with time) is given by $e_{ss}=Kq_{iramp}\tau$ (often we assume $K=1$). The transient error eventually converges to 0, thus there is always an error of $e_{ss}$ even for very large values of time.

\textbf{\large Impulse Response:}

We initially assume a step input of magnitude $A/T$ applied for time $T$. The impulse response can then be found in the limit $T\to 0$.
\begin{align*}
q_0=\frac{A}{T}(1-e^{-t/\tau})&\text{ for }0\leqslant t\leqslant T\\
q_0=\frac{A(1-e^{-T/\tau})e^{-t/\tau}}{Te^{-T/\tau}}&\text{ for }t>T
\end{align*}
In the limit $T\to 0$, we get the impulse response as
$$q_0=\frac{A}{\tau}e^{-t/\tau}$$

\textbf{\large Frequency Response:}
$$\frac{q_0}{Kq_i}=\frac{1}{1+\tau D}=\frac{1}{1+j\tau\omega}\implies \frac{q_0}{Kq_i}=\frac{1}{\sqrt{1+\tau^2\omega^2}};\text{ }\phi=\arctan(-\tau\omega)$$
$$\therefore\text{ for input }q_i=a\sin(\omega t)\rightarrow \text{output }q_0=\frac{a}{\sqrt{1+\tau^2\omega^2}}\sin(\omega t+\phi)$$
As observed, the frequency response has a magnitude as well as a phase difference associated with it. An ideal frequency response would have $\dfrac{q_0}{Kq_i}=1$ and $\phi=0$.

\subsection{Second Order Systems}

The general equation characterizing a second order system is:
\begin{align*}
a_2\frac{\text{d}^2q_0}{\text{d}t^2}+a_1\frac{\text{d}q_0}{\text{d}t}+a_0q_0&=b_0q_i\\
\therefore \frac{a_2}{a_0}\frac{\text{d}^2q_0}{\text{d}t^2}+\frac{a_1}{a_0}\frac{\text{d}q_0}{\text{d}t}+q_0&=\frac{b_0}{a_0}q_i
\end{align*}
A very common example of a second order system is that of a mass, spring and damper. The force applied by the spring depends on the displacement $x$ while the force applied by the damper depends on the velocity $v$.
\newpage
$$m\frac{\text{d}^2x}{\text{d}t^2}=F-Kx-B\frac{\text{d}x}{\text{d}t}\implies (mD^2+BD+K)x=F\implies \left(\frac{m}{K}D^2+\frac{B}{K}D+1\right)x=\frac{F}{K}$$
Replacing $\displaystyle \omega_n=\sqrt{\frac{K}{m}}$ and $\displaystyle \zeta=\frac{B}{2\sqrt{mK}}$, we get
$$\left(\frac{D^2}{\omega_n^2}+\frac{2\zeta D}{\omega_n}+1\right)x=\frac{F}{K}$$

\textbf{\large Step, Ramp \& Impulse Responses:}

All of the following equations can be derived using the fundamental differential equation
$$\ddot{y}+2\zeta\omega_n\dot{y}+\omega_n^2y=f(t)$$
\begin{center}
\includegraphics[width=\textwidth]{secondorder_response.jpg}
\end{center}
\begin{itemize}
\itemsep0em
\item[$-$] damped natural frequency $\omega_d=\sqrt{1-\zeta^2}\omega_n$ for $0\leqslant\zeta<1$
\item[$-$] phase angle $\psi=\arctan(\dfrac{\zeta}{\sqrt{1-\zeta^2}})$ for $0\leqslant\zeta<1$
\item[$-$] time constants for overdamped ($\zeta>1$) systems are
$$\tau_1=\frac{1}{\zeta\omega_n-\sqrt{\zeta^2-1}\omega_n}\text{ and }\tau_2=\frac{1}{\zeta\omega_n+\sqrt{\zeta^2-1}\omega_n}$$
\end{itemize}
The impulse response can be found by simply differentiating the step response. Similarly, the ramp response can be found by integrating the step response. For a ramp response, the steady state error is given by
$$e_{m,ss}=\frac{2\zeta q_{iramp}}{\omega_n}$$
Some important observations from the above equations are as follows:
\vspace{-3mm}
\begin{itemize}
\itemsep-0.1em
\item[$-$] overdamped systems have a sluggish response (i.e. a large time delay to reach the desired output)
\item[$-$] underdamped systems have an oscillatory response depending on the damping coefficient
\item[$-$] critically damped systems have the most desirable performance
\item[$-$] in most systems, $\omega_nt$ is determined by the response, so we often try to design $\omega_n$ to be as large as possible
\item[$-$] most commercial systems tend to use $0.6<\zeta<0.7$, since the system gives $\approx 90\%$ accuracy at $\omega_nt=2.5$
\end{itemize}

\textbf{\large Frequency Response:}

The general equation for the frequency response of a second order system is:
$$\left(\frac{D^2}{\omega_n^2}+\frac{2\zeta D}{\omega_n}+1\right)q_0=Kq_i\implies \frac{q_0}{Kq_i}=\frac{1}{\dfrac{D^2}{\omega_n^2}+\dfrac{2\zeta D}{\omega_n}+1}\implies \frac{q_0}{Kq_i}=\frac{1}{\dfrac{-\omega^2}{\omega_n^2}+\dfrac{2\zeta\omega}{\omega_n}j+1}$$
$$\therefore \frac{q_0}{Kq_i}=\frac{1}{\sqrt{\left(1-\dfrac{\omega^2}{\omega_n^2}\right)^2+\left(\dfrac{2\zeta\omega}{\omega_n}\right)^2}};\text{ }\phi=\arctan\left[\frac{-2\zeta}{\dfrac{\omega_n}{\omega}-\dfrac{\omega}{\omega_n}}\right]$$
When $\omega/\omega_n$ is small, the response for $0.6<\zeta<0.7$ is satisfactory. Also, when the system frequency matches the natural frequency of the device, resonance occurs: the phase angle becomes $-90^{\circ}$ and, for small $\zeta$, the amplitude rises sharply.

\subsection{Combination of Systems}

For systems in series, their individual transfer functions are simply multiplied. For example, two first order systems in series give a second order system as follows:
\begin{align*}
\text{for }q_i=(\tau_1D+1)q_{01}&\text{ and }q_{01}=(\tau_2D+1)q_0\\
q_i=(\tau_1D+1)(\tau_2D+1)q_0\implies & q_i=(\tau_1\tau_2D^2+\tau_1D+\tau_2D+1)q_0
\end{align*}
Comparing this with the standard equation for a second order system, we get $\tau_1\tau_2=\dfrac{1}{\omega_n^2}$ and $\tau_1+\tau_2=\dfrac{2\zeta}{\omega_n}$.

\end{document}
{ "alphanum_fraction": 0.7143608935, "avg_line_length": 52.2996941896, "ext": "tex", "hexsha": "b0dc0c2c16193ad1f5bd603eeb2b19151b4cb35d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-08T09:18:00.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-08T09:18:00.000Z", "max_forks_repo_head_hexsha": "c9b5747607d4a8cccfe68410bf4deb9a15960636", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "omprabhu31/omprabhu31.github.io", "max_forks_repo_path": "academics/notes/me-226/me226notes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c9b5747607d4a8cccfe68410bf4deb9a15960636", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "omprabhu31/omprabhu31.github.io", "max_issues_repo_path": "academics/notes/me-226/me226notes.tex", "max_line_length": 341, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c9b5747607d4a8cccfe68410bf4deb9a15960636", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "omprabhu31/omprabhu31.github.io", "max_stars_repo_path": "academics/notes/me-226/me226notes.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-03T10:43:43.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-03T10:43:43.000Z", "num_tokens": 6066, "size": 17102 }
The User/Resource Manager Interface is intended to support access to power and energy related information relevant to an HPC user, specifically information pertaining to jobs. This interface is similar to the User/Monitor and Control Interface (section \ref{sec:UserMC}) but in this case assumes that the Resource Manager has a data retention capability (database) available to query energy and statistics information based on job or user Id. The availability of this information is implementation dependent. Alternatively, if the Resource Manager does not have a database capability, the same interfaces are available to the user role through the User/Monitor and Control System Interface (section \ref{sec:UserMC}), which may provide this functionality.

\subsection{Supported Attributes}\label{sec:UserRMAttributes}

The Power API specification does not currently recommend that any of the attributes be exposed to the user role. The implementation is free to expose any attribute it determines is useful to the user role without violating the specification.

\subsection{Supported Core (Common) Functions}\label{sec:UserRMSupportedCommon}

\begin{itemize}[noitemsep,nolistsep]
\item{Hierarchy Navigation Functions - section \ref{sec:Navigation}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Group Functions - section \ref{sec:Group}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Metadata Functions - section \ref{sec:METADATA}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Statistics Functions - section \ref{sec:StatisticsFunctions}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL - for historic queries only}
\end{itemize}
\end{itemize}

%==============================================================================%
\subsection{Supported High-Level (Common) Functions}\label{sec:UserRMHighLevel}

\begin{itemize}[noitemsep,nolistsep]
\item{Report Functions - section \ref{sec:ReportFunctions}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\end{itemize}

%==============================================================================%
\subsection{Interface Specific Functions}\label{sec:UserRMFunctions}
{ "alphanum_fraction": 0.7304075235, "avg_line_length": 48.5434782609, "ext": "tex", "hexsha": "d48c4f8d1aac9028e4b1a1a67b84384398233fcb", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2019-05-24T13:46:52.000Z", "max_forks_repo_forks_event_min_datetime": "2018-04-18T16:06:43.000Z", "max_forks_repo_head_hexsha": "e3b74b0c62fa7e6104b8b18c4334e71afb745802", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "regrant/powerapi_spec-1", "max_forks_repo_path": "UserRM.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "ddcc18ba6d2a9669f1c30f86f438ddecd5f444de", "max_issues_repo_issues_event_max_datetime": "2020-09-18T15:02:08.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-09T17:13:36.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "pwrapi/powerapi_spec", "max_issues_repo_path": "UserRM.tex", "max_line_length": 266, "max_stars_count": 4, "max_stars_repo_head_hexsha": "e3b74b0c62fa7e6104b8b18c4334e71afb745802", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "regrant/powerapi_spec-1", "max_stars_repo_path": "UserRM.tex", "max_stars_repo_stars_event_max_datetime": "2018-06-07T17:19:34.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-09T17:10:47.000Z", "num_tokens": 537, "size": 2233 }
\subsection{Salt}
{ "alphanum_fraction": 0.7, "avg_line_length": 5, "ext": "tex", "hexsha": "b5d992ae4e55425f47a863f0061b014dbd4bf31d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/culture/ingredients/05-01-Salt.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/culture/ingredients/05-01-Salt.tex", "max_line_length": 17, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/culture/ingredients/05-01-Salt.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6, "size": 20 }
\section{Directed Hypergraphs and Generalized Lambda Calculus}
\phantomsection\label{sThree}

\p{Thus far in this chapter, I have written in general terms about architectural features related to Cyber-Physical software; especially verifying coding assumptions concerning individual data types and/or procedures. My comments were intended to summarize the relevant territory, so that I can add some theoretical details or suggestions from this point forward. In particular, I will explore how to model software components at different scales so as to facilitate robust, safety-conscious coding practices.
}

\p{Note that almost all non-trivial software is in some sense \q{procedural}: the total package of functionality provided by each software component is distributed among many individual, interconnected procedures. Each procedure, in general, implements its functionality by calling \i{other} procedures in some strategic order. Of course, often inter-procedure calls are \i{conditional} \mdash{} a calling procedure will call one (or some sequence of) procedures when some condition holds, but call alternate procedures when some other conditions hold. In any case, computer code can be analyzed as a graph, where connections exist between procedures insofar as one procedure calls, or sometimes calls, the other.
}

\p{This general picture is of only limited applicability to actual applications, however, because the basic concept of \q{procedure} varies somewhat between different programming languages. As a result, it takes some effort to develop a comprehensive model of computer code which accommodates a representative spectrum of coding styles and paradigms.
}

\p{There are perhaps three different perspectives for such a comprehensive theory. One perspective is to consider source code as a data structure in its own right, employing a Source Code Algebra or Source Code Ontology to assert properties of source code and enable queries against source code, qua information space. A second option derives from type theory: to consider procedures as instances of functional types, specified by tuples of input and output types. A procedure is then a transform which, in the presence of (zero or more) inputs having the proper types, produces (one or more) outputs with their respective types. (In practice, some procedures do not return values, but they \i{do} have some kind of side-effect, which can be analyzed as a variety of \q{output}.) Finally, third, procedures can be studied via mathematical frameworks such as Lambda Calculus, allowing notions of functions on typed parameters, and of functional application \mdash{} applying functions to concrete values, which is analogous to calling procedures with concrete input arguments \mdash{} to be made formally rigorous.
}

\p{I will briefly consider all three of these perspectives \mdash{} Source Code Ontology, type-theoretic models, and Lambda Calculus \mdash{} in this section. I will also propose a new model, based on the idea of \q{channels}, which combines elements of all three.
}

\vspace{-.1em}
\subsection{Generalized Lambda Calculus}

\p{Lambda (or \mOldLambda{}-) Calculus emerged in the early 20th Century as a formal model of mathematical functions and function-application.
There are many mathematical constructions which can be subsumed under the notion of \q{function-application}, but these have myriad notations and conventions (compare the visual differences between mathematical notations \mdash{} integrals, square roots, super- and sub-scripted indices, and so forth \mdash{} to the much simpler alphabets of mainstream programming languages). But the early 20th century was a time of great interest in \q{mathematical foundations}, seeking to provide philosophical underpinnings for mathematical reasoning in general, unifying disparate mathematical methods and subdisciplines. One consequence of this foundational program was an attempt to capture the formal essence of the concept of \q{function} and of functions being applied to concrete values. } \p{A related foundational concern is how mathematical formulae can be nested, yielding new formulae. For example, the volume of a sphere (expressed in terms of its radius \rRad{}) is \VolSphere{}. The symbol \rRad{} is just a mnemonic which could be replaced with a different symbol, without the formula being different. But it can also be replaced by a more complex expression, to yield a new formula. In this case, substituting the formula for a cube's half-diagonal \mdash{} \crVOverRTwo{} where \vVol{} is its volume \mdash{} for \rRad{}, in the first formula, yields \volSphCube{}: a formula for the sphere's volume in terms of the volume of the largest cube that can fit inside it (\cite{KennethAnderson} has similar interesting examples in the context of code optimization). This kind of tinkering with equations is of course a bread-and-butter of mathematical discovery. In terms of foundations research, though, observe that the derivation depended on two givens: that the \rRad{} symbol is \q{free} in the first formula \mdash{} it is a place-holder rather than the designation of a concrete value, like \piSym{} \mdash{} and that free symbols (like \rRad{}) can be bound to other formulae, yielding new equations. } \p{From cases like these \mdash{} relatively simple geometric expressions \mdash{} mathematicians began to ask foundational questions about mathematical formulae: what are all formulae that can be built up from a set of core equations via repeatedly substituting nested expressions for free symbols? This question turns out to be related to the issue of finite calculations: in lieu of building complex formulae out of simpler parts, we can proceed in the opposite direction, replacing nested expressions with values. Formulae are constructed in terms of unknown values; when we have concrete measurements to plug in to those formulae, the set of unknowns decreases. If \i{all} values are known, then a well-constructed formula will converge to a (possibly empty) set of outcomes. This is roughly analogous to a computation which terminates in real time. On the other hand, a \i{recursive} formula \mdash{} an expression nested inside itself, such as a continued fraction \mdash{} is analogous to a computation which loops indefinitely.\nobrfootnote{Although there are sometimes techniques for converting formulae like Continued Fractions into \q{closed form} equations which do \q{terminate}. % It may be desirable to write this as "nobrfootnote" ... } } \p{In the early days of computer programming, it was natural to turn to \mOldLambda{}-Calculus as a formal model of computer procedures, which are in some ways analogous to mathematical formulae. 
As a mathematical subject, \mOldLambda{}-Calculus predates digital computers as we know them. While there were no digital computers at the time, there \i{was} a growing interest in mechanical computing devices, which led to the evolution of cryptographic machines used during the Second World War. So there was indeed a practical interest in \q{computing machines}, which eventually led to John von Neumann's formal prototypes for digital computers. } \p{Early on, though, \mOldLambda{}-Calculus was less about blueprints for calculating machines and more about \i{abstract} formulation of calculational processes. Historically, the original purpose of \mOldLambda{}-Calculus was largely a mathematical \i{simulation} of computations, which is not the same as a mathematical \i{prototype} for computing machines. Mathematicians in the decades before WWII investigated logical properties of computations, with particular emphasis on what sort of problems could always be solved in finite time, or what kinds of procedures can be guaranteed to terminate \mdash{} a \q{Computable Number}, for example, is a number which can be approximated to any degree of precision by a terminating function. Similarly, a Computable Function is a function from input values to output values that can be associated with an always-terminating procedure which necessarily calculates the desired outputs from a set of inputs. The spaces of Computable Functions and Computable Numbers are mathematical objects whose properties can be studied through mathematical techniques \mdash{} for instance, Computable Numbers are known to be a countable field within the real numbers. These mathematical properties are proven using a formal description of \q{any computer whatsoever}, which has no concern for the size and physical design of the \q{computers} or the time required for their \q{programs}, so long as they are finite. Computational procedures in this context are not actual implementations but rather mathematical distillations that can stand in for calculations for the purpose of mathematical analysis (interesting and representative contemporary articles continuing these perspectives include, e.g., \cite{MartinEscardo}, \cite{MasahitoHasegawa}, \cite{TuckerZucker}). } \p{It was only after the emergence of modern digital computers that \mOldLambda{}-Calculus became reinterpreted as a model of \i{concrete} computing machines. In its guise as a Computer Science (and not just Mathematical Foundations) discipline, \mOldLambda{}-Calculus has been most influential not in its original form but in a plethora of more complex models which track the evolution of programming languages. Many programming languages have important differences which are not describable on a purely mathematical basis: two languages which are both \q{Turing complete} are abstractly interchangeable, but it is important to represent the contrast between, say, Object-Oriented and Functional programming. In lieu of a straightforward, mathematical model of formulae as procedures which map inputs to outputs, modern programming languages add many new constructs which determine different mechanisms whereby procedures can read and modify values: objects, exceptions, closures, mutable references, side-effects, signal/slot connections, and so forth.
Accordingly, new programming constructions have inspired new variants of \mOldLambda{}-Calculus, analyzing different features of modern programming languages \mdash{} Object Orientation, Exceptions, call-by-name, call-by-reference, side effects, polymorphic type systems, lazy evaluation \mdash{} in the hopes of deriving formal proofs of program behavior insofar as computer code uses the relevant constructions. In short, a reasonable history can say that \mOldLambda{}-Calculus mutated from being an abstract model for studying Computability as a mathematical concept, to being a paradigm for prototype-specifications of concretely realized computing environments. } \p{Modern programming languages have many different ways of handing-off values between procedures. The \q{inputs} to a function can be \q{message receivers} as in Object-Oriented programming, or lexically scoped values \q{captured} in an anonymous function that inherits values from the lexical scope (loosely, the area of source code) where its body is composed. Procedures can also \q{receive} data indirectly from pipes, streams, sockets, network connections, database connections, or files. All of these are potential \q{input channels} whereby a function implementation may access a value that it needs. In addition, procedures can \q{return} values not just by providing a final result but by throwing exceptions, writing to files or pipes, and so forth. To represent these myriad \q{channels of communication} computer scientists have invented a menagerie of extensions to \mOldLambda{}-Calculus \mdash{} a noteworthy example is the \q{Sigma} calculus to model Object-Oriented Programming; but parallel extensions represent call-by-need evaluation, exceptions, by-value and by-reference capture, etc. } \p{Rather than study each system in isolation, in this chapter I propose an integrated strategy for unifying disparate \mOldLambda{}-Calculus extensions into an overarching framework. The \q{channel-based} tactic I endorse here may not be optimal for a \i{mathematical} calculus which has formal axioms and provable theorems, but I believe it can be useful for the more practical goal of modeling computer code and software components, to establish recommended design patterns and to document coding assumptions. } \p{In this perspective, different extensions or variations to \mOldLambda{}-Calculus model different \i{channels}, or data-sources through which procedures receive and/or modify values. Different channels have their own protocols and semantics for passing values to functions. We can generically discuss \q{input} and \q{output} channels, but programming languages have different specifications for different genres of input/output, which we can model via different channels. For a particular channel, we can recognize language-specific limitations on how values passed in to or received from those channels are used, and how the symbols carrying those values interact with other symbols both in function call-sites and in the body of procedure implementations. For example, procedures can output values by throwing exceptions, but exceptions are unusual values which have to be handled in specific ways \mdash{} languages employ exceptions to signal possible programming errors, and they are engineered to interrupt normal program flow until or unless exceptions are \q{caught}. } \p{Computer scientists have explored these more complex programming paradigms in part by inventing new variations on \mOldLambda{}-calculi. 
Here I will develop one theory representing code in terms of Directed Hypergraphs, which are subject to multiple kinds of lambda abstraction \mdash{} in principle, unifying multiple \mOldLambda{}-Calculus extensions. The following subsection will lay out the details of this form of Directed Hypergraph and how \mOldLambda{}-calculi can be defined on its foundation, while the last subsection summarizes an expanded type theory which follows organically from this approach. } \p{Many concepts outlined here are reflected in the accompanying code set (which includes a \Cpp{} Directed Hypergraph library). My strategy for unifying multiple \mOldLambda{}-calculi depends in turn on hypergraph code representations, which is a theme in the umbrella of graph-based data modeling, to which I now turn. } \vspace{-.1em} \subsectiontwolinerepl{Directed Hypergraphs and \q{Channel Abstractions}}% {Directed Hypergraphs and 'Channel Abstractions'} \p{A \i{hypergraph} is a graph whose edges (a.k.a. \q{hyperedges}) can span more than two nodes (\cite[e.g. volume 2, page 24]{BenGoetzel}, \cite{HaishanLiu}, \cite{MarkMinas} and \cite{MinasSchneider}, \cite{BalintMolnar}, \cite{AlexandraPoulovassilis}, \cite{JohnStell}, \cite{JohnStellFCA}). A \i{directed} hypergraph (\q{\DH{}}) is a hypergraph where each edge has a \i{head set} and \i{tail set} (both possibly empty). Both of these are sets of nodes which (when non-empty) are called \i{hypernodes}. A hypernode can also be thought of as a hyperedge whose tail-set (or head-set) is empty. Note that a typical hyperedge connects two hypernodes (its head- and tail-sets), so if we consider just hypernodes, a hypergraph potentially reduces to a directed ordinary graph.\footnote{Here when distinguishing \q{head} and \q{tail} I will invert the orientation which most mathematical treatments of hypergraphs use: that is, I define hyperedges such that the edge \i{starts at} the head and \i{ends at} the tail. My rationale is that hyperedges induce an orientation not only on the head/tail pair, but \i{within} the head and tail, which become ordered tuples rather than sets. Hyperedges can therefore be seen as paths that \q{visit} a chain of hyponodes, first those in the head, then those in the tail. My terminology is consistent with software libraries wherein the \i{beginning} of an ordered list is called its \q{head}. } While \q{edge} and \q{hyperedge} are formally equivalent, I will use the former term when attending more to the edge's representational role as linking two hypernodes, and use the latter term when focusing more on its tuple of spanned nodes irrespective of their partition into \i{head} and \i{tail}. } \p{I assume that hyperedges always span an \i{ordered} node-tuple which induces an ordering in the head- and tail-sets: so a hypernode is an \i{ordered list} of nodes, not just a \i{set} of nodes. I will say that two hypernodes \i{overlap} if they share at least one node; they are \i{identical} if they share exactly the same nodes in the same order; and \i{disjoint} if they do not overlap at all. I call a Directed Hypergraph \q{reducible} if all hypernodes are either disjoint or identical. The information in reducible \DH{}s can be factored into two \q{scales}, one a directed graph whose nodes are the original hypernodes, and then a table of all nodes contained in each hypernode. 
Reducible \DH{}s allow ordinary graph traversal algorithms when hypernodes are treated as ordinary nodes on the coarser scale (so that their internal information \mdash{} their list of contained nodes \mdash{} is ignored).\footnote{A weaker restriction on \DH{} nodes is that two non-identical hypernodes \i{can} overlap, but must preserve node-order: i.e., if the first hypernode includes nodes \nodeNOne{}, and \nodeNTwo{} immediately after, and the second hypernode also includes \nodeNOne{}, then the second hypernode must also include \nodeNTwo{} immediately thereafter. Overlapping hypernodes can not \q{permute} nodes \mdash{} cannot include them in different orders or in a way that \q{skips} nodes. Trivially, all reducible \DH{}s meet this condition. Any graphs discussed here are assumed to meet this condition. } } \p{To avoid confusion, I will hereafter use the word \q{hyponode} in place of \q{node}, to emphasize the container/contained relation between hypernodes and hyponodes. I will use \q{node} as an informal word for comments applicable to both hyper- and hypo-nodes. Some Hypergraph theories and/or implementations allow hypernodes to be nested: i.e., a hypernode can contain another hypernode. In these theories, in the general case any node is potentially both a hypernode and a hyponode. For this chapter, I assume the converse: any \q{node} (as I am hereafter using the term) is \i{either} hypo- or hyper-. However, multi-scale Hypergraphs can be approximated by using hyponodes whose values are proxies to hypernodes. } \p{Here I will focus on a class of \DH{}s which (for reasons to emerge) I will call \q{Channelizable}. Channelizable Hypergraphs (\CH{}s) have these properties: \begin{enumerate}\item{} They have a Type System \TyS{} and all hyponodes and hypernodes are assigned exactly one canonical type (they may also be considered instances of super- or subtypes of that type). \item{} All hyponodes can have (or \q{express}) at most one value, an instance of its canonical type, which I will call a \i{hypovertex}. Hypernodes, similarly, can have at most one \i{hypervertex}. Like \q{node} being an informal designation for hypo- and hyper-nodes, \q{vertex} will be a general term for both hypo- and hyper-vertices. Nodes which do have a vertex are called \i{initialized}. The hypovertices \q{of} a hypernode are those of its hyponodes. \item{} Two hyponodes are \q{equatable} if they express the same value of the same type. Two (possibly non-identical) hypernodes are \q{equatable} if all of their hyponodes, compared one-by-one in order, are equatable. I will also say that values are \q{equatable} (rather than just saying \q{equal}) to emphasize that they are the respective values of equatable nodes. \item{} There may be a stronger relation, defined on equatable non-equivalent hypernodes, whereby two hypernodes are \i{inferentially equivalent} if any inference justified via edges incident to the first hypernode can be freely combined with inferences justified via edges incident to the second hypernode. Equatable nodes are not necessarily inferentially equivalent. \item{} Hypernodes can be assumed to be unique in each graph, but it is unwarranted to assume (without type-level semantics) that two equatable hypernodes in different graphs are or are not inferentially equivalent. 
Conversely, even if graphs are uniquely labeled \mdash{} which would appear to enable a formal distinction between hypernodes in one graph and those in another \mdash{} \CH{} semantics does not permit the assumption that this separation alone justifies inferences presupposing that their hypernodes \i{are not} inferentially equivalent. \item{} All hypo- and hypernodes have a \q{proxy}, meaning there is a type in \TyS{} including, for each node, a unique identifier designating that node, which can be expressed in other hyponodes. \item{} There are some types (including these proxies) which may only be expressed in hyponodes. There may be other types which may only be expressed in hypernodes. Types can then be classified as \q{hypotypes} and \q{hypertypes}. The \TyS{} may stipulate that all types are \i{either} hypo or hyper. In this case, it is reasonable to assume that each hypotype maps to a unique hypertype, similar to \q{boxing} in a language which recognizes \q{primitive} types (in Object-Oriented languages, boxing allows non-class-type values to be used as if they were objects). \item{} Types may be subject to the restriction that any hypernode which has that type can only be a tail-set, not a head-set; call these \i{tail-only} types. \item{} Hyponodes may not appear in the graph outside of hypernodes (a hypernode is, however, permitted to contain just a single hyponode). \item{} Each edge, separate and apart from the \CH{}'s actual graph structure, is associated with a distinct hypernode, called its \i{annotation}. This annotation cannot (except via a proxy) be associated with any other hypernode (it cannot serve as the head- or tail-set of any hyperedge). The first hyponode in its annotation I will dub a hyperedge's \i{classifier}. The outgoing edge-set of a hypernode can always be represented as an associative array indexed by the classifier's vertex. \item{} A hypernode's type may be subject to restrictions such that there is a fixed number of hyponodes shared by all instances. However, other types may be expressed in hypernodes whose size may vary. In this case the hyponode types cannot be random; there must be some pattern linking the distribution of hyponode types evident in hypernodes (with the same hypernode types) of different sizes. For example, the hypernodes may be divisible into a fixed-size, possibly empty sequence of hyponodes, followed by a chain of hyponode-sequences repeating the same type pattern. The simplest manifestation of this structure is a hypernode all of whose hyponodes are the same type. \item{} Define a \i{product-type transform} of a hypernode to be a different hypernode whose hypovertices are tuples of values equatable to those from the first hypernode, typed in terms of product types (i.e., tuples). For example, consider two different representations of semi-transparent colors: as a 4-vector \vecrgbt{}, or as an \vecrgb{} three-vector paired with a transparency magnitude. The second representation is a product-type transform of the first, because the first three values are grouped into a three-valued tuple. We can assert the requirement in most contexts that \CH{}s whose hypernodes are product-type transforms of each other contain \q{the same information} and as sources of information are interchangeable. \item{} The Type System \TyS{} is \i{channelized}, i.e., closed under a Channel Algebra, as will be discussed below. \end{enumerate} } \p{These definitions allude to two strategies for computationally representing \CH{}s.
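}

\p{To make these definitions slightly more concrete, the fragment below sketches how the building-blocks just enumerated (typed hyponodes expressing at most one value, hypernodes as ordered hyponode-tuples, and hyperedges carrying classifier-bearing annotations) might be declared as \Cpp{} data structures. This is a minimal, hypothetical illustration: the type and member names are invented for this sketch, and it does not reproduce the interfaces of the accompanying \Cpp{} Directed Hypergraph library. }

\begin{lstlisting}[language=C++, title={A minimal, hypothetical sketch of Channelizable Hypergraph building blocks (requires C++17)}]
#include <map>
#include <string>
#include <variant>
#include <vector>

// Stand-in for a canonical type drawn from the type system TS.
using TypeCode = std::string;

// A hyponode expresses at most one value of its canonical type;
// std::monostate marks an uninitialized node.
struct Hyponode
{
  TypeCode type;
  std::variant<std::monostate, long, double, std::string> value;
};

// A hypernode is an ordered tuple of hyponodes, itself carrying one type.
struct Hypernode
{
  TypeCode type;
  std::vector<Hyponode> hyponodes;
};

// A hyperedge runs from a head hypernode to a tail hypernode and carries an
// annotation hypernode whose first hyponode serves as the edge's classifier.
struct Hyperedge
{
  const Hypernode* head;
  const Hypernode* tail;
  Hypernode annotation;
};

// The outgoing edge-set of a hypernode, indexed by the classifier's value
// (simplified here to a string key), as suggested in the text.
using OutgoingEdges = std::map<std::string, Hyperedge>;

int main()
{
  // "Nathaniel, 46" as a hypernode with two hyponodes ...
  Hypernode person{"name-age-pair",
    {{"string", std::string("Nathaniel")}, {"int", 46L}}};
  // ... connected to a "Brooklyn, Democrat" hypernode by a "resident-of" edge.
  Hypernode place{"place-party-pair",
    {{"string", std::string("Brooklyn")}, {"string", std::string("Democrat")}}};
  Hyperedge lives_in{&person, &place,
    Hypernode{"annotation", {{"classifier", std::string("resident-of")}}}};
  OutgoingEdges edges_of_person{{"resident-of", lives_in}};
  (void)edges_of_person;
  return 0;
}
\end{lstlisting}

\p{Returning to the two representational strategies: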
One, already mentioned, is to reduce them to directed graphs by treating hypernodes as integral units (ignoring their internal structure). A second is to model hypernodes as a \q{table of associations} whose keys are the values of the classifier hyponodes on each of their edges. A \CH{} can also be transformed into an \i{undirected} hypergraph by collapsing head- and tail- sets into an overarching tuple. All of these transformations may be useful in some analytic/representational contexts, and \CH{}s are flexible in part by morphing naturally into these various forms.\phantomsection\label{unplug} } \spinctc{unplug}{Unplugging a Node.}{fig:unplug} \p{Notice that information present \i{within} a hypernode can also be expressed as relations \i{between} hypernodes. For example, consider the information that I (Nathaniel), age \FourtySix{}, live in Brooklyn as a registered Democrat. This may be represented as a hypernode with hyponodes \NathFF{}, connected to a hypernode with hyponodes \BrookDem{}, via a hyperedge whose classifier encodes the concept \q{lives in} or \q{is a resident of}. However, it may also be encoded by \q{unplugging} the \q{age} attribute so the first hypernode becomes just \Nath{} and it acquires a new edge, whose tail has a single hyponode \ageFF{} and a classifier (encoding the concept) \q{age} (see the comparison in Diagram \hyperref[fig:unplug]{\ref{fig:unplug}}). This construction can work in reverse: information present in a hyperedge can be refactored so that it \q{plugs in} to a single hypernode. } \p{These alternatives are not redundant. Generally, representing information via hyperedges connecting two hypernodes implies that this information is somehow conceptually apart from the hypernodes themselves, whereas representing information via hyponodes \i{inside} hypernodes implies that this information is central and recurring (enforced by types), and that the data thereby aggregated forms a recurring logical unit. In a political survey, people's names may \i{always} be joined to their age, and likewise their district of residence \i{always} joined to their political affiliation. The left-hand side representation of the info (seen as an undirected hyperedge) \NathFFBD{} in Diagram \hyperref[fig:unplug]{\ref{fig:unplug}} captures this semantics better because it describes the name/age and \mbox{place/party} pairings as \i{types} which require analogous node-tuples when expressed by other hypernodes. For example, any two hypernodes with the same type as \NathFF{} will necessarily have an \q{age} hypovertex and so can predictably be compared along this one axis. By contrast, the right-hand (\q{unplugged}) version in Diagram \hyperref[fig:unplug]{\ref{fig:unplug}} implies no guarantees that the \q{age} data point is present as part of a recurring pattern. } \itclfig{initializing-hypernodes}{fig:initializinghypernodes} \p{The two-tiered \DH{} structure is also a factor when integrating serialized or shared data structures with runtime data values. In the demo \DH{} library, for example, it is assumed that each node can be associated with a runtime, binary data allocation (practically speaking, a pointer to user data). Hypernodes' internal structure can therefore be represented \i{either} via hyponodes explicit in the graph content \i{or} by internal structure in the user data (or some combination). 
Graph deserialization can then be a matter of mapping hyponodes to fields in the \q{internal} data allocations, before then mapping inter-hypernode relations to the proper hypervertex-relations. Code sample \ref{lst:initializing-hypernodes} demonstrates the pattern of hypervertex construction as \Cpp{} objects that get wrapped in new nodes ({\OneOverlay}-{\TwoOverlay}), along with obtaining nodes already registered in a runtime graph ({\ThreeOverlay}) and then inserting the new nodes (with stated relationships) alongside prior ones into the runtime graph ({\FourOverlay}).\footnote{The code samples in this text are drawn from a working demo at the time of writing; the actual code belonging to a downloadable data set at the time of publication may be slightly revised. The data set will include components to help readers cross-reference between the chapter's samples and working demo code. } } \p{In general, graph representations like \CH{} and \RDF{} serve two goals: first, they are used to \i{serialize} data structures (so that they may be shared between different locations, such as via the internet); and, second, they provide formal, machine-readable descriptions of information content, allowing for analyses and transformations, to infer new information or produce new data structures. The design and rationale of representational paradigms are influenced differently by these two goals, as I will review now with an eye in part toward drawing comparisons between \CH{} and \RDF{}. } \vspace{-.1em} \subsection{Channelized Hypergraphs and \largeRDF{}} \phantomsection\label{RDF} \p{The Resource Description Framework (\RDF{}) models information via directed graphs (\cite{MadalinaCroitoru}, \cite{ErnestoDamiani}, \cite{AnglesGuttierez}, and \cite{RodriguezWatkins} are good discussions of Semantic Web technologies from a graph-theoretic perspective), whose edges are labeled with concepts that, in well-structured contexts, are drawn from published Ontologies (these labels play a similar role to \q{classifiers} in \CH{}s). In principle, all data expressed via \RDF{} graphs is defined by unordered sets of labeled edges, also called \q{triples} (\q{\SPO{}}, where the \q{Predicate} is the label). In practice, however, higher-level \RDF{} notations such as \TTL{} (\Turtle{} or \q{Terse \RDF{} Triple Language}) and Notation3 (\NThree{}) deal with aggregate groups of data, such as \RDF{} containers and collections.\phantomsection\label{lived} } \spinctc{lived}{CH vs. RDF Collections.}{fig:lived} \p{For example, imagine a representation of the fact \q{(A/The person named) Nathaniel, \FourtySix{}, has lived in Brooklyn, Buffalo, and Montreal} (shown in Diagram \hyperref[fig:lived]{\ref{fig:lived}} as both a \CH{} and in \RDF{}). If we consider \Turtle{} or \NThree{} as \i{languages} and not just \i{notations}, it would appear as if their semantics is built around hyperedges rather than triples. It would seem that these languages encode many-to-many or one-to-many assertions, graphed as edges having more than one subject and/or predicate. Indeed, Tim Berners-Lee himself suggests that \q{Implementations may treat list as a data type rather than just a ladder of rdf:first and rdf:rest properties} \cite[page 6]{TimBernersLee}. That is, the specification for \RDF{} list-type data structures invites us to consider that they \i{may} be regarded as integral units rather than just aggregates that get pulled apart in semantic interpretation. } \p{Technically, perhaps, this is an illusion.
Despite their higher-level expressiveness, \RDF{} expression languages are, perhaps, supposed to be deemed \q{syntactic sugar} for a more primitive listing of triples: the \i{semantics} of \Turtle{} and \NThree{} are conceived to be defined by translating expressions down to the triple-sets that they logically imply (see also \cite{YurickWilks}). This intention accepts the paradigm that providing semantics for a formal language is closely related to defining which propositions are logically entailed by its statements. } \p{There is, however, a divergent tradition in formal semantics that is oriented to type theory more than logic. It is consistent with this alternative approach to see a different semantics for a language like \Turtle{}, where larger-scale aggregates become \q{first class} values. So, \NathFF{} can be seen as a (single, integral) \i{value} whose \i{type} is a \nameAge{} pair. Such a value has an \q{internal structure} which subsumes multiple data-points. The \RDF{} version is organized, instead, around a \i{blank node} which ties together disparate data points, such as my name and my age. This blank node is also connected to another blank node which ties together place and party. The blank nodes play an organizational role, since nodes are grouped together insofar as they connect to the same blank node. But the implied organization is less strictly entailed; one might assume that the \BrookDem{} nodes could just as readily be attached individually to the \q{name/age} blank (i.e., I live in Brooklyn, \i{and} I vote Democratic). } \p{Why, that is, are Brooklyn and Democratic grouped together? What concept does this fusion model? There is a presumptive rationale for the name/age blank (i.e., the fusing name/age by joining them to a blank node rather than allowing them to take edges independently): conceivably there are multiple \FourtySix{}-year-olds named Nathaniel, so \i{that} blank node plays a key semantic role (analogous to the quantifier in \q{\i{There is} a Nathaniel, age \FourtySix{}...}); it provides an unambiguous nexus so that further predicates can be attached to \i{one specific} \FourtySix{}-year-old Nathaniel rather than any old \NathFF{}. But there is no similarly suggested semantic role for the \q{place/party} grouping. The name cannot logically be teased apart from the name/age blank (because there are multiple Nathaniels); but there seems to be no \i{logical} significance to the \mbox{place/party} grouping. Yet pairing these values \i{can} be motivated by a modeling convention \mdash{} reflecting that geographic and party affiliation data are grouped together in a data set or data model. The logical semantics of \RDF{} make it harder to express these kinds of modeling assumptions that are driven by convention more than logic \mdash{} an abstracting from data's modeling environment that can be desirable in some contexts but not in others. } \p{So, why does the Semantic Web community effectively insist on a semantic interpretation of \Turtle{} and \NThree{} as \i{just} a notational convenience for \NTrips{} rather than as higher-level languages with a different higher-level semantics \mdash{} and despite statements like the above Tim Berners-Lee quote insinuating that an alternative interpretation has been contemplated even by those at the heart of Semantic Web specifications? 
Moreover, defining hierarchies of material composition or structural organization \mdash{} and so by extension, potentially, distinct scales of modeling resolution \mdash{} has been identified as an intrinsic part of domain-specific Ontology design (see \cite{Aranda}, \cite{BittnerSmithDonnelly}, \cite{BittnerSmith}, \cite{MaureenDonnelly}, \cite{Fabrikant}, \cite{PetitotSmith}, \cite{SegevGal}, \cite{BarrySmithBlood}, or \cite{PietroRamellini}). Semantic Web advocates have not however promoted multitier structure as a feature \i{of} Semantic models fundamentally, as opposed to criteriology \i{within} specific Ontologies. To the degree that this has an explanation, it probably has something to do with reasoning engines: the tools that evaluate \SPARQL{} queries operate on a triplestore basis. So the \q{reductive} semantic interpretation is arguably justified via the warrant that the definitive criteria for Semantic Web representations are not their conceptual elegance \visavis{} human judgments but their utility in cross-Ontology and cross-context inferences. } \p{As a counter-argument, however, note that many inference engines in Constraint Solving, Computer Vision, and so forth, rely on specialized algorithms and cannot be reduced to a canonical query format. Libraries such as \GeCODE{} and \ITK{} are important because problem-solving in many domains demands fine-tuned application-level engineering. We can think of these libraries as supporting \i{special} or domain-specific reasoning engines, often built for specific projects, whereas \OWL{}-based reasoners like \FactPP{} are \i{general} engines that work on general-purpose \RDF{} data without further qualification. In order to apply \q{special} reasoners to \RDF{}, a contingent of nodes must be selected which are consistent with reasoners' runtime requirements. } \p{Of course, special reasoners cannot be expected to run on the domain of the entire Semantic Web, or even on \q{very large} data sets in general. A typical analysis will subdivide its problem into smaller parts that are each tractable to custom reasoners \mdash{} in radiology, say, a diagnosis may proceed by first selecting a medical image series and then performing image-by-image segmentation. Applied to \RDF{}, this two-step process can be considered a combination of general and special reasoners: a general language like \SPARQL{} filters many nodes down to a smaller subset, which are then mapped/deserialized to domain-specific representations (including runtime memory). For example, \RDF{} can link a patient to a diagnostic test, ordered on a particular date by a particular doctor, whose results can be obtained as a suite of images \mdash{} thereby selecting the particular series relevant for a diagnostic task. General reasoners can \i{find} the images of interest and then pass them to special reasoners (such as segmentation algorithms) to analyze. Insofar as this architecture is in effect, Semantic Web data is a site for many kinds of reasoning engines. Some of these engines need to operate by transforming \RDF{} data and resources to an optimized, internal representation. Moreover, the semantics of these representations will typically be closer to a high-level \NThree{} semantics taken as \suigeneris{}, rather than as interpreted reductively as a notational convenience for lower-level formats like \NTrip{}. This appears to undermine the justification for reductive semantics in terms of \OWL{} reasoners. 
} \p{Perhaps the most accurate paradigm is that Semantic Web data has two different interpretations, differing in being consistent with special and general semantics, respectively. It makes sense to label these the \q{special semantic interpretation} or \q{semantic interpretation for special-purpose reasoners} (\SSI{}, maybe) and the \q{general semantic interpretation} (\GSI{}), respectively. Both these interpretations should be deemed to have a role in the \q{semantics} of the Semantic Web. } \p{Another order of considerations involves the semantics of \RDF{} nodes and \CH{} hypernodes, particularly with respect to uniqueness. Nodes in \RDF{} fall into three classes: blank nodes; nodes with values from a small set of basic types like strings and integers; and nodes with \URL{}s which are understood to be unique across the entire World Wide Web. There are no blank nodes in \CH{}, and intrinsically no \URL{}s either, although one can certainly define a \URL{} \i{type}. There is nothing in the semantics of \URL{}s which guarantees that each \URL{} designates a distinct internet resource; this is just a convention which essentially, \i{de facto}, fulfills itself because it structures a web of commercial and legal practices, not just digital ones; e.g. ownership is uniquely granted for each internet domain name. In \CH{}, a data type may be structured to reflect institutional practices which guarantee the uniqueness of values in some context: books have unique \ISBN{} codes; places have distinct \GIS{} locations, etc. These uniqueness requirements, however, are not intrinsically part of \CH{}, and need to be expressed with additional axioms. In general, a \CH{} hypernode is a tuple of relatively simple values and any additional semantics are determined by type definitions (it may be useful to see \CH{} hypernodes as roughly analogous to \CStruct{}s \mdash{} which have no \i{a priori} uniqueness mechanism). } \p{Also, \RDF{} types are less intrinsic to \RDF{} semantics than in \CH{} (see \cite{HeikoPaulheim}). The foundational elements of \CH{} are value-tuples (via nodes expressing values, whose tuples in turn are hypernodes). Tuples are indexed by position, not by labels: the tuple \NathFF{} does not in itself draw in the labels \q{name} or \q{age}, which instead are defined at the type-level (insofar as type-definitions may stipulate that the label \q{age} is an alias for the node in its second position, etc.). So there is no way to ascertain the semantic/conceptual intent of hypernodes without considering both hyponode and hypernode types. Conversely, \RDF{} does not have actual tuples (though these can be represented as collections, if desired); and nodes are always joined to other nodes via labeled connectors \mdash{} there is no direct equivalent to the \CH{} modeling unit of a hyponode being included in a hypernode by position. } \p{At its core, then, \RDF{} semantics are built on the proposition that many nodes can be declared globally unique by fiat. This does not need to be true of all nodes \mdash{} \RDF{} types like integers and floats are more ethereal; the number \FourtySix{} in one graph is indistinguishable from \FourtySix{} in another graph. This can be formalized by saying that some nodes can be \i{objects} but never \i{subjects}. If such restrictions were not enforced, then \RDF{} graphs could become in some sense overdetermined, implying relationships by virtue of quantitative magnitudes devoid of semantic content.
This would open the door to bizarre judgments like \q{my age is non-prime} or \q{I am older than Mohamed Salah's 2018 goal totals}. One way to block these inferences is to prevent nodes like \q{the number \FourtySix{}} from being subjects as well as objects. But nodes which are not primitive values \mdash{} ones, say, designating Mohamed Salah himself rather than his goal totals \mdash{} are justifiably globally unique, since we have compelling reasons to adopt a model where there is exactly one thing which is \i{that} Mohamed Salah. So \RDF{} semantics basically marries some primitive types which are objects but never subjects with a web of globally unique but internally unstructured values which can be either subject or object. } \p{In \CH{} the \q{primitive} types are effectively hypotypes; hyponodes are (at least indirectly) analogous to object-only \RDF{} nodes insofar as they can only be represented via inclusion inside hypernodes. But \CH{} hypernodes are neither (in themselves) globally unique nor lacking in internal structure. In essence, an \RDF{} semantics based on guaranteed uniqueness for atom-like primitives is replaced by a semantics based on structured building-blocks without guaranteed uniqueness. This alternative may be considered in the context of general versus special reasoners: since general reasoners potentially take the entire Semantic Web as their domain, global uniqueness is a more desired property than internal structure. However, since special reasoners only run on specially selected data, global uniqueness is less important than efficient mapping to domain-specific representations. It is not computationally optimal to deserialize data by running \SPARQL{} queries. } \p{Finally, as a last point in the comparison between \RDF{} and \CH{} semantics, it is worth considering the distinction between \q{declarative knowledge} and \q{procedural knowledge} (see e.g. \cite[volume 2, pages 182-197]{BenGoetzel}). According to this distinction, canonical \RDF{} data exemplifies \i{declarative} knowledge because it asserts apparent facts without explicitly trying to interpret or process them. Declarative knowledge circulates among software in canonical, reusable data formats, allowing individual components to use or make inferences from data according to their own purposes. } \p{Counter to this paradigm, return to hypothetical {\sgapped}\USH{}{\egapped}{\sadded}Cyber-Physical{\eadded} examples, such as the conversion of Voltage data to acceleration data, which is a prerequisite to accelerometers' readings being useful in most contexts. Software possessing capabilities to process accelerometers therefore reveals what can be called \i{procedural} knowledge, because software so characterized not only receives data but also processes such data in standardized ways. } \p{The declarative/procedural distinction perhaps fails to capture how procedural transformations may be understood as intrinsic to some semantic domains \mdash{} so that even the information we perceive as \q{declarative} has a procedural element. For example, the very fact that \q{accelerometers} are not called \q{Voltmeters} (which are something else) suggests how the Ubiquitous Computing community perceives voltage-to-acceleration calculations as intrinsic to accelerometers' data. 
But strictly speaking the components which participate in \USH{} networks are not just engaged in data sharing; they are functioning parts of the network because they can perform several widely-recognized computations which are understood to be central to the relevant domain \mdash{} in other words, they have (and share with their peers) a certain \q{procedural knowledge}. } \p{\RDF{} is structured as if static data sharing were the sole arbiter of semantically informed interactions between different components, which may have a variety of designs and rationales \mdash{} which is to say, a Semantic Web. But a thorough account of formal communication semantics has to reckon with how semantic models are informed by the implicit, sometimes unconscious assumption that producers and/or consumers of data will have certain operational capacities: the dynamic processes anticipated as part of sharing data are hard to conceptually separate from the static data which is literally transferred. To continue the accelerometer example, designers can think of such instruments as \q{measuring acceleration} even though \i{physically} this is not strictly true; their output must be mathematically transformed for it to be interpreted in these terms. Whether represented via \RDF{} graphs or Directed Hypergraphs, the semantics of shared data is incomplete unless the operations which may accompany sending and receiving data are recognized as preconditions for legitimate semantic alignment. } \p{While Ontologies are valuable for coordinating and integrating disparate semantic models, the Semantic Web has perhaps influenced engineers to conceive of semantically informed data sharing as mostly a matter of presenting static data conformant to published Ontologies (i.e., alignment of \q{declarative knowledge}). In reality, robust data sharing also needs an \q{alignment of \i{procedural} knowledge}: in an ideal Semantic Network, procedural capabilities are circulated among components, promoting an emergent \q{collective procedural knowledge} driven by transparency about code and libraries as well as about data and formats. The \CH{} model arguably supports this possibility because it makes type assertions fundamental to semantics. Rigorous typing both lays a foundation for procedural alignment and mandates that procedural capabilities be factored into assessments of network components, because a type attribution has no meaning without adequate libraries and code to construct and interpret type-specific values. } \thindecoline{} \p{Despite their differences, the Semantic Web, on the one hand, and Hypergraph-based frameworks, on the other, both belong to the overall space of graph-oriented semantic models. Hypergraphs can be emulated in \RDF{}, and \RDF{} graphs can be organically mapped to a Hypergraph representation (insofar as Directed Hypergraphs with annotations are a proper superspace of Directed Labeled Graphs). Semantic Web Ontologies for computer source code can thus be modeled by suitably typed \DH{}s, even while we can also formulate Hypergraph-based Source Code Ontologies directly. So, we are justified in assuming that a sufficient Ontology exists for most or all programming languages. This means that, for any given procedure, we can assume that there is a corresponding \DH{} representation which embodies that procedure's implementation. } \p{\phantomsection\label{detachedeval} Procedures, of course, depend on \i{inputs} which are fixed for each call, and produce \q{outputs} once they terminate.
In the context of a graph-representation, this implies that some hypernodes represent and/or express values that are \i{inputs}, while others represent and/or express its \i{outputs}. These hypernodes are \i{abstract} in the sense (as in Lambda Calculus) that they do not have a specific assigned value within the body, \i{qua} formal structure. Instead, a \i{runtime manifestation} of a \DH{} (or equivalently a \CH{}, once channelized types are introduced) populates the abstract hypernodes with concrete values, which in turn allows expressions described by the \CH{} to be evaluated. } \p{These points suggest a strategy for unifying Lambda Calculi with Source Code Ontologies. The essential construct in \mOldLambda{}-calculi is that mathematical formulae include \q{free symbols} which are \i{abstracted}: sites where a formula can give rise to a concrete value, by supplying values to unknowns; or give rise to new formulae, via nested expressions. Analogously, nodes in a graph-based source-code representation are effectively \mOldLambda{}-abstracted if they model input parameters, which are given concrete values when the procedure runs. Connecting the output of one procedure to the input of another \mdash{} which can be modeled as a graph operation, linking two nodes \mdash{} is then a graph-based analog to embedding a complex expression into a formula (via a free symbol in {\sadded}the{\eadded} latter). } \p{Carrying this analogy further, I earlier mentioned different \mOldLambda{}-Calculus extensions inspired by programming-language features such as Object-Orientation, exceptions, and by-reference or by-value captures. These, too, can be incorporated into a Source Code Ontology: e.g., the connection between a node holding a value passed to an input parameter node, in a procedure signature, is semantically distinct from the nodes holding \q{Objects} which are senders and receivers for \q{messages}, in Object-Oriented Parlance. Variant input/output protocols, including Objects, captures, and exceptions, are certainly semantic constructs (in the computer-code domain) which Source Code Ontologies should recognize. So we can see a convergence in the modeling of multifarious input/output protocols via \mOldLambda{}-Calculus and via Source Code Ontologies. I will now discuss a corresponding expansion in the realm of applied Type Theory, with the goal of ultimately folding type theory into this convergence as well. } \vspace{-.1em} \subsectiontwoline{Procedural Input/Output Protocols via Type Theory} \p{\label{types}Parallel to the historical evolution where \mOldLambda{}-Calculus progressively diversified and re-oriented toward concrete programming languages, there has been an analogous (and to some extent overlapping) history in Type Theory. When there are multiple ways of passing input to a function, there are {\sgapped}at{\egapped} potentially multiple kinds of function types. For instance, Object-Orientation inspired expanded \mOldLambda{}-calculi that distinguish function inputs which are \q{method receivers} or \q{\this{} objects} from ordinary (\q{lambda}) inputs. Simultaneously, Object-Orientation also distinguishes \q{class} from \q{value} types and between function-types which are \q{methods} versus ordinary functions. So, to take one example, a function telling us the size of a list can exhibit two different types, depending on whether the list itself is passed in as a method-call target (\listsize{} vs. \sizelist{}). 
} \p{One way to systematize the diversity of type systems is to assume that, for any particular type system, there is a category \tCat{} of types conformant to that system. This requires modeling important type-related concepts as \q{morphisms} or maps between types. Another useful concept is an \q{endofunctor}: an \q{operator} which maps elements in a category to other (or sometimes the same) elements. In a \tCat{} an endofunctor selects (or constructs) a type \tyTwo{} from a type \tyOne{} \mdash{} note how this is different from a morphism which maps \i{values of} \tyOne{} to \tyTwo{}. Type systems are then built up from a smaller set of \q{core} types via operations like products, sums, enumerations, and forming \q{function-like} types. } \p{We may think of the \q{core} types for practical programming as number-based (booleans, bytes, and larger integer types), with everything else built up by aggregation or encodings (like \ascii{} and \unicode{}, allowing types to include text and alphabets; or pixel-coordinates and colors, allowing for graphical/visual components).\footnote{In other contexts, however, non-mathematical core types may be appropriate: for example, the grammar of natural languages can be modeled in terms of a type system whose core are the two types \tyNoun{} and \tyProposition{} and which also includes function types (maps) between pairs or tuples of types (verbs, say, map \tyNoun{}s \mdash{} maybe multiple nouns, e.g. direct objects \mdash{} to \tyProposition{}s). } Ultimately, a type system \tCat{} is characterized (1) by which are its core types and (2) by how aggregate types are built from simpler ones (which essentially involves endofunctors and/or products). } \p{In Category Theory, a Category \cCat{} is called \q{Cartesian Closed} if for every pair of elements \eOne{} and \eTwo{} in \cCat{} there is an element \eOneToeTwo{} representing (for some relevant notion of \q{function}) all functions from \eOne{} to \eTwo{} \cite{RBrown}. The stipulation that a type system \TyS{} include function-like types is roughly equivalent, then, to the requirement that \TyS{}, seen as a Category, is Cartesian-Closed. The historical basis for this concept (suggested by the terminology) is that the construction to form function-types is an \q{operator}, something that creates new types out of old. A type system \TyS{} may then be \q{closed} under products: if \tOne{} and \tTwo{} are in \TyS{} then \tOneTimesTTwo{} must be as well. Analogously, \TyS{} supports function-like types if it is closed under a kind of \q{functionalization} operator \mdash{} if the \tOneTimesTTwo{} product can be mapped onto a function-like type \tyOneTotyTwo{}. } \p{In general, more sophisticated type systems \TyS{} are described by identifying new kinds of inter-type operators and studying those type systems which are closed under these operators: if \tyOne{} and \tyTwo{} are in \TyS{} then so is the combination of \tyOne{} and \tyTwo{}, where the meaning of \q{combination} depends on the operator being introduced. Expanded \mOldLambda{}-calculi \mdash{} which define new ways of creating functions \mdash{} are correlated with new type systems, insofar as \q{new ways of creating functions} also means \q{new ways of combining types into function-like types}. 
} \p{Furthermore, \q{expanded} \mOldLambda{}-calculi generally involve \q{new kinds of abstraction}: new ways that the building-blocks of functional expressions, whether these be mathematical formulae or bodies of computer code, can be \q{abstracted}, treated as inputs or outputs rather than as fixed values. In this chapter, I attempt to make the notion of \q{abstraction} rigorous by analyzing it against the background of \DH{}s that formally model computer code. So, given the correlations I have just described between \mOldLambda{}-calculi and type systems \mdash{} specifically, on \TyS{}-closure stipulations \mdash{} there are parallel correlations between type systems and \i{kinds of abstraction defined on Channelized Hypergraphs}. I will now discuss this further. } \subsubsection{Kinds of Abstraction} \p{The \q{abstracted} nodes in a \CH{} are loosely classifiable as \q{input} and \q{output}, but in practice there are various paradigms for passing values into and out of functions, each with its own semantics. For example, a \q{\this{}} symbol in \Cpp{} is an abstracted, \q{input} hypernode with special treatment in terms of overload resolution and access controls. Similarly, exiting a function via \returnct{} presents different semantics than exiting via \throw{}. As mentioned earlier, some of this variation in semantics has been formally modeled by different extensions to \mOldLambda{}-Calculus. } \p{So, different hypernodes in a \CH{} are subject to different kinds of abstraction. Speaking rather informally, hypernodes can be grouped into \i{channels} based on the semantics of their kind of abstraction. More precisely, channels are defined initially on \i{symbols}, which are associated with hypernodes: in any \q{body} (i.e., an \q{implementation graph}) hypernodes can be grouped together by sharing the same symbol, and correlatively sharing the same value during a \q{runtime manifestation} of the \CH{}. Therefore, the \q{channels of abstraction} at work in a procedure can be identified by providing a name representing the \i{kind} of channel and a list of symbols affected by that kind of abstraction.{\sgapped}In the notation I adopt here, conventional lambda-abstraction like \lXY{} would be written as \CHlXY{}. {\egapped} } \p{I propose \q{Channel Algebra} as a tactic for capturing the semantics of channels, so as to model programming languages' conventions and protocols with respect to calls between procedures. Once we get beyond the basic contrast between \q{input} and \q{output} parameters, it becomes necessary to define conditions on channels' size, and on how channels are associated with different procedures that may share values. Here are several examples: \begin{itemize}\item{} In most Object-Oriented languages, any procedure can have at most one \this{} (\q{message receiver}) object. Let \sCh{} model a \q{Sigma} channel, as in \q{Sigma Calculus} (written as \sigmaCalculus{}: see e.g. \cite{MartinAbadi}, \cite{CamposVasconcelos}, \cite{KathleenFisher}, \cite{EdwardZalta}, etc.). We then have the requirement that any procedure's \sCh{} channel can carry at most one value. \item{} \label{retexc} In all common languages which have exceptions, procedures can \i{either} throw an exception \i{or} return a value. If \return{} and \exception{} model the channels carrying standard returns and thrown exceptions, respectively, this convention translates to a requirement that the two channels cannot both be non-empty. \item{} A thrown exception cannot be handled as an ordinary value.
The whole point of throwing exceptions is to disrupt ordinary program flow, which means the exception value is only accessible in special constructs, like a \catch{} block. One way to model this restriction is to forbid \exception{} channels from transferring values to other channels. Instead, exception values are bound (in \catch{} blocks) to lexically-scoped symbols (I will discuss channel-to-symbol transfers below). \item{} Suppose a procedure is an Object-Oriented method (it has a non-empty \q{\sCh{}} channel). Any other methods called from that procedure will \mdash{} at least in the conventional Object-Oriented protocol \mdash{} automatically receive the enclosing method's \sigmach{} channel unless a different object for the called method is supplied expressly. \item{} \phantomsection\label{chaining} In the object-oriented technique known as \q{method chaining}, one procedure's \return{} channel is transferred to a subsequent procedure's \sCh{} channel. The pairing of \return{} and \sCh{} thereupon gives rise to one function-composition operator. With suitable restrictions (on channel size), \return{} and \lambda{} channels engender a different function-composition operator. So channels can be used to define operators between procedures which yield new function-like values (i.e., instances of function-like types). In some cases, function-like values defined via inter-function operators can be used in lieu of those instantiated from implemented procedures (although the specifics of this substitutability \mdash{} an example of so-called \q{eta ($\eta{}$) equivalence} \mdash{} vary by language). \end{itemize} } \p{The above examples represent possible combinations or interconnections (sharing values) between channels, together with semantic restrictions on when such connections are possible. In this chapter, I assume that notations describing these connections and restrictions can be systematized into a \q{Channel Algebra}, and then used to model programming language conventions and computer code. A basic example of inter-channel aggregation would be how a \lambda{} channel, combined with a \return{} channel, associated with one procedure, yields a conventional input/output pairing. One particular channel formation \mdash{} \lambdaPLUSreturn{}, say \mdash{} therefore models the basic \mOldLambda{}-Calculus and, simultaneously, a minimal definition of function-like types. Notionally, a procedure is, in the simplest conceptualization, the unification of an input channel and an output channel \mdash{} written, say, \chaOnePluschaTwo{} (with the \chplus{} possibly holding extra stipulations, such as that \cChaOne{} and \cChaTwo{} cannot both be non-empty). So a \q{channel sum} creates the basic foundation for a procedure, analogous to how input and output graph elements yield the foundations for morphisms in Hypergraph Categories. More complex channel combinations and protocols can then model more complex variations on \mOldLambda{}-Calculi and on programming language type systems. } \subsubsection{Channelized Type Systems} \p{Collectively, to summarize my discussion to this point, I will say that formulations describing channel kinds, their restrictions, and their interrelationships {\sgapped}describe{\egapped}{\sadded}outline{\eadded} a \i{Channel Algebra}, which expresses how channels combine to describe possible function signatures \mdash{} and accordingly to describe functional \i{types}.
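}

\p{As a small illustration of this vocabulary, the sketch below models a channelized signature as a mapping from channel kinds to lists of carrier types, and checks two of the restrictions mentioned in the examples above: that a \q{Sigma} channel carries at most one value, and that the \return{} and \exception{} channels of a single invocation cannot both be non-empty. The names used here (ChannelKind, ChannelSignature, and so on) are hypothetical conveniences for this sketch; they are not drawn from the accompanying library, and the validation routines are only toy stand-ins for the Channel Algebra itself. }

\begin{lstlisting}[language=C++, title={A hypothetical sketch of channelized signatures and two channel restrictions}]
#include <cstddef>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Kinds of channel discussed in the text: lambda inputs, the sigma ("this")
// channel, ordinary returns, and thrown exceptions.
enum class ChannelKind { Lambda, Sigma, Return, Exception };

// A channelized signature: each channel kind maps to the ordered list of
// carrier types it may transport (type names stand in for entries of TS).
using ChannelSignature = std::map<ChannelKind, std::vector<std::string>>;

// Restriction on signatures: a Sigma channel carries at most one value
// (a procedure has at most one "message receiver").
bool sigma_restriction_holds(const ChannelSignature& sig)
{
  auto it = sig.find(ChannelKind::Sigma);
  return it == sig.end() || it->second.size() <= 1;
}

// A record of one invocation: how many values each channel actually carried.
using ChannelOccupancy = std::map<ChannelKind, std::size_t>;

// Restriction on invocations: the Return and Exception channels cannot both
// be non-empty in a single call.
bool return_exception_restriction_holds(const ChannelOccupancy& occ)
{
  auto count = [&occ](ChannelKind k) {
    auto it = occ.find(k);
    return it == occ.end() ? std::size_t{0} : it->second;
  };
  return count(ChannelKind::Return) == 0 || count(ChannelKind::Exception) == 0;
}

int main()
{
  // A "size" method on a list: one sigma carrier (the list itself),
  // no lambda inputs, and one integer return carrier.
  ChannelSignature list_size{
    {ChannelKind::Sigma,  {"list"}},
    {ChannelKind::Return, {"int"}}};

  ChannelOccupancy normal_call{{ChannelKind::Return, 1}};
  ChannelOccupancy throwing_call{{ChannelKind::Exception, 1}};

  std::printf("%d %d %d\n",
              sigma_restriction_holds(list_size),
              return_exception_restriction_holds(normal_call),
              return_exception_restriction_holds(throwing_call));
  return 0;
}
\end{lstlisting}

\p{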
The purpose of a Channel Algebra is, among other things, to elucidate how formal languages (like programming languages) formulate functions and procedures, and the rules they put in place for inputs and outputs. If \Chi{} is a Channel Algebra, a language adequately described by its formulations (channel kinds, restrictions, and interrelationships) can be called a \Chi{}-language. The basic \mOldLambda{}-Calculus can be described as a \Chi{}-language for the algebra defined by a minimal \lambdaPLUSreturn{} combination (with \return{} channels restricted to at most one element). Analogously, a type system \TyS{} is a \q{\Chi{}-type-system}, and is \q{closed} with respect to \Chi{}, if valid signatures {\sadded}characterized{\eadded}{\sgapped}described{\egapped} using channel kinds in \Chi{} correspond to types found in \TyS{}. Types may be less granular than signatures: as a case in point, functions differing in signature only by whether they throw exceptions may or may not be deemed the same type. But a channel construction on types in \TyS{} must also yield a type in \TyS{}. } \p{I say that a type system is \i{channelized} if it is closed with respect to some Channel Algebra. Channelized Hypergraphs are then \DH{}s whose type system is Channelized. We can think of channel constructions as operators which combine groups of types into new types. Once we assert that a \CH{} is Channelized, we know that there is a mechanism for describing some Hypergraphs or subgraphs as \q{procedure implementations} some of whose hypernodes are subject to kinds of abstraction present in the relevant Channel Algebra. Channel formulae and signatures describe source-code norms which could also be expressed via more conventional Ontologies. So Channel Algebra can be seen as a generalization of (\RDF{}-environment) Source Code Ontology (of the kinds studied for example by \cite{ImanKeivanloo}, \cite{WernerKlieber}, \cite{JohnathanLee}, \cite{TurnerEden}, \cite{ReneWitte}, \cite{PornpitWongthongtham}). Given the relations between \RDF{} and Directed Hypergraphs (despite differences I have discussed here), Channel Algebras can also be seen as adding to Ontologies governing Directed Hypergraphs. Such is the perspective I will take for the remainder of this chapter. } \p{For a Channel Algebra \Chi{} and a \Chi{}-closed type system (written, say) \TySChi{}, \Chi{} extends \TyS{} because function-signatures conforming to \Chi{} become types in \TyS{}. At the same time, \TyS{} also extends \Chi{}, because the elements that populate channels in \Chi{} have types within \TyS{}. Assume that for any type system, there is a partner \q{Type Expression Language} (\TXL{}) which governs how type descriptions (especially for aggregate types that do not have a single symbol name) can be composed consistent with the logic of the system. The \TXL{} for a type-system \TyS{} can be notated as \TXLTyS{}. If \TyS{} is channelized then its \TXL{} is also channelized \mdash{} say, \TXLTySChi{} for some \Chi{}. } \p{Similarly, we can then develop for Channel Algebras a \i{Channel Expression Language}, or \CXL{}, which can indeed be integrated with appropriate \TXL{}s. Formal declarations of channel axioms \mdash{} e.g., restrictions on channel sizes, alone or in combination \mdash{} are examples of terms that should be representable in a \CXL{}. 
However, whereas the \CXL{} expressions I have described so far {\sgapped}describe{\egapped}{\sadded}elucidate{\eadded} the overall shape of channels \mdash{} which channels exist in a given context and their sizes \mdash{} \CXL{} expressions can also add details concerning the \i{types} of values that can or do populate channels. \CXL{} expressions with these extra specifications then become function signatures, and as such type-expressions in the relevant \TXL{}. A channelized \TXL{} is then a superset of a \CXL{}, because it adds \mdash{} to \CXL{} expressions for function-signatures \mdash{} the stipulation that a particular signature does describe a \i{type}; so \CXL{} expressions become \TXL{} expressions when supplemented with a proviso that the stated \CXL{} construction describes a function-like type's signature. With such a proviso, descriptions of channels used by a function qualifies as a type attribution, connecting function symbol-names to expressions recognized in the \TXL{} as describing a type. } \p{Some \TXL{} expressions designate function-like types, but not all, since there are many types (\int{}, etc.) which do not have channels at all. While a \TXL{} lies \q{above} a \CXL{} by adding provisos that yield type-definition semantics from \CXL{} expressions, the \TXL{} simultaneously in a sense lies \q{beneath} the \CXL{} in that it provides expressions for the non-functional types which in the general case are the basis for \CXL{} expressions of functional types, since most function parameters \mdash{} the input/output values that populate channels \mdash{} have non-functional types. Section \sectsym{}\hyperref[sFive]{\ref{sFive}} will discuss the elements that \q{populate} channels (which I will call \q{carriers}) in more detail. } \p{In the following sections I will sketch a Channel Algebra that codifies the graph-based representation of functions as procedures whose inputs and outputs are related to other functions by variegated semantics (semantics that can be catalogued in a Source Code Ontology). With this foundation, I will argue that Channel-Algebraic type representations can usefully model higher-scale code segments (like statements and code blocks) within a type system, and also how type interpretations can give a rigorous interpretation to modeling constructs such as code specifications and \q{gatekeeping} code. I will start this discussion, however, by expanding on the idea of employing code-graphs \mdash{} hypergraphs annotated according to a Source Code Ontology \mdash{} to represent procedure implementations, and therefore to model procedures as instances of function-like types. }
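\p{Before turning to that discussion, a purely illustrative aside: the minimal \lambdaPLUSreturn{} construction described above can also be transcribed as a small executable data-structure sketch. The short Python fragment below is not part of the formal development in this chapter, and every name in it is hypothetical; it simply records channel kinds and a \q{channel sum} that assembles them into a function-like signature.
}
\begin{verbatim}
# Hypothetical sketch: channel kinds and a "channel sum" forming a
# function-like signature (all names and fields are illustrative only).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Channel:
    kind: str                       # e.g. "lambda", "return", "sigma", "exception"
    capacity: Optional[int] = None  # None means no restriction on channel size

@dataclass(frozen=True)
class Signature:
    channels: Tuple[Channel, ...]   # a "channel sum" describing a procedure

def channel_sum(*channels: Channel) -> Signature:
    # Restrictions (e.g. mutually exclusive channels) would be checked here.
    return Signature(tuple(channels))

# The minimal lambda-plus-return pairing: an input channel plus a
# single-slot return channel, i.e. a bare function-like signature.
basic_function = channel_sum(Channel("lambda"), Channel("return", capacity=1))
\end{verbatim}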
{ "alphanum_fraction": 0.7887590086, "avg_line_length": 55.9573770492, "ext": "tex", "hexsha": "9b2fa01817c5d983a179023675430d622967e733", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "ScignScape-RZ/ntxh", "max_forks_repo_path": "elsev/NathanielChristen-proofed-gen/section3.ngml.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "ScignScape-RZ/ntxh", "max_issues_repo_path": "elsev/NathanielChristen-proofed-gen/section3.ngml.tex", "max_line_length": 140, "max_stars_count": null, "max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "ScignScape-RZ/ntxh", "max_stars_repo_path": "elsev/NathanielChristen-proofed-gen/section3.ngml.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 16233, "size": 68268 }
% Included from both -slides and -handout versions. \documentclass[pdftex]{beamer} % used to trigger beamer mode in Emacs, % normally commented out. \usetheme{metropolis} \usepackage[english]{babel} \usepackage[latin1]{inputenc} \usepackage{graphicx} \usepackage{times} \usepackage[T1]{fontenc} \usepackage{fancyvrb} \usepackage{listings} \begin{document} \lstset{language=C, escapeinside={(*@}{@*)}, numbers=left, basicstyle=\tiny, showspaces=false, showtabs=false} \title{Introduction to Operating Systems} \subtitle{Through tracing, analysis, and experimentation} %\institute{University of Cambridge} \author{George V. Neville-Neil} %\author{Dr Robert N. M. Watson} \date{1 August 2016} \begin{frame} \titlepage \end{frame} \section{Communication} \label{sec:communication} \section{Inter-Process Communication} \label{sec:ipc} \begin{frame} \frametitle{Goals of IPC} \begin{itemize} \item Data Sharing \item Signaling events \item Control of multiple processes \end{itemize} \end{frame} \begin{frame} \frametitle{Mechanisms} \begin{itemize} \item Shared files \item Semaphores and Mutexes \item Signals \item Sockets \end{itemize} \end{frame} \begin{frame} \frametitle{Relationship to Networking} \begin{itemize} \item Extension of mechanisms across machines \item Everything is a byte stream \item No record boundaries \item File like API \end{itemize} \end{frame} \begin{frame} \frametitle{Signals} \begin{itemize} \item Based on hardware interrupt model \item Not useful for data transfer \item Catch and process \end{itemize} \end{frame} \begin{frame} \frametitle{Signal Handling} \begin{itemize} \item Source raises a signal \item destination catches the signal \item Uncaught signals cause a program to exit \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Available Signals} \resizebox{1.0\textwidth}{!}{% \begin{tabular}{l|l|l|l} Name & Meaning & Name & Meaning \\ \hline SIGHUP & line hangup & SIGURG & urgent condition present on socket \\ SIGINT & interrupt program & SIGSTOP & stop (cannot be caught or ignored) \\ SIGQUIT & quit program & SIGTSTP & stop signal generated from keyboard \\ SIGILL & illegal instruction & SIGCONT & continue after stop \\ SIGTRAP & trace trap & SIGCHLD & child status has changed \\ SIGABRT & abort program & SIGTTIN & background read attempted from control terminal \\ SIGEMT & emulate instruction executed & SIGTTOU & background write attempted to control terminal \\ SIGFPE & floating-point exception & SIGIO & I/O is possible on a descriptor \\ SIGKILL & kill program & SIGXCPU & cpu time limit exceeded \\ SIGBUS & bus error & SIGXFSZ & file size limit exceeded \\ SIGSEGV & segmentation violation & SIGVTALRM & virtual time alarm \\ SIGSYS & non-existent system call invoked & SIGPROF & profiling timer alarm \\ SIGPIPE & write on a pipe with no reader & SIGWINCH & Window size change \\ SIGALRM & eal-time timer expired & SIGINFO & status request from keyboard \\ SIGTERM & software termination signal & SIGUSR1 & User defined signal 1 \\ & & SIGUSR2 & User defined signal 2 \\ & & SIGTHR & thread interrupt \\ & & SIGLIBRT & real-time library interrupt \end{tabular} } \end{frame} \begin{frame} \frametitle{Tracing Signals} \end{frame} \begin{frame}[fragile] \frametitle{Pipes} \begin{itemize} \item Earliest bulk data IPC \item Key innovation of UNIX systems \item Depends on file descriptors \begin{itemize} \item \emph{STDIN}, \emph{STDOUT}, \emph{STDERR} \end{itemize} \end{itemize} \end{frame} \begin{frame} \frametitle{Pipe Demonstration} \end{frame} \section{Internetworked 
Communication} \label{sec:internet} \begin{frame} \frametitle{Networking and FreeBSD} \begin{itemize} \item Everyone's TCP/IP Stack \item IPv4, IPv6, UDP, TCP, SCTP \item Various drivers \item Multiple firewalls \end{itemize} \end{frame} \begin{frame} \frametitle{Networking: The ISO Model} \begin{itemize} \item Canonical description of network protocols \item Each protocols are layered \item Seven layers in all \item Beware Van Jacobsen's warning! \end{itemize} \end{frame} \begin{frame} \frametitle{Networking and Layering} \centering \includegraphics[width=0.4\textwidth]{../../figures/ISO-layers.pdf} \end{frame} \begin{frame} \frametitle{Networking and Layering} \centering \includegraphics[width=0.7\textwidth]{../../figures/ISO-layers-mapped.pdf} \end{frame} \begin{frame} \frametitle{Networking and Layering} \centering \includegraphics[width=0.8\textwidth]{../../figures/ISO-layers-http.pdf} \end{frame} \begin{frame} \frametitle{The User Program View} \begin{itemize} \item User programs use sockets \item Network programs follow UNIX model \item Flexible interfaces for different protocols \end{itemize} \end{frame} \begin{frame} \frametitle{Sockets} \begin{itemize} \item Main programmer interface to networking \item Generic API \item Attempts to support read/write semantics \end{itemize} \end{frame} \begin{frame} \frametitle{Socket System Calls} \begin{description} \item [socket] Returns a file descriptor \item [connect] Connect to a remote program \item [bind] Bind a socket to a port \item [listen] Listen for connections \item [accept] Returns a new file descriptor \end{description} \end{frame} \begin{frame} \frametitle{Transferring Data on Sockets} \begin{description} \item [read] Just like a file \item [write] Just like a file \item [recv] Receive a single message \item [send] Send a single message \item [recvmsg] Receive a message with meta-data \item [sendmsg] Send a message with meta-data \end{description} \end{frame} \begin{frame} \frametitle{Network Stack Overview} \centering \includegraphics[width=0.8\textwidth]{../../figures/network-in-out.pdf} \end{frame} \begin{frame}[fragile] \frametitle{UDP} \begin{itemize} \item Simplest transport protocol \item No states to maintain \item Data is sent immediately \item Supports multicast \item Only probes are \verb+send+ and \verb+receive+ \end{itemize} \end{frame} \begin{frame} \frametitle{TCP} \begin{itemize} \item Transmission Control Protocol \item Stream based \item In order delivery \item Maintains the illusion of a byte stream \end{itemize} \end{frame} \begin{frame} \frametitle{Three Way Handshake} \begin{itemize} \item Initiating a connection between two nodes \end{itemize} \begin{enumerate} \item Start a connection with a Synchronize (SYN) packet. \item Acknowledge the first SYN and initiate a full connection (SYN/ACK) \item Acknowledge the second SYN. \end{enumerate} \end{frame} \begin{frame}[fragile] \frametitle{Starting a Connection} \centering \includegraphics[width=0.9\textwidth]{../../figures/tcp-three-way.pdf} \end{frame} \begin{frame} \frametitle{TCP States} \begin{description}[labelwidth=\widthof{SYN RECEIVED}] \item[CLOSED] \item[SYN SENT] Client initiated a connection. \item[SYN RECEIVED] Server received initiation from client. \item[ESTABLISHED] Client and server can communicate. 
\item[FIN WAIT 1] \item[FIN WAIT 2] \item[TIME WAIT] \item[CLOSE WAIT] \item[LAST ACK] Awaiting client's final acknowledgment \end{description} \end{frame} \begin{frame} \frametitle{TCP Data Flow} \begin{itemize} \item Sequence Numbers \item Acknowledgements \item The sliding window \item Congestion Control \end{itemize} \end{frame} \begin{frame} \frametitle{The Sliding Window} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.9\textwidth]{../../figures/sliding-window-1.pdf} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.9\textwidth]{../../figures/sliding-window-2.pdf} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.9\textwidth]{../../figures/sliding-window-3.pdf} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.725\textwidth]{../../figures/sliding-window-4.pdf} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.725\textwidth]{../../figures/sliding-window-5.pdf} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.725\textwidth]{../../figures/sliding-window-6.pdf} \end{frame} \begin{frame}[fragile] \frametitle{The Sliding Window} \centering \includegraphics[width=0.375\textwidth]{../../figures/sliding-window-7.pdf} \end{frame} \begin{frame} \frametitle{Four Way Close} \begin{itemize} \item Closing a connection between two nodes \item Each node must close its side of the connection \item More complicated than opening a connection. \end{itemize} \begin{enumerate} \item Node A sends a Finalize (FIN) packet \item Node B acknowledges the FIN packet. \item Node B sends a Finalize (FIN) packet \item Node A acknowledges the FIN packet. \end{enumerate} \end{frame} \begin{frame}[fragile] \frametitle{Closing a Connection} \centering \includegraphics[width=0.9\textwidth]{../../figures/tcp-four-way-close.pdf} \end{frame} \begin{frame} \frametitle{TCP States} \begin{description}[labelwidth=\widthof{SYN RECEIVED}] \item[FIN WAIT 1] \item[FIN WAIT 2] \item[TIME WAIT] \item[CLOSE WAIT] \item[LAST ACK] Awaiting client's final acknowledgment \end{description} \end{frame} \begin{frame}[fragile] \frametitle{TCP State Machine} \centering \includegraphics[width=0.6\textwidth]{../../figures/tcp-timeline.pdf} \end{frame} \begin{frame} \frametitle{DTrace and Networking Walkthrough} \end{frame} \begin{frame} \frametitle{Communication Review} \begin{enumerate} \item [IPC] Signals \end{enumerate} \end{frame} \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: "lecture3-communication" %%% End:
{ "alphanum_fraction": 0.7255579696, "avg_line_length": 26.165374677, "ext": "tex", "hexsha": "37bd5673db9bc5baf76da7e4376629e148607bf4", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-12-25T13:41:08.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-25T13:41:08.000Z", "max_forks_repo_head_hexsha": "b7f40a0ffd18f2be31603b12d1079c9ea1043734", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "admdev8/course", "max_forks_repo_path": "undergraduate/lectures/lecture3-communication.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b7f40a0ffd18f2be31603b12d1079c9ea1043734", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "admdev8/course", "max_issues_repo_path": "undergraduate/lectures/lecture3-communication.tex", "max_line_length": 104, "max_stars_count": null, "max_stars_repo_head_hexsha": "b7f40a0ffd18f2be31603b12d1079c9ea1043734", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "admdev8/course", "max_stars_repo_path": "undergraduate/lectures/lecture3-communication.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2994, "size": 10126 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[citestyle=ieee,backend=biber,sorting=ynt]{biblatex}
\addbibresource{bibliography.bib}
\usepackage{hyperref}
\hypersetup{colorlinks=true,allcolors=blue}
\usepackage{appendix}
\usepackage{changepage}
\usepackage{enumitem}
\usepackage{graphicx}
\usepackage{tabularx}
\usepackage{xcolor,colortbl}
% \newcommand{\txvm}{\textsc{t}x\textsc{vm}}
\newcommand{\txvm}{TxVM}
\definecolor{smallint}{rgb}{1.0,1.0,0.9}
\definecolor{int}{rgb}{1.0,1.0,0.8}
\definecolor{stack}{rgb}{1.0,0.8,0.8}
\definecolor{value}{rgb}{1.0,0.8,1.0}
\definecolor{crypto}{rgb}{0.8,1.0,0.8}
\definecolor{tx}{rgb}{0.8,0.8,1.0}
\definecolor{flow}{RGB}{186,225,255}
\definecolor{ext}{rgb}{0.8,0.8,0.8}
\definecolor{data}{rgb}{1.0,0.9,0.8}
\definecolor{push}{RGB}{255,240,225}
\newenvironment{example}{
\medskip\begin{adjustwidth}{.25in}{}\footnotesize
}{
\normalsize\end{adjustwidth}
}
\newenvironment{commentary}{\begin{quote}\itshape}{\normalfont\end{quote}}
\title{\txvm{} \\ \large A New Design for Blockchain Transactions}
\date{March 2018}
\begin{document}
\author{
\normalsize Bob Glickstein, Cathie Yun, Dan Robinson, Keith Rarick, Oleg Andreev \\
\texttt{\normalsize \{bobg, cathie, dan, kr, oleg\}@chain.com} \\
\\
Chain
}
\maketitle
\begin{abstract}
We present a new design for blockchain transactions called \txvm{}, the transaction virtual machine. \txvm{} seeks to achieve the expressiveness and flexibility of an imperative contract model such as Ethereum's while maintaining the efficiency, safety, and scalability of a declarative transaction model such as Bitcoin's. \txvm{} defines a stack machine for manipulating plain data items like strings, integers, and tuples, but also special types: \textit{values}, each with an amount and asset type; and \textit{contracts}, programs that lock up values and other data. Rules governing the handling of these types provide guarantees about integrity and security. Each transaction is a \txvm{} program that is evaluated in isolation from the blockchain state and whose output is a deterministic log of proposed state changes for the blockchain. Transactions can therefore be validated in parallel. Their logs can be applied to the blockchain in linear time. Our implementation of \txvm{} is currently used in production in our hosted ledger service, Sequence. The implementation and specification are available as an open-source project on GitHub.
\end{abstract}
\section{Introduction}
A transaction in a blockchain protocol is a proposal to update the blockchain's global state. Depending on the blockchain, this could mean consuming some value tokens and creating others, or updating the balances in one or more accounts. Participating nodes in the blockchain network reach consensus on whether the transaction is valid and should be applied. If so, the proposed state changes are made.
Different blockchain protocols take different approaches to representing transactions, each with its own strengths and weaknesses.
\subsection{Prior art: Bitcoin}
In Bitcoin \cite{nakamoto} and similar systems, including earlier versions of the Chain Protocol, a transaction is a static data structure with fields that must be interpreted and validated using an ad hoc collection of rules. One or more ``inputs'' identify existing tokens to be redeemed from earlier transactions. Each input includes the data needed to authorize access to those tokens. The accessed value is divided among one or more ``outputs.'' Each describes how to secure its value.
This is typically done by specifying the public key of a payee, usually in the form of a short program for a specialized virtual machine. The program verifies a digital signature against that public key. Some future ``input'' must supply that signature to ``spend'' the output. This model is \textbf{declarative}. The transaction's data structure declares its proposed state changes directly. No user-defined program runs except for the predicates attached to the previous outputs to authorize their use, and there are no effects on the blockchain state other than the ones discoverable by a simple inspection of the transaction's fields. Furthermore, tokens are immutable. Once created, they persist in the same state until consumed. These properties make declarative transactions efficient and secure. Transactions can be validated in parallel before any of their effects have to be applied to the blockchain state.\footnote{One transaction's application to the state may render another transaction inapplicable, as when each tries to spend the same token. For our purposes, this is a separate step from validation, and is considered acceptable as long as such conflicts can be resolved in approximately linear time.} It is easy to avoid transactions with unexpected side-effects. And the global blockchain state does not need to contain much more than a cryptographic hash committing to the contents of each existing token. On the other hand, Bitcoin's transaction model limits the power of its smart contracts. This is partially a result of Bitcoin's restricted virtual machine, which by design is not Turing-equivalent. But mainly it's because Bitcoin's scripts are simple predicates evaluated independently of each other. This makes it unduly difficult to express multiple contracts interacting, even after adding more powerful opcodes, as in earlier versions of the Chain Protocol. We found that modeling more sophisticated flows of value required unwieldy contortions in the best case and were occasionally impossible to do securely. \subsection{Prior art: Ethereum} In Ethereum \cite{wood2014ethereum} and similar systems, value resides in contracts that also embody mutable state and program logic. A transaction is a message sent to a contract, causing its program to execute. This is an \textbf{imperative} model, in which the effects of the transaction on the blockchain are not known until the contract logic finishes running, during which time it may alter its own state as well as send messages to other contracts, which can update their own state. Contract logic may be highly sophisticated, and indeed a wide variety of novel flows of value have been demonstrated using imperative blockchain contracts, from decentralized autonomous organizations to cryptocurrency-based virtual pets. However, interaction with the global blockchain state during execution means that there can be no meaningful optimization by parallelizing validation. Transactions must execute serially, in a deterministic order. Since one contract might alter the state of any other contract, it is easy for execution to have unexpected side-effects, and it can be difficult to reason about a contract's state even during the lifetime of a single transaction. This can be catastrophic, such as in the June~2016 hack of the ``DAO'' contract \cite{dao}, which resulted in the theft of around \$50~million, and the November~2017 Parity bug \cite{parity}, which froze wallets containing around \$150~million. 
\subsection{A combined approach} \txvm{} is the basis for the protocol used in Sequence \cite{sequence}, Chain's blockchain-based hosted ledger service. \txvm{} stands for ``transaction virtual machine.'' With \txvm{} we seek to combine the respective strengths of the declarative and imperative approaches to representing blockchain transactions, while avoiding their weaknesses. It takes advantage of lessons we learned from our own previous design, ChainVM \cite{chainvm}, and from developing Ivy \cite{ivy}, our higher-level smart-contract language, which compiles to ChainVM and also to Bitcoin Script. \txvm{} is designed to be an ideal compilation target for Ivy. A \txvm{} transaction is an \textbf{imperative} program that produces a \textbf{declarative} log of proposed blockchain state changes when executed. Execution happens in isolation from the global blockchain state. Running in isolation means \txvm{} programs cannot have unexpected side effects in other contracts, and that they can be run in parallel. \section{Operation of the virtual machine} \txvm{} defines a stack-based virtual machine to execute transaction programs. Programs are expressed as strings of bytecode. The \txvm{} instruction set includes a \texttt{jumpif} instruction, making it Turing-complete. It also includes operations for manipulating various types of data, introspecting aspects of the VM state, computing cryptographic hashes, verifying signatures, and more. The complete instruction set is shown in figure~\ref{instset}. \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{figure} \def\arraystretch{1.5} \setlength\tabcolsep{5pt} \centering\ttfamily\scriptsize \begin{tabularx}{\textwidth}{lCCCCCCCC} & 00 & 10 & 20 & 30 & 40 & 50 & 60 & 70 \\ 0 & \cellcolor{smallint} 0/false & \cellcolor{smallint} 16 & \cellcolor{int} int & \cellcolor{value} nonce & \cellcolor{flow} verify & \cellcolor{data} eq & \cellcolor{push} 1 byte & \cellcolor{push} 17 bytes \\ 1 & \cellcolor{smallint} 1/true & \cellcolor{smallint} 17 & \cellcolor{int} add & \cellcolor{value} merge & \cellcolor{flow} jumpif & \cellcolor{data} dup & \cellcolor{push} 2 bytes & \cellcolor{push} 18 bytes \\ 2 & \cellcolor{smallint} 2 & \cellcolor{smallint} 18 & \cellcolor{int} neg & \cellcolor{value} split & \cellcolor{flow} exec & \cellcolor{data} drop & \cellcolor{push} 3 bytes & \cellcolor{push} 19 bytes \\ 3 & \cellcolor{smallint} 3 & \cellcolor{smallint} 19 & \cellcolor{int} mul & \cellcolor{value} issue & \cellcolor{flow} call & \cellcolor{data} peek & \cellcolor{push} 4 bytes & \cellcolor{push} 20 bytes \\ 4 & \cellcolor{smallint} 4 & \cellcolor{smallint} 20 & \cellcolor{int} div & \cellcolor{value} retire & \cellcolor{flow} yield & \cellcolor{data} tuple & \cellcolor{push} 5 bytes & \cellcolor{push} 21 bytes \\ 5 & \cellcolor{smallint} 5 & \cellcolor{smallint} 21 & \cellcolor{int} mod & \cellcolor{value} amount & \cellcolor{flow} wrap & \cellcolor{data} untuple & \cellcolor{push} 6 bytes & \cellcolor{push} 22 bytes \\ 6 & \cellcolor{smallint} 6 & \cellcolor{smallint} 22 & \cellcolor{int} gt & \cellcolor{value} assetid & \cellcolor{flow} input & \cellcolor{data} len & \cellcolor{push} 7 bytes & \cellcolor{push} 23 bytes \\ 7 & \cellcolor{smallint} 7 & \cellcolor{smallint} 23 & \cellcolor{int} not & \cellcolor{value} anchor & \cellcolor{flow} output & \cellcolor{data} field & \cellcolor{push} 8 bytes & \cellcolor{push} 24 bytes \\ 8 & \cellcolor{smallint} 8 & \cellcolor{smallint} 24 & \cellcolor{int} and & \cellcolor{crypto} vmhash & \cellcolor{flow} 
contract & \cellcolor{data} encode & \cellcolor{push} 9 bytes & \cellcolor{push} 25 bytes \\ 9 & \cellcolor{smallint} 9 & \cellcolor{smallint} 25 & \cellcolor{int} or & \cellcolor{crypto} sha256 & \cellcolor{flow} seed & \cellcolor{data} cat & \cellcolor{push} 10 bytes & \cellcolor{push} 26 bytes \\ a & \cellcolor{smallint} 10 & \cellcolor{smallint} 26 & \cellcolor{stack} roll & \cellcolor{crypto} sha3 & \cellcolor{flow} self & \cellcolor{data} slice & \cellcolor{push} 11 bytes & \cellcolor{push} 27 bytes \\ b & \cellcolor{smallint} 11 & \cellcolor{smallint} 27 & \cellcolor{stack} bury & \cellcolor{crypto} checksig & \cellcolor{flow} caller & \cellcolor{data} bitnot & \cellcolor{push} 12 bytes & \cellcolor{push} 28 bytes \\ c & \cellcolor{smallint} 12 & \cellcolor{smallint} 28 & \cellcolor{stack} reverse & \cellcolor{tx} log & \cellcolor{flow} cprog. & \cellcolor{data} bitand & \cellcolor{push} 13 bytes & \cellcolor{push} 29 bytes \\ d & \cellcolor{smallint} 13 & \cellcolor{smallint} 29 & \cellcolor{stack} get & \cellcolor{tx} peeklog & \cellcolor{flow} timerange & \cellcolor{data} bitor & \cellcolor{push} 14 bytes & \cellcolor{push} 30 bytes \\ e & \cellcolor{smallint} 14 & \cellcolor{smallint} 30 & \cellcolor{stack} put & \cellcolor{tx} txid & \cellcolor{ext} prv & \cellcolor{data} bitxor & \cellcolor{push} 15 bytes & \cellcolor{push} 31 bytes \\ f & \cellcolor{smallint} 15 & \cellcolor{smallint} 31 & \cellcolor{stack} depth & \cellcolor{tx} finalize & \cellcolor{ext} ext & \cellcolor{push} 0 bytes & \cellcolor{push} 16 bytes & \cellcolor{push} 32 bytes \\ \end{tabularx} \normalfont\medskip \def\arraystretch{1.7} \begin{tabular}{ccccc} \cellcolor{smallint} small ints & \cellcolor{stack} stack ops & \cellcolor{crypto} crypto ops & \cellcolor{flow} control flow & \cellcolor{data} data ops \\ \cellcolor{int} int ops & \cellcolor{value} value ops & \cellcolor{tx} tx ops & \cellcolor{ext} extension & \cellcolor{push} pushdata \\ \end{tabular} \normalsize % \includegraphics[width=\textwidth]{instset.png} \caption{The TxVM instruction set.} \label{instset} \end{figure} Certain instructions cause records to be appended to the \textit{transaction log}, a VM data structure that is the primary output of running a \txvm{} program. It contains the blockchain state changes proposed by the transaction. After the transaction program finishes, the log may be applied to the blockchain. A \textit{contract} is a unit of program execution that contains a program (a string of bytecode) and a stack. While a contract is running, its stack serves as the VM's \textit{current contract stack}, where most stack-based operations take place. A contract may also suspend its execution in a variety of ways, passing control to some other contract. At such times the suspended contract's stack is preserved until it is reinvoked. When control passes from one contract to another, data may be passed between them on the VM's shared \textit{argument stack}. The overall transaction program forms an implicit top-level contract. Stacks may contain plain data items: strings, integers, and tuples. They may also contain contracts, and they may contain \textit{values}, each of which is a specific amount of a specific asset type. Values and asset types are discussed in further detail in the next section. In addition to their bytecode, transaction programs specify an integer \textit{runlimit}. Each instruction costs a nonzero amount to execute, and the total cost of running the program must not exceed the specified runlimit. 
This prevents abuse of the network in a similar way to the role played by Ethereum's ``gas.'' A network can agree to reject transactions with runlimits that are too high. In order to be valid, a transaction program must execute to completion and leave no data on any stack. \subsection{Values} A \txvm{} blockchain is used to track the issuance, ownership, and transfer of values of different types and amounts, e.g. ``5 USD'' or ``3 shares of AAPL.'' A value is a first-class item on the stack. Inside a value object is an amount, an \textit{asset~ID}, and an \textit{anchor}. A new value may be created only by the \texttt{issue} instruction, which populates the asset~ID field with a cryptographic hash computed from the currently running contract---the one containing the \texttt{issue} instruction. An issuance contract thus uniquely determines the asset type it issues. No other contract may issue units of the same asset. The \texttt{issue} instruction populates the anchor field with a hash derived from earlier values in the blockchain, guaranteeing uniqueness and non-replayability. Finally, \texttt{issue} adds a record of the new issuance to the transaction log. Once created, a value may be \texttt{split} into two values with the same asset~ID and sum, and may be \texttt{merge}d with other values with the same asset~ID, in all cases producing values with new anchors. A value with an amount of zero is useful in some cases for its anchor alone. Values may be destroyed with the \texttt{retire} instruction, which also creates a transaction log record. Unlike plain data, values may not be duplicated or dropped from a stack. \subsection{Contracts} A contract contains a program and a stack. It is created with the \texttt{contract} instruction. Its stack is initially empty, and its program is set to the bytecode string that \texttt{contract} takes as an argument. Contracts are invoked with \texttt{call}, which causes the VM's current contract stack to be saved away and replaced by the called contract's stack. The saved stack is restored when control returns to the caller. When a contract reaches the end of its program with an empty stack, it is \textit{complete}. It is removed from the VM and control returns to the caller. It is an error for a contract to reach the end of its program while items remain on its stack. If it is the implicit top-level contract, the argument stack must also be empty. A contract that has not yet completed may suspend its own execution in one of three ways: \begin{itemize} \item It may execute the \texttt{yield} instruction, placing the contract on the argument stack while returning control to the caller. \item It may execute the \texttt{output} instruction, writing an ``output'' record to the transaction log. That record contains a cryptographic hash committing to a snapshot of the contract's state, including the contents of its stack. This \textit{snapshot hash} will be added to the blockchain's global state as an unspent output contract. The contract may be reconstituted in a later transaction, and its execution resumed, with the \texttt{input} instruction. \item It may execute the \texttt{wrap} instruction, which is like \texttt{yield} but makes the suspended contract ``portable.'' Portability of contracts is not discussed here; for details please see the \txvm{} spec. 
\cite{txvm-spec} \end{itemize} In order to reconstitute a contract from the global blockchain state (placed there with \texttt{output}), a program first creates a plain-data depiction of the contract: a tuple that includes the contract's program string and all the items on its stack.\footnote{Since only the snapshot hash is stored in the blockchain state, the user has the responsibility to remember or retrieve sufficient information about the contract to reconstruct it. This may involve monitoring the blockchain for recognizable contract patterns and parsing those, communicating out-of-band with the contract's creator, or other techniques.} The \texttt{input} instruction turns that tuple into a callable contract object while adding an ``input'' record to the transaction log. The input record contains a snapshot hash computed from the tuple. Later, when the log is applied to the blockchain, that snapshot hash is checked against the global state to ensure that the stipulated contract actually exists to be consumed. Like values, contracts may not be duplicated or dropped from a stack. Thus, all contracts created during a transaction must run to completion or be persisted to the global blockchain state with \texttt{output} in order to be cleared from the~VM. It follows too that all values (other than those that are destroyed with \texttt{retire}) must end up in the stack of a contract persisted with \texttt{output}, or else be left in the VM, preventing successful completion. \subsection{The transaction log} The transaction log is the primary result of running a \txvm{} transaction. It is also the source of a transaction's unique~ID, which is computed from a hash of the log's contents. A transaction usually includes one or more signature checks. The message being signed is typically the transaction's~ID, possibly in combination with other data. This creates a chicken-and-egg problem: the transaction is not complete until it includes the necessary signatures, but the signatures require the transaction~ID, which requires running the transaction. To solve this problem, \txvm{} includes the \texttt{finalize} instruction. This freezes the transaction log, prohibiting further changes to it. It also makes the \texttt{txid} instruction available for querying the transaction's~ID. Every transaction must execute \texttt{finalize} exactly once. It is possible to run a transaction program up to its \texttt{finalize} instruction, in order to compute the~ID of that transaction. Signature-checking contracts presumably still remain on the stack at this point. Once the~ID has been computed, it's possible to compute any required signatures. These can now be added to the transaction program as arguments to those contracts, together with the \texttt{call} instructions that will invoke them and clear them from the~VM. To ensure the uniqueness of each transaction and each transaction~ID, the \texttt{finalize} instruction consumes an anchor (a value with an amount of zero) from the stack. \section{A typical transaction} In this section we present a simplified \txvm{} transaction, in which Alice wishes to pay~10 units of some asset to Bob. Alice's transaction inputs two contracts from the blockchain, one containing~5 units and the other containing~7. The transaction combines those values and then resplits them, creating one output of~10 for Bob and a ``change'' output of~2 for Alice. This example uses \txvm{} assembly-language notation, in which certain operations have a simplified depiction. 
For instance, ``pushdata'' instructions are implicit, tuple literals (delimited as \texttt{\{...\}}) abbreviate the steps needed to construct them, and a sequence of assembly-language instructions enclosed in square brackets (\texttt{[...]}) denotes the bytecode string they produce when assembled. Here is the transaction, with some details elided for clarity. \begin{example} \begin{verbatim} {...} input call get get {...} input call get get \end{verbatim} \begin{commentary} Marshal two contracts from the global blockchain state, call them, and move their results---a value and a signature-check contract each---from the argument stack to the current contract stack. \end{commentary} \begin{verbatim} 2 roll merge \end{verbatim} \begin{commentary} Put the two values (a 5-unit value and a 7-unit value) next to each other on the stack and merge them into one 12-unit value. \end{commentary} \begin{verbatim} 10 split <Bob's pubkey> put put [get get ... output] contract call \end{verbatim} \begin{commentary} Split the 12-unit value into one 10-unit value and one 2-unit value. Add Bob's pubkey to the stack. Move it and the 10-unit value to the argument stack. Construct and call a contract whose program consumes the value and pubkey, then {\normalfont\texttt{output}}s itself. \end{commentary} \begin{verbatim} 2 split <Alice's pubkey> put put [get get ... output] contract call \end{verbatim} \begin{commentary} Split the 2-unit value into one 2-unit value and one zero-unit value (which will be used as an anchor by {\normalfont\texttt{finalize}}). Add Alice's pubkey to the stack. Construct and call a contract whose program consumes the value and pubkey, then {\normalfont\texttt{output}}s itself. \end{commentary} \begin{verbatim} finalize \end{verbatim} \begin{commentary} Consume the zero-value anchor and freeze the transaction log. At this point, the two signature-check contracts (for authorizing the {\normalfont\texttt{input}}s above) remain on the stack. \end{commentary} \begin{verbatim} <a signature by Alice of this transaction ID> put call <a signature by Alice of this transaction ID> put call \end{verbatim} \begin{commentary} Supply a signature to each signature-check contract and call it to clear it from the~VM. \end{commentary} \end{example} The next sections take a look at some of the details elided from the example above. \subsection{Checking signatures} Here is a simple signature-checking program. \begin{verbatim} txid <pubkey> get 0 checksig verify \end{verbatim} The steps of this program are: \medskip \begin{tabular}{rp{0.75\textwidth}} \texttt{txid} & Get the transaction's ID and push it on the stack (only possible after \texttt{finalize}); \\ \textit{pubkey} & Push the pubkey (of a value's owner, or an asset's authorized issuer, etc\@.) on the stack; \\ \texttt{get} & Move a data item (the signature) from the argument stack to the current contract stack; \\ \texttt{0} & Push a~0 on the stack (signaling the \texttt{checksig} instruction to use the Ed25519 signature scheme); \\ \texttt{checksig} & Compute the validity of the signature with respect to the transaction~ID and pubkey; \\ \texttt{verify} & Fail execution if \texttt{checksig} did not produce a true value. \\ \end{tabular} \subsection{Unspent output} Here is a simple program for an unspent output contract that already contains a value and a payee's pubkey on its stack. \begin{verbatim} put [txid swap get 0 checksig verify] yield \end{verbatim} The \texttt{put} instruction releases the contract's value to the argument stack. 
The \texttt{yield} instruction, with a signature-checking program as an argument, suspends this contract's execution (with the payee's pubkey still on its stack) and places \emph{it} on the argument stack. Of course, the suspended contract will need to be \texttt{call}ed again to clear it from the~VM. This is a \textit{deferred} signature check, which can run only after \texttt{finalize}, since it uses \texttt{txid}. Note that this version of the signature-checking contract differs slightly from the example presented above. In that example, the pubkey appears literally in the program. In this example, the pubkey is already on the stack and is moved into its proper place (with \texttt{swap}\footnote{Which is \txvm{} assembly-language shorthand for \texttt{1~roll}.}) after the transaction~ID is placed on the stack. Knowing this program, it is possible to flesh out some of the tuple passed to \texttt{input}: \begin{verbatim} {'C', <seed>, [txid swap get 0 checksig verify], <payee pubkey>, {'V', <amount>, <asset ID>, <anchor>} } \end{verbatim} Here, \texttt{C} and \texttt{V} are type codes (for ``contract'' and ``value'' respectively), and ``seed'' is the contract's seed, a unique identifier (not discussed here). The \texttt{input} instruction turns this structure into a contract object, meanwhile computing a snapshot hash for the transaction log that must match the snapshot hash from an earlier \texttt{output} instruction. \subsection{Pay-to-pubkey} Finally, here is the contract used to lock up value with a payee's pubkey. Note how the unspent-output contract is a latter phase of this contract, and the signature check is a latter phase of \emph{that}. \begin{verbatim} get get [put [txid swap get 0 checksig verify] yield] output \end{verbatim} This consumes two arguments from the argument stack, a pubkey and a value. It then \texttt{output}s itself as an unspent-output contract. When next called (after \texttt{input}), it will release the value and defer a signature check against the pubkey. \subsection{Scratching the surface} The example in this section is a simplified transfer of a single type of value from a single sender to a single recipient. In Sequence such transfers are made with slightly more elaborate versions of the contract programs presented here. Those programs include provisions for M-of-N signatures and for attaching user-supplied reference data to payments, among other things. Beyond that, it should be evident that much more is possible. A transaction can involve multiple parties trading multiple asset types simultaneously and atomically. A contract can lock up zero, two, or more values, not just one. An asset's issuance contract can be designed to constrain how units of it may be spent. A discussion of TxVM's full power is beyond the scope of this paper; indeed we are still discovering it ourselves. In the coming months we will be retargeting our compiler for the Ivy high-level smart contract language to TxVM. We expect to show how use cases such as escrowed payments, collateralized loans, second-price auctions, bond coupons, and even decentralized autonomous organizations and cryptocurrency-based virtual pets may be expressed in Ivy and compiled to compact TxVM programs. \section{Further reading} We have a full specification and implementation of \txvm{} available as an open-source project on GitHub, at \href{https://github.com/chain/txvm}{\texttt{github.com/chain/txvm}}. 
One of us (Yun) presented \txvm{} at the Stanford Blockchain Protocol Analysis and Security Engineering (\textsc{bpase}) 2018 conference. \cite{bpase} The talk and slides are available online. \cite{txvm-talk} \cite{txvm-slides} \txvm{} is currently being used in production in our ledger service, Sequence. For more information, visit \href{http://www.chain.com}{\texttt{Chain.com}} and read our blog post, ``Introducing Sequence.'' \cite{sequence} \section{Conclusion} We have presented \txvm{}, a transaction model and virtual-machine design that combines the usability and power of Ethereum-like contracts with the safety, efficiency, and scalability of Bitcoin-like transactions. We designed \txvm{} to serve as the core of our platform for financial applications. We have a full specification and open-source implementation of \txvm{} in Go, currently deployed in production. Beyond continuing to use \txvm{} in production and developing it further, we are interested in applying these ideas to other blockchain protocols. Constructive transaction programs may provide novel ways to build and serialize transactions in Bitcoin-like protocols, while declarative deterministic effect logs may improve the safety and scalability of Ethereum-like platforms. \newpage \printbibliography \end{document}
{ "alphanum_fraction": 0.7675126225, "avg_line_length": 48.8676470588, "ext": "tex", "hexsha": "efc125c05c41be03eda05d0ca347211d518f2df4", "lang": "TeX", "max_forks_count": 34, "max_forks_repo_forks_event_max_datetime": "2022-03-31T22:25:23.000Z", "max_forks_repo_forks_event_min_datetime": "2018-03-22T21:51:02.000Z", "max_forks_repo_head_hexsha": "b9ab531471a06f52b20b13c8e6b1faea12de325e", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "joshterrill/txvm", "max_forks_repo_path": "whitepaper/whitepaper.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "b9ab531471a06f52b20b13c8e6b1faea12de325e", "max_issues_repo_issues_event_max_datetime": "2018-10-25T07:38:24.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-25T07:38:24.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "joshterrill/txvm", "max_issues_repo_path": "whitepaper/whitepaper.tex", "max_line_length": 276, "max_stars_count": 157, "max_stars_repo_head_hexsha": "b9ab531471a06f52b20b13c8e6b1faea12de325e", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "joshterrill/txvm", "max_stars_repo_path": "whitepaper/whitepaper.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-07T14:14:09.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-22T18:51:35.000Z", "num_tokens": 7476, "size": 29907 }
\chapter{***}\label{AppendixB} ***.
{ "alphanum_fraction": 0.5833333333, "avg_line_length": 18, "ext": "tex", "hexsha": "f2904a75e3542d6f13a31d4863b4fe9a853134eb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5bc4a34113cc6cd2d074e146775440adb2faa9f4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "a-mhamdi/Report-LaTeX", "max_forks_repo_path": "append/app-B.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5bc4a34113cc6cd2d074e146775440adb2faa9f4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "a-mhamdi/Report-LaTeX", "max_issues_repo_path": "append/app-B.tex", "max_line_length": 31, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5bc4a34113cc6cd2d074e146775440adb2faa9f4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "a-mhamdi/Report-LaTeX", "max_stars_repo_path": "append/app-B.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-16T12:40:38.000Z", "max_stars_repo_stars_event_min_datetime": "2021-10-16T12:40:38.000Z", "num_tokens": 13, "size": 36 }
\documentclass{pset_template}

\title{Prefix/Postfix/Infix Notation}
\date{January 17, 2019}
\editorOne{Alexander Sun}
\editorTwo{Sanjit Bhat}
\lectureNum{1}
\contestMonth{January}

\begin{document}
\maketitle

\section{Introduction:}
This problem section covers the different notations used to write the operations in an expression. \href{http://interactivepython.org/runestone/static/pythonds/BasicDS/InfixPrefixandPostfixExpressions.html}{This source} gives a good explanation, and real uses, of this seemingly toxic topic.

\subsection{Infix:}
In infix notation the operator lies between the operands. This is the normal notation that we use. Ex: A + B

\subsection{Prefix:}
In prefix notation the operator lies in front of the operands. Ex: + A B

\subsection{Postfix:}
In postfix notation the operator lies after the operands. Ex: A B +

\section{Evaluation:}
Evaluating the different notations can be quite confusing. One method to simplify the process is to insert parentheses around each operation. Remember that each operation applies to exactly 2 operands. Solve the problem by taking it apart in layers and inserting parentheses around the operation in each layer.

\subsection{Identification:}
The first step is to determine whether the expression is in prefix or postfix. One easy way to tell is to check whether the operators lead or trail the operands: if the operators come first it is prefix, and if they come last it is postfix.

\subsection{Prefix:}
-*DA/+BCD = -(*DA)(/(+BC)D) = (D*A)-((B+C)/D)

Multiply D and A, then subtract the quotient of (B + C) and D.

\subsection{Postfix:}
AB*CD/+ = ((AB*)(CD/)+) = (A*B)+(C/D)

Multiply A and B, divide C by D, then add the results.
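For readers who want to check their work, the stack-based procedure described above can be written out directly. The short Python sketch below is purely illustrative and is not needed to solve the exercises; the function name and structure are our own.

\begin{verbatim}
# Illustrative sketch: evaluating a numeric postfix expression with a stack.
def eval_postfix(tokens):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand comes off the stack first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

# e.g. Exercise 1 in the next section:
print(eval_postfix("9 18 6 27 3 / * + / 24 2 12 6 / + / /".split()))
\end{verbatim}

Prefix expressions can be evaluated the same way by scanning the tokens from right to left, with the first value popped serving as the left operand.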
\section{Exercises:}
Not too many challenging problems can be written for this topic; it is pretty straightforward.
\begin{enumerate}
    \item Evaluate: 9 18 6 27 3 / * + / 24 2 12 6 / + / /
    \item Translate into infix: * + A D - + B C E
    \item Find all integer values of Y for which the following prefix expression has a value of zero:\\
    * + Y 4 - 6 Y
    \item Given A=4, B=14, and C=2, evaluate the following prefix expression: \\
    * / - + A B C * A C B
    \item Translate the following infix expression into postfix:\\
    \begin{equation*}
        \frac{(A-\frac{B}{C}+D)^{\frac{1}{2}}}{A+B}
    \end{equation*}
    \item Evaluate the following prefix expression, when A=10, B=2, C=12, and D=2:\\
    \begin{equation*}
        + / \uparrow - A B 2 \uparrow / - C D / A B 3 / + A C B
    \end{equation*}
\end{enumerate}

\subsection{Convert to Prefix and Postfix}
\label{sec:convert}
\begin{enumerate}
    \item Infix Expression: ( AX + ( B * C ) )
    \item Infix Expression: ( ( AX + ( B * CY ) ) / ( D - E ) )
    \item Infix Expression: ( ( A + B ) * ( C + E ) )
    \item Infix Expression: ( AX * ( BX * ( ( ( CY + AY ) + BY ) * CX ) ) )
    \item Infix Expression: ( ( H * ( ( ( ( A + ( ( B + C ) * D ) ) * F ) * G ) * E ) ) + J )
\end{enumerate}

\newpage
\section{Solutions to Section~\ref{sec:convert}}
\begin{enumerate}
    \item Infix Expression: ( AX + ( B * C ) ) \\
    Postfix Expression: AX B C * + \\
    Prefix Expression: + AX * B C
    \item Infix Expression: ( ( AX + ( B * CY ) ) / ( D - E ) ) \\
    Postfix Expression: AX B CY * + D E - / \\
    Prefix Expression: / + AX * B CY - D E
    \item Infix Expression: ( ( A + B ) * ( C + E ) ) \\
    Postfix Expression: A B + C E + * \\
    Prefix Expression: * + A B + C E
    \item Infix Expression: ( AX * ( BX * ( ( ( CY + AY ) + BY ) * CX ) ) ) \\
    Postfix Expression: AX BX CY AY + BY + CX * * * \\
    Prefix Expression: * AX * BX * + + CY AY BY CX
    \item Infix Expression: ( ( H * ( ( ( ( A + ( ( B + C ) * D ) ) * F ) * G ) * E ) ) + J ) \\
    Postfix Expression: H A B C + D * + F * G * E * * J + \\
    Prefix Expression: + * H * * * + A * + B C D F G E J
\end{enumerate}

\end{document}
{ "alphanum_fraction": 0.6589187777, "avg_line_length": 31.9083333333, "ext": "tex", "hexsha": "fd224f641d59d1abb83bfbb0121827cd092b1930", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sanjit-bhat/AB-ACSL", "max_forks_repo_path": "prefix-postfix-infix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sanjit-bhat/AB-ACSL", "max_issues_repo_path": "prefix-postfix-infix.tex", "max_line_length": 321, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ab9bf7e5526cc5863c0173ab518138dada2dc1ef", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sanjit-bhat/AB-ACSL", "max_stars_repo_path": "prefix-postfix-infix.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-12T03:01:29.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-12T03:01:29.000Z", "num_tokens": 1182, "size": 3829 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{book} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Statistical methods for environmental mixtures}, pdfauthor={Andrea Bellavia}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} 
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs,array}
\usepackage{calc} % for calculating minipage widths
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
  \thm@preskip=8pt plus 2pt minus 4pt
  \thm@postskip=\thm@preskip
}
\makeatother
\usepackage{booktabs}
\usepackage{longtable}
\usepackage{array}
\usepackage{multirow}
\usepackage{wrapfig}
\usepackage{float}
\usepackage{colortbl}
\usepackage{pdflscape}
\usepackage{tabu}
\usepackage{threeparttable}
\usepackage{threeparttablex}
\usepackage[normalem]{ulem}
\usepackage{makecell}
\usepackage{xcolor}
\ifluatex
  \usepackage{selnolig}  % disable illegal ligatures
\fi
\usepackage[]{natbib}
\bibliographystyle{apalike}

\title{Statistical methods for environmental mixtures}
\author{Andrea Bellavia}
\date{2021-11-04}

\begin{document}
\maketitle

{
\setcounter{tocdepth}{1}
\tableofcontents
}

\chapter*{Preface}

This document contains an extended version of the material for the winter class in ``Statistical methods for Environmental Mixtures'', which I taught at the Harvard T.H. Chan School of Public Health between 2018 and 2020. The course was designed as a 2-week intensive introductory class, which made it realistically impossible to cover all topics and methodologies related to the continuously expanding field of statistical approaches for high-dimensional exposures, and their application in exposome research. As such, the goal of this document is not to comprehensively summarize the existing literature, but only to present, in teaching format, the selected topics covered in the course. Credit should also go to Dr.~Paige Williams and Prof.~Brent Coull, who gave guest lectures during the course on principal components analysis and Bayesian Kernel Machine Regression: the material on these topics that is discussed here is largely taken from theirs. The statistical software R was used for the practical sessions in the class. Although some introduction to the specific packages, together with examples, is provided here, the reader should refer to the online documentation, linked throughout the document, for detailed descriptions of the software.
\hypertarget{introduction}{%
\chapter{Introduction}\label{introduction}}

\hypertarget{the-exposome}{%
\section{The Exposome}\label{the-exposome}}

A major goal of public health research is the study of the complex mechanisms leading to the development of diseases in humans, and the identification of potentially modifiable risk factors that could be targeted to reduce the burden of diseases in the overall population or in specific subgroups at high risk. A considerable number of potentially modifiable risk factors have been thoroughly studied, including dietary constituents, environmental factors such as chemicals and pollutants, lifestyle, social, and other ecological factors. Nevertheless, throughout their lifetime, humans are exposed to hundreds of these factors, which jointly contribute to the development of a given disease with complex mechanisms that can also involve antagonistic or synergistic interactions. This complex set of exposures is commonly referred to as the ``exposome'' \citep{vermeulen2020exposome}.

\begin{figure}
\centering
\includegraphics{images/exposome.png}
\caption{The exposome (figure from Vermeulen et al.~2020)}
\end{figure}

Even restricting our interest to environmental exposures, a substantial component of the exposome, it is recognized that we are simultaneously exposed to hundreds of chemicals and pollutants, and it has been shown that a given blood or urine sample taken from a random American will contain some concentration of at least 400 different chemicals. A group of 3 or more chemicals/pollutants, simultaneously present in nature or in the human body, is commonly defined as an environmental mixture.

\hypertarget{why-focusing-on-multiple-exposures}{%
\section{Why focus on multiple exposures?}\label{why-focusing-on-multiple-exposures}}

Common approaches that have been used on a daily basis in environmental epidemiology might fail to capture the complexity of exposures in our world. For several years, despite recognizing that individuals are generally exposed to multiple environmental factors, the ``one-at-a-time'' approach has remained the standard practice in most epidemiological research.

To better understand what we mean by the ``one-at-a-time'' approach, and its limitations, let's think of a study where we want to evaluate the effects of parabens - endocrine disrupting chemicals commonly used in the production of personal care products and cosmetics - on diabetes in a population of 1000 individuals. Let's assume that through urine sample analysis we were able to detect concentrations of three common paraben compounds (methylparaben, butylparaben, propylparaben) in most of our individuals. The ``one-at-a-time'' approach would build 3 independent statistical models (these could even be very sophisticated models that account for any level of data complexity), one for each paraben compound, adjusting for potential confounders of the associations but without taking into account the other 2 detected compounds.
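To make this more concrete, here is a minimal sketch of what the ``one-at-a-time'' approach could look like in R (all dataset and variable names are hypothetical and used only for illustration; \texttt{age} and \texttt{bmi} stand in for generic confounders):

\begin{Shaded}
\begin{Highlighting}[]
# One-at-a-time approach: three separate models, one per paraben compound,
# each adjusted for the same confounders but ignoring the other two compounds
fit_mp <- glm(diabetes ~ methylparaben + age + bmi, family = binomial, data = parabens_data)
fit_bp <- glm(diabetes ~ butylparaben + age + bmi, family = binomial, data = parabens_data)
fit_pp <- glm(diabetes ~ propylparaben + age + bmi, family = binomial, data = parabens_data)
\end{Highlighting}
\end{Shaded}

Each of these models looks at a single compound in isolation; none of them evaluates the mixture as a whole, which is exactly the limitation discussed next.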
When this approach is chosen we encounter three main limitations:

\begin{itemize}
\item
  We know that individuals are exposed to multiple factors, and we might want to estimate the joint (also known as cumulative) effects of these chemicals. A ``one-at-a-time'' approach does not allow us to address this question.
\item
  Is there any interaction between the three compounds in predicting diabetes? A ``one-at-a-time'' approach does not allow us to address this question either.
\item
  Last but not least, this approach makes strong assumptions with regard to the causal structure underlying the data. Specifically, we are assuming, very unrealistically, that the association between each compound and the outcome is not confounded by any of the other compounds.
\end{itemize}

To overcome these 3 major limitations we need to evaluate exposure to parabens as a mixture of the three compounds, building a single statistical model that jointly evaluates the three exposures while possibly accounting for co-confounding, interactions, and other specific features of the data. Obtaining such a statistical model is not easy, and things only get more complex if we want to account for a larger mixture of chemicals, or even to incorporate several groups of exposures in an exposome-wide analysis. Over the last decade or so, many researchers have focused their efforts on developing statistical approaches for environmental mixtures, adapting techniques from other fields or developing new methodologies from scratch. The National Institute of Environmental Health Sciences (NIEHS) launched a specific initiative, called Powering Research Through Innovative Methods for Mixtures in Epidemiology (PRIME), to encourage methods development in this direction, and organized workshops and symposia on the topic. An important symposium in 2015 identified several available approaches and discussed the advantages and limitations of each \citep{taylor2016statistical}.

\begin{figure}
\centering
\includegraphics{images/table.png}
\caption{Approaches discussed by NIEHS in 2015 (from Taylor et al.~2016)}
\end{figure}

Five years later the number of available approaches has multiplied, and several of the discussed methodologies have been extended, revised, and presented to the public. The field of environmental epidemiology is gradually moving to a multi-pollutant or multi-chemical framework as a default \citep{dominici2010protecting}, leading the way in exposome research, and more and more papers are published on this topic. The goal of this class is to present and discuss some of these approaches, describing their advantages and limitations and, most importantly, discussing what research question they target and when they should be chosen to evaluate environmental mixtures. While it is impossible to cover all available techniques, we will provide a set of references for alternative methodologies that are not discussed here. Finally, most of the examples and discussion will focus on environmental exposures. It goes without saying that extending these approaches into other fields of exposome research (e.g.~evaluating multiple nutrients, multiple lifestyle factors \dots) is recommended and would provide enormous benefits.
\hypertarget{what-is-your-research-question}{%
\section{What is your research question?}\label{what-is-your-research-question}}

When evaluating a set of environmental factors detected in a given population as an environmental mixture, a critical step is the identification of the research question of interest. The discussion of the different methodologies presented in the aforementioned NIEHS workshop concluded that we do not have an optimal approach, but that each method performed well under a specific research question. Here are some of the most common questions that we may want to address:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Do we have recurrent patterns of exposures?
\end{enumerate}

With several factors at play, it is often of interest to understand whether specific components of the mixture are clustered into smaller subgroups, based on similar characteristics, shared sources, or other features.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
  What is the overall effect of the mixture on a given outcome?
\end{enumerate}

From our previous example, we may be interested in evaluating the overall effects of parabens exposure on the risk of diabetes. We are not really interested in the specific role of each compound but only in the cumulative effect of the several components.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
  Who are the bad actors? What are the individual effects within the mixture?
\end{enumerate}

Let's assume that we have identified a potentially harmful effect of our mixture on the outcome of interest, and therefore we want to reduce the levels of exposures in our population. If question 1 has identified common patterns due to shared sources, we could simply target these sources, disregarding the actual effects of these chemicals. Alternatively, we could try to identify which component of the mixture is responsible for the effect observed in question 2. In our parabens example, if we had observed a positive association we may want to further investigate whether it is methylparaben (MP), propylparaben (PP), butylparaben (BP), or more than one of them, that is driving the association between the mixture and the outcome.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
  Is there any interaction between chemicals in predicting the outcome?
\end{enumerate}

When more than one mixture component contributes to a given mixture-outcome association, it is reasonable to expect that some kind of interaction between them will be present.

In general, one might have one or more research questions in mind, or simply want to evaluate the mixture in an exploratory way. In any case, it is always recommended to explore different techniques and to thoroughly compare and validate results.

\hypertarget{broad-classifications-of-statistical-approaches}{%
\section{Broad classification(s) of statistical approaches}\label{broad-classifications-of-statistical-approaches}}

Over the last few years several papers have reviewed the existing literature on statistical methods for mixtures and provided different criteria for their classification \citep{hamra2018environmental}, \citep{stafoggia2017statistical}.
Simple and relevant classification criteria are the following:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Supervised vs unsupervised procedures
\end{enumerate}

This first distinction refers to whether or not the mixture is evaluated by taking into account its association with a given outcome of interest. We will see in Section 2 that, before evaluating the effects of our exposures on health outcomes, it is important to carefully assess the structure of the mixture, especially when this is composed of several components, investigating its correlation structure and identifying the presence of subgroups or clusters of exposures. To this end, unsupervised techniques directly focus on characterizing the complex mixture of exposures without any reference to a given outcome of interest. Supervised techniques, on the other hand, attempt to account for the complex nature of exposures while investigating a given mixture-outcome association.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
  Data reduction vs variable selection techniques.
\end{enumerate}

The goal of all techniques we will cover is to reduce the complexity of the data to be able to assess mixture-outcome associations, while maintaining as much information as possible. This is broadly done in two ways: by summarizing the original exposures into fewer, easier-to-handle covariates, or by selecting targeted elements of the mixture. We can use the term ``data reduction approaches'' to describe those techniques that reduce the dimension of the mixture by generating new variables (scores, components, indexes \dots). On the other hand, ``variable selection approaches'' are those that select specific elements of the mixture that are directly evaluated with respect to the outcome.

\hypertarget{introduction-to-r-and-the-simulated-data}{%
\section{Introduction to R and the simulated data}\label{introduction-to-r-and-the-simulated-data}}

All methods that we will present can be used in the R statistical software. An introduction to R, for those unfamiliar with the software, can be found here: \url{https://rpubs.com/alecri/intro-epiR}. R is a free statistical software environment that allows you to write your own code and packages, sharing them as open source. For this reason, it is common that any newly developed statistical method will first be implemented in R. As such, several recently developed approaches for environmental mixtures are only available in R. Most R packages are accompanied by online tutorials and vignettes that describe all features of the library and provide illustrative examples and explanations. We refer to those documents for the technical details of the R packages, and only focus here on methods implementation and results interpretation.
The following packages will be used:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{Packages }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"readxl"}\NormalTok{, }\StringTok{"bkmr"}\NormalTok{, }\StringTok{"qgraph"}\NormalTok{, }\StringTok{"gWQS"}\NormalTok{, }\StringTok{"qgcomp"}\NormalTok{, }\StringTok{"corrplot"}\NormalTok{, }\StringTok{"cluster"}\NormalTok{,}\StringTok{"factoextra"}\NormalTok{,}\StringTok{"gridExtra"}\NormalTok{,}\StringTok{"table1"}\NormalTok{,}\StringTok{"glmnet"}\NormalTok{)}
\FunctionTok{lapply}\NormalTok{(Packages, library, }\AttributeTok{character.only =} \ConstantTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

As an illustrative example we will use a simulated dataset that was developed for the 2015 NIEHS workshop previously mentioned and made publicly available. The dataset is available here (under Data Set \#2: \url{https://www.niehs.nih.gov/news/events/pastmtg/2015/statistical/index.cfm}) and a description of the data structure is also provided. Specifically, the data include a mixture of 14 continuous exposures (\(X_1-X_{14}\)), a continuous outcome \(Y\), and 3 additional covariates (\(Z_1-Z_3\)).

\begin{table}
\caption{\label{tab:unnamed-chunk-3}First rows of the dataset}
\centering
\begin{tabular}[t]{r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r}
\hline
Obs & y & x1 & x2 & x3 & x4 & x5 & x6 & x7 & x8 & x9 & x10 & x11 & x12 & x13 & x14 & z1 & z2 & z3\\
\hline
1 & 3.35244 & 0.48719 & -2.81309 & -0.15955 & 0.95293 & -0.83727 & -0.00003 & 0.97400 & 2.13765 & 1.39604 & 3.56099 & 4.26839 & 0.45545 & 0.72929 & 0.57650 & 0.98552 & 8.695 & 0\\
\hline
2 & 3.69033 & 0.82919 & -2.55938 & 2.68266 & 3.77467 & 1.81320 & 1.91995 & 1.18520 & 2.66005 & 0.96977 & 2.71796 & 4.95887 & 0.60921 & 0.52988 & 1.96180 & 3.71546 & 43.606 & 0\\
\hline
3 & 3.57359 & 0.95442 & -1.68660 & 0.90617 & 1.53099 & -0.52228 & 0.66634 & 0.91016 & 2.79356 & 1.77319 & 3.60018 & 6.34345 & 0.52247 & 0.28810 & 1.51987 & -0.26049 & 35.179 & 0\\
\hline
4 & 4.08506 & 0.44262 & -2.32889 & 2.87066 & 3.69266 & 2.23544 & 0.96392 & 0.45412 & 4.38613 & 0.45019 & 3.39090 & 5.23588 & -0.13227 & -0.15786 & 1.29478 & 3.50177 & 53.850 & 0\\
\hline
5 & 4.32196 & 0.90320 & -2.64624 & 1.85611 & 2.66537 & 0.66575 & 1.22047 & 2.13394 & 3.25436 & 1.68486 & 3.35262 & 5.76463 & 0.71263 & 0.86847 & 1.49974 & 2.48495 & 46.692 & 0\\
\hline
6 & 4.48195 & 2.19892 & -2.82971 & 2.77514 & 3.93696 & 1.15633 & 1.15479 & 1.33877 & 1.65140 & 1.21884 & 2.65061 & 5.38122 & 0.46648 & 0.45505 & 1.09930 & 3.25059 & 33.677 & 0\\
\hline
\end{tabular}
\end{table}

Since we know the actual underlying associations, we will be able to evaluate how well each method performs with respect to the several research questions of interest. Specifically, chemical concentrations were generated based on the correlation between log-transformed PCBs, dioxins, and furans from NHANES data. Two clusters of highly correlated covariates were present (\(X_3-X_4-X_5\) and \(X_{12}-X_{13}\)), while low to moderate correlations were simulated between other covariates. \(Z_1\) and \(Z_2\) were simulated based on poverty index and age, both assumed to be confounders of the association. \(Z_3\) was simulated based on gender distribution, and assumed to be an effect modifier.
The outcome was generated with the following functions for male and female, respectively:

\[Z_3=0: E[Y]=3 + 0.05*X_4 + 0.1*X_6 + 0.1*X_{11} + 0.5*X_{12} + 0.1*X_{14} + 0.01*Z_1 + 0.003*Z_2 \]

\[Z_3=1: E[Y]=3 + 0.01*X_1 + 0.05*X_4 + 0.1*X_{11} + 0.1*X_{14} + 0.01*Z_1 + 0.003*Z_2 - 0.32*(Z_3=1) \]

Thus, for \(Z_3=0\) only \(X_4, X_6, X_{11}, X_{12}\) and \(X_{14}\) are positively associated with \(Y\). When \(Z_3=1\), only \(X_1, X_4, X_{11}\) and \(X_{14}\) are associated with \(Y\). Interactions between chemicals were not considered.

\hypertarget{unsupervised-analysis}{%
\chapter{Unsupervised analysis}\label{unsupervised-analysis}}

As introduced in the previous section, the term unsupervised analysis refers to that critical part of the analytic phase where we only focus on the exposures, trying to characterize, explain, and describe the complex environmental mixture of interest. This could even be the ultimate goal of the analysis (as a matter of fact, to respond to common questions such as ``what are the most common exposures in our population?'' or ``can we identify subgroups of exposures that are often found together?'' we do not need to account for the outcome). In other settings, this will be an important step that informs subsequent analytic choices.

Note that here the focus is not on understanding biological mechanisms through which chemicals or pollutants operate in the body. The focus of unsupervised analysis in this context is, instead, a descriptive and epidemiologic one. When we are attempting to identify clusters of exposures without accounting for their relationship with a given outcome, the grouping will be based on aspects such as population distribution and shared sources rather than on similar mechanisms of action.

\hypertarget{pre-processing}{%
\section{Pre-processing}\label{pre-processing}}

Before getting into the actual analysis of the mixture it is important to carefully assess each component independently. Environmental exposures such as chemicals or pollutants, but also indicators of greenness, noise, or temperature, share important characteristics that complicate their statistical evaluation.

\begin{itemize}
\item
  Skewness and variance. Exposures are often heavily skewed to the right, due to the presence of outliers and to the fact that they are strictly non-negative. For this reason, it is usually recommended to log-transform these exposures. Nevertheless, when this transformation is applied, researchers have to decide how to treat any zero values, which do not necessarily represent missing data.
\item
  Centering and standardizing exposures. Mixture components tend to have different and difficult-to-compare measurement scales and variability, even within the same family of exposures. Since these exposures will eventually be evaluated together, centering and standardizing the covariates will allow comparability and a better interpretation of statistical findings.
\item
  Zero values. It is relatively common, when evaluating large mixtures of environmental exposures, to encounter one or more covariates with a considerable number of values equal to 0. How to deal with such zero values will have important consequences for the implementation and interpretation of statistical approaches for mixtures.
  The first question to consider is what these zero values represent: specifically, are they ``real zeros'' (i.e.~the individuals had no exposure to a given chemical), or do they represent non-detected values (i.e.~the individual had a low level of exposure that we were not able to detect)? In the first case, the values will have to be treated as actual zeros, with important implications for the analysis (we will briefly deal with this when talking about zero-inflated covariates in Section 6). In the second case, non-detected values are usually imputed to a predefined value (several approaches are available) and the covariate can be treated as continuous.
\item
  Missing values. Finally, it is important to evaluate the percentage of missing values for each exposure in our mixture. Most techniques that allow evaluating the joint effect of several covariates, including regression models, require a complete-case analysis. As such, an individual with just one missing value in one of the several mixture components will be excluded from the whole analysis. If the proportion of missingness is not too high (10-15\%), multiple imputation techniques can be used, even though the user should be aware that most advanced methodologies might not be fully integrated within a multiple imputation procedure. If the percentage of missingness is too high, there is not much to be done, and we will have to decide whether to give up the covariate (excluding it from the mixture) or reduce the sample size (excluding all individuals with missing values on that component).
\end{itemize}

The dataset we are using in our illustrative example includes simulated covariates for which these pre-processing steps have already been carried out (all values are greater than 0, no missing data are present, and covariates are log-transformed and standardized).

\begin{figure}[H]

{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figure1-1} 

}

\caption{Histogram of first 3 components}\label{fig:figure1}
\end{figure}

To conduct a thorough exploratory analysis of environmental mixtures, especially when several covariates are of interest, we encourage the use of the R package \texttt{rexposome}, fully described \href{https://www.bioconductor.org/packages/release/bioc/vignettes/rexposome/inst/doc/exposome_data_analysis.html\#multivariate-exposome-analysis}{here}.

\hypertarget{correlation-analysis}{%
\section{Correlation analysis}\label{correlation-analysis}}

An essential step when evaluating an environmental mixture is the assessment of the correlation between the mixture components. This preliminary analysis gives a sense of the relationship between exposures, allows a preliminary assessment of exposure patterns and clusters, and provides important information on which methods could be better suited for subsequent modeling.

Given 2 continuous covariates, a simple assessment of their relationship can be obtained with a two-way scatter plot. Here we show a set of three pairwise comparisons, also adding a lowess trend line on top of each scatter plot.

\begin{figure}[H]

{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figure2-1} 

}

\caption{Scatter plots}\label{fig:figure2}
\end{figure}

We see some combinations of covariates being highly correlated (like \(X_3\) and \(X_4\)), while other exposures seem to be completely independent (e.g.~\(X_1\) and \(X_5\)). A correlation coefficient and a correlation test will additionally provide a quantitative measure of these relationships.
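As a reference, a minimal sketch of how one of these scatter plots with a lowess line could be produced in base R is shown below (using the \texttt{data2} object and the exposure names introduced above; the choice of \(X_3\) and \(X_4\) is purely illustrative):

\begin{Shaded}
\begin{Highlighting}[]
# Scatter plot of two mixture components with a lowess trend line on top
plot(data2$x3, data2$x4, xlab = "x3", ylab = "x4")
lines(lowess(data2$x3, data2$x4), col = "red", lwd = 2)
\end{Highlighting}
\end{Shaded}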
The Pearson correlation (\(r\)) measures the linear dependence between two variables and it can only be used when both covariates are normally distributed:

\[r=\frac{\sum(x-m_x)(y-m_y)}{\sqrt{\sum(x-m_x)^2\sum(y-m_y)^2}}\]

where \(m_x\) and \(m_y\) are the means of the two covariates \(x\) and \(y\).

The Spearman correlation (\(\rho\)) computes the correlation between the ranks of the two covariates \(x\) and \(y\):

\[\rho=\frac{\sum(x'-m_{x'})(y'-m_{y'})}{\sqrt{\sum(x'-m_{x'})^2\sum(y'-m_{y'})^2}}\]

where \(x'\) and \(y'\) are the ranks of \(x\) and \(y\), and \(m_{x'}\) and \(m_{y'}\) are their means. This correlation measure is non-parametric and does not require assuming normality for the two evaluated covariates.

Both \(r\) and \(\rho\) are bounded between -1 and 1 (negative and positive correlation). There is no correlation between the covariates when the coefficient is equal to 0. Tests for significance of the correlation coefficient are available for both \(r\) and \(\rho\), testing the null hypothesis of no correlation.

Here we calculate the correlation coefficients, and the corresponding tests, for some pairs of exposures in our mixture:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{r15 }\OtherTok{\textless{}{-}} \FunctionTok{cor.test}\NormalTok{(data2}\SpecialCharTok{$}\NormalTok{x1, data2}\SpecialCharTok{$}\NormalTok{x5, }\AttributeTok{method =} \StringTok{"pearson"}\NormalTok{)}
\NormalTok{r15}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 
##  Pearson's product-moment correlation
## 
## data:  data2$x1 and data2$x5
## t = -0.78615, df = 498, p-value = 0.4322
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  -0.12251881  0.05264665
## sample estimates:
##         cor 
## -0.03520647
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{rho15 }\OtherTok{\textless{}{-}} \FunctionTok{cor.test}\NormalTok{(data2}\SpecialCharTok{$}\NormalTok{x1, data2}\SpecialCharTok{$}\NormalTok{x5, }\AttributeTok{method =} \StringTok{"spearman"}\NormalTok{)}
\NormalTok{rho15}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 
##  Spearman's rank correlation rho
## 
## data:  data2$x1 and data2$x5
## S = 21488934, p-value = 0.4824
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##         rho 
## -0.03147296
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{r1213 }\OtherTok{\textless{}{-}} \FunctionTok{cor.test}\NormalTok{(data2}\SpecialCharTok{$}\NormalTok{x12, data2}\SpecialCharTok{$}\NormalTok{x13, }\AttributeTok{method =} \StringTok{"pearson"}\NormalTok{)}
\NormalTok{r1213}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 
##  Pearson's product-moment correlation
## 
## data:  data2$x12 and data2$x13
## t = 47.687, df = 498, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.8886169 0.9203247
## sample estimates:
##     cor 
## 0.90573
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{rho1213 }\OtherTok{\textless{}{-}} \FunctionTok{cor.test}\NormalTok{(data2}\SpecialCharTok{$}\NormalTok{x12, data2}\SpecialCharTok{$}\NormalTok{x13, }\AttributeTok{method =} \StringTok{"spearman"}\NormalTok{)}
\NormalTok{rho1213}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 
##  Spearman's rank correlation rho
## 
## data:  data2$x12 and data2$x13
## S = 2113532, p-value < 2.2e-16
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##       rho 
## 0.8985501
\end{verbatim}

When evaluating the correlation between several exposures we can create a correlation matrix:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\#Correlation matrix} \NormalTok{cor.matrix }\OtherTok{\textless{}{-}} \FunctionTok{cor}\NormalTok{ (data2[,}\DecValTok{3}\SpecialCharTok{:}\DecValTok{16}\NormalTok{], }\AttributeTok{method =} \StringTok{"spearman"}\NormalTok{)} \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \NormalTok{ cor.matrix, }\AttributeTok{booktabs =} \ConstantTok{TRUE}\NormalTok{,} \AttributeTok{caption =} \StringTok{\textquotesingle{}Correlation matrix\textquotesingle{}} \NormalTok{)} \end{Highlighting} \end{Shaded} \begin{table} \caption{\label{tab:unnamed-chunk-6}Correlation matrix} \centering \begin{tabular}[t]{lrrrrrrrrrrrrrr} \toprule & x1 & x2 & x3 & x4 & x5 & x6 & x7 & x8 & x9 & x10 & x11 & x12 & x13 & x14\\ \midrule x1 & 1.0000000 & 0.3015516 & -0.0443238 & -0.0277916 & -0.0314730 & 0.0355735 & 0.1636032 & -0.0453658 & 0.1525454 & -0.1027089 & -0.1102655 & 0.3459623 & 0.3358113 & 0.0099408\\ x2 & 0.3015516 & 1.0000000 & 0.0525986 & 0.0624227 & 0.0846338 & 0.0693562 & 0.1827964 & 0.0299643 & 0.1357383 & -0.0186805 & -0.0251231 & 0.3923731 & 0.3821572 & 0.0482436\\ x3 & -0.0443238 & 0.0525986 & 1.0000000 & 0.9874641 & 0.9326813 & 0.5992095 & 0.2848127 & 0.7373648 & 0.1720033 & 0.4028520 & 0.5609233 & -0.1051226 & -0.1411885 & 0.7041847\\ x4 & -0.0277916 & 0.0624227 & 0.9874641 & 1.0000000 & 0.9410680 & 0.6075068 & 0.2893180 & 0.7419298 & 0.1757588 & 0.4087219 & 0.5662499 & -0.0938162 & -0.1289154 & 0.7113510\\ x5 & -0.0314730 & 0.0846338 & 0.9326813 & 0.9410680 & 1.0000000 & 0.5931920 & 0.2946356 & 0.7244964 & 0.1676753 & 0.4165967 & 0.5632797 & -0.1040214 & -0.1430413 & 0.6895982\\ \addlinespace x6 & 0.0355735 & 0.0693562 & 0.5992095 & 0.6075068 & 0.5931920 & 1.0000000 & 0.4614976 & 0.6356481 & 0.3805305 & 0.4460931 & 0.5435489 & 0.0582305 & 0.0282746 & 0.6183391\\ x7 & 0.1636032 & 0.1827964 & 0.2848127 & 0.2893180 & 0.2946356 & 0.4614976 & 1.0000000 & 0.3906414 & 0.6974279 & 0.3637349 & 0.4113740 & 0.4586443 & 0.4831571 & 0.4081206\\ x8 & -0.0453658 & 0.0299643 & 0.7373648 & 0.7419298 & 0.7244964 & 0.6356481 & 0.3906414 & 1.0000000 & 0.3707775 & 0.5494283 & 0.6431383 & 0.0104423 & -0.0333734 & 0.7430785\\ x9 & 0.1525454 & 0.1357383 & 0.1720033 & 0.1757588 & 0.1676753 & 0.3805305 & 0.6974279 & 0.3707775 & 1.0000000 & 0.3177311 & 0.3558096 & 0.5047644 & 0.4954366 & 0.3966118\\ x10 & -0.1027089 & -0.0186805 & 0.4028520 & 0.4087219 & 0.4165967 & 0.4460931 & 0.3637349 & 0.5494283 & 0.3177311 & 1.0000000 & 0.7741935 & 0.0031582 & -0.0451602 & 0.4188343\\ \addlinespace x11 & -0.1102655 & -0.0251231 & 0.5609233 & 0.5662499 & 0.5632797 & 0.5435489 & 0.4113740 & 0.6431383 & 0.3558096 & 0.7741935 & 1.0000000 & -0.0115591 & -0.0697092 & 0.5384582\\ x12 & 0.3459623 & 0.3923731 & -0.1051226 & -0.0938162 & -0.1040214 & 0.0582305 & 0.4586443 & 0.0104423 & 0.5047644 & 0.0031582 & -0.0115591 & 1.0000000 & 0.8985501 & 0.1073926\\ x13 & 0.3358113 & 0.3821572 & -0.1411885 & -0.1289154 & -0.1430413 & 0.0282746 & 0.4831571 & -0.0333734 & 0.4954366 & -0.0451602 & -0.0697092 & 0.8985501 & 1.0000000 & 0.0694033\\ x14 & 0.0099408 & 0.0482436 & 0.7041847 & 0.7113510 & 0.6895982 & 0.6183391 & 0.4081206 & 0.7430785 & 0.3966118 & 0.4188343 & 0.5384582 & 0.1073926 & 0.0694033 & 1.0000000\\ \bottomrule \end{tabular} \end{table} While informative, this is not a really nice way of presenting results, and we prefer to use graphical tools such as the Correlation plot (or, correlogram) This is done by using the package \texttt{corrplot}. 
Note that the command requires the correlation matrix defined above (\texttt{cor.matrix}) as input.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{corrplot}\NormalTok{(cor.matrix,}
         \AttributeTok{method=}\StringTok{"circle"}\NormalTok{,}
         \AttributeTok{order =} \StringTok{"hclust"}\NormalTok{,}
         \AttributeTok{addrect =}\DecValTok{10}\NormalTok{,}
         \AttributeTok{tl.pos =} \StringTok{"l"}\NormalTok{,}
         \AttributeTok{tl.col =} \StringTok{"black"}\NormalTok{,}
         \AttributeTok{sig.level =} \FloatTok{0.05}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]

{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figure-1} 

}

\caption{Correlation Plot}\label{fig:figure}
\end{figure}

\href{https://cran.r-project.org/web/packages/corrplot/vignettes/corrplot-intro.html}{This} link provides a very useful description of the several \texttt{corrplot} options.

The correlation plot in the example provides several important pieces of information: first of all, we see a cluster of highly correlated exposures (\(X_3\), \(X_4\), \(X_5\)), and a second cluster of strongly correlated exposures (\(X_{12}\), \(X_{13}\)). In addition, we observe that low to moderate levels of correlation also exist between most pairs of exposures, and it is not straightforward to identify clearly defined additional subgroups of exposures.

\hypertarget{weighted-correlation-network-analysis}{%
\section{Weighted correlation network analysis}\label{weighted-correlation-network-analysis}}

Network analysis is emerging as a flexible and powerful technique in different fields. In a nutshell, a network refers to a complex structure of variables, called nodes, and the relationships (formally called edges) between these nodes. Correlation networks define such relationships on the basis of their quantitative correlations, and are increasingly being used in biology to analyze high-dimensional data sets. Weighted correlation networks, in particular, preserve the continuous nature of the underlying correlation information without dichotomizing it. While the theory behind network analysis is beyond the scope of this course, and we refer to other publications for further details \citep{langfelder2008wgcna}, \citep{hevey2018network}, it is useful to mention here that these networks can be used in descriptive analyses to graphically display the relationship between exposures in our mixture based on the correlation structure. This can now be obtained with several R packages, including \texttt{qgraph}, documented \href{http://sachaepskamp.com/files/Cookbook.html\#pearson-correlations}{here}.

\begin{figure}[H]

{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figure3-1} 

}

\caption{Weighted correlation network}\label{fig:figure3}
\end{figure}

This network confirms our findings from the correlation plot, but provides a different and possibly better way of representing and visualizing the relationships between components of the mixture.

\hypertarget{principal-component-analysis}{%
\section{Principal component analysis}\label{principal-component-analysis}}

Principal Component Analysis (PCA) is a useful technique for exploratory data analysis, which allows a better visualization of the variability present in a dataset with many variables. This ``better visualization'' is achieved by transforming a set of covariates into a smaller set of principal components. A principal component can be thought of as the direction where there is the most variance or, geometrically speaking, where the data is most spread out.
In practical terms, to derive the first principal component that describes our mixture, we try to find the straight line that best spreads the data out when it is projected along it, thus explaining the most substantial variance in the data. The following figure shows the first principal component in a simple setting with only 3 covariates of interest (so that we could graphically represent it):

\begin{figure}
\centering
\includegraphics{images/pca1.png}
\caption{First principal component in a 3-covariates setting}
\end{figure}

Mathematically speaking, this first component \(t_1\) is calculated as a linear combination of the \(p\) original predictors, \(T=XW_p\), where the weights \(W_p\) maximize the overall explained variability. For math-oriented readers, it turns out that such weights are the eigenvectors of the correlation matrix of the original exposures. Once the first component has been retrieved, we proceed by calculating a second component that maximizes the residual variance. Of interest in our context, the procedure adds a constraint of orthogonality to this second component, that is, it will be uncorrelated with the first one, as presented in the figure.

\begin{figure}
\centering
\includegraphics{images/pca2.png}
\caption{First two principal components in a 3-covariates setting}
\end{figure}

Mathematically, this is obtained by another linear combination where the weights are given by the eigenvector corresponding to the second largest eigenvalue. In this way we can proceed to derive a full set of \(p\) components from our original \(p\) covariates, until all variance has been explained.

In summary, PCA is a set of linear transformations that fits the matrix of exposures into a new coordinate system so that the most variance is explained by the first coordinate, and each subsequent coordinate is orthogonal to the previous ones and explains less variance. In other words, we transform a set of \(p\) correlated variables into a set of \(p\) uncorrelated principal components. PCA is sensitive to unscaled covariates, so it is usually recommended to standardize your matrix of exposures before running a PCA.

\hypertarget{fitting-a-pca-in-r}{%
\subsection{Fitting a PCA in R}\label{fitting-a-pca-in-r}}

There are several options to conduct PCA in R. Here we will use \texttt{prcomp}, but alternative options are available (\texttt{princomp} and \texttt{principal}). PCA is also available in the aforementioned \texttt{rexposome} package. If you want to prepare nice figures for presentations or manuscripts, I also recommend taking a look at the \texttt{factoextra} package to create elegant ggplot2-based visualizations (\href{http://www.sthda.com/english/wiki/factoextra-r-package-easy-multivariate-data-analyses-and-elegant-visualization}{link}).

The \texttt{prcomp(\ )} function produces a basic principal component analysis. The command requires the raw data you want to reduce (the exposure matrix) and will extract the principal components. Here we are also centering and scaling all exposures. Table 2.2 shows the first rows of the newly derived variables (the components).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{fit }\OtherTok{\textless{}{-}} \FunctionTok{prcomp}\NormalTok{(X, }\AttributeTok{center=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{scale=}\ConstantTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{table}
\caption{\label{tab:unnamed-chunk-8}First rows of the components}
\centering
\begin{tabular}[t]{r|r|r|r|r|r|r|r|r|r|r|r|r|r}
\hline
PC1 & PC2 & PC3 & PC4 & PC5 & PC6 & PC7 & PC8 & PC9 & PC10 & PC11 & PC12 & PC13 & PC14\\
\hline
-2.2389015 & 0.0189525 & -1.2859775 & -0.3216035 & -0.2117773 & 0.5065609 & 0.0949531 & 0.0498184 & -0.3528743 & 0.6327501 & -0.9635341 & -0.3863832 & -0.2260102 & 0.0393756\\
\hline
1.1559396 & -0.6968029 & 1.0199832 & -1.2520422 & 0.2019949 & 0.1850336 & 0.6115127 & -0.7637062 & -0.0160088 & -0.4436463 & -0.1977756 & 0.2647719 & 0.1831903 & 0.0473029\\
\hline
0.1629765 & -0.0676542 & -0.9676685 & 1.3357760 & -0.3980497 & -0.0163378 & 0.5089395 & 1.0205435 & -0.6352686 & -0.6295934 & 0.6662812 & 0.4333277 & 0.2718524 & -0.1982041\\
\hline
1.0947459 & -3.8859461 & 1.0072495 & 0.4083807 & -0.2634263 & 0.1651619 & 0.3376678 & 0.3956715 & -0.0564324 & 1.1606182 & 0.0526433 & 0.0768175 & -0.0135220 & -0.0826189\\
\hline
1.4205153 & 1.1283036 & -1.0392885 & -0.6507554 & 0.1587461 & 0.3137072 & -0.3465914 & -0.4683247 & 0.3952183 & 0.1897396 & 0.4224867 & -0.0995627 & 0.1514907 & -0.0933670\\
\hline
0.3557820 & -0.2771229 & 1.0902309 & -0.1366375 & 1.8059563 & 0.3943783 & -0.9437355 & -0.9424548 & -0.5576855 & -0.8672917 & 0.1908570 & 0.0755395 & 0.5675384 & 0.1145596\\
\hline
\end{tabular}
\end{table}

\hypertarget{choosing-the-number-of-components}{%
\subsection{Choosing the number of components}\label{choosing-the-number-of-components}}

One of the most interesting features of PCA is that, while it is possible to calculate \(p\) components from a set of \(p\) covariates, we usually need a smaller number to successfully describe most of the variance of the original matrix of exposures. In practical terms, not only are we reshaping the original set of exposures into uncorrelated principal components, but we are also able to reduce the dimension of the original matrix to a smaller number of variables that describe the mixture. How many components do we actually need?

Before describing the several tools that can guide this decision, it is important to stress that this step involves a considerable degree of subjectivity. Sometimes these tools will lead to the same evident conclusion, but other times it might not be straightforward to identify a clear number of components to describe the original data. In general, the three common tools used to select a number of components include:

\begin{itemize}
\tightlist
\item
  Select components that explain at least 70 to 80\% of the original variance
\item
  Select components corresponding to eigenvalues larger than 1
\item
  Look at the point of inflection (the ``elbow'') of the scree plot
\end{itemize}

Let's take a look at these approaches in our illustrative example.
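As a small complement (a minimal sketch based on the \texttt{fit} object created above), note that the eigenvalues needed for the second criterion can be extracted directly from the \texttt{prcomp} output, since they correspond to the squared standard deviations of the components:

\begin{Shaded}
\begin{Highlighting}[]
# Eigenvalues are the squared standard deviations returned by prcomp()
eigenvalues <- fit$sdev^2
# Proportion and cumulative proportion of variance explained
prop_var <- eigenvalues / sum(eigenvalues)
cum_var <- cumsum(prop_var)
# Number of components with eigenvalue larger than 1
sum(eigenvalues > 1)
\end{Highlighting}
\end{Shaded}

The proportions computed here correspond to the second and third lines of the \texttt{summary()} output shown next.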
These are the results of the PCA that we ran with the previous R command:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{summary}\NormalTok{(fit)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6     PC7
## Standard deviation     2.4627 1.7521 1.12071 0.89784 0.83905 0.72337 0.63861
## Proportion of Variance 0.4332 0.2193 0.08971 0.05758 0.05029 0.03738 0.02913
## Cumulative Proportion  0.4332 0.6525 0.74219 0.79977 0.85006 0.88744 0.91657
##                            PC8    PC9    PC10    PC11    PC12    PC13    PC14
## Standard deviation     0.60268 0.4892 0.46054 0.43573 0.29751 0.25542 0.09904
## Proportion of Variance 0.02594 0.0171 0.01515 0.01356 0.00632 0.00466 0.00070
## Cumulative Proportion  0.94251 0.9596 0.97476 0.98832 0.99464 0.99930 1.00000
\end{verbatim}

The (square roots of the) eigenvalues are reported in the first line, while the second and third lines present, respectively, the proportion of variance explained by each component (note that, as expected, this decreases as we proceed with the estimation) and the cumulative variance explained.

The scree plot is the plot of the descending eigenvalues. Ideally we would like to identify a point of inflection (also known as the ``elbow'' of the curve), signifying that after a certain number of components the proportion of additionally explained variance becomes minimal.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{plot}\NormalTok{(fit,}\AttributeTok{type=}\StringTok{"lines"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}
\centering
\includegraphics{bookdown-demo_files/figure-latex/screeplot-1.pdf}
\caption{\label{fig:screeplot}Scree plot}
\end{figure}

All these techniques seem to indicate that 3 components might successfully describe the original set of 14 exposures.

\hypertarget{getting-sense-of-components-interpretation}{%
\subsection{Getting a sense of components interpretation}\label{getting-sense-of-components-interpretation}}

A PCA involves three steps. Fitting the model is the easiest part, as it only requires a line of code (assuming that the pre-processing has been carefully conducted). The second step, selecting the number of components, requires some level of subjectivity but is also relatively simple in most settings. The third step is usually the most complicated one, as we are now tasked with providing some interpretation of the set of principal components that we have selected.

To get a sense of what principal components represent, we usually look at the loading factors, the correlation coefficients between the derived components and the original covariates. In practical terms, they inform us about how much of the original variance of each covariate is explained by each component.
Here are the loading factors for our example: \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#Correlation matrix} \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \NormalTok{ fit}\SpecialCharTok{$}\NormalTok{rotation, }\AttributeTok{booktabs =} \ConstantTok{TRUE}\NormalTok{,} \AttributeTok{caption =} \StringTok{\textquotesingle{}Loading factors\textquotesingle{}} \NormalTok{)} \end{Highlighting} \end{Shaded} \begin{table} \caption{\label{tab:unnamed-chunk-9}Loading factors} \centering \begin{tabular}[t]{lrrrrrrrrrrrrrr} \toprule & PC1 & PC2 & PC3 & PC4 & PC5 & PC6 & PC7 & PC8 & PC9 & PC10 & PC11 & PC12 & PC13 & PC14\\ \midrule x1 & 0.0017143 & 0.2965204 & 0.3599497 & 0.4250816 & 0.7716378 & 0.0241569 & -0.0616770 & 0.0387385 & 0.0109328 & -0.0093448 & 0.0086599 & -0.0130703 & 0.0023798 & -0.0065664\\ x2 & 0.0348101 & 0.2789407 & 0.4678814 & 0.4540801 & -0.5779524 & -0.3462568 & -0.0266760 & 0.1950322 & 0.0333408 & -0.0376913 & 0.0035594 & -0.0102783 & 0.0235316 & 0.0015467\\ x3 & 0.3581046 & -0.1358796 & 0.2730333 & -0.1308238 & -0.0277497 & 0.1564570 & -0.2131188 & -0.0881940 & -0.1294119 & -0.0178195 & -0.0636970 & -0.0951566 & 0.4535153 & -0.6688317\\ x4 & 0.3603185 & -0.1307061 & 0.2778158 & -0.1220231 & -0.0224492 & 0.1593879 & -0.1971830 & -0.1012582 & -0.1325533 & -0.0225153 & -0.0513707 & -0.0756590 & 0.3346375 & 0.7399659\\ x5 & 0.3538227 & -0.1278570 & 0.2817634 & -0.0994146 & -0.0387517 & 0.1404008 & -0.2198003 & -0.1002304 & -0.1072366 & -0.0246148 & -0.0823058 & 0.1455071 & -0.8030483 & -0.0683894\\ \addlinespace x6 & 0.3149583 & 0.0061111 & -0.0271741 & -0.0576998 & 0.1244917 & -0.6198890 & 0.4398883 & -0.5028593 & -0.2167525 & -0.0206792 & -0.0551119 & -0.0059773 & -0.0017058 & -0.0090137\\ x7 & 0.2351557 & 0.3255146 & -0.2369090 & -0.1356534 & 0.0298706 & -0.2785907 & -0.5252466 & -0.1919585 & 0.5956024 & 0.0534903 & 0.0214018 & 0.1151769 & 0.0398633 & 0.0078041\\ x8 & 0.3589990 & -0.0482006 & -0.0071116 & -0.0053818 & 0.0322053 & 0.0151705 & 0.2523556 & 0.3158598 & 0.1335415 & 0.7818271 & 0.2730510 & -0.0112164 & -0.0151052 & -0.0010123\\ x9 & 0.1994071 & 0.3538425 & -0.3139263 & -0.1751365 & 0.0764960 & -0.2198462 & -0.2360800 & 0.5070062 & -0.5711764 & -0.0808290 & -0.0680062 & -0.0325586 & -0.0184280 & 0.0060362\\ x10 & 0.2719030 & -0.0246063 & -0.4067448 & 0.5312151 & -0.0862533 & 0.2341755 & 0.0432445 & -0.0808694 & 0.0100615 & 0.1084960 & -0.6275928 & -0.0353979 & 0.0115316 & 0.0036618\\ \addlinespace x11 & 0.3171664 & -0.0545167 & -0.3057558 & 0.3905798 & -0.0645898 & 0.1757389 & -0.0067514 & -0.1002333 & -0.0707991 & -0.3387861 & 0.6962794 & -0.0190559 & -0.0130578 & -0.0108932\\ x12 & 0.0255698 & 0.5179345 & 0.0409402 & -0.1247229 & -0.1192813 & 0.3585736 & 0.2381737 & -0.1875666 & -0.1238166 & 0.0428674 & 0.0193237 & 0.6686867 & 0.1201416 & -0.0054517\\ x13 & 0.0053067 & 0.5259246 & 0.0210257 & -0.1709542 & -0.1152485 & 0.2871454 & 0.1429746 & -0.2310884 & 0.0123912 & 0.0492287 & 0.0319341 & -0.7074520 & -0.1414000 & -0.0070955\\ x14 & 0.3423131 & 0.0204381 & 0.0623947 & -0.1981742 & 0.0732766 & 0.0523398 & 0.4415704 & 0.4250604 & 0.4319188 & -0.4951539 & -0.1539282 & -0.0062058 & -0.0021136 & -0.0004576\\ \bottomrule \end{tabular} \end{table} It is not simple to identify any clear pattern. Loading factors are generally low, and several covariates seem to equally load to more components. However, there is a trick that can be tried out to improve the interpretation of the components, consisting in rotating the axes. 
The most common approach to do that is called ``varimax''. Let's take a look at the rotated loading factors for the first three components (the ones that we have selected) in our example: \begin{Shaded} \begin{Highlighting}[] \NormalTok{rawLoadings\_3}\OtherTok{\textless{}{-}}\NormalTok{ fit}\SpecialCharTok{$}\NormalTok{rotation[,}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{]} \NormalTok{rotatedLoadings\_3 }\OtherTok{\textless{}{-}} \FunctionTok{varimax}\NormalTok{(rawLoadings\_3)}\SpecialCharTok{$}\NormalTok{loadings} \NormalTok{rotatedLoadings\_3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Loadings: ## PC1 PC2 PC3 ## x1 0.162 0.428 ## x2 0.163 0.123 0.506 ## x3 0.452 -0.102 ## x4 0.455 ## x5 0.450 ## x6 0.275 0.105 -0.115 ## x7 0.435 -0.152 ## x8 0.332 -0.131 ## x9 0.473 -0.198 ## x10 0.178 -0.446 ## x11 0.182 0.134 -0.382 ## x12 0.466 0.227 ## x13 0.473 0.218 ## x14 0.332 ## ## PC1 PC2 PC3 ## SS loadings 1.000 1.000 1.000 ## Proportion Var 0.071 0.071 0.071 ## Cumulative Var 0.071 0.143 0.214 \end{verbatim} Interpretation remains a bit tricky and very subjective, but definitely improves. With 3 rotated components we observe covariates groupings that recall what we observed in the network analysis: we have \(X_1, X_2\) with higher loadings on PC3, \(X_7, X_9, X_{12}, X_{13}\) loading on PC2, and all others on PC1 \hypertarget{using-principal-components-in-subsequent-analyses}{% \subsection{Using principal components in subsequent analyses}\label{using-principal-components-in-subsequent-analyses}} We have here described PCA as an unsupervised technique for describing the mixture. Principal components, however, can be used in further analysis, for example including the selected components into a regression model instead of the original exposures. This approach is very appealing in the context of environmental mixtures as it would result into incorporating most of the information of out exposure matrix into a regression models by using uncorrelated covariates, thus overcoming one of the major limitations of using multiple regression in this context (see Section 3). Nevertheless, the validity of this approach is strictly dependent on whether a good interpretation of the components has been determined; in our example we would not conclude that the PCA clearly summarizes exposures into well defined groups, and we would get negligible advantages by including such components into a regression model. The next subsection will present some published papers that applied this technique in environmental epidemiology. Furthermore, if subgroups of exposures are clearly identified from a PCA, this information can be incorporated into subsequent modeling technique such as BKMR or hierarchical modeling. \hypertarget{pca-in-practice}{% \subsection{PCA in practice}\label{pca-in-practice}} Despite several techniques developed ad-hoc for the analysis of environmental mixtures have emerged, PCA remains a very common choice among environmental epidemiologists. Most of the times, the method is used to reduce the dimension of a mixture of correlated exposures into a subset of uncorrelated components that are later included in regression analysis. As a first example, let's consider a paper by \citet{lee2017identification} evalauting the association between pregnancy exposure to 28 contaminants (metals, pesticides, PCBs, phthalates, PFAS, BPA) and socio-economic status in the MIREC study.To summarize the mixture, the Authors conduct a PCA that suggests selecting 11 principal components. 
The following figure presents the loading factors, as included in the paper:

\begin{figure}
\centering
\includegraphics{images/table5.png}
\caption{Loading factors (figure from Lee et al., 2017)}
\end{figure}

The interpretation of such components is not straightforward (the paper does not mention whether a rotation was considered). The first component has higher loadings on PCBs, while the second component has high loadings on DEHP metabolites. All other components have high loadings on specific subsets of exposures, but fail to uniquely identify clusters of exposures within the mixture. For example, to describe exposure to organochlorine pesticides, we find similar loading factors in PC1, PC5, and PC9. Similarly, organophosphate pesticides load almost equally on PC3, PC4, and PC6. As described in the previous paragraphs, this has relevant implications when attempting to evaluate PCA components in a regression model. The following figure presents results from such a regression in the paper:

\begin{figure}
\centering
\includegraphics{images/table6.png}
\caption{Regression (figure from Lee et al., 2017)}
\end{figure}

From this table we might be able to conclude that PCBs are associated with the outcome of interest (as they load on PC1), but it is not easy to draw any conclusion about other sets of exposures, whose variability is captured by multiple components. To conclude, the real information that a PCA model is giving us in this example is that the mixture is very complex and we do not observe clearly defined subgroups of exposures based on the correlation structure. In such a setting, a PCA might not be the best option to evaluate exposure-outcome associations, and other methods should be considered.

A second interesting example can be found in \citet{sanchez2018urinary}, evaluating metals and socio-demographic characteristics in the HEALS study in Bangladesh. Out of a mixture of 15 metals, a rotated PCA identified 6 principal components explaining 81\% of the total variability. Unlike the previous example, such components better identify subgroups of exposures (see figure).

\begin{figure}
\centering
\includegraphics{images/tableaa.png}
\caption{Loading factors (figure from Sanchez et al., 2018)}
\end{figure}

If we look at these loading factors by row, we see that each metal has a high loading factor on one component, and low loadings on all others. For example, arsenic (row 1) is described by PC3, cadmium (row 3) by PC6, and so on down to zinc, described by PC5. In this situation, a regression model with the principal components will have a better interpretation; for example, associations between PC3 and the outcome can be used to retrieve information on the associations of arsenic, molybdenum, and tungsten with the outcome. Nevertheless, it is important to note some critical limitations of this approach, which remain valid even when a perfect interpretation can be provided. Let's think of this third principal component, which describes well the variability of arsenic, molybdenum, and tungsten. A regression coefficient linking PC3 with the outcome would only tell us how the subgroup of these 3 exposures is associated with the outcome, but would not inform us about which of the three is driving the association, whether all three exposures have effects in the same direction, or whether there is any interaction between the three exposures.
Moreover, let's not forget that components are calculated as linear combinations of the exposures, without taking the relationship with the outcome into account. For these reasons, we can conclude that PCA is a very powerful tool to be considered in the preliminary unsupervised assessment of the mixture, as it can inform subsequent analyses. On the other hand, using derived components in regression modeling must be done with caution, and is usually outperformed by most of the supervised approaches that we will describe later.

Finally, it is important to mention that several extensions of the classical PCA have been developed, including a supervised version of the approach. These techniques, however, were developed in other fields and have not gained too much popularity in the context of environmental exposures, where alternative supervised approaches, presented in the following sections, are generally used.

\hypertarget{cluster-analysis}{%
\section{Cluster analysis}\label{cluster-analysis}}

While a principal components analysis can be seen as a way to identify subgroups of exposures (the columns of the mixture matrix) within the mixture based on their correlation structure, another useful exploratory analysis consists of identifying subgroups of individuals (the rows of the data) that share similar exposure profiles. This is commonly done with cluster analysis.

Like PCA, cluster analysis requires complete data and standardized variables. To group individuals, a distance measure must be identified, with several options available, from the standard Euclidean distance to distances based on the correlation structure.

\hypertarget{k-means-clustering}{%
\subsection{K-means clustering}\label{k-means-clustering}}

The most common approach to partition the data into clusters is an unsupervised approach called k-means clustering. This method classifies individuals into \(k\) groups (i.e., clusters), so that individuals within the same cluster are as similar as possible, while individuals from different clusters are as dissimilar as possible. To achieve this, clusters are defined in a way that minimizes the within-cluster variation.

A simple algorithm for k-means clustering proceeds as follows:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Pre-specify \(k\), the number of clusters
\item
  Select \(k\) random individuals as centers for the clusters and define the centroids, vectors of length \(p\) that contain the means of all variables for the observations in the cluster. In our context, the \(p\) variables are the components of our mixture of interest
\item
  Define a distance measure. The standard choice is the squared Euclidean distance \((x_i-\mu_k)^2\), computed for each individual in the study (\(x_i\)) and each cluster centroid (\(\mu_k\))
\item
  Assign each individual to the closest centroid
\item
  For each of the \(k\) clusters update the cluster centroid by calculating the new mean values of all the data points in the cluster
\item
  Iteratively repeat the previous 2 steps until the cluster assignments stop changing or the maximum number of iterations is reached (by default, the \texttt{kmeans} function in R uses 10 as the maximum number of iterations)
\end{enumerate}

This simple algorithm minimizes the total within-cluster variation, defined for each cluster \(C_k\) as the sum of squared Euclidean distances within that cluster: \(W(C_k)=\sum_{x_i\in C_k}(x_i-\mu_k)^2\).

Since k-means clustering requires the user to specify the number of groups, it is important to assess the optimal number of groups.
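Anticipating the elbow method described next, a minimal sketch of how the total within-cluster sum of squares can be computed by hand over a range of values of \(k\) is shown here (assuming \texttt{X} is the standardized exposure matrix used throughout this section):

\begin{Shaded}
\begin{Highlighting}[]
# Total within-cluster sum of squares for k = 1, ..., 10
wss <- sapply(1:10, function(k) kmeans(X, centers = k, nstart = 20)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "Number of clusters k",
     ylab = "Total within-cluster sum of squares")
\end{Highlighting}
\end{Shaded}

The \texttt{fviz\_nbclust} function used later in this section automates exactly this computation.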
A simple technique is to use the elbow method, similar to the one presented for PCA, which consists of plotting the within-cluster sum of squares versus the number of clusters, and locating the bend in the plot.

\hypertarget{k-means-in-r}{%
\subsection{K-means in R}\label{k-means-in-r}}

We can compute k-means in R with the \texttt{kmeans} function (available in base R through the \texttt{stats} package). Here we are selecting 3 groups, also using the \texttt{nstart} option, which will attempt multiple initial configurations (here 20) and report the best one.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{k3 }\OtherTok{\textless{}{-}} \FunctionTok{kmeans}\NormalTok{(X, }\AttributeTok{centers =} \DecValTok{3}\NormalTok{, }\AttributeTok{nstart =} \DecValTok{20}\NormalTok{)}
\NormalTok{k3}
\end{Highlighting}
\end{Shaded}

The \texttt{fviz\_cluster} function (from the \texttt{factoextra} package) provides a nice graphical representation of the groupings. If there are more than two variables, \texttt{fviz\_cluster} will perform a principal component analysis (PCA) and plot the data points according to the first two principal components, which explain the majority of the variance.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{fviz\_cluster}\NormalTok{(k3, }\AttributeTok{data =}\NormalTok{ X)}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]

{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figurecl-1} 

}

\caption{Cluster analysis with 3 groups}\label{fig:figurecl}
\end{figure}

Here we can also explore other choices of \(k\):

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{k2 }\OtherTok{\textless{}{-}} \FunctionTok{kmeans}\NormalTok{(X, }\AttributeTok{centers =} \DecValTok{2}\NormalTok{, }\AttributeTok{nstart =} \DecValTok{20}\NormalTok{)}
\NormalTok{k4 }\OtherTok{\textless{}{-}} \FunctionTok{kmeans}\NormalTok{(X, }\AttributeTok{centers =} \DecValTok{4}\NormalTok{, }\AttributeTok{nstart =} \DecValTok{20}\NormalTok{)}
\NormalTok{k5 }\OtherTok{\textless{}{-}} \FunctionTok{kmeans}\NormalTok{(X, }\AttributeTok{centers =} \DecValTok{5}\NormalTok{, }\AttributeTok{nstart =} \DecValTok{20}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{p1 }\OtherTok{\textless{}{-}} \FunctionTok{fviz\_cluster}\NormalTok{(k2, }\AttributeTok{geom =} \StringTok{"point"}\NormalTok{, }\AttributeTok{data =}\NormalTok{ X) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"k = 2"}\NormalTok{)}
\NormalTok{p2 }\OtherTok{\textless{}{-}} \FunctionTok{fviz\_cluster}\NormalTok{(k3, }\AttributeTok{geom =} \StringTok{"point"}\NormalTok{, }\AttributeTok{data =}\NormalTok{ X) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"k = 3"}\NormalTok{)}
\NormalTok{p3 }\OtherTok{\textless{}{-}} \FunctionTok{fviz\_cluster}\NormalTok{(k4, }\AttributeTok{geom =} \StringTok{"point"}\NormalTok{, }\AttributeTok{data =}\NormalTok{ X) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"k = 4"}\NormalTok{)}
\NormalTok{p4 }\OtherTok{\textless{}{-}} \FunctionTok{fviz\_cluster}\NormalTok{(k5, }\AttributeTok{geom =} \StringTok{"point"}\NormalTok{, }\AttributeTok{data =}\NormalTok{ X) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"k = 5"}\NormalTok{)}

\FunctionTok{grid.arrange}\NormalTok{(p1, p2, p3, p4, }\AttributeTok{nrow =} \DecValTok{2}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]

{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figurecl2-1} 

}

\caption{Cluster analysis with 2-5 groups}\label{fig:figurecl2}
\end{figure}

The elbow plot can tell us how
many groups optimally classify individuals. This figure shows that 2 might be enough. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{set.seed}\NormalTok{(}\DecValTok{123}\NormalTok{)} \FunctionTok{fviz\_nbclust}\NormalTok{(X, kmeans, }\AttributeTok{method =} \StringTok{"wss"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figurecl3-1} } \caption{Elbow plot}\label{fig:figurecl3} \end{figure} \hypertarget{cluster-analysis-to-simplify-descriptive-statistics-presentation}{% \subsection{Cluster analysis to simplify descriptive statistics presentation}\label{cluster-analysis-to-simplify-descriptive-statistics-presentation}} One of the advantages of clustering individuals is to provide a better presentation of descriptive statistics and univariate associations with other covariates in the dataset prior to formal analysis (what is commonly done in table 1 of a scientific manuscript). First, let's define the exposure profiles by evaluating the distribution of original exposures in the clusters: \begin{tabular}[t]{llll} \toprule   & 1 & 2 & Overall\\ \midrule & (N=236) & (N=264) & (N=500)\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x1}}\\ \hspace{1em}Mean (SD) & 1.04 (0.718) & 1.01 (0.684) & 1.02 (0.699)\\ \hspace{1em}Median [Min, Max] & 1.04 [-1.18, 2.49] & 1.02 [-0.936, 2.89] & 1.03 [-1.18, 2.89]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x2}}\\ \hspace{1em}Mean (SD) & -2.05 (0.765) & -2.18 (0.759) & -2.12 (0.764)\\ \hspace{1em}Median [Min, Max] & -2.10 [-4.32, 0.193] & -2.23 [-4.01, -0.291] & -2.14 [-4.32, 0.193]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x3}}\\ \hspace{1em}Mean (SD) & 2.48 (0.885) & 0.291 (0.907) & 1.32 (1.41)\\ \hspace{1em}Median [Min, Max] & 2.36 [0.993, 5.43] & 0.511 [-2.33, 1.83] & 1.33 [-2.33, 5.43]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x4}}\\ \hspace{1em}Mean (SD) & 3.49 (0.867) & 1.33 (0.889) & 2.35 (1.39)\\ \hspace{1em}Median [Min, Max] & 3.39 [1.93, 6.36] & 1.51 [-1.36, 2.92] & 2.32 [-1.36, 6.36]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x5}}\\ \hspace{1em}Mean (SD) & 1.86 (1.03) & -0.645 (1.04) & 0.537 (1.62)\\ \hspace{1em}Median [Min, Max] & 1.81 [-0.311, 4.85] & -0.459 [-4.22, 1.42] & 0.504 [-4.22, 4.85]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x6}}\\ \hspace{1em}Mean (SD) & 1.52 (0.892) & 0.326 (0.815) & 0.891 (1.04)\\ \hspace{1em}Median [Min, Max] & 1.50 [-0.564, 3.76] & 0.356 [-2.12, 2.25] & 0.833 [-2.12, 3.76]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x7}}\\ \hspace{1em}Mean (SD) & 1.51 (0.556) & 1.15 (0.490) & 1.32 (0.552)\\ \hspace{1em}Median [Min, Max] & 1.51 [0.216, 2.94] & 1.18 [-0.356, 2.50] & 1.33 [-0.356, 2.94]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x8}}\\ \hspace{1em}Mean (SD) & 3.39 (0.737) & 2.08 (0.767) & 2.70 (0.999)\\ \hspace{1em}Median [Min, Max] & 3.31 [1.65, 5.92] & 2.10 [-0.268, 4.84] & 2.69 [-0.268, 5.92]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x9}}\\ \hspace{1em}Mean (SD) & 1.45 (0.529) & 1.21 (0.571) & 1.32 (0.564)\\ \hspace{1em}Median [Min, Max] & 1.43 [0.0496, 2.70] & 1.25 [-0.328, 2.98] & 1.34 [-0.328, 2.98]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x10}}\\ \hspace{1em}Mean (SD) & 3.46 (0.690) & 2.86 (0.683) & 3.14 (0.748)\\ \hspace{1em}Median [Min, Max] & 3.42 [1.69, 5.26] & 2.81 [1.07, 4.66] & 3.13 [1.07, 5.26]\\ \addlinespace[0.3em] \multicolumn{4}{l}{\textbf{x11}}\\ \hspace{1em}Mean (SD) & 5.61 (0.638) & 4.81 (0.674) & 5.19 (0.769)\\ 
\hspace{1em}Median [Min, Max] & 5.53 [3.98, 7.80] & 4.80 [2.62, 6.71] & 5.21 [2.62, 7.80]\\
\addlinespace[0.3em]
\multicolumn{4}{l}{\textbf{x12}}\\
\hspace{1em}Mean (SD) & 0.466 (0.337) & 0.493 (0.347) & 0.481 (0.342)\\
\hspace{1em}Median [Min, Max] & 0.443 [-0.429, 1.15] & 0.507 [-0.481, 1.49] & 0.483 [-0.481, 1.49]\\
\addlinespace[0.3em]
\multicolumn{4}{l}{\textbf{x13}}\\
\hspace{1em}Mean (SD) & 0.530 (0.348) & 0.578 (0.348) & 0.555 (0.349)\\
\hspace{1em}Median [Min, Max] & 0.514 [-0.371, 1.42] & 0.570 [-0.355, 1.65] & 0.552 [-0.371, 1.65]\\
\addlinespace[0.3em]
\multicolumn{4}{l}{\textbf{x14}}\\
\hspace{1em}Mean (SD) & 1.79 (0.549) & 0.881 (0.562) & 1.31 (0.719)\\
\hspace{1em}Median [Min, Max] & 1.79 [0.373, 3.55] & 0.879 [-1.29, 2.64] & 1.31 [-1.29, 3.55]\\
\bottomrule
\end{tabular}

We see that individuals in the first cluster have higher exposure levels to most of the included contaminants, so we could define cluster 1 as ``high'' and cluster 2 as ``low'' exposure. Next, we can see the distribution of the outcome and covariates by cluster.

\begin{tabular}[t]{llll}
\toprule
  & 1 & 2 & Overall\\
\midrule
 & (N=236) & (N=264) & (N=500)\\
\addlinespace[0.3em]
\multicolumn{4}{l}{\textbf{Outcome}}\\
\hspace{1em}Mean (SD) & 4.19 (0.619) & 3.64 (0.569) & 3.90 (0.653)\\
\hspace{1em}Median [Min, Max] & 4.17 [2.66, 6.00] & 3.62 [2.25, 5.22] & 3.87 [2.25, 6.00]\\
\addlinespace[0.3em]
\multicolumn{4}{l}{\textbf{Poverty index}}\\
\hspace{1em}Mean (SD) & 2.26 (1.59) & 1.90 (1.63) & 2.07 (1.62)\\
\hspace{1em}Median [Min, Max] & 2.18 [-1.87, 7.62] & 1.87 [-2.47, 5.95] & 2.08 [-2.47, 7.62]\\
\addlinespace[0.3em]
\multicolumn{4}{l}{\textbf{Age}}\\
\hspace{1em}Mean (SD) & 46.4 (18.8) & 14.4 (17.9) & 29.5 (24.3)\\
\hspace{1em}Median [Min, Max] & 45.2 [1.01, 102] & 15.1 [-38.3, 54.3] & 28.6 [-38.3, 102]\\
\bottomrule
\end{tabular}

We see that z1, z2, and z3, as well as the outcome, are higher among individuals in cluster 1, who are characterized by the exposure profile presented in the previous table.

\hypertarget{regression-based-approaches}{%
\chapter{Regression-based approaches}\label{regression-based-approaches}}

The previous section described a set of unsupervised techniques for the analysis of environmental mixtures, used to process the complex data before further analyses and to address well-defined research questions related to the identification of common patterns of exposure or the clustering of individuals based on exposure profiles. In the context of environmental health studies, however, this only represents the first (yet critical) step of the analysis. The ultimate goal of most research in the field is in fact to investigate whether exposure to mixtures of environmental factors is associated with a given health outcome, and possibly whether these associations represent causal effects. Epidemiologists are usually trained to address these questions using regression-based techniques such as generalized linear models, for binary and continuous outcomes, or parametric and semi-parametric regression techniques for survival data, with time-to-event outcomes. Nevertheless, environmental exposures often present complex settings that require handling regression with care. The goal of this section is to present the use of classical regression techniques (i.e.~ordinary least squares (OLS)) in mixtures modeling, their limitations, and to introduce some important extensions of OLS that allow overcoming these shortcomings.
\hypertarget{ols-regression}{%
\section{OLS regression}\label{ols-regression}}

\hypertarget{single-regression-ewas}{%
\subsection{Single regression (EWAS)}\label{single-regression-ewas}}

A simple way to assess the association between a set of \(p\) environmental exposures (\(X_1 - X_p\)) and a given outcome \(Y\) is to build \(p\) different regression models, one for each exposure (the approach that we previously described as ``one-at-the-time''). Each model can be further adjusted for potential confounders of each exposure-outcome association. For example, if \(Y\) were a continuous outcome, we could fit a set of linear regression models such as: \(E[Y|X_1,C]=\beta_0+\beta_1 \cdot X_1 + \beta\cdot C\). The implicit assumption of this modeling procedure is that, for each element of the mixture, the other components do not act as confounders of the exposure-outcome association, as depicted in this DAG:

\includegraphics{images/dag1.png}

When evaluating a set of environmental exposures, this procedure of fitting a set of independent regression models is usually referred to as an environment-wide association study (EWAS, Patel et al.~2010). This approach usually requires correcting for multiple comparisons using either the Bonferroni approach or the false discovery rate (FDR). The following table reports results from fitting independent linear regression models (here without any adjustment for multiple comparisons) in our illustrative example with 14 exposures; only the models for \(X_{12}\) (column (1)) and \(X_{13}\) (column (2)), each adjusted for the three confounders, are displayed:

\begin{tabular}[t]{lcc}
\toprule
 & \multicolumn{2}{c}{Dependent variable: y}\\
 & (1) & (2)\\
\midrule
x12 & 0.294*** (0.169, 0.420) & \\
x13 & & 0.238*** (0.112, 0.364)\\
z1 & -0.010 (-0.036, 0.016) & -0.010 (-0.037, 0.016)\\
z2 & 0.013*** (0.011, 0.015) & 0.013*** (0.011, 0.015)\\
z3 & -0.610*** (-0.694, -0.525) & -0.612*** (-0.698, -0.527)\\
Constant & 3.712*** (3.592, 3.833) & 3.725*** (3.596, 3.854)\\
Observations & 500 & 500\\
\bottomrule
\end{tabular}

Note: *p\textless0.1; **p\textless0.05; ***p\textless0.01

These results seem to indicate that all exposures are independently associated with the outcome (many coefficients fail to reach the conventional threshold of statistical significance, but we will focus on the magnitude and direction of the associations for this illustrative example).

\hypertarget{multiple-regression}{%
\subsection{Multiple regression}\label{multiple-regression}}

Results from independent linear regressions are hampered by the strong assumption that mixture components do not act as confounders of the associations between the other components and the outcome of interest. This assumption, however, is very seldom met in practice. A common situation, for example, is that two or more constituents of the mixture share one or more sources, which usually results in moderate to high levels of correlation between exposures. Using DAGs, we can depict this situation with the following:

\begin{figure}
\centering
\includegraphics{images/dag2.png}
\caption{DAG for 2 exposures}
\end{figure}

In this situation, a statistical model evaluating the association between \(X_1\) and \(Y\) will need to adjust for \(X_2\) to reduce the impact of bias due to residual confounding. In general, when any level of correlation exists between two mixture components, we do expect each of them to act as a confounder of the association between the other exposure and the outcome. This implies that results from independent linear regressions are likely biased due to uncontrolled confounding.
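As a practical aside, here is a minimal sketch of how the set of one-at-the-time (EWAS-style) models discussed in the previous subsection could be fitted in R. It is only illustrative: the data frame name (\texttt{data2}) and the variable names (\texttt{x1}--\texttt{x14}, \texttt{z1}--\texttt{z3}, \texttt{y}) follow the naming used elsewhere in this document, but the exact data structure is an assumption.

\begin{verbatim}
# Illustrative sketch only: one regression per exposure, each adjusted
# for the three confounders (variable names are assumptions)
exposures <- paste0("x", 1:14)
ewas_fits <- lapply(exposures, function(e) {
  lm(reformulate(c(e, "z1", "z2", "z3"), response = "y"), data = data2)
})
# collect the exposure coefficient, 95% CI, and p-value from each model
ewas_res <- t(sapply(ewas_fits, function(fit) {
  c(coef(fit)[2], confint(fit)[2, ], summary(fit)$coefficients[2, 4])
}))
dimnames(ewas_res) <- list(exposures, c("beta", "lower", "upper", "p.value"))
round(ewas_res, 3)
\end{verbatim}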
In our illustrative example, for instance, we know that \(X_{12}\) and \(X_{13}\) are highly correlated; results from independent linear regressions indicated that both exposures are positively associated with the outcome, but we now know that these coefficients are probably biased. Mutually adjusting for the two exposures in the same statistical model is therefore required to account for such confounding and possibly identify whether both exposures are really associated with the outcome, or if the real driver of the association is just one of the two. Note that both situations are realistic: we might have settings where a specific exposure is biologically harmful (say \(X_{12}\)), and the association between the correlated one (\(X_{13}\)) and the outcome was a spurious result due to this high correlation, as well as settings where both exposures are really associated with the outcome (perhaps because it is the common source of exposure that has a direct effect). We need statistical methodologies that are able to detect and distinguish these possible scenarios. The most intuitive way to account for co-confounding between mixture components is to mutually adjust for all exposures in the same regression model: \[E[Y|X,C]=\beta_0+\sum_{i=1}^p\beta_i \cdot X_i + \beta \cdot C\] The following table presents results from a multiple regression that includes the 14 exposures in our example (column (1)), as well as results from the independent models for \(X_{12}\) and \(X_{13}\) (columns (2) and (3)) for comparison. We can compare results from different models using the \texttt{stargazer} package; here we use it to compare the full model and the models for \(X_{12}\) and \(X_{13}\) alone:

\begin{tabular}[t]{lccc}
\toprule
 & \multicolumn{3}{c}{Dependent variable: y}\\
 & (1) & (2) & (3)\\
\midrule
x1 & 0.058* (-0.007, 0.123) & & \\
x2 & 0.018 (-0.043, 0.080) & & \\
x3 & -0.030 (-0.232, 0.173) & & \\
x4 & 0.053 (-0.170, 0.275) & & \\
x5 & 0.004 (-0.080, 0.088) & & \\
x6 & 0.060** (0.001, 0.119) & & \\
x7 & -0.031 (-0.153, 0.091) & & \\
x8 & 0.017 (-0.063, 0.097) & & \\
x9 & 0.025 (-0.090, 0.140) & & \\
x10 & 0.052 (-0.039, 0.144) & & \\
x11 & 0.049 (-0.052, 0.151) & & \\
x12 & 0.222 (-0.071, 0.515) & 0.294*** (0.169, 0.420) & \\
x13 & -0.083 (-0.382, 0.216) & & 0.238*** (0.112, 0.364)\\
x14 & 0.054 (-0.047, 0.154) & & \\
z1 & 0.006 (-0.021, 0.032) & -0.010 (-0.036, 0.016) & -0.010 (-0.037, 0.016)\\
z2 & 0.006*** (0.003, 0.010) & 0.013*** (0.011, 0.015) & 0.013*** (0.011, 0.015)\\
z3 & -0.609*** (-0.696, -0.522) & -0.610*** (-0.694, -0.525) & -0.612*** (-0.698, -0.527)\\
Constant & 3.265*** (2.800, 3.730) & 3.712*** (3.592, 3.833) & 3.725*** (3.596, 3.854)\\
Observations & 500 & 500 & 500\\
\bottomrule
\end{tabular}

Note: *p\textless0.1; **p\textless0.05; ***p\textless0.01

\hypertarget{the-problem-of-multicollinearity}{%
\subsection{The problem of multicollinearity}\label{the-problem-of-multicollinearity}}

Results from the multiple regression are not consistent with those obtained from the independent regression models, especially (and unsurprisingly) for those exposures that showed high levels of correlation. For example, within the exposure cluster \(X_{12}-X_{13}\), the multiple regression model suggests that only \(X_{12}\) is associated with the outcome, while the coefficient of \(X_{13}\) is strongly reduced. Something similar happens for the \(X_3-X_4-X_5\) cluster, where only \(X_4\) remains associated with \(Y\). Can we safely conclude that \(X_{12}\) and \(X_4\) are associated with \(Y\) and that the other results were biased due to uncontrolled confounders?
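As an aside, the three models compared in the table above could be reproduced with a few lines of R; this is only a sketch, assuming the same hypothetical data frame \texttt{data2} used in the previous sketch and the \texttt{stargazer} package mentioned in the text.

\begin{verbatim}
# Illustrative sketch only (variable names are assumptions)
library(stargazer)
mod_full <- lm(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 +
                 x11 + x12 + x13 + x14 + z1 + z2 + z3, data = data2)
mod_x12  <- lm(y ~ x12 + z1 + z2 + z3, data = data2)
mod_x13  <- lm(y ~ x13 + z1 + z2 + z3, data = data2)
stargazer(mod_full, mod_x12, mod_x13, type = "text", ci = TRUE)
\end{verbatim}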
Before addressing this question about our illustrative example, let's take a look at a published paper where we compared the performance of several statistical models in evaluating the association between a mixture of 8 phthalate metabolites and birth weight in a pregnancy cohort (\citet{chiu2018evaluating}). The following table presents results from 8 independent regressions and a multiple regression model. The next figure presents instead the correlation plot of the 8 metabolites.

\begin{longtable}[]{@{}lcrrr@{}}
\toprule
Metabolite & \(\beta\) (one at the time) & p-value & \(\beta\) (mutually adjusted) & p-value \\
\midrule
\endhead
MiBP & -20.0 & 0.51 & -6.8 & 0.84 \\
MBzP & -24.7 & 0.34 & -18.7 & 0.53 \\
MEOHP & -23.7 & 0.33 & 247.1 & 0.11 \\
MnBP & -28.5 & 0.31 & -6.5 & 0.86 \\
MEHHP & -28.2 & 0.24 & -127.4 & 0.36 \\
MECPP & -32.6 & 0.20 & -82.8 & 0.32 \\
MEP & -27.1 & 0.18 & 25.0 & 0.24 \\
MEHP & -36.8 & 0.10 & -59.0 & 0.18 \\
\bottomrule
\end{longtable}

\begin{figure}
\centering
\includegraphics{images/Rplot02.png}
\caption{Correlation plot from Chiu et al.}
\end{figure}

While we were expecting results from the two approaches to be different in the presence of high correlations, the coefficients obtained from the multiple regression leave room for a lot of skepticism. For example, the coefficients for MEOHP and MEHHP, when evaluated together, change respectively from -24 to 247, and from -28 to -127. Are these results reliable? Are we getting any improvement over the biased results that we obtained from the independent linear regressions? The most common problem that arises when using multiple regression to investigate a mixture-outcome association is multicollinearity (or simply collinearity). This occurs when independent variables in a regression model are correlated, with stronger consequences the higher the correlation. More specifically, a high correlation between two predictors simultaneously included in a regression model will decrease the precision of their estimates and increase their standard errors. If the correlation between two covariates (say \(X_1\) and \(X_2\)) is very high, then one is a pretty accurate linear predictor of the other. Collinearity does not influence the overall performance of the model, but has an important impact on the individual predictors. In general (as a rule of thumb), given two predictors \(X_1\) and \(X_2\) that are both associated with the outcome (\(\beta=0.2\) for each when their correlation is equal to 0), the estimates in a linear model will be impacted by \(\rho(X_1, X_2)\) as in this figure:

\includegraphics{images/revparadox.png}

This issue, usually referred to as the reversal paradox (the coefficients of 2 correlated covariates will inflate in opposite directions), is clearly affecting results from the paper presented above (the coefficients of highly correlated phthalate metabolites are either extremely large or extremely small), and possibly also results from the illustrative example (coefficients from correlated variables have opposite signs). Nevertheless, it should be noted that high correlation does not automatically imply that coefficients will be inflated. In another example (\citet{bellavia2019urinary}), for instance, we evaluated a mixture of three highly correlated paraben compounds, yet results from multiple regression were in line with those obtained from other mixture modeling techniques. To quantify the severity of multicollinearity in a regression analysis, one should calculate the Variance Inflation Factor (VIF). The VIF provides a measure of how much the variance of an estimated regression coefficient is increased because of collinearity. For example, if the VIF for a given predictor were 4, then the standard error of that predictor would be 2 times larger than if that predictor had no correlation with the other covariates. As a rule of thumb, VIFs above 4 should set off the alarm, as they indicate that those coefficients are likely affected by the high correlation between them and other covariates in the model.
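In R, VIFs can be obtained, for example, with the \texttt{vif} function of the \texttt{car} package. A minimal sketch, assuming the hypothetical mutually adjusted model \texttt{mod\_full} defined in the earlier sketch:

\begin{verbatim}
# Illustrative sketch only: VIFs for the mutually adjusted model
library(car)
round(vif(mod_full), 2)
\end{verbatim}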
The following table shows the VIFs in our illustrative example, indicating that our results are deeply affected by multicollinearity. In this situation, alternative modeling options should be pursued.

\begin{table}
\caption{\label{tab:unnamed-chunk-15}VIFs}
\centering
\begin{tabular}[t]{l|r}
\hline
  & VIF\\
\hline
x1 & 1.235658\\
\hline
x2 & 1.317951\\
\hline
x3 & 49.479946\\
\hline
x4 & 58.241935\\
\hline
x5 & 11.256382\\
\hline
x6 & 2.271043\\
\hline
x7 & 2.722583\\
\hline
x8 & 3.892965\\
\hline
x9 & 2.553431\\
\hline
x10 & 2.810535\\
\hline
x11 & 3.694404\\
\hline
x12 & 6.085748\\
\hline
x13 & 6.557098\\
\hline
x14 & 3.152092\\
\hline
z1 & 1.139690\\
\hline
z2 & 4.784064\\
\hline
z3 & 1.135437\\
\hline
\end{tabular}
\end{table}

\hypertarget{penalized-regression-approaches}{%
\section{Penalized regression approaches}\label{penalized-regression-approaches}}

An important set of models that can be very useful in the context of environmental mixtures is that of penalized regression approaches. These methods are directly built as extensions of standard OLS by incorporating a penalty in the loss function (hence the name). Their popularity in environmental epidemiology is due to the fact that this penalization procedure tends to decrease the influence of collinearity by targeting the overall variability of the model, thus improving the performance of the regression in the presence of high levels of correlation between the included covariates. As always, however, everything comes at a price, and the improvement in the variance is achieved by introducing some bias (specifically, coefficients will be shrunk towards zero, which is why these approaches are also referred to as shrinkage procedures).

\hypertarget{bias-variance-tradeoff}{%
\subsection{Bias-variance tradeoff}\label{bias-variance-tradeoff}}

The word bias usually triggers epidemiologists' ears, so it is important to understand what we mean by ``introducing some bias'' and how this can be beneficial in our context. To do so, let's begin by refreshing the basic math behind the estimation of a classical multiple regression. In linear regression modeling, we aim at predicting \(n\) observations of the response variable, \(Y\), with a linear combination of \(m\) predictor variables, \(X\), and a normally distributed error term with variance \(\sigma^2\): \[Y=X\beta+\epsilon\] \[\epsilon\sim N(0, \sigma^2)\] We need a rule to estimate the parameters, \(\beta\), from the sample, and a standard choice to do so is ordinary least squares (OLS), which produces estimates \(\hat{\beta}\) by making the sum of squared residuals as small as possible. In other words, we minimize the following loss function: \[L_{OLS}(\hat{\beta})=\sum_{i=1}^n(y_i-x_i'\hat{\beta})^2=\|y-X\hat{\beta}\|^2\] Using matrix notation, the estimate turns out to be: \[\hat{\beta}_{OLS}=(X'X)^{-1}(X'Y)\]
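As a quick numerical check of this closed-form expression, here is a minimal sketch (on simulated data, entirely separate from our illustrative example) showing that the matrix formula reproduces the coefficients returned by \texttt{lm}:

\begin{verbatim}
# Illustrative sketch only: closed-form OLS vs. lm() on simulated data
set.seed(1)
n <- 100
Xmat <- cbind(1, matrix(rnorm(n * 3), n, 3))   # design matrix with intercept
beta <- c(1, 0.5, -0.3, 0.2)
y <- Xmat %*% beta + rnorm(n)
beta_ols <- solve(t(Xmat) %*% Xmat, t(Xmat) %*% y)   # (X'X)^{-1} X'Y
cbind(closed_form = as.vector(beta_ols),
      lm_fit = coef(lm(y ~ Xmat[, -1])))             # the two should match
\end{verbatim}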
To evaluate the performance of an estimator, there are two critical characteristics to be considered: its bias and its variance. The bias of an estimator measures the accuracy of the estimates: \[Bias(\hat{\beta}_{OLS})=E(\hat{\beta}_{OLS})-\beta\] The variance, on the other hand, measures the uncertainty of the estimates: \[Var(\hat{\beta}_{OLS})=\sigma^2(X'X)^{-1}\] Think of the estimator as an Olympic archer:

\includegraphics{images/archery.png}

The best performer will be an archer with low bias and low variance (top-left), who consistently hits the center of the target. An archer with low bias but high variance will be the one who shoots inconsistently around the center (top-right), but we may also have an archer with high bias and low variance, who is extremely precise in consistently shooting at the wrong target (bottom-left). Now, the OLS estimator is an archer who is designed to be unbiased, but in certain situations might have a very high variance, a situation that commonly happens when collinearity is a threat, as documented by the inflation in the variance calculated by the VIF. To assess the overall performance of an estimator by taking into account both bias and variance, one can look at the Mean Squared Error (MSE), defined as the sum of the variance and the squared bias. \[MSE=\frac{1}{n}\sum_{i=1}^n(Y_i-\hat{Y_i})^2=Var(\hat{\beta})+Bias^2(\hat{\beta})\] The basic idea of the bias-variance tradeoff is to introduce some bias in order to minimize the mean squared error in those situations where the performance of OLS is affected by high variance. This is achieved by augmenting the loss function with a penalty. While there are several ways of achieving this, we will here focus on 3 common penalty functions that give rise to Ridge, LASSO, and Elastic Net regression, with the latter being a generalization of the previous two.

\hypertarget{ridge-regression}{%
\subsection{Ridge regression}\label{ridge-regression}}

Ridge regression augments the OLS loss function so as to not only minimize the sum of squared residuals, but also penalize the size of the parameter estimates, shrinking them towards zero: \[L_{ridge}(\hat{\beta})=\sum_{i=1}^n(y_i-x_i'\hat{\beta})^2+\lambda\sum_{j=1}^m\hat{\beta}_j^2=\|y-X\hat{\beta}\|^2+\lambda\|\hat{\beta}\|^2\] Minimizing this loss function provides the following solution for the parameter estimates: \[\hat{\beta}_{ridge}=(X'X+\lambda I)^{-1}(X'Y)\] where \(\lambda\) is the penalty and \(I\) an identity matrix. We can notice that, as \(\lambda\rightarrow 0\), \(\hat{\beta}_{ridge}\rightarrow\hat{\beta}_{OLS}\), while as \(\lambda\rightarrow \infty\), \(\hat{\beta}_{ridge}\rightarrow 0\). In words, setting \(\lambda\) to 0 is like using OLS, while the larger its value, the stronger the penalization. The unique feature of Ridge regression, as compared to other penalization techniques, is that coefficients can be shrunk further and further but will never reach 0. In other words, all covariates will always remain in the model, and Ridge does not provide any form of variable selection. It can be shown that as \(\lambda\) becomes larger, the variance decreases and the bias increases. How much are we willing to trade? There are several approaches that can be used to choose the best value of \(\lambda\):

\begin{itemize}
\tightlist
\item Choose the \(\lambda\) that minimizes the MSE
\item Use a traditional approach based on AIC or BIC criteria, to evaluate the performance of the model in fitting the data.
While software tends to do the calculation automatically, it is important to remember that the degrees of freedom of a penalized model, needed to calculate such indexes, are different from the degrees of freedom of an OLS model with the same number of covariates/individuals.
\item Finally, a recommended procedure is based on cross-validation, focusing more on the predictive performance of the model. More specifically, to avoid a model that perfectly fits our data but generalizes poorly (a situation commonly known as overfitting in the machine learning vocabulary), we tend to select the model corresponding to the largest \(\lambda\) whose cross-validated error is within one standard error of the minimum.
\end{itemize}

Let's turn to our illustrative example to see Ridge regression in practice. Given that both ridge and lasso are special cases of elastic net, we are going to use the \texttt{glmnet} package for all 3 approaches. Alternative approaches are available and could be considered. First, let's define a set of potential values of \(\lambda\) that we will then evaluate; the corresponding chunk of code (not displayed here; a possible version is sketched below) generates the grid of candidate values, in addition to defining the outcome, exposures, and confounders, as well as a seed that will be required for the analyses involving cross-validation. To select the optimal \(\lambda\) we are going to use the 10-fold cross-validation approach, which can be conducted with the \texttt{cv.glmnet} command. Note that with the option \texttt{standardize=TRUE} the exposures will be standardized; this can be set to \texttt{FALSE} if standardization has already been conducted. Also, the option \texttt{alpha=0} has to be chosen to conduct Ridge regression (we will see later that Ridge is an Elastic Net model where the \(\alpha\) parameter is equal to 0).

\begin{Shaded} \begin{Highlighting}[] \NormalTok{ridge\_cv }\OtherTok{\textless{}{-}} \FunctionTok{cv.glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \DecValTok{0}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambdas\_to\_try,} \AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{nfolds =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded}

We can now plot the MSE at different levels of \(\lambda\). While the goal is to find the model that minimizes the MSE (\texttt{lambda.min}), we don't want the model to overfit our data. For this reason we tend to select the model corresponding to the largest \(\lambda\) whose cross-validated MSE is within one standard error of the minimum (\texttt{lambda.1se}).
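As anticipated above, here is a minimal sketch of what the setup chunk defining \texttt{Y}, \texttt{X}, and \texttt{lambdas\_to\_try} might look like; the specific \(\lambda\) grid, the seed, and the column positions of the exposures in the data frame \texttt{data2} are assumptions, not the original code.

\begin{verbatim}
# Illustrative sketch only: outcome, exposure matrix, lambda grid, seed
library(glmnet)
set.seed(123)
Y <- data2$y
X <- as.matrix(data2[, 3:16])    # the 14 exposures (column positions assumed)
lambdas_to_try <- 10^seq(-3, 5, length.out = 100)
\end{verbatim}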
The following figure shows the plot of MSE over levels of \(\lambda\), also indicating these 2 values of interest.

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureridge-1} } \caption{MSE vs lambda for ridge}\label{fig:figureridge} \end{figure}

\begin{Shaded} \begin{Highlighting}[] \CommentTok{\# lowest lambda} \NormalTok{lambda\_cv\_min }\OtherTok{\textless{}{-}}\NormalTok{ ridge\_cv}\SpecialCharTok{$}\NormalTok{lambda.min} \NormalTok{lambda\_cv\_min} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.3199267 \end{verbatim}

\begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Best cross{-}validated lambda} \NormalTok{lambda\_cv }\OtherTok{\textless{}{-}}\NormalTok{ ridge\_cv}\SpecialCharTok{$}\NormalTok{lambda}\FloatTok{.1}\NormalTok{se} \NormalTok{lambda\_cv} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 2.056512 \end{verbatim}

Another useful figure is the trajectory of coefficients at varying levels of \(\lambda\):

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureridge2-1} } \caption{coefficients trajectories for ridge}\label{fig:figureridge2} \end{figure}

The starting values on the left of the figure are the ones from OLS estimation, and then we see how coefficients get shrunk at increasingly higher levels of \(\lambda\). Note that the shrinkage is operated on the entire model, and for this reason individual trajectories are not necessarily forced to decrease (here some coefficients become larger before getting shrunk). Also, from the numbers plotted on top of the figure, indicating the number of coefficients that are still included in the model, we can see that coefficients only tend asymptotically to 0 but are never really removed from the model. Finally, we can summarize the results of our final model for the selected value of lambda:

\begin{Shaded} \begin{Highlighting}[] \NormalTok{model\_cv }\OtherTok{\textless{}{-}} \FunctionTok{glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \DecValTok{0}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambda\_cv, }\AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{)} \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \FunctionTok{summary}\NormalTok{(model\_cv}\SpecialCharTok{$}\NormalTok{beta),} \AttributeTok{caption =} \StringTok{\textquotesingle{}Ridge\textquotesingle{}} \NormalTok{)} \end{Highlighting} \end{Shaded}

\begin{table}
\caption{\label{tab:unnamed-chunk-16}Ridge}
\centering
\begin{tabular}[t]{r|r|r}
\hline
i & j & x\\
\hline
1 & 1 & 0.0147047\\
\hline
2 & 1 & 0.0231112\\
\hline
3 & 1 & 0.0254830\\
\hline
4 & 1 & 0.0263153\\
\hline
5 & 1 & 0.0214385\\
\hline
6 & 1 & 0.0276044\\
\hline
7 & 1 & 0.0299720\\
\hline
8 & 1 & 0.0334388\\
\hline
9 & 1 & 0.0315407\\
\hline
10 & 1 & 0.0277843\\
\hline
11 & 1 & 0.0242869\\
\hline
12 & 1 & 0.0420859\\
\hline
13 & 1 & 0.0348748\\
\hline
14 & 1 & 0.0508906\\
\hline
\end{tabular}
\end{table}

These results provide some information but are of limited use in our context. For example, we know from our VIF analysis that the coefficients for \(X_{12}\) and \(X_{13}\) are affected by high collinearity, but we would like to understand whether a real association exists for both exposures or whether one of the 2 is driving the cluster. To do so, we might prefer to operate some sort of variable selection, constructing a penalty so that non-influential covariates can be set to 0 (and therefore removed). This is what LASSO does.
\hypertarget{lasso}{%
\subsection{LASSO}\label{lasso}}

Lasso, standing for Least Absolute Shrinkage and Selection Operator, also adds a penalty to the loss function of OLS. However, instead of penalizing the sum of the squared coefficients (the L2 penalty used by ridge regression), Lasso penalizes the sum of their absolute values (L1 penalty). As a result, for high values of \(\lambda\), many coefficients are exactly zeroed under lasso, which is never the case in ridge regression (where 0 is only reached in the limit as \(\lambda\rightarrow\infty\)). Specifically, the Lasso estimator can be written as \[L_{lasso}(\hat{\beta})=\sum_{i=1}^n(y_i-x_i'\hat{\beta})^2+\lambda\sum_{j=1}^m|\hat{\beta}_j|\] As before, let's turn to our illustrative example to understand its properties and interpretation. The procedure in R is exactly the same, with the only difference that the parameter \(\alpha\) is set to 1. First, let's identify the optimal value of \(\lambda\) using the cross-validation procedure,

\begin{Shaded} \begin{Highlighting}[] \NormalTok{lasso\_cv }\OtherTok{\textless{}{-}} \FunctionTok{cv.glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \DecValTok{1}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambdas\_to\_try,} \AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{nfolds =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded}

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figurelasso-1} } \caption{MSE vs lambda for lasso}\label{fig:figurelasso} \end{figure}

\begin{Shaded} \begin{Highlighting}[] \CommentTok{\# lowest lambda} \NormalTok{lambda\_cv\_min\_lasso }\OtherTok{\textless{}{-}}\NormalTok{ lasso\_cv}\SpecialCharTok{$}\NormalTok{lambda.min} \NormalTok{lambda\_cv\_min\_lasso} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.01123324 \end{verbatim}

\begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Best cross{-}validated lambda} \NormalTok{lambda\_cv\_lasso }\OtherTok{\textless{}{-}}\NormalTok{ lasso\_cv}\SpecialCharTok{$}\NormalTok{lambda}\FloatTok{.1}\NormalTok{se} \NormalTok{lambda\_cv\_lasso} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.07220809 \end{verbatim}

and then plot the coefficients trajectories.

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figurelasso2-1} } \caption{coefficients trajectories for lasso}\label{fig:figurelasso2} \end{figure}

We see that, differently from what we observed in Ridge regression, coefficients are shrunk to a point where they exactly equal 0, and are therefore excluded from the model. The numbers on top of Figure 3.4 show how many exposures are left in the model at higher levels of \(\lambda\). Finally, let's take a look at the results of the optimal selected model.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{model\_cv\_lasso }\OtherTok{\textless{}{-}} \FunctionTok{glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \DecValTok{1}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambda\_cv\_lasso, }\AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{)} \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \FunctionTok{summary}\NormalTok{(model\_cv\_lasso}\SpecialCharTok{$}\NormalTok{beta),} \AttributeTok{caption =} \StringTok{\textquotesingle{}Lasso\textquotesingle{}} \NormalTok{)} \end{Highlighting} \end{Shaded}

\begin{table}
\caption{\label{tab:unnamed-chunk-17}Lasso}
\centering
\begin{tabular}[t]{r|r|r}
\hline
i & j & x\\
\hline
2 & 1 & 0.0115952\\
\hline
4 & 1 & 0.0930313\\
\hline
6 & 1 & 0.0210787\\
\hline
8 & 1 & 0.0483009\\
\hline
9 & 1 & 0.0100253\\
\hline
12 & 1 & 0.0436241\\
\hline
14 & 1 & 0.1242974\\
\hline
\end{tabular}
\end{table}

The final model retains only seven covariates, while the other seven drop to 0. If we look at our 2 established groups of correlated exposures, \(X_4\) and \(X_{12}\) are selected, while the others are left out. In general, Lasso's results may be very sensitive to weak associations, dropping coefficients that are not actually 0. Lasso can set some coefficients to zero, thus performing variable selection, while ridge regression cannot. The two methods solve multicollinearity differently: in ridge regression, the coefficients of correlated predictors are similar, while in lasso, one of the correlated predictors has a larger coefficient, while the rest are (nearly) zeroed. Lasso tends to do well if there are a small number of significant parameters and the others are close to zero (that is, when only a few predictors actually influence the response). Ridge works well if there are many large parameters of about the same value (that is, when most predictors impact the response).

\hypertarget{elastic-net}{%
\subsection{Elastic net}\label{elastic-net}}

Rather than debating which model is better, we can directly use Elastic Net, which has been designed as a compromise between Lasso and Ridge, attempting to overcome their limitations and performing variable selection in a less rigid way than Lasso. Elastic Net combines the penalties of ridge regression and Lasso, aiming at minimizing the following loss function: \[L_{enet}(\hat{\beta})=\frac{\sum_{i=1}^n(y_i-x_i'\hat{\beta})^2}{2n}+\lambda\left(\frac{1-\alpha}{2}\sum_{j=1}^m\hat{\beta}_j^2+\alpha\sum_{j=1}^m|\hat{\beta}_j|\right)\] where \(\alpha\) is the mixing parameter between ridge (\(\alpha\)=0) and lasso (\(\alpha\)=1). How this loss function is derived, given the ridge and lasso ones, is described in \citet{zou2005regularization}. Procedures to simultaneously tune both \(\alpha\) and \(\lambda\) to retrieve the optimal combination are available and implemented in the R package \texttt{caret}. For simplicity we will here stick with \texttt{glmnet}, which requires pre-defining a value for \(\alpha\). One can of course fit several models and compare them with common indexes such as AIC or BIC. To ensure some variable selection, we may for example choose a value of \(\alpha\) like 0.7, closer to Lasso than to Ridge. Let's fit an Elastic Net model, with \(\alpha=0.7\), in our example.
First, we need to select the optimal value of \(\lambda\):

\begin{Shaded} \begin{Highlighting}[] \NormalTok{enet\_cv }\OtherTok{\textless{}{-}} \FunctionTok{cv.glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \FloatTok{0.7}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambdas\_to\_try,} \AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{nfolds =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded}

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenet-1} } \caption{MSE vs lambda for elastic net}\label{fig:figureenet} \end{figure}

\begin{Shaded} \begin{Highlighting}[] \NormalTok{lambda\_cv\_min\_enet }\OtherTok{\textless{}{-}}\NormalTok{ enet\_cv}\SpecialCharTok{$}\NormalTok{lambda.min} \NormalTok{lambda\_cv\_min\_enet} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.01963041 \end{verbatim}

\begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Best cross{-}validated lambda} \NormalTok{lambda\_cv\_enet }\OtherTok{\textless{}{-}}\NormalTok{ enet\_cv}\SpecialCharTok{$}\NormalTok{lambda}\FloatTok{.1}\NormalTok{se} \NormalTok{lambda\_cv\_enet} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.1519911 \end{verbatim}

and plot the coefficients' trajectories.

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenet2-1} } \caption{coefficients trajectories for elastic net}\label{fig:figureenet2} \end{figure}

We see that coefficients are shrunk to a point where they exactly equal 0, and are therefore excluded from the model, but that this happens more conservatively as compared to Lasso (as documented by the numbers on top). Let's take a look at the results of the optimal selected model.

\begin{Shaded} \begin{Highlighting}[] \NormalTok{model\_cv\_enet }\OtherTok{\textless{}{-}} \FunctionTok{glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \FloatTok{0.7}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambda\_cv\_enet, }\AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{)} \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \FunctionTok{summary}\NormalTok{(model\_cv\_enet}\SpecialCharTok{$}\NormalTok{beta),} \AttributeTok{caption =} \StringTok{\textquotesingle{}Elastic Net\textquotesingle{}} \NormalTok{)} \end{Highlighting} \end{Shaded}

\begin{table}
\caption{\label{tab:unnamed-chunk-18}Elastic Net}
\centering
\begin{tabular}[t]{r|r|r}
\hline
i & j & x\\
\hline
3 & 1 & 0.0209672\\
\hline
4 & 1 & 0.0465784\\
\hline
5 & 1 & 0.0090842\\
\hline
6 & 1 & 0.0123642\\
\hline
8 & 1 & 0.0430287\\
\hline
14 & 1 & 0.1098168\\
\hline
\end{tabular}
\end{table}

As expected, fewer covariates are dropped to 0. Unfortunately, however, all components of the group of correlated covariates \(X_3-X_5\) remain in the model, and we are not able to identify the key actor of that group. Before getting deeper into the discussion of these results, however, it is useful to incorporate the potential confounders available in the data. Including confounders can be done by specifying them in the model as we do in a regular OLS model. However, we typically do not want them to be penalized and potentially dropped during the selection process. To this end, the best way is to include them in the matrix of covariates passed to the model, but inform the procedure that their coefficients should not be penalized.
The following chunk of code will do that:

\begin{Shaded} \begin{Highlighting}[] \NormalTok{X}\OtherTok{\textless{}{-}}\FunctionTok{as.matrix}\NormalTok{(data2[,}\DecValTok{3}\SpecialCharTok{:}\DecValTok{19}\NormalTok{])} \NormalTok{enet\_cv\_adj }\OtherTok{\textless{}{-}} \FunctionTok{cv.glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \FloatTok{0.6}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambdas\_to\_try,} \AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{nfolds =} \DecValTok{10}\NormalTok{, }\AttributeTok{penalty.factor=}\FunctionTok{c}\NormalTok{(}\FunctionTok{rep}\NormalTok{(}\DecValTok{1}\NormalTok{,}\FunctionTok{ncol}\NormalTok{(X) }\SpecialCharTok{{-}} \DecValTok{3}\NormalTok{),}\DecValTok{0}\NormalTok{,}\DecValTok{0}\NormalTok{,}\DecValTok{0}\NormalTok{))} \end{Highlighting} \end{Shaded}

\begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadjasdfasdf-1} } \caption{MSE vs lambda for elastic net, adjusted for confounders}\label{fig:figureenetadjasdfasdf} \end{figure}

\begin{Shaded} \begin{Highlighting}[] \NormalTok{lambda\_cv\_min\_enet\_adj }\OtherTok{\textless{}{-}}\NormalTok{ enet\_cv\_adj}\SpecialCharTok{$}\NormalTok{lambda.min} \NormalTok{lambda\_cv\_min\_enet\_adj} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.01629751 \end{verbatim}

\begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Best cross{-}validated lambda} \NormalTok{lambda\_cv\_enet\_adj }\OtherTok{\textless{}{-}}\NormalTok{ enet\_cv\_adj}\SpecialCharTok{$}\NormalTok{lambda}\FloatTok{.1}\NormalTok{se} \NormalTok{lambda\_cv\_enet\_adj} \end{Highlighting} \end{Shaded}

\begin{verbatim} ## [1] 0.0869749 \end{verbatim}

Note that, regardless of how large \(\lambda\) is, the three confounders will remain non-penalized in the model, as documented by the numbers on top. Here are the final results:

\begin{Shaded} \begin{Highlighting}[] \NormalTok{model\_cv\_enet\_adj }\OtherTok{\textless{}{-}} \FunctionTok{glmnet}\NormalTok{(X, Y, }\AttributeTok{alpha =} \FloatTok{0.6}\NormalTok{, }\AttributeTok{lambda =}\NormalTok{ lambda\_cv\_enet, }\AttributeTok{standardize =} \ConstantTok{TRUE}\NormalTok{,}\AttributeTok{penalty.factor=}\FunctionTok{c}\NormalTok{(}\FunctionTok{rep}\NormalTok{(}\DecValTok{1}\NormalTok{,}\FunctionTok{ncol}\NormalTok{(X) }\SpecialCharTok{{-}} \DecValTok{3}\NormalTok{),}\DecValTok{0}\NormalTok{,}\DecValTok{0}\NormalTok{,}\DecValTok{0}\NormalTok{))} \NormalTok{knitr}\SpecialCharTok{::}\FunctionTok{kable}\NormalTok{(} \FunctionTok{summary}\NormalTok{(model\_cv\_enet\_adj}\SpecialCharTok{$}\NormalTok{beta),} \AttributeTok{caption =} \StringTok{\textquotesingle{}Elastic Net, adjusted\textquotesingle{}} \NormalTok{)} \end{Highlighting} \end{Shaded}

\begin{table}
\caption{\label{tab:unnamed-chunk-19}Elastic Net, adjusted}
\centering
\begin{tabular}[t]{r|r|r}
\hline
i & j & x\\
\hline
15 & 1 & -0.0101247\\
\hline
16 & 1 & 0.0119696\\
\hline
17 & 1 & -0.6454254\\
\hline
\end{tabular}
\end{table}

In addition to the 3 confounders (now named \(X_{15}-X_{17}\)) only 4 covariates did not drop to 0. Interestingly, we are now selecting none of the 3 covariates we wanted to distinguish. Results seem to agree with multiple regression in indicating \(X_6\) and \(X_{12}\) as the main predictors of the outcome.
\hypertarget{additional-notes}{%
\subsection{Additional notes}\label{additional-notes}}

We have covered the basic theory of penalized regression techniques (also referred to with other common terminology such as shrinkage procedures, or regularization processes). Before moving to the presentation of two examples of application of these techniques in environmental epidemiology, let's mention some additional details.

\begin{itemize}
\tightlist
\item Replicate results in classical OLS
\end{itemize}

When Elastic Net is used to describe associations in population-based studies, it is common practice to also present a final linear regression model that only includes those predictors that were selected by the penalized approach. This model will ensure a better interpretation of the coefficients, and will hopefully no longer be subject to the issues of collinearity that the selection should have addressed. Here are the results from such a model in our illustrative example, based on the covariates selected by the final adjusted elastic net model (column (1)), next to the full multiple regression presented earlier (column (2)):

\begin{tabular}[t]{lcc}
\toprule
 & \multicolumn{2}{c}{Dependent variable: y}\\
 & (1) & (2)\\
\midrule
x1 & & 0.058* (-0.007, 0.123)\\
x2 & & 0.018 (-0.043, 0.080)\\
x3 & & -0.030 (-0.232, 0.173)\\
x4 & & 0.053 (-0.170, 0.275)\\
x5 & & 0.004 (-0.080, 0.088)\\
x6 & 0.085*** (0.031, 0.139) & 0.060** (0.001, 0.119)\\
x7 & & -0.031 (-0.153, 0.091)\\
x8 & & 0.017 (-0.063, 0.097)\\
x9 & 0.015 (-0.081, 0.111) & 0.025 (-0.090, 0.140)\\
x10 & 0.085** (0.019, 0.152) & 0.052 (-0.039, 0.144)\\
x11 & & 0.049 (-0.052, 0.151)\\
x12 & 0.224*** (0.074, 0.374) & 0.222 (-0.071, 0.515)\\
x13 & & -0.083 (-0.382, 0.216)\\
x14 & & 0.054 (-0.047, 0.154)\\
z1 & 0.003 (-0.023, 0.030) & 0.006 (-0.021, 0.032)\\
z2 & 0.009*** (0.006, 0.011) & 0.006*** (0.003, 0.010)\\
z3 & -0.617*** (-0.700, -0.534) & -0.609*** (-0.696, -0.522)\\
Constant & 3.476*** (3.272, 3.680) & 3.265*** (2.800, 3.730)\\
Observations & 500 & 500\\
\bottomrule
\end{tabular}

Note: *p\textless0.1; **p\textless0.05; ***p\textless0.01

\begin{itemize}
\tightlist
\item Grouped Lasso
\end{itemize}

In some settings, the predictors belong to pre-defined groups, or we might have observed well-defined subgroups of exposures from our PCA. In this situation one may want to shrink and select together the members of a given group, which can be achieved with grouped Lasso. The next section will provide alternative regression approaches where preliminary grouping information can be used to address some limitations of standard regression.

\begin{itemize}
\tightlist
\item Time-to-event outcomes
\end{itemize}

Recent developments allow fitting Elastic Net with time-to-event outcomes, within the context of a regularized Cox regression model. Given the popularity of this method in epidemiology, it is reasonable to expect that this approach will become more popular in the context of environmental mixtures since (as we will see in the next sections) methods that were built ad-hoc do not always account for these types of outcomes. A first R package was developed in 2011 (\texttt{coxnet}), fully documented \href{https://cran.r-project.org/web/packages/glmnet/vignettes/Coxnet.pdf}{here}, and those algorithms for right-censored data have also been included in the most recent version of \texttt{glmnet} (a minimal sketch of such a model is given at the end of this subsection).

\begin{itemize}
\tightlist
\item Non-linear associations
\end{itemize}

An implicit assumption we have made so far is that each covariate included in the model has a linear (or log-linear) effect on the outcome of interest. We know that this is often not true (several environmental exposures, for example, have some kind of plateau effect) and we might want to be able to incorporate non-linearities in our analyses. While classical regression can flexibly incorporate non-linearities by means of techniques such as restricted cubic splines, this is not of straightforward application in penalized regression. In complex settings where strong departures from linearity are observed in preliminary linear regressions, one should probably consider more flexible techniques such as BKMR (Section 5).
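To close these notes, here is a minimal sketch of how a penalized Cox model for a right-censored outcome could be fitted with \texttt{glmnet}, as mentioned in the time-to-event point above. The survival objects and variable names are hypothetical, and depending on the \texttt{glmnet} version the response may need to be supplied as a two-column matrix of time and status rather than a \texttt{Surv} object.

\begin{verbatim}
# Illustrative sketch only: regularized Cox regression with glmnet
library(glmnet)
library(survival)
y_surv <- Surv(time = followup_time, event = event_indicator)  # hypothetical
cox_cv <- cv.glmnet(X, y_surv, family = "cox", alpha = 0.7,
                    standardize = TRUE, nfolds = 10)
coef(cox_cv, s = "lambda.1se")
\end{verbatim}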
\hypertarget{elastic-net-and-environmental-mixtures}{%
\subsection{Elastic Net and environmental mixtures}\label{elastic-net-and-environmental-mixtures}}

Using Elastic Net to evaluate the association between a mixture of environmental exposures and a health outcome is becoming increasingly popular. A nice and rigorous application of the method can be found in \citet{lenters2016prenatal}, evaluating co-exposure to 16 chemicals as they relate to birth weight in 1250 infants. Here is the correlation plot from the manuscript:

\includegraphics{images/corrplot.png}

and here are the results presenting, respectively, the Elastic Net model and the final OLS including only the selected covariates.

\includegraphics{images/table2.png}

\begin{figure}
\centering
\includegraphics{images/table3.png}
\caption{Table 3 from Lenters et al.}
\end{figure}

Another application that thoroughly reports the methods, states all assumptions, and clearly discusses the results can be seen in \citet{vriens2017neonatal}, evaluating environmental pollutants and placental mitochondrial DNA content in infants. This is the starting correlation plot reported in the paper:

\includegraphics{images/corrplot2.png}

Several detailed figures are used to present the results, providing the reader with all the necessary tools to understand the associations and their interpretation.

\begin{figure}
\centering
\includegraphics{images/resgraph.png}
\caption{Figure from Vriens et al.}
\end{figure}

\begin{figure}
\centering
\includegraphics{images/restab.png}
\caption{Table from Vriens et al.}
\end{figure}

\hypertarget{other-regression-based-approaches}{%
\section{Other regression-based approaches}\label{other-regression-based-approaches}}

Before moving on to a general discussion of the advantages and limitations of regression-based approaches, and then introducing and motivating further approaches for environmental mixtures, it is useful to provide a broad overview of some alternative approaches, based on or derived from classical regression, that have proven useful in this context.

\hypertarget{hierarchical-linear-models}{%
\subsection{Hierarchical linear models}\label{hierarchical-linear-models}}

Hierarchical modeling allows improving the performance of a multiple regression model when clusters of exposures can be clearly identified. The application of this approach to multiple exposures was first introduced to evaluate the effect of antiretroviral treatments in HIV epidemiology, where several drugs belonging to clearly defined drug classes are usually evaluated (\citet{correia2019hierarchical}). In brief, the model incorporates first-stage effects for each drug class, and second-stage effects for individual drugs, assuming that the effect of each drug is the summation of the (fixed) effect of its drug class and a residual effect specific to the individual drug. Assuming that we can identify (or observe from a preliminary analysis such as a PCA) well-characterized subgroups of environmental exposures, this modeling technique can be used to improve the performance of multiple regression when focusing on environmental mixtures.
Potential advantages include the absence of variable selection and shrinkage, thus allowing a better interpretation of the results.

\hypertarget{partial-least-square-regression}{%
\subsection{Partial least square regression}\label{partial-least-square-regression}}

Partial least squares (PLS) regression can be seen as a method that generalizes and combines PCA and multiple regression. PLS regression is very useful to predict dependent variables from a very large number of predictors that might be highly correlated. PLS regression replaces the initial independent variable space (X) and the initial response variable space (Y) with smaller spaces that rely on a reduced number of variables named latent variables, which are included one by one in an iterative process. The sparse PLS (sPLS) regression, in particular, is an extension of PLS that aims at combining variable selection and modeling in a one-step procedure (\citet{le2008sparse}). Components are defined iteratively such that they explain as much of the remaining covariance between the predictors and the outcome as possible. The sPLS approach simultaneously yields good predictive performance and appropriate variable selection by creating sparse linear combinations of the original predictors. Sparsity is induced by including a penalty (η) in the estimation of the linear combination coefficients; that is to say, all coefficients with an absolute value lower than some fraction η of the maximum absolute coefficient are shrunk to zero. Only the first K components are included as covariates in a linear regression model, calibrating K and η by minimizing the RMSE using 5-fold cross-validation (the default implementation). sPLS is available in the R package \texttt{spls}, documented \href{https://cran.r-project.org/web/packages/spls/vignettes/spls-example.pdf}{here}. A good illustration of the use of sPLS in environmental epidemiology can be found in \citet{lenters2015phthalates}.

\hypertarget{advantages-and-limitations-of-regression-approaches}{%
\section{Advantages and limitations of regression approaches}\label{advantages-and-limitations-of-regression-approaches}}

Together with underlining some of the limitations of single and multiple regression in evaluating the effects of environmental mixtures on health outcomes, primarily due to the main problem of multicollinearity, this section has also introduced techniques that overcome such limitations while remaining embedded in a regression framework. Among these techniques, review articles and simulation studies agree in concluding that penalized regression consistently outperforms conventional approaches, and that the choice of which method to use should be made on a case-by-case basis. I recommend reading the paper from \citet{agier2016systematic}, systematically comparing methods based on regression in exposome-health analyses. In practical settings, several research questions can be addressed by using multiple regression or its extensions. Nevertheless, there might be research questions that are beyond the reach of regression techniques and for which some additional methodologies should be considered.

\begin{itemize}
\tightlist
\item Assessing the overall mixture effect.
\end{itemize}

Penalized approaches address the issues of collinearity and high dimensionality by operating some sort of variable selection.
While this allows retrieving information on the actual effect of each selected component, other questions, such as those related to the overall effect of the mixture, cannot be addressed. As discussed in Section 1, this is a relevant research question that is often of primary interest. The next section will address this problem, introducing the weighted quantile sum (WQS) regression framework as a technique to evaluate the overall effect of an environmental mixture while taking into account high levels of correlation.

\begin{itemize}
\tightlist
\item Complex scenarios with several exposures and interactive mechanisms.
\end{itemize}

When the mixture of interest is composed of several exposures, it is likely that the mixture-outcome association will involve non-linear and interactive mechanisms. As the number of potential predictors gets higher, so does the complexity of the model. In such situations the performance of regression-based approaches is generally weak, and more flexible algorithms should be taken into consideration. These problems will be assessed in Section 6, introducing Bayesian Kernel Machine Regression as a flexible non-parametric approach to estimate the mixture-outcome association in the presence of complex non-linear and interactive mechanisms, and then discussing techniques for the assessment of high-dimensional interactions, including machine learning algorithms based on tree modeling.

\hypertarget{assessing-the-overall-cumulative-effect-of-multiple-exposures}{%
\chapter{Assessing the overall (cumulative) effect of multiple exposures}\label{assessing-the-overall-cumulative-effect-of-multiple-exposures}}

Extensions of linear regression presented in the previous chapter address the complexity of the mixture-outcome association by selecting relevant predictors within the mixture, thus removing covariates that would create problems due to high collinearity, or simply by reducing the dimension of the exposure matrix, thus improving the fit of the model. This approach, however, also comes with relevant drawbacks. Let's think of the group of highly correlated exposures from our hypothetical example (\(X_3-X_4-X_5\)), where penalized approaches recommended only selecting \(X_4\). This allowed evaluating the independent effect of \(X_4\) on the outcome without being troubled by the high levels of correlation between this covariate and the other 2 of the cluster. This same selection, however, prevents us from addressing other important questions. For example, what if there is an interaction between \(X_3\) and \(X_4\) (this can happen even if \(X_3\) does not have an independent effect on the outcome, but only an effect that is triggered in the presence of the other co-exposure)? By removing \(X_3\) from the model, we will not be able to evaluate this interaction. Moreover, we will not be able to correctly quantify the joint effect of \(X_3\) and \(X_4\), which is the sum of the two main effects and their 2-way interaction. As discussed in the first chapter, this is a very important research question: the three correlated exposures might for instance come from the same source, and quantifying their joint effect would in this case provide useful information on the public health benefits of reducing exposure to the source. The question that we will address in this section is the following: how do we quantify the joint effect of several exposures, possibly highly correlated, when standard regression techniques fall short?
\hypertarget{unsupervised-summary-scores}{%
\section{Unsupervised summary scores}\label{unsupervised-summary-scores}}

A very intuitive approach is to create one or more summary scores that summarize individual levels of exposure to the mixture, thus reducing the number of covariates that are going to be evaluated. A very common example of such an approach is used by investigators working on phthalates. In this context, analyses are often hampered by the presence of extreme correlation between metabolites of Di(2-ethylhexyl)phthalate (DEHP), and researchers commonly summarize this information into a molar sum of DEHP. \citet{li2019serum} writes, for example ``we calculated the molar sum of DEHP metabolites (ΣDEHP) by dividing each metabolite concentration by its molecular weight and then summing: ΣDEHP={[}MEHP (μg/L)×(1/278.34 (g/mol)){]}+{[}MEHHP (μg/L) × (1/294.34 (g/mol)){]} + {[}MEOHP (μg/L) × (1/292.33 (g/ mol)){]} + {[}MECPP (μg/L) × (1/308.33 (g/mol)){]}''. Note that, with this approach, the score targets a selected subset of exposures (the highly correlated cluster creating problems), while the other phthalate metabolites are included in the model without any transformation. Another common approach is to use components derived from PCA, as described in Section 3. PCA allows identifying continuous covariates that summarize the variability of the mixture exposure. Including these derived components in a regression model has the great advantage that all collinearity issues will be resolved, as the components are uncorrelated by definition. On the other hand, the validity of this approach depends heavily on whether the obtained components have a clear biological interpretation. A good example of this approach can be found in \citet{souter2020urinary}.

\hypertarget{weighted-quantile-sum}{%
\section{Weighted quantile sum}\label{weighted-quantile-sum}}

Taking one step further, researchers might be interested in taking into account the relationship between the exposures and the outcome while summarizing the complex exposure to the mixture of interest. The weighted quantile sum (WQS), developed specifically for the context of environmental mixtures analysis, is an increasingly common approach that allows evaluating a mixture-outcome association by creating a summary score of the mixture in a supervised fashion (\citet{czarnota2015assessment}, \citet{carrico2015characterization}). Specifically, WQS is a statistical model for multivariate regression in high-dimensional datasets that operates in a supervised framework: it creates a single score (the weighted quantile sum) that summarizes the overall exposure to the mixture, and includes this score in a regression model to evaluate the overall effect of the mixture on the outcome of interest. The score is calculated as a weighted sum (so that exposures with weaker effects on the outcome have lower weight in the index) of all exposures categorized into quartiles or more groups (so that extreme values have less impact on the weight estimation).

\hypertarget{model-definition-and-estimation}{%
\subsection{Model definition and estimation}\label{model-definition-and-estimation}}

Most of what follows in this subsection is taken from the excellent introductory material shared online by Dr.~Renzetti at this \href{https://cran.r-project.org/web/packages/gWQS/vignettes/gwqs-vignette.html}{link}, which should be referred to for further details on the technique.
The WQS model takes the following form:

\begin{equation}
\label{eq:wqs}
g(\mu) = \beta_0 + \beta_1\Bigg(\sum_{i=1}^{c}w_iq_i\Bigg) + \boldsymbol{z'\varphi}
\end{equation}

The \((\sum_{i=1}^{c}w_iq_i)\) term represents the index that weights and sums the components included in the mixture. As such, \(\beta_1\) will be the parameter summarizing the overall effect of the (weighted) mixture. In addition, the model will also provide an estimate of the individual weights \(w_i\), which indicate the relative importance of each exposure in the mixture-outcome association.

To estimate the model, the data may be split into a training and a validation dataset: the first is used for the weight estimation, the second to test for the significance of the final WQS index. The weights are estimated through a bootstrap, constrained to sum to one and bounded between zero and one: \(\sum_{i=1}^{c}w_i=1\) and \(0 \leq w_i \leq 1\). For each bootstrap sample (usually \(B=100\) total samples) a dataset is created by sampling with replacement from the training dataset, and the parameters of the model are estimated through an optimization algorithm. An inequality constraint is applied in order to impose that \(0 \leq w_i \leq 1\). Once the weights are estimated, the model is fitted in order to find the regression coefficients in each ensemble step.

After the bootstrap ensemble is completed, the estimated weights are averaged across bootstrap samples to obtain the WQS index:

\[WQS = \sum_{i=1}^c \bar{w}_iq_i\]

Typically, weights are estimated in the training set and then used to construct a WQS index in the validation set, which can be used to test for the association between the mixture and the health outcome in a standard generalized linear model:

\[g(\mu) = \beta_0 + \beta_1WQS + \boldsymbol{z'\varphi}\]

After the final model is fitted, one can test the significance of \(\beta_1\) to see if there is an association between the WQS index and the outcome. If the coefficient is significantly different from 0, we can then interpret the weights: the highest values identify the corresponding components as the relevant contributors to the association. A selection threshold can be decided a priori as \(\tau = 1/c\) to identify those chemicals that have a significant weight in the index.

\hypertarget{the-unidirectionality-assumption}{%
\subsection{The unidirectionality assumption}\label{the-unidirectionality-assumption}}

WQS makes an important assumption of unidirectionality: all exposures are assumed to be associated with the outcome in the same direction (either positive or negative). The model is inherently one-directional, in that it tests only for mixture effects positively or negatively associated with a given outcome. In practice, analyses should therefore be run twice to test for associations in either direction. The one-directional index avoids incurring in the reversal paradox when we have highly correlated variables, thus improving the identification of bad actors.

\hypertarget{extensions-of-the-original-wqs-regression}{%
\subsection{Extensions of the original WQS regression}\label{extensions-of-the-original-wqs-regression}}

\begin{itemize}
\tightlist
\item
  Dependent variables
\end{itemize}

WQS regression can be generalized and applied to multiple types of dependent variables. In particular, it has been adapted to four different cases: logistic, multinomial, Poisson and negative binomial regression.
For these last two cases it is also possible to fit zero-inflated models, keeping the same objective function used to estimate the weights as for the Poisson and negative binomial regression, but taking the zero inflation into account when fitting the final model.

\begin{itemize}
\tightlist
\item
  Random selection
\end{itemize}

A novel implementation of WQS regression for high-dimensional mixtures with highly correlated components was proposed in \citet{curtin2021random}. This approach applies a random selection of subsets of the variables included in the mixture instead of the bootstrapping for parameter estimation. Through this method we are able to generate more de-correlated subsets of variables and reduce the variance of the parameter estimates compared to a single analysis. This novel statistical methodology was shown to be more effective than WQS in modeling contexts with large predictor sets, complex correlation structures, or where the number of predictors exceeds the number of subjects.

\begin{itemize}
\tightlist
\item
  Repeated holdout validation for WQS regression
\end{itemize}

One limit of WQS is the reduced statistical power caused by the necessity of splitting the dataset into training and validation sets. This partition can also lead to unrepresentative sets of data and unstable parameter estimates. A recent work from \citet{tanner2019repeated} showed that conducting a WQS analysis on the full dataset without splitting into training and validation sets produces optimistic results, and proposed applying a repeated holdout validation that combines cross-validation and bootstrap resampling. They suggested repeatedly splitting the data 100 times with replacement and fitting a WQS regression on each partitioned dataset. Through this procedure we obtain an approximately normal distribution of the weights and of the regression parameters, and we can apply the mean or the median to estimate the final parameters. A limit of this approach is its higher computational intensity.

\begin{itemize}
\tightlist
\item
  Additional approaches
\end{itemize}

To complete the set of currently available extensions of this approach, it is finally worth mentioning the Bayesian WQS (\citet{colicino2020per}), which also allows relaxing the unidirectionality assumption, and the lagged WQS (\citet{gennings2020lagged}), which deals with time-varying mixtures of exposures to understand the role of exposure timing.

\hypertarget{quantile-g-computation}{%
\subsection{Quantile G-computation}\label{quantile-g-computation}}

A recent paper by \citet{keil2020quantile} introduced an additional modeling technique for environmental mixtures that builds on WQS regression, integrating its estimation procedure with g-computation. This approach, called quantile-based g-computation, estimates the overall mixture effect with the same procedure used by WQS, but estimates the parameters of a marginal structural model rather than those of a standard regression. In this way, under common assumptions in causal inference such as exchangeability, causal consistency, positivity, no interference, and correct model specification, this model will also improve the causal interpretation of the overall effect. Importantly, the procedure is also designed to overcome the unidirectionality assumption, and the flexibility of marginal structural models allows incorporating non-linearities in the contribution of each exposure to the score.
Additional details on the models can be found in the original paper or in this useful R \href{https://cran.r-project.org/web/packages/qgcomp/vignettes/qgcomp-vignette.html}{vignette}.

\hypertarget{wqs-regression-in-r}{%
\subsection{WQS regression in R}\label{wqs-regression-in-r}}

WQS is available in the R package \texttt{gWQS} (standing for generalized WQS). Documentation and guidelines can be found \href{https://cran.r-project.org/web/packages/gWQS/gWQS.pdf}{here}. Note that if you are working on a Mac with an OS other than OS 10.5 through 10.7, you may have to install XQuartz from \url{https://www.xquartz.org} before being able to load the \texttt{gWQS} library. The recently developed quantile G-computation approach is instead available in the \texttt{qgcomp} package.

Fitting WQS in R will require some additional data management. First of all, both \texttt{gWQS} and \texttt{qgcomp} will require an object with the \textbf{names} of the exposures, rather than a matrix with the exposures themselves.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{exposure}\OtherTok{\textless{}{-}} \FunctionTok{names}\NormalTok{(data2[,}\DecValTok{3}\SpecialCharTok{:}\DecValTok{16}\NormalTok{])}
\end{Highlighting}
\end{Shaded}

The following lines will fit a WQS regression model for the positive direction, with a 40/60 training-validation split, and without adjusting for covariates. The reader can refer to the link above for details on all available options.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{results1 }\OtherTok{\textless{}{-}} \FunctionTok{gwqs}\NormalTok{(y }\SpecialCharTok{\textasciitilde{}}\NormalTok{ wqs, }\AttributeTok{mix\_name =}\NormalTok{ exposure, }\AttributeTok{data =}\NormalTok{ data2, }\AttributeTok{q =} \DecValTok{4}\NormalTok{, }\AttributeTok{validation =} \FloatTok{0.6}\NormalTok{,}
                 \AttributeTok{b =} \DecValTok{10}\NormalTok{, }\AttributeTok{b1\_pos =}\NormalTok{ T, }\AttributeTok{b1\_constr =}\NormalTok{ F, }\AttributeTok{family =} \StringTok{"gaussian"}\NormalTok{, }
                 \AttributeTok{seed =} \DecValTok{123}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

After fitting the model, the following lines will produce a barplot with the weights as well as the summary of results (overall effect and weight estimates).

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{gwqs\_barplot}\NormalTok{(results1, }\AttributeTok{tau=}\ConstantTok{NULL}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadjx-1} }
\caption{WQS: weights estimation}\label{fig:figureenetadjx}
\end{figure}

Here are the actual estimates being plotted:

\begin{table}
\caption{\label{tab:unnamed-chunk-22}WQS: weights estimation}
\centering
\begin{tabular}[t]{l|l|r}
\hline
  & mix\_name & mean\_weight\\
\hline
x12 & x12 & 0.2080444\\
\hline
x5 & x5 & 0.1990505\\
\hline
x4 & x4 & 0.1860921\\
\hline
x6 & x6 & 0.1190146\\
\hline
x8 & x8 & 0.1120772\\
\hline
x2 & x2 & 0.0586329\\
\hline
x1 & x1 & 0.0518465\\
\hline
x9 & x9 & 0.0295788\\
\hline
x14 & x14 & 0.0224343\\
\hline
x7 & x7 & 0.0074698\\
\hline
x13 & x13 & 0.0057590\\
\hline
x3 & x3 & 0.0000000\\
\hline
x10 & x10 & 0.0000000\\
\hline
x11 & x11 & 0.0000000\\
\hline
\end{tabular}
\end{table}

To estimate the negative index, still without a direct constraint on the actual \(\beta\), we change the \texttt{b1\_pos} option to FALSE. In our illustrative example, all bootstrap samples still provide a positive coefficient.
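For reference, a minimal sketch of this negative-direction call (not run here; the object name \texttt{results1\_neg} is arbitrary, and all other arguments are unchanged from the call above) would be:

\begin{verbatim}
# Sketch: same gwqs() call as above, but requesting the negative direction
results1_neg <- gwqs(y ~ wqs, mix_name = exposure, data = data2, q = 4,
                     validation = 0.6, b = 10, b1_pos = FALSE, b1_constr = FALSE,
                     family = "gaussian", seed = 123)
gwqs_barplot(results1_neg, tau = NULL)
\end{verbatim}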
This suggests that we are in a situation where all covariates have a positive (or null) effect. Even constraining the coefficient would likely not make any difference in this case - coefficients would either be all around 0, or the model will not converge. For the next points, therefore, we will only focus on the positive index. To adjust for covariates we can add them in the model as presented here: \begin{Shaded} \begin{Highlighting}[] \NormalTok{results1\_0\_adj }\OtherTok{\textless{}{-}} \FunctionTok{gwqs}\NormalTok{(y }\SpecialCharTok{\textasciitilde{}}\NormalTok{ wqs}\SpecialCharTok{+}\NormalTok{z1}\SpecialCharTok{+}\NormalTok{z2}\SpecialCharTok{+}\NormalTok{z3, }\AttributeTok{mix\_name =}\NormalTok{ exposure, }\AttributeTok{data =}\NormalTok{ data2, }\AttributeTok{q =} \DecValTok{4}\NormalTok{, }\AttributeTok{validation =} \FloatTok{0.6}\NormalTok{,} \AttributeTok{b =} \DecValTok{10}\NormalTok{, }\AttributeTok{b1\_pos =}\NormalTok{ T, }\AttributeTok{b1\_constr =}\NormalTok{ F, }\AttributeTok{family =} \StringTok{"gaussian"}\NormalTok{, } \AttributeTok{seed =} \DecValTok{123}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{gwqs\_barplot}\NormalTok{(results1\_0\_adj, }\AttributeTok{tau=}\ConstantTok{NULL}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadj-1} } \caption{WQS: weights estimation with covariates adjustment}\label{fig:figureenetadj} \end{figure} After adjustment the association is largely attenuated, and the weights of the most important contributors change both in magnitude as well as in ranking. This implies that the three confounders have a different effect on each of the components (e.g.~the contribution of \(X_6\) was attenuated before adjusting, while the contribution of \(X_4\) was overestimated). The following lines will fit a quantile G-computation model. What we have to specify in the command is the list of exposures, the name of the mixture object, the data, the type of outcome (continuous here), and whether we want quartiles or other categorizations. \begin{Shaded} \begin{Highlighting}[] \NormalTok{qc }\OtherTok{\textless{}{-}} \FunctionTok{qgcomp}\NormalTok{(y }\SpecialCharTok{\textasciitilde{}}\NormalTok{ x1}\SpecialCharTok{+}\NormalTok{x2}\SpecialCharTok{+}\NormalTok{x3}\SpecialCharTok{+}\NormalTok{x4}\SpecialCharTok{+}\NormalTok{x5}\SpecialCharTok{+}\NormalTok{x6}\SpecialCharTok{+}\NormalTok{x7}\SpecialCharTok{+}\NormalTok{x8}\SpecialCharTok{+}\NormalTok{x9}\SpecialCharTok{+}\NormalTok{x10}\SpecialCharTok{+}\NormalTok{x11}\SpecialCharTok{+}\NormalTok{x12}\SpecialCharTok{+}\NormalTok{x13}\SpecialCharTok{+}\NormalTok{x14} \SpecialCharTok{+}\NormalTok{z1}\SpecialCharTok{+}\NormalTok{z2}\SpecialCharTok{+}\NormalTok{z3,} \AttributeTok{expnms=}\NormalTok{exposure,} \NormalTok{ data2, }\AttributeTok{family=}\FunctionTok{gaussian}\NormalTok{(), }\AttributeTok{q=}\DecValTok{4}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{plot}\NormalTok{(qc)} \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadja-1} } \caption{qgcomp: weights estimation with covariates adjustment}\label{fig:figureenetadja} \end{figure} The Authors also recommended fitting the model using bootstrap, which can be achieved with the following command. 
Note that the number of iterations, here set to 10, should be at least 200. The plot from this model will provide the estimate of the overall effect of the mixture.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{qc.boot }\OtherTok{\textless{}{-}} \FunctionTok{qgcomp.boot}\NormalTok{(y }\SpecialCharTok{\textasciitilde{}}\NormalTok{ x1}\SpecialCharTok{+}\NormalTok{x2}\SpecialCharTok{+}\NormalTok{x3}\SpecialCharTok{+}\NormalTok{x4}\SpecialCharTok{+}\NormalTok{x5}\SpecialCharTok{+}\NormalTok{x6}\SpecialCharTok{+}\NormalTok{x7}\SpecialCharTok{+}\NormalTok{x8}\SpecialCharTok{+}\NormalTok{x9}\SpecialCharTok{+}\NormalTok{x10}\SpecialCharTok{+}\NormalTok{x11}\SpecialCharTok{+}\NormalTok{x12}\SpecialCharTok{+}\NormalTok{x13}\SpecialCharTok{+}\NormalTok{x14}
\SpecialCharTok{+}\NormalTok{z1}\SpecialCharTok{+}\NormalTok{z2}\SpecialCharTok{+}\NormalTok{z3,}
                  \AttributeTok{expnms=}\NormalTok{exposure,}
\NormalTok{ data2, }\AttributeTok{family=}\FunctionTok{gaussian}\NormalTok{(), }\AttributeTok{q=}\DecValTok{4}\NormalTok{, }\AttributeTok{B=}\DecValTok{10}\NormalTok{, }\AttributeTok{seed=}\DecValTok{123}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{plot}\NormalTok{(qc.boot)}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadjsZCV-1} }
\caption{qgcomp: overall effect}\label{fig:figureenetadjsZCV}
\end{figure}

It is interesting to note that in this situation of high collinearity, qgcomp's results are still affected, as we see a strikingly high (and, as we know since the data are simulated, wrong) negative weight for \(X_3\).

A final note: both packages are very recent and constantly updated and revised. You should always refer to the vignette and documentation provided above for updates and possible modifications to the syntax.

\hypertarget{example-from-the-literature}{%
\subsection{Example from the literature}\label{example-from-the-literature}}

Thanks to its easy implementation in statistical software and to the development of the several extensions discussed above, WQS is rapidly becoming one of the most common techniques used by investigators to evaluate environmental mixtures. As an illustrative example of how methods and results can be presented, the reader can refer to the paper from \citet{deyssenroth2018intrauterine}, evaluating the association between 16 trace metals, measured in post-partum maternal toenails in about 200 pregnant women from the Rhode Island Child Health Study, and small for gestational age (SGA) status. Before fitting WQS, the Authors conduct a preliminary analysis using conditional logistic regression, which indicates that effects seem to operate in both directions.

\includegraphics{images/deyslog.png}

As a consequence, WQS results are presented for both the positive and negative directions, summarizing both weight estimates and total effects in a clear and informative figure.

\begin{figure}
\centering
\includegraphics{images/deys.png}
\caption{WQS results from Deyssenroth et al.}
\end{figure}

\hypertarget{flexible-approaches-for-complex-settings}{%
\chapter{Flexible approaches for complex settings}\label{flexible-approaches-for-complex-settings}}

In the previous sections we have discussed the challenges that arise when evaluating environmental mixtures and the several available techniques based on regression modeling that can be used to address different research questions in this context.
The final section of section 4 discussed the two major limitations shared by all regression techniques, namely the difficulty of estimating overall mixture effects and that of including additional model complexities such as non-linearities and (possibly high-order) interactions. In the previous section we have discussed WQS as a useful tool to address the first limitation. Note that, interestingly, this technique can actually be seen as yet another regression extension, as it is based on integrating a summary score into a generalized linear model.

To tackle the second challenge, let's first note that any regression would allow integrating interactions of any order (this is done by simply including product terms between any pair, or higher combination, of exposures) as well as non-linear associations. Spline modeling is probably the best way of accounting for non-linear effects in regression modeling, and one can also consider using generalized additive models (GAM), which have been successfully applied in the context of environmental mixtures (\citet{zheng2020evaluating}). Nevertheless, both the inclusion of product terms and spline transformations will rapidly increase the number of parameters to be estimated, and we might be in need of alternative techniques that can more flexibly tackle these issues.

In this context, we are going to describe two approaches: first, Bayesian kernel machine regression (BKMR), a method directly developed for evaluating environmental mixtures that is increasing in popularity because of its several advantages and flexibility (\citet{bobb2015bayesian}),(\citet{bobb2018statistical}); second, the use of machine learning techniques, and specifically tree-based modeling such as boosted regression trees (\citet{lampa2014identification}),(\citet{bellavia2021joint}). Additional techniques that can be considered when the specific focus is on detecting interactions will not be discussed here, and the reader can refer to these publications summarizing and discussing methodologies in this context: \citet{barrera2017systematic}, \citet{sun2013statistical}.

\hypertarget{bayesian-kernel-machine-regression}{%
\section{Bayesian Kernel Machine Regression}\label{bayesian-kernel-machine-regression}}

The material presented in this section is largely taken from Prof.~Coull's guest lecture material and Dr.~Bobb's \href{https://jenfb.github.io/bkmr/overview.html}{vignette}.

\hypertarget{introduction-1}{%
\subsection{Introduction}\label{introduction-1}}

Possible objectives of a mixture analysis include detection and estimation of an effect of the overall mixture, identification of the pollutant or group of pollutants responsible for observed mixture effects, visualization of the exposure-response function, and detection of interactions among individual pollutants. Bayesian Kernel Machine Regression (BKMR) is designed to address all four of these objectives in a flexible non-parametric way. The main idea of BKMR is to model the joint effect of the exposures by means of a kernel function. Specifically, the general modeling framework is

\[Y_i=h(z_{i1},\ldots,z_{iM})+\boldsymbol{x_i'\beta}+\epsilon_i\]

where \(Y_i\) is a continuous, normally distributed health endpoint, \(h\) is a flexible function of the predictor variables \(z_{i1},\ldots,z_{iM}\), and \(x_i\) is a vector of covariates assumed to have a linear relationship with the outcome. There are several choices for the kernel function used to represent \(h\).
The focus here is on the Gaussian kernel, which flexibly captures a wide range of underlying functional forms for \(h\) and can accommodate nonlinear and non-additive effects of the multivariate exposure. Specifically, the Gaussian kernel implies the following representation for \(h\):

\[K_{vs}(z_i,z_j)=\exp\Big\{-\sum_{m=1}^{M}r_m(z_{im}-z_{jm})^2\Big\}\]

Intuitively, the kernel function shrinks the estimated health effects of two individuals with similar exposure profiles toward each other. The weights \(r_m\) represent the importance of each exposure within the kernel function, with \(r_m=0\) indicating that there is no association between the \(m^{th}\) exposure and the outcome. By allowing some weights to be 0, the method implicitly embeds a variable selection procedure. This can also integrate information on existing structures among exposures (e.g.~correlation clusters, PCA results, similar mechanisms \ldots) with the so-called hierarchical variable selection, which estimates the probability that each group of exposures is important, and the probability that, given a group is important, each exposure in that group is driving that group-outcome association.

\hypertarget{estimation}{%
\subsection{Estimation}\label{estimation}}

BKMR takes its full name from the Bayesian approach used for estimating the parameters. The advantages of this include the ability to estimate the importance of each variable (\(r_m\)) simultaneously, to obtain uncertainty measures, and to easily extend the estimation to longitudinal data. Since the estimation is built within an iterative procedure (MCMC), variable importance is provided in terms of the Posterior Inclusion Probability (PIP), the proportion of iterations with \(r_m>0\). Typically, several thousand iterations are required.

The \texttt{bkmr} R package developed by the Authors makes implementation of this technique relatively straightforward. Using our illustrative example, the following chunk of code includes a set of lines required before estimating a BKMR model. Specifically, we are defining the objects containing the mixture (\(X_{1}-X_{14}\)), the outcome (\(Y\)), and the confounders (\(Z_1-Z_3\)). We also need to set a seed (we are using an iterative process with a random component) and a knots matrix that will help speed up the process. This final step is very important, as the model estimation can be extremely long (the recommendation is to use a number of knots of roughly n/10).

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{mixture}\OtherTok{\textless{}{-}}\FunctionTok{as.matrix}\NormalTok{(data2[,}\DecValTok{3}\SpecialCharTok{:}\DecValTok{16}\NormalTok{])}
\NormalTok{y}\OtherTok{\textless{}{-}}\NormalTok{data2}\SpecialCharTok{$}\NormalTok{y}
\NormalTok{covariates}\OtherTok{\textless{}{-}}\FunctionTok{as.matrix}\NormalTok{(data2[,}\DecValTok{17}\SpecialCharTok{:}\DecValTok{19}\NormalTok{])}
\FunctionTok{set.seed}\NormalTok{(}\DecValTok{10}\NormalTok{)}
\NormalTok{knots100 }\OtherTok{\textless{}{-}}\NormalTok{ fields}\SpecialCharTok{::}\FunctionTok{cover.design}\NormalTok{(mixture, }\AttributeTok{nd =} \DecValTok{50}\NormalTok{)}\SpecialCharTok{$}\NormalTok{design}
\end{Highlighting}
\end{Shaded}

The actual estimation of a BKMR model is very simple and requires one line of R code. With the following lines we fit a BKMR model with a Gaussian predictive process, using the knot matrix defined above. We are using 1000 MCMC iterations for the sake of time, but your final analysis should be run on a much larger number of samples, up to 50,000.
Here we are allowing for variable selection, but not providing any information on grouping.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{temp }\OtherTok{\textless{}{-}} \FunctionTok{kmbayes}\NormalTok{(}\AttributeTok{y=}\NormalTok{y, }\AttributeTok{Z=}\NormalTok{mixture, }\AttributeTok{X=}\NormalTok{covariates, }\AttributeTok{iter=}\DecValTok{1000}\NormalTok{, }\AttributeTok{verbose=}\ConstantTok{FALSE}\NormalTok{, }\AttributeTok{varsel=}\ConstantTok{TRUE}\NormalTok{, }
                \AttributeTok{knots=}\NormalTok{knots100)}
\FunctionTok{ExtractPIPs}\NormalTok{(temp)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##    variable   PIP
## 1        x1 0.110
## 2        x2 0.082
## 3        x3 0.000
## 4        x4 0.000
## 5        x5 0.072
## 6        x6 0.142
## 7        x7 0.000
## 8        x8 0.336
## 9        x9 0.062
## 10      x10 0.400
## 11      x11 0.188
## 12      x12 0.818
## 13      x13 0.080
## 14      x14 0.158
\end{verbatim}

The \texttt{ExtractPIPs()} command will show one of the most important results, the posterior inclusion probabilities. We can interpret this output as the variable selection part, in which we get information on the importance of each covariate in defining the exposure-outcome association. In descending order of PIP, the most important contributions seem to come from \(X_{12}\), \(X_{10}\), \(X_{8}\), \(X_{11}\), \(X_{14}\), and \(X_{6}\). This is in agreement with Elastic Net and WQS, which also identified \(X_{12}\) and \(X_6\) as important contributors. Also note that, within the other cluster, we haven't yet been able to understand which exposure, if any, is the bad actor.

\hypertarget{trace-plots-and-burning-phase}{%
\subsection{Trace plots and burn-in phase}\label{trace-plots-and-burning-phase}}

Since we are using several iterations, it is important to evaluate the convergence of the parameters. This can be checked by looking at trace plots (what we expect here is random behavior around a straight line). What we generally observe is an initial burn-in phase, which we should remove from the analysis. Here we are removing the first 100 iterations; this number should be modified depending on the results of your first plots. In this case the figures show good convergence.
\begin{Shaded} \begin{Highlighting}[] \NormalTok{sel}\OtherTok{\textless{}{-}}\FunctionTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{,}\DecValTok{1000}\NormalTok{,}\AttributeTok{by=}\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{TracePlot}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{par =} \StringTok{"beta"}\NormalTok{, }\AttributeTok{sel=}\NormalTok{sel)} \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadjs-1} } \caption{Convergence plot for a single parameter without exclusions}\label{fig:figureenetadjs} \end{figure} \begin{Shaded} \begin{Highlighting}[] \NormalTok{sel}\OtherTok{\textless{}{-}}\FunctionTok{seq}\NormalTok{(}\DecValTok{100}\NormalTok{,}\DecValTok{1000}\NormalTok{,}\AttributeTok{by=}\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{TracePlot}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{par =} \StringTok{"beta"}\NormalTok{, }\AttributeTok{sel=}\NormalTok{sel)} \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/figureenetadjsd-1} } \caption{Convergence plot for a single parameter after burning phase exclusion}\label{fig:figureenetadjsd} \end{figure} \hypertarget{visualizing-results}{% \subsection{Visualizing results}\label{visualizing-results}} After estimation of a BKMR model, which is relatively straightforward and just requires patience throughout iterations, most of the work will consist of presenting post-estimation figures and functions that can present the complex relationship between the mixture and the outcome. The R package includes several functions to summarize the model output in different ways and to visually display the results. To visualize the exposure-response functions we need to create different dataframes with the predictions that will be then graphically displayed with \texttt{ggpolot}. 
\begin{Shaded} \begin{Highlighting}[] \NormalTok{pred.resp.univar }\OtherTok{\textless{}{-}} \FunctionTok{PredictorResponseUnivar}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{sel=}\NormalTok{sel, }\AttributeTok{method=}\StringTok{"approx"}\NormalTok{)} \NormalTok{pred.resp.bivar }\OtherTok{\textless{}{-}} \FunctionTok{PredictorResponseBivar}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{min.plot.dist =} \DecValTok{1}\NormalTok{, }\AttributeTok{sel=}\NormalTok{sel, } \AttributeTok{method=}\StringTok{"approx"}\NormalTok{)} \NormalTok{pred.resp.bivar.levels }\OtherTok{\textless{}{-}} \FunctionTok{PredictorResponseBivarLevels}\NormalTok{(}\AttributeTok{pred.resp.df =}\NormalTok{ pred.resp.bivar, } \AttributeTok{Z =}\NormalTok{ mixture, }\AttributeTok{both\_pairs =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{qs =} \FunctionTok{c}\NormalTok{(}\FloatTok{0.25}\NormalTok{, }\FloatTok{0.5}\NormalTok{, }\FloatTok{0.75}\NormalTok{))} \NormalTok{risks.overall }\OtherTok{\textless{}{-}} \FunctionTok{OverallRiskSummaries}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{qs =} \FunctionTok{seq}\NormalTok{(}\FloatTok{0.25}\NormalTok{, }\FloatTok{0.75}\NormalTok{, }\AttributeTok{by =} \FloatTok{0.05}\NormalTok{), } \AttributeTok{q.fixed =} \FloatTok{0.5}\NormalTok{, }\AttributeTok{method =} \StringTok{"approx"}\NormalTok{,}\AttributeTok{sel=}\NormalTok{sel)} \NormalTok{risks.singvar }\OtherTok{\textless{}{-}} \FunctionTok{SingVarRiskSummaries}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{qs.diff =} \FunctionTok{c}\NormalTok{(}\FloatTok{0.25}\NormalTok{, }\FloatTok{0.75}\NormalTok{),} \AttributeTok{q.fixed =} \FunctionTok{c}\NormalTok{(}\FloatTok{0.25}\NormalTok{, }\FloatTok{0.50}\NormalTok{, }\FloatTok{0.75}\NormalTok{), }\AttributeTok{method =} \StringTok{"approx"}\NormalTok{)} \NormalTok{risks.int }\OtherTok{\textless{}{-}} \FunctionTok{SingVarIntSummaries}\NormalTok{(}\AttributeTok{fit =}\NormalTok{ temp, }\AttributeTok{qs.diff =} \FunctionTok{c}\NormalTok{(}\FloatTok{0.25}\NormalTok{, }\FloatTok{0.75}\NormalTok{),} \AttributeTok{qs.fixed =} \FunctionTok{c}\NormalTok{(}\FloatTok{0.25}\NormalTok{, }\FloatTok{0.75}\NormalTok{))} \end{Highlighting} \end{Shaded} The first three objects will allow us to examine the predictor-response functions, while the next three objects will calculate a range of summary statistics that highlight specific features of the surface. \hypertarget{univariate-dose-responses}{% \subsubsection{Univariate dose-responses}\label{univariate-dose-responses}} One cross section of interest is the univariate relationship between each covariate and the outcome, where all of the other exposures are fixed to a particular percentile. This can be done using the function \texttt{PredictorResponseUnivar}. The argument specifying the quantile at which to fix the other exposures is given by \texttt{q.fixed} (the default value is \texttt{q.fixed\ =\ 0.5}). 
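As a minimal sketch (not run here), fixing the remaining exposures at, say, their 75th percentile would only require adding the \texttt{q.fixed} argument to the call shown above; the object name \texttt{pred.resp.univar75} is arbitrary:

\begin{verbatim}
# Sketch: univariate exposure-response functions with the other exposures
# fixed at their 75th percentile rather than the default median
pred.resp.univar75 <- PredictorResponseUnivar(fit = temp, sel = sel,
                                              method = "approx", q.fixed = 0.75)
\end{verbatim}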
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(pred.resp.univar, }\FunctionTok{aes}\NormalTok{(z, est, }\AttributeTok{ymin =}\NormalTok{ est }\SpecialCharTok{{-}} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{se, }\AttributeTok{ymax =}\NormalTok{ est }\SpecialCharTok{+} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{se)) }\SpecialCharTok{+} \FunctionTok{geom\_smooth}\NormalTok{(}\AttributeTok{stat =} \StringTok{"identity"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ylab}\NormalTok{(}\StringTok{"h(z)"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{facet\_wrap}\NormalTok{(}\SpecialCharTok{\textasciitilde{}}\NormalTok{ variable) } \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save-1} } \caption{Univariate dose-response associations from BKMR}\label{fig:save} \end{figure} We can conclude from these figures that all selected covariates have weak to moderate associations, and that all dose-responses seem to be linear (maybe leaving some benefit of doubt to \(X_6\)). \hypertarget{bivariable-exposure-response-functions}{% \subsubsection{Bivariable Exposure-Response Functions}\label{bivariable-exposure-response-functions}} This visualizes the bivariate exposure-response function for two predictors, where all of the other predictors are fixed at a particular percentile. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ggplot}\NormalTok{(pred.resp.bivar, }\FunctionTok{aes}\NormalTok{(z1, z2, }\AttributeTok{fill =}\NormalTok{ est)) }\SpecialCharTok{+} \FunctionTok{geom\_raster}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{facet\_grid}\NormalTok{(variable2 }\SpecialCharTok{\textasciitilde{}}\NormalTok{ variable1) }\SpecialCharTok{+} \FunctionTok{scale\_fill\_gradientn}\NormalTok{(}\AttributeTok{colours=}\FunctionTok{c}\NormalTok{(}\StringTok{"\#0000FFFF"}\NormalTok{,}\StringTok{"\#FFFFFFFF"}\NormalTok{,}\StringTok{"\#FF0000FF"}\NormalTok{)) }\SpecialCharTok{+} \FunctionTok{xlab}\NormalTok{(}\StringTok{"expos1"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ylab}\NormalTok{(}\StringTok{"expos2"}\NormalTok{) }\SpecialCharTok{+} \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"h(expos1, expos2)"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{figure}[H] {\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save2-1} } \caption{Bivariate exposure-response associations from BKMR}\label{fig:save2} \end{figure} \hypertarget{interactions}{% \subsubsection{Interactions}\label{interactions}} The figure we just plotted might not be the most intuitive way of checking for interactions. An alternative approach is to investigate the predictor-response function of a single predictor in Z for the second predictor in Z fixed at various quantiles (and for the remaining predictors fixed to a particular value). These can be obtained using the \texttt{PredictorResponseBivarLevels} function, which takes as input the bivariate exposure-response function outputted from the previous command, where the argument \texttt{qs} specifies a sequence of quantiles at which to fix the second predictor. We can easily select a specific combination we want to present, like the X6-X12 one. 
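As a minimal sketch (not run here) of how that single combination could be extracted from the grid computed above and plotted, assuming the two exposures are stored under the names \texttt{x6} and \texttt{x12} as in our illustrative dataset:

\begin{verbatim}
# Sketch: keep only the panel where x6 varies and x12 is fixed at its quantiles
pred.x6.x12 <- subset(pred.resp.bivar.levels,
                      variable1 == "x6" & variable2 == "x12")

ggplot(pred.x6.x12, aes(z1, est)) +
  geom_smooth(aes(col = quantile), stat = "identity") +
  ggtitle("h(x6 | quantiles of x12)") +
  xlab("x6")
\end{verbatim}

The following chunk instead plots the full grid of pairwise exposure-response functions.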
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{ggplot}\NormalTok{(pred.resp.bivar.levels, }\FunctionTok{aes}\NormalTok{(z1, est)) }\SpecialCharTok{+} 
  \FunctionTok{geom\_smooth}\NormalTok{(}\FunctionTok{aes}\NormalTok{(}\AttributeTok{col =}\NormalTok{ quantile), }\AttributeTok{stat =} \StringTok{"identity"}\NormalTok{) }\SpecialCharTok{+} 
  \FunctionTok{facet\_grid}\NormalTok{(variable2 }\SpecialCharTok{\textasciitilde{}}\NormalTok{ variable1) }\SpecialCharTok{+}
  \FunctionTok{ggtitle}\NormalTok{(}\StringTok{"h(expos1 | quantiles of expos2)"}\NormalTok{) }\SpecialCharTok{+}
  \FunctionTok{xlab}\NormalTok{(}\StringTok{"expos1"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save3-1} }
\caption{Qualitative interaction assessment from BKMR}\label{fig:save3}
\end{figure}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save3fdg-1} }
\caption{Qualitative interaction assessment between X6 and x12 from BKMR}\label{fig:save3fdg}
\end{figure}

These figures do not provide any evidence of interactions throughout the mixture.

\hypertarget{overall-mixture-effect}{%
\subsubsection{Overall Mixture Effect}\label{overall-mixture-effect}}

Another interesting summary plot is the overall effect of the mixture, calculated by comparing the value of \(h\) when all of the predictors are at a particular percentile as compared to when all of them are at their 50th percentile. The function \texttt{OverallRiskSummaries} allows one to specify a sequence of quantile values using the argument \texttt{qs} and the fixed quantile (the default is the 50th percentile) using the argument \texttt{q.fixed}.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{ggplot}\NormalTok{(risks.overall, }\FunctionTok{aes}\NormalTok{(quantile, est, }\AttributeTok{ymin =}\NormalTok{ est }\SpecialCharTok{{-}} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{sd, }\AttributeTok{ymax =}\NormalTok{ est }\SpecialCharTok{+} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{sd)) }\SpecialCharTok{+} 
  \FunctionTok{geom\_hline}\NormalTok{(}\AttributeTok{yintercept=}\DecValTok{00}\NormalTok{, }\AttributeTok{linetype=}\StringTok{"dashed"}\NormalTok{, }\AttributeTok{color=}\StringTok{"gray"}\NormalTok{) }\SpecialCharTok{+} 
  \FunctionTok{geom\_pointrange}\NormalTok{() }\SpecialCharTok{+} \FunctionTok{scale\_y\_continuous}\NormalTok{(}\AttributeTok{name=}\StringTok{"estimate"}\NormalTok{) }
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save4-1} }
\caption{Overall Mixture Effect from BKMR}\label{fig:save4}
\end{figure}

In agreement with WQS, higher exposure to the overall mixture is associated with a higher mean outcome.

\hypertarget{single-variables-effects}{%
\subsubsection{Single variable effects}\label{single-variables-effects}}

This additional function summarizes the contribution of an individual predictor to the response. For example, we may wish to compare the risk when a single predictor in \(h\) is at the 75th percentile as compared to when that predictor is at its 25th percentile, where we fix all of the remaining predictors to a particular percentile.
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{ggplot}\NormalTok{(risks.singvar, }\FunctionTok{aes}\NormalTok{(variable, est, }\AttributeTok{ymin =}\NormalTok{ est }\SpecialCharTok{{-}} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{sd, }\AttributeTok{ymax =}\NormalTok{ est }\SpecialCharTok{+} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{sd, }
                          \AttributeTok{col =}\NormalTok{ q.fixed)) }\SpecialCharTok{+} 
  \FunctionTok{geom\_hline}\NormalTok{(}\FunctionTok{aes}\NormalTok{(}\AttributeTok{yintercept=}\DecValTok{0}\NormalTok{), }\AttributeTok{linetype=}\StringTok{"dashed"}\NormalTok{, }
             \AttributeTok{color=}\StringTok{"gray"}\NormalTok{) }\SpecialCharTok{+} 
  \FunctionTok{geom\_pointrange}\NormalTok{(}\AttributeTok{position =} \FunctionTok{position\_dodge}\NormalTok{(}\AttributeTok{width =} \FloatTok{0.75}\NormalTok{)) }\SpecialCharTok{+} 
  \FunctionTok{coord\_flip}\NormalTok{() }\SpecialCharTok{+} 
  \FunctionTok{theme}\NormalTok{(}\AttributeTok{legend.position=}\StringTok{"none"}\NormalTok{)}\SpecialCharTok{+}\FunctionTok{scale\_x\_discrete}\NormalTok{(}\AttributeTok{name=}\StringTok{""}\NormalTok{) }\SpecialCharTok{+} 
  \FunctionTok{scale\_y\_continuous}\NormalTok{(}\AttributeTok{name=}\StringTok{"estimate"}\NormalTok{) }
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save5-1} }
\caption{Individual effects from BKMR}\label{fig:save5}
\end{figure}

\hypertarget{single-variable-interaction-terms}{%
\subsubsection{Single Variable Interaction Terms}\label{single-variable-interaction-terms}}

Finally, this function is similar to the previous one, but refers to the interaction of a single exposure with all other covariates. It attempts to represent an overall interaction between that exposure and all other components.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{ggplot}\NormalTok{(risks.int, }\FunctionTok{aes}\NormalTok{(variable, est, }\AttributeTok{ymin =}\NormalTok{ est }\SpecialCharTok{{-}} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{sd, }\AttributeTok{ymax =}\NormalTok{ est }\SpecialCharTok{+} \FloatTok{1.96}\SpecialCharTok{*}\NormalTok{sd)) }\SpecialCharTok{+} 
  \FunctionTok{geom\_pointrange}\NormalTok{(}\AttributeTok{position =} \FunctionTok{position\_dodge}\NormalTok{(}\AttributeTok{width =} \FloatTok{0.75}\NormalTok{)) }\SpecialCharTok{+} 
  \FunctionTok{geom\_hline}\NormalTok{(}\AttributeTok{yintercept =} \DecValTok{0}\NormalTok{, }\AttributeTok{lty =} \DecValTok{2}\NormalTok{, }\AttributeTok{col =} \StringTok{"brown"}\NormalTok{) }\SpecialCharTok{+} 
  \FunctionTok{coord\_flip}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\begin{figure}[H]
{\centering \includegraphics[width=0.8\linewidth]{bookdown-demo_files/figure-latex/save6-1} }
\caption{Individual interaction effects from BKMR}\label{fig:save6}
\end{figure}

Consistent with what we saw before, this graph also suggests that we have no evidence of interaction for any covariate (which, as we know from the simulated data, is true).

\hypertarget{hierarchical-selection}{%
\subsection{Hierarchical selection}\label{hierarchical-selection}}

The variable selection procedure embedded in BKMR can also operate within a hierarchical procedure. Using our example, we could for instance inform the model that there are highly correlated clusters of exposures. This will allow us to get an estimate of the relative importance of each cluster and of each exposure within it.
The procedure is implemented as follows, where we are specifically informing the model that there is a cluster of three highly correlated covariates:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{hier }\OtherTok{\textless{}{-}} \FunctionTok{kmbayes}\NormalTok{(}\AttributeTok{y=}\NormalTok{y, }\AttributeTok{Z=}\NormalTok{mixture, }\AttributeTok{X=}\NormalTok{covariates, }\AttributeTok{iter=}\DecValTok{1000}\NormalTok{, }\AttributeTok{verbose=}\ConstantTok{FALSE}\NormalTok{, }\AttributeTok{varsel=}\ConstantTok{TRUE}\NormalTok{, }
                \AttributeTok{knots=}\NormalTok{knots100, }\AttributeTok{groups=}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))}
\FunctionTok{ExtractPIPs}\NormalTok{(hier)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##    variable group groupPIP   condPIP
## 1        x1     1    1.000 0.0000000
## 2        x2     1    1.000 0.0000000
## 3        x3     2    0.044 0.3636364
## 4        x4     2    0.044 0.4545455
## 5        x5     2    0.044 0.1818182
## 6        x6     1    1.000 0.0160000
## 7        x7     1    1.000 0.0040000
## 8        x8     1    1.000 0.0620000
## 9        x9     1    1.000 0.0540000
## 10      x10     1    1.000 0.0000000
## 11      x11     1    1.000 0.0020000
## 12      x12     1    1.000 0.4500000
## 13      x13     1    1.000 0.1460000
## 14      x14     1    1.000 0.2660000
\end{verbatim}

Group PIPs seem to point out that the cluster is somehow relevant in the dose-response association, and the conditional PIPs indicate that \(X_4\) might be the most relevant of the three exposures.

\hypertarget{extensions}{%
\subsection{Extensions}\label{extensions}}

The first release of BKMR was only available for evaluating continuous outcomes, but recent work has extended its use to the context of binary outcomes, which is also integrated in the latest versions of the package (a minimal sketch of such a call is provided at the end of this subsection). \citet{domingo2019association} have also described how to apply BKMR with time-to-event outcomes.

Additional extensions of the approach that could be of interest in several settings also include a longitudinal version of BKMR based on lagged regression, which can be used to evaluate time-varying mixtures (\citet{liu2018lagged}). While this method is not yet implemented in the package, it is important to note that similar results can be achieved by evaluating time-varying effects through hierarchical selection. In brief, multiple measurements of exposures can be included simultaneously in the kernel, grouping exposures by time. An example of this application can be found in \citet{tyagi2021identifying}, evaluating exposures to phthalates during pregnancy, measured at different trimesters, as they relate to final gestational weight. By providing a measure of group importance, group PIPs can here be interpreted as measures of relative importance of the time-windows of interest, thus allowing a better understanding of the timing of higher susceptibility to mixture exposures.

Finally, we have described how BKMR can provide a graphical qualitative assessment of interaction. Some additional work is being conducted to formally provide measures of interaction and is briefly presented \href{https://github.com/jantonelli111/NLinteraction}{here}.
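As an illustration of the binary-outcome extension mentioned above, a minimal sketch (not run here) would look like the following, where \texttt{ybin} denotes a hypothetical binary outcome and \texttt{temp\_bin} is an arbitrary object name:

\begin{verbatim}
# Sketch: BKMR with a binary outcome, with variable selection
temp_bin <- kmbayes(y = ybin, Z = mixture, X = covariates, iter = 1000,
                    family = "binomial", varsel = TRUE)
ExtractPIPs(temp_bin)
\end{verbatim}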
\hypertarget{practical-considerations-and-discussion}{%
\subsection{Practical considerations and discussion}\label{practical-considerations-and-discussion}}

To conclude our presentation of BKMR, let's list some useful considerations that one should take into account when applying this methodology:

\begin{itemize}
\tightlist
\item
  As a Bayesian technique, prior information could be specified on the model parameters. Nevertheless, this is not commonly done, and all code presented here assumes the use of non-informative priors. In general, it is good to remember that PIP values can be sensitive to priors (although the relative importance tends to be stable).
\item
  Because of their sensitivity, PIP values can only be interpreted as a relative measure of importance (i.e.~as a ranking of the importance of exposures). Several applied papers have been using thresholds (e.g.~0.5) to define a variable as ``important'', but this interpretation is erroneous and misleading.
\item
  The BKMR algorithm is more stable when it isn't dealing with exposures on vastly different scales. We typically center and scale both the outcome and the exposures (and continuous confounders). Similarly, we should be wary of exposure outliers, and log-transforming exposures is also recommended.
\item
  BKMR operates a variable selection procedure. As such, a PIP of 0 will imply that the dose-response for that covariate is a straight line at zero. This does not mean that a given exposure has no effect on the outcome, but simply that it was not selected in the procedure. As a matter of fact, when an exposure has a weak effect on the outcome BKMR will mostly tend to exclude it. As a consequence, the overall mixture effect will really represent the overall effect of the selected exposures.
\item
  As a Bayesian technique, BKMR is not based on the classical statistical framework of null-hypothesis testing. 95\% CIs are interpreted as credible intervals, and common discussions on statistical power should be avoided.
\item
  Despite the estimation improvements through the use of knots as previously described, fitting a BKMR model remains time-consuming. In practice, you might be able to fit a BKMR model on a dataset of up to 10,000 individuals (still waiting a few hours to get your results). For larger datasets, alternative approaches should be considered.
\item
  BKMR is a flexible non-parametric method that is designed to deal with complex settings with non-linearities and interactions. In standard settings, regression methods could provide a better estimation and an easier interpretation of results. In practical terms, you would never begin your analysis by fitting a BKMR model, but only get to it for results validation or if alternative techniques were not sufficiently equipped to deal with your data.
\end{itemize}

\hypertarget{assessing-interactions}{%
\section{Assessing interactions}\label{assessing-interactions}}

\hypertarget{tree-based-modeling}{%
\subsection{Tree-based modeling}\label{tree-based-modeling}}

In settings where one is interested in formally evaluating interactions, unique challenges are involved. First, we already discussed how evaluating several covariates and high-order interactions within a regression framework will rapidly increase the number of parameters to be estimated, and the resulting complexity of the model will make classical regression techniques of little use.
Summary and classification approaches like WQS will not be able to provide an estimate of interaction effects, and we have just discussed how BKMR can only provide some qualitative assessment of interactions, and only among those exposures that have passed the selection procedure. To account for the complexity of joint effects and high-dimensional interactions, one should consider techniques that have been specifically developed to deal with complex and big data.

One machine learning (ML) approach that can be useful in the context of interaction analysis, and specifically when evaluating environmental exposures, is the application of boosted regression trees (BRT). BRT is a tree-based modeling technique that can be used to evaluate complex high-dimensional interactions among several variables, which can be continuous, categorical, or binary. Boosted trees are designed to improve the performance of classification and regression trees (CARTs), which partition the data into several disjoint regions, approximating the outcome as constant within these regions. CARTs can account for complex interactions by conditioning subsequent splits on previous ones, a feature that is controlled by a ``depth'' option. Higher depths correspond to accounting for higher-order interactions. In practical terms, this implies that by modifying the depth option of the algorithm we can incorporate an increasingly higher number of interaction orders. How many interaction orders should be evaluated, together with the other parameters of the model, is identified by the machine through cross-validation techniques. Boosted trees improve the predictive performance of a single CART by combining several weak learners to accurately identify a set of explanatory variables that are associated with the outcome. The improved predictive performance, however, will come at the expense of an easy interpretation. Specifically, the output of a BRT will provide identification of variable importance, partial dependence plots, and an interaction hierarchy, but will not provide effect estimates for each variable or interaction as in classical regression.

A BRT model will provide the following objects as output:

\begin{itemize}
\item
  Variable importance: this is based on how many times each variable is involved in a split, capturing its independent predictive power with respect to the outcome. This measure holds the same interpretation as PIPs in BKMR.
\item
  Dependence plots: similarly to the univariate dose-responses in BKMR, these provide a graphical visualization of the fitted function that presents the association between one or more predictors and the outcome. These plots are especially helpful with continuous predictors, but let's stress that this technique can be used with any kind of exposures.
\item
  H-statistics: these are the unique measures of interaction relevance, which indicate, for any pair of predictors, the fraction of variance that is not captured by the sum of the two fitted response functions. Of importance, depending on the depth of the algorithm, H-statistics can be calculated for all levels of interaction, including 2-way and higher. These measures do not provide a summary of relative importance (i.e.~they do not sum up to 1) but rather indicate a ranking of importance of interactions.
\end{itemize}

For more details on boosted trees we refer to previous publications (\citet{lampa2014identification}) and \href{http://uc-r.github.io/gbm_regression}{online documentation}; a minimal sketch of how such a model could be fitted in R is shown below.
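To make the workflow concrete, here is a minimal sketch (not run here) of a BRT analysis using the \texttt{gbm} R package, one of several available implementations and the one covered in the online documentation linked above. The tuning values are arbitrary illustrative choices, and the variables are those of our illustrative dataset:

\begin{verbatim}
library(gbm)

set.seed(123)
brt <- gbm(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 +
             x11 + x12 + x13 + x14 + z1 + z2 + z3,
           data = data2, distribution = "gaussian",
           n.trees = 1000,           # number of boosting iterations
           interaction.depth = 3,    # tree depth: allows up to 3-way interactions
           shrinkage = 0.01,         # learning rate
           cv.folds = 10)            # 10-fold cross-validation

best.iter <- gbm.perf(brt, method = "cv")      # number of trees chosen by CV

summary(brt, n.trees = best.iter)              # relative variable importance
plot(brt, i.var = "x12", n.trees = best.iter)  # partial dependence plot for x12

# Friedman's H-statistic for a candidate pairwise interaction
interact.gbm(brt, data = data2, i.var = c("x6", "x12"), n.trees = best.iter)
\end{verbatim}

In a full analysis, the H-statistic would typically be computed for all pairs (or higher-order combinations) of predictors and used to rank candidate interactions, as discussed above.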
In the applied study presented in the next subsection (\citet{bellavia2021joint}), for instance, the parameters of the final model were selected using 10-fold cross-validation on 75\% of the data (training sample), estimating the model separately among men and women and selecting the model with the lowest root mean squared error (RMSE).

\hypertarget{interaction-screening-and-regression-approaches}{%
\subsection{Interaction screening and regression approaches}\label{interaction-screening-and-regression-approaches}}

Let us stress once more that both BKMR, which provides a qualitative graphical assessment of interactions, and BRT models, which allow estimating H-statistics to rank interactions of different orders, do not provide direct estimates of, or tests for, interaction effects. For this reason, a recommended practice is to use these techniques as interaction screening procedures and employ a 2-step approach in which selected interactions are then evaluated in a final regression model. As an illustrative example, we used this approach in a recent paper to identify 2-way interactions between occupational exposures and health factors that we later integrated into a regression model evaluating the effect of this mixture on ALS risk (\citet{bellavia2021joint}).

\begin{figure}
\centering
\includegraphics{images/hstats.png}
\caption{H-statistics from Bellavia et al.}
\end{figure}

\hypertarget{additional-topics-and-final-remarks}{%
\chapter{Additional topics and final remarks}\label{additional-topics-and-final-remarks}}

The aim of this last section is to provide a very brief introduction to additional topics that are often of relevance when investigating the health effects of multiple environmental exposures. First, we will provide a general overview of the extent to which what has been discussed so far can be evaluated from a causal inference perspective. Next, we will describe some relatively common situations where additional methodological considerations are required, namely the presence of zero-inflated or binary exposures. Finally, we will present an introductory overview of approaches that allow incorporating multiple exposures in mediation analysis, which is often a primary goal of exposome research.

\hypertarget{causal-mixture-effects}{%
\section{Causal mixture effects}\label{causal-mixture-effects}}

To improve our understanding of the associations between environmental exposures and health outcomes, and to facilitate the development of more stringent public health regulations and interventions, it is important to determine to what extent these associations reflect causal relationships. To establish causal links, researchers are advocating the use of a pluralistic approach, in terms of study design, to reduce the potential harm due to typical epidemiological biases such as confounding or selection bias, as well as in terms of statistical methodologies for causal inference (\citet{vandenbroucke2016causality}), (\citet{dominici2017best}). In the specific case of environmental exposures, this switch from association to causation has to account for the co-occurrence of multiple components or constituents, present in the real world as a complex mixture. At this time, regulatory policies are still mostly designed to regulate one pollutant or chemical at a time, thus hampering the implementation of integrated policies and possibly resulting in uncertainties about the exact impact of regulations.
For these reasons, several researchers, as well as both governmental and private institutions, are increasingly advocating for more research that could improve our understanding of the causal effects of environmental mixtures, evaluated as complex, high-dimensional exposure settings.

The first step towards improving the interpretation of findings from a causal perspective is to focus on study design and pre-analytic considerations. The paper from \citet{dominici2017best} provides an excellent introduction that tackles these issues in the context of air pollution epidemiology, but it can easily be extended to any set of environmental exposures. Another important contribution in terms of pre-analytic aspects was provided by Weisskopf and Webster (\citet{weisskopf2018bias}),(\citet{webster2020epidemiology}), who have discussed the issue of bias amplification when evaluating environmental mixtures. Their work directly addresses issues of co-confounding related bias, presenting directed acyclic graphs (DAGs) in different contexts of interest.

After these pre-analytic aspects have been taken into consideration, the focus can be transferred to the statistical approaches that can be used to improve the causal understanding of mixture-outcome associations. Here several points should be mentioned:

\begin{itemize}
\item
  As the exposure mixture gets more and more complex, the time spent on the pre-processing phase (unsupervised analysis) will be more and more important.
\item
  After this pre-processing phase, the assessment of the mixture-outcome effect should be conducted in two stages. First, by using some of the techniques described here (WQS, BKMR, tree-based methods \ldots) one can identify a set of exposures (and interactions) that can be included in a causal model that will later be investigated in a secondary step.
\item
  This 2-stage approach is highly recommended because most of the available methodologies for causal inference are based on extensions of regression techniques (e.g.~propensity score, difference in differences, marginal structural models, inverse probability weighting). If the setting is not too complex (i.e.~those settings where multiple regression is a potentially good choice), one can directly build the regression-based causal inference technique. A good introduction to causal inference techniques based on regression that can be useful in environmental epidemiology was provided by \citet{bind2019causal}.
\item
  Out of the possible methods for causal inference, a versatile option in the context of environmental mixtures is the multivariate version of the generalized propensity score (\href{https://github.com/williazo/mvGPS}{mvGPS}), which we have applied and described in the context of air pollution epidemiology in a recent publication (Traini et al.~under review).
\item
  Finally, it is useful to remember that one of the recent extensions of WQS (quantile g-computation) was developed with the aim of improving the causal interpretation of the estimated weights and overall effect, and could be used to provide a validation of the cumulative mixture effect from a causal perspective.
\end{itemize}

\hypertarget{binary-and-zero-inflated-exposures}{%
\section{Binary and zero-inflated exposures}\label{binary-and-zero-inflated-exposures}}

The settings that we have described so far made the implicit assumption that we are dealing with a set of multiple continuous exposures (e.g.~concentrations of chemicals or pollutants) of joint interest.
One important caveat, however, is that continuous exposures evaluated in this context are usually highly skewed (they are strictly non-negative). Log-transformations are commonly used, but these are ineffective when several values are zero. Zero-inflated exposures are skewed covariates with a large number of zeros, typically occurring in environmental epidemiology when several individuals have values below the limit of detection. Removing those individuals from the study (that is, treating these values as missing) might reduce power and, most importantly, does not reflect real levels of exposure (it would silence all effects occurring at low levels of exposure). Common alternative options include dichotomization of each exposure into detected/non-detected, the use of categorical exposures, or imputation of non-detected values. Even in the latter case, however, in the presence of a high number of zeros we would end up with inflated covariates in which a large proportion of individuals share the same exposure value (in practical terms, we might find it hard to really consider the exposure as continuous). If one wants to include zero-inflated covariates in the mixture without any transformation, available techniques include zero-inflated Poisson models (ZIP), zero-inflated negative binomial models (ZINB), or \href{https://data.library.virginia.edu/getting-started-with-hurdle-models/}{hurdle models}.

When exposures are instead dichotomized (or, in general, when the interest is to evaluate multiple binary exposures), some additional techniques can be considered:

\begin{itemize}
\tightlist
\item First of all, evaluating the crude association between binary exposures, as we presented earlier with the correlation matrix, can be done using the \(\phi\) coefficient, with \(\phi=\sqrt{\chi^2/n}\).
\item Correspondence analysis: this will graphically display all covariates based on their proximity. We can think of this approach as an unsupervised method to investigate and depict patterns of exposures.
\item Hierarchical models and penalized methods can be used with binary exposures. If all covariates are binary, you may prefer not to standardize in order to improve interpretation.\\
\item For high-dimensional data, extensions of the regression and classification tree approaches for binary data have been developed, both unsupervised and supervised (e.g.~CART/MARS, logic regression). BRT can be used with binary exposures.
\end{itemize}

\hypertarget{mediation-analysis}{%
\section{Mediation analysis}\label{mediation-analysis}}

Mediation analysis is a common approach to investigate causal pathways relating the exposure(s) to the outcome of interest. When evaluating environmental mixtures, there are several settings where our mixture of interest is only a component of an even larger picture. For example, we may want to integrate sources of exposures, or evaluate the contribution of environmental chemicals to health disparities. Our mixture, in these cases, is a mediator of a given \(X-Y\) association. In other settings, we might be interested in the mechanisms through which the mixture affects the outcome. The mixture here is the exposure in a mediation model. We can also have several mixtures affecting each other, or potential causal dependencies within the mixture.
In the general framework of exposome analysis, the underlying hypothesis is that a set of multiple exogenous exposures (the external exposome) affects the complex set of biomarkers at the microbiome level (the internal exposome), thus contributing to the development of health effects. This structure explicitly makes assumptions in terms of mediation:

\begin{figure}
\centering
\includegraphics{images/exposome.jpg}
\caption{Vrijheid et al.~Thorax. 2014}
\end{figure}

The following DAG, which we presented in an introductory publication (\citet{bellavia2018multiple}), depicts an integrative framework for environmental exposures (E), lifestyle and behavioral factors (B), and social constructs (X); such a framework may be complex, but it has the potential to elucidate mechanisms through which diseases are caused.

\begin{figure}
\centering
\includegraphics{images/mediation.png}
\caption{Bellavia et al.~Env. Epi. 2018}
\end{figure}

Integrating methods for environmental exposures into mediation analysis has been the goal of several recent papers, which the reader could refer to for further details (\citet{bellavia2019approaches}), (\citet{blum2020challenges}), (\citet{devick2018bayesian}). These methods have been largely unexplored in applied studies and may represent a critical tool to further identify the mechanisms through which the exposome affects human health. A recent R function was also developed to integrate BKMR into a mediation analysis context (\citet{wang2020bkmr}).

\hypertarget{conclusion}{%
\section{Conclusion}\label{conclusion}}

The goal of this introductory course was to discuss the challenges involved in the study of environmental mixtures as they relate to health outcomes, and to introduce the most common statistical approaches that can help address some of these challenges. The core points of the discussion were the following:

\begin{itemize}
\item Environmental mixtures represent the way environmental exposures occur and operate in real life and, as such, should be integrated and evaluated in environmental epidemiology studies. This involves a series of analytic and statistical issues that should be carefully addressed and taken into account.
\item A first critical classification of statistical methodologies is the one between supervised and unsupervised approaches. It is always recommended to begin the analyses with a thorough pre-processing phase that involves unsupervised techniques. These will help identify clusters and groupings of exposures, high levels of correlation, missingness, and the presence of inflated covariates, crucially guiding subsequent steps.
\item When incorporating the outcome into the picture (supervised approaches), it is always recommended to begin with regression-based approaches. These provide unique advantages and most of the time will provide a good fit to the question of interest.
\item Specific methods have been developed to address environmental mixtures when regression techniques are of little use or more flexible approaches are required. This occurs, for example, when high-dimensional interactions are of interest, if most associations are non-linear, or if the primary interest is in retrieving the cumulative mixture effect. Generally, all techniques come with some limitations, and it is always recommended to test several methods and validate results from different perspectives.
\item With a very large number of exposures and/or interactions, machine learning (ML) techniques should be considered.
Recent extensions of random forests such as gradient boosting machines (or boosted regression trees) provide several advantages in this context. Proceeding with different layers of analysis, for example using ML results to build a second-step regression model, is recommended.
\item Most current methods are available and well documented/presented in the statistical software R.
\item In general, when dealing with environmental exposures, the choice of methods should be driven by the research question of interest.

\begin{itemize}
\tightlist
\item Are there exposure patterns? (unsupervised analysis, e.g.~PCA)
\item What are the effects of individual exposures within the mixture? (regression methods, BKMR)
\item What are the most important contributors to the association? (PIPs in BKMR, weights from WQS, selected covariates in elastic net \ldots)
\item What is the overall (cumulative) effect of the mixture? (regression methods, WQS)
\item Are there interactions (or even synergy) between chemicals? (tree-based modeling, BKMR, regression methods)
\end{itemize}
\end{itemize}

Several papers have discussed all these techniques and provide further guidance on choosing the correct approach (\citet{hamra2018environmental}), (\citet{stafoggia2017statistical}), (\citet{gibson2019overview}). Finally, it is useful to remember that the material presented here is just a selection of topics out of a wide and fast-growing research area. Methodological extensions and new applications are continuously published, and it is crucial for researchers working in this area to keep up with the literature.

\bibliography{book.bib,packages.bib}

\end{document}
%% %% Automatically generated file from DocOnce source %% (https://github.com/hplgit/doconce/) %% %% % #ifdef PTEX2TEX_EXPLANATION %% %% The file follows the ptex2tex extended LaTeX format, see %% ptex2tex: http://code.google.com/p/ptex2tex/ %% %% Run %% ptex2tex myfile %% or %% doconce ptex2tex myfile %% %% to turn myfile.p.tex into an ordinary LaTeX file myfile.tex. %% (The ptex2tex program: http://code.google.com/p/ptex2tex) %% Many preprocess options can be added to ptex2tex or doconce ptex2tex %% %% ptex2tex -DMINTED myfile %% doconce ptex2tex myfile envir=minted %% %% ptex2tex will typeset code environments according to a global or local %% .ptex2tex.cfg configure file. doconce ptex2tex will typeset code %% according to options on the command line (just type doconce ptex2tex to %% see examples). If doconce ptex2tex has envir=minted, it enables the %% minted style without needing -DMINTED. % #endif % #define PREAMBLE % #ifdef PREAMBLE %-------------------- begin preamble ---------------------- \documentclass[% twoside, % oneside: electronic viewing, twoside: printing final, % or draft (marks overfull hboxes, figures with paths) 10pt]{article} \listfiles % print all files needed to compile this document \usepackage{relsize,makeidx,color,setspace,amsmath,amsfonts} \usepackage[table]{xcolor} \usepackage{bm,microtype} \usepackage{ptex2tex} % #ifdef MINTED \usepackage{minted} \usemintedstyle{default} % #endif \usepackage[T1]{fontenc} %\usepackage[latin1]{inputenc} \usepackage{ucs} \usepackage[utf8x]{inputenc} \usepackage{lmodern} % Latin Modern fonts derived from Computer Modern % Hyperlinks in PDF: \definecolor{linkcolor}{rgb}{0,0,0.4} \usepackage{hyperref} \hypersetup{ breaklinks=true, colorlinks=true, linkcolor=linkcolor, urlcolor=linkcolor, citecolor=black, filecolor=black, %filecolor=blue, pdfmenubar=true, pdftoolbar=true, bookmarksdepth=3 % Uncomment (and tweak) for PDF bookmarks with more levels than the TOC } %\hyperbaseurl{} % hyperlinks are relative to this root \setcounter{tocdepth}{2} % number chapter, section, subsection \usepackage[framemethod=TikZ]{mdframed} % --- begin definitions of admonition environments --- % --- end of definitions of admonition environments --- % prevent orhpans and widows \clubpenalty = 10000 \widowpenalty = 10000 % --- end of standard preamble for documents --- % insert custom LaTeX commands... 
\raggedbottom \makeindex %-------------------- end preamble ---------------------- \begin{document} % #endif % ------------------- main content ---------------------- % ----------------- title ------------------------- \thispagestyle{empty} \begin{center} {\LARGE\bf \begin{spacing}{1.25} How to parallelize a Variational Monte Carlo code with MPI and OpenMP \end{spacing} } \end{center} % ----------------- author(s) ------------------------- \begin{center} {\bf Morten Hjorth-Jensen${}^{1, 2}$} \\ [0mm] \end{center} \begin{center} % List of all institutions: \centerline{{\small ${}^1$National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA}} \centerline{{\small ${}^2$Department of Physics, University of Oslo, Oslo, Norway}} \end{center} % ----------------- end author(s) ------------------------- \begin{center} % date Spring 2015 \end{center} \vspace{1cm} % !split \subsection{Your background} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item You have some experience in programming but have never tried to parallelize your codes \item Here I will base my examples on C/C++ using Message Passing Interface (MPI) and OpenMP. \item I will also give you some simple hints on how to run and install codes on your laptop/office PC \item The programs and slides can be found at the weblink \item Good text: Karniadakis and Kirby, Parallel Scientific Computing in C++ and MPI, Cambridge. \end{itemize} \noindent We will discuss Message passing interface (MPI) and OpenMP. % --- end paragraph admon --- % !split ===== ===== % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Develop codes locally, run with some few processes and test your codes. Do benchmarking, timing and so forth on local nodes, for example your laptop or PC. You can install MPICH2 on your laptop/PC. \item Test by typing \emph{which mpd} \item When you are convinced that your codes run correctly, you start your production runs on available supercomputers, in our case \emph{smaug} locally (to be discussed after Easter). \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{How do I run MPI on a PC/Laptop?} % --- begin paragraph admon --- \paragraph{} Most machines at computer labs at UiO are quad-cores \begin{itemize} \item Compile with mpicxx or mpic++ or mpif90 \item Set up collaboration between processes and run \end{itemize} \noindent \bcppcod mpd --ncpus=4 & # run code with mpiexec -n 4 ./nameofprog \ecppcod Here we declare that we will use 4 processes via the \emph{-ncpus} option and via $-n 4$ when running. \begin{itemize} \item End with \emph{mpdallexit} \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Can I do it on my own PC/laptop?} % --- begin paragraph admon --- \paragraph{} Of course: \begin{itemize} \item go to the website of \href{{http://www.mcs.anl.gov/research/projects/mpich2/}}{Argonne National Lab} \item follow the instructions and install it on your own PC/laptop \item Versions for Ubuntu/Linux, windows and mac \item For windows, you may think of installing WUBI \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{What is Message Passing Interface (MPI)?} % --- begin paragraph admon --- \paragraph{} MPI is a library, not a language. It specifies the names, calling sequences and results of functions or subroutines to be called from C/C++ or Fortran programs, and the classes and methods that make up the MPI C++ library. 
The programs that users write in Fortran, C or C++ are compiled with ordinary compilers and linked with the MPI library. MPI programs should be able to run on all possible machines and run all MPI implementetations without change. An MPI computation is a collection of processes communicating with messages. % --- end paragraph admon --- % !split \subsection{Going Parallel with MPI} % --- begin paragraph admon --- \paragraph{} \textbf{Task parallelism}: the work of a global problem can be divided into a number of independent tasks, which rarely need to synchronize. Monte Carlo simulations or numerical integration are examples of this. MPI is a message-passing library where all the routines have corresponding C/C++-binding \bcppcod MPI_Command_name \ecppcod and Fortran-binding (routine names are in uppercase, but can also be in lower case) \bforcod MPI_COMMAND_NAME \eforcod % --- end paragraph admon --- % !split \subsection{MPI is a library} % --- begin paragraph admon --- \paragraph{} MPI is a library specification for the message passing interface, proposed as a standard. \begin{itemize} \item independent of hardware; \item not a language or compiler specification; \item not a specific implementation or product. \end{itemize} \noindent A message passing standard for portability and ease-of-use. Designed for high performance. Insert communication and synchronization functions where necessary. % --- end paragraph admon --- % !split \subsection{The basic ideas of parallel computing} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Pursuit of shorter computation time and larger simulation size gives rise to parallel computing. \item Multiple processors are involved to solve a global problem. \item The essence is to divide the entire computation evenly among collaborative processors. Divide and conquer. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{A rough classification of hardware models} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Conventional single-processor computers can be called SISD (single-instruction-single-data) machines. \item SIMD (single-instruction-multiple-data) machines incorporate the idea of parallel processing, which use a large number of processing units to execute the same instruction on different data. \item Modern parallel computers are so-called MIMD (multiple-instruction-multiple-data) machines and can execute different instruction streams in parallel on different data. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Shared memory and distributed memory} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item One way of categorizing modern parallel computers is to look at the memory configuration. \item In shared memory systems the CPUs share the same address space. Any CPU can access any data in the global memory. \item In distributed memory systems each CPU has its own memory. \end{itemize} \noindent The CPUs are connected by some network and may exchange messages. % --- end paragraph admon --- % !split \subsection{Different parallel programming paradigms} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item \textbf{Task parallelism}: the work of a global problem can be divided into a number of independent tasks, which rarely need to synchronize. Monte Carlo simulation is one example. Integration is another. However this paradigm is of limited use. 
\item \textbf{Data parallelism}: use of multiple threads (e.g.~one thread per processor) to dissect loops over arrays etc. This paradigm requires a single memory address space. Communication and synchronization between processors are often hidden, thus easy to program. However, the user surrenders much control to a specialized compiler. Examples of data parallelism are compiler-based parallelization and OpenMP directives. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Different parallel programming paradigms} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item \textbf{Message passing}: all involved processors have an independent memory address space. The user is responsible for partitioning the data/work of a global problem and distributing the subproblems to the processors. Collaboration between processors is achieved by explicit message passing, which is used for data transfer plus synchronization. \item This paradigm is the most general one where the user has full control. Better parallel efficiency is usually achieved by explicit message passing. However, message-passing programming is more difficult. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{SPMD (single-program-multiple-data)} % --- begin paragraph admon --- \paragraph{} Although message-passing programming supports MIMD, it suffices with an SPMD (single-program-multiple-data) model, which is flexible enough for practical cases: \begin{itemize} \item Same executable for all the processors. \item Each processor works primarily with its assigned local data. \item Progression of code is allowed to differ between synchronization points. \item Possible to have a master/slave model. The standard option in Monte Carlo calculations and numerical integration. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Today's situation of parallel computing} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Distributed memory is the dominant hardware configuration. There is a large diversity in these machines, from MPP (massively parallel processing) systems to clusters of off-the-shelf PCs, which are very cost-effective. \item Message-passing is a mature programming paradigm and widely accepted. It often provides an efficient match to the hardware. It is primarily used for the distributed memory systems, but can also be used on shared memory systems. \item Modern nodes have nowadays several cores, which makes it interesting to use both shared memory (the given node) and distributed memory (several nodes with communication). This leads often to codes which use both MPI and OpenMP. \end{itemize} \noindent Our lectures will focus on both MPI and OpenMP. % --- end paragraph admon --- % !split \subsection{Overhead present in parallel computing} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item \textbf{Uneven load balance}: not all the processors can perform useful work at all time. \item \textbf{Overhead of synchronization} \item \textbf{Overhead of communication} \item \textbf{Extra computation due to parallelization} \end{itemize} \noindent Due to the above overhead and that certain part of a sequential algorithm cannot be parallelized we may not achieve an optimal parallelization. % --- end paragraph admon --- % !split \subsection{Parallelizing a sequential algorithm} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Identify the part(s) of a sequential algorithm that can be executed in parallel. 
This is the difficult part, \item Distribute the global work and data among $P$ processors. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Bindings to MPI routines} % --- begin paragraph admon --- \paragraph{} MPI is a message-passing library where all the routines have corresponding C/C++-binding \bcppcod MPI_Command_name \ecppcod and Fortran-binding (routine names are in uppercase, but can also be in lower case) \bforcod MPI_COMMAND_NAME \eforcod The discussion in these slides focuses on the C++ binding. % --- end paragraph admon --- % !split \subsection{Communicator} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item A group of MPI processes with a name (context). \item Any process is identified by its rank. The rank is only meaningful within a particular communicator. \item By default communicator $MPI\_COMM\_WORLD$ contains all the MPI processes. \item Mechanism to identify subset of processes. \item Promotes modular design of parallel libraries. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Some of the most important MPI functions} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item $MPI\_Init$ - initiate an MPI computation \item $MPI\_Finalize$ - terminate the MPI computation and clean up \item $MPI\_Comm\_size$ - how many processes participate in a given MPI communicator? \item $MPI\_Comm\_rank$ - which one am I? (A number between 0 and size-1.) \item $MPI\_Send$ - send a message to a particular process within an MPI communicator \item $MPI\_Recv$ - receive a message from a particular process within an MPI communicator \item $MPI\_reduce$ or $MPI\_Allreduce$, send and receive messages \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{The first MPI C/C++ program} % --- begin paragraph admon --- \paragraph{} Let every process write "Hello world" (oh not this program again!!) on the standard output. \bcppcod using namespace std; #include <mpi.h> #include <iostream> int main (int nargs, char* args[]) { int numprocs, my_rank; // MPI initializations MPI_Init (&nargs, &args); MPI_Comm_size (MPI_COMM_WORLD, &numprocs); MPI_Comm_rank (MPI_COMM_WORLD, &my_rank); cout << "Hello world, I have rank " << my_rank << " out of " << numprocs << endl; // End MPI MPI_Finalize (); \ecppcod % --- end paragraph admon --- % !split \subsection{The Fortran program} % --- begin paragraph admon --- \paragraph{} \bforcod PROGRAM hello INCLUDE "mpif.h" INTEGER:: size, my_rank, ierr CALL MPI_INIT(ierr) CALL MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr) CALL MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, ierr) WRITE(*,*)"Hello world, I've rank ",my_rank," out of ",size CALL MPI_FINALIZE(ierr) END PROGRAM hello \eforcod % --- end paragraph admon --- % !split \subsection{Note 1} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item The output to screen is not ordered since all processes are trying to write to screen simultaneously. \item It is the operating system which opts for an ordering. \item If we wish to have an organized output, starting from the first process, we may rewrite our program as in the next example. 
\end{itemize}

\noindent
% --- end paragraph admon ---

% !split
\subsection{Ordered output with $MPI\_Barrier$}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
int main (int nargs, char* args[])
{
  int numprocs, my_rank, i;
  MPI_Init (&nargs, &args);
  MPI_Comm_size (MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
  for (i = 0; i < numprocs; i++) {
    MPI_Barrier (MPI_COMM_WORLD);
    if (i == my_rank) {
      cout << "Hello world, I have rank " << my_rank << " out of " << numprocs << endl;
    }
  }
  MPI_Finalize ();
  return 0;
}
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Note 2}

% --- begin paragraph admon ---
\paragraph{}
\begin{itemize}
\item Here we have used the $MPI\_Barrier$ function to ensure that every process has completed its set of instructions in a particular order.
\item A barrier is a special collective operation that does not allow the processes to continue until all processes in the communicator (here $MPI\_COMM\_WORLD$) have called $MPI\_Barrier$.
\item The barriers make sure that all processes have reached the same point in the code. Many of the collective operations, like $MPI\_ALLREDUCE$ to be discussed later, have the same property; that is, no process can exit the operation until all processes have started.
\end{itemize}

\noindent
However, this is slightly more time-consuming since the processes synchronize between themselves as many times as there are processes. In the next Hello world example we use the send and receive functions in order to have a synchronized action.
% --- end paragraph admon ---

% !split
\subsection{Ordered output with $MPI\_Recv$ and $MPI\_Send$}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
.....
int numprocs, my_rank, flag;
MPI_Status status;
MPI_Init (&nargs, &args);
MPI_Comm_size (MPI_COMM_WORLD, &numprocs);
MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
if (my_rank > 0)
  MPI_Recv (&flag, 1, MPI_INT, my_rank-1, 100, MPI_COMM_WORLD, &status);
cout << "Hello world, I have rank " << my_rank << " out of " << numprocs << endl;
if (my_rank < numprocs-1)
  MPI_Send (&my_rank, 1, MPI_INT, my_rank+1, 100, MPI_COMM_WORLD);
MPI_Finalize ();
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Note 3}

% --- begin paragraph admon ---
\paragraph{}
The basic sending of messages is given by the function $MPI\_SEND$, which in C/C++ is defined as
\bcppcod
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
\ecppcod
This single command allows the passing of any kind of variable, even a large array, to any group of tasks. The variable \textbf{buf} is the variable we wish to send while \textbf{count} is the number of variables we are passing. If we are passing only a single value, this should be 1. If we transfer an array, it is the overall size of the array. For example, if we want to send a 10 by 10 array, count would be $10\times 10=100$ since we are actually passing 100 values.
% --- end paragraph admon ---

% !split
\subsection{Note 4}

% --- begin paragraph admon ---
\paragraph{}
Once you have sent a message, you must receive it on another task. The function $MPI\_RECV$ is similar to the send call.
\bcppcod
int MPI_Recv( void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status )
\ecppcod
The arguments that are different from those in MPI\_SEND are \textbf{buf}, which is the name of the variable where you will be storing the received data, and \textbf{source}, which replaces the destination in the send command. This is the rank of the sending process.
Finally, we have used the variable \textbf{status} of type $MPI\_Status$, where one can check whether the receive was completed. The output of this code is the same as in the previous example, but now process 0 sends a message to process 1, which forwards it further to process 2, and so forth.
% --- end paragraph admon ---

% !split
\subsection{Numerical integration in parallel}

% --- begin paragraph admon ---
\paragraph{Integrating $\pi$.}

\begin{itemize}
\item The code example computes $\pi$ using the trapezoidal rule.
\item The trapezoidal rule reads
\end{itemize}

\noindent
\[
I=\int_a^bf(x) dx\approx h\left(f(a)/2 + f(a+h) +f(a+2h)+\dots +f(b-h)+ f(b)/2\right).
\]
% --- end paragraph admon ---

% !split
\subsection{Dissection of trapezoidal rule with $MPI\_reduce$}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
//    Trapezoidal rule and numerical integration using MPI
using namespace std;
#include <mpi.h>
#include <iostream>

// Here we define various functions called by the main program

double int_function(double );
double trapezoidal_rule(double , double , int , double (*)(double));

//   Main function begins here
int main (int nargs, char* args[])
{
  int n, local_n, numprocs, my_rank;
  double a, b, h, local_a, local_b, total_sum, local_sum;
  double time_start, time_end, total_time;
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Dissection of trapezoidal rule with $MPI\_reduce$}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
  //  MPI initializations
  MPI_Init (&nargs, &args);
  MPI_Comm_size (MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
  time_start = MPI_Wtime();
  //  Fixed values for a, b and n
  a = 0.0 ; b = 1.0;  n = 1000;
  h = (b-a)/n;    // h is the same for all processes
  local_n = n/numprocs;
  // make sure n > numprocs, else integer division gives zero
  // Length of each process' interval of
  // integration = local_n*h.
  local_a = a + my_rank*local_n*h;
  local_b = local_a + local_n*h;
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Integrating with \textbf{MPI}}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
  total_sum = 0.0;
  local_sum = trapezoidal_rule(local_a, local_b, local_n, &int_function);
  MPI_Reduce(&local_sum, &total_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
  time_end = MPI_Wtime();
  total_time = time_end-time_start;
  if ( my_rank == 0) {
    cout << "Trapezoidal rule = " << total_sum << endl;
    cout << "Time = " << total_time << " on number of processors: " << numprocs << endl;
  }
  // End MPI
  MPI_Finalize ();
  return 0;
}  // end of main program
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{How do I use $MPI\_reduce$?}

% --- begin paragraph admon ---
\paragraph{}
Here we have used
\bcppcod
MPI_Reduce( void *senddata, void* resultdata, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
\ecppcod
The two variables $senddata$ and $resultdata$ are obvious, besides the fact that one sends the address of the variable or the first element of an array. If they are arrays they need to have the same size. The variable $count$ represents the total dimensionality, 1 in case of just one variable, while $MPI\_Datatype$ defines the type of variable which is sent and received. The new feature is $MPI\_Op$. It defines the type of operation we want to do.
% --- end paragraph admon ---

% !split
\subsection{More on $MPI\_Reduce$}

% --- begin paragraph admon ---
\paragraph{}
In our case, since we are summing the local integral contributions from every process, we define $MPI\_Op = MPI\_SUM$.
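As a small illustration (a sketch only, not part of the trapezoidal program above), the same reduction could be carried out with $MPI\_Allreduce$, whose prototype is given below, whenever every process needs the global result, for instance to monitor it locally:
\bcppcod
// Sketch: every rank obtains the global sum. This reuses local_sum and
// total_sum from the trapezoidal example above; only the call changes.
MPI_Allreduce(&local_sum, &total_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
// total_sum now holds the same value on all ranks, so each rank can use
// it without any further communication.
\ecppcod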
If we have an array or matrix we can search for the largest or smallest element by sending either $MPI\_MAX$ or $MPI\_MIN$. If we want the location as well (which array element) we simply transfer $MPI\_MAXLOC$ or $MPI\_MINLOC$. If we want the product we write $MPI\_PROD$.

$MPI\_Allreduce$ is defined as
\bcppcod
MPI_Allreduce( void *senddata, void* resultdata, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Dissection of trapezoidal rule with $MPI\_reduce$}

% --- begin paragraph admon ---
\paragraph{}
We use $MPI\_reduce$ to collect data from each process. Note also the use of the function $MPI\_Wtime$.
\bcppcod
//  this function defines the function to integrate
double int_function(double x)
{
  double value = 4./(1.+x*x);
  return value;
} // end of function to evaluate
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Dissection of trapezoidal rule with $MPI\_reduce$}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
//  this function defines the trapezoidal rule
double trapezoidal_rule(double a, double b, int n, double (*func)(double))
{
  double trapez_sum;
  double fa, fb, x, step;
  int j;
  step=(b-a)/((double) n);
  fa=(*func)(a)/2. ;
  fb=(*func)(b)/2. ;
  trapez_sum=0.;
  for (j=1; j <= n-1; j++){
    x=j*step+a;
    trapez_sum+=(*func)(x);
  }
  trapez_sum=(trapez_sum+fb+fa)*step;
  return trapez_sum;
}  // end trapezoidal_rule
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Optimization and profiling}

% --- begin paragraph admon ---
\paragraph{}
Until now we have not paid much attention to speed and to the optimization possibilities inherent in the various compilers. We have compiled and linked as
\bcppcod
mpic++ -c mycode.cpp
mpic++ -o mycode.exe mycode.o
\ecppcod
For Fortran replace with mpif90. This is what we call a flat compiler option and should be used when we develop the code. It normally produces a very large and slow code when translated to machine instructions. We use this option for debugging and for establishing the correct program output, because every operation is done precisely as the user specified it. It is instructive to look up the compiler manual for further instructions:
\bcppcod
man mpic++ > out_to_file
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{More on optimization}

% --- begin paragraph admon ---
\paragraph{}
We have additional compiler options for optimization. These may include procedure inlining (where performance may be improved), moving constants inside loops outside the loop, identifying potential parallelism, automatic vectorization, or replacing a division with a reciprocal and a multiplication if this speeds up the code.
\bcppcod
mpic++ -O3 -c mycode.cpp
mpic++ -O3 -o mycode.exe mycode.o
\ecppcod
This is the recommended option. \textbf{But you must check that you get the same results as previously}.
% --- end paragraph admon ---

% !split
\subsection{Optimization and profiling}

% --- begin paragraph admon ---
\paragraph{}
It is also useful to profile your program during the development stage. You would then compile with
\bcppcod
mpic++ -pg -O3 -c mycode.cpp
mpic++ -pg -O3 -o mycode.exe mycode.o
\ecppcod
After you have run the code you can obtain the profiling information via
\bcppcod
gprof mycode.exe > out_to_profile
\ecppcod
When you have properly profiled your code, you should remove this option again, since it increases the CPU time of your runs. For memory tests use \href{http://www.valgrind.org}{valgrind}. Qt also provides an excellent GUI with debugging facilities.
% --- end paragraph admon --- % !split \subsection{Optimization and profiling} % --- begin paragraph admon --- \paragraph{} Other hints \begin{itemize} \item avoid if tests or call to functions inside loops, if possible. \item avoid multiplication with constants inside loops if possible \end{itemize} \noindent Bad code \bcppcod for i = 1:n a(i) = b(i) +c*d e = g(k) end \ecppcod Better code \bcppcod temp = c*d for i = 1:n a(i) = b(i) + temp end e = g(k) \ecppcod % --- end paragraph admon --- % !split \subsection{What is OpenMP} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item OpenMP provides high-level thread programming \item Multiple cooperating threads are allowed to run simultaneously \item Threads are created and destroyed dynamically in a fork-join pattern \begin{itemize} \item An OpenMP program consists of a number of parallel regions \item Between two parallel regions there is only one master thread \item In the beginning of a parallel region, a team of new threads is spawned \end{itemize} \noindent \item The newly spawned threads work simultaneously with the master thread \item At the end of a parallel region, the new threads are destroyed \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Getting started, things to remember} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Remember the header file \end{itemize} \noindent \bcppcod #include <omp.h> \ecppcod \begin{itemize} \item Insert compiler directives in C++ syntax as \end{itemize} \noindent \bcppcod #pragma omp... \ecppcod \begin{itemize} \item Compile with for example \emph{c++ -fopenmp code.cpp} \item Execute \begin{itemize} \item Remember to assign the environment variable \textbf{OMP NUM THREADS} \item It specifies the total number of threads inside a parallel region, if not otherwise overwritten \end{itemize} \noindent \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{General code structure} % --- begin paragraph admon --- \paragraph{} \bcppcod #include <omp.h> main () { int var1, var2, var3; /* serial code */ /* ... */ /* start of a parallel region */ #pragma omp parallel private(var1, var2) shared(var3) { /* ... */ } /* more serial code */ /* ... */ /* another parallel region */ #pragma omp parallel { /* ... */ } } \ecppcod % --- end paragraph admon --- % !split \subsection{Parallel region} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item A parallel region is a block of code that is executed by a team of threads \item The following compiler directive creates a parallel region \end{itemize} \noindent \bcppcod #pragma omp parallel { ... 
}
\ecppcod
\begin{itemize}
\item Clauses can be added at the end of the directive
\item Most often used clauses:
\begin{itemize}
\item \textbf{default(shared)} or \textbf{default(none)}
\item \textbf{shared(list of variables)}
\item \textbf{private(list of variables)}
\end{itemize}
\noindent
\end{itemize}

\noindent
% --- end paragraph admon ---

% !split
\subsection{Hello world, not again, please!}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
#include <omp.h>
#include <stdio.h>
int main (int argc, char *argv[])
{
  int th_id, nthreads;
  #pragma omp parallel private(th_id) shared(nthreads)
  {
    th_id = omp_get_thread_num();
    printf("Hello World from thread %d\n", th_id);
    #pragma omp barrier
    if ( th_id == 0 ) {
      nthreads = omp_get_num_threads();
      printf("There are %d threads\n",nthreads);
    }
  }
  return 0;
}
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Important OpenMP library routines}

% --- begin paragraph admon ---
\paragraph{}
\begin{itemize}
\item \textbf{int omp\_get\_num\_threads()}, returns the number of threads inside a parallel region
\item \textbf{int omp\_get\_thread\_num()}, returns the thread number (ID) of the calling thread inside a parallel region
\item \textbf{void omp\_set\_num\_threads(int)}, sets the number of threads to be used
\item \textbf{void omp\_set\_nested(int)}, turns nested parallelism on/off
\end{itemize}

\noindent
% --- end paragraph admon ---

% !split
\subsection{Parallel for loop}

% --- begin paragraph admon ---
\paragraph{}
\begin{itemize}
\item Inside a parallel region, the following compiler directive can be used to parallelize a for-loop:
\end{itemize}

\noindent
\bcppcod
#pragma omp for
\ecppcod
\begin{itemize}
\item Clauses can be added, such as
\begin{itemize}
\item \textbf{schedule(static, chunk size)}
\item \textbf{schedule(dynamic, chunk size)}
\item \textbf{schedule(guided, chunk size)} (non-deterministic allocation)
\item \textbf{schedule(runtime)}
\item \textbf{private(list of variables)}
\item \textbf{reduction(operator:variable)}
\item \textbf{nowait}
\end{itemize}
\noindent
\end{itemize}

\noindent
% --- end paragraph admon ---

% !split
\subsection{Example code}

% --- begin paragraph admon ---
\paragraph{}
\bcppcod
#include <omp.h>
#define CHUNKSIZE 100
#define N     1000
int main ()
{
  int i, chunk;
  float a[N], b[N], c[N];
  for (i=0; i < N; i++)
    a[i] = b[i] = i * 1.0;
  chunk = CHUNKSIZE;
  #pragma omp parallel shared(a,b,c,chunk) private(i)
  {
    #pragma omp for schedule(dynamic,chunk)
    for (i=0; i < N; i++)
      c[i] = a[i] + b[i];
  } /* end of parallel region */
}
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{More on Parallel for loop}

% --- begin paragraph admon ---
\paragraph{}
\begin{itemize}
\item The number of loop iterations cannot be non-deterministic; break, return, exit, goto are not allowed inside the for-loop
\item The loop index is private to each thread
\item A reduction variable is special
\begin{itemize}
\item During the for-loop there is a local private copy in each thread
\item At the end of the for-loop, all the local copies are combined together by the reduction operation
\end{itemize}
\noindent
\item Unless the nowait clause is used, an implicit barrier synchronization will be added at the end by the compiler
\end{itemize}

\noindent
\bcppcod
// #pragma omp parallel and #pragma omp for
\ecppcod
can be combined into
\bcppcod
#pragma omp parallel for
\ecppcod
% --- end paragraph admon ---

% !split
\subsection{Inner product}

% --- begin paragraph admon ---
\paragraph{}
\[
\sum_{i=0}^{n-1} a_ib_i
\]
\bcppcod
int i;
double sum = 0.;
/* allocating and
initializing arrays */ /* ... */ #pragma omp parallel for default(shared) private(i) reduction(+:sum) for (i=0; i<N; i++) sum += a[i]*b[i]; } \ecppcod % --- end paragraph admon --- % !split \subsection{Different threads do different tasks} % --- begin paragraph admon --- \paragraph{} Different threads do different tasks independently, each section is executed by one thread. \bcppcod #pragma omp parallel { #pragma omp sections { #pragma omp section funcA (); #pragma omp section funcB (); #pragma omp section funcC (); } } \ecppcod % --- end paragraph admon --- % !split \subsection{Single execution} % --- begin paragraph admon --- \paragraph{} \bcppcod #pragma omp single { ... } \ecppcod The code is executed by one thread only, no guarantee which thread Can introduce an implicit barrier at the end \bcppcod #pragma omp master { ... } \ecppcod Code executed by the master thread, guaranteed and no implicit barrier at the end. % --- end paragraph admon --- % !split \subsection{Coordination and synchronization} % --- begin paragraph admon --- \paragraph{} \bcppcod #pragma omp barrier \ecppcod Synchronization, must be encountered by all threads in a team (or none) \bcppcod #pragma omp ordered { a block of codes } \ecppcod is another form of synchronization (in sequential order). The form \bcppcod #pragma omp critical { a block of codes } \ecppcod and \bcppcod #pragma omp atomic { single assignment statement } \ecppcod is more efficient than \bcppcod #pragma omp critical \ecppcod % --- end paragraph admon --- % !split \subsection{Data scope} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item OpenMP data scope attribute clauses: \begin{itemize} \item \textbf{shared} \item \textbf{private} \item \textbf{firstprivate} \item \textbf{lastprivate} \item \textbf{reduction} \end{itemize} \noindent \end{itemize} \noindent What are the purposes of these attributes \begin{itemize} \item define how and which variables are transferred to a parallel region (and back) \item define which variables are visible to all threads in a parallel region, and which variables are privately allocated to each thread \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Some remarks} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item When entering a parallel region, the \textbf{private} clause ensures each thread having its own new variable instances. The new variables are assumed to be uninitialized. \item A shared variable exists in only one memory location and all threads can read and write to that address. It is the programmer's responsibility to ensure that multiple threads properly access a shared variable. \item The \textbf{firstprivate} clause combines the behavior of the private clause with automatic initialization. \item The \textbf{lastprivate} clause combines the behavior of the private clause with a copy back (from the last loop iteration or section) to the original variable outside the parallel region. \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Parallelizing nested for-loops} % --- begin paragraph admon --- \paragraph{} \begin{itemize} \item Serial code \end{itemize} \noindent \bcppcod for (i=0; i<100; i++) for (j=0; j<100; j++) a[i][j] = b[i][j] + c[i][j] \ecppcod \begin{itemize} \item Parallelization \end{itemize} \noindent \bcppcod #pragma omp parallel for private(j) for (i=0; i<100; i++) for (j=0; j<100; j++) a[i][j] = b[i][j] + c[i][j] \ecppcod \begin{itemize} \item Why not parallelize the inner loop? 
to save overhead of repeated thread forks-joins \item Why must \textbf{j} be private? To avoid race condition among the threads \end{itemize} \noindent % --- end paragraph admon --- % !split \subsection{Nested parallelism} % --- begin paragraph admon --- \paragraph{} When a thread in a parallel region encounters another parallel construct, it may create a new team of threads and become the master of the new team. \bcppcod #pragma omp parallel num_threads(4) { /* .... */ #pragma omp parallel num_threads(2) { // } } \ecppcod % --- end paragraph admon --- % !split \subsection{Parallel tasks} % --- begin paragraph admon --- \paragraph{} \bcppcod #pragma omp task #pragma omp parallel shared(p_vec) private(i) { #pragma omp single { for (i=0; i<N; i++) { double r = random_number(); if (p_vec[i] > r) { #pragma omp task do_work (p_vec[i]); \ecppcod % --- end paragraph admon --- % !split \subsection{Common mistakes} % --- begin paragraph admon --- \paragraph{} Race condition \bcppcod int nthreads; #pragma omp parallel shared(nthreads) { nthreads = omp_get_num_threads(); } \ecppcod Deadlock \bcppcod #pragma omp parallel { ... #pragma omp critical { ... #pragma omp barrier } } \ecppcod % --- end paragraph admon --- % !split \subsection{Matrix-matrix multiplication} % --- begin paragraph admon --- \paragraph{} \bcppcod # include <cstdlib> # include <iostream> # include <cmath> # include <ctime> # include <omp.h> using namespace std; // Main function int main ( ) { // brute force coding of arrays double a[500][500]; double angle; double b[500][500]; double c[500][500]; int i; int j; int k; \ecppcod % --- end paragraph admon --- % !split \subsection{Matrix-matrix multiplication} % --- begin paragraph admon --- \paragraph{} \bcppcod int n = 500; double pi = acos(-1.0); double s; int thread_num; double wtime; cout << "\n"; cout << " C++/OpenMP version\n"; cout << " Compute matrix product C = A * B.\n"; thread_num = omp_get_max_threads ( ); // // Loop 1: Evaluate A. // s = 1.0 / sqrt ( ( double ) ( n ) ); wtime = omp_get_wtime ( ); \ecppcod % --- end paragraph admon --- % !split \subsection{Matrix-matrix multiplication} % --- begin paragraph admon --- \paragraph{} \bcppcod # pragma omp parallel shared ( a, b, c, n, pi, s ) private ( angle, i, j, k ) { # pragma omp for for ( i = 0; i < n; i++ ) { for ( j = 0; j < n; j++ ) { angle = 2.0 * pi * i * j / ( double ) n; a[i][j] = s * ( sin ( angle ) + cos ( angle ) ); } } // // Loop 2: Copy A into B. // # pragma omp for for ( i = 0; i < n; i++ ) { for ( j = 0; j < n; j++ ) { b[i][j] = a[i][j]; } } \ecppcod % --- end paragraph admon --- % !split \subsection{Matrix-matrix multiplication} % --- begin paragraph admon --- \paragraph{} \bcppcod // Loop 3: Compute C = A * B. // # pragma omp for for ( i = 0; i < n; i++ ) { for ( j = 0; j < n; j++ ) { c[i][j] = 0.0; for ( k = 0; k < n; k++ ) { c[i][j] = c[i][j] + a[i][k] * b[k][j]; } } } } wtime = omp_get_wtime ( ) - wtime; cout << " Elapsed seconds = " << wtime << "\n"; cout << " C(100,100) = " << c[99][99] << "\n"; // // Terminate. // cout << "\n"; cout << " Normal end of execution.\n"; return 0; \ecppcod % --- end paragraph admon --- % ------------------- end of main content --------------- % #ifdef PREAMBLE \printindex \end{document} % #endif
\documentclass[]{article} \usepackage[margin=1in]{geometry} \usepackage{physics} \usepackage{amsmath, amsfonts, amssymb} \usepackage{nccmath} \usepackage{cuted} \usepackage{mathtools} \usepackage{hyperref} \usepackage{empheq} \usepackage{graphicx} % MATLAB Formating Code \usepackage[numbered,framed]{matlab-prettifier} \lstset{style=Matlab-editor,columns=fullflexible} \renewcommand{\lstlistingname}{Script} \newcommand{\scriptname}{\lstlistingname} %opening \title{Problem Statement of Summer 2021 Project:\\Bounding the Residual Error for Static Luenberger Observers for Polytopic Systems} \author{Jonas Wagner} \date{2021 July 02} \begin{document} \maketitle % %Note: See the (\href{https://cometmail-my.sharepoint.com/personal/jrw200000_utdallas_edu/_layouts/OneNote.aspx?id=%2Fpersonal%2Fjrw200000_utdallas_edu%2FDocuments%2FResearch%2FPolytopic%20System%20Security%2FPolytopic%20System%20Security&wd=target%28L-Observer%20Residual%20Bounds.one%7C96625950-A6A9-4A54-B35B-46969BCD56B2%2FProblem.%20Statement%7C43940B17-ACF8-4A2E-A2D8-259E68EF4028%2F%29}{OneNote problem statement page}) %for additional info. (hopefully that link works... idk how well OneNote will integrate as pdf references or GitHub) %%%% Yeah it doesn't work... %%%%%%%%%%%%%%%%%% % I don't think these things are really important for this problem statement %%%%%%%%%%%%%%%%%%% %\begin{abstract} % % In this project they dynamics of Discrete-Time Polytopic Linear Parameter-Varying (LPV) Systems will be examined. Specifically, various methods for the dual state and parameter estimation will be reproduced with the intent of analyzing effectiveness of these observers against various attacks. Each method performs optimization to minimize the estimation error in various ways while remaining stable and achieving certain performance criteria. Potentially the reachability of the system may be determined for various fault and attack scenarios through the minimization of an ellipsoidal bound. %\end{abstract} % %\newpage %\tableofcontents %\newpage %%%%%%%%%%%%%%%%%% % Overdetailed explination.... (putting in appendix beocuse why not....) %%%%%%%%%%%%%%%%%%% \section{Polytopic Systems Background} (A detailed walkthrough is in \appendixname \ \ref{apx:PolytopicSystemsBackround}) \subsection{Discrete Time Polytopic Model} A standard DT-Polytopic system will be used in this project, as given as: \begin{equation}\label{eq:DT_poly_sys_def} \begin{cases} x_{k+1} &= \sum_{i=1}^{m} \alpha^i (A_i x_k + B_i u_k)\\ y &= C x_k \end{cases} \end{equation} with state variable $x \in \real^n$, control input $u \in \real^p$, and output $y \in \real^q$ common to all of the $m$ submodels. Each submodel is also associated with state matricies $A_i$ and $B_i$ while the output is calculated from the actual state by matrix $C$. 
The scheduling parameter $\alpha \in \mathcal{A}$ is unknown and time-varying, with $\mathcal{A}$ defined as:
\begin{equation}\label{eq:alpha_set}
	\mathcal{A} = \{\alpha\in \real^m \ | \ \sum_{i=1}^m \alpha^i = 1, \ \alpha^i \geq 0 \ \ \forall \ i \in \{1,2,\dots,m\}\}
\end{equation}

%notes on dimensions: n = # of states, p = # of inputs, q = # of outputs, m = # of submodels

\subsection{Assumptions}
The following assumptions will also be made:
\begin{enumerate}
	\item $A_i$ is stable $\forall \ i = 1, \dots, m$
	\item $(A_i, B_i)$ is a controllable pair $\forall \ i = 1, \dots, m$
	\item $(A_i, C)$ is an observable pair $\forall \ i = 1, \dots, m$
	\item $\alpha \in \mathcal{A}$ is constant (or at least slowly time-varying)
\end{enumerate}

\newpage
\section{State Observer and Residual Definition}
For the polytopic system described in \eqref{eq:DT_poly_sys_def} with assumed scheduling parameters $\alpha^i$, a state observer can be designed to estimate the state of the system from the known inputs and outputs.\\

\subsection{Simple Luenberger Observer}
A simple Luenberger Observer for system matrices $A$, $B$, and $C$ is defined as
\begin{equation}\label{eq:simple_L_Observer}
	\hat{x}_{k+1} = A \hat{x}_k + B u_k + L \qty(y_k - C \hat{x}_k)
\end{equation}
where $L \in \real^{n \cross q}$ is the Luenberger gain.

\subsection{Polytopic System Luenberger Observer}
For a polytopic system given by \eqref{eq:DT_poly_sys_def} with known (or estimated) scheduling parameters $\hat{\alpha} \in \mathcal{A}$\footnote{which technically may not need to be restricted to be within $\mathcal{A}$}, a Luenberger Observer can be defined by:
\begin{equation}\label{eq:DT_poly_l_observer}
	\hat{x}_{k+1} = \sum_{i=1}^m \hat{\alpha}^i \qty(A_i \hat{x}_k + B_i u_k + L_i \qty(y_k - C \hat{x}_k))
\end{equation}
with $L_i$ designed so that $(A_i - L_i C)$ is stable $\forall i = 1 \dots m$.\footnote{might be useful to also specify $L_i$ specifically based on the LMI from the paper... $L_i = G_i^{-1} F_i$}

\subsection{State Estimation Error}
In a deterministic system, let the actual scheduling parameters be defined as ${\alpha}\in \mathcal{A}$ and the assumed (estimated) scheduling parameters used by the observer be defined as $\hat{\alpha} \in \mathcal{A}$. The state-estimation error is then defined by
\begin{equation}\label{eq:est_error_def}
	e_k = x_k - \hat{x}_k
\end{equation}
where $x_k$ is the actual state and $\hat{x}_k$ is the estimated state. The estimation error update equation can then be calculated to be:
\begin{equation}\label{eq:est_error_update_def}
	e_{k+1} = \sum_{i=1}^{m} \qty(\hat{\alpha}^i \qty(A_i - L_i C) e_k + v_k^i)
\end{equation}
where the disturbance term $v_k^i$ is defined by
\begin{equation}\label{eq:param_error_disturbance_def}
	v_k^i = \qty({\alpha}^i - \hat{\alpha}^i) \qty(A_i x_k + B_i u_k)
\end{equation}

\textbf{Prove BIBS (and/or ISS?) for $v_k$ assuming that conditions exist that $v_k$ is bounded...} (should be simple to expand from standard DT to polytopic system)

%\textbf{Related Question:} Since $\hat{\alpha}_k$ will be constant, ${\alpha}_k - \hat{\alpha}_k$ but does that mean $v_k$ will never decay to zero? and if so, will it at least remain bounded (under certain conditions for $A_i$ and $B_i$)?\\
%\textbf{Solution:}... $v_k$ is never bounded if $B_i$ is actually polytopic... $v_k$ is also not bounded if $x_k$ itself is fully reachable (it probably needs stabilizing controller)...
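For clarity, a short derivation sketch of \eqref{eq:est_error_update_def}, using only the definitions above and the identity $y_k - C\hat{x}_k = C e_k$, is obtained by substituting \eqref{eq:DT_poly_sys_def} and \eqref{eq:DT_poly_l_observer} into \eqref{eq:est_error_def}:
\begin{align*}
	e_{k+1} &= x_{k+1} - \hat{x}_{k+1} \\
	&= \sum_{i=1}^{m} \alpha^i \qty(A_i x_k + B_i u_k) - \sum_{i=1}^{m} \hat{\alpha}^i \qty(A_i \hat{x}_k + B_i u_k + L_i C e_k) \\
	&= \sum_{i=1}^{m} \qty[\hat{\alpha}^i A_i \qty(x_k - \hat{x}_k) - \hat{\alpha}^i L_i C e_k + \qty(\alpha^i - \hat{\alpha}^i)\qty(A_i x_k + B_i u_k)] \\
	&= \sum_{i=1}^{m} \qty(\hat{\alpha}^i \qty(A_i - L_i C) e_k + v_k^i)
\end{align*}
which matches \eqref{eq:est_error_update_def} with $v_k^i$ as defined in \eqref{eq:param_error_disturbance_def}.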
\subsection{Output Residual Definition} The measured output $y_k = C x_k$ and estimated output $\hat{y}_k = C \hat{x}_k$ are used to define the residual, $r_k$ as: \begin{equation}\label{eq:output_residual_def} r_k = y_k - \hat{y}_k = C(x_k - \hat{x}_k) = C e_k \end{equation} %The output residual update equation can be calculated from \eqref{eq:DT_poly_l_observer} and \eqref{eq:output_residual_def} to be: %\begin{equation}\label{eq:output_residual_update_def} % r_{k+1} = \sum_{i=1}^k \hat{\alpha}^i \qty(A_i - L_i C) r_k + C v_k %\end{equation} \subsection{Feedback Control Implementation} As is evident in \eqref{eq:param_error_disturbance_def}, when $\alpha^i \neq \hat{\alpha}^i$ the disturbance term is not bounded and therefore the error (and residual) itself is not bounded... when a feedback controller is implimented as well, this may be possible... Let, ... \newpage \section{Problem Objectives} \begin{enumerate} \setcounter{enumi}{-1} % make it start at a 0 \item Simulate using a toy system to gain intuition for bounds on the residual using the simple SISO system w/ a static system scheduling parameter (${\alpha}$) and no noise (deterministic). \item For a deterministic DT-polytopic system, calculate an ellipsoid bound on the residual, assuming $r_k \sim \mathcal{N}(0,\Sigma)$, meaning a test statistic is defined by $$z_k = r_k^T \Sigma^{-1} r_k \leq z_{threshold}$$ so that the threshold $z_{threshold}$ can be defined as the reachable residual for a specific set of scheduling parameters: $\hat{\alpha} \in \mathcal{A} \neq {\alpha} \in \mathcal{A}$. \item Attempt to use the bounds for scheduling parameters for any ${\alpha}\in \mathcal{A}$ to find the worst case scenarios for a given $\hat{\alpha}$. \item Find a way to calculate the minimum bounded region $\forall {\alpha} \in \mathcal{A}$ by selecting the best $\hat{\alpha}$ that minimizes the size of the bounded region. \item Confirm the analysis with simulations with the toy model, as well as, more interesting higher-order and MIMO systems. \begin{enumerate} \item Test with noise to ensure robustness of the estimates (and potentially robustness to stealthy/unstealthy attacks) \item Mabye: Run a lot of simulations to experimentally find regions where it is vulnerable (i.e. find what is contained within the ellipsoidal bound but not actually reachable) \end{enumerate} \end{enumerate} % % %\newpage %\section{Project Objectives} %The primary objective of this project will be to reproduce three joint state and parameter estimator methods for LPV systems then test the ability of each to react to malicious input and measurement interference. A secondary/future objective will be to calculate the reachability set and how it is manipulated due to an attack on the system. % %The three estimation methods of interest \footnote{taken directly from \cite{beelen2017joint} and we are essentially recreating these results but performing additional tests} include: %\begin{enumerate} % \item Dual-Estimation (DE) approach is a method that first solves a two step optimization problem for parameters-estimation and then uses a "traditional" robust polytopic observer design for state estimation. \cite{beelen2017joint} % \item Extended Kalman Filter (EKF) using prediction and update steps for the system estimates, but this version does require the assumption of Gaussian noise. 
\cite{beelen2017joint} % \item Interacting Multiple Model estimation (IMM) method which uses a different Kalmen filter for multiple modes and the probability that the system will be a certain mode.\cite{bar2004estimation} %\end{enumerate} % Need to find access to \cite{bar2004estimation} for the IMM algorithm details... %The primary attack methods for initial testing (for simplicity) will consist of malicious random gaussian noise being added to measurements. The power of these attacks can be classified into three catagories depending on the malicous noise power: %\begin{enumerate} % \item Stealthy attacks: power of the attack is along the same level as the normal noise standard-deviation. % \item Unstealthy attacks: the attack is disruptive, yet detectable, with aims to degrade the system performance. % \item Super Unstealthy attack: a very considerable attack that aims to severely disrupt a system while not remaining undetectable. %\end{enumerate} % %The next objective will be to show how much each attack method can effect the states (specifically the reachable set) for each estimator.\footnote{and potentially develop a better solution... modifying \cite{securestateestimation}?} This work is very similar to \cite{hashemi2018comparison} but will be expanding from stochastic DT-LTI systems to deterministic DT-LPV systems. % %\section{Proposed Methods} %The following steps will be taken to complete the problem. % %\begin{enumerate} % \item This project will begin by reproducing the results of joint state and parameter estimation from \cite{beelen2017joint} using the same LPV system used in the paper. (This will likely be done using Simulink with custom estimator blocks.) %% \item The next step will be to introduce additional system noise (presumably to the scheduling parameters themselves) and measurement poise into the sensors. This will be important to do first and perform a separate analysis of each before malicious attacks are included. % \item Next attacks will be introduced into the sensor and the response for each estimator will be compared. % \item This will then be expanded to a more interesting system\footnote{Seperator Testbed? scheduling parameters being valve on/off and for various linearized tank level systems... is it possible to analyze with a scheduling parameter dependent on a state???... Otherwise a more complicated electrical network w/ switches or pneumatic system could be done instead} that will be more useful for sensor attack testing (i.e. more sensors then states or high noise system). % \item Finally, an analysis of the reachable set deviation due to attacks will be performed by finding a minimal ellipsoid constraining the states that would be reachable prior to attack detection.\footnote{possibly future work} %\end{enumerate} %\newpage %\bibliographystyle{ieeetr} %\bibliography{mybib.bib} \onecolumn \newpage \appendix \section{In-Depth Polytopic System Backround} \label{apx:PolytopicSystemsBackround} Polytopic LPV system models are essentially a smooth interpolation of a set of LTI submodels constructed using a specified weighting function. This can be looked at as decomposing a system into multiple operating spaces that operate as linear submodels. It is possibile for a Polytopic model to take a complex nonlinear model and redefine it as a time-varying interpolation of multiple linear submodels. 
Section references:\footnote{Each subsection is mostly a summary of sections from these sources but with elaboration and consistent notation.} \cite{beelen2017joint} \cite{ORJUELA2019295} \cite{orjuela2013nonlinear}\\ \subsection{General Continuous Time Polytopic Model} The simple polyotopic LPV structure can be described by the following weighted linear combination of LTI submodels: \begin{equation}\label{eq:CT_poly_sys_def} \begin{cases} \dot{x}(t) = \sum_{i=1}^r \mu_i(\xi(t))\{A_i x(t) + B_i u(t)\} \vspace{5pt} \\ y(t) = \sum_{i=1}^r \mu_i(\xi(t)) C_i x(t) \end{cases} \end{equation} with state variable $x \in \real^n$ common to all $r$ submodels, control input $u \in \real^p$, output $y \in \real^q$, weighting function $\mu_i(\cdot)$ and premise variable $\xi(t) \in \real^{w}$. Additionally, the weighting functions $\mu_i (\cdot)$ for each subsystem must satisfy the convex sum constraints: \begin{equation}\label{eq:convex_sum_constraints} 0 \leq \mu_i(\xi), \ \forall i = 1,\dots,r \ \ \text{and} \ \ \sum_{i=1} \mu_i(\xi) = 1 \end{equation} %notes on dimensions: n = states, m = inputs, p = outputs, w = # of weights, r = # of subsystems One notable downside, for our application, is the requirement for $\xi(t)$ to be explicitly known in real-time for the model to function. This requirement is the primary driving factor in investigating this system as when $\xi(t)$ is not explicitly known additional uncertainties now exist in a system that are open for exploitation by an attacker. \subsection{Discrete Time Polytopic Model} In the DT-Polytopic Model the CT-Polytopic Model, \eqref{eq:CT_poly_sys_def}, is extended into the discrete time equivalence (either through sampling and zero-order holds or by definition) by the following parameter-varying system: \begin{equation} \begin{cases} x_{k+1} &= \sum_{i=1}^{m} \alpha^i (A_i x_k + B_i u_k)\\ y &= C x_k \end{cases} \end{equation} with state variable $x \in \real^n$, control input $u \in \real^p$, and output $y \in \real^q$ common to all of the $m$ submodels. Each submodel is also associated with state matricies $A_i$ and $B_i$ while the output is calculated from the actual state by matrix $C$. The scheduling parameter, $\alpha \in \mathcal{A}$ is unknown and time-varying, with $\mathbf{A}$ defined as: \begin{equation} \mathcal{A} = \{\alpha\in \real^m \ | \ \sum_{i=1}^m \alpha^i = 1, \ \alpha^i \geq 0 \ \ \forall \ i \in \{1,2,\dots,m\}\} \end{equation} %notes on dimensions: n = states, m = inputs, p = outputs, N = # of subsystems In the discrete time case, the unknown scheduling parameter, $\alpha$, is problematic for when developing a state-estimator, thus a Joint State-Parameter estimator must be used. The discrete nature of the measurements may also prove to be even more problematic if an attack is injected in any discrete measurement. \newpage \subsection{MATLAB}\label{apx:MATLAB} All code I wrote for this project can be found on my GitHub repository:\\ \href{https://github.com/jonaswagner2826/DT_LPV_attack_analysis}{https://github.com/jonaswagner2826/DT\_LPV\_attack\_analysis}\\ %% DT_LPV_sim_script %\lstinputlisting[caption={DT\_LPV\_sim\_script}]{../../DT_LPV_sim/DT_LPV_sim_script.m} %\newpage %% DT_LPV_sim_script %\lstinputlisting[caption={alpha\_traj}]{../../DT_LPV_sim/alpha_traj.m} %\newpage %% DT_LPV_sim_script %\lstinputlisting[caption={est\_DE}]{../../DT_LPV_sim/est_DE.m} %\newpage %% DT_LPV_sim_script %\lstinputlisting[caption={est\_EKF}]{../../DT_LPV_sim/est_EKF.m} \end{document}
{ "alphanum_fraction": 0.7573946638, "avg_line_length": 59.8050541516, "ext": "tex", "hexsha": "03ecf80eb410585d7fa54edd2660a7e1ef7fbcb4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4515569bc9baa9b1f3e4782d73745374a7a11009", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jonaswagner2826/polytopic-system-security", "max_forks_repo_path": "LObserver_Residual_Bounds/ProblemStatement/ProblemStatement_LObserverResidualBounds.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4515569bc9baa9b1f3e4782d73745374a7a11009", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jonaswagner2826/polytopic-system-security", "max_issues_repo_path": "LObserver_Residual_Bounds/ProblemStatement/ProblemStatement_LObserverResidualBounds.tex", "max_line_length": 595, "max_stars_count": null, "max_stars_repo_head_hexsha": "4515569bc9baa9b1f3e4782d73745374a7a11009", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jonaswagner2826/polytopic-system-security", "max_stars_repo_path": "LObserver_Residual_Bounds/ProblemStatement/ProblemStatement_LObserverResidualBounds.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4565, "size": 16566 }
\documentclass[natbib]{article} \usepackage{microtype} \usepackage{lmodern} \usepackage{url} \usepackage{xspace} \usepackage{calc} \usepackage{enumerate} \usepackage{listings} \usepackage{amsmath,amssymb} \usepackage{rotating} \usepackage{colortbl} \usepackage{pifont} \usepackage{tikz} %\usetikzlibrary{shapes,shadows,arrows,calc,positioning,fit,matrix,mindmap,trees} %\usepackage{pgfplots} %\usepackage{pgfplotstable} \usepackage{booktabs} \usepackage{natbib} \usepackage{colortbl} % pantone colors % More sensible defaults akin to \sloppy % \tolerance 1414 % \hbadness 1414 % \emergencystretch 1.5em % \hfuzz 0.3pt % \widowpenalty=10000 % \clubpenalty=10000 % \vfuzz % \hfuzz % \raggedbottom \newcommand{\ignore}[1]{} \newcommand{\st}{\textit{s.\,t.}\xspace} \newcommand{\eg}{\textit{e.\,g.}\xspace} \newcommand{\ie}{\textit{i.\,e.}\xspace} \newcommand{\cf}{\textit{cf.}\xspace} \newcommand{\blackarrow}{{\color{black} \Pisymbol{pzd}{217}}} \newcommand{\redarrow}{{\color{DarkRed} \Pisymbol{pzd}{217}}} \newcommand{\minibox}[2]{\begin{minipage}{#1}\raggedright #2\end{minipage}} \newcommand{\enquote}[1]{``#1''} %\newcommand{\fixme}[1]{\begin{tikzpicture} %\node[bottom color=red!80!white, top color=red!70!black, rounded corners, % font=\bf\color{white}\footnotesize] { % \begin{minipage}{.75\columnwidth} % FIXME\\ % #1 % \end{minipage} %}; %\end{tikzpicture} %} \lstset{ language=C, basicstyle=\small,%\scriptsize, %\footnotesize\ttfamily, keywordstyle={\bf}, keywordstyle={[2]\it},%\color{Blue!40!black}}, breaklines=true, identifierstyle=, stringstyle=\bf, commentstyle=\it\color{black!80}, captionpos=b, numbers=left, stepnumber=3, columns=fullflexible } \begin{document} \title{CodeThorn} \author{\small Markus Schordan, Marc Jasper, Joshua Asplund, Maximilian Fecke, Adrian Prantl} %\end{tabular} \date{August 11, 2017} \maketitle \begin{abstract} \noindent CodeThorn is a tool for analyzing C/C++ programs by combining approaches from data flow analysis, constraint-based analysis, and model checking. The main focus in the development of CodeThorn is to explore program analysis algorithms while combining above approaches and to investigate methods for combining static analysis with methods for software verification. The input language is currently restricted to a subset of C. \end{abstract} \tableofcontents %------------------------------------------------------------------------- \section{Introduction} \label{sec:intro} CodeThorn was initially developed as a tool for exploring approaches for reachbility analysis and verification of linear temporal logic (LTL) formulae based on finite state systems~\cite{schordan2014combining}. This was later extended to also perform specialization of programs and program equivalence checking~\cite{schordan2014verification}. CodeThorn is based on the ROSE compiler infrastructure\footnote{\url{http://www.rosecompiler.org/}} and uses the ROSE abstract syntax tree as basis for its input. A number of components have been moved from CodeThorn to ROSE over time. What remains are command line options that allow to access those features conveniently and also to reproduce some published results. \nocite{roseWWW} \subsection{Use in Competitions} Since 2012, CodeThorn has been successfully used to participate in the international RERS Challenge\footnote{\url{http://www.rers-challenge.org}} on program analysis and verification~\cite{schordan2014combining}. 
Among other accomplishments, the use of CodeThorn helped to become overall winner and obtain the method combination award in RERS 2014 as well as to win the 1st place in the Sequential LTL track of the recent 2017 iteration of RERS\footnote{See \url{http://www.rers-challenge.org} for detailed competition results}. Participating in the challenge led to many improvements in the tool such as an efficient parallelization of the analysis~\cite{jasper2016multi} and to the development of new model checking approaches~\cite{jasper2014counterexample}. Starting in 2016, the LTL model checking infrastructure of CodeThorn has been successfully applied to generate parallel verification benchmarks. These benchmarks were used for a new parallel track of the RERS Challenges 2016 and 2017~\cite{jasper2017rers}. \subsection{Benchmarks} We have found that benchmarks of the RERS Challenge serve as an excellent guidance in crafting this tool and investigating the impact and performance of each of the approaches on the overall results. For the RERS programs, LTL formulae are provided. This allows to verify behavioral properties of these programs. Reachability properties can be verified by checking the reachability of failing assertions. For program equivalence checking and data race detection the Polybench/C 3.2 suite has provided a good basis for evaluation. By generating various polyhedral variants of the 30+ benchmarks, CodeThorn can be used to check the equivalence of two given programs and verify whether the optimizations are semantics preserving. Furthermore, parallel OpenMP for loops are recognized and can be checked not to introduce data races. Currently these approaches are extended to address other large scale applications. \section{Installation} No additional configuration is required because CodeThorn is configured as part of ROSE. In order to use all features of CodeThorn however, the SPOT LTL model checking library version 1.2.6\footnote{This version of SPOT can be downloaded here: \url{https://www.lrde.epita.fr/dload/spot/spot-1.2.6.tar.gz}} is required. In addition, some experimental features require the Z3 SMT solver\footnote{\url{https://github.com/Z3Prover/z3}}. Please provide the options \verb+--with-spot=<spot-install-dir>+ and \verb+--with-z3=<z3-install-dir>+ to ROSE's configure command. Run 'make', 'make install', and optionally 'make check' in the \verb+projects/+ \verb+CodeThorn+ directory to install CodeThorn. CodeThorn is installed as 'codethorn' (at the same location as other ROSE tools, in the 'bin' directory of the ROSE installation). \section{Command Line Options} The following list of command line options is accessible by running \verb+codethorn --help+. The main options below comprise general analysis parameters such as the exploration mode or resource constraints. More detailed options belonging to individual aspects of CodeThorn are listed in the following sections and can be seen by running \verb+codethorn --help-<name-of-detailed-options>+. \begin{verbatim} --csv-stats arg output statistics into a CSV file [arg] --colors arg use colors in output [=yes|no] --display-diff arg Print statistics every <arg> computed estates. 
--exploration-mode arg set mode in which state space is explored ([breadth-first], depth-first, loop-aware, loop-aware-sync) -h [ --help ] produce this help message --help-cegpra show options for CEGRPA --help-eq show options for program equivalence checking --help-exp show options for experimental features --help-pat show options for pattern search mode --help-svcomp show options for SV-Comp specific features --help-rers show options for RERS specific features --help-ltl show options for LTL verification --help-par show options for analyzing parallel programs --help-vis show options for visualization output files --help-data-race show options for data race detection --help-info show options for program info --status show status messages --no-reduce-cfg Do not reduce CFG nodes that are irrelevant for the analysis. --internal-checks run internal consistency checks (without input program) --input-values arg specify a set of input values (e.g. "{1,2,3}") --input-values-as-constraints arg represent input var values as constraints (otherwise as constants in PState) --input-sequence arg specify a sequence of input values (e.g. "[1,2,3]") --log-level arg (=none,>=warn) Set the log level (none|info|warn|trace|deb ug) --max-transitions arg Passes (possibly) incomplete STG to verifier after <arg> transitions have been computed (default: no limit). --max-iterations arg Passes (possibly) incomplete STG to verifier after <arg> loop iterations have been explored (default: no limit). Currently requires --exploration-mode=loop- aware[-sync]. --max-memory arg Stop computing the STG after a total physical memory consumption of approximately <arg> Bytes has been reached. (default: no limit). --max-time arg Stop computing the STG after an analysis time of approximately <arg> seconds has been reached. (default: no limit). --max-transitions-forced-top arg Performs approximation after <arg> transitions (default: no limit). --max-iterations-forced-top arg Performs approximation after <arg> loop iterations (default: no limit). Currently requires --exploration-mode=loop-aware[-syn c]. --max-memory-forced-top arg Performs approximation after <arg> bytes of physical memory have been used (default: no limit). --max-time-forced-top arg Performs approximation after an analysis time of approximately <arg> seconds has been reached. (default: no limit). --resource-limit-diff arg Check if the resource limit is reached every <arg> computed estates. --print-all-options arg print the default values for all yes/no command line options. --rewrite rewrite AST applying all rewrite system rules. --run-rose-tests arg Run ROSE AST tests. [=yes|no] --threads arg Run analyzer in parallel using <arg> threads (experimental) -v [ --version ] display the version \end{verbatim} \subsection{Counterexample-Guided Prefix Refinement Analysis} The Counterexample-guided prefix refinement analysis (CEGPRA)~\cite{jasper2014counterexample} is a special instance of CEGAR for reactive, PLC-like systems. Based on an over-approximating initial model of the analyzed program's behavior, model checking is performed. In an iterative process, spurious counterexamples are removed by guided unrolling of the actual program's reachable state space. \begin{verbatim} --csv-stats-cegpra arg output statistics regarding the counterexample-guided prefix refinement analysis (cegpra) into a CSV file [arg] --cegpra-ltl arg Select the ID of an LTL property that should be checked using cegpra (between 0 and 99). 
--cegpra-ltl-all arg Check all specified LTL properties using cegpra [=yes|no] --cegpra-max-iterations arg Select a maximum number of counterexamples anaylzed by cegpra (default: no limit). --viz-cegpra-detailed arg generate visualization (.dot) output files with prefix <arg> for different stages within each loop of cegpra. \end{verbatim} \subsection{Program Equivalence Checking} The following list of options is relevant to the program equivalence checking capabilities of CodeThorn. \begin{verbatim} --dump-sorted arg [experimental] generates sorted array updates in file <file> --dump-non-sorted arg [experimental] generates non-sorted array updates in file <file> --rewrite-ssa rewrite SSA form: replace use of SSA variable by rhs of its assignment (only applied outside loops or unrolled loops). --print-rewrite-trace print trace of rewrite rules. --print-update-infos arg print information about array updates on stdout --rule-const-subst arg use const-expr substitution rule <arg> --rule-commutative-sort arg apply rewrite rule for commutative sort of expression trees. --specialize-fun-name arg function of name [arg] to be specialized --specialize-fun-param arg function parameter number to be specialized (starting at 0) --specialize-fun-const arg constant [arg], the param is to be specialized to. --specialize-fun-varinit arg variable name of which the initialization is to be specialized (overrides any initializer expression) --specialize-fun-varinit-const arg constant [arg], the variable initialization is to be specialized to. --verify-update-sequence-race-conditions arg [experimental] check race conditions of update sequence \end{verbatim} \subsection{Experimental} Experimental features that are not (yet) fully integrated. \begin{verbatim} --annotate-terms arg annotate term representation of expressions in unparsed program. --eliminate-stg-back-edges arg eliminate STG back-edges (STG becomes a tree). --generate-assertions arg generate assertions (pre-conditions) in program and output program (using ROSE unparser). --precision-exact-constraints arg (experimental) use precise constraint extraction [=yes|no] --report-semantic-fold arg report each folding operation with the respective number of estates. [=yes|no] --semantic-fold arg compute semantically folded state transition graph [=yes|no] --semantic-fold-threshold arg Set threshold with <arg> for semantic fold operation (experimental) --post-semantic-fold arg compute semantically folded state transition graph only after the complete transition graph has been computed. [=yes|no] --trace-file arg generate STG computation trace [=filename] --explicit-arrays arg represent all arrays ecplicitly in every state. --z3 RERS specific reachability analysis using z3 --rers-upper-input-bound arg RERS specific parameter for z3 --rers-verifier-error-number arg RERS specific parameter for z3 --ssa Generate SSA form (only works for programs without function calls, loops, jumps, pointers and returns) \end{verbatim} \subsection{Pattern Search} These options correspond to the pattern search exploration mode. During state-space exploration, it systematically unrolls repeating input/output patterns in order to reach deep areas of the state space of reactive systems. It was used as a black-box analysis during participation in the RERS Challenge. \begin{verbatim} --pattern-search-max-depth arg parameter of the pattern search mode. Sets the maximum input depth that is searched for cyclic I/O patterns (default: 10). 
--pattern-search-repetitions arg parameter of the pattern search mode. Sets the number of unrolled iterations of cyclic I/O patterns (default: 100). --pattern-search-max-suffix arg parameter of the pattern search mode. Sets the maximum input depth of the suffix that is searched for failing assertions after following an I/O-pattern (default: 5). --pattern-search-asserts arg reads a .csv-file (one line per assertion, e.g. "1,yes"). The pattern search terminates early if traces to all errors with "yes" entries have been found. [=file-path] --pattern-search-exploration arg exploration mode for the pattern search. Note: all suffixes will always be checked using depth-first search. [=depth-first|breadth-first] \end{verbatim} \subsection{SV-COMP} Options specific to analyzing programs of the SV-COMP competition (work in progress). \begin{verbatim} --svcomp-mode sets default options for all following SVCOMP-specific options. --error-function arg detect a verifier error function with name [arg] (terminates verification) \end{verbatim} \subsection{RERS Challenge} The following list contains options that are relevant when analyzing programs of the RERS Challenge. \begin{verbatim} --csv-assert arg output assert reachability results into a CSV file [arg] --eliminate-arrays arg transform all arrays into single variables. --iseq-file arg compute input sequence and generate file [arg] --iseq-length arg set length [arg] of input sequence to be computed. --iseq-random-num arg select random search and number of paths. --rers-binary arg Call rers binary functions in analysis. Use [=yes|no] --rers-numeric arg print rers I/O values as raw numeric numbers. --rersmode arg sets several options such that RERS-specifics are utilized and observed. --stderr-like-failed-assert arg treat output on stderr similar to a failed assert [arg] (default:no) \end{verbatim} \subsection{Linear Temporal Logic (LTL)} Options below allow to check whether an analyzed program satisfies Linear Temporal Logic (LTL) properties (currently restrcited to input/ouput traces). Option \enquote{--check-ltl} is used to specify an input LTL file in the format of the RERS Challenge\footnote{The following link leads to an exemplary input file: \url{http://www.rers-challenge.org/2014Isola/problems/constraints-RERS14-5.txt}}. \begin{verbatim} --csv-spot-ltl arg output SPOT's LTL verification results into a CSV file [arg] --csv-stats-size-and-ltl arg output statistics regarding the final model size and results for LTL properties into a CSV file [arg] --check-ltl arg take a text file of LTL I/O formulae [arg] and check whether or not the analyzed program satisfies these formulae. Formulae should start with '('. Use "csv-spot-ltl" option to specify an output csv file for the results. --single-property arg number (ID) of the property that is supposed to be analyzed. All other LTL properties will be ignored. ( Use "check-ltl" option to specify an input property file). --counterexamples-with-output arg reported counterexamples for LTL or reachability properties also include output values [=yes|no] --inf-paths-only arg recursively prune the transition graph so that only infinite paths remain when checking LTL properties [=yes|no] --io-reduction arg (work in progress) reduce the transition system to only input/output/worklist states after every <arg> computed EStates. --keep-error-states arg Do not reduce error states for the LTL analysis. [=yes|no] --ltl-in-alphabet arg specify an input alphabet used by the LTL formulae (e.g. 
"{1,2,3}") --ltl-out-alphabet arg specify an output alphabet used by the LTL formulae (e.g. "{19,20,21,22,23,24,25,26}") --ltl-driven select mode to verify LTLs driven by SPOT's access to the state transitions --no-input-input arg remove transitions where one input states follows another without any output in between. Removal occurs before the LTL check. [=yes|no] --std-io-only arg bypass and remove all states that are not standard I/O [=yes|no] --std-in-only arg bypass and remove all states that are not input-states [=yes|no] --std-out-only arg bypass and remove all states that are not output-states [=yes|no] --tg-ltl-reduced arg (experimental) compute LTL-reduced transition graph based on a subset of computed estates [=yes|no] --with-counterexamples arg adds counterexample I/O traces to the analysis results. Applies to reachable assertions and falsified LTL properties (uses RERS-specific alphabet). [=yes|no] --with-assert-counterexamples arg report counterexamples leading to failing assertion states [=yes|no] --with-ltl-counterexamples arg report counterexamples that violate LTL properties [=yes|no] \end{verbatim} \subsection{Parallel Process Graphs} These options allow to generate random parallel process graphs in the form of synchronized labeled transition systems. In addition, CodeThorn can be used to explore the state space of parallel interleavings of such process graphs. When selecting \enquote{--ltl=mode=mine}, random LTL properties are mined based on subsets of the analyzed process graphs. These features have been used to generate benchmarks for the Parallel LTL track of the RERS Challenge~\cite{jasper2017rers}. \begin{verbatim} --seed arg seed value for randomly selected integers (concurrency-related non-determinism might still affect results). --generate-automata arg generate random control flow automata that can be interpreted and analyzed as a parallel program. --num-automata arg select the number of parallel automata to generate. --num-syncs-range arg select a range for the number of random synchronizations between the generated automata (csv pair of integers). --num-circles-range arg select a range for the number of circles that a randomly generated automaton consists of (csv pair of integers). --circle-length-range arg select a range for the length of circles that are used to construct an automaton (csv pair of integers). --num-intersections-range arg select a range for the number of intersections of a newly added circle with existing circles in the automaton (csv pair of integers). --automata-dot-input arg reads in parallel automata with synchronized transitions from a given .dot file. --keep-systems arg store computed parallel systems (over- and under-approximated STGs) during exploration so that they do not need to be recomputed ([yes]|no). --use-components arg Selects which parallel components are chosen for analyzing the (approximated) state space ([all] | subsets-fixed | subsets-random). --fixed-subsets arg A list of sets of parallel component IDs used for analysis (e.g. "{1,2},{4,7}"). Use only with "--use-components=subsets-fixed". --num-random-components arg Number of different random components used for the analysis. Use only with "--use-components=subsets-random". Default: min(3, <num-parallel-components>) --parallel-composition-only arg If set to "yes", then no approximation will take place. Instead, the parallel compositions of the respective sub-systems will be expanded (sequentialized). Skips any LTL analysis. 
(Default: "no") --num-components-ltl arg Number of different random components used to generate a random LTL property. Default: value of option --num-random-components (a.k.a. all analyzed components) --minimum-components arg Number of different parallel components that need to be explored together in order to be able to analyze the mined properties. (default: 3). --different-component-subsets arg Number of random component subsets. The solver will be run for each of the random subsets. Use only with "--use-components=subsets-random" (default: no termination). --ltl-mode arg "check" checks the properties passed to option "--check-ltl=<filename>". "mine" searches for automatically generated properties that adhere to certain criteria. "none" means no LTL analysis (default). --mine-num-verifiable arg Number of verifiable properties satisfying given requirements that should be collected (default: 10). --mine-num-falsifiable arg Number of falsifiable properties satisfying given requirements that should be collected (default: 10). --minings-per-subsets arg Number of randomly generated properties that are evaluated based on one subset of parallel components (default: 50). --ltl-properties-output arg Writes the analyzed LTL properties to file <arg>. --promela-output arg Writes a promela program reflecting the synchronized automata of option "--automata-dot-input" to file <arg>. Includes LTL properties if analyzed. --promela-output-only arg Only generate Promela code, skip analysis of the input .dot graphs (yes|[no]). --output-with-results arg include results for the LTL properties in generated promela code and LTL property files (yes|[no]). --output-with-annotations arg include annotations for the LTL properties in generated promela code and LTL property files (yes|[no]). --verification-engine arg Choose which backend verification engine is used (ltsmin|[spot]). \end{verbatim} \subsection{Visualization} Transition graphs and other data structures can be visualized using the follwing command line options. The main option that activates most of the visualization features is \verb+--viz=yes+. \begin{verbatim} --dot-io-stg arg output STG with explicit I/O node information in dot file [arg] --dot-io-stg-forced-top arg output STG with explicit I/O node information in dot file. Groups abstract states together. [arg] --tg1-estate-address arg transition graph 1: visualize address [=yes|no] --tg1-estate-id arg transition graph 1: visualize estate-id [=yes|no] --tg1-estate-properties arg transition graph 1: visualize all estate-properties [=yes|no] --tg1-estate-predicate arg transition graph 1: show estate as predicate [=yes|no] --tg2-estate-address arg transition graph 2: visualize address [=yes|no] --tg2-estate-id arg transition graph 2: visualize estate-id [=yes|no] --tg2-estate-properties arg transition graph 2: visualize all estate-properties [=yes|no] --tg2-estate-predicate arg transition graph 2: show estate as predicate [=yes|no] --visualize-read-write-sets arg generate one graph for each parallel loop that illustrates the read and write accesses of the involved threads. --viz arg generate visualizations (.dot) outputs [=yes|no] \end{verbatim} \subsection{Data Race Detection} Options for data race detection. \begin{verbatim} --data-race perform data race detection --data-race-csv arg write data race detection results in specified csv file. Implicitly enables data race detection. --data-race-fail perform data race detection and fail on error (codethorn exit status 1). 
For use in regression verification. Implicitly enables data race detection. \end{verbatim} \subsection{Information} The following option allows to display additional information about the analysis. \begin{verbatim} --print-varid-mapping Print all information stored in var-id mapping after analysis. \end{verbatim} \section{Analysis Overview} The analysis is performed in five phases: \begin{enumerate} \item Syntactic and semantic analysis of the input program (ROSE). The program is analyzed and represented in memory as an annotated abstract syntax tree (AST). \item Control flow analysis. We compute a control flow graph (CFG) for the AST. Transitions, as computed for the state transition system in the next phase, correspond to edges in the CFG. \item Computation of the state transition system. \item LTL checking. Input to the LTL checking phase are the state transition system and the LTL formulae. \item Reporting of analysis results. Reachability of failing assertions or verification errors is computed based on the transition system. Results for LTL formulae are computed solely by the LTL checker. \end{enumerate} \noindent States of the analyzed program are represented as follows: \subsection{Program State} A program state consists of a label (lab), a property state (pstate), a constraint set (cset), and an IO property (io). $PState = Var \rightarrow Val$ where $Val$ is either a constant or $top$. $Val$ is a lifted integer set. An execution state is defined as $EState = Lab \times PState \times Constraints \times IO$ where $Constraints$ is a set of constraints, and $IO$ determines whether one of the variables in $PState$ is an input or output variable. More specifically, whether a variable is read from stdin, or printed to stdout or stderr. Furthermore it determines whether the state produces an output which is caused by a failed assert. \bibliographystyle{plain} \bibliography{codethorn} \end{document}
{ "alphanum_fraction": 0.5793893773, "avg_line_length": 55.8916797488, "ext": "tex", "hexsha": "8e5b056c704955ac934dcd9decb89579bfacdce8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b4a6d1b7f8762d94d9ee49777e2f0bf146475a10", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mschordan/rose-develop", "max_forks_repo_path": "projects/CodeThorn/docs/manual/codethorn.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b4a6d1b7f8762d94d9ee49777e2f0bf146475a10", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mschordan/rose-develop", "max_issues_repo_path": "projects/CodeThorn/docs/manual/codethorn.tex", "max_line_length": 239, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b4a6d1b7f8762d94d9ee49777e2f0bf146475a10", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mschordan/rose-develop", "max_stars_repo_path": "projects/CodeThorn/docs/manual/codethorn.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-05T21:59:32.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-05T21:59:32.000Z", "num_tokens": 6804, "size": 35603 }
\chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} SALSA-Onsala (``Such A Lovely Small Antenna'') is a 2.3~m diameter radio telescope built at Onsala Space Observatory, Sweden, to introduce pupils, students and teachers to the marvels of radio astronomy. The sensitive receiver makes it possible to detect radio emission from atomic hydrogen far away in our galaxy. From these measurements we can learn about the kinematics and distribution of gas in our galaxy, the Milky Way. In this document we first review some properties of the Milky Way, starting by describing the Galactic coordinate system and the geometry of a rotating disk. Then, we describe how to use data from SALSA to understand how fast gas rotates at different distances from the galactic center, i.e. how to make a \emph{rotation curve}. Finally, we use additional measurements, and our knowledge of the kinematics, to make a map of the spiral arms. Please note that this document is focused on the scientific interpretation. Instructions for operating the SALSA telescope can be found in the document entitled \emph{SALSA users manual} available at the SALSA website. \vspace{9cm} {\bf Coverimage:} Artist's conception of the spiral structure of the Milky Way with two major stellar arms and a central bar. Distances in light-years (ly) and directions in galactic coordinates. Credit: NASA/JPL-Caltech/ESO/R. Hurt.
{ "alphanum_fraction": 0.7965860597, "avg_line_length": 45.3548387097, "ext": "tex", "hexsha": "621aa06e70693a1a1f072c9a7d40e792d3f4a3e2", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-01-21T11:32:05.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-14T10:01:29.000Z", "max_forks_repo_head_hexsha": "2ddb4c34943d85aecebdef8745cc64c2daa4b8bb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "varenius/salsa", "max_forks_repo_path": "Lab_instructions/HI/English/abstract.tex", "max_issues_count": 72, "max_issues_repo_head_hexsha": "2ddb4c34943d85aecebdef8745cc64c2daa4b8bb", "max_issues_repo_issues_event_max_datetime": "2022-03-02T10:24:24.000Z", "max_issues_repo_issues_event_min_datetime": "2015-05-30T21:33:28.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "varenius/salsa", "max_issues_repo_path": "Lab_instructions/HI/English/abstract.tex", "max_line_length": 79, "max_stars_count": 13, "max_stars_repo_head_hexsha": "2ddb4c34943d85aecebdef8745cc64c2daa4b8bb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "varenius/salsa", "max_stars_repo_path": "Lab_instructions/HI/English/abstract.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-21T04:03:36.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-18T07:51:46.000Z", "num_tokens": 325, "size": 1406 }
\chapter{Contrast Enhancement} \section{Introduction} Because some features are hardly detectable by eye in an image, we often transform it before display. Histogram equalization is one the most well-known methods for contrast enhancement. Such an approach is generally useful for images with a poor intensity distribution. Since edges play a fundamental role in image understanding, a way to enhance the contrast is to enhance the edges. For example, we can add to the original image its Laplacian ($I^{'}= I + \gamma \Delta I$, where $\gamma$ is a parameter). Only features at the finest scale are enhanced (linearly). For a high $\gamma$ value, only the high frequencies are visible. Multiscale edge enhancement \cite{col:velde99} can be seen as a generalization of this approach to all resolution levels. In color images, objects can exhibit variations in color saturation with little or no correspondence in luminance variation. Several methods have been proposed in the past for color image enhancement \cite{col:toet92}. The retinex concept was introduced by Land \cite{col:land86} as a model for human color constitancy. The single scale retinex (SSR) method \cite{col:jobson97a} consists of applying the following transform to each band $i$ of the color image: \begin{eqnarray} R_i(x,y) = \log( I_i(x,y)) - \log(F(x,y) * I_i(x,y)) \end{eqnarray} where $R_i(x,y)$ is the retinex output, $I_i(x,y)$ is the image distribution in the $i$th spectral band, and $F$ is a Gaussian function. A gain/offset is applied to the retinex output which clips the highest and lowest signal excursions. This can be done by a k-sigma clipping. The retinex method is efficient for dynamic range compression, but does not provide good tonal rendition \cite{col:rahman96}. The Multiscale Retinex (MSR) combines several SSR outputs to produce a single output image which has both good dynamic range compression and color constancy, and good tonal rendition \cite{col:jobson97b}. The MSR can be defined by: \begin{eqnarray} R_{MSR_i} = \sum_{j=1}^N w_j R_{i,j} \end{eqnarray} with \begin{eqnarray} R_{i,j}(x,y) = \log( I_i(x,y)) - \log(F_j(x,y) * I_i(x,y)) \end{eqnarray} $N$ is the number of scales, $R_{i,j}$ is the $i$th spectral component of the MSR output, and $w_j$ is the weight associated with the scale $j$. The Gaussian $F_j$ is given by: \begin{eqnarray} F_j(x,y) = K \exp{- {r^2 \over c_j^2}} \end{eqnarray} $c_j$ defines the width of the Gaussian. In \cite{col:jobson97b}, three scales were recommended with $c_j$ values equal respectively to 15,80,250, and all weights $w_j$ fixed to ${1 \over N}$. The Multiscale Retinex introduces the concept of multiresolution for contrast enhancement. Velde \cite{col:velde99} has explicitly introduced the wavelet transform and has proposed an algorithm which modifies the wavelet coefficients in order to amplify faint features. % \section{Contrast Enhancement using the Wavelet Transform} % Velde \cite{col:velde99} proposed to use the wavelet transform % for edge enhancement. The idea is to first transform the image using the dyadic wavelet transform (two directions per scale). The gradient $G_{j,k}$ at scale $j$ and at pixel location $k$ is calculated at each scale $j$ from the wavelet coefficients $w_{j,k}^{(h)}$ and $w_{j,k}^{(v)}$ relative to the horizontal and vertical wavelet bands: $G_{j,k} = \sqrt{ (w_{j,k}^{(h)})^2 + (w_{j,k}^{(v)})^2}$. 
Then the two wavelet coefficients at scale $j$ and at position $k$ are multiplied by $y(G_{j,k})$, where $y$ is defined by: \begin{eqnarray} y(x) & = & ({m \over c})^p \mbox{ if } \mid x \mid < c \nonumber \\ y(x) & = & ({m \over \mid x \mid })^p \mbox{ if } c \le \mid x \mid < m \nonumber \\ y(x) & = & 1 \mbox{ if } \mid x \mid \ge m \label{eqn_velde} \end{eqnarray} \begin{figure}[htb] \vbox{ \centerline{ \hbox{ \psfig{figure=fig_velde.ps,bbllx=3cm,bblly=13cm,bburx=20cm,bbury=25.cm,width=6.5cm,height=4cm,clip=} }} } \caption{Enhanced coefficients versus original coefficients. Parameters are m=30, c=3 and p=0.5. } \label{fig_velde} \end{figure} Three parameters are needed: $p$, $m$ and $c$. $p$ determines the degree of non-linearity in the nonlinear rescaling of the luminance, and must be in $]0,1[$. Coefficients larger than $m$ are not modified by the algorithm. The $c$ parameter corresponds to the noise level. Figure~\ref{fig_velde} shows the modified wavelet coefficients versus the original wavelet coefficients for a given set of parameters ($m=30$, $c=3$ and $p=0.5$). Finally, the enhanced image is obtained by the inverse wavelet transform from the modified wavelet coefficients. For color images, a similar method can be used, but by calculating the multiscale gradient $\Gamma_{j,k}$ from the multiscale gradient of the three $L$, $u$, $v$ components: $\Gamma_j(i) = \sqrt{ \parallel G_{j,k}^L \parallel^2 + \parallel G_{j,k}^u \parallel^2 + \parallel G_{j,k}^v \parallel^2 }$. All wavelet coefficients at scale $j$ and at position $k$ are multiplied by $y(\Gamma_{j,k})$, the enhanced $\tilde L$, $\tilde u$, $\tilde v$ components are reconstructed from the modified wavelet coefficients, and the ($\tilde L$,$\tilde u$,$\tilde v$) image is transformed into an RGB image. More details can be found in \cite{col:velde99}. Wavelet bases present some limitations, because they are not adapted to the detection of highly anisotropic elements, such as alignments in an image, or sheets in a cube. Recently, other multiscale systems like ridgelets \cite{Harmnet} and curvelets \cite{Curvelets-StMalo,starck:sta01_3} which are very different from wavelet-like systems have been developed. Curvelets and ridgelets take the form of basis elements which exhibit very high directional sensitivity and are highly anisotropic. The curvelet transform uses the ridgelet transform in its digital implementation. We first describe the ridgelet and the curvelet transform, then we show how contrast enhancement can be obtained from the curvelet coefficients. \section{Contrast Enhancement by the Cur\-ve\-let Trans\-form} Since the curvelet transform is well-adapted to represent images containing edges, it is a good candidate for edge enhancement \cite{starck:capri02,starck:sta02_4}. Curvelet coefficients can be modified in order to enhance edges in an image. A function $y_c$ must be defined which modifies the values of the curvelet coefficients. It could be a function similar to the one defined for the wavelet coefficients \cite{col:velde99} (see equation~\ref{eqn_velde}). This function presents however the drawback of amplifying the noise (linearly) as well as the signal of interest. 
We introduce explicitly the noise standard deviation $\sigma$ in the equation: \begin{eqnarray} y_c(x, \sigma) & = & 1 \mbox{ if } x < c \sigma \nonumber \\ y_c(x, \sigma) & = & \frac{x-c\sigma}{c \sigma}(\frac{m}{c \sigma})^p + \frac{2c\sigma-x}{c \sigma} \mbox{ if } x < 2c \sigma \nonumber \\ y_c(x, \sigma) & = & (\frac{m}{x})^p \mbox{ if } 2c\sigma \le x < m \nonumber \\ y_c(x, \sigma) & = & (\frac{m}{x})^s \mbox{ if }x \ge m \label{eqn_velde_curve} \end{eqnarray} \begin{figure}[htb] \centerline{ \hbox{ \psfig{figure=fig_velde_mod.ps,bbllx=3cm,bblly=13cm,bburx=20cm,bbury=25.cm,width=6.5cm,height=4cm,clip=} \psfig{figure=fig_velde_mod_sat.ps,bbllx=3cm,bblly=13cm,bburx=20cm,bbury=25cm,width=6.5cm,height=4cm,clip=} }} \caption{Enhanced coefficients versus original coefficients. Left, parameters are m=30,c=0.5,s=0, and p=0.5. Right, parameters are m=30,c=0.5,s=0.7,p=0.9. } \label{fig_velde_cur_enhance} \end{figure} We have fixed $m=c=p=0.5$ and $s=0$ in all our experiments. $p$ determines the degree of non-linearity and $s$ introduces a saturation. $c$ becomes a normalized parameter, and a $c$ value larger than $3$ guaranties that the noise will not be amplified. The $m$ parameter can be defined either from the noise standard deviation ($m = K_m \sigma$) or from the maximum curvelet coefficient $M_c$ of the relative band ($m = l M_c$, with $l < 1$). The first choice allows the user to define the coefficients to amplify as a function of their signal-to-noise ratio, while the second one gives an easy and general way to fix the $m$ parameter independently of the range of the pixel values. Figure~\ref{fig_velde_cur_enhance} shows the curve representing the enhanced coefficients versus the original coefficients for two sets of parameters. In the second case, a saturation is added. The curvelet enhancement method for grayscale images consists of the following steps: \begin{enumerate} \item Estimate the noise standard deviation $\sigma$ in the input image $I$. \item Calculate the curvelet transform of the input image. We get a set of bands $w_{j}$, each band $w_j$ contains $N_j$ coefficients and corresponds to a given resolution level. \item Calculate the noise standard deviation $\sigma_j$ for each band $j$ of the curvelet transform (see \cite{starck:sta01_3} more details on this step). \item For each band $j$ do \begin{itemize} \item Calculate the maximum $M_j$ of the band. \item Multiply each curvelet coefficient $w_{j,k}$ by $y_c(\mid w_{j,k} \mid ,\sigma_j)$. \end{itemize} \item Reconstruct the enhanced image from the modified curvelet coefficients. \end{enumerate} For color images, we apply first the curvelet transform on the three components $L,u,v$. For each cur\-velet coef\-fi\-cient, we cal\-cu\-la\-te $e = \sqrt{ c_L^2 + c_u^2 + c_v^2}$, where $(c_L, c_u, c_v)$ are respectively the curvelet coefficients of the three components, and the mo\-di\-fied coef\-fi\-cients are obtained by: $(\tilde c_L, \tilde c_u, \tilde c_v) = (y_c(e, \sigma)c_L , y_c(e, \sigma)c_u, y_c(e, \sigma)c_v)$. Values in the enhanced components can be larger than the authorized upper limit (in general $255$), and we found it necessary to add a final step to our method, which is a sigma-clipping saturation. 
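To make the band-wise modification of step 4 concrete, a small NumPy sketch of the gain function $y_c$ of equation~\ref{eqn_velde_curve} is given below. It is only an illustration, not the implementation used for the experiments: the curvelet transform itself is not shown, a band is assumed to be a plain coefficient array, $m$ is taken from the band maximum as $m = l\,M_j$, and the default parameters correspond to those of the Saturn example ($s=0$, $p=0.5$, $c=3$, $l=0.5$).
\begin{verbatim}
import numpy as np

def y_c(x, sigma, m, c=3.0, p=0.5, s=0.0):
    # gain applied to the coefficient magnitude x (equation for y_c above)
    x = np.abs(np.asarray(x, dtype=float))
    g = np.ones_like(x)                      # x < c*sigma: leave unchanged
    mid = (x >= c * sigma) & (x < 2 * c * sigma)
    amp = (x >= 2 * c * sigma) & (x < m)
    sat = x >= m
    g[mid] = ((x[mid] - c * sigma) / (c * sigma)) * (m / (c * sigma)) ** p \
             + (2 * c * sigma - x[mid]) / (c * sigma)
    g[amp] = (m / x[amp]) ** p
    g[sat] = (m / x[sat]) ** s
    return g

def enhance_band(w, sigma_j, l=0.5, c=3.0, p=0.5, s=0.0):
    # step 4: multiply every coefficient of one band by y_c(|w|, sigma_j)
    m = l * np.abs(w).max()                  # m chosen from the band maximum M_j
    return w * y_c(w, sigma_j, m, c, p, s)
\end{verbatim}
The enhanced image is then reconstructed by the inverse curvelet transform from the bands returned by such a routine, exactly as in step 5 of the algorithm.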
\section{Examples} \subsubsection*{Saturn Image} \begin{figure}[htb] \centerline{ \vbox{ \hbox{ \psfig{figure=fig_sat512.ps,bbllx=1.8cm,bblly=12.7cm,bburx=14.5cm,bbury=25.4cm,width=8cm,height=8cm,clip=} \psfig{figure=fig_sat_contrast_histo.ps,bbllx=1.8cm,bblly=12.7cm,bburx=14.5cm,bbury=25.4cm,width=8cm,height=8cm,clip=} } \hbox{ \psfig{figure=fig_sat_contrast_wedge.ps,bbllx=1.8cm,bblly=12.7cm,bburx=14.5cm,bbury=25.4cm,width=8cm,height=8cm,clip=} \psfig{figure=fig_sat_contrast_cur.ps,bbllx=1.8cm,bblly=12.7cm,bburx=14.5cm,bbury=25.4cm,width=8cm,height=8cm,clip=} }} } \caption{Top, Saturn image and its histogram equalization. Bottom, enhancement image by the wavelet transform and the curvelet transform.} \label{fig_saturn_cur_enhance} \end{figure} Figure~\ref{fig_saturn_cur_enhance} shows respectively from left to right and from top to bottom the Saturn image, the histogram equalized image, the wavelet multiscale edge enhanced image and the curvelet multiscale edge enhanced image (parameters were $s=0$, $p=0.5$, $c=3$, and $l=0.5$). The curvelet multiscale edge enhanced image shows clearly better the rings and edges of Saturn. \subsubsection*{Satellite Image} \begin{figure}[htb] \centerline{ \vbox{ \hbox{ \psfig{figure=fig_marseille.ps,bbllx=1.9cm,bblly=12.8cm,bburx=14.6cm,bbury=25.5cm,width=10.cm,height=10cm,clip=} } \hbox{ \psfig{figure=fig_cur_marseille.ps,bbllx=1.9cm,bblly=12.8cm,bburx=14.6cm,bbury=25.5cm,width=10.cm,height=10cm,clip=} }} } \caption{Top, grayscale image, and bottom, curvelet enhanced image.} \label{fig_marseille_bw_cur_enhance} \end{figure} \begin{figure}[htb] \centerline{ \vbox{ \hbox{ \psfig{figure=kodak140501.ps,bbllx=5.9cm,bblly=8.1cm,bburx=15cm,bbury=21.7cm,width=5.5cm,height=8cm,clip=} \psfig{figure=kodak140501_ret.ps,bbllx=5.9cm,bblly=8.1cm,bburx=15cm,bbury=21.7cm,width=5.5cm,height=8cm,clip=} } \hbox{ \psfig{figure=kodak140501_mret.ps,bbllx=5.9cm,bblly=8.1cm,bburx=15cm,bbury=21.7cm,width=5.5cm,height=8cm,clip=} \psfig{figure=kodak140501_cur.ps,bbllx=5.9cm,bblly=8.1cm,bburx=15cm,bbury=21.7cm,width=5.5cm,height=8cm,clip=} }} } \caption{Top, color image (Kodak picture of the day 14/05/02) and retinex method. Bottom, multiscale retinex method and multiscale edge enhancement.} \label{fig_kodak_col_wt_enhance} \end{figure} \begin{figure}[htb] \centerline{ \vbox{ \hbox{ \psfig{figure=K111201.ps,bbllx=4.3cm,bblly=10.8cm,bburx=16.7cm,bbury=19.1cm,width=12.5cm,height=8.2cm,clip=} } \hbox{ \psfig{figure=K111201_cur.ps,bbllx=4.3cm,bblly=10.8cm,bburx=16.7cm,bbury=19.1cm,width=12.5cm,height=8.2cm,clip=} }} } \caption{Left, color image (Kodak picture of the day 11/12/01), and right, curvelet enhanced image.} \label{fig_kodak2_col_cur_enhance} \end{figure} Figure~\ref{fig_marseille_bw_cur_enhance} shows the results for the enhancement of a grayscale satellite image, and Figure~\ref{fig_kodak_col_wt_enhance} shows the results for the enhancement of a color image (Kodak image of the day 14/05/01) by the retinex, the multiscale retinex and the curvelet multiscale edge enhancement methods. Figure~\ref{fig_kodak2_col_cur_enhance} shows the results for the enhancement of a color image (Kodak image of the day 11/12/01). \section{Discussion} A number of properties, respected by the curvelet filtering described here, are important for contrast stretching: \begin{enumerate} \item Noise must not be amplified in enhancing edges. \item Colors should not be unduly modified. In multiscale retinex, for example, a tendancy towards increased grayness is seen. This is not the case using curvelets. 
\item It is very advantageous if block effects do not occur. Block overlapping is usually not necessary in curvelet-based contrast enhancement, unlike in the case of noise filtering. \end{enumerate} % A range of further examples can be seen at \\ % http://www-stat.stanford.edu/$\sim$jstarck/contrast.html. % \clearpage % \newpage
{ "alphanum_fraction": 0.7491908221, "avg_line_length": 45.2866449511, "ext": "tex", "hexsha": "480687979942511084b2ea46cac6cd95a4d30944", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "sfarrens/cosmostat", "max_forks_repo_path": "src/doc/doc_mra/doc_mr4/ch_curcontrast.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "sfarrens/cosmostat", "max_issues_repo_path": "src/doc/doc_mra/doc_mr4/ch_curcontrast.tex", "max_line_length": 141, "max_stars_count": null, "max_stars_repo_head_hexsha": "a475315cda06dca346095a1e83cb6ad23979acae", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "sfarrens/cosmostat", "max_stars_repo_path": "src/doc/doc_mra/doc_mr4/ch_curcontrast.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4367, "size": 13903 }
\section{Data Generation}

Before models for the detection of mechanical symbols can be created, it is necessary to collect data suitable for this task. The aim is to read handwritten mechanical symbols, so at this point the training data has to be created by hand. Since data generation can be time consuming, a way had to be found to create the data as efficiently as possible. Fortunately I had access to a tablet which allows drawing directly on the display.

To generate the data quickly, some requirements had to be taken into account:
\begin{enumerate}
    \item A drawing context is created.
    \item The context is variable in size and color.
    \item The context should be able to recognize mouse events.
    \item The script should have access to the file system to save data on the hard drive with no extra work required.
\end{enumerate}

A Python script is able to meet these requirements. The library \name{OpenCV} \cite{OpenCV2019} allows the creation of a window which satisfies them; the library \name{opencv-python} \cite{Heinisuo2019} is the implementation of \name{OpenCV} used here. It meets all these requirements because it provides a window with a programmable context and functions to handle mouse events\footnote{The data generation could also have been done using JavaScript in a web context, but the final requirement, saving the data, does not seem to be easy to meet there. Node.js has access to the file system, but is inherently ``headless'' and therefore does not provide a context to draw on.}.

Since the model should be able to recognize symbols of any color on any background, the background of the context is a random grayscale value, as is the color of the line drawn on it. The thickness of the line used for drawing is also random, thus mimicking the appearance of symbols of different sizes\footnote{This should help when predicting on images of different resolutions in a later process.}.

After the parameters used to draw the context have been set, the window is created with the \code{cv2.namedWindow} function. To handle interactions with the canvas, \code{setMouseCallback} is registered on the created window; it calls a \code{draw} function that detects and reacts to mouse events. Drawing is controlled by the events \code{cv2.EVENT\_LBUTTONDOWN} and \code{cv2.EVENT\_LBUTTONUP}, which set a boolean \code{drawing} flag to \code{True} or \code{False}, respectively. The drawn image is saved on \code{cv2.EVENT\_RBUTTONUP}, and the file is named after the number of images already saved, so the numbering happens automatically (a minimal sketch of this event flow is given below).

The names of the different classes are set at the beginning of the script, with an interactive prompt asking the user which class to fill next, with the options \textit{``x'' for base, ``o'' for link, ``n'' for no match}.
Consideration of the \textit{no match} class is especially important because later, when scanning an arbitrary image, most responses are likely to contain no symbol at all.

With this script 500 symbols are created for each class. It is expected that these images will be augmented later to counteract overfitting, but for the task of distinguishing three different classes, this set should be sufficient\footnote{The code can be viewed at \aka{https://github.com/klawr/deepmech/tree/master/reports/srp/code/data_generation.py}}.
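The following is a minimal sketch of the canvas and event handling described above, assuming \name{opencv-python} and \name{NumPy} are installed. It is not the exact repository script: the window name, canvas size, output directory and exit key are illustrative assumptions.

\begin{lstlisting}[language=Python, title={Minimal sketch of the drawing canvas (illustrative)}]
# Minimal sketch of the canvas described above -- not the exact repository
# script. Window name, canvas size, output directory and exit key are
# illustrative assumptions.
import os
import random

import cv2
import numpy as np

SIZE = 96           # side length of the square canvas (assumption)
OUT_DIR = "data/x"  # e.g. the "x" (base) class (assumption)


def new_canvas():
    """Random grayscale background, random line color and thickness."""
    bg, fg = random.randint(0, 255), random.randint(0, 255)
    thickness = random.randint(1, 4)
    return np.full((SIZE, SIZE, 3), bg, np.uint8), (fg, fg, fg), thickness


img, color, thickness = new_canvas()
drawing = False     # True while the left mouse button is held down
last = None         # last mouse position
count = len(os.listdir(OUT_DIR)) if os.path.isdir(OUT_DIR) else 0


def draw(event, x, y, flags, param):
    global img, color, thickness, drawing, last, count
    if event == cv2.EVENT_LBUTTONDOWN:
        drawing, last = True, (x, y)
    elif event == cv2.EVENT_MOUSEMOVE and drawing:
        cv2.line(img, last, (x, y), color, thickness)
        last = (x, y)
    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False
    elif event == cv2.EVENT_RBUTTONUP:
        # Save the finished symbol, numbered automatically, and start over.
        os.makedirs(OUT_DIR, exist_ok=True)
        cv2.imwrite(os.path.join(OUT_DIR, str(count) + ".png"), img)
        count += 1
        img, color, thickness = new_canvas()


cv2.namedWindow("canvas")
cv2.setMouseCallback("canvas", draw)
while True:
    cv2.imshow("canvas", img)
    if cv2.waitKey(20) == 27:   # Esc quits
        break
cv2.destroyAllWindows()
\end{lstlisting}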
\begin{figure}
    \centering
    \begin{subfigure}[b]{0.3\textwidth}
        \includegraphics[width=\textwidth]{images/25_n.png}
        \caption{Non hits}
        \label{fig:25_non_hits}
    \end{subfigure}
    \begin{subfigure}[b]{0.3\textwidth}
        \includegraphics[width=\textwidth]{images/25_o.png}
        \caption{Nodes}
        \label{fig:25_links}
    \end{subfigure}
    \begin{subfigure}[b]{0.3\textwidth}
        \includegraphics[width=\textwidth]{images/25_x.png}
        \caption{Bases}
        \label{fig:25_bases}
    \end{subfigure}
    \caption[Examples of node data for training]{Some examples of the data created with the described method. Only these three classes were created for the first tests. The symbols are drawn centered in a square, so that arbitrary images can later be scanned using squares of different sizes as templates.}
    \label{fig:generated_data_samples}
\end{figure}
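To illustrate the scanning idea mentioned in the caption of Figure~\ref{fig:generated_data_samples}, the sketch below cuts square crops of several sizes out of an arbitrary image and resizes them to the classifier's input resolution. The window sizes, the stride, the input resolution and the image path are assumptions, not values taken from the project.

\begin{lstlisting}[language=Python, title={Illustrative multi-scale scan with square templates}]
# Illustrative sliding-window scan -- sizes, stride and the assumed model
# input resolution are assumptions, not values taken from the project.
import cv2
import numpy as np

INPUT_SIZE = 32                 # assumed input resolution of the classifier


def square_crops(image, sizes=(32, 48, 64), stride=16):
    """Yield (x, y, size, crop) for square windows of several sizes."""
    h, w = image.shape[:2]
    for size in sizes:
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                crop = cv2.resize(image[y:y + size, x:x + size],
                                  (INPUT_SIZE, INPUT_SIZE))
                yield x, y, size, crop


# Example: collect all crops of a drawing as one batch for classification.
drawing = cv2.imread("some_drawing.png", cv2.IMREAD_GRAYSCALE)  # placeholder
assert drawing is not None, "replace the placeholder path with a real image"
batch = np.stack([crop for _, _, _, crop in square_crops(drawing)])
print(batch.shape)              # (number_of_windows, 32, 32)
\end{lstlisting}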
{ "alphanum_fraction": 0.7725371747, "avg_line_length": 76.8571428571, "ext": "tex", "hexsha": "c79019e89a9584d90d236724b886c3a744314e5d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "61de238f1d4b1b867ec1d5f4e4af2a3b25a5abff", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "klawr/deepmech", "max_forks_repo_path": "reports/srp/sections/data_generation.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "61de238f1d4b1b867ec1d5f4e4af2a3b25a5abff", "max_issues_repo_issues_event_max_datetime": "2022-02-27T13:13:17.000Z", "max_issues_repo_issues_event_min_datetime": "2022-02-27T13:13:17.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "klawr/deepmech", "max_issues_repo_path": "reports/srp/sections/data_generation.tex", "max_line_length": 428, "max_stars_count": 1, "max_stars_repo_head_hexsha": "61de238f1d4b1b867ec1d5f4e4af2a3b25a5abff", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "klawr/deepmech", "max_stars_repo_path": "reports/srp/sections/data_generation.tex", "max_stars_repo_stars_event_max_datetime": "2020-04-17T12:27:06.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-17T12:27:06.000Z", "num_tokens": 993, "size": 4304 }
\section{Introduction}
\label{section:introduction}
\setcounter{footnote}{0}

Infernal is used to search sequence databases for homologs of structural RNA sequences, and to make sequence- and structure-based RNA sequence alignments. Infernal builds a \emph{profile} from a structurally annotated multiple sequence alignment of an RNA family with a position-specific scoring system for substitutions, insertions, and deletions. Positions in the profile that are basepaired in the consensus secondary structure of the alignment are modeled as dependent on one another, allowing Infernal's scoring system to consider the secondary structure, in addition to the primary sequence, of the family being modeled. Infernal profiles are probabilistic models called ``covariance models'', a specialized type of stochastic context-free grammar (SCFG) \citep{Lari90}.

Compared to other alignment and database search tools based only on sequence comparison, Infernal aims to be significantly more accurate and more able to detect remote homologs because it models sequence and structure. But modeling structure comes at a high computational cost, and the slow speed of CM homology searches has been a serious limitation of previous versions. With version 1.1, typical homology searches are now about 100x faster, thanks to the incorporation of accelerated HMM methods from the HMMER3 software package (\url{http://hmmer.org}), making Infernal a much more practical tool for RNA sequence analysis.

\subsection{How to avoid reading this manual}

If you're like most people, you don't enjoy reading documentation. You're probably thinking: \pageref{manualend} pages of documentation, you must be joking! I just want to know that the software compiles, runs, and gives apparently useful results, before I read some \pageref{manualend} exhausting pages of someone's documentation. For cynics that have seen one too many software packages that don't work:

\begin{itemize}
\item Follow the quick installation instructions on page \pageref{section:installation}. An automated test suite is included, so you will know immediately if something went wrong.\footnote{Nothing should go wrong.}

\item Go to the tutorial section on page \pageref{section:tutorial}, which walks you through some examples of using Infernal on real data.
\end{itemize}

Everything else, you can come back and read later.

\subsection{What covariance models are}

Covariance models (CMs) are statistical models of structurally annotated RNA multiple sequence alignments, or even of single sequences and structures. CMs are a specific formulation of profile stochastic context-free grammars (profile SCFG), which were introduced independently by Yasu Sakakibara in David Haussler's group \citep{Sakakibara94c} and by Sean Eddy and Richard Durbin \citep{Eddy94}. CMs are closely related to profile hidden Markov models (profile HMMs) commonly used for protein sequence analysis, but are more complex. CMs and profile HMMs both capture position-specific information about how conserved each column of the alignment is, and which residues are likely. However, in a profile HMM each position of the profile is treated independently, while in a CM basepaired positions are dependent on one another. The dependency between paired positions in a CM enables the profile to model \emph{covariation} at these positions, which often occurs between basepaired columns of structural RNA alignments.
For many of these basepairs, it is not the specific nucleotides that make up the pair that are conserved by evolution, but rather that the pair maintain Watson-Crick basepairing. The added signal from covariation can be significant when using CMs for homology searches in large databases.

Section~\ref{section:cmbuild} of this guide explains how a CM is constructed from a structurally annotated alignment using a toy example.

CMs do have important limitations though. For example, a CM can only model what is called a ``well-nested'' set of basepairs. Formally, in a well-nested set of basepairs there are no two basepairs between positions $i:j$ and $k:l$ such that $i<k<j<l$ (a small illustrative check of this condition is sketched at the end of this introduction). CMs cannot model pseudoknots in RNA secondary structures. Additionally, a CM only models a single consensus structure for the family it models.

\subsection{Applications of covariance models}

Infernal can be useful if you're interested in a particular RNA family. Imagine that you've carefully collected and aligned a set of homologs and have a predicted (or known) secondary structure for the family. Homology searches with BLAST using single sequences from your set of homologs may not reveal any additional homologs in sequence databases. You can build a CM from your alignment and redo your search using Infernal (this time only a single search) and you may find new homologs thanks to the added power of the profile-based sequence and structure scoring system of CMs.

The Rfam database \citep{Gardner11} essentially does just this, but on a much larger scale. The Rfam curators maintain about 2000 RNA families, each represented by a multiple sequence alignment (called a \emph{seed} alignment) and a CM built from that alignment. Each Rfam release involves a search through a large EMBL-based nucleotide sequence database with each of the CMs, which identifies putative structural RNAs in the database. The annotations of these RNAs, as well as the CMs and seed alignments, are freely available.

Automated genome annotation of structural RNAs can be performed with Infernal and a collection of CMs from Rfam, by searching through the genome of interest with each CM and collecting information on high-scoring hits. Previous versions of Infernal were too slow to be incorporated into many genome annotation pipelines, but we're hoping the improved speed of version 1.1 changes this.

Another application is the automated construction and maintenance of large sequence- and structure-based multiple alignment databases. For example, the Ribosomal Database Project uses CMs of 16S small subunit ribosomal RNA (16S SSU rRNA) to maintain alignments of millions of 16S sequences \citep{Cole09}. The CMs (one archaeal 16S and one bacterial 16S model) were built from training alignments of only a few hundred representative sequences. The manageable size of the training alignments means that they can be manually curated prior to building the model. Rfam is another example of this application, because it creates and makes available multiple alignments (called \emph{full} alignments) of all of the hits from the database that its curators believe to be real RNA homologs.

Infernal can also be used to determine what types of RNAs exist in a particular sequence dataset. Suppose you're performing a metagenomics analysis and have collected sequences from an exotic environmental sample. You can download all the CMs from Rfam and use Infernal to search through all your sequences for high-scoring hits to the models.
The types of structural RNAs identified in your sample can be informative as to what types of organisms are in your sample, and what types of biological processes they're carrying out. Version 1.1 includes a new program called \prog{cmscan} which is designed for just this type of analysis.

\subsection{Infernal and HMMER, CMs and profile HMMs}

Infernal is closely related to HMMER. In fact, HMMER is used as a library within the Infernal codebase. This allows Infernal to use the highly optimized profile HMM dynamic programming implementations in HMMER to greatly accelerate its homology searches. Also, the design and organization of the Infernal programs (e.g. \ccode{cmbuild}, \ccode{cmsearch}, \ccode{cmalign}) follows that in HMMER (\ccode{hmmbuild}, \ccode{hmmsearch}, \ccode{hmmalign}). And there are many functions in Infernal that are based on analogous ones in HMMER. The formatting of output is often very similar between these two software packages, and the user guides are even organized and written in a similar (and, in some places, identical) way. This is, of course, on purpose. Since both packages are developed in the same lab, consistency simplifies the development and maintenance of the code, but we also do it to make the software (hopefully) easier to use (someone familiar with using HMMER should be able to pick up and use Infernal very easily, and vice versa). However, Infernal development tends to lag behind HMMER development as new ideas and algorithms are applied to the protein or DNA world with profile HMMs, and then later extended to CMs for use on RNAs.
%Some of the current features of HMMER are on
%the 00TODO list for Infernal (and by the time they're implemented they
%will have been replaced on that list).

This consistency is possible because profile HMMs and covariance models are related models with related applications. Profile HMMs are profiles of the conserved sequence of a protein or DNA family and CMs are profiles of the conserved sequence \emph{and} well-nested secondary structure of a structural RNA family. Applications of profile HMMs include annotating protein sequences in proteomes or protein sequence databases and creating multiple alignments of protein domain families. And similarly, applications of CMs include annotating structural RNAs in genomes or nucleotide sequence databases and creating sequence- and structure-based multiple alignments of RNA. The crucial difference is that CMs are able to model dependencies between a set of well-nested (non-pseudoknotted) basepaired positions in a structural RNA family. The statistical signal inherent in these dependencies is often significant enough to make modeling the family with a CM a noticeably more powerful approach than modeling the family with a profile HMM.

\subsection{What's new in Infernal 1.1}

The most important difference between version 1.1 and the previous version (1.0.2) is the improved search speed that results from a new filter pipeline. The pipeline is explained in more detail in section~\ref{section:pipeline}. Another important change is the introduction of the \prog{cmscan} program, for users who want to know what structural RNAs are present in a collection of sequences, such as a metagenomics dataset\footnote{\prog{cmscan} is similar to \prog{cmsearch} but is more convenient for some applications. One difference between the two programs is that results from \prog{cmscan} are organized per-sequence instead of per-model.}.
Another new feature of version 1.1 is better handling of truncated RNAs, for which part of one or both ends of the RNA is missing due to a premature end of the sequence \citep{KolbeEddy09}. These types of fragmentary sequences are common in whole genome shotgun sequencing datasets. While previous versions of Infernal were prone to misalignment of these sequences, version 1.1 includes implementations of CM search and alignment algorithms specialized for truncated sequences \citep{KolbeEddy09} in \prog{cmsearch}, \prog{cmscan} and \prog{cmalign}.

Model parameterization has changed in several minor ways. Mixture Dirichlet priors for emissions and single component Dirichlet priors for transitions have been reestimated using larger and more diverse datasets than the ones the previous priors were derived from (discussed in \citep{NawrockiEddy07}). Also, the definition of match and insert columns, previously determined by a simple majority rule using absolute counts (columns in which $\geq 50\%$ of the sequences include a residue were match columns, all others were insert columns), now uses \emph{weighted} counts (and the same $\geq 50\%$ rule) after a sequence weighting algorithm is applied. And inserts before the first and after the final match position of alignments are now ignored by the CM construction procedure and thus no longer contribute to parameterizing the transition probabilities of the model (specifically, the \ccode{ROOT\_IL} and \ccode{ROOT\_IR} states). These changes mean that for a given input alignment a model built with version 1.1 may have different numbers of states and nodes, and will (usually) have slightly different parameters than a model built from the same alignment with version 1.0.2. Finally, the important \prog{cmbuild} command line options \prog{--rf} and \prog{--gapthresh} have been renamed to \prog{--hand} and \prog{--symfrac}\footnote{To reproduce the behavior obtained in previous versions with \prog{--gapthresh <x>} use \prog{--symfrac <1-x>}.}.

The formatting of \prog{cmsearch} output has also changed. It mirrors the output format of the \prog{hmmsearch} program from HMMER3; for examples, see the tutorial section of this guide. Another change is that the most compute-intensive programs in Infernal 1.1 (\prog{cmcalibrate}, \prog{cmsearch}, \prog{cmscan} and \prog{cmalign}) support multicore parallelization using threads.

\subsection{How to learn more about CMs and profile HMMs}

Section~\ref{section:cmbuild} of this guide may be a good place to start. That section walks through an example of how a CM is constructed from a structurally annotated multiple sequence alignment. The tutorial section is also recommended for all users. As for other available publications: two papers published in 1994 introduced profile SCFGs in computational biology \citep{Sakakibara94c,Eddy94}, and our lab has published several papers \citep{Eddy02b,KleinEddy03,NawrockiEddy07,Nawrocki09,KolbeEddy09,KolbeEddy11}, book chapters \citep{Eddy06b,NawrockiEddy09}, and a few doctoral theses \citep{Klein03,Nawrocki09b,Kolbe10} related to CMs\footnote{Eddy lab publications are available from \url{http://eddylab.org/publications.html}}. The book \emph{Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids} \citep{Durbin98} has several chapters devoted to HMMs and CMs. Profile HMM filtering for CMs was introduced by Weinberg and Ruzzo \citep{WeinbergRuzzo04,WeinbergRuzzo04b,WeinbergRuzzo06}.
There are two papers from our lab on HMMER3 profile HMMs that are directly related to Infernal's accelerated filter pipeline \citep{Eddy08,Eddy11}.

Since CMs are closely related to, but more complex than, profile HMMs, readers seeking to understand CMs who are unfamiliar with profile HMMs may want to start there. Reviews of the profile HMM literature have been written by our lab \citep{Eddy96,Eddy98} and by Anders Krogh \citep{Krogh98}. And to learn more about HMMs from the perspective of the speech recognition community, an excellent tutorial introduction has been written by Rabiner \citep{Rabiner89}. For details on how profile HMMs and probabilistic models are used in computational biology, see the pioneering 1994 paper from Krogh et al. \citep{Krogh94} and again the \emph{Biological Sequence Analysis} book \citep{Durbin98}.

Finally, Sean Eddy writes about HMMER, Infernal and other lab projects in his blog \textbf{Cryptogenomicon} (\url{http://cryptogenomicon.org/}).

\begin{srefaq}{How do I cite Infernal?}
The Infernal 1.1 paper (Infernal 1.1: 100-fold faster RNA homology searches, EP Nawrocki and SR Eddy. Bioinformatics, 29:2933-2935, 2013.) is the most appropriate paper to cite. If you're writing for an enlightened (url-friendly) journal, you may want to cite the webpage \url{http://eddylab.org/infernal/} because it is kept up-to-date.
\end{srefaq}
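As a concrete illustration of the ``well-nested'' condition defined earlier in this introduction (in the section on what covariance models are), the short sketch below checks whether a set of basepairs is free of crossing, i.e. pseudoknotted, pairs. It is purely illustrative: the function and its list-of-pairs input format are not part of Infernal.

\begin{lstlisting}[language=Python, title={Illustrative check of the well-nested condition}]
# Check that no two basepairs i:j and k:l cross, i.e. that there is no
# pair of basepairs with i < k < j < l. The function and its input format
# (a list of (i, j) position pairs) are illustrative, not part of Infernal.
def well_nested(pairs):
    pairs = [(min(i, j), max(i, j)) for i, j in pairs]
    for n, (i, j) in enumerate(pairs):
        for k, l in pairs[n + 1:]:
            if i < k < j < l or k < i < l < j:
                return False   # crossing pairs: a pseudoknot
    return True


# A simple stem-loop is well-nested ...
print(well_nested([(1, 10), (2, 9), (3, 8)]))    # True
# ... while crossing (pseudoknotted) pairs are not.
print(well_nested([(1, 10), (5, 15)]))           # False
\end{lstlisting}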
{ "alphanum_fraction": 0.8078634044, "avg_line_length": 51.6418918919, "ext": "tex", "hexsha": "2ebf62ce7b25186e96d0688840225f93792b989d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a358a4984a90efd8177a82440f7576204735ae5c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "lamby/infernal", "max_forks_repo_path": "documentation/userguide/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a358a4984a90efd8177a82440f7576204735ae5c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "lamby/infernal", "max_issues_repo_path": "documentation/userguide/introduction.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "a358a4984a90efd8177a82440f7576204735ae5c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "lamby/infernal", "max_stars_repo_path": "documentation/userguide/introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3703, "size": 15286 }
\begin{wrapfigure}{r}{0.5\textwidth} \begin{center} \includegraphics[width=0.5\textwidth]{tarasov-phy/cover} \end{center} %\caption{Front Cover} \end{wrapfigure} \section{Foreword} \hspace{5mm}It can safely be asserted that no student preparing for an entrance examination in physics, for admission to an engineering institute has yet opened a book similar to this one. Employing the extremely lively form of dialogue, the authors were able to comprehensively discuss almost all the subjects in the syllabus, especially questions usually considered difficult to understand. The book presents a detailed analysis of common mistakes made by students taking entrance examinations in physics. Students will find this to be an exceptionally clear and interesting textbook which treats of complicated problems from various viewpoints and contains a great many excellent illustrations promoting a deeper understanding of the ideas and concepts involved. \vspace{2mm} The authors are lecturers of the Moscow Institute of Electronics Engineering and are well acquainted with the general level of training of students seeking admission to engineering institutes; they have years of experience in conducting entrance examinations. The expert knowledge of the authors, in conjunction with the lively and lucid presentation, has made this a very useful study guide for students preparing for physics examinations. \vspace{0.2in} \hfill \emph{Prof. G. Epifanov, D.Sc. (Phys. and Math.)} \section{Preface} \thispagestyle{empty} \hspace{5mm}This book was planned as an aid to students preparing for an entrance examination in physics for admission to an engineering institute. It has the form of a dialogue between the author (the TEACHER) and an inquisitive reader (the STUDENT). This is exceptionally convenient for analyzing common errors made by students in entrance examinations, for reviewing different methods of solving the same problems and for discussing difficult questions of physical theory. A great many questions and problems of school physics are dealt with. Besides, problems are given (with solutions) for home study. Most of the questions and problems figured in the entrance examinations of the Moscow Institute of Electronics Engineering in the years 1964-66. \vspace{2mm} An analysis of mistakes made by students is always instructive. Attention can be drawn to various aspects of the problem, certain fine points can be made, and a more thorough understanding of the fundamentals can be reached. Such an analysis, however, may prove to be very difficult. Though there is only one correct answer, there can be a great many incorrect ones. It is practically impossible to foresee all the incorrect answers to any question; many of them remain concealed forever behind the distressing silence of a student being orally examined. Nevertheless, one can point out certain incorrect answers to definite questions that are heard continually. There are many questions that are almost inevitably answered incorrectly. This book is based mainly on these types of questions and problems. \vspace{2mm} We wish to warn the reader that this is by no means a textbook embracing all the items of the syllabus. He will not find here a systematic account of the subject matter that may be required by the study course in physics. He will find this text to be perhaps more like a freely told story or, rather, a freely conducted discussion. 
Hence, it will be of little use to those who wish to begin their study of physics or to systematize their knowledge of this science. It was intended, instead, for those who wish to increase their knowledge of physics on the threshold of their examinations. \vspace{2mm} Our ideal reader, as we conceive him, has completed the required course in school physics, has a good general idea of what it is all about, remembers the principal relationships, can cite various laws and has a fair knowledge of the units employed. He is in that \hlt{suspended} state in which he is no longer a secondary school student and has not yet become a full-fledged student of an institute. He is eager, however, to become one. If this requires an extension of his knowledge in physics, our book can help him. \vspace{2mm} Primarily, we hope our book will prove that memorizing a textbook (even a very good one) is not only a wearisome business, but indeed a fruitless one. A student must learn to \hlt{think}, to ponder over the material and not simply learn it by heart. If such an understanding is achieved, to some extent or other, we shall consider our efforts worthwhile. \vspace{10mm} \thispagestyle{empty} In conclusion, we wish to thank Prof. G. Epifanov without whose encouragement and invaluable aid this book could not have been written and prepared for publication. We also gratefully acknowledge the many helpful suggestions and constructive criticism that were made on the manuscript by Prof. V. A. Fabrikant, Associate Prof. A. G. Chertov, and E. N. Vtorov, Senior Instructor of the Physics Department of the Moscow Power Engineering Institute. \vspace{0.2in} \hfill \hlt{Lev Tarasov} \hfill \hlt{Aldina Tarasova} \vspace{0.2in} Employing the extremely lively form of dialogue between the TEACHER and an inquisitive READER, this book comprehensively discusses almost all the subjects in school physics, especially questions usually considered difficult to understand. \vspace{2mm} In this edition, the entire manuscript was typeset using the \LaTeXe{} document processing system originally developed by \emph{Leslie Lamport}, based on \TeX{} typesetting system created by \emph{Donald Knuth}. The typesetting software used the \hologo{XeLaTeX} distribution. \vspace{2mm} I am grateful for this opportunity to put the materials into a consistent format, and to correct errors in the original publication that have come to my attention. The process of compiling this book has given me an incentive to improve and extend the text, dialogue, concepts, problems, solutions, layout, to double check almost all of the mathematical rendering, to correct all known errors, to improve the original illustrations by redrawing them with Till Tantau's marvellous \textup{Ti\textit{k}Z}, to include new diagrams. Thus the book now appears in a form that we hope will remain useful for at least another generation. \vspace{3mm} \noindent {\calligra Ancient Science Publishers} \hfill \emph{Chandra Shekhar Kumar} \begin{center} AUTHOR (TEACHER) \\ READER (STUDENT) \end{center} \section{Table of Contents} \begin{enumerate}[nosep] \item Can You Analyze Graphs Representing the Kinematics of Straight-Line Motion ? \item Can You Show The Forces Applied To A Body? \item Can You Determine The Friction Force? \item How Well Do You Know Newton’s Laws Of Motion ? \item How Do You Go About Solving Problems In Kinematics ? \item How Do You Go About Solving Problems In Dynamics ? \item Are Problems In Dynamics Much More Difficult To Solve If Friction Is Taken Into Account ? 
\item How Do You Deal With Motion In A Circle ? \item How Do You Explain The Weightlessness Of Bodies ? \item Can You Apply The Laws Of Conservation Of Energy And Linear Momentum ? \item Can You Deal With Harmonic Vibrations ? \item What Happens To A Pendulum In A State Of Weightlessness ? \item Can You Use The Force Resolution Method Efficiently ? \item What Do You Know About The Equilibrium Of Bodies ? \item How Do You Locate The Centre Of Gravity ? \item Do you know Archimedes’ principle ? \item Is Archimedes’ Principle Valid In A Spaceship ? \item What Do You Know About The Molecular-Kinetic Theory Of Matter ? \item How Do You Account For The Peculiarity In The Thermal Expansion Of Water ? \item How Well Do You Know The Gas Laws ? \item How Do You Go About Solving Problems On Gas Laws ? \item Let Us Discuss Field Theory \item How Is An Electrostatic Field Described ? \item How Do Lines Of Force Behave Near The Surface Of A Conductor ? \item How Do You Deal With Motion In A Uniform Electrostatic Field ? \item Can You Apply Coulomb’s Law ? \item Do You Know Ohm’s Law ? \item Can A Capacitor Be Connected Into A Direct-Current Circuit ? \item Can you compute the resistance of a branched portion of a circuit ? \item Why Did The Electric Bulb Burn Out ? \item Do You Know How Light Beams Are Reflected And Refracted ? \item How Do You Construct Images Formed By Mirrors And Lenses ? \item How Well Do You Solve Problems Involving Mirrors And Lenses ? \end{enumerate} \hrulefill \newtheorem{probl}{\textcolor{Gold}{\textbf{\textsc{Author}}}}[chapter] \renewenvironment{p} % this is the environment name for the input {\renewcommand{\qedsymbol}{$\lozenge$}% \pushQED{\qed}\begin{probl}} {\popQED\end{probl}} \renewenvironment{s} {\renewcommand{\qedsymbol}{\tiny$\blacksquare$} \vspace{-\baselineskip} \begin{proof}[\emph{\textbf{\scshape \textcolor{BurntOrange}{Reader}}}]\color{zinnwalditebrown}} {\end{proof}} \underline{\textbf{\textcolor{BurntOrange}{Excerpt from the Chapter} \textcolor{Sepia}{4:}}} \section{How Well Do You Know Newton's Laws Of Motion ?} \begin{p} Please state \hlt{Newton's first law of motion.} \end{p} \begin{s} A body remains at rest or in a state of uniform motion in a straight line until the action of other bodies compels it to change that state. \end{s} \begin{p} Is this law valid in all frames of reference ? \end{p} \begin{s} I don't understand your question. \end{s} \begin{p} If you say that a body is at rest, you mean that it is stationary with respect to some other body which, in the given case, serves as the reference system, or frame of reference. It is quite pointless to speak of a body being in a state of rest or definite motion without indicating the frame of reference. The nature of the motion of a body depends upon the choice of the frame of reference. For instance, a body lying on the floor of a traveling railway car is at rest with respect to a frame of reference attached to the car, but is moving with respect to a frame of reference attached to the track. Now we can return to my question. Is Newton's first law valid for all frames of reference ? \end{p} \begin{s} Well, it probably is. \end{s} \begin{p} I see that this question has taken you unawares. Experiments show that Newton's first law is not valid for all reference systems. Consider the example with the body lying on the floor of the railway car. We shall neglect the friction between the body and the floor. 
First we shall deal with the position of the body with respect to a frame of reference attached to the car. We can observe the following: the body rests on the floor and, all of a sudden, it begins to slide along the floor even though no action of any kind is evident. Here we have an obvious violation of Newton's first law of motion. The conventional explanation of this effect is that the car, which had been traveling in a straight line and at uniform velocity, begins to slow down, because the train is braked, and the body, due to the absence of friction, continues to maintain its state of uniform straight-line motion with respect to the railway tracks. From this we can conclude that Newton's law holds true in a frame of reference attached to the railway tracks, but not in one attached to a car being slowed down. \hlt{Frames of reference for which Newton's first law is valid are said to be inertial; those in which it is not valid are non-inertial.} For most of the phenomena we deal with we can assume that any frame of reference is inertial if it is attached to the earth's surface, or to any other bodies which are at rest with respect to the earth's surface or travel in a straight line at uniform velocity. Non-inertial frames of reference are systems traveling with acceleration (or deceleration), for instance rotating systems, accelerating or decelerating lifts, etc. \hlt{Note that not only Newton's first law of motion is invalid for non-inertial reference systems, but his second law as well (since the first law is a particular case of the second law).} \end{p} \begin{s} But if Newton's laws cannot be employed for frames of reference traveling with acceleration, then how can we deal with mechanics in such frames ? \end{s} \begin{p} Newton's laws of motion can nevertheless be used for non-inertial frames of reference. To do this, however, it will be necessary to apply, purely formally, an additional force to the body. This force, the so called \hlt{inertial force}, equals the product of the mass of the body by the acceleration of the reference system, and its direction is opposite to the acceleration of the body. \hlt{I should emphasize that no such force actually exists but, if it is formally introduced, then Newton's laws of motion will hold true in a non-inertial frame of reference.} \hlt{I want to advise you, however, to employ only inertial frames of reference in solving problems.} Then, all the forces that you have to deal with will be really existing forces. \end{p} \begin{s} But if we limit ourselves to inertial frames of reference, then we cannot analyze, for instance, a problem about a body lying on a rotating disk. \end{s} \begin{p} Why can't we ? The choice of the frame of reference is up to you. If in such a problem you use a reference system attached to the disk (i.e. a non-inertial system), the body is considered to be at rest. But if your reference system is attached to the earth (i.e. an inertial reference system), then the body is dealt with as one traveling in a circle. I would advise you to choose an inertial frame of reference. And now please state \hlt{Newton's second law of motion.} \end{p} \begin{s} This law can be written as \hlm{$F=ma$}, where \hlm{$F$} is the force acting on the body, \hlm{$m$} is its mass and \hlm{$a$} : acceleration. \end{s} \begin{p} Your laconic answer is very typical. I should make three critical remarks on your statement; two are not very important and one is essential. 
In the first place, \hlt{it is not the force that results from the acceleration, but, on the contrary, the acceleration is the result of the applied force.} It is therefore more logical to write the equation of the law as
\hlm{\begin{equation}%
a=B\cdot\frac{F}{m}
\label{eq-10}
\end{equation}}
where \hlm{$B$} is the proportionality factor depending upon the choice of units of measurement of the quantities in~\cref{eq-10}. Notice that your version had no mention of the proportionality factor \hlm{$B$}.

Secondly, a body is accelerated by all forces applied to it (though some may counterbalance one another). Therefore, in stating the law you should use, not the term \hlt{force}, but the more accurate term \hlt{resultant force.}

My third remark is the most important. Newton's second law establishes a relationship between force and acceleration. But force and acceleration are vector quantities, characterized not only by their numerical value (magnitude) but by their direction as well. Your statement of the law fails to specify the directions. This is an essential shortcoming. Your statement leaves out a vital part of Newton's second law of motion. Correctly stated it is: \hlt{the acceleration of a body is directly proportional to the resultant of all forces acting on the body, inversely proportional to the mass of the body and takes place in the direction of the resultant force}. This statement can be analytically expressed by the formula
\hlm{\begin{equation}%
\vec{a}=B\cdot\frac{\vec{F}}{m}
\label{eq-11}
\end{equation}}
(where the arrows over the letters denote vectors).
\end{p}

\begin{s}
When in~\cref{ch2} we discussed the forces applied to a body thrown upward at an angle to the horizontal, you said you would show later that the direction of motion of a body does not necessarily coincide with the direction of the force applied to it. You referred then to Newton's second law.
\end{s}

\begin{p}
Yes, I remember, and I think it would be quite appropriate to return to this question. Let us recall what acceleration is. As we know, acceleration is characterized by the change in velocity in unit time. Illustrated in~\cref{fig:18} are the velocity vectors \hlm{$\vv{v_{1}}$} and \hlm{$\vv{v_{2}}$} of a body for two nearby instants of time \hlm{$t$} and \hlm{$t+\Delta t$}. The change in velocity during the time \hlm{$\Delta t$} is the vector \hlm{$\Delta \vv{v} =\vv{v_{2}} - \vv{v_{1}}$}.

\begin{figure}[H]
\centering
\begin{tikzpicture}
\draw[-latex, line width=1.7pt, xptcolor] (0,0) -- (7,0) node[above left,xptcolor] {$\vec{v_1}$};
\draw[-latex, line width=1.7pt, horzlinecolor] (7,0) -- (7,-2);
\draw (7.4,-1) node[rotate=-90,horzlinecolor] {$\vec{\Delta v}$};
\draw[-latex, line width=1.7pt, plotptcolor] (0,0) -- (7,-2);
\draw (-20:4) node[rotate=-20,plotptcolor] {$\vec{v_2}$};
\end{tikzpicture}
\caption{The change of the velocity vector in unit time gives the acceleration}
\label{fig:18}
\end{figure}

By definition, the acceleration is
\hlm{\begin{equation}%
\vec{a}(t) \cong \frac{\Delta \vec{v}}{\Delta t}
\label{eq-12}
\end{equation}}
or, more rigorously,
\hlm{\begin{equation}%
\vec{a}(t) = \lmts{\Delta t}{0}\frac{\Delta \vec{v}}{\Delta t}
\label{eq-13}
\end{equation}}
It follows that the acceleration vector is directed along the vector \hlm{$\Delta \vec{v}$}, which represents the change in velocity during a sufficiently short interval of time. It is evident from~\cref{fig:18} that the velocity vectors and the change in velocity vector can be oriented in entirely different directions.
This means that, \hlt{in the general case, the acceleration and velocity vectors are also differently oriented.} Is that clear ? \end{p} \begin{s} Yes, now I understand. For example, when a body travels in a circle, the velocity of the body is directed along a tangent to the circle, but its acceleration is directed along a radius toward the center of rotation (I mean centripetal acceleration). \end{s} \begin{p} Your example is quite appropriate. Now let us return to relationship (\cref{eq-11}) and make it clear that it is precisely the acceleration and not the velocity that is oriented in the direction of the applied force, and that it is again the acceleration and not the velocity that is related to the magnitude of this force. On the other hand, the nature of a body's motion at any given instant is determined by the direction and magnitude of its velocity at the given instant (the velocity vector is always tangent to the path of the body). Since the acceleration and velocity are different vectors, the direction of the applied force and the direction of motion of the body may not coincide in the general case. Consequently, the nature of the motion of a body at a given instant is not uniquely determined by the forces acting on the body at the given instant. \end{p} \begin{s} This is true for the general case. But, of course, the direction of the applied force and the velocity may coincide. \end{s} \begin{p} Certainly, that is possible. Lift a body and release it carefully, so that no initial velocity is imparted to it. Here the direction of motion will coincide with the direction of the force of gravity. If, however, you impart a horizontal initial velocity to the body then its direction of motion will not coincide with the direction of the gravity force; the body will follow a parabolic path. Though in both cases the body moves due to the action of the same force : its weight, the nature of its motion differs. A physicist would say that this difference is due to the different initial conditions : at the beginning of the motion the body had no velocity in the first case and a definite horizontal velocity in the second. Illustrated in~\cref{fig:19} are the trajectories of bodies thrown with initial velocities of different directions, but in all cases the same force, the weight of the body, is acting on it. 
\end{p} \begin{figure}[H] \centering \begin{tikzpicture}[ declare function={f(\x)=-\x^2+ \x;}, %unit of length is cm declare function={g(\x)=-2*\x^2+ 1.5*\x;}, declare function={h(\x)=-24*\x^2+ 6*\x;}, declare function={ft(\x)=\x;}, declare function={gt(\x)=1.5*\x;}, declare function={ht(\x)=6*\x;}, % declare function={g(\x)=0.86 + .98*(\x-0.76);}, %tangent at(0.3in,0.34in) = (0.3*2.54cm, 0.34*2.54cm) is : y = 0.34*2.54cm + (2.5-2*.3*2.54cm)(x-.3*2.54cm) xscale=10,yscale=18 ] %\draw[red, arrows={-Triangle[angle=45:10pt]}] (0,0.0) -- (1,0.0); \draw[variable=\x, xptcolor, thick, samples=100, domain=0:1,smooth] plot(\x, {f(\x)}); \draw[arrows={-Triangle[angle=45:10pt]},variable=\x, xptcolor, thick, samples=100, domain=0:0.75,smooth] plot(\x, {f(\x)}); \draw[-latex,variable=\x, xptcolor, samples=100, domain=0:0.2,smooth] plot(\x, {ft(\x)}); \draw[variable=\x, plotptcolor, thick, samples=100, domain=0:0.75,smooth] plot(\x, {g(\x)}); \draw[arrows={-Triangle[angle=45:10pt]},variable=\x, plotptcolor, thick, samples=100, domain=0:0.4,smooth] plot(\x, {g(\x)}); \draw[-latex,variable=\x, plotptcolor, samples=100, domain=0:0.145,smooth] plot(\x, {gt(\x)}); \draw[variable=\x, horzlinecolor, thick, samples=100, domain=0:0.25,smooth] plot(\x, {h(\x)}); \draw[arrows={-Triangle[angle=45:10pt]},variable=\x, horzlinecolor, thick, samples=100, domain=0:0.19,smooth] plot(\x, {h(\x)}); \draw[-latex,variable=\x, horzlinecolor, samples=100, domain=0:0.039,smooth] plot(\x, {ht(\x)}); %\draw[-latex,variable=\x, dotted, xptcolor, ultra thick, samples=100, domain=.255:1.5,smooth] plot(\x, {f(\x)}); %\draw[variable=\x, dotted, xptcolor, ultra thick, samples=100, domain=.255:1.245,smooth] plot(\x, {h(\x)}); \draw[xlabelcolor,ultra thick] (-0.1,0) -- (1.1,0); \fill [pattern = bricks, pattern color=xlabelcolor] (-0.1,0) rectangle (1.1,-0.05); \node[shading=ball,circle,ball color=xptcolor,inner sep=1mm] at (0.9,0.09) {}; \node[shading=ball,circle,ball color=plotptcolor,inner sep=1mm] at (0.3,0.27) {}; \node[shading=ball,circle,ball color=horzlinecolor,inner sep=1mm] at (0.15,0.36) {}; %\draw[variable=\x, -latex, ultra thick, samples=100, domain=0.76:1.2,smooth] plot(\x, {g(\x)}) node[above] {\hlm{$F$}}; \end{tikzpicture} \caption{Projectiles with different initial velocities} \label{fig:19} \end{figure} \begin{s} Does that mean that the nature of the motion of a body at a given instant depends not only on the forces acting on the body at this instant, but also on the initial conditions ? \end{s} \begin{p} Exactly. It should be emphasized that the initial conditions reflect the prehistory of the body. They are the result of forces that existed in the past. These forces no longer exist, but the result of their action is manifested. From the philosophical point of view, this demonstrates the relation of the past to the present, i.e, the principle of causality. Note that if the formula of Newton's second law contained the velocity and not the acceleration, this relationship of the past and present would not be revealed. In this case, the velocity of a body at a given instant (i.e. the nature of its motion at a given instant) would be fully determined by the forces acting on the body precisely at this instant; the past would have no effect whatsoever on the present. I want to cite one more example illustrating the aforesaid. A ball hanging on a string is subject to the action of two forces, the weight and the tension of the string. 
If it is deflected to one side of the equilibrium position and then released, it will begin to oscillate. \begin{figure}[H] \centering \begin{tikzpicture} \fill [pattern = bricks, pattern color=Red] (-1in,0in) rectangle (1in,0.3in); \draw[ultra thick, plotptcolor, dashed] (0,0) -- (-90:4.5cm); \draw[ultra thick, plotptcolor] (0,0) -- (-60:4cm); \draw[ultra thick, plotptcolor, dashed] (0,0) -- (-120:4cm); \draw[xptcolor, ultra thick, dashed] (-120:4cm) arc(-120:-60:4cm); \node[shading=ball,circle,ball color=Black,inner sep=2mm] (a) at (-60:4cm) {}; \node[circle,draw,ultra thick,inner sep=2.0mm] at (-120:4cm) {}; \path (a) -- ++(0,-0.6in) coordinate(P); \draw[-latex, ultra thick, xptcolor] (a) -- (P) node[below] {$P$}; \draw[-latex, ultra thick, plotptcolor] (a) -- (-60:2cm) node[right,rotate=25] {$T_1$};; \end{tikzpicture} \caption{Oscillating ball} \label{fig:20a} \end{figure} If, however, a definite velocity is imparted to the ball in a direction perpendicular to the plane of deviation, the ball will begin to travel in a circle at uniform velocity. As you can see, depending upon the initial conditions, the ball either oscillates in a plane (see~\cref{fig:20a}), or travels at uniform velocity in a circle (see~\cref{fig:20b}). Only two forces act on it in either case: its weight and the tension of the string. \end{p} \begin{s} I haven't considered Newton's laws from this viewpoint. \end{s} \begin{p} No wonder then that some students, in trying to determine the forces applied to a body, base their reasoning on the nature of motion without first finding out what bodies interact with the given body. You may recall that you did the same. That is exactly why, when drawing~\cref{fig:8c} and~\cref{fig:8d}, it seemed to you that the sets of forces applied to the body in those cases should be different. Actually, in both cases two forces are applied to the body: its weight and the tension of the string. \end{p} \begin{figure}[H] \centering \begin{tikzpicture} \fill [pattern = bricks, pattern color=xlabelcolor] (-1in,0in) rectangle (1in,0.3in); \draw[thick, xlabelcolor] (-1in,0in) -- (1in,0in); \draw[thick, plotptcolor, dotted] (0,0) -- (-90:5cm); \draw[ultra thick, plotptcolor] (0,0) -- (-60:4cm); \draw[xptcolor, thick, dashed] (2,-3.46) arc (0:180:2 and 0.7); \draw[xptcolor, thick] (2,-3.46) arc (0:-180:2 and 0.7); \node[shading=ball,circle,ball color=Black,inner sep=2mm] (a) at (-60:4cm) {}; \path (a) -- ++(0,-0.6in) coordinate(P); \draw[-latex, ultra thick, xptcolor] (a) -- (P) node[below] {$P$}; \draw[-latex, ultra thick, plotptcolor] (a) -- (-60:2cm) node[right,rotate=25] {$T_2$};; \end{tikzpicture} \caption{Motion of a ball in circle} \label{fig:20b} \end{figure} \begin{s} Now I understand that \hlt{the same set of forces can cause motions of different nature and therefore data on the nature of the motion of a body cannot serve as a starting point in determining the forces applied to the body.} \end{s} \begin{p} You have stated the matter very precisely. There is no need, however, to go to the extremes. Though different kinds of motion may be caused by the same set of forces (as in~\cref{fig:20a} and~\cref{fig:20b}), the numerical relations between the acting forces differ for the different kinds of motion. This means that there will be a different resultant applied force for each motion. Thus, for instance, in uniform motion of a body in a circle, the resultant force should be the \hlt{centripetal} one; in oscillation in a plane, the resultant force should be the \hlt{restoring force}. 
From this it follows that even though data on the kind of motion of a body cannot serve as the basis for determining the applied forces, they are far from superfluous. In this connection, let us return to the example illustrated in~\cref{fig:20a} and~\cref{fig:20b}. Assume that the angle \hlm{$\alpha$} between the vertical and the direction of the string is known and so is the weight \hlm{$P$} of the body. Find the tension \hlm{$T$} in the string when
\begin{enumerate}[label=(\arabic*)]
\item the oscillating body is in its extreme position, and
\item the body is traveling uniformly in a circle.
\end{enumerate}
In the first case, the resultant force is the restoring force and it is perpendicular to the string. Therefore, the weight \hlm{$P$} of the body is resolved into two components, with one component along the resultant force and the other perpendicular to it (i.e. directed along the string). Then the forces perpendicular to the resultant force, i.e. those acting in the direction along the string, are equated to each other (see~\cref{fig:21a}).
\hlm{\begin{equation*}%
\therefore T_{1} = P \cos \alpha
\end{equation*}}

\begin{figure}[H]
\centering
\begin{tikzpicture}
\fill [pattern = bricks, pattern color=xlabelcolor] (-1in,0in) rectangle (1in,0.3in);
\draw[thick, xlabelcolor] (-1in,0) -- (1in,0);
\draw[ultra thick, plotptcolor, dashed] (0,0) -- (-90:5cm);
\draw[ultra thick, plotptcolor] (0,0) -- (-60:4cm);
\node[circle,draw,ultra thick,inner sep=2.0mm] (a) at (-60:4cm) {};
\path (a) -- ++(0,-2.31) coordinate(P);
\path (a) -- ++(30:-1.155) coordinate(P1);
\draw[-latex, ultra thick, xptcolor] (-60:4) -- (P) node[below] {$P$};
\draw[-latex, ultra thick, plotptcolor] (-60:4) -- (P1);
\draw[-latex, ultra thick, plotptcolor] (a) -- (-60:2cm) node[right,rotate=30] {$T_1$};;
%\draw[xptcolor, thick] (0,-1) arc (270:300:1);
%\draw[xptcolor, thick] (0,-1.1) arc (270:300:1.1);
%\draw[xptcolor, thick] (0.4,-1.3) node[rotate=30] {$\alpha$};
\tkzDefPoint(0,-1){l}
\tkzDefPoint(0,0){O}
\tkzDefPoint(-60:1){m}
\tkzMarkAngle[size = 1.0,arc=ll,color=xptcolor](l,O,m)
\tkzLabelAngle[pos=1.3,rotate=30,color=xptcolor](l,O,m) {$\alpha$}
\path (a) -- ++(-150:2) coordinate(Q);
\path (a) -- ++(30:1) coordinate(R);
\draw[horzlinecolor,dashed] (Q) -- (R);
\draw[-latex, ultra thick, horzlinecolor] (-60:4) -- (-60:6) coordinate(b) node[right,rotate=30] {$P\cos\alpha$};
\tkzMarkAngle[size = 1.0,arc=ll,color=xptcolor](P,a,b)
\tkzLabelAngle[pos=1.3,rotate=30,color=xptcolor](P,a,b) {$\alpha$}
\tkzDefPoint(-60:2){u}
\tkzMarkSegments[mark=||](a,u a,b)
\draw[ultra thick, horzlinecolor,dotted] (P) -- (P1) (P) -- (b);
%\draw[-latex, ultra thick, horzlinecolor] (a) -- (P1);
%\tkzDrawLine[altitude](P,Q,R)
%\tkzDefLine[perpendicular=through P,K=-1](Q,R) \tkzGetPoint{p}
%\draw[-latex, ultra thick, plotptcolor] (a) -- (p);
\end{tikzpicture}
\caption{Forces on the oscillating ball}
\label{fig:21a}
\end{figure}

In the second case, the resultant force is the centripetal one and is directed horizontally. Hence, the tension \hlm{$T_{2}$} of the string should be resolved into a vertical and a horizontal force, and the forces perpendicular to the resultant force, i.e. the vertical forces, should be equated to each other (\cref{fig:21b}).
\hlm{\begin{equation*}% \therefore T_{2} \cos \alpha = P \quad \text{or} \quad T_{2} = \frac{P}{\cos \alpha} \end{equation*}} \begin{figure}[H] \centering \begin{tikzpicture} \fill [pattern = bricks, pattern color=xlabelcolor] (-1in,0in) rectangle (1in,0.3in); \draw[thick, xlabelcolor] (-1in,0) -- (1in,0); \draw[ultra thick, plotptcolor, dashed] (0,0) -- (-90:5cm) coordinate(y); \draw[ultra thick, plotptcolor] (0,0) -- (-60:4cm); \node[circle,draw,ultra thick,inner sep=2.0mm] (a) at (-60:4cm) {}; \path (a) -- ++(0,-1.73) coordinate(P); \path (a) -- ++(0,1.73) coordinate(P1); %\path (a) -- ++(30:-1.155) coordinate(P1); \draw[-latex, ultra thick, xptcolor] (-60:4) -- (P) node[below] {$P$}; \draw[-latex, ultra thick, plotptcolor] (-60:4) -- (P1) node[right] {$T_2 \cos\alpha$}; \draw[-latex, ultra thick, plotptcolor] (a) -- (-60:2cm) node[left,rotate=30] {$T_2$};; \tkzDefPoint(0,-1){l} \tkzDefPoint(0,0){O} \tkzDefPoint(-60:1){m} \tkzDefPoint(-60:2){n} \tkzMarkAngle[size = 0.7,arc=ll,color=xptcolor](l,O,m) \tkzLabelAngle[pos=0.9,rotate=30,color=xptcolor](l,O,m) {$\alpha$} \tkzMarkAngle[size = 0.7,arc=ll,color=xptcolor](P1,a,n) \tkzLabelAngle[pos=0.9,rotate=30,color=xptcolor](P1,a,n) {$\alpha$} %\path (a) -- ++(-150:2) coordinate(Q); %\path (a) -- ++(30:1) coordinate(R); %\draw[horzlinecolor,dashed] (Q) -- (R); %\draw[-latex, ultra thick, horzlinecolor] (-60:4) -- (-60:6) coordinate(b) node[right,rotate=30] {$P\cos\alpha$}; %\tkzMarkAngle[size = 1.0cm,arc=ll,color=xptcolor](P,a,b) %\tkzLabelAngle[pos=1.3,rotate=30,color=xptcolor](P,a,b) {$\alpha$} \tkzMarkSegments[mark=||](a,P a,P1) %\draw[ultra thick, horzlinecolor,dotted] (P) -- (P1) (P) -- (b); \draw[thick, horzlinecolor, dashed, shorten >= 0.3cm, shorten <= -0.8cm] (-60:4) -- ($(O)!(a)!(y)$); % $(O)!(a)!(y)$ is a projection of the point a on the line Oy \draw[ultra thick, horzlinecolor, dotted] (n) -- ($(a)!(n)!(P1)$); \draw[ultra thick, horzlinecolor, dotted] (n) -- ($(a)!(n)!($(O)!(a)!(y)$)$); \draw[-latex, ultra thick, horzlinecolor] (a) -- ($(a)!(n)!($(O)!(a)!(y)$)$); %\draw[-latex, ultra thick, horzlinecolor] (a) -- (P1); %\tkzDrawLine[altitude](P,Q,R) %\tkzDefLine[perpendicular=through P,K=-1](Q,R) \tkzGetPoint{p} %\draw[-latex, ultra thick, plotptcolor] (a) -- (p); \end{tikzpicture} \caption{Forces on the moving ball in circle} \label{fig:21b} \end{figure} As you can see, a knowledge of the nature of the body's motion proved useful in finding the tension of the string. \end{p} \begin{s} If I understand all this correctly, then, knowing the interaction of bodies, you can find the forces applied to one of them; if you know these forces and the initial conditions, you can predict the nature of the motion of the body (the magnitude and direction of its velocity at any instant). On the other hand, if you know the kind of motion of a body you can establish the relationships between the forces applied to it. Am I reasoning correctly? \end{s} \begin{p} Quite so. But let us continue. I want to propose a comparatively simple problem relating to Newton's second law of motion. \hlt{Two bodies, of masses \hlm{$M$} and \hlm{$m$}, are raised to the same height above the floor and are released simultaneously. Will the two bodies reach the floor simultaneously if the resistance of the air is the same for each of them? For simplicity we shall assume that the air resistance is constant.} \end{p} \begin{s} Since the air resistance is the same for the two bodies, it can be disregarded. Consequently, both bodies reach the floor simultaneously. 
\end{s} \begin{p} You are mistaken. You have no right to disregard the resistance of the air. Take, for example, the body of mass \hlm{$M$}. It is subject to two forces: the weight \hlm{$Mg$} and the air resistance \hlm{$F$}. The resultant force is \hlm{$Mg - F$}. From this we find the acceleration. Thus \hlm{\begin{equation*}% a = \dfrac{Mg - F}{M} = g - \dfrac{F}{M} \end{equation*}} In this manner, the body of larger mass has a higher acceleration and will, consequently, reach the floor first. Once more I want to emphasize that in calculating the acceleration of a body it is necessary to take into account all the forces applied to it, i.e. you must find the resultant force. In this connection, the use of the term \hlt{driving force} is open to criticism. This term is inappropriate. In applying it to some force (or to several forces) we seem to single out the role of this force (or forces) in imparting acceleration to the body. As if the other forces concerned were less essential. This is absolutely wrong. \hlt{The motion of a body is a result of the action of all the forces applied to it without any exceptions (of course, the initial conditions should be taken into account).} Let us now consider an example on \hlt{Newton’s third law of motion}. A horse starts to pull a wagon. As a result, the horse and wagon begin to travel with a certain acceleration. \begin{figure}[H] \centering \begin{tikzpicture}[y=0.80pt, x=0.80pt, scale=-0.5]%yscale=-0.2, xscale=-0.2]%, inner sep=0pt, outer sep=0pt, even odd rule] \fill [pattern = bricks, pattern color=xlabelcolor] (3in,3.5in) rectangle (-6in,2.0in); %\begin{scope}[shift={(-891.18847,366.43251)}] %\begin{scope}[cm={{-2.67454,0.0,0.0,2.67403,(1596.5431,-564.7219)}},draw,line width=0.526pt,miter limit=4.00] % front first leg \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (201.7321,169.0993) .. controls (210.6547,168.5150) and (222.6633,176.8008) .. (234.1592,184.7189) .. controls (240.7621,186.7291) and (252.0399,193.5261) .. (249.7262,200.2990) .. controls (246.7657,208.9652) and (246.6050,221.8705) .. (246.4940,231.9747) .. controls (245.8476,237.1462) and (243.2618,238.4391) .. (241.9689,240.3785) .. controls (240.6760,242.3178) and (240.6760,244.9035) .. (239.3831,246.8429) .. controls (238.0903,248.7822) and (235.5045,250.0751) .. (232.9187,252.0144) .. controls (230.3330,253.9537) and (227.7472,256.5395) .. (225.8079,257.1859) .. controls (223.8685,257.8324) and (222.5756,256.5395) .. (222.5756,254.6002) .. controls (224.1107,250.5565) and (223.6015,244.8050) .. (226.2211,241.4175) .. controls (227.5140,240.7710) and (230.3330,241.0249) .. (232.9187,239.7320) .. controls (235.5045,238.4391) and (238.0903,233.2676) .. (238.0903,230.6818) .. controls (238.0903,228.0961) and (235.5045,228.0961) .. (234.2116,227.4496) .. controls (232.9187,226.8032) and (232.9187,225.5103) .. (233.5652,224.2174) .. controls (234.2116,222.9245) and (235.5045,221.6316) .. (236.7974,219.0459) .. controls (239.7857,213.2305) and (240.2647,207.2146) .. (236.7974,201.5919) .. controls (233.1465,196.1300) and (225.7875,197.0510) .. (219.9899,196.4204) .. controls (214.8183,195.7740) and (199.7797,194.4811) .. (193.6451,189.5606); % back first leg \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (116.8889,186.5926) .. controls (125.4438,210.0510) and (112.2568,211.5131) .. (106.2161,220.9852) .. 
controls (103.1212,230.3523) and (109.8691,239.0367) .. (115.9127,245.5500) .. controls (119.7835,249.4235) and (124.9663,251.3812) .. (128.1951,255.8931) .. controls (132.2799,261.4739) and (132.0738,268.0905) .. (132.0738,274.6399) .. controls (130.7083,279.8349) and (121.6600,270.2560) .. (119.7914,268.1755) .. controls (117.8521,265.5897) and (119.1449,264.2968) .. (119.1449,262.3575) .. controls (119.1449,260.4181) and (117.8521,257.8324) .. (116.5592,257.1859) .. controls (115.2663,256.5395) and (113.9734,257.8324) .. (112.6805,257.8324) .. controls (111.3876,257.8324) and (110.0948,256.5395) .. (109.4483,254.6002) .. controls (108.8471,248.6065) and (106.9058,245.5539) .. (102.3374,241.6713) .. controls (98.4588,238.4391) and (91.9944,233.2676) .. (88.7622,229.3889) .. controls (85.5300,225.5103) and (85.5300,222.9245) .. (86.8228,221.6316) .. controls (90.9452,219.2854) and (92.7218,218.3762) .. (93.2376,212.4710) .. controls (93.2376,209.2388) and (89.3037,195.8939) .. (88.0108,190.7224); % body, back second leg, tail \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (232.9187,115.6151) .. controls (219.7563,131.8228) and (221.7872,150.8868) .. (219.9899,170.5131) .. controls (218.6970,181.5026) and (216.1112,187.9670) .. (210.9397,192.4921) .. controls (205.7681,197.0172) and (198.0108,199.6030) .. (187.6678,201.5423) .. controls (159.0377,208.0009) and (139.7049,206.9899) .. (113.3269,198.3597) .. controls (108.8019,197.0668) and (104.9232,197.0668) .. (101.6910,199.0062) .. controls (92.8122,215.0779) and (89.7743,217.5692) .. (73.9548,220.7420) .. controls (70.7226,222.6813) and (68.1368,226.5600) .. (66.1975,233.0244) .. controls (64.2582,239.4888) and (62.9653,248.5390) .. (64.9046,254.3570) .. controls (66.8439,260.1749) and (72.0155,262.7607) .. (75.8941,266.6394) .. controls (79.7728,270.5180) and (82.3585,275.6896) .. (82.3585,278.2753) .. controls (80.0244,282.0729) and (70.6392,280.2070) .. (68.0760,279.5682) .. controls (66.1367,278.2753) and (66.1367,275.6896) .. (65.4902,273.1038) .. controls (64.8438,270.5180) and (63.5509,267.9322) .. (61.6116,267.2858) .. controls (59.6723,266.6394) and (57.0865,267.9322) .. (55.7936,265.9929) .. controls (54.5007,264.0536) and (54.5007,258.8821) .. (55.1472,254.3570) .. controls (58.9721,242.6745) and (53.7032,231.1402) .. (53.8543,219.4491) .. controls (54.5007,214.9240) and (58.3794,214.9240) .. (60.9652,213.6311) .. controls (66.2590,210.5104) and (66.2379,205.3094) .. (64.8438,200.0558) .. controls (63.5509,195.5307) and (60.9652,185.4309) .. (59.6723,179.6129) .. controls (58.3794,173.7949) and (58.3794,169.9163) .. (59.0258,166.6841) .. controls (59.6723,163.4519) and (60.9651,160.8661) .. (60.9651,159.5732) .. controls (47.8166,156.5745) and (42.6944,164.7989) .. (42.5611,173.2949) .. controls (42.4903,177.8083) and (48.6359,190.7640) .. (43.4238,202.9700) .. controls (41.0005,207.5082) and (36.1900,211.3528) .. (27.3502,213.8743) .. controls (46.0283,196.6756) and (26.2323,177.1985) .. (24.7644,163.4519) .. controls (24.7795,158.6425) and (26.6909,154.9610) .. (29.6880,152.2885) .. controls (40.9846,143.2477) and (54.6328,140.2214) .. (63.5509,154.4017) .. controls (78.9782,132.0680) and (101.3646,141.7435) .. (126.2558,142.7657) .. controls (139.8311,142.7657) and (151.4670,142.7657) .. (158.5779,142.1193) .. controls (172.2007,141.5393) and (174.2605,126.3465) .. (180.4051,116.6195) .. controls (193.7433,95.5049) and (215.6713,81.5666) .. 
(241.1707,84.2974); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (78.1198,268.7859) .. controls (73.1120,268.4697) and (69.0991,268.7677) .. (65.3775,272.2253); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (252.3341,95.5893) .. controls (252.3341,91.4962) and (252.3341,87.4031) .. (251.0414,83.3100) .. controls (250.1965,80.6349) and (248.7994,77.9597) .. (247.5719,76.2369) .. controls (246.1916,74.5194) and (245.1852,73.4908) .. (244.6645,76.2618) .. controls (243.0874,81.7037) and (244.5777,86.4195) .. (244.5777,91.4962); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (205.1873,189.4820) .. controls (202.4560,204.8378) and (205.3512,209.2042) .. (204.4753,215.1672) .. controls (203.4816,221.9318) and (204.0766,228.6004) .. (203.2432,235.2069) .. controls (202.8312,238.4730) and (200.0110,241.0249) .. (198.7181,245.5500) .. controls (197.4252,250.0751) and (197.4252,256.5395) .. (198.0717,260.4182) .. controls (197.5956,268.5050) and (205.3511,266.8094) .. (208.4755,272.7006) .. controls (211.0613,275.9328) and (213.6471,281.1043) .. (214.2935,284.3365) .. controls (214.9399,287.5687) and (213.6471,288.8616) .. (209.7684,288.8616) .. controls (205.8898,288.8616) and (200.5966,287.5687) .. (198.6573,284.9830) .. controls (196.7180,282.3972) and (199.3037,278.5185) .. (199.3037,276.5792) .. controls (197.4375,272.6147) and (192.7262,274.6658) .. (190.2535,273.9934) .. controls (188.9607,273.3470) and (187.6678,272.0541) .. (187.6678,270.1148) .. controls (187.6678,268.1755) and (188.9607,265.5897) .. (188.9607,261.7110) .. controls (188.9607,257.8324) and (187.6678,252.6609) .. (187.6678,246.1964) .. controls (187.6678,239.7320) and (188.9607,231.9747) .. (188.3142,222.9245) .. controls (187.6678,213.8743) and (180.7949,196.6059) .. (178.2091,186.2629); % face \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (240.6760,84.5859) .. controls (241.9689,84.5859) and (243.2618,84.5859) .. (245.2011,84.5859) .. controls (247.1404,84.5859) and (249.7262,84.5859) .. (251.6655,85.8788) .. controls (253.6049,87.1717) and (254.8978,89.7575) .. (256.1906,94.2826) .. controls (257.4835,98.8077) and (259.8756,105.3854) .. (258.7764,111.7365) .. controls (257.1712,121.0114) and (260.9694,130.6092) .. (258.7764,137.5942) .. controls (258.3895,138.8264) and (257.9208,138.9004) .. (258.0030,140.9430) .. controls (258.0959,143.2520) and (257.8591,147.4786) .. (254.8978,148.5837) .. controls (248.7424,151.7676) and (246.7122,146.7258) .. (244.5547,140.8264) .. controls (243.1452,136.9050) and (241.2694,138.0709) .. (242.9017,140.8264) .. controls (244.2039,144.3644) and (246.4080,153.4366) .. (240.6760,149.2301) .. controls (239.3832,147.9373) and (238.0903,145.3515) .. (237.4438,142.7657) .. controls (236.7974,140.1799) and (236.7974,137.5942) .. (236.1509,135.0084) .. controls (234.9120,126.1872) and (225.8838,128.5817) .. (225.1614,119.4938); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (247.7565,91.7423) .. controls (247.4178,91.0276) and (247.7786,90.0667) .. (247.4285,89.3642) .. controls (246.4416,87.3848) and (245.3625,85.5027) .. (243.9314,83.9922) .. controls (240.8264,82.1090) and (238.5325,79.9741) .. (236.1889,77.1340) .. 
controls (233.9515,74.1895) and (233.2206,72.9003) .. (233.3082,77.4936) .. controls (233.4466,80.9472) and (234.3955,85.7658) .. (236.8213,89.4497) .. controls (238.1141,91.4962) and (241.0389,91.7423) .. (242.3317,91.7423); % head hair \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (253.7331,91.8654) .. controls (261.3090,92.6446) and (265.0930,98.9449) .. (262.6760,107.1865) .. controls (261.3833,110.5974) and (260.1968,111.2344) .. (257.5051,111.9617) .. controls (256.9118,108.2596) and (256.1850,107.9313) .. (255.5660,104.4577) .. controls (255.0362,106.4824) and (254.2098,107.0073) .. (252.9805,108.5508) .. controls (252.4507,106.7722) and (252.3341,106.5043) .. (251.6878,103.7755) .. controls (250.2900,94.1050) and (246.7242,93.9269) .. (241.5726,96.7237) .. controls (239.7256,97.7265) and (238.7648,99.2401) .. (235.1582,97.9599); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (245.6881,115.6642) .. controls (246.3797,113.7256) and (250.3479,112.9963) .. (250.1796,115.4181) .. controls (250.4206,117.7259) and (246.4524,117.2248) .. (245.6881,115.6642) -- cycle; \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (249.2369,146.7816) .. controls (250.9139,144.7083) and (253.7372,144.6477) .. (254.2198,147.1507) .. controls (254.2198,148.2558) and (252.3578,150.2591) .. (251.7866,150.2591) .. controls (251.2155,150.2591) and (249.2369,147.8867) .. (249.2369,146.7816) -- cycle; \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (252.3341,116.0548) .. controls (252.1866,113.3290) and (250.7428,112.1237) .. (249.1060,112.1385) .. controls (245.8772,112.1678) and (244.9628,114.2783) .. (242.7135,114.5932); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (232.9187,239.7320) .. controls (234.2116,241.0249) and (235.5045,242.3178) .. (235.5045,244.2571) .. controls (235.5045,246.1964) and (233.9784,249.4818) .. (232.6855,252.0675); \path[draw,ball color=horsecolor,even odd rule,line cap=butt,line join=miter,line width=0.526pt,miter limit=4.00] (210.9645,276.2504) .. controls (206.9848,276.2434) and (202.2076,276.9384) .. 
(198.7429,278.6649); \path[cm={{-0.26086,0.0,0.0,0.27531,(263.793,24.0347)}},draw,fill=black,line width=1.961pt,miter limit=4.00] (59.6113,337.0374)arc(-250.249:109.653:3.571429 and 5.159); \draw[ultra thick, rounded corners, shading=ball, ball color=plotptcolor] (190,185) rectangle ++(-400,10); \node[ultra thick,circle,draw,fill=Snow,inner sep=4] (a) at (190,185) {}; \node[shading=ball,circle,ball color=plotptcolor,inner sep=2] at (a) {}; \draw[-latex, line width=1.7pt, horzlinecolor] (140,170) -- (290,170) node[left,horzlinecolor] {$F$}; \draw[-latex, line width=1.7pt, ballpoint] (140,170) -- (100,170) node[right,ballpoint] {$f_0$}; \node[shading=ball,circle,ball color=ballpoint,inner sep=2.5] at (140,170) {}; \begin{scope}[shift={(-500,120)}] \draw[ultra thick, rounded corners, shading=ball, ball color=plotptcolor] (0,0) rectangle (300,100); \node[ultra thick,circle,draw,fill=Snow,inner sep=10] at (200,100) {}; \node[shading=ball,circle,ball color=plotptcolor,inner sep=4] at (200,100) {}; \node[ultra thick,circle,draw,fill=Snow,inner sep=10] at (100,100) {}; \node[shading=ball,circle,ball color=plotptcolor,inner sep=4] at (100,100) {}; \draw[-latex, line width=1.7pt, ballpoint] (150,50) -- (190,50) node[left] {$f_0$}; \draw[-latex, line width=1.7pt, ballpoint] (150,50) -- (50,50) node[right] {$f$}; \node[shading=ball,circle,ball color=Black,inner sep=2.5] at (150,50) {}; \draw[-latex, line width=1.7pt, horzlinecolor] (150,220) -- (250,220) node[left,horzlinecolor] {$f$}; \draw[-latex, line width=1.7pt, horzlinecolor] (150,220) -- (0,220) node[right,horzlinecolor] {$F$}; \node[shading=ball,circle,ball color=Black,inner sep=2.5] at (150,220) {}; \end{scope} % \end{scope} %\end{scope} \end{tikzpicture} \caption{Horse Pulling a Wagon} \label{fig:22} \end{figure} According to Newton’s third law, whatever the force with which the horse pulls the wagon, the wagon pulls back on the horse with exactly the same force but in the opposite direction. This being so, \hlt{why do the horse and wagon travel forward with an acceleration ? Please explain.} \end{p} \begin{s} I have never given this any thought but I see no contradictions. The acceleration would be difficult to explain if the force with which the horse acts on the wagon was counterbalanced by the force with which the wagon acts on the horse. But these forces cannot cancel each other since they are applied to different bodies: one to the horse and the other to the wagon. \end{s} \begin{p} Your explanation is applicable to the case when the wagon is not harnessed to the horse. Then the horse pushes away from the wagon, as a result of which the wagon moves in one direction and the horse in the other. The case I proposed is entirely different. The horse is harnessed to the wagon. Thus they are linked together and travel as a single system. The forces of interaction between the horse and wagon that you mentioned are applied to different parts of the same system. In the motion of this system as a whole, these forces can be regarded as mutually counterbalancing forces. Thus, you haven't yet answered my question. \end{p} \begin{s} Well, then I can't understand what the matter is. Maybe the action here is not fully counterbalanced by the reaction ? After all a horse is a living organism. \end{s} \begin{p} Now don't let your imagination run away with you. It was sufficient for you to meet with some difficulty and you are ready to sacrifice one of the principal laws of mechanics. 
To answer my question, there is no need to revise Newton's third law of motion. On the contrary, let us use this law as a basis for our discussion. According to the third law, the interaction of the horse and the wagon cannot lead to the motion of this system as a whole (or, more precisely, it cannot impart acceleration to the system as a whole). This being so, there must exist some kind of supplementary interaction. In other words, at least one more body must participate in the problem in addition to the horse and wagon. This body, in the given case, is the earth. As a result, we have three interactions to deal with instead of one, namely:
\begin{enumerate}[label=(\arabic*),leftmargin=1cm]
\item between the horse and the wagon (we shall denote this force by \hlm{$f_{0}$});
\item between the horse and the earth (force \hlm{$F$}), in which the horse pushes against the ground; and
\item between the wagon and the earth (force \hlm{$f$}), which is the friction of the wagon against the ground.
\end{enumerate}
All three bodies are shown in~\cref{fig:22}: the horse, the wagon and the earth. Two forces are applied to each body; these two forces are the result of the interaction of the given body with the two others. The acceleration of the horse--wagon system is caused by the resultant of all the forces applied to it. There are \hlt{four} such forces and their resultant is \hlm{$F - f$}. This is what causes the acceleration of the system. Now you see that this acceleration is not associated with the interaction between the horse and the wagon.
\end{p}
\begin{s}
So the earth's surface turns out to be, not simply the place on which certain events occur, but an active participant in these events.
\end{s}
\begin{p}
Your pictorial comment is quite true. Incidentally, if you place the horse and wagon on an ideal icy surface, thereby excluding all horizontal interaction between this system and the earth, there will be no motion whatsoever. It should be stressed that no internal interaction can impart acceleration to a system as a whole. This can be done only by external action (you can't lift yourself by your hair, or by your bootstraps either). This is an important practical inference of Newton's third law of motion.
\end{p}
\hrulefill
\foreach \x in {1,...,11}
{
	\includegraphics[width=2.0\linewidth]{tarasov-phy/ch4-p\x}
}
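\vspace{2ex}
\noindent The force balance implied by the horse-and-wagon discussion above can also be written out compactly. As a sketch only, introduce symbols that do not appear in the dialogue: let $M_{h}$ and $M_{w}$ denote the masses of the horse and the wagon, and let $a$ be their common acceleration. Newton's second law applied to each body gives
\begin{equation*}
M_{h}\,a = F - f_{0}, \qquad M_{w}\,a = f_{0} - f,
\end{equation*}
and adding the two equations eliminates the internal force $f_{0}$:
\begin{equation*}
\left(M_{h} + M_{w}\right) a = F - f.
\end{equation*}
The system accelerates forward only while the force $F$ with which the ground pushes the horse exceeds the friction $f$ acting on the wagon, in agreement with the resultant $F - f$ found above.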
{ "alphanum_fraction": 0.726274273, "avg_line_length": 73.9065550907, "ext": "tex", "hexsha": "89821d1b7ef7b9fdd37d952fcdc917b16d73d8cf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2c8e3c6a8017164fd86fabaaa3343257cea54405", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ancientscience/ancientscience.github.io", "max_forks_repo_path": "tarasov-phy/tp.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2c8e3c6a8017164fd86fabaaa3343257cea54405", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ancientscience/ancientscience.github.io", "max_issues_repo_path": "tarasov-phy/tp.tex", "max_line_length": 2521, "max_stars_count": null, "max_stars_repo_head_hexsha": "2c8e3c6a8017164fd86fabaaa3343257cea54405", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ancientscience/ancientscience.github.io", "max_stars_repo_path": "tarasov-phy/tp.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 16793, "size": 52991 }
%\documentclass[10pt]{beamer} % aspect ratio 4:3, 128 mm by 96 mm \documentclass[10pt,aspectratio=169]{beamer} % aspect ratio 16:9 %\graphicspath{{../../figures/}} \graphicspath{{../../figures/}{../../figures/beamer_common/}{../../conference_papers/SPIE2020/figs/}{../../journal_papers/Composite_Structures_GA/figs/}} %\includeonlyframes{frame1,frame2,frame3,frame4,frame5,frame6,frame7,frame8,frame9} %\includeonlyframes{frame10,frame11,frame12,frame13} %\includeonlyframes{frame14,frame15,frame16,frame17,frame18,frame19,frame20,frame21} %\includeonlyframes{frame22,frame23,frame24,frame25,frame26} %\includeonlyframes{frame27,frame28} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Packages %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{appendixnumberbeamer} \usepackage{booktabs} \usepackage{csvsimple} % for csv read \usepackage[scale=2]{ccicons} \usepackage{pgfplots} \usepackage{xspace} \usepackage{amsmath} \usepackage{totcount} \usepackage{tikz} \usepackage{bm} %\usepackage{FiraSans} %\usepackage{comment} %\usetikzlibrary{external} % speedup compilation %\tikzexternalize % activate! %\usetikzlibrary{shapes,arrows} %\usepackage{bibentry} %\nobibliography* \usepackage{caption}% \captionsetup[figure]{labelformat=empty}% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Metropolis theme custom modification file \input{metropolis_mods.tex} \usefonttheme[onlymath]{Serif} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Custom commands %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % matrix command \newcommand{\matr}[1]{\mathbf{#1}} % bold upright (Elsevier, Springer) %\newcommand{\matr}[1]{#1} % pure math version %\newcommand{\matr}[1]{\bm{#1}} % ISO complying version % vector command \newcommand{\vect}[1]{\mathbf{#1}} % bold upright (Elsevier, Springer) % bold symbol \newcommand{\bs}[1]{\boldsymbol{#1}} % derivative upright command \DeclareRobustCommand*{\drv}{\mathop{}\!\mathrm{d}} \newcommand{\ud}{\mathrm{d}} % \newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Title page options %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % \date{\today} \date{} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % option 1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \title{Parametric studies of composite material properties influence on dispersion curves of Lamb waves} \subtitle{Lamb-opt} \author{\textbf{Paweł Kudela}\\Piotr Fiborek\\Maciej Radzieński \\Tomasz Wandowski } % logo align to Institute \institute{Institute of Fluid Flow Machinery\\Polish Academy of Sciences \\ \vspace{-1.5cm}\flushright \includegraphics[width=4cm]{../images/logo/logo_eng_40mm.eps}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % option 2 - authors in one line %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % \title{Elastic constants identification of composite laminates by using Lamb wave dispersion curves and optimization methods} % \subtitle{Lamb-opt} % \author{\textbf{Paweł Kudela}\textsuperscript{2}, Maciej Radzieński\textsuperscript{2}, Wiesław Ostachowicz\textsuperscript{2}, Zhibo Yang\textsuperscript{1} } % % logo align to Institute % \institute{\textsuperscript{1}Xi'an Jiaotong University \\ \textsuperscript{2}Institute of Fluid Flow Machinery\\ \hspace*{1pt} Polish Academy of Sciences \\ \vspace{-1.5cm}\flushright \includegraphics[width=4cm]{../images/logo/logo_eng_40mm.eps}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % option 3 - multilogo vertical %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
%\title{Elastic constants identification of composite laminates by using Lamb wave dispersion curves and optimization methods} %\subtitle{Lamb-opt} % \author{\textbf{Paweł Kudela}\inst{1}, Maciej Radzieński\inst{1}, Wiesław Ostachowicz\inst{1}, Zhibo Yang\inst{2} } % % logo under Institute % \institute% % { % \inst{1}% % Institute of Fluid Flow Machinery\\ \hspace*{1pt} Polish Academy of Sciences \\ \includegraphics[height=0.85cm]{../images/logo/logo_eng_40mm.eps} \\ % \and % \inst{2}% % Xi'an Jiaotong University \\ \includegraphics[height=0.85cm]{../images/logo/logo_box.eps} % } % end od option 3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% option 4 - 3 Institutes and logos horizontal centered %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\title{Elastic constants identification of composite laminates by using Lamb wave dispersion curves and optimization methods} %\subtitle{Lamb-opt } %\author{\textbf{Paweł Kudela}\textsuperscript{1}, Maciej Radzieński\textsuperscript{1}, Marco Miniaci\textsuperscript{2}, Zhibo Yang\textsuperscript{3} } % %\institute{ %\begin{columns}[T,onlytextwidth] % \column{0.39\textwidth} % \begin{center} % \textsuperscript{1}Institute of Fluid Flow Machinery\\ \hspace*{3pt}Polish Academy of Sciences % \end{center} % \column{0.3\textwidth} % \begin{center} % \textsuperscript{2}Zurich University % \end{center} % \column{0.3\textwidth} % \begin{center} % \textsuperscript{3}Xi'an Jiaotong University % \end{center} %\end{columns} %\vspace{6pt} %% logos %\begin{columns}[b,onlytextwidth] % \column{0.39\textwidth} % \centering % \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{../images/logo/logo_eng_40mm.eps} % \column{0.3\textwidth} % \centering % \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{../images/logo/logo_box.eps} % \column{0.3\textwidth} % \centering % \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{../images/logo/logo_box2.eps} %\end{columns} %} %\makeatletter %\setbeamertemplate{title page}{ % \begin{minipage}[b][\paperheight]{\textwidth} % \centering % <-- Center here % \ifx\inserttitlegraphic\@empty\else\usebeamertemplate*{title graphic}\fi % \vfill% % \ifx\inserttitle\@empty\else\usebeamertemplate*{title}\fi % \ifx\insertsubtitle\@empty\else\usebeamertemplate*{subtitle}\fi % \usebeamertemplate*{title separator} % \ifx\beamer@shortauthor\@empty\else\usebeamertemplate*{author}\fi % \ifx\insertdate\@empty\else\usebeamertemplate*{date}\fi % \ifx\insertinstitute\@empty\else\usebeamertemplate*{institute}\fi % \vfill % \vspace*{1mm} % \end{minipage} %} % %\setbeamertemplate{title}{ % % \raggedright% % <-- Comment here % \linespread{1.0}% % \inserttitle% % \par% % \vspace*{0.5em} %} %\setbeamertemplate{subtitle}{ % % \raggedright% % <-- Comment here % \insertsubtitle% % \par% % \vspace*{0.5em} %} %\makeatother % end of option 4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % option 5 - 2 Institutes and logos horizontal centered %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\title{Elastic constants identification of composite laminates by using Lamb wave dispersion curves and optimization methods} %\subtitle{Lamb-opt } %\author{\textbf{Paweł Kudela}\textsuperscript{1}, Maciej Radzieński\textsuperscript{1}, Marco Miniaci\textsuperscript{2}} % %\institute{ % \begin{columns}[T,onlytextwidth] % \column{0.5\textwidth} % \centering % \textsuperscript{1}Institute of Fluid Flow Machinery\\ \hspace*{3pt}Polish Academy of Sciences % \column{0.5\textwidth} % \centering % \textsuperscript{2}Zurich University % \end{columns} 
% \vspace{6pt} % % logos % \begin{columns}[b,onlytextwidth] % \column{0.5\textwidth} % \centering % \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{../images/logo/logo_eng_40mm.eps} % \column{0.5\textwidth} % \centering % \includegraphics[scale=0.9,height=0.85cm,keepaspectratio]{../images/logo/logo_box.eps} % \end{columns} %} %\makeatletter %\setbeamertemplate{title page}{ % \begin{minipage}[b][\paperheight]{\textwidth} % \centering % <-- Center here % \ifx\inserttitlegraphic\@empty\else\usebeamertemplate*{title graphic}\fi % \vfill% % \ifx\inserttitle\@empty\else\usebeamertemplate*{title}\fi % \ifx\insertsubtitle\@empty\else\usebeamertemplate*{subtitle}\fi % \usebeamertemplate*{title separator} % \ifx\beamer@shortauthor\@empty\else\usebeamertemplate*{author}\fi % \ifx\insertdate\@empty\else\usebeamertemplate*{date}\fi % \ifx\insertinstitute\@empty\else\usebeamertemplate*{institute}\fi % \vfill % \vspace*{1mm} % \end{minipage} %} % %\setbeamertemplate{title}{ % % \raggedright% % <-- Comment here % \linespread{1.0}% % \inserttitle% % \par% % \vspace*{0.5em} %} %\setbeamertemplate{subtitle}{ % % \raggedright% % <-- Comment here % \insertsubtitle% % \par% % \vspace*{0.5em} %} %\makeatother % end of option 5 % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % End of title page options %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % logo option - alternative manual insertion by modification of coordinates in \put() %\titlegraphic{% % %\vspace{\logoadheight} % \begin{picture}(0,0) % \put(305,-185){\makebox(0,0)[rb]{\includegraphics[width=4cm]{../images/logo/logo_eng_40mm.eps}}} % \end{picture}} % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\tikzexternalize % activate! %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \maketitle %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % SLIDES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[label=frame1]{Table of contents \label{frameone}} \setbeamertemplate{section in toc}[sections numbered] \tableofcontents[hideallsubsections] \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[label=frame2]{Determination of mechanical properties of materials} \begin{itemize} \item Destructive testing \item Static tests (displacement measurements + model) \item Dynamic tests (natural frequencies) \item Ultrasonic methods (bulk wave velocities, ultrasonic polar scan) \item \textbf{Lamb wave methods (dispersion curves)} \end{itemize} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[label=frame3]{Idea of the project} \begin{figure} \only<1>{ \includegraphics[width=0.8\textwidth]{Plan-scheme4a.png} } \only<2>{ \includegraphics[width=0.65\textwidth]{Plan-scheme4b.png} } \end{figure} \only<1>{ \begin{columns}[T] \column{0.5\textwidth} \hspace{1.5cm} \(E_f=E_{11f}\) \quad \(E_{22f}=0.1 E_{11f}\) \column{0.5\textwidth} \(\bs{\sigma} = \matr{C} \, \bs{\varepsilon}\) \end{columns} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[label=frame4]{Dispersion curves (1)} \begin{alertblock}{Definition} A \textbf{dispersion relation} relates the wavelength $\lambda$ or wavenumber $k$ of a wave to its frequency $\omega$.\\ \vspace{10pt} $k(\omega)$ 
$[\frac{\mathrm{rad}}{\mathrm{m}}]$\\ \vspace{6pt} $k(f)$ $[\frac{1}{\mathrm{m}}]$ \end{alertblock} \begin{block}{Phase velocity} \begin{equation*} c_p = \frac{\omega}{k} \end{equation*} \end{block} \begin{block}{Group velocity} \begin{equation*} c_g = \frac{\drv \omega}{\drv k} \end{equation*} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Semi Analytical Spectral Element Method (SASE)} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[label=frame9]{Semi Analytical Spectral Element Method (SASE)} \begin{figure} \includegraphics[width=\textwidth]{figure1.png} \end{figure} \begin{equation*} \vect{u}(x,y,z,t) = \matr{U}(x) \exp \left[ i (\omega t + k \sin (\beta) y - k \cos (\beta) z)\right] \end{equation*} \end{frame} %%%%%%%%%%%%% \begin{frame}[t,label=frame10]{Semi Analytical Spectral Element Method (SASE)} \only<1-3>{ \begin{equation*} \left[\matr{A} - \omega^2\matr{M} \right] \vect{U} =0, \label{eq:eig_dispersion} \end{equation*} where $\omega$ is the angular frequency, $\matr{M}$ is the mass matrix, $\matr{U}$ is the nodal displacement vector, and the matrix $\matr{A}$ can be defined as: \begin{equation*} \begin{aligned} \matr{A} & = k^2\left(s^2 \,\matr{K}_{22} + c^2\, \matr{K}_{33} - c s\, \matr{K}_{23} - c s\, \matr{K}_{32}\right) \\ & + i k\, \matr{T}^T\left(-c\, \matr{K}_{13} - s\, \matr{K}_{21} + s\, \matr{K}_{12} + c\, \matr{K}_{31}\right) \matr{T} +\matr{K}_{11}, \end{aligned} \label{eq:dispersion} \end{equation*} where $s = \sin(\beta)$, $c = \cos(\beta)$, $i = \sqrt{-1}$. } \only<2-3>{ \begin{equation*} \matr{K}_{mn}^e= \int \limits_{(e)} \matr{B}_m^{T} \matr{C}^e \, \matr{B}_n\, \ud x \label{eq:stiffness_matrix_e} \end{equation*} } \only<3>{ Possible solutions: \begin{columns}[T] \column{0.5\textwidth} \begin{itemize} \item standard eigenvalue problem $\omega (k)$ \end{itemize} \column{0.5\textwidth} \begin{itemize} \item second-order polynomial eigenvalue problem $k(\omega)$ \end{itemize} \end{columns} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame11]{Semi Analytical Spectral Element Method (SASE)} \only<1-2>{ \begin{columns}[T] \column{0.5\textwidth} \begin{itemize} \item standard eigenvalue problem $\omega (k)$ \item real $k$ -- real $\omega$ \item only dispersion curves \end{itemize} \column{0.5\textwidth} \begin{itemize} \item second-order polynomial eigenvalue problem $k(\omega)$ \item real $\omega$ -- complex $k$ \item dispersion curves and attenuation (complex $\matr{C}$) \end{itemize} \end{columns} \vspace{10pt} \begin{columns}[T] \column{0.5\textwidth} \begin{equation*} \left[\matr{A} - \omega^2\matr{M} \right]_{\alert{M}} \vect{U} =0 \end{equation*} \column{0.5\textwidth} \begin{equation*} \left[\hat{\matr{A}} - k \hat{\matr{D}} \right]_{\alert{2M}} \hat{\vect{Q}} =0 \end{equation*} \begin{equation*} \hat{\vect{Q}} =\left[\begin{array}{c} \vect{U}\\ k \vect{U} \end{array} \right] \end{equation*} \end{columns} } \only<2>{ \begin{flalign*} &\hat{\matr{A}} =\left[\begin{array}{cc} 0 & \matr{K}_{11} - \omega^2 \matr{M}\\ \matr{K}_{11} - \omega^2 \matr{M} & -i \left( c \, \matr{K}_{13} - s\, \matr{K}_{12} + s\, \matr{K}_{21} - c \, \matr{K}_{31} \right) \end{array} \right] \\ &\hat{\matr{D}} =\left[\begin{array}{cc} \matr{K}_{11} - \omega^2 \matr{M} & 0\\ 0& - \left( s^2 \, \matr{K}_{22} + c^2 \, \matr{K}_{33} -s c \, \matr{K}_{23} -sc \, \matr{K}_{32} \right) \end{array} \right] \end{flalign*} %\begin{equation*} %\hat{\matr{A}} =\left[\begin{array}{cc} %0 & \matr{K}_{11} 
- \omega^2 \matr{M}\\ %\matr{K}_{11} - \omega^2 \matr{M} & -i \left( c \, \matr{K}_{13} - s\, \matr{K}_{12} + s\, \matr{K}_{21} - c \, \matr{K}_{31} \right) %\end{array} \right] %%\end{equation*} %\begin{equation*} %\hat{\matr{D}} =\left[\begin{array}{cc} %\matr{K}_{11} - \omega^2 \matr{M} & 0\\ %0& - \left( s^2 \, \matr{K}_{22} + c^2 \, \matr{K}_{33} -s c \, \matr{K}_{23} -sc \, \matr{K}_{32} \right) %\end{array} \right] %\end{equation*} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Parametric studies} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame16]{Variability of parameters in \alert{indirect method}} \begin{table} \label{tab:mat_prop} \renewcommand{\arraystretch}{1.1} \centering \footnotesize \caption{Initial material properties of composite laminate} \begin{tabular}{ccccccc} \toprule \multicolumn{3}{c}{\textbf{Matrix} } & \multicolumn{3}{c}{\textbf{Fibres} } & \textbf{Volume fraction} \\ \midrule $\rho_m$ & $E_m$ & $\nu_m$ & $\rho_f$ & $E_f$ & $\nu_f$ & $V$\\ kg/m\textsuperscript{3} &GPa& -- & kg/m\textsuperscript{3} & GPa& -- & \%\\ \cmidrule(lr){1-3} \cmidrule(lr){4-6} \cmidrule(lr){7-7} 1250 &3.43& 0.35& 1900 & 240 & 0.2 & 50\\ \bottomrule \end{tabular} \end{table} \vspace{10pt} \centering \Large $\pm$20\%\\ \vspace{10pt} \normalsize Influence on dispersion curves \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame17]{SASE dispersion curves: density influence} \vspace{-10pt} \def\myindenta{0.17\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE2_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{matrix density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{90}$^{\circ}$ } } 
\label{fig:rhom} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE3_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{fibre density}\\ \hspace{\myindenta}on dispersion curves at angle \textbf{90}$^{\circ}$} } \label{fig:rhof} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} \textbf{Fibres density} has slightly more influence on dispersion curves than \textbf{matrix density}. 
\end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame18]{SASE dispersion curves: Young modulus influence} \vspace{-10pt} \def\myindenta{0.12\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE4_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus} \alert{of matrix}\\ \hspace{\myindenta} on dispersion curves at angle \textbf{90}$^{\circ}$ } } \label{fig:em} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE5_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} 
\caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Young's modulus} \alert{of fibres}\\ \hspace{\myindenta} on dispersion curves at angle \textbf{90}$^{\circ}$} } \label{fig:ef} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} \textbf{Young's modulus of matrix} has much more influence on dispersion curves than \textbf{Young's modulus of fibres}. \end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame19]{SASE dispersion curves: Poisson's ratio influence} \vspace{-10pt} \def\myindenta{0.12\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE6_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio} \alert{of matrix} \\ \hspace{\myindenta} on dispersion curves at angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of 
\alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of matrix} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$ } } \label{fig:nim} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE7_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio}\\ \hspace{\myindenta}\alert{of fibres} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{Poisson's ratio} \alert{of fibres} \\ \hspace{\myindenta} on dispersion curves at angle \textbf{90}$^{\circ}$} } \label{fig:nif} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} \textbf{Poisson's ratio of fibres} is the least influential parameter on dispersion curves among investigated parameters. 
\end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame20]{SASE dispersion curves: volume fraction influence} \vspace{-10pt} \def\myindenta{0.18\textwidth} % define myindenta variable for correcting caption placement \newcommand{\modelname}{SASE8_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{The influence of \alert{volume fraction of reinforcing fibres} on dispersion curves at angle \textbf{90}$^{\circ}$ } } \label{fig:vol} \end{figure} \only<8>{ \begin{alertblock}{Remarks} \textbf{Volume fraction} of reinforcing fibres is the most influential parameter on dispersion curves among investigated parameters. \end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % DIRECT METHOD %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame30a]{Variability of parameters in \alert{direct method}} \begin{table}[h!] \renewcommand{\arraystretch}{1.1} %\centering \footnotesize \caption{Initial values of elastic constants used in parametric studies (direct method). 
Units: [GPa]} \label{tab:Ctensor_initial} \begin{center} \begin{tabular}{ccccccccc} \toprule \(C_{11}\) & \(C_{12}\) & \(C_{13}\) & \(C_{22}\) & \(C_{23}\) & \(C_{33}\) & \(C_{44}\) & \(C_{55}\) & \(C_{66}\) \\ \midrule 50 &5& 5& 50 & 5 & 9 & 3 & 3 & 3\\ \bottomrule \end{tabular} \end{center} \end{table} \vspace{10pt} \centering \Large $\pm$20\%\\ \vspace{10pt} \normalsize Influence on dispersion curves \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame30]{SASE dispersion curves: \(C_{11}\) and \(C_{12}\) influence} \vspace{-10pt} \def\myindenta{0.12\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE20_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{11}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$ } } \label{fig:C11} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE19_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} 
\caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{12}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$} } \label{fig:C12} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} The influence of $C_{12}$ elastic constant on the dispersion curves is least pronounced \end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame31]{SASE dispersion curves: \(C_{13}\) and \(C_{22}\) influence} \vspace{-10pt} \def\myindenta{0.12\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE18_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{13}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$ } } \label{fig:C13} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE17_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ 
\hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{22}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$} } \label{fig:C22} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} The \(C_{13}\) and \(C_{22}\) parameters affects dispersion curves depending on angle. \end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame32]{SASE dispersion curves: \(C_{23}\) and \(C_{33}\) influence} \vspace{-10pt} \def\myindenta{0.12\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE16_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of 
\alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{23}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$ } } \label{fig:C23} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE15_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{33}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$} } \label{fig:C33} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} $C_{33}$ influence on S0 mode is strong at frequencies above 250 kHz \end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame33]{SASE dispersion curves: \(C_{44}\) and \(C_{55}\) influence} \vspace{-10pt} \def\myindenta{0.12\textwidth} % define myindenta variable for correcting caption placement \begin{columns}[T] \column{0.5\textwidth} \newcommand{\modelname}{SASE14_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$ } } \only<4>{ 
\includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{44}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$ } } \label{fig:C44} \end{figure} \column{0.5\textwidth} \newcommand{\modelname}{SASE13_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{15}$^{\circ}$} } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{30}$^{\circ}$} } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{45}$^{\circ}$} } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{60}$^{\circ}$} } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{75}$^{\circ}$} } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{\hspace{\myindenta}The influence of \alert{\(C_{55}\)} on dispersion curves at\\ \hspace{\myindenta}angle \textbf{90}$^{\circ}$} } \label{fig:C55} \end{figure} \end{columns} \only<8>{ \begin{alertblock}{Remarks} $C_{44}$ and $C_{55}$ influence on dispersion curves is mode and frequency dependent. 
\end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame34]{SASE dispersion curves: \(C_{66}\) influence} \vspace{-10pt} \def\myindenta{0.18\textwidth} % define myindenta variable for correcting caption placement \newcommand{\modelname}{SASE12_plain_weave} \begin{figure} \only<1>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_0_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{0}$^{\circ}$} } \only<2>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_15_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{15}$^{\circ}$ } } \only<3>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_30_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{30}$^{\circ}$ } } \only<4>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_45_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{45}$^{\circ}$ } } \only<5>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_60_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{60}$^{\circ}$ } } \only<6>{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_75_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{75}$^{\circ}$ } } \only<7->{ \includegraphics[scale=0.9]{SASE/\modelname_out/\modelname_angle_90_param_dispersion_curves_color.png} \caption{The influence of \alert{\(C_{66}\)} on dispersion curves at angle \textbf{90}$^{\circ}$ } } \label{fig:C66} \end{figure} \only<8>{ \begin{alertblock}{Remarks} $C_{66}$ has strong influence on dispersion curves at angle 0$^{\circ}$ and 90$^{\circ}$. \end{alertblock} } \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Optimisation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame23]{Optimization results by using GA} \begin{columns}[T] \column{0.5\textwidth} \begin{figure} [h!] \newcommand{\modelname}{ga_plain_weave_known_mass} %\centering \includegraphics[width=0.9\textwidth]{genetic_algorithm/\modelname_out/\modelname_angle_60_dispersion_curves_initial.png} \caption{Numerical dispersion curves calculated for initial parameters overlayed on experimental data} \label{fig:dispersion60deg_initial_} \end{figure} \column{0.5\textwidth} \begin{figure} [h!] \newcommand{\modelname}{ga_plain_weave_known_mass} %\centering \includegraphics[width=0.9\textwidth]{genetic_algorithm/\modelname_out/\modelname_angle_60_dispersion_curves_test_case_2.png} \caption{Numerical dispersion curves calculated for optimised parameters overlayed on experimental data} \label{fig:dispersion60deg_optimized} \end{figure} \end{columns} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Conclusions} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[t,label=frame29]{Summary} \begin{alertblock}{Indirect method} Parameters such as Young's modulus and volume fraction of reinforcing fibers compete with each other in the overlapping range of wavenumber values. Moreover, dispersion curves are affected in a similar way irrespective of the propagation angle. This can lead to ambiguities of optimal solutions. 
In fact, it is highly likely that a completely different set of indirect parameters can lead to the same dispersion curves.
\end{alertblock}
\begin{alertblock}{Direct method}
The influence of the C tensor on the dispersion curves shows much more localized changes. There is a strict correlation between each elastic constant and the changes in wavenumber values of a particular Lamb wave mode, frequency range or angle of propagation.
Therefore, it can be deduced that such an approach will lead to less ambiguous solutions in an optimization problem, with a greater chance of finding the global minimum.
\end{alertblock}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{frame}
\begin{frame}[t,label=A]{Acknowledgements}
\begin{alertblock}{Project title: Elastic constants identification of composite laminates by using Lamb wave dispersion curves and optimization methods}
The research was funded by the Polish National Science Center under grant agreement no.~2018/29/B/ST8/00045.
\end{alertblock}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\setbeamercolor{palette primary}{fg=black, bg=white}
\begin{frame}[standout]
Thank you for your attention!\\
\vspace{12pt}
Questions?\\
\vspace{12pt}
\url{[email protected]}
\end{frame}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% END OF SLIDES
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
{ "alphanum_fraction": 0.6839926812, "avg_line_length": 48.9079061685, "ext": "tex", "hexsha": "e3dd780464dd918ef8726cc815649da50f353a30", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "81fb823edeb26ed2e92b6296ac65f6659811e447", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "IFFM-PAS-MISD/lamb-opt", "max_forks_repo_path": "reports/beamer_presentations/SPIE2020/SPIE2020presentation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "81fb823edeb26ed2e92b6296ac65f6659811e447", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "IFFM-PAS-MISD/lamb-opt", "max_issues_repo_path": "reports/beamer_presentations/SPIE2020/SPIE2020presentation.tex", "max_line_length": 422, "max_stars_count": 3, "max_stars_repo_head_hexsha": "81fb823edeb26ed2e92b6296ac65f6659811e447", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "IFFM-PAS-MISD/lamb-opt", "max_stars_repo_path": "reports/beamer_presentations/SPIE2020/SPIE2020presentation.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-03T05:24:05.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-15T14:20:33.000Z", "num_tokens": 18601, "size": 56293 }
\chapter{Background}\label{chap:background} \newpage \input{content/2-background/1-network.tex} \newpage \input{content/2-background/2-mesh.tex} \newpage \input{content/2-background/3-p2p.tex} \newpage \input{content/2-background/4-media-api.tex} \newpage \input{content/2-background/5-webrtc.tex} \newpage \input{content/2-background/6-video-stream.tex} \newpage
{ "alphanum_fraction": 0.7857142857, "avg_line_length": 24.2666666667, "ext": "tex", "hexsha": "8317a705be3a7de979c1ff4387c5fc70c32995df", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "962809e03db2b91c30301aa8238b09d256d1569a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "auxdotapp/report", "max_forks_repo_path": "content/2-background/0-index.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "962809e03db2b91c30301aa8238b09d256d1569a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "auxdotapp/report", "max_issues_repo_path": "content/2-background/0-index.tex", "max_line_length": 47, "max_stars_count": 1, "max_stars_repo_head_hexsha": "962809e03db2b91c30301aa8238b09d256d1569a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "auxdotapp/report", "max_stars_repo_path": "content/2-background/0-index.tex", "max_stars_repo_stars_event_max_datetime": "2019-02-21T18:26:56.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-21T18:26:56.000Z", "num_tokens": 117, "size": 364 }
\documentclass[class=report, float=false, crop=false]{standalone}
\input{preamble}
\graphicspath{{figures/images/}{figures/figs/}}

\begin{document}

\chapter{Motility-induced phase separation}
\label{chap:mips}

\section{Spontaneous phase separation}

\subsection{Illustration}

\vspace{-0.3cm}
\inserttikz{spontaneous_phase_separation}{Displacement maps obtained with our model system. We denote by $\vec{u}(t, t + \Delta t)$ the displacement vector of a particle between times $t$ and $t + \Delta t$. Arrows are aligned with the displacement vector of the corresponding particle (disk). Colors correspond to the amplitude of the displacement vector of the corresponding particle (disk) as reported on the color map. \textbf{(top)} Initial state of the system, after applying the FIRE algorithm. \textbf{(bottom)} Final state of the system, after integration of the equation of motion (equation \ref{equation_motion}). \href{https://github.com/yketa/UBC_2018_Wiki/blob/master/presentation_7_30_18/movies/u_Dk5000_Vj1000_Rf5000_No2000_Il0000_Tl5000_Pl5000_Mn1000/u_Dk5000_Vj1000_Rf5000_No2000_Il0000_Tl5000_Pl5000_Mn1000.mov?raw=true}{Movie on GitHub \faGithub}.}{spontaneous_phase_separation}

Spontaneous phase separation is undoubtedly the most visually striking phenomenon we can observe with our model system. As illustrated by figure \ref{spontaneous_phase_separation}, an initially homogeneous system is able to spontaneously separate into two distinct phases:
\begin{itemize}
\item an active gas phase, where particles are seldom in contact and thus can move fast,
\item and a dense fluid phase, where the motility of particles is greatly reduced.
\end{itemize}
This behaviour is a common feature of systems of self-propelled particles and has been thoroughly explored \cite{fily2012athermal, fily2014freezing, redner2013structure, bialke2013microscopic, levis2014clustering, wysocki2014cooperative}.\\
This phenomenon is particularly counter-intuitive if one thinks about the behaviour of passive colloids at equilibrium. Despite the lack of an attractive interparticle potential, particles seem to be inevitably attracted to each other and form clusters. We describe the underlying mechanism in section \ref{mips_mechanism}.

\subsection{Mechanism}
\label{mips_mechanism}

Within a system of motile particles, a phase separated state can arise if the speed of particles decreases sufficiently steeply with increasing local density. A dilute active gas then coexists with a dense liquid of substantially reduced motility. This phenomenon is called \textit{motility-induced phase separation} \cite{cates2015motility}.\\
Its mechanism, described extensively in \cite{cates2015motility}, relies on two ingredients:
\begin{enumerate}
\item[(i)] Particles tend to accumulate where they move more slowly, as can be inferred directly from the master equation of a self-propelled particle with spatially varying speed.
\item[(ii)] Particles tend to move more slowly where they accumulate. In our case, coarse-graining the interparticle forces in equation \ref{equation_motion} results in an effective, reduced self-propulsion force.
\end{enumerate}
We then have a positive feedback loop between (i) a slowing-induced accumulation and (ii) an accumulation-induced slowing, which destabilises the uniform suspension. This destabilisation eventually leads to the phase separation we have observed.

\subsection{Phase diagram}
\label{subsection:phase_diagram}

We do not observe motility-induced phase separation for every set of parameters $(\phi, \tilde{v}, \tilde{\nu}_r)$.
In order to build a phase diagram, \textit{i.e.} to identify which sets of parameters lead to phase separation, Fily \textit{et al.} propose two characterisations of the system.

\myparagraph{Mean square displacement}

\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{figures/images/msd_Vj1000_Rh5000_No2000.png}
\hfill
\raisebox{-2.5mm}[0pt][0pt]{\makebox[0.49\textwidth][c]{\includegraphics[width=0.49\textwidth]{figures/figs/msd_Vj1000_Rh5000_No2000.eps}}}
\caption{Mean square displacement $\left<|\Delta\vec{r}(t)|^2\right>$ as a function of time for a system of $N=2\cdot10^3$ particles, with dimensionless self-propulsion velocity $\tilde{v} = 1\cdot10^{-2}$ and dimensionless rotational diffusion rate $\tilde{\nu}_r = 5\cdot10^{-4}$, for different packing fractions $\phi$. \textbf{(left)} Data from Fily \textit{et al.}. \textit{(inset)} Exponent $\alpha$ as a function of the packing fraction $\phi$. \textit{source:} \cite{fily2014freezing}. \textbf{(right)} Data from our simulations. The upward trend at late times observed for the highest packing fractions $\phi$ is due to the lack of statistics.}
\label{msd_phi}
\end{figure}

We know from the study of glass-forming materials \cite{binder2011glassy} that, at very low temperatures, particles may get temporarily trapped in cages formed by their neighbours -- they are in a "frozen" state, as opposed to a liquid state where they can move more freely. This is shown in mean square displacement (equation \ref{msd_ensemble}) plots where, at low temperature, $\left<|\Delta\vec{r}(t)|^2\right>$ reaches a plateau whose height is related to the size of the cage and whose length is related to the amount of time the particle stays trapped.\\
We have observed in our model system that, for a given set of parameters $(\tilde{v}, \tilde{\nu}_r)$, the mean square displacement can be linear (diffusive regime) at low packing fraction $\phi$ (see equation \ref{msd_prw_limit} and figure \ref{msd_ensemble_fit}) and reach a plateau (caged regime) at high packing fraction (see figure \ref{msd_phi}).\\
Thus, in order to systematically identify "frozen" and liquid states, the authors of \cite{fily2014freezing} introduce the exponent $\alpha$ characterising the long-time behaviour of the mean square displacement
\begin{equation}
\left<|\Delta\vec{r}(t)|^2\right> \underset{t \rightarrow +\infty}{\sim} t^{\alpha}
\label{exponent_alpha}
\end{equation}
and arbitrarily choose a threshold value, $\alpha_x = 0.5$, separating "frozen" states ($\alpha < \alpha_x$) from liquid states ($\alpha > \alpha_x$) (see the inset of the left panel of figure \ref{msd_phi}).

\myparagraph{Number fluctuation}

To identify phase separated regions, the authors of \cite{fily2014freezing} measure the spatial variance $\left<[\Delta N]^2\right>$ of the number of particles in a subsystem as a function of the average number $N_s$ in these subsystems.
For large enough subsystems, $N_s \gg 1$, this function is a power law of exponent $\beta$
\begin{equation}
\left<[\Delta N]^2\right> \underset{N_s \rightarrow +\infty}{\sim} N_s^{\beta}
\label{exponent_beta}
\end{equation}
With $\tilde{T} = k_BT/(ka^2)$ the dimensionless temperature in a thermal system, they also introduce the thermal counterpart of exponent $\beta$ in the limit of zero temperature, $\beta_0 = \lim_{\tilde{T} \rightarrow 0} \beta$.\\
With
\begin{equation}
\beta_e = \beta - (\beta_0 - 1)
\label{exponent_betae}
\end{equation}
the authors then define phase separated states as states for which $\beta_e > \beta_x = 1.5$.\\
We propose another characterisation of the phase separated regime in section \ref{mips_characterisation}.

\myparagraph{Phase diagram}

\vspace{-0.5cm}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{figures/images/phase_diagram.png}
\caption{Phase diagram of the system, with fixed rotational diffusion rate $\tilde{\nu}_r = 5\cdot10^{-4}$, as a colormap of the exponents $\alpha$ (see equation \ref{exponent_alpha}) and $\beta_e$ (see equations \ref{exponent_beta} and \ref{exponent_betae}). Phase separated states ($\beta_e > \beta_x = 1.5$) are represented in red and glass ("frozen") states ($\alpha < \alpha_x = 0.5$) are represented in blue. The remaining phase diagram space corresponds to liquid states. \textit{source:} \cite{fily2014freezing}}
\label{phase_diagram}
\end{figure}

Exponents $\alpha$ and $\beta_e$ have been systematically measured for different sets of parameters $(\phi, \tilde{v}, \tilde{\nu}_r)$ by the authors of \cite{fily2014freezing} in order to access the phase diagram of the system (see figure \ref{phase_diagram}).\\
We observe that phase separated states only exist in a range of self-propulsion velocities $\tilde{v}$. For $\tilde{v} \ll 1$, we expect the effect of activity to be negligible, so that the system resembles a packing of athermal frictionless disks, which cannot phase separate and which jams at high packing fraction $\phi$ \cite{o2003jamming, olsson2007critical}. For $\tilde{v} \gg 1$, we expect the interparticle interactions to be negligible in comparison to the self-propulsion term in equation \ref{equation_motion}, thus preventing the accumulation-induced slowing leading to phase separation.\\
The authors also looked at the evolution of the boundaries between the phase separated states and the liquid states and between the liquid states and the "frozen" states while varying the rotational diffusion rate (see figure \ref{phase_boundary}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figures/images/phase_boundary.png}
\caption{Boundaries of phase separated states (dashed lines) and "frozen" states (solid lines) for different dimensionless rotational diffusion rates, $0 \leq \tilde{\nu}_r \leq 1\cdot10^{-2}$. Recall that the persistence time is $\tau_r = \tilde{\nu}_r^{-1}$. \textit{source:} \cite{fily2014freezing}}
\label{phase_boundary}
\end{figure}
We observe that the domain of "frozen" states is almost unaffected by the rotational diffusion rate, while the domain of phase separated states increases in volume with increasing persistence time $\tau_r$ (\textit{i.e.}, decreasing rotational diffusion rate $\tilde{\nu}_r$).
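In practice, the exponents $\alpha$ and $\beta$ are obtained by fitting the long-time (respectively large-$N_s$) part of the measured curves. The short Python sketch below illustrates this fitting step on synthetic data; the array contents, the choice of fitting window and the simplified classification thresholds are assumptions made here for illustration only, and do not correspond to the actual analysis scripts used in this work.
\begin{verbatim}
import numpy as np

# Synthetic stand-ins for measured data (illustration only):
# a diffusive MSD, <|dr(t)|^2> ~ t, and normal fluctuations, <[dN]^2> ~ Ns.
t = np.logspace(0, 4, 50)
msd = 0.1 * t
Ns = np.logspace(1, 4, 30)
varN = 2.0 * Ns

def powerlaw_exponent(x, y, keep=0.25):
    # Fit y ~ x**exponent over the last fraction `keep` of the data
    # (large-x regime) by linear regression in log-log scale.
    n = max(2, int(keep * len(x)))
    slope, _ = np.polyfit(np.log(x[-n:]), np.log(y[-n:]), 1)
    return slope

alpha = powerlaw_exponent(t, msd)   # long-time MSD exponent
beta = powerlaw_exponent(Ns, varN)  # number fluctuation exponent

# Simplified classification; note that the criterion for phase separation
# actually uses beta_e = beta - (beta_0 - 1), which is omitted here.
print("frozen" if alpha < 0.5 else "liquid", alpha)
print("anomalous number fluctuations" if beta > 1.5 else "normal", beta)
\end{verbatim}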
\section{Characterisation}
\label{mips_characterisation}

\subsection{Local density probability}
\label{subsection:local_density_probability}

Rather than characterising phase separated states with number fluctuations, we will more simply measure the distribution $P(\phi_{loc})$ of local packing fractions $\phi_{loc}$, like Wysocki \textit{et al.} \cite{wysocki2014cooperative}. This distribution must be unimodal for fluid states and centred around the system packing fraction $\phi$, while it must be bimodal for phase separated states, with two local maxima at the average packing fractions of the dense fluid phase and the active gas phase.\\
We define the local density at position $\vec{r}$ and time $t$ as
\begin{equation}
\phi_{loc}(\vec{r}, t, r_{max}) = \frac{1}{4r_{max}^2} \sum_{i, ||\vec{r}_i(t) - \vec{r}||_{\infty} \leq r_{max}} \pi a_i^2
\label{philoc}
\end{equation}
where $r_{max}$ is chosen to be a few particle diameters.

\myparagraph{Computation details}

We divide the system square box into $N_{cases} \times N_{cases}$ linearly spaced square boxes with centres $(\vec{R}_{kl})_{1 \leq k, l \leq N_{cases}}$. We then choose $S_{max}$ times $(t_m)_{1 \leq m \leq S_{max}}$, with $\forall m, t_m \geq S_{init}$, compute the local packing fractions at these times and positions $(\phi_{loc}(\vec{R}_{kl}, t_m, r_{max}))_{1 \leq k, l \leq N_{cases}, 1 \leq m \leq S_{max}}$, and then compute the histogram of these values, $P(\phi_{loc})$.\\
Our computation script is available at \href{https://github.com/yketa/active_particles/blob/master/analysis/varn.py}{{\faGithub~ yketa/active\_particles/analysis/varn.py}}.

\subsection{Fluid to phase separated transition}

We can plot local packing fraction histograms as functions of any parameter of the set $(\phi, \tilde{v}, \tilde{\nu}_r)$, with the two other parameters being fixed, to identify the values of these parameters at the transition between the fluid and phase separated states.

\myparagraph{Varying self-propulsion velocity $\tilde{v}$}

As expected from the phase diagram (figure \ref{phase_diagram}), we have from figure \ref{philoc_v} that
\begin{itemize}
\item at low P\'eclet number $\text{Pe}$ (\textit{i.e.}, low self-propulsion velocity $\tilde{v}$), the distribution $P(\phi_{loc})$ is unimodal and centred around the system packing fraction $\phi$, thus indicating a fluid state;
\item at high P\'eclet number $\text{Pe}$ (\textit{i.e.}, high self-propulsion velocity $\tilde{v}$), the distribution $P(\phi_{loc})$ is bimodal, thus indicating a separation between regions of high packing fraction (a dense fluid) and vanishing packing fraction (an active gas).
\end{itemize}
This plot shows that the transition from the fluid state to the phase separated state, when varying $\tilde{v}$ and keeping $(\phi, \tilde{\nu}_r)$ constant, might be continuous. Nonetheless, we do not rule out that additional simulations at self-propulsion velocities close to the transition would show a sharper change in the distribution $P(\phi_{loc})$.
\begin{figure}[h!]
\centering
\makebox[\textwidth]{
\hspace*{0.3cm}\raisebox{1.1cm}[0pt][0pt]{\includegraphics[width=0.39\textwidth]{figures/images/phase_diagram_arrow.png}}
\hspace{-2cm}\includegraphics[width=0.8\textwidth]{figures/figs/Pphiloc_Dl1000_Rh3000_Nq1000_Io5000_Ml1000_Cn5000.eps}
}
\caption{\textbf{(left)} (figure \ref{phase_diagram}) Phase diagram of the system, with fixed rotational diffusion rate $\tilde{\nu}_r = 5\cdot10^{-4}$. Phase separated states are colored in red and "frozen" states are colored in blue.
The green arrow represents the path in the phase diagram corresponding to the local packing fraction histogram plot. \textit{source:} \cite{fily2014freezing}. \textbf{(right)} Local packing fraction histogram as a function of the P\'eclet number $\text{Pe} = \tilde{v}/\tilde{\nu}_r$. Colors correspond to the probability $P(\phi_{loc})$ of the local packing fraction $\phi_{loc}$ as reported on the color map. The red dashed line corresponds to the system packing fraction $\phi = 1.00$.}
\label{philoc_v}
\end{figure}

\myparagraph{Varying rotational diffusion rate $\tilde{\nu}_r$}

As expected from the phase diagram (figure \ref{phase_boundary}), we have from figure \ref{philoc_dr} that
\begin{itemize}
\item at low P\'eclet number $\text{Pe}$ (\textit{i.e.}, high rotational diffusion rate $\tilde{\nu}_r$ and low persistence time $\tau_r$), the distribution $P(\phi_{loc})$ is unimodal and centred around the system packing fraction $\phi$, thus indicating a fluid state;
\item at high P\'eclet number $\text{Pe}$ (\textit{i.e.}, low rotational diffusion rate $\tilde{\nu}_r$ and high persistence time $\tau_r$), the distribution $P(\phi_{loc})$ is bimodal, thus indicating a separation between regions of high packing fraction (a dense fluid) and vanishing packing fraction (an active gas).
\end{itemize}
This plot shows that the transition from the fluid state to the phase separated state, when varying $\tilde{\nu}_r$ and keeping $(\phi, \tilde{v})$ constant, might be continuous. Nonetheless, we do not rule out that additional simulations at rotational diffusion rates close to the transition would show a sharper change in the distribution $P(\phi_{loc})$.
\begin{figure}[h!]
\centering
\makebox[\textwidth]{
\hspace*{0.5cm}\raisebox{0.9cm}[0pt][0pt]{\includegraphics[width=0.3\textwidth]{figures/images/phase_boundary_dot.png}}
\hspace{-0cm}\includegraphics[width=0.8\textwidth]{figures/figs/Pphiloc_Dk8000_Vj1000_Nq1000_Io5000_Ml1000_Cn5000.eps}
}
\caption{\textbf{(left)} (figure \ref{phase_boundary}) Boundaries of phase separated states (dashed lines) and "frozen" states (solid lines) for different dimensionless rotational diffusion rates. The green dot marks the set of parameters $(\phi = 0.80, \tilde{v} = 1\cdot10^{-2})$ corresponding to the local packing fraction histogram plot. \textit{source:} \cite{fily2014freezing}. \textbf{(right)} Local packing fraction histogram as a function of the P\'eclet number $\text{Pe} = \tilde{v}/\tilde{\nu}_r$. Colors correspond to the probability $P(\phi_{loc})$ of the local packing fraction $\phi_{loc}$ as reported on the color map. The red dashed line corresponds to the system packing fraction $\phi = 0.80$.}
\label{philoc_dr}
\end{figure}

\end{document}
{ "alphanum_fraction": 0.7714516832, "avg_line_length": 92.3620689655, "ext": "tex", "hexsha": "3fd527167a13186e550f5dff2e2c70a0094f15ad", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f68e704111c5c1802afa880950c8520cc8acc9a", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yketa/UBC---Spring-2018---Wiki", "max_forks_repo_path": "report/chapters/mips.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f68e704111c5c1802afa880950c8520cc8acc9a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yketa/UBC---Spring-2018---Wiki", "max_issues_repo_path": "report/chapters/mips.tex", "max_line_length": 895, "max_stars_count": null, "max_stars_repo_head_hexsha": "5f68e704111c5c1802afa880950c8520cc8acc9a", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yketa/UBC---Spring-2018---Wiki", "max_stars_repo_path": "report/chapters/mips.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4347, "size": 16071 }
\section{Introduction} \label{sec:intro} In this paper we design numerical methods to solve a two-moment model that governs the transport of particles obeying Fermi-Dirac statistics (e.g., neutrinos), with the ultimate target being nuclear astrophysics applications (e.g., neutrino transport in core-collapse supernovae and compact binary mergers). The numerical method is based on the discontinuous Galerkin (DG) method for spatial discretization and implicit-explicit (IMEX) methods for time integration, and it is designed to preserve certain physical constraints of the underlying model. The latter property is achieved by considering the spatial and temporal discretization together with the closure procedure for the two-moment model. In many applications, the particle mean free path is comparable to or exceeds other characteristic length scales in the system under consideration, and non-equilibrium effects may become important. In these situations, a kinetic description based on a particle distribution function may be required. The distribution function, a phase space density $f$ depending on momentum $\vect{p}\in\bbR^{3}$ and position $\vect{x}\in\bbR^{3}$, is defined such that $f(\vect{p},\vect{x},t)$ gives at time $t\in\bbR^{+}$ the number of particles in the phase space volume element $d\vect{p}\,d\vect{x}$ (i.e., $d\cN=f\,d\vect{p}\,d\vect{x}$). The evolution of the distribution function is governed by the Boltzmann equation, which states a balance between phase space advection and particle collisions (see, e.g., \cite{braginskii_1965,chapmanCowling_1970,lifshitzPitaevskii_1981}). Solving the Boltzmann equation numerically for $f$ is challenging, in part due to the high dimensionality of phase space. To reduce the dimensionality of the problem and make it more computationally tractable, one may instead solve (approximately) for a finite number of angular moments $\vect{m}_{N}=(m^{(0)},m^{(1)},\ldots,m^{(N)})^{T}$ of the distribution function, defined as \begin{equation} m^{(k)}(\varepsilon,\vect{x},t)=\f{1}{4\pi}\int_{\bbS^{2}}f(\omega,\varepsilon,\vect{x},t)\,g^{(k)}(\omega)\,d\omega, \end{equation} where $\varepsilon=|\vect{p}|$ is the particle energy, $\omega$ is a point on the unit sphere $\bbS^{2}$ indicating the particle propagation direction, and $g^{(k)}$ are momentum space angular weighing functions. In problems where collisions are sufficiently frequent, solving a \emph{truncated moment problem} can provide significant reductions in computational cost since only a few moments are needed to represent the solution accurately. On the other hand, in problems where collisions do not sufficiently isotropize the distribution function, more moments may be needed. In the two-moment model considered here ($N=1$), angular moments representing the particle density and flux (or energy density and momentum) are solved for. Two-moment models for relativistic systems appropriate for nuclear astrophysics applications have been discussed in, e.g., \cite{lindquist_1966,andersonSpiegel_1972,thorne_1981,shibata_etal_2011,cardall_etal_2013a}. However, in this paper, for simplicity (and clarity), we consider a non-relativistic model, leaving extensions to relativistic systems for future work. In a truncated moment model, the equation governing the evolution of the \mbox{$N$-th} moment $m^{(N)}$ contains higher moments $\{m^{(k)}\}_{k=N+1}^{M}$ ($M>N$), which must be specified in order to form a closed system of equations. 
For the two-moment model, the symmetric rank-two Eddington tensor (proportional to the pressure tensor) must be specified. Approaches to this \emph{closure problem} include setting $m^{(k)}=0$, for $k>N$ ($P_N$ equations \cite{brunnerHolloway_2005} and filtered versions thereof \cite{mcclarrenHauck_2010,laboure_etal_2016}), Eddington approximation (when $N=0$) \cite{mihalasMihalas_1999}, Kershaw-type closure \cite{kershaw_1976}, and maximum entropy closure \cite{minerbo_1978,cernohorskyBludman_1994,olbrant_etal_2013}. The closure procedure often results in a system of nonlinear hyperbolic conservation laws, which can be solved using suitable numerical methods (e.g., \cite{leveque_1992}). One challenge in solving the closure problem is constructing a sequence of moments that are consistent with a positive distribution function, which typically implies algebraic constraints on the moments \cite{kershaw_1976,levermore_1984}. Moments satisfying these constraints are called \emph{realizable moments} (e.g., \cite{levermore_1996}). When evolving a truncated moment model numerically, maintaining realizable moments is challenging, but necessary in order to ensure the well-posedness of the closure procedure \cite{levermore_1996,junk_1998,hauck_2008}. In addition to putting the validity of the numerical results into question, failure to maintain moment realizability in a numerical model may, in order to continue a simulation, require ad hoc post-processing steps with undesirable consequences such as loss of conservation. Here we consider a two-moment model for particles governed by Fermi-Dirac statistics. It is well known from the two-moment model for particles governed by Maxwell-Boltzmann statistics (``classical'' particles with $f\ge0$), that the particle density is nonnegative and the magnitude of the flux vector is bounded by the particle density. (There are further constraints on the components of the Eddington tensor \cite{levermore_1984}.) Furthermore, the set of realizable moments generated by the particle density and flux vector constitutes a convex cone \cite{olbrant_etal_2012}. In the fermionic case, there is also an upper bound on the distribution function (e.g., $f\le1$) because Pauli's exclusion principle prevents particles from occupying the same microscopic state. The fermionic two-moment model has recently been studied theoretically in the context of maximum entropy closures \cite{lareckiBanach_2011,banachLarecki_2013,banachLarecki_2017b} and Kershaw-type closures \cite{banachLarecki_2017a}. Because of the upper bound on the distribution function, the algebraic constraints on realizable moments differ from the classical case with no upper bound, and can lead to significantly different dynamics when the occupancy is high (i.e., when $f$ is close to its upper bound). In the fermionic case, the set of realizable moments generated by the particle density and flux vector is also convex. It is ``eye-shaped'' (as will be shown later; cf. Figure~\ref{fig:RealizableSetFermionic} in Section~\ref{sec:realizability}) and tangent to the classical realizability cone on the end representing low occupancy, but is much more restricted for high occupancy. In this paper, the two-moment model is discretized in space using high-order Discontinuous Galerkin (DG) methods (e.g., \cite{cockburnShu_2001,hesthavenWarburton_2008}). DG methods combine elements from both spectral and finite volume methods and are an attractive option for solving hyperbolic partial differential equations (PDEs). 
They achieve high-order accuracy on a compact stencil; i.e., data is only communicated with nearest neighbors, regardless of the formal order of accuracy, which can lead to a high computation to communication ratio, and favorable parallel scalability on heterogeneous architectures has been demonstrated \cite{klockner_etal_2009}.
Furthermore, they can easily be applied to problems involving curvilinear coordinates (e.g., beneficial in numerical relativity \cite{teukolsky_2016}).
Importantly, DG methods exhibit favorable properties when collisions with a background are included, as they recover the correct asymptotic behavior in the diffusion limit, characterized by frequent collisions (e.g., \cite{larsenMorel_1989,adams_2001,guermondKanschat_2010}).
The DG method was introduced in the 1970s by Reed \& Hill \cite{reedHill_1973} to solve the neutron transport equation, and has undergone remarkable developments since then (see, e.g., \cite{shu_2016} and references therein).
We are concerned with the development and application of DG methods for the fermionic two-moment model that can preserve the aforementioned algebraic constraints and ensure realizable moments, provided the initial condition is realizable.
Our approach is based on the constraint-preserving (CP) framework introduced in \cite{zhangShu_2010a}, and later extended to the Euler equations of gas dynamics in \cite{zhangShu_2010b}.
(See, e.g., \cite{xing_etal_2010,zhangShu_2011,olbrant_etal_2012,cheng_etal_2013,zhang_etal_2013,endeve_etal_2015,wuTang_2015} for extensions and applications to other systems.)
The main ingredients include (1) a realizability-preserving update for the cell averaged moments based on forward Euler time stepping, which evaluates the polynomial representation of the DG method in a finite number of quadrature points in the local elements and results in a Courant-Friedrichs-Lewy (CFL) condition on the time step; (2) a limiter to modify the polynomial representation to ensure that the algebraic constraints are satisfied point-wise without changing the cell average of the moments; and (3) a time stepping method that can be expressed as a convex combination of Euler steps and therefore preserves the algebraic constraints (possibly with a modified CFL condition).
As such, our method is an extension of the realizability-preserving scheme developed by Olbrant et al. \cite{olbrant_etal_2012} for the classical two-moment model.
The DG discretization leaves the temporal dimension continuous.
This semi-discretization leads to a system of ordinary differential equations (ODEs), which can be integrated with standard ODE solvers (i.e., the method of lines approach to solving PDEs).
We use implicit-explicit (IMEX) Runge-Kutta (RK) methods \cite{ascher_etal_1997,pareschiRusso_2005} to integrate the two-moment model forward in time.
This approach is motivated by the fact that we can resolve time scales associated with particle streaming terms in the moment equations, which will be integrated with explicit methods, while terms associated with collisional interactions with the background induce fast time scales that we do not wish to resolve, and will be integrated with implicit methods.
This splitting has some advantages when solving kinetic equations since the collisional interactions may couple across momentum space, but are local in position space, and are easier to parallelize than a fully implicit approach.
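To make the role of this splitting concrete, the following sketch (in Python, for illustration only) applies a first-order implicit-explicit (forward-backward Euler) step to a toy scalar equation with a non-stiff term treated explicitly and a stiff relaxation term treated implicitly; the toy model, the relaxation time, and all variable names are assumptions made for this example and are not part of the scheme developed in this paper.
\begin{verbatim}
# Toy model: du/dt = a(u) + (1/tau) * (u_eq - u),
# where a(u) stands in for the non-stiff streaming-like term (explicit)
# and the relaxation toward u_eq mimics stiff collisions (implicit).
def a(u):
    return -0.5 * u            # illustrative non-stiff term

u_eq, tau = 1.0, 1.0e-6        # equilibrium value and stiff relaxation time
u, dt = 0.0, 1.0e-2            # initial condition and time step (dt >> tau)

for _ in range(100):
    u_star = u + dt * a(u)     # explicit (forward Euler) non-stiff update
    # Implicit (backward Euler) relaxation step; the implicit equation
    #   u_new = u_star + (dt/tau) * (u_eq - u_new)
    # is linear and can be solved in closed form:
    u = (u_star + (dt / tau) * u_eq) / (1.0 + dt / tau)

print(u)  # remains stable and relaxes toward u_eq even though dt >> tau
\end{verbatim}
Even though the step size far exceeds the relaxation time, the update remains stable because only the stiff term is treated implicitly, which is the property exploited by the IMEX schemes discussed below.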
The CP framework of \cite{zhangShu_2010a} achieves high-order (i.e., greater than first-order) accuracy in time by employing strong stability-preserving explicit Runge-Kutta (SSP-RK) methods \cite{shuOsher_1988,gottlieb_etal_2001}, which can be written as a convex combination of forward Euler steps. Unfortunately, this strategy to achieve high-order temporal accuracy does not work as straightforwardly for standard IMEX Runge-Kutta (IMEX-RK) methods because implicit SSP Runge-Kutta methods with greater than first-order accuracy have time step restrictions similar to explicit methods \cite{gottlieb_etal_2001}. To break this ``barrier,'' recently proposed IMEX-RK schemes \cite{chertock_etal_2015,hu_etal_2018} have resorted to first-order accuracy in favor of the SSP property in the standard IMEX-RK scheme, and recover second-order accuracy with a correction step. We consider the application of the correction approach to the two-moment model. However, with the correction step from \cite{chertock_etal_2015} we are unable to prove the realizability-preserving property without invoking an overly restrictive time step. With the correction step from \cite{hu_etal_2018} the realizability-preserving property is guaranteed with a time step comparable to that of the forward Euler method applied to the explicit part of the scheme, but the resulting scheme performs poorly in the asymptotic diffusion limit. Because of these challenges, we resort to first-order temporal accuracy, and propose IMEX-RK schemes that are convex-invariant with a time step equal to that of forward Euler on the explicit part, perform well in the diffusion limit, and reduce to a second-order SSP-RK scheme in the streaming limit (no collisions with the background material). The realizability-preserving property of the DG-IMEX scheme depends sensitively on the adopted closure procedure. The explicit update of the cell average can, after employing the simple Lax-Friedrichs flux and imposing a suitable CFL condition on the time step, be written as a convex combination. Realizability of the updated cell average is then guaranteed from convexity arguments \cite{zhangShu_2010a}, provided all the elements in the convex combination are realizable. Realizability of individual elements in the convex combination is conditional on the closure procedure (components of the Eddington tensor must be computed to evaluate numerical fluxes). We prove that each element in the convex combination is realizable provided the moments involved in expressing the elements are moments of a distribution function satisfying the bounds implied by Fermi-Dirac statistics (i.e., $0\le f \le 1$). For algebraic two-moment closures, which we consider, the so-called Eddington factor is given by an algebraic expression depending on the evolved moments and completely determines the components of the Eddington tensor. Realizable components of the Eddington tensor demand that the Eddington factor satisfies strict lower and upper bounds (e.g., \cite{levermore_1984,lareckiBanach_2011}). We discuss algebraic closures derived from Fermi-Dirac statistics that satisfy these bounds, and demonstrate with numerical experiments that the DG-IMEX scheme preserves realizability of the moments when these closures are used. We also demonstrate that further approximations to algebraic two-moment closures for modeling particle systems governed by Fermi-Dirac statistics may give results that are incompatible with a bounded distribution and, therefore, unphysical. 
The example we consider is the Minerbo closure \cite{minerbo_1978}, which can be obtained as the low occupancy limit of the maximum entropy closure of Cernohorsky \& Bludman \cite{cernohorskyBludman_1994}. The paper is organized as follows. In Section~\ref{sec:model} we present the two-moment model. In Section~\ref{sec:realizability} we discuss moment realizability for the fermionic two-moment model, while algebraic moment closures are discussed in Section~\ref{sec:algebraicClosure}. In Section~\ref{sec:dg} we briefly introduce the DG method for the two-moment model, while the (convex-invariant) IMEX time stepping methods we use are discussed in Section~\ref{sec:imex}. The main results on the realizability-preserving DG-IMEX method for the fermionic two-moment model are worked out in Sections~\ref{sec:realizableDGIMEX} and \ref{sec:limiter}. In Section~\ref{sec:limiter} we also discuss the realizability-enforcing limiter. Numerical results are presented in Section~\ref{sec:numerical}, and summary and conclusions are given in Section~\ref{sec:conclusions}. Additional details on the IMEX schemes are provided in Appendices.
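As a closing illustration of the limiter concept in ingredient (2) above, the following sketch (in Python, for illustration only) applies a simple Zhang--Shu-type linear scaling of nodal values toward the cell average so that prescribed lower and upper bounds are satisfied point-wise while the cell average is left unchanged; the bounds, nodal values, and function names are assumptions made for this example, and the actual realizability constraints and limiter used for the fermionic two-moment model are specified in the body of the paper.
\begin{verbatim}
import numpy as np

def scaling_limiter(nodal_values, cell_average, lower=0.0, upper=1.0):
    # Replace each nodal value u_q by u_avg + theta * (u_q - u_avg), with
    # the largest theta in [0, 1] that keeps all nodal values inside
    # [lower, upper].  Assumes the cell average already satisfies the bounds.
    u = np.asarray(nodal_values, dtype=float)
    theta = 1.0
    for u_q in u:
        if u_q > upper:
            theta = min(theta, (upper - cell_average) / (u_q - cell_average))
        elif u_q < lower:
            theta = min(theta, (lower - cell_average) / (u_q - cell_average))
    return cell_average + theta * (u - cell_average)

# Example: nodal values slightly overshooting the bounds 0 <= u <= 1.
u_nodes = np.array([1.05, 0.60, -0.02])
u_avg = u_nodes.mean()   # stands in for the (realizable) cell average
print(scaling_limiter(u_nodes, u_avg))
\end{verbatim}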
{ "alphanum_fraction": 0.8055664702, "avg_line_length": 167.8021978022, "ext": "tex", "hexsha": "d4390dd10280dafe76e6f8d9747dd1ecb4e149fd", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2022-01-24T02:08:20.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-14T01:13:40.000Z", "max_forks_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "srichers/thornado", "max_forks_repo_path": "Documents/M1/realizableFermionicM1/sections/intro.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_issues_repo_issues_event_max_datetime": "2021-11-11T13:21:00.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-10T20:13:15.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "srichers/thornado", "max_issues_repo_path": "Documents/M1/realizableFermionicM1/sections/intro.tex", "max_line_length": 690, "max_stars_count": 6, "max_stars_repo_head_hexsha": "bc6666cbf9ae8b39b1ba5feffac80303c2b1f9a8", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "srichers/thornado", "max_stars_repo_path": "Documents/M1/realizableFermionicM1/sections/intro.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-24T19:31:21.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-08T16:16:55.000Z", "num_tokens": 3576, "size": 15270 }
\section*{Box 1: Definitions} \begin{itemize} \item \textbf{Version Control System (VCS)}: \textit{(noun)} a program that tracks changes to specified files over time and maintains a library of all past versions of those files \item \textbf{Git}: \textit{(noun)} a version control system \item \textbf{repository (repo)}: \textit{(noun)} folder containing all tracked files as well as the version control history \item \textbf{commit}: \textit{(noun)} a snapshot of changes made to the staged file(s); \textit{(verb)} to save a snapshot of changes made to the staged file(s) \item \textbf{stage}: \textit{(noun)} the staging area holds the files to be included in the next commit; \textit{(verb)} to mark a file to be included in the next commit \item \textbf{track}: \textit{(noun)} a tracked file is one that is recognized by the Git repository \item \textbf{branch}: \textit{(noun)} a parallel version of the files in a repository (Box 7) \item \textbf{local}: \textit{(noun)} the version of your repository that is stored on your personal computer \item \textbf{remote}: \textit{(noun)} the version of your repository that is stored on a remote server, for instance on GitHub \item \textbf{clone}: \textit{(verb)} to create a local copy of a remote repository on your personal computer \item \textbf{fork}: \textit{(noun)} a copy of another user's repository on GitHub; \textit{(verb)} to copy a repository, for instance from one user's GitHub account to your own \item \textbf{merge}: \textit{(verb)} to update files by incorporating the changes introduced in new commits \item \textbf{pull}: \textit{(verb)} to retrieve commits from a remote repository and merge them into a local repository \item \textbf{push}: \textit{(verb)} to send commits from a local repository to a remote repository \item \textbf{pull request}: \textit{(noun)} a message sent by one GitHub user to merge the commits in their remote repository into another user's remote repository \end{itemize}
{ "alphanum_fraction": 0.7592311583, "avg_line_length": 98.85, "ext": "tex", "hexsha": "e7f2f65dca91c45bb75e6bc6f0cae9f3d14f3bfd", "lang": "TeX", "max_forks_count": 190, "max_forks_repo_forks_event_max_datetime": "2022-03-25T12:44:25.000Z", "max_forks_repo_forks_event_min_datetime": "2015-06-30T18:45:35.000Z", "max_forks_repo_head_hexsha": "d9ee9a3ca47227c643ebc19618e8f4074abcd6ec", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "Hennebe/git-for-science", "max_forks_repo_path": "box-1-definitions.tex", "max_issues_count": 117, "max_issues_repo_head_hexsha": "d9ee9a3ca47227c643ebc19618e8f4074abcd6ec", "max_issues_repo_issues_event_max_datetime": "2022-03-25T14:06:53.000Z", "max_issues_repo_issues_event_min_datetime": "2015-03-09T02:16:11.000Z", "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "Hennebe/git-for-science", "max_issues_repo_path": "box-1-definitions.tex", "max_line_length": 179, "max_stars_count": 30, "max_stars_repo_head_hexsha": "d9ee9a3ca47227c643ebc19618e8f4074abcd6ec", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "Hennebe/git-for-science", "max_stars_repo_path": "box-1-definitions.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-28T08:14:37.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-27T18:44:08.000Z", "num_tokens": 510, "size": 1977 }
\documentclass[11pt]{article}
\pdfpagewidth 8.5in
\pdfpageheight 11in
\setlength\topmargin{0in}
\setlength\headheight{0in}
\setlength\headsep{0.25in}
\setlength\textheight{7.7in}
\setlength\textwidth{6.5in}
\setlength\oddsidemargin{0in}
\setlength\evensidemargin{0in}
\setlength\parindent{0.25in}
\setlength\parskip{0.5in}
\usepackage{verbatim}
%\usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
%\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\geometry{landscape} % Activate for rotated page geometry
\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amssymb,amsmath}
\usepackage{epstopdf}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
\usepackage{fullpage}
\usepackage{mdwlist}
\usepackage{enumitem}
\setdescription{noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt}

\title{Python and Command Line Review}
\author{Emily Josephs \& Nancy Chen}
%\date{} % Activate to display a given date or no date

\begin{document}
\maketitle

%\section{}
%\subsection{}

This is not a quiz! Please work through the questions. If you get stuck, you can try actually running the scripts to see how they work. \\
\textsl{Typeset conventions}: All code or code-like text is written in \texttt{constant-width font}.\\
\begin{enumerate}

\section*{Unix}
\item Briefly define what the following Unix commands will do.
\begin{itemize}
\item \texttt{cd /home/mydirectory}\\
\item \texttt{cd ..}\\
\item \texttt{cp}\\
\item \texttt{less}\\
\item \texttt{head}\\
\end{itemize}
\item How do you save a file in vim? \\
\\

\section*{Data types}
\item Data can be stored as integers, floats, strings, or lists. Write the data type of each variable listed below.
\begin{itemize}
\item \texttt{'tomato'}
\item \texttt{56}
\item \texttt{72.2}
\item \texttt{['apple','pear','plum']}
\item \texttt{[1,2,3,4,5]}\\
\end{itemize}
\item \texttt{myFruit = ['apple','pear','plum']}. What does \texttt{myFruit[0]} refer to?\\
\\
\item What would the output of \texttt{len(myFruit)} be?\\
\\
\item Write down the code you would use to add \texttt{'tomato'} to the list \texttt{myFruit}.\\
\\
\\
\\
\\

\section*{If/Else and For Loops}
\item Look at the following script:
\begin{verbatim}
myVeg = ['carrot','beet','bok choy']
if 'cabbage' in myVeg:
    print('yay I can make cole slaw!')
else:
    print('I need to go to the store')
\end{verbatim}
What will be printed out if you run the script?\\
\\
\\
\\
\item Look at the following script:
\begin{verbatim}
myVeg = ['carrot', 'beet', 'bok choy']
for veg in myVeg:
    print(myVeg[0])
\end{verbatim}
What will be printed if you run the script?\\
\\
\\
\\
\\
\item Look at the following line:
\begin{verbatim}
myCodons = ["atg","tac","ttt","tcc"]
\end{verbatim}
Write a short script that will tell you if the codon \texttt{`atg'} is in the list \texttt{myCodons}.\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\item \texttt{codonTable[``atg'']} will return the amino acid coded by the codon \texttt{`atg'}. Write a script that will print out the amino acids coded by the four codons in \texttt{myCodons}. You will likely need to use a \texttt{for} loop.\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\item Open a new Python file in vim and type the following code:
\begin{verbatim}
myRange = range(0,10)
print(myRange)
\end{verbatim}
Write down what you think the \texttt{range} function does. What data type is the output?\\
\\
\\
\\
\\
\\
Edit the script to print out all the numbers between 20 and 30.
\\ \end{enumerate} \end{document}
{ "alphanum_fraction": 0.6731784583, "avg_line_length": 26.124137931, "ext": "tex", "hexsha": "9b4863535b2cff2b4192a18df1fe104a06c4182d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e1cc9156e1f666c50d3935155eaa283c1bb3128c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "emjosephs/phs-outreach", "max_forks_repo_path": "PythonReview.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e1cc9156e1f666c50d3935155eaa283c1bb3128c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "emjosephs/phs-outreach", "max_issues_repo_path": "PythonReview.tex", "max_line_length": 241, "max_stars_count": null, "max_stars_repo_head_hexsha": "e1cc9156e1f666c50d3935155eaa283c1bb3128c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "emjosephs/phs-outreach", "max_stars_repo_path": "PythonReview.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1218, "size": 3788 }
\documentclass{beamer} \usepackage[english]{babel} \usepackage{lipsum} \usepackage{todonotes} \usepackage{style} \usepackage{tabularx} \usepackage{hyperref} % Theme based on https://github.com/elauksap/beamerthemepolimi \usetheme[bgphoto]{polimi} \usefonttheme[onlymath]{serif} \setbeamertemplate{bibliography item}{\insertbiblabel} % Hide section name from frame header \newif\ifhidesechead \hidesecheadfalse \title{\tlap} \subtitle{% A modeling language for \texorpdfstring{\linebreak}{}% concurrent and distributed systems } \author{M. Donadoni \and A. Fulgini \and E. Morassutto} \begin{document} \begin{frame} \maketitle \end{frame} \begin{frame}{Table of contents} \tableofcontents[hideallsubsections] \end{frame} \input{sections/1.introduction.tex} \input{sections/2.clock.tex} \input{sections/3.tlc.tex} \input{sections/4.twophase.tex} \hidesecheadtrue \input{sections/5.conclusions.tex} \def\insertsectiongraphic{metro_exit} \section*{\texorpdfstring{\Circle}{Next}, \emph{Questions?}} \begin{frame} \sectionpage \end{frame} \nocite{*} \section*{References} \begin{frame}{References} \scriptsize \bibliographystyle{abbrv} \bibliography{references} \end{frame} \input{sections/photo_credits.tex} \end{document}
{ "alphanum_fraction": 0.702189781, "avg_line_length": 20.7575757576, "ext": "tex", "hexsha": "306ed59dfab82cf8fc9ec9a58137865f36330c36", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "47c417ef66c1e34b2e13abef1d70cb73e703f731", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fuljo/flaTLAnd", "max_forks_repo_path": "tla_slides.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "47c417ef66c1e34b2e13abef1d70cb73e703f731", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fuljo/flaTLAnd", "max_issues_repo_path": "tla_slides.tex", "max_line_length": 64, "max_stars_count": null, "max_stars_repo_head_hexsha": "47c417ef66c1e34b2e13abef1d70cb73e703f731", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fuljo/flaTLAnd", "max_stars_repo_path": "tla_slides.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 419, "size": 1370 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage[english]{babel}
\usepackage{subfig}
\usepackage[
backend=biber,
style=alphabetic,
sorting=ynt
]{biblatex}
\addbibresource{ssvgd.bib}

\title{State Space Reporting Delay}
\date{January 2018}

\begin{document}

\subsection*{Introduction}
State-space models have become popular tools in the analysis of time series. They allow for arbitrary transition and observation dynamics: the researcher can assign a latent data-generating process while simultaneously allowing for observational error on that process. The classic algorithm for fitting non-Gaussian SSMs is the particle filter. Although many variations exist, we generally refer to the sampling importance resampling (SIR) filter when discussing particle filtering. Although a powerful inference tool, particle filtering suffers from several well-known drawbacks. The first is the problem of filter degeneracy, which occurs when the observations are far from the state predicted by the latent dynamics. The second is the excessive run-time on longer time series with complex dynamics.

We propose an alternative approach that we hope will do better than particle filtering in practice. In this approach, Stein Variational Gradient Descent (SVGD) is used to sequentially estimate the distribution of the state variables in each time step, conditional on the observed data up through that time.

\subsection*{Overview of SVGD}
Stein Variational Gradient Descent can be used to estimate a continuous distribution by a set of particles. By iteratively transporting samples from an initial distribution in the direction of the likelihood, we are able to compute Monte Carlo estimates of the posterior. The usefulness of this approximation is apparent in Bayesian statistics, where the usually intractable normalizing constant disappears in the particle update step.

The particles are subject to the following gradient ascent procedure:
$$x_i^{l+1} \leftarrow x_i^{l}+\epsilon_l\hat{\phi}^*(x_i^l) $$
$$\hat{\phi}^*(x) = \frac{1}{n}\sum_{j=1}^n[k(x_j^l,x)\nabla_{x_j^l}\log p(x_j^l) + \nabla_{x_j^l}k(x_j^l,x)]$$
for an arbitrary positive definite kernel function $k(\cdot,\cdot)$, usually chosen to be a Gaussian kernel.

\subsection*{State Space Models}
Suppose we are given a time series $Y_1,Y_2,\ldots,Y_t$ with $Y \in \mathbb{R}$. We model the sequence as a state-space model parameterized by an observation density $p(y_t | x_t)$ and a transition density $p(x_t | x_{t-1})$ (Figure 1).

\begin{center}
\includegraphics[scale=.5]{/home/gcgibson/ssm.png}
\end{center}

We are interested in the filtering distribution $p(x_1,\ldots,x_n | y_1,\ldots,y_n)$, which by Bayes' formula is
$$p(x_1,\ldots,x_n | y_1,\ldots,y_n) = \frac{p(y_1,\ldots,y_n | x_1,\ldots,x_n) p(x_1,\ldots,x_n)}{Z}$$
Because computing the normalizing constant $Z$ is intractable for many choices of $p(y_t | x_t)$ and $p(x_t | x_{t-1})$, we must resort to Monte Carlo algorithms. The classic approach that incorporates the sequential nature of the data is the particle filtering algorithm, which approximates the filtering density using sequential importance sampling. We instead focus on the recursion derived below.
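Before deriving that recursion, the following sketch is added purely for illustration and is not part of the original manuscript: it shows how a single SSVGD time step could be implemented, assuming the locally level Gaussian model used later in the paper, a Gaussian (RBF) kernel with a fixed bandwidth, and invented variable names.

\begin{verbatim}
import numpy as np

def svgd_step(X, grad_logp, h=0.5, eps=0.05):
    # One SVGD update: X is (n, d), grad_logp returns (n, d) scores.
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * h ** 2))              # RBF kernel matrix
    diff = X[None, :, :] - X[:, None, :]          # diff[j, i] = x_i - x_j
    repulsive = (K[:, :, None] * diff).sum(axis=0) / h ** 2
    phi = (K @ grad_logp(X) + repulsive) / n
    return X + eps * phi

def target_score(X, X_prev, y, s1=1.0, s2=1.0):
    # Score of p(y | x) * (1/n) sum_i N(x; x_prev_i, s1^2)
    # for the locally level Gaussian model.
    d = X[:, None, 0] - X_prev[None, :, 0]
    logw = -0.5 * d ** 2 / s1 ** 2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    prior_score = (w * (-d / s1 ** 2)).sum(axis=1, keepdims=True)
    lik_score = (y - X) / s2 ** 2
    return lik_score + prior_score

rng = np.random.default_rng(0)
n, T = 100, 20
ys = np.cumsum(rng.normal(size=T)) + rng.normal(scale=0.5, size=T)
X_prev = rng.normal(size=(n, 1))
for y in ys:
    X = X_prev + rng.normal(scale=1.0, size=(n, 1))   # initialise from transition
    for _ in range(100):                              # SVGD iterations
        X = svgd_step(X, lambda Z: target_score(Z, X_prev, y))
    X_prev = X                        # particles approximating p(x_t | y_{1:t})
\end{verbatim}

The recursion that this step approximates is the following.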
$$p(x_t | y_{1:t}) = \int p(x_{0:t} | y_{1:t})dx_{0:t-1}$$
$$=\frac{p(y_t | x_t)}{\int p(y_t|x_t)p(x_t | y_{1:t-1})dx_t}p(x_t | y_{1:t-1})$$
$$\propto p(y_t|x_t)p(x_t | y_{1:t-1})$$
$$\propto p(y_t|x_t)\int_{x_{t-1}}p(x_t,x_{t-1} | y_{1:t-1})dx_{t-1}$$
$$\propto p(y_t|x_t)\int_{x_{t-1}}p(x_t |x_{t-1} )p(x_{t-1}| y_{1:t-1})dx_{t-1}$$
which we can approximate using SVGD as
$$\approx p(y_t|x_t) \frac{1}{n}\sum_{i=1}^n p(x_t | x_{t-1}^{(i)})$$
We can now estimate $p(x_{t+1}|y_{1:t+1})$ using the same algebra as above (proof in Appendix A).

\subsection*{Locally Level Gaussian Noise Model}
In order to demonstrate that the approximation is reasonable, we evaluate the predictive accuracy under an analytically tractable model, the locally level Gaussian model. This model takes the form
$$X_t \sim N(X_{t-1},\sigma_1^2)$$
$$Y_t \sim N(X_t, \sigma_2^2)$$

\begin{figure}[!tbp]
  \centering
  \subfloat[SSVGD]{\includegraphics[scale=.25]{/home/gcgibson/ssvgd/manuscript/ssvgd_locally_level.pdf}\label{fig:f1}}
  \hfill
  \subfloat[PF]{\includegraphics[scale=.25]{/home/gcgibson/ssvgd/manuscript/pf_locally_level.pdf} \label{fig:f2}}
  \caption{Comparison of locally level Gaussian model}
\end{figure}

\subsection*{Poisson Observation Model With Seasonal State-Space Dynamics}
In order to evaluate the performance on more involved dynamics, we consider the following state-space model.
$$\begin{pmatrix} X_{t,1} \\ X_{t,2} \end{pmatrix} = \begin{pmatrix} \cos(2\pi/s) & \sin(2\pi/s) \\ -\sin(2\pi/s) & \cos(2\pi/s) \end{pmatrix} \begin{pmatrix} X_{t-1,1} \\ X_{t-1,2} \end{pmatrix} $$
$$Y_t \sim \mathrm{Pois}(e^{X_{t,1}})$$

\begin{figure}[!tbp]
  \centering
  \subfloat[SSVGD]{\includegraphics[scale=.25]{/home/gcgibson/ssvgd/manuscript/ssvgd_seasonal.pdf}\label{fig:f3}}
  \hfill
  \subfloat[PF]{\includegraphics[scale=.25]{/home/gcgibson/ssvgd/manuscript/pf_seasonal.pdf} \label{fig:f4}}
  \caption{Comparison of seasonal Poisson model}
\end{figure}

\subsection*{Divergent Particle Filter}
We next investigate the ability of SSVGD to perform in the presence of poor initialization. This is a well-known issue with current particle filter implementations: starting far from a plausible value of $x_0$ forces all particles to receive weight $0$ under the likelihood, leading to a degenerate filtering distribution. Under SSVGD, however, we can simply increase the number of iterations, allowing for arbitrarily poor starting points.

Standard particle filtering algorithms use the effective sample size as a measure of degeneracy. This is commonly defined as
$$S^{pf}_{eff} = \frac{1}{\sum_i (w_t^i)^2}$$
The common rule of thumb is to not allow this quantity to drop below 50. The natural translation of this metric to SSVGD is to compute the same quantity from the samples obtained by SSVGD.

\subsection*{Results}
Standard particle filtering algorithms use the effective sample size as a measure of degeneracy. This is commonly defined as
$$S_{eff} = \frac{1}{\sum_i (w_i)^2}$$
The common rule of thumb is to not allow this quantity to fall below 50. Indeed, software implementations such as Biips throw an error if the number of effective particles falls below 50. We compute the effective sample size in an analogous way to the particle filter, where $w_i$ is defined as in the SIR particle filter.

\subsection*{Discussion}
\cite{liu_stein_2016-4}

\printbibliography
\end{document}
{ "alphanum_fraction": 0.7519609294, "avg_line_length": 55.3852459016, "ext": "tex", "hexsha": "5b76a38f258e9b445367b4b0941ce2774a41719a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8f47dca7588a3ccbc13069860f342efcd5bbf644", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gcgibson/ssvgd", "max_forks_repo_path": "manuscript/ssvgd.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8f47dca7588a3ccbc13069860f342efcd5bbf644", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gcgibson/ssvgd", "max_issues_repo_path": "manuscript/ssvgd.tex", "max_line_length": 813, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8f47dca7588a3ccbc13069860f342efcd5bbf644", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gcgibson/ssvgd", "max_stars_repo_path": "manuscript/ssvgd.tex", "max_stars_repo_stars_event_max_datetime": "2018-02-06T20:18:28.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-06T20:18:28.000Z", "num_tokens": 1939, "size": 6757 }
\documentclass[a4paper,10pt]{article} % use larger type; default would be 10pt
\usepackage[a4paper,left=2cm,right=2cm,top=2.5cm,bottom=2.5cm]{geometry}
\bibliographystyle{plainurl}
%\input{../cours_thermo/lib_jld.tex}
%\graphicspath{{./images/}}
\usepackage{hyperref,verbatim}

%%% The "real" document content comes below...

\title{A certification-oriented OpenCL subset: definition}
\author{Jean-Louis Dufour}
%\date{} % Activate to display a given date or no date (if empty),
         % otherwise the current date is printed
\date{Safran Electronics \& Defense\\ \today }

\begin{document}
\maketitle

\section{Introduction}

\cite{eigenmann1991experience}
%\citefield{eigenmann1991experience}{title}
\cite{gupta1997privatization}

\section{bla}

Manycores (i.e. mainly GPUs programmed in CUDA or OpenCL) are emerging in embedded systems, but only in non-critical usages. In safety-critical usages, of course they will not ensure safety alone: they will be just a part of a redundant and diverse architecture. But this will not suppress the expectation for correctness justification: it will only be lower.

Indeed, the justifications required by a certification process are beyond the reach of industrial practice, and at the frontier of the academic state of the Art. The problem can be decomposed as follows:
\begin{description}
\item[Hardware] is a complex black box, where critical mechanisms like the network-on-chip linking cores to memory banks must be guessed from the patents \cite{aamodt2018general}; certification-friendly suppliers are a rare exception \cite{boyer2018computing}.
\item[Software / Compiler-and-Scheduler] is also a complex black box, with very few justifications of the compliance w.r.t. the OpenCL specification (beyond passing the OpenCL conformance test suite; from now on, we focus on OpenCL \cite{munshi2011opencl}),
\item[Software / Application] (a.k.a. \emph{(compute) kernel}) consists of tens, hundreds or thousands of lightweight threads sharing two levels of memory ('local' and 'global' in OpenCL). This kind of software is the favorite playground of \emph{heisenbugs}, hence test-based methodologies are notoriously ineffective, whether to debug or to verify.
\end{description}

These three aspects are equally important, but the first two are more 'industrial/business' than technical, because of the competitive nature of the main market of GPUs : video games. We focus here on the third aspect, which is mainly technical, but has also a non-negligible 'industrial/human' facet.

Our proposal can be simply summed up :
\begin{enumerate}
\item we demonstrate that the kernel is \emph{deterministic} : it performs between inputs and outputs a transfer \emph{function} and not a transfer \emph{relation}. Due to the astronomical number of possible interleavings between the threads (the 'work-items' in OpenCL), the only way to do this is to do it \emph{formally}. The state of the Art does not allow this to be automated in the general case, but we claim here that a special case can be defined, whose semi-automation is \emph{industrially} achievable. This 'industrial/human' aspect is a key point, and relates both to the engineers and to the certifiers: the mastery of a formal language is not needed, and as often as possible, only simple annotations are to be written. It is the subject of this paper.
\item we then demonstrate that the (parallel) kernel has an equivalent sequential version : thanks to the previous property, this can convincingly be done by test. This is not covered by this paper.
\item finally, it only remains to validate the sequential version functionally: business as usual (and usually by test); again not covered by this paper. \end{enumerate} Heisenbugs are detected by the first step, and after only 'standard' bugs remain. A formal method is mandatory only for this first step, and we will do our best to not use it for more than that : functional correctness is not in its scope, and even low-level correctness (like array indexes in their range) as such is not in its scope (but see at the end). We cheat a little bit when we talk about an 'OpenCL subset' for which semi-automation is possible. This subset is \begin{enumerate} \item a syntactic subset of the OpenCL language (device side), \item a subset of the possible codes written in the former syntactic subset : let's call them the \emph{'almost-embarrassingly-parallel' codes}, this will be explained later. \end{enumerate} The usefulness of this subset will be demonstrated on a 'real-life' set of OpenCL kernels : the OpenCV library. In particular the process will be illustrated on a representative kernel: the histogram equalization function 'equalizeHist'. Lastly, this process raises an interesting question: will many-core be the Trojan horse of formal methods to penetrate the impassable enclosure of safety-critical software development ? \section{Multicores and manycores pose different certification problems} Multicores and manycores look the same from an hardware point of view : cores sharing memories. Therefore both undergo the same problems w.r.t. critical applications: \begin{itemize} \item variability in access times to shared memory, which induces variability in execution times. \item race conditions: \cite{padua2011encyclopedia} \begin{quotation} when two or more threads access a common resource, e.g., a variable in shared memory, and the order of the accesses depends on the timing, i.e., the progress of individual threads \end{quotation} \end{itemize} For multicores the timing variability is significant \cite{cullmann2010predictability}, even with long pipelines and out-of-order execution. But with only slight exaggeration, this is their only problem with regard to certification. For manycores, this variability is also a active research subject \cite{de2020scaling}, but it may be less important in practice, because the hardware architecture is completely 'data-oriented', and especially because the first implementations will be careful not to execute different kernels in parallel. But let's say it is as important: it's still not the most feared phenomenon. The problem comes from the particular kind of supported algorithm, which induces a particular kind of sharing between threads : \begin{itemize} \item on a multicore, \emph{in an embedded use}, the design aims to safely assess the timing variability, for example with a synchronous approach. In this case, each task has exclusivity on its own data, and data exchanged between tasks are carefully read and written at the start and end of the tasks, in such a way to avoid simultaneous accesses. In other words the few data sharing which occurs is under time-control, and there is mainly a non-functional bus/memory sharing: there is no fundamental difficulty in obtaining a deterministic functional behavior. \item on a manycore, hundreds of work-items read and write simultaneously the same data arrays. Work-items share not only organs, but also data, leading if we are not careful to {race conditions}. 
This is a potential source of non-determinism, which is usually a show-stopper for certification. \end{itemize} To summarize, race conditions and non-determinism are not a problem for embedded multicores, but are THE problem for manycores (embedded or not). \begin{comment} There seems to be an agreement \cite{narayanasamy2007automatically} \cite{burnim2009asserting} where a thread reads a data which is updated simultaneously by another thread This very last point is a key point : the software engineer will not be asked to master a formal language. Instead he will state simple assertions, which reflect (ANGLAIS) his design intent. almost-embarassingly parallel le titre est un peu trompeur : the OpenCL subset means 2 things: - syntactic subset - a subset of the applications : ... \end{comment} \section{The OpenCL subset} Due to the complexity of the possible interleavings between the work-items, the elimination of race conditions is a challenging aspect of parallel programming : a standard name for this is \emph{'Data Race Freedom'} ('DRF'; here a data race will be defined as a race condition on the simplest object : the memory cell). The DRF property is an active research topic of parallel programming, and several tools have been developped for tracking race conditions on manycores (among other things), among them PUG \cite{li2010scalable}, GPUVerify \cite{betts2012gpuverify}, VerCors \cite{blom2014vercors}. Now, the real property we are looking for is not DRF but \emph{determinism} : every possible execution of the kernel gives the same outputs (starting from given inputs). Determinism is also a hot subject for parallel programming \cite{burnim2009asserting}, both properties are related but not in an obvious way. We will set a stronger objective which we call the \emph{'almost-embarrassingly-parallel'} property, which implies both DRF and determinism. To define it, we must first recall what is a \emph{barrier interval} : it is the kernel code between two consecutive barriers. The start and the end of a kernel are implicit barriers, so a barrier-free kernel has a single barrier interval (which is the kernel itself). For this notion of 'consecutive barrier' to be meaningful, we have to restrict the placement of barriers : typically, a barrier will be forbidden in a conditional statement. The chosen restriction will also ensure statically that \emph{barrier divergence} will not occur: a kernel can now be seen as a predictable sequence of barrier intervals, the same for all work-items. We restrict OpenCL not only syntactically, but also semantically: \emph{each barrier interval must be embarrassingly parallel}. We formalize this fuzzy notion in the following way : consider any barrier interval, then any pair of work-items will work on disjoint subsets of each shared array. These partitions of the shared arrays will vary from barrier interval to barrier interval, that's what makes the difference between \emph{'almost-embarrassingly'} and \emph{'embarrassingly'}. We don't even try to infer these disjoint subsets : they are the rationale for the design, so they have to be explicitely stated by the designer. They are in fact a simplified version of the 'separation logic' used in VerCors \cite{blom2014vercors}. Typically, for each shared array, the subset is an interval parameterized by the work-item id. 
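As a deliberately simple illustration (added here, and not taken from the paper), suppose a barrier interval in which $P$ work-items update a shared array $A$ of length $P \cdot c$, and the designer annotates work-item $i$ with the interval
\[
S_i = \{\, k \mid i \cdot c \le k < (i+1) \cdot c \,\}.
\]
Any two such intervals are disjoint, $S_i \cap S_j = \emptyset$ for $i \neq j$, and every access $A[e]$ performed by work-item $i$ must then be shown to satisfy $i \cdot c \le e < (i+1) \cdot c$.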
They give rise to two kind of proof obligations (for any barrier interval): \begin{description} \item[disjointness] for any pair of work-items, for any shared array, the subsets are disjoint, \item[correctness] for any work-item, for any shared array, for any access to this array, the access is in the corresponding subset. \end{description} Let's state the restrictions (the first three define the syntactic subset, the fourth is the semantic restriction): \begin{enumerate} \item the kernel execution involves a unique work-group, \item the only synchronization mechanism is the barrier (no atomics), \item barriers occur either at toplevel in the kernel, or at toplevel in a toplevel 'for' loop (containing neither 'break' nor 'continue') whose iteration values (start, stop, step) depend only on the scalar inputs of the kernel (not on the arrays, not on the work-item ids), \item each barrier interval is embarrassingly parallel. \end{enumerate} \begin{comment} \begin{quotation} Note that the work-group barrier must be encountered by all workitems of a work-group executing the kernel or by none at all. \end{quotation} synchronization : host and device (wi, kernel) work-item synchronization is not necessary in case of embarassingly parallel applies only to work-items in the same work-group \end{comment} \section{Issues and discussion} There are three main issues: on the principle itself, on the subset and on the proof obligations. The fact that \emph{'almost-embarrassingly-parallel'} implies \emph{DRF} seems (at least to us) obvious, but the implication towards \emph{deterministic} is not so obvious, and the informal justification we will present would have been advantageously replaced by a more formal proof. The subset is very restrictive, that's why we are testing it on the OpenCV kernels. The two kinds of proof obligations differ in terms of complexity: \begin{itemize} \item the disjointness needs few context and is within the reach of the best SMT-solvers. Of course, in absolute terms the problem is undecidable, that's why a 5th optional restriction is that this proof obligation belongs to Presburger arithmetic. \item the correctness needs more context and, as its name implies, is in fact the typical proof obligation associated with an assertion in a Hoare-logic framework like Frama-C. It is a bit harder than proving indexing correctness (alluded to in the introduction), because the subset of indexes is strictly smaller than the full range of the array. In particular, if the access is located after a loop, a loop invariant may be necessary. \end{itemize} \begin{comment} bla skeletons : \cite{cole2004bringing}, \cite{steuwer2011skelc} \cite{betts2012gpuverify} Early GPUs were primarily tailored toward \emph{embarrassingly parallel} graphics workloads : computing independently each of the pixels that constitute the display, hence a low degree of data sharing. \end{comment} \section{Related works} As already mentioned, this work is strongly inspired by PUG \cite{li2010scalable}, GPUVerify \cite{betts2012gpuverify} and VerCors \cite{blom2014vercors}. The motivation is to make these technologies accessible to engineers. For this, it is necessary to significantly reduce the complexity, hence the new concept of \emph{'almost-embarrassingly-parallel'}. \begin{comment} Two common ways to do so are sequential consistency \cite{lamport1979make} and linearizability \cite{herlihy1990linearizability}. 
Both require that the values returned by the operations appear to have been returned by a sequential execution of the same operations; sequential consistency only requires this order to be consistent with the order in which each individual thread invokes the operations, while linearizability further requires this order to be consistent with the real-time order of nonoverlapping operations. This concept of \emph{'almost-embarrassingly-parallel'} kernel has to be validated. \end{comment} \section{Conclusions} The three issues mentioned constitute the work plan for the next few months. \bibliography{opencl_certif} \end{document}
{ "alphanum_fraction": 0.7824442944, "avg_line_length": 59.24, "ext": "tex", "hexsha": "a4ccb61f780db4eeae850043a26d89f592fc6338", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "dab8ad460c4a2e8e50da26968d13138704dd6975", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "JeanLouisDufour/pyOclSimu", "max_forks_repo_path": "opencl_certif_def.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "dab8ad460c4a2e8e50da26968d13138704dd6975", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "JeanLouisDufour/pyOclSimu", "max_issues_repo_path": "opencl_certif_def.tex", "max_line_length": 351, "max_stars_count": null, "max_stars_repo_head_hexsha": "dab8ad460c4a2e8e50da26968d13138704dd6975", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "JeanLouisDufour/pyOclSimu", "max_stars_repo_path": "opencl_certif_def.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3307, "size": 14810 }
\documentclass[10pt,a4paper]{book}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{braket}
\usepackage{dsfont}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=black,
    filecolor=magenta,
    urlcolor=cyan,
    pdftitle={Quantum mechanics},
    pdfpagemode=FullScreen,
    }

\title{Quantum Mechanics}
\author{Stefan Aeschbacher}
\date{\today}

\newtheorem{post}{Postulate}

\DeclareMathOperator {\opOne} {\mathds{1}}
\DeclareMathOperator {\opH} {\hat{H}}
\DeclareMathOperator {\opA} {\hat{A}}
\DeclareMathOperator {\opB} {\hat{B}}
\DeclareMathOperator {\opC} {\hat{C}}
\DeclareMathOperator {\opU} {\hat{U}}
\DeclareMathOperator {\opT} {\hat{T}}
\DeclareMathOperator {\opV} {\hat{V}}
\DeclareMathOperator {\opPos} {\hat{x}}
\DeclareMathOperator {\opMom} {\hat{p}}
\DeclareMathOperator {\opSigma} {\hat{\sigma}}

\begin{document}

\maketitle
\tableofcontents

\chapter{Dirac notation}
A ket $\ket{v}$ is an element of a complex vector space. The corresponding bra $\bra{v}$ is an element of its dual space.
The usual rules of linear algebra are valid:
\begin{align}
\ket{u} + \ket{v} & = \ket{w} \\
c\ket{v} &= \ket{u}; c \in \mathbb{C}
\end{align}
Convert between bras and kets:
\begin{align}
c_1 \ket{v_1} + c_2 \ket{v_2} \iff c_1^* \bra{v_1} + c_2^* \bra{v_2}
\end{align}

\section{Inner product}
\begin{align}
\braket{u|v} & = \braket{v|u}^* \\
\braket{v|v} & \ge 0 \\
\braket{v|v} &= 0 \iff v = 0
\end{align}
Linear in the second argument and antilinear in the first:
\begin{align}
\braket{u|c_1 v_1 + c_2 v_2} & = c_1\braket{u|v_1} + c_2\braket{u|v_2} \\
\braket{c_1 u_1 + c_2 u_2|v} & = c_1^*\braket{u_1|v} + c_2^*\braket{u_2|v}
\end{align}
For $v, u \in \mathbb{C}^n$ as vectors ($\bra{v}$ is a row vector, $\ket{u}$ is a column vector)
\begin{align}
\braket{v|u} = \sum_{i=1}^{n}v_i^*u_i
\end{align}
For complex-valued functions $f, g$ as vectors with $x \in [0,L]$
\begin{align}
\braket{f|g} = \int_0^Lf^*(x)g(x)dx
\end{align}
For a set of basis vectors $\{e_i\}$ (Kronecker delta)
\begin{align}
\braket{e_i|e_j} = \delta_{ij}
\end{align}
Write a vector as a linear combination of basis vectors
\begin{align}
\ket{v} = \sum_{i=0}^n v_i\ket{e_i} &= \sum_{i=0}^n \ket{e_i}\bra{e_i}\ket{v} \\
\braket{e_i|v} &= v_i
\end{align}

\section{Outer product}
A bra and a ket can be combined in the outer product to create an operator
\begin{align}
X &= \ket{v}\bra{u} \\
X\ket{\Psi} &= \ket{v}\bra{u}\ket{\Psi} = \ket{v}\braket{u|\Psi}
\end{align}

\chapter{Postulates}
\section{Postulate 1: state}
\begin{post}
The state of a physical system is described by a state vector that belongs to a complex
vector space V, called the state space of the system.
\end{post}

\section{Postulate X: time evolution}
\begin{align}
i\hbar \frac{ \partial } { \partial t } \ket{ \Psi(t) } = \opH(t)\ket{ \Psi(t) }
\end{align}

\chapter{State space}
\begin{align}
\ket{ \Psi_1} + \ket{ \Psi_2} & = \ket{ \Psi_3} \\
\ket{ \Psi_1} + \ket{ \Psi_2} & = \ket{ \Psi_2} + \ket{ \Psi_1}
\end{align}

\chapter{Operators}
\section{Basic Properties}
An operator acting on a ket creates a new ket:
\begin{align}
\opA \ket{\Psi} = \ket{\Psi'} \\
\ket{\Psi}, \ket{\Psi'} \in V
\end{align}
Operators are linear
\begin{align}
\opA (a_1\ket{\Psi_1} + a_2\ket{\Psi_2}) = (a_1\opA\ket{\Psi_1} + a_2\opA\ket{\Psi_2}) \\
\ket{\Psi_1}, \ket{\Psi_2} \in V; a_1, a_2 \in \mathbb{C}
\end{align}
Operators are associative and commutative under addition
\begin{align}
\opA + (\opB + \opC) = (\opA + \opB) + \opC \\
\opA + \opB = \opB + \opA
\end{align}
Multiplying operators is interpreted as applying them to kets. It is associative but NOT (in general) commutative.
\begin{align}
\opA\opB\ket{\Psi} = \opA(\opB\ket{\Psi}) = \opA\ket{\Psi'} \\
\opA(\opB\opC) = (\opA\opB)\opC \\
\opA\opB \ne \opB\opA
\end{align}
The lack of commutativity makes the ``commutator'' useful
\begin{align}
[\opA, \opB] = \opA\opB - \opB\opA
\end{align}
The inverse $\opA^{-1}$ of an operator is defined by
\begin{align}\label{InverseOperator}
\opA^{-1}\opA = \opA\opA^{-1} = \opOne
\end{align}

\section{Hermitian Operators}
An operator is called Hermitian (or self-adjoint) if it is its own Hermitian conjugate, $\opA = \opA^\dagger$:
\begin{align}
\opA\ket{A} = \ket{B} \rightarrow \bra{A}\opA^\dagger = \bra{B} \\
\opA\ket{A} = \ket{B} \rightarrow \bra{A}\opA = \bra{B}
\end{align}
A Hermitian operator $\opA$ has the following properties
\begin{enumerate}
\item $\opA\ket{\lambda} = \lambda \ket{\lambda} \rightarrow \lambda \in \mathbb{R}$
\item $\braket{\opA} = \bra{\Psi}\opA\ket{\Psi} \in \mathbb{R}$
\item All eigenvectors with different eigenvalues are orthogonal
\end{enumerate}
The Hermitian conjugate of a product is
\begin{align}
(\opA\opB)^\dagger = \opB^\dagger\opA^\dagger
\end{align}

\section{Projection Operators}
A projection operator onto a normalised ket $\ket{v}$ (with $\braket{v|v} = 1$) is defined by
\begin{align}
\hat{P}_v &= \ket{v}\bra{v} \\
\hat{P}_v^2 &= \hat{P}_v; \hat{P}_v^\dagger = \hat{P}_v
\end{align}

\section{Unitary operators}
A unitary operator is defined by
\begin{align}
\opU^{-1} = \opU^\dagger
\end{align}
This leads to (see also \eqref{InverseOperator})
\begin{align}
\opU^\dagger\opU = \opU\opU^\dagger = \opOne
\end{align}
The product of two unitary operators ($\opU^{-1} = \opU^\dagger; \opV^{-1} = \opV^\dagger$) is unitary as well
\begin{align}
(\opU\opV)^\dagger(\opU\opV) = \opOne \\
(\opU\opV)(\opU\opV)^\dagger = \opOne
\end{align}
The eigenvalues of a unitary operator have magnitude 1
\begin{align}
\opU\ket{\lambda} = \lambda\ket{\lambda} \Rightarrow |\lambda|^2 = 1 \\
|\lambda| = 1 \Rightarrow \lambda = e^{i\phi_\lambda}; \phi_\lambda \in \mathbb{R}
\end{align}
The eigenvectors are orthogonal $\braket{\mu|\lambda} = 0$.

Unitary transformations conserve the scalar product of two kets and the norm of a ket.
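As a concrete check of these properties (an illustrative example added here, not part of the original notes), consider the Pauli matrix
\begin{align}
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
\sigma_x^\dagger = \sigma_x, \qquad
\sigma_x^\dagger \sigma_x = \mathds{1}
\end{align}
It is both Hermitian and unitary. Its eigenvalues are $\pm 1$: real, as required for a Hermitian operator, and of magnitude 1, as required for a unitary one. The corresponding eigenvectors
$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and
$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$
are orthogonal. The conservation of the scalar product stated above is written out explicitly below.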
\begin{align}
\ket{\Psi_1'} = \opU \ket{\Psi_1}&; \ket{\Psi_2'} = \opU \ket{\Psi_2} \\
\braket{\Psi_1'|\Psi_2'} = \bra{\Psi_1}\opU^\dagger&\opU\ket{\Psi_2} =\braket{\Psi_1|\Psi_2} \\
\braket{\Psi_1'|\Psi_1'} &=\braket{\Psi_1|\Psi_1}
\end{align}
The unitary transformation of an operator is
\begin{align}
\opA' = \opU\opA\opU^\dagger
\end{align}
which has the following properties
\begin{enumerate}
\item if $\opA$ is Hermitian, so is $\opA'$
\item $(\opA')^n = (\opA^n)'$
\item $\opA\ket{\alpha} = \alpha\ket{\alpha} \Rightarrow \opA'\ket{\alpha'} = \alpha\ket{\alpha'} ; \ket{\alpha'} = \opU\ket{\alpha}$
\end{enumerate}
An infinitesimal unitary operator is
\begin{align}
\opU(\epsilon) = \opOne - i\epsilon\opA; \epsilon \in \mathbb{R}; \opA = \opA^\dagger
\end{align}
A common form of unitary operators is
\begin{align}
\opU = e^{i\opA}; \opA = \opA^\dagger \\
\text{e.g.\ the translation operator } \opT(\alpha) = e^{i\alpha\frac{\opMom}{\hbar}}
\end{align}
See also: \href{https://www.youtube.com/watch?v=baIT6HaaYuQ}{Prof.M. Unitary Operators} and
\href{https://www.youtube.com/watch?v=tRWBoossG0Y&list=PL701CD168D02FF56F&index=9}{TM Lecture 4}

\chapter{Eigenvectors and Eigenvalues}
\section{Degenerate Eigenvectors}
If two eigenvectors have the same eigenvalue:
\begin{align}
\opH\ket{\lambda_1} = \lambda \ket{\lambda_1} \\
\opH\ket{\lambda_2} = \lambda \ket{\lambda_2}
\end{align}
their linear combination is an eigenvector as well:
\begin{align}
\alpha \opH\ket{\lambda_1} = \lambda \alpha \ket{\lambda_1} \\
\beta \opH\ket{\lambda_2} = \lambda \beta \ket{\lambda_2} \\
\opH[\alpha \ket{\lambda_1} + \beta \ket{\lambda_2}] = \lambda [\alpha \ket{\lambda_1} + \beta \ket{\lambda_2}]
\end{align}
Therefore it is possible to create two orthogonal eigenvectors for this eigenvalue. The probability in the degenerate case is the sum of the probabilities for each eigenvector, $| \braket{\lambda_1|\Psi}|^2 + | \braket{\lambda_2|\Psi}|^2$.

\chapter{Uncertainty}
\section{Probabilities}
When $\Psi$ is represented in a basis $u$
\begin{align}
\opA\ket{u_n} = \lambda_n\ket{u_n}\\
\ket{\Psi} = \sum_{n}c_n\ket{u_n} \\
c_n = \braket{u_n|\Psi}
\end{align}
$|c_n|^2$ is the probability to get the eigenvalue $\lambda_n$ as a result.

\section{Expectation value and RMS}
Expectation value of $\opA$ in state $\Psi$
\begin{align}
\braket{\opA}_\Psi = \bra{\Psi}\opA\ket{\Psi}
\end{align}
Root mean square deviation
\begin{align}
\Delta \opA &= \sqrt{\braket{\opSigma_A^2}_\Psi}\\
\opSigma_A &= \opA - \braket{\opA}_\Psi \\
\Delta \opA &= \sqrt{\braket{\opA^2}_\Psi - \braket{\opA}_\Psi^2}
\end{align}

\section{Uncertainty Principle}
\begin{align}
\Delta\opA\Delta\opB \geq \frac{1}{2}|\braket{[\opA,\opB]}|
\end{align}

\section{Position and Momentum}
\begin{align}
[\opPos,\opMom] &= i\hbar \\
\Delta\opPos\Delta\opMom &\geq \frac{\hbar}{2}
\end{align}

\end{document}
{ "alphanum_fraction": 0.6684150513, "avg_line_length": 32.3616236162, "ext": "tex", "hexsha": "ab1734c38f5def89896a74be412c7785990039a2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8b88d1127407af65097b236425c1360d5038c01c", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "imix/quantum-mechanics", "max_forks_repo_path": "QM-Compact.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8b88d1127407af65097b236425c1360d5038c01c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "imix/quantum-mechanics", "max_issues_repo_path": "QM-Compact.tex", "max_line_length": 188, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8b88d1127407af65097b236425c1360d5038c01c", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "imix/quantum-mechanics", "max_stars_repo_path": "QM-Compact.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-14T20:22:05.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-14T20:22:05.000Z", "num_tokens": 3362, "size": 8770 }
\documentclass{beamer} \usepackage[british]{babel} \usepackage{graphicx,hyperref,sdu,url} %% 中文 \usepackage{ctex} \usepackage{newtxmath} %% 字体 % \setCJKmainfont[ItalicFont={SimSun}]{SimSun} % \setCJKsansfont{Microsoft YaHei} % \setCJKmonofont{FangSong} % macos word % \setCJKmonofont{SimSun} % \xeCJKsetcharclass{"0}{"2E7F}{0} % \xeCJKsetcharclass{"2E80}{"FFFF}{1} % Require XeLaTeX \RequirePackage{fontspec,xltxtra,xunicode} \setmainfont[Mapping=tex-text]{Times New Roman} \setsansfont[Mapping=tex-text]{Helvetica} \setmonofont{Monaco} % The title of the presentation: % - first a short version which is visible at the bottom of each slide; % - second the full title shown on the title slide; \title[SDU 样式 Beamer]{ 山东大学Beamer样式 \LaTeX} % Optional: a subtitle to be dispalyed on the title slide \subtitle{这里是副标题} % The author(s) of the presentation: % - again first a short version to be displayed at the bottom; % - next the full list of authors, which may include contact information; \author[WANG Maomao]{ 李二花 \\\medskip {\small \url{[email protected]}} \\ {\small \url{http://www.sdu.edu.cn/}}} % The institute: % - to start the name of the university as displayed on the top of each slide % this can be adjusted such that you can also create a Dutch version % - next the institute information as displayed on the title slide \institute[SHANDONG UNIVERSITY]{ 机械工程学院 \\ % 农业与农村发展学院 \\ 山东大学} %% \today % Add a date and possibly the name of the event to the slides % - again first a short version to be shown at the bottom of each slide % - second the full date and event name for the title slide \date[May. 01 2017]{ 2017年5月1日} \begin{document} \begin{frame} \titlepage \end{frame} \begin{frame} \frametitle{提纲 Outline} \tableofcontents \end{frame} % Section titles are shown in at the top of the slides with the current section % highlighted. Note that the number of sections determines the size of the top % bar, and hence the university name and logo. If you do not add any sections % they will not be visible. \section{提纲} \begin{frame} \frametitle{介绍} \begin{itemize} \item 测试介绍 \item 请参考 \LaTeX\ 文件 \item 数学公式 $x_{100}$ \item 基于“山大红”颜色 \url{http://www.sdu.edu.cn/} \end{itemize} \end{frame} \section{背景知识} \begin{frame} \frametitle{背景消息} \begin{block}{Slides with \LaTeX} Beamer offers a lot of functions to create nice slides using \LaTeX. \end{block} \begin{block}{The basis} 内部使用以下主题 \begin{itemize} \item split \item whale \item rounded \item orchid \end{itemize} \end{block} \end{frame} \section{The important things} \begin{frame} \frametitle{The important things} \begin{enumerate} \item This just shows the effect of the style \item It is not a Beamer tutorial \item Read the Beamer manual for more help \item Contact me only concerning the style file \end{enumerate} \end{frame} \section{Analysis of the work} \begin{frame} \frametitle{Analysis of the work} This style file gives your slides some nice Radboud branding. When you know how to work with the Beamer package it is easy to use. Just add:\\ ~~~$\backslash$usepackage$\{$ruc$\}$ \\ at the top of your file. \end{frame} \section{Conclusion} \begin{frame} \frametitle{Conclusion} \begin{itemize} \item Easy to use \item Good results \end{itemize} \end{frame} \end{document}
{ "alphanum_fraction": 0.7098370198, "avg_line_length": 24.027972028, "ext": "tex", "hexsha": "6092cfe24bc6e7a74e13ca8b53e1ebd62bf2cea8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c98b3752823558b629ab10765792ac52d1c7f5e9", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "LiErhua/sdu_beamer_template", "max_forks_repo_path": "example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c98b3752823558b629ab10765792ac52d1c7f5e9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "LiErhua/sdu_beamer_template", "max_issues_repo_path": "example.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "c98b3752823558b629ab10765792ac52d1c7f5e9", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "LiErhua/sdu_beamer_template", "max_stars_repo_path": "example.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1108, "size": 3436 }
\section{Protocol Details}\label{sec:decoupled_protocol}
% Change protocol name from Dcup
We have developed a protocol for the \textsf{AmaaS} model, which we call \textsf{SCast}, as the protocol offers multicast as a service (hence Service Multicast - \textsf{SCast}). \textsf{SCast} satisfies all of the requirements specified in $\S$ \ref{sec:absaas_requirements} with the exception of S2. S2 cannot be satisfied by our protocol directly; rather, \textsf{SCast} relies on an \emph{abcast} protocol to guarantee S1, S3 and S4. Therefore, in order for S2 to be satisfied, the underlying \emph{abcast} protocol must not block in the presence of node failures.

\textsf{SCast} consists of five distinct phases, each of which is explored below. In the explanation below we assume that Infinispan is executing a 1-Phase Total Order transaction, without a second WSC phase, and that the transaction has already been successfully executed locally. Furthermore, we assume that a reliable network protocol is being utilised as the underlying communication mechanism, for example TCP\citep{Cerf:2005:PPN:1064413.1064423} or Reliable UDP\citep{ReliableUDP}. Finally, we refer to a collection of $s$-nodes providing the \emph{amcast} service as the \emph{multicast service}.

\begin{description}
\item[1. Client Request - Client] \hfill \\
Once a transaction coordinator, $Tx_i.c$, has completed its local execution of $Tx_i$, it is ready to \emph{amcast} a $prepare(Tx_i)$ message to $Tx_i.dst$ as required by the total order commit protocol. In \textsf{SCast}, \emph{amcasts} are initiated by $Tx_i.c$ unicasting an \emph{amcast} request, $req(Tx_i)$, to all $s$-nodes in the \emph{ordering service}. The request, $req(Tx_i)$, contains the contents of a transaction's $prepare(Tx_i)$ message and the addresses of $Tx_i.dst$. Each client request is associated with a unique id that consists of the $c$-node's address and a sequence number that is incremented after each request from this client.

\item[2. Receive Request - Multicast Service] \hfill \\
Upon receiving $req(Tx_i)$, each $s$-node places the request in its \emph{Abcast Request Pool} (ARP), which is a bounded queue for storing requests before they are \emph{abcast} to all $s$-nodes. If an $s$-node's ARP becomes full, subsequent requests from $c$-nodes are rejected until space becomes available in the ARP. When a $c$-node request is rejected, a \emph{reject} response is sent to $Tx_i.c$. If $Tx_i.c$ receives a \emph{reject} response from all $s$-nodes, then it can either abort $Tx_i$ or resend the \emph{amcast} request after a configurable amount of time. The ARP is necessary to ensure that if the \emph{ordering service} starts to become overloaded by client requests, there is a 'feedback' mechanism that makes clients aware of the service's current limitations, allowing clients to restrict user operations if necessary. Utilising an ARP is also essential for providing message bundling, which, as described in \ref{ssec:abaas_optimisations}, is an effective optimisation for improving the throughput of the \emph{multicast service}.

\item[3. Process ARP - Multicast Service] \hfill \\
A single thread, called the \emph{send} thread, is utilised for retrieving requests from the ARP and \emph{abcast}ing them to all $s$-nodes for ordering. The \emph{send} thread retrieves ordering requests from the ARP in their arrival order, and bundles them into a single message bundle $mb$, before \emph{abcast}ing $mb$ to all $s$-nodes.
If the ARP is empty, then the \emph{send} thread waits for the ARP to become non-empty before resuming \emph{abcast}ing. A configurable upper limit is placed on the maximum size (number of messages or \emph{bytes}) of a bundle message. If this upper limit is reached and the ARP still has available requests, then the \emph{send} thread will \emph{abcast} the next message bundle $mb'$ once $mb$ has been \emph{abcast}. If message bundling is not enabled, then an upper limit of one message is set for all bundles. All \emph{abcast} bundles $mb$ sent by an $s$-node have an originator field that is set to the sending node's address $N_s$, i.e. $mb.o = N_s$; this is necessary for the next phase of the protocol.

\item[4. Process Requests and Multicast - Multicast Service] \hfill \\
When an $s$-node, $N_s$, receives a request bundle $mb$, it 'un-bundles' $mb$ and processes each ordering request $req(Tx)$ in the order that they arrived in the ARP at $mb.o$. If $N_s$ has already received $req(Tx)$ in a previous \emph{abcast} message, it discards the request and takes no further action. It is possible to discard a repeat request, as we know that all other $s$-nodes have handled, or will eventually handle, the same copy of the request as $N_s$, due to the guarantees provided by \emph{abcast}.

Each accepted $req(Tx)$ is associated with a global timestamp $ts$: $req(Tx_i).ts = mb.ts \oplus mb.o \oplus$ \emph{sequence number} of $req(Tx_i)$ within the bundle, where $\oplus$ is the append operator and $mb.ts$ is the final timestamp provided by the underlying \emph{abcast} protocol utilised between $s$-nodes. The $s$-node whose copy of $req(Tx_i)$ was first received by the $s$-nodes, and thus accepted, is responsible for multicasting a response message, $Rsp(Tx_i)$, containing the transaction and associated ordering data to all $Tx.dst$. Delegating the multicasting of requests in this manner prevents $Rsp(Tx_i)$ from being multicast by all $s$-nodes.

In addition to the actual transaction, a multicast response $Rsp(Tx_i)$ consists of two types of ordering data: the $ts$ agreed by the $s$-nodes for $req(Tx_i)$, and $req(Tx_i)$'s \emph{immediate} predecessor data. The latter is the identity of the $Rsp(Tx_j)$ whose delivery at the specified $c$-node must \emph{immediately} precede the delivery of $Rsp(Tx_i)$. More precisely, all $d \in Rsp(Tx_i).dst$ must not deliver $Rsp(Tx_i)$ until they have delivered $Rsp(Tx_j)$, and only $Rsp(Tx_i)$ can be delivered immediately after $Rsp(Tx_j)$.

The storage of \emph{immediate} predecessor data works as follows: all $s$-nodes maintain a map that stores a transaction history by mapping a $c$-node address to the id of the last transaction it was associated with, hence its \emph{immediate} predecessor. So for each $req(Tx)$, the associated $req(Tx).ts$ is stored in the map for each $d \in Tx.dst$. When an $s$-node receives an \emph{abcast} bundle $mb$, it knows that all other $s$-nodes have received, or will receive, $mb$ in the same order. Therefore, when $mb$ is processed by an $s$-node, it is guaranteed that all other $s$-nodes will have processed $mb$ in the exact same order; hence we know that the transaction history will be consistent across all $s$-nodes.

Note that the immediate predecessor of $Rsp(Tx_i)$ is applicable to \emph{all} \emph{amcast}s directed at a given $d$ - not just those that originate from $Tx_i.c$ nor just those that are handled by one $s$-node. Thus, it is specific to each $d \in Tx_i.dst$ and ensures that delivery at every $d$ is per the finalized $Rsp(Tx).ts$.
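A minimal sketch of this bookkeeping is given below; it is purely illustrative, and the record types and field names are invented rather than taken from \textsf{SCast}'s implementation. The concrete scenario that follows then walks through the same rule.

\begin{verbatim}
from collections import namedtuple

# Illustrative record types; the field names are assumptions.
Request  = namedtuple("Request",  "id tx dst")
Bundle   = namedtuple("Bundle",   "ts origin requests")
Response = namedtuple("Response", "tx ts predecessors dst")

last_ts = {}     # c-node address -> timestamp of its previous transaction
seen    = set()  # ids of client requests already accepted by the service

def process_bundle(mb, my_address):
    # Runs identically on every s-node, because abcast delivers the
    # bundles in the same order everywhere.
    responses = []
    for seq, req in enumerate(mb.requests):      # ARP arrival order at mb.o
        if req.id in seen:
            continue                             # duplicate copy, discard
        seen.add(req.id)
        ts = (mb.ts, mb.origin, seq)             # global timestamp
        preds = {d: last_ts.get(d) for d in req.dst}
        for d in req.dst:
            last_ts[d] = ts
        if mb.origin == my_address:              # delegated multicaster
            responses.append(Response(req.tx, ts, preds, req.dst))
    return responses                             # to be multicast to req.dst
\end{verbatim}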
To illustrate this, let $Tx_i.c$ send $req(Tx_i)$ to the \emph{multicast service}, with $N_s$'s \emph{abcast} copy being accepted , $Tx_j.c$ then sends $req(Tx_j)$ and $N_{s'}$'s copy is accepted by the service. Assume $d \in Tx_i.dst \cap Tx_j.dst$ and the \emph{multicast service} orders $req(Tx_j)$ before $req(Tx_i)$, if $d$ receives $Rsp(Tx_i)$ before $Rsp(Tx_j)$ it will not deliver $Rsp(Tx_i)$ until it has delivered $Rsp(Tx_j)$. \item[5. Receive Multicast - Client] \hfill \\ Upon receiving $Rsp(Tx_i)$, a $c$-node, $c$, will check the \emph{immediate} predecessor data that is applicable to $c$, in this case $Rsp(Tx_j)$. If $Rsp(Tx_j)$ has been received by $c$ then $Rsp(Tx_i)$ can be delivered by $c$ and the $prepare(Tx_i)$ operation is executed. However, if $Rsp(Tx_j)$ has not yet been received by $c$ then $Rsp(Tx_i)$ cannot be delivered locally, and $c$ must wait to receive $Rsp(Tx_j)$ before delivering $Rsp(Tx_i)$. A single $ts$ is provided for each $d \in Tx.dst$ in the predecessor data, opposed to a list of past timestamps for each $d$, in order to reduce the size of each $Rsp(Tx)$. This results in a cascading wait occurring if multiple messages have not yet been received by $c$. For example, if $c$ has received $Rsp(Tx_i)$ but has not received its predecessors $Rsp(Tx_j)$ and $Rsp(Tx_k)$, $c$ is only aware of $m.j$, however when $Rsp(Tx_j)$ arrives, it reads $Rsp(Tx_j)$'s predecessor data and becomes aware that it has not yet received $Rsp(Tx_k)$ and must therefore wait for $Rsp(Tx_k)$ before delivering $Rsp(Tx_j)$ and $Rsp(Tx_i)$. \end{description} \subsection{Fault-Tolerance} Fault-tolerance in \textsf{SCast} must consider the consequences of both crashed $c$-nodes and $s$-nodes. Here we explore the consequences of both $c$-node and $s$-node crashes during various stages of a \textsf{SCast} \emph{amcast}. For the sake of simplicity, we only consider node crashes from the perspective of a single transaction, however it is worth noting that each $c$-node would typically have multiple transactions executing concurrently. \paragraph{Client Node Crash} \begin{description} \item[\emph{Local Tx Execution}] \hfill \\ If a $c$-node, $Tx.c$, crashes during or directly after the local execution of $Tx_j$, then no action needs to be taken for this $Tx$, as no interactions with other nodes has occurred. \item[\emph{Phase 1}] \hfill \\ If $Tx.c$ crashes during the unicasting of a request to the \emph{multicast service}, then three scenarios are possible: \begin{itemize} \item No $s$-nodes receive the request, in which case the multicast will never complete and no further actions are required. \item Not all of the $s$-nodes receive the original request, in which case the nodes who do receive the request will execute as normal. All $s$-nodes will eventually receive details of the transaction, as the other $s$-nodes \emph{abcast} their copy of the request between service members, and the transaction will be multicast as normal. \item Only one $s$-node receives a copy of the request and that $s$-node also crashes, in which case no further action is required as no other $s$-node is aware of the request. \end{itemize} \item[\emph{Phase 2-5}] \hfill \\ Finally, its possible for $Tx.c$ or any other $d \in Tx.dst$ to crash after the \emph{multicast service} has received the original request, in which case the service will continue to process the request as normal and multicast the message to all of the operative destinations. 
\end{description}

\paragraph{Service Node Crash}
\begin{description}
\item[\emph{Phase 1}] \hfill \\
If an $s$-node crashes while $Tx.c$ is issuing a service request, then the \emph{amcast} can still succeed, as the client request is unicast to all $s$-nodes; therefore one of the remaining $s$-nodes will handle the request.

\item[\emph{Phase 2-3}] \hfill \\
If an $s$-node crashes after receiving a request $req(Tx_i)$, then another $s$-node will eventually receive $req(Tx_i)$, as $Tx.c$ unicasts the request to all $s$-nodes.

\item[\emph{Phase 4}] \hfill \\
It is possible for an $s$-node, $N_s$, to crash just after it has been designated as the multicasting $s$-node for $Rsp(Tx_i)$. In this case, it is necessary for the remaining $s$-nodes to take responsibility for multicasting $Rsp(Tx_i)$, to ensure that all $Tx_i.dst$ receive the transaction. The remaining $s$-nodes can determine which requests still require multicasting if meta data is piggybacked onto each \emph{abcast} sent between $s$-nodes. For example, if each operative $s$-node:
\begin{enumerate}
\item piggybacks the timestamp of the latest request it has successfully responded to, where success is defined as the $Rsp$ message being multicast to all destinations; and
\item maintains a recent history of accepted client requests, storing the transaction as well as the address of the $s$-node whose \emph{abcast} request was accepted by the service,
\end{enumerate}
then, when an $s$-node crash is detected, the remaining $s$-nodes iterate through their recent history of client requests, starting at the timestamp of the last confirmed multicast completed by the crashed $s$-node. Each subsequent client request that was the crashed $s$-node's responsibility to respond to, and which has not been completed, is then handled by an operative $s$-node. This will potentially cause multiple $s$-nodes to multicast the same $Rsp(Tx_i)$ message; however, $c$-nodes can simply discard any duplicate transmissions that are received from the \emph{multicast service}.

\item[\emph{Phase 5}] \hfill \\
An $s$-node crash at this stage of the protocol has no effect on the outcome of the \emph{amcast}, as the $Rsp$ message has already been multicast to all destinations.
\end{description}
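To make the delivery rule of Phase 5 concrete, the sketch below shows how a $c$-node could gate delivery on the \emph{immediate} predecessor data; it is purely illustrative, and the names are invented rather than taken from \textsf{SCast}'s implementation.

\begin{verbatim}
delivered = set()   # timestamps of responses already delivered at this c-node
pending   = {}      # predecessor timestamp -> responses waiting for it

def on_receive(rsp, my_address, deliver):
    # rsp carries rsp.ts, rsp.tx and rsp.predecessors[my_address].
    if rsp.ts in delivered:
        return                                   # duplicate transmission
    pred = rsp.predecessors[my_address]
    if pred is None or pred in delivered:
        deliver(rsp.tx)                          # execute prepare(Tx) locally
        delivered.add(rsp.ts)
        # cascading wait: anything blocked on rsp can now be re-checked
        for waiting in pending.pop(rsp.ts, []):
            on_receive(waiting, my_address, deliver)
    else:
        pending.setdefault(pred, []).append(rsp)
\end{verbatim}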
{ "alphanum_fraction": 0.7362754019, "avg_line_length": 160.8292682927, "ext": "tex", "hexsha": "8505167f4b36dab5fe282d5234a44375cdb3780f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1f6c26c04032ec95ed6ce4930bebe72677eb00b8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ryanemerson/Thesis", "max_forks_repo_path": "Appendix1/appendix1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1f6c26c04032ec95ed6ce4930bebe72677eb00b8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ryanemerson/Thesis", "max_issues_repo_path": "Appendix1/appendix1.tex", "max_line_length": 828, "max_stars_count": null, "max_stars_repo_head_hexsha": "1f6c26c04032ec95ed6ce4930bebe72677eb00b8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ryanemerson/Thesis", "max_stars_repo_path": "Appendix1/appendix1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3633, "size": 13188 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{article} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{iftex} \ifPDFTeX \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Using pandoc-ling}, pdfauthor={Michael Cysouw}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} \usepackage{linguex} \renewcommand{\theExLBr}{} \renewcommand{\theExRBr}{} \newcommand{\jdg}[1]{\makebox[0.4em][r]{\normalfont#1\ignorespaces}} \usepackage{chngcntr} \counterwithin{ExNo}{section} \renewcommand{\Exarabic}{\thesection.\arabic} \ifLuaTeX \usepackage{selnolig} % disable illegal ligatures \fi \title{Using pandoc-ling} \author{Michael Cysouw} \date{} \begin{document} \maketitle { \setcounter{tocdepth}{3} \tableofcontents } \hypertarget{pandoc-ling}{% \section{pandoc-ling}\label{pandoc-ling}} \emph{Michael Cysouw} \textless{}\href{mailto:[email protected]}{\nolinkurl{[email protected]}}\textgreater{} A Pandoc filter for linguistic examples tl;dr \begin{itemize} \tightlist \item Easily write linguistic examples including basic interlinear glossing. \item Let numbering and cross-referencing be done for you. \item Export to (almost) any format of your wishes for final polishing. \item As an example, check out this readme in \href{https://cysouw.github.io/pandoc-ling/readme.html}{HTML} or \href{https://cysouw.github.io/pandoc-ling/readme_gb4e.pdf}{Latex}. \end{itemize} \hypertarget{rationale}{% \section{Rationale}\label{rationale}} In the field of linguistics there is an outspoken tradition to format example sentences in research papers in a very specific way. In the field, it is a perennial problem to get such example sentences to look just right. 
Within Latex, there are numerous packages to deal with this problem (e.g.~covington, linguex, gb4e, expex, etc.). Depending on your needs, there is some Latex solution for almost everyone. However, these solutions in Latex are often cumbersome to type, and they are not portable to other formats. Specifically, transfer between latex, html, docx, odt or epub would actually be highly desirable. Such transfer is the hallmark of \href{https://pandoc.org}{Pandoc}, a tool by John MacFarlane that provides conversion between these (and many more) formats. Any such conversion between text-formats naturally never works perfectly: every text-format has specific features that are not transferable to other formats. A central goal of Pandoc (at least in my interpretation) is to define a set of shared concepts for text-structure (a `common denominator' if you will, but surely not `least'!) that can then be mapped to other formats. In many ways, Pandoc tries (again) to define a set of logical concepts for text structure (`semantic markup'), which can then be formatted by your favourite typesetter. As long as you stay inside the realm of this `common denominator' (in practice that means Pandoc's extended version of Markdown/CommonMark), conversion works reasonably well (think 90\%-plus). Building on John Gruber's \href{https://daringfireball.net/projects/markdown/syntax}{Markdown philosophy}, there is a strong urge here to learn to restrain oneself while writing, and try to restrict the number of layout-possibilities to a minimum. In this sense, with \texttt{pandoc-ling} I propose a Markdown-structure for linguistic examples that is simple, easy to type, easy to read, and portable through the Pandoc universe by way of an extension mechanism of Pandoc, called a `Pandoc Lua Filter'. This extension will not magically allow you to write every linguistic example thinkable, but my guess is that in practice the present proposal covers the majority of situations in linguistic publications (think 90\%-plus). As an example (and test case) I have included automatic conversions into various formats in this repository (chech them out in the directory \texttt{tests} to get an idea of the strengths and weaknesses of the current implementation). \hypertarget{the-basic-structure-of-a-linguistic-example}{% \section{The basic structure of a linguistic example}\label{the-basic-structure-of-a-linguistic-example}} Basically, a linguistic example consists of 6 possible building blocks, of which only the number and at least one example line are necessary. The space between the building blocks is kept as minimal as possible without becoming cramped. When (optional) building blocks are not included, then the other blocks shift left and up (only exception: a preamble without labels is not shifted left completely, but left-aligned with the example, not with the judgement). \begin{itemize} \tightlist \item \textbf{Number}: Running tally of all examples in the work, possibly restarting at chapters or other major headings. Typically between round brackets, possibly with a chapter number added before in long works, e.g.~example (7.26). Aligned top-left, typically left-aligned to main text margin. \item \textbf{Preamble}: Optional information about the content/kind of example. Aligned top-left: to the top with the number, to the left with the (optional) label. When there is no label, then preamble is aligned with the example, not with the judgment. \item \textbf{Label}: Indices for sub-examples. 
Only present when more than one example is grouped together inside one
numbered entity. Typically these sub-example labels use latin letters
followed by a full stop. They are left-aligned with the preamble, and each
label is top-aligned with the top-line of the corresponding example
(important for longer line-wrapped examples).
\item
  \textbf{Judgment}: Examples can optionally have grammaticality judgments,
  typically symbols like *, ?, or !, sometimes in superscript relative to
  the corresponding example. Judgments are right-aligned to each other,
  typically with only minimal space to the left-aligned examples.
\item
  \textbf{Line example}: A minimal linguistic example has at least one line
  example, i.e.~an utterance of interest. Building blocks in general shift
  left and up when other (optional) building blocks are not present.
  Minimally, this results in a number with one line example.
\item
  \textbf{Interlinear example}: A complex structure typically used for
  examples from languages unknown to most readers. Consists of three or
  four lines that are left-aligned:

  \begin{itemize}
  \tightlist
  \item
    \textbf{Header}: An optional header is typically used to display
    information about the language of the example, including literature
    references. When not present, then all other lines from the interlinear
    example shift upwards.
  \item
    \textbf{Source}: The actual language utterance, often typeset in
    italics. This line is internally separated at spaces, and each
    sub-block is left-aligned with the corresponding sub-blocks of the
    gloss.
  \item
    \textbf{Gloss}: Explanation of the meaning of the source, often using
    abbreviations in small caps. This line is internally separated at
    spaces, and each block is left-aligned with the corresponding block
    from the source.
  \item
    \textbf{Translation}: Free translation of the source, typically quoted.
    Not separated in blocks, but freely extending to the right.
    Left-aligned with the other lines from the interlinear example.
  \end{itemize}
\end{itemize}

\begin{figure}
\centering
\includegraphics{figure/ExampleStructure.png}
\caption{The structure of a linguistic example.}
\end{figure}

There are of course many more possibilities to extend the structure of a
linguistic example, like third or fourth subdivisions of labels (often
using small roman numerals as a third level) or multiple glossing lines in
the interlinear example. Also, the content of the header is sometimes found
right-aligned to the right of the interlinear example (language info to the
top, reference to the bottom). All such options are currently not supported
by \texttt{pandoc-ling}.

Under the hood, this structure is prepared by \texttt{pandoc-ling} as a
table. Tables are reasonably well transcoded to different document formats.
Specific layout considerations mostly have to be set manually. Alignment of
the text should work in most exports. Some \texttt{CSS} styling is proposed
by \texttt{pandoc-ling}, but can of course be overruled. For latex (and
beamer) special output is prepared using various available latex packages
(see options, below).

\hypertarget{introducing-pandoc-ling}{%
\section{\texorpdfstring{Introducing
\texttt{pandoc-ling}}{Introducing pandoc-ling}}\label{introducing-pandoc-ling}}

\hypertarget{editing-linguistic-examples}{%
\subsection{Editing linguistic examples}\label{editing-linguistic-examples}}

To include a linguistic example in Markdown \texttt{pandoc-ling} uses the
\texttt{div} structure, which is indicated in Pandoc-Markdown by typing
three colons at the start and three colons at the end.
To indicate the \texttt{class} of this \texttt{div} the letters `ex' (for `example') should be added after the top colons (with or without space in between). This `ex'-class is the signal for \texttt{pandoc-ling} to start processing such a \texttt{div}. The numbering of these examples will be inserted by \texttt{pandoc-ling}. Empty lines can be added inside the \texttt{div} for visual pleasure, as they mostly do not have an influence on the output. Exception: do \emph{not} use empty lines between unlabelled line examples. Multiple lines of text can be used (without empty lines in between), but they will simply be interpreted as one sequential paragraph. \begin{verbatim} ::: ex This is the most basic structure of a linguistic example. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.1} This is the most basic structure of a linguistic example. \end{samepage} Alternatively, the \texttt{class} can be put in curled brackets (and then a leading full stop is necessary before \texttt{ex}). Inside these brackets more attributes can be added (separated by space), for example an id, using a hash, or any attribute=value pairs that should apply to this example. Currently there is only one real attribute implemented (\texttt{formatGloss}), but in principle it is possible to add more attributes that can be used to fine-tune the typesetting of the example (see below for a description of such \texttt{local\ options}). \begin{verbatim} ::: {#id .ex formatGloss=false} This is a multi-line example. But that does not mean anything for the result All these lines are simply treated as one paragraph. They will become one example with one number. ::: \end{verbatim} \begin{samepage} \ex. \label{id} This is a multi-line example. But that does not mean anything for the result All these lines are simply treated as one paragraph. They will become one example with one number. \end{samepage} A preamble can be added by inserting an empty line between preamble and example. The same considerations about multiple text-lines apply. \begin{verbatim} :::ex Preamble This is an example with a preamble. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.3} Preamble\\ This is an example with a preamble. \end{samepage} Sub-examples with labels are entered by starting each sub-example with a small latin letter and a full stop. Empty lines between labels are allowed. Subsequent lines without labels are treated as one paragraph. Empty lines \emph{not} followed by a label with a full stop will result in errors. \begin{verbatim} :::ex a. This is the first example. b. This is the second. a. The actual letters are not important, `pandoc-ling` will put them in order. e. Empty lines are allowed between labelled lines Subsequent lines are again treated as one sequential paragraph. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.4} \a. This is the first example. \b. This is the second. \b. The actual letters are not important, \texttt{pandoc-ling} will put them in order. \b. Empty lines are allowed between labelled lines Subsequent lines are again treated as one sequential paragraph. \end{samepage} A labelled list can be combined with a preamble. \begin{verbatim} :::ex Any nice description here a. one example sentence. b. two c. three ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.5} Any nice description here \a. one example sentence. \b. two \b. 
three \end{samepage} Grammaticality judgements should be added before an example, and after an optional label, separated from both by spaces (though four spaces in a row should be avoided, that could lead to layout errors). To indicate that any sequence of symbols is a judgements, prepend the judgement with a caret \texttt{\^{}}. Alignment will be figured out by \texttt{pandoc-ling}. \begin{verbatim} :::ex Throwing in a preamble for good measure a. ^* This traditionally signals ungrammaticality. b. ^? Question-marks indicate questionable grammaticality. c. ^^whynot?^ But in principle any sequence can be used (here even in superscript). d. However, such long sequences sometimes lead to undesirable effects in the layout. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.6} Throwing in a preamble for good measure \a. *This traditionally signals ungrammaticality. \b. ?Question-marks indicate questionable grammaticality. \b. \textsuperscript{whynot?}But in principle any sequence can be used (here even in superscript). \b. However, such long sequences sometimes lead to undesirable effects in the layout. \end{samepage} A minor detail is the alignment of a single example with a preamble and grammaticality judgements. In this case it looks better for the preamble to be left aligned with the example and not with the judgement. \begin{verbatim} :::ex Here is a special case with a preamble ^^???^ With a singly questionably example. Note the alignment! Especially with this very long example that should go over various lines in the output. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.7} Here is a special case with a preamble\\ \textsuperscript{???}With a singly questionably example. Note the alignment! Especially with this very long example that should go over various lines in the output. \end{samepage} For the lazy writers among us, it is also possible to use a simple bullet list instead of a labelled list. Note that the listed elements will still be formatted as a labelled list. \begin{verbatim} :::ex - This is a lazy example. - ^# It should return letters at the start just as before. - ^% Also testing some unusual judgements. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.8} \a. This is a lazy example. \b. \#It should return letters at the start just as before. \b. \%Also testing some unusual judgements. \end{samepage} Just for testing: a single example with a judgement (which resulted in an error in earlier versions). \begin{verbatim} ::: ex ^* This traditionally signals ungrammaticality. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.9} *This traditionally signals ungrammaticality. \end{samepage} \hypertarget{interlinear-examples}{% \subsection{Interlinear examples}\label{interlinear-examples}} For interlinear examples with aligned source and gloss, the structure of a \texttt{lineblock} is used, starting the lines with a vertical line \texttt{\textbar{}}. There should always be four vertical lines (for header, source, gloss and translation, respectively), although the content after the first vertical line can be empty. The source and gloss lines are separated at spaces, and all parts are right-aligned. If you want to have a space that is not separated, you will have to `protect' the space, either by putting a backslash before the space, or by inserting a non-breaking space instead of a normal space (either type \texttt{\&nbsp;} or insert an actual non-breaking space, i.e.~unicode character \texttt{U+00A0}). 
\begin{verbatim} :::ex | Dutch (Germanic) | Deze zin is in het nederlands. | DEM sentence AUX in DET dutch. | This sentence is dutch. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.10} Dutch (Germanic) \gll Deze zin is in het nederlands. \\ DEM sentence AUX in DET dutch. \\ \glt This sentence is dutch. \end{samepage} An attempt is made to format interlinear examples when the option \texttt{formatGloss=true} is added. This will: \begin{itemize} \tightlist \item remove formatting from the source and set everything in italics, \item remove formatting from the gloss and set sequences (\textgreater1) of capitals and numbers into small caps (note that the positioning of small caps on web pages is \href{https://iamvdo.me/en/blog/css-font-metrics-line-height-and-vertical-align}{highly complex}), \item a tilde \texttt{\textasciitilde{}} between spaces in the gloss is treated as a shortcut for an empty gloss (internally, the sequence \texttt{space-tilde-space} is replaced by \texttt{space-space-nonBreakingSpace-space-space}), \item consistently put translations in single quotes, possibly removing other quotes. \end{itemize} \begin{verbatim} ::: {.ex formatGloss=true} | Dutch (Germanic) | Deze zin is in het nederlands. | DEM sentence AUX in DET dutch. | This sentence is dutch. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.11} Dutch (Germanic) \gll \emph{Deze} \emph{zin} \emph{is} \emph{in} \emph{het} \emph{nederlands.} \\ \textsc{dem} sentence \textsc{aux} in \textsc{det} dutch. \\ \glt `This sentence is dutch.' \end{samepage} The results of such formatting will not always work, but it seems to be quite robust in my testing. The next example brings everything together: \begin{itemize} \tightlist \item a preamble, \item labels, both for single lines and for interlinear examples, \item interlinear examples start on a new line immediately after the letter-label, \item grammaticality judgements with proper alignment, \item when the header of an interlinear example is left out, everything is shifted up, \item The formatting of the interlinear is harmonised. \end{itemize} \begin{verbatim} ::: {.ex formatGloss=true} Completely superfluous preamble, but it works ... a. Mixing single line examples with interlinear examples. a. This is of course highly unusal. Just for this example, let's add some extra material in this example. a. | Dutch (Germanic) Note the grammaticality judgement! | ^^:–)^ Deze zin is (dit\ is&nbsp;test) nederlands. | DEM sentence AUX ~ dutch. | This sentence is dutch. b. | | Deze tweede zin heeft geen header. | DEM second sentence have.3SG.PRES no header. | This second sentence does not have a header. ::: \end{verbatim} \begin{samepage} \ex. \label{ex4.12} Completely superfluous preamble, but it works \ldots{} \a. Mixing single line examples with interlinear examples. \b. This is of course highly unusal. Just for this example, let's add some extra material in this example. \b. Dutch (Germanic) Note the grammaticality judgement! \gll \textsuperscript{:--)}\emph{Deze} \emph{zin} \emph{is} \emph{(dit~is~test)} \emph{nederlands.} \\ \textsc{dem} sentence \textsc{aux} ~ dutch. \\ \glt `This sentence is dutch.' \b. \gll \emph{Deze} \emph{tweede} \emph{zin} \emph{heeft} \emph{geen} \emph{header.} \\ \textsc{dem} second sentence have.\textsc{3sg}.\textsc{pres} no header. \\ \glt `This second sentence does not have a header.' 
\end{samepage} \hypertarget{cross-referencing-examples}{% \subsection{Cross-referencing examples}\label{cross-referencing-examples}} The examples are automatically numbered by \texttt{pandoc-ling}. Cross-references to examples inside a document can be made by using the \texttt{{[}@ID{]}} format (used by Pandoc for citations). When an example has an explicit identifier (like \texttt{\#test} in the next example), then a reference can be made to this example with \texttt{{[}@test{]}}, leading to (\ref{test}) when formatted (note that the formatting does not work on the github website. Please check the `tests' subdirectory). \begin{verbatim} ::: {#test .ex} This is a test ::: \end{verbatim} \begin{samepage} \ex. \label{test} This is a test \end{samepage} Inspired by the \texttt{linguex}-approach, you can also use the keywords \texttt{next} or \texttt{last} to refer to the next or the last example, e.g.~\texttt{{[}@last{]}} will be formatted as (\ref{test}). By doubling the first letters to \texttt{nnext} or \texttt{llast} reference to the next/last-but-one can be made. Actually, the number of starting letters can be repeated at will in \texttt{pandoc-ling}, so something like \texttt{{[}@llllllllast{]}} will also work. It will be formatted as (\ref{ex4.6}) after the processing of \texttt{pandoc-ling}. Needless to say that in such a situation an explicit identifier would be a better choice. Referring to sub-examples can be done by manually adding a suffix into the cross reference, simply separated from the identifier by a space. For example, \texttt{{[}@lllast~c{]}} will refer to the third sub-example of the last-but-two example. Formatted this will look like this: (\ref{ex4.11}\,c), smile! However, note that the ``c'' has to be manually determined. It is simply a literal suffix that will be copied into the cross-reference. Something like \texttt{{[}@last\ hA1l0{]}} will work also, leading to (\ref{test}\,hA1l0) when formatted (which is of course nonsensical). For exports that include attributes (like html), the examples have an explicit id of the form \texttt{exNUMBER} in which \texttt{NUMBER} is the actual number as given in the formatted output. This means that it is possible to refer to an example on any web-page by using the hash-mechanism to refer to a part of the web-page. For example \texttt{\#ex4.7} at can be used to refer to the seventh example in the html-output of this readme (try \href{https://cysouw.github.io/pandoc-ling/readme.html\#ex4.7}{this link}). The id in this example has a chapter number `4' because in the html conversion I have set the option \texttt{addChapterNumber} to \texttt{true}. (Note: when numbers restart the count in each chapter with the option \texttt{restartAtChapter}, then the id is of the form \texttt{exCHAPTER.NUMBER}. This is necessary to resolve clashing ids, as the same number might then be used in different chapters.) I propose to use these ids also to refer to examples in citations when writing scholarly papers, e.g.~(Cysouw 2021: \#ex7), independent of whether the links actually resolve. In principle, such citations could easily be resolved when online publications are properly prepared. 
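To illustrate the relative-keyword mechanism described above: the number of
repeated initial letters simply determines how far back (or forward) a
reference points. The sketch below shows the idea in Python purely for
illustration; \texttt{pandoc-ling} itself is implemented as a Lua filter,
and the exact indexing convention used here is an assumption.

\begin{verbatim}
# Hypothetical sketch of the relative-keyword logic (not the actual filter code).
def resolve_relative_ref(keyword, current_index):
    """Map 'last'/'llast'/... or 'next'/'nnext'/... to an example index."""
    if keyword.endswith("last"):
        offset = len(keyword) - len("last") + 1   # "last" -> 1, "llast" -> 2, ...
        return current_index - offset
    if keyword.endswith("next"):
        offset = len(keyword) - len("next") + 1   # "next" -> 1, "nnext" -> 2, ...
        return current_index + offset
    return None  # anything else is treated as an explicit identifier

# A citation placed after the 10th example: "llast" points two examples back.
print(resolve_relative_ref("llast", 10))  # 8
\end{verbatim}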
The same proposal could also work for other parts of research papers, for
example using tags like \texttt{\#sec,\ \#fig,\ \#tab,\ \#eq} (see the
Pandoc filter
\href{https://github.com/cysouw/crossref-adapt}{\texttt{crossref-adapt}}).
To refer to paragraphs (which should replace page numbers in a future of
adaptive design), I propose to use no tag, but directly add the number to
the hash (see the Pandoc filter
\href{https://github.com/cysouw/count-para}{\texttt{count-para}} for a
practical mechanism to add such numbering).

\hypertarget{options-of-pandoc-ling}{%
\subsection{\texorpdfstring{Options of
\texttt{pandoc-ling}}{Options of pandoc-ling}}\label{options-of-pandoc-ling}}

\hypertarget{global-options}{%
\subsubsection{Global options}\label{global-options}}

The following global options are available with \texttt{pandoc-ling}. These
can be added to the
\href{https://pandoc.org/MANUAL.html\#metadata-blocks}{Pandoc metadata}. An
example of such metadata can be found at the bottom of this \texttt{readme}
in the form of a YAML-block. Pandoc allows for various methods to provide
metadata (see the link above).

\begin{itemize}
\tightlist
\item
  \textbf{\texttt{formatGloss}} (boolean, default \texttt{false}): should
  all interlinear examples be consistently formatted? If you use this
  option, you can simply use capital letters for abbreviations in the
  gloss, and they will be changed to small caps. The source line is set to
  italics, and the translation is put into single quotes.
\item
  \textbf{\texttt{xrefSuffixSep}} (string, defaults to no-break-space):
  When cross references have a suffix, how should the separator be
  formatted? The default `no-break-space' is a safe option. I personally
  like a `narrow no-break space' better (Unicode \texttt{U+202F}), but this
  symbol does not work with all fonts, and might thus lead to errors. For
  Latex typesetting, all space-like symbols are converted to a Latex thin
  space \texttt{\textbackslash{},}.
\item
  \textbf{\texttt{restartAtChapter}} (boolean, default \texttt{false}):
  should the counting restart for each chapter?

  \begin{itemize}
  \tightlist
  \item
    Actually, when \texttt{true} this setting will restart the counting at
    the highest heading level, which for various output formats can be set
    by the Pandoc option \texttt{top-level-division}.
  \item
    The id of each example will now be of the form
    \texttt{exCHAPTER.NUMBER} to resolve any clashes when the same number
    appears in different chapters.
  \item
    Depending on your Latex setup, an explicit entry
    \texttt{top-level-division:\ chapter} might be necessary in your
    metadata.
  \end{itemize}
\item
  \textbf{\texttt{addChapterNumber}} (boolean, default \texttt{false}):
  should the chapter (= highest heading level) number be added to the
  number of the example? When setting this to \texttt{true}, any setting of
  \texttt{restartAtChapter} will be ignored. In most Latex situations this
  only works in combination with a \texttt{documentclass:\ book}.
\item
  \textbf{\texttt{latexPackage}} (one of: \texttt{linguex}, \texttt{gb4e},
  \texttt{langsci-gb4e}, \texttt{expex}, default \texttt{linguex}): Various
  options for converting examples to Latex packages that typeset linguistic
  examples. None of the conversions works perfectly, though it should work
  in most normal situations (think 90\%-plus). It might be necessary to
  first convert to \texttt{Latex}, correct the output, and then typeset
  separately with a latex compiler like \texttt{xelatex}. Using the direct
  option inside Pandoc might also work in many situations.
Export to \textbf{\texttt{beamer}} seems to work reasonably well with the
  \texttt{gb4e} package. All others have artefacts or errors.
\end{itemize}

\hypertarget{local-options}{%
\subsubsection{Local options}\label{local-options}}

Local options are options that can be set for each individual example. The
\texttt{formatGloss} option can be used to have an individual example be
formatted differently from the global setting. For example, when the global
setting is \texttt{formatGloss:\ true} in the metadata, then adding
\texttt{formatGloss=false} in the curly brackets of a specific example will
block the formatting. This is especially useful when the automatic
formatting does not give the desired result.

If you want to add something else (not a linguistic example) in a numbered
example, then there is the local option \texttt{noFormat=true}. An attempt
will be made to produce a reasonable layout. Multiple paragraphs will simply
be taken as is, and the number will be put in front. In HTML the number will
be centred. It is usable for an incidental mathematical formula.

\begin{verbatim}
::: {.ex noFormat=true}
$$\sum_{i=1}^{n}{i}=\frac{n^2+n}{2}$$
:::
\end{verbatim}

\begin{samepage}
\ex. \label{ex4.14} \[\sum_{i=1}^{n}{i}=\frac{n^2+n}{2}\]\\
\end{samepage}

\hypertarget{issues-with-pandoc-ling}{%
\subsection{\texorpdfstring{Issues with
\texttt{pandoc-ling}}{Issues with pandoc-ling}}\label{issues-with-pandoc-ling}}

\begin{itemize}
\tightlist
\item
  Manually provided identifiers for examples should not be purely numerical
  (so do not use e.g.~\texttt{\#5789}). In some situations this interferes
  with the setting of the cross-references.
\item
  Because the cross-references use the same structure as citations in
  Pandoc, the processing of citations (by \texttt{citeproc}) should be
  performed \textbf{after} the processing by \texttt{pandoc-ling}. Another
  Pandoc filter,
  \href{https://github.com/lierdakil/pandoc-crossref}{\texttt{pandoc-crossref}},
  for numbering figures and other captions, also uses the same system.
  There seems to be no conflict between \texttt{pandoc-ling} and
  \texttt{pandoc-crossref}.
\item
  Interlinear examples will not wrap at the end of the page. There is no
  solution yet for examples that are longer than the size of the page.
\item
  It is not (yet) possible to have more than one glossing line.
\item
  When exporting to \texttt{docx} there is a problem because there are
  paragraphs inserted after tables, which adds space in lists with multiple
  interlinear examples (except when they have exactly the same number of
  columns). This is
  \href{https://answers.microsoft.com/en-us/msoffice/forum/msoffice_word-mso_windows8-mso_2013_release/how-to-remove-extra-paragraph-after-table/995b3811-9f55-4df1-bbbc-9f672b1ad262}{by
  design}. The official solution is to set font-size to 1 for this
  paragraph inside MS Word.
\item
  Multi-column cells are crucial for \texttt{pandoc-ling} to work properly.
  These are only introduced in the new table format with Pandoc 2.10 (so
  older Pandoc versions are not supported). Also note that these structures
  are not yet exported to all formats, e.g.~it will not be displayed
  correctly in \texttt{docx}. However, this is currently an area of active
  development.
\item
  \texttt{langsci-gb4e} is only available as part of the
  \href{https://ctan.org/pkg/langsci?lang=en}{\texttt{langsci} package}.
  You have to make it available to Pandoc, e.g.~by adding it into the same
  directory as the pandoc-ling.lua filter.
I have added a recent version of \texttt{langsci-gb4e} here for
  convenience, but this one might be outdated at some time in the future.
\item
  \texttt{beamer} output seems to work best with
  \texttt{latexPackage:\ gb4e}.
\end{itemize}

\hypertarget{a-note-on-latex-conversion}{%
\subsection{A note on Latex conversion}\label{a-note-on-latex-conversion}}

Originally, I decided to write this filter as a two-pronged conversion,
making a markdown version myself, but using a mapping to one of the many
latex libraries for linguistics examples as a quick fix. I assumed that such
a mapping would be the easy part. However, it turned out that the mapping to
latex was much more difficult than I anticipated. Basically, it turned out
that the `common denominator' that I was aiming for was not necessarily the
`common denominator' provided by the latex packages. I worked on mapping to
various packages (linguex, gb4e, langsci-gb4e and expex) with growing
dismay. This approach resulted in a first version. However, after this
version was (more or less) finished, I realised that it would be better to
first define the `common denominator' more clearly (as done here), and then
implement this purely in Pandoc. From that basis I have then made attempts
to map the examples to the various latex packages.

\hypertarget{a-note-on-implementation}{%
\subsection{A note on implementation}\label{a-note-on-implementation}}

The basic structure of the examples is transformed into Pandoc tables.
Tables are reasonably safe for converting to other formats. Care has been
taken to add \texttt{classes} to all elements of the tables (e.g.~the
preamble has the class \texttt{linguistic-example-preamble}). When exported
formats are aware of these classes, they can be used to fine-tune the
formatting. I have used a few such fine-tunings in the html output of this
filter by adding a few CSS-style statements. The naming of the classes is
quite transparent, using the form \texttt{linguistic-example-STRUCTURE}.
The whole table is encapsulated in a \texttt{div} with class \texttt{ex} and
an id of the form \texttt{exNUMBER}. This means that an example can be
directly referred to in web-links by using the hash-mechanism. For example,
adding \texttt{\#ex3} to the end of a link will immediately jump to this
example in a browser.

The current implementation is completely independent from the
\href{https://pandoc.org/MANUAL.html\#numbered-example-lists}{Pandoc
numbered examples implementation} and both can work side by side, like (2):

\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\item
  These are native Pandoc numbered examples
\item
  They are independent of \texttt{pandoc-ling} but use the same output
  formatting in many default exports, like latex.
\end{enumerate}

However, various output-formats of Pandoc (e.g.~latex) also use numbers in
round brackets for these, so in practice it might be confusing to combine
both.

\end{document}
\documentclass{article}
\begin{document}
	\section{Logic Gate}
	Combining tens, thousands, or millions of logic gates makes it possible for a computer to perform highly complex operations and tasks at ever-increasing speeds.
\end{document}
\subsection{Determinants}
\noindent
The determinant of a matrix is a signed number that tells by how much the
transformation represented by the matrix scales volumes in a space. The
number is negative if the space was ``flipped'' during the transformation.
The number is zero if the dimension of the output space is less than that
of the input space.\\

\noindent
The determinant is only defined for square matrices. It's easiest to
understand the definition of a determinant recursively.
\begin{align*}
	\det{\left[ a \right]} &= \lvert a \rvert = a \\
	\det{\left[
	\begin{array}{cc}
		a & b \\
		c & d
	\end{array}
	\right]}
	&=
	\begin{array}{|cc|}
		a & b \\
		c & d
	\end{array}
	= ad - bc.
\end{align*}
We can define $a_{ij}$ as the entry in the $i$th row and $j$th column of
matrix $A$, and $A_{ij}$ as the submatrix obtained by removing row $i$ and
column $j$ from $A$; its determinant $\det{A_{ij}}$ is called the $(i,j)$
minor. This allows us to write a general formula for the determinant.
\begin{definition}
	\begin{equation*}
		\det{A} = \sum_{j=1}^{n}{\left(-1\right)^{i+j}a_{ij}\det{A_{ij}}} \text{ (for fixed $i$)} = \sum_{i=1}^{n}{\left(-1\right)^{i+j}a_{ij}\det{A_{ij}}} \text{ (for fixed $j$)}
	\end{equation*}
\end{definition}
\noindent
This formula allows us to use any row or column to calculate the
determinant, which is especially useful if a certain row contains lots of
0's.\\

\noindent
Below are some properties of the determinant for some $n \times n$ matrix
$A$ and scalar $\lambda$.
\begin{align*}
	\det{I_n} &= 1 \\
	\det{(A^T)} &= \det{A} \\
	\text{If $A$ is invertible, } \det{(A^{-1})} &= \frac{1}{\det{A}} \\
	\det{(\lambda A)} &= \lambda^n\det{A} \\
	\det{(AB)} &= \det{A}\det{B} \\
	\text{If $A$ is triangular, } \det{A} &= \prod_{i=1}^{n}{a_{ii}}
\end{align*}
\begin{example}
	Find the determinant of the following $3 \times 3$ matrix.
	\begin{equation*}
		A =
		\begin{bmatrix}
			1 & 3 & 7 \\
			0 & 2 & -1 \\
			2 & 7 & 9
		\end{bmatrix}
	\end{equation*}
\end{example}
\noindent
We'll expand along the first column since it has only two non-zero entries.
\begin{equation*}
	\det{A} =
	1 \text{ }
	\begin{array}{|cc|}
		2 & -1 \\
		7 & 9
	\end{array}
	+ 2 \text{ }
	\begin{array}{|cc|}
		3 & 7 \\
		2 & -1
	\end{array}
	= (18+7) + 2(-3-14) = -9.
\end{equation*}
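As a concrete illustration of the recursive definition, the cofactor
expansion can be written in a few lines of code. The following is a minimal
sketch in Python (not part of the original notes); it expands along the
first row rather than the first column, but it reproduces the same value
for the example above.

\begin{verbatim}
# Illustrative sketch: determinant by cofactor expansion along the first row.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # submatrix with row 0 and column j removed
        sub = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(sub)
    return total

A = [[1, 3, 7],
     [0, 2, -1],
     [2, 7, 9]]
print(det(A))  # -9, matching the worked example
\end{verbatim}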
\subsection{Design Language} \label{DesignLanguage} This section will present the result of the physical design process which is the design language. It will start so by defining each screen of the program and the functions that each screen should have. Afterwards it will explain the navigation method that is used in the program's design and end with some general principles for designing the program's interface. \subsubsection{Program screens and functionality} \label{ScreensandFunctionality} The domains were identified in \cref{ScenarioCorpus} and they are as follows: planning, shopping, recipes, inventory and general. In \nameref{Sketches} the domains were used to structure the screens in the program. In the \nameref{ConsMod} some of the domains were used as main elements, because of the structural contribution that the domains give, they will be used as a basis for the actual screens of the program. The following text will describe each screen of the program by the funcionalities of the particular screen. \textbf{Planning:} By first looking at the \nameref{ConsMod} we see that there are some functional requirements stated for the Meal Planner: \begin{itemize} \item Recipes planned for a dynamic number of people. \item Varied meals. \item Change meal plans on the fly. \end{itemize} By also looking at the \nameref{Sketches} in \cref{MealScheduleSketches}, we can add these requirements to the list: \begin{itemize} \item A weekly overview of planned meals. \begin{itemize} \item With information for each meal e.g. name of the recipe, the date for the cooking, and the number of ingredients the user has. \end{itemize} \item A view of a specific day and the planned recipes. \begin{itemize} \item Display the current selected day the user is viewing. \item A way to add a meal to the day. \item A way to edit already scheduled meals. \end{itemize} \end{itemize} \textbf{Shopping:} By also looking at the \nameref{ConsMod} for this screen we can identify different functional requirements: \begin{itemize} \item Dynamic number of days to shop for. \item Shared list. \item Automatically add bought items to inventory. \item Meal plan changes effects the shopping list. \end{itemize} By also looking at the \nameref{Sketches} in \cref{ShoppingListSketches}, we can add these requirements to the list: \begin{itemize} \item An overview of all ingredients on the shopping list. \begin{itemize} \item Displaying each ingredients name and quantity. \end{itemize} \item A search function to find and add specific ingredients. \item A function to buy or add ingredients to the inventory. \end{itemize} \textbf{Recipes:} The \nameref{ConsMod} also states different functional requirements for this screen: \begin{itemize} \item Ingredients. \begin{itemize} \item Displaying name and quantity of the ingredients. \end{itemize} \item Preparation \begin{itemize} \item An explanation of cooking steps for the specific recipe. \end{itemize} \item Diets \end{itemize} The list can be expanded with the functional requirement from \nameref{Sketches} in \cref{RecipesSketches}: \begin{itemize} \item A search function. \item A list of recipes with little but relevant information. \begin{itemize} \item Information could be a picture of the recipe, recipe name, ingredients of the recipe and the number of ingredients the user has. \end{itemize} \item A categorisation function \item A view of a specific recipe. \begin{itemize} \item Information such as preparation guide, ingredients and a picture. 
\item A function to add the recipe to the meal schedule. \end{itemize} \end{itemize} \textbf{Inventory:} The \nameref{ConsMod} can also be used to state different functionality requirements for the inventory screen: \begin{itemize} \item A function to manually add ingredients. \item Automatic adding of bought shopping list ingredients. \end{itemize} The additions to the functional requirements from \nameref{Sketches} in \cref{RecipesSketches} are listed below: \begin{itemize} \item A search function to find and add specific ingredients. \item A function to remove ingredients. \item A list of ingredients with little but relevant information. \begin{itemize} \item Information such as ingredient name, purchase date and quantity. \end{itemize} \item A function or view that easily groups ingredients of the same name but with different quantities and/ or purchase dates. \end{itemize} \textbf{General:} The \nameref{ConsMod} does not include this domain and the design requirements will therefore solely be identified from the \nameref{Sketches}: \begin{itemize} \item An overview of all the different setting categories. \begin{itemize} \item Categories such as stock, allergies, preferences and more. \end{itemize} \item A function to expand and display specific categories and the information for that category. \end{itemize} \subsubsection{Program navigation} The main method of the navigation in the program is done via the bottom navigation bar shown on \cref{NavigationBarSketch}. This bar can be used from all the screens of the program and directs the user to one of the five main screens written prior in this section, in \textit{Program Screens and Functionality}. If the user navigates away from one of the main screens, e.g. when viewing a list of recipe, the user can press a recipe and view that specific recipe, the top of the screen will then include a back functionality. \subsubsection{Design principles} This section will discuss different design subjects that are used as general principles in the design language for the program. These general principles are created, to give the program's design coherence and thereby increase the usability. \textbf{Navigation:} The navigation in the program is mainly done via the bottom navigation bar. Consistently displaying and using this as the main way of navigation gives the program better usability. If the user is confused as to where he is in the program he can always use the bottom navigation bar to find his way to a screen that is familiar. Another consistency that the program's navigation has is the use of the top navigation bar. When the user is on any screen that is not one of the main screens the top bar will be used as a backwards navigation. The consistency of using both the bottom and top for navigation functions gives an easy of use and better usability. \textbf{Mobile application:} The program will be designed as a mobile application. It is therefore important that this is considered when defining more general principles for the program. These additional principles are listed and briefly described below. \begin{itemize} \item General \begin{itemize} \item Try to avoid to much redundant information on the screen \item Avoid many click and mouseover events. \end{itemize} \item Text \begin{itemize} \item The text has to be readable. That means using logical typography e.g. giving it the right size relative to the screen and an easy to read font. \item The use of font and text size has to be consistent. 
\end{itemize}
	\item Buttons
	\begin{itemize}
		\item The shape and colour theme has to be consistent on buttons.
		\item The size of the buttons has to be large enough for a person to click with a finger.
		\item Avoid too much text on the buttons. A good idea is to use icons.
		\item Avoid using buttons when they are not necessary, e.g. when editing an ingredient the program should update automatically.
	\end{itemize}
\end{itemize}

\input{Design/ColourChoice}
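The functional requirements collected in this section implicitly describe a
small data model: ingredients with a name, quantity and purchase date,
recipes with ingredients and preparation steps, and scheduled meals with a
date and a dynamic number of people. The sketch below (in Python) is purely
hypothetical; all class and field names are illustrative assumptions and
not part of the program's actual design.

\begin{verbatim}
# Hypothetical sketch of the entities implied by the functional requirements.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Ingredient:
    name: str
    quantity: float
    unit: str
    purchase_date: Optional[date] = None  # set once the ingredient is in the inventory

@dataclass
class Recipe:
    name: str
    ingredients: List[Ingredient]
    preparation: List[str]        # cooking steps shown on the recipe screen

@dataclass
class PlannedMeal:
    recipe: Recipe
    scheduled_for: date
    number_of_people: int         # meals are planned for a dynamic number of people

@dataclass
class MealPlan:
    meals: List[PlannedMeal] = field(default_factory=list)

    def shopping_list(self, inventory: List[Ingredient]) -> List[Ingredient]:
        """Planned ingredients not covered by the inventory (simplified: by name only)."""
        have = {i.name for i in inventory}
        return [i for m in self.meals for i in m.recipe.ingredients
                if i.name not in have]
\end{verbatim}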
\section{Documents Links} Data Management System-level network specifications (extracted from \href{https://confluence.lsstcorp.org/download/attachments/20284335/LSE-61.pdf?version=1&modificationDate=1490879770000&api=v2}{LSE-61 Data Management Subsystem Requirements}) 180 days MTBF/year 48 hours MTTR/year Base to Archive nightly data volume (science images and meta-data): 15TB Amount of time available to transfer data from Mountain to Base to Archive: Crosstalk-corrected images for Alert Production: 6 seconds to move (between 2.6 - 12.4 Gbytes depending on compression) Raw images: 24 hours Note, there is also Observatory Control System and other operational data transferred at night, and there is daytime engineering and calibration traffic potentially at the same volume. Finally, there is periodic data transfer from the Archive to the Base, primarily as a result of annual Data Release Processing. All the data flows and allocated bandwidths are defined in the following documents. A grouping of this traffic for purposes of network design, QoS, prioritization etc. is on the \href{https://confluence.lsstcorp.org/display/DM/LSST+Network+Traffic+Types}{LSST Network Traffic Types}. A bandwidth allocation by link of this traffic for purposes of network design, QoS, prioritization etc. is on the \href{https://confluence.lsstcorp.org/display/DM/LSST+Network+Bandwidth+Allocation}{LSST Bandwidth Allocation}. The LSST Long-Haul Networks are specified in \href{https://docushare.lsstcorp.org/docushare/dsweb/Get/LSE-78/lse78observatoryNetworkDesign_rel5.1_20200825.pdf}{LSE-78 Rubin Observatory Network Design} and \href{https://docushare.lsstcorp.org/docushare/dsweb/Get/LSE-479/lse479observatoryNetworkTechnicalDoc_rel1_20200825.pdf}{LSE-479 Rubin Observatory Networks Technical Document}. The budgeted cost and schedule of deployment of links and bandwidth, and their utilization is documented in \href{https://confluence.lsstcorp.org/download/attachments/20284335/20170130%20LDM-142%20LSST%20Networks%20BL%20and%20Plan.xls?version=1&modificationDate=1491479508000&api=v2}{LDM-142 Network Sizing Model} The requirements for the Summit Network are in \href{https://docushare.lsstcorp.org/docushare/dsweb/Get/LTS-577/LTS-577%20Summit%20Network%20Specification%20Rel1%2006292017.pdf}{LTS-577 Summit Network Specification} In addition to the top-level documents above, there are documents describing how the LSST Networks are to be tested, verified, and managed: The \href{https://confluence.lsstcorp.org/download/attachments/20284335/LSST%20LHN%20End-to-End_Plan_v6.docx?version=1&modificationDate=1490879785000&api=v2}{LSST Network End-to-End Test Plan} defines the plan for development testing and monitoring the LSST networks. The \href{https://docushare.lsstcorp.org/docushare/dsweb/Get/LDM-732/NoContent8135677362395171770.txt}{Vera C. Rubin Network Verification Baseline} defines the plan for formal verification of the LSST Networks \href{https://docushare.lsstcorp.org/docushare/dsweb/Get/Document-35934/Rubin%20Observatory%20Networks%20Pre-Verification%20Review%20Report%202020-06-22.pdf}{Rubin Observatory Network Pre-Verification Review Report} is the report from the panel from the June 2020 review. The LSST Observatory Network Verification Plan defines plan for formal verification of the LSST networks. OBSOLETE, SUPERCEDED BY LDM-732 and JIRA LVV Project The LSST Observatory Network Verification Matrix defines requirements and methods for formal verification of the LSST networks. 
OBSOLETE, SUPERCEDED BY LDM-732 and JIRA LVV Project The \href{https://confluence.lsstcorp.org/download/attachments/20284335/LSST%20Network%20O%26M%20Plan_v2.docx?version=1&modificationDate=1490879794000&api=v2}{LSST Network Operations and Management Plan} defines the plan for operating and maintaining the LSST networks as a single integrated process. SLAC US Data Facility Networks (\href{https://confluence.lsstcorp.org/download/attachments/20284335/Rubin-SLAC-ESNET-p1.pptx?version=1&modificationDate=1615304516000&api=v2}{ppt}, \href{https://confluence.lsstcorp.org/download/attachments/20284335/Rubin-SLAC-ESNET-p1.pdf?version=1&modificationDate=1615304502000&api=v2}{pdf}) ESnet - Europe Networks (\href{https://confluence.lsstcorp.org/download/attachments/20284335/ESnet-Europe-networks.pdf?version=1&modificationDate=1615479399000&api=v2}{pdf})
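The transfer requirements quoted at the top of this section imply specific
minimum data rates, which is useful context when reading the sizing and
allocation documents above. The short calculation below is only a sketch
(in Python); the sizes and times come from the specification text quoted
above, decimal units are assumed, and protocol overhead is ignored.

\begin{verbatim}
# Implied minimum throughput from the DM-level requirements quoted above.
def gbps(n_bytes, seconds):
    """Gigabits per second for a transfer of n_bytes bytes in the given time."""
    return n_bytes * 8 / seconds / 1e9

# Crosstalk-corrected image for Alert Production: 2.6-12.4 GB within 6 seconds
print(f"Alert Production: {gbps(2.6e9, 6):.1f} to {gbps(12.4e9, 6):.1f} Gb/s")

# Nightly Base to Archive volume: ~15 TB of raw images within 24 hours
print(f"Nightly bulk transfer: {gbps(15e12, 24 * 3600):.2f} Gb/s sustained")
\end{verbatim}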
\documentclass[11pt,]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \PassOptionsToPackage{hyphens}{url} % url is loaded by hyperref \usepackage[unicode=true]{hyperref} \hypersetup{ pdftitle={A framework for effective application of machine learning to microbiome-based classification problems}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage[margin=1in]{geometry} \usepackage{longtable,booktabs} % Fix footnotes in tables (requires footnote package) \IfFileExists{footnote.sty}{\usepackage{footnote}\makesavenoteenv{long table}}{} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi % set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{caption} \usepackage{hyperref} \usepackage{helvet} % Helvetica font \renewcommand*\familydefault{\sfdefault} % Use the sans serif version of the font \usepackage[T1]{fontenc} \usepackage[labelfont=bf]{caption} \usepackage[none]{hyphenat} \usepackage{setspace} \doublespacing \setlength{\parskip}{1em} \usepackage{lineno} \usepackage{pdfpages} \floatplacement{figure}{H} % Keep the figure up top of the page \title{\textbf{A framework for effective application of machine learning to microbiome-based classification problems}} \author{} \date{\vspace{-2.5em}} \begin{document} \maketitle \vspace{30mm} Running title: Machine learning framework to model microbiome data \vspace{20mm} Begüm D. Topçuoğlu\({^1}\), Nicholas A. Lesniak\({^1}\), Mack Ruffin\({^3}\), Jenna Wiens\textsuperscript{2\(\dagger\)}, Patrick D. 
Schloss\textsuperscript{1\(\dagger\)} \vspace{30mm} \(\dagger\) To whom correspondence should be addressed: \href{mailto:[email protected]}{\nolinkurl{[email protected]}}, \href{mailto:[email protected]}{\nolinkurl{[email protected]}} 1. Department of Microbiology and Immunology, University of Michigan, Ann Arbor, MI 48109 2. Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 3. Department of Family Medicine and Community Medicine, Penn State Hershey Medical Center, Hershey, PA \newpage \linenumbers \subsection{Abstract}\label{abstract} Machine learning (ML) modeling of the human microbiome has the potential to identify microbial biomarkers and aid in the diagnosis of many diseases such as inflammatory bowel disease, diabetes, and colorectal cancer. Progress has been made towards developing ML models that predict health outcomes using bacterial abundances, but inconsistent adoption of training and evaluation methods call the validity of these models into question. Furthermore, there appears to be a preference by many researchers to favor increased model complexity over interpretability. To overcome these challenges, we trained seven models that used fecal 16S rRNA sequence data to predict the presence of colonic screen relevant neoplasias (SRNs; n=490 patients, 261 controls and 229 cases). We developed a reusable open-source pipeline to train, validate, and interpret ML models. To show the effect of model selection, we assessed the predictive performance, interpretability, and training time of L2-regularized logistic regression, L1 and L2-regularized support vector machines (SVM) with linear and radial basis function kernels, decision trees, random forest, and gradient boosted trees (XGBoost). The random forest model performed best at detecting SRNs with an AUROC of 0.695 {[}IQR 0.651-0.739{]} but was slow to train (83.2 h) and not inherently interpretable. Despite its simplicity, L2-regularized logistic regression followed random forest in predictive performance with an AUROC of 0.680 {[}IQR 0.625-0.735{]}, trained faster (12 min), and was inherently interpretable. Our analysis highlights the importance of choosing an ML approach based on the goal of the study, as the choice will inform expectations of performance and interpretability. \newpage \subsection{Importance}\label{importance} Diagnosing diseases using machine learning (ML) is rapidly being adopted in microbiome studies. However, the estimated performance associated with these models is likely over-optimistic. Moreover, there is a trend towards using black box models without a discussion of the difficulty of interpreting such models when trying to identify microbial biomarkers of disease. This work represents a step towards developing more reproducible ML practices in applying ML to microbiome research. We implement a rigorous pipeline and emphasize the importance of selecting ML models that reflect the goal of the study. These concepts are not particular to the study of human health but can also be applied to environmental microbiology studies. \newpage \subsection{Background}\label{background} As the number of people represented in human microbiome datasets grow, there is an increasing desire to use microbiome data to diagnose diseases. However, the structure of the human microbiome is remarkably variable among individuals to the point where it is often difficult to identify the bacterial populations that are associated with diseases using traditional statistical models. 
For example it is not possible to classify individuals as having healthy colons or screen relevant neoplasia using Bray-Curtis distances based on the 16S rRNA gene sequences collected from fecal samples {[}Figure S1{]}. This variation is likely due to the ability of many bacterial populations to fill the same niche such that different populations cause the same disease in different individuals. Furthermore, a growing number of studies have shown that it is rare for a single bacterial species to be associated with a disease. Instead, subsets of the microbiome account for differences in health. Traditional statistical approaches do not adequately account for the variation in the human microbiome and typically consider the protective or risk effects of each bacterial population separately (1). Recently, machine learning (ML) models have grown in popularity among microbiome researchers because ML models can effectively account for the interpersonal microbiome variation and the ecology of disease as they consider the relative abundance of each bacterial population in the context of others rather than in isolation. ML models can be used to increase our understanding of the variation in the structure of existing data and in making predictions about new data. Researchers have used ML models to diagnose and understand the ecological basis of diseases such as liver cirrhosis, colorectal cancer, inflammatory bowel diseases, obesity, and type 2 diabetes (2--19). The task of diagnosing an individual relies on a rigorously validated model. However, there are common methodological and reporting problems that arise when applying ML to such data that need to be addressed for the field to progress. These problems include a lack of transparency in which methods are used and how these methods are implemented; evaluating models without separate held-out test data; unreported variation between the predictive performance on different folds of cross-validation; and unreported variation between cross-validation and testing performances. Though the microbiome field is making progress to avoid some of these pitfalls including validating their models on independent datasets (8, 19, 20) and introducing accessible and open-source ML tools (21--24), more work is needed to improve reproducibility further and minimize overestimating for model performance. Among microbiome researchers, the lack of justification when selecting a modeling approach has often been due to an implicit assumption that more complex models are better. This has resulted in a trend towards using non-linear models such as random forest and deep neural networks (3, 12, 25--27) over simpler models such as logistic regression or other linear models (19, 23, 28). Although in some cases, complex models may capture important non-linear relationships and therefore yield better predictions, they can also result in black boxes that lack interpretability. Such models require post hoc explanations to quantify the importance of each feature in making predictions. Depending on the goal of the modeling, other approaches may be more appropriate. For example, researchers trying to identify the microbiota associated with disease may desire a more interpretable model, whereas clinicians may emphasize predictive performance. Nonetheless, it is essential to understand that the benefit of more complex, less interpretable models may be minimal (29--31). It is important for researchers to justify their choice of modeling approach. 
In this study, we provided steps toward standardization of machine learning methods for microbiome studies which are often poorly documented and executed. To showcase a rigorous ML pipeline and to shed light on how ML model selection can affect modeling results, we performed an empirical analysis comparing the predictive performance, interpretability, data requirements, and training times of seven modeling approaches with the same dataset and pipeline. We built three linear models with different forms of regularization: L2-regularized logistic regression and L1 and L2-regularized support vector machines (SVM) with a linear kernel. We also trained four non-linear models: SVM with radial basis function kernel, a decision tree, random forest, and gradient boosted trees. We compared their predictive performance, interpretability, and training time. To demonstrate the performance of these modeling approaches and our pipeline, we present a case study using data from a previously published study that sought to classify individuals as having healthy colons or colonic lesions based on the 16S rRNA gene sequences collected from fecal samples (4). This dataset was selected because it is a relatively large collection of individuals (N=490) connected to a clinically significant disease where there is ample evidence that the disease is driven by variation in the microbiome (2, 4, 5, 32). With this dataset, we developed an ML pipeline that can be used in many different scenarios for training and evaluating models. This framework can be easily applied to other host-associated and environmental microbiome datasets. We also provided an aspirational rubric for evaluating the rigor of ML practices applied to microbiome data {[}Table S1{]} to urge the audience to be diligent in their study design and model selection, development, evaluation, and interpretation. \subsection{Results}\label{results} \textbf{Model selection and pipeline construction}. We established a reusable ML pipeline for model selection and evaluation, focusing on seven different commonly used supervised learning algorithms {[}Figure 1, Table 1{]}. First, we randomly split the data into training and test sets so that the training set consisted of 80\% of the full dataset, while the test set was composed of the remaining 20\% {[}Figure 1{]}. To maintain the distribution of controls and cases found in the full dataset, we performed stratified splits. For example, our full dataset included 490 individuals. Of these, 261 had healthy colons (53.3\%) and 229 had a screen relevant neoplasia (SRN; 46.7\%). A training set included 393 individuals, of which 184 had an SRN (46.8\%), while the test set was composed of 97 individuals, of which 45 had an SRN (46.4\%). The training data were used to build and select the models, and the test set was used for evaluating the model. We trained seven different models using the training data {[}Table 1{]}. Model selection requires tuning hyperparameters. Hyperparameters are parameters that need to be specified or tuned by the user, in order to train a model for a specific modeling problem. For example, when using regularization, C is a hyperparameter that indicates the penalty for overfitting. Hyperparameters are tuned using the training data to find the best model. We selected hyperparameters by performing repeated five-fold cross-validation (CV) on the training set {[}Figure 1{]}. The five-fold CV was also stratified to maintain the overall case and control distribution. 
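
To make this scheme concrete, the sketch below shows how such a stratified 80/20 split and repeated, stratified five-fold cross-validation could be set up with the caret R package that we used for model training (see Materials and Methods). It is a minimal illustration rather than our actual pipeline: the synthetic data frame, its outcome column \texttt{dx}, and the number of cross-validation repeats are hypothetical placeholders.

\begin{verbatim}
library(caret)

# Hypothetical stand-in for the OTU table; the real data have 6,920 OTUs
# and a diagnosis coded as "healthy" or "SRN".
set.seed(1)
otu_data <- data.frame(matrix(runif(200 * 25), nrow = 200))
otu_data$dx <- factor(sample(c("healthy", "SRN"), 200, replace = TRUE))

# Stratified 80/20 split: createDataPartition() preserves the class balance.
train_idx <- createDataPartition(otu_data$dx, p = 0.8, list = FALSE)
train_set <- otu_data[train_idx, ]
test_set  <- otu_data[-train_idx, ]

# Repeated five-fold cross-validation on the training set only, scored with
# AUROC (reported by caret as "ROC" when twoClassSummary is used).
cv_ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 10,
                        classProbs = TRUE, summaryFunction = twoClassSummary)
\end{verbatim}

The resulting control object would then be passed to caret's \texttt{train()} function together with a grid of candidate hyperparameter values.
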
We chose the hyperparameter values that led to the best average CV predictive performance using the area under the receiver operating characteristic curve (AUROC) {[}Figure S2 and S3{]}. The AUROC ranges from 0, where the model's predictions are perfectly incorrect, to 1.0, where the model perfectly distinguishes between cases and controls. An AUROC value of 0.5 indicates that the model's predictions are no different than random. To select hyperparameters, we performed a grid search for hyperparameter settings when training the models. Default hyperparameter settings in developed ML packages available in R, Python, and MATLAB programming languages may be inadequate for effective application of classification algorithms and need to be optimized for each new ML task. For example, L1-regularized SVM with linear kernel showed large variability between different regularization strengths (C) and benefited from tuning as the default C parameter was 1 {[}Figure S2{]}. Once hyperparameters were selected, we trained the model using the full training dataset and applied the final model to the held-out data to evaluate the testing predictive performance of each model. The data-split, hyperparameter selection, training and testing steps were repeated 100 times to obtain a robust interpretation of model performance, less likely to be affected by a ``lucky'' or ``unlucky'' split {[}Figure 1{]}. \textbf{Predictive performance and generalizability of the seven models.} We evaluated the predictive performance of the seven models to classify individuals as having healthy colons or SRNs {[}Figure 2{]}. The predictive performance of random forest model was higher than other ML models with a median 0.695 {[}IQR 0.650-0.739{]}, though not significantly (p=0.5; The p-value was manually calculated using the sampling distribution of the test statistic under the null hypothesis) (Figure S4). Similarly, L2-regularized logistic regression, XGBoost, L2-regularized SVM with linear and radial basis function kernel AUROC values were not significantly different from one another and had median AUROC values of 0.680 {[}IQR 0.639-0.750{]}, 0.679 {[}IQR 0.643-0.746{]}, 0.678 {[}IQR 0.639-0.750{]} and 0.668 {[}IQR 0.639-0.750{]}, respectively. L1-regularized SVM with linear kernel and decision tree had significantly lower AUROC values than the other ML models with median AUROC of 0.650 {[}IQR 0.629-0.760{]} and 0.601 {[}IQR 0.636-0.753{]}, respectively {[}Figure 2{]}. Interestingly, these results demonstrate that the most complex model (XGBoost) did not have the best performance and that the most interpretable models (L2-regularized logistic regression and L2-regularized SVM with linear kernel) performed nearly as well as non-linear models. To evaluate the generalizability of each model, we compared the median cross-validation AUROC to the median testing AUROC. If the difference between the cross-validation and testing AUROCs was large, then that could indicate that the models were overfit to the training data. The largest difference in median AUROCs was 0.021 in L1-regularized SVM with linear kernel, followed by SVM with radial basis function kernel and decision tree with a difference of 0.007 and 0.006, respectively {[}Figure 2{]}. These differences were relatively small and gave us confidence in our estimate of the generalization performance of the models. To evaluate the variation in the estimated performance, we calculated the range of AUROC values for each model using 100 data-splits. 
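
As a sketch of how the testing performance could be estimated for each data-split, the code below trains one model per split and scores the held-out samples with the pROC package. The synthetic data, the use of caret's glmnet method as a stand-in for a regularized logistic regression, and the use of only five splits are assumptions made to keep the example short; our analysis used the models in Table 1 and 100 data-splits.

\begin{verbatim}
library(caret)
library(pROC)

# Synthetic stand-in data; the real features are 6,920 OTU relative abundances.
set.seed(1)
n <- 300
dat <- data.frame(matrix(rnorm(n * 10), nrow = n))
dat$dx <- factor(ifelse(dat$X1 + rnorm(n) > 0, "SRN", "healthy"))

test_auc <- function(seed) {
  set.seed(seed)
  idx       <- createDataPartition(dat$dx, p = 0.8, list = FALSE)
  train_set <- dat[idx, ]
  test_set  <- dat[-idx, ]
  ctrl <- trainControl(method = "cv", number = 5,
                       classProbs = TRUE, summaryFunction = twoClassSummary)
  # glmnet is used here only as a stand-in regularized logistic regression.
  fit   <- train(dx ~ ., data = train_set, method = "glmnet",
                 metric = "ROC", trControl = ctrl)
  probs <- predict(fit, test_set, type = "prob")[, "SRN"]
  as.numeric(auc(roc(test_set$dx, probs, quiet = TRUE)))
}

# Five splits for illustration; the study repeated this 100 times.
aucs <- sapply(1:5, test_auc)
summary(aucs)       # median and quartiles of the testing AUROCs
diff(range(aucs))   # spread between the best and worst split
\end{verbatim}
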
The range among the testing AUROC values within each model varied by 0.230 on average across the seven models. If we had performed only a single split, we would have risked getting a lucky or unlucky estimate of model performance. For instance, the lowest AUROC value of the random forest model was 0.593, whereas the highest was 0.810. These results showed that, depending on the data-split, the testing performance can vary {[}Figure 2{]}. Therefore, it is important to employ multiple data splits when estimating generalization performance. To show the effect of sample size on model generalizability, we compared cross-validation AUROC values of L2-regularized logistic regression and random forest models when we subsetted our original study design with 490 subjects to 15, 30, 60, 120, and 245 subjects {[}Figure S5{]}. The variation in cross-validation performance within both models at smaller sample sizes was larger than when the full collection of samples was used to train and validate the models. Because of the high dimensionality of the microbiome data (6920 OTUs), large sample sizes can lead to better models.

\textbf{Interpretation of each ML model.} We often use ML models not just to predict a health outcome, but also to identify potential biomarkers for disease. Therefore, model interpretation becomes crucial for microbiome studies. Interpretability is related to the degree to which humans can understand the reasons behind a model prediction (33--35). ML models often decrease in interpretability as they increase in complexity. In this study, we used two methods to help interpret our models. First, we interpreted the feature importance of the linear models (L1 and L2-regularized SVM with linear kernel and L2-regularized logistic regression) using the median rank of absolute feature weights for each OTU {[}Figure 3{]}. We also reviewed the signs of the feature weights to determine whether an OTU was associated with classifying a subject as being healthy or having an SRN. It was encouraging that many of the highest-ranked OTUs were shared across these three models (e.g., OTUs 50, 426, 609, 822, 1239). The benefit of this approach was knowing the sign and magnitude of each OTU coefficient in the trained model. This allowed us to immediately interpret negative and positive coefficient signs as protective and risk factors, respectively, and the magnitude as the impact of these factors. However, this approach is limited to linear models or models with prespecified interaction terms. Second, to analyze non-linear models, we interpreted the feature importance using permutation importance (36). Whereas the absolute feature weights were determined from the trained models, here we measured importance using the held-out test data. Permutation importance analysis is a post hoc explanation of the model, in which we randomly permuted groups of perfectly correlated features together and other features individually across the two groups in the held-out test data {[}Figure S6{]}. We then calculated how much the predictive performance of the model (i.e., the testing AUROC values) decreased when each OTU or group of OTUs was randomly permuted. We ranked the OTUs based on how much the median testing AUROC decreased when they were permuted; the OTU with the largest decrease ranked highest {[}Figure 4{]}. Among the twenty OTUs with the largest impact, there was only one OTU (OTU 822) that was shared among all of the models.
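
The grouped permutation scheme described above can be sketched as a small helper function. The code below is a simplified illustration rather than the exact implementation in our pipeline; it assumes a caret-style fitted model whose \texttt{predict} method returns class probabilities, a held-out data frame with the diagnosis in a \texttt{dx} column, and a precomputed list of OTU groups in which perfectly correlated OTUs have already been collected together.

\begin{verbatim}
library(pROC)

# model:      a fitted classifier whose predict(..., type = "prob") returns
#             a column of P(SRN), as caret models do (assumption).
# test_set:   held-out data frame with the outcome in column 'dx'.
# otu_groups: list of character vectors; each vector holds one OTU or a set
#             of perfectly correlated OTUs that are permuted together.
permutation_importance <- function(model, test_set, otu_groups, n_perm = 10) {
  auroc <- function(d) {
    as.numeric(auc(roc(d$dx, predict(model, d, type = "prob")[, "SRN"],
                       quiet = TRUE)))
  }
  base_auc <- auroc(test_set)
  sapply(otu_groups, function(cols) {
    drops <- replicate(n_perm, {
      permuted <- test_set
      # Shuffle the group's columns with one shared row order so that the
      # correlation within the group is preserved while its association
      # with the diagnosis is broken.
      new_order <- sample(nrow(permuted))
      permuted[, cols] <- permuted[new_order, cols]
      base_auc - auroc(permuted)
    })
    median(drops)  # median decrease in testing AUROC; larger = more important
  })
}
\end{verbatim}
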
We also found that three OTUs (OTUs 58, 110, 367) were important in each of the tree-based models. Similarly, the random forest and XGBoost models shared four of the most important OTUs (OTUs 2, 12, 361, 477). Permutation analysis results also revealed that with the exception of the decision tree model, removal of any individual OTU had minimal impact on model performance. For example, if OTU 367 was permuted across the samples in the decision tree model, the median AUROC dropped from 0.601 to 0.525. In contrast, if the same OTU was permuted in the random forest model, the AUROC only dropped from 0.695 to 0.680, which indicated high degree of collinearity in the dataset. Permutation analysis allowed us to gauge the importance of each OTU in non-linear models and partially account for collinearity by grouping correlated OTUs to determine their impact as a group. To further highlight the differences between the two interpretation methods, we used permutation importance to interpret the linear models {[}Figure S7{]}. When we analyzed the L1-regularized SVM with linear kernel model using feature rankings based on weights {[}Figure 3{]} and permutation importance {[}Figure S7{]}, 17 of the 20 top OTUs (e.g., OTU 609, 822, 1239) were deemed important by both interpretation methods. Similarly, for the L2-regularized SVM and L2-regularized logistic regression, 9 and 12 OTUs, respectively, were shared among the two interpretation methods. These results indicate that both methods are consistent in selecting the most important OTUs. We also compared the top 20 OTUs selected by permutation importance in L2-regularized logistic regression {[}Figure S7{]} and the highest performing tree-based models, random forest and XGBoost {[}Figure 4{]}. Two and five OTUs, respectively, were shared among the models. These results indicate that we were able to identify important OTUs that are shared across the highest performing linear and non-linear models when we use permutation importance as our interpretation method. We then evaluated the difference in relative abundances of the top 20 OTUs identified in L2-regularized logistic regression and random forest models between healthy patients and patients with SRNs {[}Figure S8{]}. There were minimal differences in the median relative abundances across OTUs between different diagnoses. This supports our claim that it is not possible to differentiate disease versus healthy states by focusing on individual taxa. The ability for ML models to simultaneously consider the relative abundances of multiple OTUs and their context dependency is a great advantage over traditional statistical approaches that consider each OTU in isolation. \textbf{The computational efficiency of each ML model.} We compared the training times of the seven ML models. The training times increased with the complexity of the model and the number of potential hyperparameter combinations. Also, the linear models trained faster than non-linear models {[}Figure 5{]}. \subsection{Discussion}\label{discussion} There is a growing awareness that many human diseases and environmental processes are not driven by a single organism but are the product of multiple bacterial populations. Traditional statistical approaches are useful for identifying those cases where a single organism is associated with a process. In contrast, ML methods offer the ability to incorporate the structure of the microbial communities as a whole and identify associations between community structure and disease state. 
If it is possible to classify communities reliably, then ML methods also offer the ability to identify those microbial populations within the communities that are responsible for the classification. However, the application of ML in microbiome studies is still in its infancy, and the field needs to develop a better understanding of different ML methods, their strengths and weaknesses, and how to implement them. To address these needs, we developed an open-source framework for ML models. Using this pipeline, we benchmarked seven ML models and showed that the tradeoff between model complexity and performance may be less severe than originally hypothesized. In terms of predictive performance, the random forest model had the best AUROC compared to the other six models. However, the second-best model was L2-regularized logistic regression, with a median AUROC difference of less than 0.015 compared to random forest. While our implementation of random forest took 83.2 hours to train, our L2-regularized logistic regression trained in 12 minutes. In terms of interpretability, random forest is a non-linear ML model, whereas L2-regularized logistic regression, a linear model, was more easily interpreted because we could use its feature weights. Comparing many different models showed us that the most complex model was not necessarily the best model for our ML task.

We established a pipeline that can be generalized to any modeling method that predicts a binary health outcome. We performed a random data-split to create a training set (80\% of the data) and a held-out test set (20\% of the data), which we used to evaluate predictive performance. We used the AUROC metric to evaluate predictive performance, as it is a clinically relevant evaluation metric for our study. We repeated this data-split 100 times to measure the possible variation in predictive performance. During training, we tuned the model hyperparameters with repeated five-fold cross-validation. Despite the high number of features microbiome datasets typically have, the models we built with this pipeline generalized to the held-out test sets.

We highlighted the importance of model interpretation to gain greater biological insights into microbiota-associated diseases. In this study, we showcased two different interpretation methods: ranking each OTU by (i) its absolute weight in the trained models and (ii) its impact on the predictive performance based on permutation importance. Previous studies have emphasized the difficulty of interpreting the feature coefficients in linear models (37) and the biases introduced by computing feature importance using built-in methods (e.g., Gini importance) of tree-based models (38). Therefore, we encourage our audience to use both interpretation methods highlighted in this study, as permutation importance is a model-agnostic tool that can be used to compare feature importance across different models. Human-associated microbial communities have complex correlation structures that create collinearity in the datasets. This can hinder our ability to reliably interpret models because the feature weights of correlated OTUs are influenced by one another (39). To capture all important features, once we identify highly ranked OTUs, we should review their relationships with other OTUs. These relationships will help us generate new hypotheses about the ecology of the disease and test them with follow-up experiments.
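
One way to recover such groups of perfectly correlated OTUs is sketched below. It is a simplified illustration that uses a Spearman correlation matrix, treats perfect correlation as transitive, and omits the significance filter described in Materials and Methods; the matrix and OTU names in the example are hypothetical.

\begin{verbatim}
# 'otus' is a samples-by-OTUs numeric matrix with OTU names as column names.
# Returns a list of OTU groups; OTUs with no perfect partner stay singletons.
group_perfectly_correlated <- function(otus) {
  rho      <- cor(otus, method = "spearman")
  groups   <- list()
  assigned <- character(0)
  for (otu in colnames(otus)) {
    if (otu %in% assigned) next
    # OTUs whose Spearman correlation with this OTU is (numerically) 1.
    members <- colnames(otus)[rho[otu, ] > 1 - 1e-12]
    groups[[length(groups) + 1]] <- members
    assigned <- c(assigned, members)
  }
  groups
}

# Tiny example: OTU2 is a scaled copy of OTU1, so they form one group.
set.seed(1)
mat <- matrix(rnorm(50 * 4), ncol = 4,
              dimnames = list(NULL, paste0("OTU", 1:4)))
mat[, "OTU2"] <- 2 * mat[, "OTU1"]
group_perfectly_correlated(mat)
\end{verbatim}
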
When we used permutation importance, we partially accounted for collinearity by grouping correlated OTUs to determine their impact as a group. We grouped OTUs that had a perfect correlation with each other; however, we could reduce the correlation threshold to further investigate the relationships among correlated features. With our approach, we identified 432 OTUs out of 6,920 that had perfect correlations with at least one other OTU. The decision to establish correlation thresholds is left to researchers to implement for their own analyses. Regardless of the threshold, understanding the correlation structures within the data is critical to avoid misinterpreting the models. Such structures are likely to be a particular problem with shotgun metagenomic datasets, where collinearity will be more pronounced due to many genes being correlated with one another because they come from the same chromosome. Finally, true causal mechanisms (e.g., the role of the microbiome in colorectal cancer) cannot be explained solely by the highest performing machine learning model (40). To identify the true underlying microbial factors of a disease, it is crucial to follow up on any correlation analyses with further hypothesis testing and experimentation for biological validation.

In this study, we did not consider all possible modeling approaches. However, the principles highlighted throughout this study apply to other ML modeling tasks with microbiome data. For example, we did not evaluate multicategory classification methods to predict non-binary outcomes. We could have trained models to differentiate between people with healthy colons and those with adenomas or carcinomas (k=3 categories). We did not perform this analysis because the clinically relevant diagnosis grouping was between patients with healthy colons and those with SRNs. Furthermore, as the number of classes increases, more samples are required for each category to train an accurate model. We also did not use regression-based analyses to predict a non-categorical outcome. We have previously used such an approach to train random forest models to predict fecal short-chain fatty acid concentrations based on microbiome data (41). Our analysis was also limited to shallow learning methods and did not explore deep learning methods such as neural networks. Deep learning methods hold promise (12, 42, 43), but microbiome datasets often suffer from having many features and small sample sizes, which can result in overfitting.

Our framework provides a reproducible pipeline to train, evaluate, and interpret microbiome-based ML models and generate hypotheses to explain the underlying microbiology of the model prediction. However, deploying microbiome-based models to make clinical diagnoses or predictions is a significantly more challenging and distinct undertaking (44). For example, we currently lack standardized methods to collect patient samples, generate sequence data, and report clinical data. We are also challenged by the practical constraints of OTU-based approaches. The de novo algorithms commonly in use are slow, require considerable memory, and result in different OTU assignments as new data are added (45). Finally, we also need independent validation cohorts to test the performance of a diagnostic model. To realize the potential for using ML approaches with microbiome data, it is necessary that we direct our efforts to overcome these challenges.

Our study highlights the need to make educated choices at every step of developing an ML model with microbiome data.
We created an aspirational rubric that researchers can use to identify potential pitfalls when using ML in microbiome studies and ways to avoid them {[}Table S1{]}. We highlighted the trade-offs between model complexity and interpretability, the need for tuning hyperparameters, the utility of held-out test sets for evaluating predictive performance, and the importance of considering correlation structures in datasets for reliable interpretation. We showed the importance of interpretability for generating hypotheses to identify causal, biological relationships and for identifying inconsistencies in model setup. Furthermore, we underscored the importance of proper experimental design and methods to help us achieve the level of validity and accountability we want from models built for patient health. \subsection{Materials and Methods}\label{materials-and-methods} \textbf{Data collection and study population.} The original stool samples described in our analysis were obtained from patients recruited by Great Lakes-New England Early Detection Research Network (5). Stool samples were provided by adults who were undergoing a scheduled screening or surveillance colonoscopy. Participants were recruited from Toronto (ON, Canada), Boston (MA, USA), Houston (TX, USA), and Ann Arbor (MI, USA). Patients' colonic health was visually assessed by colonoscopy with bowel preparation and tissue histopathology of all resected lesions. We assigned patients into two classes: those with healthy colons and those with screen relevant neoplasias (SRNs). The healthy class included patients with healthy colons or non-advanced adenomas, whereas the SRN class included patients with advanced adenomas or carcinomas (46). Patients with an adenoma greater than 1 cm, more than three adenomas of any size, or an adenoma with villous histology were classified as having advanced adenomas (46). There were 172 patients with normal colonoscopies, 198 with adenomas, and 120 with carcinomas. Of the 198 adenomas, 109 were identified as advanced adenomas. Together 261 patients were classified as healthy and 229 patients were classified as having an SRN. \textbf{16S rRNA gene sequencing data.} Stool samples provided by the patients were used for 16S rRNA gene sequencing to measure bacterial population abundances. The sequence data used in our analyses were originally generated by Baxter et al. (available through NCBI Sequence Read Archive {[}SRP062005{]}, (5)). The OTU abundance table was generated by Sze et al (47), who processed the 16S rRNA sequences in mothur (v1.39.3) using the default quality filtering methods, identifying and removing chimeric sequences using VSEARCH, and assigning to OTUs at 97\% similarity using the OptiClust algorithm (45, 48, 49); (\url{https://github.com/SchlossLab/Sze_CRCMetaAnalysis_mBio_2018/blob/master/data/process/baxter/baxter.0.03.subsample.shared}). These OTU abundances were the features we used to predict colorectal health of the patients. There were 6920 OTUs. OTU abundances were subsampled to the size of the smallest sample and normalized across samples such that the highest abundance of each OTU would be 1, and the lowest would be 0. \textbf{Model training and evaluation.} Models were trained using the caret package (v.6.0.81) in R (v.3.5.0). We modified the caret code to calculate decision values for models generated using L2-regularized SVM with linear kernel and L1-regularized SVM with linear kernel. 
The code for these changes to the L2-regularized SVM with linear kernel and L1-regularized SVM with linear kernel models is available at \url{https://github.com/SchlossLab/Topcuoglu_ML_mBio_2020/blob/master/data/caret_models/svmLinear3.R} and at \url{https://github.com/SchlossLab/Topcuoglu_ML_mBio_2020/blob/master/data/caret_models/svmLinear4.R}, respectively. For hyperparameter selection, we started with a granular grid search. Then we narrowed and fine-tuned the range of each hyperparameter. For L2-regularized logistic regression and L1 and L2-regularized SVM with linear and radial basis function kernels, we tuned the cost hyperparameter, which controls the regularization strength, where smaller values specify stronger regularization. For SVM with radial basis function kernel, we also tuned the sigma hyperparameter, which determines the reach of a single training instance; for a high value of sigma, the SVM decision boundary depends only on the points that are closest to the decision boundary. For the decision tree model, we tuned the depth of the tree, where the deeper the tree, the more splits it has. For random forest, we tuned the number of features to consider when looking for the best tree split. For XGBoost, we tuned the learning rate and the fraction of samples used for fitting the individual base learners. Performing a grid search for hyperparameter selection might not be feasible when there are more than two hyperparameters to tune. In such cases, it is more efficient to use random search or recently developed tools such as Hyperband to identify good hyperparameter configurations (50). The computational burden during model training due to model complexity was reduced by parallelizing segments of the ML pipeline. We parallelized the training of each data-split, which allowed the 100 data-splits to be processed through the ML pipeline simultaneously for each model. It is possible to further parallelize the cross-validation step for each hyperparameter setting, which we did not do in this study.

\textbf{Permutation importance workflow.} We calculated a Spearman's rank-order correlation matrix and defined correlated OTUs as having perfect correlation (correlation coefficient = 1 and p \textless{} 0.01). OTUs without a perfect correlation to each other were permuted individually, whereas correlated ones were grouped together and permuted at the same time.

\textbf{Statistical analysis workflow.} Data summaries, statistical analysis, and data visualizations were performed using R (v.3.5.0) with the tidyverse package (v.1.2.1). We compared the performance of the models pairwise by calculating the difference between AUROC values from the same data-split (for 100 data-splits). We determined if the models were significantly different by calculating the empirical p-value (2 x min(\% of AUROC differences \(\geq\) 0, \% of AUROC differences \(\leq\) 0)) for the double-tail event (e.g., Figure S4).

\textbf{Code availability.} The code for all sequence curation and analysis steps, including an Rmarkdown version of this manuscript, is available at \url{https://github.com/SchlossLab/Topcuoglu_ML_mBio_2020/}.

\textbf{Acknowledgements.} We thank all the study participants of the Great Lakes-New England Early Detection Research Network. We would like to thank the members of the Schloss lab for their valuable feedback. Salary support for MR came from NIH grant 1R01CA215574. Salary support for PDS came from NIH grants P30DK034933 and 1R01CA215574.
\newpage \subsection{References}\label{references} \hypertarget{refs}{} \hypertarget{ref-segata_metagenomic_2011}{} 1. \textbf{Segata N}, \textbf{Izard J}, \textbf{Waldron L}, \textbf{Gevers D}, \textbf{Miropolsky L}, \textbf{Garrett WS}, \textbf{Huttenhower C}. 2011. Metagenomic biomarker discovery and explanation. Genome Biol \textbf{12}:R60. doi:\href{https://doi.org/10.1186/gb-2011-12-6-r60}{10.1186/gb-2011-12-6-r60}. \hypertarget{ref-zeller_potential_2014}{} 2. \textbf{Zeller G}, \textbf{Tap J}, \textbf{Voigt AY}, \textbf{Sunagawa S}, \textbf{Kultima JR}, \textbf{Costea PI}, \textbf{Amiot A}, \textbf{Böhm J}, \textbf{Brunetti F}, \textbf{Habermann N}, \textbf{Hercog R}, \textbf{Koch M}, \textbf{Luciani A}, \textbf{Mende DR}, \textbf{Schneider MA}, \textbf{Schrotz-King P}, \textbf{Tournigand C}, \textbf{Tran Van Nhieu J}, \textbf{Yamada T}, \textbf{Zimmermann J}, \textbf{Benes V}, \textbf{Kloor M}, \textbf{Ulrich CM}, \textbf{Knebel Doeberitz M von}, \textbf{Sobhani I}, \textbf{Bork P}. 2014. Potential of fecal microbiota for early-stage detection of colorectal cancer. Mol Syst Biol \textbf{10}. doi:\href{https://doi.org/10.15252/msb.20145645}{10.15252/msb.20145645}. \hypertarget{ref-zackular_human_2014}{} 3. \textbf{Zackular JP}, \textbf{Rogers MAM}, \textbf{Ruffin MT}, \textbf{Schloss PD}. 2014. The human gut microbiome as a screening tool for colorectal cancer. Cancer Prev Res \textbf{7}:1112--1121. doi:\href{https://doi.org/10.1158/1940-6207.CAPR-14-0129}{10.1158/1940-6207.CAPR-14-0129}. \hypertarget{ref-baxter_dna_2016}{} 4. \textbf{Baxter NT}, \textbf{Koumpouras CC}, \textbf{Rogers MAM}, \textbf{Ruffin MT}, \textbf{Schloss PD}. 2016. DNA from fecal immunochemical test can replace stool for detection of colonic lesions using a microbiota-based model. Microbiome \textbf{4}. doi:\href{https://doi.org/10.1186/s40168-016-0205-y}{10.1186/s40168-016-0205-y}. \hypertarget{ref-baxter_microbiota-based_2016}{} 5. \textbf{Baxter NT}, \textbf{Ruffin MT}, \textbf{Rogers MAM}, \textbf{Schloss PD}. 2016. Microbiota-based model improves the sensitivity of fecal immunochemical test for detecting colonic lesions. Genome Medicine \textbf{8}:37. doi:\href{https://doi.org/10.1186/s13073-016-0290-3}{10.1186/s13073-016-0290-3}. \hypertarget{ref-hale_shifts_2017}{} 6. \textbf{Hale VL}, \textbf{Chen J}, \textbf{Johnson S}, \textbf{Harrington SC}, \textbf{Yab TC}, \textbf{Smyrk TC}, \textbf{Nelson H}, \textbf{Boardman LA}, \textbf{Druliner BR}, \textbf{Levin TR}, \textbf{Rex DK}, \textbf{Ahnen DJ}, \textbf{Lance P}, \textbf{Ahlquist DA}, \textbf{Chia N}. 2017. Shifts in the fecal microbiota associated with adenomatous polyps. Cancer Epidemiol Biomarkers Prev \textbf{26}:85--94. doi:\href{https://doi.org/10.1158/1055-9965.EPI-16-0337}{10.1158/1055-9965.EPI-16-0337}. \hypertarget{ref-pasolli_machine_2016}{} 7. \textbf{Pasolli E}, \textbf{Truong DT}, \textbf{Malik F}, \textbf{Waldron L}, \textbf{Segata N}. 2016. Machine learning meta-analysis of large metagenomic datasets: Tools and biological insights. PLoS Comput Biol \textbf{12}. doi:\href{https://doi.org/10.1371/journal.pcbi.1004977}{10.1371/journal.pcbi.1004977}. \hypertarget{ref-sze_looking_2016}{} 8. \textbf{Sze MA}, \textbf{Schloss PD}. 2016. Looking for a signal in the noise: Revisiting obesity and the microbiome. mBio \textbf{7}. doi:\href{https://doi.org/10.1128/mBio.01018-16}{10.1128/mBio.01018-16}. \hypertarget{ref-walters_meta-analyses_2014}{} 9. \textbf{Walters WA}, \textbf{Xu Z}, \textbf{Knight R}. 2014. 
Meta-analyses of human gut microbes associated with obesity and IBD. FEBS Lett \textbf{588}:4223--4233. doi:\href{https://doi.org/10.1016/j.febslet.2014.09.039}{10.1016/j.febslet.2014.09.039}. \hypertarget{ref-vazquez-baeza_guiding_2018}{} 10. \textbf{Vázquez-Baeza Y}, \textbf{Gonzalez A}, \textbf{Xu ZZ}, \textbf{Washburne A}, \textbf{Herfarth HH}, \textbf{Sartor RB}, \textbf{Knight R}. 2018. Guiding longitudinal sampling in IBD cohorts. Gut \textbf{67}:1743--1745. doi:\href{https://doi.org/10.1136/gutjnl-2017-315352}{10.1136/gutjnl-2017-315352}. \hypertarget{ref-qin_alterations_2014}{} 11. \textbf{Qin N}, \textbf{Yang F}, \textbf{Li A}, \textbf{Prifti E}, \textbf{Chen Y}, \textbf{Shao L}, \textbf{Guo J}, \textbf{Le Chatelier E}, \textbf{Yao J}, \textbf{Wu L}, \textbf{Zhou J}, \textbf{Ni S}, \textbf{Liu L}, \textbf{Pons N}, \textbf{Batto JM}, \textbf{Kennedy SP}, \textbf{Leonard P}, \textbf{Yuan C}, \textbf{Ding W}, \textbf{Chen Y}, \textbf{Hu X}, \textbf{Zheng B}, \textbf{Qian G}, \textbf{Xu W}, \textbf{Ehrlich SD}, \textbf{Zheng S}, \textbf{Li L}. 2014. Alterations of the human gut microbiome in liver cirrhosis. Nature \textbf{513}:59--64. doi:\href{https://doi.org/10.1038/nature13568}{10.1038/nature13568}. \hypertarget{ref-geman_deep_2018}{} 12. \textbf{Geman O}, \textbf{Chiuchisan I}, \textbf{Covasa M}, \textbf{Doloc C}, \textbf{Milici M-R}, \textbf{Milici L-D}. 2018. Deep learning tools for human microbiome big data, pp. 265--275. \emph{In} Balas, VE, Jain, LC, Balas, MM (eds.), Soft computing applications. Springer International Publishing. \hypertarget{ref-thaiss_persistent_2016}{} 13. \textbf{Thaiss CA}, \textbf{Itav S}, \textbf{Rothschild D}, \textbf{Meijer MT}, \textbf{Levy M}, \textbf{Moresi C}, \textbf{Dohnalová L}, \textbf{Braverman S}, \textbf{Rozin S}, \textbf{Malitsky S}, \textbf{Dori-Bachash M}, \textbf{Kuperman Y}, \textbf{Biton I}, \textbf{Gertler A}, \textbf{Harmelin A}, \textbf{Shapiro H}, \textbf{Halpern Z}, \textbf{Aharoni A}, \textbf{Segal E}, \textbf{Elinav E}. 2016. Persistent microbiome alterations modulate the rate of post-dieting weight regain. Nature \textbf{540}:544--551. doi:\href{https://doi.org/10.1038/nature20796}{10.1038/nature20796}. \hypertarget{ref-dadkhah_gut_2019}{} 14. \textbf{Dadkhah E}, \textbf{Sikaroodi M}, \textbf{Korman L}, \textbf{Hardi R}, \textbf{Baybick J}, \textbf{Hanzel D}, \textbf{Kuehn G}, \textbf{Kuehn T}, \textbf{Gillevet PM}. 2019. Gut microbiome identifies risk for colorectal polyps. BMJ Open Gastroenterology \textbf{6}:e000297. doi:\href{https://doi.org/10.1136/bmjgast-2019-000297}{10.1136/bmjgast-2019-000297}. \hypertarget{ref-flemer_oral_2018}{} 15. \textbf{Flemer B}, \textbf{Warren RD}, \textbf{Barrett MP}, \textbf{Cisek K}, \textbf{Das A}, \textbf{Jeffery IB}, \textbf{Hurley E}, \textbf{O`Riordain M}, \textbf{Shanahan F}, \textbf{O`Toole PW}. 2018. The oral microbiota in colorectal cancer is distinctive and predictive. Gut \textbf{67}:1454--1463. doi:\href{https://doi.org/10.1136/gutjnl-2017-314814}{10.1136/gutjnl-2017-314814}. \hypertarget{ref-montassier_pretreatment_2016}{} 16. \textbf{Montassier E}, \textbf{Al-Ghalith GA}, \textbf{Ward T}, \textbf{Corvec S}, \textbf{Gastinne T}, \textbf{Potel G}, \textbf{Moreau P}, \textbf{Cochetiere MF de la}, \textbf{Batard E}, \textbf{Knights D}. 2016. Pretreatment gut microbiome predicts chemotherapy-related bloodstream infection. Genome Medicine \textbf{8}:49. doi:\href{https://doi.org/10.1186/s13073-016-0301-4}{10.1186/s13073-016-0301-4}. \hypertarget{ref-ai_systematic_2017}{} 17. 
\textbf{Ai L}, \textbf{Tian H}, \textbf{Chen Z}, \textbf{Chen H}, \textbf{Xu J}, \textbf{Fang J-Y}. 2017. Systematic evaluation of supervised classifiers for fecal microbiota-based prediction of colorectal cancer. Oncotarget \textbf{8}:9546--9556. doi:\href{https://doi.org/10.18632/oncotarget.14488}{10.18632/oncotarget.14488}. \hypertarget{ref-dai_multi-cohort_2018}{} 18. \textbf{Dai Z}, \textbf{Coker OO}, \textbf{Nakatsu G}, \textbf{Wu WKK}, \textbf{Zhao L}, \textbf{Chen Z}, \textbf{Chan FKL}, \textbf{Kristiansen K}, \textbf{Sung JJY}, \textbf{Wong SH}, \textbf{Yu J}. 2018. Multi-cohort analysis of colorectal cancer metagenome identified altered bacteria across populations and universal bacterial markers. Microbiome \textbf{6}:70. doi:\href{https://doi.org/10.1186/s40168-018-0451-2}{10.1186/s40168-018-0451-2}. \hypertarget{ref-mossotto_classification_2017}{} 19. \textbf{Mossotto E}, \textbf{Ashton JJ}, \textbf{Coelho T}, \textbf{Beattie RM}, \textbf{MacArthur BD}, \textbf{Ennis S}. 2017. Classification of paediatric inflammatory bowel disease using machine learning. Scientific Reports \textbf{7}. doi:\href{https://doi.org/10.1038/s41598-017-02606-2}{10.1038/s41598-017-02606-2}. \hypertarget{ref-wong_quantitation_2017}{} 20. \textbf{Wong SH}, \textbf{Kwong TNY}, \textbf{Chow T-C}, \textbf{Luk AKC}, \textbf{Dai RZW}, \textbf{Nakatsu G}, \textbf{Lam TYT}, \textbf{Zhang L}, \textbf{Wu JCY}, \textbf{Chan FKL}, \textbf{Ng SSM}, \textbf{Wong MCS}, \textbf{Ng SC}, \textbf{Wu WKK}, \textbf{Yu J}, \textbf{Sung JJY}. 2017. Quantitation of faecal fusobacterium improves faecal immunochemical test in detecting advanced colorectal neoplasia. Gut \textbf{66}:1441--1448. doi:\href{https://doi.org/10.1136/gutjnl-2016-312766}{10.1136/gutjnl-2016-312766}. \hypertarget{ref-statnikov_comprehensive_2013}{} 21. \textbf{Statnikov A}, \textbf{Henaff M}, \textbf{Narendra V}, \textbf{Konganti K}, \textbf{Li Z}, \textbf{Yang L}, \textbf{Pei Z}, \textbf{Blaser MJ}, \textbf{Aliferis CF}, \textbf{Alekseyenko AV}. 2013. A comprehensive evaluation of multicategory classification methods for microbiomic data. Microbiome \textbf{1}:11. doi:\href{https://doi.org/10.1186/2049-2618-1-11}{10.1186/2049-2618-1-11}. \hypertarget{ref-knights_supervised_2011}{} 22. \textbf{Knights D}, \textbf{Costello EK}, \textbf{Knight R}. 2011. Supervised classification of human microbiota. FEMS Microbiology Reviews \textbf{35}:343--359. doi:\href{https://doi.org/10.1111/j.1574-6976.2010.00251.x}{10.1111/j.1574-6976.2010.00251.x}. \hypertarget{ref-wirbel_meta-analysis_2019}{} 23. \textbf{Wirbel J}, \textbf{Pyl PT}, \textbf{Kartal E}, \textbf{Zych K}, \textbf{Kashani A}, \textbf{Milanese A}, \textbf{Fleck JS}, \textbf{Voigt AY}, \textbf{Palleja A}, \textbf{Ponnudurai R}, \textbf{Sunagawa S}, \textbf{Coelho LP}, \textbf{Schrotz-King P}, \textbf{Vogtmann E}, \textbf{Habermann N}, \textbf{Niméus E}, \textbf{Thomas AM}, \textbf{Manghi P}, \textbf{Gandini S}, \textbf{Serrano D}, \textbf{Mizutani S}, \textbf{Shiroma H}, \textbf{Shiba S}, \textbf{Shibata T}, \textbf{Yachida S}, \textbf{Yamada T}, \textbf{Waldron L}, \textbf{Naccarati A}, \textbf{Segata N}, \textbf{Sinha R}, \textbf{Ulrich CM}, \textbf{Brenner H}, \textbf{Arumugam M}, \textbf{Bork P}, \textbf{Zeller G}. 2019. Meta-analysis of fecal metagenomes reveals global microbial signatures that are specific for colorectal cancer. Nature Medicine \textbf{25}:679. doi:\href{https://doi.org/10.1038/s41591-019-0406-6}{10.1038/s41591-019-0406-6}. \hypertarget{ref-vangay_microbiome_2019}{} 24. 
\textbf{Vangay P}, \textbf{Hillmann BM}, \textbf{Knights D}. 2019. Microbiome learning repo (ML repo): A public repository of microbiome regression and classification tasks. Gigascience \textbf{8}. doi:\href{https://doi.org/10.1093/gigascience/giz042}{10.1093/gigascience/giz042}. \hypertarget{ref-galkin_human_2018}{} 25. \textbf{Galkin F}, \textbf{Aliper A}, \textbf{Putin E}, \textbf{Kuznetsov I}, \textbf{Gladyshev VN}, \textbf{Zhavoronkov A}. 2018. Human microbiome aging clocks based on deep learning and tandem of permutation feature importance and accumulated local effects. bioRxiv. doi:\href{https://doi.org/10.1101/507780}{10.1101/507780}. \hypertarget{ref-reiman_using_2017}{} 26. \textbf{Reiman D}, \textbf{Metwally A}, \textbf{Dai Y}. 2017. Using convolutional neural networks to explore the microbiome, pp. 4269--4272. \emph{In} 2017 39th annual international conference of the IEEE engineering in medicine and biology society (EMBC). \hypertarget{ref-fioravanti_phylogenetic_2017}{} 27. \textbf{Fioravanti D}, \textbf{Giarratano Y}, \textbf{Maggio V}, \textbf{Agostinelli C}, \textbf{Chierici M}, \textbf{Jurman G}, \textbf{Furlanello C}. 2017. Phylogenetic convolutional neural networks in metagenomics. arXiv:170902268 {[}cs, q-bio{]}. \hypertarget{ref-thomas_metagenomic_2019}{} 28. \textbf{Thomas AM}, \textbf{Manghi P}, \textbf{Asnicar F}, \textbf{Pasolli E}, \textbf{Armanini F}, \textbf{Zolfo M}, \textbf{Beghini F}, \textbf{Manara S}, \textbf{Karcher N}, \textbf{Pozzi C}, \textbf{Gandini S}, \textbf{Serrano D}, \textbf{Tarallo S}, \textbf{Francavilla A}, \textbf{Gallo G}, \textbf{Trompetto M}, \textbf{Ferrero G}, \textbf{Mizutani S}, \textbf{Shiroma H}, \textbf{Shiba S}, \textbf{Shibata T}, \textbf{Yachida S}, \textbf{Yamada T}, \textbf{Wirbel J}, \textbf{Schrotz-King P}, \textbf{Ulrich CM}, \textbf{Brenner H}, \textbf{Arumugam M}, \textbf{Bork P}, \textbf{Zeller G}, \textbf{Cordero F}, \textbf{Dias-Neto E}, \textbf{Setubal JC}, \textbf{Tett A}, \textbf{Pardini B}, \textbf{Rescigno M}, \textbf{Waldron L}, \textbf{Naccarati A}, \textbf{Segata N}. 2019. Metagenomic analysis of colorectal cancer datasets identifies cross-cohort microbial diagnostic signatures and a link with choline degradation. Nature Medicine \textbf{25}:667. doi:\href{https://doi.org/10.1038/s41591-019-0405-7}{10.1038/s41591-019-0405-7}. \hypertarget{ref-rudin_please_2018}{} 29. \textbf{Rudin C}. 2018. Please stop explaining black box models for high stakes decisions. arXiv:181110154 {[}cs, stat{]}. \hypertarget{ref-rudin_optimized_2018}{} 30. \textbf{Rudin C}, \textbf{Ustun B}. 2018. Optimized scoring systems: Toward trust in machine learning for healthcare and criminal justice. Interfaces \textbf{48}:449--466. doi:\href{https://doi.org/10.1287/inte.2018.0957}{10.1287/inte.2018.0957}. \hypertarget{ref-Quinn847475}{} 31. \textbf{Quinn TP}, \textbf{Erb I}. 2019. Another look at microbemetabolite interactions: How scale invariant correlations can outperform a neural network. bioRxiv. doi:\href{https://doi.org/10.1101/847475}{10.1101/847475}. \hypertarget{ref-knights_human-associated_2011}{} 32. \textbf{Knights D}, \textbf{Parfrey LW}, \textbf{Zaneveld J}, \textbf{Lozupone C}, \textbf{Knight R}. 2011. Human-associated microbial signatures: Examining their predictive value. Cell Host Microbe \textbf{10}:292--296. doi:\href{https://doi.org/10.1016/j.chom.2011.09.003}{10.1016/j.chom.2011.09.003}. \hypertarget{ref-miller_explanation_2017}{} 33. \textbf{Miller T}. 2017. 
Explanation in artificial intelligence: Insights from the social sciences. arXiv:170607269 {[}cs{]}. \hypertarget{ref-ribeiro_why_2016}{} 34. \textbf{Ribeiro MT}, \textbf{Singh S}, \textbf{Guestrin C}. 2016. ``Why should i trust you?'': Explaining the predictions of any classifier. arXiv:160204938 {[}cs, stat{]}. \hypertarget{ref-nori_interpretml:_2019}{} 35. \textbf{Nori H}, \textbf{Jenkins S}, \textbf{Koch P}, \textbf{Caruana R}. 2019. InterpretML: A unified framework for machine learning interpretability. arXiv:190909223 {[}cs, stat{]}. \hypertarget{ref-10.1093ux2fbioinformaticsux2fbtq134}{} 36. \textbf{Altmann A}, \textbf{Toloşi L}, \textbf{Sander O}, \textbf{Lengauer T}. 2010. Permutation importance: a corrected feature importance measure. Bioinformatics \textbf{26}:1340--1347. doi:\href{https://doi.org/10.1093/bioinformatics/btq134}{10.1093/bioinformatics/btq134}. \hypertarget{ref-breiman_statistical_2001}{} 37. \textbf{Breiman L}. 2001. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statist Sci \textbf{16}:199--231. doi:\href{https://doi.org/10.1214/ss/1009213726}{10.1214/ss/1009213726}. \hypertarget{ref-strobl_bias_2007}{} 38. \textbf{Strobl C}, \textbf{Boulesteix A-L}, \textbf{Zeileis A}, \textbf{Hothorn T}. 2007. Bias in random forest variable importance measures: Illustrations, sources and a solution. BMC Bioinformatics \textbf{8}:25. doi:\href{https://doi.org/10.1186/1471-2105-8-25}{10.1186/1471-2105-8-25}. \hypertarget{ref-dormann_collinearity:_2013}{} 39. \textbf{Dormann CF}, \textbf{Elith J}, \textbf{Bacher S}, \textbf{Buchmann C}, \textbf{Carl G}, \textbf{Carré G}, \textbf{Marquéz JRG}, \textbf{Gruber B}, \textbf{Lafourcade B}, \textbf{Leitão PJ}, \textbf{Münkemüller T}, \textbf{McClean C}, \textbf{Osborne PE}, \textbf{Reineking B}, \textbf{Schröder B}, \textbf{Skidmore AK}, \textbf{Zurell D}, \textbf{Lautenbach S}. 2013. Collinearity: A review of methods to deal with it and a simulation study evaluating their performance. Ecography \textbf{36}:27--46. doi:\href{https://doi.org/10.1111/j.1600-0587.2012.07348.x}{10.1111/j.1600-0587.2012.07348.x}. \hypertarget{ref-li_accurate_2020}{} 40. \textbf{Li J}, \textbf{Liu L}, \textbf{Le TD}, \textbf{Liu J}. 2020. Accurate data-driven prediction does not mean high reproducibility. Nat Mach Intell \textbf{2}:13--15. doi:\href{https://doi.org/10.1038/s42256-019-0140-2}{10.1038/s42256-019-0140-2}. \hypertarget{ref-sze_fecal_2019}{} 41. \textbf{Sze MA}, \textbf{Topçuoğlu BD}, \textbf{Lesniak NA}, \textbf{Ruffin MT}, \textbf{Schloss PD}. 2019. Fecal short-chain fatty acids are not predictive of colonic tumor status and cannot be predicted based on bacterial community structure. mBio \textbf{10}:e01454--19. doi:\href{https://doi.org/10.1128/mBio.01454-19}{10.1128/mBio.01454-19}. \hypertarget{ref-kocheturov_massive_2019}{} 42. \textbf{Kocheturov A}, \textbf{Pardalos PM}, \textbf{Karakitsiou A}. 2019. Massive datasets and machine learning for computational biomedicine: Trends and challenges. Ann Oper Res \textbf{276}:5--34. doi:\href{https://doi.org/10.1007/s10479-018-2891-2}{10.1007/s10479-018-2891-2}. \hypertarget{ref-kim_improved_2018}{} 43. \textbf{Kim M}, \textbf{Oh I}, \textbf{Ahn J}. 2018. An improved method for prediction of cancer prognosis by network learning. Genes \textbf{9}:478. doi:\href{https://doi.org/10.3390/genes9100478}{10.3390/genes9100478}. \hypertarget{ref-wiens_no_2019}{} 44. 
\textbf{Wiens J}, \textbf{Saria S}, \textbf{Sendak M}, \textbf{Ghassemi M}, \textbf{Liu VX}, \textbf{Doshi-Velez F}, \textbf{Jung K}, \textbf{Heller K}, \textbf{Kale D}, \textbf{Saeed M}, \textbf{Ossorio PN}, \textbf{Thadaney-Israni S}, \textbf{Goldenberg A}. 2019. Do no harm: A roadmap for responsible machine learning for health care. Nat Med \textbf{25}:1337--1340. doi:\href{https://doi.org/10.1038/s41591-019-0548-6}{10.1038/s41591-019-0548-6}. \hypertarget{ref-westcott_opticlust_2017}{} 45. \textbf{Westcott SL}, \textbf{Schloss PD}. 2017. OptiClust, an Improved Method for Assigning Amplicon-Based Sequence Data to Operational Taxonomic Units. mSphere \textbf{2}. doi:\href{https://doi.org/10.1128/mSphereDirect.00073-17}{10.1128/mSphereDirect.00073-17}. \hypertarget{ref-redwood_stool_2016}{} 46. \textbf{Redwood DG}, \textbf{Asay ED}, \textbf{Blake ID}, \textbf{Sacco PE}, \textbf{Christensen CM}, \textbf{Sacco FD}, \textbf{Tiesinga JJ}, \textbf{Devens ME}, \textbf{Alberts SR}, \textbf{Mahoney DW}, \textbf{Yab TC}, \textbf{Foote PH}, \textbf{Smyrk TC}, \textbf{Provost EM}, \textbf{Ahlquist DA}. 2016. Stool DNA testing for screening detection of colorectal neoplasia in alaska native people. Mayo Clin Proc \textbf{91}:61--70. doi:\href{https://doi.org/10.1016/j.mayocp.2015.10.008}{10.1016/j.mayocp.2015.10.008}. \hypertarget{ref-sze_leveraging_2018}{} 47. \textbf{Sze MA}, \textbf{Schloss PD}. 2018. Leveraging existing 16S rRNA gene surveys to identify reproducible biomarkers in individuals with colorectal tumors. mBio \textbf{9}:e00630--18. doi:\href{https://doi.org/10.1128/mBio.00630-18}{10.1128/mBio.00630-18}. \hypertarget{ref-schloss_introducing_2009}{} 48. \textbf{Schloss PD}, \textbf{Westcott SL}, \textbf{Ryabin T}, \textbf{Hall JR}, \textbf{Hartmann M}, \textbf{Hollister EB}, \textbf{Lesniewski RA}, \textbf{Oakley BB}, \textbf{Parks DH}, \textbf{Robinson CJ}, \textbf{Sahl JW}, \textbf{Stres B}, \textbf{Thallinger GG}, \textbf{Van Horn DJ}, \textbf{Weber CF}. 2009. Introducing mothur: Open-Source, Platform-Independent, Community-Supported Software for Describing and Comparing Microbial Communities. ApplEnvironMicrobiol \textbf{75}:7537--7541. \hypertarget{ref-rognes_vsearch_2016}{} 49. \textbf{Rognes T}, \textbf{Flouri T}, \textbf{Nichols B}, \textbf{Quince C}, \textbf{Mahé F}. 2016. VSEARCH: A versatile open source tool for metagenomics. PeerJ \textbf{4}:e2584. doi:\href{https://doi.org/10.7717/peerj.2584}{10.7717/peerj.2584}. \hypertarget{ref-li_hyperband:_2016}{} 50. \textbf{Li L}, \textbf{Jamieson K}, \textbf{DeSalvo G}, \textbf{Rostamizadeh A}, \textbf{Talwalkar A}. 2016. Hyperband: A novel bandit-based approach to hyperparameter optimization. arXiv:160306560 {[}cs, stat{]}. \newpage \textbf{Table 1.} Characteristics of the machine learning models in our comparative study. 
\begin{longtable}[]{@{}lll@{}}
\toprule
\begin{minipage}[b]{0.23\columnwidth}\raggedright\strut
\textbf{Model}\strut
\end{minipage} & \begin{minipage}[b]{0.50\columnwidth}\raggedright\strut
\textbf{Description}\strut
\end{minipage} & \begin{minipage}[b]{0.18\columnwidth}\raggedright\strut
\textbf{Linearity}\strut
\end{minipage}\tabularnewline
\midrule
\endhead
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
Logistic regression\strut
\end{minipage} & \begin{minipage}[t]{0.50\columnwidth}\raggedright\strut
A predictive regression analysis used when the dependent variable is binary\strut
\end{minipage} & \begin{minipage}[t]{0.18\columnwidth}\raggedright\strut
Linear\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
SVM with linear kernel\strut
\end{minipage} & \begin{minipage}[t]{0.50\columnwidth}\raggedright\strut
A classifier that is defined by an optimal linear separating hyperplane that discriminates between labels\strut
\end{minipage} & \begin{minipage}[t]{0.18\columnwidth}\raggedright\strut
Linear\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
SVM with radial basis kernel\strut
\end{minipage} & \begin{minipage}[t]{0.50\columnwidth}\raggedright\strut
A classifier that is defined by an optimal non-linear separating hyperplane that discriminates between labels\strut
\end{minipage} & \begin{minipage}[t]{0.18\columnwidth}\raggedright\strut
Non-linear\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
Decision tree\strut
\end{minipage} & \begin{minipage}[t]{0.50\columnwidth}\raggedright\strut
A classifier that sorts samples from the root down to a leaf node, testing an attribute at each node to discriminate between labels\strut
\end{minipage} & \begin{minipage}[t]{0.18\columnwidth}\raggedright\strut
Non-linear\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
Random forest\strut
\end{minipage} & \begin{minipage}[t]{0.50\columnwidth}\raggedright\strut
A classifier that is an ensemble of decision trees that grows randomly with subsampled data\strut
\end{minipage} & \begin{minipage}[t]{0.18\columnwidth}\raggedright\strut
Non-linear\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.23\columnwidth}\raggedright\strut
Gradient Boosted Trees (XGBoost)\strut
\end{minipage} & \begin{minipage}[t]{0.50\columnwidth}\raggedright\strut
A classifier that is an ensemble of decision trees that grows greedily\strut
\end{minipage} & \begin{minipage}[t]{0.18\columnwidth}\raggedright\strut
Non-linear\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}
\newpage
\textbf{Figure 1. Machine learning pipeline.} We split the data to create a training (80\%) and held-out test set (20\%). The splits were stratified to maintain the overall class distribution. We performed five-fold cross-validation on the training data to select the best hyperparameter setting and then used these hyperparameters to train the models. The model was evaluated on the held-out data set. Abbreviations: cvAUC, cross-validation area under the receiver operating characteristic curve. \hfill\break \textbf{Figure 2. Generalization and classification performance of ML models using AUROC values of all cross-validation and testing performances.} The median AUROC for diagnosing individuals with SRN using bacterial abundances was higher than chance (depicted by a vertical line at 0.50) for all the ML models.
The predictive performance of the random forest model was higher than that of the other ML models, though not significantly (p \textgreater{} 0.05). L2-regularized logistic regression, XGBoost, and L2-regularized SVM with linear and radial basis function kernel performances were not significantly different from one another. The boxplot shows quartiles at the box ends and the median as the horizontal line in the box. The whiskers show the farthest points that were not outliers. Outliers were defined as those data points that were not within 1.5 times the interquartile range. \hfill\break \textbf{Figure 3. Interpretation of the linear ML models.} The absolute feature weights of (A) L1-regularized SVM with linear kernel, (B) L2-regularized SVM with linear kernel, and (C) L2-regularized logistic regression were ranked from highest rank, 1, to lowest rank, 100, for each data-split. The feature ranks of the 20 highest ranked OTUs based on their median ranks (median shown in black) are reported here. OTUs that were associated with classifying a subject as being healthy had negative signs and are shown in blue. OTUs that were associated with classifying a subject as having an SRN had positive signs and are shown in red. \hfill\break \textbf{Figure 4. Interpretation of the non-linear ML models.} (A) SVM with radial basis kernel, (B) decision tree, (C) random forest, and (D) XGBoost feature importances were explained using permutation importance on the held-out test data set. The gray rectangle and the dashed line show the IQR and the median of the baseline testing AUROC without any permutation. The 20 OTUs that caused the largest decrease in the AUROC when permuted are reported here. The colors of the box plots represent the OTUs that were shared among the different models: yellow were OTUs that were shared among all the non-linear models; green were OTUs that were shared among the tree-based models; turquoise were the OTUs shared among SVM with radial basis kernel, decision tree, and XGBoost; pink were the OTUs shared among SVM with radial basis kernel and XGBoost only; red were the OTUs shared among random forest and XGBoost only; and blue were the OTUs shared among decision tree and random forest only. For all of the tree-based models, a \emph{Peptostreptococcus} species (OTU00367) had the largest impact on predictive performance. \hfill\break \textbf{Figure 5. Training times of seven ML models.} The median training time was longest for XGBoost and shortest for L2-regularized logistic regression. \newpage \textbf{Figure S1. NMDS ordination of Bray-Curtis distances.} NMDS ordination relating the community structures of the fecal microbiota from 490 patients (261 patients with normal colonoscopies and 229 patients who have screen relevant neoplasias; SRNs). \hfill\break \textbf{Figure S2. Hyperparameter setting performances for linear models.} Mean cross-validation AUROC values for (A) L2-regularized logistic regression, (B) L1-regularized SVM with linear kernel, and (C) L2-regularized SVM with linear kernel when different hyperparameters were used in training the model. The stars represent the highest performing hyperparameter setting for each model. \hfill\break \textbf{Figure S3. Hyperparameter setting performances for non-linear models.} Mean cross-validation AUROC values for (A) decision tree, (B) random forest, (C) SVM with radial basis kernel, and (D) XGBoost when different hyperparameters were used in training the model. The stars represent the highest performing hyperparameter setting for the models.
\hfill\break \textbf{Figure S4. Histogram of AUROC differences between L2-regularized logistic regression and random forest for each of the hundred data-splits.} In 75\% of data-splits, the AUROC of random forest was greater than that of L2-regularized logistic regression. The p-value was manually calculated using the sampling distribution of the test statistic under the null hypothesis. We tested how often random forest performed more accurately than L2-regularized logistic regression. The null hypothesis is that the distribution of the difference between the AUROC values of random forest and L2-regularized logistic regression is symmetric about 0; therefore, the p-value was calculated for a double-tail event. \hfill\break \textbf{Figure S5. Classification performance of ML models across cross-validation when trained on a subset of the dataset.} (A) L2-regularized logistic regression and (B) random forest models were trained using the original study design with 490 subjects and subsets of it with 15, 30, 60, 120, and 245 subjects. The range among the cross-validation AUROC values within both models at smaller sample sizes was much larger than when the full collection of samples was used to train and validate the models but included the ranges observed with the more complete datasets. \hfill\break \textbf{Figure S6. Permutation importance analysis.} Permutation importance analysis measures the decrease in the predictive performance of the model after we permute (A) a feature's or (B) a group of correlated features' values, which breaks the relationship between the feature and the diagnosis. \hfill\break \textbf{Figure S7. Interpretation of the linear ML models with permutation importance.} (A) L1-regularized SVM with linear kernel, (B) L2-regularized SVM with linear kernel, and (C) L2-regularized logistic regression were interpreted using permutation importance on the held-out test set. \hfill\break \textbf{Figure S8. Relative abundances of the 20 most important OTUs in L2-regularized logistic regression and random forest models.} The 20 most important OTUs were chosen for the (A) random forest and (B) L2-regularized logistic regression models by permutation importance and by ranking feature coefficients, respectively. The relative abundances of these OTUs were compared based on the diagnosis of the patients. The minimal differences between the relative abundances of these OTUs show that it is not possible to differentiate disease versus healthy states by focusing on individual taxa. \hfill\break \textbf{Table S1. An aspirational rubric for evaluating the rigor of ML practices applied to microbiome data.} \end{document}
{ "alphanum_fraction": 0.7906345288, "avg_line_length": 52.8955453149, "ext": "tex", "hexsha": "c2cb1258494b28be0ebf0bee4d71d4632c34839c", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-10-31T14:30:24.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-18T05:43:37.000Z", "max_forks_repo_head_hexsha": "68b9385bbf9249900e52bfacbd4aa6f1a1dad830", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ivanvujkc/Topcuoglu_ML_mBio_2020", "max_forks_repo_path": "submission/manuscript.tex", "max_issues_count": 7, "max_issues_repo_head_hexsha": "68b9385bbf9249900e52bfacbd4aa6f1a1dad830", "max_issues_repo_issues_event_max_datetime": "2019-12-09T22:49:42.000Z", "max_issues_repo_issues_event_min_datetime": "2019-10-23T15:30:51.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ivanvujkc/Topcuoglu_ML_mBio_2020", "max_issues_repo_path": "submission/manuscript.tex", "max_line_length": 129, "max_stars_count": 11, "max_stars_repo_head_hexsha": "68b9385bbf9249900e52bfacbd4aa6f1a1dad830", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "BTopcuoglu/DeepLearning", "max_stars_repo_path": "submission/manuscript.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-31T21:33:11.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-05T15:38:21.000Z", "num_tokens": 19351, "size": 68870 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{anysize}
\usepackage{color}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{float}
\usepackage{subfigure}

\definecolor{dkgreen}{rgb}{0, 0.6, 0}
\definecolor{gray}{rgb}{0.5, 0.5, 0.5}

\usepackage{listings}
\lstset{
language=Matlab, % choose the language of the code
keywords={break,case,catch,continue,else,elseif,end,for,function,
global,if,otherwise,persistent,return,switch,try,while},
keywordstyle=\color{blue},
commentstyle=\color{red},
basicstyle=\footnotesize, % the size of the fonts that are used for the code
numbers=left, % where to put the line-numbers
numberstyle=\footnotesize, % the size of the fonts that are used for the line-numbers
stepnumber=1, % the step between two line-numbers. If it is 1 each line will be numbered
numbersep=5pt, % how far the line-numbers are from the code
backgroundcolor=\color{white}, % choose the background color. You must add \usepackage{color}
showspaces=false, % show spaces adding particular underscores
showstringspaces=false, % underline spaces within strings
showtabs=false, % show tabs within strings adding particular underscores
frame=single, % adds a frame around the code
tabsize=2, % sets default tabsize to 2 spaces
captionpos=t, % sets the caption-position (t=top, b=bottom)
breaklines=true, % sets automatic line breaking
breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace
escapeinside={\%*}{*)}, % if you want to add a comment within your code
flexiblecolumns=true
}

\usepackage{caption}
\DeclareCaptionFont{white}{\color{white}}
\DeclareCaptionFormat{listing}{\colorbox{gray}{\parbox[c]{\textwidth}{#1#2#3}}}
\captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white}

\setlength\parindent{0pt}
\setlength{\parskip}{10pt}
\marginsize{3cm}{2cm}{2cm}{2cm}

\title{Visual Perception\\ OpenCV Toolbox\\ User Manual}
\author{Emre Ozan Alkan\\ \{[email protected]\}\\ MSCV-5}
\date{\today}

\begin{document}
\maketitle

\section{Introduction}
Developing Computer Vision applications can be tough. One should consider the capabilities of the framework and, at the same time, how the framework will react and perform on the given test data. In other cases, one may simply want to see the effect of consecutive image processing functions on test data. In these and many other cases, small toolboxes built on top of the frameworks help people to see results easily and quickly, and enable fast prototyping. Hence this toolbox was created for Computer Vision application developers and enthusiasts who want to apply image processing functions to their images with very basic knowledge. It is developed with a minimal design, which makes it easy to use. However, it is also a powerful toolbox thanks to its support of parameters.

\section{Getting Started}
\subsection{Image Toolbox}
\subsubsection{Basics}
This application embeds functions of OpenCV (tested on 2.4.8) for users. The application consists of one main window and sub-windows opened by OpenCV to display the input and output images. It accepts one image at a time and stores it as the original image. Each modification is applied to the output image consecutively. There is also a history option that keeps track of changes to the output image, so you can revert to any previous state you want. By default, all operations are disabled until you select an image.

\subsubsection{User Interface}
The user interface consists of 4 buttons and a tab view which embeds all image processing functions.
\par
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{toolbox1.png}
\caption{VP OpenCV Toolbox}
\end{center}
\end{figure}
Main buttons:
\begin{itemize}
\item Load Image: Loads an image.
\item Save Output Image: Saves the current output image.
\item Reset Output Image: Resets the output to the originally loaded image.
\item Clear Images and Close All Windows: Clears the history stack and the output image, and closes all windows.
\end{itemize}
Tabs:
\begin{itemize}
\item History: History of changes of the output image
\item Noise: Adding salt and pepper noise
\item Logo: Adding logo and ROI
\item Color: Changing color space
\item Histogram: Calculating and Equalizing Histogram.
\item Morph: Morphological Operations.
\item Blur: Blurring operations.
\item Sobel: Sobel and Laplacian derivative operators.
\item Sharpen: Sharpening images.
\item Edge: Canny Edge Detect.
\item Hough: Hough Transform for finding lines and circles
\item Contour: Finding contours of connected objects.
\item Harris: Harris corner extraction.
\item Features: Extracting FAST, SURF, SIFT key points.
\end{itemize}
\subsubsection{Image Input and Output}
Image input and output are very easy in this toolbox. The "Load Image" button in the upper left corner opens a file dialog and enables you to select any image on your computer. Supported formats are *.png, *.jpg, *.jpeg, and *.bmp; by default, OpenCV loads images in the BGR color space. As soon as you select and load an image, the image is opened in a window titled "Input", which already has a toolbar that enables you to zoom in/out, show pixel values and even save the image.
\par
Even though the Qt-built OpenCV windows have a saving option, there is also a save option for the current output image in the upper left of the application. It opens a save dialog and lets you save your image any place you want.
\subsubsection{Image History}
Image history is one of the strongest functionalities of this toolbox. After each of your operations, the output image is saved and kept in the history.
\par
At any moment, you can go to the history tab and click the "Revert" button to go back to that state of the image. Another nice feature is that you can view the history items by clicking "View"; they pop up in separate windows, and you can open as many as you want.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.42]{toolboxHistory.png}
\caption{History with Details}
\end{center}
\end{figure}
\subsubsection{Noise}
In the "Noise" tab, there is only the Salt and Pepper noise option. It adds white and black pixels to your image with the given amount as the "Rate" parameter, by default set to 100. The Salt and Pepper options have check-boxes, so you have the option to add them separately.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.8]{toolboxNoise.png}
\caption{Salt and Pepper Noise}
\end{center}
\end{figure}
\subsubsection{Logo}
In the "Logo" tab, first you need to load a logo by clicking "Load Logo". You should select a logo smaller than or equal to the output image size, otherwise a warning message will appear. Click "Add Logo" to add your logo. There are 5 parameters you can manipulate before adding the logo to your image.
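The Alpha, Beta and Gamma parameters mirror the arguments of OpenCV's \texttt{addWeighted} function, so the blend presumably boils down to something like the following sketch (shown with the Python bindings only for illustration; the toolbox itself uses the C++ API, and the file names and values below are made up):
\begin{lstlisting}[language=Python]
import cv2

image = cv2.imread("output.png")      # current output image
logo  = cv2.imread("logo.png")        # logo, not larger than the image
x, y  = 10, 10                        # logo offset (X, Y)
alpha, beta, gamma = 0.8, 0.4, 0.0    # image weight, logo weight, offset

h, w = logo.shape[:2]
roi = image[y:y+h, x:x+w]             # region of interest at (X, Y)
# Weighted sum: roi*alpha + logo*beta + gamma
image[y:y+h, x:x+w] = cv2.addWeighted(roi, alpha, logo, beta, gamma)
\end{lstlisting}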
Here are the parameters:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
X & \multicolumn{4}{|c|}{Logo offset X} \\ \hline
Y & \multicolumn{4}{|c|}{Logo offset Y} \\ \hline
Alpha & \multicolumn{4}{|c|}{Weight of the image} \\ \hline
Beta & \multicolumn{4}{|c|}{Weight of the logo} \\ \hline
Gamma & \multicolumn{4}{|c|}{Scalar added to each pixel} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.7]{toolboxLogo.png}
\caption{Adding Logo}
\end{center}
\end{figure}
\subsubsection{Color}
In the "Color" tab, you can change the color space of your current image. Supported color spaces are: BGR, RGB, GRAY, HSV, HLS. The 'Current Color Space' combo-box is disabled by default and shows the current color space of the image. Each time you change the color space, both the 'Current Color Space' and 'New Color Space' combo-boxes are updated accordingly.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.7]{toolboxColorSpace.png}
\caption{Changing Color Space}
\end{center}
\end{figure}
\subsubsection{Histogram}
In the "Histogram" tab, you can calculate the histogram of the current image or equalize its histogram. There is also an option to choose the channel for viewing histograms of multi-channel images.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.8]{toolboxHistogram.png}
\caption{Histogram Calculation}
\end{center}
\end{figure}
\subsubsection{Morph}
In the "Morph" tab, you can apply morphological operations to your current image. Available operations are: Dilation, Erosion, Opening, Closing, Morphological Gradient, Top Hat and Black Hat.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Operation & \multicolumn{4}{|c|}{Dilation/Erosion/Opening/Closing/Morphological Gradient/Top Hat/Black Hat} \\ \hline
Iteration Count & \multicolumn{4}{|c|}{Number of times erosion and dilation are applied} \\ \hline
Kernel Size & \multicolumn{4}{|c|}{Size of the structuring element} \\ \hline
Kernel Type & \multicolumn{4}{|c|}{Element shape} \\ \hline
Kernel Anchor X & \multicolumn{4}{|c|}{x-coordinate of the kernel anchor} \\ \hline
Kernel Anchor Y & \multicolumn{4}{|c|}{y-coordinate of the kernel anchor} \\ \hline
Image Padding Method & \multicolumn{4}{|c|}{Pixel extrapolation method} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxMorph.png}
\caption{Morphological Operations}
\end{center}
\end{figure}
\subsubsection{Blur}
In the "Blur" tab, you can blur your image with Homogeneous, Gaussian, Median or Bilateral smoothing. You also have options to specify the kernel size, its anchor and the border replication method.
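These smoothing options correspond to OpenCV's standard blurring functions (\texttt{blur}, \texttt{GaussianBlur}, \texttt{medianBlur} and \texttt{bilateralFilter}). As a rough Python illustration of a single Gaussian blur with an explicit border method (not the toolbox's own code):
\begin{lstlisting}[language=Python]
import cv2

image = cv2.imread("output.png")
# 5x5 Gaussian kernel; sigma=0 lets OpenCV derive it from the kernel
# size. The border type corresponds to the Image Padding Method option.
blurred = cv2.GaussianBlur(image, (5, 5), 0,
                           borderType=cv2.BORDER_DEFAULT)
\end{lstlisting}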
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Blurring Method & \multicolumn{4}{|c|}{Homogeneous/Gaussian/Median/Bilateral} \\ \hline
Kernel Size & \multicolumn{4}{|c|}{Size of the smoothing kernel} \\ \hline
Kernel Anchor X & \multicolumn{4}{|c|}{x-coordinate of the kernel anchor} \\ \hline
Kernel Anchor Y & \multicolumn{4}{|c|}{y-coordinate of the kernel anchor} \\ \hline
Image Padding Method & \multicolumn{4}{|c|}{Pixel extrapolation method} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxBlur.png}
\caption{Blurring}
\end{center}
\end{figure}
\subsubsection{Sobel}
In the "Sobel" tab, you can use the Sobel and Laplacian operators with many parameters.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Operation & \multicolumn{4}{|c|}{Sobel/Laplacian} \\ \hline
Output Depth & \multicolumn{4}{|c|}{CV\_8U/CV\_16U/CV\_16S/CV\_32F/CV\_64F} \\ \hline
Image Padding Method & \multicolumn{4}{|c|}{Pixel extrapolation method} \\ \hline
Sobel Kernel Size & \multicolumn{4}{|c|}{Size of the extended Sobel kernel; it must be 1, 3, 5, or 7} \\ \hline
Sobel X Order & \multicolumn{4}{|c|}{Order of the derivative x} \\ \hline
Sobel Y Order & \multicolumn{4}{|c|}{Order of the derivative y} \\ \hline
Scale Factor & \multicolumn{4}{|c|}{Optional scale factor for the computed derivative values} \\ \hline
Delta Offset & \multicolumn{4}{|c|}{Optional delta value that is added to the results} \\ \hline
Laplacian Aperture Size & \multicolumn{4}{|c|}{Aperture size used to compute the second-derivative filters} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxSobel.png}
\caption{Laplacian Operator}
\end{center}
\end{figure}
\subsubsection{Sharpen}
In the "Sharpen" tab, you can sharpen images. Behind the scenes, a Gaussian-blurred image is combined with the current image using a weighted sum.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Kernel Size & \multicolumn{4}{|c|}{Gaussian kernel size} \\ \hline
Gaussian Sigma & \multicolumn{4}{|c|}{Gaussian kernel standard deviation} \\ \hline
Image Padding Method & \multicolumn{4}{|c|}{Pixel extrapolation method} \\ \hline
Filter Weight & \multicolumn{4}{|c|}{Weight of the first array elements} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxSharpen.png}
\caption{Sharpening}
\end{center}
\end{figure}
\subsubsection{Edge}
In the "Edge" tab, you can use the Canny edge detector algorithm to find edges in your image. There are 4 parameters for Canny.
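The four settings map directly onto the arguments of OpenCV's \texttt{Canny} function; a minimal Python illustration (not the toolbox's own code) is:
\begin{lstlisting}[language=Python]
import cv2

gray = cv2.imread("output.png", cv2.IMREAD_GRAYSCALE)
# threshold1/threshold2 drive the hysteresis, apertureSize feeds the
# internal Sobel operator, L2gradient selects the L2 gradient norm.
edges = cv2.Canny(gray, 100, 200, apertureSize=3, L2gradient=False)
\end{lstlisting}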
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Threshold1 & \multicolumn{4}{|c|}{First threshold for the hysteresis procedure} \\ \hline
Threshold2 & \multicolumn{4}{|c|}{Second threshold for the hysteresis procedure} \\ \hline
Aperture Size & \multicolumn{4}{|c|}{Aperture size for the Sobel() operator} \\ \hline
L2gradient & \multicolumn{4}{|c|}{Whether the L2 norm should be used to calculate the image gradient magnitude} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxEdge.png}
\caption{Canny Edge Detector}
\end{center}
\end{figure}
\subsubsection{Hough}
In the "Hough" tab, you can find lines and circles with the Hough Transform method. By choosing the find method, Lines or Circles, the related parameters are enabled or disabled.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\multicolumn{5}{|c|}{\textbf{Find Lines}} \\ \hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Threshold & \multicolumn{4}{|c|}{Accumulator threshold parameter} \\ \hline
Theta & \multicolumn{4}{|c|}{Angle resolution of the accumulator in radians} \\ \hline
Rho & \multicolumn{4}{|c|}{Distance resolution of the accumulator in pixels} \\ \hline
SRN & \multicolumn{4}{|c|}{For the multi-scale Hough transform, it is a divisor for the distance resolution rho} \\ \hline
STN & \multicolumn{4}{|c|}{For the multi-scale Hough transform, it is a divisor for the distance resolution theta} \\ \hline
\multicolumn{5}{|c|}{\textbf{Find Circles}} \\ \hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
DP & \multicolumn{4}{|c|}{Inverse ratio of the accumulator resolution to the image resolution} \\ \hline
Min Dist & \multicolumn{4}{|c|}{Minimum distance between the centers of the detected circles} \\ \hline
Param1 & \multicolumn{4}{|c|}{First method-specific parameter} \\ \hline
Param2 & \multicolumn{4}{|c|}{Second method-specific parameter} \\ \hline
Min Radius & \multicolumn{4}{|c|}{Minimum circle radius} \\ \hline
Max Radius & \multicolumn{4}{|c|}{Maximum circle radius} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxHough.png}
\caption{Hough Transform}
\end{center}
\end{figure}
\subsubsection{Contour}
In the "Contour" tab, you can find contours of connected objects and draw them onto the image. There are many parameters available.
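Under the hood this presumably combines a binary threshold with OpenCV's \texttt{findContours} and \texttt{drawContours}; a compact Python sketch of such a pipeline (illustrative only, using the two-value return convention of OpenCV 2.4 and 4.x) is:
\begin{lstlisting}[language=Python]
import cv2

gray = cv2.imread("output.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
# Retrieval mode and approximation method match the Mode/Method options.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
result = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.drawContours(result, contours, -1, (0, 255, 0), 2)
\end{lstlisting}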
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Mode & \multicolumn{4}{|c|}{Contour retrieval mode} \\ \hline
Method & \multicolumn{4}{|c|}{Contour approximation method} \\ \hline
Offset X & \multicolumn{4}{|c|}{Optional offset by which every contour point is shifted} \\ \hline
Offset Y & \multicolumn{4}{|c|}{Optional offset by which every contour point is shifted} \\ \hline
Binary Threshold & \multicolumn{4}{|c|}{Binary threshold value before finding contour} \\ \hline
Min Contour Size & \multicolumn{4}{|c|}{Eliminate too short or too long contours} \\ \hline
Max Contour Size & \multicolumn{4}{|c|}{Eliminate too short or too long contours} \\ \hline
Bounding Box & \multicolumn{4}{|c|}{Drawing bounding box} \\ \hline
Bounding Min Circle & \multicolumn{4}{|c|}{Drawing bounding circle} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxContours.png}
\caption{Contours}
\end{center}
\end{figure}
\subsubsection{Harris}
In the "Harris" tab, you can extract corners with Harris corner extraction.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Derivative Size of Neighborhood & \multicolumn{4}{|c|}{Neighborhood size} \\ \hline
Harris Parameter & \multicolumn{4}{|c|}{Harris detector free parameter} \\ \hline
Non-max Size of Neighborhood & \multicolumn{4}{|c|}{Aperture parameter for the Sobel() operator} \\ \hline
Image Padding Method & \multicolumn{4}{|c|}{Pixel extrapolation method} \\ \hline
Threshold Max Strength & \multicolumn{4}{|c|}{Binary threshold value before finding contour} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxHarris.png}
\caption{Harris Corner Extraction}
\end{center}
\end{figure}
\subsubsection{Features}
In the "Features" tab, you can find 3 methods implemented for keypoint extraction from the current image.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.3]{toolboxFeature.png}
\caption{SIFT Keypoint Extraction}
\end{center}
\end{figure}
\subsubsection{FAST}
Using "FastFeatureDetector" and the given parameters, it finds key points.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Threshold & \multicolumn{4}{|c|}{Threshold on the difference between the intensity of the central pixel and the pixels of a circle around it} \\ \hline
Non-max Suppression & \multicolumn{4}{|c|}{If true, non-maximum suppression is applied to detected corners (keypoints)} \\ \hline
Keypoint Drawing Flag & \multicolumn{4}{|c|}{Flags setting drawing features} \\ \hline
Keypoint Colors & \multicolumn{4}{|c|}{Draws keypoints with different colors} \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{SURF}
Using "SurfFeatureDetector" and the given parameters, it finds key points.
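The same detector is available from the Python bindings (as \texttt{cv2.SURF} in OpenCV 2.4, or \texttt{cv2.xfeatures2d.SURF\_create} in later contrib builds), so the extraction is essentially (illustrative only):
\begin{lstlisting}[language=Python]
import cv2

gray = cv2.imread("output.png", cv2.IMREAD_GRAYSCALE)
surf = cv2.SURF(400)              # Min Hessian threshold (OpenCV 2.4 API)
keypoints = surf.detect(gray, None)
result = cv2.drawKeypoints(gray, keypoints, None, (0, 255, 0),
                           cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
\end{lstlisting}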
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Min Hessian & \multicolumn{4}{|c|}{Hessian threshold of the SURF keypoint detector; only features with a larger Hessian response are kept} \\ \hline
Keypoint Drawing Flag & \multicolumn{4}{|c|}{Flags setting drawing features} \\ \hline
Keypoint Colors & \multicolumn{4}{|c|}{Draws keypoints with different colors} \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{SIFT}
Using "SiftFeatureDetector" and the given parameters, it finds key points.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Feature Threshold & \multicolumn{4}{|c|}{The contrast threshold used to filter out weak features in semi-uniform regions} \\ \hline
Edge Threshold & \multicolumn{4}{|c|}{The threshold used to filter out edge-like features} \\ \hline
Keypoint Drawing Flag & \multicolumn{4}{|c|}{Flags setting drawing features} \\ \hline
Keypoint Colors & \multicolumn{4}{|c|}{Draws keypoints with different colors} \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Estimation}
In the "Estimation" tab, you can find many functions, including camera calibration with a chessboard pattern, finding matches between images, drawing epipolar lines, connecting two images with a homography and more.
\subsubsection{Camera Calibration}
Camera calibration computes the distortion of your camera and undistorts the image for a better image acquisition.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
\# Corners X & \multicolumn{4}{|c|}{Number of corners your chessboard pattern has horizontally} \\ \hline
\# Corners Y & \multicolumn{4}{|c|}{Number of corners your chessboard pattern has vertically} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.3]{toolboxCameraCalibration.png}
\caption{Undistorted Image After Calibration}
\end{center}
\end{figure}
\subsubsection{Find Matches}
Finds matches between two images: the current output image and the matching image you provide with "Load Matching Image".
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
SURF Min Hessian & \multicolumn{4}{|c|}{Hessian threshold of the SURF keypoint detector} \\ \hline
Calculate Fundamental Matrix & \multicolumn{4}{|c|}{Calculates the F matrix with the given method} \\ \hline
Keypoint Drawing Flag & \multicolumn{4}{|c|}{Flags setting drawing features} \\ \hline
Matching Image & \multicolumn{4}{|c|}{Matching image you provide with the "Load Matching Image" button} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxFindMatches.png}
\caption{Find Matches}
\end{center}
\end{figure}
\subsubsection{Epipolar}
Finds and draws epipolar lines between two images: the current output image and the matching image you provide with "Load Matching Image".
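The epipolar computation presumably follows the usual OpenCV pipeline of matching keypoints, estimating the fundamental matrix with RANSAC and then deriving the epilines. In Python terms (illustrative only; the point arrays below are stand-ins for real matches):
\begin{lstlisting}[language=Python]
import cv2
import numpy as np

# pts1, pts2: matched points from the two images, shape (M, 2), float32
pts1 = np.float32([[10, 20], [35, 40], [60, 80], [90, 120],
                   [130, 150], [170, 60], [200, 210], [240, 30]])
pts2 = pts1 + np.float32([4.0, 1.0])

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
if F is not None:
    # Epipolar lines in image 2 for the points of image 1 (ax+by+c=0)
    lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
\end{lstlisting}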
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Ratio & \multicolumn{4}{|c|}{Max ratio between 1st and 2nd Nearest Neighbor} \\ \hline
SURF Min Hessian & \multicolumn{4}{|c|}{Hessian threshold of the SURF keypoint detector} \\ \hline
Confidence Level & \multicolumn{4}{|c|}{Confidence level (probability)} \\ \hline
Min Distance to Epipolar & \multicolumn{4}{|c|}{Min distance to epipolar line} \\ \hline
Matching Image & \multicolumn{4}{|c|}{Matching image you provide with the "Load Matching Image" button} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{toolboxEpipolarLines.png}
\caption{Epipolar Lines}
\end{center}
\end{figure}
\subsubsection{Homography}
Computes the homography between two images and connects them by the found homography.
Parameters are:
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|l|l|l|}
\hline
\textbf{Parameters} & \multicolumn{4}{|c|}{\textbf{Details}} \\ \hline
Ratio & \multicolumn{4}{|c|}{Max ratio between 1st and 2nd Nearest Neighbor} \\ \hline
SURF Min Hessian & \multicolumn{4}{|c|}{Hessian threshold of the SURF keypoint detector} \\ \hline
Confidence Level & \multicolumn{4}{|c|}{Confidence level (probability)} \\ \hline
Min Distance to Epipolar & \multicolumn{4}{|c|}{Min distance to epipolar line} \\ \hline
Matching Image & \multicolumn{4}{|c|}{Matching image you provide with the "Load Matching Image" button} \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{toolboxHomography.png}
\caption{Homography Applied Image}
\end{center}
\end{figure}
\subsection{Camera Toolbox}
\subsubsection{Basics}
The Camera Toolbox embeds functions of OpenCV (tested on 2.4.8) for users. This toolbox consists of one main window and sub-windows opened by OpenCV to display the input and output camera streams. It opens one camera at a time. Each image processing operation you add is applied to the output stream consecutively. There is also a list of the operations applied to the output stream, from which you can remove any operation you want. By default, the GUI elements are disabled until you open a stream.
\subsubsection{User Interface}
The user interface consists of a camera index picker, play and pause buttons, an image processing operation selection combo-box, an operation add button, the image processing operation list and a remove operation button.
\par
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{cameraToolbox.png}
\caption{VP OpenCV Toolbox}
\end{center}
\end{figure}
Main buttons:
\begin{itemize}
\item Play: Opens the camera stream with the given index.
\item Pause: Pauses and closes the stream.
\item Add Operation: Adds an image processing operation to the list to be applied to the stream.
\item Remove Operation: Removes an image processing operation from the list.
\end{itemize}
Operations:
\begin{itemize}
\item Noise: Adding salt and pepper noise
\item Logo: Adding logo and ROI
\item Histogram: Calculating and Equalizing Histogram.
\item Morph: Morphological Operations.
\end{itemize}
\subsubsection{Camera Input}
Camera input is very easy in this toolbox. Leave the camera index at the default "0" or select an index, and click the play button. It starts to play the camera stream. The camera stream plays in a Qt-built OpenCV window, which has options to save images, zoom in/out and more.
\subsubsection{Image Processing Operation List}
The image processing operation list shows the operations you have added for real-time image processing. Each operation is applied to the stream consecutively.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.42]{cameraToolboxOperationList.png}
\caption{Image Processing Operation List}
\end{center}
\end{figure}
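Conceptually, the camera toolbox behaves like the following loop, where every operation in the list is applied to each frame in turn. This is only a sketch of the idea in Python (the toolbox itself is written against the OpenCV C++ API), and the concrete operations in the list are just examples.
\begin{lstlisting}[language=Python]
import cv2

# The operation list: each entry maps a frame to a processed frame.
operations = [
    lambda frame: cv2.GaussianBlur(frame, (5, 5), 0),
    lambda frame: cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
]

cap = cv2.VideoCapture(0)          # camera index selected in the GUI
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for operation in operations:   # applied consecutively, as in the list
        frame = operation(frame)
    cv2.imshow("Output", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
\end{lstlisting}
\end{document}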
{ "alphanum_fraction": 0.6232931194, "avg_line_length": 50.2029950083, "ext": "tex", "hexsha": "330e5fa164d4032fc2da971efb9ab5ffe4a59bd3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "953fccdac9fa4c9793e3aeb0b97352ccabd28a64", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "emreozanalkan/VPOpenCVProject", "max_forks_repo_path": "Report/VPOpenCVToolboxManual.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "953fccdac9fa4c9793e3aeb0b97352ccabd28a64", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "emreozanalkan/VPOpenCVProject", "max_issues_repo_path": "Report/VPOpenCVToolboxManual.tex", "max_line_length": 739, "max_stars_count": 1, "max_stars_repo_head_hexsha": "953fccdac9fa4c9793e3aeb0b97352ccabd28a64", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "emreozanalkan/VPOpenCVProject", "max_stars_repo_path": "Report/VPOpenCVToolboxManual.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-30T11:43:34.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-30T11:43:34.000Z", "num_tokens": 7690, "size": 30172 }
\SetAPI{J-C}
\section{ambeth.mth.timeout}
\label{configuration:AmbethMthTimeout}
\ClearAPI
Defines the maximum time the \type{MultithreadingHelper} waits for a parallel task to complete. When this time has passed, the original thread continues.
%% GENERATED USAGE REFERENCE - DO NOT EDIT
\begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \\
\endhead
\hline
	\type{com.koch.ambeth.ioc.util.MultithreadingHelper} &
	\prettyref{module:IoC} \\
	\hline
	\type{com.koch.ambeth.ioc.util.MultithreadingHelper} &
	\prettyref{module:IoC} \\
	\hline
\end{longtable}
%% GENERATED USAGE REFERENCE END
\begin{lstlisting}[style=Props,caption={Usage example for \textit{ambeth.mth.timeout}}]
ambeth.mth.timeout=30000
\end{lstlisting}
{ "alphanum_fraction": 0.7626666667, "avg_line_length": 37.5, "ext": "tex", "hexsha": "c9f9dee6be528d735d711972ba34eb241f424c17", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/configuration/AmbethMthTimeout.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/configuration/AmbethMthTimeout.tex", "max_line_length": 159, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/configuration/AmbethMthTimeout.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 240, "size": 750 }
\documentclass{memoir} \usepackage{notestemplate} %\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png} %\institute{Rice University} %\faculty{Faculty of Whatever Sciences} %\department{Department of Mathematics} %\title{Class Notes} %\subtitle{Based on MATH xxx} %\author{\textit{Author}\\Gabriel \textsc{Gress}} %\supervisor{Linus \textsc{Torvalds}} %\context{Well, I was bored...} %\date{\today} \begin{document} % \maketitle % Notes taken on 05/20/21 \chapter{Introduction} \label{cha:introduction} These notes are personal notes I have created in the process of studying geometric measure theory and contain a wide variety of definitions and techniques that appear often in the field. The notes were created primarily from a reading course taken with Dr. Gregory Chambers at Rice University in Spring 2021 that followed Dr. Kenneth Falconer's textbook \textit{Fractal Geometry}. One will notice that the proofs of major results are either lacking or not included in these notes. This is because this document is primarily intended as a reference-- if the reader is looking for a deeper insight as to why these results are true, I would highly recommend one read the details in \textit{Fractal Geometry}-- if the proof isn't in there, then I have listed the source separately with the statement.\\ The results in this book assume a basic understanding of measure theory, and so one should already know the definition of a measure, \(\sigma \)-algebra, Borel set, and so on. Introductory notes on this topic will be provided in the LibreMath repository (soon). \end{document}
{ "alphanum_fraction": 0.7835633626, "avg_line_length": 54.9655172414, "ext": "tex", "hexsha": "8f202d6200d746aa94fd8ad90ebbac1334d08a37", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_path": "Geometric Measure Theory/Notes/source/Introduction.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_path": "Geometric Measure Theory/Notes/source/Introduction.tex", "max_line_length": 798, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_path": "Geometric Measure Theory/Notes/source/Introduction.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "num_tokens": 373, "size": 1594 }
% \section{201809-3} % \input{problem/14/201809-3-p.tex}
{ "alphanum_fraction": 0.6842105263, "avg_line_length": 19, "ext": "tex", "hexsha": "1d7ff2fee34c49584fbef41157ece746bdb6f756", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-01-28T15:33:04.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-01T06:04:16.000Z", "max_forks_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lxlonlyn/CSP-Project", "max_forks_repo_path": "problem/14/201809-3.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c", "max_issues_repo_issues_event_max_datetime": "2022-02-03T15:32:34.000Z", "max_issues_repo_issues_event_min_datetime": "2022-01-22T15:33:17.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lxlonlyn/CSP-Project", "max_issues_repo_path": "problem/14/201809-3.tex", "max_line_length": 35, "max_stars_count": 5, "max_stars_repo_head_hexsha": "9d432ec2255b170f2bb1e0879e42c93f80a1b21c", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lxlonlyn/CSP-Project", "max_stars_repo_path": "problem/14/201809-3.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-27T03:58:42.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-22T15:34:01.000Z", "num_tokens": 24, "size": 57 }
\documentclass[a4paper,onecolumn]{article}
\usepackage[width=180truemm,height=260truemm]{geometry}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{lmodern}
\usepackage{fourier}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{indentfirst}

\pagestyle{fancy}
\lhead{}
\rhead{}
\cfoot{-- \thepage{}/{}\pageref{LastPage} --}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}

\newcommand{\bs}[1]{\boldsymbol{#1}}
\newcommand{\tran}{^{\mkern-1.5mu\mathsf{T}}}

\title{Gaussian Process Bootstrapping Layer}
\author{Tetsuya Ishikawa}

\begin{document}
\maketitle
\thispagestyle{fancy}

\section{Theoretical details}
% {{{

The Gaussian process with random Fourier features can be regarded as a variant of a fully connected layer.
We can design a new fully connected layer that can compute the variance of its intermediate features.

\subsection{Gaussian process with random Fourier features}

Let $\mathcal{D} = \{ (\boldsymbol{x}_n, y_n) \}_{n=1}^{N}$ be a training dataset where $\boldsymbol{x}_n$ is the $n$-th data point and $y_n$ is the label of the $n$-th data point.
The Gaussian process with random Fourier features can be formulated as follows:
\begin{align}
    \boldsymbol{\phi}_{\boldsymbol{x}_n} &= f(\boldsymbol{Wx}_n), \\
    \boldsymbol{P} &= \sum_{n = 1}^{N} \boldsymbol{\phi}_{\boldsymbol{x}_n} \boldsymbol{\phi}_{\boldsymbol{x}_n}^{\mkern-1.5mu\mathsf{T}}, \\
    m(\boldsymbol{x}_i) &= \frac{1}{\sigma^2} \boldsymbol{y}^{\mkern-1.5mu\mathsf{T}} \boldsymbol{\Phi}^{\mkern-1.5mu\mathsf{T}}
    \left( \boldsymbol{I} - (\boldsymbol{P} + \sigma^2 \boldsymbol{I})^{-1} \boldsymbol{P} \right) \boldsymbol{\phi}_{\boldsymbol{x}_i}, \\
    v(\boldsymbol{x}_i, \boldsymbol{x}_j) &= \boldsymbol{\phi}_{\boldsymbol{x}_i}^{\mkern-1.5mu\mathsf{T}} \left\{ \boldsymbol{I} - \frac{1}{\sigma^2} \boldsymbol{P}
    \left( \boldsymbol{I} - (\boldsymbol{P} + \sigma^2 \boldsymbol{I})^{-1} \boldsymbol{P} \right) \right\} \boldsymbol{\phi}_{\boldsymbol{x}_j},
    \label{eqn:gp_variance}
\end{align}
where $f$ is a non-linear function and $\boldsymbol{W}$ is a random matrix derived from the kernel function of the Gaussian process.

\subsection{Analogy to a fully connected layer}

The formula $f(\boldsymbol{Wx})$ looks like a fully connected layer of a neural network with an activation function.
For example, if the kernel function is the RBF kernel $k(\bs{x}_1, \bs{x}_2) = \exp \bigl( - \| \bs{x}_1 - \bs{x}_2 \|^2 \bigr)$,
then $\boldsymbol{\phi}_{\boldsymbol{x}_n}$ can be written as
\begin{equation}
    \boldsymbol{\phi}_{\boldsymbol{x}_n} = \begin{pmatrix}
        \cos \bs{Wx}_n \\ \sin \bs{Wx}_n
    \end{pmatrix},
    \hspace{10pt}
    \bs{W} \sim \mathcal{N} (0, \sigma^2).
\end{equation}
In this case, $\bs{Wx}_n$ corresponds to a fully connected layer and the circular functions correspond to an activation function.
On the other hand, $v(\boldsymbol{x}_n, \boldsymbol{x}_n)$ corresponds to the variance of the input data $\boldsymbol{x}_n$.
A notable point is that $v(\boldsymbol{x}_n, \boldsymbol{x}_n)$ does not depend on the label $y_n$; in other words, the variance $v(\boldsymbol{x}_n, \boldsymbol{x}_n)$ is the same for any label.
See equation (\ref{eqn:gp_variance}).

\subsection{Gaussian process bootstrapping layer}

Because of the independence of the variance $v(\boldsymbol{x}_n, \boldsymbol{x}_n)$ and the label $y_n$ mentioned in the previous subsection, we can replace the expectation prediction part with an identity function (theoretically, this situation corresponds to $y_n = \bs{\phi}_{\bs{x}_n}$).
See Figure \ref{fig:sketch_gpbl}.
Therefore, we obtain a new layer that can predict the variance of intermediate features by replacing the random Fourier features with a fully connected layer, and the expectation prediction with an identity function.
The Gaussian process bootstrapping layer is a layer that adds noise to the intermediate features, where the variance of the noise is the variance of the intermediate features.

\begin{figure}[t]
    \center
    \includegraphics[width=300pt]{../figures/sketch_gpbl.pdf}
    \caption{Illustration of the analogy between GP w/ RFF and GPB layer}
    \label{fig:sketch_gpbl}
\end{figure}

\subsection{Pseudo code of the GPB layer}

Algorithm \ref{alg:gpbl} is the pseudo code of the GPB layer.

\begin{algorithm}[t]
    \caption{Gaussian process bootstrapping layer}
    \label{alg:gpbl}
    \textbf{Input} \\
    \hspace*{\algorithmicindent}
    \begin{tabular}{rl}
        $\bs{X}$: & \hspace*{-10pt} tensor with shape $(N, C)$ \\
    \end{tabular} \\
    \textbf{Output} \\
    \hspace*{\algorithmicindent}
    \begin{tabular}{rl}
        $\bs{Y}$: & \hspace*{-10pt} tensor with shape $(N, C)$
    \end{tabular} \\
    \textbf{Hyperparameters} \\
    \hspace*{\algorithmicindent}
    \begin{tabular}{rl}
        $\sigma$: & \hspace*{-10pt} standard deviation of measurement error \\
        $\alpha$: & \hspace*{-10pt} coefficient of exponential moving average \\
        $s$:      & \hspace*{-10pt} number of steps to skip bootstrapping \\
    \end{tabular} \\[10pt]
    \textbf{function} \textsc{Gaussian process bootstrapping layer}$(X, \alpha, \sigma)$ \\[5pt]
    \noindent\hspace*{\algorithmicindent}
    \texttt{\# Update matrix P with exponential moving average.} \\[1pt]
    \noindent\hspace*{\algorithmicindent}
    $\bs{P} = \alpha \bs{X}\tran \bs{X} + (1 - \alpha) \bs{P}$ \\
    \vspace*{-6pt}
    \noindent\hspace*{\algorithmicindent}
    \texttt{\# Compute matrix M.} \\[1pt]
    \noindent\hspace*{\algorithmicindent}
    $\bs{M} = \bs{I} - \frac{1}{\sigma^2} \bs{P} \bigl( \bs{I} - (\bs{P} + \sigma^2 \bs{I})^{-1} \bs{P} \bigr)$ \\
    \vspace*{-6pt}
    \noindent\hspace*{\algorithmicindent}
    \texttt{\# Compute variance v[n].} \\[1pt]
    \noindent\hspace*{\algorithmicindent}
    $\textbf{for} \,\, n \,\, \textbf{in} \,\, [0, N):$ \\
    \noindent\hspace*{\algorithmicindent}\hspace*{\algorithmicindent}
    $v[n] = X[n, :]\tran M X[n, :]$ \\
    \vspace*{-6pt}
    \noindent\hspace*{\algorithmicindent}
    \texttt{\# Add perturbation to the input tensor X.} \\[1pt]
    \noindent\hspace*{\algorithmicindent}
    $\textbf{for} \,\, n \,\, \textbf{in} \,\, [0, N):$ \\
    \noindent\hspace*{\algorithmicindent}\hspace*{\algorithmicindent}
    $Y[n, :] = X[n, :] + \sqrt{v[n]} \, \bigl( \textrm{sampling from a normal distribution with shape} \,\, (1, C) \bigr)$ \\[5pt]
    \textbf{end function}
\end{algorithm}

% }}}

\begin{thebibliography}{9}
% {{{

\bibitem{Rasmussen2006}
C. Rasmussen and C. Williams, ``Gaussian Processes for Machine Learning'', MIT Press, 2006.

\bibitem{Rahimi2007}
A. Rahimi and B. Recht, ``Random Features for Large-Scale Kernel Machines'', NIPS, 2007.

% }}}
\end{thebibliography}
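For readers who prefer code to pseudo code, the following is a minimal NumPy sketch of one training-time forward pass of Algorithm \ref{alg:gpbl}. It only illustrates the formulas above and is not the reference implementation; the function name, the explicit state argument \texttt{P} and the \texttt{rng} argument are choices made for this sketch, and the treatment of the first $s$ steps and of inference time is left open here, as in the pseudo code.

\begin{verbatim}
import numpy as np

def gpb_forward(X, P, alpha=0.1, sigma=0.1, rng=None):
    """One forward pass of the GPB layer (training mode).

    X : (N, C) batch of intermediate features.
    P : (C, C) moving-average feature matrix kept as layer state.
    Returns the perturbed features Y and the updated state P.
    """
    rng = rng or np.random.default_rng()
    N, C = X.shape
    I = np.eye(C)

    # Update matrix P with an exponential moving average.
    P = alpha * (X.T @ X) + (1.0 - alpha) * P

    # M = I - (1/sigma^2) P (I - (P + sigma^2 I)^{-1} P)
    M = I - (P / sigma**2) @ (I - np.linalg.solve(P + sigma**2 * I, P))

    # Per-sample variance v[n] = x_n^T M x_n.
    v = np.einsum("nc,cd,nd->n", X, M, X)

    # Bootstrapping: add noise scaled by the predicted standard deviation.
    Y = X + np.sqrt(np.maximum(v, 0.0))[:, None] * rng.standard_normal((N, C))
    return Y, P
\end{verbatim}

\end{document}

% vim: expandtab tabstop=4 shiftwidth=4 fdm=marker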
{ "alphanum_fraction": 0.6685201027, "avg_line_length": 39.8522727273, "ext": "tex", "hexsha": "6dee25177b4c74a69f087a05d10c5d4d58517205", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a1c20232ba286aa3245e6aab575a9aaaf274931f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tiskw/gaussian-process-bootstrapping-layer", "max_forks_repo_path": "documents/gaussian-process-bootstrapping-layer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a1c20232ba286aa3245e6aab575a9aaaf274931f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tiskw/gaussian-process-bootstrapping-layer", "max_issues_repo_path": "documents/gaussian-process-bootstrapping-layer.tex", "max_line_length": 134, "max_stars_count": null, "max_stars_repo_head_hexsha": "a1c20232ba286aa3245e6aab575a9aaaf274931f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tiskw/gaussian-process-bootstrapping-layer", "max_stars_repo_path": "documents/gaussian-process-bootstrapping-layer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2272, "size": 7014 }
%- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \subsection{Allowable Values for Standard Trade Data} \label{sec:allowable_values} %- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \begin{table}[H] \centering \begin{tabular} {|p{4cm}|p{11cm}|} \hline \bfseries{Trade Data} & \bfseries{Allowable Values} \\ \hline \lstinline!Date! & \begin{tabular}[l]{@{}l@{}} The following date formats are supported: \\ \emph{yyyymmdd} \\ \emph{yyyy-mm-dd} \\ \emph{yyyy/mm/dd} \\ \emph{yyyy.mm.dd} \\ \emph{dd-mm-yy} \\ \emph{dd/mm/yy} \\ \emph{dd.mm.yy} \\ \emph{dd-mm-yyyy} \\ \emph{dd/mm/yyyy} \\ \emph{dd.mm.yyyy} \\ and \\ Dates as serial numbers, comparable to Microsoft Excel \\dates, with a minimum of 367 for Jan 1, 1901,\\ and a maximum of 109574 for Dec 31, 2199. \end{tabular} \\ \hline \lstinline!Currency! & % hard coded currencies: % \emph{AED, AOA, ARS, ATS, AUD, BEF, BGN, BHD, % BRL, CAD, CHF, CLF, CLP, CNH, CNY, COP, COU, CZK, DEM, DKK, EGP, % ESP, ETB, EUR, FIM, FRF, GBP, GEL, GHS, GRD, HKD, HRK, HUF, IDR, % IEP, ILS, INR, ISK, ITL, JOD, JPY, KES, KRW, KWD, KZT, LKR, LUF, MAD, % MUR, MXN, MXV, MYR, NGN, NLG, NOK, NZD, OMR, PEN, PHP, PKR, PLN, % PTE, QAR, RON, RSD, RUB, SAR, SEK, SGD, THB, TND, TRY, TWD, UAH, % UGX, USD, UYU, VND, XOF, ZAR, ZMW, XAG, XAU, XPD, XPT}, % hard coded and configured currencies: \emph{AED,AFN,ALL,AMD,ANG,AOA,ARS,AUD,AWG,AZN, BAM,BBD,BDT,BGN,BHD,BIF,BMD,BND,BOB,BOV, BRL,BSD,BTN,BWP,BYN,BZD,CAD,CDF,CHE,CHF, CHW,CLF,CLP,CNH,CNT,CNY,COP,COU,CRC,CUC, CUP,CVE,CZK,DJF,DKK,DOP,DZD,EGP,ERN,ETB, EUR,FJD,FKP,GBP,GEL,GGP,GHS,GIP,GMD,GNF, GTQ,GYD,HKD,HNL,HRK,HTG,HUF,IDR,ILS,IMP, INR,IQD,IRR,ISK,JEP,JMD,JOD,JPY,KES,KGS, KHR,KID,KMF,KPW,KRW,KWD,KYD,KZT,LAK,LBP, LKR,LRD,LSL,LYD,MAD,MDL,MGA,MKD,MMK,MNT, MOP,MRU,MUR,MVR,MWK,MXN,MXV,MYR,MZN,NAD, NGN,NIO,NOK,NPR,NZD,OMR,PAB,PEN,PGK,PHP, PKR,PLN,PYG,QAR,RON,RSD,RUB,RWF,SAR,SBD, SCR,SDG,SEK,SGD,SHP,SLL,SOS,SRD,SSP,STN, SVC,SYP,SZL,THB,TJS,TMT,TND,TOP,TRY,TTD, TWD,TZS,UAH,UGX,USD,USN,UYI,UYU,UYW,UZS, VES,VND,VUV,WST,XAF,XAG,XAU,XCD,XOF,XPD, XPF,XPT,XSU,XUA,YER,ZAR,ZMW,ZWL.} This full list of currencies is available via loading the provided {\tt currencies.xml} at start-up. Note: Currency codes must also match available currencies in the {\tt simulation.xml} file. \\ \hline \lstinline!Minor Currencies! & \emph{GBp, GBX, ILa, ILX, ZAc, ZAC, ZAX}, Note: Minor Currency codes only supported for Equity products. \\ \hline %\lstinline!DayCount! \lstinline!Convention! & \begin{tabular}[l]{@{}l@{}}\indent Actual 360 can be expressed by:\\ \emph{A360, Actual/360, ACT/360}\\ \indent Actual 365 Fixed can be expressed by: \\ \emph{A365, A365F, Actual/365, Actual/365 (fixed)} \\ \indent Thirty 360 (US) can be expressed by: \\ \emph{T360, 30/360, 30/360 (Bond Basis), ACT/nACT} \\ \indent Thirty 360 (European) can be expressed by: \\ \emph{30E/360, 30E/360 (Eurobond Basis)}\\ \indent Thirty 360 (Italian) is expressed by: \\ \emph{30/360 (Italian)} \\ \indent Actual Actual (ISDA) can be expressed by: \\ \emph{ActActISDA, ActualActual (ISDA), ACT/ACT, ACT} \\ \indent Actual Actual (ISMA) can be expressed by: \\ \emph{ActActISMA, ActualActual (ISMA)} \\ \indent Actual Actual (AFB) can be expressed by:\\ \emph{ActActAFB, Actual/Actual (AFB)} \end{tabular} \\ \hline \lstinline!Roll Convention! 
& \begin{tabular}[l]{@{}l@{}} \emph{F, Following, FOLLOWING}\\ \emph{MF, ModifiedFollowing, Modified Following, MODIFIEDF}\\ \emph{P, Preceding, PRECEDING}\\ \emph{MP, ModifiedPreceding, Modified Preceding, MODIFIEDP}\\ \emph{U, Unadjusted, INDIFF }\end{tabular} \\ \hline \end{tabular} \caption{Allowable values for standard trade data.} \label{tab:allow_stand_data} \end{table} \newpage %\begin{table}[H] %\centering %\begin{tabular}{|l|p{6cm}|} \begin{longtable}{| p{.23\textwidth} | p{.80\textwidth} |} \hline \multicolumn{2}{|l|}{\lstinline!Rule!} \\ \hline \textbf{Allowable Values} & \textbf{Effect} \\ \hline \emph{Backward} & Backward from termination date to effective date. \\ \hline \emph{Forward} & Forward from effective date to termination date. \\ \hline \emph{Zero} & No intermediate dates between effective date and termination date. \\ \hline \emph{ThirdWednesday} & All dates but effective date and termination date are taken to be on the third Wednesday of their month (with forward calculation.) \\ \hline \emph{LastWednesday} & All dates but effective date and termination date are taken to be on the last Wednesday of their month (with forward calculation.) \\ \hline \emph{ThirdThursday} & All dates but effective date and termination date are taken to be on the third Thursday of their month (with forward calculation.) \\ \hline \emph{ThirdFriday} & All dates but effective date and termination date are taken to be on the third Friday of their month (with forward calculation.) \\ \hline \emph{MondayAfterThird-} \emph{Friday} & All dates but effective date and termination date are taken to be on the Monday following the third Friday of their month (with forward calculation.) \\ \hline \emph{TuesdayAfterThird-} \emph{Friday} & All dates but effective date and termination date are taken to be on the Tuesday following the third Friday of their month (with forward calculation.) \\ \hline \emph{Twentieth} & All dates but the effective date are taken to be the twentieth of their month (used for CDS schedules in emerging markets.) The termination date is also modified. \\ \hline \emph{TwentiethIMM} & All dates but the effective date are taken to be the twentieth of an IMM month (used for CDS schedules.) The termination date is also modified. \\ \hline \emph{OldCDS} & Same as TwentiethIMM with unrestricted date ends and long/short stub coupon period (old CDS convention).\\ \hline \emph{CDS} & \makecell[tl]{Credit derivatives standard rule defined in 'Big Bang' changes in 2009. \\ \\ For quarterly periods (\lstinline!Tenor! set to \emph{3M}): \\ (Assuming no \lstinline!FirstDate!/\lstinline!LastDate!) \\ Dates fall on 20th of March, June, September, December. A \emph{Following} \\ roll convention will be applied if the 20th falls on a non-business day. \\ If the \lstinline!EndDate! in the schedule is set to a date beyond the rolled \\ quarterly CDS date, the actual trade termination date will be on the \\ following quarterly CDS date. \\ The first coupon will be paid on the quarterly CDS date following the \\ \lstinline!StartDate!, and be for the period since the previous quarterly CDS \\ date. \\ \\ For monthly periods (\lstinline!Tenor! set to \emph{1M}): \\ (Assuming no \lstinline!FirstDate!/\lstinline!LastDate!)\\ Dates fall on 20th of each month, but the termination is still adjusted \\ to be in line with quarterly periods. \\ If the \lstinline!EndDate! in the schedule is set to a date beyond the rolled \\ quarterly CDS date (i.e. 
the 20th+roll Mar, Jun, Sep, Dec), \\ the actual termination date will be on the following quarterly CDS \\ date, causing a long final stub. \\ The first coupon will be paid on the next 20th monthly following the \\ \lstinline!StartDate!, and be for the period since the previous month's 20th.}\\ \hline \emph{CDS2015} & \makecell[tl]{Credit derivatives standard rule updated in 2015. \\ Same as \emph{CDS} but with termination dates adjusted to \\ 20th June and 20th December. \\ For schedule \lstinline!EndDates! from the 20th of March to the 19th September, \\ both included, the termination date will fall on the 20th June (with \\ \emph{Following} roll). \\ For schedule \lstinline!EndDates! from the 20th September to the 19th March, \\ both included, the termination date will fall on the 20th December \\ (with \emph{Following} roll).} \\ \hline \caption{Allowable Values for Rule} \label{tab:rule} \end{longtable} \begin{longtable}{| p{.30\textwidth} | p{.70\textwidth} |} \hline \multicolumn{2}{|l|} {\tt Calendar} \\ \hline \bfseries{Allowable Values} & \bfseries{Resulting Calendar} \\ \hline \emph{TARGET, TGT, EUR} & Target Calendar \\ \hline \emph{CA, CAN, CAD, TRB} & Canada Calendar \\ \hline \emph{JP, JPN, JPY, TKB} & Japan Calendar \\ \hline \emph{CH, CHE, CHF, ZUB} & Switzerland Calendar \\ \hline \emph{GB, GBR, GBP, LNB, UK} & UK Calendar \\ \hline \emph{US, USA, USD, NYB} & US Calendar \\ \hline \emph{US-SET} & US Settlement Calendar \\ \hline \emph{US-GOV} & US Government Bond Calendar \\ \hline \emph{US-NYSE, New York stock exchange} & US NYSE Calendar \\ \hline \emph{US with Libor impact} & US Calendar for Libor fixings \\ \hline \emph{US-NERC} & US NERC Calendar \\ \hline \emph{AR, ARG, ARS} & Argentina Calendar \\ \hline \emph{AU, AUD, AUS} & Australia Calendar \\ \hline \emph{AT, AUT, ATS} & Austria Calendar \\ \hline \emph{BE, BEL, BEF} & Belgium Calendar \\ \hline \emph{BW, BWA, BWP} & Botswana Calendar \\ \hline \emph{BR, BRA, BRL} & Brazil Calendar \\ \hline \emph{CL, CHL, CLP} & Chile Calendar \\ \hline \emph{CN, CHN, CNH, CNY} & China Calendar \\ \hline \emph{CO, COL, COP} & Colombia Calendar \\ \hline \emph{CZ, CZE, CZK} & Czech Republic Calendar \\ \hline \emph{DK, DNK, DKK, DEN} & Denmark Calendar \\ \hline \emph{FI, FIN} & Finland Calendar \\ \hline \emph{FR, FRF} & France Calendar \\ \hline \emph{DE, DEU} & Germany Calendar \\ \hline \emph{HK, HKG, HKD} & Hong Kong Calendar \\ \hline \emph{HU, HUN, HUF} & Hungary Calendar \\ \hline \emph{IS, ISL, ISK} & Iceland Calendar \\ \hline \emph{IN, IND, INR} & India Calendar \\ \hline \emph{ID, IDN, IDR} & Indonesia Calendar \\ \hline \emph{IL, ISR, ILS} & Israel Calendar \\ \hline \emph{Telbor} & Tel Aviv Inter-Bank Offered Rate Calendar \\ \hline \emph{IT, ITA, ITL} & Italy Calendar \\ \hline \emph{LU, LUX, LUF} & Luxembourg Calendar \\ \hline \emph{MX, MEX, MXN} & Mexico Calendar \\ \hline \emph{MY, MYS, MYR} & Malaysia Calendar \\ \hline \emph{NL, NLD, NZD} & New Zealand Calendar\\ \hline \emph{NO, NOR, NOK} & Norway Calendar \\ \hline \emph{PE, PER, PEN} & Peru Calendar \\ \hline \emph{PH, PHL, PHP} & Philippines Calendar \\ \hline \emph{PO, POL, PLN} & Poland Calendar \\ \hline \emph{RO, ROU, RON} & Romania Calendar \\ \hline \emph{RU, RUS, RUB} & Russia Calendar \\ \hline \emph{SAU, SAR} & Saudi Arabia \\ \hline \emph{SG, SGP, SGD} & Singapore Calendar \\ \hline \emph{ZA, ZAF, ZAR, SA} & South Africa Calendar \\ \hline \emph{KR, KOR, KRW} & South Korea Calendar \\ \hline \emph{ES, ESP} & Spain Calendar \\ \hline \emph{SE, 
SWE, SEK, SS} & Sweden Calendar \\ \hline \emph{TW, TWN, TWD} & Taiwan Calendar \\ \hline \emph{TH, THA, THB} & Thailand Calendar \\ \hline \emph{TR, TUR, TRY} & Turkey Calendar \\ \hline \emph{UA, UKR, UAH} & Ukraine Calendar \\ \hline \emph{BVMF} & Brazil Bovespa Calendar \\ \hline \emph{XTSE} & Canada Toronto Stock Exchange Calendar \\ \hline \emph{XSHG} & China Shanghai Stock Exchange Calendar \\ \hline \emph{XFRA} & Germany Frankfurt Stock Exchange \\ \hline \emph{XETR} & Germany XETRA Calendar \\ \hline \emph{ECAG} & Germany EUREX Calendar \\ \hline \emph{EUWA} & Germany EUWAX Calendar \\ \hline \emph{XJKT} & Indonesia Jakarta Stock Exchange (now IDX) Calendar \\ \hline \emph{XIDX} & Indonesia Indonesia Stock Exchange Calendar \\ \hline \emph{XTAE} & Israel Tel Aviv Stock Exchange Calendar \\ \hline \emph{XMIL} & Italy Italian Stock Exchange Calendar \\ \hline \emph{MISX} & Russia Moscow Exchange Calendar \\ \hline \emph{XKRX} & Korea Exchange Calendar \\ \hline \emph{XSWX} & Switzerland SIX Swiss Exchange Calendar \\ \hline \emph{XLON} & UK London Stock Exchange \\ \hline \emph{XLME} & UK London Metal Exchange \\ \hline \emph{XNYS} & US New York Stock Exchange Calendar \\ \hline \emph{WMR} & Thomson Reuters QM/Reuters Spot \\ \hline \emph{WeekendsOnly} & Weekends Only Calendar \\ \hline \emph{ICE\_FuturesUS} & ICE Futures U.S. Currency, Stock and Credit Index, Metal, Nat Gas, Power, Oil and Environmental \\ \hline \emph{ICE\_FuturesUS\_1} & ICE Futures U.S. Sugar, Cocoa, Coffee, Cotton and FCOJ \\ \hline \emph{ICE\_FuturesUS\_2} & ICE Futures U.S. Canola \\ \hline \emph{ICE\_FuturesEU} & ICE Futures Europe \\ \hline \emph{ICE\_FuturesEU\_1} & ICE Futures Europe for contracts where 26 Dec is a holiday \\ \hline \emph{ICE\_EndexEnergy} & ICE Endex European power and natural gas products \\ \hline \emph{ICE\_EndexEquities} & ICE Endex European equities \\ \hline \emph{ICE\_SwapTradeUS} & ICE Swap Trade U.S. \\ \hline \emph{ICE\_SwapTradeUK} & ICE Swap Trade U.K. \\ \hline \emph{ICE\_FuturesSingapore} & ICE futures Singapore \\ \hline \emph{CME} & CME group exchange calendar \\ \hline % \emph{US+TARGET, NYB\_TGT, TGT\_NYB} & US and Target Calendar \\ \hline % \emph{NYB\_LNB, LNB\_NYB} & US and UK Calendar \\ \hline % \emph{LNB\_ZUB, ZUB\_LNB} & Switzerland and UK Calendar \\ \hline % \emph{TGT\_ZUB, ZUB\_TGT} & Switzerland and Target Calendar \\ \hline % \emph{NYB\_SYB} & US and Australia Calendar \\ \hline % \emph{TGT\_BDP, BDP\_TGT} & Hungary and Target Calendar \\ \hline % \emph{LNB\_NYB\_TGT} & UK, US and Target Calendar \\ \hline % \emph{TKB\_TGT\_LNB} & Japan, Target and UK Calendar \\ \hline % \emph{LNB\_NYB\_ZUB} & UK, US and Switzerland Calendar \\ \hline % \emph{LNB\_NYB\_TRB} & UK, US and Canada Calendar \\ \hline % \emph{LNB\_NYB\_TKB} & UK, US and Japan Calendar \\ \hline % \emph{NullCalendar} & Null Calendar, i.e. all days are business days \\ \hline \caption{Allowable Values for Calendar. 
Combinations of calendars can be provided using comma separated calendar names.} \label{tab:calendar} \end{longtable} \begin{table}[H] \centering \begin{tabular} {|p{6cm}|p{6cm}|} \hline %\multicolumn{2}{|l|}{\lstinline{DayCount Convention} } \\ \hline \multicolumn{2}{|l|}{\tt DayCount Convention} \\ \hline \bfseries{Allowable Values} & \bfseries{Resulting DayCount Convention} \\ \hline \emph{A360, Actual/360, ACT/360, Act/360}& Actual 360 \\ \hline \emph{A365, A365F, Actual/365 (Fixed), Actual/365 (fixed), ACT/365.FIXED, ACT/365, ACT/365L, Act/365, Act/365L} & Actual 365 Fixed \\ \hline \emph{A364, Actual/364, Act/364, ACT/364}& Actual 364 \\ \hline \emph{Actual/365 (No Leap), Act/365 (NL), NL/365, Actual/365 (JGB)} & Actual 365 Fixed (No Leap Year)\\ \hline \emph{T360 ,30/360, 30/360 (Bond Basis), ACT/nACT} & Thirty 360 (US) \\ \hline \emph{30E/360 (Eurobond Basis), 30E/360, 30E/360.ISDA} & Thirty 360 (European) \\ \hline \emph{30/360 (Italian)} & Thirty 360 (Italian) \\ \hline \emph{ActActISDA, ACT/ACT.ISDA, Actual/Actual (ISDA), ActualActual (ISDA), ACT/ACT, ACT} & Actual Actual (ISDA) \\ \hline \emph{ActActISMA, Actual/Actual (ISMA), ActualActual (ISMA), ACT/ACT.ISMA} & Actual Actual (ISMA) \\ \hline \emph{ActActICMA, Actual/Actual (ICMA), ActualActual (ICMA), ACT/ACT.ICMA} & Actual Actual (ICMA) \\ \hline \emph{ActActAFB, ACT/ACT.AFB, Actual/Actual (AFB)} & Actual Actual (AFB) \\ \hline \emph{BUS/252, Business/252} & Brazilian Bus/252 \\ \hline \emph{1/1} & 1/1 \\ \hline \end{tabular} \caption{Allowable Values for DayCount Convention} \label{tab:daycount} \end{table} \begin{table}[H] \centering \begin{supertabular}{|l|l|} \hline %\multicolumn{2}{|l|}{\lstinline!Index!} \\ \hline \multicolumn{2}{|l|}{\tt Index} \\ \hline \multicolumn{2}{|l|}{On form CCY-INDEX-TENOR, and matching available } \\ \multicolumn{2}{|l|}{ indices in the market data configuration.} \\ \hline \textbf{Index Component} & \textbf{Allowable Values} \\ \hline CCY-INDEX & \textit{\begin{tabular}[c]{@{}l@{}} EUR-EONIA\\ EUR-ESTER, EUR-ESTR, EUR-STR \\ EUR-EURIBOR\\ EUR-LIBOR\\ EUR-CMS\\ USD-FedFunds\\ USD-SOFR \\ USD-Prime\\ USD-LIBOR\\ USD-SIFMA\\ USD-CMS\\ GBP-SONIA\\ GBP-LIBOR\\ GBP-CMS \\ GBP-BoEBase \\ JPY-LIBOR\\ JPY-TIBOR \\ JPY-EYTIBOR \\ JPY-TONAR \\ JPY-CMS \\ CHF-LIBOR\\ CHF-SARON\\ AUD-LIBOR\\ AUD-BBSW\\ CAD-CDOR\\ CAD-BA\\ SEK-STIBOR\\ SEK-LIBOR\\ SEK-STINA \\ DKK-LIBOR\\ DKK-CIBOR \\ DKK-CITA \\ SGD-SIBOR\\ SGD-SOR \\ HKD-HIBOR \\ HKD-HONIA \\ NOK-NIBOR \\ HUF-BUBOR \\ IDR-IDRFIX \\ INR-MIFOR \\ MXN-TIIE \\ PLN-WIBOR \\ SKK-BRIBOR \\ THB-THBFIX\\ THB-BIBOR\\ NZD-BKBM \\ \end{tabular}} \\ \hline TENOR & An integer followed by \emph{D, W, M or Y} \\ \hline \end{supertabular} \caption{Allowable values for Index.} \label{tab:indices} \end{table} \begin{table}[H] \centering \begin{tabular}{|l|p{10cm}|} \hline \multicolumn{2}{|l|} {Defaults for {\tt FixingDays}} \\ \hline \textbf{Index} &\textbf{Default value} \\ \hline \hline Ibor indices & 2, except for the Ibor indices below: \\ \hline \emph{USD-SIFMA} & 1 \\ \hline \emph{GBP-LIBOR} & 0 \\ \hline \emph{AUD-BBSW} & 0 \\ \hline \emph{CAD-CDOR} & 0 \\ \hline \emph{CNY-SHIBOR} & 1 \\ \hline \emph{HKD-HIBOR} & 0 \\ \hline \emph{MXN-TIIE} & 1 \\ \hline \emph{MYR-KLIBOR} & 0 \\ \hline \emph{TRY-TRLIBOR} & 0 \\ \hline \emph{ZAR-JIBAR} & 0 \\ \hline \hline Overnight indices & 0, except for the Overnight indices below: \\ \hline \emph{CHF-TOIS} & 1 \\ \hline \emph{CLP-CAMARA} & 2 \\ \hline \emph{PLN-POLONIA} & 1 \\ \hline \emph{DKK-DKKOIS} & 1 \\ \hline 
\emph{SEK-SIOR} & 1 \\ \hline \end{tabular} \caption{Defaults for FixingDays} \label{tab:fixingdaysdefaults} \end{table} \begin{table}[H] \centering \begin{tabular}{|l|p{10cm}|} \hline %\multicolumn{2}{|l|}{\lstinline!Index!} \\ \hline \multicolumn{2}{|l|}{\tt FX Index} \\ \hline \textbf{Index Format} &\textbf{Allowable Values} \\ \hline FX-SOURCE-CCY1-CCY2 & The FX- part of the string stays constant for all currency pairs. SOURCE is the market data fixing source defined in the market configuration. CCY1 and CCY2 are the ISO currency codes of the FX pair. Fixings are expressed as amount in CCY2 for one unit of CCY1.\\ \hline \end{tabular} \caption{Allowable values for FX index fixings.} \label{tab:fxindex_data} \end{table} \begin{table}[H] \centering \begin{tabular} {|l|p{8cm}|} \hline \multicolumn{2}{|l|}{\tt Inflation CPI Index} \\ \hline \bfseries{Trade Data} & \bfseries{Allowable Values} \\ \hline \lstinline!Index! for CPI leg & Any string (provided it is the ID of an inflation index in the market configuration) \\ \hline \end{tabular} \caption{Allowable values for CPI index.} \label{tab:cpiindex_data} \end{table} \begin{table}[H] \centering \begin{tabular} {|p{3cm}|p{12cm}|} \hline \multicolumn{2}{|l|}{\tt Credit CreditCurveId} \\ \hline \bfseries{Trade Data} & \bfseries{Allowable Values} \\ \hline \lstinline!CreditCurveId! for credit trades - single name and index & \begin{tabular}[l]{@{}l@{}} Any string (provided it is the ID of a single name or index \\ reference entity in the market configuration). \\ Typically a RED-code with the \emph{RED:} prefix \\ Examples: \\ \emph{RED:2I65BRHH6} (CDX N.A. High Yield, Series 13, Version 1) \\ \emph{RED:008CA0\textbar{}SNRFOR\textbar{}USD\textbar{}MR14} (Agilent Tech Senior USD) \end{tabular} \\ \hline \end{tabular} \caption{Allowable values for credit \lstinline!CreditCurveId!} \label{tab:equity_credit_data} \end{table} \begin{table}[H] \centering \begin{tabular} {|l|l|} \hline \multicolumn{2}{|l|}{\tt Equity Name} \\ \hline \bfseries{Trade Data} & \bfseries{Allowable Values} \\ \hline \lstinline!Name! for equity trades & \begin{tabular}[l]{@{}l@{}} Any string (provided it is the ID of an equity in the market \\ configuration). \\ Typically a RIC-code with the \emph{RIC:} prefix \\ Examples: \\ \emph{RIC:.SPX} (S\&P 500 Index) \\ \emph{RIC:EEM.N} (iShares MSCI Emerging Markets ETF) \\ \end{tabular} \\ \hline \end{tabular} \caption{Allowable values for equity \lstinline!Name!.} \label{tab:equity_name} \end{table} \begin{table}[H] \centering \begin{tabular} {|p{3cm}|p{12cm}|} \hline \multicolumn{2}{|l|}{\tt Commodity Curve Name} \\ \hline \bfseries{Trade Data} & \bfseries{Allowable Values} \\ \hline \lstinline!Name! for commodity trades & Any string (provided it is the ID of a commodity in the market configuration) \\ \hline \end{tabular} \caption{Allowable values for commodity data.} \label{tab:commodity_data} \end{table} \begin{table}[H] \centering \begin{tabular} {|p{3cm}|p{12cm}|} \hline \multicolumn{2}{|l|}{\lstinline!Tier!} \\ \hline \textbf{Value} & \textbf{Description} \\ \hline \lstinline!SNRFOR! & Senior unsecured for corporates or foreign debt for sovereigns \\ \hline \lstinline!SUBLT2! & Subordinated or lower Tier 2 debt for banks \\ \hline \lstinline!SNRLAC! & Senior loss absorbing capacity \\ \hline \lstinline!SECDOM! & Secured for corporates or domestic debt for sovereigns \\ \hline \lstinline!JRSUBUT2! & Junior subordinated or upper Tier 2 debt for banks \\ \hline \lstinline!PREFT1!
& Preference shares or Tier 1 capital for banks \\ \hline \end{tabular} \caption{Allowable values for \lstinline!Tier!} \label{tab:tier_data} \end{table} \begin{table}[H] \centering \begin{tabular} {|p{3cm}|p{12cm}|} \hline \multicolumn{2}{|l|}{\lstinline!DocClause!} \\ \hline \textbf{Value} & \textbf{Description} \\ \hline \lstinline!CR! & Full or old restructuring referencing the 2003 ISDA Definitions \\ \hline \lstinline!MM! & Modified modified restructuring referencing the 2003 ISDA Definitions \\ \hline \lstinline!MR! & Modified restructuring referencing the 2003 ISDA Definitions \\ \hline \lstinline!XR! & No restructuring referencing the 2003 ISDA Definitions \\ \hline \lstinline!CR14! & Full or old restructuring referencing the 2014 ISDA Definitions \\ \hline \lstinline!MM14! & Modified modified restructuring referencing the 2014 ISDA Definitions \\ \hline \lstinline!MR14! & Modified restructuring referencing the 2014 ISDA Definitions \\ \hline \lstinline!XR14! & No restructuring referencing the 2014 ISDA Definitions \\ \hline \end{tabular} \caption{Allowable values for \lstinline!DocClause!} \label{tab:docclause_data} \end{table} \begin{table}[H] \centering \begin{tabular} {|p{3cm}|p{12cm}|} \hline \multicolumn{2}{|l|}{\tt Exchange} \\ \hline \bfseries{Trade Data} & \bfseries{Allowable Values} \\ \hline \lstinline!Exchange! & Any string, typically a MIC code (provided it is the ID of an exchange in the market configuration) \\ \hline \end{tabular} \caption{Allowable Values for Exchange} \label{tab:mic} \end{table} \begin{table}[H] \centering \begin{tabular} {|l|l|} \hline \multicolumn{2}{|l|}{Boolean nodes} \\ \hline \textbf{Node Value} & \textbf{Evaluates To} \\ \hline \lstinline!Y!, \lstinline!YES!, \lstinline!TRUE!, \lstinline!true!, \lstinline!1! & \lstinline!true! \\ \hline \lstinline!N!, \lstinline!NO!, \lstinline!FALSE!, \lstinline!false!, \lstinline!0! & \lstinline!false! \\ \hline \end{tabular} \caption{Allowable values for boolean node} \label{tab:boolean_allowable} \end{table}
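The following short Python sketch is only an illustration (it is not part of ORE, and the helper name is ours); it shows how a script that generates or validates input files might check a boolean node value against the allowable values in Table~\ref{tab:boolean_allowable}:
\begin{lstlisting}[language=Python]
# Hypothetical helper, not part of ORE: check a boolean node value against
# the allowable values listed in the boolean node table.
TRUE_VALUES = {"Y", "YES", "TRUE", "true", "1"}
FALSE_VALUES = {"N", "NO", "FALSE", "false", "0"}

def parse_boolean_node(text):
    value = text.strip()
    if value in TRUE_VALUES:
        return True
    if value in FALSE_VALUES:
        return False
    raise ValueError("'" + value + "' is not an allowable boolean node value")
\end{lstlisting}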
{ "alphanum_fraction": 0.6396370853, "avg_line_length": 54.7427937916, "ext": "tex", "hexsha": "6fd90fd74ac99defb761b75ea2edf063f71c7b04", "lang": "TeX", "max_forks_count": 180, "max_forks_repo_forks_event_max_datetime": "2022-03-28T10:43:05.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-08T14:23:50.000Z", "max_forks_repo_head_hexsha": "c46ff278a2c5f4162db91a7ab500a0bb8cef7657", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "mrslezak/Engine", "max_forks_repo_path": "Docs/UserGuide/allowablevalues.tex", "max_issues_count": 59, "max_issues_repo_head_hexsha": "c46ff278a2c5f4162db91a7ab500a0bb8cef7657", "max_issues_repo_issues_event_max_datetime": "2022-01-03T16:39:57.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-31T04:20:24.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "mrslezak/Engine", "max_issues_repo_path": "Docs/UserGuide/allowablevalues.tex", "max_line_length": 1385, "max_stars_count": 335, "max_stars_repo_head_hexsha": "c46ff278a2c5f4162db91a7ab500a0bb8cef7657", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "mrslezak/Engine", "max_stars_repo_path": "Docs/UserGuide/allowablevalues.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-02T07:12:03.000Z", "max_stars_repo_stars_event_min_datetime": "2016-10-07T16:31:10.000Z", "num_tokens": 8496, "size": 24689 }
\section{Modules} \begin{itemize} \item Modules provide a way to store data structures and procedures \item Example:\\ \gcl{0}{\Module ~ UniqueNumberAllocator} \gcl{1}{\Export ~ Acquire, Reset} \gcl{1}{\Import ~ Choose} \gcl{1}{} \gcl{1}{\Var ~ u : \textbf{set} \left[0, N\right)} \gcl{1}{} \gcl{1}{\Procedure ~ Acquire(\Result ~ t) \defeq} \gcl{2}{Choose(\left[0, N\right) - $ u $, t); u := u \cup \left\lbrace t \right\rbrace]} \gcl{1}{} \gcl{1}{\Procedure ~ Reset() \defeq u := \left\lbrace \right\rbrace} \gcl{1}{} \gcl{1}{\Procedure ~ Choose(\Value ~ s; \Result ~ e) \defeq e : [s \ne \left\lbrace \right\rbrace, e \in s]} \gcl{1}{} \gcl{1}{\Initially ~ u = \left\lbrace \right\rbrace} $ \End $ \item Syntax \begin{itemize} \item Modules are declared with $ \Module $ and have a unique name \item Module-level variables are declared in the $ \Var $ clause and given a type \item The initial condition of module variables is given the predicate in the $ \Initially $ clause \item Modules may define procedures which make use of its variables \item Modules list which procedures are exported publicly with an $ \Export $ clause (if there is no $ \Export $ clause, all procedures are exported) \item The variables and procedures of another module may be used if they are included in the $ \Import $ clause \begin{itemize} \item Imported variables must be redeclared exactly as in their source module \item Imported procedures must be redeclared: the original declaration must refine the redeclaration \item Imported procedures cannot refer to the local variables of the module they are imported into \item Circular import/export is not well defined \end{itemize} \end{itemize} \end{itemize} \subsection{Module Refinement} \begin{itemize} \item A module $ M' $ refines some module $ M $ with exported procedures $ E $, imported procedures $ I $ and initialisation condition $ init $ when \begin{itemize} \item $ M' $ has the same local and imported variables as $ M $ \item The exported procedures $ E' $ refine those in $ E $ (there may be more procedures in $ E' $, but not fewer) \item The imported procedures $ I' $ refine those in $ I $ (there may be fewer procedures in $ I' $ but not more) \item The initialisation $ init' $ is stronger than $ init $ -- i.e. $ init' \entails init $ \end{itemize} \item To refine modules with different variables, data refinement is required \end{itemize} \newpage \subsection{Data Refinement} \begin{itemize} \item The local state of a module cannot be accessed from the outside, so it may be changed provided the difference cannot be detected by use of the exported procedures \item Rule 1: \textbf{Introducing new variables} \begin{itemize} \item Relationships between new and existing variables are maintained via a \textbf{coupling invariant} $ CI $ \begin{itemize} \item E.g. $ CI \defeq p = q + r $ \end{itemize} \item The initialisation $ init $ becomes $ init \land CI $ \begin{itemize} \item E.g. if initialisation was $ p = 1 $, it would become $ p = 1 \land p = q + r $ \end{itemize} \item Any specification $ w : [P, Q] $ becomes $ w, c : [P \land CI, Q \land CI] $ where $ c $ is the list of new variables \begin{itemize} \item E.g. $ p : [p > 0, p < p_0] $ becomes $ p, q, r : [p > 0 \land p = q + r, p < p_0 \land p = q + r] $ \end{itemize} \item Every assignment in the module $ w := E $ becomes $ w, c := E, F $ provided that $ CI \entails CI[w, c \backslash E, F] $ \begin{itemize} \item E.g. 
$ p := p + 1 $ becomes $ p, q := p + 1, q + 1 $, or alternately $ p, r := p + 1, r + 1 $ \end{itemize} \item Every guard in the module $ G $ becomes $ G' $ provided that $ CI \entails (G \iff G') $ \begin{itemize} \item $ G' \defeq CI \land G $ is always suitable \item E.g. the guard $ p > 0 $ could become $ p > 0 \land p = q + r $, or alternately $ p = q +r \implies p > 0 $ \end{itemize} \end{itemize} \item Rule 2: \textbf{Removing an existing variable} \begin{itemize} \item Only \textbf{auxiliary variables} may be removed, i.e. they must only appear in: \begin{itemize} \item Assignments \item Specifications which modify only auxiliary variables \end{itemize} \item The initialisation $ init $ becomes $ \exists a \cdot init $ where $ a $ is the auxiliary variable \begin{itemize} \item The `one-point rule' may be used to remove the existential quantifier: $ \exists x \cdot P \land x = n \equiv P[x \backslash n] $ \item E.g. given initialisation $ p = 1 \land p = q + r $, it would become $ \exists p \cdot p = 1 \land p = q + r \equiv q + r = 1 $ \end{itemize} \item All specifications $ w, a : [P, Q] $ become $ w : [\exists a \cdot P, \forall a_0 \cdot P[w, a \backslash w_0, a_0] \implies (\exists a \cdot Q)] $ \begin{itemize} \item A similar one-point rule may be used to remove the universal quantifier:\\ $~~~~ \forall x \cdot P \land x = n \implies Q \equiv P[x \backslash n] \implies Q[x \backslash n] $ \item E.g. $ p, q, r : [p > 0 \land p = q + r, p < p_0 \land p = q + r] $ can be refined to:\\ \form{q, r : [\exists p \cdot p > 0 \land p = q + r, \forall p_0 \cdot p_0 > 0 \land p_0 = q_0 + r_0 \implies (\exists p \cdot p < p_0 \land p = q + r)]} \hint{\refsto}{Apply $ \exists $ one-point rule to pre and postconditions} \form{q, r : [q + r > 0, \forall p_0 \cdot p_0 > 0 \land p_0 = q_0 + r_0 \implies q + r < p_0]} \hint{\refsto}{Apply $ \forall $ one-point rule to postcondition} \form{q, r : [q + r > 0, q_0 + r_0 > 0 \implies q + r < q_0 + r_0]} \end{itemize} \item Any assignment $ w, a := E, F $ where $ E $ contains no variables from $ a $ can be replaced by $ w := E $ \begin{itemize} \item E.g. $ p, q := p + 1, q + 1 $ can be replaced by $ q := q + 1 $ \end{itemize} \item Normally the coupling invariant $ CI $ relates each concrete state to a unique abstract state (e.g. $ p = q + r $), in which case the following rule applies: \begin{itemize} \item Given abstract variables $ a $ and concrete variables $ c $, if $ CI \defeq a = f(c) \land P(c) $ then a guard $ G $ may be replaced by $ G[a \backslash f(c)] \land P(c)$, or simply by $ G[a \backslash f(c)] $ \item E.g. $ p > 0 \land p = q + r $ can be replaced by $ q + r > 0 \land q + r = q + r \equiv q + r > 0 $ \end{itemize} \end{itemize} \end{itemize}
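\begin{itemize}
	\item Worked check of the Rule 1 provisos for the running example $ CI \defeq p = q + r $ (not an extra rule -- this just discharges the side conditions used above)
	\begin{itemize}
		\item Assignment: $ p := p + 1 $ may be replaced by $ p, q := p + 1, q + 1 $ because $ CI \entails CI[p, q \backslash p + 1, q + 1] $, i.e. $ p = q + r \entails p + 1 = (q + 1) + r $, which holds by arithmetic
		\item Guard: $ p > 0 $ may be replaced by $ q + r > 0 $ because $ CI \entails (p > 0 \iff q + r > 0) $, which follows directly from $ p = q + r $
	\end{itemize}
\end{itemize}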
{ "alphanum_fraction": 0.6248684408, "avg_line_length": 34.4611398964, "ext": "tex", "hexsha": "661ebb325211adcd3083fce537032e0085c57f01", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mcoot/CourseNotes", "max_forks_repo_path": "CSSE3100/modules.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mcoot/CourseNotes", "max_issues_repo_path": "CSSE3100/modules.tex", "max_line_length": 218, "max_stars_count": null, "max_stars_repo_head_hexsha": "c643f46e32cdf4c567bf73d4a23784c834278803", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mcoot/CourseNotes", "max_stars_repo_path": "CSSE3100/modules.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2242, "size": 6651 }
\documentclass[12pt]{article} \title{Graph Theoretical Analysis of Developer Communities} \author{Nathan Hughes} \date{\today} \begin{document} \maketitle \section{Introduction} This document outlines how the WPI EEROS IQP Analysis Tool generates a network model of developers from a software project's code. \section{Input Data} The tool takes the commit tree structure of the main repository associated with a software project and combines it with other forks of the software project, eventually creating a commit tree that contains every publicly available commit that results from the first commit of the main repository. Once the commit tree is created, it is updated using all the file differences between consecutive commits. \section{Classification of Interaction} The network is built up by using file differences between consecutive commits. The commit tree structure is traversed in a modified breadth-first order where tie breaking between commits is done by the commit date. Each commit's file differences are used to calculate an interaction strength between the current commit and previous commits. An interaction strength between two developers is given by a weighted sum of the number of strong interactions and weak interactions. A strong interaction is when two developers edit the same section of code in consecutive commits. A weak interaction is when two developers edit the same file within a certain timespan. The interaction strength for a given commit and a given pair of developers is given by: $$ s_c(u, v) = 0.5 \cdot n_s(u, v) + 0.1 \cdot n_w(u, v) $$ where $ s_c(u, v)$ is the interaction strength between developer $u$ and developer $v$, $n_s(u,v)$ is the number of strong interactions between developers $u$ and $v$, and $n_w(u,v)$ is the number of weak interactions between developers $u$ and $v$. The strength of a relationship between developers $u$ and $v$ is given by: $$ s(u, v) = s_c(u, v) + t \left( \sum_{\forall c' \in C | c' < c} \left( s_{c'}(u, v) \right), c \right) $$ where $C$ is the collection of all the commits, and $t(x, y)$ is a function used to decay the value of a relationship over time. The weight of each edge in a relationship is given by $s(u, v)$, and edges only exist between $u$ and $v$ when $s(u, v) > 0.5$. \section{Analysis of Developer Network} Using the network of developers, it is possible to derive the Estrada index and the average closeness of the network. To calculate the Estrada index, we first calculate the communicability of the developer network. This is given by: $$ G = \mathrm{e}^{w} $$ where $w$ is the weighted adjacency matrix of the graph. The Estrada index is calculated as follows: $$ \ln \left( \sum_{v \in V} G_{vv} \right) $$ where $V$ is the set of all the developers. To calculate the average closeness, we first calculate the closeness for each developer, which is given by: $$ g(u) = \frac{1}{\sum_{v \in V} d(u, v)} $$ where $d(u, v)$ is the distance between developers $u$ and $v$, and the distance of each edge is the inverse of its weight. The average closeness is then given by: $$ \frac{\sum_{v \in V} g(v)}{|V|} $$ \end{document}
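The metrics above can be reproduced directly from a weighted adjacency matrix. The following Python sketch is only an illustration of the formulas -- it is not the analysis tool's own code, and the function and variable names are hypothetical:
\begin{verbatim}
# Illustrative sketch: Estrada index and average closeness of a weighted
# developer network, given its weighted adjacency matrix W
# (W[i][j] = s(u_i, u_j), and 0 where no edge exists).
import numpy as np
from scipy.linalg import expm

def estrada_index(W):
    # Communicability G = e^W; the Estrada index is ln(sum of the diagonal).
    G = expm(W)
    return np.log(np.trace(G))

def average_closeness(W):
    n = W.shape[0]
    # Edge distance is the inverse of the edge weight; no edge -> infinity.
    dist = np.where(W > 0, 1.0 / np.where(W > 0, W, 1.0), np.inf)
    np.fill_diagonal(dist, 0.0)
    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    closeness = 1.0 / dist.sum(axis=1)  # g(u) = 1 / sum_v d(u, v)
    return closeness.mean()
\end{verbatim}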
{ "alphanum_fraction": 0.7467166979, "avg_line_length": 52.4262295082, "ext": "tex", "hexsha": "c8339fb5e144c697723d975e52a85c5ff758e9b8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4fa366dc0eb907f36220905b5b1a5501826b28b7", "max_forks_repo_licenses": [ "WTFPL", "Condor-1.1" ], "max_forks_repo_name": "nhhughes/eeros-community-analysis", "max_forks_repo_path": "math_explanation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4fa366dc0eb907f36220905b5b1a5501826b28b7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "WTFPL", "Condor-1.1" ], "max_issues_repo_name": "nhhughes/eeros-community-analysis", "max_issues_repo_path": "math_explanation.tex", "max_line_length": 765, "max_stars_count": null, "max_stars_repo_head_hexsha": "4fa366dc0eb907f36220905b5b1a5501826b28b7", "max_stars_repo_licenses": [ "WTFPL", "Condor-1.1" ], "max_stars_repo_name": "nhhughes/eeros-community-analysis", "max_stars_repo_path": "math_explanation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 789, "size": 3198 }
\subsection{Speakers}
{ "alphanum_fraction": 0.75, "avg_line_length": 6, "ext": "tex", "hexsha": "aee5a1be55aa1f7ed646b6f0113a4088ab4c0081", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/engineering/engineeringElectrical/07-04-Speakers.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/engineering/engineeringElectrical/07-04-Speakers.tex", "max_line_length": 21, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/engineering/engineeringElectrical/07-04-Speakers.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 24 }
\documentclass[11pt,final,twoside]{article} %%%%%%%%%%%%%%%%%%% PACKAGES %%%%%%%%%%%%%%%%%%%% \usepackage[a4paper, top=2.5cm, bottom=3.5cm, inner=2cm, outer=2.5cm]{geometry} \usepackage[usenames,dvipsnames]{xcolor} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{verbatim} \usepackage{enumitem} \usepackage{layout} \usepackage{fancyhdr} \usepackage{ifthen} \usepackage{graphicx} \usepackage{pdfpages} \usepackage{calc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{hyperref} \hypersetup{ colorlinks=true, urlcolor=blue } \usepackage{tikz} \usepackage[strict]{changepage} \usepackage{xparse} % Set max width of \includegraphics to \linewidth. \usepackage[export]{adjustbox} \let\oldincludegraphics\includegraphics \renewcommand{\includegraphics}[2][]{% \oldincludegraphics[#1,max width=\linewidth]{#2}% } %%%%%%%%%%%%%%%%%%% GLOBAL SPACING CONFIGURATION %%%%%%%%%%%%%%%%%%%% \setlist{leftmargin=2em,topsep=0.5em} \setlength{\parindent}{0em} \setlength{\parskip}{0.5em} \raggedbottom \usepackage{titlesec} \titlespacing*{\section}{0em}{1.5em}{0.2em} \titleformat*{\section}{\Large\scshape\bfseries} \titlespacing*{\subsection}{0em}{-1.5em}{-0.4em} \titleformat*{\subsection}{\normalsize\bfseries} %%%%%%%%%%%%%%%%%%% HEADER AND FOOTER %%%%%%%%%%%%%%%%%%%% \fancypagestyle{problem}{ \fancyhf{} \renewcommand{\headrulewidth}{0.1pt} \setlength\headheight{14pt} \lhead{\textsc{Problem \problemlabel: \problemtitle}} \rhead{\textsc{\contestname}} % e.g., SWERC 2021/22 - Milan \rfoot{\thepage\hspace{2em}} } \fancypagestyle{solution}{ \fancyhf{} \renewcommand{\headrulewidth}{0.1pt} \setlength\headheight{14pt} \lhead{\textsc{\problemlabel: \problemtitle}} \rhead{\textsc{Solutions of \contestname}} % e.g., SWERC 2021/22 - Milan \rfoot{\thepage\hspace{2em}} } %%%%%%%%%%%%%%%%%%%%%%% BLANK PAGES %%%%%%%%%%%%%%%%%%%%%%% \fancypagestyle{blank}{ \fancyhf{} \renewcommand{\headrulewidth}{0pt} \rfoot{\thepage\hspace{2em}} } \newcommand{\insertblankpageifnecessary}{ \clearpage \checkoddpage \ifoddpage\else \thispagestyle{blank} \vspace*{\fill} \begin{center} \scalebox{3}{\rotatebox{45}{\color{black!6}\Huge\textbf{BLANK PAGE}}} \vspace{80pt} \end{center} \vspace*{\fill} \fi \cleardoublepage } %%%%%%%%%%%%%%%%%%% PROBLEM TITLE %%%%%%%%%%%%%%%%%%%% \newcommand\balloon{% \if \showballoon 1 \begin{tikzpicture}[scale=0.5, overlay, shift={(34.5, 0.5)}] \shade[ball color = \problemcolorname] ellipse (1.75 and 2); \shade[ball color = \problemcolorname] (-.1,-2) -- (-.3,-2.2) -- (.3,-2.2) -- (.1,-2) -- cycle; \path (0, -2.2) edge [out=250, in=120] (0.3, -4); \path (0.3, -4) edge [out=-60, in=60] (0, -6); \end{tikzpicture} \fi } \newcommand\tlml{% \if \showtlml 1 \begin{flushright} \begin{minipage}[t]{4.5cm} \textsc{Time limit: \hspace{1.55em}\timelimit{}s} \\ \textsc{Memory limit: \memorylimit{}MB} % The memory limit is in MiB, but most contestants don't know the difference and the difference is minimal, so we prefer to write MB. 
\end{minipage} \end{flushright} \fi } \newcommand\problemheader{% \setcounter{samplescnt}{0} \balloon {\bf \huge \fbox{\textsc{\problemlabel}} \problemtitle} \tlml \vspace{2em}% } \newcommand\solutionheader{% {\bf \huge \fbox{\textsc{\problemlabel}} \problemtitle} \begin{flushright} \begin{tabular}{l l} \textsc{Author:} & \textsc{\problemauthor{}} \\ \textsc{Preparation:} & \textsc{\problempreparation{}} \end{tabular} \end{flushright} \vspace{1em}% } %%%%%%%%%%%%%%%%%%% SAMPLES PRETTY PRINTING %%%%%%%%%%%%%%%%%%%% \newcounter{samplescnt} \newcommand\printfile[2]{% \begin{minipage}[t]{#1} \vspace{-0.1em} {\verbatiminput{#2} } \vspace{-0.5em} \end{minipage}% \ignorespacesafterend } \newcommand\sampleexplanation[1]{ \subsection*{Explanation of sample \arabic{samplescnt}.} #1% \addvspace{2em} }% %%%%%%%%%%%%%%%%%%% SMALL SAMPLE %%%%%%%%%%%%%%%%%%%% \newlength\smallsamplewidth \setlength\smallsamplewidth{8.08cm} \newcommand\smallsample[1]{ \stepcounter{samplescnt}% \begin{tabular}{| c | c |} \hline \textbf{Sample input \arabic{samplescnt}} & \textbf{Sample output \arabic{samplescnt}} \\ \hline \printfile{\smallsamplewidth}{#1.in} & \printfile{\smallsamplewidth}{#1.out} \\ \hline \end{tabular}% \addvspace{2em} \ignorespacesafterend } %%%%%%%%%%%%%%%%%%% BIG SAMPLE %%%%%%%%%%%%%%%%%%%% \newlength\bigsamplewidth \setlength\bigsamplewidth{16.58cm} \newcommand\bigsample[1]{ \stepcounter{samplescnt}% \begin{tabular}{| c |} \hline \textbf{Sample input \arabic{samplescnt}} \\ \hline \printfile{\bigsamplewidth}{#1.in} \\ \hline \end{tabular}% \\[1em] \begin{tabular}{| c |} \hline \textbf{Sample output \arabic{samplescnt}} \\ \hline \printfile{\bigsamplewidth}{#1.out} \\ \hline \end{tabular}% \addvspace{2em} \ignorespacesafterend } %%%%%%%%%%%%%%%%%%%%%%%% SAMPLE %%%%%%%%%%%%%%%%%%%%%%%%% % This magic trick to capture the shell output was copied from % tex.stackexchange.com/questions/16790 \ExplSyntaxOn \NewDocumentCommand{\captureshell}{som} { \sdaau_captureshell:Ne \l__sdaau_captureshell_out_tl { #3 } \IfBooleanT { #1 } {% we may need to stringify the result \tl_set:Nx \l__sdaau_captureshell_out_tl { \tl_to_str:N \l__sdaau_captureshell_out_tl } } \IfNoValueTF { #2 } { \tl_use:N \l__sdaau_captureshell_out_tl } { \tl_set_eq:NN #2 \l__sdaau_captureshell_out_tl } } \tl_new:N \l__sdaau_captureshell_out_tl \cs_new_protected:Nn \sdaau_captureshell:Nn { \sys_get_shell:nnN { #2 } { } #1 \tl_trim_spaces:N #1 % remove leading and trailing spaces } \cs_generate_variant:Nn \sdaau_captureshell:Nn { Ne } \ExplSyntaxOff \newcommand\sample[1]{ \captureshell*[\linelengthin]{cat #1.in | wc -L} \captureshell*[\linelengthout]{cat #1.out | wc -L} \ifnum \linelengthin>40 \bigsample{#1} \else \ifnum \linelengthout>40 \bigsample{#1} \else \smallsample{#1} \fi \fi } %%%%%%%%%%%%%%%%%%% CONTEST METADATA %%%%%%%%%%%%%%%%%%%% \newcommand\contestname{??CONTESTNAME??} \newcommand\showballoon{??SHOWBALLOON??} \newcommand\showtlml{??SHOWTLML??} %%%%%%%%%%%%%%%%%%% PROBLEM METADATA %%%%%%%%%%%%%%%%%%%% \newcommand\problemlabel{undefined} \newcommand\problemcolor{undefined} \newcommand\problemcolorname{undefined} \newcommand\problemtitle{undefined} \newcommand\timelimit{undefined} \newcommand\memorylimit{undefined} \newcommand\problemauthor{undefined} \newcommand\problempreparation{undefined} \begin{document} ??DOCUMENTCONTENT?? \end{document}
{ "alphanum_fraction": 0.650862069, "avg_line_length": 24.5547445255, "ext": "tex", "hexsha": "05d17c4dba2828148ad0344fa897f35dd04d0072", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f2f1817bdbc222bbc06a825ef56f69d26d19d12", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dario2994/pol2dom", "max_forks_repo_path": "p2d/resources/document_template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f2f1817bdbc222bbc06a825ef56f69d26d19d12", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dario2994/pol2dom", "max_issues_repo_path": "p2d/resources/document_template.tex", "max_line_length": 141, "max_stars_count": null, "max_stars_repo_head_hexsha": "5f2f1817bdbc222bbc06a825ef56f69d26d19d12", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dario2994/pol2dom", "max_stars_repo_path": "p2d/resources/document_template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2280, "size": 6728 }
%%%%%%%%%%%%%%%%% % File: Manual_Module_1_Cyclic_improved_en.tex % manual for module 1 of the cyclic part, mol-infer project % edit: Feb 10, 2021 %%%%%%%%%%%%%%%%%% \documentclass[11pt, titlepage, dvipdfmx, twoside]{article} \linespread{1.1} %\documentclass[11pt,dvipdfmx,twoside]{jarticle} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % style definitions % % following setting makes 3cm spaces for top and bottom, and % 2.5cm spaces for left and right % % default setting \setlength{\oddsidemargin}{22pt} % 62pt \setlength{\evensidemargin}{22pt} % 62pt \setlength{\headheight}{12pt} % 12pt \setlength{\textheight}{662pt} % 592pt \setlength{\marginparsep}{10pt} % 10pt \setlength{\footskip}{30pt} % 30pt \setlength{\hoffset}{-13pt} % 0pt \setlength{\paperwidth}{597pt} % 597pt \setlength{\topmargin}{20pt} % 20pt \setlength{\headsep}{25pt} % 25pt \setlength{\textwidth}{427pt} % 327pt \setlength{\marginparwidth}{106pt} % 106pt \setlength{\marginparpush}{5pt} % 5pt \setlength{\voffset}{-37pt} % 0pt \setlength{\paperheight}{845pt} % 845pt % 1 inch = 2.54 cm = 72.27 pt \renewcommand{\baselinestretch}{1.20} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsmath} \usepackage[dvipdfmx]{graphicx} \usepackage{framed} \usepackage{url} \usepackage{color} \newenvironment{myframe}{\begin{trivlist}\item[] \hrule \hbox to \linewidth\bgroup \advance\linewidth by -30pt \hsize=\linewidth \vrule\hfill \vbox\bgroup \vskip15pt \def\thempfootnote{\arabic{mpfootnote}} \begin{minipage}{\linewidth}}{% \end{minipage}\vskip15pt \egroup\hfill\vrule \egroup\hrule \end{trivlist}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\project}{{\tt mol-infer/Cyclic\_improved}} %\newcommand{\project}{{\tt mol-infer/Cyclic}} \newcommand{\secref}[1]{Section~\ref{sec:#1}} \newcommand{\tabref}[1]{Table~\ref{tab:#1}} \newcommand{\figref}[1]{Figure~\ref{fig:#1}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \title{\huge Module 1: Calculating a Feature Vector from an SDF File} \author{\project} \begin{document} \makeatletter \let\c@lstlisting\c@figure \makeatother % \西暦 \date{\today} \maketitle \thispagestyle{empty} \tableofcontents \clearpage \pagenumbering{arabic} \section{Introduction} This note serves as a manual and explains the procedures to run Module~1 of the \project project. The input and output of Module~1 are as follows. \begin{oframed} \begin{description} \item[Input:] A set $D=\{G_1, G_2, \dots, G_p\}$ of cyclic chemical graphs. % \item[Output:] A set ${\mathcal F}(D)\triangleq\{f(G_1), f(G_2), \dots, f(G_p)\}$ of feature vectors, such that $f(\cdot)$ is a feature vector of chemical graphs as details in the accompanying article~\cite{BH_cyclic_arxiv}. % \end{description} \end{oframed} % The output is written to a csv (comma-separated value) file. This csv file is used in Module~2 of the project. The remainder of this note is organized as follows \begin{itemize} \item \secref{preparation}: Summary of essential terminology, as well, as the file organization in this package. \item \secref{quick}: A short computational example. \item \secref{io}: Detailed explanations of the program's input and output. 
\end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \section{Preliminaries} \label{sec:preparation} \subsection{Terminology} \paragraph{Chemical Graph.} A {\bf graph} is an abstract combinatorial construction comprising a set of {\bf nodes} and a set of {\bf edges}, where an edge is an unordered pair of nodes. A {\bf cycle} in a graph is a sequence of nodes such that, except for the first and the last node, each node is unique, and there is an edge for each pair of consecutive nodes in the sequence. A graph where each node is assigned a chemical element (such as carbon, nitrogen, oxygen, etc.) and each edge is assigned a multiplicity between 1 and 4, is called a {\bf chemical graph}. \paragraph{Descriptor.} A {\bf descriptor} is a numerical value that indicates a certain characteristic of a chemical graph. In this project, among others, descriptors include the number of non-hydrogen atoms, the number of atoms in the core of the chemical graph, the core height, etc. For a complete list of descriptors, please refer to the accompanying article~\cite{BH_cyclic_arxiv}. \paragraph{Feature vector.} A vector that comprises the numerical values for the descriptors of a chemical graph. \subsection{File Structure} The following files accompany this note. \begin{itemize} \item {\tt Makefile}: A makefile for compiling the programs. % \item {\tt cycle\_checker.cpp}: Source code written in C++ that, for a given chemical compound, checks whether the chemical graph contains a cycle or not %. \item {\tt eliminate.py}: A Python script that screens out chemical compounds that are not considered under this project, such as inorganic compounds with fewer than four carbon atoms, compounds that contain charged atoms, etc. % \item {\tt fv\_ec.cpp}: Source code written in C++ of the main program of Module~1, calculating a feature vector. % \item {\tt fv\_proj.cpp}: Source code (C++) of a program that given a feature vector function $f$ calculated over a set $D$ of chemical graphs, and a set $D'$ of chemical graphs that does not necessarily have the same descriptors as $D$ does, calculates the set ${\mathcal F}(D')$ of feature vectors projected onto the domain of~$f$. This is an auxiliary program, and usually not essential to the flow of the entire project. % \item Folder {\tt data}: Contains sample input and output files used to test and demonstrate the execution of the programs in Module~1. The files in this folder are as follows. % \begin{itemize} \item {\tt sample1.sdf}: An SDF file that contains a single chemical compound. (Please check \secref{io} for more details on SDF files.) % \item {\tt sample1\_eli.sdf}: An SDF file obtained as the output of the Python script {\tt eliminate.py} when invoked on the file {\tt sample1.sdf}. The contents of the files {\tt sample1.sdf} and {\tt sample1\_eli.sdf} should be identical. % \item {\tt sample1.csv}: Contains a single feature vector constructed from the file {\tt sample1\_eli.sdf}. % \item {\tt sample2.sdf}: An SDF file that contains data on 175 chemical graphs. % \item {\tt sample2\_eli.sdf}: An SDF file obtained as the output of the Python script {\tt eliminate.py} when invoked on the file {\tt sample2.sdf}. The contents of the files {\tt sample2.sdf} and {\tt sample2\_eli.sdf} should be identical. % \item {\tt sample2.csv}: Contains the set of feature vectors constructed from the file {\tt sample2\_eli.sdf}. 
% \item {\tt sample1\_on\_2.csv}: Contains a single feature vector whose values are calculated from the file {\tt sample1\_eli.sdf}, however, the dimensions of the vector are projected on the domain of the feature vector obtained from the file {\tt sample2\_eli.sdf}. \end{itemize} \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \clearpage \section{Execution Example} \label{sec:quick} \subsection{Validation of the Data} % Data on chemical compounds (equivalently, chemical graphs) is stored in a standard SDF file (more information on the structure of SDF files is given in \secref{io}). Each chemical graph~$G$ must satisfy the following conditions: % \begin{description} \item[(i)] $G$ must contain a cycle; % \item[(ii)] $G$ must contain at least four carbon atoms, none of the atoms is allowed to be charged, and each atom must have an atomic mass as listed in~\secref{caution}; and % \item[(iii)] $G$ must not include an aromatic edge. \end{description} % The Python script {\tt eliminate.py} included in Module~1 can be used to remove the graphs that do not satisfy condition~(ii).% and~(iii). For condition (iii), the user must confirm whether it is satisfied or not on his/her own. \paragraph{Confirming that a chemical graph contains a cycle.} % Please use the program compiled from the source file {\tt cycle\_checker.cpp} included in Module~1 to confirm whether each chemical graph in a given SDF file contains a cycle. To compile the program, the included {\tt Makefile} can be used by issuing the following command in the command prompt. \begin{oframed} {\small \verb|$ make CHECKER| } \end{oframed} % In case the {\tt make} command is not available on the system, the program can be compiled in the following way. \begin{oframed} {\small \verb|$ g++ -std=c++11 -Wall -O3 -o CHECKER cycle_checker.cpp| } \end{oframed} To check whether a given SDF file {\tt input.sdf} contains a chemical graph that does not include a cycle, issue the following command on the terminal. \begin{oframed} {\small \verb|$ ./CHECKER input.sdf| } \end{oframed} \begin{itemize} \item If all chemical graphs have cycles (i.e., all satisfy (i)), then the program CHECKER does not output any message. In this case, one can go to the next step. \item Otherwise (i.e., there is a chemical graph that does not satisfy (i)), the CID of each such chemical graph is output. Before going to the next step, such a graph must be removed from the SDF file manually (a small scripting sketch for this step is given at the end of this subsection). \end{itemize} \paragraph{Elimination of chemical graphs that are out-of-scope.} To check whether each chemical graph in a given SDF file satisfies condition (ii) or not, please use the Python script named {\tt eliminate.py}. The script generates a new SDF file that consists of all chemical graphs in the input SDF file that satisfy (ii). To use {\tt eliminate.py}, execute the following command. \begin{oframed} {\small \verb|$ python eliminate.py input.sdf| } \end{oframed} If {\tt input.sdf} contains a chemical graph that does not satisfy (ii), its CID is output. After the execution of {\tt eliminate.py}, a new SDF file {\tt input\_eli.sdf} is output. The file consists of all chemical graphs in {\tt input.sdf} that satisfy condition (ii). % This means that, if all chemical graphs in {\tt input.sdf} satisfy (ii), {\tt input.sdf} and {\tt input\_eli.sdf} are equivalent.
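In case many CIDs are reported by CHECKER, the manual removal can be scripted. The following small Python sketch is only an illustration (it is not part of Module~1, and it assumes that, as in PubChem SDF files, the compound ID appears on the first line of each record and that records are separated by a line consisting of four dollar signs):
\begin{myframe}
\begin{verbatim}
# remove_cids.py (illustration only, not part of Module 1)
# usage: python remove_cids.py input.sdf bad_cids.txt output.sdf
import sys

def main(sdf_path, cid_path, out_path):
    bad = set(line.strip() for line in open(cid_path) if line.strip())
    with open(sdf_path) as f:
        records = f.read().split("$$$$\n")
    kept = [r for r in records
            if r.strip() and r.splitlines()[0].strip() not in bad]
    with open(out_path, "w") as f:
        for r in kept:
            f.write(r + "$$$$\n")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2], sys.argv[3])
\end{verbatim}
\end{myframe}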
\subsection{Calculating a Feature Vector} Please use the program compiled from the source file {\tt fv\_ec.cpp} included in Module~1 to calculate feature vectors for an SDF file such that every chemical satisfies conditions (i), (ii) and (iii). To compile the program, the included {\tt Makefile} can be used by issuing the following command in the command prompt. \begin{oframed} {\small \verb|$ make FV_ec| } \end{oframed} % In case the {\tt make} command is not available on the system, then the program can be compiled in the following way. \begin{oframed} {\small \verb|$ g++ -std=c++11 -Wall -O3 -o FV_ec fv_ec.cpp| } \end{oframed} In order to calculate feature vectors from {\tt input\_eli.sdf} and to output the result in {\tt output.csv}, issue the following command on the terminal. \begin{oframed} {\small \verb|$ ./FV_ec input_eli.sdf output.csv| } \end{oframed} The program {\tt FV\_ec} prints on the terminal instructions on how to provide the arguments and halts if the arguments are not provided appropriately. \subsection{Calculating a Feature Vector from Other SDF (Not Mandatory)} The mapping $f$ that transforms a chemical graph into a feature vector is constructed from a given set $D$ of chemical graphs. To calculate a feature vector of a chemical graph in another set $D'\ne D$ using $f$, please use the program compiled from {\tt fv\_proj.cpp}. To compile the program, the included {\tt Makefile} can be used by issuing the following command in the command prompt. \begin{oframed} {\small \verb|$ make FV_proj| } \end{oframed} In case the {\tt make} command is not available on the system, then the program can be compiled in the following way. \begin{oframed} {\small \verb|$ g++ -std=c++11 -Wall -O3 -o FV_proj fv_proj.cpp| } \end{oframed} Let {\tt descriptor.csv} be the name of the csv file that is obtained by executing {\tt FV\_ec} on the original SDF containing $D$. That is, the mapping $f$ constructed from $D$. Let {\tt input.sdf} be the SDF file that contains the data on $D'\ne D$. To calculate ${\mathcal F}(D')$ and obtain the result in {\tt output.csv}, issue the following command on the terminal. \begin{oframed} {\small \verb|$ ./FV_proj descriptor.csv input.sdf output.csv| } \end{oframed} For example, one can run the program in the following way, using the sample files in Module~1. \begin{oframed} {\small \verb|$ ./FV_proj data/sample2.csv data/sample1.sdf data/sample1_on_2.csv| } \end{oframed} It is not mandatory to execute {\tt FV\_proj} to proceed to Module~2 and afterwards. Let us describe an example of when to use {\tt FV\_proj}. Suppose that a neural network has been constructed from {\tt descriptor.csv} in Module~2, and the neural network can be used to predict the value of a certain chemical property, say $\pi$. When one uses the neural network to predict the value of $\pi$ for a chemical graph in {\tt input.sdf}, the chemical graph must be converted into a feature vector by the mapping $f$. The program {\tt FV\_proj} can be used for this. \clearpage \section{Details in the Input and Output of the Program} \label{sec:io} \subsection{Input} The programs in Module~1 use SDF (Structure Data File), a standard format, for input. 
For the details of the SDF format, please check the following reference: \begin{itemize} \item \url{http://help.accelrysonline.com/ulm/onelab/1.0/content/ulm_pdfs/direct/reference/ctfileformats2016.pdf} (accessible on Feb 1, 2021) %\item \url{https://www.chem-station.com/blog/2012/04/sdf.html} (accessible on Feb 1, 2021) \end{itemize} % As an example, sample1.sdf (https://pubchem.ncbi.nlm.nih.gov/compound/6140) is attached. \subsection{Output} The output is in an original FV (Feature Vector) format, which is just a CSV file, so it can be opened by many spreadsheet programs. The first line shows the components of the FV and the following lines show the values of those components. For example, let us have a look at the FV file {\tt sample1.csv} that is obtained by running {\tt FV\_ec} for {\tt sample1.sdf}. \begin{myframe} \begin{verbatim} CID,n,cs,ch,bl_2,ms,dg_co_1,dg_co_2,dg_co_3,dg_co_4,dg_nc_1,\ dg_nc_2,dg_nc_3,dg_nc_4,bd_co_2,bd_co_3,bd_in_2,bd_in_3,\ bd_ex_2,bd_ex_3,ns_co_C3,ns_co_C2,ns_nc_O1,ns_nc_N1,ns_nc_C2,ns_nc_C3,\ ec_co_C2_C3_2,ec_co_C2_C2_1,ec_co_C2_C3_1,ec_co_C2_C2_2,\ ec_in_C2_C3_1,ec_in_C3_C2_1,\ ec_ex_C3_N1_1,ec_ex_C3_C3_1,ec_ex_C3_O1_1,ec_ex_C3_O1_2,nsH 6140,12,6,4,1,128.333,0,5,1,0,3,1,2,0,3,0,0,0,1,0,1,5,2,\ 1,1,2,1,2,1,2,1,1,1,1,1,1,11 \end{verbatim} \end{myframe} The symbol $\backslash$ at the end of a line indicates that there is no line break between the two lines. Here is an overview of the descriptors. See \cite{BH_cyclic_arxiv} for details. \begin{itemize} \item {\bf CID:} Compound ID. In this example ({\tt sample1.sdf}), it is 6140. The molecule is Phenylalanine, which is taken from \url{https://pubchem.ncbi.nlm.nih.gov/compound/6140}. \item {\bf n:} Number of non-hydrogen atoms. \item {\bf cs:} Number of atoms in the core. \item {\bf ch:} Core height. \item {\bf bl:} Number of 2-leaves. \item {\bf ms:} Average molecular mass defined by $\textrm{ms}\triangleq\frac{1}{n}\sum_{a}\lfloor 10 \cdot \textrm{mass}(a) \rfloor$, where $\textrm{mass}(a)$ represents the mass of an atom $a$. \item {\bf dg\_co\_1, \dots, dg\_co\_4:} Number of atoms in the core such that the degree is 1, 2, 3 and 4, resp. \item {\bf dg\_nc\_1, \dots, dg\_nc\_4:} Number of atoms not in the core such that the degree is 1, 2, 3 and 4, resp. \item {\bf bd\_co\_2, bd\_co\_3:} Number of double and triple bonds in the core paths, resp. \item {\bf bd\_in\_2, bd\_in\_3:} Number of double and triple bonds in the internal paths, resp. \item {\bf bd\_ex\_2, bd\_ex\_3:} Number of double and triple bonds in the external paths, resp. \item {\bf ns\_co\_Xd:} Number of atoms in the core such that the element symbol is X and the degree is d. For example, {\tt ns\_co\_C3} represents the number of carbon atoms in the core such that the degree is 3. \item {\bf ns\_nc\_Xd:} Number of atoms not in the core such that the element symbol is X and the degree is d. \item {\bf ec\_co\_Xx\_Yy\_2, ec\_co\_Xx\_Yy\_3:} Number of double and triple bonds in the core paths such that the end nodes have X and Y as element symbols and the degrees x and y, resp. For example, {\tt ec\_co\_C2\_C3\_2} represents the number of double bonds in the core paths such that both end nodes are carbon atoms and have the degrees 2 and 3, resp. \item {\bf ec\_in\_Xx\_Yy\_2, ec\_in\_Xx\_Yy\_3:} Number of double and triple bonds in the internal paths such that the end nodes have X and Y as element symbols and the degrees x and y, resp. 
\item {\bf ec\_ex\_Xx\_Yy\_2, ec\_ex\_Xx\_Yy\_3:} Number of double and triple bonds in the external paths such that the end nodes have X and Y as element symbols and the degrees x and y, resp. \item {\bf nsH:} Number of hydrogen atoms. \end{itemize} For the descriptors whose names begin with {\tt ns\_} and {\tt ec\_}, only those appearing in the input SDF are written in the output CSV file. \subsection{Attention} \label{sec:caution} The mass of each atom is hard-coded in the program. The values are written in the function {\tt init\_MassMap()} in {\tt fv\_ec.cpp} as follows. If one needs to change values or to add other atoms, edit the source code directly and compile again. \begin{myframe} \begin{verbatim} M["B"] = 108; M["C"] = 120; M["O"] = 160; M["N"] = 140; M["F"] = 190; M["Si"] = 280; M["P"] = 310; M["S"] = 320; M["Cl"] = 355; M["V"] = 510; M["Br"] = 800; M["Cd"] = 1124; M["I"] = 1270; M["Hg"] = 2006; M["Pb"] = 2072; M["Al"] = 269; \end{verbatim} \end{myframe} \begin{thebibliography}{9} \bibitem{BH_cyclic_arxiv} T.~Akutsu and H.~Nagamochi. \newblock A Novel Method for Inference of Chemical Compounds with Prescribed Topological Substructures Based on Integer Programming. \newblock arXiv preprint, arXiv:2010.09203. \end{thebibliography} \end{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{ "alphanum_fraction": 0.7040361154, "avg_line_length": 33.5866425993, "ext": "tex", "hexsha": "8122027ad6b1da5569f840781ef26316b77000e2", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2022-02-27T09:05:41.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-03T02:41:23.000Z", "max_forks_repo_head_hexsha": "6d5411a2cdc7feda418f9413153b1b66b45a2e96", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CitrusAqua/mol-infer", "max_forks_repo_path": "Cyclic_improved/Module_1/Manual_Module_1_Cyclic_improved_en.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6d5411a2cdc7feda418f9413153b1b66b45a2e96", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CitrusAqua/mol-infer", "max_issues_repo_path": "Cyclic_improved/Module_1/Manual_Module_1_Cyclic_improved_en.tex", "max_line_length": 360, "max_stars_count": 2, "max_stars_repo_head_hexsha": "6d5411a2cdc7feda418f9413153b1b66b45a2e96", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CitrusAqua/mol-infer", "max_stars_repo_path": "Cyclic_improved/Module_1/Manual_Module_1_Cyclic_improved_en.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T20:39:26.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-14T02:16:56.000Z", "num_tokens": 5455, "size": 18607 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % CS630: Database Management Systems % Copyright 2014 Pejman Ghorbanzade <[email protected]> % Creative Commons Attribution-ShareAlike 4.0 International License % More info: https://github.com/ghorbanzade/beacon %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Question 2} Let $a$ and $b$ be integer-valued attributes that may be \texttt{NULL} in some tuples. For each of the following conditions that may appear in a \texttt{WHERE} clause, describe exactly the set of $(a,b)$ tuples that satisfy the condition, including the case where $a$ and/or $b$ is \texttt{NULL}. \begin{enumerate}[label=(\alph*)] \item $a=10$ or $b=20$ \item $a=10$ and $b=20$ \item $a<10$ or $a>=10$ \item $a=b$ \end{enumerate} \textbf{Solution:} For ease of representation, sets $I$ and $J$ are defined respectively as the set of integers and the set of nullable integers; the latter is the set of all integer values together with the \texttt{NULL} value, $\lambda$. As well, the set of all $(a,b)$ tuples that satisfy the specified condition is defined as $S$. \begin{enumerate}[label=(\alph*)] \item \begin{equation}\nonumber S = \{ (a,b) | a = 10, b\in J \} \cup \{ (a,b) | b = 20, a\in J \} \end{equation} \item \begin{equation}\nonumber S = \{ (a,b) | a = 10, b = 20 \} \end{equation} \item \begin{equation}\nonumber S = \{ (a,b) | a \in I, b \in J\} = (J \times J) - \{(a,b) | a = \lambda , b \in J\} \end{equation} \item \begin{equation}\nonumber S = \{ (a,b) | a = b, a \in I, b \in I\} \end{equation} \end{enumerate}
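These answers follow from SQL's three-valued logic: a comparison involving \texttt{NULL} evaluates to \emph{unknown}, and a \texttt{WHERE} clause keeps only the tuples for which the condition is \emph{true}. The behavior can be checked directly in any SQL engine; the following short Python/SQLite snippet (not part of the assignment; the table and sample rows are arbitrary) enumerates the qualifying tuples for each condition:
\begin{verbatim}
# Quick check of the answers using SQLite's NULL semantics.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE R (a INTEGER, b INTEGER)")
rows = [(10, 20), (10, None), (None, 20), (5, 7), (None, None)]
con.executemany("INSERT INTO R VALUES (?, ?)", rows)

conditions = ["a = 10 OR b = 20", "a = 10 AND b = 20",
              "a < 10 OR a >= 10", "a = b"]
for cond in conditions:
    result = con.execute("SELECT a, b FROM R WHERE " + cond).fetchall()
    print(cond, "->", result)
\end{verbatim}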
{ "alphanum_fraction": 0.6221518987, "avg_line_length": 35.9090909091, "ext": "tex", "hexsha": "bdbd1f960bde99b2ee42d98a8d1349a235b10016", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_path": "umb-cs630-2014f/src/tex/hw03/hw03q02.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_path": "umb-cs630-2014f/src/tex/hw03/hw03q02.tex", "max_line_length": 209, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_path": "umb-cs630-2014f/src/tex/hw03/hw03q02.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "num_tokens": 504, "size": 1580 }
\chapter{Introduction} \label{Introduction} There are 23.14 billion IoT devices in use worldwide, and that number is expected to grow to 75.44 billion by 2025 \cite{statista_2016}. To support the demand for these devices, 2,888 companies have jumped into the field, contributing various types of IoT devices. Some devices have no support, some get updates but have poor support, and some are outright malicious. Some devices have privacy and security concerns even when brand new and still technically ``supported''. To address these concerns, this thesis contributes an IoT testbed that logs network and power data from 16 IoT devices over one year, accumulating 184.94 GB of data and 172,445,929 data points into a database. To help researchers sort and view this data, this thesis adds a Python \cite{python} program that graphs network traffic and power data from the database. The graphs created by this tool were used to analyze the IoT devices' network and power usage in the testbed while idle, during startup, and while in use. From these graphs, it appears to be possible to identify the smart speaker in use from just one minute of the shared power usage, for multiple commands. This paper focuses on security and privacy flaws that, if fixed, do not affect the core features of an IoT device. For example, a smart speaker must store audio snippets to parse for the wake word. If the smart speaker occasionally hears a false positive wake word and sends the audio to its server, that is reasonable. However, Google Homes had an issue with their wake button, which caused the Home to listen 24/7 \cite{burke_2017}. This type of unexpected behavior is a privacy concern. In one of the largest IoT cybersecurity attacks, the Mirai Botnet, an attacker was able to use weak login credentials to take control of 2.5 million IoT devices to perform a denial of service attack \cite{whittaker_2017}. This is one of many events that introduce security concerns. We design the IoT database to better understand and detect these types of vulnerabilities in the future. \section{Previous Work} \label{Previous Work} This section presents and analyzes related works on the topic of characterizing IoT devices, treating each previous work individually. Because these papers are similar to each other, commentary on how their work differs from this paper and how it is used here is covered in Section \ref{Scope}. \subsection{An Analysis of Home IoT Network Traffic and Behaviour} \label{homeIoTPaper} In \textit{An Analysis of Home IoT Network Traffic and Behaviour}~\cite{home_iot}, the authors analyze IoT traffic in the home. The authors created an IoT testbed by setting up multiple IoT devices, connecting them to a router, sniffing their network packets while idle, and storing these packets. The IoT testbed consists of a smart air quality monitor, Amazon Echo, a few Apple devices, a smart hub, and a smart vacuum cleaner. After 22 days of network logging, the authors analyzed each IoT device's network traffic individually and as a whole. They noticed that they can identify most devices' vendor from the first three MAC address bytes and make an assumption about what device it is. This information does not give away the exact device, but knowing a specific vendor for devices within a house creates a privacy risk. The Hue bridge communicates over unencrypted HTTP with other devices and the outside world. The authors state that this security flaw creates a privacy risk. 
A user’s presence in a room or house can be determined from these unencrypted HTTP packets. The authors also show the percentage of network packets by protocol and various other device network patterns. This general analysis fingerprints each device. \textit{An Analysis of Home IoT Network Traffic and Behaviour} most closely matches this paper. The authors have the same overall idea: collect network data and then use it to analyze metadata surrounding the network traffic. \subsection{ProfilIoT} \label{ProfilIoTPaper} The paper \textit{ProfilIoT: A Machine Learning Approach for IoT Device Identification Based on Network Traffic Analysis}~\cite{Meidan:2017:PML:3019612.3019878} uses machine learning algorithms to classify IoT devices. The researchers of this paper collect traffic from 13 different IoT and non-IoT devices. The IoT devices include a baby monitor, motion sensor, printer, refrigerator, security camera, socket, thermostat, smartwatch, and television. The non-IoT devices include two PCs and two smartphones. These devices connect to a Wi-Fi access point that records their network traffic with Wireshark\cite{wireshark}. The researchers use machine learning on single-sessions to classify a device as an IoT device or non-IoT device. Then, they can classify the IoT devices by brand and model (e.g. Samsung Refrigerator, LG TV, WeMo Motion Sensor) with multi-sessions. A single-session is a 4-tuple consisting of the source IP, destination IP, source port number, and destination port number of a single TCP packet from a device. A multi-session is a list of single-sessions. Another machine learning model determines the minimum number of single-sessions needed to classify each device, which sets the size of a multi-session. With single-sessions, they could determine whether a device is an IoT device or not with 100 percent accuracy. Out of their nine IoT devices, they can classify the brand and model of the IoT device with 99.281 percent accuracy. Like \textit{ProfilIoT: A Machine Learning Approach for IoT Device Identification Based on Network Traffic Analysis}, this paper also focuses on the classification of devices from data. \subsection{Logging and Analysis of Internet of Things (IoT) Device Network Traffic and Power Consumption} \label{frawleyPaper} \textit{Logging and Analysis of Internet of Things (IoT) Device Network Traffic and Power Consumption}\cite{frawley_2018}, written by Ryan Frawley, was developed in conjunction with this paper. Frawley's paper and this paper were both directed by advisor Andrew Danowitz at Cal Poly. Frawley's paper documents the steps necessary to construct a reliable IoT testbed capable of capturing network traffic and power data for connected devices and analyzing these devices further. He performed GeoIP\cite{maxmind} lookups on each device, showing the percentage of packets originating from each country and company. He also analyzed packets containing unencrypted data from these devices. \section{Scope} \label{Scope} Our work expands on the paper from Subsection \ref{homeIoTPaper} by contributing a portable database consisting of 10 months of data rather than 22 days of data. This paper adds more devices to the study and focuses more on device power/network usage over time rather than specific network packet information. 
Instead of using machine learning techniques on network data, like the paper from Subsection \ref{ProfilIoTPaper}, this paper focuses on manual analysis, looking for spikes, their height, their duration, and other graphical heuristics. Compared to the papers in Subsections \ref{homeIoTPaper} and \ref{ProfilIoTPaper}, this paper also adds power usage over time to the data set. The two papers mentioned only focus on network traffic. This paper also puts a significant emphasis on creating an extensive database, rather than a smaller set of data, to create graphs of network and power usage over time.

This paper is a continuation of the third paper, in Subsection \ref{frawleyPaper}. Due to overlap between these two works, certain aspects of the IoT testbed setup and usage are only covered in cursory detail here. The reader is advised to access Frawley's work for full information. We both assembled the IoT testbed and interacted with the devices on a daily basis to simulate regular usage. We both also performed a preliminary analysis of the device network and power usage together.

The contributions of this work are:
\begin{itemize}
\item Logging of IoT device power and network usage
\item A custom data visualization tool that attaches to the database
\item Analysis of smart speaker and streaming device network and power usage over time during startup, while idle, and while in use
\item Analysis of smart speaker network and power usage
\item Figures that show it is likely possible to identify a smart speaker through analysis of the power line over time.
\end{itemize}

\section{Thesis Organization}
Chapter \ref{Method} discusses the steps in setting up the IoT testbed, the analysis tool, and the logging system for interaction with devices. Chapter \ref{Method} also highlights the steps to set up a development environment to run the analysis tool (realTimeIoTGrapher.py) and how to use it. Chapter \ref{Results} presents power and network traffic for smart speakers and streaming devices while idle, in use, and during startup in the form of line graphs and discusses the implications of these graphs. Chapter \ref{Results} also shows the graphs used to fingerprint the smart speakers while handling different commands and in the presence of background noise. Finally, Chapter \ref{Conclusion} finishes with concluding thoughts and future work.
{ "alphanum_fraction": 0.8107338445, "avg_line_length": 152.1666666667, "ext": "tex", "hexsha": "62f7e8095ad53cb912f8880c5b60b4bd1963ce66", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a6a46be8fe9f3170f90592df6ef34f5770fa8039", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nealhnguyen/iotNetworkPowerAnalysisPaper", "max_forks_repo_path": "chapters/Introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6a46be8fe9f3170f90592df6ef34f5770fa8039", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nealhnguyen/iotNetworkPowerAnalysisPaper", "max_issues_repo_path": "chapters/Introduction.tex", "max_line_length": 827, "max_stars_count": null, "max_stars_repo_head_hexsha": "a6a46be8fe9f3170f90592df6ef34f5770fa8039", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nealhnguyen/iotNetworkPowerAnalysisPaper", "max_stars_repo_path": "chapters/Introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1907, "size": 9130 }
%!TEX root = forallxsol.tex %\part{Truth-functional logic} %\label{ch.TFL} %\addtocontents{toc}{\protect\mbox{}\protect\hrulefill\par} \chapter{Sentences of TFL}\setcounter{ProbPart}{0} \problempart \label{pr.wiffTFL} For each of the following: (a) Is it a sentence of TFL, strictly speaking? (b) Is it a sentence of TFL, allowing for our relaxed bracketing conventions? \begin{earg} \item $(A)$\hfill \myanswer{(a) no (b) no} \item $J_{374} \eor \enot J_{374}$\hfill \myanswer{(a) no (b) yes} \item $\enot \enot \enot \enot F$\hfill \myanswer{(a) yes (b) yes} \item $\enot \eand S$\hfill \myanswer{(a) no (b) no} \item $(G \eand \enot G)$\hfill \myanswer{(a) yes (b) yes} \item $(A \eif (A \eand \enot F)) \eor (D \eiff E)$\hfill \myanswer{(a) no (b) yes} \item $[(Z \eiff S) \eif W] \eand [J \eor X]$\hfill \myanswer{(a) no (b) yes} \item $(F \eiff \enot D \eif J) \eor (C \eand D)$\hfill \myanswer{(a) no (b) no} \end{earg} \problempart Are there any sentences of TFL that contain no atomic sentences? Explain your answer. \\\myanswer{No. Atomic sentences contain atomic sentences (trivially). And every more complicated sentence is built up out of less complicated sentences, that were in turn built out of less complicated sentences, \ldots, that were ultimately built out of atomic sentences.}\\ \problempart What is the scope of each connective in the sentence $$\bigl[(H \eif I) \eor (I \eif H)\bigr] \eand (J \eor K)$$ \myanswer{The scope of the left-most instance of `$\eif$' is `$(H \eif I)$'.\\ The scope of the right-most instance of `$\eif$' is `$(I \eif H)$'.\\ The scope of the left-most instance of `$\eor$ is `$\bigl[(H \eif I) \eor (I \eif H)\bigr]$'\\ The scope of the right-most instance of `$\eor$' is `$(J \eor K)$'\\ The scope of the conjunction is the entire sentence; so conjunction is the main logical connective of the sentence.}
{ "alphanum_fraction": 0.6872649113, "avg_line_length": 56.3939393939, "ext": "tex", "hexsha": "0f61a65b91ce9e7f8425a68aa0e585e9fccf13f8", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2022-02-19T10:13:13.000Z", "max_forks_repo_forks_event_min_datetime": "2016-09-08T05:09:01.000Z", "max_forks_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "ryanmichaelhebert/forallx-cam", "max_forks_repo_path": "solutions/forallx-sol-tfl.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "ryanmichaelhebert/forallx-cam", "max_issues_repo_path": "solutions/forallx-sol-tfl.tex", "max_line_length": 275, "max_stars_count": 7, "max_stars_repo_head_hexsha": "7023818f0871d1d4712fd84032b920c73292cc53", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "peasantcore/forallx-yyc", "max_stars_repo_path": "solutions/forallx-sol-tfl.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-04T05:59:31.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-19T01:39:52.000Z", "num_tokens": 652, "size": 1861 }
\section{Conclusion} \begin{frame}{Summary and Future directions} \begin{itemize} \item<1-> Resource levels can control strength of competition \item<2-> Balance of limitations promote coexistence \item<3-> With standard-of-care: testosterone limitation is increased leading to extinction \item<4-> With adaptive therapy: competitive release avoided \begin{itemize} \item Effectiveness depends on $T^+$ and $T^p$ population \item Population controlled by resource limitations \item Maximum limit on $T^+$ and $T^p$ by thresholds of adaptive therapy \end{itemize} \item<5-> Future directions: \begin{itemize} \item<6-> Make adaptive therapy effective at reducing tumour burden \item<7-> Dynamic thresholds for turning on/off based on composition of the tumour \item<8-> Different limitations for different cell types \end{itemize} \end{itemize} \end{frame} \begin{frame}{Acknowledgement} I would like to thank the following people: \begin{itemize} \item Supervisor: Prof. Sutirth Dey \item Expert: Dr. M.S. Madhusudhan \item Mentor: Vibishan B \item PBL Members \item Friends and Family \item KVPY and IISER Pune \end{itemize} \end{frame}
{ "alphanum_fraction": 0.7126805778, "avg_line_length": 36.6470588235, "ext": "tex", "hexsha": "20824a99e81da7571fe550401b143d55d5175b94", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e4decacd5779e85a68c81d0ce3bedf42dea2964f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Harshavardhan-BV/Cancer-compe-strat", "max_forks_repo_path": "writing/MSDefence/chapters/Summary.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e4decacd5779e85a68c81d0ce3bedf42dea2964f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Harshavardhan-BV/Cancer-compe-strat", "max_issues_repo_path": "writing/MSDefence/chapters/Summary.tex", "max_line_length": 95, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e4decacd5779e85a68c81d0ce3bedf42dea2964f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Harshavardhan-BV/Cancer-compe-strat", "max_stars_repo_path": "writing/MSDefence/chapters/Summary.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-18T15:54:26.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-18T15:54:26.000Z", "num_tokens": 331, "size": 1246 }
\documentclass[letterpaper,11pt,oneside]{memoir} \usepackage{xess} \product[StickIt! Grove] \manualnum{013} \manualversion{1.0} \newcommand{\copyrightyear}{2015} \newcommand{\xula}{XuLA Board} \newcommand{\stickit}{StickIt! Board} \begin{document} \frontmatter \makexessmanualtitlepage{product_cover.jpg}{0.7} \makexesslegal{\copyrightyear} \begin{xessrevisiontbl} 07/15/2015 & 1.0 & Initial release for \product\ V1.0.\\ % \hline % 06/04/2015 & 4.0 & Revised for \product\ V4.0.\\ \end{xessrevisiontbl} \makexesstoc \mainmatter \chapter{Preliminaries} Here's some helpful information before getting started. \section{Getting Help!} Here are some places to get help if you encounter problems: \begin{itemize} \item If you can't get the \product\ to work, send an e-mail message describing your problem to \href{mailto:\helpemail}{\helpemail}. \item Or submit a problem report at \url{\helppage}. \item \href{http://www.xess.com}{Our web site} also has \begin{itemize} \item \href{http://www.xess.com/projects/}{example designs}, \item \href{http://www.xess.com/appnotes/}{application notes}, and \item \href{http://www.xess.com/tutorials/}{tutorials}. % \item answers to frequently-asked-questions, % \item \href{}{a forum where you can post questions}. \end{itemize} \end{itemize} % \section{Take Notice!} % \begin{itemize} % \item The \xula\ is not 5V-tolerant. \warning{Do not connect 5V logic signals % to the \digpmod\ sockets of the \product.} % \item \warning{Only power the \product\ with a regulated 5 VDC, % center-positive power supply.} % \end{itemize} \section{Packing List} Here is what you should have received in your package: \begin{itemize} \item a \product. \item a 6\by 2 right-angle male header. \item two 5\by 1 male headers. \end{itemize} \chapter{Setup} The \product\ provides sockets for connecting up to four Grove modules to a single \digpmod\ socket on a \stickit, or it can be plugged into a solderless breadboard. \section{Inserting Your \texorpdfstring{\product}{StickIt! Grove}\ Into Your \texorpdfstring{\stickit}{StickIt! Board}} To use the \product\ with a \digpmod\ socket, first solder the 6\by 2 right-angle male header to the module. Then the \product\ can be inserted into any of the \digpmod\ sockets of the \stickit\ as shown below. \fixedpic{\includegraphics[width=0.7\textwidth]{pmod_insertion.jpg}} \section{Inserting Your \texorpdfstring{\product}{StickIt! Grove}\ Into a Breadboard} To use the \product\ with a solderless breadboard, first solder the two 5\by 1 male headers as shown. Then the \product\ can be inserted into the breadboard as shown below. \fixedpic{\includegraphics[width=0.7\textwidth]{breadboard_insertion.jpg}} \chapter{Using the \product} Each of the four Grove sockets receives two of the eight data lines that connect to the \digpmod\ and breadboard connectors. Each Grove socket also shares a common ground and power connection with the \digpmod\ and breadboard connectors. Attaching a Grove module to one of the sockets provides the module with power, ground, and two I/O lines. \section{Using the \product\ with a \stickit} In order to interface a Grove module with a \stickit\ and \xula\ through a \digpmod\ socket, you have to figure out the path the signals take from the pins of the FPGA through the \stickit\ and \digpmod\ sockets and finally on to the Grove module. You can manually trace the path using the following procedure: \begin{itemize} \item Connect the \product\ to one of the \digpmod\ sockets (PM1--PM3) on the \stickit. 
\item \label{itm:groveconnect} Connect a Grove module to one of the sockets (GR1--GR4) on the \product\ and note which \digpmod\ signals (D0--D7) it connects to. \item Using the \digpmod\ socket and signal found in the previous steps, lookup the channel it connects to in the table on page 12 of the \href{http://www.xess.com/static/media/manuals/StickIt-manual-v4_0.pdf}{\stickit\ manual}. \item Now use the channel to lookup the FPGA pin of the \xula\ it connects to in the table on page 9 of the \href{http://www.xess.com/static/media/manuals/StickIt-manual-v4_0.pdf}{\stickit\ manual}. \item Make a UCF file associating each FPGA pin with each I/O of the module. \item Include the UCF file in your Xilinx ISE FPGA project. \end{itemize} As an example, consider using a simple Grove module with a single LED. Plugging the module into socket GR3 on the \product\ connects the LED's anode to pin D7 of the \digpmod\ connector and its cathode to ground. Inserting the \product\ into socket PM3 of the \stickit\ connects D7 to channel 30. Assuming a XuLA2 board is inserted into the \stickit, channel 30 will terminate on pin B2 of the FPGA. So the UCF file would contain a constraint like this: \begin{lstlisting} net LED loc = B2; \end{lstlisting} Admittedly, that's a lot of work just to make a connection! Instead of going through all that, the |xsconnect| Python package (\url{https://pypi.python.org/pypi/xsconnect}) provides two scripts to make the process easier. The command-line script generates the UCF directly like so: \begin{lstlisting} xsconn -p grove -m stickit4 -n pm3 -d xula2 \end{lstlisting} which gives: \begin{lstlisting} ######################################################################## # StickIt! Grove V1.0 ==[pm3]==> StickIt! V4 ==> XuLA2 net gr1-d0 loc = h2; net gr1-d1 loc = f1; net gr2-d2 loc = f2; net gr2-d3 loc = e1; net gr3-d6 loc = b1; net gr3-d7 loc = b2; net gr4-d4 loc = e2; net gr4-d5 loc = c1; ######################################################################## \end{lstlisting} The |gxsconn| script does the same thing, but with a GUI: \fixedpic{\includegraphics[width=0.7\textwidth]{gxsconn.png}} Just change the |gr3-d7| in the output to |LED| (or whatever name you want to use) and include the constraint in the UCF file of your ISE project. \section{Using the \product\ with a Breadboard} After inserting the \product\ into a breadboard, attaching a Grove module to one of the sockets connects its two I/O signals to two of the pins D0--D7 connecting to the breadboard as well as the power and ground pins. The pins associated with each Grove socket are printed next to each socket, and the corresponding pin locations on the breadboard header are shown in the \hyperref[fig:connections]{figure on page~\pageref*{fig:connections}}. Then just use jumpers or hookup wire to make connections from the header to the rest of your circuitry on the breadboard. \chapter{I/O Locations} The connections of the \digpmod\ and breadboard I/O signals to the Grove sockets are shown below. \fixedpic{\label{fig:connections}\includegraphics[width=0.7\textwidth]{grove_pcb.png}} \chapter{Schematic} \pagebreak \makebox[\textwidth][r]{\hss\includegraphics[width=\textheight, angle=90]{grove_schematic.png}} \end{document}
{ "alphanum_fraction": 0.7377311781, "avg_line_length": 34.8578680203, "ext": "tex", "hexsha": "2c2b3e65482c812183cc4b56282ff819f4c83435", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bae8695d0d3adcf0a71f6a2ab7e550640e44c171", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "xesscorp/StickIt-Grove", "max_forks_repo_path": "docs/Manual/StickIt-Grove-manual-v1_0.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bae8695d0d3adcf0a71f6a2ab7e550640e44c171", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "xesscorp/StickIt-Grove", "max_issues_repo_path": "docs/Manual/StickIt-Grove-manual-v1_0.tex", "max_line_length": 152, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bae8695d0d3adcf0a71f6a2ab7e550640e44c171", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "xesscorp/StickIt-Grove", "max_stars_repo_path": "docs/Manual/StickIt-Grove-manual-v1_0.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-11T22:02:21.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-11T22:02:21.000Z", "num_tokens": 1958, "size": 6867 }
%% %% Copyright 2007, 2008, 2009 Elsevier Ltd %% %% This file is part of the 'Elsarticle Bundle'. %% --------------------------------------------- %% %% It may be distributed under the conditions of the LaTeX Project Public %% License, either version 1.2 of this license or (at your option) any %% later version. The latest version of this license is in %% http://www.latex-project.org/lppl.txt %% and version 1.2 or later is part of all distributions of LaTeX %% version 1999/12/01 or later. %% %% The list of all files belonging to the 'Elsarticle Bundle' is %% given in the file `manifest.txt'. %% %% Template article for Elsevier's document class `elsarticle' %% with numbered style bibliographic references %% SP 2008/03/01 %% %% %% %% $Id: elsarticle-template-num.tex 4 2009-10-24 08:22:58Z rishi $ %% %% \documentclass[preprint,12pt]{elsarticle} %% Use the option review to obtain double line spacing %% \documentclass[preprint,review,12pt]{elsarticle} %% Use the options 1p,twocolumn; 3p; 3p,twocolumn; 5p; or 5p,twocolumn %% for a journal layout: %% \documentclass[final,1p,times]{elsarticle} %% \documentclass[final,1p,times,twocolumn]{elsarticle} %% \documentclass[final,3p,times]{elsarticle} %% \documentclass[final,3p,times,twocolumn]{elsarticle} %% \documentclass[final,5p,times]{elsarticle} %% \documentclass[final,5p,times,twocolumn]{elsarticle} %% if you use PostScript figures in your article %% use the graphics package for simple commands %% \usepackage{graphics} %% or use the graphicx package for more complicated commands %% \usepackage{graphicx} %% or use the epsfig package if you prefer to use the old commands %% \usepackage{epsfig} %% The amssymb package provides various useful mathematical symbols \usepackage{amssymb} \usepackage{xcolor} %\ifCLASSINFOpdf %\usepackage[pdftex]{graphicx} %% declare the path(s) where your graphic files are %%\graphicspath{{../pdf/}{../jpeg/}} %\graphicspath{{images/}} %% and their extensions so you won't have to specify these with %% every instance of \includegraphics %% \DeclareGraphicsExtensions{.pdf,.jpeg,.png} %\else %% The amsthm package provides extended theorem environments %% \usepackage{amsthm} \usepackage{amsmath} %\usepackage{calrsfs} %\usepackage{bm} \usepackage{amssymb} \usepackage{array} \usepackage{tabu} \usepackage{float} \usepackage{svg} %\usepackage[]{algorithm2e} %\usepackage{algorithm} %\usepackage[]{algorithmic} \usepackage{algorithm, algorithmic} %\usepackage[colorlinks]{hyperref} \usepackage{xcolor} \newcolumntype{M}[1]{>{\centering\arraybackslash}m{#1}} \newcolumntype{N}{@{}m{0pt}@{}} \newcommand\norm[1]{\left\lVert#1\right\rVert} \newcommand{\trans}{\mathsf{T}} %% The lineno packages adds line numbers. Start line numbering with %% \begin{linenumbers}, end it with \end{linenumbers}. Or switch it on %% for the whole article with \linenumbers after \end{frontmatter}. %% \usepackage{lineno} %% natbib.sty is loaded by default. However, natbib options can be %% provided with \biboptions{...} command. Following options are %% valid: %% round - round parentheses are used (default) %% square - square brackets are used [option] %% curly - curly braces are used {option} %% angle - angle brackets are used <option> %% semicolon - multiple citations separated by semi-colon %% colon - same as semicolon, an earlier confusion %% comma - separated by comma %% numbers- selects numerical citations %% super - numerical citations as superscripts %% sort - sorts multiple citations according to order in ref. 
list %% sort&compress - like sort, but also compresses numerical citations %% compress - compresses without sorting %% %% \biboptions{comma,round} % \biboptions{}

\journal{}

\graphicspath{{images/}}

\begin{document}

\begin{frontmatter}

\author{Ali Noroozi}
\author{Mansoor Rezghi\corref{cor1}\fnref{label2}}
\ead{[email protected]}

\address{Department of Computer Science, Tarbiat Modares University, Tehran, Iran\fnref{label3}}

\title{A Tensor Framework for Alzheimer's Disease Early Detection and Functional Connectivity Analysis in Resting-State fMRI}

\begin{abstract}
Recently, machine learning methods have gained considerable attention among researchers for analyzing brain imaging data, such as resting-state functional Magnetic Resonance Imaging (rs-fMRI), in order to obtain a better understanding of the brain and of related diseases such as Alzheimer's disease. Finding the common patterns caused by a brain disorder through analysis of the functional connectivity (FC) network, along with discriminating diseased subjects from normal controls, have traditionally been the two main goals in studying rs-fMRI data. The majority of techniques for finding an FC calculate the FC matrix for each subject and then use simple techniques to combine them into a general functional connectivity. Likewise, state-of-the-art classification techniques for detecting subjects with brain disorders rely on calculating an FC for each subject, vectorizing it, and then feeding it to the classifier. Considering these problems, and based on the multidimensional nature of the data, we propose a novel tensor framework in which the FC of each class is obtained without the need to construct an FC for every sample. This framework also allows us to reduce the dimensionality and to create a novel discriminant function that avoids vectorization at any step and uses the test data in the training process without forcing any prior knowledge about its label onto the classifier. Extensive experiments using the ADNI dataset demonstrate that our proposed framework effectively boosts the fMRI classification performance and reveals novel connectivity patterns in Alzheimer's disease at its early stages.
% %Different methods have been deployed in order to discriminate Alzheimer's disease from normal ones which is a hard task, especially in early stages (eMCI) case. The majority of deployed techniques rely on constructing the functional connectivity (FC) for each person and use the vectorized FC as the input for the classifiers which has two main drawbacks: 1) The need for constructing the FC********The loss of possible valuable structural information in the vectorization step.
%Considering these problems and based on multidimensional nature the data, we have came up with a novel framework which omits the FC construction part and preserve the structural integrity of data for the classification. % The proposed framework uses the High Order Singular Value Decomposition (HOSVD) in order to prune the classes and select the proper basis for each of them. %This framework also allows us to obtain a general FC pattern for normal and eMCI classes but not a single sample which helps us to shed more lights on the brain abnormalities in the Alzheimers disease at its early stages. % Extensive experiments using the ADNI dataset demonstrate that % our proposed framework effectively boosts the fMRI classification performance and reveals novel connectivity patterns in Alzheimer's disease at its early stages. \end{abstract} \begin{keyword} %% keywords here, in the form: keyword \sep keyword Tensor Decomposition, fMRI, Functional Connectivity %% MSC codes here, in the form: \MSC code \sep code %% or \MSC[2008] code \sep code (2000 is the default) \end{keyword} \end{frontmatter} %% %% Start line numbering here if you want %% % \linenumbers %% main text \section{Introduction} \label{Intro} Alzheimer’s disease (AD) is a progressive neurodegenerative disorder with a long pre-morbid asymptomatic period which affects millions of elderly individuals worldwide\cite{r01}. It is predicted that the number of affected people will double in the next 20 years, and 1 in 85 people will be affected by 2050 \cite{r02}. The predominant clinical symptoms of AD include a decline in some important brain cognitive and intellectual abilities, such as memory, thinking, and reasoning. Precise diagnosis of AD, especially at its early warning stage: early Mild Cognitive Impairment (eMCI), enables treatments to delay or even avoid such disorders \cite{r03}. In recent years, brain imaging techniques like Positron Emission Tomography (PET)\cite{r21}, Electroencephalography (EEG)\cite{r22} and functional Magnetic Resonance Imaging (fMRI)\cite{r23} have been used in the analysis of AD. Due to the high spatial resolution and relatively lower costs, fMRI is vastly used among researchers in order to monitor brain activities especially in AD and all its stages in which detecting abnormalities within small brain regions is essential \cite{r04}. An fMRI sample is naturally a 4D tensor consisting of 3D voxels moving in time, and each voxel contains an intensity value that is proportional to the strength of the Blood Oxygenation Level Dependent(BOLD) signal, which is a measure of the changes in blood flow, to estimate the activity of different brain regions\cite{r07}. Resting-state fMRI(rs-fMRI) is an fMRI technique in which the patient is asked to rest during the whole scan, focuses on the low-frequency $\left( < 0.1 Hz \right)$ oscillations of BOLD signal, which presents the underlying neuronal activation patterns of brain regions[8]–[10]. rs-fMRI is usually used in order to analyze brain diseases like AD or Autism\cite{r33,r34}. % %Since the number of voxels within a single full scan is high %($5000$ up to roughly $200,000$[ref(Definingnodes)]) and they form strong spatial relations, Since each fMRI volume consist of hundreds of thousands of voxels which are often highly correlated with the surrounding voxels in the brain volume, parcellation of the brain for further analysis has moved toward the use of anatomical atlases. 
These atlases are strictly defined using anatomical features of the brain, like locations of common gyri and do not rely on any functional information. To generate data using an Atlas-based approach, the BOLD signal from all voxels is averaged within each brain region called Region of Interest(ROI)\cite{r09}. By putting together the average time-series for all the ROIs, the $i$th volume would become $X_i \in \mathbb{R}^{T \times R} , i = \{1,2,\cdots, S\}$ in which $R$, $T$ and $S$ are the number of ROIs, time points and samples respectively. The process of obtaining such a matrix is shown in Figure \ref{g1.1}.% \begin{figure*}[!t] \centering \includegraphics[width=5.5in]{Data} \caption{The process of extracting ROI time-series from the original 4D volume. } \label{g1.1} \end{figure*} % There are two major studies associated with rs-fMRI data: finding common brain disorders caused by diseases like Alzheimer's or autism, and more recently detecting patients with brain disorders using classification techniques \cite{r35,r36}. % Due to the high dimensionality of data and the nature of diseases like eMCI which does not show any reliable clinical symptoms, % researchers moved towards advanced machine learning techniques in order to achieve more reliable analysis \cite{r37}. % There are two major studies associated with rs-fMRI data: finding common brain disorders caused by diseases like Alzheimer's, Autism, schizophrenia and etc. and more recently detecting patients with brain disorders using classification techniques \cite{r35,r36}. Due to the high dimensionality of data and the nature of diseases like eMCI which does not show any reliable clinical symptoms, researchers moved towards advanced machine learning techniques in order to achieve more reliable analysis \cite{r37}. A powerful tool that is commonly used in order to achieve aforementioned goals is Functional Connectivity(FC) network. FC is a $region \times region$ matrix $\bar{X}$ in which $\bar{x}_{ij}$ represents the functional connectivity between the $i$th and $j$th ROI. Functional connectivity is an observable phenomenon quantifiable with measures of statistical dependencies, such as correlations, coherence, or transfer entropy \cite{r38}. Recent studies have shown that some brain disorders like AD could alter the way that some brain regions interact with each other. For example, compared with the healthy, AD patients have been found decreased functional connectivity between the hippocampus and other brain regions, and MCI patients have been observed increased functional connectivity between the frontal lobe and other brain regions\cite{r04}. So, Finding an FC that highlights the patterns caused by a disease, i.e. a \textbf{General} functional connectivity, has been a common goal in the rs-fMRI study for a long time. Several approaches exist to find common patterns among different brain scans. Data-driven methods such as PCA have been proposed for this task \cite{r55}. But ultimately most of them rely on calculating a network for each volume which may overlook the role of noises or outliers within the data\cite{r53,r54}. In recent years FCs are also used as features in classification. So, instead of using $X_i$ as the $i^{th}$ sample, corresponding FC i.e. $\bar{X}_i$ is used as a feature. Although FCs show promising results, they bring their own challenges. The computational cost of FC is usually high and also its quality massively affects the performance of the learning process. 
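For illustration only, the following short Python sketch (our own simplified code; the array shapes and the use of Pearson correlation are assumptions, and this is not the estimator studied in this paper) shows how a basic correlation-based FC matrix can be computed from an ROI time-series matrix $X_i\in\mathbb{R}^{T\times R}$.
\begin{verbatim}
# Illustrative sketch: Pearson-correlation FC matrix from an ROI time-series
# matrix X of shape (T, R); not the estimator studied in this paper.
import numpy as np

def correlation_fc(X):
    """Return the R x R correlation-based FC matrix of X (T x R)."""
    fc = np.corrcoef(X.T)        # rows of the input are treated as variables
    np.fill_diagonal(fc, 0.0)    # self-connections are usually ignored
    return fc

if __name__ == "__main__":
    T, R = 130, 116              # e.g. 130 time points, 116 ROIs (assumed)
    X = np.random.randn(T, R)    # stand-in for a real rs-fMRI sample
    print(correlation_fc(X).shape)   # (116, 116)
\end{verbatim}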
Moreover, since conventional classifiers like the \textbf{S}upport \textbf{V}ector \textbf{M}achine (SVM) or k-NN work on data in vector format, these matrix features must be vectorized before being fed to such classifiers. This vectorization leads to high-dimensional vectors, which produce poor performance due to the phenomenon known as the curse of dimensionality. Alongside the curse of dimensionality, vectorization also destroys potential information that is embedded in the structure of the data. This problem has been studied especially for image data, where vectorization destroys the spatial relations within an image \cite{r60}.

In this paper, based on high-order tensor decomposition, we have created a framework in which the aforementioned goals, i.e., finding a general FC and detecting a disorder via classification, can be achieved via a single \textbf{H}igh \textbf{O}rder \textbf{S}ingular \textbf{V}alue \textbf{D}ecomposition (HOSVD) of each class. Based on the latent variables obtained by HOSVD, a general representative FC pattern for the eMCI and Normal control classes is obtained. The majority of the connectivity patterns detected by this method have been observed and studied in several separate studies, which shows the reliability and power of the proposed method. Along with these connections, we have also detected novel connectivities, especially regarding the cerebellum, which is usually discarded in the analysis of AD. The proposed classifier also outperforms state-of-the-art eMCI classification methods.

Viewing each class as a tensor allows us to work with \textit{time} and \textit{region} features separately but simultaneously. This multilinear view enables us to design a dimension reduction suited to the nature of each feature, along with a discriminant function, based on linear regression in the latent space of samples, that uses the test data to enhance the quality of the training set without forcing any a priori knowledge onto the classifier, a task which is not possible with well-known classifiers like SVM, logistic regression, or k-NN. It is also notable that the proposed discriminant function works directly with the $X_i$s as features. Omitting the FC calculation step from classification not only improves the computational performance of the method considerably, but also saves us from the troubles of FC estimation, which will be discussed in the next section.

To verify our approach, we conduct an extensive experimental study on rs-fMRI data from the benchmark dataset ADNI\footnote{http://adni.loni.usc.edu/}. As will be seen, the results demonstrate the effectiveness and advantages of our method. Specifically, the proposed framework not only grants superior classification accuracy compared to other methods, but is also much faster and more stable under different data selection schemes. We have also validated the obtained general FC matrices against empirical findings on eMCI and Normal functional connectivity patterns.

\section{eMCI Classification and FC Construction Techniques}\label{related_works}
As mentioned before, obtaining and classifying FC matrices has become the dominant approach to eMCI analysis. A variety of methods, such as the pairwise Pearson's correlation coefficient \cite{r10, r11}, sparse representation \cite{r10, r12, r13} and \textbf{S}parse \textbf{I}nverse \textbf{C}ovariance \textbf{E}stimation (SICE) \cite{r15}, exist to obtain an FC. While the first two are easy to understand and can capture the pairwise functional relationship between a pair of ROIs, the latter can account for more complex interactions among multiple ROIs; however, the estimation of partial correlations involves the inversion of a covariance matrix, which may be ill-posed due to the singularity of the covariance matrix. These methods result in vastly different networks \cite{r35}. On the other hand, computing the correlations based on the entire time series of fMRI data simply measures the FC between ROIs with a single scalar value, which is fixed across time. This implicitly hypothesizes \textbf{stationary} interaction patterns among ROIs, which results in a \textit{static functional connectivity} (sFC). As a result, this approach may overlook the complex and dynamic interaction patterns among ROIs, which are essentially time-varying. In order to overcome this issue, \textbf{non-stationary} methods have been proposed, which result in more complex networks, also known as dynamic functional connectivity (dFC) \cite{r16,r19,r56}. The most common and straightforward way to investigate dFC is using windowed FC, which consists of calculating a given FC measure, for example the Pearson correlation coefficient, over consecutive windowed segments of the data \cite{r58,r59}. Although such an analysis seems straightforward, there are also pitfalls associated with it, which may result in an inaccurate FC \cite{r57}.

In the following, we briefly discuss two state-of-the-art eMCI classification techniques belonging to these two paradigms:
In the classification step, the vectorized versions of these FC matrices are then used as features.

\textbf{Kernel Compact SICE (K-SICE):} The SICE matrix has proven itself to be one of the best static functional connectivity models \cite{r15,r60,r61,r62,r63}; it is extracted via the following optimization:
\begin{align}
S^* = \operatorname{arg} \max\limits_{S\succ 0} \hspace{3mm} \log \left( \det(S) \right) - \operatorname{tr}(CS) - \lambda \norm{S}_1
\end{align}
where $C$ is the sample-based covariance matrix; $\det(\cdot)$, $\operatorname{tr}(\cdot)$, and $\norm{\cdot}_1$ denote the determinant, the trace, and the sum of the absolute values of the entries of a matrix, respectively. In classification with FC features, the vectorized SICE of each sample is used \cite{r19}. The occurrence of the curse of dimensionality and the loss of useful information contained in the SICE matrices (such as the SPD property) are the two main drawbacks of this vectorization approach. To overcome these drawbacks, and since each SICE matrix belongs to the Riemannian manifold of symmetric positive definite (SPD) matrices, the method proposed in \cite{r14} employs SPD manifold-based distances, such as the Log-Euclidean distance \cite{r49} and the Root Stein divergence \cite{r50}, in kernel-based PCA to extract a compact representation of the brain network. The power of this method resides in a massive dimension reduction of the SICE matrix, exploiting its SPD property. The performance of this method heavily relies on the choice of the sparsity parameter $\lambda$ for the SICE calculation and on the number of top eigenvectors $m$.
\textbf{High Order Networks (HON):} This method, proposed in ??, belongs to the non-stationary paradigm and uses so-called High Order Networks as features for classification. It uses the sliding window technique in order to split the time series into smaller pieces and then find the relations between them \cite{r51,r52}. Let $x_{i}^{(l)}(k) \in \mathbb{R}^N$ denote the $k$-th segment of the $i$-th region in the $l$-th sample. For each sample, a network with nodes $x_{i}^{(l)}(k)$ can be constructed whose edge weights are obtained as
\[
C_{ij}^{(l)}(k) = \operatorname{corr}\left(x_{i}^{(l)}(k),x_{j}^{(l)}(k) \right).
\]
Here the weight $C_{ij}^{(l)}(k)$ represents the pairwise Pearson's correlation coefficient between the $i$-th and the $j$-th ROIs of the $l$-th subject, computed on the $k$-th segment of the subseries. Now,
\[
y_{ij}^{(l)} = \left[ C_{ij}^{(l)}(1), C_{ij}^{(l)}(2), \cdots , C_{ij}^{(l)}(K) \right] \in \mathbb{R}^K
\]
represents the similarity of the $i$-th and $j$-th regions of the $l$-th sample across all segments. In ??, by considering the $y_{ij}^{(l)}$ as the nodes of a network with weights
\[
H_{ij,pq}^{(l)} = \operatorname{corr} \left( y_{ij}^{(l)},y_{pq}^{(l)} \right),
\]
a higher-order network is obtained for each sample. Here, for each pair of correlation time series $y_{ij}$ and $y_{pq}$, the weight $H_{ij,pq}^{(l)}$ indicates how the correlation between the $i$-th and the $j$-th ROIs influences the correlation between the $p$-th and the $q$-th ROIs. Since the nodes are indexed by ROI pairs, the higher-order network $\{ H_{ij,pq}^{(l)} \}$ of each sample is a matrix of size roughly $R^2\times R^2$ ($R$ is the number of regions), i.e., with on the order of $R^4$ entries, which leads to a large-scale high-order FC network containing at least thousands of vertices and millions of edges. In order to overcome this issue, the correlation time series within each subject are grouped into different clusters. Then, the correlation computations are carried out between the means of the clusters. After reducing the network size, weighted-graph local clustering coefficients are used to select the key features of each network, and then an SVM classifier is trained to classify the obtained features. As a result of constructing a high-order network, the notion of a physical ROI becomes vague, and thus such networks are not a preferable choice for analyzing functional connectivity.

It is noteworthy that none of these techniques considers the multilinear nature of the data, and since both methods use traditional classifiers like SVM or k-NN, they follow a rather complex path to find vector features as the representative of each FC matrix.

\section{Proposed fMRI Analysis Framework Based on HOSVD}

All dominant eMCI classification techniques use FCs as the input features of the classifier. As a result, the burden of calculating the FC, along with its computational complexity, is also added to the classification process.
In addition, the quality of the obtained FC heavily affects the classification performance. Moreover, none of these methods attends to the multilinear nature of such data. In this section we show that, from a tensor viewpoint, a single tensor decomposition allows us to perform the following analyses on fMRI data:

\textbullet\ \textbf{Classification:} Viewing each class as a 3D tensor enables us to project each mode separately into a smaller one, without the need to unfold the data into matrices or vectors, thus preserving the structural integrity of the data while reducing its dimensions. Transferring each sample matrix $X_i\in \mathbb{R}^{T \times R}$ into the new feature space granted by the tensor viewpoint removes the need to construct an FC for each sample, by producing a high-quality, low-dimensional feature $\bar{X}_i \in \mathbb{R}^{\bar{T} \times \bar{R}}$ in which $\bar{T}$ and $\bar{R}$ are much smaller than $T$ and $R$. This viewpoint also enables us to design a novel discriminant function in which the test data is used in the training process without forcing any prior knowledge about its label onto the classifier, a task which is impossible with conventional classifiers such as SVM, k-NN, or logistic regression.

\textbullet\ \textbf{General FC Construction:} Using the components of the HOSVD extracted in the previous step, a general FC can be constructed for each class that reveals the common patterns shared among all samples within that class.

\subsection{A Classification of Region-Time Data Based on HOSVD}
To reduce the computational complexity and the risk of overfitting, especially for data with many features relative to the number of samples (like fMRI data), using dimensionality reduction techniques is inevitable. Although each sample $X\in \mathbb{R}^{T\times R}$ has two different kinds of features, time and region, classical methods like PCA and SVD only work on the vectorized version ($x=\operatorname{vec}(X)$) of such data. Although this approach is easy to deploy, it has several drawbacks, such as the occurrence of the curse of dimensionality and the mixing of the different kinds of features (time and region). The second approach is to deploy multilinear methods. Recently, multilinear dimension reduction methods like MPCA and GLRAM have been proposed that can work with multidimensional data without folding it into vectors. In these methods, there is freedom to select a specific reduction for each kind of feature. In this section, we will use a well-known tensor decomposition, the HOSVD, for both dimension reduction and classification of fMRI data.

Let the tensors $\mathcal{X}^{(i)}\in \mathbb{R}^{T\times R \times S_i}$, $i=1,2$, consist of the Normal and eMCI data, respectively. Here $S_1$ and $S_2$ are the numbers of Normal and eMCI samples. For the tensor $\mathcal{X}^{(i)}$, the decomposition
\begin{equation}
\label{ho}
\mathcal{X}^{(i)} = \left( U^{(i)},V^{(i)},W^{(i)} \right)\boldsymbol{\cdot} \mathcal{S}^{(i)},
\end{equation}
is known as the \textbf{H}igher \textbf{O}rder \textbf{S}ingular \textbf{V}alue \textbf{D}ecomposition (HOSVD), where the orthogonal matrices $U^{(i)}\in \mathbb{R}^{T\times T}$, $V^{(i)}\in \mathbb{R}^{R\times R}$ and $W^{(i)}\in \mathbb{R}^{S_i\times S_i}$ are known as the mode-1, 2 and 3 singular matrices of $\mathcal{X}^{(i)}$, and $\mathcal{S}^{(i)}$ is the corresponding core tensor \cite{r64}. Here $U^{(i)}$ is a basis of all mode-$1$ fibers $\mathcal{X}^{(i)}(:,l,k)$, where such a fiber describes the behavior of the $l$-th region of the $k$-th sample of the $i$-th class across all time points. Similarly, $V^{(i)}$ is a basis of all mode-$2$ fibers $\mathcal{X}^{(i)}(l,:,k)$, which describe the behavior of all regions of the $k$-th sample of the $i$-th class at the $l$-th time point. Due to the properties that the HOSVD inherits from the SVD, the first columns of the mode-$k$ singular matrix ($k = 1,2,3$) play a greater role in the construction of the main parts of the mode-$k$ fibers. On the other hand, the last columns of these singular matrices have more fluctuations and are usually associated with the noisy parts of their corresponding fibers \cite{r64}.
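As a purely illustrative aid, the following short Python sketch shows how a decomposition of the form \eqref{ho} can be computed with \texttt{numpy}; it is our own simplified code, not the implementation used in our experiments, and the tensor sizes in the example are arbitrary assumptions.
\begin{verbatim}
# Illustrative HOSVD sketch for a 3rd-order (time x region x sample) tensor.
# Not the implementation used in the experiments; a tensor library such as
# TensorLy would normally be preferred.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3rd-order array."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product of a 3rd-order array T with a matrix M."""
    other = [T.shape[i] for i in range(3) if i != mode]
    Y = (M @ unfold(T, mode)).reshape([M.shape[0]] + other)
    return np.moveaxis(Y, 0, mode)

def hosvd(T):
    """Return mode singular matrices (U, V, W) and the core tensor S."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(3)]
    S = T
    for m in range(3):
        S = mode_mult(S, factors[m].T, m)
    return factors, S

if __name__ == "__main__":
    X = np.random.randn(130, 116, 30)     # stand-in class tensor
    (U, V, W), S = hosvd(X)
    Xhat = S
    for m, F in enumerate((U, V, W)):
        Xhat = mode_mult(Xhat, F, m)
    print(np.allclose(X, Xhat))           # exact reconstruction (up to rounding)
\end{verbatim}
In this sketch the leading columns of \texttt{U}, \texttt{V} and \texttt{W} correspond to the first singular vectors discussed above.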
Therefore, a suitable dimension reduction is to project the mode-1 and mode-2 fibers onto the spaces spanned by the first $k^i_1$ and $k^i_2$ singular vectors of modes 1 and 2, denoted by $U^{(i)}_{k^i_1}$ and $V_{k^i_2}^{(i)}$, respectively. This dimension reduction can be written as
\begin{align}
\label{m1}
\mathbb{R}^{k_1^i \times k_2^i \times S_i} \ni \bar{{\mathcal{X}}}^{(i)} = \left( U^{(i)\,{\sf T}}_{k^i_1}, V_{k^i_2}^{(i)\,{\sf T}} \right)_{1,2}\boldsymbol{\cdot} \mathcal{X}^{(i)}.
\end{align}
It is clear that this reduction can be done separately on each mode, without the need to fold any of them. This means that the structural integrity of the data is preserved during the dimension reduction process, which is a key aspect of our work. It has been shown that even relatively small values of $k_1^i$ and $k_2^i$ result in a very good reconstruction error \cite{r60}.

Inspired by the structure of this reduction, in the following we present a tensor-based discriminant function. By the HOSVD of $\mathcal{X}^{(i)}$, the projected data $\overline{\mathcal{X}}^{(i)}$ in equation \eqref{m1} becomes
\begin{align*}
\bar{\mathcal{X}}^{(i)} &= \left( \begin{bmatrix} I_{k_1^i} & 0 \end{bmatrix}, \begin{bmatrix} I_{k_2^i} & 0 \end{bmatrix}, W^{(i)} \right)\boldsymbol{\cdot} \mathcal{S}^{(i)} \\
&= \left( W^{(i)} \right)_{3} \boldsymbol{\cdot} \mathcal{S}^{(i)}(1:k_1^i, 1:k_2^i, :).
\end{align*}
So, each sample of the $i^{th}$ class in the reduced space has the form
\begin{align*}
\bar{{\mathcal{X}}}^{(i)}(:,:,k) &= \left( W^{(i)}(k,:) \right)_{3} \boldsymbol{\cdot} \mathcal{S}^{(i)}(1:k_1^i, 1:k_2^i, :)\\
&= \sum_{k' = 1}^{S_i} W^{(i)}(k,k')\, \mathcal{S}^{(i)}(1:k_1^i, 1:k_2^i, k').
\end{align*}
This means that each sample in the $i^{th}$ class can be represented as a linear combination of the slices of the tensor $\overline{\mathcal{S}}^{(i)}=\mathcal{S}^{(i)}(1:k_1^i, 1:k_2^i, :)$. So, if a test sample $X\in \mathbb{R}^{T\times R}$ belongs to the $i^{th}$ class, it is natural to expect that its projection onto the principal region and time spaces spanned by $U_{k_1^i}^{(i)}$ and $V_{k_2^i}^{(i)}$, i.e.,
\[
Z^{(i)}= \left( U_{k_1^i}^{(i)\sf T}, V_{k_2^i}^{(i)\sf T} \right)_{1,2}\boldsymbol{\cdot} X,
\]
can be approximated well as a linear combination of the slices of the tensor $\overline{\mathcal{S}}^{(i)}$, as follows:
\begin{equation}
\label{m2}
Z^{(i)} \approx \sum_{k=1}^{S_i} \lambda_k^i \overline{\mathcal{S}}^{(i)}(:,:,k).
\end{equation}
Based on this viewpoint, each test sample $X$ can be assigned to the class in which its projected version has the best approximation of the form \eqref{m2}.
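For concreteness, the following Python sketch (again our own illustrative code with assumed array shapes and helper names, not the experimental implementation) computes the truncated mode-1 and mode-2 singular matrices of a class tensor and the projection of a test sample, as in \eqref{m1}:
\begin{verbatim}
# Illustrative sketch: truncated time/region factors of one class and the
# projection Z = U_k1^T X V_k2 of a (T x R) test sample.
import numpy as np

def reduced_factors(class_tensor, k1, k2):
    """class_tensor: (T, R, S) array of one class.
    Returns the leading k1 time and k2 region singular vectors."""
    T, R, S = class_tensor.shape
    U = np.linalg.svd(class_tensor.reshape(T, R * S),
                      full_matrices=False)[0]            # mode-1 unfolding
    V = np.linalg.svd(np.moveaxis(class_tensor, 1, 0).reshape(R, T * S),
                      full_matrices=False)[0]            # mode-2 unfolding
    return U[:, :k1], V[:, :k2]

def project_sample(X, U_k1, V_k2):
    """Reduced (k1 x k2) representation of a single sample X."""
    return U_k1.T @ X @ V_k2
\end{verbatim}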
Since the core tensor elements with small indices are much more important for reconstructing the signal part of the data than those with large indices, only a small number $k_3^i< S_i$ of slices $\overline{\mathcal{S}}^{(i)}(:,:,k)$ needs to be used in \eqref{m2}. From this viewpoint, each test sample $X$ is assigned to the $l$-th class if
\[
r_{l}=\min_{i=1,2} {r_{i}},
\]
where
\begin{equation}
\label{ls}
r_{i}=\min_{{\lambda^{i}}} \|Z^{(i)} -\sum_{k=1}^{k_3^i} \lambda_k^i \overline{\mathcal{S}}^{(i)}(:,:,k)\|,\quad \lambda^i=\begin{pmatrix} \lambda_1^i\\ \vdots\\ \lambda_{k_3^i}^i \end{pmatrix}
\end{equation}
is the reconstruction error of the projected version of $X$ in the $i^{th}$ class. The minimization \eqref{ls} is a simple least squares problem that can be solved easily.

The proposed method has an interesting property that allows us to enhance the classification performance by using the test data in the training process without forcing any prior knowledge about its label onto the classifier. We have shown that the principal properties of the $i^{th}$ class are reflected in $\overline{\mathcal{S}}^{(i)}$; we also know that the first slices of this tensor, i.e., the slices with lower indices, play the largest role in reconstructing the main parts of this class, i.e., the signal parts. The same reasoning leads to the conclusion that the slices with higher indices are responsible for the possible noise in this class. Now suppose that the test sample $X$ is added to the data set $\mathcal{X}^{(i)}$ of the $i^{th}$ class, so that the new data set is $\widetilde{\mathcal{X}}^{(i)}\in \mathbb{R}^{T\times R \times (S_{i}+1)}$, with
\begin{eqnarray*}
\widetilde{\mathcal{X}}^{(i)}(:,:,1:S_i)&=&{\mathcal{X}}^{(i)},\\
\widetilde{\mathcal{X}}^{(i)}(:,:,S_i+1)&=&X.
\end{eqnarray*}
If $X$ belongs to the $i^{th}$ class, then in the decomposition of $\widetilde{\mathcal{X}}^{(i)}$, $X$ reinforces the first slices of the core tensor. On the other hand, if $X$ does not belong to the $i^{th}$ class, the HOSVD naturally treats it as noise, since $X$ is not similar to the other samples and thus does not play a key role in reconstructing them; its effect therefore falls on the last slices of the core tensor, i.e., the slices with higher indices. Remember that the last slices of the core tensor are discarded in the dimension reduction process. As a result, if $X$ does not belong to the $i^{th}$ class, it is barely involved in the classification process; on the other hand, if $X$ belongs to this class, it affects the first slices of the core tensor and thus leads to a smaller reconstruction error. So, if we add $X$ to all classes before the decomposition process, the reconstruction error for the correct class will be smaller than for the other classes. Note that since $X$ is added to all classes, no information about its label leaks into the training process, so this technique is legitimate.
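The least-squares step in \eqref{ls} can be solved directly; the sketch below (illustrative Python under the same assumptions as the previous listings, with the truncated core slices stored as a $k_1\times k_2\times k_3$ array) computes the residual $r_i$ and the resulting assignment rule.
\begin{verbatim}
# Illustrative sketch of the least-squares residual in (ls) and the
# resulting class assignment.
import numpy as np

def reconstruction_residual(Z, core_slices):
    """Z: (k1, k2) projected test sample.
    core_slices: (k1, k2, k3) truncated core slices of one class."""
    k1, k2, k3 = core_slices.shape
    A = core_slices.reshape(k1 * k2, k3)   # column k = vectorized k-th slice
    b = Z.reshape(k1 * k2)
    lam, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(b - A @ lam)

def assign_class(Z_per_class, cores_per_class):
    """Index of the class with the smallest reconstruction residual."""
    residuals = [reconstruction_residual(Z, S)
                 for Z, S in zip(Z_per_class, cores_per_class)]
    return int(np.argmin(residuals))
\end{verbatim}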
After computing the HOSVD of the augmented data sets $\widetilde{\mathcal{X}}^{(i)}$, we apply the method in \eqref{ls} for classification. Algorithm 1 summarizes the proposed classification method.

\begin{algorithm}[h!]
	\caption{\textbf{TNBeMCI}: Tensor-based classification method}
	\label{ATNB}
	\begin{algorithmic}
		\STATE 1) \textbf{Input}: Normal train data $\mathcal{X}^{(1)}$, eMCI train data $\mathcal{X}^{(2)}$,
		\STATE~~~ $k_i^j,\ i=1,2,3, \quad j=1,2$,\\
		\STATE~~~ test data $X$.
		\STATE 2) Construct $\widetilde{\mathcal{X}}^{(i)}$ for $i=1,2$ by adding $X$.
		\STATE 3) Compute $U_{k_1^i}, V_{k_2^i}$ and $\mathcal{S}^{(i)}(1:k_1^i,1:k_2^i,1:k_3^i)$ of $\widetilde{\mathcal{X}}^{(i)}$.
		\STATE 4) Compute $Z^{(i)} = \left(U_{k_1^i}^{(i)\sf T}, V_{k_2^i}^{(i)\sf T}\right)_{1,2}\boldsymbol{\cdot} X$, $i=1,2$.
		\STATE 5) Compute $r_1,r_2$ from \eqref{ls}.
		\STATE 6) Assign $X$ to class $l$, where $l= \arg \min_{i} \{r_i\}$.
	\end{algorithmic}
\end{algorithm}

In this section, by applying HOSVD to both the Normal and eMCI classes, we proposed a novel tensor-based classification and dimension reduction technique. The benefits of the proposed framework can be summarized as follows:
\begin{itemize}
	\item With respect to the multilinear nature of the data, the proposed method works with the \textit{time}, \textit{region} and \textit{sample} features separately and simultaneously, using the extracted singular matrix of each mode without any folding, in order to preserve the structural integrity of each class.
	\item The proposed classification technique allows us to use the test data in the training process in order to enhance the classification performance, a task which is not possible with conventional classifiers such as SVM, logistic regression or neural networks.
	\item Because the proposed method does not require the calculation of an FC matrix for each sample, it classifies the data much faster than other state-of-the-art methods.
	Moreover, with this method we do not need to worry about the quality of the FC estimation methods.
\end{itemize}
In the next section, we show that, based on the singular matrices obtained via HOSVD, a general functional connectivity can be obtained for each of the eMCI and Normal classes.

\subsection{General Functional Connectivity Calculations based on HOSVD}
\label{FC_Construction}
As mentioned before, one of the main goals of rs-fMRI analysis is finding common functional disorders caused by a disease such as AD. This can be done by constructing proper FC matrices. The majority of techniques calculate the FC matrix for each individual subject (as SICE does), which may overlook small but common connectivities shared within a class. It is also noteworthy that the majority of non-stationary methods are not capable of constructing an interpretable FC, since the concept of a physical ROI is not well defined in them.

In this section, we show that, based on the HOSVD decomposition, one general functional connectivity matrix can be obtained for each class without computing the per-sample FC of every subject separately. Here classification is not our goal; instead, we seek a general and representative relation between regions for eMCI and Normal subjects. This general pattern could be used clinically by cognitive scientists in order to obtain deeper knowledge about Alzheimer's disease and the brain in general.
As we saw in the previous section, the obtained $U, V, W$ matrices are the bases for time, region and sample, respectively. We used these matrices in order to reduce the dimensionality of the data and to create a discriminant function. In this section, we show that these basis matrices can be reused in order to obtain a general functional connectivity matrix for each class; in other words, finding the FC of a class amounts to finding the relations between the mode-2 slices of its tensor.

In the $i^{th}$ class, represented by $\mathcal{X}^{(i)}$, the slice $\mathcal{X}^{(i)}(:,l,:)$ describes the behavior of the $l^{th}$ region over all samples and all time points. This slice can be considered as a feature for the $l^{th}$ region of the $i^{th}$ class, so each region is represented as a time-sample feature matrix. Viewing each region as a slice allows us to consider its behavior over all time points and across all samples, which sheds more light on common properties and suppresses individual differences that are likely to arise from noise and outliers.

Thus, by the properties of the singular matrices of modes 1 and 3, and for appropriate values $k_1^i,k_3^i$, each region $\mathcal{X}^{(i)}(:,l,:)$ can be reduced in both its time and sample features separately, based on the mode-1 and mode-3 truncated singular matrices $U_{k_1^i}^{(i)}$ and $W_{k_3^i}^{(i)}$, as follows:
\begin{eqnarray}
	\mathcal{Y}^{(i)}(:,l,:) = \left( {U_{k_1^i}^{(i)}}^{\trans}, {W_{k_3^i}^{(i)}}^{\trans}
	\right)_{1,3} \boldsymbol{\cdot} \mathcal{X}^{(i)}(:,l,:).
\end{eqnarray}
Here $\mathcal{Y}^{(i)}(:,l,:)$ denotes the reduced version of $\mathcal{X}^{(i)}(:,l,:)$ in the spaces spanned by $U_{k_1^i}^{(i)}$ and $W_{k_3^i}^{(i)}$ in modes 1 and 3. So,
\begin{align}
	\mathbb{R}^{k_1^i \times R \times k_3^i} \ni {{\mathcal{Y}^{(i)}}} = \left(
	{U_{k_1^i}^{(i)}}^{\trans}, {W_{k^i_3}^{(i)}}^{\trans}
	\right)_{1,3}\boldsymbol{\cdot} \mathcal{X}^{(i)} \label{DR_Version_FC1}
\end{align}
denotes all reduced regions of the $i^{th}$ class. With this structure, substituting the HOSVD decomposition of $\mathcal{X}^{(i)}$ into \eqref{DR_Version_FC1} gives
\begin{align}
	{\mathcal{Y}}^{(i)}&= \notag \left(
	\begin{bmatrix}
		I_{k_1^i} & 0
	\end{bmatrix},
	V,
	\begin{bmatrix}
		I_{k_3^i} & 0
	\end{bmatrix}
	\right)\boldsymbol{\cdot} \mathcal{S}^{(i)} \notag
	\\&=\left( V \right)_{2} \boldsymbol{\cdot} \mathcal{S}^{(i)}(1:k_1^i,:, 1:k_3^i), \notag
\end{align}
thus
\begin{align}
	\label{ok}
	{\mathcal{Y}}^{(i)}(:,k,:) &= \sum_{k'=1}^{R} V^{(i)}(k,k'){\mathcal{C}}^{(i)}(:,k',:) \notag
	\\&= \left( V^{(i)}(k,:)\right)_2 \boldsymbol{\cdot} {\mathcal{C}}^{(i)},
\end{align}
in which
\[
\mathbb{R}^{k_1^i\times R \times k_3^i}\ni {\mathcal{C}}^{(i)} = \mathcal{S}^{(i)}(1:k_1^i,:,1:k_3^i).
\]
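As discussed next, the first $k_3^i$ entries of row $l$ of the mode-2 factor $V^{(i)}$ act as a compact feature vector for region $l$. A minimal NumPy sketch of this computation follows; the function names are our own, and the closing correlation step is only a simple stand-in for the SICE estimator that is actually used later to build the final network.
\begin{verbatim}
import numpy as np

def region_features(X, k3):
    # Rows of the mode-2 HOSVD factor V mix the core slices into each region
    # (Eq. (ok)); their first k3 entries give a short feature vector per region.
    X2 = np.moveaxis(X, 1, 0).reshape(X.shape[1], -1)  # mode-2 unfolding
    V, _, _ = np.linalg.svd(X2, full_matrices=False)
    return V[:, :k3]                                   # R x k3

def general_fc_correlation(X, k3):
    # Stand-in for the general FC: correlations between region feature vectors.
    return np.corrcoef(region_features(X, k3))         # R x R
\end{verbatim}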
Equation \eqref{ok} shows that the reduced version of each region in the $i^{th}$ class can be written as a linear combination of the mode-2 slices of $\mathcal{C}^{(i)}$. The coefficients of the slices in this linear combination can therefore be considered as new features for the $l^{th}$ region of the $i^{th}$ class. Moreover, as mentioned before, the first slices reflect the principal properties of the data better than the last ones, so for an appropriate $k_3^i$ we select only the first coefficients in \eqref{ok} as the new features of the $l^{th}$ region. Mathematically, this means that each region in the $i^{th}$ class is represented by a new feature vector $V(l,1:k_3^i)\in \mathbb{R}^{k_3^i}$. The main benefit of this approach is that each region is represented by a vector of size $k_3^i$ instead of a large time-sample matrix. Once each region is represented by a single low-dimensional vector, a variety of methods, such as SICE and the other similarity measures mentioned earlier, can be deployed in order to construct a general FC for each class.

Although the obtained FC is a static FC by definition, we argue that the core tensor obtained from HOSVD, which contains the features included in the FC calculation process, has a dynamic structure. The majority of non-stationary methods use the sliding-window technique in order to analyze the BOLD signal over several sessions; the main idea is that the temporal fluctuations of the FC across different sessions should be taken into account in order to obtain a better FC.

\section{Experimental Study}
\subsection{Data Preprocessing and Experimental Settings}
Rs-fMRI data of $196$ subjects were downloaded from the ADNI website\footnote{http://adni.loni.usc.edu}.
Nine subjects were discarded due to data corruption, and the remaining $187$ subjects were preprocessed for analysis. After removing subjects that had problems in the preprocessing steps, such as large head motion, $156$ subjects were kept, including $26$ AD, $44$ early MCI, $38$ late MCI, $38$ NC, and ten subjects with significant memory concern as labeled by ADNI. We used the $38$ NC and the $44$ early MCI subjects because our focus in this paper is identifying MCI at a very early stage, which is the most challenging and significant task in AD prediction. The IDs of the $82$ ($38$ NC and $44$ early MCI) subjects are provided in the supplementary material.

The data were acquired on a $3$-T Philips scanner with TR/TE set to $3000/30$ ms and a flip angle of $80^{\circ}$. Each series has $140$ volumes, and each volume consists of $48$ slices of image matrices with dimensions $64 \times 64$ and a voxel size of $3.31 \times 3.31 \times 3.31$ $mm^3$. The preprocessing was carried out using SPM12 and DPARSFA [40]. The first ten volumes of each series were discarded for signal equilibrium. Slice timing, head motion correction, and MNI space normalization were performed, and participants with too much head motion were excluded. The normalized brain images were warped into the automatic anatomical labeling (AAL) [41] atlas to obtain $116$ ROIs as nodes. Following common practice [15]–[17], the ROI mean time series were extracted by averaging the time series of all voxels within each ROI and then bandpass filtered to obtain multiple sub-bands as in [17].

\subsection{Classification}
Almost every subject in the ADNI dataset has several scans. Usually, one scan is selected at random and enters the processing step\cite{12}. This random selection may cause several problems. Since the number of training samples is very low, a small alteration in the samples can drastically change the set of input parameters needed to achieve the highest prediction accuracy and the other classification evaluation measures. Moreover, achieving high-quality results with a classifier does not guarantee its effectiveness on other datasets, even after fine-tuning the parameters, since the training set may contain outliers and unidentified corrupted data. In order to show that the proposed framework is less sensitive to the choice of data permutation (i.e. the same patient with a different scan) and less vulnerable to the aforementioned issues, we selected $18$ different permutations of the data and tested two state-of-the-art classification methods on them: \textbf{HON} and \textbf{k-SICE}. To make full use of the limited number of subjects, a leave-one-out procedure is used for training and testing: each sample is reserved for testing in turn, while the remaining samples are used for training. We used five evaluation measures: accuracy (ACC), sensitivity (SEN), Youden's index (YI), F-score, and balanced accuracy (BAC) \cite{r65}. In this article, we treat the eMCI samples as the positive class and the NC samples as the negative class.
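For reference, all five measures can be computed from the confusion-matrix counts. The following small sketch uses the standard textbook definitions (with eMCI as the positive class); the function name is our own.
\begin{verbatim}
def evaluation_measures(tp, tn, fp, fn):
    # ACC, SEN, YI, F-score and BAC from confusion-matrix counts.
    sen = tp / (tp + fn)                    # sensitivity / recall
    spe = tn / (tn + fp)                    # specificity
    pre = tp / (tp + fp)                    # precision
    acc = (tp + tn) / (tp + tn + fp + fn)
    yi  = sen + spe - 1                     # Youden's index
    f1  = 2 * pre * sen / (pre + sen)       # F-score
    bac = (sen + spe) / 2                   # balanced accuracy
    return {"ACC": acc, "SEN": sen, "YI": yi, "F": f1, "BAC": bac}
\end{verbatim}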
\subsubsection{Classification performance}
After fine-tuning the input parameter set of each method, the classification accuracy (ACC) shows that for $16$ out of the $18$ randomly selected datasets our approach performs better than k-SICE, and the same holds for $15$ datasets when comparing to HON; i.e. in $88.8\%$ of the datasets the proposed method works better than k-SICE, and in $83.3\%$ of the datasets it works better than HON. The highest classification accuracy ($86.59\%$) is achieved by the proposed method on the $15$th sample dataset. The highest accuracy for HON ($84.15\%$) is achieved on the $14$th, and the highest accuracy for the SICE method ($85.37\%$) on the $6$th sample dataset. As mentioned before, being stable when the input dataset changes is a very important property of a classifier; in order to measure this stability, the standard deviation of the accuracy, along with the other measures, is calculated. The standard deviation of the accuracy of the proposed method is $0.64$ times lower than that of HON and $1.73$ times lower than that of the k-SICE method. Similar results hold for the other classification measures.

\begin{figure*}
	\centering
	\includegraphics[width=6in]{images/Final-eps.pdf}
	\caption{
	Comparison of the proposed method (Prop) with k-SICE and HON applied to 18 different dataset permutations, using five different classification evaluation measures. Panels \textit{a} through \textit{e} show \textit{accuracy}, \textit{F-score}, \textit{balanced accuracy}, \textit{sensitivity} and \textit{Youden's index}, respectively, along with the maximum, minimum and standard deviation of each measure presented in the embedded table (f).
}
	\label{g3.2}
\end{figure*}

Figure \ref{g3.2} shows the performance of the three methods for all five measures. Some statistical information about these plots is also included in the embedded table. As can be seen in this figure, and similarly to the accuracy, the proposed method overall works much better than HON and k-SICE. For a better demonstration, Table \ref{AVG} provides the average of the classification measurement scores over all dataset permutations.
\begin{table}
	\begin{center}
		\caption{Average of the different classification measurements over all dataset permutations, in \%}
		\label{AVG}
		\begin{tabular}{@{}c*{6}{c}}
			\hline\hline
			Method&ACC& F-Score&SEN & SPE &YI & BAC \\
			\hline
			k-SICE &75.57& 77.36 & 78.50 & 72.19 & 50.69 & 75.34 \\
			HON &75.66& 77.44 & 78.40 & 72.48 & 50.89 & 75.44 \\
			Proposed &\textbf{80.43}& \textbf{82.20} & \textbf{84.60} & \textbf{75.59} & \textbf{60.20} & \textbf{80.09} \\
			\hline\hline
		\end{tabular}
	\end{center}
\end{table}

As can be seen in this table, the average accuracy of the proposed method, $80.43\%$, is $4.77\%$ higher than that of the next best method, HON, and $4.86\%$ higher than that of k-SICE. It is noteworthy that the other two methods, i.e. HON and k-SICE, show similar results on average.

\begin{figure*}
	\centering
	\includegraphics[width=6in]{images/FG}
	\caption{
	The difference graph, obtained by subtracting the functional connectivity of the eMCI subjects from that of the normal subjects. Each circle represents a ROI of the AAL atlas, and the size of each circle is proportional to its graph clustering coefficient in the difference graph (red: more activity in eMCI, green: less).
}
	\label{g3.3}
\end{figure*}

\subsubsection{Runtime Comparison}
Another key feature of the proposed method is that it works significantly faster than the other two methods. Table \ref{Time} shows the average elapsed time (training plus testing) of each method over all data permutations. The methods were executed in Matlab R2017b on an Intel Core-i7 processor with $16$ GB of RAM. As can be seen in this table, the proposed method is more than $600$ times faster than HON and about $20$ times faster than SICE.

\begin{table}
	\begin{center}
		\caption{Elapsed time of the training and test phases in seconds}
		\label{Time}
		\begin{tabular}{@{}c*{4}{c}}
			\hline\hline
			Method& HON & k-SICE& Proposed method \\
			\hline
			Elapsed time &6950& 230 & 11 \\
			\hline\hline
		\end{tabular}
	\end{center}
\end{table}

The long execution time especially affects the parameter selection of HON, since it uses a cross-validation procedure to find the optimal parameters, which itself requires several runs of the algorithm.

\subsection{Functional Connectivity Network}
The vector features of both the Normal and eMCI classes were obtained via the proposed method as described in Section \ref{FC_Construction}. Due to the aforementioned qualities of partial correlation, SICE is deployed in order to obtain the final FC. In order to better highlight the differences between Normal and eMCI subjects, a difference graph $D$ is constructed by subtracting the eMCI FC from the Normal FC (a small sketch of this construction is given at the end of this subsection). This graph is shown in Figure \ref{g3.3}. The nodes of $D$ correspond to the ROIs of the AAL atlas. The size of each node is proportional to its graph clustering coefficient, i.e. a bigger node indicates higher activity of the corresponding ROI in eMCI subjects. Similarly, the size of each edge is proportional to the correlation between the two ROIs. In addition, the edges are color-coded: the green edges show the positive edges of $D$ and the orange edges show the negative edges of $D$.
In this manner, the green edges indicate a decrease in connectivity between the corresponding nodes in eMCI subjects, and, conversely, the orange edges indicate increased connectivity between the corresponding ROIs in eMCI subjects. As can be seen in the difference graph, the big nodes, i.e. ROIs with higher activity, do not necessarily establish strong connections with other nodes. As obvious examples, higher activity in the Lingual gyrus (ROI index: 47, 48) \cite{r24,r25}, Calcarine sulcus (ROI index: 43, 44) \cite{r26,r27}, Supplementary motor area (ROI index: 19, 20) \cite{r27,r28} and Temporal\_Mid\_L (ROI index: 85) \cite{r29} is easily detectable. The majority of ROIs located in the frontal lobe also show rather high activity compared to normal subjects \cite{r30,r04}.

Similar to the nodes, a strong edge between two ROIs does not necessarily require both nodes to be highly active in eMCI, although a strong edge does indicate high activity and functional connectivity between the two corresponding ROIs. The difference graph shows a significant increase in connectivity between Rectus (ROI index: 28, 27 in the Frontal lobe) and Parietal\_Sup\_R (ROI index: 60 in the Parietal lobe) \cite{r40, r41}, between Frontal\_Inf\_Orb\_R (ROI index: 16 in the Frontal lobe) and Cingulum\_Ant (ROI index: 31, 32 in the Limbic lobe) \cite{r42}, and between Insula\_L, Temporal\_Pole\_Sup\_L (ROI index: 29, 83 in the Limbic lobe) and Pallidum\_R, Caudate\_R (ROI index: 29, 83 in the Sub Cortical Grey Nuclei) \cite{r43}. It can also be seen that connectivity within the frontal lobe is increased in patients with eMCI \cite{r44}.

There is a decrease in connectivity between Amygdala\_L (ROI index: 41 in the Sub Cortical Grey Nuclei) and both Frontal\_Mid\_Orb\_R (ROI index: 10 in the Frontal lobe) and ParaHippocampal\_L (ROI index: 39 in the Limbic lobe) \cite{r45}. The connectivity between Heschl\_L (ROI index: 79 in the Temporal lobe) and the two ROIs Temporal\_Mid\_R (ROI index: 86, also in the Temporal lobe) and Occipital\_Inf\_R (ROI index: 54 in the Occipital lobe) is also decreased in eMCI \cite{r46}.

\subsubsection*{\textbf{Regarding the Cerebellum and Vermis}}
In fMRI data analysis, and especially in Alzheimer's disease studies, ROIs within the Cerebellum and Vermis are usually excluded since their role was regarded as insignificant \cite{r47, r48}. Recent studies have shown that the traditional assumption that the cerebellar area is essential only for the coordination of voluntary motor activity and motor learning is not valid, and they indicate a significant role of the cerebellum in nervous system function, cognition, and emotion \cite{r32}. As can be seen in the difference graph that we obtained, ROIs within the Cerebellum and Vermis are highly active, and both their intra- and inter-connections are noticeable. There is increased functional connectivity between the Limbic lobe, especially Hippocampus\_R and Temporal\_Pole\_Mid (ROI index: 38, 87, 88), and the cerebellar areas in eMCI patients. Also, the connectivity between the Occipital lobe, especially Occipital\_Mid\_R (ROI index: 52), the Frontal lobe, especially Frontal\_Mid\_Orb (ROI index: 9, 10), and the cerebellar areas seems to decrease in patients with eMCI.
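As referenced above, the difference graph itself is straightforward to construct once the two general FC matrices are available. The following is a minimal sketch using the NetworkX library; the function name and the (optional) edge threshold are our own illustrative choices and are not part of the method.
\begin{verbatim}
import numpy as np
import networkx as nx

def difference_graph(fc_normal, fc_emci, threshold=0.0):
    # D = FC_Normal - FC_eMCI: positive (green) edges mean less connectivity
    # in eMCI, negative (orange) edges mean more.  Node sizes in the figure
    # come from the weighted clustering coefficients of this graph.
    D = fc_normal - fc_emci
    n = D.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(D[i, j]) > threshold:
                G.add_edge(i, j, weight=abs(D[i, j]), sign=np.sign(D[i, j]))
    clustering = nx.clustering(G, weight="weight")
    return G, clustering
\end{verbatim}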
\section{Conclusion}
The majority of functional connectivity analysis methods rely on calculating an FC matrix for each individual and then combining these matrices with simple methods in order to obtain a general FC network for a class. Likewise, state-of-the-art classification techniques use the FC as the representative of each sample. In this paper, based on the multilinear nature of the data, we have proposed a novel framework in which the general FC is extracted directly from the time-region features and does not rely on individual FC calculations. Moreover, the FC obtained by the proposed method contains relations that have recently been confirmed experimentally.
This framework also enables us to design a discriminant function that works directly with the $Time \times Region$ samples rather than with their FC. The new discriminant function also uses the test data in order to enhance the training set. The benefits of the proposed method can be summarized as follows:
\begin{enumerate}
	\item it takes advantage of the multilinear nature of the data and avoids vectorization at any stage;
	\item the general FC is calculated with respect to the whole class rather than per subject;
	\item the original $Time \times Region$ matrices are classified directly; and
	\item the test samples are used to enhance the training set.
\end{enumerate}
Extensive experiments on the rs-fMRI data provided by ADNI show the superiority of the proposed framework in both classification and functional connectivity analysis. The obtained FC network not only confirms previously discovered connections but also reveals connectivity patterns that were previously unknown. The framework proposed in this paper can easily be extended to other studies involving high-order data.
%\ifCLASSOPTIONcaptionsoff %\newpage %\fi % trigger a \newpage just before the given reference % number - used to balance the columns on the last page % adjust value as needed - may need to be readjusted if % the document is modified later %\IEEEtriggeratref{8} % The "triggered" command can be changed if desired: %\IEEEtriggercmd{\enlargethispage{-5in}} % references section % can use a bibliography generated by BibTeX as a .bbl file % BibTeX documentation can be easily obtained at: % http://mirror.ctan.org/biblio/bibtex/contrib/doc/ % The IEEEtran BibTeX style support page is at: % http://www.michaelshell.org/tex/ieeetran/bibtex/ %\bibliographystyle{IEEEtran} % argument is your BibTeX string definitions and bibliography database(s) %\bibliography{IEEEabrv,../bib/paper} % % <OR> manually copy in the resultant .bbl file % set second argument of \begin to the number of references % (used to reserve space for the reference number labels box) %% The Appendices part is started with the command \appendix; %% appendix sections are then done as normal sections %% \appendix %% \section{} %% \label{} %% References %% %% Following citation commands can be used in the body text: %% Usage of \cite is as follows: %% \cite{key} ==>> [#] %% \cite[chap. 2]{key} ==>> [#, chap. 2] %% %% References with bibTeX database: %\bibliographystyle{elsarticle-num} %\bibliography{<your-bib-database>} %% Authors are advised to submit their bibtex database files. They are %% requested to list a bibtex style file in the manuscript if they do %% not want to use elsarticle-num.bst. %% References without bibTeX database: % \begin{thebibliography}{00} %% \bibitem must have the following form: %% \bibitem{key}... %% % \bibitem{} % \end{thebibliography} %\section{Bibliography} \begin{thebibliography}{1} \bibitem{r01} Caselli, Richard J., et al. "Longitudinal changes in cognition and behavior in asymptomatic carriers of the APOE e4 allele." Neurology 62.11 (2004): 1990-1995. \bibitem{r02} Brookmeyer, Ron, et al. "Forecasting the global burden of Alzheimer’s disease." Alzheimer's \& dementia: the journal of the Alzheimer's Association 3.3 (2007): 186-191. \bibitem{r03} Musha, Toshimitsu, et al. "EEG markers for characterizing anomalous activities of cerebral neurons in NAT (neuronal activity topography) method." IEEE Transactions on Biomedical Engineering 60.8 (2013): 2332-2338. \bibitem{r04} Gould, R. L., et al. "Brain mechanisms of successful compensation during learning in Alzheimer disease." Neurology 67.6 (2006): 1011-1017. \bibitem{r04}Dennis, Emily L., and Paul M. Thompson. "Functional brain connectivity using fMRI in aging and Alzheimer’s disease." Neuropsychology review 24.1 (2014): 49-62. % \bibitem{r05} % Richiardi, Jonas, et al. "Classifying minimally disabled multiple sclerosis patients from resting state functional connectivity." Neuroimage 62.3 (2012): 2021-2033. % \bibitem{r06} % Yang, Xue, et al. "Evaluation of statistical inference on empirical resting state fMRI." IEEE Transactions on Biomedical Engineering 61.4 (2014): 1091-1099. \bibitem{r07} R. Graaf and K. Kevin. Methods and apparatus for compensating eld inhomogeneities in magnetic resonance studies. US Patent No. 8035387, 2011. % \bibitem{r08} % Zhang, Xiaowei, et al. "Resting-state whole-brain functional connectivity networks for mci classification using l2-regularized logistic regression." IEEE transactions on nanobioscience 14.2 (2015): 237-247. \bibitem{r09} Stanley, Matthew Lawrence, et al. "Defining nodes in complex brain networks." 
Frontiers in computational neuroscience 7 (2013): 169. \bibitem{r10} Jie, Biao, et al. "Integration of network topological and connectivity properties for neuroimaging classification." IEEE transactions on biomedical engineering 61.2 (2014): 576-589. \bibitem{r11} Wee, Chong-Yaw, et al. "Resting-state multi-spectrum functional connectivity networks for identification of MCI patients." PloS one 7.5 (2012): e37828. \bibitem{r12} Tibshirani, Robert, et al. "Sparsity and smoothness via the fused lasso." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67.1 (2005): 91-108. \bibitem{r13} Wright, John, et al. "Robust face recognition via sparse representation." IEEE transactions on pattern analysis and machine intelligence 31.2 (2009): 210-227. \bibitem{r14} Zhang, Jianjia, et al. "Functional brain network classification with compact representation of SICE matrices." IEEE Transactions on Biomedical Engineering 62.6 (2015): 1623-1634. \bibitem{r15} Huang, Shuai, et al. "Learning brain connectivity of Alzheimer's disease by sparse inverse covariance estimation." NeuroImage 50.3 (2010): 935-949. \bibitem{r16} Allen, Elena A., et al. "Tracking whole-brain connectivity dynamics in the resting state." Cerebral cortex 24.3 (2014): 663-676. % \bibitem{r17} % Damaraju, Eswar, et al. "Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia." NeuroImage: Clinical 5 (2014): 298-308. % \bibitem{r18} % Hutchison, R. Matthew, et al. "Dynamic functional connectivity: promise, issues, and interpretations." Neuroimage 80 (2013): 360-378. \bibitem{r19} Leonardi, Nora, et al. "Principal components of functional connectivity: a new approach to study dynamic brain connectivity during rest." NeuroImage 83 (2013): 937-950. % \bibitem{r20} % Leonardi, Nora, et al. "Principal components of functional connectivity: a new approach to study dynamic brain connectivity during rest." NeuroImage 83 (2013): 937-950. \bibitem{r21} Nordberg, Agneta. "PET imaging of amyloid in Alzheimer's disease." The lancet neurology 3.9 (2004): 519-527. \bibitem{r22} Jeong, Jaeseung. "EEG dynamics in patients with Alzheimer's disease." Clinical neurophysiology 115.7 (2004): 1490-1505. \bibitem{r23} Jeong, Jaeseung. "EEG dynamics in patients with Alzheimer's disease." Clinical neurophysiology 115.7 (2004): 1490-1505. \bibitem{r24}Golby, Alexandra, et al. "Memory encoding in Alzheimer's disease: an fMRI study of explicit and implicit memory." Brain 128.4 (2005): 773-787. \bibitem{r25}He, Yong, et al. "Regional coherence changes in the early stages of Alzheimer’s disease: a combined structural and resting-state functional MRI study." Neuroimage 35.2 (2007): 488-500. \bibitem{r26}Bakkour, Akram, et al. "The effects of aging and Alzheimer's disease on cerebral cortical anatomy: specificity and differential relationships with cognition." Neuroimage 76 (2013): 332-344. \bibitem{r27}Brewer, Alyssa A., and Brian Barton. "Visual cortex in aging and Alzheimer's disease: changes in visual field maps and population receptive fields." Frontiers in psychology 5 (2014): 74. \bibitem{r28}Jacobsen, Jörn-Henrik, et al. "Why musical memory can be preserved in advanced Alzheimer’s disease." Brain 138.8 (2015): 2438-2450. \bibitem{r29}Kosicek, Marko, and Silva Hecimovic. "Phospholipids and Alzheimer’s disease: alterations, mechanisms and potential biomarkers." International journal of molecular sciences 14.1 (2013): 1310-1322. \bibitem{r30}Salvatore, Christian, et al. 
"Magnetic resonance imaging biomarkers for the early diagnosis of Alzheimer's disease: a machine learning approach." Frontiers in neuroscience 9 (2015): 307. \bibitem{r32}Jacobs, Heidi IL, et al. "The cerebellum in Alzheimer’s disease: evaluating its role in cognitive decline." Brain 141.1 (2017): 37-47. \bibitem{r33}N. Leonardi et al., “Principal components of functional connectivity: A new approach to study dynamic brain connectivity during rest,” NeuroImage, vol. 83, pp. 937–950, 2013. \bibitem{r34}Cherkassky, Vladimir L., et al. "Functional connectivity in a baseline resting-state network in autism." Neuroreport 17.16 (2006): 1687-1690. \bibitem{r35}Du, Yuhui, Zening Fu, and Vince D. Calhoun. "Classification and prediction of brain disorders using functional connectivity: promising but challenging." Frontiers in neuroscience 12 (2018). \bibitem{r36}de Vos, Frank, et al. "A comprehensive analysis of resting state fMRI measures to classify individual patients with Alzheimer's disease." Neuroimage 167 (2018): 62-72. \bibitem{r37}Cuingnet, Rémi, et al. "Automatic classification of patients with Alzheimer's disease from structural MRI: a comparison of ten methods using the ADNI database." neuroimage 56.2 (2011): 766-781. \bibitem{r38}Friston, Karl J. "Functional and effective connectivity: a review." Brain connectivity 1.1 (2011): 13-36. % \bibitem{r39}Jones, David T., et al. "Non-stationarity in the “resting brain’s” modular architecture." PloS one 7.6 (2012): e39731. \bibitem{r40}Brickman, Adam M., et al. "Reconsidering harbingers of dementia: progression of parietal lobe white matter hyperintensities predicts Alzheimer's disease incidence." Neurobiology of aging 36.1 (2015): 27-32. \bibitem{r41}De Reuck, J., et al. "Topography of cortical microbleeds in Alzheimer’s disease with and without cerebral amyloid angiopathy: a post-mortem 7.0-tesla MRI Study." Aging and disease 6.6 (2015): 437. \bibitem{r42}Perani, Daniela, et al. "The impact of bilingualism on brain reserve and metabolic connectivity in Alzheimer's dementia." Proceedings of the National Academy of Sciences 114.7 (2017): 1690-1695. \bibitem{r43}\textcolor{red}{Subcortical} volume changes in dementia with Lewy bodies and Alzheimer's disease. A comparison with healthy aging \bibitem{r44}Cai, Suping, et al. "Changes in thalamic connectivity in the early and late stages of amnestic mild cognitive impairment: a resting-state functional magnetic resonance study from ADNI." PloS one 10.2 (2015): e0115573. \bibitem{r45}Ortner, Marion, et al. "Progressively Disrupted intrinsic Functional connectivity of Basolateral amygdala in Very early alzheimer’s Disease." Frontiers in neurology 7 (2016): 132. \bibitem{r46}Steketee, Rebecca ME, et al. "Early-stage differentiation between presenile Alzheimer’s disease and frontotemporal dementia using arterial spin labeling MRI." European radiology 26.1 (2016): 244-253. \bibitem{r47}Sanz-Arigita, Ernesto J., et al. "Loss of ‘small-world’networks in Alzheimer's disease: graph analysis of FMRI resting-state functional connectivity." PloS one 5.11 (2010): e13788. \bibitem{r48}Zhang, Daoqiang, et al. "Multimodal classification of Alzheimer's disease and mild cognitive impairment." Neuroimage 55.3 (2011): 856-867. \bibitem{r49}V.Arsigny et al.,. (2006). Log-euclidean metrics for fast and simple calculus on diffusion tensors. Magn. Reson. Med.. [Online]. 56(2), pp. 411–421. Available: http://dx.doi.org/10.1002/mrm.20965 \bibitem{r50}S. 
Sra, “A new metric on the manifold of kernel matrices with application to matrix geometric mean,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, K. Q.Weinberger, Eds., New York, NY: Curran Associates, Inc., 2012, pp. 144–152. \bibitem{r51}Allen, Elena A., et al. "Tracking whole-brain connectivity dynamics in the resting state." Cerebral cortex 24.3 (2014): 663-676. \bibitem{r51}Chang, Catie, and Gary H. Glover. "Time–frequency dynamics of resting-state brain connectivity measured with fMRI." Neuroimage 50.1 (2010): 81-98. \bibitem{r52}Handwerker, Daniel A., et al. "Periodic changes in fMRI connectivity." Neuroimage 63.3 (2012): 1712-1719. \bibitem{r53}Supekar, Kaustubh, et al. "Network analysis of intrinsic functional brain connectivity in Alzheimer's disease." PLoS computational biology 4.6 (2008): e1000100. \bibitem{r54}Contreras, Joey A., et al. "Resting state network modularity along the prodromal late onset Alzheimer's disease continuum." NeuroImage: Clinical 22 (2019): 101687. \bibitem{r55}Leonardi, Nora, et al. "Principal components of functional connectivity: a new approach to study dynamic brain connectivity during rest." NeuroImage 83 (2013): 937-950. \bibitem{r56}Leonardi, Nora, and Dimitri Van De Ville. "On spurious and real fluctuations of dynamic functional connectivity during rest." Neuroimage 104 (2015): 430-436. \bibitem{r57} Hindriks, Rikkert, et al. "Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI?." Neuroimage 127 (2016): 242-256. \bibitem{r58}Barttfeld, Pablo, et al. "Signature of consciousness in the dynamics of resting-state brain activity." Proceedings of the National Academy of Sciences 112.3 (2015): 887-892. \bibitem{r59}Zalesky, Andrew, et al. "Time-resolved resting-state brain networks." Proceedings of the National Academy of Sciences 111.28 (2014): 10341-10346. \bibitem{r60}Ahmadi, Soheil, and Mansoor Rezghi. "A novel extension of Generalized Low-Rank Approximation of Matrices based on multiple-pairs of transformations." CoRR (2018). \bibitem{r61}Ng, Bernard, et al. "A novel sparse group Gaussian graphical model for functional connectivity estimation." International Conference on Information Processing in Medical Imaging. Springer, Berlin, Heidelberg, 2013. \bibitem{r62}Colclough, Giles L., et al. "Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks." NeuroImage 178 (2018): 370-384. \bibitem{r63} Foti, Nicholas J., and Emily B. Fox. "Statistical model-based approaches for functional connectivity analysis of neuroimaging data." Current opinion in neurobiology 55 (2019): 48-54. \bibitem{r64}Rezghi, Mansoor. "A novel fast tensor-based preconditioner for image restoration." IEEE Transactions on Image Processing 26.9 (2017): 4499-4508. \bibitem{r65}Tan, Pang-Ning. Introduction to data mining. Pearson Education India, 2018. \end{thebibliography} \end{document} %% %% End of file `elsarticle-template-num.tex'.
{ "alphanum_fraction": 0.7367901133, "avg_line_length": 70.8832968636, "ext": "tex", "hexsha": "c3e98a7d554f6db090dab042d143c177c91b1e4e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5a717e2b599ca45cd48e3f2b4853bec8a8dd37bb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "heretic133/Noroozi_paper", "max_forks_repo_path": "Norouzi.ElsVerSub.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5a717e2b599ca45cd48e3f2b4853bec8a8dd37bb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "heretic133/Noroozi_paper", "max_issues_repo_path": "Norouzi.ElsVerSub.tex", "max_line_length": 1109, "max_stars_count": null, "max_stars_repo_head_hexsha": "5a717e2b599ca45cd48e3f2b4853bec8a8dd37bb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "heretic133/Noroozi_paper", "max_stars_repo_path": "Norouzi.ElsVerSub.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 27267, "size": 97181 }
\chapter{Literature Review}
This section reviews various academic sources related to the proposed methodology. It looks at several fields of engineering as well as bio-mechanics, biomimicry and applied mathematics to form a holistic understanding of the design space. Importantly, the technologies discussed are also related to the field of robotics, as many concepts transfer easily from bipedal humans to bipedal robots.
\section{Introduction}
This research project brings together various disciplines of research. By combining techniques from computer vision, sensing and data fusion we can design and develop a new way of capturing human gait data.
Whilst the fields of biomimicry and bio-inspired robotics are relatively new, recent advances in related fields such as artificial intelligence and robotics have invigorated the pursuit of functional humanoid robots. Kaneko et al. described various components of humanoid robotics in \cite{kaneko2002design}. Herein, a fundamental element of dynamics is discussed and improvements to the robot's mobility are outlined as the first step in the iterative design process. The same authors published work \cite{kaneko2002legs} relating to a functional leg module to be used for such robotic projects. These works show that engineers have been trying to replicate the bipedal motion of humans for some time, with relatively limited success.
If we observe some of the world's foremost attempts at bipedal robotics, such as Boston Dynamics' Atlas \cite{bdyt} and Agility Robotics' Cassie \cite{aryt}, we can see that recent attempts are improving rapidly. This thesis argues that with larger datasets of human motion in complex environments we can better design and control robotic lower limbs.

\begin{figure}[!ht]
	\captionsetup{width=0.8\linewidth, font=small}
	\includegraphics[width=0.8\linewidth]{figures/bipeds.png}
	\caption{Pictures of the modern bipedal robots Atlas (left) and Cassie (right) from \cite{bdpic} and \cite{arpic} respectively}
	\label{fig:bipeds}
\end{figure}

\section{Human Motion and Gait}
The human gait is well understood and has been studied in detail, as it is a fundamental part of human mobility. It is one of the first skills developed in infancy, and its importance for healthy development, as outlined by Adolph et al. \cite{adolph2013road}, cannot be overstated.
Walking and running are also critical factors in the transportation and geographical movement of people and goods in developing countries, where public transport is underdeveloped and private transport is not within the means of the populace. Finally, walking and running as exercise have proven benefits, as shown in \cite{hanson2015there} (general health) and \cite{fox1999influence} (mental health). There is thus clear evidence that the human gait has earned its right as a field of study in academia.
\section{Computer Vision}
While the previous section answers ``why'' understanding the human gait is important, the following sections explain the fields that contribute to the question of ``how'' the gait is studied, i.e. the technologies and methods used to quantify it.
The section title ``computer vision'' should be interpreted within the context of this document: it will be used interchangeably with image processing, as the underlying philosophy of both methodologies is the algorithmic interpretation of images. Image processing as a field was born from digital signal processing, as it relates to the extraction of critical data from noisy data streams.
Computer vision is the use of computational methods to achieve the same end goal, ultimately trying to emulate human vision. Image processing has many different approaches and methodologies. These include, but are not limited to, feature detection, pattern recognition and classification. Modern works such as \cite{chen2016deeplab} and \cite{shi2016real} have shown how neural networks perform the previously mentioned tasks with unprecedented accuracy.

\subsection{Computer Vision in Robotics}
Recent improvements to real-time image processing have enabled remarkable technological breakthroughs in fields closely related to robotics. One such breakthrough is the rapid improvement of self-driving cars developed by Tesla. These vehicles use vision-based technologies and real-time image processing to navigate complex and changing road networks. The figure below shows how a Tesla identifies different roadside artefacts.

\begin{figure}[!ht]
\captionsetup{width=0.8\linewidth, font=small}
\includegraphics[width=0.8\linewidth]{figures/teslaauto.jpg}
\caption{Insight into object classification by Tesla, image from \cite{tesla}}
\label{fig:teslaauto}
\end{figure}

Another interesting use for computer vision in robotics is the ability to identify and classify real-world objects, which allows robots to perform menial household tasks. Finally, work completed by Taylor et al. \cite{taylor2016medical} shows the importance of computer vision in robot-assisted surgery. Computer vision is an important step in automating robots and, due to the rapid progression of artificial intelligence, it is a field with a large potential for growth.

\subsection{New Perspectives from Animal Borne Cameras}
Patel et al. \cite{patel2017trackingieee} showed that, using animal-borne cameras and motion sensors, the tail kinematics of the cheetah (\textit{Acinonyx jubatus}) could be tracked. Patel's work was partly inspired by Kane et al. \cite{kane2014falcons}, where falcon-borne (\textit{Falco peregrinus}) cameras were used to better understand the airborne pursuit of prey, giving researchers a new perspective on the behaviour of animals in the natural world. Further work completed by Pearson et al. \cite{pearson2017testing} showed that cameras mounted to dolphins (\textit{Lagenorhynchus obscurus}) could provide insight into their movement, social and foraging strategies. Using cameras to study ocean life has become a popular methodology in recent times due to the difficulties imposed by the marine environment. In essence, we struggle to understand flying and swimming animals due to their complex environments. The following image shows how these devices are carried by various dolphins.

\begin{figure}[!ht]
\captionsetup{width=\linewidth, font=small}
\includegraphics[width=\linewidth]{figures/dol.png}
\caption{Dolphins wearing dorsal-mounted cameras, image from \cite{pearson2017testing}}
\label{fig:dol}
\end{figure}

The above research has shown the unique benefits of having cameras and sensors mounted to the subject in question.

\subsection{Human Motion Analysis Using Computer Vision}
The survey by Chen et al.~\cite{chen2013survey} on using depth imagery to understand human motion shows that this is a popular technique. Because imaging is a popular method in medical research, the imaging of various human movements has a large set of well-established methodologies. Naturally, this has formed a foundation for using cameras to capture human movement. Companies like Vicon, Optitrack and Motion Analysis have created multiple consumer products for quantifying motion using cameras.
The following image shows a typical motion capture setup using multiple cameras.

\begin{figure}[!ht]
\captionsetup{width=\linewidth, font=small}
\includegraphics[width=\linewidth]{figures/mc.jpeg}
\caption{A typical multi-camera motion capture setup, image from \cite{vicon}}
\label{fig:mc}
\end{figure}

One system often used for motion capture is the Microsoft-developed Kinect. Studies such as \cite{gabel2012full}, \cite{stone2013unobtrusive} and \cite{clark2013concurrent} have shown positive results in modelling and quantifying the human gait using this technology. Open-source software like OpenKinect allows for easy implementation and configuration. A drawback to this methodology is that all of these works require controlled environments due to the nature of the technology. A thorough search for wearable systems using cameras to identify critical points on the lower limbs was done, but no pre-existing research was found. It was concluded that this thesis is novel and explores a new approach to understanding lower limb kinematics.

\section{Inertial Measurement Units and Sensors}
IMUs are a staple of electrical engineering as applied to dynamic systems. These sensors give us insight into how an object is moving in space by providing data relating to the orientation and acceleration of said system. These data points are created by electronically interpreting signals generated by micro-electromechanical systems (MEMS). Modern smartphones have built-in IMUs that are not only accurate \cite{gikas2016rigorous}, but also easy to interface with due to the open-source nature of the Android operating system \cite{androidSensorLib}. Generally, smartphones contain the following sensors:
\begin{itemize}
\item Accelerometer
\item Gyroscope
\item Magnetometer
\item Barometer
\item Temperature
\end{itemize}

\textbf{Accelerometers} provide linear acceleration data; these accelerations may be constant (e.g., gravity) or changing (e.g., relative motion). In smartphones they are usually based on MEMS that use various mechanical phenomena to determine motion.

\textbf{Gyroscopes} provide rotational data of the sensor relative to the inertial frame. These sensors typically generate angular velocity data by measuring the Coriolis effect acting on a vibrating MEMS structure.

\textbf{Magnetometers} provide information relating to the macroscopic magnetic fields in a certain area. These sensors can measure the direction, strength, or relative change of fields in three different dimensions relative to the smartphone.

\textbf{Barometers} are finely tuned atmospheric pressure sensors that can determine the pressure the device is experiencing. By combining this pressure data with a well-defined pressure-altitude model, the relative height with respect to sea level can be calculated.

\textbf{Temperature} sensors generate local temperature data of the surrounding environment. They are important in smartphones that use lithium-ion or lithium-polymer batteries, which can explode at high temperatures.

These sensors can be used together to better model the position, velocity and acceleration of a modern smartphone. This is easily seen when a smartphone rotates the display when held in landscape.

\subsection{Global Positioning System}
GPS (Global Positioning System) is a space-based navigational system that uses satellites to determine a receiver's absolute position on Earth. The system was developed by the United States Air Force in 1973 and made available for public use in the 1980s. It has since been improved by the addition of satellites.
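To make the preceding descriptions more concrete, the short sketch below illustrates two of the calculations hinted at above: estimating roll and pitch from a raw accelerometer reading, and estimating altitude from barometric pressure via the international barometric formula. It is only an illustrative sketch: the numeric readings are hypothetical values standing in for real sensor data, which would normally be obtained through the platform's sensor API.

\begin{verbatim}
import math

def roll_pitch_from_accel(ax, ay, az):
    # Estimate roll and pitch (radians) from a static accelerometer
    # reading, i.e. assuming the measured vector is gravity alone.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay**2 + az**2))
    return roll, pitch

def altitude_from_pressure(p_hpa, p0_hpa=1013.25):
    # Approximate altitude above sea level (metres) from barometric
    # pressure, using the international barometric formula.
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

# Hypothetical readings (m/s^2 and hPa).
roll, pitch = roll_pitch_from_accel(0.0, 1.7, 9.66)
print(math.degrees(roll), math.degrees(pitch))
print(altitude_from_pressure(954.6))   # roughly 500 m above sea level
\end{verbatim}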
\subsection{Inertial Measurement Units in Robotics}
IMUs are integral to the functioning of robots. Until very recently, intelligent robotic systems had no sense of vision to provide feedback for their internal control systems. Instead, this feedback was generated by various sensors providing information about the dynamics of these systems. \cite{kaneko2002legs} showed the importance of feedback for controlling bipedal robotic lower limbs. This feedback is achieved with different electronic components, including gyroscopes and accelerometers; they are preferred over potentiometers since they do not mechanically intrude on the system. Another common application for sensors in robots is navigation. By using accurate IMUs mounted to the body of drones and similar platforms, pose estimation can be used to control movement.

\subsection{Human Motion Analysis Using Sensors}
Picerno completed an extensive review of sensor-based capture of human motion in \cite{picerno201725}. Some new methods using interesting sensors have been developed to log human motion. \cite{wang2014wearable} showed that, by using highly sensitive strain sensors positioned on various joints, the movement of such points of interest could be quantified. Another exotic method is the use of soft carbon nanotube capacitive sensors, as in \cite{cai2013super}. These sensors are flexible and non-intrusive, allowing comfortable data capture. A low-cost approach in the form of a smartphone and wrist-mounted sensors was used by \cite{shoaib2016complex} to show alternative methods of interpreting human arm movements. Finally, software developments by \cite{sun2017new} have allowed more accurate simulations to be produced using inertial sensors. These papers are recent and show that modern technologies and approaches to capturing motion data are still being developed.

\section{Mathematical Modelling}
The binding element presented in this work is the underlying mathematics. Using various mathematical tools and methods known to robotics and bio-mechanics, it is possible to transform various data types in various frames of reference into a single model.

\subsection{Mathematical Models of the Gait}
Before exploring the complex methods and tools used to analyze the human gait, it is important to select and understand the model they are derived from. Due to the large amount of existing research related to the human gait, some models have been well established. These models are capable of quantifying important elements of the human gait such as gait period, dynamic joint forces and neuromuscular control. Some fundamental work completed by Zajac, Neptune and Winters will be discussed to better understand existing models. In work completed by Zajac et al. \cite{zajac1990modeling} it is interesting to note how the modelling difficulties are compared to those seen in bipedal robotics. In this work he also places some important bounds on human joints. He argues that the maximum DOF (Degrees of Freedom) that any single joint can have is 6: 3 for translation and 3 for rotation. He also constrains body segments to be rigid and assumes that the internal dynamics of a body segment are insignificant. In further papers published by these authors, \cite{zajac2002biomechanics} and \cite{zajac2003biomechanics}, various dynamic simulations are tested against proposed methods and subject studies. These papers confirm the multi-rigid-segment model for studying human dynamics.
Since this study is only concerned with kinematics, the assumption can be made that the model is adequate for kinematic analysis.

\subsection{Linear Kinematics}
By using kinematics we can quantify and understand the movement of the lower limbs. Kinematics is a branch of mechanics that fully defines the motion of a point with respect to position, velocity and acceleration (be it linear, rotational or a combination). Kinematics does not, however, describe the forces, torques or other variables that may affect that point. This is due to a fundamental assumption in kinematics that the point is massless. Kinematics can be broken up into two main branches: \textit{forward} and \textit{inverse}. To illustrate the matter, the following diagram shows a basic kinematic model.

\begin{figure}[!ht]
\captionsetup{width=\linewidth, font=small}
\includegraphics[width=\linewidth]{figures/kinematics.png}
\caption{A basic two-link kinematic model}
\label{fig:kinematics}
\end{figure}

In this figure the position of point P is defined by two links, L1 and L2, with different lengths and angles measured from a set of shared axes. In forward kinematics we can find P if we know the angles and lengths of the different links in the system. The motion of point P can then be described by looking at how the angles and lengths of the links in the system change over time. Inverse kinematics uses knowledge of different points in the system, such as the origin and P, together with the lengths of the links, to determine the angular offset of each link. Since these produce a set of coupled equations, the more unknowns we are faced with, the more possible solutions we can generate. As discussed in the previous section, a common method of modelling the human lower limbs is to use a collection of rigid beams.

\subsection{Rotational Matrices}
There is an underlying difficulty in mathematically fusing various data sources and models: that of finding a common frame of reference. With the intent of using Lagrangian mechanics, good definitions of the different frames are critical. This thesis will primarily use two different frames of reference: the inertial frame and the body frame. The inertial frame (or world frame) can be defined in different ways, as seen in \cite{soechting1992moving}; for the purpose of this study the NED (North East Down) definition is used. This configuration is also known as the local tangent plane and is often used in aviation. Rotational matrices are mathematical objects that rotate vectors in three-dimensional space. Since most engineering is constrained to the physical three-dimensional world, these matrices commonly rotate three-dimensional vectors using a $3\times3$ matrix.
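As a small illustration of the two preceding subsections, the sketch below computes the position of the end point P of a planar two-link chain by composing rotation matrices, mirroring the figure above. The link lengths and joint angles are arbitrary example values, and the angle convention (the second joint angle measured relative to the first link) is only one of several possible choices.

\begin{verbatim}
import numpy as np

def rot2d(theta):
    # 2-D rotation matrix for an angle theta (radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def forward_kinematics(l1, l2, theta1, theta2):
    # Position of the end point P of a planar two-link chain.
    # theta1 is measured from the x-axis, theta2 relative to link 1.
    p1 = rot2d(theta1) @ np.array([l1, 0.0])                 # end of link 1
    p2 = p1 + rot2d(theta1 + theta2) @ np.array([l2, 0.0])   # end of link 2
    return p2

# Example: l1 = 0.4 m, l2 = 0.3 m, joint angles of 30 and 45 degrees.
print(forward_kinematics(0.4, 0.3, np.radians(30), np.radians(45)))
\end{verbatim}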
\subsection{Kalman Filter and Extended Kalman Filter}
The Kalman filter is a mathematical tool used to estimate the states of a system. All measurements contain some unwanted elements of noise that produce uncertainty. Another source of uncertainty is the imprecision of the model: simplifying assumptions disregard the minute details that, when summed, can have an effect on the interpretation of the data. To minimize these uncertainties it is important to filter the datasets correctly. Fortunately, estimation can be used as a form of filtering to reduce the impact of these uncertainties. Another powerful element of the Kalman filter is its ability to fuse data from different sources to compute a more holistic picture of the underlying system. Fusion allows us to interpret sensor data within the constraints of other sensors, creating a more accurate dataset. For example, we can negate the drift of an accelerometer if we have absolute positional data provided by a GPS.

There is also an important distinction to be made between the KF and the EKF. To briefly explain this, it should be understood that the Kalman filter was the original concept as developed by Rudolf E. Kálmán, and the EKF is the extension of that work. The KF has an inherent limitation in that it can only be applied to linear systems, whereas the EKF can be applied to non-linear systems operating within a certain defined range. The KF itself can be broken down into two fundamental stages of operation: a prediction stage and an update stage. The prediction stage takes the known current states of the system and estimates what the measurements should be for the next time interval. The update stage takes in the current measurements and mathematically determines the states. The states of the system are user-defined parameters that can often not be directly measured.

%%Systems control engineers define the cost function. Within the field of optimal control we want to control system behaviours using minimal input and with minimal error. the cost function is a mathematical representation of these extremes.
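The prediction and update stages described above can be written down in just a few lines. The sketch below implements a one-dimensional Kalman filter for a (nearly) constant quantity observed through noisy measurements; the noise parameters and readings are made-up values chosen purely to illustrate the two stages, not values used anywhere in this project.

\begin{verbatim}
def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.25):
    # Minimal one-dimensional Kalman filter for a nearly constant state.
    # x0, p0 : initial state estimate and its variance
    # q      : process noise variance (how much the state may drift)
    # r      : measurement noise variance
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Prediction stage: the state model is "the value stays the same",
        # so only the uncertainty grows.
        p = p + q
        # Update stage: blend the prediction with the new measurement,
        # weighted by the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical noisy readings of a constant value around 9.81.
print(kalman_1d([9.7, 9.9, 9.75, 9.85, 9.8]))
\end{verbatim}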
\section{Natural Solutions for Robotic Shortcomings}
Naturally the question arises: why would we want to better understand the dynamics of animals? A persistent problem in the field of modern robotics is that of mobility; robots struggle to navigate real-world surfaces and obstacles. Work by Patel et al. \cite{patel2013rapid} shows how we can look towards nature for inspiration to solve this mobility problem. As demonstrated by various prototype robots built by Boston Dynamics, bipedal robots are severely limited in manoeuvrability when compared to animals. This is due to the longest iterative design process known to man, evolution. Pictured below is a collection of bio-inspired robots built by Boston Dynamics.

\begin{figure}[!ht]
\captionsetup{width=0.75\linewidth, font=small}
\includegraphics[width=0.75\linewidth]{figures/boston.jpg}
\caption{Different bipedal and quadruped robots created by Boston Dynamics, image from \cite{bostondynamics}}
\label{fig:boston}
\end{figure}

\section{Conclusion}
This chapter has shown the direct parallels between technologies related to gait capture and dynamic robotic systems. With these strong parallels in mind, the transferability of these systems from humans to humanoid robots is clear. In the same manner we are able to take a bio-mechanical look at the human body and treat it as a dynamic mechanical system instead of the complex bio-chemical and physiological system it really is. This technique of abstracting systems to different domains of knowledge allows us to apply engineering methods and design to complex problem spaces. As mechanical engineers have used resistive networks to understand thermodynamics \cite{chen2015electrical} and control engineers have used mechanical models and electrical models interchangeably to apply control principles \cite{karnopp2012system}, there is power in this methodology. Fusing different methods from different fields has proven its usefulness, and this work will use this approach of horizontal thinking to create its own unique methodology.

As discussed in this chapter, the importance of understanding the human gait cannot be understated. The recent breakthroughs in computer vision and neural networks have reignited a field that has the potential to truly change our day-to-day lives. The ever-increasing capability of sensor technology and data capture systems allows us to quantify what we have never been able to, and the underlying fundamental mathematical methods never seem to fall short. The future of humanoid robotics sits at the overlap of computer vision, IMUs and biomimicry; add to this some form of general intelligence and the world reaches a stage of automation and transformation only imagined by authors such as Asimov and Wiener. Perhaps this work could contribute to that future.
{ "alphanum_fraction": 0.8191184624, "avg_line_length": 100.6697674419, "ext": "tex", "hexsha": "6080b9c86524a0ba98770fb29b487a2b210dc8b5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "64325c76b83f5f80f629eeaba4971c719e7f8eae", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wolvexyz/thesis-latex", "max_forks_repo_path": "body/lit_review.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "64325c76b83f5f80f629eeaba4971c719e7f8eae", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wolvexyz/thesis-latex", "max_issues_repo_path": "body/lit_review.tex", "max_line_length": 776, "max_stars_count": 1, "max_stars_repo_head_hexsha": "64325c76b83f5f80f629eeaba4971c719e7f8eae", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wolvexyz/thesis-latex", "max_stars_repo_path": "body/lit_review.tex", "max_stars_repo_stars_event_max_datetime": "2018-02-16T09:22:13.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-16T09:22:13.000Z", "num_tokens": 4348, "size": 21644 }
\chapter{The Drupal structure} \section{Structural elements} Drupal 8 core defines eight types of structural elements. To see an overview of these elements click on the \textbf{Structure} button in the administrative menu (Figure \ref{fig:structural_elements}). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/structural_elements} \caption{Structural Elements} \label{fig:structural_elements} \end{figure} Each of these elements has a certain use within Drupal. The following list describes each of the structural elements. Don't worry if not all the elements are clear to you, when you start changing and using your site the structure will become clear. \begin{description} \item[Block layout:] Blocks are site elements which can be positioned in different places on your site. For example the search field, which is visible on the left side of your first Drupal page, is a Drupal block. You can put a block in different places on your page, these places are called \textbf{regions}. These regions depend on the Drupal theme you are using. When you click the \textbf{Block layout} link you will see the following page (Figure \ref{fig:block_layout}). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/block_layout} \caption{Block layout} \label{fig:block_layout} \end{figure} This page shows a list of the different regions and allows you to add blocks to each of these regions. You can click the link \textbf{Demonstrate block regions(Bartik)} to view the regions of the current theme (Bartik) (Figure \ref{fig:region_demo}). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/region_demo} \caption{Bartik regions} \label{fig:region_demo} \end{figure} \item[Comment types] The comment types page allows you to create and manage different comment types. When you add content to your site you can allow people to comment on the new content. The comment type defines the fields that the commenter has to fill out when he's writing his comment. The default comment type has only one field: the comment body. We could, for example, create a new comment type which includes a field for the name and age of the commenter. \item[Contact form] The Personal contact form is the form for site visitors to contact registered users; the name and recipients of this form cannot be edited. Other forms listed here are your configured site-wide contact forms, which site visitors can use to send mail to a centralized email address or addresses. You can edit the name and recipients of site-wide forms by choosing the Edit operation. You can also configure the fields and display of both personal and site-wide forms. \item[Content types] The content types page is very important. The content types define which kind of information your CMS will manage. A content type has different fields. These fields define the information that is stored in the content type. By default Drupal has two content types: Article and Basic page (Figure \ref{fig:content_types}); \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/content_types} \caption{Default content types} \label{fig:content_types} \end{figure} When you click the \textbf{Manage fields} button on the right you can see what kind of information is stored in this content type. (Figure \ref{fig:content_type_fields}). As you can see, the Article content type has four fields: Body, Comments, Image and Tags. 
\begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/content_type_fields} \caption{Article content type fields} \label{fig:content_type_fields} \end{figure} Next to the \textbf{Manage fields} menu item tab you have the \textbf{Manage form display} and \textbf{Manage display} tabs. These allow you to edit which fields are displayed when an element is created/edited or viewed. \item[Display modes] Display modes define different ways in which information is displayed. There are two types of display modes: form modes and view modes. Form modes are used when the content is created or edited, view modes when the content is viewed. When you go to \textbf{Display modes} $\rightarrow$ \textbf{View modes} (Figure \ref{fig:viewmodes}) you will see the different ways in which a content type can be displayed. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/viewmodes} \caption{Default view modes} \label{fig:viewmodes} \end{figure} When you go to \textbf{Structure $\rightarrow$ Content types $\rightarrow$ Article $\rightarrow$ Manage display} you will see the following page: \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/display_types} \caption{Display modes} \label{fig:display_types} \end{figure} There you see the different display modes that you can use for your Article content type. At the bottom of the page you can enable other display modes in the custom display settings dropdown (Figure \ref{fig:custom_display_settings_dropdown}). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/custom_display_settings_dropdown} \caption{Custom display settings} \label{fig:custom_display_settings_dropdown} \end{figure} \item[Menus] Menus are easy, they define a menu with different menu items. Each menu has a corresponding block that is managed on the Block layout page. \item[Taxonomy] A taxonomy defines lists of terms, these terms can be associated with the content. A list of terms is called a vocabulary. For example: if we have a site with news articles about sports we could add a tag to each article to tell what the article is about. These tags can be defined in a vocabulary. \item[Views] A view enables you to create a display based on different content types. This course has a whole chapter dedicated to Drupal Views so we won't go into details here. \end{description} \section{Bitingbugs example} In the following sections we will review the content of this chapter by applying it to an example website. In the previous chapter we created the bitingbugs example website through the Acquia Dev Desktop. In this and the following chapters we will keep adding features to this site. Our goal is to create a webstore for selling edible insects. The site will also provide a database with recipes so people know how to cook with the insects. \subsection{Change the site title and logo} To make our site a little bit prettier we will change the logo and title. To change the site title go to \textbf{Configuration $\rightarrow$ Site information}. There change the site name to \textbf{Biting Bugs} (Figure \ref{fig:bitingbugs_change_title}). \begin{figure}[H] \centering \includegraphics[width=\textwidth]{chapter4/bitingbugs_change_title} \caption{Changing the site title} \label{fig:bitingbugs_change_title} \end{figure} The logo is part of the Drupal theme you are using. To change the logo go to: \textbf{Appearance $\rightarrow$ Settings $\rightarrow$ Logo image settings}. 
Uncheck \textbf{Use the default logo supplied by the theme} and upload the file \url{bitingbugs_logo_transp_white_right_small.png} (available in the course files zip). Click \textbf{Save configuration}.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_logo_and_title_changed}
\caption{Changed logo and title}
\label{fig:bitingbugs_logo_and_title_changed}
\end{figure}

\subsection{Removing the Search and Tools block}
On the right side of the page we have the \textbf{Search} and \textbf{Tools} blocks. We don't need them for now, so you can remove them by going to \textbf{Structure $\rightarrow$ Block layout} and moving them from the \textbf{Sidebar first} to the \textbf{None} region (Figure \ref{fig:bitingbugs_remove_search_block}). Click \textbf{Save blocks}.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_remove_search_block}
\caption{Remove a block from a region}
\label{fig:bitingbugs_remove_search_block}
\end{figure}

\subsection{Adding a Taxonomy}
To categorise the recipes on our site we will add a taxonomy. This taxonomy will contain different types of dishes. Go to \textbf{Structure $\rightarrow$ Taxonomy $\rightarrow$ Add vocabulary}. Use the following settings:
\begin{description}
\item[Name:] Dish types
\item[Description:] Describes the type of dish we will add.
\item[Vocabulary language:] English
\item[Default language:] Site's default language (English)
\end{description}

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_add_dish_types}
\caption{Adding a vocabulary}
\label{fig:bitingbugs_add_dish_types}
\end{figure}

Click \textbf{Save}. In Figure \ref{fig:bitingbugs_add_dish_types} you can see the empty vocabulary. Next we will add some terms to the vocabulary. Click the \textbf{Add term} button and add the following terms:
\begin{itemize}
\item candy
\item curry
\item dessert
\item pasta
\item salad
\item sandwiches
\item sauces
\item vegetarian
\item vegan
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_add_vocabulary_terms}
\caption{Adding a vocabulary term}
\label{fig:bitingbugs_add_vocabulary_terms}
\end{figure}

To see an overview of the terms you have added, go to \textbf{Structure $\rightarrow$ Taxonomy} and click the \textbf{List items} button next to your vocabulary (Figure \ref{fig:bitingbugs_dish_types}).

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_dish_types}
\caption{Vocabulary terms}
\label{fig:bitingbugs_dish_types}
\end{figure}

\subsection{Adding the \textbf{Recipe} content type}
Since we are going to store recipes in our CMS we will need to add the \textbf{Recipe} content type. Go to \textbf{Structure $\rightarrow$ Content types $\rightarrow$ Add content type}. Give it the following properties:
\begin{description}
\item[Name:] Recipe
\item[Description:] A recipe for cooking bugs!
\end{description}
At the bottom of the page you can see some settings for this content type. Explore the settings; the names are very descriptive, so most of them should be clear without further explanation. These are general settings that apply to all instances of this content type. You are able to change them for each instance individually when you create it. Click \textbf{Save and manage fields}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_manage_recipe_fields}
\caption{Manage recipe fields}
\label{fig:bitingbugs_manage_recipe_fields}
\end{figure}

Add the following fields to the \textbf{Recipe} content type:
\begin{description}
\item[Name] (Text/plain, Maximum length = 255, number of values = 1)
\item[Ingredients] (Text/plain, Maximum length = 255, number of values = unlimited)
\item[Body] (already there)
\item[Plate image] (Image, number of values = 1)
\item[Estimated time] (Number(decimal), number of values = 1, Help text = Estimated time to cook the recipe in minutes).
\item[Type] (Taxonomy term, number of values = unlimited, reference method = default, Vocabularies: Dish types, Create referenced entities if they don't already exist).
\end{description}
In Figure \ref{fig:bitingbugs_recipe_fields} you can see an overview of the fields.

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{chapter4/bitingbugs_recipe_fields}
\caption{Recipe fields}
\label{fig:bitingbugs_recipe_fields}
\end{figure}

\section{Review exercises}
\begin{enumerate}
\item Log in to your \textbf{exploringdrupal8} site (created in the previous chapter). Add the search block to the \textbf{sidebar second} region and disable the \textbf{powered by Drupal} block.
\item Add a comment type \textbf{Answer} to your \textbf{exploringdrupal8} site. We will use the comment type to allow users to answer a question. The comment type has two fields: answer (a number between 0 and 100000) and motivation (a textual explanation describing how they got the answer).
\item Add a new content type \textbf{recipe} to your \textbf{exploringdrupal8} site. The new content type has the following fields: Title, PlateImage (Image), ingredients (list), description. Make sure the teaser only displays the Title and Image fields.
\item Add a vocabulary \textbf{Food types} to your \textbf{exploringdrupal8} site. Add the following terms to the vocabulary: Indian, Chinese, vegetarian, vegan.
\end{enumerate}
{ "alphanum_fraction": 0.7455180796, "avg_line_length": 55.0794979079, "ext": "tex", "hexsha": "c4b5a0a19fed274a281b9ca5bc47a82ba6846fd7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b30a08346d934237c6fa6d11bb8ee0dbf1b63de4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "tomneutens/e-commerce-course", "max_forks_repo_path": "Chapter_4.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b30a08346d934237c6fa6d11bb8ee0dbf1b63de4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "tomneutens/e-commerce-course", "max_issues_repo_path": "Chapter_4.tex", "max_line_length": 489, "max_stars_count": null, "max_stars_repo_head_hexsha": "b30a08346d934237c6fa6d11bb8ee0dbf1b63de4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "tomneutens/e-commerce-course", "max_stars_repo_path": "Chapter_4.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3340, "size": 13164 }
% !TEX root = main.tex
\section{Conclusion}
In this project, we properly defined what it means for a program to have unsafe dataflow. We proposed an algorithm schema that computes whether a program has unsafe dataflow. Furthermore, we proved that any algorithm implementing this schema is sound. We implemented a database schema and library in the CodeQL framework. On top of that, we provided a dataflow algorithm that fits our proposed dataflow algorithm schema, and is therefore a sound dataflow algorithm. To close the loop, we demonstrated how to recover a part of the algorithm description in regular first-order predicate logic.

Furthermore, we extended the production dataflow library employed by CodeQL for analysing programs in Java with a notion of call sensitivity. This reduces the number of false positives. Our extension was accepted upstream and is now part of the CodeQL codebase and deployed to customers around the world.

\subsection{Future Work}
There are several areas of future work. The theoretical framework can be improved by making a distinction in the soundness theorem between sinks that (if they are executed) will always abort the program, and sinks that are safe to execute in some contexts. The section about call sensitivity already outlines possible future improvements to the call-sensitivity code. Another area which can be investigated is making the dataflow analysis path-sensitive.

\subsection{Path-Sensitive Dataflow}
Initially, the goal of this project was to make the dataflow analysis path-sensitive to reduce false positives. Code like that in~\autoref{lst:ex-ps-1} is currently (wrongly) detected as having unsafe dataflow by the CodeQL dataflow library.

\begin{listing}[h]
\begin{javacode}
public void f(boolean b) {
    Object o = null;
    if (b) {
        o = source();
    }
    if (!b) {
        sink(o);
    }
}
\end{javacode}
\caption{Simple example where path-sensitive dataflow would reduce false positives}
\label{lst:ex-ps-1}
\end{listing}

However, implementing a path-sensitive dataflow analysis in QL without introducing soundness issues proved to be too difficult. We focussed on suppressing nodes from the dataflow graph, instead of edges, which didn't turn out to be a good approach. Furthermore, the dataflow graph is a very coarse representation of the program. It cannot easily be extended with all the information that is needed to track the outcomes of tests, while still being a compact representation.

After failing with path-sensitivity, we turned to understanding the dataflow problem better. For that, we developed the formalism introduced in Section 2, and we showed how that theory connects to the practically relevant QL implementation in Section 3. Now that the theoretical framework is in place, it is much easier to investigate path-sensitivity. Unfortunately, due to time constraints, we are not able to investigate path-sensitivity here.

\iffalse
\subsection{Path Sensitivity}
The initial goal of this project was to make the dataflow analysis path-sensitive to reduce false positives. Code like in~\autoref{lst:ex-ps-1} is currently detected as having (potentially) dataflow.
\begin{listing}[h] \begin{javacode} public void f(boolean b) { Object o = null; if (b) { o = source(); } if (!b) { sink(o); } } \end{javacode} \caption{Simple example of path-sensitive dataflow} \label{lst:ex-ps-1} \end{listing} \begin{listing}[h] \begin{javacode} public void f() { boolean[] boolArray = {true, false}; Object x = null; for(boolean b: boolArray) { if (b) { x = source(); } if (!b) { sink(x); } } } \end{javacode} \caption{Example of a false negative with the implemented algorithm} \label{lst:ps-false-negative} \end{listing} However, the initial implementation had a serious soundness problem, that only was discovered quite late in the implementation process. Namely, it was wrongly not reporting code like in~\autoref{lst:ps-false-negative} to have dataflow. It boils down that the implementation was ignoring dataflow in~\autoref{lst:ps-false-negative}, because both variable reads refer to the same variable (even in SSA), but the variable can have different values in different loop iterations. This unsoundness was understandably not acceptable to the CodeQL developers. The focus of the project then shifted to understanding the dataflow algorithm better, which is when sections 2 and 3 were developed. Now, using these tools to formalize a path-sensitive dataflow algorithm, proving it sound and also providing a sound implementation for Java is not feasible in the time-constraints of this project. However, using the understanding gained while developing the theoretical part of the project, we can at least outline a possible solution to the soundness problem. While this solution has good chances of being sound, it raises performance concerns that might make it prohibitive to implement in a intraprocedural setting. \subsubsection*{Changes to the Algorithm} In order to make the algorithm path-sensitive, we have to look at the labels. Right now, the algorithm labels nodes with $\labclean$ and $\labtracked$. This needs to be changed to track labels together with a dynamic set of constraints. These constraints describe the values of the conditionals that need to be true for a value to propagate from its creation (a node in the dataflow graph that introduces a new label) to any given node. Thus, the constraints in the set are viewed as conjunction. During label propagation, dataflow nodes in a then-branch of an if statement attach the condition of the if to their labels, and in the else-branch the negation of that if statement. The same propagation rule is implemented for while loops. If this rule leads to an unsatisfiable constraint set, i.e.\ the conjunction of its elements will always evaluate to false, regardless of the variables in the program, the label is dropped. An example of an unsatisfiable constraint set would be $\{b, \neg b\}$. Note that getting constraints from nodes in the the dataflow graph alone would only result in a subset of all available conditions be attached to the labels, as the dataflow graph is a very coarse subset of the control-flow graph. Thus, there is a loss of precision here that would only be alleviated if switching to propagate the dataflow information directly on the control-flow graph. That would have huge performance implications, that might be untenable in practice. At $\varphi$-nodes, both incoming labels are preserved if they have different labels or different constraint sets. If the same label is attached to a node with different constraint sets, it means that either one of the constraint set holds for that label. 
Furthermore, and this is the important difference to our na\"ive implementation suffering from soundness problems is that constraints in the constraint set sometimes need to be dropped during dataflow propagation. Specifically, all constraints that refer to variables that go out of scope need to be dropped. Determining which labels to drop exactly based on the dataflow graph alone might be very difficult, if not impossible. This is because data can flow from a dataflow node to another node, and all SSA variables referenced in the constraints are live at both nodes. However, the variables could have been re-defined in the meantime, thus possibly having new values. This is mainly caused by the dataflow graph skipping over some parts of the control-flow graph, so label propagation might not see the re-definition of variables. A main concern for a sound implementation is thus figuring out when variables in the constraint set go out of scope, and the constraint needs to be dropped. At the end, dataflow is detected if a sink has at least one label of type $\labtracked$, regardless of the constraint set. \subsubsection*{Changes to the Theory} The theoretical model of dataflow would need to change quite a bit to accomodate a soundness proof of the algorithm outlined above. This section is purely speculative, as we haven't done the actual work, and it might turn out that it turns out entirely different. In our discussion of the dataflow algorithm without path-sensitivity, we have two slightly different approaches in theoretical world, that keeps track of annotated types, whereas the algorithm implementation computes these types by using label sets. If each label gets a constraint set, and labels of the same type (but with different constraint sets) can coexist in the same set, this needs to be modelled on the type level as well. Furthermore, the typing rules need to allow for the possibility of adding constraints at branch points (i.e.\ while and if statements). This formulation should be flexible enough that not every condition \emph{needs} to contribute to the constraint set, so conditions that are too complicated can be skipped by the implementation. However, on the other hand, the framework needs to be strong enough to be able to prove that if two constraints in a constraint set conflict, that the label can be dropped. Not having a label assigned at a node after the label propagation stops needs somehow to result in an annotated type that markes variables as clean. We can conclude this thought experiment by stating that the algorithm implementation and the algorithm specification (that should be proven to be sound) need a tighter coupling than the framework presented in this project report provides. \subsubsection*{Practical Feasibility} The main concern with path-sensitive extensions of the dataflow algorithm in practice is that it is unlikely to work purely on the dataflow graph. Probably more than less involvement of the control-flow graph would be needed. While an algorithm for that could certainly be developed, and, in a reduced setting like dIMP, be proven sound, it would probably perform badly. The size of the control-flow graph is much bigger than the carefully optimized dataflow graph. Especially in an intraprocedural setting, the whole control-flow graph of a program consisting of several millions of lines of code is too big run analyses on. 
Thus, without further research and some good ideas, it is probably not a good idea to spend significant ressources on making an intraprocedural dataflow algorithm path-sensitive as outlined above. \fi
{ "alphanum_fraction": 0.7858778626, "avg_line_length": 48.7441860465, "ext": "tex", "hexsha": "ac7d442d60674c7a2dad71ff5225e08558396f65", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "088e1670104fe5ffd6708a33a013ec062f7c37f7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "criemen/semantics-project-report", "max_forks_repo_path": "5-conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "088e1670104fe5ffd6708a33a013ec062f7c37f7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "criemen/semantics-project-report", "max_issues_repo_path": "5-conclusion.tex", "max_line_length": 119, "max_stars_count": 1, "max_stars_repo_head_hexsha": "088e1670104fe5ffd6708a33a013ec062f7c37f7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "criemen/semantics-project-report", "max_stars_repo_path": "5-conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-31T13:36:57.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-31T13:36:57.000Z", "num_tokens": 2224, "size": 10480 }
\chapter{Survival Trees}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Log-Rank Test}

The \textbf{log-rank test} (Mantel 1966; Peto and Peto 1972) is a test for statistical equivalence of two survival curves. It is obtained by constructing a $2 \times 2$ contingency table at the time of each event and comparing the failure rates between the two groups, conditional on the number at risk in each group\footnote{See https://bookdown.org/sestelo/sa\_financial/comparing-survival-curves.html.}. In this way, the test compares the entire survival experience between groups. The null hypothesis is that the true underlying curves for the two groups are identical. ``In the absence of censoring, these methods reduce to the Wilcoxon-Mann-Whitney rank-sum test (Mann and Whitney 1947) for two samples and to the Kruskal-Wallis test (Kruskal and Wallis 1952) for more than two groups of survival times.'' A small computational sketch of the log-rank statistic appears at the end of this chapter.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Survival Example: Primary Biliary Cirrhosis}

Now let's consider how the same tree-building machinery can be applied to a real survival dataset. This data is from the Mayo Clinic trial in primary biliary cirrhosis (PBC) of the liver conducted between 1974 and 1984. A total of 424 PBC patients, referred to Mayo Clinic during that ten-year interval, met eligibility criteria for the randomized placebo-controlled trial of the drug D-penicillamine. The first 312 cases in the data set participated in the randomized trial and contain largely complete data. The additional 112 cases did not participate in the clinical trial, but consented to have basic measurements recorded and to be followed for survival. Six of those cases were lost to follow-up shortly after diagnosis, so the data here are on an additional 106 cases as well as the 312 randomized participants.

\begin{question}{}
Here's a dataset of survival data... how would you build a survival forest?
\end{question}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Random Survival Forests}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Boosted Survival Trees}
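%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Aside: Computing the Log-Rank Statistic}

As promised above, here is a small computational sketch of the two-sample log-rank statistic, following the textbook construction (observed minus expected events at each event time, normalized by the hypergeometric variance). The survival times below are made-up toy values, not the PBC data.

\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def logrank_test(time1, event1, time2, event2):
    # Two-sample log-rank test.
    # time*  : follow-up times; event* : 1 = event observed, 0 = censored.
    # Returns the chi-square statistic (1 df) and its p-value.
    time1, event1 = np.asarray(time1, float), np.asarray(event1, int)
    time2, event2 = np.asarray(time2, float), np.asarray(event2, int)
    event_times = np.unique(np.concatenate([time1[event1 == 1],
                                            time2[event2 == 1]]))
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        n1 = np.sum(time1 >= t)                    # at risk in group 1
        n2 = np.sum(time2 >= t)                    # at risk in group 2
        d1 = np.sum((time1 == t) & (event1 == 1))  # events in group 1
        d2 = np.sum((time2 == t) & (event2 == 1))  # events in group 2
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o_minus_e += d1 - d * n1 / n
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / var
    return stat, chi2.sf(stat, df=1)

# Toy example: group 2 appears to survive longer.
print(logrank_test([2, 4, 5, 7, 9], [1, 1, 0, 1, 1],
                   [6, 8, 10, 12, 14], [1, 0, 1, 1, 0]))
\end{verbatim}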
{ "alphanum_fraction": 0.6768814781, "avg_line_length": 59.972972973, "ext": "tex", "hexsha": "62b9891b8e00c6049e0d30a785c10d75a1f2f102", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-12-14T17:16:44.000Z", "max_forks_repo_forks_event_min_datetime": "2021-12-14T17:16:44.000Z", "max_forks_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "blpercha/mcds-notes", "max_forks_repo_path": "tex/mcds-survival-trees.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "blpercha/mcds-notes", "max_issues_repo_path": "tex/mcds-survival-trees.tex", "max_line_length": 720, "max_stars_count": 6, "max_stars_repo_head_hexsha": "33531a443afb154b5c415299276a2ad215463896", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "blpercha/mcds-notes", "max_stars_repo_path": "tex/mcds-survival-trees.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-03T01:31:23.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-10T16:51:08.000Z", "num_tokens": 450, "size": 2219 }
\chapter{Conclusions}\label{ch:8}
In this final chapter we look back at the road traveled so far and gather some thoughts about how things fit together, and where to go from here. This chapter serves as a summary of the contents of the thesis, a reflection on its connections to other, more broadly related work, and a pointer to directions for future research.

In putting together the material for this thesis we took seriously a claim made by Hans Rott and others \cite{Rott01,Bonanno09,Arlo-CostaP10} that changing beliefs is like making a decision. According to this viewpoint, revision is analogous to a single agent making a decision as to what possible outcomes out of a given menu it will focus on, where the menu consists of the allowed outcomes provided by the new information; update is a variation on this, according to which the final decision is distributed across all the models of the prior information; and merging is analogous to a group of agents deciding on the collective set of acceptable outcomes, subject to a constraint. The parallel with decision making was facilitated by the fact that the revision postulates $\ppr{1}$, $\ppr{3}$ and $\ppr{5-6}$, as well as the update postulates $\ppu{1}$, $\ppu{3}$ and $\ppu{5-6}$, are close analogues of axioms $\ooch{1-4}$ for individual rational choice, and that merging postulates $\ppm{0-8}$, besides rehashing the revision postulates, also closely track properties typically employed to characterize voting rules. This parallel, we argued in Chapters \ref{ch:1} and \ref{ch:3}, also makes sense on a conceptual level: the preferences that lie at the heart of rational choice, individual as well as social, reappear in belief change as preorders over outcomes, encoding the agents' assessments of the plausibility, or desirability, of outcomes relative to each other.

More broadly, the idea that agents use something along the lines of preference information when drawing inferences in the wild fits with a distinct line of research on what non-monotonic logics look like at the semantic level \cite{StrasserA19}. The idea, simply put, is that when agents use their background information, which we may call $\phi$, to figure out whether something, which we may call $\mu$, holds in the real world, what they do is that they pick \emph{some} models of $\phi$ on top of which to reason. What exactly this picking represents has never been entirely settled, but we can readily see that, from a cognitive point of view, it makes eminent sense: if $\phi$ represents the entirety of an agent's background knowledge, and if the agent had to consult all the models of $\phi$ before it could make up its mind as to whether $\mu$ follows from it, as classical logic instructs, then it would probably never reach any conclusion, since the number of possibilities is likely to be astronomically high; and even if the agent did manage to reach a conclusion in efficient time, the answer would probably be, more often than not, \emph{no}, since most real-world inferences do not account for all the subtle, but entirely irrelevant, ways in which a scenario can be varied. Rather, we can imagine that real-world agents draw inferences by picking something like the most `normal', `typical', or `probable' models of $\phi$ and checking those to see if $\mu$ holds in them. Of course, the agent does not literally go through a list and pick out models of $\phi$: a specialized module of its cognitive apparatus, e.g., its memory, attention or social background, does this for it.
Thus, it could be argued, ambitiously, that all of non-monotonic reasoning, in general, is about choice: choice over which of the myriad possible configurations of the world to use in a specific reasoning task. And we can picture the rational choice theorists of yore pointing out that this process can be described, as it actually has been \cite{Shoham87,Pearl89,KrausLM90}, using choice functions and preference orders.

In Sections \ref{sec:3-revision}, \ref{sec:3-update} and \ref{sec:3-merging} we presented the formal models for revision, update and merging, respectively, in the light of this preference-driven, choice theoretic approach. In doing so, we merely retraced steps taken by our predecessors \cite{Rott01,Bonanno09,Arlo-CostaP10,KoniecznyP11}, steps that were present even in the original models of belief revision \cite{AlchourronGM85,KatsunoM92}. In Section \ref{sec:3-enforcement} we showed that the choice theoretic perspective can also be useful for the design of new belief change operators, and exemplified this with \emph{enforcement}, a dual version of revision that sits somewhere on the spectrum of non-prioritized belief change operators. The main challenge, for us, of figuring out what enforcement does was to understand it at the semantic level: what do the preorders look like? And what kind of choice function best fits enforcement? Originally, we opted for a representation in terms of partial orders on formulas, or sets of interpretations \cite{HaretWW18}, with the choice function picking out the one set that was best, given the new information: the partial order, then, had to be designed in such a way that there would always be a unique best set of interpretations out of any lineup that could be presented, and the specification of the conditions under which this held true ended up being rather opaque. In this work we switched to a more standard representation, in terms of preorders on the interpretations themselves; what had to be changed, then, was the choice function: we could not use something that selected models of $\mu$, since the models of $\mu$ needed to be left in place. What we needed was a function that added models to $\mu$ in as greedy a manner as possible, and this led us to the idea of the addition operator.

The idea behind enforcement proved to be more fertile than we thought it would be, as it lent itself naturally to revision of preferences, described in Chapter \ref{ch:7}. The original aim for enforcement, which was to provide a principled approach to enforcement in abstract argumentation \cite{Baumann12}, ended up being sidelined, but is a promising direction for future work.

The same choice theoretic perspective, applied back to revision, led us to think about the role of the different postulates in the grand scheme of things. It became clear that postulate $\ppr{2}$ was not a rationality constraint in the same manner as the other postulates were, in the sense that it concerned exclusively the placement of the models of $\phi$ in the agent's ranking on outcomes, and corresponded to something like the agent's attitude, or bias, about how privileged these models should be when revision needed to occur: in this perspective, $\ppr{2}$ could be seen as one attitude among many. A more systematic attempt to generate such biases, using simple variations of the functions used to rank outcomes relative to $\phi$, i.e., the \emph{aggregation functions} in Section \ref{sec:2-distances}, led to Chapter \ref{ch:4}.
Of course, more sophisticated variations, corresponding to more psychologically realistic biases, can be imagined, and it is an exciting prospect to think of revision along these parameters. At the same time, the more fine-grained view on the types of biases an agent can have towards its initial beliefs raises the question of what these attitudes are good for, i.e., whether they can be used for tasks such as learning or tracking the truth \cite{Kelly98,BaltagGS19}. The idea here is to view revision as part of an ongoing process by which the agent continuously refines its representation of the outside world, with the aim of settling on stable, correct information. Such a task, we think, provides a natural benchmark for revision operators, and it has the potential to connect belief revision to other topics of importance to the field of AI. It would also be interesting to study the complexity of these operators, and see how it compares to the complexity of existing belief change operators \cite{EiterG92,PfandlerRWW15}.

In Chapter \ref{ch:5} we looked at merging, which is to revision as social choice is to individual rational choice. From the outset we opted to look at merging as a collective decision process, whose aim is to be fair, rather than as an information aggregation process, whose aim would be to be right, or accurate. Postulates $\ppm{0-8}$ are, largely, compatible with both approaches. The idea of looking at merging as a kind of voting scenario, where the candidates are the outcomes, suggested that postulates $\ppm{0-8}$ were only a starting point, and that merging was fair game for the large variety of properties studied in social choice. This led to the original paper \cite{HaretPW16} and to Sections \ref{sec:5-syntax}, \ref{sec:5-evenhandedness} and \ref{sec:5-responsiveness}, which are based on it. Shortly after, the \emph{Handbook of Computational Social Choice} \cite{BrandtCELP2016} and the volume on \emph{Trends in Computational Social Choice} \cite{Endriss17} came out, and it became clear that merging occupied a place somewhere in between combinatorial voting \cite{LangX16} and multiwinner voting \cite{FaliszewskiSST17}, and that the transfer of knowledge from the classical voting models to more sophisticated settings was a matter of considerable interest, so we set our sights on strategyproofness. At the same time, our interests were equally stoked by the idea that merging, or a merging-like framework, could be used to aggregate other types of formalisms of interest to the AI community, such as Horn formulas \cite{HaretRW15,HaretRW17} or abstract argumentation frameworks \cite{DelobelleHKMRW16}. This led us to consider applying acceptance notions (such as the skeptical and credulous notions presented in Section \ref{sec:5-strategyproofness}) to the results of a merging operator, and to see what happened to the existing strategyproofness results \cite{EveraereKM07}. Since our methods for calculating satisfaction with respect to the merging results were different from the original setting \cite{EveraereKM07}, there was no promise that its results would be instantly applicable. What we found, however, was that the situation was even worse, in the sense that, with one exception, restrictions that guaranteed strategyproofness in \cite{EveraereKM07} failed to do so in our setting.
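Since several of the points above lean on the mechanics of distance-based merging, the following toy sketch may help fix intuitions: each agent's base is represented by its set of models, outcomes are ranked by summing Hamming distances to the bases, and the operator returns the best outcomes among the models of the integrity constraint. It is only an illustration of the general distance-based scheme, not a faithful implementation of the specific operators studied in this thesis.

\begin{verbatim}
from itertools import product

def hamming(u, v):
    # Hamming distance between two truth assignments (0/1 tuples).
    return sum(a != b for a, b in zip(u, v))

def dist_to_base(w, base):
    # Distance from an outcome w to a base, i.e. to its closest model.
    return min(hamming(w, m) for m in base)

def merge(bases, constraint_models):
    # Distance-based merging with sum aggregation: return the models of
    # the constraint with minimal aggregated distance to the bases.
    def score(w):
        return sum(dist_to_base(w, base) for base in bases)
    best = min(score(w) for w in constraint_models)
    return {w for w in constraint_models if score(w) == best}

# Three agents over two atoms; two agents hold (1, 1), one holds (0, 0),
# and the integrity constraint allows every outcome.
outcomes = set(product((0, 1), repeat=2))
print(merge([{(1, 1)}, {(1, 1)}, {(0, 0)}], outcomes))  # -> {(1, 1)}
\end{verbatim}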
The main goal for future research here is to tie the properties in Sections \ref{sec:5-syntax}, \ref{sec:5-evenhandedness} and \ref{sec:5-responsiveness} together with the notions of strategyproofness in Section \ref{sec:5-strategyproofness} for a general result along the lines of the classical theorems of social choice theory \cite{Gibbard73,Satterthwaite75,DugganS00}. Our aim is to also consider extended settings of manipulation, e.g., bribery~\cite{BaumeisterEER15}, where sets of agents can be incentivized to form a joint manipulating coalition. Our work on merging and proportionality also suggests several directions for future research. Even though the two proportionality postulates $\ppm{\CPROP}$ and $\ppm{\BPROP}$ we proposed apply only to very restricted instances, experience has shown that even weak proportionality postulates have proven sufficient for axiomatic characterizations \cite{LacknerS18b}. In our work, as well, these two postulates are sufficient to distinguish proportional from non-proportional operators. On the other hand, stronger postulates are desirable to determine to what degree proportionality guarantees can be given. This has recently been investigated in the context of approval-based committee elections \cite{AzizBCEFW17,AzizEHLFS18,FernandezELGABS17}, and this line of work can serve as a basis for a similar analysis for belief merging operators. Coming back to manipulation, it can be fully expected that proportional belief merging operators are prone to strategic voting, as in the setting of approval-based committee elections even weak forms of proportionality and strategy-proofness have been shown to be incompatible \cite{Peters18}. Still, it has been found that the percentage of manipulable instances depends strongly on the choice of voting rules~\cite{LacknerS18}, indicating that a detailed analysis of vulnerabilities is an interesting avenue for future work. Finally, it would be interesting to see if the framework of merging can be used in different social choice contexts, e.g., resource allocation \cite{ChevaleyreEM17}. Chapters \ref{ch:4} and \ref{ch:5} are both concerned with foundational issues in the theory of belief change. The remaining chapters have a more applied bent. Chapter \ref{ch:6} takes us back to the single-agent belief change operations of revision and update, this time applied to the Horn fragment. Section \ref{sec:6-revision-hph} developed alongside Chapter \ref{ch:4}. It was clear to us that in certain situations postulate $\ppr{2}$ would make an $\HPH$-revision operator choose a set of interpretations that could not be expressed as a Horn formula, but a weaker version of postulate $\ppr{2}$, which allowed the operator to select some portion of that set that could be expressed as a Horn formula, might work. The catch, of course, was that such discriminatory behavior was bound to violate postulate $\ppr{\NEUT}$. Sections \ref{sec:6-hhh-revision} and \ref{sec:6-hhh-update} grew out of an attempt to extend the basic framework for revision in fragments in \cite{DelgrandeP15,DelgrandePW18} to other settings: first of all to update, and then to the weaker postulates $\ppr{7-8}$ (and $\ppu{7-8}$), describing partial preorders. The latter turned out to be more challenging. The main challenge for the future, in this case, is to flesh out the properties that are essential for the representation results to work, and extend these results to other fragments, in the manner of existing models \cite{DelgrandePW18}.
Chapter \ref{ch:7} applies the principles of belief change to preferences. Since belief change, as described in Chapter \ref{ch:3}, is itself about choice and preferences, preference change ended up being characterized in terms of preferences on preferences, which coincided with thoughts about the dynamics of preferences from Sen and others \cite{Sen77}. Interestingly, the principles that were best suited for this type of operation were not the revision postulates $\ppr{1-6}$, but their dual versions $\ppe{1-6}$, used for enforcement. This formulation also suggested the right kind of choice function for preference revision operators, with the addition operator in Chapter \ref{ch:7} being adapted directly from the addition operator in Section \ref{sec:3-enforcement}. In general, due to its flipped choice function that adds elements to the new information rather than removing them, enforcement is better suited to describing the dynamics of types of objects that are constructed out of some building block-like elements, in the way in which strict partial orders are constructed out of their comparisons. By contrast, a propositional formula is not `made up of' its models in the same way in which a partial order is constructed out of its comparisons: a propositional formula is more like a set of specifications, with its models being the outcomes that meet those specifications. To put this differently, we could have approached preference revision in an alternative way, by using a logical formalism in which the object being revised would be something like a preference formula $\phi$, whose models are all the different preference orders that satisfy it. Such a formalism could be a fragment of propositional logic, e.g., of acyclic definite Horn clauses consisting of two variables, where one such clause $a\rightarrow b$ encodes the comparison that $a$ is at least as good as $b$. Such a fragment, however, is not closed under conjunction, and therefore does not fall within the purview of existing work on revision in fragments \cite{DelgrandePW18}. Another option would be to use a formalism specifically tailored to talk about preference orders, such as the language $\mathrm{PL}$ \cite{BienvenuLW10}, but there we would encounter the same problems of expressibility, i.e., of making sure that the output will be expressible in the target language. Either way, had we proceeded along these lines, the revision problem would have amounted to selecting some models of the new information $\mu$, and in this case the revision postulates $\ppr{1-6}$ would be the appropriate postulates to use. This is definitely a viable alternative, and a promising direction for further work. More generally, one lesson that can be drawn from this thesis is that a belief change operator arises out of a combination of a few basic elements: a language for representing the information, a set of logical postulates, a set of semantic properties describing the preferences over outcomes, and a choice procedure connecting the two. For propositional revision we have propositional logic, postulates $\ppr{1-8}$, properties $\oor{1-7}$ and the choice procedure that selects the minimal elements of $\mu$. In the Horn fragment we have the same choice procedures but a different representation language, which then requires that the postulates and properties be supplemented to make up for the expressive limitations of Horn formulas.
For enforcement and its offshoots, the postulates are $\ppe{1-6}$, the properties are $\ooe{1-6}$ and the choice function is given by the addition operator, with additional quirks depending on the type of representation language used. Designing a belief change operator requires all these elements to work together, in what is usually a delicate and fragile balance: modifying one element, even slightly, usually requires rethinking most of the other elements as well. At the moment, a universal, foolproof recipe for applying belief change to any Knowledge Representation formalism we might be interested in still seems slightly out of reach. Hopefully, as more work becomes available, the gap will become narrower.
{ "alphanum_fraction": 0.7958097954, "avg_line_length": 61.3905723906, "ext": "tex", "hexsha": "7aae6d720c68a76c509637ece8ae64c5fc0073c3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9f154e39d753c1bc4edc0382cf8a2e2655e19393", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adrian-haret/choosing-beliefs", "max_forks_repo_path": "chapters/8-conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9f154e39d753c1bc4edc0382cf8a2e2655e19393", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adrian-haret/choosing-beliefs", "max_issues_repo_path": "chapters/8-conclusion.tex", "max_line_length": 129, "max_stars_count": null, "max_stars_repo_head_hexsha": "9f154e39d753c1bc4edc0382cf8a2e2655e19393", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adrian-haret/choosing-beliefs", "max_stars_repo_path": "chapters/8-conclusion.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4134, "size": 18233 }
% This LaTeX was auto-generated from an M-file by MATLAB.
% To make changes, update the M-file and republish this document.

\subsection{getPlotDistr\_cdp.m}

\begin{par}
\textbf{Summary:} Compute means and covariances of the Cartesian coordinates of the tips of both the inner and the outer pendulum, assuming that the joint state $x$ of the cart-double-pendulum system is Gaussian, i.e., $x\sim N(m, s)$.
\end{par} \vspace{1em}

\begin{verbatim} function [M1, S1, M2, S2] = getPlotDistr_cdp(m, s, ell1, ell2)\end{verbatim}

\begin{par}
\textbf{Input arguments:}
\end{par} \vspace{1em}

\begin{verbatim}m       mean of full state                     [6 x 1]
s       covariance of full state               [6 x 6]
ell1    length of inner pendulum
ell2    length of outer pendulum\end{verbatim}

\begin{verbatim}Note: this code assumes the following order of the state:
1: cart pos., 2: cart vel., 3: pend1 angular velocity,
4: pend2 angular velocity, 5: pend1 angle, 6: pend2 angle\end{verbatim}

\begin{par}
\textbf{Output arguments:}
\end{par} \vspace{1em}

\begin{verbatim}M1      mean of tip of inner pendulum          [2 x 1]
S1      covariance of tip of inner pendulum    [2 x 2]
M2      mean of tip of outer pendulum          [2 x 1]
S2      covariance of tip of outer pendulum    [2 x 2]\end{verbatim}

\begin{par}
Copyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.
\end{par} \vspace{1em}

\begin{par}
Last modification: 2013-03-06
\end{par} \vspace{1em}

\subsection*{High-Level Steps}

\begin{enumerate}
\setlength{\itemsep}{-1ex}
   \item Augment input distribution to complex angle representation
   \item Compute means of tips of pendulums (in Cartesian coordinates)
   \item Compute covariances of tips of pendulums (in Cartesian coordinates)
\end{enumerate}

\begin{lstlisting}
function [M1, S1, M2, S2] = getPlotDistr_cdp(m, s, ell1, ell2)
\end{lstlisting}

\subsection*{Code}

\begin{lstlisting}
% 1. Augment input distribution (complex representation)
[m1, s1, c1] = gTrig(m, s, [5 6], [ell1, ell2]); % map input through sin/cos

m1 = [m; m1];        % mean of joint
c1 = s*c1;           % cross-covariance between input and prediction
s1 = [s c1; c1' s1]; % covariance of joint
% the augmented entries 7..10 are used below as
% l1*sin(theta1), l1*cos(theta1), l2*sin(theta2), l2*cos(theta2)

% 2. Mean of the tips of the pendulums (Cart. coord.)
M1 = [m1(1) - m1(7); m1(8)];          % inner tip: E[x - l1\sin\theta_1]; E[l1\cos\theta_1]
M2 = [M1(1) - m1(9); M1(2) + m1(10)]; % outer tip: subtract l2\sin\theta_2, add l2\cos\theta_2

% 3. Put covariance matrices together (Cart. coord.)
% first set of coordinates (tip of 1st pendulum)
S1(1,1) = s1(1,1) + s1(7,7) - 2*s1(1,7);
S1(2,2) = s1(8,8);
S1(1,2) = s1(1,8) - s1(7,8);
S1(2,1) = S1(1,2)';

% second set of coordinates (tip of 2nd pendulum)
% note the sign of the cross terms: M2(1) = M1(1) - m1(9)
S2(1,1) = S1(1,1) + s1(9,9) - 2*(s1(1,9) - s1(7,9));
S2(2,2) = s1(8,8) + s1(10,10) + 2*s1(8,10);
S2(1,2) = s1(1,8) - s1(7,8) - s1(9,8) ...
        + s1(1,10) - s1(7,10) - s1(9,10);
S2(2,1) = S2(1,2)';

% make sure we have proper covariances (sometimes numerical problems occur)
try
  chol(S1);
catch
  warning('matrix S1 not pos.def. (getPlotDistr)');
  S1 = S1 + (1e-6 - min(eig(S1)))*eye(2);
end

try
  chol(S2);
catch
  warning('matrix S2 not pos.def. (getPlotDistr)');
  S2 = S2 + (1e-6 - min(eig(S2)))*eye(2);
end
\end{lstlisting}
{ "alphanum_fraction": 0.625256975, "avg_line_length": 33.3823529412, "ext": "tex", "hexsha": "088b32b60c66886dbcef1c54fbf206abc53c96ea", "lang": "TeX", "max_forks_count": 36, "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_path": "doc/tex/getPlotDistr_cdp.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_path": "doc/tex/getPlotDistr_cdp.tex", "max_line_length": 226, "max_stars_count": 53, "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_path": "doc/tex/getPlotDistr_cdp.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "num_tokens": 1221, "size": 3405 }
\chapter{Newton polygons}

We'll now introduce a very useful tool for studying the radius of convergence and the zeroes of an analytic function: the Newton polygon. We'll first introduce it for polynomials and then try to generalize our results to power series.

\section{Newton polygons for polynomials}

\begin{defn}
\label{defn:newton-polygon-polynomials}
Let $f(X) = 1 + \sum_{i=1}^n a_iX^i \in 1 + X\Cp[X]$ be a polynomial and consider the following set of points in $\R^2$:
\[
\Gamma := \{(0,0)\} \cup \left\{(i, \ord a_i) \mid a_i \neq 0, i \in \{1, \dots, n\}\right\}.
\]
The \emph{Newton polygon} of $f(X)$ is the inferior convex hull of these points, i.e. the highest convex polygonal line joining $(0, 0)$ with $(n, \ord a_n)$ which passes on or below all the points in $\Gamma$.
\end{defn}

A nice way to think about the Newton polygon is the following: we begin with a vertical line through $(0,0)$ and we rotate it about $(0,0)$ counter-clockwise until we hit some point of $\Gamma$; then we consider the segment joining $(0,0)$ with the last point we hit ($P$) as the first segment of the Newton polygon and we continue to rotate the line counter-clockwise about $P$ and repeat the procedure.

\begin{example}
The Newton polygon of $f(X) = 1 + X^2 + \tfrac{1}{3}X^3 + 3X^4 + 54X^5$ in $\Q_3[X]$ is shown in \cref{figure:figure4.1}.
\begin{figure}
	\centering
	\includegraphics[scale=2.5]{/home/carlo/Tesi/images/figure_4_1}
	\caption{Newton polygon of $f(X) \in \Q_3[X]$}
	\label{figure:figure4.1}
\end{figure}
\end{example}

Let's introduce some basic terms we'll adopt from now on.

\begin{defn}
The \emph{vertices} of the Newton polygon are the points $\left(i_j, \ord a_{i_j}\right)$ where the slope changes; the \emph{segments} of the Newton polygon are the segments joining one vertex to the next one. If a segment joins $(i, m)$ to $(i', m')$, its slope is $\tfrac{m' - m}{i' - i}$ and its length is $i' - i$, i.e. the length of its projection onto the horizontal axis.
\end{defn}

We have defined the Newton polygon only for a polynomial with constant term $1$, but this causes no loss of generality, because the main use of the Newton polygon is to characterize the zeroes (and the radius of convergence) of $f(X)$. Given a generic $g(X) \in \Cp[X]$ we can write:
\[
g(X) = b_kX^k + \dots + b_nX^n = b_k\cdot X^k \cdot \left(1 + \frac{b_{k+1}}{b_k}X + \dots + \frac{b_n}{b_k}X^{n-k}\right) =: b_k \cdot X^k \cdot f(X)
\]
and we can study $f(X)$, which satisfies our initial hypothesis.
Before proving our main result about the Newton polygon for polynomials, let's recall what symmetric polynomials are.

\begin{defn}
Let $K$ be a commutative ring with unit, $\underline{X} := (X_1, \dots, X_n)$ and let $P(\underline{X}) \in K[\underline{X}]$ be a polynomial in $n$ variables. We say that $P(\underline{X})$ is symmetric if for every $\sigma \in S_n$ we have $P(X_{\sigma(1)}, \dots, X_{\sigma(n)}) = P(X_1, \dots, X_n)$, where $S_n$ is the symmetric group on $n$ elements. \newline
The symmetric polynomials $\left\{e_i(\underline{X}) : i \in \{0, 1, \dots, n\}\right\}$ defined by
\begin{gather*}
e_0(\underline{X}) = 1,\\
e_k(\underline{X}) = \sum_{1 \leq i_1 < \dots < i_k \leq n} X_{i_1}X_{i_2}\dots X_{i_k}
\end{gather*}
are the \emph{elementary symmetric polynomials}.
\end{defn} It is well known that the symmetric polynomials in $n$ variables form a subring $K[\underline{X}]^{S_n}$ and if $P(\underline{X})$ is symmetric then there exists $Q(\underline{Y}) \in K[\underline{Y}]$ such that $P(\underline{X}) = Q(e_1(\underline{X}), \dots, e_n(\underline{X}))$, i.e. the elementary symmetric polynomials ``generate'' all symmetric polynomials. It is easy to prove that if $f(X) \in K[X]$ is a monic polynomial of degree $n$ (here we add the hypothesis that $K$ is an integral domain, i.e. there are no divisors of zero) and all its roots are $\alpha_1, \dots, \alpha_n$ then \[ f(X) = \prod_{j=1}^n \left(X - \alpha_j\right) = \sum_{j=0}^n (-1)^{n-j} \cdot e_{n-j}(\alpha_1, \dots, \alpha_n) \cdot X^j, \] which is a precise relation between the coefficients of $f$ and its roots. Finally we recall that if $f(X) = 1 + \sum_{i=1}^n a_iX^i \in K[X]$ has degree $n$ (here $K$ is a field) and $\alpha_1, \dots, \alpha_n$ are all of its roots, we can write \[ f(X) = \prod_{j=1}^n \left(1 - \frac{X}{\alpha_j}\right) = \sum_{j=0}^n (-1)^j \cdot e_j\left(\frac{1}{\alpha_1}, \dots, \frac{1}{\alpha_n}\right) \cdot X^j; \] in-fact $f(0) = 1$ and we can divide by $1 = (-1)^na_n\alpha_1\dots\alpha_n$ both sides of $f(X) = a_n(X - \alpha_1)\dots(X - \alpha_n)$.\newline We are ready to state and prove the following. \begin{thm} \label{thm:newton-polygon-polinomial-zeroes} Let $f(X) = 1 + \sum_{i=1}^n a_iX^i \in 1 + X\Cp[X]$ be a polynomial of degree $n$, let $\alpha_1, \dots, \alpha_n \in \Cp$ be all of its roots and $\lambda_i := \mathrm{ord}_p\,\left(1/\alpha_i\right)$. If $\lambda$ is a slope of the Newton polygon of $f$ with length $l$, it follows that precisely $l$ of the $\lambda_i$ are equal to $\lambda$. Vice-versa, if $\gamma$ is a \padic order of a reciprocal root then there is a segment of the Newton polygon with slope $\gamma$. \end{thm} \begin{proof} The last statement is trivial if we prove the first one: in-fact the total length of the Newton polygon is $n$ so we have already considered all the roots (counting multiplicity).\newline Let's suppose the $\alpha_i$ arranged so that $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n$. Let's suppose that $\lambda_1 = \lambda_2 = \dots = \lambda_r < \lambda_{r+1}$. We then claim that the first segment of the Newton polygon is the one joining $(0,0)$ to $(r, r\lambda_1)$. We know that $a_i = (-1)^i \cdot e_i\left(1/\alpha_1, \dots, 1/\alpha_n\right)$ and, recalling how the $i$-th elementary symmetric polynomial is defined (sum of all possible products of $i$ different variables) and that $\ord(x + y) \geq \min\{\ord(x),\ord(y)\}$, we obtain \[ \ord(a_i) \geq i\lambda_1, \] which means that the point $(i, \ord(a_i))$ is on or above the line joining $(0,0)$ to $(r, r\lambda_1)$. Let's now consider $a_r$: only one of the products of $r$ of the $1/\alpha_i$ has \padic order $r\lambda_1$ and it is exactly $1/(\alpha_1 \dots \alpha_r)$, while all the other products have bigger \padic order since they must include at least one $1/\alpha_i$ with $i > r$. Then, by the isosceles triangle principle, $\ord(a_r) = r\lambda_1$. Finally, let's consider $a_i$ with $i > r$: for the same reasoning as before we have $\ord(a_i) > i\lambda_1$. \newline All these considerations means exactly that the first segment of the Newton polygon is the one joining $(0,0)$ and $(r, r\lambda_1) = (r, \lambda_1 + \dots + \lambda_r)$. 
Now, if we have $\lambda_s < \lambda_{s+1} = \dots = \lambda_{s+t} < \lambda_{s+t+1}$ the line joining $(s, \lambda_1 + \dots + \lambda_s)$ to $(s+t, \lambda_1 + \dots + \lambda_s + t\lambda_{s+1})$ is a segment of the Newton polygon. The proof is very similar: if $s \leq i$ then $\ord(a_i) \geq \lambda_1 + \dots + \lambda_s + (i-s)\lambda_{s+1}$, since this is the minimum \padic order in $e_i\left(1/\alpha_1, \dots, 1/\alpha_n\right)$, reached for example by $1/(\alpha_1\dots\alpha_i)$, $\ord(a_{s+t}) = \lambda_1 + \dots + \lambda_s + t\lambda_{s+1}$ by the isosceles triangle principle and if $i > s+t$ then $\ord(a_i) > \lambda_1 + \dots +\lambda_s + (i - s)\lambda_{s+1}$ since we have to choose at least one $1/\alpha_j$ with $j > s+t$. \end{proof} This theorem, in other words, says that the slopes of the Newton polygon of $f(X)$ are counting with multiplicity the \padic orders of the reciprocal roots of $f(X)$. The aim of the rest of this chapter will be to extend this result to formal power series, but we'll need to do a little more work before. \section{Newton polygons for power series} The definition of the Newton polygon for $f(X) \in 1 + X\Cp\ser{X}$ is the same of \cref{defn:newton-polygon-polynomials}: it is the inferior convex hull of all the points in $\Gamma$ (which, this time, will be infinite). Sometimes we'll denote the Newton polygon of $f(X)$ by $\mathfrak{N}(f)$. From now on we'll only consider proper power series, i.e. we'll exclude the case in which $f(X)$ is a polynomial. We can distinguish three different kinds on Newton polygon. \begin{enumerate}[label=(\arabic*)] \label{enumerate:newton-polygon-types} \item We get infinitely many segments of finite length, for example the Newton polygon $f(X) = 1 + \sum_{i=1}^{+\infty} p^{i^2}X^i$ shown in \cref{figure:figure4.2}. \item At some point the line we're rotating simultaneously hits infinite points. In this case the Newton polygon has only a finite number of segments, the last one being infinitely long. An example is $f(X) = 1 + \sum_{i=1}^{+\infty} X^i$, whose Newton polygon is simply the horizontal axis. \item At some point the line we're rotating has not hit any point yet but it cannot rotate any farther without passing above some points. If this happens, we let the last segment of the Newton polygon have slope equal to the least upper bound of all possible slopes for which the line passes below all the points. A simple example is given by $f(X) = 1 + \sum_{i=1}^{+\infty} pX^i$, whose Newton polygon is the horizontal axis as shown in \cref{figure:figure4.3}. \end{enumerate} There is a degenerate case of type $(3)$: the vertical line through $(0,0)$ cannot be rotated at all without crossing above some points $(i, \ord a_i)$. An example of this possibility is given by $f(X) = \sum_{i=0}^{+\infty} \tfrac{X^i}{p^{i^2}}$, whose Newton polygon is shown in \cref{figure:figure4.3.1}. 
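It may be worth pausing to note that, in the polynomial case, \cref{defn:newton-polygon-polynomials} is completely algorithmic: the Newton polygon is just the inferior (lower) convex hull of the finitely many points $(i, \ord a_i)$, so both the polygon itself and the prediction of \cref{thm:newton-polygon-polinomial-zeroes} about the orders of the roots are easy to check by computer. The following minimal sketch is purely illustrative and not part of the theory developed here: it is written in Python, the helper names \texttt{padic\_ord} and \texttt{newton\_polygon} are ours, and exact rational arithmetic is used so that the \padic orders of the coefficients are computed exactly.

\begin{lstlisting}[language=Python]
from fractions import Fraction

def padic_ord(x, p):
    """p-adic order of a nonzero rational number x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def newton_polygon(coeffs, p):
    """Vertices of the Newton polygon of f(X) = sum_i coeffs[i]*X^i,
    i.e. the lower convex hull of the points (i, ord_p a_i)."""
    pts = [(i, padic_ord(c, p)) for i, c in enumerate(coeffs) if c != 0]
    hull = []
    for q in pts:                  # points come sorted by the exponent i
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # if a lies on or above the segment from o to q, it is not a vertex
            if (a[0] - o[0]) * (q[1] - o[1]) - (a[1] - o[1]) * (q[0] - o[0]) <= 0:
                hull.pop()
            else:
                break
        hull.append(q)
    return hull

# Polynomial of the figure: f(X) = 1 + X^2 + (1/3)X^3 + 3X^4 + 54X^5 over Q_3.
coeffs = [1, 0, 1, Fraction(1, 3), 3, 54]
print(newton_polygon(coeffs, 3))   # expected output: [(0, 0), (3, -1), (5, 3)]
\end{lstlisting}

For the polynomial of \cref{figure:figure4.1} this should return the vertices $(0,0)$, $(3,-1)$ and $(5,3)$, i.e. a segment of slope $-1/3$ and length $3$ followed by a segment of slope $2$ and length $2$; by \cref{thm:newton-polygon-polinomial-zeroes} we therefore expect three roots of \padic order $1/3$ and two roots of \padic order $-2$.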
\begin{figure} \centering \subfloat[][Newton polygon of type 1 \label{figure:figure4.2}]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_2}} \qquad \qquad \subfloat[][Newton polygon of type 3 \label{figure:figure4.3}]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_3}} \\ \subfloat[][Degenerate Newton polygon \label{figure:figure4.3.1}]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_3_1}} \qquad \qquad \subfloat[][Newton polygon of $f(X)$ \label{figure:figure4.4}]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_4}} \caption{Various Newton polygons} \end{figure} We'll exclude this case from our study since, as we'll prove in the next proposition, all such series have zero radius of convergence. \begin{prop} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ be a power series whose Newton polygon is a degenerate case of type $(3)$, i.e. \[ \forall m \in \R \quad \exists i_m \in \N : \mathrm{ord}_p\, a_{i_m} < m \cdot i_m. \] Then the radius of convergence of $f$ is $0$. \end{prop} \begin{proof} We just need to prove that $\limsup\, \pabs{a_n}^{1/n} = +\infty$. Let's define a subsequence of the coefficients $(a_{n_k})_{k \geq 1}$ by induction. We set $n_1 = i_{-1}$ so that $(n_1, \ord a_{n_1})$ lies below the line $y = -x$. Let's now consider the lines $\ell_1$, joining $(0, 0)$ to $(n_1, \ord a_{n_1})$, and $\ell_2$, with equation $y = -2x$: by hypothesis there must be an infinite number of points $(i, \ord a_i)$ lying below both of these two lines. Then there is at least one such point $(j, \ord a_j)$ with $j > n_1$ and we set $n_2 := j > n_1$. We can iterate this procedure (every time we choose $n_k > n_{k-1}$ such that $(n_k, \ord a_{n_k})$ lies below both $y = -kx$ and the line joining $(0,0)$ to $(n_{k-1}, \ord a_{n_{k-1}})$). We have obtained an increasing sequence $(n_k)_{k \geq 1} \subseteq \N$ such that \[ \ord a_{n_k} < -k \cdot n_k \implies \pabs{a_{n_k}}^{1/n_k} > p^k. \] Using this subsequence we can conclude. \end{proof} From now on we'll always consider analytic functions with a non-trivial disc of convergence. Before proving general properties of the Newton polygon of analytic functions, let's consider a concrete example. \begin{example} Let's consider the function $f$ defined by \[ f(X) = \sum_{n=0}^{+\infty} \frac{X^n}{n+1} = \frac{1}{X} \cdot \sum_{n=0}^{+\infty} \frac{X^{n+1}}{n+1} = -\frac{1}{X} \cdot \log_p(1 - X). \] Looking at the right member it's immediate to see that $f$ converges in $D(1^-)$. If we denote $\ell_i$ the segment joining $\left(p^i-1, -i\right)$ to $\left(p^{i+1}-1, -i-1\right)$ then it's easy to see that the Newton polygon of $f$ is the polygonal line $\bigcup_{i \in \N} \ell_i$ shown in \cref{figure:figure4.4} for $p=3$. Assuming that the power series analogue of \cref{thm:newton-polygon-polinomial-zeroes} holds, then, by looking at the Newton polygon of $f$, we would expect to find exactly $p^{i+1} - p^i$ roots having \padic order $1/\left(p^{i+1} - p^i\right)$ for every $i \in \N$ and no other roots. \newline Let's prove this claim: let's fix $j \in \N$ and consider $x = 1 - \zeta$, where $\zeta \in \Cp$ is a primitive $p^{j+1}$-th root of $1$. Then we know by \cref{exercise:7-p.74} that $\ord x = 1/\left(p^{j+1} - p^j\right)$ and that $\log_p(1 - x) = 0$ by \cref{corollary:log-root-of-1} so $f(x) = 0$. Since there are exactly $p^{i+1} - p^i$ primitive roots of $1$, we have found all the predicted roots. Let's now prove that there are no other roots of $f$, i.e. 
any root is of the form $1 - \xi$ where $\xi$ is a primitive $p^k$-th root of $1$. Let $x \in D(1^-)$ be a root of $f$ and let \[ x_j := 1 - (1 - x)^{p^j} \] for any $j \in \N$. Using Newton's binomial expansion we get \[ \pabs{x_j} = \pabs{1 - (1 - x)^{p^j}} = \pabs{\sum_{i=1}^{p^j} \binom{p^j}{i} (-x)^i} \leq \pabs{x} < 1, \] which implies $x_j \in D(1^-)$ for every $j$. We claim that for any $M > 0$ we can find $j_m \in \N$ such that $\pabs{x_{j_m}}< M$. Fixed $M > 0$ we just need to find a $j$ such that \[ \max_{1 \leq i \leq p^j} \pabs{\binom{p^j}{i}x^i} < M. \] Since $\pabs{x} < 1$ we can find $N \in \N$ such that if $n > N$ then $\pabs{\binom{p^j}{n}x^n} < M$. Now we just need to find a $j$ such that \[ \max_{1 \leq i \leq N} \pabs{\binom{p^j}{i}x^i} < M. \] Writing $m := \max_{1 \leq i \leq N} (1/\pabs{i!}) > 0$ we have that \[ \pabs{\binom{p^j}{i}} \leq \pabs{\frac{p^j}{i!}} \leq \pabs{p^j} \cdot m \] and we can conclude, since $\pabs{p^j} \to 0$ as $j \to +\infty$. Now let's consider $j \in \N$ such that $x_j \in D(r_p^-)$; thanks to \cref{prop:exp-and-log-inverse} we have \[ 1 - x_j = \exp_p(\log_p(1 - x_j)) =\exp_p\left(p^j\cdot \log_p(1 - x)\right) = \exp_p(0) = 1 \] hence $(1 - x)^{p^j} = 1$ so that $x = 1 - \zeta$ where $\zeta$ is a $p^j$-th root of $1$ and it's one of the roots we already considered. \newline We have proved that, for this particular $f(X)$, the power series analogue of \cref{thm:newton-polygon-polinomial-zeroes} holds. \end{example} Let's now prove a simple but interesting result which explains how we can find the radius of convergence of a series just by looking at its Newton polygon. \begin{prop} \label{prop:newton-polygon-radius-convergence} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ and let $b$ be the least upper bound of all slopes of the Newton polygon of $f$. Then the radius of convergence of $f(X)$ is $p^b$ (if $b=+\infty$ then $f$ converges everywhere). \end{prop} \begin{proof} Let's fix $x \in \Cp$ with $\pabs{x} < p^b$, i.e. $-b' := \ord x > -b$. Then $\ord(a_ix^i) = \ord a_i -ib'$ but, since $b' < b$, it's clear that sufficiently far out all the points $(i, \ord a_i)$ will lie arbitrarily far above $(i, b'i)$, see \cref{figure:figure4.5}. This means exactly $\lim_{i\to +\infty} \ord(a_ix^i) = +\infty$, i.e. $f(X)$ converges at $x$. \begin{figure} \centering \includegraphics[scale=1.5]{/home/carlo/Tesi/images/figure_4_5} \caption{Case $\protect\pabs{x} < p^b$} % Workaround, it works \label{figure:figure4.5} \end{figure} Let's now consider the case $\pabs{x} > p^b$, i.e. $-b' := \ord x < -b$. Since $b' > b$ we find an infinite number of $i \in \N$ such that $\ord(a_ix^i) = \ord a_i - ib' < 0$ which implies that $f(X)$ does not converge at $x$. We can then conclude that the radius of convergence of $f$ is exactly $p^b$. \end{proof} Obviously this proposition doesn't tell us anything about the convergence of $f(X)$ at the radius of convergence, i.e. if $\pabs{x} = p^b$. \begin{prop} \label{prop:newton-polygon-circonference-convergence} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ be an analytic power series with radius of convergence $r=p^b$, where $b$ is the least upper bound of the slopes of the Newton polygon. Then $f(X)$ converges on $D(r)$ if and only if $\mathfrak{N}(f)$ is of type $(3)$ (see the beginning of \cref{enumerate:newton-polygon-types}) and $\lim_{i \to +\infty} d_i = +\infty$, where $d_i$ is the distance between $(i, \mathrm{ord}_p\, a_i)$ and the last line of $\mathfrak{N}(f)$. 
\end{prop} \begin{proof} If $b \notin \Q$ there's nothing to prove since no element of $\Cp$ can have order $b$; from now on we'll assume $b \in \Q$. First of all we prove that if the Newton polygon of $f$ is of type $(1)$ or $(2)$ then $f(X)$ does not converge if $\pabs{x} = p^b$. \newline Let's first consider a Newton polygon of type $(1)$ and let $\Lambda$ be the set of all its slopes. Then $b = \sup \Lambda$ and if $b = +\infty$ there's nothing to prove. If $b < +\infty$ then there exists $y_0 \in \R$ such that $\ell\colon y = y_0 + bx$ is an ``asymptote'' of the Newton polygon, see \cref{figure:figure-extra-1} (the slopes are increasing and their $\sup$/$\lim$ is $b$). Then we can consider the vertices of the Newton polygon, indexed by $\left(i_j\right)_{j \in \N}$. It is clear that the distance $d_j$ between $\left(i_j, \ord a_{i_j}\right)$ and $\ell$ tends to $0$ and so does $\left(\ord a_{i_j} - i_jb\right)$, which is equal to $d_j/\cos(\arctan b)$ (if $b=0$ then it is equal to $d_j$). If $\pabs{x} = p^b$ then $\ord x = -b$ so $\ord(a_ix^i) = \ord a_i - ib$. We then conclude that $\ord(a_ix^i) \not\to +\infty$ when $i \to +\infty$, i.e. $f$ does not converge at $x$. Instead if $f$ has a Newton polygon of type $(2)$ then $b$ is its final slope and, by definition, there are infinite points on this final segment. This means that if we call the final line $\ell\colon y_0 + bx$ then we can find an increasing subsequence $\left(i_j\right)_{j \in \N} \subseteq \N$ such that $\ord a_{i_j} = y_0 + i_jb$ so $\ord\left(a_{i_j}x^{i_j}\right) = y_0 \not\to +\infty$ and we can conclude that there's no convergence in $x$.\newline Let's now suppose that $\mathfrak{N}(f)$ is of type $(3)$ and $x \in \Cp$ with $\pabs{x} = p^b$. Then $f(X)$ converges in $x$ if and only if $\lim_{i \to +\infty} \ord\left(a_ix^i\right) = +\infty$; as before, with a little trigonometry, we have \begin{gather*} \ord\left(a_ix^i\right) = \ord a_i - ib = \begin{cases} d_i, & \text{if $b=0$;} \\ \frac{d_i}{\cos(\arctan b)}, & \text{otherwise;} \end{cases} \end{gather*} and we can conclude (by hypothesis $\lim_{i \to +\infty} d_i = +\infty$). An example is $f(X) = 1 + \sum_{i=1}^{+\infty} 2^iX^{2^i} \in 1 + X\C_2\ser{X}$, whose Newton polygon is shown in \cref{figure:figure4.5.1}. \begin{figure} \centering \subfloat[][Newton polygon with an asymptote \label{figure:figure-extra-1}]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_extra_1}} \qquad \subfloat[][Newton polygon with convergence at border \label{figure:figure4.5.1}]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_5_1}} \caption{Two other types of Newton polygons} \end{figure} \end{proof} Let's introduce a useful trick we'll often use in the next proofs. \begin{lemma} \label{lemma:newton-polygon-translation} Let $c \in \Cp^\times$ with $\mathrm{ord}_p\, c = \lambda$, $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ and $g(X) := f\left(X/c\right)$. Then the Newton polygon of $g$ is obtained subtracting the line $y = \lambda x$ to the Newton polygon of $f$. \end{lemma} \begin{proof} If we write $g(X) = 1 + \sum_{i=1}^{+\infty} b_iX^i$ then it's immediate that $b_i = a_i/\left(c^i\right)$ so $\ord b_i = \ord a_i - i\lambda$ and we can conclude. \end{proof} We'll now prove four technical lemmas we'll then use to prove our final result. \begin{lemma} \label{lemma:lemma6-p.102} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ and suppose that $\lambda_1$ is the first slope of its Newton polygon. 
Let $c \in \Cp$ with $\mathrm{ord}_p\,c = \lambda \leq \lambda_1$ and assume that $f(X)$ converges on the closed disc $D(p^{\lambda})$ (this automatically happens if $\lambda < \lambda_1$ or if the Newton polygon has more than one segment). Let \[ g(X) = (1 - cX)f(X) \in 1 + X\Cp\ser{X}. \] Then $\mathfrak{N}(g)$ is obtained by joining $(0,0)$ to $(1, \lambda)$ and then translating $\mathfrak{N}(f)$ by $\vec{v} = (1, \lambda)$ ($1$ to the right and $\lambda$ upwards). If $\mathfrak{N}(f)$ has last slope $\lambda_f$ and $f(X)$ converges on $D(p^{\lambda_f})$ then $g(X)$ also converges on $D(p^{\lambda_f})$. Conversely, if $g(X)$ converges on $D(p^{\lambda_f})$ then so does $f(X)$. \end{lemma} \begin{proof} A graphic interpretation of the lemma can be found at \cref{figure:figure4.6}. \begin{figure} \centering \subfloat[][Newton polygon of $f_1(X)$]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_6_1}} \qquad \qquad \subfloat[][Newton polygon of $g_1(X)$]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_6_2}} \\ \subfloat[][Newton polygon of $f(X)$]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_6_3}} \qquad \qquad \subfloat[][Newton polygon of $g(X)$]{\includegraphics[scale=1.25]{/home/carlo/Tesi/images/figure_4_6_4}} \caption{Example of \protect\cref{lemma:lemma6-p.102}} \label{figure:figure4.6} \end{figure} We can consider only the special case $c=1, \lambda = 0$. In-fact, let's suppose the lemma holds for this case and let $f(X)$ and $g(X)$ as in the statement. Then $f_1(X) := f\left(\tfrac{X}{c}\right)$ and $g_1(X) := (1 - X)f_1(X)$ satisfy our hypothesis (with the parameters $\underline{c}=1, \underline{\lambda} = 0, \underline{\lambda_1} = \lambda_1 - \lambda$, by \cref{lemma:newton-polygon-translation}). Thus, since we're assuming the lemma to be true if $c=1$, we know the shape of the Newton polygon of $g_1(X)$ (and the convergence of $g_1(X)$ on $D(p^{\lambda_f - \lambda})$ when $f$ converges on $D(p^{\lambda})$). Now, $g(X) = g_1(cX)$ so, using again \cref{lemma:newton-polygon-translation}, we obtain the desired information about the Newton polygon of $g(X)$ (and the desired convergence, which is immediate). So we can just prove the lemma when $c = 1$.\newline If $g(X) = 1 + \sum_{i=1}^{+\infty} b_iX^i$ then, since by definition $g(X) = (1 - X)f(X)$, we have $b_{i+1} = a_{i+1} - a_i$ for $i \geq 0$ (clearly $a_0 = 1$). Then \begin{equation*} \ord b_{i+1} \geq \min\left\{\ord a_{i+1}, \ord a_i \right\} \tag{$\star$} \end{equation*} and the equality holds when $\ord a_{i+1} \neq \ord a_i$. It is easy to see that both $(i, \ord a_i)$ and $(i, \ord a_{i+1})$ lie on or above the Newton polygon of $f(X)$ and so does $(i, \ord b_{i+1})$, by $(\star)$. If $(i, \ord a_i)$ is a vertex then necessarily $\ord a_{i+1} > \ord a_i$ so $\ord b_{i+1} = \ord a_i$. This means exactly that the Newton polygon of $g(X)$ has the shape described in the lemma, as far as the last vertex of $f(X)$. If $\mathfrak{N}(f)$ is of type $(1)$ we can conclude here: there is no last vertex and no last slope. It remains only to show that when $\mathfrak{N}(f)$ has last slope $\lambda_f$ then also $\mathfrak{N}(g)$ does and if $f(X)$ converges on $D(p^{\lambda_f})$ then so does $g(X)$. We already know $\ord b_{i+1} \geq \min\left\{\ord a_{i+1}, \ord a_i \right\}$ so $g(X)$ converges wherever $f(X)$ does; then if $\lambda_g$ is the least upper bound of the slopes of $\mathfrak{N}(g)$ we have $\lambda_g \geq \lambda_f$ (by \cref{prop:newton-polygon-radius-convergence}). 
We must only rule out the case $\lambda_g > \lambda_f$. If it were the case, then, for some large $i$, the point $(i+1, \ord a_i)$ would lie below $\mathfrak{N}(g)$ so we'd have $\ord b_j > \ord a_i$ for every $j \geq i+1$ (this holds in this particular case where $\lambda = 0$ since $0 \leq \lambda_1 \leq \lambda_f < \lambda_g$). Using $j = i+1$ we obtain $\ord a_{i+1} = \ord a_i$ because $a_{i+1} = b_{i+1} + a_i$. Then, using $j = i+2$, we obtain $\ord a_{i+2} = \ord a_{i+1} = \ord a_i$ and so on for every $j$. This means $\ord a_j = \ord a_i$ for every $j \geq i$ and contradicts the assumed convergence of $f(X)$ on $D(1) \subseteq D(p^{\lambda_f})$. Then we must have $\lambda_g = \lambda_f$ and $\mathfrak{N}(g)$ is exactly of the predicted shape. This implies in particular that if $f(X)$ converges on $D(p^{\lambda_f})$ then so does $g(X)$ (see \cref{prop:newton-polygon-circonference-convergence}). The converse assertion, i.e. convergence of $g(X)$ implies convergence of $f(X)$, can be proved in an analogue way. \end{proof} \begin{lemma} \label{lemma:lemma7-p.103} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ have Newton polygon with first slope $\lambda_1$. Let's assume that $f(X)$ converges on $D\left(p^{\lambda_1}\right)$ and that the line $\ell\colon y = \lambda_1x$ actually passes through a point $(i, \mathrm{ord}_p\, a_i)$ with $i \geq 1$ (both of these conditions are automatically satisfied if $\mathfrak{N}(f)$ has more than one slope). Then there exists an $x \in \Cp$ for which $\mathrm{ord}_p\, x = -\lambda_1$ and $f(x) = 0$. \end{lemma} \begin{proof} Let's first consider the case $\lambda_1 = 0$ and then reduce the general case to this one. If $\lambda_1 = 0$ we have $\ord a_i \geq 0$ for every $i \in \N$ and $\lim_{i \to +\infty} \ord a_i = +\infty$ since $f(X)$ converges on $D(1)$. Let $N := \max \left\{i \in \N^\times : \ord a_i=0 \right\}$ and let $f_n(X) := 1 + \sum_{i=1}^n a_iX^i \in 1 + X\Cp[X]$. By \cref{thm:newton-polygon-polinomial-zeroes}, if $n \geq N$ then the polynomial $f_n(X)$ has precisely $N$ roots with \padic order $0$, let them be $x_{n, 1}, \dots, x_{n, N}$ (it's immediate that $\mathfrak{N}(f_n)$ has a first segment with slope $0$ and length $N$). Let's define a sequence: $x_N := x_{N, 1}$ and, for $n \geq N$, $x_{n+1} := x_{n+1, i}$ where $i$ is such that $\pabs{x_{n+1, i} - x_n}$ is minimal. We claim that $\left(x_n\right)_{n \geq N} \subseteq \Cp$ is Cauchy and its limit $x$ is the desired root of $f$. If $S_n$ denotes the set containing the roots of $f_n(X)$, counted with multiplicity, for $n \geq N$ we have \[ \pabs{f_{n+1}(x_n) - f_n(x_n)} = \pabs{f_{n+1}(x_n)} = \prod_{\alpha \in S_{n+1}} \pabs{1 - \frac{x_n}{\alpha}} \] where we used $f_n(x_n) = 0$ and $f_{n+1}(X) = \prod_{\alpha \in S_{n+1}} \left(1 - \tfrac{X}{\alpha}\right)$. It's clear that if $\alpha \in S_{n+1}$ then $\ord \alpha \leq 0$: in-fact we cannot have $\ord \alpha > 0$ and $f_{n+1}(\alpha) = 0$ by the isosceles triangle principle (recall that $\ord a_i \geq 0$). Now if $\alpha \in S_{n+1}$ has $\ord \alpha < 0$ then $\pabs{1 - \tfrac{x_n}{\alpha}} = 1$, since $\pabs{x_n} = 1$. Then we can write \[ \pabs{f_{n+1}(x_n) - f_n(x_n)} = \prod_{i=1}^N \pabs{1 - \frac{x_n}{x_{n+1, i}}} = \prod_{i=1}^N \pabs{x_{n+1, i} - x_n} \geq \pabs{x_{n+1} - x_n}^N, \] by the choice of $x_{n+1}$. 
We have obtained \[ \pabs{x_{n+1} - x_n}^N \leq \pabs{f_{n+1}(x_n) - f_n(x_n)} = \pabs{a_{n+1}x_n^{n+1}} = \pabs{a_{n+1}} \] so $\lim_{n \to +\infty} \pabs{x_{n+1} - x_n}^N = 0$ (by hypothesis $\lim_{n \to +\infty} \pabs{a_{n+1}} = 0$) and we have proved that $\left(x_n\right)_{n \geq N}$ is Cauchy (see \cref{lemma:cauchy-sequence-ultrametric}). Since $\Cp$ is complete there exists $x := \lim_{n \to +\infty} x_n$ and, by continuity of $\pabs{\ }$, we have $\pabs{x} = 1$. It's clear that for any $y \in D(1)$ we have $\lim_{n \to +\infty} f_n(y) = f(y)$ (the \padic absolute value of the difference tends to zero) so we have $f(x) = \lim_{n \to +\infty} f_n(x)$. Now, \[ \pabs{f_n(x)} = \pabs{f_n(x) - f_n(x_n)} = \pabs{x - x_n}\cdot\pabs{\sum_{i=1}^n a_i\frac{x^i - x_n^i}{x - x_n}} \leq \pabs{x - x_n} \] because $\pabs{a_i} \leq 1$ and $\pabs{\tfrac{x^i - x_n^i}{x - x_n}} = \pabs{x^{i-1} + x^{i-2}x_n + \dots + x_n^{i-1}} \leq 1$. Hence we can conclude that $f(x) = \lim_{n \to +\infty} f_n(x) = 0$ and we have proved the lemma if $\lambda_1 = 0$.\newline The general case follows easily. Let $\pi \in \Cp$ be any number with $\ord \pi = \lambda_1$. Clearly such a $\pi$ exists: for example, if $(i, \ord a_i)$ lies on $y=\lambda_1x$ and $i \geq 1$ (such a point exists by assumption) then $\pi$ can be any $i$-th root of $a_i$ (recall that $\Cp$ is algebraically closed). Now let $g(X) := f\left(X/\pi\right)$; it's clear by \cref{lemma:newton-polygon-translation} that $g(X)$ satisfies the conditions of the lemma with $\lambda_1 = 0$. Then we already know that there exists $x_0$ with $\ord x_0 = 0$ such that $g(x_0) = 0$. Then if $x = x_0/\pi$ we have $\ord x = -\lambda_1$ and $f(x) = f\left(x_0/\pi\right) = g(x_0) = 0$. \end{proof} \begin{lemma} \label{lemma:lemma8-p.105} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ and let $\alpha \in \Cp$ such that $f(\alpha) = 0$. Let $g(X)$ be obtained by dividing $f(X)$ by $1 - \tfrac{X}{\alpha}$. Then $g(X)$ converges on $D(\pabs{\alpha})$. \end{lemma} \begin{proof} First of all, let's observe that $\alpha \neq 0$ and that dividing $f(X)$ by $1 - \tfrac{X}{\alpha}$ is the same thing of multiplying $f(X)$ by the geometric series $\sum_{i=0}^{+\infty} \left(\tfrac{X}{\alpha}\right)^i$. Let's write $g(X) = 1 + \sum_{i=1}^{+\infty} b_iX^i$ and let $f_n(X) := 1 + \sum_{i=1}^n a_iX^i$ be the $n$-th partial sum of $f(X)$. By an easy computation we infer that \[ b_i = \sum_{j=0}^i \frac{a_j}{\alpha^j} \] where we set $a_0 = 1$. Then it's easy to see that \[ b_i\alpha^i = f_i(\alpha) \] hence $\pabs{b_i\alpha^i} = \pabs{f_i(\alpha)} \to 0$ as $i \to +\infty$, since $f(\alpha) = 0$ and $f(x) = \lim_{n \to +\infty}f_n(x)$ wherever $f$ converges. This means exactly that $g(X)$ converges on $D(\pabs{\alpha})$. \end{proof} \begin{lemma} \label{lemma:order-zeroes-function} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ such that $\lambda$ is the first slope of $\mathfrak{N}(f)$ and $f$ converges on some disc $D$. If $\alpha \in D$ is a root of $f$, i.e. $f(\alpha) = 0$, then $\mathrm{ord}_p\, \alpha \leq -\lambda$. If $\lambda$ is the only slope of $\newt{f}$ and no point of $\newt{f}$ lies on $y=\lambda x$, then $\mathrm{ord}_p\,\alpha < -\lambda$. \end{lemma} \begin{proof} Let's suppose that $\alpha \in D$ is such that $\ord \alpha = -\lambda' > -\lambda$. 
We have \[ \ord(a_i\alpha^i) = \ord a_i -i\lambda' > \ord a_i -i\lambda \geq 0, \] where we used that all the points $(i, \ord a_i)$ lie on or above the line $y=\lambda x$ (by definition of Newton polygon). Then we have $\ord 1 = 0$ and $\ord(a_i\alpha^i) > 0$ for $i \geq 1$ and so $\alpha$ cannot be a root of $f$. The last statement can be proved with an analogue reasoning. \end{proof} Finally we are ready to prove the main theorem of this section which will imply, as a corollary, the power series analogue of \cref{thm:newton-polygon-polinomial-zeroes}. \begin{thm}[\padic Weierstrass Preparation Theorem] \label{thm:weierstrass-padic-preparation} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ converge on $D(p^{\lambda})$. Let $N$ be the total horizontal length of all segments in $\mathfrak{N}(f)$ having slope less or equal to $\lambda$ if this length is finite $($i.e. if $\mathfrak{N}(f)$ hasn't an infinitely long last segment of slope $\lambda)$. On the other hand, if the Newton polygon of $f$ has last slope $\lambda$, then let $N$ be the greatest index $i$ such that $(i, \ord a_i)$ lies on that final segment $($there must be such a final index since $f$ converges on $D(p^{\lambda}))$. Then there exists a polynomial $h(X) \in 1 + X\Cp[X]$ of degree $N$ and a power series $g(X) = 1 + \sum_{i=1}^{+\infty} b_iX^i$, which converges and is non-zero on $D(p^{\lambda})$, such that \[ h(X) = f(X) \cdot g(X). \] The polynomial $h(X)$ is uniquely determined by these properties and $\mathfrak{N}(h)$ coincides with $\mathfrak{N}(f)$ up to $x = N$. \end{thm} \begin{proof} We use induction on $N$. Let's first consider the basic case $N = 0$, where the first slope of $\mathfrak{N}(f)$ is greater or equal to $\lambda$. In this case it's evident that we can assume $\lambda \in \Q$ without loss of generality. We have to show that $g(X) = 1/f(X)$ converges and is non-zero on $D(p^{\lambda})$ (recall that any power series with a non-zero constant term is invertible). We can only consider the special case $\lambda = 0$. In-fact, let $f(X) \in 1 + X\Cp\ser{X}$ converge on $D(p^{\lambda})$: we can choose $c \in \Cp$ with $\ord c = \lambda$ using \cref{prop:qpa-every-order} (we assumed $\lambda \in \Q$) and then define $\tilde{f}(X) := f\left(\tfrac{X}{c}\right)$. Now, $\tilde{f}$ converges on $D(1)$ and if $\lambda = 0$ then $N = 0$, i.e. the first slope of its Newton polygon is greater or equal to $0$ by \cref{lemma:newton-polygon-translation}. So, assuming the theorem holds when $N = \lambda = 0$ we infer that there exists $\tilde{g}(X) \in 1 + X\Cp\ser{X}$ which converges and is non-zero on $D(1)$ such that $1 = \tilde{f}(X)\cdot\tilde{g}(X)$. Using $cX$ in place of $X$ we obtain $1 =f(X) \cdot \tilde{g}(cX)$ and it's immediate that $g(X) := \tilde{g}(cX)$ has all the desired properties. So we can only consider the special case $\lambda = 0$. Thus, we can suppose $\ord a_i > 0$ for every $i \in \N$ and $\lim_{i \to +\infty} \ord a_i = +\infty$ (we have convergence on $D(1)$). It's easy to obtain the following equality for the coefficients of $g(X) = 1/f(X)$: \[ b_i = -\left(\sum_{j=1}^i b_{i-j}a_j \right), \] where we set $b_0 = 1$. From an easy induction on $i$ it follows that $\ord b_i > 0$ for $i \geq 1$. This implies that the first slope of $\mathfrak{N}(g)$ is greater than $0$ (or it's equal to $0$ but with no points on it) and, by \cref{lemma:order-zeroes-function}, we know that $g$ doesn't have roots on $D(1)$. 
Now it remains only to show that $g(X)$ actually converges on $D(1)$, i.e. that $\lim_{i \to +\infty} \ord b_i = +\infty$. Let's fix $M > 0$: we can find $m \in \N$ such that $i > m$ implies $\ord a_i > M$. Now if \[ \epsilon := \min_{1 \leq j \leq m} \ord a_j> 0 \] we claim that $i > nm$ implies $\ord b_i > \min\{M, n\epsilon\}$, from which it easily follows $\ord b_i \to +\infty$ as $i \to +\infty$. We'll prove this claim by induction on $n$. We have already proved the case $n = 0$. Now, let's suppose $n \geq 1$ and that the claim holds for $n - 1$; if $i > nm$ we have \[ b_i = -\left(b_{i-1}a_i + \dots + b_{i-m}a_m + b_{i-(m+1)}a_{m+1} + \dots + a_1 \right). \] The terms $b_{i-j}a_j$ with $j > m$ have \padic order greater than $M$, while if $j \geq m$ we have $\ord(b_{i-j}a_j) \geq \ord b_{i-j} + \epsilon$ and, since $i - j > (n-1)m$, by inductive hypothesis we obtain \[ \ord(b_{i-j}a_j) \geq \ord b_{i-j} + \epsilon > \min\{M, (n-1)\epsilon\} + \epsilon. \] This proves our claim, hence the theorem when $N = 0$ (the statement about the Newton polygon here is trivial since $h(X) = 1$).\newline Now let's consider the general case with $N \geq 1$ and suppose that the theorem holds for $N - 1$. Let $\lambda_1 \leq \lambda$ be the first slope of $\mathfrak{N}(f)$; if it is the only slope then, since $N \geq 1$, there's at least one point on $y = \lambda_1x$. We can then use \cref{lemma:lemma7-p.103} to find $\alpha$ such that $f(\alpha) = 0$ and $\ord \alpha = -\lambda_1$. Let's define \[ f_1(X) := \frac{f(X)}{1 - \frac{X}{\alpha}} = f(X) \cdot \sum_{j=0}^{+\infty} \left(\frac{X}{\alpha}\right)^j \in 1 + X\Cp\ser{X}. \] By \cref{lemma:lemma8-p.105}, $f_1$ converges on $D(p^{\lambda_1})$. Setting $c := \tfrac{1}{\alpha}$ we have $f(X) = (1 - cX)\cdot f_1(X)$. Let $\lambda_1'$ be the first slope of $\mathfrak{N}(f_1)$; it must necessarily be $\lambda_1' \geq \lambda_1$. In-fact $\lambda_1' < \lambda_1$ implies that $\mathfrak{N}(f_1)$ has more than one slope and that, by \cref{lemma:lemma7-p.103}, $f_1$ has a root with \padic order $-\lambda_1'$ and so does $f$, but this is impossible by \cref{lemma:order-zeroes-function} since $-\lambda_1' > -\lambda_1$. We can now apply \cref{lemma:lemma6-p.102}, with parameters $\underline{f} = f_1, \underline{g} = f, \underline{\lambda} = \lambda_1, \underline{\lambda_1} = \lambda_1'$ and we get that $\mathfrak{N}(f_1)$ is obtained translating $\mathfrak{N}(f) \setminus \ell((0,0), (1, \lambda_1))$ by $\vec{v} = (-1, -\lambda_1)$, where $\ell(P, Q)$ is the segment joining $P$ to $Q$. We claim that $f_1$ converges on $D(p^{\lambda})$: if $\lambda$ isn't the final slope of $\mathfrak{N}(f)$ then it's trivially true, otherwise \cref{lemma:lemma6-p.102} tells us that when $\mathfrak{N}(f)$ has last slope $\lambda$ and $f$ converges on $D(p^{\lambda})$ then so does $f_1$. Thus, $f_1$ satisfies all the conditions of the theorem with $N-1$ instead of $N$ (recall that, to obtain $\mathfrak{N}(f_1)$, we removed a segment with slope $\lambda_1 \leq \lambda$ and with length $1$ from $\mathfrak{N}(f)$). By inductive hypothesis we can find $h_1(X) \in 1 + X\Cp[X]$ of degree $N-1$ and a series $g(X) \in 1 + X\Cp\ser{X}$, convergent and non-zero on $D(p^{\lambda})$, such that \[ h_1(X) = f_1(X) \cdot g(X). \] Multiplying both sides by $(1 - cX)$ and setting $h(X) := (1 - cX)h_1(X)$ we obtain \[ h(X) = f(X) \cdot g(X), \] where $h$ and $g$ have the desired properties. 
Let's also observe that $\mathfrak{N}(h_1)$ coincides with $\mathfrak{N}(f_1)$ up to $x = N-1$ and that, since $h(X) = (1 - cX)h_1(X)$, $\mathfrak{N}(h)$ is obtained joining $(0,0)$ to $(1, \lambda_1)$ and then translating $\mathfrak{N}(h_1)$. Then it's clear that $\mathfrak{N}(h)$ will coincide with $\mathfrak{N}(f)$ up to $x = N$.\newline Now we have only to prove the uniqueness of $h(X)$ (we have only proved its existence). Let's suppose that $\tilde{h}(X) \in 1 + X\Cp[X]$ is another polynomial of degree $N$ such that \[ \tilde{h}(X) = f(X) \cdot g_1(X), \] where $g_1(X) \in 1 + X\Cp\ser{X}$ converges and is non-zero on $D(p^{\lambda})$. We have \[ \tilde{h}(X)\cdot g(X) = f(X)\cdot g(X) \cdot g_1(X) = h(X) \cdot g_1(X). \tag{$*$} \] To prove uniqueness it suffices to show that $(*)$ implies that $h$ and $h_1$ have the same roots with the same multiplicities (they both have constant term $1$). The case $N=1$ is trivial. Let's now consider $N > 1$. The polynomial $h(X)$ is the one we built before so we already know that $\mathfrak{N}(h)$ coincides with $\mathfrak{N}(f)$ up to $x = N$. Using \cref{thm:newton-polygon-polinomial-zeroes}, this means that every root of $h(X)$ is in $D(p^{\lambda})$ (by assumption all the slopes of $\mathfrak{N}(h)$ are less or equal to $\lambda$). Let $\alpha \in \Cp$ be a root of $h(X)$. Since $\alpha \in D(p^{\lambda})$ we can compute $g(\alpha)$ and $g_1(\alpha)$ and, by hypothesis, they're not zero. So $\alpha$ must also be a root of $\tilde{h}(X)$. Let's define \[ \tilde{k}(X) := \frac{\tilde{h}(X)}{1 - \frac{X}{\alpha}}, \qquad k(X) := \frac{h(X)}{1 - \frac{X}{\alpha}}; \] they're two polynomials in $1 + X\Cp[X]$ of degree $N - 1$ satisfying $\tilde{k}(X)\cdot g(X) = k(X)\cdot g_1(X)$. We can repeat this process with every other root of $h(X)$ and, at the end, both polynomials will be $1$ so we have proved uniqueness. \end{proof} This is a very powerful theorem, with a lot of interesting corollaries. \begin{corollary} \label{corollary:newton-polygon-zeroes} If a segment of the Newton polygon of $f(X) \in 1 + X\Cp\ser{X}$ has finite length $N$ and slope $\lambda$, then there are exactly $N$ values of $x$ (counting multiplicity) for which $f(x) = 0$ and $\mathrm{ord}_p\, x = -\lambda$. \end{corollary} \begin{proof} It is an immediate application of \cref{thm:weierstrass-padic-preparation} and \cref{thm:newton-polygon-polinomial-zeroes}. \end{proof} \begin{example} We can use the Newton polygon to study the exact region of convergence of $\E_p(X)$, the Artin-Hasse exponential (see \cref{defn:artin-hasse}). We already know, by \cref{prop:artin-hasse-formula} and \cref{prop:artin-hasse-formula}, that \[ \E_p(X) = \exp_p\left(\sum_{i=0 }^{+\infty} \frac{X^{p^i}}{p^i}\right) \] and that $\E_p(X)$ converges on $D(1^-)$. We'll show that this is the exact region of convergence, i.e. that $\E_p(X)$ doesn't converge if $\pabs{x} = 1$. Let's define \[ f(X) = \sum_{i=0}^{+\infty} \frac{X^{p^i -1}}{p^i} \in 1 + X\Cp\ser{X}, \] so that $\E_p(X) = \exp_p(X \cdot f(X))$. Now, $\E_p(X)$ converges at $x \in \Cp$ if and only if $x\cdot f(x) \in D(r_p^-)$. We'll show that $f(X)$ doesn't even converge if $\pabs{x} = 1$. Writing $f(X) = 1 + \sum_{n=1}^{+\infty} a_iX^i$, it's immediate that \begin{gather*} (i, \ord a_i) = \begin{cases} \left(p^k - 1, -k\right), & \text{if $\exists k \in \N$ such that $i = p^k - 1$;} \\ (i, 0), & \text{otherwise;} \end{cases}. 
\end{gather*} If $\ell_i$ is the segment joining $\left(p^i-1, -i\right)$ to $\left(p^{i+1}-1, -i-1\right)$ then we have $\mathfrak{N}(f) = \bigcup_{i \in \N} \ell_i$ (see \cref{figure:figure-extra-2} for $p=2$). It is clearly a type $(1)$ polygon (infinite number of finite segments). The segment $\ell_i$ has slope $\lambda_i =- \tfrac{1}{p^i(p - 1)} < 0$ and we have $\lim_{i \to +\infty} \lambda_i = 0$. This proves that $0$ is the least upper bound of all slopes of $\mathfrak{N}(f)$ so, using \cref{prop:newton-polygon-radius-convergence}, we can conclude: the radius of convergence of $f$ is $1 = p^0$ and we cannot have convergence ``at the border'', since we would need a type $(3)$ polygon. \begin{figure} \centering \includegraphics[scale=1.5]{/home/carlo/Tesi/images/figure_extra_2} \caption{Newton polygon of $f(X)$ for $p=2$} \label{figure:figure-extra-2} \end{figure} \end{example} Finally, we'll show a nice application of \cref{thm:weierstrass-padic-preparation}, which will imply the non-existence of a non-constant power series which converges on $\Cp$ and is never zero. This means exactly that we cannot have an exponential with the same properties of the classical one: in-fact in the classical case, if $h(X)$ is a convergent power series, then $e^{h(X)}$ is everywhere convergent and non-zero. We'll first need a technical lemma. \begin{lemma} \label{lemma:infinite-zeroes} Let $f(X)$ be a power series which converges on $D(p^{\lambda})$. If $f(X)$ has an infinite number of zeroes on $D(p^{\lambda})$ then $f(X)$ is identically zero. \end{lemma} \begin{proof} If $f(X) = 0$ there's nothing to prove, otherwise we can assume, by contradiction, $f(X) \in 1 + X\Cp\ser{X}$ (we can write $f(X) = a_dX^d \cdot g(X)$, where $d$ is such that $a_d$ is the first non-zero coefficient and study $g(X) \in 1 + X\Cp\ser{X}$). We can then apply \cref{thm:weierstrass-padic-preparation}, using $\lambda$, to obtain $N \in \N$, $h(X) \in 1 + X\Cp[X]$, a polynomial of degree $N$, and $g(X) \in 1 + X\Cp\ser{X}$, a power series convergent and non-zero on $D(p^{\lambda})$, such that \[ h(X) = f(X) \cdot g(X). \] By hypothesis, $f(X)$ has infinite zeroes in $D(p^{\lambda})$ and, since $g(X)$ is never zero on $D(p^{\lambda})$, $h(X)$ must have infinite zeroes on $D(p^{\lambda})$. But $h(X)$ is a non-zero polynomial of degree $N$ so it cannot have infinite zeroes, and this is a contradiction. Thus the only possible case is $f(X) = 0$. \end{proof} \begin{prop} Let $f(X) = 1 + \sum_{i=1}^{+\infty} a_iX^i \in 1 + X\Cp\ser{X}$ be an everywhere convergent power series. For every $\lambda$, let $h_{\lambda}(X)$ be the polynomial obtained applying \cref{thm:weierstrass-padic-preparation}. Then $h_{\lambda} \to f$ as $\lambda \to +\infty$ (i.e., each coefficient of $h_{\lambda}$ converges to the corresponding coefficient of $f$). In particular, if $f$ is not a polynomial, then its zeroes are $(r_n)_{n \geq 1}$ (i.e. they're countable infinite) and \[ f(X) = \prod_{i=1}^{+\infty} \left(1 - \frac{X}{r_i}\right). \] \end{prop} \begin{proof} If $f(X)$ is a polynomial, then the statement is trivial. From now on we'll consider $f(X)$ to be a proper power series. It's clear that such an $f$ must have a type $(1)$ Newton polygon. Let $(\lambda_n)_{n \geq 1}$ be the slopes of $\mathfrak{N}(f)$ (clearly we consider them in order, i.e. such that $\lambda_1 < \lambda_2 < \dots < \lambda_n < \dots$). Since $f(X)$ converges everywhere, by \cref{prop:newton-polygon-radius-convergence} we must have $\lim_{n \to +\infty} \lambda_n = +\infty$. 
It is also clear that $f$ has a countable infinite set of zeroes (there's clearly no contradiction here, because the zeroes are in $\Cp$): in-fact, applying \cref{corollary:newton-polygon-zeroes}, we obtain that for any segment of $\mathfrak{N}(f)$ we have a finite number of zeroes (and clearly the segments of the Newton polygon are countable infinite). Let it be $(r_n)_{n \geq 1}$, where they're listed in such a way that the first ``cluster'' corresponds to slope $\lambda_1$, the second to slope $\lambda_2$ and so on. Applying \cref{thm:weierstrass-padic-preparation} with $\lambda = \lambda_n$ we obtain a polynomial $1 + X\Cp[X] \ni h_n(X) := h_{\lambda_n}(X)$ and a power series $g_n(X) \in 1 + X\Cp\ser{X}$, convergent and non-zero on $D(p^{\lambda_n})$, such that \[ h_n(X) = f(X) \cdot g_n(X). \] Let's introduce some terminology: \begin{gather*} h_n(X) = 1 + \sum_{i=1}^{d_n} a_{n,i}X^i, \qquad g_n(X) = 1 + \sum_{i=1}^{+\infty} b_{n, i}X^i, \end{gather*} where we set $d_n := \deg h_n(X)$. By \cref{thm:weierstrass-padic-preparation} we know that $d_n$ is the total horizontal length of segments of $\mathfrak{N}(f)$ with slope less or equal to $\lambda_n$ and this also means that \[ h_n(X) = \prod_{j=1}^{d_n} \left(1 - \frac{X}{r_j}\right). \tag{$*$} \] First of all, let's prove that the sequences $(a_{n,m})_{n \geq 1}$ are all Cauchy \emph{uniformly} in $m$, i.e. we'll find an upper bound which doesn't depend on $m$. Let $k \in \N$ be such that $\lambda_1 < \dots < \lambda_k < 0 \leq \lambda_{k+1}$, i.e. the first $k$ slopes of $\mathfrak{N}(f)$ are negative. Let's consider $r_1, \dots, r_{d_k}$, all the roots of $f$ (they're not necessarily distinct) corresponding to the negative slopes of $\mathfrak{N}(f)$. Then $\pabs{1/r_i} = p^{\ord r_i} > 1$, for every $1 \leq 1 \leq d_k$. Instead, for any other root $r_m$ with $m > d_k$ we have $\pabs{1/r_m} \leq 1$, since it corresponds to a non-negative slope. Let's set $M := \pabs{1/r_1}\cdots\pabs{1/r_{d_k}}$ (if all slopes are non-negative we simply set $M = 1$). Recalling the relations between coefficients and reciprocal of roots (using elementary symmetric polynomials), for $n \geq k$, by $(*)$, we have \[ a_{n, m} = (-1)^m \cdot e_m\left(\frac{1}{r_1}, \dots, \frac{1}{r_{d_k}}, \frac{1}{r_{d_k + 1}}, \dots, \frac{1}{r_{d_n}}\right). \] Since for any $j > d_k$ we have $\pabs{1/r_j} \leq 1$, it's easy to see that \[ \pabs{a_{n,m}} \leq \pabs{1/(r_1 \cdots r_{d_k})} = M, \] for any $m \in \N$ and $n \geq k$. We have found a common upper bound for all the coefficients of all the polynomials $h_n(X)$ with $n \geq k$. Now we have \[ h_{n+1}(X) = h_n(X) \cdot \prod_{j=d_n+1}^{d_{n+1}} \left(1 - \frac{X}{r_j}\right) \] so we obtain \[ a_{n+1, m} = a_{n, m} + \sum_{j=1}^m (-1)^j\cdot a_{n, m-j}\cdot e_j\left(\frac{1}{r_{d_n + 1}}, \dots, \frac{1}{r_{d_{n+1}}}\right), \] where we set $a_{n, 0} = 1$. Since $\lim_{n \to +\infty} d_n = +\infty$ (by construction) we can choose a large enough $n$ such that $\lambda_{n+1} > 0$. Then, $\pabs{1/r_j} = p^{\ord r_j} = p^{-\lambda_{n+1}} < 1$ for any $d_n + 1 \leq j \leq d_{n+1}$. Now it's easy to see that \begin{gather*} \forall\, j \in \N, \quad \pabs{e_j\left(\frac{1}{r_{d_n + 1}}, \dots, \frac{1}{r_{d_{n+1}}}\right)} \leq \pabs{\frac{1}{r_{d_n + 1}}} = p^{-\lambda_{n+1}} \\ \implies \pabs{a_{n+1, m} - a_{n,m}} = \max_{1 \leq j \leq m} \pabs{a_{n, m-j}\cdot e_j\left(\frac{1}{r_{d_n + 1}}, \dots, \frac{1}{r_{d_{n+1}}}\right) } \leq M \cdot p^{-\lambda_{n+1}}. 
\end{gather*} Since $\lim_{n \to +\infty} p^{-\lambda_{n+1}} = 0$, $(a_{n, m})_{n \geq 1}$ is Cauchy (see \cref{lemma:cauchy-sequence-ultrametric}). Let's observe that our bounds don't depend on $m$, i.e. $\pabs{a_{n+1, m} - a_{n, m}} \leq M\cdot p^{-\lambda_{n+1}}$ for any $m \in \N$ and $n \geq k$. Since $(\lambda_n)_{n \geq 1}$ is non-decreasing, for $m > n \geq k$ we obtain \[ \pabs{a_{m, i} - a_{n, i}} \leq \max_{n \leq j < m}\pabs{a_{j+1, i} - a_{j,i}} \leq \max_{n \leq j < m} Mp^{-\lambda_{j+1}} = Mp^{-\lambda_{n+1}}. \] Now, we know that $g_n(X)$ converges and is non-zero on $D(p^{\lambda_n})$; this means exactly that, if $\gamma_n$ is the first slope of $\mathfrak{N}(g_n)$, then $\gamma_n > \lambda_n$. In-fact, $\gamma_n \leq \lambda_n$ would imply, by \cref{corollary:newton-polygon-zeroes}, the existence of $\alpha \in \Cp$ such that $\pabs{\alpha} = p^{\gamma_n} \leq p^{\lambda_n}$ such that $g(\alpha) = 0$ and this cannot be the case. From a geometrical point of view, this means that every point $(i, \ord b_{n,i})$ lies on or above the line $y = \gamma_n \cdot x$, i.e. \[ \ord b_{n,i} \geq i \cdot \gamma_n. \] We have already proved that $\lim_{n \to +\infty} \lambda_n = +\infty$ so $\lim_{n \to +\infty} \gamma_n = +\infty$ and this implies $\lim_{n \to +\infty} \ord b_{n,i} = +\infty$, i.e. $\lim_{n \to +\infty} b_{n,i} = 0$ for every $i \geq 1$. Let's now come back to the relation $h_n(X) = f(X)\cdot g_n(X)$ and let's consider the single coefficients; we obtain \begin{align*} a_{n, 1} &= b_{n,1} + a_1; \\ a_{n, 2} &= b_{n,2} + a_1b_{n,1} + a_2; \\ \vdots \\ a_{n, m} &= b_{n, m} + \sum_{j=1}^{m-1} a_jb_{n, m-j} + a_m. \end{align*} Then, for any $m \geq 1$, we have $\lim_{n \to +\infty} a_{n, m} = a_m$. Let's fix $x \in D(1)$ and $\epsilon > 0$ and consider \begin{gather*} \pabs{f(x) - h_n(x)} = \pabs{\sum_{i=1}^{+\infty} (a_i - a_{n, i}) x^i} \leq \max\left\{\max_{1 \leq i \leq d_n} \pabs{a_i - a_{n,i}},\, \max_{i > d_n}\,\pabs{a_i} \right\} \end{gather*} where we set $a_{n, i} = 0$ if $i > d_n$. We already know $\lim_{n \to +\infty} d_n =+\infty$ and we know that $\lim_{i \to +\infty} \pabs{a_i} = 0$ since $f$ converges everywhere (see \cref{prop:summable_families}). Let's choose $n \in \N$ such that $i > d_n$ implies $\pabs{a_i} < \epsilon$. Now we have only to give an upper bound on the first term, but this is easy thanks to the bounds we proved before: \begin{gather*} \pabs{a_i - a_{n, i}} = \lim_{m \to +\infty} \pabs{a_{m, i} - a_{n, i}} \leq \lim_{m \to +\infty} Mp^{-\lambda_{n+1}} = Mp^{-\lambda_{n+1}} \\ \implies \max_{1 \leq i \leq d_n} \pabs{a_i - a_{n,i}} \leq M\cdot p^{-\lambda_{n+1}} \end{gather*} and we can assume that $n \in \N$ is big enough such that $M\cdot p^{-\lambda_{n+1}} < \epsilon$ and $i > d_n$ implies $\pabs{a_i} < \epsilon$. Since $\epsilon$ is chosen arbitrarily, we conclude that if $x \in D(1)$ then \[ f(x) = \lim_{n \to +\infty} h_n(x) = \lim_{n \to +\infty} \prod_{j=1}^{d_n} \left(1 - \frac{x}{r_j}\right) = \prod_{j=1}^{+\infty} \left(1 - \frac{x}{r_j}\right). \] Let's define $\ell(X) := \prod_{j=1}^{+\infty} \left(1 - \frac{X}{r_j}\right)$. It can be proved that $\ell(X) \in 1 + X\Cp\ser{X}$ exploiting the fact that $\lim_{n \to +\infty} \pabs{1/r_n} = 0$ and that its coefficient of $X^m$ is simply the sum of the series of all possible products of $m$ of the $-1/r_i$'s (which converges). Now, $\ell(X)$ converges on $D(1)$ because $\ell(x) = f(x)$ for any $x \in D(1)$. 
We can conclude that, in $\Cp\ser{X}$, we have \[ f(X) = \ell(X) = \prod_{j=1}^{+\infty} \left(1 - \frac{X}{r_j}\right) \] since $g(X) := f(X) - \ell(X)$ is a power series convergent on $D(1)$ with infinite zeroes and, by \cref{lemma:infinite-zeroes}, it must be $g(X) = 0$. \end{proof} This proposition resembles a lot the Weierstrass factorization theorem of complex analysis, although the \padic result is much more clean: there are no exponential factor in the product. One immediate implication is that any power series which converges everywhere and is never zero must be a constant: here is why we cannot have an exponential similar to the classic one, which converges everywhere, is never zero but isn't constant. Finally we can think as power series which converges everywhere simply as ``polynomials with infinite zeroes'', which can be factorized in the same exact way we factorize polynomials.
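As a concrete complement to the remark about the exponential made above (this is a side observation added for illustration; it relies on the standard formula $\ord(n!)=\frac{n-s_p(n)}{p-1}$, where $s_p(n)$ is the sum of the base-$p$ digits of $n$, which we do not prove here), consider the formal series $\exp(X)=\sum_{n \geq 0} \frac{X^n}{n!}$. We have
\[
\ord\left(\frac{1}{n!}\right) = -\frac{n-s_p(n)}{p-1},
\qquad\text{so}\qquad
\frac{1}{n}\,\ord\left(\frac{1}{n!}\right) \to -\frac{1}{p-1}
\quad\text{as } n \to +\infty,
\]
hence the radius of convergence of $\exp(X)$ is $p^{-\frac{1}{p-1}} < 1$: the $p$-adic exponential does not even converge on the whole of $D(1)$, in agreement with the discussion above.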
{ "alphanum_fraction": 0.6460043277, "avg_line_length": 123.8060046189, "ext": "tex", "hexsha": "661ffe2312ef809d4a0463e1a4cde4718cf08d17", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "carlo300/BachelorThesis", "max_forks_repo_path": "Mainmatter/chapter5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "carlo300/BachelorThesis", "max_issues_repo_path": "Mainmatter/chapter5.tex", "max_line_length": 2053, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d7c1311e2abc12c80ffac864b74b214e6a63b9fb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "carlo300/BachelorThesis", "max_stars_repo_path": "Mainmatter/chapter5.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-29T10:11:24.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-21T10:59:24.000Z", "num_tokens": 19865, "size": 53608 }
\subsection{Dynamics Constraints}
\label{text:approach/constraint/dynamics}
The dynamics constraints subsume all constraints in problem \ref{problem:general} that are not directly related to safety, i.e.
\begin{align}
\label{eq:constraint_f} &\x_{t+1} = \f(\x_t, \u_t) \qquad& \forall t \in [0, T - 1] \\
\label{eq:constraint_x} &\x_t \in \xset & \forall t \in [0, T]\\
\label{eq:constraint_u} &\u_t \in \uset & \forall t \in [0, T]\\
\label{eq:constraint_x0} &\x_0 \in \xset_0
\end{align}
Using the shooting trajectory optimization paradigm, the decision variables are the robot's control inputs $\u_t \in \uset$. The states $\x_t$ are obtained by unrolling the controls, starting at some initial state $\x_0 \in \xset_0$ and iteratively applying the robot's dynamics $\f(\cdot)$. \ac{IPOPT}, on the other hand, solves a general \ac{NLP} of the form \cite{Wachter2006} \\
\begin{problem}{General IPOPT problem formulation}
\begin{align}
\min_{x \in \mathbb{R}^n} \quad & \f(x) \\
\textrm{s.t. } \quad & g^L \leq g(x) \leq g^U \\
& x^L \leq x \leq x^U
\end{align}
\label{problem:general_ipopt}
\end{problem}
As indicated in Section \ref{text:approach/formulation}, the bounds on the controls $\u_t$ have the same box shape as the bounds on the decision variable $\x$ in problem \ref{problem:general_ipopt}. Therefore, constraint \ref{eq:constraint_u} is imposed implicitly, simply by using the robot's control input bounds as bounds on the decision variable. Likewise, the constraints \ref{eq:constraint_f} and \ref{eq:constraint_x0} are satisfied by construction of the shooting method, leaving only constraint \ref{eq:constraint_x} to be defined explicitly.

The state $\x$ of the robot incorporates both its position and its velocity. Since the prediction models, objectives, and constraints depend only on relative measures of the agents' positions, the positional subset of $\xset$ can safely be assumed to be unbounded ($\xset_{pos} = \mathbb{R}^2$). Due to the speed bounds imposed on the robot ($||v||_1 \leq v_{max}$, cf. Section \ref{text:approach/formulation}), a maximal speed constraint must be established in order for the solution to be feasible:
\begin{equation}
\x_t \in \xset \Rightarrow -v_{max} \leq g_{v_{max}}(\x_t) = \dot{\x}_t \leq v_{max} \quad \forall t \in [0, T]
\label{eq:constraint_v_max}
\end{equation}
Computing the Jacobian of constraint \ref{eq:constraint_v_max} is straightforward using the chain rule and exploiting the linear robot dynamics (as above when deriving the goal objective's gradient in equation \ref{eq:goal_gradient_dynamics}):
\begin{align}
\nabla g_{v_{max}} &= \pd{g_{v_{max}}}{\u_{0:T-1}} = \pd{g_{v_{max}}}{\x_{0:T}} \cdot \pd{\x_{0:T}}{\u_{0:T-1}} \\
\Rightarrow \pd{g_{v_{max}}}{\x_{0:T}} &= \begin{bmatrix} \pd{g_{v_{max}}^1}{\x_{0:T}} & \hdots & \pd{g_{v_{max}}^T}{\x_{0:T}}\end{bmatrix}^T \\
&= \begin{bmatrix}
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \hdots & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & \hdots & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \hdots & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \hdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \hdots & 1
\end{bmatrix} \\
\Rightarrow \pd{\x_{0:T}}{\u_{0:T-1}} &= \begin{bmatrix} \mathbf{0}_{n \times m} \\ B_n \end{bmatrix}
\end{align}
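To make the assembly of this Jacobian concrete, the following NumPy sketch builds $\pd{g_{v_{max}}}{\u_{0:T-1}}$ for generic linear dynamics $\x_{t+1} = A\x_t + B\u_t$ by selecting the velocity rows of the state sensitivities. It is illustrative only: the double-integrator matrices, the horizon and the state ordering $[p_x, p_y, v_x, v_y]$ are placeholder assumptions, not the exact implementation used in this work.
\begin{verbatim}
import numpy as np

def speed_constraint_jacobian(A, B, T):
    n, m = B.shape
    vel_rows = np.zeros((2, n))
    vel_rows[0, 2] = 1.0  # selects v_x
    vel_rows[1, 3] = 1.0  # selects v_y
    # One 2 x m block per pair (state step t = 1..T, control step s < t),
    # since dx_t/du_s = A^(t-1-s) B for linear dynamics.
    J = np.zeros((2 * T, m * T))
    for t in range(1, T + 1):
        for s in range(t):
            block = vel_rows @ np.linalg.matrix_power(A, t - 1 - s) @ B
            J[2 * (t - 1):2 * t, m * s:m * (s + 1)] = block
    return J

dt = 0.1
A = np.eye(4)
A[0, 2] = A[1, 3] = dt                                     # p += v * dt
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])   # v += u * dt
print(speed_constraint_jacobian(A, B, T=3).shape)          # (6, 6)
\end{verbatim}
The resulting matrix stacks the $\pm v_{max}$ box constraints of all prediction steps; in an interior-point solver it would be passed as the (dense or sparse) constraint Jacobian.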
{ "alphanum_fraction": 0.6829341317, "avg_line_length": 68.1632653061, "ext": "tex", "hexsha": "7067264f92b5421b9933061bd60ef0b3574c8ab8", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2022-03-03T10:39:03.000Z", "max_forks_repo_forks_event_min_datetime": "2020-12-09T00:03:26.000Z", "max_forks_repo_head_hexsha": "9a2b3f32a0005cc0cb79bb78924f09da5a94587d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "StanfordASL/mantrap", "max_forks_repo_path": "report/thesis/constraint_dynamics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9a2b3f32a0005cc0cb79bb78924f09da5a94587d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "StanfordASL/mantrap", "max_issues_repo_path": "report/thesis/constraint_dynamics.tex", "max_line_length": 1060, "max_stars_count": 7, "max_stars_repo_head_hexsha": "9a2b3f32a0005cc0cb79bb78924f09da5a94587d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "simon-schaefer/mantrap", "max_stars_repo_path": "report/thesis/constraint_dynamics.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T02:52:48.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-11T18:13:27.000Z", "num_tokens": 1162, "size": 3340 }
\section{Integral of $\frac{1}{x^4+1}$}
When we first see the function $\frac{1}{x^4+1}$, it reminds us of the derivative
of arc tangent (which is $\frac{1}{x^2+1}$). Therefore we can solve this integral
using some intuition about arc tangent.

As a first step, we factor the denominator into a product of quadratic polynomials
in order to facilitate the integration:
$$
\begin{aligned}
x^4+1 &=(x^4+2x^2+1)-2x^2 \\
&=(x^2+1)^2-(\sqrt2x)^2 \\
&=(x^2-\sqrt2x+1)(x^2+\sqrt2x+1) \\
\end{aligned}
$$
It seems that we are now able to deal with it using partial fractions:
$$\frac{1}{x^4+1}=\frac{Ax+B}{x^2-\sqrt2x+1}+\frac{Cx+D}{x^2+\sqrt2x+1}$$
Since the numerator on the left-hand side is one, the combined numerator on the
right-hand side must also equal $1$:
$$
\begin{aligned}
1=&(Ax+B)(x^2+\sqrt2x+1)+(Cx+D)(x^2-\sqrt2x+1) \\
1=&(Ax^3+A\sqrt2x^2+Ax)+(Bx^2+B\sqrt2x+B)+ \\
&(Cx^3-C\sqrt2x^2+Cx)+(Dx^2-D\sqrt2x+D) \\
1=&(A+C)x^3+[\sqrt2(A-C)+B+D]x^2+[A+C+\sqrt2(B-D)]x+B+D
\end{aligned}
$$
We can turn this nasty polynomial identity into a system of linear equations by
comparing coefficients:
$$
\begin{aligned}
\begin{cases}
A+C=0 \\
\sqrt2(A-C)+B+D=0 \\
A+C+\sqrt2(B-D)=0 \\
B+D=1 \label{e_d}
\end{cases} \\
\because B+D=1 \\
\therefore \sqrt2(A-C)+1=0 \\
\therefore A-C=-\frac{1}{\sqrt2} \\
\because A+C=0 \\
\therefore 2A=-\frac{1}{\sqrt2} \\
\therefore A=-\frac{1}{2\sqrt2} \\
\therefore C=\frac{1}{2\sqrt2} \\
\because B+D=1,B-D=0 \\
\therefore B=D=\frac{1}2 \\
\begin{cases}
A=-\frac{1}{2\sqrt2} \\
B=\frac{1}2 \\
C=\frac{1}{2\sqrt2} \\
D=\frac{1}2 \\
\end{cases} \\
\therefore \frac{1}{x^4+1}=\frac{-\frac{1}{2\sqrt2}x+ \frac{1}2}{x^2-\sqrt2x+1}+\frac{\frac{1}{2\sqrt2}x+ \frac{1}2}{x^2+\sqrt2x+1}
\end{aligned}
$$
Since we have split the fraction into two fractions whose denominators are quadratic
trinomials, we can complete the square in each denominator to integrate towards arc
tangent.
\begin{eqnarray}
\begin{aligned}
x^2-\sqrt2x+1 &=(x^2-\sqrt2x+\frac{1}2)+\frac{1}2 \\
&=(x-\frac{1}{\sqrt2})^2+\frac{1}2
\end{aligned} \label{eqn1}\\
x^2+\sqrt2x+1=(x+\frac{1}{\sqrt2})^2+\frac{1}2 \label{eqn2}
\end{eqnarray}
Now we can integrate since everything is ready. The integration starts from the
partial fractions and then plugs in \ref{eqn1} and \ref{eqn2}:
$$
\begin{aligned}
\int\frac{dx}{x^4+1}&=\int\frac{-\frac{1}{2\sqrt2}x+ \frac{1}2}{x^2-\sqrt2x+1}dx+\int\frac{\frac{1}{2\sqrt2}x+ \frac{1}2}{x^2+\sqrt2x+1}dx \\
&=\frac{1}{4\sqrt2}\left(\int\frac{-2x+2\sqrt2}{(x-\frac{1}{\sqrt2})^2 +\frac{1}2}dx+\int\frac{2x+2\sqrt2}{(x+\frac{1}{\sqrt2})^2+\frac{1}2}dx \right)
\end{aligned}
$$
Now we deal with the first part. We split its numerator so that one piece is
exactly the derivative of the denominator, $(x^2-\sqrt2x+1)'=2x-\sqrt2$:
$$
\begin{aligned}
\int\frac{-2x+2\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx
&=-\int\frac{2x-\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx+
\int\frac{\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx
\end{aligned}
$$
It seems like a \textbf{$u$-substitution} is applicable in the first integral, so we
let $u=(x-\frac{1}{\sqrt2})^2+\frac{1}2=x^2-\sqrt2x+1$, so $du=(2x-\sqrt2)dx$.
Then the whole thing is transformed into this:
$$
\begin{aligned}
-\int\frac{2x-\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx+
\int\frac{\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx
&=-\int\frac{du}u+
\int\frac{\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx \\
&=-\ln(u)+
\int\frac{\sqrt2}{(x-\frac{1}{\sqrt2})^2+\frac{1}2}dx \\
&=-\ln(x^2-\sqrt2x+1)+
\int\frac{\sqrt2}{(x-\frac{1}{\sqrt2})^2 +(\frac{1}{\sqrt2})^2}dx \\
&=-\ln(x^2-\sqrt2x+1)+2\arctan(\sqrt2x-1)
\end{aligned}
$$
No absolute value is needed in the logarithm because $x^2-\sqrt2x+1>0$ for every
real $x$. The integration of the second part is very similar, and in the end the
indefinite integral of $\frac1{x^4+1}$ is:
$$
\frac1{4\sqrt2}\left(\ln({x^2+\sqrt2x+1\over x^2-\sqrt2x+1})+
2\arctan(\sqrt2x+1)+2\arctan(\sqrt2x-1)\right)+C
$$
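As a quick, independent sanity check of the final formula (this check is an addition
for illustration and was not part of the original derivation), the closed form can be
compared against numerical quadrature; the short Python sketch below assumes that
NumPy and SciPy are available.
\begin{verbatim}
# Numerical sanity check of the antiderivative derived above (C = 0).
import numpy as np
from scipy.integrate import quad

def F(x):
    s = np.sqrt(2.0)
    return (1.0 / (4.0 * s)) * (
        np.log((x**2 + s * x + 1.0) / (x**2 - s * x + 1.0))
        + 2.0 * np.arctan(s * x + 1.0)
        + 2.0 * np.arctan(s * x - 1.0)
    )

def f(x):
    return 1.0 / (x**4 + 1.0)

# F(b) - F(a) should match numerical quadrature of f on [a, b].
for a, b in [(-2.0, 3.0), (0.0, 1.0), (-0.5, 4.0)]:
    numeric, _ = quad(f, a, b)
    assert abs(numeric - (F(b) - F(a))) < 1e-9
print("closed form agrees with numerical quadrature")
\end{verbatim}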
{ "alphanum_fraction": 0.6260752688, "avg_line_length": 29.0625, "ext": "tex", "hexsha": "8a2a282f8db1ccea183194b1c2fadb6456719165", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-11-07T07:20:36.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-07T07:20:36.000Z", "max_forks_repo_head_hexsha": "7800a3056657691e9cf81687309cb8ffd1a44887", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "TravorLZH/mathcol-doc", "max_forks_repo_path": "integrate-1overx4plus1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7800a3056657691e9cf81687309cb8ffd1a44887", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "TravorLZH/mathcol-doc", "max_issues_repo_path": "integrate-1overx4plus1.tex", "max_line_length": 79, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7800a3056657691e9cf81687309cb8ffd1a44887", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "TravorLZH/mathcol-doc", "max_stars_repo_path": "integrate-1overx4plus1.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-17T04:58:27.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-17T04:58:27.000Z", "num_tokens": 1676, "size": 3720 }
\section{Correlation coefficients}\label{app:correlations} A graphical representation of the correlation coefficients for the features in Table~\ref{table:manual_features}. \begin{figure}[!htbp] \begin{center} \includegraphics[width=.75\linewidth]{../images/standard_correlation_plot.pdf} \end{center} \caption{Correlation coefficients of features in Table~\ref{table:manual_features} for standard tournaments} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=.75\linewidth]{../images/noise_correlation_plot.pdf} \end{center} \caption{Correlation coefficients of features in Table~\ref{table:manual_features} for noisy tournaments} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=.75\linewidth]{../images/probend_correlation_plot.pdf} \end{center} \caption{Correlation coefficients of features in Table~\ref{table:manual_features} for probabilistic ending tournaments} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=.75\linewidth]{../images/probend_noise_correlation_plot.pdf} \end{center} \caption{Correlation coefficients of features in Table~\ref{table:manual_features} for noisy probabilistic ending tournaments} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=.75\linewidth]{../images/merged_correlation_plot.pdf} \end{center} \caption{Correlation coefficients of features in Table~\ref{table:manual_features} for data set} \end{figure}
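For reference, a correlation plot of the kind shown in the figures above can be
produced with a few lines of Python; the sketch below is purely illustrative (the
file name and column layout are hypothetical placeholders, and this is not the
script used to generate the figures of this paper).
\begin{verbatim}
# Illustrative sketch: compute and plot a correlation-coefficient matrix.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("standard_tournament_features.csv")  # hypothetical file
corr = df.corr()  # pairwise Pearson correlation coefficients

fig, ax = plt.subplots(figsize=(8, 8))
im = ax.matshow(corr.values, vmin=-1, vmax=1, cmap="RdBu")
ax.set_xticks(range(len(corr.columns)))
ax.set_yticks(range(len(corr.columns)))
ax.set_xticklabels(corr.columns, rotation=90)
ax.set_yticklabels(corr.columns)
fig.colorbar(im)
fig.savefig("correlation_plot.pdf", bbox_inches="tight")
\end{verbatim}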
{ "alphanum_fraction": 0.7340492735, "avg_line_length": 39.575, "ext": "tex", "hexsha": "cd4178cca0858397aeb4d1bcb919aed8aa81ac65", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-03-30T08:13:32.000Z", "max_forks_repo_forks_event_min_datetime": "2020-03-30T08:13:32.000Z", "max_forks_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments", "max_forks_repo_path": "paper/correlation_section.tex", "max_issues_count": 14, "max_issues_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8", "max_issues_repo_issues_event_max_datetime": "2020-05-08T11:23:19.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-29T14:42:49.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments", "max_issues_repo_path": "paper/correlation_section.tex", "max_line_length": 91, "max_stars_count": null, "max_stars_repo_head_hexsha": "0e7c9949d996cf3822072321b603fcff707e97d8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Nikoleta-v3/meta-analysis-of-prisoners-dilemma-tournaments", "max_stars_repo_path": "paper/correlation_section.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 400, "size": 1583 }
\documentclass[11pt,paper=a4]{article} %\documentclass[12pt,preprint]{aastex} \usepackage{amsmath} \usepackage{natbib} \usepackage{graphicx} \usepackage[usenames,dvipsnames]{color} \usepackage[absolute,overlay]{textpos} \usepackage{color} \usepackage{multirow} \usepackage{hyperref} \hypersetup{ colorlinks = true, linkcolor = blue, anchorcolor = red, citecolor = blue, filecolor = red, urlcolor = red } \newcommand{\eht}{\overline} \newcommand{\fht}{\widetilde} \newcommand{\dr}{\frac{\partial}{\partial r}} \newcommand{\dt}{\frac{\partial}{\partial t}} \newcommand{\dth}{\frac{\partial}{\partial \theta}} \newcommand{\dph}{\frac{\partial}{\partial \phi}} \def\ef#1{#1'} \def\ff#1{#1''} \def\fhtc#1{\left\{#1\right\}} \def\erho{\eht{\rho}} % From Maxime's paper \newcommand{\vdag}{(v)^\dagger} \newcommand{\Teff}{T_\mathrm{eff}} \newcommand{\Rsun}{R_\odot} \newcommand{\dV}{\mathrm{d}V} %\newcommand\av[1]{\overline{<#1>}} %\newcommand\av[1]{<\overline{#1}>} %\newcommand\av[1]{\overline{\langle{#1}\rangle}} \def\av#1{\overline{#1}} %\newcommand\fav[1]{\widetilde{\langle #1 \rangle}} \def\fav#1{\widetilde{#1}} \newcommand\br[1]{\langle #1\rangle} \newcommand\hpartial[1]{\hat{\partial} #1} %---------- Own Macros -------------------------------------- \def\etal{{\it et al. }} \def\ie{{\it i.e. }} \def\eg{{\it e.g. }} \def\ms{\, {M_{\odot}}} \def\la{\hbox{\raise.5ex\hbox{$<$} \kern-1.1em\lower.5ex\hbox{$\sim$}}} \def\ga{\hbox{\raise.5ex\hbox{$>$} \kern-1.1em\lower.5ex\hbox{$\sim$}}} \def\msun{$M_\odot$} \newcommand{\SubItem}[1]{ {\setlength\itemindent{15pt} \item[-] #1} } \newcommand{\mean}[1]{\ensuremath{\overline{#1}}} \newcommand{\dgr}{\mbox{$^\circ$}} % degrees \newcommand{\Msun}{\mbox{M$_\odot$\,}} % M_sun \newcommand{\Lsun}{\mbox{$L_\odot$}} % L_sun \newcommand{\dTdp}{\mbox{$\nabla$}} % Temp.grad. \newcommand{\dTdpad}{\mbox{$\dTdp_{ad}$}} % ad. Temp.grad. 
\newcommand{\Tmax}{\mbox{$T_{max}$}} % T_max \newcommand{\Tmaxq}{\mbox{$\overline T_{max}$}} % T_max quer \newcommand{\RTmax}{\mbox{$R_{max}$}} % RT_max \newcommand{\sTmax}{\mbox{$\sigma_T$}} % sigma T_max \newcommand{\DTmax}{\mbox{$\Delta_T$}} % Delta T_max \newcommand{\vexp}{\mbox{$v_{exp}^{(a)}$}} % vexp \newcommand{\vprop}{\mbox{$v_{prop}^{(i)}$}} % vprop \newcommand{\Drinv}{\mbox{$d_{inv}$}} % Drinv \newcommand{\DRinv}{\mbox{$\Delta R_{inv}$}} % DRinv \newcommand{\aP}{${}^\star$} % prime \newcommand{\ad}{\mbox{d}} % d \newcommand{\gsim}{\gtrsim} % greater sim \newcommand{\cm}{\mbox{\ cm}} % units \newcommand{\g}{\mbox{\ g}} % units \newcommand{\s}{\mbox{\ s}} % units \newcommand{\K}{\mbox{\ K}} % units \newcommand{\erg}{\mbox{\ erg }} % units \newcommand{\cms}{\mbox{\ cm s${}^{-1}$}} % units \newcommand{\cmss}{\mbox{\ cm s${}^{-2}$}} % units \newcommand{\Ks}{\mbox{\ K s${}^{-1}$}} % un \newcommand{\Kcm}{\mbox{\ K cm${}^{-1}$}} % un \newcommand{\mes}{\mbox{\ m s${}^{-1}$}} % units \newcommand{\ergK}{\mbox{\ erg K${}^{-1}$}} % units \newcommand{\gcm}{\mbox{\ g cm${}^{-3}$}} % units \newcommand{\gcms}{\mbox{$\g\cm^{-1}\s^{-1}$}} % units \newcommand{\ergcms}{\mbox{$\erg\cm^{-3}\s^{-1}$}} % units \newcommand{\erggs}{\mbox{$\erg\g^{-1}\s^{-1}$}} % units \newcommand{\ergs}{\mbox{$\erg\s^{-1}$}} % units \newcommand{\ergcmsK}{\mbox{$\erg\K^{-1}\cm^{-1}\s^{-1}$}} % units \newcommand{\dyncm}{\mbox{\ dyn$\cm^{-2}$}} % units \newcommand{\radcm}{\mbox{\ rad$^2 \cm^{-2}$\,}} % units \newcommand{\radss}{\mbox{\ rad$^2 \s^{-2}$\,}} % units \newcommand{\vrad}{\mbox{v$_{r}$}} \newcommand{\sv}{\langle\sigma v\rangle} \newcommand{\cer}{\color{red}} \def\eSGS{\eta_{SGS}} % input: Math. f"ur Hydroformeln \newcommand{\dz}{\partial_t} %\newcommand{\dr}{\partial_r} %\newcommand{\dt}{\partial_\theta} \newcommand{\df}{\partial_\phi} % \dp macht in LaTeX Schwierigkeiten \newcommand{\ddz}{\frac{\partial}{\dz}} \newcommand{\ddr}{\frac{\partial}{\dr}} \newcommand{\ddt}{\frac{\partial}{\dt}} \newcommand{\ddf}{\frac{\partial}{\df}} %\newcommand{\vr}{v_r} \newcommand{\vt}{v_\theta} \newcommand{\vp}{v_\phi} \newcommand{\st}{{\,\sin\!\theta}} \newcommand{\ct}{{\,\cos\!\theta}} \newcommand{\stq}{{\,\sin^2\!\theta}} \newcommand{\rez}[1]{\frac{1}{#1}} \newcommand{\rezr}{\rez{r}} \newcommand{\rezrs}{\rez{r\st}} \newcommand\bba{\,\,\Bigl[\,\,} \newcommand\bbz{\,\,\Bigr]\,\,} \newcommand{\mbf}[1]{\mbox{\boldmath$#1$}} % bold in mmode \DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it} \def\todo#1{{\color{red}[#1]}} \usepackage{titling} \newcommand{\subtitle}[1]{% \posttitle{% \par\end{center} \begin{center}\large#1\end{center} \vskip0.5em}% } %\title{... title ...} \title{Towards Complex Understanding of Turbulent Convection in Stellar Interiors Using ransX Analysis Framework} \subtitle{Proposal For Post-Doctoctoral Research Position} \author{Dr. Miroslav Moc\'ak} \begin{document} \maketitle \bibliographystyle{plainnat} \section{Introduction} Contemporary ground- and space-based telescopes provide us precise stellar data leading to challenging questions and forcing us to reconsider our basic assumptions regarding turbulent convection and mixing in stars. Properties of supernova explosions studied by HST or Keck can not be linked to their progenitors conclusively \citep{Smartt2009}. Such progenitors are known to have a structure interleaved by turbulent convection shells \citep{HirschiMeynet2004}. 
VLT is observing massive stars with unexplained chemical peculiarities, where rotational mixing was considered to be enough to explain observations \citep{Evans2008}. Kepler spacecraft finds unexplained pulsations of $\delta$ Scuti and $\gamma$ Doradus stars \citep{UytterhoevenArxiv2011}, which depend heavily on properties of sub-surface stellar convection \citep{GuzikKaye2000}. Explanation of observed element abundances in AGB stars requires physically motivated but still inconclusive tuning for mixing between turbulent envelope convection and underlying hydrogen-free core \citep{Herwig2005}. Turbulence is during stellar evolution one of the most fundamental processes and before taking into account binarity, magnetism or rotation of a star to explain observations, we should understand stellar turbulence well first. It is arguably the greatest weakness in the modern theory of stellar evolution, which is mostly derived from one-dimensional calculations approximating dynamic turbulent processes by simplified theories \citep{KipWeigert1990,CoxGiuli2008}. In reality, turbulent flows are multidimensional and driven by non-linear terms of the hydrodynamic Navier-Stokes equations. \section{Aims} I will analyze three-dimensional (3D) hydrodynamic simulations of stellar convection within the context of Reynolds-Averaged Navier Stokes (RANS) approach pursued by \citet{Besnard1992,Livescu2009,Schwarzkopf2011}. It is a unique way of learning about turbulence based on budget analysis of hydrodynamic equations averaged in space and time, by which complexity of every term is reduced to a one-dimensional mean field. Using this methodology, we derived RANS evolution equations for transport/flux/variance of mass, momenta, kinetic/internal/total energy, temperature, enthalpy, pressure and composition densities (no magnetic fields, no rotation) \citep{Mocak2014} and implemented them to analysis framework, that we call rans(eXtreme) or ransX\footnote{ransX is free for download and test on \href{https://github.com/mmicromegas/ransX}{https://github.com/mmicromegas/ransX}} for short. It should be noted here, that it is only one of many possible sets of equations relevant to closure problems in turbulence and there are many other formulations, which then need to approximate different terms \citep{Canuto1992,Canuto1993,CanutoHoward2001,Hanjalic2002,Alfonsi2009,Garaud2010,Canuto2011a,BiferaleMantovani2011}. RANS approach introduces into the averaged equations many correlations of various thermodynamic fluctuations which are essentially new unknown variables. Hence, to solve them, we need either to design appropriate closures or derive and close evolution equations for them. Either of the tasks is difficult, because stellar turbulence is anisotropic, compressible and embedded in highly stratified environment where external forces like gravity and mean background flow play an important role. But we hope that this approach could in the future allow us to study stellar evolution using solution of the mean fields hydrodynamic equations, move away from canonical form of stellar structure equations and most importantly allow for a comprehensive synergy between engineering turbulence modeling and stellar astrophysics. 
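To make the averaging operations behind these mean-field equations concrete, the
short sketch below shows how a horizontal Reynolds average, the corresponding Favre
(density-weighted) average and one turbulent flux can be computed with NumPy for
fields stored on a 3D Cartesian grid whose first axis is the radial direction. It is
purely illustrative: it is not taken from the ransX source code, the grid shape is a
placeholder, and time averaging over several snapshots is omitted for brevity.
\begin{verbatim}
import numpy as np

def reynolds_avg(q):
    """Horizontal (Reynolds) average: 1D radial profile."""
    return q.mean(axis=(1, 2))

def favre_avg(rho, q):
    """Density-weighted (Favre) average: <rho q> / <rho>."""
    return reynolds_avg(rho * q) / reynolds_avg(rho)

def turbulent_flux(rho, q, ur):
    """f_q = <rho q'' ur''>, double primes = Favre fluctuations."""
    q_f = q - favre_avg(rho, q)[:, None, None]
    ur_f = ur - favre_avg(rho, ur)[:, None, None]
    return reynolds_avg(rho * q_f * ur_f)

# Random data standing in for one snapshot of simulation output:
rng = np.random.default_rng(0)
rho = 1.0 + 0.1 * rng.random((64, 32, 32))
ur = 0.01 * rng.standard_normal((64, 32, 32))
xn = 0.5 + 0.05 * rng.standard_normal((64, 32, 32))
print(turbulent_flux(rho, xn, ur).shape)  # -> (64,)
\end{verbatim}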
My aims encompass the following targets (their content partially overlap with each other and the estimated time of completion is stated in brackets): \begin{itemize} \item publish our RANS mean-field equations implemented within ransX framework \citep{Mocak2014}\footnote{More up-to-date equation content of the ransX framework can be found here \href{https://github.com/mmicromegas/ransX/blob/master/DOCS/ransXtheoryGuide.pdf}{https://github.com/mmicromegas/ransX/blob/master/DOCS/ransXtheoryGuide.pdf}} in high-impact referred journal and validate them with new high-resolution 3D hydrodynamic simulations (2+ year) \item help to implement the ransX framework to all hydrodynamic codes capable of simulating stellar core and envelope convection (e.g. MUSIC \citep{VialletBaraffe2011}, PROMETHEUS \citep{Fryxell1991,Mueller1991}) or stellar atmospheres in 3D and make it an analysis standard (3+ years) \end{itemize} %The new hydrodynamic stellar structure equations for stellar turbulence are listed below as equations (1),(2),(3),(4),(5),(6). They appear to work well (Fig.\ref{hsse:eq_simp}) but the first four equations still lack a theory that explains them and the last equation (5) requires a proper model for transport of composition density ($\nabla_r f_\alpha$) commonly treated in stars as difussion. %\begin{align} %\partial_r \eht{m} = & \ -\eht{\rho} \ \eht{m} \ \eht{g}_r / \Gamma_1 \eht{P} + 4 \pi r^2 \eht{\rho} & \\ %\partial_r \eht{P} = & \ -\eht{\rho} \ \eht{g}_r \\ %\partial_r \fht{L} = & \ -4 \pi r^2 \eht{\rho} \ \eht{g}_r / \Gamma_1 + \widetilde{\epsilon}_{t} \partial_r 4 \pi r^2 \eht{\rho} \fht{u}_r \\ %\partial_r \eht{T} = & -(\Gamma_3 -1) \ \eht{\rho} \ \eht{T} \ \eht{g}_r / \Gamma_1 \eht{P} \\ %\partial_t \fht{X}_i = & \ \fht{\dot{X}}_i^{nuc} - (1/\eht{\rho})\nabla_r f_i - \fht{u}_r \partial_r \fht{X}_i \\ %\fht{u}_r = & \ \dot{\overline{M}} / 4 \pi r^2 \overline{\rho} %\end{align} %\begin{figure}[!h] %\centerline{ % \includegraphics[width=6.3cm]{oblrez_hsse_continuity_eq_alternative_simplified.eps} % \includegraphics[width=6.3cm]{oblrez_hsse_momentum_x_eq_alternative_simplified.eps}} %\centerline{ % \includegraphics[width=6.3cm]{oblrez_hsse_temperature_eq_alternative_simplified.eps} % \includegraphics[width=6.3cm]{oblrez_hsse_luminosity_eq_alternative_simplified.eps}} %\centerline{ % \includegraphics[width=6.3cm]{oblrez_hsse_mean_Xtransport_ne20.eps}} %\caption{Hydrodynamic stellar structure equations without MLT validated by 3D low-resolution oxygen burning convective shell simulation. Initial model is described more in detail in \citet{Mocak2018}.} %\label{hsse:eq_simp} %\end{figure} Partial results from our mean-field RANS analysis related mostly to turbulent kinetic energy and transport of some chemical elements based on oxygen burning shell in massive stars have been already published e.g. \citet{MeakinArnett2007,ArnettMeakin2009,Meakin2010,VialletMeakin2013,Mocak2018}. 
In order to cover wider range of conditions present in stars like Schwarzschild and Ledoux stable/unstable regions, electron degeneracy and multiplicity of convection zones, I also plan to extend our library of ransX mean fields calculated during 3D high-resolution hydrodynamic simulations of: \begin{itemize} \item single convection zone during core helium flash in low-mass stars with Ledoux unstable region at its bottom \citep{Mocak2008,Mocak2009,Mocak2011} (1+ year) \item dual convection zone during core helium flash in metallicity free stars \citep{Mocak2010} (1+ year) \item single convection zone resulting from core carbon flash in intermediate stars with Ledoux unstable region at its bottom \citep{Mocak2011} (1+ years) \item O-Ne-C burning stellar interior in massive pre-supernova progenitor with multiple interacting convection zones \citep{Meakin2006} (3+ years) %\item thermal pulse (model from Marcello) \end{itemize} The setups are already prepared in our MPI parallelized multi-species compressible fluid dynamics code PROMPI \citep{MeakinArnett2007}. Anticipated problems encompass computational time required to perform high-resolution 3D simulations, that may require 100k CPU hours for a single convective turnover timescale. In order to get statistically robust mean-fields from our framework, we need to simulate at least three such timescales per model after initial transient behaviour. Besides general understading of the time-dependency, non-local and compressibility effects of turbulent convection in stars, these simulations will also serve as test beds for turbulence models inspired by work of \citet{rogers1989,lazeroms2013} and \citet{biferale2011}. \section{Summary} \begin{itemize} \item publish comprehensive description and validation of the ransX framework in referred journal \item extend our library of 3D hydrodynamic simulations with core helium flash, core carbon flash, dual core flash and O-Ne-C burning shell simulations (setups already prepared in our hydrodynamic code PROMPI) \item develop turbulence models suitable for 1D stellar evolution calculations inspired by engineering turbulence literature with focus on turbulent composition flux, which in reactive flow controls nuclear reaction rates \item make ransX a standard analysis tool in as many hydrodynamic codes as possible \end{itemize} \section{Intended Collaboration and Topic} \begin{itemize} \item Simon Campbell (Monash Centre for Astrophysics, Australia) \begin{itemize} \item 3D simulations of dual core flashes and nucleosynthesis in low-mass stars \end{itemize} \item Casey Meakin (Karagozian and Case, Inc., Glendale, California) \begin{itemize} \item turbulence modelling, ransX development, hydrodynamic stellar structure equations \end{itemize} \item Dave Arnett (Steward Observatory, University of Arizona) \begin{itemize} \item turbulence modelling, hydrodynamic stellar structure equations \end{itemize} \item Cyril Georgy (Geneva Observatory, University of Geneva, Switzerland) \begin{itemize} \item ransX development, hydrodynamic stellar structure equations \end{itemize} \item Ewald Mueller (Max-Planck-Institut f\"ur Astrophysik, Germany) \begin{itemize} \item search for origin of gravitational wave signals during 3D hydrodynamic simulations of core-collapse supernovas using ransX framework \end{itemize} \end{itemize} %\section{Definitions} %\begin{align} % & \rho \ \ \mbox{density} & & g_r \ \ \mbox{radial gravitational acceleration} \nonumber \\ % & m = \rho V = \rho \frac{4}{3} \pi r^3\ \ \mbox{mass} & & M = 
\int \rho(r) dV \ \ \mbox{integrated mass} \nonumber \\ %& T \ \ \mbox{temperature} & & X_i \ \ \mbox{mass fraction)} \nonumber \\ %& P \ \ \mbox{pressure} & & \epsilon_t \ \ \ \mbox{specific total energy} \nonumber \\ %& u_r, u_\theta, u_\phi \ \ \mbox{velocity components} & & f_i = \eht{\rho}\fht{X''_i u''_i} \ \ \mbox{composition flux} \nonumber \\ %& {\bf u} = u (u_r, u_\theta, u_\phi) \ \ \mbox{velocity} & & d = \nabla \cdot {\bf u} \ \ \mbox{dilatation} \nonumber \\ %& \Gamma_1 = (d \ ln \ P/ d \ ln \ \rho)|_s & & \Gamma_2 / (\Gamma_2 -1) = (d \ ln \ P/ d \ ln \ T)|_s \nonumber \\ %& \Gamma_3 -1 = (d \ ln \ T/ d \ ln \ \rho)|_s & & \nonumber %\end{align} %add here also horizontal and statistical operator definitions, definition of X'' and u''r and the operators \bibliography{referenc} \end{document}
{ "alphanum_fraction": 0.7084986176, "avg_line_length": 58.3789473684, "ext": "tex", "hexsha": "913a66c5baf1092dc7fd714f1cdbc61f04e59457", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-09-16T00:28:17.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-16T00:28:17.000Z", "max_forks_repo_head_hexsha": "2faaa786e00cfd14dce0e18f0793cd0252428d2a", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "mmicromegas/ransX", "max_forks_repo_path": "DOCS/RANDOM/TEX/ransXproposal2019.tex", "max_issues_count": 34, "max_issues_repo_head_hexsha": "2faaa786e00cfd14dce0e18f0793cd0252428d2a", "max_issues_repo_issues_event_max_datetime": "2022-03-30T13:35:43.000Z", "max_issues_repo_issues_event_min_datetime": "2019-07-01T09:11:00.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "mmicromegas/ransX", "max_issues_repo_path": "DOCS/RANDOM/TEX/ransXproposal2019.tex", "max_line_length": 1063, "max_stars_count": 4, "max_stars_repo_head_hexsha": "2faaa786e00cfd14dce0e18f0793cd0252428d2a", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "mmicromegas/ransX", "max_stars_repo_path": "DOCS/RANDOM/TEX/ransXproposal2019.tex", "max_stars_repo_stars_event_max_datetime": "2020-09-16T00:28:15.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-22T11:43:47.000Z", "num_tokens": 4955, "size": 16638 }
\chapter{Related Work} Studies on automatic musical performance assessment usually adopt the existing algorithms for tasks such as chord detection, automatic transcription or onset detection. A recent overview of performance assessment studies is provided by Lerch et al. \cite{lerch2019mpa}. More general reviews of MIR and music education can be found in \cite{dittmar2012} and \cite{percival2007effective}. There are a few studies that develop a new onset detection algorithm, and only one for guitars \cite{eremenko2020performance}. There are no public work focusing on amateur (noisy) recordings or their implications in the context of MOOCs. In this chapter, we review onset detection algorithms in general, focusing on the ones used for guitars or performance assessment systems. \section{Onset Detection} Onset can be defined as the first detectable part of a note event in the recording if the note were isolated \cite{leveau2004}. The task can be separated as offline and online (real-time) onset detection. Some applications provide real-time feedback to the player (e.g. karaoke, Rocksmith\footnote{www.rocksmith.com}, Yousician\footnote{https://yousician.com}) and require online detection. B{\"o}ck et al. \cite{bock2012evaluating} provides an overview for online onset detection. Music performance analysis systems do not necessarily require onset detection in real-time. In MusicCritic, analysis is done after performances are recorded. Therefore we focus on offline onset detection in the rest of the work. Most existing algorithms can be grouped under signal processing, machine learning or probabilistic methods. There are several reviews \cite{hainsworth2003onset} \cite{collins2005comparison} \cite{bello2005tutorial} \cite{dixon2006onset} available mostly covering signal processing methods. Hidden Markov Models (HMM) are commonly used in several probabilistic methods \cite{abdallah2003unsupervised}, \cite{raphael2010music}. Signal processing methods rely on spectral energy, phase, pitch, or a combination of those. A musical onset most likely increases the energy of the signal, which simply explains the motivation behind common usage of energy. However in complex situations such as a quiet note is played while another note is decaying, the total energy might not increase. This issue is addressed by discarding the frequencies that are losing energy in spectral flux \cite{spectralflux} (eq. \ref{SF}). Spectral flux is widely used within many other algorithms \cite{holzapfel2009three}, \cite{bock2013maximum}. Wu and Lerch \cite{wu2018learned} combined spectral flux with an adaptive peak picking method for their experiments in assessment of percussive instruments.\\ Spectral energy is often sufficient for detecting the onsets of percussive instruments but not for instruments with slow attacks, such as wind, bowed and voice. As first introduced \cite{bello2003phase}, phase information is found to be useful for non-percussive instruments. The energy of a note with a slow attack may increase steadily for a long duration, which makes it an imprecise indicator of the onset location. Whereas phases of frequencies change abruptly only in the beginning of the attack. However, abrupt changes in phase may arise due to the unreliability of phase processing \cite{holzapfel2009three} or inaudible noises. A common approach is to combine phase information with other onset detection functions. Bello et al. 
\cite{bello2004use} used both energy and phase information and reported an overall improvement over the use of energy or phase alone.

Pitch information is especially powerful for monophonic instruments with slow attacks and few unwanted noises, such as wind instruments. The absence of noise allows clear tracking of the pitch contours. Two recent automatic assessment studies by Vidwans et al. \cite{vidwansobj} and Wu et al. \cite{wu2016towards} take the boundaries of the pitch contours as onsets for wind instruments. For more complex scenarios, pitch information can be combined with energy \cite{tan2010audio} \cite{zhou2007music} or with both phase and energy \cite{brossier2004fast} \cite{holzapfel2009three}.

Vibrato and tremolo techniques create fluctuations in the pitch and energy of a note. Those fluctuations cause multiple false detections. Vibrato suppression methods have been developed for energy-based \cite{bock2013maximum} and pitch-based \cite{collins2005using} detection algorithms to address this issue.

Özaslan and Arcos \cite{ozaslan2010legato} focused on the identification of the playing techniques legato and glissando on the classical guitar. Onsets of plucked notes are detected with HFC, and the YIN pitch detection algorithm is used to identify the technique. Laurson et al. \cite{laurson2010simulating} worked on the simulation of the rasgueado technique on the classical guitar, where the notes are very close to each other due to fast strumming. The onsets in the real recording are detected by selecting the peaks of the smoothed total energy of the frequencies between 11kHz and 20kHz. Mounir et al. \cite{mounir2016guitar} proposed an algorithm for guitar onset detection called NINOS$^2$. After taking the STFT of the audio, the algorithm measures the sparsity of the spectral energy after discarding the frequencies with high energy. The motivation is that the low-energy frequencies represent the guitar onsets better, as they usually arise from the interaction of the finger (or plectrum) and the strings and decay fast. The algorithm predicts frames with low sparsity as onsets.

Kehling \cite{kehling2014automatic} developed an automatic transcription system with an onset detection stage. Three existing onset detection algorithms (Spectral Flux, Pitchogram Novelty \cite{abesser2017instrument} and Rectified Complex Domain \cite{dixon2006onset}) are applied and combined additively. Each algorithm exploits a different feature: energy, pitch and phase. The combination is found to perform better than the individual algorithms. In MusicCritic \cite{eremenko2020performance}, the Superflux \cite{bock2013maximum} algorithm is used. To eliminate noise, detections are rejected if the energy difference is less than zero or the averaged spectral centroid exceeds a threshold.

Neural network based algorithms, as in many other MIR tasks, perform best in onset detection. According to the MIREX \cite{mirex}\footnote{https://www.music-ir.org/mirex/wiki/MIREX\_HOME} comparisons of recent years, Convolutional Neural Network (CNN) \cite{lecun1998gradient} based algorithms achieve higher detection scores than previous algorithms. The current state-of-the-art onset detection algorithm (CNNOnsetDetector) is also a CNN, developed by Schlüter and Böck \cite{schluter2014improved}. Their motivation for using CNNs for onset detection is that note onsets create edges in spectrograms and CNNs can learn to detect edges effectively.
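As a concrete illustration of the energy-based detection functions discussed at the
beginning of this section, the listing below sketches a half-wave rectified spectral
flux with naive peak picking. It is illustrative only: the parameter values are
arbitrary and it is not the exact formulation of SuperFlux or of any other method
cited above.
\begin{verbatim}
import numpy as np
from scipy.signal import stft

def spectral_flux_onsets(x, fs, frame=2048, hop=512, delta=0.1):
    _, _, Z = stft(x, fs=fs, nperseg=frame, noverlap=frame - hop)
    mag = np.abs(Z)
    # Half-wave rectified frame-to-frame increase, summed over frequency:
    # bins that lose energy are discarded.
    diff = np.maximum(mag[:, 1:] - mag[:, :-1], 0.0)
    flux = diff.sum(axis=0)
    flux = flux / (flux.max() + 1e-12)
    # Naive peak picking: local maxima above a fixed threshold.
    peaks = [n for n in range(1, len(flux) - 1)
             if flux[n] > flux[n - 1] and flux[n] >= flux[n + 1]
             and flux[n] > delta]
    return [n * hop / fs for n in peaks]

# Toy usage: one second of noise with a click in the middle.
fs = 44100
x = 0.01 * np.random.randn(fs)
x[fs // 2] += 1.0
print(spectral_flux_onsets(x, fs))
\end{verbatim}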
\section{Perceived Attack Time}

Physical onset time (PhOT) is the actual acoustic beginning of an audio event, perceptual onset time (POT) is the moment listeners perceive the event, and perceptual attack time (PAT) is defined as the perceived moment of rhythmic placement of the event \cite{wright2008shape}. Most onset detection studies aim to find the PhOT or POT of musical events. Although the physical onset is useful for analysis of the audio, the PAT is more accurate for rhythmic performance assessment. Polfreman \cite{polfreman2013} evaluated nine different onset detection algorithms on five different onset types, concluding that the algorithms are not suitable for detecting the PAT of non-percussive sounds.

On the guitar, the POTs of single notes are close to their PATs, since the instrument is plucked and percussive. This is not the case for strummed chords. A strummed chord is a single musical object that consists of multiple onsets close to each other. Hove et al. \cite{hove2007sensorimotor} showed that the PAT (they used the term perceptual center) of two close tones depends on the pitch of the tones, their order, and the amount of time between them. Freire et al. \cite{freire2018strumming} studied the beat location of guitar strums as perceived by the players. In their experiment, the same excerpts were played by different musicians on an acoustic guitar with hexaphonic pickups that record each string separately. The results showed that each player aligns chords to the metronome differently.
{ "alphanum_fraction": 0.8199001427, "avg_line_length": 227.3513513514, "ext": "tex", "hexsha": "ddfca176355fcdcc4002581a88b7281375876c48", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5db53f36db7909215df1408b7ea4128dd9d47aa7", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "siyarvurucu/smc-thesis-source", "max_forks_repo_path": "related work/relatedwork.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5db53f36db7909215df1408b7ea4128dd9d47aa7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "siyarvurucu/smc-thesis-source", "max_issues_repo_path": "related work/relatedwork.tex", "max_line_length": 868, "max_stars_count": null, "max_stars_repo_head_hexsha": "5db53f36db7909215df1408b7ea4128dd9d47aa7", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "siyarvurucu/smc-thesis-source", "max_stars_repo_path": "related work/relatedwork.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1864, "size": 8412 }
%!TEX root = paper.tex \subsection{Depth-averaged \nheswe} The \nheswe\ is linked to the solution of the Euler equations using kinematic boundary conditions on the surface and the bottom. The basis of the method is a projection method (see e.g. \cite{Chorin.1968}) solving the time-discretized equations stepwise. The pressure is decomposed into a hydrostatic and a \nh\ part \cite{CasulliStelling.1998, StansbyZhou.1998}. This splitting has the advantage that the solver for the \nh\ equations can resort to the solver for the shallow water equations. The \da\ version of this apporach was derived from a multi-layer formulation \cite{StellingZijlema.2003} applying linear approximations between different layers, s.t. also the \nhp\ is assumed to be linear in the multi-layer equations. On the other hand, the vertical Euler equation leads to a quadratic vertical profile of the \nhp when considering \da\ equations. Therefore, there are two possibilities to choose the pressure profile. Under the assumption of small vertical variations of horizontal velocities, the \danheswe\ is derived out of the Euler equations for \da\ variables \begin{align} (u,v)=\bu:=\frac{1}{h}\int_{-d}^{\xi}{\bU}\,dz, \qquad w:=\frac{1}{h}\int_{-d}^{\xi}{W}\,dz, \qquad \pnh&:=\frac{1}{h}\int_{-d}^{\xi}{\Pnh}\,dz. \label{eq:def_pnh} \end{align} The \danheswe\ is described with the equation system \begin{align} \partial_t \xi+\bnabla \cdot (h\bu)=&0, \label{eq:nh_conti} \\ \partial_t \bu+(\bu \cdot \bnabla)\bu=&-g\bnabla \xi-\frac{1}{\rho h}\left(\bnabla \left(hp^{nh} \right) -\fnh\pnh \bnabla d\right), \label{eq:nh_Momxy} \\ \partial_t w+(\bu \cdot \bnabla)w=&\frac{1}{\rho h}\fnh\pnh, \label{eq:nh_Momz} \\ 2 \left(w+\bu \cdot \bnabla d \right)=&-h\left(\bnabla \cdot \bu\right), \label{eq:nh_closure} \end{align} where the scalar $\fnh$ refers to the chosen pressure profile. In case of the linear pressure profile \begin{equation} P^{nh}(z)=\frac{\Pnhd}{h}(\xi -z), \end{equation} with $\Pnhd$ being the \nhp\ at the bottom, it is $\fnh=2$, whereas it is $\fnh=1.5$ in the case of the quadratic pressure profile \begin{equation} P^{nh}(z)=\frac{1}{2}\frac{\Gamma}{h}\left(-(z+d)^2+h^2\right)%+\rho\Phi\left(\xi-z\right) \label{eq:Pnh_quadr_z} \end{equation} with \begin{align*} \Gamma &:= \rho h\left( -(\bnabla \cdot \partial_t \bu) - (\bu \cdot \bnabla) (\bnabla \cdot \bu) + (\bnabla \cdot \bu)^2 \right). \\ %\Phi &:= - \bnabla d \cdot \left( \partial_t \bu + (\bu \cdot \bnabla)\bu \right) - \bu \cdot \bnabla(\bnabla d) \cdot \bu. \end{align*} For a more detailed derivation see \cite{Jeschke.2016}. The spatial discretization of the shallow water equations implements the $P^{NC}_1$--$P_1$ finite element method as described in \cite{Hanert.2005, LeRouxPouliot.2008} with the $P^{NC}_1$--$P_1$ advection scheme of \cite{Androsov.2011}. It uses nonconforming linear basis functions for the horizontal velocities and conforming linear basis functions for the height. A two-dimensional computational domain is represented by a structured triangulation generated with the adaptive mesh generator amatos \cite{Behrens.2005}. The time-stepping scheme applies the Leapfrog method stabilized with the Robert-Asselin filter \cite{Asselin.1972} using $\alpha=0.025$. The \danheswe\ \eqref{eq:nh_conti}--\eqref{eq:nh_Momz} is solved based on the projection method proposed in \cite{StellingZijlema.2003}. The advantage of this method is that the shallow water solver does not have to change in order to solve the \nh\ equation system. 
This projection method involves the solution of a Poisson equation for the \nhp\ in each time step, which is constructed as follows: the time-discretized horizontal momentum equations are written as an intermediate solution at the present time step plus a correction term depending on the \nhp. Together with the time-discretized vertical momentum equation, this splitting is substituted into equation \eqref{eq:nh_closure} in its weak formulation to obtain the Poisson equation. The resulting linear system of equations is solved by means of a GMRES algorithm. The correction associated with the computed \nhp\ is then added to the intermediate horizontal velocities and to the vertical velocity of the previous time step. At this point, the velocities have been updated, and the numerical solution of the continuity equation completes the computation of the time step. To discretize the \nhp\ and the vertical velocity, conforming linear basis functions are used. The same approach was taken in \cite{Fuchs.2013}, but, in contrast to that work, we retain the bathymetry gradient term that is absent there.
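For illustration, the time integration described above can be summarized by the
following sketch of the leapfrog scheme with the Robert-Asselin filter
($\alpha=0.025$), written here for a scalar model equation
$\mathrm{d}q/\mathrm{d}t=f(q)$ rather than for the full equation system; it is not
an excerpt of the actual solver.
\begin{verbatim}
import numpy as np

def leapfrog_ra(f, q0, dt, nsteps, alpha=0.025):
    q_prev = q0
    q_curr = q0 + dt * f(q0)              # start-up step: forward Euler
    for _ in range(nsteps - 1):
        q_next = q_prev + 2.0 * dt * f(q_curr)        # leapfrog step
        # Robert-Asselin filter damps the computational mode:
        q_curr = q_curr + alpha * (q_next - 2.0 * q_curr + q_prev)
        q_prev, q_curr = q_curr, q_next
    return q_curr

# Example: dq/dt = -q, exact solution exp(-t).
print(leapfrog_ra(lambda q: -q, 1.0, 1.0e-3, 1000), np.exp(-1.0))
\end{verbatim}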
{ "alphanum_fraction": 0.7474417592, "avg_line_length": 127.5833333333, "ext": "tex", "hexsha": "3bace8c6122e8bc2b1e8798f12d006fa68e4c848", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mandli/coastal", "max_forks_repo_path": "doc/papers/theoretical_1d/M_nonhydrostatic.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mandli/coastal", "max_issues_repo_path": "doc/papers/theoretical_1d/M_nonhydrostatic.tex", "max_line_length": 1094, "max_stars_count": null, "max_stars_repo_head_hexsha": "8c80a4c740f92ea83b54c8a5432d11058c0d3476", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mandli/coastal", "max_stars_repo_path": "doc/papers/theoretical_1d/M_nonhydrostatic.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1344, "size": 4593 }
%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%% % % This is the LaTeX source for the instructions to authors using % the LaTeX document class 'llncs.cls' for contributions to % the Lecture Notes in Computer Sciences series. % http://www.springer.com/lncs Springer Heidelberg 2006/05/04 % % It may be used as a template for your own input - copy it % to a new file with a new name and use it as the basis % for your article. % % NB: the document class 'llncs' has its own and detailed documentation, see % ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[runningheads,a4paper]{llncs} \usepackage{amssymb} \setcounter{tocdepth}{3} \usepackage{graphicx} \usepackage{multirow} \usepackage{subfigure} \usepackage{graphics} \usepackage{longtable} \usepackage{rotating} \usepackage{verbatim} \usepackage{float} %\usepackage[backend=bibtex8]{biblatex} \usepackage{pbox} \usepackage{url} \urldef{\mailsa}\path|{alfred.hofmann, ursula.barth, ingrid.haas, frank.holzwarth,| \urldef{\mailsb}\path|anna.kramer, leonie.kunz, christine.reiss, nicole.sator,| \urldef{\mailsc}\path|erika.siebert-cole, peter.strasser, lncs}@springer.com| \newcommand{\keywords}[1]{\par\addvspace\baselineskip \noindent\keywordname\enspace\ignorespaces#1} \usepackage{listings} \usepackage{color} \definecolor{javared}{rgb}{0.6,0,0} % for strings \definecolor{javagreen}{rgb}{0.25,0.5,0.35} % comments \definecolor{javapurple}{rgb}{0.5,0,0.35} % keywords \definecolor{javadocblue}{rgb}{0.25,0.35,0.75} % javadoc \lstset{language=Java, basicstyle=\ttfamily, keywordstyle=\color{javapurple}\bfseries, stringstyle=\color{javared}, commentstyle=\color{javagreen}, morecomment=[s][\color{javadocblue}]{/**}{*/}, %numbers=left, numberstyle=\tiny\color{black}, stepnumber=2, numbersep=10pt, tabsize=4, showspaces=false, showstringspaces=false} \def\rot{\rotatebox} \begin{document} \mainmatter % start of an individual contribution % first the title is needed \title{Evaluation of ADFD and ADFD$^+$ techniques} % a short form should be given in case it is too long for the running head \titlerunning{Evaluation of ADFD and ADFD$^+$ techniques} % the name(s) of the author(s) follow(s) next % % NB: Chinese authors should write their first names(s) in front of % their surnames. This ensures that the names appear correctly in % the running heads and the author index. % \author{Mian Asbat Ahmad% %\thanks{Please note that the LNCS Editorial assumes that all authors have usedn the western naming convention, with given names preceding surnames. This determines the structure of the names in the running heads and the author index.}% \and Manuel Oriol} % \authorrunning{Evaluation of ADFD and ADFD$^+$ techniques} %: Authors' Instructions} % (feature abused for this document to repeat the title also on left hand pages) % the affiliations are given next; don't give your e-mail address % unless you accept that it will be published \institute{University of York, Department of Computer Science,\\ Deramore Lane, YO10 5GH YORK, United Kingdom\\ %\mailsa\\ %\mailsb\\ %\mailsc\\ %\url{http://www.springer.com/lncs}} } % % NB: a more complex sample for affiliations and the mapping to the % corresponding authors can be found in the file "llncs.dem" % (search for the string "\mainmatter" where a contribution starts). % "llncs.dem" accompanies the document class "llncs.cls". 
% \toctitle{Lecture Notes in Computer Science}
\tocauthor{Authors' Instructions}
\maketitle


\begin{abstract}
The ever-increasing reliance on software-intensive systems is driving research into discovering software faults more efficiently. Despite intensive research, very few approaches have studied and used knowledge about fault domains to improve testing or the feedback given to developers. This shortcoming was addressed by the ADFD and ADFD$^+$ strategies presented in our previous publications, which automatically find a failure and its domain within a specified range and present them graphically; exhaustive testing in a limited region around the detected failure is the key to both techniques. In the present study, the two strategies were enhanced by integrating the automatic invariant detector Daikon, and the precision with which they identify failure domains was determined through an extensive experimental evaluation of real-world Java projects contained in the Qualitas Corpus. The results, cross-checked by manual testing, indicate that ADFD and ADFD$^+$ provide highly effective assistance to manual testing but are not an alternative to it given the limited available resources.

\keywords{software testing, automated random testing, manual testing, ADFD, ADFD$^+$, Daikon}
\end{abstract}


\section{Introduction}

The input domain of a given System Under Test (SUT) can be divided into two sub-domains: the pass domain comprises the values for which the software behaves correctly, and the failure domain comprises the values for which the software behaves incorrectly. Chan et al.~\cite{chan1996proportional} observed that failure-causing inputs are contiguous and form certain geometrical shapes, which they divided into point, block and strip failure domains as shown in Figure~\ref{fig:patterns}. Adaptive Random Testing achieved up to 50\% better performance than random testing by taking the presence of failure domains into consideration when selecting test inputs~\cite{Chen2008}.
\smallskip \begin{figure} [H] \centering \subfigure[Point domain]{ \includegraphics[width=2.5cm,height=2cm]{point.png} \label{fig:point} } \subfigure[Block domain]{ \includegraphics[width=2.5cm,height=2cm]{block.png} \label{fig:block} } \subfigure[Strip domain]{ \includegraphics[width=2.5cm,height=2cm]{strip.png} \label{fig:strip} } \smallskip \caption{Failure domains across input domain~\cite{chan1996proportional}} \label{fig:patterns} \end{figure} %Adaptive Random Testing (ART) exploited the existence of the failure-domains and resultantly achieved up to 50\% better performance than random testing~\cite{Chen2008}. This was mainly attributed to the better distribution of input which increased the chance of selecting inputs from failure-domains. This insight motivated us to increase our understanding of failure-domains in production software. %The cost of software testing constitute about half of the total cost of software development~\cite{myers2011art}. Software testing is an expensive but essential process which is particularly time consuming, laborious and error-prone when performed manually. Alternatively, automated software testing may involve higher initial cost but brings the key benefits of lower cost of production, higher productivity, maximum availability, greater reliability, better performance and ultimately proves highly beneficial for any organisation~\cite{Beizer1990}. A case study conducted by Pacheco et al. reveals that the 150 hours of automated testing found more faults in complex .NET code than a test engineer finds in one year by manual testing~\cite{pacheco2008finding}. We have developed two fully automated techniques ADFD~\cite{ahmad2013adfd} and ADFD$^+$~\cite{ahmad2014adfd2}, which effectively find failures and failure domains in a specified range and also provide visualisation of the pass and fail domains. The process is accomplished in two steps. In the first step, an upgraded random testing is used to find the failure. In the second step, exhaustive testing is performed in a limited region around the detected failure in order to identify the domains. The ADFD searches in one-dimension and covers longer range than ADFD$^+$ which is more effective in multi-dimension and covers shorter range. Three separate tools including York Extensible Testing Infrastructure (YETI), Daikon and JFreeChart have been used in combination for developing ADFD and ADFD$^+$ techniques. The YETI~\cite{Oriol2011yeti}, Daikon~\cite{ernst2007daikon} and JFreeChart~\cite{gilbert2008jfreechart} are used for testing the program, generating invariants and plotting the pass and fail domains respectively. %Software testing can be performed either automatically or manually. Both the techniques have their own advantages and limitations. The main advantage of automated testing is execution of large number of tests in little time, whereas manual testing utilizes the tester experience to concentrate on error-prone part of the SUT and generate target oriented test cases~\cite{Leitner2007}. %The analysis of failures and failure-domains discovered in 57 classes from 25 open source Java projects of Qualitas Corpus through three different techniques---ADFD, ADFD+ and Manual testing---reveals that each is good at uncovering different type of failure-domains and each brings distinct contributions. The rest of the paper is organized as follows: \S~2 presents enhancement of the techniques. \S~3 shows the difference in working mechanism of the two techniques by a motivating example. 
\S~4 highlights the key research questions. \S~5 describes the evaluation process comprising experiments, results and answers to the research questions. \S~6 presents threats to validity while \S~7 points out the related work. Finally, \S~8 presents conclusion of the study. %%%%%%%%%%%%%%%%% ADFD and ADFD+ %%%%%%%%%%%%%%%%%%% %\section{Automated Techniques} %The two automated techniques used in our experiments are ADFD and ADFD+. A short overview of both the techniques and the enhancements that have been made to the techniques along with a motivating example are given below: %\subsection{Overview of ADFD technique} %Automated Discovery of Failure Domain is an automated technique for finding and drawing the failure domain of detected failure in the input domain. ADFD searches for failure domain between the specified lower and upper bound in uni-direction. It test and note only the ranges of pass and fail values and uses the scatter graph of the JFreeChart to plot them on the screen. For more details please see \cite{ahmad2013adfd}. %\subsection{Overview of ADFD+ technique} %Automated Discovery of Failure Domain+ is an automated technique for finding and drawing the failure domain of detected failure in the input domain. ADFD+ searches for failure domain around the failure in specified range in multi-idirection. It test and note individual value as either pass or fail. The values are drawn using the vector graph of the JFreeChart. For more details please see \cite{ahmad2014adfd2}. \section{Enhancement of the techniques} Prior to experimental evaluation, new features were incorporated in ADFD and ADFD$^+$ techniques to: increase the code coverage, provide information about the identified failure and generate invariants of the detected failure domains as stated below: \begin{enumerate} \item The GUI was enabled to launch all the strategies defined in YETI from a single interface. As an example, if ADFD strategy is selected for testing, the system automatically hides the field (range value) associated with ADFD$^+$ and displays two fields of lower and upper bounds. On the other hand if ADFD$^+$ strategy is selected for testing, the system automatically hides the two fields (lower and upper bounds) associated with ADFD technique and displays a single field of range value. \item Code coverage was increased by extending the techniques to support the testing of methods with \verb+byte, short, long, double+ and \verb+float+ type arguments while it was restricted to \verb+int+ type arguments only in the original techniques. \item Invariants of the detected failure domains were automatically generated by integrating the tool Daikon in the two techniques. Daikon is an automated invariant detector that detects likely invariants in the program~\cite{ernst2007daikon}. The generated invariants are displayed in GUI at the end of test execution. \item The screen capture button was added to the GUI to allow the user to capture multiple screen-shots at different intervals of testing for future reference. \end{enumerate} %\item Additional information was facilitated by adding the YETI generated failure finding test case to the GUI of the two techniques. Test case included type of failure, name of the failing class, name of the failing method, values causing the failure and line number of the code causing failure. 
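To make the role of the lower and upper bounds concrete, the following sketch illustrates the kind of one-dimensional sweep on which ADFD is based: every value between the two bounds is executed against the method under test and consecutive values with the same outcome are merged into pass and fail ranges. The sketch is only illustrative; the class and method names (\texttt{OneDimensionalSweep}, \texttt{sweep}, \texttt{Range}) are ours and do not correspond to the actual YETI/ADFD implementation.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongPredicate;

/** Illustrative one-dimensional sweep in the spirit of ADFD (not the actual YETI code). */
public class OneDimensionalSweep {

    /** A contiguous range of inputs that all pass or all fail. */
    static final class Range {
        final long from, to;
        final boolean failing;
        Range(long from, long to, boolean failing) {
            this.from = from; this.to = to; this.failing = failing;
        }
        @Override public String toString() {
            return (failing ? "FAIL " : "PASS ") + "[" + from + ", " + to + "]";
        }
    }

    /** Runs the method under test for every value in [lower, upper] and merges
     *  consecutive values with the same outcome into ranges. */
    static List<Range> sweep(long lower, long upper, LongPredicate fails) {
        List<Range> ranges = new ArrayList<>();
        long start = lower;
        boolean current = fails.test(lower);
        for (long i = lower + 1; i <= upper; i++) {
            boolean outcome = fails.test(i);
            if (outcome != current) {               // outcome changed: close the range
                ranges.add(new Range(start, i - 1, current));
                start = i;
                current = outcome;
            }
        }
        ranges.add(new Range(start, upper, current));
        return ranges;
    }

    public static void main(String[] args) {
        // A NegativeArraySizeException for every negative size forms a strip domain.
        List<Range> ranges = sweep(-20, 20, i -> {
            try {
                int[] a = new int[(int) i];         // fails for i < 0
                return false;                       // passed
            } catch (RuntimeException e) {
                return true;                        // failed
            }
        });
        ranges.forEach(System.out::println);        // FAIL [-20, -1]  PASS [0, 20]
    }
}
\end{lstlisting}

For the example in \texttt{main}, the sweep reports a single failing range $[-20, -1]$, i.e.\ a strip domain of the kind discussed in the experimental results.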
\section{Difference in working mechanism of the two techniques}

The difference in the working mechanism of ADFD and ADFD$^+$ when identifying failure domains is illustrated by testing a simple Java program (given below) with both techniques. As the program code shows, a failure is raised for 12 combinations of \textit{x} in \{4,\,\dots,\,8\} and \textit{y} in \{2, 3, 4\}: five failing values of \textit{x} for $y = 2$, four for $y = 3$ and three for $y = 4$. These 12 failing inputs form a block failure domain in the input domain.

\begin{lstlisting}
/**
 * A program with a block failure domain.
 * @author (Mian and Manuel)
 */
public class BlockErrorPlotTwoShort {

    public static void blockErrorPlot (int x, int y) {
        int z;
        if ((x >= 4) && (x <= 8) && (y == 2)) {
            z = 50 / 0;   /* error: division by zero */
        }
        if ((x >= 5) && (x <= 8) && (y == 3)) {
            z = 50 / 0;   /* error: division by zero */
        }
        if ((x >= 6) && (x <= 8) && (y == 4)) {
            z = 50 / 0;   /* error: division by zero */
        }
    }
}
\end{lstlisting}

The test output generated by the ADFD technique is presented in Figure~\ref{fig:ADFD}.
The labelled graph shows 4 of the 12 failing values in red, whereas the passing values are shown in blue. The generated invariants identify all but one of these failing values ($x = 4$), because ADFD scans the values in only one dimension around the failure. The test case shows the type of failure, the name of the failing class, the name of the failing method, the values causing the failure and the line number of the code causing the failure.

\begin{figure}[h]
\makebox[\textwidth][c]{\includegraphics[width=1.2\textwidth]{adfdCombined.png}}%
\caption{Graph, invariants and test case generated by ADFD for the given code}
\label{fig:ADFD}
\end{figure}

The test output generated by the ADFD$^+$ technique is presented in Figure~\ref{fig:ADFD+}. The labelled graph correctly shows all 12 failing values in red, whereas the passing values are shown in blue. The invariants correctly represent the failure domain. The test case again shows the type of failure, the name of the failing class, the name of the failing method, the values causing the failure and the line number of the code causing the failure.

\begin{figure}[h]
\makebox[\textwidth][c]{\includegraphics[width=1.2\textwidth]{adfdPlusCombined.png}}%
\caption{Graph, invariants and test case generated by ADFD$^+$ for the given code}
\label{fig:ADFD+}
\end{figure}

The comparative results obtained by running the two techniques on this program indicate that ADFD$^+$ is more efficient than ADFD at identifying failures in two-dimensional programs. ADFD and ADFD$^+$ perform equally well on one-dimensional programs, but ADFD covers a wider range around the first failure and is comparatively economical because it uses fewer resources than ADFD$^+$. A simplified sketch of the neighbourhood scan performed by ADFD$^+$ is given below.
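This sketch assumes a square neighbourhood of radius \texttt{range} around the first failing pair and that the class \texttt{BlockErrorPlotTwoShort} from the listing above is on the classpath; all other identifiers are illustrative and are not taken from the YETI/ADFD$^+$ implementation.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

/** Illustrative neighbourhood scan in the spirit of ADFD+ (not the actual YETI code). */
public class NeighbourhoodScan {

    /** A two-argument method under test. */
    interface TwoIntMethod {
        void call(int x, int y);
    }

    /** Exhaustively tests every (x, y) within +/- range of the first failing input
     *  and records the failing points, which together outline the failure domain. */
    static List<int[]> scan(int failX, int failY, int range, TwoIntMethod sut) {
        List<int[]> failing = new ArrayList<>();
        for (int x = failX - range; x <= failX + range; x++) {
            for (int y = failY - range; y <= failY + range; y++) {
                try {
                    sut.call(x, y);                 // pass: nothing recorded
                } catch (RuntimeException e) {
                    failing.add(new int[] {x, y});  // fail: part of the failure domain
                }
            }
        }
        return failing;
    }

    public static void main(String[] args) {
        // Suppose the random+ step first hit the failure at (6, 3) in BlockErrorPlotTwoShort.
        List<int[]> domain = scan(6, 3, 5, BlockErrorPlotTwoShort::blockErrorPlot);
        System.out.println(domain.size() + " failing points");   // 12 for the listing above
    }
}
\end{lstlisting}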
%\begin{figure}[H] %\centering %\includegraphics[width= 12.5cm,height=5.5cm]{adfdCombined.png} %\includegraphics[width= 8.5cm,height=7cm]{adfdAround1.png} %\includegraphics[width= 8.5cm,height=7cm]{adfdAround2.png} %\caption{Graph, Invariants and test case generated by ADFD} %\label{fig:ADFD} %\end{figure} %The test output generated by ADFD technique is presented in Figure~\ref{fig:ADFD}. The labelled graph correctly shows the 4/12 available failing values in red whereas the passing values are shown in blue. The invariants identify all but one failing values ($x = 4$). This is due to the fact that ADFD scans the values in one dimension around the failure. The test case shows the type of failure, the values causing the first failure and the stack trace of the failure. %The comparative results derived from the execution of the two techniques on the selected program indicate that ADFD+ is more efficient than ADFD in identification of failures in two dimensional program. ADFD and ADFD+ performs equally well in one-dimensional program but ADFD covers more range around the first failure than ADFD+ and is comparatively economical because it uses less resources than ADFD+. \section{Research questions} \label{sec:questions} The following research questions have been addressed in the study: \begin{enumerate} % \item What is the relevance of ADFD and ADFD$^+$ techniques in identification and presentation of failure domains in production software? % ORIGINAL % %\item Can ADFD and ADFD+ techniques identify and present failure-domains in production software? %\\%The experimental results claiming the correct identification of ADFD and ADFD+ were based on the purpose build error-seeded programs~\cite{}. To answer the question, we applied the two techniques to all the projects of Qualitas Corpus and examined the results. %\item \textit{If the graph and invariants generated, correctly represent the failure domains?} %Invariants generated by Daikon can identify the start and stop of the failure domain. To answer this question we compared the generated invariants with the source code and the failure-domain presented in graphical form. % % \item What types and frequencies of failure domains exist in production software? %\item What types and frequencies of failure-domains exist in production software? %\\%There are strategies~\cite{}. that exploit the presence of block and strip failure-domain to get better results. Therefore identifying the presence of underlying failure-domains in production software can help in high quality of software testing. To answer the questions, we reviewed all the classes containing failure-domains manually, automatically and graphically. % \item What is the nature of identified failure domain and how it affects the automated testing techniques? %\item What is the nature of identified failure-domain and how it affects the testing techniques? % \\% An interesting point is to know what failure is responsible for a failure-domain and how difficult it is to identify that failure by manual testing. To answer this question, we studied the test logs and test output of the automated testing and the source code of the program manually to identify the cause and complexity of failures of failure-domains. %\item \textit{If the presence of a particular failure-domain can make it easy or hard to find using automated and manual techniques?} %Failure-domain can reside in the form of point, block or strip shape in the input domain. 
To answer this question we analysed the source code of all the programs in which failure-domains were detected. % %\item \textit{If the graph generated by ADFD correctly represent the pass and fail domains?} Both the ADFD and ADFD+ techniques generate graphs to represent failure-domains for simplicity. To answer the question we compared the generated graphs with the source code and the invariants generated by Daikon. % %\item If obtained results consistent with previous theoretical and practical results presented? %As per our knowledge, till now no specific study has been conducted to automatically identify the pass and fail domains however it has been claimed by some researchers~\cite{} that there exist more block and strip patterns then the point patterns. % %\item What is the external validity of the results obtained?\\ \end{enumerate} \section{Evaluation} Experimental evaluation of ADFD and ADFD$^+$ techniques was carried out to determine: the effectiveness of the techniques in identifying and presenting the failure domains, the types and frequencies of failure domains, the nature of error causing a failure domain and the external validity of the results obtained. %\section{Evaluation} %Experimental evaluation of ADFD and ADFD+ techniques was carried out to determine: the effectiveness of the techniques in identifying and presenting the failure-domains, the types and frequencies of failure-domains, the nature of error causing failure-domain and the external validity of the results obtained. \subsection{Experiments} In the present experiments, we tested all 106 packages of Qualitas Corpus containing the total of 4000 classes. Qualitas Corpus was selected because it is a database of Java programs that span across the whole set of Java applications and is specially built for empirical research which takes into account a large number of developmental models and programming styles. All packages included in Qualitas Corpus are open source with an easy access to the source code. For experimental purpose, the main ``.jar'' file of each package was extracted to get the ``.class'' files as appropriate input for YETI. All 4000 classes were individually tested. The classes containing one and two-dimensional methods with arguments (int, long, float, byte, double and short) were selected for experimental analysis. Non-numerical arguments and more than two-dimensional methods were ignored because the two proposed techniques support the testing of one and two dimensional methods with numerical arguments. Each test took 40 seconds on the average to complete the execution. The initial 5 seconds were used by YETI to find the first failure while the remaining 35 seconds were jointly consumed by ADFD/ADFD$^+$ technique, JFreeChart and Daikon to identify, draw graph and generate invariants of the failure domains respectively. The machine took approximately 500 hours to perform the experiments completely. Due to the absence of contracts and assertions in the code under test, undeclared exceptions were taken as failures in accordance with the previous studies~\cite{oriol2012random}, \cite{ahmad2013adfd}. The source code of the programs containing failure domains were also evaluated manually to cross-examine the experimental results. In accordance with Chan et al.~\cite{chan1996proportional}, classification of failure domain into various types was based on the number of contiguous failures detected in the input-domain as shown in Table~\ref{table:resultsSummary}. 
If the number of contiguous failures detected is between 1 and 5, between 6 and 49, or 50 and above, the failure domain is classified as point, block or strip type respectively. If more than one type of domain is detected in a program, it is termed a mix type.

\begin{table}[h]
\scriptsize
\caption{Classification of failure domains}
\centering
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}{| r | l | l |}
\hline
S. No & Type of failure domain & No. of contiguous failures \\
\hline
1 & Point & 1 to 5 \\
\hline
2 & Block & 6 to 49 \\
\hline
3 & Strip & 50 \& above \\
\hline
 & & point \& block \\
4 & Mix & point \& strip \\
 & & point, block \& strip \\
\hline
\end{tabular}
}
\label{table:resultsSummary} % is used to refer this table in the text
\end{table}

%All experiments were conducted with a 64-bit Mac OS X Mountain Lion version 10.8.5 running on 2.7 GHz Intel Core i7 with 16 GB (1600 MHz DDR3) of RAM.
YETI runs on top of the Java\texttrademark\ SE Runtime Environment [version 1.7.0\_45]. The ADFD and ADFD$^+$ executable files are available at \url{https://code.google.com/p/yeti-test/downloads/list/}. Daikon and JFreeChart can be obtained separately from \url{http://plse.cs.washington.edu/daikon/} and \url{http://www.jfree.org/jfreechart/download.html} respectively.

\subsection{Results}

The testing of 106 Java packages comprising 4000 classes revealed various types of failure domains in 57 classes belonging to 25 packages.
The details pertaining to project, class, method, dimension, line of code (LOC) and type of detected failure domains for each class are given in Table 3. Out of the total of 57 methods indicated in the table, 10 methods are two-dimensional while the remaining 47 methods are one-dimensional. A total number of 17262 lines of code spread across 57 classes in various proportions as shown in the table. The results obtained show that out of 57 classes 2 contain point failure domain, 1 contains block failure domain, 50 contain strip failure domain and 4 contain mix failure domain. %Mix failure domain includes the combination of two or more types of failure domains including point \& block, point \& strip and point, block \& strip. %Among 106 packages we found 25 packages containing 57 classes with different types of failure-domains. Based on the type of failure-domains the results are presented in Table \ref{table:stripDomains}, \ref{table:pointDomains}, \ref{table:blockDomains}, \ref{table:mixDomains}. The information available in the table includes the class showing failure domain, the method involved, the invariants generated by ADFD and ADFD+ (automatic techniques) and by manual analysis. %Classification of failure-domains into strip, point, block and mix types is based on the degree of contiguity of failures detected in the input-domain as shown in Table~\ref{table:results}. If failures detected as contiguous are 50 or more, the failure-domain is classified as strip. If failures detected as contiguous lie in the range of 1 to 5, the failure domain is classified as point. If failures detected as contiguous lie in the range of 6 to 49, the failure domain is classified as block. If more than one type of failure domains are detected in the input domain, the domain is classified as mix. %The results obtained show that out of 57 classes 50 contain strip failure domain,2 contain point failure domain, 1 contain block failure domain and 4 contain mix failure domain. Mix failure-domain includes the combination of two or more failure domain types including point \& strip, point \& block and point, block \& strip. Invariants generated by manual and automated techniques, and analysis of the source code is also performed to differentiate the simplicity and complexity of the identified failure-domains as shown in Table~\ref{table:results}. Further explanation is available in the Nature of failure-domain subsection. The key research questions identified in the previous section are individually addressed in the following. %The failure-domains were declared as strip failure-domains if 50 or more contagious failures were detected. %Accordingly, in 48 out of 57 classes strip failure-domains were detected as shown in Table~\ref{table:stripDomains}. %The failure-domains were declared as point failure-domains if more than 1 and less than 5 contagious failures were detected. Accordingly, in 4 out of 57 classes point failure-domains were detected as shown in Table~\ref{table:pointDomains}. %The failure-domains were declared as block failure-domains if more than 5 and less than 50 contagious failures were detected. Accordingly, in 2 out of 57 classes block failure-domains were detected as shown in Table~\ref{table:blockDomains}. %The remaining 2 classes contained two types of failure-domains i.e one containing both point and block failure-domain and the other containing point and Strip failure-domain as shown in Table~\ref{table:mixDomains}. 
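Since the classification in Table~\ref{table:resultsSummary} depends only on the number of contiguous failing inputs, it can be computed mechanically from the outcome of a scan. The sketch below shows such a classifier for a one-dimensional scan; it uses the thresholds of the classification table, leaves out the mix category (several distinct domains in one class) for brevity, and all names are our own rather than part of ADFD or ADFD$^+$.

\begin{lstlisting}
/** Classifies a scanned region by its largest run of contiguous failures:
 *  1-5 point, 6-49 block, 50 and above strip (mix is not modelled here). */
public class FailureDomainClassifier {

    enum Kind { NONE, POINT, BLOCK, STRIP }

    /** outcomes[i] is true if the i-th consecutive input in the scanned range failed. */
    static Kind classify(boolean[] outcomes) {
        int longestRun = 0, run = 0;
        for (boolean failed : outcomes) {
            run = failed ? run + 1 : 0;             // extend or reset the current run
            longestRun = Math.max(longestRun, run);
        }
        if (longestRun == 0)  return Kind.NONE;
        if (longestRun <= 5)  return Kind.POINT;
        if (longestRun <= 49) return Kind.BLOCK;
        return Kind.STRIP;
    }

    public static void main(String[] args) {
        boolean[] outcomes = new boolean[200];
        for (int i = 0; i < 60; i++) outcomes[i] = true;   // 60 contiguous failures
        System.out.println(classify(outcomes));            // STRIP
    }
}
\end{lstlisting}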
%\begin{table}[h] %\scriptsize %\caption{Results of the experiments} %\centering %{\renewcommand{\arraystretch}{1.5} %\begin{tabular}{| l | l | l | l | l | l | l | l | l | l | l | } %\hline %Failure domain & Contiguous failures & \rot{90}{No. of classes} & \rot{90}{No. of failure-domains} & \rot{90}{Easy to Find FD by ADFD} & \rot{90}{Easy to Find FD by ADFD+} & \rot{90}{Easy to Find FD by MT} & \rot{90}{Hard to find FD by ADFD} & \rot{90}{Hard to find FD by ADFD+} & \rot{90}{Hard to find FD by ADFD+}\\ %\hline %Strip & 50 or more & 50 & 50 & 50 & 45 & 48 & 0 & 5 & 2 \\ %Point & between 1 and 5 & 2 & 2 & 2 & 2 & 2 & 0 & 0 & 0 \\ %Block & between 6 and 49 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0\\ %Mix & & & & & & & & & \\ % & point and strip & 3 & 3 & 3 & %0 & 2 & 0 & 3 & 1\\ % & point and block & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ % & point, block \& strip & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1\\ %\hline %Total & & 57 & 57 & 57 & 48 & 53 & 1 & 9 & 4\\ %\hline %\end{tabular} %} %\label{table:results} % is used to refer this table in the text %\end{table} \subsubsection{Effectiveness of ADFD and ADFD$^+$ techniques} The experimental results confirmed the effectiveness of the techniques by discovering all three types of failure domains (point, block and strip) across the input domain. The results obtained by applying the two automated techniques were verified: by manual analysis of the source code of all 57 classes; by cross checking the test case, the graph and the generated invariants of each class; by comparing the invariants generated by automatic and manual techniques. The identification of failure domain by both ADFD and ADFD$^+$ is dependant upon the detection of failure by random$^+$ strategy in YETI. Because only after a failure is identified, its neighbouring values are analysed according to the set range to plot the failure domain. The generation of graph and invariants and the time of test execution directly depends on the range value, if the range value of a technique is greater, the presentation of failure domain is better and the execution time required is higher. This is due to the testing and handling of greater number of test cases when the range is set to a bigger level. Comparatively, ADFD requires fewer resources than ADFD$^+$ therefore it is less influenced by the range value. %\subsubsection{Effectiveness of ADFD and ADFD+ techniques:} %The effectiveness of ADFD and ADFD+ techniques for identifying failure-domains in production software was demonstrated. The experimental results confirmed the effectiveness of the techniques by discovering all three types of failure-domains (point, block and strip) across the input domain. The results obtained by applying the two automated techniques were verified: by manual analysis of the source code of all 57 classes containing failure domains; by cross checking the test case, the graph and the generated invariants of each class; by comparing the invariants generated by automatic and manual techniques. %The identification of failure domain by both ADFD and ADFD+ is dependant on the identification of failure by ADFD and ADFD+ strategy in YETI. Because only after a failure is identified, its neighbour values according to the set range are analysed and failure domain of the failure is plotted. %The generation of graph and invariants depends on range value, the greater the range value of a technique the better is the presentation of failure domain. 
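As a rough illustration of why the range value dominates this cost, assume (as a simplification on our part) that ADFD sweeps a single interval $[l, u]$ while ADFD$^+$ scans a square neighbourhood of radius $r$ around the first failing pair. The number of executions of the method under test then grows as
\[
  T_{\mathrm{ADFD}} \approx u - l + 1, \qquad T_{\mathrm{ADFD}^{+}} \approx (2r + 1)^{2},
\]
so doubling the range roughly doubles the work for ADFD but quadruples it for ADFD$^+$, which is consistent with ADFD being the more economical of the two.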
The generation of graph and invariants starts from the minimum range value and ends at the maximum range value around the detected failure value. The ADFD requires less resources and is thus capable of handling greater range value as compared to ADFD+. %For example consider the following code under test. If the range value of ADFD is from -100 to 100 and the range value for ADFD+ is from -10 to 10 then the invariants generated to represent the failure domain by ADFD will be $ i one of \{ -1, -100 \} $ while for ADFD+ they will be $ i one of \{-1, -10\} $. Similarly the invariants generated to represent the failure-domain manually will be $ i <= -1 $. The presentation can be further improved if the value of range is extended to Integer.MIN\_INT and Integer.MAX\_INT . %\smallskip %\begin{lstlisting} %/** %* A program with strip failure-domain. %* @author (Mian and Manuel) %*/ %public class StripErrorPlot { % public static void stripErrorPlot (int x){ % int a[] = new int[x]; % } %} %\end{lstlisting} %\smallskip %With all the effectiveness of automated techniques we still believe that ADFD and ADFD+ cannot be used as replacement of manual testing however it should be used to assist the manual testing for achieving higher quality. \subsubsection{Type and Frequency of Failure domains} As evident from the results given in Table 4 - 7, all the three techniques (ADFD, ADFD$^+$ and Manual) detected the presence of strip, point and block types of failure domains in different frequencies. The results obtained show that out of 57 classes 2 contain point failure domain, 1 contains block failure domain, 50 contain strip failure domain and 4 contain mix failure domain. Mix failure domain includes the combination of two or more types of failure domains including point \& block, point \& strip and point, block \& strip. %Out of 57 classes containing failure domains, 50 classes showed strip failure domain, 2 point failure domain, 1 block failure domain and 4 mix failure domains. The discovery of higher number of strip failure domains may be attributed to the fact that a limited time of 5 seconds were set in YETI testing tool for searching the first failure. The ADFD and ADFD$^+$ strategies set in YETI for testing the classes are based on random$^+$ strategy which gives high priority to boundary values, therefore, the search by YETI was prioritised to the boundary area where there were greater chances of occurrence of failures constituting strip failure domain. %It may be noted that YETI, which is used to find the first failure, is executed only for five seconds which uses ADFD and ADFD+ testing strategies. Both the strategies are based on random+ strategy which gives high priority to boundary values. It may be possible that the high number of strip failure-domains are detected because most of the failures are found at the boundaries. %\subsubsection{Type and Frequency of Failure-domains:} %As evident from the results given in Table 3 - 6, all the three techniques (ADFD, ADFD+ and Manual) detected the presence of strip, point and block types of failure domains in different frequencies. Out of 57 classes containing failure domains, 50 classes showed strip failure domain, 2 point failure domain, 1 block failure domain and 4 mix failure domains. %The discovery of higher number of strip type of failure domains may be attributed to the fact that a limited time of 5 seconds were set in YETI testing tool for searching the first failure. 
% ADD TABLES AT THE END RELATED TO THIS SECTION %

\subsubsection{Nature of failure domain}
\label{sec:nature}

The nature of the failure domains identified by the two automatic techniques (ADFD and ADFD$^+$) and by the manual technique was examined in terms of simplicity and complexity by comparing the invariants generated by the automatic techniques with those obtained manually. The results were split into six categories (two per technique) on the basis of the simplicity and complexity of the failure domains identified by each of the three techniques. The comparative results show that ADFD, ADFD$^+$ and manual testing easily detect 56, 48 and 53 failure domains respectively, while the remaining 1, 9 and 4 failure domains are detected only with difficulty, as shown in Table~\ref{table:simpleComplex}. The analysis of the generated invariants indicates that failure domains which are simple in nature are easy to detect with both automated and manual techniques, whereas failure domains which are complex in nature are hard to detect with either.

\begin{table}[h]
\scriptsize
\caption{Simplicity and complexity of failure domains (FD) as found by the three techniques}
\centering
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}{| c | r | r | r | r | r | r | r | r | }
\hline
\rot{90}{\pbox{20cm}{Type of \\failure domain}} & \rot{90}{\pbox{20cm}{No. of \\classes}} & \rot{90}{\pbox{20cm}{No. of \\FD}} & \rot{90}{\pbox{20cm}{Easy to find \\FD by ADFD}} & \rot{90}{\pbox{20cm}{Easy to find \\FD by ADFD$^+$}} & \rot{90}{\pbox{20cm}{Easy to find \\FD by MT}} & \rot{90}{\pbox{20cm}{Hard to find \\FD by ADFD}} & \rot{90}{\pbox{20cm}{Hard to find \\FD by ADFD$^+$}} & \rot{90}{\pbox{20cm}{Hard to find \\FD by MT}}\\
\hline
Point & 2 & 2 & 2 & 2 & 2 & 0 & 0 & 0 \\
\hline
Block & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 \\
\hline
Strip & 50 & 50 & 50 & 45 & 48 & 0 & 5 & 2 \\
\hline
Mix (point \& block) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Mix (point \& strip) & 3 & 3 & 3 & 0 & 2 & 0 & 3 & 1 \\
Mix (point, block \& strip) & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\
\hline
Total & 57 & 57 & 56 & 48 & 53 & 1 & 9 & 4 \\
\hline
\end{tabular}
}
\label{table:simpleComplex} % is used to refer this table in the text
\end{table}

The simplicity of a failure domain is illustrated by the results of ADFD, ADFD$^+$ and manual analysis in Table 7 for the class BitSet. The negative-array failure is detected when a negative value is passed to the method bitSet.of(i).
The invariants generated by ADFD are \textit{\{i $\le$ -1, i $\ge$ -18\}}, by ADFD$^+$ are \textit{\{i $\le$ -1, i $\ge$ -512\}} and by Manual Analysis are \textit{\{i $\le$ -1, i $\ge$ Integer.MIN\_INT\}}. These results indicate maximum degree of representation of failure domain by Manual Analysis followed by ADFD and ADFD$^+$ respectively. This is mainly due to the bigger range value in manual analysis followed by ADFD and ADFD$^+$ respectively. %It was also found that ADFD+ is capable of identifying the failure domain to the large degree of accuracy in the case of point and block failure domain but not in the strip failure domain. While ADFD and Manual techniques are capable of correctly identifying all type of failure domain. The complexity of failure domain is illustrated by taking an example of ADFD, ADFD$^+$ and Manual Analysis in Table 7 for class ArrayStack. The \verb+OutOfMemoryError+ failure is detected due to the input of value to the method ArrayStack(i). The invariants generated by ADFD are \textit{\{ i $\ge$ 698000000, i $\le$ 698000300\}}, by ADFD$^+$ are \textit{\{ i $\ge$ 2147483636, I $\le$ MAX\_INT\}}, by Manual analysis \textit{\{ i $\ge$ 698000000 \}}. All the three strategies indicate the same failure but at different intervals. The ADFD$^+$ is unable to show the starting point of failure due to its small range value. The ADFD easily discovers the breaking point due to its bigger range value while manual testing requires over 50 attempts to find the breaking point. %\subsubsection{Nature of failure-domain:} %The nature of failure domain as identified by automatic techniques (ADFD and ADFD+) and Manual technique was examined in terms of simplicity and complexity by comparing the invariants generated by the automatic techniques with the manual technique. The results were split into six categories on the basis of simplicity and complexity of failure-domains identified by each technique. The comparative results show that ADFD, ADFD+ and Manual testing can easily detect 56, 48 and 53 and difficultly detect 1, 9 and 4 failure domains respectively as shown in ~\ref{table:results}. %The analysis of generated invariants indicated that the failure domains which are simple in nature are easily detectable by both automated and manual techniques irrespective of the type of failure domain (Strip, point, block). It was further indicated that the failure domains which are complex in nature are difficultly detectable by both automated and manual techniques. %Both types are explained with the help of following examples. %Consider the following class with a simple failure domain detectable by all three techniques, we consider the results of ADFD, ADFD+ and Manual Analysis in Table 1 for class BitSet. The negativeArray failure is detected due to the input of negative value to the method bitSet.of(i). The invariants generated by ADFD are $\{i <= -1, i >= -18\}$, by ADFD+ are $\{i <= -1, i >= -512\}$ and by Manual Analysis are $\{i <= -1, i >= Integer.MIN\_INT\}$. These results indicate maximum degree of representation of failure-domain by Manual Analysis followed by ADFD and ADFD+ respectively. %It was also found that ADFD+ is capable of identifying the failure domain to the large degree of accuracy in the case of point and block failure domain but not in the strip failure domain. While ADFD and Manual techniques are capable of correctly identifying all type of failure domain. 
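The interval-style invariants reported in this section (for example \textit{\{i $\le$ -1, i $\ge$ -512\}}) summarise the extremes of the failing values recorded during the scan. The sketch below shows this summarisation for a contiguous block or strip domain; it is a deliberately simplified stand-in with names of our own choosing, not Daikon's inference, and a point or mix domain would require more than one interval.

\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

/** Summarises the failing values found in a scanned range as a simple interval
 *  invariant, in the style of the bounds reported above (illustrative, not Daikon). */
public class IntervalInvariant {

    /** Returns an invariant such as "i >= -512 && i <= -1", or null if the
     *  scanned range contained no failing value. */
    static String summarise(String variable, List<Long> failingValues) {
        if (failingValues.isEmpty()) {
            return null;
        }
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long v : failingValues) {              // track the extremes of the failing region
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return variable + " >= " + min + " && " + variable + " <= " + max;
    }

    public static void main(String[] args) {
        // Failing values observed for BitSet.of(i) when scanning with range 512 (see text).
        List<Long> failing = new ArrayList<>();
        for (long i = -512; i <= -1; i++) failing.add(i);
        System.out.println(summarise("i", failing));   // i >= -512 && i <= -1
    }
}
\end{lstlisting}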
%As an example of complex failure, we consider the results of ADFD, ADFD+ and Manual Analysis in Table 1 for class ArrayStack. The OutOfMemoryError failure is detected due to the input of value to the method ArrayStack(i). The invariants generated by ADFD are $\{ i >= 2147483636, I <= 2147483647\}$, by ADFD+ are $\{ i >= 2147483142, i <= 2147483647\}$, by Manual analysis $\{ i >= 698000000 \}$. %Easy to find were those in which negativearraysizeexceptin. %hard to find were those like IndexArrayOutOfBoundsException. %Impossible to find were those in which finding failure is easy but finding the cut over point is very difficult. like OutOfMemoryError. %\subsubsection{External validity of Results:} %The external validity is the degree to which the subject packages are representative of true practice. %\section{Experimental results} \label{sec:result} %The experimental results show that the ADFD+ outperformed Randoop in both the time taken and number of tests used to detect all the injected faults. The ADFD+ also provide the added benefit of presenting the results in graphical form as shown in Figure \ref{fig:failureDomainsOneDimension} and \ref{fig:failureDomainsTwoDimension}. %Results are split in to two sections depicting efficiency and effectiveness of the two tools. %\subsection{Efficiency} %Figure \ref{fig:testtime} shows the comparative efficiency of ADFD+ and Randoop. The $x-axis$ represents one and two-dimensional programs with point, block and strip failure domains while the $y-axis$ represents average time taken by the tools to detect the failure domains. As shown in the figure ADFD+ showed extra ordinary efficiency by taking two orders of magnitude less time to discover failure domains as compared to Randoop. %This may be partially attributed to the very fast processing of YETI, integrated with ADFD+. YETI is capable of executing $10^6$ test calls per minute on Java code. To counter the contribution of YETI and assess the performance of ADFD+ by itself, the effectiveness of ADFD+ was compared with Randoop in terms of the number of test cases required to identify the failure domains without giving any consideration to the time consumed for completing the test session. The results are presented in the following section. %It should be noted that the part of the gain may also be due to the fast processing of the underlying tool YETI, which is capable of executing $10^6$ test calls per minute on Java code. Therefore, to find the performance of only ADFD+ we performed the second set of experiments to measure effectiveness. %For finding the efficiency, the CPU time consumed from the start of the test to the identification of last failure was measured for each experiment of ADFD+ and Randoop. Figure \ref{fig:testtime} shows the results in a box-and-whisker plot. The figure shows that ADFD+ in no case took more than ten seconds to find the failures where Randoop consumed at least 80 seconds to find the same failures. %\subsection{Effectiveness} %Figure \ref{fig:testcases} shows the comparative effectiveness of ADFD+ and Randoop. The $x-axis$ represents one and two-dimensional programs with point, block and strip failure domains while the $y-axis$ represents average number of test cases used by the tools to detect the failure domains. The figure shows higher effectiveness in case of ADFD+, amounting to 100\% or more. The higher effectiveness of ADFD+ may be attributed to its working mechanism in comparison with Randoop for identifying failures. 
ADFD+ dynamically changes its algorithm to exhaustive testing in a specified radius around the failure as against Randoop which uses the same random algorithm for searching failures. %\subsection{Failure Domains} %The comparative results of the two tools with respect to presentation of the identified failure domains reveal better performance of ADFD+ by providing the benefit of presenting the failure domains in graphical form as shown in Figure \ref{fig:failureDomainsOneDimension} and \ref{fig:failureDomainsTwoDimension}. The user can also enable or disable the option of showing the failing values on the graph. In comparison Randoop lacks the ability of graphical presentation and the option of showing the failure domains separately and provides the results scattered across the textual files. %\section{Discussion}\label{sec:discussion} %The results indicated that ADFD+ is a promising technique for finding failure and failure domain efficiently and effectively. It has the added advantage of showing the results in graphical form. The pictorial representation of failure domains facilitates the debuggers to easily identify the underlying failure domain and its boundaries for troubleshooting. %In the initial set of experiments Randoop was executed for several minutes with default settings. The results indicated no identification of failures after several executions. On analysis of the generated unit tests and Randoop's manual, it was found that the pool of values stored in Randoop database for int primitive type contains only 5 values including -1, 0, 1, 10 and 100. To enable Randoop to select different values, we supplied a configuration file with the option to generate random values between -500 and 500 for the test cases as all the seeded errors were in this range. %As revealed in the results ADFD+ outperformed Randoop by taking two orders of magnitude less time to discover the failure domains. This was partially attributed to the very fast processing of YETI integrated with ADFD+. To counter the effect of YETI the comparative performance of ADFD+ and Randoop was determined in terms of the number of test cases required to identify the failure domains giving no consideration to the time taken for completing the test session. As shown in the results ADFD+ identified all failure domains in 50\% or less number of test cases. %The ADFD+ was found quite efficient and effective in case of block and strip domains but not so in case of point domains where the failures lied away from each other as shown in the following code. This limitation of ADFD+ may be due to the search in vain for new failures in the neighbourhood of failures found requiring the additional test cases resulting in increased overhead.\\ %\begin{lstlisting} %public class Error { %public static void Error (int x, int y){ % int z; % if (x == 10000) % { z = 50/0; } % % if (y == -2000) % { z = 50/0; } % } %} %\end{lstlisting} %The number of test cases to be undertaken in search of failures around the previous failure found is set in the range value by the user. The time taken by test session is directly proportional to the range value. Higher range value leads to larger graphical output requiring zoom feature which has been incorporated in ADFD+ for use when the need arise. \section{Threats to validity} \label{sec:threat} All packages in Qualitas Corpus were tested by ADFD, ADFD$^+$ and Manual technique in order to minimize the threats to external validity. 
The Qualitas Corpus contains packages of different functionality, size, maturity and modification history. YETI, using the ADFD/ADFD$^+$ strategy, was executed for only 5 seconds to find the first failure in the given SUT. Because both ADFD and ADFD$^+$ are based on the random$^+$ strategy, which strongly prefers boundary values, most of the failures detected lie at the boundaries of the input domain. It is quite possible that increasing the test duration of YETI would lead to the discovery of new failures with different failure domains.

A further threat to validity relates to the hardware and software resources. For example, the \verb+OutOfMemoryError+ occurs at the value of 6980000 on the machine used for executing the test. On another machine with a different specification, the failure-revealing value can increase or decrease depending on the available hardware and software resources.

It should also be pointed out that non-numerical arguments and methods with more than two dimensions were not considered in the experiments. Failures caused by errors of non-primitive types were likewise ignored because the techniques cannot present them graphically. The results may therefore reflect fewer failures than are actually present.


\section{Related Work}

The shape and location of failure domains within the input domain have been studied in the past. Similar to our findings, White et al.~\cite{white1980domain} reported that boundary values are more likely to form strip failure domains.
Finelli~\cite{finelli1991nasa} and Bishop~\cite{bishop1993variation} found that failure-causing inputs form a continuous region inside the input domain. Chan et al. revealed that failure-causing values form point, block and strip failure domains~\cite{chan1996proportional}.

Random testing is quick to execute and has been experimentally proven to detect errors in programs on various platforms, including Windows~\cite{forrester2000empirical}, Unix~\cite{miller1990empirical}, Java libraries~\cite{pacheco2005eclat}, Haskell~\cite{claessen2011quickcheck} and Mac OS~\cite{miller2006empirical}. Its potential to become fully automated makes it one of the best choices for developing automated testing tools~\cite{csallner2004jcrasher}, \cite{pacheco2005eclat}. AutoTest~\cite{ciupa2008predictability}, JCrasher~\cite{csallner2004jcrasher}, Eclat~\cite{pacheco2005eclat}, Jartege~\cite{oriat2005jartege}, Randoop~\cite{pacheco2007randoop} and YETI~\cite{oriol2012random}, \cite{ahmad2013adfd}, \cite{ahmad2014adfd2} are a few of the most common automated random testing tools used by the research community.

In our previous publications, we described the fully automated techniques ADFD~\cite{ahmad2013adfd} and ADFD$^+$~\cite{ahmad2014adfd2} for the discovery of failure domains and evaluated their performance experimentally on one- and two-dimensional error-seeded numerical programs. The current study is a continuation of that work; it enhances the two techniques by integrating Daikon and evaluates the precision with which they identify failure domains. Our approach to evaluation is inspired by several studies in which random testing has been compared with other testing techniques to assess its failure-finding ability \cite{hamlet1990partition}, \cite{weyuker1991analyzing}, \cite{gutjahr1999partition}. Automated techniques have also been compared with manual techniques in previous research \cite{leitner2007reconciling}, \cite{ciupa2008finding}. This study is of special significance because we compare the effectiveness of the techniques in identifying failure domains rather than the individual failures considered in previous studies.
%Random testing is quick in execution and experimentally proven to detect errors in programs of various platforms including Windows~\cite{forrester2000empirical}, Unix{16}, Java Libraries~cite{pacheco2005eclat}, Heskell~\cite{claessen2011quickcheck} and Mac OS~\cite{miller2006empirical}. Its ability to become fully automated makes it one of the best choice for automated testing tools~\cite{csallner2004jcrasher}\cite{pacheco2005eclat}. AutoTest~\cite{ciupa2008predictability}, Jcrasher~\cite{csallner2004jcrasher}, Eclat~\cite{pacheco2005eclat}, Jartege~\cite{oriat2005jartege}, Randoop~\cite{pacheco2007randoop} and YETI~\cite{oriol2012random}\cite{ahmad2013adfd}\cite{ahmad2014adfd2} are few of the most common automated random testing tools used by research community. YETI is loosely coupled, highly flexible and allows easy extensibility as reported previously~\cite{oriol2010testing}. %Our previous studies ADFD~\cite{ahmad2013adfd} and ADFD+ \cite{ahmad2014adfd2} describes fully automated techniques for the discovery of failure domains and evaluate it experimentally. The programs used in evaluation were error-seeded one and two dimensional programs. This work is a direct continuation of our previous work to further contributes to this line of research by extending the techniques with support of Daikon, manual analysis and testing of production software from Qualitas Corpus. %A common practice to evaluate the effectiveness of an extended technique is to compare the results obtained by applying the new and existing techniques to identical programs~\cite{Duran1984}\cite{Gutjahr1999}. Arcuri et al.~\cite{Arcuri2012}, stresses on the use of random testing as a baseline for comparison with other testing techniques. We followed the procedure and evaluated ADFD, ADFD+ and Manual testing under identical conditions. %The increase in complexity of programs poses new challenges to researchers for finding more efficient and effective ways of software testing with user friendly easy to understand test results. Adaptive Random Testing \cite{Chen2008}, Proportional random testing \cite{chan1996proportional} and feedback directed random testing \cite{Pacheco2007a} are some of the prominent upgraded versions of random testing with better performance. Automated random testing is simple to implement and capable of finding hitherto bugs in complex programs \cite{Csallner2004, Pacheco2005}. %ADFD+ is an upgraded version of ADFD technique \cite{ahmad2013adfd} to find a failure and using it can effectively and efficiently detect the whole failure domain. %ADFD+ is a promising technique for finding failures and failure domains efficiently and effectively with the added advantage of presenting the output in graphical form showing point, block and strip domains. %Some previous research studies have reported work on Identification, classification and visualisation of pass and fail domains in the past \cite{agrawal1995fault, jones2002visualization, podgurski2003automated}. This includes Xslice~\cite{agrawal1995fault} is used to differentiate the execution slices of passing and failing part of a test in a visual form. Another tool called Tarantula uses colour coding to track the statements of a program during and after the execution of the test suite~\cite{jones2002visualization}. Hierarchical Multi Dimension Scaling (HMDS) describes a semi-automated procedure of classifying and plotting the faults \cite{podgurski2003automated}. 
A serious limitation of previously proposed visualisation tools, such as Xslice~\cite{agrawal1995fault}, Tarantula~\cite{jones2002visualization} and HMDS~\cite{podgurski2003automated}, is that they are not fully automated and require human intervention during execution. Moreover, these tools require existing test cases to work on, whereas the ADFD$^+$ strategy generates test cases, discovers failures, identifies pass and fail domains and visualises the results in graphical form in a fully automated manner.
\section{Conclusion} \label{sec:conclusion}
Based on the results, it is concluded that the two automated techniques (ADFD and ADFD$^+$) are more effective in identifying and presenting complex (point and block) failure domains, and do so with minimal labour. The manual technique is more effective in identifying simple (long strip) failure domains, but it is tedious and labour intensive. The precision of identifying failure domains can be increased by increasing the range value. Overall, the results indicate that the automated techniques can provide highly effective assistance to manual testing, but they are not a replacement for it.
%Failures within the input domain are contiguous and form point, block and strip failure domains. Existing automated testing tools, such as JCrasher and Jartege, search for individual failure ignoring the failure domain. We have developed ADFD and ADFD$^+$ techniques for identification of failure domains and its presentation by graph and invariants. We have conducted automated and manual experiments that evaluate the effectiveness of our techniques on detecting and presenting the failure domains in production software contained in Qualitas Corpus. The results show that the two techniques can effectively identify and present the failure domains to certain degree of accuracy. We further explain how the degree of accuracy can be increased in ADFD and ADFD$^+$ techniques.
%\smallskip
%The newly developed ADFD+ technique is distinct from other random testing techniques because it not only identifies failures but also discovers failure domains and provides the result output in easily understandable graphical form. The paper highlights the improved features of ADFD+ in comparison with ADFD technique previously developed by our team~\cite{ahmad2013adfd}. The paper then analyses and compares the experimental results of ADFD+ and Randoop for the point, block and strip failure domains.
The ADFD+ demonstrated extra ordinary efficiency by taking less time to the tune of two orders of magnitude to discover the failure domains and it also surpassed Randoop in terms of effectiveness by identifying the failure domains in 50\% or less number of test cases. %The rationale for better performance of ADFD+ has been given in the paper. %The better performance of ADFD+ may be attributed mainly to its ability to dynamically change algorithm to exhaustive testing in a specified radius around the first identified failure as against Randoop which uses the same random algorithm continuously for searching failures. %\section{Future Work} \label{sec:futurework} %The ADFD+ strategy is capable of testing numerical programs and needs to be extended for testing of non numerical and reference data types to enable it to test all types of data. %\textbf{Extension of ADFD+ to apply it to the real world scenario} %The newly developed ADFD+ strategy uses error-seeded programs for assessment of accuracy and effectiveness. This may likely expose it to external validity threat. Future studies may be undertaken in the real world scenario by including the feature of testing non numerical and reference data types so that their is more threat to validity. \\ %Current implementation of ADFD and ADFD+ tests only numerical programs. This restricts the usability of ADFD+ for production software of non-numerical data types. This can be solved by extending the tool to include testing of other primitive and reference data types. \\ %ADFD+ has the capability of graphical presentation of results for one and two-dimensional numerical programs. It is worthwhile to extend the technique to enable it to present the results of multi-dimensional numerical and non numerical programs in the graphical form. \\ \bigskip %%%%%%%%%%%%%%%%% ACKNOWDLEGEMENT %%%%%%%%%%%%%%%%%%%% \noindent\textbf{Acknowledgments} The authors are thankful to the Department of Computer Science, University of York for academic and financial support. % with the Departmental Overseas Research Scholarship (DORS) award. Thanks are also extended to Prof. Richard Paige and Prof. John Clark for their valuable guidance, help and cooperation. \section*{References} \renewcommand\refname{} \vspace*{-0.7cm} \bibliographystyle{splncs} \bibliography{sigproc} \begin{comment} \newpage \section*{Appendix} \appendix \begin{table}[H] \caption{Table depicting results of ADFD and ADFD$^+$} \centering \small \noindent\makebox[\textwidth]{ {\renewcommand{\arraystretch}{.9} \begin{tabular}{|l|l|l|l|r|r|c|} \hline S\# & Project & Class & Method & Dim. 
& LOC & Failure \\ & & & & & & domain \\ \hline 1 & ant & LeadPipeInputStream & LeadPipeInputStream(i) & 1 & 159 & Strip \\ 2 & antlr & BitSet & BitSet.of(i,j) & 2 & 324 & Strip \\ 3 & artofillusion & ToolPallete & ToolPalette(i,j) & 2 & 293 & Strip \\ 4 & aspectj & AnnotationValue & whatKindIsThis(i) & 1 & 68 & Mix \\ & & IntMap & idMap(i) & 1 & 144 & Strip \\ 5 & cayenne & ExpressionFactory & expressionOfType(i) & 1 & 146 & Strip \\ 6 & collections & ArrayStack & ArrayStack(i) & 1 & 192 & Strip \\ & & BinaryHeap & BinaryHeap(i) & 1 & 63 & Strip \\ & & BondedFifoBuffer & BoundedFifoBuffer(i) & 1 & 55 & Strip \\ & & FastArrayList & FastArrayList(i) & 1 & 831 & Strip \\ & & StaticBucketMap & StaticBucketMap(i) & 1 & 103 & Strip \\ & & PriorityBuffer & PriorityBuffer(i) & 1 & 542 & Strip \\ 7 & colt & GenericPermuting & permutation(i,j) & 2 & 64 & Strip \\ & & LongArrayList & LongArrayList(i) & 1 & 153 & Strip \\ & & OpenIntDoubleHashMap& OpenIntDoubleHashMap(i) & 1 & 47 & Strip \\ 8 & drjava & Assert & assertEquals(i,j) & 2 & 780 & Point \\ & & ByteVector & ByteVector(i) & 1 & 40 & Strip \\ 9 & emma & ClassLoaderResolver & getCallerClass(i) & 1 & 225 & Strip \\ & & ElementFactory & newConstantCollection(i)& 1 & 43 & Strip \\ & & IntIntMap & IntIntMap(i) & 1 & 256 & Strip \\ & & ObjectIntMap & ObjectIntMap(i) & 1 & 252 & Strip \\ & & IntObjectMap & IntObjectMap(i) & 1 & 214 & Strip \\ 10 & heritrix & ArchiveUtils & padTo(i,j) & 2 & 772 & Strip \\ & & BloomFilter32bit & BloomFilter32bit(i,j) & 2 & 223 & Strip \\ 11 & hsqld & IntKeyLongValueHashMap& IntKeyLongValueHashMap(i)& 1 & 52 & Strip \\ & & ObjectCacheHashMap & ObjectCacheHashMap(i) & 1 & 76 & Strip \\ 12 & htmlunit & ObjToIntMap & ObjToIntMap(i) & 1 & 466 & Strip \\ & & Token & typeToName(i) & 1 & 462 & Mix \\ 13 & itext & PRTokeniser & isDelimiterWhitespace(i) & 1 & 593 & Strip \\ & & PdfAction & PdfAction(i) & 1 & 585 & Strip \\ & & PdfLiteral & PdfLiteral(i) & 1 & 101 & Strip \\ 14 & jung & PhysicalEnvironment & PhysicalEnvironment(i) & 1 & 503 & Strip \\ 15 & jedit & IntegerArray & IntegerArray(i) & 1 & 82 & Strip \\ 16 & jgraph & AttributeMap & AttributeMap(i) & 1 & 105 & Strip \\ 17 & jruby & ByteList & ByteList(i) & 1 & 1321 & Strip \\ & & WeakIdentityHashMap & WeakIdentityHashMap(i) & 1 & 50 & Strip \\ 18 & junit & Assert & assertEquals(i,j) & 2 & 780 & Point \\ 19 & megamek & AmmoType & getMunitionsFor(i) & 1 & 268 & Strip \\ & & Board & getTypeName(i, j) & 1 & 1359 & Mix \\ 20 & nekohtml & HTMLEntities & get(i) & 1 & 63 & Strip \\ 21 & poi & Variant & getVariantLength(i) & 1 & 476 & Mix \\ & & IntList & IntList(i,j) & 2 & 643 & Block \\ 22 & sunflow & QMC & halton(i,j) & 2 & 32 & Strip \\ & & BenchmarkFramework & BenchmarkFramework(i,j) & 2 & 24 & Strip \\ & & IntArray & IntArray(i) & 1 & 47 & Strip \\ 23 & trove & TDoubleStack & TDoubleStack(i) & 1 & 120 & Strip \\ & & TIntStack & TIntStack(i) & 1 & 120 & Strip \\ & & TLongArrayList & TLongArrayList(i) & 1 & 927 & Strip \\ 24 & weka & AlgVector & AlgVector(i) & 1 & 424 & Strip \\ & & BinarySparseInstance & BinarySparseInstance(i) & 1 & 614 & Strip \\ 25 & xerces & SoftReferenceSymbolTable& SoftReferenceSymbolTable(i) & 1 & 71 & Strip \\ & & SymbolHash & SymbolHash(i) & 1 & 82 & Strip \\ & & SynchronizedSymbolTable& SynchronizedSymbolTable(i) & 1 & 57 & Strip \\ & & XMLChar & isSpace(i) & 1 & 169 & Strip \\ & & XMLGrammarPoolImpl & XMLGrammarPoolImpl(i) & 1 & 96 & Strip \\ & & XML11Char & isXML11NCNameStart(i) & 1 & 184 & Strip \\ & & AttributeList & AttributeList(i) & 1 & 
321 & Strip \\ \hline \end{tabular} } } \bigskip \label{table:packages} \end{table} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %\section{Frequency of failure domains} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% block failure domain %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{table*}[!H] \caption{Classes with block failure domains} \centering \noindent\makebox[\textwidth]{ {\renewcommand{\arraystretch}{1.5} \begin{tabular}{|l|l|l|l|l|} \hline S\# & Class & Invariants by ADFD$^+$ & Invariants by ADFD & Invariants by Manual \\ \hline 1 & IntList & I $\le$ -1, I $\ge$ -15 & I $\le$ -1, I $\ge$ -509 & I $\le$ -1, I $\ge$ min\_int \\ & & J = 0 & J =0 & J = 0 \\ \hline \end{tabular} } } \label{table:blockDomains} \end{table*} \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Point Failure domain %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{table*}[!H] \caption{Classes with point failure domains} \centering \noindent\makebox[\textwidth]{ {\renewcommand{\arraystretch}{1} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline S\# & Class & Invariants by ADFD$^+$ & Invariants by ADFD & Invariants by Manual \\ \hline 1 & Assert & I != J & I != J & I != J \\ 2 & Assert & I $\le$ 0, I $\ge$ 20 & I $\le$ -2147483142, I $\ge$ min\_int & I any value \\ & & J = 0 & J = 0 & J = 0 \\ \hline \end{tabular} } } \label{table:pointDomains} \end{table*} \bigskip %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Mix Failure Domain %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{table*}[!H] \caption{Classes with mix failure domains} \centering \noindent\makebox[\textwidth]{ {\renewcommand{\arraystretch}{1} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline S\# & Class & Invariants by ADFD & Invariants by ADFD$^+$ & Invariants by Manual \\ \hline 1 & Board & I $\le$ -1 & I $\ge$ -504, I $\le$ -405, & I $\le$ -910, I $\ge$ -908, I $\le$ -809, \\ & & I $\ge$ -18 & I $\ge$ -403, I $\le$ -304, & I $\ge$ -807, I $\le$ -708, I $\ge$ -706, \\ & & J = 0 & I $\ge$ -302, I $\le$ -203, & I $\le$ -607, I $\ge$ -605, I $\le$ -506, \\ & & & I $\ge$ -201, I $\le$ -102, & I $\ge$ -504, I $\le$ -405, I $\ge$ -403, \\ & & & I $\ge$ -100, I $\le$ -1 & I $\le$ -304, I $\ge$ -302, I $\le$ -203, \\ & & & J = 0 & I $\ge$ -201, I $\le$ -102, I $\ge$ -100 \\ & & & & I $\le$ -1, \\ & & & & J = 0 \\ 2 & Variant & I $\ge$0, I $\le$ 12 & I $\ge$ 0, I $\le$ 14, I $\ge$ 16 & I $\ge$ 0, I $\le$ 14, I $\ge$ 16 \\ & & & I $\le$ 31, I $\ge$ 64, I $\le$ 72 & I $\le$ 31, I $\ge$ 64, I $\le$ 72 \\ 3 & Token & I $\le$ -2147483641 & I $\le$ -2, I $\ge$ -510 & I $\le$ -2, I \textgreater min\_int \\ % Point and Strip & & I $\ge$ min\_int & I = \{73, 156\} & I = 73, 156, \\ & & & I $\ge$ 162, I $\le$ 500 & I $\ge$ 162, I $\le$ max\_int \\ 4 & AnnotationValue & I $\le$ 85, I $\ge$ 92, & I $\le$63, I = \{65, 69, 71, 72\} & I $\le$ 63, I = {65, 69, 71, 72} \\ & & I $\ge$ 98, I $\le$ 100, & I $\ge$ 75, I $\le$ 82, I $\ge$ 84 & I $\ge$ 75, I $\le$ 82, I $\ge$ 84 \\ & & I $\ge$ 102, I $\le$ 104 & I $\le$ 89, I $\ge$ 92, I $\le$ 98 & I $\le$ 89, I $\ge$ 92, I $\le$ 98 \\ & & & I = 100, I $\ge$ 102, I $\le$ 114 & I = 100, I $\ge$ 102, I $\le$ 114 \\ & & & I $\ge$ 116 & I $\ge$ 116 and so on \\ \hline \end{tabular} } } \label{table:mixDomains} \end{table*} \clearpage \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Strip Failure Domain %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\scriptsize \begin{longtable}{|l|l|l|l|l|} \caption{Classes with strip failure domains}\\ \hline S\# & Class & Invariants by ADFD$^+$ & Invariants by ADFD & Invariants by Manual \\ \hline \endhead 1 &LeadPipeInputStream & I $\ge$ 
2147483140 & I $\ge$ 2147483143 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 2 & BitSet & I $\le$ -1, I $\ge$ -18, & I $\le$ -1, I $\ge$ -513 & I $\le$ -1, I $\ge$ min\_int \\ & & J $\le$ 7, J $\ge$ -12 & J $\ge$ -503, J $\le$ 507 & J any value \\ % When J != 0. 3 & ToolPallete & I $\le$ -1, I $\ge$ -18 & I $\le$ -1, I $\ge$ -515 & I $\le$ -1, I $\ge$ min\_int \\ & & J $\le$ 3, J $\ge$ -15 & J $\ge$ -509, J $\le$ 501 & J any value \\ 4 & IntMap & I $\le$ -1, I $\ge$ -18 & I $\le$ -1, I $\ge$ -512 & I $\le$ -1, I $\ge$ min\_int \\ 5 & ExpressionFactory & I $\le$ 13, I $\ge$ -7 & I $\ge$ -497, I $\le$ 513 & I $\ge$ min\_int \\ % For any value of I. & & & & I $\le$ max\_int \\ 6 & ArrayStack & I $\ge$ 2147483636 & I $\ge$ 2147483142 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 7 & BinaryHeap & I $\le$ -2147483637 & I $\le$ -2147483142 & I $\le$ 0 \\ & & I $\ge$ min\_int & I $\ge$ min\_int & I $\ge$ min\_int \\ 8 & BondedFifoBuffer & I $\le$ -2147483639 & I $\ge$ -505, I $\le$ 0 & I $\le$ 0 \\ & & I $\ge$ min\_int & & I $\ge$ min\_int \\ 9 & FastArrayList & I $\le$ -2147483641 & I $\le$ -2147483644, & I $\le$ -1 \\ & & I $\ge$ min\_int & I $\ge$ -2147483139 & I $\ge$ min\_int \\ 10 & StaticBucketMap & I $\ge$ 2147483635 & I $\ge$ 2147483140 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 11 & PriorityBuffer & I $\le$ -1, I $\ge$ -14 & I $\le$ -2147483142 & I $\le$ 0 \\ & & & I $\ge$ -2147483647 & I $\ge$ min\_int \\ 12 & GenericPermuting & I $\le$ 0, I $\ge$ -18 & I $\ge$ -498, I $\le$ 0 & I $\le$ 0, I $\ge$ min\_int \\ % point & & & I $\ge$ 2, I $\le$ 512 & I $\ge$ 2, I $\le$ max\_int \\ 13 & LongArrayList & I $\le$ -2147483640 & I $\le$ -1, I $\ge$ -510 & I $\le$ -1 \\ & & I $\ge$ min\_int & & I $\ge$ min\_int \\ 14 & OpenIntDoubleHashMap & I $\le$ -1, I $\ge$ -17 & I $\le$ -1, I $\ge$ -514 & I $\le$ -1, I $\ge$ min\_int \\ 15 & ByteVector & I $\le$ -2147483639 & I $\le$ -2147483141 & I $\le$ -1 \\ % Strip & & I $\ge$ min\_int & I $\ge$ min\_int & I $\ge$ min\_int \\ 16 & ElementFactory & I $\ge$ 2147483636 & I $\ge$ 2147483141 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 17 & IntIntMap & I $\le$ -2147483638 & I $\le$ -2147483644 & I $\le$ -1 \\ & & I $\ge$ min\_int & I $\ge$ -2147483139 & I $\ge$ min\_int \\ 18 & ObjectIntMap & I $\ge$ 2147483640 & I $\ge$ 2147483591 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 19 & IntObjectMap & I $\le$ -1, I $\ge$ -17 & I $\le$ -1, I $\ge$ -518 & I $\le$ -1, I $\ge$ min\_int\\ 20 & ArchiveUtils & I $\ge$ 2147483641 & I $\ge$ -497 & I any value \\ & & I $\le$ max\_int & I $\le$ 513 & \\ & & J $\ge$ 2147483639 & J $\ge$ 2147483591 & J \textgreater~698000000 \\ & & J $\le$ max\_int & J $\le$ max\_int & \\ 21 & BloomFilter32bit & I $\le$ -1, I $\ge$ -18 & I $\le$ -1, I $\ge$ -515 & I \textless -1 \\ & & J may be any value & J may be any value & J \textless -1 \\ 22 & IntKeyLongValueHashMap & I $\ge$ 2147483635 & I $\ge$ 2147483590 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 23 & ObjectCacheHashMap & I $\le$ -2147483641 & I $\ge$ -512, I $\le$ 0 & I $\le$ 0 \\ & & I $\ge$ min\_int & & I $\ge$ min\_int \\ 24 & ObjToIntMap & I $\le$ -2147483636 & I $\le$ -2147483646 & I $\le$ -1 \\ & & I $\ge$ min\_int & I $\ge$ min\_int & I $\ge$ min\_int \\ 25 & PRTokeniser & I $\le$ -2 & I $\le$ -2, 
I $\ge$ -509 & I $\le$ -2 , I $\ge$ min\_int\\ & & I $\ge$ -18 & I $\ge$ 256, I $\le$ 501 & I $\ge$ 256 , I $\le$ max\_int\\ 26 & PdfAction & I $\le$ -2147483640 & I $\le$ 0, I $\ge$ -514 & I $\le$ 0, I $\ge$ min\_int \\ & & I $\ge$ min\_int & I $\ge$ 6, I $\le$ 496 & I $\ge$ 6, I $\le$ max\_int \\ 27 & PdfLiteral & I $\le$ -1, I $\ge$ -14 & I $\le$ -1, I $\ge$ -511 & I $\le$ -1, I $\ge$ min\_int \\ & & & & \\ 28 & PhysicalEnvironment & I $\le$ -1, I $\ge$ -11 & I $\le$ -2147483646 & I $\le$ -1, \\ & & & I $\ge$ min\_int & I $\ge$ min\_int \\ 29 & IntegerArray & I $\ge$ 2147483636 & I $\ge$ 2147483587 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 30 & AttributeMap & I $\le$ -2147483639 & I $\le$ 0, I $\ge$ -514 & I $\le$ 0 \\ & & I $\ge$ min\_int & & I $\ge$ min\_int \\ 31 & ByteList & I $\le$ -1, I $\ge$ -14 & I $\le$ -1, I $\ge$ -513 & I $\le$ -1, I $\ge$ min\_int \\ 32 & WeakIdentityHashMap & I $\ge$ 2147483636 & I $\ge$ 2147483140 & I \textgreater 698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 33 & AmmoType & I $\le$ -1 & I $\le$ -1, I $\ge$ -514 & I $\le$ -1, I $\ge$ min\_int \\ & & I $\ge$ -17 & I $\ge$ 93, I $\le$ 496 & I $\ge$ 93, I $\le$ max\_int \\ 34 & QMC & I $\le$ -1, I $\ge$ -12 & I $\le$ -1, I $\ge$ -508 & I $\le$ -1, I $\ge$ min\_int \\ & & J $\le$ -1, J $\ge$ -15 & J $\le$ 499, J $\ge$ -511 & J any value \\ 35 & BenchmarkFramework & I $\le$ -1, I $\ge$ -13 & I $\le$ -1, I $\ge$ -508 & I $\le$ -1, I $\ge$ min\_int \\ 36 & IntArray & I $\le$ -1, I $\ge$ -16 & I $\le$ -2147483650 & I $\le$ -1 \\ & & & I $\ge$ -2147483141 & I $\ge$ min\_int \\ 37 & TDoubleStack & I $\le$ -1, I $\ge$ -13 & I $\le$ -1, I $\ge$ -511 & I $\le$ -1, I $\ge$ min\_int \\ 38 & TIntStack & I $\le$ -1, I $\ge$ -12 & I $\le$ min\_int & I $\le$ -1 \\ & & & I $\ge$ -2147483144 & I $\ge$ min\_int \\ 39 & TLongArrayList & I $\le$ -1, I $\ge$ -16 & I $\le$ min\_int & I $\le$ -1, \\ & & & I $\ge$ -2147483141 & I $\ge$ min\_int \\ 40 & AlgVector & I $\le$ -1, I $\ge$ -15 & I $\le$ -1, I $\ge$ -511 & I $\le$ -1, I $\ge$ min\_int \\ 41 & BinarySparseInstance & I $\le$ -1, I $\ge$ -15 & I $\le$ -1, I $\ge$ -506 & I $\le$ -1, I $\ge$ min\_int \\ 42 & SoftReferenceSymbolTable & I $\ge$ 2147483635 & I $\ge$ 2147483140 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I $\le$ max\_int & I $\le$ max\_int \\ 43 & HTMLEntities & I $\le$- 1 & I $\ge$ -504, I $\le$ -405, & I $\le$ -809, I $\le$ -607, I $\ge$ -605, \\ & & I $\ge$ -17 & I $\ge$ -403, I $\le$ -304, & I $\le$ -506, I $\ge$ -504, I $\le$ -405, \\ & & & I $\ge$ -302, I $\le$ -203, & I $\ge$ -403, I $\le$ -304, I $\ge$ -302, \\ & & & I $\ge$ -201, I $\le$ -102, & I $\le$ -203, I $\ge$ -201, I $\le$ -102, \\ & & & I $\ge$ -100, I $\le$ -1 & I $\ge$ -100, I $\le$ -1 \\ 44 & SymbolHash & I $\le$ -1, I $\ge$ -16 & I $\le$ -2147483592 & I $\le$ -1, \\ & & & I $\ge$ min\_int & I $\ge$ min\_int \\ 45 & SynchronizedSymbolTable & I $\le$ -2147483140 & I $\le$ -2147483592, & I $\le$ -1, I $\ge$ min\_int \\ & & I $\ge$ min\_int & I $\ge$ min\_int & \\ 46 & XMLChar & I $\le$ -1, I $\ge$ -12 & I $\le$ -1, I $\ge$ -510 & I $\le$ -1, I $\ge$ min\_int \\ 47 & XMLGrammarPoolImpl & I $\le$ -1, I $\ge$ -13 & I $\le$ -2147483137 & I $\le$ -1, \\ & & & I $\ge$ min\_int & I $\ge$ min\_int \\ 48 & XML11Char & I $\le$ -1, I $\ge$ -16 & I $\le$ -1, I $\ge$ -512 & I $\le$ -1, I $\ge$ min\_int \\ 49 & AttributeList & I $\ge$ 2147483635 & I $\ge$ 2147483590 & I \textgreater~698000000 \\ & & I $\le$ max\_int & I 
$\le$ max\_int & I $\le$ max\_int \\ 50 & ClassLoaderResolver & I $\ge$ 2, & I $\ge$ 500, I $\le$ -2 & I $\le$ -2, I \textgreater min\_int \\ & \label{table:stripDomains7} & I $\le$ 18 & I $\ge$ 2, I $\le$ 505 & I $\ge$ 2, I $\le$ max\_int \\ \hline \end{longtable} } \end{comment} * Tables 3 - 7 are available at \url{https://code.google.com/p/yeti-test/} \end{document} %\section{Automated Discovery of Failure Domain+}\label{sec:adfd+} %It is an improved version of ADFD technique developed earlier by Ahmad and Oriol~\cite{ahmad2013adfd}. The technique automatically finds failures, failure domains and present the results in graphical form. In this technique, the test execution is initiated by random+ and continues till the first failure is found in the SUT. The technique then copies the values leading to the failure and the surrounding values to the dynamic list of interesting values. The resultant list provides relevant test data for the remaining test session and the generated test cases are effectively targeted towards finding new failures around the existing failures in the given SUT. \\* %The improvements made in ADFD+ over ADFD technique are stated as follows. %\begin{itemize} %\item ADFD+ generates a single Java file dynamically at run time to plot the failure domains as compared to one Java file per failure in ADFD. This saves sufficient time and makes the execution process quicker. %\item ADFD+ uses (x, y) vector-series to represent failure domains as opposed to the (x, y) line-series in ADFD. The vector-series allows more flexibility and clarity to represent failure and failure domains. %\item ADFD+ takes a single value for the radius within which the strategy searches for a failure domain whereas ADFD takes two values as lower and upper bounds representing x and y-axis respectively. This results in consumption of lower number of test cases for detecting failure domain. %\item In ADFD+, the algorithm of dynamically generating Java file at run-time has been made simplified and efficient as compared to ADFD resulting in reduced overhead. %\item In ADFD+, the point, block and strip failure domains generated in the output graph present a clear view of pass and fail domains with individually labelled points of failures as against a less clear view of pass and fail domains and lack of individually labelled points in ADFD. %The points are also labelled for clarification. % as shown in Figure~\ref{fig:Workflow}. %The difference in representation of fault by ADFD and ADFD+ can be seen in figure .... Figure x is generated by ADFD with lower bound as ... and upper bound as ... While Figure Y is generated by ADFD+ with range ... for the same program given in appendix a. %\end{itemize} %%%%%%%%%%%%%%%%%%%% %\subsection{Workflow of ADFD+} \label{sec:workflow} %ADFD+ is a fully automatic technique requiring the user to select radius value and feed the program under test followed by clicking the $Draw Fault Domain$ button for test execution. %The default range value is set to 5 meaning that ADFD+ will search 83 values around the failure. %As soon as the button is clicked, YETI comes in to play with ADFD+ strategy to search for failures in the program under test. On finding a failure, the strategy creates a Java file which contains calls to the program on the failing and surrounding values within the specified radius. The Java file is executed after compilation and the results obtained are analysed to separate pass and fail values which are accordingly stored in the text files. 
At the end of test, all the values are plotted on the graph with pass values in blue and fail values in red colour as shown in Figure~\ref{fig:adfdPlusExample}. %\\ %Instead of front end give workflow. It will make more sense. Change the code of the program %\begin{figure}[ht] %\centering %\includegraphics[width= 8.5cm,height=7cm]{adfdPlusWorkflow.png} %\caption{Workflow of ADFD+} %\label{fig:Workflow} %\end{figure} %%%%%%%%%%%%%%%%%%%% %ADFD+ is an extension of ADFD's algorithm with more accuracy to find and clarity to plot the failure domain on a graphical chart. Deriving failure domains using ADFD+ is a one click process and all the tester needs to input is the class to test and the range-value for which to search around the found failure. %%%%%%%%%%%%%%%%%%%% %\subsection{Implementation of ADFD+} \label{sec:implementation} %The ADFD+ technique is implemented in YETI which is available in open-source at \url{http://code.google.com/p/yeti-test/}. A brief overview of YETI is given with the focus on parts relevant to implementation of ADFD+ strategy. \\* %YETI is a testing tool developed in Java for automatic testing of programs using random strategies. YETI meta-model is language-agnostic which enables it to test %programs written in functional, procedural and object-oriented languages. YETI consists of three main parts including core infrastructure for extendibility, strategies section for adjustment of multiple strategies and languages section for supporting multiple languages. Both strategies and languages sections have pluggable architecture for easily incorporating new strategies and languages making YETI a favourable choice for implementing ADFD+ strategy. YETI is also capable of generating test cases to reproduce the failures found during the test session. %The strategies section in YETI contains different strategies including random, random+, DSSR and ADFD for selection according to specific needs. ADFD+ strategy is implemented in this section by extending the $YetiADFDStrategy$. %\begin{figure*}[ht] %\centering %\includegraphics[width=17cm,height=10.3cm]{exampleError.png} %\caption{The output of ADFD+ for the above code.} %\label{fig:adfdPlusExample} %\end{figure*} %\subsection{Example to illustrate working of ADFD+} %Suppose we have the following error-seeded class under test. It is evident from the program code that an $ArithmeticException$ (divison by zero) failure is generated when the value of variable $x$ ranges between 5 to 8 and the value of variable $y$ between 2 to 4. %\begin{lstlisting} %public class Error { % public static void Error (int x, int y){ % int z; % if (((x>=5)&&(x<=8))&&((y>=2)&&(y<=4))) % { % z = 50/0; % } % } %} %\end{lstlisting} %At the beginning of the test, ADFD+ strategy evaluates the given class with the help of YETI and finds the first failure at x = 6 and y = 3. Once a failure is identified ADFD+ uses the surrounding values around it to find a failure domain. The radius of surrounding values is limited to the value set by the user in the $Domain Range$ variable. When the value of $Domain Range$ is set to 5, ADFD+ evaluates a total of 83 values of $x$ and $y$ around the found failure. All evaluated $(x, y)$ values are plotted on a two-dimensional graph with red filled circles indicating fail values and blue filled circles indicating pass values. Figure~\ref{fig:adfdPlusExample} shows that the failure domain forms a block pattern and the boundaries of the failure are $(5, 2), (5, 3),(5, 4), (6, 2), (6, 4), (7, 2), (7, 4), (8, 2), (8, 3), (8, 4)$. 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Randoop maintains two sets called \verb+ErrorSeqs+ and \verb+NonErrorSeqs+ to record the feedback. It extends \verb+ErrorSeqs+ set in case of contract or filter violation and \verb+NonErrorSeqs+ set when no violation is recorded in the feedback. The use of this dynamic feedback evaluation at runtime brings an object to an interesting state. On test completion, \verb+ErrorSeqs+ and \verb+NonErrorSeqs+ are produced as JUnit/NUnit test suite. In terms of coverage and number of faults discovered, Randoop implementing FDRT was compared with JCrasher and JavaPathFinder and 14 libraries of both Java and .Net were evaluated~\cite{visser2004test}. The results showed that Randoop achieved more branch coverage and better fault detection than JCrasher. %Daikon is a tool~\cite{ernst2007daikon}, which uses machine-learning technique to automatically generate likely invariants of the program written in C, C++, Java and Pearl. Daikon takes the program and a few test cases as input. The test cases may be either generated manually or by an automated tool. Daikon executes the test cases on the program under test and observes the values that the program computes. At the end of the test session it reports the properties that were true for the observed executions. A feature of Daikon facilitate to process the generated invariants to mitigate non-interesting and redundant invariants. Another feature allows to inserts the generated invariants in to the source code as assertions. The report generated by Daikon is useful in understanding program logic, generating invariants, predicting incompatibilities in component integration, automating theorem proving, repairing inconsistencies in data structures and checking the validity of data streams. %%%%%%%%%%%%%%%%% EVALUATION %%%%%%%%%%%%%%%%%%%% %\section{Comparison of ADFD+ \& Randoop}\label{sec:eval} %In order to check the effectiveness and efficiency of ADFD+ we compared it with a random testing tool Randoop. Our subject classes for these experiments were the same that were used in evaluation of ADFD \cite{ahmad2013adfd}. We ran ADFD+ and Randoop for 30 times on each error-seeded one and two dimensional numerical programs, measuring its effectiveness by the total number of test cases used to detect all the failures and its efficiency by the CPU time consumed. %\subsection{Research questions} \label{sec:questions} %The following research questions have been addressed in the study: %\begin{enumerate} % %\item If ADFD and ADFD+ techniques capable of correctly identifying and presenting the failure-domains in production software? %The experimental results claiming the correct identification of ADFD and ADFD+ were based on the purpose build error-seeded programs~\cite{}. To answer the question, we applied the two techniques to all the projects of Qualitas Corpus and examined the results. %\item \textit{If the graph and invariants generated, correctly represent the failure domains?} %Invariants generated by Daikon can identify the start and stop of the failure domain. To answer this question we compared the generated invariants with the source code and the failure-domain presented in graphical form. % % %\item What are the types and frequency of identified failure-domains? %There are strategies~\cite{}. that exploit the presence of block and strip failure-domain to get better results. Therefore identifying the presence of underlying failure-domains in production software can help in high quality of software testing. 
To answer the questions, we reviewed all the classes containing failure-domains manually, automatically and graphically. % %\item If the nature of identified failure-domains is simple or complex and does it make any difference in its identification by manual and automated techniques? % An interesting point is to know what failure is responsible for a failure-domain and how difficult it is to identify that failure by manual testing. To answer this question, we studied the test logs and test output of the automated testing and the source code of the program manually to identify the cause and complexity of failures of failure-domains. %\item \textit{If the presence of a particular failure-domain can make it easy or hard to find using automated and manual techniques?} %Failure-domain can reside in the form of point, block or strip shape in the input domain. To answer this question we analysed the source code of all the programs in which failure-domains were detected. % %\item \textit{If the graph generated by ADFD correctly represent the pass and fail domains?} Both the ADFD and ADFD+ techniques generate graphs to represent failure-domains for simplicity. To answer the question we compared the generated graphs with the source code and the invariants generated by Daikon. % %\item If obtained results consistent with previous theoretical and practical results presented? %As per our knowledge, till now no specific study has been conducted to automatically identify the pass and fail domains however it has been claimed by some researchers~\cite{} that there exist more block and strip patterns then the point patterns. % %\end{enumerate} %\section{Evaluation} \label{sec:evaluation} %All the programs in which failure-domains were identified are presented in Tables~\ref{table:stripDomains, table:pointDomains, table:blockDomains, table:mixDomains} % Every program was tested independently by ADFD, ADFD+ and manual testing. All the programs in which failure-domains were identified are presented in Table~\ref{}. Due to the absence of contracts and assertions in the code under test, undeclared exceptions were taken as failures in accordance with the previous studies~\cite{ahmad2013adfd, Oriol2012}. %\subsection{Randoop} \label{sec:randoop} %Random tester for object oriented programs (Randoop) is a fully automatic tool, capable of testing Java classes and .Net binaries. It takes as input a set of classes, time limit or number of tests and optionally a set of configuration files to assist testing. Randoop checks for assertion violations, access violations and un-expected program termination in a given class. Its output is a suite of JUnit for Java and NUnit for .Net program. Each unit test in a test suite is a sequence of method calls (hereafter referred as sequence). Randoop builds the sequence incrementally by randomly selecting public methods from the class under test. Arguments for these methods are selected from the pre-defined pool in case of primitive types and as sequence of null values in case of reference type. Randoop uses feedback mechanism to filter out duplicate test cases. %The code for the programs under test is given in Appendix~\ref{} while the test details are presented in Table~\ref{table:Results}. %Every class was evaluated through $10^5$ calls in each test session of ADFD+. %\footnote{The total number of tests is equal to $60\times 30\times 3 \times 10^5 = 540\times10^6~tests$.}
{ "alphanum_fraction": 0.7075614884, "avg_line_length": 85.6841680129, "ext": "tex", "hexsha": "ba4682f88ff359aad3d96b4663340c05cc8b6cbf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_forks_repo_licenses": [ "BSD-4-Clause" ], "max_forks_repo_name": "maochy/yeti-test", "max_forks_repo_path": "papers/YDS2014/backup/typeinst_without_tables.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-4-Clause" ], "max_issues_repo_name": "maochy/yeti-test", "max_issues_repo_path": "papers/YDS2014/backup/typeinst_without_tables.tex", "max_line_length": 1564, "max_stars_count": null, "max_stars_repo_head_hexsha": "6cf105977c25eb94e641b06cb443bbe1573ef6b1", "max_stars_repo_licenses": [ "BSD-4-Clause" ], "max_stars_repo_name": "maochy/yeti-test", "max_stars_repo_path": "papers/YDS2014/backup/typeinst_without_tables.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 28248, "size": 106077 }
\chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract}
SALSA-Onsala (``Such A Lovely Small Antenna'') is a 2.3~m diameter radio telescope built at Onsala Space Observatory, Sweden, to introduce pupils, students and teachers to the marvels of radio astronomy. The sensitive receiver makes it possible to detect radio emission from atomic hydrogen far away in our galaxy. From these measurements we can learn about the kinematics and distribution of gas in our galaxy, the Milky Way. One can also use the antenna for other projects which do not involve hydrogen. In this document we describe how you can measure the antenna response function, also called the \emph{beam}, of the SALSA telescope by observing the total power received from the Sun. First we review some basic concepts of how radio telescopes work and what the antenna response function for SALSA is expected to look like. Then we describe how to use the SALSA control program to observe the Sun to learn about the beam of SALSA. Please note that this document is focused on understanding the antenna response and only briefly describes the telescope control program. Instructions for operating the SALSA telescope can be found in the document entitled \emph{SALSA users manual} available at the SALSA website.
\vspace{9cm}
{\bf Cover image:} The SALSA telescopes in Onsala.
{ "alphanum_fraction": 0.8010316875, "avg_line_length": 43.7741935484, "ext": "tex", "hexsha": "7b1fd78800b654b6cf3d9a947096b2c527b42e66", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2022-01-21T11:32:05.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-14T10:01:29.000Z", "max_forks_repo_head_hexsha": "2ddb4c34943d85aecebdef8745cc64c2daa4b8bb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "varenius/salsa", "max_forks_repo_path": "Lab_instructions/Beam/English/abstract.tex", "max_issues_count": 72, "max_issues_repo_head_hexsha": "2ddb4c34943d85aecebdef8745cc64c2daa4b8bb", "max_issues_repo_issues_event_max_datetime": "2022-03-02T10:24:24.000Z", "max_issues_repo_issues_event_min_datetime": "2015-05-30T21:33:28.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "varenius/salsa", "max_issues_repo_path": "Lab_instructions/Beam/English/abstract.tex", "max_line_length": 77, "max_stars_count": 13, "max_stars_repo_head_hexsha": "2ddb4c34943d85aecebdef8745cc64c2daa4b8bb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "varenius/salsa", "max_stars_repo_path": "Lab_instructions/Beam/English/abstract.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-21T04:03:36.000Z", "max_stars_repo_stars_event_min_datetime": "2016-05-18T07:51:46.000Z", "num_tokens": 306, "size": 1357 }
\section{The Lifecycle of a Decentralized Software Update}
\label{lifecycle}
%A \emph{software update (SU)} is the unit of change for the blockchain software. It must have a clear goal of what it tries to achieve and why it would be beneficial, if applied to the system. Moreover, it should have a clear scope. In Figure \ref{lifecycle}, we depict the full lifecycle of a software update following a decentralized approach. In this lifecycle, we identify four distinct phases: a) the \emph{ideation phase}, b) the \emph{implementation phase}, c) the \emph{approval phase} and d) the \emph{activation phase}. In this section, we briefly outline each phase and at the subsequent sections, we provide all necessary details for realizing each phase in a decentralized setting.
A \emph{software update (SU)} is the unit of change for the blockchain software. In Figure~\ref{fig:lifecycle}, we depict
%In this section, we define
the full lifecycle of a software update following a decentralized approach. In this lifecycle, we identify four distinct consecutive phases: a) the \emph{ideation phase}, b) the \emph{implementation phase}, c) the \emph{approval phase} and d) the \emph{activation phase}. In the subsequent subsections, we provide a detailed description of each individual phase.
%The SU starts from the ideation phase which is the conceptualization step in the process. It is where an SU is born. During this phase, a justification must be made for the SU and this has to be formally agreed by the community (or the code owner). This justification takes the form of an improvement proposal document (appears as \emph{SIP} in the figure and will be defined shortly). Once the SU's justification has been approved, then we enter the implementation phase. It is where the actual development of the SU takes place. The result of this phase is a bundle \emph{update proposal (UP)} consisting of source-code (implementing the SU), metadata and optionally binaries produced for one, or more, specific platforms. This is submitted for approval and thus the approval phase follows. Once the UP has been approved (by the community, or the code owner), the community is called for upgrading. The actual upgrading takes place in the activation phase, which is there to guard against chain-splits by synchronizing the activation of the changes.
Interestingly, the phases in the lifecycle of a SU are essentially independent of the approach (centralized or decentralized) that we follow. They constitute intuitive steps in a software lifecycle process that starts with the initial conception of an idea and ends with the actual activation of the change on the client software. Based on this observation, one can examine each phase and compare the traditional centralized approach used to implement it with its decentralized alternative. For comparison purposes, Appendix~\ref{appxlifecycle} provides a description of the centralized approach for each phase. Moreover, not all phases need to be decentralized in a real-world scenario. One has to weigh the benefits of decentralization against practicality and decide which phases will be decentralized. Our decomposition of the lifecycle of a SU into distinct phases helps in this direction.
\begin{figure}[h!]
%[H]
\centering
% \includegraphics[width=\textwidth]{figures/lifecycle_phases.pdf}
\includegraphics[width=1.0 \columnwidth,keepaspectratio]{figures/lifecycle_phases.pdf}
\caption{The lifecycle of a software update (a decentralized approach)}
\label{fig:lifecycle}
\end{figure}

\subsubsection{Ideation.}
%\paragraph{Scope of the phase.}
A SU starts as an idea. Someone captures the idea of implementing a change that will serve a specific purpose (fix a bug, implement a new feature, provide some change in the consensus protocol, perform some optimization, etc.). The primary goal of this phase is to capture the idea behind a SU, record the justification and scope of the SU in some appropriate documentation, and finally come to a decision on the priority that will be given to this SU.
%\paragraph{Centralized approach.}
%Traditionally, in the centralized approach, a SU is proposed by some central authority (original author, group of authors, package maintainer etc.), who essentially records the need for a specific SU and then decides when (or, in which version) this could be released. In many cases, (e.g., Bitcoin \cite{bitcoin}, Ethereum \cite{ethereum}) the relevant SU justification document (called BIP, or EIP respectively) is submitted to the community, in order to be discussed. Even when this \say{social alignment} step is included in this phase, the ultimate decision (which might take place at a later phase in the lifecycle), for the proposed SU, is taken by the central authority. Therefore, the road-map for the system evolution is effectively decided centrally. Moreover, this social consensus approach is informal (i.e., not part of a protocol, or output of an algorithm) and is not recorded on-chain as an immutable historical event.
%\paragraph{Decentralized approach.}
The ideation phase in the decentralized approach is depicted in Figure~\ref{ideation}.
\begin{figure}[h!]
%[H]
\centering
\includegraphics[width=1.0 \columnwidth,keepaspectratio]{figures/ideation_phase.pdf}
\caption{The ideation phase.}
\label{ideation}
\end{figure}
In the decentralized setting, a SU starts its life as an idea for the improvement of the blockchain system, which is recorded in a simple, human-readable text document, called the \emph{SIP (Software\footnote{\say{Software} and \say{System} are two terms that could be considered equivalent for the scope of this paper and we intend to use them interchangeably. For example, a SIP could also stand for a System Improvement Proposal} Improvement Proposal)}. The life of a SU formally starts with the submission of the corresponding SIP to the blockchain by means of a fee-supported
%\mnote{Why is this fee special? I think that might be dangerous to have a different fee for different type of transactions}
transaction. Any stakeholder can potentially submit a SIP and thus propose a SU. A SIP includes basic information about a SU, such as the title, a description, the author(s), the priority/criticality of the SU, etc. Its sole purpose is to justify the necessity of the proposed software update and to raise awareness and support from the community of users. A SIP must also include all the information necessary to validate the SU against other SUs (e.g., update dependencies or update conflicts), or against any prerequisites required in order for it to be applied. We call these requirements \emph{update constraints} (cf. Appendix \ref{appdxupdcons}); they can be abstracted as a predicate whose evaluation determines the feasibility of a software update.
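
To make the above concrete, the following minimal sketch models a SIP record together with an update-constraints predicate. The sketch is written in Haskell purely for illustration; the record fields (e.g., \texttt{sipDependsOn}, \texttt{sipConflictsWith}) and the priority levels are assumptions made for the example and are not prescribed by our proposal.
\begin{verbatim}
import qualified Data.Set as Set
import           Data.Set (Set)

-- Content-addressable identifier of a software update
-- (e.g., the hash of the off-chain SIP document).
newtype UpdateId = UpdateId String deriving (Eq, Ord, Show)

data Priority = Low | Medium | High | SecurityFix
  deriving (Eq, Ord, Show)

-- On-chain metadata of a SIP; the full text lives in the
-- (decentralized) off-chain storage.
data SIP = SIP
  { sipId            :: UpdateId
  , sipTitle         :: String
  , sipAuthors       :: [String]
  , sipPriority      :: Priority
  , sipDependsOn     :: Set UpdateId  -- prerequisites
  , sipConflictsWith :: Set UpdateId  -- conflicting updates
  } deriving (Show)

-- Update-constraints predicate: a SIP is feasible when all of
-- its prerequisites are already approved and none of the
-- updates it conflicts with has been activated.
isFeasible :: Set UpdateId -> Set UpdateId -> SIP -> Bool
isFeasible approved activated sip =
     sipDependsOn sip `Set.isSubsetOf` approved
  && Set.null (sipConflictsWith sip `Set.intersection` activated)

main :: IO ()
main =
  let approved  = Set.fromList [UpdateId "sip-001"]
      activated = Set.empty
      sip       = SIP (UpdateId "sip-002") "Increase max block size"
                      ["alice"] Medium
                      (Set.fromList [UpdateId "sip-001"]) Set.empty
  in print (isFeasible approved activated sip)  -- prints True
\end{verbatim}
Evaluating such a predicate against the sets of already approved and already activated updates is one natural way to check the update constraints, both at voting time and again right before activation.
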
A SIP is initially uploaded to some \emph{decentralized storage solution}, external to the blockchain system, and a hash id is generated in order to uniquely identify it. The decentralized storage solution is an abstraction denoting a storage area that is not centrally owned: it can range from something very common, such as a developer-owned Github repository, to something more elaborate, such as a content-addressable decentralized file system. In any case, it is in the interest of the party that made the proposal to keep the SIP available; otherwise, the SIP will be rejected.
% \mnote{We should clarify that is in the interest of the party that made the proposal to keep the SIP available. Which is something safe to assume since the SIP could be moved from one decentralized storage to another without affecting the hash of the SIP.}
The hash id is committed to the blockchain in a two-step approach, following a hash-based commitment scheme, in order to preserve the rightful authorship of the SIP. Once the SIP is revealed, a voting period for the specific proposal is initiated. Any stakeholder is eligible to vote for a SIP, with voting power proportional to his/her stake. Votes are fee-supported transactions, which are committed to the blockchain. More details on the proposed voting mechanism can be found in Appendix \ref{appxvoting}. Note that voters are not called to vote only on the SIP per se, but also on the various characteristics of the software update, such as the type of the change, the priority/criticality, etc., which are described in the corresponding metadata. These characteristics will drive the \emph{update policy} adopted, as we will describe in the corresponding section.
%Note that since a SIP is a document justifying the purpose and benefit of the proposed software update, it should not require in general sufficient technical expertise, in order for a stakeholder to review it and decide on his/her vote. However, in the case that the
In the case that the evaluation of a SIP requires greater technical knowledge, a voting delegation mechanism exists. This means that a stakeholder can delegate his/her voting rights to an appropriate group of experts, while preserving the right to override the delegate's vote if he/she wishes. Essentially, we propose the use of a delegation mechanism for three distinct reasons: a) for technical expertise, b) for special categories of software updates (e.g., security fixes, platform-specific issues, etc.) and c) for ensuring an appropriate level of stake participation in the protocol, similar to the stakepools concept described in Karakostas et al. \cite{stakepools}. More details on the proposed delegation mechanism can be found in Appendix \ref{appxdelegation}.
%The delegation mechanism will also be used in order to implement the concept of an \emph{update policy} that will be described in a later section and enables different activation speeds for a SU depending on its type (e.g., a bug-fix versus a change request, a SU that has a consensus protocol impact versus a no-impact one, etc.). For all these, special \emph{delegation groups} will be considered, as we will discuss in the relevant section.
After the voting period, a SIP is either approved or rejected. Details on the voting and delegation protocols can be found in the relevant section.
%Note that in the decentralized approach the ideation phase could very well be implemented by a treasury system (e.g., similar to the one proposed by Bingsheng et al. \cite{treasury}).
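
The hash-based commitment of a SIP and the stake-weighted tallying of the subsequent votes can be sketched as follows. Again, this is an illustrative Haskell sketch: the use of SHA-256 (taken here from the \texttt{cryptonite} package), the exact commitment format and the representation of votes and delegations are assumptions of the example, not part of a protocol specification.
\begin{verbatim}
import           Crypto.Hash (Digest, SHA256 (..), hashWith)
import           Data.ByteString.Char8 (ByteString)
import qualified Data.ByteString.Char8 as BS
import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

type StakeholderId = String
type Stake         = Integer

-- Step 1 (commit): publish only H(salt || sipHash || author);
-- the SIP stays hidden, but authorship can be proven later.
commitSIP :: ByteString -> ByteString -> StakeholderId
          -> Digest SHA256
commitSIP salt sipHash author =
  hashWith SHA256 (BS.concat [salt, sipHash, BS.pack author])

-- Step 2 (reveal): check that the revealed data matches the
-- commitment; a valid reveal starts the voting period.
verifyReveal :: Digest SHA256 -> ByteString -> ByteString
             -> StakeholderId -> Bool
verifyReveal c salt sipHash author =
  c == commitSIP salt sipHash author

data Decision = For | Against | Abstain deriving (Eq, Show)

-- A direct vote overrides a delegation; otherwise the
-- delegate's direct vote (if any) counts for the delegator.
effectiveDecision :: Map StakeholderId Decision
                  -> Map StakeholderId StakeholderId
                  -> StakeholderId -> Maybe Decision
effectiveDecision direct delegations sh =
  case Map.lookup sh direct of
    Just d  -> Just d
    Nothing -> Map.lookup sh delegations
                 >>= \dlg -> Map.lookup dlg direct

-- Stake-weighted tally: (stake in favour, stake against).
tally :: Map StakeholderId Stake
      -> Map StakeholderId Decision
      -> Map StakeholderId StakeholderId
      -> (Stake, Stake)
tally stakes direct delegations =
  Map.foldrWithKey add (0, 0) stakes
  where
    add sh st (f, a) =
      case effectiveDecision direct delegations sh of
        Just For     -> (f + st, a)
        Just Against -> (f, a + st)
        _            -> (f, a)

main :: IO ()
main = do
  let stakes = Map.fromList [("alice",60),("bob",30),("carol",10)]
      direct = Map.fromList [("alice", For), ("bob", Against)]
      dels   = Map.fromList [("carol", "alice")]  -- carol delegates
  print (tally stakes direct dels)                -- (70,30)
\end{verbatim}
A complete protocol would, of course, tally against the total active stake and apply the thresholds dictated by the adopted update policy; the sketch only shows how delegation, vote overriding and stake weighting interact.
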
Note that, in the decentralized approach, the ideation phase could very well be implemented by a treasury system, similar to the one proposed by Bingsheng et al.~\cite{treasury}. A treasury is a decentralized and secure system, aimed at the maintenance of a blockchain system, that allows the submission of proposals (i.e., candidate projects) for the improvement of the system. These proposals go through a voting process, in order to select the surviving ones. More importantly, the system is supported by a funding mechanism, where the funds raised are stored in the treasury and are used to fund the approved projects. Implementing the ideation phase with a treasury system would additionally enable the appropriate management of the funding of each SU.
\subsubsection{Implementation.}
%The voting of a SIP is the green light signal for entering the implementation phase. This is the period where the actual implementation of a SIP takes place. So one could very roughly imagine this phase as a box, where a SIP comes in as input and source-code implementing the SIP comes out as output.
%\paragraph{Scope of this phase.}
The scope of this phase is twofold: a) to develop the source-code changes that implement a specific voted SIP and b) to execute a second voting delegation round, in order to identify the experts that will approve the new source code. At the end of this phase, the developer creates a bundle comprising the new source code, the accompanying metadata and, optionally, produced binaries, which we call an \emph{update proposal (UP)}. The newly created UP must be submitted for approval, in order to move forward.
%\paragraph{Centralized approach.}
%In the centralized setting, it is typical (in the context of an open source software development model), when a developer wants to implement a change, first to download from a centrally-owned code repository the version of the source-code that will be the base for the implementation and then, when the implementation is finished, to upload it to the same code repository and submit a \emph{pull-request}. The latter is essentially a call for approval for the submitted code. The central authority responsible for the maintenance of the code-base, must review the submitted code and decide, if it will be accepted, or not. Therefore, in the centralized approach the implementation phase ends with the submission of a pull-request.
%\paragraph{Decentralized approach}
The decentralized alternative for the implementation phase is identical to its centralized counterpart as far as the development of the new code is concerned. However, in the decentralized setting, there are three major differences: a) there is no centrally-owned code repository maintaining the code-base (since there is no central authority responsible for the maintenance of the code), b) a delegation process is executed in parallel to the implementation, as a preparation step for the (decentralized) approval phase that follows, and c) the conceptual equivalent to the submission of a pull-request (i.e., a call for approval by the developer to the code maintainer) must be realized.
\begin{figure}[h!]
However, in the decentralized setting there is not a centrally-owned code repository. All the approved versions of the code are committed into the blockchain (i.e., only the hash of the update code is stored on-chain). Therefore, we assume that the developer finds the appropriate (usually the latest) approved base source code in the blockchain and downloads it locally, using the link to the developer-owned code repository provided in the UP metadata. We abstract this code repository in Figure \ref{implementation} with the depicted decentralized storage solution. This conceptually can be any storage area that is not centrally-owned; from something very common, as a developer-owned Github repository to something more elaborate as a content-addressable decentralized file system. In Figure~3, we depict the decentralized implementation phase. All the approved versions of the code are committed into the blockchain (i.e., only the hash of the update code is stored on-chain). Therefore, we assume that the developer finds the appropriate (usually the latest) approved base source code in the blockchain and downloads it locally, using the link to the %developer-owned code repository provided in the UP metadata. %We abstract this code repository in Figure \ref{implementation} with the depicted decentralized storage solution. This conceptually can be any storage area that is not centrally-owned; from something very common, as a developer-owned Github repository to something more elaborate as a content-addressable decentralized file system. It is true that the review of source code is a task that requires extensive technical skills and experience. Therefore, it is not a task that can be assumed by the broad base of the stakeholders community. A voting delegation mechanism at this point must be in place, to enable the delegation of the strenuous code-approval task to some group of experts (see the details of the proposed delegation mechanism in the Appendix \ref{appxdelegation}). %In a similar logic with the delegation process, within the ideation phase, discussed above, the delegation process could be leveraged to implement different update policies per type of software update. %As we have seen, the voting approval of a SIP signals the beginning of the implementation phase for this SIP. %The SIP has an estimated implementation elapsed time that was included in the SIP metadata, submitted along with the SIP at the ideation phase. This time period, increased by a contingency parameter, will be the available time window for a SIP to be implemented. %Upon the conclusion of the implementation, a bundled (source code and metadata) UP is created. The UP must be uploaded to some (developer-owned) code repository and a content-based hash id must be produced that will uniquely identify the UP. This hash id will be submitted to the blockchain as a request to approval. This is accomplished with a specialized fee-supported transaction, which represents the decentralized equivalent to a pull-request. SIPs that fail to be implemented within the required time framework (explicitly stated in the SIP metadata), will result to expired UPs and the SIP must be resubmitted to the ideation phase, as a new proposal. The UP submission transaction signals the entering into the approval phase. Upon the conclusion of the implementation, the UP must be uploaded to some (developer-owned) code repository and a content-based hash id must be produced that will uniquely identify the UP. 
This hash id will be submitted to the blockchain as a request to approval. This is accomplished with a fee-supported transaction, which represents the \say{decentralized equivalent} to a pull-request. \subsubsection{Approval.} %\nnote{The voter at this phase votes for three things: a) for the correspondence of the source code to the CIP (i.e., authenticity testing / security auditing), b) For the inclusion from the new source code of all previous approved UPs (i.e., regression testing), c) The correctness of the new code (i.e., testing)} %\nnote{If there is also a binary upload for a specific platform for a UP, then the approval must vote for the authenticity and safety of the binaries as well. This might require a re-delegation to a specialized team for the specific platform. So this could be a separate vote} %\paragraph{Scope of this phase.} %The submission of an UP to the blockchain, as we have seen, is the semantic equivalent to a pull-request, in the decentralized approach. It is a call for approval. Indeed, the The main goal of the approval phase is to approve the proposed new code; but what exactly is the approver called to approve for? The submitted UP, which as we have seen, is a bundle consisting of source code, metadata and optionally produced binaries, must satisfy certain properties, in order to guarantee its \emph{correctness}. Overall, the approver approves the correctness and safety of the submitted UP. In the Appendix~\ref{appxapproval}, we provide a detail list of the properties that a UP must satisfy in order to justify its correctness. %\mnote{Could we shorten the following part by saying that the node check that the code is correct and without bugs? Then we could move this list into the appendix. I would suggest to move also Fig.~\ref{approval} into the appendix. } %\begin{itemize} %\item %\emph{Correctness and accuracy.} The UP implements correctly (i.e., without bugs) and accurately (i.e., with no divergences) the changes described in the corresponding voted SIP. % %\item \emph{Continuity.} Nothing else has changed beyond the scope of the corresponding SIP and everything that worked in the base version for this UP, it continues to work, as it did (as long as it was not in the scope of the SIP to be changed). % %\item \emph{Authenticity and safety.} The submitted new code is free of any malware and it is safe to be downloaded and installed by the community; and by downloading it, one downloads the original authentic code that has been submitted in the first place. % %\item \emph{Fulfillment of update constraints.} We call the dependencies of an UP to other UPs, the potential conflicts of an UP with other UPs and in general all the prerequisites of an UP, in order to be successfully deployed, \emph{update constraints}. The fulfillment, or not, of all the update constraints for an UP, determines the feasibility of this UP. %\end{itemize} %%\paragraph{Centralized approach.} %From the centralized approach perspective the above properties of the new code that the approver has to verify and approve are not uncommon. In fact, one could argue that these are the standard quality controls in any software development model. The first property has to do with testing; testing that verifies that the changes described in the SIP have been implemented correctly and accurately. 
In the centralized approach this means that the main maintainer of the code has to validate that the new code successfully passes specific test cases, either by reviewing test results, of executed test cases, or by running tests on his/her own. Regardless, of the testing methodology or type of test employed (unit test, property-based test, system integration test, stress test etc.), this is the basic tool that helps the central authority to decide on the correctness and accuracy of the new code. % %The second property for approving the new code has to do with not breaking something that used to work in the past. In software testing parlance, this is known as regression testing. Again, in the centralized approach, it is the main maintainer's responsibility to verify the successful results of regression tests run against the new code. % %The third property has to do with the security of the new code and the authenticity of the downloaded software. The former calls for the security auditing of the new code. The latter, in the centralized case, is easy. Since, there is a trusted central authority (i.e., the main code maintainer), the only thing that is required, is for this authority to produce new binaries based on the approved source code, sign them and also the source code with his/her private key and distribute the signed code to the community. Then, the users only have to verify that their downloaded source code, or binaries, has been signed by the trusted party and if yes, then to safely proceed to the installation. % %Finally, the last property that has to be validated by the approver pertains to the fulfillment of the update constraints. All the prerequisites of an UP must be evaluated and also the potential conflicts triggered by the deployment of an UP must be considered. For example, an UP might be based on a version of the software that has been rejected; or, similarly, it might be based on a version that has not yet been approved. Moreover, it might require the existence of third party libraries that it is not possible to incorporate into the software (e.g., they require licenses, or are not trusted). Then, we have the potential conflicts problem. What if the deployment of an UP cancels a previously approved UP, without this cancellation to be clearly stated in the scope of the corresponding SIP? All these are issues that typically a code maintainer takes into consideration, in order to reach at a decision for a new piece of code. %\paragraph{Decentralized approach.} Once more, the essential part that differentiates the decentralized from the centralized approach is the lack of the central authority. All the properties that have to be validated basically remain the same but in this case the approval must be a collective decision, which is enabled, similarly to the Ideation phase, by the voting and delegation mechanism in place. The approval phase in the decentralized approach is very similar to the Ideation phase, and because of space constraints we have moved the corresponding figure into the Appendix \ref{appxapproval}. %\begin{figure}[h!] %[H] % \caption{The approval phase.} % \centering % \includegraphics[width=1.0 \columnwidth,keepaspectratio]{figures/approval_phase.pdf} % \label{approval} %\end{figure} %The approval phase in the decentralized approach is depicted in Figure \ref{approval}. 
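As a rough illustration of how such a collective approval decision could be tallied, the sketch below computes a stake-weighted result over the revealed votes; the concrete voting rule, the threshold and the handling of delegation are design parameters of the update policy and are not fixed here (we assume a direct vote has already overridden any delegate's vote).

\begin{verbatim}
from typing import Dict

def up_approved(votes: Dict[str, bool], stake: Dict[str, int],
                threshold: float = 0.5) -> bool:
    """Stake-weighted tally of approve/reject votes for a single UP.

    votes: stakeholder id -> True (approve) or False (reject)
    stake: stakeholder id -> stake bound to that id
    """
    voted_stake = sum(stake[v] for v in votes)
    if voted_stake == 0:
        return False
    approving_stake = sum(stake[v] for v, ok in votes.items() if ok)
    return approving_stake / voted_stake > threshold

# Toy example: 70% of the voted stake approves the UP.
print(up_approved({"a": True, "b": False, "c": True},
                  {"a": 50, "b": 30, "c": 20}))
\end{verbatim}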
As we have seen, an UP is a bundle consisting of source code, update metadata and optionally binaries produced from the source code, aiming at a specific platform (e.g., Windows, Linux, MacOS etc.). The update metadata have to include basic information about the update, its justification, they have to clearly state all update constraints and finally declare the type of the change (e.g., bug-fix, or change request, soft/hard fork etc.) % and priority, in order to enable the appropriate \emph{update policy} (we will return to these concept in the relevant section). %The UP bundle is uploaded to some developer-owned code repository and a unique hash id, from hashing the content of the UP is produced. This UP hash id is submitted to the blockchain, along with a link to the code repository. %Similar to the ideation phase (where the corresponding SIP was submitted), the submission of a UP is a special fee-supported transaction that can be submitted by any stakeholder. The UP is committed to the blockchain following again a hash based commitment scheme, in order to preserve the rightful authorship of the UP. %Once the UP is revealed the delegated experts (remember the delegation that took place during the implementation phase) for this UP, will essentially assume the role of the main code maintainer that we typically see in a centralized setting. In other words, they have to download the source code, metadata and possible binaries and validate the aforementioned properties. %The tools (e.g., testing) that the experts have available for doing the validation are no different than the tools used by the main maintainer in the centralized approach. Moreover, if binaries for a specific platform have been uploaded by the UP submitter, then the delegated experts must go through the process of reproducing a binary from the source code and verifying that it matches (based on a hash code comparison) the one submitted. If not, then the submitted binary must be rejected and this will cause a rejection of the UP as a whole. So there must be some extra caution when binaries are submitted along with source code, since the metadata need to include sufficient information for the approver to be able to reproduce the same binaries per platform. %Therefore the revealing of a UP, initiates a voting %The revealing of a UP, initiates a voting period for the specific proposal, in which the delegated experts must validate \emph{all} the UP properties posed and approve, or reject it, with their vote. Any stakeholder is eligible to vote for an UP and the voting power will be proportional to his/her stake. If a stakeholder wishes to cast a vote, although he/she has already delegated this right to an expert, then this vote will override the delegates vote. Votes are specialized fee-supported transactions, which are committed to the blockchain. %We will return to the voting protocol in the relevant section. %One final note is that, as we have described, the decentralized approval phase that we propose, entails transaction fees. This means that the approval phase is not so flexible from a practical perspective, as to be used iteratively (although technically this is possible). In other words, to reject an UP, then fix some bugs and upload a new version for review etc. An UP rejection means that a resubmission must take place, with all the overhead that this entails (transaction fees, storage costs, a new voting must take place, etc.). This is a deliberate design choice that guards the system against DoS attacks. 
From a practical perspective though, it means that the submitted UPs must be robust and thoroughly tested versions of the code, in order to avoid the resubmission overhead. We do not want to pollute the immutable blockchain history with intermediate trial-and-error UP events. \subsubsection{Activation.}\label{se:activation} %\paragraph{Scope of this phase.} The final phase in the lifecycle of a software update is the activation phase. This is a preparatory phase before the changes actually take effect. It is the phase, where we let the nodes do all the manual steps necessary, in order to upgrade to an approved UP and then synchronize with their peers before the changes take effect. %at the end, send a signal to their peers that they are ready for the changes to be activated. Thus, the activation phase is clearly a synchronization period. Its primary purpose is for the nodes to activate changes synchronously. %to signal upgrade readiness, before the actual changes take effect (i.e., activate). Why do we need such a synchronization period in the first place? Why is not the approval phase enough to trigger the activation of the changes? The problem lies in that there are manual steps involved for upgrading to the new software, such as downloading and building the software from source code, or even the installation of new hardware, which entail delays that are difficult to foresee and standardize. This results into the need for a synchronization mechanism between the nodes that upgrade concurrently. The lack of such a synchronization between the nodes, prior to activation, might cause a chain split, since different versions of the blockchain will be running concurrently. Of course, this is true only for those software updates that impact the consensus protocol (i.e., the validation rules for blocks and transactions and the consensus protocol per se).%\mnote{we should say more about that and explain what happens in the other cases (where we could have forks).} For all the other SUs, the participating nodes can activate the changes asynchronously. %This synchronization mechanism exactly is the activation phase and for this, it is considered very important. %Clearly, the activation phase is not aimed as a re-approval phase for the UP. It is there to allow a smooth incorporation of the software update into the network. Therefore it becomes relevant only for those UPs that impact the consensus and can risk a chain split. For UPs that don't impact the consensus (e.g., a code refactoring, or some short of optimization, or even a change in the consensus protocol rules, which is a velvet fork \cite{velvet}) there is essentially no need for an activation phase and the change can activate, as soon as the software upgrade takes place. %%\paragraph{Centralized approach.} %Traditionally, when a software update needs to be activated and it is known that it is likely to cause a chain split, a specific target date, or better, a target block number is set by the central authority, so that all the nodes to get synchronized. Indeed, this is a practice followed by Ethereum \cite{ethereum}. All major releases have been announced enough time before the activation, which takes place when a specific block number arrives (i.e., the corresponding block is mined). All nodes must have upgraded by then, otherwise they will be left behind. In Bitcoin \cite{bitcoin}, there also exists a signaling mechanism\footnote{see BIP-9 at https://github.com/bitcoin/bips/blob/master/bip-0009.mediawiki}. 
In this case, the activation takes place only if a specific percentage of blocks (95\%) within a retargeting period of 2016 blocks signal readiness for the upgrade.

%\paragraph{Decentralized approach.}
%Once the UP approval result has been buried under a sufficient number of blocks (i.e., the stabilization period passes)\mnote{I suppose that here you are talking about the activation method where the signalling is made by the block generators. Shouldn't this be part of the next paragraph?}

Once the voting period of the Approval phase ends, the votes have been stably stored in the blockchain and the tally result is positive, the activation period is initiated. In Figure~\ref{activation}, we depict the activation period in the decentralized setting.

\begin{figure}[h!] %[H]
\centering
\includegraphics[width=1.0 \columnwidth,keepaspectratio]{figures/activation_phase.pdf}
\caption{The activation phase.}
\label{activation}
\end{figure}

%The first step in the activation phase is the installation of the software update. Typically, as soon as the UP approval is stabilized in the blockchain, the GUI of the client software (e.g., the wallet) prompts the user to download and install the update, using the link that accompanies the UP. If in the UP bundle there exists an approved binary, then the user can download and install this, otherwise the user must download the approved source code. In the latter case, there exist an extra step of producing the binary code from the source code. In any case, it is important to note that the new software is just installed but not activated. It will remain in a latent state until the actual activation takes place.

The first step in the activation phase is the installation of the software update. It is important to note that the new software is just installed but not activated. It will remain in a latent state until the actual activation takes place. For the nodes participating in the consensus protocol, the installation of a software update means that they are ready to activate, but they wait to synchronize with their peers. To this end, they initiate signaling as a means of synchronization.

%This means that every new block issued will be stamped with the new version of the software, signifying their readiness for the new update.

One popular method for signaling, used by Bitcoin \cite{bitcoin}, is to stamp every newly issued block with the new version of the software, signifying the issuer's readiness for the new update. This approach is simple and straightforward to implement, but it restricts signaling only to \emph{maintainer nodes}\footnote{By \emph{maintainer} we mean a node that runs the consensus protocol and can issue a new block (also called \emph{minter}).}, excluding other types of nodes, such as \emph{full-node clients}, \emph{light-node clients} etc. Moreover, the major drawback of this approach is that the activation of changes is delayed significantly by the block creation process, which is slow. %\mnote{I suggest to say that in general the block generation process is slow. Indeed there are blockchain that can generate blocks very fast.}
%
One way to overcome this delay is to use sampling for estimating the adoption percentage, instead of waiting for all the signaling blocks to be stably stored into the blockchain. For example, a \emph{random sampling} method could be used to collect a representative subset of $m$ signaling blocks that is also proportional to the stake and thus we could base our calculation of the adoption threshold on this sample.
Intuitively, the sampling size should reduce the more the stake distribution diverges from the uniform distribution and vice versa. %\mnote{Ad more details on the techniques that can be used for the sampling. } Since the adoption threshold is based on the stake that has signaled and not on the number of signaling blocks, another more flexible approach would be to signal by means of simple messages (i.e., fee-based transactions) that are issued from stakeholder keys and thus are bound to specific stake. In this way, we only have to wait for the stabilization of these messages into the blockchain and not for individual blocks to stabilize (a single block can store many such messages). %\mnote{Before introducing the third approach I think that we should give more details about that. For example, say that this is possible because we consider PoS Blockchain.} Furthermore, we could even use a separate consensus protocol, just for the purpose to agree on the binary value carried by these messages (Ready | Not Ready) (i.e., a binary Byzantine Agreement problem). The parties taking part in this new protocol would be any node (client, or maintainer) that actively participates in the underlying consensus protocol and thus needs to synchronize with its peers with respect to the activation of some change. The main assumptions (network, setup, computation as in Garay et al. \cite{sok}) of this consensus protocol, naturally will be identical to the assumptions of the underlying consensus protocol and therefore the \emph{resiliency}\footnote{According to Garay et. al. in \cite{sok}, the resiliency is the fraction $(t/n)$ of misbehaving parties a protocol can tolerate, where $t$ are the number of adversaries and n are the total number of parties. In our case, we can assume that $t$ is the total adversary stake and $n$ is the total stake.} of the protocol will also be the same. All in all, by departing from the block signaling solution, we can distinguish between the readiness of different types of nodes and achieve a faster calculation of the adoption threshold. Of course, the downsides in this case are the added complexity and the extra fee that has to be paid for each activation signal. %\mnote{We should give more details on the characteristics of these ``special purpose consensus protocol''. Who are the parties running the protocol and what is the assumption that makes the consensus protocol secure (honest majority? stake majority?).} In the absence of a central authority to set a deadline for the activation of changes, the parties need a way to synchronize, in order to avoid chain splits. Hence we need some sort of a synchronization mechanism. Signaling is indeed the most popular method for synchronization. However, signaling alone is not enough to protect from the risk of a chain split. This is exactly the topic of the next section, where we also discuss our proposal for the activation phase. %When the first signal appears, we enter the adoption period for the specific UP. During the adoption period the following conditions have to be met, in order for the activation to take place: a) The stake percent that has signaled activation readiness must exceed a specific threshold, b) the update constraints for the specific UP must be fulfilled (issues like conflicts, or dependencies) and c) the adoption time period must not be exceeded, otherwise the UP will become expired. 
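Before turning to that discussion, the following sketch illustrates, purely as a numerical example, the stake-based reading of readiness signals described above; the data layout and the threshold value are hypothetical and would in practice be dictated by the update policy and the resiliency bound of the underlying consensus protocol.

\begin{verbatim}
def adoption_reached(signalled: set, stake: dict, threshold: float) -> bool:
    """Check whether the stake that signalled readiness exceeds the threshold.

    signalled: ids of stakeholder keys that issued a readiness message
    stake:     stakeholder id -> stake bound to that key
    threshold: required fraction of the total stake
    """
    total_stake = sum(stake.values())
    if total_stake == 0:
        return False
    ready_stake = sum(s for k, s in stake.items() if k in signalled)
    return ready_stake / total_stake >= threshold

# Toy example: keys holding 80% of the stake have signalled readiness.
stake = {"k1": 40, "k2": 25, "k3": 15, "k4": 20}
print(adoption_reached({"k1", "k2", "k3"}, stake, threshold=0.75))  # True
\end{verbatim}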
%The blocks generated in a proof-of stake protocol are proportional to the stake and therefore, we can assume that the signaling mechanism is also proportional to the stake. %Once the stake threshold from the signals of the new blocks is reached, then the changes can take effect. %We can also assume that the honest stake majority, will follow the protocol and eventually will upgrade and thus signal this event with their generated blocks. This means that the minimum expected percent of signals (i.e., the activation threshold) cannot be other that the minimum percent of honest stake majority required by the proof-of-stake consensus protocol. Of course, as we have noted above, for changes that don't impact the protocol, the activation threshold could be zero; meaning that even if only one node upgrades, then the changes can be immediately activated. %Moreover, the adoption time period is not fixed for all UPs. It varies based on the type of the change, which is something recorded in the UP metadata. One size does not fit all, and this is indeed true for the adoption time period of UPs. For example, major updates that require a lot of manual steps, or significant build time, or even hardware upgrade, should be adopted in a sufficient period of time, while small updates should be activated more swiftly. %Finally, before the actual activation of a change, the validation of all the update constraints must take place once more. This is true, although we make the assumption that from the approval phase all relevant update constraints' issues (like conflicts, or dependencies) have been considered. The fact that the adoption period might require significant time for a UP, whose update constraints were fulfilled at the approval phase, while concurrently there are other UPs that become activated, means that the conditions might have changed and the update constraints must be reevaluated to make sure that no problems will arise upon activation of a UP.
\chapter[Ontological Structures to Personalize the Gamification in CL Scenarios]{Ontological Structures to Personalize the Gamification in Collaborative Learning Scenarios} \label{chapter:ontogacles-1} This chapter presents the formalization of ontological structures proposed by the author of this thesis dissertation to represent gamified CL scenarios. These ontological structures allow us to systematically represent knowledge extracted from the player types models and needs-based theories of motivation to deal with motivation problems in scripted collaborative learning. This knowledge corresponds to concepts identified as relevant to solve the context-dependency of gamification based on the individual user characteristics, so that the ontological structures delineated in this chapter are also used to represent ontological models to personalize the gamification in CL scenarios based on player types models and need-based theories of motivation. The ontological structures to represent gamified CL scenarios have been developed as an extension of ontological structures proposed to represent CL scenarios in the CL ontology, hence the chapter starts with an overview of the CL ontology (\autoref{sec:overview-of-cl-ontology}). The ontological structures that have been formalized in the \emph{\textbf{Onto}logy to \textbf{Ga}mify \textbf{C}ollaborative \textbf{Le}arning \textbf{S}cenarios} - \textbf{OntoGaCLeS} to represent gamified CL scenarios based on the knowledge extracted from the player types models and needs-based theories of motivation are presented in \autoref{sec:modeling-gamified-cl-scenarios}. To demonstrate the usefulness of this formalization, and then to validate the ontological structures as a formal representation of ontological models to personalize the gamification in CL scenarios, \autoref{sec:formalizing-ontological-model} shows the procedure followed to build an ontological model to personalize the gamification of CL scenarios based on the Dodecad player type models \cite{Marczewski2015b}. Finally, \autoref{sec:ontogacles1-concluding-remarks} presents the concluding remarks of this chapter. Part of the work described in this chapter was published by the author of this PhD thesis dissertation in the scientific articles: \begin{itemize} \item \aspas{\emph{Towards an Ontology for Gamifying Collaborative Learning Scenarios}} published in the 12\textsuperscript{th} International Conference on Intelligent Tutoring Systems, ITS 2014, held in Honolulu, HI, USA \cite{ChallcoMoreiraMizoguchiIsotani2014a}. \item \aspas{\emph{An Ontology Engineering Approach to Gamify Collaborative Learning Scenarios}} published in the 20\textsuperscript{th} International Conference on Collaboration and Technology, CRIWG 2014, held in Santiago, Chile \cite{ChallcoMoreiraMizoguchiIsotani2014}. \item \aspas{\emph{Personalization of Gamification in Collaborative Learning Contexts using Ontologies}} published as Volume 13, Issue 6, in the journal of IEEE Latin America Transactions, 2015 \cite{ChallcoMoreiraBittencourtMizoguchiIsotani2015}. \end{itemize} %% ================================== %% \section{Overview of the Collaborative Learning Ontology} \label{sec:overview-of-cl-ontology} The CL ontology has been developed for a long time by the contributions of many researchers. 
Initially, the CL ontology was conceived to support the opportunistic group formation \cite{IkedaGoMizoguchi1997}, so that, to identify situations in which an individual shifting from individual learning mode to CL mode, the CL ontology formalizes the agreement in the negotiation process for group formation as ontological structures to describe individual and group learning goals. Employing this formalization, intelligent agents have been developed to help students to find group members for establishing group learning activities in which they should participate. These agents check the individual and group learning goals, and then they initiate a negotiation process to establish an agreement for the participants in group learning activities. This first version of the CL ontology has been demonstrated to be useful in the development of agent-based systems that provide helpful support for the group formation \cite{InabaOhkuboIkedaMizoguchiToyoda2001, SupnithiInabaIkedaMizoguchi1999}. To provide theoretical and pedagogical justification in the group formation, the CL ontology has been extended to represent CL scenario that compliant with instructional and learning theories \cite{InabaMizoguchi2004,IsotaniMizoguchiIsotaniCapeliIsotanideAlbuquerqueBittencourtJaques2013}. In this extension, concepts, such as interaction patterns, group goals, individual goals, CL roles and so on, have been formalized from different instructional/learning theories, so that, in addition to support the group formation \cite{IsotaniMizoguchi2008}, the ontological structures to represent CL scenarios have been successfully applied in: the modeling of learners' development \cite{InabaIkedaMizoguchi2003} the interaction analysis \cite{InabaOhkuboIkedaMizoguchi2002}, and the design of CL process \cite{IsotaniMizoguchiIsotaniCapeliIsotanideAlbuquerqueBittencourtJaques2013}. \autoref{fig:concepts-terms-and-relation-in-cl-ontology} shows the terms, concepts and relations defined in the CL ontology. These concepts are defined as follows as: \begin{description} \item[\textbf{I-goal}] is the individual learning goal that represents what the participant in focus (\emph{I}) is expected to acquire, and it is described as a change in his/her learning stage. \item[\textbf{I-role}] is the CL role played by the participant in focus (\emph{I}). \item[\textbf{You-role}] is the CL role played by the participant (\emph{You}) who is interacting with the participant in focus (\emph{I}). \item[\textbf{Y<=I-goal}] is the learning strategy employed by the participant in focus (\emph{I}) to interact with the participant (\emph{You}) in order to achieve his/her individual learning goals (\emph{I-goal}). \item[\textbf{W(L)-goal}] is the common learning goal for the group members in the CL scenario. \item[\textbf{W(A)-goal}] is the rational arrangement of the group activity used to achieve the common learning goal (\emph{W(L)-goal}) and the individual learning goals (\emph{I-goal}). \end{description} \begin{figure}[!htb] \caption{Concepts, terms and relations defined in the CL Ontology} \label{fig:concepts-terms-and-relation-in-cl-ontology} \centering \includegraphics[width=0.95\textwidth]{images/chap-ontogacles1/concepts-terms-and-relation-in-cl-ontology.png} \fdireta{Isotani2009} \end{figure} To express the relationship of concepts delineated above, the CL Ontology employs the ontological structures shown in \autoref{fig:ontological-structure-cl-scenario} to represent CL scenarios. 
In these ontological structures, a CL scenario is represented by three parts defined as: the \emph{Group structure benefit} (\emph{W(S)-goal}) to describe the expected benefits of the structured collaboration (i.e. positive interdependence, individual accountability, promotive interactions); the \emph{Learning strategy} (\emph{Y<=I-goal}) to describe the learning strategies employed by the group members in the CL scenario; and (3) the \emph{CL process} to describe the rational arrangement of the group activity (\emph{W(A)-goal}). \begin{figure}[!htb] \caption{Ontological structure to represent CL scenarios} \label{fig:ontological-structure-cl-scenario} \centering \includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-cl-scenario.png} \fdireta{Isotani2009} \end{figure} \begin{enumerate} [label=(\alph*)] \item The \textbf{Learning strategies} (\emph{Y<=I-goal}) are guidelines that specify how the participants should interact with others members of group to achieve their individual goals. These guidelines help the group members to externalize a desired behavior to play a given CL role more adequately. Therefore, the Learning strategy is represented as an ontological structure composes by: the participant in focus (\emph{I}) who plays the CL role \aspas{\emph{I-role}}; the participant (\emph{You}) who interacts with the participant in focus (\emph{I}) playing the CL role \aspas{\emph{You-role};} and the individual learning goals (\emph{I-goal}) that are expected to be achieved by the participant in focus (\emph{I}) at the end of CL scenario. The \emph{behavioral role} as part of the CL roles \aspas{\emph{I-role}} and \aspas{\emph{You-role}} is used to describe the behaviors externalized by the participants \aspas{\emph{I}} and \aspas{\emph{You}} when they interact in the CL scenario employing the learning strategy (\emph{Y<=I-goal}). \item The \textbf{CL role} describes functions, goals, duties and responsibilities that must be taken by members of group to achieve the common and individual learning goals. Thus, the ontological structure to represent a CL role is composed by: the \emph{necessary condition} and \emph{desired conditions} to play the CL role; the description of \emph{how to collaborate} when a group member plays the CL role; and the description of \emph{benefits for playing the role}. In this ontological structure, \emph{Cognitive/Knowledges states} are used to define the necessary and desired conditions for a group member to play the CL role, \emph{behaviors} are used to describe \emph{how to collaborate} playing the CL role, and \emph{individual learning goals} (\emph{I-goal}) is employed to describe the expected \emph{benefits for playing the role}. \item The \textbf{CL process} is the \emph{rational arrangement of group activity} (\emph{W(A)-goal}) whereby the common and individual learning goals are achieved by the group members. This arrangement is represented by the \emph{common learning goals} (\emph{W(L)-goal}) as result of the negotiation process in the group formation, and by the \emph{Interaction Pattern} as the sequencing mechanism followed by the participants to achieve their individual learning goals (\emph{I-goal}). The interaction pattern is represented as a set of \emph{necessary} and \emph{desired interactions} in which the interaction for the group members is described as influential Instructional-Learning events (\emph{Influential I\_L events}). 
\item The \textbf{Influential I\_L event} represents the interaction among the group members and the benefits obtained by the interaction from two viewpoints: from the viewpoint of participants who play a role of instructor, and from the viewpoint of participants who play a role of learner. The influential I\_L event describes group members performing actions that influence other members with the purpose to change their own learning states by helping others to achieve their individual learning goals. Therefore, the ontological structure to represent an influential I\_L event is composed by two events: a \emph{learning event} and an \emph{instructional event} in which the participants are represented as actors of CL scenario playing CL roles and performing a set of actions to achieve their individual learning goals (\emph{I-goal}). For a group member acting as \emph{instructor}, the influential I\_L event describes his/her interaction with other group member who acts as \emph{learner} through instructional actions, and the expected \emph{benefits for the instructor} (\emph{I-goal}). For a group member acting as \emph{learner}, the influential I\_L event describes his/her interaction with other group member who acts as \emph{instructor} through learning actions, and the expected \emph{benefits for the learner} (\emph{I-goal}). \end{enumerate} As it was said before, the ontological structures shown in \autoref{fig:ontological-structure-cl-scenario} are used to delineate CL scenarios that compliant with instructional and learning theories. To illustrate this, \autoref{fig:cognitive-apprenticeship-ontological-structure} shows the representation of a CL scenario based on the Cognitive Apprentice theory. According to this theory, the CL activities should incorporate situations that are familiar to those who are using these activities, and these situations must lead the participants to act and interact acquiring skills in a specific context, and then generalizing these skills to other situations. Therefore, the CL scenarios based on the Cognitive Apprentice theory focuses on supporting a more skilled participant (known as \emph{master}) to teach a familiar situation for the lesser skilled participants (known as \emph{apprentices}) who learn by observing the skilled participant's behaviors and mimic him/her in other similar situations. From the viewpoint of the more skilled participant: he/she is supported by the learning strategy \aspas{\emph{learning by guiding}} (a1); his/her role (\emph{I-role}) is the \emph{Master role} with a behavioral role of \emph{Guider}; and his/her individual learning goals is the \emph{development of cognitive} or \emph{meta-cognitive skills} at the levels of \emph{Autonomous stage}. From the viewpoint of a lesser skilled participant: he/she is supported by the learning strategy \aspas{\emph{learning strategy by guiding}} (a2) to interact with the master; his/her role (\emph{I-role}) is the \emph{Apprentice role} with the behavioral role of \emph{Imitator}; and his/her individual goals are the \emph{development of cognitive} and/or \emph{meta-cognitive skills} at the levels of \emph{Cognitive stage} and \emph{Associative stage}. 
\begin{figure}[!htbp] \caption{Ontological structures to represent a CL scenario based on the cognitive apprenticeship theory} \label{fig:cognitive-apprenticeship-ontological-structure} \centering \includegraphics[width=1\textwidth]{images/chap-ontogacles1/cognitive-apprenticeship-ontological-structure.png} \fautor \end{figure} According to the cognitive apprentice theory, the more skilled participant who plays the master role must have knowledge and/or experience in using the target cognitive or metacognitive skill. Therefore, the necessary conditions to play the \emph{Master role} as shown in \autoref{fig:cognitive-apprenticeship-ontological-structure} (b1) are: \emph{having the knowledge how to use the target cognitive skill}; \emph{having experience in using the target cognitive skill}; and \emph{having experience in using the target metacognitive skill}. When a participant adequately plays the master role, he/she acts \emph{Guiding} others participants, and as consequence of this behavior, he/she is benefited with the \emph{Development of cognitive or metacognitive skill} at the \emph{Autonomous stage}. The cognitive apprenticeship theory indicates that the participants without any knowledge or experience in how to use the target skill should play the apprentice role. Therefore, there are not necessary conditions in the ontological structure shown in \autoref{fig:cognitive-apprenticeship-ontological-structure} (b2) to represent the \emph{Apprentice role}, and the desired conditions for this role are: \emph{not having the knowledge how to use target metacognitive or cognitive skill}; and \emph{not having experience in using the target metacognitive or cognitive skill}. When a participant adequately plays the \emph{Apprentice role}, he/she acts \emph{Imitating} the behavior of the master and obtaining the benefits in the \emph{Development of metacognitive or cognitive skill} at the levels of \emph{Cognitive} and \emph{Associative} stages. When the two learning strategies, \emph{Learning by Guiding} and \emph{Learning by Apprenticeship}, are simultaneously employed to structure the interactions among the participants in the CL scenario, a positive synergy is created among them producing a \emph{Spread of skills}. This arrangement is formalized by the ontological structure shown in \autoref{fig:cognitive-apprenticeship-ontological-structure} (c), where the \emph{CL process} is defined as a \emph{Cognitive Apprenticeship type CL session}, the \emph{Common goal} of this session is the \emph{Spread of skill}, and the \emph{Teaching-Learning Process} is an \emph{Interaction Pattern} defined by the sequencing mechanism of a CSCL script inspired by the Cognitive Apprenticeship theory. This sequencing mechanism defines the necessary and complementary interactions showed in \autoref{fig:cognitive-apprenticeship-cscl-script}. 
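Although the ontology itself is not expressed in program code, a simplified rendering of the CL role structure may help to fix ideas. The sketch below (in Python, with attribute names chosen freely for illustration and not taken from the ontology's formal vocabulary) instantiates the Master and Apprentice roles with their conditions, expected behaviors and benefits, as described above.

\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class CLRole:
    # Simplified view of the ontological structure of a CL role.
    name: str
    necessary_conditions: List[str] = field(default_factory=list)
    desired_conditions: List[str] = field(default_factory=list)
    how_to_collaborate: List[str] = field(default_factory=list)  # behaviors
    benefits: List[str] = field(default_factory=list)            # I-goals

master = CLRole(
    name="Master role",
    necessary_conditions=[
        "has the knowledge of how to use the target cognitive skill",
        "has experience in using the target (meta)cognitive skill"],
    how_to_collaborate=["Guiding"],
    benefits=["development of the skill at the Autonomous stage"],
)

apprentice = CLRole(
    name="Apprentice role",
    desired_conditions=[
        "does not have the knowledge of how to use the target skill",
        "does not have experience in using the target skill"],
    how_to_collaborate=["Imitating"],
    benefits=["development of the skill at the Cognitive/Associative stages"],
)
\end{verbatim}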
\begin{figure}[!htbp] \caption{Necessary and complementary interactions defined by the sequencing mechanism of a CSCL script inspired by the cognitive apprenticeship theory} \label{fig:cognitive-apprenticeship-cscl-script} \centering \includegraphics[width=1\textwidth]{images/chap-ontogacles1/cognitive-apprenticeship-cscl-script.png} \fadaptada{Isotani2009} \end{figure} The necessary and desired interactions defined by the sequencing mechanism shown in \autoref{fig:cognitive-apprenticeship-cscl-script} are formalized as \emph{Influential I\_L event} in the \emph{Teaching-Learning Process} of \emph{Cognitive Apprenticeship type CL session} showed in \autoref{fig:cognitive-apprenticeship-ontological-structure} (c). The ontological structure to represent the interaction \aspas{\emph{Setting up learning context type CA}} is shown in detail in \autoref{fig:cognitive-apprenticeship-ontological-structure} (d). In this interaction, the instructional event \aspas{\emph{Giving Information}} delineates the action \aspas{\emph{Explain}} as an instructional action performed by the participant who plays the \emph{Master role} to \emph{develop the metacognitive skill} at the level of \emph{Autonomous stage}. The learning event \aspas{\emph{Receiving information}} delineates the action \aspas{\emph{Identify the context}} as a learning action performed by the participant who plays the \emph{Apprentice role} to \emph{develop the cognitive skill} at the level of \emph{Rough-Cognitive stage}. %% ================================== %% \section[Ontological Structures to Represent Gamified CL Scenarios]{Ontological Structures to Represent Gamified Collaborative Learning Scenarios} \label{sec:modeling-gamified-cl-scenarios} The concepts, terms and relations shown in \autoref{fig:concepts-terms-and-relation-in-gamified-cl-scenarios} have been formalized in the ontology OntoGaCLeS to represent gamified CL scenarios. These elements employ an independent vocabulary from any theory and practice, and they are described as follows as: \begin{figure}[!htbp] \caption{Concepts, terms and relations defined in the ontology to represent gamified CL scenarios} \label{fig:concepts-terms-and-relation-in-gamified-cl-scenarios} \centering \includegraphics[width=1\textwidth]{images/chap-ontogacles1/concepts-terms-and-relation-in-gamified-cl-scenarios.png} \fautor \end{figure} \begin{description} \item[\textbf{Y<=I-mot goal}] is the \emph{individual motivational strategy} used to enhance the learning strategy (\emph{Y<=I-goal}) employed by the participant in focus (\emph{I}). \item[\textbf{I-mot goal}] is the \emph{individual motivational goal} for the participant in focus (\emph{I}), and it represents what is expected to happen in his/her motivational stage when an individual motivational strategy (\emph{Y<=I-mot goal}) is applied in the CL scenario to enhance the learning strategy (\emph{Y<=I-goal}) employed by him/her to interact with other member of group (\emph{You}). \item[\textbf{I-player role}] is the \emph{player role} for the participant in focus (\emph{I}). \item[\textbf{You-player role}] is the \emph{player role} for the participant (\emph{You}) who interacts with the participant in focus (\emph{I}). \item[\textbf{I-gameplay}] is the \emph{individual gameplay strategy} for the participant in focus (\emph{I}), and it indicates the implementation of the individual motivational strategy (\emph{Y<=I-mot goal}) when this strategy corresponds to the gamification. 
\end{description}

In the following subsections, the formalization of the concepts, terms and relations briefly introduced here is detailed.

\subsection{Individual Motivational Goal (I-mot goal)}
\label{subsec:individual-motivational-goal}

The \emph{individual motivational goal} (\emph{I-mot goal}) has been formalized in the ontology OntoGaCLeS to represent the reason why it is necessary to apply an individual motivational strategy in a CL scenario. Thus, for the participant in focus (\emph{I}), the individual motivational goal (\emph{I-mot goal}) represents what is expected to happen in his/her motivational stage when a motivational strategy is applied in the CL scenario to enhance the learning strategy employed by him/her to interact with others. Thus, the individual motivational goal indicates the motivational stages that must be reached by a person to be motivated to interact with others. \autoref{fig:ontological-structure-i-mot-goal} shows the ontological structure that has been formalized in the ontology OntoGaCLeS to represent an individual motivational goal (\emph{I-mot goal}), where the \emph{initial stage} and \emph{goal stage} are used to represent the expected change in the motivational stage of the person in focus (\emph{I}).

\begin{figure}[!htbp]
\caption[Ontological structures to represent individual motivational goal (I-mot goal)]{Ontological structures to represent individual motivational goal (\emph{I-mot goal}). At the bottom, the \aspas{\emph{Satisfaction of psychological need}} (left) and the \aspas{\emph{Internalization of motivation}} (right)}
\label{fig:ontological-structure-i-mot-goal}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-i-mot-goal.png}
\fautor
\end{figure}

Two types of individual motivational goals have currently been formalized in the ontology OntoGaCLeS to represent the individual motivational goals (\emph{I-mot goal}) of gamification as an individual motivational strategy. The former, known as \emph{Satisfaction of psychological needs}, has been formalized based on the conceptualization of motivation as an internal psychological process to satisfy human needs \cite{PritchardAshwood2008}; and the latter, known as \emph{Internalization of motivation}, has been formalized based on the form in which an individual regulates his/her own choices to behave and act \cite{DeciRyan2010}. \autoref{fig:ontological-structure-i-mot-goal} shows the representation for these two types of individual motivational goals. The initial and goal stages for the \emph{Internalization of motivation} are defined by the self-determination stages, whereas the initial and goal stages for the \emph{Satisfaction of psychological need} are defined by the \emph{psychological need stages}. In the articles \cite{ChallcoMoreiraBittencourtMizoguchiIsotani2015, ChallcoMoreiraMizoguchiIsotani2014, ChallcoMoreiraMizoguchiIsotani2014a}, the thesis author used the concept of \aspas{\emph{Psychological need}} to refer to the concept of \aspas{\emph{Psychological need stage},} and he used the concept of \aspas{\emph{Without need}} to refer to the stages indicated as \aspas{\emph{\$1 need satisfied}} where \$1 is substituted by psychological needs (e.g. \emph{Mastery need satisfied}).
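As a complementary, informal illustration (the names below are simplified and do not replace the ontological vocabulary), an individual motivational goal can be read as an expected change between two stages, as in the following sketch:

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class IMotGoal:
    """An individual motivational goal as an expected change of stages."""
    name: str
    initial_stage: str
    goal_stage: str

# Satisfaction of a psychological need: unipolar scale from unsatisfied
# to satisfied.
satisfaction_of_autonomy = IMotGoal(
    name="Satisfaction of autonomy",
    initial_stage="Autonomy unsatisfied",
    goal_stage="Autonomy satisfied",
)

# Internalization of motivation along the self-determination continuum.
internalization = IMotGoal(
    name="Internalization from amotivation to intrinsic motivation",
    initial_stage="Amotivation stage",
    goal_stage="Intrinsic motivation stage",
)
\end{verbatim}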
As mentioned before, in \autoref{chapter:general-background}, motivation is an internal psychological process associated with the three general components of arousal, direction and intensity, in which the arousal component is caused by needs (also called \emph{wants} or \emph{desires}). These needs cause a person to behave and act in order to satisfy them \cite{MitchellDaniels2003}. Thus, motivation is a construct that delineates why a person chooses to allocate time and energy to different behaviors and actions to maximize the satisfaction of his/her own needs \cite{PritchardAshwood2008}. This means that, in a CL scenario, a motivation problem in scripted collaborative learning occurs when the participant believes that this scenario will not lead him/her to satisfy his/her individual needs. Therefore, the motivational strategy is applied in the CL scenario to change this perception. Based on this assumption, the individual motivational goals (\emph{I-mot goal}) for the person in focus (\emph{I}) have been formalized in the ontology OntoGaCLeS as the satisfaction of needs. More specifically, in gamified CL scenarios, the individual motivational goal is described as \emph{Satisfaction of psychological needs} because game elements do not satisfy all human needs; they satisfy only part of these needs, which are referred to by the thesis author as \emph{psychological needs}. The psychological needs are the human needs that are classified into the groups of relatedness and growth needs according to the ERG (Existence, Relatedness and Growth) theory \cite{Alderfer1972}.

\begin{figure}[!htbp]
\caption[Ontological structures to represent satisfaction of psychological need]{Ontological structures to represent \aspas{\emph{Satisfaction of psychological need}.} At the top right, the ontological structure to represent \aspas{\emph{Satisfaction of autonomy}.}}
\label{fig:ontological-structure-satisfaction-psychological-need}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-satisfaction-psychological-need.png}
\fautor
\end{figure}

\autoref{fig:ontological-structure-satisfaction-psychological-need} shows the ontological structures formalized to represent the \emph{Satisfaction of psychological need}. These ontological structures represent the satisfaction of innate psychological needs, and they comprise what most experts intend to evoke in the minds of users when non-game contexts are gamified \cite{MoraRieraGonzalezArnedo-Moreno2015,SeabornFels2015}. According to the SDT theory \cite{RyanDeci2000,DeciRyan2010}, the well-being of an individual is reached when the psychological needs of autonomy, competence and relatedness are satisfied \cite{DeciRyan1985, DeciRyan2010}. According to Dan Pink's theory \cite{Pink2011}, a person is motivated and engaged in a cognitive, decision-making, creative or higher-order thinking task when he/she is given autonomy, mastery and purpose. At the top right of \autoref{fig:ontological-structure-satisfaction-psychological-need}, the ontological structure to represent the \emph{Satisfaction of autonomy} is detailed, in which, based on a unipolar scale from an unsatisfied need stage to a satisfied need stage, the roles of the initial and goal stages are played by the \emph{Autonomy unsatisfied} and the \emph{Autonomy satisfied} stages, respectively.
Employing the same unipolar scale and the needs-based theories of motivation, the SDT theory \cite{DeciRyan2010} and Dan Pink's motivation theory \cite{Pink2011}, a set of individual motivational goals defined as satisfactions of psychological needs has been formalized in the ontology OntoGaCLeS; they are detailed in \autoref{sec:ontogacles:i-mot-goal}.

\begin{figure}[!htbp]
\caption[Ontological structures to represent internalization of motivation]{Ontological structures to represent \aspas{\emph{Internalization of motivation}.} At the top right, the ontological structure to represent the \aspas{\emph{Internalization from amotivation to intrinsic motivation}.}}
\label{fig:ontological-structure-internalization-motivation}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-internalization-motivation.png}
\fautor
\end{figure}

The \emph{internalization of motivation} is the process by which \aspas{\emph{values, attitudes or regulatory structures are taken in, such that the external regulation of a behavior is transformed into an internal regulation, so it no longer requires the presence of an external contingency}} \cite{GagneDeci2005}. Thus, the internalization of motivation for the satisfaction of needs refers to changes in the motivation from a non-free choice to a free choice of the needs satisfied by oneself. According to the SDT theory \cite{DeciRyan1985, RyanDeci2000}, this change happens from extrinsic motivation to intrinsic motivation when motivation changes from a non-self-determined form (\emph{non-free choice}) to a self-determined form (\emph{free choice by oneself}). Here, the extrinsic motivators employed by the game elements must be configured as an attempt to transform the current motivation stages of participants from amotivation and extrinsic motivation into intrinsic motivation. Based on these definitions, the ontological structures shown in \autoref{fig:ontological-structure-internalization-motivation} have been formalized to represent the \emph{Internalization of motivation}. These ontological structures employ the continuum of stages ranging from \emph{amotivation} (non-internalized behavior) to \emph{external motivation} (not at all internalized behavior) to \emph{introjected motivation} (partially internalized behavior) to \emph{identified motivation} (fully internalized behavior) to \emph{intrinsic motivation} (automatically internalized behavior). At the top right of \autoref{fig:ontological-structure-internalization-motivation}, the formalization of the change from the \emph{Amotivation stage} (\emph{initial stage}) to the \emph{Intrinsic motivation stage} (\emph{goal stage}), defined as \aspas{\emph{Internalization from amotivation to intrinsic motivation},} is shown in detail. The detailing of all ontological structures to represent the internalization of motivation is presented in \autoref{sec:ontogacles:i-mot-goal}.

\subsection{Player Role}
\label{subsec:player-role}

The identification of homogeneous groups of people that differ from other groups in a significant way is essential to define the personalization in any system.
In game design, this segmentation is established by player type models, in which typologies are used to categorize the users into different groups according to their geographic location \cite{BenJuddChrisAvelloneHideoKojimaKeijiInafune2016, ChakrabortyNorcioVeerAndreMillerRegelsberger2015}, their demographic situation \cite{GreenbergSherryLachlanLucasHolmstrom2010, Shaw2012}, their psychological characteristics \cite{Tseng2011, Yee2006}, and their behavioral characteristics \cite{Bartle2004, Lazzaro2009}. These player type models aim to help the game designers to identify the necessary features that make a game fun, enjoyable and desirable for a particular audience. The player type models cannot be directly extrapolated to other contexts for which they are not intended. Thus, the concept of \emph{Player role} has been formalized in the ontology OntoGaCLeS to define typologies of player types in the context of CL scenarios. Player roles delineate the functionality, responsibilities and requirements whereby participants of a group become players in a gamified CL scenario. This segmentation is based on individual characteristics of the participants, which are expressed as necessary and desired conditions. In this sense, the \emph{Player role} has been formalized by the ontological structure shown in \autoref{fig:ontological-structure-player-role}. This structure defines the conditions that a participant must satisfy in the CL scenario to play the player role as \emph{necessary condition} and \emph{desired condition}. Thus, a participant of a CL scenario cannot play a player role when he/she does not fulfill the necessary conditions, and when the participant fulfills both the necessary and desired conditions, he/she has a higher probability of obtaining the expected \emph{benefits for playing the role}. The necessary and desired conditions in the ontological structure to represent \emph{Player role} are represented by: \emph{motivation state}, \emph{psychological need state}, and \emph{individual personality trait state}. A tree overview of these states is detailed in \autoref{sec:ontogacles:tree-overview-states}, where:

\begin{itemize}
\item The \emph{motivation state} is an internal state that indicates the temporary attitudinal state of a person about his/her desire to be a participant in the CL session. These states can be \emph{Not motivated} and \emph{Motivated}. The motivated state is further divided into two types: \aspas{\emph{Intrinsic motivated}} and \aspas{\emph{Extrinsic motivated}} \cite{DeciRyan2010}. It is important to note here that the concept of motivation state is not the same as the concept of motivation stage. Although both concepts represent changes in the participant's motivation, the motivation state represents a specific point in the whole process of being motivated, whereas the motivation stage represents an interval in a participant's motivation process.

\item The \emph{psychological need state} represents the current psychological need of a person, in which the states for each of the psychological needs are formalized through the representation of the paired states \aspas{\emph{Having need of \$1}} and \aspas{\emph{Not having need of \$1},} in which \aspas{\emph{\$1}} is replaced by the name of the need that is being defined as a prerequisite.
For instance, to represent the states related to the psychological need of competence, the states \aspas{\emph{Having need of competence}} and \aspas{\emph{Not having need of competence}} have been formalized as psychological need states in the ontology OntoGaCLeS.
\item The \emph{individual personality trait state} indicates states of the individual personality traits, such as introversion, extroversion, openness to experience, and conscientiousness. The individual personality trait states delineate the characteristics that make a person unique by indicating his/her habitual patterns of thought, emotion and behavior in different situations \cite{MatthewsDearyWhiteman2003}. These states express whether a participant has or does not have the individual personality trait. In the ontology OntoGaCLeS, the formalized individual personality trait states are: the \emph{big five personality traits} \cite{CostaMacCrae1992}, the \emph{MBTI personality traits} \cite{Briggs1976}, the \emph{game-playing style preferences} described in Bartle's player type model \cite{Bartle2004}, and the \emph{game-playing liking preferences} described in Yee's motivation components \cite{Yee2006}.
\end{itemize}
\begin{figure}[!htbp]
\caption[Ontological structure to represent player role]{Ontological structure to represent \aspas{\emph{Player role}} (at the top). At the bottom, the ontological structure to represent the player role \aspas{\emph{Dreamer role}.}}
\label{fig:ontological-structure-player-role}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-player-role.png}
\fautor
\end{figure}
Besides the necessary and desired conditions that an individual should satisfy, the ontological structure to represent \emph{Player role} shown in \autoref{fig:ontological-structure-player-role} indicates information about how the participant with the player role is expected to interact with the game elements (\emph{how to interact}), and about the expected benefits for playing the player role (\emph{benefits for playing the role}). Thus, concepts delineated as \emph{behaviors} represent the possible manners in which a participant should interact with others, and concepts delineated as \emph{individual motivational goals} (\emph{I-mot goal}) represent the expected \emph{benefits for playing the role}. At the bottom of \autoref{fig:ontological-structure-player-role}, the \emph{Creator role} is shown as an example of the formalization of a player role using the ontological structure proposed in this section. According to this structure, participants who have a greater liking for customization-components than for other game components are classified as creators.
This segmentation is represented by the necessary condition of \aspas{\emph{having a non-negative liking for customization-components},} and the desired conditions of \aspas{\emph{having a positive liking for customization-components},} \aspas{\emph{having a non-positive liking for achievement-component},} \aspas{\emph{having a negative liking for achievement-component},} \aspas{\emph{having a non-positive liking for social-component},} and \aspas{\emph{having a negative liking for social-component}.} The desired conditions related to the behavioral characteristics of participants who act in this player role are: \aspas{\emph{having preference for interacting on the system},} and \aspas{\emph{having need of autonomy}.} The expected behaviors to obtain benefits for playing the creator role are: \aspas{\emph{Creating},} \aspas{\emph{Tweaking},} \aspas{\emph{Building},} \aspas{\emph{Customizing},} \aspas{\emph{Transforming},} \aspas{\emph{Adapting},} \aspas{\emph{Inventing}} or \aspas{\emph{Crafting}.} As a consequence of behaving as creators, the participants attain the \emph{Satisfaction of autonomy} and the \emph{Internalization to intrinsic motivation} (\emph{I-mot goal}). In the ontology OntoGaCLeS, based on the information extracted from five different player type models, twenty-six player roles have been formalized and represented using the ontological structure proposed in this section. These player roles, their conditions, expected behaviors and benefits for the person who plays the role are detailed in \autoref{sec:ontogacles:player-role}.
\subsection{Individual Motivational Strategy (Y<=I-mot goal)}
In the context of CL scenarios, an \emph{individual motivational strategy} is a set of guidelines to motivate a participant to interact with other group members using a learning strategy. These guidelines are independent of any technology, so the individual motivational strategy basically indicates what motivates a participant to act and behave in a certain way. For example, consider the following guidelines extracted from the Model-driven Persuasive Game:
\begin{citacao}
\aspas{... cooperation is only a significant motivator of behaviour change for achievers and socializers... This is in line with the gaming style of socializers, who enjoy helping others. Achievers would also prefer to cooperate because they are inherently more altruistic ... achievers do often co-operate with one another, usually to perform some difficult collective goal, and from these shared experiences can grow deep, enduring friendships which may surpass in intensity those commonly found among individuals other groups.} \citeonline{Orji2014}.
\end{citacao}
When these two guidelines are applied in a CL scenario by providing a situation in which the participants must cooperate to achieve a group goal (e.g., obtaining a special reward based on the collective performance of group members), they become an individual motivational strategy that could be applied to motivate participants who fall into the category of socializer or achiever, because such participants are motivated by the desire to accomplish the group goal and the desire to help others, respectively. The ontological structure shown in \autoref{fig:ontological-structure-individual-motivational-strategy} represents the formalization of individual motivational strategies whose guidelines are extracted from gamification models or game design models.
According to this structure, an \emph{individual motivational strategy} (\emph{Y<=I-mot goal}) is composed of:
\begin{description}
\item[\textbf{I-player role}] to indicate the player role for the participant in focus (\emph{I}), who becomes a \emph{player role holder} when he/she is motivated by the motivational strategy. This player role also indicates the \emph{behavioral roles} whereby the participant in focus (\emph{I}) is motivated to interact with the other participant (\emph{You}) employing the learning strategy (\emph{Y<=I-goal}).
\item[\textbf{You-player role}] to indicate the player role for the participant (\emph{You}) who interacts with the participant in focus (\emph{I}). The \emph{behavioral roles} whereby the \emph{player role holder} of this role supports the interaction of the participant in focus (\emph{I}) are also indicated in this structure.
\item[\textbf{I-mot goal (I)}] to indicate the individual motivational goals (\emph{I-mot goal (I)}) whereby the participant in focus (\emph{I}) is motivated to interact with the other participant (\emph{You}) employing a learning strategy (\emph{Y<=I-goal}). In this sense, these individual motivational goals represent the reasons why the guidelines in the motivational strategy are applied in the CL scenario to enhance the learning strategy (\emph{Y<=I-goal}) employed by the participant in focus (\emph{I}) to interact with the other participant (\emph{You}).
\end{description}
\begin{figure}[!htbp]
\caption[Ontological structure to represent individual motivational strategy]{Ontological structure to represent \aspas{\emph{Individual motivational strategy}} (at the left). At the right, the motivational strategies \aspas{\emph{Gamifying for Consumer and Dodecad Achiever}} (right-top) and \aspas{\emph{Gamifying by COOP}} (right-bottom).}
\label{fig:ontological-structure-individual-motivational-strategy}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-individual-motivational-strategy.png}
\fautor
\end{figure}
To exemplify the formalization of individual motivational strategies using the ontological structure proposed in this section, \autoref{fig:ontological-structure-individual-motivational-strategy} also shows two examples in which the attribute \aspas{\emph{based on}} indicates the gamification models on which these motivational strategies are based. The individual motivational strategy shown at the top-right of \autoref{fig:ontological-structure-individual-motivational-strategy} is known as \aspas{\emph{Gamifying for Consumer and Dodecad Achiever},} and it has been formalized based on the guidelines of the Dodecad model \cite{Marczewski2015a} and the 5 Groups of fun framework \cite{Marczewski2015b}. According to these guidelines, consumers and achievers are motivated by the need to obtain a reward that demonstrates their accomplishments to other participants. Hence, \emph{Accomplisher} and \emph{Social-comparer} are the \emph{behavioral roles} whereby a participant in focus (\emph{I}) playing the \emph{Consumer role} is motivated to interact with the participant (\emph{You}) who plays the \emph{Achiever role}. When playing this role, the \emph{Satisfaction of mastery} and the \emph{Internalization from extrinsic to intrinsic motivation} are the individual motivational goals whereby the participant in focus (\emph{I}), as a consumer, is motivated to interact with the other participant (\emph{You}), who acts as an achiever.
Behaving as an accomplisher and a social-comparer, the participant in focus (\emph{I}) has two individual motivational goals: to demonstrate his/her mastery, represented as \aspas{\emph{Satisfaction of mastery};} and to internalize his/her current extrinsic motivation stage into an intrinsic motivation stage, represented as \aspas{\emph{Internalization from extrinsic to intrinsic motivation}.} At the bottom-right of \autoref{fig:ontological-structure-individual-motivational-strategy}, the ontological structure formalized to represent the application of the guidelines extracted from the Model-driven persuasive game for the cooperation strategy \cite{OrjiVassilevaMandryk2014} is shown. These guidelines indicate cooperation as a significant motivator for a participant who plays the socializer or achiever role, because a participant who plays these roles enjoys helping others and cooperating with others to accomplish a difficult collective goal. Based on this, the motivational strategy \aspas{\emph{Gamifying by COOP}} defines the \emph{BrainHex Socializer role} and the \emph{BrainHex Achiever role} as the player roles that would be played by the participant in focus (\emph{I}) and by the participant (\emph{You}) who gives support to the participant in focus. Playing these roles, the participants (\emph{I} and \emph{You}) act as \emph{Helper} and \emph{Accomplisher}. When the participant in focus (\emph{I}) has the desire to accomplish the difficult collective goal, his/her individual motivational goal is the \emph{Satisfaction of competence}, and when the participant in focus (\emph{I}) has the desire to help others, his/her individual motivational goal is the \emph{Satisfaction of relatedness}. The ontological structure also indicates the consequence of applying the motivational strategy: changes are expected in the motivational state of the participant in focus (\emph{I}) from the amotivation or extrinsic motivated state to the intrinsic motivated state (\emph{Internalization to intrinsic motivation}). The individual motivational strategies based on gamification models currently defined in the ontology OntoGaCLeS, their player roles, their behavioral roles, and their individual motivational goals are detailed in \autoref{sec:ontogacles:individual-motivational-strategy}.
\subsection{Individual Gameplay Strategy (I-gameplay strategy)}
\label{sec:individual-gameplay-strategy}
The guidelines extracted from the literature on gamification, game design and serious games are implemented through the design of the way in which the users will experience their interactions with the game-like system \cite{FabricatoreLopez2014, NackeDrachenGobel2010, Schell2008}. In gamification, such design is frequently called gameful design \cite{DeterdingDixonKhaledNacke2011, DichevDichevaAngelovaAgre2014}, and it has been formalized under the concept of \emph{individual gameplay strategy} (\emph{I-gameplay strategy}). Thus, the gameplay of a gamified CL scenario is the way in which the interactions between the participants and the game elements could occur. When a participant interacts with the game elements, the rules defined in the gamified CL scenario process his/her inputs, causing changes in the game elements, and these modifications are communicated to the participant. These rules and changes are related to the individual motivational goals that must be achieved by the participants, so each participant has his/her own strategy to interact with the gamified CL scenario to achieve these goals.
This strategy of interaction is the individual gameplay strategy, and it has been formalized by the ontological structure shown in \autoref{fig:ontological-structure-individual-gameplay-strategy}.
\begin{figure}[!htbp]
\caption[Ontological structure to represent individual gameplay strategy]{Ontological structure to represent \aspas{\emph{Individual gameplay strategy}} (at the top). At the bottom, the \aspas{\emph{Coop. CMPT gameplay strategy}} (bottom-left), and the \aspas{\emph{Achievement fun gameplay strategy}} (bottom-right).}
\label{fig:ontological-structure-individual-gameplay-strategy}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-individual-gameplay-strategy.png}
\fautor
\end{figure}
The individual gameplay strategy depends on the player roles assigned to the participants of the CL scenario, the motivational strategies employed to gamify the CL scenario, and the game elements introduced in the CL scenario. Thus, the ontological structure to represent an individual gameplay strategy is defined as a rational arrangement of these elements, where:
\begin{description}
\item [\textbf{Primary focus (P)}] indicates the \emph{Player role holders} who are in the primary focus (P) of the individual gameplay strategy. These player role holders are the participants who use the individual gameplay strategy (\emph{I-gameplay strategy}) to interact with the game elements indicated in the attribute \aspas{\emph{What to use}.}
\item [\textbf{Secondary focus (S)}] indicates the \emph{Player role holders} who are in the secondary focus (S) of the individual gameplay strategy. These player role holders are the participants who provide support for the player role holders in the primary focus (P) through the game elements indicated in the attribute \aspas{\emph{What to use}.} This means that the participants in the secondary focus (S) do not need the individual gameplay strategy (\emph{I-gameplay strategy}) to interact with the game elements, but their interactions in the gamified CL scenario produce changes in the state of the game elements indicated in the attribute \aspas{\emph{What to use}.}
\item[\textbf{S<=I-mot goal}] indicates the motivational strategies employed in the gamified CL scenario to motivate the player role holders who are in the primary focus (P).
\item[\textbf{P<=S-mot goal}] indicates the motivational strategies employed in the gamified CL scenario to motivate and engage the player role holders who are in the secondary focus (S).
\item[\textbf{What to use}] indicates the game elements needed to act according to the individual gameplay strategy. Thus, the game elements defined in this attribute are the ones used to process the interactions of the participants who are in the primary focus (P).
\end{description}
Currently, in the literature on gamification and game design, there is no established set of gameplay strategies that could be directly formalized as individual gameplay strategies employing the ontological structure (\emph{I-gameplay strategy}) proposed here. Therefore, the thesis author has inferred some individual gameplay strategies employing the guidelines of gamification and game design models. \autoref{fig:ontological-structure-individual-gameplay-strategy} shows two examples of this formalization, in which the guidelines from Yee's model \cite{Yee2006} have been used to develop the cooperative competition gameplay strategy (\emph{Coop. CMPT gameplay strategy}) shown at the bottom-left of the figure.
According to this structure, a cooperative competition gameplay strategy is beneficial for participants who are holders of Yee's Socializer role (primary focus (P)) when the motivational strategy \aspas{\emph{Gamifying for Yee Socializer}} is applied in a CL scenario to motivate this group of participants to interact with other participants who are also holders of Yee's Socializer role (secondary focus (S)). In the attribute \aspas{\emph{What to use},} this structure also indicates that game challenges for groups/teams, game level/progression systems for groups/teams, game point systems for groups/teams, game leaderboard systems with group/team rankings, game achievement systems for groups/teams, and game badge systems are necessary to implement the cooperative competition gameplay strategy.
\subsection{Gamified CL Scenario}
\label{subsec:gamified-cl-scenario}
A gamified CL scenario is a CL scenario in which the concepts presented earlier in this section have been properly applied to gamify it. In this sense, to represent a gamified CL scenario in the ontology OntoGaCLeS, the ontological structures proposed in the CL ontology to represent a CL scenario (\autoref{fig:ontological-structure-cl-scenario}) have been extended by adding the representation of motivational strategies (\emph{Y<=I-mot goal}) and gameplay strategies (\emph{I-gameplay strategy}) at the same level as the learning strategies (\emph{Y<=I-goal}). The proper connection of these elements represents a \aspas{\emph{Gamified CL Scenario}} through the ontological structures shown in \autoref{fig:ontological-structure-gamified-cl-scenario}.
\begin{figure}[!htbp]
\caption[Ontological structures to represent a gamified CL scenario]{Ontological structures to represent a \aspas{\emph{Gamified CL Scenario}}}
\label{fig:ontological-structure-gamified-cl-scenario}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-gamified-cl-scenario.png}
\fautor
\end{figure}
As explained in previous subsections, the individual motivational strategy (\emph{Y<=I-mot goal}) indicates the guidelines used to enhance the learning strategy employed by the participant in focus (\emph{I}), and the individual gameplay strategy (\emph{I-gameplay strategy}) indicates the strategy used to implement the guidelines of individual motivational strategies. Based on these definitions, in the ontological structures to represent a gamified CL scenario (\autoref{fig:ontological-structure-gamified-cl-scenario}), the connection of these elements has been represented by two relational-concepts: \aspas{\emph{enhanced\_by}} and \aspas{\emph{implemented\_by}.} The relational-concept \aspas{\emph{enhanced\_by}} indicates which individual motivational strategy (\emph{Y<=I-mot goal}) is used to enhance a learning strategy (\emph{Y<=I-goal}), and the relational-concept \aspas{\emph{implemented\_by}} indicates which individual gameplay strategy (\emph{I-gameplay strategy}) is used to implement the guidelines of an individual motivational strategy (\emph{Y<=I-mot goal}).
To illustrate the use of the ontological structures proposed in \autoref{fig:ontological-structure-gamified-cl-scenario}, a gamified CL scenario for participants who play the Dodecad Socializer role has been formalized as shown in \autoref{fig:ontological-structure-gamified-cl-scenario-dodecad-socializers}, where the learning strategies (\emph{Y<=I-goal}) of participants are \emph{enhanced by} the individual motivational strategy \aspas{\emph{Gamifying for Dodecad Socializer}.} According to this motivational strategy:
\begin{citacao}
\aspas{... Socializers are motivated by relatedness. They want to interact with others and create social connections ... Socializers are the ones who want to interact with others. They like to be connected to others. They are interested in parts of the system that help them do this. These are the ones will evangelize your internal social networks. Most motivated by the social connections aspects of relatedness ... Socializer and Networkers will wish to interact with people. Neither will be after anything from people directly. In the case of a networker, their reward comes from being connected; whereas the socialiser's reward is knowing you and interacting with you ...} \citeonline{Marczewski2015d}.
\end{citacao}
Based on these guidelines, the individual motivational strategy \aspas{\emph{Gamifying for Dodecad Socializer}} indicates that a participant who plays the Dodecad Socializer role (\emph{I-player role}) interacts with another socializer (\emph{You-player role}) acting as a \emph{Helper} to achieve the \emph{Satisfaction of relatedness} (\emph{I-mot goal}). Thus, the motivational strategy is \emph{implemented by} a \emph{Social fun gameplay strategy} (\emph{I-gameplay strategy}) in which the game social-status and game social-connections elements were inferred as the game elements necessary to support the communication and cooperation of participants. This inference was made by the thesis author, and it is based on the idea that participants who play the socializer role are interested in helping others by looking for social connections and status to satisfy their need of relatedness.
\begin{figure}[!htbp]
\caption[Ontological structures to represent a gamified CL scenario for dodecad socializers]{Ontological structures to represent a \aspas{\emph{Gamified CL Scenario for Dodecad Socializers}}}
\label{fig:ontological-structure-gamified-cl-scenario-dodecad-socializers}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-gamified-cl-scenario-dodecad-socializers.png}
\fautor
\end{figure}
%% ================================== %%
\section[Formalizing an Ontological Model to Personalize the Gamification in CL Scenarios]{Formalizing an Ontological Model to Personalize the Gamification in Collaborative Learning Scenarios}
\label{sec:formalizing-ontological-model}
Through the ontological structures presented in the previous section, the thesis author expects to facilitate the systematic formalization of gamified CL scenarios based on concepts extracted from player type models and need-based theories of motivation. With this formalization, it is possible to build ontological models to personalize the gamification in CL scenarios. These models consist of a set of gamified CL scenarios formally represented by the ontological structures proposed in \autoref{fig:ontological-structure-gamified-cl-scenario}.
The building of these structures to define an ontological model comprises the following steps: (1) to identify the player roles that can be assigned for the participants of CL scenario when they are playing a CL role, (2) to identify the restriction and elements of motivational strategies for each pair of identified player roles, and (3) to define individual gameplay strategies for the identified pairs of player roles. In this section, following these steps, the building of an ontological model to personalize the gamification in CL scenario is detailed in this section. This model has been built to gamify CL scenarios based on the Peer-tutoring theory \cite{Endlsey1980} in which the Dodecad player type model\cite{Marczewski2017,Marczewski2015b} has been used as source of information. \subsubsection*{Step (1): Identifying Player Roles for CL Scenarios} The identification of player roles to gamify a CL scenario is carried out by analyzing the expected behaviors to be externalized for these roles and the CL roles. Possible counterproductive behaviors indicate why player roles cannot be assigned to a participant when he/she plays the CL role. \autoref{tab:player-roles-in-peer-tutoring-cl-scenarios} shows the result of this step (1) for the CL roles of \aspas{\emph{Peer-Tutor}} and \aspas{\emph{Peer-Tutee}} defined in CL Scenarios based on the Peer-tutoring theory. Counterproductive behaviors of player roles are avoided to not interfere with the expected behaviors of CL roles. Thus, for example, participants who are playing the CL roles of Peer-tutor and Peer-tutee cannot play the \emph{Griefer roles} because they want to negatively affect other users. \begin{quadro}[htb] \caption{Dodecad player roles that can be assigned for participants of a Peer-tutoring scenario} \label{tab:player-roles-in-peer-tutoring-cl-scenarios} \centering \scriptsize \begin{tabular}{|l|c|c|} \hline%\hline \multicolumn{1}{|l}{}& \multicolumn{1}{|c}{\textbf{Peer-Tutor}}& \multicolumn{1}{|c|}{\textbf{Peer-Tutee}}\tabularnewline \multicolumn{1}{|l}{}& \multicolumn{1}{|c}{\footnotesize(explaining)}& \multicolumn{1}{|c|}{\footnotesize(passive learning)}\tabularnewline \hline \hline \textbf{Achiever}&\multirow{2}{*}{Yes}&\multirow{2}{*}{Yes}\tabularnewline {\footnotesize(accomplishing, comparing)}& & \tabularnewline \hline \textbf{Free-Spirit}&No&No\tabularnewline {\footnotesize(creating, exploring)}&{\footnotesize(don't want to be restricted)}&{\footnotesize(don't want to be restricted)}\tabularnewline \hline \textbf{Socializer}&\multirow{2}{*}{Yes}&\multirow{2}{*}{Yes}\tabularnewline {\footnotesize(helping)}& & \tabularnewline \hline \textbf{Philanthropist}&\multirow{2}{*}{Yes}&\multirow{2}{*}{Yes}\tabularnewline {\footnotesize(giving, helping, sharing)}& & \tabularnewline \hline \textbf{Consumer}&\multirow{2}{*}{Yes}&\multirow{2}{*}{Yes}\tabularnewline {\footnotesize(accomplishing, comparing)}& & \tabularnewline \hline \textbf{Exploiter}&No&No\tabularnewline {\footnotesize(creating, exploring)}&{\footnotesize(don't want to be restricted)}&{\footnotesize(don't want to be restricted)} \tabularnewline \hline \textbf{Networker}&\multirow{2}{*}{Yes}&\multirow{2}{*}{Yes}\tabularnewline {\footnotesize(helping)}& & \tabularnewline \hline \textbf{Self-Seeker}&\multirow{2}{*}{Yes}&\multirow{2}{*}{Yes}\tabularnewline {\footnotesize(giving, helping, sharing)}& & \tabularnewline \hline \textbf{Destroyer}&No&No \tabularnewline {\footnotesize(hacking)}&{\footnotesize(hacking to ruin experience of others)}&{\footnotesize(hacking to ruin 
experience of others)}\tabularnewline \hline \textbf{Improver}&No&No\tabularnewline {\footnotesize(hacking, exploring, fixing)}&{\footnotesize(hacking to change the system)}&{\footnotesize(hacking to change the system)}\tabularnewline \hline \textbf{Influencer}&No&No \tabularnewline {\footnotesize(commenting)}&{\footnotesize(requiring changes in the system)}&{\footnotesize(requiring changes in the system)} \tabularnewline \hline \textbf{Griefer}&No&No\tabularnewline {\footnotesize(troublemaking, defying)}&{\footnotesize(negatively affect to others)}&{\footnotesize(negatively affect to others)}\tabularnewline \hline \end{tabular} \fautor \end{quadro} \subsubsection*{Step (2): Identifying Restrictions and Elements of Motivational Strategies} To identify the restrictions and elements of individual motivational strategies (\emph{Y<=I-mot goal}), guidelines for the pairs of player roles identified in the step (1) are crossed. These guidelines are extracted from the player type models for the building of ontological models to personalize the gamification in CL scenarios. When these guidelines related to a pair of player roles are crossed, counterproductive behaviors are avoided to not interfere with the expected benefits that can be achieved by the participants playing these roles and performing these behaviors. The expected benefits are expressed as individual motivational goals (\emph{I-mot goals}) based on interpretation of these benefits using need-based theories of motivation. \autoref{tab:individual-motivational-strategies-in-peer-tutoring-cl-scenarios} shows the result obtained in this step for the definition of individual motivational strategies in the ontological model to personalize the gamification in Peer-tutoring CL scenarios. The rows indicate the player roles (\emph{I-Player role}) for the participant in focus (\emph{I}), and the columns indicate the player roles (\emph{You-Player role}) for the participant (\emph{You}) who interacts with the participant in focus (\emph{I}). The individual gameplay strategies and their elements are indicated in the crossed cells. These strategies were defined from common guidelines for each pair of player roles. 
Thus, an individual gameplay strategy has been formalized in the ontological model when there are commonly expected behaviors indicated in the guidelines of player roles \aspas{\emph{I-Player role}} and \aspas{\emph{You-Player role}.} %\setlongtables \begin{landscape}%{\small %\begin{longtable}{|l|c|c|c|c|c|c|} %\caption[Individual motivational strategies identified for the building of an ontological model to personalize the gamification in Peer-tutoring scenarios]{Individual motivational strategies identified for the building of an ontological model to personalize the gamification in Peer-tutoring scenarios} %\tabularnewline \begin{quadro}[htb] \caption{Individual motivational strategies identified for the building of an ontological model to personalize the gamification in Peer-tutoring scenarios} \label{tab:individual-motivational-strategies-in-peer-tutoring-cl-scenarios} \centering \scriptsize \begin{tabular}{|l|c|c|c|c|c|c|} \hline%\hline \multicolumn{1}{|l|}{}& \multicolumn{1}{c|}{\textbf{Achiever}}& \multicolumn{1}{c|}{\textbf{Socializer}}& \multicolumn{1}{c|}{\textbf{Philanthropist}}& \multicolumn{1}{c|}{\textbf{Consumer}}& \multicolumn{1}{c|}{\textbf{Networker}}& \multicolumn{1}{c|}{\textbf{Self-seeker}}\tabularnewline \multicolumn{1}{|l|}{}& \multicolumn{1}{c|}{\tiny{(\emph{accomplishing, comparing})}}& \multicolumn{1}{c|}{\tiny{(\emph{helping})}}& \multicolumn{1}{c|}{\tiny{(\emph{giving, helping, sharing})}}& \multicolumn{1}{c|}{\tiny{(\emph{accomplishing, comparing})}}& \multicolumn{1}{c|}{\tiny{(\emph{helping})}}& \multicolumn{1}{c|}{\tiny{(\emph{giving, helping, sharing})}}\tabularnewline \hline %\endfirsthead\caption[]{\em (continued)} \tabularnewline \hline %\multicolumn{1}{|l|}{}& %\multicolumn{1}{c|}{\textbf{Achiever}}& %\multicolumn{1}{c|}{\textbf{Socializer}}& %\multicolumn{1}{c|}{\textbf{Philanthropist}}& %\multicolumn{1}{c|}{\textbf{Consumer}}& %\multicolumn{1}{c|}{\textbf{Networker}}& %\multicolumn{1}{c|}{\textbf{Self-seeker}}\tabularnewline %\multicolumn{1}{|l|}{}& %\multicolumn{1}{c|}{\tiny{(\emph{accomplishing, comparing})}}& %\multicolumn{1}{c|}{\tiny{(\emph{helping})}}& %\multicolumn{1}{c|}{\tiny{(\emph{giving, helping, sharing})}}& %\multicolumn{1}{c|}{\tiny{(\emph{accomplishing, comparing})}}& %\multicolumn{1}{c|}{\tiny{(\emph{helping})}}& %\multicolumn{1}{c|}{\tiny{(\emph{giving, helping, sharing})}}\tabularnewline %\hline %\endhead %\hline %\endfoot %\label{tab:individual-motivational-strategies-in-peer-tutoring-cl-scenarios} & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Dodecad Achievers}}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Dodecad Achievers and Consumer}}& & \tabularnewline {\textbf{Achiever}}& \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of mastery}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of mastery}}& & \tabularnewline {\tiny(\emph{accomplishing, comparing})}& \multicolumn{1}{p{3cm}|}{}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Internalization from extrinsic to intrinsic motivation}}& & \tabularnewline \hline & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Dodecad Socializers}}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Dodecad Socializer and Networker}}& \tabularnewline {\textbf{Socializer}}& & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of relatedness}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of relatedness}}& \tabularnewline {\tiny(\emph{helping})}& & \multicolumn{1}{p{3cm}|}{}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Internalization from 
extrinsic to intrinsic motivation}}& \tabularnewline \hline \textbf{Philanthropist}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Philanthropists}}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Philanthropist and Self-seeker}}\tabularnewline {\tiny(\emph{giving, helping, sharing})}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of purpose}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of purpose}}\tabularnewline & & & \multicolumn{1}{p{3cm}|}{}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Internalization from extrinsic to intrinsic motivation}}\tabularnewline \hline & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Consumer and Dodecad Achiever}}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Consumers}}& & \tabularnewline {\textbf{Consumer}}& \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of mastery}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of mastery}}& & \tabularnewline {\tiny(\emph{accomplishing, comparing})}& \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Internalization from extrinsic to intrinsic motivation}}& & & \multicolumn{1}{p{3cm}|}{}& & \tabularnewline% \newpage \hline & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Networker and Dodecad Socializer}}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Networkers}}& \tabularnewline {\textbf{Networker}}& & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of relatedness}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of relatedness}}& \tabularnewline {\tiny(\emph{helping})}& & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Internalization from extrinsic to intrinsic motivation}}& & & \multicolumn{1}{p{3cm}|}{}& \tabularnewline \hline & & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Self-seeker and Philanthropist}}& & & \multicolumn{1}{p{3cm}|}{\tiny\emph{Gamifying for Philanthropists}}\tabularnewline {\textbf{Self-seeker}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of purpose}}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Satisfaction of purpose}}\tabularnewline {\tiny(\emph{giving, helping, sharing})}& & & \multicolumn{1}{p{3cm}|}{\tiny{$\bullet$ Internalization from extrinsic to intrinsic motivation}}& & & \multicolumn{1}{p{3cm}|}{}\tabularnewline \hline %\end{longtable} %} \end{tabular} \fautor \end{quadro} \end{landscape} To illustrate the identification of restrictions and elements in the individual motivational strategy (\emph{Y<=I-mot goal}), let us see the \aspas{\emph{Gamifying for Dodecad Achiever and Conqueror}} indicated in \autoref{tab:individual-motivational-strategies-in-peer-tutoring-cl-scenarios}, this strategy was identified from the guidelines of Dodecad model in which the behaviors of \emph{accomplishing} and \emph{comparing} are indicated as adequate to motivate achievers and consumers. In this case, the expected benefits to accomplish a goal, and then, compare it against the accomplishments of others is enjoyable for achievers. This benefit is represented as the individual motivational goal \aspas{\emph{Satisfaction of mastery}} (\emph{I-mot goal}) based on the Dan Pink motivation theory \cite{Pink2011}. According to this theory, mastery is an inherit human need that love to get better at stuff enjoying satisfaction from personal achievement and progress. \subsubsection*{Step (3): Defining Individual Gameplay Strategies} Individual gameplay strategies (\emph{I-gameplay strategy}) are inferred from the individual motivational strategies (\emph{Y<=I-mot goal}) identified in the step (2). 
Game elements support the behaviors indicated in the guidelines of individual motivational strategies to accomplish the expected benefits indicated as individual motivational goals. \autoref{tab:individual-gameplay-strategies-peer-tutoring-cl-scenarios} shows the results of this step for the ontological model to personalize the gamification in Peer-tutoring scenarios. \begin{quadro}[htb] \caption{Individual gameplay strategies to gamify Peer-tutoring scenarios} \label{tab:individual-gameplay-strategies-peer-tutoring-cl-scenarios} \centering \small \begin{tabular}{l|l|l} %\setlongtables{\small %\begin{longtable}{l|l|l} %\caption{Individual gameplay strategies to gamify Peer-tutoring scenarios} %\tabularnewline \hline%\hline \multicolumn{1}{p{4.75cm}}{\centering\textbf{Achievement fun}}& \multicolumn{1}{|p{4.75cm}|}{\centering\textbf{Social fun}}& \multicolumn{1}{p{4.75cm}}{\centering\textbf{Facilitated-personal fun}}\tabularnewline %\hline %\endfirsthead\caption[]{\em (continued)} \tabularnewline %\hline %\multicolumn{1}{p{4.75cm}}{\centering\textbf{Achievement fun}}& %\multicolumn{1}{|p{4.75cm}|}{\centering\textbf{Social fun}}& %\multicolumn{1}{p{4.75cm}}{\centering\textbf{Facilitated-personal fun}}\tabularnewline \hline %\endhead \hline %\endfoot %\label{tab:individual-gameplay-strategies-peer-tutoring-cl-scenarios} Primary focus (P):& Primary focus (P):& Primary focus (P):\tabularnewline {\scriptsize$\bullet$ Gamifying for Dodecad Achiever}& {\scriptsize$\bullet$ Gamifying for Dodecad Socializer}& {\scriptsize$\bullet$ Gamifying for Philanthropists}\tabularnewline {\scriptsize$\bullet$ Gamifying for Consumer}& {\scriptsize$\bullet$ Gamifying for Networker}& {\scriptsize$\bullet$ Gamifying for Self-seekers}\tabularnewline Secondary focus (S):& Secondary focus (S):& Secondary focus (S):\tabularnewline {\scriptsize$\bullet$ Gamifying for Consumer}& {\scriptsize$\bullet$ Gamifying for Networker}& {\scriptsize$\bullet$ Gamifying for Self-seekers}\tabularnewline {\scriptsize$\bullet$ Gamifying for Dodecad Achiever}& {\scriptsize$\bullet$ Gamifying for Dodecad Socializer}& {\scriptsize$\bullet$ Gamifying for Philanthropists}\tabularnewline \hline What to use:& What to use:& What to use:\tabularnewline $\bullet$ Challenges& $\bullet$ Social-status& $\bullet$ Meaning/purpose\tabularnewline $\bullet$ Certificates& $\bullet$ Point system& $\bullet$ Access system\tabularnewline $\bullet$ Levels/progression system& (\emph{social status})& $\bullet$ Collect/trade system\tabularnewline $\bullet$ Point system& $\bullet$ Physical-reward system& $\bullet$ Gifting/sharing system\tabularnewline (\emph{levels/progression})& (\emph{social status})& $\bullet$ Point system\tabularnewline $\bullet$ Physical-reward system& $\bullet$ Leaderboard system& (\emph{meaning/purpose})\tabularnewline (\emph{certificates})& (\emph{social status})& $\bullet$ Physical-reward system\tabularnewline $\bullet$ Leaderboard system& $\bullet$ Badge system& (\emph{meaning/purpose})\tabularnewline (\emph{levels/progression})& (\emph{social status})& $\bullet$ Leaderboard system\tabularnewline $\bullet$ Badge system& $\bullet$ Virtual-economy system& (\emph{meaning/purpose})\tabularnewline (\emph{level/progression})& $\bullet$ Lottery system& $\bullet$ Badge system\tabularnewline $\bullet$ Virtual-economy system& & (\emph{meaning/purpose})\tabularnewline $\bullet$ Lottery system& & $\bullet$ Virtual economy system\tabularnewline & & $\bullet$ Lottery system\tabularnewline \hline %\end{longtable} %} \end{tabular} \fautor \end{quadro} 
The individual gameplay strategies indicated in \autoref{tab:individual-gameplay-strategies-peer-tutoring-cl-scenarios} are:
\begin{itemize}
\item \emph{Achievement fun gameplay strategy}: an individual gameplay strategy in which the system recognizes achievements through game challenges, certificates and level/progression. To satisfy the mastery need, the system must try to produce in the participants the feeling that they are achieving something by performing the interactions indicated by the Peer-tutoring scripts. Thus, the system would use a point system to indicate the levels/progression in the CSCL script, and when the CL scenario is completed as a game challenge, a certificate would be given by a physical-reward system. The leaderboard system would indicate the level/progression of the script. Badges would be obtained by the participants at the end of the CL scenario according to the level/progression in the script. Finally, virtual-economy and lottery systems would establish the relation between the levels/progression of the script and the points, the ranking in the leaderboard, and the badges.
\item \emph{Social fun gameplay strategy}: an individual gameplay strategy in which social status is used to support the feeling of relatedness. In this sense, the system should provide some form of social network/group to indicate and/or create group/collective game elements. Thus, the system would use a point system with a social status system to indicate points gathered by the participant as a group. When the CL scenario is completed, the system would give a physical reward to the groups. A leaderboard would provide rankings by groups to indicate the social status of the groups. Badges for groups with a social status would be given by the system to the groups when the CL scenario is completed. Finally, virtual-economy and lottery systems would establish the relation between the social status of groups in CL scenarios and the points, physical-rewards, leaderboards, and badges.
\item \emph{Facilitated-personal fun gameplay strategy}: an individual gameplay strategy in which the excitement of changing the system satisfies the need of purpose. This satisfaction comes from collecting and trading valuable things. So, when participants help others, game elements are collected to be converted into something that has an important value. Thus, meaning/purpose should be given to game elements such as points, physical-rewards, leaderboards, and badges, so that the system provides a collect/trade system to exchange these elements for gifting and/or sharable elements (such as elements to customize the avatars, or elements to change part of the system).
\end{itemize}
Employing the information in \autoref{tab:individual-gameplay-strategies-peer-tutoring-cl-scenarios}, twelve ontological structures to represent gamified Peer-tutoring scenarios have been formalized in the ontology OntoGaCLeS to define the model to personalize the gamification in Peer-tutoring scenarios based on the Dodecad model \cite{Marczewski2015b}.
These structures in the ontological model are: \emph{Gamified Peer Tutoring Scenario for Achievers}, \emph{Gamified Peer Tutoring Scenario for Achiever/Consumer}, \emph{Gamified Peer Tutoring Scenario for Consumer/Achiever}, \emph{Gamified Peer Tutoring Scenario for Consumers}, \emph{Gamified Peer Tutoring Scenario for Socializers}, \emph{Gamified Peer Tutoring Scenario for Socializer/Networker}, \emph{Gamified Peer Tutoring Scenario for Networker/Socializer}, \emph{Gamified Peer Tutoring Scenario for Networkers}, \emph{Gamified Peer Tutoring Scenario for Philanthropists}, \emph{Gamified Peer Tutoring Scenario for Philanthropist/Self-seeker}, \emph{Gamified Peer Tutoring Scenario for Self-seeker/Philanthropist}, and \emph{Gamified Peer Tutoring Scenario for Self-seekers}.
\begin{figure}[!htbp]
\caption[Ontological structure to represent a gamified Peer-tutoring scenario for Achiever/Consumer]{Ontological structure to represent \aspas{\emph{Gamified Peer Tutoring Scenario for Achiever/Consumer}}}
\label{fig:ontological-structure-gamified-peer-tutoring-scenario-achiever-consumer}
\centering
\includegraphics[width=1\textwidth]{images/chap-ontogacles1/ontological-structure-gamified-peer-tutoring-scenario-achiever-consumer.png}
\fautor
\end{figure}
\autoref{fig:ontological-structure-gamified-peer-tutoring-scenario-achiever-consumer} shows, as an example, the formalization of the \emph{Gamified Peer Tutoring Scenario for Achiever/Consumer}, in which the motivational strategy to enhance the learning strategy \aspas{\emph{Learning by Teaching}} is \aspas{\emph{Gamifying for Dodecad Achiever},} and the motivational strategy to enhance the learning strategy \aspas{\emph{Learning by being Taught}} is \aspas{\emph{Gamifying for Consumer}.} Both of these motivational strategies are implemented by the gameplay strategy \aspas{\emph{Achievement fun gameplay strategy},} where the participants in the primary focus (P) are holders of the \emph{Achiever/Peer Tutor} roles, and the participants in the secondary focus (S) are holders of the \emph{Consumer/Peer Tutee} roles. As can be seen in the motivational strategy \aspas{\emph{Gamifying for Dodecad Achiever and Consumer},} the potential player for the \emph{Dodecad Achiever role} has been defined as a \emph{Peer Tutor}, and in the motivational strategy \aspas{\emph{Gamifying for Consumer and Dodecad Achiever},} the \emph{Peer Tutee} has been defined as the potential player for the \emph{Consumer role}.
%% ================================== %%
\section{Concluding Remarks}
\label{sec:ontogacles1-concluding-remarks}
In this chapter, concepts extracted from player type models and need-based theories of motivation have been formalized in the ontology OntoGaCLeS to solve the context-dependency related to the individual characteristics of participants when a CL scenario is being gamified to deal with motivation problems in scripted collaborative learning. The formalization of these concepts consists of ontological structures to represent individual motivational goals, player roles, motivational strategies, individual gameplay strategies, and gamified CL scenarios. Through the ontological structures proposed in this chapter, the systematic building of ontology-based models to personalize gamification in CL scenarios based on player type models becomes possible. This usefulness is demonstrated through an example in which information from the Dodecad player type model is employed to develop an ontological model to personalize the gamification in Peer-tutoring scenarios.
Employing the same formalization, it is possible to obtain ontological models to personalize the gamification in CL scenarios based on other player type models, such as Yee's model \cite{Yee2006}, Borges' player type model \cite{BorgesMizoguchiDurelliBittencourtIsotani2016}, and the BrainHex player type model \cite{NackeBatemanMandryk2014}. With the ontological structures proposed in this chapter, computational mechanisms and procedures could be built to set player roles and game elements for each participant in CL sessions. These mechanisms will use the ontological structures formalized here as a knowledge base that provides theoretical justification for algorithms that help users gamify CL scenarios. \autoref{chapter:computer-based-mechanisms-procedures} shows a computational mechanism developed by the thesis author as a proof of concept to set player roles for students in the Moodle platform.
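To give a concrete intuition of how such a mechanism could consume the necessary and desired conditions formalized for the player roles, a simplified and purely illustrative Python sketch is shown below; the role names, condition labels and scoring rule are hypothetical simplifications and do not reproduce the actual axiomatization in OntoGaCLeS nor the Moodle implementation described in \autoref{chapter:computer-based-mechanisms-procedures}.
\begin{verbatim}
# Illustrative sketch only: condition names, roles and scoring are hypothetical.
def eligible_roles(participant, role_definitions):
    """Keep roles whose necessary conditions hold; rank by desired conditions met."""
    ranked = []
    for role, conds in role_definitions.items():
        if all(participant.get(c, False) for c in conds["necessary"]):
            score = sum(participant.get(c, False) for c in conds["desired"])
            ranked.append((score, role))
    return [role for score, role in sorted(ranked, reverse=True)]

roles = {
    "Creator": {"necessary": ["non_negative_liking_customization"],
                "desired": ["positive_liking_customization", "need_of_autonomy"]},
    "Socializer": {"necessary": ["non_negative_liking_social"],
                   "desired": ["positive_liking_social", "need_of_relatedness"]},
}
participant = {"non_negative_liking_customization": True,
               "positive_liking_customization": True,
               "need_of_autonomy": True,
               "non_negative_liking_social": True}
print(eligible_roles(participant, roles))  # ['Creator', 'Socializer']
\end{verbatim}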
{ "alphanum_fraction": 0.8001894057, "avg_line_length": 87.6102620087, "ext": "tex", "hexsha": "18349dbce1ba00f4f64e8e32f1c957452123a5a7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bb1a824ec1dabbd443e3ca9a3fac15fd47fe64b4", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "geiser/phd-thesis-dissertation", "max_forks_repo_path": "tex/ontogacles-1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bb1a824ec1dabbd443e3ca9a3fac15fd47fe64b4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "geiser/phd-thesis-dissertation", "max_issues_repo_path": "tex/ontogacles-1.tex", "max_line_length": 587, "max_stars_count": null, "max_stars_repo_head_hexsha": "bb1a824ec1dabbd443e3ca9a3fac15fd47fe64b4", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "geiser/phd-thesis-dissertation", "max_stars_repo_path": "tex/ontogacles-1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 19452, "size": 80251 }
\documentclass{report} \usepackage{graphicx} \usepackage{outlines} \usepackage{multirow} \usepackage{tabularx} \usepackage{nameref} \usepackage{hyperref} \usepackage{color} \usepackage{listings} \usepackage{titlepic} \usepackage{comment} \usepackage{fancyvrb} \usepackage[margin=1in]{geometry} \usepackage{pdfpages} \usepackage{float} \setcounter{secnumdepth}{5} \setcounter{tocdepth}{5} \restylefloat{table} \definecolor{maroon}{rgb}{0.5,0,0} \definecolor{darkgreen}{rgb}{0,0.5,0} \definecolor{FormulaKeywordColor}{RGB}{0,0,204} \definecolor{FormulaAnnotColor}{RGB}{128,128,128} \definecolor{darkgreen}{rgb}{0,0.5,0} \lstdefinelanguage{XML} { basicstyle=\ttfamily\footnotesize, morestring=[s]{"}{"}, morecomment=[s]{?}{?}, morecomment=[s]{!--}{--}, commentstyle=\color{darkgreen}, moredelim=[s][\color{black}]{>}{<}, moredelim=[s][\color{red}]{\ }{=}, stringstyle=\color{blue}, identifierstyle=\color{maroon}, breaklines=true } \hypersetup{% pdfborder = {0 0 0} } \makeatletter \renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand\subparagraph{\@startsection{subparagraph}{5}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}} \g@addto@macro\@floatboxreset\centering \makeatother \title{AVM Component Model Specification\\Version 2.5} \author{Adam Nagel, Sandeep Neema, Mike Myers, Robert Owens, Zsolt Lattmann\\ Institute for Software Integrated Systems (ISIS)\\ Vanderbilt University\\ \texttt{[email protected]}\\ \\ Dan Finke\\ Applied Research Laboratory (ARL)\\ Pennsylvania State University\\ \\ Developed for the DARPA Adaptive Vehicle Make (AVM) Program} \date{August 4, 2014} \titlepic{ \includegraphics[scale=0.75]{ISIS-logoNEW} \hspace{1cm} \includegraphics[scale=0.103]{DARPA-logo} } \begin{document} \maketitle \def\chapterautorefname{Chapter} \def\subsubsectionautorefname{Section} \def\subsectionautorefname{Section} \def\sectionautorefname{Section} \newpage \tableofcontents \cleardoublepage \listoffigures \newpage % Chapter 1 - introduction, scope, purpose \include{AVM_Component_Introduction} % Chapter 2 - % 2.1 Overview % 2.2 Component Packaging \include{AVM_Component_Model} % 2.3 \include{AVM_Component_Spec_schema} % 2.4 \include{AVM_Component_Spec_CAD} % 2.5 \include{AVM_Component_Spec_Dynamics_Domain} % 2.6 \include{AVM_Component_Spec_Cyber_Domain} % 2.7 \include{AVM_Component_Spec_manufacturing} % 2.8 \include{AVM_Component_General_Conventions} % Chapter 3 - \include{AVM_Component_Spec_Authoring_and_Curation} % Chapter 4 - \include{AVM_Component_Validation} % Chapter 5 - \include{./AVM_Formal_Semantics/AVM_Formal_Semantics} % Chapter 6 - % Reference Implementations \include{Revision_History} \end{document}
{ "alphanum_fraction": 0.7490481135, "avg_line_length": 20.9347826087, "ext": "tex", "hexsha": "4488df6b92ba3518e65dd30f5075c57ad5c0eed7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "08f3115e76498df1f8d70641d71f5c52cab4ce5f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lefevre-fraser/openmeta-mms", "max_forks_repo_path": "meta/DesignDataPackage/doc/AVM_Component_Spec.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "08f3115e76498df1f8d70641d71f5c52cab4ce5f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lefevre-fraser/openmeta-mms", "max_issues_repo_path": "meta/DesignDataPackage/doc/AVM_Component_Spec.tex", "max_line_length": 77, "max_stars_count": null, "max_stars_repo_head_hexsha": "08f3115e76498df1f8d70641d71f5c52cab4ce5f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lefevre-fraser/openmeta-mms", "max_stars_repo_path": "meta/DesignDataPackage/doc/AVM_Component_Spec.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 995, "size": 2889 }
\documentclass[10pt]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{cite} \usepackage[]{amsthm} %lets us use \begin{proof} \usepackage[]{amssymb} %gives us the character \varnothing \usepackage[ruled,vlined]{algorithm2e} \usepackage{listings} \usepackage[utf8]{inputenc} \usepackage{hyperref} \usepackage{amsmath} \usepackage{csvsimple} \usepackage{graphicx} % One inch margins \PassOptionsToPackage{margin=0.25in}{geometry} \title{Modern Optimization Final Exam} \author{Guanhua Huang} \date\today \begin{document} \maketitle %This command prints the title based on information entered above All the source code is stored at \url{https://github.com/victorhuangkk/york_optimization_final} There are five problems in total in this final exam. Please find them in the following. \begin{section}{Problem 9} This is the problem \\ \includegraphics[width=5cm]{img/problem9.png} \subsection{Part a} Let's analyze the two parts separately, which are $a(T, T)$ and $l(T)$. Those two terms are generated by the variational method. \[\frac{1}{2}a(T, T) = \frac{1}{2} \int_{0}^{1}T^\prime(x)T^{\prime}(x) dx\] \[l(T) = \int_{0}^{1} f(x)T(x)dx\] Since we assume the second order differentiable, we can rewrite all the terms into a series of integral summations. For simplicity reasons, I would omit $\frac{1}{2}$ here, But in the quadratic optimization part, we can add it back. \[\frac{1}{2}a(T, T) = \frac{1}{2} \int_{0}^{1}T^\prime(x)T^{\prime}(x) dx\] \[= \int_{0}^{x_1}(T^{\prime}(x))^2dx + \int_{x_1}^{x_2}(T^{\prime}(x))^2dx + ... \int_{x_{n-1}}^{x_n}(T^{\prime}(x))^2dx\] \[\approxeq \sum_{i=1}^{N-1}h (T^{\prime}_{i+\frac{1}{2}})^2\] \[\approxeq \sum_{i=1}^{N-1}h \frac{1}{h}(T_{i+1} - T_{i})\frac{1}{h}(T_{i+1} - T_{i})\] \[= \sum_{i=1}^{N-1}\frac{1}{h}(T_{i+1} - T_{i})^2\] And the approximation of $l(T)$ is done as below \[\int_{0}^{1}f(x)T(x)dx = \int_{0}^{x_1}f(x)T(x)dx + \int_{x_1}^{x_2}f(x)T(x)dx + ... \int_{x_{n-1}}^{x_n}f(x)T(x)dx\] \[= \sum_{i=1}^{N-1} \int_{x_i}^{x_{i+1}}f(x)T(x)dx\] \[\approxeq \sum_{i=1}^{N-1} hf(x_{i+\frac{1}{2}})T(x_{i+\frac{1}{2}}), \;\;\; (\int_{x_i}^{x_{i+1}}g(x)dx \approxeq hg_{i+\frac{1}{2}} )\] \[\approxeq h\sum_{i=1}^{N-1}\frac{1}{2} (f(x_i) + f(x_{i+1})) \frac{1}{2} (T(x_i) + T(x_{i+1}))\] After combining these two terms together, we got the following approximation to the original objective function, \[min(\frac{1}{h} \sum_{i=1}^{N-1}(T(x_{i+1}) - T(x_i))^2 - \frac{h}{4}\sum_{i=1}^{N-1} (f(x_i) + f(x_{i+1})) (T(x_i) + T(x_{i+1})))\] \[s.t. \;\;\;\;\ T(0) = T(1) = 0\] Then, in order to change the constraint optimization problem to an unconstrained version, Lagrange Multiple is used. \[min(\frac{1}{h} \sum_{i=1}^{N-1}(T(x_{i+1}) - T(x_i))^2 - \frac{h}{4}\sum_{i=1}^{N-1} (f(x_i) + f(x_{i+1})) (T(x_i) + T(x_{i+1})) + \lambda_0T(0) + \lambda_1T(1))\] Because of the definition, $T(0) = T(x_1) = 0, T(1) = T(x_N) = 0$ To convert the polynomial to matrix format, let's rewrite the objective function by parts. \[\frac{1}{h} \sum_{i=1}^{N-1}(T(x_{i+1}) - T(x_i))^2 = \frac{1}{h} X^T A X\] \[\frac{1}{h} (T_1^2 + 2T_2^2 + 2T_3^2 +... + 2(T_{N-1})^2 + (T_{N})^2 - 2T_1T_2 - 2T_2T_3 - .... - 2T_{N-1}T_{N})\] One route to write the problem as an unconstrained problem is to do the following. Just to clarify here. $X = (T_1, T_2, T_3, ... T_{N-1}, T_N, \lambda_1, \lambda_N)^T$. This is the unknown vector we are trying to solve. However, $f(x_i)$ is a given value if $x_i$ is given. Everything looks fine up until now. The problem comes up when I tried to decompose matrix A. 
It is not positive definite anymore.
\includegraphics[width=13cm]{img/problem9_proof.png}
For that reason, I decided to use penalty terms and fix $\lambda_1$ and $\lambda_N$ to solve this problem. For illustration purposes, I fix $N = 6$, $h = 0.2$, $\lambda_1 = 1 = \lambda_N$ from now on and write out all parameters explicitly.
\[A = 2 \times 5 \times \begin{pmatrix}
2 & -1 & 0 & 0 & 0 &0\\
-1 & 2 & -1 & 0 & 0 &0\\
0 & -1 & 2 & -1 & 0 &0\\
0 & 0 & -1 & 2 & -1 &0\\
0 & 0 & 0 & -1 & 2 &-1\\
0 & 0 & 0 & 0 & -1 &2
\end{pmatrix}\]
\[B = \begin{pmatrix}
1 & 1 & 0 & 0 & 0 &0\\
1 & 1 & 1 & 0 & 0 &0\\
0 & 1 & 1 & 1 & 0 &0\\
0 & 0 & 1 & 1 & 1 &0\\
0 & 0 & 0 & 1 & 1 &1\\
0 & 0 & 0 & 0 & 1 &1
\end{pmatrix}, \;\;\; f(x_i) = (1, 1, 1, 1, 1, 1)\]
\[b = f(x_i)B = 0.2/4 \times (2, 3, 3, 3, 3, 2) \]
So, in the end, the problem is formulated as follows:
\[l(T) = \frac{1}{2} X^T A X + b^T X \to min\]
As observed from the approximation, the quadratic form drops a few of the original terms. At the very end, we can see what results these approximations produce.
\subsection{Part b}
After transforming the original problem into a standard quadratic optimization problem, the uniqueness of the solution can be proven by the positive definiteness of matrix A. To prove positive definiteness, I show that the quadratic form of A (which is a symmetric square matrix) is strictly positive for every nonzero vector; then the matrix is positive definite.
\begin{proof}
From linear algebra, we know that, in this scenario, the quadratic form of A can be written as
\[x^T A x = \frac{2}{h}\Big(x_1^2 + \sum_{i=1}^{N-1}(x_{i+1} - x_{i})^2 + x_N^2\Big)\]
If $\mathbf{x} \neq \mathbf{0}$, at least one of these squared terms is strictly positive (either two consecutive entries differ, or all entries are equal and nonzero, so $x_1^2 > 0$), hence $x^T A x > 0$. For that reason, $A$ is positive definite. In quadratic programming, if $A$ is positive definite, there is one and only one solution. Uniqueness is proven.
\end{proof}
\subsection{Part c}
As hinted by the instructor, $f(x) = c = 1$ to simplify the visualization. Coordinate descent and gradient descent are both used to optimize the objective function. For simplicity, I set $N = 6$ such that $h = 0.2$, and there are six grid points at $x_1 = 0, x_2 = 0.2, x_3 = 0.4, x_4 = 0.6, x_5 = 0.8, x_6 = 1.0$.
This is the exact temperature distribution over the bar, given by the simple function $T(x) = \frac{1}{2}x(x-1)$\\
\includegraphics[width=10cm]{img/problem9_plt1.png}
The visualization of the gradient method with the Armijo step size rule is shown here:
\includegraphics[width=10cm]{img/problem9_plt2.png}
The coordinate descent code is shown here.
\lstinputlisting[language=python, basicstyle=\tiny]{src/coordinate_descent.py}
\subsection{Part d}
The Armijo step size rule has been replaced by predefined step sizes and the results are summarized below. For the gradient method, $\lambda = 0.2$ overflowed on my laptop, so I show the results for the $1/k$ step size below. \\
\includegraphics[width=10cm]{img/problem9_plt3.png}
As we see, they converge in a very similar manner. Coordinate descent is not able to converge in this case.
\end{section}
\begin{section}{Problem 12}
\includegraphics[width=10cm]{img/problem12.png}
\subsection{Part a}
The level plot of the function is shown below. The left-hand side is the original function and the right-hand side is the quadratic approximation at $x_0 = (0, -1)^T$.
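A plot like this can be produced along the following lines (an illustrative sketch only: \texttt{f}, \texttt{grad} and \texttt{hess} below are placeholders standing in for the objective from the problem statement and its derivatives, which are not retyped here).
\begin{lstlisting}[language=python, basicstyle=\small]
import numpy as np
import matplotlib.pyplot as plt

# Placeholders: substitute the objective from the problem statement and its derivatives.
f = lambda x1, x2: x1**2 + x2**2
grad = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 2.0]])

x0 = np.array([0.0, -1.0])
g, H = grad(x0), hess(x0)

def quad_model(x1, x2):
    # q(x) = f(x0) + g^T (x - x0) + 1/2 (x - x0)^T H (x - x0)
    d = np.stack([x1 - x0[0], x2 - x0[1]], axis=-1)
    return f(*x0) + d @ g + 0.5 * np.einsum('...i,ij,...j->...', d, H, d)

xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.contour(xs, ys, f(xs, ys), 30)           # level sets of the original function
ax2.contour(xs, ys, quad_model(xs, ys), 30)  # level sets of the quadratic model at x0
plt.show()
\end{lstlisting}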
\\ \includegraphics[width=12cm]{img/problem12_plt1.png}

\subsection{Part b}
The trust region method is implemented to solve this subproblem:
\lstinputlisting[language=python, basicstyle=\tiny]{src/trust_region.py}

The computational results are:

\begin{tabular}{lll}
\hline
$\Delta_0$ & $d_0$ & Steps until convergence \\ \hline\hline
0.25 & (0.02, 0.249) & 2 \\
0.75 & (0.048, 1.0) & 1 \\
1.25 & (0.048, 1.0) & 1
\end{tabular}

\subsection{Part c}
The original function's level graph is the same, but the quadratic approximation is different. \\
\includegraphics[width=12cm]{img/problem12_plt2.png}

The algorithm fails in this case since the Hessian matrix is not positive definite, so the Cholesky factorization cannot be performed.

\end{section}

\begin{section}{Problem 13}
\includegraphics[width=10cm]{img/problem13.png}

First, let's calculate $A$ and $b$ from the given formula:
\[f(x) = \frac{1}{2}(x_1, x_2, x_3) \begin{pmatrix} a_1 & a_2 & a_3\\ a_2 & a_4 & a_5 \\ a_3 & a_5 & a_6 \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \\ x_3 \end{pmatrix} + (b_1, b_2, b_3) \begin{pmatrix} x_1\\ x_2\\x_3 \end{pmatrix}\]
Since there is no first-order term in the given function, $b = (0, 0, 0)^T$. Expanding the quadratic part,
\[\frac{1}{2}(a_1x_1+a_2x_2+a_3x_3, a_2x_1+a_4x_2+a_5x_3, a_3x_1+a_5x_2+a_6x_3) \begin{pmatrix} x_1\\ x_2 \\ x_3 \end{pmatrix}\]
\[=\frac{1}{2}x_1(a_1x_1+a_2x_2+a_3x_3) + \frac{1}{2}x_2(a_2x_1+a_4x_2+a_5x_3) + \frac{1}{2}x_3(a_3x_1+a_5x_2+a_6x_3)\]
\[=\frac{1}{2}a_1x_1^2 + \frac{1}{2}a_4x_2^2 + \frac{1}{2}a_6x_3^2 + a_2x_1x_2 + a_3x_1x_3 + a_5x_2x_3\]
\[=x_1^2 -x_1x_2 + x_2^2 -x_2x_3 + x_3^2\]
For this reason, we know
$$A = \begin{pmatrix} 2 & -1 & 0\\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}, \;\;\; b = \begin{pmatrix} 0\\ 0 \\ 0 \end{pmatrix}$$
The implementation is done in Python; an illustrative sketch of the iteration is given below, followed by the submitted code for reference.
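As a self-contained illustration (this is my own minimal sketch, not the contents of \texttt{src/cg\_algo.py}; the function and variable names are chosen freely), the following conjugate gradient routine, applied to the $A$ and $b$ above with the starting point $(0, 1, 2)^T$ listed as iteration $0$ in the table below, reproduces the tabulated iterates up to rounding:

\begin{lstlisting}[language=python, basicstyle=\small]
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    # Conjugate gradient method for A x = b with A symmetric positive definite.
    x = np.asarray(x0, dtype=float)
    r = b - A @ x              # residual
    d = r.copy()               # first search direction
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        d = r_new + beta * d
        r = r_new
        print(k + 1, x)                   # iterate after step k+1
    return x

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
b = np.zeros(3)
conjugate_gradient(A, b, x0=[0.0, 1.0, 2.0])
\end{lstlisting}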
\lstinputlisting[language=python, basicstyle=\small]{src/cg_algo.py}

The execution results are:

\begin{tabular}{llll}
\hline
Iteration & $x_1$ & $x_2$ & $x_3$ \\ \hline\hline
0 & 0 & 1 & 2 \\
1 & 0.5 & 1 & 0.5 \\
2 & 0.556 & 0.444 & 0.333 \\
3 & 0 & 0 & 0
\end{tabular}

\end{section}

\begin{section}{Problem 15}
\includegraphics[width=12cm]{img/problem15.png}

\subsection{Part 1}
First, let's write the function in matrix form:
$$\frac{1}{2} (x_1, x_2) \begin{pmatrix} a_1 & a_2\\ a_2 & a_3 \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix} + (b_1, b_2) \begin{pmatrix} x_1\\ x_2 \end{pmatrix}$$
\[= \frac{1}{2}(a_1x_1 + a_2x_2, a_2x_1+a_3x_2) \begin{pmatrix} x_1\\ x_2 \end{pmatrix} + x_1b_1 + x_2b_2\]
\[= \frac{1}{2}(a_1x_1 + a_2x_2)x_1 + \frac{1}{2}(a_2x_1 + a_3x_2)x_2 + x_1b_1 + x_2b_2\]
\[= \frac{1}{2}a_1x_1^2 + a_2x_1x_2 + \frac{1}{2}a_3x_2^2 + b_1x_1 + b_2x_2\]
\[=-12x_2 + 4x_1^2 + 4x_2^2-4x_1x_2\]
By comparing parameters, we get the following:
\[b_2 = -12, b_1 = 0, a_2 = -4, a_1 =a_3 = 8\]
\[A = \begin{pmatrix} 8 & -4\\ -4 & 8 \end{pmatrix}, b = \begin{pmatrix} 0\\ -12 \end{pmatrix}\]
Based on the definition of $A$-conjugacy (on page 120), we need $\langle d_1, Ad_2 \rangle = 0$ with $d_1 = (1,0)^T$, so let's write the condition out as follows:
$$(1,0) \begin{pmatrix} 8 & -4\\ -4 & 8 \end{pmatrix} \begin{pmatrix} \delta_1\\ \delta_2 \end{pmatrix} = (8, -4) \begin{pmatrix} \delta_1\\ \delta_2 \end{pmatrix} = 8\delta_1 - 4\delta_2 = 0$$
So, as long as $2\delta_1 = \delta_2$, the condition $\langle d_1, Ad_2 \rangle = 0$ is satisfied:
$$d_2 = \begin{pmatrix} \gamma\\ 2\gamma \end{pmatrix}, \;\; \forall \gamma \in \mathbb{R}$$

\subsection{Part 2}
Let's calculate the gradient of the function:
\[f(x) = -12x_2 + 4x_1^2 + 4x_2^2-4x_1x_2, \;\; \nabla f(x) = \begin{pmatrix} 8x_1 - 4x_2\\ -12 + 8x_2 - 4x_1 \end{pmatrix} = g(x)\]
Then, given $x_0 = \begin{pmatrix} -\frac{1}{2}\\ 1 \end{pmatrix}$, we can evaluate $g_0 = \begin{pmatrix} -\frac{1}{2} \times 8 - 4 \times 1\\ -12 + 8 + 2 \end{pmatrix} = \begin{pmatrix} -8\\ -2 \end{pmatrix}$.

In this problem, the first direction $d_0 = (1, 0)^T$ is given, so we do not have to obtain it from $g_0$. The next thing we need is $\lambda_0 = -\frac{ \langle g_0, d_0 \rangle}{\langle Ad_0, d_0\rangle}$. I calculate the numerator and the denominator separately:
\[-\langle g_0, d_0 \rangle = -(-8, -2) \begin{pmatrix} 1\\ 0 \end{pmatrix} = 8\]
\[\langle Ad_0, d_0 \rangle = (1,0) \begin{pmatrix} 8 & -4\\ -4 & 8 \end{pmatrix} \begin{pmatrix} 1\\ 0 \end{pmatrix} = (8, -4) \begin{pmatrix} 1\\ 0 \end{pmatrix} = 8\]
For this reason, $\lambda_0 = \frac{8}{8}=1$, and $x^{(1)}$ is
\[x^{(1)} = \begin{pmatrix} -1/2\\ 1 \end{pmatrix} + 1 \begin{pmatrix} 1\\ 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{2}\\ 1 \end{pmatrix} \]
In the next iteration, we calculate $x^{(2)}$.
To accomplish that, we calculate $g_1 = \begin{pmatrix} \frac{1}{2} \times 8 - 4 \times 1\\ -12 + 8 - 4 \times \frac{1}{2} \end{pmatrix} = \begin{pmatrix} 0\\ -6 \end{pmatrix}$, and, as given in the problem, $d_1 = \begin{pmatrix} 1\\ 2 \end{pmatrix}$. I follow the same routine to calculate $\lambda_1$:
\[-\langle g_1, d_1 \rangle = -(0, -6) \begin{pmatrix} 1\\ 2 \end{pmatrix} = 12\]
\[\langle Ad_1, d_1 \rangle = (1,2) \begin{pmatrix} 8 & -4\\ -4 & 8 \end{pmatrix} \begin{pmatrix} 1\\ 2 \end{pmatrix} = (0, 12) \begin{pmatrix} 1\\ 2 \end{pmatrix} = 24\]
For this reason, $\lambda_1 = \frac{1}{2}$, and we can evaluate $x^{(2)}$ as
\[x^{(2)} = \begin{pmatrix} 1/2\\ 1 \end{pmatrix} + \frac{1}{2} \begin{pmatrix} 1\\ 2 \end{pmatrix} = \begin{pmatrix} 1\\ 2 \end{pmatrix} \]

\subsection{Part 3}
The visualization has been done in Python via Matplotlib. Please refer to the source code for details. Here is the plot:

\includegraphics[width=6cm]{img/problem15_plt1.png}

\end{section}

\begin{section}{Problem 18}
\includegraphics[width=8cm]{img/problem18.png}

\subsection{Part a}
First, following the hints posted on the website, I tested four methods, namely Newton-CG, trust-ncg, trust-krylov, and trust-exact, in SciPy with tolerance $10^{-6}$. All results are summarized below. To make sure all methods work properly, the original function, the Jacobian and the Hessian are all provided to the solver. For details, please refer to the source code.

\begin{tabular}{lllll}
\hline
Method & Iterations & $x_1$ & $x_2$ & Function value \\ \hline\hline
newton-cg & 14 & 2.541 & 0.260 & 2.247 \\
trust-ncg & 15 & 2.541 & 0.260 & 2.247 \\
trust-krylov & 21 & 2.541 & 0.260 & 2.247 \\
trust-exact & 14 & 2.541 & 0.260 & 2.247
\end{tabular}

So, all the methods generated the same results.

\subsection{Part b}
The least-squares results are summarized here; three methods were used, as requested by the instructor. The objective function is different from part (a): it is simply the residual of the least-squares fit. After fitting, here are the results.

\begin{tabular}{lllll}
\hline
Method & Iterations & $x_1$ & $x_2$ & Function value \\ \hline\hline
trf & 8 & 2.541 & 0.260 & $5.159\times 10^{-5}$ \\
dogbox & 8 & 2.541 & 0.260 & $5.167\times 10^{-5}$ \\
lm & 22 & 2.541 & 0.260 & $5.179\times 10^{-5}$
\end{tabular}

\subsection{Part c}
The ultimate goal is to estimate the parameters and fit the five data points. The graph below is the visualization for parts (a) and (b). Since all the fitted parameters are the same, I use only one plot here for simplicity.

\includegraphics[width=10cm]{img/problem18_plt1.png}

\end{section}

\end{document}
\chapter{<%= chapterName %>} Write <%= chapterName %> here.
\unnumberedchapter{Acknowledgment} \chapter*{Acknowledgment} Please refer to \url{https://groups.oist.jp/grad/academic-program-policies} for specifications.
\section{Convergence in Probability}%
\label{sec:convergence_in_probability}

\begin{definition}[Convergence in Probability]
  $Y_n \convp c$ if for every $\epsilon>0$ and $\delta > 0,\ \exists\ n_0(\epsilon, \delta)$ such that
  \begin{equation*}
    P(|Y_n - c| > \epsilon) < \delta,\ \forall n > n_0(\epsilon, \delta)
  \end{equation*}
\end{definition}

\begin{thm}[Chebyshev Inequality]
  For a random variable $Y$, a constant $c$, and $a>0$,
  \begin{equation*}
    P(|Y-c| \ge a) \le \frac{\E (Y-c)^2}{a^2}
  \end{equation*}
\end{thm}

\begin{thm}[Markov Inequality]
  If $X$ is a non-negative random variable and $a>0$, then
  \begin{equation*}
    P( X \ge a) \le \frac{\E X}{a}
  \end{equation*}
\end{thm}

\begin{thm}
  If $\E (Y_n-c)^2 \to 0$, then $Y_n \convp c$.
\end{thm}

\begin{thm}
  If $X_1, \ldots, X_n$ are iid with $\E X_i = \mu$ and $\Var X_i = \sigma^2 < \infty$, then
  \begin{equation*}
    \bar{X} \convp \mu
  \end{equation*}
\end{thm}

\begin{thm}
  If $A_n \convp a$ and $B_n \convp b$, then
  \begin{enumerate}
  \item $A_n \pm B_n \convp a \pm b$,
  \item $A_n \cdot B_n \convp a \cdot b$.
  \end{enumerate}
\end{thm}

\begin{thm}
  If $Y_n \convp c$ and $f$ is continuous at $c$, then $f(Y_n) \convp f(c)$.
\end{thm}

\begin{definition}
  A sequence of estimators $\delta_n$ of $g(\theta)$ is \emph{consistent} if
  \begin{equation*}
    \delta_n \convp g(\theta)
  \end{equation*}
\end{definition}

\begin{thm}
  If the bias and the variance of $\delta_n$ tend to $0$ as $n \to \infty$, then $\delta_n$ is consistent.
\end{thm}

\begin{definition}
  $A_n = o_p (B_n)$ if $ \frac{A_n}{B_n} \convp 0$.
\end{definition}
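As a short illustration of how these results fit together, Chebyshev's inequality yields the weak law of large numbers stated above: since $\E \bar{X} = \mu$ and $\Var \bar{X} = \sigma^2/n$, we have, for every $\epsilon > 0$,
\begin{equation*}
  P(|\bar{X} - \mu| \ge \epsilon) \le \frac{\E (\bar{X} - \mu)^2}{\epsilon^2} = \frac{\sigma^2}{n \epsilon^2},
\end{equation*}
which tends to $0$ as $n \to \infty$, so $\bar{X} \convp \mu$; equivalently, $\bar{X} - \mu = o_p(1)$.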
\section{Properties of linear transformations} \begin{outcome} \begin{enumerate} \item Use properties of linear transformations to solve problems. \item Find the composite of transformations and the inverse of a transformation. \end{enumerate} \end{outcome} We begin by noting that linear transformations preserve the zero vector, negation, and linear combinations. \begin{proposition}{Properties of linear transformations}{properties-linear-transformation} Let $T: \R^n \to \R^m$ be a linear transformation. Then \begin{itemize} \item $T$ preserves the zero vector: $T(\vect{0}) = \vect{0}$. \item $T$ preserves negation: $T(-\vect{v}) = -T(\vect{v})$. \item $T$ preserves linear combinations: \begin{equation*} T(a_1\vect{v}_1 + a_2\vect{v}_2 + \ldots + a_k \vect{v}_k) ~=~ a_1T(\vect{v}_1) + a_2T(\vect{v}_2) + \ldots + a_k T(\vect{v}_k). \end{equation*} \end{itemize} \end{proposition} \begin{example}{Linear combination}{linear-transformation-combination} Let $T:\R^3 \to \R^4$ be a linear transformation such that \begin{equation*} T \paren{\begin{mymatrix}{r} 1 \\ 3 \\ 1 \end{mymatrix}} = \begin{mymatrix}{r} 4 \\ 4 \\ 0 \\ -2 \end{mymatrix} \quad\mbox{and}\quad T \paren{\begin{mymatrix}{r} 4 \\ 0 \\ 5 \end{mymatrix}} = \begin{mymatrix}{r} 4 \\ 5 \\ -1 \\ 5 \end{mymatrix}. \end{equation*} Find $T\paren{\begin{mymatrix}{r} -7 \\ 3 \\ -9 \end{mymatrix}}$. \end{example} \begin{solution} Using the third property in Proposition~\ref{prop:properties-linear-transformation}, we can find $T\paren{\begin{mymatrix}{r} -7 \\ 3 \\ -9 \end{mymatrix}}$ by writing $\begin{mymatrix}{r} -7 \\ 3 \\ -9 \end{mymatrix}$ as a linear combination of $\begin{mymatrix}{r} 1 \\ 3 \\ 1 \end{mymatrix}$ and $\begin{mymatrix}{r} 4 \\ 0 \\ 5 \end{mymatrix}$. By solving the appropriate system of equations, we find that \begin{equation*} \begin{mymatrix}{r} -7 \\ 3 \\ -9 \end{mymatrix} = \begin{mymatrix}{r} 1 \\ 3 \\ 1 \end{mymatrix} - 2 \begin{mymatrix}{r} 4 \\ 0 \\ 5 \end{mymatrix}. \end{equation*} Therefore, \begin{eqnarray*} T \paren{\begin{mymatrix}{r} -7 \\ 3 \\ -9 \end{mymatrix}} &=& T \paren{ \begin{mymatrix}{r} 1 \\ 3 \\ 1 \end{mymatrix} -2 \begin{mymatrix}{r} 4 \\ 0 \\ 5 \end{mymatrix} } \\ &=& T \begin{mymatrix}{r} 1 \\ 3 \\ 1 \end{mymatrix} -2T \begin{mymatrix}{r} 4 \\ 0 \\ 5 \end{mymatrix} ~=~ \begin{mymatrix}{r} 4 \\ 4 \\ 0 \\ -2 \end{mymatrix} -2 \begin{mymatrix}{r} 4 \\ 5 \\ -1 \\ 5 \end{mymatrix} ~=~ \begin{mymatrix}{r} -4 \\ -6 \\ 2 \\ -12 \end{mymatrix}. \end{eqnarray*} \end{solution} Suppose that we first apply a linear transformation $T$ to a vector, and then the linear transformation $S$ to the result. The resulting two-step transformation is also a linear transformation, called the \textbf{composition} of $T$ and $S$. \begin{definition}{Composition of linear transformations}{composite-transformations} Let $S: \R^k \to \R^n$ and $T: \R^n \to \R^m$ be linear transformations. Then the \textbf{composition}% \index{linear transformation!composition}% \index{composition of linear transformations} of $S$ and $T$ (also called the \textbf{composite transformation}% \index{composite transformation} of $S$ and $T$) is the linear transformation \begin{equation*} T\circ S: \R^k \to \R^m \end{equation*} that is defined by \begin{equation*} (T\circ S) (\vect{v}) = T(S(\vect{v})), \end{equation*} for all $\vect{v}\in\R^k$. \end{definition} Notice that the resulting vector will be in $\R^m$. Be careful to observe the order of transformations. 
The composite transformation $T\circ S$ means that we are {\em first} applying $S$, and {\em then} $T$. Composition of linear transformations is written from right to left. The composition $T\circ S$ is sometimes pronounced ``{\em $T$ after $S$}''.

\begin{theorem}{Matrix of a composite transformation}{composite-transformation}
  Let $S: \R^k \to \R^n$ and $T: \R^n \to \R^m$ be linear transformations. Let $A$ be the matrix corresponding to $S$, and let $B$ be the matrix corresponding to $T$. Then the matrix corresponding to the composite linear transformation $T\circ S$ is $BA$.
\end{theorem}

\begin{proof}
  For all $\vect{v}\in\R^k$, we have
  \begin{equation*}
    (T\circ S)(\vect{v}) = T(S(\vect{v})) = B(A \vect{v}) = (BA) \vect{v}.
  \end{equation*}
  Therefore, $BA$ is the matrix corresponding to $T\circ S$.
\end{proof}

\begin{example}{Two rotations}{two-rotations}
  Find the matrix for a counterclockwise rotation%
  \index{matrix!of a rotation}%
  \index{rotation!matrix of}%
  \index{linear transformation!rotation}
  by angle $\theta+\phi$ in two different ways, and compare.
\end{example}

\begin{solution}
  Let $A_{\theta}$ be the matrix of a rotation by $\theta$, and let $A_{\phi}$ be the matrix of a rotation by angle $\phi$. We calculated these matrices in Example~\ref{exa:rotation-theta-R2}. Then a rotation by the angle $\theta+\phi$ is given by the product of these two matrices:
  \begin{eqnarray*}
    A_{\theta}A_{\phi}
    &=&
    \begin{mymatrix}{cc}
      \cos\theta & -\sin\theta \\
      \sin\theta & \cos\theta \\
    \end{mymatrix}
    \begin{mymatrix}{cc}
      \cos\phi & -\sin\phi \\
      \sin\phi & \cos\phi \\
    \end{mymatrix}
    \\
    &=&
    \begin{mymatrix}{cc}
      \cos\theta \cos\phi - \sin\theta \sin\phi & -\cos\theta \sin\phi - \sin\theta \cos\phi \\
      \sin\theta \cos\phi + \cos\theta \sin\phi & \cos\theta \cos\phi - \sin\theta \sin\phi
    \end{mymatrix}.
  \end{eqnarray*}
  On the other hand, we can compute the matrix for a rotation by angle $\theta+\phi$ directly:
  \begin{eqnarray*}
    A_{\theta+\phi} &=&
    \begin{mymatrix}{cc}
      \cos(\theta+\phi) & -\sin(\theta+\phi) \\
      \sin(\theta+\phi) & \cos(\theta+\phi) \\
    \end{mymatrix}.
  \end{eqnarray*}
  The fact that these matrices are equal amounts to the well-known trigonometric identities for the sum of two angles%
  \index{trigonometry!sum of two angles}%
  \index{sum!of two angles}%
  \index{addition!of two angles}, which we have here derived using linear algebra concepts:
  \begin{eqnarray*}
    \sin(\theta+\phi) &=& \sin\theta \cos\phi + \cos\theta \sin\phi, \\
    \cos(\theta+\phi) &=& \cos\theta \cos\phi - \sin\theta \sin\phi.
  \end{eqnarray*}
\end{solution}

\begin{example}{Multiple rotations in $\R^3$}{multiple-rotations}
  Find the matrix of the linear transformation $T:\R^3\to\R^3$ that is given as follows: a rotation by $30$ degrees about the $z$-axis, followed by a rotation by $45$ degrees about the $x$-axis.
\end{example}

\begin{solution}
  It would be quite difficult to picture the transformation $T$ in one step. Fortunately, we don't have to do this. All we have to do is find the matrix for each rotation separately, then multiply the two matrices. We have to be careful to multiply the matrices in the correct order.

  Let $B$ be the matrix for a $30$-degree rotation about the $z$-axis.
It is given exactly as in Example~\ref{exa:rotation-R3}: \begin{equation*} \def\arraystretch{1.4} B = \begin{mymatrix}{ccc} \cos 30^{\circ} & -\sin 30^{\circ} & 0 \\ \sin 30^{\circ} & \cos 30^{\circ} & 0 \\ 0 & 0 & 1 \\ \end{mymatrix} = \begin{mymatrix}{ccc} \frac{\sqrt3}{2} & -\frac{1}{2} & 0 \\ \frac{1}{2} & \frac{\sqrt3}{2} & 0 \\ 0 & 0 & 1 \\ \end{mymatrix}. \end{equation*} Let $C$ be the matrix for a $45$-degree rotation about the $x$-axis. It is analogous to Example~\ref{exa:rotation-R3}, except that the rotation takes place in the $yz$-plane instead of the $xy$-plane. \begin{equation*} \def\arraystretch{1.4} C = \begin{mymatrix}{ccc} 1 & 0 & 0 \\ 0 & \cos 45^{\circ} & -\sin 45^{\circ} \\ 0 & \sin 45^{\circ} & \cos 45^{\circ} \\ \end{mymatrix} = \begin{mymatrix}{ccc} 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt2} & -\frac{1}{\sqrt2} \\ 0 & \frac{1}{\sqrt2} & \frac{1}{\sqrt2} \\ \end{mymatrix}. \end{equation*} Finally, to apply the linear transformation $T$ to a vector $\vect{v}$, we must first apply $B$ and then $C$. This means that $T(\vect{v}) = C(B\vect{v})$. Therefore, the matrix corresponding to $T$ is $CB$. Note that it is important that we multiply the matrices corresponding to each subsequent rotation {\em from right to left}. \begin{equation*} \def\arraystretch{1.4} A ~=~ CB ~=~ \begin{mymatrix}{ccc} 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt2} & -\frac{1}{\sqrt2} \\ 0 & \frac{1}{\sqrt2} & \frac{1}{\sqrt2} \\ \end{mymatrix} \begin{mymatrix}{ccc} \frac{\sqrt3}{2} & -\frac{1}{2} & 0 \\ \frac{1}{2} & \frac{\sqrt3}{2} & 0 \\ 0 & 0 & 1 \\ \end{mymatrix} ~=~ \begin{mymatrix}{ccc} \frac{\sqrt3}{2} & -\frac{1}{2} & 0 \\ \frac{1}{2\sqrt2} & \frac{\sqrt3}{2\sqrt2} & -\frac{1}{\sqrt2} \\ \frac{1}{2\sqrt2} & \frac{\sqrt3}{2\sqrt2} & \frac{1}{\sqrt2} \\ \end{mymatrix}. \end{equation*} \end{solution} We can also consider the inverse of a linear transformation. The inverse of $T$, if it exists, is a linear transformation that undoes the effect of $T$. \begin{definition}{Inverse of a transformation}{inverse-transformation} Let $T,S: \R^n \to \R^n$ be linear transformations. Suppose that for each $\vect{v} \in \R^n$, \begin{equation*} (S\circ T)(\vect{v}) = \vect{v} \end{equation*} and \begin{equation*} (T\circ S)(\vect{v}) = \vect{v}. \end{equation*} Then $S$ is called the \textbf{inverse}% \index{linear transformation!inverse}% \index{inverse!of a linear transformation} of $T$, and we write $S=T^{-1}$. \end{definition} \begin{example}{Inverse of a transformation}{inverse-transformation} What is the inverse of a counterclockwise rotation by the angle $\theta$ in $\R^2$? \end{example} \begin{solution} The inverse is a clockwise rotation by the same angle. \end{solution} It is perhaps not entirely unexpected that the matrix of $T^{-1}$ is exactly the inverse of the matrix of $T$, if it exists. \begin{theorem}{Matrix of the inverse transformation}{inverse-transformation} Let $T:\R^n \to \R^n$ be a linear transformation and let $A$ be the corresponding $n\times n$-matrix. Then $T$ has an inverse if and only if the matrix $A$ is invertible. In this case, the matrix of $T^{-1}$ is $A^{-1}$. \end{theorem} \begin{example}{Matrix of the inverse transformation}{matrix-inverse-transformation} Find the inverse of the linear transformation $T:\R^2\to\R^2$ given by \begin{equation*} T\paren{\begin{mymatrix}{c} x \\ y \end{mymatrix}} = \begin{mymatrix}{c} 2x+y \\ 7x+4y \end{mymatrix}. \end{equation*} \end{example} \begin{solution} The easiest way to do this is to find the matrix of $T$. 
We have \begin{equation*} T\paren{\begin{mymatrix}{c} 1 \\ 0 \end{mymatrix}} = \begin{mymatrix}{c} 2 \\ 7 \end{mymatrix} \quad\mbox{and}\quad T\paren{\begin{mymatrix}{c} 0 \\ 1 \end{mymatrix}} = \begin{mymatrix}{c} 1 \\ 4 \end{mymatrix}. \end{equation*} Therefore, the matrix of $T$ is \begin{equation*} A = \begin{mymatrix}{rr} 2 & 1 \\ 7 & 4 \\ \end{mymatrix}. \end{equation*} The inverse of $A$ is \begin{equation*} A^{-1} = \begin{mymatrix}{rr} 4 & -1 \\ -7 & 2 \\ \end{mymatrix}. \end{equation*} Therefore, $T^{-1}$ is the linear transformation defined by \begin{equation*} T^{-1}\paren{\begin{mymatrix}{c} x \\ y \end{mymatrix}} = \begin{mymatrix}{rr} 4 & -1 \\ -7 & 2 \\ \end{mymatrix} \begin{mymatrix}{c} x \\ y \end{mymatrix} = \begin{mymatrix}{c} 4x-y \\ -7x+2y \end{mymatrix}. \end{equation*} \end{solution}
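As a quick check, we can verify that the matrices of $T$ and $T^{-1}$ found above multiply to the identity matrix:
\begin{equation*}
  \begin{mymatrix}{rr}
    4 & -1 \\
    -7 & 2 \\
  \end{mymatrix}
  \begin{mymatrix}{rr}
    2 & 1 \\
    7 & 4 \\
  \end{mymatrix}
  =
  \begin{mymatrix}{rr}
    1 & 0 \\
    0 & 1 \\
  \end{mymatrix},
\end{equation*}
so that $(T^{-1}\circ T)(\vect{v}) = A^{-1}A\vect{v} = \vect{v}$ for every $\vect{v}\in\R^2$, exactly as the definition of the inverse requires. The product in the other order is also the identity, so $(T\circ T^{-1})(\vect{v}) = \vect{v}$ as well.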
\documentclass[a4paper,12pt]{article} \usepackage[left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm]{geometry} \usepackage{amsmath,amssymb,amsthm,algorithm,algorithmic,graphicx,yhmath,url,enumitem,lscape,hyperref,enumitem} \usepackage{wrapfig,subfigure} \newcounter{problem} \newcounter{remark} \newcounter{hint} \newenvironment{remark}{\refstepcounter{remark} \vspace{0.1cm} \par \noindent {\bf Remark \arabic{remark}}}{\vspace{0.3cm}} \newenvironment{hint}{\refstepcounter{hint} \vspace{0.1cm} \par \noindent {\bf Hint \arabic{hint}}}{\vspace{0.3cm}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Rn}{\mathbb{R}^n} \newcommand{\Rnn}{\mathbb{R}^{n \times n}} \newcommand{\bes}{\begin{equation*}} \newcommand{\ees}{\end{equation*}} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bms}{\begin{multline*}} \newcommand{\emults}{\end{multline*}} % Matrices \newcommand{\bbm}{\begin{bmatrix}} \newcommand{\ebm}{\end{bmatrix}} \newcommand{\bpm}{\begin{pmatrix}} \newcommand{\epm}{\end{pmatrix}} % Strecthing \renewcommand{\arraystretch}{1.3} \newcommand{\eps}{\epsilon} \newcommand{\fl}{\text{fl}} \newcommand{\Lp}{{L^p}} \newcommand{\Ker}{\text{Ker}\,} \newcommand{\loc}{{\text{loc}}} \newcommand{\ccinf}{C_c^\infty} \newcommand{\supp}{\text{supp}} \newcommand{\dist}{\text{dist}} % Gravitational acceleration \newcommand{\bg}{\mathbf{g}} % Friction force \newcommand{\bFf}{\mathbf{F_f}} % Gravitational force \newcommand{\bFg}{\mathbf{F_g}} % Force \newcommand{\bF}{\mathbf{F}} % Position, velocity, acceleration \newcommand{\br}{\mathbf{r}} \newcommand{\bv}{\mathbf{v}} \newcommand{\ba}{\mathbf{a}} % Jacobian \newcommand{\bDF}{\mathbf{DF}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \title{Accuracy in artillery computations} \author{Carl Christian Kjelgaard Mikkelsen} \begin{document} \pagenumbering{arabic} \thispagestyle{empty} \noindent Ume\aa{} University \hfill Fall 2018 \\ Department of Computing Science\\ \vskip 2.5cm \begin{center} {\Huge {\bf Project 3}}\\{\LARGE Error estimation for artillery computations}\end{center} \vskip 0.3cm \begin{center} {\huge Scientific Computing} \vfill {\Large The deadline for this project can be found at: \href{http://www8.cs.umu.se/kurser/5DV005/HT18/planering.html}{http://www8.cs.umu.se/kurser/5DV005/HT18/planering.html}\\ (Link \emph{Overview} on the course homepage.)} \end{center} \vfill {% \large \begin{itemize} \item The submission should consist of: \begin{itemize} \item The complete report, including \begin{itemize} \item A front page with the following information: \begin{enumerate} \item Your {\bf name}. \item The {\bf course name}. \item Your {\bf username} at the Department of Computing Science. \item The {\bf project number}. \item The {\bf version} of the submission (in case of re-submissions). \end{enumerate} \end{itemize} \item An appendix with the source code. \item To simplify feedback, the main report (optionally excluding the appendix) must have {\bf numbered sections} and {\bf page numbers}. \end{itemize} \item The submitted code must be {\tt MATLAB}-compatible. If you choose to work in Octave, verify that your code is {\tt MATLAB}-compatible before you submit your project. \item If you write your report using \LaTeX, double-check that your references have been resolved correctly before you submit. ``Figure ??'' is useless to any reader. 
\item Your report should be submitted as a pdf file uploaded via the\linebreak \href{https://www8.cs.umu.se/~labres/py/handin.cgi}{https://www8.cs.umu.se/\textasciitilde{}labres/py/handin.cgi} page, also available as the
\begin{center}
\href{https://www8.cs.umu.se/~labres/py/handin.cgi}{Submit/Check results}
\end{center}
link at the bottom left of the course home page.
\item Furthermore, the source code should be available in a folder called {\tt edu/5dv005/assN} in your home folder, where N is the project number. You will probably have to create the folder yourself.
\end{itemize}
}
\vfill
\newpage
\maketitle
\tableofcontents

\section{Primary purpose}
The primary purpose of this assignment is to develop the ability to compute error estimates which are reliable and accurate.

\section{Asymptotic error expansions}
We have considered the problem of approximating the range of a gun, the flight time of a shell, or the elevation necessary to hit a particular target. It remains to compute reliable error estimates for such approximations. In this project, we view each approximation $A$ as a function of the size of the time step $h$ used when computing the trajectories, i.e., $A=A_h$. We will investigate whether there exist asymptotic error expansions of the form
\be \label{equ:expansion}
T - A_h = \alpha h^p + \beta h^q + O(h^r), \quad 0 < p < q < r.
\ee
The term $\alpha h^p$ is called the primary error term, while $\beta h^q$ is the secondary error term. We will obtain reliable error estimates.

The asymptotic error expansion describes the difference between the target value $T$ and the approximation $A_h$. We cannot hope to obtain the exact value of $A_h$. The very best we can hope for is the floating point representation of $A_h$, i.e. $\hat{A}_h = \text{fl}(A_h)$, but in general even this is not a realistic goal. The difference between $A_h$ and the computed value $\hat{A}_h$ is the result of many rounding errors. By monitoring the \emph{computed} value of Richardson's fraction we can determine when the rounding error $A_h - \hat{A}_h$ is irrelevant compared with the error $T-A_h$.

\section{Software}
The function {\tt range\_rkx} moves the shell from point to point using a time step which is fixed except for the very last time step. Here a non-linear solver is used to compute the time step which will place the shell on the ground. This equation is solved to the limit of machine precision; specifically, the tolerance passed to the underlying bisection routine is $\text{tol} = 2^{-53}$.

The function {\tt range\_rkx\_sabotage} is identical to {\tt range\_rkx} except that the tolerance is much larger, specifically $\text{tol} = 2^{-3}$.

The function {\tt a3int} computes the integral of a given function along a trajectory computed by {\tt range\_rkx}.
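Before turning to the questions, it may help to recall how quantities like those reported in Figure \ref{fig:a3f2} are typically obtained from the expansion \eqref{equ:expansion}. The formulas below are one standard choice, shown only for orientation; the authoritative definitions are those given in the specification of {\tt MyRichardson}. Subtracting the expansions for the step sizes $h$ and $2h$ gives
\bes
A_h - A_{2h} = (T - A_{2h}) - (T - A_h) = \alpha (2^p - 1) h^p + O(h^q),
\ees
so the fraction
\bes
F_h = \frac{A_{2h} - A_{4h}}{A_h - A_{2h}} = 2^p + O(h^{q-p})
\ees
tends to $2^p$ as $h \to 0$ and thereby reveals the order $p$ of the primary error term, while
\bes
E_h = \frac{A_h - A_{2h}}{2^p - 1} = \alpha h^p + O(h^q) \approx T - A_h
\ees
serves as an error estimate. In Figure \ref{fig:a3f2} the fractions approach $2$, i.e. $p = 1$, in which case $2^p - 1 = 1$ and the error estimate reduces to the plain difference $A_h - A_{2h}$.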
\section{Questions}
\begin{figure}
\begin{verbatim}
  k  |  Approximation A_h   |  Fraction F_h  |  Error estimate E_h
  1  |  2.895480163672e+00  |   0.00000000   |   0.000000000000e+00
  2  |  2.805025851403e+00  |   0.00000000   |  -9.045431226844e-02
  3  |  2.761200888902e+00  |   2.06399064   |  -4.382496250165e-02
  4  |  2.739629445828e+00  |   2.03161941   |  -2.157144307422e-02
  5  |  2.728927822736e+00  |   2.01571695   |  -1.070162309156e-02
  6  |  2.723597892360e+00  |   2.00783544   |  -5.329930376490e-03
  7  |  2.720938129638e+00  |   2.00391198   |  -2.659762721350e-03
  8  |  2.719609546672e+00  |   2.00195456   |  -1.328582965698e-03
  9  |  2.718945579511e+00  |   2.00097692   |  -6.639671619268e-04
 10  |  2.718613676976e+00  |   2.00048837   |  -3.319025345263e-04
 11  |  2.718447745963e+00  |   2.00024413   |  -1.659310128161e-04
 12  |  2.718364785520e+00  |   2.00012206   |  -8.296044325107e-05
 13  |  2.718323306559e+00  |   2.00006078   |  -4.147896106588e-05
 14  |  2.718302567373e+00  |   2.00002842   |  -2.073918585666e-05
 15  |  2.718292197853e+00  |   2.00001403   |  -1.036952016875e-05
 16  |  2.718287013122e+00  |   2.00001123   |  -5.184730980545e-06
 17  |  2.718284420669e+00  |   1.99993264   |  -2.592452801764e-06
 18  |  2.718283124268e+00  |   1.99973060   |  -1.296401023865e-06
 19  |  2.718282476068e+00  |   2.00000000   |  -6.482005119324e-07
 20  |  2.718282151967e+00  |   2.00000000   |  -3.241002559662e-07
\end{verbatim}
\caption{The output of {\tt a3f2.m} after the completion of {\tt MyRichardson.m}}
\label{fig:a3f2}
\end{figure}

\begin{enumerate}
\item Copy {\tt scripts/a3f1.m} to {\tt work/MyRichardson.m} and complete the function according to the specification. It is likely that the function is working correctly when the corresponding minimal working example {\tt scripts/a3f2.m} returns the output given in Figure \ref{fig:a3f2}.
\item \label{q:range} Develop a script {\tt a3range.m} which applies Richardson's techniques to compute the range of the shot whose physical parameters are given by {\tt a3f3.m}.
\begin{itemize}
\item Use methods {\tt 'rk1', 'rk2', 'rk3', 'rk4'} to compute the trajectories.
\item Use time steps $h_k = 2^{-k}$ seconds for $k=0,1,2,\dotsc$.
\end{itemize}
For each of the four methods:
\begin{enumerate}
\item Determine the power of the primary error term.
\item Determine the power of the secondary error term.
\item Determine when the computed values of Richardson's fractions behave in a manner consistent with an asymptotic error expansion of the type given by equation \eqref{equ:expansion}.
\item Identify the best approximation of the range and explain why the error estimate is reliable.
\end{enumerate}
\item \label{q:time} Develop a script {\tt a3time} which computes the flight time of the shot given by {\tt a3f3} using your method of choice.
\begin{enumerate}
\item You must include an error estimate.
\item You must explain why your error estimate can be trusted.
\item You must discuss the behavior of Richardson's fractions.
\end{enumerate}
\item \label{q:elevation} Develop a script {\tt a3low} which computes the low firing solution for a target located at $15000$ meters to the right of the gun given by {\tt a3f3.m}.
\begin{itemize}
\item You must include an error estimate.
\item You must explain why your error estimate can be trusted.
\item You must compute Richardson's fractions and discuss their behavior.
\end{itemize}
{\bf Warning:} Richardson's fraction will not behave correctly unless the elevations are computed with what would appear to be excessive accuracy! Expect to use residuals which are as small as $10^{-10}$ meters.
{\bf Warning:} This is not a fast calculation, so begin with a small number of rows, say 6 rows, in Richardson's table and see if this is enough.

\item \label{q:smooth} Develop a script {\tt a3range\_g7} which computes the range of the shot defined by {\tt a3f4}.
\begin{enumerate}
\item For which methods do you retain the ability to estimate the range accurately?
\end{enumerate}
\item \label{q:sabotage} Develop a script {\tt a3range\_sabotage} which uses {\tt range\_rkx\_sabotage} to compute the range of the shot given by {\tt a3f3}.
\begin{enumerate}
\item For which methods do you retain the ability to estimate the range accurately?
\end{enumerate}
\item \label{q:length} The script {\tt a3length} computes the length of the trajectory of the shot given by {\tt a3f3.m}.
\begin{itemize}
\item It uses methods {\tt 'rk1', 'rk2', 'rk3', 'rk4'} to compute the trajectories.
\item It uses time steps $h_k = 2^{-k}$ seconds for $k=0,1,2,\dotsc$.
\item It uses the trapezoidal rule to compute the length of each trajectory.
\end{itemize}
\begin{enumerate}
\item For each method, what is the order of the primary error term?
\end{enumerate}
\end{enumerate}

\section{Concluding remarks}
Our entire analysis hinges on the existence of an asymptotic error expansion (AEE), see equation \eqref{equ:expansion}. It can be difficult to prove the existence of an AEE, but close observation of Richardson's fraction will allow you to determine when Richardson's error estimate is reliable, i.e., Questions \ref{q:range}, \ref{q:time}, \ref{q:elevation}.

Mathematically, the existence of an AEE requires a certain degree of differentiability to sustain the necessary Taylor expansions. In particular, higher order methods such as 'rk3' and 'rk4' require more differentiability than lower order methods such as 'rk1' and 'rk2'. The drag function used by {\tt a3f4} is a piecewise cubic polynomial which is two times, but not three times, differentiable. This significantly reduces the amount of differentiability available and causes the problems which you detected in Question \ref{q:smooth}.

Moreover, all central equations must be solved accurately or you lose the ability to estimate the error accurately. The error made by {\tt range\_rkx\_sabotage} when computing the final time-step destroyed your ability to estimate the difference between the true range $r$ and the approximation $r_h$, see Question \ref{q:sabotage}. Similarly, it was necessary to use a very small residual when computing the elevation $\theta_h$, see Question \ref{q:elevation}.

The calculation of the length of the trajectory of the shell in Question \ref{q:length} illustrates a fundamental principle of scientific computing. In general, a calculation is only as accurate as its least accurate component. If we use a first order method to compute the trajectory, then we gain nothing from using a second order method to compute the arc length. Similarly, if we use a third or fourth order method to compute the trajectory, then a second order accurate calculation of the arc length will reduce the overall accuracy to second order.

\end{document}
%\section{Technical development}
\section{How is the Development Affected by the Technical Possibilities?}
%\section{What in the development has been affected by the technical possibilities?}

As devices were limited, a goal was to make the app available on as many devices as possible. Creating a hybrid app with web technologies using Meteor made the app available both as a native app on Android and iOS devices and on the web. On the other hand, this is not enough: the pre-evaluation showed that only 3 out of 16 coaches had a smartphone at the time \cite{youngdrive-statistics}.

As internet access is available but expensive and seldom used, the app does not provide rich media or simulations, but focuses instead on creative design possibilities using multiple-choice questions, also keeping cognitive load and scaffolding in mind. The interviews and observations from iteration 4 show that the coaches are happy with the user friendliness of the app, and that the training in its current form has high value for the coaches. On the other hand, images could probably be used to reduce misconceptions caused by language, and a future wish of the coaches is to have the manuals, which include both text and images, accessible via mobile as well.

Most of the coaches have been first-time smartphone users. Letting them continuously test and co-create the app has created a tailor-made app based on their needs and conditions. It may be surprising how simple design solutions using text and clear visuals can provide rich learning feedback, as mentioned by Nicol \cite{nicol}. On the other hand, since the app is so tailor-made, it needs to be examined how to make it work in countries other than Uganda and Zambia, where design and technology preferences might be different.

That the app should work offline, and still be able to push quiz results when it gets online, has been a challenge. It can be hard to find good existing approaches for some technical platforms, but for Meteor, plugins such as GroundDB proved very usable, since synchronization is automatic. In other apps in developing countries, the user sometimes decides when to push data, but in this case the quiz results are so small in size that this was unnecessary. This might be reconsidered in the future, for example if answers are no longer solely multiple-choice.

Below, reasons why the development was negatively affected by the technical constraints are highlighted.

\subsection{Online Data Collection was Needed Earlier}
To test on all of the coaches in Uganda, it would have been preferable if data collection had happened via the app instead of manually already in iteration 3, since there would be more than 10 test subjects, which had been the limit in Zambia. This was planned for, but technical complications with Meteor delayed it. Done manually, not all data was recorded in iteration 3, which made it harder to draw conclusions from the quiz results. Both Lopez \cite{une-terre} and \cite{timo-ropinski-liu} explain how visualization techniques (like parallel coordinates) are more suitable for large data sets. More about this can be read below.

\subsubsection{Problems with Internet Access}
On day 3 of the Zambia coach training in iteration 2, iOS no longer allowed uncertified app installs from a computer: a paid ``trusted developer'' license was needed even for unreleased apps. This prevented the app from being installed on all the iOS devices, so only the web version could be used. Thus, only the web app was tested from Wednesday onwards.
This was a problem, as the app regularly crashed on refresh because of low internet capacity. Sometimes, it was necessary to go to the other office where there was wifi, refresh the webpage, and then go back to the training location. This would of course not work for Josefina. While it was positive that these issues were found thanks to testing, valuable time testing the actual functionality of the app was lost, with less feedback for continued development as a result.

\subsection{Backwards Compatibility Issues}
\label{backwards-capability}
Upgrading from version 1.2 to 1.3 during iteration 3 was a good example of technical limitations. It took a lot of time, but when it was discovered that version 1.3 did not work on old Android devices, the changes needed to be reverted. In another project, supporting only newer Android versions might have been acceptable, but here a ``better'' version of the software was not viable.

Meteor 1.2 had several disadvantages: while it worked for all devices, it did not support React.js. Meteor 1.3 was released, which promised a better developer experience, with JavaScript ES6 support, access to the Node Package Manager (npm), and official support for React.js. In 1.2, only some npm packages had been adapted for Meteor, and tools such as Webpack could not be used.

The downsides were discovered after implementation: backward compatibility with the older Android devices was missing. The backup would have been the web version, but at the time of iteration 3 there was no Heroku buildpack for Meteor 1.3, making the website crash. This was however fixed before iteration 4, which is why Meteor 1.3 was kept.

\subsection{No Time Assigned for Writing Automatic Tests}
The project would have benefited from passing automatic tests before doing user tests. While automatic tests were never written because of time constraints, beta releases and production releases were, since iteration 3, separated into different domains using Heroku's staging environment, with a different GitHub branch for each new iteration. Even so, automated tests could have helped find things that had worked previously but not in a new version, or find bugs in new functionality such as client-server communication. This would have made interactions with the coaches more efficient, since the users would have been exposed to an app with fewer accidental errors.

\subsection{Difficulties Comparing Quiz Results between Iterations}
It would be highly interesting to compare the quiz results between different iterations of the app, to measure how much learning has increased. However, the difference in educational range and knowledge between Zambia and Uganda is too large to draw such conclusions: while all of the Zambia coaches had 100\% correct answers on quiz 3 ``Financial literacy'' (iteration 2), the same number for Uganda was 91.8\% (iteration 4). This makes further conclusions, beyond informed guesses and observations, very hard. See more about this in Section \ref{sec:internationalization}.

How to empirically measure and compare learning effectiveness between different countries could be interesting future work.
\graphicspath{{Pics/}}

\newpage\section{Centers of inside and outside}

\impden{Incenter and Co.}{
	Let $\triangle ABC$ be an ordinary triangle, $I$ is its incenter, $D, E, F$ are the touch points of the incircle with $BC, CA, AB$, and $D', E', F'$ are the reflections of $D, E, F$ wrt $I$. Let the $I_a, I_b, I_c$ excircles touch $BC, CA, AB$ at $D_1, E_1, F_1$.\\
	Let $M_a, M_b, M_c$ be the midpoints of the smaller arcs $BC, CA, AB$, and $M_A, M_B, M_C$ be the midpoints of the major arcs $BC, CA, AB$. $M$ is the midpoint of $BC$. Let $A'$ be the antipode of $A$ wrt $\odot ABC$.\\
	Let $(I_a)$ touch $BC, CA, AB$ at $D_A, E_A, F_A$. So, $D_A \equiv D_1$.\\
	Call $EF$ the `$A$-tangent line', and $DE, DF$ similarly. And call $E_AF_A$ the `$A_A$-tangent line'.\\
	\figdf{.8}{incenter_main}{All the primary points related to the incenter and the excenters}
}
\newpage
\begin{minipage}{.59\linewidth}
	\lem{Antipode and Incenter}{
		$A'I, \odot ABC, \odot AEIF$ are concurrent at $Y_A$. And $Y_A, D, M_a$ are collinear.
	}
	\lem{}{
		If $D_H$ is the point on $EF$ with $DD_H \perp EF$, then $D_H, I, A'$ are collinear.
	}
	\lem{}{
		\[\frac{FD_H}{D_HE} = \frac{BD}{DC}\]
	}
	\vspace{2em}
	\lem{Arc Midpoint as Centers}{
		\[M_AE_1 = M_AF_1\quad M_BF_1 = M_BD_1\quad M_CD_1 = M_CE_1\]
		Moreover, $I$ and $O$ are the orthocenter and the nine-point center of $\triangle I_aI_bI_c$.
	}
	\vspace{4em}
	\lem{Incircle Touchpoint and Cevian}{
		Let $AX$ be a cevian and let $I_1, I_2$ be the incircles of $\triangle ABX, \triangle ACX$. Then $D, I_1, I_2, X$ are concyclic. And the other common tangent of $\odot I_1$ and $\odot I_2$ goes through $D$.
	}
\end{minipage}\hfill%
\begin{minipage}{.39\linewidth}
	\figdf{.9}{antipode_incenter}{\autoref{lemma:Antipode and Incenter}}
	\figdf{.9}{excenter_touchpoint_bigarc-midpoints}{\autoref{lemma:Arc Midpoint as Centers}}
	\figdf{.9}{incircle_touchpoint_and_cevian}{\autoref{lemma:Incircle Touchpoint and Cevian}}
\end{minipage}

\newpage
\begin{minipage}{.5\linewidth}
	\lem{Apollonius Circle and Incenter, ISL 2002 G7}{
		Let $\omega_a$ be the circle that goes through $B, C$ and is tangent to $(I)$ at $X$. \hl{Then $XD', EF, BC$ are concurrent and $X, D, I_a$ are collinear.} The same properties hold if the roles of the incenter and the excenter are swapped.
		\begin{itemize}[left=0pt, itemsep=0pt]
			\item The circle $BXC$ is tangent to $(I)$
			\item $X$ lies on the Apollonius Circle of $(B, C; D, G)$.
			\item $XD$ bisects $\angle BXC$.
		\end{itemize}
	}
	\lem{Line parallel to BC through I}{
		Let $E, F$ be the intersections of the $B, C$ angle bisectors with $AC, AB$. Then the tangent to $\odot ABC$ at $A$, $EF$ and the line through $I$ parallel to $BC$ are concurrent.
	}
	\lem{Midline Concurrency with Incircle Touchpoints}{
		$AI$, the $B, B_A$-tangent lines and the $C$-mid-line are concurrent. And, if the concurrency point is $S$, then $CS\perp AI$.
	}
	\theo{}{Paul Yiu Theorem}{
		The $B$-tangent line, the $C_A$-tangent line, and $AH$ are concurrent.
	}
	\lem{Concurrent Lines in Incenter}{
		Let $AD \cap (I) = G, AD' \cap (I) = H$. Let the line through $D'$ parallel to $BC$ meet $AB, AC$ at $B', C'$. Then $AM,\ EF,\ GH,\ DD',\ BC',\ CB'$ are concurrent.
}\label{lemma:concurrent_lines_in_incenter}
\end{minipage}\hfill%
\begin{minipage}{.48\linewidth}
	\figdf{.9}{InExLemma3}{\autoref{lemma:Apollonius Circle and Incenter, ISL 2002 G7}}
	\figdf{.9}{BC_parallel_through_I}{\autoref{lemma:Line parallel to BC through I}}
	\figdf{.8}{InExLemma4}{
		\autoref{lemma:Midline Concurrency with Incircle Touchpoints} \&
		\autoref{theorem:Paul Yiu Theorem}
	}
\end{minipage}

\figdf{.5}{concurrent_lines_in_incenter}{
	\autoref{lemma:Concurrent Lines in Incenter}: The lines are concurrent.
}

\begin{minipage}{.5\linewidth}
	\lem{Insimilicenter}{
		The \emph{insimilicenter} is the positive homothety center of the circumcircle and the incircle. It is also the \emph{isogonal conjugate of the Nagel Point} wrt $\triangle ABC$.
	}
	\begin{solution}
		Let $T$ be the $A$-mixtilinear touch point. If $AT\cap \left(I\right) = A'$, then it suffices to show that the arc $A'B'$ has angle $\angle C$, where $A'B'\parallel AB$.
	\end{solution}
\end{minipage}\hfill%
\begin{minipage}{.49\linewidth}
	\figdf{.9}{insimilicenter}{}
\end{minipage}

\prob{}
{Application of Apollonius Circle and Incenter Lemma}{}{
	Let $ABC$ be a triangle with incircle $(I)$, and let the $A$-excircle $(I_a)$ touch $BC$ at $M$. $IM$ intersects $(I_a)$ at the second point $X$. Similarly, we get $Y$, $Z$. Prove that $AX$, $BY$, $CZ$ are concurrent.\\
	\href{http://artofproblemsolving.com/community/c6h1595900p9909100}{Extension, by buratinogigle}: Triangles $ABC$ and $XYZ$ are homothetic with center $I$, the incenter of $ABC$. The excircles touch $BC,$ $CA,$ $AB$ at $D,$ $E,$ $F.$ $XD,$ $YE,$ $ZF$ meet the excircles again at $U,$ $V,$ $W.$ Prove that $AU,$ $BV,$ $CW$ are concurrent.
}

\newpage
\den{Isodynamic Points}{
	Let $ABC$ be a triangle, and let the internal and external bisectors of $\angle A$ meet line $BC$ at $X, Y$. Call $\omega_a$ the circumcircle of $\triangle AXY$. Define $\omega_b, \omega_c$ similarly. The first and second isodynamic points are the points where the three circles $\omega_a, \omega_b, \omega_c$ meet. That is, these two points are the intersections of the three Apollonius circles. These two points satisfy the following relations:
	\begin{enumerate}
		\item $PA\sin A = PB\sin B = PC\sin C$
		\item They are the isogonal conjugates of the Fermat Points, and they lie on the `Brocard Axis'
	\end{enumerate}
	\figdf{.6}{Isodynamic_Points}{}
}
\theo{https://en.wikipedia.org/wiki/Isodynamic_point}{Pedal Triangles of Isodynamic Points}{
	Prove that the pedal triangles of the isodynamic points are equilateral triangles. Also, inverting around either isodynamic point transforms $\triangle ABC$ into an equilateral triangle.
}
\prob{https://artofproblemsolving.com/community/c6h1568534p9617561}{China TST 2018 T1P3}{EM}{
	Circle $\omega$ is tangent to sides $AB$, $AC$ of triangle $ABC$ at $D$, $E$ respectively, such that $D\neq B$, $E\neq C$ and $BD+CE<BC$. $F$, $G$ lie on $BC$ such that $BF=BD$, $CG=CE$. Let $DG$ and $EF$ meet at $K$. $L$ lies on the minor arc $DE$ of $\omega$, such that the tangent to $\omega$ at $L$ is parallel to $BC$. Prove that the incenter of $\triangle ABC$ lies on $KL$.
}
\solu{
	Use \autoref{lemma:Collinearity with antipode and center} in the touch triangle of $\omega$.
}
\prob{}
{}{E}{
	Given a triangle $ABC$ with circumcircle $\Gamma$. Points $E$ and $F$ are the feet of the angle bisectors from $B$ and $C$, $I$ is the incenter and $K$ is the intersection of $AI$ and $EF$. Suppose that $N$ is the midpoint of arc $BAC$. Circle $\Gamma$ intersects the $A$-median and the circumcircle of $AEF$ for the second time at $X$ and $S$.
	Let $S'$ be the reflection of $S$ across $AI$ and $J'$ be the second intersection of the circumcircle of $AS'K$ and $AX$. Prove that the quadrilateral $TJ'IX$ is cyclic.
}
\begin{solution}[Reim and lemmas]
	Since $NI\cap \odot ABC=T$, the mixtilinear touch point, if we can show that $AT\parallel IJ'$, we will be done. Instead of working with $S'$ and $J'$, we reflect them back and work with $S, J$. Then we need to prove that $IJ\parallel AD'$, where $D'$ is the reflection of $D$ over $I$.
	\figdf{.7}{mixti_symmedian}{}
	From \autoref{lemma:Line parallel to BC through I} we know that the $A$-symmedian, $EF$, $IM$ are concurrent at a point $P$. We prove that $P \equiv J$. For that we need to show that $P$ lies on $\odot AKS$.\\
	If $\odot AKP\cap AB, AC = U, V$, it is sufficient to prove that
	\[\frac{UF}{FB} = \frac{VE}{EC}\]
	We have:
	\begin{align*}
		\frac{UF}{KU} = \frac{\sin FAP}{\sin KFA} &\quad\frac{VE}{KV} = \frac{\sin EAP}{\sin KEA}\\[1em]
		\therefore \frac{UF}{VE} &=\frac{\sin FAP}{\sin EAP} \frac{\sin KEA}{\sin KFA} =\frac{\sin CAM}{\sin BAM}\frac{AF}{AE}\\[1em]
		&=\frac{BA}{CA}\frac{AF}{AE}=\frac{BA}{AE}\frac{AF}{CA} =\frac{BC}{EC}\frac{CF}{BC}\\[1em]
		&=\frac{BF}{EC}
	\end{align*}
\end{solution}

\begin{minipage}{.55\linewidth}
	\prob{https://artofproblemsolving.com/community/c6t45786f6h1618685}{Vietnamese TST 2018 P6.a}{M}{
		Triangle $ABC$ is inscribed in $(O)$ and its $A$-excircle $(I_a)$ touches $AB,\ BC,\ AC$ at $F,\ D,\ E$, resp. $M$ is the midpoint of $BC$. The circle with diameter $MI_a$ cuts $DE,\ DF$ at $K,\ H$. Prove that $(BDK),\ (CDH)$ have a common point on $(I_a)$.
	}
	\vspace{2em}
	\prob{}
	{After Inverting Around D}{}{
		$MD$ is a line, $I_a$ is an arbitrary point such that $DI_a\perp MD$. $l$ is the perpendicular bisector of $DI_a$. $F, E$ are arbitrary points on $l$. $B=I_aF\cap MD,\ C=I_aE\cap MD,\ H=FD\cap MI_a,\ K=DE\cap MI_a$. Then $BK, CH, l$ are concurrent.
	}
	\begin{solution}
		It is straightforward using Pappus's Theorem on the lines $BDC$ and $HI_aK$.
	\end{solution}
\end{minipage}\hfill%
\begin{minipage}{.4\linewidth}
	\figdf{}{Vietnamese_TST_2018_P6_a_problem}{}
	\figdf{}{Vietnamese_TST_2018_P6_a_inv}{After inverting around $D$}
\end{minipage}

\begin{solution}[Synthetic: Length Chase]
	\sollem{
		Let $G, H, B', C'$ be defined in the same way as in Lemma 3.2. Prove that $F$ lies on the radical axis of $\odot D'GI$ and $\odot D'C'H$. By extension, prove that $B$ lies on the radical axis of $\odot D'B'I$ and $\odot D'C'H$.
	}\label{problem:vietTST2018P6.a}
	\figdf{.7}{Vietnamese_TST_2018_P6_a_modified}{\hrf{problem:vietTST2018P6.a}{Vietnamese TST 2018 P6.a}}
	We prove the first part; the second part follows using spiral similarity. Suppose $K\in FD\cap \odot KDI$. Due to the spiral similarity on $\odot KDI, \odot (I)$, we have $\triangle GFK \sim \triangle GD'I$, which implies:
	\[\frac{FK}{GF}=\frac{ID}{GD'} \implies FK = ID\frac{GF}{GD'}\]
	Now, if $KDCE$ is to be cyclic, we need $\triangle HFK \sim \triangle HDC$. So we need
	\[\frac{FK}{HF}=\frac{DC}{HD}\implies FK=DC\frac{HF}{HD}\]
	Combining the two equations:
	\[\frac{GF}{GD'}\cdot \frac{ID}{DC}=\frac{HF}{HD}\]
	Now, using Ptolemy's theorem in $\square FDEH$, we have
	\begin{align*}
		FD\cdot EH + DE\cdot FH &= DH\cdot EF\\
		EH \cdot \frac{FD}{FH} + DE &= EF \cdot\frac{DH}{FH}\\
		2\ \frac{DE}{EF} &= \frac{DH}{FH}
	\end{align*}
	Similarly, from $\square FGED'$ we get
	\[2\ \frac{D'E}{EF} = \frac{GD'}{FG}\]
	Combining these two equations gives us the desired result.
\end{solution}
\begin{minipage}{.45\linewidth}
\gene{https://artofproblemsolving.com/community/c374081h1619335}{Vietnamese TST 2018 P6.a Generalization}{
Let $ABC$ be a triangle. The points $D,$ $E,$ $F$ are on the lines $BC,$ $CA,$ $AB$ respectively.
The circles $(AEF),$ $(BFD),$ $(CDE)$ have a common point $P.$
A circle $(K)$ passes through $P,$ $D$ and meets $DE,$ $DF$ again at $Q,$ $R$ respectively.
Prove that the circles $(DBQ),$ $(DCR)$ and $(DEF)$ are coaxial.
}
\begin{solution}[Inversion]
Invert around $ D $, and use Pappus's Theorem as in\\
\autoref{problem:Vietnamese TST 2018 P6.a}.
\end{solution}
\end{minipage}\hfill%
\begin{minipage}{.52\linewidth}
\figdf{}{Vietnamese_TST_2018_P6_a_gene}{\hrf{VNTST2018P6a_Gene}{Vietnamese TST 2018 P6.a Generalization}}
\end{minipage}
\rem{
The synthetic solution of \autoref{problem:Vietnamese TST 2018 P6.a} can't be reproduced here, perhaps because we no longer have $ A, P, D $ collinear, and we don't get the harmonic quadrilaterals either.
}
\theo{https://en.wikipedia.org/wiki/Poncelet's_closure_theorem}{Poncelet's Porism}{
Poncelet's porism (sometimes referred to as Poncelet's closure theorem) states that whenever a polygon is inscribed in one conic section and circumscribes another one, the polygon must be part of an infinite family of polygons that are all inscribed in and circumscribe the same two conics.
}
\prob{https://artofproblemsolving.com/community/c6h1181536p5720184}{IMO 2013 P3}{M}{
Let the excircle of triangle $ABC$ opposite the vertex $A$ be tangent to the side $BC$ at the point $A_1$. Define the points $B_1$ on $CA$ and $C_1$ on $AB$ analogously, using the excircles opposite $B$ and $C$, respectively. Suppose that the circumcentre of triangle $A_1B_1C_1$ lies on the circumcircle of triangle $ABC$. Prove that triangle $ABC$ is right-angled.
}
\solu{
Straightforward use of \autoref{lemma:excenter_touchpoint_bigarc-midpoints}.
}
\prob{https://artofproblemsolving.com/community/c74453h1225408_some_geometric_problems}{buratinogigle's proposed problems for the Saudi Arabia team 2015}{E}{
Let $ABC$ be an acute triangle with $AB < AC$, inscribed in circle $(O)$. The bisector of $\angle BAC$ cuts $(O)$ again at $D$. $E$ is the reflection of $B$ through $AD$. $DE$ cuts $BC$ at $F$. Let $(K)$ be the circumcircle of triangle $BEF$. $BD, EA$ cut $(K)$ again at $M, N$, resp. Prove that $\angle BMN = \angle KFM$.
}
\fig{.5}{SATST2015proposed_by_bura/derakynay1134-8}{}
\prob{https://artofproblemsolving.com/community/c6h54506p340041}{USAMO 1999 P6}{E}{
Let $ABCD$ be an isosceles trapezoid with $AB \parallel CD$. The inscribed circle $\omega$ of triangle $BCD$ meets $CD$ at $E$. Let $F$ be a point on the (internal) angle bisector of $\angle DAC$ such that $EF \perp CD$. Let the circumscribed circle of triangle $ACF$ meet line $CD$ at $C$ and $G$. Prove that the triangle $AFG$ is isosceles.
}
\prob{https://artofproblemsolving.com/community/c6h1619730p10134424}{Serbia 2018 P1}{E}{
Let $\triangle ABC$ be a triangle with incenter $I$. Points $P$ and $Q$ are chosen on segments $BI$ and $CI$ such that $2\angle PAQ=\angle BAC$. If $D$ is the touch point of the incircle and side $BC$, prove that $\angle PDQ=90^\circ$.
}
\solu{Straightforward trig application.}
\prob{https://artofproblemsolving.com/community/c6h1628676p10217476}{Iran TST T2P5}{E}{
Let $\omega$ be the circumcircle of isosceles triangle $ABC$ ($AB=AC$). Points $P$ and $Q$ lie on $\omega$ and $BC$ respectively such that $AP=AQ$. $AP$ and $BC$ intersect at $R$.
Prove that the tangents from $B$ and $C$ to the incircle of $\triangle AQR$ (different from $BC$) are concurrent on $\omega$.
}
\prob{}{}{M}{
Let a point $ P $ inside $ \triangle ABC $ be such that the following condition is satisfied:
\[\frac{AP+BP}{AB} = \frac{BP+CP}{BC} = \frac{CP+AP}{CA}\]
Lines $ AP, BP, CP $ intersect the circumcircle again at $ A', B', C' $.
Prove that $ \triangle ABC $ and $ \triangle A'B'C' $ have the same incircle.
}
\solu{
After finding the point $ P $, we get a lot of ideas.
\figdf{.8}{itti_same_incircle}{Two lines are parallel}
}
\prob{https://artofproblemsolving.com/community/c6h1623012p10163453}{Iran TST 2018 P3}{EM}{
In triangle $ABC$ let $M$ be the midpoint of $BC$. Let $\omega$ be a circle inside $ABC$ that is tangent to $AB,AC$ at $E,F$, respectively. The tangents from $M$ to $\omega$ meet $\omega$ at $P,Q$ such that $P$ and $B$ lie on the same side of $AM$. Let $X \equiv PM \cap BF $ and $Y \equiv QM \cap CE $. If $2PM=BC$ prove that $XY$ is tangent to $\omega$.
}
\solu{Work backwards.}
\prob{https://artofproblemsolving.com/community/c6h1623417p10167655}{Iran TST 2018 P4}{E}{
Let $ABC$ be a triangle ($\angle A\neq 90^\circ$). $BE,CF$ are the altitudes of the triangle. The bisector of $\angle A$ intersects $EF,BC$ at $M,N$. Let $P$ be a point such that $MP\perp EF$ and $NP\perp BC$. Prove that $AP$ passes through the midpoint of $BC$.
}
\prob{https://artofproblemsolving.com/community/c6h1662902p10561154}{APMO 2018 P1}{E}{
Let $H$ be the orthocenter of the triangle $ABC$. Let $M$ and $N$ be the midpoints of the sides $AB$ and $AC$, respectively. Assume that $H$ lies inside the quadrilateral $BMNC$ and that the circumcircles of triangles $BMH$ and $CNH$ are tangent to each other. The line through $H$ parallel to $BC$ intersects the circumcircles of the triangles $BMH$ and $CNH$ in the points $K$ and $L$, respectively. Let $F$ be the intersection point of $MK$ and $NL$ and let $J$ be the incenter of triangle $MHN$. Prove that $F J = F A$.
}
\prob{https://artofproblemsolving.com/community/c6h155710p875026}{ISL 2006 G6}{E}{
Circles $ w_{1}$ and $ w_{2}$ with centres $ O_{1}$ and $ O_{2}$ are externally tangent at point $ D$ and internally tangent to a circle $ w$ at points $ E$ and $ F$ respectively. Line $ t$ is the common tangent of $ w_{1}$ and $ w_{2}$ at $ D$. Let $ AB$ be the diameter of $ w$ perpendicular to $ t$, so that $ A, E, O_{1}$ are on the same side of $ t$. Prove that lines $ AO_{1}$, $ BO_{2}$, $ EF$ and $ t$ are concurrent.
}
\solu{\hrf{lemma:concurrent_lines_in_incenter}{This}}
\lem{Tangential Quadrilateral Incenters}{
Let $ ABCD $ be a tangential quadrilateral. Let $ I_1, I_2 $ be the incenters of $ \triangle ABD, \triangle BCD $. Then $ (I_1), (I_2) $ are tangent to $ BD $ at the same point.
\fig{.5}{tangential_quad_incenters}{}
}
\prob{https://artofproblemsolving.com/community/c6h21758p140322}{Four Incenters in a Tangential Quadrilateral}{E}{
Let $ABCD$ be a quadrilateral. Denote by $X$ the point of intersection of the lines $AC$ and $BD$. Let $I_{1}$, $I_{2}$, $I_{3}$, $I_{4}$ be the centers of the incircles of the triangles $XAB$, $XBC$, $XCD$, $XDA$, respectively. Prove that the quadrilateral $I_{1}I_{2}I_{3}I_{4}$ has a circumscribed circle if and only if the quadrilateral $ABCD$ has an inscribed circle.
}
\solu{
There is a lot going on in this figure: first the points $ J_1, J_2 $ and $ M $, then $ K $, then $ \angle I_4ME = \angle I_3ME $.
Connect them with the \hyperref[Incircle Touchpoint and Cevian]{lemma}.
\figdf{1}{tangential_quad_four_incenters}{}
}
\prob{}{Geodip}{E}{
Let $ G $ be the centroid. Dilate $ \odot I $ from $ G $ with ratio $ -2 $ to get $ \odot I'$. Then $ \odot I' $ is tangent to the circumcircle.
\figdf{.5}{nice_prob_by_geodip}{}
}
\theo{http://mathworld.wolfram.com/FuhrmannCircle.html}{Fuhrmann Circle}{
Let $ X', Y', Z' $ be the midpoints of the arcs not containing $ A, B, C $ of $ \odot ABC $. Let $ X, Y, Z $ be the reflections of these points on the sides. Then $ \odot XYZ $ is called the \textbf{Fuhrmann Circle}.
The orthocenter $ H $ and the Nagel point $N$ lie on this circle, and $ HN $ is a diameter of this circle.
Furthermore, $ AH, BH, CH $ cut the circle for the second time at a distance $ 2r $ from the vertices.
\figdf{1}{fuhrmann_circle}{Fuhrmann Circle}
}
\prob{https://artofproblemsolving.com/community/c6h213443p1178421}{Iran TST 2008 P12}{E}{
In the acute-angled triangle $ ABC$, $ D$ is the intersection of the altitude passing through $ A$ with $ BC$ and $ I_a$ is the excenter of the triangle with respect to $ A$. $ K$ is a point on the extension of $ AB$ from $ B$, for which $ \angle AKI_a=90^\circ+\frac 34\angle C$. $ I_aK$ intersects the extension of $ AD$ at $ L$. Prove that $ DI_a$ bisects the angle $ \angle AI_aB$ iff $ AL=2R$. ($ R$ is the circumradius of $ ABC$)
}
\solu{}
\lem{Polars in Incircle}{
In the acute-angled triangle $ABC$, $I$ is the incenter and $DEF$ is the touch triangle.
Let $EF$ meet $\odot ABC$ at $P, Q$ such that $E$ lies between $F$ and $Q$.
If $QD$ meets $\odot ABC$ for the second time at $U$, prove that $AU$ is the polar line of $P$ wrt $\left(I\right)$.
}
\proof{[Projective]
We have:
\begin{align*}
\left(B, C; D, EF\cap BC\right) &= Q\left(B, C; P, U\right) \\
&= A(E, F; P, U)\\
&= -1
\end{align*}
This means $\left(P, AU\cap EF; E, F\right)$ is harmonic.
\figdf{.5}{ISL2019G6_lem}{}
}
\prob{}{ISL 2019 G6}{HM}{
In the acute-angled triangle $ABC$, $I$ is the incenter and $DEF$ is the touch triangle.
Let $EF$ meet $\odot ABC$ at $P, Q$ such that $E$ lies between $F$ and $Q$.
Prove that
\[\angle APD + \angle AQD = \angle PIQ\]
}
\begin{solution}
Let $X = PI \cap AU$. From \autoref{lemma:Polars in Incircle}, we have that $AFPX$ is cyclic. And so
\[\angle FAX = \angle FIX\]
\figdf{.5}{ISL2019G6}{}
After some more angle chasing, we reach our goal.
\end{solution}
\newpage
\subsection{Feuerbach Point}
\den{Feuerbach Point}{
The point where the nine point circle touches the incircle is called the \emph{Feuerbach Point}.
}
\thmbox{}
{It Exists!}{
The nine point circle touches the incircle and the excircles.
}
\begin{prooof}[Inversion]
Let $D, D'$ be the incircle and the $A$-excircle touchpoints with $BC$. Let $M, N, P$ be the midpoints of $BC, CA, AB$ resp. Also let $B'C'$ be the reflection of $BC$ over $AI$. Now let $N', P'$ be the intersection points of $MN, MP$ with $B'C'$. \\
We invert around $M$ with radius $MD = \frac{b-c}{2}$. We prove that the image of $\odot MNP$ after the inversion is $B'C'$. And since $(I)$ and $(I_a)$ are orthogonal to $(M)$, we will be done.
\figdf{.8}{feurbach_inversion}{}
Wlog, assume that $b \ge c$.
\[\begin{aligned}
B'N &= AB' - AN = c - \frac{b}{2} &\quad NN' &= B'N \cdot\frac{AC'}{AB'}\\[.5em]
MN' &= MN - NN' &\quad &= \frac{2c-b}{2}\cdot \frac{b}{c}\\[1em]
MN'\cdot MN &= \frac{c}{2}\left(\frac{c}{2} - \frac{b}{c}\cdot\frac{2c-b}{2}\right)
&= \left(\frac{b-c}{2}\right)^2
\end{aligned}\]
Which concludes the proof.
\end{prooof}
\thmbox{}
{Construction of Feuerbach Point}{
Let $D$ be the touch point of the incircle with $BC$.
Let $M, L$ be the midpoints of $BC$ and $AI$. Let $D_1, D'$ be the reflections of $D$ over $I$ and $M$. Let $K, P$ be the reflections of $D_1, D$ over $L$ and $AI$. Let $Q$ be the intersection of $AD_1$ with the incircle.\\
Then $D_1K$ and $MP$ meet at $F$ on the incircle, which is the Feuerbach point of $\triangle ABC$.
}
\figdf{.7}{feurbach_construction}{}
\begin{prooof}
It is easy to see that the tangents at $P$ and $M$ to the incircle and the nine point circle are parallel. So if we let $F = MP\cap \left(I\right)$, then $F$ is the Feuerbach point. \\
And since $MQ$ is tangent to $(I)$, we also have $\left(F, P; D, Q\right)=-1$. But notice that
\[\begin{aligned}
D_1(A, I; L, P) &= D_1(K, P; Q, I)\\
&=-1
\end{aligned}\]
So $D_1K$ passes through $F$.
\end{prooof}
\newpage
\subsection{Assorted Diagrams}
\figdf{.7}{circles_with_arc_midpoints}{The smaller circles touch the side and the circumcircle}
{ "alphanum_fraction": 0.6279725578, "avg_line_length": 36.6086286595, "ext": "tex", "hexsha": "5d8ccdd4ee2613f2676b0e4fb6bbc71d10b58e9c", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_path": "geo/sec3_in-ex-circle.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_path": "geo/sec3_in-ex-circle.tex", "max_line_length": 103, "max_stars_count": 48, "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_path": "geo/sec3_in-ex-circle.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "num_tokens": 8015, "size": 23759 }
%\documentclass[12pt]{article} %\documentclass[12pt]{scrartcl} \documentclass{hitec} % contained in texlive-latex-extra \settextfraction{0.9} % indent text \usepackage{csquotes} \usepackage[hidelinks]{hyperref} % doi links are short and usefull? \hypersetup{% colorlinks=true, linkcolor=blue, urlcolor=magenta } \urlstyle{rm} \usepackage[english]{babel} \usepackage{mathtools} % loads and extends amsmath \usepackage{amssymb} % packages not used %\usepackage{graphicx} %\usepackage{amsthm} %\usepackage{subfig} \usepackage{bm} \usepackage{longtable} \usepackage{booktabs} \usepackage{ragged2e} % maybe use \RaggedRight for tables and literature? \usepackage[table]{xcolor} % for alternating colors \rowcolors{2}{gray!25}{white} \renewcommand\arraystretch{1.3} %%% reset bibliography distances %%% \let\oldthebibliography\thebibliography \let\endoldthebibliography\endthebibliography \renewenvironment{thebibliography}[1]{ \begin{oldthebibliography}{#1} \RaggedRight % remove if justification is desired \setlength{\itemsep}{0em} \setlength{\parskip}{0em} } { \end{oldthebibliography} } %%% --- %%% %%%%%%%%%%%%%%%%%%%%%definitions%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\eps}{\varepsilon} \renewcommand{\d}{\mathrm{d}} \newcommand{\T}{\mathrm{T}} \renewcommand{\vec}[1]{\boldsymbol{#1}} \newcommand{\dx}{\,\mathrm{d}x} %\newcommand{\dA}{\,\mathrm{d}(x,y)} %\newcommand{\dV}{\mathrm{d}^3{x}\,} \newcommand{\dA}{\,\mathrm{dA}} \newcommand{\dV}{\mathrm{dV}\,} \newcommand{\Eins}{\mathbf{1}} \newcommand{\ExB}{$\bm{E}\times\bm{B} \,$} \newcommand{\GKI}{\int d^6 \bm{Z} \BSP} \newcommand{\GKIV}{\int dv_{\|} d \mu d \theta \BSP} \newcommand{\BSP}{B_{\|}^*} \newcommand{\GA}[1]{\langle #1 \rangle} \newcommand{\Abar}{\langle A_\parallel \rangle} %Vectors \newcommand{\bhat}{\bm{\hat{b}}} \newcommand{\bbar}{\overline{\bm{b}}} \newcommand{\chat}{\bm{\hat{c}}} \newcommand{\ahat}{\bm{\hat{a}}} \newcommand{\xhat}{\bm{\hat{x}}} \newcommand{\yhat}{\bm{\hat{y}}} \newcommand{\zhat}{\bm{\hat{z}}} \newcommand{\Xbar}{\bar{\vec{X}}} \newcommand{\phat}{\bm{\hat{\perp}}} \newcommand{\that}{\bm{\hat{\theta}}} \newcommand{\eI}{\bm{\hat{e}}_1} \newcommand{\eII}{\bm{\hat{e}}_2} \newcommand{\ud}{\mathrm{d}} %Derivatives etc. \newcommand{\pfrac}[2]{\frac{\partial#1}{\partial#2}} \newcommand{\ffrac}[2]{\frac{\delta#1}{\delta#2}} \newcommand{\fixd}[1]{\Big{\arrowvert}_{#1}} \newcommand{\curl}[1]{\nabla \times #1} \newcommand{\np}{\nabla_{\perp}} \newcommand{\npc}{\nabla_{\perp} \cdot } \newcommand{\nc}{\nabla\cdot } \newcommand{\GAI}{\Gamma_{1}^{\dagger}} \newcommand{\GAII}{\Gamma_{1}^{\dagger -1}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%DOCUMENT%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{The feltorSH project} \maketitle \begin{abstract} This is a program for 2d thermal electrostatic full-F gyro-fluid simulations for blob studies. 
\end{abstract}
\section{Equations}
The four-field model evolves electron density \(n_e\), ion gyro-center density \(N_i\), (perpendicular) electron temperature \(t_e\) and (perpendicular) ion gyro-center temperature \(T_i\):
\begin{align}
\frac{\partial}{\partial t} n_e =&- \frac{1}{B} \left[\phi,n_e \right]_{\perp} - n_e \mathcal{K}\left(\phi \right) + \frac{1}{e} \mathcal{K}\left(t_{e\perp} n_e \right) + \Lambda_{n_e}, \\
\frac{\partial}{\partial t} N_i =& -\frac{1}{B} \left[\psi_i ,N_i \right]_{\perp} +\frac{1}{B} \left[\ln T_{i\perp},N_i \chi_i \right]_{\perp} - N_i \mathcal{K}\left(\psi_i + \chi_i\right) + N_i \chi_i \mathcal{K}\left(\ln T_{i\perp} - \ln N_i \right) \\ \nonumber
& - \frac{1}{e} \mathcal{K}\left(T_{i\perp} N_i \right) + \Lambda_{N_i} , \\
\frac{\partial }{\partial t} t_{e\perp} =& -\frac{1}{B} \left[ \phi , t_{e\perp} \right]_{\perp} -t_{e\perp} \mathcal{K} (\phi ) + \frac{3 t_{e\perp}}{ e } \mathcal{K} (t_{e\perp}) + \left(\frac{t_{e\perp}^2}{e} \right)\mathcal{K} \left( \ln n_e \right) + \Lambda_{t_{e\perp} }, \\
\frac{\partial }{\partial t} T_{i\perp} =& -\frac{1}{B} \left[ \psi_i + 2\chi_i ,T_{i\perp} \right]_{\perp} - \frac{T_{i\perp} \chi_i}{B } \left[\ln \chi_i - \ln T_{i\perp} , \ln N_i\right]_{\perp} -T_{i\perp} \mathcal{K} (\psi_i + 3\chi_i) \\ \nonumber
& - \left(\frac{3 T_{i\perp} }{e} - \chi_i \right)\mathcal{K} (T_{i\perp} ) - \left(\frac{T_{i\perp} ^2}{e} + T_{i\perp} \chi_i \right)\mathcal{K} \left( \ln N_i \right) + \Lambda_{T_{i\perp} }.
\end{align}
The latter equations are coupled by the thermal version of the nonlinear polarisation equation, which reads:
\begin{align}
n_e -\Gamma_{1,i}^\dagger N_i &= \vec{\nabla} \cdot\left(\frac{N_i}{\Omega_i B} \vec{\nabla}_\perp \phi\right).
\end{align}
The generalized electric potential, the FLR correction due to a dynamic gyro-radius, and the $\vec{E}\times\vec{B}$ drift velocity are defined by
\begin{align}
\psi_i = \Gamma_{1,i} \phi - \frac{m u_E^2 }{2 q}, \\
\chi_i := \Gamma_{2,i} \phi , \\
\vec{u}_E = \frac{1}{B} \vec{\hat{b}} \times \vec{\nabla} \phi .
\end{align}
The gyro-averaging operators read:
\begin{align}\label{eq:gamma1def}
\Gamma_{1,i} f&:= \frac{1}{1-\frac{\rho_i^2}{2}\vec{\nabla}_\perp^2} f. &
\Gamma_{1,i}^\dagger f&:= \frac{1}{1-\vec{\nabla}_\perp^2\frac{\rho_i^2}{2}} f.
\end{align}
\begin{align}\label{eq:gamma2def}
\Gamma_{2,i} f&:= \frac{\frac{\rho_i^2}{2}\vec{\nabla}_\perp^2}{\left(1-\frac{\rho_i^2}{2}\vec{\nabla}_\perp^2\right)^2} f.&
\Gamma_{2,i}^\dagger f&:= \frac{\vec{\nabla}_\perp^2\frac{\rho_i^2}{2}}{\left(1-\vec{\nabla}_\perp^2\frac{\rho_i^2}{2}\right)^2} f.
\end{align}
\subsection{Perpendicular dissipation}
The perpendicular diffusive terms are given by
\begin{align}\label{eq:perpdiffNT}
\Lambda_{n_e} &= -\nu_\perp \vec{\nabla}_\perp^4 n_e, &
\Lambda_{N_i} &= -\nu_\perp \vec{\nabla}_\perp^4 N_i, &
\Lambda_{t_e} &= -\nu_\perp \vec{\nabla}_\perp^4 t_e, &
\Lambda_{T_i} &= -\nu_\perp \vec{\nabla}_\perp^4 T_i.
\end{align}
\subsection{Energy theorem}
The energy theorem is given by the explicit expressions
\begin{align}\label{eq:energytheorem}
%\qquad
\mathcal{E} =& \int d\vec{x} \left(n_e t_{e} +N_i T_{i} + \frac{m_i N_i u_E^2}{2} \right) , \\
%\qquad
\Lambda = & \int d\vec{x} \bigg[\left(t_{e} - e \phi \right) \Lambda_{n_e} +\left(T_{i} + e \psi_i \right) \Lambda_{N_i} + n_e \Lambda_{t_{e}}+ \left(1+\frac{e}{T_{i}} \chi_i\right)N_i \Lambda_{T_{i}}\bigg].
\end{align}
The energy consists of the internal (thermal) energy densities for electrons and ions and the $\vec{E}\times\vec{B}$ energy density.
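Assuming the standard form of the gyro-fluid energy theorem (the relation itself is only implied above), the two expressions are connected by the balance
\begin{align}
\frac{\d}{\d t} \mathcal{E} &= \Lambda ,
\end{align}
i.e.\ the total energy $\mathcal{E}$ changes only through the dissipative contributions collected in $\Lambda$.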
The dissipative terms of Equation~\eqref{eq:perpdiffNT} enter the energy theorem via $\Lambda$. \\
\subsection{Slab magnetic field}\label{sec:slabapprox}
The slab approximation presented here employs Cartesian coordinates \((x,y,z)\) with a slab magnetic field unit vector \(\vec{\hat{b}} = \vec{\hat{e}}_z\).
The inverse magnetic field magnitude varies radially as
\begin{align}
\frac{1}{B}&= \frac{1}{B_0} \left( 1+x/R_0\right) ,
\end{align}
with \(R_0\) the distance to the outboard mid-plane and \(B_0\) the reference magnetic field magnitude.
For a slab magnetic field the curvature \(\vec{\kappa}\) vanishes and the curvature operator \(\mathcal{K} (f)\) reduces to
\begin{align}
\mathcal{K} (f) &=\mathcal{K}_{\vec{\nabla} B} (f) =-\frac{1}{B_0 R_0 } \frac{\partial }{\partial y }f.
\end{align}
We note here that no factor of two arises, as is the case in the low beta approximation.
In the slab approximation the following relations hold
\begin{align}
\vec{\nabla} \cdot \vec{\mathcal{K}}_{\kappa} &= \vec{\nabla} \cdot \vec{\mathcal{K}}_{\vec{\nabla} B} = \vec{\nabla} \cdot \vec{ \mathcal{K}} = \vec{\nabla} \cdot \vec{\hat{b}} = 0, &
\vec{\kappa} \cdot \vec{\mathcal{K}}_{\vec{\nabla} B} &= 0.
\end{align}
and energetic consistency is assured in the curvature parts.
\section{Initialization}
\subsection{\(\phi=0\) initialization}
\begin{align}
n_e =\Gamma_{1,i}^\dagger N_i, \\
p_{i} = \left(\Gamma_{1,i}^{\dagger} + \Gamma_{2,i}^{\dagger} \right)P_{i} .
\end{align}
For the ion gyro-center density and perpendicular temperature we mimic an initial blob by a Gaussian of the form
\begin{eqnarray}
% \qquad
 N_{i}\left(\vec{x},0\right) = n_{e0}\left[1+A \exp{\left(-\frac{\left(\vec{x}-\vec{x}_0\right)^2}{2\sigma^2}\right)}\right], \\
% \qquad
 T_{i}\left(\vec{x},0\right) = t_{i0}\left[1+A \exp{\left(-\frac{\left(\vec{x}-\vec{x}_0\right)^2}{2\sigma^2}\right)} \right],
\end{eqnarray}
\section{Numerical methods}
Discontinuous Galerkin on a structured grid:
\begin{longtable}{ll>{\RaggedRight}p{7cm}}
\toprule
\rowcolor{gray!50}\textbf{Term} & \textbf{Method} & \textbf{Description} \\
\midrule
coordinate system & Cartesian 2D & equidistant discretization of $[0,l_x] \times [0,l_y]$, equal number of Gaussian nodes in x and y \\
matrix inversions & conjugate gradient & Use previous two solutions to extrapolate initial guess and $1/\chi$ as preconditioner \\
\ExB advection & Poisson & \\
curvature terms & direct & cf.
the slab approximation (\autoref{sec:slabapprox}) \\
time & Karniadakis multistep & 3rd order explicit, diffusion 2nd order implicit \\
\bottomrule
\end{longtable}
\subsection{Input file structure}
Input file format: json
%%This is a booktabs table
\begin{longtable}{llll>{\RaggedRight}p{7cm}}
\toprule
\rowcolor{gray!50}\textbf{Name} & \textbf{Type} & \textbf{Example} & \textbf{Default} & \textbf{Description} \\
\midrule
n & integer & 3 & - &\# Gaussian nodes in x and y \\
Nx & integer &100& - &\# grid points in x \\
Ny & integer &100& - &\# grid points in y \\
dt & float &1.0& - &time step in units of $\rho_{s0}/c_{s0}$ \\
n\_out & integer &3 & - &\# Gaussian nodes in x and y in output \\
Nx\_out & integer &100& - &\# grid points in x in output fields \\
Ny\_out & integer &100& - &\# grid points in y in output fields \\
itstp & integer &2 & - & steps between outputs \\
maxout & integer &100& - & \# outputs excluding first \\
eps\_pol & float &1e-6 & - & accuracy of polarisation solver \\
eps\_gamma & float &1e-7 & - & accuracy of $\Gamma_1$ and $\Gamma_2$\\
eps\_time & float &1e-10 & - & accuracy of implicit time-stepper \\
curvature & float &0.00015& - & magnetic curvature $\kappa:=\rho_{s0}/R_0$ \\
tau & float &1 & - & $\tau = T_i/T_e$ \\
nu\_perp & float &5e-3 & - & perpendicular viscosity $\nu$ \\
amplitude & float &1.0 & - & amplitude $A$ of the blob \\
sigma & float &10 & - & blob radius $\sigma$ \\
posX & float &0.3 & - & blob x-position in units of $l_x$, i.e. $X = p_x l_x$\\
posY & float &0.5 & - & blob y-position in units of $l_y$, i.e. $Y = p_y l_y$ \\
lx & float &200 & - & $l_x$ \\
ly & float &200 & - & $l_y$ \\
bc\_x & string & "DIR" & - & boundary condition in x (one of PER, DIR, NEU, DIR\_NEU or NEU\_DIR) \\
bc\_y & string & "PER" & - & boundary condition in y (one of PER, DIR, NEU, DIR\_NEU or NEU\_DIR) \\
initmode & integer & 0 & - & \(n_e = \Gamma_1^\dagger N_i\) (0), \(n_e = N_i\) (1) (cf. initialization)\\
tempmode & integer & 0 & - &thermal (0), isothermal (1)\\
flrmode & integer & 1 & - &const FLR (0), dyn. FLR (1)\\
\bottomrule
\end{longtable}
The default value is taken if the value name is not found in the input file. If there is no default and the value is not found, the program exits with an error message.
%..................................................................
\begin{thebibliography}{1}
\bibitem{held16b}
M. Held, M. Wiesenberger, J. Madsen, A. Kendl, Nuclear Fusion 56, 126005 (2016)
\end{thebibliography}
%..................................................................
\end{document}
{ "alphanum_fraction": 0.6387152034, "avg_line_length": 45.7843137255, "ext": "tex", "hexsha": "61723fc2a04306a4692ad3dc70dd4dee082f7838", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2021-09-03T08:12:25.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-27T13:18:11.000Z", "max_forks_repo_head_hexsha": "c70bc6bb43f39261f6236df88e16610d08cb98ca", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mrheld/feltor", "max_forks_repo_path": "src/feltorSH/feltorSH.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "a566f8a9003ade437e093334877f839f3dfd0260", "max_issues_repo_issues_event_max_datetime": "2021-02-22T19:11:42.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-18T16:06:15.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RaulGerru/FELTOR_Raul", "max_issues_repo_path": "src/feltorSH/feltorSH.tex", "max_line_length": 198, "max_stars_count": 18, "max_stars_repo_head_hexsha": "a566f8a9003ade437e093334877f839f3dfd0260", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RaulGerru/FELTOR_Raul", "max_stars_repo_path": "src/feltorSH/feltorSH.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-06T08:50:48.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-28T14:34:29.000Z", "num_tokens": 4323, "size": 11675 }
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={What is happening on Twitter? A framework for student research projects with tweets}, pdfauthor={Frederick J. Boehm and Bret M. Hanlon}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm]{geometry} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} 
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{longtable,booktabs} % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} %\usepackage{lineno} %\linenumbers \usepackage{setspace} \doublespacing \usepackage{caption} \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{xcolor} %\usepackage[utf8x]{inputenc} \usepackage{rotating} \usepackage{wrapfig} \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{colortbl} \usepackage{pdflscape} \usepackage{tabu} \usepackage{threeparttable} \usepackage{threeparttablex} \usepackage[normalem]{ulem} \usepackage{makecell} \usepackage{xcolor} \newlength{\cslhangindent} \setlength{\cslhangindent}{1.5em} \newenvironment{cslreferences}% {\setlength{\parindent}{0pt}% \everypar{\setlength{\hangindent}{\cslhangindent}}\ignorespaces}% {\par} \title{What is happening on Twitter? A framework for student research projects with tweets} \author{Frederick J. Boehm and Bret M. Hanlon} \date{} \begin{document} \maketitle \hypertarget{abstract}{% \section{Abstract}\label{abstract}} We draw on our experiences with mentoring two students to develop a framework for undergraduate research projects with Twitter data. Leveraging backward design principles, we share our learning objectives and rubric for summative assessments. To illustrate the value of Twitter as a data source, we detail methods for collecting and analyzing tweets. We conclude by emphasizing how Twitter text analysis projects enable students to formulate original research questions, collect and analyze data, and communicate findings and their implications. \hypertarget{introduction}{% \section{Introduction}\label{introduction}} Twitter has profoundly changed how we communicate. 
In only 280 characters, users instantly contribute to public conversations on politics, current events, sports, media, and many other topics. Recent development of accessible statistical methods for large-scale text analysis now enable instructors to use tweets as contemporary pedagogical tools in guiding undergraduate research projects. We guided two statistics students in their senior research projects. Both students used tweets to address novel research questions. We share products of their research in supplementary files. Because their data are no longer available, we present as a case study one analysis with tweets from May 2020. We share our data and computer code to encourage others to undertake tweet text analysis research. We also describe methods for creating a collection of tweets. Some social media data, including tweets from Twitter, is available through website application programming interfaces (APIs). By way of a streaming API, Twitter shares a sample of approximately one percent of all tweets during an API query time period (``Sampled stream'' 2019). Any Twitter user can freely access this one percent sample, whereas access to a larger selection is available to researchers for a fee. Through our work with tweets, we demonstrate that Twitter data is a rich source of new data science research questions. Box (1976) described a positive feedback loop for the interplay of discipline-specific research and quantitative methods research. The two components, ``science'' and ``statistics'' in the language of Box (1976), iteratively fuel research questions in each other. A new statistical method enables new discipline-specific questions to be addressed, while a new scientific question motivates new data science methods. We describe below two data science research questions that mentored students addressed with tweets. Studies of Twitter conversations have yielded valuable insights into modern culture. Using large collections of tweets, scholars have investigated diverse research questions, including the inference of relationships and social networks among Twitter users (Lin et al. 2011); authorship of specific tweets when multiple persons share a single account (Robinson 2016); and rhetoric in recruiting political supporters (Pelled et al. 2018; Wells et al. 2016). Recognizing the potential utility of tweets for data science research and teaching, we created a collection of tweets over time by repeated querying of the Twitter streaming API. We envisioned this collection as a rich resource for data science research projects. This vision grew into two mentored undergraduate student research projects in the 2015-2016 academic year. In line with recent calls for students to work with real data (Carver et al. 2016; Nolan and Temple Lang 2010), our collection of tweets has served as a valuable resource in our mentoring of undergraduate data science research. Working with real data allows students to develop proficiency not only in statistical analysis, but also in related data science skills, including data transfer from online sources, data storage, using data from multiple file formats, and communicating findings. Collaboratively asking and addressing novel questions with our collection of tweets gave mentored students opportunities to develop competency in all of these areas. While our tweet collection enables us to address many possible research questions, the dynamic content of tweets over time particularly piqued our interest. 
We hypothesized that high-profile social media events would generate a high volume of tweets, and that we would detect social media events through changes in tweet topic content over time. We discuss in detail below one approach to studying this question. In the sections that follow, we detail our backward design-inspired approach to writing learning objectives, preliminary research mentoring considerations, data science methods for collecting and analyzing tweets, analysis results, and ideas on assessment and advanced topics. \hypertarget{structure-of-mentored-research}{% \section{Structure of mentored research}\label{structure-of-mentored-research}} \hypertarget{backward-design}{% \subsection{Backward design}\label{backward-design}} Backward design principles guided our planning and informed the writing of learning objectives (Wiggins and McTighe 2005). Following Wiggins and McTighe (2005), we began by listing what students, at the end of their thesis research, should be able to do, understand, and know. We then classified each of these items into one of three categories: enduring understanding, important to know and do, and worth being familiar with (Wiggins and McTighe 2005) (Table \ref{tab:circle-table}). While other researchers may categorize these skills differently, our assignments reflect our projects' priorities. Nearly all of the skills in Table \ref{tab:circle-table} are transferable. They apply not merely to thesis projects, but also to data science research in general. \begin{table} \caption{\label{tab:circle-table}Classifying project skills} \centering \begin{tabular}[t]{|>{}l|>{}l|} \hline Skill & Category\\ \hline Communicate results in speaking and in writing & Enduring understanding\\ \hline Formulate a research question & Enduring understanding\\ \hline Develop data science strategies to address research question & Enduring understanding\\ \hline Use text analysis tools to analyze tweets & Enduring understanding\\ \hline Translate analysis results into scientific conclusions & Enduring understanding\\ \hline Describe assumptions and limitations of statistical analyses & Enduring understanding\\ \hline Use Github to share code and documentation & Important to know and do\\ \hline Use git for version control & Important to know and do\\ \hline Use data visualization to clarify and inform quantitative analyses & Important to know and do\\ \hline Incorporate supplementary data sources into analysis & Important to know and do\\ \hline Acquire data from internet sources & Important to know and do\\ \hline Structure research project files as R package & Worth being familiar with\\ \hline Use cluster computing as needed & Worth being familiar with\\ \hline \end{tabular} \end{table} In particular, our project skills reflect elements of ``data acumen'', as defined in a report from the U.S.A. National Academies of Sciences, Engineering, and Medicine (NASEM) (National Academies of Sciences et al. 2018). For example, our skills ``Acquire data from internet sources'' and ``Develop data science strategies to address a research question'' implicitly require a trainee, in the language of the NASEM report (National Academies of Sciences et al. 2018), to ``ingest, clean, and then wrangle data into reliable and useful forms'' and to ``combine many existing programs or codes into a workflow that will accomplish some important task''. Additionally, we tailored our list of project skills with the assumption that students would work in R. 
R use is not required for such projects, but it is a convenience for many of our students. \hypertarget{learning-objectives}{% \subsection{Learning objectives}\label{learning-objectives}} We translated our prioritized list of skills that students should be able to do, understand, and know into learning objectives (Table \ref{tab:circle-table}). We phrased learning objectives in a manner that enabled their subsequent assessment (Table \ref{tab:summative}) with formative and summative strategies. These were our four learning objectives: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Write R code to perform text analysis of large volumes of tweets (R Core Team 2019). \item Communicate results in a written report and poster presentation. \item Translate statistical findings into scientific conclusions. \item Develop data science strategies to address a scientific research question. \end{enumerate} Having decided on four learning objectives for students, we next established mentoring relationships with each student. \hypertarget{preliminary-research-mentoring-considerations}{% \subsection{Preliminary research mentoring considerations}\label{preliminary-research-mentoring-considerations}} We developed research goals with students in a series of brainstorming sessions and discussions. As trainees began their senior research projects, we spoke in detail about both their research interests and goals and their experience with data analysis software. When possible, we encouraged them to incorporate their existing academic interests into their senior research projects. In our statistics department, most students learn elementary R computing skills through class assignments. Some students, by concentrating in computer science, learn other data analysis software packages, such as Python. Those who do undergraduate statistics research often learn advanced topics in R computing, such as R package assembly, documentation, and testing. Many develop expertise in linux computing and cluster computing, too. One of our two students had extensive experience in statistical computing. In addition to R computing skills, she also worked in Python and excelled in shell scripting. She first learned Python in computer science courses. Our second student had extensive experience with R from his statistics courses. His background enabled him to write an R package as part of his senior project. To encourage further development of R computing skills in our two students, we guided them towards the free, online books ``R for Data Science'' (Wickham and Grolemund 2016) and ``Advanced R'' (Wickham 2019). \hypertarget{student-research-interests-and-goals}{% \subsection{Student research interests and goals}\label{student-research-interests-and-goals}} Our two students had diverse interests, and, initially, they had little experience in articulating research goals. We engaged each in a brainstorming session to clarify their interests and encourage them to think critically about research goals under the time constraints of their academic schedules. We briefly describe the two student projects to give readers a better sense of research possibilities with tweets. Our first student examined relationships over time between stock market index prices and tweet sentiment. For each day in her 12-month study period, she identified stock market-related tweets with a key word search. With the complete texts of stock market-related tweets for each day, she calculated a daily sentiment score and plotted it over time. 
Her sentiment score reflected presence of emotion-associated terms (\emph{e.g}., ``happy'', ``sad'', ``mad'', ``scared'') in tweet texts. Days with more net positive emotion words in the collected tweets received a higher (positive) daily sentiment score, while days with more net negative words received a negative daily sentiment score. For her final project, she presented plots over time of her daily sentiment scores and daily closing prices of the Standard and Poor's 500 index. She also explored time series analysis methods to quantify relationships between index prices and sentiment scores. Our second student developed social media event detection methods with topic models. He hypothesized that tweet content changes over time, and that we might detect these changes by comparing inferred tweet topics from distinct time periods. To validate his hypothesis, he examined tweet content before, during, and after the National Football League's Super Bowl game in 2015. He reasoned that because the Super Bowl is widely discussed on Twitter, we might detect Super Bowl-related topics from tweets sent during the game, but that the football-related topics would be short-lived in the continuous Twitter stream. We discovered evidence to support his ideas, and we ultimately presented our findings at international and local research meetings. Below, we share a case study on a different, widely discussed topic which is analyzed using an approach similar to that from the Super Bowl tweets. \hypertarget{time-period}{% \subsection{Time period}\label{time-period}} Our two statistics students conducted their research projects during the 2015-2016 academic year. We recommend a full academic year for projects of this magnitude, although a summer or one-semester project is possible. Our students presented their findings at the statistics department's undergraduate poster session near the end of the 2015-2016 academic year (Supplementary files). We present below reproducible R code for analyzing data from May 2020. While these are not the same data that our students analyzed in 2015, the methods and code are very similar to that of our second student's project. \hypertarget{case-study-methods}{% \section{Case study methods}\label{case-study-methods}} To illustrate the value of Twitter data and to encourage readers to envision other uses for tweets, we present below a reproducible case study. It is essentially a reproduction of our second student's project, but at a distinct time period. In it, we aim to detect a social media event by examining tweet topic content over time. We use Latent Dirichlet Allocation (LDA) models (Blei et al. 2003) to infer topics on three consecutive days centered on Memorial Day 2020. We chose this example case study, instead of the student projects, because of limited data availability for the student projects. Despite this, the case study illustrates the strategy and methods for one student project. Below, we discuss case study design, tweet collection, and tweet structure, before turning to quantitative methods for the case study. \hypertarget{case-study-design}{% \subsection{Case study design}\label{case-study-design}} We sought to validate our hypothesis that we could detect a social media event by examining tweet topic content at distinct time periods. As a proof of principle of our event detection strategy, we analyzed tweets before, during, and after Memorial Day (May 25, 2020). We fitted LDA models for each of three distinct five-minute periods. 
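
Before turning to the case study, we sketch the flavor of the first student's daily sentiment scoring in R. The code below is an illustration only: the input data frame \texttt{tweet\_words} (one row per word per tweet, with a \texttt{date} column) and the choice of the Bing lexicon are our assumptions here, not a reproduction of her exact script.

\begin{Shaded}
\begin{Highlighting}[]
library(dplyr)
library(tidyr)
library(tidytext)

# tweet_words: one row per (date, word), e.g. produced by unnest_tokens()
daily_sentiment <- tweet_words %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(date, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(score = positive - negative)
\end{Highlighting}
\end{Shaded}

The resulting daily scores can then be plotted against, for example, daily closing prices of a stock market index, as in her project.
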
The first period began at noon Eastern time on May 24, 2020. Subsequent time periods started 24 and 48 hours later. We defined each time period to be a single collection, or corpus, of tweets. \hypertarget{collecting-tweets-over-time}{% \subsection{Collecting tweets over time}\label{collecting-tweets-over-time}} We include here instructions for creating a tweet collection. First, we created a new account on Twitter. With these user credentials, we used the R package \texttt{rtweet} to query the API (Kearney 2019). We used the linux \texttt{crontab} software to repeatedly execute R code to submit API queries. Each query lasted five minutes and produced a text file of approximately 130 MB. We timed the API queries so that there was no time lag between queries. We stored tweets resulting from API queries in their native JSON format. Setting up the query task with \texttt{crontab} is straightforward. On our computer, with Ubuntu 20.04 linux operating system, we opened a terminal and typed \texttt{crontab\ -e}. This opened a text file containing user-specified tasks. We added the following line to the bottom of the file before saving and closing the text file. \begin{Shaded} \begin{Highlighting}[] \ExtensionTok{*/5}\NormalTok{ * * * * R {-}e }\StringTok{\textquotesingle{}rtweet::stream\_tweets(timeout = (60 * 5), } \StringTok{parse = FALSE, file\_name = paste0("\textasciitilde{}/work/mentoring/mentoring{-}framework/data/",} \StringTok{lubridate::now(), "{-}tweets"))\textquotesingle{}} \end{Highlighting} \end{Shaded} Readers may need to slightly amend the above line to conform to requirements of their operating system's software. Readers who use Mac OS may proceed as we did, while those with Windows operating systems may consider using the R package \texttt{taskscheduleR} to schedule API queries via the Windows task scheduler (Wijffels and Belmans 2018). \hypertarget{querying-twitter-api-to-get-complete-tweets}{% \subsection{Querying Twitter API to get complete tweets}\label{querying-twitter-api-to-get-complete-tweets}} Twitter API use agreements forbid users from sharing complete API query results. However, Twitter permits users to share tweet identification numbers. With a tweet identification number, a user may query a Twitter API to obtain complete tweet data. In our experience, this process is incomplete; that is, many tweet identification numbers submitted to the Twitter API return no data. Additionally, some tweet identification numbers return data on the first query, but don't return data on subsequent queries. This complicates our goal of making all analyses computationally reproducible and motivates our decision to share the tweet IDs of those tweets that we actually analyzed (Supplementary files). Should a reader wish to reproduce our analysis, we anticipate that she will get complete tweet data for all or most of these tweet identification numbers from the API. We provide R code for this task in the supplementary files. \hypertarget{tweet-structure}{% \subsection{Tweet structure}\label{tweet-structure}} Tweets are available from the Twitter API as Javascript Object Notation (JSON) objects (``Introducing JSON'' 2020). Every tweet consists of multiple key-value pairs. The number of fields per tweet depends on user settings, retweet status, and other factors (``Introduction to Tweet JSON'' 2020). The 31 tweet key-value pairs belong to 12 distinct classes (Supplementary files). The classes are either vectors - numeric, logical, or character - or arrays assembled from the vector classes. 
Below is an example of a tweet in JSON format. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{\{} \StringTok{"created\_at"}\NormalTok{: }\StringTok{"Thu Apr 06 15:24:15 +0000 2017"}\NormalTok{,} \StringTok{"id\_str"}\NormalTok{: }\StringTok{"850006245121695744"}\NormalTok{,} \StringTok{"text"}\NormalTok{: }\StringTok{"1\textbackslash{}/ Today we\textbackslash{}u2019re sharing our vision for the future of the Twitter API platform!"}\NormalTok{,} \StringTok{"user"}\NormalTok{: }\KeywordTok{\{} \StringTok{"id"}\NormalTok{: }\ExtensionTok{2244994945}\NormalTok{,} \StringTok{"name"}\NormalTok{: }\StringTok{"Twitter Dev"}\NormalTok{,} \StringTok{"screen\_name"}\NormalTok{: }\StringTok{"TwitterDev"}\NormalTok{,} \StringTok{"location"}\NormalTok{: }\StringTok{"Internet"}\NormalTok{,} \StringTok{"url"}\NormalTok{: }\StringTok{"https:\textbackslash{}/\textbackslash{}/dev.twitter.com\textbackslash{}/"}\NormalTok{,} \StringTok{"description"}\NormalTok{: }\StringTok{"Your official source for Twitter Platform news, updates \& events. } \StringTok{ Need technical help? Visit https:\textbackslash{}/\textbackslash{}/twittercommunity.com\textbackslash{}/ \textbackslash{}u2328\textbackslash{}ufe0f } \StringTok{ \#TapIntoTwitter"} \KeywordTok{\}}\NormalTok{,} \StringTok{"place"}\NormalTok{: }\KeywordTok{\{} \KeywordTok{\}}\NormalTok{,} \StringTok{"entities"}\NormalTok{: }\KeywordTok{\{} \StringTok{"hashtags"}\NormalTok{:}\BuiltInTok{ [} \NormalTok{ ],} \StringTok{"urls"}\NormalTok{: [} \NormalTok{ \{} \StringTok{"url"}\NormalTok{: }\StringTok{"https:\textbackslash{}/\textbackslash{}/t.co\textbackslash{}/XweGngmxlP"}\NormalTok{,} \StringTok{"unwound"}\NormalTok{: \{} \StringTok{"url"}\NormalTok{: }\StringTok{"https:\textbackslash{}/\textbackslash{}/cards.twitter.com\textbackslash{}/cards\textbackslash{}/18ce53wgo4h\textbackslash{}/3xo1c"}\NormalTok{,} \StringTok{"title"}\NormalTok{: }\StringTok{"Building the Future of the Twitter API Platform"} \NormalTok{ \}} \NormalTok{ \}} \NormalTok{ ],} \StringTok{"user\_mentions"}\NormalTok{: [ } \NormalTok{ ]} \NormalTok{ \}} \NormalTok{\}} \end{Highlighting} \end{Shaded} Our analyses use three fields from each tweet: date (``created\_at''), tweet identifier (``id\_str''), and tweet text (``text''). The ``created\_at'' field is a character string containing the date and time of the tweet. Every tweet has a unique identifier, the ``id\_str'' value. The ``text'' field contains the unicode representation of the message. After creating a text file with tweet JSON, our next step involved reading and parsing tweets with the R packages \texttt{rtweet} (Kearney 2019) and \texttt{tidytext} (Silge and Robinson 2016). \hypertarget{parsing-tweet-text}{% \subsection{Parsing tweet text}\label{parsing-tweet-text}} The next task is to wrangle the tweet JSON data into a structure suitable for LDA modeling. We used functions from the \texttt{rtweet} R package to parse tweet JSON into a data frame. We then divided tweet text into words with functions from the \texttt{tidytext} R package. We discarded commonly used ``stop words'' and emojis. LDA model fitting requires that the corpus be organized as a document-term matrix. In a document-term matrix, each row corresponds to a single document (a single tweet), and each column is a single term (or word). Each cell contains a count (the number of occurrences of a term in the specified document). We created a document-term matrix with the R function \texttt{cast\_dtm} from the \texttt{tidytext} package. 
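
To make these steps concrete, the sketch below strings together the pieces described in this subsection, starting from one streamed JSON file and ending with a fitted topic model. It is a minimal illustration rather than our exact analysis script: the file name, the English stop-word list, the column names (which follow the version of \texttt{rtweet} we used), and the choice of the \texttt{topicmodels} package (whose default fitting method is variational EM) are assumptions on our part.

\begin{Shaded}
\begin{Highlighting}[]
library(rtweet)
library(dplyr)
library(tidytext)
library(topicmodels)

# parse the streamed JSON into a data frame (file name is a placeholder)
tweets <- parse_stream("2020-05-24-tweets.json")

# one row per (tweet, word), with stop words removed
tweet_words <- tweets %>%
  select(status_id, text) %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  count(status_id, word)

# document-term matrix: one row per tweet, one column per term
dtm <- cast_dtm(tweet_words, document = status_id, term = word, value = n)

# ten-topic LDA model, then per-topic word probabilities ("beta")
lda_fit <- LDA(dtm, k = 10, control = list(seed = 2020))
topic_terms <- tidy(lda_fit, matrix = "beta")
\end{Highlighting}
\end{Shaded}

Filtering \texttt{topic\_terms} to the ten most probable words per topic yields bar charts like those shown in the case study results.
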
\hypertarget{latent-dirichlet-allocation}{% \subsection{Latent Dirichlet Allocation}\label{latent-dirichlet-allocation}} LDA is a statistical method for inferring latent (unobservable) topics (or themes) from a large corpus (or collection) of documents (Blei et al. 2003). We pretend that there's an imaginary process for creating documents in the corpus. For each document, we choose a discrete distribution over topics. For example, some tweets from Memorial Day may refer to the holiday. This may constitute one topic in the corpus. Having chosen a distribution over topics, we then select document words by first drawing a topic from the distribution over topics, then drawing a word from the chosen topic. In mathematical notation, we write the generative process assumed by LDA (Blei et al. 2003): \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Choose \(N \sim \text{Poisson}(\xi)\)\\ \item Choose \(\theta \sim \text{Dirichlet}(\alpha)\)\\ \item For each word, \(w_n\) with \(n = 1, ..., N\), \end{enumerate} \begin{enumerate} \def\labelenumi{\alph{enumi}.} \tightlist \item Choose a topic \(z_n \sim \text{Multinomial}(\theta)\)\\ \item Choose a word \(w_n\) from \(p(w_n | z_n, \beta)\), a multinomial probability \end{enumerate} \(\beta\) refers to the \(k\) by \(V\) matrix of topic-specific word probabilities, where \(k\) is the number of topics and \(V\) is the size of the vocabulary, \emph{i.e.}, the number of unique words in the corpus. The goal for LDA is to infer both the distribution over topics and the topics (Blei et al. 2003). A topic, in this setting, is a distribution over the vocabulary (the collection of all words in a corpus). Inference for latent Dirichlet allocation models is performed by either sampling from the posterior distribution or through variational methods. Researchers have devised a variety of Gibbs sampling techniques for these models (Porteous et al. 2008). Variational methods, while using approximations to the posterior distribution, offer the advantage of computational speed (Blei et al. 2017). We used variational methods below in our case study. \hypertarget{case-study-results}{% \section{Case study results}\label{case-study-results}} We identified the top ten most probable terms for each of ten topics in our models (Figures \ref{fig:may24}, \ref{fig:may25}, \ref{fig:may26}). We plotted the within-topic word probabilities as bar graphs. We see that topic-specific word probabilities seldom exceed 0.05. We also note that some words are heavily weighted in multiple topics. This observation complicates semantic topic interpretation. We also caution that the results display expletives (that appeared on Twitter) and may be ``not suitable for work (NSFW)''. Instructors may apply a filter to remove common expletives before LDA modeling of tweets. \begin{figure} \includegraphics[width=29.17in]{../results/beta-2020-05-24} \caption{Top terms for LDA model from May 24, 2020. Results contain expletives and may be not suitable for work (NSFW).}\label{fig:may24} \end{figure} \begin{figure} \includegraphics[width=29.17in]{../results/beta-2020-05-25} \caption{Top terms for LDA model from May 25, 2020 (Memorial Day).}\label{fig:may25} \end{figure} \begin{figure} \includegraphics[width=29.17in]{../results/beta-2020-05-26} \caption{Top terms for LDA model from May 26, 2020}\label{fig:may26} \end{figure} Assigning meaning to topics is an active research area (Chang et al. 2009). 
Since our interest is in the transient appearance of a new topic, we don't attempt to assign meaning to every topic in our models. Instead, we anticipate that discussions on Twitter are a mixture of topics that endure over weeks or months and subjects that appear and disappear quickly. We see that topic 7 from May 25 has several words that suggest Memorial Day: memorial, remember, honor, country. A similar topic is not seen on May 24 or May 26. Some topics persist, with distinct word probabilities, across the three days. For example, we see that President Trump features prominently in all three models' results. On May 26, topic 10 reflects discussion of the Amy Cooper Central Park incident (\url{https://www.nytimes.com/2020/05/26/nyregion/amy-cooper-dog-central-park.html}). The murder of George Floyd occurred on May 25, 2020. Our last examined time period, from 12:00 pm to 12:05 pm (Eastern USA time zone) on May 26, occurred after Floyd's murder, yet we didn't detect this event in our ten-topic LDA model. Several considerations may account for this. While outrage at the murder eventually spread worldwide, there may have been few Floyd-related tweets during our collection time on May 26, less than 24 hours after the murder and video release. Had we extended our analysis to May 27 and beyond, we may have identified George Floyd-related topics. \hypertarget{assessment-of-learning-exploring-more-advanced-topics-and-concluding-remarks}{% \section{Assessment of learning, exploring more advanced topics, and concluding remarks}\label{assessment-of-learning-exploring-more-advanced-topics-and-concluding-remarks}} \hypertarget{assessment-of-learning}{% \subsection{Assessment of learning}\label{assessment-of-learning}} We examined student learning with both formative and summative assessments. We conducted formative assessments through weekly discussions with students. In these discussions, we developed action items to advance research progress and overcome challenges. We summatively assessed student achievement at the end of the academic year. Both students wrote a thesis and presented a poster to our statistics department. We asked questions at the poster session to probe student understanding and critically evaluated the theses. \begin{table} \caption{\label{tab:summative}Rubric for summative assessment of learning objectives.} \centering \fontsize{6}{8}\selectfont \begin{tabular}[t]{>{\raggedright\arraybackslash}p{10em}|>{\raggedright\arraybackslash}p{10em}|>{\raggedright\arraybackslash}p{10em}|>{\raggedright\arraybackslash}p{10em}|>{\raggedright\arraybackslash}p{10em}} \hline Learning objective & Assessment item & 2 points & 1 point & 0 points\\ \hline Write R code to perform text analysis of large volumes of tweets. & R code performs intended analyses & Code contains few or no bugs & Code contains one or more errors & Code contains many errors\\ \hline Write R code to perform text analysis of large volumes of tweets. & Uses literate programming tools, such as Sweave or knitr & Report is written using literate programming tools. It compiles easily when run by instructor. Time-consuming calculations are cached. & Report is written using literate programming tools, but compilation takes too long or fails. & Report is not written with literate programming tools.\\ \hline Write R code to perform text analysis of large volumes of tweets. 
& Uses git for version control & Log reveals regular commits with informative commit messages & Log reveals intermittent commits and uninformative commit messages & Doesn't use git.\\
\hline
Write R code to perform text analysis of large volumes of tweets. & Shares code and data via Github & Instructor easily clones repository from Github. Contains share-able data and instructions for getting other data to reproduce analysis. & One or more needed files is missing from repository. & Doesn't use Github.\\
\hline
Communicate results in a written report and poster presentation. & Organizes poster to highlight main points & When prompted, can describe main points in less than one minute. & Less fluid presentation with periods of silence or confusion. & Disorganized presentation.\\
\hline
Communicate results in a written report and poster presentation. & Accurately presents study and findings during poster session & Fluently describes background, study goals, study design, approach, data, findings, and conclusions & At least one section or its verbal explanation is incomplete. & At least one section is missing.\\
\hline
Communicate results in a written report and poster presentation. & Report structure mirrors a research manuscript & Contains abstract, introduction, methods, results, and discussion & At least one section is incomplete. & At least one section is missing.\\
\hline
Translate statistical findings into scientific conclusions. & Places statistical results in their scientific context & Demonstrates understanding of scientific context and integrates findings into it. & Incomplete scientific understanding or incomplete integration of findings. & Major gaps in scientific understanding or integration of findings.\\
\hline
Translate statistical findings into scientific conclusions. & Accurately portrays study limitations & Accurately describes, in writing and in speaking, limitations of the study & Incomplete or partially inaccurate description of limitations & Doesn't describe limitations.\\
\hline
Translate statistical findings into scientific conclusions. & Demonstrates familiarity with relevant literature & Fluent in both relevant data science literature and scientific literature. & Incomplete knowledge and understanding of relevant literature & Major gaps in knowledge and understanding\\
\hline
Develop data science strategies to address a scientific research question. & Presents an original research question & Presents, in writing and in speaking, a novel research question. Explains why it's novel, too. & Partially lacking in elements of the question's background or novelty. & Doesn't present an original question.\\
\hline
Develop data science strategies to address a scientific research question. & Effectively uses data visualizations & Visualizations highlight main points of report. & Incomplete or omitted visualizations. & Doesn't use visualizations.\\
\hline
Develop data science strategies to address a scientific research question. & Presents accurate scientific conclusions & Effectively translates analysis results into their scientific context. & Minor inaccuracy in translation of findings into scientific context. & Major errors in translation of results.\\
\hline
\end{tabular}
\end{table}

With future students, we will use a written rubric to evaluate theses (Table \ref{tab:summative}). We'll share the rubric with our students at the start of the academic year. With only minor modifications, the rubric may be suitable for projects that don't use tweets.
\hypertarget{exploring-more-advanced-topics}{% \subsection{Exploring more advanced topics}\label{exploring-more-advanced-topics}} Twitter data over time inspires a variety of research projects. Supplementing tweets with public data from other sources multiplies the possibilities. For example, one of our two students supplemented tweets with daily stock market index prices. She studied sentiment of finance-related tweets and daily stock market index closing prices (Supplementary files). LDA modeling and related methods are a major research area in the quantitative social sciences. Advanced students with interest in statistical computing might compare inferential methods for topic models. Those with interests in event detection and time series analysis could build on the findings of our student by explicitly accounting for topic evolution with dynamic topic models (Blei and Lafferty 2006). \hypertarget{concluding-remarks}{% \subsection{Concluding remarks}\label{concluding-remarks}} Our mentoring in data science aligns with others' calls to reconsider the role of computing in statistics and data science (Carver et al. 2016; Nolan and Temple Lang 2010). Hicks and Irizarry (2018) argue for incorporating three concepts into data science training: computing, connecting and creating. They use the terms ``connecting'' and ``creating'' to describe the processes of applying quantitative methods to real data and research questions and of formulating research questions, respectively. Our tweet analysis projects offer students opportunities in all three skills sets. Our students first formulated research questions, then collected and analyzed data to address the questions. Throughout the projects, students drew heavily on computing, both to acquire data and to analyze it. Tweet analysis gives students practical experience in the data science process of formulating a research question, gathering data to address it, summarizing the data, visualizing results, and communicating findings. Tweets over time are a rich, large, authentic data set that offers many opportunities. We provided instructions to enable readers to establish their own tweet collections. We also presented details for one analysis strategy. By considering first student research interests and integrating them with our senior thesis learning objectives, we successfully guided two undergraduate researchers in data science research with tweets. \hypertarget{acknowledgements}{% \section{Acknowledgements}\label{acknowledgements}} The authors thank Betsy Colby Davie and Rick Nordheim for helpful discussions and feedback on preliminary versions of the manuscript. We thank the special issue editors and anonymous reviewers for their constructive comments and suggestions. Finally, this work wouldn't have been possible without the keen and enthusiastic students, Jinyu Xia and Robert Turner. \hypertarget{references}{% \section{References}\label{references}} \hypertarget{refs}{} \begin{cslreferences} \leavevmode\hypertarget{ref-blei2017variational}{}% Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017), ``Variational inference: A review for statisticians,'' \emph{Journal of the American Statistical Association}, Taylor \& Francis, 112, 859--877. \leavevmode\hypertarget{ref-blei2006dynamic}{}% Blei, D. M., and Lafferty, J. D. (2006), ``Dynamic topic models,'' in \emph{Proceedings of the 23rd International Conference on Machine Learning}, pp. 113--120. \leavevmode\hypertarget{ref-blei2003latent}{}% Blei, D. M., Ng, A. Y., and Jordan, M. I. 
(2003), ``Latent Dirichlet Allocation,'' \emph{Journal of Machine Learning Research}, 3, 993--1022. \leavevmode\hypertarget{ref-box1976science}{}% Box, G. E. (1976), ``Science and statistics,'' \emph{Journal of the American Statistical Association}, Taylor \& Francis, 71, 791--799. \leavevmode\hypertarget{ref-carver2016guidelines}{}% Carver, R., Everson, M., Gabrosek, J., Horton, N., Lock, R., Mocko, M., Rossman, A., Roswell, G. H., Velleman, P., Witmer, J., and others (2016), ``Guidelines for assessment and instruction in statistics education (GAISE) college report 2016,'' AMSTAT. \leavevmode\hypertarget{ref-chang2009reading}{}% Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J. L., and Blei, D. M. (2009), ``Reading tea leaves: How humans interpret topic models,'' in \emph{Advances in Neural Information Processing Systems}, pp. 288--296. \leavevmode\hypertarget{ref-hicks2018guide}{}% Hicks, S. C., and Irizarry, R. A. (2018), ``A guide to teaching data science,'' \emph{The American Statistician}, Taylor \& Francis, 72, 382--391. \leavevmode\hypertarget{ref-json}{}% ``Introducing JSON'' (2020), \url{https://www.json.org/json-en.html}. \leavevmode\hypertarget{ref-tweet_json}{}% ``Introduction to Tweet JSON'' (2020), \url{https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/intro-to-tweet-json}. \leavevmode\hypertarget{ref-rtweet-package}{}% Kearney, M. W. (2019), ``rtweet: Collecting and analyzing Twitter data,'' \emph{Journal of Open Source Software}, 4, 1829. \url{https://doi.org/10.21105/joss.01829}. \leavevmode\hypertarget{ref-lin2011joint}{}% Lin, C. X., Mei, Q., Han, J., Jiang, Y., and Danilevsky, M. (2011), ``The joint inference of topic diffusion and evolution in social communities,'' in \emph{2011 IEEE 11th International Conference on Data Mining}, IEEE, pp. 378--387. \leavevmode\hypertarget{ref-national2018data}{}% National Academies of Sciences, Engineering, Medicine, and others (2018), \emph{Data science for undergraduates: Opportunities and options}, National Academies Press. \leavevmode\hypertarget{ref-nolan2010computing}{}% Nolan, D., and Temple Lang, D. (2010), ``Computing in the statistics curricula,'' \emph{The American Statistician}, Taylor \& Francis, 64, 97--107. \leavevmode\hypertarget{ref-pelled2018little}{}% Pelled, A., Lukito, J., Boehm, F., Yang, J., and Shah, D. (2018), ```Little Marco,'\,`Lyin'Ted,'\,`Crooked Hillary,' and the `Biased' media: How Trump used Twitter to attack and organize,'' in \emph{Digital Discussions}, Routledge, pp. 176--196. \leavevmode\hypertarget{ref-porteous2008fast}{}% Porteous, I., Newman, D., Ihler, A., Asuncion, A., Smyth, P., and Welling, M. (2008), ``Fast Collapsed Gibbs Sampling for Latent Dirichlet Allocation,'' in \emph{Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining}, pp. 569--577. \leavevmode\hypertarget{ref-r}{}% R Core Team (2019), \emph{R: A language and environment for statistical computing}, Vienna, Austria: R Foundation for Statistical Computing. \leavevmode\hypertarget{ref-drob}{}% Robinson, D. (2016), ``Text analysis of Trump's tweets confirms he writes only the (angrier) Android half,'' \url{http://varianceexplained.org/r/trump-tweets/}. \leavevmode\hypertarget{ref-tweet_stream}{}% ``Sampled stream'' (2019), \url{https://developer.twitter.com/en/docs/labs/sampled-stream/overview}. \leavevmode\hypertarget{ref-tidytext}{}% Silge, J., and Robinson, D. 
(2016), ``tidytext: Text mining and analysis using tidy data principles in R,'' \emph{JOSS}, The Open Journal, 1. \url{https://doi.org/10.21105/joss.00037}. \leavevmode\hypertarget{ref-wells2016trump}{}% Wells, C., Shah, D. V., Pevehouse, J. C., Yang, J., Pelled, A., Boehm, F., Lukito, J., Ghosh, S., and Schmidt, J. L. (2016), ``How Trump drove coverage to the nomination: Hybrid media campaigning,'' \emph{Political Communication}, Taylor \& Francis, 33, 669--676. \leavevmode\hypertarget{ref-wickham2019advanced}{}% Wickham, H. (2019), \emph{Advanced R}, CRC press. \leavevmode\hypertarget{ref-wickham2016r}{}% Wickham, H., and Grolemund, G. (2016), \emph{R for Data Science: Import, tidy, transform, visualize, and model data}, O'Reilly Media, Inc. \leavevmode\hypertarget{ref-wiggins2005understanding}{}% Wiggins, G., and McTighe, J. (2005), \emph{Understanding by Design}. \leavevmode\hypertarget{ref-taskscheduleR}{}% Wijffels, J., and Belmans, O. (2018), \emph{taskscheduleR: Schedule R Scripts and Processes with the Windows Task Scheduler}. \end{cslreferences} \newpage \hypertarget{supplementary-files}{% \section{Supplementary files}\label{supplementary-files}} \hypertarget{tweets-data-dictionary}{% \subsection{Tweets data dictionary}\label{tweets-data-dictionary}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item \href{https://github.com/fboehm/jse-2019/blob/master/data/tweets-data-dictionary.csv}{Data dictionary} \end{enumerate} \hypertarget{r-code-to-reproduce-the-case-study}{% \subsection{R code to reproduce the case study}\label{r-code-to-reproduce-the-case-study}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item \href{https://raw.githubusercontent.com/fboehm/jse-2019/master/Rmd/tweets.Rmd}{tweets.Rmd} \item \href{https://raw.githubusercontent.com/fboehm/jse-2019/master/Rmd/tweets-one.Rmd}{tweets-one.Rmd} \item \href{https://raw.githubusercontent.com/fboehm/jse-2019/master/Rmd/recover_tweets.R}{recover\_tweets.R} \end{enumerate} \hypertarget{student-projects}{% \subsection{Student projects}\label{student-projects}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Student 1 poster: \href{https://github.com/fboehm/jse-2019/blob/master/supplementary/Project_Poster.pdf}{Project\_Poster.pdf} \item Student 1 report: \href{https://github.com/fboehm/jse-2019/blob/master/supplementary/report.pdf}{report.pdf} \item Student 2 useR 2016 slides: \href{https://github.com/fboehm/jse-2019/blob/master/supplementary/user2016boehm.pdf}{user2016boehm.pdf} \item Student 2 poster: \href{https://github.com/fboehm/jse-2019/blob/master/supplementary/warfdiscovery2016boehm.tiff}{warfdiscovery2016boehm.tiff} \end{enumerate} \hypertarget{github-repository}{% \subsection{Github repository}\label{github-repository}} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item \url{https://github.com/fboehm/jse-2019} \end{enumerate} \end{document}
\subsection{Principled statistical inference}

\begin{frame}{Principle I: the sufficiency principle}
Sufficiency plays a central role in all of Statistics.
\begin{defn}[Sufficient statistic]
Let $x \sim f(x \mid \theta)$.
We say $T : \mathcal{X} \to \mathbb{R}$ is a \textbf{sufficient statistic} for the parameter $\theta$ if $\pr(X = x \mid T(x), \theta)$ is independent of $\theta$.
\end{defn}
This is the basis for a cornerstone of Statistics,
\begin{theo}[Factorisation theorem]
Under mild regularity conditions, we can write:
$$ f(x \mid \theta) = g(T(x) \mid \theta) h(x \mid T(x)).$$
\end{theo}
We can now state
\begin{idea}[Sufficiency principle (SP)]
\label{idea:SP}
For $x, y \in \mathcal{X}$, if $T$ is sufficient for $\theta$ and $T(x) = T(y)$, then $x$ and $y$ should lead to the same inferences about $\theta$.
\end{idea}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[allowframebreaks]{Principle II: the Likelihood principle}
The Likelihood Principle (LP) is a key concept in Statistics, in particular in Bayesian Statistics.
\begin{idea}[Likelihood Principle]
\label{idea:LP}
The information brought by an observation $x \in \mathcal{X}$ about a parameter $\theta \in \boldsymbol{\Theta}$ is \textbf{completely} contained in the likelihood function $l(\theta \mid x) \propto f(x \mid \theta)$.
\end{idea}
\begin{example}[Uma vez Flamengo...]
Suppose a pollster is interested in estimating the fraction $\theta$ of football fans that cheer for Clube de Regatas do Flamengo (CRF).
They survey $n=12$ people and get $x=9$ supporters and $y=3$ ``antis''.
Consider the following two designs:
\begin{itemize}
 \item[i)] Survey $12$ people and record the number of supporters;
 \item[ii)] Survey until they get $y=3$.
\end{itemize}
The likelihoods for both surveys are, respectively,
\begin{align*}
x \sim \operatorname{Binomial}(n, \theta) \implies l_1(\theta \mid x, n) &= \binom{n}{x} \theta^{x}(1-\theta)^{n-x},\\
n \sim \operatorname{Negative\,Binomial}(y, 1-\theta) \implies l_2(\theta \mid n, y) &= \binom{n-1}{y-1} (1-\theta)^{y} \theta^{n-y},
\end{align*}
hence
\begin{equation*}
l_1(\theta) \propto l_2(\theta) \propto \theta^{9}(1-\theta)^3.
\end{equation*}
Therefore, we say that these two experiments bring exactly the same information about $\theta$.
\end{example}
A generalised version of the LP can be stated as follows:
\begin{theorem}[\textbf{Likelihood Proportionality Theorem}~\citep{Goncalves2019}]
Let $\Theta$ be a nonempty set and $\mathcal{P} = \{ P_\theta; \theta \in \Theta \}$ be a family of probability measures on $(\Omega, \mathcal{A})$ and $\nu_1$ and $\nu_2$ be $\sigma$-finite measures on $(\Omega, \mathcal{A})$.
Suppose $P \ll \nu_1$ and $P \ll \nu_2$ for all $P \in \mathcal{P}$.
Then there exists a measurable set $A \in \mathcal{A}$ such that $P_\theta(A) = 1$ for all $\theta \in \Theta$ and there exist $f_{1,\theta} \in \left[ \frac{dP_\theta}{d\nu_1}\right]$ and $f_{2,\theta} \in \left[ \frac{dP_\theta}{d\nu_2}\right]$ and a measurable function $h$ such that
\begin{equation*}
f_{1,\theta}(\omega) = h(\omega)f_{2,\theta}(\omega), \forall\, \theta \in \Theta\, \forall\, \omega \in A.
\end{equation*}
\end{theorem}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Principle III: stopping rule principle}
A subject of contention between inference paradigms is the role of stopping rules in the inferences drawn.
\begin{idea}[Stopping rule principle (SRP)] \label{idea:SRP} Let $\tau$ be a stopping rule directing a series of experiments $\mathcal{E}_1, \mathcal{E}_2, \ldots$, which generates data $\boldsymbol{x} = (x_1, x_2, \ldots)$. Inferences about $\theta$ should depend on $\tau$ only through $\boldsymbol{x}$. \end{idea} \begin{example}[Finite stopping rules] Suppose experiment $\mathcal{E}_i$ leads to the observation of $x_i \sim f(x_i \mid \theta)$ and let $\mathcal{A}_i \subset \mathcal{X}_1 \times \ldots \times \mathcal{X}_i$ be a sequence of events. Define $$ \tau := \inf \left\{ n : (x_1, \ldots, x_n) \in \mathcal{A}_n \right\}.$$ It can be shown that $\pr(\tau < \infty) = 1$ (exercise 1.20 BC). \end{example} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Principle IV: the conditionality principle} We will now state one of the main ingredients of the derivation of the LP. The Conditionality Principle (CP) is a statement about the permissible inferences from randomised experiments. \begin{idea}[Conditionality Principle] \label{idea:CP} Let $\mathcal{E}_1$ and $\mathcal{E}_2$ be two experiments about $\theta$. Let $Z \sim \operatorname{Bernoulli}(p)$ and \begin{itemize} \item If $Z=1$, perform $\mathcal{E}_1$ to generate $x_1 \sim f_1(x_1 \mid \theta)$; \item If $Z=0$ perform $\mathcal{E}_2$ to generate $x_2 \sim f_2(x_2 \mid \theta)$. \end{itemize} Inferences about $\theta$ should depend \textbf{only} on the selected experiment, $\mathcal{E}_i$. \end{idea} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Deriving the Likelihood Principle} \cite{Birnbaum1962} showed that the simpler and mostly uncontroversial Sufficiency and Conditionality principles lead to the Likelihood Principle. \begin{theo}[Birnbaum's theorem~\citep{Birnbaum1962}] \label{thm:Birnbaum} \begin{equation} \operatorname{SP} + \operatorname{CP} \implies \operatorname{LP}. \end{equation} \end{theo} \begin{proof} Sketch: \begin{itemize} \item Define a function $\operatorname{EV}(\mathcal{E}, x)$ to quantify the evidence about $\theta$ brought by data $x$ from experiment $\mathcal{E}$ and consider a randomised experiment $\mathcal{E}^*$ in which $\mathcal{E}_1$ and $\mathcal{E}_2$ are performed with probability $p$; \item Show that CP implies $\operatorname{EV}(\mathcal{E}^*, (j, x_j)) = \operatorname{EV}(\mathcal{E}_j, x_j), j = 1, 2$; \item Show that SP implies $\operatorname{EV}(\mathcal{E}^*, (1, x_1)) = \operatorname{EV}(\mathcal{E}^*, (2, x_2))$ when $$ l(\theta \mid x_1) = c l(\theta \mid x_2).$$ \end{itemize} \end{proof} See~\cite{Robert2007}, pg.18 for a complete proof. \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Recommended reading} \begin{itemize} \item[\faBook] \cite{Robert2007} Ch. 1; \item[\faForward] Next lecture: \cite{Robert2007} Ch. 2 and $^\ast$ \cite{Schervish2012} Ch.3; % \item {\large\textbf{Recommended exercises}} % \begin{itemize} % \item[\faBookmark] \cite{Robert2007}. % \begin{itemize} % \item Sections. % \item $^\ast$ Sections . % \end{itemize} % \end{itemize} \end{itemize} \end{frame}
\section{The Adaptation Layer}
\label{sec:adaptation_layer}

The second layer of the AdaptUI architecture is the Adaptation Layer. After the processes performed by the Modelling Layer, the Adaptation Layer leads the dynamic adaptation of the elements presented in the user interface. It is formed by two different modules: the Adaptation Engine, whose purpose is to adapt the interface currently shown by the device, and the Adaptation Polisher, which refines the user interface based on several usability metrics.

\subsection{The Adaptation Engine}
\label{sec:adaptation_engine}

After the Semantic Modeller stores the domain knowledge in the AdaptUIOnt ontology, several rules are triggered. These rules are grouped into three subsets: the pre-adaptation rules, the adaptation rules and the post-adaptation rules. More concretely, the adaptation rules subset is the one which modifies the knowledge represented by the \textit{Adaptation} class in the AdaptUIOnt ontology. Once these rules have been executed, the Adaptation Engine requests the results from the \textit{Adaptation} class. Then, it launches several methods to dynamically change the appearance of the current user interface. These methods basically redraw and refresh the different components shown in the current activity, sharing their new characteristics with the rest of the activities.

Listing~\ref{lst:redraw} shows an example of how several views are redrawn. First, the elements that are part of the activity (i.e., buttons and textviews) are initialized (similarly to other applications). Next, several methods regarding the adaptation are called. The \textit{redrawViews()} method takes into account the user's configured profile through the Capabilities Collector and adapts the components of the activity accordingly. Every activity overrides this method (and others), as they all extend a parent abstract class called \textit{AbstractActivity}. This class mainly manages the initialization of services (e.g., TextToSpeech) and the ontology.
% Listing~\ref{lst:abstract_activity} shows
% part of the AdaptUI source code where the ontology is initialized.

\inputminted[linenos=true, fontsize=\footnotesize, frame=lines]{java}{4_system_architecture/redraw.java}
\captionof{listing}{Example of the creation and adaptation of an activity. In this case the example is centred on the adaptation of a button.\label{lst:redraw}}

% \inputminted[linenos=true, fontsize=\footnotesize, frame=lines]{java}{4_system_architecture/abstract_activity.java}
% \captionof{listing}{The AbstractActivity class ontology related methods.\label{lst:abstract_activity}}

Listing~\ref{lst:store_in_ontology} shows a piece of source code where the developer uses the \ac{owl}-\ac{api} to store the user profile in the AdaptUIOnt model.
% The methods called in this example are part of the Turambar framework~\citep{david_ausin_probabilistic_2014}.
% This framework provides a high-level \ac{api} for managing the \ac{owl}-\ac{api}.

\inputminted[linenos=true, fontsize=\footnotesize, frame=lines]{java}{4_system_architecture/store_in_ontology.java}
\captionof{listing}{Inserting values in the corresponding classes of the AdaptUIOnt ontology.\label{lst:store_in_ontology}}

Finally, Figure~\ref{fig:adaptation_differences} shows two activities with the same user interface. The activity on the left shows the default user interface defined in the activity's layout, whereas the activity on the right shows adapted components.
\begin{figure}[H]
 \centering
   \includegraphics[width=0.65\textwidth]{adaptation_differences.png}
 \caption{User interface adaptation performed by the Adaptation Engine. On the left, a default activity with no adaptation. On the right, the same activity after the adaptation process. As is shown, the colour sets and sizes of the components of the adapted user interface differ from those of the non-adapted one.}
 \label{fig:adaptation_differences}
\end{figure}

\subsection{The Adaptation Polisher}
\label{sec:adaptation_polisher}

Although the adaptations follow the instructions provided by the user, there is still the possibility that the Adaptation Engine's results lead to an unsuccessful interaction or adaptation. One of the main reasons for this is that the classification performed by the model, considering the context aspects, is just an approximation of reality. For example, considering that between 1,000~\ac{lx} and 25,000~\ac{lx} the reasoner determines that the ontology value for the luminance is \textit{daylight}, the difference might be significant in real scenarios (see Table~\ref{tbl:luminance}). In other words, the user might interact easily under a 1,000~\ac{lx} context light but with difficulty in a 24,999~\ac{lx} environment.

In order to tackle this problem, there are two possible solutions: to consider an extensive set of classification rules covering more possible situations (e.g., dividing the luminance table into more categories), or to design a specific module which evaluates the results of the user interaction: the Adaptation Polisher.

The first option is difficult to implement. Nevertheless, AdaptUI provides an \ac{api} which helps with this issue by allowing developers to create and modify the ontology knowledge. This means that it is possible to try to model every tiny context variation in order to capture different context characteristics and adapt the user interface accordingly. However, this is not practical, and AdaptUI covers the second case with the Adaptation Polisher. Therefore, developers do not have to model these small variations in the environment.

The Adaptation Polisher is a software module, part of the Adaptation Layer, which monitors the effectiveness and responsiveness of the adapted interfaces. By collecting different usability and productivity metrics of the interaction carried out by the user, this module is able to make small but specific adaptations to improve the ongoing interaction.

This module has been designed around the relative user efficiency productivity metric. This metric compares the efficiency of the user with that of an expert. However, it makes no sense to compare the user with others in AdaptUI: the system cannot generalize and apply adaptations based on other users' preferences. To solve this, in AdaptUI we propose to maintain a base adaptation, which is improved thanks to the Adaptation Polisher and its interaction results. Consequently, an interaction model is built for each adaptation. Therefore, we can determine the efficiency of the user when he/she is manipulating an adaptation made by the system by comparing the last adaptation with the previous one.

% AdaptUI keeps an interaction model of the user (as an expert) stored in the
% ontology. Once an adaptation is made, the Adaptation Polisher monitors the user
% interaction and then checks the stored model with the new generated one. Next,
% the post-adaptation rules are triggered.
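In other words, the Adaptation Polisher compares the task efficiency achieved with the current adaptation against the task efficiency achieved with the base adaptation. As a compact summary (our notation; the concrete thresholds are those encoded in the usability rules of Section~\ref{sec:adaptation_polisher_scenario}), with $M1$ denoting the task effectiveness and $T$ the task time of an interaction model, the comparison amounts to evaluating the ratio
\begin{equation*}
 \frac{E_{\text{current}}}{E_{\text{base}}}, \qquad \text{where } E = \frac{M1}{T},
\end{equation*}
and a value well below 1 marks the current adaptation as a candidate for polishing.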
\subsubsection{The Usability Metrics} \label{sec:usability_metrics} For this module, several usability metrics have been studied and implemented. These metrics are classified into two different groups: \textit{effectiveness} metrics and \textit{productivity} metrics. Table~\ref{tbl:effectiveness_metrics} and Table~\ref{tbl:productivity_metrics} detail the usability metrics implemented in AdaptUI. The following information is given for each metric in the cited tables: \begin{enumerate}[label=\alph*)] \item Purpose of the metric: Expresses the main goal of the current metric. \item Measurement, formula and data element computations: Provide the formulas to compute the current metric and the meaning of the used data elements. \item Interpretation of the measured value: Details the range and preferred values. \item Metric scale type: Provides the type of scale used by the current metrics. The possible scale types are: Nominal, Ordinal, Interval, Ratio and Absolute scale. \item Measure type: Provides the type of the measure. The possible measure types are: Size, Time and Count. \end{enumerate} In the original \ac{iso} document~\citep{ISOIEC9126} there are 4 extra columns that have not been included in Table~\ref{tbl:effectiveness_metrics} and Table~\ref{tbl:productivity_metrics}. This is because the values for each metric under these columns are the same: \begin{enumerate}[label=\alph*)] \item Method of application: Provides an outline of the application. \item Input to measurement: Details the source of data used in the measurement. In this case there are two inputs that each metric shares: Operation (test) report and User monitoring record. \item Reference: Identifies software life cycle processes where the metric is applicable. There are three processes that each metric shares: 6.5 Validation, 5.3 Qualification testing and 5.4 Operation. \item Target audience: Identifies the user(s) of the measurement results. Again, the metrics share User and Human interface designer as their audiences. \end{enumerate} \myparagraph{Effectiveness Metrics} \label{sec:effectiveness_metrics} Effectiveness metrics, as detailed in the \ac{iso}/\ac{iec} 9126-4~\citep{ISOIEC9126}, evaluate whether the current task achieves a specific goal considering the accuracy and completeness of the corresponding task. These metrics are shown in Table~\ref{tbl:effectiveness_metrics}. % \begin{landscape} \begin{table} \caption{The effectiveness metrics used in the Adaptation Polisher, as it appears in~\citep{ISOIEC9126}.} \label{tbl:effectiveness_metrics} \footnotesize \centering \begin{tabular}{l l l l l l} \hline \textbf{Metric} & \textbf{Purpose of } & \textbf{Measurement,} & \textbf{Interpretation } & \textbf{Metric} & \textbf{Measure} \\ \textbf{name} & \textbf{the metrics} & \textbf{formula and data} & \textbf{of measured } & \textbf{scale} & \textbf{type} \\ & & \textbf{element compu-} & \textbf{value} & \textbf{type} \\ & & \textbf{tations} \\ \hline Task & To measure the & $M1=|1-\Sigma A_{i}|$ & $0\leq M1 \leq 1$ & \textemdash & $A=$ proportion \\ effectiveness & proportion of the & & & & \\ & goals of the task & $A_{i}=$ proportional & The closer to & & \\ & achieved & value of each & 1.0 the better. & & \\ & correctly. & missing or incorrect & & & \\ & & component in the \\ & & task output \\ \hline Task & To measure the & $X=A/B$ & $0\leq X \leq 1$ & Ratio & $A=$ Count \\ completion & proportion of & & & & $B=$ Count \\ & the task that & $A=$ number of & The closer to & & $X=$ Count/Count \\ & is completed. 
& tasks completed & 1.0 the better. & & \\ & & $B=$ total number of & & & \\ & & tasks attempted \\ \hline Error & To measure the & $X=A/T$ & $0\leq X$ & Absolute & $A=$ Count \\ frequency & frequency of & & & &~ \\ & errors. & $A=$ number of & The closer to & &~ \\ & & errors made by the & 0 the better. & &~ \\ & & user \\ & & $T=$ time or number & & & \\ & & of tasks \\ \hline \end{tabular} \end{table} % \end{landscape} \myparagraph{Productivity Metrics} \label{sec:productivity_metrics} Productivity metrics evaluate the resources consumed by the users in relation to the effectiveness achieved in the current task~\citep{ISOIEC9126}. These metrics are shown in Table~\ref{tbl:productivity_metrics}. % \begin{landscape} \begin{table} \caption{The productivity metrics used in the Adaptation Polisher, as it appears in~\citep{ISOIEC9126}.} \label{tbl:productivity_metrics} \footnotesize \centering \begin{tabular}{l l l l l l} \hline \textbf{Metric} & \textbf{Purpose of } & \textbf{Measurement,} & \textbf{Interpretation } & \textbf{Metric} & \textbf{Measure} \\ \textbf{name} & \textbf{the metrics} & \textbf{formula and data} & \textbf{of measured } & \textbf{scale} & \textbf{type} \\ & & \textbf{element compu-} & \textbf{value} & \textbf{type} \\ & & \textbf{tations} \\ \hline Task & To measure the & $X=Ta$ & $0\leq X$ & Interval & $T=$ Time \\ time & required time to & & & & \\ & complete the task. & $Ta=$ Task time & The closer to & & \\ & & & 1.0 the better. & & \\ \hline Task & To measure how & $X=M1/T$ & $0\leq X \leq 1$ & \textemdash & $T=$ Time \\ efficiency & efficient the & & & & $X=$ proportion/ \\ & users are. & $M1=$ task & The larger the & & time \\ & & effectiveness & better. & & \\ & & $T=$ task time & & & \\ \hline Economic & To measure the & $X=M1/C$ & $0\leq X$ & \textemdash & $C=$ Value \\ productivity & cost-effectiveness & & & &~ \\ & of the user. & $M1=$ task & The larger the & &~ \\ & & effectiveness & better. & &~ \\ & & $C=$ total cost & & &~ \\ & & of the tasks \\ \hline Productive & To measure the & $X=Ta/Tb$ & $0\leq X \leq 1$ & Absolute & $Ta=$ Time \\ proportion & proportion of & & & & $Tb=$ Time \\ & time the user & $Ta=$ productive & The closer to & & $X=$ Time/ \\ & is performing & time = task time - & 1.0 the better. & & Time \\ & productive actions. & help time - error & & & \\ & & time - search time \\ & & $Tb=$ task time \\ \hline Relative user & To measure the & Relative user & $0\leq X \leq 1$ & Absolute & $C=$ proportion/ \\ efficiency & efficiency of & efficiency & & & time \\ & the user compared & $X=A/B$ & & &~ \\ & to an expert. & & & & \\ & & $A=$ ordinary & The closer to & &~ \\ & & user's task & 1.0 the better. & & \\ & & efficiency \\ & & $B=$ expert & & & \\ & & user's task & & & \\ & & efficiency \\ \hline \end{tabular} \end{table} % \end{landscape} \subsubsection{Adaptation Polisher Scenario} \label{sec:adaptation_polisher_scenario} In the following lines a scenario describing step by step the actions performed by the Adaptation Polisher is presented. Table~\ref{tbl:polisher_adaptation} shows the inferred adaptation for the user, context and device characteristics described in Table~\ref{tbl:polisher_scenario}. 
\begin{table}
 \caption{Scenario situation summary.}
 \label{tbl:polisher_scenario}
 \footnotesize
 \centering
 \begin{tabular}{l l}
 \hline
 & \textbf{Scenario} \\
 \hline
 User \\
 \qquad - Personal data & David, 23 years old, Spanish \\
 \qquad - Activity & - \\
 \qquad - Known disabilities & - \\
 % & Hearing loss \\
 % \hline
 Context \\
 \qquad - Location & Relative: Vitoria, Spain \\
 & \\
 \qquad - Time & 06:30 \\
 \qquad - Brightness & 600 \ac{lx} \\
 \qquad - Temperature & -5 °C \\
 % \hline
 Device & Motorola Moto G \\
 % & \\
 \hline
 Task & Send an email \\
 \hline
 \end{tabular}
\end{table}

\begin{table}
 \caption{Final adaptation for the presented scenario.}
 \label{tbl:polisher_adaptation}
 \footnotesize
 \centering
 \begin{tabular}{l l}
 \hline
 % \multicolumn{2}{c}{\textbf{Scenario 2}} \\
 \textbf{Adaptation} & \textbf{Value}\\
 \hline
 % \textit{hasBrightness} & ??? \\
 \textit{hasColourSet} & - \\
 \textit{hasViewSize} & 10 \\
 \textit{hasResponse} & vibration \\
 % \textit{hasColourSet} & Colour blindness \\
 % \textit{hasViewSize} & 20 \\
 % \textit{hasInput} & Voice and haptic \\
 \textit{hasInput} & Default \\
 \textit{hasOutput} & Visual and audio\\
 \textit{hasVolume} & 5 \\
 \hline
 \end{tabular}
\end{table}

As Table~\ref{tbl:polisher_scenario} shows, David does not suffer from any disability. Nevertheless, the context situation presents characteristics that might trouble David during the interaction process. The cold temperature and the lack of sufficient light require a user interface adaptation. Thus, AdaptUI increases the device's brightness and the views' sizes. Figure~\ref{fig:polisher_scenario} illustrates the differences between the default user interface (left) and the adapted one (right).

\begin{figure}
 \centering
 \includegraphics[width=0.65\textwidth]{polisher_scenario.pdf}
 \caption{User interface adaptation performed by the Adaptation Engine. On the left, the default version, without adaptations. On the right, the same application adapted by AdaptUI.}
 \label{fig:polisher_scenario}
\end{figure}

Thus, David uses his device through the adapted user interface. At this point, the corresponding interaction model is built by the Adaptation Layer, collecting the usability metrics shown in Section~\ref{sec:usability_metrics} (see Table~\ref{tbl:effectiveness_metrics} and Table~\ref{tbl:productivity_metrics}). Table~\ref{tbl:model_comparison} shows the used metrics and the computed values for both the default and the adapted interaction models.

\begin{table}
 \caption{The interaction model computed by the Adaptation Layer. Time ($T$) has been measured in seconds.}
 \label{tbl:model_comparison}
 \footnotesize
 \centering
 \begin{tabular}{l l l}
 \hline
 \textbf{Metric} & \textbf{Value for the default}& \textbf{Value for the adapted}\\
 & \textbf{interaction model} & \textbf{interaction model} \\
 \hline
 Task effectiveness & 0.7 & 0.350 \\
 ($M1=|1-\Sigma A_{i}|$)\\
 Task completion & 1 & 0 \\
 ($X=A/B$)\\
 Error frequency & 0.2 & 0.562 \\ %3/15, 18/32
 ($X=A/T$)\\
 \hline
 Task time & 15 & 32 \\
 ($X=Ta$)\\
 Task efficiency & 0.046 & 0.010 \\
 ($X=M1/T$)\\
 % Productive proportion\\
 % ($X=Ta/Tb$)\\
 Relative user efficiency & 1.0 & 0.5 \\
 ($X=A/B$)\\
 \hline
 \end{tabular}
\end{table}

As is shown in Table~\ref{tbl:model_comparison}, the adapted user interface provided by AdaptUI does not improve the user's interaction. The time required to perform the same task (sending an email) is approximately 32 seconds, whereas with the default interface David needs approximately 15 seconds. Thus, the user interface does not fit the user's needs.
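To illustrate how these values follow from the metric definitions in Table~\ref{tbl:effectiveness_metrics} and Table~\ref{tbl:productivity_metrics}, the task efficiency and error frequency entries of Table~\ref{tbl:model_comparison} can be reproduced directly (the error counts of 3 and 18 are the raw counts implied by the reported frequencies):
\begin{align*}
 \text{Task efficiency: } X = \frac{M1}{T} &= \frac{0.7}{15} \approx 0.046 \text{ (default)}, & \frac{0.35}{32} &\approx 0.010 \text{ (adapted)},\\
 \text{Error frequency: } X = \frac{A}{T} &= \frac{3}{15} = 0.2 \text{ (default)}, & \frac{18}{32} &\approx 0.562 \text{ (adapted)}.
\end{align*}
The drop in task efficiency from approximately 0.046 to 0.010 is precisely the kind of degradation that the Adaptation Polisher is designed to detect and correct.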
Once the interaction model has been built, the Adaptation Polisher checks the usability rule set. As detailed in Section~\ref{sec:adaptation_polisher}, these rules trigger the polisher rules if certain usability ranges are exceeded. Equation~\ref{ec:usability_rule} shows a usability rule checking the relative user efficiency of the interaction model.

\footnotesize
\begin{equation}
\label{ec:usability_rule}
\begin{aligned}
UserAux(?user)\; \&\; Productivity(?productivity)\; \&\; Polisher(?polisher)\; \&\\
userAuxHasProductivityMetrics(?user, ?productivity)\; \&\;\\
hasRelativeEfficiency(?productivity, ?efficiency)\; \&\;\\
lessThanOrEqual(?efficiency, 0.5)\;\\
\Rightarrow \\
launchPolisherRules(?polisher, true)
\end{aligned}
\end{equation}
\normalsize

On the other hand, Equation~\ref{ec:polisher_rule} is triggered by Equation~\ref{ec:usability_rule}. In the consequent of this rule there is a value of $1.10$. This value means that the size of the views presented in the previous adapted version of the user interface should be increased by $10\%$. Hence, the following rule polishes the adapted user interface.

\footnotesize
\begin{equation}
\label{ec:polisher_rule}
\begin{aligned}
Polisher(?polisher)\; \&\; launchPolisherRules(?polisher, true)\; \&\\
UserAux(?user)\; \&\; userAuxHasEffectivenessMetrics(?user, ?effectiveness)\; \&\;\\
effectivenessMetricHasErrorFreequency(?effectiveness, ?freq)\; \&\\
greaterThan(?freq, 0.5)\\
\Rightarrow \\
setViewSize(1.10)
\end{aligned}
\end{equation}
\normalsize

Thus, the resulting polished user interface is shown in Figure~\ref{fig:polisher_4}.

\begin{figure}
 \centering
 \includegraphics[width=0.65\textwidth]{polisher_4.pdf}
 \caption{Polished user interface. On the left, the adapted version. On the right, the polished one.}
 \label{fig:polisher_4}
\end{figure}
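As a small worked example of the effect of this rule (assuming the $1.10$ factor is applied to the \textit{hasViewSize} value inferred for this scenario in Table~\ref{tbl:polisher_adaptation}), the polished view size would be
\begin{equation*}
 10 \times 1.10 = 11,
\end{equation*}
that is, the views become $10\%$ larger than in the adapted user interface.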
\documentclass[]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \newcommand{\euro}{€} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \PassOptionsToPackage{usenames,dvipsnames}{color} % color is loaded by hyperref \hypersetup{unicode=true, pdftitle={REF IR/Political Science Prediction Models (version 0.3)}, pdfauthor={Christopher Gandrud}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{longtable,booktabs} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{0} %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{REF IR/Political Science Prediction Models (version 0.3)} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{Christopher Gandrud} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{06 March, 2016} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi \begin{document} \maketitle I conducted a simple Random Forest Regression to examine how IR/Political Science REF 2014 Output GPAs could be predicted using: \begin{itemize} \item \textbf{Mean Journal Impact Factor} for all of the journal article submissions that each university made. Note: if a journal was not assigned an impact factor\footnote{E.g. new journals and journals not included in the impact factor list used. Note that we attempted to match all of the submitted articles' journal names with those on the impact factor list. However, due to spelling variations in the two sets of journal names, some matches may not have been made.} it is effectively given an impact factor of 0. 
\item \textbf{Percent of journal article} submissions from journals in the \textbf{top 20} IR or Political Science categories assembled by Google Scholar.
\item \textbf{Percent of non-edited books} submitted that were published by a \textbf{top university press} (see table in the Appendix for the complete list).
\end{itemize}

All of these metrics are highly correlated with REF Output GPAs. Mean impact factor has a correlation coefficient of 0.68 with REF Output GPAs. The REF GPA correlation with the Google Scholar metric is 0.76 and 0.7 with the percentage of books submitted that were published by a top university press. The following figure further illustrates these close relationships and City University's placement within them. It is important to note that none of these metrics contains information on other materials, including edited volumes, which are also submitted to the REF.

\begin{figure}[htbp]
\centering
\includegraphics{README_files/figure-latex/descript-1.pdf}
\caption{Comparing Universities' REF 2014 Output GPA to Journal Research Output Metrics}
\end{figure}

\section{More Complex Model: Google Scholar + Impact Factor + Books}\label{more-complex-model-google-scholar-impact-factor-books}

To examine how well these metrics predict REF Output GPAs I first ran the random forest regression model on a random sample of 70\% of the 56 universities (i.e.~38) that made REF submissions for IR/Political Science. I then used the estimates from the model to predict the REF Output scores of the remaining 30\% (i.e.~17 universities). The following figure compares the actual REF GPA scores to the predictions. Note: if the model perfectly predicted the GPA score then each dot would lie on the 45 degree line. The mean absolute prediction error when using the two journal metrics was 0.1. In other words, on average the model incorrectly predicted the REF GPA score by 0.1 GPA points or 2.5\% of the GPA scale.

\begin{figure}[htbp]
\centering
\includegraphics{README_files/figure-latex/unnamed-chunk-1-1.pdf}
\caption{Actual vs.~Predicted 2014 REF Output GPAs Using All Output Metrics for a Test Set of 17 Randomly Selected Universities}
\end{figure}

\section{Simpler model: Google Scholar Only}\label{simpler-model-google-scholar-only}

The percentage of journal submissions in the top Google Scholar lists is more strongly correlated with REF GPA scores than impact factors. Would a simpler model using just the Google Scholar metric perform just as well as the more complex two-metric model? The following figure shows actual vs.~predicted GPA scores for this model. The mean absolute prediction error when using only the Google Scholar metric was 0.2. In other words, on average the model incorrectly predicted the REF GPA score by 0.2 GPA points or 5\% of the GPA scale. The Google Scholar Only model slightly outperforms the more complex model that also included information on impact factors.

\begin{figure}[htbp]
\centering
\includegraphics{README_files/figure-latex/unnamed-chunk-3-1.pdf}
\caption{Actual vs.~Predicted 2014 REF Output GPAs Using Google Scholar Metric for a Test Set of 17 Randomly Selected Universities}
\end{figure}

\section{Simple, But a Little More Complex: Google Plus}\label{simple-but-a-little-more-complex-google-plus}

The Google Top 20 IR and Political Science lists are notably lacking important political economy journals, including \emph{Review of International Political Economy} and \emph{New Political Economy}.
Does adding these journals to a ``Google Scholar Plus'' variable improve prediction performance? The following figure shows the predicted vs.~actual REF GPAs for our test sample using the Google Scholar Plus variable. The mean absolute prediction error when using only the Google Scholar Plus metric was 0.174. In other words, on average the model incorrectly predicted the REF GPA score by 0.174 GPA points or 4.4\% of the GPA scale. The Google Scholar Plus model slightly outperforms both the Two Metric model and the Google Scholar Only model.

\begin{figure}[htbp]
\centering
\includegraphics{README_files/figure-latex/unnamed-chunk-4-1.pdf}
\caption{Actual vs.~Predicted 2014 REF Output GPAs Using Google Scholar Plus Metric for a Test Set of 17 Randomly Selected Universities}
\end{figure}

\section{Conclusion}\label{conclusion}

For the International Relations/Political Science 2014 REF we can create highly accurate predictions of Output GPA using only universities' percentage of journal submissions that were from the Google Scholar IR and Political Science top 20 lists. Using subject-specific knowledge to select two additional journals that are prominent in international political economy allows us to make even better predictions. It may be possible to make even better predictions by including information on high-impact book publishers.

\section{Appendix: Top 20 (IR + PS) Google Scholar Journals (February 2016)}\label{appendix-top-20-ir-ps-google-scholar-journals-february-2016}

For reference, the following is a list of the top 20 journals in the Google Scholar IR and Political Science categories. We also include the most recent Thomson Reuters Impact Factor to enable comparison between the two metrics. Note that the table includes fewer than 40 journals because some journals (\emph{World Politics} and \emph{Journal of Democracy}) appear on both the IR and Political Science lists.
\begin{longtable}[c]{@{}lr@{}} \toprule Journal & TR Impact Factor\tabularnewline \midrule \endhead political analysis & 4.655\tabularnewline american political science review & 3.688\tabularnewline journal of peace research & 3.387\tabularnewline american journal of political science & 3.269\tabularnewline annual review of political science & 3.140\tabularnewline international organization & 3.019\tabularnewline european journal of political research & 2.508\tabularnewline world politics & 2.450\tabularnewline journal of politics & 2.255\tabularnewline governance & 2.237\tabularnewline perspectives on politics & 2.132\tabularnewline comparative political studies & 2.028\tabularnewline foreign affairs & 2.009\tabularnewline british journal of political science & 1.987\tabularnewline european journal of international relations & 1.972\tabularnewline jcms & 1.855\tabularnewline party politics & 1.830\tabularnewline journal of european public policy & 1.817\tabularnewline international studies quarterly & 1.705\tabularnewline political behavior & 1.691\tabularnewline journal of conflict resolution & 1.609\tabularnewline west european politics & 1.576\tabularnewline security dialogue & 1.356\tabularnewline international affairs & 1.246\tabularnewline electoral studies & 1.182\tabularnewline journal of democracy & 1.180\tabularnewline political research quarterly & 1.149\tabularnewline review of international studies & 1.087\tabularnewline global governance & 1.016\tabularnewline third world quarterly & 0.981\tabularnewline political studies & 0.939\tabularnewline international studies review & 0.878\tabularnewline millennium & 0.841\tabularnewline washington quarterly & 0.788\tabularnewline journal of european integration & 0.656\tabularnewline international studies perspectives & 0.652\tabularnewline pacific review & 0.527\tabularnewline ethics \& international affairs & 0.453\tabularnewline \bottomrule \end{longtable} \end{document}
{ "alphanum_fraction": 0.7935583659, "avg_line_length": 41.4754716981, "ext": "tex", "hexsha": "a17e16e9c2a628c44d9f2da7a7b661215cca00af", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ef287c11b227fde91bb9baafc56bb62d70d86b2a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "christophergandrud/ref_2014_IR_PoliSci", "max_forks_repo_path": "README.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ef287c11b227fde91bb9baafc56bb62d70d86b2a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "christophergandrud/ref_2014_IR_PoliSci", "max_issues_repo_path": "README.tex", "max_line_length": 83, "max_stars_count": null, "max_stars_repo_head_hexsha": "ef287c11b227fde91bb9baafc56bb62d70d86b2a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "christophergandrud/ref_2014_IR_PoliSci", "max_stars_repo_path": "README.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2928, "size": 10991 }
\documentclass[titlepage]{article} \input{../preamble.tex} \externaldocument{psets} \title{CHEM 20100 (Inorganic Chemistry I) Problem Sets} \author{Steven Labalme} \begin{document} \maketitle \pagenumbering{roman} \tableofcontents \newpage \pagenumbering{arabic} \pagestyle{main} \renewcommand{\leftmark}{Problem Set \thesection} \setcounter{section}{-1} \section{Course Prep Problems} \subfile{PSet0/pset0.tex} \newpage \section{VSEPR and Point Groups} \subfile{PSet1/pset1.tex} \newpage \section{Representations, Character Tables, and Vibrations} \subfile{PSet2/pset2.tex} \newpage \section{Constructing Molecular Orbitals} \subfile{PSet3/pset3.tex} \newpage \section{Band Theory and Acid-Base Interactions} \subfile{PSet4/pset4.tex} \subsection{Extra Materials} Help with Problem I can be found \href{https://courses.cit.cornell.edu/ece407/Lectures/handout3.pdf?fbclid=IwAR3KV4T7d_OBTlnd5kNTxmYTSlkJJSWLNfx8YGSNt-mtykwIfxkG4nWkGoQ}{here}. \newpage \section{Coordination Complexes: Name-Structure Conversions and Isomers} \subfile{PSet5/pset5.tex} \newpage \section{Coordination Complexes: Electron Configurations} \subfile{PSet6/pset6.tex} \newpage \section{Coordination Complexes: Bonding Models} \subfile{PSet7/pset7.tex} \newpage \section{Coordination Complexes: Spectra and Reactions} \subfile{PSet8/pset8.tex} \newpage \renewcommand{\leftmark}{References} \printbibliography[heading=bibintoc] Note that solutions to all book problems can be found \href{https://www.chem.uci.edu/~lawm/107.html?fbclid=IwAR0mQljnCSONs96ZRMeseJbx-0psD2kslMZfl0nDnpq5SmcAnE0isXZU1C8}{\underline{here}}. \end{document}
{ "alphanum_fraction": 0.789346247, "avg_line_length": 17.3894736842, "ext": "tex", "hexsha": "bf835e958803567e072b99f504a8a94433905ca8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "af69c03d0c24335a1a049082ddb70c47306d28ff", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "shadypuck/CHEM20100Notes", "max_forks_repo_path": "PSets/psets.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "af69c03d0c24335a1a049082ddb70c47306d28ff", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "shadypuck/CHEM20100Notes", "max_issues_repo_path": "PSets/psets.tex", "max_line_length": 188, "max_stars_count": null, "max_stars_repo_head_hexsha": "af69c03d0c24335a1a049082ddb70c47306d28ff", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "shadypuck/CHEM20100Notes", "max_stars_repo_path": "PSets/psets.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 556, "size": 1652 }
\documentclass[12pt, a4paper]{article}
\setlength{\oddsidemargin}{0.5cm}
\setlength{\evensidemargin}{0.5cm}
\setlength{\topmargin}{-1.6cm}
\setlength{\leftmargin}{0.5cm}
\setlength{\rightmargin}{0.5cm}
\setlength{\textheight}{24.00cm}
\setlength{\textwidth}{15.00cm}
\parindent 0pt
\parskip 5pt
\pagestyle{plain}

\title{Project Proposal}
\author{}
\date{}

\newcommand{\namelistlabel}[1]{\mbox{#1}\hfil}
\newenvironment{namelist}[1]{%1
\begin{list}{}
{
\let\makelabel\namelistlabel
\settowidth{\labelwidth}{#1}
\setlength{\leftmargin}{1.1\labelwidth}
}
}{%1
\end{list}}

\begin{document}
\maketitle

\begin{namelist}{xxxxxxxxxxxx}
\item[{\bf Title:}] Music Data Mining for Data Visualization
\item[{\bf Author:}] Jeremy Grifski
\item[{\bf Instructor:}] Professor DeLiang Wang
\end{namelist}

\section*{Background}

As a first-year PhD student, my research background is very limited. That said, I am interested in doing research at the intersection of music, education, gaming, and data visualization. To supplement these interests, I took three game development courses, a computer graphics course, and a modeling and simulation course during undergrad. In addition, I recently took a real-time rendering course, and I am currently taking a data visualization course and a graphics seminar.

In terms of technical experience, I spent two years in industry working for General Electric Transportation as a part of the Edison Engineering Development Program. Over those two years, I rotated through various roles including software engineer and prognostics engineer. The first role allowed me to work on camera systems, while the second gave me experience with some basic data analytics.

\section*{Aim}

The aim of this project is to explore different audio signal processing methods as a mode of data mining for the purposes of visualization. For example, I am interested in collecting data such as loudness, onset density, and auditory roughness~\cite{jeong}. I would also like to explore pitch~\cite{cuadra,rabiner}, key~\cite{zhu,chai}, and onset detection~\cite{bello}.

With all of this data collected, I would then like to do some visualization to see if there are any interesting trends between music categories. For example, is rock music generally rougher than pop music? Do some artists leverage key changes more than others? How has loudness varied over the decades?

\section*{Method}

Over the course of the semester, there are several small tasks that I would like to complete. The list below details a set of features to be implemented in the final version of the music data mining and visualization tool:

\begin{itemize}
\item Music directory selection
\item Recursive music directory traversal
\item Music file data modeling using Python classes
\item Music file metadata mapping to data model
\begin{itemize}
\item Length
\item Genre
\item Artist
\item Year
\item Bitrate
\end{itemize}
\item Mine music file for signal data
\begin{itemize}
\item Loudness
\item Auditory Roughness
\item Onset Density
\item Pitch
\item Key
\end{itemize}
\item Aggregate music data by category (see metadata)
\item Visualize filtered music data
\begin{itemize}
\item Average auditory roughness vs. genre
\item Average key change count vs. artist
\item Average loudness vs. time
\end{itemize}
\end{itemize}

Overall, the plan is to build a functioning audio data mining tool which can be leveraged for data visualization.
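To make the intended structure a little more concrete, the following is a rough sketch of how the directory traversal, data modeling, and aggregation steps might fit together. It is illustrative only: the class and function names are placeholders of my own, and the metadata and signal extraction are stubbed out where a tagging library and the scipy-based analysis would eventually plug in.

\begin{verbatim}
# Rough illustrative sketch of the planned tool structure.  Names are
# placeholders; metadata and signal extraction are stubbed out.
import os
from dataclasses import dataclass, field
from typing import Optional

AUDIO_EXTENSIONS = {".mp3", ".flac", ".wav", ".ogg"}

@dataclass
class Track:
    path: str
    length: Optional[float] = None      # seconds
    genre: Optional[str] = None
    artist: Optional[str] = None
    year: Optional[int] = None
    bitrate: Optional[int] = None
    features: dict = field(default_factory=dict)  # loudness, roughness, ...

def find_tracks(root):
    """Recursively walk a music directory, modeling each file as a Track."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS:
                yield Track(path=os.path.join(dirpath, name))

def aggregate(tracks, key, feature):
    """Average a mined feature by a metadata field, e.g. roughness by genre."""
    groups = {}
    for t in tracks:
        if getattr(t, key) is not None and feature in t.features:
            groups.setdefault(getattr(t, key), []).append(t.features[feature])
    return {k: sum(v) / len(v) for k, v in groups.items()}

if __name__ == "__main__":
    library = list(find_tracks(os.path.expanduser("~/Music")))  # placeholder
    print(f"Found {len(library)} audio files")
\end{verbatim}

The signal-level features (loudness, auditory roughness, onset density, pitch, key) would then be computed per track and stored in \texttt{features}, leaving the plotting library to handle the comparisons listed above.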
\section*{Software and Hardware Requirements} For this project, I intend to use Python for both signal processing and visualization. Signal processing can be done using the scipy library, and visualization can be performed using any number of libraries such as matplotlib and vtk. In addition, I will need a rather large repository of music files. Luckily, I have plenty myself as well as access to Spotify Premium. \begin{thebibliography}{9} \bibitem{jeong} Jeong, Dasaem, and Juhan Nam. "Visualizing music in its entirety using acoustic features: Music flowgram." in Proceedings of the International Conference on Technologies for Music Notation and Representation-TENOR2016, Anglia Ruskin University. Anglia Ruskin University. 2016. \bibitem{cuadra} De La Cuadra, Patricio, Aaron S. Master, and Craig Sapp, "Efficient Pitch Detection Techniques for Interactive Music." ICMC. 2001. \bibitem{rabiner} Rabiner, Lawrence, et al. "A comparative performance study of several pitch detection algorithms." IEEE Transactions on Acoustics, Speech, and Signal Processing 24.5 (1976): 399-418. \bibitem{zhu} Zhu, Yongwei, Mohan S. Kankanhalli, and Sheng Gao. "Music key detection for musical audio." Multimedia Modelling Conference, 2005. MMM 2005. Proceedings of the 11th International. IEEE, 2005. \bibitem{chai} Chai, Wei, and Barry Vercoe. "Detection of Key Change in Classical Piano Music." ISMIR. 2005. \bibitem{bello} Bello, Juan Pablo, et al. "A tutorial on onset detection in music signals." IEEE Transactions on speech and audio processing 13.5 (2005): 1035-1047. \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.7695425594, "avg_line_length": 36.485915493, "ext": "tex", "hexsha": "bf8ffbe7f727718304e8721ec9fcf8f04ffa7f27", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b86a28fd3bc5296acfa3aed70f85d092a84644ca", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jrg94/CSE5539", "max_forks_repo_path": "term-project/proposal/grifski-term-project-proposal.tex", "max_issues_count": 8, "max_issues_repo_head_hexsha": "b86a28fd3bc5296acfa3aed70f85d092a84644ca", "max_issues_repo_issues_event_max_datetime": "2019-03-25T04:04:11.000Z", "max_issues_repo_issues_event_min_datetime": "2019-02-07T03:44:49.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jrg94/CSE5539", "max_issues_repo_path": "term-project/proposal/grifski-term-project-proposal.tex", "max_line_length": 91, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b86a28fd3bc5296acfa3aed70f85d092a84644ca", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jrg94/CSE5539", "max_stars_repo_path": "term-project/proposal/grifski-term-project-proposal.tex", "max_stars_repo_stars_event_max_datetime": "2019-02-07T03:55:44.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-07T03:55:44.000Z", "num_tokens": 1363, "size": 5181 }
\subsection{\soarb{gds-print}}
\label{gds-print}
\index{gds-print}

Print the WMEs in the goal dependency set for each goal.

\subsubsection*{Synopsis}

\begin{verbatim}
gds-print
\end{verbatim}

\subsubsection*{Options}

No options.

\subsubsection*{Description}

The Goal Dependency Set (GDS) is described in an appendix of the Soar manual. This command is a debugging command for examining the GDS for each goal in the stack. First it steps through all the working memory elements in the rete, looking for any that are included in \emph{any} goal dependency set, and prints each one. Then it lists each goal in the stack and prints the WMEs in the goal dependency set for that particular goal.

This command is useful when trying to determine why subgoals are disappearing unexpectedly: often something has changed in the goal dependency set, causing a subgoal to be regenerated prior to producing a result.

\subsubsection*{Warnings}

gds-print is horribly inefficient and should not generally be used except when something is going wrong and you need to examine the Goal Dependency Set.

\subsubsection*{Default Aliases}

\begin{tabular}{|l|l|}
\hline
\soar{ Alias } & Maps to \\
\hline
\soar{ gds\_print } & gds-print \\
\hline
\end{tabular}
{ "alphanum_fraction": 0.7741420591, "avg_line_length": 56.9545454545, "ext": "tex", "hexsha": "de6fd35578ed218eaaf20fb2fe4ac7c9f508d12d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "sleyzerzon/soar", "max_forks_repo_path": "Documentation/ManualSource/cli/gds-print.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "sleyzerzon/soar", "max_issues_repo_path": "Documentation/ManualSource/cli/gds-print.tex", "max_line_length": 371, "max_stars_count": 1, "max_stars_repo_head_hexsha": "74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "sleyzerzon/soar", "max_stars_repo_path": "Documentation/ManualSource/cli/gds-print.tex", "max_stars_repo_stars_event_max_datetime": "2016-04-01T04:02:28.000Z", "max_stars_repo_stars_event_min_datetime": "2016-04-01T04:02:28.000Z", "num_tokens": 318, "size": 1253 }
% !TEX TS-program = pdflatex
% !TEX encoding = UTF-8 Unicode
% {{{ setup
\documentclass[11pt,twoside]{article}

\newcommand*{\ShowAbstract}{} % Comment to hide the Abstract.
\newcommand*{\ShowFrontmatter}{} % Comment to hide the Frontmatter.
%\newcommand*{\ShowBibliography}{} % Comment to hide the Bibliography.
%\newcommand*{\ShowAppendix}{} % Comment to hide the Appendix.
%\newcommand*{\SansFontFamily}{} % Comment to use normal font.

\newcommand{\doctitle}{Path-Length Balancing in Homogeneous Operation Trees}

\usepackage[pdftex,
  pdfauthor={Dave McEwan},
  pdftitle={\doctitle},
  pdfsubject={\doctitle},
  pdfkeywords={},
  pdfproducer={},
  pdfcreator={pdflatex},
  breaklinks]{hyperref}

\input{setup.tex}
\input{glossary.tex}
\graphicspath{{./img/}}
\renewcommand{\fancyrefdefaultformat}{plain}

\author{Dave McEwan}
\title{WIP: \doctitle}
\date{}
% }}} setup

\begin{document}
\ifdefined\SansFontFamily % Font for main text.
  \sffamily
\fi

\ifdefined\ShowFrontmatter
\maketitle

\ifdefined\ShowAbstract
\begin{abstract}
This paper explores the problem of path-length balancing in k-ary trees.
A simple technique, named ``port-mapping'', is presented to reduce the differences in the path-lengths between inputs.
Several variants of port-mapping are given and compared.
This is directly related to several problems in \gls{asic} design including OR tree implementation, counting the number of set bits in a vector, and finding the sum or product of a list of results.
%Firstly the central ideas and notation are introduced with examples.
%Secondly a handful of port-mapping algorithms are explained.
%Thirdly some results are presented.
\end{abstract}
\fi
\fi

\setcounter{tocdepth}{2}
%\tableofcontents

\section{Introduction}

The problem of balancing the lengths of operation trees is of interest to several fields, in particular \gls{asic} layout design, where a longer path from the output of one \gls{ff} to the input of another \gls{ff} means that the clock must be run more slowly in order to avoid metastability and unknown data issues.
In this paper the length of a path refers primarily to the number of logic cells passed through (e.g. 5 AND cells), rather than the physical length of the path (e.g. \SI{0.5}{\mm}).

The operation tree input vector $\vec{x}$ is composed of $w$ elements, and the values applied to those elements are also written as the vector $\vec{x}$.
The output scalar $f(\vec{x})$ of the operation tree is a function of the values applied to the input elements.
Each operation must take the same number of inputs $b$, which in \gls{asic} layout corresponds to all operations being implemented with the same cell, e.g. for a tree of two-input OR cells $b = 2$.
It is assumed to be legal to tie any number of inputs to a constant value, and that when fewer than two inputs to an operation are non-constant the operation may be removed completely.

A homogeneous series of operations may be arranged as a tree if the operation is both commutative and associative.
Common examples include AND, OR, XOR, ADD, and MULTIPLY.
%Common operations which may not be arranged as such include NAND, NOR, NXOR,
%SUBTRACT, and DIVIDE.
%Without loss of generality we may use an OR tree for all demonstrations of the
%port-mapping technique where the output is defined by
%$f(\vec{x}) = x_0 \lor x_1 \lor \ldots \lor x_{w-1}$.
%In the Verilog language this may be written as \texttt{wire f = |x;}.

Two implementations of the 4-input OR function are shown in \fref{fig:OR_b2_w4} to demonstrate the terms path-length and balancing.
All OR operations use $b=2$ inputs, and each implementation uses $3$ operations.
In \fref{fig:OR_b2_w4_maxbal} all $4$ inputs `cross' the same number of operations ($2$) to reach the output; this count is known as the path length.
In contrast, in \fref{fig:OR_b2_w4_unbal} $x_3$ crosses only one operation whereas $x_0$ and $x_1$ cross three operations.
In an \gls{asic} layout this would mean that all inputs in \fref{fig:OR_b2_w4_maxbal} may propagate their voltage faster toward the net $f$ than inputs $x_0$ and $x_1$ in \fref{fig:OR_b2_w4_unbal}, which means that the balanced design will be able to run at a faster clock speed.
This suggests that a balanced implementation may be a better choice for a fast, high-performance design where the speed is limited by the maximum path length.
Alternatively, to make both designs function at the same clock speed, some additional buffering and amplifying components may be added to the unbalanced design, at the expense of the additional area and power consumed, in order to compensate for the parasitic elements associated with the higher gate delays on inputs $0$ and $1$.
This suggests that a balanced implementation may also be a better choice for a physically smaller and lower-power design.

\begin{figure}[h]
  \begin{subfigure}[t]{0.5\textwidth}
    \centering
    \includegraphics[width=0.5\textwidth]{OR_b2_w4_maxbal.png}
    \caption{Maximally balanced.
             \label{fig:OR_b2_w4_maxbal}}
  \end{subfigure}%
  ~
  \begin{subfigure}[t]{0.5\textwidth}
    \centering
    \includegraphics[width=0.75\textwidth]{OR_b2_w4_unbal.png}
    \caption{Maximally unbalanced.
             \label{fig:OR_b2_w4_unbal}}
  \end{subfigure}
  \caption{OR trees with $b=2$ and $w=4$.
           \label{fig:OR_b2_w4}}
\end{figure}

This problem may be formulated as a rooted k-ary tree graph where the output $f$ is the root node and the elements of the input vector $\vec{x}$ are the leaf nodes.
Let a vector $\vec{d}$ be formed by the distances from the leaf nodes to the root node.
For example, in \fref{fig:OR_b2_w4_maxbal} it can be seen that $\vec{d} = \transpose{(2\ 2\ 2\ 2)}$ and in \fref{fig:OR_b2_w4_unbal} it can be seen that $\vec{d} = \transpose{(3\ 3\ 2\ 1)}$.

\begin{align}
\label{eq:d}
\vec{d}_i = d(\vec{x}_i, f)
\end{align}

It can be supposed that a faster circuit implementation is achieved by minimizing $\max(\vec{d})$, and that a lower-power \gls{asic} implementation is achieved by minimizing the total path length $t = \sum \vec{d}$.
By using a tree of logic like \fref{fig:OR_b2_w4_maxbal} rather than a chain of operations like \fref{fig:OR_b2_w4_unbal}, it can be seen that $\max(\vec{d}) = \lceil \log_bw \rceil$.
A chain of operations like \fref{fig:OR_b2_w4_unbal}, which is a maximally unbalanced tree, may be used to give an upper bound $t \leq t_{\text{max}}$.
A maximally balanced tree may be used to find an approximation of the lower bound $t \geq w\log_bw$.
An exact integer solution $t_{\text{min}}$ may be found for the lower bound by observing that each leaf node will have a path length equal to either $\lfloor \log_bw \rfloor$ or $\lceil \log_bw \rceil$.
To reduce the amount of notation let $n = b^{\lceil \log_bw \rceil}$, i.e. the smallest power of $b$ which is greater than or equal to $w$, and let $p = b^{\lfloor \log_bw \rfloor}$, i.e. the largest power of $b$ which is less than or equal to $w$.
It can then be calculated that the number of leaf nodes with a path length equal to $\lfloor \log_bw \rfloor$ is given by $\#_{\text{floor}} = p - \lceil \frac{w-p}{b-1} \rceil$.
\begin{align}
\label{eq:t_lower}
t \geq t_{\text{min}} &=
  (\#_{\text{floor}})\lfloor \log_bw \rfloor +
  (w-\#_{\text{floor}})\lceil \log_bw \rceil \\
\label{eq:t_upper}
t \leq t_{\text{max}} &= \frac{w(w+1)}{2}-1
\end{align}

% Interestingly, it can be shown that for trees with $b=2$ the port numbers can
% be represented in Gray code and the results for the usual `forward' case are
% identical to BASE2FWD, and the results for the digit reversed case are
% identical to BASE2REV.

\begin{figure}[h]
  \centering
  \begin{subfigure}[t]{0.9\textwidth}
    \centering
    \includegraphics[width=0.67\textwidth]{OR_b2_w5_FWD.png}
    \caption{OR tree with FWD port mapping.
             $\vec{d} = \transpose{(3\ 3\ 3\ 3\ 1)}$, $t = 13$.
             \label{fig:OR_b2_w5_FWD}}
  \end{subfigure}
  \begin{subfigure}[t]{0.9\textwidth}
    \centering
    \includegraphics[width=0.67\textwidth]{OR_b2_w5_PINGPONG.png}
    \caption{OR tree with PINGPONG port mapping.
             $\vec{d} = \transpose{(3\ 2\ 3\ 2\ 2)}$, $t = 12$.
             \label{fig:OR_b2_w5_PINGPONG}}
  \end{subfigure}
  \begin{subfigure}[t]{0.9\textwidth}
    \centering
    \includegraphics[width=0.67\textwidth]{OR_b2_w5_BSTRIDE.png}
    \caption{OR tree with BSTRIDE port mapping.
             $\vec{d} = \transpose{(3\ 2\ 2\ 2\ 3)}$, $t = 12$.
             \label{fig:OR_b2_w5_BSTRIDE}}
  \end{subfigure}
  \begin{subfigure}[t]{0.9\textwidth}
    \centering
    \includegraphics[width=0.67\textwidth]{OR_b2_w5_BASEBREV.png}
    \caption{OR tree with BASEBREV port mapping.
             $\vec{d} = \transpose{(3\ 2\ 2\ 2\ 3)}$, $t = 12$.
             \label{fig:OR_b2_w5_BASEBREV}}
  \end{subfigure}
  \caption{OR trees with $b=2$ and $w=5$.
           \label{fig:OR_b2_w5}}
\end{figure}

\Fref{fig:OR_b2_w5} demonstrates that for the example of $w=5,\ b=2$ the total path length can be changed simply by changing the order in which inputs are allocated to the operation tree.
The indices of the leaf nodes are shown in smaller gray text and are numbered in the natural order.
The function input indices are shown in larger black text and are numbered according to the named algorithms.
Leaf nodes which are not used are unlabelled to signify that they are tied to a constant equal to the identity element and that any operations depending on them may be removed.
This means that in \gls{asic} development the optimization may be performed at the \gls{rtl} (e.g. Verilog) level.
OR gates which may be removed are marked with a large red cross.
The right-hand diagrams show the resulting implementations for this example, all of which use four operations and have $\max(\vec{d}) = 3$.

\clearpage
\section{Port-Mapping Algorithms}

Each of these algorithms $a$ performs a bijective mapping from a leaf node index $x$ onto a function input index $y$, i.e. $x,y \in \integers_0,\ a: x \to y,\ a^{-1}: y \to x$.
%The inverse function $a^{-1}$ maps a function input index to a leaf node index.
Leaf nodes which do not have a corresponding function index, i.e. $a^{-1}(y) \geq w$, are tied to a constant of the identity element, which allows operations to be removed.
An operation may be removed if fewer than two of its inputs are non-constant.
For trees where $b>2$ it is possible for an individual operation to have more than one non-constant input and one or more constant inputs, meaning it is only partially used and ideally would be replaced with a smaller cell.
For this reason the concept of using a non-linear weighting $\alpha$ for each operation is introduced, i.e. the distance across two adjacent nodes is $d(a, b) = ( \frac{\text{\# connected inputs}}{b} )^\alpha$.
Each result is therefore fully specified by $t_{w,\alpha,b,a}$.
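As an aside, the quantity $t_{w,\alpha,b,a}$ is straightforward to compute for any candidate mapping.
The sketch below is illustrative only (the helper name \texttt{path\_lengths} and its interface are assumptions made for this example, not part of the project's tooling), but it follows the model just described: an operation with fewer than two connected inputs is removed, and every remaining operation contributes $( \frac{\text{\# connected inputs}}{b} )^\alpha$ to the path length of each input beneath it.

\begin{verbatim}
# Illustrative sketch only: per-input path lengths d and weighted total t
# for a given port mapping.  Names and interface are assumptions.
def path_lengths(w, b, alpha, mapping):
    """d[i] = weighted distance from function input i to the root output.

    mapping(i) gives the leaf position of function input i within the
    complete b-ary tree that has n = b**m >= w leaf positions.
    """
    m, n = 0, 1                        # tree depth and leaf count
    while n < w:
        m, n = m + 1, n * b
    leaf_of = [mapping(i) for i in range(w)]
    assert len(set(leaf_of)) == w and all(0 <= p < n for p in leaf_of)

    d = [0.0] * w
    occupied = [False] * n             # node has a live leaf beneath it?
    for p in leaf_of:
        occupied[p] = True
    node_of = leaf_of[:]               # current-level ancestor of each input
    for _ in range(m):
        conn = [0] * (len(occupied) // b)   # live children per parent node
        for j, occ in enumerate(occupied):
            if occ:
                conn[j // b] += 1
        for i in range(w):
            parent = node_of[i] // b
            if conn[parent] >= 2:           # kept operation: a cell is crossed
                d[i] += (conn[parent] / b) ** alpha
            node_of[i] = parent             # conn < 2: op removed, pass-through
        occupied = [c > 0 for c in conn]
    return d

# FWD ordering with w=5, b=2 gives d = [3, 3, 3, 3, 1] and t = 13.
d = path_lengths(5, 2, 1.618, lambda i: i)
print(d, sum(d))
\end{verbatim}

Evaluating the PINGPONG and BASEBREV orderings of \fref{fig:OR_b2_w5} with the same helper reproduces the quoted totals of $t = 12$.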
Note that the bounds given by \fref{eq:t_lower} and \fref{eq:t_upper} only hold for $b=2$ or $\alpha=0$.

\begin{figure}[h]
  \begin{subfigure}[b]{0.5\textwidth}
    \centering
    \includegraphics[width=0.5\textwidth]{alpha_demo.png}
    \caption{A 3AND with only 2 used inputs may be converted to a 2AND.
             \label{fig:alpha_demo}}
  \end{subfigure}%
  ~
  \begin{subfigure}[b]{0.5\textwidth}
    \centering
    \begin{tabular}{ l | c }
    $\alpha$ & $d(\vec{x}_i, 2AND),\quad b=3$ \\
    \hline
    $-2$      & $2.25$ \\
    $-1$      & $1.5$ \\
    $-0.5$    & $1.225$ \\
    $0$       & $1$ \\
    $0.5$     & $0.816$ \\
    $1$       & $\frac{2}{3}$ \\
    $\varphi$ & $0.519$ \\
    $2$       & $0.444$ \\
%    $e$      & $0.332$ \\
%    $\pi$    & $0.280$ \\
    \end{tabular}
    \caption{Example distances across a single operation for different values of $\alpha$.
             \label{tab:alpha_values}}
  \end{subfigure}
\end{figure}

In typical models, where cells with fewer inputs are smaller, faster, and incur less capacitance, $\alpha$ is set to a value greater than $1$.
For example it may be the case that a 2AND cell incurs less than $\frac{2}{3}$ of the capacitance per input of a 3AND cell.
The golden ratio, defined as $\varphi = \frac{1 + \sqrt 5}{2}$, is thought to be a reasonable initial approximation.
For \gls{asic} layout comparisons the chosen value of $\alpha$ is dependent on the process node and cell library which will be used for implementation.

The \textit{FWD} algorithm is the simplest port-mapping algorithm and perhaps the na\"{\i}ve method.

\begin{align}
\label{eq:a_fwd}
a_{\textsc{fwd}}(i) = a_{\textsc{fwd}}^{-1}(i) = i
\end{align}

For a more intuitive comparison of values it is useful to scale $t$ by the number of inputs and the result of the \textit{FWD} algorithm to give a measure of the reduction in total capacitance given by use of an algorithm other than the na\"{\i}ve method, i.e.,

\begin{align}
\label{eq:u}
u_{w,\alpha,b,a} = w - \frac{wt_{w,\alpha,b,a}}{t_{w,\alpha,b,\textsc{fwd}}}
\end{align}

For a lower total capacitance we want $u$ to be as high as possible.

The \textit{BASEBREV} algorithm takes a function input index represented in base $b$ and zero-extended to $m = \log_bn$ digits, and reverses the order of the digits.

\begin{align}
\text{let}\quad i &= \sum\limits_{k = 0}^{m-1} x_k b^k \\
\label{eq:a_basebrev}
a_{\textsc{basebrev}}(i) = a_{\textsc{basebrev}}^{-1}(i) &=
  \sum\limits_{k = 0}^{m-1} x_{m-k-1} b^k
\end{align}

The \textit{PINGPONG} algorithm allocates function input indices to alternating ends of the ordered set of leaf node indices.
The \textit{PONGPING} algorithm is simply the reverse of the \textit{PINGPONG} process.

\begin{align}
\label{eq:a_pingpong}
a_{\textsc{pongping}}^{-1}(i) = a_{\textsc{pingpong}}(i) &=
  2i\ \text{if}\ (i < \frac{n}{2})\ \text{else}\ 2(n-i)-1 \\
\label{eq:a_pongping}
a_{\textsc{pingpong}}^{-1}(i) = a_{\textsc{pongping}}(i) &=
  n-\frac{i+1}{2}\ \text{if}\ (i \bmod 2)\ \text{else}\ \frac{i}{2}
\end{align}

The \textit{STRIDEB} algorithm allocates function input indices in strides of $b$, restarting at the lowest index when it walks off the upper end.
The \textit{STRODEB} algorithm is simply the inverse of the \textit{STRIDEB} process.
\begin{align}
\label{eq:a_stride}
a_{\textsc{strodeb}}^{-1}(i) = a_{\textsc{strideb}}(i) &=
  (ib \bmod n) + \left\lfloor \frac{ib}{n} \right\rfloor \\
\label{eq:a_strode}
a_{\textsc{strideb}}^{-1}(i) = a_{\textsc{strodeb}}(i) &=
  \Big( \left\lfloor \frac{in}{b} \right\rfloor \bmod n \Big) +
  \left\lfloor \frac{i}{b} \right\rfloor
\end{align}

The \textit{GRAYFWD} and \textit{GRAYREV} algorithms use n-ary generalized (n,k)-Gray codes to allocate indices, i.e. using $(b, \lceil\log_bw\rceil)$-Gray codes.

\begin{align}
\text{let}\quad i &= \sum\limits_{k = 0}^{m-1} x_k b^k \\
\text{let}\quad j &= \left\lfloor \frac{i}{b^k} \right\rfloor \\
\label{eq:a_grayfwd}
a_{\textsc{grayfwdi}}^{-1}(i) = a_{\textsc{grayfwd}}(i) &=
  \sum\limits_{k = 0}^{m-1} y_k b^k \\
\label{eq:a_grayrev}
a_{\textsc{grayrevi}}^{-1}(i) = a_{\textsc{grayrev}}(i) &=
  \sum\limits_{k = 0}^{m-1} y_{m-k-1} b^k \\
y_k &= \Big( j - \left\lfloor \frac{j}{b} \right\rfloor \Big) \bmod b
\end{align}

The \textit{GRAYREV} algorithm reverses the order of the base $b$ digits as compared to the values returned by \textit{GRAYFWD}.
The inverse functions have not been implemented or tested here.

These are just a few named examples of the $n!$ possible algorithms, as it is not feasible to test them all.
The number of possible non-equivalent algorithms may be reduced to $\frac{n!}{\frac{n}{b} \times (\frac{n}{b})!}$ by observing that rearranging the inputs to a single operation at the top of the operation tree always gives equivalent results; however, this is still an infeasibly large number of algorithms to test exhaustively.

\clearpage
\section{Results}

% Generate figures with:
%   ./treebalance.py -p --wmax 300
% OR
%   make plots

\begin{figure}[h]
  \centering
  \includegraphics[width=0.93\textwidth]{b=2,alpha=1_618.png}
  \caption{Total path length savings compared to $a_{\textsc{fwd}}$ where only 2-input cells are available.
           \label{fig:b=2,alpha=1_618}}
\end{figure}
\vfill
\begin{figure}[h]
  \centering
  \includegraphics[width=0.93\textwidth]{b=3,alpha=1_618.png}
  \caption{Total path length savings compared to $a_{\textsc{fwd}}$ where 3-input cells are available.
           \label{fig:b=3,alpha=1_618}}
\end{figure}

\begin{figure}[h]
  \centering
  \includegraphics[width=0.93\textwidth]{b=4,alpha=1_618.png}
  \caption{Total path length savings compared to $a_{\textsc{fwd}}$ where 4-input cells are available.
           \label{fig:b=4,alpha=1_618}}
\end{figure}
\vfill
\begin{figure}[h]
  \centering
  \includegraphics[width=0.93\textwidth]{b=5,alpha=1_618.png}
  \caption{Total path length savings compared to $a_{\textsc{fwd}}$ where 5-input cells are available.
           \label{fig:b=5,alpha=1_618}}
\end{figure}

\clearpage

These figures show that for operation trees where the number of leaf nodes is not an integer power of $b$ it is possible to reduce the total path length, and the associated capacitance, simply by changing the order in which function indices are assigned to leaf node indices.
The biggest potential savings appear where $w$ is a power of $b$ plus 1.

\TODO{Discuss comparative advantages}
\TODO{Non-trivial estimation of optimal choice}

For the special case $b=2$, where $\alpha$ has no effect and the bounds of $t$ are known, it makes sense to compare against the lower bound rather than the result of $a_{\textsc{fwd}}$, i.e. $v_{w,a} = w - \frac{w\,t_{w,a}}{t_{\text{min}}}$, as shown in \fref{fig:b=2}.
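To give a flavour of how such a comparison can be scripted, the fragment below (again illustrative only, and far simpler than the \texttt{treebalance.py} script used to generate the figures) evaluates a few of the named mappings for $b=2$ using the \texttt{path\_lengths} helper sketched earlier; the mapping constructors and their signatures are assumptions made for this example.

\begin{verbatim}
# Illustrative only: compare a few named port mappings for b=2 using the
# path_lengths helper sketched earlier.  Constructor names are assumptions.
def make_fwd(n, b):
    return lambda i: i

def make_basebrev(n, b):
    m = 0
    while b ** m < n:
        m += 1
    def a(i):
        out = 0
        for _ in range(m):            # reverse the base-b digits of i
            out = out * b + (i % b)
            i //= b
        return out
    return a

def make_pingpong(n, b):
    return lambda i: 2 * i if i < n // 2 else 2 * (n - i) - 1

def total(w, b, alpha, make_mapping):
    m, n = 0, 1
    while n < w:
        m, n = m + 1, n * b
    return sum(path_lengths(w, b, alpha, make_mapping(n, b)))

for w in range(2, 17):
    print(w, total(w, 2, 1.0, make_fwd),
             total(w, 2, 1.0, make_basebrev),
             total(w, 2, 1.0, make_pingpong))
\end{verbatim}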
In \fref{fig:b=2} it can be seen that $a_{\textsc{fwd}}$ has the highest total path length and that $a_{\textsc{basebrev}}$ is among a collection of algorithms which are always optimal.

\begin{figure}[h]
  \centering
  \includegraphics[width=1.0\textwidth]{b=2.png}
  \caption{Total path length savings compared to the optimal algorithm where only 2-input cells are available.
           Higher is better, $v=0$ is optimal.
           \label{fig:b=2}}
\end{figure}

The case where only 2-input cells are available is not uncommon for more complex operations such as multipliers.
For simpler operations such as OR, many cell libraries will have cells with more than 2 inputs available, in which case an appropriate value of $\alpha$ must be chosen in order to perform a meaningful comparison.
%\TODO{ref TSMC, GF cell lib specs}
%\TODO{ref Xilinx, Altera, Lattice primitive cell specs}

\TODO{practical demonstration with FPGA?}

\TODO{Zero cost in synthesis computation time.}
\TODO{Zero cost in silicon area.}
\TODO{Zero cost in max path length.}
\TODO{RTL/HDL level.}

%\clearpage
%\section{Conclusions}

% {{{ epilog

\ifdefined\ShowBibliography
\clearpage
\bibliographystyle{plain}
\addcontentsline{toc}{section}{Bibliography}
\bibliography{../refs}{} % refs.bib
\fi

\ifdefined\ShowAppendix
\clearpage
\begin{appendices} % {{{
\addappheadtotoc
%\appendixpage

%\section{Lists}
%
%\listoffigures
%
%\listoftables

\end{appendices}
\fi
% }}}

% }}} epilog
\end{document}
{ "alphanum_fraction": 0.6902295348, "avg_line_length": 40.5185185185, "ext": "tex", "hexsha": "70cfb6b8ec360fe3d7745be974541bdb4a81bf4f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "68e8a121d4591360080cd40121add1796ae48a1b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "DaveMcEwan/dmppl", "max_forks_repo_path": "dmppl/experiments/treebalance/treebalance.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "68e8a121d4591360080cd40121add1796ae48a1b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "DaveMcEwan/dmppl", "max_issues_repo_path": "dmppl/experiments/treebalance/treebalance.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "68e8a121d4591360080cd40121add1796ae48a1b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "DaveMcEwan/dmppl", "max_stars_repo_path": "dmppl/experiments/treebalance/treebalance.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-05T19:46:43.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-05T19:46:43.000Z", "num_tokens": 5925, "size": 19692 }
% Meyers_Chris_Resume.tex \input{Meyers_Chris_Settings} \begin{document} % =================================== Header =================================== \input{Meyers_Chris_Header} % ================================= Experience ================================= \hrule \vspace{-0.4em} \subsection*{Experience} \begin{itemize} \parskip=0.1em % BEGIN Experience \item \headerrow {\textbf{ASRC Federal Mission Solutions}} {\textbf{Moorestown, NJ}} \\ \headerrow {\emph{Software Engineer, Display Systems}} {\emph{Jun 2018 -- Present}} \begin{itemize*} \item Worked on an Independent Research and Development (IRAD) project that experimented with map and graphic updates to the Tactical Situation (TACSIT) display. \item Developed scripts that aided in converting the department build system to CMake which drastically reduced build times and improved build stability. \item Participated in a Pair Programming pilot that doubled as an opportunity to mentor a new employee. \end{itemize*} \item \headerrow {\textbf{MCG Strategic}} {\textbf{Marlton, NJ}} \\ \headerrow {\emph{Software Engineer}} {\emph{Feb 2017 -- May 2018}} \begin{itemize*} \item Lead backend developer and maintainer of client websites, web apps, and mobile apps. \item Contributed to the creation of engineering process best practices to strengthen development and QA workflow (deploying via Git, Bitbucket/JIRA integrations, Slack integrations). \item Designed and implemented internal automation tools that include an app build/delivery system and a weekly time report generation system. \end{itemize*} \item \headerrow {\textbf{ASRC Federal Mission Solutions}} {\textbf{Moorestown, NJ}} \\ \headerrow {\emph{Software Engineer, Display Systems}} {\emph{Mar 2015 -- Feb 2017}} \begin{itemize*} \item Designed, developed, and tested solutions in CMMI Level 5 for the U.S. Navy's real-time, mission critical Aegis Ballistic Missile Defense (BMD) system (intern from Mar -- May 2015). \item Nominated as a mentor to help new employees adjust to their position. \item Lead developer of the display system for the Littoral Combat Ship's (LCS) SeaRAM weapon system. \end{itemize*} \item \headerrow {\textbf{Rowan University}} {\textbf{Glassboro, NJ}} \\ \headerrow {\emph{Web Developer, History Department}} {\emph{Nov 2013 -- May 2015}} \begin{itemize*} \item Updated and maintained the History Department's website. \end{itemize*} % Same Company \headerrow {\emph{Network Assistant, Network \& System Services}} {\emph{Jul 2014 -- Sept 2014}} \begin{itemize*} \item Installed and maintained network infrastructure across the Glassboro campus. \end{itemize*} % END Experience \end{itemize} % ================================== Education ================================= \hrule \vspace{-0.4em} \subsection*{Education} \begin{itemize} \parskip=0.1em % BEGIN Education \item \headerrow {\textbf{Rowan University}} {\textbf{Glassboro, NJ}} \\ \headerrow {\emph{Bachelor of Science, Computer Science}} {\emph{Sept 2011 -- May 2015}} % END Education \end{itemize} % ============================== Technical Skills ============================== \hrule \vspace{-0.4em} \subsection*{Technical Skills} \begin{itemize*} % BEGIN TechnicalSkills \item Experience developing Desktop and CLI applications using: \CPP, Java, Python, C\#. \item Experience developing Websites, Web Apps, and APIs using: LAMP stack (Linux, Apache, MySQL, PHP), Flask (Python framework), Aqueduct (Dart framework), JavaScript (jQuery, Angular, Node, Vue). 
\item Experience developing native Android apps (Java) and hybrid mobile apps (Apache Cordova). \item Experience using relational (MySQL, MariaDB, PostgreSQL) and NoSQL (MongoDB, Firebase) databases. \item Experience with concrete5, WordPress, and Drupal Content Management Systems (CMSs). \item Experience writing Bash and Python scripts to automate processes and aid in development. \item Experience as a member of Agile and Kanban teams utilizing JIRA to track issues. \item Experience with Git, ClearCase, and Subversion version control. \item Experience debugging software with visual debuggers and the command line GNU debugger (GDB). \item Very comfortable with Windows, macOS, and Command Line Interfaces (Unix, Linux, Solaris). \item Experience installing, maintaining, and relocating data center hardware. % END TechnicalSkills \end{itemize*} \end{document}
{ "alphanum_fraction": 0.6893774488, "avg_line_length": 36.4603174603, "ext": "tex", "hexsha": "f949b8f57fd098f4e54bf58bbfd9af4ea97ecc97", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bb837e58d44ad7946ca443af5b42585b62c857f2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "chrismeyers/CM_WEBSITE", "max_forks_repo_path": "public_html/v6/sections/Meyers_Chris_Resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bb837e58d44ad7946ca443af5b42585b62c857f2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "chrismeyers/CM_WEBSITE", "max_issues_repo_path": "public_html/v6/sections/Meyers_Chris_Resume.tex", "max_line_length": 199, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bb837e58d44ad7946ca443af5b42585b62c857f2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "chrismeyers/CM_WEBSITE", "max_stars_repo_path": "public_html/v5/sections/Meyers_Chris_Resume.tex", "max_stars_repo_stars_event_max_datetime": "2017-10-18T04:52:07.000Z", "max_stars_repo_stars_event_min_datetime": "2017-10-18T04:52:07.000Z", "num_tokens": 1157, "size": 4594 }